no-problem/9904/astro-ph9904182.html
## 1 Introduction
The angular size - redshift relation, $`\mathrm{\Theta }(z)`$, is a kinematic test which may potentially discriminate among the several cosmological models proposed in the literature. As is widely known, because of the spacetime curvature, the expanding universe acts gravitationally as a lens of large focal length. Though nearby objects are not affected, a fixed angular size of an extragalactic source is initially seen to decrease down to a minimal value, say, at a critical redshift ($`z_m`$), after which it increases for higher redshifts. The precise determination of $`z_m`$, or equivalently, of the corresponding minimal angular size $`\mathrm{\Theta }(z_m)`$, may constitute a powerful tool in the search for deciding which are the more realistic world models. This lensing effect was first predicted by Hoyle, originally aiming to distinguish between the steady-state and Einstein-de Sitter cosmologies . Later on, the accumulated evidence against the steady state (mainly from the CMBR) put it aside, and more recently the same has been occurring with the theoretically favoured critical density FRW model . The data concerning the angular size - redshift relation are still somewhat controversial, especially because they involve at least two kinds of observational difficulties. First, any high redshift object may have a wide range of proper sizes, and, second, evolutionary and selection effects probably are not negligible. The $`\mathrm{\Theta }(z)`$ relation for some extended source samples seems to be quite incompatible with the predictions of the standard FRW model when the latter effects are not taken into account . There have also been some claims that the best fit model for the observed distribution of high redshift extended objects is provided by the standard Einstein-de Sitter universe ($`q_o=\frac{1}{2}`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$) with no significant evolution . Parenthetically, these results are in contradiction with recent observations of type Ia supernovae, which seem to rule out world models filled only by baryonic matter and, more generally, any model with positive deceleration parameter . The same happens with the corresponding bounds from the ages of old high redshift galaxies . The case for compact radio sources is also of great interest. These objects are apparently less sensitive to evolutionary effects since they are short-lived ($`10^3`$ yr) and much smaller than their host galaxy. Initially, the data from a sample of 82 objects gave remarkable support for the Einstein-de Sitter Universe . However, some analyses suggest that Kellermann has not really detected a significant increase beyond the minimum . Some authors have also argued that models where $`\mathrm{\Theta }(z)`$ diminishes and after a given $`z`$ remains constant may also provide a good fit to Kellermann’s data. In particular, by analysing a subset of 59 compact sources within the same sample, Dabrowski et al. (1995) found that no useful bounds on the value of the deceleration parameter $`q_o`$ can be derived. Further, even considering that Euclidean angular sizes ($`\mathrm{\Theta }\propto z^{-1}`$) are excluded at the 99$`\%`$ confidence level, and that the data are consistent with $`q_o=1/2`$, they apparently do not rule out values of the deceleration parameter as extreme as $`q_o\sim 5`$ . 
More recently, based on a more complete sample of data, which includes the ones originally obtained by Kellermann, it was argued that the $`\mathrm{\Theta }(z)`$ relation may be consistent with any model of the FRW class with deceleration parameter $`q_o\lesssim 0.5`$ . In this context, we discuss here how the critical redshift giving the turn-up in angular sizes is determined for any expanding cosmology based on the FRW geometry. An analytical expression quite convenient for numerical evaluation is derived. The approach is exemplified for three different models of current cosmological interest: (i) open matter dominated FRW universe (OCDM), (ii) flat FRW type models with cosmological constant ($`\mathrm{\Lambda }`$CDM), (iii) the class of scalar field cosmologies (SF) proposed by Ratra and Peebles . Hopefully, the results derived here may be useful in the near future, when more accurate data become available.
## 2 The method
Let us now consider the FRW line element $`(c=1)`$ $$ds^2=dt^2-R^2(t)[d\chi ^2+S_k^2(\chi )(d\theta ^2+\mathrm{sin}^2\theta \mathrm{d}\varphi ^2)],$$ (1) where $`\chi `$, $`\theta `$, and $`\varphi `$ are dimensionless comoving coordinates, $`R(t)`$ is the scale factor, and $`S_k(\chi )`$ depends on the curvature parameter ($`k=0`$, $`\pm 1`$). The latter function is defined by one of the following forms: $`S_k(\chi )=\mathrm{sinh}(\chi )`$, $`\chi `$, $`\mathrm{sin}\chi `$, respectively, for open, flat and closed Universes. In this background, the angular size-redshift relation for a rod of intrinsic length $`D`$ is easily obtained by integrating the spatial part of the above expression for $`\chi `$ and $`\varphi `$ fixed. One finds $$\theta (z)=\frac{D(1+z)}{R_oS_k(\chi )}.$$ (2) The dimensionless coordinate $`\chi `$ is given by $$\chi (z)=\frac{1}{H_oR_o}\int _{(1+z)^{-1}}^1\frac{dx}{xE(x)},$$ (3) where $`x=\frac{R(t)}{R_o}=(1+z)^{-1}`$ is a convenient integration variable. For the three kinds of cosmological models considered here (OCDM, $`\mathrm{\Lambda }`$CDM and SF) the dimensionless function $`E(x)`$ assumes one of the following forms: $$E_{FRW}(x)=\left[1-\mathrm{\Omega }_M+\mathrm{\Omega }_Mx^{-1}\right]^{\frac{1}{2}},$$ (4) $$E_\mathrm{\Lambda }(x)=\left[(1-\mathrm{\Omega }_\mathrm{\Lambda })x^{-1}+\mathrm{\Omega }_\mathrm{\Lambda }x^2\right]^{\frac{1}{2}},$$ (5) $$E_{SF}(x)=\left[(1-\mathrm{\Omega }_\varphi )x^{-1}+\mathrm{\Omega }_\varphi x^{\frac{4-\alpha }{2+\alpha }}\right]^{\frac{1}{2}},$$ (6) where $`\mathrm{\Omega }_M=\frac{8\pi G\rho _M}{3H_o^2}`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=\frac{\mathrm{\Lambda }}{3H_o^2}`$ and $`\mathrm{\Omega }_\varphi =\frac{8\pi G\rho _\varphi }{3H_o^2}`$ are the present day density parameters associated with the matter component, the cosmological constant and the scalar field $`\varphi `$, respectively. Notice that equations (5) and (6) become identical if one takes $`\alpha =0`$ in the latter, thereby showing that the scalar field model proposed by Ratra and Peebles may kinematically be equivalent to a flat $`\mathrm{\Lambda }`$CDM cosmology. The redshift $`z_m`$ at which the angular size takes its minimum value is the one cancelling out the derivative of $`\mathrm{\Theta }`$ with respect to $`z`$. Hence, from (2) we have the condition $$S_k(\chi _m)=(1+z_m)S_k^{\prime }(\chi _m),$$ (7) where $`S_k^{\prime }(\chi )=\frac{\partial S_k}{\partial \chi }\frac{\partial \chi }{\partial z}`$, a prime denotes differentiation with respect to $`z`$, and by definition $`\chi _m=\chi (z_m)`$. 
To proceed further, observe that (3) can readily be differentiated yielding, respectively, for the standard FRW (matter dominated), $`\mathrm{\Lambda }`$CDM and scalar field cosmologies $$(1+z_m)\chi _m^{\prime }=\frac{(R_oH_o)^{-1}}{\left[1-\mathrm{\Omega }_M+\mathrm{\Omega }_M(1+z_m)\right]^{\frac{1}{2}}}=(R_oH_o)^{-1}F(\mathrm{\Omega }_M,z_m),$$ (8) $$(1+z_m)\chi _m^{\prime }=\frac{(R_oH_o)^{-1}}{\left[(1-\mathrm{\Omega }_\mathrm{\Lambda })(1+z_m)+\mathrm{\Omega }_\mathrm{\Lambda }(1+z_m)^{-2}\right]^{\frac{1}{2}}}=(R_oH_o)^{-1}L(\mathrm{\Omega }_\mathrm{\Lambda },z_m),$$ (9) $$(1+z_m)\chi _m^{\prime }=\frac{(R_oH_o)^{-1}}{\left[(1-\mathrm{\Omega }_\varphi )(1+z_m)+\mathrm{\Omega }_\varphi (1+z_m)^{\frac{\alpha -4}{\alpha +2}}\right]^{\frac{1}{2}}}=(R_oH_o)^{-1}S(\mathrm{\Omega }_\varphi ,\alpha ,z_m).$$ (10) Now, inserting the above equations into (7) we find for the cases above considered $$\frac{1}{(1-\mathrm{\Omega }_M)^{\frac{1}{2}}}\mathrm{tanh}\left[(1-\mathrm{\Omega }_M)^{\frac{1}{2}}\int _{(1+z_m)^{-1}}^1\frac{dx}{xE_{FRW}(x)}\right]=F(\mathrm{\Omega }_M,z_m),$$ (11) $$\int _{(1+z_m)^{-1}}^1\frac{dx}{xE_\mathrm{\Lambda }(x)}=L(\mathrm{\Omega }_\mathrm{\Lambda },z_m),$$ (12) $$\int _{(1+z_m)^{-1}}^1\frac{dx}{xE_{SF}(x)}=S(\mathrm{\Omega }_\varphi ,\alpha ,z_m).$$ (13) The meaning of equations (11)-(13) is self evident. Each one represents an integro-algebraic equation for the critical redshift $`z_m`$ as a function of the physically meaningful parameters of the models. In general, these equations cannot be solved in closed analytical form for $`z_m`$. However, as one may check, if we take the limit $`\mathrm{\Omega }_M\rightarrow 1`$ in (11), the value $`z_m=\frac{5}{4}`$ is readily recovered, which corresponds to the well known standard result for the dust filled flat FRW universe. The interesting point is that expressions (11)-(13) are quite convenient for numerical evaluation. As a matter of fact, their solutions can straightforwardly be obtained, for instance, by programming the integrations with simple numerical recipes in FORTRAN. In Fig. 1 we show the diagrams of $`z_m`$ as a function of the density parameter for each kind of model. As expected, in the standard FRW model, the critical redshift starts at $`z_m=1.25`$ when $`\mathrm{\Omega }_M`$ goes to unity. This value is displaced to higher redshifts as the $`\mathrm{\Omega }_M`$ parameter is decreased. For instance, for $`\mathrm{\Omega }_M=0.5`$ and $`\mathrm{\Omega }_M=0.2`$, we find $`z_m=1.58`$ and $`z_m=2.20`$, respectively. In the limiting case $`\mathrm{\Omega }_M\rightarrow 0`$, there is no minimum at all since $`z_m\rightarrow \mathrm{\infty }`$. This means that the angular size decreases monotonically as a function of the redshift. For the scalar field case, one needs to fix the value of $`\alpha `$ in order to have a bidimensional plot. Given a value of $`\mathrm{\Omega }_\varphi `$, the minimum is also displaced to higher redshifts when the $`\alpha `$ parameter diminishes. Conversely, for a fixed value of $`\alpha `$, the minimum moves to lower redshifts when $`\mathrm{\Omega }_\varphi `$ is decreased. The limiting case ($`\alpha =0`$) is fully equivalent to a $`\mathrm{\Lambda }`$CDM model. As happens in the limiting case $`\mathrm{\Omega }_M\rightarrow 0`$ ($`\mathrm{\Omega }_\mathrm{\Lambda }=0`$), the minimal value of $`\mathrm{\Theta }(z)`$ disappears when the cosmological constant contributes all the energy density of the Universe, that is, $`z_m\rightarrow \mathrm{\infty }`$ if $`\mathrm{\Omega }_M\rightarrow 0`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }\rightarrow 1`$ (in this connection see also ). 
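For instance, a minimal Python sketch of this procedure (a root-finding transcription of eqs. (11) and (12), not the original FORTRAN code) is the following; with the parameters below it should return the values quoted above, $`z_m=1.25`$, $`1.58`$ and $`2.20`$ for $`\mathrm{\Omega }_M=1`$, $`0.5`$ and $`0.2`$, together with the corresponding flat $`\mathrm{\Lambda }`$CDM value.

```python
# Solve eqs. (11)-(12) for the critical redshift z_m by bracketed root finding.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def E_frw(x, Om):                  # eq. (4)
    return np.sqrt(1.0 - Om + Om / x)

def E_lcdm(x, OL):                 # eq. (5)
    return np.sqrt((1.0 - OL) / x + OL * x**2)

def y(z, E, *p):                   # the dimensionless integral of eq. (3)
    return quad(lambda x: 1.0 / (x * E(x, *p)), 1.0 / (1.0 + z), 1.0)[0]

def zm_ocdm(Om):                   # root of eq. (11), open matter model
    if abs(1.0 - Om) < 1e-8:       # Einstein-de Sitter limit, z_m = 5/4
        return 1.25
    s = np.sqrt(1.0 - Om)
    f = lambda z: np.tanh(s * y(z, E_frw, Om)) / s - 1.0 / E_frw(1.0 / (1.0 + z), Om)
    return brentq(f, 0.05, 50.0)

def zm_lcdm(OL):                   # root of eq. (12), flat LambdaCDM
    f = lambda z: y(z, E_lcdm, OL) - 1.0 / E_lcdm(1.0 / (1.0 + z), OL)
    return brentq(f, 0.05, 50.0)

print([round(zm_ocdm(Om), 2) for Om in (1.0, 0.5, 0.2)])
print(round(zm_lcdm(0.7), 2))
```

The root is bracketed in $`0.05<z<50`$, which is ample for the density parameters of interest here.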
For the class of models considered in this paper, the redshifts of minimal angular size are displayed for several values of $`\mathrm{\Omega }_M`$ and $`\alpha `$ in Table 1. As can be seen there, the critical redshift at which the angular size is minimal cannot alone discriminate between world models, since different scenarios may provide the same $`z_m`$ value. However, when combined with other tests, some interesting constraints on the cosmological models can be obtained. For example, when $`\mathrm{\Omega }_\varphi `$ is larger than $`0.55`$, the model proposed by Ratra and Peebles yields a $`z_m`$ between those of the standard flat FRW model and the $`\mathrm{\Lambda }`$CDM cosmology. Then, supposing that the universe is really accelerating today ($`q_o<0`$), as indicated recently by measurements using type Ia supernovae , and considering the results by Gurvits et al. , i.e., that the data are compatible with $`q_o\lesssim 0.5`$, the Ratra and Peebles models with $`0<\alpha \lesssim 4`$ seem to be more in accordance with the angular size data for compact radio sources than the $`\mathrm{\Lambda }`$CDM model. It is worth noticing that the same procedure may be applied when evolutionary and/or selection effects due to a linear size-redshift or to a linear size-luminosity dependence are taken into account. As widely believed, a plausible way of accounting for such effects is to consider that the intrinsic linear size has a similar dependence on the redshift as the coordinate dependence, i.e., $`D=D_o(1+z)^c`$, with $`c<0`$ (see, for instance, and Refs. therein). In this case, equations (11)-(13) are still valid, but the functions $`F(\mathrm{\Omega }_M,z_m)`$, $`L(\mathrm{\Omega }_\mathrm{\Lambda },z_m)`$, and $`S(\mathrm{\Omega }_\varphi ,\alpha ,z_m)`$ must be divided by a factor $`(1+c)`$. The displacement of $`z_m`$ relative to the case with no evolution ($`c=0`$) due to the effects cited above may be unexpectedly large. For example, if one takes $`c=-0.8`$ as found by Buchalter et al. , the redshift of the minimum angular size for the Einstein-de Sitter case ($`\mathrm{\Omega }_M=1`$) moves from $`z_m=1.25`$ to $`z_m=11.25`$ (for the flat dust case the modified condition gives $`(1+z_m)^{1/2}=(3+2c)/[2(1+c)]`$, which indeed yields $`z_m=5/4`$ for $`c=0`$ and $`z_m=11.25`$ for $`c=-0.8`$). In particular, this explains why the data of Gurvits et al. , although apparently in agreement with the Einstein-de Sitter universe, do not show clear evidence for a minimal angular size close to $`z=1.25`$, as should be expected for this model. Acknowledgments: This work was partially supported by the project Pronex/FINEP (No. 41.96.0908.00) and Conselho Nacional de Desenvolvimento Científico e Tecnológico - CNPq (Brazilian Research Agency).
no-problem/9904/hep-ph9904249.html
# 1 Introduction ## 1 Introduction It is well known that MSSU(5) provides no explanation of the charged fermion mass hierarchies and mixings, predicts the undesirable asymptotic relations $`m_s=m_\mu `$, $`\frac{m_d}{m_s}=\frac{m_e}{m_\mu }`$, and cannot simultaneously account for the atmospheric and solar neutrino data. In addition, taking account of supersymmetric and heavy threshold corrections, the MSSU(5) value for the strong coupling is $`0.126`$ , to be compared with the world average value of $`0.117\pm 0.005`$ In a recent paper we considered an $`SU(5)`$ model supplemented by two key ingredients. One is a $`𝒰(1)`$ flavor symmetry , suitably implemented for explaining the charged fermion mass hierarchies and mixings, and consistent with a variety of neutrino oscillation scenarios. For instance, it was shown how bi-maximal neutrino mixings could be realized for explaining the atmospheric and solar neutrino data. A second key ingredient is the introduction of two pairs of vector-like ‘matter’ superfields belonging to the $`\overline{15}+15`$ representations of $`SU(5)`$. They play an essential role in avoiding the undesirable asymptotic mass relations mentioned above <sup>4</sup><sup>4</sup>4The extended ‘matter’ sector of $`SU(5)`$, with additional $`\overline{15}+15`$ supermultiplets, leads to a scenario quite different from the case which includes scalar $`45`$-plets .. The purpose of this paper is to explore some key phenomenological consequences of such an extended $`SU(5)`$ scheme. In particular, it turns out that the $`\overline{15}+15`$ superfields play an essential role in reducing the predicted strong coupling to $`0.115`$, which is in excellent agreement with experiments. Furthermore, they also have an impact, albeit a modest one, on the proton lifetime. It turns out to be about five times longer than the MSSU(5) value. For obtaining a natural understanding of the charged fermion mass hierarchies and magnitudes of the CKM matrix elements, we supplement the $`𝒰(1)`$ flavor symmetry with a $`Z_2`$ $``$-symmetry. The latter helps in the generation of desired mass scales. The resolution of the atmospheric and solar neutrino puzzles necessitates in this approach the introduction of a sterile neutrino state $`\nu _s`$ which, thanks to the $`𝒰(1)`$ symmetry, can be kept light. Maximal $`\nu _\mu \nu _\tau `$ oscillations resolve the atmospheric neutrino anomaly, while the small mixing angle $`\nu _e\nu _s`$ MSW oscillations can explain the solar neutrino data. It turns out that the $`𝒰(1)`$ symmetry also implies an automatic $`Z_2`$ ‘matter’ parity (including higher order terms). ## 2 Extended Supersymmetric $`SU(5)`$: <br>Charged Fermion Masses and Mixings The scalar sector of $`SU(5)`$, which we consider here, in addition to $`\mathrm{\Sigma }(24)`$, $`\overline{H}(\overline{5})`$, $`H(5)`$ multiplets, also contains $`S`$ and $`X`$ singlets. We introduce the symmetry $`Z_2\times 𝒰(1)`$, where $`Z_2`$ is an $``$-symmetry. Under $`Z_2`$, $$(\mathrm{\Sigma },\overline{H},H,S)(\mathrm{\Sigma },\overline{H},H,S),$$ $$XX,WW.$$ (1) As will be discussed in more detail below, the anomalous $`𝒰(1)`$ flavor symmetry is crucial for obtaining the hierarchies among fermion masses and mixings. The $`𝒰(1)`$ charges of the ‘scalars’ are: $$Q_X=1,Q_{\overline{H}}=Q_H=2r,$$ $$Q_\mathrm{\Sigma }=Q_S=0,$$ (2) ($`r`$ is undetermined for the time being). 
The most general renormalizable scalar superpotential allowed under the symmetries reads: $$W_S=\mathrm{\Lambda }^2S+\frac{\lambda }{3}S^3+\frac{h}{2}S\mathrm{Tr}\mathrm{\Sigma }^2+\frac{\sigma }{3}\mathrm{Tr}\mathrm{\Sigma }^3+$$ $$\overline{H}(\lambda _1S+\lambda _2\mathrm{\Sigma })H,$$ (3) where $`\lambda `$, $`h`$, $`\sigma `$ and $`\lambda _{1,2}`$ are dimensionless couplings, and $`\mathrm{\Lambda }`$ is a mass scale of order $`M_{GUT}M_G`$. From (3), with supersymmetry unbroken, one obtains a non-vanishing $`\mathrm{\Sigma }`$ (and also $`S`$) in the desirable direction $$\mathrm{\Sigma }=\mathrm{Diag}(2,2,2,3,3)V,$$ (4) with $$V=\frac{h\mathrm{\Lambda }}{(15h^3+\lambda \sigma ^2)^{1/2}},S=\frac{\sigma \mathrm{\Lambda }}{(15h^3+\lambda \sigma ^2)^{1/2}}.$$ (5) From (5), assuming that $`\mathrm{\Lambda }10^{16}`$ GeV, with all coupling constants of order unity, we have $$\frac{V}{M_P}\frac{S}{M_P}ϵ_G10^2.$$ (6) As for the flavor $`𝒰(1)`$ symmetry, it is natural to consider it as an anomalous gauge symmetry. It is well known that anomalous $`U(1)`$ factors can appear in effective field theories from strings. The cancellation of its anomalies occurs through the Green-Schwarz mechanism . Due to the anomaly, the Fayet-Iliopoulos term $$\xi d^4\theta V_A$$ (7) is always generated , where, in string theory, $`\xi `$ is given by $$\xi =\frac{g_A^2M_P^2}{192\pi ^2}\mathrm{Tr}Q.$$ (8) The $`D_A`$-term will have the form $$\frac{g_A^2}{8}D_A^2=\frac{g_A^2}{8}\left(\mathrm{\Sigma }Q_a|\phi _a|^2+\xi \right)^2,$$ (9) where $`Q_a`$ is the ‘anomalous’ charge of $`\phi _a`$ superfield. In ref. the anomalous $`𝒰(1)`$ symmetry was considered as a mediator of SUSY breaking, while in ref. , the anomalous Abelian symmetries were exploited as flavor symmetries for a natural understanding of hierarchies of fermion masses and mixings. In our $`SU(5)`$ model, assuming $`\mathrm{Tr}Q<0`$ ($`\xi <0`$) and taking into account (2), we can ensure that the cancellation of (9) fixes the VEV of $`X`$ field as: $$X=\sqrt{\xi }.$$ (10) Further, we will assume that $$\frac{X}{M_P}ϵ0.2.$$ (11) The parameter $`ϵ`$ is an important expansion parameter for understanding the magnitudes of fermion masses and mixings. Together with the $`(10+\overline{5})_i`$ ($`i=1,2,3`$ is a family index) matter multiplets, we consider two pairs $`(\overline{15}+15)_{1,2}`$ of ‘matter’, which will play an important role for obtaining acceptable pattern of fermion masses. The transformation properties of ‘matter’ superfields under $`𝒰(1)`$ are given in Table (1). 
The relevant couplings will be<sup>5</sup><sup>5</sup>5We assume that $`Z_2`$ $``$ symmetry does not act on the matter superfields.: $$\begin{array}{ccc}& \begin{array}{ccc}\overline{5}_1& \overline{5}_2& \overline{5}_3\end{array}& \\ \begin{array}{c}10_1\\ 10_2\\ 10_3\end{array}& \left(\begin{array}{ccc}ϵ^5& ϵ^4& ϵ^3\\ ϵ^3& ϵ^2& ϵ\\ ϵ^2& ϵ& \mathrm{\hspace{0.17em}\hspace{0.17em}1}\end{array}\right)\overline{H}ϵ^a,& \end{array}$$ (12) $$\begin{array}{ccc}& \begin{array}{ccc}10_1& \mathrm{\hspace{0.17em}\hspace{0.17em}10}_2& \mathrm{\hspace{0.17em}\hspace{0.17em}10}_3\end{array}& \\ \begin{array}{c}10_1\\ 10_2\\ 10_3\end{array}& \left(\begin{array}{ccc}ϵ^6& ϵ^4& ϵ^3\\ ϵ^4& ϵ^2& ϵ\\ ϵ^3& ϵ& \mathrm{\hspace{0.17em}\hspace{0.17em}1}\end{array}\right)H,& \end{array}$$ (13) $$\begin{array}{cc}& \begin{array}{ccc}\overline{5}_1& \overline{5}_2& \overline{5}_3\end{array}\\ \begin{array}{c}15_1\\ 15_2\end{array}& \left(\begin{array}{ccc}ϵ^5& ϵ^4& ϵ^3\\ ϵ^4& ϵ^3& ϵ^2\end{array}\right)\overline{H}ϵ^a,\end{array}$$ (14) $$\begin{array}{cc}& \begin{array}{ccc}10_1& \mathrm{\hspace{0.17em}\hspace{0.17em}10}_2& \mathrm{\hspace{0.17em}\hspace{0.17em}10}_3\end{array}\\ \begin{array}{c}\overline{15}_1\\ \overline{15}_2\end{array}& \left(\begin{array}{ccc}\mathrm{\hspace{0.17em}\hspace{0.17em}1}& \mathrm{\hspace{0.17em}\hspace{0.17em}0}& \mathrm{\hspace{0.17em}\hspace{0.17em}0}\\ ϵ^2& \mathrm{\hspace{0.17em}\hspace{0.17em}1}& \mathrm{\hspace{0.17em}\hspace{0.17em}0}\end{array}\right)\mathrm{\Sigma },\end{array}\begin{array}{cc}& \begin{array}{cc}15_1& \mathrm{\hspace{0.17em}\hspace{0.17em}15}_2\end{array}\\ \begin{array}{c}\overline{15}_1\\ \overline{15}_2\end{array}& \left(\begin{array}{ccc}\mathrm{\hspace{0.17em}\hspace{0.17em}1}& \mathrm{\hspace{0.17em}\hspace{0.17em}0}& \\ ϵ^2& ϵ& \end{array}\right)(\mathrm{\Sigma }+S).\end{array}$$ (15) Noting that in terms of $`SU(3)_c\times SU(2)_W`$, $`15=(3,2)+(6,1)+(1,3)`$, we may conclude that the couplings involving $`15,\overline{15}`$ do not affect $`e^c`$ and $`l`$ states from $`10`$ and $`\overline{5}`$ respectively (they can only affect the $`q`$ states). The lepton mass matrix will coincide with (12), from which we have: $$\lambda _\tau ϵ^a,\lambda _e:\lambda _\mu :\lambda _\tau ϵ^5:ϵ^2:1,$$ (16) where $`a=0,1,2`$ determines the value of $`\mathrm{tan}\beta `$($`\frac{m_t}{m_b}ϵ^a`$). Turning to the quark sector, from (15) we see that $`10_3`$-plet also is not affected, while $`q_{10_1},q_{10_2}`$ will be mixed with $`q_{15_1},q_{15_2}`$. 
Analyzing (15), one can easily verify that for the ‘light’ $`q_i`$ states we will have: $$(10_1,15_1)\stackrel{}{_{}}q_1,15_2q_2,$$ $$10_2\stackrel{}{_{}}ϵq_2,10_3q_3.$$ (17) From (17), (12) and (14), we find the down quark mass matrix to be $$\begin{array}{ccc}& \begin{array}{ccc}d_1^c& d_2^c& d_3^c\end{array}& \\ \begin{array}{c}q_1\\ q_2\\ q_3\end{array}& \left(\begin{array}{ccc}ϵ^5& ϵ^4& ϵ^3\\ ϵ^4& ϵ^3& ϵ^2\\ ϵ^2& ϵ& \mathrm{\hspace{0.17em}\hspace{0.17em}1}\end{array}\right)ϵ^ah_d,& \end{array}$$ (18) from which $$\lambda _bϵ^a,\lambda _d:\lambda _s:\lambda _bϵ^5:ϵ^3:1.$$ (19) From (12), (16), (18), (19), and taking into account (17), we obtain $$\lambda _b=\lambda _\tau \left(1+𝒪(ϵ^2)\right)ϵ^a,$$ (20) while, for Yukawas of the second generation, $$\lambda _sϵ\lambda _\mu \frac{1}{5}\lambda _\mu .$$ (21) Assuming that $`\lambda _d2\lambda _e`$, from (16) (19) and (21) we will have $$\frac{\lambda _s}{\lambda _d}\frac{1}{10}\frac{\lambda _\mu }{\lambda _e}20.$$ (22) For up-type quarks, from (13), taking into account (17), we obtain $$\begin{array}{ccc}& \begin{array}{ccc}u_1^c& u_2^c& u_3^c\end{array}& \\ \begin{array}{c}q_1\\ q_2\\ q_3\end{array}& \left(\begin{array}{ccc}ϵ^6& ϵ^4& ϵ^3\\ ϵ^4& ϵ^3& ϵ^2\\ ϵ^3& ϵ& \mathrm{\hspace{0.17em}\hspace{0.17em}1}\end{array}\right)h_u,& \end{array}$$ (23) from which we obtained the desired Yukawa couplings $$\lambda _t1,\lambda _u:\lambda _c:\lambda _tϵ^6:ϵ^3:1.$$ (24) From (18) and (23), for the CKM matrix elements we find $$V_{us}ϵ,V_{cb}ϵ^2,V_{ub}ϵ^3,$$ (25) in good agreement with observations. To conclude, we see that with the help of $`𝒰(1)`$ flavor symmetry and $`\overline{15}+15`$-plets, in addition to the desirable hierarchies of charged fermion masses and CKM mixing angles, we can also get reasonable \[(20), (21), (22)\] asymptotic relations. ## 3 Value of $`\alpha _s(M_Z)`$ By analyzing the spectra of decoupled heavy states, from (15) we can verify that the masses of the states $`(\overline{6},1)+(1,\overline{3})+(6,1)+(1,3)`$ (from $`\overline{15}_2+15_2`$ respectively) are below the GUT scale and equal to $`M_SM_Gϵ`$. Indeed, these states will change the running of the gauge couplings above the $`M_S`$ scale and, as we will see, this opens up the possibility to obtain a reduced value for $`\alpha _s(M_Z)`$ <sup>6</sup><sup>6</sup>6For alternative mechanisms of achieving this see .. The solutions of the three renormalization-group (RG) equations are $$\alpha _G^1=\alpha _a^1\frac{b_a}{2\pi }\mathrm{ln}\frac{M_G}{M_Z}\frac{b_a^{}}{2\pi }\mathrm{ln}\frac{M_G}{M_S}+\mathrm{\Delta }_a+\delta _a,$$ (26) where $`\alpha _G`$ is the gauge coupling at the GUT scale, $`\alpha _a`$ denote the gauge couplings at $`M_Z`$ scale ($`\alpha _{1,2,3}`$ are the gauge couplings of $`U(1)_Y`$, $`SU(2)_W`$ and $`SU(3)_c`$ respectively), while $`b_a`$, $`b_a^{}`$ are given by $$(b_1,b_2,b_3)=(\frac{33}{5},1,3),(b_1^{},b_2^{},b_3^{})=(\frac{34}{5},4,5).$$ (27) The $`\mathrm{\Delta }_a`$ include all possible SUSY and heavy threshold corrections, and contributions from the two loop effects of MSSU(5). 
$`\delta _a`$ denote the difference of gauge coupling running between MSSU(5) and present model from $`M_S`$ up to $`M_G`$ in two loop approximation, $$\delta _a=\frac{1}{4\pi }\left(\frac{b_{ab}+b_{ab}^{}}{b_b+b_b^{}}\mathrm{ln}\frac{\alpha _b(M_S)}{\alpha _G}\frac{b_{ab}}{b_b}\mathrm{ln}\frac{\alpha _b(M_S)}{\alpha _G^0}\right),$$ (28) where $$\begin{array}{ccc}& & \\ & b_{ab}=\left(\begin{array}{ccc}\frac{199}{25}& \frac{27}{5}& \frac{88}{5}\\ \frac{9}{5}& \mathrm{\hspace{0.17em}\hspace{0.17em}25}& \mathrm{\hspace{0.17em}\hspace{0.17em}24}\\ \frac{11}{5}& \mathrm{\hspace{0.17em}\hspace{0.17em}9}& \mathrm{\hspace{0.17em}\hspace{0.17em}14}\end{array}\right),& \end{array}\begin{array}{ccc}& & \\ & b_{ab}^{}=\left(\begin{array}{ccc}\frac{904}{75}& \frac{144}{5}& \frac{128}{3}\\ \frac{144}{15}& \mathrm{\hspace{0.17em}\hspace{0.17em}24}& \mathrm{\hspace{0.17em}\hspace{0.17em}0}\\ \frac{16}{3}& \mathrm{\hspace{0.17em}\hspace{0.17em}0}& \frac{128}{3}\end{array}\right)& \end{array}$$ (29) and the appropriate couplings in (28) are calculated in one loop approximation. $`\alpha _G^0`$ is the gauge coupling at $`M_G`$ in MSSU(5). From (26), taking into account (27), one finds $$\alpha _s^1=\left(\alpha _s^1\right)^0+\frac{3}{2\pi }\mathrm{ln}\frac{M_G}{M_S}+\delta ,$$ (30) where $`\left(\alpha _s^1\right)^0=\frac{1}{7}\left(12\alpha _w^15\alpha _Y^1\right)+\frac{1}{7}\left(12\mathrm{\Delta }_25\mathrm{\Delta }_17\mathrm{\Delta }_3\right)`$ corresponds to the value of $`\alpha _s`$ obtained in MSSU(5) case, and $`\delta =\frac{1}{7}(12\delta _25\delta _17\delta _3)`$. Using the result $`\left(\alpha _s^1\right)^0=1/0.126`$ , and taking $`M_S/M_Gϵ0.2`$, (neglecting $`\delta `$ for the time being), we obtain $`\alpha _s0.115`$, in good agreement with experimental data . Taking into account (26) and (29), from (28) we obtain $`\delta =0.015`$, thus leaving the value of $`\alpha _s`$ unchanged as expected. ## 4 Proton Decay From (12) and (14), taking into account (17), we see that $`ql\overline{T}`$ type couplings in the family space have the same hierarchical structure as the down quark mass matrix (18). As far as $`qqT`$ operators are concerned, from (13), (17) one obtains, $$\begin{array}{ccc}& \begin{array}{ccc}q_1& q_2& q_3\end{array}& \\ \begin{array}{c}q_1\\ q_2\\ q_3\end{array}& \left(\begin{array}{ccc}ϵ^6& ϵ^5& ϵ^3\\ ϵ^5& ϵ^4& ϵ^2\\ ϵ^3& ϵ^2& \mathrm{\hspace{0.17em}\hspace{0.17em}1}\end{array}\right)T,& \end{array}$$ (31) from which we see that the appropriate couplings are suppressed by a factor $`ϵ(1/5)`$ compared to the up type quark mass matrix (23). From (26), we find that $`M_G=\left(\frac{M_S}{M_G}\right)^{1/2}M_G^0M_G^0/\sqrt{5}`$, where $`M_G^0`$ is the GUT scale in MSSU(5). From all this we may conclude that the proton life time in our model will be $`\tau _p5\tau _p^0`$ (that is, a factor $`5`$ larger than in MSSU(5)). For further suppression of nucleon decay, the mass scale $`M_S`$ should be reduced. However, this would ruin the gauge coupling unification unless some additional mechanism (for retaining unification) is applied. Such a program can be successfully realized in extended $`SU(5+N)`$ GUTs . ## 5 Neutrino Oscillations Turning to the neutrino sector, for accommodating the recent solar and atmospheric Superkamiokande data (see , respectively), we will invoke the mechanism suggested in refs. . The atmospheric anomaly is explained through maximal $`\nu _\mu \nu _\tau `$ mixings which is achieved through quasi-degenerate massive $`\nu _\mu `$, $`\nu _\tau `$ states. 
Since these states are too heavy to explain the solar neutrino data, we are led introduce a sterile neutrino state $`\nu _s`$. The solar neutrino anomaly is resolved via the small angle $`\nu _e\nu _s`$ MSW oscillations. Together with $`\nu _s`$ state we introduce two heavy right handed states $`𝒩_{2,3}`$. Choosing the $`𝒰(1)`$ charges of these states to be $$Q_{𝒩_2}=\frac{1}{2},Q_{𝒩_3}=\frac{1}{2},Q_{\nu _s}=\frac{41}{2},$$ (32) and in Table (1) taking $$r=\frac{a}{5}\frac{1}{10},$$ (33) the relevant couplings are (these singlet states do not transform under the $`Z_2`$ $``$ symmetry): $$\begin{array}{cc}& \begin{array}{cc}𝒩_2& 𝒩_3\end{array}\\ \begin{array}{c}\overline{5}_1\\ \overline{5}_2\\ \overline{5}_3\end{array}& \left(\begin{array}{ccc}ϵ^2& ϵ& \\ ϵ& 1& \\ \mathrm{\hspace{0.17em}1}& 0& \end{array}\right)H\end{array},\begin{array}{cc}& \begin{array}{cc}𝒩_2& 𝒩_3\end{array}\\ \begin{array}{c}𝒩_2\\ 𝒩_3\end{array}& \left(\begin{array}{ccc}ϵ& \mathrm{\hspace{0.17em}\hspace{0.17em}1}& \\ \mathrm{\hspace{0.17em}1}& \mathrm{\hspace{0.17em}\hspace{0.17em}0}& \end{array}\right)\rho S\end{array},$$ (34) $$W_{\nu s}=ϵ^{20}\left(\overline{5}_3+ϵ\overline{5}_2+ϵ^2\overline{5}_1\right)\nu _sH+Sϵ^{41}\nu _s^2,$$ (35) where $`\rho `$ is a dimensionless coupling. Integration of $`𝒩_{2,3}`$ states leads to the mass matrix for the ‘light’ neutrinos: $$\begin{array}{cccc}& \begin{array}{cccc}\nu _s& \nu _e& \nu _\mu & \nu _\tau \end{array}& & \\ m_\nu =\begin{array}{c}\nu _s\\ \nu _e\\ \nu _\mu \\ \nu _\tau \end{array}& \left(\begin{array}{cccc}m_{\nu _s}& m^{}ϵ^2& m^{}ϵ& m^{}\\ m^{}ϵ^2& mϵ^3& mϵ^2& m\\ m^{}ϵ& mϵ^2& mϵ& m\\ m^{}& mϵ& m& 0\end{array}\right),& & \end{array}$$ (36) where we have defined: $$m\frac{h_u^2}{\rho M_Pϵ_G},m^{}ϵ^{20}h_u,m_{\nu _s}M_Pϵ_Gϵ^{41}.$$ (37) Taking $`\rho 210^2`$, $`ϵ=0.20.22`$, from (37) we have $$m6.310^2\mathrm{eV},$$ $$m_{\nu _s}=(510^4310^2)\mathrm{eV},$$ $$m^{}=(1.810^31.210^2)\mathrm{eV}.$$ (38) Note that the sterile neutrino is kept light (see (35), (38)) by the $`𝒰(1)`$ symmetry . Taking $$m=6.310^2\mathrm{eV},m^{}=1.810^3\mathrm{eV},m_{\nu _s}=210^3\mathrm{eV}$$ (39) from (39) and (36), we have for the atmospheric neutrino oscillation parameters $$\mathrm{\Delta }m_{23}^2=2m^2ϵ210^3\mathrm{eV}^2,$$ $$\mathrm{sin}^22\theta _{\mu \tau }=1𝒪(ϵ^2).$$ (40) The solar neutrino oscillation parameters are given by $$\mathrm{\Delta }m_{\nu _e\nu _s}^2m_{\nu _s}^2410^6\mathrm{eV}^2,$$ $$\mathrm{sin}^22\theta _{es}4\left(\frac{m^{}ϵ^2}{m_{\nu _s}}\right)^2510^3.$$ (41) We see that the $`𝒰(1)`$ flavor symmetry helps provide a natural explanation of the solar and atmospheric experimental data. Note that $`a`$ is still undetermined, and therefore the magnitude of $`\mathrm{tan}\beta `$ is not fixed in our model. ## 6 Automatic Matter Parity Let us conclude by considering all possible ‘matter’ parity violating operators: $$\overline{5}_iH,10_i\overline{H}(\mathrm{\Sigma }\overline{H}),(\mathrm{\Sigma }+S)15_i\overline{H}\overline{H},$$ $$(\mathrm{\Sigma }+S)\overline{15}_iHH,(\mathrm{\Sigma }+S)10_i\overline{5}_j\overline{5}_k,10_i10_j10_k\overline{H},$$ $$(\mathrm{\Sigma }+S)15_i\overline{5}_j\overline{5}_k,\mathrm{\Sigma }^215_i15_j15_k\overline{H},\mathrm{}$$ (42) From Table (1), taking into account (33), we observe that the terms in (42) all have non-integer $`𝒰(1)`$ charges, and consequently are forbidden to ‘all orders’ in powers of $`X`$. Therefore, thanks to $`𝒰(1)`$ flavor symmetry, the model has automatic matter parity. 
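The magnitudes quoted in Sections 3 and 5 can be cross-checked with elementary arithmetic. The sketch below is a minimal check, assuming only eq. (30) with $`(\alpha _s^{-1})^0=1/0.126`$ and $`M_S/M_G\simeq 0.2`$, together with the reference masses of eq. (39), read as negative powers of ten.

```python
# Minimal cross-checks of the alpha_s shift (eq. 30) and the neutrino
# oscillation parameters (eqs. 39-41).
import math

# alpha_s from eq. (30), first neglecting delta, then including |delta| = 0.015
inv_alpha_s = 1.0 / 0.126 + 3.0 / (2.0 * math.pi) * math.log(1.0 / 0.2)
print("alpha_s ~", round(1.0 / inv_alpha_s, 4))            # ~0.115, as quoted
print("with delta:", round(1.0 / (inv_alpha_s + 0.015), 4))  # essentially unchanged

# neutrino parameters from eq. (39), masses in eV
eps, m, mp, m_s = 0.2, 6.3e-2, 1.8e-3, 2.0e-3     # eps, m, m', m_nu_s
print("dm2_atm   ~", 2 * m**2 * eps, "eV^2")      # ~2e-3, eq. (40)
print("dm2_solar ~", m_s**2, "eV^2")              # ~4e-6, eq. (41)
print("sin^2 2theta_es ~", 4 * (mp * eps**2 / m_s)**2)   # ~5e-3, eq. (41)
```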
## 7 Conclusion In conclusion, we note that the mechanisms discussed here for resolving the various puzzles in $`SU(5)`$ can be successfully generalized to $`SU(5+N)`$ GUTs . In this paper we have not addressed the gauge hierarchy problem whose resolution in $`SU(5)`$ requires additional ’scalar’ multiplets belonging to $`50+\overline{50}+75`$. In such a scenario the Higgs doublets remain ’massless’, while the color triplets obtain masses by mixing with the triplets in $`50,\overline{50}`$. On the other hand, in order to retain perturbative gauge couplings up to $`M_P`$, the masses of $`50,\overline{50}`$ states should exceed $`M_G`$, which means that the ordinary color triplets (from $`H,\overline{H}`$) will lie below $`M_G`$. This would further destabilize the proton, and possibly disrupt unification of the gauge couplings. To avoid this, one could either consider more complicated $`SU(5)`$ scenarios with extended scalar sector, or extended $`SU(5+N)`$ GUTs . In the latter case, for instance, a $`SU(6)`$ model has been discussed in which the MSSM Higgs doublets are pseudo-Goldstone bosons, the proton lifetime is $`10^2\tau _p^{SU(5)}`$, and neutrino oscillations involve bi-maximal mixings.
no-problem/9904/astro-ph9904303.html
# Photodissociative Regulation of Star Formation in Metal-Free Pregalactic Clouds ## 1 Introduction After standard recombination at $`z1100`$, the universe is considered to be reionized at some redshift $`z5`$ from the negative results of Gunn-Peterson experiments in quasar spectra. It has been pointed out that first stars can play an important role in the reionization of the universe (e.g., Couchman & Rees 1986). This scenario has been investigated in detail using numerical simulations (e.g., Gnedin & Ostriker 1997) or semi-analytical models (e.g., Fukugita & Kawasaki 1994; Haiman & Loeb 1997). In the latter models, it is crucial to know whether a cloud of mass $`M`$ that virializes at redshift $`z`$ can cool or not, i.e., if star formation occurs or not. Haiman, Thoul, & Loeb (1996) investigated this problem (see also Tegmark et al. 1997). They found that molecular cooling plays a crucial role for clouds with $`T_{\mathrm{vir}}10^4\mathrm{K}`$ (“small” pregalactic clouds, hereafter) at $`10z_{\mathrm{vir}}100`$, where $`T_{\mathrm{vir}}`$ and $`z_{\mathrm{vir}}`$ are the virial temperature and the redshift at virialization respectively, and determined the minimum mass of the virialized cloud must have in order to cool in a Hubble time. On the other hand, Haiman, Rees, & Loeb (1997) pointed out that in the presence of ultraviolet (UV) background radiation at the level needed to ionize the universe, molecular hydrogen is photodissociated by far ultraviolet (FUV) photons, whose radiation energy is less than the Lyman limit, in small pregalactic clouds. Thus, they asserted that molecular hydrogen in small pregalactic clouds is universally destroyed long before the reionization of the universe. In their reionization model, Haiman & Loeb (1997) assumed that only objects with virial temperature above $`10^4`$ K can cool in a Hubble time, owing to atomic cooling and star formation occurs subsequently. Recently, Ciardi, Ferrara, & Abel (1999) found that the photodissociated regions are not large enough to overlap at $`z2030`$. In the same redshift range, the flux of FUV background is well below the threshold required by Haiman et al.(1997) to prevent the collapse of the clouds. However, molecular hydrogen in a virialized cloud is photodissociated not only by external FUV background radiation, but also by FUV photons produced by massive stars within the cloud. Here, we assess the negative feedback of massive star formation on molecular hydrogen formation in a primordial cloud. ## 2 Region of Influence of an OB star Around an OB star, hydrogen is photoionized, and an HII region is formed. Ionizing photons hardly escape from the HII region, but photons whose radiation energy are below the Lyman limit can get away. Such FUV photons photodissociate molecular hydrogen, and a photodissociation region (PDR) is formed just outside the HII region. In this section, we study how much mass in a primordial cloud is affected by such FUV photons from an OB star and, as a result, becomes unable to cool in a free-fall time owing to the lack of the coolant. We consider a small pregalactic cloud of primordial composition. In such an object, H<sub>2</sub> is formed mainly by the H<sup>-</sup> process at $`z100`$: $`\mathrm{H}+e^{}`$ $``$ $`\mathrm{H}^{}+\gamma ;`$ (1) $`\mathrm{H}+\mathrm{H}^{}`$ $``$ $`\mathrm{H}_2+e^{}.`$ (2) At $`z100`$, H<sup>-</sup> is predominantly photodissociated by CMB photons before the reaction (2) proceeds. 
On the other hand, in a PDR, photodissociation of H<sup>-</sup> by UV radiation from an OB star does not dominate the reaction (2) except in the vicinity of the star. Then we neglect the photodissociation of H<sup>-</sup>. The rate-determining stage of the H<sup>-</sup> process is the reaction (1), whose rate coefficient $`k_\mathrm{H}^{}`$ is (de Jong 1972) $$k_\mathrm{H}^{}=1.0\times 10^{18}T\mathrm{s}^1\mathrm{cm}^3.$$ (3) In a PDR, H<sub>2</sub> is dissociated mainly via the two-step photodissociation process: $$\mathrm{H}_2+\gamma \mathrm{H}_2^{}2\mathrm{H},$$ (4) whose rate coefficient $`k_{2\mathrm{s}\mathrm{t}\mathrm{e}\mathrm{p}}`$ is given by (Kepner, Babul, & Spergel 1997; Draine & Bertoldi 1996) $$k_{2\mathrm{s}\mathrm{t}\mathrm{e}\mathrm{p}}=1.13\times 10^8F_{\mathrm{LW}}\mathrm{s}^1.$$ (5) Here $`F_{\mathrm{LW}}(\mathrm{ergs}\mathrm{s}^1\mathrm{cm}^2\mathrm{Hz}^1)`$ is the averaged radiation flux in the Lyman and Werner (LW) bands and can be written as $$F_{\mathrm{LW}}=F_{\mathrm{LW},\mathrm{ex}}f_{\mathrm{shield}},$$ (6) where $`F_{\mathrm{LW},\mathrm{ex}}`$ is the incident flux into the PDR at 12.4 eV and the shielding factor $`f_{\mathrm{shield}}`$ is given by (Draine & Bertoldi 1996) $$f_{\mathrm{shield}}=\mathrm{min}[1,(\frac{N_{\mathrm{H}_2}}{10^{14}})^{0.75}].$$ (7) The timescale in which the H<sub>2</sub> fraction reaches the equilibrium value is given by $$t_{\mathrm{dis}}=k_{2\mathrm{s}\mathrm{t}\mathrm{e}\mathrm{p}}^1.$$ (8) In the presence of FUV radiation, if the temperature and density were fixed, the H<sub>2</sub> fraction $`f`$ initially would increase and reach the equilibrium value for a temporal ionization fraction after $`t_{\mathrm{dis}}`$, and then it would decline as ionization fraction decreased as a result of recombination. Actually, if the pregalactic cloud can once produce a sufficient amount of molecular hydrogen to cool in a free-fall time, the cloud can collapse and star formation occurs subsequently. Note that because of their low ionization degree, inverse Compton cooling by CMB photons is not effective in small objects with $`T_{\mathrm{vir}}10^4`$ K. We investigate here how much mass around an OB star is affected by the photodissociating FUV radiation from the star and becomes unable to cool in a free-fall time. In particular, we seek the lower bound of such mass. The equilibrium number density of H<sub>2</sub> under ionization degree $`x`$ is $`n_{\mathrm{H}_2}`$ $`=`$ $`{\displaystyle \frac{k_\mathrm{H}^{}}{k_{2\mathrm{s}\mathrm{t}\mathrm{e}\mathrm{p}}}}xn^2`$ (9) $`=`$ $`0.88\times 10^{26}xF_{\mathrm{LW}}^1Tn^2.`$ (10) Near the star, $`F_{\mathrm{LW}}`$ is so large that the dissociation time $`t_{\mathrm{dis}}`$ is smaller than the recombination time $`t_{\mathrm{rec}}=(k_{\mathrm{rec}}x_\mathrm{i}n)^1`$, where $`k_{\mathrm{rec}}`$ is the recombination coefficient, $`x_\mathrm{i}`$ is the ionization degree at virialization, and $`n`$ is the number density of hydrogen nuclei. Then the chemical equilibrium between above processes is reached before significant recombination proceeds. Far distant from the star, the recombination proceeds before the molecular fraction reaches the equilibrium value and the ionization degree significantly diminishes. However, since we are seeking how much mass is at least affected by the photodissociating FUV radiation from the star, we use the equilibrium value (9) with initial ionization degree $`x=x_\mathrm{i}`$ as the H<sub>2</sub> number density. 
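To make the equilibrium argument concrete, a minimal numerical sketch of eqs. (3), (5) and (9) follows, assuming the quoted rate coefficients (with the displayed exponent of the H<sup>-</sup> formation rate read as $`10^{-18}`$), no self-shielding, and a purely illustrative incident flux of the order of the critical flux introduced later in eq. (22).

```python
# Equilibrium H2 number density from eqs. (3), (5) and (9); self-shielding
# (eq. 7) is ignored here, i.e. f_shield = 1.
def k_Hminus(T):                  # H + e- -> H- + gamma (de Jong 1972),
    return 1.0e-18 * T            # s^-1 cm^3

def k_2step(F_LW):                # two-step photodissociation rate, eq. (5)
    return 1.13e8 * F_LW          # s^-1

def n_H2_eq(x_i, n, T, F_LW):     # eq. (9): formation/dissociation balance
    return k_Hminus(T) * x_i * n ** 2 / k_2step(F_LW)

x_i, n, T = 1e-4, 1.0, 1.0e3      # fiducial cloud values used in the text
F_LW = 1.0e-22                    # illustrative flux, erg s^-1 cm^-2 Hz^-1

f = n_H2_eq(x_i, n, T, F_LW) / n
print(f, "equilibrium H2 fraction; compare f_cool ~ 1e-3 of eq. (16)")
print(1.0 / k_2step(F_LW) / 3.156e7, "yr to reach equilibrium (t_dis, eq. 8)")
```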
Consider an OB star, which radiates at the rate of $`L_{\mathrm{LW}}`$ \[ergs s<sup>-1</sup> Hz<sup>-1</sup>\] in the LW bands. For an O5 star, whose mass is 40 $`M_{\mathrm{}}`$, $`L_{\mathrm{LW}}10^{24}`$ ergs s<sup>-1</sup> Hz<sup>-1</sup>. At the point whose distance from the star is $`r`$, the averaged flux in the Lyman and Werner bands is approximately given by $$F_{\mathrm{LW}}=\frac{L_{\mathrm{LW}}}{4\pi r^2}f_{\mathrm{shield}}.$$ (11) Using above relations, we obtain the H<sub>2</sub> column density between the star and the point whose distance from the star is $`r`$: $$N_{\mathrm{H}_2}=\{\begin{array}{cc}CV\hfill & \text{(}N_{\mathrm{H}_2}<10^{14}\mathrm{cm}^3\text{)}\hfill \\ 10^{14}[0.25(CV/10^{14})+0.75]^4\hfill & \text{(}N_{\mathrm{H}_2}>10^{14}\mathrm{cm}^3\text{)}\hfill \end{array}$$ (12) where $`C`$ $`=`$ $`0.88\times 10^{26}x_\mathrm{i}L_{\mathrm{LW}}^1Tn^2,`$ (13) $`V`$ $`=`$ $`{\displaystyle \frac{4\pi }{3}}r^3.`$ (14) Here we have assumed $`n=`$const. in space, for simplicity. We define here the region of influence around a star as that where the cooling time $`t_{\mathrm{cool}}=\frac{(3/2)kT}{fn\mathrm{\Lambda }_{\mathrm{H}_2}}`$ becomes larger than the free-fall time $`t_{\mathrm{ff}}=(\frac{3\pi \mathrm{\Omega }_\mathrm{b}}{32Gm_\mathrm{p}n})^{1/2}`$ as a result of the photodissociation of molecular hydrogen, where $`f=n_{\mathrm{H}_2}/n`$ is the H<sub>2</sub> concentration, $`\mathrm{\Lambda }_{\mathrm{H}_2}`$ is the cooling function of molecular hydrogen, and $`\mathrm{\Omega }_\mathrm{b}`$ is the baryon mass fraction. The condition $`t_{\mathrm{cool}}>t_{\mathrm{ff}}`$ is satisfied as long as the H<sub>2</sub> fraction $`f<f^{(\mathrm{cool})}`$ $`=`$ $`({\displaystyle \frac{24Gm_\mathrm{p}}{\pi }})^{1/2}{\displaystyle \frac{kT}{\mathrm{\Omega }_\mathrm{b}^{1/2}n^{1/2}\mathrm{\Lambda }_{\mathrm{H}_2}}}`$ (15) $`=`$ $`1\times 10^3({\displaystyle \frac{n}{1\mathrm{c}\mathrm{m}^3}})^{1/2}({\displaystyle \frac{T}{10^3\mathrm{K}}})^3({\displaystyle \frac{\mathrm{\Omega }_\mathrm{b}}{0.05}})^{1/2},`$ (16) where we used our fit to the Martin, Schwarz,& Mandy (1996) H<sub>2</sub> cooling function $$\mathrm{\Lambda }_{\mathrm{H}_2}4\times 10^{25}(\frac{T}{1000\mathrm{K}})^4\mathrm{ergs}\mathrm{s}^1\mathrm{cm}^3$$ (17) for the low temperature ($`600\mathrm{K}T3000\mathrm{K}`$) and low density ($`n10^4\mathrm{cm}^3`$) regime. 
The same condition leads to the condition on the averaged flux in the LW bands with equation (9): $`F_{\mathrm{LW}}>F_{\mathrm{LW}}^{(\mathrm{cool})}`$ $`=`$ $`{\displaystyle \frac{k_\mathrm{H}^{}x_\mathrm{i}n}{1.13\times 10^8f^{(\mathrm{cool})}}}`$ (18) $`=`$ $`0.7\times 10^{24}\mathrm{ergs}\mathrm{s}^1\mathrm{cm}^2\mathrm{Hz}^1({\displaystyle \frac{x_\mathrm{i}}{10^4}})({\displaystyle \frac{n}{1\mathrm{c}\mathrm{m}^3}})^{3/2}({\displaystyle \frac{T}{10^3\mathrm{K}}})^4({\displaystyle \frac{\mathrm{\Omega }_\mathrm{b}}{0.05}})^{1/2}.`$ (19) Corresponding to this critical LW flux $`F_{\mathrm{LW}}^{(\mathrm{cool})}`$, a critical radius $`r^{(\mathrm{cool})}`$ is determined by the relation $$F_{\mathrm{LW}}[r^{(\mathrm{cool})}]=F_{\mathrm{LW}}^{(\mathrm{cool})}.$$ (20) Actually, the timescale for molecular hydrogen to reach the equilibrium value at $`r=r^{(\mathrm{cool})}`$, $$t_{\mathrm{dis}}[r^{(\mathrm{cool})}]=4.0\times 10^8\mathrm{yr}(\frac{x_\mathrm{i}}{10^4})^1(\frac{n}{1\mathrm{c}\mathrm{m}^3})^{3/2}(\frac{T}{10^3\mathrm{K}})^4(\frac{\mathrm{\Omega }_\mathrm{b}}{0.05})^{1/2},$$ (21) is longer than the lifetime of an OB star $`t_{\mathrm{OB}}3\times 10^6`$ yr. This means that as far region as $`r=r^{(\mathrm{cool})}`$ is rarely affected within the lifetime of a single massive star. We define here another critical LW flux $`F_{\mathrm{LW}}^{(\mathrm{eq})}`$ and its corresponding radius $`r^{(\mathrm{eq})}`$ where the timescale for molecular hydrogen to reach the equilibrium value $`t_{\mathrm{dis}}`$ becomes equal to the lifetime of an OB star; $$F_{\mathrm{LW}}^{(\mathrm{eq})}=0.93\times 10^{22}\mathrm{ergs}\mathrm{s}^1\mathrm{cm}^2\mathrm{Hz}^1(\frac{t_{\mathrm{OB}}}{3\times 10^6\mathrm{yr}})^1,$$ (22) $$F_{\mathrm{LW}}[r^{(\mathrm{eq})}]=F_{\mathrm{LW}}^{(\mathrm{eq})}.$$ (23) In the region of influence, we require that two conditions be met: (1) the cooling time for the equilibrium H<sub>2</sub> fraction is longer than the free-fall time (i.e., $`t_{\mathrm{cool}}[f=f^{(\mathrm{eq})}]>t_{\mathrm{ff}}`$,where $`f^{(\mathrm{eq})}`$ is the equilibrium H<sub>2</sub> fraction) and (2) the equilibrium H<sub>2</sub> fraction is reached within the lifetime of the central star (i.e., $`t_{\mathrm{dis}}<t_{\mathrm{OB}}`$). Hence the radius of influence $`r^{(\mathrm{inf})}`$ is determined by the smaller one of either $`r^{(\mathrm{cool})}`$ or $`r^{(\mathrm{eq})}`$; $$r^{(\mathrm{inf})}=\mathrm{min}[r^{(\mathrm{cool})},r^{(\mathrm{eq})}].$$ (24) Note the LW flux $`F_{\mathrm{LW}}`$ at which $`t_{\mathrm{dis}}`$ becomes equal to $`t_{\mathrm{rec}}`$ is $`2\times 10^{24}(x_\mathrm{i}/10^4)(n/1\mathrm{c}\mathrm{m}^3)(T/10^3\mathrm{K})^{0.64}\mathrm{ergs}\mathrm{s}^1\mathrm{cm}^2\mathrm{Hz}^1`$. Here we used the recombination coefficient $`k_{\mathrm{rec}}=1.88\times 10^{10}T^{0.64}\mathrm{s}^1\mathrm{cm}^3`$ (Hutchins 1976). Therefore, even at the edge of the region of influence, the condition $`t_{\mathrm{dis}}<t_{\mathrm{rec}}`$ is usually satisfied. In this case, the chemical equilibrium value for H<sub>2</sub> is reached before the ionization degree significantly decreases from the initial value $`x_\mathrm{i}`$. 
If the self-shielding of LW band photons could be neglected, the radius $`r^{(\mathrm{cool})}`$ would be given by $`r^{(\mathrm{cool})}=r_{\mathrm{no}\mathrm{sh}}^{(\mathrm{cool})}`$ $`=`$ $`[{\displaystyle \frac{L_{\mathrm{LW}}}{4\pi F_{\mathrm{LW}}^{(\mathrm{cool})}}}]^{1/2}`$ (25) $`=`$ $`3.4\times 10^{23}\mathrm{cm}({\displaystyle \frac{x_\mathrm{i}}{10^4}})^{1/2}({\displaystyle \frac{L_{\mathrm{LW}}}{10^{24}\mathrm{ergs}\mathrm{s}^1\mathrm{Hz}^1}})^{1/2}({\displaystyle \frac{T}{10^3\mathrm{K}}})^2({\displaystyle \frac{n}{1\mathrm{c}\mathrm{m}^3}})^{3/4}({\displaystyle \frac{\mathrm{\Omega }_\mathrm{b}}{0.05}})^{1/4}.`$ (26) This expression is valid only when $`N_{\mathrm{H}_2}<10^{14}\mathrm{cm}^2`$. When the H<sub>2</sub> column density $`N_{\mathrm{H}_2}`$ becomes larger than $`10^{14}\mathrm{cm}^2`$, self-shielding of LW band photons begins as can be seen from equation (7). Here, we define the shielding radius $`r_{\mathrm{sh}}`$ as the radius where $`N_{\mathrm{H}_2}(r_{\mathrm{sh}})=10^{14}\mathrm{cm}^2`$. Using equation (12), the shielding radius is $`r_{\mathrm{sh}}`$ $`=`$ $`[{\displaystyle \frac{10^{14}}{(4\pi /3)C}}]^{1/3}`$ (27) $`=`$ $`3.0\times 10^{21}\mathrm{cm}({\displaystyle \frac{x_\mathrm{i}}{10^4}})^{1/3}({\displaystyle \frac{L_{\mathrm{LW}}}{10^{24}\mathrm{ergs}\mathrm{s}^1\mathrm{Hz}^1}})^{1/3}({\displaystyle \frac{T}{10^3\mathrm{K}}})^{1/3}({\displaystyle \frac{n}{1\mathrm{c}\mathrm{m}^3}})^{2/3}.`$ (28) When the self-shielding becomes important, $`N_{\mathrm{H}_2}`$ increases and $`F_{\mathrm{LW}}`$ decreases rapidly with $`r`$. Then $`r^{(\mathrm{cool})}`$ is not much larger than $`r_{\mathrm{sh}}`$. In such a case, we put $`r^{(\mathrm{cool})}=r_{\mathrm{sh}}`$ as a lower bound. Then $`r^{(\mathrm{cool})}`$ is given by the lesser one of those given by equations (25) or (27). 
In all the same way as above, we can obtain the value of $`r^{(\mathrm{eq})}`$; $$r^{(\mathrm{eq})}=\mathrm{min}[r_{\mathrm{no}\mathrm{sh}}^{(\mathrm{eq})},r_{\mathrm{sh}}],$$ (29) where $`r_{\mathrm{no}\mathrm{sh}}^{(\mathrm{eq})}`$ $`=`$ $`[{\displaystyle \frac{L_{\mathrm{LW}}}{4\pi F_{\mathrm{LW}}^{(\mathrm{eq})}}}]^{1/2}`$ (30) $`=`$ $`2.9\times 10^{22}({\displaystyle \frac{L_{\mathrm{LW}}}{10^{24}\mathrm{ergs}\mathrm{s}^1\mathrm{Hz}^1}})^{1/2}({\displaystyle \frac{t_{\mathrm{OB}}}{3\times 10^6\mathrm{yr}}})^{1/2}.`$ (31) The baryonic mass within the region of influence $`M_\mathrm{b}^{(\mathrm{inf})}`$ is then $`M_\mathrm{b}^{(\mathrm{inf})}`$ $`=`$ $`{\displaystyle \frac{4\pi }{3}}nm_\mathrm{p}r_{}^{(\mathrm{inf})}{}_{}{}^{3}`$ (32) $`=`$ $`\mathrm{min}\left\{\begin{array}{cc}1.4\times 10^{14}M_{\mathrm{}}(\frac{x_\mathrm{i}}{10^4})^{3/2}(\frac{L_{\mathrm{LW}}}{10^{24}\mathrm{ergs}\mathrm{s}^1\mathrm{Hz}^1})^{3/2}(\frac{T}{10^3\mathrm{K}})^6(\frac{n}{1\mathrm{c}\mathrm{m}^3})^{5/4}(\frac{\mathrm{\Omega }_\mathrm{b}}{0.05})^{3/4}\hfill & \\ 0.85\times 10^{11}M_{\mathrm{}}(\frac{L_{\mathrm{LW}}}{10^{24}\mathrm{ergs}\mathrm{s}^1\mathrm{Hz}^1})^{3/2}(\frac{t_{\mathrm{OB}}}{3\times 10^6\mathrm{yr}})^{3/2}(\frac{n}{1\mathrm{c}\mathrm{m}^3})\hfill & \\ 1.0\times 10^8M_{\mathrm{}}(\frac{x_\mathrm{i}}{10^4})^1(\frac{L_{\mathrm{LW}}}{10^{24}\mathrm{ergs}\mathrm{s}^1\mathrm{Hz}^1})(\frac{T}{10^3\mathrm{K}})^1(\frac{n}{1\mathrm{c}\mathrm{m}^3})^1\hfill & \end{array}\right\}`$ (36) In equation (36), each expression corresponds to the case of $`r^{(\mathrm{inf})}=r_{\mathrm{no}\mathrm{sh}}^{(\mathrm{cool})}`$, $`r_{\mathrm{no}\mathrm{sh}}^{(\mathrm{eq})}`$, and $`r_{\mathrm{sh}}`$ from the top to bottom, respectively. We shall keep this order hereafter. At first glance, the first expression in equation (36) seems to be always larger than the others, but its stronger dependence on temperature makes it important for higher temperature (i.e., more massive) objects than the normalized value. From equation (36), we can see that the mass within a region of influence of an O star already exceeds the scale of the small pregalactic object. We have considered in this letter the regulation of star formation by photodissociation of molecular hydrogen in a pregalactic cloud. On the other hand, Lin & Murray (1992) considered only the regulation by photoionization. In such a case, the mass affected by an OB star, namely the baryonic mass within a Strömgren sphere, is $`M_\mathrm{b}^{(\mathrm{St})}`$ $`=`$ $`{\displaystyle \frac{m_\mathrm{p}nQ_{}}{k_{\mathrm{rec}}n^2}}`$ (37) $`=`$ $`3.7\times 10^3M_{\mathrm{}}({\displaystyle \frac{T}{10^3\mathrm{K}}})^{0.64}({\displaystyle \frac{n}{1\mathrm{c}\mathrm{m}^3}})^1({\displaystyle \frac{Q_{}}{10^{49}\mathrm{s}^1}}),`$ (38) where $`Q_{}`$ is the flux of ionizing photons by a OB star and $`Q_{}10^{49}\mathrm{s}^1`$ for an O5 star. This is by far smaller than our estimated mass of photodissociative influence $`M_\mathrm{b}^{(\mathrm{inf})}`$. To be more specific in the cosmological context, we consider here pregalactic clouds at virialization. 
The number density at virialization is $$n_{\mathrm{vir}}=0.68\mathrm{cm}^3h_{50}^2(\frac{\mathrm{\Omega }_\mathrm{b}}{0.05})(\frac{1+z_{\mathrm{vir}}}{30})^3,$$ (39) and the virial temperature is $$T_{\mathrm{vir}}=6.8\times 10^2\mathrm{K}h_{50}^{2/3}(\frac{\mathrm{\Omega }_\mathrm{b}}{0.05})^{2/3}(\frac{M_\mathrm{b}}{10^4M_{\mathrm{}}})^{2/3}(\frac{1+z_{\mathrm{vir}}}{30}).$$ (40) Substituting equations (39) and (40) into equation (36), we obtain $$M_\mathrm{b}^{(\mathrm{inf})}=\mathrm{min}\left\{\begin{array}{cc}2.3\times 10^{15}M_{\mathrm{}}(\frac{x_\mathrm{i}}{10^4})^{3/2}(\frac{L_{\mathrm{LW}}}{10^{24}\mathrm{ergs}\mathrm{s}^1\mathrm{Hz}^1})^{3/2}h_{50}^{13/2}(\frac{\mathrm{\Omega }_\mathrm{b}}{0.05})^2(\frac{M_\mathrm{b}}{10^4M_{\mathrm{}}})^4(\frac{1+z_{\mathrm{vir}}}{30})^{39/4}\hfill & \\ 0.58\times 10^{11}M_{\mathrm{}}(\frac{L_{\mathrm{LW}}}{10^{24}\mathrm{ergs}\mathrm{s}^1\mathrm{Hz}^1})^{3/2}(\frac{t_{\mathrm{OB}}}{3\times 10^6\mathrm{yr}})^{3/2}h_{50}^2(\frac{\mathrm{\Omega }_\mathrm{b}}{0.05})(\frac{1+z_{\mathrm{vir}}}{30})^3\hfill & \\ 2.2\times 10^8M_{\mathrm{}}(\frac{x_\mathrm{i}}{10^4})^1(\frac{L_{\mathrm{LW}}}{10^{24}\mathrm{ergs}\mathrm{s}^1\mathrm{Hz}^1})h_{50}^{8/3}(\frac{\mathrm{\Omega }_\mathrm{b}}{0.05})^{1/3}(\frac{M_\mathrm{b}}{10^4M_{\mathrm{}}})^{2/3}(\frac{1+z_{\mathrm{vir}}}{30})^4\hfill & \end{array}\right\}$$ (41) In order for the star formation to continue after a massive star forms, the region of influence must be smaller than the original pregalactic cloud. Then $`M_\mathrm{b}^{(\mathrm{inf})}<M_\mathrm{b}`$ is the necessary condition, which leads to $$M_\mathrm{b}>\mathrm{min}\left\{\begin{array}{cc}1.9\times 10^6M_{\mathrm{}}(\frac{x_\mathrm{i}}{10^4})^{3/10}(\frac{L_{\mathrm{LW}}}{10^{24}\mathrm{ergs}\mathrm{s}^1\mathrm{Hz}^1})^{3/10}h_{50}^{13/10}(\frac{\mathrm{\Omega }_\mathrm{b}}{0.05})^{2/5}(\frac{1+z_{\mathrm{vir}}}{30})^{39/20}\hfill & \\ 0.58\times 10^{11}M_{\mathrm{}}(\frac{L_{\mathrm{LW}}}{10^{24}\mathrm{ergs}\mathrm{s}^1\mathrm{Hz}^1})^{3/2}(\frac{t_{\mathrm{OB}}}{3\times 10^6\mathrm{yr}})^{3/2}h_{50}^2(\frac{\mathrm{\Omega }_\mathrm{b}}{0.05})(\frac{1+z_{\mathrm{vir}}}{30})^3\hfill & \\ 4.0\times 10^6M_{\mathrm{}}(\frac{x_\mathrm{i}}{10^4})^{3/5}(\frac{L_{\mathrm{LW}}}{10^{24}\mathrm{ergs}\mathrm{s}^1\mathrm{Hz}^1})^{3/5}h_{50}^{8/5}(\frac{\mathrm{\Omega }_\mathrm{b}}{0.05})^{1/5}(\frac{1+z_{\mathrm{vir}}}{30})^{12/5}\hfill & \end{array}\right\}.$$ (42) On the other hand, the baryonic mass of a pregalactic cloud that has virial temperature $`T_{\mathrm{vir}}`$ is $$M_\mathrm{b}=1.8\times 10^4M_{\mathrm{}}(\frac{T_{\mathrm{vir}}}{1000\mathrm{K}})^{3/2}h_{50}^1(\frac{\mathrm{\Omega }_\mathrm{b}}{0.05})(\frac{1+z_{\mathrm{vir}}}{30})^{3/2}.$$ (43) Comparing equations (42) and (43), we can see that for a small pregalactic cloud ($`T_{\mathrm{vir}}<10^4`$K) the condition $`M_\mathrm{b}^{(\mathrm{inf})}<M_\mathrm{b}`$ is hardly satisfied in the redshift range of $`10z_{\mathrm{vir}}100`$. This indicates that FUV radiation from one or a few OB stars prohibits the whole small pregalactic cloud from H<sub>2</sub> cooling and quenches subsequent star formation in it. After the death of the first OB star, star formation could occur somewhere in the cloud, and another OB star could form successively. Thereafter, some massive stars might form one after another, but only a few could co-exist simultaneously, as we have shown above. 
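The chain of estimates above can be reproduced with a few lines of arithmetic. The following is a minimal sketch, not part of the original paper, using the fiducial values $`x_\mathrm{i}=10^{-4}`$, $`n=1\mathrm{cm}^3`$, $`T=10^3`$ K, $`L_{\mathrm{LW}}=10^{24}`$ ergs s<sup>-1</sup> Hz<sup>-1</sup>, $`t_{\mathrm{OB}}=3\times 10^6`$ yr and $`Q_{}=10^{49}`$ s<sup>-1</sup>, and reading the displayed rate exponents as negative (e.g. $`k_{\mathrm{rec}}=1.88\times 10^{-10}T^{-0.64}`$).

```python
# Region of influence of one OB star (eqs. 15-32) and the Stroemgren-sphere
# mass (eqs. 37-38), evaluated for the fiducial parameters of the text.
import math

x_i, n, T   = 1e-4, 1.0, 1.0e3          # ionization degree, cm^-3, K
Omega_b     = 0.05
L_LW        = 1.0e24                    # ergs s^-1 Hz^-1 (O5 star)
t_OB        = 3.0e6 * 3.156e7           # lifetime of an OB star, s
Q_star      = 1.0e49                    # ionizing photons s^-1 (O5 star)
G, m_p, k_B, M_sun = 6.67e-8, 1.67e-24, 1.38e-16, 1.99e33   # cgs

Lambda_H2 = 4.0e-25 * (T / 1.0e3) ** 4                       # eq. (17)
f_cool = math.sqrt(24.0 * G * m_p / math.pi) * k_B * T / (
    math.sqrt(Omega_b * n) * Lambda_H2)                      # eq. (15)

F_cool = 1.0e-18 * T * x_i * n / (1.13e8 * f_cool)           # eq. (18)
F_eq   = 1.0 / (1.13e8 * t_OB)                               # eq. (22)

r_cool_ns = math.sqrt(L_LW / (4.0 * math.pi * F_cool))       # eq. (25)
r_eq_ns   = math.sqrt(L_LW / (4.0 * math.pi * F_eq))         # eq. (30)
C         = 0.88e-26 * x_i * T * n ** 2 / L_LW               # eq. (13)
r_sh      = (1.0e14 / (4.0 * math.pi / 3.0 * C)) ** (1.0 / 3.0)  # eq. (27)

r_inf = min(min(r_cool_ns, r_sh), min(r_eq_ns, r_sh))        # eqs. (24), (29)
M_inf = 4.0 * math.pi / 3.0 * n * m_p * r_inf ** 3 / M_sun   # eq. (32)

k_rec = 1.88e-10 * T ** (-0.64)                              # Hutchins (1976)
M_St  = m_p * Q_star / (k_rec * n) / M_sun                   # eq. (37)

print(f"r_cool = {r_cool_ns:.1e} cm, r_eq = {r_eq_ns:.1e} cm, r_sh = {r_sh:.1e} cm")
print(f"r_inf = {r_inf:.1e} cm, M_b(inf) ~ {M_inf:.1e} Msun, M_b(Stroemgren) ~ {M_St:.1e} Msun")
```

With these inputs the sketch returns a mass of influence of order $`10^8M_{\mathrm{}}`$ and a Strömgren mass of order $`4\times 10^3M_{\mathrm{}}`$, the two values compared in the text.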
The timescale for reformation of OB stars depends on that for $`\mathrm{H}_2`$ replenishment after the death of the dissociating OB star, which is $`t_{\mathrm{rep}}`$ $`=`$ $`{\displaystyle \frac{f^{(\mathrm{cool})}}{k_\mathrm{H}^{}x_\mathrm{e}n}}`$ (44) $`=`$ $`3\times 10^8\mathrm{yr}({\displaystyle \frac{n}{1\mathrm{c}\mathrm{m}^3}})^{3/2}({\displaystyle \frac{T}{10^3\mathrm{K}}})^4({\displaystyle \frac{x_\mathrm{e}}{10^4}})^1({\displaystyle \frac{\mathrm{\Omega }_\mathrm{b}}{0.05}})^{1/2}.`$ (45) If the ionization degree is as high as unity, which is typical in the HII region, this timescale can be very short, namely, $`t_{\mathrm{rep}}3\times 10^4`$ yr. If the timescale for reformation of OB stars is smaller than the lifetime of OB star, it would be possible for a few OB stars to form every few million years, and, as a result, a considerable amount of stars would form in a Hubble time. However, we do not expect that the star formation continues as long as a Hubble time, because the gravitational binding energy of baryonic gas in such a small pregalactic cloud, $`E_{\mathrm{gr}}`$ $``$ $`{\displaystyle \frac{GMM_\mathrm{b}}{R_{\mathrm{vir}}}}`$ (46) $``$ $`3.3\times 10^{48}\mathrm{ergs}({\displaystyle \frac{M_\mathrm{b}}{10^4M_{\mathrm{}}}})^{5/3}({\displaystyle \frac{\mathrm{\Omega }_\mathrm{b}}{0.05}})^{2/3}h_{50}^{2/3}({\displaystyle \frac{1+z_{\mathrm{vir}}}{30}}),`$ (47) where $`M`$ is the total mass, and $`R_{\mathrm{vir}}`$ is the virial radius of the cloud, is so small that a few supernova explosions would blow out such a small pregalactic cloud (see, e.g., Dekel & Silk 1986). Thus, our probable scenario is as follows: in a small pregalactic cloud, star formation occurs only in a photodissociatively regulated fashion until several supernovae explode, and it stops thereafter. If numerous small stars were formed per a massive star, a substantial proportion of the original cloud would be converted to stars at last. However, in the primordial circumstance, the stellar initial mass function could be strongly biased toward the formation of massive stars because of the higher temperatures relative to the present-day counterpart. If this was the case and OB stars were formed selectively, the amount of gas mass that was converted to stars would be extremely small. ## 3 Summary We have studied the H<sub>2</sub> photodissociation region around an OB star in a primordial gas cloud. A region as large as the whole small pregalactic cloud is affected by only one or a few OB stars and becomes unable to cool in a free-fall time under the condition appropriate to virialization. Therefore, in those clouds which have virial temperatures less than $`10^4`$ K, star formation does not occur efficiently, unless the primordial initial mass function is extremely weighted toward low mass stars. If the reionization of the universe is caused by stellar UV radiation, some OB stars must form. However as we have shown, an OB star formed in a small pregalactic cloud would inevitably photodissociate the whole cloud and subsequent star formation would be strongly suppressed. Therefore, stellar UV radiation from small pregalactic clouds cannot play a significant role in the reionization of the universe. The authors would like to thank Toru Tsuribe for fruitful discussions, Humitaka Sato and Naoshi Sugiyama for continuous encouragement, Evan Scannapieco for checking the English, and the referee, Zoltán Haiman, for a careful reading of the manuscript and for useful comments. 
This work is supported in part by the Grant-in-Aid for Scientific Research on Priority Areas (No. 10147105) (R.N.), and the Grant-in-Aid for Scientific Research from the Ministry of Education, Science, Sports and Culture, No. 08740170 (R.N.).
no-problem/9904/quant-ph9904009.html
ar5iv
text
# quant-ph/9904009 New possibilities for supersymmetry breakdown in quantum mechanics and second order irreducible Darboux transformations Boris F. Samsonov Department of Quantum Field Theory, Tomsk State University, 36 Lenin Ave., 634050 Tomsk, Russia, email: samsonov@phys.tsu.ru Abstract. New types of irreducible second order Darboux transformations for the one dimensional Schrödinger equation are described. The main feature of such transformations is that the transformation functions have eigenvalues greater than the ground state energy of the initial (or reference) Hamiltonian. When such a transformation is presented as a chain of two first order transformations, the intermediate potential is singular and therefore the intermediate Hamiltonian cannot be Hermitian, while the final potential is regular and the final Hamiltonian is Hermitian. A second-derivative supersymmetric quantum mechanical model based on a transformation of this kind exhibits properties inherent to models with exact and broken supersymmetry at once. PACS: 03.65.Ge; 03.65.Fd; 03.65.Ca Keywords: Supersymmetric quantum mechanics, Darboux transformation, Schrödinger equation, exact solutions 1. Supersymmetric quantum mechanics (SUSY QM), which was introduced to illustrate problems of supersymmetry breakdown in quantum field theories, now finds numerous applications in different fields of theoretical and mathematical physics (for a review see ). It is well known that the supersymmetry may be either exact or broken. In the case of broken supersymmetry the entire spectrum of the superhamiltonian is degenerate, while in the case of the exact one its vacuum state is nondegenerate. We would like to stress that other possibilities exist as well that may have applications in quantum field theories . In conventional SUSY QM the supercharges are built of first order Darboux transformation operators . Higher order Darboux transformation operators are involved in a higher derivative supersymmetry . In the simplest case we have a second order derivative supersymmetry with the supercharges built of second order Darboux transformation operators. It is known that such a supersymmetry may be either reducible or not . The concept of complete reducibility is introduced as well . This concept is based on a theorem that establishes the equivalence between an $`N`$th order Darboux transformation and a chain of $`N`$ first order Darboux transformations . Every chain of $`N`$ first order Darboux transformations creates a chain of exactly soluble Hamiltonians $`h_0\to h_1\to \mathrm{}\to h_N`$. We suppose that $`h_0`$ and $`h_N`$ are Hermitian in a Hilbert space and admit unique self-adjoint extensions (i.e. they are essentially self-adjoint). To satisfy this condition the potentials $`V_i(x)`$, $`h_i=-\partial _x^2+V_i(x)`$, $`\partial _x^2=\partial _x\partial _x`$, $`\partial _x=d/dx`$, $`i=0,N`$ should be real-valued and free of singularities in their common domain of definition $`(a,b)`$, where $`a`$ or $`b`$ or both may be infinite . Any chain is called reducible if all intermediate potentials $`V_1(x)`$, $`\mathrm{}`$, $`V_{N-1}(x)`$ are real-valued functions defined in $`(a,b)`$. The $`N`$th order Darboux transformation which is equivalent to the whole chain is called reducible as well. When at least one of the intermediate potentials is a complex-valued function the chain and the corresponding $`N`$th order transformation are called irreducible.
A reducible chain is called completely reducible if all these potentials are free of singularities in $`(a,b)`$. The corresponding $`N`$th order transformation is called completely reducible as well. 2. It can be shown that every $`N`$th order Darboux transformation is equivalent to the resulting action of a chain of well-defined transformations of order less than or equal to two. There is a vast literature devoted to the analysis of first order transformations (see e.g. ). Second order transformations have not been explored in the same detail. The main purpose of this letter is to fill this gap and give an analysis of second order transformations. Let $`L`$ be a second order Darboux transformation operator, defined as an operator that intertwines two well-defined Hamiltonians $`h_0`$ and $`h_2`$, $`Lh_0=h_2L`$. Every operator of this type may be presented in a compact form as follows : $`L\psi =W^{-1}(u_1,u_2)W(u_1,u_2,\psi )`$ where $`W`$ stands for the usual symbol of a Wronskian and the functions $`u_1(x)`$ and $`u_2(x)`$, called transformation functions, are eigenfunctions of $`h_0`$, $`h_0u_{1,2}=\alpha _{1,2}u_{1,2}`$, which are not supposed to satisfy any boundary conditions. When supercharge operators $`Q`$ and $`Q^+`$ are built in terms of $`L`$ and $`L^+`$, where $`L^+`$ is the Laplace adjoint of $`L`$, $$Q=\left(\begin{array}{cc}0& L^+\\ 0& 0\end{array}\right),Q^+=\left(\begin{array}{cc}0& 0\\ L& 0\end{array}\right)$$ then these operators together with the superhamiltonian $`\mathcal{H}=\mathrm{diag}(h_0,h_2)`$ close a second order superalgebra: $`Q^2=(Q^+)^2=0`$, $`QQ^++Q^+Q=(\mathcal{H}-\alpha _1I)(\mathcal{H}-\alpha _2I)`$ where $`I`$ is the unit $`2\times 2`$ matrix. The new potential $`V_2(x)`$ is defined by the initial potential $`V_0(x)`$ and the transformation functions: $`V_2(x)=V_0(x)-2[\mathrm{log}W(u_1,u_2)]^{\prime \prime }`$ where the prime stands for the derivative with respect to $`x`$. The operator $`L`$ can always be presented as a superposition of first order operators $`L=L^{(1)}L^{(2)}`$ where $`L^{(1)}=-\partial _x+(\mathrm{log}u_1)^{}`$, $`L^{(2)}=-\partial _x+(\mathrm{log}v)^{}`$, $`v=L^{(1)}u_2`$. The intermediate potential of such a chain of transformations is defined by the function $`u_1`$: $`V_1(x)=V_0(x)-2(\mathrm{log}u_1)^{\prime \prime }`$. In conventional SUSY QM the transformation functions are supposed to be such that their eigenvalues are subject to the condition $`\alpha _{1,2}\le E_0`$ where $`E_0`$ is the ground state energy when $`h_0`$ has a discrete spectrum, and the lower bound of the continuous spectrum when the discrete spectrum is absent. They may also be chosen such that $`\alpha _1=E_0`$, $`\alpha _2=E_1`$ where $`E_1`$ is the energy of the first excited state. These choices correspond to the usual conception of supersymmetry breakdown in quantum mechanics, when the vacuum state of $`\mathcal{H}`$ is nondegenerate for the exact supersymmetry and is twofold degenerate for the broken supersymmetry. In the latter case the whole spectrum of $`\mathcal{H}`$ is twofold degenerate. 3. In what follows we shall denote by $`E_k`$ and $`\psi _k`$, $`k=0,1,\mathrm{}`$ the discrete spectrum eigenvalues and eigenfunctions of the Hamiltonian $`h_0`$ respectively. We will suppose for simplicity that the whole spectrum of $`h_0`$ is discrete. We shall prove below that $`\alpha _{1,2}`$ may be chosen such that $`E_{k+1}\ge \alpha _2>\alpha _1\ge E_k`$. In this case the transformation functions $`u_{1,2}`$ have nodes in $`(a,b)`$ and consequently the intermediate potential $`V_1(x)`$ has singularities in $`(a,b)`$.
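Before turning to the conditions under which $`V_2(x)`$ is regular, a minimal symbolic sketch of the second order formula above may be helpful. It is not taken from the paper: it uses the free-particle reference potential $`V_0=0`$ with a hypothetical choice of transformation functions (eigenvalues $`\alpha _1=-1`$, $`\alpha _2=-4`$), rather than the confining, discrete-spectrum situation analysed in the text, but it exercises exactly the Wronskian formula $`V_2=V_0-2[\mathrm{log}W(u_1,u_2)]^{\prime \prime }`$.

```python
import sympy as sp

x = sp.symbols('x', real=True)

# Hypothetical example: free-particle reference potential V0 = 0 with
# transformation functions solving -u'' = alpha*u for alpha1 = -1, alpha2 = -4.
V0 = sp.Integer(0)
u1 = sp.cosh(x)        # -u1'' = -1 * u1
u2 = sp.sinh(2 * x)    # -u2'' = -4 * u2

# Wronskian W(u1, u2) = u1 u2' - u2 u1'; nodeless for this choice.
W = sp.simplify(u1 * sp.diff(u2, x) - u2 * sp.diff(u1, x))

# Second order Darboux partner potential: V2 = V0 - 2 [log W]''
V2 = sp.simplify(V0 - 2 * sp.diff(sp.log(W), x, 2))

print("W(u1,u2) =", W)
print("V2(x)    =", V2)
```

Because $`W`$ stays strictly positive in this toy example, $`V_2(x)`$ comes out regular (a reflectionless two-soliton well); the confining case discussed next is precisely the situation where the intermediate step can be singular while the final potential is still regular.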
Nevertheless there exists such a choice of the transformation functions that the final potential $`V_2(x)`$ is free of singularities. We thus obtain the simplest irreducible chain of Darboux transformations. Such a transformation either deletes one or two energy levels ($`E_k`$ or $`E_{k+1}`$ or both) or creates one or two new energy levels disposed between $`E_k`$ and $`E_{k+1}`$ ($`\alpha _1`$ or $`\alpha _2`$ or both). The SUSY QM based on this transformation has the properties of theories with exact and broken supersymmetry at once. The ground state of $`\mathcal{H}`$ is twofold degenerate and in the middle of the spectrum of $`\mathcal{H}`$ there exist one or two nondegenerate energy levels. 4. Let us now establish conditions for the potential $`V_2(x)`$ to be free of singularities in the interval $`(a,b)`$. For this purpose it is sufficient to analyse the second order Wronskian $`W(u_1,u_2)`$ as a function of $`x`$ and find the conditions under which it is free of zeros in $`(a,b)`$. For the sake of definiteness we will consider the full real axis $`R=(-\infty ,+\infty )`$ as the interval $`(a,b)`$ and suppose the potential $`V_0(x)`$ to be confining, i.e. $`|V_0(x)|\to \infty `$ as $`|x|\to \infty `$. We also assume the potential $`V_0(x)`$ to be a sufficiently smooth function in $`R`$ (e.g. $`V_0(x)\in C_R^{\infty }`$) and bounded from below. In this case the operator $`h_0=-\partial _x^2+V_0(x)`$, initially defined on a set of infinitely differentiable functions with compact support which is dense in the Hilbert space of functions defined over $`R`$ and square integrable with respect to the Lebesgue measure, has a self-adjoint closure that we shall denote $`h_0`$ as well. Moreover, $`h_0`$ has only a discrete spectrum $`E=E_k`$ with the eigenfunctions $`\psi =\psi _k`$, $`k=0,1,\mathrm{}`$. It is not difficult to see that every eigenfunction of $`h_0`$, $`\psi _E`$, with $`E`$ such that $`E_{k+1}>E>E_k`$, $`k=0,1,\mathrm{}`$ may have on $`R`$ either $`k+1`$ nodes or $`k+2`$ ones. Moreover, if $`\psi _E`$ has $`k+2`$ nodes then $`|\psi _E(x)|\to \infty `$ as $`|x|\to \infty `$. If $`\psi _E`$ has $`k+1`$ nodes then we have two possibilities: a) $`\psi _E\to 0`$ as $`x\to \infty `$ (or equivalently as $`x\to -\infty `$) and b) $`|\psi _E|\to \infty `$ as $`|x|\to \infty `$. In the first case we refer to $`\psi _E`$ as a function with a zero asymptotic at the right infinity (or equivalently at the left infinity) and in the second case as one with a growing asymptotic at both infinities. If $`\psi _E`$ is a zero asymptotic function then it has just $`k+1`$ nodes in $`R`$. These assertions are direct implications of the well-known Sturm oscillator theorem (see e.g. ). Our analysis shows that if the transformation functions $`u_1`$ and $`u_2`$, $`h_0u_{1,2}=\alpha _{1,2}u_{1,2}`$, are chosen such that $`E_{k+1}\ge \alpha _2>\alpha _1>E_k`$, $`k=0,1,\mathrm{}`$ and $`u_1(x)`$ has $`k+2`$ nodes, $`u_2(x)`$ has $`k+1`$ nodes on $`R`$, then $`W(u_1,u_2)=W(x)`$ is free of zeros on $`R`$. Indeed, we note first of all that because of the conditions imposed on $`u_1`$ and $`u_2`$ they have simple and alternating zeros, i.e. between any two consecutive zeros of one of them there is exactly one zero of the other. The total number of nodes of the functions $`u_1`$ and $`u_2`$ is odd and equals $`2k+3`$. Let $`x_0`$, $`\mathrm{}`$, $`x_{2k+2}`$ be the zeros of the functions $`u_1(x)`$ and $`u_2(x)`$.
Since $`W^{}(x)=(\alpha _1-\alpha _2)u_1(x)u_2(x)`$, where use of the Schrödinger equation has been made, the points $`x_0`$, $`\mathrm{}`$, $`x_{2k+2}`$ are the points of local minima and maxima of $`W(x)`$. It then follows that this function is monotone in every interval $`[x_j,x_{j+1}]`$, $`j=0,1,\mathrm{}`$ and the points $`x_0`$ and $`x_{2k+2}`$ are both either maxima or minima. It is not difficult to see that the sign of the function $`W(x)=u_1(x)u_2^{}(x)-u_2(x)u_1^{}(x)`$ is the same for all $`x=x_0,\mathrm{}x_{2k+2}`$ and $`W(x)\ne 0`$ $`\forall x\in [x_0,x_{2k+2}]`$. It remains now to analyse the behaviour of the function $`W(x)`$ for $`x<x_0`$ and $`x>x_{2k+2}`$. The sign of the functions $`u_1(x)`$ and $`u_2(x)`$ is without importance. Therefore without loss of generality we can always choose $`u_1(x)`$ and $`u_2(x)`$ such that $`u_{1,2}(x)>0`$ for $`x>x_{2k+2}`$. In this case $`u_1(x_{2k+2})=0`$, $`u_2(x_{2k+2})>0`$, $`u_1^{}(x_{2k+2})>0`$. We conclude that $`W(x_{2k+2})<0`$ and $`W^{\prime \prime }(x_{2k+2})<0`$. This means that $`x_0`$ and $`x_{2k+2}`$ are the points of local maxima of $`W(x)`$. Taking into account the fact that $`W(x)`$ is monotone for $`x>x_{2k+2}`$ and $`x<x_0`$ (since $`W^{}(x)\ne 0`$ for all these $`x`$) and negative, we find that $`W(x)\ne 0`$ for all $`x\in R`$. 5. It is clear from the above considerations that if $`u_1(x)`$ has $`k+1`$ nodes and $`u_2(x)`$ has $`k+2`$ nodes then, under the same assumptions as above, the function $`W(x)`$ is positive and reaches local maxima at $`x=x_0`$ and $`x=x_{2k+2}`$. Further, the function $`u_1(x)`$ is not square integrable on $`R`$ because of the condition $`E_{k+1}\ge \alpha _2>\alpha _1>E_k`$. The function $`u_2(x)`$ is not square integrable for all $`\alpha _2\in [E_{k+1},E_k)`$ since it is assumed to have $`k+2`$ nodes. It follows then that $`|u_2(x)|\to \infty `$ as $`|x|\to \infty `$ and $`|u_1(x)|`$ tends to infinity at least at one of the infinities. This means that $`|W(x)|`$ tends to infinity at least at one of the infinities and $`W(x)`$ has at least one node. We could avoid such a behaviour of $`W(x)`$ if $`|W(x)|`$ decreased as $`|x|\to \infty `$ instead of increasing. For such a behaviour of $`|W(x)|`$ the function $`u_1(x)`$ should be square integrable over $`R`$. (Note that the case when $`u_2(x)`$ is square integrable was considered above.) I could not prove that for an arbitrary potential $`V_0(x)`$ the Wronskian $`W(x)`$ tends in this case to zero as $`|x|\to \infty `$. Nevertheless, I can indicate a wide class of potentials for which the condition $`W(x)\to 0`$ as $`|x|\to \infty `$ holds. In particular, all potentials satisfying the condition $$\int _{-\infty }^{+\infty }|xV_0(x)|dx<\infty $$ (scattering potentials) are of this type. As to confining potentials, it can be proven that when the potential $`V_0(x)`$ is such that $$\int _{-\infty }^{\infty }\left|\frac{V_0^{\prime }}{V_0^{5/4}}\right|^2dx<\infty ,\int _{-\infty }^{\infty }\frac{|V_0^{\prime \prime }|}{|V_0|^{3/2}}dx<\infty $$ then $`W(x)\to 0`$ as $`|x|\to \infty `$. The proof of this assertion is based on the asymptotic behaviour of the solutions of the Schrödinger equation for such potentials, which is known (see e.g. ), and it is omitted here. This implies other possibilities for the choice of the transformation functions. If they are chosen such that $`E_{k+1}\ge \alpha _2>\alpha _1=E_k`$, $`u_1(x)=\psi _k(x)`$, and $`u_2(x)`$ has $`k+1`$ nodes, then $`W(u_1,u_2)\ne 0`$ $`\forall x\in R`$. The proof can easily be obtained with the help of the asymptotic behaviour of $`u_1(x)`$ and $`u_2(x)`$. 6.
We have formulated conditions on the transformation functions that are sufficient to obtain a regular potential $`V_2(x)`$ for the transformed Schrödinger equation. It can be shown, with the aid of the known asymptotics of the solutions of the initial Schrödinger equation, that $`V_2(x)\to V_0(x)`$ as $`|x|\to \infty `$. Moreover, the knowledge of this asymptotic allows us to find all solutions of the transformed Schrödinger equation that belong to the Hilbert space $`L_2(R)`$. This analysis is based on the following assertions. The spectrum of the Hamiltonian $`h_N`$ related to $`h_0`$ by the $`N`$th order Darboux transformation coincides with the spectrum of $`h_0`$ with the possible exception of a finite number of discrete levels defined by the choice of the transformation functions. The level $`E=\alpha `$ is absent in the spectrum of $`h_N`$ if and only if $`u_\alpha \in \mathrm{Ker}L`$, $`h_0u_\alpha =\alpha u_\alpha `$ and $`u_\alpha \in L_2(R)`$. The level $`E=\alpha `$ is created in the spectrum of $`h_N`$ if and only if $`v_\alpha \in \mathrm{Ker}L^+`$, $`h_Nv_\alpha =\alpha v_\alpha `$ and $`v_\alpha \in L_2(R)`$. The space $`\mathrm{Ker}L^+`$ has the basis $`v_1(x),\mathrm{},v_N(x)`$, $`v_j=W^{(j)}(u_1,\mathrm{}u_N)/W(u_1,\mathrm{}u_N)`$, $`h_Nv_j=\alpha _jv_j`$ where $`W^{(j)}(u_1,\mathrm{}u_N)`$ is the Wronskian of order $`N-1`$ built of the functions $`u_1,\mathrm{},u_N`$ except for the function $`u_j`$. The direct implication of this assertion is that the functions $`\phi _k=L\psi _k`$ are square integrable over $`R`$ for all $`\psi _k\ne u_j`$. To find all square integrable solutions of the transformed Schrödinger equation it remains now to analyse the functions $`v_{1,2}=u_{2,1}/W(u_1,u_2)\in \mathrm{Ker}L^+`$. This analysis is possible because of the known asymptotics of the solutions of the initial Schrödinger equation. We have obtained the following results. If the transformation functions $`u_{1,2}`$ are such that $`E_{k+1}>\alpha _2>\alpha _1>E_k`$ and have growing asymptotics at both infinities, then $`v_{1,2}\in L_2(R)`$ and the set $`\left\{v_1,v_2,\phi _n=L\psi _n,n=0,1,\mathrm{}\right\}`$ is complete in $`L_2(R)`$. (Hamiltonian $`h_2`$ has two additional energy levels $`E=\alpha _1,\alpha _2`$ with respect to $`h_0`$.) If the function $`u_2`$ has a zero asymptotic then $`v_2\notin L_2(R)`$, $`h_2`$ has a single additional energy level $`E=\alpha _1`$, and the set $`\left\{v_1,\phi _n=L\psi _n,n=0,1,\mathrm{}\right\}`$ forms a basis in $`L_2(R)`$. If $`\alpha _2=E_{k+1}`$, $`u_2=\psi _{k+1}`$, and $`u_1`$ has a growing asymptotic at both infinities, then the level $`E=E_{k+1}`$ is absent in the spectrum of $`h_2`$ and the level $`E=\alpha _1`$ is created. The set $`\left\{v_1,\phi _n=L\psi _n,n=0,1,\mathrm{};n\ne k+1\right\}`$ is complete in $`L_2(R)`$. If $`\alpha _1=E_k`$, $`u_1=\psi _k`$, and $`u_2`$ has a growing asymptotic at both infinities, then the level $`E=E_k`$ is absent in the spectrum of $`h_2`$ and the level $`E=\alpha _2`$ is created. The basis in $`L_2(R)`$ is formed by the set $`\left\{v_2,\phi _n=L\psi _n,n=0,1,\mathrm{};n\ne k\right\}`$. When $`u_2`$ has a zero asymptotic then we have only a deletion of the level $`E=E_k`$ and the basis is formed by the set $`\left\{\phi _n=L\psi _n,n=0,1,\mathrm{};n\ne k\right\}`$. Finally, if $`\alpha _2=E_{k+1}`$, $`\alpha _1=E_k`$, and $`u_1=\psi _k`$, $`u_2=\psi _{k+1}`$, then both levels $`E=E_k`$ and $`E=E_{k+1}`$ are absent in the spectrum of $`h_2`$ and the position of the other levels is unchanged.
The set $`\left\{\phi _n=L\psi _n,n=0,1,\mathrm{};n\ne k,k+1\right\}`$ is complete in $`L_2(R)`$. This possibility has been earlier indicated by Krein . 7. As a final remark we note that the possibility to use transformation functions with eigenvalues higher than the ground state energy of $`h_0`$ has recently been noted in without any analysis. The financial support from the RFBR and the Ministry of Education of Russia is gratefully acknowledged.
no-problem/9904/astro-ph9904031.html
ar5iv
text
# 1 Introduction ## 1 Introduction The problem of bulge formation is in rapid evolution: not only have many scenarios, dynamical processes and formation theories been proposed and studied, but there has also been great progress in the observation of bulges: ages, metallicities, structures, etc. One can cite in particular: * Detailed age studies of individual globular clusters, with colour-magnitude diagrams of individual stars, with the high spatial resolution of HST (in the Milky Way bulge, and in Local Group galaxies) * More extinction-free studies of bulge structure through near-infrared imaging, made recently possible at large scale (wide extragalactic surveys, COBE for the Milky Way) * Morphological studies of galaxies at high redshift: this has been extensively developed in this meeting, and is the privileged tool to tackle galaxy evolution in situ. There are very good recent reviews on the subject, namely in the proceedings of the STScI workshop held in 1998 (“When and how bulges form ?”, ed. Carollo, Ferguson & Wyse), Renzini (1999), Silk & Bouwens (1999), Carlberg (1999) or the Annual Review on “Galactic Bulges” by Wyse, Gilmore & Franx (1997). The main scenarios proposed for the formation of bulges are: * Monolithic formation (Eggen, Lynden-Bell & Sandage 1962), or very early dissipative collapse at the beginning of galaxy formation. This assumes that the gas experiences violent 3D star formation, so quickly that it had no time to settle into a disk. This scenario was first proposed to explain the almost spherical old metal-poor stellar halo and the subsequent formation of disks with different thicknesses, which were thought to be an age sequence following the progressive gas settling. This is now proposed for the central bulge, the disk being acquired later. * Secular dynamical evolution: the time-scales of these processes are longer than the dynamical time (i.e. secular), but smaller than the Hubble time. They are due to the dynamical interaction between the various components, disk, bulge, halo. Gravitational instabilities, such as bars and spirals, are able to transfer angular momentum efficiently, and produce radial mass flows towards the center. Due to vertical resonances, stars in the center are elevated above the plane, and contribute to bulge formation (Combes et al. 1990). * Galaxy interactions, through mergers and mass accretion. It is well known that major mergers between spirals can result in the formation of an elliptical galaxy (Toomre & Toomre 1972). Similarly, the accretion by a spiral galaxy of a dwarf companion could contribute to the formation of a spheroid at the center, assuming that this minor merger has not destroyed the disk. In fact, all three of these main scenarios certainly occur; the question is to estimate the relative role of each of them, which is related to the time-scale of bulge formation. All these processes may be included in the general framework of hierarchical galaxy formation. Disks are supposed to form through gas cooling in a dark halo. Cooling runs progressively from the center to the outer parts (case of continuous gas infall), and produces an inside-out formation of disks. Either this cooling is first violent at the center, due to the short dynamical time-scales and the absence of a stabilising heavy stellar object, and a spheroid can form first; or secular dynamical evolution could afterwards transfer angular momentum, and bring mass to the center. In all scenarios, the speed of evolution and the rate of star formation depend strongly on environment.
Galaxy interactions are both responsible of the merger scenario, and also trigger or boost bar formation and secular evolution. Bulges at the center of galaxies accumulate all stars formed, and in any case they are expected to be older than the outer disk, and mainly with much more scattered properties (ages, abundances). We will first briefly review the constraints and clues brought by observations related to bulge formation, and then examine each scenario respectively. ## 2 Clues from observations Observations of our own bulge is impeded by dust extinction, confusion (crowding), contamination by foreground stars, that make data on the Milky Way very uncertain. There exist in the literature a certain number of prejudices, like the idea that “bulges are old and metal-rich, and small versions of ellipticals”, that are not today completely confirmed: the reality is not so simple, as clearly reviewed by Wyse et al (1997). ### 2.1 Metallicity In the Milky Way, the mean metallicity of the bulge is the same as that of the disk in the solar neighborhood, but in contrast to the disk, the bulge has a very wide scatter. This characteristic allows the metallicity distribution of the bulge to be explained by a closed box chemical model, contrary to the disk: the latter has a very narrow distribution of metallicities, that gives rise to the well-known G-dwarf problem (its solution requires other processes, like gas infall, etc..). In the Andromeda bulge also, super metal-rich globular clusters are the exception (Jablonka et al 1998). In the Milky Way bulge, there is no correlation between age, color, abundance and kinematics, which could have given clues about the origins (e.g. Rich 1997). There does not seem to have been a starburst, since there is no excess of $`\alpha `$ elements (Mc William & Rich 1994). ### 2.2 Age As for ages, however, we should not make the confusion: the bulge has not necessarily the age of its stars. In particular the bulge could have formed recently from old stars. It is true namely if the bulge has formed from bars. Thanks to the clear overall structure of the MW given by COBE-DIRBE, we know directly now that a bar is present, while it has long be assumed from gas kinematics only (e.g. Peters 1975). The peanut/boxy bulge of the MW has asymmetries due to a bar seen in perspective (e.g. Blitz & Spergel 1991, Zhao et al 1996). It is even difficult to distinguish what is bar and what is bulge (Kuijken 1996). Since strong bars are only a transient phase (see below) it is easy to extrapolate how the stars presently in the bar will form the bulge. If the bulges are thought to be older than disks, it might be due to the implicit average over the whole disk. In fact, when bulges and inner disks are compared, they have the same integrated colors (Peletier & Balcells 1996, de Jong 1996, see fig 1). But there exist clear radial gradients of colors and metallicities. Spiral galaxies become bluer with increasing radius. Moreover, colors correlate well with surface brightness. These colors and their gradients can best be explained with history of star formation, and dust reddening is not dominant, as shown by de Jong (1996): outer parts are younger than central regions. These observations support an inside-out galaxy formation. Alternatively, dynamical secular evolution can also account for these observations, through angular momentum transfer (cf Tsujimoto et al 1995). ### 2.3 Are bulges similar or not to giant ellipticals ? 
Bulges are spheroids that follow the same fundamental plane as elliptical galaxies. They also follow the same luminosity-metallicity relation (Jablonka et al 1996). But they are oblate rotators (flattened by rotation), while giant ellipticals are not rotating. However, low-luminosity ellipticals also rotate (Davies et al 1983). There might be a continuity between bulges, lenticulars and ellipticals. Most of the latter have been observed with compact disks (e.g. Rix & White 1992, Rix et al. 1999). There is also continuity in the light profiles. They can be fitted as an exponential of $`r^{1/n}`$, where $`n`$ is a function of luminosity ($`n=4`$ for ellipticals, decreasing towards 1 for late-type bulges, Andredakis et al 1995). ### 2.4 High-redshift galaxies From the CFRS and LDSS, surveying galaxies at redshifts below 1, there is very little evolution of the luminosity function of red galaxies (Lilly et al. 1995, 1998). Their disk scale-length stays constant, up to $`z=1`$ (which could mean either no evolution, or stationary merging). On the contrary, there is substantial evolution in the blue objects. Does this mean that a large fraction of bulges/spheroids are already formed? (or that the mean age of stars $`\sim `$ age of the Universe). From HST images of the CFRS galaxies, Schade et al (1995, 96) conclude that red galaxies have large bulge-to-disk (B/D) ratios, and that most blue galaxies are interacting (also forming bulges, nucleated galaxies). Steidel et al (1996), from their sample of z $`\sim `$ 3 galaxies, find on average objects smaller than today (see also Bouwens et al 1998). If violent starbursts are forming at high redshift, like the ultra-luminous IR galaxies observed nearby, they should dominate the sub-mm surveys. Already a large fraction of the cosmic IR background has been resolved into sources (Hughes et al. 1998). If these objects are forming the spheroids (bulges and ellipticals), then their epoch of formation is relatively recent, $`z<2`$ (Lilly et al. 1999). From the high spatial resolution of HST, it now becomes possible to study the morphology of high-redshift galaxies, and address the evolution of the Hubble sequence. At least, it is possible to determine the concentration (C) and asymmetry (A) to classify galaxies (Abraham et al 1996, 99). If there is a consensus among the various studies, it is on the considerable increase of perturbed and interacting galaxies ($`\sim `$ 40% of objects interacting). But on the concentration, or bulge-to-disk ratios, there is no unanimity. Abraham et al (1998) do not find evolution in the B/D ratio, and tend to favour monolithic bulge formation. Marleau & Simard (1998) find on the contrary many more disk-like faint galaxies in the Hubble Deep Fields, i.e. that there are fewer objects with high bulge-to-disk ratios at high redshift (with respect to $`z=0`$). ## 3 Monolithic dissipative formation This kind of violent and intense star formation is different from what is seen today in galactic disks: it occurs in three dimensions (Elmegreen 1999). In the deep potential well of a giant galaxy center, almost entirely due to the dark halo, there cannot be SF self-regulation by blow-out (supernovae, winds, pressure) as in disks or dwarfs today (Meurer et al 1997). Most of the time, in the ultra-luminous galaxies of the nearby universe, the starburst occurs in nuclear rings or disks (Downes & Solomon 1998). This could be different in the early universe, since no heavy disks have yet formed.
The threshold for SF in 3D can be estimated from the virial theorem (Spitzer 1942) as a critical central volume gas density $`\rho _c=\rho _{vir}/(1+\beta /2)`$ for gaseous potential $`\beta GM_{gas}/R`$. The star formation rate SFR is then $`SFR\propto \rho _{vir}/t_{dyn}\propto \rho _{vir}^{3/2}`$ (an equivalent of the Schmidt law). In localised dense regions, the SFR could be very large, and an extremely clumpy distribution of stars is expected. Simulations illustrating this process have been done by Noguchi (1998). ## 4 Secular evolution A wide series of N-body simulations, supported by observations, has established the various phases of this secular dynamical evolution (e.g. Sellwood & Wilkinson 1993, Pfenniger 1993, Martinet 1995, Buta & Combes 1996). They can be described as: * Gravitational instabilities spontaneously form bars and spirals, which transfer angular momentum through the disk. Both stars and gas lose momentum to the wave. Radial gas flows in particular produce a large central mass concentration. * Due to vertical resonances, the bar thickens in the center (box or peanut shape, Combes et al 1990), and contributes to bulge formation (see fig 2). This scenario is compatible with the observed correlation between scale-lengths of bulges and disks (Courteau et al 1996). * Sufficient mass accumulation in the center (1-5% of the total disk mass) perturbs the rotation curve, changes the orbit precession rates, and destroys the bar by creating perpendicular $`x_2`$ orbits. The galaxy has now become a hot stable system, with a central spheroid. * Through gas infall, the disk can become unstable again and reform a bar (with a different pattern speed) ## 5 Accretion and mergers Ellipticals can form by the merger of two spirals (e.g. the prototype NGC 7252); but the most general case is formation by the merger of several smaller objects. That ellipticals have accreted several smaller companions during their life is supported by the frequency of shells (as much as 50% according to Schweizer & Seitzer 1988). Bulges could similarly be the result of minor mergers; even today, there exist numerous companions around giant spirals. In hierarchical cosmological models, it is easy to compute analytically the history of dark halo formation and the halo merging rate (Kauffmann et al 1993, Baugh et al 1996). But the fate of the dissipative visible matter is still not well known. Using star formation and feedback parameters that fit the colour-magnitude relation of ellipticals in clusters, Kauffmann & Charlot (1998) have shown that massive galaxies must have formed continuously, and must have assembled only recently, in order to fit the observed redshift distributions of K-band luminosity at $`z=1`$. If the bulge comes from accretion, observations of our own MW put constraints on the metallicity of the accreted objects (it should be relatively high). This puts strong limits on the fraction of the bulge that has been accreted recently (Unavane et al 1996). ## 6 Combination of all scenarios In a hierarchical scenario, there are many uncertainties about the physical processes controlling the baryonic matter. Feedback must prevent efficient conversion of gas into stars, to reproduce the observations.
There remains much freedom to determine it more quantitatively, as well as the distribution of angular momentum, etc. The simplest recipe is to assume that the angular momentum of the matter is not redistributed, and that the gas cools at the radius where the cooling time becomes shorter than the dynamical time (Mo et al 1998; van den Bosch 1999). This represents gas infall as cooling flows in dark haloes, and the formation of disks as an inside-out process. There are however many possible hypotheses: for instance, that the mass of the disk is a fixed fraction of the halo mass, or that the angular momentum of the disk is a fixed fraction of the halo angular momentum; also that the disk is exponential in shape, and is stable (which gives a condition on specific momentum). The efficiency of galaxy formation ($`ϵ_{gf}`$) is also a free parameter, as a function of redshift; it could vary proportionally to the Hubble constant, to ensure the Tully-Fisher relation (van den Bosch 1999). The specific angular momentum of the galaxies is supposed to be acquired by tides (Fall & Efstathiou 1980). These scenarios, coupled with sufficient feedback (due to star formation), can explain high surface brightness (HSB) galaxies and even LSBs, if a range of angular momentum is assumed (Dalcanton et al 1997). The following constraints can be satisfied: 1) the Tully-Fisher scaling relation; 2) the density-morphology relation (Dressler 1980); 3) the surface-density/size relation $`\mu `$–$`R_d`$ (generalisation of Kormendy 1977). A general feature found by van den Bosch (1999), whatever the cooling/feedback hypotheses, is that the present observed disks lie very close to their stability limit (with respect to strong gravitational instabilities), which suggests a self-regulated formation, through dynamical processes: the disk alone is unstable until a sufficient bulge stabilises it. The gas infall could then alternatively favor bulge or disk formation according to the bulge-to-disk mass ratio it encounters at a given time. This could explain the observed coupling between bulges and disks. Bouwens et al. (1999) have recently confronted several bulge formation scenarios with the observations. At $`z=0`$, it appears almost impossible to distinguish between early and late bulge formation (cf fig 3). At high redshift, it becomes possible, essentially because a large fraction of the star formation occurs at the observed $`z`$. For instance, in fig 3 at top-left, the model of simultaneous formation (of disk and bulge) departs from the two others, since star formation makes the bulge momentarily brighter. Could the star formation history (SFH) constrain the period of formation of bulges and ellipticals? Since spheroids contain 30% (Schechter & Dressler 1987), or 66% (Fukugita et al 1998), of the stars in the Universe, they cannot form too early, according to the Madau et al. (1996) SFH curve. However, this constraint is only a lower limit on the formation time, since the spheroids could be recently formed from old stars. ## 7 Conclusions Bulges are very heterogeneous structures, with large scatter in colors, metallicities and ages. At large luminosities, bulges become similar to elliptical galaxies; at low luminosities, they more closely resemble spiral disks (or bars). There exist objects with a whole range of properties ensuring continuity between these two extremes, suggesting evolution or the combination of several processes.
Bulge formation is certainly a combination of three main scenarios: a fraction of bulges could have formed early (at first collapse); then secular dynamical evolution enriches them; in parallel, according to environment, accretion and minor mergers contribute to raising their mass. Big spheroids (S0s, Es) can only form through major/minor mergers. Present disks have been (re)formed recently. To be more quantitative, and to pin down the time-scale of bulge formation, a privileged approach is to observe and classify the morphology of galaxies as a function of redshift, together with a detailed accounting of the comoving volume density of galaxies of each type. This is necessary to avoid confusion between a no-evolution model at a given type and a stationary evolution flow along the Hubble sequence.
no-problem/9904/quant-ph9904072.html
ar5iv
text
# Basic Quantum Theory and Measurement from the Viewpoint of Local Quantum Physics This is a condensed version of material prepared for and submitted to proceedings of the symposium entitled “New Insights in Quantum Mechanics-Fundamentals, Experimental Results and Theoretical Directions” Goslar, Germany, September 1-3, 1998 ## 1 LQP Principles and some Consequences If one thinks about the fundamental physical principles of this century which have stood their grounds in the transition from classical into quantum physics, relativistic causality as well as the closely related locality of quantum operators (together with the localization of quantum states) will certainly be the most prominent one. This principle entered physics through Einstein’s 1905 special relativity, which in turn resulted from bringing the Galilei relativity principle of classical mechanics into tune with Maxwell’s theory of electromagnetism. Therefore it incorporated Faraday’s “action at a neighborhood” principle which revolutionized 19<sup>th</sup> century physics. The two different aspects of Einstein’s special relativity, namely Poincaré covariance and the locally causal propagation of waves (in Minkowski space) were kept together in the classical setting. In the adaptation of relativity to LQP (local quantum physics<sup>1</sup><sup>1</sup>1We use this terminology, whenever we want to emphasize that we relate the principles of QFT not with necessarily with the standard text-book formalism that is based on quantization through Lagrangian formalism.) on the other hand , it is appropriate to keep them at least initially apart in the form of positive energy representations of the Poincaré group (leading to Wigner’s concept of particles) and Einstein causality of local observables (leading to observable local fields and local generalized “charges”). Here a synthesis is also possible, but it happens on a deeper level than in the classical setting and results in LQP as a new physical realm which is conceptually very different from both classical field theory and general QT (quantum theory). The elaboration of some of these differences, in particular as they may be relevant with respect to the measurement process, constitutes one of the aims of these notes. For material which already entered textbooks or review articles, we have preferred to quote the latter. A more detailed account of the consequences of causality in a much broader context can be found in . As a result of this added locality, LQP acquires a different framework than the kind of general quantum theory setting in which the basics of quantum theory and measurement (including those ideas, which in the fashionable language of the day, are referred to as “quantum computation”) are presented . Those concepts, which originate from the quantum adaptation of Einstein causality, lead in the presence of interactions to real particle creation (which artificially could be incorporated into a multichannel version quantum theory of particles) and, what has more importance within our presentation, to virtual particle structure (related to the phenomenon of vacuum polarization) which has no counterpart in global general quantum theory as quantum mechanics and cannot be incorporated into it at all. The latter remark preempts already the greater significance of superselected charges and their fusion, as opposed to particles and their quantum mechanical bound states. 
Thus the hierarchy of particles in QM is replaced by the hierarchy of charges and consequently we obtain "nuclear democracy" between particles. This is closely related to an almost anthropological principle which LQP realizes in a perfect way in laboratory particle physics: whenever energy-momentum and (generalized) charge conservation allow for particle creation channels to be opened, nature will maximally use this possibility. To be sure there are theoretical models of LQP (integrable/factorizing models in d=1+1 spacetime dimensions) which do not follow this dictum, but even in those cases at least its theoretical "virtual" version is realized: a vector state created by the application of an interacting field to the vacuum which has a one-particle component, is inexorably accompanied by a "polarization cloud" of particles/antiparticles (the hallmark of LQP). As already emphasized, the only exceptions are free bosonic/fermionic fields and, in a somewhat pointed (against history) but nevertheless correct manner, one may say that this very exception is the reason why QM as a nonrelativistic limit of LQP has a physical reality at all. More general braid group statistics, as it can occur together with exotic spin in low dimensional QFT, requires these polarization clouds already in the "freest" realization of anyons/plektons, and they do not fade away in the nonrelativistic limit because they are needed to uphold braid group statistics in that limit. This is the reason why the attempts of Leinaas-Myrheim, Wilczek and many others, which draw on the analogy with Aharonov-Bohm quantum mechanics, may catch some aspects of plektons but miss the spin-statistics connection which is their most important property (i.e. their LQP characterization). This aspect of virtuality, which at first sight seems to complicate life since it activates the coupling between infinitely many degrees of freedom/channels, is counterbalanced by some very desirable and useful features: whereas general quantum theory needs an outside interpretative support, LQP carries this already within itself. It was emphasized already at the end of the 1950s (notably by Rudolf Haag ), that e.g. for a particle interpretation one does not need to resolve the distinction between the various local observables which are localized in the same space-time region (laboratory extension and time duration of measurement); the knowledge of the space-time affiliation of a generic observable from a region $`𝒪`$ is enough. The experimenter does not know more than the geometric spacetime placement of his counters and their sensitivity; the latter he usually has to determine by monitoring experiments. The basic nature of locality in interpreting the particle aspect of a theory is underlined by the fact that despite intense efforts nobody has succeeded in constructing a viable nonlocal theory. Here "viable" is meant in the sense of conceptual completeness, namely that a theory is required to contain its own physical interpretation i.e. that one does not have to invent or impose formulas from outside this theory. Although physical reality may unfold itself like an onion or an infinite Russian "matryoshka" with infinitely many layers of ever more general physical principles towards higher energies (smaller distances), it should still continue to be possible to have a mathematically consistent theory in each layer which is faithful to the principles valid in that layer.
This has been fully achieved for quantum mechanics, but this goal has not yet been reached in QFT. As a result of the lack of nontrivial d=1+3 models or structural arguments which could demonstrate that the physical locality and spectral requirements allow for nontrivial solutions, the theory is still far from conceptual maturity, despite its impressive perturbative successes in QED, the Standard Model and in the area of Statistical Mechanics/Condensed Matter physics. Causality and locality are in a profound way related to the foundations of quantum theory in the spirit of von Neumann, which brings me a little closer to the topic of this symposium. In von Neumann's formulation, observables are represented by selfadjoint operators and measurements are compatible if the operators commute. The totality of all measurements which are relatively compatible with a given set (i.e. noncommutativity within each set is allowed) generates a subalgebra: the commutant $`L^{}`$ of the given set of operators $`L`$. In particular in LQP, a conceptual framework which was not yet available to von Neumann, one is dealing with an isotonic "net" of subalgebras (in most physically interesting cases von Neumann factors, i.e. weakly closed operator algebras with a trivial center) $`𝒪\to 𝒜(𝒪).`$ Therefore, unlike quantum mechanics, the spatial localization and the time duration of observables become an integral part of the formalism. Causality gives a priori information about the size of spacetime-$`𝒪`$-affiliated operator (von Neumann) algebras: $$𝒜(𝒪)^{}\supseteq 𝒜(𝒪^{})$$ (1) in words: the commutant $`𝒜(𝒪)^{}`$ of the totality of local observables $`𝒜(𝒪)`$ localized in the spacetime region $`𝒪`$ contains the observables localized in its spacelike complement (disjoint) $`𝒪^{}.`$ In fact in most of the cases the equality sign will hold, in which case one calls this strengthened (maximal) form of causality "Haag duality" : $$𝒜(𝒪)^{}=𝒜(𝒪^{})$$ (2) In words, the spacelike localized measurements are not only commensurable with the given observables in $`𝒪`$, but every measurement which is commensurable with all observables in $`𝒪`$ is necessarily localized in the causal complement $`𝒪^{}.`$ Here we extended for algebraic convenience von Neumann's notion of observables to the whole complex von Neumann algebra generated by hermitian operators localized in $`𝒪.`$ If one starts the theory from a net indexed by compact regions $`𝒪`$ such as double cones, then algebras associated with unbounded regions $`𝒪^{}`$ are defined as the von Neumann algebra generated by all $`𝒜(𝒪_1)`$ as $`𝒪_1`$ ranges over all net indices $`𝒪_1\subset 𝒪^{}.`$ Whereas the Einstein causality (1) allows a traditional formulation in terms of pointlike fields $`A(x)`$ as $$[A(x),A(y)]=0,\left(x-y\right)^2<0,$$ (3) Haag duality can only be formulated in the algebraic net setting of LQP, since it is not a property which can be expressed in terms of individual operators. This aspect is shared by many other important properties and results . One can prove that Haag duality always holds after a suitable extension of the net to the so-called dual net $`𝒜(𝒪)^d.`$ The latter may be defined independently of locality in terms of relative commutation properties as $$𝒜(𝒪)^d:=\underset{𝒪_1,𝒪_1^{}\supset 𝒪}{\bigcap }𝒜(𝒪_1)^{}$$ (4) The relative commutance with respect to the observables is called (algebraic) "localizability". These considerations show that causality, locality and localization in LQP have a natural and deep relation to the notion of compatibility of measurements.
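As a toy illustration of von Neumann's commutant notion invoked above, the following sketch (a hypothetical finite-dimensional model, not from the text) numerically computes the commutant of the subalgebra $`M_2\otimes \mathbf{1}`$ inside the $`4\times 4`$ matrices. It only illustrates the bookkeeping "the measurements compatible with a given set form an algebra"; genuine local algebras $`𝒜(𝒪)`$ do not admit such a tensor factorization (as discussed below), which is one of the central points of this section.

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])   # Pauli x
sz = np.array([[1., 0.], [0., -1.]])  # Pauli z
I2 = np.eye(2)

def commutant_dim(generators, dim, tol=1e-10):
    """Dimension of {Z : [g, Z] = 0 for all generators g} inside dim x dim matrices.
    Uses vec([g, Z]) = (g (x) I - I (x) g^T) vec(Z) and counts the null space."""
    blocks = [np.kron(g, np.eye(dim)) - np.kron(np.eye(dim), g.T) for g in generators]
    s = np.linalg.svd(np.vstack(blocks), compute_uv=False)
    return int(np.sum(s < tol))

# "Observables localized in O": the subfactor M_2 (x) 1 of the 4x4 matrices.
A_O = [np.kron(sx, I2), np.kron(sz, I2)]
print("dim A(O)' =", commutant_dim(A_O, 4))   # -> 4, i.e. A(O)' = 1 (x) M_2

# The commutant of the full matrix algebra is trivial (multiples of the identity).
B_H = A_O + [np.kron(I2, sx), np.kron(I2, sz)]
print("dim B(H)' =", commutant_dim(B_H, 4))   # -> 1
```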
In addition there are subtle modifications with respect to the basic quantum structure, with possible changes in environmental and other aspects of quantum measurement. The fundamental reason for all such modifications in the interpretation of LQP versus QM is the different structure of local algebras: the vacuum is not a pure state with respect to any algebra which is equal to or contained in an $`𝒜(𝒪)`$ with $`𝒪^{}`$ nonempty, and the sharply localized algebras $`𝒜(𝒪)`$ themselves do not admit pure states at all<sup>2</sup><sup>2</sup>2In order to find local algebras which are anywhere near quantum mechanical algebras and admit pure states and tensor products with entanglement similar to the inside/outside quantization box situation in Schrödinger theory, one has to allow for a "fuzzy" transition "collar" between a double cone and its causal disjoint outside; in more precise terms one has to consider a so-called split inclusion .! They possess an algebraic structure which has not been taken into account in the present-day presentation of quantum basics including quantum computation. Since these fine points can only be appreciated with some more preparation, I will postpone their presentation. If the vacuum net (i.e. the vacuum representation of the observable net) is Haag dual, then all associated "charged" nets share this property, unless the charges are nonabelian (in which case the deviation from Haag duality is measured by the Jones index of the above inclusion, or in physical terms the statistics- or quantum-dimension ). If on the other hand even the vacuum representation of the observable net violates Haag duality, then this indicates spontaneous symmetry breaking, i.e. not all internal symmetry algebraic automorphisms are spatially implementable. As already mentioned, in that case one can always maximize the algebra without destroying causality and without changing the Hilbert space, such that Haag duality is restored. This turns out to be related to the descent to the unbroken part of the symmetry which allows (since it is a subgroup) more invariants i.e. more observables. Since QM and what is usually referred to as the basics of quantum theory do not know these concepts at all, I am presenting in some sense a contrasting program to the (global) QT orientation of this symposium. But often one only penetrates the foundations of a framework more profoundly if one looks at a contrasting structure, even if the difference is (presently) not measurable. For an analogy we may refer to the Hawking effect which has attracted ever increasing attention as a matter of principle, even though there is hardly any experimental chance. In connection with the main theme of this symposium, it is interesting to ask if LQP could add something to our understanding of classical versus quantum reality (the EPR, Bell issue) or the measurement process i.e. production of "Schrödinger cat states" and observation of their subsequent decoherence. For the first issue I refer to . Apart from some speculative remarks , there exists no investigation of the measurement process which takes into consideration the characteristic properties of the local algebras in LQP. I tend to believe that, whereas most of the present ideas on coherent states of Schrödinger cats and their transition to von Neumann mixtures will remain or at least not suffer measurable quantitative modifications, LQP could be expected to lead to significant conceptual changes.
Certainly it will add a universal aspect to the issue of decoherence through environments. Contrary to QM where the environment is introduced by extending the system, localized systems in LQP are always open subsystems for which the “causal disjoint” defines a kind of universal environment which is build into its formalism. Another structurally significant deviation which was already alluded to results from the fact that the vacuum becomes a thermal state with respect to the local algebras $`𝒜(𝒪).`$ There are two different mechanisms to generate thermal states: the standard coupling with a heat bath and the thermal aspect through restriction or localization and the creation of horizons . The latter is in one class with the Hawking-Unruh mechanism; the difference being that in the localization situation the horizon is not classical i.e. is not defined in terms of a differential geometric Killing generator of a symmetry transformation of the metric. The fact that algebras of the type $`𝒜(𝒪)`$ have no pure states is related to the different behavior of the pair inside/outside with respect to factorization: whereas in QM the boxed system factorizes with the system outside the box, the total algebra $`B(H)`$ in LQP is generated by $`A(O)`$ and its commutant $`B(H)=A(O)A(O)^{},`$ but it is not the tensor product of the two factor algebras $`𝒜(𝒪)`$ and $`𝒜(𝒪)^{}=𝒜(𝒪^{}).`$ In order to get back to a tensor product situation and be able to apply the concepts of entanglement and entropy, one has to do a sophisticated split which is only possible if one allows for a “collar” (see later) between $`𝒪`$ and $`𝒪^{}`$ . Since the thermal aspects of localization are analogous to black holes<sup>3</sup><sup>3</sup>3The analogy is especially tight for the wedge localization since the boundary of wedges define bifurcated classical “Killing horizons” (Unruh), whereas the boundary of e.g. a double cone in a massive theory defines a “quantum horizon”. This concept has a cood meaning with respect to the nongeometrically acting modular group associated with the latter situation, and it has no classical analogon (it is in fact a “hidden symmetry”)., there is no chance to directly measure such tiny effects. However in conceptual problems, e.g. the question if and how not only classical relativistic field theory, but also QFT excludes superluminal velocities, these subtle differences play a crucial role. Because of an unusual property of the vacuum in QFT (the later mentioned Reeh-Schlieder property), the exclusion of superluminal velocities requires more conceptual and mathematical understanding than in the classical case. Imposing the usual algebraic structure of QM (i.e. assuming tacitly that the local observables allow pure states) onto the local photon observables will lead to nonsensical results. Most sensational theoretical observations on causality violations which entered the press and in one case even Phys. Rev. Letters, suffer from incorrect tacit assumptions (if they are not already caused by a misunderstanding of the classical theory). We urge the reader to look at the fascinating reference and the conceptually wrong preceding article. Historically the first conceptually clear definition of localization of relativistic wave function was given by Newton and Wigner who adapted Born’s x-space probability interpretation to the Wigner relativistic particle theory. 
Apparently the result that there is no exact satisfactory relativistic localization (but only one sufficient for all practical purposes) disappointed Wigner so much, that he became distrustful of the usefulness of QFT in particle physics altogether (private communication by R. Haag). Whereas we know that this distrust was unjustified, we should at the same time acknowledge his stubborn insistence on the importance of the locality concept which he thought of as an indispensable requirement in addition to the positive energy property and irreducibility of the Wigner representations. Without explanation we state that modular localization of state vectors is different from the Born probability interpretation. Rather, subspaces of modular localized wave functions preempt the existence of causally localized observables already on the level of the Hilbert space of relativistic wave functions and have no counterpart at all in N-particle quantum mechanics. As will be explained later, modular localization may serve as a starting point for the construction of interacting nonperturbative LQP's <sup>4</sup><sup>4</sup>4In fact the good modular localization properties are guaranteed in finite-component positive energy representations, with the Wigner infinite component "continuous spin" representations being the only exception. In this infinite component positive energy representation it is not possible to come from the wedge localization down to the spacelike cone localization which is the coarsest localization which one needs for a particle interpretation. It is worthwhile to emphasize that sharper localization of local algebras in LQP is not defined in terms of support properties of classical smearing functions but via the rather unusual formation of intersections of localized algebras; although in some cases, such as CCR- or CAR-algebras (or more generally Wightman fields), the algebraic formulation (1) can be reduced to this more classical concept. Since the modular structure is related to the so-called KMS property , it is not surprising that the modular localization has thermal aspects. In fact, as mentioned before, there are two manifestations of thermality, the standard heat bath thermal behavior which is described by the Gibbs formula or, after having performed the thermodynamic limit, by the KMS condition, and thermality caused by localization either with classical bifurcated Killing horizons as in black hole curved spacetimes and (Rindler, Unruh, Bisognano-Wichmann) wedge regions, or in a purely quantum manner, as at the boundary of Minkowski space double cones. In the latter case the KMS state has no natural limiting description in terms of a Gibbs formula (which only applies to type $`I`$ and $`II`$, but not to type $`III`$ von Neumann algebras), a fact which is also related to the boundedness from below of the Hamiltonian, whereas e.g. the Lorentz boost (the modular operator of the wedge) does not share this property. In the reader also finds a discussion of localization and cluster properties in a heat bath thermal state. Although in these notes we will not enter into these interesting thermal aspects, it should be emphasized that thermality (similar to the concept of virtual particle clouds) is an inexorable aspect of localization in LQP and does not need the Hawking type of Killing vector horizons.
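For a sense of the orders of magnitude behind the wedge (Rindler/Unruh) instance of localization-thermality just mentioned, the sketch below simply evaluates the standard Unruh temperature $`T=\mathrm{\hbar }a/(2\pi ck_B)`$ for a given proper acceleration. It is not part of the original text; it only makes concrete the earlier remark that these localization-induced thermal effects are far too small to be measured directly.

```python
import math

HBAR = 1.054571817e-34   # J s
C    = 2.99792458e8      # m / s
K_B  = 1.380649e-23      # J / K

def unruh_temperature(a):
    """Unruh temperature T = hbar * a / (2 pi c k_B) for proper acceleration a (m/s^2)."""
    return HBAR * a / (2.0 * math.pi * C * K_B)

print(f"a = 9.81 m/s^2  ->  T = {unruh_temperature(9.81):.2e} K")        # ~ 4e-20 K
print(f"T = 1 K needs a = {2.0 * math.pi * C * K_B / HBAR:.2e} m/s^2")   # ~ 2.5e20 m/s^2
```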
The close relation of particle and thermal physics (KMS thermal property$``$crossing symmetry of S-matrix and formfactors ) is a generic property of LQP and should not be counted as a characteristic success of string theory. Already in the very early development of algebraic QFT the nature of the local von Neumann algebras became an interesting issue. Although it was fairly easy (and expected) to see that i.e. wedge- or double cone- localized algebras are von Neumann factors (in analogy to the tensor product factorization of standard QT under formation of subsystems, it took the ingenuity of Araki to realize that these factors were of type $`III`$ (more precisely hyperfinite type $`III_1,`$ as we know nowadays thanks to the profound contributions of Connes and Haagerup), at that time still an exotic mathematical structure. Hyperfiniteness was expected from a physical point of view, since approximatability as limits of finite systems (matrix algebras) harmonizes very well with the idea of thermodynamic+scaling limits of lattice approximations. A surprise was the type $`III_1`$ nature which,as already mentioned, implies the absence of pure states (in fact all projectors are Murray von Neumann equivalent to 1) on such algebras; this property in some way anticipated the thermal aspect (Hawking-Unruh) of localization. Overlooking this fact (which makes local algebras significantly different from QM), it is easy to make conceptual mistakes which could e.g. suggest an apparent breakdown of causal propagation as already mentioned before. If one simply grafts concepts of QM onto the causality structure of LQP (e.g. quantum mechanical tunnelling, structure of states) without deriving them in LQP , one runs the risk of wrong conclusions about e.g. the possibility of superluminal velocities. A very interesting question is: what is the influence of the always present causally disjoint environment on the measurement process, given the fact that in the modern treatment the coupling to the environment and the associated decoherence relaxation are very important. Only certain aspects of classical versus quantum reality, as expressed in terms of Bell’s inequalities, have been discussed in the causal context of LQP . In the following we will sketch some more properties which set apart QM from LQP and whose conceptual impacts on decoherence of Schrödinger cats, entanglement etc. still is in need of understanding. Let me mention two more structural properties, intimately linked to causality, which distinguish LQP rather sharply from QM. One is the Reeh-Schlieder property: $`\overline{𝒫(𝒪)\mathrm{\Omega }}`$ $`=`$ $`H,i.e.cyclicityof\mathrm{\Omega }`$ (5) $`A𝒫(𝒪),A\mathrm{\Omega }`$ $`=`$ $`0A=0i.e.\mathrm{\Omega }separating`$ which either holds for the polynomial algebras of fields or for operator algebras $`𝒜(𝒪).`$ The first property, namely the denseness of states created from the vacuum by operators from arbitrarily small localization regions (a state describing a particle behind the moon<sup>5</sup><sup>5</sup>5This weird aspect should not be held against QFT but rather be taken as indicating that localization by a piece of hardware in a laboratory is also limited by an arbitrary large but finite energy, i.e. is a “phase space localization” (see subsequent discussion). In QM one obtains genuine localized subspaces without energy limitations. 
and an antiparticle on the earth can be approximated inside a laboratory of arbitrarily small size and duration) is totally unexpected from the global viewpoint of general QT and has even attracted the interest of philosophers of natural sciences. If the naive interpretation of cyclicity/separability in the Reeh-Schlieder theorem leaves us with a feeling of science fiction, the way out is to ask: which among the dense set of localized states can really be produced with a controllable expenditure (of energy)? In QM it is not necessary to ask this question since, as already mentioned, the localization at a given time via support properties of wave functions leads to a tensor product factorization of inside/outside, so that the ground state factorizes and the application of the inside observables never leads to a dense set in the whole space. It turns out that most of the important physical and geometrical information is encoded into features of dense domains, and in fact the aforementioned modular theory explains such relations. For the case at hand, the reconciliation of the Reeh-Schlieder theorem with common sense has led to the discovery of the physical relevance of localization with respect to phase space in LQP, i.e. the understanding of the size of degrees of freedom in the set: $`P_E𝒜(𝒪)\mathrm{\Omega }`$ is compact, (6) $`e^{-\beta 𝐇}𝒜(𝒪)\mathrm{\Omega }`$ is nuclear, with $`𝐇=\int E𝑑P_E`$. The first property was introduced way back by Haag and Swieca , whereas the second statement (and similar nuclearity statements involving modular operators of local regions instead of the global Hamiltonian), which is more informative and easier to use, is a later result of Buchholz and Wichmann . It should be emphasized that the LQP degrees of freedom counting of Haag-Swieca, which gives an infinite (but still nuclear) number of localized states, is different from the finiteness in QM, a fact often overlooked in present-day string theoretic degree of freedom counting. The difference to the case of QM decreases if one uses, instead of a strict energy cutoff, a Gibbs damping factor $`e^{-\beta H}.`$ In this case the map $`𝒜(𝒪)\to e^{-\beta H}𝒜(𝒪)\mathrm{\Omega }`$ is “nuclear” if the degrees of freedom do not accumulate so strongly that a maximal (Hagedorn) temperature appears. The nuclearity assures that a QFT, which was given in terms of its vacuum representation, also exists in a thermal state. An associated nuclearity index turns out to be the counterpart of the quantum mechanical Gibbs partition function and behaves in an entirely analogous way. The peculiarities of the above Haag-Swieca degrees of freedom counting are very much related to one of the oldest “exotic” and at the same time characteristic aspects of QFT: vacuum polarization. As discovered by Heisenberg, the partial charge: $$Q_V=\int _Vj_0(x)d^3x=\mathrm{\infty }$$ (7) diverges as a result of uncontrolled vacuum fluctuations near the boundary. For the free field current it is easy to see that a better definition involving test functions, which takes into account the fact that the current is a 4-dim distribution and has no restriction to equal times, leads to a finite expression.
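For orientation, the kind of test-function smearing meant here can be written schematically as follows (the particular smearing functions are our own illustrative choice, not notation from the text): take $`f_R(𝐱)`$ equal to 1 inside the ball of radius $`R`$, vanishing outside radius $`R+\mathrm{\Delta }R`$ and interpolating smoothly across the “collar”, and a smooth time-averaging function $`\alpha _T`$ with $`\int \alpha _T(t)dt=1`$; one then considers $$Q(f_R,\alpha _T)=\int d^4x\,j_0(x)f_R(𝐱)\alpha _T(x^0).$$ For the free-field current the vacuum fluctuation $`\langle 0|Q(f_R,\alpha _T)^2|0\rangle `$ is finite for nonzero smearing widths and grows without bound only as the collar and the time averaging are removed, which is the divergence Heisenberg encountered in $`Q_V`$.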
The algebraic counterpart is the already mentioned so called “split property” namely that if one leaves between say the double cone (“relativistic box”) observable algebra $`𝒜(𝒪)`$and its causal disjoint $`𝒜(𝒪^{})`$ a “collar” region, then it is possible to construct in a canonical way a type $`I`$ tensor factor $`𝒩`$ which extends into the collar and one obtains inside/outside factorization if one leaves out the collar region (a fuzzy box). This is then the algebraic analog of Heisenberg’s smoothening of the boundary to control vacuum fluctuations. It is this “split inclusion” which allows to bring back some of the familiar structure of QM, since type I factors allow for pure states, tensor product factorization, entanglement and all the other properties at the heart of quantum theory and the measurement process. Although there is no time to explain this, let us nevertheless mention that the most adequate formalism for LQP which substitutes quantization and is most characteristic of LQP in contradistinction to QT, is the formalism of modular localization related to the Tomita modular theory of von Neumann algebras. The interaction enters through wedge algebras, thus giving wedges a similar fundamental role as they already had in the Unruh illustration of the thermal aspects of the Hawking effect. Modular localization also leads to a vast enlargement of the symmetry concepts in QFT beyond those geometric symmetries which enter the theory through quantized Noether currents. If by these remarks I have created the impression that local quantum physics is one of the conceptually most fertile and spiritually (not historically) young areas of future basic research with relevance to the basics of measurement and quantum computation, I have accomplished the purpose of these notes. Indeed I know of no other framework which brings together such seemingly different ideas as Spin & Statistics, TCP and crossing symmetry of particle physics on the one hand together with thermal and entropical aspects of (modular) localization & black hole physics on the other hand.
no-problem/9904/hep-ph9904365.html
ar5iv
text
# 𝑏-Parity: counting inclusive 𝑏-jets as an efficient probe of new flavor physics \[ ## Abstract We consider the inclusive reaction $`\mathrm{}^+\mathrm{}^{}\to nb+X`$ ($`n`$ = number of $`b`$-jets) in lepton colliders for which we propose a useful approximately conserved quantum number $`b_P=(-1)^n`$ that we call $`b`$-Parity ($`b_P`$). We make the observation that the Standard Model (SM) is essentially $`b_P`$-even since SM $`b_P`$-violating signals are necessarily CKM suppressed. In contrast, new flavor physics can produce $`b_P=-1`$ signals whose only significant SM background is due to $`b`$-jet misidentification. Thus, we show that $`b`$-jet counting, which relies primarily on $`b`$-tagging, becomes a very simple and sensitive probe of new flavor physics (i.e., of $`b_P`$-violation). \] The Standard Model (SM), despite its enormous success, is believed to be the low-energy limit of a more fundamental theory whose nature will be probed by the next generation of colliders. New physics effects have been studied in a variety of processes using model independent approaches , as well as within specific models . All such investigations aim at providing a clear, unambiguous signal for non-SM physics effects. In this letter we propose one such signal obtained through simple $`b`$-jet counting. The approach is best suited to lepton colliders but may be extended to hadron colliders, e.g., it applies to the Fermilab Tevatron $`p\overline{p}`$ collider to the extent that the sea $`b`$-quark content of the protons can be ignored. Consider the inclusive multiple $`b`$-jet production in $`\mathrm{}^+\mathrm{}^{}`$ collisions $`\mathrm{}^+\mathrm{}^{}\to nb+X,`$ (1) where $`n`$ denotes the number of $`b`$ and $`\overline{b}`$-jets in the final state (FS) and $`X`$ stands for non-$`b`$-jets, leptons and/or missing energy; it is understood that this represents the state after top-quark decay. For the reactions (1) we introduce a useful approximate symmetry we call $`b`$-Parity ($`b_P`$), defined as $`b_P=(-1)^n.`$ (2) In the limit where the quark-mixing CKM matrix $`V`$ satisfies $`V_{3j}=V_{j3}=0`$ for $`j\ne 3`$, all SM processes are $`b_P`$-even since in this case the third generation quarks do not mix with the others, and this leads to the conservation of the corresponding flavor number. Given the fast top decay and since $`\mathrm{Br}(t\to bW)\approx 1`$, the experimentally-observed flavor number is in fact carried only by the $`b`$-quarks. Therefore, the measured quantum number reduces to the net number of detected $`b`$-quarks; we find it convenient to use instead the derived quantity $`b_P`$. The only SM processes that violate this conserved number necessarily involve the charged current interactions and are, therefore, suppressed by the corresponding small off-diagonal CKM elements $`|V_{cb}|^2`$, $`|V_{ub}|^2`$, $`|V_{ts}|^2`$ or $`|V_{td}|^2`$. As a consequence the irreducible SM background to $`b_P=-1`$ processes (induced by new flavor physics beyond the SM) is strongly suppressed: the SM is essentially $`b_P`$-even. In the following we will look for experimental signatures of $`b_P`$-odd physics within multi-jet events. We will assume that a sample with a definite number of jets has been selected (we will use 2 and 4 jet samples) and determine (within each sample) the experimental sensitivity needed to detect – or rule out – new flavor physics of this type up to a certain scale.
In contrast with other observables, the determination of $`b_P`$ within a sample with a fixed number of jets relies primarily on the $`b`$-tagging efficiency and purity of the sample used and not on the particular structure of a given FS, nor does it require the identification of any other particle but the $`b`$. Thus, the main obstacle in this use of $`b_P`$ is the reducible SM background due to jet mis-identification. This results from having a $`b`$-tagging efficiency $`ϵ_b`$ below 1, and/or having non-zero probabilities $`t_c`$ and $`t_j`$ of misidentifying a $`c`$-jet or a light jet for a $`b`$-jet, respectively. This type of background would of course disappear as $`ϵ_b\to 1`$ and $`t_{c,j}\to 0`$, but even for the small value $`t_c=0.1`$ and high $`b`$-tagging efficiency $`ϵ_b=0.7`$, it can produce a significant number of (mis-identified) events in the detector. Since for most experiments $`t_j`$ is very small , the only relevant experimental parameters for this probe of new physics are $`ϵ_b`$ and $`t_c`$. Consider now the inclusive $`b`$ and $`\overline{b}`$-jet production process in (1).<sup>*</sup><sup>*</sup>*To be specific, we will consider reactions in $`e^+e^{}`$ colliders, but the method is clearly extendible to muon colliders. Focusing only on multi-jet FS, let $`\sigma _{nm\mathrm{}}`$ be the cross-section for $$e^+e^{}\to nb+mc+\mathrm{}j,$$ (3) where $`j`$ is a light-quark or gluon jet and $`c`$ is a $`c`$-quark jet. Since our method does not require the detection of the charge of the $`b`$, $`n`$ is the number of $`b`$ + $`\overline{b}`$-quarks and similarly $`m`$ and $`\mathrm{}`$ are the numbers of the corresponding jets in (3) irrespective of the parent quarks’ charges. We denote by $`t_c`$ the $`c`$-jet mis-tagging probability (i.e., that of mistaking a $`c`$-jet for a $`b`$-jet) and by $`t_j`$ the light-jet mis-tagging probability (i.e., that of mistaking a light-jet or gluon-jet for a $`b`$-jet). Using these, the probability (or cross-section) for detecting precisely $`k`$ $`b`$-jets in the reaction (3) is given by $$\overline{\sigma }_k=\sum _{u,v,w}P_u^nP_v^mP_w^{\mathrm{}}\left[ϵ_b^u(1-ϵ_b)^{n-u}\right]\left[t_c^v(1-t_c)^{m-v}\right]\left[t_j^w(1-t_j)^{\mathrm{}-w}\right]\sigma _{nm\mathrm{}}\delta _{u+v+w,k},$$ (5) where $`P_j^i=i!/[j!(i-j)!]`$. To experimentally detect $`b_P`$-odd signals generated by new physics one should simply measure the number of events with an odd number of $`b`$-jets in the FS. In particular, for the reaction (3), we define $`N_{k,J}`$ to be the number of events with $`k`$ (taken odd) $`b`$-jets in a FS with a total of $`J`$ jets. The sensitivity of $`N_{k,J}`$ to $`b_P`$-violating new physics is determined by comparing the theoretical shift due to the underlying $`b_P=-1`$ interactions with the expected error ($`\mathrm{\Delta }`$) in measuring the given quantity. Thus, requiring a signal of at least $`N_{SD}`$ standard deviations, we have $`\left|N_{k,J}-N_{k,J}^{(\mathrm{SM})}\right|\ge N_{SD}\mathrm{\Delta }.`$ (6) We will include three contributions to $`\mathrm{\Delta }`$ which we combine in quadrature: $`\mathrm{\Delta }^2=\mathrm{\Delta }_{\mathrm{stat}}^2+\mathrm{\Delta }_{\mathrm{sys}}^2+\mathrm{\Delta }_{\mathrm{theor}}^2`$, where $`\mathrm{\Delta }_{\mathrm{stat}}=\sqrt{N_{k,J}}`$ is the statistical error, $`\mathrm{\Delta }_{\mathrm{sys}}=N_{k,J}\delta _s`$ is a systematic error and $`\mathrm{\Delta }_{\mathrm{theor}}=N_{k,J}\delta _t`$ is the theoretical error in the numerical integration of the corresponding cross sections.
The quantities $`\delta _{s,t}`$ denote the systematic and theoretical errors per event; $`\delta _s`$ is estimated using experimental values from related processes (e.g., $`R_b`$ measurements), $`\delta _t`$ is derived from the errors in the Monte Carlo integration used in calculating the various cross sections. There are various types of specific models beyond the SM (e.g., multi-Higgs models, supersymmetry, etc.) that can alter the SM prediction for the cross-section of reaction (3). In this letter we will take a model-independent approach in which we investigate the limits that can be placed on the scale $`\mathrm{\Lambda }`$ of a new short-distance theory that can generate flavor violation, and which we parameterize using an effective Lagrangian $$_{eff}=\frac{1}{\mathrm{\Lambda }^2}\sum _if_i𝒪_i+O(1/\mathrm{\Lambda }^3),$$ (7) where $`𝒪_i`$ are mass-dimension 6 gauge-invariant effective operators (some of which may have new flavor dynamics; we assume there are no significant lepton-number violation effects at scale $`\mathrm{\Lambda }`$ that would generate dimension 5 operators), and $`f_i`$ are coefficients that can be estimated using naturality arguments. As a concrete example that clearly illustrates the significance of $`b_P`$, we consider the effects of the $`b_P`$-odd effective four-Fermi operator $`𝒪=\left(\overline{\mathrm{}}\gamma ^\mu \mathrm{}\right)\left(\overline{q}_i\gamma _\mu q_j\right),`$ (8) where $`\mathrm{}`$ and $`q`$ are the SM left-handed lepton and quark $`SU(2)_L`$ doublets and $`i,j=1,2`$ or 3 label the generation. This operator gives rise to contact $`e^+e^{}t\overline{c}`$ and $`e^+e^{}b\overline{s}`$ vertices (and their charge conjugates). It can be generated, for example, by an exchange of a heavy boson in the underlying theory (see ). Although our method applies to any $`b_P=-1`$ process, in what follows we will investigate the effects of (8) on the reaction (3) as an illustration. In particular, on $`N_{1,2}`$ (i.e., a 1 $`b`$-jet signal in a 2-jet sample, $`J=n+m+\mathrm{}=2`$) and on $`N_{1,4}`$ and $`N_{3,4}`$ (i.e., 1 and 3 $`b`$-jet signals in a 4-jet sample, $`J=n+m+\mathrm{}=4`$). Consider first the $`2`$-jet sample case: in the limit $`m_q=0`$ for all $`q\ne t`$, the only relevant cross-sections are $`\sigma _d=\sigma (e^+e^{}\to d\overline{d})`$, $`\sigma _u=\sigma (e^+e^{}\to u\overline{u})`$, where $`d=d,s,b`$ and $`u=u,c`$, that are generated by the SM, and $`\sigma _{bs}=\sigma (e^+e^{}\to b\overline{s})=\sigma (e^+e^{}\to \overline{b}s)`$ generated by the $`eebs`$ contact term. These cross-sections are calculated by means of the CompHEP package , in which we implemented the Feynman rules for the $`e^+e^{}b\overline{s}`$ and $`e^+e^{}t\overline{c}`$ vertices generated by the operator (8). Using (5), we get the following cross-section for the 2-jet events, one of which is identified as a $`b`$-jet ($`\overline{\sigma }_1`$ with $`J=n+m+\mathrm{}=2`$): $$\overline{\sigma }_1=P_1^2\left[ϵ_b(1-ϵ_b)+2t_j(1-t_j)\right]\sigma _d+P_1^2\left[t_c(1-t_c)+t_j(1-t_j)\right]\sigma _u+2(P_1^1)^2\left[ϵ_b(1-t_j)+t_j(1-ϵ_b)\right]\sigma _{bs},$$ (11) which is used to calculate $`N_{1,2}`$.
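To make the tagging combinatorics concrete, the following is a minimal numerical sketch (ours, not the authors' CompHEP-based analysis) of Eq. (5) and of the significance criterion (6); the cross sections `sigma_d`, `sigma_u`, `sigma_bs`, the tagging parameters and the event numbers are placeholders to be supplied from an actual calculation:

```python
from math import comb, sqrt

def sigma_tagged(k, n, m, l, sigma_nml, eps_b=0.4, t_c=0.1, t_j=0.02):
    """Eq. (5): cross section for observing exactly k tagged b-jets in a final
    state with n true b-jets, m c-jets and l light jets (parton cross section sigma_nml)."""
    total = 0.0
    for u in range(n + 1):
        for v in range(m + 1):
            w = k - u - v
            if 0 <= w <= l:
                total += (comb(n, u) * eps_b**u * (1.0 - eps_b)**(n - u)
                          * comb(m, v) * t_c**v * (1.0 - t_c)**(m - v)
                          * comb(l, w) * t_j**w * (1.0 - t_j)**(l - w)
                          * sigma_nml)
    return total

def sigma_1_two_jets(sigma_d, sigma_u, sigma_bs, **tags):
    """Eq. (11): one tagged b-jet in the 2-jet sample (per-flavor pair cross sections)."""
    return (sigma_tagged(1, 2, 0, 0, sigma_d, **tags)           # b bbar
            + 2.0 * sigma_tagged(1, 0, 0, 2, sigma_d, **tags)   # d dbar and s sbar
            + sigma_tagged(1, 0, 2, 0, sigma_u, **tags)         # c cbar
            + sigma_tagged(1, 0, 0, 2, sigma_u, **tags)         # u ubar
            + 2.0 * sigma_tagged(1, 1, 0, 1, sigma_bs, **tags)) # b sbar and bbar s

def significant(N_new, N_sm, delta_s=0.05, delta_t=0.05, N_SD=3.0):
    """Eq. (6) with the statistical, systematic and theoretical errors combined in quadrature."""
    delta = sqrt(N_new + (N_new * delta_s)**2 + (N_new * delta_t)**2)
    return abs(N_new - N_sm) >= N_SD * delta
```

Multiplying these cross sections by an integrated luminosity gives the event numbers $`N_{k,J}`$ that enter the test.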
In Table I we give the largest $`\mathrm{\Lambda }`$ (the scale of the new $`b_P=1`$ physics) that can be probed or excluded at the level of 3 standard deviations ($`N_{SD}=3`$), derived using (6), for the three representative $`b`$-tagging efficiencies of $`25\%,40\%`$ and $`60\%`$ and fixing the $`c`$-jet and light-jet purity factors to $`10\%`$ and $`2\%`$, respectively.Note that the limits derived here and throughout the rest of the paper assume $`|f|=1`$ \[see (7)\]. Alternatively, they can be interpreted as limits on $`\mathrm{\Lambda }/\sqrt{|f|}`$. Results are given for three collider scenarios: $`\sqrt{s}=200`$ GeV with $`L=2.5`$ fb<sup>-1</sup>, $`\sqrt{s}=500`$ GeV with $`L=100`$ fb<sup>-1</sup> and $`\sqrt{s}=1`$ TeV with $`L=200`$ fb<sup>-1</sup>. Both the systematic error $`\delta _s`$ and the theoretical uncertainty $`\delta _t`$ are assumed to be $`5\%`$. Also, an angular cut on the c.m. scattering angle of $`|\mathrm{cos}\theta |<0.9`$ is imposed on each of the 2-jet cross-sections in (11). As expected, we see from Table I that the sensitivity to the new flavor physics induced by the four-Fermi interaction increases with the $`b`$-tagging efficiency. In Figs. 1 we show the regions in the $`ϵ_bt_c`$ plane (enclosed in the dark areas) where the flavor physics parameterized by (8) can be probed or excluded at the 3 standard deviation level (or higher); as an illustration we chose $`\mathrm{\Lambda }=4\sqrt{s}`$ ($`\sqrt{s}`$ denotes the collider CM energy) for the collider scenarios mentioned above. The calculation was done using (6) for $`N_{1,2}`$ with $`\delta _s=\delta _t=0.05`$, $`|\mathrm{cos}\theta |<0.9`$, $`t_j=0.02`$. Evidently, $`\mathrm{\Lambda }`$ as large as four times the c.m. energy of any of the three colliders may be probed or excluded even for rather small $`b`$-tagging efficiencies; typically $`ϵ_b\stackrel{>}{}25\%`$ will suffice as long as the purity factors (in particular $`t_c`$ being the more problematic one) are kept below the $`10\%`$ level. For the 4-jet sample there are numerous processes that can contribute to $`N_{1,4}`$ and $`N_{3,4}`$. At the parton level the 4-jet events may be categorized as follows: (1) events containing 2 quark-antiquark pairs or one quark-antiquark pair and two gluons, $`(q\overline{q})(q^{}\overline{q}^{})`$, $`(q\overline{q})gg`$, where both $`q`$ and $`q^{}`$ denote any light quark ($`q,q^{}t`$) including the case $`q=q^{}`$. (2) events with two charged quark pairs: $`(u\overline{d})(\overline{u}^{}d^{})`$, where $`u`$, $`u^{}`$ are either $`u`$ or $`c`$-quarks and $`d`$, $`d^{}`$ are any of the down-type quarks, excluding the states $`u=u^{}`$ and $`d=d^{}`$ since these are induced in type (1) above. (3) the 4 combinations $`(b\overline{s})gg`$, $`(b\overline{b})(b\overline{s})`$, $`(d\overline{d})(b\overline{s})`$ and $`(s\overline{s})(s\overline{b})`$ (and the corresponding charged conjugate states) generated by the presence of the four-Fermi operator. It is worth noting that the $`eetc`$ contact term also contributes through graphs containing a virtual top-quark exchange. In order to get a reliable jet separation within the 4-jet sample, we use the so-called Durham criterion , that requires the quantities $`y_{ij}^D=2\mathrm{m}\mathrm{i}\mathrm{n}(E_i^2,E_j^2)(1\mathrm{cos}\theta _{ij})/s`$, where $`E_i`$ and $`E_j`$ are the energies of the particles $`i`$ and $`j`$ and $`\theta _{ij}`$ is their relative angle ($`ij=1,\mathrm{},4`$). 
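As a small numerical illustration of this jet-resolution variable (our sketch with illustrative energies, not taken from the paper's analysis code):

```python
import math

def y_durham(E_i, E_j, theta_ij, s):
    """Durham variable y_ij^D = 2 min(E_i^2, E_j^2) (1 - cos theta_ij) / s."""
    return 2.0 * min(E_i, E_j)**2 * (1.0 - math.cos(theta_ij)) / s

# Two 50 GeV partons separated by 90 degrees at sqrt(s) = 500 GeV:
print(y_durham(50.0, 50.0, math.radians(90.0), 500.0**2))   # -> 0.02, above a cut of 0.01
```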
We evaluate all 4-jet cross-sections using the CompHEP package with the cuts $`y_{ij}^D\ge y_{\mathrm{cut}}`$ on all possible parton pairs $`ij`$ – we present our numerical results for $`y_{\mathrm{cut}}=0.01`$. In addition we neglect all quark masses except $`m_{\mathrm{top}}`$, and the strong coupling constant $`\alpha _s`$ was evaluated to next-to-next-to-leading order at a scale $`Q`$ equal to half the CM energy, for 5 or 6 active quark flavors depending on whether $`Q<m_{\mathrm{top}}`$ or not, respectively. For 6 flavors we used $`\mathrm{\Lambda }_{QCD}=118.5`$ MeV (see for details). The results for the 4-jet case, using $`N_{1,4}`$, are shown in Figs. 2 for 200, 500 and 1000 GeV colliders, where, as in Fig. 1, any value in the $`ϵ_bt_c`$ plane enclosed by the dark area will suffice for probing or ruling out (at $`3\sigma `$) the new four-Fermi operator in (8) with a scale $`\mathrm{\Lambda }`$ as indicated in the figure. As for the 2-jet sample, we take $`t_j=0.02`$ and a systematic error of $`5\%`$. Using the results of the CompHEP Monte-Carlo integration we estimate that our calculated 4-jet cross-sections are accurate up to the level of about $`10\%`$; accordingly we choose $`\delta _t=0.1`$ in Fig. 2. The 4-jet case is less sensitive to a $`b_P`$-odd signal induced by the four-Fermi operator in (8). For example, we see that for a 500 GeV collider with $`t_c\approx 0.1`$ and $`t_j=0.02`$, a $`b`$-tagging efficiency of about $`70\%`$ will be needed in order to probe or exclude a value of $`\mathrm{\Lambda }\approx 1300`$ GeV by measuring $`N_{1,4}`$, while a $`40\%`$ $`b`$-tagging efficiency will suffice for probing $`\mathrm{\Lambda }\approx 2000`$ GeV using $`N_{1,2}`$ in the 2-jet sample. Equivalently, for a given value of $`t_{c,j}`$, higher $`ϵ_b`$ will be required in the 4-jet measurement compared to the 2-jet one in order to detect a $`b_P`$-odd signal generated by the operator in (8) for the same value of $`\mathrm{\Lambda }`$. (This may be somewhat improved by reducing the theoretical uncertainties.) We also find that $`N_{3,4}`$ is less sensitive than $`N_{1,4}`$ to $`𝒪_\mathrm{}q^{(1)}`$ in (8). Though $`N_{1,2}`$ is more efficient for probing the type of new flavor physics which generates the four-Fermi operator (8) at low energies, this is not necessarily a general feature: certain types of new physics will not contribute to the 2-jet FS and must be probed using the 4-jet sample. This is the case, for example, for an effective vertex generating a right-handed $`Wbc`$ coupling, which may alter the flavor structure of the SM and give rise to sizable $`b_P=-1`$ effects. Note, however, that this refers only to the exclusive 2 and 4 jet samples, since such a $`Wcb`$ right-handed coupling will give rise to a $`b_P`$-odd signal in inclusive 2-jet reactions such as $`e^+e^{}\to b+j+X`$, where $`j`$ is a light jet. The analysis of these events is, however, considerably more complex. We note that our cross sections include terms of order $`1/\mathrm{\Lambda }^4`$ that will be modified by dimension 8 effective operators, which are in general present in (7). Note, however, that such dimension 8 operators, if generated by the underlying high energy theory, are expected to give an additional uncertainty of order $`(s/\mathrm{\Lambda }^2)^2`$, below $`3\%`$ for the results presented. In addition we note that the above analysis assumes unbiased pure samples with a fixed jet number; the effects of contamination from events with a different jet number have not been included.
Before we summarize we wish to note that the following issues need further investigation: * Our $`b`$-jet counting method can be used to constrain specific models containing $`b_P=1`$ interactions. For example, supersymmetry with R-parity violation or with explicit flavor violation in the squark sector and/or multi-Higgs models without natural flavor conservation can give rise to $`tc,tu`$ (or $`bs,bd`$) transitions, which may lead to sizable $`b_P`$-odd signals in leptonic colliders. * In leptonic colliders with c.m. energies $`\stackrel{>}{}1.5`$ TeV, $`t`$-channel vector-boson fusion processes become important. At such energy scales, the SM $`b_P=1`$ reducible background needs to be reevaluated. At the same time, the $`V_1V_2`$-fusion processes ($`V_{1,2}=\gamma ,Z`$ or $`W`$) give rise to a variety of new possible $`b_P=1`$ signals from new flavor physics (see e.g., ). To summarize, we have shown that $`b`$-jet counting that relies on $`b`$-tagging (with moderate efficiency in a relatively pure multi-jet sample) can be used to efficiently probe physics beyond the SM. Reactions with $`n`$ final $`b`$-jets can be characterized through the use of the quantum number $`b_P=(1)^n`$ that we called $`b`$-Parity. Due to small off-diagonal CKM matrix elements, $`b_P`$ is conserved within the SM to very good accuracy; it follows that the SM contributions to the above reactions are $`b_P`$-even. Despite the presence of a (reducible) background, due to reduced $`b`$-tagging efficiency and sample purity, we showed that our method is sensitive enough to provide very useful limits on new flavor physics in a variety of scenarios (of which two examples are provided) using realistic values for $`ϵ_b`$. ###### Acknowledgements. We would like to thank D. Atwood R. Cole W. Gary R. Hawkings and B. Shen for illuminating comments and insights. This research was supported in part by US DOE contract number DE-FG03-94ER40837(UCR).
no-problem/9904/math-ph9904012.html
ar5iv
text
# Abstract ## Abstract A formal symplectic structure on $`R\times M`$ is constructed for the unsteady flow of an incompressible viscous fluid on a three-dimensional domain $`M`$. The evolution equation for the helicity density is expressed via the divergence of the associated Liouville vector field that generates symplectic dilation. For an inviscid fluid this equation reduces to a conservation law. As an application the symplectic dilation is used to generate Hamiltonian automorphisms of the symplectic structure which are then related to the symmetries of the velocity field. The helicity, first discovered in , has been recognized to be an important ingredient of the problem of the relationship between invariants of fluid motion and the topological structure of the vorticity field -. For three-dimensional flows its ergodic and topological interpretations were introduced and investigated in -. It has also been studied in the context of Noether theorems -. Kinematical aspects of helicity invariants in connection with the particle relabelling symmetries were discussed in . In this work, we shall show that there is also a dynamical content of the helicity density in the sense that the information contained in the Eulerian dynamical equations can be represented in the framework of symplectic geometry by a current vector field governing the dynamics of helicity. More precisely, starting from the Navier-Stokes equations of incompressible fluids we shall construct a helicity four-vector whose divergence will define the time-evolution of the helicity density. The dynamical properties of the fluid, such as viscosity, are implicit in this vector field. The evolution equation for the helicity density reduces to a conservation law for inviscid Euler flows. For the fluid dynamical content of this work we shall refer to and the necessary mathematical background can be found in -. The Navier-Stokes equation for a viscous incompressible fluid in a bounded domain $`M\subset R^3`$ is $$\frac{\partial 𝐯}{\partial t}+(𝐯\cdot \nabla )𝐯=-\nabla p+\nu \nabla ^2𝐯$$ (1) where $`𝐯`$ is the divergence-free velocity field tangent to the boundary of $`M`$, $`p`$ is the pressure per unit density and $`\nu `$ is the kinematic viscosity. The identity $`(𝐯\cdot \nabla )𝐯=\nabla (\frac{1}{2}|𝐯|^2)-𝐯\times (\nabla \times 𝐯)`$ can be used to bring the equation (1) into the form $$\frac{\partial 𝐯}{\partial t}-𝐯\times (\nabla \times 𝐯)=\nu \nabla ^2𝐯-\nabla (p+\frac{1}{2}v^2)$$ (2) and in terms of the vorticity field $`𝐰\equiv \nabla \times 𝐯`$ this gives $$\frac{\partial 𝐰}{\partial t}-\nabla \times (𝐯\times 𝐰)=\nu \nabla ^2𝐰.$$ (3) For a fluid with a potential $`\phi `$ and velocity field $`𝐯`$ the densities $$\mathcal{H}=\frac{1}{2}𝐯\cdot (\nabla \times 𝐯),\mathcal{H}_w=\frac{1}{2}𝐰\cdot (\nabla \times 𝐰),q=𝐰\cdot \nabla \phi =w(\phi )$$ (4) will be called helicity, vortical helicity and potential vorticity, respectively. ###### Proposition 1 For a velocity field satisfying Eq. (1) and for $`q\ne 2\nu \mathcal{H}_w`$ the two-form $$\mathrm{\Omega }_\nu =(\nabla \phi +𝐯\times 𝐰-\nu \nabla \times 𝐰)\cdot d𝐱\wedge dt+𝐰\cdot (d𝐱\wedge d𝐱)$$ (5) is symplectic on $`I\times M`$ where $`I`$ is an open interval in $`R`$. Moreover, it is exact, $`\mathrm{\Omega }_\nu =d\theta `$ with the Liouville (or canonical) one-form $$\theta =(\phi +p+\frac{1}{2}v^2)dt+𝐯\cdot d𝐱$$ (6) which is independent of the viscosity $`\nu `$. Proof: $`\mathrm{\Omega }_\nu `$ is closed by Eq. (3) and the divergence-free property of the vorticity field. The non-degeneracy follows from the recognition that the density in the symplectic volume $`\mathrm{\Omega }_\nu \wedge \mathrm{\Omega }_\nu /2`$ is the function $`q-2\nu \mathcal{H}_w`$ which is assumed to be non-zero. The exactness can be verified using Eq. (2).
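To spell out the non-degeneracy claim just made (this is our own elaboration of the proof, with the shorthand $`𝐀=\nabla \phi +𝐯\times 𝐰-\nu \nabla \times 𝐰`$ for the coefficient of the first term in Eq. (5)): the square of the two-form reduces to its cross terms, $$\mathrm{\Omega }_\nu \wedge \mathrm{\Omega }_\nu =2\left[𝐀\cdot d𝐱\wedge dt\right]\wedge \left[𝐰\cdot (d𝐱\wedge d𝐱)\right]=2(𝐀\cdot 𝐰)\,dx\wedge dy\wedge dz\wedge dt,$$ and since $`𝐰\cdot (𝐯\times 𝐰)=0`$ one finds $`𝐀\cdot 𝐰=𝐰\cdot \nabla \phi -\nu 𝐰\cdot (\nabla \times 𝐰)=q-2\nu \mathcal{H}_w`$, so the four-form is nowhere vanishing exactly when $`q\ne 2\nu \mathcal{H}_w`$, as assumed.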
$``$ For an arbitrary smooth function $`f`$ of $`(t,x)`$ the unique Hamiltonian vector field $`X_f`$ defined by the symplectic two-form (5) via $`i(X_f)(\mathrm{\Omega }_\nu )=df`$ is given by $$X_f=\frac{1}{q2\nu _w}[w(f)(\frac{}{t}+v)+\frac{df}{dt}w+((\phi \nu \times 𝐰)\times f)].$$ (7) Here, $`d/dt`$ denotes the convective derivative $`_t+𝐯`$ which, viewed as a vector field on $`I\times M`$, is not Hamiltonian. In fact, with the notation $`v𝐯`$, one can check that the one-form $$i(_t+v)(\mathrm{\Omega }_\nu )=(\phi \nu \times 𝐰)(d𝐱𝐯dt)$$ (8) is not closed and hence $`_t+v`$ is not even locally Hamiltonian. Next proposition describes invariantly the connection between the symplectic structure (5) and the the helicity density. ###### Proposition 2 The identity $$d(\theta \mathrm{\Omega }_\nu )\mathrm{\Omega }_\nu \mathrm{\Omega }_\nu 0$$ (9) gives the equation $$\frac{}{t}+(𝐯+\frac{1}{2}(p\frac{1}{2}𝐯^2)𝐰)=\frac{\nu }{2}𝐯^2𝐰\nu _w$$ (10) for the evolution of helicity density. Proof: We have $`\mathrm{\Omega }_\nu \mathrm{\Omega }_\nu =2(q2\nu _w)dxdydzdt`$ and we compute $$[\phi 𝐰+𝐯\times (\phi \nu \times 𝐰)]=2q2\nu _w\nu 𝐯^2𝐰$$ (11) for the derivative of certain terms in the expression $`\theta _\nu \mathrm{\Omega }_\nu =2dxdydz`$ $`[(\phi +p{\displaystyle \frac{1}{2}}v^2)𝐰+2𝐯+𝐯\times (\phi \nu \times 𝐰)]d𝐱d𝐱dt`$ (12) for the three-form. Putting them together in the identity (9) we obtain Eq. (10). Upon integration, the term $`\nu 𝐯^2𝐰/2`$ in Eq. (10) gives the integral of $`\nu _w`$ and one obtains the usual expression for the time change of total helicity as given in, for example, Ref. . $``$ Note that the helicity flux in Eq. (10) is independent of the function $`\phi `$ which we have introduced by hand to make the symplectic form non-degenerate. Using the invariant description (9) of the evolution of helicity density, we shall introduce a current vector $`J_\nu `$ and show that it is an infinitesimal symplectic dilation of $`\mathrm{\Omega }_\nu `$. $`J_\nu `$ will be defined as the one-dimensional kernel of the three-form $`\theta \mathrm{\Omega }_\nu `$. Since the symplectic two-form is nondegenerate, it can be obtained as the unique solution of $$i(J_\nu )(\mathrm{\Omega }_\nu \mathrm{\Omega }_\nu /2)=\theta \mathrm{\Omega }_\nu ,$$ (13) that is, as the dual of the three-form $`\theta \mathrm{\Omega }_\nu `$ with respect to the symplectic volume. We find $$J_\nu =\frac{1}{q2\nu _w}[2(_t+v)+(\phi +p\frac{1}{2}v^2)w+𝐯\times (\phi \nu \times 𝐰)]$$ (14) as the expression for the helicity current. ###### Proposition 3 $`J_\nu `$ is a vector field of divergence $`2`$ with respect to the symplectic volume and it is an infinitesimal symplectic dilation for $`\mathrm{\Omega }_\nu `$. The evolution of helicity density $``$ can be described by the identity $$div_{\mathrm{\Omega }_\nu }(J_\nu )20.$$ (15) Proof: The exterior derivative of Eq. (13) gives $`di(J_\nu )(\mathrm{\Omega }_\nu \mathrm{\Omega }_\nu /2)`$ $`=`$ $`_{J_\nu }(\mathrm{\Omega }_\nu \mathrm{\Omega }_\nu /2)div_{\mathrm{\Omega }_\nu }(J_\nu )\mathrm{\Omega }_\nu \mathrm{\Omega }_\nu /2`$ (16) $`=`$ $`d(\theta \mathrm{\Omega }_\nu )=\mathrm{\Omega }_\nu \mathrm{\Omega }_\nu `$ (17) where we used the identity $`_J=i(J)d+di(J)`$ in the first equality and the second equality is the definition of the divergence. We see that $`J_\nu `$ is a vector field whose divergence is $`2`$. From the last equality, we conclude that the equation (15) is equivalent to Eq. (10) describing the evolution of helicity density. 
$`J_\nu `$ is the unique vector field satisfying $$i(J_\nu )(\mathrm{\Omega }_\nu )=\theta $$ (18) and it follows from this that $`J_\nu `$ fulfills the condition $$_{J_\nu }(\mathrm{\Omega }_\nu )=di(J_\nu )(\mathrm{\Omega }_\nu )=d\theta =\mathrm{\Omega }_\nu $$ (19) of being an infinitesimal symplectic dilation for $`\mathrm{\Omega }_\nu `$ . $`J_\nu `$ is also called to be the Liouville vector field of $`\mathrm{\Omega }_\nu `$ . $``$ We observed that the local existence of a Hamiltonian function for $`_t+v`$ is being prevented by the viscosity term . Moreover, the viscosity term causes the helicity not to be conserved. We shall now show that, for the case of inviscid incompressible fluids described by the Euler equation, namely Eq. (1) with $`\nu =0`$, $`_t+v`$ is Hamiltonian and that the helicity density $``$ is conserved. To this end, we assume that the scalar field $`\phi `$ is advected by the fluid motion $$\frac{\phi }{t}+𝐯\phi =0$$ (20) and that the potential vorticity $`q0`$. ###### Proposition 4 Let $`v`$ and $`\phi `$ satisfy Eq. (1) with $`\nu =0`$ and Eq.(20), respectively. Then, the suspended velocity field $`_t+v`$ on $`I\times M`$ and $`q^1w`$ are Hamiltonian vector fields for the exact symplectic two-form $$\mathrm{\Omega }_0=(\phi +𝐯\times 𝐰)d𝐱dt+𝐰(d𝐱d𝐱)=d\theta $$ (21) with the Hamiltonian functions $`\phi `$ and $`t`$, respectively. The evolution equation (10) reduces to the conservation law in divergence form for the helicity density. Proof: Using Eq. (20) $`_t+v`$ can be written in Hamiltonian form $`i(_t+v)(\mathrm{\Omega }_0)=d\phi `$. More generally, the Hamiltonian vector field with the symplectic two-form (21) for an arbitrary function $`f`$ on $`I\times M`$ is given by $$X_f=\frac{1}{q}[w(f)(\frac{}{t}+v)+\frac{df}{dt}w+(\phi \times f)]$$ (22) which clearly reduces to $`_t+v`$ for $`f=\phi `$ and to $`q^1w`$ for $`f=t`$. The conservation of helicity density is obvious. $``$ For the inviscid flow of the Euler equation the helicity current takes the form $$J_0=\frac{1}{q}[2(_t+v)+(\phi +p\frac{1}{2}v^2)w+𝐯\times \phi ]$$ (23) while the canonical one-form remains to be the same. That means, the difference between the dynamics of fluid motion with $`\nu =0`$ and $`\nu 0`$ is contained in the helicity current. Thus, the dynamical content of the helicity is encoded in its current and this, in turn, is connected with the symplectic structure on $`I\times M`$ which was constructed as a consequence of the Eulerian dynamical equations. The realization of dynamics of fluid motion in the symplectic framework is useful in the study of the geometry of the motion on $`M`$ and of the hypersurfaces in $`I\times M`$ defined by the time-dependent Lagrangian invariants, that is, the invariants of the velocity field. The present framework also provides geometric tools for the investigation of scaling properties of the fluid motion because the action by the Lie derivative of helicity current on tensorial objects corresponds to infinitesimal scaling transformations . Leaving the discussions of these issues elsewhere, we shall conclude this work with an application to the symmetry structure of the velocity field which is also related to the results presented in . ###### Proposition 5 Let $`X_f`$ be a Hamiltonian vector field for $`\mathrm{\Omega }_\nu `$. Then, the vector fields $`(_{J_\nu })^k(X_f),k=0,1,2,\mathrm{}`$ are infinitesimal Hamiltonian automorphisms of $`\mathrm{\Omega }_\nu `$. 
Proof: The symplectic two-form is invariant under the flows of Hamiltonian vector fields because $`_{X_f}(\mathrm{\Omega }_\nu )=di(X_f)(\mathrm{\Omega }_\nu )=d^2f0`$ where we used the identity $`_X=i(X)d+di(X)`$ for the Lie derivative, $`d\mathrm{\Omega }_\nu =0`$ and the Hamilton’s equations $`i(X_f)(\mathrm{\Omega }_\nu )=df`$. It then follows from the identity $$_{[J_\nu ,X_f]}=_{J_\nu }_{X_f}_{X_f}_{J_\nu }$$ (24) evaluated on $`\mathrm{\Omega }_\nu `$ that $`[J_\nu ,X_f]`$ also leaves $`\mathrm{\Omega }_\nu `$ invariant. Replacing $`X_f`$ with $`[J_\nu ,X_f]`$ in Eq. (24) we see that one can generate an infinite hierarchy of invariants of the symplectic two-form $`\mathrm{\Omega }_\nu `$. To see that these are Hamiltonian vector fields we compute $`i([J_\nu ,X_f])(\mathrm{\Omega }_\nu )`$ $`=`$ $`_{J_\nu }(i(X_f)(\mathrm{\Omega }_\nu ))i(X_f)(_{J_\nu }(\mathrm{\Omega }_\nu ))`$ (25) $`=`$ $`d(J_\nu (f)f)`$ (26) where we used Eq. (19). Thus, $`[J_\nu ,X_f]`$ is Hamiltonian with the function $`J_\nu (f)f`$. By induction one can find similarly that $`(_{J_\nu })^2(X_f)`$ is Hamiltonian with $`(J_\nu )^2(f)2J_\nu (f)+f`$ and so on. Interchanging $`J_\nu `$ and $`X_f`$ in the identity (25) we also obtain $`i(X_f)(\theta )=J_\nu (f)`$. $``$ In particular, we let $`\nu =0`$, $`f=t`$ so that $`X_t=q^1w`$ and consider the infinitesimal Hamiltonian automorphisms $`(_{J_0})^k(q^1w),k=0,1,2,\mathrm{}`$ of $`\mathrm{\Omega }_0`$. The identity (24) evaluated on the vector field $`_t+v`$ gives $$_{[J_0,q^1w]}(_t+v)=_{q^1w}([J_0,_t+v])$$ (27) where the vector field $`[J_0,_t+v]`$ is, by proposition (5), Hamiltonian with the function $`J_0(\phi )\phi =pv^2/2`$. By the Lie algebra isomorphism $`[X_f,X_g]=X_{\{f,g\}}`$ defined by the symplectic structure $`\mathrm{\Omega }_0`$, the right hand side of Eq. (27) is a Hamiltonian vector field with the function $$\{t,p\frac{1}{2}v^2\}=\frac{1}{q}w(p\frac{1}{2}v^2).$$ (28) On the level surfaces defined by the constant values of the function (28) we have $`[[J_0,q^1w],_t+v]=0`$ In fact, if we restrict to the constant values of the function $`pv^2/2`$ the hierarchy of Hamiltonian automorphisms of $`\mathrm{\Omega }_0`$ can be identified as the infinitesimal symmetries of the velocity field. This can be seen by replacing $`q^1w`$ with $`[J_0,q^1w]`$ in Eq. (27). We thus proved that ###### Proposition 6 For the Euler flow, the hierarchy of infinitesimal Hamiltonian automorphisms $`(_{J_0})^k(q^1w),k=0,1,2,\mathrm{}`$ of $`\mathrm{\Omega }_0`$ generate infinitesimal time-dependent symmetries of the velocity field on the level surfaces $`pv^2/2=constant`$. As a matter of fact, the function $`pv^2/2`$ is related, in Ref. , to the invariance under particle relabelling symmetries of the Lagrangian density of the variational formulation of the Euler equation.
no-problem/9904/astro-ph9904041.html
ar5iv
text
# Leptonic Domains in the Early Universe and Their Implications ## Abstract We extend a treatment of the causal structure of space-time to active-sterile neutrino transformation-based schemes for lepton number generation in the early universe. We find that these causality considerations necessarily lead to the creation of spatial domains of lepton number with opposite signs. Lepton number gradients at the domain boundaries can open a new channel for MSW resonant production of sterile neutrinos. The enhanced sterile neutrino production via this new channel allows considerable tightening of Big Bang Nucleosynthesis constraints on active-sterile neutrino mixing, including the proposed $`\nu _\mu \leftrightarrow \nu _s`$ solution for the Super Kamiokande atmospheric $`\nu _\mu `$ deficit, and the four-neutrino schemes proposed to simultaneously fit current neutrino experimental results. PACS numbers: 04.90.Nn; 14.60.Lm; 97.60.Lf; 98.54.-h It is well known that resonant MSW (Mikheyev-Smirnov-Wolfenstein) transitions between active neutrinos and sterile neutrinos in the early universe could generate large lepton number asymmetries in the neutrino sector . Here the lepton number for an active neutrino species $`\nu _\alpha `$ is defined to be $`L_{\nu _\alpha }\equiv (n_{\nu _\alpha }-n_{\overline{\nu }_\alpha })/n_\gamma `$, the net asymmetry in $`\nu _\alpha `$ over $`\overline{\nu }_\alpha `$ number density normalized by the photon number density $`n_\gamma `$. Not well noted is a crucial feature involved in the lepton number generation process: the lepton number asymmetry is first damped to essentially zero by the active-sterile neutrino mixing, then oscillates chaotically with a progressively larger amplitude as the mixing goes through resonances, until the asymmetry converges to a growing asymptotic value that is either positive or negative. As a result of this feature, the sign of the lepton number asymmetry is independent of the initial conditions which obtain before the instability begins, and is exponentially sensitive to the parameters involved during the chaotic oscillatory phase. In turn, the lepton number generated in this process may not have a uniform sign in different causal domains. Obviously, an upper bound to the size of these domains is the particle horizon $`H^{-1}\approx (90/8\pi ^3)^{1/2}g^{-1/2}m_{\mathrm{pl}}/T^2`$ where $`g`$ is the statistical weight in relativistic particles, $`T`$ is the temperature of the universe, and $`m_{\mathrm{pl}}\approx 1.22\times 10^{28}`$ eV. The typical size of these domains is at least as large as the diffusion length of neutrinos at the time of the lepton number generation. These leptonic domains can persist as long as the resonant neutrino transition is capable of efficient lepton number generation. This is because any reduction of lepton number at domain boundaries due to mixing will be quickly reversed by the generation process, so that a boundary region is incorporated into one domain or the other. The existence of leptonic domains in the early universe provides a new channel for producing sterile neutrinos, via the resonant MSW conversion of active neutrinos to sterile neutrinos at domain boundaries, where lepton number gradients exist. If during the Big Bang Nucleosynthesis (BBN) epoch the population of sterile neutrinos from this channel becomes comparable to that of an equilibrated active neutrino flavor, the extra neutrino degrees-of-freedom (an increase in $`g`$) can increase the primordial helium abundance significantly .
Therefore, such a production mechanism can be constrained by the observationally-inferred primordial helium abundance. The details of the active-sterile neutrino transformation process in the early universe and the associated generation of lepton number asymmetries can be found in a number of previous works . Here we only briefly summarize. In all calculations, we employ the natural units $`\mathrm{\hbar }=c=k_\mathrm{B}=1`$. A two-family system $`\nu _\alpha \leftrightarrow \nu _s`$ ($`\alpha =e`$, $`\mu `$ or $`\tau `$, and $`\nu _s`$ is the sterile neutrino) has a 2$`\times `$2 evolution Hamiltonian $`\mathcal{H}`$ with $`\mathcal{H}_{\alpha \alpha }=V_z`$, $`\mathcal{H}_{\alpha s}=\mathcal{H}_{s\alpha }^{*}=V_x+iV_y`$, $`\mathcal{H}_{ss}=0`$. The effective vector potential V during the BBN epoch (below the QCD phase transition temperature $`T\lesssim 100`$ MeV) is $$V_x=\frac{\delta m^2}{2E}\mathrm{sin}2\theta ,V_y=0,V_z=-\frac{\delta m^2}{2E}\mathrm{cos}2\theta +V_\alpha ^L+V_\alpha ^T,$$ (1) where $`\delta m^2\equiv m_{\nu _s}^2-m_{\nu _\alpha }^2`$, $`\theta `$ is the vacuum mixing angle, and $`E`$ is the neutrino energy. The matter-antimatter asymmetry contribution to the effective potential is $$V_\alpha ^L\approx \pm 0.35G_FT^3\left[L_0+2L_{\nu _\alpha }+\sum _{\beta \ne \alpha }L_{\nu _\beta }\right]$$ (2) with the “$`+`$” sign for $`\nu _\alpha `$ and the “$`-`$” sign for $`\overline{\nu }_\alpha `$. The quantity $`L_0`$ represents the contribution from the baryonic asymmetry and electron-positron asymmetry, i.e., $`L_0\sim 10^{-10}`$. The quantity $`L_{\nu _\beta }`$ is the asymmetry in other active neutrino species $`\nu _\beta `$. For simplicity, and with no loss of generality, we will assume $`L_{\nu _\beta }=0`$ unless explicitly stated otherwise. The contribution to V from a thermal neutrino background is $$V_\alpha ^T\approx \{\begin{array}{cc}-80G_F^2ET^4\hfill & \text{for }\alpha =e\text{;}\hfill \\ -22G_F^2ET^4\hfill & \text{for }\alpha =\mu ,\tau \text{.}\hfill \end{array}$$ (3) $`V_\alpha ^T`$ is the same for both $`\nu _\alpha `$ and $`\overline{\nu }_\alpha `$. Resonances occur when the diagonal elements of the Hamiltonian $`\mathcal{H}`$ are equal, i.e., $`V_z=0`$. This is when $`\nu _\alpha `$ and $`\nu _s`$ become degenerate in effective mass and maximally mixed in medium. Very simplistically, when $`V_z`$ evolves through zero, $`\nu _\alpha `$ and $`\nu _s`$ swap their flavors if the resonance is adiabatic ($`|\mathrm{d}V_z/\mathrm{d}t|\ll V_x^2`$), and remain essentially unaltered if the resonance is non-adiabatic ($`|\mathrm{d}V_z/\mathrm{d}t|\gg V_x^2`$). Since the resonance condition is energy dependent, only neutrinos with $`E\approx E_{\mathrm{res}}`$ are resonant at any given temperature. When $`V_\alpha ^L=0`$ (or negligibly small compared to $`|\delta m^2|/2E`$), resonances can occur for the $`\nu _\alpha \leftrightarrow \nu _s`$ system only if $`\delta m^2<0`$ (i.e., $`m_{\nu _\alpha }>m_{\nu _s}`$). For $`\delta m^2>0`$, and at high enough temperatures for $`\delta m^2<0`$, it has been shown that active-sterile neutrino transformation can damp $`L_0+2L_{\nu _\alpha }`$ to zero very efficiently. The amplification of $`L_0+2L_{\nu _\alpha }`$ starts for $`\delta m^2<0`$ only as the temperature falls below the critical temperature $$T_\mathrm{c}\approx 22\left|\delta m^2/1\mathrm{eV}^2\right|^{1/6}\mathrm{MeV}.$$ (4) This is also the temperature where a significant number of $`\nu _\alpha `$ go through the resonance.
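As a small numerical sketch of these potentials (ours, not from the text; everything is kept in eV units, with the sign conventions of Eqs. (1)-(3) as written above, and with the standard value of $`G_F`$ and an illustrative lepton asymmetry as the only inputs beyond those equations):

```python
import math

G_F = 1.166e-23          # Fermi constant [eV^-2]
MeV = 1.0e6              # [eV]

def V_z(E, T, dm2, theta, L_eff, alpha="mu", nubar=False):
    """Diagonal potential of Eq. (1) for nu_alpha <-> nu_s (all quantities in eV units).
    L_eff = L_0 + 2*L_{nu_alpha} + sum_{beta != alpha} L_{nu_beta}, as in Eq. (2)."""
    vac = -(dm2 / (2.0 * E)) * math.cos(2.0 * theta)
    V_L = 0.35 * G_F * T**3 * L_eff * (-1.0 if nubar else 1.0)
    V_T = -(80.0 if alpha == "e" else 22.0) * G_F**2 * E * T**4     # Eq. (3)
    return vac + V_L + V_T

def E_resonance(T, dm2, theta, L_eff, alpha="mu", nubar=False):
    """Solve V_z = 0 for E by bisection (V_z is monotonic in E for delta m^2 < 0)."""
    lo, hi = 1.0e-3 * T, 1.0e3 * T
    for _ in range(60):
        mid = math.sqrt(lo * hi)
        if V_z(lo, T, dm2, theta, L_eff, alpha, nubar) * \
           V_z(mid, T, dm2, theta, L_eff, alpha, nubar) <= 0.0:
            hi = mid
        else:
            lo = mid
    return mid

# Example: delta m^2 = -1 eV^2, tiny vacuum mixing, L_eff ~ 1e-10, at T = 22 MeV,
# i.e. near T_c of Eq. (4); with these inputs the resonance sits near the
# thermal energies, E_res/T ~ 1.2.
T = 22.0 * MeV
print(E_resonance(T, -1.0, 1.0e-4, 1.0e-10) / T)
```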
The instability of $`L_0+2L_{\nu _\alpha }`$ at $`T_\mathrm{c}`$ results from the non-linear characteristics of the MSW resonances, and the feedback effect they have on the lepton number asymmetry. Due to the same non-linearity, $`L_0+2L_{\nu _\alpha }`$ (or $`L_{\nu _\alpha }`$ once $`L_{\nu _\alpha }\gg L_0`$) oscillates chaotically near $`T_c`$, with an exponentially increasing amplitude. This chaotic feature was first pointed out by Shi in numerical studies of monochromatic neutrinos undergoing MSW resonance, and was later found to apply to systems with a distribution of energies as well . Eventually, at a temperature slightly below $`T_c`$, the oscillatory behavior abates, and $`L_{\nu _\alpha }`$ settles into one of two fixed points (two $`T^{-4}`$ power-laws): $$L_{\nu _\alpha }^{(\pm )}\approx \pm \left|\frac{\delta m^2}{10\mathrm{eV}^2}\right|\left(\frac{T}{1\mathrm{MeV}}\right)^{-4}.$$ (5) The sign of the emergent $`L_{\nu _\alpha }`$, however, is independent of the initial $`L_0`$ and $`L_{\nu _\alpha }`$ above $`T_c`$, and is exponentially sensitive to the evolution of the potential V during the oscillatory phase. The sign is therefore chaotic and uncorrelated across causal domains. In the power-law regime, the growth of $`L_{\nu _\alpha }`$ is driven by the resonant conversion of $`\nu _\alpha `$ to $`\nu _s`$ (if $`L_{\nu _\alpha }<0`$) or $`\overline{\nu }_\alpha `$ to $`\overline{\nu }_s`$ (if $`L_{\nu _\alpha }>0`$) in the sector of the neutrino energy spectrum with $`E_{\mathrm{res}}/T\approx 0.06|\delta m^2/1\mathrm{eV}^2||L_{\nu _\alpha }|^{-1}(T/1\mathrm{MeV})^{-4}`$, which is in general $`\lesssim 1`$. In this regime, $`V_\alpha ^T`$ quickly becomes negligible because it scales as $`T^5`$. The power-law solution, Eq. (5), for $`|L_{\nu _\alpha }|`$ is very stable, in that a $`|L_{\nu _\alpha }|`$ significantly deviating from this solution will be quickly damped or amplified until it converges to Eq. (5). (A larger $`|L_{\nu _\alpha }|`$ implies a smaller $`E_{\mathrm{res}}/T`$, and therefore a less efficient generation of $`|L_{\nu _\alpha }|`$; a smaller $`|L_{\nu _\alpha }|`$ implies a larger $`E_{\mathrm{res}}/T`$ and a more efficient generation of $`|L_{\nu _\alpha }|`$.) The rate of this convergence is $`\mathrm{d}\mathrm{ln}|L_{\nu _\alpha }|/\mathrm{d}\mathrm{ln}T\gtrsim 4`$ in the power-law regime. As $`|L_{\nu _\alpha }|`$ increases, $`E_{\mathrm{res}}/T`$ will slowly increase. This is because keeping up with the approximate $`T^{-4}`$ growth requires progressively more $`\nu _\alpha `$ or $`\overline{\nu }_\alpha `$ to be resonantly converted into sterile neutrinos. Eventually, as $`E_{\mathrm{res}}/T`$ sweeps through most of the $`\nu _\alpha `$ energy spectrum, the growth of $`|L_{\nu _\alpha }|`$ tapers off near its physical limit. This limit is $`|L_{\nu _\alpha }|=3/8`$, when the entire $`\nu _\alpha `$ or $`\overline{\nu }_\alpha `$ population has been converted into sterile neutrinos. In turn, the lepton number generation process ceases, and any inhomogeneity of lepton number arising from the process begins to be smoothed out by neutrino diffusion. Because of the chaotic behavior of the sign of $`L_{\nu _\alpha }`$, domains of lepton number with opposite signs are expected to form at the epoch when $`T\approx T_c`$. At the domain boundaries, mixing between different domains tends to reduce the asymmetries and to increase the thickness of the boundary regions. However, the resonant neutrino mixing tends to maintain the solution in Eq.
(5) and so narrows the boundary region. The thickness of the boundaries, very crudely, is thus the diffusion length of active neutrinos within the time in which $`|L_{\nu _\alpha }|`$ grows by an $`e`$-folding, $`\approx H^{-1}/4`$. Taking the $`\nu _\alpha `$ collision rate $`\mathrm{\Gamma }_{\nu _\alpha }\sim G_F^2T^5`$, we obtain a boundary thickness relative to the horizon scale $`ct`$ $$\delta _d\approx \frac{c}{\mathrm{\Gamma }_{\nu _\alpha }}\left(\frac{\mathrm{\Gamma }_{\nu _\alpha }}{4H}\right)^{1/2}\approx ct\left(\frac{T}{1\mathrm{MeV}}\right)^{-1.5},$$ (6) for $`T\gtrsim 1`$ MeV when active neutrinos are still diffusive. At temperatures $`T\lesssim 1`$ MeV, neutrinos decouple from the plasma and free-stream at the speed of light. In this case $`\delta _d\approx ct`$. The existence of lepton domains results in a gradient of $`L_{\nu _\alpha }`$, and therefore a gradient in $`V_z`$. More importantly, the varying $`V_z`$ at the domain boundaries satisfies the resonant condition for most $`\nu _\alpha `$ crossing the boundaries. This is because \[as Eq. (5) shows\] the power-law solutions within each domain result from resonant transitions of $`\nu _\alpha `$ or $`\overline{\nu }_\alpha `$ with $`E_{\mathrm{res}}\lesssim T`$. As a result, at domain boundaries $`E_{\mathrm{res}}`$ becomes larger. This is because $`E_{\mathrm{res}}\propto |L_{\nu _\alpha }|^{-1}`$. Most of $`\nu _\alpha `$ and $`\overline{\nu }_\alpha `$ therefore undergo resonant transitions to sterile neutrinos at domain boundaries. This new channel of sterile neutrino production can populate a significant sterile neutrino sea with a number density comparable to that of an equilibrated active neutrino flavor if (1) the resonant conversion at the boundary region is adiabatic; and (2) this new production channel for sterile neutrinos does not provide negative feedback to the lepton number asymmetry generation process, and so does not compromise the domain structure of lepton number. Here we demonstrate that both requirements can be satisfied. The adiabaticity condition at the resonances is $$V_x^2>\left|\frac{\mathrm{d}V_z}{\mathrm{d}t}\right|=\left|c\frac{\partial V_z}{\partial r}-HT\frac{\partial V_z}{\partial T}\right|,$$ (7) evaluated at $`V_z=0`$. The second term on the r.h.s. is of order $`HV_\alpha ^L`$. This term is always small compared to the first term as long as the leptonic domains exist. Since the spatial gradient is expected to be smooth across the boundary (whose thickness is determined by the diffusion/streaming process), we can employ the average $`|\partial V_z/\partial r|`$ across the domain boundaries, $`\approx 0.7G_FT^3(L_{\nu _\alpha }^{(+)}-L_{\nu _\alpha }^{(-)})/\delta _d`$ \[where $`L_{\nu _\alpha }^{(\pm )}`$ satisfy Eq. (5)\]. In turn, the adiabaticity condition Eq. (7) becomes $$\left|\delta m^2\right|^2\mathrm{sin}^22\theta >10H\left(\frac{E}{T}\right)^2G_FT^5\left(\frac{T}{1\mathrm{MeV}}\right)^{1.5}\mathrm{min}[\left|\frac{\delta m^2}{10\mathrm{eV}^2}\right|\left(\frac{T}{1\mathrm{MeV}}\right)^{-4},\frac{3}{8}].$$ (8) The production of sterile neutrinos via adiabatic conversion of $`\nu _\alpha `$ at domain boundaries will not have a negative impact on the $`L_{\nu _\alpha }`$ generation process and the domain structure of lepton number. Consider a domain boundary with a lepton asymmetry $`L_{\nu _\alpha }^{(+)}`$ on one side, and $`L_{\nu _\alpha }^{(-)}`$ on the other.
The $`L_{\nu _\alpha }^{(+)}`$ side has fewer $`\overline{\nu }_\alpha `$ than $`\nu _\alpha `$, and some $`\overline{\nu }_s`$ from resonant $`\overline{\nu }_\alpha \to \overline{\nu }_s`$ conversions. The $`L_{\nu _\alpha }^{(-)}`$ side has the opposite, with more $`\overline{\nu }_\alpha `$ than $`\nu _\alpha `$, and some $`\nu _s`$ from resonant $`\nu _\alpha \to \nu _s`$ conversions. When neutrinos cross the boundary from the $`L_{\nu _\alpha }^{(+)}`$ side to the $`L_{\nu _\alpha }^{(-)}`$ side, the resultant production of sterile neutrinos due to $`\nu _\alpha \to \nu _s`$ and $`\overline{\nu }_\alpha \to \overline{\nu }_s`$ has no bearing on the asymmetry of $`\nu _\alpha `$ on the $`L_{\nu _\alpha }^{(-)}`$ side. The existence of sterile neutrinos does not hinder the $`L_{\nu _\alpha }`$ generation process until the sterile neutrino population is comparable in numbers to the $`\nu _\alpha `$ population. The resonant conversion $`\overline{\nu }_s\to \overline{\nu }_\alpha `$ due to the crossing, on the other hand, produces more $`\overline{\nu }_\alpha `$ in the $`L_{\nu _\alpha }^{(-)}`$ domain and only reinforces the domain structure. Therefore, once the adiabaticity condition Eq. (8) is met, this new channel of sterile neutrino production may be potent enough to bring the sterile neutrinos into equilibrium with active neutrinos. (In fact, this condition is conservative because neutrinos may cross multiple domain boundaries within a Hubble time.) On the other hand, the observationally-inferred primordial <sup>4</sup>He abundance and deuterium abundance constrain the total number of neutrino flavors in equilibrium $`N_\nu `$ to be $`\lesssim 3.3`$ ($`N_\nu `$ represents relativistic degrees of freedom in neutral fermions). This constraint implies that the new sterile neutrino production channel cannot be efficient before the decoupling of the $`\nu _\alpha \overline{\nu }_\alpha `$ pair production process. For $`\alpha =\mu `$ and $`\tau `$, this decoupling temperature is $`\approx 5`$ MeV. A constraint on the two-family $`\nu _{\mu ,\tau }\leftrightarrow \nu _s`$ mixing can therefore be obtained by requiring that the adiabaticity condition Eq. (8) is not satisfied at $`T\approx \mathrm{max}(5,|\delta m^2/4\mathrm{eV}^2|^{1/4})`$ MeV (the latter term in the bracket is the temperature at which the growth of $`L_{\nu _\alpha }`$ stops): $$\begin{array}{cc}\left|\delta m^2\right|\mathrm{sin}^22\theta <7\times 10^{-5}\mathrm{eV}^2\hfill & \text{for }\left|\delta m^2\right|\lesssim 2.5\times 10^3\mathrm{eV}^2\text{;}\hfill \\ \mathrm{sin}^22\theta <3\times 10^{-8}\hfill & \text{for }\left|\delta m^2\right|\gtrsim 2.5\times 10^3\mathrm{eV}^2\text{.}\hfill \end{array}$$ (9) For $`\alpha =e`$, the constraint from BBN is more severe. The sterile neutrino production must not be efficient, not only above the $`\nu _e\overline{\nu }_e`$ pair-production decoupling temperature $`T\approx 3`$ MeV, but also at the weak freeze-out temperature of $`T\approx 1`$ MeV. At this temperature, a significant $`\nu _e\to \nu _s`$ and $`\overline{\nu }_e\to \overline{\nu }_s`$ transition would cause a deficit in the $`\nu _e\overline{\nu }_e`$ number density, which cannot be replenished by pair production. A significant deficit in the $`\nu _e\overline{\nu }_e`$ number density causes the neutron-to-proton ratio to freeze out too early and results in a primordial <sup>4</sup>He abundance that is too large.
(For example, a 10% deficit in the $`\nu _e\overline{\nu }_e`$ number density has roughly the same effect on the <sup>4</sup>He abundance as $`N_\nu 3.5`$.) Therefore the BBN constraint on the two-family $`\nu _e\nu _s`$ mixing is obtained at $`T\mathrm{max}(1,|\delta m^2/4\mathrm{eV}^2|^{1/4})`$ MeV: $$\begin{array}{cc}\left|\delta m^2\right|\mathrm{sin}^22\theta <5\times 10^8\mathrm{eV}^2\hfill & \text{for }\left|\delta m^2\right|\begin{array}{c}<\hfill \\ \hfill \end{array}4\mathrm{eV}^2\text{;}\hfill \\ \mathrm{sin}^22\theta <10^8\hfill & \text{for }\left|\delta m^2\right|\begin{array}{c}>\hfill \\ \hfill \end{array}4\mathrm{eV}^2\text{.}\hfill \end{array}$$ (10) These bounds are summarized in Figure 1. They apply in addition to the previous bounds based on a universe with homogeneous lepton numbers, and together they offer much tighter constraints on the two-family active-sterile neutrino mixing. Intriguing results can also be obtained if there is active-sterile neutrino mixing involving three or more families. One example is the proposal that a $`\nu _\mu \nu _s`$ and $`\nu _\tau \nu _s^{}`$ (in principle $`\nu _s^{}`$ and $`\nu _s`$ can be the same flavor) mixing might be able to simultaneously explain the Super Kamiokande atmospheric neutrino data and satisfy the BBN bound. A stand-alone $`\nu _\mu \nu _s`$ oscillation solution to the Super Kamiokande data would violate the BBN bound by bringing $`\nu _s`$ into equilibrium during BBN. The double active-sterile neutrino oscillation proposal argues that this violation of BBN bound may be avoided if a resonant $`\nu _\tau \nu _s^{}`$ transformation in the early universe generates a $`L_{\nu _\tau }`$ that hinders the $`\nu _s`$ production from the $`\nu _\mu \nu _s`$ mixing (by creating a $`L_{\nu _\beta }`$ term in Eq. ). This argument is no longer valid once we consider the existence of the $`L_{\nu _\tau }`$ domains as a result of the resonant $`\nu _\mu \nu _s^{}`$ transformation. Rather the contrary is true. The $`L_{\nu _\tau }`$ domains facilitate the production of $`\nu _s`$ via resonant $`\nu _\mu \nu _s`$ transformation at domain boundaries. The adiabaticity condition, Eq. (8), is modified in this double mixing situation to be: $$\left|\delta m_1^2\right|^2\mathrm{sin}^22\theta _1>5H\left(\frac{E}{T}\right)^2G_FT^5\left(\frac{T}{1\mathrm{MeV}}\right)^{1.5}\mathrm{min}[\left|\frac{\delta m_2^2}{10\mathrm{eV}^2}\right|\left(\frac{T}{1\mathrm{MeV}}\right)^4,\frac{3}{8}].$$ (11) where $`\delta m_1^2|m_{\nu _\mu }^2m_{\nu _s}^2|`$, $`\theta _1`$ is the $`\nu _\mu \nu _s`$ vacuum mixing angle, and $`\delta m_2^2m_{\nu _\tau }^2m_{\nu _s^{}}^2`$. To be consistent with BBN, the adiabaticity condition cannot be satisfied for the double mixing system from the onset of the $`L_{\nu _\tau }`$ generation $`T_c22|\delta m_2^2/1\mathrm{e}\mathrm{V}^2|^{1/6}`$ MeV to the $`\nu _{\mu ,\tau }`$ decoupling temperature $`T5`$ MeV. However, for $`\delta m_1^210^3`$ to $`10^2`$ eV<sup>2</sup> and $`\mathrm{sin}^22\theta _11`$ (the parameters required to explain the Super Kamiokande data), the adiabaticity condition is always satisfied in this double mixing proposal for any reasonable choices of the tau neutrino mass. Therefore, BBN unambiguously rules out an active-sterile neutrino oscillation explanation to the Super Kamiokande data. 
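The piecewise bounds of Eqs. (9) and (10) are straightforward to evaluate for any given mixing point. The short helper below is only a transcription of those two equations (with $`\delta m^2`$ in eV<sup>2</sup>); the example value is an atmospheric-neutrino-like point of the kind discussed in connection with the Super Kamiokande data.

```python
def excluded_by_bbn(delta_m2, sin2_2theta, flavor="mu_tau"):
    """Transcription of Eqs. (9) and (10): returns True if the point
    (|delta m^2| in eV^2, sin^2 2theta) lies in the newly excluded region."""
    if flavor == "mu_tau":                       # Eq. (9), nu_mu/nu_tau - nu_s
        if delta_m2 < 2.5e-3:
            return delta_m2 * sin2_2theta >= 7e-5
        return sin2_2theta >= 3e-8
    if flavor == "e":                            # Eq. (10), nu_e - nu_s
        if delta_m2 < 4.0:
            return delta_m2 * sin2_2theta >= 5e-8
        return sin2_2theta >= 1e-8
    raise ValueError("flavor must be 'mu_tau' or 'e'")

# A nu_mu - nu_s mixing of the kind invoked for the atmospheric neutrino anomaly
# (delta m^2 of a few times 1e-3 eV^2, near-maximal mixing) is excluded:
print(excluded_by_bbn(3e-3, 1.0, "mu_tau"))      # -> True
```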
Another interesting situation involving multi-family active-sterile neutrino mixing arises from neutrino mixing schemes proposed to explain simultaneously the Los Alamos Liquid Scintillator Neutrino Detector (LSND) $`\overline{\nu }_e`$ signal, the Super Kamiokande atmospheric $`\nu _\mu `$ deficit, and the solar neutrino deficit. In these models, $`\nu _\mu \nu _e`$ mixing (with $`m_{\nu _\mu }^2m_{\nu _e}^20.1`$ to 10 eV<sup>2</sup> and $`\mathrm{sin}^22\theta _{\mu e}10^3`$) is employed to explain the LSND result and $`\nu _e\nu _s`$ mixing (with $`m_{\nu _s}^2m_{\nu _e}^210^5`$ eV<sup>2</sup> and $`\mathrm{sin}^22\theta _{es}10^3`$ for the MSW solution, and $`|m_{\nu _s}^2m_{\nu _e}^2|10^{10}`$ eV<sup>2</sup> and $`\mathrm{sin}^22\theta _{es}1`$ for the vacuum solution) is invoked to explain the solar neutrino data. If there is mixing between $`\nu _\mu `$ and $`\nu _s`$ as well, however, with $`\mathrm{sin}^22\theta _{\mu s}\begin{array}{c}>\hfill \\ \hfill \end{array}10^{11}`$, the $`L_{\nu _\mu }`$ background and domains generated by the mixing in the early universe would imply that the $`\nu _e\nu _s`$ mixing would not populate enough $`\nu _s`$ to violate BBN constraints only if $$\left|m_{\nu _s}^2m_{\nu _e}^2\right|^2\mathrm{sin}^22\theta _{es}<2\times 10^{14}\left|\frac{m_{\nu _\mu }^2m_{\nu _s}^2}{1\mathrm{eV}^2}\right|^{1/4}\mathrm{eV}^22\times 10^{14}\mathrm{eV}^2.$$ (12) (This is in analogy to the previous example if we take $`\delta m_1^2|m_{\nu _s}^2m_{\nu _e}^2|`$ and $`\delta m_2^2|m_{\nu _\mu }^2m_{\nu _s}^2|`$ in Eq. .) This requirement is not satisfied by the MSW $`\nu _e\nu _s`$ solution to the solar neutrino problem. (Note that if the solar neutrino problem is solved by a vacuum $`\nu _e\nu _s`$ mixing, with $`|m_{\nu _s}^2m_{\nu _e}^2|10^{10}`$ eV<sup>2</sup>, Eq. (12) will be satisfied and the $`\nu _e\nu _s`$ resonant transition will not be adiabatic.) Therefore, in light of the LSND, Super Kamiokande and solar neutrino experiments, the neutrino oscillation explanation of the LSND data and the MSW solution to the solar neutrino data are inconsistent with BBN unless the $`\nu _\mu \nu _s`$ mixing is extremely small, $`\mathrm{sin}^22\theta _{\mu s}\begin{array}{c}<\hfill \\ \hfill \end{array}10^{11}`$. This result holds despite the possibility that the $`\nu _\mu \nu _e`$ mixing amplitude and the $`\nu _e\nu _s`$ mixing amplitude could be $`\begin{array}{c}>\hfill \\ \hfill \end{array}10^8`$ times larger. This restriction severely constrains the $`\nu _\tau `$-$`\nu _\mu `$-$`\nu _e`$-$`\nu _s`$ mixing matrix required to fit the current neutrino experiment results. In summary, we have discussed the existence of leptonic domains as an inevitable consequence of resonant active-sterile neutrino oscillation mechanisms for generation of lepton number. Resonant MSW conversion due to the lepton number gradients at domain boundaries therefore provides a new channel for sterile neutrino production. As a result, the Big Bang Nucleosynthesis constraint on active-sterile neutrino mixing becomes much more stringent. Likewise for the constraint on multi-family neutrino mixing schemes involving sterile neutrinos. We have found that the $`\nu _\mu \nu _s`$ explanation of the Super Kamiokande data is inconsistent with Big Bang Nucleosynthesis in spite of lepton number asymmetries generated by other active-sterile neutrino oscillations. 
We have also found that together the $`\nu _\mu \nu _e`$ explanation of the LSND result and the MSW $`\nu _e\nu _s`$ solution to the solar neutrino problem are incompatible with Big Bang Nucleosynthesis considerations unless the amplitude of the mixing between $`\nu _\mu `$ and $`\nu _s`$ is $`\begin{array}{c}>\hfill \\ \hfill \end{array}10^8`$ smaller than that between $`\nu _\mu `$ and $`\nu _e`$ and that between $`\nu _e`$ and $`\nu _s`$. X. S. and G. M. F. are supported in part by NSF grant PHY98-00980 at UCSD. Figure Captions: Figure 1. Parameter spaces to the right of the hatched lines are excluded by BBN. The solid lines indicate bounds obtained in this work, and the dashed lines are previous bounds assuming a universe with a homogeneous lepton number.
no-problem/9904/cond-mat9904145.html
ar5iv
text
# Classical versus Quantum Transport near Quantum Hall Transitions ## Abstract Transport data near quantum Hall transitions are interpreted by identifying two distinct conduction regimes. The “classical” regime, dominated by nearest neighbor hopping between localized conducting puddles, manifests an activated–like resistivity formula, and the quantized Hall insulator behavior. At very low temperatures $`T`$, or farther from the critical point, a crossover occurs to a “quantum” transport regime dominated by variable range hopping. The latter is characterized by a different $`T`$–dependence, yet the dependence on filling fraction is coincidentally hard to distinguish. Magneto–transport measurements in the vicinity of quantum Hall (QH) transitions have provided over recent years an extensive variety of data , which has stimulated considerable confusion. The present paper suggests a way to settle the apparent disagreement between different experimental results. The traditional point of view asserts that transport properties near the transitions between QH plateaux, and from a QH liquid to the insulator, should reflect the proximity to a second order quantum phase transition. Correspondingly, the d.c. resistivity tensor $`\rho _{ij}`$ at a given filling fraction $`\nu `$ and temperature $`T`$ should be described by a universal function $`f(X)`$ of a single parameter: $$\rho _{ij}=\rho _{ijc}f\left(\frac{\mathrm{\Delta }\nu }{T^\kappa }\right),$$ (1) where $`\mathrm{\Delta }\nu =\nu \nu _c`$ (the deviation from the critical filling $`\nu _c`$), and $`\rho _{ijc}`$, $`\kappa `$ are universal. Here $`\kappa `$ is a combination of critical exponents, $`\kappa =1/(zx)`$, where $`z`$ is the dynamical exponent and $`x`$ characterizes the divergence of the correlation length near criticality: $`\xi |\mathrm{\Delta }\nu |^x`$. Theoretical studies have predicted $`x=7/3`$ ; experimentally, a number of groups have obtained data in remarkable consistency with the scaling ansatz, at different (integer as well as fractional) QH transitions . These results are further supported by data showing scaling at finite frequency and current , which moreover confirm the theoretical prediction $`x=7/3`$ and yield $`z1`$ (indicating the relevance of Coulomb interactions near the quantum critical point). The validity of the single parameter scaling form Eq. (1) has been, however, recently challenged . Motivated by an earlier work , which identified a surprisingly robust (duality) symmetry relating the current–voltage curves $`I(V)`$ in the QH liquid phase ($`\mathrm{\Delta }\nu >0`$ ) to $`V(I)`$ in the insulator at $`\mathrm{\Delta }\nu `$, the parameter $`\mathrm{\Delta }\nu `$ was marked as a relevant scaling variable . The Ohmic resistivity plotted as a function of $`\mathrm{\Delta }\nu `$ is indeed fitted (in a wide range of parameters) by the formula $$\rho _{xx}=\frac{h}{e^2}\mathrm{exp}\left[\frac{\mathrm{\Delta }\nu }{\nu _0(T)}\right],$$ (2) However, counter to the expectation, $`\nu _0(T)`$ does not scale as $`T^\kappa `$, but rather exhibits a linear dependence on $`T`$: $$\nu _0(T)=\alpha T+\beta .$$ (3) The resistivity law Eqs. (2), (3) holds in various different samples, as well as in different transitions, including plateau–to–plateau transitions (with an appropriate definition of the analog of $`\rho _{xx}`$). The parameters $`\alpha `$ and $`\beta `$ are sample–dependent, and define a “saturation temperature” $`T_s=\beta /\alpha `$ which ranges between $`0.05K`$ and $`0.5K`$. 
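For orientation, the empirical law of Eqs. (2) and (3) is simple to tabulate. In the sketch below the values of $`\alpha `$ and $`\beta `$ are hypothetical, chosen only so that the saturation temperature $`T_s=\beta /\alpha `$ falls inside the quoted 0.05–0.5 K window; nothing else is taken from a specific sample.

```python
import numpy as np

H_OVER_E2 = 25812.807   # h/e^2 in ohms

def rho_xx(delta_nu, T, alpha=0.05, beta=0.01):
    """Empirical resistivity law, Eqs. (2)-(3):
    rho_xx = (h/e^2) * exp[-delta_nu / nu0(T)],  nu0(T) = alpha*T + beta.
    alpha and beta are illustrative sample parameters (here T_s = beta/alpha = 0.2 K)."""
    return H_OVER_E2 * np.exp(-delta_nu / (alpha * T + beta))

# Isotherms: on the insulating side (delta_nu < 0) rho_xx grows exponentially,
# and the slope of ln(rho_xx) versus delta_nu stops steepening once T drops below T_s.
for T in (1.0, 0.3, 0.05):
    print(T, [round(float(np.log(rho_xx(d, T))), 2) for d in (-0.05, 0.0, 0.05)])
```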
Moreover, even if one ignores this saturation, attributing it to incomplete cooling of the carriers, a linear scaling of $`\nu _0(T)`$ with $`T`$ is inconsistent with any sensible theory for the quantum critical behavior. It should be noted that a partial set of curves obeying Eqs. (2), (3) (corresponding to a restricted range of parameters) can be collapsed on a “traditional” scaling curve , with an exponent $`0<\kappa <1`$. This observation raises a serious doubt concerning the interpretation of the data in earlier experimental works: the distinction between a pure power law $`T^\kappa `$ with $`\kappa 0.40.5`$, and an alternative function which interpolates between $`T`$ and a constant, may turn out to be rather difficult. Nevertheless, a recent experimental result indicates a cross–over from one behavior to another, which will be discussed later in this paper in more detail. Another set of experimental observations that appear to be inconclusive involves the Hall resistance in the high magnetic field insulator. While part of the data support the theoretical prediction of a “Hall insulator phase” , in which $`\rho _{xx}`$ diverges in the limit $`T0`$ yet $`\rho _{xy}`$ behaves as in a classical conductor (linearly dependent on the magnetic field $`B`$), other data exhibit a tendency to divergence of $`\rho _{xy}`$ as well. Yet a third class of experimental data have established the existence of a “quantized Hall insulator” (QHI) behavior in the insulator close to a fundamental QH state ($`1/k`$ with $`k`$ an odd integer): in this regime, $`\rho _{xy}`$ is not only finite as asserted in , but moreover maintains the quantized plateau value $`kh/e^2`$. This phenomenon is a specific manifestation of the “semi–circle law” , which however extends beyond the range of validity expected from that theory. In particular, the observation of a QHI behavior in the non–linear response regime indicates a surprising robustness of the phenomenon, and has provided support to the idea that it is intimately related to the validity of duality symmetry . In a previous work , Shimshoni and Auerbach proposed a transport mechanism consistent with the above–described QHI phenomenon. The mechanism involves hopping across the junction between edge–states surrounding nearest neighbors in a random network of $`1/k`$-QH liquid puddles, carried out by quantum tunneling assisted by the temperature and current bias. Then, neglecting the quantum interference between different junctions in the network, it is proven that the Hall resistance is quantized at the value dictated by the QH liquid, irrespective of the details of the longitudinal resistance associated with the hopping processes in the junctions. Note that this network model can be extended to the case where the liquid puddles do not consist of a single type (a situation that is likely to be applicable in an insulating regime close to more than one fundamental QH state), in which case $`\rho _{xy}B`$ can be established (similarly to data in Ref. ). It is later shown that, within this model, one also obtains an activated–like behavior similar to Eq. (2) (though with $`\nu _0(T)`$ interpolating between $`T`$ and a constant in a way different from the linear expression Eq. (3)). The linear dependence on $`\mathrm{\Delta }\nu `$ in this model is attributed to the relation between the area–fraction of the liquid and the barrier heights. 
Underlying the above–described model for the transport, there is an essential assumption of a finite dephasing length $`L_\varphi `$, beyond which quantum interference terms are suppressed. As long as the size of a QH liquid puddle, around which electronic edge–states are extended, is larger than $`L_\varphi `$, the classical random resistor network model is justified. In this sense, the transport regime dominated by nearest–neighbor hopping processes is “classical”. This holds even though the resistance associated with a single junction in the network (and given by a Landauer formula, where the two neighboring puddles are regarded as macroscopic reservoirs) is possibly dictated by quantum tunneling through the barrier. The classical model of Ref. is not obviously a unique scenario which yields a quantized Hall resistance away from the strict QH phase. To test this, in a recent work Pryadko and Auerbach have examined the effect of quantum interference in the network on $`\rho _{xy}`$, and found a deviation from the quantized value. In the case where $`L_\varphi `$ is much larger than a puddle size, $`\rho _{xy}`$ vs. $`B`$ indicates an exponential divergence towards the insulating regime. A similar trend is indicated in other recent numerical data as well . This suggests that (counter to the naive intuition!) a classical scenario is actually a necessary condition for supporting a QHI behavior; it also follows that the range of parameters in which it is observed should coincide with the range of validity of Eqs. (2), (3). A cross–over from the classical to a quantum transport regime is expected when the size of the QH puddles becomes smaller than $`L_\varphi `$. Then, the nearest neighbor localized puddle is not necessarily the optimal destination of a hopping electron. Typically, randomly distributed electronic states that are close in energy are not close in real space. As a consequence, the dominant transport mechanism is variable range hopping (VRH) . In this regime, a typical hop occurs between localized states separated by a distance $`R_h(T)`$, which minimizes the exponential suppression of the hopping probability due to the difference in both energy and real space. Assuming a Coulomb gap in the density of states , this hopping length is given by $$R_h(T)\left(\frac{\xi e^2}{ϵk_BT}\right)^{1/2};$$ (4) here $`\xi `$ is the localization length, and $`ϵ`$ the dielectric constant. The resulting expression for the longitudinal resistance in the insulator is $$\rho _{xx}\rho _0\mathrm{exp}\left[\left(\frac{T_0}{T}\right)^{1/2}\right],T_0=\frac{e^2}{k_Bϵ\xi }.$$ (5) (Similarly, in the QH phase Eq. (5) holds for $`1/\rho _{xx}`$; to avoid confusion, in the rest of the paper the expressions for $`\rho _{xx}`$ correspond to the insulator.) To convert Eq. (5) into a dependence on the filling fraction, note that the localization length $`\xi `$ (which by definition describes a typical cluster over which an electronic state is extended) coincides with the correlation length which tends to diverge near the transition : $$\xi =\xi _0\left(\frac{|\mathrm{\Delta }\nu |}{\nu _c}\right)^x$$ (6) (where $`\xi _0`$ is the value of $`\xi `$ deep in the localized phase). The critical behavior Eq. (6) is valid for $`\mathrm{\Delta }\nu `$ small compared to $`\nu _c`$, so that $`\xi `$ is large compared to $`\xi _0`$. On the other hand, the mechanism of VRH dominates the transport as long as $`\xi `$ is finite and smaller than $`L_\varphi `$. Provided the latter is a few orders of magnitude larger than $`\xi _0`$, there is a range of parameters where Eq. 
(5) holds in coincidence with Eq. (6) . As a result, one obtains $$\rho _{xx}\rho _0\mathrm{exp}\left[\left(\frac{C|\mathrm{\Delta }\nu |^x}{T}\right)^{1/2}\right],C\frac{e^2}{k_Bϵ\xi _0\nu _c^x}.$$ (7) In the regime where Eq. (7) is applicable, the experimental data exhibit three prominent features: (a) the scaling form Eq. (1) is recovered with $`\kappa =1/x0.43`$ (given that indeed $`x=7/3`$), and $`f(X)e^{(CX^x)^{1/2}}`$; (b) at a given $`\mathrm{\Delta }\nu `$, $$\mathrm{log}\rho _{xx}(T)T^{1/2};$$ (8) and (c) isotherms plotted as a function of $`\nu `$ are of the form $$\mathrm{log}\rho _{xx}(\nu )|\mathrm{\Delta }\nu |^{1.15}.$$ (9) Comparing Eq. (9) with the empirical resistivity law (2), we observe that by mere coincidence (which stems from the specific value of the exponent $`x`$), the two functional forms are practically indistinguishable. Similarly, it is hard to distinguish the temperature dependence Eq. (8) from the fit to $`T^\kappa `$ employed in Ref. ; it is suggested that the VRH scenario is, in fact, a more appropriate basis for interpretation of the data in the low $`T`$ regime. As mentioned above, the VRH scenario is consistent in the regime where the transport is quantum coherent, namely for $`\xi <L_\varphi `$. Hence, the quantum regime terminates once $`\xi `$ approaches $`L_\varphi `$, due either to an increase of temperature or to the divergence of $`\xi `$ sufficiently close to $`\nu _c`$. To estimate the boundary of the corresponding region in parameter space, an explicit expression for $`L_\varphi `$ is needed. It turns out that in the VRH regime, the length scale which plays the role of a dephasing length is the hopping length $`R_h(T)`$ (Eq. (4)). This implies that a cross–over to a “classical” transport regime occurs at $`TT_0`$ (where $`T_0`$ is defined in Eq. (5)). Note that this criterion is consistent with the observation that for $`T>T_0`$ the longitudinal resistivity no longer indicates the exponential divergence characteristic of strong localization. Employing Eq. (6) we conclude that for a fixed $`\nu `$, a cross–over to the classical regime occurs at a temperature $`T_x`$, where $$T_xT_0=\frac{e^2}{k_Bϵ\xi _0}\left(\frac{|\mathrm{\Delta }\nu |}{\nu _c}\right)^x.$$ (10) Alternatively, for a fixed $`T`$, the cross–over occurs at $`|\mathrm{\Delta }\nu |_x`$, where $$\frac{|\mathrm{\Delta }\nu |_x}{\nu _c}\left(\frac{ϵ\xi _0k_BT}{e^2}\right)^{1/x}.$$ (11) As argued in Ref. , the latter expression defines the width of the peaks in $`\sigma _{xx}`$ near QH transitions. However, it should be emphasized that (at a fixed $`T`$) critical scaling of the data is expected to hold outside this width, while $`|\mathrm{\Delta }\nu |<|\mathrm{\Delta }\nu |_x`$ corresponds to a classical transport regime. I next show that the data which clearly manifest the activated–like resistivity law (2), (3) (Ref. ) and a QHI behavior (Refs. ) mostly correspond to the classical regime by the criterion suggested above. A quantitative estimate of $`|\mathrm{\Delta }\nu |_x`$ from Eq. (11) is possible provided the “bare” localization length $`\xi _0`$ is known. Unfortunately, this parameter cannot be extracted independently from the available information about the samples. However, the fact that the integer QH effect is observed indicates that the single–electron states are localized over a length scale at least as large as the magnetic length $`l=(\mathrm{}c/eB)^{1/2}`$. 
Hence, the insertion $`\xi _0l`$ provides a minimal estimate of $`|\mathrm{\Delta }\nu |_x`$ for a given $`T`$. In Ref. , close to the critical field in the InGaAs/InP sample ($`B_c=2.14T`$, corresponding to $`\nu _c=0.562`$ and carrier density $`n=3\times 10^{10}cm^2`$), one gets $`l170\AA `$. The implied lower bound on the width of the classical regime is $`(\mathrm{\Delta }\nu )_x/\nu _c\pm 0.2`$ for the highest temperature isotherm ($`T=2.21K`$), and $`(\mathrm{\Delta }\nu )_x/\nu _c\pm 0.1`$ for $`T=0.3K`$. Comparing with the data, it turns out that the range of $`\nu `$’s where $`\mathrm{log}\rho _{xx}`$ vs. $`\nu `$ is strictly linear is not much larger than this lower bound. A more conclusive statement can be made regarding the quantized Hall resistance data of Hilke et al. in Ref : there, $`l130\AA `$, which implies that at the lowest displayed temperature ($`T0.3K`$), the classical regime extends at least within $`\mathrm{\Delta }B/B_c0.09`$. Indeed, this estimate implies an upper field $`B_u=B_c\times 1.09`$ which roughly coincides with the field at which the $`\rho _{xy}`$ data terminate (due to insufficient accuracy of the measurement). Note that the range of observed QHI increases with $`T`$ or with an increased current bias, as long as a plateau in the QH phase is preserved. Beyond a certain $`T`$, the quantization in the insulator is destroyed at the same time as the entire plateau, due to excitations to higher Landau levels. To further test the central arguments of this paper, one should examine experimental data that extend over a wide enough range of temperatures below and above the cross-over point $`T_x`$. The classical and quantum regimes are then clearly distinct in terms of the $`T`$-dependence of $`\rho _{xx}`$ for a given $`\nu `$. The functional dependence on $`\nu `$ is, however, nearly identical: $`\mathrm{log}\rho _{xx}`$ is expected to be approximately linear in $`\nu `$ in a wide range of parameters extending over both regimes. This is possibly a major source of confusion in the literature. In particular, in Ref. a cross–over in the $`T`$-dependence is clearly observed; however, the single cross–over point identified there ($`T_x0.1K`$) is an average over a range of filling fractions. Similarly, it is possible that in Ref. as well, the outskirts of the range of $`\mathrm{\Delta }\nu `$ indicating $`\mathrm{log}\rho _{xx}\nu `$ extend into the quantum regime. The formula Eq. (3) is then not necessarily the only possible fit of the slope. To summarize, I propose an interpretation of the extensive set of data close to QH transitions which distinguishes two conduction regimes. The classical regime is established closer to the critical point and at relatively high $`T`$. It is dominated by hopping between nearest–neighbor conducting QH puddles, whose typical size is larger than the dephasing length $`L_\varphi `$. Hence, the transport coefficients do not depend on $`L_\varphi `$, but rather on the details of the narrow junctions separating the puddles. When mapped to a QH liquid–to–insulator transition, the characteristic behavior of the resistivity tensor in this regime is an activated–like $`\rho _{xx}`$, and quantization of $`\rho _{xy}`$ in the insulator. The quantum regime is established at lower $`T`$ and farther from the critical point, where the transport is dominated by VRH. 
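The quantitative estimates above follow from the magnetic length and from Eq. (11). A minimal check is sketched below; the dielectric constant is set to a nominal semiconductor value of $`ϵ=13`$ (an assumption, not a number quoted in the text), which suffices to reproduce the quoted ±0.2 and ±0.1 bounds.

```python
import numpy as np

HBAR, E_CH = 1.0546e-34, 1.6022e-19     # SI units
KB_EV, COUL = 8.617e-5, 1.44            # k_B in eV/K, e^2/(4*pi*eps0) in eV*nm

def magnetic_length_nm(B_tesla):
    """l = sqrt(hbar/(e B)) in nm; ~17.5 nm (175 Angstrom) at B = 2.14 T."""
    return np.sqrt(HBAR / (E_CH * B_tesla)) * 1e9

def classical_width(T_kelvin, xi0_nm, eps=13.0, x=7.0 / 3.0):
    """Eq. (11): |delta nu|_x / nu_c ~ (eps * xi0 * k_B * T / e^2)^(1/x)."""
    return (eps * xi0_nm * KB_EV * T_kelvin / COUL) ** (1.0 / x)

l = magnetic_length_nm(2.14)
print(10 * l)                                               # ~175 Angstrom, vs. the quoted ~170
print(classical_width(2.21, l), classical_width(0.3, l))    # ~0.2 and ~0.1
```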
The limit of validity of VRH (corresponding to $`\xi R_h(T)`$, where $`R_h(T)`$ is identified with $`L_\varphi `$), provides an estimate of the boundary between the regimes (Eqs. (10), (11)). The classical and quantum regimes are very hard to distinguish by $`\nu `$–dependence of $`\mathrm{log}\rho _{xx}`$ ($`\mathrm{\Delta }\nu `$ vs. $`(\mathrm{\Delta }\nu )^{1.15}`$, respectively). It is predicted that the cross–over between them should be more clearly indicated by a change in the $`T`$–dependence, accompanied by a deviation of $`\rho _{xy}`$ in the insulator from a quantized plateau. ###### Acknowledgements. I thank A. Auerbach, S. Girvin, M. Hilke, D. Huse, S. Murphy, D. Shahar, U. Sivan and S. Sondhi for useful conversations, and P. Coleridge for informing me of his data prior to publication. This work was supported by grant no. 96–00294 from the United States–Israel Binational Science Foundation (BSF), Jerusalem, Israel.
no-problem/9904/astro-ph9904290.html
ar5iv
text
# Is there a 4.5 PeV neutron line in the cosmic ray spectrum? ## I Introduction Recently we presented a model to fit the high energy cosmic ray spectrum using the hypothesis that the electron neutrino is a tachyon. A good fit to the spectrum was obtained using $`|m_\nu |\sqrt{m^2}=0.5\pm 0.25`$ eV/c$`^2.`$ The signature prediction of the model is the existence of a neutron flux ‘spike’ in the cosmic rays centered on $`E=4.5\pm 2.2`$ PeV, and having a width $`\mathrm{\Delta }\mathrm{log}E=0.1`$ (FWHM). Although the existence of neutral cosmic rays from point sources remains a highly controversial subject, we report here that an examination of the published literature on cosmic rays from Cygnus X-3 reveals just such a hitherto unreported neutral particle spike centered on E = 4.5 PeV with a level of statistical significance of $`6\sigma .`$ An additional prediction of the model that the integrated flux of neutrons above 0.5 EeV should be 0.048 percent that above 2 PeV is also consistent with results from two out of three experiments. Although few physicists have taken tachyons seriously since they were first proposed in 1962, their existence is clearly an experimental question. In 1985 Chodos, Hauser and Kostelecký, suggested that neutrinos were tachyons – an idea that is consistent with experiments used to determine the neutrino mass. Chodos et al. also suggested a remarkable empirical test of the tachyonic neutrino hypothesis, namely that stable particles should decay when they travel with sufficiently high energies. Consider, for example, the energetically forbidden decay $`pn+e^++\nu _e.`$ In order to conserve energy in the CM frame the neutrino would need to have $`E<0`$. But tachyons with $`m^2<0`$ have $`E<p,`$ and therefore the sign of their energy in the lab frame $`E_{lab}=\gamma (E+\beta pcos\theta )`$ will be positive for a proton velocity $`\beta >\beta _{th}E/pcos\theta .`$ With the aid of a little kinematics it can easily be shown that the threshold energy for proton decay is $`E_{th}1.7|m_\nu |^1`$ PeV, with $`|m_\nu |`$ in eV. Thus, if neutrinos are tachyons, energetically forbidden decays become allowed when the parent particle has sufficient energy – in seeming contradiction with the principle of relativity that whether or not a process occurs should not depend on the observer’s reference frame. That contradiction is only an apparent one, however, because what appears to the lab observer as a proton decay emitting a neutrino appears to the CM observer as a proton absorbing an antineutrino from a background sea. ## II Cosmic Rays Since cosmic rays bombard the Earth with energies far in excess of what can be achieved in present day accelerators, it is natural to ask whether any evidence for a process such as proton decay exists there at very high energies. One striking feature of the cosmic ray spectrum is the “knee” or change in power law that occurs at $`E4`$ PeV. Various two-source mechanisms have been suggested to account for this spectral feature, but some researchers have identified it as arising from a single type of source. In 1992 Kostelecký suggested that for a tachyonic neutrino mass $`|m_\nu |0.3eV,`$ the proton decay threshold energy occurs at the knee of the cosmic ray spectrum, and could explain its existence. 
The idea is that cosmic ray nucleons on their way to Earth would lose energy through a chain of decays $`pnpn\mathrm{},`$ which would deplete the spectrum at energies above $`E_{th}.`$ However, Kostelecký regarded the existence of the knee by itself as insufficient evidence for the tachyonic neutrino hypothesis in view of other more conventional explanations of the knee of the cosmic ray spectrum. He also did not attempt to model the spectrum, nor mention the signature neutron spike. Recently this author has developed a tachyonic neutrino model that fits a number of features of the cosmic ray spectrum in addition to the knee. These include the existence and position of the “ankle” (another change in power law at $`E6`$ EeV), the specific changes in power law at the knee and ankle, the changes in composition of cosmic rays with energy, and the ability of cosmic rays to reach us above the conjectured GZK “cutoff.” Although the fit to the cosmic ray spectrum was a good one, the model is highly speculative, because it is at variance with conventional wisdom about cosmic rays and it arbitrarily assumed that the decay rate for protons (for $`E>E_{th}`$) was far greater than that for neutrons. Nevertheless, the model did make the striking prediction of a cosmic ray neutron flux in a narrow range of energies just above $`E_{th}`$ – a neutron “spike.” The pile up of neutrons in a narrow interval just above $`E_{th}`$ is a consequence of the fractional energy loss of the nucleon in proton decay becoming progressively smaller, the closer the proton energy gets to $`E_{th}`$. The position of the predicted cosmic ray neutron spike depends on the value assumed for $`|m_\nu |`$. From the fit to the cosmic ray spectrum we found $`|m_\nu |=0.5\pm 0.25`$ eV/c$`^2,`$ and hence we predicted a neutron spike at $`E=4.5\pm 2.2`$ PeV. In fact the model predicted that most nucleons should be neutrons for $`E>E_{th},`$ because it was assumed that as nucleons lose energy in the $`pnp\mathrm{}`$ decay chain, the lifetime and hence the decay mean free path for neutrons is far greater than for protons, and so nucleons above $`E_{th}`$ would spend nearly all of their time en route as neutrons. But, the model also predicts that for energies above the spike the neutron component does not become an appreciable fraction of the total cosmic ray flux until around 1 EeV. While neutrons might reach Earth at EeV energies in conventional cosmic ray models, it would be difficult to understand any sizable neutron component at energies as low as E=4.5 PeV, where the neutron mean free path before decay would be only about 100 ly. In the present model, however, A = 1 cosmic rays can travel very many neutron decay lengths and still arrive as neutrons because many steps of the $`pnp\mathrm{}`$ decay chain occur for nucleons having energies above $`E_{th}.`$ ## III Cygnus X-3 Data One way to look for a neutron flux would be to find a cosmic ray signal that points back to a specific source, since neutrons are unaffected by galactic magnetic fields. Starting in 1983 a number of cosmic ray groups did, in fact, report seeing signals in the PeV range from Hercules X-1 and Cygnus X-3. At the time these signals were believed to be either gamma rays or some hitherto unknown long-lived neutral particle, since neutrons, as already noted, should not live long enough to reach Earth (except in the present model). Some of the experiments coupled detection of extensive air showers with detection of underground muons. 
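Two of the numbers used above are easy to check: the threshold formula $`E_{th}1.7|m_\nu |^1`$ PeV maps the fitted mass range onto the quoted spike position, and simple time dilation gives the neutron decay length. The sketch below only evaluates these two formulas with standard constants; it adds no physics beyond them.

```python
def proton_decay_threshold_pev(m_nu_ev):
    """E_th ~ 1.7 / |m_nu| PeV, the kinematic threshold quoted in the text."""
    return 1.7 / m_nu_ev

def neutron_decay_length_ly(E_PeV, tau_s=880.0, m_GeV=0.9396):
    """Mean decay length gamma*c*tau of a relativistic neutron, in light years."""
    gamma = E_PeV * 1.0e6 / m_GeV
    return gamma * 2.998e8 * tau_s / 9.461e15    # metres per light year

for m in (0.25, 0.5, 0.75):                      # fitted |m_nu| = 0.5 +/- 0.25 eV
    print(m, proton_decay_threshold_pev(m))      # 6.8, 3.4, 2.3 PeV
print(neutron_decay_length_ly(4.5))              # ~130 ly, the "about 100 ly" quoted above
print(neutron_decay_length_ly(500.0))            # ~1.5e4 ly at 0.5 EeV
```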
The observed high muon intensity was found to be consistent with hadrons but not with showers induced by gamma rays. It was widely believed that the mass of the neutral particle was $`m1`$ GeV/c$`^2.`$ Thus, all the observed or conjectured properties of these particles were consistent with neutrons: neutral strongly interacting particles with $`m1`$ GeV/c$`^2.`$ Following a period of excitement in the 1980’s, many researchers began to look critically at some of the observations of ultra-high energy cosmic rays from point sources. This skepticism was based in part on the inconsistencies between results reported in different experiments. As Chardin and Gerbier have noted, a number of papers used data selection procedures that made direct comparisons difficult, e.g., using different phase intervals to make cuts, variously reporting the total flux or only the flux in a particular phase bin, and reporting only “muon-poor” events. Also, some papers appeared to inflate the statistical significance of their results. But, the most serious challenge to the idea of neutral particles in the PeV range from Cygnus X-3 and other point sources came from a trio of high sensitivity experiments that reported seeing no signals from point sources claimed earlier. In the most sensitive experiment of the three, the upper limit on the flux of neutral particles from Cygnus X-3 above 1.175 PeV was far below the fluxes reported by those experiments claiming signals earlier. There seems to be only two possibilities: either all the earlier experiments claiming signals were in error, or Cygnus X-3 and other reported sources all had turned off about the time improved instrumentation became available. Table I offers some support for the latter possibility, because (a) the phases of the signals are in rough agreement in three experiments, and (b) the integrated flux above a PeV does appear to systematically decrease over time taking all experiments together. (Among those claiming signals only those claiming more than $`4\sigma `$ have been listed, and among those citing upper limits only those giving upper limits on the flux above a PeV have been listed.) The suggestion that signals from Cygnus X-3 have fallen with time was first raised by N. C. Rana et al. based on X-ray and gamma ray data in four different wavelength regions. In what follows, we make the “optimistic” assumption that earlier experiments were seeing real signals, and we consider to what extent those reports of signals from Cygnus X-3 support the prediction of a 4.5 PeV neutron spike. In the 1980’s there were eight cosmic ray groups that cited fluxes in the PeV range of signals pointing back to Cygnus X-3, (some which were inconsistent as mentioned earlier.) In nearly all cases limited statistics required reporting the flux integrated over energy in only one or at most two energy intervals. One group (Lloyd Evans et al.), however, had good enough statistics to report fluxes in eight energy bins spanning the location of the predicted 4.5 PeV neutron spike, and it had an energy acceptance threshold near $`E_0`$ = 1 PeV, which could give one energy bin before the spike itself. The signal seen by Lloyd Evans et al. from Cygnus X-3 did not appear until the data is selected on the basis of orbital phase determined from the X-ray binary’s 4.79 h orbital period, and the time of signal arrival. Lloyd-Evans et al. found that if they looked at the number of counts in 40 phase bins, one of these bins showed a sizable excess (73 counts when the average was 39). 
The information in Table II is taken from Lloyd-Evans et al., with the last column added by this author. Fig. 1 displays the data in that last column. We would expect a flat distribution on the basis of chance, assuming that the signal were just a statistical fluctuation. In fact, averaged over all phases, the distribution must be flat, with zero height, regardless of whether the signal is real or not. Note that a spike appears centered on the value predicted by the tachyonic neutrino model, and that all the remaining bins have a flux consistent with zero. The Gaussian curve drawn with arbitrary height in the figure shows what would be predicted by the model given a neutron spike of width $`\mathrm{\Delta }\mathrm{log}E=0.1(FWHM)`$ and a 50 percent energy resolution ($`\mathrm{\Delta }\mathrm{log}E=\pm 0.176`$). According to Lloyd-Evans, the actual resolution was probably around 50 percent, and very likely less than 100 percent. We estimate the statistical significance of this spike occurring by chance by dividing the excess number of events in the two bins straddling 5 PeV by the square root of the expected number of events in those two bins: $`28.4/\sqrt{22.6}=6.0\sigma .`$ It is interesting that in their article, Lloyd-Evans et al. displayed only the integrated flux $`I(>E)`$ versus energy, and hence failed to mention the spike. Instead, they simply noted that the integrated spectrum appeared to steepen right after 10 PeV. How can we be sure that the spike seen in the Lloyd-Evans et al. data is not an artifact of the data analysis or a statistical fluctuation? Six standard deviations may seem interesting, but the original peak in their phase plot was far less impressive, particularly allowing for a “trials factor” of 40, since such a peak might have been seen in any one of the 40 phase bins. Suppose that in fact the original peak in the phase plot were a statistical fluctuation; how could one then get a $`6\sigma `$ peak in the flux versus energy distribution for events in a specific phase bin? Clearly, such a peak would require some correlation between energy and phase. This could in principle occur, because observed cosmic ray energy is correlated with declination angle, and hence with time of day. However, all cosmic rays in a given phase bin arrive at one of five, i.e., 24/4.79, times throughout the day, and those arrival times slowly advance from day to day, since the Cygnus X-3 period is not exactly divisible into 24 hours. Thus, over the years of data-taking each phase bin would sample times of the day with an almost uniform distribution, making it difficult to see how a phase-energy correlation could occur. (It could be that at their source the phase and energy of cosmic rays are correlated, but in that case we would be dealing with a real source, not a statistical fluctuation, as hypothesized above.) Ideally, one would want to combine the Lloyd-Evans et al. data with that of other experiments in the PeV region to see if the spike is either destroyed or enhanced. Several problems arise with the other existing data, in which a signal is claimed from Cygnus X-3: one experiment used only “muon-poor” events, two experiments reported only the integral flux above some energy (no energy bin defined), two reported the flux in an energy bin three times the width used by Lloyd-Evans, and none was contemporaneous with Lloyd-Evans, thereby severely diminishing their utility. 
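The 6σ figure is simply the excess over the square root of the expected count in the two bins straddling 5 PeV; a Poisson tail probability gives the same qualitative message. The numbers below are those stated above; the Poisson calculation is added only for comparison.

```python
import math

excess, expected = 28.4, 22.6                 # events in the two bins straddling 5 PeV
print(excess / math.sqrt(expected))           # ~6.0 "sigma", as quoted in the text

# Poisson probability of a fluctuation at least this large, before any trials factor
observed = int(round(expected + excess))      # 51 events
p_tail = 1.0 - sum(math.exp(-expected) * expected**n / math.factorial(n)
                   for n in range(observed))
print(p_tail)                                 # of order 1e-7
```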
Aside from the spike, one other prediction of the tachyonic neutrino model is that neutrons should also be seen as a significant and rising fraction of the cosmic ray flux above around 1.0 EeV. In fact, two cosmic ray groups have reported seeing neutral particles from Cygnus X-3 having energies above 0.5 EeV with fluxes of $`1.8\pm 0.7`$, and $`2.0\pm 0.6`$, while a third group reported merely an upper limit to the flux $`<0.4`$ – all in units of $`10^{17}`$ particles cm<sup>-2</sup> s<sup>-1</sup>. These measured fluxes above 0.5 EeV can be compared directly with the neutron flux predictions from the tachyonic neutrino model. As noted previously, the ratio of the integral flux of neutrons above 0.5 EeV to that above 2 PeV is predicted to be $`R=4.8\times 10^4.`$ The predicted neutron flux for $`E>0.5`$ EeV is then $`R`$ times the measured flux reported by Lloyd-Evans et al. for $`E>2`$ PeV, or: $`R\times 7.4\pm 3.2\times 10^{14}=3.5\pm 1.5\times 10^{17}`$ particles cm<sup>-2</sup> s<sup>-1</sup>, which is in quite good agreement with the two groups that measured a flux, rather than an upper limit. Although subsequent data accumulation by these two groups failed to show a signal from Cygnus X-3, that only lends additional support to the hypothesis that the source faded over time. If it is true that Cygnus X-3 and other point sources were active in the early 1980’s and subsequently have turned off, is there any way to check whether there really is a 4.5 PeV neutron spike without waiting for specific sources to come back on? Without knowing where the sources are, the model can make no prediction of the anisotropy or the angular distribution of sources of high energy cosmic rays. However, recall that the model predicts that all the cosmic rays include a 4.5 PeV neutron spike, not just those pointing back to the handful of possible sources looked at so far. Thus, if one selects events in a narrow energy band centered on 4.5 PeV, one could look at their arrival directions on the two dimensional map of the sky, and see if there is a noticeable clustering of points, which would indicate neutral particles coming from specific sources. Moreover, if those sources were episodic, one should observe a nonuniform distribution in arrival times for events for a given source. Consider a specific example. The integrated flux in the 4.5 PeV spike is 0.1 neutrons per m<sup>2</sup>-sr-s, which would give around 3 million counts over 5 years for an array of area 250,000 m<sup>2</sup>. If the array had an energy resolution of 100 percent, it would also record a background count rate roughly four times as great in the energy bin centered on 4.5 PeV. Suppose the angular resolution were $`\mathrm{\Delta }\theta =0.01`$ rad, which would allow up to $`4/\mathrm{\Delta }\theta ^2=4\times 10^4`$ solid angle bins to be defined. Each bin would then have on the average 400 background counts. Further suppose that the cosmic rays reaching Earth came from N point sources; then those solid angle bins pointing back to sources would have an average signal to background ratio: $`10^4/N.`$ Identification of sources should then be possible, unless N were larger than the number of solid angle bins, and no subset of sources were appreciably brighter than others. ## IV Summary In summary, a highly speculative tachyonic neutrino model, which fits the cosmic ray spectrum well, predicts a spike of neutrons at an energy where, given the neutron lifetime and distance to likely sources, very few should appear. 
A search through the literature for sources of neutral cosmic rays has identified a particular experiment with a favorable energy acceptance threshold, good enough statistics, and enough energy bins spanning the region of the neutron spike to test the prediction. The data do show a $`6\sigma `$ spike located right at the predicted energy, which was not identified in the original work. The failure of other, subsequent and more sensitive experiments to see a signal from Cygnus X-3 would seem to require that this source has since turned off – a possibility given some support by both time trends of data from different experiments, and data within the same experiments. The characteristics of the neutral particles from Cygnus X-3 seem to be consistent with neutrons rather than gamma rays, based on muon data from various experiments. For the EeV region, where the model also predicts neutrons (though not a spike), two out of three experiments show a positive signal from Cygnus X-3, and they report a flux whose magnitude (relative to the flux in the spike) is well-predicted by the model. The hypothesis that the electron neutrino is a tachyon would seem to be supported, and it can be further tested without waiting for specific point sources to come back on. ## ACKNOWLEDGMENTS The author wishes to thank John Wallin for helpful comments. He also could not have done without the critical comments of his colleague Robert Ellsworth.
no-problem/9904/cond-mat9904009.html
ar5iv
text
# Comparative study of spanning cluster distributions in different dimensions ## Abstract The probability distributions of the masses of the clusters spanning from top to bottom of a percolating lattice at the percolation threshold are obtained in all dimensions from two to five. The first two cumulants and the exponents for the universal scaling functions are shown to have simple power law variations with the dimensionality. The cases where multiple spanning clusters occur are discussed separately and compared. Percolation is a subject which has been studied extensively for the last few decades. The relevance of percolation in various areas of physics is also well established. Although many of the properties of percolating systems are well understood and studied, there still remain many details to be explored and intricate questions to be addressed. At the critical point (percolation threshold) there appears for the first time a cluster spanning the whole lattice. The spanning cluster is a fractal in the sense that its mass $`M`$ scales with the length as $`L^D`$ where $`D<d`$, $`d`$ being the spatial dimension and $`D`$ the fractal dimension. Distribution of cluster masses at and away from criticality has been studied in detail . Conditional probability distributions for spanning cluster (SC) masses, their moments, and other variables like the shortest path etc. also appear in the literature . While the distribution for the cluster masses shows a power law behaviour at criticality , the probabilities of spanning cluster masses have an entirely different variation . In this article we report a study of the probability distribution functions of the masses of the spanning clusters which span the lattice from top to bottom, and a comparative analysis for different dimensions. Here the condition that the cluster spans along one particular direction of the lattice is necessary and sufficient, and hence the condition of spanning along all directions is relaxed. We examine the distribution functions separately for the two cases: (a) when there exists only one SC, and (b) when there is more than one coexisting spanning cluster. Although case (a) occurs predominantly, case (b) has recently been established to have a finite non-zero probability of occurrence even in two dimensions. Little is known about the distribution functions of masses in case (b) and we attempt to extract as much information about it as possible. We have simulated $`L^d`$ hypercubic lattices in $`d`$ dimensions with helical boundary conditions where each site is occupied with a probability $`p`$. The clusters are identified using the Hoshen Kopelman algorithm. The largest lattices considered have sizes $`L=800`$ in $`d=2`$, $`L=60`$ in $`d=3`$, $`L=30`$ in $`d=4`$ and $`L=15`$ in $`d=5`$. A maximum of $`10^6`$ initial configurations (for the smallest lattices) were generated at the percolation threshold $`p_c`$ where the values of $`p_c`$ given in ref have been used. As it is known that $`M`$ scales as $`L^D`$, where $`D`$ is the fractal dimension of the spanning cluster, we have directly measured the probability distribution of $`M/L^D`$, i.e., the bin sizes are chosen to be proportional to $`1/L^D`$. We normalise the probabilities so that the total probability is unity. We find that the normalised probabilities plotted against $`m=M/L^D`$ all collapse on a single curve for different system sizes. This happens in all dimensions from two to five. As an example, the collapses in two and three dimensions are shown in Fig. 1. 
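A minimal two-dimensional version of this measurement can be written with a standard connected-component labelling routine. The sketch below uses open (rather than helical) boundaries and small lattices, so it only illustrates the procedure of collecting top-to-bottom spanning clusters and binning $`M/L^D`$ with bin widths proportional to $`1/L^D`$; it is not the production code used for the results quoted here.

```python
import numpy as np
from scipy import ndimage

P_C = 0.592746                 # 2D site-percolation threshold
D_F = 91.0 / 48.0              # fractal dimension of the spanning cluster in d = 2

def spanning_cluster_masses(L, rng):
    """Masses of clusters spanning from the top row to the bottom row of an
    L x L site-percolation lattice at p_c (nearest-neighbour connectivity)."""
    occupied = rng.random((L, L)) < P_C
    labels, _ = ndimage.label(occupied)
    spanning = set(labels[0][labels[0] > 0]) & set(labels[-1][labels[-1] > 0])
    return [int(np.sum(labels == lab)) for lab in spanning]

def scaled_mass_histogram(L, n_conf=200, bins=24, seed=1):
    """Histogram of m = M / L^D with L-independent bin edges in m,
    i.e. bin widths proportional to 1/L^D as described in the text."""
    rng = np.random.default_rng(seed)
    m = [mass / L**D_F for _ in range(n_conf)
         for mass in spanning_cluster_masses(L, rng)]
    return np.histogram(m, bins=bins, range=(0.0, 1.2), density=True)

print(scaled_mass_histogram(64)[0])
print(scaled_mass_histogram(128)[0])   # curves for different L should roughly collapse
```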
Finite size effects are stronger in higher dimensions. The probability distribution is of the form: $$P(M/L^D)f(M/L^D)$$ (1) where $$f(x)=Ax^\alpha \mathrm{exp}(\gamma x^\beta )$$ (2) Fitting the universal scaling function in the above form is best in two dimensions. However, for the tail of the distribution, the above form gives a very good fit even in higher dimensions. It may be added here that the tail of the distribution becomes important in many problems, e.g., in problems related to stock market fluctuations . The form of the probability distribution obtained here is very close to that studied in . We do not get any prefactor for the scaling functions here as the bin sizes are proportional to $`1/L^D`$. (This factor, as $`1/M`$, appears when the normalised probabilities are also divided by the bin sizes as in .) However, the exponents obtained in the present study are totally different. For example, in two dimensions, $`\beta =6.7\pm 0.1`$ and $`\gamma 10`$, while in the corresponding values are $`19`$ and $`10^8`$ respectively. The possible reasons for this discrepancy are discussed later. Quantitative comparison of the distributions for different dimensions is done by calculating the first and second cumulants of the distributions and studying their behaviour with the dimensionality. In each dimension, we extrapolate these results for $`1/L0`$ as there are some finite size effects. In general we fit the cumulants as linear functions of $`1/L`$ to extrapolate. The extrapolated values vary as simple power laws as given below (see Fig. 2) $$md^a$$ (3) $$\sigma ^2=m^2m^2d^{2b}$$ (4) with $`a=1.65\pm 0.1`$ and $`b=0.25\pm 0.01`$. We find the scaling form of the distribution by fitting with appropriate values of $`A,\alpha ,\gamma `$ and $`\beta `$. Again $`\alpha ,A`$ and $`\beta `$ show simple power law variations with dimensionality: the powers are close to -2 for $`\alpha `$ and $`\beta `$ (see Fig. 2); $`\gamma `$ apparently has no dependence on the dimensionality. The case where more than one spanning cluster exists has also been explored. We rank the spanning clusters by their sizes and obtain separately the distributions for the $`rth`$ largest cluster when the total number of spanning clusters is $`n`$. For two dimensions, when there exist two SC’s, the distribution function for the larger SC is clearly different from that of the unique SC (see Fig. 3). In particular, the distribution is more symmetric in comparison to that of the unique SC and more sharply peaked. One concludes that there is a different universal function for the SC’s when $`n>1`$. However, an attempt to find the exact form of this distribution is difficult because of the fluctuations in the data. This fluctuation is unavoidable as the probability of cases with $`n>1`$ is very small. However, certain features of the distribution of the masses are available from the present study. The mean value of $`M/L^D`$ varies appreciably, for example, in the $`n=2`$ case in two dimensions: for the largest SC $`m_{n=2,r=1}0.42`$ compared to $`m_{n=1,r=1}0.58`$, where $`m_{n,r}`$ is the mass of the $`r`$th largest SC when the number of SC is $`n`$. In higher dimensions, by contrast, the largest spanning clusters in the multiple SC cases become comparable in size whatever the number of SC’s. 
Such a result indicates that in the higher dimensions, the largest SC is unaffected by the presence of the others - consistent with the fact that it is easier to conceive of independent coexisting spanning clusters along one direction in large dimensions. However, the width becomes smaller and $`\sigma `$ shows a power law behaviour with $`n`$: $$\sigma =(m^2m^2)_{n,r=1}n^{2c}$$ (5) with $`c0.3\pm 0.02`$. This is shown for the case of five dimensions, where one can obtain an appreciable number of SC’s numerically (Fig. 4). We also get conclusive results for the ratios of the SC masses when, e.g., $`n=2`$. This ratio is around 1.4 in the case of $`d=2`$ (also obtained in ) and 2.2 for $`d=5`$. This indicates that the larger SC becomes dominant in higher dimensions. In summary, we have obtained several quantities related to the distribution of the spanning cluster masses that vary systematically with the dimensionality. A universal scaling function is obtained in each dimension, having a similar form with dimension-dependent exponents. These dependences appear as simple power law variations with the dimensionality. The dependence of the exponents on the dimensionality is not surprising; in fact, the variation of the cumulants with the dimension is related to the dimension-dependent exponents. One can, in principle, also fit these exponents as polynomial functions of $`(6d)`$ (as $`6`$ is the upper critical dimension in percolation). We keep the question of the exact variation of the exponents as a function of dimension open, as it is difficult to obtain a concrete form from the numerical simulations only. As mentioned before, these exponents do not match some earlier results . However, it must be noted that in these studies, the conditional probabilities were obtained with different boundary conditions in the sense that the clusters were required to span in all directions. There are differences in the values of $`m`$ and $`\sigma `$ as well, $`m`$ being smaller when the SC spans along one direction only. Hence the universal function seems to be highly dependent on the boundary conditions and in that sense only weakly universal. We also do not attempt to fit the scaling function by an alternative form as that admits yet another parameter and the fitting becomes difficult to handle. Another less important point is that in , the distributions are obtained for any number of SC present. Although the distributions in the one SC case and the two SC case are quite different, it should, however, not matter, as the latter has a very small probability of occurrence. We have also shown qualitatively that the distributions for the SC’s in the multiple SC case are different from those of the one SC case. The second cumulant of the largest SC, in particular, apparently varies as a power law with $`n`$. The mean $`m`$ for the largest SC varies appreciably in low dimensions but becomes a constant in higher dimensions. The author is grateful to B. K. Chakrabarti for discussions and D. Stauffer for critical comments on the manuscript. She also thanks A. Aharony for bringing ref to notice.
no-problem/9904/hep-ph9904455.html
ar5iv
text
# 1 Introduction ## 1 Introduction As is well known, the larger colour charge of gluons ($`C_A=N_\mathrm{c}=3`$) compared to quarks ($`C_\mathrm{F}=(N_{\mathrm{c}}^{}{}_{}{}^{2}1)/2N_\mathrm{c}=4/3`$) leads to various distinctive differences between the two types of jets, for recent articles see e.g. and the review . Thus, a detailed comparison of the properties of quark and gluon jets provides one of the most instructive tests of the basic ideas of QCD. An experimental verification of these differences has been a subject of quite intensive investigations, especially in recent years, e.g. . However, obtaining theoretically adequate information about the properties of the gluon jet is not an easy task. Recall that the analytical QCD results address the comparison between the energetic gluon and quark jets emerging from point-like colourless sources, and that (unlike the $`\mathrm{q}\overline{\mathrm{q}}`$ case) pure high energy gg events are at present not available experimentally.<sup>3</sup><sup>3</sup>3In principle, it is possible to create a pure source of colour singlet gg events at a future linear $`e^+e^{}`$ collider through the process $`\gamma \gamma \mathrm{gg}`$ . So far, most studies of the structure of gluon jets have been performed in three-jet events of $`e^+e^{}`$ annihilation. As a rule, these rely on a jet finding procedure both for selection of the $`\mathrm{q}\overline{\mathrm{q}}`$g events and for a separation between the jets in an event. Without special care, such an analysis is inherently ambiguous and may suffer from a lack of direct correspondence to the underlying theory. Recently some more sophisticated approaches have been exploited (see e.g. \[3, 5-8\]) which allow a better correspondence to the theory. There are still a number of issues which are frequently overlooked in the present gluon jet analyses, and some further theoretical effort is required. Clarification of these issues is the main aim of this paper. A more detailed description of the theoretical framework can be found in ref. . In particular the following problems are addressed. 1. Different approaches to the three-jet studies employ different definitions of the $`\mathrm{q}\overline{\mathrm{q}}`$g kinematics. In particular, this concerns such a key variable as the transverse momentum scale of the gluon, $`p_{}`$. Our first issue here is to discuss an exact definition of this quantity, which governs radiation from the gluon. 2. The definition of the three-jet topology with the gluon registered at a given $`p_{}`$ imposes an obvious requirement that there are no other subjets in the event with the transverse momentum exceeding $`p_{}`$. We have to investigate quantitatively the impact of this requirement on the jet sample. 3. To calculate predictions from perturbative QCD, using the assumption of local parton hadron duality (LPHD) , a cutoff is needed for the infrared singularities. As discussed in detail in ref such a cutoff depends on the soft hadronization process and cannot be uniquely specified from perturbative QCD alone. Thus, the result is necessarily model dependent. In what follows we discuss these three issues successively in sections 2, 3 and 4, and in section 5 we study their effect on analyses of 3-jet events in $`e^+e^{}`$-annihilation. 
## 2 Definition of $`p_{}`$ In the simplest case of soft radiation, $`p_{}`$ can be easily defined, as the quark and antiquark specify a unique direction. For large $`p_{}`$ gluons, however, the q and $`\overline{\mathrm{q}}`$ get recoils such that there is no obvious direction against which the transverse momentum should be measured. To have well defined expressions such a direction has to be specified. In the Lund dipole formalism \[10-16\] $`p_{}`$ has been defined according to (subscript $`\mathrm{Lu}`$ for Lund) $$p_{\mathrm{Lu}}^2\equiv \frac{s_{\mathrm{qg}}s_{\mathrm{g}\overline{\mathrm{q}}}}{s},$$ (1) where $`s_{\mathrm{qg}}`$ denotes the squared mass of the quark-gluon system etc. In this particular frame the gluon rapidity is given by the expression $$y=\frac{1}{2}\mathrm{ln}(\frac{s_{\mathrm{qg}}}{s_{\mathrm{g}\overline{\mathrm{q}}}}).$$ (2) The kinematically allowed region is given by $$p_{\mathrm{Lu}}<\frac{\sqrt{s}}{2};|y|<\mathrm{ln}\left(\frac{\sqrt{s}}{p_{\mathrm{Lu}}}\right)\equiv \frac{1}{2}(L-\kappa _{\mathrm{Lu}});L\equiv \mathrm{ln}(\frac{s}{\mathrm{\Lambda }^2}),\kappa _{\mathrm{Lu}}\equiv \mathrm{ln}(\frac{p_{\mathrm{Lu}}^2}{\mathrm{\Lambda }^2}).$$ (3) These variables have the advantage that the phase space element usually expressed in the scaled energy variables $`x_\mathrm{q}`$ and $`x_{\overline{\mathrm{q}}}`$ is exactly given by the simple relation $$s\mathrm{d}x_\mathrm{q}\mathrm{d}x_{\overline{\mathrm{q}}}=\mathrm{d}p_{\mathrm{Lu}}^2\mathrm{d}y.$$ (4) As discussed in section 5, $`p_{\mathrm{Lu}}`$ may also work well as a scale parameter in the QCD cascade. An alternative definition has also been used in the literature, e.g. by the Leningrad group $$p_{\mathrm{Le}}^2\equiv \frac{s_{\mathrm{qg}}s_{\mathrm{g}\overline{\mathrm{q}}}}{s_{\mathrm{q}\overline{\mathrm{q}}}}.$$ (5) This definition corresponds to the gluon transverse momentum in the $`\mathrm{q}\overline{\mathrm{q}}`$ cms (with respect to the $`\mathrm{q}\overline{\mathrm{q}}`$ direction). It is notable that in this frame the gluon rapidity is also exactly given by the expression in Eq (2). The two $`p_{}`$-definitions agree for soft gluons, but deviate for harder gluons. While $`p_{\mathrm{Lu}}`$ is always bounded by $`\sqrt{s}/2`$, $`p_{\mathrm{Le}}`$ has no kinematic upper limit in the massless case. ## 3 Bias from restrictions on subjet transverse momenta The effect of a cutoff in $`p_{}`$ has been discussed previously . Here we give a brief review of the results, in order to end the section with an investigation of the numerical importance of subleading terms. These are essential for a correct analysis of three-jet events, which will be discussed in section 5. To see the qualitative features of the bias we first study $`e^+e^{}`$ $`\to `$ $`\mathrm{q}\overline{\mathrm{q}}`$ events within the Leading Log approximation (LLA). The quark and antiquark emit gluons according to the well-known radiation pattern $$\mathrm{d}n_\mathrm{g}\approx C_\mathrm{F}\frac{\alpha _s}{\pi }\frac{\mathrm{d}x_\mathrm{q}\mathrm{d}x_{\overline{\mathrm{q}}}}{(1-x_\mathrm{q})(1-x_{\overline{\mathrm{q}}})}=C_\mathrm{F}\frac{\alpha _s(p_{}^2)}{\pi }\frac{\mathrm{d}p_{}^2}{p_{}^2}\mathrm{d}y\equiv C_\mathrm{F}\frac{\alpha _s(\kappa )}{\pi }\mathrm{d}\kappa \mathrm{d}y;\kappa \equiv \mathrm{ln}(p_{}^2/\mathrm{\Lambda }^2).$$ (6) We have here used Eq (4), and in the following we define $`p_{}`$ and $`y`$ according to Eqs (1) and (2), unless otherwise stated. 
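As a purely numerical restatement of Eqs (1), (2) and (5), the short Python sketch below evaluates the two transverse-momentum definitions and the gluon rapidity from the squared invariant masses of the subsystems. The function name and the sample kinematic point are ours, chosen only for illustration.

```python
import math

def gluon_scales(s_qg, s_gq, s_qq, s):
    """Transverse-momentum scales and rapidity of the gluon in a qqbar-g event.

    s_qg, s_gq : squared masses of the quark-gluon and gluon-antiquark systems
    s_qq       : squared mass of the quark-antiquark system
    s          : squared cms energy of the event
    """
    pt_lund = math.sqrt(s_qg * s_gq / s)       # Eq (1), Lund definition
    pt_lenin = math.sqrt(s_qg * s_gq / s_qq)   # Eq (5), Leningrad definition
    y = 0.5 * math.log(s_qg / s_gq)            # Eq (2), gluon rapidity
    return pt_lund, pt_lenin, y

# Illustrative kinematic point (GeV^2), chosen by hand:
s = 91.2 ** 2
s_qg, s_gq = 2000.0, 1500.0
s_qq = s - s_qg - s_gq                         # massless partons: s_qg + s_gq + s_qq = s

pt_lu, pt_le, y = gluon_scales(s_qg, s_gq, s_qq, s)
print(f"p_Lu = {pt_lu:.1f} GeV, p_Le = {pt_le:.1f} GeV, y = {y:.2f}")
# The two definitions agree for soft gluons (s_qq -> s); otherwise p_Le > p_Lu.
```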
Due to colour coherence the hadronic multiplicity $`N_\mathrm{g}(\kappa )`$ in a gluon jet depends on the $`p_{}`$ of the gluon and not on its energy (see, e.g., refs ). Summing up the contributions from all gluons in a cascade we arrive at the average multiplicity $`N_{\mathrm{q}\overline{\mathrm{q}}}(L=\mathrm{ln}(s/\mathrm{\Lambda }^2))`$ in the original $`\mathrm{q}\overline{\mathrm{q}}`$ system \[13, 15-18\] (Refs \[15-18\] include also nonleading terms.) $$N_{\mathrm{q}\overline{\mathrm{q}}}(L)\approx \int _{\kappa _0}^{L}d\kappa \int _{-\frac{1}{2}(L-\kappa )}^{\frac{1}{2}(L-\kappa )}dyC_\mathrm{F}\frac{\alpha _s(\kappa )}{\pi }N_\mathrm{g}(\kappa )=\int _{\kappa _0}^{L}d\kappa (L-\kappa )C_\mathrm{F}\frac{\alpha _s(\kappa )}{\pi }N_\mathrm{g}(\kappa ).$$ (7) (We have here introduced a lower cutoff $`\kappa _0`$ for the integral over transverse momentum. This point will be discussed in section 4.) Taking the derivative with respect to $`L`$ we find $$N_{\mathrm{q}\overline{\mathrm{q}}}^{}(L)\approx \int _{\kappa _0}^{L}d\kappa C_\mathrm{F}\frac{\alpha _s(\kappa )}{\pi }N_\mathrm{g}(\kappa ).$$ (8) Consider now a sample of events selected in such a way that there are no subjets with $`p_{}>p_{\mathrm{cut}}`$. (Within a $`k_{}`$-based cluster scheme with a resolution parameter $`p_{\mathrm{cut}}`$, this means that there are only two primary q and $`\overline{\mathrm{q}}`$ jets.) To obtain the multiplicity $`N_{\mathrm{q}\overline{\mathrm{q}}}(L,\kappa _{\mathrm{cut}})`$ in this biased sample, we must restrict the $`\kappa `$ integral in Eq (7) to the region $`\kappa <\kappa _{\mathrm{cut}}`$. We then find $`N_{\mathrm{q}\overline{\mathrm{q}}}(L,\kappa _{\mathrm{cut}})`$ $`\approx `$ $`N_{\mathrm{q}\overline{\mathrm{q}}}(\kappa _{\mathrm{cut}})+(L-\kappa _{\mathrm{cut}})N_{\mathrm{q}\overline{\mathrm{q}}}^{}(\kappa _{\mathrm{cut}}).`$ (9) The first term corresponds to two cones around the q and $`\overline{\mathrm{q}}`$ jet directions. Here the $`p_{}`$ of the emissions is limited by the kinematical constraint in Eq (3) rather than by $`\kappa _{\mathrm{cut}}`$. It also corresponds exactly to an unbiased $`\mathrm{q}\overline{\mathrm{q}}`$ system with cms energy $`p_{\mathrm{cut}}`$. The second term describes a central rapidity plateau of width $`(L-\kappa _{\mathrm{cut}})`$, in which the limit for gluon emission is given by the constraint $`\kappa _{\mathrm{cut}}`$. This expression for a two-jet event can be generalized for a biased multi-jet configuration, and a similar discussion applies also to the multiplicity variance, cf. ref . (Similar equations for biased two-jet and three-jet events were later discussed also in ref .) The average particle multiplicity in the selected two-jet sample is smaller than in an unbiased sample. The modification due to the bias is similar to the suppression from a Sudakov form factor. It is formally $`𝒪(\alpha _s)`$, but it also contains a factor $`\mathrm{ln}^2(s/p_{}^2)`$. Thus, it is small for large $`p_{}`$-values but it becomes significant for smaller $`p_{}`$. This clearly demonstrates that the multiplicity in this restricted case depends on two scales, $`\sqrt{s}`$ and $`p_{\mathrm{cut}}`$. The $`p_{}`$ of an emitted gluon is related to the virtual mass of the radiating parent quark. Therefore, the two scales $`\sqrt{s}/2`$ and $`p_{\mathrm{cut}}`$ represent the energy and virtuality of the quark and antiquark initiating the jets. Though the LLA result in Eq (9) describes the qualitative features of the bias, subleading corrections are needed for a quantitative analysis. 
Within the Modified Leading Log approximation (MLLA) , subleading terms are included, which affect the prediction for the unbiased multiplicities and, thus, implicitly also the biased multiplicity in Eq (9). Furthermore, it is shown in ref that the expression in Eq (9) for the biased multiplicity is explicitly changed when MLLA corrections are considered. An unbiased system should be restored when $`p_{\mathrm{cut}}`$ approaches the kinematical limit $`\sqrt{s}/2`$, but the r.h.s. of Eq (9) equals the unbiased quantity $`N_{\mathrm{q}\overline{\mathrm{q}}}(L)`$ only when $`p_{\mathrm{cut}}=\sqrt{s}`$. The relation consistent with the MLLA is $`N_{\mathrm{q}\overline{\mathrm{q}}}(L,\kappa _{\mathrm{cut}})`$ $`\approx `$ $`N_{\mathrm{q}\overline{\mathrm{q}}}(\kappa _{\mathrm{cut}}+c_\mathrm{q})+(L-\kappa _{\mathrm{cut}}-c_\mathrm{q})N_{\mathrm{q}\overline{\mathrm{q}}}^{}(\kappa _{\mathrm{cut}}+c_\mathrm{q});c_\mathrm{q}={\displaystyle \frac{3}{2}}.`$ (10) The bias is illustrated in Fig 1. The dotted line shows results from the Ariadne MC , when the Durham cluster algorithm is used to define a biased sample of events classified as two-jet events with a $`y_{\mathrm{cut}}`$ equal to $`p_{\mathrm{cut}}^2/s`$. The MC results agree well with the prediction of Eq (10), where for $`p_{\mathrm{cut}}`$ we have used the $`p_{}`$-definition in Eq (1) (solid line). The predicted effect is below 5% for $`p_{\mathrm{cut}}>20`$GeV, but increases rapidly for smaller $`p_{\mathrm{cut}}`$. Fig 1 presents also the result using the LLA relation in Eq (9) (dashed line). To elucidate the effect of the differences between Eq (9) and (10), we have used the same expression for the unbiased quantities $`N_{\mathrm{q}\overline{\mathrm{q}}}`$ and $`N_{\mathrm{q}\overline{\mathrm{q}}}^{}`$. (These are obtained by a simple fit to Ariadne MC results, which are in good agreement with the MLLA.) As seen, the subleading terms are important; the LLA relation significantly overestimates the effect. To our knowledge experimental data for this bias have not been presented. Such data should be obtainable in a rather straightforward analysis, which thus readily could test the accuracy of the MC result or the MLLA relation. ## 4 Infrared cutoffs Gluon radiation diverges for collinear and soft emissions. Therefore, to estimate the hadronic multiplicity from the assumption of LPHD , a cutoff is needed. Naturally, the cutoff must be Lorentz invariant. For collinear emissions a single Feynman diagram dominates, and there are two possibilities, the virtual mass, $`\mu `$, of the emitting parent parton or the transverse momentum, $`p_{}`$, of the emitted gluon measured relative to the parent parton direction. These quantities are connected by the relation $$p_{}^2=\mu ^2z(1-z),$$ (11) where $`z`$ equals the light cone momentum fraction taken by the emitted gluon. The transverse momentum is directly related to the formation time, and, therefore, we regard this as the most natural choice for a cutoff. (For a further discussion see ref .) For soft emissions no obvious cutoff is available, however. As several Feynman diagrams contribute and interfere, there is no unique parent parton. Consequently $`\mu ^2`$ or $`p_{}^2`$ cannot be uniquely specified and, therefore, cannot be directly used. (Obviously a cut in energy is not possible, as this is not Lorentz invariant.) 
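To make the size of the bias concrete, the sketch below evaluates the LLA relation in Eq (9) and the MLLA relation in Eq (10) side by side. The parametrization of the unbiased multiplicity is a toy assumption introduced here only for illustration; for the figure discussed above it is instead obtained from a fit to Ariadne results.

```python
import math

LAM = 0.3   # GeV, illustrative QCD scale

def N_unbiased(L):
    """Toy parametrization of the unbiased qqbar multiplicity at L = ln(s/Lambda^2).

    This MLLA-like form is an assumption for illustration only; the curves in the
    paper use a fit to Ariadne Monte Carlo results instead.
    """
    return 0.024 * math.exp(2.0 * math.sqrt(L))

def dN_dL(L, eps=1e-4):
    """Numerical derivative of the unbiased multiplicity."""
    return (N_unbiased(L + eps) - N_unbiased(L - eps)) / (2.0 * eps)

def N_biased(sqrt_s, p_cut, c_q=0.0):
    """Biased two-jet multiplicity: Eq (9) for c_q = 0 (LLA), Eq (10) for c_q = 3/2 (MLLA)."""
    L = 2.0 * math.log(sqrt_s / LAM)
    k = 2.0 * math.log(p_cut / LAM) + c_q
    return N_unbiased(k) + (L - k) * dN_dL(k)

for p_cut in (5.0, 10.0, 20.0, 40.0):
    print(f"p_cut = {p_cut:4.0f} GeV : "
          f"LLA = {N_biased(91.2, p_cut):5.1f}, MLLA = {N_biased(91.2, p_cut, c_q=1.5):5.1f}")
```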
For soft emissions from a single $`\mathrm{q}\overline{\mathrm{q}}`$ colour dipole a cutoff in $`p_{}`$ is still the natural choice if measured in the cms, where the $`\mathrm{q}`$ and $`\overline{\mathrm{q}}`$ move back to back. For emissions from a more complicated state the situation simplifies greatly in the large-$`N_\mathrm{c}`$ limit, as many interference terms disappear. In this limit the emission corresponds to a set of independent colour dipoles . The natural choice for the cutoff is then $`p_{}`$ in the cms of the emitting dipole (measured with respect to the dipole direction). We note that this implies that the soft gluons connect the hard partons in exactly the same way as the string in the string fragmentation model , which illustrates the connection between perturbative QCD and the string model . For the physical case with 3 colours, extra interference terms appear with relative magnitude $`1/N_\mathrm{c}^2`$ . Here nonplanar Feynman diagrams contribute, and it is impossible to uniquely specify a parent parton or a relevant $`p_{}`$. Thus, a more fundamental understanding of confinement is needed to specify the cutoff, which cannot be determined from perturbative QCD alone . In hadronization models the $`1/N_\mathrm{c}^2`$ interference terms correspond to the problem of “colour reconnection”, and different models have been proposed . None of these can be motivated from first principles, and only experimental data can differentiate among the various models. In spite of the formal uncertainties, the success of current Monte Carlo programs indicate that the colour suppressed interference terms do not have a very large effect. This is also supported by recent searches by OPAL of the reconnection effects in hadronic $`Z`$ events . In most parton cascade formalisms, a cascade cutoff motivated in the large-$`N_\mathrm{c}`$ limit is used also for finite $`N_\mathrm{c}`$. The colour interference effects are accounted for by reducing the colour factor from $`N_\mathrm{c}/2`$ to $`C_\mathrm{F}`$ in regions collinear with quarks and antiquarks, and, due to colour coherence, also in some parts of the central rapidity region. We note, however, that some subtle interference phenomena, as a matter of principle, cannot be absorbed into a probabilistic scheme, see for details. These are still awaiting a thorough experimental test. ## 5 Formalism for three-jet events After these general discussions we are now ready to consider three-jet $`\mathrm{q}\overline{\mathrm{q}}`$g systems. To simplify the discussion we first study the large-$`N_\mathrm{c}`$ limit. The emission of softer gluons from a $`\mathrm{q}\overline{\mathrm{q}}\mathrm{g}`$ system corresponds then to two dipoles which emit gluons independently. If a gluon jet is resolved with transverse momentum $`p_{}`$, this imposes a constraint on the emission of subjets from the two dipoles. Thus, the contribution from each dipole is determined by an expression like Eq (10). For relatively soft primary gluons the constraint should be given by $`p_{\mathrm{cut}}=p_\mathrm{g}`$. For hard gluons $`p_{\mathrm{Lu}}`$ is of the same order as its parent quark virtuality, and in ref it is shown that $`𝒪(\alpha _s^2)`$ matrix elements are well described if $`p_{\mathrm{Lu}}`$ is used as an ordering parameter for the perturbative cascade. This is also indicated by the successful applications of the Ariadne MC. 
We will, therefore, assume that the constraint on further emissions is well described by the identification $`p_{\mathrm{cut}}=p_{\mathrm{Lu}}`$. The multiplicity in a qg dipole with an upper limit on $`p_{}`$ can, just as for the $`\mathrm{q}\overline{\mathrm{q}}`$ case discussed in section 3, be described as two forward jet regions and a central plateau. We note that if the three-jet events were selected using a cluster algorithm with a fixed resolution scale , then the constraint on subjet transverse momenta, $`p_{\mathrm{cut}}`$, would be smaller than the $`p_{}`$ of the gluon jet (as the gluon jet was resolved). In this case most jet definitions give three jets which are all biased . We will, however, here focus on three-jet configurations obtained by iterative clustering until exactly three jets remain, without a specified resolution scale, so that the constraint on subjet $`p_{}`$ is described by $`p_{\mathrm{cut}}=p_{\mathrm{Lu}}`$. As we will see, this implies that the bias on the gluon jet is negligible, which makes this selection procedure suitable for an investigation of unbiased gluon jets. For finite $`N_\mathrm{c}`$ the different dipoles in a multi-parton configuration cannot be completely independent of each other. However, encouraged by the success of MC programs, let us assume that the main effect of finite $`N_\mathrm{c}`$ is that the colour factor, which determines softer gluon emission, is reduced from $`N_\mathrm{c}/2`$ to $`C_\mathrm{F}`$ in the domains where the emission is dominated by radiation from the quark or the antiquark leg. Let us assume that a rapidity range $`Y_\mathrm{q}`$ in the $`\mathrm{qg}`$ dipole is similar to a corresponding range in a $`\mathrm{q}\overline{\mathrm{q}}`$ dipole, while the remaining range $`L_{\mathrm{qg}}-Y_\mathrm{q}`$ is similar to a range in one half of a $`\mathrm{gg}`$ system. The corresponding ranges in the $`\mathrm{g}\overline{\mathrm{q}}`$ dipole are $`Y_{\overline{\mathrm{q}}}`$ and $`L_{\mathrm{g}\overline{\mathrm{q}}}-Y_{\overline{\mathrm{q}}}`$. This implies that the total multiplicity in the $`\mathrm{q}\overline{\mathrm{q}}`$g event corresponds to the expression $$N_{\mathrm{q}\overline{\mathrm{q}}\mathrm{g}}=N_{\mathrm{q}\overline{\mathrm{q}}}(Y_\mathrm{q}+Y_{\overline{\mathrm{q}}},\kappa _{\mathrm{Lu}})+\frac{1}{2}N_{\mathrm{gg}}(L_{\mathrm{qg}}+L_{\mathrm{g}\overline{\mathrm{q}}}-Y_\mathrm{q}-Y_{\overline{\mathrm{q}}},\kappa _{\mathrm{Lu}}).$$ (12) For the constraint $`p_{\mathrm{cut}}`$ we have here written $`\kappa _{\mathrm{Lu}}`$, which is appropriate for the selection procedure discussed above. As discussed in section 4, the size of $`Y_\mathrm{q}`$ and $`Y_{\overline{\mathrm{q}}}`$ cannot be uniquely determined within perturbative QCD. Possibly the most natural choice is to assume that the quantity $`Y_\mathrm{q}+Y_{\overline{\mathrm{q}}}`$ corresponds to the energy in the $`\mathrm{q}\overline{\mathrm{q}}`$ subsystem , which implies $`Y_\mathrm{q}+Y_{\overline{\mathrm{q}}}\approx \mathrm{ln}(s_{\mathrm{q}\overline{\mathrm{q}}}/\mathrm{\Lambda }^2)\equiv L_{\mathrm{q}\overline{\mathrm{q}}}.`$ (13a) The relation in Eq (5a) can be regarded as an educated guess, but a finite shift cannot be excluded. In ref it is assumed that $`Y_\mathrm{q}+Y_{\overline{\mathrm{q}}}\approx \mathrm{ln}(s/\mathrm{\Lambda }^2)=L,`$ (13b) which agrees with Eq (5a) to leading order. For relatively soft gluons we have $`s_{\mathrm{q}\overline{\mathrm{q}}}\approx s`$, and in this case Eqs (5a) and (5b) are approximately equivalent. 
The assumption in Eq (5a) implies that the energy scale for the gluon term is given by $`L_{\mathrm{qg}}+L_{\mathrm{g}\overline{\mathrm{q}}}L_{\mathrm{q}\overline{\mathrm{q}}}=\kappa _{\mathrm{Le}}`$. Similarly we get from Eq (5b) the corresponding gluonic energy scale $`\kappa _{_{\mathrm{Lu}}}`$. The effect of the $`p_{}`$ constraint is rather different in the two terms in Eq (12). For the gluon term the energy scale is in general only slightly larger than the bias scale $`\kappa _{\mathrm{Lu}}`$. This implies that in most cases the bias can be disregarded in this term. Inserting the different assumptions in Eqs (5a) and (5b) into Eq (12) then gives $`N_{\mathrm{q}\overline{\mathrm{q}}\mathrm{g}}N_{\mathrm{q}\overline{\mathrm{q}}}(L_{\mathrm{q}\overline{\mathrm{q}}},\kappa _{\mathrm{Lu}})+{\displaystyle \frac{1}{2}}N_{\mathrm{gg}}(\kappa _{\mathrm{Le}}),`$ (14a) $`N_{\mathrm{q}\overline{\mathrm{q}}\mathrm{g}}N_{\mathrm{q}\overline{\mathrm{q}}}(L,\kappa _{\mathrm{Lu}})+{\displaystyle \frac{1}{2}}N_{\mathrm{gg}}(\kappa _{\mathrm{Lu}}).`$ (14b) We note that the consistency between Eqs (5a) and (5b) follows from the fact that the total rapidity range in the two dipoles, $`L_{\mathrm{qg}}+L_{\mathrm{g}\overline{\mathrm{q}}}`$, can be expressed in two different ways by the equalities $`L_{\mathrm{qg}}+L_{\mathrm{g}\overline{\mathrm{q}}}=L_{\mathrm{q}\overline{\mathrm{q}}}+\kappa _{\mathrm{Le}}=L+\kappa _{\mathrm{Lu}}`$. In particular, we see from these equalities that the argument in $`N_{\mathrm{gg}}`$ has to be $`p_{\mathrm{Le}}^2`$ in Eq (5a) and $`p_{\mathrm{Lu}}^2`$ in Eq (5b), and not e.g. $`(2p_{})^2`$. The leading effect of a finite shift in $`Y_\mathrm{q}+Y_{\overline{\mathrm{q}}}`$ is colour-suppressed, and therefore not expected to be large. However, subleading corrections introduce a difference between the results of Eqs (5a) and (5b). This is seen in Fig 3, where the difference is approximately 1 particle for $`\sqrt{s_{\mathrm{q}\overline{\mathrm{q}}}}=60`$GeV. In the calculations of $`N_{\mathrm{q}\overline{\mathrm{q}}\mathrm{g}}`$ in Fig 3, we have used the expressions in for the multiplicities $`N_{\mathrm{q}\overline{\mathrm{q}}}`$ and $`N_{\mathrm{gg}}`$. These include MLLA corrections and recoil effects, which implies that $`N_{\mathrm{gg}}<2N_{\mathrm{q}\overline{\mathrm{q}}}`$ for accessible energies. Consequently, the result for $`N_{\mathrm{q}\overline{\mathrm{q}}\mathrm{g}}`$ grows with the assumed value of $`Y_\mathrm{q}+Y_{\overline{\mathrm{q}}}`$. While the bias is not serious for the gluon term in Eq (12), it is more important for the $`\mathrm{q}\overline{\mathrm{q}}`$ term. Focusing on events with comparatively large values of $`p_{}`$, where the bias is less essential, and using the assumption in Eq (5a), we arrive at the result of ref : $$N_{\mathrm{q}\overline{\mathrm{q}}\mathrm{g}}(s,p_{\mathrm{Le}}^2)=[N_{\mathrm{q}\overline{\mathrm{q}}}(s_{\mathrm{q}\overline{\mathrm{q}}})+\frac{1}{2}N_{\mathrm{gg}}(p_{\mathrm{Le}}^2)](1+𝒪(\alpha _s)).$$ (15) The bias is formally of order $`\alpha _s`$, and is here taken into account by the factor $`(1+𝒪(\alpha _s))`$. The result of this expression, neglecting the $`𝒪(\alpha _s)`$ term, is also shown in Fig 3. The effect of the bias corresponds to less than one charged particle for $`p_{\mathrm{cut}}`$ larger than $`10`$GeV, but becomes much more important for smaller $`p_{\mathrm{cut}}`$-values. 
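A compact numerical sketch of the two prescriptions in Eqs (14a) and (14b) is given below. It uses the same kind of toy parametrizations as the previous sketch; the unbiased multiplicities and the ratio between the gg and qqbar multiplicities are placeholders of our own, not the MLLA expressions used for the figures.

```python
import math

LAM = 0.3   # GeV, illustrative QCD scale (as in the previous sketch)

def N_qq(L):
    """Toy unbiased qqbar multiplicity (placeholder parametrization)."""
    return 0.024 * math.exp(2.0 * math.sqrt(L))

def N_qq_biased(L, kappa_cut, c_q=1.5):
    """Biased qqbar multiplicity, Eq (10)."""
    k, eps = kappa_cut + c_q, 1e-4
    dN = (N_qq(k + eps) - N_qq(k - eps)) / (2.0 * eps)
    return N_qq(k) + (L - k) * dN

def N_gg(L, r=1.5):
    """Toy unbiased gg multiplicity; r < C_A/C_F = 9/4 mimics the MLLA/recoil suppression."""
    return r * N_qq(L)

def N_qqg(sqrt_s, sqrt_s_qq, p_lu, use_eq_a=True):
    """Three-jet multiplicity: Eq (14a) if use_eq_a is True, otherwise Eq (14b)."""
    L = 2.0 * math.log(sqrt_s / LAM)
    L_qq = 2.0 * math.log(sqrt_s_qq / LAM)
    k_lu = 2.0 * math.log(p_lu / LAM)
    k_le = k_lu + L - L_qq            # from L_qq + kappa_Le = L + kappa_Lu
    if use_eq_a:
        return N_qq_biased(L_qq, k_lu) + 0.5 * N_gg(k_le)
    return N_qq_biased(L, k_lu) + 0.5 * N_gg(k_lu)

# Illustrative event: sqrt(s) = 91.2 GeV, sqrt(s_qq) = 75 GeV, gluon at p_Lu = 10 GeV.
print(N_qqg(91.2, 75.0, 10.0, True), N_qqg(91.2, 75.0, 10.0, False))
```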
An alternative way to express this result is the effect on extracting $`N_{\mathrm{gg}}`$ from data for $`N_{\mathrm{q}\overline{\mathrm{q}}\mathrm{g}}`$, as illustrated in Fig 3. $`N_{\mathrm{gg}}`$ can be extracted by subtracting the biased quark multiplicity $`N_{\mathrm{q}\overline{\mathrm{q}}}(L_{\mathrm{q}\overline{\mathrm{q}}},\kappa _{\mathrm{Lu}})`$ from $`N_{\mathrm{q}\overline{\mathrm{q}}\mathrm{g}}`$, here assumed to be described by Eq (5a). Neglecting the bias in the subtracted $`N_{\mathrm{q}\overline{\mathrm{q}}}`$ term gives a significantly different result. The relative effect of the bias is in this case larger, and it exceeds 20% for $`p_{}<15`$GeV. Furthermore, to get a reliable result for $`N_{\mathrm{gg}}`$, the relevance of subleading terms in the biased quark multiplicity needs to be well understood. For the solid line in Fig 3, the MLLA relation in Eq (10) is used to subtract the $`\mathrm{q}\overline{\mathrm{q}}`$ contribution from the total multiplicity. Instead using the LLA relation in Eq (9) would give a prediction for $`N_{\mathrm{gg}}`$ which is about three charged particles higher for most values of $`p_{\mathrm{cut}}`$. Although the effect of the bias is very important for small $`p_{}`$, we also see from Figs 3 and 3 that it can be neglected for large $`p_{}`$-values, where, thus, the results in ref and Eq (15) can be safely used. This implies e.g. that the bias is negligible in gluon systems defined as the hemisphere opposite to two quasi-collinear quark jets, thoroughly investigated by OPAL . It would be very interesting to compare the results in Figs 3 and 3 to experiments. Experimental data on $`N_{\mathrm{q}\overline{\mathrm{q}}\mathrm{g}}`$ can be directly compared to the Monte Carlo or MLLA results in Fig 3. Data on the difference $`N_{\mathrm{q}\overline{\mathrm{q}}\mathrm{g}}-N_{\mathrm{q}\overline{\mathrm{q}}}`$ can be compared either to the predictions in Fig 3 or to experimental results for $`N_{\mathrm{gg}}`$ obtained through one of the methods described in ref . We have compared the results in Fig 3 with MC simulations, where the $`p_{}`$ scale is determined by the Durham cluster algorithm. The MC results (not shown) indicate that an analysis based on jet reconstruction is accurate enough to illustrate the effects of the bias, but perhaps not to distinguish between the assumptions in Eq (5a) and (5b). We also note that the effects described here may have a phenomenological impact on the recent analysis of $`N_{\mathrm{q}\overline{\mathrm{q}}\mathrm{g}}`$ , which employs the two-scale dependence. ## 6 Conclusion A series of subtle effects influence an analysis of the difference between quark and gluon jets in a real-life experiment. In this letter we discuss and clarify effects associated with * the definition of $`p_{}`$, * the bias from restrictions on subjet $`p_{}`$, * the problem that infrared cutoffs cannot be uniquely defined from perturbative QCD. We also demonstrate the impact of these effects on the analysis of three-jet events in $`e^+e^{}`$-annihilation. ### Acknowledgments We thank K. Hamacher, W. Ochs, R. Orava and T. Sjöstrand for useful discussions. This work was supported in part by the EU Fourth Framework Programme ‘Training and Mobility of Researchers’, Network ‘Quantum Chromodynamics and the Deep Structure of Elementary Particles’, contract FMRX-CT98-0194 (DG 12 - MIHT).
no-problem/9904/astro-ph9904107.html
ar5iv
text
# Gamma Ray Bursts with peculiar temporal asymmetry ## 1 Introduction During more than 25 years, the origin of gamma ray bursts (GRBs) has been, perhaps, the deepest and most persistent problem in astrophysics. However, with the advent of the Compton Gamma Ray Observatory (CGRO) and its Burst and Transient Source Experiment (BATSE) in 1991, a new phase in the research of GRBs started. In seven years of operation, BATSE has accumulated a database of more than 2000 observations. The angular distribution of these bursts is isotropic within the statistical limits, and the paucity of faint bursts implies that we are seeing to near the edge of the source population (e.g. Meegan et al. 1992, Fishman & Meegan 1995). Both effects, isotropy and non-homogeneity in the distribution, strongly suggest a cosmological origin of the phenomenon. In support of this conclusion, absorption lines (Fe II and Mg II) in the optical counterpart of GRB 970508 have been detected with a redshift of $`z=0.835`$. Along with the absence of Lyman-$`\alpha `$ forest features in the spectra, these results imply that the burst source is located at 0.835 $`z`$ 2.3 (Metzger et al. 1997). The energy required to generate cosmological bursts is as high as $`10^{51}`$ erg s<sup>-1</sup>. The very short timescale observed in the time profiles indicate an extreme compactness that implies a source initially opaque (because of $`\gamma \gamma `$ pair creation) to $`\gamma `$-rays. The radiation pressure on the optically thick source drives relativistic expansion, converting internal energy into kinetic energy of the inflating shell. Baryonic pollution in this expanding flow can trap the radiation until most of the initial energy has gone into bulk motion with Lorentz factors of $`\mathrm{\Gamma }10^210^3`$. The kinetic energy, however, can be partially converted into heat when the shell collides with the interstellar medium or when shocks within the expanding source collide with one another. The randomized energy can be then radiated by synchrotron radiation and inverse Compton scattering yielding non-thermal bursts with timescales of seconds. This fireball scenario has been developed by Cavallo and Rees (1978), Paczyński (1986), Goodman (1986), Mészáros and Rees (1993), Mészáros, Laguna and Rees (1993) and others. A comprehensive review is presented by Mészáros (1997). The fireball model is a robust astrophysical scenario independent of the mechanism assumed for the original energy release. A popular mechanism is the merger of two collapsed stars in a binary system, for instance, two neutron stars or a neutron star and a black hole (see Narayan, Paczyński & Piran 1992, and references threin), although other processes have been suggested (e.g. Usov 1992, Carter 1992, Melia and Faterzzo 1992, Woosley 1993). One important prediction of the fireball model, as well as by any explosive mechanism, is that individual burst profiles should be inherently asymmetric under time reversal, with a shorter rise time than the subsequent decay time. This is a natural consequence of a sudden particle energy increase (e.g. produced by a shock) and the slower radiative dissipation of the energy excess. Time asymmetry in GRBs light curves has been discussed by several authors (e.g. Mitrofanov et al. 1994, Link et. al. 1993, Nemiroff et al. 1994). In particular, Nemiroff et al. 
(1994) showed that in the sample formed by those bursts with count rates greater than 1800 counts s<sup>-1</sup> and durations longer than 1s detected by BATSE until 1993 March 10, there is a significant asymmetry in the burst profiles in the sense that most bursts rise in a shorter time than they decay, in agreement with what is expected from a general fireball model. The most recent and complete study was made by Link and Epstein (1996). They took 631 GRBs from the BATSE 3B catalog, including both faint and bright bursts, and confirmed the global asymmetry in the burst profiles showing that about two thirds of the events display fluxes that rise faster than the subsequent fall. About 30% of the bursts, however, presented a peculiar asymmetry in the temporal profiles, with slower rises than decays. In this paper we focus on this subsample of peculiar asymmetric bursts (PABs), which seems at first sight to conflict with some predictions of the simplest scenarios for fireballs. In particular, we shall discuss whether there are reasons to consider this subsample of GRBs as representative of a class of sources with different physical properties than other bursts. The structure of the paper is as follows. In Section 2 we define the sample and present the results of the symmetry analysis. We provide tables with the full results for PABs in order to allow identification of specific events. In Section 3 we study the sky distribution of the sample, while in Section 4 we investigate the level of positional coincidence (possible repetition) that PABs show. Finally, we discuss the implications of these results for theoretical models of GRBs. ## 2 Sample and symmetry analysis We have studied the sample of 631 bursts from the BATSE 3B catalog whose global symmetry properties were discussed by Link and Epstein (1996). This sample contains both faint and bright bursts, spanning a 200-fold range in peak flux. PREB plus DISC data types at 64 ms time resolution, with four energy channels, were used in the analysis. The time asymmetry of the individual burst profiles was examined with the skewness function introduced by Link et al. (1993) and used in Link and Epstein’s (1996) paper. This function is defined as $$𝒜\equiv \frac{<\left(t-<t>\right)^3>}{<\left(t-<t>\right)^2>^{3/2}}.$$ (1) Here, angle brackets denote an average over the data sample, performed as $$<g(t)>\equiv \frac{\sum _i(c_i-c_{th})g(t_i)}{\sum _i(c_i-c_{th})},$$ (2) where $`c_i`$ is the measured number of counts in the $`i`$th bin, $`t_i`$ is the time of the $`i`$th bin and $`c_{th}`$ is a threshold level defined as $$c_{th}=f(c_p-b)+b.$$ (3) Here $`c_p`$ stands for the peak (maximum) count rate, $`b`$ is the background, and $`f<1`$ is a fraction that will be fixed for the data set. Fixing $`f`$ ensures that $`𝒜`$ is calculated to the same fraction of the peak flux relative to the background. Larger values of $`f`$ emphasize the structure of the peak over the surrounding foothills. The normalization of $`𝒜`$ makes it independent of background, duration, and amplitude. It is equal to 0 in the case of symmetric bursts, greater than 0 for a burst whose peak rises more quickly than it falls and smaller than 0 in the opposite case. It is equal to 2 for an exact FRED (from fast rise and exponential decay) and to $`-2`$ for an exact anti-FRED. Four fixed values of $`f`$ were analyzed: $`f_1`$=0.1, $`f_2`$=0.2, $`f_3`$=0.5 and $`f_4`$=0.67. 
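For concreteness, a minimal Python implementation of the weighted skewness of Eqs (1)-(3) is sketched below; the FRED-like test profile at the end is ours and serves only to show the sign convention.

```python
import numpy as np

def skewness_A(counts, times, background, f):
    """Asymmetry parameter of Eqs (1)-(3) for a single binned burst profile.

    counts, times : count rate and time of each bin
    background    : estimated background level b
    f             : threshold fraction (e.g. 0.1, 0.2, 0.5, 0.67)
    Returns A, or None if fewer than three bins reach the threshold.
    """
    counts = np.asarray(counts, dtype=float)
    times = np.asarray(times, dtype=float)
    c_th = f * (counts.max() - background) + background        # Eq (3)
    mask = counts >= c_th
    if mask.sum() < 3:
        return None
    w = counts[mask] - c_th                                    # weights (c_i - c_th)
    t = times[mask]
    mean = np.sum(w * t) / np.sum(w)                           # <t>, Eq (2)
    m2 = np.sum(w * (t - mean) ** 2) / np.sum(w)
    m3 = np.sum(w * (t - mean) ** 3) / np.sum(w)
    return m3 / m2 ** 1.5                                      # Eq (1)

# A fast-rise / slow-decay (FRED-like) toy profile gives A > 0;
# its time-reversed version gives A < 0.
t = np.arange(0.0, 20.0, 0.064)
fred = 100.0 * np.exp(-t / 4.0) * (1.0 - np.exp(-t / 0.3)) + 10.0   # background b = 10
print(skewness_A(fred, t, background=10.0, f=0.1),
      skewness_A(fred[::-1], t, background=10.0, f=0.1))
```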
The only requirement for a burst to be tested in each of the four $`f`$-values is that the number of bins whose $`c_i`$ exceeds or equals $`c_{th}`$ is at least three. Consequently, the size of the sample differs for each choice of $`f`$. The error bars in $`𝒜`$ represent 1$`\sigma `$ deviations, calculated by randomizing the number of counts according to Poisson statistics and computing the variance of the asymmetry parameter for many trials. In Tables 1 - 4 we show the results of the skewness analysis for those bursts that presented PAB behavior (i.e. $`𝒜<0`$). Each table contains the peak flux, trigger number, burst type, value of the symmetry parameter for each $`f_i`$, and a classification of the burst profiles according to the following scheme: S for single-peaked or spike-like bursts, M for multiply-peaked bursts, and C for complex or chaotic events. Regarding the burst types, events for which $`𝒜`$ is negative for all $`f`$ are labelled “1”, whereas events for which the errors in $`𝒜`$ allow positive $`𝒜`$ for at least one value of $`f`$ are denoted “2”. In Fig. 1 we show specific examples of these profiles. We found that 91 out of 631 bursts (14.4%) are PABs, i.e. do not present positive skewness for any $`f`$.<sup>1</sup><sup>1</sup>1Forty bursts out of these 91 are type 1. Only 28.5% of these bursts are single-peaked. The rest are multiply-peaked or complex events. Notice that most of the S-type bursts are in Table 4. This is consistent with the analysis technique: a fast burst, typically lasting a couple of seconds, will have few points above the higher cut-offs and then data for $`𝒜_{f_2,f_3,f_4}`$ will not be computed. As we can see from Tables 1 - 4 as well as from Fig. 1, PABs exhibit a variety of temporal morphologies. If all of these events are produced by a single mechanism, then there should be a very wide range of boundary and initial conditions in the sources in order to generate such a plurality of profiles. With the aim of searching for differences between PABs and the more common bursts, we have computed average values of the hardness ratio and durations of type 1 PABs. These values are compared with similar estimates for those bursts with $`𝒜>0`$ at all levels in Table 5. Due to the small number of bursts and the variety in their features, dispersions are so large that no conclusions can be drawn. However, in the light of current data, it is clear that no significant correlation is found between hardness ratio, or duration, and burst morphology. ## 3 Sky distribution One of the most important results of BATSE is the discovery that GRBs are isotropically distributed on the sky (see, however, Balazs et al. 1998). With the recent detection of high-redshift absorption lines in the optical counterparts of individual bursts (Djorgovski et al. 1997, Metzger et al. 1997, van Paradijs et al. 1997) Galactic models appear to be finally ruled out. However, one could ask whether the distribution of PABs exhibits the same level of isotropy as that of the whole sample. It could be the case, for instance, that PABs have a different origin than other GRBs, and consequently, display a distinct distribution on the sky (e.g. there could be a statistically significant concentration of PABs in the supergalactic plane or within any superstructure). In order to quantify the isotropy we followed the method developed by Briggs (1993). 
The dipole moment toward the Galactic center is $`<\mathrm{cos}\theta >`$, the mean of $`\mathrm{cos}\theta _i`$, where $`\theta _i`$ is the angle between the $`i`$th burst and the Galactic center. An excessively large value of $`<\mathrm{cos}\theta >`$ indicates a significant dipole moment towards the Galactic center. The quantity $`<\mathrm{sin}^2b-1/3>`$ tests for a concentration in the Galactic plane or at the Galactic poles. The expected mean values of the two statistics are zero for an isotropic distribution and, since for a large number of bursts $`N`$ in the sample they are asymptotically gaussian distributed, the variances $`\sigma ^2`$ are $`1/3N`$ and $`4/45N`$ respectively. Briggs et al. (1996) noted that because the CGRO is in a low-Earth orbit, about one-third of the sky is blocked by the Earth causing a portion of the Galactic equator to be observed about 20% less than the poles. An additional effect is different exposure times between the Galactic south and north poles. These effects must be taken into account when computing the expected values of the statistics (Briggs et al. 1996). Location errors on particular bursts, however, have no impact on the isotropy characteristics because they are small compared with the large scale of anisotropies we are testing against. Table 6 shows that the distribution of all PABs (Fig. 2) is consistent with perfect isotropy. The same is true for sub-samples of PABs. Some entries in Table 6 show small deviations from isotropy (quadrupole). However, the small number of events makes the asymptotic gaussian distribution no longer valid, and one should compare with the study of Briggs et al. (1996) (see their Fig. 4a and b). Comparing with the values of $`\sigma `$ that arise from the previously cited figures of the Briggs et al. work, we find, consequently, that there is no detectable anisotropy in the sky distribution of PABs and we see that the 1$`\sigma `$ deviation from isotropy contains the values of all entries in Table 6. ## 4 Time-space clustering Several time-space clustering analyses of different GRB-samples have come to contradictory conclusions about whether some GRBs repeat or not (e.g. Quashnock & Lamb 1993, Narayan & Piran 1993, Wang & Lingenfelter 1995, Petrosian & Efron 1995, Meegan et al. 1995). The most complete study on the subject until now, carried out by Tegmark et al. (1996), is based on the analysis of the angular power spectrum of 1120 bursts from the BATSE 3B catalog. These authors found that the fraction of bursts that can be labelled as repeaters (considering just one repetition) is not larger than 5% at 99% confidence. The recent study by Gorosabel et al. (1998), which combined data from different satellites, shows that at most 15.8% of the events detected by WATCH recur in the BATSE sample (at 94% confidence level). Despite the discussion in the literature, it seems clear that only a small fraction of the total number of GRBs could repeat over timescales of up to a few years. However, if PABs have a different physical origin than other bursts, this subclass of bursts might exhibit time-space clustering. In fact, we find that 48 out of 91 PABs (52.7%) have companions within their location error boxes in the sample of 631 bursts. If we consider just bursts separated by less than 4<sup>o</sup>, we find 40 possible repeaters (44% of the PAB-subsample); typically, the separation is about 2.5<sup>o</sup>. 
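The dipole and quadrupole statistics described above are straightforward to compute. The sketch below evaluates them together with the 1$`\sigma `$ widths expected in the asymptotic gaussian limit; the input positions are a randomly generated isotropic sample standing in for the PAB coordinates, and the sky-exposure corrections of Briggs et al. (1996) are ignored.

```python
import numpy as np

def isotropy_statistics(l_deg, b_deg):
    """Dipole and quadrupole statistics for burst positions in Galactic coordinates.

    l_deg, b_deg : Galactic longitude and latitude of each burst (degrees)
    Returns <cos(theta)> toward the Galactic centre, <sin^2(b) - 1/3>, and the
    1-sigma widths for an isotropic sample (variances 1/3N and 4/45N), without
    any sky-exposure corrections.
    """
    l = np.radians(np.asarray(l_deg, dtype=float))
    b = np.radians(np.asarray(b_deg, dtype=float))
    n = l.size
    cos_theta = np.cos(b) * np.cos(l)        # angle to the Galactic centre (l = 0, b = 0)
    dipole = cos_theta.mean()
    quadrupole = (np.sin(b) ** 2 - 1.0 / 3.0).mean()
    return dipole, quadrupole, np.sqrt(1.0 / (3 * n)), np.sqrt(4.0 / (45 * n))

# Isotropic test sample of 91 positions (uniform on the sphere), for illustration only:
rng = np.random.default_rng(0)
l = rng.uniform(0.0, 360.0, 91)
b = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, 91)))
print(isotropy_statistics(l, b))
```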
To estimate the statistical significance of these results, we have made a numerical study as follows. We have simulated 1500 sets of 91 random positions for PABs. In order to do this, we have made rotations on the celestial sphere sending a particular PAB with coordinates $`(l,b)`$ to a new position $`(l^{},b^{})`$, which is obtained from the previous one by setting $`l^{}=l+R_1\mathrm{\hspace{0.17em}360}^o`$ and $`b^{}=b+R_2\mathrm{\hspace{0.17em}90}^o`$, and using appropriate spherical boundary conditions. Here, $`R_1`$ and $`R_2`$ are different random numbers (between 0 and 1) which never repeat. Doing this for each event we get a new set of simulated PAB-positions. For this set we then compute the positional coincidence level with respect to the fixed $`631-91`$ GRB coordinates. As in the real case, we shall assign a positional coincidence when two or more bursts are separated by less than 4<sup>o</sup>. After making 1500 operations of this type (a larger number of simulations does not significantly modify the results) we can obtain the mean value of the expected number of positional coincidences and its $`\sigma `$. We obtain that for 91 GRBs, the average level of positional coincidences is 42.9 $`\pm `$ 4.7, which is entirely compatible with the observed result for PABs within 1$`\sigma `$. We have repeated the process for the subset of 26 single-peaked PABs (those denoted by an S in Tables 1 - 4). These events represent about 4% of the whole sample and about 28.5% of the PAB subsample. 15 out of 26 bursts of this kind ($`\sim `$60%) present companions within error boxes of less than 4 degrees. We find that the average simulated positional coincidence level is 13.3 $`\pm `$ 2.5. That is, the real coincidence level is also compatible with the random one to within 1$`\sigma `$ and no particular association appears obvious. If we now take positional coincidences separated by less than 1<sup>o</sup>, we find that 3 out of 26 single-peaked PABs have companions. Repeating the simulations in this case yields an expected chance association of 1 $`\pm `$ 1 events. This means that the real positional coincidence is only compatible with the random one to within 2$`\sigma `$. The number of events is of course too scarce to draw any conclusion, but if this is confirmed in a larger sample it would entail an excess of 3.8% repetitions above the result expected from chance associations (something compatible with the constraints of the Tegmark et al. analysis already mentioned). As we shall see in the next section, spikes with peculiar asymmetry present problems for their interpretation within standard fireball models. 
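A schematic version of this coincidence analysis is sketched below. For simplicity the relocated positions are drawn uniformly on the sphere rather than with the shift-and-wrap prescription described above, and the coordinate arrays are placeholders for the PAB and remaining-burst positions.

```python
import numpy as np

def angular_separation(l1, b1, l2, b2):
    """Great-circle separation (degrees) between points given in Galactic coordinates."""
    l1, b1, l2, b2 = map(np.radians, (l1, b1, l2, b2))
    cos_d = (np.sin(b1) * np.sin(b2) +
             np.cos(b1) * np.cos(b2) * np.cos(l1 - l2))
    return np.degrees(np.arccos(np.clip(cos_d, -1.0, 1.0)))

def n_coincidences(l_pab, b_pab, l_fix, b_fix, max_sep=4.0):
    """Number of PABs with at least one fixed burst closer than max_sep degrees."""
    count = 0
    for l0, b0 in zip(l_pab, b_pab):
        if np.any(angular_separation(l0, b0, l_fix, b_fix) < max_sep):
            count += 1
    return count

def chance_level(n_pab, l_fix, b_fix, n_trials=1500, max_sep=4.0, seed=1):
    """Mean and sigma of the coincidence level for randomly relocated PAB positions."""
    rng = np.random.default_rng(seed)
    counts = []
    for _ in range(n_trials):
        l_rand = rng.uniform(0.0, 360.0, n_pab)
        b_rand = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, n_pab)))
        counts.append(n_coincidences(l_rand, b_rand, l_fix, b_fix, max_sep))
    counts = np.array(counts)
    return counts.mean(), counts.std()

# Usage (hypothetical inputs): l_fix, b_fix would hold the fixed burst coordinates,
# and the observed level would come from n_coincidences(l_pab, b_pab, l_fix, b_fix).
```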
Individual peaks, when analysed with appropriate $`f`$, give $`𝒜>0`$. Events of this kind can be understood as the effect of a mild baryon loaded fireball (Mészáros & Rees 1993). Even a small baryon contamination ($`M_b10^9M_{}`$) of the expanding pair-photon fireball is enough to trap the $`\gamma `$-rays until most of the initial energy is transformed into kinetic energy of the baryons. The fireball expands by radiation pressure and becomes optically thin to Thomson scattering when the optical depth drops below unity at a radius given by (Mészáros & Rees 1993), $$r_p0.6\times 10^{15}\theta ^1E_{51}^{1/2}\eta ^{1/2}\mathrm{cm},$$ (4) where $`\theta `$ takes into account the possibility of channeling of the flow ($`\theta 1`$ corresponds to spherical symmetry), $`E_{51}`$ is the original energy release ($`e^\pm ,\gamma `$) in units of 10<sup>51</sup> erg, and $`\eta =E_0/M_0c^2`$ is the initial radiation to the rest mass energy ratio in the fireball. At $`r=r_p`$, the $`\gamma `$-rays still trapped in the fireball can escape producing a burst (Cavallo & Rees 1978, Paczński 1986, Goodman 1986). As shown by Mészáros & Rees (1993), this burst should be rather weak, with an observed energy in gamma rays of, approximately, $$E_p^{obs}7\times 10^{47}\theta ^{1/3}E_{51}^{1/2}\eta _3^{11/6}\mathrm{erg},$$ (5) where $`\eta _3=10^3\eta `$. This prompt, small burst will form a precursor that can last a few seconds. When the expanding relativistic shell collides with the interstellar medium, a shock wave is formed and the gas in the post-shock region is heated up to thermal Lorentz factors of $`\gamma \eta `$, reconverting the kinetic energy of the shell into thermal energy of the particles. The thermal energy is radiated through synchrotron and inverse Compton processes at MeV to GeV energies. A non-uniform ambient medium can naturally lead to a multiply-peaked burst (e.g. Fenimore et al. 1996). Events of this class will have $`𝒜<0`$, as in the case of trigger #2450, due to the effect of the prompt precursor. Hence, these $`𝒜<0`$ events can be explained within the fireball model. In other cases, the precursor can remain undetected but a multiply-peaked PAB can arise from internal shocks in bursts with several shells with different Lorentz factors (e.g. Kobayashi et al. 1997, Daigne & Mochkovitch 1998). In the simulations carried out by Kobayashi et al. (1997), bursts with negative skewness can be produced through multiple shell collisions (e.g. see Fig. 2f of their work). Complex bursts, as the one shown in Fig. 1d, could be the result of instabilities on the expanding shell surface once it shocks the interstellar medium. Hydromagnetic instabilities in the contact discontinuity can lead to local variations in the fields and the flow’s Lorentz factor, yielding very rapid changes in the time profiles (e.g. Daigne & Mochkovitch 1998). The resulting global morphology could resemble that seen in some bursts with negative skewness, such as #2240. Single-peaked bursts with $`𝒜<0`$, however, appear to be more difficult to explain with the fireball model. The main problem is that a single spike with slower rising than falling cannot be generated through dissipative shocks. In Fig. 4 we show BATSE trigger #444 (see also Table 3 and Fig. 1a). We have attempted to fit this event with the multiple shell model developed by Kobayashi et al. (1997). The $`\gamma `$-ray emission is produced when a shock results from the collision of two shells with different velocities. 
The randomized kinetic energy is then radiated through synchrotron and inverse Compton processes. Notice that the better the fit for the rising profile, the worse the model describes the fall. This is a straightforward consequence of the fact that cooling times are longer than particle acceleration times at the shock. To better understand the meaning of the theoretical curves in Fig. 4 we recall the predicted luminosity in the case of a two shell interaction (Kobayashi et al. 1997), $$ℒ(t)\propto \{\begin{array}{cc}1-(1+2\gamma _m^2ct/R)^{-2},\hfill & 0<t<\delta t_e/2\gamma _m^2\hfill \\ (1+(2\gamma _m^2t-\delta t_e)c/R)^{-2}-(1+2\gamma _m^2ct/R)^{-2},\hfill & t>\delta t_e/2\gamma _m^2\hfill \end{array}$$ (6) where $`\gamma _m`$ is the Lorentz factor of the merged shell (depending on the Lorentz factor and mass of each colliding shell), $`\delta t_e/2\gamma _m^2`$ is the time at which the burst reaches its maximum, and $`R`$ is the radius at which the collision takes place. Observational data of a given burst, its height and duration up to the maximum in the number of counts, allow a parameterization of $`ℒ(t)`$ with $$B=\frac{2\gamma _m^2c}{R}.$$ (7) The shape of the pulse is asymmetric with a fast rise and a slower decline, unlike a spike event with $`𝒜<0`$. Attempts to fit such a burst using eq. (6) are shown in Fig. 4. Spike-like bursts with $`𝒜<0`$ are predicted, however, in some extrinsic models for GRBs. Torres et al. (1998a,b) have shown that microlensing effects produced upon the core of high-redshift AGNs by compact extragalactic objects which violate the weak energy condition at a macroscopic level would yield GRB-like lightcurves with spike-type profiles and negative skewness function. A similar burst with $`𝒜>0`$ should be observed from several months up to a few years later in the same position of the sky, provided the lens has an absolute mass of the order of $`1M_{\odot }`$. If this interpretation turns out to be correct, it could explain not just S-type PABs but also any apparent excess of positional coincidences among these bursts at a level compatible with current constraints on repetition over the whole sample. It would appear that the small group of spike bursts with negative skewness deserves further study. ## 6 Conclusions GRBs exhibit a very rich variety of temporal profiles. Most of them have highly variable structure over timescales significantly shorter than the overall duration of the event. The study of burst morphology by Link and Epstein (1996) shows that a significant fraction of bursts ($`\sim 1/3`$) have time histories in which the flux rises more slowly than it decays (PABs). Here we have argued that most PABs can be accommodated by fireball models. Isotropy and other average features, common to the bulk of observed bursts, are shared by PABs. There is, however, a subclass of PABs, those which consist of a single, prominent peak with negative skewness, that appears to be inconsistent with the fireball mechanism. These events represent $`\sim 4`$% of the total sample and certainly merit further research in order to clarify their nature. ## Acknowledgments This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center, provided by the NASA/Goddard Space Flight Center, and also of the NASA/IPAC Extragalactic Database, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA. Our work has been supported by the Argentine agencies CONICET (D.F.T. 
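The following sketch evaluates the pulse shape of Eq (6), written in terms of the parameter $`B`$ of Eq (7) and of the peak time. It follows our reading of the piecewise expression above, and the numerical parameters are purely illustrative.

```python
import numpy as np

def two_shell_pulse(t, B, t_peak):
    """Unnormalized pulse shape of Eq (6), with B = 2*gamma_m^2*c/R (Eq 7)
    and t_peak = delta_t_e / (2*gamma_m^2), the time of the maximum.

    The rising and decaying branches join continuously at t = t_peak.
    """
    t = np.asarray(t, dtype=float)
    out = np.empty_like(t)
    rising = t < t_peak
    out[rising] = 1.0 - (1.0 + B * t[rising]) ** -2
    decaying = ~rising
    out[decaying] = ((1.0 + B * (t[decaying] - t_peak)) ** -2
                     - (1.0 + B * t[decaying]) ** -2)
    return out

# Illustrative parameters only: B in 1/s, peak at 0.5 s. The shape always rises
# faster than it decays, so its skewness parameter A (Section 2) is positive,
# i.e. a single pulse of this kind cannot reproduce a spike-like PAB.
t = np.linspace(0.0, 10.0, 2000)
profile = two_shell_pulse(t, B=2.0, t_peak=0.5)
```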
and G.E.R - under grant PIP N<sup>o</sup> 0430/98 -), ANPCT (G.E.R.), and FOMEC (L.A.A.). We acknowledge G. Bosch for his help in producing Fig. 2 and S. Grigera for an enlightening discussion on numerical issues.
no-problem/9904/physics9904022.html
ar5iv
text
# Large Numbers, Galactic Rotation and Orbits ## Abstract The time variation of the gravitational constant $`G`$ in the recently discussed large number cosmologies accounts for the galactic rotational velocity curves without invoking dark matter and also for effects like the precession of the perihelion of Mercury. <sup>0</sup><sup>0</sup>footnotetext: Email:birlasc@hd1.vsnl.net.in; birlard@ap.nic.in In recent issues (Matthews, 1998), (Sidharth, 1999), the large number coincidences brought to light by Dirac a little over sixty years ago were revisited. Cosmological schemes were considered in which these relations between the fundamental microphysical constants and large scale parameters like the Hubble constant and the number of particles in the universe are not mere magical or mysterious coincidences. Here, the universal constant for gravitation $`G`$ varies with time (Narlikar, 1993), (c.f. also ref. (Sidharth, 1999)), $$G=G_o(1-\frac{t}{t_o})$$ (1) where $`t_o\approx 10^{17}secs`$ is the present age of the universe and $`t`$ is the time elapsed from the present epoch. Subscripts refer to values at the present epoch. From this it would also follow that the distance of an object moving under the influence of a massive gravitating body would decrease with time as (cf.ref.(Narlikar, 1993)) $$r=r_o(1-\frac{t}{t_o})$$ More accurately we have $$r=r_o(1-\frac{\beta t}{t_o}),\beta \approx 1$$ (2) (There is also a variant of the above idea, called multiplicative creation of particles, under which the distances increase with time - cf.ref.(Narlikar, 1993) for details.) We now show that given (1), it is possible to explain the anomalies in galactic rotational curves on the one hand and deduce effects like the correct perihelion precession of the planet Mercury on the other. The problem of galactic rotational curves is well known (cf.ref.(Narlikar, 1993)). We would expect, on the basis of straightforward dynamics, that the rotational velocities at the edges of galaxies would fall off according to $$v^2\approx \frac{GM}{r}$$ (3) On the contrary the velocities tend to a constant value, $$v\approx 300km/sec$$ (4) This has led to the postulation of as yet undetected dark matter, that is, that the galaxies are more massive than their visible material content indicates. Our starting point is the well known equation for Keplerian orbits (Goldstein, 1966), which on use of (1) becomes $$\frac{1}{r}=\frac{mk_o}{l^2}(1+e\mathrm{cos}\mathrm{\Theta })(1-\frac{t}{t_o}),k=GmM,l=mr^2\dot{\mathrm{\Theta }}$$ (5) $`M`$ and $`m`$ being respectively the masses of the central and orbiting objects and $`e`$ the eccentricity. From (5) we can deduce $$r^3\dot{\mathrm{\Theta }}^2=r_o^3\dot{\mathrm{\Theta }}_o^2(1-\frac{t}{t_o})$$ (6) Equation (6), for the special case of closed orbits, $`e<1`$, can be considered to be the generalisation of Kepler’s third law. From (2) it can be easily deduced that $$a\equiv (\ddot{r}_o-\ddot{r})\approx \frac{\beta }{t_o}(t\ddot{r_o}+2\dot{r}_o)\approx 2\beta \frac{r_o}{t_o^2},$$ (7) as we are considering infinitesimal intervals $`t`$ and nearly circular orbits. Equation (7) shows that there is an anomalous inward acceleration, as if there is an extra attractive force, or an additional central mass. 
While we recover the usual theory in the limit $`\beta \to 0`$, if we retain $`\beta `$ then, in view of (7) and the fact that $`\beta \approx 1`$, we will have instead of the usual equation (3) $$\frac{GMm}{r^2}+\frac{2mr}{t_o^2}\approx \frac{mv^2}{r}$$ (8) From (8) it follows that $$v\approx \left(\frac{2r^2}{t_o^2}+\frac{GM}{r}\right)^{1/2}$$ (9) From (9) it is easily seen that at distances within the edge of a typical galaxy, that is $`r<10^{23}cm`$, equation (3) holds, but as we reach the edge and beyond, that is for $`r\approx 10^{24}cm`$, we have $`v\approx 10^7cm`$ per second, in agreement with (4). Thus the time variation of $`G`$ given in equation (1) explains the observations without recourse to dark matter. We now come to the case of the precession of Mercury’s perihelion. Indeed using (2) in (6) we get $$\dot{\mathrm{\Theta }}^2=\dot{\mathrm{\Theta }}_o^2(1-\frac{t}{t_o})(1+\frac{3\beta t}{t_o})$$ whence, $$\dot{\mathrm{\Theta }}=\dot{\mathrm{\Theta }}_o(1+\frac{t}{t_o})$$ (10) From (10) we can deduce $$\lambda (t)\equiv \mathrm{\Theta }-\mathrm{\Theta }_o=\frac{\pi }{\tau _ot_o}t^2$$ where $`\lambda (t)`$ is the average perihelion precession at time $`t`$ and $`\tau _o\approx 0.25`$ years is the planet’s period of revolution. Summing over the years $`t=1,2,\mathrm{},100`$, the total precession in a century is given by $$\lambda =\underset{n=1}{\overset{100}{\sum }}\lambda (n)\approx 43^{\prime \prime }$$ the age of the universe $`t_o`$ being taken as $`2\times 10^{10}`$ years. This of course is the observed value. Finally, it may be mentioned that several recent studies show that the universe is ever expanding (Perlmutter, 1998), which undermines the conventional belief that dark matter closes the universe. References Goldstein H 1966 Classical Mechanics, Addison-Wesley, Reading, Mass. Matthews R 1998 A&G 39 6.19-6.20. Narlikar J V 1993 Introduction to Cosmology, Cambridge University Press, Cambridge. Perlmutter S et al. 1998 Nature 391 51-54. Sidharth B G 1999 A&G 40 2.8.
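As a closing numerical check, Eq (9) and the expression for $`\lambda (t)`$ can be evaluated directly; the sketch below reproduces the flattening of the rotation curve at large $`r`$ and the $`\approx 43^{\prime \prime }`$ per century quoted above. The galactic mass entering the $`GM/r`$ term is our own illustrative choice, not a value taken from the text.

```python
import math

G = 6.67e-8          # cm^3 g^-1 s^-2
T0 = 1.0e17          # s, present age of the universe entering Eq (9)
M_GAL = 2.0e44       # g (~1e11 solar masses), illustrative galactic mass

def rotation_velocity(r_cm):
    """Eq (9): v ~ (2 r^2 / t0^2 + G M / r)^(1/2), in cm/s."""
    return math.sqrt(2.0 * r_cm ** 2 / T0 ** 2 + G * M_GAL / r_cm)

for r in (1e22, 1e23, 1e24):
    print(f"r = {r:.0e} cm : v = {rotation_velocity(r):.2e} cm/s")
# Inside the galaxy the GM/r term dominates; near r ~ 1e24 cm the 2r^2/t0^2 term
# takes over and v remains of order 1e7 cm/s, i.e. of the order of the observed value.

# Perihelion precession: lambda(t) = pi * t^2 / (tau0 * t0), summed over 100 years.
TAU0 = 0.25          # yr, Mercury's orbital period
T0_YR = 2.0e10       # yr, age of the universe used for the precession estimate
total = sum(math.pi * n ** 2 / (TAU0 * T0_YR) for n in range(1, 101))   # radians
print(f"precession per century = {math.degrees(total) * 3600:.1f} arcsec")   # ~43''
```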
no-problem/9904/astro-ph9904421.html
ar5iv
text
# POLARIMETRIC PROPERTIES OF THE CRAB PULSAR BETWEEN 1.4 AND 8.4 GHz ## 1 Introduction Past studies of the polarization properties of the Crab Nebula pulsar have been limited to low-frequency radio and visible wavelengths. The pulsar’s steep radio spectrum, and interference from the radio-bright Nebula make observations above 1 GHz difficult with single dish antennas. Thus, interpretations of the pulsar’s emission geometry have only been made from the properties of its polarized profiles at visible wavelengths. Single dish average profile measurements of the radio polarization are available at frequencies between 110 and 1664 MHz (Manchester, Huguenin, & Taylor 1972; Manchester 1971a). Prior to the work described in Moffett & Hankins (1996, hereafter Paper I), only three components of the pulsar’s average profile were known; a steep-spectrum precursor, which is approximately 100% linearly polarized (Manchester 1971a ), plus a main pulse (MP) and interpulse (IP) which are roughly 15 to 25% linearly polarized. The position angle (PA) remains constant between all three components, with very little change of position angle across them. The only other major radio polarization observations that have been published are measurements of the time-variable rotation measure used to probe the magnetic fields of the Crab Nebula’s filaments (Rankin et al. (1988)). No high radio-frequency ($`\nu >1.7`$ GHz) polarization information for the Crab pulsar has been available. The visible wavelength (Smith et al. (1988)) and newly acquired ultraviolet (Smith et al. (1996)) polarization profiles show similar linear polarized fractions as the radio, but unlike the radio profiles, they show large PA variations. At the peak positions of the MP and IP, the fraction of visible linear polarization is about the same as in the radio regime, $`14`$ to 17%. In the region after the IP in phase, the percentage polarization rises to $`47\pm 10`$%, and the position angle rises above the IP angle and remains nearly constant across the total intensity minimum. Narayan & Vivekanand (1982) found that the visible wavelength polarization PA sweeps of the MP and IP suggest that emission comes from two opposite poles of a pulsar whose magnetic axis is nearly orthogonal to the rotational axis. But arguments from $`\gamma `$-ray emission theory have surfaced recently (Manchester 1995; Romani & Yadigaroglu 1995) that question this type of geometry by claiming that emission arises from a wide cone in the outer magnetosphere. So far, the study of radio polarization has not improved our knowledge of the emission and field geometry. The lack of position angle variation in low frequency radio profiles is difficult to explain in terms of the simple rotating vector model (Radhakrishnan & Cooke (1969)). Following the serendipidous discovery of additional components in the Crab pulsar’s profile in Paper I, a program of polarization observations was scheduled to study the high radio-frequency polarization characteristics of these new components, perhaps improving the interpretation of the polarization and emission location for the Crab pulsar’s radio components. ## 2 Observations Observations were conducted during several sessions from October 1995 to October 1996 at the Very Large Array (VLA) of NRAO. Between February 22 and April 18, 1996, the data acquisition system was modified to double the number of filterbank channels that are recorded. 
Using the phased VLA, the coherent sum of undetected right-hand and left-hand circular polarization ($`R`$ and $`L`$) from all antennas is mixed to 150 MHz and then split into 14 independent frequency channels by a MkIII VLBI filter bank. The filter bank output is sent to the VLA’s High Time Resolution Processor (HTRP), which consists of a set of 14 multiplying polarimeters. Channels of detected and smoothed $`LL`$, $`RR`$, $`RL\mathrm{cos}\theta `$, and $`RL\mathrm{sin}\theta `$, where $`\theta `$ is the phase offset between $`R`$ and $`L`$, are continuously sampled by 12-bit, analog-to-digital converters in a PC and recorded on disk at a time resolution of 256 $`\mu `$s. The detector time constants are set to optimize sampling of the dispersed time series across the channel bandwidths. The observations were scheduled so that short scans (typically 30 to 40 minutes apart) of an unresolved calibrator point source were made between pulsar scans to keep the VLA phased, and to record on- and off-source data for flux and polarization calibration of the pulsar data. Within the duration of an observing session, anywhere from five to seven sets of measured Stokes parameter fluxes were recorded from the phase and flux calibrator, 3C138, with enough parallactic angle coverage for instrumental polarization calibration. The flux density and position angle of 3C138 are regularly monitored by the University of Michigan Radio Astronomy Observatory. The position angle of this source remains the same over our frequency coverage, with a value of $`\psi =12^{}`$. The pulsar data were folded off-line at the pulsar’s topocentric period using a timing model initially provided by Nice (1995). Consequent observations of the Crab pulsar using the Princeton/Dartmouth Mark III Pulsar Timing System (Stinebring et al. (1992)) provided time-of-arrival (TOA) information reduced using the program TEMPO (Taylor & Weisberg (1989)), which yielded new timing solutions for folding at later epochs. Individual channel data were folded into two-minute average profiles of all four detected polarizations prior to calibration and dedispersion. The gain amplitudes, relating the received voltage in the data acquisition system to flux density in Janskys for the $`LL`$ and $`RR`$ detector signals were determined by observation of the phase calibrator source and blank sky. The gain amplitude of the cross polarizations, $`RL\mathrm{cos}\theta `$ and $`RL\mathrm{sin}\theta `$, were found directly from solutions for the circular polarization gains $`G_\mathrm{L}`$ and $`G_\mathrm{R}`$ from $`LL`$ and $`RR`$, by using $`G_{\mathrm{RL}}=\sqrt{G_\mathrm{R}G_\mathrm{L}}`$. For data collected from the lower side-band channels of the MkIII VLBI videoconverters, the sign of the measured Stokes U was inverted, thus removing a known $`180^{}`$ phase shift caused by the image-rejecting mixers within the videoconverters. Polarization calibration was completed following procedures similar to those used by McKinnon (1992). In his paper, the polarization characteristics of the phased VLA approximate those of a single dish antenna with circular polarization feeds. An ideal antenna with orthogonal circular polarization receivers has no cross-coupling. However, imperfections in the reflectors and receiving systems of antennas tend to change the received radiation from purely linear polarized sources into elliptical polarization (Conway & Kronberg (1969)). 
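For illustration, the path from the four detected data streams to calibrated Stokes profiles can be sketched as follows; the exact Stokes conventions (signs and factors of two) are assumptions of this sketch and are not taken from the original reduction:

```python
import numpy as np

def stokes_from_htrp(LL, RR, RLcos, RLsin, G_L, G_R):
    """Calibrate the detected HTRP products and form Stokes profiles.

    LL, RR, RLcos, RLsin : arrays of detected power versus pulse phase.
    G_L, G_R             : gains relating detected voltage to flux density (Jy).
    """
    G_RL = np.sqrt(G_R * G_L)            # cross-hand gain, as described in the text
    ll, rr = LL / G_L, RR / G_R
    rlc, rls = RLcos / G_RL, RLsin / G_RL
    # For lower-sideband channels the sign of U would additionally be flipped,
    # mirroring the 180-degree phase shift mentioned in the text.
    I = rr + ll                          # total intensity (one common convention)
    V = rr - ll                          # circular polarization
    Q = 2.0 * rlc                        # linear polarization
    U = 2.0 * rls
    return I, Q, U, V
```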
McKinnon’s method involves measuring the time-dependent Stokes parameters of a polarization calibrator source with respect to the changing parallactic angle, and solving for time-dependent and independent instrumental corrections. McKinnon used the polarization from a point in a pulsar’s profile to perform a self-calibration, but he could not determine the absolute position angle. We could have used the Crab pulsar at 1.4 GHz to perform such a self-calibration, but at higher frequencies we were limited by low signal-to-noise ratios. Instead we used a phase calibrator of known polarization characteristics to solve for the instrumental corrections and absolute position angle at 1.4, 4.9, and 8.4 GHz. ## 3 Results Our results at 1.4 GHz (Figure POLARIMETRIC PROPERTIES OF THE CRAB PULSAR BETWEEN 1.4 AND 8.4 GHz) are similar to published observations at 1.664 GHz (Manchester 1971a; Manchester 1971b), but with the addition of three more components (see Paper I, Figure 2). These three are labeled LFC for “low frequency component”, as it mainly appears at $`\nu <2`$ GHz, and HFC1 and HFC2 for “high frequency component 1 and 2”, as they appear only at $`\nu \ge 1.4`$ GHz. The MP and IP are both linearly polarized, 25% and 15% respectively, and their relative PAs are nearly the same. The LFC component is more than 40% linearly polarized, and it has a PA offset $`30^{\circ }`$ from the MP, which sweeps down toward the MP. There is also low-level emission after the IP, coincident in phase with components HFC1 and HFC2 found in the total intensity profiles at 4.9 GHz (Paper I). These components are $`>50\%`$ polarized, and their position angles appear to be roughly the same, offset from the MP by $`60^{\circ }`$. The circular polarization undergoes a sense reversal centered on the MP with an amplitude of 1 – 2% of the MP peak. We can exclude this as a cross-coupling signature, even though our uncertainties for fitting the instrumental parameters were several times higher for individual frequency channels. The linear polarization is not strong, nor does it sweep rapidly across the pulse at our time resolution. So a coupling of linear to circular power should not produce sense-reversing circular polarization. We can make no comparisons as no previous circular polarization observations of average profiles have been published. The similarity of circular polarization signatures on separate observation dates and in individual channels improves our confidence that these signatures are real. The profiles at 4.9 GHz (Figure POLARIMETRIC PROPERTIES OF THE CRAB PULSAR BETWEEN 1.4 AND 8.4 GHz) are new results. We have successfully confirmed the detection of the HFC components found in Paper I. The pulsar was visible for only three out of seven observing sessions; the profile was formed from about 3.7 hours of data. We attribute the non-detections to heavy scintillation, which affected observations at 4.9 GHz and higher frequencies. The IP, HFC1, and HFC2 are highly polarized, 50% to 100%, while the MP seems to have the same polarized fraction as at 1.4 GHz. HFC1 and HFC2 share a common range of PA, but it sweeps through them with different slopes. The most important feature to note is the IP. Its relative flux density has increased with respect to the MP and it has shifted earlier in phase by $`10^{\circ }`$ (see Paper I, Figure 2).
At 8.4 GHz (Figure POLARIMETRIC PROPERTIES OF THE CRAB PULSAR BETWEEN 1.4 AND 8.4 GHz), the profiles show some polarization, and also confirm the profile morphology found in earlier total intensity observations. The pulsar was visible for only one out of three observing sessions for a total of 2.3 hours, which we again attribute to scintillation. The IP seen in the profile is substantially wider than at lower frequency and the fractional polarization of the IP and HFCs is reduced, due in part to an incomplete instrumental correction of the position angles. As in Paper I, no evidence is found of the MP, whose predicted flux from a spectral index of $`\alpha _{\mathrm{MP}}=-3`$ (Section 4.3) is below the noise level of the profile recorded at this frequency. Observations made on April 18, 19 and 20, 1996, were conducted at several frequencies within the 1.4 GHz receiver band, and evidence for Faraday rotation was found between the separate frequencies. A rotation measure, RM $`=-46.9`$ rad m⁻², was found after comparing the position angles of the major components, and its effects have been removed from all profiles reported here. Past measurements show the RM near $`-43.0`$ rad m⁻² (Rankin et al. (1988)), but it is known to be variable on time scales of months, as the line of sight to the pulsar passes through the Crab Nebula’s filaments. After removing Faraday rotation effects, the position angles of the MP, HFC1 and HFC2 are found to align at 1.4 and 4.9 GHz, and the PAs of the IP, HFC1 and HFC2 align at 4.9 and 8.4 GHz. But the IP is found to have a position angle difference between 1.4 and 4.9 GHz of $`90^{\circ }`$ (see Figure POLARIMETRIC PROPERTIES OF THE CRAB PULSAR BETWEEN 1.4 AND 8.4 GHz). So the IP has a discontinuous change of pulse phase, flux, and polarization between 1.4 and 4.9 GHz. It obviously cannot be the same component at both frequencies. ## 4 Analysis The unique and confusing discoveries described in the previous section are the first successful, fully polarimetric observations of the Crab pulsar above the 1.4-GHz band. In the following sections, this pulsar’s emission geometry is explored by comparing properties of its polarization profile with known properties of other pulsars, and possible emission geometry models. ### 4.1 Multiple Components The components of the Crab pulsar appear in six distinct positions in rotational phase at all observed radio frequencies. The distribution of components is difficult to explain in low-altitude, dipolar or hollow-cone emission models (Rankin 1983a; Lyne & Manchester (1988)), mainly because of their number and wide separation. Up to 5 components have been seen from “normal” pulsars (PSR B1237+25 and B1857-26). The separation of profile components is usually restricted to a small range of pulse phase ($`<30^{\circ }`$), corresponding to a cone of emission above one pole of the star. However, a few pulsars with interpulses exist, whose components can be attributed to emission from the observer’s line of sight passing above both poles (orthogonal rotator), or from one pole (aligned rotator). The phase separation between the MP and IP, $`\mathrm{\Delta }\varphi _{\mathrm{MP}-\mathrm{IP}}\approx 140^{\circ }`$, is too low to argue for a line of sight crossing of both poles. When compared to the high energy emission (infrared to $`\gamma `$-ray), the morphology of the MP and IP implies that they arise from a wide conal beam, high above a single pole (Manchester (1995)).
With the wide beam picture in mind, one apparent symmetry in the distribution of the Crab’s components can be seen if we draw a line through the midpoint between the MP and IP, and the midpoint between HFC1 and HFC2 (see Figure 3 of Paper I). The midpoints between the component pairs are separated by $`170^{\circ }`$ at 4.9 GHz. So the Crab’s components could arise from conal emission regions above both poles, one wider than the other. In fact, the HFCs do show promise as a conal pair. Rankin’s (1993) empirical relations for inner and outer conal width (assuming the Crab to be an orthogonal rotator, $`\alpha =90^{\circ }`$) yield: $$\begin{array}{ccccc}\rho _{\mathrm{inner}}& =& 2\times 4.33^{\circ }P^{1/2}=47.3^{\circ }\hfill & & \\ \rho _{\mathrm{outer}}& =& 2\times 5.75^{\circ }P^{1/2}=62.8^{\circ }\hfill & & \end{array}$$ The phase separation of the HFCs, $`\mathrm{\Delta }\varphi _{\mathrm{HFC1}-\mathrm{HFC2}}\approx 56^{\circ }`$, is within Rankin’s predicted values for inner and outer conal widths for a pulsar of the Crab’s period. It is possible that the HFCs are generated at low altitudes, and the MP and IP are generated at higher altitudes where the emission beam is much wider. However, we note that interpreting the frequency-dependent properties of the IP and the HFCs with this geometric model is quite difficult. Another set of components, the LFC, precursor, and main pulse, form what may be a cone/core triplet. The LFC to MP separation is $`45^{\circ }`$, nearly what one expects for the inner conal width, and the precursor behaves much like a core component, with its high polarization and steep spectrum. But why the MP is so much brighter than the LFC requires explanation. It is interesting to note that one pulsar, B1055-52, has a similar distribution of components (precursor, main pulse, and a strong interpulse located $`155^{\circ }`$ away) at low frequency (McCulloch et al. (1976)). And like the Crab, it also has pulsed high energy emission in X-rays (Ogelman & Finley (1993)), pulsed $`\gamma `$-rays (Fierro et al. (1993)), and has been recently detected as continuum source at visible wavelengths (Mignani, Caraveo, & Bignami (1997)). ### 4.2 Radius to Frequency Mapping? Using the main pulse as the fiducial point of the Crab’s profile (Paper I), we found that the separations from MP to IP, and from MP to the HFCs are frequency-dependent (see Figure POLARIMETRIC PROPERTIES OF THE CRAB PULSAR BETWEEN 1.4 AND 8.4 GHz). From 1.4 to 4.7 GHz, the IP jumps $`10^{\circ }`$ earlier in phase, while the HFCs appear to make a smooth linear transit in phase between 1.4 and 8.4 GHz. This property is reminiscent of the smooth phase shift of conal components in radius-to-frequency mapping (Cordes 1978; Rankin 1983b; Thorsett 1991). The phase separation of conal components usually can be best fit by a power law function, $`\mathrm{\Delta }\varphi \propto \nu ^\eta `$, where $`-1.1\le \eta \le 0.0`$. The phase separations from the MP to both HFC1 and HFC2 are best fit with $`\eta =1`$ (fit parameters found in Figure POLARIMETRIC PROPERTIES OF THE CRAB PULSAR BETWEEN 1.4 AND 8.4 GHz). The HFCs are both moving toward later rotational phase with increasing frequency, unlike conal components of other pulsars, whose phase separation decreases to a common fiducial point. Curiously, the HFCs would merge at a common point at the MP phase, if their phases are extrapolated to a frequency above 60 GHz.
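The quoted conal widths follow directly from the Rankin relations above; a minimal check, assuming only an approximate value of the Crab period:

```python
P = 0.0335   # Crab pulsar rotation period in seconds (approximate)

rho_inner = 2 * 4.33 * P ** -0.5   # degrees
rho_outer = 2 * 5.75 * P ** -0.5   # degrees
print(f"inner cone ~ {rho_inner:.1f} deg, outer cone ~ {rho_outer:.1f} deg")
# ~47.3 and ~62.8 deg, bracketing the ~56 deg HFC1-HFC2 separation quoted above
```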
### 4.3 Spectral Index The amplitude calibration method for these observations was based on gains transferred from a standard extragalactic continuum calibration source, whereas the flux densities in the profiles presented in Paper I were estimated using known radiometer characteristics. We used the integrated flux density under the major components, and computed spectral indices for the MP and IP: $`\alpha _{\mathrm{MP}}=-3.0`$ for the MP, and $`\alpha _{\mathrm{IP}}=-4.1`$ for the IP at $`\nu \le 1.4`$ GHz. Independent of the uncertainty of the flux density measurements, the relative spectral index differences between components were determined simply through ratios of their integrated flux densities using the following relation: $$\frac{S_{\mathrm{C1}}(\nu _1)/S_{\mathrm{C1}}(\nu _2)}{S_{\mathrm{C2}}(\nu _1)/S_{\mathrm{C2}}(\nu _2)}=\left(\frac{\nu _1}{\nu _2}\right)^{(\alpha _{\mathrm{C1}}-\alpha _{\mathrm{C2}})}$$ where the fluxes, $`S_\nu `$, and spectral indices, $`\alpha `$, correspond to the components C1 and C2. The spectral index difference of components using these ratios yields a spectral index, $`\alpha _{\mathrm{PC}}\approx -5.0`$, for the precursor. The spectral indices we have found for the three major components of the Crab pulsar profile agree with previous measurements by Manchester (1971a). In Figure POLARIMETRIC PROPERTIES OF THE CRAB PULSAR BETWEEN 1.4 AND 8.4 GHz, we plot the flux density spectrum of the MP and IP, and the two HFC components. Below 1.4 GHz, the IP follows a power-law spectral index of approximately -4, but above 1.4 GHz, the plot shows that the IP has a flat spectral index, as do the HFC components, though no power-law can be determined from the plot. Such a turn-up or flattening of pulsar spectra has been observed by Kramer et al. (1996) in two other pulsars. They have suggested that a transition from coherent to incoherent emission would cause changes in the expected flux density. Sampling pulsar radiation at very high frequencies gives limits to the bandwidth of the coherent emission mechanism. A simple extrapolation of the Crab pulsar spectrum from radio to infrared wavelengths (Fig. 4-2, Manchester & Taylor 1977) implies that the flux must rise and the emission mechanism must change. So the change in spectral index lends support to our hypothesis that the low frequency IP and high frequency IP are two different components. We should note that, when compared to other pulsars, the spectral indices of the Crab pulsar’s MP and IP are much steeper than the components of other pulsars ($`-3<\alpha <-1.5`$). The Crab’s mean spectral index, $`\alpha _{\mathrm{crab}}=-3.1`$, is also steeper than the average spectral index, $`\alpha =-1.5`$, of most detected pulsars (Lorimer et al. (1995)). ### 4.4 Polarization Properties The polarization position angle of the Crab changes across the full period, though not significantly within components. There are no sudden well-defined PA sweeps (‘S’ shaped sweeps) within components, as seen at optical wavelengths. However, we should note the radio components are much narrower, and some polarization information is smeared by dispersion and scattering. The lack of PA variation between close components implies that the observer’s line of sight trajectory does not fall close to the magnetic poles, where the position angle of field lines varies quickly. The fraction of linear to total intensity of the MP and HFCs is nearly constant from 1.4 to 4.9 GHz.
But the IP becomes substantially more polarized (from 20% to 100%) between the two frequencies, as well as undergoing a $`90^{\circ }`$ PA shift. The spectral change in phase and PA could be due to a mechanism (birefringence) affecting the propagation of the two orthogonal modes (ordinary or O-mode, and extraordinary or X-mode) of linear polarized radiation within the pulsar’s magnetosphere (Barnard & Arons (1986)). The ordinary mode waves are forced to travel along magnetic field lines, while the extraordinary mode waves are unaffected. A sudden change in plasma conditions could cause one of the modes to be beamed out of the line of sight. However, this process is sensitive to frequency, and any transitions we see should be continuous. The change of the phase and PA of the IP between 1.4 and 4.9 GHz is rather abrupt, but this does not rule out birefringence effects, since we have not yet seen the IP at an intermediate frequency (Moffett & Hankins (1996)). In general, the polarized fraction of other pulsars decreases with frequency, and the position angle gradient is independent of frequency (Xilouris et al. (1996)). It is generally believed that pulsar depolarization toward higher frequencies is due to the instantaneous superposition of emitted orthogonal or quasi-orthogonal polarization modes (Stinebring et al. 1984a ). Our results seem to indicate that one polarization mode dominates the emission from the IP and HFCs for $`\nu >1.4`$ GHz. Following the standard rotating vector model (RVM), the polarization position angle traces the projected magnetic field of the pulsar if emission occurs along the open field lines. The position angle of the RVM is given by Manchester & Taylor (1977) as $$\psi (\varphi )=\psi _0+\mathrm{tan}^{1}\left[\frac{\mathrm{sin}\alpha \mathrm{sin}(\varphi -\varphi _0)}{\mathrm{sin}\zeta \mathrm{cos}\alpha -\mathrm{cos}\zeta \mathrm{sin}\alpha \mathrm{cos}(\varphi -\varphi _0)}\right],$$ (1) where $`\psi _0`$ is a position angle offset, $`\varphi _0`$ is the pulse phase at which position angle variation is most rapid, $`\alpha `$ is the inclination angle from the rotation axis to the magnetic axis, and $`\zeta `$ is the angle between the rotation axis and the observer’s line-of-sight. The observer’s impact angle with the magnetic axis is just the difference $`\beta =\zeta -\alpha `$. It can also be determined from the maximum slope of position angle with phase by using $$\left[\frac{d\psi }{d\varphi }\right]_{\mathrm{max}}=\frac{\mathrm{sin}\alpha }{\mathrm{sin}\beta }$$ (2) This simple geometric construct is only useful if emission is located close to the polar cap, since the line of sight angle $`\zeta `$ in the model passes through the center of the star (which is not the case in reality). We have made rudimentary fits of Eq(1) to our polarization profiles in an attempt to match polarization signatures to the low altitude dipole model. In Figure POLARIMETRIC PROPERTIES OF THE CRAB PULSAR BETWEEN 1.4 AND 8.4 GHz, we plot the position angles of the Crab’s major components at 1.4, 4.9 and 8.4 GHz with the best fit to the RVM overplotted. The RVM does not fit well for the case where the PA of the IP at all frequencies is left at its 1.4-GHz value, so we have shifted the IP position angle at 1.4 GHz by $`90^{\circ }`$ to match that at higher frequency. From the fit, the angle found between the rotation and magnetic axes is $`\alpha =56.0^{\circ }`$, with one pole projected near the IP, and the other near the LFC.
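Equations (1) and (2) are straightforward to evaluate numerically; the sketch below traces the model PA swing for the fitted $`\alpha =56^{\circ }`$ and an arbitrary illustrative impact angle (not a fitted value):

```python
import numpy as np

def rvm_pa(phi_deg, alpha_deg, zeta_deg, phi0_deg=0.0, psi0_deg=0.0):
    """Rotating vector model position angle of Eq. (1); all angles in degrees."""
    a = np.radians(alpha_deg)
    z = np.radians(zeta_deg)
    dphi = np.radians(np.asarray(phi_deg) - phi0_deg)
    num = np.sin(a) * np.sin(dphi)
    den = np.sin(z) * np.cos(a) - np.cos(z) * np.sin(a) * np.cos(dphi)
    return psi0_deg + np.degrees(np.arctan2(num, den))

alpha, beta = 56.0, 30.0          # inclination from the fit; illustrative impact angle
zeta = alpha + beta               # beta = zeta - alpha
phase = np.linspace(-180.0, 180.0, 721)
pa = rvm_pa(phase, alpha, zeta)

max_slope = np.sin(np.radians(alpha)) / np.sin(np.radians(beta))   # Eq. (2)
print(f"maximum dpsi/dphi = {max_slope:.2f} deg per deg of phase")
```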
With $`\alpha `$ fixed in Eq(2), the slope of the position angle with phase yields an impact angle, $`\beta 51^{}`$, for the IP. The LFC has a smaller impact angle, $`\beta 30^{}`$, and the maximum slope of the fitted curve occurs just ahead of the LFC. At this location, $`\beta 18^{}`$. The impact angles for the components near the interpulse are much larger than those found from other pulsars (Rankin 1993b; Lyne & Manchester 1988). The fitted impact angles are much larger than the polar cap width expected for the Crab pulsar, given by Goldreich & Julian (1969) is $`\rho _{\mathrm{PC}}=\left(\frac{2\pi r}{cP}\right)^{1/2}=4.56^{}`$, where $`\rho _{\mathrm{PC}}`$ is the width of the polar cap, $`r`$ is the height above the surface, $`c`$ is the speed of light, and $`P`$ is the pulsar’s rotational period. So our solution to the RVM fit appears to find emission both close to (LFC) and well away from (IP, HFC1 and HFC2) expected low altitude dipole fields above the polar cap. Our radio wavelength RVM fit does not agree with values found from fitting the visible wavelength PA sweeps (Narayan & Vivekanand (1982)): $`\alpha =86^{}`$, $`\beta _{\mathrm{MP}}=9.6^{}`$, and $`\beta _{\mathrm{IP}}=18^{}`$. The large inclination angle and the small observer impact angles of this fit imply that both poles sweep by the observer. However, the visible wavelength fits were obtained by only fitting for the maximum sweep through each component individually, not by fitting the position angle over the whole pulsar period. A simple comparison of the visible polarization profiles (Smith et al. (1988)) and the high frequency radio profiles show a few similarities. First, the PA of the MP and IP at 1.4 GHz matches the visible PA at the phase of the radio components. And though PAs of components at high radio frequency do not match the visible, the position angles and polarized fraction of both the visible and radio profiles increase in the region occupied by HFC1 and HFC2. ## 5 Emission Geometry It is difficult to interpret the emission geometry from the profile morphology and polarization measurements we have acquired. There are six sites in rotational phase where pulsed emission occurs, and some evidence of radius to frequency mapping. The HFC components appear to be separated by a width comparable to the conal width expected of a pulsar of this period (Rankin 1993), as do the LFC and MP pair. The precursor may even be a core-type component between the LFC and MP. However, the sweep of position angle through these components is shallow, suggesting that radiation comes from far outside a low-altitude emission cone. So far, our interpretation has followed a simple emission geometry proposed by Smith (1986), which places the location of emission at both low and high altitudes. The MP and IP are generated in the outer magnetosphere, near the light cylinder, where the dipolar fields are swept back, and the rotational phase of components and their position angles is not the same as above the polar cap. Although no clear evidence of field sweep back has been found for pulsars, if the emission does originate at high altitudes, the swept-back dipole fields of the pulsar would allow the MP and IP to be formed from either the two sides of the same dipole cone above one pole, or from just the leading edges of dipolar fields above both poles of an orthogonal rotator (Smith et al. (1988)). The precursor and LFC are then generated close to the surface of the star above one pole. 
However, this simplistic model does little to interpret the HFC components, how the IP’s properties change, or the nature of the polarization position angle. Another model, proposed by Romani and Yadigaroglu (1995), ties $`\gamma `$-ray emission of several pulsars to particle production in an outer magnetospheric gap. Through Monte Carlo simulations of particles in the gap, they have generated $`\gamma `$-ray profiles similar to the Crab and Vela pulsars, and have successfully generated a polarization position angle profile similar to that of the optical polarization of the Crab, by projecting the magnetic fields (or polarization of high energy photons) in the outer gap. Using this model, it is even possible to find a less powerful outer gap surface that could drive particle acceleration at rotational phases where HFC1 and HFC2 reside (Romani (1996)). The processes by which radio radiation is generated in the outer magnetosphere are still unknown, though they must be similar to normal pulsar radio production to yield comparable radio power and spectra. Romani and Yadigaroglu (1995) also claim that one should see low altitude emission alongside the outer magnetospheric emission if the orientation of the pulsar allows it. This is true for the Crab pulsar’s precursor as well as the Vela pulsar’s single radio component, which is offset in phase from its X-ray emission. One last piece of information that may aid in efforts to interpret the polarization is evidence for the Crab pulsar’s orientation on the sky. Using optical images from HST, Hester et al. (1995) link certain features found at visible wavelengths with structures found in ROSAT X-ray images. The wisps, arcs, and jet-like features, which probably came from interactions of the Nebula with a pulsar wind, show a cylindrical symmetry, implying that the spin axis of the pulsar is at an angle of $`110^{\circ }`$ east of north, projected $`30^{\circ }`$ out of the plane of the sky to the southeast. If the geometry proposed by Hester et al. is tied to the true spin axis of the pulsar, then the angle of the spin axis to the observer is $`\alpha =90^{\circ }-30^{\circ }=60^{\circ }`$, very close to our fitted value for the observer impact angle to the spin axis determined through RVM fits. ## 6 Conclusion We have presented new polarimetric observations of the Crab pulsar at frequencies between 1.4 and 8.4 GHz which are difficult to interpret under the classical polar cap model. There are more than the typical number of components seen in other pulsars, and they arise from all over the pulsar’s rotational phase. The new pulse components (LFC, HFC1 and HFC2) found in Paper I all have high linear polarization. We re-confirmed the phase shift and spectral change of the IP between 1.4 and 4.9 GHz, and found that the component also undergoes a $`90^{\circ }`$ relative position angle shift with respect to the other components! A good fit of the low altitude rotating-vector model is made to the polarization position angle at high frequencies, but the line of sight impact angles to the magnetic axis are very large, implying that emission is arising from angles beyond the width of the low altitude polar cap region. It appears that the MP and IP do not arise from low altitude dipole emission. However, the LFC and HFC components show some properties inherent to conal emission (just as the precursor exhibits core-type emission). The Crab profile appears to be associated with a mixture of low and high altitude emission, with the IP being the greatest mystery and exception to the rule.
The authors wish to thank Phil Dooley and the LO/IF group at NRAO-Socorro for their maintenance of the HTRP system. DM acknowledges support from a NRAO pre-doctoral fellowship, and from NSF grants AST 93-15285 and AST 96-18408. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This research has made use of data from the University of Michigan Radio Astronomy Observatory which is supported by the National Science Foundation and by funds from the University of Michigan. We also acknowledge the use of NASA’s Astrophysics Data System Abstract Service (ADS), and the SIMBAD database, operated at CDS, Strasbourg, France.
no-problem/9904/astro-ph9904059.html
ar5iv
text
# Galaxies and Superclusters ## Abstract In this brief note, we would like to point out that large-scale structures like galaxies and superclusters would arise quite naturally in the universe. Footnote: Email: birlasc@hd1.vsnl.net.in; birlard@ap.nic.in There being about $`N=10^{11}`$ galaxies in the universe, each about $`l=10^{23}`$ cm across, we can easily verify that $$R\approx \sqrt{N}l$$ (1) where $`R`$ is the radius of the universe. As is well known, (1) arises in the theory of Brownian motion on the one hand and on the other, it is also true if $`N`$ represents the number of elementary particles, $`N\approx 10^{80}`$, in the universe and $`l`$, their size or spread, that is their Compton wavelength. We can now interpret (1) as follows: From the large-scale perspective, the galaxies are approximately in Brownian motion and their size is given correctly by (1). Interestingly, as we can easily verify, an identical relation holds for superclusters also, with a similar interpretation. Moreover, (1) implies a two-dimensional structure; this is indeed true: not only do galaxies have large flat disks, but also superclusters have a flat cellular character. However, (1) is not true for stars. In this case gravitation is strong and the Brownian approximation is no longer valid. Thus galaxies and superclusters would naturally arise in the Universe.
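A quick order-of-magnitude check of relation (1); the particle Compton wavelength adopted here is only a representative value, and the comparison scale of $`10^{28}`$ cm for the radius of the universe is an assumption of this illustration:

```python
N_galaxies, l_galaxy = 1e11, 1e23      # number of galaxies and galaxy size (cm), as quoted
N_particles, l_compton = 1e80, 1e-12   # elementary particles and a representative Compton wavelength (cm)

print(f"sqrt(N)*l for galaxies : {N_galaxies**0.5 * l_galaxy:.1e} cm")
print(f"sqrt(N)*l for particles: {N_particles**0.5 * l_compton:.1e} cm")
# both come out near 1e28 cm, of order the radius of the observable universe
```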
no-problem/9904/astro-ph9904247.html
ar5iv
text
# The Formation and Evolution of Candidate Young Globular Clusters in NGC 3256 ## 1 Introduction Globular clusters have long been recognized as excellent fossil records of the formation history of their host galaxies (Ashman & Zepf 1998 and references therein). They also provide critical testbeds for the study of stellar evolution and stellar dynamics. However, the formation process of globular clusters themselves is not well understood. One hypothesis is that merger-induced starbursts are favorable environments for globular cluster formation (Schweizer 1987, Ashman & Zepf 1992). Ashman & Zepf (1992) specifically predicted that Hubble Space Telescope (HST) images of gas-rich mergers would reveal young globular clusters, readily identifiable through their very compact sizes, high luminosities, and blue colors. This prediction has been dramatically confirmed. Initial discoveries of compact, bright, blue star clusters in HST images of the peculiar galaxy NGC 1275 by Holtzman et al. (1992) and in the well-known galaxy merger NGC 7252 by Whitmore et al. (1993), have been followed up by the observations of similar objects with the characteristics of young globular clusters in a large number of starbursting and merging galaxies (see list in Ashman & Zepf 1998). The identification of these compact, bright, blue objects as young star clusters has been confirmed by ground-based spectroscopy in several systems (e.g. Schweizer & Seitzer 1998, Brodie et al. 1998, Zepf et al. 1995, Schweizer & Seitzer 1993). There are even possible mass estimates from high resolution spectroscopy of a few nearby examples (e.g. Ho & Filippenko 1997). These observations provide significant support for the idea that globular clusters form in galaxy mergers and strong starbursts. They also suggest that globular cluster formation may be a regular part of the starbursting process. The empirical evidence for globular cluster formation in these environments is broadly consistent with the hypothesis that globular clusters primarily form in mergers and starbursts rather than in other sites. Other globular cluster formation scenarios appear to have difficulties accounting for the observational properties of globular cluster systems (e.g. Ashman & Zepf 1998, Harris 1996 and references therein). In particular, correlations between cluster and host galaxy properties and the absence of dark matter are problematic for primordial globular cluster formation models (e.g. Peebles & Dicke 1968, Rosenblatt et al. 1988). Similarly, thermal instability models for globular cluster formation (e.g. Fall & Rees 1985) appear to be unable to account for the absence of a correlation between globular cluster metallicity and mass along with the high metallicities of typical globular clusters (\[Fe/H\] $`>1`$). The discovery of young globular cluster systems in nearby starbursts and galaxy mergers opens up the possibility of more detailed, empirical studies of the formation and evolution of globular clusters. One of the questions that remains to be answered is the efficiency with which globular clusters form in starbursts and mergers. This efficiency is critical for determining if most or all globular clusters can form in merger-like conditions. The efficiency can also constrain models of the formation the clusters themselves. For example, models in which globular clusters form as cores in much larger clouds predict low efficiencies, while models in which typical molecular clouds are compressed may be more efficient. 
A closely related question is the dynamical evolution of globular cluster systems. Most studies to date have concentrated on developing theoretical models and matching these to the properties of old globular cluster systems that have undergone evolution over most of a Hubble time. Attempts to infer the initial population and the effects of evolution from the remnant population are difficult. Observations of young cluster systems can provide valuable input into the initial conditions and early dynamical evolution of globular cluster systems. This is not only true of the mass (luminosity) function, but also of the radii and densities with which the clusters form. The efficiency of globular cluster formation in mergers and the dynamical evolution of globular cluster populations also have significant implications for the use of globular cluster systems as fossil records of the formation history of their host galaxies. For example, Ashman & Zepf (1992) predicted that if elliptical galaxies form by mergers, they should have two or more populations of globular clusters. One of these populations originates from the halos of the progenitor spirals and is therefore spatially extended and metal-poor, while the other forms during the merger and is thus more spatially concentrated and metal-rich. This prediction of multiple populations in the globular cluster systems of ellipticals formed by mergers has now been confirmed in many cases (e.g. Ashman & Zepf 1998 and references therein). However, it has not yet been clearly demonstrated that the efficiency of cluster formation in galaxy mergers is sufficient to account for the metal-rich globular cluster population observed in elliptical galaxies. Furthermore, although there are strong theoretical arguments that the mass function of globular cluster systems evolves significantly over time to resemble the log-normal mass function of old globular cluster systems (e.g. Gnedin & Ostriker 1997, Murali & Weinberg 1997a), this evolution has not been demonstrated observationally. The goal of this paper is to address the questions of the formation and evolution of globular clusters through the study of the galaxy merger NGC 3256. The HST observations on which this study is based and the analysis of these data are presented in $`\mathrm{\S }2`$. The resulting sample of a large number of compact, bright, blue objects in NGC 3256 is examined in detail in $`\mathrm{\S }3`$. This section includes the determination of the relationship between the magnitudes, colors, and radii of the young cluster sample and the luminosity function. The implications of the results for the formation efficiency and dynamical evolution of globular cluster systems are discussed in $`\mathrm{\S }4`$, and the conclusions are given in $`\mathrm{\S }5`$. ## 2 Observations and Data Reduction ### 2.1 Target Galaxy We utilized HST and WFPC2 to obtain high resolution images of the galaxy NGC 3256. This galaxy was selected for our program because it has long been identified as a galaxy merger (e.g. Toomre 1977) and is fairly nearby, with $`cz_{\odot }=2820`$ $`\mathrm{km}\mathrm{s}^{-1}`$ (English et al. 1999), which places NGC 3256 at a distance of 37 Mpc for $`H_0=75`$ km s⁻¹ Mpc⁻¹. As shown in Figures 1 and 2 (plates 1 and 2), the central region of NGC 3256 has star forming knots, dust lanes, and loops, along with a more extended, smoother component.
In the radio continuum and at $`2.2\mu \mathrm{m}`$ there appear to be two nuclei separated by about $`5^{\prime \prime }`$, or about 1 kpc (Norris & Forbes 1995, Kotilainen et al. 1996, Doyon, Joseph, & Wright 1994). Tidal tails can also be seen in the optical images in Figure 1, and have been shown to extend out to $`\sim `$ 50 kpc in HI (English et al. 1999). Toomre (1977) placed it in the middle of his sequence of disk galaxy mergers, suggesting that it is dynamically older than the NGC 4038/4039 system (the Antennae), but younger than NGC 7252. Of the eleven mergers on the Toomre list, NGC 3256 also has the most molecular gas ($`1.5\times 10^{10}`$ $`M_{\odot }`$, Casoli et al. 1991, Aalto et al. 1991, Mirabel et al. 1990), and is the brightest in the far-infrared ($`L_{FIR}=3\times 10^{11}`$ $`L_{\odot }`$, Sargent et al. 1989). ### 2.2 HST Observations The WFPC2 images of NGC 3256 were obtained with the Planetary Camera (PC) centered on the galaxy. At a distance of 37 Mpc, each PC pixel is 8 pc, and the PC covers a total of 7 kpc $`\times `$ 7 kpc, encompassing the starburst region identified in previous studies. The PC data centered on NGC 3256 are the subject of this paper. The data at larger radii will be discussed in future papers. We imaged NGC 3256 in the F450W and F814W filters. Two equal exposures were obtained through each filter, with total exposure times of 1800s in F450W and 1600s in F814W. For each filter, the two exposures were combined utilizing a cosmic-ray rejection routine kindly provided by Rick White. As a check on this procedure, we also performed the more standard CCREJECT task in STSDAS on the images in each filter, and then set a strict criterion for matching the object lists between the two filters. The final results were very similar to those produced by White’s routine (cf. Schweizer et al. 1996, Miller et al. 1997). A visual examination of the few differences favored the results of White’s routine, so we used these combined images for further analysis. In any case, the number of compact sources observed is far greater than any possible residual defects. The resulting combined images are shown in Figure 2 (Plate 2). ### 2.3 Cluster Identification A wealth of blue, compact objects is revealed in the HST images shown in Figure 2 (Plate 2). In order to determine the magnitudes and sizes of the compact objects discovered in the HST images, we first used the DAOFIND task in IRAF to identify objects. This task convolves the image with a Gaussian kernel, finds the best fitting Gaussian function at each point, and then searches for density enhancements which are both greater than a given threshold value and the brightest density enhancement in a localized region determined by the width of the Gaussian kernel. For this analysis, we set the FWHM of the Gaussian to be 2.8 pixels, which is the apparent width expected for an object with a true FWHM of roughly 2 pixels. We also applied broad cuts with the DAOFIND sharpness and roundness criteria to eliminate a few extremely diffuse or sharp features. There are two notable effects of identifying objects in this standard way. One is that it introduces a selection bias against objects significantly larger than the smoothing kernel. This is a direct result of the search for density enhancements on a given scale. Although not an issue if all of the objects are unresolved or marginally resolved, this selection effect needs to be accounted for in studies of the distribution of object sizes.
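A present-day equivalent of this detection step can be sketched with the photutils package; the FWHM matches the value quoted above, while the threshold level and the sharpness/roundness cuts are illustrative placeholders rather than the values actually used in the reduction:

```python
import numpy as np
from astropy.stats import sigma_clipped_stats
from photutils.detection import DAOStarFinder

# Stand-in for the cosmic-ray-cleaned PC frame; replace with the real image array.
image = np.random.normal(50.0, 5.0, size=(800, 800))

mean, median, std = sigma_clipped_stats(image, sigma=3.0)
finder = DAOStarFinder(fwhm=2.8,                  # apparent FWHM adopted in the text
                       threshold=5.0 * std,       # one global threshold over the frame
                       sharplo=0.2, sharphi=1.0,  # broad sharpness cuts (placeholders)
                       roundlo=-1.0, roundhi=1.0) # broad roundness cuts (placeholders)
sources = finder(image - median)
print(0 if sources is None else len(sources), "candidate objects")
```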
A second aspect of DAOFIND is that the threshold is defined globally. Several other globular cluster searches have been performed using a local threshold, rather than a global one (e.g. Kundu et al. 1998, Carlson et al. 1998, Miller et al. 1997). Although this has the advantage of giving a uniform number of false detections over the image, it does so at the cost of producing a non-uniform magnitude limit across the frame. As this is critical for our purposes, we retain the global threshold, and simply set it high enough that the probability of a spurious source in regions of high background (noise) is negligible. Perhaps the most critical aspect is that the detection algorithm is well-understood, and can be run on a variety of artificial datasets to explore the success with which it recovers objects of various luminosities, colors, and sizes. ### 2.4 Cluster Photometry The next step is to determine the magnitudes of the identified objects. Because of crowding, variable background, and signal-to-noise limitations, it is not possible to determine the brightness profile of the objects out to large radius. Therefore we perform aperture photometry from one to several pixels in radius, and correct these modest apertures to total magnitudes. If the objects were unresolved, the aperture correction to total magnitude would be straightforward, as the HST point spread function (psf) is reasonably well-understood. Moreover, an aperture of several pixels incorporates the majority of the light from an unresolved source, even in the PC, so the overall correction is not a large one. However, the objects we detect in NGC 3256 are resolved, as expected for objects with sizes like those of Galactic globular clusters at the distance of NGC 3256. In this case, the aperture corrections depend on both the psf and the intrinsic radial profile of the object. There is a limited amount of spatial information in the surface brightness profile within the few pixels radius out to which it can be reliably measured. Therefore, if a form of the profile is assumed, the radial scale of that profile can be determined by the difference between magnitudes at small radii. A Gaussian shape for the intrinsic profile of the clusters has been adopted in most previous work (e.g. Whitmore et al. 1993, Whitmore & Schweizer 1995, Schweizer et al. 1996). In order to facilitate comparison with these studies, we also adopt a Gaussian profile for the clusters, although we note that this will tend to underestimate the total magnitude if the clusters follow a more extended profile, such as a modified Hubble law (e.g. Holtzman et al. 1996, Carlson et al. 1998, Ostlin et al. 1998). In detail, we determine the size of each object by comparing the difference between magnitudes within apertures of one and three pixels ($`m_3-m_1`$) to a table of values for a wide range of Gaussian profiles convolved with the HST psf given by the TINYTIM software (Krist 1993). This is done in both the B and the I filters, and the resulting intrinsic FWHM is taken as the average of the two. We tested this procedure by using I-band (F814W) observations of unsaturated stars in the globular cluster Omega Cen as the basis for the HST psf. This psf gives the same inferred sizes as the TINYTIM psf when the intrinsic input FWHM is greater than about 0.5 of a PC pixel. For objects with intrinsic sizes less than 0.5 of a PC pixel, a slightly ($`\sim 10\%`$) smaller size is inferred with the Omega Cen psf than with the TINYTIM psf.
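To make the lookup concrete, the sketch below builds such a table under the simplifying assumption of a purely Gaussian PSF of FWHM 1.7 PC pixels; the actual analysis used TINYTIM model PSFs, so the numbers here are illustrative only:

```python
import numpy as np

def frac_in_aperture(r_pix, sigma_pix):
    """Enclosed fraction of a circular 2-D Gaussian within radius r (pixels)."""
    return 1.0 - np.exp(-r_pix**2 / (2.0 * sigma_pix**2))

def m3_minus_m1(sigma_obs):
    """Magnitude difference between 3-pixel and 1-pixel apertures."""
    return -2.5 * np.log10(frac_in_aperture(3, sigma_obs) / frac_in_aperture(1, sigma_obs))

psf_sigma = 1.7 / 2.355                              # assumed Gaussian PSF, FWHM = 1.7 pixels
fwhm_grid = np.linspace(0.1, 6.0, 60)                # intrinsic cluster FWHM grid (pixels)
sigma_obs = np.hypot(fwhm_grid / 2.355, psf_sigma)   # Gaussian widths add in quadrature
table = np.array([m3_minus_m1(s) for s in sigma_obs])

def intrinsic_fwhm(measured_dm):
    """Recover the intrinsic FWHM from a measured m3 - m1 (table decreases with size)."""
    return np.interp(measured_dm, table[::-1], fwhm_grid[::-1])

print(intrinsic_fwhm(-1.0))   # example: intrinsic FWHM in pixels for m3 - m1 = -1.0 mag
```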
This difference in inferred size due to different psfs is much smaller than the uncertainty introduced by the assumption of a form for the intrinsic surface brightness profile of globular clusters with only a single free parameter. Therefore, we adopt the TINYTIM results for the remaining analysis. We note that although the absolute errors in total magnitudes and half-light radii due to the requirement of an assumed profile form for the clusters in the sample can be significant, they should give good relative sizes and total magnitudes, provided the clusters are roughly similar to each other in structural characteristics. Moreover, any systematic error is likely to affect both filters, so that the colors will be mostly unaffected. The total magnitude of each object is determined by applying an aperture correction, appropriate for the measured size of each object, to the magnitude within an aperture of 2 pixels in radius. This aperture correction for an object of typical size in our sample is roughly 0.85 magnitudes in B and 1.0 magnitudes in I. It compares to 0.25 magnitudes in B and 0.48 in I for completely unresolved objects. These differences emphasize both that our objects are significantly resolved, and that the colors are largely unaffected by the correction to total magnitudes (cf. Holtzman et al. 1996). We correct the magnitudes and colors of the objects for Galactic reddening using the dust maps of Schlegel, Finkbeiner, & Davis (1998). These give $`E(B-V)=0.12`$, and extinction in our HST filters of $`A_{F450W}=0.47`$ and $`A_{F814W}=0.23`$. The absolute photometric calibration to the standard B and $`I_C`$ system is achieved using the Holtzman et al. (1995) zero points for the F450W and F814W filters. ## 3 Analysis The color, magnitude, and sizes of the objects detected in NGC 3256 are plotted in Figure 3 (the positions, magnitudes, colors, and sizes of the objects found in NGC 3256 are also given in Table 1, available either in the electronic journal or from the first author). Several features of the cluster system of this galaxy merger are apparent in this diagram. One is the very large number of bright star clusters in this galaxy, approximately 1,000 with $`M_B<-9`$ inside the central 7 kpc $`\times `$ 7 kpc. These objects account for approximately $`19\%`$ of the B light and $`7\%`$ of the I light within this region. The star clusters generally have blue colors. The bright magnitudes and blue colors are indicative of massive star clusters at young ages. For reference, Figure 4 shows the prediction of two stellar population models for the color and magnitude of a $`2\times 10^5`$ $`M_{\odot }`$ globular cluster as a function of age. The clusters are also compact like globular clusters, with typical sizes of $`\lesssim 10`$ pc. Only a few of the 1,000 objects are expected to be compact background galaxies or foreground stars, based on similar analyses of blank fields, such as the Hubble Deep Field. In order to address the true distribution of the population in color, size, and magnitude, simulations of artificial datasets are required to calibrate the detection procedure. For example, the absence of objects with very faint B magnitudes and blue B-I colors may be caused by the detection limit in the I band. Similarly, the absence of large, faint clusters may also be due to selection effects.
Therefore, we created a grid of artificial objects with a range of magnitudes in each bandpass, and a range of sizes for each magnitude. By creating a full grid of artificial stars in both bandpasses, we can address the issue of any effect of incompleteness in B or I on the color distribution at faint magnitudes. The difference between input and output magnitudes also provides a calibration of the effect of “bin jumping” when constructing luminosity functions. Similarly, by incorporating a range of sizes in the artificial star tests, we can address the question of the intrinsic size distribution of the cluster population. This study is the first in which all of these effects have been modeled. We can then test the consistency of various models of the luminosity, color, and sizes of the candidate globular clusters in NGC 3256 against the observations. Specifically, we create model data sets with various combinations of luminosity functions, color-magnitude relations, and luminosity-size relations. For the luminosity function and luminosity-size relation, we adopt a power-law form, while we use a linear relation between color and magnitude. The intrinsic widths of the color and size distributions are drawn from the data at bright magnitudes where they are unaffected by selection. Predictions for observables for each model are made by convolving the model with the selection functions derived from the simulations described above. We compare these predictions of various models to the luminosity function, color-magnitude relation, and luminosity-size relation observed for the candidate young globular clusters in NGC 3256 (Figure 3). A model is considered to fit the color-magnitude and luminosity-size relation if the linear regression of these parameters is statistically consistent with the data. To ensure that the results are not dependent on the use of a linear fit, we also compare the median colors and sizes as a function of magnitude of the models to the observations, with the uncertainties in the medians of the data determined via bootstrapping. For the luminosity function, the goodness of the fit is determined using the double-root-residual test (Ashman, Bird, & Zepf 1994). We also test the effects of changes in any one of the underlying distributions on all of the observed properties, as they are not decoupled from each other. For example, an underlying luminosity function can be flat, but if the clusters are smaller at faint magnitudes, they will be easier to detect, and the luminosity function will appear to rise at faint magnitudes. Similarly, a trend of color with magnitude can also give rise to an apparent luminosity function different from the underlying one if different color clusters are detected with different efficiency at faint magnitudes. We find that the best fitting model cluster population has little or no correlation between luminosity and color (B-I independent of $`M_B`$), a shallow correlation between luminosity and radius ($`r\propto L^{0.07}`$), and a power-law luminosity function $`N(L)\propto L^{-1.8}`$. This best fitting model is shown in Figure 5. The statistical uncertainties on the parameters of the underlying cluster population are roughly 0.05 in the slope of the magnitude-color relation, and 0.1 in the exponent of both the radius-luminosity relation and the luminosity function. We discuss the magnitude-color relation, luminosity-size relation, and luminosity function individually in more detail below.
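For concreteness, a minimal Monte Carlo sketch of such an underlying model population (before convolution with the selection function) might look like the following; the luminosity range, color spread, and radius zero point are placeholders rather than fitted values:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Draw luminosities from N(L) dL ∝ L^-1.8 between L_min and L_max
# using the standard inverse-CDF transform for a truncated power law.
a = -1.8
L_min, L_max = 1.0, 1.0e3                               # arbitrary luminosity units
u = rng.uniform(size=n)
L = (L_min**(a + 1) + u * (L_max**(a + 1) - L_min**(a + 1)))**(1.0 / (a + 1))

M_B = -9.0 - 2.5 * np.log10(L)                          # anchor the faint end near M_B = -9 (illustrative)
B_I = rng.normal(0.5, 0.3, size=n)                      # color independent of magnitude (assumed spread)
r_pc = 6.0 * (L / np.median(L))**0.07                   # shallow radius-luminosity relation (assumed zero point)
```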
### 3.1 Color-Magnitude Relation The broad color distribution and absence of a strong relationship between color and luminosity place strong constraints on the nature and evolution of the young cluster system in NGC 3256. In order to produce the observed range of colors, either differential reddening by dust or a range of ages is required. However, both reddening and age generally produce fainter magnitudes for redder objects (see Figure 4). This is not observed in NGC 3256, as shown by the similar color-magnitude diagrams of the data (Figure 3) and a simulated data set with no relationship between color and magnitude (Figure 5). A quantitative measure of the close agreement between the observations and a simulated data set with no relationship between color and magnitude is the similar median (B-I) colors as a function of B magnitude, which are given in Table 2. In contrast, a simulated data set with an intrinsic slope of 0.5 in (B-I) per magnitude in B, like that expected for a standard reddening law, gives a much steeper relation between (B-I) color and B magnitude, as shown in Table 2. Similar results are obtained using other robust measures of the average color as a function of magnitude. The fundamental result is that we are unable to account for the broad range of observed colors solely by differential reddening or a broad age distribution because there is no intrinsic trend of redder colors with fainter magnitudes. The observations can be accounted for in two ways. One possibility is that the young cluster system in NGC 3256 has an age distribution up to several hundred Myr and low mass clusters are preferentially destroyed over this timescale. In this way, the typical luminosity of the older, redder cluster population will not become much fainter than that of the younger, bluer cluster population because the younger population will have more low-mass clusters. A modest amount of reddening may also be required to produce the colors of the reddest clusters, but not so much that a strong color-luminosity trend is produced. The lack of a strong color-luminosity relation can also be accounted for if most of the clusters are very young. At ages up to $`\sim 10`$ Myr, stellar population models predict a fairly broad range of colors with little change in B luminosity (see Figure 4). This effect is due to red supergiants, and is stronger in the Leitherer et al. (1998) models than the Bruzual & Charlot (1998) models because of the increased presence of red supergiants in the former models. As in the previous case, some reddening may be required to produce the reddest clusters, but reddening cannot be the primary determinant of the cluster colors, or a color-magnitude relation would be introduced, contrary to the observations. The critical aspect of the possibility that red supergiants at young ages account for much of the observed color spread is the requirement of a very young age for the system as a whole because all of the models begin to produce a significant color-luminosity trend after $`\sim 10`$ Myr. Both the destruction and red supergiant hypotheses can account for a broad color distribution without a strong color-luminosity relation. An additional constraint is that the range of ages in the young cluster population would not be expected to be less than the dynamical time of the region in which they formed. Adopting a radius of 3 kpc for the region in which the young clusters are found, and a typical velocity of $`v\approx 150`$ $`\mathrm{km}\mathrm{s}^{-1}`$, we find a dynamical time of 20 Myr.
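The quoted dynamical time follows from a one-line estimate:

```python
KPC_IN_KM = 3.086e16     # kilometres per kiloparsec
SEC_IN_YR = 3.156e7      # seconds per year

t_dyn_s = 3.0 * KPC_IN_KM / 150.0               # (3 kpc) / (150 km/s), in seconds
print(f"{t_dyn_s / SEC_IN_YR / 1e6:.0f} Myr")   # ~20 Myr
```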
Thus the very young age hypothesis is marginally inconsistent with the requirement that the cluster population not form on a timescale shorter than the dynamical time. Other explanations for the absence of a strong color-luminosity trend are strongly constrained by the large observed color spread. For example, a model in which younger clusters are more heavily reddened than older clusters can give a weak color-luminosity trend. However, it does so at the cost of narrowing the color spread, and therefore fails to account for the observations. ### 3.2 Luminosity-Radius Relation A second observation of relevance for models of the formation and evolution of young star clusters is the luminosity-radius relationship. The NGC 3256 young star cluster system has a very shallow relationship between radius and luminosity, roughly $`r\propto L^{0.07}`$. The relationship of radius with mass is likely to be similar to that with luminosity. This follows from the absence of a correlation between color and luminosity, which suggests that the mass-to-light ratio is mostly independent of luminosity. Thus, cluster mass is likely to be fairly independent of radius. A weak correlation between mass and radius has significant implications for the formation and evolution of globular clusters. Clouds in hydrostatic equilibrium follow the relationship $`r\propto M^{1/2}P^{-1/4}`$, where $`P`$ is the ambient pressure (e.g. Ashman & Zepf 1999, Elmegreen 1989). The shallow observed correlation between radius and mass therefore suggests that higher mass clusters form at higher pressure. If confirmed, this will play a significant role in developing models of globular cluster formation. A shallow relationship between mass and radius also suggests that on average low-mass clusters are formed with lower density and are less bound than higher mass clusters. Therefore, they will be more susceptible to destruction by mass loss at early ages, and through tidal shocks over time. This result is an important input into determinations of the effect of dynamical evolution on the mass function of clusters, as discussed in the previous and the following subsections. In particular, studies of the dynamical evolution of globular cluster systems must adopt some relation between radius and mass for the initial cluster population (e.g. Ostriker & Gnedin 1997 and references therein). Without data from young clusters, this relation has been based on observations of old cluster populations, whose properties may have already been altered by dynamical evolution. Thus, observations of the radii of young cluster systems are an important part of the study of dynamical evolution of globular cluster systems. These conclusions regarding the mass-radius relationship require confirmation, as the clusters are only marginally resolved in the present data. We are able to recover differences in the sizes of the objects, given a form for the radial profile of the cluster. The inferred mass-radius relationship for the cluster population should not be sensitive to the form of the profile adopted because it is only based on relative values of cluster radii. Therefore, our result is not likely to be sensitive to the specific cluster profile chosen, although it will be affected by any systematic changes in the shape of the profile with cluster mass. ### 3.3 Luminosity Function The luminosity function of the NGC 3256 cluster system has a best-fitting power law with slope of about $`-1.8`$, with tentative evidence that it flattens at faint magnitudes.
This slope is similar to that found in other young globular cluster systems in galaxy mergers (e.g. Whitmore & Schweizer 1995, Schweizer et al. 1996, Miller et al. 1997, Carlson et al. 1997) and also to that for populous clusters in the LMC (Elmegreen & Efremov 1997, Elson & Fall 1985). The most notable difference between the observed luminosity function and a power-law convolved with observational selection is that the data appear to be flatter at the faint end than the model. In order to assess the statistical significance of this difference, we utilized a double-root-residual test (Ashman et al. 1994). The test indicates the difference is significant at about the $`2.5\sigma `$ level. However, given the uncertainties modeling the selection at these faint levels, any deviation from a power-law slope is tentative. Although the luminosity function of the NGC 3256 cluster system is now roughly a power-law, both the luminosity-color and luminosity-radius relation suggest that it is likely to evolve significantly over time. This is also expected on theoretical grounds (e.g. Gnedin & Ostriker 1997, Murali & Weinberg 1997a). A comparison of the observations to these theoretical expectations is presented below. ## 4 Discussion HST imaging of the galaxy merger NGC 3256 has revealed a large population of objects with the bright luminosities, blue colors, and compact sizes expected of objects like Galactic globular clusters at young ages. In this section, we explore in more detail the relation of young clusters observed in starbursting and merging galaxies like NGC 3256 to old globular clusters, and to implications these observations have for models of the formation of globular clusters. ### 4.1 Cluster Mass Function and Dynamical Evolution The power-law luminosity function is the one feature of the NGC 3256 young cluster system that is decidedly different from that of old globular cluster systems around galaxies like M87 and the Milky Way, which have lognormal luminosity functions. Although the masses and sizes inferred for many of the young clusters in NGC 3256 are similar to those of old globular clusters (e.g. Figures 3 and 5), the difference in the shape of the luminosity function has long been used to argue that the clusters in mergers and starbursts are fundamentally different than in the old systems (e.g. van den Bergh 1995). However, it has also long been realized that dynamical evolution can significantly alter the mass function of clusters systems over a Hubble time (e.g. Fall & Rees 1977, Gnedin & Ostriker 1997, Murali & Weinberg 1997a, 1997b, Ashman & Zepf 1998 and many references therein). We therefore consider whether observations of the mass and luminosity functions of cluster systems over a range of ages can be consistently accounted for by dynamical evolution. The two long-term dynamical processes that may be relevant for determining the shape of the globular cluster mass function are evaporation and tidal shocking. These have the following scalings between the lifetime of a cluster and its mass and radius $`t_{evap}M^{1/2}r^{3/2}`$, and $`t_{sh}Mr^3`$. The timescale for tidal shocks also depends on the cluster orbit and galaxy potential. As long as these are similar in the galaxies being compared, the scaling is independent of these parameters. Therefore, we concentrate on the scaling of the destruction timescale with cluster mass and radius. In $`\mathrm{\S }3.2`$, we found that cluster mass and radius may be only weakly dependent on one another with $`rL^{0.07}`$. 
With this result, the equations above give a timescale for destruction that scales as roughly $`M^{2/3}`$ for both evaporation and tidal shocking. These give relative timescales for destruction of clusters as a function of their mass. An absolute timescale can be placed on these scalings through the known turnover mass and age of the globular cluster systems of galaxies like the Milky Way and M87. This allows us to determine the cluster mass scale that is undergoing destruction at the ages corresponding to young cluster systems observed in galaxy mergers. In this way, we find that at an age of 100 Myr, the characteristic mass scale set by dynamical processes is $`10^3`$ of that for the globular cluster systems of the Milky Way and M87. At 500 Myr the corresponding number is about $`10^2`$ of the characteristic mass scale of old globular cluster systems. This calculation is also based on the simplifying assumption that the dynamical processes are independent of one another, which is not strictly true. However, the numbers given above are not likely to be dramatically wrong as long as the observation that $`M`$ and $`r`$ are mostly independent holds. If long-term dynamical processes are responsible for evolution of the mass function, the above calculation suggests the “turnover” in the mass function in young globular cluster systems will occur at much smaller masses than in older globular cluster systems like that in the Galaxy. These small masses make such a turnover very difficult to observe directly in young systems. Specifically, the above calculation suggests that clusters with masses of roughly $`1\%`$ of the current turnover mass are being destroyed at the ages typical of young globular cluster systems, such as the NGC 3256 system studied here, as well as the NGC 1275 system (Carlson et al. 1998) and the NGC 7252 system (Miller et al. 1997). This corresponds to clusters five magnitudes below the current turnover of the globular cluster luminosity function. In order to calculate the detectability of a dynamically induced turnover in young cluster systems, the expected brightening of young clusters must also be accounted for. Stellar populations models (e.g. Bruzual & Charlot 1998) predict that old objects of this magnitude were about 5.0 magnitudes brighter in B and 3.7 in V at the ages inferred for NGC 3256 from their colors. The corresponding brightening for the slightly older NGC 1275 and NGC 7252 cluster systems we will also study is about 4.2 magnitudes in B and 3.1 in V. The predicted brightening in these young cluster systems relative to $`13`$ Gyr old populations is dependent primarily on the mass function between several $`M_{}`$ and slightly less than one $`M_{}`$. The adopted numbers are for a Salpeter (1955) slope of $`x=1.35`$ are somewhat less for flatter mass functions. The net result is that in young cluster systems observed in the B-band, such as NGC 3256, the observations must reach absolute magnitudes around the current turnover luminosity in old globular cluster systems, roughly $`B=6.8`$. As can been seen in Figure 3, the observations fail by several magnitudes to reach such faint levels. In fact, it will be difficult to obtain reliable cluster counts at such magnitudes because of potential confusion with individual bright stars, which can have similar luminosities. Analyses of the NGC 1275 and NGC 7252 datasets give similar conclusions that the highest mass scale at which dynamical evolution is expected to be effective is well below the observational limit. 
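The numbers in the preceding paragraphs follow from simple scalings. With radius nearly independent of mass, both evaporation and tidal shocking give a destruction time growing roughly as M^(2/3), so the mass currently being destroyed grows as t^(3/2); anchoring this at the turnover mass of ~13 Gyr old systems, and converting the resulting mass ratio into magnitudes at fixed mass-to-light ratio, reproduces the figures quoted above. The back-of-the-envelope check below uses the 13 Gyr age, the B-band turnover of about -6.8 and the 5.0 mag brightening adopted in the text; everything else is arithmetic.

```python
import numpy as np

# Destruction timescales: with radius nearly independent of mass, both
# t_evap ∝ M^(1/2) r^(3/2) and t_sh ∝ M r^3 give roughly t_dest ∝ M^(2/3),
# i.e. the mass scale being destroyed grows as t^(3/2).
t_old = 13.0                            # Gyr, assumed age of old GC systems (Milky Way, M87)
for t_young_myr in (100.0, 500.0):
    ratio = (t_young_myr / 1e3 / t_old) ** 1.5
    print(f"t = {t_young_myr:.0f} Myr : M_destroyed / M_turnover ~ {ratio:.1e}")
# -> ~7e-4 at 100 Myr and ~8e-3 at 500 Myr, i.e. the ~10^-3 and ~10^-2 of the text

# Converting the text's representative ~1% mass ratio into magnitudes
# (assuming a fixed mass-to-light ratio):
delta_mag     = 2.5 * np.log10(1.0 / 0.01)   # = 5 mag below the turnover
M_B_turnover  = -6.8                          # current GCLF turnover in B (text value)
brightening_B = 5.0                           # model brightening at NGC 3256 ages (text value)
M_B_required  = (M_B_turnover + delta_mag) - brightening_B
print(f"required depth: M_B ~ {M_B_required:.1f}")   # ~ -6.8, i.e. the old turnover itself
```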
In particular, for these $`500`$ Myr old systems, the observations would have to reach limits of about $`B\stackrel{>}{}6`$ and $`V\stackrel{>}{}5.5`$, while the observations are limited to about two magnitudes brighter. The luminosity functions of known cluster systems are therefore consistent with a model in which globular clusters form with a power-law mass function, and evolve through well-known dynamical processes to have the log-normal mass function observed in old systems such as the Milky Way and M87. Testing this hypothesis by directly looking for the turnover in young cluster systems is difficult because the mass scale is predicted to be extremely small. This work suggests that searches for turnovers in globular cluster mass functions might be more profitable in more intermediate-age systems in which the predicted mass scale may be accessible to observation. In order to improve the predictions for the dynamical evolution of young globular cluster systems, better constraints on the initial mass-radius relation are required. Studies of young cluster systems are critical for providing these constraints because these systems have not undergone as much dynamical evolution and will more accurately reflect the relationship between mass and radius at the time the young clusters formed. This relation plays a major role in the dynamical calculations. Specifically, the timescales for evaporation and tidal shocking depend on both mass and radius. If the relationship between mass and radius is stronger than it appears to be in our observations of the NGC 3256 system, the timescales for disruption of clusters of different masses will be affected. For example, if $`rM^{1/2}`$, then $`t_{evap}`$ will have a very steep dependence on mass, and $`t_{sh}`$ will be shorter for larger masses. In this case, the mass scale for evaporation at 100 Myr would be only about a factor of 5 less than that after 10 Gyr, and only a factor of 3 less at 500 Myr compared to 10 Gyr. However, the tidal shocking would then be inversely correlated with mass, so both high and low mass clusters would be destroyed and the observational predictions in this case are less clear. Perhaps the safest conclusion then from our analysis of the formation and destruction of globular clusters is that information on the radii of the clusters as well as their mass function is critical for determining the effects of dynamical evolution and comparing of mass functions of young and old cluster systems. The cluster radii and mass-radius relation also have significant implications for models of the formation of young globular clusters. Further exploration of observational constraints on the initial mass-radius relation is a promising route for future studies. ### 4.2 Efficiency of Compact, Massive Cluster Formation The census of compact, young star clusters in NGC 3256 allows the determination of the efficiency with which clusters form. The simplest characterization of efficiency is the fraction of new stars formed that are in dense star clusters. As described in $`\mathrm{\S }3.1`$, this fraction is about $`20\%`$ based on the percentage of blue light that is in dense star clusters. This value is similar to that observed in smaller starburst regions by Meurer et al. (1995). 
It is somewhat larger than the fraction of blue light in dense star clusters observed in the older NGC 7252 system, suggesting that either the formation efficiency peaks near the peak of the starburst or the dynamical evolution removes some of the clusters on the timescale of the age of the NGC 7252 system. A critical aspect of these NGC 3256 data is that they demonstrate that high formation efficiency can occur over a large area (7 kpc $`\times `$ 7 kpc) encompassing much of a $`L_{}`$ galaxy. These data also indicate that the fraction of stars that form in these dense star clusters is not negligible, and suggest that this is an interesting mode of star formation. A second way to characterize the efficiency of cluster formation is to compare it to the total amount of gas available. This is less straightforward than comparing the fraction of light, but connects more directly to current theoretical models. The total amount of molecular gas inferred from CO observations and standard assumptions regarding the conversion to H<sub>2</sub> mass is $`1.5\times 10^{10}`$ $`M_{}`$ (Casoli et al. 1991, Aalto et al. 1991, Mirabel et al. 1990). The total mass in the young cluster system can be estimated from the color of each object, and a stellar populations model that allows color to be converted to age and mass-to-light ratio. The observed luminosity can then be converted into mass. Applying this procedure individually to each cluster in the NGC 3256 cluster sample and summing up gives a total mass in the young cluster system of $`6\times 10^7`$ $`M_{}`$. This is based on taking all objects with $`(BI)<1.5`$ and inferred sizes less than 15 pc, and using the Charlot-Bruzual stellar population models with a Miller-Scalo initial mass function. A Salpeter initial mass function would increase the mass estimate by approximately $`50\%`$. Reddening of observed objects is less of a factor because its effects on the mass estimate cancel out to first order. Specifically, reddening makes the objects fainter, decreasing the apparent luminosity, but also redder, increasing the inferred mass-to-light ratio, so in the end the mass estimate is similar. For example, the mass estimate for the NGC 3256 cluster system is only about $`30\%`$ greater for an internal reddening of $`A_B=1.0`$ compared to $`A_B=0.0`$. An additional effect is that some clusters may be extincted beyond detection. The resulting efficiency for the formation of massive, compact clusters computed in this way is then about $`0.5\%`$. This is a lower limit in the sense that cluster formation is ongoing and there is plenty of gas mass left from which to form more clusters. If clusters continue to make up $`20\%`$ of the stars formed, the total cluster mass fraction at the end of the starburst will be closer to this value. Therefore, the observations place the formation efficiency between $`0.520\%`$ depending on how exactly efficiency is defined and what happens in the future of the starburst in NGC 3256. It is interesting to compare our observed efficiency of globular cluster formation in NGC 3256 to the fraction of mass in old stellar populations that is in globular clusters. The highest observed fraction of mass in globular clusters to total stellar mass is about $`1\%`$, as seen in the Galactic halo and the richest extragalactic globular cluster systems such as M87 (e.g. Ashman & Zepf 1998). Typical elliptical galaxies have ratios about five times lower, around $`0.2\%`$. 
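The ~0.5 per cent efficiency quoted above is simply the ratio of the total young-cluster mass to the molecular gas mass, and the sensitivity to the IMF and to reddening enters as the multiplicative factors mentioned in the text. A minimal numerical check:

```python
M_clusters = 6.0e7     # Msun, total mass in young clusters (text estimate)
M_gas      = 1.5e10    # Msun, molecular gas mass from CO (text value)

eff = M_clusters / M_gas
print(f"efficiency ~ {100*eff:.1f}%")            # ~0.4%, i.e. the ~0.5% quoted

# Sensitivity of the estimate to the assumptions quoted in the text
print(f"Salpeter IMF : ~{100*eff*1.5:.1f}%")     # ~50% larger cluster masses
print(f"A_B = 1 mag  : ~{100*eff*1.3:.1f}%")     # ~30% larger cluster masses
```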
Therefore, the globular cluster formation efficiency observed in NGC 3256 is more than sufficient to account for the globular cluster systems of elliptical galaxies. More specifically, if the mass fraction of stars that form in globular clusters is closer to $`20\%`$ over the full starburst/merger event, then mergers have an over-efficiency problem. This over-efficiency problem can be alleviated in several ways. One possibility is that the progenitor spirals are gas-poor, which may be true at the current epoch but seems unlikely at high redshift when most mergers are likely to occur. A more likely explanation is significant dynamical destruction of the cluster population. This destruction is predicted by theory and required by observation if the power-law luminosity functions of young cluster systems are to match the lognormal luminosity function of old globular cluster systems. Conversely, if the overall mass fraction of stars that form in globular clusters is closer to $`0.5\%`$, then gas-rich mergers will make about the right number of globular clusters for moderately rich systems, and gas-poor mergers will lead to moderately poor globular cluster systems. The reality is likely to be somewhere between these two extremes. It is unlikely that all globular cluster formation in NGC 3256 will immediately cease while the large reservoir of cold gas continues to form stars, as required for the overall efficiency to be $`0.5\%`$. It is also unlikely that all of the remaining gas will form stars in a starburst in which $`20\%`$ of the new stars are formed in dense, massive clusters, as is currently happening in NGC 3256. Moreover, many clusters are expected to be lost due to dynamical processes. The high efficiency of globular cluster formation observed in NGC 3256 also has significant implications for theoretical models of globular cluster formation. In particular, scenarios in which globular clusters form as cores within proposed super giant molecular clouds (SGMCs) predict that only about $`0.2\%`$ of the mass of the cloud forms stars in dense cores, as seen in Galactic giant molecular clouds (McLaughlin & Pudritz 1996, Harris & Pudritz 1994). Thus the observed globular cluster formation efficiency appears to pose a problem for the SGMC scenario. A second problem is the long timescale for the formation of SGMCs in these models compared to the apparently rapid formation of globular clusters in galaxy mergers and starbursts. The observations appear to be more consistent with models of globular cluster formation in which the clusters form from highly compressed giant molecular clouds of typical mass for spiral galaxies. These may either originate from the progenitor spirals or be newly made GMCs. ### 4.3 Conclusions The primary conclusions of our study of HST images in B and I of the galaxy merger NGC 3256 are: 1. NGC 3256 has a very large population of compact, bright, blue objects. Many of these clusters have estimated sizes and masses like those of Galactic globular clusters. On this basis, we identify some fraction of these objects as young globular clusters. 2. The young cluster system has a broad range of colors, but little or no correlation between color and luminosity. This observation requires either destruction of low mass clusters over time or a very young age ( $`\stackrel{<}{}`$ 20 Myr) for the young cluster system. 3. Dynamical evolution is likely to significantly affect the mass function of the young cluster system. If the system is not very young, this has already been observed. 
The mass-radius relation for the cluster population is an important constraint on the predictions for its dynamical evolution. 4. The large number of candidate young globular clusters indicates a high efficiency of cluster formation. This is observed across the 7 kpc $`\times `$ 7 kpc region studied. The efficiency of cluster formation in the galaxy merger NGC 3256 is more than sufficient to account for the metal-rich globular cluster populations in elliptical galaxies if these form from gas-rich mergers. We thank Richard Larson for useful discussions, Dave Carter for collaborating in obtaining the AAT image of NGC 3256, and Eddie Bergeron for making the color images from the HST data. We thank an anonymous referee for helpful suggestions. S.E.Z. and K.M.A. acknowledge support for this project from NASA through grants GO-05396-94A and AR-07542-96A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555. SEZ also acknowledges the support of a Hubble Fellowship and fruitful discussions with colleagues at UC Berkeley during the early stages of this work.
# The correlation function of X-ray galaxy clusters in the RASS1 Bright Sample ## 1 Introduction Galaxy clusters play an important role in the models for structure formation based on the gravitational instability hypothesis. They are the most extended gravitationally bound systems in the Universe. For this reason the study of their properties is a useful tool to constrain the parameters entering in the definition of the cosmological scenarios. In particular, their abundance and spatial distribution (also as a function of redshift) have been used to obtain estimates of the mass fluctuation amplitude and of the density parameter $`\mathrm{\Omega }_0`$ (e.g. Eke, Cole & Frenk 1996; Viana & Liddle 1996; Mo, Jing & White 1996; Oukbir, Bartlett & Blanchard 1997; Eke et al. 1998; Sadat, Blanchard & Oubkir 1998; Viana & Liddle 1999; Borgani, Plionis & Kolokotronis 1999; Borgani et al. 1999). In the past years many different groups have compiled deep cluster surveys in the optical band, which have been used to compute the clustering properties of galaxy clusters. The first results showed that clusters are strongly correlated, with a correlation length $`r_02025h^1`$ Mpc, a factor 4-5 larger than that obtained for local galaxies (e.g. Bahcall & Soneira 1983; Postman, Huchra & Geller 1992). Here $`h`$ represents the Hubble constant $`H_0`$ in units of 100 km s<sup>-1</sup> Mpc<sup>-1</sup>. Sutherland (1988) suggested the existence of a possible strong effect due to the spurious presence of galaxies acting as interlopers (see also Dekel et al. 1989; van Haarlem, Frenk & White 1997). New analyses of optical catalogues, taking into account this projection effect (Dalton et al. 1992; Nichol et al. 1992; Dalton et al. 1994; Croft et al. 1997), led to a smaller value for the cluster correlation length ($`r_01318h^1`$ Mpc). A way to overcome the projection problem is the use of data obtained in the X-ray region of the spectrum. In fact, in this band, galaxy clusters have a strong emission produced by thermal bremsstrahlung, which allows to detect them also at high redshifts. Starting from the eighties, different space missions produced extended cluster catalogues which have been mainly used to compute their X-ray luminosity function. In particular, the $`ROSAT`$ satellite provided a good opportunity to build a reliable all-sky survey, which was performed in the soft (0.1 – 2.4 keV) X-ray band. First studies of the clustering properties in small samples of X-ray selected galaxy clusters have been performed by Lahav et al. (1989), Nichol, Briel & Henry (1994) and Romer et al. (1994). More recently the $`ROSAT`$ data have been correlated with the Abell-ACO cluster catalogue (Abell, Corwin & Olowin 1989) to produce the X-ray Brightest Abell Cluster sample (XBACs; Ebeling et al. 1996), for which estimates of the two-point correlation function have been recently obtained (Abadi, Lambas & Muriel 1998; Borgani, Plionis & Kolokotronis 1999). The corresponding values for $`r_0`$ are in the range $`r_02026h^1`$ Mpc. A smaller amplitude of the correlation function is obtained from the preliminary analyses of the REFLEX sample (Collins et al. 1999), which is also obtained by the $`ROSAT`$ All-Sky Survey (RASS) data. In this paper we estimate the clustering properties for the RASS1 Bright Sample (De Grandi et al. 1999), which is another X-ray cluster catalogue obtained from the RASS. 
In this case the clusters are spectroscopically searched in a preliminary list of candidates produced by correlating the X-ray data with regions of galaxy overdensity in the southern sky. In this way, the resulting catalogue is not affected by the selection biases present in the Abell-ACO cluster catalogue. The RASS1 Bright Sample is used to test a theoretical model for the correlation function of X-ray clusters in flux-limited samples (see also Moscardini et al. 1999). This model makes use of the technique introduced by Matarrese et al. (1997) and Moscardini et al. (1998), which allows a detailed modelling of the redshift evolution of clustering, accounting both for the non-linear dynamics of the dark matter distribution and for the redshift evolution of the bias factor. A characteristic feature of this technique is that it takes into full account light-cone effects, which are relevant in analysing the clustering of even moderate redshift objects (see also Matsubara, Suto & Szapudi 1997; de Laix & Starkman 1998; Yamamoto & Suto 1999). The plan of the paper is as follows. In Section 2 we summarize the characteristics of the RASS1 Bright Sample used in the following clustering analysis. In Section 3 we discuss the method used to compute the observational two-point correlation function for the RASS1 Bright Sample and we present the results. In Section 4 we introduce our theoretical model to estimate the correlations of the X-ray clusters in the framework of different cosmological models and we compare our predictions to the observational results. Conclusions are drawn in Section 5. ## 2 The Sample The RASS1 Bright Sample (De Grandi et al. 1999), contains 130 clusters of galaxies selected from the first processing of the $`ROSAT`$ All-Sky Survey (RASS) data (Voges 1992). This sample was constructed as part of an ESO Key Programme (Guzzo et al. 1995) aimed at surveying all southern RASS candidates, which is now known as the REFLEX cluster survey (Böhringer et al. 1998; Guzzo et al. 1999). The identification of RASS cluster candidates was performed by means of different optically and X-ray based methods. First, candidates were found as overdensities in the galaxy density distribution at the position of the X-ray sources using the COSMOS optical object catalogue (e.g. Heydon-Dumbleton, Collins & MacGillivray 1989). Then, correlating all RASS sources with the ACO cluster catalogue, and, finally, selecting all RASS X-ray extended sources. X-ray fluxes were remeasured using the steepness ratio technique (De Grandi et al. 1997), specifically developed for estimating fluxes from both extended and pointlike objects. A number of selections aimed at improving the completeness of the final sample lead to the RASS1 Bright Sample. Considering the intrinsic biases and incompletenesses introduced by the X-ray flux selection and source identification processes, the overall completeness of the sample is estimated to be $`>\text{ }90`$ per cent. The RASS1 Bright Sample is count-rate-limited in the $`ROSAT`$ hard band (0.5 – 2.0 keV), so that due to the distribution of Galactic absorption its effective flux limit varies between 3.05 and $`4\times 10^{12}`$ erg cm<sup>-2</sup> s<sup>-1</sup> over the selected area. This covers a region of approximately 2.5 sr within the Southern Galactic Cap, i.e. $`\delta <2.5^o`$ and $`b<20^o`$, with the exclusion of patches with RASS exposure times lower than 150 s and of the Magellanic Clouds area. 
The exact sky map covered by the sample is shown in Figure 2 of De Grandi et al. (1999). The redshift distribution for our whole sample is presented in the left panel of Figure 1 while the X-ray luminosity $`L_X`$ as a function of the redshift for each cluster is shown in the right one. It is possible to notice that 66 per cent of the clusters have $`z<0.1`$ but the redshift distribution has a tail up to $`z0.3`$. ## 3 The 2-point correlation function Before computing the clustering properties of our sample, we have to derive for each cluster the comoving radial distance $`r`$ from the observer, given the redshift of each source. To this goal we use the standard relation (neglecting the effect of peculiar motions) $$r(z)=\frac{c}{H_0\sqrt{|\mathrm{\Omega }_0|}}𝒮\left(\sqrt{|\mathrm{\Omega }_0|}_0^z𝑑z^{}\left[\left(1+z^{}\right)^2\left(1+\mathrm{\Omega }_{0\mathrm{m}}z^{}\right)z^{}\left(2+z^{}\right)\mathrm{\Omega }_{0\mathrm{\Lambda }}\right]^{1/2}\right),$$ (1) where $`\mathrm{\Omega }_01\mathrm{\Omega }_{0\mathrm{m}}\mathrm{\Omega }_{0\mathrm{\Lambda }}`$, with $`\mathrm{\Omega }_{0\mathrm{m}}`$ and $`\mathrm{\Omega }_{0\mathrm{\Lambda }}`$ the density parameters for the non-relativistic matter and cosmological constant components, respectively. In this formula, for an open universe model, $`\mathrm{\Omega }_0>0`$ and $`𝒮(x)\mathrm{sinh}(x)`$, for a closed universe, $`\mathrm{\Omega }_0<0`$ and $`𝒮(x)\mathrm{sin}(x)`$, while in the Einstein-de Sitter (EdS) case, $`\mathrm{\Omega }_0=0`$ and $`𝒮(x)x`$. To compute the spatial two-point correlation function $`\xi (r)`$ we adopt both the Landy & Szalay (1993) estimator, $$\xi (r)=\frac{N_r(N_r1)}{N_c(N_c1)}\frac{DD(r)}{RR(r)}\frac{N_r1}{N_c}\frac{DR(r)}{RR(r)}+1,$$ (2) and the Davis & Peebles (1983) estimator, $$\xi (r)=2\frac{N_r}{N_c1}\frac{DD(r)}{RR(r)}1.$$ (3) In the previous formulas $`N_r`$ is the number of random points and $`N_c`$ that of clusters, DD is the number of distinct cluster-cluster pairs, DR is the number of cluster-random pairs and RR refers to random-random pairs with separation between $`r`$ and $`r+\mathrm{\Delta }r`$. The random catalogue contains a number of sources 1,000 times larger than the real catalogue (i.e. $`N_r=1000N_c`$). To generate this sample we have extracted randomly coordinates from the surveyed area (see Figure 2 in De Grandi et al. 1999), assigning to each position a random flux drawn from the observed number counts (Figure 8 in De Grandi et al. 1999). We decided to retain the source in the catalogue if its flux is larger than the flux limit at the choosen position. We adopt two different methods to assign the random redshifts: in the first we scramble the observed redshifts of the clusters in the sample; in the second we generate them randomly from the observed redshift distribution binned in intervals of 0.01 in $`z`$. The results obtained with these two different methods are practically indistinguishable. The same happens also if we use the two previous estimators for the two-point correlation function (eqs.2-3). The errorbars have been estimated by using the bootstrap method with 50 resamplings. We find that the errors obtained in this way are in many cases larger than $`\sqrt{3}`$ times the Poissonian estimates, which are often used as an analytical approximation of the bootstrap errors (Mo, Jing & Börner 1992). This is particularly true at small separations. In the left panel of Figure 2 we show the correlation function computed for the whole catalogue. 
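For reference, the two estimators of eqs. (2) and (3) can be written directly in terms of pair counts. The sketch below is a simplified illustration assuming the cluster and random comoving positions have already been obtained from eq. (1); the box size, sample sizes and binning are placeholders, and no survey geometry or weighting is included.

```python
import numpy as np

def pair_counts(a, b, edges):
    """Histogram of pair separations between point sets a and b (N x 3 arrays)."""
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1))
    if a is b:                                  # distinct pairs only, each counted once
        d = d[np.triu_indices(len(a), k=1)]
    return np.histogram(d.ravel(), bins=edges)[0]

def xi_estimators(data, rand, edges):
    """Landy-Szalay (eq. 2) and Davis-Peebles (eq. 3) estimates of xi(r)."""
    Nc, Nr = len(data), len(rand)
    DD = pair_counts(data, data, edges).astype(float)
    RR = pair_counts(rand, rand, edges).astype(float)
    DR = pair_counts(data, rand, edges).astype(float)
    xi_ls = (Nr * (Nr - 1) / (Nc * (Nc - 1)) * DD / RR
             - (Nr - 1) / Nc * DR / RR + 1.0)
    xi_dp = 2.0 * Nr / (Nc - 1) * DD / DR - 1.0
    return xi_ls, xi_dp

# --- toy usage (uniform random points only, so xi should scatter around zero) ---
rng = np.random.default_rng(1)
clusters = rng.uniform(0.0, 400.0, size=(100, 3))    # placeholder positions, h^-1 Mpc
randoms  = rng.uniform(0.0, 400.0, size=(1000, 3))
edges    = np.linspace(5.0, 80.0, 11)
xi_ls, xi_dp = xi_estimators(clusters, randoms, edges)
```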
We present results obtained by using both an EdS model and two models with $`\mathrm{\Omega }_{0\mathrm{m}}=0.3`$ (with and without cosmological constant). The results are quite similar and the differences are always small. The correlation function has been fitted by adopting the power-law relation $$\xi (r)=(r/r_0)^\gamma .$$ (4) The best-fit parameters have been obtained by using a maximum likelihood estimator based on Poisson statistics and unbinned data (Croft et al. 1997; see also Borgani, Plionis & Kolokotronis 1999). Unlike the usual $`\chi ^2`$-minimization, this method allows to avoid the uncertainties due to the binsize, to the position of the bin centres and to the bin scale (linear or logarithmic). To build the estimator, it is necessary to estimate the predicted probability distribution of cluster pairs, given a choice for the correlation length $`r_0`$ and the slope $`\gamma `$. By using all the distances between the cluster-random pairs, we can compute the number of pairs $`g(r)dr`$ in arbitrarily small bins $`dr`$ and use it to predict the mean number of cluster-cluster pairs $`h(r)dr`$ in that interval as $$h(r)dr=\frac{N_c1}{2N_r}[1+\xi (r)]g(r)dr,$$ (5) where the correlation function $`\xi `$ is modelled with a power-law as in eq.(4). Actually the previous equation holds only for the Davis & Peebles (1983) estimator \[eq.(3)\] but, since we obtain very similar results using different estimators, we can safely apply it here. Now it is possible to use all the distances between the $`N_p`$ cluster-cluster pairs data to build a likelihood. In particular, the likelihood function $``$ is defined as the product of the probabilities of having exactly one pair at each of the intervals $`dr`$ occupied by the cluster-cluster pairs data and the probability of having no pairs in all the other intervals. Assuming a Poisson distribution, one finds $$=\underset{i}{\overset{N_p}{}}\mathrm{exp}[h(r)dr]h(r)dr\underset{ji}{}\mathrm{exp}[h(r)dr],$$ (6) where $`j`$ runs over all the intervals $`dr`$ where there are no pairs. It is convenient to define the usual quantity $`S=2\mathrm{ln}`$ which can be written, once we retain only the terms depending on the model parameters $`r_0`$ and $`\gamma `$, as $$S=2_{r_{\mathrm{min}}}^{r_{\mathrm{max}}}h(r)𝑑r2\underset{i}{\overset{N_p}{}}\mathrm{ln}[h(r_i)].$$ (7) The integral in the previous equation is computed over the range of scales where the fit is made. We will adopt 5 and 80 $`h^1`$ Mpc for $`r_{\mathrm{min}}`$ and $`r_{\mathrm{max}}`$, respectively. By minimizing $`S`$ one can obtain the best-fitting parameters $`r_0`$ and $`\gamma `$; the confidence levels are defined by computing the increase $`\mathrm{\Delta }_S`$ with respect the minimum value of $`S`$ and assuming a $`\chi ^2`$ distribution for $`\mathrm{\Delta }_S`$. By applying this maximum likelihood method to the RASS1 Bright Sample with the assumption of an EdS model, we find $`r_0=21.5_{4.4}^{+3.4}h^1`$ Mpc and $`\gamma =2.11_{0.56}^{+0.53}`$ (95.4 per cent confidence level with one fitting parameter). Since the redshift distribution is shallow, the values obtained in other cosmologies are quite similar: for $`(\mathrm{\Omega }_{0\mathrm{m}},\mathrm{\Omega }_{0\mathrm{\Lambda }})=(0.3,0)`$ we find $`r_0=21.4_{4.6}^{+3.4}h^1`$ Mpc and $`\gamma =2.17_{0.56}^{+0.55}`$, while for $`(\mathrm{\Omega }_{0\mathrm{m}},\mathrm{\Omega }_{0\mathrm{\Lambda }})=(0.3,0.7)`$ we find $`r_0=22.1_{4.7}^{+3.6}h^1`$ Mpc and $`\gamma =2.06_{0.56}^{+0.54}`$. 
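The unbinned Poisson likelihood of eqs. (5)–(7) is straightforward to minimise numerically once the cluster–cluster and cluster–random pair separations are in hand. The following sketch assumes those separation lists as inputs and scans a grid in (r_0, γ); the grid ranges and the bin width used to tabulate g(r) are illustrative choices, not the ones adopted in the analysis.

```python
import numpy as np

def fit_power_law(cc_seps, cr_seps, Nc, Nr, r_min=5.0, r_max=80.0, nstep=2000):
    """Unbinned Poisson maximum-likelihood fit of xi(r) = (r/r0)^(-gamma), eqs. (5)-(7).

    cc_seps : array of separations of all distinct cluster-cluster pairs (h^-1 Mpc)
    cr_seps : array of separations of all cluster-random pairs, used for g(r) of eq. (5)
    Assumes g(r) > 0 over the fitted range (ensured by a large random catalogue).
    """
    cc_seps, cr_seps = np.asarray(cc_seps), np.asarray(cr_seps)
    edges = np.linspace(r_min, r_max, nstep + 1)
    dr = edges[1] - edges[0]
    centres = 0.5 * (edges[:-1] + edges[1:])
    g = np.histogram(cr_seps, bins=edges)[0] / dr             # g(r)
    pairs = cc_seps[(cc_seps > r_min) & (cc_seps < r_max)]
    g_at_pairs = np.interp(pairs, centres, g)
    norm = (Nc - 1) / (2.0 * Nr)

    def S_stat(r0, gamma):                                    # eq. (7), model-dependent part
        h = norm * (1.0 + (centres / r0) ** (-gamma)) * g
        h_p = norm * (1.0 + (pairs / r0) ** (-gamma)) * g_at_pairs
        return 2.0 * np.sum(h) * dr - 2.0 * np.sum(np.log(h_p))

    r0_grid = np.linspace(10.0, 35.0, 101)                    # illustrative search ranges
    ga_grid = np.linspace(1.0, 3.5, 101)
    S = np.array([[S_stat(r0, ga) for ga in ga_grid] for r0 in r0_grid])
    i, j = np.unravel_index(np.argmin(S), S.shape)
    return r0_grid[i], ga_grid[j], S - S.min()                # best fit and Delta_S surface
```

Confidence contours then follow by thresholding the returned Delta_S surface at the values quoted in the text.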
The best-fit relations are also shown in Figure 2. Notice that a $`\chi ^2`$-minimization procedure gives similar results, but with larger errorbars. In the right panel of the same figure we show the contour levels corresponding to $`\mathrm{\Delta }_S`$ equal to 2.30, 6.31 and 11.8. Assuming that $`\mathrm{\Delta }_S`$ is distributed as a $`\chi ^2`$ distribution with two degrees of freedom, they correspond to 68.3, 95.4 and 99.73 per cent confidence levels, respectively. Notice that by assuming a Poisson distribution the method considers all pairs as independent, neglecting their clustering. Consequently the resulting errorbars can be underestimated (see the discussion in Croft et al. 1997). Our results are somewhat larger than those derived by Romer et al. (1994) who found $`r_0=1315h^1`$ Mpc by analysing a sample of galaxy clusters also selected from the RASS in a similar region of sky ($`22^h<`$ RA $`<3^h`$, $`50^o<\delta <2^o`$, $`|b|>40^o`$). A partial explanation of this difference is related to the deeper limiting flux ($`S_{\mathrm{lim}}10^{12}`$ erg s<sup>-1</sup> cm<sup>-2</sup> in the 0.1 – 2.4 keV band) of their catalogue: as we will discuss in the next section, the correlation length is expected to depend on the characteristics defining the surveys, such as their limits in flux and/or luminosity. However, we should bear in mind that this early sample was derived drawing on X-ray information from the $`ROSAT`$ standard analysis software (SASS), which was not optimised for the analysis of extended sources (for a more detailed discussion see e.g. De Grandi et al. 1997), and this source of incompleteness was not included in the analysis of Romer et al. (1994). Moreover, in their analysis the sample sky coverage (i.e., the surveyed area as a function of the flux limit) was not discussed. Previous analyses of the XBACs sample, which is a flux-limited catalogue of X-ray Abell clusters with a limiting flux $`S_{\mathrm{lim}}=5\times 10^{12}`$ erg s<sup>-1</sup> cm<sup>-2</sup> in the 0.1 – 2.4 keV band, gave compatible amplitudes for the correlation function: $`r_0=21.1_{2.3}^{+1.6}h^1`$ Mpc (Abadi, Lambas & Muriel 1998) and $`r_0=26.0_{4.7}^{+4.1}h^1`$ Mpc (Borgani, Plionis & Kolokotronis 1999; errorbars in this case are 2-$`\sigma `$ uncertainties). Preliminary analyses of the clustering properties of the REFLEX sample (Collins et al. 1999; Guzzo et al. 1999), which has a limiting flux $`S_{\mathrm{lim}}=3\times 10^{12}`$ erg s<sup>-1</sup> cm<sup>-2</sup> in the 0.1 – 2.4 keV band, lead to a smaller correlation length ($`r_018h^1`$ Mpc). Also in this case, the discrepancy is probably a consequence of the deeper limiting flux. ## 4 Comparison with theoretical models ### 4.1 Structure formation models In the following analysis we consider five models, all normalized to reproduce the local cluster abundance. In particular we will adopt the normalizations obtained by Eke, Cole & Frenk (1996) by analysing the temperature distribution of X-ray clusters (Henry & Arnaud 1991). All our models belong to the general class of Cold Dark Matter (CDM) scenarios; their linear power-spectrum can be represented as $`P_{\mathrm{lin}}(k,0)k^nT^2(k)`$, where, for the CDM transfer function $`T(k)`$, we use the Bardeen et al. (1986) fit. In particular, we consider three different EdS models, for which the power-spectrum amplitude corresponds to $`\sigma _8=0.52`$ (here $`\sigma _8`$ is the r.m.s. fluctuation amplitude in a sphere of $`8h^1`$ Mpc).
They are: a version of the standard CDM (SCDM) model with shape parameter (see its definition in Sugiyama 1995) $`\mathrm{\Gamma }=0.45`$ and spectral index $`n=1`$; the so-called $`\tau `$CDM model, with $`\mathrm{\Gamma }=0.21`$ and $`n=1`$; a tilted model (TCDM), with $`n=0.8`$ and $`\mathrm{\Gamma }=0.41`$, corresponding to a high (10 per cent) baryonic content. We also consider an open CDM model (OCDM), with matter density parameter $`\mathrm{\Omega }_{0\mathrm{m}}=0.3`$ and $`\sigma _8=0.87`$ and a low-density flat CDM model ($`\mathrm{\Lambda }`$CDM), with $`\mathrm{\Omega }_{0\mathrm{m}}=0.3`$, with $`\sigma _8=0.93`$. Except for SCDM, which is shown as a reference model, all these models are also consistent with COBE data; for TCDM consistency is achieved by taking into account the possible contribution of gravitational waves to large-angle CMB anisotropies. A summary of the parameters of the cosmological models used in this paper is given in Table 1. ### 4.2 The method Theoretical predictions for the spatial two-point correlation function in the RASS1 Bright Sample have been here obtained in the framework of the above cosmological models by using a method presented in more detail in (Moscardini et al. 1999). Here we only give a short description. Matarrese et al. (1997; see also Moscardini et al. 1998) developed an algorithm to describe clustering in our past light-cone, where the non-linear dynamics of the dark matter distribution and the redshift evolution of the bias factor are taken into account. In the present paper we adopt a more refined formula which better accounts for the light-cone effects (see Moscardini et al. 1999). The observed spatial correlation function $`\xi _{\mathrm{obs}}`$ in a given redshift interval $`𝒵`$ is given by the exact expression $$\xi _{\mathrm{obs}}(r)=\frac{_𝒵𝑑z_1𝑑z_2𝒩(z_1)r(z_1)^1𝒩(z_2)r(z_2)^1\xi _{\mathrm{obj}}(r;z_1,z_2)}{\left[_𝒵𝑑z_1𝒩(z_1)r(z_1)^1\right]^2},$$ (8) where $`\xi _{\mathrm{obj}}(r,z_1,z_2)`$ is the correlation function of pairs of objects at redshifts $`z_1`$ and $`z_2`$ with comoving separation $`r`$ and $`𝒩(z)`$ is the actual redshift distribution of the catalogue. A related approach to the study of correlations on the light-cone hypersurface has been recently presented by Yamamoto & Suto (1999) and Nishioka & Yamamoto (1999) within linear theory and by Matsubara, Suto & Szapudi (1997) in the non-linear regime. An accurate approximation for $`\xi _{\mathrm{obj}}`$ over the scales considered here is $$\xi _{\mathrm{obj}}(r,z_1,z_2)b_{\mathrm{eff}}(z_1)b_{\mathrm{eff}}(z_2)\xi _\mathrm{m}(r,z_{\mathrm{ave}}),$$ (9) where $`\xi _m`$ is the dark matter covariance function and $`z_{\mathrm{ave}}`$ is an intermediate redshift between $`z_1`$ and $`z_2`$, for which an excellent approximation is obtained through $`D_+(z_{\mathrm{ave}})=D_+(z_1)^{1/2}D_+(z_2)^{1/2}`$ (Porciani 1997), with $`D_+(z)`$ the linear growth factor of density fluctuations. In our treatment we disregard the effect of redshift-space distortions. Some analytical expressions have been obtained in the mildly non-linear regime, by using either the Zel’dovich approximation (Fisher & Nusser 1996) or higher order perturbation theory (Heavens, Matarrese & Verde 1998). The complicating role of the cosmological redshift-space distortions on the evolution of the bias factor has been considered by Suto et al. (1999). 
A rough estimate of the effect of redshift-space distortions can be obtained within linear theory and the distant-observer approximation (Kaiser 1987; see Zaroubi & Hoffman 1996 for an extension of this formalism to all-sky surveys). In this case the enhancement of the redshift-space averaged power spectrum is given by the factor $`1+2\beta /3+\beta ^2/5`$, where $`\beta \mathrm{\Omega }_{0\mathrm{m}}^{0.6}/b_{\mathrm{eff}}`$ and $`b_{\mathrm{eff}}`$ is the effective bias (see below). Plionis & Kolokotronis (1998), by analysing the XBACs catalogue and using linear perturbation theory to relate the X-ray cluster dipole to the Local Group peculiar velocity, found $`\beta 0.24\pm 0.05`$. Adopting this approach, Borgani, Plionis & Kolokotronis (1999) conclude that the overall effect of redshift-space distortions is a small change of the correlation function, which expressed in terms of $`r_0`$ corresponds to an $`8`$ per cent increase. The effective bias $`b_{\mathrm{eff}}`$ appearing in the previous equation can be expressed as a weighted average of the ‘monochromatic’ bias factor $`b(M,z)`$ of objects of some given intrinsic property $`M`$ (like mass, luminosity, …), as follows $$b_{\mathrm{eff}}(z)𝒩(z)^1_{}d\mathrm{ln}M^{}b(M^{},z)𝒩(z,M^{}),$$ (10) where $`𝒩(z,M)`$ is the number of objects actually present in the catalogue with redshift in the range $`z,z+dz`$ and $`M`$ in the range $`M,M+dM`$, whose integral over $`\mathrm{ln}M`$ is $`𝒩(z)`$. In our analysis of cluster correlations we will use for $`𝒩(z)`$ in eq.(8) the observed one, while in the theoretical calculation of the effective bias we will take the $`𝒩(z,M)`$ predicted by the model described below. This phenomenological approach is self-consistent, in that our theoretical model for $`𝒩(z,M)`$ will be required to reproduce the observed cluster abundance and their $`\mathrm{log}N`$$`\mathrm{log}S`$ relation. For the cluster population it is extremely reasonable to assume that structures on a given mass scale are formed by the hierarchical merging of smaller mass units; for this reason we can consider clusters as being fully characterized at each redshift by the mass $`M`$ of their hosting dark matter haloes. In this way their comoving mass function $`\overline{n}(z,M)`$ can be computed using an approach derived from the Press-Schechter technique. Moreover, it is possible to adopt for the monochromatic bias $`b(M,z)`$ the expression which holds for virialized dark matter haloes (e.g. Mo & White 1996; Catelan et al. 1998). Recently, a number of authors have shown that the Press-Schechter (1974) relation does not provide an accurate description of the halo abundance both in the large and small-mass tails (e.g. Sheth & Tormen 1999). Also, the simple Mo & White (1996) bias formula has been shown not to correctly reproduce the correlation of low mass haloes in numerical simulations. Several alternative fits have been recently proposed (Jing 1998; Porciani, Catelan & Lacey 1999; Sheth & Tormen 1999; Jing 1999). In this paper we adopt the relations recently introduced by Sheth & Tormen (1999), which have been shown to produce an accurate fit of the distribution of the halo populations in the GIF simulations (Kauffmann et al. 1999). 
They read $$\overline{n}(z,M)=\sqrt{\frac{2aA^2}{\pi }}\frac{3H_0^2\mathrm{\Omega }_{0\mathrm{m}}}{8\pi G}\frac{\delta _c}{MD_+(z)\sigma _M}\left[1+\left(\frac{D_+(z)\sigma _M}{\sqrt{a}\delta _c}\right)^{2p}\right]\left|\frac{d\mathrm{ln}\sigma _M}{d\mathrm{ln}M}\right|\mathrm{exp}\left[\frac{a\delta _c^2}{2D_+^2(z)\sigma _M^2}\right]$$ (11) and $$b(M,z)=1+\frac{1}{\delta _c}\left(\frac{a\delta _c^2}{\sigma _M^2D_+^2(z)}1\right)+\frac{2p}{\delta _c}\left(\frac{1}{1+[\sqrt{a}\delta _c/(\sigma _MD_+(z))]^{2p}}\right).$$ (12) Here $`\sigma _M^2`$ is the mass-variance on scale $`M`$, linearly extrapolated to the present time ($`z=0`$), and $`\delta _c`$ the critical linear overdensity for spherical collapse. Following Sheth & Tormen (1999), we adopt their best-fit parameters $`a=0.707`$, $`p=0.3`$ and $`A0.3222`$, while the standard (Press & Schechter and Mo & White) relations are recovered for $`a=1`$, $`p=0`$ and $`A=1/2`$. Notice that $$𝒩(z,M)=4\pi r^2(z)\frac{dr}{dz}\left[1+\frac{H_0^2}{c^2}\mathrm{\Omega }_0r^2(z)\right]^{1/2}\overline{n}(z,M)\varphi (z,M),$$ (13) where $`\varphi (z,M)`$ is the isotropic catalogue selection function, which also accounts for the catalogue sky coverage, as detailed below. The last ingredient entering in our computation of the correlation function is the redshift evolution of the dark matter covariance function $`\xi _\mathrm{m}`$. As in Matarrese et al. (1997) and Moscardini et al. (1998) we use an accurate method, based on the Hamilton et al. (1991) ansatz, to evolve $`\xi _\mathrm{m}`$ into the fully non-linear regime. In particular, we use the fitting formula given by Peacock & Dodds (1996). In order to predict the abundance and clustering of X-ray clusters in the RASS1 Bright Sample we need to relate X-ray cluster fluxes into a corresponding halo mass at each redshift. The given band flux $`S`$ corresponds to an X-ray luminosity $`L_X=4\pi d_L^2S`$ in the same band, where $`d_L=(1+z)r(z)`$ is the luminosity distance. To convert $`L_X`$ into the total luminosity $`L_{\mathrm{bol}}`$ we perform band and bolometric corrections by means of a Raymond-Smith code, where an overall ICM metallicity of $`0.3`$ times solar is assumed. We translate the cluster bolometric luminosity into a temperature, adopting the empirical relation $`T=𝒜L_{\mathrm{bol}}^{}(1+z)^\eta `$, where the temperature is expressed in keV and $`L_{\mathrm{bol}}`$ is in units of $`10^{44}h^2`$ erg s<sup>-1</sup>. In the following analysis we assume $`𝒜=4.2`$ and $`=1/3`$; these values allow a good representation of the local data for temperatures larger than $`1`$ keV (e.g. David et al. 1993; White, Jones & Forman 1997; Markevitch 1998). Analysing a catalogue of local compact groups, Ponman et al. (1996) showed that at lower temperatures the $`L_{\mathrm{bol}}T`$ relation has a steeper slope ($`0.1`$). For these reasons we prefer to fix a minimum value for the temperature at $`T=1`$ keV. Moreover, even if observational data are consistent with no evolution in the $`L_{\mathrm{bol}}T`$ relation out to $`z0.4`$ (Mushotzky & Scharf 1997), a redshift evolution described by the parameter $`\eta `$ has been introduced to reproduce the observed $`\mathrm{log}N`$$`\mathrm{log}S`$ relation (Rosati et al. 1998; De Grandi et al. 1999) in the range $`2\times 10^{14}S2\times 10^{11}`$ (see also Kitayama & Suto 1997; Borgani et al. 1999). The values of $`\eta `$ required for SCDM, $`\tau `$CDM, TCDM, OCDM and $`\mathrm{\Lambda }`$CDM models are reported in Table 1. 
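Equations (11) and (12) depend on redshift and mass only through the peak height δ_c/(σ_M D_+(z)), so they are simple to code once σ_M and D_+ are available from the linear power spectrum. The minimal sketch below uses the Sheth & Tormen parameters quoted above; the functions sigma_M, dlnsigma_dlnM and D_plus are left as user-supplied callables, and δ_c is fixed to the Einstein-de Sitter value 1.686 purely for illustration.

```python
import numpy as np

A_ST, a_ST, p_ST = 0.3222, 0.707, 0.3      # Sheth & Tormen (1999) parameters (text values)
DELTA_C = 1.686                             # critical overdensity, EdS value for illustration

def nu(M, z, sigma_M, D_plus):
    """Peak height delta_c / (sigma_M(M) * D_+(z)); sigma_M and D_plus are user-supplied."""
    return DELTA_C / (sigma_M(M) * D_plus(z))

def sheth_tormen_multiplicity(M, z, sigma_M, D_plus):
    """Mass-function multiplicity of eq. (11):
    A sqrt(2 a nu^2 / pi) [1 + (a nu^2)^-p] exp(-a nu^2 / 2)."""
    x = nu(M, z, sigma_M, D_plus)
    return (A_ST * np.sqrt(2.0 * a_ST * x**2 / np.pi)
            * (1.0 + (a_ST * x**2) ** (-p_ST))
            * np.exp(-a_ST * x**2 / 2.0))

def mass_function(M, z, sigma_M, dlnsigma_dlnM, D_plus, rho_bar):
    """Comoving halo abundance per unit ln M, eq. (11);
    rho_bar is the mean comoving matter density 3 H0^2 Omega_0m / (8 pi G)."""
    return ((rho_bar / M) * abs(dlnsigma_dlnM(M))
            * sheth_tormen_multiplicity(M, z, sigma_M, D_plus))

def sheth_tormen_bias(M, z, sigma_M, D_plus):
    """Halo bias of eq. (12)."""
    x = nu(M, z, sigma_M, D_plus)
    return (1.0
            + (a_ST * x**2 - 1.0) / DELTA_C
            + 2.0 * p_ST / (DELTA_C * (1.0 + (np.sqrt(a_ST) * x) ** (2.0 * p_ST))))
```

Setting a_ST = 1, p_ST = 0 and A_ST = 1/2 recovers the standard Press & Schechter mass function and the Mo & White bias, as noted in the text.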
A general discussion of the effects of different choices of the parameters entering in this method (e.g. the scatter in the $`L_{\mathrm{bol}}T`$ relation) is presented elsewhere (Moscardini et al. 1999). Finally, with the standard assumption of virial isothermal gas distribution and spherical collapse, it is possible to convert the cluster temperature into the mass of the hosting dark matter halo, namely (e.g. Eke, Cole & Frenk 1996) $$T=\frac{7.75}{\beta _{\mathrm{TM}}}\left(\frac{M}{10^{15}h^1M_{}}\right)^{2/3}(1+z)\left(\frac{\mathrm{\Omega }_{0\mathrm{m}}}{\mathrm{\Omega }_\mathrm{m}(z)}\right)^{1/3}\left(\frac{\mathrm{\Delta }_{\mathrm{vir}}(z)}{178}\right)^{1/3}.$$ (14) The quantity $`\mathrm{\Delta }_{\mathrm{vir}}`$ represents the mean density of the virialized halo in units of the critical density at that redshift (e.g. Bryan & Norman 1998 for fitting formulas). We assume $`\beta _{\mathrm{TM}}=1.17`$, which is in agreement with the results of different hydrodynamical simulations (Bryan & Norman 1998; Gheller, Pantano & Moscardini 1998; Frenk et al. 1999). Once the relation between observed flux and halo mass at each redshift is established we account for the RASS1 Bright Sample sky coverage $`\mathrm{\Omega }_{\mathrm{sky}}(S)`$ (see Figure 7 in De Grandi et al. 1999) by simply setting $`4\pi \varphi (z,M)=\mathrm{\Omega }_{\mathrm{sky}}[S(z,M)]`$. ### 4.3 Results In Figure 3 we compare our predictions for the RASS1 spatial correlation function in different cosmological models to the observational data. All the EdS models here considered predict too small an amplitude. Their correlation lengths are smaller than the observational results: we find $`r_011.5,12.8,14.8h^1`$ Mpc for SCDM, TCDM and $`\tau CDM`$, respectively. On the contrary, both the OCDM and $`\mathrm{\Lambda }`$CDM models are in much better agreement with the data and their predictions are always inside the 1-$`\sigma `$ errorbars ($`r_018.4,18.6h^1`$ Mpc, respectively). In order to quantify the differences between the model predictions and the observational data, we use again the maximum likelihood approach. The minimum value for $`S`$ is obtained for the $`\mathrm{\Lambda }`$CDM model. A similar value, corresponding to $`\mathrm{\Delta }_S=0.7`$, is obtained for the OCDM model, while for the EdS models the resulting $`S`$ are much larger: $`\mathrm{\Delta }_S=10.6,19.3,26.8`$ for $`\tau CDM`$, TCDM and SCDM models, respectively. To evaluate what is the effect of neglecting the description of clustering in the past light-cone, as usually done in previous analyses, we estimate the cluster correlation function as $`\xi (r)=b_{\mathrm{eff}}^2(z_{\mathrm{med}})\xi _\mathrm{m}(r,z_{\mathrm{med}})`$, where $`z_{\mathrm{med}}`$ is the median redshift of the catalogue. For the RASS1 Bright Sample we have $`z_{\mathrm{med}}0.08`$. The resulting values of the correlation lengths obtained in this way are $`r_013.5,15.5,18.2,21.9,22.4h^1`$ Mpc, for SCDM, TCDM, $`\tau CDM`$, OCDM and $`\mathrm{\Lambda }CDM`$, respectively. They are typically 20 per cent higher than the estimates obtained by our method. This difference is due to the fact that in the past light-cone formalism the matter correlation functions (and bias factors) are weighted by a factor $`𝒩(z)/r(z)`$, for which the average value on the whole sample corresponds neither to the median nor to the mode of the redshift distribution. 
Actually, the presence of the comoving radial distance $`r(z)`$ at the denominator tends to shift the “effective redshift” to smaller values of $`z`$. As a consequence, the true cluster correlations, which are indeed measured in our past light-cone, have typically smaller amplitudes than those estimated at the median redshift of the catalogue. Of course, the importance of this effect becomes larger when deeper surveys are considered. In order to study the possible dependence of the clustering properties of the X-ray clusters on the observational characteristics defining the survey, we use our model to predict the values of the correlation length $`r_0`$ in catalogues where we vary the limiting X-ray flux $`S_{\mathrm{lim}}`$ or luminosity $`L_{\mathrm{lim}}`$ (both defined in the energy band 0.5 – 2 keV). Notice that this analysis can be related to the study of the richness dependence of the cluster correlation function. In fact, a change in the observational limits implies a change in the expected mean intercluster separation $`d_c`$. Bahcall & West (1992) found that the Abell clusters data are consistent with a linear relation $`r_0=0.4d_c`$, while a milder dependence is resulting from the analysis of the APM clusters (Croft et al. 1997). Our results are shown in Figure 4. All the cosmological models display a similar trend: in the flux and luminosity intervals here considered, the correlation length $`r_0`$ has a slow growth with $`S_{\mathrm{lim}}`$ (left panel), and a more marked one with $`L_{\mathrm{lim}}`$ (right panel). For example, for OCDM and $`\mathrm{\Lambda }CDM`$ the correlation length changes from $`r_018`$ to $`r_021h^1`$ Mpc, when the limiting flux varies from $`S_{\mathrm{lim}}=10^{12}`$ to $`S_{\mathrm{lim}}=10^{11}`$ erg s<sup>-1</sup> cm<sup>-2</sup>, and from $`r_018`$ to $`r_030h^1`$ Mpc, when the limiting luminosity varies from $`L_{\mathrm{lim}}=10^{42}`$ to $`L_{\mathrm{lim}}=10^{44}h^2`$ erg s<sup>-1</sup>. The values of $`r_0`$ for the EdS models have similar variations but are always smaller. We can compare these predictions to the results obtained by computing the two-point correlation function in the RASS1 Bright Sample with the same cuts in X-ray flux or luminosity. The estimates of $`r_0`$ obtained in this way are presented in Table 2 (where we also report the number of clusters inside each subsample) and shown in the figure (open squares). In both panels the whole catalogue is represented by the square on the right. Because of the small number of clusters in these subsamples, we prefer to fit the correlation function by fixing the value of the slope $`\gamma =1.8`$; we find that our values of $`r_0`$ are only slightly dependent on this assumption. The errorbars shown in the figure correspond to an increase $`\mathrm{\Delta }_S=4`$ with respect to the minimum value of $`S`$. With the assumption that $`\mathrm{\Delta }_S`$ is distributed as a $`\chi ^2`$ distribution with a single degree of freedom, this corresponds to 95.4 per cent confidence level. By analysing the trend with changing limiting flux, we find that the observed values of $`r_0`$ are almost constant even if $`S_{\mathrm{lim}}`$ changes by a factor larger than 2. This result is in agreement with what Borgani, Plionis & Kolokotronis (1999) obtained for XBACs. We notice that OCDM and $`\mathrm{\Lambda }`$CDM are able to reproduce the amount of clustering shown by the RASS1 Bright Sample, while all the EdS models strongly underpredict the amplitude of the correlation function. 
The situation is slightly different when we analyse luminosity-limited catalogues. The RASS1 Bright Sample suggests a small increase of $`r_0`$ with $`L_{\mathrm{lim}}`$, even if the hypothesis of a constant correlation length cannot be rejected. This is consistent with a similar analysis made by Abadi, Lambas & Muriel (1998) on the XBACs catalogue. As shown in the left panel of Figure 4, our models always tend to predict smaller correlations, even if the non-EdS models are still marginally consistent with the observational data. ## 5 Conclusions In this paper we have studied the two-point correlation function $`\xi (r)`$ of a flux-limited sample of X-ray galaxy clusters, the RASS1 Bright Sample. These observational results have been used to test a theoretical model predicting the clustering properties of X-ray clusters in flux-limited surveys in the framework of different cosmological scenarios. Our main results are: * Assuming an Einstein-de Sitter model, $`\xi (r)`$ can be well fitted using the standard power-law relation $`\xi =(r/r_0)^\gamma `$, with $`r_0=21.5_{4.4}^{+3.4}h^1`$ Mpc and $`\gamma =2.11_{0.56}^{+0.53}`$ (95.4 per cent confidence levels with one fitting parameter). The values obtained in models with matter density parameter $`\mathrm{\Omega }_{0\mathrm{m}}=0.3`$ are quite similar. * The amplitude of the correlation function is almost constant when the RASS1 Bright Sample is analysed with different limiting fluxes in the range $`S_{\mathrm{lim}}=37\times 10^{12}`$ erg s<sup>-1</sup> cm<sup>-2</sup> while it displays a slightly increasing trend when computed in catalogues with an increasing X-ray luminosity limit in the range $`L_{\mathrm{lim}}=0.010.4\times 10^{44}h^2`$ erg s<sup>-1</sup>. * The comparison with the predictions of our theoretical models shows that the Einstein-de Sitter models are unable to reproduce the observational results obtained for the whole RASS1 Bright Sample. On the contrary, good agreement is found for the models with matter density parameter $`\mathrm{\Omega }_{0\mathrm{m}}=0.3`$, both with and without a cosmological constant. * Our models are also able to reproduce the behaviour of $`r_0`$ with $`S_{\mathrm{lim}}`$ and $`L_{\mathrm{lim}}`$, but the observed amount of clustering is reproduced only by the open and $`\mathrm{\Lambda }`$ models. In conclusion, we believe that the method presented here leads to robust predictions on the clustering of X-ray clusters; its future application to new and deeper catalogues will allow to provide useful constraints on the cosmological parameters. ## Acknowledgments. This work has been partially supported by Italian MURST, CNR and ASI. We are grateful to Stefano Borgani, Enzo Branchini, Houjun Mo, Ornella Pantano, Piero Rosati, Bepi Tormen and Elena Zucca for useful discussions. We also thank the referee, C. Collins, for comments which allowed us to improve the presentation of this paper.
# 1 CRESST and the Dark Matter Problem ## 1 CRESST and the Dark Matter Problem After a long period of development, cryogenic detectors are now coming on line and in the next years will deliver significant results in particle-astrophysics and weak interactions. The stable operation of a kilogram of detecting material in the millikelvin range over long time periods by CRESST, as well as similar work by other collaborations, has confirmed the hopes that large mass cryogenic detectors are feasible. CRESST is presently the most advanced deep underground, low background, cryogenic facility. Other major projects are the CDMS project in Stanford, the EDELWEISS project at Frejus, the Milano $`\beta \beta `$ project in Gran Sasso, the ROSEBUD experiment at Canfranc, the Tokyo Cryogenic Dark Matter Search and the ORPHEUS project at Bern . The goal of the CRESST project is the direct detection of elementary particle dark matter and the elucidation of its nature. The search for Dark Matter and the understanding of its nature remains one of the central and most fascinating problems of our time in physics, astronomy and cosmology. There is strong evidence for it on all scales, ranging from dwarf galaxies, through spiral galaxies like our own, to large scale structures. The history of the universe is difficult to reconstruct without it, be it big bang nucleosynthesis or the formation of structure . The importance of the search for dark matter in the form of elementary particles, created in the early stages of the universe, is underlined by the recent weakening of the case for other forms such as MACHOS, faint stars and black holes . Particle physics provides a well motivated candidate through the assumption that the lightest supersymmetric (SUSY) particle, the ‘neutralino , is some combination of neutral particles arising in the theory and it is possible to find many candidates obeying cosmological and particle physics constraints. Indeed, SUSY models contain many parameters and many assumptions, and by relaxing various simplifying assumptions one can find candidates in a wide mass range . Generically, such particles are called WIMPS (Weakly Interacting Massive Particles), and are to be distinguished from proposals involving very light quanta such as axions. WIMPS are expected to interact with ordinary matter by elastic scattering on nuclei and all direct detection schemes have focused on this possibility. Conventional methods for direct detection rely on the ionisation or scintillation caused by the recoiling nucleus. This leads to certain limitations connected with the relatively high energy involved in producing electron-ion or electron-hole pairs and with the sharply decreasing efficiency of ionisation by slow nuclei. Cryogenic detectors use the much lower energy excitations, such as phonons, and while conventional methods are probably close to their limits, cryogenic technology can still make great improvements. Since the principal physical effect of a WIMP nuclear recoil is the generation of phonons, cryogenic calorimeters are well suited for WIMP detection and, indeed, the first proposals to search for dark matter particles were inspired by early work on cryogenic detectors . Further, as we shall discuss below, when this technology is combined with charge or light detection the resulting background suppression leads to a powerful technique to search for the rare nuclear recoils due to WIMP scatterings. 
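The energy scale of the nuclear recoils being searched for follows from elastic-scattering kinematics: a WIMP of mass m_chi striking a nucleus of mass m_N with velocity v deposits at most E_max = 2 mu^2 v^2 / m_N, with mu the WIMP-nucleus reduced mass. The numbers below are purely illustrative (a 100 GeV WIMP and a representative galactic velocity of 220 km s<sup>-1</sup>, neither taken from the text), but they show why low thresholds are essential, especially for light target nuclei.

```python
# Maximum nuclear-recoil energy for elastic WIMP-nucleus scattering:
#   E_max = 2 mu^2 v^2 / m_N,  with  mu = m_chi * m_N / (m_chi + m_N)
# Illustrative numbers only (not from the text); typical recoils are well below E_max.
C_KMS = 2.998e5                           # speed of light in km/s

def e_recoil_max_keV(m_chi_GeV, A, v_kms=220.0):
    m_N = 0.9315 * A                      # nuclear mass in GeV (about A atomic mass units)
    mu = m_chi_GeV * m_N / (m_chi_GeV + m_N)
    beta2 = (v_kms / C_KMS) ** 2
    return 2.0 * mu**2 * beta2 / m_N * 1e6    # GeV -> keV

for name, A in [("O", 16), ("Al", 27), ("W", 184)]:
    print(f"{name:2s}: E_max ~ {e_recoil_max_keV(100.0, A):.1f} keV")
```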
The detectors developed by the CRESST collaboration consist of a dielectric crystal (target or absorber) with a small superconducting film (thermometer) evaporated onto the surface. When this film is held at a temperature in the middle of its superconducting to normal conducting phase transition, it functions as a highly sensitive thermometer. The detectors presently employed in Gran Sasso use tungsten (W) films and sapphire ($`Al_2O_3`$) absorbers, running near 15 mK. It is important for the following, however, to realise that the technique can also be applied to a variety of other materials. The small change in temperature of the superconducting film resulting from an energy deposit in the absorber leads to a relatively large change in the film’s resistance. This change in resistance is measured with a SQUID. To a good approximation, the high frequency phonons created by an event do not thermalise in the crystal before being directly absorbed in the superconducting film . Thus the energy resolution is only moderately dependent on the size of the crystal, and scaling up to large detectors of some hundred gramms or even kilogramms is feasible. The high sensitivity of this system also allows us to use a small separate detector of the same type to see the light emitted when the absorber is a scintillating crystal. ## 2 Present Status of CRESST The task set for the first stage of CRESST was to show the operation of 1 kg of sapphire in the millikelvin range, with a threshold of 500 eV under low background conditions . Meeting this goal involved two major tasks: * The setting up of a low background, large volume, cryogenic installation and * the development of massive, low background detectors with low energy thresholds. ### 2.1 CRESST Installation in the Gran Sasso Laboratory (LNGS) The central part of the CRESST low background facility at the LNGS is the cryostat. The design of this cryostat had to combine the requirements of low temperatures with those of a low background. The first generation cryostats in this field were conventional dilution refrigerators where some of the materials were screened for radioactivity. However, due to cryogenic requirements some non-radiopure materials, for example stainless steel, cannot be completely avoided. Thus for a second generation low background cryostat, a design was chosen in which a well separated ‘cold box houses the experimental volume at some distance from the cryostat. The cold box is constructed entirely of low background materials, without any compromise. It is surrounded by shielding consisting of 20 cm of lead and 14 cm of copper. The cooling power of the dilution refrigerator is transferred to the cold box by a 1.5 meter long cold finger protected by thermal radiation shields, all of low background copper. The experimental volume can house up to 100 kg of target mass. The cold box and shielding are installed in a clean room area with a measured clean room class of 100. For servicing, the top of the cryostat can be accessed from the first floor outside the clean room. This situation of a second generation cryostat in a high quality clean room, deep underground in the LNGS, presently makes this instrument unique in the world. The installation is now complete and entering into full operation. The system demonstrated its high reliability by running for more than a year with a prototype cold box made of normal copper. Runs with a new low background cold box in the fall of 1998 showed stable operation for a period of months. 
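A rough feeling for why millikelvin operation is essential can be obtained from the equilibrium estimate $`\mathrm{\Delta }TE/C`$, with $`C`$ the heat capacity of the dielectric absorber, which falls as $`T^3`$. The numbers below (Debye temperature, absorber composition, energy deposit) are generic textbook assumptions rather than measured detector parameters, and, as stressed above, the real devices absorb the high frequency phonons before they thermalise, so this is only an order-of-magnitude guide.

```python
import numpy as np

# Rough estimate of the temperature rise of a dielectric calorimeter,
# Delta_T = Delta_E / C, with the Debye heat capacity C ~ (12 pi^4/5) N k_B (T/T_D)^3.
k_B = 1.381e-23          # J/K
T = 0.015                # operating temperature ~ 15 mK
T_debye = 1040.0         # assumed Debye temperature of sapphire [K]
mass = 0.262             # absorber mass [kg]
molar_mass = 0.102       # kg/mol for Al2O3
atoms_per_formula = 5    # 2 Al + 3 O
N = mass / molar_mass * 6.022e23 * atoms_per_formula

C = (12 * np.pi**4 / 5) * N * k_B * (T / T_debye)**3   # J/K

E = 6.0e3 * 1.602e-19    # a 6 keV energy deposit in joules
print(f"C ~ {C:.2e} J/K,  Delta_T ~ {E / C * 1e6:.1f} micro-K")
```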
At present four 262 g detectors are in the experimental volume, performing first measurements under low background conditions. First results of this run have shown that at energies above 30 keV, the counting rate is on the order of a few counts/ (kg keV day) and above 100 keV below 1 count/(kg keV day) There are strong indications that the low energy part of the spectrum was dominated by external disturbances such as mechanical vibrations or electromagnetic interference. We are working to correct this in future runs. ### 2.2 Detector Development The CRESST collaboration is among the pioneers of cryogenic detector development. Present CRESST detectors have by far the highest sensitivity per unit mass of any cryogenic device now in use. Figure 1 shows the spectrum of an X-ray fluorescence source measured with a 262 g CRESST sapphire detector, as presently being used, showing an energy resolution of 133 eV at 1.5 keV. These 262 g detectors were developed by scaling up a 32 g sapphire detector . Due to optimised design, and because of the non-thermalization of the phonons as explained in the introduction, this scaling-up could be achieved without loss in sensitivity. Further developments for the next detector generation are in progress. * In order to improve linearity, dynamic range and time response, a mode of operation with thermal feedback was developed and successfully operated with the present CRESST detectors. * For another thermometer type, the iridium-gold proximity sandwich, fabrication improvements now allow the application of these thermometers with a wide choice of absorber materials, even for low melting point materials such as germanium. A germanium detector with a mass of 342 g is in preparation. * To further increase the energy sensitivity of the detectors we have also developed phonon collectors. The collectors provide a large collection area for phonons while retaining a small thermometer. This allows a more rapid collection of the phonons and so an increase in sensitivity. This concept can be applied to all detector types and is especially of interest with regard to scaling up the size of the detectors. * Passive techniques of background reduction – radiopure materials and a low background environment – are of course imperative in work of this type. However, there is a remaining background dominated by $`\beta `$ and $`\gamma `$ emissions from nearby radioactive contaminants. These produce exclusively electron recoils in the detector. In contrast WIMPs, and of course also neutrons, lead to nuclear recoils. Therefore, dramatic improvements in sensitivity are to be expected if, in addition to the usual passive shielding, the detector itself is capable of distinguishing electrons from nuclear recoils and rejecting them. ### 2.3 Simultaneous Phonon and Light Measurement We have recently developed a system, presently using CaWO<sub>4</sub> crystals as the absorber, where a measurement of scintillation light is carried out in parallel to the phonon detection. We find that these devices clearly discriminate nuclear recoils from electron recoils. The system is shown schematically in fig. 2 . It consists of two independent detectors, each of the CRESST type: A scintillating absorber with a tungsten superconducting phase transition thermometer on it, and a similar but smaller detector placed next to it to detect the scintillation light from the first detector. A detailed description is given in . 
Both detectors were made by standard CRESST techniques and were operated at about 12 mK. The CaWO<sub>4</sub> crystal was irradiated with photons and simultaneously with electrons. The left plot in fig. 3 shows a scatter plot of the pulse heights observed in the light detector versus the pulse height observed in the phonon detector. A clear correlation between the light and phonon signals is observed. The right hand plot shows the result of an additional irradiation with neutrons from an Americium-Beryllium source. A second line can be seen, due to neutron-induced nuclear recoils. Electron and nuclear recoils can thus be clearly distinguished down to a threshold of 10 keV. The leakage of some electron recoils into the nuclear recoil line gives the electron recoil rejection according to the quality factor of ref. . A detailed evaluation yields a rejection factor of 98% in the energy range between 10 keV and 20 keV, 99.7% in the range between 15 keV and 25 keV, and better than 99.9% above 20 keV. The intrinsic background in our CaWO<sub>4</sub> crystals is now being measured in the new Munich low background laboratory. No contamination has been found as of this writing. The present limits are 45 counts/(kg keV day) for the thorium chain and 6 counts/(kg keV day) for the uranium chain in the energy region relevant for the WIMP dark matter search.
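A minimal sketch of how a scatter plot of this kind translates into a rejection factor: events are classified by their light-to-phonon pulse-height ratio, and the rejection is the fraction of electron recoils that stay outside the nuclear-recoil acceptance band. The yields, widths and cut used here are invented toy numbers, not the measured CaWO<sub>4</sub> values quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: electron recoils give a light/phonon ratio near 1,
# nuclear recoils a strongly quenched ratio near 0.1 (numbers assumed).
n_events = 100_000
electron_ratio = rng.normal(1.0, 0.15, n_events)
nuclear_band_upper = 0.4          # accept events with ratio below this as "nuclear"

leaking = np.sum(electron_ratio < nuclear_band_upper)
rejection = 1.0 - leaking / n_events
print(f"electron-recoil rejection in this toy model: {rejection:.4%}")
```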
Combined with the strong background rejection, this means these detectors can be sensitive to low WIMP cross sections. Figure 5 shows the anticipated sensitivity obtained with a CaWO<sub>4</sub> detector in the present CRESST set-up in Gran Sasso. The CRESST CaWO<sub>4</sub> curve is based on a background rate of 1 count/(kg keV day), an intrinsic background rejection of 99.7 % above a recoil threshold of 15 keV and an exposure of 1 kg year. For comparison the recently updated limits of the Heidelberg-Moscow <sup>76</sup>Ge-diode experiment , and the DAMA NaI experiment are also shown. A 60 GeV WIMP with the cross section claimed in would give about 55 counts between 15 and 25 keV in 1 kg CaWO<sub>4</sub> within one year. A background of 1 count/(kg keV day) suppressed with 99.7% would leave 11 background counts in the same energy range. A 1 kg CaWO<sub>4</sub> detector with 1 year of measuring time in the present set-up of CRESST should allow a comfortable test of the recently reported positive signal. ### 3.3 CRESST in the case of a positive signal In addition to improving limits on dark matter, it is important to have means for the positive verification of a dark matter signal as well as for the elucidation of its nature. Once a dark matter signal is suspected, it can be verified by CRESST through the following effects. * Varying the mass of the target nucleus leads to a definite shift in the recoil energy spectrum. For example, in the case where the WIMP is substantially lighter than the target nucleus, the recoil momentum spectrum has an unchanged shape from nucleus to nucleus. Hence there is a simple rescaling of the recoil energy spectrum. The observation of the correct behaviour will greatly increase our confidence in a positive signal. Here the significant advantage of the CRESST technology, that it can be applied to different target materials, comes into play. In this context, the wide variety of materials that may be used for simultaneous light and phonon measurement is extremely important. We have already measured the relative scintillation efficiencies of CaWO<sub>4</sub>, PbWO<sub>4</sub>, BaF and BGO crystals at low temperatures and found similarly encouraging results for all materials. * Another verification of a dark matter signal is to be expected through an annual modulation of spectral shape and rate, which results from the motion of the earth around the sun. However, a 1 kg CaWO<sub>4</sub> detector is too small to reach a really significant statistical accuracy within one year of measurement. Here the large mass potential of the present CRESST installation, about 100 kg, will play an important role for establishing such an effect in the future. * Given the detection of a dark matter particle, an important task will be to determine its nature, e. g. for SUSY the gaugino and higgsino content, which gives rise to different strengths of the spin-dependent interaction. Significant steps in this direction can be taken by using different target materials (see e.g. fig. 24 in ref. ). * Finally we note calculations concerning the possible existence of a WIMP population orbiting in the solar system rather than in the galaxy. These would have a much lower velocity (about 30 km/sec) as compared to galactic WIMPs (270 km/sec), so that even heavy WIMPs have low momenta. This underlines the need for low threshold recoil detection and CRESST is well suited for such an investigation. 
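The counting estimates quoted above are easy to verify: a flat background of 1 count/(kg keV day), suppressed at 99.7%, integrated over the 15–25 keV window and one kg-year of exposure, gives the quoted eleven or so counts, to be compared with the roughly 55 expected signal counts.

```python
# Back-of-the-envelope check of the numbers quoted in the text.
background_rate = 1.0          # counts / (kg keV day)
suppression = 0.997            # active rejection of electron recoils
window_keV = 25.0 - 15.0       # energy window [keV]
exposure_kg_days = 1.0 * 365.0 # 1 kg for one year

background_counts = background_rate * (1.0 - suppression) * window_keV * exposure_kg_days
signal_counts = 55.0           # expected from a 60 GeV WIMP with the claimed cross section

print(f"background ~ {background_counts:.0f} counts, signal ~ {signal_counts:.0f} counts")
```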
## 4 Long Term Perspectives The sensitivity reached by a system that simply relies passively on radiopure materials, but lacks active intrinsic background suppression, saturates at some point and cannot be improved with more mass ($`M`$) and measuring time ($`t`$). On the other hand, in a system with a precisely determined background suppression factor, the sensitivity continues to improve as $`\sigma 1/\sqrt{Mt}`$ , as is possible with the CRESST scintillation light method. Beginning in the year 2000 we intend to upgrade the multi-channel SQUID read out and systematically increase the detector mass, which can go up to about 100 kg before reaching the full capacity of the present installation. With a 100 kg CaW0<sub>4</sub> detector, the sensitivity shown in fig. 6 can be reached in one year of measuring time. If we wish to cover most of the MSSM parameter space of SUSY with neutralino dark matter, the exposure would have to be increased to about 300 kg years, the background suppression improved to about 99.99 % above 15 keV, and the background lowered to 0.1 count/(kg keV day). The recent tests in Munich with CaWO<sub>4</sub>, which were limited by ambient neutrons, suggest that a suppression factor of this order should be within reach underground, with the neutrons well shielded and employing a muon veto. The excellent background suppression of cryodetectors with active background rejection makes them much less susceptible to systematic uncertainties than conventional detectors, which must rely heavily on a subtraction of radioactive backgrounds. Since this kind of systematic uncertainty cannot be compensated by an increase of detector mass, even moderate sized cryogenic detectors can achieve much better sensitivity than large mass conventional detectors. Note that the excellent levels shown in fig. 6 can be achieved with the rather moderate assumptions of background at 1 count/(kg keV day) and 0.1 count/(kg keV day). To a large extent, even higher background levels can be compensated with increased exposure. On the other hand, dark matter searches with conventional detectors, require a scaling of the presently reached best background levels of 0.057 counts/(kg keV day) by a factor of 2000 to reach the same sensitivity level. If WIMPs are not found, at some point the neutron flux, which also gives nuclear recoils, will begin to limit further improvement. With careful shielding the neutron flux in Gran Sasso should not limit the sensitivity within the exposures assumed for the upper CRESST curve in fig. 6. With still larger exposures, the neutron background may still be discriminated against large mass WIMPS. This can be done by comparing different target materials, which is possible with the CRESST technology, since different variations with nuclear number for the recoil spectra are to be expected with different mass projectiles. The phase of the project with large increased total detector mass will necessitate certain improvements and innovations in the technology, particularly involving background rejection, optimisation of the neutron shielding, and muon vetoing. As described above, if a positive dark matter signal exists, increased mass and improved background rejection will be important in verifying and elucidating the effect. A large target mass, such as 100 kg, is of importance to reach the high statistics needed to study the annual modulation effect. 
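The scaling quoted above, $`\sigma 1/\sqrt{Mt}`$ (read as a proportionality), can be made concrete in a few lines. The sketch below only shows the relative improvement with exposure; the normalisation is arbitrary and purely illustrative.

```python
import numpy as np

# Relative sensitivity scaling sigma ~ 1/sqrt(M*t) for a search whose background
# is precisely known and can be subtracted statistically (normalisation arbitrary).
def relative_sensitivity(exposure_kg_years, reference_exposure=1.0):
    return np.sqrt(reference_exposure / exposure_kg_years)

for exposure in [1.0, 10.0, 100.0, 300.0]:
    gain = 1.0 / relative_sensitivity(exposure)
    print(f"{exposure:6.0f} kg yr -> sensitivity improves by x{gain:.1f}")
```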
## 5 Conclusions

The installation of the large volume, low background, cryogenic facility of CRESST at the Gran Sasso Laboratory is complete. The highly sensitive CRESST sapphire detectors are at present the only available technology that can reasonably explore the low mass WIMP range. The new detectors with simultaneous measurement of phonons and scintillation light make it possible to distinguish nuclear recoils very effectively from the electron recoils caused by background radioactivity. For medium and high mass WIMPs this results in one of the highest sensitivities possible with today's technology, and will allow a test of the reported positive evidence for a WIMP signal by the DAMA collaboration in the near future. In the long term, the present CRESST set-up permits the installation of a detector mass of up to 100 kg. In contrast to other projects, CRESST detectors allow the use of a large variety of target materials. This offers a powerful tool for establishing a WIMP signal and for investigating WIMP properties in the event of a positive signal. With its combination of detection technologies, CRESST is one of the best options for direct particle dark matter detection over the whole WIMP mass range.
no-problem/9904/cond-mat9904324.html
ar5iv
text
# ON THE RELATION BETWEEN INTERBAND SCATTERING AND THE ”METALLIC PHASE” OF TWO DIMENSIONAL HOLES IN GaAs/AlGaAs ## Abstract The ”metallic” regime of holes in GaAs/AlGaAs heterostructures corresponds to densities where two splitted heavy hole bands exist at a zero magnetic field. Using Landau fan diagrams and weak field magnetoresistance curves we extract the carrier density in each band and the interband scattering rates. The measured inelastic rates depend Arrheniusly on temperature with an activation energy similar to that characterizing the longitudinal resistance. The ”metallic” characteristics, namely, the resistance increase with temperature, is hence traced to the activation of inelastic interband scattering. The data are used to extract the bands dispersion relations as well as the two particle-hole excitation continua. It is then argued that acoustic plasmon mediated Coulomb scattering might be responsible for the Arrhenius dependence on temperature. The absence of standard Coulomb scattering characterized by a power law dependence upon temperature is pointed out. Non-interacting two dimensional electron gas (2DEG) is believed to be insulating in the sense that its resistance always diverges as the temperature, $`T`$, approaches zero. This trend is opposite to that characteristic of most 3D metals where for $`T0`$ the resistance becomes smaller and eventually saturates to a finite value. The insulating nature of 2D systems has been observed in many experiments and was a generally accepted dogma until a few years ago, when Kravchenko et al. discovered that in silicon based high mobility 2DEG at high enough carrier densities, the resistance decreases as $`T0`$ (”metallic” behavior). The same samples, at lower densities, displayed insulating characteristics and the crossover from positive to negative variation of the resistance with temperature was soon identified as a novel metal-insulator transition. Since the discovery of Kravchenko et al. , qualitatively similar dependencies of the resistance upon temperature were observed in two dimensional hole gas (2DHG) in GaAs/AlGaAs heterostructures, SiGe quantum wells , InAs quantum wells as well as other silicon samples . While the nature of the insulating phase may be roughly understood, the physics of the metallic phase remains obscure. A mere reduction of the resistance with temperature does not however necessarily imply delocalization. Within a Drude-like framework it may result from a temperature dependent carrier density, or as suggested by Altshuler and Maslov , by a temperature dependent scattering time. Here we provide experimental evidence that indeed suppression of scattering with decreasing $`T`$ is responsible for the metallic characteristics of 2DHG in GaAs/AlGaAs heterostructures. The scattering mechanism, interband Coulomb scattering between the two splitted heavy hole bands, is very different from the one proposed in ref. . The correlation between the existence of two conducting bands and ”metallic” behavior was put forward by Pudalov . It has recently been convincingly substantiated by Papadakis et al . We find the same correlation but go beyond that and prove that the characteristic dependence upon temperature, $`\rho _{xx}(T)=\rho (0)+\alpha \mathrm{exp}(T_0/T)`$ ($`T_0`$ and $`\alpha `$ are some constants) follows from a similar dependence of the interband inelastic scattering rates upon temperature. We extract the bands’ structures and their particle-hole excitation continua from the measurements. 
We then use the latter to show that the Arrhenius temperature dependence might result from activation of plasmon mediated interaction, similarly to plasmon enhanced Coulomb drag between coupled layers. We believe that the same considerations may apply to the other ”metallic” 2D systems since band degeneracy is generally lifted there due to spin-orbit coupling and the lack of inversion symmetry at the interface where the 2D layer resides. The splitting between the two heavy hole bands in GaAs/AlGaAs heterostructures due to spin orbit coupling and lack of inversion symmetry have been extensively studied both theoretically and experimentally . The two bands are approximately degenerate up to a certain energy where they split and acquire very different effective masses (see inset to fig. 4). Thus, above a threshold density, current is carried by two bands as reflected in slope variation in the corresponding Landau fan diagram or in the appearance of a second frequency in Shubnikov de Haas measurements . Using either method, the hole densities in the lighter band, $`p_{l\text{ }}`$, and heavier band, $`p_h`$, can be extracted and fig. 1 depicts them as a function of the total density, $`p_{total}`$. For a total density below $`1.7\times 10^{11}cm^2`$, the bands are degenerate. For higher densities they split and for a total density above $`2.8\times 10^{11}cm^2`$ , practically all additional carriers go to the heavier and less mobile band. The inset to fig. 1 depicts a characteristic Shubnikov de Haas curve and demonstrates the existence of two sets of oscillations corresponding to the two bands. The data were taken using a high mobility ($`500,000`$ $`cm^2/Vs`$ at $`100`$ $`mK`$) 2DHG confined to a $`GaAs/Al_{0.8}Ga_{0.2}As`$ interface in the 100 plane. The sample had a 2DEG front gate and silicon doped backgate, 40nm and 300nm from the 2DHG, respectively. The simultaneous transport of two types of carriers with different mobilities and densities is also manifested in our system by a weak field classical positive magnetoresistance . A set of resistance curves for $`p_{total}=4.25\times 10^{11}cm^2`$, $`p_l=1.52\times 10^{11}cm^2`$, $`p_h=2.73\times 10^{11}cm^2`$ at different temperatures is depicted in the inset to fig. 2. Measurements as a function of density show that the positive magnetoresistance at weak fields appears when the total density is beyond the split off density, namely, when the two bands have different masses and mobilities. It then grows larger with density, as the bands deviate further. The Lorentzian shaped magnetoresistance expected from two band transport is obtained by subtracting the weak parabolic negative magnetoresistance (attributed to Coulomb interactions ) from the full magnetoresistance curve. It is depicted in fig. 2. Note the excellent agreement with the predicted shape, eqs. 2 below. Below $`0.6K`$, the resistance is practically independent of $`T`$ while for temperatures above $`2K`$, the Lorentzian is hardly visible. The suppression of the classical two band magnetoresistance results from interband scattering. At low temperatures this scattering is mainly elastic. As the temperature is increased, inelastic scattering commences, the drift velocities of carriers in the two bands gradually approach each other, and the magnetoresistance is consequently diminished. The data presented in figs. 1 and 2 can be used to extract the interband scattering rates. To that end, the standard two band transport formulae should be generalized to include interband scattering. 
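The band densities plotted in fig. 1 follow from the two sets of oscillations. For a non-degenerate (spin-split) 2D band, a Shubnikov-de Haas frequency $`f`$ in tesla corresponds to a density $`p=ef/h`$. The frequencies used below are assumed values chosen to roughly reproduce the densities quoted in the text; in practice they are read off traces such as the one in the inset of fig. 1.

```python
# Convert Shubnikov-de Haas frequencies to band densities, p_i = e f_i / h
# (valid for spin-split, i.e. non-degenerate, 2D bands).
e = 1.602e-19   # C
h = 6.626e-34   # J s

# Assumed oscillation frequencies in tesla (illustrative, not the measured ones).
f_light, f_heavy = 6.3, 11.3

for name, f in [("light band", f_light), ("heavy band", f_heavy)]:
    p = e * f / h                          # density in m^-2
    print(f"{name}: p ~ {p / 1e15:.2f} x 10^11 cm^-2")   # 1e15 m^-2 = 1e11 cm^-2
```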
The starting point is the set of coupled Drude equations for transport in the two bands, which straightforwardly give the resistance tensor
$$\left[\begin{array}{c}\epsilon _x\\ \epsilon _y\\ \epsilon _x\\ \epsilon _y\end{array}\right]=\left[\begin{array}{cccc}S_l& HR_l& Q& 0\\ HR_l& S_l& 0& Q\\ Q& 0& S_h& HR_h\\ 0& Q& HR_h& S_h\end{array}\right]\left[\begin{array}{c}J_{lx}\\ J_{ly}\\ J_{hx}\\ J_{hy}\end{array}\right]$$ (1)
Here, $`\epsilon `$ and $`H`$ are the electric and magnetic fields, respectively, $`R_i=1/en_i`$ $`(i=l,h)`$ is the Hall coefficient of the $`i`$-th band, $`n_i`$ is the hole density in that band and $`J_i`$ is the current density there. The elements $`S_l`$, $`S_h`$, and $`Q`$ can be expressed in terms of conductances, $`S_l=\sigma _{ll}^{-1}+\sigma _{lh}^{-1};S_h=\sigma _{hh}^{-1}+\sigma _{hl}^{-1};Q=\lambda \sigma _{lh}^{-1}=\lambda ^{-1}\sigma _{hl}^{-1}`$, where $`\lambda `$ is a function of the velocities and densities. The diagonal conductances, $`\sigma _{ii}`$, pertain to scattering (elastic and inelastic) within each band while $`\sigma _{ij}`$ accounts for interband scattering. The latter processes may include carrier transfer from one band to the other as well as drag-like processes where one particle from one band scatters off a particle in the other band and both carriers maintain their bands. The function $`\lambda `$ depends on the nature of the dominant interband scattering mechanism. Setting $`J_{ly}+J_{hy}=0`$ one obtains the Lorentzian-shaped longitudinal magnetoresistance depicted in fig. 2
$$\rho _{xx}(H)\frac{\epsilon _x}{J_{lx}+J_{hx}}=\rho _{xx}(H\mathrm{})+\frac{L}{1+\left(H/W\right)^2}$$ (2)
$$\rho _{xx}(H\mathrm{})=\frac{R_h^2S_l+R_l^2S_h-2QR_lR_h}{\left[R_l+R_h\right]^2}$$ (3)
$$W=\frac{S_l+S_h+2Q}{R_l+R_h};L=\frac{\left[R_l\left(S_h+Q\right)-R_h\left(S_l+Q\right)\right]^2}{\left(S_l+S_h+2Q\right)\left(R_l+R_h\right)^2}$$ (4)
The hole densities in the two bands, and hence $`R_l`$, $`R_h`$, are known from fig. 1. Fitting the data depicted in fig. 2 to eqs. (2)–(4), one obtains $`\rho _{xx}(H\mathrm{})`$, $`L`$, and $`W`$, which are in turn used to extract $`S_l`$, $`S_h`$, and $`Q`$ as a function of temperature and density. The Landau fan diagram combined with the low field magnetoresistance thus provides a unique opportunity to directly measure inter and intraband scattering. The results of such an analysis are shown in fig. 3 where $`S_l`$, $`S_h`$, $`Q`$, and $`\rho _{xx}(H=0)`$ are depicted vs. $`T`$ for the same total density as in fig. 2. At low temperatures, all these quantities saturate to some residual values which we attribute to inter and intraband elastic scattering. As the temperature is increased, inelastic scattering commences and these quantities grow. The remarkable and central result of our work is the observation that the inelastic scattering rates follow the same temperature dependence as $`\rho _{xx}(H=0)`$, namely, $`S_i(T)=S_i(0)+\alpha _{S_i}\mathrm{exp}(-T_0/T)`$, $`Q(T)=Q(0)+\alpha _Q\mathrm{exp}(-T_0/T)`$, where $`\alpha _{S_i}`$, $`\alpha _Q`$ are some constants. Remarkably, the characteristic temperature, $`T_0`$, is similar for all these quantities, including $`\rho _{xx}`$! For $`S_l`$ and $`S_h`$ we find $`T_0=5.0K`$, for $`Q`$, $`T_0=4.8K`$, and for $`\rho _{xx}`$, $`T_0=4.3K`$. Similar correlations are found for other total densities in the ”metallic” regime.
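For completeness, the Lorentzian of eqs. (2)–(4) is straightforward to evaluate numerically. The sketch below does so for assumed values of $`S_l`$, $`S_h`$, $`Q`$ and the band densities (illustrative numbers in arbitrary units, not the fitted values of fig. 3; only the density ratio is taken from the text).

```python
import numpy as np

def rho_xx(H, S_l, S_h, Q, R_l, R_h):
    """Evaluate the Lorentzian magnetoresistance of eqs. (2)-(4)."""
    rho_inf = (R_h**2 * S_l + R_l**2 * S_h - 2 * Q * R_l * R_h) / (R_l + R_h)**2
    W = (S_l + S_h + 2 * Q) / (R_l + R_h)
    L = (R_l * (S_h + Q) - R_h * (S_l + Q))**2 / ((S_l + S_h + 2 * Q) * (R_l + R_h)**2)
    return rho_inf + L / (1.0 + (H / W)**2)

# Illustrative parameters in arbitrary units (not the fitted values of fig. 3);
# the Hall coefficients keep the density ratio quoted in the text.
R_l, R_h = 1.0 / 1.52, 1.0 / 2.73
S_l, S_h, Q = 1.0, 3.0, 0.5

for H in [0.0, 0.5, 1.0, 2.0, 5.0]:
    print(f"H = {H:3.1f}: rho_xx = {rho_xx(H, S_l, S_h, Q, R_l, R_h):.3f}")
```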
Our results hence strongly suggest that the universal Arrhenius temperature dependence of the resistance in the ”metallic” regime merely reflects the increase of inelastic interband scattering with temperature and probably have nothing to do with phase transition into a delocalized state. As expected, the light band is more susceptible to scattering and hence $`\alpha _{S_l}>\alpha _{S_h}`$. Note the inelastic contributions to $`S_l`$ and $`Q`$ at $`T=2K`$ are larger than their elastic counterparts. Moreover, they are all larger than $`\rho _{xx}`$, indicating the two bands are strongly coupled by the scattering. Eventually, for $`T>2K`$ , the coupling equilibrates the two drift velocities and the classical magnetoresistance is fully suppressed. However, the resistance, $`\rho _{xx}`$, continues to grow. The latter growth results from inelastic interband scattering that affects the resistance even when the bands are already fairly strongly coupled. In fact, in some of the data published in the literature, the Arrhenius dependence of the resistance is not accompanied by the Lorentzian magnetoresistance, probably indicating substantial interband scattering. We find experimentally that $`S_l(T)S_l(0)+2.1Q;`$ $`S_h(T)S_h(0)+0.48Q`$, thus yielding for the density of figs 2, 3, $`\lambda =0.48`$. Since the prefactors multiplying $`Q`$ are reciprocal, the resistances, $`S_l(0);`$ $`S_h(0),`$ are identified as $`\sigma _{ll}^1`$ and $`\sigma _{hh}^1`$, respectively. The diagonal resistances, pertaining to intraband scattering, are hence found to be practically temperature independent. The Arrhenius $`T`$ dependence originates from interband scattering alone. We turn now to discuss possible reasons for the Arrhenius temperature dependence of $`S_l,S_h`$ and $`Q`$ which in turn lead to a similar temperature dependence of $`\rho _{xx}`$. These scattering rates (expressed as resistances) crucially depend on the bands’ dispersions relations, $`E_i(𝐤)`$, and their resulting excitation spectra. To extract the bands dispersions depicted in the inset to fig. 4, we approximate the light band by a parabolic relation with a mass $`m_l=0.38m_0`$ ($`m_0`$ is the bare electron mass). The variation of $`p_l`$, $`p_h`$ with $`p_{total}`$, depicted in fig. 1, is then used to calculate the ratio between the two bands compressibilities. Neglecting band warping as well as differences between density of states and compressibility, we use the ratio of the two compressibilities to extract the dispersion of the heavy band. This dispersion then allows, within the random phase approximation, the calculation of the excitation spectrum of the system. The spectrum is composed of two particle-hole continua, one for each band, and two plasmon branches. The particle-hole continua correspond to regions in the $`𝐪,\omega `$ plane where the imaginary part of the polarization operator, $`\mathrm{\Pi }(𝐪,\omega )`$, is non-zero. The plasmon branches are the poles of the dielectric constant, $`ϵ(𝐪,\omega )`$. Both are shown in fig. 4 for zero temperature. Due to the very different masses, the optical plasmon branch corresponds mostly to motion of the light holes while the acoustic branch originates from the heavy holes screened by the lighter ones (analogous to acoustic phonons in metals). At small wave-vectors, the acoustic plasmon velocity is $`v_{ap}=\sqrt{\frac{m_h}{2m_l}}v_{Fh},`$ where $`v_{Fh}`$ is the heavier hole Fermi velocity. 
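The common activation temperature can be illustrated by fitting each measured quantity to the form $`X(T)=X(0)+\alpha \mathrm{exp}(-T_0/T)`$. The sketch below fits synthetic data generated with $`T_0=5`$ K; all numbers are illustrative stand-ins for the data of fig. 3, on which the same fit would be performed.

```python
import numpy as np
from scipy.optimize import curve_fit

def arrhenius(T, X0, alpha, T0):
    return X0 + alpha * np.exp(-T0 / T)

# Synthetic "measurements" with T0 = 5 K (illustrative stand-in for fig. 3 data).
T = np.linspace(0.6, 2.0, 10)
rng = np.random.default_rng(1)
data = arrhenius(T, 1.0, 40.0, 5.0) * (1 + 0.02 * rng.standard_normal(T.size))

popt, _ = curve_fit(arrhenius, T, data, p0=[1.0, 10.0, 3.0])
print(f"fitted: X(0) = {popt[0]:.2f}, alpha = {popt[1]:.1f}, T0 = {popt[2]:.2f} K")
```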
Some interband scattering may be attributed to the electron-phonon interaction, but the calculated magnitude of this effect is more than an order of magnitude too small to account for the measured rates. A more plausible candidate is interband Coulomb scattering, which leads to resistance through either Coulomb drag or particle transfer. The rate of interband scattering is proportional to the screened potential squared, $`|\frac{2\pi e^2}{qϵ(𝐪,\omega )}|^2`$. In the absence of plasmons, the dielectric function is regular and at low temperatures may be approximated by its static value. The resulting rate is usually proportional to $`T^2`$. The absence of such a contribution in our data is puzzling and calls for a detailed calculation of the scattering rates with the specific particle-hole continua and band structure depicted in fig. 4. It may happen that the very different masses, as well as the concave shape of the heavy band, limit that contribution to small values. The Coulomb scattering rates may be substantially enhanced in the presence of plasmons, due to the divergent screened interaction in their vicinity, provided the plasmon branch overlaps the particle-hole continua. This effect is very pronounced in the calculation of the Coulomb drag between coupled quantum wells by Flensberg and Hu . As indicated by fig. 4, at $`T=0`$ the acoustic plasmon does not intersect the heavy-hole continuum. As the temperature is raised, Im$`\left[\mathrm{\Pi }_h(𝐪,\omega )\right]`$ is thermally activated beyond its zero-$`T`$ boundaries, leading to a finite overlap with the acoustic plasmon branch. The value of Im$`\left[\mathrm{\Pi }_h(𝐪,\omega )\right]`$ is Arrheniusly small there, but the diverging screened interaction compensates for that. The corresponding scattering rates depend Arrheniusly on temperature. We turn now to evaluate the corresponding activation temperature, $`T_0`$. Since the temperature, $`T\approx 2K`$, is small compared with the Fermi energy, we restrict ourselves to small $`𝐪`$. A direct calculation for a concave dispersion relation then gives Im$`\left[\mathrm{\Pi }_h(𝐪,v_{ap}q)\right]=\frac{1}{2\sqrt{\pi ^3T}}\frac{k_p}{\sqrt{\partial ^2E/\partial k^2}}\mathrm{exp}\left[-\left(E_F-E_p\right)/T\right]`$. Here, $`k_p<k_{Fh}`$ is the momentum for which the heavy hole velocity matches the plasmon velocity, the curvature, $`\partial ^2E/\partial k^2`$, is evaluated at $`k_p`$, and $`E_p\equiv E(k_p)`$. Note that the result is independent of $`q`$! The activation temperature, $`T_0`$, is simply $`E_F-E_p`$. At our density, $`m_h=5.15m_l`$, leading to $`v_{ap}\approx 1.6v_{Fh}`$. The corresponding $`E_p`$ is marked in the inset to fig. 4 together with the Fermi energy. The resulting activation energy, $`T_0=E_F-E_p\approx 5.5K`$, is in good agreement with the value characterizing the inelastic rates, $`Q`$, $`S_l`$, $`S_h`$, and the resistance, $`\rho _{xx}`$. We are presently calculating the resulting scattering rates. We briefly note another possible source of an Arrhenius temperature dependence of the resistance. As the temperature is increased, carriers are transferred from the light to the heavy band, due to the larger entropy of the latter (larger density of states). This coexistence of two bands is hence analogous to the standard liquid (light band) - gas (heavy band) coexistence. Under certain conditions, the density changes calculated from the corresponding Clausius-Clapeyron equation may follow an Arrhenius dependence, which is analogous to the vapor pressure in the liquid-gas problem.
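Two of the quoted numbers can be checked immediately: the small-$`q`$ acoustic plasmon velocity ratio $`v_{ap}/v_{Fh}=\sqrt{m_h/2m_l}`$ for $`m_h=5.15m_l`$, and the activation scale $`T_0=E_F-E_p\approx 5.5`$ K expressed in meV. The sketch below only rechecks this arithmetic; the dispersion-dependent determination of $`k_p`$ itself requires the measured heavy band of fig. 4.

```python
import numpy as np

# Check of the small-q acoustic plasmon velocity and the activation scale.
m_ratio = 5.15                      # m_h / m_l quoted in the text
v_ratio = np.sqrt(m_ratio / 2.0)    # v_ap / v_Fh = sqrt(m_h / 2 m_l)
print(f"v_ap / v_Fh = {v_ratio:.2f}")

k_B = 8.617e-5                      # eV/K
T0 = 5.5                            # K, the activation temperature E_F - E_p
print(f"E_F - E_p = {T0 * k_B * 1e3:.2f} meV")
```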
Since the heavy hole mobility is lower than that of the light holes, such carrier transfer should result in resistance increase. We are presently studying the implications of this effect. In Summary. We have shown experimentally that the ”metallic” behavior of the resistance of holes in a GaAs/AlGaAs heterostructure results from inelastic scattering between the two splitted heavy hole bands. The measured interband scattering rates depend Arrheniusly on temperature with almost the same activation energy as the longitudinal resistance. Using Shubnikov de Haas data, we mapped the band structure and the corresponding excitations. We found that due to the dispersion relation of the heavy band, the system supports a weakly damped acoustic plasmon. The Arrheniusly small overlap between the heavy hole excitation continuum and the plasmon branch leads to a similar dependence of the interband scattering rates, and hence of $`\rho _{xx}`$ upon $`T`$. Using the measured band structures we calculate $`T_0`$ for the plasmon mediated process and obtain good agreement with the values deduced experimentally from the inelastic scattering rates. The absence of a substantial power law contribution to the inelastic interband scattering remains to be explained. Acknowledgment This work was supported by the Binational Science Foundation (BSF), Israeli Academy of Sciences, German-Israeli DIP grant, Minerva foundation, Technion grant for promotion of research, and by the V. Ehrlich career development chair. Figure Captions Figure 1 - Hole densities of the two splitted heavy hole bands as a function of total density. Inset - one of the Shubnikov de Haas traces used to determine the hole densities. Figure 2 - The two-band classical magnetoresistance contribution to the resistance for different temperatures. Note the perfect Lorentzian shape. Inset- Full magnetoresistance curves for the same temperatures. Figure 3 - The various scattering rates expressed in terms of resistances (left axis) and the zero field longitudinal resistance (right axis) vs. $`T`$. Solid lines depict best fit to Arrhenius dependence. Inset-same data in $`ln`$ vs. $`1/T`$ plot. The slopes give $`T_0`$. Figure 4 - Solid lines - heavy and light particle-hole excitation continua as a function of momentum scaled to the heavy hole Fermi wave vector. Shaded area corresponds to the range where drag-like interband scattering is possible at very low $`T`$. Dashed lines - optical (op) and acoustic (ap) plasmon dispersions. Inset - The measured bands dispersion relations. The energy $`E_F`$ corresponds to the hole Fermi energy and $`E_p`$ is the energy where the heavy hole velocity matches that of the acoustic plasmon. The difference, $`E_FE_p`$ gives the activation temperature, $`T_0`$ (see text).
no-problem/9904/hep-th9904009.html
ar5iv
text
# Quantum causal histories ## 1 Introduction In general relativity, with compact rather than asymptotically flat boundary conditions, physical observations are made inside the system that they describe. In quantum theory, the observable quantities are meaningful outside the system that they refer to. It is very likely that quantum gravity must be a “quantum mechanical relativistic theory”. That is, a theory where the observables can be given as self-adjoint operators on a Hilbert space but which are meaningful inside the system that they describe. A rough description of what such a theory involves is the following. Observations made inside the system are closely related to causality in the sense that an inside observer necessarily splits the history of the system into the part that is in the future, the part that is in the past, and — assuming finite speed of propagation of information — elsewhere. We may call such observables “internal observables”, characterised by the requirement that they refer to information that an observer at a point, or a connected region of spacetime, may be able to gain about their causal past. In a previous paper , we found that these observables can be described by functors from the partially ordered set of events in the spacetime to the category of sets. Such a functor codes the relationship between the causal structure and the information available to an observer inside the spacetime. This has non-trivial consequences and, in particular, the observables algebra is modified even at the classical level. Internal observables satisfy a Heyting algebra, which is a weak version of the Boolean algebra of ordinary observables. This is still a distributional algebra. Namely, for propositions $`P,Q`$ and $`R`$, if $`PQ`$ denotes “$`P`$ or $`Q`$”, and $`PQ`$ means “$`P`$ and $`Q`$”, then $`P(QR)=(PQ)(PR)`$. On the other hand, quantum mechanics is linear, and as a result of the superposition principle, quantum mechanical propositions are not distributive. If $`P,Q`$ and $`R`$ are projection operators, $`P(QR)`$ is not equal to $`(PQ)(PR)`$. Having both quantum mechanics and internal observables in the same theory requires finding propositions that have a non-distributive quantum mechanical aspect and a distributive causal aspect. The aim of the present paper is to define the histories in which such observables may be encountered. We will, therefore, define quantum causal histories, which are histories that are both quantum mechanical and causal. Assuming that a discrete causal ordering (a causal set) is a sufficient description of the fundamental past/future ordering needed to qualify observations as being inside the system, we find that a quantum causal history can be constructed by attaching finite-dimensional Hilbert spaces to the events of the causal set. It is then natural to consider tensor products of the Hilbert spaces on events that are spacelike separated. We define quantum histories with local unitary evolution maps between such sets of spacelike separated events. The conditions of reflexivity, antisymmetry and transitivity that hold for the causal set have analogs in the quantum history which are conditions on the evolution operators. We find that transitivity is a strong physical condition on the evolution operators and, if imposed, implies that the quantum causal histories are invariant under directed coarse graining. If the the causal set represents the universe, quantum causal histories constitute a quantum cosmological theory. 
Its main notable feature is that there is a Hilbert space for each event but not one for the entire universe. Hence, no wavefunction of the universe arises. A consistent intepretation of quantum causal histories and observatons inside the quantum universe can be provided and will appear in a forthcoming paper . In more detail, the outline of this paper is the following. In section 2 we review causal set histories and provide a list of definitions of structures that can be found in a causal history and which are used in the quantum causal histories. In particular, we concentrate on acausal sets, sets of causally unrelated events. In section 3, we introduce the poset of acausal sets, equipped with the appropriate ordering relation. The definition of quantum causal histories is based on this poset and is given in section 4. The properties of the resulting histories are discussed in section 5. The ordering of a causal set is reflexive, antisymmetric and transitive, conditions which are also imposed on the quantum histories. The consequences of these properties are analyzed in section 6. In particular, we find that transitivity leads to directed coarse-graining invariance. Two classes of quantum causal histories are given as examples in section 7. Up to this point, the causal histories require a choice of a causal set. In section 8, we remove this restriction and provide a sum-over-histories version of quantum causal evolution. The quantum causal histories presented here are consistent, but not all physically meaningful questions can be asked. There are several possibilities for generalisations, some of which we outline in the Conclusions. ## 2 Causal set histories A (discrete) causal history is a causal set of events that carry extra structure. For example, the causal histories that were examined in had as events vector spaces spanned by $`SU_q(2)`$ intertwiners. In two dimensions, an exact model of such a causal history has been proposed by Ambjorn and Loll and its continuum limit properties have been investigated in . The dynamics of a 3-dimensional causal spin network history model is addressed by Borissov and Gupta in <sup>1</sup><sup>1</sup>1 For pure causal set theories (with no extra structure on the events), we may note recent work by Rideout and Sorkin who derived a family of stochastic sequential growth dynamics for a causal set, with very interesting consequences about the classical limit of pure causal sets . Further, the dynamics of a toy model causal set using a suitable quantum measure was proposed by Criscuolo and Waelbroek in .. In this section, we review the definition of a causal set and provide several derivative definitions which will be used in the rest of this paper. A causal set $`𝒞`$ is a partially ordered set whose elements are interpeted as the events in a history (see ). We denote the events by $`p,q,r,\mathrm{}`$. If, say, $`p`$ precedes $`q`$, we write $`pq`$. The equal option is used when $`p`$ coincides with $`q`$. We write $`pRq`$ when either $`pq`$ or $`pq`$ holds. The causal relation is reflexive, i.e. $`pp`$ for any event $`p`$. It is also transitive, i.e. if $`pq`$ and $`qr`$, then $`pr`$. To ensure that $`𝒞`$ has no closed timelike loops, we make the causal relation antisymmetric, that is, if $`pq`$ and $`qp`$, then $`p=q`$. Finally, we limit ourselves to histories with a finite number of events. Given a causal set, there are several secondary structures that we can construct from it and which come in useful in this paper. 
We therefore list them here (see Figures 1 and 2): * The causal past of some event $`p`$ is the set of all events $`r𝒞`$ with $`rp`$. We denote the causal past of $`p`$ by $`P(p)`$. * The causal future of $`p`$ is the set of all events $`q𝒞`$ with $`pq`$. We denote it by $`F(p)`$. * An acausal set, denoted $`a,b,c,\mathrm{}`$, is a set of events in $`𝒞`$ that are all causally unrelated to each other. * The acausal set $`a`$ is a complete past of the event $`p`$ when every event in the causal past $`P(p)`$ of $`p`$ is related to some event in $`a`$. It is not possible to add an event from $`(P(p)a)`$ to $`a`$ and produce a new acausal set. * Similarly, an acausal set $`b`$ is a complete future of $`p`$ when every event in the causal future $`F(p)`$ of $`p`$ is related to some event in $`b`$. * A maximal antichain in the causal set $`𝒞`$ is an acausal set $`A`$ such that every event in $`(𝒞A)`$ is causally related to some event in $`A`$. Similar definitions as the above of past, future, complete past, and complete future for a single event can be given for acausal sets: * The causal past of $`a`$ is $`P(a)=_iP(q_i)`$ for all $`q_ia`$. Similarly, the causal future of $`a`$ is $`F(a)=_iF(q_i)`$ for all $`q_ia`$. * An acausal set $`a`$ is a complete past of the acausal set $`b`$ if every event in $`P(b)`$ is related to some event in $`a`$. * An acausal set $`c`$ is a complete future of $`b`$ if every event in $`F(b)`$ is related to some event in $`c`$. Furthermore, * Two acausal sets $`a`$ and $`b`$ are a complete pair when $`a`$ is a complete past of $`b`$ and $`b`$ is a complete future of $`a`$. * Two acausal sets $`a`$ and $`b`$ are a full pair when they are a complete pair and every event in $`a`$ is related to every event in $`b`$. * Two acausal sets $`a`$ and $`b`$ cross when some of the events in $`a`$ are in the future of $`b`$ and some are in its past. ## 3 The poset of acausal sets The set of acausal sets within a given causal set $`𝒞`$ is a partially ordered set if we define the relation $`ab`$ to mean that $`a`$ is a complete past of $`b`$ and $`b`$ is a complete future of $`a`$. Reflecting the properties of the underlying causal set, the relation $``$ is reflexive, transitive and antisymmetric. Let us call this poset $`𝐀`$. It is on this poset of acausal sets that we will base the quantum version of the causal histories. Its properties, therefore, are important constraints on the corresponding quantum history. The main property of $`𝐀`$ that characterises the kind of quantum theory we will obtain in this paper is that, given acausal sets $`a,b`$ and $`c`$, the following holds ($`R`$ means either $``$ or $``$): $$\text{If }aRc,bRc,\text{ and }a,b\text{ do not cross, then }aRb.$$ (1) That is, given some acausal set $`c𝐀`$, all the acausal sets that are related to $`c`$ are also related to each other — except if they happen to cross. If $`a`$ and $`b`$ are “too close” to each other they may cross and then cannot be related by $``$. This means that for the chosen $`c`$ there is not a unique complete pair sequence. In selecting one of the possible sequences we need to make repeated choices of which of two crossing acausal sets to keep. ## 4 Quantum causal histories We will now construct the quantum version, $`Q𝐀`$, of the poset $`𝐀`$. We will regard an event $`q`$ in the causal set as a Planck-scale quantum “event” with a Hilbert space $`H(q)`$ that stores its possible states. 
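The definitions above are easy to encode and test on small examples. The sketch below builds a toy causal set from its covering relations (the relations and event names are invented for illustration), computes causal pasts and futures by transitive closure, and checks whether two given acausal sets form a complete pair in the sense defined above.

```python
from itertools import combinations

# A toy causal set given by its covering relations p -> q (illustrative only).
relations = {("p1", "q1"), ("p1", "q2"), ("p2", "q2"), ("p2", "q3")}
events = {e for pair in relations for e in pair}

def precedes(a, b):
    """a strictly precedes b in the transitive closure of the covering relations."""
    frontier = {y for x, y in relations if x == a}
    seen = set()
    while frontier:
        x = frontier.pop()
        if x == b:
            return True
        if x not in seen:
            seen.add(x)
            frontier |= {y for w, y in relations if w == x}
    return False

def related(x, y):
    return x == y or precedes(x, y) or precedes(y, x)

causal_past = lambda p: {r for r in events if precedes(r, p)}
causal_future = lambda p: {r for r in events if precedes(p, r)}

def is_acausal(s):
    return all(not related(x, y) for x, y in combinations(s, 2))

def complete_pair(a, b):
    """a is a complete past of b and b is a complete future of a."""
    past_of_b = set().union(*[causal_past(q) for q in b])
    future_of_a = set().union(*[causal_future(q) for q in a])
    past_ok = all(any(related(r, q) for q in a) for r in past_of_b)
    future_ok = all(any(related(r, q) for q in b) for r in future_of_a)
    return past_ok and future_ok

a, b = {"p1", "p2"}, {"q1", "q2", "q3"}
print(is_acausal(a), is_acausal(b), complete_pair(a, b))   # expect: True True True
```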
We require that $`H(q)`$ is finite-dimensional, which is consistent with the requirement that our causal sets be finite. Choose an acausal set $`a=\{q_1,q_2,\mathrm{},q_n\}`$ in $`𝐀`$. Since all $`q_ia`$ are causally unrelated to each other, standard quantum mechanics dictates that the Hilbert space of $`a`$ is $$H(a)=\underset{i=1}{\overset{n}{}}H(q_i).$$ (2) That is, we have a tensor product Hilbert space in $`Q𝐀`$ for each acausal set in $`𝐀`$. When two acausal sets are related, $`ab`$, there needs to be an evolution operator between the corresponding Hilbert spaces: $$E_{ab}:H(a)H(b).$$ (3) We will impose one more condition on the causal histories that will make their present treatment simpler. We will only consider posets $`𝐀`$ with the following property: $$\text{ If }ab,\text{dim}H(a)=\text{dim}H(b).$$ (4) This restriction is particularly convenient since it allows us to simply regard $`H(a)`$ and $`H(b)`$ as isomorphic and require that $`E_{ab}`$ is a unitary evolution operator. The poset of acausal sets $`𝐀`$ is reflexive, transitive and antisymmetric. We would like to maintain these properties of the causal ordering in the quantum theory as analogous conditions on the evolution operators. (In other words, we want the quantum causal history to be a functor from the poset $`𝐀`$ to the category of Hilbert spaces.) The analogue of reflexivity is the existence of an operator $`E_{aa}=\mathrm{𝟏}_a:H(a)H(a)`$ for every acausal set $`a`$. $`E_{aa}`$ has to be the identity because any other operator from $`H(a)`$ to itself would have to be a new event. Transitivity in $`𝐀`$ implies that $$E_{bc}E_{ab}=E_{ac}$$ (5) in $`Q𝐀`$. We will return to transitivity and its implications for $`𝐀`$ and $`Q𝐀`$ in section 6. At each event $`q`$, there is an algebra of observables, the operators on $`H(q)`$. An observable $`\widehat{O}_a`$ at $`a`$ becomes an observable $`\widehat{O}_b`$ at $`b`$ by $$\widehat{O}_b=E_{ab}\widehat{O}_aE_{ab}^{}.$$ (6) This completes the definition of the causal quantum histories we are concerned with. In the next section we discuss the evolution of states which is allowed in such histories. Then, in section 6, we will come back to the imposition of transitivity on the evolution operators, a strong condition that dictates the form of the resulting histories and their quantum cosmology interpretation. ## 5 Quantum evolution in $`Q𝐀`$ In this section we discuss the consequences of the definitions of quantum histories given above. ### 5.1 Products of complete pair sequences Evolution maps between complete pairs that are themselves causally unrelated to each other may be composed in the standard way, like tensors. That is, consider complete pairs $`ab`$ and $`cd`$, with $`a`$ and $`b`$ unrelated to $`c`$ or $`d`$. Then construct the acausal sets $`ac`$ and $`bd`$, which form a new complete pair: $`(ac)(bd)`$. The evolution operator on the composites, $$E_{(ac)(bd)}:H(a)H(c)H(b)H(d),$$ (7) is the product of the operators on the two pairs, $$E_{(ac)(bd)}=E_{ab}E_{cd}.$$ (8) ### 5.2 Projection operators The causal structure of $`𝐀`$ means that a projection operator on $`H(a)`$ propagates to the future of $`a`$ in the following way. A projection operator $$P_a:H(a)V(a)$$ (9) that reduces $`H(a)`$ to a subspace $`V(a)`$, can be extended to a larger acausal set $`ac`$. 
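A minimal numerical sketch of the structures just introduced: each event carries an assumed two-dimensional Hilbert space, acausal sets get tensor products, and random unitaries stand in for the (unspecified) dynamics. The code only checks the structural requirements, namely unitarity, the composition rule (5), the transport rule (6) and the tensor-product rule (8).

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    """Haar-like random unitary via QR decomposition (illustrative stand-in dynamics)."""
    q, r = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

dim = 2                                   # assumed dimension of each event Hilbert space
E_ab = random_unitary(dim**2)             # H(a) -> H(b), a and b two-event acausal sets
E_bc = random_unitary(dim**2)             # H(b) -> H(c)
E_ac = E_bc @ E_ab                        # transitivity, eq. (5): E_ac = E_bc E_ab

# Transport an observable from a to c, eq. (6): O_c = E_ac O_a E_ac^dagger
O_a = np.kron(np.diag([1.0, -1.0]), np.eye(dim))   # an observable on the first event of a
O_c = E_ac @ O_a @ E_ac.conj().T

# Spacelike-separated complete pairs compose as tensor products, eq. (8)
E_cd = random_unitary(dim**2)
E_pair = np.kron(E_ab, E_cd)

print(np.allclose(E_ac.conj().T @ E_ac, np.eye(dim**2)))                # unitary: True
print(np.allclose(np.linalg.eigvalsh(O_c), np.linalg.eigvalsh(O_a)))    # spectrum preserved: True
```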
On $`H(a)H(c)`$, it is the new projection operator $$P_{ac}=P_a\mathrm{𝟏}_c:H(a)H(c)V(a)H(c).$$ (10) By using the evolution operator $`E_{(ac)(bd)}=E_{ab}E_{cd}`$ on the enlarged projection operator we obtain a projection operator $`P_b1_d`$ on the future acausal set $`bd`$. ### 5.3 Evolution in $`𝐀`$ can be independent of $`𝒞`$ Consider a complete pair $`ab`$ in which $`a=a_1a_2`$ and $`b=b_1b_2`$. The corresponding Hilbert spaces are $$H(a)=H(a_1)H(a_2)\text{and}H(b)=H(b_1)H(b_2),$$ (11) and $`E_{ab}`$ is the evolution operator that corresponds to the causal relation $`ab`$. Choose some state $`|\psi H(a_1)`$ by acting on $`H(a_1)`$ with the projection operator $`|\psi \psi |`$. This implies that, in $`H(a)`$, we have chosen the state $`|\psi |\psi _{a_2}`$, for some $`|\psi _{a_2}H(a_2)`$, using the projection operator $`(|\psi \psi |)\mathrm{𝟏}_{a_2}`$. We can use $`E_{ab}`$ on this state to obtain the state $$|\psi _b=E_{ab}\left(|\psi |\psi _{a_2}\right)$$ (12) in $`H(b)`$. If, for any reason, we need to restrict our attention to $`b_2`$, we can trace over $`b_1`$ to find that the original state $`|\psi H(a_2)`$ gives rise to the density matrix: $`\rho _\psi (b_2)`$ $`=`$ $`\text{Tr}_{b_1}|\psi _b\psi _b`$ (13) $`=`$ $`\text{Tr}_{b_1}\left[E_{ab}\left(|\psi \psi |\mathrm{𝟏}_{a_2}\right)E_{ab}^{}\right].`$ (14) At this point, the following question arises. What if $`a_1`$ is not in the causal past of $`b_2`$ and, for example, we have these causal relations: $$\begin{array}{c}\text{}\end{array}$$ Does the evolution we defined on $`Q𝐀`$ violate causality in the underlying causal set $`𝒞`$? This question illuminates several features of the quantum causal histories. The very first thing to note is that we get the same acausal poset $`𝐀`$ for many different causal sets. The operator $`E_{ab}`$ refers to $`𝐀`$ and does not distiguish the different possible underlying causal sets. There is a simple solution to the above apparent embarassment. Instead of promoting the events of the causal set to Hilbert spaces, we may attach the Hilbert spaces to the edges, and the evolution operators to the events. An event in the causal set, then, becomes an evolution operator from the tensor product of the Hilbert spaces on the edges ingoing to that event, to the tensor product of the outgoing ones. Since the set of ingoing and the set of outgoing edges to the same event are a full pair (i.e. a complete pair in which all events in the past acausal set are related to all the events in the future one), the above problem will not arise. Conceptually, this solution agrees with the intuition that events in the causal set represent change, and, therefore, in the quantum case they should be represented as operators. In section 7.2, we discuss the example of quantum causal histories with the Hilbert spaces on the edges for trivalent causal sets. ### 5.4 Propagation by a density matrix requires a complete pair According to (14), given a state $`|\psi H(a)`$, we can obtain the density matrix $`\rho _\psi (b_2)`$ for the acausal set $`b_2`$ in the future of $`a`$. This uses the fact that $`b_2`$ is a subset of an acausal set $`b`$ that forms a complete pair with $`a`$. Consider this configuration: $$\begin{array}{c}\text{}\end{array}$$ The initial acausal set is $`a=\{p_1,p_2\}`$. The acausal set $`w=\{p_3,p_4\}`$ is in the future of $`a`$. Given $`|\psi H(a)`$, can we obtain a density matrix on $`H(w)`$? The answer is no, since there is no acausal set that contains $`w`$ and is maximal in the future of $`a`$. 
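The reduction described in eqs. (12)–(13) is a partial trace and can be sketched directly. The dimensions, the states and the unitary below are assumptions for illustration; the pure-state form of eq. (13) is used.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 2                                        # assumed single-event Hilbert space dimension

def random_unitary(n):
    q, r = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

def partial_trace_first(rho, d1, d2):
    """Trace out the first factor of a (d1*d2)-dimensional density matrix."""
    return np.trace(rho.reshape(d1, d2, d1, d2), axis1=0, axis2=2)

psi_a1 = np.array([1.0, 0.0], dtype=complex)           # chosen state on a_1
psi_a2 = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
psi_a = np.kron(psi_a1, psi_a2)                        # state on H(a) = H(a_1) (x) H(a_2)

E_ab = random_unitary(dim * dim)                       # evolution H(a) -> H(b)
psi_b = E_ab @ psi_a                                   # eq. (12)

rho_b2 = partial_trace_first(np.outer(psi_b, psi_b.conj()), dim, dim)  # eq. (13)
print(np.allclose(np.trace(rho_b2), 1.0))              # a proper density matrix: True
print(np.round(np.linalg.eigvalsh(rho_b2), 3))         # generally mixed eigenvalues
```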
(The problem is the $`p_2p_7`$ relation.) There are, therefore, acausal sets in the future of $`a`$ which cannot be reached by the evolution map $`E`$. ## 6 Directed coarse-graining The main idea in this paper is that the quantum version of some causal history is a collection of Hilbert spaces connected by evolution operators that respect the structure of the poset $`𝐀`$ we started with. For this reason, in section 4, we imposed reflexivity, transitivity and antisymmetry to the operators $`E_{ab}`$. Transitivity is a strong condition on the quantum history. One should keep in mind that it was first imposed on causal sets because it holds for the causal structure of Lorentzian spacetimes. Significantly, it does not just encode properties of the ordering of events but also the fact that a Lorentzian manifold is a point set. In general relativity, an event is a point and this has been imported in the causal set approach. To analyse this a little further, let us introduce a notation that indicates when two events $`p`$ and $`q`$ are related by a shortest causal relation, i.e., no other event occurs after $`p`$ and before $`q`$. This is the covering relation: > $``$ The event $`q`$ covers $`p`$ if $`pq`$ and there is no other event $`r`$ with $`prq`$. We denote this by $`pq`$. The following are worth noting. For a finite causal set, transitivity means that the order relation determines, and is determined by, the covering relation, since $`pq`$ is equivalent to a finite sequence of covering relations $`p=p_1p_2\mathrm{}p_n=q`$. On the other hand, in the continuum (for example the real line $`𝐑`$) there are no pairs $`p,q`$ such that $`pq`$ . Hence, in a continuum spacetime, it is simply not meaningful to consider an ordering that is not transitive. Non-transitive ordering requires distinguishing between the covering relations and the resulting transitive ones. This distinction is not possible in the continuum case. In short, for events that are points, it is sensible to expect that if $`p`$ leads to $`q`$ and $`q`$ to $`r`$, then $`r`$ is in the future of $`p`$. If, however, the events were (for example) spacetime regions of some finite volume, with overlaps, then it is unclear whether transitivity would hold. (See also section 2.4 in .) In the causal histories we consider here, an event is a Hilbert space. It is, therefore, an open question whether it is sensible to impose reflexivity, transitivity and antisymmetry on the ordering of the Hilbert spaces. We choose to first impose them, then find what the implications are, and if they are unphysical, go back and check which of the three conditions should not be maintained in a quantum causal ordering. On the positive side, there is a very interesting advantage to maintaining transitivity. Using (5), we have the benefit of a directed coarse-graining invariance of the quantum history. For example, if we are handed $$\begin{array}{c}\text{}\end{array}$$ and need to go from $`H(p_1)H(p_2)`$ to $`H(p_3)H(p_4)H(p_5)`$, we can reduce it to $$\begin{array}{c}\text{}\end{array}.$$ Clearly, several initial graphs will give the same coarse-grained graph. We return to this in section 8. It is very interesting to note that the coarse-graining implied by transitivity can be used to improve the propagation of density matrices that we discussed in section 5.4. We can coarse-grain the causal set depicted in (5.4) by considering the events $`p_3,p_4,p_5,p_6,p_7`$ to be the acausal set $`\overline{w}`$. 
This is a coarse-graining in the sense that we ignore any causal relations between these events. We then obtain: $$\begin{array}{c}\text{}\end{array}.$$ In this causal set, $`a`$ and $`\overline{w}`$ are a complete pair. It is then possible to use an evolution operator $`E_{a\overline{w}}`$ to take the state $`|\psi H(a)`$ to a state in $`H(\overline{w})`$ and then trace over $`(\overline{w}w)`$ to obtain a density matrix on $`H(w)`$. ## 7 Examples In this section, we provide two examples of the quantum causal histories we defined in section 4. ### 7.1 Discrete Newtonian evolution A discrete Newtonian history is a universe with a preferred time and foliation. It is represented by a poset $`𝐀`$ which is a single complete pair sequence: $$\mathrm{}a_na_{n+1}a_{n+2}\mathrm{}.$$ (15) The corresponding quantum history is then $$\mathrm{}H_nH_{n+1}H_{n+2}\mathrm{}.$$ (16) with all the Hilbert spaces isomorphic to each other. We can denote by $`E`$ the evolution operator from $`H_n`$ to $`H_{n+1}`$. Evolution from $`H_n`$ to $`H_m`$ is then given by $$E_{nm}=E^{mn}.$$ (17) We may compare this universe to the standard one in quantum theory. There, there is a single Hilbert space for the entire universe and evolution is given by the unitary operator $`e^{iH\delta t}`$. In the above, there is a sequence of identical and finite-dimensional Hilbert spaces. Since evolution is by discrete steps, we may set $`\delta t=1`$. Then $`E=e^{iH}`$, for some hermitian operator $`H`$ and $`E_{nm}=e^{i(mn)H}`$. ### 7.2 A planar trivalent graph with Hilbert spaces on the edges This example is a history with multifingered time . It is a planar trivalent graph with finite-dimensional Hilbert spaces living on its edges. Trivalent means either two ingoing and one outgoing edges at a node, or two outgoing and one ingoing. We exclude nodes with no ingoing or no outgoing edges. From a given planar trivalent causal set $`𝒞`$, we can obtain what we will call its edge-set, $`𝒞`$. This is a new graph which has the covering relations of $`𝒞`$ (the edges, not including any transitive ones) as its nodes. The covering relations in $`𝒞`$ are also ordered and these relations are the edges in $`𝒞`$. Figure 3 shows an example of a causal set and its edge-set. We now take the poset $`𝐀`$ of $`𝒞`$ and construct a quantum history from it by assigning Hilbert spaces to the nodes of $`𝒞`$. The very interesting property of $`𝒞`$ is that it can be decomposed into pieces, generating evolution moves that take two events to one, or split one event into two. That is, $`𝒞`$ can be decomposed to these two full pairs: $$\begin{array}{c}\text{}\end{array}.$$ (This is generally the case, not only for trivalent graphs.) Such a decomposition is not possible in some general $`𝒞`$ and therefore, in view of the problem we encountered in 5.3 above, $`𝒞`$ provides an advantage over $`𝒞`$. To be able to employ unitary evolution operators, we need $`\text{dim}H(e_1)=\text{dim}H(e_2)+\text{dim}H(e_3)`$ and $`\text{dim}H(e_4)+\text{dim}H(e_5)=\text{dim}H(e_6)`$. One can check that this can be done consistently for all the events in the causal set. Although $`𝒞`$ is trivalent, the nodes of $`𝒞`$ have valence 2,3, or 4. The list of possible edges in $`𝒞`$ and the corresponding nodes in $`𝒞`$ is: $$\begin{array}{c}\text{}\end{array}$$ There is a substantial interpretational difference between Hilbert spaces on the (covering) causal relations and on the events. 
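Before turning to that interpretational point, note that the passage from a causal set to its edge-set is purely combinatorial and easy to automate. The following sketch is only an illustration on a made-up five-event fragment (not the planar trivalent example of Figure 3): it computes the covering relations as the transitive reduction of the order and then, as one natural reading of the construction above, draws an arrow of the edge-set from each covering relation ingoing to an event to each covering relation outgoing from it.

```python
from itertools import product

# A made-up fragment of a causal set, with one trivalent node of each kind:
# q has two ingoing and one outgoing edge, r has one ingoing and two outgoing.
# The pair (p1, r) is a transitive relation (implied by p1 -> q -> r), not a covering one.
relations = [("p1", "q"), ("p2", "q"), ("q", "r"), ("r", "s1"), ("r", "s2"), ("p1", "r")]

def transitive_closure(edges):
    """The full order relation generated by the given edges."""
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

def covering_relations(edges):
    """Keep p -> q only if no event r satisfies p -> r and r -> q (transitive reduction)."""
    order = transitive_closure(edges)
    events = {x for e in order for x in e}
    return {(p, q) for (p, q) in order
            if not any((p, r) in order and (r, q) in order for r in events)}

cover = covering_relations(relations)

# Edge-set: its nodes are the covering relations of the causal set; each event becomes
# an arrow from every ingoing covering edge to every outgoing covering edge.
edge_set_nodes = sorted(cover)
edge_set_arrows = [((p, q), (qq, r)) for (p, q) in cover for (qq, r) in cover if q == qq]

print(edge_set_nodes)   # the five covering relations; (p1, r) has been removed
print(edge_set_arrows)  # two arrows through q and two through r
```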
A state space on the edge $`pq`$ is most naturally interpeted as “the state space of $`p`$ as seen by $`q`$”. If there are two edges coming out of $`p`$, to $`q`$ and $`r`$, then there are two Hilbert spaces in $`𝒞`$, interpeted as the Hilbert space of $`p`$ as seen by $`q`$ and the Hilbert space of $`p`$ as seen by $`r`$. On the other hand, a Hilbert space placed on the event $`p`$ is absolute, in the sense that is independent of who is observing it. ## 8 Summing over histories The quantum causal histories we discussed in this paper require a fixed poset $`𝐀`$. This is also necessary in the treatment of observers in a classical causal set universe in . Since there is no physical reason that some causal set should be preferred over all others, this restriction is unappealing. In this section we outline a “sum-over-histories” version of the evolution in section 4, which also applies to and can be used in any further work on quantum observers inside the universe. An acausal set is a set of points. We have considered causal sets where all events have at least one ingoing and one outgoing edge. Then, that $`a,b`$ are a complete pair $`ab`$ means that there is a directed graph with the set $`a`$ as its domain and the set $`b`$ as its codomain. Let us denote this graph by $`\gamma (a,b)`$. A graph with $`b`$ as its codomain and one with $`b`$ as its domain may be composed. When $`𝐀`$ is given, there is one known graph $`\gamma (a,b)`$ connecting $`a`$ and $`b`$. If $`𝐀`$ is not fixed, we can sum over all graphs that we can fit between $`a`$ and $`b`$ (that have a finite number of nodes). This leads to a sum-over-histories version of the evolution in section 4. Let us call $`E_{ab}^\gamma `$ the evolution operator (as in equation (3)) when the underlying graph is $`\gamma (a,b)`$. The transition amplitude from a state $`|\psi _aH(a)`$ to a state $`|\psi _bH(b)`$ for this particular graph is $$A_\gamma =\psi _b|E_{ab}^\gamma |\psi _a.$$ (18) If the graph is not fixed, we may sum over all the possible ones: $$A_{ab}=\underset{\gamma }{}\psi _b|E_{ab}^\gamma |\psi _a.$$ (19) Now note that transitivity defines equivalence classes of graphs between given acausal sets. Here is an example of a series of graphs that, by transitivity, have the same causal relations as far as $`a`$ and $`b`$ are concerned. Specifically, $`p_1,p_2,p_3p_4`$ and $`p_3p_4,p_5`$ in all of these graphs: $$\begin{array}{c}\text{}\end{array}$$ All of the above correspond to the same evolution operator in (19). It is intriguing to compare this to the triangulation invariance of topological quantum field theory. Transitivity can be intepreted as a directed triangulation invariance. We will return to this in future work. ## 9 Conclusions We saw that it is possible to promote a causal set to a quantum one by taking the events to be finite-dimensional Hilbert spaces. It is natural to consider tensor products of the Hilbert spaces on events that are spacelike separated. This led us to replace the causal set by the poset of acausal sets $`𝐀`$. We defined quantum histories with local unitary evolution maps between complete pairs in $`𝐀`$.<sup>2</sup><sup>2</sup>2Discrete evolution by unitary operators is also considered by P. Zizzi, in work to appear. We explored several of the features of these causal histories. A property of $`𝐀`$ is that it splits into distinct sequences of complete pairs. Consequently, this places restrictions on which Hilbert spaces can be reached from a given one. 
The conditions of reflexivity, antisymmetry and transitivity that hold for $`𝐀`$ were imposed on the quantum history as conditions on the evolution operators. The most interesting consequences were from transitivity, which gave rise to invariance under directed coarse-graining. This needs further investigation, since the physical assumption behind transitivity is pointlike events. It will be interesting to consider events which are extended objects and find what ordering is suitable in this case. We were able to tensor together the Hilbert spaces for two acausal sets when they are spacelike separated and have no events in common. However, it is natural to consider cases where an acausal set is a subset of a larger one. It appears that, if we need to use acausal sets, we should enrich $`𝐀`$ with the inclusion relation, that is, use a poset with two ordering relations, causal ordering and spacelike inclusion. While this is straightforward in the plain causal set case, it becomes tricky in the quantum histories. For example, we may start from the acausal sets $`a=\{q_1,q_2\}`$ and $`a^{}=\{q_1,q_2,q_3\}`$. In the poset $`𝐀`$ we have $`aa^{}`$. In $`Q𝐀`$, the corresponding state spaces are $`H(a)=H(q_1)H(q_2)`$ and $`H(a^{})=H(q_1)H(q_2)H(q_3)`$. However, there is no natural way in which $`H(a)`$ is a subspace of $`H(a^{})`$. That is, the set inclusion relation is not directly preserved in the quantum theory. In the quantum histories we discussed, we restricted the past and future Hilbert spaces to have the same dimension. This is the simplest case and needs to be generalised. A related issue is the properties of the individual Hilbert spaces. For example, if we employ the causal spin network models of , the evolution operators should respect the $`SU(2)`$ invariance of the state spaces. More generally, a discrete quantum field theory toy model can be constructed by inserting matter field algebras on the events. This will be addressed in future work. Finally, we set up $`Q𝐀`$ having in mind a functor from a poset to Hilbert spaces, taking the elements and arrows of the poset into Hilbert spaces and operators which preserve the properties of the original poset, i.e. reflexivity, antisymmetry and transitivity. It is also possible to use graphs between sets of events (rather than a fixed causal set), as outlined in section 8. In this case, the quantum causal histories become similar to topological quantum field theory except, importantly, they are directed graphs (or triangulations). The coarse-graining invariance relations can be calculated for given fixed valence of the covering relations. On the interpretational side, the main thing to note is that the causal history is a collection of Hilbert spaces which itself is not a Hilbert space. According to quantum theory, we can take tensor products of events that are not causally related. We cannot, for example, take the tensor product of all the Hilbert spaces in the history to be the Hilbert space of the entire history.<sup>3</sup><sup>3</sup>3Taking tensor products of all Hilbert spaces in the history is, however, exactly what is done in the consistent histories approach of Isham . As a result, the causal quantum cosmology is not described in terms of a wavefunction of the universe. Individual events (or observers on the events) can have states and wavefunctions but the entire universe does not. Further discussion of the interpretation of the quantum causal histories will appear in . 
## Acknowledgment

I am grateful to Chris Isham for his comments on transitivity and the pointlike structure of spacetime. This paper has benefited from discussions with Lee Smolin on quantum theory and the interpretation of such histories as quantum cosmologies. I am also grateful to Sameer Gupta and Eli Hawkins for discussions on causal histories. This work was supported by NSF grants PHY/9514240 and PHY/9423950 to the Pennsylvania State University and a gift from the Jesse Phillips Foundation.
no-problem/9904/hep-ph9904204.html
ar5iv
text
# IMPLEMENTATION OF THE RECOVERING CORRECTIONS INTO THE INTERMITTENT DATA ANALYSIS

## Acknowledgements

I would like to thank Prof. A. Białas for reading the manuscript and many suggestions and comments, and Dr. R. Janik for discussions. This work was supported in part by Polish Government grant Project (KBN) 2P03B04214.

Figure captions

Figs. 1a, 1b Estimation of normalized intermittency exponents $`\varphi _{2;norm.}`$ and $`\varphi _{3;norm.}`$ for the $`\alpha `$-model, using the standard method (dotted line), the improved method with the implementation algorithm (thin solid line), and dedicated $`\alpha `$-corrections (dashed line), compared with the theoretical values (solid line), performed for one set of $`\alpha `$-model parameters: $`a_1=0.8`$, $`a_2=1.1`$, $`p_1=1/3`$.

Figs. 1c, 1d Estimation of normalized intermittency exponents $`\varphi _{2;norm.}`$ and $`\varphi _{3;norm.}`$ for the $`(p+\alpha )`$-model, using the standard method (dotted line) and the improved method with the implementation algorithm (thin solid line), compared with the theoretical values (solid line), performed for one set of $`(p+\alpha )`$-model parameters: $`a_{2i}=1-a_{2i-1}`$, $`p_{2i}=p_{2i-1}`$ for $`i=1,\mathrm{},10`$, $`a_1=0.2`$, $`a_3=0.5`$, $`a_5=0.6`$, $`a_7=0.3`$, $`a_9=0.45`$, $`a_{11}=0.25`$, $`a_{13}=0.1`$, $`a_{15}=0.15`$, $`a_{17}=0.87`$, $`a_{19}=0.66`$, $`2p_1=0.05`$, $`2p_3=0.15`$, $`2p_5=0.25`$, $`2p_7=0.40`$, $`2p_9=0.05`$, $`2p_{11}=0.05`$, $`2p_{13}=0.02`$, $`2p_{15}=0.02`$, $`2p_{17}=0.005`$, $`2p_{19}=0.005`$.
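For orientation only, the following sketch simulates the $`\alpha `$-model with the parameters quoted for Figs. 1a, 1b and extracts intermittency exponents from a simple, uncorrected factorial-moment fit; it does not implement the corrections that are the subject of this paper. The mean multiplicity, number of events, estimator, and fit range are arbitrary assumptions, and the normalization convention behind $`\varphi _{q;norm.}`$ is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# alpha-model parameters quoted for Figs. 1a, 1b (note p1*a1 + (1-p1)*a2 = 1).
a1, a2, p1 = 0.8, 1.1, 1.0 / 3.0
nu_max = 10                    # cascade steps: the finest resolution has 2**nu_max bins
n_events = 10000
mean_mult = 50.0               # assumed mean multiplicity per event (illustrative)

def cascade_density(nu):
    """One alpha-model cascade: at each step every bin splits in two and each new bin's
    weight is multiplied by a1 (probability p1) or a2 (probability 1 - p1)."""
    w = np.ones(1)
    for _ in range(nu):
        w = np.repeat(w, 2)
        w = w * np.where(rng.random(w.size) < p1, a1, a2)
    return w

qs = (2, 3)
resolutions = [2 ** nu for nu in range(2, nu_max + 1)]
fq_sums = {q: np.zeros(len(resolutions)) for q in qs}
mean_sums = np.zeros(len(resolutions))

for _ in range(n_events):
    counts = rng.poisson(mean_mult * cascade_density(nu_max) / 2 ** nu_max)
    for i, m in enumerate(resolutions):
        n = counts.reshape(m, -1).sum(axis=1)          # rebin the event into m bins
        mean_sums[i] += n.mean()
        for q in qs:
            fact = np.ones_like(n, dtype=float)
            for k in range(q):
                fact = fact * (n - k)                  # n(n-1)...(n-q+1)
            fq_sums[q][i] += fact.mean()

lnM = np.log(resolutions)
for q in qs:
    Fq = (fq_sums[q] / n_events) / (mean_sums / n_events) ** q   # scaled factorial moments
    slope = np.polyfit(lnM, np.log(Fq), 1)[0]                    # phi_q from F_q ~ M**phi_q
    cascade = np.log2(p1 * a1**q + (1 - p1) * a2**q)             # cascade value when <W> = 1
    print(f"phi_{q}: fitted {slope:.3f} vs cascade value {cascade:.3f}")
```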
no-problem/9904/cond-mat9904039.html
ar5iv
text
# Adsorbate-induced substrate relaxation and the adsorbate–adsorbate interaction \[ ## Abstract We formulate the theory of the perturbation caused by an adsorbate upon the substrate lattice in terms of a local modification of the interatomic potential energy around the adsorption site, which leads to the relaxation of substrate atoms. We apply the approach to CO chemisorption on close-packed metal surfaces, and show that the adsorbate–adsorbate interaction and a variety of other properties can be well described by a simple model. \] Several direct and indirect (through-the-substrate) mechanisms can lead to an effective interaction between adsorbates on metal surfaces. The possible role of adsorbate-induced substrate relaxation was considered some time ago by Lau and Kohn , using an elastic continuum model of the surface. They concluded that the resulting interaction was repulsive between identical adsorbates, varied as $`\rho ^3`$ with separation $`\rho `$, and was inversely proportional to the shear modulus of the substrate. For adsorbates separated by a few atomic spacings, however, the continuum theory may not be valid. More recently, in a series of papers Kevan et al. have determined adsorbate–adsorbate interaction energies for CO at several metal surfaces, using a transfer-matrix analysis of thermal desorption spectra. They have also qualitatively discussed the adsorbate-induced strain as a possible mechanism of the adsorbate–adsorbate interaction at intermediate range, i.e., several substrate atoms apart. The potential energy of the atomic lattice of a solid in the harmonic approximation can be written as $$V=\underset{i,j,\mu ,\nu }{}x_\mu (i)D_{\mu \nu }(i,j)x_\nu (j),$$ (1) where $`x_\mu (i)`$ is the $`\mu `$-th component of the displacement of the $`i`$-th atom from the equilibrium position. Now assume that an impurity is created by replacing one atom with a different species. (In the general discussion we talk about an “impurity”, although we are primarily interested in the chemisorption case, where the adsorbate also introduces additional degrees of freedom. The generalization to the latter case is straightforward.) The potential energy after the replacement can again be expressed in the form (1), but the new equlibrium positions $`x_\mu ^{}(i)`$ are in general different: $$x_\mu ^{}(i)=x_\mu (i)+\mathrm{\Delta }x_\mu (i).$$ (2) The new dynamical matrix $`D_{\mu \nu }^{}(i,j)`$ is also different, and a constant term appears which shifts the energy minimum. Using (2), the potential energy of the system after the impurity has been introduced can be expressed in the coordinates $`x_\mu (i)`$: $$V^{}=\underset{i,j,\mu ,\nu }{}x_\mu (i)D_{\mu \nu }^{}(i,j)x_\nu (j)+\underset{i,\mu }{}F_\mu (i)x_\mu (i)+V_0^{},$$ (3) i.e., the dynamical matrix is modified, and linear (force) and constant (energy shift) terms appear. The foregoing considerations depend only upon the assumed stability of the solid, i.e., the existence of the minimum of the potential energy. For our application to chemisorption, we specifically assume the following properties: (a) the force terms $`F_\mu (i)`$ are nonzero only for a small number of substrate atoms around the adsorption site; (b) similarly, only a few elements of the dynamical matrix $`D_{\mu \nu }(i,j)`$ change, if any; (c) the effect is linear, so that the chemisorption of another molecule (and, consequently, of a third, a fourth, etc.) can be described by the same set of parameters, of course centered around the new adsorption site. 
The condition (c) is rather restrictive and excludes systems which reconstruct at large adsorbate coverage, as well as those where other interactions are important, such as the direct adsorbate–adsorbate repulsion, the electrostatic dipole–dipole interaction, or the “chemical” competition for the same electronic orbitals in the substrate. Estimates, however, show that such interactions are usually weak and short-ranged, while we are interested in medium-range interactions (second nearest neighbor and beyond) of nonionic adsorbates. In this paper we take into account the in-plane relaxation within the first atomic layer of the substrate induced by a chemisorbed species. The relaxation in the perpendicular direction can, of course, be equally strong, but it does not contribute much to the effective adsorbate–adsorbate interaction and to the adsorbate-induced surface stress. Also, we do not consider the coupling to internal adsorbate coordinates, which was discussed in Ref. in an application to the damping of adsorbate vibrations. For a hexagonal layer of atoms, we write the potential energy as $`V`$ $`=`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \underset{i}{}}{\displaystyle \underset{j=1}{\overset{6}{}}}{\displaystyle \frac{1}{2}}K_1[\widehat{𝐫}_{ij}(𝐫_i𝐫_j)]^2+{\displaystyle \frac{1}{2}}{\displaystyle \underset{i}{}}K_2𝐫_i^2,`$ (4) where $`𝐫_𝐢=(𝐱_𝐢,𝐲_𝐢)`$ is the in-plane displacement from the equlibrium position of the $`i`$-th atom. The term $`K_1`$ describes a central atom–atom interaction, and the term $`K_2`$ binds atoms to their equilibrium positions, simulating the interaction to lower atomic layers. Without it, the model would be too “soft” to long-wavelength perturbations. The trade-off is that the lowest phonon frequency becomes finite, i.e., there are no true “acoustic” modes, but this has little influence on the relaxation energy and other static quantities calculated in this work. Now assume that an atom or a molecule chemisorbs on top of the atom $`i=0`$, and that the induced change of the potential energy is $`V^{}`$ $`=`$ $`{\displaystyle \underset{j=1}{\overset{6}{}}}{\displaystyle \frac{1}{2}}\mathrm{\Delta }K_1[\widehat{𝐫}_{0j}(𝐫_0𝐫_j)]^2`$ (6) $`{\displaystyle \underset{j=1}{\overset{6}{}}}{\displaystyle \frac{k_2}{\alpha }}\widehat{𝐫}_{0j}(𝐫_0𝐫_j)+V_0^{},`$ where the first term describes the change of the force constant between the atom $`0`$ and the six surrounding atoms, and the second is a linear force term. In the following we drop the $`\mathrm{\Delta }K_1`$ and $`V_0^{}`$ terms, which are beyond the level of accuracy assumed in this work, although, in general, the $`\mathrm{\Delta }K_1`$ term ought to be included. The definition of the chemisorption-induced surface stress $`\tau `$ is $$\delta W=A\tau _x\delta ϵ_x,$$ (7) where $`A`$ is the surface area, $`\delta ϵ_x`$ the strain, and $`\delta W`$ the difference of the work involved in straining a clean surface and a surface with adsorbates. In our model, we obtain $$\tau _x=\frac{k_2}{a\alpha }2\sqrt{3}\theta ,$$ (8) where $`\theta `$ is the adsorbate coverage. (If $`\mathrm{\Delta }K_1`$ is different from zero, an additional factor of order unity appears in the above expression.) Irrespective of the sign of the force term $`k_2/\alpha `$, there is always an energy gain due to the relaxation of the surrounding atoms, i.e., the minimum of $`V+V^{}`$ is less than zero. 
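To indicate how small this calculation is in practice, here is a minimal numerical sketch (not the computation behind Table I): it assembles the quadratic form of Eq. (4) on a finite patch of the triangular lattice, adds the linear term of Eq. (6) with the $`\mathrm{\Delta }K_1`$ piece dropped, and minimizes by solving a linear system, the energy gain at the minimum u = K⁻¹F being −F·u/2. All numerical values below (force constants, induced force, lattice spacing, patch size) are placeholder assumptions, not the fitted parameters of the paper.

```python
import numpy as np

kB = 1.380649e-23  # J/K, to quote energies in kelvin as in the text

# Placeholder parameters (illustrative only): in-plane force constants K1, K2 in N/m,
# induced force k2/alpha in N, nearest-neighbour spacing a in m.
K1, K2, f0, a = 6.0, 2.0, 1.0e-10, 2.77e-10

# A roughly circular patch of the triangular lattice (larger patches reduce edge effects).
e1, e2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
pts = [i * e1 + j * e2 for i in range(-8, 9) for j in range(-8, 9)]
sites = np.array([p for p in pts if np.linalg.norm(p) < 8.1]) * a
n = len(sites)

def neighbours(i):
    d = np.linalg.norm(sites - sites[i], axis=1)
    return np.where((d > 0.1 * a) & (d < 1.1 * a))[0]

# Quadratic form of Eq. (4): central springs K1 along each bond plus the on-site K2 term.
K = np.zeros((2 * n, 2 * n))
for i in range(n):
    K[2 * i:2 * i + 2, 2 * i:2 * i + 2] += K2 * np.eye(2)
    for j in neighbours(i):
        u = (sites[j] - sites[i]) / np.linalg.norm(sites[j] - sites[i])
        P = K1 * np.outer(u, u)
        K[2 * i:2 * i + 2, 2 * i:2 * i + 2] += P
        K[2 * i:2 * i + 2, 2 * j:2 * j + 2] -= P

def force_vector(center):
    """Linear term of Eq. (6), Delta K1 dropped: radial forces around an on-top site."""
    F = np.zeros(2 * n)
    for j in neighbours(center):
        u = (sites[j] - sites[center]) / np.linalg.norm(sites[j] - sites[center])
        F[2 * j:2 * j + 2] += f0 * u
        F[2 * center:2 * center + 2] -= f0 * u
    return F

def relaxation_energy(adsorption_sites):
    """Minimum of V + V' for adsorbates on the given sites: -F.u/2 with u = K^{-1} F."""
    F = sum(force_vector(c) for c in adsorption_sites)
    return -0.5 * F @ np.linalg.solve(K, F)

i0 = int(np.argmin(np.linalg.norm(sites, axis=1)))                           # central site
i2 = int(np.argmin(np.linalg.norm(sites - sites[i0] - 2 * a * e1, axis=1)))  # two spacings along a row

E1 = relaxation_energy([i0])
W = relaxation_energy([i0, i2]) - 2 * E1
print(f"single-adsorbate relaxation energy: {E1 / kB:8.1f} K")
print(f"effective pair interaction, two spacings along an atom row: {W / kB:+8.1f} K")
```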
If another molecule adsorbs on a nearby site, the forces induced by the two adsorbates act in opposite directions and the relaxation is less complete than with adsorbates far apart, which leads to an effective interaction. We calculate the interaction energies by comparing the minimum of the potential energy $`V+V^{}`$ for a single adsorbed molecule with the minimum for two molecules adsorbed on, in turn, second, third, fourth, and fifth nearest-neighbor sites, see Fig. 1(a). (Throughout the paper we assume that there is a strong repulsion between first nearest neighbor adsorbates caused by “chemical” effects, and do not consider them.) We first consider the chemisorption of CO on a Pt(111) surface. There is a large amount of tensile stress in the first atomic layer of the clean Pt(111) surface . Although the surface does not reconstruct at room temperature, a reconstruction is observed at high temperatures in the presence of saturated Pt vapor . CO adsorbs initially into on-top sites of Pt(111), but the energy difference for the adsorption into bridge sites is obviously small, since some bridge adsorbates are found already at coverages $`\theta `$ above 0.15 . Some authors report (Ref. and references therein) that a regular $`(\sqrt{3}\times \sqrt{3})\mathrm{R30}^{}`$ structure at a coverage $`\theta =1/3`$ is formed, Fig. 1(b), but others claim that the densest structure of exclusively on-top adsorbates exists at $`\theta =0.29`$ , and that further chemisorption occurs into bridge sites. The regular structure at $`\theta =0.5`$ contains an equal number of on-top and bridge adsorbates. We have chosen the values of the parameters $`K_1`$, $`K_2`$, and $`k_2/\alpha `$ which give good agreement with experimental data on adsorbate-induced surface stress and with low-coverage interaction energies , as shown in Table I. We had to choose a rather small value for the force constant $`K_1`$ between atoms in the first surface layer. (The value of $`K_2`$ has little effect on the results. The much larger value of $`K_1`$ used in a similar model in Ref. was an overestimate.) The reduction from bulk values is characteristic of many close-packed noble-metal surfaces , but the reduction we find is larger than suggested in the surface phonon calculation in Ref. . Consequently, the highest resulting vibrational frequency of two-dimensional phonons of the first surface layer is about 5 meV, by about a factor of two smaller than the frequency of surface phonons along the edge of the Brillouin zone of around 10 meV calculated in Ref. . As discussed above, it is possible that part of the softening is localized around the adsorption site only, the term $`\mathrm{\Delta }K_1`$ in Eq. (6). The values in Table I show that the repulsive interaction is strong between CO adsorbed on sites lying along rows of substrate atoms, and weaker for adsorbates separated by hollows, even if they are less far apart. In our opinion, the rather large interaction energy between fourth nearest neighbor adsorbates in Ref. is influenced by the contributions from more distant sites, which were not included in their analysis. Clean nickel (111) surfaces do not reconstruct. Unlike some earlier claims, it is now accepted that at low temperature CO chemisorbs initially into threefold hollow sites . At room temperature, some bridge and on-top sites seem to be occupied even at low coverages . 
At a coverage $`\theta =0.33`$, CO forms a regular $`(\sqrt{3}\times \sqrt{3})\mathrm{R30}^{}`$ structure , but it is not clear whether the molecules adsorb into fcc or hcp positions. At $`\theta =0.5`$, a regular c(4$`\times `$2)–2CO structure is formed, in which an equal number of fcc and hcp sites is occupied , Fig. 2(b). The top layer of Ni atoms shows buckling in that nonequivalent atoms have different vertical relaxation. The CO molecules are slightly tilted from the direction perpendicular to the surface. The initial heat of adsorption at room temparature is 130 kJ/mol . It decreases slowly at first, to around 122 kJ/mol at $`\theta =0.33`$ and 112 kJ/mol at $`\theta =0.5`$. However, the fact that not all adsorption sites are equivalent, and the rather large standard deviation of experimental data make the interpretation of the coverage dependence of the heat of adsorption uncertain. We describe the first layer of Ni atoms by the same potential as for Pt, Eq. (4). The interaction terms are similar to Eq. (6), but the adsorbate is in a threefold hollow site and the sums run over the three surrounding Ni atoms (Fig. 2). The induced surface stress is $$\tau =\frac{k_2}{\alpha a}\theta ,$$ (9) where the nearest-neighbor Ni–Ni distance is $`a=2.49`$ Å. From the experimental data $`\tau =0.55`$ N/m at $`\theta =0.33`$ we have estimated $`k_2/\alpha =1\times 10^{10}`$ N. In a LEED analysis of the c$`(4\times 2)`$–2CO structure which forms at $`\theta =0.5`$, Mapledoram et al. found that the lateral displacement of the Ni atoms next to an adsorbate, a and c in Fig. 2(b), was 0.03 Å. We have reproduced this value by taking $`K_1=6`$ N/m and $`K_2=2`$ N/m. (As before, the value of $`K_2`$ is of lesser importance.) This is a considerable reduction from the bulk values, but not as large as for the Pt(111) surface, in agreement with the fact that the Ni(111) surface seems less prone to reconstruct than Pt(111). The energy gain per adsorbate is 22 K. Using these values of the parameters, the relaxation energy for a single adsorbate is only 42 K, which is 30 times smaller than that we have found for CO/Pt(111). (This reduction can be well estimated treating each surrounding Ni atom as an independent oscillator: The force $`k_2/\alpha `$ is three times smaller than in the CO/Pt(111) case, the Ni–Ni force constant is 50% larger, and there are 3 surrounding atoms instead of 6.) Furthermore, the displacements of Ni atoms around the hollow adsorption site are not along chains of atoms, and do not cause a large displacement of other atoms. The calculated effective interaction energies for second nearest neighbors and beyond are therefore only a few K, too small to be observable. In our opinion, the interaction energy of 100 K between second neighbor adsorbates suggested by Skelton et al. is either a combined result of other mechanisms or an artefact of their procedure. We note that an earlier study reported that there was essentially no interaction already between second neighbor adsorbates. Interaction energies have also been determined for CO chemisorbed on Rh and Cu surfaces. We discuss these systems only qualitatively, since the proposed values are still uncertain, and there is no quantitative data on other adsorbate-induced properties. Wei et al. estimated that $`W_2=100`$ K and $`W_3=150`$ K for the on-top chemisorbed CO on Rh(111). An earlier measurement by Payne et al. reported $`W_2=170`$ K and $`W_3=85`$ K. 
In our model, the relative magnitudes of interaction energies for on-top adsorbates on fcc (111) surfaces are always similar to those found for CO/Pt(111). In particular, we expect large repulsion between third nearest neighbor adsorbates which lie along a chain of substrate atoms. In this respect the values proposed in Ref. seem more probable, although the origin of the attractive $`W_2`$ (if it is real) is not clear. For the on-top CO on Cu(111), the same authors found $`W_2=107`$ K, $`W_3>800`$ K, $`W_4=155`$ K . The value of $`W_3`$ seems too large compared with the other two, but otherwise the results are quite similar to CO/Pt(111). The square lattice of the first layer of the (100) surface of the fcc bulk is not well described by a purely pairwise potential similar to Eq. (4), since in the absence of angular force constants and of coupling to lower layers, only the ad hoc term $`K_2`$ ensures the stability. Nevertheless, the trends in the interaction energies between on-top adsorbates can be deduced by analogy with (111) surfaces. We expect a repulsive interaction between third nearest neighbor adsorbates which lie on the same chain of atoms, and no interaction between second nearest neighbors which lie diagonally on different chains, because the forces on adjacent substrate atoms are orthogonal. In the latter case, even a weak attraction is possible owing to partly collinear displacements induced by the two adsorbates on more distant surface atoms. Indeed, the values $`W_2=0`$ and $`W_3=400`$ K were found for Rh(100) , while the values suggested for Cu(100) were $`W_2=33`$ K and $`W_3=13`$ K . It is interesting that the continuum elastic theory also gives a strong repulsion in the $`110`$ direction and possibly a weak attraction in the $`100`$ direction between adsorbates on (100) surfaces of noble metals. We have shown that the adsorbate-induced substrate relaxation leads to an adsorbate–adsorbate interaction which is quite long-ranged and has a nonmonotonic distance dependence, being particularly large between molecules adsorbed in on-top positions along a chain of surface atoms. The same mechanism also leads to other observable effects, such as the adsorbate-induced surface stress and the relaxation displacement of substrate atoms. Quantitative agreement with experiment can be obtained for CO adsorbed on Pt(111) and some other surfaces using a simple model, assuming that force constants between atoms in the first surface layer are considerably weaker than in the bulk, which is a known property of many close-packed noble metal surfaces. The results emphasize the importance of allowing the full substrate relaxation in the first-principle calculations of chemisorption on metal surfaces. This work was supported by the Ministry of Science and Technology of the Republic of Croatia under contract No. 00980101.
no-problem/9904/math-ph9904019.html
ar5iv
text
# Untitled Document RU-99-16 SPhT 99/037 math-ph/9904019 Matrix Integrals and the Counting of Tangles and Links P. Zinn-Justin Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08854-8019, USA and J.-B. Zuber C.E.A.-Saclay, Service de Physique Théorique, F-91191 Gif sur Yvette Cedex, France Using matrix model techniques for the counting of planar Feynman diagrams, recent results of Sundberg and Thistlethwaite on the counting of alternating tangles and links are reproduced. to appear in the proceedings of the 11th International Conference on Formal Power Series and Algebraic Combinatorics, Barcelona June 1999 4/99 1. Introduction This is a paper of physical mathematics, which means that it addresses a problem of mathematics using tools of (theoretical) physics. The problem of mathematics is a venerable one, more than a hundred years old, namely the counting of (topologically inequivalent) knots. The physical tools are combinatorial methods developed in the framework of field theory and so-called matrix models. For a review of the history and recent developments of the first subject, see . For an introduction for non physicists to matrix integral techniques, see for example \[23\]. In this note, we show that by combining results obtained recently in knot theory and older ones on matrix integrals, and by using graphical decompositions familiar in field theory, one may reproduce and somewhat simplify the counting of alternating tangles and links performed in . In section 2, we recall basic facts and definitions on knots and their planar projections; we also recall why integrals over large matrices are relevant for the counting of planar objects. Specifically, we shall consider the following integral $$𝑑M𝑑M^{}\mathrm{exp}N\mathrm{tr}\left(\alpha MM^{}\frac{g}{2}(MM^{})^2\right).$$ over $`N\times N`$ complex matrices, in the large $`N`$ limit. In that limit, the integral is represented in terms of planar Feynman diagrams, with directed edges and four-valent vertices of the type , which exhibit a close similarity with alternating knot diagrams in planar projection, with crossings represented as . Thus the counting of planar Feynman diagrams (with adequate conditions and weights) must be related to the counting of alternating knots. A substantial part of this paper (section 3) is devoted to eliminating irrelevant or redundant contributions of Feynman diagrams. Once this is achieved, the results of are recovered. In the concluding section, we comment on the possible extensions of these methods. The observation that planar Feynman diagrams generated by matrix models can be associated to knot diagrams was already made in ; the matrix integral proposed there was more complicated,so that no explicit calculation was carried out. 2. Basics 2.1. Knots, links and tangles In this section, we briefly recall some basic concepts of knot theory, referring to the literature for more precise definitions. A knot is a smooth circle embedded in $`\mathrm{IR}^3`$. A link is a collection of intertwined knots (in the following, we shall not consider “unlinks”, i.e. links which can be divided in several non-intertwined pieces). Both kinds of objects are considered up to homeomorphisms of $`\mathrm{IR}^3`$. 
Roughly speaking, a tangle is a knotted structure from which four strings emerge: if this object is contained in a ball $`B`$ with the four endpoints of the strings attached on $`B`$, topological equivalence is up to orientation preserving homeomorphisms of $`B`$ that reduce to the identity on $`B`$. The fundamental problem of knot theory is the classification of topologically inequivalent knots, links and tangles. Fig. 1: (a): a non prime link; (b): an irrelevant (or “nugatory”) crossing; (c): 2 particle-reducible tangles, horizontal or vertical sums of two tangles It is common to represent such objects by their projection in a plane, with under/over-crossings at each double point and with the minimal number of such crossings. To avoid redundancies, we can concentrate on prime links and tangles, whose diagrams cannot be decomposed as a sum of components (Fig. 1) and on reduced diagrams that contain no irrelevant crossing. A diagram is called alternating if one meets alternatively under- and over-crossings as one travels along each loop. Starting with eight (resp six) crossings, there are knot (resp link) diagrams that cannot be drawn in an alternating way. Although alternate links (and tangles) constitute only a subclass (asymptotically subdominant), they are easier to characterize and thus to enumerate or to count. Fig. 2: The flype of a tangle A major result conjectured by Tait and proved by Menasco and Thistlethwaite is that two alternating reduced knot or link diagrams on the sphere represent the same object if and only if they are related by a sequence of moves acting on tangles called “flypes” (see Fig. 2). The subproblem that we shall address here is thus the counting of alternating prime links and tangles. 2.2. Matrix Integrals in the large $`N`$ limit As a prototype of matrix integrals, we consider the integral over $`N\times N`$ hermitian matrices $$Z=𝑑M\mathrm{exp}N\mathrm{tr}\left(\frac{1}{2}M^2\frac{g}{4}M^4\right).$$ In order that the integral makes sense, the sign of $`g`$ should originally be chosen negative. As is well-known, the large $`N`$ limit allows the analytic continuation to positive values of $`g`$. The series expansion of $`Z`$ in powers of $`g`$ may be represented diagrammatically by Feynman diagrams, made of undirected edges or “propagators” (2-point functions of the Gaussian model) with double lines expressing the conservation of indices, $`M_{ij}M_k\mathrm{}_0=\text{}=\frac{1}{N}\delta _i\mathrm{}\delta _{jk}`$, and of four-valent vertices $`\text{}=gN\delta _{qi}\delta _{jk}\delta _\mathrm{}m\delta _{np}`$. In the large $`N`$ limit a counting of powers of $`N`$ shows that the leading contribution to $`\mathrm{log}Z`$ is given by a sum of diagrams that may be drawn on the sphere , called “planar” by abuse of language. More precisely $$\underset{N\mathrm{}}{lim}\frac{1}{N^2}\mathrm{log}Z=\underset{\genfrac{}{}{0pt}{}{\mathrm{planar}\mathrm{diagrams}}{\mathrm{with}n\mathrm{vertices}}}{}\mathrm{weight}g^n$$ with a weight equal to one over the order of the automorphism group of the diagram. Once this has been realised, it is simpler to return to a notation with simple lines $`\frac{}{}`$ and rigid vertices . It is this property of matrix integrals to generate (weighted) sums over planar diagrams that we shall use in the context of knot theory. 3. From planar Feynman diagrams to links and tangles 3.1. 
The matrix integral As mentionned in the Introduction, in the context of knot theory, it seems natural to consider the integral (1.1) over complex (non hermitian) matrices, in order to distinguish between under-crossings and over-crossings. However, this integral is closely related to the simpler integral (2.1) in the large $`N`$ limit. Let us define the partition functions $$\begin{array}{ccc}\hfill Z^{(1)}(\alpha ,g)& =𝑑M𝑑M^{}\mathrm{exp}N\mathrm{tr}\left(\alpha MM^{}\frac{g}{2}(MM^{})^2\right)\hfill & (3.1a)\hfill \\ \hfill Z(\alpha ,g)& =𝑑M\mathrm{exp}N\mathrm{tr}\left(\frac{1}{2}\alpha M^2\frac{g}{4}M^4\right)\hfill & (3.1b)\hfill \end{array}$$ and the corresponding “free energies” $$\begin{array}{ccc}\hfill F^{(1)}(\alpha ,g)& =\underset{N\mathrm{}}{lim}\frac{1}{N^2}\frac{\mathrm{log}Z^{(1)}(\alpha ,g)}{\mathrm{log}Z^{(1)}(\alpha ,0)}\hfill & (3.2a)\hfill \\ \hfill F(\alpha ,g)& =\underset{N\mathrm{}}{lim}\frac{1}{N^2}\frac{\mathrm{log}Z(\alpha ,g)}{\mathrm{log}Z(\alpha ,0)}.\hfill & (3.2b)\hfill \end{array}$$ The constant $`\alpha `$ can be absorbed in a rescaling $`M\alpha ^{\frac{1}{2}}M`$: $$\begin{array}{ccc}\hfill Z(\alpha ,g)& =\alpha ^{\frac{N^2}{2}}Z(1,\frac{g}{\alpha ^2})\hfill & (3.3a)\hfill \\ \hfill F(\alpha ,g)& =F(1,\frac{g}{\alpha ^2})\hfill & (3.3b)\hfill \end{array}$$ and similarly for $`Z^{(1)}`$ and $`F^{(1)}`$. However, it will be useful for our purposes to keep the parameter $`\alpha `$. The Feynman rules of $`(3.1a)`$ and those of $`(3.1b)`$ are quite similar: the former have already been described in Sect. 2.2, while the latter are given by a propagator $`M_{ij}M_k\mathrm{}^{}_0=\text{}=\frac{1}{N\alpha }\delta _i\mathrm{}\delta _{jk}`$ and a four-vertex equal to $`gN`$ times the usual delta functions. The only difference lies in the orientation of the propagator of the complex theory. However, in the large $`N`$ “planar” limit, this only results in a factor of 2 in the corresponding free energies, which accounts for the two possible overall orientations that may be given to the lines of each graph of the hermitian theory to transform it into a graph of the non-hermitian one. Therefore $$F^{(1)}(\alpha ,g)=2F(\alpha ,g)$$ In addition to the partition functions and free energies, one is also interested in the correlation functions $`G_{2n}(\alpha ,g)=\frac{1}{N}\mathrm{tr}M^{2n}`$ and $`G_{2n}^{(1)}(\alpha ,g)=\frac{1}{N}\mathrm{tr}(MM^{})^n`$. The first ones, namely the 2-point and 4-point functions, are simply expressed in terms of $`F`$. $$G_4=4\frac{F}{g}G_4^{(1)}=2\frac{F^{(1)}}{g}$$ so that $`G_4^{(1)}=G_4`$ in the large $`N`$ limit. Furthermore, combining (3.4) with the homogeneity property $`(3.3)`$, one finds $$\begin{array}{ccc}\hfill G_2& =\frac{1}{\alpha }2\frac{}{\alpha }F(\alpha ,g)=\frac{1}{\alpha }(1+gG_4)\hfill & (3.4a)\hfill \\ \hfill G_2^{(1)}& =\frac{1}{\alpha }(1+gG_4^{(1)})\hfill & (3.4b)\hfill \end{array}$$ which in particular proves that in the large $`N`$ limit $`G_2=G_2^{(1)}`$. In that “planar” limit, the function $`F`$ has been computed by a variety of techniques: saddle point approximation , orthogonal polynomials , “loop equations” (see for a review). With the current conventions and normalizations, $$F(\alpha ,g)=\frac{1}{2}\mathrm{log}a^2\frac{1}{24}(a^21)(9a^2)$$ where $`a^2`$ is the solution of $$3\frac{g}{\alpha ^2}a^4a^2+1=0$$ which is equal to $`1`$ for $`g=0`$. 
We have the expansion $$\begin{array}{ccc}\hfill F(1,g)& =\frac{1}{2}\left(g+\frac{9}{4}g^2+9g^3+\frac{189}{4}g^4+\mathrm{}\right)\hfill & \\ & =\underset{p=1}{\overset{\mathrm{}}{}}(3g)^p\frac{(2p1)!}{p!(p+2)!}\hfill & (3.5)\hfill \end{array}$$ The 2- and 4-point functions are thus $$\begin{array}{ccc}\hfill G_2& =G_2^\mathrm{c}=\frac{1}{3\alpha }a^2(4a^2)\hfill & (3.6a)\hfill \\ \hfill G_4& =\frac{1}{\alpha ^2}a^4(3a^2)\hfill & (3.6b)\hfill \\ \hfill G_4^\mathrm{c}& =G_42G_2^2=\frac{1}{9\alpha ^2}a^4(a^21)(2a^25)\hfill & (3.6c)\hfill \end{array}$$ where $`G_4^\mathrm{c}`$ is the connected 4-point function. More generally, $`G_{2n}`$ is of the form $`\alpha ^n`$ times a polynomial in $`a^2`$. The singularity of $`a^2`$ at $`g/\alpha ^2=1/12`$ determines the radius of convergence of $`F`$ and of the $`G_{2n}`$. 3.2. Removal of self-energies If we want to match Feynman diagrams contributing to the free energy $`F^{(1)}`$ with prime knots or links, we have to eliminate a certain number of redundancies. We have first to eliminate the “self-energy insertions” that correspond either to non prime knots or to irrelevant crossings (“nugatory” in the language of Tait). This is simply achieved by choosing $`\alpha `$ as a function of $`g`$ such that $$G_2(\alpha (g),g)=1.$$ The function $`a^2(g):=a^2(\alpha (g),g)`$ is obtained by eliminating $`\alpha `$ between Eqs. (3.5) and (3.7), i.e. $$\begin{array}{ccc}\hfill 3\frac{g}{\alpha ^2}a^4a^2+1& =0\hfill & (3.5)^{}\hfill \\ \hfill \frac{1}{3}a^2(4a^2)& =\alpha \hfill & (3.7)^{}\hfill \end{array}$$ which implies that $`a^2(g)`$ is the solution of $$27g=(a^21)(4a^2)^2$$ equal to 1 when $`g=0`$; $`\alpha (g)`$ is then given by $$\alpha (g)=\frac{1}{3}a^2(g)(4a^2(g)).$$ Fig. 3: (a): decomposition of the two-point function into its one-particle-irreducible part; (b) discarding one-vertex-reducible contributions Let us now consider correlation functions. In field theory, it is common practice to define truncated diagrams, whose external lines carry no propagator, and one-particle-irreducible ones, that remain connected upon cutting of any line. The two-point function $`G_2(\alpha ,g)`$ may be expressed in terms of the “self-energy” function $`\mathrm{\Sigma }`$ $$G_2(\alpha ,g)=\frac{1}{\alpha \mathrm{\Sigma }(\alpha ,g)}$$ which is the sum of (non-trivial) truncated, one-particle-irreducible graphs (Fig. 3). One can further discard the contributions that are one-vertex-reducible by defining $`\mathrm{\Sigma }^{}`$ (see Fig. 3(b)) $$\mathrm{\Sigma }^{}(\alpha ,g)=\mathrm{\Sigma }(\alpha ,g)2G_2(\alpha ,g)$$ If we now remove the self-energy insertions by imposing condition (3.7), we find that Eqs. (3.7) and (3.7) simplify, so that $`\mathrm{\Sigma }(g):=\mathrm{\Sigma }(\alpha (g),g)`$ and $`\mathrm{\Sigma }^{}(g):=\mathrm{\Sigma }^{}(\alpha (g),g)`$ are given by $$\begin{array}{ccc}\hfill \mathrm{\Sigma }(g)& =\alpha (g)1\hfill & (3.7a)\hfill \\ \hfill \mathrm{\Sigma }^{}(g)& =\alpha (g)12g\hfill & (3.7b)\hfill \end{array}$$ i.e. obtained from $`\alpha (g)`$ by removing the first terms in its expansion in powers of $`g`$. The procedure extends to all correlation functions. Finally one obtains the corresponding “free energy” $`F^{(1)}(g)`$ by dividing the term of order $`n`$ of $`\mathrm{\Sigma }^{}(g)`$ by $`2n`$ (= number of times of picking a propagator in a diagram of the free energy to open it to a two-point function, cf Eq. $`(3.4)`$ above). 
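The low-order expansions just described are easy to generate by computer algebra. The following sympy sketch is illustrative only: it recasts the defining relations (the quartic satisfied by $`a^2`$, the expression of $`G_2`$ in terms of $`a^2`$, and the normalization condition $`G_2=1`$) as order-by-order fixed-point iterations and reproduces the series quoted in the text.

```python
import sympy as sp

g = sp.symbols('g')
order = 8

def fixed_point_series(update, start):
    """Expand the power-series solution of x = update(x) by iterating from its g = 0 value."""
    x = sp.Integer(start)
    for _ in range(order):
        x = sp.series(update(x), g, 0, order).removeO()
    return sp.expand(x)

# Saddle point at alpha = 1: the quartic relation gives a^2 = 1 + 3 g a^4,
# and G_2 = a^2 (4 - a^2) / 3.
a2 = fixed_point_series(lambda x: 1 + 3 * g * x**2, 1)
G2 = sp.expand(sp.series(a2 * (4 - a2) / 3, g, 0, order).removeO())

# Removing self-energies: impose G_2 = 1, so a^2(g) solves 27 g = (a^2 - 1)(4 - a^2)^2
# and alpha(g) = a^2(g) (4 - a^2(g)) / 3; then Sigma'(g) = alpha(g) - 1 - 2 g.
a2g = fixed_point_series(lambda x: 1 + 27 * g / (4 - x)**2, 1)
alpha = sp.expand(sp.series(a2g * (4 - a2g) / 3, g, 0, order).removeO())
sigma_prime = sp.expand(alpha - 1 - 2 * g)

print('a^2(1,g)  =', a2)            # 1 + 3*g + 18*g**2 + 135*g**3 + ...
print('G_2(1,g)  =', G2)            # 1 + 2*g + 9*g**2 + 54*g**3 + ...
print('alpha(g)  =', alpha)         # 1 + 2*g + g**2 + 2*g**3 + 6*g**4 + 22*g**5 + ...
print("Sigma'(g) =", sigma_prime)   # g**2 + 2*g**3 + 6*g**4 + 22*g**5 + 91*g**6 + ...
```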
We may also compute the function $`\mathrm{\Gamma }(\alpha ,g)=G_2^\mathrm{c}(\alpha ,g)^4G_4^\mathrm{c}(\alpha (g),g)`$, which counts the truncated (automatically one-particle-irreducible) connected 4-point functions. After removal of the self-energy insertions, $`\mathrm{\Gamma }(g):=\mathrm{\Gamma }(\alpha (g),g)`$ becomes simply (Eqs. $`(3.4a)`$, $`(3.6c)`$ and $`(3.7a)`$) $$\mathrm{\Gamma }(g)=\frac{\mathrm{\Sigma }^{}(g)}{g}=2\frac{d}{dg}F^{(1)}(g)$$ Explicitly, $$\mathrm{\Gamma }(g)=\frac{1}{(4a^2(g))^2}(a^2(g)1)(2a^2(g)5)$$ Perturbatively one finds $$\begin{array}{ccc}\hfill a^2(1,g)& =1+3g+18g^2+135g^3+1134g^4+10206g^5+96228g^6+\mathrm{}\hfill & \\ \hfill G_2(1,g)& =1+2g+9g^2+54g^3+378g^4+2916g^5+24057g^6+\mathrm{}\hfill & \\ \hfill \alpha (g)& =1+2g+g^2+2g^3+6g^4+22g^5+91g^6+\mathrm{}\hfill & (3.8)\hfill \\ \hfill \mathrm{\Gamma }(\alpha (g),g)& =g+2g^2+6g^3+22g^4+91g^5+408g^6+\mathrm{}\hfill & \\ \hfill F^{(1)}(g)& =\frac{g^2}{4}+\frac{g^3}{3}+\frac{3g^4}{4}+\frac{11g^5}{5}+\frac{91g^6}{12}+\mathrm{}\hfill & \end{array}$$ Fig. 4: (a): the first links, with the labelling of ; (b) the corresponding diagrams contributing to $`F^{(1)}`$. For simplicity, the diagrams are not oriented, but the weights are those of the $`(MM^{})^2`$ theory; (c) diagrams up to order 3 contributing to $`\mathrm{\Gamma }`$: the last four are pairwise flype-equivalent. The first terms of $`F^{(1)}`$ match the counting of the first prime links weighted by their symmetry factor (see Fig. 4). $$F^{(1)}(g)=\frac{1}{4}g^2+\frac{1}{3}g^3+(\frac{1}{2}+\frac{1}{4})g^4+(\frac{1}{5}+1+1)g^5+\mathrm{}$$ Starting with order $`6`$, however, there is an overcounting of links due to neglecting the flype equivalence. The overcounting occurs already at order 3 if $`\mathrm{\Gamma }`$ is used to count tangles. Asymptotic behavior of the coefficients $`f_n`$ of $`F^{(1)}(g)`$ The singularity is now given by the closest zero of the equation $`g/\alpha ^2(g)=1/12`$, which gives $`g_{}=\frac{4}{27}`$. Thus $$f_n\mathrm{const}b^nn^{\frac{7}{2}}$$ with $$b=\frac{27}{4}=6.75.$$ 3.3. Two-particle irreducibility Since the flype acts on tangles, we have to examine more closely the generating function $`\mathrm{\Gamma }(g)`$ of connected 4-point functions with no self-energies. We want to regard the corresponding diagrams as resulting from the dressing of more fundamental objects. This follows a pattern familiar in field theory, whose language we shall follow, while indicating in brackets the corresponding terminology of knot theory. We say that a 4-leg diagram (a tangle) is two-particle-irreducible (2PI) (resp two-particle reducible, 2PR) if cutting any two distinct propagators leaves it connected (resp makes it disconnected). A 2PR diagram is thus the “sum” of smaller components (see Fig. 1). A fully two-particle-irreducible diagram is a 4-leg diagram such that any of its 4-leg subdiagrams (including itself) is 2PI. Conversely a fully 2PR diagram (algebraic tangle) is constructed by iterated sums starting from the simple vertex. We shall also make use of skeletons, which are generalized diagrams (or “templates”) in which some or all vertices are replaced by blobs (or “slots”). The concepts of fully 2PI skeleton (or basic polyedral template) and of fully 2PR skeleton (algebraic template) follow naturally, with however the extra condition that only blobs should appear in the former. These blobs may then be substituted by 4-leg diagrams, resulting in a “dressing”. 
As will appear, the general 4-leg diagram (tangle) results either from the dressing of a fully 2PI skeleton by generic 4-leg diagrams (type I tangles) or from the dressing of non trivial fully 2PR skeletons by type I tangles. The action of flypes will be only on the fully 2PR skeletons appearing in this iteration. Because in the $`(MM^{})^2`$ theory with 4-valent vertices, there is no diagram which is two-particle-reducible in both channels, any diagram of $`\mathrm{\Gamma }(g)`$ must be i) 2-particle-irreducible in the vertical channel (V-2PI) but possibly 2-particle-reducible in the horizontal one. We denote $`V(g)`$ the generating function of those diagrams. ii) or 2-particle-irreducible in the horizontal channel (H-2PI) but possibly 2-particle-reducible in the vertical one. We denote $`H(g)`$ the generating function of those diagrams. Obviously there is some overlap between these two classes, corresponding to diagrams that are 2-particle-irreducible in both channels (2PI), including the simple vertex. Let $`D(g)`$ denote their generating function. We thus have $$\mathrm{\Gamma }=H+VD.$$ Now since $`V`$ encompasses diagrams that are 2PI in both channels, plus diagrams that are once 2PR in the horizontal channel (i.e. made of two H-2PI blobs joined by two propagators), plus diagrams twice H-2PR, etc, we have (see Fig. 5) $$\begin{array}{ccc}\hfill V& =D+HH+HHH+\mathrm{}\hfill & \\ & =D+\frac{H^2}{1H}.\hfill & (3.9)\hfill \end{array}$$ Fig. 5: Graphical representation of Eqs. (3.9) and (3.9). Since for obvious symmetry reasons, $`H=V`$, we have the pair of equations $$\begin{array}{ccc}\hfill \mathrm{\Gamma }& =2HD\hfill & \\ \hfill H& =D+\frac{H^2}{1H}.\hfill & (3.10)\hfill \end{array}$$ Eliminating $`H`$ yields $$D=\mathrm{\Gamma }\frac{1\mathrm{\Gamma }}{1+\mathrm{\Gamma }}.$$ Eq. (3.11) can be inverted to allow to reconstruct the function $`\mathrm{\Gamma }(g)`$ out of the 2PI function $`D(g)`$. For later purposes, we want to distinguish between the single vertex diagram and non-trivial diagrams: $$D(g)=g+\zeta (g)$$ In terms of these new variables, one has: $$\mathrm{\Gamma }(g)=\mathrm{\Gamma }\{g,\zeta (g)\}$$ (note the braces are used to distinguish the different variables) where $$\mathrm{\Gamma }\{g,\zeta \}=\frac{1}{2}\left[1g\zeta \sqrt{(1g\zeta )^24(g+\zeta )}\right].$$ is the generating function $$\mathrm{\Gamma }\{g,\zeta \}=\underset{m,n}{}\gamma _{m,n}g^m\zeta ^n$$ of the number of fully 2PR skeleton diagrams with $`m`$ vertices and $`n`$ blobs (algebraic templates, including the trivial one made of a single blob and no vertex). Similarly, one could define $`H\{g,\zeta \}=V\{g,\zeta \}`$, and of course $`D\{g,\zeta \}=g+\zeta `$. Note that the function $`\mathrm{\Gamma }\{g,\zeta \}`$ does not depend on the precise form of $`\mathrm{\Gamma }(g)`$, since we have only used the relation (3.11) which was derived from general diagrammatic considerations. Inversely, we shall need the fully 2PI skeletons which are obtained from $`\zeta (g)`$ by replacing subgraphs that have four legs with blobs. 
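As a cross-check (again purely illustrative), the decomposition just derived can be run through computer algebra: starting from the same saddle-point input as in section 3.2, one computes D = Γ(1−Γ)/(1+Γ), inverts Γ(g) order by order, and obtains the generating function of fully 2PI skeletons as a series in Γ, anticipating the expansions displayed in the next paragraph.

```python
import sympy as sp

g, G = sp.symbols('g Gamma')
N = 11   # work with series through g**10 / Gamma**10

# Regenerate Gamma(g) = Sigma'(g)/g from the self-energy-free saddle point of section 3.2.
x = sp.Integer(1)
for _ in range(N + 1):
    x = sp.series(1 + 27 * g / (4 - x)**2, g, 0, N + 1).removeO()
alpha = sp.series(x * (4 - x) / 3, g, 0, N + 1).removeO()
Gamma = sp.expand(sp.cancel((alpha - 1 - 2 * g) / g))

# Fully 2PI four-point function: D = Gamma (1 - Gamma) / (1 + Gamma).
D = sp.expand(sp.series(Gamma * (1 - Gamma) / (1 + Gamma), g, 0, N).removeO())
print('D(g)        =', D)            # g + g**5 + 10*g**6 + 74*g**7 + 492*g**8 + ...

# Invert Gamma(g) order by order (g = Gamma - (Gamma(g) - g), treated as a fixed point),
# then count the fully 2PI skeletons: zeta[Gamma] = Gamma(1-Gamma)/(1+Gamma) - g[Gamma].
g_of_G = G
for _ in range(N):
    g_of_G = sp.expand(sp.series(sp.expand(G - (Gamma - g).subs(g, g_of_G)), G, 0, N).removeO())
zeta = sp.expand(sp.series(G * (1 - G) / (1 + G) - g_of_G, G, 0, N).removeO())
print('g[Gamma]    =', g_of_G)       # Gamma - 2*Gamma**2 + 2*Gamma**3 - 2*Gamma**4 + ...
print('zeta[Gamma] =', zeta)         # Gamma**5 + 4*Gamma**7 + 6*Gamma**8 + 24*Gamma**9 + ...
```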
Defining the inverse function $`g[\mathrm{\Gamma }]`$ of $`\mathrm{\Gamma }(g)`$, the generating function of these skeleton diagrams is simply $`\zeta [\mathrm{\Gamma }]:=\zeta (g[\mathrm{\Gamma }])`$, or more explicitly $$\zeta [\mathrm{\Gamma }]=\mathrm{\Gamma }\frac{1\mathrm{\Gamma }}{1+\mathrm{\Gamma }}g[\mathrm{\Gamma }].$$ The function $`g[\mathrm{\Gamma }]`$ satisfies by definition $`\mathrm{\Gamma }(g[\mathrm{\Gamma }])=\mathrm{\Gamma }`$, and is found by eliminating $`a^2`$ between Eqs. $`(11)`$ and $`(16)`$. Setting $`\eta =1a^2`$, we recover the system of , : $$\begin{array}{ccc}\hfill 27g& =\eta (3+\eta )^2\hfill & (3.11)\hfill \\ \hfill \mathrm{\Gamma }& =\frac{1}{(3+\eta )^2}\eta (3+2\eta )\hfill & \end{array}$$ which leads to $$g[\mathrm{\Gamma }]=\frac{1}{2}\frac{1}{(\mathrm{\Gamma }+2)^3}\left[1+10\mathrm{\Gamma }2\mathrm{\Gamma }^2(14\mathrm{\Gamma })^{\frac{3}{2}}\right]$$ and finally $$\zeta [\mathrm{\Gamma }]=\frac{2}{1+\mathrm{\Gamma }}+2\mathrm{\Gamma }g(\mathrm{\Gamma })$$ is the desired generating function of fully 2PI skeletons (in the notations of , this is $`q(g)`$). The property mentionned above that $`\mathrm{\Gamma }`$ is obtained by dressing is expressed by the identity $$\mathrm{\Gamma }(g)=\zeta [\mathrm{\Gamma }(g)]+\underset{\genfrac{}{}{0pt}{}{m,n}{(m,n)(0,1)}}{}\gamma _{m,n}g^m\zeta ^n[\mathrm{\Gamma }(g)]=\mathrm{\Gamma }\{g,\zeta (g)\}.$$ Perturbatively, we find $$\begin{array}{ccc}\hfill \mathrm{\Gamma }(g)& =g+2g^2+6g^3+22g^4+91g^5+\mathrm{}\hfill & \\ \hfill D(g)& =g+g^5+10g^6+74g^7+492g^8+\mathrm{}\hfill & \\ \hfill \mathrm{\Gamma }\{g,\zeta \}& =g+\zeta +2(g+\zeta )^2+6(g+\zeta )^3+22(g+\zeta )^4+90(g+\zeta )^5+\mathrm{}\hfill & (3.12)\hfill \\ \hfill g[\mathrm{\Gamma }]& =\mathrm{\Gamma }2\mathrm{\Gamma }^2+2\mathrm{\Gamma }^32\mathrm{\Gamma }^4+\mathrm{\Gamma }^52\mathrm{\Gamma }^62\mathrm{\Gamma }^78\mathrm{\Gamma }^822\mathrm{\Gamma }^968\mathrm{\Gamma }^{10}+\mathrm{}\hfill & \\ \hfill \zeta [\mathrm{\Gamma }]& =\mathrm{\Gamma }^5+4\mathrm{\Gamma }^7+6\mathrm{\Gamma }^8+24\mathrm{\Gamma }^9+66\mathrm{\Gamma }^{10}+\mathrm{}.\hfill & \end{array}$$ Fig. 6: The first contributions to $`\zeta [\mathrm{\Gamma }]`$. All diagrams can be obtained from the ones depicted by rotations of 90 degrees. The first contributions to $`\zeta [\mathrm{\Gamma }]`$ are depicted on Fig. 6. The closest singularity of $`g[\mathrm{\Gamma }]`$ and of $`\zeta [\mathrm{\Gamma }]`$ is $`\mathrm{\Gamma }_{}=1/4`$ which corresponds to $`g[\mathrm{\Gamma }_{}]=g_{}=4/27`$ and $`\zeta [\mathrm{\Gamma }_{}]=1/540`$. 3.4. Quotienting by the flype The last step is to take into account the flype equivalence; it is borrowed from the discussion of Sundberg and Thistlethwaite and reproduced here for completeness. The fully 2PR skeletons contained in Eq. (3.11) now have to be replaced with skeletons in which the flype equivalence has been quotiented. Then, they have to be dressed by 4-point 2PI functions using Eq. (3.12). Let $`\stackrel{~}{\mathrm{\Gamma }}\{g,\zeta \}`$ be the generating function of the number $`\stackrel{~}{\gamma }_{mn}`$ of flype-equivalence classes of fully 2PR skeleton diagrams (algebraic templates) with $`m`$ vertices and $`n`$ blobs $$\stackrel{~}{\mathrm{\Gamma }}\{g,\zeta \}=\underset{m,n}{}\stackrel{~}{\gamma }_{m,n}g^m\zeta ^n.$$ Let $`\stackrel{~}{H}\{g,\zeta \}`$ (resp. $`\stackrel{~}{V}\{g,\zeta \}`$) denote the generating function of flype-equivalence classes of skeletons which are 2PI in the horizontal (resp. 
vertical) channel, including the single blob and the single vertex. In a way similar to the decomposition performed above for $`\mathrm{\Gamma }`$, cf Eq. (3.9), we write $$\stackrel{~}{\mathrm{\Gamma }}=\stackrel{~}{H}+\stackrel{~}{V}D$$ where $`D\{g,\zeta \}=g+\zeta `$. The equation analogous to Eq. (3.9) is $$\begin{array}{ccc}\hfill \stackrel{~}{V}& =D+g\stackrel{~}{\mathrm{\Gamma }}+(\stackrel{~}{H}g)^2+(\stackrel{~}{H}g)^3+\mathrm{}\hfill & \\ & =D+g\stackrel{~}{\mathrm{\Gamma }}+\frac{(\stackrel{~}{H}g)^2}{1(\stackrel{~}{H}g)}\hfill & (3.13)\hfill \end{array}$$ where by flyping we can remove the simple vertices inside the $`\stackrel{~}{H}`$ and put them as a single contribution $`g\stackrel{~}{\mathrm{\Gamma }}`$. As before, $`\stackrel{~}{H}=\stackrel{~}{V}`$ for symmetry reasons, and after eliminating it one gets an algebraic equation for $`\stackrel{~}{\mathrm{\Gamma }}\{g,\zeta \}`$ $$\stackrel{~}{\mathrm{\Gamma }}^2(1+g\zeta )\stackrel{~}{\mathrm{\Gamma }}+\zeta +g\frac{1+g}{1g}=0$$ with solution $$\stackrel{~}{\mathrm{\Gamma }}\{g,\zeta \}=\frac{1}{2}\left[(1+g\zeta )\sqrt{(1g+\zeta )^28\zeta 8\frac{g^2}{1g}}\right],$$ which should be compared with Eq. (3.11). The last step to get the generating function $$\stackrel{~}{\mathrm{\Gamma }}(g):=\stackrel{~}{\mathrm{\Gamma }}\{g,\stackrel{~}{\zeta }(g)\}$$ of flype equivalence classes of (alternating, prime) tangles is to define $`\stackrel{~}{\zeta }(g)`$ via the relation (3.12), i.e. $$\stackrel{~}{\zeta }(g)=\zeta [\stackrel{~}{\mathrm{\Gamma }}(g)]$$ (The reason we can use Eq. (3.12) without any modification is that fully 2PI skeleton diagrams are not affected by flypes.) In other words, $`\stackrel{~}{\mathrm{\Gamma }}(g)`$ is the solution to the implicit equation $$\stackrel{~}{\mathrm{\Gamma }}(g)=\stackrel{~}{\mathrm{\Gamma }}\{g,\zeta [\stackrel{~}{\mathrm{\Gamma }}(g)]\},$$ where $`\zeta [\mathrm{\Gamma }]`$ and $`\stackrel{~}{\mathrm{\Gamma }}\{g,\zeta \}`$ are provided by Eqs. (3.12), (3.12) and (3.14), and which vanishes at $`g=0`$. Perturbatively $$\stackrel{~}{\mathrm{\Gamma }}(g)=g+2g^2+4g^3+10g^4+29g^5+98g^6+372g^7+\mathrm{}.$$ Asymptotic behavior of $`\stackrel{~}{\mathrm{\Gamma }}(g)`$: we know the closest singularity of $`\zeta [\mathrm{\Gamma }]`$; we therefore set $`\stackrel{~}{\mathrm{\Gamma }}=\mathrm{\Gamma }_{}=1/4`$ and $`\stackrel{~}{\zeta }=\zeta _{}=1/540`$ in Eq. (3.14) and solve for $`g`$: $$\stackrel{~}{g}_{}=\frac{101+\sqrt{21001}}{270}.$$ This provides the asymptotic behavior of the coefficients $`\stackrel{~}{f}_n`$ of the “free energy” $`\stackrel{~}{F}^{(1)}(g)`$ defined by $`\stackrel{~}{\mathrm{\Gamma }}(g)=2\frac{d}{dg}\stackrel{~}{F}^{(1)}(g)`$: $$\begin{array}{ccc}\hfill \stackrel{~}{f}_n& \mathrm{const}\stackrel{~}{b}^nn^{\frac{7}{2}}\hfill & (3.14a)\hfill \\ \hfill \stackrel{~}{b}& =(101+\sqrt{21001})/406.147930.\hfill & (3.14b)\hfill \end{array}$$ Note that while the first factor $`\stackrel{~}{b}^n`$ in the expansion (“bulk free energy”) is non-universal, the second factor $`n^{7/2}`$ is universal (critical behavior of pure gravity ; the exponent $`\alpha =\frac{7}{2}`$ gives the “string susceptibility” exponent $`\gamma `$ of non-critical string theory : $`\gamma =\alpha +3=\frac{1}{2}`$) and in particular is identical in Eqs. (3.9) and $`(3.14a)`$. Note also that this free energy counts links with a weight that takes into account the equivalence under flypes and which is less easy to characterize. 4. 
Concluding remarks

In this paper we have reproduced the results of using prior knowledge of graph counting derived from matrix models, while Sundberg and Thistlethwaite were using results of Tutte. Admittedly the progress is modest. We hope, however, that our method may give some clues on problems that are still open, such as the counting of knots rather than links. To control the number of connected components of a link is in principle easy in our approach. We should consider an integral over $`n`$ matrices $`M_\alpha `$, $`\alpha =1,\mathrm{},n`$ interacting through a term $`\sum _{\alpha ,\beta }(M_\alpha M_\beta ^{})^2`$ and look at the dependence of $`F^{(1)}`$ on $`n`$. The term linear in $`n`$ receives contributions only from one-component diagrams, hence after a treatment similar to that of Sect. 3, it should give a generating function of the number of knots, weighted as before by their symmetry factor. Unfortunately, the computation of these matrix integrals and their subsequent treatment (removal of self-energies and flype equivalences) is for generic $`n`$ beyond our capabilities (see however for a first step in this direction). Another problem on which matrix technology might prove useful would be the counting of non-alternating diagrams. But there the main problem is knot-theoretic rather than combinatorial: how does one get rid of multiple counting associated with Reidemeister moves?

Acknowledgements

It is a pleasure to acknowledge an informative exchange with V. Jones at the beginning of this work, and interesting discussions with M. Bauer. Part of this work was performed when the authors were participating in the programme on Random Matrices and their Applications at the MSRI, Berkeley. They want to thank Prof. D. Eisenbud for the hospitality of the Institute, and the organizers of the semester, P. Bleher and A. Its, for their invitation that made this collaboration possible. P.Z.-J. is supported in part by the DOE grant DE-FG02-96ER40559.

References

1. J. Hoste, M. Thistlethwaite and J. Weeks, The First 1,701,936 Knots, The Mathematical Intelligencer 20 (1998) 33–48.
2. D. Bessis, C. Itzykson and J.-B. Zuber, Quantum Field Theory Techniques in Graphical Enumeration, Adv. Appl. Math. 1 (1980) 109–157.
3. A. Zvonkin, Matrix Integrals and Map Enumeration: An Accessible Introduction, Math. Comp. Modelling 26 (1997) 281–304.
4. C. Sundberg and M. Thistlethwaite, The Rate of Growth of the Number of Prime Alternating Links and Tangles, Pac. J. Math. 182 (1998) 329–358.
5. I.Ya. Arefeva and I.V. Volovich, Knots and Matrix Models, Infinite Dim. Anal. Quantum Prob. 1 (1998) 1 (hep-th/9706146).
6. W.W. Menasco and M.B. Thistlethwaite, The Tait Flyping Conjecture, Bull. Amer. Math. Soc. 25 (1991) 403–412; The Classification of Alternating Links, Ann. Math. 138 (1993) 113–171.
7. G. 't Hooft, A Planar Diagram Theory for Strong Interactions, Nucl. Phys. B 72 (1974) 461–473.
8. E. Brézin, C. Itzykson, G. Parisi and J.-B. Zuber, Planar Diagrams, Commun. Math. Phys. 59 (1978) 35–51.
9. P. Di Francesco, P. Ginsparg and J. Zinn-Justin, 2D Gravity and Random Matrices, Phys. Rep. 254 (1995) 1–133.
10. D. Rolfsen, Knots and Links, Publish or Perish, Berkeley 1976.
11. W.T. Tutte, A Census of Planar Maps, Can. J. Math. 15 (1963) 249–271.
12. V.A. Kazakov and A.A. Migdal, Recent progress in the theory of non-critical strings, Nucl. Phys. B 311 (1988) 171–190.
13. V.A. Kazakov and P. Zinn-Justin, Two-Matrix Model with ABAB Interaction, Nucl. Phys. B 546 (1999) 647 (hep-th/9808043).
# 𝒩=1 theories, T-duality, and AdS/CFT correspondence ## I Introduction Brane constructions in string theory provide powerful tools for analyzing field theories in diverse dimensions and with varying amounts of supersymmetry . For a review and references see . More recently the Maldacena conjecture added a new relation between the large $`N`$ limit of conformal field theories on branes and the near horizon geometry of the corresponding black brane solutions. The original conjecture was stated for $`𝒩=4`$ SYM realized on $`N`$ 3-branes, but subsequently more general examples were discovered. One class of such examples are orbifolds of the $`𝒩=4`$ configuration and another includes theories on 3-branes in nontrivial F-theory backgrounds . All of these constructions give rise to conformal theories with varying amounts of supersymmetry. A third class of theories arises on 3-branes at a conifold singularity . These $`𝒩=1`$ theories are not conformal at all scales, but flow to a line of conformal fixed points in the infrared. For all these theories the correspondence between the large $`N`$ field theory and supergravity was studied in some detail. For branes on a conifold it turned out to be useful to have a type IIA description which is related to the IIB configuration via T-duality . In this paper we study a superconformal $`𝒩=1`$ $`Sp(N)\times Sp(N)`$ gauge theory with matter in the fundamental, bifundamental, and antisymmetric representations. We also discuss a specific deformation which preserves $`𝒩=1`$ SUSY but breaks conformal invariance. The resulting theory has a running gauge coupling and flows to a line of superconformal fixed points in the infrared. For both of these theories we give a IIA brane construction as well as a IIB orientifold construction. The latter description allows us to obtain the supergravity solution that is dual to the large $`N`$ limit of the conformal field theory. The type IIA description, on the other hand, provides a simple way to determine the gauge group, the matter content, and the superpotential of the theories in question. In most $`𝒩=1`$ theories discussed in the AdS/CFT literature (see e.g. ) the R-current which is the superpartner of the stress-energy tensor can be fixed uniquely by field theory considerations. For the theories we discuss here this is not the case. There is a one parameter family of candidate R-currents, both in the theory with vanishing beta function, and in its deformation which flows to a line of fixed points. Since the R-charges of the fields are not uniquely determined, there is no field theory prediction for the dimensions of the chiral primary operators. On the other hand, once we have a supergravity dual of the large $`N`$ field theory, we know which gauge boson on AdS is the superpartner of the graviton. If we are able to match field theory operators with supergravity states, we can determine the R-charges of all fields and therefore the dimensions of all chiral primary operators. Although there is no firm field-theoretical prediction for the dimensions of fields in the infrared, for the theory with vanishing beta function the most natural assumption is that all fields have canonical dimensions, i.e., that the theory is finite. This will be born out by the supergravity analysis. In the other case, the theory with a running coupling constant, the correct charge assignment in the infrared is harder to guess. 
Unfortunately the supergravity analysis in this case is on a considerably less solid footing and depends on circumstantial evidence. Nonetheless our analysis suggests a definite R-charge assignment. It would be interesting to find a field theory explanation for it. The type IIA construction involves D4-branes compactified on a circle as well as NS5-branes, D6-branes, and O6-planes. The gauge theory lives on the D4-branes. Our construction is very similar to the brane configurations that give rise to elliptic $`𝒩=2`$ models . One advantage of the IIA description is that the moduli space of the gauge theory is realized geometrically. The flat directions correspond to motions of the 4-branes. Similarly, relevant perturbations of the field theory, such as masses for the matter fields, are also realized geometrically as motions of the 6-branes. This allows us to identify a 6-brane configuration that gives rise to a superconformal $`𝒩=1`$ theory on the 4-branes with an exactly marginal parameter. We can also identify relevant perturbations of the superconformal 4-brane theory that lead to theories with running coupling constants. There is one particular perturbation that gives rise to a theory that flows to a line of conformal fixed points in the infrared. The moduli space of the perturbed theory has a Coulomb branch. A generic $`𝒩=1`$ theory with a Coulomb branch has a low energy effective gauge coupling that varies over the moduli space. The theory we are considering in this paper has the special feature that the low-energy effective gauge coupling does not depend on the moduli. This will be relevant when we discuss the supergravity description of these theories. In order to construct the supergravity duals we T-dualize the IIA configuration along the compact direction. This operation turns the D6-branes and the O6-planes into D7-branes and O7-planes. The D4-branes turn into D3-branes probing this background. Similar probe theories were studied in , and their relation to supergravity is described in . Our IIA configuration turns out to be T-dual to 3-branes probing a local piece of an F-theory compactification which is related to the Gimon-Polchinski model . The simplest such configuration, consisting of two intersecting O7-planes with four coincident 7-branes on top of each, corresponds to the IIA construction of the superconformal 4-brane theory. In the type IIB construction the Ramond-Ramond (RR) charges of the 7-branes are cancelled locally by the charges of the orientifold planes, so the string coupling is constant. Since the type IIB description is a perturbative orientifold, we can find the supergravity dual of the large $`N`$ limit of the field theory along the lines of . Matching the spectrum of primary operators with the KK modes allows us to determine the $`U(1)_R`$ charges of all fields in the conformal theory unambiguously. The matching of non-chiral primaries exhibits a new interesting feature: we find a short supergravity multiplet whose field theory counterpart becomes short only when $`N\mathrm{}`$. We interpret this as the evidence that at higher orders in $`1/N`$ supersymmetry mixes one-particle and two-particle supergravity states. It should also be possible to find a supergravity description of the infrared limit of the deformed theory. Although this theory is not conformal, it has a constant low-energy effective coupling along the Coulomb branch, so the supergravity dual will have a constant dilaton. 
To find this dual we need to study the deformations of the backgrounds in IIA and IIB and find an explicit map between them. As mentioned before, this is straightforward on the IIA side, since the deformations correspond to motions of the 6-branes. On the IIB side the situation is more involved. We can analyze the deformations on the IIB side by studying the theory on the 7-branes. The eight-dimensional theory on the 7-branes has six-dimensional matter localized at the intersection of orthogonal 7-branes. We analyze the moduli space of this impurity theory following , and find an explicit map between the type IIB and type IIA deformations. Among supersymmetric type IIB deformations there is one that maps to a new IIA brane configuration which involves curving D6-branes in the background of an NS5-brane. The map between deformations also allows us to identify the IIB configuration that gives rise to the non-conformal probe theory with moduli-independent effective coupling. We do not have a complete supergravity description of this theory, but a partial description is possible. It supplies enough information to determine the dimension of all chiral operators in the infrared if we use field theory considerations as well. In section II we discuss the type IIA construction of the probe theory and list some field theory results that we need in subsequent sections. Section III contains the T-duality, the analysis of the 7-brane impurity theory, and the map between IIA and IIB deformations. We also briefly discuss the exotic IIA deformation that appears as the counterpart of an ordinary deformation in IIB. In section IV we analyze the large $`N`$ limit of our field theories and their supergravity duals. We discuss the matching of operators with Kaluza-Klein modes in the conformal case and present a partial analysis in the non-conformal case. ## II The IIA construction of the field theory ### A The IIA brane configuration A configuration consisting of D4-branes extending in 01236, D6-branes and O6-planes extending in 0123789, and NS5-branes extending in 012389 preserves four supercharges. We obtain an $`𝒩=1`$ supersymmetric field theory in four dimensions after compactifying $`X_6`$ on a circle with circumference $`2\pi R_6`$. Specifically we consider configurations with $`N`$ D4-branes wrapping the compact $`X_6`$ direction. We put two O6<sup>-</sup>-planes at $`X_6=0,\pi R_6`$ and an NS5-brane and its image at $`X_6=R_6\pi /2,3R_6\pi /2`$. In order to cancel the total RR charge, we place four physical D6-branes on the circle. An example of such a configuration is shown in Fig. 1. These brane configurations are very similar to the configurations that give rise to finite $`𝒩=2`$ theories in four dimensions . In fact, the configuration we study here can be obtained from one of the $`𝒩=2`$ configurations in by rotating the NS5-branes from the 45 directions into the 89 directions. This breaks half of the supersymmetries, giving an $`𝒩=1`$ theory in $`d=4`$. Using standard techniques , we can determine the matter content and the superpotential of the field theory on the 4-branes. Unlike the $`𝒩=2`$ case, the $`X_6`$ position of the D6-branes will play an important role in our analysis. We need to distinguish two cases that are of interest for the analysis in this paper. Either all 6-branes intersect the NS5-brane, or the four 6-branes are split into two groups of two to the left and right of the NS5-brane (as shown in Fig. 1). These two choices give rise to physically inequivalent theories. 
The former configuration yields a line of fixed points (parametrized by the dilaton expectation value) that passes through zero coupling, while the latter corresponds to a non-conformal gauge theory which flows to a line of strongly coupled fixed points. ### B The conformal case: a field theory analysis The theory on the 4-branes turns out to be an $`Sp(2N)_1\times Sp(2N)_2`$ gauge theory with matter fields $`A_i,i=1,2`$ in the antisymmetric representation of each of the gauge groups, two bifundamentals $`𝒬,\stackrel{~}{𝒬}`$, and fundamentals from the 4-6 strings. The brane configuration, and consequently the field theory, admit a symmetry which exchanges the two $`Sp`$ factors. To determine the number and flavor representations of the fundamentals we need to understand the classical gauge theory on the 6-branes. Note that the worldvolume of the NS5-brane lies within the worldvolume of the D6-branes. It has been argued that the 6-branes can break on the NS5-brane. The gauge group on the four 6-branes turns out to be $`U(4)_u\times U(4)_d`$, where the two $`U(4)`$ factors act on the upper and lower halves of the 6-branes respectively. One-loop effects break the $`U(4)_u\times U(4)_d`$ symmetry to $`SU(4)_u\times SU(4)_d`$. The matter content of the 6-brane theory includes a bifundamental hypermultiplet from strings connecting upper and lower halves of the 6-branes. We will have more to say about the 6-brane theory when we discuss the deformations of this background. For our present purposes we only need to know that the gauge group of the 6-brane theory is the flavor group of the probe theory. The matter content and the superpotential for a 4-brane probe in this background have been worked out previously. The fundamentals transform as $`q=(\mathbf{2N},\mathbf{1},\mathbf{4},\mathbf{1})`$, $`\stackrel{~}{q}=(\mathbf{2N},\mathbf{1},\mathbf{1},\mathbf{4})`$, $`p=(\mathbf{1},\mathbf{2N},\overline{\mathbf{4}},\mathbf{1})`$, and $`\stackrel{~}{p}=(\mathbf{1},\mathbf{2N},\mathbf{1},\overline{\mathbf{4}})`$ under $`Sp(2N)_1\times Sp(2N)_2\times SU(4)_u\times SU(4)_d`$. The superpotential reads $$W=h_1\stackrel{~}{𝒬}A_1J_1𝒬-h_1𝒬A_2J_2\stackrel{~}{𝒬}+h_2q𝒬p+h_2\stackrel{~}{p}\stackrel{~}{𝒬}\stackrel{~}{q}.$$ (1) Here $`J_1`$ ($`J_2`$) is the invariant antisymmetric tensor of $`Sp(2N)_1`$ ($`Sp(2N)_2`$). It is then easy to check that this theory has a line of fixed points passing through weak coupling. The one-loop beta function vanishes and the symmetry between the gauge factors implies that both antisymmetric tensors have the same anomalous dimension $`\gamma _A`$, both bifundamentals have $`\gamma _𝒬`$ and all fundamentals have $`\gamma _q`$. Therefore the beta functions of the gauge coupling and the Yukawa couplings in the superpotential are $$\beta _g\propto 2(N-1)\gamma _A+4N\gamma _𝒬+8\gamma _q$$ (2) $$\beta _{h_1}\propto 2\gamma _𝒬+\gamma _A$$ (3) $$\beta _{h_2}\propto \gamma _𝒬+2\gamma _q.$$ (4) Setting all beta functions to zero gives two independent constraints on the three coupling constants. The remaining coupling constant parametrizes a line of superconformal fixed points. Since setting all anomalous dimensions to zero satisfies the constraints, this line passes through the free point $`g=h_1=h_2=0`$. Note that requiring the beta functions to vanish does not fix anomalous dimensions unambiguously. The most natural assumption is that the dimensions of the fields are unchanged as one moves along the fixed line.
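The counting of independent constraints can be made explicit (a small illustrative check added here, not part of the original argument): the beta functions (2)-(4) are linear in the anomalous dimensions, and their coefficient matrix is degenerate for every $`N`$.

```python
# Check that Eqs. (2)-(4) impose only two independent conditions on
# (gamma_A, gamma_Q, gamma_q), for all N (requires sympy).
import sympy as sp

N = sp.symbols('N', positive=True)
coeffs = sp.Matrix([
    [2*(N - 1), 4*N, 8],   # beta_g      ~ 2(N-1) gamma_A + 4N gamma_Q + 8 gamma_q
    [1,         2,   0],   # beta_{h_1}  ~ gamma_A + 2 gamma_Q
    [0,         1,   2],   # beta_{h_2}  ~ gamma_Q + 2 gamma_q
])
print(sp.simplify(coeffs.det()))   # 0 for every N, i.e. the matrix has rank 2
print(coeffs.nullspace())          # the flat direction in anomalous-dimension space
```

The vanishing determinant is why the fixed-point conditions leave a one-parameter ambiguity in the anomalous dimensions; the natural choice along the fixed line is the one with all of them vanishing.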
This would mean that the theory is finite. The supergravity computation in the last section supports this conjecture by showing that this is true in the large $`N`$ limit. The moduli space of this theory includes subspaces where it flows to theories with more supersymmetry. For example, giving an expectation value to either $`𝒬`$ or $`\stackrel{~}{𝒬}`$ proportional to a unit matrix gives a mass to half of the fundamentals and breaks the gauge group to the diagonal $`Sp(2N)_D`$. It is a simple matter to show that the resulting theory flows to an $`𝒩=2`$ superconformal theory with gauge group $`Sp(2N)`$, one antisymmetric hypermultiplet, and four hypermultiplets in the fundamental. Giving such expectation values to both $`𝒬`$ and $`\stackrel{~}{𝒬}`$ makes all flavors massive and breaks the gauge group to $`SU(N)`$. Part of the bifundamentals are eaten by gauge bosons, and the rest give rise to three chiral superfields in the adjoint of $`SU(N)`$. This theory flows to $`𝒩=4`$ SYM in the infrared. These field theory results are reproduced in the brane construction if we identify the positions of the D4-branes with the field theory moduli in the following way: $$X_7\propto 𝒬𝒬^{\dagger }-\stackrel{~}{𝒬}^{\dagger }\stackrel{~}{𝒬}$$ (5) $$X_4+iX_5\propto 𝒬\stackrel{~}{𝒬}.$$ (6) Giving an expectation value to either of the bifundamentals while keeping the other expectation value zero corresponds to moving the 4-branes in the positive or negative $`X_7`$ direction. Turning on both bifundamentals corresponds to moving the 4-branes in the $`X_4`$ and $`X_5`$ directions as well as $`X_7`$. The effect of these motions on the 4-brane theory agrees with the field theory expectations. If we move the 4-branes off the NS5-branes in the $`X_7`$ direction, we can ignore the NS5-brane. The remaining branes preserve eight supercharges, and standard techniques confirm the matter content and gauge group stated above for the $`𝒩=2`$ case. Moving the 4-branes in $`X_4`$ and $`X_5`$ amounts to separating them from all other branes. The theory on the 4-branes is then $`𝒩=4`$ SYM as expected. ### C The non-conformal case We can deform the background for the 4-brane theory by moving the 6-branes in the $`X_6`$ and $`X_{4,5}`$ directions. These brane motions are parametrized by expectation values of the two complex scalars $`\mathcal{M},\stackrel{~}{\mathcal{M}}`$ in the bifundamental hypermultiplet of the $`(1,0)`$ theory on the intersection of the 6-branes and the NS5-branes. More precisely, we relate the positions of the 6-branes to $`\mathcal{M},\stackrel{~}{\mathcal{M}}`$ as follows: $$X_6\propto \mathcal{M}\mathcal{M}^{\dagger }-\stackrel{~}{\mathcal{M}}^{\dagger }\stackrel{~}{\mathcal{M}}$$ (7) $$X_4+iX_5\propto \mathcal{M}\stackrel{~}{\mathcal{M}}.$$ (8) To obtain the configuration shown in Fig. 1 we have to set $`\mathcal{M}=\mathrm{diag}(m_1,m_2,0,0)`$ and $`\stackrel{~}{\mathcal{M}}=\mathrm{diag}(0,0,\stackrel{~}{m}_3,\stackrel{~}{m}_4)`$. These bifundamental expectation values act as mass terms in the 4-brane theory. The corresponding terms in the field theory superpotential are $$W=\stackrel{~}{\mathcal{M}}q\stackrel{~}{q}+\mathcal{M}p\stackrel{~}{p}.$$ (9) We will be particularly interested in the case $`m_1=m_2=\stackrel{~}{m}_3=\stackrel{~}{m}_4`$. In this case the bifundamental expectation values break the $`SU(4)_u\times SU(4)_d`$ 6-brane gauge group to $`SU(2)_1\times SU(2)_2\times U(1)`$.
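The unbroken flavor group quoted above can also be checked numerically. The small sketch below (an illustration added here; it assumes the bifundamental transforms in the $`(\mathbf{4},\overline{\mathbf{4}})`$ of $`SU(4)_u\times SU(4)_d`$) counts the generators that leave the expectation values $`\mathcal{M}=\mathrm{diag}(m,m,0,0)`$, $`\stackrel{~}{\mathcal{M}}=\mathrm{diag}(0,0,m,m)`$ invariant:

```python
# Count unbroken su(4)+su(4) generators; expect 7 = dim SU(2) x SU(2) x U(1) (requires numpy).
import numpy as np

M  = np.diag([1.0, 1.0, 0.0, 0.0])
Mt = np.diag([0.0, 0.0, 1.0, 1.0])

def su4_basis():
    """Real basis of su(4): 15 traceless anti-Hermitian matrices."""
    basis = []
    for i in range(4):
        for j in range(i + 1, 4):
            A = np.zeros((4, 4), complex); A[i, j] = 1;  A[j, i] = -1
            B = np.zeros((4, 4), complex); B[i, j] = 1j; B[j, i] = 1j
            basis += [A, B]
    for i in range(3):
        D = np.zeros((4, 4), complex); D[i, i] = 1j; D[i + 1, i + 1] = -1j
        basis.append(D)
    return basis

basis, cols = su4_basis(), []
for k in range(30):                       # 15 parameters for X_u, 15 for X_d
    Xu = basis[k] if k < 15 else np.zeros((4, 4), complex)
    Xd = basis[k - 15] if k >= 15 else np.zeros((4, 4), complex)
    c1 = Xu @ M - M @ Xd                  # invariance of <M>
    c2 = Xu @ Mt - Mt @ Xd                # invariance of <Mtilde>
    cols.append(np.concatenate([c1.ravel().real, c1.ravel().imag,
                                c2.ravel().real, c2.ravel().imag]))
A = np.array(cols).T
print(30 - np.linalg.matrix_rank(A))      # -> 7 unbroken generators
```

Seven unbroken generators is precisely the dimension of $`SU(2)_1\times SU(2)_2\times U(1)`$.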
After integrating out the massive components of the fundamentals, the superpotential of the 4-brane theory reads $$W=h_1\stackrel{~}{𝒬}A_1J_1𝒬-h_1𝒬A_2J_2\stackrel{~}{𝒬}+h_3q𝒬\stackrel{~}{𝒬}\stackrel{~}{q}+h_3\stackrel{~}{p}\stackrel{~}{𝒬}𝒬p.$$ (10) The fundamentals now transform as $`q=(\mathbf{2N},\mathbf{1},\mathbf{2},\mathbf{1})`$, $`\stackrel{~}{q}=(\mathbf{2N},\mathbf{1},\mathbf{2},\mathbf{1})`$, $`p=(\mathbf{1},\mathbf{2N},\mathbf{1},\mathbf{2})`$, and $`\stackrel{~}{p}=(\mathbf{1},\mathbf{2N},\mathbf{1},\mathbf{2})`$ under $`Sp(2N)_1\times Sp(2N)_2\times SU(2)_1\times SU(2)_2`$. Actually, the superpotential, Eq. (10), has an accidental $`SO(4)_1\times SO(4)_2`$ global symmetry under which $`q`$ and $`\stackrel{~}{q}`$ transform as a $`(\mathbf{4},\mathbf{1})`$ while $`p`$ and $`\stackrel{~}{p}`$ transform as $`(\mathbf{1},\mathbf{4})`$. An analysis along the same lines as in the conformal case shows that this theory also has a line of superconformal fixed points. The beta functions are given by $$\beta _g\propto 4+2(N-1)\gamma _A+4N\gamma _𝒬+4\gamma _q$$ (11) $$\beta _{h_1}\propto 2\gamma _𝒬+\gamma _A$$ (12) $$\beta _{h_3}\propto 1+\frac{1}{2}\gamma _𝒬+\gamma _q.$$ (13) Demanding that the beta functions vanish, we again find that two out of the three constraints are independent, leaving us with a line of fixed points. In this case, however, the line does not pass through weak coupling, since at least one of the anomalous dimensions must be nonzero. Again the vanishing of the beta functions alone does not determine the values of anomalous dimensions. In the last section we will argue that supergravity considerations allow us to fix this ambiguity for large $`N`$ and find $`\gamma _A=\gamma _𝒬=0,\gamma _q=-1`$. As in the conformal case we can analyze the RG flows both in field theory and using the brane picture. From the brane construction it is clear that we flow to the same $`𝒩=2`$ theory as in the conformal case if we move the 4-branes off the NS5-brane in the positive or negative $`X_7`$ direction. Moving the 4-branes in $`X_{4,5}`$ again yields $`𝒩=4`$ SYM. The analysis in the field theory is a little more involved in this case because the one-loop beta function does not vanish. This implies that there will be threshold effects in the matching of the running gauge coupling. On general grounds one would expect the low-energy effective coupling to depend on the size of the bifundamental expectation values in the field theory. However, if we give arbitrary (non-zero) expectation values to $`𝒬`$ and $`\stackrel{~}{𝒬}`$, fields get integrated out at a variety of scales. Assuming that the expectation value of $`𝒬`$ is larger than that of $`\stackrel{~}{𝒬}`$, the $`Sp(2N)\times Sp(2N)`$ gauge group is broken to the diagonal group at a scale set by $`𝒬`$. The diagonal $`Sp(2N)_D`$ is broken to $`SU(N)`$ at a scale set by $`\stackrel{~}{𝒬}`$, and finally the fundamentals are integrated out at scale $`h_3𝒬\stackrel{~}{𝒬}`$. Matching the gauge couplings at each of these scales we find that the low-energy effective coupling does not depend on the bifundamental expectation values. This is a special feature of this theory that will be important later on. ## III The type IIB description ### A T-duality In this section we describe the IIB configuration which is obtained by T-dualizing the IIA brane configuration of section II along $`X_6`$.
Since $`\partial /\partial X_6`$ is not a Killing vector, performing this T-duality is not completely trivial. Similar T-dualities on IIA configurations that preserve $`𝒩=2`$ supersymmetry on the 4-branes have appeared in the literature. In the $`𝒩=2`$ case the T-duality maps the two O6<sup>-</sup>-planes and the four D6-branes to an orientifold 7-plane and four D7-branes. The D4-branes become D3-branes probing this background. The NS5-brane and its mirror image turn into a $`𝐙_2`$ orbifold acting on the 7-brane coordinates transverse to the D3-brane. The T-dual of the IIA configuration without NS5-branes has been analyzed previously. Our configuration differs from the $`𝒩=2`$ case by the orientation of the NS5-branes. Since this modifies the T-duality considerably we discuss it in some detail here. Our first goal is to T-dualize the NS5-branes and the pair of O6<sup>-</sup>-planes. The other branes can be added later. We begin by separating the NS5-brane and its image in the $`X_{4,5}`$ directions. The T-dual of the two NS5-branes is a two-center Taub-NUT space. Recall that the two-center Taub-NUT space can be thought of as a circle fibered over $`𝐑^3`$ so that its radius vanishes at two points on $`𝐑^3`$ (the centers). In the present case $`𝐑^3`$ is parametrized by $`X_4,X_5,X_7`$, while the coordinate along the circle is T-dual to $`X_6`$. The positions of the centers correspond to the positions of the NS5-branes in $`X_4,X_5,X_7`$. In the IIA configuration the orientifold projection ensures that the position of the physical NS5-brane and its image are related by a reflection of the $`X_{4,5}`$ coordinates. The T-dual orientifold projection should therefore impose a similar constraint on the location of the centers of the Taub-NUT. The Taub-NUT metric has the following form $$ds^2=\left(\frac{4}{b^2}+\frac{1}{R_+}+\frac{1}{R_-}\right)^{-1}\left[d\sigma +\left(\frac{Z_+}{R_+}+\frac{Z_-}{R_-}\right)d\mathrm{arctan}\left(\frac{Y}{X}\right)\right]^2+\left(\frac{4}{b^2}+\frac{1}{R_+}+\frac{1}{R_-}\right)\left[dX^2+dY^2+dZ^2\right],$$ (15) where $$Z_\pm =Z\pm Z_0,\qquad R_\pm ^2=X^2+Y^2+Z_\pm ^2.$$ (16) The $`𝐑^\mathrm{𝟑}`$ base is parametrized by $`X,Y,Z`$, the two centers are located at $`(0,0,\pm Z_0)`$, and $`\sigma `$ is the $`4\pi `$-periodic coordinate on the circle fiber. The parameter $`b`$ is the asymptotic radius of the fiber. The reflection of $`X_{4,5}`$ in the IIA picture maps into reflections of $`Z`$ and one other coordinate of $`𝐑^\mathrm{𝟑}`$, say $`Y`$. We will be interested in the limit when the asymptotic radius of the circle fiber, $`b`$, becomes infinitely large, while the T-dual circle parametrized by $`X_6`$ shrinks to zero. In this limit the two-center Taub-NUT space becomes an $`A_1`$ ALE space, also known as Eguchi-Hanson space. It is useful to change coordinates to transform the metric above into the Eguchi-Hanson form: $$X=\frac{1}{8}\sqrt{r^4-a^4}\,\mathrm{sin}(\theta )\mathrm{cos}(\psi )$$ (17) $$Y=\frac{1}{8}\sqrt{r^4-a^4}\,\mathrm{sin}(\theta )\mathrm{sin}(\psi )$$ (18) $$Z=\frac{1}{8}r^2\mathrm{cos}(\theta )$$ (19) $$\sigma =2\varphi ,$$ (20) where $`a^2=8Z_0`$ and $`\psi `$ has period $`2\pi `$. The orientifold-induced projection $`(Y,Z)\to (-Y,-Z)`$ implies the identification $`(\theta ,\psi )\to (\pi -\theta ,-\psi )`$ for the angular coordinates.
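The angular identification just stated is easy to verify explicitly (a short check added here for convenience; it only uses the coordinate change (17)-(19)):

```python
# Verify that (theta, psi) -> (pi - theta, -psi) reproduces (Y, Z) -> (-Y, -Z)
# with X unchanged, using the Eguchi-Hanson coordinates above (requires sympy).
import sympy as sp

r, a, theta, psi = sp.symbols('r a theta psi', positive=True)
X = sp.sqrt(r**4 - a**4) * sp.sin(theta) * sp.cos(psi) / 8
Y = sp.sqrt(r**4 - a**4) * sp.sin(theta) * sp.sin(psi) / 8
Z = r**2 * sp.cos(theta) / 8

flip = {theta: sp.pi - theta, psi: -psi}
assert sp.simplify(X.subs(flip) - X) == 0   # X is invariant
assert sp.simplify(Y.subs(flip) + Y) == 0   # Y -> -Y
assert sp.simplify(Z.subs(flip) + Z) == 0   # Z -> -Z
print("angular identification consistent with the (Y, Z) reflection")
```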
The fixed locus of this identification is a two-dimensional submanifold of the Eguchi-Hanson space which has the topology of a cylinder. Next we want to bring the NS5-brane and its image back to the origin of the $`X_{4,5}`$ plane in the IIA description, which corresponds to setting $`a=0`$. For $`a=0`$ the Eguchi-Hanson metric becomes an orbifold metric on $`𝐂^2/𝐙_2`$. To make this explicit we can introduce two complex coordinates $$z_{1,2}=r\mathrm{exp}(i\varphi /2)\left(\mathrm{cos}(\theta /2)\mathrm{exp}(i\psi /2)\pm i\mathrm{sin}(\theta /2)\mathrm{exp}(i\psi /2)\right).$$ (21) In these coordinates the $`a=0`$ Eguchi-Hanson metric becomes flat. The identification $`\psi \psi +2\pi `$ requires that we identify $`(z_1,z_2)(z_1,z_2)`$ as expected for $`𝐂^2/𝐙_2`$. The additional orientifold identification acts on the new coordinates as $`(z_1,z_2)(z_1,z_2)`$, and acting with both orientifold and orbifold identifications flips the sign of $`z_1`$. The orientifold projections have two fixed planes, $`z_{1,2}=0`$, which we identify with two O7<sup>-</sup>-planes. To summarize, the NS5-brane together with two O6<sup>-</sup>-planes become, under T-duality, a pair of intersecting O7<sup>-</sup>-planes with six common directions. Now let us put in D-branes. The four physical D6-branes in IIA are located at $`X_4=X_5=0`$. Under T-duality they become D7-branes wrapping the circle fiber of the Taub-NUT and located at $`Y=Z=0`$. In other words, they are wrapped on the invariant cylinder of the orientifold projection. Taking the limit $`b\mathrm{},a0`$ we find that the invariant cylinder develops a neck and becomes a pair of planes $`z_1=0`$ and $`z_2=0`$ in $`𝐂^2/𝐙_2`$. Thus the four physical D7-branes must be located on these planes. Recall that these planes are the O7<sup>-</sup>-planes and therefore have 7-brane charge $`4`$. It follows that the 7-brane charge is cancelled between the D7-branes and the orientifold planes, and the IIB dilaton is constant. Finally, T-duality turns the D4-branes into D3-branes extending in 0123. To summarize, the T-dual of the IIA configuration in the limit when the radius of $`X_6`$ goes to zero consists of an O7<sup>-</sup>-plane with four coincident D7-branes in 01236789, another O7<sup>-</sup>-plane with four coincident D7-branes in 01234589 and 3-branes in 0123. We will refer to the 7-branes extending in 01234589 as 7-branes. The orientifold group for this configuration is $$G=\{1,(1)^{F_L}R_{45}\mathrm{\Omega },(1)^{F_L}R_{67}\mathrm{\Omega },R_{4567}\},$$ (22) where $`R`$ reflects the coordinates indicated and $`\mathrm{\Omega }`$ is the worldsheet parity. The splitting of the D6-branes into half-D6-branes discussed in becomes obvious after T-duality. Indeed, it follows easily from the above formulas that the location of the upper half 6-branes, $`X_4=X_5=0,X_7>0`$ in the type IIA configuration maps to the locus $`z_2=0`$ in IIB. Similarly, the lower halfs of the 6-branes, $`X_4=X_5=0,X_7<0`$, map to $`z_1=0`$. Thus the upper halfs of D6-branes map to whole D7-branes located at $`z_2=0`$, while the lower halfs map to whole D7-branes at $`z_1=0`$. To specify the theory on the 7-branes completely we need to make a consistent choice for the action of the orientifolds on the Chan-Paton factors of the 7-7, 7-7, and 7-7 strings. There are at least two such choices. 
One gives rise to an $`SO(8)\times SO(8)`$ gauge symmetry , and classically the other yields a $`U(4)\times U(4)`$ gauge group on the 7-branes , which is broken to $`SU(4)\times SU(4)`$ by one-loop effects . The second case is related to the Gimon-Polchinski orientifold via T-duality. We will be mainly interested in the second orientifold, which we will refer to as the Sen model. Both of these orientifolds were constructed as compact models with a total of four orientifolds and sixteen physical 7-branes of each kind. The 7-brane gauge groups listed here are the parts of the total 7-brane group that are visible to a 3-brane probe near one of the intersections. The theory on a 3-brane probe in the Sen model background was analyzed in . The gauge group, matter content, and the superpotential are in complete agreement with the theory we discussed in section II B. Thus we conclude that the IIA configuration with all 6-branes on top of the NS5-brane is T-dual to a local piece of the Sen model . As in the IIA description the flat directions of the field theory correspond to motions of the 3-branes in the 7-brane background. Moving the 3-branes off the intersection point along either of the O7-planes corresponds to giving en expectation value to one of the bifundamentals $`𝒬,\stackrel{~}{𝒬}`$, and moving the 3-branes off both orientifolds gives an expectation value to both $`𝒬`$ and $`\stackrel{~}{𝒬}`$. Separating the 3-branes in the direction which the 7- and 7-branes share corresponds to giving expectation values to the antisymmetric tensors $`A_1,A_2`$. It is instructive to study the deformations of the Sen model and compare these to the deformations of the corresponding IIA construction. The IIA construction has the advantage that all deformations of the background correspond to moving the 6-branes or the NS5-branes. In the IIB picture only some of the deformations are geometric, others correspond to Wilson lines. Once the map between IIA and IIB deformations is established, we can also find the IIB description of the second (non-conformal) IIA configuration discussed in section II C. Sen has studied the deformations of the compact model in great detail. In the compact case the field theory on the 7-branes turns out to be a $`(1,0)`$ theory in six dimensions. Since our IIB configuration is non-compact, we cannot simply use Sen’s results. In fact, in our case the theory on the 7-branes is not even six-dimensional, instead it is an eight-dimensional theory with six-dimensional impurities. Such theories have been discussed previously . Before we launch into an analysis of the impurity theory we need to discuss the matter content of the 7-brane theory. A single O7<sup>-</sup>-plane with four coincident 7-branes gives rise to an $`𝒩=1`$ $`SO(8)`$ theory in eight dimensions. The bosonic degrees of freedom in the eight-dimensional vector multiplet consist of a vector field and a complex scalar, both in the adjoint of the gauge group. The second O7<sup>-</sup>-plane in our configuration breaks half of the supersymmetries and imposes projections on fields in the vector multiplet. With the projection matrices for the Sen model , the surviving constant modes of the fields are a vector and a complex scalar in the $`\mathrm{𝟔}+\overline{\mathrm{𝟔}}`$. These fields account for the 7-7 strings and there are similar fields on the 7-branes from 7-7 strings. The 7-7 strings are localized at the intersection of 7- and 7-branes. 
They yield a single hypermultiplet of the six-dimensional $`(1,0)`$ theory on the intersection, which transforms as a $`(\mathrm{𝟒},\mathrm{𝟒})`$ under the (classical) $`U(4)_7\times U(4)_7^{}`$ gauge group. ### B The seven-brane impurity theory In this section we analyze the supersymmetric vacua of the impurity theory on the 7-branes and compare them with the vacua of the T-dual IIA configuration. We expect the vacuum field configurations to be translationally invariant in the six directions common to the 7- and 7-branes. Focusing now on the 7-branes, we see that we can capture the physics by studying the dependence of the 7-brane fields on the remaining two directions transverse to the 7-branes. The 7-branes and the O7-plane intersect this two-dimensional plane in a point. To set up the impurity theory we use a complex affine coordinate $`z`$ on the plane and define $`A_{\overline{z}}=\frac{1}{2}(A_1+iA_2)`$, where $`A_i`$ are the two components of the $`SO(8)`$ gauge field living on the 7-branes. The 7-brane theory also contains a complex scalar, $`\mathrm{\Phi }`$, in the adjoint of $`SO(8)`$ that describes the transverse fluctuations of the 7-branes. The bifundamental $`(,\stackrel{~}{})`$ from the 7-7 strings is localized at the point $`z=0`$. A very similar theory (without orientifold projections) was described in . The moduli space of the impurity theory is given by the solution of the equations $`F_{z\overline{z}}[\mathrm{\Phi },\mathrm{\Phi }^{}]=\delta (z)\left(^{}\stackrel{~}{}^{}\stackrel{~}{}\right)`$ (23) $`\overline{D}\mathrm{\Phi }=\delta (z)\stackrel{~}{},`$ (24) where $`F_{z\overline{z}}=A_{\overline{z}}\overline{}A_z[A_z,A_{\overline{z}}]`$ and $`\overline{D}=\overline{}A_{\overline{z}}`$. These equations are known as Hitchin equations with sources. They are analogous to the $`D`$ and $`F`$ flatness conditions in ordinary supersymmetric field theories. A similar set of equations describes the impurity theory on the 7-branes. To make contact with the notation in we write all 7-brane fields as antisymmetric $`8\times 8`$ matrices with certain constraints on the entries. This reflects the origin of the fields in the impurity theory. Without the O7-plane, both $`A_{\overline{z}}`$ and $`\mathrm{\Phi }`$ would transform in the adjoint of $`SO(8)`$. Orientifolding with O7 puts additional constraints on these fields $`\mathrm{\Phi }(z)`$ $`=`$ $`P\mathrm{\Phi }^T(z)P^1`$ (25) $`A(z)`$ $`=`$ $`PA^T(z)P^1,`$ (26) where $$P=\left(\begin{array}{cc}P_4& 0\\ 0& P_4\end{array}\right),P_4=\left(\begin{array}{cc}0& \mathrm{𝟏}_{2\times 2}\\ \mathrm{𝟏}_{2\times 2}& 0\end{array}\right).$$ (27) Orientifolding also breaks the gauge group from $`SO(8)`$ down to the group of all continuous $`SO(8)`$-valued functions satisfying $`g(z)=Pg(z)P^1`$. In particular, at $`z=0`$ the gauge group reduces to $`U(4)`$. The orientifold projections allow the bifundamentals to be arbitrary complex $`8\times 8`$ matrices that commute with $`P`$ . The impurity equations are consistent if the products of the bifundamentals on the r.h.s. of Eq. (23) are antisymmetrized in the gauge indices. We need to find all, possibly $`z`$-dependent, field configurations that satisfy the impurity equations, Eq. (23), modulo gauge transformations. To this end we make the following ansatz $$A_{\overline{z}}=\frac{T}{z},\mathrm{\Phi }(z)=\mathrm{\Phi }_0+\frac{\mathrm{\Phi }_s}{z}.$$ (28) Here $`T,\mathrm{\Phi }_0`$ and $`\mathrm{\Phi }_s`$ are constant antisymmetric $`8\times 8`$ matrices. Imposing the constraints, Eq. 
(25), determines that $`\mathrm{\Phi }_0`$ transforms in the $`\mathrm{𝟔}+\overline{\mathrm{𝟔}}`$ of $`U(4)`$ while $`T`$ and $`\mathrm{\Phi }_s`$ transform as adjoints. The background gauge field, $`A_{\overline{z}}`$, can be interpreted as a flat connection that gives rise to a Wilson line around the intersection point at $`z=0`$. The constant part of the scalar field, $`\mathrm{\Phi }_0`$, corresponds to the asymptotic (i.e. $`z\mathrm{}`$) positions of the 7-branes in the directions transverse to the O7-plane, while the singular part, $`\mathrm{\Phi }_s`$, parametrizes a deformation of the shape of the 7-branes. The moduli space of the impurity equations, Eq. (23), has several branches with rather different physics. The simplest situation arises if all bifundamental expectation values and all singular parts of $`A_{\overline{z}}`$ and $`\mathrm{\Phi }`$ vanish. In that case Eq. (23) reduces to the condition $$[\mathrm{\Phi }_0,\mathrm{\Phi }_0^{}]=0,$$ (29) which is solved by $$\mathrm{\Phi }_0=\left(\begin{array}{cc}0& \varphi \\ \varphi & 0\end{array}\right),\varphi =\mathrm{diag}(\varphi _1,\varphi _2,\varphi _1,\varphi _2).$$ (30) As in Ref. , the two complex parameters, $`\varphi _{1,2}`$, parametrize the transverse position of two pairs of 7-branes. We discuss the corresponding IIA deformation in the next section. For the remainder of this section we set $`\mathrm{\Phi }_0=0`$. The impurity equations, Eq. (23), become inhomogeneous once we turn on an expectation value for the bifundamental fields. Since $`\overline{}(1/z)\delta (z)`$, and the r.h.s. of Eq. (23) is proportional to $`\delta (z)`$, the singular fields above have the right form to satisfy the impurity equations with nonzero bifundamental expectation values. The most generic expectation value of the bifundamentals for which the impurity equations have solutions reads $`=\left(\begin{array}{cc}M_1& 0\\ 0& M_2\end{array}\right),`$ (33) $`M_1=\left(\begin{array}{cccc}m_1& 0& im_1& 0\\ 0& m_2& 0& im_2\\ im_1& 0& m_1& 0\\ 0& im_2& 0& m_2\end{array}\right),M_2=\left(\begin{array}{cccc}m_3& 0& im_3& 0\\ 0& m_4& 0& im_4\\ im_3& 0& m_3& 0\\ 0& im_4& 0& m_4\end{array}\right),`$ (42) and an expectation value of the same form, but with $`m_i`$ replaced by $`\stackrel{~}{m}_i`$, for $`\stackrel{~}{}`$. The impurity equations determine the expectation values of the other fields in terms of $``$ and $`\stackrel{~}{}`$. The residue of $`\mathrm{\Phi }`$ is given by $$\mathrm{\Phi }_s=\mathrm{diag}(\mathrm{\Phi }_1,\mathrm{\Phi }_2),$$ (43) where $$\mathrm{\Phi }_1=\left(\begin{array}{cccc}0& 0& \varphi _1& 0\\ 0& 0& 0& \varphi _2\\ \varphi _1& 0& 0& 0\\ 0& \varphi _2& 0& 0\end{array}\right),\mathrm{\Phi }_2=\mathrm{\Phi }_1(\varphi _1\varphi _3,\varphi _2\varphi _4),$$ (44) with $`\varphi _im_i\stackrel{~}{m}_i`$. The matrix $`T`$ in the gauge connection has the same structure as $`\mathrm{\Phi }_s`$, except that $`\varphi _i`$ is replaced by $`t_i|m_i|^2|\stackrel{~}{m}_i|^2`$. Before discussing this general solution, we will focus on two special cases. If we set $`m_i=\stackrel{~}{m}_i`$, the r.h.s. of the first impurity equation vanishes and only the residue of $`\mathrm{\Phi }`$ is turned on. This expectation value of the bifundamentals breaks the $`U(4)\times U(4)`$ gauge group to a diagonal subgroup. If all $`m_i`$ are equal this subgroup is $`U(4)_D`$, and for generic values of $`m_i`$ we find $`U(1)^4`$. Since the 7-brane group is broken to a diagonal subgroup, the impurity theory, Eq. 
(23), on the 7-branes and the corresponding impurity theory on the 7-branes contain the same information. Therefore it is sufficient to consider only the 7-brane theory. The field $`\mathrm{\Phi }(z)`$ describes the shape of the 7-branes. For large $`z`$ the 7-branes asymptote to the O7-plane as in the unperturbed case, while they approach the O7-plane for small $`z`$. Thus we conclude that turning on this bifundamental expectation value deforms pairs of intersecting 7- and 7-branes into a single smooth 7-brane that interpolates between the 7- and 7-branes. This result agrees with the F-theory analysis in , where this behavior was interpreted as fusing the 7- and 7-branes together. There are also solutions of the impurity equations with non-zero gauge connection and $`\mathrm{\Phi }_s=0`$. We find one such solution if we set $`m_1=m_2=\stackrel{~}{m}_3=\stackrel{~}{m}_4`$, and all other components of the bifundamentals vanish. For this choice the r.h.s. of the second equation in Eq. (23) vanishes, which implies $`\mathrm{\Phi }_s=0`$, and $`t_1=t_2|m_1|^2`$, $`t_3=t_4|m_1|^2`$. This bifundamental expectation value breaks the $`U(4)_7\times U(4)_7^{}`$ 7-brane gauge group to a diagonally embedded $`U(2)\times U(2)`$. Note that this deformation is purely non-geometric. Since $`\mathrm{\Phi }(z)=0`$, the 7-branes have the same shape as in the case without any bifundamental expectation values. It is now a simple matter to identify these two singular solutions with the corresponding deformations in the IIA construction. The first solution with $`T=0`$, $`\mathrm{\Phi }_s0`$ corresponds to moving the 6-branes off the NS5-brane in the $`X_{4,5}`$ direction. If none of the 6-branes coincide, the $`U(4)\times U(4)`$ gauge symmetry on the 6-branes is broken to $`U(1)^4`$. This is in complete agreement with the impurity analysis. Note that a deformation that corresponds to fusing 7 and 7-branes together in the IIB description maps into a simple brane motion in the IIA construction, which involves reconnecting the upper and lower halfs of the 6-branes. The second singular solution with $`T0`$, $`\mathrm{\Phi }_s=0`$ also corresponds to a simple brane motion in the IIA description. We identify turning on $`m_1`$ with the motion of two pairs of 6-branes in the $`X_6`$ direction. The classical gauge group on the 6-branes is $`U(2)\times U(2)`$ as expected from the IIB analysis. This brane motion also requires that we reconnect the upper and lower halfs of the 6-branes, so that the resulting 6-brane group is a diagonal subgroup of the original $`U(4)\times U(4)`$ gauge symmetry. This is in perfect agreement with the analysis of the 7-brane impurity theory. It is straightforward to discuss more general choices for the bifundamental expectation values. The bifundamental expectation values are parametrized by eight complex numbers, $`m_i`$ and $`\stackrel{~}{m}_i`$, which determine the matrices $`T`$ and $`\mathrm{\Phi }_s`$ completely. The four parameters in $`T`$ map into the $`X_6`$ position of the 6-branes in the IIA description and the entries in $`\mathrm{\Phi }_s`$ correspond to the $`X_{4,5}`$ positions. Thus we find complete agreement between the brane motions in the IIA description and the moduli corresponding to singular fields in the impurity theory. ### C A supersymmetric IIA configuration with curving six-branes The deformations we discussed so far are rather complicated in the IIB picture and correspond to simple brane motions in the IIA description. 
In fact, all simple brane motions in the IIA description are accounted for. However, there is a very simple brane motion in IIB, namely the constant solution of the impurity equations given in Eq. (30), that should have a counterpart in the IIA description. Since this deformation corresponds to moving pairs of 7-branes off the orientifold, we can find an explicit equation describing the position of these branes. In terms of the coordinates in Eq. (21) this equation reads $`z_2=const`$. Starting from this expression we can reverse the coordinate transformations that took us from the Taub-NUT space to the flat coordinates on $`𝐂^2/𝐙_2`$. This provides an expression for the world volume of the 7-brane in the Taub-NUT coordinates. Since the 7-branes wrap the fiber of the Taub-NUT and the fiber T-dualizes to the compact $`X_6`$ direction, it is straightforward to find the equation for the worldvolume of the corresponding 6-brane. The result is $`X_4^2cX_7c^2/4=0`$, i.e., a parabola in the $`X_4X_7`$ plane. Fig. 2 shows the IIA configuration which is T-dual to the following IIB situation: all 7-branes are coincident with the O7-plane, and one pair of 7-branes is displaced from the O7-plane. From this picture one can see that turning on the constant complex scalar on the 7-brane corresponds to fusing two upper halfs of the 6-branes together and moving them off the NS5-brane as shown in the figure. On the IIB side it is obvious that this deformation preserves all supersymmetries. This is somewhat harder to see on the IIA side. Presumably the $`H`$-field produced by the NS5-brane stabilizes the curved worldvolume of the D6-brane. The effect of this deformation on the probe theory is what we expect from the IIB picture. There we move two 7-branes away from the 3-branes sitting at the intersection point of the orientifold planes. This gives a mass to half of the fundamentals from 7-3 strings. In the IIA picture the deformation accomplishes the same. In the IIB picture moving the 3-branes along the O7-plane and transverse to the O7-plane corresponds to giving the bifundamental field $`𝒬`$ in the probe theory an expectation value . Thus it is possible to move the 3-branes away from the intersection of the orientifolds towards the intersection of the pair of 7-branes with the O7-plane by giving an expectation value to one of the bifundamentals. This is also reflected in the IIA description. We can move the 4-branes in the negative $`X_7`$ direction by giving an expectation value to one of the bifundamentals (see section II B). This moves the 4-branes off the NS5-brane and towards the intersection of the lower half-6-branes with the curving 6-brane. In the IIB description moving a pair of 7-branes away from the O7-plane breaks the 7-brane gauge group from $`SU(4)`$ down to $`SU(2)\times SU(2)`$ . Moving all four 7-branes together breaks $`SU(4)`$ down to $`Sp(4)`$. This implies that the unbroken gauge group on a single curving 6-brane should be $`SU(2)`$, while for two coincident curving branes it should be enhanced to $`Sp(4)`$. It is not at all clear how to see this from the IIA description. ### D Comparison with F-theory Sen argued that the T-dual version of the GP model is related to an F-theory compactification with certain fluxes through collapsed 2-cycles. The naive candidate for such an F-theory compactification would be a pair of intersecting $`D_4`$ singularities. 
However, this cannot be directly related to the GP orientifold, since it would give rise to an $`SO(8)\times SO(8)`$ gauge symmetry and contain tensionless strings, while the GP model has $`SU(4)\times SU(4)`$ symmetry and no tensionless strings. The difference is due to NS (and possibly RR) 2-form fluxes through the collapsed 2-cycle at the intersection of the two $`D_4`$ singularities. These fluxes give a mass to 3-branes wrapping this cycle, thereby preventing the appearance of tensionless strings. These fluxes are not quantized , so we should be able to identify moduli in our IIA description that correspond to turning them off. The NS flux is conventionally identified with the position of the NS5-branes on the $`X_6`$ circle and the RR flux parametrizes the location of the NS5-branes on the M-theory circle. From the IIB point of view, they are both part of a massless hypermultiplet living at the intersection of the $`D_4`$ singularities. In order to turn off the NS flux, we move the NS5-brane and its image as well as all D6-branes to coincide with one of the O6-planes. This configuration has an $`SO(8)\times SO(8)`$ gauge symmetry from the eight upper and eight lower halfs of the 6-branes, as well as tensionless strings from the NS5-brane coincident with its image . In addition to the hypermultiplet that corresponds to moving the NS5-brane off the orientifold in the $`X_4,X_5,X_6,X_{10}`$ there is now a tensor multiplet whose scalar expectation value corresponds to separating the two NS5-branes in the $`X_7`$ direction. All this agrees with the expectations from F-theory. ## IV The large $`N`$ limit When the number of D3-branes, $`N`$, is large there is a dual description of $`𝒩=1`$ superconformal theory on the D3-branes in terms of a supergravity on $`AdS_5\times X`$, where $`X`$ is an Einstein manifold (or orbifold) . This dual description is valid when the t’Hooft gauge coupling, $`g_{YM}^2N`$, is large. In this section we will show how the AdS/CFT correspondence works for the conformal gauge theory with $`SU(4)\times SU(4)`$ flavor symmetry discussed in section II B, and provide evidence that this theory is finite. We will also provide a partial analysis of the non-conformal theory of section II C in the large $`N`$ limit and argue that supergravity suggests a definite R-charge assignment for all the fields in the infrared. ### A The conformal case In the conformal case, $`X`$ is an orientifold of $`𝐒^5`$. As explained in the previous section, the IIA configuration with $`SU(4)\times SU(4)`$ gauge symmetry on the 6-branes is T-dual to a local piece of the Sen model. At the $`SU(4)\times SU(4)`$ point, the Sen model is a perturbative type IIB orientifold with constant string coupling, $`\tau `$ . Thus the near-horizon geometry of the 3-branes is obtained by orientifolding $`AdS_5\times 𝐒^5`$. Similar theories were analyzed in . Let us denote the orientifolded five-sphere by $`\stackrel{~}{𝐒}^5`$. The metric on $`\stackrel{~}{𝐒}^5`$ is the angular part of $$ds^2=|dz_1|^2+|dz_2|^2+|dw|^2,$$ (45) where $`w=X_8+iX_9`$ and the variables $`z_1,z_2`$ are subject to the identifications $`z_iz_i`$. A $`U(1)^3`$ subgroup of the $`SO(6)`$ isometry group of $`𝐒^5`$ commutes with these identifications. It is convenient to take the generators that rotate $`z_1`$, $`z_2`$,and $`w`$ separately as a basis in the Lie algebra of $`U(1)^3`$. 
Explicitly, the metric on $`\stackrel{~}{𝐒}^5`$ can be written as $$ds_{\stackrel{~}{𝐒}^5}^2=d\theta _1^2+\mathrm{sin}^2(\theta _1)d\varphi _1^2+\mathrm{cos}^2(\theta _1)\left(d\theta _2^2+\mathrm{sin}^2(\theta _2)d\varphi _2^2+\mathrm{cos}^2(\theta _2)d\varphi _3^3\right),$$ (46) where $`\varphi _{1,2}[0,\pi ]`$, $`\varphi _3[0,2\pi ]`$, and $`\theta _{1,2}[0,\pi ]`$. The three angles $`\varphi _i`$ parametrize rotations in the $`z_{1,2}`$ and $`w`$ planes respectively. The periodicity of $`\varphi _{1,2}`$ reflects the identifications on $`z_{1,2}`$. Since this periodicity of $`\varphi _{1,2}`$ is the only thing which distinguishes $`\stackrel{~}{𝐒}^5`$ from $`𝐒^5`$, the eigenvalues of the scalar Laplacian on the former can be obtained from those on the latter. The eigenvalue of the scalar Laplacian on $`𝐒^5`$ is $`k(k+4)`$, where $`k=0,1,\mathrm{}`$. In terms of the angular momenta, $`m_i`$, associated with the angles $`\varphi _i`$, we have $`k=|m_1|+|m_2|+|m_3|+2l_1+2l_2`$, where $`l_i`$ are non-negative integers. The orientifold projection on the bulk supergravity states amounts to keeping modes with even $`m_1`$ and $`m_2`$. In the $`𝒩=4`$ case, the supergravity states with lowest mass squared come from the KK reduction of $`h_a^a`$, the dilaton mode of the $`𝐒^5`$. The AdS masses of these states are given by $`m^2=k(k4)`$ , where $`k`$ is given above. According to , the AdS mass of a KK state is related to the dimension of the corresponding boundary operator by $`\mathrm{\Delta }(\mathrm{\Delta }4)=m^2`$, which implies $`\mathrm{\Delta }=k`$ for this tower of KK modes. The decomposition of the other supergravity fields yield towers of KK states for which $`\mathrm{\Delta }=k+n`$, where $`n`$ is a positive integer . We will see below that only for $`n=0`$ the KK states couple to chiral primary operators. Therefore we will restrict our analysis to the KK modes from the decomposition of $`h_a^a`$. The simplest way to identify chiral primaries is to find all states for which $`\mathrm{\Delta }=\frac{3}{2}R`$, where $`R`$ is the R-charge which is part of the superconformal algebra. The R-current is a certain linear combination of the three $`U(1)`$ currents. To find this linear combination we first need to determine which supercharges survive the orientifold projection. The orientifold group, $`𝐙_2\times 𝐙_2`$, is generated by $`\gamma _1=R_{z_2}\mathrm{\Omega }(1)^{F_L}`$ and $`\gamma _2=R_{z_1z_2}`$. Orientifolding by the first generator breaks $`SO(6)`$ down to $`SU(2)_L\times SU(2)_R\times U(1)_N`$ where $`U(1)_N`$ acts on $`z_2`$ while $`SU(2)_L\times SU(2)_R`$ acts on $`z_1,w`$. The surviving supercharges $`(Q_+,Q_{})`$ transform as $`(\mathrm{𝟏},\mathrm{𝟐})_1`$ with respect to this group. Orientifolding by $`\gamma _2`$ breaks $`SU(2)_L\times SU(2)_R`$ down to $`U(1)_L\times U(1)_R`$. We will denote the sum of the $`U(1)_L`$ and $`U(1)_R`$ charges by $`U(1)_2`$, the difference by $`U(1)_3`$, and refer to $`U(1)_N`$ as $`U(1)_1`$. The charges of $`z_2,z_1,`$ and $`w`$ under these three $`U(1)`$’s are given by $`(2,0,0),(0,2,0),`$ and $`(0,0,2)`$, respectively. The supercharge $`Q_+`$ which survives the second orientifolding has $`U(1)`$ charges $`(1,1,1)`$. It follows that the R-charge which is in the same superconformal multiplet as the stress-energy tensor is $`\frac{1}{3}(2m_1+2m_22m_3)`$. Here $`2m_1`$ is the $`U(1)_1`$ charge, $`2m_2`$ is the $`U(1)_2`$ charge, and $`2m_3`$ is the $`U(1)_3`$ charge. The normalization is chosen so that $`Q_+`$ has R-charge $`1`$. 
It follows that any KK mode with $`l_1=l_2=0`$, $`m_1,m_2\geq 0`$ and $`m_3\leq 0`$ should couple to a chiral primary operator in the boundary field theory. We discussed the identification of geometric motions of 3-branes with flat directions in the 3-brane field theory in the previous section. This allows us to determine the $`U(1)`$ charges of the fields $`A_1,A_2,𝒬,\stackrel{~}{𝒬}`$. The field theory superpotential then fixes the R-charges of the fundamentals $`q,\stackrel{~}{q},p,\stackrel{~}{p}`$. The results are summarized in the table below. $$\begin{array}{cccc}& U(1)_1& U(1)_2& U(1)_3\\ & & & \\ A_{1,2}& 0& 0& -2\\ 𝒬& 2& 0& 0\\ \stackrel{~}{𝒬}& 0& 2& 0\\ q,p& 0& 1& -1\\ \stackrel{~}{q},\stackrel{~}{p}& 1& 0& -1\end{array}$$ (47) With these charge assignments in hand it is now a simple matter to match the bulk KK modes and the chiral primary operators in the field theory. Let us give some examples. The supergravity spectrum contains a singleton chiral primary with $`U(1)_3`$ charge $`-2`$ and $`\mathrm{\Delta }=k=1`$. This state corresponds to a chiral primary $`\mathrm{Tr}(A_1J_1)+\mathrm{Tr}(A_2J_2)`$ in the field theory (recall that the antisymmetric representation of $`Sp(N)`$ is reducible and contains a singlet). Since $`\mathrm{\Delta }=1`$, this is a free field. For $`\mathrm{\Delta }=2`$ there are three chiral primary states with geometric $`U(1)`$ charges $`(4,0,0)`$, $`(0,4,0)`$, and $`(0,0,-4)`$. We identify them with $`\mathrm{Tr}𝒬^TJ_1𝒬J_2`$, $`\mathrm{Tr}\stackrel{~}{𝒬}^TJ_2\stackrel{~}{𝒬}J_1`$, and $`\mathrm{Tr}(A_1J_1)^2+\mathrm{Tr}(A_2J_2)^2`$. The chiral primary operators with $`\mathrm{\Delta }=3`$ are $`\mathrm{Tr}[𝒬A_2𝒬^TJ_1+𝒬^TJ_1A_1J_1𝒬J_2]`$, $`\mathrm{Tr}[\stackrel{~}{𝒬}A_1\stackrel{~}{𝒬}^TJ_2+\stackrel{~}{𝒬}^TJ_2A_2J_2\stackrel{~}{𝒬}J_1]`$, and $`\mathrm{Tr}(A_1J_1)^3+\mathrm{Tr}(A_2J_2)^3`$. They correspond to the KK states with charges $`(4,0,-2)`$, $`(0,4,-2)`$, and $`(0,0,-6)`$ respectively. The field theory also contains operators that carry charges under the 7-brane gauge groups. It has been pointed out that these operators couple to the AdS modes coming from the KK reduction of the 7-brane fields. Our configuration includes an O7-plane with four coincident D7-branes wrapping an $`𝐒^3`$ defined by $`|z_1|^2+|w|^2=const`$, and similarly an O7′-plane with four D7′-branes wrapped on $`|z_2|^2+|w|^2=const`$. The two 3-spheres intersect over a circle. We can focus on the KK modes from the first $`𝐒^3`$. These modes couple to operators that are charged under the $`SU(4)_7`$ subgroup of the $`SU(4)_7\times SU(4)_{7^{\prime }}`$ global symmetry group of the probe theory. The modes living on the other $`𝐒^3`$ couple to similar operators in the field theory that transform under $`SU(4)_{7^{\prime }}`$. The KK reduction of the theory on an O7-plane with four coincident 7-branes has been discussed previously. In that case there were twice as many supersymmetries as in ours. The simplest way to compute the KK spectrum in our case is to use those results and impose the additional projection from the O7′-plane. The same reference contains a detailed discussion of the 7-brane states and their multiplet structure. The lowest component of the multiplet is a real field in the $`(𝐤,𝐤+\mathrm{𝟐})_0`$ representation of $`SU(2)_L\times SU(2)_R\times U(1)_N`$, where $`k=1,2,\mathrm{\ldots }`$. This mode comes from KK reduction of the components of the 7-brane gauge field along the $`𝐒^3`$, $$A_a=\sum _ka_kY_a^k,$$ (48) where $`Y_a^k`$ is the $`k`$-th vector spherical harmonic on $`𝐒^3`$.
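The bulk matching above can be reproduced mechanically. The following sketch (added here as an illustration; it assumes the relation $`\mathrm{\Delta }=k`$ for this tower and the R-charge combination $`R=\frac{1}{3}(2m_1+2m_2-2m_3)`$ normalized so that $`Q_+`$ has R-charge $`1`$) enumerates the low-lying modes that survive the projection and flags the chiral ones:

```python
# Enumerate KK modes of the h^a_a tower with k <= 3 that survive the orientifold
# projection (even m1, m2) and flag those obeying Delta = k = (3/2) R.
from fractions import Fraction

chiral, other = [], []
for m1 in range(0, 4, 2):
    for m2 in range(0, 4, 2):
        for m3 in range(-3, 4):
            for l1 in range(2):
                for l2 in range(2):
                    k = m1 + m2 + abs(m3) + 2*l1 + 2*l2
                    if not 1 <= k <= 3:
                        continue
                    R = Fraction(2*m1 + 2*m2 - 2*m3, 3)
                    mode = (k, (2*m1, 2*m2, 2*m3))        # (Delta, U(1)^3 charges)
                    (chiral if Fraction(3, 2)*R == k else other).append(mode)

print(sorted(chiral))
# -> [(1, (0, 0, -2)),
#     (2, (0, 0, -4)), (2, (0, 4, 0)), (2, (4, 0, 0)),
#     (3, (0, 0, -6)), (3, (0, 4, -2)), (3, (4, 0, -2))]
# i.e. exactly the Delta = 1, 2, 3 chiral primaries listed above; modes with
# l1, l2 > 0 or m3 > 0 fail the relation and end up in `other`.
```

Returning to the KK modes of the 7-brane gauge field in Eq. (48):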
These modes couple to operators of dimension $`\mathrm{\Delta }=k+1`$ in the boundary field theory. For simplicity we will only consider operators with $`\mathrm{\Delta }=2,3`$. The state with $`\mathrm{\Delta }=2`$ transforms in the $`(\mathrm{𝟏},\mathrm{𝟑})_0`$ and decomposes into modes with $`U(1)^3`$ quantum numbers $`(0,0,0)`$ and $`(0,\pm 2,2)`$. The $`(0,0,0)`$ mode has no $`U(1)_R`$ charge and does not correspond to a chiral primary. The states with $`U(1)^3`$ charges $`(0,2,2)`$ and $`(0,2,2)`$ are complex conjugates of each other, so it is sufficient to consider only one of them, e.g., the first. It has R-charge $`4/3`$ and is, therefore, a chiral primary. This state starts out in the adjoint of the $`SO(8)_7`$ gauge group on the 7-brane. Since it has $`m_2=1`$, it is odd under the additional orientifold projection $`\gamma _2`$. This projection breaks $`SO(8)_7`$ down to $`SU(4)_7`$. As explained in , states in the adjoint of $`SO(8)_7`$ which are odd under $`\gamma _1`$ yield $`\mathrm{𝟔}+\overline{\mathrm{𝟔}}`$ of $`SU(4)_7`$, while even states give adjoints of $`SU(4)_7`$. It follows that the $`(0,2,2)`$ state yields one complex state in $`\mathrm{𝟔}`$ and one complex state in $`\overline{\mathrm{𝟔}}`$. These KK states correspond to operators $`qJ_1q`$ and $`pJ_2p`$, which transform in the $`\mathrm{𝟔}`$ and $`\overline{\mathrm{𝟔}}`$ of the 7-brane group respectively. The $`\mathrm{\Delta }=3`$ mode is in the $`(\mathrm{𝟐},\mathrm{𝟒})_0`$ representation and decomposes into even modes with $`U(1)^3`$ charges $`(0,0,2)`$ and $`(0,4,2)`$ and their complex conjugates, as well as odd modes with $`U(1)^3`$ charges $`(0,2,0)`$ and $`(0,2,4)`$ and their complex conjugates. The even $`(0,0,2)`$ mode and the odd $`(0,2,0)`$ mode do not couple to chiral primary operators, because the R-charge does not match the dimension. The even $`(0,4,2)`$ mode couples to a chiral primary operator in the adjoint of $`SU(4)`$ which we identify as $`pJ_2\stackrel{~}{𝒬}J_1q`$. The odd $`(0,2,4)`$ mode couples to a chiral primary in the $`\mathrm{𝟔}+\overline{\mathrm{𝟔}}`$. The corresponding operators are given by $`qA_1q`$ and $`pJ_2A_2J_2p`$. Other scalars on AdS come from the decomposition of the complex scalar field on the 7-branes. These KK modes are in the $`(𝐤,𝐤)_2`$ representation of $`SU(2)_L\times SU(2)_R\times U(1)_N`$ and couple to operators of dimension $`k+2`$ . It is straightforward to decompose and project these modes as we did for the KK modes of the vector field. The $`\mathrm{\Delta }=3`$ case is especially simple, since this mode carries only $`U(1)_1`$ charge. Since the R-charge and the dimension do not satisfy $`\mathrm{\Delta }=\frac{3}{2}R`$, this KK mode does not couple to a chiral primary operator. The same is true for the higher KK modes of the complex scalar field. Finally, there are also states living on the intersection of the 7-branes and 7-branes which is an $`𝐒^1`$ embedded in $`𝐒^5`$. The KK reduction of these states is straightforward, and we will not discuss it. In the above analysis we have focused on chiral primaries. It is also interesting to ask whether non-chiral states match between field theory and supergravity. Some of the non-chiral scalars we have seen, namely the ones coming from the reduction of complex scalars living on the 7-branes, are descendants of the chiral primaries and therefore match automatically. On the other hand, the non-chiral scalars which come from the KK reduction of the gauge field on the 7-branes are primary. 
One may ask whether the superconformal multiplet they live in is long or short. To answer this question we need to recall some facts about unitary representations of the $`𝒩=1`$ superconformal algebra . For our purposes it is sufficient to consider multiplets whose primary states have zero spin. Let the $`R`$ and $`\mathrm{\Delta }`$ be the R-charge and the dimension of the primary. Unitarity puts restrictions on which values of $`R`$ and $`\mathrm{\Delta }`$ may occur; the allowed possibilities are (i) $`\mathrm{\Delta }=R=0`$ (the trivial representation), (ii) $`\mathrm{\Delta }=\frac{3}{2}|R|`$ (chiral and anti-chiral representations), (iii) $`\mathrm{\Delta }\frac{3}{2}|R|+2`$. Representations of type (iii) with $`\mathrm{\Delta }>\frac{3}{2}|R|+2`$ contain no null states and therefore are termed long multiplets. Chiral and anti-chiral representations contain null states at level one, i.e., their primaries are annihilated by half of the supercharges. These representations are called short. Representations of type (iii) which saturate the inequality are also short; the null states occur at level two. A well-known example of a short multiplet is a linear multiplet which contains a conserved current. It corresponds to the case $`R=0,\mathrm{\Delta }=2`$. One can check that all non-chiral primaries coming from the reduction of the gauge field on the 7-branes satisfy $`\mathrm{\Delta }=\frac{3}{2}|R|+2`$ and therefore are in short multiplets of type (iii). In particular, the $`(0,0,0)`$ mode with $`\mathrm{\Delta }=2`$ we have found above is in fact the lowest component of a linear multiplet. It couples to a field theory operator $`q^{}qpp^{}`$ in the adjoint of $`SU(4)_7`$. The corresponding current is simply the $`SU(4)_7`$ flavor current. The matching of non-chiral primaries with $`\mathrm{\Delta }=3`$ is a bit more involved. The $`(0,2,0)`$ mode transforms in $`\mathrm{𝟔}+\overline{\mathrm{𝟔}}`$ of $`SU(4)_7`$. Its field theory counterparts are $`h_1p^{}\stackrel{~}{𝒬}J_1q+h_2qJ_1A_1^{}J_1q`$ and $`h_1pJ_2\stackrel{~}{𝒬}q^{}+h_2pA_2^{}p`$, where the flavor indices are antisymmetrized. The $`U(1)^3`$ charges of these operators match those of the $`(0,2,0)`$ mode. To show that these operators live in short multiplets, i.e., are annihilated by $`\overline{D}^2`$, one needs to use the classical equations of motion. The manipulations one has to go through are very similar to those in , and are subject to the same caveats. The use of the classical equations of motion is presumably justified in the weakly coupled regime where $`g_{YM}^2N`$ is small. The supergravity analysis indicates that the operators in question belong to short multiplets even for large $`g_{YM}^2N`$. An even more interesting situation arises when one tries to match the non-chiral primary with $`U(1)^3`$ charges $`(0,0,2)`$ and $`\mathrm{\Delta }=3`$. This mode lives in the adjoint of $`SU(4)_7`$. We claim that it corresponds to the field theory operator $`h_1qA_1J_1q^{}h_1p^{}A_2J_2ph_2q\stackrel{~}{𝒬}^{}p`$. Evaluating the $`\overline{D}^2`$ descendant of this operator using the classical equations of motion, one finds that it does not vanish. Instead, the descendant has the form $`(q\stackrel{~}{q})(\stackrel{~}{p}p)`$, i.e., it factorizes into a product of two gauge-invariant operators and is therefore subleading at large $`N`$. 
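For reference, these unitarity rules for spin-zero primaries can be packaged into a small classification routine. The Python sketch below (ours, purely for bookkeeping) restates cases (i)-(iii) and records the level at which null states appear; the numerical tolerance is only there to absorb floating-point comparisons.

```python
def classify_spin_zero_primary(R, Delta, tol=1e-9):
    """Classify a spin-zero primary of the N=1 superconformal algebra by (R, Delta):
    (i)   Delta = R = 0          : trivial representation
    (ii)  Delta = (3/2)|R|       : chiral/anti-chiral, null states at level one (short)
    (iii) Delta = (3/2)|R| + 2   : short, null states at level two
          Delta > (3/2)|R| + 2   : long, no null states
    anything else violates unitarity."""
    bound = 1.5 * abs(R)
    if abs(R) < tol and abs(Delta) < tol:
        return "trivial"
    if abs(Delta - bound) < tol:
        return "chiral/anti-chiral (short, null states at level one)"
    if abs(Delta - bound - 2.0) < tol:
        return "short (null states at level two)"
    if Delta > bound + 2.0:
        return "long"
    return "not unitary"

# The linear multiplet containing a conserved flavor current: R = 0, Delta = 2.
print(classify_spin_zero_primary(0.0, 2.0))
# A free chiral field, such as the Delta = 1 singleton above: Delta = (3/2)R with R = 2/3.
print(classify_spin_zero_primary(2.0 / 3.0, 1.0))
```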
It follows that this field theory operator lives in a long multiplet for finite $`N`$, but is “close” to being in a short multiplet in the sense that its dimension approaches the unitarity bound as $`N\mathrm{}`$. On the supergravity side this means that the $`(0,0,2)`$ one-particle state is in a short multiplet only for $`N=\mathrm{}`$. For finite $`N`$ the multiplet absorbs another short multiplet made of two-particle states and becomes long. This concludes our analysis of the AdS/CFT correspondence for the Sen model. There is complete agreement between the spectrum of primary operators in the field theory and the scalar Kaluza-Klein states on AdS as required by the AdS/CFT correspondence . The charge assignments in Table 47 together with the formula $`R=\frac{1}{3}(2m_1+2m_22m_3)`$ imply that all chiral fields have canonical dimensions in the infrared. This is the most natural assumption for a theory with vanishing beta function, but as we pointed out in the introduction there is no field theory proof of this. The supergravity computation is only valid for large $`N`$ and large $`g_{YM}^2N`$. However, given that for $`g_{YM}^21`$ and $`N`$ of order $`1`$ the dimensions are also canonical, it appears likely that the theory is finite for all $`N`$. ### B The non-conformal case Next we discuss the deformed $`𝒩=1`$ theory which flows to a line of conformal fixed points in the infrared (section II C). We have already pointed out that although the Wilsonian gauge coupling in this theory depends on the scale, the low-energy effective gauge coupling does not vary over the moduli space. This implies that the corresponding IIB background should have constant $`\tau `$. Indeed, in section III we showed that the 7-brane background for this configuration is very similar to the background for the conformal theory. As in the conformal case, the 7-branes do not bend and are coincident with the O7-planes. The RR charge of the 7-brane is cancelled locally by the O7-planes, so we expect that the type IIB string coupling is constant. Similarly, the gravitational field of the 7-branes cancels against that of the orientifold planes. Thus it appears that the closed string sector is not affected by this deformation. The only difference between the conformal and the non-conformal case is in the open string sector, namely in the gauge connection on the 7-branes. In the conformal case it is trivial, while in the non-conformal case it is a flat connection which breaks the $`SU(4)_7\times SU(4)_7^{}`$ group to a diagonally embedded $`SU(2)\times SU(2)`$. To summarize, the deformation of the 7-brane background that leads to the non-conformal theory changes the properties of the theory on the 7-branes, but it appears not to change the closed string sector. To find a supergravity dual for this non-conformal theory, we need to repeat the analysis above with the new 7-brane background. Since the closed string sector is unchanged, the spectrum of the bulk modes should be the same as before. The matter content of the conformal and the non-conformal theory differ only in the number of flavors and their coupling to the bifundamentals. Therefore both theories have the same spectrum of operators that do not transform under the 7-brane groups. Thus it appears that the dimensions of all chiral primaries uncharged with respect to the flavor group are the same as in the conformal case, i.e., canonical. If antisymmetric tensors and bifundamentals have zero anomalous dimensions then the vanishing of the beta-functions, Eq. 
(11), requires that the fundamentals have dimension $`1/2`$. This is actually the lowest dimension for the fundamental allowed by unitarity. To show that this assignment of dimensions, or equivalently of R-charges, agrees with supergravity, we would have to show that the KK reduction of the 7-brane theory with the singular flat connection switched on reproduces the expected dimensions of the chiral primaries that involve the fundamentals. Unfortunately, we do not know how to analyze the excitations of the impurity theory around nontrivial vacua, so we cannot check that our solution is consistent. Nevertheless, we get a definite prediction for the infrared dimensions of all fields. It would be interesting to confirm the answer by directly analyzing the perturbative expansion of the non-conformal theory at large $`N`$. ###### Acknowledgements. It is a pleasure to thank O. Aharony, E. Gimon, J. Maldacena, G. Moore, E. Katz, and E. Witten for helpful discussions. M.G. would like to thank the Institute for Advanced Study for hospitality while this work was in progress. The work of M.G. was supported in part by DOE grants #DF-FC02-94ER40818 and #DE-FC-02-91ER40671, while that of A.K. was supported by DOE grant #DE-FG02-90ER40542.
# Solution of Potts-3 and Potts-∞ matrix models with the equations of motion method ## 1 Introduction : Random matrices are useful for a wide range of physical problems. In particular, they can be related to two-dimensional quantum gravity coupled to matter fields with a non-zero central charge $`C`$ . While $`C1`$ models are relatively well understood, the $`C>1`$ domain remains almost totally unknown : there is a $`C=1`$ “barrier”. When studying $`C0`$ models, we are led to consider multi-matrix models which are often non-trivial. One class of difficult matrix models corresponds to the $`q`$-state Potts model (in short: Potts-$`q`$) on a random surface. This model is a $`q`$-matrix model where all the matrices are coupled to each other, thus making difficult the use of usual techniques such as the saddle point or the orthogonal polynomials method. Moreover, the $`q4`$ limit corresponds to $`C1`$, thus, by solving Potts-$`q`$ models, we shall gain a new understanding of the $`C=1`$ barrier. In this letter, we show that, contrary to what was previously thought, one can use the loop equations to solve the Potts-3 random matrix model, and we find that the resolvent (which generates many of the operators of the problem) obeys an algebraic equation that we write explicitely. We also show that this method applies when one adds branching interactions (gluing of surfaces, also called “branched polymers”) and we derive the critical line of this extended model. The extension to the model with branching interactions and the study of its phase diagram is necessary to verify ’s conjecture about the $`C=1`$ transition. Finally, we apply the method to the Potts-$`\mathrm{}`$ matrix model, which corresponds to $`C=\mathrm{}`$. As this work was approaching its completion, a paper appeared on the dilute Potts model , which partially overlaps our present work. In this article, the author also has an algebraic equation for the conventional Potts-3 model. Here, we go further as we consider the Potts-3 + branching interactions model. Moreover, his method is quite different : while he uses analytical considerations on the resolvents and large-$`N`$ techniques, we solve our model by the loop equations method, which can be extended to finite $`N`$ problems and is also more adapted to the use of renormalization group techniques . ## 2 The Potts-3 + branching interactions model : Let us define : $$Z=𝑑\mathrm{\Phi }e^{N^2V(\mathrm{\Phi })}$$ (1) $$V(\mathrm{\Phi })=g\frac{\text{tr}\mathrm{\Phi }^3}{3N}+\psi (\frac{\text{tr}\mathrm{\Phi }^2}{2N},\frac{\text{tr}\mathrm{\Phi }\delta \mathrm{\Phi }\delta }{2N})$$ (2) $$\mathrm{\Phi }=\left(\begin{array}{ccc}\mathrm{\Phi }_1& 0& 0\\ 0& \mathrm{\Phi }_2& 0\\ 0& 0& \mathrm{\Phi }_3\end{array}\right),\delta _1=\left(\begin{array}{ccc}0& 1& 0\\ 0& 0& 1\\ 1& 0& 0\end{array}\right),\delta _1=\left(\begin{array}{ccc}0& 0& 1\\ 1& 0& 0\\ 0& 1& 0\end{array}\right)$$ (3) $`\delta =\delta _1+\delta _1`$. We shall also use later the notation $`\delta _0=Id.`$ $`\mathrm{\Phi }`$, $`\delta _0`$, $`\delta _1`$ and $`\delta _1`$ are $`(3N)\times (3N)`$ and $`\mathrm{\Phi }_1`$, $`\mathrm{\Phi }_2`$, $`\mathrm{\Phi }_3`$ $`N\times N`$ hermitean matrices. $`\psi `$ is a general two-variable function, and will mainly appear through its partial derivatives $`U`$ and $`c`$ with respect to $`\frac{\text{tr}\mathrm{\Phi }^2}{2N}`$ and $`\frac{\text{tr}\mathrm{\Phi }\delta \mathrm{\Phi }\delta }{2N}`$ respectively. 
If these are constants, then we recover the conventional Potts-3 model (no branching interactions). This model was given partial solution by J.M. Daul , by considering the analytic structure of the resolvents. He had its critical point and its associated critical exponent. He did not know, however, if the resolvent obeyed an algebraic equation. We shall give here the expression of this equation for the conventional and extended Potts-$`3`$ model. We also derive the critical line of the extended model and check it corresponds to Daul’s result in the particular case of the conventional model. Let us note, for convenience : $$t_{i_1i_2\mathrm{}i_n\mathrm{\Phi }^k}=\frac{1}{3N}\text{tr}\delta _{i_1}\mathrm{\Phi }\delta _{i_2}\mathrm{\Phi }\mathrm{}\delta _{i_n}\mathrm{\Phi }^k$$ (4) where $`i_1`$, $`\mathrm{}`$, $`i_n`$ can be $`+1`$, $`1`$ or $`0`$. This trace is non-zero if and only if $`i_1+\mathrm{}+i_n0(mod3)`$. $`\mathrm{}`$ is the expectation value of $`(\mathrm{})`$ : $$\mathrm{}=\frac{1}{Z}𝑑\mathrm{\Phi }(\mathrm{})e^{N^2V(\mathrm{\Phi })}$$ (5) A trace will be said to be “of degree $`m`$” if there are $`m`$ matrices $`\mathrm{\Phi }`$ in it. For example, the above trace is of degree $`k+n1`$. Let us now use the method of the equations of motion (or loop equations). If we make the infinitesimal change of variables in $`Z`$ : $$\mathrm{\Phi }\mathrm{\Phi }+ϵ\delta _{i_1}\mathrm{\Phi }\delta _{i_2}\mathrm{}\mathrm{\Phi }\delta _{i_n}$$ (6) with $$i_1+i_2+\mathrm{}+i_n0(mod3)$$ then we obtain the expression of the general equations of motion : $$gt_{i_1\mathrm{}i_n\mathrm{\Phi }^2}+Ut_{i_1\mathrm{}i_n\mathrm{\Phi }}+c(t_{(i_1+1)i_2\mathrm{}(i_n1)\mathrm{\Phi }}+t_{(i_11)i_2\mathrm{}(i_n+1)\mathrm{\Phi }})\underset{j=1}{\overset{n1}{}}t_{i_1\mathrm{}i_j}t_{i_{j+1}\mathrm{}i_n}=0$$ (7) The first three terms come from the transformation of $`V(\mathrm{\Phi })`$, and the last one, from the jacobian of the transformation. Eq. (7) relates any expectation value of trace containing a quadratic term (i.e. a $`\mathrm{\Phi }^2`$ term) to expectation values of traces of lower degrees. The problem is that we do not have any recursion relation for more general expectation values like $`t_{i_1\mathrm{}i_n\mathrm{\Phi }}`$ where all the $`i_k0`$. Moreover, when one wants to compute even a very simple trace : for example $`t_{\mathrm{\Phi }^n}`$ by using Eq. (7), one obtains, $`[\frac{n}{2}]`$ steps later, a $`n[\frac{n}{2}]`$ degree complicated trace which does not contain quadratic terms any more. Thus, the recursion stops there. In fact, this problem can be overcome by a very simple idea : one uses the invariance of traces by cyclic permutations to get rid of the $`n+1`$ degree term in Eq. (7). Then, one obtains relations between general traces, and it is thus possible to compute the expectation values of any trace in function of the first ones. Let us see now how this idea applies to the computation of the resolvent. We denote : $$\omega _{i_1i_2\mathrm{}i_n}=\frac{1}{3N}\text{tr}\delta _{i_1}\mathrm{\Phi }\delta _{i_2}\mathrm{\Phi }\mathrm{}\delta _{i_n}\frac{1}{z\mathrm{\Phi }}$$ $`\omega _0=\omega `$ is the usual resolvent. 
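Before applying the loop equations to the resolvent, note that the selection rule on the indices can be enumerated mechanically: a trace $`t_{i_1\mathrm{}i_n\mathrm{\Phi }^k}`$ can be non-vanishing only when the $`\delta `$ insertions bring the block-diagonal $`\mathrm{\Phi }`$ back to the diagonal, i.e. only for index strings summing to zero modulo 3. The following Python sketch (ours, purely illustrative) lists the admissible strings at low degree, which is convenient when organizing the equations of motion by hand.

```python
from itertools import product

def admissible_index_strings(n):
    """Index strings (i_1, ..., i_n), i_k in {+1, -1, 0}, with sum = 0 mod 3.
    Only these can label non-vanishing traces t_{i_1 ... i_n Phi^k}."""
    return [s for s in product((1, -1, 0), repeat=n) if sum(s) % 3 == 0]

for n in range(1, 5):
    strings = admissible_index_strings(n)
    print(f"n = {n}: {len(strings)} admissible strings out of {3 ** n}")
    if n <= 2:
        print("    ", strings)
```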
Using the change of variables : $$\mathrm{\Phi }\mathrm{\Phi }+ϵ\frac{1}{z\mathrm{\Phi }}$$ we obtain the equation : $$z(U+gz)\omega Ugzgt_\mathrm{\Phi }\omega ^2+2c\omega _{11}=0$$ (8) Similarly, $$\mathrm{\Phi }\mathrm{\Phi }+ϵ\delta _1\mathrm{\Phi }\delta _1\frac{1}{z\mathrm{\Phi }}$$ yields : $$z(U+gz)\omega _{11}(U+gz)t_\mathrm{\Phi }gt_{11\mathrm{\Phi }}\omega \omega _{11}+c\omega _{\mathrm{1\; 0}1}+c\omega _{111}=0$$ (9) and, by the means of similar changes in variables, we have the equations : $$z(U+gz)\omega _{111}(U+gz)t_{11\mathrm{\Phi }}gt_{111\mathrm{\Phi }}+c\omega _{\mathrm{1\; 0}11}+c\omega _{\mathrm{1\; 1}11}\omega \omega _{111}=0$$ (10) $$z(U+gz)\omega _{\mathrm{1\; 1}11}(U+gz)t_{111\mathrm{\Phi }}gt_{\mathrm{1\; 1}11\mathrm{\Phi }}+c\omega _{\mathrm{1\; 0\; 1}11}+c\omega _{1\mathrm{1\; 1}11}\omega \omega _{\mathrm{1\; 1}11}=0$$ (11) These equations alone are not sufficient to compute $`\omega (z)`$. Indeed, if we intend to calculate $`\omega (z)`$, we generate the function $`\omega _{11}(z)`$ (Eq. (8)). Then, in turn, we generate the function $`\omega _{111}(z)`$ (Eq. (9)) and so on. As for $`\omega `$ functions containing a $`0`$ (i.e. a $`\mathrm{\Phi }^2`$ term) such as $`\omega _{\mathrm{1\; 0}1}`$, they are easy to deal with : we know how to compute traces containing $`\mathrm{\Phi }^2`$. $`\omega _{\mathrm{1\hspace{0.17em}0}1}=\frac{1}{3N}\text{tr}\delta _1\mathrm{\Phi }^2\delta _1\frac{1}{z\mathrm{\Phi }}`$, will be seen as $`\frac{1}{3N}\text{tr}\mathrm{\Phi }^2\delta _1\frac{1}{z\mathrm{\Phi }}\delta _1`$. Then the change in variables : $$\mathrm{\Phi }\mathrm{\Phi }+ϵ\delta _1\frac{1}{z\mathrm{\Phi }}\delta _1$$ (12) yields ($`\omega _{11}=\omega _{\mathrm{1\; 1}}`$ for symmetry reasons) $$g\omega _{101}+(U+c)\omega _{11}+cz\omega c=0$$ (13) and similar changes in variables lead to the equations : $$g\omega _{\mathrm{1\; 0}11}+U\omega _{111}+c\omega _{\mathrm{1\; 0}1}+cz\omega _{11}ct_\mathrm{\Phi }=0$$ (14) $$g\omega _{\mathrm{1\; 0\; 1}11}+U\omega _{\mathrm{1\; 1}11}+cz\omega _{111}ct_{11\mathrm{\Phi }}+c\omega _{\mathrm{1\; 0}11}t_\mathrm{\Phi }\omega =0$$ (15) But, to compute $`\omega _{1\mathrm{1\; 1}11}`$, as mentionned in the comments to Eq. (7), we have to substract two different changes in variables and use cyclicity of traces : $$\mathrm{\Phi }\mathrm{\Phi }+ϵ[\mathrm{\Phi }\delta _1\mathrm{\Phi }\delta _1(z\mathrm{\Phi })^1\delta _1\mathrm{\Phi }\delta _1\mathrm{\Phi }\delta _1(z\mathrm{\Phi })^1\delta _1\mathrm{\Phi }^2]$$ (16) yields : $$c(\omega _{1\mathrm{1\; 1}11}+\omega _{\mathrm{1\; 1}111}\omega _{1\mathrm{1\; 0}1}\omega _{\mathrm{1\; 0\; 1\; 1}1})\omega _{111}+t_\mathrm{\Phi }\omega _{11}=0$$ (17) This equation, as we know how to compute $`\omega _{11}`$, $`\omega _{111}`$, $`\omega _{1\mathrm{1\; 0}1}`$ and $`\omega _{\mathrm{1\; 0\; 1\; 1}1}`$, relates $`\omega _{1\mathrm{1\; 1}11}`$ to $`\omega _{\mathrm{1\; 1}111}`$. $$\mathrm{\Phi }\mathrm{\Phi }+ϵ(\mathrm{\Phi }\delta _1\mathrm{\Phi }\delta _1\mathrm{\Phi }\delta _1(z\mathrm{\Phi })^1\mathrm{\Phi }\delta _1\mathrm{\Phi }\delta _1\mathrm{\Phi }\delta _1(z\mathrm{\Phi })^1\mathrm{\Phi }^2)$$ allows us to relate similarly $`\omega _{\mathrm{1\; 1}111}`$ to $`\omega _{11111}`$. 
Then $$\mathrm{\Phi }\mathrm{\Phi }+ϵ(\mathrm{\Phi }\delta _1(z\mathrm{\Phi })^1\delta _1\mathrm{\Phi }\delta _1\mathrm{\Phi }\delta _1\delta _1(z\mathrm{\Phi })^1\delta _1\mathrm{\Phi }\delta _1\mathrm{\Phi }\delta _1\mathrm{\Phi })$$ (18) allows us to relate $`\omega _{11111}`$ to $`\omega _{1\mathrm{1\hspace{0.17em}1\hspace{0.17em}1\hspace{0.17em}1}}`$, and we have $`\omega _{1\mathrm{1\; 1\; 1\; 1}}=\omega _{\mathrm{1\; 1}111}`$ as the roles of $`\delta _1`$ and $`\delta _1`$ are completely symmetric. Finally, as a result of these operations, we have : $$\omega _{\mathrm{1\; 1}111}+\omega _{\mathrm{1\; 1}111}=K(z)$$ (19) where $`K(z)`$ only contains easy to compute $`\omega `$ functions. We can then write an equation for $`\omega _{\mathrm{1\; 1}111}`$ and thus for $`\omega _{1\mathrm{1\; 1}11}`$ which only involves $`\omega `$ functions that we either already know or are able to compute similarly as was done during the two first steps of the procedure. That way, our set of equations is closed, and we obtain a degree five algebraic equation for $`\omega (z)`$. This expression applies to general expressions of $`U`$ and $`c`$. For the exact expression of this equation see Appendix A. The equation only contains four unknown parameters : $$t_\mathrm{\Phi }=\frac{1}{N}\text{tr}\mathrm{\Phi }_1,t_{11\mathrm{\Phi }}=\frac{1}{N}\text{tr}\mathrm{\Phi }_1\mathrm{\Phi }_2,t_{111\mathrm{\Phi }}=\frac{1}{N}\text{tr}\mathrm{\Phi }_1\mathrm{\Phi }_2\mathrm{\Phi }_3,t_{\mathrm{1\; 1}11\mathrm{\Phi }}=\frac{1}{N}\text{tr}\mathrm{\Phi }_1\mathrm{\Phi }_2\mathrm{\Phi }_1\mathrm{\Phi }_3$$ (20) These parameters are also those that would be involved if we used the renormalisation group method to compute the Potts-3 model. The renormalization group flows would relate the conventional Potts-3 to the Potts-3 + branching interactions model, with arbitrary $`U`$ and $`c`$; but the presence of $`t_{111\mathrm{\Phi }}`$ shows us it would also be related to the dilute Potts-3 model, where one has a $`\frac{1}{N}\text{tr}(\mathrm{\Phi }_1+\mathrm{\Phi }_2+\mathrm{\Phi }_3)^3`$ term. Finally, the $`t_{\mathrm{1\hspace{0.17em}1}11\mathrm{\Phi }}`$ term shows us it may also be related to more complicated quartic models. We are now going to derive from our equation the critical behaviour and critical line of the model when $`U=1+h\text{tr}\mathrm{\Phi }^2/6`$ and $`c`$ is a constant. This is the most common type of extension of a matrix model to branching interactions. The values of the unknown parameters given in Eq. (20) are fixed by the physical constraint that the resolvent has only one physical cut which corresponds to the support of the eigenvalues of $`\mathrm{\Phi }`$. Then, one can study the critical behaviour of the model. It is easy to look for the Potts critical line. Indeed, the scaling behaviour of the resolvent is then, if we denote the physical cut of $`\omega `$ as $`[a,b]`$ : $$\omega (z)(za)^{\frac{1}{2}}\text{ when }za\text{ and }\omega (z)(zb)^{\frac{6}{5}}\text{ when }zb$$ (21) The corresponding exponent $`\gamma _s`$ is $`\frac{1}{5}`$, which corresponds to the $`C=\frac{4}{5}`$ central charge of the model. Rather than looking for the resolvent for any values of the coupling constants, it is easier to search for the resolvent only on this critical line where the presence of the $`\frac{6}{5}`$ exponent leads to simple conditions on the derivatives of the algebraic equation. 
We obtain : $$\begin{array}{c}105c^3+4g^2=0\hfill \\ 2480625c^2(14c+43c^2)+\mathrm{\hspace{0.17em}296100}c(15+113c)h\mathrm{\hspace{0.17em}692968}h^2=0\hfill \end{array}$$ (22) Let us note here that, when $`h=0`$ (no branching interactions) we recover the Potts-3 bicritical point which agrees with Daul’s result : $$c=\frac{2\sqrt{47}}{43},g=\frac{\sqrt{105}}{2}\left(\frac{3+\sqrt{47}}{41\sqrt{47}}\right)^{\frac{3}{2}}$$ Thus, we have shown that the resolvent for the model of Potts-$`3`$ plus branching interactions obeys a degree five algebraic equation. We have found the critical line and exponent of this extended model. This extends the results of Daul who had only derived the position of the critical point and exponent of the conventional model. Finally, let us recall that, in a recent paper , P. Zinn-Justin obtains independently algebraic equations for similar problems. His method, though, does not involve loop equations, and is rather in the spirit of . Moreover, it does not address the problem of branching interactions and thus overlaps our results only in the case of the conventional Potts-3 model. ## 3 The Potts-$`\mathrm{}`$ model : We are now going to briefly derive the solution for the Potts-$`\mathrm{}`$ model, from the equations of motion point of view. The purpose of this part is mainly to show the efficiency of our method on this $`c=\mathrm{}`$ model. This model was previously studied by Wexler in . Let us denote $`\mathrm{\Phi }=\left(\begin{array}{ccc}\mathrm{\Phi }_1& & 0\\ & \mathrm{}& \\ 0& & \mathrm{\Phi }_q\end{array}\right)`$, and $`X=\frac{(\mathrm{\Phi }_1+\mathrm{}+\mathrm{\Phi }_q)}{N}\mathrm{𝟏}_{q\times q}`$. We shall define the Potts-q partition function as $$Z=𝑑\mathrm{\Phi }e^{NV(\mathrm{\Phi })}\text{where}V(\mathrm{\Phi })=g\frac{\text{tr}\mathrm{\Phi }^3}{3N}+U\frac{\text{tr}\mathrm{\Phi }^2}{2N}+c\frac{\text{tr}X^2}{2N}$$ (23) $`V(\mathrm{\Phi })`$ is of order $`q`$ when $`q\mathrm{}`$. First, let us use the equations of motion to relate $$a(x)=\frac{1}{qN}\text{tr}\frac{1}{x\mathrm{\Phi }}\text{to}b(y)=\frac{1}{qN}\text{tr}\frac{1}{yX}$$ (24) Let us also denote : $$d(x,y)=\frac{1}{qN}\text{tr}\frac{1}{x\mathrm{\Phi }}\frac{1}{yX}$$ (25) $$\mathrm{\Phi }\mathrm{\Phi }+ϵ\frac{1}{x\mathrm{\Phi }}\frac{1}{yX}\text{yields }$$ $$(x(gx+U)+cya(x)\frac{b(y)}{q})d(x,y)+ggyb(y)ca(x)b(y)(gx+U)=\mathrm{\hspace{0.17em}0}$$ (26) We can get rid of $`d(x,y)`$ since, when $`x(gx+U)+cya(x)\frac{b(y)}{q}=0`$, $`d(x,y)`$ remains finite, thus $`ggyb(y)ca(x)b(y)(gx+U)=0`$. This is sufficient to relate $`a(x)`$ to $`b(y)`$. Moreover, the value of $`b(y)`$ is easy to compute when $`q=\mathrm{}`$. Let us briefly summarize this computation : we calculate the value of $`\text{tr}X^n`$ in the $`q\mathrm{}`$ limit. First : $$\text{tr}X^n=\text{tr}\mathrm{\Phi }_1\mathrm{}\mathrm{\Phi }_n+O(\frac{1}{q})$$ (27) (recall that all the $`\mathrm{\Phi }_i`$ play the same role). If we now separate the first $`n`$ matrices from the remaining $`qn`$ (with $`qn`$), and suppose there is a saddle point for the eigenvalues of $`\frac{\mathrm{\Phi }_{n+1}+\mathrm{}+\mathrm{\Phi }_q}{q}`$, then this saddle point is (in the $`q\mathrm{}`$ limit) independent from the matrices $`\mathrm{\Phi }_1,\mathrm{},\mathrm{\Phi }_n`$. Then, in this limit, up to a change in variables : $`\stackrel{~}{\mathrm{\Phi }}_k=U\mathrm{\Phi }_kU^1`$, we have $`n`$ independent matrice $`\stackrel{~}{\mathrm{\Phi }}_1\mathrm{}\stackrel{~}{\mathrm{\Phi }}_n`$. 
Each of them has the partition function $$Z_{\mathrm{\Lambda }_C}=𝑑\stackrel{~}{\mathrm{\Phi }}_ke^{N(\frac{g}{3}\text{tr}\stackrel{~}{\mathrm{\Phi }}_k^3+\frac{U}{2}\text{tr}\stackrel{~}{\mathrm{\Phi }}_k^2+c\text{tr}\stackrel{~}{\mathrm{\Phi }}_k\mathrm{\Lambda }_C)}$$ (28) As $`\text{tr}\mathrm{\Phi }_1\mathrm{}\mathrm{\Phi }_n=\text{tr}\stackrel{~}{\mathrm{\Phi }}_1\mathrm{}\stackrel{~}{\mathrm{\Phi }}_n`$, we have $`\text{tr}X^n=\text{tr}\stackrel{~}{\mathrm{\Phi }}_1_{\mathrm{\Lambda }_C}\mathrm{}\stackrel{~}{\mathrm{\Phi }}_n_{\mathrm{\Lambda }_C}`$ where $`\mathrm{}_{\mathrm{\Lambda }_C}`$ is the expectation value obtained with the partition function $`Z_{\mathrm{\Lambda }_C}`$ (cf Eq. (28) ). The matrices $`\stackrel{~}{\mathrm{\Phi }}_k,k=1,\mathrm{}n`$ all play the same part, and $`\text{tr}X^n=\text{tr}\mathrm{\Lambda }_C^n`$, thus $$\text{tr}\mathrm{\Lambda }_C^n=\text{tr}\stackrel{~}{\mathrm{\Phi }}_1^n$$ (29) This must give us $`\mathrm{\Lambda }_C`$, provided we calculate $`\stackrel{~}{\mathrm{\Phi }}_1_{\mathrm{\Lambda }_C}`$ in function of $`\mathrm{\Lambda }_C`$. This is a solvable problem, but it is much faster to note that $$\mathrm{\Lambda }_C=t_\mathrm{\Phi }\mathbf{\hspace{0.17em}1}_{qN\times qN}$$ (30) is solution. Thus, $`\frac{1}{qN}\text{tr}X^n=(t_\mathrm{\Phi })^n`$, and $`b(y)`$ is simply $`(yt_\mathrm{\Phi })^1`$. This gives us immediately the solution for $`a(x)`$ : it obeys a second order equation and reads : $$a(x)=\frac{1}{2}(x(U+gx)+ct_\mathrm{\Phi }\sqrt{(x(U+gx)+ct_\mathrm{\Phi })^24(U+gx+gt_\mathrm{\Phi })})$$ (31) The Potts-$`\mathrm{}`$ plus branched polymers model is thus very similar to an ordinary pure gravity model. As previously, we compute the parameter $`t_\mathrm{\Phi }`$ by imposing that the resolvent $`a(x)`$ has only one physical cut. The model is critical when $`a(x)`$ behaves as $`(xx_0)^{\frac{3}{2}}`$, $`x_0`$ being a constant, and the critical point verifies (as in ) : $$g_c=\frac{1}{4\sqrt{2}}\text{and}c_c=\frac{1}{2}$$ (32) Let us note finally that the loop equations method used here is appropriate for the renormalization group method of . ## 4 Conclusion In this letter, we have shown that it is possible to solve the Potts-3 and Potts-$`\mathrm{}`$ models on two-dimensional random lattices through the method of the equations of motion. We have obtained a closed set of loop equations for the Potts-3 model, which was thought to be impossible. We have shown that the Potts-3 resolvent obeys an order five equation, and this new knowledge opens the door to the calculation of expectation values of the operators of the model. We have extended the Potts-3 conventional model to Potts-3 plus branching interactions, and given the general algebraic equation and the Potts critical line of this model. Finally, we have shown our method also applies successfully to another Potts model : the Potts-$`\mathrm{}`$ model. We hope to generalize soon our method to more general Potts-$`q`$ models, in particular for large-$`q`$ Potts + branching interactions models. ## Appendix A The equation for the Potts-3 resolvent : Here is the degree five equation for the resolvent of this model, where $`𝐖(x)`$ is related to $`\omega (x)`$ by $`𝐖(x)=\omega (x)gx^2Ux`$. 
$`24c^7+4c^4g^216t_{+\varphi }c^5g^212t_{111\varphi }c^4g^34t_{11\varphi }c^2g^4+8t_{1111\varphi }c^3g^4+68c^6gt_\varphi +2c^3g^3t_\varphi +3c^2g^4t_\varphi ^2+60c^6U+2c^3g^2U52t_{11\varphi }c^4g^2U+20t_{111\varphi }c^3g^3U20c^5gt_\varphi U4c^2g^3t_\varphi U36c^5U^23c^2g^2U^2+36t_{11\varphi }c^3g^2U^236c^4gt_\varphi U^212c^4U^3+36c^3gt_\varphi U^3+12c^3U^4+28c^6gx+2c^3g^3x12t_{11\varphi }c^4g^3x+8t_{111\varphi }c^3g^4x36c^5g^2t_\varphi x2c^2g^4t_\varphi x52c^5gUx4c^2g^3Ux+36t_{11\varphi }c^3g^3Ux18c^4g^2t_\varphi Ux+2c^4gU^2x+54c^3g^2t_\varphi U^2x+22c^3gU^3x24c^5g^2x^2c^2g^4x^2+8t_{11\varphi }c^3g^4x^2+2c^4g^3t_\varphi x^2+24c^7Ux^2+20c^4g^2Ux^2+26c^3g^3t_\varphi Ux^248c^6U^2x^2+12c^3g^2U^2x^2+24c^5U^3x^2+24c^7gx^3+6c^4g^3x^3+4c^3g^4t_\varphi x^364c^6gUx^3+2c^3g^3Ux^3+44c^5gU^2x^316c^6g^2x^4+24c^5g^2Ux^4+4c^5g^3x^5+\left(24c^5g12t_{11\varphi }c^3g^3+8t_{111\varphi }c^2g^436c^4g^2t_\varphi 2cg^4t_\varphi 40c^4gU2cg^3U+36t_{11\varphi }c^2g^3U18c^3g^2t_\varphi U10c^3gU^2+54c^2g^2t_\varphi U^2+26c^2gU^3+24c^7x24c^4g^2x2cg^4x+16t_{11\varphi }c^2g^4x+4c^3g^3t_\varphi x36c^6Ux+4c^3g^2Ux+52c^2g^3t\varphi Ux+12c^5U^2x+36c^2g^2U^2x12c^4U^3x+12c^3U^4x4c^6gx^2+14c^3g^3x^2+12c^2g^4t_\varphi x^236c^5gUx^2+10c^2g^3Ux^2+30c^4gU^2x^2+22c^3gU^3x^240c^5g^2x^3+60c^4g^2Ux^3+12c^3g^2U^2x^3+18c^4g^3x^4+2c^3g^3Ux^4\right)𝐖(x)+\left(12c^66c^3g^2+8t_{11\varphi }cg^4+2c^2g^3t_\varphi 12c^5U4c^2g^2U+26cg^3t_\varphi U12c^4U^2+18cg^2U^2+12c^3U^344c^5gx2c^2g^3x+12cg^4t_\varphi x+30c^4gUx+20cg^3Ux+26c^2gU^3x+6c^4g^2x^2+2cg^4x^2+12c^3g^2Ux^2+39c^2g^2U^2x^2+20c^3g^3x^3+14c^2g^3Ux^3+c^2g^4x^4\right)𝐖(x)^2+\left(12c^4g2cg^3+4g^4t_\varphi 10c^3gU+4g^3U+26c^2gU^2+20c^3g^2x+4g^4x+12c^2g^2Ux+18cg^2U^2x+22cg^3Ux^2+4cg^4x^3\right)𝐖(x)^3+\left((c^2g^2)+18cg^2U+4cg^3x+4g^3Ux+4g^4x^2\right)𝐖(x)^4+4g^3𝐖(x)^5=0`$ Note that $`U`$ and $`c`$ may depend, in the most general case, on $`t_{11\mathrm{\Phi }}`$ and $`t_{\mathrm{\Phi }^2}`$, the latter being related to $`t_\mathrm{\Phi }`$ through the equation of motion : $`gt_{\mathrm{\Phi }^2}+(U+2c)t_\mathrm{\Phi }=0`$. In this article, we have computed explicitely the critical line for the particular case of $`c`$ constant and $`U=1+\frac{h}{2}t_{\mathrm{\Phi }^2}`$. ## Appendix B Acknowledgments : We thank P. Zinn-Justin for discussing his ideas with us, and we are grateful to F. David and J.-B. Zuber for useful discussions and careful reading of the manuscript.
# The Role of the BATSE Instrument Response in Creating the GRB E-Peak Distribution ## 1 The Characteristic Photon Energy of Bursts The prompt emission of gamma-ray bursts is observed predominately at several hundred keV. This is one of the most interesting features of gamma-ray bursts, since it is not easily explained by most theories of prompt gamma-ray emission. In particular, the internal and external shock theories predict a wide variation in the characteristic gamma-ray energy both during a burst and between bursts. It is therefore imperative to know whether this is a physical characteristic of bursts, or a consequence of an instrumental effect. In this article, we discuss the instrumental effects that arise when observing gamma-ray bursts with the Burst and Transient Source Experiment (BATSE) on the Compton Gamma-Ray Observatory (CGRO). The primary points that affect the observed distribution function are the ability to correctly define the characteristic energy of the gamma-ray burst and the ability to trigger on a gamma-ray burst with a characteristic energy outside of the trigger energy range. These effects are discussed in §2 and §3 below. Models of simple distributions for the characteristic energy are given in §4, and their consistency with the observations is discussed in §5. We find that the observed distribution of characteristic energies cannot be explained through instrumental effects alone. A model distribution with a characteristic energy is fit to the observations to quantify the type of physical theory required by the observations. Our conclusion is that the existence of approximately the same characteristic energy for all gamma-ray burst spectra is a physical property of gamma-ray bursts that any viable theory must explain. ## 2 Defining E-Peak One can characterize the gamma-ray burst spectrum through the E-peak value ($`E_p`$), the photon energy at which the $`\nu F_\nu `$ curve has a maximum. The distribution of such values as measured by the BATSE gamma-ray burst instrument is narrowly distributed. The standard method of modeling a gamma-ray spectrum is forward fitting. In this method, a model photon spectrum is folded through a model of the detector response matrix (DRM) to produce a count spectrum. The count spectrum is then compared to the observed spectrum, with the best fit found through $`\chi ^2`$ minimization. The photon model used in the analysis of the BATSE gamma-ray burst spectra is the gamma-ray burst spectral form, which is a 4 parameter model that produces good fits to the data. The data type used in deriving $`E_p`$ for the BATSE data set is the MER data type, which covers the spectrum with 16 channels. To test the ability of BATSE to correctly determine the value of $`E_p`$, we generated test burst count spectra with background counts added for model spectra of a given $`E_p`$, and then went through the procedure of subtracting background and deriving the values of $`E_p`$ that provide the best fit to the test spectra. We find that the forward fitting method correctly defines the value of $`E_p`$ for $`20\text{keV}<E_p<2\text{MeV}`$. Once one is outside this range, either no value of $`E_p`$ is found, or the value that is found is at one of these two limits on $`E_p`$. Because few BATSE bursts have $`E_p`$ at $`2\text{MeV}`$, and no BATSE burst is at $`20\text{keV}`$, the misidentification of the value of $`E_p`$ for $`E_p>2\text{MeV}`$ and $`E_p<20\text{keV}`$ is inconsequential. 
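The forward-folding procedure is easy to illustrate with a toy setup. In the Python sketch below (ours), a cutoff power-law photon model, used here as a stand-in for the full four-parameter gamma-ray burst spectral form, is folded through a schematic 16-channel response with Gaussian energy redistribution and unit effective area, and $`E_p`$ is recovered by $`\chi ^2`$ minimization. The energy grid, resolution, and response are invented for illustration and are not the BATSE MER response; the normalization and low-energy index are held fixed only to keep the example one-dimensional.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import minimize_scalar

E = np.logspace(np.log10(20.0), np.log10(2000.0), 200)      # photon energies [keV]
chan = np.logspace(np.log10(20.0), np.log10(2000.0), 17)    # 16 count-channel edges [keV]

def photon_model(E, norm, alpha, epeak):
    """Cutoff power law; the nu-F-nu spectrum E^2 N(E) peaks at epeak."""
    return norm * (E / 100.0) ** alpha * np.exp(-E * (2.0 + alpha) / epeak)

def toy_drm(E, chan, fwhm_frac=0.2):
    """Toy response: Gaussian redistribution with fractional FWHM, unit effective area."""
    sigma = fwhm_frac * E / 2.355
    lo, hi = chan[:-1], chan[1:]
    return 0.5 * (erf((hi[:, None] - E) / (np.sqrt(2.0) * sigma))
                  - erf((lo[:, None] - E) / (np.sqrt(2.0) * sigma)))

drm = toy_drm(E, chan)
dE = np.gradient(E)

def fold(norm, alpha, epeak):
    """Fold the photon model through the toy response -> 16-channel count spectrum."""
    return drm @ (photon_model(E, norm, alpha, epeak) * dE)

rng = np.random.default_rng(1)
observed = rng.poisson(fold(50.0, -1.0, 300.0)).astype(float)   # fake burst with E_p = 300 keV
err = np.sqrt(np.maximum(observed, 1.0))

def chi2(epeak):
    return np.sum(((observed - fold(50.0, -1.0, epeak)) / err) ** 2)

best = minimize_scalar(chi2, bounds=(50.0, 1500.0), method="bounded")
print(f"recovered E_p = {best.x:.0f} keV (input 300 keV)")
```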
## 3 BATSE Triggering The BATSE instrument triggers on a gamma-ray burst when the count rate in the trigger channels exceed an average background count rate by a preset value. Of the eight modules that comprise BATSE, at least two must be above the background rate for the burst to trigger. The triggering is done using four discriminator channels on the timescales of $`1.024\text{s}`$, $`0.256\text{s}`$, and $`0.064\text{s}`$. For most bursts, the triggering is on channels $`2+3`$, which contain counts in the $`50\text{keV}`$ to $`300\text{keV}`$ energy range. To model the trigger of a gamma-ray burst, we ran Monte Carlo simulations of the count rates generated by a gamma-ray burst of a given normalization and value of $`E_p`$. The randomness in this simulation was in the orientation of the spacecraft relative to the gamma-ray burst and to the center of the earth. Two aspects that affect the count rate found for a burst are the orientation of all detectors to the burst, and the component of the burst scattered into the detectors from Earth’s atmosphere. One aspect this simulation did not address was the angular dependence of the background count rate. The purpose of the analysis was to determine the average count rate in the second brightest detector as a function of $`E_p`$. The result is used to relate the photon rate of a burst to the count rate in the trigger channels. ## 4 Model E-Peak Distributions Based on the conversion of photon spectrum to count rate, one can convert model distributions in $`E_p`$ and $`N_p`$, the photon rate at $`E_p`$, into distributions in $`E_p`$ as functions of $`C_{min}`$, the minimum count rate. In doing this, we assume that the minimum count rate is high enough above the background that threshold effects are unimportant. We are also ignoring the fact that different detectors will have different background count rates. We assume that the distribution of $`N_p`$ is a broken power law similar to the peak-flux broken power law, with the power law having an index of $`1.8`$ below the break, and an index of $`5/2`$ above the break. The value at which power law index breaks in the $`N_p`$ distribution is assumed to depend on $`E_p`$ as $`E_p^2`$, which is equivalent to the statement that the total energy per unit time emitted as gamma-rays is independent of $`E_p`$ for bursts that fall on the break of the peak-flux curve. Two models for the $`E_p`$ distribution are used. The first is a power law, so that there is no characteristic value of $`E_p`$ defined by the physics. The second is a log normal curve with two power laws joined on to the wings of the distribution. This second is used to fit the observed $`E_p`$ distribution to demonstrate the extent to which the physical $`E_p`$ distribution must break to reproduce the observations. The model distributions for a power law $`E_p`$ distribution are given in Figures 1 and 2. These curves are calculated for three different trigger energy ranges: channels $`1+2`$ ($`20\text{keV}`$$`100\text{keV}`$), channels $`2+3`$ ($`50\text{keV}`$$`300\text{keV}`$), and channels $`3+4`$ ($`>100\text{keV}`$). The power law index for the $`E_p`$ distribution is set to $`1`$. The difference between Figures 1 and 2 are in the minimum count rate. The first has a minimum count rate such that most bursts are in the $`3/2`$ portion of the cumulative peak-flux distribution, while the second has a count rate such that most bursts are on the $`0.8`$ portion of the cumulative peak-flux distribution. 
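The construction of these model curves can be condensed into a short Monte Carlo sketch: draw $`E_p`$ from the power-law distribution, draw the peak photon rate $`N_p`$ from the broken power law whose break scales as the inverse square of $`E_p`$, convert to a count rate, and keep only bursts above the count-rate limit. In the Python sketch below (ours), the photon-rate to count-rate conversion is an arbitrary smooth curve peaked over the 50 keV to 300 keV trigger band, standing in for the conversion derived from the detector simulations, so only the qualitative broadening and shift of the observed $`E_p`$ distribution are meaningful.

```python
import numpy as np

rng = np.random.default_rng(0)

n_bursts = 200_000
# Intrinsic E_p distribution: power law of index -1 between 20 keV and 2 MeV (log-uniform).
ep = 20.0 * (2000.0 / 20.0) ** rng.random(n_bursts)

def sample_np(ep, a1=-1.8, a2=-2.5, lo_decades=3.0):
    """Peak photon rate N_p from a broken power law whose break scales as E_p**-2
    (constant energy flux at the break); indices a1 below and a2 above the break."""
    n_break = (ep / 200.0) ** -2.0                                 # arbitrary normalization
    n_min = n_break * 10.0 ** (-lo_decades)
    w_lo = (n_break ** (a1 + 1) - n_min ** (a1 + 1)) / (a1 + 1)    # weight below the break
    w_hi = -n_break ** (a1 + 1) / (a2 + 1)                         # weight above the break
    take_lo = rng.random(ep.size) < w_lo / (w_lo + w_hi)
    u = rng.random(ep.size)
    below = (n_min ** (a1 + 1) + u * (n_break ** (a1 + 1) - n_min ** (a1 + 1))) ** (1.0 / (a1 + 1))
    above = n_break * (1.0 - u) ** (1.0 / (a2 + 1))
    return np.where(take_lo, below, above)

def efficiency(ep):
    """Schematic photon-rate to count-rate conversion, peaked over the trigger band."""
    return np.exp(-0.5 * (np.log(ep / 120.0) / 0.8) ** 2)

counts = sample_np(ep) * efficiency(ep)
c_min = np.percentile(counts, 95.0)          # count-rate threshold: keep the brightest bursts
observed = ep[counts > c_min]

hist, edges = np.histogram(np.log10(observed), bins=12)
for h, lo, hi in zip(hist, edges[:-1], edges[1:]):
    print(f"E_p in [{10 ** lo:6.0f}, {10 ** hi:6.0f}] keV : {h:6d} triggered")
```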
The effect of lowering the count rate limit is to broaden the $`E_p`$ distribution. ## 5 Fits of E-Peak Distributions The observed $`E_p`$ distribution was fit to the power law E-peak distribution. Most bursts have low count rates, placing them below the break in the peak-flux distribution, so the low count rate model for the $`E_p`$ distribution is used. We find that the best power law index for the $`E_p`$ distribution is $`0.94\pm 0.11`$, with $`\chi ^2=444.4`$ for 8 degrees of freedom. This is clearly a poor fit to the data. This model and the observed $`E_p`$ distribution are given in Figure 3. The model distribution is much broader than the observed distribution. This demonstrates that the physical distribution of $`E_p`$ must have a characteristic value near $`200\text{keV}`$ to produce the observed distribution. Just how narrow the physical distribution must be is demonstrated by the fit of a log normal distribution to the observations. In this distribution, the half maxima are a factor of 4 apart, and the power law tails have indexes of $`3.35`$ and $`2.58`$, so that the change in index exceeds 5. The model fit has $`\chi ^2=7.93`$ for 5 degrees of freedom, which is an excellent fit to the data. The physical distribution therefore must be very narrow to reproduce the observations. This fit also suggests that above $`1\text{MeV}`$, each decade in $`E_p`$ contains $`<10\%`$ of the bursts in the lower decade, and that below $`100\text{keV}`$, each decade contains $`<1\%`$ of the bursts in the higher decade. ## 6 Discussion We have presented the results of our study of the BATSE instrumental effects that affect the determination of the $`E_p`$ distribution. Our results are as follows: * Model fitting of spectra to gamma-ray burst spectra correctly gives the value of $`E_p`$ for $`20\text{keV}<E_p<2\text{MeV}`$. * The power law model curves for the $`E_p`$ distribution for bursts that trigger on the $`50`$$`300\text{keV}`$ count energy range have maxima at $`100\text{keV}`$, and half-maxima at $`20\text{keV}`$ and $`800\text{MeV}`$. * The power law model curves provide poor fits to the observations, because the observations are at $`10\%`$ of the peak values at the energies where the model curves have their half-maxima. * Fitting a log normal $`E_p`$ distribution to the observations finds that the best fit is very narrow, with power law wings that differ by 5 in index. These results demonstrate that the narrow $`E_p`$ distribution must have a physical origin. Theories of the prompt gamma-ray emission must provide an explanation for this property before they can be regarded as viable. ## Acknowledgments This research was supported under the NASA grant NAG5-6746. ## References
# An RXTE Observation of NGC 6300: a new bright Compton reflection Dominated Seyfert 2 Galaxy ## 1 Introduction The unified model for Seyfert galaxies proposes that orientation of a molecular torus determines the optical emission-line characteristics (e.g. Antonucci 1993). When the molecular torus lies in our line of sight, it blocks our view of the broad optical emission lines, leading to a Seyfert 2 classification. The X-ray emission of Seyfert 2 galaxies is frequently absorbed (e.g. Turner et al. 1998; Bassani et al. 1999), a result which supports this unified model. In the most extreme case, the obscuring material presents such a high column density to the observer that no X-rays are transmitted. Such objects are termed “Compton-thick” Seyfert 2s, and the only X-rays observed from them are ones that have been scattered from surrounding material. (Diffuse thermal X-rays may also contribute). Such objects are important because they directly support unified models for Seyfert galaxies. The scattering is thought to originate in one or both of two types of material, each of which imparts characteristic signatures on the observed X-ray spectrum (for a review, Matt 1997). Scattering can occur in the warm optically thin gas that is thought to produce the polarized broad lines seen in some Seyfert 2s. The resulting spectrum has the same slope as the intrinsic spectrum with superimposed emission lines from recombination. This is the process which appears to dominate in the archetype Seyfert 2 galaxy NGC 1068 (e.g. Netzer & Turner 1997). Scattering can also occur in optically thick cool material located on the surface of the molecular torus. In this case, the process is called Compton reflection (e.g. Lightman & White 1988), and the observed continuum spectrum is flat with superimposed K-shell fluorescence lines (e.g. Reynolds et al. 1994). Circinus can be considered the prototype of a Compton-reflection dominated Seyfert 2 galaxy (Matt et al. 1996). Compton-reflection dominated Seyfert 2 galaxies are important because they may comprise a significant fraction of the X-ray background (Fabian et al. 1990). They were once thought to be rare (e.g. Matt 1997); however, new observations of objects selected according to their \[O III\] emission-line flux, a method which ideally does not discriminate against highly absorbed objects, show that they may be more common than previously thought (Maiolino et al. 1998). However, bright examples of this class remain rare. This is not surprising, because the reflected X-rays are very much weaker than the primary continuum. We present the results of an RXTE observation of the nearby (18 Mpc) Seyfert 2 galaxy NGC 6300. This object has a flat hard X-ray spectrum and huge equivalent width iron line which suggests that it is a Compton-reflection dominated Seyfert 2 galaxy. If so, it is one of the brightest members of this class known, about half as bright as the prototype, Circinus, and far brighter than other examples. ## 2 Data Analysis NGC 6300 was first detected in hard X-rays during a Ginga maneuver (Awaki 1991). We proposed scanning and pointing observations of this galaxy using RXTE to confirm the Ginga detection. Another Seyfert 2 galaxy, NGC 6393, was detected during a Ginga scan and was also investigated as part of this proposal. The data show that NGC 6393 was very faint ($`<0.5\mathrm{counts}\mathrm{s}^1`$ in the top-layer for 5 PCUs). The scanning RXTE observation of NGC 6300 was performed on 1997 February 14–15. 
Four of 5 PCUs were on for the entire observation and analysis was confined to these detectors. A pointed observation followed on February 20, 1997, performed with all five PCUs on. The data were reduced using Ftools 4.1 and 4.2 and standard data selection criteria recommended for faint sources. The resulting exposure for the pointed observation was 24,896 seconds. NGC 6300 was detected in all three layers of the PCA, and the top and mid layers were used for spectral fitting. Background subtraction yielded net count rates for 5 PCUs of 4.4 counts s<sup>-1</sup> (12.5% of the total) between 3 and 24 keV for the top layer, and 0.86 counts s<sup>-1</sup> (8.8% of the total) between 9 and 24 keV for the mid layer. The current standard background model for the RXTE PCA is quite good; however, NGC 6300 is a relatively faint source and therefore we attempt to estimate systematic errors associated with the background subtraction. Above 30 keV, no signal should be detected; however, we found a positive signal which could be removed if the background normalization were increased by 1%. Below about 7 keV, no signal should be detected in the mid or bottom layers. We observed a small deficit in signal which would be removed if the background normalization were decreased by 1%. We consider this evidence that the systematic error on the background subtraction is less than 1%. The scan observation consisted of 4 passes over the object with a total scan length of 6 degrees. The scans were performed keeping the declination constant during the first two passes and the right ascension constant during the second two passes. The resulting scan profiles when compared with the optical position clearly indicate that NGC 6300 is the X-ray source. The field of view of the PCA is less than two degrees in total width. Since the scan length was 6 degrees, and since there are apparently no other X-ray sources in the field of view, the ends of the scan paths can be used to check the quality of the background subtraction. This was of some concern since NGC 6300 is rather near the Galactic plane ($`l=328`$, $`b=14`$) and thus there could be Galactic X-ray emission not accounted for in the background model. We accumulated spectra with offset from the source position $`>1.5^{}`$. The exposure time was 1472 seconds. The count rate between 3 and 24 keV was $`0.28\pm 0.19\mathrm{counts}\mathrm{s}^1`$, so there was no evidence for unmodeled Galactic emission. Thermal model residuals show no pattern; i.e., there is no evidence for a 6.7 keV iron emission line from the Galactic Ridge (Yamauchi & Koyama 1993). The spectrum from the pointed observation was first modeled using a power law plus Galactic absorption set equal to $`9.38\times 10^{20}\mathrm{cm}^2`$ (Figure 1; Dickey & Lockman 1990). This model did not fit the data well ($`\chi ^2=609`$ for 82 degrees of freedom (d.o.f.)). The photon index is very flat ($`\mathrm{\Gamma }=0.60`$), there is clear evidence for an iron emission line and there are negative low-energy residuals. Addition of a narrow ($`\sigma =0.05\mathrm{keV}`$) line with energy fixed at $`6.4\mathrm{keV}`$ improves the fit substantially ($`\mathrm{\Delta }\chi ^2=473`$) but low energy residuals remain. Additional absorption in the galaxy rest frame improves the fit substantially ($`\mathrm{\Delta }\chi ^2=45`$). Freeing the line energy again improves the fit ($`\mathrm{\Delta }\chi ^2=11`$); the best fit rest-frame line energy is $`6.26\mathrm{keV}`$. 
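For reference, the line strengths quoted in these fits are tied to the continuum through the usual definition of the equivalent width: the total line photon flux divided by the continuum photon flux density at the line energy. The Python sketch below (ours) writes down an absorbed power law plus a narrow Gaussian of the form used here and evaluates the equivalent width; the photoelectric cross section is a crude single power-law stand-in for the tabulated cross sections used in the actual fits, and the continuum normalization is an illustrative value chosen so that the example lands near the equivalent width quoted below, not a number taken from Table 1.

```python
import numpy as np

def sigma_abs(E):
    """Rough effective photoelectric cross section per H atom [cm^2], E in keV."""
    return 2.0e-22 * E ** -2.6

def photon_model(E, K, gamma, nh, e_line, f_line, sig_line=0.05):
    """Absorbed power law plus narrow Gaussian line [photons / cm^2 / s / keV]."""
    cont = K * E ** -gamma * np.exp(-nh * sigma_abs(E))
    line = f_line * np.exp(-0.5 * ((E - e_line) / sig_line) ** 2) / (sig_line * np.sqrt(2.0 * np.pi))
    return cont + line

def equivalent_width(K, gamma, e_line, f_line):
    """EW = total line photon flux / continuum photon flux density at the line energy."""
    return f_line / (K * e_line ** -gamma)          # keV

gamma, e_line = 0.68, 6.4          # photon index and line energy of the power-law fit
f_line = 4.7e-5                    # line photon flux quoted in Sec. 3.2 [ph/cm^2/s]
K = 1.8e-4                         # assumed 1 keV continuum normalization [ph/cm^2/s/keV]

print("EW = %.0f eV" % (1000.0 * equivalent_width(K, gamma, e_line, f_line)))
print("model at 3, 6.4, 20 keV:",
      photon_model(np.array([3.0, 6.4, 20.0]), K, gamma, 9.38e20, e_line, f_line))
```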
Freeing the line width marginally improves the fit ($`\mathrm{\Delta }\chi ^2=5`$). The final fit parameters are listed in Table 1 and fit results are shown in Figure 2. The absorbed power law plus iron line is an acceptable model. The notable properties of the fit are a very flat photon index ($`\mathrm{\Gamma }=0.68`$) and very large equivalent width ($`920\mathrm{eV}`$). Such parameters suggest that the spectrum of NGC 6300 is dominated by Compton-reflection (Matt et al. 1996; Malaguti et al. 1998; Reynolds et al. 1994). We next use the pexrav model in XSPEC to explore this possibility. This model calculates the expected X-ray spectrum when a point source of X-rays is incident on optically thick, predominately neutral (except hydrogen and helium) material. The parameter $`R`$ measures the solid angle $`\mathrm{\Omega }`$ subtended by the optically thick material: $`R=\mathrm{\Omega }/2\pi `$. The model that was fit includes a narrow iron line and a direct and reflected power law; additional absorption also appears to be necessary. The low resolution and limited band pass provided by the RXTE spectrum means that not all of the model parameters could be constrained by the data; thus, the energy of the exponential cutoff was fixed at 500 keV, approximately the value that has been found in OSSE data from Seyfert galaxies (Zdziarski et al. 1995), and the inclination was initially fixed arbitrarily at $`45^{}`$. This model provided a fairly good fit to the data ($`\chi ^2=112`$ for 77 d.o.f.). However, the best fit value of $`R`$ is very large (1450) and not well constrained. This indicates that the spectrum can be modeled using reflection alone, and there is no significant contribution of direct emission (e.g. Matt et al. 1996). Reflection alone gives the same $`\chi ^2`$ as the model which includes a weak direct component, but the fit is still not completely satisfactory, and it could be improved in two ways. The first is to allow the iron abundance to vary. We set the iron abundance relative to solar in the pexrav model equal to the abundances of light elements and allow these parameters to vary together. The fit is improved significantly ($`\mathrm{\Delta }\chi ^2=36`$ for $`\mathrm{\Delta }`$d.o.f.=1) and gives a slightly subsolar abundance. The second is to allow the inclination to be free. The fit is not very sensitive to this parameter, as $`\mathrm{\Delta }\chi ^2`$ over the whole range is 9.2. The best fit value is $`\mathrm{cos}(\mathrm{\Theta })=0.22`$, corresponding to 77 from the normal. The parameters are listed in Table 1. We investigated the choice of fixed parameters in the pexrav model a posteriori. Increasing the cutoff energy did not change the fit. Decreasing the cutoff to 100 keV produced small differences in the parameters; namely, the photon index was smaller, the abundance was higher, and the line flux was lower, but the differences are within the statistical errors of the adopted model. We also investigated the situation when the iron abundance was allowed to vary, but the abundances of lighter metals were maintained at the solar value. A significantly larger photon index by $`\mathrm{\Delta }\mathrm{\Gamma }0.15`$ was required, due to the decreased reflectivity in soft X-rays. We also investigated the effect of a 1% systematic error in the background normalization, and the results are listed in Table 1. 
This resulted in the largest change in the fit parameters, but the resulting estimated systematic errors are in the worst case less than a factor of two larger than the statistical errors. Because of the low energy resolution of the RXTE spectra, other models can be found which fit equally well. It is possible to describe the spectra using a sum of absorbed power laws (the “dual absorber” model; e.g. Weaver et al. 1994). Specifically, the model consisted of two power laws, both absorbed by a moderate column, and one absorbed by a heavy column. The resulting photon index was very flat ($`\mathrm{\Gamma }=1.15`$), and is therefore deemed unphysical. However, including reflection with $`R=1`$ in the dual absorber model gave an insignificant improvement in fit ($`\mathrm{\Delta }\chi ^2=0.6`$) but a more plausible photon index ($`\mathrm{\Gamma }=1.7`$). The fit parameters are given in Table 1. It is notable that the spectra cannot be described using a highly absorbed transmitted component and an unabsorbed Compton-reflection dominated component, as has been found to be appropriate for Mrk 3 (Cappi et al. 1999). ## 3 Discussion ### 3.1 Compton Reflection Continuum Model The X-ray continuum of NGC 6300 can be modeled as pure Compton reflection. For solar abundance, the iron line equivalent width relative to the reflection continuum is predicted to be between 1 and 2 keV depending on inclination (Matt, Perola & Piro 1991). When we fit a power law continuum to the spectra, the observed equivalent width is nearly 1 keV. However, when the reflection continuum is fitted, the measured equivalent width is reduced to 470 eV. The reason for the reduction in the measured equivalent width is that the reflection continuum model includes a substantial iron edge. In low resolution data, the iron line and iron edge overlap in the response-convolved spectra. Therefore, when the continuum is modeled by a power law, the iron line models both the line and the edge, so the measured equivalent width is larger than when the continuum is modeled by the reflection continuum which includes the iron edge explicitly. This effect can be seen in Figure 2. A significant improvement is obtained when the iron abundance in the reflection continuum model is allowed to be subsolar. The iron abundance is determined by the depth of the iron edge. Therefore, the subsolar abundance fits because the iron edge is apparently not as deep as the model predicts. The fact that the iron line has a lower equivalent width than predicted in the Compton reflection continuum model may also support subsolar abundance. Alternatively, however, the apparent subsolar abundances may at least partially be due to calibration uncertainties in the RXTE PCA. (The resolution of the RXTE PCA is under some debate; see, Weaver, Krolik & Pier 1998). The iron line and edge overlap in the response-convolved spectra. If the true energy resolution is worse than the current estimated value, then because the line is an excess and the edge is a deficit, both would be measured to be smaller than they really are. Another source of uncertainty may come from the models themselves, which depend strongly on the geometry of the illuminating and reprocessing material. Models generally assume a point source of X-rays located above a disk and illuminating it with high covering fraction. Such an ideal case may not be attained in nature. ### 3.2 Dual Absorber Continuum Model The dual absorber model can also describe the spectra. 
That such a fit is successful is not surprising, as any flat continuum can be modeled as a sum of absorbed power laws (e.g. the X-ray background). The iron line equivalent width for the dual absorber model is similar to that found for the Compton-reflection model, and smaller than that found for the power law model. The reason for the difference is that, like the Compton reflection continuum model, the dual absorber model explicitly includes an iron edge. A plausible origin for the iron line in the dual absorber model is in the absorbing material itself. However, the iron line equivalent width appears to be too large to have been produced in the absorbing material. We investigate this possibility by comparing the observed iron line flux to the predicted value from a spherical shell of gas surrounding an isotropically illuminating point source (Leahy & Creighton 1993). A line flux of $`2.3\times 10^5\mathrm{photons}\mathrm{cm}^2\mathrm{s}^1`$ is predicted for the absorption columns and covering fractions determined by the fit; this is about half of what is observed ($`4.7\times 10^5\mathrm{photons}\mathrm{cm}^2\mathrm{s}^1`$). The dual absorber model also requires a reflection component with $`R=1`$ and therefore an additional iron line with equivalent width $`100`$ eV is expected from the reflection. Then the predicted flux increases to $`3.3\times 10^5\mathrm{photons}\mathrm{cm}^2\mathrm{s}^1`$, about 70% of what is observed. The predicted line flux would be smaller if the absorbing material does not completely cover the source, a circumstance that would exacerbate the difference between predicted and observed flux. Thus, the iron line equivalent width, at least to first approximation, appears to be too large to have been produced in the absorbing material required by the dual absorber model. Therefore, either an iron overabundance is required in the dual absorber model, or the alternative model, the Compton-reflection dominated model, is favored. Discrimination between models will come with observations using detectors with better energy resolution. If NGC 6300 is a Compton-thick Seyfert 2, then we should detect soft X-ray emission lines (e.g. Reynolds et al. 1994). The lack of any observable soft excess in the RXTE data may imply that there is absorption by the host galaxy (see below), or that there is little contamination from thermal X-rays or scattering from warm gas. If the latter case is true, NGC 6300 will be a particularly clean example of a Compton reflection-dominated Seyfert 2 galaxy, and the observed soft X-ray emission lines should be unambiguously attributable to fluorescence. Such proof should be easily attainable as NGC 6300 is bright compared with known Compton-reflection dominated Seyfert 2 galaxies. ### 3.3 Information from Other Wavebands Optical observations provide some support for Compton-thick absorption in NGC 6300. Intrinsically, both hard X-rays and forbidden optical emission lines should be emitted approximately isotropically; therefore, the ratio of these quantities should be the same from object to object. However, if the absorption is Compton-thick, the observed hard X-ray luminosity and therefore $`L_X/L_{[OIII]}`$ will be significantly reduced; the ratio of the power law to reflection component 2–10 keV fluxes in the pexrav model is $`15`$. Care must be taken when applying this test, as the \[O III\] flux must be corrected for reddening in the narrow-line region, and determination of the reddening using narrow-line Balmer decrements can be difficult. 
The optical spectrum of NGC 6300 is dominated by starlight (e.g. Storchi-Bergmann & Pastoriza 1989); to remove the Balmer absorption from the stars, an accurate galaxy spectrum subtraction must be done. Another complication could be narrow Balmer lines from star formation. NGC 6300 has a well-studied starburst ring (e.g. Buta 1987), but H$`\alpha `$ images show that the H II regions are located $`>0.5^{}`$ from the nucleus, with a little diffuse emission inside of that (e.g. Evans et al. 1996). High quality long-slit spectra yield $`A_v\simeq 2.5`$–3 from both the red continuum and the Balmer decrement (Storchi-Bergmann 1999, priv. comm.). The observed 2–10 keV flux is $`6.4\times 10^{-12}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ (corresponding to a luminosity of $`2.5\times 10^{41}\mathrm{erg}\mathrm{s}^{-1}`$). Then $`L_X/L_{[OIII]}`$ is approximately 1.1–1.9. This value is quite low and comparable to those obtained from Compton-thick Seyfert 2s by Maiolino et al. (1998). In particular, Circinus has $`F_X=1.4\times 10^{-11}\mathrm{ergs}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ (Matt et al. 1999) and reddening-corrected $`F_{[OIII]}=1.95\times 10^{-11}`$ ($`A_v=5.2\pm 0.4`$; Oliva et al. 1994), yielding $`L_X/L_{[OIII]}=0.7`$. For comparison, a reddening-corrected sample of Seyfert 1s taken from Mulchaey et al. 1994 has a mean ratio of 14.8 ($`1\sigma `$ range 6.4–33.9). In contrast, the intrinsic luminosity (i.e. corrected for absorption) for the dual absorber model is $`6.9\times 10^{41}\mathrm{erg}\mathrm{s}^{-1}`$, implying $`L_X/L_{[OIII]}\simeq 3.0`$–5.2. It is notable also that NGC 6300 shows the reddest continuum toward the nucleus in a sample of objects studied with long slit spectroscopy (Cid Fernandes, Storchi-Bergmann & Schmitt 1998). Furthermore, there seems to be a correlation between the presence of a bar in the host galaxy and the presence of a Compton thick Seyfert 2 nucleus (Maiolino, Risaliti & Salvati 1999); NGC 6300 has a bar (e.g. Buta 1987). ### 3.4 Consistency with Einstein IPC Observation The X-ray spectra from Seyfert 2 galaxies frequently include a soft spectral component which may originate in scattering by warm gas or diffuse thermal X-rays. However, lack of detection in an Einstein IPC observation, combined with the RXTE observation presented here, shows that there is no evidence for such a component in NGC 6300. In 1979 NGC 6300 was observed with the Einstein IPC for 990 seconds. It was not detected and the three sigma upper limit to the count rate was $`1.19\times 10^{-2}\mathrm{counts}\mathrm{s}^{-1}`$ between 0.2 and 4.0 keV (Fabbiano, Kim, & Trinchieri 1992). The Compton reflection-dominated model predicts a count rate of $`1.3\times 10^{-2}\mathrm{counts}\mathrm{s}^{-1}`$, of the same order as the upper limit, and probably consistent within the uncertainties of the model and the relative flux calibrations of the two instruments. There may be intrinsic absorption in the system, for example, from the host galaxy. The observed optical reddening of $`A_v=2.5`$–3.0 corresponds to an absorption column of 4–5$`\times 10^{21}\mathrm{cm}^{-2}`$, assuming a standard dust to gas ratio. Including this column in the Compton-reflection dominated model leads to a predicted IPC count rate of $`1.1\times 10^{-2}\mathrm{counts}\mathrm{s}^{-1}`$, consistent with the upper limit. Because the RXTE band pass is truncated at 3 keV, it is impossible to estimate the intrinsic absorption with accuracy. For the Compton-reflection model, the 90% upper limit for one parameter of interest is $`1.8\times 10^{22}\mathrm{cm}^{-2}`$.
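The bookkeeping behind these numbers is easy to verify. The short script below is not part of the original analysis: the $`A_V`$-to-$`N_H`$ conversion factor (about 1.8×10²¹ cm⁻² per magnitude, a standard Galactic dust-to-gas value) is an assumption, and the distance is simply the one implied by the quoted flux and luminosity.

```python
import math

MPC_CM = 3.086e24                      # centimetres per megaparsec

# Circinus: ratio of the reddening-corrected fluxes quoted above
print("Circinus L_X/L_[OIII]:", round(1.4e-11 / 1.95e-11, 2))          # ~0.7

# Distance implied by the quoted NGC 6300 flux and luminosity, L = 4*pi*d^2*F
f_x, l_x = 6.4e-12, 2.5e41             # erg cm^-2 s^-1 and erg s^-1 (2-10 keV)
d_mpc = math.sqrt(l_x / (4.0 * math.pi * f_x)) / MPC_CM
print("implied distance     :", round(d_mpc, 1), "Mpc")                # ~18 Mpc

# Optical reddening -> absorbing column; the conversion factor is an assumed
# standard Galactic dust-to-gas value and varies somewhat in the literature.
NH_PER_AV = 1.8e21                     # cm^-2 per magnitude of A_V
for a_v in (2.5, 3.0):
    print(f"A_V = {a_v}: N_H ~ {NH_PER_AV * a_v:.1e} cm^-2")           # ~4-5e21
```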
Alternatively, there may have been variability within the 18 years between the IPC and the RXTE observation. A probable site for the reflection is the inner wall of the molecular torus which blocks the line of sight to the nucleus. In unified models for Seyfert galaxies, this material is located outside of the broad line region, and can be 1–100 pc from the nucleus. Short term variability is not expected; long term variability is possible but requires a long term trend in flux. ###### Acknowledgements. KML acknowledges useful discussions on the RXTE background with Keith Jahoda. KML gratefully acknowledges support by NAG-4112 (RXTE) and NAG5-7971 (LTSA).
# The infinite energy limit of the fine structure constant equal to 1/4⁢𝜋? (20 March 1999) ## Abstract A recently proposed topological mechanism for the quantization of the charge gives the value $`e_0=\sqrt{\mathrm{}c}`$ for both the fundamental electric and magnetic charges. It is argued here that the corresponding fine structure constant $`\alpha _0=1/4\pi `$ could be interpreted as its value at infinite energy. This letter proposes an argument in favour of the idea that the infinite energy limit of the fine structure constant is equal to $`1/4\pi `$. The argument is based on two grounds: (i) a recent topological mechanism for charge quantization which implies that the fundamental electric and magnetic charges are both equal to $`e_0=\sqrt{\mathrm{}c}=3.3e`$, the corresponding fine structure constant being $`\alpha _0=1/4\pi `$ ; and (ii) the appealing and plausible idea that, in the limit of very high energies, the interactions of charged particles could be determined by their bare charges (this meaning the value that their charges would have if they were not renormalized by the quantum vacuum, see for instance section 11.8 of ). A warning is however necessary: the concept of bare charge is more complex than what was thought some time ago, so that it is better now to speak of charge at a certain scale. To be precise, when the expression “bare charge” will be used here, it will be taken as equivalent and synonymous to “infinite energy limit of the charge” or, more correctly, “charge at infinite momentum transfer”, defined as $`e_{\mathrm{}}=\sqrt{4\pi \mathrm{}c\alpha _{\mathrm{}}}`$, where $`\alpha _{\mathrm{}}=lim\alpha (Q^2)`$ when $`Q^2\mathrm{}`$. The possibility of a finite value for $`\alpha _{\mathrm{}}`$ is an intriguing idea worth of study. Indeed, it was discussed very early by Gell-Mann and Low in their classic and seminal paper “QED at small distances” , in which they showed that it is something to be seriously considered. However, they could not decide with their analysis whether $`e_{\mathrm{}}`$ is finite or infinite. The standard QED statement that it is infinite was established later because of perturbative calculations, but it can not be said that the alternative presented by Gell-Mann and Low had been definitely settled. The infinite energy charge $`e_{\mathrm{}}`$ of an electron is partially screened by the sea of virtual pairs that are continuously being created and destroyed in empty space. It is hence said that it is renormalized. As the pairs are polarized, they generate a cloud of polarization charge near any charged particle, with the result that the observed value of the charge is smaller than $`e_{\mathrm{}}`$. Moreover, the apparent electron charge increases as any probe goes deeper into the polarization cloud and is therefore less screened. This effect is difficult to measure, as it can only be appreciated at extremely short distances, but it has been observed indeed in experiments of electron-positron scattering at high energies . In other words: the vacuum is dielectric. On the other hand, it is paramagnetic, since its effect on the magnetic field is due to the spin of the pairs. As a consequence, the hypothetical magnetic charge would be observed with a greater value at low energy than at very high energy, contrariwise to the electron charge. The name bare charge is appropriate for $`e_{\mathrm{}}`$, as it is easy to understand intuitively. 
When two electrons interact with very high momentum transfer, each one is so deeply inside the polarization cloud around the other that no space is left between them to screen their charges, so that the bare values, i.e. $`e_{\mathrm{}}`$, interact directly. As unification is assumed to occur at very high energy, it is an appealing idea that $`\alpha _{\mathrm{}}=\alpha _{\mathrm{GUT}}`$ (it is true that one could imagine that $`\alpha (Q^2)`$ has a plateau at the unification scale corresponding to a critical value smaller than $`\alpha _{\mathrm{}}`$, but we assume the simpler situation in which that plateau does not exist). This suggests that a unified theory could be a theory of bare particles (in the sense of neglecting the effect of the vacuum). If this were the case, nature would have provided us with a natural cutoff, in such a way that $`\alpha _{\mathrm{GUT}}=\alpha _{\mathrm{}}`$. The charge quantization mechanism given in is based on a topological model endowed with a structure induced by the topology of the magnetic and electric force lines, which are represented as the level curves of a couple of complex scalar fields $`\varphi ,\theta `$ , the electromagnetic tensor $`F_{\mu \nu }`$ being expressed in terms of these scalars by a certain precise transformation $`T:\varphi ,\theta F_{\mu \nu }`$. The scalars $`\varphi ,\theta `$ obey highly nonlinear equations. Surprisingly however these nonlinear equations are transformed exactly into Maxwell equations by the transformation $`T`$. Consequently, the $`F_{\mu \nu }`$ of the model are standard Maxwell fields (although behaving in a particular way around the infinity), so that it is equivalent to Maxwell standard theory in any bounded spacetime domain. A consequence of that topological structure is that the charge inside any volume is always equal to $`n\sqrt{\mathrm{}c}`$, the integer $`n`$ being understood as the degree of a map between two spheres. It turns out that each electric line around a point charge is labelled by a complex number, the value of $`\theta (𝐫,t)`$ along it, in such a way that there are exactly $`n`$ lines with the same label, taking into account the orientation of the map (the same would apply to a magnetic charge, with $`\varphi `$ instead of $`\theta `$). As this topological mechanism operates at the classical level and since the charge is necessarily affected by the quantum vacuum to give the dressed observed value, the fundamental charge $`e_0=\sqrt{\mathrm{}c}`$ must be interpreted as the infinite energy value of both the electric and magnetic charges $`e_{\mathrm{}}`$ and $`g_{\mathrm{}}`$. In other words, the model predicts that $`e_{\mathrm{}}=g_{\mathrm{}}=e_0`$. (It is perhaps worth mentioning that, in a different context, these topological ideas have inspired a model of ball lightning in which this phenomenon is assumed be a magnetic knot coupled to a plasma . The linking of magnetic lines turns out to have a stabilizing effect which allow the fireballs to last for much more time than expected.) As a consequence of these considerations, the argument announced in the first line which leads to the equalities $`\alpha _{\mathrm{GUT}}=\alpha _{\mathrm{}}=1/4\pi `$ goes as follows: 1. The value of the fundamental charge implied by the topological mechanism $`e_0=\sqrt{\mathrm{}c}`$ is in the right interval to verify $`e_0=e_{\mathrm{}}=g_{\mathrm{}}`$, that is to be equal to the common value of both the fundamental electric and magnetic infinite energy charges. 
This is so because, as the quantum vacuum is dielectric but paramagnetic, the following inequality must be satisfied then: $`e<e_0<g`$, as it is indeed, since $`e=0.3028`$, $`e_0=1`$, $`g=e/2\alpha =20.75`$, in natural units. Note that it is impossible to have a complete symmetry between electricity and magnetism simultaneously at low and high energy. The lack of symmetry between the electron and the Dirac monopole charges would be due, in this view, to the vacuum polarization: according to the topological model, the electric and magnetic infinite energy charges are equal and verify $`e_{\mathrm{}}g_{\mathrm{}}=e_0^2=1`$, but they would be decreased and increased, respectively, by the sea of virtual pairs, until the electron and the monopole charge values verifying the Dirac relation $`eg=2\pi `$ . The qualitative picture seems nice and appealing. 2. Let us admit as a working hypothesis that two charged particles interact with their bare charges in the limit of very high energies (as explained above). There could be then a conflict between (i) a unified theory of electroweak and strong forces, in which $`\alpha =\alpha _s`$ at very high energies, and (ii) an infinite value of $`\alpha _{\mathrm{}}`$. This is so because unification implies that the curves of the running constants $`\alpha (Q^2)`$ and $`\alpha _s(Q^2)`$ must converge asymptotically to the same value $`\alpha _{\mathrm{GUT}}`$. It could be argued that, to have unification at a certain scale, it would be enough that these two curves be close in an energy interval, even if they cross and separate afterwards. But, in that case, the unified theory would be just an approximate accident at certain energy interval. On the other hand, the assumption that both running constants go asymptotically to the same finite value $`\alpha _{\mathrm{GUT}}`$ gives a much deeper meaning to the idea of unified theory, and is therefore much more appealing. In that case, $`e_{\mathrm{}}`$ must be expected to be finite, and the equality $`\alpha _{\mathrm{GUT}}=\alpha _{\mathrm{}}`$ must be satisfied. 3. The value $`\alpha _0=e_0^2/4\pi \mathrm{}c=1/4\pi =0.0796`$ for the infinite energy fine structure constant $`\alpha _{\mathrm{}}`$ is thought provoking and fitting, since $`\alpha _{\mathrm{GUT}}`$ is believed to be in the interval $`(0.05,\mathrm{\hspace{0.17em}0.1})`$ (some say furthermore that close to 0.08). This reaffirms the assert that the fundamental value of the charge given by the topological mechanism $`e_0`$ could be equal to $`e_{\mathrm{}}`$, the infinite energy electron charge (and the infinite energy monopole charge also), and supports the statement that $`\alpha _{\mathrm{GUT}}`$ must be equal to $`\alpha _0`$ and to $`1/4\pi `$. All this is certainly curious and intriguing since the topological mechanism for the quantization of the charge is obtained just by putting some topology in elementary classical low energy electrodynamics . 
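The numerical statements in point 1 are unit conversions that can be checked in a few lines; the sketch below (not from the letter, plain arithmetic in natural Heaviside–Lorentz units with ℏ = c = 1) reproduces the quoted values.

```python
import math

alpha = 1.0 / 137.036                  # measured low-energy fine structure constant
e = math.sqrt(4.0 * math.pi * alpha)   # electron charge, alpha = e^2 / (4*pi*hbar*c)
e0 = 1.0                               # topological charge e0 = sqrt(hbar*c) = 1 here
g = 2.0 * math.pi / e                  # Dirac monopole charge from e*g = 2*pi

print("e       =", round(e, 4))                        # 0.3028
print("e0 / e  =", round(e0 / e, 2))                   # ~3.3, i.e. e0 = 3.3 e
print("g       =", round(g, 2))                        # 20.75, equal to e/(2*alpha)
print("alpha_0 =", round(e0**2 / (4.0 * math.pi), 4))  # 1/(4*pi) = 0.0796
print("e < e0 < g :", e < e0 < g)                      # the ordering required above
```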
The conclusion of this letter is that the following three ideas must be studied carefully: (i) the complete symmetry between electricity and magnetism at the level of the infinite energy charges, both being equal to $`\sqrt{\hbar c}`$, the symmetry being broken by the dielectric and paramagnetic quantum vacuum; (ii) that the topological model on which the topological mechanism of quantization is based could give a theory of high energy electromagnetism at the unification scale; and (iii) that the value which it predicts for the infinite energy fine structure constant, $`\alpha _0=1/4\pi `$, could be equal to $`\alpha _{\infty }`$ and also to $`\alpha _{\mathrm{GUT}}`$, the constant of the unified theory of strong and electroweak interactions. In this way the three quantities, namely the electric and magnetic fine structure constants at infinite momentum transfer and $`\alpha _{\mathrm{GUT}}`$, would be equal and there would be a complete symmetry between electricity, magnetism and the strong force at the level of bare particles (i.e. at $`Q^2=\infty `$), this symmetry being broken by the effect of the quantum vacuum.
# Harmonic generation from YBa₂Cu₃O_{7-δ} microwave resonators ## I Acknowledgements Stimulating discussions with D. E. Oates are thankfully acknowledged. Work at Northeastern was supported by NSF-9711910 and AFOSR.
# Nonergodicity transitions in colloidal suspensions with attractive interactions ## I Introduction Attractions among stable colloidal particles lead to a diverse phase behavior. Colloidal attractions, unlike the molecular attractions, act usually over a relatively short (compared to the particle size) range. It is by now well established that when the range of the colloidal attraction is decreased the phase diagram undergoes a progression from gas–liquid–solid to fluid–solid coexistence, the latter with a subcooled critical point which is metastable with respect to fluid–solid coexistence . Numerous experimental studies show that suspensions often form incompletely equilibrated solids with the appearance of gels where one expects a fluid–solid or gas–liquid phase separation from equilibrium theory. The systems studied include mixtures of colloids and non-adsorbing polymer and sterically stabilized colloids in marginal solvents . In the former case, the attractions stem from depletion of the polymer coils from the regions between closely spaced particles , and in the latter they are caused by surface grafted chain–chain interactions . The gel transition is observed when the range of attraction is short compared to the particle size. In the colloid–polymer mixtures this is achieved by choosing a small ratio of polymer to colloid size, whereas the overlap length of the surface grafted chains sets essentially the range of the attraction in the sterically stabilized particle systems. While the equilibrium phase behavior of these systems is well understood , the nature of the gel transition remains to be clarified. The gel state appears to be related to a ramified structure with interconnected particle clusters . Temporal density fluctuations are very slow close to the gel transition and the suspensions acquire a yield stress and a finite low-frequency elastic shear modulus in the gel . In the past the transition to the gel state has most often been interpreted as either a static percolation transition , where a sample-spanning cluster of particles forms, or due to the fluid–solid phase transition . Comparison between integral equation predictions for the percolation transition and experimental data, however, shows that the gel transition is confined to the region in the phase diagram between the static percolation threshold and the gas–liquid spinodal . Colloidal gels have also been attributed to states inside a gas–liquid binodal which is metastable with respect to fluid–solid coexistence. Such metastable binodals have indeed been observed for suspensions of globular proteins , which also form gels when the ionic strength is sufficiently high . In this work we present an alternative interpretation of the dynamical arrest of the gel structure which causes colloidal systems to become disordered solids. We propose that colloidal gels are nonergodic systems that form when a dynamic gel transition is traversed. We further suggest that the gel transition is a low temperature extension of the liquid–glass transition. The gels, however, differ physically from colloidal hard sphere glasses in that they generally display a larger elastic shear modulus and that the particles are more strongly localized in the gels; both of these effects are due to particle clustering induced by a short–range attraction among particles. 
We demonstrate that this is a possible explanation for colloidal gel formation by applying the idealized mode coupling theory (MCT) of the form used to study the liquid–glass transition to systems in which the attraction is restricted to short ranges. This study is motivated by the observations made by Verduin and Dhont , who noted a structural arrest (nonergodicity) in connection with the gel transition similar to that observed for hard sphere colloidal glasses . Also Poon et al. have made such observations in the so-called transient gelation region of the colloid–polymer phase diagram for short polymers. Krall and Weitz have shown that, in the limit of strong particle aggregation, suspensions become nonergodic even at low colloid densities. To date, however, only a speculative connection has been made between the gel and liquid–glass transitions . We conduct a study of ergodicity breaking in two model systems: Baxter’s adhesive hard sphere (AHS) system and the hard core attractive Yukawa (HCAY) system. Both systems supply analytical solutions for the static structure factor, the former within Percus-Yevick (PY) theory , and the latter within the mean spherical approximation (MSA) . This study provides more information on the AHS phase diagram and, in addition, serves to complement a recent independent MCT study on the temperature dependence of the AHS glass transition. Further, the HCAY system provides a likely candidate for the gel transition in colloidal systems as an ergodicity breaking dynamic transition of the same type as the liquid–glass transition, suggesting that the experimentally observed gel transition is a low temperature extension of the glass transition. In what follows, the idealized mode coupling theory of the liquid–glass transition, suitable for colloidal suspensions, is briefly summarized. Results for the temperature dependence of the glass transition are then presented and compared to the AHS phase diagram as predicted by density functional theory. Results are also shown for the HCAY system, which show that the MCT glass transition extends to the critical and subcritical region at low temperatures. The way in which the glass transition line traverses this part of the phase diagram is shown to be a strong function of the range of attraction. ## II Mode coupling theory The mode coupling theory (MCT) of the liquid–glass transition provides a dynamic description of the transition . For sufficiently strong interactions the dynamical scattering functions do not decay to zero with time, leaving instead finite residues – the nonergodicity parameters, also known as Edwards–Anderson parameters or glass form factors. Generically, concurrent with this long-time diffusion ceases and the zero-shear viscosity diverges, both due to a diverging relaxation time. This structural relaxation time is in turn related to the particles’ inability of escaping their nearest neighbor cages. The glass transition within the framework of the idealized MCT is not a conventional thermodynamic phase transition; the constrained motion of the particles leads to a difference between time and ensemble averages, i.e. an ergodicity breaking transition. 
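The character of this ergodicity-breaking bifurcation can be previewed with the simplest schematic, single-mode caricature of MCT, $`f/(1-f)=vf^2`$ — not the wavevector-dependent model solved in this work — which shows how a nonzero long-time limit appears discontinuously once a smoothly varying coupling $`v`$ crosses a critical value (here $`v=4`$):

```python
def schematic_long_time_limit(v, iterations=5000):
    """Largest solution of f/(1-f) = v*f**2, via the monotone iteration
    f -> m/(1+m) with m = v*f**2, started from f = 1."""
    f = 1.0
    for _ in range(iterations):
        f = v * f * f / (1.0 + v * f * f)
    return f

for v in (3.0, 3.9, 4.0, 4.1, 6.0):
    print(f"coupling v = {v:3.1f}  ->  long-time limit f = {schematic_long_time_limit(v):.4f}")
# f stays at 0 (ergodic fluid) for v < 4 and jumps discontinuously to f >= 1/2
# (nonergodic glass) at v = 4, even though v itself varies smoothly.
```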
Application of the MCT to the Mori–Zwanzig reduced equations of motion for the density correlators, together with a $`t\to \infty `$ limit, leads to the following set of closed equations
$$\frac{f_q}{1-f_q}=\frac{\rho }{2(2\pi )^3q^2}\int d𝐤\,V(𝐪,𝐤)^2S_qS_kS_{|𝐪-𝐤|}f_kf_{|𝐪-𝐤|}$$ (1)
$$V(𝐪,𝐤)=\widehat{𝐪}\cdot (𝐪-𝐤)\,c_{|𝐪-𝐤|}+\widehat{𝐪}\cdot 𝐤\,c_k+q\rho \,c^{(3)}(𝐪,𝐪-𝐤)$$ (2)
$$\frac{f_q^s}{1-f_q^s}=\frac{\rho }{(2\pi )^3q^2}\int d𝐤\,V^s(𝐪,𝐤)^2S_kf_kf_{|𝐪-𝐤|}^s$$ (3)
$$V^s(𝐪,𝐤)=\widehat{𝐪}\cdot 𝐤\,c_k$$ (4)
where $`\rho `$ is the number density, $`S_q`$ is the static structure factor, $`c_q=(S_q-1)/\rho S_q`$ is the Fourier–transformed Ornstein–Zernike direct correlation function, and $`c^{(3)}`$ is the triplet direct correlation function. In this study we use primarily the so-called convolution approximation ($`c^{(3)}=0`$) for the triplet direct correlation function. Note that the coupling vertices $`V(𝐪,𝐤)`$ and $`V^s(𝐪,𝐤)`$ in Eqs. 1 and 3 are free of singularities and vary smoothly with a set of external control parameters, e.g., the particle density. The coherent ($`f_q`$) and incoherent ($`f_q^s`$) nonergodicity parameters are defined as the long-time limits of the intermediate scattering function $`F_q(t)`$ and the self-intermediate scattering function $`F_q^s(t)`$, according to
$$f_q=\lim_{t\to \infty }\left(F_q(t)/S_q\right)$$ (5)
$$f_q^s=\lim_{t\to \infty }\left(F_q^s(t)\right).$$ (6)
The nonergodicity parameters determine the properties of the glass. The zero-frequency elastic shear modulus of the colloidal glass (in units of $`k_BT/\sigma ^3`$, with $`k_BT`$ the temperature and $`\sigma `$ the particle diameter) is given by
$$G=\frac{\sigma ^3}{60\pi ^2}\int _0^{\infty }dk\,k^4\left(\frac{d\mathrm{ln}S_k}{dk}f_k\right)^2.$$ (7)
The incoherent nonergodicity parameter $`f_q^s`$ is found to be well approximated by a Gaussian, the half-width of which is proportional to the mean-square displacement in the glass state . The localization length $`r_s`$ is defined as the root-mean-square displacement in the glass, and is determined from $`f_q^s=1-q^2r_s^2`$ for $`q\to 0`$ . Eqs. 1 and 3 are solved self-consistently for the nonergodicity parameters as functions of specified external control parameters: the reduced temperature $`\tau `$ and volume fraction $`\varphi =\pi \rho \sigma ^3/6`$ for the AHS system, and the reduced temperature K<sup>-1</sup>, screening parameter b, and volume fraction for the HCAY system (cf. below). The solution proceeds by iteration, first on $`f_q`$, and subsequently on $`f_q^s`$. Transition lines delineating ergodic and nonergodic states were found by bracketing, and the monotonicity property of the iteration was employed. The integrations were performed numerically using Simpson's rule on a uniformly discretized wavevector grid: $`q_i=i\mathrm{\Delta }q`$, $`i=0,\dots ,N`$. The parameters $`\mathrm{\Delta }q`$ and $`N`$ used varied somewhat, but most results were obtained using $`0.15<\mathrm{\Delta }q\sigma <0.30`$ and $`600<N<1000`$. The iterative solution scheme was complemented occasionally by an algorithm to speed up convergence by using stored previous iterates . We have also directly integrated the equations of motion (adjusted to obey Smoluchowski dynamics). This yields the entire time-evolution of the density correlators $`F_q(t)`$ and $`F_q^s(t)`$, the long-time limits of which were found to be identical to the solutions of Eqs.
1 and 3. The result $`f_q=f_q^s=0`$ is always a solution to Eqs. 1 and 3, implying that correlations among density fluctuations vanish for long times. At low densities this is the only solution, hence the system is in a fluid, possibly metastable fluid state. Above a critical volume fraction $`\varphi _c`$ also non-zero solutions appear, which correspond to nonergodic glass states. The physical solutions to Eqs. 1 and 3, corresponding to the long-time limits defined by Eqs. 5 and 6, were identified by choosing the largest solutions for $`f_q`$ and $`f_q^s`$ . ## III Adhesive hard sphere system The AHS pair potential consists of an infinitely deep and narrow well located at particle contact, given explicitly by $$u(r)/k_BT=\underset{d\sigma ^+}{lim}\{\begin{array}{cc}\mathrm{}\hfill & 0<r<\sigma \hfill \\ \mathrm{ln}\left[\frac{12\tau (d\sigma )}{d}\right]\hfill & \sigma <r<d\hfill \\ 0\hfill & d<r\hfill \end{array}$$ (8) where $`\tau `$ is a reduced temperature and $`r`$ is the interparticle separation distance. The $`\tau \mathrm{}`$ limit of the PY–AHS system corresponds to the PY hard sphere system. Using this as the starting point, the earlier MCT result for the hard sphere glass transition volume fraction $`\varphi _c=0.516`$ was reproduced. With more accurate hard sphere static inputs one obtains $`\varphi _c=0.525`$ . Experiments locate the colloidal hard sphere glass transition at $`\varphi _c0.58`$ , showing that the idealized MCT prediction for the hard sphere $`\varphi _c`$ is too low; this is presumably caused by the strong restriction of the modes of structural relaxation imposed in the MCT. The locus of critical glass transition points is shown in Fig. 1 as a function of the particle volume fraction $`\varphi `$ and the reduced temperature $`\tau `$, with the shaded region denoting nonergodic glass states. Upon decreasing $`\tau `$ the glass transition point moves along the line B1 in Fig. 1 to higher density, contrary to the findings for particles interacting via a molecular interaction potential of Lennard–Jones form . We see that starting with a hard sphere glass and introducing a short–range attraction leads to an ergodicity restoring transition, provided the density of the isochore is not too high. The glass transition for $`\tau =2`$ occurs at a volume fraction of $`0.5527`$, significantly higher than the MCT hard sphere result. The strength of attraction needed to shift the glass transition to higher density is weak; a second virial coefficient mapping reveals that a reduced temperature $`\tau =2`$ corresponds roughly to a $`0.05\sigma `$ wide square-well with a depth of $``$0.6 $`k_BT`$. An examination of the static structure factors along the critical glass transition boundary B1 in Fig. 1 shows that they are not markedly different from those of hard sphere suspensions; however, subtle changes in the structure lead to significant changes in the critical glass transition density. At high temperatures we observe that the MCT predictions for the localization length $`r_s`$ follow roughly a Lindemann criterion, given by the hard sphere result $`r_s0.074\sigma `$, in agreement with the results of the Lennard–Jones study . Below $`\tau 5`$, however, the Lindemann criterion is violated, the particles being now more strongly localized in the glass state. We note also that inclusion of improved triplet correlations in the manner of Barrat et al. 
, who used an approximation due to Denton and Ashcroft for $`c^{(3)}`$ , leads to a qualitatively similar phase diagram with boundaries shifted slightly to lower densities and temperatures relative to those shown in Fig. 1. The AHS phase behavior has been the subject of several studies, most of them using density functional theory (DFT) . Selecting the most recent one by Marr and Gast for comparison, who used the modified weighted–density approximation (MWDA) , we find that the B1 glass transition is confined to the metastable region between the fluid–solid coexistence lines (see Fig. 1). One striking feature is that the B1 glass transition line from MCT and the MWDA freezing transition line track each other, the quantity $`\mathrm{\Delta }\varphi =\varphi _c\varphi _f`$, with $`\varphi _f`$ the volume fraction at freezing, being nearly temperature independent. This result has interesting consequences for the diffusion constants at the freezing densities . When sufficiently close to the glass transition the long-time self diffusion coefficient assumes its asymptotic behavior governed by the distance to the glass transition singularity. The normalized long-time self diffusion coefficient has been found to exhibit universality along the fluid–solid freezing transition . The comparison made here shows that this condition may be related to a universality of the proximity of the freezing transition to the glass transition. At least, it suggests a deeper connection between MCT for the liquid–glass transition and DFT, a topic that has been explored to some extent . In following the glass transition line from high to low temperatures (the line denoted by B1 in Fig. 1), we find a line crossing similar to that studied within schematic ($`q`$–independent) models . This region in the phase diagram has been studied in detail recently by Fabbian et al. . At the crossing between the B1 and B2 glass transition lines, it is B2 that determines the behavior of the physical solution as the nonergodicity parameters associated with B2 are found to be always greater than those associated with B1. Thus, along the B2 line bordering the fluid phase, $`f_q`$ for each $`q`$ jumps discontinuously between 0 and finite values, and between smaller and larger finite values when the B2 line is traversed in the glass. The appearance of the B2 line is a result of an endpoint (cusp, A3) singularity in the AHS phase diagram, where three solutions of Eq. 1 for $`f_q`$ coalesce. This singularity appears as the termination point of the B2 transition line. It is connected to another bifurcation point with triply degenerate $`f_q`$ solutions located at lower temperature by the B3 transition line shown in Fig 1. Neither the low temperature endpoint, the piece of B1 between it and B2, nor the B3 transition line play a role in determining the glass dynamics, but show the connectivity among the bifurcation solutions of Eq. 1. The B2 line in Fig. 1 exhibits unusual properties. Varying the numerical parameters $`N`$ and $`\mathrm{\Delta }q`$, such that the maximum wavevector $`q_{max}=N\mathrm{\Delta }q`$ changes, shifts the location of the B2 glass transition line and the high temperature endpoint in the phase diagram. Such a variation is not observed in connection with the B1 glass transition line. In addition, we were unable to identify a set of $`N`$ and $`\mathrm{\Delta }q`$ such that the $`f_q`$ associated with the B2 glass transition decays to zero within the prescribed wavevector range. The results shown in Fig. 
1 were obtained using $`N=700`$ and $`\mathrm{\Delta }q\sigma =0.2`$. It is possible that the anomalous behavior of the B2 glass solutions results from the atypical behavior of the AHS $`S_q`$ in the large $`q`$ limit caused by the singular nature of the AHS pair potential. The AHS $`S_q`$ decays slowly for large $`q`$ as $`S_q\simeq 1+2\varphi \lambda _\mathrm{B}\mathrm{sin}(q\sigma )/q\sigma `$, where $`\lambda _\mathrm{B}`$ is the solution of Baxter's quadratic equation . We do not consider the behavior of the B2 glass solutions here further; instead, we show in the next section that the HCAY system exhibits glass transition lines that extend to low temperatures and densities. Several properties of these low temperature glasses are in qualitative agreement with experiments on colloidal gels. ## IV Hard core attractive Yukawa system In this section we examine the effect on the glass transition of introducing a finite range of attraction via the HCAY system. The HCAY pair potential is given by $$u(r)/k_BT=\{\begin{array}{cc}\infty \hfill & 0<r<\sigma \hfill \\ -\frac{\text{K}}{r/\sigma }e^{-\text{b}(r/\sigma -1)}\hfill & \sigma <r\hfill \end{array}$$ (9) where the dimensionless parameter K regulates the depth of the attractive well and the reduced screening parameter b sets the range of the attraction. Using the MSA static structure factor as input, the MCT was solved for three different screening parameters: b = 7.5, 20, and 30. The progression of the glass transition can be traced from the (PY) hard sphere limit, corresponding to K=0, to lower temperatures in terms of the reduced temperature K<sup>-1</sup>. In Fig. 2 we show the gas–liquid spinodal curves, studied in detail by Cummings, Smith, and Stell . The critical temperature is sensitive to the range of the attraction, decreasing with increasing b. The spinodal curves are shown as indicators of where gas–liquid phase coexistence will occur, should there be a stable liquid phase. Included in the diagrams in Fig. 2 are the corresponding loci of glass transition points. At high temperatures and small values of b (b=7.5) they are relatively insensitive to the strength of the attraction, showing only a minor initial movement toward higher densities. Increasing the value of the screening parameter b (b=20 and 30), which decreases the range of the attraction, leads to a small initial increase of the glass transition density upon lowering the temperature, although this trend is not as pronounced as in the AHS system; subsequently, at lower temperatures the glass transition is induced at increasingly lower densities. For b=7.5 and 20 the nonergodicity transition lines reach subcritical temperatures and approach the liquid side of the spinodal. For b=30 the nonergodicity transition line lies entirely within the fluid phase above the two phase region, and extends to subcritical temperatures at low densities. The MCT used here does not account for large concentration gradients and additional critical slowing of relaxations. As there is no small expansion parameter in MCT, it is difficult to ascertain when, upon approaching a critical point, this form of MCT should be replaced by a more complete theory, including a more sophisticated handling of the critical dynamics (see ref. and references therein). At high temperatures wavevectors around the primary peak of the structure factor contribute the most to the mode coupling integrals in Eqs. 1 and 3.
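Before turning to the temperature dependence, the numerical scheme of Sec. II can be made concrete with a small, self-contained sketch. The code below is not the authors' program: it solves Eq. (1) only in the hard-sphere limit (K = 0) with Percus–Yevick input, uses the standard reduction of the three-dimensional integral to the magnitudes $`k`$ and $`p=|𝐪-𝐤|`$, adopts the convolution approximation $`c^{(3)}=0`$, and trades the paper's finer grid and Simpson's rule for a coarser grid and plain trapezoidal quadrature; the iteration starts from $`f_q=1`$ and decreases monotonically to the physical (largest) solution.

```python
import numpy as np

def py_structure(phi, q, sigma=1.0):
    """Percus-Yevick c(q) and S(q) for hard spheres at packing fraction phi."""
    rho = 6.0 * phi / (np.pi * sigma**3)
    A = (1.0 + 2.0 * phi)**2 / (1.0 - phi)**4
    B = -6.0 * phi * (1.0 + 0.5 * phi)**2 / (1.0 - phi)**4
    D = 0.5 * phi * A
    r = np.linspace(1e-6, sigma, 4000)
    c_r = -(A + B * (r / sigma) + D * (r / sigma)**3)     # PY c(r) inside the core, 0 outside
    # radial 3D Fourier transform: c(q) = (4*pi/q) * int dr r sin(q r) c(r)
    c_q = np.array([4.0 * np.pi / qi * np.trapz(r * np.sin(qi * r) * c_r, r) for qi in q])
    return 1.0 / (1.0 - rho * c_q), c_q, rho              # S(q), c(q), rho

def mct_map(f, q, S, c, rho):
    """One application of the right-hand side of Eq. (1), with the 3D integral
    reduced to the magnitudes k and p = |q - k| (convolution approximation)."""
    dq = q[1] - q[0]
    m = np.zeros_like(q)
    for i, qi in enumerate(q):
        acc = 0.0
        for j, k in enumerate(q):
            sel = (q >= abs(qi - k)) & (q <= qi + k)
            p = q[sel]
            V = (qi**2 + k**2 - p**2) * c[j] + (qi**2 - k**2 + p**2) * c[sel]
            acc += np.trapz(k * p * V**2 * S[j] * S[sel] * f[j] * f[sel], p) * dq
        m[i] = rho * S[i] / (32.0 * np.pi**2 * qi**5) * acc
    return m / (1.0 + m)          # invert m = f/(1 - f)  ->  f = m/(1 + m)

def nonergodicity_parameter(phi, N=150, dq=0.3, tol=1e-7, itmax=400):
    """Iterate Eq. (1) to its long-time solution; the paper uses finer grids
    (0.15 < dq*sigma < 0.30, 600 < N < 1000) and Simpson's rule."""
    q = dq * np.arange(1, N + 1)
    S, c, rho = py_structure(phi, q)
    f = np.ones_like(q)           # start above the glass solution; the iteration is monotone
    for _ in range(itmax):
        f_new = mct_map(f, q, S, c, rho)
        if np.max(np.abs(f_new - f)) < tol:
            break
        f = f_new
    return q, f                   # f ~ 0 everywhere: ergodic fluid; finite f: nonergodic glass

# Bracketing phi between runs that decay to f ~ 0 and runs that retain finite f
# should locate the hard-sphere transition near phi_c ~ 0.52 (up to discretization):
# q, f = nonergodicity_parameter(0.55); print(f.max())
```

The sketch is deliberately unoptimized and slow; it is meant only to display the structure of the mode coupling integrals and of the monotone iteration described above.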
With decreasing temperature, on the one hand, the small wavevector structure in $`S_q`$ leads to a stronger coupling on large length scales; on the other hand, the attractive interactions become of increasing importance on all length scales. The former effect, which can be expected to appear for all ranges of attractions, leads to nonergodicity transitions which trace the spinodal lines. These transitions will be discussed in the appendix as here the present MCT is least reliable because it does not include all relevant mode couplings and will not correctly describe the dynamics near the critical points . The latter effect, important for systems with short–range attractions, can be studied by an asymptotic analysis of the MCT equations and will be seen to dominate the low density glass transitions for large values of b. At low densities and in the limit of strong attractive interactions, the Ornstein–Zernike direct correlation function becomes independent of density. Specifying this to the MSA of the HCAY system, this limit corresponds to $`\varphi 0`$ and K $`\mathrm{}`$. Considering the asymptotic limit $$\varphi 0\text{and }\mathrm{K}\mathrm{},\text{so that }\mathrm{\Gamma }=\frac{\mathrm{K}^2\varphi }{\mathrm{b}}=\text{constant},$$ (10) the MCT equations simplify because $`S_q1`$ follows. The nonergodicity transitions then occur at $`\mathrm{\Gamma }=\mathrm{\Gamma }_c(\mathrm{b})`$, leading to the asymptotic prediction $`\mathrm{K}_\mathrm{c}1/\sqrt{\varphi _\mathrm{c}}`$. For short–range attractions, in the limit of $`\mathrm{b}\mathrm{}`$, a further simplification arises because the coupling constant $`\mathrm{\Gamma }`$ approaches a unique value at the transition, $`\mathrm{\Gamma }_c3.02`$ for $`\mathrm{b}\mathrm{}`$, and the nonergodicity parameters now depend only on the rescaled wavevector $`\stackrel{~}{q}=q\sigma /\mathrm{b}`$: $`f_q^c\stackrel{~}{f}^c(\stackrel{~}{q})`$. The asymptotic transition lines are shown in Fig. 2 as the chain curves. We find excellent agreement with the MCT transition line for b=30 at low density, demonstrating that the low density nonergodicity transitions are not driven by the divergence of the small wavevector limit of $`S_q`$. Moreover, the asymptotic model is seen to capture the behavior of the full MCT transition lines – where present – qualitatively and even semi–quantitatively at higher densities. We further point out that Eq. 3 for the single particle dynamics and, thus, the incoherent form factors $`f_q^s`$ are not dominated by small wavevector variations in the static structure factor. Instead, $`f_q^s`$ and the localization length $`r_s`$ are dominated by the small distance or large wavevector behavior of the liquid structure. In the asymptotic limit of Eq. 10 this also holds for the collective particle dynamics and $`\stackrel{~}{f}^c(\stackrel{~}{q})=\stackrel{~}{f}^s(\stackrel{~}{q})`$ is obtained, where both functions show rather large non–Gaussian corrections. For the system with b=20, Fig. 2 shows that the glass transition nearly meets the critical point (see also Fig. 7 in the appendix). This aspect is in qualitative agreement with the behavior of the sterically stabilized suspensions studied by Verduin and Dhont . They observed a gel transition that traversed the phase diagram from high density and temperature to the critical point. 
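The asymptotic limit of Eq. (10) discussed above turns the low-density branch of the transition line into a one-line estimate: with $`\mathrm{\Gamma }_c\approx 3.02`$ in the b → ∞ limit, the critical attraction strength follows from $`\mathrm{\Gamma }_c=\mathrm{K}_\mathrm{c}^2\varphi /\mathrm{b}`$. A short sketch (the chosen φ and b values are illustrative; finite-b and higher-density corrections are ignored):

```python
import math

GAMMA_C = 3.02                                   # asymptotic critical coupling (b -> infinity)

def k_c_low_density(phi, b):
    """Critical attraction strength from Gamma_c = K_c**2 * phi / b (Eq. 10)."""
    return math.sqrt(GAMMA_C * b / phi)

for b in (20.0, 30.0):
    for phi in (0.05, 0.10, 0.20):               # illustrative volume fractions
        kc = k_c_low_density(phi, b)
        print(f"b = {b:4.0f}, phi = {phi:4.2f}:  K_c ~ {kc:5.1f}  (1/K_c ~ {1.0/kc:.3f})")
# K_c grows as 1/sqrt(phi): arresting an ever more dilute suspension requires an
# ever stronger attraction, which is how the transition line bends to low density.
```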
The transition, which they refer to as a static percolation transition, was associated with a non-decaying intermediate scattering function and non-fluctuating dynamic light scattering speckle patterns. Thus, it has the expected properties of the ergodic–nonergodic dynamic transition predicted by the idealized MCT. Moreover, they were able to follow the transition into the unstable region inside the spinodal curve, showing that complete phase separation does not occur because of interference from the gel transition. We cannot extend the calculation of the MCT gel transition line into the unstable region because an appropriate $`S_q`$ is not available and the theory assumes closeness to equilibrium . Restricting the range of the attraction sufficiently, as for b=30, Fig. 2 shows that the glass transition line passes above the critical point and reaches subcritical temperatures on the gas side of the metastable spinodal. For such systems we may speculate that the glass transition renders the entire spinodal curve and the liquid phase dynamically inaccessible. This feature appears to agree with some measurements on sterically stabilized suspensions , in which only a liquid-gel transition was observed. Recent measurements by Poon et al. suggest that the colloid–polymer mixtures with a small polymer/colloid size ratio ($`\xi 0.08`$) may belong to the class of HCAY phase diagrams with b $`<20`$, where the nonergodicity transition line meets the spinodal on the liquid side. This interpretation includes a possible explanation for the growth of the small-angle scattering peak for samples with low colloid concentrations , and that the denser colloid domains arrest in the transient gelation region. To further clarify the physical mechanism of the gel transition and the properties of the gel states, various aspects of the solutions of the MCT will be discussed for the three cases: b=7.5, 20, and 30. In Fig. 3 we show the evolution of the coherent nonergodicity parameter $`f_q`$ along the critical glass transition boundary corresponding to b=30 in Fig. 2. As seen, the width of $`f_q`$ increases with decreasing temperature (increasing K). This behavior of $`f_q`$ with decreasing temperature is a result of a corresponding increase in the range of $`S_q`$, which results from particles being strongly correlated near contact, i.e. due to particle clustering. For longer range attractions, like the b=7.5 system, the width of $`f_q`$ and $`f_q^s`$ remain essentially unchanged along the glass transition boundary, which reflects the lower degree of particle clustering in this system. Note that $`f_q`$ becomes a density and temperature independent function, $`f_q\stackrel{~}{f}(q\sigma /\mathrm{b})`$, in the limit of strong attractions. This prediction is shown in Fig. 3 as the bold line, and agrees almost quantitatively with the full MCT $`f_q`$ solutions for large values of K. The localization length $`r_s`$ in the glass decreases along the glass transition boundary when the attraction is sufficiently short range. This decrease in the localization length is shown in Fig. 4, and is caused by the increased contribution from large wavevectors in the MCT integrals in Eqs. 1 and 3. For longer range attractions, like the b=7.5 case, the localization length stays close to the value dictated by the Lindemann criterion and found at the glass transition in the hard sphere system . 
Thus, for short–range attractions the particles are more strongly localized in the glass than for systems with somewhat longer range attractions. At low temperatures the localization length saturates at a limiting value which is inversely proportional to b because of Eq. 10, which leads to the prediction $`r_s0.91\sigma /\mathrm{b}`$ for b $`\mathrm{}`$. In addition to an increased width at low temperatures, the small wavevector behavior of $`f_q`$ changes dramatically, such that at low temperatures the intermediate scattering function for small $`q`$ practically does not decay with time at all (see Fig. 3). This indicates that large scale assemblies of particles behave essentially as static objects, where the single particles are tightly bound to the particle clusters (see Fig. 4). The asymptotic limit, Eq. 10 and b $`\mathrm{}`$, which results in $`\stackrel{~}{f}(\stackrel{~}{q})1\stackrel{~}{q}^2`$, stresses that this is caused by the short–range attraction. Such a rise in $`f_q`$ for small $`q`$ is observed also in the b=7.5 system, where it is caused by a different mechanism, namely the increase in the isothermal compressibility on approaching the gas–liquid spinodal. There, this leads to coherent nonergodicity parameters which are essentially hard sphere–like except for a large $`q0`$ value. In Fig. 5 we show the zero-frequency elastic shear modulus as a function of the reduced temperature along the glass transition lines in Fig. 2. When the range of attraction is comparatively large (b=7.5) the shear modulus remains constant at the hard sphere value, even for suspensions near the spinodal. This illustrates that the shear modulus, like the localization length, is determined by the large wavevector behavior of the liquid structure $`S_q`$, and is unaffected by long-wavelength density fluctuations in our calculations. For shorter range attractions (b=20 and 30), the shear modulus is dominated by particle clustering; it increases strongly with decreasing temperature because of the stronger binding among particles, eventually showing a maximum for suspensions close to the bend in the K<sub>c</sub> versus $`\varphi _c`$ curves, where $`\varphi _c`$ begins to decrease strongly with decreasing temperature. At lower density the shear modulus becomes linear in the density at the transition, according to $`G\mathrm{K}_\mathrm{c}^2\varphi _\mathrm{c}/\mathrm{b}`$, as predicted by the asymptotic solution in Eq. 10, and as observed in the dilute limit of the b=30 system. These results show that low temperature nonergodic structures, proposed to be colloidal gels here, are distinct from colloidal glasses in that they generally display a larger static shear modulus and strongly localized particles bound in clusters. To more clearly connect this study of the low temperature behavior of the glass transition to the experimental studies of the gel transition, we have calculated the intermediate scattering function upon approaching the glass transition at fixed volume fraction. This mimics the Verduin and Dhont study , in which they performed low–$`q`$ dynamic light scattering experiments on a series of suspensions at fixed volume fraction close to the gel transition. As noted already, the HCAY system with b=20 exhibits a glass transition line that nearly meets the critical point. This qualitative aspect is shared with the experimental phase diagram determined by Verduin and Dhont. 
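Given a converged coherent solution $`f_q`$ and the corresponding $`S_q`$ (for example from the hard-sphere sketch above), the elastic modulus of Eq. (7) and the localization length are simple post-processing steps. A minimal sketch follows; the logarithmic derivative is taken numerically, $`r_s`$ uses the small-$`q`$ expansion $`f_q^s=1-q^2r_s^2`$ quoted in Sec. II, and $`f_q^s`$ itself would additionally require solving Eq. (3), which the earlier sketch omits.

```python
import numpy as np

def shear_modulus(q, Sq, fq, sigma=1.0):
    """Zero-frequency elastic shear modulus of Eq. (7), in units of kB*T/sigma**3."""
    dlnS = np.gradient(np.log(Sq), q)
    return sigma**3 / (60.0 * np.pi**2) * np.trapz(q**4 * (dlnS * fq)**2, q)

def localization_length(q, fqs, sigma=1.0):
    """r_s from the small-q expansion f_q^s = 1 - q**2 * r_s**2 (Sec. II)."""
    sel = q * sigma < 1.0                         # use only the smallest wavevectors
    slope = np.polyfit(q[sel]**2, fqs[sel], 1)[0]
    return np.sqrt(max(-slope, 0.0))

# Example with the hard-sphere sketch above:
# q, f = nonergodicity_parameter(0.55)
# S, c, rho = py_structure(0.55, q)
# print("G ~", shear_modulus(q, S, f), "kB*T/sigma^3")
```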
We have selected four suspensions with b=20 and $`\varphi =0.4`$ at different reduced temperatures (shown as open circles in Fig. 2), such that the suspension with the lowest temperature (K=12) is located in the glass. The resulting normalized intermediate scattering functions corresponding to these suspensions are displayed in Fig. 6 for a fixed wavevector $`q\sigma `$=0.2, the same wavevector as that used in the measurements by Verduin and Dhont. As the normalized intermediate scattering function is the quantity that one measures in dynamic light scattering experiments, we can compare Fig. 6 with the results of Verduin and Dhont (see their Fig. 11). This comparison shows excellent qualitative agreement between their dynamic light scattering data and our calculated $`F_q(t)/S_q`$. Away from the transition the decay of $`F_q(t)/S_q`$ is approximately exponential for this wavevector. The decay becomes slower when the temperature is decreased until $`K=12`$, when $`F_q(t)/S_q`$ no longer decays to zero. Instead, a long-time plateau with a value near unity is obtained, which corresponds to the nonergodicity parameter $`f_q`$ at $`q\sigma =0.2`$. Note that additional incoherently scattered light due to particle size polydispersity may contribute appreciably and cause $`f_q`$ to attain such a large value. Nevertheless, the dynamical arrest of $`F_q(t)/S_q`$ agrees with our proposed ergodic–nonergodic transition for the gel transition and is captured by the idealized MCT. As has been shown in the past, the range of the colloidal attraction dictates where the fluid–solid freezing transition passes through the phase diagram and whether there is a stable liquid phase . In the same manner, Fig. 2 shows that the range of the attraction determines how the glass transition traverses the phase diagram relative to the critical point. We have not compared the diagrams in Fig. 2 with results for the fluid–solid and gas–liquid transitions (see e.g., ). The MCT relies on the static structure factor input, which was provided using the MSA. For short–range attractions the MSA produces relatively poor structural and thermodynamic predictions. Thus, a fair comparison should be made with a theory, e.g., MWDA , which can use the same input as that supplied to the MCT. Alternatively, the MCT can be solved using a more accurate static structure factor, such as that from HMSA theory . This would enable a determination of the location of the glass transition relative to that of the fluid–solid freezing transition, testing the conjecture made here that the glass transition line tracks the freezing transition at higher density in the phase diagram. ## V Discussion and Conclusions The idealized MCT has been shown to provide a possible explanation for important aspects of the colloidal gel transition. In this scenario the arrest of the dynamics during the gel transition is caused by a low temperature liquid–glass transition. The underlying phenomenon is a breaking of ergodicity, caused by long-time structural arrest. It is accompanied by the cessation of hydrodynamic diffusion and the appearance of relatively large finite elastic moduli as the particles are tightly localized in ramified clusters. The AHS system was found to have endpoint singularities in the phase diagram. The spectacular dynamics close to the MCT endpoint singularity has been the focus of a recent study . 
However, the glass transitions that occur at low temperatures in the AHS system are accompanied by numerical difficulties, resulting from the singular nature of the AHS pair interaction potential. Nevertheless, the AHS system provides insight into the temperature dependence of the glass transition in systems with weak short–range attractions. Subtle changes in the structure caused by the attractions lead to an initial increase in the glass transition density with decreasing temperature. The particles forming the glassy cage tend to stick together, thereby creating openings in the collective cage around a central particle which have to be filled by increasing the critical colloid density. We suggest that the recrystallization of glass samples at high densities upon addition of short polymers, reported in , is explained by this shift of the glass transition density to higher values. Moreover, the MCT glass transition line was observed to lie parallel to the DFT fluid–solid freezing transition at high temperatures in the AHS phase diagram. Introduction of a finite range of attraction and replacement of the PY theory with the MSA via the HCAY system, yields glass transition lines that extend to low temperatures in the phase diagram. For HCAY systems with moderate–range attractions the glass transition line crosses the liquid binodal. When the range of the attraction is further restricted the glass transition line passes above the critical point, likely rendering part of the (metastable) equilibrium phase diagram irrelevant. Preliminary solutions of the dynamical MCT equations for the b = 30 HCAY system indicate that nearby $`A_l`$ singularities with $`l>2`$ appreciably distort the time dependent structural correlators in the intermediate time windows, even though no $`A_3`$ singularity could be found in the phase diagram. The nonergodicity transitions of the HCAY system are influenced by two mechanisms which are absent or not dominant in the hard sphere and Lennard–Jones systems. In the latter two, the temperature dependence of the critical density $`\rho _c`$ is either trivially absent or arises from the soft repulsive part of the pair interaction potential. Along the MCT transition line in the Lennard–Jones liquid the temperature dependent packing fraction $`\varphi (\rho ,T)`$, resulting from the effective excluded volume diameter $`\sigma =\sigma ^{\mathrm{eff}}(T)`$ , is roughly constant $`\varphi (\rho ,T)0.52`$ and approximately equal to its hard sphere value . As the soft repulsion of the Lennard–Jones system leads to $`\sigma ^{\mathrm{eff}}T^{1/12}`$, the critical density smoothly decreases with temperature . Note that this observation indicates that the nonergodicity transitions resulting from the solution of Eq. 1 for the Lennard–Jones system are dominated by the excluded volume effect, i.e. the primary peak of the structure factor $`S_q`$, as is also the case for the hard sphere system. Because the idealized MCT has been developed from approximations aimed at describing the connected physical mechanism, called cage– or back–flow effect, the quality of the mode coupling approximation is expected to be unaffected by temperature changes as long as these excluded volume effects dominate in Eqs. 1 and 3. 
The nonergodicity transition lines of the HCAY system on the other hand are additionally affected by the low wavevector fluctuations in the fluid structure factor, $`S_q`$ for $`q0`$, and by the increase in $`S_q`$ at large wavevectors arising from the short–range nature of the attraction. The first aspect, which also occurs for longer range attractions, leads to nonergodicty transitions tracking the spinodal curve (see appendix). The short–range nature of the attraction causes the stronger localization of the particles, i.e. the shorter localization length $`r_s`$, upon decreasing the temperature. Also the strong increase in the elastic shear moduli along the nonergodicity transition line occurs only for sufficiently short–range attractions as shown in Fig. 5. Again, the ability of the MCT Eqs. 1 and 3 to describe such local interparticle correlations is not known. Note, however, that the HCAY results are independent of the numerical parameters chosen. Clearly, theories aimed at long wavelength phenomena at the gel transition cannot incorporate these variations of the elastic modulus as described by the MCT because it arises from local potential energy considerations. The asymptotic model, defined by Eq. 10 (and b $`\mathrm{}`$), which highlights the effects of strong short–range attractions, captures all aspects of the low density MCT nonergodicity transitions qualitatively and even semi–quantitatively. Furthermore, it clearly demonstrates that the gel transitions are not driven by long–range structural correlations. It can be expected that such an asymptotic model can be found for other theories of liquid structure with strong short–range attractive potentials, but the detailed predictions presented here rest upon the use of the MSA for the HCAY system. Based on this suggested interpretation of the MCT nonergodicity transitions, several features of the computed HCAY density–temperature diagrams agree qualitatively with experimental observations made on colloid–polymer mixtures and sterically stabilized suspensions . First, the gel transitions appear to lie at lower temperatures than, but otherwise track, the freezing line when present. Second, for short–range attractions the gel transition can shift to comparable or higher temperatures than those required for gas–liquid phase separation. Third, the gel line does not show such a strong density dependence as the static percolation transition. We emphasize that this suggested interpretation of the colloidal gel transitions is based on an extension of the idealized MCT of the glass transition beyond the range its approximations were aimed at. Our speculation, however, can be decisively tested by dynamic light scattering experiments. Nonergodicity transitions within the MCT exhibit universal dynamical properties , which for example led to the identification of the colloidal hard sphere glass transition by van Megen and coworkers . As more complicated bifurcation scenarios, $`A_l`$ with $`l>2`$ , can be expected, the dynamics at the gel transitions should be very nonexponential and anomalous. Moreover, the short–range attractions lead to couplings among more wavevector–modes, as can be seen from the asymptotic model defined by Eq. 10, resulting in an MCT exponent parameter (see ref. for a definition and details on its calculation) $`\lambda =0.89`$ for b $`\mathrm{}`$, considerably larger than values found for systems not characterized by short–range attractions (see e.g., ). 
The proposed connection between the MCT nonergodicity transitions in the HCAY system and the non-equilibrium transitions in colloidal suspensions is further supported by the following observations. For moderate–range attractions, like the b=7.5 diagram in Fig. 2, the coexistence region can be tentatively divided into three regions. For somewhat lower temperatures than the critical temperature, gas–liquid phase separation occurs, provided a thermodynamically stable liquid phase exists. For temperatures (just) below the triple point temperature, gas–crystal phase separation takes place. Decreasing K<sup>-1</sup> still more, gas–glass coexistence may be expected if — as argued from computer simulations — the way to crystallization proceeds via the initial formation of a liquid droplet, whose density lies above the nonergodicity transition line. As the glass states for this system are rather close to the ideal hard sphere glass state, we expect signatures of this well studied transition to be observed . We speculate that the vanishing of the homogeneously nucleated crystallites in the colloid–polymer systems upon addition of sufficient large molecular weight polymer, observed in , signals the presence of a nonergodicity transition as found in the colloidal hard sphere system . This suggestion can be tested by studying the dynamic density fluctuations close to the transition as has been demonstrated in the hard sphere system . For suspensions with short–range attractions, like the b=20– or b=30–curve in Fig. 2, it seems possible that the long–range density fluctuations, likely induced by the hidden critical point, become arrested when the denser domains of the system cross the MCT nonergodicity transition line. Nonergodic gel states characterized by large small-wavevector form factors $`f_q`$, rather short localization lengths, and finite, rather large elastic moduli can be expected. We suggest that these nonergodicity transitions cause the gel transitions observed in the colloid–polymer mixtures and sterically stabilized suspensions, and anticipate that they may also play a role in other colloidal systems, such as emulsions , emulsion–polymer mixtures , and suspensions of globular proteins , in which short–range attractions also dominate. We caution again that a proper extension of the MCT used here to include a full description of the critical dynamics close to critical points has yet to be formulated. Experimental tests of the dynamics close to the gel transitions would be required to test our suggestion. ## ACKNOWLEDGMENTS J. B. acknowledges financial support from the National Science Foundation (Grant No. INT-9600329) and the kind hospitality of Professor R. Klein (Universität Konstanz). The work was further supported by the Deutsche Forschungsgemeinschaft (DFG) (Grant No. Fu309/2-1). Useful discussions with G. Nägele and N. J. Wagner are acknowledged. We also thank Fabbian et al. for making their work available to us prior to publication. This work was conducted independently of theirs. ## In this appendix the nonergodicity transitions caused by the increase in the $`q0`$ limit of the structure factor close to the spinodal lines are discussed for the HCAY system. Figure 7 shows the spinodal lines and the gel transitions for the interaction parameters considered in Sections IV and V. Also shown are nonergodicty transition lines occurring only close to the spinodal lines. 
As seen, there is at least one crossing of the two types of nonergodicity transitions for each attraction range b, where in all cases the gel transitions discussed in the main text provide the larger, physical nonergodicity parameters. The additional transition lines presented here in the appendix have two peculiarities which cause us to doubt the validity of the present MCT for their description. First, they are directly caused by the small wavevector structure in $`S_q`$; thus, a proper MCT description should also include the critical dynamics that are very likely present . Second, at these transition lines only the collective density fluctuations at exceedingly small wavevectors, i.e. on large length scales, are arrested. The single particle dynamics remain fluid–like, i.e. $`f_q^s=0`$ from Eq. 2. Again, this suggests that long–range collective fluctuations are of crucial importance at these transitions and that the simple MCT approach used here is likely insufficient.
no-problem/9904/astro-ph9904084.html
ar5iv
text
# Galaxy Formation, Bars and QSOs ## 1 Introduction It is often noted that bright galaxies were assembled at about the same time that QSOs flare (e.g. Rees 1997), suggesting a causal connection. Blandford (this volume) reviews both the evidence for short lifetimes for QSOs and some recent models for their formation, but leaves open the question of whether they formed before or after the first galaxies. I suggest that, since QSOs are believed to reside in the centers of galaxies (e.g. Bahcall et al. 1997), it is natural to suppose that they formed there. There may be a loose proportionality between the mass of the BH and that of the bulge in which it resides (Magorrian et al. 1998). Thus an attractive model would offer answers to at least the following questions: * Why should QSOs flare during an early stage of galaxy formation? * Why are the centers of galaxies the preferred sites for QSOs? * What interrupts the fuel supply to limit QSO lifetimes? * Why is the mass of the central BH related to that of the host galaxy? Here I outline a model that offers dynamical answers to all these questions. The main ideas are: (1) most large galaxies developed a bar at an early stage of their formation, (2) the central engine is created from gas driven to the center by the bar and (3) changes to the galaxy potential, caused by mass inflow itself, shut off the fuel supply to the central engine when the mass concentration reaches a small fraction of the galaxy mass. Furthermore, this central mass is sufficient to weaken or even destroy the bar. ## 2 Bars in young galaxies I adopt the conventional picture that a galaxy disk forms as gas cools and settles into rotational balance in a dark matter halo. As I argue elsewhere (Sellwood, this volume), the DM halo has a large, low-density core. Unless the cooling gas has very low angular momentum, the disk it forms will be extensive and have a gently rising rotation curve at first. Under these conditions, a global bar instability will become unavoidable as the mass of the disk rises (e.g. Binney & Tremaine 1987, §6.3). Thus almost every galaxy with a dominant disk today would have become barred early in its lifetime. ## 3 Gas inflow driven by bars Many authors (e.g. Shlosman et al. 1990) have suggested that bar-driven gas inflow could fuel QSO activity. The inflow occurs because large-scale shocks develop in the gas flow pattern within the bar which Prendergast (1962) identified with the dust lanes seen along the leading sides of bars in galaxies today. The gas loses both energy and angular momentum in these shocks, the latter because the gas is asymmetrically distributed about the bar major axis. Thus gas is driven towards the center. Of the many simulations of gas flows in barred galaxy-like potentials, those by Athanassoula (1992) perhaps illustrate most clearly the difference in flow patterns caused by a central mass. A relatively shallow density profile allows gas to flow right into the center (her Figure 4), whereas a mass concentration causes the gas flow to stall some distance from the center (her Figure 2). The different flow pattern in the second case results from the presence of a generalized inner Lindblad resonance of the bar; outside this resonance the flow pattern is generally aligned along the bar, but it switches to perpendicular alignment inside this radius. The flow pattern inside this resonance ring does not contain shocks, and the gas cannot be driven by the bar any closer to the center. 
This phenomenon is also seen in nearby barred galaxies: nuclear rings occur at radii of a few hundred parsecs in many barred galaxies (Buta & Crocker 1993) where gas is often observed to pile up (Helfer & Blitz 1995; Rubin et al. 1997). ## 4 Quasar activity The bar which forms early in the life of a galaxy lacks a central mass concentration and does not possess an ILR. The abundant gas at this epoch will therefore be driven close to the center by the bar. But as the mass in the center rises to a percent or two of the then galaxy mass, an ILR will develop shutting off the dynamically driven flow of gas into the very inner regions. Thus the amount of gas that can reach the central $`50`$ pc is naturally limited by the large-scale dynamics. It is hard to predict the precise fate of the gas as it accumulates in the center, but an attractive guess is that some fraction of it makes a collapsed object while the rest forms stars. As bars form on the dynamical time-scale of the inner galaxy, and gas inflow time is not much longer, we expect a central engine to be created soon after a galaxy begins to be assembled. The ILR valve will close shortly thereafter, depriving the central collapsed object of further fuel and limiting its mass to a small fraction of the galactic mass. ## 5 Bar destruction The majority of galaxies are not strongly barred today, so the above picture requires that most bars be destroyed. Simulators have been reporting for years that stellar bars seem to be robust, long-lived objects (e.g. Miller & Smith 1979; Sparke & Sellwood 1986), but it is now known that bars can be destroyed by growing a central object. The mass and concentration needed for a central object to destroy a bar is not known at all precisely; Norman et al. (1996) showed that a dense object containing 5% of the disk plus bulge mass caused the bar to dissolve on a dynamical time, but Friedli’s (1994) work suggests that masses of 1-2% could lead to slower bar decay (see also Hozumi & Hernquist 1998). The central masses needed seem too high to be just the collapsed object, but all the gas and stars within a radius $`50`$ pc should be included. Sellwood & Moore (1999) report simulations that mimic this process. They grow a central mass “by hand” after a bar develops and limit its mass to 1.5% of the initial galaxy mass, to mimic the effect of the formation of an ILR. They find that the bar is weakened at this stage, though not yet totally destroyed. They go on to mimic later infall of fresh material which causes strong spiral patterns to develop in the disk. In some cases, the spiral patterns are vigorous enough to destroy the already weakened bar, but in other cases, infall of the material with the appropriate angular momentum can re-excite the bar. ## 6 Conclusions I have argued that every bright galaxy should have developed a bar early in its lifetime. The bar drives gas into the center which creates (in an unspecified manner) a central engine for QSO activity. Once the collapsed object, and its surrounding gas and star cluster, reaches a mass of 1-2% of the (luminous) galaxy mass at that time, an inner Lindblad resonance forms which shuts off the dynamically driven gas supply to the central engine. The dense center also weakens the bar, which may either be destroyed or re-excited, depending on the angular momentum distribution of later infalling material. 
This model leads to two significant predictions: (1) Halo dominated galaxies, such as LSB or low-luminosity galaxies, which never suffered a bar instability, should not contain massive BHs. A good example of the latter kind is M33, for which Kormendy & McClure (1993) have placed a very low upper limit of $`10^4`$ M<sub>⊙</sub> on the mass of any central BH. (2) The fraction of barred galaxies should be lower in the early universe. This is because the first barred phase should be very short and occurs when the QSO is bright, making the bar hard to see. By the time the QSO fades, any residual bar will be short and weak, and the later development of large-scale bars in galaxies is a more gradual process. Some observational support for this prediction is now available (Abraham et al. 1998). The model proposed here does not exclude the possibility that QSO activity would be re-ignited during galaxy mergers. Indeed, the further growth of the BHs during/after a merger will lead to brighter QSOs than those expected in the early stages because the central engines will be more massive. ###### Acknowledgements. This work was supported by NSF grant AST 96/17088 and NASA LTSA grant NAG 5-6037.
no-problem/9904/gr-qc9904028.html
ar5iv
text
# On local and global measurements of the speed of light on rotating platforms. ## 1 Introduction In 1997 Franco Selleri , in his long quest for inconsistencies in the special relativity theory (SRT), pointed out a paradox concerning the speed of light as measured on board a rotating disk. Actually his point treats the speed of light along a closed circuit encircling the rotation axis: when the platform is moving, the speed obtained dividing the length of the contour by the time of flight, as measured in the ”relative space” of the disk (defined in Sec. 3) by an observer at rest on the platform, is different whether measured in the rotation sense or in the opposite sense. More explicitly, suppose a light beam is sent along the rim in the rotation sense and another one in the opposite sense; then measure the average velocities of both beams for a complete round trip, i.e. the ratio between the length of the path and the times of flight read on a clock at rest on the rim, and call them $`c_+`$ and $`c_{}`$; then the ratio $`\rho =c_+/c_{}`$ differs from 1. Since a rotating reference frame is not inertial, this anisotropy of the light propagation is not, on itself, a surprising result; however Selleri notices that when letting the platform’s radius $`R`$ go to infinity and the angular speed $`\omega `$ go to $`0`$ in such a way that the peripheral speed $`\omega R`$ of the turntable remains constant, the ratio $`\rho `$ too keeps a constant value. However in the limit of infinite radius the uniform rotation becomes a uniform translation, i.e. the local reference frame becomes inertial. Here, according to SRT, the speed of light is assumed to be exactly the same in any direction (since all inertial frames are assumed to be optically isotropic); hence $`\rho `$ must be strictly $`1`$. This alleged discontinuity in the behavior of $`\rho `$, under such limit process, is the core of what we could call Selleri’s paradox. This issue has already been discussed elsewhere , showing that a full 4-dimensional relativistic treatment of the problem of the rotating platforms avoids any discontinuity or inconsistency whatsoever, since the speed of light, consistently defined, turns out to be exactly the same both clockwise and counterclockwise, just as in an inertial reference frame. However, though the 4-dimensional geometric point of view is clear and consistent, nothing prevents from considering the problem from a different viewpoint, rather natural for an observer living on the rotating disk. Then some doubt is apparently allowed , , since the ratio $`\rho `$, when measured by means of meter rods and clocks (or rather a single clock) at rest on the platform, actually differs from 1. The root of Selleri’s paradox can be identified in the basic assumption - founded on the homogeneity of the disk along the rim - that the ”global ratio” $`\rho =c_+/c_{}`$ of the average light velocities for complete round trips, coincides with the ”local ratio” $`\rho _o`$ of the forward and backward light velocities. We shall however show, analyzing the actual measurement procedures of the velocities in both cases, that $`\rho `$ cannot in general be assumed to equal $`\rho _o`$, contrary to the claim by Selleri. In fact, $`\rho `$ does not depend on the criterium for simultaneity adopted along the rim, as Selleri correctly points out; $`\rho _o`$ does instead strictly depend on the local simultaneity criterion. 
The two ratios, which Selleri labels by the same letter $`\rho `$, refer to two different kinds of measurements, so that there is no point in comparing them: they are and remain different, whatever the size of $`R`$ is, be it finite or infinite, with no harm for SRT. That this was the weak point of Selleri’s argument has already been remarked also by Budden . In sect. 2 the four-dimensional approach considered in ref. is briefly reexamined. Sect. 3 discusses two possible alternative definitions of the space of the platform along the rim, and compares the Minkowskian and the operational approach to the interpretation of the measurements of space and time intervals on board the rotating disk. Sect. 4 draws the general conclusions. ## 2 Constancy of the speed of light in Minkowski spacetime From a formal point of view, the SRT is the description of a four-dimensional manifold, whose geometrical structure is uniquely determined by two principles: the Einstein relativity principle and the principle of constancy of the (one way) velocity of light in vacuum<sup>1</sup><sup>1</sup>1Of course, the axiomatic basis of the SRT is not completely established by the two principles mentioned above, but also embodies the so-called ”principle of locality”, which states the local equivalence of any accelerated reference frame with a momentarily comoving inertial frame.. As is well known, such a manifold is the familiar Minkowski spacetime, in which the time evolution of any massive particle is described in terms of a world line $`\gamma _m`$ which lies everywhere inside the light cone associated with any point of $`\gamma _m`$. This can be visualized in a standard spacetime diagram (in which space and time are measured in the same units and the coordinate lines are drawn orthogonal to each other) as a world line whose slope, although variable, is everywhere greater than 45<sup>o</sup>. Only massless particles, particularly photons, are described by null world lines, i.e. by world lines whose slope, in the graphic representation, is always 45<sup>o</sup>: any light beam in free spacetime is described by a 45<sup>o</sup> slanting straight line, which can be regarded as a generator of the light cone. This is a geometrical expression of the principle of constancy of the one way velocity of light in Minkowskian spacetime. The interaction of the light beam with a mirror may change the space direction of propagation of the beam, curving the trajectory in space and the world line in spacetime, without affecting its slope. As a consequence, when a light beam is led to move along the rim of a rotating disk, grazing a cylindrical mirror, its world line in $`2+1`$ dimensions turns out to be a ”null helix” wrapped around the world tube of the disk and keeping everywhere a 45<sup>o</sup> slope. A $`2+1`$ geometrical analysis of the Sagnac effect (see ) shows how and why the times of flight for co-rotating and counter-rotating beams are different, although their world lines are helices of constant (45<sup>o</sup>) slope; or, phrasing it differently, although their velocities are the same - namely c - in any inertial frame, in particular in the local inertial comoving frame at any point of the rim. To sum up, the special relativistic assumption of the constancy of the slope of the world lines of light does not lead to inconsistencies or unphysical discontinuities. 
On the operational point of view, this means that, in the framework of SRT, the apparent global anisotropy of the propagation of light along the rim is perfectly compatible with the local isotropy, contrary to Selleri’s assumption. ## 3 Actual measurements of the speed of light Once the internal consistency of the geometry of Minkowskian spacetime has been established again, it still remains to confront it with the operational procedures an observer at rest on the rotating disk uses, in order to attribute actual values to the physical quantities of interest. The problem is that any measurement concerning the geometry of the disk and the synchronization of clocks on it is a well defined set of physical and mathematical operations on an extended region of space (in particular along the rim), whereas in a rotating frame special relativistic formulae are merely local: any result obtained by extrapolating them globally cannot be considered as a pure consequence of SRT, but depends on some (usually hidden) further assumptions. In our opinion, the presence of recurrent contradictions and paradoxes simply underlines the arbitrariness of such extrapolations, from local to global. Now, the obvious operational way to define and determine the (one way) speed of a (massive or not) moving object is to measure the length of a given travel and the time it takes, then divide the former by the latter. Of course this procedure determines the slope of the world line of the moving object only when the measurements are local (infinitesimal extension of the space and time intervals); finite measurements can yield the slope only in very special cases (constant slope world lines). In the case of uniform rotation and of light travelling along the rim of the rotating disk, the slope of the light world line is constant as well as that of the observer’s one; as a consequence, it can be determined not only by (a sequence of) local measurements of space and time intervals, performed in the local comoving frames, but also by global measurements of space and time intervals referred to a complete (either co-rotating or counter-rotating) round trip. However, in the second case the operational procedure should be carefully defined, because of the presence of some unavoidable conventional extrapolations, as pointed out before. More precisely: (i) the measure of the length of a complete round trip depends on the definition of ”space on the platform”, at least along the rim; (ii) the time taken by the light beam for a complete round trip is an observable quantity (it is the proper time lapse of a single clock), but the impossibility of a global synchronization along the rim could require a suitable correction. The form of the correction is imposed by the space-time geometry, according to the particular definition chosen for the space of the platform (see later). Among the many possible definitions of ”space on the platform along the rim”, we consider in particular the following two: (i) the ”space of locally Einstein simultaneous events”, defined as the set of events along the rim such that any nearby pair of them are simultaneous according to the Einstein criterium; (ii) the ”relative space” $`S:=T/I`$, defined as the quotient of the world tube $`T`$ of the disk by the congruence $`I`$ of the word lines of the points of the disk. The former is obtained extrapolating the local Einstein synchronization procedure to the whole rim of the disk. 
This space coincides with the space-like helix $`\gamma _S`$ considered at the beginning of Sec. 4 of ref. 2, and is everywhere Minkowski-orthogonal to the time-like helixes corresponding to the world lines of the points of the rim. The latter turns out to be the space of locations on the disk, regardless of any kind of synchronization: ”two points of spacetime which lie on the same disk word line … are identified in the relative space” . This space seems rather artificial on a Minkowskian point of view, but it should appear quite natural for the observer on the platform, since the space spanned by meter sticks arranged on the platform by this observer is precisely the ”relative space” of the disk. Notice that we used both spaces, namely: the ”space of Einstein locally simultaneous events” when we adopt a Minkowskian approach, like in the main part of ; and the ”relative space” when we adopt an operational approach, like in sect. 5 of and everywhere in . ### 3.1 Minkowskian approach In this approach, the ”space of locally Einstein simultaneous events” along the rim coincides with the space-like helix $`\gamma _S`$ considered before; the important point is that $`\gamma _S`$ is not a circumference (this is true only in the absence of rotation), but an open line whose slope depends on the rotation velocity. Now, the proper length of an open line is not a uniquely defined entity; we showed in particular in that the geometry of Minkowskian spacetime imposes different lengths for the portion of $`\gamma _S`$ covered by the co-rotating and by the counter-rotating light beams in a complete round trip. The difference in these two lengths turns out to be $$\delta s_{\gamma _S}=\frac{4\pi \left(\omega R\right)}{c\sqrt{1\left(\omega R\right)^2/c^2}}R$$ (1) which exactly coincides, dividing by $`c`$, with the difference in time of flight along the two round trips (see eq. (2) later, which is consistent with the Sagnac effect). As a consequence, this definition of space ensures the equality of the global speed of light both for the co-rotating and the counter-rotating light beams, restoring the isotropy of light propagation. We point out that this definition of space is the only one which can insure the equality between global and local velocities, i.e. between the ”global ratio” $`\rho `$ and the ”local ratio” $`\rho _o`$: this agrees with Selleri’s assumption, but both ratios equal exactly 1, with no harm for the SRT. ### 3.2 Operational approach In this approach, the ”relative space” $`S`$ along the rim allows the observer at rest on the platform to consider a unique length for the rim of the disk (see sect. 5 of and ). The measure of the two round trip times is obtained by one single clock (no need for special synchronization procedures), and gives two different results. In particular, the difference in time between the two round trips is $$\delta \tau =\frac{4\pi \left(\omega R\right)}{c^2\sqrt{1\left(\omega R\right)^2/c^2}}R$$ (2) which is an expression of the Sagnac effect , . 
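To make Eqs. (2) and (3) concrete, the short numerical sketch below (added here for illustration; it is not part of the original argument) starts from the lab-frame circulation times 2πR/(c ∓ ωR), the standard Sagnac result in the inertial frame (assumed rather than derived here), converts them to proper times of the single clock at rest on the rim, and recovers both the desynchronization of Eq. (2) and the uncorrected global ratio of Eq. (3); it also shows that the ±δτ/2 correction discussed below restores equal forward and backward speeds.

```python
import numpy as np

c = 299792458.0        # m/s
R = 1.0                # rim radius (m), illustrative value
omega = 1.0e3          # angular velocity (rad/s), illustrative value
beta = omega * R / c

# Lab-frame circulation times (standard inertial-frame Sagnac kinematics, assumed here)
t_plus = 2 * np.pi * R / (c - omega * R)    # co-rotating beam
t_minus = 2 * np.pi * R / (c + omega * R)   # counter-rotating beam

# Proper times read on the single clock at rest on the rim
tau_plus = t_plus * np.sqrt(1 - beta**2)
tau_minus = t_minus * np.sqrt(1 - beta**2)

delta_tau = tau_plus - tau_minus
delta_tau_eq2 = 4 * np.pi * omega * R**2 / (c**2 * np.sqrt(1 - beta**2))
print(delta_tau, delta_tau_eq2)                        # agree: Eq. (2)

rho_global = tau_minus / tau_plus                      # = c_+ / c_- from the raw readings
print(rho_global, (c - omega * R) / (c + omega * R))   # agree: Eq. (3)

# The +/- delta_tau/2 correction discussed below equalizes the two round-trip times,
# i.e. the corrected forward and backward average speeds coincide.
print((tau_plus - delta_tau / 2) / (tau_minus + delta_tau / 2))  # -> 1
```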
In this case, the observer can draw the following conclusions, on the basis of his measurements of space and time on the platform and without any knowledge of Minkowskian spacetime structure (see , sect.5): (i) the platform on which he lives is rotating, and the desynchronization $`\delta \tau `$ of a pair of clocks, after slow round trips in opposite directions, is a measure of the speed of this rotation; (ii) the durations of travels along the closed path are not uniquely defined and, to obtain reliable measures of them, the readings of clocks must be corrected by a quantity $`\pm \delta \tau /2`$ to account for the desynchronization effect, which is the same result obtained by Bergia and Guidone ; (iii) as a consequence of this correction, the speed of light is actually the same both forward and backward. On the other hand, if the readings are used without any theoretical correction, the global measurement actually gives an anisotropic result at all radii (as far as $`\omega R0`$); but this procedure, which is the one proposed by Selleri, cannot prove his basic assumption, only founded on the homogeneity of the disk along the rim, that the ”global ratio” $`\rho `$ coincides with the ”local” one $`\rho _o`$. In fact, the measurements performed by the observer on the platform in order to calculate the two ratios are completely different. The value of the ”global ratio” turns out to be $$\rho =\frac{c_+}{c_{}}=\frac{c\omega R}{c+\omega R}$$ (3) and depends on the measurement of a difference of proper times read on a single clock. This measurement is independent from any assumption about synchronization. On the contrary, the ”local ratio” $`\rho _o`$ depends: (i) on the measurement of two infinitesimal lengths (forward and backward) in the local comoving frame; (ii) on the readings of three clocks (placed at the starting point of the light beams and at the arrival points, in opposite directions), Einstein synchronized in the local comoving frame. If Einstein synchronization is used, the ”local ratio” $`\rho _o`$ is exactly 1, and cannot be identified with the ”global ratio” $`\rho `$, which differs from 1. One could object that also a local measurement of the light velocity can be performed by means of a single clock, when the light beam is reflected by a mirror placed at an infinitesimal distance from the source (two ways average light speed). But in this case - that is what is usually made in actual experiments, like e.g. Michelson-like experiments - the difference of measurement procedures is still more evident: the global method, which measures two one-way velocities of two light beams performing two complete round trips along a closed path in opposite directions, cannot be used in the local inertial frame, in which only the two ways light speed is measurable. So the ”global ratio” $`\rho `$ only is an observable; the ”local ratio” $`\rho _o`$ is not. Selleri’s assumption is $$\rho =\rho _o=\frac{c\omega R}{c+\omega R}1\omega 0$$ (4) This assumption is equivalent to assuming a suitable non Einstein synchronization in the local comoving frame, which could be called ”Selleri synchronization”, consistent with the condition , : $$c_+=c\left(1+\frac{\omega R}{c}\right)^1;c_{}=c\left(1\frac{\omega R}{c}\right)^1$$ (5) Such a synchronization requires of course a suitable non Lorentz coordinate transformation, which in turn implies the existence of a privileged frame and the absolute character of synchronization, see , , . An obvious consequence of eqs. 
(5) is that light propagates anisotropically in any local comoving frame along the rim, but the observable two ways light speed is again $`c`$. As a consequence, the ”Selleri synchronization” does not conflict with known experiments, but conflicts with the standard Einstein synchronization (which assumes that light propagates isotropically in any inertial frame: $`c_+=c_{}`$ $`=c`$). If Selleri’s synchronization is used, the SRT is violated; in this case Selleri’s paradox only shows that, starting from an assumption violating the SRT, a result violating the SRT follows. We cannot treat, in the limits of this letter, the question of which synchronization (Einstein or Selleri) is more adequate to the whole experimental and theoretical context: we limit ourselves to claiming that both are consistent and compatible with experiments, but the ”serious logical problem in the SRT” declared by Selleri does not exist. ## 4 Conclusion To sum up our line of thought, we have seen that the direct measurement of the speed of light along a closed path, free of any theoretical corrections, does indeed reveal an anisotropy when the observer is rotating along the contour. It would continue to be so also for a contour of infinitely great curvature radius, though it is impossible to actually perform the experiment. On the other side local measurements of the speed of light cannot evidence any anisotropy. The global and local ratios between forward and backward light velocities, which Selleri labels by the same letter $`\rho `$, refer to two different kinds of measurements, which cannot be reduced one to the other: they are and remain different, whatever the size of the platform radius is, be it finite or infinite.The two classes of measurements do not overlap and do not reveal, in the framework of the SRT, any internal contradiction.
no-problem/9904/cond-mat9904159.html
ar5iv
text
# Linear-scaling ab-initio calculations for large and complex systems ## I INTRODUCTION It is clearer every day the contribution that first-principles calculations are making to several fields in physics, chemistry, and recently geology and biology. The steady increase in computer power and the progress in methodology have allowed the study of increasingly more complex and larger systems . It has been only recently that the scaling of the computation expense with the system size has become an important issue in the field. Even efficient methods, like those based on Density-Functional theory (DFT), scale like $`N^{2\text{-}3}`$, being $`N`$ the number of atoms in the simulation cell. This problem stimulated the the first ideas for methods which scale linearly with system size , a field that has been the subject of important efforts ever since . The key for achieving linear scaling is the explicit use of locality, meaning by it the insensitivity of the properties of a region of the system to perturbations sufficiently far away from it . A local language will thus be needed for the two different problems one has to deal with in a DFT-like method: building the self-consistent Hamiltonian, and solving it. Most of the initial effort was dedicated to the latter using empirical or semi-empirical Hamiltonians. The Siesta project started in 1995 to address the former. Atomic-orbital basis sets were chosen as the local language, allowing for arbitrary basis sizes, what resulted in a general-purpose, flexible linear-scaling DFT program . A parallel effort has been the search for orbital bases that would meet the standards of precision of conventional first-principles calculations, but keeping as small a range as possible for maximum efficiency. Several techniques are presented here. Other approaches pursued by other groups are also shortly reviewed in section II. All of them are based on local bases with different flavors, offering a fair variety of choice between systematicity and efficiency. Our developments of atomic bases for linear-scaling are presented in section III. Siesta has been applied to quite varied systems during these years, ranging from metal nanostructures to biomolecules. Some of the results obtained are briefly reviewed in section IV. ## II METHOD AND CONTEXT Siesta is based on DFT, using local-density and generalized-gradients functionals , including spin polarization, collinear and non-collinear . Core electrons are replaced by norm-conserving pseudopotentials factorized in the Kleinman-Bylander form , including scalar-relativistic effects, and non-linear partial-core corrections . The one-particle problem is then solved using linear combination of atomic orbitals (LCAO). There are no constraints either on the radial shape of these orbitals (numerically treated), or on the size of the basis, allowing for the full quantum-chemistry know-how (multiple-$`\zeta `$, polarization, off-site, contracted, and diffuse orbitals). Forces on the atoms and the stress tensor are obtained from the Hellmann-Feynman theorem with Pulay corrections , and are used for structure relaxations or molecular dynamics simulations of different types. Firstly, given a Hamiltonian, the one-particle Schrödinger equation is solved yielding the energy and density matrix for the ground state. This task is performed either by diagonalization (cube-scaling, appropriate for systems under a hundred atoms or for metals) or with a linear-scaling algorithm. These have been extensively reviewed elsewhere . 
Siesta implements two $`O(N)`$ algorithms based on localized Wannier-like wavefunctions. Secondly, given the density matrix, a new Hamiltonian matrix is obtained. There are different ways proposed in the literature to perform this calculation in order-$`N`$ operations. $`(i)`$ Quantum chemists have explored algorithms for Gaussian-type orbitals (GTO) and related technology . The long-range Hartree potential posed an important problem that has been overcome with Fast Multipole Expansion techniques plus near-field corrections . Within this approach, periodic boundary conditions for extended systems require additional techniques that are under current development . $`(ii)`$ Among physicists tradition favors more systematic basis sets, such as plane-waves and variations thereof. Working directly on a real-space grid was early proposed as a natural possibility for linear scaling . Multigrid techniques allow efficient treatment of the Hartree problem, making it very attractive. However, a large prefactor was found for the linear scaling, making the order-$`N`$ calculations along this line not so practical for the moment. The introduction of a basis of localized functions on the points of the grid (blips) was then proposed as an operative method within the original spirit . It is probably more expensive than LCAO alternatives, but with the advantage of a systematic basis. Another approach works with spherical Bessel functions confined to (overlapping) spheres wisely located within the simulation cell. As for plane-waves, a kinetic energy cutoff defines the quality of the basis within one sphere. The number, positioning, and radii of the spheres are new variables to consider, but the basis is still more systematic than within LCAO. $`(iii)`$ There are mixed schemes that use atomic-orbital bases but evaluate the matrix elements using plane-wave or real-space-grid techniques. The method of Lippert et al. uses GTO’s and associated techniques for the computation of the matrix elements of some terms of the Kohn-Sham Hamiltonian. It uses plane-wave representations of the density for the calculation of the remaining terms. This latter method is conceptually very similar to the one presented earlier by Ordejón et al. , on which Siesta is based. The matrix elements within Siesta are also calculated in two different ways : some Hamiltonian terms in a real-space grid and other terms (involving two-center integration) by very efficient, direct LCAO integration . While Siesta uses numerical orbitals, Lippert’s method works with GTOs, which allow analytic integrations, but require more orbitals. Except for the quantum-chemical approaches, the methods mentioned require smooth densities, and thus soft pseudopotentials. A recent augmentation proposal allows a substantial improvement in grid convergence of the method of Lippert et al. , possibly allowing for all-electron calculations. ## III ATOMIC ORBITALS ADAPTED TO LINEAR SCALING The main advantage of atomic orbitals is their efficiency (fewer orbitals needed per electron for similar precision) and their main disadvantage is the lack of systematics for optimal convergence, an issue that quantum chemists have been working on for many years . They have also clearly shown that there is no limitation on precision intrinsic to LCAO. Orbital range. The need for locality in linear-scaling algorithms imposes a finite range for matrix elements, which has a strong influence on the efficiency of the method. 
There is a clear challenge ahead for finding short-range bases that still give a high precision. The traditional way is to neglect matrix elements between far-away orbitals with values below a tolerance. This procedure implies a departure from the original Hilbert space and it is numerically unstable for short ranges. Instead, the use of orbitals that would strictly vanish beyond a certain radius was proposed . This gives sparse matrices consistently within the Hilbert space spanned by the basis, numerically robust even for small ranges. In the context of Siesta, the use of pseudopotentials imposes basis orbitals adapted to them. Pseudoatomic orbitals (PAOs) are used, i.e., the DFT solution of the atom with the pseudopotential. PAO’s confined by a spherical infinite-potential wall , has been the starting point for our bases. Fig. 1 shows $`s`$ and $`p`$ confined PAOs for oxygen. Smoother confining potentials have been proposed as a better converging alternative . A single parameter that defines the confinement radii of different orbitals is the orbital energy shift , $`\mathrm{\Delta }E_{\mathrm{PAO}}`$, i.e., the energy increase that each orbital experiences when confined to a finite sphere. It defines all radii in a well balanced way, and allows the systematic convergence of physical quantities to the required precision. Fig. 2 shows the convergence of geometry and cohesive energy with $`\mathrm{\Delta }E_{\mathrm{PAO}}`$ for various systems. It varies depending on the system and physical quantity, but $`\mathrm{\Delta }E_{\mathrm{PAO}}100200`$ meV gives typical precisions within the accuracy of current GGA functionals. Multiple-$`\zeta `$. To generate confined multiple-$`\zeta `$ bases, a first proposal suggested the use of the excited PAOs in the confined atom. It works well for short ranges, but shows a poor convergence with $`\mathrm{\Delta }E_{\mathrm{PAO}}`$, since some of these orbitals are unbound in the free atom. In the split-valence scheme, widely used in quantum chemistry, GTOs that describe the tail of the atomic orbitals are left free as separate orbitals for the extended basis. Adding the quantum-chemistry GTOs’ tails to the PAO bases gives flexible bases, but the confinement control with $`\mathrm{\Delta }E_{\mathrm{PAO}}`$ is lost. The best scheme used in Siesta calculations so far is based on the idea of adding, instead of a GTO, a numerical orbital that reproduces the tail of the PAO outside a radius $`R_{\mathrm{DZ}}`$, and continues smoothly towards the origin as $`r^l(abr^2)`$, with $`a`$ and $`b`$ ensuring continuity and differenciability at $`R_{\mathrm{DZ}}`$. This radius is chosen so that the norm of the tail beyond has a given value. Variational optimization of this split norm performed on different systems shows a very general and stable performance for values around 15% (except for the $`50\%`$ for hydrogen). Within exactly the same Hilbert space, the second orbital can be chosen as the difference between the smooth one and the original PAO, which gives a basis orbital strictly confined within the matching radius $`R_{\mathrm{DZ}}`$, i.e., smaller than the original PAO. This is illustrated in Fig. 1. Multiple-$`\zeta `$ is obtained by repetition of this procedure. Polarization orbitals. A shell with angular momentum $`l+1`$ (or more shells with higher $`l`$) is usually added to polarize the most extended atomic valence orbitals ($`l`$), giving angular freedom to the valence electrons. 
The (empty) $`l+1`$ atomic orbitals are not necessarily a good choice, since they are typically too extended. The normal procedure within quantum chemistry is using GTOs with maximum overlap with valence orbitals. Instead, we use for Siesta the numerical orbitals resulting from the actual polarization of the pseudoatom in the presence of a small electric field . The pseudoatomic problem is then exactly solved (within DFT), yielding the $`l+1`$ orbitals through comparison with first order perturbation theory. The range of the polarization orbitals is defined by the range of the orbitals they polarize. It is illustrated in Fig. 3 for the $`d`$ orbitals of silicon. The performance of the schemes presented here has been tested for various applications (see below) and a systematic study will be presented elsewhere . It has been found in general that double-$`\zeta `$, singly polarized (DZP) bases give precisions within the accuracy of GGA functionals for geometries, energetics and elastic/vibrational properties. Other possibilities. Scale factors on orbitals are also used, both for orbital contraction and for diffuse orbitals. Off-site orbitals can be introduced. They serve for the evaluation of basis-set superposition errors . Spherical Bessel functions are also included, that can be used for mixed bases between our approach and the one of Haynes and Payne . ## IV BRIEF REVIEW OF APPLICATIONS Carbon Nanostructures. A preliminary version of Siesta was first applied to study the shape of large hollow carbon fullerenes up to C<sub>540</sub>, the results contributing to establish that they do not tend to a spherical-shape limit but tend to facet around the twelve corners given by the pentagons. Siesta has been also applied to carbon nanotubes. In a first study, structural, elastic and vibrational properties were characterized . A second work was dedicated to their deposition on gold surfaces, and the STM images that they originate , specially addressing experiments on finite-length tubes. A third study has been dedicated to the opening of single-wall nanotubes with oxygen, and the stability of the open, oxidized tubes for intercalation studies . Gold Nanostructures. Gold nanoclusters of small sizes (up to Au<sub>75</sub>) were found to be amorphous, or nearly so, even for sizes for which very favorable geometric structures had been proposed before. In a further study the origin of this striking situation is explained in terms of local stresses . Chains of gold atoms have been studied addressing the experiments which show them displaying remarkably long interatomic spacings (4 - 5 Å). A first study arrives at the conclusion that a linear gold chain would break at interatomic spacings much smaller than the observed ones. It is illustrated in Fig. 4 . A possible explanation of the discrepancy is reported elsewhere. Surfaces and Adsorption. A molecular dynamics simulation was performed on the clean surface of liquid silicon close to the melting temperature, in which surface layering was found, i.e., density oscillations of roughly atomic amplitude, like what was recently found to happen in the surface of other liquid metals . Unlike them, though, the origin for silicon was found to be orientational, reminescent of directed octahedral bonding. Adsorption studies have also been performed on solid silicon surfaces, Ba on Si(100) and C<sub>60</sub> on Si(111) . Both works study adsorption geometries and energetics. For Ba, interactions among adsorbed atoms and diffusion features are studied. 
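As an illustration of the split-valence construction described above, the following sketch (a simplified toy example, not SIESTA code; the exponential PAO tail used here is purely illustrative) determines the coefficients a and b of the inner function r<sup>l</sup>(a − br²) from continuity and differentiability with the PAO at the matching radius R<sub>DZ</sub>.

```python
import numpy as np

def split_orbital_coeffs(phi, dphi, R_dz, l):
    """Match r**l * (a - b r**2) to the PAO value phi and slope dphi at r = R_dz."""
    b = (l * phi / R_dz - dphi) / (2.0 * R_dz**(l + 1))
    a = phi / R_dz**l + b * R_dz**2
    return a, b

# Illustrative PAO radial function (a simple exponential tail, not an actual SIESTA PAO)
zeta = 1.6
pao = lambda r: r * np.exp(-zeta * r)                # an l = 1 - like shape, for illustration
dpao = lambda r: (1.0 - zeta * r) * np.exp(-zeta * r)

l, R_dz = 1, 2.5
a, b = split_orbital_coeffs(pao(R_dz), dpao(R_dz), R_dz, l)
inner = lambda r: r**l * (a - b * r**2)

# Check continuity and differentiability at the matching radius
eps = 1e-6
print(inner(R_dz) - pao(R_dz))                                        # ~ 0
print((inner(R_dz + eps) - inner(R_dz - eps)) / (2 * eps) - dpao(R_dz))  # ~ 0
# The strictly confined second-zeta function of the text is the difference between this
# smooth function and the original PAO; it vanishes identically for r >= R_dz.
```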
For C<sub>60</sub>, STM images have been simulated and compared to experiments. Nucleic Acids. Feasibility tests on DNA were performed in the early stages of the project, by relaxing a dry B-form poly(dC)-poly(dG) structure with a minimal basis . In preparation for realistic calculations, a thorough study of 30 nucleic acid pairs has been performed addressing the precision of the approximations and the DZP bases, and the accuracy of the GGA functional , obtaining good results even for the hydrogen bridges. Based on that, a first study of dry A-DNA has been performed, with a full relaxation of the structure, and an analysis of the electronic characteristics . ## V CONCLUSIONS The status of the Siesta project has been briefly reviewed, putting it in context with other methods of linear-scaling DFT, and briefly describing results obtained with Siesta for a variety of systems. The efforts dedicated to finding schemes for atomic bases adapted to linear scaling have also been described. This remains a promising field, still very open for future research. Acknowledgments. We are grateful for ideas, discussions, and support of José L. Martins, Richard M. Martin, David A. Drabold, Otto F. Sankey, Julian D. Gale, and Volker Heine. EA is very grateful to the Ecole Normale Supérieure de Lyon for its hospitality. PO is the recipient of a Sponsored Research Project from Motorola PCRL. EA and PO acknowledge travel support of the $`\mathrm{\Psi }_k`$ network of ESF. This work has been supported by Spain’s DGES grant PB95-0202.
no-problem/9904/hep-th9904142.html
ar5iv
text
# Untitled Document hep-th/9904142 PUPT-1859 IAS-SNS-HEP-99-37 Comments on the IIA NS5-brane Shiraz Minwalla<sup>1</sup> minwalla@princeton.edu Department of Physics, Princeton University Princeton, NJ 08544, USA and Nathan Seiberg<sup>2</sup> seiberg@sns.ias.edu School of Natural Sciences, Institute for Advanced Study Olden Lane, Princeton, NJ 08540, USA Abstract We study $`N`$ coincident IIA NS5-branes at large $`N`$ using supergravity. We show that the absorption cross section for gravitons in this background does not vanish at zero string coupling for energies larger than $`\frac{m_s}{\sqrt{N}}`$ ($`m_s`$ is the string scale). Using a holographic description of the intrinsic theory of the IIA NS5-branes, we find an expression for the two point function of the stress energy tensor, and comment on its structure. April 1999 1. Introduction In this note we study $`N`$ coincident NS5-branes in type II theory. In the limit $`g_00`$, $`m_s`$ fixed (where $`g_0`$ is the asymptotic value of the string coupling and $`m_s`$ is the string mass) the theory is free in the bulk. However, modes living on the 5-brane continue to interact amongst themselves, while decoupling from the bulk, defining a mysterious non-gravitational six-dimensional theory (this construction was motivated by ), sometimes referred to as the little string theory. Upon compactification, this theory inherits the $`T`$ duality of type II string theory and is therefore nonlocal at the scale $`m_s`$. Maldacena and Strominger studied the system of $`N`$ non-extremal coincident NS5-branes with excitation energy density $`\mu m_s^6`$. The classical solution for this configuration possesses an asymptotically flat region, which turns into a tube on approaching the 5-brane. On descending down the tube one encounters the horizon of the black branes. The local string coupling outside the horizon is everywhere less than $`\sqrt{\frac{N}{\mu }}`$, and all curvatures in this region are less than $`\frac{m_s}{\sqrt{N}}`$. Therefore, when $`\mu N1`$ semi-classical gravity is accurate outside the horizon. In particular, the brane Hawking radiates at temperature $`\frac{1}{2\pi }\frac{m_s}{\sqrt{N}}`$, and modes behind the horizon continue to couple to modes in the tube even for $`g_0=0`$. This fact was interpreted in as a manifestation of holography in the underlying string theory. In particular, the decoupled theory of the NS5-branes is the holographic projection, along the lines of , of the near horizon geometry ($`g_00`$ limit) including the long tube<sup>1</sup> Similar ideas have been suggested by various people, including C.V. Johnson, J. Maldacena and A. Strominger.. It was further proposed in that some of the observables of the NS5-brane theory are the on-shell particles (vertex operators in the string theory limit) in the bulk. Unlike the situation in $`AdS`$, our geometry admits the definition of an $`S`$ matrix. Hence, off-shell six-dimensional “Green’s functions” of these observables are identified as the $`S`$ matrix elements of the corresponding higher dimensional bulk particles. It should be emphasized that the boundary of the near horizon geometry of the Euclidean NS5-brane is $`R^6\times S^3`$, and reduces to $`R^6`$ only after a Kaluza Klein reduction on the $`S^3`$. This is in contrast with the near horizon geometry of, say, the Euclidean M5-brane, whose boundary is $`S^6`$ rather than $`S^6\times S^4`$ (because the ratio of the size of the $`S^4`$ to that of the $`S^6`$ goes to zero on approaching the boundary). 
The Euclidean near horizon geometry of the NS5-brane consists of a semi-infinite tube, capped at the bottom by (Euclidean) $`AdS_7\times S^4`$. Let the analytic continuation of this space to Lorentzian signature be denoted by $`E`$. $`E`$ possesses a future horizon $`H^+`$ and a past horizon $`H^{}`$ each at finite affine distance, and so is geodesically incomplete. Its metric may be completed (see Appendix D) by gluing together a new copy of $`E`$ at each of the two original horizons, and then continuing this procedure indefinitely (just as the AdS cylinder may be constructed by gluing together copies of Poincare patches at their horizons). The resulting Penrose diagram is shown in fig. 1, and contains an infinite number of pairs of $`_i^\pm `$s, differentiated in our notation by the subscript $`i`$. $`_i^\pm `$, together with the horizons $`H_i^\pm `$ constitute the boundary of the $`i^{th}`$ wedge $`E_i`$. It is not clear to us whether this analytic continuation is physically relevant<sup>2</sup> We thank O. Aharony and T. Banks for a useful discussion on this point.. Fig. 1: The Penrose diagram of the geodesic completion of the metric of the NS5-brane. The correlation functions defined in in terms of $`S`$ matrix elements are naturally found in momentum space. However, it was pointed out by Peet and Polchinski that these answers cannot be Fourier transformed to coordinate space, and therefore the little string theory is nonlocal. The reason for that is that the momentum space answers are obtained after a momentum dependent multiplicative renormalization with the characteristic scale $`\frac{m_s}{\sqrt{N}}`$. We will return to this below. Aharony and Banks computed the entropy of the little string theory as a function of energy $`\omega `$ using its DLCQ definition \[9,,10\] and found $`\frac{6\sqrt{N}\omega }{m_s}`$, in agreement with the Bekenstein-Hawking entropy computed in . This entropy formula suggests that the number of states in the system grows rapidly with energy, thus explaining why the typical momentum space correlation function cannot be Fourier transformed . It also reproduces the scale of nonlocality as $`\frac{m_s}{\sqrt{N}}`$. In this paper we study the propagation of a mode of the graviton, which propagates like a minimal scalar in the supergravity background of $`N`$ coincident IIA NS5-branes. Its $`S`$ matrix elements are interpreted as the correlation functions of the energy momentum tensor of the little string theory. In section 2 we discuss the validity of the supergravity approximation, define variables, and set up our equations. In section 3 we compute the absorption cross section of a graviton incident onto the 5-branes from the tube, and note that it is nonzero at $`g_0=0`$ for energies larger than $`\frac{m_s}{\sqrt{N}}`$. This result strengthens that of . In section 4 we follow to derive an expression for the two point function of the energy momentum tensor of the little string theory, and comment on its structure. In section 5 we explain the relation between our results and previous calculations. In Appendix A we estimate the high energy behavior of the two point function of section 4 using the WKB approximation. In Appendix B we develop a low energy expansion for the two point function. In Appendix C we study a simple toy model, similar in some ways to the NS5-brane. In Appendix D we discuss the causal structure of the geodesic completion of the near horizon geometry of the NS5-brane. 2. The Setup 2.1. 
The Background Metric The classical string frame background corresponding to $`N`$ coincident extremal NS5-branes is $$\begin{array}{cc}\hfill ds^2& =dx_6^2+(1+\frac{N}{m_s^2r^2})(dr^2+r^2d\mathrm{\Omega }_3^2)\hfill \\ \hfill e^{2\mathrm{\Phi }}& =g_0^2(1+\frac{N}{m_s^2r^2}).\hfill \end{array}$$ $`(2.1)`$ This background possesses an asymptotically flat region $`r\frac{\sqrt{N}}{m_s}`$ connected to a semi-infinite flat tube $`r\frac{\sqrt{N}}{m_s}`$, with the topology of $`R^+\times S^3\times R^6`$ (these factors are parametrized by $`r`$, $`\mathrm{\Omega }`$ and $`x_6`$ respectively). In this paper we will restrict our attention to the IIA version of the little string theory. Its moduli space of vacua is $`\frac{(R^4\times S^1)^N}{S_N}`$ corresponding in M theory to the positions of the $`N`$ M5-branes in the transverse space and on the $`11^{th}`$ circle. (2.1) is the solution that corresponds to stacking all branes at the origin in the $`R^4`$, but smearing them evenly over the $`S^1`$, and therefore does not correspond to any true vacuum (single point in moduli space) of the theory. We wish to study the theory at the most singular point in its moduli space, where all the branes are on top of each other both in $`R^4`$ and $`S^1`$. The corresponding classical background is most conveniently written in eleven dimensions $$ds^2=A^{\frac{1}{3}}dx_6^2+A^{\frac{2}{3}}(dx_{11}^2+dr^2+r^2d\mathrm{\Omega }_3^2)$$ $`(2.2)`$ $$A=1+\underset{n=\mathrm{}}{\overset{\mathrm{}}{}}\frac{N\pi l_p^3}{[r^2+(x_{11}2\pi nR_{11})^2]^{\frac{3}{2}}}.$$ $`(2.3)`$ The $`11^{th}`$ dimension is compact: $`x_{11}=x_{11}+2\pi R_{11}`$ and $`l_p`$ is the eleven-dimensional Planck length. For $`rR_{11}`$ the summation above may be replaced by an integral and we recover (2.1), on dimensional reduction and transforming to the string frame. For $`rR_{11}`$ a single term in the summation in (2.3) dominates, and (2.2) reduces to the near horizon of $`N`$ stacked M5-branes. Therefore (2.2) represents asymptotic flat space connected to $`AdS_7\times S^4`$ by a long tube. 2.2. Validity of Supergravity We start by considering the general case of M theory compactified on a circle, whose physical radius varies in an arbitrary spatially dependent fashion in the ten non-compact directions. We then specialize to M theory on (2.2). When the physical length $`r_{11}`$ of the M theory circle is much larger than $`l_p`$ (the eleven-dimensional region) eleven-dimensional supergravity is valid for energies and curvatures much smaller than $`\frac{1}{l_p}`$. When $`r_{11}l_p`$ (the ten-dimensional region) the local IIA string coupling $`g=(\frac{r_{11}}{l_p})^{\frac{3}{2}}`$ is small, and string perturbation theory is valid. It reduces to IIA supergravity for energies and curvatures much smaller than the string scale. Under these conditions proper energies are smaller than $`m_{eff}=\frac{g}{r_{11}}\frac{1}{r_{11}}`$. Therefore, no Kaluza Klein modes on the $`11^{th}`$ circle are excited, and IIA supergravity is identical to eleven-dimensional supergravity. In summary, eleven-dimensional supergravity may be used over all space for energies $`\omega \frac{1}{l_p},m_s`$ provided curvatures are small in string units in the ten-dimensional region, and in Planck units in the eleven-dimensional region. Throughout this paper $`g_0`$ is taken to be very small, so that $`R_{11}l_p\frac{1}{m_s}`$ and the energy condition above is satisfied if $`\omega m_s`$. 
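As a quick numerical check of the two limits quoted below Eq. (2.3) (added here for illustration; the identifications R<sub>11</sub> = g<sub>0</sub>l<sub>s</sub> and l<sub>p</sub>³ = g<sub>0</sub>l<sub>s</sub>³ are the standard IIA/M-theory relations, up to convention-dependent factors), the image sum can be evaluated directly:

```python
import numpy as np

N, R11, lp = 100, 1.0, 1.0      # illustrative values; only ratios matter for the check

def A_minus_one(r, x11=0.0, n_max=5000):
    """Image sum of Eq. (2.3) minus the asymptotic 1: N M5-branes arrayed on the 11th circle."""
    n = np.arange(-n_max, n_max + 1)
    return np.sum(np.pi * N * lp**3 / (r**2 + (x11 - 2 * np.pi * n * R11)**2)**1.5)

# Smeared (NS5) limit r >> R11: the sum goes over to an integral and tends to
# N lp^3 / (R11 r^2), i.e. the N/(m_s r)^2 of Eq. (2.1) once lp^3 = g0 ls^3, R11 = g0 ls.
for r in (5.0, 20.0, 100.0):
    print(r, A_minus_one(r), N * lp**3 / (R11 * r**2))

# Opposite limit r, x11 << R11: the n = 0 image dominates and one recovers the harmonic
# function pi N lp^3 / u^3 of a single stack of M5-branes (the AdS7 x S4 region).
u = 0.01
print(A_minus_one(u), np.pi * N * lp**3 / u**3)
```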
In the ten-dimensional region in (2.2), $`rR_{11}\sqrt{N}`$, the curvatures in string units are always less than $`\frac{1}{\sqrt{N}}`$. Curvatures in the eleven-dimensional region $`rR_{11}\sqrt{N}`$ are always less than $`N^{\frac{1}{3}}`$ in Planck units. Hence 11d supergravity is valid in (2.2) for $`\omega m_s`$ and $`N1`$. Therefore, we study the theory in the ’t Hooft limit $`m_s\mathrm{}`$, $`N\mathrm{}`$ with $`\frac{m_s}{\sqrt{N}}`$ fixed<sup>3</sup> This is the usual ’t Hooft limit for the low energy gauge theory of the IIB little string theory.. In this limit classical supergravity is valid at all energies. 2.3. Qualitative motion We focus initially on the tube region, $`R_{11}r\frac{\sqrt{N}}{m_s}`$, where the expression for $`A`$ (2.3) simplifies to $`A\frac{N}{m_s^2r^2}`$. A mode of energy $`\omega `$ at asymptotic infinity has proper energy $`\omega A^{\frac{1}{6}}`$. The local proper radius of the $`11^{th}`$ circle is $`r_{11}=R_{11}A^{\frac{1}{3}}`$. The local value of the string coupling $`g`$ is $`g_0A^{\frac{1}{2}}`$. The local effective string tension (proper energy per unit proper length) is given by $`m_{eff}^2=m_s^2A^{\frac{1}{3}}`$. The proper energy of a Kaluza Klein excitation in the $`11^{th}`$ direction is $`\frac{1}{r_{11}}=\frac{1}{R_{11}A^{\frac{1}{3}}}`$. This corresponds to energy at infinity $`\omega =\frac{1}{R_{11}A^{\frac{1}{2}}}=\frac{r}{R_{11}}\frac{m_s}{\sqrt{N}}`$. Therefore, Kaluza Klein modes are excited at the bottom of the tube for energies larger than $`\frac{m_s}{\sqrt{N}}`$. A graviton with polarization parallel to the brane propagating in (2.2) with no momentum along the $`S^3`$ or the brane directions obeys at the quadratic level the equation of motion of a minimally coupled scalar . Such a graviton couples to a particular polarization of the stress energy tensor on the brane world volume<sup>4</sup> Upon compactification the theory on the brane has more than one “stress tensor” \[2,,5\] and this is one of them.. We denote it by $`\varphi `$ and study its propagation in (2.2). In the tube ($`\frac{\sqrt{N}}{m_s}rR_{11}`$) we change coordinates to $`z=\frac{\sqrt{N}}{m_s}\mathrm{ln}(\frac{rm_s}{\sqrt{N}})`$ and ignore variations in the $`x_{11}`$ direction to find the string frame metric and coupling constant $$ds^2=dx_6^2+dz^2+\frac{N}{m_s^2}d\mathrm{\Omega }_3^2$$ $`(2.4)`$ $$g=g_0e^{\frac{zm_s}{\sqrt{N}}}.$$ $`(2.5)`$ The Einstein frame metric is $$ds^2=e^{\frac{zm_s}{2\sqrt{N}}}(dx_6^2+dz^2+\frac{N}{m_s^2}d\mathrm{\Omega }_3^2),$$ $`(2.6)`$ and the quadratic action for $`\varphi `$ is $$S=\frac{(2\pi )^5}{\kappa _{10}^2}d^{10}xe^{\frac{2zm_s}{\sqrt{N}}}[(_0\varphi )^2(_z\varphi )^2],$$ $`(2.7)`$ where we have chosen a convenient normalization. In terms of $`\stackrel{~}{\varphi }=e^{\frac{zm_s}{\sqrt{N}}}\varphi `$ it is $$S=\frac{(2\pi )^5}{\kappa _{10}^2}d^{10}x[(_0\stackrel{~}{\varphi })^2(D_z\stackrel{~}{\varphi })^2],$$ $`(2.8)`$ where $`D_z=_z\frac{m_s}{\sqrt{N}}`$. This action leads to the equation of motion $$(_0^2+_z^2\frac{m_s^2}{N})\stackrel{~}{\varphi }=0,$$ $`(2.9)`$ corresponding to a free massive particle with mass $`\frac{m_s}{\sqrt{N}}`$. The two independent solutions in the tube are $`\varphi _\pm =e^{\frac{m_s}{\sqrt{N}}(\beta _\pm (s)zi\sqrt{s}t)}`$, with $$\beta _\pm (s)=1\pm \sqrt{1s}$$ where $$s=\frac{\omega ^2N}{m_s^2}$$ is energy squared in units of the mass gap. The string frame length of the tube is $`\frac{\sqrt{N}}{m_s}\mathrm{ln}(\frac{\sqrt{N}}{g_0})`$, and goes to infinity as $`g_00`$. 
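A short symbolic check of the quoted roots (added for clarity; it uses only the expression for β<sub>±</sub>(s) given above): both roots satisfy β² − 2β + s = 0, are real below the gap at s = 1 (evanescent behaviour in the tube) and become the complex pair 1 ± i√(s−1) above it (propagating modes).

```python
import sympy as sp

s = sp.symbols('s')
beta_p, beta_m = 1 + sp.sqrt(1 - s), 1 - sp.sqrt(1 - s)

# Both roots satisfy beta**2 - 2*beta + s = 0, i.e. beta_+ + beta_- = 2 and beta_+ * beta_- = s
print(sp.simplify(beta_p**2 - 2*beta_p + s), sp.simplify(beta_m**2 - 2*beta_m + s))  # 0 0
print(sp.simplify(beta_p + beta_m), sp.simplify(beta_p * beta_m))                    # 2 s

# Below the gap (s < 1) both roots are real: no propagation in the tube.
# Above the gap they become the complex pair 1 +/- i*sqrt(s - 1): propagating modes.
print(beta_p.subs(s, sp.Rational(1, 2)), beta_m.subs(s, sp.Rational(1, 2)))   # real values
print(sp.simplify(beta_p.subs(s, 5)), sp.simplify(beta_m.subs(s, 5)))         # 1 +/- 2*I
```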
We set $`g_0=0`$ and then the asymptotic region is driven off to infinity, and can be ignored. Our geometry consists of an infinite tube capped at the bottom by $`AdS_7\times S^4`$. A particle with $`s<1`$ cannot propagate in the tube and is confined in the AdS part of our geometry. A particle with $`s>1`$ can propagate in the tube, and can leak out of the AdS region into the tube. Consequently, absorption from the tube into the AdS region is also possible. 2.4. The Differential Equation When studying the near horizon region $`r\frac{\sqrt{N}}{m_s}`$ it is convenient to rescale the coordinates, $`\rho =\frac{r}{R_{11}}`$, $`\chi =\frac{x}{R_{11}}`$. $`y_i=\frac{x_im_s}{\sqrt{N}}`$, where $`i`$ runs over the 5+1 dimensions parallel to the brane. (2.2) becomes $$ds^2=(N\stackrel{~}{A})^{\frac{2}{3}}l_p^2[\stackrel{~}{A}^1dy_6^2+d\chi ^2+d\rho ^2+\rho ^2d\mathrm{\Omega }_3^2]$$ $`(2.10)`$ $$\stackrel{~}{A}=\underset{n=\mathrm{}}{\overset{\mathrm{}}{}}\frac{\pi }{[\rho ^2+(\chi 2\pi n)^2]^{\frac{3}{2}}}.$$ $`\chi `$ has periodicity $`2\pi `$. The tube is the region $`\rho 1`$, while $`\rho 1`$ is the M5 part of the geometry. The propagation of a complex minimally coupled scalar is governed by the action $$S_b=\frac{(2\pi )^5}{\kappa _{11}^2}d^{11}x\sqrt{g}|\varphi |^2.$$ $`(2.11)`$ We set $`\varphi =\theta e^{i\omega t}=\theta e^{i\sqrt{s}y_0}`$ with $`\theta `$ constant on the 3-sphere and the spatial directions parallel to the 5-brane. (2.11) becomes $$S_b=\frac{m_s^6V}{2\pi }𝑑\chi 𝑑\rho \rho ^3(\stackrel{~}{A}s|\theta |^2+|_\chi \theta |^2+|_\rho \theta |^2),$$ $`(2.12)`$ where $`V=d^6x=\frac{N^3}{m_s^6}d^6y`$ is the spacetime volume of the brane. $`\theta `$ obeys the equation of motion $$_\chi ^2\theta +\frac{1}{\rho ^3}_\rho \rho ^3_\rho \theta +\underset{n=\mathrm{}}{\overset{\mathrm{}}{}}\frac{s\pi }{[\rho ^2+(\chi 2\pi n)^2]^{\frac{3}{2}}}\theta =0,$$ $`(2.13)`$ which is valid for all $`\rho `$ and on the full complex $`s`$ plane. $`s`$ is real and positive in Minkowski space, and real and negative in Euclidean space. Recall that the mass gap in the tube is at $`s=1`$. 2.5. A particular solution Define $`f(s,\rho ,\chi )`$ as the unique solution to (2.13) obeying: 1. Near $`\rho ,\chi =0`$ $`f=f(s,u)`$, where $`u^2=\rho ^2+\chi ^2`$. Further $`f(s,u=0)=0`$ in Euclidean space. This condition ensures that in Minkowski space at small $`u`$, $`f`$ is a wave that carries flux only into the 5-branes. It uniquely determines $`f(s,\rho ,\chi )`$ up to a normalization everywhere on the complex $`s`$ plane. In particular at small $`u`$ $$fC(s)\left(\frac{s\pi }{u}\right)^{\frac{3}{2}}K_3\left(2\sqrt{\frac{s\pi }{u}}\right)$$ $`(2.14)`$ 2. $`f`$ is normalized such that for $`\rho 1`$ $$f(s,\rho ,\chi )\rho ^{\beta _+(s)}+D(s)\rho ^{\beta _{}(s)}.$$ $`(2.15)`$ $`C(s),D(s)`$ are determined in principle by (2.13). In the absence of a complete solution we list what we know about these two functions. Note that $`C(s)`$ and $`D(s)`$ are real on the negative real axis. a. For real $`s`$ $`f(s,\rho ,\chi )`$ obeys a flux conservation equation derived from (2.13). The equation is trivial for negative $`s`$, but nontrivial for positive $`s`$ yielding $$\mathrm{Im}\left[(1+D(s))(1D^{}(s))\sqrt{1s}\right]=\frac{\pi ^3s^3|C(s)|^2}{6}.$$ $`(2.16)`$ b. In Appendix A we use the WKB approximation to argue that on the positive real axis $$\underset{s\mathrm{}}{lim}|D(s)|=0.$$ c. At $`s=0`$ (2.13) becomes an equation for free propagation of waves in 4 spatial dimensions and may be exactly solved. 
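That last statement is easy to confirm with a quick finite-difference check (mine): away from the image points, the image sum appearing in (2.13) is annihilated by the operator obtained from (2.13) at $`s=0`$, so it is indeed an exact $`s=0`$ solution (the constant being the other obvious one). The residuals printed below are at the level of the finite-difference truncation error rather than exactly zero.

```python
import numpy as np

def image_sum(rho, chi, n_max=5000):
    """sum_n pi / [rho^2 + (chi - 2 pi n)^2]^{3/2}, the sum entering (2.13)."""
    n = np.arange(-n_max, n_max + 1)
    return np.sum(np.pi * (rho**2 + (chi - 2.0 * np.pi * n) ** 2) ** -1.5)

def flat_operator(F, rho, chi, h=1.0e-3):
    """Finite-difference version of the s = 0 operator in (2.13)."""
    d2chi = (F(rho, chi + h) - 2.0 * F(rho, chi) + F(rho, chi - h)) / h**2
    d2rho = (F(rho + h, chi) - 2.0 * F(rho, chi) + F(rho - h, chi)) / h**2
    drho = (F(rho + h, chi) - F(rho - h, chi)) / (2.0 * h)
    return d2chi + d2rho + 3.0 * drho / rho

for rho, chi in [(0.7, 1.0), (2.0, 0.5), (5.0, 3.0)]:
    print(f"rho = {rho}, chi = {chi}:  residual = {flat_operator(image_sum, rho, chi):.2e}")
```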
$`C(s)`$ and $`D(s)`$ may then be determined for small $`s`$ by perturbing around this exact solution. In Appendix B we show that in perturbation theory $`C(s)`$ and $`D(s)`$ take the form (a similar result was anticipated in ) $$C(s)=\underset{n=0}{\overset{\mathrm{}}{}}(s^3\mathrm{ln}s)^nf_n(s)$$ $`(2.17)`$ $$D(s)=\underset{n=0}{\overset{\mathrm{}}{}}(s^3\mathrm{ln}s)^ng_n(s),$$ $`(2.18)`$ where $`f_n(s)`$ and $`g_n(s)`$ are analytic functions at $`s=0`$ related to each other by (2.16). Explicitly computing the first few terms in the perturbative expansion (Appendix B) we find $$f_0(s)=1+\frac{1}{2}[\gamma \mathrm{ln}(4\pi )]s+𝒪(s^2)$$ $$g_0(s)=𝒪(s^2)$$ $$f_1(s)=\frac{\zeta (3)}{48\pi }+𝒪(s)$$ $$g_1(s)=\frac{\pi ^2}{12}+𝒪(s)$$ where $`\gamma `$ is Euler’s constant. Therefore, $$C(s)=1+smallerterms,$$ $$D(s)=Analytic+\frac{\pi ^2}{12}s^3\mathrm{ln}s+smallerterms.$$ To leading order $`\mathrm{Im}D(s)=\frac{\pi ^3}{12}s^3|C(s)|^2`$ in accordance with (2.16). 3. Absorption from the tube for $`s>1`$ When $`s>1`$ particles propagate in the tube. $`f(s,\rho ,\chi )`$ for positive $`s`$ is a solution with unit flux incident down into the tube, flux $`|D(s)|^2`$ is reflected back up the tube, and flux $`1|D(s)|^2`$ is absorbed into the M5-branes. Therefore, if a particle moves down the tube, the probability that it will be reflected back up the tube is $`|D(s)|^2`$. The reflection $`S`$ matrix element is $`2\sqrt{s1}D(s)`$ (see Appendix C for our normalization of the $`S`$ matrix). Since particles with $`s>1`$ can be absorbed into the M5-branes, the reverse process – the leakage of particles with $`s>1`$ out of the AdS region into the tube – is also possible. This is unlike the situation with a stack of M5-branes in flat space, which ceases to absorb and emit particles in the decoupling limit $`l_p0`$. This observation strengthens the Maldacena-Strominger non-decoupling from the tube in two ways. First, the non-decoupling occurs at finite energy and does not require energy densities. Second, the non-decoupling occurs at energies above $`\frac{m_s}{\sqrt{N}}`$. This value is below $`\frac{m_s}{N^{\frac{1}{6}}}`$, which is required for the validity of the Maldacena-Strominger approximations. 4. Two Point Functions 4.1. The Prescription In this section we will use the holographic proposal of to find an expression for the two point function of the little string theory operator $`O`$ that couples to our minimal scalar according to $$S_{int}=d^6x(\theta ^{}(x,L)O(x)+\theta (x,L)O(x)^{}).$$ $`(4.1)`$ Since $`\theta `$ is a mode of the graviton, we interpret $`O`$ as a component of the energy momentum tensor of the brane. We will be working in momentum space and use $$O(k)=\frac{d^6x}{(2\pi )^3}O(x)e^{ikx}.$$ Let $`O(k)O^{}(k^{})=\mathrm{\Pi }(k)\delta (kk^{})`$, and define a dimensionless two point function $`\mathrm{\Pi }^L(s)=\frac{1}{m_s^6}\mathrm{\Pi }(k)`$ at $`\frac{k^2N}{m_s}=s`$. $`h(s,\rho ,\chi )=\frac{f(s,\rho ,\chi )}{f(s,L,\chi =0)}`$ (for $`L1`$ the $`\chi `$ dependence of $`f`$ is exponentially small) is a solution of (2.13), which is regular everywhere in Euclidean space, and is unity at $`\rho =L`$. 
According to the prescription of \[13,,14\] adapted to our situation, $`\mathrm{\Pi }^L`$ may be found using the classical action (2.12) evaluated on the classical solution $`h(s,\rho ,\chi )`$ $$S_b[h(s,L,\chi )]=Vm_s^6L^3\frac{_\rho f(s,\rho ,\chi )_{\rho =L}}{f(s,L,\chi )}$$ $`(4.2)`$ We subtract from (4.2) the action evaluated on the ‘free’ solution $`\left(\frac{\rho }{L}\right)^{\beta _+(s)}`$, $`S_b^{(0)}=Vm_s^6L^3\beta _+(s)`$ and then retain only the dominant term as $`L\mathrm{}`$ (the toy model of Appendix C clearly motivates this prescription) $$\mathrm{\Pi }^L(s)=L^2[\beta _{}(s)\beta _+(s)]D(s)L^{\beta _{}(s)\beta _+(s)}=L^{2\beta _+(s)}2\sqrt{1s}D(s).$$ $`(4.3)`$ The renormalized correlation function is obtained by removing the $`L`$ dependence, and is given by $$\mathrm{\Pi }(s)=2\sqrt{1s}D(s).$$ $`(4.4)`$ The renormalized correlation function thus defined agrees with the reflection $`S`$ matrix element computed in the previous section. For small $`s`$ (4.4) becomes $$\mathrm{\Pi }(s)=analytic+\frac{\pi ^2}{6}s^3\mathrm{ln}s+smallerterms.$$ This is exactly the two point function of the energy momentum tensor of the M5 theory, which couples to a massless minimally coupled scalar in $`AdS_7`$. This is a consistency check, since the IIA little string theory reduces to the (0,2) theory at low energies. 4.2. The implications of momentum dependent renormalizations In order to obtain a cut off ($`L`$) independent $`\mathrm{\Pi }(s)`$ from the bare two point function $`\mathrm{\Pi }^L(s)`$ we had to perform a momentum dependent multiplicative renormalization. An additional finite momentum dependent renormalization would change the formula (4.4) for $`\mathrm{\Pi }(s)`$. We have chosen our renormalization scheme to yield a two point function that agrees with the $`S`$ matrix computed in section 3. (Strictly, this might leave an $`s`$ dependent phase ambiguity.) A momentum dependent renormalization is not multiplicative in position space. This may indicate the absence of a natural definition of the correlation function in position space, hinting at nonlocality of the theory \[7,,8\] at scale $`\frac{m_s}{\sqrt{N}}`$. If correct, this effect must be distinct from the nonlocality at $`m_s`$ noted in . 5. Absorption from Infinity In this section we consider NS5-branes in IIA theory with small but nonzero asymptotic string coupling $`g_0`$. The tube in the NS5-brane geometry is now of finite length, and the asymptotically flat part of the geometry cannot be ignored. We compute the absorption probability for a particle incident onto the NS5-brane from asymptotic infinity. Consider a wave with $`s>1`$ incident (with zero momentum along the branes and the sphere) onto the 5-brane. This wave may be partially reflected at two locations - at the entrance to the tube from asymptotic infinity, with a reflection amplitude $`R(s)`$, and at the entrance to the AdS region from the tube, with a reflection amplitude $`D(s)`$. In order to be absorbed a wave must penetrate the tube, and then either be absorbed, or be reflected an even number of times and then absorbed. Thus the absorption amplitude is $$A_{\mathrm{}}=\sqrt{(1|D|^2)(1|R|^2)}\underset{n=0}{\overset{\mathrm{}}{}}(e^{i\gamma }DR)^n.$$ $`e^{i\gamma }=(\frac{\sqrt{N}}{g_0})^{2i\sqrt{s1}}`$ represents the phase picked up by the wave traversing twice the length of the tube. 
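A quick numerical check (mine, with made-up illustrative values for $`D`$, the phase of $`R`$ and the tube phase $`\gamma `$, none taken from the paper): the multiple-reflection sum above resums to $`(1-|D|^2)(1-|R|^2)/|1-e^{i\gamma }DR|^2`$, the closed form used for the absorption probability in the next paragraph. The modulus of $`R`$ is set to $`e^{-\pi \sqrt{s-1}}`$, the value quoted there.

```python
import numpy as np

s = 2.3
R = np.exp(-np.pi * np.sqrt(s - 1.0)) * np.exp(0.40j)   # |R| = e^{-pi sqrt(s-1)}, illustrative phase
D = 0.35 * np.exp(1.10j)                                # illustrative D(s), |D| < 1
gamma = 2.7                                             # illustrative tube phase

amp = np.sqrt((1.0 - abs(D) ** 2) * (1.0 - abs(R) ** 2)) * sum(
    (np.exp(1j * gamma) * D * R) ** n for n in range(200))
P_sum = abs(amp) ** 2
P_closed = (1.0 - abs(D) ** 2) * (1.0 - abs(R) ** 2) / abs(1.0 - np.exp(1j * gamma) * D * R) ** 2
print(P_sum, P_closed)    # the two agree to machine precision
```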
The absorption probability, $`_{\mathrm{}}=\frac{(1|D(s)|^2)(1|R(s)|^2)}{|1e^{i\gamma }D(s)R(s)|^2}`$, is the ratio of flux absorbed by the 5-branes and the flux incident on them from asymptotic infinity. $`_{\mathrm{}}`$ may be computed by matching an exact solution to the wave equation in the asymptotic and tube region with (2.15). We find $$_{\mathrm{}}=\frac{(1|D(s)|^2)(1e^{2\pi \sqrt{s1}})}{\left|1+(\frac{\sqrt{N}}{g_0})^{2i\sqrt{s1}}D(s)e^{\pi \sqrt{s1}}\frac{\mathrm{\Gamma }(\beta _{}(s)+2)}{\mathrm{\Gamma }(\beta _+(s)+2)}(\frac{4}{s})^{i\sqrt{s1}}\right|^2},$$ $`(5.1)`$ where $`\mathrm{\Gamma }`$ is the Gamma function. In particular this implies $`|R(s)|=e^{\pi \sqrt{s1}}`$, in agreement with . The absorption cross section of the 5-branes is related to the absorption probability by $`\sigma _{\mathrm{}}=4\pi \frac{_{\mathrm{}}}{\omega ^3}`$. Notice that, in contrast with a stack of M5-branes in flat space, the absorption cross section for a stack of NS5-branes is not zero in the limit $`g_00`$ ($`l_p0`$). This may be understood as follows. A particle incident on a stack of isolated $`M5`$ branes in flat space has to tunnel through a broad potential barrier extending (in the usual coordinates) from $`r=\frac{1}{\omega }`$ down to very small values of $`r`$. The suppression factor due to this barrier goes to infinity in the decoupling limit, so that, in order to send unit flux into the M5-branes one needs to shine an infinite amount of flux on them from infinity. A stack of NS5-branes is, as we have seen, an array of stacks of M5-branes. The geometry very near any one element of the stack is identical to that of an M5 in flat space, but is significantly different for $`r`$ of order $`R_{11}`$. Since $`R_{11}0`$ in the decoupling limit, most of the potential barrier present in the geometry of the isolated M5-brane is chopped off in this modified geometry, and is replaced by the tube, through which particles with $`s>1`$ propagate freely. The residual potential barrier results in only finite tunneling suppression even in the decoupling limit. In order to send unit flux down to the 5-branes, one needs to shine only a finite amount of flux onto the branes from infinity. For completeness we present also the small $`g_0`$ flux absorption ratio $`_{\mathrm{}}(s)`$ for $`s<1`$ $$_{\mathrm{}}=(\frac{g_0}{\sqrt{N}})^{2\sqrt{1s}}\frac{\pi ^4}{3}\frac{|C(s)|^2}{2^{2\sqrt{1s}}}\frac{s^{3+\sqrt{1s}}}{|\mathrm{\Gamma }(1+\sqrt{1s})|^2}.$$ $`(5.2)`$ The first factor in (5.2) is the tunneling suppression experienced by the $`s<1`$ particle in penetrating through the tube of length $`\frac{g_0}{\sqrt{N}}`$. Note that $`_{\mathrm{}}0`$ as $`g_00`$. Acknowledgements We would like to acknowledge useful discussions with O. Aharony, M. Berkooz, T. Banks, C. Callan, O. Ganor, R. Gopakumar, I. Klebanov, S. Lee, J. Maldacena, S. Mathur, G. Moore, M. Rangamani, A. Vishwanath, E. Witten and especially A. Strominger. The work of S.M. was supported in part by DOE grant DE-FG02-91ER40671 and N.S. by DOE grant DE-FG02-90ER40542. Appendix A. Large $`s`$ Behavior In Minkowski space, for $`u=\sqrt{\chi ^2+\rho ^2}1`$ (2.13) describes the propagation of a wave whose coordinate wavelength $`\delta u`$ is approximately $`\frac{u^{\frac{3}{2}}}{\sqrt{s}}`$. The fractional change in length of a wavelength over the length of one wavelength, of order $`\sqrt{\frac{u}{s}}`$, is small for large $`s`$. Therefore, the WKB approximation is valid for $`s1`$, and the dynamics of (2.13) is that of null geodesics in (2.10). 
The geodesic equation for a particle moving in (2.10) with no velocity components along the $`S^3`$ and the brane is $$\begin{array}{cc}& Ay_0^{\prime \prime }\frac{1}{3}_\rho A\rho ^{}y_0^{}\frac{1}{3}_\chi A\chi ^{}y_0^{}=0\hfill \\ & A\rho ^{\prime \prime }+\frac{1}{3}_\rho A(\rho ^{})^2\frac{1}{6A}_\rho A(y_0^{})^2\frac{1}{3}_\rho A(\chi ^{})^2+\frac{2}{3}_\chi A\rho ^{}\chi ^{}=0\hfill \\ & A\chi ^{\prime \prime }+\frac{1}{3}_\chi A(\chi ^{})^2\frac{1}{6A}_\chi A(y_0^{})^2\frac{1}{3}_\chi A(\rho ^{})^2+\frac{2}{3}_\chi A\rho ^{}\chi ^{}=0,\hfill \end{array}$$ $`(\text{A.}1)`$ where $`y_0`$ is the time coordinate in (2.10) and primes denote derivatives with respect to an affine parameter $`\lambda `$ along the curve. A null geodesic also satisfies the mass shell condition $$y_0^{}=A\sqrt{\rho ^2+\chi ^2}.$$ $`(\text{A.}2)`$ In the tube $`A=\frac{1}{\rho ^2}`$ and (A.1), (A.2) admit the one parameter family of solutions $$\chi (\lambda )=\chi _0;\rho (\lambda )=\lambda ^{\frac{3}{2}};y_0(\lambda )=\frac{3}{2}\mathrm{ln}\lambda $$ $`(\text{A.}3)`$ that describe a light ray traveling down that tube at a fixed value of $`\chi =\chi _0`$, where $`\pi <\chi _0\pi `$. Numerical integration of (A.1) into the AdS part of the geometry indicates that every geodesic in (A.3) except the one at $`\chi _0=\pi `$ reaches $`\chi =0,\rho =0`$ (and so is absorbed by the M5-brane) at a finite value of the affine parameter $`\lambda _0`$. As $`\chi _0\pi `$, $`\lambda _0\mathrm{}`$ and the geodesic at $`\chi _0=\pi `$ never reaches the horizon but is reflected back. Hence, classically, a light ray propagating down the tube with no momentum along the $`S^3`$ or the brane is absorbed by the NS5-brane with unit probability and $`D0`$. Appendix B. Small $`s`$ Behavior We wish to determine $`C(s)`$ and $`D(s)`$ for small $`s`$. To do this we must find $`f(s,\rho ,\chi )`$, the solution to (2.13) subject to the conditions listed in section 2.5. At $`s=0`$ (2.13) has two linearly independent solutions that, for small $`\rho ,\chi `$ are functions only of $`u^2=\rho ^2+\chi ^2`$. They are $$\psi _1=1,\psi _2=\underset{n=\mathrm{}}{\overset{\mathrm{}}{}}\frac{\pi }{[\rho ^2+(\chi 2\pi n)^2]^{\frac{3}{2}}}.$$ For large $`\rho `$ $`\psi _2=\frac{1}{\rho ^2}`$ and for small $`u`$ $`\psi _2=\pi (\frac{1}{u^3}+\frac{2\zeta (3)}{(2\pi )^3})`$. $`f(s=0,\rho ,\chi )`$ is a linear combination of $`\psi _1`$ and $`\psi _2`$. Regularity at $`u=0`$ sets the coefficient of $`\psi _2`$ to zero, and the normalization condition in sec 2.5 sets the coefficient of $`\psi _1`$ to unity. Therefore, $`C(s=0)=1`$ and $`D(s=0)=0`$. We iterate our solution to successively higher order in $`s`$. Let $$f=f_0+f_1+f_2+\mathrm{}$$ $$C=C_0+C_1+C_2+\mathrm{}$$ $$D=D_0+D_1+D_2+\mathrm{},$$ where $`f_n,C_n,D_n`$ are of successively of higher order in $`s`$ (later in this appendix we will find that $`f_n,C_n,D_n`$ are each of the form $`s^nP_n(\mathrm{ln}s)`$ where $`P_n`$ is a polynomial of degree $`[\frac{n}{3}]`$). Above we have found $`f_0=\psi _1`$, $`C_0=1`$, $`D_0=0`$. $`f_{n+1}`$ is obtained from $`f_n`$ by solving (2.13) iteratively, $$\left(_\chi ^2+\frac{1}{\rho ^3}_\rho \rho ^3_\rho \right)f_{n+1}=\left(\underset{m=\mathrm{}}{\overset{\mathrm{}}{}}\frac{s\pi }{[\rho ^2+(\chi 2\pi m)^2]^{\frac{3}{2}}}\right)f_n.$$ $`(\text{B.}1)`$ (B.1) does not uniquely determine $`f_{n+1}`$, since the differential operator on the LHS of (B.1) has two zero modes, $`\psi _1`$ and $`\psi _2`$. 
At large $`\rho `$ $`f=\rho ^{\beta _+(s)}+D(s)\rho ^{\beta _{}(s)}`$ may be expanded as $$f=1s\frac{\mathrm{ln}\rho }{2}+\frac{s^2}{8}[(\mathrm{ln}\rho )^2\mathrm{ln}\rho ]+\mathrm{}+\frac{1}{\rho ^2}(D_0+D_1+D_2+\mathrm{})\{1+s\frac{\mathrm{ln}\rho }{2}+\frac{s^2}{8}[(\mathrm{ln}\rho )^2+\mathrm{ln}\rho ]+\mathrm{}\},$$ and so the freedom to add an arbitrary multiple of $`\psi _1`$ to any solution of (B.1) is fixed by the condition that the coefficient of the constant term in an expansion of $`f_{n+1}(\rho )`$ at large $`\rho `$ is zero. Let $`\theta _{n+1}`$ be a particular solution of (B.1) obeying this asymptotic condition. Then $$f_{m+1}=\theta _{m+1}+B_{m+1}\psi _2,$$ where $`B_{m+1}`$ is a constant determined by imposing regularity of $`f`$ at small $`u`$. Regularity of $`f`$ at $`u=0`$ does not imply regularity of $`f_m`$ for any $`m0`$. It implies that $`f_m`$ is proportional to the appropriate term in the small $`s`$ expansion of $`C(s)\left(\frac{s\pi }{u}\right)^{\frac{3}{2}}K_3\left(2\sqrt{\frac{s\pi }{u}}\right)`$. Expanding this order by order in $`s`$ we find $$f(s,\rho ,\chi )=(C_0+C_1+C_2\mathrm{})\{1+\frac{s\pi }{2u}+\frac{s^2\pi ^2}{4u^2}\frac{s^3\pi ^3}{12u^3}[\mathrm{ln}(\frac{s\pi }{u})\psi (1)\psi (4)]+\mathrm{}\}.$$ $`(\text{B.}2)`$ Matching with this form fixes $`B_{m+1}`$ and $`C_{m+1}`$ at each order. At first order we find $$f_1=s\underset{M\mathrm{}}{lim}\left[\left(\underset{n=M}{\overset{M}{}}\frac{\pi }{2\sqrt{\rho ^2+(\chi 2\pi n)^2}}\right)\frac{1}{2}\mathrm{ln}(4\pi M)\right].$$ At large $`\rho `$ $`f_1=\frac{s}{2}\mathrm{ln}\rho `$. At small $`\rho `$ $`f_1=s\{\frac{\pi }{2u}+\frac{1}{2}[\gamma \mathrm{ln}(4\pi )]\}`$ where $`\gamma `$ is Euler’s constant. Therefore, $$C_1=\frac{s}{2}[\gamma \mathrm{ln}(4\pi )],D_1=0.$$ This procedure may be iterated to higher orders. $`f_i,C_i,D_is^i`$ for $`i2`$. Since $`f_2s^2`$, it is possible to choose $`\theta _3s^3`$. However, at small $`u`$ $`f_3s^3h(u)+\frac{\pi ^3}{12u^3}s^3\mathrm{ln}s`$, and therefore, we must choose $`B_3=As^3+\frac{\pi ^2}{12}s^3\mathrm{ln}s`$, where $`A`$ is a constant. This implies $`D_3=(const)s^3+\frac{\pi ^2}{12}s^3\mathrm{ln}s`$ and $`C_3=(const)s^3+\frac{\zeta (3)}{48}s^3\mathrm{ln}s`$. Since at small $`u`$ $`f_3=s^3h(u)+\frac{\pi ^3s^3\mathrm{ln}s}{12u^3}`$, it is possible to choose $`\theta _4=g(u)s^4+s^4\mathrm{ln}s(\frac{c_1}{u}+\frac{c_2}{u^4}+c_3)`$ for small $`u`$. To ensure matching with (B.2) each of $`C_4,B_4`$ must contain a term proportional to $`s^4\mathrm{ln}s`$. The situation is similar at the next order. However, at sixth order, matching with (B.2) (specifically the cross term from $`C_3`$ and the $`\frac{1}{u^3}`$ term) forces us to include also a term proportional to $`s^6(\mathrm{ln}s)^2`$ in $`B_6`$, and hence in $`C_6,D_6`$. Continuing successively to higher orders, it is clear that $`D_i`$ and $`C_i`$ are of the form $`s^iP_i(\mathrm{ln}s)`$, where $`P_i(x)`$ is a polynomial of order $`[\frac{i}{3}]`$ in $`x`$. In other words, at small $`s`$, $`D(s)`$ and $`C(s)`$ may each be written as a power series in $`s`$ and $`s^3\mathrm{ln}s`$. Appendix C. A Toy Model In the string frame, the near horizon geometry of the NS5-brane is a semi-infinite tube, connected through an intermediate region to the M5-brane geometry. A quantum incident down the tube is either reflected back up the tube, or continues through the horizon of the 5-brane and disappears. 
In some ways this problem is similar to a free scalar field theory with the action $$S=\frac{1}{2}𝑑x𝑑t\{(_0\varphi )^2(_x\varphi )^2\theta (x)\varphi ^2\}$$ $`(\text{C.}1)`$ in 1+1 dimensions. The region $`x<0`$ is analogous to the tube, $`x>0`$ is similar to the M5 region, while the abrupt change in mass at $`x=0`$ is the analogue of the transition region between the NS5 tube and the M5 region. In this Appendix we explore this toy model. We compute the $`S`$ matrix of this model in three different ways: using the LSZ formula, computing flux ratios, and computing the Euclidean action as a function of boundary values. We then consider a holographic projection of this theory, and demonstrate that the two point function of the boundary operator is equal to the reflection $`S`$ matrix of the bulk particle. C.1. The Propagator The Euclidean space propagator $$G(x,y,s)=𝑑te^{i(tt^{})\sqrt{s}}\varphi (x,t)\varphi (y,t^{})$$ $`(\text{C.}2)`$ is easily found by solving the appropriate differential equation with the boundary condition that it vanishes as either $`|x|`$ or $`|y|`$ go to infinity. Our conventions are such that $`s`$ is real and positive for Lorentzian energies and real and negative for Euclidean energies. We find $$G(x,y,s)=\{\begin{array}{cc}G^0(xy,s,m=1)+e^{\sqrt{1s}(x+y)}A(s)\hfill & x,y0\hfill \\ e^{\sqrt{1s}x\sqrt{s}y}B(s)\hfill & x0;y0\hfill \\ G^0(x,y,s,m=0)+e^{\sqrt{s}(x+y)}\stackrel{~}{A}(s)\hfill & x,y0\text{.}\hfill \end{array}$$ $`(\text{C.}3)`$ where $$\begin{array}{cc}& A(s)=\frac{1}{2\sqrt{1s}}\left(12s2\sqrt{s}\sqrt{1s}\right)\hfill \\ & B(s)=\sqrt{s}\sqrt{1s}\hfill \\ & \stackrel{~}{A}(s)=\frac{1}{2\sqrt{s}}\left(12s2\sqrt{s}\sqrt{1s}\right),\hfill \end{array}$$ $`(\text{C.}4)`$ and $$G^0(xy,s,m)=\frac{1}{2\pi }_{\mathrm{}}^{\mathrm{}}𝑑p\frac{e^{ip(xy)}}{s+p^2+m^2}=\frac{e^{\sqrt{m^2s}|xy|}}{2\sqrt{m^2s}}$$ $`(\text{C.}5)`$ is a free propagator. $`G`$ can also be written as $$\begin{array}{cc}& G(x,y,s)=\hfill \\ & \{\begin{array}{cc}G^0(xy,s,m=1)+G^0(x0,s,m=1)G^0(0y,s,m=1)\mathrm{\Gamma }_{LL}(s)\hfill & x,y0\hfill \\ G^0(x0,s,m=1)G^0(0y,s,m=0)\mathrm{\Gamma }_{LR}(s)\hfill & x0;y0\hfill \\ G^0(x,y,s,m=0)+G^0(x0,s,m=0)G^0(0y,s,m=0)\mathrm{\Gamma }_{RR}(s)\hfill & x,y0\hfill \end{array}\hfill \end{array}$$ $`(\text{C.}6)`$ in terms of $$\begin{array}{cc}& \mathrm{\Gamma }_{LL}=2\left((12s)\sqrt{1s}2(1s)\sqrt{s}\right)\hfill \\ & \mathrm{\Gamma }_{LR}=4\left((1s)\sqrt{s}+s\sqrt{1s}\right)\hfill \\ & \mathrm{\Gamma }_{RR}=2\left((12s)\sqrt{s}+2s\sqrt{1s}\right).\hfill \end{array}$$ $`(\text{C.}7)`$ Equations (C.3)-(C.7) are valid by analytic continuation on the complex $`s`$ plane with square root functions defined as follows. For $`s=Xe^{i\alpha }(0\alpha <2\pi )`$ we set $`\sqrt{s}=i\sqrt{s}=\sqrt{X}e^{i\frac{\alpha \pi }{2}}`$. Similarly for $`s1=Xe^{i\alpha }`$ we set $`\sqrt{1s}=i\sqrt{s1}=\sqrt{X}e^{i\frac{\alpha \pi }{2}}`$. In particular, for $`s`$ infinitesimally above the real axis (C.3) is the Minkowskian propagator $`G_M`$ defined by $$iG_M(x,y,s)=e^{i\sqrt{s}x^0}T\varphi (x,x^0)\varphi (y,0)_{Minkowski}𝑑x^0.$$ C.2. S Matrix from the Propagator When $`s>1`$ we have two kinds of in states: particles with momentum $`p_{in}=\sqrt{s1}>0`$ coming from the left and particles with momentum $`k_{in}=\sqrt{s}<0`$ coming from the right. We also have two kinds of out states: particles with momentum $`p_{out}=\sqrt{s1}<0`$ going to the left and particles with momentum $`k_{out}=\sqrt{s}>0`$ going to the right. 
We normalize states covariantly so that $`k_{in}|k_{in}^{}=2\sqrt{s}\delta (\sqrt{s_{in}}\sqrt{s_{in}^{}})`$, $`p_{in}|p_{in}^{}=2\sqrt{s1}\delta (\sqrt{s_{in}}\sqrt{s_{in}^{}})`$, and similarly for out states. The covariant $`S`$ matrix with this normalization of states is obtained by amputating the external propagators from (C.6). It is given by $$\left(\begin{array}{cc}p_{out}|p_{in}& p_{out}|k_{in}\\ k_{out}|p_{in}& k_{out}|k_{in}\end{array}\right)=\delta (\sqrt{s_{in}}\sqrt{s_{out}})\left(\begin{array}{cc}\mathrm{\Gamma }_{LL}(s)& \mathrm{\Gamma }_{LR}(s)\\ \mathrm{\Gamma }_{LR}(s)& \mathrm{\Gamma }_{RR}(s)\end{array}\right).$$ $`(\text{C.}8)`$ With our normalization, completeness of in states implies $$_0^1𝑑\sqrt{s}\frac{|k_{in}k_{in}|}{2\sqrt{s}}+_1^{\mathrm{}}𝑑\sqrt{s}\left(\frac{|p_{in}p_{in}|}{2\sqrt{s1}}+\frac{|k_{in}k_{in}|}{2\sqrt{s}}\right)=1.$$ $`(\text{C.}9)`$ Inserting (C.9) into the the scalar product between two arbitrary in states we verify that the $`S`$ matrix is unitary $$\left(\begin{array}{cc}\mathrm{\Gamma }_{LL}^{}(s)& \mathrm{\Gamma }_{LR}^{}(s)\\ \mathrm{\Gamma }_{LR}^{}(s)& \mathrm{\Gamma }_{RR}^{}(s)\end{array}\right)\left(\begin{array}{cc}\frac{1}{2\sqrt{s1}}& 0\\ 0& \frac{1}{2\sqrt{s}}\end{array}\right)\left(\begin{array}{cc}\mathrm{\Gamma }_{LL}(s)& \mathrm{\Gamma }_{LR}(s)\\ \mathrm{\Gamma }_{LR}(s)& \mathrm{\Gamma }_{RR}(s)\end{array}\right)\left(\begin{array}{cc}\frac{1}{2\sqrt{s1}}& 0\\ 0& \frac{1}{2\sqrt{s}}\end{array}\right)=1.$$ $`(\text{C.}10)`$ For $`s<1`$ all particles incident from the right are reflected, and the appropriately normalized $`S`$ matrix is a pure phase. C.3. $`S`$ matrix through flux ratios The probability for a particle incident from the left at energy $`\sqrt{s}`$ to continue thorough to $`x=\mathrm{}`$, $``$, may be computed very simply. A purely in-going wave function for $`x>0`$ is $$\psi (x)=\{\begin{array}{cc}e^{ipx}+E(s)e^{ipx}\hfill & x0\hfill \\ F(s)e^{ikx}\hfill & x0\text{,}\hfill \end{array}$$ with $$F=\frac{2\sqrt{s1}}{\sqrt{s}+\sqrt{s1}}=1+E,E=\frac{\sqrt{s}+\sqrt{s1}}{\sqrt{s}+\sqrt{s1}},$$ and hence $$=1|E|^2=\frac{4\sqrt{s}\sqrt{s1}}{(\sqrt{s}+\sqrt{s1})^2}.$$ This result is equivalent to the $`S`$ matrix of the previous section. For instance, given (C.8), the amplitude for reflection is $`\frac{\mathrm{\Gamma }_{LL}}{2\sqrt{s1}}=E(s)`$ (where we have accounted for state normalizations and the delta function). C.4. $`S`$ matrix from the Euclidean Action The scattering solution used in the flux computation of the previous section may be analytically continued to the complex $`s`$ plane $$\psi (x)=\{\begin{array}{cc}e^{\sqrt{1s}x}+E(s)e^{\sqrt{1s}x}\hfill & x0\hfill \\ F(s)e^{\sqrt{s}x}\hfill & x0\text{,}\hfill \end{array}$$ $`(\text{C.}11)`$ where $$F(s)=\frac{2\sqrt{1s}}{\sqrt{s}+\sqrt{1s}};E(s)=\frac{\sqrt{s}+\sqrt{1s}}{\sqrt{s}+\sqrt{1s}}.$$ For $`s`$ real and negative we compute the action for this solution on $`L<x<\mathrm{}`$. $$S=\frac{\psi }{\psi }_{x=L}=\sqrt{1s}\frac{e^{2\sqrt{1s}L}E(s)1}{e^{2\sqrt{1s}L}E(s)+1}.$$ In terms of the action $`S_0`$ of the similar solution of a free massive theory $`\psi =e^{\sqrt{1s}x}`$ $$\underset{L\mathrm{}}{lim}\frac{(SS_0)}{e^{2\sqrt{1s}L}}=\mathrm{\Gamma }_{LL}(s).$$ $`(\text{C.}12)`$ Thus after a subtraction and renormalization, the Euclidean action reproduces the reflection $`S`$ matrix. C.5. 
Effective theory on the boundary Consider a distinct but related physical theory in which a free particle of unit mass on the real line interacts with an operator $`O`$ in a quantum mechanical system situated at $`x=L`$ via the interaction action $$S_{int}=𝑑t\varphi (L,t)O(t).$$ We will look for an $`O`$ such that the $`\varphi (x,t)`$ Greens functions computed at $`x,y<L`$ are identical to (C.6). The Euclidean space $`\varphi `$ Greens function is given by $$\begin{array}{cc}& \stackrel{~}{G}(x,y,t)=\frac{\delta }{\delta J(x,t)}\frac{\delta }{\delta J(y,0)}𝒟\varphi e^{S_f(\varphi )+{\scriptscriptstyle 𝑑\tau \varphi (L,\tau )O(\tau )}+{\scriptscriptstyle 𝑑\tau 𝑑\chi J(\chi ,\tau )\varphi (\chi ,\tau )}}|_{J=0}\hfill \\ & =G^0(xy,t,m=1)𝑑t^{}𝑑\tau G^0(x+L,t\tau ,m=1)G^0(Ly,t^{},m=1)O(\tau )O(t^{})\hfill \end{array}$$ $`(\text{C.}13)`$ ($`S_f(\varphi )`$ is the free Euclidean action for the $`\varphi `$ field) provided all higher $`n`$ point functions of $`O`$ vanish. Setting $`\stackrel{~}{G}(x,y,s)=𝑑te^{i\sqrt{s}t}\stackrel{~}{G}(x,y,t)`$ we find $$\begin{array}{cc}\hfill \stackrel{~}{G}(x,y,s)& =G^0(xy,s,m=1)\hfill \\ & G^0(x0,s,m=1)G^0(0y,s,m=1)e^{2\sqrt{1s}L}𝑑te^{\sqrt{s}t}O(t)O(0).\hfill \end{array}$$ We choose $$𝑑te^{\sqrt{s}t}O(t)O(0)=e^{2\sqrt{1s}L}\mathrm{\Gamma }_{LL}(s)=S_0S.$$ $`(\text{C.}14)`$ This ensures $`\stackrel{~}{G}(x,y,s)=G(x,y,s)`$, and the dynamics of $`\varphi (x,t)`$ for $`x<L`$ in the new system is identical to the dynamics of $`\varphi `$ governed by (C.1) for $`x<L`$. It should be stressed that there is no simple quantum mechanical system with such correlation functions of $`O`$. Appendix D. Geodesic Completion of the Brane Metric<sup>5</sup> This appendix was worked out in collaboration with A. Strominger. In this appendix we describe the geodesic completion of the near horizon metric of the NS5-brane. As a preliminary, following , we describe the completion of the full (not merely near horizon) geometry of M2, M5 and D3 branes, and demonstrate that the completion of the geometry of several separated M5-branes is regular. D.1. Coincident extremal M2,M5 and D3 branes The metric of a single wedge in the geometry of a set of coincident M2, M5 or D3 branes is $$ds^2=A^{\frac{2}{p+1}}dx_{p+1}^2+A^{\frac{2}{d2}}(dr^2+r^2d\mathrm{\Omega }_{d1}^2)$$ $`(\text{D.}1)`$ ($`0<r<\mathrm{}`$), where $`p`$ is the spatial dimension of the brane, $`d`$ the spatial dimension of the transverse space, and $`A=1+(\frac{\mathrm{\Lambda }}{r})^{d2}`$. The Penrose diagram for this patch of the geometry is a diamond whose edges on the right are $`^\pm `$ and those to the left are horizons at $`t=\pm \mathrm{}`$, at finite affine distance. In terms of $`\zeta =Kr^{\frac{p+1}{d2}}`$ (where $`K`$ is a constant) the metric of the near horizon region $`\zeta ^{p+1}\mathrm{\Lambda }^{d2}`$ of (D.1) is $$ds^2=\mathrm{\Lambda }^2\left(\frac{p+1}{d2}\right)^2\left(\zeta ^2dx_{p+1}^2+\frac{d\zeta ^2}{\zeta ^2}\right)+\mathrm{\Lambda }^2d\mathrm{\Omega }_{d1}^2.$$ $`(\text{D.}2)`$ (D.2) may be smoothly continued past its horizons by extending the range of $`\zeta `$ to negative values, hence the same is true of (D.1) (written in terms of $`\zeta `$) for $`|\zeta |K\mathrm{\Lambda }^{\frac{p+1}{d2}}`$ . For larger $`|\zeta |`$ the Harmonic function $`A=1+\frac{\mathrm{\Lambda }^{d2}}{\zeta ^{p+1}}`$ behaves differently for odd and even $`p`$. 
$`A`$ is well behaved for all $`\zeta `$ when $`p`$ is odd, but vanishes at $`\zeta =\mathrm{\Lambda }^{\frac{d2}{p+1}}`$ when $`p`$ is even, leading to a curvature singularity in (D.1) at that point. The Penrose diagram of (D.1) is a diamond. Depending on whether $`p`$ is even or odd our extension of the geometry augments the diamond differently. For $`p`$ odd we add another diamond whose bottom right edge is attached to the top left edge of the original diamond. The two left edges of the new diamond are new $`^\pm `$. The upper right edge is a new horizon at finite affine distance. For $`p`$ even we add a triangle, whose bottom right edge is attached to the top left edge of the original diamond. The top right edge is a new horizon at finite affine distance. The vertical line is the singularity. In both cases the extension to the original geometry has its own horizon at finite affine distance, which may in turn be continued through. The corresponding Penrose diagrams are depicted in fig. 2. Each point on the diagram represents $`S^{d1}\times R^p`$. Fig. 2: Penrose diagram of the geodesic completion of the metric of the M5 and D3 brane (2a), and the M2 brane (2b). (2a) is also the Penrose diagram for a $`\chi =0`$ slice of the near horizon region of the NS5-brane. D.2. Multiple M5 geometry The metric corresponding to two non-coincident $`p`$ branes is of the form (D.1). The harmonic function $`A`$ however picks up an additional term corresponding to the second brane. Let the first brane be located at the origin, and the second brane on the $`z`$ axis at $`z=a`$ in a Cartesian coordinate system in transverse space. The term in $`A`$ corresponding to the second brane is (recall $`r=K\zeta ^{\frac{p+1}{d2}}`$) $$A=\frac{\mathrm{\Lambda }^{d2}}{\left(K^2\zeta ^{\frac{2(p+1)}{d2}}2Ka\mathrm{cos}\theta \zeta ^{\frac{p+1}{d2}}+a^2\right)^{\frac{d2}{2}}},\mathrm{sin}\theta =\frac{z}{r}.$$ $`(\text{D.}3)`$ We attempt to extend through the horizon of the first brane by extending the range of $`\zeta `$ to negative values. This is permissible only if $`\frac{p+1}{d2}`$ is an integer (in order for (D.3) to be real). If $`\frac{p+1}{d2}`$ is moreover an even integer, as is the case for M5-branes, then the ‘mirror universe’ ($`\zeta <0`$) is identical to the original in every respect. Therefore, this is also true of any further extension through any other brane. The full space is completely regular. D.3. The IIA NS5-brane Consider an array of M5-branes periodically identified. The identification is regular in each wedge as every wedges has the geometry of (2.2). Since wedges in the multiple M5 geometry may be patched together in a regular manner we conclude that the full geometry is regular. We focus on the near horizon region of the NS5-brane geometry. The geodesic completion of a $`\chi =0`$ slice of (2.10) has the Penrose diagram depicted in fig. 2a, where $`^\pm `$ represent light-like asymptotic infinity in the NS5-brane tube. As demonstrated in appendix A, null geodesics starting out in the tube at arbitrary values of $`\chi `$ are qualitatively similar to those starting at $`\chi =0`$, and so fig. 2a provides a fair picture of the causal structure of the spacetime. Each point in the diagram may roughly be thought of as the product of $`R^5`$, an $`S^3`$ and an $`S^1`$. The $`S^1`$ shrinks to zero size at the boundaries of the diagram. References relax C. Callan, J. Harvey and A. 
Strominger, “Supersymmetric String Solitons,” hep-th/9112030, Lectures at the 1991 Trieste Spring School on String Theory and Quantum Gravity.
N. Seiberg, “New Theories in Six Dimensions and Matrix Description of M-theory on $`T^5`$ and $`T^5/Z_2`$,” Phys. Lett. B408 (1997) 98, hep-th/9705221.
M. Berkooz, M. Rozali and N. Seiberg, “Matrix description of M theory on $`T^4`$ and $`T^5`$,” Phys. Lett. B408 (1997) 105, hep-th/9704089.
J. Maldacena and A. Strominger, “Semiclassical decay of near extremal fivebranes,” JHEP 12 (1997) 008, hep-th/9710014.
O. Aharony, M. Berkooz, D. Kutasov and N. Seiberg, “Linear Dilatons, NS5-branes and Holography,” JHEP 10 (1998) 004, hep-th/9808149.
J. Maldacena, “The large N limit of Superconformal theories and Supergravity,” Adv.Theor.Math.Phys. 2 (1998) 231, hep-th/9711200.
A. Peet and J. Polchinski, “UV/IR Relations in AdS Dynamics,” Phys.Rev. D59 (1999) 65006, hep-th/9809022.
O. Aharony and T. Banks, “Note on the Quantum Mechanics of M Theory,” JHEP 03 (1999) 016, hep-th/9812237.
O. Aharony, M. Berkooz, S. Kachru, N. Seiberg, E. Silverstein, “Matrix Description of Interacting Theories in Six Dimensions,” Adv.Theor.Math.Phys. 1 (1998) 148, hep-th/9707079.
E. Witten, “On The Conformal Field Theory Of The Higgs Branch,” JHEP 07 (1997) 003, hep-th/9707093.
N. Itzhaki, J.M. Maldacena, J. Sonnenschein and S. Yankielowicz, “Supergravity and the Large $`N`$ Limit of Theories with Sixteen Supercharges,” Phys.Rev. D48 (1998) 46, hep-th/9802042.
S. Gubser, I. Klebanov, A. Tseytlin, “String Theory and Classical Absorption by Threebranes,” Nucl.Phys. B499 (1997) 217, hep-th/9703040.
S. Gubser, I. Klebanov and A. Polyakov, “Gauge theory Correlators from Non-Critical String Theory,” Phys.Lett. B428 (1998) 105, hep-th/9802109.
E. Witten, “Anti De Sitter Space And Holography,” Adv.Theor.Math.Phys. 2 (1998) 253, hep-th/9702150.
S. Gubser and I. Klebanov, “Absorption by branes and Schwinger terms in the World Volume Theory,” Phys.Lett. B413 (1997) 41, hep-th/9708005.
G. Gibbons, G. Horowitz and P. Townsend, “Higher-dimensional resolution of dilatonic black hole singularities,” Class.Quant.Grav. 12 (1995) 297, hep-th/9410073.
# The Black Hole to Bulge Mass Relation in Active Galactic Nuclei ## 1 Introduction Massive black holes (MBHs) have been postulated in quasars and active galaxies (Lynden-Bell 1969, Rees 1984). Evidence for the existence of MBHs has recently been found in the center of our Galaxy (Ghez et al. 1998, Genzel et al. 1997) and in the weakly active galaxy NGC 4258 (Miyoshi et al. 1995). Compact dark masses, probably MBHs, have been detected in the cores of many normal galaxies using stellar dynamics (Kormendy and Richstone 1995). The MBH mass appears to correlate with the galactic bulge luminosity, with the MBH being about one percent of the mass of the spheroidal bulge (Magorrian et al. 1998, Richstone et al. 1998). The question whether AGN follow a similar black hole-bulge relation as normal galaxies is a very interesting one, as it may shed light on the connection between the host galaxy and the active nucleus. Wandel & Mushotzky (1986) have found an excellent correlation between the virial mass included within the narrow line region (of order of tens to hundreds pc from the center) and the black hole mass estimated from X-ray variability in a sample of Seyfert 1 galaxies. A black hole-bulge relation similar to that of normal galaxies has been reported between MBH of bright quasars and the bulge of their host galaxies (Laor 1998), but the black hole and bulge mass estimates have large uncertainties (section 3.2). Seyfert 1 galaxies provide an opportunity to obtain more reliable black hole-to- bulge mass ratios (BBRs): because of their lower nuclear brightness, their bulge magnitudes can be measured directly. Also the black hole mass estimates (Wandel 1998) are much more reliable for AGN with reverberation data, which are more readily obtained for low luminosity AGN. The relation between the bulge and the nonstellar central source has been studied for many Seyfert galaxies (Whittle 1992; Nelson & Whittle 1996). These works find a tight correlation between the stellar velocity dispersion and the O\[III\] line and radio luminosity. Reliable BLR size measurements are now possible through reverberation mapping techniques (Blandford & McKee 1982, recently reviewed by Netzer & Peterson 1997). High quality reverberation data and virial masses are presently available for about twenty AGN, most of them Seyefert 1s (Wandel, Peterson and Malkan 1999, hereafter WPM). We combine the reverberation masses (section 2) with Whittle’s bulge estimates in order to study the BBR in low-luminosity AGN and compare it to MBHs in normal galaxies and quasars (section 3). In section 4 we derive a MBH-evolution theory that can explain our results. ## 2 BLR reverberation as a probe of black holemasses in AGN Broad emission lines probably provide the best probe of black holes in AGN. Assuming the line-emitting matter is gravitationally bound, and hence has a near-Keplerian velocity dispersion (indicated by the line width), it is possible to estimate the virial central mass: $`MG^1rv^2.`$ This remains true for many models where the line emitting gas is not in Keplerian motion, such as radiation-driven motions and disk-wind models (e.g. Murray et al. 1998): in a diverging outflow the density (end hence the emissivity) decreases outwards, the emission is dominated by the gas close to the base of the flow, where the velocity is close to the escape velocity. 
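As a sanity check on the numbers (mine, not from WPM), the bare virial combination $`rv^2/G`$ evaluates to roughly $`2\times 10^5`$ solar masses per light-day of BLR size per $`(10^3\mathrm{km}\mathrm{s}^{-1})^2`$ of line width; the slightly smaller coefficient $`1.45\times 10^5`$ used below presumably absorbs an order-unity factor relating the FWHM to the virial velocity (my inference, not stated in the text).

```python
G = 6.674e-8        # cgs
c = 2.998e10        # cm/s
M_sun = 1.989e33    # g

def virial_mass_solar(tau_days, v3):
    """r*v^2/G with r = c*tau (tau in days) and v = v3 * 10^3 km/s."""
    r = c * tau_days * 86400.0
    v = v3 * 1.0e8
    return r * v**2 / G / M_sun

print(virial_mass_solar(1.0, 1.0))   # ~ 2e5 solar masses per light-day per (10^3 km/s)^2
```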
(Note that if the velocity is actually larger than Keplerian, the virial mass is an upper limit and the result that the Seyfert galaxies in our sample have smaller black hole masses than MBHs detected in normal galaxies becomes even stronger). The main problem in estimating the virial mass from the emission-linedata is to obtain a reliable estimate of the size of the BLR, and to correctly identify the line width with the velocity dispersion in the gas. WPM use the continuum/emission-line cross-correlation function to measure the responsivity-weighted radius $`c\tau `$ of the BLR (Koratkar & Gaskell 1991), and the variable (rms) component of the spectrum to measure the velocity dispersion in the same part of the gas which is used to calculate the BLR size, automatically excluding constant features such as narrow emission lines and Galactic absorption. The line width and the BLR size yield the virial ”reverberation” mass estimate $`M_{rev}(1.45\times 10^5M_{})c\tau _{days}v_3^2`$ where $`v_3`$ is the rms FWHM in units of $`10^3\mathrm{km}\mathrm{s}^1`$. The virial assumption ($`vr^{1/2}`$) has been directly tested using data for NGC 5548 (Krolik et al. 1991; Peterson & Wandel 1999). The latter authors find that when the BLR reverberation size is combined with the rms line width in multi-year data for NGC 5548, the virial masses derived from different emission lines and epochs are all consistent with a single value ($`(6.3\pm 2)\times 10^7M_{}`$) which demonstrates the case for a Keplerian velocity dispersion in the line-width/time-delay data. ## 3 The Black-Hole - Bulge Relation ### 3.1 Seyfert 1 galaxies We use the WPM sample with the virial mass derived from the H$`\beta `$ line by the reverberation-rms method (table 1). For 13 of the objects in the WPM sample we obtain the bulge magnitudes from the compilation of Whittle et al. (1992), who calculate the bulge magnitude from the total blue magnitude, using the empirical formula of Simien & deVaucolours (1986), relating the galaxy type to the bulge/total fraction. The bulge magnitudes are corrected for the nonstellar emission using the correlation between H$`\beta `$ and the nonstellar continuum luminosity (Shuder 1981). Mkn 110 and Mkn 335 have no estimated bulge magnitude in Whittle’s compilation, because they do not have well defined Hubble types. For these objects we adopt a canonical Hubble type of Sa, which has a bulge correction ($`m_{bulge}m_{gal}`$) of 1.02 mag. For Mkn 335 there is already a fairly large (and therefore uncertain) correction for the active nucleus (1.17 mags). For 3C120 the bulge magnitude is taken from Nelson & Whittle (1995) who find a bulge magnitude of -22.12. The uncertainties in the bulge magnitude were estimated from Whittle’s (1992) quality indicators. These indicators estimate the error in the subtraction of the nonstellar luminosity and some other factors, which for most galaxies amount to an uncertainty in the range 0.2-0.6 magnitudes. To the galaxies with an uncertain Hubble type we assign an uncertainty of 1.2 mag. We relate the bulge luminosity to the magnitude by the standard expression $`\mathrm{log}(L_{bulge}/L_{})=0.4(M_v+4.83)`$. The bulge mass is then calculated using the mass-to-light relation for normal galaxies, $`\frac{M/M_{}}{L/L_{}}5(L/10^{10}L_{})^{0.15}`$ (see Faber et al. 1997). Fig. 1 shows the black hole mass as a function of the bulge mass. All the objects in our sample have BBRs lower than 0.006, the average value found for normal galaxies (Magorrian et al. 
1998, represented by a dashed line), and the sample average is $`<M_{BH}>=3\times 10^4<M_{bulge}>`$. Also shown is NGC 1068 (a Seyfert 2), with the MBH mass estimated by maser dynamics. The narrow-line Seyfert galaxy NGC 4051, which has by far the lowest BBR in our sample, may indicate that narrow-line Seyfert 1 galaxies have smaller black holes than ordinary Seyfert 1 galaxies (Wandel and Boller 1998). Figure 1. The virial black hole mass calculated by the reverberation BLR method (from Wandel, Peterson & Malkan 1999) vs. the bulge magnitude (from Whittle 1992) for the Seyfert 1 galaxies in our sample (diamond), the masing Seyfert 2 galaxy NGC 1068 (square) and PG0953+414 (triangle). Open diamonds indicate an unknown Hubble type (and therefore a large uncertainty in the bulge magnitude). The dashed diagonal lines are the average BBRs for normal galaxies (Magorrian et al. 1998) and Seyfert 1s (this work). ### 3.2 Quasars Laor (1998) has studied the black hole-host bulge relation for a sample of 15 bright PG quasars. Estimating the bulge masses from the Bahcall et al. (1997) study of quasar host galaxies he admits the uncertainty in estimating bulge luminosity, dominated by the much brighter nonstellar source. Laor estimates the black hole mass using the H$`\beta `$ line width and the empirical relation $`r_{BLR}=15L_{44}^{1/2}\mathrm{light}\mathrm{days}`$ (Kaspi et al. 1997), where $`L_{44}=L(0.11\mathrm{\mu m})`$ in units of $`10^{44}\mathrm{erg}\mathrm{s}^1`$. As this relation has been derived for less than a dozen low- and medium luminosity objects (mainly Seyferts) with measured reverberation sizes, it is not obvious that it may be extrapolated to more luminous quasars. The BLR size is also dependent on the ionizing and soft X-ray continua (Wandel 1997). The WPM sample (which includes Kaspi’s sample) indicates that the slope of the BLR-size luminosity relation may flatter than 0.5; WPM find $`r17L_{44}^{0.36\pm 0.09}\mathrm{l}\mathrm{d}`$. If this result is correct, extrapolating the $`rL^{1/2}`$ relation over two orders of magnitude (the difference between the average luminosity of the PG quasars used by Laor and Kaspi’s sample average) overestimates the black hole mass. Indeed, for the only object common to the Laor and WPM samples - the quasar PG 0953+414 - Laor finds 3$`\times 10^8M_{}`$, while the reverberation -rms method gives $`(1.5_{0.9}^{+1.1})\times 10^8M_{}`$. ### 3.3 Comparing Normal Galaxies, Seyferts and Quasars Fig. 2 shows the three groups in the plane of black hole mass vs. bulge luminosity. The best fits and the corresponding standard deviations to the data in the three groups are ($`M_8=M_{BH}/10^8M_{}`$ and $`L_{10}=L_{bulge}/10^{10}L_{}`$): 1. Normal galaxies (Magorrian et al. 1998, table 2, excluding upper limits) - $`M_8=2.9L_{10}^{1.26}`$, $`\sigma =0.47`$ 2. PG quasars (Laor 1998, all objects in his table 1) - $`M_8=1.6L_{10}^{1.10}`$, $`\sigma =0.38`$ 3. Seyfert 1s (this work, excluding NGC 4051) $`M_8=0.2L_{10}^{0.83}`$, $`\sigma =0.43`$ Figure 2. Mass estimates of MBHs plotted against the luminosity of the bulge of the host galaxy. Squares: MBH candidates from Magorrian et al. (1998), open squares - MBHs detected by maser dynamics triangles - PG quasars from Laor (1998), diamonds - Seyfert 1 galaxies (this work). MW denotes our Galaxy. Also given are the best linear fits for each class (see text). The dashed long line is the estimate of dead black holes from integrated AGN light. 
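To make the comparison concrete, the short sketch below (mine) chains together the relations of section 3: an illustrative bulge magnitude (not a value from the sample) is converted to a luminosity and mass, and the three best fits above are evaluated at that luminosity. The magnitude-luminosity relation is taken with its conventional sign, i.e. the logarithm of the bulge luminosity in solar units equals $`0.4(4.83-M_v)`$.

```python
M_v = -20.5                                        # hypothetical bulge magnitude
L_bulge = 10.0 ** (0.4 * (4.83 - M_v))             # bulge luminosity in solar units
M_bulge = 5.0 * (L_bulge / 1.0e10) ** 0.15 * L_bulge   # mass-to-light relation quoted above
L10 = L_bulge / 1.0e10

fits = {"normal galaxies": (2.9, 1.26), "PG quasars": (1.6, 1.10), "Seyfert 1s": (0.2, 0.83)}
print(f"M_v = {M_v}:  L_bulge = {L_bulge:.2e} L_sun,  M_bulge = {M_bulge:.2e} M_sun")
for name, (a, b) in fits.items():
    M_BH = a * L10**b * 1.0e8                      # M_8 = M_BH / 1e8 M_sun
    print(f"  {name:15s}  M_BH = {M_BH:.2e} M_sun   M_BH/M_bulge = {M_BH / M_bulge:.1e}")
```

For these inputs the Seyfert 1 relation gives a black-hole-to-bulge ratio of a few times $`10^{-4}`$, while the normal-galaxy relation gives a ratio near $`0.006`$, in line with the averages quoted above.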
As a group Seyfert 1 galaxies have a significantly lower BBR than normal galaxies and bright quasars. This lower value agrees with the remnant black hole density derived from integrating the emission from quasars (Chokshi and Turner 1992): $`\rho _{BH}=(L/ϵc^2)\mathrm{\Phi }(L,t)𝑑L𝑑t=2\times 10^5(ϵ/0.1)M_{}\mathrm{Mpc}^1`$, ($`\mathrm{\Phi }`$ is the quasar luminosity function and $`ϵ`$ is the efficiency), which compared to the density of starlight in galaxies ( $`\rho _{gl}`$) gives $`\rho _{BH}/\rho _{gl}=2\times 10^3(0.1/ϵ)(M_{}/L_{})`$ (shown as a dashed line in Fig. 2). ## 4 Black hole Evolution and the Black Hole - Bulge ratio ### 4.1 Demography While Seyfert 1 galaxies seem to have a lower BBR than bright quasars and the galaxies with detected MBHs in the Magorrian et al. (1998) sample, they are in good agreement with the BBRs of the upper limits and of our Galaxy, and with remnant quasar black holes. It is plausible therefore that the Seyfert galaxies in our sample represent a larger population of galaxies with low BBRs, which is under-represented in the Magorrian et al. sample. This hypothesis is supported by the distribution of black hole masses in Fig. 2: the only MBHs under $`2\times 10^8M_{}`$ detected by stellar dynamical methods are in the Milky Way, in Andromeda and its satellite M32, and in NGC 3377 (the latter being nearly $`10^8M_{}`$). These galaxies, as well as NGC1068 and at least two of the three upper limit in Magorrian’s sample do have low BBRs, comparable to our Seyfert 1 average. Actually for angular-resolution limited methods, the MBH detection limit is correlated with bulge luminosity: for more luminous bulges the detection limit is higher, because the stellar velocity dispersion is higher (the Faber-Jackson relation). In order to detect the dynamic effect of a MBH it is necessary to observe closer to the center, while the most luminous galaxies tend to be at larger distances, so for a given angular resolution, the MBH detection limit is higher. This may imply that Magorrian et al. ‘s sample is biased towards larger MBHs, as present stellar-dynamical methods are ineffective for detecting MBHs below $`10^8M_{}`$ (except in the nearest galaxies). The BLR method is not subject to this constraint, making Seyfert 1 galaxies good candidates for detecting low-mass MBHs. (Note however that by the same token the WPM sample may be biased towards Seyferts with low black hole masses, which tend to vary on shorter timescales and hence are more likely to be chosen for reverberation studies). ### 4.2 Black Hole Growth by Accretion Fig. 2 shows that Seyfert 1 galaxies have relatively small MBHs compared MBHs in normal galaxies and to quasars, yet they have comparable bulges. Below we suggest a possible explanation. Consider MBH growth by accretion from the host galaxy. Since the accretion radius, $`R_{acc}0.3M_6v_2^2`$pc (where $`M_6=M_{BH}/10^6M_{}`$ and $`v_2=v_{}/100\mathrm{k}\mathrm{m}\mathrm{s}^1`$ is the stellar velocity dispersion) is small compared with the size of the bulge, we may assume the mass supply to the black hole is given by the spherical accretion rate, $`\dot{M}=4\pi \lambda R_{acc}v_{}^{}{}_{}{}^{2}\rho =(10^4M_{}/\mathrm{yr})\lambda M_6^2v_2^3\rho _{}`$, where $`\rho _{}`$ is the stellar (or gas) mass density in units of $`M_{}pc^3`$ (corresponding to $`4.4g/cm^3`$) and $`\lambda <1`$ is the Bondi parameter combined with a possible reduction factor due to angular momentum. 
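The following sketch (mine, with illustrative parameter values) evaluates these timescales. I take the scalings to be: an accretion growth time of $`10^8`$ yr at $`10^8`$ solar masses, scaling as $`v_2^3/(\lambda \rho _{})`$ and inversely with the black hole mass, and the Eddington time $`t_E=4.5\times 10^7(ϵ/0.1)`$ yr; the transition mass is then found by equating the two rather than by quoting a prefactor.

```python
# Illustrative parameter values only.
v_2, rho_star, lam, eps = 1.0, 1.0, 1.0, 0.1

def t_acc_yr(M_solar):
    """Accretion-dominated growth time, normalized to 1e8 yr at 1e8 M_sun."""
    return 1.0e8 * (1.0e8 / M_solar) * v_2**3 / (lam * rho_star)

t_E_yr = 4.5e7 * (eps / 0.1)

# growth flattens where t_acc(M) = t_E
M_flat = 1.0e16 * v_2**3 / (lam * rho_star * t_E_yr)
print(f"t_E = {t_E_yr:.2e} yr")
print(f"t_acc(1e6 Msun) = {t_acc_yr(1e6):.2e} yr,  t_acc(1e8 Msun) = {t_acc_yr(1e8):.2e} yr")
print(f"growth flattens near M ~ {M_flat:.2e} Msun")   # ~ 2e8, as quoted above
```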
Integrating we find the time required for growing from a mass $`M_i`$ to $`M_f`$ by accretion of gas or stars, $`t_{acc}=(10^{16}yr)v_2^3\rho _{}^1\lambda ^1(M_{}/M_iM_{}/M_f)(10^8\mathrm{yr})M_8^1v_2^3\rho _{}^1`$. For masses $`<10^6\rho _{}^1M_{}`$ this is larger than the Hubble time, so seed MBHs must grow by black hole coalescence which, even for dense clusters, is of the order of the Hubble time (Lee, 1993; Quinlan & Shapiro 1987). For densities as high as in the central parsec of the Milky Way (few$`\times 10^7M_{}pc^{}3`$; Genzel et al. 1997) or for NGC 4256 (Miyoshi et al. 1995) accretion-dominated growth becomes feasible for masses as low as 100-1000$`M_{}`$. While the accretion rate is growing as $`M^2`$ and the growth time decreases as $`M^1`$, the MBH eventually becomes large enough for accretion-dominated growth time $`t_g=t_{acc}`$. This phase may be applicable for the Seyfert population. Since the luminosity is $`L\dot{M}M^2`$, the Eddington ratio increases as $`L/L_{Edd}ML^{1/2}`$. The black hole growth slows down when the Eddington ratio approaches unity, $`t_g`$ being bound by the Eddington time, $`t_gt_E=M/\dot{M}_E=4.5\times 10^7(ϵ/0.1)^1\mathrm{yr}`$ where $`\dot{M}_E`$ is the accretion rate that would produce an Eddington luminosity. Equating $`t_E`$ to $`t_{acc}`$ we find that the growth rate flattens at a BH mass of $`M_t(2\times 10^8M_{})v_2^3\rho _{}^1(ϵ/0.1\lambda )`$. In the Eddington -limited era, which may correspond to quasars, the growth rate is exponential, depleeting the available matter in the bulge on the relatively short time scale $`t_{Edd}`$. This leads to an asymptotic BBR, which is likely to be similar for luminous quasars and their largest remnant MBHs in normal galaxies. This scenario predicts that on average quasars should have higher Eddington ratios (near unity) than Seyferts, and larger BBRs. We can test the prediction from the data at hand. Estimating the bolometric luminosity of AGN with reverberation data from the lag (WPM), and of PG quasars from the relation $`L_{bol}8\nu L_\nu (3000\AA )`$ (Laor 1998), we find a correlation between the Eddington ratio and the BBR, with Seyferts having Eddington ratios in the $`10^30.1`$ range and a low BBR, and quasars with Eddington ratios close to unity and higher BBRs. From the Eddington ratio we can also infer the actual growth time, $`t_gt_EL/L_E`$. For most objects in our sample $`t_g`$ is in the range $`10^8\mathrm{few}\times 10^9`$ yr. I acknowledge valuable discussions with Mark Whittle Gary Kriss, Geremy Goodman, Doug Richstone and Mark Morris and the hospitality of the Astronomy Department at UCLA.
# Monotonicity Properties of Certain Measures over the Two-Level Quantum Systems ## Abstract We demonstrate — using the case of the two-dimensional quantum systems — that the “natural measure on the space of density matrices $`\rho `$ describing $`N`$-dimensional quantum systems” proposed by Życzkowski et al \[Phys. Rev. A 58, 883 (1998)\] does not belong to the class of normalized volume elements of monotone metrics on the quantum systems. Such metrics possess the statistically important property of being decreasing under stochastic mappings (coarse-grainings). We do note that the proposed natural measure (and certain evident variations upon it) exhibit quite specific monotonicity characteristics, but not of the form required for membership in that distinguished class. Keywords: density matrix, monotone metric, operator monotone function, Bures metric, measures over quantum systems Mathematics Subject Classification (2000): 81Qxx, 26A48 In a recent paper , Życzkowski, Horodecki, Sanpera and Lewenstein (ZHSL) proposed a “natural measure in the space of density matrices $`\rho `$ describing $`N`$-dimensional quantum systems”. We demonstrate here — using the two-dimensional quantum systems — that this measure (and certain obvious variations upon it) do not belong to another class of, arguably, “natural measures” (ones consistent with desired properties under “coarse-graining” ). This collection of measures, recently studied by Slater in various contexts (cf. ), consists of the normalized volume elements of monotone metrics (including, notably, the minimal monotone or Bures metric ). Nevertheless, the ZHSL measure and the variations we associate with it below ((9) and (10)), exhibit quite definite monotonicity properties, but not those needed for these measures to fall within the indicated class. Using the “Bloch sphere” representation of the two-dimensional density matrices, $$\rho =\frac{1}{2}\left(\begin{array}{cc}1+z& xiy\\ x+iy& 1z\end{array}\right),$$ (1) where $`x^2+y^2+z^21`$, and converting to spherical coordinates $`(x=r\mathrm{sin}\theta \mathrm{cos}\varphi ,y=r\mathrm{sin}\theta \mathrm{cos}\varphi ,z=r\mathrm{cos}\theta )`$, an (unnormalized) volume element of a monotone metric is expressible in the form (cf. \[8, eq. (3.17)\]), $$\frac{r^2\mathrm{sin}\theta }{f\left((1r)/(1+r)\right)(1r^2)^{1/2}(1+r)}.$$ (2) Here $`f(t)`$ is an operator monotone function satisfying a (self-adjointness) condition, $`f(t)=tf(1/t)`$. All such functions considered in also fulfill a normalization condition (at least, in a limiting sense), $`f(t)=1`$. A function $`f:^+`$ is called operator monotone if the relation $`0KH`$ implies $`0f(K)f(H)`$ for any matrices $`K`$ and $`H`$ of any order. The relation $`KH`$ signifies that the eigenvalues of $`HK`$ are non-negative. (Plenio, Virmani, and Papadopoulos have recently employed operator monotone functions to derive a new inequality relating the quantum relative entropy and the quantum conditional entropy. For another quantum-theoretic study involving such functions, cf. .) Now, using the polar decomposition theorem, ZHSL represented density matrices in the form, $$\rho =UDU^{},$$ (3) where $`U`$ is a unitary matrix, $`U^{}`$ is its conjugate transpose, and $`D`$ is a diagonal matrix with non-negative elements $`d_i`$ (the eigenvalues of $`\rho `$), which, of course, sum to 1. 
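As a small illustration (mine, not from the paper), the radial part of the unnormalized volume element (2), with the $`\mathrm{sin}\theta `$ factor stripped off, can be evaluated for any admissible $`f`$. Specializing to $`f(t)=(1+t)/2`$, the standard choice associated with the minimal monotone (Bures) metric, collapses it to $`r^2/\sqrt{1-r^2}`$, the familiar Bures volume element on the Bloch ball; the snippet below checks this numerically.

```python
import numpy as np

def radial_density(r, f):
    """Radial part of the unnormalized volume element (2), sin(theta) stripped off."""
    t = (1.0 - r) / (1.0 + r)
    return r**2 / (f(t) * np.sqrt(1.0 - r**2) * (1.0 + r))

f_bures = lambda t: 0.5 * (1.0 + t)   # minimal monotone (Bures) choice

for r in (0.1, 0.5, 0.9):
    print(r, radial_density(r, f_bures), r**2 / np.sqrt(1.0 - r**2))
```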
The action of the operator monotone function $`f(t)`$ upon $`\rho `$ can be expressed in the form $$f(\rho )=Uf(D)U^{},$$ (4) where $`f`$ acts individually on the diagonal entries of $`D`$ \[14, p. 112\]. The ZHSL measure — which ZHSL used for estimating the volume of separable states (cf. ) — is the product of the uniform distribution (Haar measure) on the unitary transformations $`U(N)`$ and the uniform distribution on the $`(N1)`$-dimensional simplex spanned by the $`N`$ eigenvalues ($`d_i`$) of $`\rho `$. It is quite natural to consider this latter uniform distribution as the specific case ($`\nu =1`$) of the $`(N1)`$-dimensional (symmetric) Dirichlet probability distributions \[5, eq. (1)\] \[15, eq. (3)\], $$p_\nu (d_1,d_2,\mathrm{},d_N)=\frac{\mathrm{\Gamma }(N\nu )d_1^{\nu 1}d_2^{\nu 1}\mathrm{}d_N^{\nu 1}}{\mathrm{\Gamma }(\nu )^N},\nu >0$$ (5) on the $`(N1)`$-dimensional simplex. (For $`\nu =\frac{1}{2}`$, one obtains the “Jeffreys’ prior” of Bayesian theory , corresponding to the classically unique monotone/Fisher information metric . Życzkowski \[15, App. A\] has shown that a vector of an $`N`$-dimensional random orthogonal \[unitary\] matrix generates the Dirichlet measure (5) with $`\nu =\frac{1}{2}`$ $`[\nu =1]`$.) For $`N=2`$, the unitary matrices $`U`$ are parameterizable as the product of a phase factor (irrelevant for the purposes here) and a member of $`SU(2)`$ \[20, eqs. (2.40), (2.41)\], $$U(\alpha \beta \gamma )=e^{i\alpha \sigma _3/2}e^{i\beta \sigma _2/2}e^{i\gamma \sigma _3/2},$$ (6) where $`\sigma _i`$ denotes the $`i`$-th Pauli matrix and the three Euler angles have the ranges, $`0\alpha <2\pi `$, $`0\beta \pi `$, and $`0\gamma <2\pi `$. More explicitly, we have $$U(\alpha \beta \gamma )=\left(\begin{array}{cc}e^{i\alpha /2}\mathrm{cos}(\beta /2)e^{i\gamma /2}& e^{i\alpha /2}\mathrm{sin}(\beta /2)e^{i\gamma /2}\\ e^{i\alpha /2}\mathrm{sin}(\beta /2)e^{i\gamma /2}& e^{i\alpha /2}\mathrm{cos}(\beta /2)e^{i\gamma /2}\end{array}\right).$$ (7) Since the angle $`\gamma `$ also can be shown to be absent (drop out) in the ZHSL representation (3) of the density matrix (cf. ), the corresponding (conditional) Haar measure is simply $`\mathrm{sin}\beta d\beta d\alpha /2\pi `$. We converted the (generalized) ZHSL measures — that is, the product of this measure and members of the (symmetric) Dirichlet family (5) — to Cartesian coordinates, making use of the transformations (several others, leading to equivalent results (9) and (10), are also possible), $$\alpha =\frac{1}{2}i(2\mathrm{log}(x+iy)\mathrm{log}(x^2+y^2)),\beta =\mathrm{cos}^1(\frac{z}{\sqrt{x^2+y^2+z^2}}),d_1=\frac{1}{2}(1\sqrt{x^2+y^2+z^2}).$$ (8) By so doing, we obtained the one-parameter ($`\nu `$) family of probability distributions, $$q_\nu (x,y,z)=\frac{\mathrm{\Gamma }(\frac{1}{2}+\nu )(1x^2y^2z^2)^{\nu 1}}{2\pi ^{\frac{3}{2}}\mathrm{\Gamma }(\nu )(x^2+y^2+z^2)},$$ (9) over the Bloch sphere. (One can easily see, then, that the generalized ZHSL measures are unitarily-invariant, since the two eigenvalues of $`\rho `$ — that is $`(1\pm \sqrt{x^2+y^2+z^2})/2`$ — are preserved under unitary transformations. “Our choice \[of the ZHSL measure\] was motivated by the fact that both component measures are rotationally invariant” .) 
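The statement that $`\gamma `$ drops out of (3) is easy to confirm numerically, as in the sketch below (mine). I use a standard sign convention for the $`SU(2)`$ Euler decomposition; the conclusion does not depend on that choice, since the $`\gamma `$ factor is diagonal and commutes with $`D`$.

```python
import numpy as np

def U_euler(alpha, beta, gamma):
    """SU(2) Euler-angle matrix in a standard sign convention."""
    return np.array([
        [np.exp(-0.5j * (alpha + gamma)) * np.cos(beta / 2),
         -np.exp(-0.5j * (alpha - gamma)) * np.sin(beta / 2)],
        [np.exp(0.5j * (alpha - gamma)) * np.sin(beta / 2),
         np.exp(0.5j * (alpha + gamma)) * np.cos(beta / 2)],
    ])

D = np.diag([0.7, 0.3])   # eigenvalues summing to 1
rho_a = U_euler(1.1, 0.8, 0.3) @ D @ U_euler(1.1, 0.8, 0.3).conj().T
rho_b = U_euler(1.1, 0.8, 5.9) @ D @ U_euler(1.1, 0.8, 5.9).conj().T
print(np.max(np.abs(rho_a - rho_b)))   # effectively zero: rho is gamma-independent
```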
In spherical coordinates ($`r,\theta ,\varphi `$), this probability distribution (9) takes the form, $$\stackrel{~}{q}_\nu (r,\theta ,\varphi )=\frac{\mathrm{\Gamma }(\frac{1}{2}+\nu )(1r^2)^{\nu 1}\mathrm{sin}\theta }{2\pi ^{3/2}\mathrm{\Gamma }(\nu )}.$$ (10) Now, to cast (10) in the form (2) of the volume element of a monotone metric (making the substitution $`r=(1t)/(1+t)`$), one finds that $$f(t)\frac{(1t)^2\left(\frac{t}{(1+t)^2}\right)^{1/2\nu }}{(1+t)}.$$ (11) (Since the limit of these functions at $`t=1`$ for all $`\nu `$ are 0, we are unable to normalize them so that $`f(1)=1`$, in the manner of , which condition would appear to be necessary for them to be associated with “operator means”. However, interestingly, the functions (11) do fulfill the self-adjointness requirement of Petz and Sudár , $`f(t)=tf(1/t)`$.) In particular, for $`\nu =1`$ (the uniform distribution on the 1-simplex or line), we obtain $`f(t)\frac{(1t)^2}{\sqrt{t}}`$, and for $`\nu =\frac{1}{2}`$ (that is, the Jeffreys’ prior on the 1-simplex), we have $`f(t)\frac{(1t)^2}{(1+t)}`$. Now, the class of functions (11) possesses some interesting properties in regard to monotonicity (as simple plots will reveal). They are all monotone-decreasing for $`t[0,1]`$ (so they are certainly not operator monotone in the sense of Petz and Sudár ), but they are also monotone-increasing for $`t>1`$. We do not know whether or not they are, then, also operator monotone for $`t>1`$ (but some of the numerous results presented in \[14, Chap. V\] might be relevant in this regard). Nevertheless, it would appear that the behavior of the functions given by (11) in the interval is possibly more relevant than the behavior for $`t>1`$, since in (2), the argument of $`f`$ — that is, $`(1r)/(1+r)`$ — can vary only between 0 and 1. (For the operator monotone functions $`f(t)`$, in the form considered by Petz and Sudár , if one takes $`yf(x/y)`$, that is, the reciprocal of the associated “Morozova-Chentsov function”, one obtains various means of $`x`$ and $`y`$, such as the arithmetic and logarithmic ones \[23, eq. (3)\]. Using (11) in this formula, one obtains for the case $`\nu =1`$, $`(xy)^2/4\sqrt{xy}`$, and for $`\nu =\frac{1}{2}`$, $`(xy)^2/2(x+y)`$.)) We have, thus, shown that the “natural measure” recently proposed by ZHSL (and a class of extensions of it) can not be considered (at least for the case of two-dimensional quantum systems, but conjecturally also for the $`N`$-dimensional systems, $`N>2`$, as well) to be proportional to the volume elements of monotone metrics. Any metrics associable with these ZHSL measures would, therefore, lack the statistically important property of being decreasing under stochastic mappings (that is, coarse-grainings). Let us point out that we have previously, as well, argued that the ZHSL measure and its variants do not correspond to normalized volume elements of monotone metrics \[24, secs. II.C, D\]. But there the evidence (concerning the eigenvalues of certain averaged density matrices) was of a more indirect nature. In particular, we were unaware in of the fact that it is possible to express (cf. ) the ZHSL measure using the theoretically minimal number ($`N^21`$) of variables needed to parameterize the convex set of $`N\times N`$ density matrices (rather than the naive number, $`N^2+N1`$, that a reading of would immediately suggest). Consequently, in we did not utilize the transformations (8) between the (economized number of) ZHSL parameters and more conventionally employed ones. 
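Since the functions (11) are defined only up to an overall normalization, their monotonicity pattern can be checked directly. A small numerical check (ours; the exponent in (11) is read as $`1/2-\nu `$, which reproduces the $`\nu =1`$ and $`\nu =1/2`$ special cases quoted above) confirms the behavior described in the text: monotone decreasing on $`(0,1)`$, monotone increasing for $`t>1`$, and self-adjoint in the sense $`f(t)=tf(1/t)`$.

```python
import numpy as np

def f_zhsl(t, nu):
    """Candidate function of eq. (11), up to normalization, with exponent 1/2 - nu."""
    return (1 - t) ** 2 * (t / (1 + t) ** 2) ** (0.5 - nu) / (1 + t)

t_lo = np.linspace(1e-4, 1 - 1e-4, 2000)     # inside (0, 1)
t_hi = np.linspace(1 + 1e-4, 50.0, 2000)     # beyond t = 1

for nu in (0.5, 1.0, 2.0):
    print(nu,
          bool(np.all(np.diff(f_zhsl(t_lo, nu)) < 0)),   # decreasing on (0, 1)
          bool(np.all(np.diff(f_zhsl(t_hi, nu)) > 0)))   # increasing for t > 1

# Self-adjointness f(t) = t f(1/t), up to floating-point error
t = np.linspace(0.1, 10.0, 500)
print(np.allclose(f_zhsl(t, 1.0), t * f_zhsl(1.0 / t, 1.0)))
```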
It would be of interest, as well, to investigate to what extent it is possible to replace the Dirichlet distributions (5) in the ZHSL (product) measure (leaving, however, intact the Haar measure) by other probability distributions lying outside the Dirichlet family, so that the so-modified ZHSL (product) measures would, then in fact, take the form of normalized volume elements of monotone metrics. In fact, in \[25, eqs. (10), (11)\] we have been able — following the lead of Hall \[26, eqs. (24), (25)\] — to do precisely this for the specific instance of the minimal monotone (Bures) metric in the cases $`N=2`$ and 3. Also in , we have further determined the necessary Hall normalization constants (for the marginal Bures probability distributions over the $`N`$ eigenvalues of the $`N\times N`$ density matrices) for $`N=4`$ and 5 which constants, in conjunction with Euler angle parameterizations (not yet fully specified) of $`SU(4)`$ and $`SU(5)`$, parallel to that reported for $`SU(3)`$ in , would allow one to obtain the corresponding normalized Bures volume elements for those $`N`$-dimensional quantum systems, as well. The normalization constants ($`N>2`$) reported in appear to be strongly related to partial sums of the denominators of even-indexed Bernoulli numbers. ###### Acknowledgements. I would like to express appreciation to the Institute for Theoretical Physics for computational support in this research, to Mark Byrd for his insightful observation that it is possible to express the ZHSL measure in a non-“over-parameterized” form, as well as to K. Życzkowski for his many informative communications.
no-problem/9904/cond-mat9904111.html
ar5iv
text
# Timesaving Double-Grid Method for Real-Space Electronic-Structure Calculations ## Abstract We present a simple and efficient technique for ab initio electronic-structure calculations utilizing a real-space double-grid with a high density of grid points in the vicinity of nuclei. This technique promises to greatly reduce the overhead for performing the integrals that involve non-local parts of pseudopotentials, while keeping a high degree of accuracy. Our procedure gives rise to no Pulay forces, unlike other real-space methods using adaptive coordinates. Moreover, we demonstrate the potential power of the method by calculating several properties of atoms and molecules. So far, a number of methods for ab initio electronic-structure calculations entirely in real space have been proposed. They have some advantages compared with the usual plane-wave approach. The first one is that boundary conditions are not constrained to be periodic, e.g., nonperiodic boundary conditions for molecules and a combination of periodic and nonperiodic boundary conditions for surfaces. Even more important is that a technique utilizing a real-space double-grid is available where many more grid points are put in the vicinity of nuclei, so that the integrals involving rapidly varying pseudopotentials inside the core regions of atoms can be calculated with a high degree of accuracy. This kind of integration over numerous sampling points, however, requires a large amount of computational effort. In this Letter, we present a quite simple and efficient double-grid technique which yields a drastic reduction of the computational cost without a loss of accuracy in the framework of the real-space finite-difference method. This technique can also be applied to the plane-wave approach, if the integration over the core region is implemented in real space. The double-grid employed here consists of two sorts of uniform and equi-interval grid points, i.e., coarse and dense ones, both depicted in Fig. 1 (the coarse-grid points are marked by “$`\times `$”). The dense-grid region enclosed by the circle is the core region of an atom, which is taken to be large enough to contain the cutoff region of the non-local parts of the pseudopotentials. Throughout this paper we postulate that wave-functions are defined and updated only on coarse-grid points, while pseudopotentials are strictly given on all dense-grid points in an analytically or numerically exact manner. Let us consider inner products between wave-functions $`\psi (x)`$ and non-local parts of pseudopotentials $`v(x)`$ (see Fig. 2). For simplicity, the illustration is limited to the one-dimensional case hereafter. The values of wave-functions on coarse-grid points are stored in computer memory, and the values on dense-grid points are evaluated by interpolation from them. The well-known values of pseudopotentials both on coarse- and dense-grid points are also shown schematically. Then, from Fig. 2(a) one can see that the values on coarse-grid points alone are so inadequate that the inner products cannot be accurately calculated; the errors are mainly due to the rapidly varying behavior of the pseudopotentials. On the other hand, Fig. 2(b) indicates that the inner products can be evaluated to great accuracy, if the number of dense-grid points is taken to be sufficiently large and also if the values of wave-functions on dense-grid points are properly interpolated from those on coarse-grid points .
There are several interpolation methods for wave-functions, among which the simplest is linear interpolation. In this case, the values of wave-functions $`\psi _j\equiv \psi (x_j)`$ on dense-grid points $`x_j`$ are interpolated from the values $`\mathrm{\Psi }_J\equiv \psi (X_J)`$ on coarse-grid points $`X_J`$ as $$\psi _j=\frac{h-(x_j-X_J)}{h}\mathrm{\Psi }_J+\frac{h-(X_{J+1}-x_j)}{h}\mathrm{\Psi }_{J+1},$$ (1) where $`h`$ is the grid spacing of coarse-grid points. The inner product is assumed to be accurately approximated by the discrete sum over dense-grid points, i.e., $$\int _{-d/2+R_I}^{d/2+R_I}v^I(x)\psi (x)dx\simeq \sum _{j=-nN_{core}}^{nN_{core}}v_j^I\psi _jh_{dens},$$ (2) where $`d`$ is the “diameter” of the core region, $`R_I`$ is the atomic position, $`v_j^I\equiv v^I(x_j)=v(x_j-R_I)`$, $`2N_{core}+1`$ ($`2nN_{core}+1`$) is the number of coarse- (dense-) grid points in the core region, $`h_{dens}`$ is the grid spacing of dense-grid points, and $`n=h/h_{dens}`$, i.e., $`n-1`$ is the number of dense-grid points existing between adjacent coarse-grid ones. Now, substituting Eq.(1) in the r.h.s. of Eq.(2), we have $$\int _{-d/2+R_I}^{d/2+R_I}v^I(x)\psi (x)dx\simeq \sum _{J=-N_{core}}^{N_{core}}w_J^I\mathrm{\Psi }_Jh,$$ (3) where $$w_J^I=\sum _{s=-n}^{n}\frac{h-|x_{nJ+s}-X_J|}{nh}v_{nJ+s}^I.$$ (4) As shown in Eq.(3), the r.h.s. of the inner product (2) has been replaced with the summation over coarse-grid points inside the core region, which produces only a modest overhead in the computational cost. It should be remarked that the weight factors $`w_J^I`$ arising from the interpolation are independent of the wave-functions, and depend only on the well-known values of the pseudopotentials on dense-grid points. Thus, once the factors $`w_J^I`$ are computed at each molecular-dynamics time-step, we do not have to recalculate them throughout the self-consistent iteration-steps. The extension of the above procedure to the cases of higher-order interpolations is straightforward. Fourier interpolation is of great interest, since the term representing the position of the atom can be factorized in the expression of $`w_J^I`$ as the structural phase factor $`\mathrm{exp}(ik\frac{2\pi }{d}R_I)`$. Indeed, the interpolated values of wave-functions $`\psi _j`$ on dense-grid points $`x_j`$, i.e., $$\psi _j=\sum _{k=-N_{core}}^{N_{core}}\stackrel{~}{\mathrm{\Psi }}_ke^{ik\frac{2\pi }{d}x_j},$$ (6) with $$\stackrel{~}{\mathrm{\Psi }}_k=\frac{1}{d}\sum _{J=-N_{core}}^{N_{core}}\mathrm{\Psi }_Je^{-ik\frac{2\pi }{d}X_J}h,$$ (7) are substituted into the r.h.s. of Eq.(2) to give Eq.(3), where the weight factors in this case are $$w_J^I=\sum _{k=-N_{core}}^{N_{core}}\stackrel{~}{w}_ke^{-ik\frac{2\pi }{d}X_J}e^{ik\frac{2\pi }{d}R_I},$$ (9) with $$\stackrel{~}{w}_k=\frac{1}{d}\sum _{j=-nN_{core}}^{nN_{core}}v_je^{ik\frac{2\pi }{d}x_j}h_{dens}.$$ (10) Here $`v_j\equiv v(x_j)`$. An advantage to be stressed is that the calculation of the table of $`\stackrel{~}{w}_k`$ in Eq.(10) for each atom species, which requires a time-consuming computational effort because of the summation over dense-grid points, has only to be carried out once at the early stage of the entire job. At that stage, the use of fast Fourier transforms (FFT) considerably reduces the amount of computation. We now examine the performance of our method through calculations of several properties of atoms and molecules.
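As an illustration of how little machinery the double-grid sum requires, the following one-dimensional sketch (ours; the model potential and wave-function are smooth stand-ins, not actual pseudopotential data) builds the weights of Eq.(4), then compares the coarse-grid sum of Eq.(3) with the dense-grid quadrature of Eq.(2) and with a naive coarse-grid-only quadrature.

```python
import numpy as np

# Illustrative 1D model: a smooth wave function and a sharply peaked "projector" v^I
# centered slightly off a coarse-grid point (to mimic a rapidly varying pseudopotential).
h, n   = 0.25, 9                  # coarse spacing and h_dens = h/n, as in the text
h_dens = h / n
N_core = 8                        # core region spans 2*N_core + 1 coarse points
R_I    = 0.09                     # atom position, deliberately off a coarse-grid point
psi    = lambda x: np.exp(-0.3 * x**2) * np.cos(0.8 * x)
v_I    = lambda x: np.exp(-((x - R_I) / 0.12) ** 2)

Xc = h * np.arange(-N_core, N_core + 1)                  # coarse points X_J
xd = h_dens * np.arange(-n * N_core, n * N_core + 1)     # dense points x_j

exact = np.sum(v_I(xd) * psi(xd)) * h_dens               # dense-grid quadrature, Eq.(2)

# Double-grid sum, Eq.(3), with the linear-interpolation weights of Eq.(4)
s = np.arange(-n, n + 1)
w = np.array([np.sum((h - np.abs((n * J + s) * h_dens - X)) / (n * h)
                     * v_I((n * J + s) * h_dens))
              for J, X in zip(range(-N_core, N_core + 1), Xc)])
double_grid = np.sum(w * psi(Xc)) * h

naive = np.sum(v_I(Xc) * psi(Xc)) * h                    # coarse-grid-only quadrature

print(f"dense reference : {exact:.6f}")
print(f"double grid     : {double_grid:.6f}")            # close to the reference
print(f"coarse only     : {naive:.6f}")                  # noticeably off for sharp v
```

The weights are built once from the potential alone, so the inner product itself costs only a sum over the coarse points, exactly as the discussion above emphasizes.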
Hereafter, we obey the nine-points finite difference formula (i.e., the formula with N=4) for the derivatives arising from the kinetic-energy operator, imposing a nonperiodic boundary condition of vanishing wave-functions. The dense-grid spacing is fixed at $`h_{dens}=h/9`$. The electronic charge-density, the Hartree potential, and the exchange-correlation potential are assumed to be described only on coarse-grid points. Exchange-correlation effects are treated using the local-spin-density approximation of the density-functional theory. The norm-conserving pseudopotential of Bachelet, Hamann, and Schlüter (BHS) is employed in a separable non-local form . The convergence of the total energy for the hydrogen atom as a function of the coarse-grid cutoff energy is presented in Fig. 3. According to Ref., we defined an equivalent energy cutoff $`E_c^{coars}`$ \[$`(\pi /h)^2`$ Ry\] to be equal to that of the plane-wave method which uses a FFT grid with the same spacing as the present calculation. The position where the atom is located is at the center between adjacent grid points. The hydrogen atom is one of the most difficult atoms to treat, owing to the rapid oscillation of its s-state non-local pseudopotential, and consequently the total energy as a function of the cutoff energy calculated without any interpolation sharply oscillates. On the other hand, our prescription with interpolations drastically improves the results; the obtained total energies converge rapidly and monotonously as the cutoff energy increases. Fig. 4 shows the total-energy variation as a function of the displacement of the oxygen atom relative to coarse-grid points along a coordinate axis. The coarse-grid spacing is taken to be $`h=0.21`$ a.u. The first-row elements are difficult atoms to deal with, because their pseudopotentials are rapidly varying in the cutoff region. In treating these elements in the context with a real-space approach, it is necessary to consider the dependence of the energy on the position of the atom. As seen in Fig. 4, the energy variation in our scheme with cubic interpolation is $``$0.04% of the total energy, which is negligibly small. We next apply our method to the calculation of the equilibrium bond length of the CO<sub>2</sub> molecule in order to examine the performance of our method for molecules. The total energies as a function of the C-O distance are shown in Fig. 5. We employ the cubic interpolation formula of the present method and take the coarse-grid spacing $`h`$ to be 0.21 a.u. The results without any interpolation have many “humps”, and the distances between adjacent humps on the respective dotted curves are close to the grid spacing $`h`$, which confirms that the oscillation depends on the relative position of the atom with respect to coarse-grid points. To circumvent this problem, Gygi et al. proposed a real-space grid in adaptive curvilinear coordinates. However, owing to the use of these distorted coordinates, their scheme needs Pulay forces in molecular-dynamics simulations; it would be computationally very demanding. On the contrary, our approach requires only the Hellmann-Feynman forces, which significantly reduces the computational cost concerning the calculation of forces. As shown in Fig. 5, the total energies computed with our method do not make absurd dependence on the position of the atom. The bond length determined by the minimum of the total energy is in excellent agreement with the experimental data 2.19 a.u. 
These results make it clear that our method solves this problem and that it is very efficient and applicable . Finally, the calculated properties of the other molecules are given in Table I. The coarse-grid resolution $`h`$ is 0.24 a.u. for N<sub>2</sub> and 0.21 a.u. for the other molecules, and the cubic interpolation formula is used. Our results are in good agreement with those of experiments and other theories. In summary, we have presented a method for performing the finite-difference electronic-structure calculations using real-space double-grid. Our method has the following desirable features: (i)The inner products between wave-functions and pseudopotentials are evaluated with the same accuracy as in the case of dense-grid points, in spite of the calculation with respect to coarse-grid points. (ii)The computational effort is modest, thanks to the integration over coarse-grid points. (iii)The double-grid acts as stabilizer in the calculation of the total energy, i.e., it suppresses the spurious oscillation of the energy for the displacement of the atom relative to coarse-grid points. (iv)Unlike other real-space methods using adaptive coordinates, our procedure gives rise to no Pulay forces, and so a substantial increase in the computational cost does not occur. From what has been mentioned above, it seems reasonable to conclude that our method is suitable, because of its simplicity and efficiency, for large-scale molecular-dynamics simulations incorporating the norm-conserving pseudopotentials. Work is in progress to apply the method to large-scale molecular-dynamics simulations. This work was partially supported by a Grant-in-Aid for COE Research. The numerical calculation was carried out by the computer facilities at the Institute for Solid State Physics at the University of Tokyo. Thanks are due to Masako Inagaki and Kouji Inagaki for reading the entire text in its original form.
no-problem/9904/astro-ph9904092.html
ar5iv
text
# A dusty pinwheel nebula around the massive star WR 104 Wolf-Rayet (WR) stars are luminous massive blue stars thought to be immediate precursors to the supernova terminating their brief lives. The existence of dust shells around such stars has been enigmatic since their discovery some 30 years ago; the intense radiation field from the star should be inimical to dust survival . Although dust-creation models, including those involving interacting stellar winds from a companion star , have been put forward, high-resolution observations are required to understand this phenomena. Here we present resolved images of the dust outflow around Wolf-Rayet WR 104, obtained with novel imaging techniques, revealing detail on scales corresponding to about 40 AU at the star. Our maps show that the dust forms a spatially confined stream following precisely a linear (or Archimedian) spiral trajectory. Images taken at two separate epochs show a clear rotation with a period of $`220\pm 30`$ days. Taken together, these findings prove that a binary star is responsible for the creation of the circumstellar dust, while the spiral plume makes WR 104 the prototype of a new class of circumstellar nebulae unique to interacting wind systems. Observations of WR 104 were made with the Keck I telescope on 14 April and 4 June 1998, and employed the technique of aperture masking interferometry in order to recover information out to the diffraction limit of the 10 m Keck aperture . We present, in Figure 1, reconstructed images taken at 1.65 & 2.27 $`\mu `$m ($`\mathrm{\Delta }\lambda =0.33\&0.16`$$`\mu `$m respectively) for both observing epochs. As infrared emission from the hot circumstellar dust dominates the infrared region of the spectrum, we may interpret the highly asymmetric curved plumes evident in the maps as tracing the distribution of this material. Previous high-resolution efforts have been restricted to partially-resolved one-dimensional visibility curves interpreted in the context of spherically symmetric outflow models . As a comparison with these earlier results, we have fitted a uniform-disk model to our visibilities, azimuthally averaged and cropped to the resolutions then obtained, finding perfect agreement with the 130 mas diameter disk reported in 1981 . This similarity over a timescale of decades is in accord with the inclusion of WR 104 in the small handful of ‘persistent’ dust producing WR’s . Additional interferometric observations at 3.08 $`\mu `$m ($`\mathrm{\Delta }\lambda =0.1`$$`\mu `$m) show no evidence of the marked enlargement towards longer wavelengths reported by Dyck et. al. . However, as is apparent from Figure 1, the images do not show even remote similarity to a uniform disk, and we hereafter abandon further consideration of circularly symmetric models. The maps of Figure 1 consist of two components; a bright central core which appears elongated, and a curved tail which seems to emerge from one end of the elongation. This spiral structure dominates the morphology at both colors, and maps taken in April and June 1998 show a high degree of similarity with the striking exception of a clear rotation of the image. The hypothesis of dust formation mediated by the orbital motion of a companion star and subsequently swept outwards by the stellar wind unifies the spiral structure and the $`83^{}`$ rotation apparent between our two epochs into a simple, elegant geometry. 
A schematic of our model is shown in Figure 2, showing the WR+OB binary, the dust formation zone associated with the collision front between the stellar winds , and the resultant curved outflow plume as this dust ‘nursery’ is carried with the orbital motion. Although the idea of a binary nature for WR 104 is not new, it is only very recently that the presence of an OB companion was confirmed from detection of hydrogen Balmer absorption features and optical emission-line dilution . We have overplotted, also in Figure 1, the results of fitting a simple geometrical model consisting of an Archimedian spiral where the free parameters are the winding rate and the viewing angle to the observer. These modelling results were obtained by finding a global best fit to all four maps simultaneously, but allowing for a rotation of the spiral structure about the model-derived axis between the two epochs. Implicit in the assumption of an Archimedian spiral model is the hypothesis that the material in the spiral is moving out at a uniform velocity, and that new material feeding into the flow insertion point does so at a uniform angular velocity. Although such a model contains the fewest free parameters and yet gives excellent fits to our data, it is important to note that a more complex model may be required if the plume is in a zone where it is being accelerated, or if the orbit of the companion presumed to be mediating the flow is eccentric (e.g. ). The physical geometry of the system, as derived directly from our model, is a spiral plume rotating with a period of $`220\pm 30`$ days viewed at an angle of $`20\pm 5^{}`$ from the pole and with an outflow velocity of $`111\pm 17`$ mas yr<sup>-1</sup> in the plane of the orbit. If we identify this rotation as the orbital period of a binary stellar system, then assuming a combined mass in the range of $`2050M_{}`$ results in a separation of $`1.92.6`$ AU. As this corresponds to a separation of only $`1`$ mas on the sky, our images lack the resolution to show such detail directly, and furthermore the infrared flux is so dominated by thermal emission from the warm dust that it is unlikely that we have detected the central stars in our maps at all. It is interesting to compare our binary parameters with those of the famous ‘episodic’ dust producer WR 140 which is known to undergo dramatic bouts of dust creation co-incident with the passage of a companion star through periastron in a highly elliptical orbit . At periastron, the separation between the stars is $`2.5`$ AU, raising the possibility that the physical conditions favoring copious dust formation fall within a confined range of companion distances for WC+OB binaries. We may make use of the 1220 km s<sup>-1</sup> wind outflow velocity combined with our proper motion to derive an independent estimate of $`2.3\pm 0.7`$ kpc as the distance to WR 104, where the dominant error arises from a $``$25% uncertainty in outflow velocities found by comparing the results of various line-profile studies (velocities as high as 1600 km s<sup>-1</sup> have been reported for this star ). Our distance is somewhat further than earlier estimates of 1.6 kpc derived from a possible association with Sgr OB1 , however the discrepancy is within the estimated errors. Alternatively if the closer distance is preferred, then our measurements imply an outflow velocity of 845 km s<sup>-1</sup> for the dust component. 
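The geometry described above is simple enough to reproduce with a few lines of code. The sketch below (ours; it uses the best-fit values quoted in the text and ignores the quoted uncertainties) generates a sky-projected Archimedian spiral and repeats the kinematic distance estimate from the adopted wind speed and the measured proper motion of the plume.

```python
import numpy as np

# Best-fit quantities quoted in the text (uncertainties ignored in this sketch)
period_yr  = 220.0 / 365.25    # rotation period of the dust spiral
mu_mas_yr  = 111.0             # outflow proper motion in the orbital plane (mas/yr)
incl_deg   = 20.0              # viewing angle measured from the pole
v_wind_kms = 1220.0            # adopted wind outflow velocity

# Kinematic distance: v_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc]
d_pc = v_wind_kms / (4.74 * mu_mas_yr * 1e-3)
print(f"kinematic distance ~ {d_pc / 1e3:.1f} kpc")          # ~2.3 kpc

# Archimedian spiral: dust leaves the wind-collision zone at the current orbital azimuth
# and then coasts radially outward while the launch point rotates with the period
t = np.linspace(0.0, period_yr, 400)                          # one full turn
phi = 2.0 * np.pi * t / period_yr
r_mas = mu_mas_yr * t
x, y = r_mas * np.cos(phi), r_mas * np.sin(phi)
x_sky, y_sky = x, y * np.cos(np.radians(incl_deg))            # projection onto the sky

print(f"radius after one turn ~ {r_mas[-1]:.0f} mas")         # roughly the extent seen in the maps
print(f"                      ~ {r_mas[-1] * 1e-3 * d_pc:.0f} AU")
```

The radius reached after a single turn and its physical size at the kinematic distance come out close to the values discussed for the imaged plume, which is the internal consistency the simple Archimedian model relies on.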
Our geometrical solution solves for the projected viewing angle of the observer, and thus we avoid the usual $`sin(i)`$ uncertainty. We note that although isolated dust grains should be momentum-coupled to the flow , the outflow velocities in the wake of the passage of the OB stellar companion could be significantly perturbed and thus the behavior of the plume may not act as a good tracer of the bulk motion of the stellar wind. With additional observations covering an entire orbit, we will be able to greatly refine our estimates of the physical geometry of this system. It is apparent from Figure 1 that the outflowing material presents a relatively smooth, spatially confined stream without strong clumping out to a radius of some $`65`$ mas ($`150`$ AU) from the star, by which time the outflow has rotated through about $`360^{}`$. We believe that the finding of a single complete turn in the spiral arm is not coincidental, as we detect only dust heated by radiation from the central stars, and therefore lying along a direct line of sight. Material in the second and further coils of the outflow will, of course, be eclipsed by newer material closer in, and will therefore cool rapidly resulting in the relatively sharp cutoff we see. Although there is some evidence for brightness variations along the arm at a level of a few percent of the peak, especially apparent in the maps taken at 1.65 $`\mu `$m, the overall behavior points to a continuous and smooth dust creation process, in accord with the classification of WR 104 as a ‘persistent’ dust producer with a constant IR flux . Again it is interesting to compare this behavior with that of WR 140 whose elliptical orbit results in episodic dust production. The contrasting characteristics of WR 104 argue against a high degree of orbital eccentricity, giving some justification to our choice of the Archimedian spiral model in this case. For WR 104, our observations confine the IR excess emission from the dust to lie in a narrow, spatially confined outflow which rotates synchronously with a period of 220 days – a plausible period for a wind-interacting binary system. No spherically symmetric or diffuse component to the dust nebula was detected to within a few percent of the peak flux. We are therefore able to reject dust-formation models resulting in spherical or disk shaped outflows such as the clumpy spherical outflows of or equatorial density enhancements in favor of the binary wind-wind model. The viewing angle to the observer of $`20\pm 5^{}`$ is well constrained by these measurements. This finding of an almost face-on system contradicts previous attribution of high circumstellar extinction and spectral variability to an edge-on viewing angle. Some of these observations may be explained as WR 104 is thought to lie behind a heavily obscuring cloud with further extinction possibly arising from material created in past mass-loss events of the progenitor star. As the dust comprises only a very small fraction of the total mass loss, it therefore acts as a visible tracer in the outflow enabling the fascinating possibility of dynamical studies of the wind itself. Detailed numerical modelling is needed to determine if the high degree of initial collimation and subsequent confinement of the dust plume can be explained with simple models of the wind-wind interaction , or whether more detailed three-dimensional calculations such as those of Walder , are required. 
Spectral studies of the plume, beyond the scope of this letter, should reveal the thermal and chemical evolution of the dust as it is swept outwards into the interstellar medium, and also yield information on processes underlying the binary-mediated dust creation mechanism. With a handful of dusty WR systems open to study with this novel method for the detection of binary stars, wider questions of dust formation in this class of objects can now be addressed. Acknowledgements Data herein were obtained at the W.M. Keck Observatory, made possible by the generous support of the W.M. Keck Foundatation, and operated as a scientific partnership among the California Institute of Technology, the University of California and NASA. This work was supported through grants from the National Science Foundation. The authors would like to thank Devinder Sivia for the maximum-entropy mapping program “VLBMEM” and David Hale for sparking our interest in Wolf-Rayet stars. All correspondence should be addressed to Peter Tuthill (e-mail:gekko@ssl.berkeley.edu)
no-problem/9904/astro-ph9904024.html
ar5iv
text
# The HI Parkes Zone of Avoidance Shallow Survey ## 1 Introduction The dust and high stellar density of the Milky Way obscures up to 25% of the optical extragalactic sky, creating a Zone of Avoidance (ZOA). The resulting incomplete coverage of surveys of external galaxies leaves open the possibility that dynamically important structures, or even nearby massive galaxies, remain undiscovered. Careful searches in the optical and infrared wave bands can narrow the ZOA, (see Kraan-Korteweg, this volume) but in the regions of highest obscuration and infrared confusion, only radio surveys can find galaxies. The 21 cm line of neutral hydrogen (HI) passes readily through the obscuration, so galaxies with sufficient HI can be found through detection of their 21 cm emission. Of course, this method will miss HI-poor, early-type galaxies, and cannot discriminate HI galaxies with redshifts near zero velocity from Galactic HI. Here we describe an HI blind survey for galaxies in the southern ZOA conducted with the new multibeam receiver on the 64-m Parkes telescope. A survey of HI galaxies in the northern ZOA is underway with the Dwingeloo radiotelescope (Henning et al. 1998; Rivers et al. this volume). ## 2 The Shallow Survey ### 2.1 Observing Strategy The HI Parkes ZOA survey covers the southern ZOA ($`212\mathrm{deg}l36\mathrm{deg};|b|5\mathrm{deg}`$) over the velocity range (cz) = $`1200`$ to $`12700\mathrm{km}\mathrm{s}^1`$. The multibeam receiver is a focal plane array with 13 beams arranged in an hexagonal grid. The spacing between adjacent beams is about two beamwidths, each beamwidth being 14 arcmin. The survey is comprised of 23 contiguous rectangular fields which are scanned parallel to the galactic equator. Eventually, each patch will be observed 25 times, with scans offset by about 1.5 arcmin in latitude. The shallow survey discussed here consists of two scans in longitude separated by $`\mathrm{\Delta }`$b = 17 arcmin, resulting in an rms noise of about 15 mJy, equivalent to a 5$`\sigma `$ HI mass detection limit of $`4\times 10^6`$ d$`{}_{\mathrm{Mpc}}{}^{}{}_{}{}^{2}`$ M (for a galaxy with the typical linewidth of $`200\mathrm{km}\mathrm{s}^1`$). ### 2.2 Data Visualization After calibration, baseline-subtraction, and creation of data cubes, all done with specially developed routines based on aips++ (Barnes et al. 1998, Barnes 1998) the data are examined by eye using the visualization package Karma (http://www.atnf.csiro.au/karma/). The data are first displayed as right ascension – velocity planes, in strips of constant declination. Data cubes are then rotated, and right ascension – declination planes are checked for any suspected galaxies (eg. Figure 1). ## 3 Galaxies Found by the Shallow Survey The shallow 21-cm survey of the southern ZOA has been completed, and 107 galaxies with peak HI flux densities $``$ about 80 mJy have been cataloged. Refinement of the measurement of their HI characteristics is ongoing, but the objects seem to be normal galaxies. However, of three large multibeam ZOA galaxies imaged in HI with the ATCA, two were seen to break up into complexes of HI suggestive of tidally-interacting systems (Staveley-Smith et al. 1998) Continued follow-up synthesis observations are planned to investigate the frequency of these interacting systems in this purely HI-selected sample. Most of the galaxies are within $`4000\mathrm{km}\mathrm{s}^1`$, which is about the redshift limit for detection of normal spirals of this shallow phase of the survey. 
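For reference, the 5$`\sigma `$ mass limit quoted in Sec. 2.1 follows from the standard relation $`M_{HI}=2.36\times 10^5d_{\mathrm{Mpc}}^2\int S\,dv`$ with the flux integral taken as a boxcar profile whose peak sits at 5$`\sigma `$. A minimal sketch (ours; the noise treatment is deliberately simplified):

```python
def hi_mass_limit(rms_mjy=15.0, snr=5.0, width_kms=200.0, d_mpc=1.0):
    """Approximate HI mass limit, M_HI = 2.36e5 d^2 * Int(S dv) [M_sun, Mpc, Jy km/s],
    for a boxcar profile whose peak flux equals snr times the rms noise."""
    flux_integral_jy_kms = snr * rms_mjy * 1e-3 * width_kms
    return 2.36e5 * d_mpc**2 * flux_integral_jy_kms

print(f"{hi_mass_limit():.1e} M_sun x d_Mpc^2")   # ~4e6, as quoted for the shallow survey
```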
As the deep survey continues, spirals at higher velocities will be recovered. The effective depth of the shallow survey is not quite sufficient to recover large numbers of galaxies which might be associated with the Great Attractor (but see Juraszek et al. this volume.) However, a striking feature becomes apparent with the addition of the ZOA galaxies. An enormous filament, which crosses the ZOA twice, is clearly evident when these ZOA data are displayed along with optically-known galaxies above and below the plane within $`3500\mathrm{km}\mathrm{s}^1`$ (Fig. 2.) This structure snakes over $`180\mathrm{deg}`$ through the southern sky. Taking a mean distance of $`30h^1`$ Mpc, this implies a linear size of $`100h^1`$ Mpc, with thickness of $`5h^1`$ Mpc or less. Also, note the relative emptiness of the Local Void. Three hidden galaxies found on a boundary of the Void (l $`30\mathrm{deg}`$) lie at $`1500\mathrm{km}\mathrm{s}^1`$. Two of these objects were also recovered by the Dwingeloo Obscured Galaxies Survey (Rivers et al. this volume.) The positions and redshifts of these objects are consistent with their being members of the cluster at this location proposed by Roman et al. (1998). Of the 107 objects found, 28 have counterparts in the NASA/IPAC Extragalactic Database (NED) with matching positions and redshifts. Optical absorption, estimated from the Galactic dust data of Schlegel et al. (1998), ranges from A<sub>B</sub> = 1 to more than 60 mag at the positions of the 107 galaxies, and is patchy over the survey area. No objects lying behind more than about 6 mag of obscuration have confirmed counterparts in NED, as expected. The shallow multibeam HI survey connects structures all the way across the ZOA within $`3500\mathrm{km}\mathrm{s}^1`$ for the first time. The ongoing, deep ZOA survey will have sufficient sensitivity to connect structures at higher redshifts. While 14 of the 107 galaxies lie within $`1000\mathrm{km}\mathrm{s}^1`$ and are therefore fairly nearby, all of the newly-discovered objects have peak HI flux densities an order of magnitude or more lower than the Circinus galaxy. Thus, it seems our census of the most dynamically important, HI-rich nearby galaxies is now complete, at least for those objects with velocities offset from Galactic HI. Simulations are currently being devised to investigate our sensitivity to HI galaxies whose signals lie within the frequency range of the Milky Way’s HI. This will be done by embedding artificial HI signals of varying strength, width, position, and frequency, into real data cubes. Then, an experienced HI galaxy finder (PH) will examine the cubes without previous knowledge of the locations of the fake galaxies. In this way, we hope to quantify better this remaining blind spot of the HI search method. ## Acknowledgements We thank HIPASS ZOA collaborators R. D. Ekers, A. J. Green, R. F. Haynes, S. Juraszek, M. J. Kesteven, B. S. Koribalski, R. M. Price, and A. Schröder. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, Caltech, under contract with the National Aeronautics and Space Administration. We have also made use of the Lyon-Meudon Extragalactic Database (LEDA), supplied by the LEDA team at the Centre de Recherche Astronomique de Lyon, Observatoire de Lyon. The research of P.H. is supported by NSF Faculty Early Career Development (CAREER) Program award AST 95-02268. ## References Barnes, D.G. 1998, in ADASS VII, eds. 
Albrecht, R., Hook, R.N., & Bushouse, H.A., San Francisco: ASP Barnes, D.G., Staveley-Smith, L, Ye, T., & Osterloo, T. 1998, in ADASS VII, eds. Albrecht, R., Hook, R.N., & Bushouse, H.A., San Francisco: ASP Henning, P.A., Kraan-Korteweg, R.C., Rivers, A.J., Loan, A.J., Lahav, O., & Burton, W.B. 1998, AJ, 115, 584 Roman, A.T., Takeuchi, T.T., Nakanishi, K., & Saito, M. 1998, PASJ, 50, 47 Schlegel, D.J., Finkbeiner, D.P., & Davis, M. 1998, ApJ, 500, 525 Staveley-Smith, L., Juraszek, S., Koribalski, B.S., Ekers, R.D., Green, A.J., Haynes, R.F., Henning, P.A., Kesteven, M.J., Kraan-Korteweg, R.C., Price, R.M., & Sadler, E.M. 1998, AJ, in press.
no-problem/9904/hep-ph9904302.html
ar5iv
text
# Probing the matter term at long baseline experiments ## Abstract We consider $`\nu _\mu \rightarrow \nu _e`$ oscillations in long baseline experiments within a three flavor oscillation framework. A non-zero measurement of this oscillation probability implies that the $`(1-3)`$ mixing angle $`\varphi `$ is non-zero. We consider the effect of neutrino propagation through the matter of earth’s crust and show that, given the constraints from solar neutrino and CHOOZ data, matter effects enhance the mixing for neutrinos rather than for anti-neutrinos. We need data from two different experiments with different baseline lengths (such as K2K and MINOS) to distinguish matter effects unambiguously. Recent results from Super-Kamiokande have sparked tremendous interest in neutrino physics . The deficits seen in the solar and atmospheric neutrino fluxes can be very naturally explained in terms of neutrino oscillations. Since the energy and the distance scales of the solar neutrino problem are widely different from those of the atmospheric neutrino problem, the neutrino oscillation solution to these problems requires widely different values of mass-squared differences . A minimum of three mass eigenstates, and hence three neutrino flavors, are needed for a simultaneous solution of solar and atmospheric neutrino problems. Since LEP has shown that three light, active neutrino species exist , it is natural to consider all neutrino data in the framework of oscillations between all the three active neutrino flavors. The flavor eigenstates $`\nu _\alpha (\alpha =e,\mu ,\tau )`$ are related to the mass eigenstates $`\nu _i(i=1,2,3)`$ by a unitary matrix $`U`$ $$|\nu _\alpha \rangle =U|\nu _i\rangle .$$ (1) As in the quark sector, $`U`$ can be parametrized in terms of three mixing angles and one phase. A widely used parametrization, convenient for analyzing neutrino oscillations with matter effects, is $$U=U^{23}(\psi )\times U^{phase}\times U^{13}(\varphi )\times U^{12}(\omega ),$$ (2) where $`U^{ij}(\theta _{ij})`$ is the two flavor mixing matrix between the $`i`$th and $`j`$th mass eigenstates with the mixing angle $`\theta _{ij}`$. We assume that the vacuum mass eigenvalues have the pattern $`\mu _3^2\gg \mu _2^2>\mu _1^2`$. Hence $`\delta _{31}\simeq \delta _{32}\gg \delta _{21}`$, where $`\delta _{ij}=\mu _i^2-\mu _j^2`$. The larger $`\delta `$ sets the scale for the atmospheric neutrino oscillations and the smaller one for solar neutrino oscillations. The angles $`\omega ,\varphi `$ and $`\psi `$ vary in the range $`[0,\pi /2]`$. For this range of the mixing angles, there is no loss of generality due to the assumption that $`\delta _{31},\delta _{21}\ge 0`$ . It has been shown that in the one dominant mass approximation, the oscillation probability for solar neutrinos is a function of only three parameters ($`\delta _{21},\omega `$ and $`\varphi `$) and that of the atmospheric neutrinos and long baseline neutrinos is also a function of only three parameters ($`\delta _{31},\varphi `$ and $`\psi `$). In each case, the three flavor nature of the problem is illustrated by the fact that the oscillation probability is a function of two mixing angles. The phase is unobservable in the one dominant mass approximation because, in both solar and atmospheric neutrino oscillations, one of the three mixing angles can be set to zero . The value of $`\delta _{31}`$ preferred by the Super-K atmospheric neutrino data is about $`2\times 10^{-3}`$ eV<sup>2</sup> . 
For this value of $`\delta _{31}`$, CHOOZ data set a very strong constraint $$\mathrm{sin}^22\varphi \le 0.2.$$ (3) If $`\varphi =0`$, then the solar and atmospheric neutrino oscillations get decoupled and become two flavor oscillations with the relevant parameters being $`(\delta _{21},\omega )`$ and $`(\delta _{31},\psi )`$ respectively. It is interesting to look for the consequences of non-zero $`\varphi `$. A non-zero $`\varphi `$ leads to $`\nu _\mu \rightarrow \nu _e`$ oscillations in atmospheric neutrinos and long baseline experiments. Due to the theoretical uncertainty in the calculation of atmospheric neutrino fluxes, it will be very difficult to discern the effect of a small $`\varphi `$ from the atmospheric neutrino data. Recently a proposal was made to look for matter enhanced $`\nu _\mu \rightarrow \nu _e`$ oscillations in atmospheric neutrino data, which can be significant even if $`\varphi `$ is small . Here we consider the effect of matter on $`\nu _\mu \rightarrow \nu _e`$ oscillations in a three flavor framework at long baseline experiments. The relation between the flavor states and mass eigenstates can be written in the simple form, $$\left[\begin{array}{c}\nu _e\\ c_\psi \nu _\mu -s_\psi \nu _\tau \\ s_\psi \nu _\mu +c_\psi \nu _\tau \end{array}\right]=\left(\begin{array}{ccc}c_\varphi & 0& s_\varphi \\ 0& 1& 0\\ -s_\varphi & 0& c_\varphi \end{array}\right)\left[\begin{array}{c}\nu _1\\ \nu _2\\ \nu _3\end{array}\right],$$ (4) where $`c`$ stands for cosine and $`s`$ stands for sine. This equation is obtained by first setting $`U^{phase}=I`$ and $`\omega =0`$ in equation (2). The resulting form of $`U`$ is then substituted in equation (1) and the equation is multiplied from the left by $`U^{23}(\psi )`$. From (4) we see that $`\nu _2`$ has no $`\nu _e`$ component and hence is decoupled from any oscillation involving $`\nu _e`$. Thus the calculation of the oscillation probability is essentially a two flavor problem. The three flavor nature is present in the fact that the oscillations occur between $`\nu _e`$ and the combination $`(s_\psi \nu _\mu +c_\psi \nu _\tau )`$. The vacuum oscillation probability is calculated to be $$P_{\mu e}=\mathrm{sin}^2\psi \mathrm{sin}^22\varphi \mathrm{sin}^2\left(\frac{1.27\delta _{31}L}{E}\right),$$ (5) where $`\delta _{31}`$ is in eV<sup>2</sup>, the baseline length $`L`$ is in km and the neutrino energy $`E`$ is in GeV. For low energies the phase $`1.27\delta _{31}L/E`$ oscillates rapidly and $`P_{\mu e}`$ becomes insensitive to it. However, for the range of energies for which the phase is close to $`\pi /2`$, $`P_{\mu e}`$ varies slowly as shown in figures (1)-(3). If the spectrum of the neutrinos in long baseline experiments peaks in this range, then $`\delta _{31}`$ and the mixing angles can be determined quite accurately. Given the CHOOZ constraint on $`\varphi `$, Super-Kamiokande atmospheric neutrino data fix $`\mathrm{sin}^22\psi \simeq 1`$ or $`\mathrm{sin}^2\psi \simeq 0.5`$. Substituting this value in equation (5), we find that $`P_{\mu e}\lesssim 0.1`$. The neutrino beams at K2K and MINOS have a $`1\%`$ $`\nu _e`$ contamination. This background and the systematic uncertainties set a limit to the smallest value of $`P_{\mu e}`$ (and $`\varphi `$) that can be measured. With the current estimates of the systematic uncertainties, the sensitivity of K2K is similar to that of CHOOZ ($`\mathrm{sin}^22\varphi \simeq 0.2`$) but MINOS is capable of measuring values of $`\varphi `$ as small as $`3^o`$ ($`\mathrm{sin}^22\varphi \simeq 0.01`$) . Once K2K starts running, its systematics will be better understood and its sensitivity is likely to improve. 
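Equation (5) is simple to evaluate. The sketch below (ours) adopts the nominal baselines $`L\simeq 250`$ km for K2K and $`L\simeq 730`$ km for MINOS, which are not quoted in the text, together with $`\delta _{31}=2\times 10^{-3}`$ eV<sup>2</sup>, $`\mathrm{sin}^2\psi =0.5`$ and the CHOOZ limit $`\mathrm{sin}^22\varphi =0.2`$, and locates the highest-energy oscillation maximum for each experiment.

```python
import numpy as np

def p_mue_vacuum(E_gev, L_km, delta31=2e-3, sin2psi=0.5, sin22phi=0.2):
    """Vacuum nu_mu -> nu_e probability, eq. (5); delta31 in eV^2, L in km, E in GeV."""
    return sin2psi * sin22phi * np.sin(1.27 * delta31 * L_km / E_gev) ** 2

delta31 = 2e-3
for name, L in [("K2K", 250.0), ("MINOS", 730.0)]:
    E_peak = 1.27 * delta31 * L / (np.pi / 2)      # highest energy at which the phase is pi/2
    print(f"{name:5s}: P = {p_mue_vacuum(E_peak, L):.2f} at E ~ {E_peak:.2f} GeV")
```

With these inputs the peak probability saturates at 0.1, in line with the bound discussed above, and the peak energies differ by about the ratio of the two baselines.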
In both the K2K and MINOS experiments, the neutrino beam travels through earth’s crust, where it encounters matter of roughly constant density 3 gm/cc. This traversal leads to the addition of the Wolfenstein term to the $`ee`$ element of the (mass)<sup>2</sup> matrix, when it is written in the flavor basis . The Wolfenstein term is given by $$A=0.76\times \rho (\mathrm{gm}/\mathrm{cc})\times E(\mathrm{GeV})\times 10^{-4}\mathrm{eV}^2.$$ (6) For $`\rho `$ of a few gm/cc and $`E`$ of a few GeV, $`A`$ is comparable to the value of $`\delta _{31}`$ set by atmospheric neutrino data. This can lead to interesting and observable matter effects in long baseline experiments. The interactions of $`\nu _\mu `$ and $`\nu _\tau `$ with ordinary matter are identical. Hence equation (4) can be used to include matter effects in a simple manner because the problem, once again, is essentially a two flavor one. It is easy to see that $`\psi `$ is unaffected by matter and the matter dependent mixing angle $`\varphi _m`$ and the mass eigenvalues $`m_1^2`$ and $`m_3^2`$ are given by $`\mathrm{tan}2\varphi _m`$ $`=`$ $`{\displaystyle \frac{\delta _{31}\mathrm{sin}2\varphi }{\delta _{31}\mathrm{cos}2\varphi -A}},`$ (7) $`m_1^2`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left[\left(\delta _{31}+A\right)-\sqrt{\left(\delta _{31}\mathrm{cos}2\varphi -A\right)^2+\left(\delta _{31}\mathrm{sin}2\varphi \right)^2}\right],`$ (8) $`m_3^2`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left[\left(\delta _{31}+A\right)+\sqrt{\left(\delta _{31}\mathrm{cos}2\varphi -A\right)^2+\left(\delta _{31}\mathrm{sin}2\varphi \right)^2}\right].`$ (9) The above equations hold for the propagation of neutrinos. For vacuum propagation, the mass eigenvalues and mixing angles for neutrinos and anti-neutrinos are the same. However, to include matter effects for anti-neutrinos, one should replace $`A`$ by $`-A`$ in equations (7), (8) and (9). Note that, since $`A`$ is always positive, matter effects enhance the neutrino mixing angle and suppress the anti-neutrino mixing angle if $`(\delta _{31}\mathrm{cos}2\varphi )`$ is positive, and vice versa if $`(\delta _{31}\mathrm{cos}2\varphi )`$ is negative. Since we have taken $`\delta _{31}`$ to be positive, matter effects enhance neutrino mixing if $`\mathrm{cos}2\varphi `$ is positive, i.e., $`\varphi <\pi /4`$. The CHOOZ constraint from equation (3) on $`\mathrm{sin}^22\varphi `$ sets the limit $`\varphi \le 13^o`$ or $`\varphi \ge 77^o`$. For the latter possibility, $`\mathrm{cos}2\varphi `$ is negative. However, this large a value of $`\varphi `$ is forbidden by the three flavor analysis of solar neutrino data, which yields the independent constraint $`\varphi \lesssim 50^o`$ . Hence the enhancement of the mixing angle will be for neutrinos rather than for anti-neutrinos. This is good news because the beams in long baseline experiments consist overwhelmingly of neutrinos. The matter dependent oscillation probability can be calculated in a straightforward manner from equation (4) in terms of $`\psi ,\varphi _m`$ and $`\delta _{31}^m=m_3^2-m_1^2`$. $$P_{\mu e}^m=\frac{1}{2}\mathrm{sin}^22\varphi _m\mathrm{sin}^2\left(\frac{1.27\delta _{31}^mL}{E}\right),$$ (10) where we have substituted the Super-Kamiokande best fit value $`\mathrm{sin}^2\psi =1/2`$. As discussed above, the matter effects enhance the mixing angle for neutrinos, and the matter modified oscillation probability $`P_{\mu e}^m`$ will be greater than the vacuum oscillation probability $`P_{\mu e}`$, for a range of energies. 
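The size of this matter-induced enhancement can be reproduced directly from equations (6)-(10). A short numerical sketch (ours; it takes $`\varphi =12.5^o`$ and $`\rho =3`$ gm/cc, as used for the figures, together with the nominal baselines of 250 km and 730 km for K2K and MINOS) compares the maxima of the matter and vacuum probabilities:

```python
import numpy as np

def matter_params(E_gev, delta31=2e-3, phi_deg=12.5, rho=3.0):
    """Matter-modified mixing and mass splitting from eqs. (6)-(9)."""
    A = 0.76e-4 * rho * E_gev                         # Wolfenstein term, eV^2
    two_phi = 2.0 * np.radians(phi_deg)
    num = delta31 * np.sin(two_phi)
    den = delta31 * np.cos(two_phi) - A
    delta31_m = np.sqrt(num**2 + den**2)              # m3^2 - m1^2 in matter
    sin22phi_m = (num / delta31_m) ** 2
    return delta31_m, sin22phi_m

def p_mue(E_gev, L_km, matter, delta31=2e-3, phi_deg=12.5, sin2psi=0.5):
    """nu_mu -> nu_e probability: eq. (10) in matter, eq. (5) in vacuum."""
    if matter:
        d, s22 = matter_params(E_gev, delta31, phi_deg)
    else:
        d, s22 = delta31, np.sin(2.0 * np.radians(phi_deg)) ** 2
    return sin2psi * s22 * np.sin(1.27 * d * L_km / E_gev) ** 2

E = np.linspace(0.3, 5.0, 4000)                       # GeV
for name, L in [("K2K", 250.0), ("MINOS", 730.0)]:
    ratio = p_mue(E, L, True).max() / p_mue(E, L, False).max()
    print(f"{name:5s}: max(P_matter) / max(P_vacuum) ~ {ratio:.2f}")
    # enhancement larger for the longer baseline, whose peak sits at higher energy
```

Because the Wolfenstein term grows with energy, the experiment whose oscillation maximum lies at the higher energy sees the larger mixing-angle enhancement, which is what makes the comparison of the two baselines diagnostic.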
For some neutrino energy, the Mikheyev-Smirnov resonance condition $$\delta _{31}\mathrm{cos}2\varphi =A$$ (11) can be satisfied and near this energy $`\varphi _m\pi /4`$. Unfortunately, this does not lead to any dramatic increase in $`P_{\mu e}^m`$ because, near the resonance, when $`\mathrm{sin}^22\varphi _m`$ is maximized, the matter dependent mass-squared difference $`\delta _{31}^m`$ is minimized . In fact, near the resonance, $`P_{\mu e}^mP_{\mu e}`$. By differentiating equation (10) with respect to $`E`$, we can calculate the energy at which $`P_{\mu e}^m`$ is maximized. The highest energy at which this occurs is just below the highest energy at which $`P_{\mu e}`$ is maximised, that is the energy for which the phase $`1.27\delta _{31}L/E=\pi /2`$. At this energy, the $`\varphi _m`$ is significantly higher than $`\varphi `$ and hence $`P_{\mu e}^m`$ will be measurably higher than $`P_{\mu e}`$. Since the baseline of K2K is $`3`$ times smaller than that of MINOS, the energy where the its phase becomes $`\pi /2`$ is smaller by a factor of $`3`$ compared to similar energy for MINOS. Because the highest energy maximum (at which the phase is $`\pi /2`$) occurs at different energies, the increase in $`\nu _\mu \nu _e`$ oscillation probability, due to the energy dependent enhancement of the mixing angle, is different for the two experiments. We find that, at their respective highest energy maxima, $`P_{\mu e}^m1.1P_{\mu e}`$ for K2K and $`P_{\mu e}^m=1.25P_{\mu e}`$ for MINOS. This is illustrated in figure (1) for $`\varphi 12.5^o`$ (which is just below the CHOOZ limit) and $`\delta _{31}=2\times 10^3`$ eV<sup>2</sup> (which is the best fit value for Super-Kamiokande atmospheric neutrino data). This conclusion is indenpendent of the value of $`\delta _{31}`$ and is illustrated in figures (1)-(3), for different values of $`\delta _{31}`$ (and $`\varphi =12.5^o`$). A similar conclusion was obtained a recent paper, where it was demonstrated that the relation between $`P_{\mu e}^m`$ and $`P_{\mu e}`$ is almost independent of $`\varphi `$ . We wish to emphasize that data from at least two different experiments with different baselines is needed to state unambiguously, whether matter effects are playing a role in neutrino oscillations. Suppose we have data from only one experiment. We can obtain allowed values of $`\delta _{31}`$ and $`\varphi `$ by fitting either $`P_{\mu e}`$ or $`P_{\mu e}^m`$ to this data. The two analyses will give different values of mixing angle. Since we don’t apriori know the value of vacuum mixing angle, we can’t say which result is correct. However, as mentioned above, matter effects lead to different enhancement of oscillation probability for K2K and MINOS as shown in figure (1). This difference can be exploited to distinguish matter effects. The data from each experiment, K2K and MINOS, should be analyzed twice, once using $`P_{\mu e}`$ as the input and the second time with $`P_{\mu e}^m`$ as the input. In the first analysis, the allowed value of the mixing angle from MINOS will be significantly higher that from K2K, if the matter effects are important. Matter effects can be taken to be established, if the second analysis gives the same values of $`\varphi `$ for both K2K and MINOS. In conclusion, we find that matter effects enhance $`\nu _\mu \nu _e`$ oscillations at long baseline experiments K2K and MINOS. This enhancement can be large enough to be observable. 
However, one must combine the data from both experiments before making a definite statement about the effect of the matter term. Acknowledgement: We thank Sameer Murthy for his help in the preparation of the figures.
no-problem/9904/hep-ph9904283.html
ar5iv
text
# Effects of supersymmetric CP violating phases on 𝐵→𝑋_𝑠⁢𝑙⁺⁢𝑙⁻ and ϵ_𝐾 ## Abstract We consider effects of CP violating phases in $`\mu `$ and $`A_t`$ parameters in the effective supersymmetric standard model on $`BX_sl^+l^{}`$ and $`ϵ_K`$. Scanning over the MSSM parameter space with experimental constraints including edm constraints from Chang-Keung-Pilaftsis (CKP) mechanism, we find that the $`\mathrm{Br}(BX_sl^+l^{})`$ can be enhanced by upto $`85\%`$ compared to the standard model (SM) prediction, and its correlation with $`\mathrm{Br}(BX_s\gamma )`$ is distinctly different from the minimal supergravity scenario. Also we find $`1ϵ_K/ϵ_K^{SM}1.4`$, and fully supersymmetric CP violation in $`K_L\pi \pi `$ is not possible. Namely, $`|ϵ_K^{\mathrm{SUSY}}|O(10^5)`$ if the phases of $`\mu `$ and $`A_t`$ are the sole origin of CP violation. preprint: April, 1999 KAIST-TH 99/1 hep-ph/9904283 SNUTP 99-003 1. In the minimal supersymmetric standard model (MSSM), there can be many new CP violating (CPV) phases beyond the KM phase in the standard model (SM) both in the flavor conserving and flavor violating sectors. The flavor conserving CPV phases in the MSSM are strongly constrained by electron/neutron electric dipole moment (edm) and believed to be very small ($`\delta 10^2`$ for $`M_{\mathrm{SUSY}}O(100)`$ GeV ) . Or, one can imagine that the 1st/2nd generation scalar fermions are very heavy so that edm constraints are evaded via decoupling even for CPV phases of order $`O(1)`$ . Also it is possible that various contributions to electron/neutron EDM cancel with each other in substantial parts of the MSSM parameter space even if SUSY CPV phases are $`O(1)`$ and SUSY particles are relatively light . In the last two cases where SUSY CPV phases are of $`O(1)`$, these phases may affect $`B`$ and $`K`$ physics in various manners. In the previous letter , we presented effects of these SUSY CPV phases on $`B`$ physics : the $`B^0\overline{B^0}`$ mixing and the direct asymmetry in $`BX_s\gamma `$, assuming that EDM constraints and SUSY FCNC problems are evaded by heavy 1st/2nd generation scalar fermions. In this letter, we extend our previous work to $`BX_sl^+l^{}`$ and $`ϵ_K`$ (see also Ref. .) within the same assumptions. An important ingredient for large $`\mathrm{tan}\beta `$ in our model is the constraint on the $`\mu `$ and $`A_t`$ phases coming from electron/neutron edm’s through Chang-Keung-Pilaftsis (CKP) mechanism . Two loop diagrams with CP-odd higgs and photon (gluon) exchanges between the fermion line and the sfermion loop (mainly stops and sbottoms) can contribute significantly to electron/neutron edm’s in the large $`\mathrm{tan}\beta `$ region. The authors of Ref. find that $$(\frac{d_f}{e})_{\mathrm{CKP}}=Q_f\frac{3\alpha _{\mathrm{em}}}{64\pi ^2}\frac{R_fm_f}{M_A^2}\underset{q=t,b}{}\xi _qQ_q^2\left[F\left(\frac{M_{\stackrel{~}{q}_1}^2}{M_A^2}\right)F\left(\frac{M_{\stackrel{~}{q}_2}^2}{M_A^2}\right)\right],$$ (1) where $`R_f=\mathrm{cot}\beta (\mathrm{tan}\beta )`$ for $`I_{3f}=1/2(1/2)`$, and $$\xi _t=\frac{\mathrm{sin}2\theta _{\stackrel{~}{t}}m_t\mathrm{Im}(\mu e^{i\delta _t})}{\mathrm{sin}^2\beta v^2},\xi _b=\frac{\mathrm{sin}2\theta _{\stackrel{~}{b}}m_b\mathrm{Im}(A_be^{i\delta _b})}{\mathrm{sin}\beta \mathrm{cos}\beta v^2},$$ (2) with $`\delta _q=\mathrm{Arg}(A_q+R_q\mu ^{})`$, and $`F(z)`$ is a two-loop function given in Ref. . 
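For orientation, equation (1) is straightforward to code once the two-loop function $`F(z)`$ of the cited reference is available. Since the text does not reproduce $`F(z)`$, the sketch below (ours) leaves it as a caller-supplied function, and it should only be trusted up to the overall sign and normalization conventions of that reference.

```python
import numpy as np

HBARC_CM = 1.973e-14   # conversion: GeV^-1 -> cm

def d_over_e_ckp(F, Q_f, m_f_gev, R_f, M_A_gev, xi_t, xi_b,
                 m_stop_gev=(400.0, 800.0), m_sbot_gev=(600.0, 900.0),
                 alpha_em=1.0 / 137.036):
    """Direct transcription of eq. (1): (d_f/e)_CKP in GeV^-1.
    F must be the two-loop function F(z) of the reference cited in the text."""
    pref = Q_f * 3.0 * alpha_em / (64.0 * np.pi**2) * R_f * m_f_gev / M_A_gev**2
    total = 0.0
    for xi_q, Q_q, (m1, m2) in [(xi_t, 2.0 / 3.0, m_stop_gev),
                                (xi_b, -1.0 / 3.0, m_sbot_gev)]:
        total += xi_q * Q_q**2 * (F(m1**2 / M_A_gev**2) - F(m2**2 / M_A_gev**2))
    return pref * total

# Illustrative call only: F_dummy is NOT the physical two-loop function.
F_dummy = lambda z: np.log(z)
d_e = d_over_e_ckp(F_dummy, Q_f=-1.0, m_f_gev=0.511e-3, R_f=50.0, M_A_gev=300.0,
                   xi_t=0.5, xi_b=0.5)
print(f"{d_e * HBARC_CM:.2e} e cm (placeholder F, not a physical prediction)")
```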
This new contribution is independent of the 1st/2nd generation scalar fermion masses, so that it does not decouple for heavy 1st/2nd generation scalar fermions. Therefore it can be important for the electron or down quark edm for the large $`\mathrm{tan}\beta `$ case. This is in sharp contrast with the usual one-loop contributions to edm’s, for which $$\left(\frac{d_f}{e}\right)10^{25}\mathrm{cm}\times \frac{\{\mathrm{Im}\mu ,\mathrm{Im}A_f\}}{\mathrm{max}(M_{\stackrel{~}{f}},M_\lambda )}\left(\frac{1\mathrm{TeV}}{\mathrm{max}(M_{\stackrel{~}{f}},M_\lambda )}\right)^2\left(\frac{m_f}{10\mathrm{MeV}}\right),$$ (3) and one can evade the edm constraints by having small phases for $`\mu ,A_{e,u,d}`$, or heavy 1st/2nd generation scalar fermions. However, this would involve enlargement of our model parameter space, since one has to consider the sbottom sector as well as the stop sector. Therefore, more parameters have to be introduced in principle : $`m_{\stackrel{~}{b}}^2`$ and $`A_b`$ where $`A_b`$ may be complex like $`A_t`$. In order to avoid such enlargement, we will assume that there is no accidental cancellation between the stop and sbottom loop contributions. This CKP edm constraint has not been included in the recent paper by Demir et al. , who made claims that there could be a large new phase shift in the $`B^0\overline{B^0}`$ mixing and it is possible to have a fully supersymmetric $`ϵ_K`$ from the phases of $`\mu `$ and $`A_t`$ only. However, if $`\mathrm{tan}\beta `$ is large ($`\mathrm{tan}\beta 60`$) as in Ref. , the CKP edm constraints via the CKP mechanism have to be properly included. This constraint reduces the possible new phase shift in the $`B^0\overline{B^0}`$ mixing to a very small number, $`2|\theta _d|1^{}`$, as demonstrated in Fig. 1 (a) of Ref. . On the other hand, the CKP edm constraint does not affect too much the direct CP asymmetry in $`BX_s\gamma `$ . In this work, we continue studying the effects of the phases of $`\mu `$ and $`A_t`$ on $`BX_sl^+l^{}`$ and $`ϵ_K`$. We also reconsider a possibility of fully supersymmetric CP violation, namely generating $`ϵ_K`$ entirely from the phases of $`\mu `$ and $`A_t`$ with vanishing KM phase ($`\delta _{\mathrm{KM}}=0`$). Our conclusion is at variance with the claim made in Ref. . 2. As in Refs. , we assume that the 1st and the 2nd family squarks are degenerate and very heavy in order to solve the SUSY FCNC/CP problems. Only the third family squarks can be light enough to affect $`B`$ and $`K`$ physics. We also ignore possible flavor changing squark mass matrix elements that could generate gluino-mediated flavor changing neutral current (FCNC) processes in addition to those effects we consider below, relegating the details to the existing literature -. Therefore the only source of the FCNC in our case is the CKM matrix, whereas there are new CPV phases coming from the phases of $`\mu `$ and $`A_t`$ parameters (see below), in addition to the KM phase $`\delta _{KM}`$. Definitions for the chargino and stop mass matrices are the same as Ref. . There are two new flavor conserving CPV phases in our model, $`\mathrm{Arg}(\mu )`$ and $`\mathrm{Arg}(A_t)`$ in the basis where $`M_2`$ is real. We scan over the MSSM parameter space as in Ref. 
indicated below (including that relevant to the EWBGEN scenario in the MSSM) : $`80\mathrm{GeV}<|\mu |<1\mathrm{TeV},`$ $`80\mathrm{GeV}<M_2<1\mathrm{TeV},`$ (4) $`60\mathrm{GeV}<M_A<1\mathrm{TeV},`$ $`2<\mathrm{tan}\beta <70,`$ (5) $`(130\mathrm{GeV})^2<M_Q^2`$ $`<`$ $`(1\mathrm{TeV})^2,`$ (6) $`(80\mathrm{GeV})^2<M_U^2`$ $`<`$ $`(500\mathrm{GeV})^2,`$ (7) $`0<\varphi _\mu ,\varphi _{A_t}<2\pi ,`$ $`0<|A_t|<1.5\mathrm{TeV},`$ (8) with the following experimental constraints : $`M_{\stackrel{~}{t}_1}>80`$ GeV independent of the mixing angle $`\theta _{\stackrel{~}{t}}`$, $`M_{\stackrel{~}{\chi ^\pm }}>83`$ GeV, and $`0.77R_\gamma 1.15`$ , where $`R_\gamma `$ is defined as $`R_\gamma =BR(BX_s\gamma )^{expt}/BR(BX_s\gamma )^{SM}`$ and $`BR(BX_s\gamma )^{SM}=(3.29\pm 0.44)\times 10^4`$. We also impose $`\mathrm{Br}(BX_{sg})<6.8\%`$ , and vary $`\mathrm{tan}\beta `$ from 2 to 70 This may be too large for perturbation theory to be valid, but we did extend to $`\mathrm{tan}\beta 70`$ in order to check the claims made in Ref. .. This parameter space is larger than that in the constrained MSSM (CMSSM) where the universality of soft terms at the GUT scale is assumed. Especially, our parameter space includes the electroweak baryogenesis scenario in the MSSM . In the numerical analysis, we used the following numbers for the input parameters (running masses in the $`\overline{MS}`$ scheme are used for the quark masses) : $`\overline{m_c}(m_c(pole))=1.25`$ GeV, $`\overline{m_b}(m_b(pole))=4.3`$ GeV, $`\overline{m_t}(m_t(pole))=165`$ GeV, and $`|V_{cb}|=0.0410,|V_{tb}|=1,|V_{ts}|=0.0400`$ and $`\gamma (\varphi _3)=90^{}`$ in the CKM matrix elements. 3. Let us first consider the branching ratio for $`BX_sl^+l^{}`$. The SM and the MSSM contributions to this decay were considered by several groups and , respectively. We use the standard notation for the effective Hamiltonian for this decay as described in Refs. and . The new CPV phases in $`C_{7,9,10}`$ can affect the branching ratio and other observables in $`BX_sl^+l^{}`$ as discussed in the first half of Ref. in a model independent way. In the second half of Ref. , specific supersymmetric models were presented where new CPV phases reside in flavor changing squark mass matrices. In the present work, new CPV phases lie in flavor conserving sector, namely in $`A_t`$ and $`\mu `$ parameters. Although these new phases are flavor conserving, they affect the branching ratio of $`BX_sl^+l^{}`$ and its correlation with $`Br(BX_s\gamma )`$, as discussed in the first half of Ref. . Note that $`C_{9,10}`$ depend on the sneutrino mass, and we have scanned over $`60\mathrm{GeV}<m_{\stackrel{~}{\nu }}<200\mathrm{GeV}`$. In the numerical evaluation for $`R_{ll}\mathrm{Br}(BX_sl^+l^{})/\mathrm{Br}(BX_sl^+l^{})_{\mathrm{SM}}`$, we considered the nonresonant contributions only for simplicity, neglecting the contributions from $`J/\psi ,\psi ^{^{}},etc`$.. It would be straightforward to incorporate these resonance effects. In Figs. 1 (a) and (b), we plot the correlations of $`R_{\mu \mu }`$ with $`\mathrm{Br}(BX_s\gamma )`$ and $`\mathrm{tan}\beta `$, respectively. Those points that (do not) satisfy the CKP edm constraints are denoted by the squares (crosses). Some points are denoted by both the square and the cross. 
This means that there are two classes of points in the MSSM parameter space, and for one class the CKP edm constraints are satisfied but for another class the CKP edm constraints are not satisfied, and these two classes happen to lead to the same branching ratios for $`BX_s\gamma `$ and $`R_{ll}`$. In the presence of the new phases $`\varphi _\mu `$ and $`\varphi _{A_t}`$, $`R_{\mu \mu }`$ can be as large as 1.85, and the deviations from the SM prediction can be large, if $`\mathrm{tan}\beta >8`$. As noticed in Ref. , the correlation between the $`\mathrm{Br}(BX_s\gamma )`$ and $`R_{ll}`$ is distinctly different from that in the minimal supergravisty case . In the latter case, only the envelop of Fig. 1 (a) is allowed, whereas everywhere in between is allowed in the presence of new CPV phases in the MSSM. Even if one introduces the phases of $`\mu `$ and $`A_0`$ at GUT scale in the minimal supergravity scenario, this correlation does not change very much from the case of the minimal supergravity scenario with real $`\mu `$ and $`A_0`$, since the $`A_0`$ phase becomes very small at the electroweak scale because of the renormalization effects . Only $`\mu `$ phase can affect the electroweak scale physics, but this phase is strongly constrained by the usual edm constraints so that $`\mu `$ should be essentially real parameter. Therefore the correlation between $`BX_s\gamma `$ and $`R_{ll}`$ can be a clean distinction between the minimal supergravity scenario and our model (or some other models with new CPV phases in the flavor changing ). 4. The new complex phases in $`\mu `$ and $`A_t`$ will also affect the $`K^0\overline{K^0}`$ mixing. The relevant $`\mathrm{\Delta }S=2`$ effective Hamiltonian is given by $$H_{\mathrm{eff}}^{\mathrm{\Delta }S=2}=\frac{G_F^2M_W^2}{(2\pi )^2}\underset{i=1}{\overset{3}{}}C_iQ_i,$$ (9) where $`C_1(\mu _0)`$ $`=`$ $`\left(V_{td}^{}V_{ts}\right)^2\left[F_V^W(3;3)+F_V^H(3;3)+A_V^C\right]`$ (10) $`+`$ $`\left(V_{cd}^{}V_{cs}\right)^2\left[F_V^W(2;2)+F_V^H(2;2)\right]`$ (11) $`+`$ $`2\left(V_{td}^{}V_{ts}V_{cd}^{}V_{cs}\right)\left[F_V^W(3;2)+F_V^H(3;2)\right],`$ (12) $`C_2(\mu _0)`$ $`=`$ $`\left(V_{td}^{}V_{ts}\right)^2F_S^H(3;3)+\left(V_{cd}^{}V_{cs}\right)^2F_S^H(2;2)`$ (13) $`+`$ $`2\left(V_{td}^{}V_{ts}V_{cd}^{}V_{cs}\right)F_S^H(3;2),`$ (14) $`C_3(\mu _0)`$ $`=`$ $`\left(V_{td}^{}V_{ts}\right)^2A_S^C,`$ (15) where the charm quark contributions have been kept. The superscripts $`W,H,C`$ denote the $`W^\pm ,H^\pm `$ and chargino contributions respectively, and $`A_V^C`$ $`=`$ $`{\displaystyle \underset{i,j=1}{\overset{2}{}}}{\displaystyle \underset{k,l=1}{\overset{2}{}}}{\displaystyle \frac{1}{4}}G^{(3,k)i}G^{(3,k)j}G^{(3,l)i}G^{(3,l)j}Y_1(r_k,r_l,s_i,s_j),`$ (16) $`A_S^C`$ $`=`$ $`{\displaystyle \underset{i,j=1}{\overset{2}{}}}{\displaystyle \underset{k,l=1}{\overset{2}{}}}H^{(3,k)i}G^{(3,k)j}G^{(3,l)i}H^{(3,l)j}Y_2(r_k,r_l,s_i,s_j).`$ (17) Here $`G^{(3,k)i}`$ and $`H^{(3,k)i}`$ are the couplings of $`k`$th stop and $`i`$th chargino with left-handed and right-handed quarks, respectively : $`G^{(3,k)i}`$ $`=`$ $`\sqrt{2}C_{R1i}^{}S_{tk1}{\displaystyle \frac{C_{R2i}^{}S_{tk2}}{\mathrm{sin}\beta }}{\displaystyle \frac{m_t}{M_W}},`$ (18) $`H^{(3,k)i}`$ $`=`$ $`{\displaystyle \frac{C_{L2i}^{}S_{tk1}}{\mathrm{cos}\beta }}{\displaystyle \frac{m_s}{M_W}},`$ (19) and $`C_{L,R}`$ and $`S_t`$ are unitary matrices that diagonalize the chargino and stop mass matrices . 
: $`C_R^{}M_\chi ^{}C_L=\mathrm{diag}(M_{\stackrel{~}{\chi _1}},M_{\stackrel{~}{\chi _2}})`$ and $`S_tM_{\stackrel{~}{t}}^2S_t^{}=\mathrm{diag}(M_{\stackrel{~}{t}_1}^2,M_{\stackrel{~}{t}_2}^2)`$. Explicit forms for functions $`Y_{1,2}`$ and $`F`$’s can be found in Ref. , and $`r_k=M_{\stackrel{~}{t}_k}^2/M_W^2`$ and $`s_i=M_{\stackrel{~}{\chi ^\pm }_i}/M_W^2`$. It should be noted that $`C_2(\mu _0)`$ was misidentified as $`C_3^H(\mu _0)`$ in Ref. . The gluino and neutralino contributions are negligible in our model. The Wilson coefficients at lower scales are obtained by renomalization group running. The relevant formulae with the NLO QCD corrections at $`\mu =2`$ GeV are given in Ref. . It is important to note that $`C_1(\mu _0)`$ and $`C_2(\mu _0)`$ are real relative to the SM contribution in our model. On the other hand, the chargino exchange contributions to $`C_3(\mu _0)`$ (namely $`A_S^C`$) are generically complex relative to the SM contributions, and can generate a new phase shift in the $`K^0\overline{K^0}`$ mixing relative to the SM value. This effect is in fact significant for large $`\mathrm{tan}\beta (1/\mathrm{cos}\beta )`$ , since $`C_3(\mu _0)`$ is proportional to $`(m_s/M_W\mathrm{cos}\beta )^2`$. The CP violating parameter $`ϵ_K`$ can be calculated from $$ϵ_K\frac{e^{i\pi /4}\mathrm{Im}M_{12}}{\sqrt{2}\mathrm{\Delta }M_K},$$ (20) where $`M_{12}`$ can be obtained from the $`\mathrm{\Delta }S=2`$ effective Hamiltonian through $`2M_KM_{12}=K^0|H_{\mathrm{eff}}^{\mathrm{\Delta }S=2}|\overline{K^0}`$. For $`\mathrm{\Delta }M_K`$, we use the experimental value $`\mathrm{\Delta }M_K=(3.489\pm 0.009)\times 10^{12}`$ MeV, instead of theoretical relation $`\mathrm{\Delta }M_K=2\mathrm{R}\mathrm{e}M_{12}`$, since the long distance contributions to $`M_{12}`$ is hard to calculate reliably unlike the $`\mathrm{\Delta }S=2`$ box diagrams. For the strange quark mass, we use the $`\overline{\mathrm{MS}}`$ mass at $`\mu =2`$ GeV scale : $`m_s(\mu =2\mathrm{G}\mathrm{e}\mathrm{V})=125`$ MeV. In Figs. 2 (a) and (b), we plot the results of scanning the MSSM parameter space : the correlations between $`ϵ_K/ϵ_K^{\mathrm{SM}}`$ and (a) $`\mathrm{tan}\beta `$ and (b) the lighter stop mass. We note that $`ϵ_K/ϵ_K^{\mathrm{SM}}`$ can be as large as $`1.4`$ for $`\delta _{KM}=90^{}`$ if $`\mathrm{tan}\beta `$ is small. This is a factor 2 larger deviation from the SM compared to the minimal supergravity case . The dependence on the lighter stop is close to the case of the minimal supergravity case, but we can have a larger deviations. Such deviation is reasonably close to the experimental value, and will affect the CKM phenomenology at a certain level. In the MSSM with new CPV phases, there is an intriguing possibility that the observed CP violation in $`K_L\pi \pi `$ is fully due to the complex parameters $`\mu `$ and $`A_t`$ in the soft SUSY breaking terms which also break CP softly. This possibility was recently considered by Demir et al. . Their claim was that it was possible to generate $`ϵ_K`$ entirely from SUSY CPV phases for large $`\mathrm{tan}\beta 60`$ with certain choice of soft parameters <sup>§</sup><sup>§</sup>§Their choice of parameters leads to $`M_{\chi ^\pm }=80`$ GeV and $`M_{\stackrel{~}{t}}=85`$ GeV, which are very close to the recent lower limits set by LEP2 experiments.. In such a scenario, only $`\mathrm{Im}(A_S^C)`$ in Eq. (6) can contribute to $`ϵ_K`$, if we ignore a possible mixing between $`C_2`$ and $`C_3`$ under QCD renormalization. 
In the actual numerical analysis we have included this effect using the results in Ref. . We repeated their calculations using the same set of parameters, but could not confirm their claim. For $`\delta _{KM}=0^{\circ }`$, we found that the supersymmetric $`ϵ_K`$ is less than $`2\times 10^{-5}`$, which is too small compared to the observed value : $`|ϵ_K|=(2.280\pm 0.019)\times 10^{-3}`$ determined from $`K_{L,S}\rightarrow \pi ^+\pi ^{-}`$ . Let us give a simple estimate for fully supersymmetric $`ϵ_K`$, in which case only $`C_3(\mu _0)`$ develops an imaginary part and can contribute to $`ϵ_K`$. For $`m_{\stackrel{~}{t}_1}\sim m_{\chi ^\pm }\sim M_W`$, we would get $`Y_2\simeq Y_2(1,1,1,1)=1/6`$, and $`|G^{(3,k)i}|\sim O(1),\mathrm{and}|H^{(3,k)i}|\sim {\displaystyle \frac{m_s\mathrm{tan}\beta }{M_W}},`$ because any components of the unitary matrices $`C_R`$ and $`S_t`$ are $`O(1)`$. Therefore $`\mathrm{Im}(A_S^C)\sim O(10^{-3})`$. Now using $$\mathrm{Im}(M_{12})=\frac{G_F^2M_W^2}{(2\pi )^2}f_K^2M_K\left(\frac{M_K}{m_s}\right)^2\frac{1}{24}B_3(\mu )\mathrm{Im}(C_3(\mu )),$$ (21) and Eq. (9), we get $`|ϵ_K|\lesssim 2\times 10^{-5}`$. 7. In conclusion, we extended our previous studies of SUSY CPV phases to $`B\rightarrow X_sl^+l^{-}`$ and $`ϵ_K`$. Our results can be summarized as follows : * The branching ratio for $`B\rightarrow X_sl^+l^{-}`$ can be enhanced by up to $`85\%`$ compared to the SM prediction, and the correlation between $`\mathrm{Br}(B\rightarrow X_s\gamma )`$ and $`\mathrm{Br}(B\rightarrow X_sl^+l^{-})`$ is distinctly different from the minimal supergravity scenario (CMSSM) (even with new CP violating phases) in the presence of new CP violating phases in $`C_{7,8,9}`$, as demonstrated in the model-independent analysis by Kim, Ko and Lee . * $`ϵ_K/ϵ_K^{SM}`$ can be as large as 1.4 for $`\delta _{KM}=90^{\circ }`$. This is the extent to which the new phases in $`\mu `$ and $`A_t`$ can affect the construction of the unitarity triangle through $`ϵ_K`$. * Fully supersymmetric CP violation is not possible even for large $`\mathrm{tan}\beta \sim 60`$ and light enough chargino and stop, contrary to the claim made in Ref. . With real CKM matrix elements, we get a very small $`|ϵ_K|\sim O(10^{-5})`$, which is two orders of magnitude smaller than the experimental value. Before closing this paper, we’d like to emphasize that all of our results are based on the assumption that there are no new CPV phases in the flavor changing sector. Once this assumption is relaxed, gluino-mediated FCNC with additional new CPV phases may play important roles, and many of our results may change . Within our assumption, the results presented here and in Ref. are conservative, since we did not impose any conditions on the soft SUSY breaking terms except that the resulting mass spectra for the chargino, stop and other sparticles satisfy the current lower bounds from LEP and the Tevatron. A more detailed analysis of the phenomenological implications of our work on $`B_{d(s)}^0\overline{B_{d(s)}^0}`$ mixing, $`B\rightarrow X_{s(d)}\gamma ,X_{s(d)}l^+l^{-},B_{s(d)}^0\rightarrow l^+l^{-}`$ and their direct CP asymmetries will be presented elsewhere. ###### Acknowledgements. The authors wish to thank G.C. Cho for clarifying $`O_2`$ and $`O_3`$ in Ref. , and A. Ali, A. Grant, A. Pilaftsis and O. Vives for useful communications. A part of this work was done while one of the authors (PK) was visiting Harvard University under the Distinguished Scholar Exchange Program of the Korea Research Foundation. This work is supported in part by KOSEF Contract No.
971-0201-002-2, KOSEF through Center for Theoretical Physics at Seoul National University, Korea Research Foundation Program 1998-015-D00054 (PK), and by KOSEF Postdoctoral Fellowship Program (SB).
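As a rough numerical cross-check of the order-of-magnitude estimate $`|ϵ_K|\lesssim 2\times 10^{-5}`$ above, the following sketch evaluates Eqs. (20)-(21) with the inputs quoted in the text ($`m_s`$, $`\mathrm{\Delta }M_K`$, $`Y_2=1/6`$, $`|H^{(3,k)i}|\sim m_s\mathrm{tan}\beta /M_W`$); the CKM moduli, $`f_K`$ and the bag parameter $`B_3`$ are assumed representative values, and the QCD running of $`C_3`$ is ignored, so only the order of magnitude is meaningful.

```python
import math

# Rough numerical version of the estimate |eps_K^SUSY| <~ 2e-5 above.
# All inputs below are assumptions for illustration, not values quoted in the text
# (except m_s, Delta m_K and the structure of Eqs. (20)-(21)).
G_F   = 1.166e-5      # GeV^-2
M_W   = 80.4          # GeV
f_K   = 0.16          # GeV (assumed)
M_K   = 0.4977        # GeV
m_s   = 0.125         # GeV, MS-bar at 2 GeV, as in the text
dM_K  = 3.489e-15     # GeV, experimental Delta m_K
B_3   = 1.0           # assumed bag parameter, O(1)
Vtd, Vts = 8e-3, 4e-2 # assumed CKM moduli; real for delta_KM = 0

tan_beta = 60.0
Y2   = 1.0/6.0                    # Y_2(1,1,1,1) for degenerate stop/chargino ~ M_W
G3   = 1.0                        # |G^{(3,k)i}| ~ O(1)
H3   = m_s*tan_beta/M_W           # |H^{(3,k)i}| ~ m_s tan(beta)/M_W
ImA  = Y2*G3*G3*H3*H3             # crude Im(A_S^C), phases taken maximal
ImC3 = (Vtd*Vts)**2 * ImA         # only A_S^C is complex when delta_KM = 0

ImM12 = (G_F**2*M_W**2/(2*math.pi)**2) * f_K**2*M_K * (M_K/m_s)**2 * (1.0/24)*B_3*ImC3
eps_K = ImM12/(math.sqrt(2.0)*dM_K)

print(f"Im(A_S^C) ~ {ImA:.1e}")          # ~ 1e-3, as quoted in the text
print(f"|eps_K^SUSY| ~ {eps_K:.1e}")     # ~ 1e-6 - 1e-5, far below 2.28e-3
```

The result comes out at the $`10^{-6}`$–$`10^{-5}`$ level, two to three orders of magnitude below the measured $`|ϵ_K|`$, in line with the conclusion above.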
# Gauge Invariance and Canonical Variables I.B. Khriplovich<sup>1</sup><sup>1</sup>1E-mail address: khriplovich@inp.nsk.su and A.I. Milstein<sup>2</sup><sup>2</sup>2E-mail address: milstein@inp.nsk.su Budker Institute of Nuclear Physics, 630090 Novosibirsk, Russia, and Novosibirsk University, Novosibirsk, Russia ## Abstract We discuss some paradoxes arising due to the gauge-dependence of canonical variables in mechanics. 1. The rather elementary problems discussed in this note originate partly from tutorials on quantum mechanics at Novosibirsk University, partly from discussions on elementary particle physics and quantum field theory with our colleagues. These problems turned out to be difficult not only for undergraduates. To our surprise, they caused confusion even among some educated theorists. So, hopefully, a short note on the subject will be useful, at least from the methodological point of view, all the more so since we are not aware of any explicit discussion of the matter in the literature. Though the questions have arisen in quantum mechanics or even in more elevated subjects, they belong in essence to classical mechanics. In the present note we confine ourselves mainly to classical mechanics. 2. Let us consider the simple problem of a charged particle in a constant homogeneous magnetic field. Its Hamiltonian is well known: $$H=\frac{1}{2m}\left(𝐩-\frac{e}{c}𝐀\right)^2.$$ (1) It is also well known that various gauges are possible for the vector potential $`𝐀`$. With the magnetic field $`𝐁`$ directed along the $`z`$ axis, one can choose, for instance, $$𝐀=B(0,x,\mathrm{\hspace{0.17em}0}).$$ (2) In this gauge the Hamiltonian is independent of $`y`$, and therefore the corresponding component $`p_y`$ of the canonical momentum is an integral of motion. However, one can choose equally well another gauge: $$𝐀=B(-y,\mathrm{\hspace{0.17em}0},\mathrm{\hspace{0.17em}0}).$$ (3) Then it is the component $`p_x`$ of the canonical momentum which is conserved. But how can it be that a component of $`𝐩`$ transverse to the magnetic field is conserved, and that, moreover, the conserved component can be chosen at will? The obvious answer is that the canonical momentum $`𝐩`$ is not a gauge-invariant quantity and therefore has no direct physical meaning. As to our visual picture of the transverse motion in a magnetic field, it is not the canonical momentum $`𝐩`$ which precesses and thus permanently changes its direction, but the velocity $$𝐯=\frac{1}{m}\left(𝐩-\frac{e}{c}𝐀\right).$$ As distinct from the canonical momentum $`𝐩`$, the velocity $`𝐯`$ is a gauge-invariant and physically uniquely defined quantity. 3. It is only natural that not only the space components $`𝐩`$ of the canonical momentum, but also its time component, the Hamiltonian $`H`$, is gauge-dependent. It is the kinetic energy $`H-eA_0`$ which is gauge-invariant. As a rather striking manifestation of this fact, let us consider an example of a well-known physical system whose energy is conserved, but whose Hamiltonian can be time-dependent. We mean the motion of a charged particle in a time-independent electric field $`𝐄`$, for instance, in the Coulomb one. Let us choose here the gauge $`A_0=0`$. In it the vector potential becomes obviously $$𝐀=-ct𝐄,$$ so that now the Hamiltonian (1) depends on time explicitly. Nevertheless, the energy of a particle in a time-independent electromagnetic field is certainly conserved.
Indeed, here the equations of motion become $$\dot{𝐫}=\{H,𝐫\}=\frac{1}{m}(𝐩+et𝐄),$$ (4) $$m\ddot{𝐫}=\frac{d}{dt}(𝐩+et𝐄)=\{H,𝐩+et𝐄\}=e𝐄$$ (5) (we use the Poisson brackets $`\{\mathrm{\dots },\mathrm{\dots }\}`$ in these classical equations). Since the strength of a time-independent electric field can always be written as minus the gradient of a scalar function, $`𝐄=-\mathbf{\nabla }\phi `$, equation (5) has the first integral $$\frac{1}{2}m\dot{𝐫}^2+e\phi =const,$$ which is obviously nothing but the integral of energy. On the other hand, by virtue of equation (4), the Hamiltonian in the gauge $`A_0=0`$ coincides in fact with the kinetic energy: $$H=\frac{1}{2m}(𝐩+et𝐄)^2=\frac{1}{2}m\dot{𝐫}^2.$$ It looks quite natural: the kinetic energy $`H-eA_0`$, being gauge-invariant, should coincide with the Hamiltonian in the gauge $`A_0=0`$. Finally, an obvious comment on the situation in quantum mechanics. Though the Hamiltonian is not gauge-invariant, the Schrödinger equation is. Its gauge invariance is saved by the gauge transformation of the wave function. In particular, in the gauge $`A_0=0`$ the time-dependence of the Hamiltonian results only in some extra time-dependent phase for the wave function. \*** We appreciate useful discussions with S.A. Rybak and V.V. Sokolov. We are grateful to S.A. Rybak also for the advice to publish this note. The work was supported by the Ministry of Education through grant No. 3N-224-98, and by the Federal Program Integration-1998 through Project No. 274.
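As a concrete illustration of the point made in section 2, the following minimal numerical sketch (with $`e=c=m=B=1`$ and an arbitrary initial condition, both choices made only for illustration) integrates Hamilton's equations in the two gauges (2) and (3) and confirms that $`p_y`$ is conserved in the first gauge, $`p_x`$ in the second, while the gauge-invariant velocity is identical in both.

```python
import numpy as np

# Sketch of the two gauges (2) and (3) above, with e = c = m = B = 1.
def A_gauge2(x, y):  return np.array([0.0, x])      # A = B(0, x, 0)
def A_gauge3(x, y):  return np.array([-y, 0.0])     # A = B(-y, 0, 0)

def derivs(state, A):
    x, y, px, py = state
    v = np.array([px, py]) - A(x, y)                # v = (p - (e/c)A)/m
    # p_dot_i = (e/c) * sum_j v_j dA_j/dx_i  (Hamilton's equations)
    eps, dp = 1e-6, np.zeros(2)
    for i in range(2):
        d = np.zeros(2); d[i] = eps
        dA = (A(x + d[0], y + d[1]) - A(x - d[0], y - d[1])) / (2*eps)
        dp[i] = v @ dA
    return np.array([v[0], v[1], dp[0], dp[1]])

def evolve(A, state, dt=1e-3, steps=20000):
    traj = [state.copy()]
    for _ in range(steps):                          # classical RK4 integrator
        k1 = derivs(state, A); k2 = derivs(state + dt/2*k1, A)
        k3 = derivs(state + dt/2*k2, A); k4 = derivs(state + dt*k3, A)
        state = state + dt/6*(k1 + 2*k2 + 2*k3 + k4)
        traj.append(state.copy())
    return np.array(traj)

# same physical initial condition; canonical momenta differ by gauge: p = v + A
x0, y0, vx0, vy0 = 1.0, 0.0, 0.0, 1.0
s2 = np.array([x0, y0, vx0 + A_gauge2(x0, y0)[0], vy0 + A_gauge2(x0, y0)[1]])
s3 = np.array([x0, y0, vx0 + A_gauge3(x0, y0)[0], vy0 + A_gauge3(x0, y0)[1]])
t2, t3 = evolve(A_gauge2, s2), evolve(A_gauge3, s3)

print("gauge (2): spread of p_y =", np.ptp(t2[:, 3]))   # conserved (~0)
print("gauge (3): spread of p_x =", np.ptp(t3[:, 2]))   # conserved (~0)
v2 = t2[:, 2:4] - np.array([A_gauge2(x, y) for x, y in t2[:, :2]])
v3 = t3[:, 2:4] - np.array([A_gauge3(x, y) for x, y in t3[:, :2]])
print("max |v_gauge2 - v_gauge3| =", np.abs(v2 - v3).max())  # gauge-invariant velocity
```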
# 1 Introduction ## 1 Introduction The long awaited recent report on a clear observation of direct CP violation in $`K\rightarrow \pi \pi `$ decays, $`\mathrm{Re}(ϵ^{\prime }/ϵ)=(28.0\pm 3.0\pm 2.6\pm 1.0)\times 10^{-4}`$, is the first evidence for the important role played by penguin amplitudes in the phenomena of CP violation . $`B`$ decays are expected to provide a variety of CP asymmetry measurements, as well as measurements of certain combinations of rates, some of which carry the promise of determining the angles of the unitarity triangle , $`\alpha ,\beta `$ and $`\gamma `$. This can test the commonly accepted hypothesis that CP violation arises solely from phases in the Cabibbo-Kobayashi-Maskawa matrix . Let us review a few of the ideas involved in this study, paying particular attention to the role of penguin amplitudes. * $`\beta `$: In the experimentally feasible and theoretically pure example of $`B^0(t)\rightarrow J/\psi K_S`$ the decay amplitude is real to a very high precision. Theoretically , the time-dependent mixing-induced CP asymmetry measures the phase $`\beta =-\mathrm{Arg}V_{td}`$ controlling $`B^0`$-$`\overline{B}^0`$ mixing to an accuracy of 1$`\%`$ . * $`\alpha `$: $`B^0(t)\rightarrow \pi ^+\pi ^{-}`$ involves direct CP violation from the interference between a dominant current-current amplitude carrying a weak phase $`\gamma `$ and a smaller penguin contribution, which “pollutes” the measured $`\mathrm{sin}\mathrm{\Delta }mt`$ term in the time-dependent asymmetry . A ratio of penguin to tree amplitudes $`|P/T|=0.3\pm 0.1`$ in $`B^0\rightarrow \pi ^+\pi ^{-}`$ is inferred from the measured rates of $`B\rightarrow K\pi `$, which are dominated by a penguin amplitude. Such a penguin contribution introduces a sizable uncertainty in the determination of $`\alpha =\pi -\beta -\gamma `$ in $`B^0\rightarrow \pi ^+\pi ^{-}`$. Isospin symmetry may be used to remove this unknown correction to $`\alpha `$ by measuring also the time-integrated rates of $`B^\pm \rightarrow \pi ^\pm \pi ^0`$ and $`B^0(\overline{B}^0)\rightarrow \pi ^0\pi ^0`$. In the likely case that the decay rate into $`\pi ^0\pi ^0`$ cannot be measured with sufficient precision, one can at least use this measurement to set upper limits on the error in $`\alpha `$ . Further in the future, one may combine the time-dependence of $`B^0(t)\rightarrow \pi ^+\pi ^{-}`$ with the U-spin related $`B_s(t)\rightarrow K^+K^{-}`$ to determine separately $`\beta `$ and $`\gamma `$ . This involves uncertainties due to SU(3) breaking. * $`\gamma `$: The angle $`\gamma `$ is apparently the most difficult to measure. It was suggested some time ago to obtain information about this angle from charged $`B`$ decays to $`K\pi `$ final states by measuring the relative phase between a dominant real penguin amplitude and a smaller current-current amplitude carrying the phase $`\gamma `$. This is achieved by relating the latter amplitude through flavor SU(3) to the amplitude of $`B^+\rightarrow \pi ^+\pi ^0`$, introducing SU(3) breaking in terms of $`f_K/f_\pi `$. In the above two examples of determining $`\alpha `$ and $`\gamma `$, QCD penguin amplitudes were taken into account in terms of their very general properties, whereas electroweak penguin (EWP) contributions were first neglected and later on analyzed in a model-dependent manner . Such an approach relies on factorization and on form factor assumptions , and involves theoretical uncertainties in hadronic matrix elements similar to those plaguing $`ϵ^{\prime }/ϵ`$ .
In the present report we will focus on recent developments in the study of EWP contributions, which partially avoid these uncertainties, thereby improving the potential accuracy of measuring $`\alpha `$ and $`\gamma `$. ## 2 Model-independent treatment of electroweak penguins The weak Hamiltonian governing $`B`$ decays is given by $$=\frac{G_F}{\sqrt{2}}\underset{q=d,s}{}\left(\underset{q^{}=u,c}{}\lambda _q^{}^{(q)}[c_1Q_1+c_2Q_2]\lambda _t^{(q)}\underset{i=3}{\overset{10}{}}c_iQ_i^{(q)}\right),$$ (1) where $`Q_1=(\overline{b}q^{})_{VA}(\overline{q}^{}q)_{VA},Q_2=(\overline{b}q)_{VA}(\overline{q}^{}q^{})_{VA},\lambda _q^{}^{(q)}=V_{q^{}b}^{}V_{q^{}q},q=d,s,q^{}=u,c,t,\lambda _u^{(q)}+\lambda _c^{(q)}+\lambda _t^{(q)}=0`$. The dominant EWP operators $`Q_9,Q_{10}`$ ($`|c_{7,8}||c_{9,10}|`$) have a (V-A)(V-A) chiral structure, similar to the current-current operators $`Q_1,Q_2`$. Thus, isospin alone relates the matrix elements of these operators in $`B^+\pi ^+\pi ^0`$ $$\sqrt{2}P^{EW}(B^+\pi ^+\pi ^0)=\frac{3}{2}\kappa (T+C),\kappa =\frac{c_9+c_{10}}{c_1+c_2}=0.0088,$$ (2) where $`T+C`$ represents graphically the current-current amplitudes dominating $`B^+\pi ^+\pi ^0`$. Similarly, flavor SU(3) implies $$P^{EW}(B^+K^0\pi ^+)+\sqrt{2}P^{EW}(B^+K^+\pi ^0)=\frac{3}{2}\kappa (T+C),$$ (3) $$P^{EW}(B^0K^+\pi ^{})+P^{EW}(B^+K^0\pi ^+)=\frac{3}{2}\kappa (CE).$$ (4) In the next three sections we describe briefly applications of these three relations to the determination of $`\alpha `$ and $`\gamma `$ from $`B\pi \pi `$ and $`BK\pi `$, respectively. ## 3 Controlling EWP contributions in $`B\pi \pi `$ The time-dependent rate of $`B^0\pi ^+\pi ^{}`$ includes a term $`\mathrm{sin}(2\alpha +\theta )\mathrm{sin}(\mathrm{\Delta }mt)`$, where the correction $`\theta `$ is due to penguin amplitudes . Using isospin (2), the EWP contribution to $`\theta `$, denoted by $`\xi `$, is found to be very small $$\mathrm{tan}\xi =\frac{x\mathrm{sin}\alpha }{1+x\mathrm{cos}\alpha },x\frac{3}{2}\kappa |\frac{\lambda _t^{(d)}}{\lambda _u^{(d)}}|=0.013|\frac{\lambda _t^{(d)}}{\lambda _u^{(d)}}|,$$ (5) and is nicely incorporated into the analysis of Ref. 12 which determines $`\alpha `$. ## 4 $`\gamma `$ from $`B^+K\pi `$ Using (3), EWP terms are included in the triangle construction of Ref. 15 $$\sqrt{2}A(B^+K^+\pi ^0)+A(B^+K^0\pi ^+)=\stackrel{~}{r}_uA(B^+\pi ^+\pi ^0)\left(1\delta _{EW}e^{i\gamma }\right),$$ (6) where $`\stackrel{~}{r}_u=(f_K/f_\pi )\mathrm{tan}\theta _c0.28,\delta _{EW}=(3/2)|\lambda _t^{(s)}/\lambda _u^{(s)}|\kappa 0.66\pm 0.15`$. This relation and its charge-conjugate permit a determination of $`\gamma `$ under the assumption that a rescattering amplitude with phase $`\gamma `$ can be neglected in $`B^+K^0\pi ^+`$. This amplitude is bounded by the U-spin related rate of $`B^\pm K^\pm \overline{K}^0`$ . Present limits are at the level of $`2030\%`$ of the dominant penguin amplitude , and are expected to be improved to the level of 10$`\%`$. In this case the rescattering effect, which depends strongly on the final state phase difference $`\varphi `$ between $`I=3/2`$ current-current and penguin amplitudes, introduces an uncertainty at a level of $`15^{}`$ in the determination of $`\gamma `$ if $`\varphi `$ is near $`90^{}`$ . A considerably smaller theoretical error would be implied if this measurable phase is found to be far from $`90^{}`$. Other sources of errors in $`\gamma `$, such as SU(3) breaking, are discussed elsewhere at this meeting . 
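To get a feeling for the size of these electroweak penguin quantities, one can evaluate them with representative CKM inputs (the ratios $`|\lambda _t/\lambda _u|`$ below are assumed typical values, not inputs of the analysis above):

```python
import numpy as np

# Illustrative numbers for the EWP quantities above; CKM ratios are assumptions.
kappa = 0.0088
lam_ratio_s = 50.0    # assumed |lambda_t^(s)/lambda_u^(s)| ~ |V_tb V_ts|/|V_ub V_us|
lam_ratio_d = 2.5     # assumed |lambda_t^(d)/lambda_u^(d)| ~ |V_tb V_td|/|V_ub V_ud|

delta_EW = 1.5*kappa*lam_ratio_s
print(f"delta_EW ~ {delta_EW:.2f}")            # ~ 0.66, as quoted in the text

x = 1.5*kappa*lam_ratio_d                      # the x = 0.013*|lambda_t/lambda_u| of Eq. (5)
alpha = np.linspace(0.0, np.pi, 181)
xi = np.degrees(np.arctan2(x*np.sin(alpha), 1.0 + x*np.cos(alpha)))
print(f"EWP phase shift xi stays below {xi.max():.1f} deg")   # a fraction of 2 degrees
```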
We note that in this determination of $`\gamma `$ SU(3) breaking does not occur in the leading penguin amplitudes as it does in some other methods . The phase $`\gamma `$ can also be constrained by measuring only charge-averaged $`B^\pm \rightarrow K\pi `$ rates. Defining $$R_{*}^{-1}=\frac{2[B(B^+\rightarrow K^+\pi ^0)+B(B^{-}\rightarrow K^{-}\pi ^0)]}{B(B^+\rightarrow K^0\pi ^+)+B(B^{-}\rightarrow \overline{K}^0\pi ^{-})},$$ (7) one finds using (3) $$R_{*}^{-1}=1-2ϵ\mathrm{cos}\varphi (\mathrm{cos}\gamma -\delta _{EW})+𝒪(ϵ^2,ϵ_A^2,ϵϵ_A),$$ (8) where $`ϵ=\stackrel{~}{r}_u\sqrt{2}|A(B^\pm \rightarrow \pi ^\pm \pi ^0)/A(B^\pm \rightarrow K^0\pi ^\pm )|\simeq 0.24`$, while $`ϵ_A`$ is the suitably normalized rescattering amplitude. The resulting bound $$|\mathrm{cos}\gamma -\delta _{EW}|\ge \frac{|1-R_{*}^{-1}|}{2ϵ},$$ (9) which neglects second order corrections, can be used to exclude an interesting region around $`\mathrm{cos}\gamma =\delta _{EW}`$ if $`R_{*}^{-1}\ne 1`$ is measured. Again, this would be very difficult if $`\varphi \simeq 90^{\circ }`$. The present value of the ratio of rates is $`R_{*}^{-1}=2.1\pm 1.1`$. ## 5 $`\gamma `$ from the ratio of $`B^0\rightarrow K^\pm \pi ^{\mp }`$ to $`B^\pm \rightarrow K^0\pi ^\pm `$ rates Denoting this ratio of charge-averaged rates by $`R`$ , one finds using (4) a constraint very similar to (9) $$|\mathrm{cos}\gamma -\delta _{EW}^{\prime }|\ge \frac{|1-R|}{2ϵ^{\prime }}$$ (10) where $`\delta _{EW}^{\prime }\simeq 0.2\delta _{EW}\simeq 0.13`$ represents color-suppressed EWP contributions, and $`ϵ^{\prime }\simeq 0.2`$ is the ratio of tree to penguin amplitudes in $`B^0\rightarrow K^+\pi ^{-}`$. In contrast to (9), this bound neglects first order rescattering effects, and the values of $`\delta _{EW}^{\prime }`$ and $`ϵ^{\prime }`$ are less solid than those of $`\delta _{EW}`$ and $`ϵ`$ in (9). Eq. (10) can exclude a region around $`\gamma =90^{\circ }`$ if $`R\ne 1`$ is found. Presently $`R=1.07\pm 0.45`$. ## 6 Conclusion * In $`B\rightarrow \pi \pi `$ strong and electroweak penguins are controlled by isospin. * In $`B\rightarrow K\pi `$ strong penguins dominate and EWP contributions are controlled by SU(3). * Interesting bounds on $`\gamma `$, in one case susceptible to rescattering effects, are implied if the $`B\rightarrow K\pi `$ charge-averaged ratios of rates differ from 1. * A precise determination of $`\gamma `$ from $`B\rightarrow K\pi `$ is challenging and requires a combined effort involving further theoretical and experimental studies. Acknowledgment: This work is supported by the United States – Israel Binational Science Foundation under Research Grant Agreement 94-00253/3.
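A short numerical sketch of how the bound (9) would be used in practice (the central values $`ϵ\simeq 0.24`$ and $`\delta _{EW}\simeq 0.66`$ are taken from above, while the measured value of $`R_{*}^{-1}`$ is a purely hypothetical input):

```python
import numpy as np

# Sketch of constraint (9): for an assumed measured value of R_*^{-1},
# which values of gamma would be excluded.
eps, delta_EW = 0.24, 0.66
R_star_inv = 1.3                      # hypothetical charge-averaged ratio, not a measurement

bound = abs(1.0 - R_star_inv)/(2.0*eps)      # |cos(gamma) - delta_EW| must exceed this
gamma = np.linspace(0.0, 180.0, 1801)
excluded = np.abs(np.cos(np.radians(gamma)) - delta_EW) < bound
if excluded.any():
    print(f"for R_*^-1 = {R_star_inv}: gamma in "
          f"[{gamma[excluded].min():.0f}, {gamma[excluded].max():.0f}] deg excluded")
else:
    print("no region of gamma excluded at this level")
```

For the present value $`R_{*}^{-1}=2.1\pm 1.1`$ the experimental error is still too large for a meaningful constraint of this kind.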
# Using the Comoving Maximum of the Galaxy Power Spectrum to Measure Cosmological Curvature ## 1. Introduction Methods for measuring cosmological parameters are more readily conceived than realized in practice. Recently however, it has become clear that both SNIa lightcurves and CMB temperature fluctuations provide reasonably model-independent and complimentary means of constraining cosmological curvature (Goobar & Perlmutter 1995, Eisenstein, Hu & Tegmark 1998). The angular scale of the first CMB Doppler peak limits combinations of $`\mathrm{\Omega }`$ and $`\mathrm{\Lambda }`$. At smaller angles more geometric information may be extracted but is subject to greater physical uncertainty and complex foregrounds. The utility of distant Type Ia supernovae has been convincingly demonstrated (Perlmutter et al. 1998,1999), but concerns remain. Local supernovae are largely discovered in luminous dusty late-type galaxies, deliberately targeted for convenience. In the distant field, supernovae are weighted towards lower luminosity galaxies, sometimes undetected because of the volume-limited nature of the selection. Hence it is not clear that both samples should share the same family of light curves. The physical origin of the $``$15% luminosity residual remaining after correction is not understood, nor indeed the light-curve correlation itself. Furthermore, some evolution is natural via progenitor metalicity, although the redshift independence of both the residual variance and the distribution of event durations is reassuring. An upward revision of the distant SNIa luminosities of only 15% would render a cosmological constant unnecessary (Perlmutter et al. 1999). Here we attempt a simple model-independent measurement of curvature, using the spatial scale of the maximum in the power spectrum of galaxy perturbations. A broad maximum is naturally expected below $`k0.1\mathrm{Mpc}/h`$, corresponding to the Horizon at the epoch of matter-radiation equality, up to which high frequency power is suppressed relative to an initially rising power-law spectrum, bending the spectrum over at higher frequency. Observations have established a maximum at $`k0.05h/\mathrm{Mpc}`$ ($`130h^1\mathrm{Mpc}`$) from wide angle galaxy and cluster surveys (Bough & Efstathiou 1993,1994, Einasto et al. 1997, Tadros, Efstathiou & Dalton 1998, Guzzo et al. 1998, Einasto et al. 1998, Hoyle et al. 1998, Scheuker et al. 1999). In particular a pronounced peak is evident in local cluster samples (Einsato et al. 1998) and in the deprojection of the large APM survey (Bough & Efstathiou 1994). In all these estimates the high frequency decline is steeper than expected (Feldman, Kaiser & Peacock 1995) and together with the relatively small amplitude fluctuations implied by CMB at very low k (Smoot et al. 1992) requires the existence of a maximum between 0.01$`<`$k$`<`$0.1. The low frequency peak in the projected 3D power-spectrum and its coincidence in scale with the excess large power detected in lower dimensional redshift surveys (Broadhurst et al. 1990,1992,1999; Landy et al. 1995, Small, Sargent & Hamilton 1997, Tucker, Lin, Schectman 1998) encourages the possibility that the test proposed here can be explored already, by simply comparing pencil beam surveys at high and low redshift. Power on these large scales comoves with the expansion, providing a means of measuring curvature. 
For example, at $`z=0`$ the redshift interval corresponding to a scale of $`130h^1\mathrm{Mpc}`$ is $`\mathrm{\Delta }z=0.043`$, but this stretches by a factor of 4–8 in redshift to $`\mathrm{\Delta }z0.25`$ at $`z=3`$, increasing with $`\mathrm{\Omega }`$ and $`\mathrm{\Lambda }`$. Here we describe the evidence for a preferred large scale of $`\mathrm{\Delta }z0.22`$ in the fields of high redshift galaxies presented by Steidel (1998) and in the Hubble Deep Field (Adelberger et al. 1998). We examine this in light of the increasing local evidence for a maximum in the power spectrum at 130$`h^1\mathrm{Mpc}`$. We obtain the locus of $`\mathrm{\Omega }`$ and $`\mathrm{\Lambda }`$, under the assumption that the preferred scale in these datasets is the same. Finally we discuss the implication of a spike in the primordial power spectrum for the CMB. ## 2. Low Redshift Structure Several independent wide angle surveys of galaxies and clusters allow the projected 3D power spectrum, $`P_3(k)`$ to be estimated on scales large enough to cover the predicted low frequency roll-over. These surveys all show a sharp decline in power at $`k>0.1\mathrm{Mpc}/h`$, and if deep enough, evidence of either a peak, or a maximum at $`k0.05\mathrm{Mpc}/h`$ (Baugh & Efstathiou 1993, Einasto et al. 1995, 1998, Tadros & Efstathiou 1996, Hoyle et al. 1999, Schuecker et al. 1999). This despite the relatively sparse sampling and fairly shallow depth, so that the peak may be better resolved in larger ongoing surveys, where the binning in $`k`$ can be made finer, and the isotropy of Fourier amplitudes examined around $`k=0.05\mathrm{Mpc}/h`$ for any coherent or generally non-Gaussian behaviour. Consistent with this, redshift surveys directed at the Boötes void and towards the Phoenix region and the Shapley supercluster also show a pronounced pattern of large wall/voids separated by $`130h^1\mathrm{Mpc}`$ (Kirshner et al. 1981, 1987, 1990, Quintana et al. 1988, Small, Hamilton & Sargent 1997). Large strip surveys in two dimensions also contain excess power around $`100h^1\mathrm{Mpc}`$ (Landy et al. 1995, Vettolani et al. 1995). This is despite the narrowness of the strips, of only a few Mpc in width, which must therefore underestimate the real radius of voids (Tucker, Lin & Shectman 1999). The transverse extent of the peaks and troughs identified at the galactic poles by Broadhurst et al. (1990) are now known to span much more than the width of the original beams, which were of order only the correlation length ($`5h^1\mathrm{Mpc}`$). The closest northern peak at v=7000km/s intersects the transverse ‘great wall’ structure near the Coma cluster, as revealed in the maps of De-Lapparent, Geller & Huchra (1986), which extends over $`100h^1\mathrm{Mpc}`$. Wider angle redshift surveys at the galactic poles (of diameter $`100h^1\mathrm{Mpc}`$) have clearly confirmed and strengthened the early finding of a scale of $`130h^1\mathrm{Mpc}`$ spanning the redshift range $`z<0.3`$ (Broadhurst et al. 1999) revealing a sinusoidal alternating pattern of peaks and troughs spanning the redshift range $`z<0.3`$ and hence a correspondingly narrow concentration of power in one dimension $`P_1(k)`$, at $`k=0.045h/\mathrm{Mpc}`$. Independent support for the reality of this pattern is found in the coincidence of these peaks with the local distribution of rich clusters (Broadhurst 1990, Bahcall 1991, Guzzo et al. 1992) and in a southern field close to the pole (Ettori, Guzzo & Tarengi 1997). 
In comparing the 1D distribution with the projected 3D power spectrum it is important to keep in mind that the wide angle redshift surveys do not have sufficient volume and sampling density to construct the real 3D power spectrum $`P_3(\stackrel{}{k})`$ on 100$`h^1\mathrm{Mpc}`$ scales, so interpretations are based on $`P_3(|k|)`$, which is the mean power averaged over solid angle. Hence, it is only sensible to compare the amplitude of $`P_3(|k|)`$ with $`P_1(k)`$ if the power is known to be isotropic with $`\stackrel{}{k}`$. A non-Gaussian distribution leads to “hotter” and “colder” spots at a given frequency, which may average out in projection but generate a larger variance in 1D pencil beams. This is particularly true of course for highly coherent structure (e.g. Voronoi foam, Icke & Van De Weygaert 1991) , where the 3D variance can be sub-Poissonian on scales larger than the coherence length. ## 3. High Redshift Structure The most puzzling aspect of the distant dropout galaxies is the appearance of sharp peaks in the redshift distribution (Adelberger et al. 1998, Steidel 1998), resembling the situation at low redshift, implying at face value little growth of structure. A conventional interpretation of these peaks resorts to “bias” (Wechsler et al. 1998) which has come to represent a flexible translation between the observed structure and the relatively smooth mass distribution of standard simulations, so that the observed peaks are interpreted as rare events destined to become massive clusters by today (Wechsler et al. 1998). High biases have been claimed for the Lyman-break population on the basis of the amplitude of small scale clustering at $`z3`$ (Adelberger et al. 1997, Giavalisco et al. 1998). The occurrence of such peaks is enhanced with a steep spectrum by a reduction of high frequency ‘noise’ (Wechsler et al 1998) but this gain is offset if the steepness is attributed to low $`\mathrm{\Omega }`$, since then a given redshift bin corresponds to a larger volume, and hence a greater proper density contrast (Wechsler et al 1998). We may regard the existence of regular spikes at low and high redshift as evidence for a revision in our understanding of large scale structure, indicating perhaps that initial density fluctuations are not Gaussian distributed such as may be implied by the observed lack of evolution of the number density of X-ray selected clusters (Rosati et al. 1998). The baryon isocurvature model (Peebles 1997) more naturally accommodates both the early formation and the frequent non-Gaussian occurrence of high density regions (Peebles 1998a,b). We analyze the fields of Lyman-Break galaxies of Steidel (1998) and Adelberger(1998). These include 4 fields of $`100200`$ galaxies and a smaller sample of redshifts in the Hubble Deep Field direction. The fields are $`10`$Mpc in width by $`400`$Mpc in the redshift direction and include over 600 redshifts in the range $`2.5<z<3.5`$ histogrammed in bins of $`\mathrm{\Delta }z=0.04`$. In Figure 1 we plot the pair counts and correlation function, assuming that galaxies are evenly distributed within the narrow redshift bins. A clear excess is apparent on a large scale, corresponding to a preference for separations of $`\mathrm{\Delta }z=0.22`$. A pair excess is also seen at twice this separation (Fig 1) indicating phase coherence along the redshift direction. This behaviour is an obvious consequence of the regular peaked structure visible in the redshift histograms of four of the five fields (Fig 1). 
In a bin of 10Mpc, the number of pairs at the peak is $`1220`$ compared with an expected $`1060`$, representing a $`4.6\sigma `$ departure from random (Fig 1). The observed redshift interval may be related to comoving scale by the usual formula for a universe with negligible radiation energy-density (Peebles 1993), $`\mathrm{\Delta }w`$ $`=`$ $`3000h^1\mathrm{Mpc}{\displaystyle _z^{z+\mathrm{\Delta }z}}{\displaystyle \frac{dz}{E(z)}}`$ (1) $``$ $`3000h^1\mathrm{Mpc}{\displaystyle \frac{\mathrm{\Delta }z}{E(z)}}`$ with $$E(z)=\sqrt{\mathrm{\Omega }(1+z)^3+\mathrm{\Omega }_\mathrm{\Lambda }+(1\mathrm{\Omega }\mathrm{\Omega }_\mathrm{\Lambda })(1+z)^2}.$$ (2) Whereas standard candles at moderate redshift (SNIa) measure $`(\mathrm{\Omega }\mathrm{\Omega }_\mathrm{\Lambda })`$, and CMB anisotropies measure $`(\mathrm{\Omega }+\mathrm{\Omega }_\mathrm{\Lambda })`$, these observations considered here measure $`E(3)`$ or $`(3\mathrm{\Omega }\mathrm{\Omega }_\mathrm{\Lambda })`$ which lies between these two locii, thus adding complimentary information. With this, the $`\mathrm{\Delta }z=0.22`$ scale of the peak corresponds to $`85h^1\mathrm{Mpc}`$ for $`\mathrm{\Omega }=1`$, doubling to $`170h^1\mathrm{Mpc}`$ for $`\mathrm{\Omega }=0`$. Flat cosmologies with a positive $`\mathrm{\Lambda }`$ fall between these limits. If we constrain $`\mathrm{\Delta }w=130h^1\mathrm{Mpc}`$ we find $`48\mathrm{\Omega }15\mathrm{\Omega }_\mathrm{\Lambda }10.5`$. This gives $`\mathrm{\Omega }=0.2`$ for an open universe ($`\mathrm{\Omega }_\mathrm{\Lambda }0`$) or $`\mathrm{\Omega }=0.4`$ for a flat universe ($`\mathrm{\Omega }+\mathrm{\Omega }_\mathrm{\Lambda }1`$), with an uncertainty of only $`0.1`$ in $`\mathrm{\Omega }_m`$, given by the 15% width of the excess correlation (Figure 1). This scale, if borne out by subsequent redshift data, is certainly consistent with the flat model preferred by SNIa data with $`\mathrm{\Omega }_m^{flat}=0.3`$ (Perlmutter et al. 1998,1999). ## 4. Excess Power and the CMB One can imagine two broad classes of physical mechanisms that might be responsible for the excess power required to fit this data. The power might be a truly primordial feature in the power spectrum. In inflationary scenarios, for example, this could be generated by the proliferation of super-horizon bubbles in a suitably conspiratorial inflaton potential (La 1991, Occhinero & Amendola 1994). On the other hand, excess power could be due to causal microphysics in the universe after the 130 $`h^1\mathrm{Mpc}`$ scales enter the horizon. In other words, the power could be added by the transfer function. A high baryon density naturally imparts large scale “bumps and wiggles” to the power spectrum (Peebles 1998a,b, Eisenstein et al. 1997, Meiksin et al. 1998) in particular if matter density fluctuations are created at the expense of radiation (Peebles 1998a,b). An even more radical possibility is that the universe is topologically compact, and that we are seeing evenly-spaced copies of a small universe with an extent of only $`130h^1\mathrm{Mpc}`$ although this scale seems too small to accommodate the unique and relatively distant (z=0.18) cluster A1689 (Gott 1980). In light of this it is interesting to explore the implications of any excess power on CMB temperature fluctuations. Eisenstein et al. (1997) has examined in what way conventional adiabatic models maybe stretched to match the power spectrum of Broadhurst et al. 
(1990), demonstrating that such models do not naturally account for a scale of 130$`h^1\mathrm{Mpc}`$ in the mass distribution. Our approach is to simply add power in the primordial spectrum at a fixed wave number, using a modified version of CMBFAST code (Seljak & Zaldarriaga 1999), to simulate temperature fluctuation spectra. A primordial spike of power will effect the CMB directly through the projection of three-dimensional features. A narrow band of power is added, $`\mathrm{\Delta }^2(k)=k^3P(k)/(2\pi ^2)`$. The amplitude is set so that $`d\mathrm{ln}k\mathrm{\Delta }^2(k)0.1`$, the value of the correlation function at the peak and harmonics equivalent to at 30% density contrast. We ignore bias, which may conceivably be large, lowering the peak amplitude; we also ignore the possibility of non-Gaussian fluctuations which would certainly diminish the angle-averaged power. If the value of the power spectrum at the $`k=0.05h^1\mathrm{Mpc}`$ peak is already large, as with COBE-normalized $`\mathrm{\Lambda }`$ models, an additional peak of the above strength has negligible effect. However, COBE-normalized models with $`\mathrm{\Omega }=1`$ and $`\mathrm{\Lambda }=0`$ have comparatively low power at this scale, hence, the effect on the final $`C_{\mathrm{}}`$ and $`P(k)`$ is significant. Of course, the precise width and location of the peak, not yet at all well-constrained by the CMB data, also affects the relative power in the peak versus the underlying “smooth power-law” spectrum. This is illustrated in Figure 2. The spike is seen to raise the amplitude of the first Doppler peak by nearly a factor of two for the sCDM model. Note, although it is premature to interpret the claims of the various CMB experiments for and excess of power at $`C_{\mathrm{}}=200`$ (Netterfield et al 1997, Tegmark 1998) without proper treatment of the covariance matrix of errors and foreground subtraction, the indications of a large amplitude for this peak are not inconsistent with the degree of boosting (Fig. 2). Others, Gawiser and Silk 1998, have noted that the whole aggregate of current $`C_{\mathrm{}}`$ and $`P(k)`$ data is extremely well-fit—much better than the fit to standard models—by an adiabatic inflationary power spectrum with sharp bump like that considered here. ## 5. Conclusions Two interesting results emerge from the above comparison of structure in the local and distant pencil beam data. Firstly both samples of galaxies show regular large scale power confined to a narrow range of frequency. Secondly, by matching these scales we obtain values of $`\mathrm{\Omega }`$ and $`\mathrm{\Lambda }`$ in good agreement with the SNIa claims. These findings may of course be regarded as remarkable coincidences, unlikely to occur by chance in a clumpy galaxy distribution (Kaiser & Peacock 1991). Distinguishing between the open and flat solutions found here requires pencil beam data at a third redshift. Optimally this redshift turns out to be convenient for observation, $`z0.7`$, where the ratio of $`\mathrm{\Delta }z`$ between the open $`\mathrm{\Omega }=0.2`$ and $`\mathrm{\Omega }_m^{flat}=0.4`$ cases is maximal, differing by 13%, and may be explored with existing data. Since locally the excess power at 130$`h^1\mathrm{Mpc}`$ is most prominent in cluster selected samples, we may conclude that the peaks in the high-$`z`$ sample correspond to proto-clusters at $`z3`$. This conclusion is independent of the high-bias interpretation which also regards the spikes as young rich clusters. 
Understanding the physics behind these spikes is complicated by a conspiracy of scales. We have seen in Fig. 2 that the $`k0.05\mathrm{Mpc}/h`$ spike produces a feature in the CMB power spectrum at nearly exactly the position of the first doppler peak. In the simple model presented here—a primordial spike at this position—this is merely a coincidence, however unlikely. That these scales are so closely matched seems yet more unlikely in the light of another fact, that the required peak in the matter power spectrum is at nearly the position of the expected peak due to the passage from radiation to matter-domination. If the acoustic peaks in the CMB power spectrum are instead a signature of physics at a somewhat later epoch: the sound horizon at recombination (set by $`c/\sqrt{3}`$) then that these scales are nearly coincident (within an order of magnitude or so of 100 $`h^1\mathrm{Mpc}`$) is a consequence of the particular values of the cosmological and physical parameters. Thus, if we change the initial conditions to increase the amplitude of the matter power spectrum near the equality scale, we also increase the CMB temperature power spectrum at roughly the scale of the first acoustic peak. That is, in neither case do we add an unexpected peak, but merely increase the amplitude and “sharpness” of an expected one. Of course, the inexplicable coincidence is that the feature in the power spectrum appears close to the expected matter-radiation equality peak, but with an amplitude much too large to be explained by standard theories. In addition, we must also note that merely adding a peak to the mean power spectrum cannot account for the nearly periodic structure observed in 1D. A real theoretical explanation must account for the complicated non-Gaussian properties of this distribution. The excess 1D power identified by Broadhurst et al. (1990) will soon be subject to easy dismissal with large 3D redshift surveys. However, whether or not it transpires that universal large scale coherence exists, the test proposed here is still viable in principle using the general predicted turnover of the 3D power spectrum, requiring larger volume redshift surveys. In particular comparisons of local and distant clusters over a range of redshift will be particularly useful given their sharply peaked and high amplitude power spectrum.. We thank Richard Ellis, Alex Szalay, Gigi Guzzo, Alvio Renzini, Eric Gawiser, Saul Perlmutter and Martin White for useful conversations.
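The arithmetic of the geometric test in Section 3 is simple enough to check directly; the following sketch evaluates the comoving interval corresponding to $`\mathrm{\Delta }z=0.22`$ at $`z=3`$ for the cosmologies discussed there, using the $`\mathrm{\Delta }z\ll 1`$ approximation of equation (1).

```python
import numpy as np

# Comoving interval Delta w ~ 3000*Delta z/E(z) in h^-1 Mpc, as in Section 3.
def E(z, Om, OL):
    return np.sqrt(Om*(1+z)**3 + OL + (1 - Om - OL)*(1+z)**2)

dz, z = 0.22, 3.0
for label, Om, OL in [("Einstein-de Sitter", 1.0, 0.0),
                      ("empty",              0.0, 0.0),
                      ("open  Omega=0.2",    0.2, 0.0),
                      ("flat  Omega=0.4",    0.4, 0.6)]:
    dw = 3000.0*dz/E(z, Om, OL)
    print(f"{label:20s}: Delta w = {dw:5.0f} h^-1 Mpc")
# -> ~82 for Omega=1, ~165 for Omega=0, and ~130 for the open and flat solutions,
#    within a few h^-1 Mpc of the 85/170/130 values quoted in Section 3
#    (the small differences come from the Delta z << 1 approximation).
```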
# Effects of Varying 𝐺 ## 1 Introduction In certain cosmological schemes, as is well known, the universal constant of gravitation $`G`$ changes very slowly with time. These include Dirac’s large number cosmology and fluctuational cosmology. In these cases the variation is given by $$G=\frac{\beta }{T}$$ (1) where, in the fluctuational cosmology referred to (cf. also refs.), $`\beta `$ is given in terms of the constant microphysical parameters by $$\beta \approx \frac{l\hbar }{m^2}$$ where $`l`$ is the pion Compton wavelength and $`m`$ its mass. (It may be pointed out in passing that while Dirac’s cosmology has well known inconsistencies, the latter theory is consistent with observation and predicts an ever expanding universe, as the latest observations of distant supernovae do indeed confirm. In addition, not only are the Large Number coincidences accounted for, but also Weinberg’s mysterious empirical relation between the pion mass and the Hubble constant is deduced from the theory (cf. references).) In any case, what we now propose to show is that, starting from (1), we can account for the perihelion precession of the planet Mercury as well as for an anomalous acceleration of the planets and, more generally, of solar system bodies, and in addition predict anomalous changes in the orbital eccentricities. ## 2 Solar System Orbits We now deduce, using (1), the perihelion precession of Mercury. We first observe that from (1) it follows that $$G=G_o(1-\frac{t}{t_o})$$ (2) where $`G_o`$ is the present value of $`G`$, $`t_o`$ is the present age of the universe and $`t`$ the time elapsed from the present epoch. Similarly one could deduce that (cf. ref.), $$r=r_o\left(\frac{t_o}{t_o+t}\right)$$ (3) We next use Kepler’s Third law: $$\tau =\frac{2\pi a^{3/2}}{\sqrt{GM}}$$ (4) where $`\tau `$ is the period of revolution, $`a`$ is the orbit’s semi major axis, and $`M`$ is the mass of the sun. Denoting the average angular velocity of the planet by $$\dot{\mathrm{\Theta }}\equiv \frac{2\pi }{\tau },$$ it follows from (2), (3) and (4) that $$\dot{\mathrm{\Theta }}-\dot{\mathrm{\Theta }}_o=\dot{\mathrm{\Theta }}_o\frac{t}{t_o},$$ where the subscript $`o`$ refers to the present epoch. Whence, $$\omega (t)\equiv \mathrm{\Theta }-\mathrm{\Theta }_o=\frac{\pi }{\tau _ot_o}t^2$$ (5) Equation (5) gives the average perihelion precession at time ’$`t`$’. Specializing to the case of Mercury, where $`\tau _o=\frac{1}{4}`$ year, it follows from (5) that the average precession per year at time ’$`t`$’ is given by $$\omega (t)=\frac{4\pi t^2}{t_o}$$ (6) Whence, considering $`\omega (t)`$ for years $`t=1,2,\mathrm{},100,`$ we can obtain from (6) the usual total perihelion precession per century as, $$\omega =\underset{n=1}{\overset{100}{\sum }}\omega (n)\approx 43^{\prime \prime },$$ if the age of the universe is taken to be $`2\times 10^{10}`$ years. Conversely, if we use the observed value of the precession in (6), we can get back the above age of the universe. It can be seen from (6) that the precession depends on the epoch. We next demonstrate that orbiting objects will have an anomalous inward radial acceleration.
Using the well known equations for Keplerian orbits (cf. ref.), $$\frac{1}{r}=\frac{GMm^2}{l^2}(1+e\mathrm{cos}\mathrm{\Theta })$$ (7) $$\dot{r}^2=\frac{GM}{rm}-\frac{l^2}{m^2r^2}$$ (8) $`l`$ being the orbital angular momentum constant and $`e`$ the eccentricity of the orbit, we can deduce such an extra inward radial acceleration, on differentiation of (8) and using (2) and (3), $$a_r=\frac{GM}{2t_or\dot{r}}$$ (9) It can be easily shown from (7) that $$\dot{r}\approx \frac{eGM}{rv}$$ (10) For a nearly circular orbit $`rv^2\approx GM`$, whence use of (10) in (9) gives, $$a_r\approx \frac{v}{2t_oe}$$ (11) For the earth, (11) gives an anomalous inward radial acceleration $`\sim 10^{-9}cm/sec^2`$, which is known to be the case. We could also deduce a progressive decrease in the eccentricity of orbits. Indeed, $`e`$ in (7) is given by $$e^2=1+\frac{2El^2}{G^2m^3M^2}\equiv 1+\gamma ,\gamma <0.$$ Use of (2) in the above and differentiation leads to, $$\dot{e}=\frac{\gamma }{et_o}\approx -\frac{1}{et_o}\sim -\frac{10^{-10}}{e}\text{per year},$$ if the orbit is nearly circular. (Variations of the eccentricity in the usual theory have been extensively studied (cf. ref. for a review).) We finally consider the anomalous accelerations given in (9) and (11) in the context of spacecraft leaving the solar system. If in (9) we use the fact that $`\dot{r}\sim v`$ and approximate $$v\approx \sqrt{\frac{GM}{r}},$$ we get, $$a_r\sim \frac{1}{et_o}\sqrt{\frac{GM}{r}}$$ For $`r\sim 10^{14}cm`$, as is the case for Pioneer $`10`$, this gives $`a_r\sim 10^{-11}cm/sec^2`$. Interestingly, Anderson et al. claim to have observed an anomalous inward acceleration of $`\sim 10^{-9}cm/sec^2`$.
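The century total quoted in Section 2 follows from a one-line sum; as a check (with the age $`t_o=2\times 10^{10}`$ years used above):

```python
import math

# Reproducing the arithmetic behind Eq. (6): omega(n) = 4*pi*n^2/t_o,
# summed over n = 1..100 years, with t_o = 2e10 yr as assumed in the text.
t_o = 2.0e10                          # age of the universe, years
rad_to_arcsec = 180.0/math.pi*3600.0

omega_century = sum(4.0*math.pi*n**2/t_o for n in range(1, 101))
print(f"total precession per century ~ {omega_century*rad_to_arcsec:.0f} arcsec")
# -> ~44 arcsec, i.e. the ~43'' per century quoted for Mercury
```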
# References April 1999 Some Speculations on the Gauge Coupling in the AdS/CFT Approach P. Olesen The Niels Bohr Institute, Blegdamsvej 17, DK-2100 Copenhagen Ø, Denmark. e-mail: polesen@nbi.dk Abstract We propose the principle that the scale of the glueball masses in the AdS/CFT approach to QCD should be set by the square root of the string tension. It then turns out that the strong bare coupling runs logarithmically with the ultraviolet cutoff $`T`$ if first order world sheet fluctuations are included. We also point out that in the end, when all corrections are included, one should obtain an equation for the coupling running with $`T`$ which has some similarity with the equation for the strong bare coupling. The remarkable duality between supersymmetric $`SU(N)`$ gauge theory and type IIB string theory in anti de-Sitter space times a compact space and its finite temperature generalization to non-supersymmetric gauge theories have been much discussed. It was pointed out by Gross and Ooguri that the four dimensional non-supersymmetric theory constructed this way corresponds to four dimensional large $`N`$ QCD only in a limit where the temperature $`T`$ approaches infinity and at the same time the coupling $`\lambda =g_{YM}^2N`$ goes to zero. More precisely, the limits to be taken are $$T\mathrm{}\mathrm{and}\lambda \frac{b}{\mathrm{ln}(T/\mathrm{\Lambda }_{QCD})}.$$ (1) Here $`\mathrm{\Lambda }_{QCD}`$ is a renormalization group invariant, so it should not change by simultaneous changes of $`T`$ and $`\lambda `$. To actually take these limits is not feasible at the moment, since the supergravity approximation breaks down for $`\lambda 0`$. In the supergravity approximation to the AdS/CFT approach the temperature $`T`$ plays the role of an ultraviolet cutoff, and the coupling $`\lambda _T=g_{YM}^2N\mathrm{}`$ is the bare coupling at the scale $`T`$. The string tension in the saddle point approximation is then proportional to $`\lambda _TT^2`$ . Here the coupling is, however, an arbitrary parameter (as long as it is large), and does not seem run with the scale $`T`$. This causes a problem concerning the comparison of the string tension and the glueball masses in the strong coupling limit . In an underlying string picture the glueball masses are expected to be proportional to the square root of the string tension, like in lattice gauge theory even away from the continuum limit, but this is not true here where the glueball masses are proportional to the temperature $`T`$ without any $`\sqrt{\lambda _T}`$ factor. Therefore, in the strong coupling limit the glueball spectrum does not appear to be consistent with an underlying string picture, where the glueballs would come from closed strings, and hence should have masses proportional to the square root of the string tension . Although this situation could certainly be improved as one goes from $`g^2N\mathrm{}`$ to $`g^2N0`$, it is strange that the AdS/CFT approach does not involve a string picture behind the glueballs even in the strong coupling limit. This problem indicates that the definition of the bare coupling should be reconsidered. In the following we therefore propose the (“renormalization”) principle that the scale of the glueball masses , $$\mathrm{glueball}\mathrm{masses}=\mathrm{const}.T(1+O(1/\lambda _T)),$$ (2) should be the right string scale, so $`T`$ should be proportional to the square root of the string tension. 
For consistency of the supergravity calculation around the saddle point, the coupling $`\lambda _T`$ should still go to infinity as $`T\mathrm{}`$. To invoke this “renormalization” condition is clearly not possible in the leading order, since it would require $`\lambda _T`$ to be of order one. However, when fluctuations of the string are included, the string tension acquires logarithmic corrections . The reason for this somewhat unexpected behavior is that two of the transverse bosonic world sheet fields become massive, whereas the remaining six transverse fields remain massless. The massive fields then contribute a logarithmic term to the string tension $`\mathrm{\Lambda }^2`$, $$\mathrm{string}\mathrm{tension}\mathrm{\Lambda }^2=\frac{8\pi }{27}\lambda _TT^24\pi T^2\mathrm{ln}\frac{T^2}{\mu ^2}\left(1+O(1/\lambda _T)\right).$$ (3) Here $`\mu `$ is an arbitrary scale introduced to regulate the sum over the modes of the world sheet fluctuations through the heat kernel for a Laplace-type operator $`𝒪`$ $$\mathrm{tr}\mathrm{ln}𝒪=(4\pi ^2\mu ^2/e)^s\frac{1}{\mathrm{\Gamma }(s)}_0^{\mathrm{}}\frac{dt}{t^{1s}}\mathrm{tr}e^{t𝒪},$$ (4) where $`s0`$. The factors multiplying $`\mu `$ have been selected in order to simplify the string tension (3). The tr ln is thus evaluated using analytic regularization, and the scale $`\mu `$ is somewhat similar to the arbitrary scale introduced in dimensional regularization. The two scales $`T`$ and $`\mu `$ are a priori of different origin, since $`T`$ is the scale at which supersymmetry is broken by the boundary conditions, whereas $`\mu `$ is a scale needed to treat the logarithmic behavior of the massive string modes. The equation for the string tension (3) can be considered as expressing $`\mathrm{\Lambda }`$ in terms of $`T`$ and $`\lambda _T`$, but if we have additional information on the string tension, one can equally well consider the equation as giving $`\lambda _T`$ in terms of $`T`$ and $`\mathrm{\Lambda }`$. Now, in accordance with the “renormalization” principle stated above, we impose as a boundary condition that the square root of the string tension is proportional to the scale $`T`$ of the glueball masses. Therefore to leading order in the inverse coupling we require $$\sqrt{\mathrm{string}\mathrm{tension}}=\mathrm{\Lambda }=cT,$$ (5) where $`c`$ is some number which in principle can be fixed if one knows enough about the glueball masses for higher spins by fitting the Regge trajectory to these high spins. Hence we get $$\lambda _T=\frac{27c^2}{8\pi }+27\mathrm{ln}\frac{T}{\mu }27\mathrm{ln}\frac{T}{\mu }\mathrm{}\mathrm{for}T\mathrm{}.$$ (6) Therefore the coupling does run with the scale, and the bare coupling goes logarithmically to infinity (for $`T\mu `$) when $`T\mathrm{}`$. The scale dependent behavior (6) is thus needed in order that the glueball masses are proportional to the square root of the string tension. This is the main result of this note. One could ask what should happen in the end, when all calculations are done some time in the future (taking into account that the supergravity approximation must break down for small $`\lambda `$, so corrections to the metric should be included) so that it makes sense to consider also the small coupling. We would then expect that $`\mathrm{\Lambda }`$, being the square root of the string tension, becomes the QCD scale, and hence $$\frac{\mathrm{\Lambda }^2}{T^2}e^{2b/\lambda _{YM}(T)},$$ (7) where $`\lambda _{YM}(T)b/\mathrm{ln}(T/\mathrm{\Lambda })`$ is the QCD coupling at scale $`T`$. 
Thus, the right physics is obtained by performing the limit $`T\mathrm{}`$. Then eq. (3) would be replaced by $$\frac{8\pi }{27}\lambda _T4\pi \mathrm{ln}\frac{T^2}{\mu ^2}+(\frac{1}{\lambda _T},\mathrm{ln}\frac{T}{\mu })=\frac{\mathrm{\Lambda }^2}{T^2}0\mathrm{for}T\mathrm{}$$ (8) with $`\mathrm{\Lambda }`$ fixed. Here $``$ represents the additional $`1/\lambda _T`$ corrections to the string tension divided by $`T^2`$, coming from higher order expansions of the string action (including the fermions as well corrections to the supergravity approximation) used to compute the fluctuations in the Wilson loop around the saddle point. These corrections depend on powers of the inverse coupling and could also in general depend on the cutoff. In the interpretation of eq. (8) it is important that $`\mathrm{\Lambda }`$ is renormalization group invariant, which defines $`\lambda _T`$ as a function of $`T`$ through eq. (8). In an expansion of the action in orders of fluctuations of the world sheet, in general the higher order terms do not contribute to the linear term (string tension)$`\times L`$, but rather produce terms of order $`1/L`$ or smaller. Thus, in general it is not so easy to get contributions to the function $``$ from the higher order terms. However, if for example the higher order fluctuations produce new mass terms, these can still contribute to the string tension. Such terms should involve masses of order $`1/\lambda _T`$ or lower. Also, corrections to the metric may contribute to the string tension. Using (8) we then get $$\lambda _T27\mathrm{ln}\frac{T}{\mu }\frac{27}{8\pi }(\frac{1}{\lambda _T},\mathrm{ln}\frac{T}{\mu })\mathrm{for}T\mathrm{}.$$ (9) This is an implicit equation for $`\lambda _T`$. Computing more and more terms in $``$ change the functional dependence of $`\lambda _T`$ on $`\mathrm{ln}(T/\mu )`$. Then, if everything goes well, eq. (9) should have a solution $$\lambda _Tb/\mathrm{ln}(T/\mu )\mathrm{for}T\mathrm{},$$ (10) approaching zero in this limit. It should be emphasized that a solution of the type (10) cannot in general be the only solution of (8). For example, if $``$ only contains a finite number of significant terms and does not depend on $`\mathrm{ln}(T/\mu )`$, then e.g. a strong, logarithmically divergent coupling is always a solution, because for this particular case $``$ can be ignored for the strong coupling in eq. (8), since for $`\lambda _T\mathrm{}`$ the $`1/\lambda _T`$ dependence of $``$ is insignificant relative to the leading terms on the left hand side of eq. (8). This is valid more generally if $``$ is analytic in $`1/\lambda _T`$, which presumably is equivalent to having no phase transition in going from strong to weak coupling. In this case eq. (3) is a rudimentary version of eq. (8). Also, if $``$ only has a finite number $`q`$ of significant terms, eq. (8) is a polynomial equation of order $`q`$, and hence can have $`q`$ solutions, some of which may be invalid because they are complex. If there are an infinite number of terms in $``$ the situation is of course quite different. A phase transition may occur so that $``$ is not analytic in $`1/\lambda _T`$. Actually $``$ does not have to be terribly complicated to produce an answer which looks much like the right one, as the following $`hypothetical`$ example shows. Suppose we obtain $$=\frac{k}{\lambda _T}$$ (11) in a calculation where corrections to the metric are included, so that it makes sense to consider small values of $`\lambda _T`$. 
Here $`k`$ is a positive constant, and it is assumed further that higher order terms are absent or have very small coefficients and can be ignored. Then from (8) we find that $`\lambda _T`$ satisfies $$\lambda _T^227\mathrm{ln}(T/\mu )\lambda _T+27k/8\pi =O(\mathrm{\Lambda }^2/T^2)0,$$ (12) where $`\mathrm{\Lambda }`$ is fixed, so that the right hand side of this equation is sub-logarithmic, of order $`1/T^2`$. Thus we have the two solutions $$\lambda _T27\mathrm{ln}(T/\mu )+O(1/\mathrm{ln}(T/\mu ))\mathrm{and}k/[8\pi \mathrm{ln}(T/\mu )]+O(1/\mathrm{ln}^3(T/\mu )),$$ (13) corresponding to a strong coupling<sup>1</sup><sup>1</sup>1This is not the same as the bare strong coupling in eq. (6), which was derived in the strong coupling regime where $`\mathrm{\Lambda }=cT`$, in contrast to the fixed $`\mathrm{\Lambda }`$ behavior relevant when all corrections are included. However, the logarithmic divergence of the strong coupling is exactly the same in the two cases. and to the right logarithmically decreasing behavior of the asymptotically free QCD coupling, respectively, provided we identify the so far arbitrary scale $`\mu `$ with the QCD scale $`\mathrm{\Lambda }`$. If $`k=8\pi b`$ we would then get the right answer for the weak coupling. Since there are two solutions for the coupling, we have the option of taking the weak coupling $`\lambda _T`$ as the right solution. Of course, this assumes that the formula (11) really includes corrections such that the metric makes sense even for small $`\lambda _T`$. This is certainly a fictitious example, and higher order terms in $`1/\lambda _T`$ could play an important role. However, the example shows that the strong$``$ weak coupling transition could happen in a relatively simple way, and it would anyhow be of interest to compute the next $`1/\lambda _T`$ order, since it could give the right functional dependence of the coupling on the logarithm. It would then be interesting to see how far the coefficient of the inverse logarithm is from the right value $`b`$. We do not expect to get the right value of $`b`$, of course, before the problems connected with the break down of the supergravity approximation at small $`\lambda _T`$ have been settled. This would presumably imply that the coefficient $`k`$ does not have the right value $`8\pi b`$, or that there is no $`1/\lambda _T`$ correction ($`k=0`$). Further, it could be that $`k`$ is not really a constant, but depends on the cutoff $`T`$. For example, if $`k=K\mathrm{ln}(T/\mu )`$, where $`K`$ is a positive constant, corresponding to $`=K\mathrm{ln}(T/\mu )/\lambda _T`$, we would again get the strong coupling limit as in (13) but the other solution would be a constant $`K/8\pi `$. In conclusion, the main points of this paper are: * Proportionality between the glueball masses and the square root of the string tension requires the bare strong coupling to run with $`T`$. If, in contrast, the coupling is considered as an arbitrary parameter, there is no underlying string picture of the glueballs in the strong coupling limit. * In the end, when all corrections are included, we get an equation for the coupling which has a number of solutions (if there is no phase transition between weak and strong coupling). One of these solutions is hopefully the right one exhibited in eq. (1), but another one is presumably the running bare strong coupling (6). If there is a phase transition, these two solutions would be on “different branches”. 
* There exists a very simple case which exhibits the strong and weak coupling, namely the behavior (11). * There exists a curious relation between the bare strong coupling and the asymptotically free one, namely $$\lambda _T\simeq 27b/\lambda _{YM}+27c^2/8\pi \to 27b/\lambda _{YM}\quad \mathrm{for}\quad T\to \infty ,$$ (14) provided $`\mu `$ is identified with $`\mathrm{\Lambda }_{QCD}`$. This might indicate that the regulator theory (with $`\lambda _T\to \infty `$) has a dual relation to QCD, so that the former is a strong coupling version of the latter, and vice versa. Of course, it could also be that this relation is purely accidental. We end with the remark that the running coupling $`\lambda _T\propto \mathrm{ln}(T/\mu )`$ discussed in this note can be understood as a renormalization needed in strings with extrinsic curvature . It was pointed out in ref. that when world sheet fluctuations are included, the string in the AdS/CFT approach becomes rigid, and it is well known that such rigid strings require renormalization . The physical situation in the present case is, however, different from the one discussed in the literature on strings with extrinsic curvature.
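As a closing numerical illustration of the hypothetical example (11)-(13), the Python sketch below solves the quadratic (12) and compares its two roots with the asymptotic strong and weak coupling forms; the value of $`k`$ and the sample values of $`T/\mu `$ are arbitrary.

```python
import numpy as np

def coupling_branches(T_over_mu, k=1.0):
    """Roots of Eq. (12): lambda^2 - 27 ln(T/mu) lambda + 27 k/(8 pi) = 0
    (the right-hand side Lambda^2/T^2 is neglected for T >> Lambda)."""
    b = 27.0 * np.log(T_over_mu)
    c = 27.0 * k / (8.0 * np.pi)
    disc = np.sqrt(b**2 - 4.0 * c)
    return (b + disc) / 2.0, (b - disc) / 2.0      # strong branch, weak branch

if __name__ == "__main__":
    k = 1.0                                        # hypothetical coefficient of Eq. (11)
    for T_over_mu in [1e1, 1e3, 1e6]:
        strong, weak = coupling_branches(T_over_mu, k)
        logT = np.log(T_over_mu)
        print(f"T/mu = {T_over_mu:8.0e}:  strong = {strong:8.2f} (27 ln = {27*logT:8.2f}),"
              f"  weak = {weak:.4f} (k/[8 pi ln] = {k/(8*np.pi*logT):.4f})")
```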
no-problem/9904/cond-mat9904027.html
ar5iv
text
# Transition Temperature of a Uniform Imperfect Bose Gas ## Abstract We calculate the transition temperature of a uniform dilute Bose gas with repulsive interactions, using a known virial expansion of the equation of state. We find that the transition temperature is higher than that of an ideal gas, with a fractional increase $`K_0(na^3)^{1/6}`$, where $`n`$ is the density and $`a`$ is the S-wave scattering length, and $`K_0`$ is a constant given in the paper. This disagrees with all existing results, analytical or numerical. It agrees exactly in magnitude with a result due to Toyoda, but has the opposite sign. The weakly interacting Bose gas is an old subject that has found new life since the experimental discovery of Bose-Einstein condensation in ultra-cold trapped atoms. Although there is a vast literature on the subject, the transition temperature of the interacting gas remains a controversial subject, even in the uniform case. In this note, we present a calculation that is simple and transparent, in the hope that it will settle the controversy. In the ideal Bose gas, the condition determining the fugacity $`z`$ in the gas phase is $$\lambda ^3n=g_{3/2}\left(z\right)$$ (1) where $`n`$ is the particle density, $`\beta =(k_BT)^{-1}`$, $`\lambda =\sqrt{2\pi \beta \hbar ^2/m}`$ is the thermal wavelength, and $$g_\alpha (z)\equiv \sum _{\ell =1}^{\infty }\frac{z^{\ell }}{\ell ^\alpha }.$$ (2) No single-particle state has a macroscopic occupation in this phase. However, the function $`g_{3/2}\left(z\right)`$ is monotonic, and bounded by $`g_{3/2}(1)=\zeta (3/2)=2.612`$. Thus, particles must go into the zero-momentum state when $`\lambda ^3n>\zeta (3/2)`$, making this state macroscopically occupied. This gives the transition temperature of the ideal Bose gas at a fixed density $`n`$: $$T_0=\frac{2\pi \hbar ^2}{mk_B}\left[\frac{n}{\zeta (3/2)}\right]^{2/3}.$$ (3) Now consider a uniform imperfect Bose gas with repulsive interactions with equivalent hard-sphere diameter $`a`$ (S-wave scattering length). We denote the transition temperature by $`T_c`$, and its fractional shift by $$\frac{\mathrm{\Delta }T}{T_0}\equiv \frac{T_c-T_0}{T_0}.$$ (4) In an early statement on the subject , it was argued that $`\mathrm{\Delta }T>0`$, since a spatial repulsion leads to momentum-space attraction, and this would make the imperfect gas more ready to condense. However, a Hartree-Fock calculation by Fetter and Walecka , and one by Girardeau based on a mean-field method, yield the opposite sign $`\mathrm{\Delta }T<0`$. A calculation of the grand partition function to one-loop order by Toyoda also yields a negative sign. Specifically, Toyoda obtains $$\left(\frac{\mathrm{\Delta }T}{T_0}\right)_{\text{Toyoda}}=-K_0(na^3)^{1/6}$$ (5) where $$K_0=\frac{8\sqrt{2\pi }}{3\left[\zeta (3/2)\right]^{2/3}}=3.527.$$ (6) Barring calculational errors, this must be considered reliable, since it is the lowest-order result of a systematic expansion. All subsequent calculations, alas, yield answers different from this and from one another. Stoof gives $`\mathrm{\Delta }T\propto a^{3/2}`$, while Bijlsma and Stoof obtain $`\mathrm{\Delta }T\propto a^{1/2}`$. A numerical calculation based on Monte-Carlo simulation by Grüter et al. gives $`\mathrm{\Delta }T/T_0=c_0(na^3)^\gamma `$, where $`c_0=0.34\pm 0.06`$, and $`\gamma =0.34\pm 0.03`$. A recent calculation involving some mean-field assumptions gives $`\mathrm{\Delta }T/T_0=0.7(na^3)^{1/3}`$. 
Thus, there is no consensus on how $`\mathrm{\Delta }T`$ should depend on the scattering length, nor even the sign! We shall calculate $`T_c`$ using an extension of (1) obtained some time ago via the virial expansion. To lowest order in a power series in $`a`$, the result is $$\lambda ^3n=g_{3/2}\left(z\right)\left[1-\frac{4a}{\lambda }g_{1/2}\left(z\right)\right].$$ (7) As a function of $`z`$, the right side rises through a maximum at some value $`z=z_c`$, and then approaches $`-\infty `$ as $`z\to 1`$. Thus, as in the ideal case, the right side is bounded from above, and the bound occurs at $`z_c`$. Since the treatment is valid only when $`a/\lambda \ll 1`$, we put $$z_c=1-\delta \qquad (\delta \ll 1).$$ (8) The maximum can be located with the help of the expansions $`g_{3/2}\left(z_c\right)\simeq \zeta (3/2)-2\sqrt{\pi }\delta ^{1/2}`$ (9) and $`g_{1/2}\left(z_c\right)\simeq \sqrt{\pi }\delta ^{-1/2}`$ (10). We then find $$\delta =\frac{2a}{\lambda }\zeta (3/2).$$ (11) The critical temperature $`T_c`$ can be found by substituting this value into (7), with the result $$\frac{\mathrm{\Delta }T}{T_0}=K_0(na^3)^{1/6}$$ (12) where $`K_0`$ is given in (6). Thus, we agree exactly with Toyoda except for the sign. Since the derivation here is simple and transparent, we believe Toyoda made a sign error. In order to obtain a shift in the transition temperature, it is necessary to incorporate interactions in the gas phase, and that is why the virial expansion is useful. On the other hand, the quasiparticle picture introduced via the Bogoliubov transformation is not helpful here, because the quasiparticles reduce to non-interacting particles at the transition point.
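For orientation, the short Python sketch below evaluates $`K_0`$ of Eq. (6) and the shift (12); the density, scattering length and atomic mass are illustrative, roughly sodium-like placeholder values rather than data discussed in this note, and only the combination $`na^3`$ actually enters the fractional shift.

```python
import numpy as np
from scipy.constants import hbar, k as k_B, pi, atomic_mass
from scipy.special import zeta

zeta32 = zeta(1.5)                                   # zeta(3/2) = 2.612...
K0 = 8.0 * np.sqrt(2.0 * pi) / (3.0 * zeta32**(2.0 / 3.0))   # Eq. (6)

# illustrative placeholder inputs
n = 1.0e20                  # particle density [m^-3]
a = 2.75e-9                 # S-wave scattering length [m]
m = 23.0 * atomic_mass      # sodium-like atomic mass [kg]

T0 = (2.0 * pi * hbar**2 / (m * k_B)) * (n / zeta32)**(2.0 / 3.0)  # ideal-gas transition temperature, Eq. (3)
shift = K0 * (n * a**3)**(1.0 / 6.0)                               # fractional increase, Eq. (12)

print(f"K_0         = {K0:.3f}   (the value 3.527 quoted in Eq. (6))")
print(f"n a^3       = {n*a**3:.2e}")
print(f"T_0 (ideal) = {T0*1e6:.2f} microK")
print(f"Delta T/T_0 = +{shift:.3f}")
```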
no-problem/9904/cond-mat9904372.html
ar5iv
text
# Magnon Damping by magnon-phonon coupling in Manganese Perovskites ## Abstract Inelastic neutron scattering was used to systematically investigate the spin-wave excitations (magnons) in ferromagnetic manganese perovskites. In spite of the large differences in the Curie temperatures ($`T_C`$s) of different manganites, their low-temperature spin waves were found to have very similar dispersions with the zone boundary magnon softening. From the wavevector dependence of the magnon lifetime effects and its correlation with the dispersions of the optical phonon modes, we argue that a strong magneto-elastic coupling is responsible for the observed low temperature anomalous spin dynamical behavior of the manganites. The elementary magnetic excitations (spin waves) in a ferromagnet can provide direct information about the itinerancy of the unpaired electrons contributing to the ordered moment. In insulating (local moment) ferromagnets, such excitations are usually well defined throughout the Brillouin zone and can be described by the Heisenberg model of magnetism . On the other hand, metallic (itinerant) ferromagnets are generally characterized by the disappearance of spin waves at finite energy and momentum due to the presence of the Stoner (electron-hole pair) excitation continuum associated with the band structure and itinerant electrons in the system . In the ferromagnetic manganese perovskites (manganites) $`A_{0.7}B_{0.3}`$MnO<sub>3</sub> (where $`A`$ and $`B`$ are rare-earth and alkaline-earth elements respectively.) , the ferromagnetism and electric conductivity can be continually suppressed by different $`A`$($`B`$) substitutions until an insulating, charge-ordered ground state is stabilized . Approaching this insulating ground state (with decreasing Curie temperature $`T_C`$), the low temperature spin-wave dispersions have been found to deviate from the nearest-neighbor Heisenberg exchange Hamiltonian that has been successfully applied to the higher $`T_C`$ materials . However, the microscopic origin for such deviation is unknown. Furthermore, spin waves for the lower $`T_C`$ $`A_{0.7}B_{0.3}`$MnO<sub>3</sub> have very similar spin-wave stiffness constants ($`D`$), contrary to the expectation of a Heisenberg ferromagnet where $`D`$ is related to $`T_C`$ . Here we argue that the deviation from the simple Heisenberg Hamiltonian and the observation of magnon damping at large wavevectors are due to strong magneto-elastic (or magnon-phonon) interactions, consistent with the electron-lattice coupling for the manganites in the proximity of the charge ordered insulating state. We study $`A_{0.7}B_{0.3}`$MnO<sub>3</sub> manganites because these materials exhibit a large resistivity drop that is intimately related to the paramagnetic-to-ferromagnetic phase transition at $`T_C`$ . Due to the octahedral crystalline field, the 3$`d`$ energy level of the Mn ion in $`A_{0.7}B_{0.3}`$MnO<sub>3</sub> is split into a low-lying $`t_{2g}`$ triplet and a higher energy $`e_g`$ doublet. In 1951, Zener postulated that the conductivity in these mixed valence systems was due to the simultaneous hopping of an ($`e_g`$) electron (with electron transfer energy $`t`$) from Mn<sup>3+</sup> ($`3d^4`$) to the connecting O<sup>2-</sup> and from the O<sup>2-</sup> to the Mn<sup>4+</sup> ($`3d^3`$) $`e_g`$ band; hence termed double-exchange (DE). 
Because all the electrons in the Mn 3$`d`$ levels are polarized by a strong intra-atomic exchange interaction $`J_H`$ (Hund-rule coupling), such hopping tends to align the spins of Mn ions on adjacent sites parallel . Thus, the DE mechanism provided a qualitative interpretation of coupled ferromagnetic ordering and electric conductivity . However, recent theoretical work suggests that DE mechanism alone cannot explain the temperature dependence of the resistivity near and above $`T_C`$. Additional interactions, such as a strong dynamical Jahn-Teller (JT) based electron-lattice coupling, are necessary to explain the magnitude of the resistivity drop across $`T_C`$. In this scenario, the JT distortion of the Mn<sup>3+</sup>O<sub>6</sub> octahedra lifts the double degeneracy of $`e_g`$ electrons in Mn<sup>3+</sup> and causes a local lattice distortion that may trap carriers to form a polaron. The formation of lattice polarons in the paramagnetic state leads to the localization of conduction band ($`e_g`$) electrons above $`T_C`$ and hence the insulating behavior. On cooling below $`T_C`$ however, the growing ferromagnetic order increases the $`e_g`$ electron hopping (kinetic) energy $`t`$ and decreases the electron-lattice coupling strength sufficiently that metallic behavior occurs. The system crosses over from a polaron regime to a Fermi-liquid regime and the DE mechanism ultimately prevails in the low temperature metallic state . The electron-lattice coupling described above is dynamical, $`i.e.`$, it involves vibrational distortions of the oxygen octahedron around the Mn<sup>3+</sup> site. However, the static lattice distortion, present because of the ionic size differences of $`A`$ and $`B`$ in $`A_{0.7}B_{0.3}`$MnO<sub>3</sub>, may also affect the electron hopping and the Curie temperature $`T_C`$. Therefore, systematic studies of the spin-wave excitations in $`A_{0.7}B_{0.3}`$MnO<sub>3</sub> with decreasing $`T_C`$ and increasing residual resistivity should reveal how the system evolves from an itinerant to a localized ferromagnet. Such investigation will also test to what extent the low temperature magnetic properties of $`A_{0.7}B_{0.3}`$MnO<sub>3</sub> ferromagnets can be explained by the DE mechanism. For these purpose, we have characterized the low temperature spin dynamics of the ferromagnetic single crystals of Pr<sub>0.63</sub>Sr<sub>0.37</sub>MnO<sub>3</sub> (PSMO), La<sub>0.7</sub>Ca<sub>0.3</sub>MnO<sub>3</sub> (LCMO), and Nd<sub>0.7</sub>Sr<sub>0.3</sub>MnO<sub>3</sub> (NSMO) which have approximately the same nominal carrier concentration but significantly different $`T_C`$s and residual conductivity (see Figure 1). The temperature dependent resistivity $`\rho (T)`$ for these three samples is shown in Figure 1. The characteristic drop in $`\rho `$ coincident with $`T_C`$ is clearly seen to increase with decreasing $`T_C`$. At the same time the residual resistivity increases almost linearly with decreasing $`T_C`$, indicating that with increasing magnetoresistance effect the system becomes a worse metal at low temperatures. An interesting feature of $`\rho (T)`$ at low temperatures is that all three samples exhibit the same temperature dependence below $``$100 K when $`\rho (T)`$ is scaled to the residual value $`\rho (0)`$. The inset to Figure 1 illustrates this point. The open symbols in Figure 2 summarize the spin-wave dispersions along the $`[0,0,\xi ]`$, $`[\xi ,\xi ,0]`$, and $`[\xi ,\xi ,\xi ]`$ directions at 10 K for PSMO, LCMO, and NSMO . 
Clearly, the dispersions of these three manganites are remarkably similar at the measured frequencies, suggesting that the magnetic exchange coupling strength, derived from the hopping of the $`e_g`$ electrons between the Mn<sup>3+</sup> and Mn<sup>4+</sup> sites, depend only weakly on $`T_C`$. These results are in sharp contrast to the single-band DE model where the spin-wave dispersions are directly related to electronic bandwidth and hence $`T_C`$ . In the strong coupling limit ($`J_H/t\mathrm{}`$) of this model, the spin-wave dispersion of the ferromagnet is consistent with the nearest-neighbor Heisenberg Hamiltonian and $`D`$ should be proportional to the electron transfer energy $`t`$. Previous work has shown that such single-band DE model is adequate for describing the spin dynamics of the highest $`T_C`$ ferromagnetic manganites . To estimate the spin-spin exchange coupling strength, we note that the low-frequency spin waves of $`A_{0.7}B_{0.3}`$MnO<sub>3</sub> manganites LCMO , NSMO, and PSMO are isotropic and gapless with a stiffness $`D165`$ meVÅ<sup>-2</sup>. For a simple cubic Heisenberg ferromagnet with nearest-neighbor exchange coupling $`J`$, $`D=2JSa^2`$, where $`S`$ is the magnitude of the electronic spin at the magnetic ionic sites and $`a`$ is the lattice parameter. From the measured spin-wave stiffness, one can calculate the exchange coupling strength $`J`$ and hence the expected dispersion for a simple nearest-neighbor Heisenberg ferromagnet. The solid lines in Figure 2 show the outcome of such calculation which clearly misses the measured spin-wave energies at large wavevectors. Figure 3 shows typical constant-$`𝐪`$ scans along the $`[0,0,\xi ]`$ and $`[\xi ,\xi ,0]`$ directions for LCMO and NSMO. Most of the data are well described by Gaussian fits which give the amplitude, widths, and peak positions of the excitations. While the dispersion curves shown in Figure 2 are obtained by peak positions at different wavevectors, the amplitude and widths provide information about the damping and lifetime of the magnon excitations. The upper panel in Figure 3 displays the result along the $`[0,0,\xi ]`$ direction and similar data along the $`[\xi ,\xi ,0]`$ direction is shown in the bottom panel. It is clear that spin waves are significantly damped at large wavevectors. Although still relatively well-defined throughout the Brillouin zone in the $`[0,0,\xi ]`$ direction for both compounds, the excitations are below the sensitivity of the measurements at wavevectors beyond (0.25,0.25,0) reciprocal lattice units (rlu) along the $`[\xi ,\xi ,0]`$ direction for NSMO. To further investigate the wavevector dependence of the spin-wave broadening and damping, we plot in Figure 4 the intrinsic widths of the magnons along the $`[0,0,\xi ]`$ direction. The full width at half maximum (FHWM) of the excitations $`\mathrm{\Gamma }`$ shows a similar increase at wavevectors larger than $`\xi 0.3`$ rlu for all three manganites . To determine whether such broadening is due to the Stoner continuum, we note that at low temperatures, the spin moment of itinerant electrons of ferromagnetic manganites is completely saturated and the system is in the half-metallic state . In this scenario of the DE model, there is complete separation of the majority and minority band by a large $`J_H`$. As a consequence, the Stoner continuum is expected to lie at an energy scale (2$`J_H`$) much higher than that of the spin-wave excitations . 
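For readers wishing to reproduce the Heisenberg comparison curves, the minimal Python sketch below evaluates the nearest-neighbor dispersion implied by $`D=2JSa^2`$; only the combination $`2JS=D/a^2`$ is needed, and the pseudocubic lattice parameter is an assumed typical perovskite value rather than a number quoted in this paper.

```python
import numpy as np

D = 165.0          # spin-wave stiffness [meV A^2] quoted in the text
a = 3.87           # pseudocubic lattice parameter [A]; assumed typical value, not from this paper
twoJS = D / a**2   # since D = 2 J S a^2 for the simple cubic nearest-neighbor Heisenberg model

def E_heisenberg(xi, direction):
    """Nearest-neighbor Heisenberg dispersion E(q) = 2(2JS)[3 - cos(qx a) - cos(qy a) - cos(qz a)],
    with the reduced wavevector xi (r.l.u.) taken along a high-symmetry direction."""
    h = {"[0,0,xi]": (0, 0, 1), "[xi,xi,0]": (1, 1, 0), "[xi,xi,xi]": (1, 1, 1)}[direction]
    qa = 2.0 * np.pi * xi * np.array(h, dtype=float)
    return 2.0 * twoJS * (3.0 - np.sum(np.cos(qa)))

if __name__ == "__main__":
    print(f"2JS = D/a^2 = {twoJS:.1f} meV")
    for direction in ["[0,0,xi]", "[xi,xi,0]", "[xi,xi,xi]"]:
        print(f"{direction}:  E(xi=0.1) = {E_heisenberg(0.1, direction):6.1f} meV,"
              f"  zone boundary E(xi=0.5) = {E_heisenberg(0.5, direction):6.1f} meV")
```

The zone-boundary energies obtained this way lie above the measured ones, which is the zone-boundary softening referred to in the text.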
For this reason, the observed magnon broadening and damping for lower $`T_C`$ manganites are unlikely to be due to Stoner continuum excitations. On the other hand, such behavior may be well understood if one assumes a new spin-wave damping channel that is related to a strong coupling between the conduction band ($`e_g`$) electrons and the cooperative oxygens in the Mn-O-Mn bond, analogous to that of a dynamic JT effect . Although JT based electron-lattice coupling is known to be important for the metal-to-insulator transition at temperatures near and above $`T_C`$ , such coupling may also be important to understand the low temperature magnetic properties. If electron-lattice coupling is indeed responsible for the observed spin-wave broadening and damping, one would expect the presence of such coupling in the lower $`T_C`$ samples that should be absent in the higher $`T_C`$ materials. Experimentally, there have been no reports of magnon-phonon coupling in the higher $`T_C`$ $`A_{0.7}B_{0.3}`$MnO<sub>3</sub> . For the DE ferromagnet La<sub>0.8</sub>Sr<sub>0.2</sub>MnO<sub>3</sub> ($`T_C=304`$ K), Moudden et al. have measured the spin-wave, acoustic and optical phonon dispersions from the zone center to the zone boundary along the $`[0,0,\xi ]`$ direction. Phonon branches were found to cross smoothly through the magnon dispersion, suggesting weak or no magnon-phonon hybridization. For the lower $`T_C`$ ferromagnet LCMO, infrared reflectivity spectra show three zone center ($`𝐪=0`$) transverse optical (TO) phonons located around 20.4 meV, 42 meV, and 73 meV . These three phonon modes were identified as “external”, “bending”, and “stretching” modes which correspond to the vibration of the La/Ca ions against the MnO<sub>6</sub> octahedron, the bending motion of the Mn-O-Mn bond, and the internal motion of the Mn ion against the oxygen octahedron, respectively. To search for possible magnon-phonon coupling in the lower $`T_C`$ manganites, we have measured selected optical phonons in the LCMO single crystal. Figure 2 shows two longitudinal optical (LO) phonon modes throughout the Brillouin zone as solid symbols. The metallic nature of the manganites at low temperature requires the collapse of the LO and TO splitting of the polar modes at $`𝐪=0`$. For LCMO, this means that the two LO modes observed by neutron scattering are likely to be the external and bending modes identified by infrared reflectivity . At wavevectors larger than 0.3 rlu, the dispersions of these two phonon modes are remarkably close to those of the magnons along the $`[0,0,\xi ]`$ and $`[\xi ,\xi ,0]`$ directions, suggesting that the softening of the spin-wave branches in Figure 2 is due to magnon-phonon coupling. In principle, the interaction between the magnetic moments and the lattice can modify spin waves in two different ways. First, the static lattice deformation induced by the ordered magnetic moments may affect the anisotropy of the spin waves. Second, the dynamic time-dependent modulations of the magnetic moment may interfere with the lattice vibrations, resulting in significant magneto-elastic interactions or magnon-phonon coupling. One possible consequence of such coupling is to create energy gaps in the magnon dispersion at the nominal intersections of the magnon and phonon modes. However, our spin-wave dispersion data in Figure 2 show no obvious evidence of any gap at the magnon and phonon crossing at $`\xi 0.3`$ rlu. 
Alternatively, magnon-phonon coupling, present in all exchange coupled magnetic compounds to some extent, may give raise to spin-wave broadening . In this scenario, the vibrations of the magnetic ions about their equilibrium positions affect the exchange energy through the spatial variation of the spin-spin exchange coupling strength, which in turn leads to spin-wave broadening at the magnon-phonon crossing points. Generally, one would expect such coupling to be strong for the lower $`T_C`$ $`A_{0.7}B_{0.3}`$MnO<sub>3</sub> manganites because of their close proximity to the charge-ordered insulating state . This is exactly what is observed for these materials at $`\xi 0.3`$ rlu. Constant-q scans in Figure 3 show significant broadening of the spin waves from $`\xi <0.3`$ to $`\xi 0.3`$ rlu for LCMO and NSMO. Similarly, Figure 4 reveals that magnon widths increase considerably at wavevectors $`\xi 0.3`$ rlu for all three manganites investigated, consistent with the expectation of a strong magnon-phonon hybridization . We have discovered that spin-wave softening and broadening along the $`[0,0,\xi ]`$ direction occur at the nominal intersection of the magnon and optical phonon modes in lower $`T_C`$ $`A_{0.7}B_{0.3}`$MnO<sub>3</sub> manganites. This result strongly suggests that magneto-elastic coupling is important to the understanding of the low temperature spin dynamics of $`A_{0.7}B_{0.3}`$MnO<sub>3</sub>. In the lower doping canted ferromagnet La<sub>0.85</sub>Sr<sub>0.15</sub>MnO<sub>3</sub> , much larger spin-wave broadening and damping have been found at low temperatures . Although the magnon dispersion relation in that system appears to be consistent with the simple nearest-neighbor Heisenberg Hamiltonian, the observation of strong anisotropic spin-wave broadening is in sharp contrast to the expectation of the single-band DE model where magnons in the ground state are eigenstates of the system . Therefore, it becomes clear that the single-band DE mechanism cannot describe the spin dynamics of La<sub>0.85</sub>Sr<sub>0.15</sub>MnO<sub>3</sub> and lower $`T_C`$ $`A_{0.7}B_{0.3}`$MnO<sub>3</sub> manganites, not even in the low temperature ground state. To understand the extraordinary magnetic and transport properties of $`A_{1x}B_x`$MnO<sub>3</sub>, one must explicitly consider the close coupling between charge, spin, and lattice degrees of freedom in these complex materials. We thank H. Kawano, W. E. Plummer, S. E. Nagler, H. G. Smith, and X. D. Wang for helpful discussions. This work was supported by the US DOE under Contract No. DE-AC05-96OR22464 with Lockheed Martin Energy Research Corporation and JRCAT of Japan.
no-problem/9904/astro-ph9904021.html
ar5iv
text
# Bounds from Primordial Black Holes with a Near Critical Collapse Initial Mass Function ## I Introduction Primordial black holes (PBHs) arise naturally in most cosmologies. Perhaps the least speculative mechanism for PBH formation comes from the collapse of overdense regions in the primordial density fluctuations that gave rise to structure in the universe . Thus PBHs carry information of an epoch about which we know comparatively little, and are a very useful tool for restricting theories of the very early universe, especially within the context of an inflationary scenario. In this paradigm the density spectrum is a consequence of the quantum fluctuations of the inflaton field, and can in principle be calculated given an underlying model. Thus, the non-observation of the by-products or of the effects of the energy density of these PBHs constrains the underlying microscopic theory. The simplest bound that can be extracted from PBH formation is generated by insisting that $`\mathrm{\Omega }_{PBH}1`$. Other bounds may be derived by studying the consequences of their evaporation. Given that black holes evaporate at a rate proportional to their inverse mass squared , the phenomenological relevance of a PBH will depend upon its initial mass. Smaller mass PBHs ($`10^9<M_{bh}<10^{13}`$ g) will alter the heavy elements abundances as well a distort the microwave background , whereas PBHs with larger masses will affect the diffuse gamma-ray background . The net evaporation spectrum from a collection of PBHs will depend on the initial mass distribution, which in turn depends upon the probability distribution for the density fluctuations. Prior to the emergence of the COBE data, bounds from PBHs were calculated assuming a Harrison-Zel’dovich spectrum $`|\delta _k|^2k`$, with an unknown normalization. This lead to the famous “Page-Hawking” bound and all its subsequent improvements . However, assuming that the distribution is Gaussian, we can now relate the mass variance at the time of formation with the mass variance at large scales today, if we know the (model-dependent) power spectrum. None the less, bounds on $`n`$ have been derived within the class of models where the power spectrum is given by a power law $`|\delta _k|^2k^n`$ over the scales of interest . Indeed, in this way it follows that for the scale-invariant Harrison-Zel’dovich spectrum PBHs have too small a number density to be of any astrophysical significance. However, observations (see references in ) now seem to favor a tilted blue spectrum with $`n>1`$ (in CDM models) with more power at smaller scales, and thus the bounds derived from black hole evaporation can be used to constrain the tilt of the spectrum. In this paper we revisit the aforementioned bounds in light of recent calculations which indicate that a spectrum of primordial black hole masses are produced through near critical gravitational collapse. As was pointed out by Jedamzik and Niemeyer , if PBH formation is a result of a critical phenomena, then the initial mass function will be quite different then what was expected from the classic calculation of Carr . In particular, the PBH mass formed at a given epoch is no longer necessarily proportional to the horizon mass. The resulting difference in the initial mass function leads to new bounds, which is the main thrust of this paper. In particular, we revisit the density bounds and the bounds derived from the diffuse gamma-ray observations. 
We will first derive bounds on $`n`$ in the class of models where the power spectrum is a power law over the scales of interest. We then relax this assumption and instead place bounds on the mass variance at the formation epoch.<sup>*</sup><sup>*</sup>*As will be discussed later, PBH formation is dominated by the earliest formation epoch if they form via critical collapse. These bounds can then be applied to a chosen model by extrapolating the variance today to the formation epoch according to the appropriate power spectrum. Before continuing to the body of this work, we would like to point out that primordial black holes have also played a large role in attempting to explain various data. PBHs can serve as significant cosmological flux sources for all particle species via Hawking radiation. Thus it is very tempting to postulate that present-day observed particle fluxes of unknown origin are a consequence of PBH evaporation. However, to predict these fluxes, or model them realistically, we need to know the mass distribution of black holes, since their emission spectra are determined by their temperature (or inverse mass). If we assume that the black holes of interest were formed from initial density inhomogeneities generated in an inflationary scenario (which is usually assumed), then the black holes are either tremendously over-abundant or completely negligible. To get a phenomenologically interesting quantity of PBHs thus requires an extreme fine-tuning, as will be demonstrated below. Succinctly, this fine-tuning arises because the PBH number density is an extremely rapidly varying function of the spectral index $`n`$. Thus, without even analyzing the details of the spectral profile, explaining unknown fluxes via PBH evaporation is far from compelling. ## II The Initial Mass Function Carr first calculated the PBH spectrum resulting from a scale-invariant Harrison-Zel’dovich spectrum up to an overall normalization. Subsequently Page and Hawking calculated a bound on the normalization by calculating the expected diffuse gamma-ray spectrum from these PBHs . However, using the COBE measurements of the temperature anisotropies translated into density fluctuations (within a CDM), the overall normalization can be determined. For a scale-invariant spectrum, no significant number density of PBHs is generated. However, for a tilted blue power spectrum with more power on smaller scales, a larger number density of PBHs is expected. Given an initially overdense region with density contrast $`\delta _i`$ and radius $`R`$ at time $`t_i`$ (using the usual comoving coordinates), analytic arguments predict a black hole will form if $$1/3\delta _i1.$$ (1) The lower bound on the density contrast comes from insisting that the size of the region at the time of collapse be greater than the Jeans length, while the upper bounds come from the consistency of the initial data with the assumption of a connected topology. The first calculations of the PBH mass distribution assumed the relation $$M_{bh}\gamma ^{3/2}M_h,$$ (2) where $`\gamma `$ determines the equation of state $`p=\gamma \rho `$ and $`M_h`$ is the horizon mass when the scale of interest crossed the horizon. Recently, numerical evidence suggests that near the threshold of black hole formation, gravitational collapse behaves as a critical phenomena with scaling and self-similarity . 
A scaling relation of the following form was found $$M_{bh}(\delta )=kM_h(\delta \delta _c)^\rho ,$$ (3) where $`\rho `$ is a universal scaling exponent which is independent of the initial shape of the density fluctuation. It was later shown that such scaling should be relevant for PBH formation. Indeed, in Ref. the authors found relation (3) to hold for PBH formation when the initial conditions are adjusted to be nearly critical. They found the exponent to be $`\rho 0.37`$. They also found that for several different initial density shapes, $`\delta _c0.7`$, which is significantly larger than the analytic prediction of $`1/3`$ found by requiring that the initial overdensity be larger than the Jeans mass. Given Eq. (3), calculating the initial PBH mass distribution becomes analytically cumbersome, since in principle one needs to sum over all epochs of PBH formation. However, we would expect that the initial mass function would be dominated by the earliest epoch of formation if we assume a Gaussian distribution with a blue power spectrum, since for larger scales the formation probability should be suppressed. This expectation was tested in Ref. , where the authors used the excursion set formalism to calculate the initial mass function allowing for PBH formation at all epochs. The authors found that it was approximately true the that earliest epoch dominates, for the conditions of interest to us. This simplification allows us to derive quite easily the initial mass distribution . We assume Gaussian fluctuations (the effects of non-Gaussianity will be briefly discussed in Sec. V) and define the usual smoothed density contrast $$\delta _R(x)=d^3y\delta (x+y)W_R(y),$$ (4) where $`\delta (x)=(\rho (x)\rho _b)/\rho _b`$, and $`\rho _b`$ is background energy density. $`W_R`$ is the window function with support in a region of size $`R`$. The probability that a region of size $`R`$ has density contrast between $`\delta `$ and $`\delta +d\delta `$ is given by $$P(R,\delta )d\delta =\frac{1}{\sqrt{2\pi }\sigma _R}\mathrm{exp}\left(\frac{\delta ^2}{2\sigma _R^2}\right)d\delta ,$$ (5) where $`\sigma _R`$ is the mass variance for a region of size $`R`$, $`\sigma _R^2=\delta _R^2(x)/R^3`$. Then using Eq. (3), the physical number density of PBHs within the horizon per logarithmic mass interval, at the formation epoch, can be written as $$\frac{dn_{bh}}{d\mathrm{log}M_{bh}}=V_h^1P[\delta (M_{bh})]\frac{d\delta }{d\mathrm{log}M_{bh}}=\frac{V_h^1}{\sqrt{2\pi }\sigma \rho }\left(\frac{M_{bh}}{kM_H}\right)^{1/\rho }\mathrm{exp}\left[\frac{1}{2\sigma ^2}\left[\delta _c+\left(\frac{M_{bh}}{kM_H}\right)^{1/\rho }\right]^2\right],$$ (6) where $`V_h`$ is the horizon volume the the epoch of PBH formation. We assume prompt reheating, and therefore take the formation epoch to be the time of reheating, which corresponds to the minimum horizon mass . Assuming that the power law spectrum holds down to the scales of the reheat temperature, then $`\sigma _H^2R^{(n+3)}`$. We can then relate the mass variance today $`\sigma _0`$ to the mass variance at the epoch of PBH formation $`\sigma (M_H)`$, using $$\sigma ^2(M_H)=\sigma _0^2\left(\frac{M_{eq}}{M_0}\right)^{(1n)/3}\left(\frac{M_H}{M_{eq}}\right)^{(1n)/2},$$ (7) where $$M_H=M_0\left(\frac{T_{eq}}{T_{RH}}\right)^2\left(\frac{T_0}{T_{eq}}\right)^{3/2},$$ (8) $`M_0`$ is the mass inside the horizon today, $`T_{RH}`$ is the reheat temperature, and $`T_{eq}`$ is the temperature at radiation-matter equality. 
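To illustrate how Eqs. (3)-(8) combine, the Python sketch below evaluates $`\sigma (M_H)`$ and the fraction of the horizon energy density that collapses into PBHs at reheating. Here $`\delta _c=0.7`$ and $`\rho =0.37`$ are the values quoted above, $`k=2.85`$ is one of the shape-dependent values mentioned, $`\sigma _0`$ is the COBE normalization quoted in Eq. (9) just below, and the spectral indices and reheat temperature are purely illustrative.

```python
import numpy as np

# critical-collapse parameters quoted in the text; k_scal is one of the shape-dependent values
delta_c, rho_exp, k_scal = 0.7, 0.37, 2.85
sigma_0 = 9.5e-5                    # COBE-normalized mass variance today, Eq. (9) below
T_now, T_eq = 2.35e-13, 5.0e-9      # present and matter-radiation-equality temperatures [GeV]

def sigma_MH(n, T_RH):
    """Mass variance at the reheating horizon scale, Eqs. (7)-(8); for a pure power law the
    mass ratios reduce to the temperature combination T_now*T_eq/T_RH^2."""
    X = T_now * T_eq / T_RH**2
    return sigma_0 * X**((1.0 - n) / 4.0)

def beta_formation(n, T_RH):
    """Mass fraction collapsing into PBHs at reheating:
    beta = int_{delta_c}^{1} d(delta) P(delta) * k (delta - delta_c)^rho,
    using the Gaussian (5) and the scaling law (3)."""
    s = sigma_MH(n, T_RH)
    delta = np.linspace(delta_c + 1e-6, 1.0, 200000)
    P = np.exp(-delta**2 / (2.0 * s**2)) / (np.sqrt(2.0 * np.pi) * s)
    integrand = k_scal * (delta - delta_c)**rho_exp * P
    return np.sum(integrand) * (delta[1] - delta[0]), s

if __name__ == "__main__":
    T_RH = 1.0e8                    # illustrative reheat temperature [GeV]
    for n in [1.28, 1.32, 1.36]:
        beta, s = beta_formation(n, T_RH)
        # peak of Eq. (6): u = (M/kM_H)^(1/rho) solves u^2 + delta_c*u - sigma^2 = 0
        u = 0.5 * (-delta_c + np.sqrt(delta_c**2 + 4.0 * s**2))
        print(f"n = {n:.2f}:  sigma(M_H) = {s:.3f}   beta = {beta:.2e}   peak mass ~ {u**rho_exp:.3f} k M_H")
```

The many-orders-of-magnitude variation of $`\beta `$ over a small range of $`n`$ visible in this output is the fine-tuning alluded to in the introduction.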
This relation is essential to connect present-day density fluctuations to those of much earlier times. Two important assumptions underly this useful relation: First, $`n`$ is taken to be constant over all the scales of interest. Second, the universe was assumed to be radiation dominated until the temperature dropped below $`T_{eq}5`$ eV (matter-radiation equality), and then matter dominated thereafter. From the COBE anisotropy data, the mass variance can be calculated $$\sigma _0=9.5\times 10^5.$$ (9) Using this result we can then calculate the physical number density per unit mass interval at $`T=T_{RH}`$ $$\frac{dn_{bh}}{dM_{bh}}=\frac{V_h^1}{\sqrt{2\pi }\sigma (M_H)M_{bh}\rho }y^{1/\rho }\mathrm{exp}\left[\frac{\left(\delta _c+y^{1/\rho }\right)^2}{2\sigma ^2(M_H)}\right],$$ (10) where $$y=\frac{M_{bh}}{kM_H}=\frac{M_{bh}T_{RH}^2}{0.301kg_{}^{1/2}M_{Pl}^3},$$ (11) and $`M_{Pl}`$ is the Planck mass. The physical number density at time $`t`$ is simply Eq. (10) rescaled by a ratio of scale factors, that can be written as $$\frac{dn_{bh}}{dM_{bh}}(t)=\left(\frac{T(t)}{T_{RH}}\right)^3\frac{dn_{bh}}{dM_{bh}}.$$ (12) Finally, we should note that relation (3) is only valid for $`\delta \delta _c`$. Thus we expect that we may integrate over $`\delta `$ with small errors, as long as the width of the Gaussian is sufficiently small. In particular, our results should be trustworthy provided $`\sigma 1`$, which implies $`n`$ should not exceed the maximum $$n_{max}1+\frac{2\mathrm{log}(\sigma _0^2)}{\mathrm{log}(T_{eq}T_0/T_{RH}^2)}.$$ (13) We will see that $`n`$ does not exceed this maximum value for all of the bounds we consider. ## III Bounds from $`\mathrm{\Omega }1`$ Let us now calculate the total energy density in PBHs. We assume the “standard cosmology” where the universe began in an inflationary phase, reheated, was radiation dominated from the reheating period until matter-radiation equality, and then has been matter dominated. The contribution of a PBH with a given initial mass, $`M_{bh}`$, to the energy density today will depend upon its lifetime. The time-dependent PBH mass $`M(t)`$ is given by $$M(t)=M_{}\left[\left(\frac{M_{bh}}{M_{}}\right)^3\frac{t}{t_0}\right]^{1/3},$$ (14) where $`M_{}`$ is the initial mass of a PBH which would be decaying today, $`M_{}5\times 10^{14}`$ g. It is a good approximation to assume that the black hole decays instantaneously at a fixed decay time, $`t_d`$, which we use in the following. There are two components to the PBH density bounds that we can calculate. The first is the total energy density of the PBHs that have not decayed by a given time $`t`$. The second is the total energy density of the products of PBH evaporation. The sum of these components must be less than the critical density $`\mathrm{\Omega }_{pbh,evap}+\mathrm{\Omega }_{pbh}<1`$, at any time. The evaporated products of PBHs, in particular photons, could break up elements during nucleosynthesis, disrupting the well-measured elemental abundances. This and other processes during nucleosynthesis provide additional bounds on the density of PBHs that we do not discuss here. 
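The consistency limit (13) is easy to evaluate, and setting $`M(t)=0`$ in Eq. (14) gives the initial mass whose evaporation completes by time $`t`$. The Python sketch below does both; the reheat temperatures are illustrative and the ratio $`t_{eq}/t_0`$ is an assumed standard-cosmology number rather than one quoted in the text.

```python
import numpy as np

sigma_0 = 9.5e-5                  # Eq. (9)
T_now, T_eq = 2.35e-13, 5.0e-9    # present and equality temperatures [GeV]
M_star = 5.0e14                   # initial mass [g] of a PBH that completes its evaporation today

def n_max(T_RH):
    """Largest spectral index for which sigma(M_H) << 1 still holds, Eq. (13)."""
    return 1.0 + 2.0 * np.log(sigma_0**2) / np.log(T_eq * T_now / T_RH**2)

def M_evaporated_by(t_over_t0):
    """Initial mass that has just evaporated by time t (set M(t) = 0 in Eq. (14))."""
    return M_star * t_over_t0**(1.0 / 3.0)

if __name__ == "__main__":
    for T_RH in [1e3, 1e8, 1e16]:
        print(f"T_RH = {T_RH:.0e} GeV  ->  n_max = {n_max(T_RH):.2f}")
    t_eq_over_t0 = 3.3e-6         # assumed ratio t_eq/t_0, for illustration only
    print(f"initial mass evaporating by equality: M ~ {M_evaporated_by(t_eq_over_t0):.1e} g")
```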
The simplest bound comes from the density of PBHs that have not decayed by time $`t`$, $$\rho _{pbh}(t)=\left(\frac{T(t)}{T_{RH}}\right)^3\rho _{tot}(t_{RH})I_{M_{}(t)}^{M_{max}}(0)$$ (15) where $`T(t)`$ is the temperature of the universe at time $`t`$, and $`M_{}(t)`$ is the initial PBH mass that has just completely evaporated by time $`t`$, $$M_{}(t)M_{}\left(\frac{t}{t_0}\right)^{1/3}.$$ (16) $`I_{M_1}^{M_2}(\xi )`$ is a dimensionless weighted integral over the PBH mass spectrum between $`M_1`$ to $`M_2`$, normalized to the total density $`\rho _{tot}(t_{RH})`$, $$I_{M_1}^{M_2}(\xi )=\frac{1}{\rho _{tot}(t_{RH})}_{M_1}^{M_2}𝑑M_{bh}M_{bh}\frac{dn_{bh}}{dM_{bh}}\left(\frac{M_{bh}}{M_{}}\right)^\xi ,$$ (17) where $`dn_{bh}/dM_{bh}`$ is given by Eq. (10). We can use the above to trivially compute the ratio of the PBH density to the critical density, $`\mathrm{\Omega }(t)`$. In particular, we need only compute the density ratio at three relevant epochs: immediately after reheating $`t=t_{RH}`$, at matter-radiation equality $`t=t_{eq}`$, and present-day $`t=t_0`$. The density ratios areNote that $`\mathrm{\Omega }(t_{RH})=I_0^{M_{max}}(0)`$ is often denoted by $`\beta (t_{RH})`$. $`\mathrm{\Omega }(t_{RH})`$ $`=`$ $`I_0^{M_{max}}(0)`$ (18) $`\mathrm{\Omega }(t_{eq})`$ $`=`$ $`{\displaystyle \frac{T_{RH}}{T_{eq}}}I_{M_{eq}}^{M_{max}}(0)`$ (19) $`\mathrm{\Omega }(t_0)`$ $`=`$ $`{\displaystyle \frac{T_{RH}}{T_{eq}}}I_M_{}^{M_{max}}(0).`$ (20) where $`M_{max}`$ corresponds to $`\delta =1`$, which is of order the horizon mass at reheating. The integral should be independent of the upper limit $`M_{max}`$ if we are to trust our results. Making the conservative approximation that all the PBH decay products are relativistic, the contribution to the density ratio of the products of PBH evaporation that has occurred up until today can be written as $$\mathrm{\Omega }_{pbh,evap}(t_0)=\frac{T_{RH}}{T_{eq}}\left(\frac{T_0}{T_{eq}}\right)^{1/4}I_0^{M_{eq}}(3/2)+\frac{T_{RH}}{T_{eq}}I_{M_{eq}}^M_{}(2).$$ (21) In Fig. 1, we show the upper limit on $`n`$ as a function of $`T_{RH}`$ coming from bounding $`\mathrm{\Omega }_{pbh,evap}(t_0)+\mathrm{\Omega }_{pbh}(t_0)<1`$ (solid line). For larger values of the reheat temperature we get a more stringent bound by imposing the constraint $`\mathrm{\Omega }_{pbh}(t_{RH})1`$, simply because as $`T_{RH}`$ is increased more of the black holes will have decayed at an earlier epoch. Given that most of the energy of the decay products resides in radiation, the effect on $`\mathrm{\Omega }_{pbh}(t_0)`$ is diminished due to the redshifting. This new bound is given by the dotted line in Fig. 1. If we assume that the PBH leaves behind a Planck mass remnant, then we have additional bounds which become important for very large reheat temperature. The best bound in this case comes from calculating $`\mathrm{\Omega }_{remnant}(t_{eq})`$ which is given by $$\mathrm{\Omega }_{remnant}(t_{eq})=\frac{T_{RH}}{T_{eq}}\frac{M_{Pl}}{M_{}}I_0^{M_{eq}}(1).$$ (22) The bound in this case is shown as the dashed line in Fig. 1, and is the best bound at the largest values of the reheat temperature. ## IV Bounds from Diffuse Gamma-Rays For a certain range of $`T_{RH}`$ we can improve our bounds on $`n`$ from diffuse gamma-ray constraints. 
The present day flux is determined by convoluting the initial mass function with the black hole emission spectrum $$f(x)=\frac{1}{2\pi }\frac{\mathrm{\Gamma }_s(x)}{\mathrm{exp}(8\pi x)(1)^{2s}},$$ (23) where $`s`$ is the spin of the emitted particle, $`x=\omega (t)M(t)/M_{Pl}^2`$, $`\omega (t)`$ is the frequency and $`M(t)`$ is the PBH mass at the time $`t`$ of emission. $`\mathrm{\Gamma }_s(x)`$ is the absorption coefficient and may be written as $`[\omega (t)]^2\sigma _s/\pi `$. $`\sigma _s`$ is the absorption cross section and is calculated using the principle of detailed balance. The values for $`\sigma _s`$ were calculated some time ago by Page. Let us consider how $`\sigma _s`$ behaves for massless particles. At large values of $`x`$, $`\sigma _s`$ performs small oscillations about the geometric optics limit of $`\sigma _g=27\pi M^2/M_{Pl}^4`$. As $`x`$ approaches zero, $`\sigma _s`$ goes to zero for $`s=1/2,1`$ but goes to a constant value for $`s=0`$. We will use the approximation $$\mathrm{\Gamma }_s(x)=(56.7,20.4)x^2/\pi \text{for}s=(\frac{1}{2},1).$$ (24) This approximation is poor at low energies, as it is in error by $`50\%`$ at $`x=0.05`$. However, as we shall see, the contribution to the spectrum of interest is greatly peaked at $`x0.2`$. The case of strongly interacting particles is complicated by the hadronization process. There is a large contribution coming from pion decay, however, given the extreme sensitivity of the flux to the value $`n`$, the effect on the bound is negligible. It has been recently suggested that the self-interactions of the emitted particles will induce a photosphere, thus distorting the spectrum considerably from Eq. (23). It was suggested that two types of photospheres should form. A QCD photosphereIn the case of QCD what is meant by “photosphere” is a quark-gluon cloud. generated by parton-parton interactions as well as a QED photosphere generated by electron-positron-photon interactions. This is idea has been tested more quantitatively via a numerical solution of the Boltzmann equation . Again, while this effect may change the spectrum, especially at higher energies, it is irrelevant as far as the bound on the spectral index is concerned. The flux measured today is given by $$\frac{dJ}{d\omega _0}=\frac{1}{4\pi }_{t_i}^{t_0}𝑑t(1+z)𝑑M_{bh}\frac{dn_{bh}}{dM_{bh}}(t)f(x),$$ (25) where $`dn_{bh}/dM_{bh}`$ is evaluated at time $`t`$ using Eq. (12), $`t_0`$ is the age of the universe, $`t_i`$ is the time of last scatter, and $`f(x)`$ is the instantaneous emission spectrum given above with $$x=\frac{\omega (t)M(t)}{M_{Pl}^2}=\frac{\omega _0(1+z)}{M_{Pl}^2}M_{}\left[\left(\frac{M_{bh}}{M_{}}\right)^3\frac{t}{t_0}\right]^{1/3}.$$ (26) The integral over $`t`$ is cut off at early times, since at redshifts above $`z=z_0700`$ the optical depth will be larger than unity due to either pair production off of matter or ionized matter . Those processes will degrade the energy below the window we are interested in. This integral may be rewritten in the more illuminating form $$\frac{dJ}{d\omega _0}=\frac{1}{4\pi }\frac{M_{Pl}^6}{(\omega _0M_{})^3}_0^{z_0}\frac{dz}{H_0(1+z)^{5/2}}_0^{\mathrm{}}𝑑xx^2\alpha ^2f(x)\frac{dn_{bh}(x,z)}{dM_{bh}},$$ (27) where $$\alpha =\frac{M(t)}{M_{}}=\left\{(1+z)^{3/2}+\frac{x^3M_{Pl}^6}{[(1+z)\omega _0M_{}]^3}\right\}^{1/3}.$$ (28) Let us study the qualitative behavior of the above integral as a function of $`\omega `$ at fixed $`n`$ and $`T_{RH}`$. The $`x`$ integration is controlled by the Boltzmann factor in $`f(x)`$. 
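Before examining the qualitative behavior, a small Python sketch of the instantaneous emission spectrum (23)-(24) may be useful. The $`\pm 1`$ in the denominator below is the usual statistics factor (minus for bosons, plus for fermions), the $`\mathrm{\Gamma }_s`$ coefficients are the approximations of Eq. (24), and the grid of $`x`$ values is arbitrary.

```python
import numpy as np

def Gamma_s(x, s):
    """Approximate absorption coefficient of Eq. (24) for massless s = 1/2 and s = 1 particles."""
    coeff = {0.5: 56.7, 1.0: 20.4}[s]
    return coeff * x**2 / np.pi

def f_emission(x, s):
    """Instantaneous Hawking spectrum, Eq. (23); x = omega*M/M_Pl^2, so 8*pi*x = omega/T_BH."""
    stat = -1.0 if float(s).is_integer() else 1.0   # boson vs fermion statistics, i.e. -(-1)^(2s)
    return Gamma_s(x, s) / (2.0 * np.pi * (np.exp(8.0 * np.pi * x) + stat))

if __name__ == "__main__":
    x = np.linspace(1e-3, 1.0, 100000)
    for s, label in [(1.0, "photons (s=1)"), (0.5, "s=1/2 fermions")]:
        spec = f_emission(x, s)
        print(f"{label}: number spectrum peaks at x ~ {x[np.argmax(spec)]:.3f};"
              f"  f(x=0.5)/f(peak) = {spec[np.abs(x - 0.5).argmin()] / spec.max():.1e}")
    # crude stand-in for the x weighting of Eq. (27), ignoring the mild x dependence of alpha and dn/dM
    weighted = x**2 * f_emission(x, 1.0)
    print(f"x^2-weighted photon spectrum peaks at x ~ {x[np.argmax(weighted)]:.2f}")
```

The $`x^2`$-weighted photon spectrum already peaks near $`x\simeq 0.15`$-0.2, anticipating the estimate made in the next paragraph.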
Indeed, a little manipulation shows the the integrand is highly peaked near $`x0.2`$. Furthermore, the $`\omega `$ dependence in $`\alpha ^2`$ is almost completely canceled by the $`\omega `$ dependence in the factor $`M_{bh}^1y^{1/\rho }\alpha ^{(1/\rho 1)}`$ in $`dn_{bh}/dM_{bh}`$. Thus the $`\omega `$ dependent part of the integrand may be written as $$\frac{dJ}{d\omega _0}\omega _0^3\mathrm{exp}\left[\frac{\left(\delta _c+aT_{RH}^{2/\rho }\alpha ^{1/\rho }\right)^2}{2\sigma ^2(M_H)}\right],$$ (29) where $`a^\rho =M_{}g_{}^{1/2}/(0.301kM_{Pl}^3)`$, and the only $`\omega `$ dependence in the exponential is through $`\alpha `$. If for now we assume that the dominant contribution the higher energy photons comes from recent decays ($`z0`$), and most of the support for the $`x`$ integral comes with $`x0.2`$, then $`\alpha `$ simplifies to $$\alpha \left[1+\left(\frac{0.2M_{Pl}^2}{w_0M_{}}\right)^3\right]^{1/3}.$$ (30) As $`\omega _0`$ gets larger than $`0.2M_{Pl}^2/M_{}100\mathrm{MeV}`$, $`\alpha `$ becomes independent of $`\omega _0`$ and therefore the flux behaves as $`dJ/d\omega _0\omega _0^3`$. For lower energies we can make the approximation $`\alpha 0.2M_{Pl}^2/(w_0M_{})`$, and we would expect that at some point the $`\omega `$ dependence in the exponential will begin to dominate such that the flux should begin to rapidly decrease as we go to lower photon energies. The energy at which the flux turns over is determined by the competition between the two terms $`\delta _c`$ and $`aT_{RH}^{2/\rho }\alpha ^{1/\rho }`$ in the exponential, which is set by the reheat temperature. As we lower the reheat temperature the position of the kink moves to lower energies. If the reheat temperature is higher than $`T_{RH}10^9\mathrm{GeV}`$, however, the peak will stay around 100 MeV, since at these temperatures the second term in the exponential will always dominate. Indeed, we expect the position of the fall off to be near $$\omega _{\mathrm{kink}}\mathrm{min}(100\mathrm{MeV},\frac{0.2g_{}^{1/2}T_{RH}^2}{0.301kM_{Pl}\delta _c^\rho }).$$ (31) Figure 2 shows the flux for fixed $`n`$ for a few different reheat temperatures. The position of the kink is well tracked by Eq. (31). Note however that the flux does not fall off exponentially at energies below the kink. This is because as we go to lower energies we pick up more of a contribution from higher redshifts. It is interesting to contrast this behavior with the flux calculated assuming that the mass of a PBH formed at a given epoch is proportional to the horizon mass at the time of collapse. In Refs. the authors calculated an initial mass function following the Press-Schecter formalism, summing over all epochs and assuming the relation $`M_{bh}\gamma ^{3/2}M_H`$ at each epoch. They found $$\frac{dn_{bh}}{dM_{bh}}=\frac{n+3}{4}\sqrt{\frac{2}{\pi }}\gamma ^{7/4}\rho _iM_{H_i}^{1/2}M_{bh}^{5/2}\sigma _H^1\mathrm{exp}\left(\frac{\gamma ^2}{2\sigma _H^2}\right),$$ (32) where $`\rho _i`$ and $`M_{H_i}`$ are the energy density and horizon mass at $`T_{RH}`$ and $$\sigma _H=\sigma _0\left(\frac{M_{bh}}{\gamma ^{3/2}M_0}\right)^{(1n)/4}.$$ (33) This result reduces to the initial mass function first computed by Page for the Harrison-Zel’dovich spectrum with $`n=1`$ and $`dn_{bh}/dM_{bh}M_{bh}^{5/2}`$. The $`\omega _0`$ dependence of this result arises only through $`M(t)`$ given by Eq. (28). Using this initial mass distribution in Eq. 
(25), we expect, as in the previous case, $`dJ/d\omega _0\omega _0^3`$ for larger energies, and exponential decay into the lower energies (which will again be mollified from photons descending from higher redshifts). However, for this case the position of the kink will be fixed at around 100 MeV, independent of the reheat temperature. Let us compare the above prediction with the recent COMPTEL and EGRET data. The EGRET collaboration found that the flux in the energy range $`30\mathrm{MeV}100\mathrm{GeV}`$ is well fit by the single power law $$\frac{dJ}{d\omega _0}=7.32\pm 0.34\times 10^9\left(\frac{\omega _0}{451\mathrm{MeV}}\right)^{2.10\pm 0.03}(\mathrm{cm}^2\mathrm{sec}\mathrm{sr}\mathrm{MeV})^1,$$ (34) while the COMPTEL data in the range $`0.8`$$`30\mathrm{MeV}`$ can be fit to the power law $$\frac{dJ}{d\omega _0}=6.40\times 10^3\left(\frac{\omega _0}{1\mathrm{MeV}}\right)^{2.38}(\mathrm{cm}^2\mathrm{sec}\mathrm{sr}\mathrm{MeV})^1.$$ (35) Below $`0.8\mathrm{MeV}`$ there is large increase in the measured flux. Thus, the best bounds are found by comparing the measured flux to predicted flux at $`\omega _{\mathrm{kink}}`$ or at $`0.8\mathrm{MeV}`$, whichever is larger. Because of the rapid rise of the predicted spectrum relative to the measured spectrum, a change in the kink position can change the bound on $`n`$ on the order of $`0.01`$, which we consider within the accuracy of our calculation. The bounds on $`n`$ from the diffuse gamma-rays are specified by the dot-dashed line in Fig. 1. The bound terminates when all but the exponential tail of the PBHs decay prior to a redshift of $`700`$, since the optical depth at such early times exceeds unity, as discussed above. We may compare our results to those derived by Yokoyama ,<sup>§</sup><sup>§</sup>§After this work was completed, we became aware of Ref. that also utilized the critical collapse initial mass function to derive bounds on the PBH mass density by requiring that LSPs (in supersymmetric models) not be overproduced. where the author placed bounds on mass fraction of PBHs at $`t_{RH}`$, $`\beta (t_{RH})=\mathrm{\Omega }(t_{RH})`$, using the initial mass function, Eq. (10). He found that the bounds on $`\beta `$ did not differ significantly from the previous bounds derived using the standard initial mass functions, except for the bounds coming from diffuse gamma-rays. In the latter case, applicable for horizon masses in the range $`M_H5\times 10^{14}`$ g, he found more stringent constraints. Our bounds on $`n`$, translated into bounds on $`\beta `$, agree with his bounds coming from energy density constraints except for the case of larger reheat temperature, since we included the proper scaling of the energy density of photons emitted after PBH decay. Thus our bounds on $`\beta `$ can differ by many orders of magnitude. Our bounds coming from diffuse gamma-rays can also differ by orders of magnitude, but in this case it is for a different reason. Yokoyama determined his bound on $`\beta `$ by imposing the constraint on $`\mathrm{\Omega }_{pbh}(t_0)`$ derived in Ref. . However, when we change the initial mass function we also change the diffuse gamma-ray spectrum significantly in both shape and normalization, as discussed above. Thus, it is inappropriate to directly take the bounds from Ref. and apply them to the case with the new initial mass function, Eq. (10). We find that our bounds on $`\beta `$ from diffuse gamma-rays are more stringent than those determined in Ref. 
## V Robustness of the Bounds

Let us consider the robustness of the bounds. We might worry that the bounds are highly sensitive to the choice of parameters, given the sharpness of the initial mass function. Indeed, in the case where it is assumed that the PBH mass is given by Eq. (2), the bounds on $`n`$ are exceptionally sensitive to the exactness of this relation. This is clear from the exponential factor in Eq. (32). Given the initial mass function calculated by Jedamzik and Niemeyer, Eq. (10), we must check the sensitivity to the parameters $`\delta _c,k`$ and $`\sigma _0`$. In Ref. , the authors tested the scaling relation (3) using several different initial density perturbation shapes. They found $`(\delta _c=0.70,k=11.9)`$, $`(\delta _c=0.67,k=2.85)`$, $`(\delta _c=0.71,k=2.39)`$, for Gaussian, Mexican Hat and fourth order polynomial fluctuations, respectively. We varied the value of $`\delta _c`$ between $`0.60`$ and $`0.80`$ and found that the bounds changed by at most $`0.01`$, with the sensitivity to the variation being maximal at the smaller values of $`T_{RH}`$. Given that the initial mass function is peaked at a mass smaller than $`kM_H`$, the sensitivity is increased at smaller $`T_{RH}`$ because $`\sigma `$ is a decreasing function of $`T_{RH}`$. Variations in $`k`$ are equivalent to a scaling in $`T_{RH}`$. Thus, varying $`k`$ by an order of magnitude has essentially no effect on the bound. Lastly, let us consider the sensitivity to the parameter $`\sigma _0`$. The value we used for $`\sigma _0`$ in Eq. (9) was calculated in Ref. using the result
$$\delta _0=1.91\times 10^{-5}\frac{\mathrm{exp}[1.01(1-n)]}{\sqrt{1+0.75r}},$$ (36)
where $`r`$ is a measure of the size of the tensor perturbations. (It should be emphasized that this result assumed the spectra can be approximated as a power law over the range of $`k`$ that COBE probes. We are then making the further assumption that $`n`$ is constant down to the mass scales of relevance for PBHs.) The $`1\sigma `$ observational error is $`7\%`$. The fit, Eq. (36), is good to within $`1.5\%`$ everywhere within the region $`0.7\le n\le 1.3`$ and $`0\le r\le 2`$. The authors of Ref. quote a $`9\%`$ uncertainty in Eq. (36) at $`1\sigma `$, once uncertainties in the systematics and variations in the cosmological parameters are taken into account. The value in Eq. (9) was determined ignoring tensor perturbations. Given that $`\sigma _0`$ scales with $`\delta _0`$, we find that varying $`\sigma _0`$ at the $`2\sigma `$ level has no effect on our bound at the level of $`0.01`$. On the other hand, including some contribution from tensor perturbations will weaken the bound. We found that taking $`r=2`$ weakened the bound by 0.01–0.02 throughout the range in the reheat temperature. We can also consider the effects of a non-vanishing $`\mathrm{\Omega }_\mathrm{\Lambda }`$. Bunn et al.
extended their results to this case and found
$$\delta _0|_{\mathrm{\Omega }_\mathrm{\Lambda }}=1.91\times 10^{-5}\frac{\mathrm{exp}[1.01(1-n)]}{\sqrt{1+(0.75-0.13\mathrm{\Omega }_\mathrm{\Lambda }^2)r}}\mathrm{\Omega }_0^{-0.80-0.05\mathrm{log}\mathrm{\Omega }_0}\left(1+0.18(n-1)-0.03r\mathrm{\Omega }_\mathrm{\Lambda }\right).$$ (37)
If $`0\le r\le 2`$, we can express $`\delta _0|_{\mathrm{\Omega }_\mathrm{\Lambda }}`$ extracted assuming a nonzero cosmological constant to a very good approximation by a scaling of $`\delta _0`$ extracted without a cosmological constant,
$$\delta _0|_{\mathrm{\Omega }_\mathrm{\Lambda }}\simeq \mathrm{\Omega }_0^{-0.80-0.05\mathrm{log}\mathrm{\Omega }_0}\delta _0,$$ (38)
(where $`\mathrm{\Omega }_0+\mathrm{\Omega }_\mathrm{\Lambda }=1`$), and thus $`\sigma _0`$ also acquires a correction. Consequently, the bound on $`n`$ is shifted by
$$\mathrm{\Delta }n\equiv n-n|_{\mathrm{\Omega }_\mathrm{\Lambda }}=\frac{2(-0.80-0.05\mathrm{ln}\mathrm{\Omega }_0)\mathrm{ln}\mathrm{\Omega }_0}{42.9+\mathrm{ln}(T_{RH}/10^8\mathrm{GeV})}.$$ (39)
In Fig. 3 we show the above correction as a function of $`T_{RH}`$ for several choices of $`\mathrm{\Omega }_\mathrm{\Lambda }`$. If we take $`\mathrm{\Omega }_\mathrm{\Lambda }\approx 0.7`$, as recent observational data suggest, our bounds on $`n`$ strengthen by about 0.03–0.06 for $`T_{RH}`$ between $`10^{16}`$ and $`10^3`$ GeV respectively, as shown in Fig. 4. We can also calculate bounds on the mass variance at reheating, which are essentially model-independent. If the relation Eq. (7) is violated by, for example, a power spectrum with a spectral index that depends on scale, then our previous bounds on $`n`$ cannot be applied. However, given an inflationary model one could in principle calculate the power spectrum, normalize to the COBE data at our present epoch, and then match onto the mass variance at reheating. In Fig. 5 we show the bounds on the mass variance from both the density bounds as well as the bounds from the diffuse gamma-ray observations. Notice that the diffuse gamma-ray bounds on $`\sigma (M_H)`$ are a significant improvement over the density bounds in the applicable range of reheating temperatures. Finally, we must address the issue of non-Gaussianity. It has been pointed out that skewness could very well be important for PBH formation, given that its effects are amplified in the tail of the distribution $`P[\delta ]`$, which contributes to PBH formation. In general, the amount of non-Gaussianity expected is highly model dependent. Bullock and Primack investigated several inflationary models to study the amount of non-Gaussianity one would expect at larger values of $`\delta `$. They calculated $`P[\delta ]`$ for three toy models, and found in one case no deviation from Gaussianity and in the other two a significant suppression in the probability of large perturbations. However, as was pointed out in Ref. , while these effects can drastically affect the PBH mass fraction $`\beta `$, we expect that, even in the most extreme case considered in Ref. , the effect on the bound on $`n`$ is only at the level of $`0.05`$. For hybrid inflation, where the approximation that $`n`$ is constant actually holds, the perturbations are in fact Gaussian due to the linear dynamics of the inflaton field . Therefore, these bounds should be applied to specific models, with the roughness of the bound determined by the deviations away from Gaussianity.
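As a check on the size of this correction, the short sketch below (ours, for illustration only) implements Eq. (36) and Eq. (39) directly, with the signs as reconstructed above; with $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ it reproduces the quoted strengthening of the bound by about 0.03–0.06 over the stated range of reheat temperatures.

```python
# Hedged numerical check of Eqs. (36) and (39); not the authors' code.
import numpy as np

def delta_0(n, r=0.0):
    """Eq. (36): COBE-normalized amplitude for spectral index n and tensor ratio r."""
    return 1.91e-5 * np.exp(1.01 * (1.0 - n)) / np.sqrt(1.0 + 0.75 * r)

def delta_n_shift(T_RH_gev, omega_lambda):
    """Eq. (39): shift of the bound on n for a flat universe, Omega_0 = 1 - Omega_Lambda."""
    omega_0 = 1.0 - omega_lambda
    ln_o = np.log(omega_0)
    return 2.0 * (-0.80 - 0.05 * ln_o) * ln_o / (42.9 + np.log(T_RH_gev / 1.0e8))

if __name__ == "__main__":
    print("delta_0(n=1) =", delta_0(1.0))
    for T in (1e3, 1e8, 1e16):
        print(f"T_RH = {T:.0e} GeV, Omega_Lambda = 0.7 -> Delta n = {delta_n_shift(T, 0.7):+.3f}")
```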
## VI Conclusions

We have calculated the density of primordial black holes using the near-critical collapse mass function that results in a spectrum of PBH masses for a given horizon mass. The normalization of the PBH mass spectrum was determined using the COBE anisotropy data, which allowed us to set bounds on the spectral index $`n`$ as a function of the reheat temperature. We find that restricting the density of PBHs to be less than the critical density corresponds to the restriction that the spectral index $`n`$ be less than about $`1.45`$ to $`1.2`$, throughout the range of reheating temperatures resulting after inflation, $`10^3`$ to $`10^{16}`$ GeV respectively. (The precise limits are shown in Fig. 1.) For a smaller range of reheating temperatures, between about $`10^7`$ and $`10^{10}`$ GeV, significant PBH evaporation occurs when the optical depth of the universe is less than one. Hence, we found a slightly stronger bound on the spectral index by restricting the cosmological PBH evaporation into photons to be less than the present-day observed diffuse gamma-ray flux. Due to the extreme sensitivity of the PBH mass density to the spectral index, effects such as the indirect photon flux from PBH evaporation into quarks and gluons which fragment into pions, or the formation of a QCD photosphere, are completely negligible when calculating the bound on $`n`$. We should also remark that slightly stronger bounds on $`n`$ for larger reheating temperatures $`\gtrsim 10^{10}`$ GeV are expected from PBHs that decay during the epoch of nucleosynthesis. If the universe is vacuum-energy dominated, there are corrections to our bounds on $`n`$ that can be substantial. We calculated these corrections for a range of $`\mathrm{\Omega }_\mathrm{\Lambda }`$ and applied them to our bounds on $`n`$ for the case of $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$. The improvement is apparent by contrasting Fig. 1 with Fig. 4. Finally, we calculated bounds on the mass variance at reheating. These bounds could in principle be used to constrain any given inflationary model, once the power spectrum is calculated.

###### Acknowledgements.

This work was supported in part by the Department of Energy under grant number DOE-ER-40682-143. We thank Rich Holman and Jane MacGibbon for useful discussions. We also thank Andrew Liddle for useful discussions and comments on the manuscript.
# HST Studies of the WLM Galaxy. I. The Age and Metallicity of the Globular Cluster 1footnote 11footnote 1 Based on observations with the NASA/ESA Hubble Space Telescope obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. ## 1 Introduction The galaxy known as WLM is a low-luminosity, dwarf irregular galaxy in the Local Group. A history of its discovery and early study was given by Sandage & Carlson (1985). Photographic surface photometry of the galaxy was published by Ables & Ables (1977). Its stellar population has been investigated from ground-based observations by Ferraro et al (1989) and by Minniti & Zijlstra (1997). The former showed that the main body of the galaxy consists of a young population, which dominates the light, while the latter added the fact that there appears to be a very old population in its faint outer regions. Cepheid variables were detected by Sandage & Carlson (1985), who derived its distance, and were reanalyzed by Feast & Walker (1987) and by Lee et al. (1992). The latter paper used $`I`$ photometry of the Cepheids and the RGB (red giant branch) distance criterion to conclude that the distance modulus for WLM is 24.87 $`\pm `$ 0.08. The extinction determined by Feast & Walker (1987) is $`A_B`$ = 0.1. Humason et al. (1956), when measuring the radial velocity of WLM, noticed a bright object next to it that had the appearance of a globular cluster. Its radial velocity was the same as that of WLM, indicating membership. Ables & Ables (1977) found that the cluster’s colors were like those of a globular cluster, and Sandage & Carlson (1985) confirmed this. Its total luminosity is unusual for its being the sole globular of a galaxy. Sandage & Carlson (1985) quote a magnitude of $`V`$ = 16.06, indicating an absolute magnitude of $`M_V`$ = -8.8. This can be compared to the mean absolute magnitude of globulars in galaxies, which is $`M_V=7.1\pm 0.43`$ (Harris 1991). The cluster, though unusually bright, has only a small fraction of the $`V`$ luminosity of the galaxy, which is 5.2 magnitudes brighter in $`V`$. One could ask the question of whether there are other massive clusters in the galaxy, such as luminous blue clusters similar to those in the Magellanic Clouds. Minniti & Zijlstra (1997), using the NTT and thus having a wider field than ours, searched for other globular clusters and found none. However, the central area of the galaxy has one very young, luminous cluster, designated C3 in Hodge, Skelton and Ashizawa (1999). This object is the nuclear cluster of one of the brightest HII regions (Hodge & Miller 1995). There do not appear to be any large intermediate age clusters, such as those in the Magellanic Clouds or that recently identified spectroscopically in the irregular galaxy NGC 6822 by Cohen & Blakeslee (1998) . No other Local Group irregular galaxy fainter than $`M_V`$ = -16 contains a globular cluster. The elliptical dwarf galaxies NGC 147 and NGC 185 (0.8 and 1.3 absolute magnitudes brighter than WLM, respectively) do have a few globular clusters each and Fornax (1.7 absolute magnitudes fainter) has five, which makes it quite anomalous, even for an elliptical galaxy (see Harris, 1991, for references). Another comparison can be made using the specific frequency parameter, as defined and discussed by Harris (1991). 
The value of the specific frequency calculated for WLM is 7.4, which can be compared to Harris’ value calculated for late-type galaxies, which is $`0.5\pm 0.2`$. The highest average specific frequency is found for nucleated dwarf elliptical galaxies by Miller et al (1998), which is $`6.5\pm 1.2`$, while non-nucleated dwarf elliptical galaxies have an average of $`3.1\pm 0.5`$. These values are similar to those found by Durrell et al (1996), implying that the specific frequency for WLM is comparable to that for dwarf elliptical galaxies but possibly higher than that for other late-type galaxies. Because the WLM cluster as a globular in an irregular dwarf galaxy is unique, it may represent an unusual opportunity to investigate the question of whether Local Group dwarf irregulars share the early history of our Galaxy and other more luminous Group members, which formed their massive clusters some 15 Gyr ago, or whether they formed later, as the early ideas about the globular clusters of the Magellanic Clouds seemed to indicate. Of course, we now know that the LMC has several true globular clusters that are essentially identical in age to the old halo globulars of our Galaxy (Olsen et al. 1998, Johnson et al. 1998), so the evidence suggesting a delayed formation now seems to come only from the SMC. In any case, WLM gives us a rare opportunity to find the oldest cluster (and probably the oldest stars) in a more distant and intrinsically much less luminous star-forming galaxy in the Local Group. ## 2 Data and Reduction ### 2.1 Observations As part of a Cycle 6 HST GO program, we obtained four images of the WLM globular cluster on 26 September, 1998. There were two exposures taken with the F814W filter of 2700 seconds and two with the F555W filter, one each of 2700 seconds and 2600 seconds. The globular cluster was centered on the PC chip and the orientation of the camera was such that the WF chips lay approximately along the galaxy’s minor axis, providing a representative sample of the WLM field stars to allow us to separate cluster stars reliably. ### 2.2 Reductions With two images of equal time per filter, cosmic rays were cleaned with an algorithm nearly identical to that used by the IRAF task CRREJ. The two images were compared at each pixel, with the higher value thrown out if it exceeded 2.5 sigma of the average. The cleaned, combined F555W image is shown in Figure 4. Photometry was then carried out using a program specifically designed to reduce undersampled WFPC2 data. The first step was to build a library of synthetic point spread functions (PSFs), for which Tiny Tim 4.0 (Krist 1995) was used. PSFs were calculated at 49 positions on each chip in F555W and F814W, subsampled at 20 per pixel in the WF chips and 10 in the PC chip. The subsampled PSFs were adjusted for charge diffusion and estimated subpixel QE variations, and combined for various locations of a star’s center within a pixel. For example, the library would contain a PSF for the case of a star centered in the middle of a pixel, as well as for a star centered on the edge of a pixel. In all, a 10x10 grid of possible centerings was made for the WF chips and a 5x5 grid for the PC chip. This served as the PSF library for the photometry. The photometry was run with an iterative fit that located stars and then found the best combinations of stellar profiles to match the image. 
Rather than using a centroid to determine which PSF to use for a star, a fit was attempted with each PSF centered near the star’s position and the best-fitting PSF was chosen. This method helped avoid the problem of centering on an undersampled image. Residual cosmic rays and other non-stellar images were removed through a chi-squared cut over the final photometry list. The PSF fit was normalized to give the total number of counts from the star falling within a 0.5 arcsec radius of the center. This count rate was then converted into magnitudes as described in Holtzman et al (1995), using the CTE correction, geometric corrections, and transformation. For the color-magnitude diagram (CMD) and luminosity function analyses below, roughly the central 20% of the image was analyzed to maximize the signal from the globular while minimizing the background star contamination. In that region, the effect of background stars is negligible. The CMD from this method, showing all stars observed, is shown in Figure 4a, with the same data reduced with DAOPHOT shown in Figure 4b. Error bars are shown corresponding to the artificial star results (which account for crowding in addition to photon statistics), rather than the standard errors from the PSF fits and transformations. The photometry list is given in Table 4, which will appear in its entirety only in the electronic edition. The table contains X and Y positions of each star, with $`V`$ and $`I`$ magnitudes and uncertainties. Uncertainties given are from the PSF fitting and from the transformations. Artificial star tests were made, with each star added and analyzed one at a time to minimize additional crowding often caused by the addition of the artificial stars. The artificial stars were added to both the combined $`V`$ and $`I`$ images, so that a library of artificial star results for a given position, $`V`$ magnitude, and color could be built. In addition to completeness corrections, these data were employed in the generation of synthetic CMDs for the determination of the star formation history of the cluster. ## 3 Analysis ### 3.1 Luminosity Function The luminosity functions (LFs) of the globular cluster in $`V`$ and $`I`$ are shown in Figure 4a and 4b, respectively, binned into 0.5 magnitude bins. Theoretical luminosity functions are given as well, from interpolated Padova isochrones (Girardi et al 1996, Fagotto et al 1994) using the star formation parameters and distance obtained in the CMD analysis below. The observed and theoretical LFs are in excellent agreement. The bump in the observed $`V`$ LF between magnitudes 25 and 26, and the bump in the observed $`I`$ LF starting at magnitude 24.5 are due to the horizontal branch stars, which cannot be separated from the rest of the CMD cleanly. The only other significant deviation, the bump in the $`V`$ LF between magnitudes 22.5 and 23, is the clump of stars at the tip of the RGB, which is also observed in the CMD. This seems to be a statistical fluke, a result of the relatively small number of stars in that part of the RGB, and is similar to statistical flukes seen in Monte Carlo simulations. Thus as far as can be determined, the observed LF agrees with the theoretical expectations. ### 3.2 Color-Magnitude Diagram For the CMD analysis, a cleaner CMD was achieved by omitting all stars with PSF fits worse than a chi-squared value of 3. The observed $`V,VI`$ CMD is shown in Figure 4a, and was analyzed as described in Dolphin (1997). 
Interpolated Padova isochrones (Girardi et al 1996, Fagotto et al 1994) were used to generate the synthetic CMDs, with photometric errors and incompleteness simulated by application of artificial star results to the isochrones. No assumptions were made regarding the star formation history, metallicity, distance, or extinction to the cluster, and a fit was attempted with all of these parameters free. The best fits that returned single-population star formation histories were then combined with a weighted average to determine the best parameters of star formation. Uncertainties were derived by taking a standard deviation of the parameters from the fits, and thus include the fitting errors and uncertainties resulting from an age-metallicity-distance “degeneracy.” Systematic errors due to the particular choice of evolutionary models are naturally present, but are not accounted for in the uncertainties. The following parameters were obtained: * Age: 14.8 $`\pm `$ 0.6 Gyr * Fe/H: -1.52 $`\pm `$ 0.08 * Distance modulus: 24.73 $`\pm `$ 0.07 * Av: 0.07 $`\pm `$ 0.06 A synthetic CMD constructed from these parameters is shown in Figure 4b, using the artificial star data to mimic photometric errors, completeness, and blending in the data. The poorly reproduced horizontal branch is a result of the isochrones we used, but the giant branch was reproduced well, with the proper shape and position. ### 3.3 Structure Profiles were calculated in bins of 10 pixels (0.45 arcsec) in both the $`V`$ and $`I`$ images, and are shown in Figure 4, corrected for incompleteness (both as a function of magnitude and position). The cutoff magnitudes of 27 in $`V`$ and 26 in $`I`$ were chosen to minimize the corrections required due to incompleteness. Additionally, the central bin (0-10 pixels) was omitted because of extreme crowding problems. The remaining bins were fit to King models with a least-squares fit. The best parameters for the King models (assuming a distance modulus of 24.73) are as follows (shown by the solid lines in Figure 4). * core radius: 1.09 $`\pm `$ 0.14 arcsec (4.6 $`\pm `$ 0.6 pc) * tidal radius: 31 $`\pm `$ 15 arcsec (130 $`\pm `$ 60 pc) * core density: 59 $`\pm `$ 8 stars/arcsec<sup>2</sup> (3.2 $`\pm `$ 0.4 stars/pc<sup>2</sup>) $`V`$, 44 $`\pm `$ 6 stars/arcsec<sup>2</sup> (2.4 $`\pm `$ 0.3 stars/pc<sup>2</sup>) $`I`$ * background density: 0.77 $`\pm `$ 0.12 stars/arcsec<sup>2</sup> (0.042 $`\pm `$ 0.007 stars/pc<sup>2</sup>) $`V`$, 0.77 $`\pm `$ 0.12 stars/arcsec<sup>2</sup> (0.042 $`\pm `$ 0.007 stars/pc<sup>2</sup>) $`I`$ For a distance modulus of 24.87 (Lee et al. 1992), the corresponding sizes would be 7% larger. For comparison, Trager et al. (1993) find that 2/3 of Milky Way clusters have core radii between approximately 5 and 60 pc. ## 4 Conclusions Our analysis shows that the WLM globular cluster is virtually indistinguishable from a halo globular in our Galaxy. We find that a formal fit to theoretical isochrones indicates an age of 14.8 $`\pm `$ 0.6 Gyr, which agrees with ages currently being measured for Galactic globulars (e.g., vandenBerg 1998) and a metallicity of \[Fe/H\] of -1.52 $`\pm `$ 0.08, a typical globular cluster value that is similar to that obtained for the outer field giant stars along the minor axis of WLM by Minniti and Zijlstra (1997) and by us (Dolphin 1999). The distance modulus for the cluster, derived independently from the parent galaxy, is 24.73 $`\pm `$ 0.07, which agrees within the errors with that derived from Cepheids and the RGB (Lee et al. 
1992) of the galaxy. In structure the globular is elongated in outline, with a mean radial profile that fits a King (1962) model within the observational uncertainties. We derive a core radius of 1.09 $`\pm `$ 0.14 arcsec and a tidal radius of 31 $`\pm `$ 15 arcsec, which translate to 4.6 $`\pm `$ 0.6 pc and 130 $`\pm `$ 60 pc, respectively. The core radius is very similar to that found for massive globulars in our Galaxy (Trager et al. 1993), while the tidal radius, though quite uncertain, is rather large by comparison. The former result indicates that formation conditions in this galaxy near its conception were such that a massive, highly concentrated star cluster could form, despite the very small amount of the total mass of material available. The latter result is probably an indication that the tidal force of the galaxy on the cluster is small. The presence of a normal, massive globular cluster in this dwarf irregular galaxy may be an useful piece of evidence regarding the early history of star, cluster and galaxy formation. Recent progress in the field of globular cluster formation has resulted from both observational and theoretical studies (Searle & Zinn 1978, Harris & Pudritz 1994, McLaughlin & Pudritz 1994, Durrell et al 1996, Miller et al 1998, and McLaughin 1999). Although the uncertainties from a single data point are sufficiently large to discourage quantitative analysis, the presence of a globular cluster in WLM would constrain formation models that predict $`1`$ cluster in such a galaxy. We are indebted to the excellent staff of the Space Telescope Science Institute for obtaining these data and to NASA for support of the analysis through grant GO-06813.
# Self-similarity in a system with a short-time delayed feedback

## I Introduction

Optical feedback systems governed by delay differential equations (DDEs) have attracted much attention from both the applied and the fundamental points of view \[1-16\]. Generally, the delay-differential system related to an optical bistable or hybrid optical bistable device is described by
$$\tau ^{\prime }\dot{x}(t)=-x(t)+f(x(t-t_R),\mu ),$$ (1)
where $`x(t)`$ is the dimensionless output of the system at time $`t`$, $`t_R`$ is the time delay of the feedback loop, $`\tau ^{\prime }`$ is the response time of the nonlinear medium, and the parameter $`\mu `$ is proportional to the intensity of the incident light. In Eq. (1), $`f(x,\mu )`$ is a nonlinear function of $`x`$ characterizing the system, e.g. $`f(x,\mu )=\mu \pi [1-\zeta \mathrm{cos}(x-x_B)]`$ for the Ikeda model , $`f(x,\mu )=\pi [A-\mu \mathrm{sin}^2(x-x_0)]`$ for the Vallée model , and $`f(x,\mu )=\mu \mathrm{sin}^2(x-x_0)`$ for the sine-square model . The understanding of Eq. (1) up to now can be summarized as follows. The first experimental observation of period-doubling bifurcations and chaos in a hybrid bistable device was made by Gibbs et al. following a prediction by Ikeda et al. . The solution of the system, which appears after a Hopf bifurcation, evolves through a period-doubling cascade $`T_F\to 2T_F\to \mathrm{}\to 2^NT_F`$ as one parameter is varied. These solutions are called $`2^N`$ periodic, and the cascade accumulates at the Feigenbaum point. These solutions were named fundamental solutions by Ikeda et al. ; we adopt the same terminology in this paper. Later the two groups found that higher-harmonic oscillation states appear successively in the course of the transition to developed chaos in the long-time delay case (i.e. when the delay time is much longer than the response time). These solutions coexist and each follows a period-doubling cascade. The oscillation periods of these harmonic states are given by $`T_F/n`$, where $`n`$ is an odd integer and $`T_F`$ is the period of the fundamental solution. As for the dimension of the chaotic attractor, the DDE exhibits high-dimensional chaos . Ikeda and Matsumoto have given an estimate of the Lyapunov dimension of the attractor for the Ikeda model, and it ranges approximately from $`2`$ to $`13`$ as a bifurcation parameter is varied. Recently, some researchers demonstrated that the behavior of quasiperiodicity follows the hierarchy of the Farey tree and that the chaotic itinerancy phenomenon switches among different unstable local chaotic orbits . We reported two new types of solutions found in the moderate-time and short-time delay regimes , which are different from the fundamental solution and the odd harmonic solutions. In this paper, we study in detail the dynamical behavior of the new type of solution found in the short-time delay regime. This paper is organized as follows. In Sec. II, the numerical methods used in this paper are introduced. By using the Poincaré section technique, we can easily observe the course of bifurcation of the DDE and easily distinguish the new type of solutions from the fundamental solution. In Sec. III, we present our numerical results: in the short-time delay case, there is a new type of solutions $`S_i(i=1,2,3\mathrm{})`$ which shares many similarities with the fundamental solution; moreover, these new solutions are similar to each other. In Sec. IV we summarize our results and conclude.

## II Numerical Methods

Measuring the delay time in units of $`t_R`$, one can rewrite Eq.
(1) as
$$\tau \dot{x}(t)=-x(t)+f(x(t-1),\mu ),$$ (2)
where $`\tau =\tau ^{\prime }/t_R`$ characterizes the effect of the time delay when $`\tau ^{\prime }`$ is fixed. In this paper, we study Eq. (2) with the special feedback function
$$f(x,\mu )=1-\mu x^2.$$ (3)
This feedback function can be considered as the first nonlinear term of the Taylor expansion of the general nonlinear function $`f(x,\mu )`$ in the vicinity of a steady state. It should keep the general nonlinear properties of the DDE, as shown in Refs.. Eq. (2) can be solved numerically, and a fourth-order Adams interpolation is suitable for that. In order to trace the evolution of a DDE, one might investigate the evolution curve of the variable $`x(t)`$ vs the time $`t`$. However, it is difficult to distinguish different solutions if one only observes the $`x(t)`$–$`t`$ relation. Some of us (Zhao et al.) have offered a method in Ref. to represent the solutions of a one-variable DDE by using the Poincaré section technique. This method has proved to be a powerful tool in exploring the evolution of bifurcations of a DDE. Let us review this method briefly. Let $`x_t(\theta )\equiv x(t+\theta )`$, $`-1\le \theta \le 0`$; then $`x_{t_2}(\theta )`$ is determined uniquely by $`x_{t_1}(\theta )`$ according to Eq. (2), where $`t_1<t_2`$. We approach the section mapping as follows: choose an appropriate constant $`x_c\in R`$; integrate Eq. (2) numerically till $`x(t)>x_c`$ and $`x(t+h)<x_c`$, where $`h`$ is the length of the integration step; then proceed with a simple interpolation to get $`t_i`$ as well as $`x_{t_i}(\theta )`$ such that $`x_{t_i}(0)=x_c`$. For simplicity, we denote $`x_{t_i}(\theta )`$ as $`x_i(\theta )`$ in the following discussion. In this way we convert the flow of Eq. (2) into a mapping which maps the curve $`x_i(\theta )`$ onto the curve $`x_{i+1}(\theta )`$. We regard this curve-to-curve mapping as the Poincaré map of the DDE. A periodic solution of Eq. (2) with period $`T`$, $`x(t)=x(t+T)`$, corresponds to a periodic solution of the Poincaré map with period $`N`$, $`x_i(\theta )=x_{i+N}(\theta )`$, where $`N`$ is an integer. For practical applications, we can take $`n`$ discrete points $`x_i(\theta _j)`$ on the curve $`x_i(\theta )`$ to represent the solution, where $`\theta _j\in (-1,0)`$ and $`j=1,2,\mathrm{},n`$. Then the curve-to-curve mapping appears as a point-to-point mapping in $`R^n`$. In order to exhibit the bifurcation process, we usually need only the one-dimensional representation $`x_i(\theta _1)`$ as a function of the bifurcation parameter.

## III Results

As usually assumed, there should be no complex phenomena in the short-time delay regime, since Eq. (2) then approaches a normal one-dimensional ordinary differential equation. Our results show that this is not the case.

### A The bifurcation of the fundamental solution

Before discussing the new type of solutions, let us first review the bifurcation process of the fundamental solution. In the long-time delay case (i.e. $`\tau `$ is very small), Ikeda et al. have shown theoretically that instabilities and chaotic behaviors can occur in the system. As $`\tau `$ is fixed and $`\mu `$ is increased, a square-wave solution appears after the Hopf bifurcation of a steady state. With further increase of $`\mu `$, this square-wave solution undergoes a sequence of bifurcations, with its period doubling successively, and then becomes chaotic. We define this solution as the fundamental solution of the system and denote it by $`S_0`$.
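The section-mapping procedure of Sec. II can be summarized by the following simplified sketch (ours, for illustration only): it integrates Eqs. (2)-(3) with a fixed-step Heun scheme rather than the fourth-order Adams interpolation used in the actual computations, and the choices of $`x_c`$, the initial history, and the transient cut are arbitrary.

```python
# Illustrative sketch only: integrates tau*x'(t) = -x(t) + 1 - mu*x(t-1)^2
# and records Poincare-section curves x_i(theta) at downward crossings
# x(t_i) = x_c, as described in Sec. II.
import numpy as np

def poincare_section(mu, tau, x_c=0.0, h=0.002, t_max=300.0, t_skip=150.0, n_points=8):
    n_delay = int(round(1.0 / h))                 # steps per (unit) delay time
    hist = [0.1] * (n_delay + 1)                  # constant history on theta in [-1, 0]
    sample_idx = np.linspace(0, n_delay, n_points, dtype=int)
    crossings = []

    def rhs(x_now, x_delayed):
        return (-x_now + 1.0 - mu * x_delayed**2) / tau

    for step in range(int(t_max / h)):
        x_now, x_del = hist[-1], hist[0]          # x(t) and x(t - 1)
        k1 = rhs(x_now, x_del)
        k2 = rhs(x_now + h * k1, x_del)           # delayed value held fixed over a step
        x_new = x_now + 0.5 * h * (k1 + k2)
        if x_now > x_c and x_new < x_c and step * h > t_skip:
            # one point of the curve-to-curve map: x_i(theta_j), theta_j in [-1, 0]
            crossings.append([hist[i] for i in sample_idx])
        hist.append(x_new)
        hist.pop(0)
    return np.array(crossings)

if __name__ == "__main__":
    for mu in (0.8, 1.3, 1.9):
        sec = poincare_section(mu=mu, tau=0.5)
        n_distinct = len(np.unique(np.round(sec[:, 0], 3))) if len(sec) else 0
        print(f"mu = {mu}: {len(sec)} crossings, ~{n_distinct} distinct x_i(theta_1) values")
```

The number of distinct values of $`x_i(\theta _1)`$ collected in this way indicates the periodicity $`N`$ of the solution, which is how the bifurcation diagrams discussed below are assembled.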
When $`\tau `$ increases (i.e., the delay time decreases), the fundamental solution exhibits mirror-similar bifurcation behavior, as shown in Fig. 1(a). With the continuous increase of $`\tau `$, period-doubling bifurcations of lower and lower order take place in the course of the bifurcation. At $`\tau =1.13`$, $`S_0`$ undergoes only a period-two bifurcation as $`\mu `$ is varied. Fig. 1(b) shows a bifurcation diagram just below this value. With a further increase of $`\tau `$, we can no longer observe the period-doubling bifurcation of $`S_0`$; see Fig. 1(c)-(f). We regard the regime $`\tau >1.13`$ as the short-time delay case. In this regime, the delay time $`t_R`$ is smaller than the response time $`\tau ^{\prime }`$ because $`\tau =\tau ^{\prime }/t_R`$, and the fundamental solution exhibits only a period-one limit cycle state.

### B New type of solutions

In the short-time delay case, the fundamental solution shows no bifurcation and chaos. This does not mean that there is no chaos in this system as $`\mu `$ is varied at a fixed parameter $`\tau `$. In fact, there still exists another chaotic attractor, which is located beyond $`S_0`$ in the direction of $`\mu `$, as shown in Fig. 1(c)-(f). We denote these solutions by $`S_i(i=1,2,3,\mathrm{})`$. In the direction of $`\mu `$, every $`S_i`$ undergoes a period-doubling bifurcation cascade. Figure 2 shows the waveforms of the fundamental solution and of the $`S_i`$, each taken in its own period-one limit cycle state. From Fig. 1 and Fig. 2, we can easily find that there are not only similarities but also differences between $`S_0`$ and the $`S_i`$. Firstly, with the increase of $`\tau `$, each $`S_i`$ undergoes the same bifurcation process as $`S_0`$ does. Comparing Fig. 1(c) with Fig. 1(a), we find that the diagram of $`S_1`$ at $`\tau =3.0`$ displays the same shape as that of $`S_0`$ at $`\tau =0.80`$. In fact, at appropriate parameters, $`S_2`$, $`S_3`$ and $`S_4`$ et al. also show similar shapes. Secondly, $`S_i`$ has more and more oscillations within one period as the subscript $`i`$ increases; at the same time, $`i`$ increases with the increase of $`\tau `$. $`S_0`$ has only one peak within one period. In contrast to $`S_0`$, $`S_1`$ has not only this peak but also another small peak; $`S_2,S_3`$ and $`S_4`$ have more and more peaks within one period, respectively (see Fig. 2). Thirdly, with the increase of $`\tau `$, the $`S_i`$ appear one by one, with $`S_1`$ following $`S_0`$ and $`S_2`$ following $`S_1`$ in the direction of $`\mu `$.

### C The scales of $`S_i`$ via $`i`$

From Fig. 1 one can find that the $`S_i`$ appear continually with the increase of $`\tau `$. In order to find the law which governs the order of appearance of the $`S_i`$ and their scales, we should choose a standard by which to compare the $`S_i`$ with each other. In this paper, we choose the critical values $`\tau _i`$ as the standard, where $`\tau _i`$ is the value at which the second period-doubling bifurcation of the period-1 solution of $`S_i`$ takes place as $`\tau `$ decreases. In fact one could also choose another standard. For this specific choice, the bifurcation diagrams of the $`S_i`$ appear as the patterns in Fig. 3(a)-(e), respectively. Our numerical solutions show that $`\tau _i`$ grows exponentially with $`i`$. Figure 4(a) demonstrates the result, where the scatter points are the values of $`\tau _i`$ and the dotted line is the fitted curve, an exponential growth function of $`i`$: $`\tau _i=A\mathrm{exp}(i/B)`$, where $`A`$ and $`B`$ are fitting coefficients.
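A fit of this form can be performed as in the short sketch below (ours, for completeness); since the $`\tau _i`$ are only shown graphically in Fig. 4(a), the numerical values used here are placeholders.

```python
# Hedged illustration: fitting tau_i = A * exp(i / B), as done for Fig. 4(a).
# The tau_i array below is placeholder data, not the values of the paper.
import numpy as np
from scipy.optimize import curve_fit

def exp_growth(i, A, B):
    return A * np.exp(i / B)

i_vals = np.arange(1, 6)
tau_i = np.array([1.6, 2.4, 3.5, 5.2, 7.7])      # placeholder critical values

(A_fit, B_fit), _ = curve_fit(exp_growth, i_vals, tau_i, p0=(1.0, 2.0))
print(f"A = {A_fit:.3f}, B = {B_fit:.3f}")
```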
From Fig. 1 one can also find that the scale of the bifurcation diagrams of $`S_i`$ along $`\mu `$ increases with $`i`$, while that along $`x`$ decreases. Using the standard chosen above, we can measure the length $`\delta \mu `$ of the period-2 solution in the direction of $`\mu `$ at $`\tau _i`$ and use it to characterize the scale of $`S_i`$ along $`\mu `$. Figure 4(c) shows that $`\delta \mu `$ also grows exponentially with $`i`$, the exponential function being $`\delta \mu \left(i\right)=A\mathrm{exp}(i/B)`$. On the other hand, by measuring the maximum height $`\delta x`$ of the period-2 solution at $`\tau _i`$, we find that $`\delta x`$ exhibits exponential decay with increasing $`i`$: $`\delta x(i)=A\mathrm{exp}(-i/B)`$; see Fig. 4(d).

## IV Conclusion

In this paper we have studied in detail the dynamics of a short-time delay differential system. Our numerical results show that it is not true that there is no complex behavior in the case of a short-time delayed feedback. By using the Poincaré section technique for DDEs , a new type of solutions $`S_i`$ was found in this case, and it shares many similarities with the fundamental solution in the bifurcation diagrams. We found the law governing the $`S_i`$ and determined how their scales change with increasing $`i`$: the scales of the $`S_i`$ increase along $`\mu `$ and $`\tau `$, while they decrease along $`x`$ and the delay time.

ACKNOWLEDGMENT This work is supported in part by the National Natural Science Foundation of China, and in part by the Doctoral Education Foundation of the National Education Committee.
# GAMMA-RAY BURST ENVIRONMENTS AND PROGENITORS

## 1 INTRODUCTION

Although the study of GRBs (gamma-ray bursts) has been revolutionized in the past few years by finding a number of precise positions and distances, the nature of their progenitor objects remains uncertain (see Mészáros 1999 for a review). The production of a large amount of energy in a short time has naturally led to models involving compact objects (neutron stars and black holes). Fryer, Woosley, & Hartmann (1999) have summarized possible progenitors involving black hole accretion disks: neutron star - neutron star binary mergers (NS/NS), black hole - neutron star mergers (BH/NS), black hole - white dwarf mergers (BH/WD), massive star core collapses, and black hole - helium star mergers (BH/He). Based on estimated formation rates and on accretion disk models with high viscous forces, Fryer et al. (1999) suggest that NS/NS and BH/NS mergers dominate the population of short-duration GRBs and that massive stars and BH/He mergers dominate the long-duration bursts. The afterglows observed to date would have massive star progenitors in this scenario because they followed long-duration bursts. One way of distinguishing between the progenitor models is to examine the position of the GRB in the parent galaxy. Paczyński (1998) pointed out that NS/NS binaries would be expected to have a significant space velocity, which would carry them many kpc from their birthplaces. The observational evidence for the association of several GRBs with star forming regions then provided weak evidence against the NS/NS merger progenitors and favored massive star progenitors. The population synthesis calculations of Fryer et al. (1999) supported this conclusion. However, Bloom, Sigurdsson, & Pols (1999a) found a time to NS/NS merger of $`\sim 10^8`$ years. These objects would then follow the star formation rate, although $`\sim 15`$% of them might occur well outside of dwarf galaxy hosts. The connection of GRBs to massive stars became stronger with the discovery of the Type Ic supernova SN 1998bw in the error box of GRB 980425 (Galama et al. 1998). The high energy inferred for the optical supernova, $`(2-3)\times 10^{52}`$ ergs (Iwamoto et al. 1998; Woosley, Eastman, & Schmidt 1999), and the high expansion velocity inferred for the radio supernova (Kulkarni et al. 1998) strengthen the GRB connection. Li & Chevalier (1999) found that the evolution of the radio source indicated non-uniform energy input to the blast wave, as is also inferred in GRBs. They also found evidence that the radio SN 1998bw interacted with the stellar wind expected from the massive star progenitor. In this paper, we emphasize that a stellar wind environment is an unavoidable consequence of a massive star progenitor and that the nature of the GRB afterglow emission can provide a discriminant between massive star and compact binary progenitor models. In § 2, we discuss the expected afterglow for a massive star progenitor model and in § 3 place our models in the context of observations. Our discussion concentrates on the cases $`s=2`$ (stellar wind) and $`s=0`$ (interstellar medium), where the ambient medium has $`\rho \propto r^{-s}`$.

## 2 AFTERGLOWS IN A MASSIVE STAR WIND

In the existing massive star GRB progenitor models, the most likely progenitor is the stripped core of a massive star, i.e., a Wolf-Rayet star. MacFadyen & Woosley (1999) consider single massive stars whose cores directly collapse to black holes.
The stars have an initial mass $`>25M_{\odot }`$, which is the type of star that is likely to lose its H envelope in winds. Paczyński (1998) noted that the requirement of a rapidly rotating core might necessitate a close binary companion, which again points to Wolf-Rayet stars. The winds from these stars in our Galaxy have velocities of 1,000–2,500 $`\mathrm{km}\mathrm{s}^{-1}`$ and mass loss rates $`\dot{M}\sim 10^{-5}`$–$`10^{-4}M_{\odot }\mathrm{yr}^{-1}`$ (Willis 1991). On evolutionary grounds, Langer (1989) advocated $`\dot{M}\approx 6\times 10^{-8}(M_{\mathrm{WR}}/M_{\odot })^{2.5}M_{\odot }\mathrm{yr}^{-1}`$, where $`M_{\mathrm{WR}}`$ is the mass of the Wolf-Rayet star. If the stellar mass drops to $`3M_{\odot }`$ at the end of its life because of mass loss, $`\dot{M}\approx 10^{-6}M_{\odot }\mathrm{yr}^{-1}`$ at that time. The host galaxies of GRBs are likely to be of low metallicity. Willis (1991) notes that there is no evidence for a metallicity dependence of mass loss from a particular type of Wolf-Rayet star, but that metallicity may affect the distribution of Wolf-Rayet types. The effects of Wolf-Rayet winds can be observed in the wind bubbles around some of these objects. In some cases where the bubble is well-studied, the bubble expansion and X-ray emission are consistent with a weaker wind than that inferred from direct observations of the Wolf-Rayet star (e.g., NGC 6888, García-Segura & Mac Low 1995). The position of the wind termination shock, $`r_t`$, can be estimated by equating the ram pressure in the wind with the pressure in the bubble. We estimate $`r_t/R_b\approx (2R_b/v_wt_w)^{1/2}`$, where $`R_b`$ is the radius of the wind bubble and $`t_w`$ is its age. For $`R_b=3`$ pc, $`t_w=2\times 10^4`$ yr, and $`v_w=1,000\mathrm{km}\mathrm{s}^{-1}`$, we have $`r_t\approx 5\times 10^{18}`$ cm. At $`r_t`$, the density increases by a factor of 4 and becomes approximately constant at larger radius. The radius $`r_t`$ is expected to increase as the bubble evolves and its pressure drops. At the end of its life, a Wolf-Rayet star is thus expected to be surrounded by a substantial $`\rho \propto r^{-2}`$ medium. The radio observations of Type Ib/c supernovae can be interpreted as interactions with such a medium in models with constant efficiencies of production of relativistic electrons and magnetic fields (Weiler et al. 1999; Chevalier 1998). For the wind density, we take $`\rho =Ar^{-2}`$, where $`A=\dot{M}/4\pi v_w=5\times 10^{11}A_{*}`$ g cm<sup>-1</sup>. The reference value of $`A`$ corresponds to $`\dot{M}=1\times 10^{-5}M_{\odot }\mathrm{yr}^{-1}`$ and $`v_w=1000\mathrm{km}\mathrm{s}^{-1}`$. If a burst were able to occur in a red supergiant star, the low wind velocity, $`v_w\approx 10\mathrm{km}\mathrm{s}^{-1}`$, would lead to a higher circumstellar density because $`\dot{M}>10^{-7}M_{\odot }\mathrm{yr}^{-1}`$, up to $`10^{-4}M_{\odot }\mathrm{yr}^{-1}`$, is expected. The only supernova case where we have inferred a lower circumstellar density is around SN 1987A, which exploded as a B3 I star and was thus too cool to drive a strong wind with its ultraviolet radiation field. In this case, the $`r^{-2}`$ wind terminated at $`(3-4)\times 10^{17}`$ cm and the supernova radio flux rose sharply when the shock front encountered denser gas (Ball et al. 1995; Chevalier 1998). The scaling laws that are appropriate for GRB interaction with an $`s=2`$ medium have been described by Mészáros, Rees, & Wijers (1998) and by Panaitescu, Mészáros, & Rees (1998). Our aim here is to examine the specific predictions for interaction with a Wolf-Rayet star wind. We calculate the expected emission for a thin shell model (cf. Li & Chevalier 1999).
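As a simple numerical check of the wind normalization quoted above (our sketch, with standard cgs conversions assumed), the mass-loss rate and wind speed can be converted to the density parameter $`A`$ and to the radius at which the wind density falls to a typical interstellar value:

```python
# Converts (Mdot, v_wind) to A = Mdot / (4 pi v_w) of rho = A r^-2, and finds
# the radius where rho(r) equals 1.67e-24 g cm^-3. Illustrative helper only.
import numpy as np

MSUN_G = 1.989e33       # grams per solar mass
YR_S = 3.156e7          # seconds per year

def wind_A(mdot_msun_yr, v_wind_km_s):
    """Density normalization A in g/cm for rho = A r^-2."""
    mdot = mdot_msun_yr * MSUN_G / YR_S          # g/s
    return mdot / (4.0 * np.pi * v_wind_km_s * 1.0e5)

if __name__ == "__main__":
    A = wind_A(1.0e-5, 1000.0)
    A_star = A / 5.0e11                          # dimensionless A_* of the text
    r_ism = np.sqrt(A / 1.67e-24)                # where rho(r) = 1.67e-24 g/cm^3
    print(f"A = {A:.2e} g/cm, A_* = {A_star:.2f}, r(rho = rho_ISM) = {r_ism:.2e} cm")
```

For the reference values this gives $`A\approx 5\times 10^{11}`$ g cm<sup>-1</sup> and a matching radius of about $`5.5\times 10^{17}`$ cm, consistent with the numbers quoted in the text.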
We adopt a nucleon-to-electron number density ratio of two, which is appropriate for winds of Wolf-Rayet stars that are predominantly helium (and perhaps carbon/oxygen). For the mass loss case, a density of $`1.67\times 10^{-24}\mathrm{g}\mathrm{cm}^{-3}`$ is attained at $`r\approx 5.5\times 10^{17}A_{*}^{1/2}`$ cm. For an adiabatic blast wave in an $`s=2`$ medium (Blandford & McKee 1976), we find $`\gamma ^2=R/4ct`$ and $`R=(9Et/4\pi Ac)^{1/2}=2.0\times 10^{17}E_{52}^{1/2}A_{*}^{-1/2}t_{\mathrm{day}}^{1/2}`$ cm, where $`\gamma `$ is the Lorentz factor of the gas, $`R`$ is the observed radius near the line of sight, $`E_{52}=E/10^{52}`$ ergs is the explosion energy, and $`t=t_{\mathrm{day}}\mathrm{day}`$ is the time in the observer’s frame. Over the typical time of observation of a GRB afterglow, the shock front is expected to be within the $`s=2`$ wind, except for the unusual case of a progenitor like that of SN 1987A. In our model, the synchrotron emission frequency of the lowest energy electrons is
$$\nu _m=5\times 10^{12}\left(\frac{1+z}{2}\right)^{1/2}(ϵ_e/0.1)^2(ϵ_B/0.1)^{1/2}E_{52}^{1/2}t_{\mathrm{day}}^{-3/2}\mathrm{Hz},$$ (1)
and the flux at this frequency is
$$F_{\nu _m}=20\left(\frac{\sqrt{1+z}-1}{\sqrt{2}-1}\right)^{-2}\left(\frac{1+z}{2}\right)^{1/2}(ϵ_B/0.1)^{1/2}E_{52}^{1/2}A_{*}t_{\mathrm{day}}^{-1/2}\mathrm{mJy},$$ (2)
where $`z`$ is the redshift, and $`ϵ_e`$ and $`ϵ_B`$ are the electron and magnetic postshock energy fractions. These expressions assume a flat universe with $`H_o=65`$ km s<sup>-1</sup> Mpc<sup>-1</sup>. As in discussions of afterglows in the ISM, the magnetic field is assumed to be amplified by processes in the shocked region and is not directly determined by the field in the Wolf-Rayet star wind. For electrons with a power law energy distribution above $`\gamma _{em}`$, $`dN_e/d\gamma _e\propto \gamma _e^{-p}`$, the flux above $`\nu _m`$ is $`F_\nu =F_{\nu _m}(\nu /\nu _m)^{-(p-1)/2}\propto ϵ_e^{p-1}ϵ_B^{(p+1)/4}E^{(p+1)/4}At^{-(3p-1)/4}\nu ^{-(p-1)/2}`$, provided the electrons are not radiative. $`F_\nu =F_{\nu _m}(\nu /\nu _m)^{1/3}`$ below $`\nu _m`$ until the spectrum turns over due to synchrotron self-absorption at
$$\nu _A\approx 1\times 10^{11}\left(\frac{1+z}{2}\right)^{2/5}(ϵ_e/0.1)^{-1}(ϵ_B/0.1)^{1/5}E_{52}^{-2/5}A_{*}^{6/5}t_{\mathrm{day}}^{-3/5}\mathrm{Hz}.$$ (3)
We estimate the effects of synchrotron cooling following the discussions of Sari, Piran, & Narayan (1998) and Wijers & Galama (1999) for the $`s=0`$ case. Radiative cooling becomes important at a frequency
$$\nu _c\approx 1\times 10^{12}\left(\frac{1+z}{2}\right)^{-3/2}(ϵ_B/0.1)^{-3/2}E_{52}^{1/2}A_{*}^{-2}t_{\mathrm{day}}^{1/2}\mathrm{Hz},$$ (4)
provided $`\nu _c>\nu _m`$. For $`\nu >\nu _c`$, $`F_\nu \propto ϵ_e^{p-1}ϵ_B^{(p-2)/4}E^{(p+2)/4}t^{-(3p-2)/4}\nu ^{-p/2}`$. These results can be compared to similar results for interaction with a constant density interstellar medium with $`n_o\approx 1`$ cm<sup>-3</sup> (e.g., Waxman 1997; Sari et al. 1998). In that case $`F_{\nu _m}`$ is constant with time, $`\nu _m`$ is comparable to the value given above, $`\nu _A`$ is independent of time, and $`\nu _c`$ decreases with time. If $`\nu _A`$, $`\nu _m`$, and $`\nu _c`$ have the same relation to each other in the $`s=0`$ and $`s=2`$ cases, the appearance of the spectrum at one time is similar for both cases, but the evolution is different. At high frequency (optical and X-ray), for $`s=0`$ the flux evolution goes from adiabatic ($`t^{-(3p-3)/4}`$) to cooling ($`t^{-(3p-2)/4}`$), while for $`s=2`$ the flux evolution goes from cooling ($`t^{-(3p-2)/4}`$) to adiabatic ($`t^{-(3p-1)/4}`$).
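The characteristic quantities of Eqs. (1)-(4) can be evaluated with the short sketch below (ours, using the sign conventions reconstructed above and neglecting self-absorption in the assembled spectrum); with the GRB 980519 parameters adopted in § 3 it reproduces $`\nu _c\approx 4\times 10^{16}`$ Hz at 1 day.

```python
# Order-of-magnitude helper for the s = 2 (wind) afterglow of Eqs. (1)-(4).
# A sketch only: the spectrum below nu_A (self-absorbed) is not treated.
import numpy as np

def wind_afterglow(t_day, z=1.0, E52=1.0, A_star=1.0, eps_e=0.1, eps_B=0.1):
    zf = (1.0 + z) / 2.0
    nu_m = 5e12 * zf**0.5 * (eps_e/0.1)**2 * (eps_B/0.1)**0.5 * E52**0.5 * t_day**-1.5
    F_m  = 20.0 * ((np.sqrt(1+z) - 1) / (np.sqrt(2) - 1))**-2 * zf**0.5 \
           * (eps_B/0.1)**0.5 * E52**0.5 * A_star * t_day**-0.5            # mJy
    nu_a = 1e11 * zf**0.4 * (eps_e/0.1)**-1 * (eps_B/0.1)**0.2 \
           * E52**-0.4 * A_star**1.2 * t_day**-0.6
    nu_c = 1e12 * zf**-1.5 * (eps_B/0.1)**-1.5 * E52**0.5 * A_star**-2 * t_day**0.5
    return nu_m, nu_a, nu_c, F_m

def flux_mJy(nu, t_day, p=2.5, **kw):
    """Broken power law through (nu_m, F_m), with the cooling break above nu_c."""
    nu_m, nu_a, nu_c, F_m = wind_afterglow(t_day, **kw)
    if nu < nu_m:
        return F_m * (nu / nu_m) ** (1.0 / 3.0)
    f = F_m * (nu / nu_m) ** (-(p - 1.0) / 2.0)
    if nu > nu_c > nu_m:
        f *= (nu / nu_c) ** -0.5          # extra nu^-1/2 steepening above nu_c
    return f

if __name__ == "__main__":
    # parameters of the GRB 980519 fit discussed in Sec. 3 (z = 1 assumed there)
    pars = dict(z=1.0, E52=0.54, A_star=4.3, eps_e=0.62, eps_B=1.0e-5)
    nu_m, nu_a, nu_c, F_m = wind_afterglow(1.0, **pars)
    print(f"nu_c ~ {nu_c:.1e} Hz at t = 1 day (compare 4e16 Hz quoted in Sec. 3)")
    print("8.3 GHz flux at 1 day ~", flux_mJy(8.3e9, 1.0, p=3.0, **pars), "mJy")
```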
While cooling, the two cases have the same spectrum and flux decline. At low frequency (radio) with $`\nu <\nu _m`$, the flux evolves as $`t^{1/2}`$ for $`s=0`$, but can make a transition from $`t`$ to constant for $`s=2`$. ## 3 COMPARISON WITH OBSERVATIONS We have noted the evidence that SN 1998bw is interacting with a circumstellar wind (Li & Chevalier 1999), and here discuss the cosmological GRBs. The more detailed models that have been developed for comparison with the best observed GRB afterglows have taken a low, constant density surrounding medium as the starting point. For GRB 970508, Wijers & Galama (1999) and Granot, Piran, & Sari (1999) found $`n_o=0.030,5.3`$ cm<sup>-3</sup>, $`E_{52}=3.5,0.53`$, $`ϵ_e=0.12,0.57`$, and $`ϵ_B=0.089,0.0082`$, respectively. The difference between these results reflects, in part, the difficulty of accurately determining the significant frequencies, such as $`\nu _A`$. The low density found for this case is indicative of an interstellar density. However, these models primarily depend on the spectrum at one time, which can be the same for the $`s=0`$ and $`s=2`$ cases. Panaitescu et al. (1998) have presented a successful $`s=0`$ model for the evolution of GRB 970508, but a number of deviations from the simple model are included, so the case is not clear. With $`s=0`$, the optically thin, adiabatic flux evolution can be described by $`F_\nu t^\alpha \nu ^\beta `$ with $`\alpha =1.5\beta `$. This type of evolution was observed in the afterglow of GRB 970228 (Wijers, Rees, & Mészáros 1997). For $`s=2`$, the expected power law evolution has $`\alpha =(3\beta 1)/2`$ for a constant energy blast wave, so a mechanism must be found to flatten the time evolution if this GRB evolved in a circumstellar wind. The effects of beaming and a change to nonrelativistic flow steepen the time evolution, but continued power input from ejecta can flatten it. This effect is inferred to occur in radio supernovae (Chevalier 1998). Following Rees & Mészáros (1998) for the $`s=0`$ case with power input from ejecta with a mass gradient $`M(>\mathrm{\Gamma }_f)\mathrm{\Gamma }_f^n`$ where $`\mathrm{\Gamma }_f`$ is the Lorentz factor in the freely expanding ejecta, the evolution follows $`F_\nu t^{[2\beta (n+6)]/(n+4)}`$ for the $`s=2`$ case. For $`\beta =0.75`$, the property $`\alpha =1.5\beta `$ is recovered when $`n=5.33`$. The optically thin, $`\nu >\nu _m`$ evolution would then be the same as in the constant density ($`s=0`$) case, but other aspects of the evolution would be different. In particular, the expansion would be much less decelerated and the apparent radius of the blast $`rt^{(n+2)/(n+4)}t^{0.79}`$ and its Lorentz factor $`\gamma t^{1/(n+4)}t^{0.11}`$. At optically thick wavelengths, the flux would increase as $`F_\nu t^{1.57}`$ as opposed to $`F_\nu t^{1/2}`$ for the constant density case. Radio observations could distinguish this kind of evolution, but GRB 970228 was not detected in the radio. Kulkarni et al. (1999) noted that the afterglow of GRB 990123 was consistent with $`p=2.44`$ in a $`s=0`$ model, where $`p`$ is the electron energy spectral index. The optical emission declined with $`\alpha =1.10\pm 0.03`$, suggesting adiabatic evolution, while the X-rays declined with $`\alpha =1.44\pm 0.07`$, suggesting cooling evolution. The faster decline at X-ray wavelengths is distinctive of $`s=0`$ evolution and is the opposite of expectations for $`s=2`$. 
Most of the well observed optical afterglows also have $`\alpha `$ in the range –1.1 to –1.3, which is plausibly modeled by blast wave evolution in a constant density medium. However, the possibilities of flat electron spectra or cooling evolution do not allow $`s=2`$ models to be ruled out in general. An afterglow with a steep decline was that of GRB 980519. Its optical emission followed $`F_\nu \propto t^{-(2.05\pm 0.04)}`$, which, with the observed $`\beta =1.05\pm 0.10`$ for optical and X-ray data, is consistent with expansion in an $`s=2`$ medium (Halpern et al. 1999). The steep decline might be the result of beaming (Halpern et al. 1999; Sari, Piran, & Halpern 1999) instead of interaction with a wind. One way of distinguishing the wind interaction case is to observe the optically thick radio flux rise, $`F_\nu \propto t`$ for a wind as opposed to $`F_\nu \propto t^{1/2}`$ for a constant density. The effect of beaming in the $`s=0`$ case is to decrease the expansion and thus to increase the difference between the models. Radio observations of GRB 980519 were made at ages 0.3, 1.1, and 2.8 days at 8.3 GHz (Frail et al. 1998). Fig. 1 shows that an $`s=2`$ model is capable of fitting the radio, as well as the optical and X-ray, data. The model light curves in Fig. 1 are obtained using the synchrotron self-absorption model of Li & Chevalier (1999). The model takes into account the dynamical evolution of a spherical, constant-energy blast wave in a pre-burst $`s=2`$ stellar wind, relativistic effects on the radiation, and synchrotron self-absorption. The distance to GRB 980519 is unknown, so we take a fiducial value of $`z=1`$. Adopting a relatively low magnetic energy fraction of $`ϵ_B=10^{-5}`$, we find that the following combination of parameters fits all available data reasonably well: $`E_{52}=0.54`$, $`ϵ_e=0.62`$, $`p=3`$, and $`A_{*}=4.3`$. Rough scalings of these parameters for other choices of $`ϵ_B`$ are given in Li & Chevalier (1999). The value of $`p`$ is higher than that normally found in GRB afterglows, but is within the range found in radio supernovae (Chevalier 1998 and references therein). The model shown in Fig. 1 does not include synchrotron cooling. The good fit to the X-ray data is then expected because the spectral index joining the optical and X-ray emission, $`\beta =1.05\pm 0.10`$ (Halpern et al. 1999), is consistent with the separate optical and X-ray indices. From eq. (4) and the parameters for GRB 980519 given above, we have $`\nu _c\approx 4\times 10^{16}`$ Hz for $`t_{\mathrm{day}}=1`$, which is below the X-ray frequency (3 keV = $`7\times 10^{17}`$ Hz). We consider the agreement adequate in view of the expected gradual turnover in flux and the observational and theoretical uncertainty in the value of $`\nu _c`$. The model does require a low value of $`ϵ_B`$, as has also been inferred in a number of other afterglows (Galama et al. 1999). The model R-band flux densities on days 60 and 66 of Fig. 1 fall short of the observed values by nearly two orders of magnitude; the emission on these two days is presumed to come from the host galaxy of the burst (Sokolov et al. 1998; Bloom et al. 1998a). However, the observed flux densities on days 60 and 66 are close to those expected of SN 1998bw at a cosmological distance of $`z=1`$ (see Fig. 1). Despite excellent seeing conditions at Keck II, Bloom et al. (1998a) found little evidence for the extension expected of a host galaxy, so the presence of a SN 1998bw-type supernova is possible.
Observations of the source at later times than have been reported should be able to confirm or reject this possibility. Another burst with a relatively steep decline with time is GRB 980326, in which the optical $`F_\nu t^{2.10\pm 0.13}`$ (Groot et al. 1998). There is no radio data for this object, but the last optical observation is a factor $`10`$ above the power law decline; this is not the host galaxy because it is not present at a later time (Bloom & Kulkarni 1998). A possible source for the emission is a supernova (Bloom et al. 1999b), as in the case of GRB 980519. Such an event would be consistent with expansion into a wind and the explosion of a massive star. Bloom et al. (1999b) found that the spectrum of the late time source is consistent with that of a supernova, which rules out the possibility of a rise in the nonthermal afterglow emission as observed in GRB 970508 (Panaitescu et al. 1998) and the radio emission from SN 1998bw (Kulkarni et al. 1998; Li & Chevalier 1999). Based on the discovery of the SN 1998bw/GRB 980425 association, Bloom et al. (1998b) proposed a subclass of GRBs produced by supernovae (S-GRBs). We also place SN 1998bw in a supernova class of GRB, but with different properties from those of Bloom et al. (1998b). By supernova, we mean an observed event with an optical light curve and spectrum similar to that of a Type I or Type II supernova. In our picture, the GRB 990123 is an example of a GRB interacting with the ISM (interstellar medium). The similar (slow) rates of decline observed in other GRB afterglows suggests that this type of object is the most common. These objects are not interacting with winds, do not have massive star progenitors, and are not accompanied by supernovae. Plausible progenitors are the mergers of compact objects. The GRBs in the wind type are interacting with winds, have massive star progenitors, and are accompanied by supernovae. SN 1998bw/GRB 980425 is the best example of this class of object. GRB 980519 and GRB 980326 are other possible members. Bloom et al. (1998b) propose that S-GRBs have no long-lived X-ray afterglow because of synchrotron losses. We suggest that X-ray afterglows are possible, although with steeper time evolution than in the ISM case, provided $`ϵ_B`$ is small, as is also inferred for some of the ISM type afterglows. Bloom et al. further propose that S-GRBs have single pulse GRB profiles. We have found evidence for nonuniform energy input in SN 1998bw/GRB 980425 (Li & Chevalier 1999) and suggest that both types of bursts are powered by central engines with irregular power output. The GRB itself may not allow a classification of the event; the two cosmological GRBs mentioned as possible members of the wind class have multiple peak time structure (in ’t Zand et al. 1999; Groot et al. 1998). We are grateful to J. Bloom, S. Kulkarni, and an anonymous referee for useful comments and information. Support for this work was provided in part by NASA grant NAG5-8232.
no-problem/9904/cond-mat9904118.html
ar5iv
text
# Cooper Instability in the Occupation Dependent Hopping Hamiltonians ## I Formulation of the model High temperature superconductivity in the lanthanum, yttrium and related copper-oxide compounds remains a subject of intensive investigation and controversy. It was suggested that the electron-phonon interaction mechanism, which is very successful in the understanding of conventional (“low temperature”) superconductors within the Bardeen-Cooper-Schrieffer scheme, may not be adequate for high-$`T_c`$ cuprates, and even the conventional Fermi liquid model of the metallic state may require reconsideration. This opens an area for the investigation of mechanisms of electron-electron interaction which can be relevant to understanding the peculiarities of the superconducting, as well as normal-state, properties of cuprates. Specific to all of them is the existence of oxide orbitals. Band calculations suggest that hopping between the oxygen $`p_x,p_y`$ orbitals and between the copper $`d_{x^2-y^2}`$ orbitals may be of comparable magnitude. On the experimental side, spectroscopic studies clearly show that the oxygen band appears in the same region of oxygen concentration in which superconductivity in cuprates is the strongest. Therefore there exists a possibility that specific features of oxide compounds may be related to oxygen-oxygen hopping, or to the interaction between the copper and the rotational $`p_xp_y`$ collective modes. If the oxygen hopping is significant then it immediately follows that intrinsic oxygen carriers ($`p_x,p_y`$ oxygen holes) should be different from the more familiar generic $`s`$-orbital derived itinerant carriers. The difference is related to the low atomic number of oxygen: removing or adding one electron to the atom induces a substantial change in the Coulomb field near the remaining ion and therefore results in a change of the effective radius of the atomic orbitals near the ion. This will strongly influence the hopping amplitude between this atom and the atoms in its neighborhood. Such an “orbital contraction” effect represents a source of strong interaction which does not simply reduce to the Coulomb (or phonon) repulsion (or attraction) between the charge carriers. It was suggested by Hirsch and coauthors, and by the present authors, that occupation-dependent hopping can have relevance to the appearance of superconductivity in high-temperature oxide compounds. In the present paper, we investigate the generic occupation-dependent hopping Hamiltonians with respect to peculiarities of the normal state, and to the range of existence of the superconducting state. The theoretical investigation of the Cooper instability is supplemented by a numerical study of pairing and diamagnetic currents in finite atomic clusters. We study the effect of Cooper pairing between the carriers and show that at certain values and magnitudes of the appropriate coupling parameters, the system is actually superconducting. The properties of such a superconducting state are in fact only slightly different from the properties of conventional (low-$`T_c`$) superconductors. Among these, so far we can only mention the change in the fluctuation conductivity above or near the critical temperature $`T_c`$. Relaxation of the pairing parameter to equilibrium acquires a small real part due to the asymmetry of the contraction-derived interaction between the quasi-particles above and below the Fermi energy. Oxygen atoms in the copper-oxygen layers of the cuprates (Figure 1) form a simple square lattice.
We assume that $`p_z`$ orbitals of oxygen ($`z`$ is the direction perpendicular to the cuprate plane) are bound to the nearby cuprate layers whereas carriers at the $`p_x,p_y`$ orbitals may hop between the oxygen ions in the plane. Let $`t_1`$ be the hopping amplitude of $`p_x(p_y)`$ and $`t_2`$ the hopping amplitude of $`p_y(p_x)`$ oxygen orbitals between the nearest lattice sites in the $`x(y)`$ direction in a square lattice with a lattice parameter $`a`$. Then the non-interacting Hamiltonian is $$H_0=-t_1\underset{<ij>_x}{\sum }a_i^{}a_j-t_2\underset{<ij>_y}{\sum }a_i^{}a_j-t_1\underset{<ij>_y}{\sum }b_i^{}b_j-t_2\underset{<ij>_x}{\sum }b_i^{}b_j$$ (1) where $`a_i^{}(a_i)`$ is the creation (annihilation) operator for $`p_x`$ and correspondingly $`b_i^{}(b_i)`$ for $`p_y`$ orbitals. The interaction Hamiltonian includes the terms $$H_1=\underset{<ij>}{\sum }a_i^{}a_j\left[Vm_im_j+W(m_i+m_j)\right]+\underset{<ij>}{\sum }b_i^{}b_j\left[Vn_in_j+W(n_i+n_j)\right]$$ (2) where $`n_i=a_i^{}a_i`$, $`m_i=b_i^{}b_i`$. This corresponds to the dependence of the hopping amplitude on the occupation numbers $`n_i,m_i`$ of the form $$\left(\widehat{t}_{ij}\right)_{a_ia_j}=\tau _0(1-m_i)(1-m_j)+\tau _1\left[(1-m_i)m_j+m_i(1-m_j)\right]+\tau _2m_im_j$$ (3) and correspondingly $`(\widehat{t}_{ij})_{b_ib_j}`$ of the same form with $`m_i`$ replaced with $`n_i`$. The amplitudes $`\tau _0,\tau _1,\tau _2`$ correspond to the transitions between the ionic configurations of oxygen: $`\tau _0:`$ $`O_i^{-}+O_j^{2-}`$ $`\to `$ $`O_i^{2-}+O_j^{-}`$ (4) $`\tau _1:`$ $`O_i+O_j^{2-}`$ $`\to `$ $`O_i^{2-}+O_j`$ (5) $`\tau _2:`$ $`O_i+O_j^{-}`$ $`\to `$ $`O_i^{-}+O_j`$ (6) $`O`$ corresponds to the neutral oxygen ion whereas $`O^{-}`$ to the singly charged and $`O^{2-}`$ to the doubly charged negative ions. Since the oxygen atom has the $`1s^22s^22p^4`$ configuration in its ground state, filling of the $`p`$ shell to the fully occupied configuration $`2p^6`$ is the most favorable. The amplitudes $`V`$ and $`W`$ relate to the parameters $`\tau _0,\tau _1,\tau _2`$ according to $$V=\tau _0-2\tau _1+\tau _2,W=\tau _1-\tau _2.$$ (7) Assuming $`t_1=t_2`$ and replacing $`a_i,b_i`$ with $`a_i`$ with the pseudo-spin indices $`\sigma =\uparrow ,\downarrow `$ we write the Hamiltonian Eq.(1) in the form $$H=-t\underset{<ij>\sigma }{\sum }a_{i\sigma }^{}a_{j\sigma }+H_U+H_V+H_W$$ (8) where $`H_U`$ $`=`$ $`U{\displaystyle \underset{i}{\sum }}n_{i\uparrow }n_{i\downarrow }`$ (9) $`H_V`$ $`=`$ $`V{\displaystyle \underset{<ij>\sigma }{\sum }}a_{i\sigma }^{}a_{j\sigma }n_{i,\overline{\sigma }}n_{j,\overline{\sigma }}`$ (10) $`H_W`$ $`=`$ $`W{\displaystyle \underset{<ij>\sigma }{\sum }}a_{i\sigma }^{}a_{j\sigma }(n_{i,\overline{\sigma }}+n_{j,\overline{\sigma }})`$ (11) where we also included the in-site Coulomb interaction (U) between the dissimilar orbitals at the same site. $`\sigma `$ can also be considered as a real spin projection of electrons at the site. In that case, the pairing will originate between the spin-up and spin-down orbitals, rather than between $`p_x`$ and $`p_y`$ orbitals. More complex mixed spin- and orbital-pairing configurations can also be possible within the same idea of orbital contraction (or expansion) at hole localization, but are not considered in this paper. The following discussion does not distinguish between the real spin and the pseudo-spin pairing. The Hamiltonian, Eq.(6), is a model one, and reliable values of its parameters appropriate to the oxide materials cannot be assigned.
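A short symbolic check of the occupation-dependent amplitude, Eq. (3), and of the bilinear coefficient $`V`$ of Eq. (7), can be done as follows (a minimal sketch using sympy; the coefficient of the single-occupation terms, which depends on the choice of the reference hopping, is not checked here).

```python
# Sketch: algebraic check of Eq. (3).  It verifies that the amplitude reduces
# to tau_0, tau_1, tau_2 for the three occupation patterns of Eqs. (4)-(6),
# and that the coefficient of the two-body term m_i*m_j equals
# V = tau_0 - 2*tau_1 + tau_2, as in Eq. (7).
import sympy as sp

tau0, tau1, tau2, mi, mj = sp.symbols('tau0 tau1 tau2 m_i m_j')

t_hat = (tau0 * (1 - mi) * (1 - mj)
         + tau1 * ((1 - mi) * mj + mi * (1 - mj))
         + tau2 * mi * mj)

# the three ionic configurations
assert sp.simplify(t_hat.subs({mi: 0, mj: 0}) - tau0) == 0
assert sp.simplify(t_hat.subs({mi: 1, mj: 0}) - tau1) == 0
assert sp.simplify(t_hat.subs({mi: 1, mj: 1}) - tau2) == 0

expanded = sp.expand(t_hat)
print(expanded)
V = expanded.coeff(mi * mj)
print("V =", sp.simplify(V))   # tau0 - 2*tau1 + tau2, as in Eq. (7)
```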
The purpose of our study is rather to investigate the properties of superconducting transition specific to the model chosen and to find the range of the $`U,V,W`$ values which may correspond to superconductivity. This will be done along the lines of the standard BCS model in the weak coupling limit, $`U,V,W\mathrm{\hspace{0.17em}0}`$, and by an exact diagonalization of the Hamiltonian for a finite atomic cluster at large and intermediate coupling. In the momentum representation, the Hamiltonian becomes $`H=H_0+H_1+H_2`$ with $`H_0={\displaystyle \underset{𝐩\sigma }{}}\xi _𝐩a_{𝐩\sigma }^{}a_{𝐩\sigma }`$ (12) $`H_1={\displaystyle \frac{1}{4}}{\displaystyle \underset{p_1p_2p_3p_4,\alpha \beta \gamma \delta }{}}a_{𝐩_1\alpha }^{}a_{𝐩_2\beta }^{}\mathrm{\Gamma }_{\alpha \beta \gamma \delta }^0(p_1,p_2,p_3,p_4)a_{𝐩_4\delta }a_{𝐩_3\gamma }`$ (13) where $$\xi _𝐩=t\sigma _𝐩\mu ,\sigma _𝐩=2(\mathrm{cos}p_xa+\mathrm{cos}p_ya),$$ (14) and $`\mu `$ is the chemical potential. $`\mathrm{\Gamma }_{\alpha \beta \gamma \delta }^0`$ is the zero order vertex part defined as $$\mathrm{\Gamma }_{\alpha \beta \gamma \delta }^0(p_1,p_2,p_3,p_4)=\left[U+(W+\frac{1}{2}\nu V)(\sigma _{𝐩_1}+\sigma _{𝐩_2}+\sigma _{𝐩_3}+\sigma _{𝐩_4})\right]\tau _{\alpha \beta }^x\tau _{\gamma \delta }^x(\delta _{\alpha \gamma }\delta _{\beta \delta }\delta _{\alpha \delta }\delta _{\beta \gamma })\delta _{𝐩_1+𝐩_2,𝐩_3+𝐩_4}$$ (15) where $`\tau _{\alpha \beta }^x`$ is a Pauli matrix $`\left(\begin{array}{cc}0\hfill & \hfill 1\\ 1\hfill & \hfill 0\end{array}\right).`$ For reasons which will be clear later, we separated $`H_V`$ and put some part of it into the $`H_1`$ term, while the remaining part is included in the $`H_2`$ term, thus giving $$H_2=V\underset{<ij>\sigma }{}a_{i\sigma }^{}a_{j\sigma }(a_{i\overline{\sigma }}^{}a_{i\overline{\sigma }}\frac{\nu }{2})(a_{j\overline{\sigma }}^{}a_{j\overline{\sigma }}\frac{\nu }{2})$$ (17) with $`\nu =<n_i>`$ being the average occupation of the site. ## II The Cooper instability in the occupation-dependent hopping Hamiltonians The Cooper instability realizes at certain temperature $`T=T_c`$ as a singularity in a two-particle scattering amplitude at zero total momentum. Let’s introduce a function $$\mathrm{\Gamma }(p_1p_2,\tau \tau ^{})=<T_\tau a_{𝐩_1}(\tau )a_{𝐩_1}(\tau )\overline{a}_{𝐩_2}(\tau ^{})\overline{a}_{𝐩_2}(\tau ^{})>$$ (18) where $`\overline{a}_{𝐩\alpha }(\tau )=\mathrm{exp}(H\tau )a_{𝐩\alpha }^{}\mathrm{exp}(H\tau )`$, $`a_{𝐩\alpha }=\mathrm{exp}(H\tau )a_{𝐩\alpha }\mathrm{exp}(H\tau )`$ are the imaginary time $`(\tau )`$ creation and annihilation operators. At $`𝐩_1=𝐩_2`$, $`𝐩_3=𝐩_4`$, the kernel of $`\mathrm{\Gamma }_{\alpha \beta \gamma \delta }`$ is proportional to $`G_{\alpha \beta }^x`$ $`G_{\gamma \delta }^y`$ ($`G`$ is one-electron Green function). We keep notation $`\mathrm{\Gamma }(\mathrm{𝐩𝐩}^{})`$ for such a reduced Green function specifying only momenta $`𝐩=𝐩_1=𝐩_2`$ and $`𝐩^{}=𝐩_3=𝐩_4`$. By assuming temporarily $`V=0`$, this Hamiltonian results in an equation for the Fourier transform $`\mathrm{\Gamma }(p,p^{},\mathrm{\Omega })`$ $$\mathrm{\Gamma }(𝐩,𝐩^{},\mathrm{\Omega })=\mathrm{\Gamma }^0(𝐩,𝐩^{})T\underset{\omega }{}\underset{𝐤}{}\mathrm{\Gamma }^0(𝐩,𝐤)G_\omega (𝐤)G_{\omega +\mathrm{\Omega }}(𝐤)\mathrm{\Gamma }(𝐤,𝐩^{},\mathrm{\Omega })$$ (19) corresponding to summation of Feynmann graphs shown in Figure 2. 
In the above formulas, $`\omega =(2n+1)\pi T`$ and $`\mathrm{\Omega }=2\pi mT`$ ($`n,m`$ integers) are the discrete odd and even frequencies of the thermodynamic perturbation theory . $`G(𝐤,\omega )`$ is a one-particle Green function in a Fourier representation $$G(𝐤,\omega )=\frac{1}{\xi _ki\omega }.$$ (20) Diagrams of Figure 2 are singular since equal momenta of two parallel running lines bring together singularities of both Green functions $`G(𝐤,\omega )`$ and $`G(𝐤,\omega )`$. 6-vertex interaction, Eq.(8), is not generally considered in the theories of strongly-correlated fermionic systems. Such interaction also results in singular diagrams for $`𝐩𝐩`$ scattering shown in Figure 3. Since a closed loop in this figure does not carry any momentum to the vertex, it reduces to the average value of $`\overline{G}`$ which in turn is the average of the number operator, $`<a^{}a>`$. Taking into consideration of such diagrams is equivalent to replacing one of the $`n_i`$’s in Eq.(8) to its thermodynamical average $`\nu =<a_{i\sigma }^{}a_{i\sigma }>`$. Then the $`V`$ term can be added to the renormalized value of $`W`$, $`WW+{\displaystyle \frac{1}{2}}\nu V`$ We will check by numeric analysis in Sec. III to which extent such an approximation may be justified. Solution to Eq.(16) can be received by putting $$\mathrm{\Gamma }(𝐩,𝐩^{},\mathrm{\Omega })=A(\mathrm{\Omega })+B_1(\mathrm{\Omega })\sigma _𝐩+B_2(\mathrm{\Omega })\sigma _𝐩^{}+C(\mathrm{\Omega })\sigma _𝐩\sigma _𝐩^{}.$$ (21) Substituting this expression into Eq.(16) and introducing the quantities $$S_n(\mathrm{\Omega })=T\underset{\omega }{}\underset{𝐤}{}\sigma _𝐤^nG_\omega (𝐤)G_{\omega +\mathrm{\Omega }}(𝐤)$$ (22) we receive a system of coupled equations for $`A,B_1,B_2,C`$ $`\left(\begin{array}{cccc}1+US_0+\stackrel{~}{W}S_1& US_1+\stackrel{~}{W}S_2& 0& 0\\ \stackrel{~}{W}S_0& 1+\stackrel{~}{W}S_1& 0& 0\\ 0& 0& 1+US_0+\stackrel{~}{W}S_1& US_1+\stackrel{~}{W}S_2\\ 0& 0& \stackrel{~}{W}S_0& 1+\stackrel{~}{W}S_1\end{array}\right)`$ $`\left(\begin{array}{c}A\\ B_1\\ B_2\\ C\end{array}\right)=\left(\begin{array}{c}U\\ \stackrel{~}{W}\\ \stackrel{~}{W}\\ 0\end{array}\right)`$ (35) where $`\stackrel{~}{W}=W+\frac{1}{2}\nu V`$, which are solved to give $$A=\frac{U\stackrel{~}{W}^2S_2}{D},B_1=B_2=\frac{\stackrel{~}{W}(1+\stackrel{~}{W}S_1)}{D},C=\frac{\stackrel{~}{W}^2S_0}{D}$$ (36) where $`D`$ is a determinant $$D=\left|\begin{array}{cc}1+US_0+\stackrel{~}{W}S_1& US_1+\stackrel{~}{W}S_2\\ WS_0& 1+WS_1\end{array}\right|.$$ (37) The determinant becomes zero at some temperature which means an instability in the two-particle scattering amplitude ($`\mathrm{\Gamma }\mathrm{}`$). This temperature is the superconducting transition temperature $`T_c`$. At $`T_c`$, Eq.(16) is singular, which means that two-particle scattering amplitude gets infinite. Below $`T_c`$, the finite value of $`\mathrm{\Gamma }`$ is established by including the non-zero thermal averages (the order parameters), $`<a_𝐩^{}a_𝐩^{}>`$, $`<a_𝐩a_𝐩>`$. We first analyze the case of non-retarded, non-contraction interaction $`U`$, and after that will consider the effect of the occupation-dependent hopping terms, $`V`$ and $`W`$. 
### A Direct non-retarded interaction Neglecting contraction parameters $`V,W`$, solution to Eq.(16) reduces to $$\frac{1}{U}=T\underset{\omega }{}\underset{𝐤}{}\frac{1}{\xi _𝐤^2+\omega ^2}$$ (38) which after the summation over the discrete frequencies reduces to the conventional BCS equation (at negative $`U`$) $$\frac{1}{|U|}=\underset{𝐤}{}\frac{12n_𝐤}{2\xi _𝐤},$$ (39) with $`n_𝐤=(\mathrm{exp}(\beta \xi _𝐤)+1)^1`$. At finite frequency $`\mathrm{\Omega }`$, Eq.(23) reduces to $$\mathrm{ln}\frac{T}{T_c}=T\underset{\omega }{}_{E_1}^{E_2}𝑑\xi \frac{i\mathrm{\Omega }}{(\xi ^2+\omega ^2)(\xi +i\omega +i\mathrm{\Omega })}$$ (40) where we replaced for simplicity an integration over the Brillouin, zone $`d^3k`$, by the integration over the energy assuming that the density of states near the Fermi energy $`\mu `$ is flat. $`E_1`$ and $`E_2`$ are the lower and upper limits of integration equal to $`4t\mu `$ and $`4t\mu `$, respectively. Such an approximation is not very bad since most singular contribution to integral comes from the point $`\xi _p=0`$ where the integrand is the largest. Above $`T_c`$, Eq.(25) determines the frequency of the order parameter relaxation . There is a small change in this frequency compared to the BCS model in which limits of the integration $`(E_1,E_2)`$ are symmetric with respect to the Fermi energy, and small in comparison to $`\epsilon _F`$, therefore we briefly discuss it now. To receive a real-time relaxation frequency, Eq.(25) needs to be analytically continued to a real frequency domain from the discrete imaginary frequencies $`i\omega _n=(2n+1)\pi iT`$ . Using the identity $$T\underset{\omega }{}\frac{1}{(\omega +i\xi _1)(\omega +i\xi _2)\mathrm{}(\omega +i\xi _n)}=(i)^n\underset{i=1}{\overset{n}{}}\underset{ij}{}\frac{n(\xi _i)}{\xi _i\xi _j}$$ (41) where $`n(\xi )`$ is a Fermi function $`n(\xi )=(\mathrm{exp}(\beta \xi )+1)^1`$ gives $$\mathrm{ln}\frac{T}{T_c}=\frac{i\mathrm{\Omega }}{2}_{E_1}^{E_2}\frac{\mathrm{tanh}\frac{\xi }{2T}}{\xi (2\xi +i\mathrm{\Omega })}𝑑\xi $$ (42) where $$T_c=\frac{2\gamma }{\pi }\sqrt{E_1E_2}\mathrm{exp}\left(\frac{1}{N(\epsilon _F)|U|}\right),\mathrm{ln}\gamma =C=0.577.$$ (43) $`C`$ is Euler constant. Analytic continuation is now simple: we change $`\mathrm{\Omega }`$ to $`i(\omega i\delta )`$, $`\delta =+0`$, to receive a function which will be analytic in the upper half plane of complex $`\omega `$, $`Im\omega >0`$. The order parameter relaxation equation becomes $$\left(\mathrm{ln}\frac{T}{T_c}\frac{\omega }{4}_{E_1}^{E_2}\frac{\mathrm{tanh}\frac{\xi }{2T}}{\xi (\xi \frac{\omega }{2}+i\delta )}𝑑\xi \right)\mathrm{\Delta }=0.$$ (44) At $`\omega T_c`$ and $`TT_cT_c`$, the real and imaginary parts of Eq.(29) are easily evaluated to give $$\left(TT_c\frac{\pi i\omega }{8T_c}+\omega \frac{E_1E_2}{4E_1E_2}\right)\mathrm{\Delta }=0.$$ (45) Thus, the order parameter relaxation equation at $`T>T_c`$ becomes $$(1+i\lambda )\frac{\mathrm{\Delta }}{t}+\mathrm{\Gamma }\mathrm{\Delta }=0$$ (46) where $$\mathrm{\Gamma }=\frac{8}{\pi }(TT_c),\lambda =\frac{2(E_1E_2)}{\pi E_1E_2}T_c.$$ (47) In comparison to the BCS theory in which $`E_1=E_2=\omega _D`$ ($`\omega _D`$ is the Debye frequency) and therefore $`\lambda =0`$, we receive the relaxation which has a non-zero “inductive” component, $`i\lambda \mathrm{\Gamma }`$. Typically, $`E_1E_2\epsilon _F`$ and therefore $`|\lambda |`$ is a small quantity. It increases however near the low ($`\nu 1`$) or near the maximal ($`\nu 2`$) occupation where $`E_1`$ or $`E_2`$ become small. 
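As an illustration of Eq. (28), the sketch below solves the linearized gap equation for $`T_c`$ with asymmetric cutoffs $`E_1<0<E_2`$ and a flat density of states, and compares the result with the analytic weak-coupling expression; the cutoffs and the coupling are illustrative values, not parameters of any material.

```python
# Sketch: numerical check of the weak-coupling T_c of Eq. (28) against a
# direct solution of  N(eF)|U| * int tanh(xi/2T)/(2 xi) dxi = 1,  with the
# integral taken between asymmetric band edges E1 < 0 < E2 (flat density of
# states, as assumed in the text).  All numbers are illustrative.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

E1, E2 = -1.0, 3.0               # band edges measured from the Fermi level
NU = 0.25                        # dimensionless coupling N(eF)|U|
gamma_E = np.exp(0.5772156649)   # the factor gamma = e^C of Eq. (28)

def pair_integral(T):
    # pair susceptibility; the integrand is regular at xi = 0
    f = lambda xi: np.tanh(xi / (2.0 * T)) / (2.0 * xi) if xi != 0 else 1.0 / (4.0 * T)
    val, _ = quad(f, E1, E2, points=[0.0], limit=200)
    return val

Tc_analytic = (2.0 * gamma_E / np.pi) * np.sqrt(abs(E1) * E2) * np.exp(-1.0 / NU)
Tc_numeric = brentq(lambda T: pair_integral(T) - 1.0 / NU, 1e-6, 1.0)

print(f"Tc (Eq. 28)  = {Tc_analytic:.4e}")
print(f"Tc (numeric) = {Tc_numeric:.4e}")
# The two agree to within a few per cent once |E1|, E2 >> Tc, i.e. in the
# weak-coupling regime in which Eq. (28) was derived.
```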
Such mode of relaxation is specific to a non-retarded (non-phonon) interaction which is not symmetric near $`\epsilon _F`$ and spans over the large volume of the $`𝐤`$-space rather than is restricted to a narrow energy $`\omega _D\epsilon _F`$ near the Fermi energy. ### B Occupation-dependent hopping instability and relaxation Neglecting direct interaction, we put $`U=0`$ in Eq.(22) and receive $$\frac{1}{\stackrel{~}{W}}=S_1(\omega )\pm \sqrt{S_0(\omega )S_2(\omega )}$$ (48) where at finite frequency $`\omega `$ $$S_n(\omega )=N(\epsilon _F)T\underset{\omega }{}_{E_1}^{E_2}\left(\frac{\xi +\mu }{t}\right)^n\frac{\mathrm{tanh}\frac{\xi }{2T}}{2\xi \omega +i\delta }𝑑\xi .$$ (49) Putting $`\omega =0`$ we receive from Eq.(33) a transition temperature $`T_c`$. The equation has a solution at $`\stackrel{~}{W}<0`$, $`\mu <0`$, or at $`\stackrel{~}{W}>0`$, $`\mu >0`$ (we assume that $`t>0`$). Plus or minus sign is chosen to receive the maximal value of $`T_c`$ (the second solution corresponding to smaller $`T`$, then, has to be disregarded since at $`T<T_c`$ the order parameter will be finite and therefore Eqs.(20)-(22) do not apply). This gives an expression for $`T_c`$ $$T_c=\frac{2\gamma }{\pi }\sqrt{E_1E_2}\mathrm{exp}\left[\frac{E_1E_2}{2|\mu |t}(t|\mu |)+\frac{E_2^2E_1^2}{8\mu ^2}\right]\mathrm{exp}\left(\frac{t}{2|\stackrel{~}{W}|N(\epsilon _F)}\right)$$ (50) where $`\mu <0`$, $`\stackrel{~}{W}<0`$ (second exponent is dominating the first one in the weak coupling limit $`\stackrel{~}{W}0`$). Real and imaginary parts of $`S_n(\omega )`$ are calculated at $`\omega T_c`$ $`ImS_n(\omega ){\displaystyle \frac{\pi \omega }{8T_c}}\left({\displaystyle \frac{\mu }{t}}\right)^nN(\epsilon _F)`$ (51) $`ReS_n(\omega )={\displaystyle \frac{\omega }{4}}N(\epsilon _F)\left({\displaystyle \frac{\mu }{t}}\right)^n\times \{\begin{array}{cc}\frac{E_2E_1}{E_1E_2},\hfill & n=0\hfill \\ \frac{E_2E_1}{E_1E_2}+\frac{2}{\mu }\mathrm{ln}\frac{\gamma \sqrt{E_1E_2}}{T_c},\hfill & n=1\hfill \\ \frac{E_2E_1}{E_1E_2}+\frac{E_1E_2}{\mu ^2}+\frac{2}{\mu }\mathrm{ln}\frac{\gamma \sqrt{E_1E_2}}{T_c},\hfill & n=2\hfill \end{array}`$ (55) Equation for $`\lambda `$ is received with a value larger than the previous one (Eq.(32)) $$\lambda \frac{T_c}{\mu }\left(3\mathrm{ln}\frac{2\gamma \sqrt{E_1E_2}}{\pi T_c}+\frac{2\mu (E_2E_1)}{E_1E_2}+\frac{E_1E_2}{2\mu }\right).$$ (56) Eigenvalue equation gives the $`𝐩`$-dependence of the two particle correlator $`\mathrm{\Gamma }(𝐩,𝐩^{})=<a_𝐩^{}a_𝐩^{}a_𝐩^{}a_𝐩^{}>`$ near $`T_c`$ $$\mathrm{\Gamma }(𝐩,𝐩^{})=C\left[S_2S_1(\sigma _𝐩+\sigma _𝐩^{})+S_0\sigma _𝐩\sigma _𝐩^{}\right].$$ (57) Since $`C`$ diverges at $`T_c`$, this determines that order parameter becomes macroscopic at $`T<T_c`$. Then, the pair creation operator, $`a_𝐩^{}a_𝐩^{}`$, will almost be a number, i.e., we may decompose Eq.(39) into a product $$\mathrm{\Delta }_𝐩^{}\mathrm{\Delta }_𝐩=<a_𝐩^{}a_𝐩^{}><a_𝐩^{}a_𝐩^{}>$$ (58) and, to be consistent with the $`𝐩`$, $`𝐩^{}`$ dependences, by putting $`\xi _𝐩=\xi _𝐩^{}`$ we receive $$\mathrm{\Delta }_𝐩=C_1\left(\mathrm{exp}(i\theta /2)\sqrt{S_2(0)}+\mathrm{exp}(i\theta /2)\sqrt{S_0(0)}\right)\mathrm{exp}(i\phi )$$ (59) where $$\mathrm{cos}\theta =S_1(0)/\sqrt{S_0(0)S_2(0)}$$ (60) and $`\phi `$ is an overall phase which is irrelevant for a single superconductor but is important for calculating currents in multiple or weakly coupled superconductors. Therefore, system undergoes a pairing transition at temperature found from the Eq.(35). 
Since the pairs are charged, the state below $`T_c`$ cannot be non-superconducting. We have not calculated the Meissner response but in the following section we present a numerical calculation of flux quantization which supports the above statement. ## III Exact diagonalization of the occupation-dependent hopping Hamiltonians in finite cluster We calculate the ground state energy of a cubic system as shown in Figure 4. A magnetic flux $`\mathrm{\Phi }`$ is produced by a solenoid passing through the cube. Corners of the cube are the lattice sites, which can be occupied by electrons. With the inclusion of the magnetic flux, the model Hamiltonian, Eq. (6), becomes $`H=-t{\displaystyle \underset{<ij>\sigma }{\sum }}a_{i\sigma }^{}a_{j\sigma }\mathrm{exp}(i\alpha _{ij})+h.c.+U{\displaystyle \underset{i}{\sum }}n_{i\uparrow }n_{i\downarrow }+`$ (61) $`+{\displaystyle \underset{<ij>\sigma }{\sum }}a_{i\sigma }^{}a_{j\sigma }\left[Vn_{i\overline{\sigma }}n_{j\overline{\sigma }}+W(n_{i\overline{\sigma }}+n_{j\overline{\sigma }})\right]\mathrm{exp}(i\alpha _{ij})+h.c.`$ (62) where $$\alpha _{ij}=(2\pi /\mathrm{\Phi }_0)\int _{𝐫_i}^{𝐫_j}𝐀\cdot 𝑑𝐥$$ (63) and $`\mathrm{\Phi }_0=hc/e`$ is the magnetic flux quantum. Throughout the calculations we take $`t=1`$. We start by constructing the model Hamiltonian. In the Hilbert space of one electron $`a=\left(\begin{array}{cc}0\hfill & \hfill 1\\ 0\hfill & \hfill 0\end{array}\right),a^{}=\left(\begin{array}{cc}0\hfill & \hfill 0\\ 1\hfill & \hfill 0\end{array}\right).`$ (68) with a basis specified as $`\psi _0=(0,1)`$ for the ground state ($`n=0`$) and $`\psi _1=(1,0)`$ for the excited state ($`n=1`$). In the case of $`N`$ states, the annihilation operator $`a_n`$ takes the form $$a_n=s^{n-1}\otimes a\otimes e^{N-n}$$ (69) where $`e`$ is the unit matrix and $`s`$ is a unitary matrix $`e=\left(\begin{array}{cc}1\hfill & \hfill 0\\ 0\hfill & \hfill 1\end{array}\right),s=\left(\begin{array}{cc}1\hfill & \hfill 0\\ 0\hfill & \hfill -1\end{array}\right)`$ (74) and $`\otimes `$ stands for the Kronecker matrix multiplication. Explicitly, we have $`a_1`$ $`=`$ $`a\otimes e\otimes e\otimes e\otimes \mathrm{\dots }\otimes e`$ $`a_2`$ $`=`$ $`s\otimes a\otimes e\otimes e\otimes \mathrm{\dots }\otimes e`$ $`\mathrm{\dots }`$ $`a_N`$ $`=`$ $`s\otimes s\otimes s\otimes \mathrm{\dots }\otimes s\otimes a`$ Thus, for example, for two states $`a_1=\left(\begin{array}{cccc}0& 1& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 1\\ 0& 0& 0& 0\end{array}\right),a_2=\left(\begin{array}{cccc}0& 0& 1& 0\\ 0& 0& 0& -1\\ 0& 0& 0& 0\\ 0& 0& 0& 0\end{array}\right).`$ (83) These matrices, which are annihilation operators, and the corresponding Hermitian conjugate matrices, which are the creation operators, satisfy the Fermi anti-commutation relations. These operators are sparse matrices with only $`2^{N-1}`$ non-zero elements, which are equal to $`\pm 1`$. Next we solve the Schrödinger equation $`H\psi =E\psi `$. We implemented a novel algorithm for solving such sparse systems, which will be described elsewhere. The cubic cluster within the Hubbard Hamiltonian, with no external flux applied to the system, was studied previously by Callaway et al. Quantum Monte Carlo methods applicable to large systems within the Hubbard model (both attractive and repulsive), but not the occupation-dependent hopping Hamiltonians, are reviewed in a paper by Dagotto. ### A The number parity effect Superconductivity reveals itself in the lowering of the ground state energy as electrons get paired. Therefore the energy needs to be minimal for an even number of electrons $`n`$ and will attain a larger value when $`n`$ is odd.
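The construction just described is easy to reproduce numerically. The sketch below builds the Kronecker-product operators, verifies the Fermi anti-commutation relations, and then applies the number-parity test to the smallest nontrivial case, a two-site cluster with attractive $`U`$ and $`V=W=0`$ (dense matrices are used for brevity; this is not the sparse solver of the text, and the two-site cluster is only a stand-in for the cube of Figure 4).

```python
# Sketch: Kronecker-product fermion operators and a minimal number-parity test
# on a two-site attractive-U Hubbard cluster (V = W = 0, no flux).  Parameter
# values are illustrative.
import numpy as np

a1 = np.array([[0., 1.], [0., 0.]])   # single-mode annihilation operator
e  = np.eye(2)
s  = np.diag([1., -1.])               # sign matrix enforcing anticommutation

def annihilation_ops(N):
    """a_n = s x ... x s x a x e x ... x e  (Kronecker products)."""
    ops = []
    for n in range(N):
        m = np.array([[1.]])
        for k in range(N):
            m = np.kron(m, s if k < n else (a1 if k == n else e))
        ops.append(m)
    return ops

N = 4                                  # modes ordered (1up, 1down, 2up, 2down)
c = annihilation_ops(N)
for i in range(N):                     # check the anticommutation relations
    for j in range(N):
        acomm = c[i] @ c[j].T.conj() + c[j].T.conj() @ c[i]
        assert np.allclose(acomm, np.eye(2**N) if i == j else 0.0)
        assert np.allclose(c[i] @ c[j] + c[j] @ c[i], 0.0)

t, U = 1.0, -4.0
n = [ci.T.conj() @ ci for ci in c]
hop = -t * (c[0].T.conj() @ c[2] + c[2].T.conj() @ c[0]
            + c[1].T.conj() @ c[3] + c[3].T.conj() @ c[1])
H = hop + U * (n[0] @ n[1] + n[2] @ n[3])

# ground-state energy in each particle-number sector (N_tot is diagonal here)
Ntot = np.rint(np.diag(sum(n))).astype(int)
E = {m: np.linalg.eigvalsh(H[np.ix_(Ntot == m, Ntot == m)])[0] for m in range(5)}
gap = E[1] - 0.5 * (E[0] + E[2])
print({m: round(E[m], 3) for m in E}, " parity gap =", round(gap, 3))
# For U < 0 the odd-n energy lies above the even-n average (positive gap),
# the pairing signature discussed in the text; for U > 0 it does not.
```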
We consider a “gap” parameter $$\mathrm{\Delta }_l=E_{2l+1}-\frac{1}{2}\left(E_{2l}+E_{2l+2}\right)$$ (84) as a possible “signature” of superconductivity (where $`E_m`$ corresponds to the ground state energy for $`m`$ fermions). For all interaction parameters set to zero ($`U=V=W=0`$), no sign of pairing is observed. To check our analytic results of Sec. IIb and the argument following Eq.(34), we calculated $`\mathrm{\Delta }`$ above and below half-filling ($`n=8`$ in the case of the cubic cluster). Below half-filling the chemical potential is negative ($`\mu <0`$) and above half-filling it is positive ($`\mu >0`$). We first checked that the $`W\to 0^+`$, $`W\to 0^{-}`$ and $`V\to 0^+`$, $`V\to 0^{-}`$ calculation is consistent with the exact solution available for a non-interacting system of $`n`$ electrons. We then tested our program for the case of the negative-$`U`$ Hubbard Hamiltonian ($`U<0,V=0,W=0`$), which is known to be superconducting (e.g. Refs. 22,23). The positive-$`U`$ Hubbard model does not show any sign of superconductivity, in disagreement with some statements in the literature. Our calculations cannot disprove the (possible) non-pairing mechanisms of superconductivity, but these seem to be unlikely models for the problem of superconductivity in oxides, which clearly shows pairing of electrons (holes) in the Josephson effect and in the Abrikosov vortices. The relation $`2eV=\hbar \omega `$ is justified in the first case and the flux quantum of a vortex is $`hc/2e`$ in the second, both with the value of the charge equal to twice the electronic charge, $`e`$. Figure 5 shows the dependence of the ground state energy upon the number of particles in the case of the negative-$`U`$ and positive-$`U`$ Hubbard models assuming $`V=0`$ and $`W=0`$. Such dependences are typical for any value of $`|U|`$. There is clearly a pairing effect when $`U<0`$ and there is no sign of pairing at $`U>0`$. Tests for pairing in the contraction $`V`$, $`W`$-models ($`V\ne 0,U=W=0`$ and $`W\ne 0,U=V=0`$, respectively) are shown in Figs. 6,7. The results are in agreement with our perturbative calculation of Sec. II and with its extension to the intermediate and strong coupling limits $`|V|\gtrsim t`$, $`|W|\gtrsim t`$. Since the chemical potential is negative below half-filling and positive above half-filling, there is no pairing in the former case ($`\stackrel{~}{W}\to 0^+`$) and there is a sign of pairing in the latter case ($`\stackrel{~}{W}>0`$), in accord with the value of the effective coupling constant $`\stackrel{~}{W}=W+\frac{1}{2}\nu V`$. Similarly, for $`\stackrel{~}{W}\to 0^{-}`$ below half-filling there is a sign of pairing ($`\mathrm{\Delta }\ne 0`$) while above half-filling there is no pairing. These results are summarized in Table 1. For larger values of the interaction parameters, the perturbative results no longer remain applicable. Figure 8b shows the dependence of the parity gap $`\mathrm{\Delta }`$ on the strength of the interaction. From Figure 8 it is understood that the $`W`$ interaction introduces a “signature” of pairing in a similar way as the negative-$`U`$ interaction does. The possibility of the “contraction” pairing has been investigated in earlier papers. ### B Flux quantization Flux quantization is another signature of superconductivity which is a consequence of the Meissner effect. We also tested for the periodicity of the energy versus flux dependence with the period $`\mathrm{\Phi }_1=hc/2e`$ as compared to the period $`\mathrm{\Phi }_0=hc/e`$ in the non-interacting system.
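Before the interacting results are discussed, the flux dependence itself is easy to illustrate in the non-interacting limit. The sketch below uses the Peierls phases of Eqs. (61)-(63) for spinless fermions on a small ring (a simplification; the text treats a spinful, interacting cube) and shows the exact $`hc/e`$ periodicity together with the parity dependence of the position of the energy minimum.

```python
# Sketch: E(Phi) for a non-interacting ring of Ns sites with Peierls phases,
# U = V = W = 0, spinless fermions.  It illustrates the Phi_0 = hc/e
# periodicity and the number-parity effect: the minimum of E(Phi) sits at
# Phi = 0 or at Phi_0/2 depending on whether the particle number is odd or even.
import numpy as np

def ground_energy(phi_over_phi0, n_particles, Ns=8, t=1.0):
    # single-particle ring levels with one Peierls phase 2*pi*Phi/(Ns*Phi_0)
    # per bond:  eps_m = -2 t cos[2 pi (m + Phi/Phi_0) / Ns]
    m = np.arange(Ns)
    eps = -2.0 * t * np.cos(2.0 * np.pi * (m + phi_over_phi0) / Ns)
    return np.sort(eps)[:n_particles].sum()

phis = np.linspace(-0.5, 0.5, 201)        # flux in units of Phi_0
for n in (3, 4):                          # odd vs even particle number
    E = np.array([ground_energy(f, n) for f in phis])
    print(f"n = {n}:  min of E(Phi) at Phi/Phi_0 = {phis[np.argmin(E)]:+.2f}")
# Only interactions (pairing) can make a dominant hc/2e component appear;
# that is the feature the cluster calculations described in the text look for.
```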
Unfortunately, the even harmonics of $`\mathrm{\Phi }_0`$-periodic dependence of the ground state energy (and related to it, the harmonics of the persistent current $`J=E/\mathrm{\Phi }`$ ) may simulate the pairing in a non-superconductive system. Small-size (mesoscopic) system can mask the superconducting behavior . Flux quantization in Hubbard Hamiltonians was studied formerly in Refs. 29-31. We first demonstrate the behavior of the ground state energy with respect to flux, Figure 9. A characteristic feature of mesoscopic system suggests that addition of one extra particle to the system changes the sign of the derivative of the ground state energy with respect to magnetic flux at $`\mathrm{\Phi }=0`$. That is, depending on the parity of the number of particles and on the number of sites, system can change from paramagnetic to diamagnetic state or vice versa. But this behavior is not always observed for the cubic geometry studied. Except the sign change from $`n=2`$ to $`n=3`$ and from $`n=7`$ to $`n=8`$, no such behavior is seen. As mentioned above, however, the $`\mathrm{\Phi }_1`$-periodic component of the $`E(\mathrm{\Phi })`$ dependence begins to appear at the higher value of $`n`$ (Figure 9,c). For both contraction parameters equal to zero, i.e. $`W=V=0`$, we observe appearance of the $`hc/2e`$-periodic component for some values of $`U`$ (Figure 10). Even for positive (repulsive) values of $`U`$, it is possible to see a local minimum appearing at $`\mathrm{\Phi }=hc/2e`$ (Figure 10b). This is in agreement with the authors’ previous works . But this minimum, which does not lead to an exact periodicity of the ground state energy with a period $`\mathrm{\Phi }_0/2`$, should not be attributed to superconductivity, this is rather a characteristic behavior in mesoscopic systems. For $`U<0`$ (while $`W=V=0`$), the expected mesoscopic behavior, that is the change of the sign of the slope of ground state energy at $`\mathrm{\Phi }=0`$, starts to demonstrate itself (Figure 11). But this happens at sufficiently large absolute values of (negative) $`U`$. For other values of $`U`$, however, there is no such change. More pronounced $`hc/2e`$-periodic components are observed with the introduction of non-zero interaction parameters. The role of $`W`$ on the ground state energy, when both $`U`$ and $`V`$ are zero, is shown in Figure 12. Meanwhile setting both $`U`$ and $`W`$ to zero and observing the effect of the non-zero $`V`$ shows that $`V`$ does not play a role as significant as the other two interaction parameters do. There is not much difference in the behavior of the ground state energy upon magnetic flux between the zero and non-zero $`V`$ (for example $`V=1`$) cases. ## IV Conclusions We studied the peculiarity of electron conduction in systems in which conduction band is derived from the atomic shells with a small number of electrons ($`N_e`$) in an atom. Such materials may include oxygen ($`N_e=8`$) in the oxides, carbon ($`N_e=6`$) in borocarbides (e.g. $`LuNi_2B_2C`$), hydrogen ($`N_e=1`$) in some metals (e.g., $`PdH`$). Some materials of this kind are superconductors. It was argued that the Coulomb effects within the atoms strongly influence the inter-atom wave function overlap between the atomic sites and therefore the electron hopping amplitude between the sites. 
The phenomenology of such a conduction mechanism leads to Hamiltonians that are novel to conventional solid-state theory, the occupation-dependent-hopping (or contraction) Hamiltonians, specified by the two coupling parameters $`V`$, $`W`$. We then attempted a study of superconductivity in such systems within a BCS-type approach assuming Cooper pairing of electrons. The weak-coupling limit allows a determination of the range of values of the parameters $`V`$, $`W`$, and of the in-site Coulomb interaction $`U`$, which show the Cooper instability. The strong-coupling limit was addressed by a numerical calculation on finite clusters using a novel algorithm (of non-Lanczos type) for the eigenvalues of large sparse matrices. One of the results of this numerical calculation was that the positive-$`U`$ Hubbard model, sometimes believed to be a candidate for high-$`T_c`$ superconductivity, does not fulfill this expectation. This work was partially supported by the Scientific and Technical Research Council of Turkey (TÜBİTAK) through the BDP program.
no-problem/9904/hep-ph9904274.html
ar5iv
text
# Multijet events and parton showers ## Acknowledgements We would like to thank B. Ivanyi for lively and helpful discussions. We gratefully acknowledge financial support by DFG, BMBF and GSI.
no-problem/9904/patt-sol9904005.html
ar5iv
text
# Bekki-Nozaki Amplitude Holes in Hydrothermal Nonlinear Waves ## Abstract We present and analyze experimental results on the dynamics of hydrothermal waves occuring in a laterally-heated fluid layer. We argue that the large-scale modulations of the waves are governed by a one-dimensional complex Ginzburg-Landau equation (CGLE). We determine quantitatively all the coefficients of this amplitude equation using the localized amplitude holes observed in the experiment, which we show to be well described as Bekki-Nozaki hole solutions of the CGLE. Phys. Rev. Lett. 82 (1999) p. 3252-3255 The status and nature of the so-called amplitude equations which can be derived in the vicinity of symmetry-breaking instabilities is now well-established . They are “universal” in so far as they essentially depend on the symmetries of the physical system and of its bifurcated solutions, but also because they often remain valid, at least at a qualitative level, even far away from the instability threshold . However, determining accurately the coefficients of the underlying relevant amplitude equation from experimental data remains a difficult task, especially in these far-from-threshold regimes. The complex Ginzburg-Landau equation (CGLE), which describes the large-scale modulations of the bifurcated solutions near oscillatory instabilities, is perhaps the most-studied amplitude equation . This priviledged situation is due to both its relevance to many experimental situations and to the variety of its dynamical behavior, in particular its spatiotemporal chaos regimes. One of the landmarks of the CGLE is that it possesses localized “defect” solutions. Even in one space dimension, where no topological constraint exists, numerical simulations of the CGLE and analytical work have revealed the existence and importance of various amplitude hole solutions, which can often be seen as the “building blocks” of the complex spatiotemporal dynamics observed. In particular, the one-parameter family of traveling hole solutions discovered by Bekki and Nozaki has been shown to play an important dynamical role in a large portion of parameter space including in regions where they are linearly unstable . Similar objects have been identified in various experimental contexts of a priori relevance, e.g. Rayleigh-Bénard convection and coupled wakes . However, to our knowledge, there is still no case where a direct comparison with known solutions of the CGLE could be achieved. In this Letter, we present a quantitative comparison of localized amplitude holes observed in an experiment with hole solutions of the CGLE, the relevant amplitude equation. We use the observed holes to fully determine the coefficients of the underlying CGLE. This provides clear-cut evidence of Bekki-Nozaki holes in an experimental context. Our system is a long, straight, and narrow convection cell in which a thin fluid layer with a free surface is subjected to a horizontal temperature gradient. Hydrothermal nonlinear waves appear via a direct Hopf bifurcation, indicating the relevance of the CGLE. The spatiotemporal dynamics of the waves exhibits localized amplitude holes. The basic scales of the equivalent CGLE are determined using the regular part of the wave trains. Data collected in the vicinity of amplitude holes show that they have the structure of Bekki-Nozaki solutions. This also provides estimates of the remaining coefficients of the CGLE, an approach which, we argue, could be efficient in other experimental contexts. 
Finally, the overall consistency of our results is checked. The experimental setup is schematically described in Fig. 1. A layer of fluid (silicone oil of viscosity $`\nu =0.65\mathrm{cSt}`$ and Prandtl number $`P=10`$) of height $`h`$ is confined between two copper blocks maintained at fixed temperatures $`T_+`$ and $`T_{-}`$ by thermostated water circulation, and a bottom glass plate. This forms a straight, narrow channel of length $`L_x=25\mathrm{c}\mathrm{m}`$ and width $`L_y=2\mathrm{c}\mathrm{m}`$. As soon as the temperature difference $`\mathrm{\Delta }T=T_+-T_{-}`$ is not zero, a basic flow sets in. It consists of a surface flow towards the cold side with a bottom recirculation. Increasing $`\mathrm{\Delta }T`$, the basic flow becomes unstable to traveling hydrothermal waves via a supercritical Hopf bifurcation. We observe these waves by low-contrast shadowgraphy, which captures the vertical average of the temperature gradient variations (surface waves exist, but their effect is negligible). In this geometry, the waves propagate away from a “source region” located arbitrarily on the cold wall, and the end boundaries at $`x=0,L_x`$ act as sinks with no apparent reflection (Fig. 1). For $`h=1.2\mathrm{mm}`$, corresponding to the experiment reported below, the source region emits curved waves which become planar further away (Fig. 1, bottom) and propagate along the $`x`$-axis. Figure 2a presents a typical spatiotemporal evolution as obtained from the acquisition, with a fixed-gain camera, of a single 512-pixel line (of negligible width) along the $`x`$-axis in the center of the cell. Here, the source appears as a rather ill-defined, erratic object. (Closer to the Hopf bifurcation, a steady, regular evolution is observed.) Fourier analysis of diagrams such as Fig. 2a reveals that on each side of the source only waves propagating away from the source are present and that they are approximately monochromatic (the second harmonic is two orders of magnitude smaller). More precisely, restricting ourselves to one side of the source (say $`x\lesssim 90`$ on Fig. 2a), we can write the recorded physical variable: $$V(x,t)=A(x,t)\mathrm{exp}[i(k_0x-\omega _0t)]+\mathrm{c}.\mathrm{c}.$$ (1) where $`k_0`$ is the dominant wavenumber, $`\omega _0`$ the basic frequency, and $`A`$ a one-dimensional complex field describing the (large-scale) modulations of this wave. (On the other side of the source, one has to change the sign of $`k_0`$.) Using complex demodulation techniques, $`A`$ can be extracted from the experimental data. Figs. 2b,c show the spacetime evolution of $`|A|`$ and $`k=q\pm k_0`$ where $`q=\partial _x\mathrm{arg}(A)`$. In these pictures, the localized deformations of the waves visible in Fig. 2a clearly appear as propagating amplitude holes across which the phase gradient varies rapidly. At some space-time points, $`|A|`$ even vanishes and the phase gradient diverges: a space-time dislocation occurs (Fig. 2a). The amplitude holes can be seen as the objects mediating the evolution to wave patterns more regular than those emitted by the source. Our system clearly calls for a one-dimensional model. The waves arise via a supercritical Hopf bifurcation. Away from the source, they propagate only in one direction.
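The demodulation step itself can be illustrated on synthetic data. The sketch below builds a carrier at $`k_0`$ modulated by an envelope containing an amplitude hole with a phase jump, and recovers $`|A|`$ and $`q`$ with an analytic-signal (Hilbert-transform) version of the complex demodulation mentioned above; all numbers are invented for illustration and are not the experimental values.

```python
# Sketch: complex demodulation of a synthetic signal of the form of Eq. (1)
# at fixed t.  The analytic signal shifted by exp(-i k0 x) recovers the
# envelope |A| and the local wavenumber shift q = d(arg A)/dx.
import numpy as np
from scipy.signal import hilbert

k0 = 1.11                                   # carrier wavenumber (mm^-1)
x = np.linspace(0.0, 120.0, 4096)           # mm
# synthetic envelope: amplitude hole near x = 60 mm with a phase shift
hole = 1.0 - 0.85 * np.exp(-((x - 60.0) / 10.0) ** 2)
phase = 0.6 * np.tanh((x - 60.0) / 10.0)
A_true = hole * np.exp(1j * phase)
signal = np.real(A_true * np.exp(1j * k0 * x))        # V(x) at fixed t

A_est = hilbert(signal) * np.exp(-1j * k0 * x)        # complex demodulation
q_est = np.gradient(np.unwrap(np.angle(A_est)), x)    # local wavenumber shift

i = np.argmin(np.abs(A_est))
print(f"min |A| recovered: {np.abs(A_est)[i]:.2f}  (true {np.abs(A_true).min():.2f})")
print(f"largest phase gradient near the hole: q({x[i]:.1f} mm) = {q_est[i]:+.2f} mm^-1")
```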
All of these observations indicate that the evolution of $`A`$ could be governed by a single CGLE on each side of the source, even though the regime studied here takes place at a finite distance from threshold (for $`h=1.2\mathrm{mm}`$, $`\mathrm{\Delta }T_\mathrm{c}\simeq 4.3\mathrm{K}`$, and thus the relative distance to threshold is $`\epsilon =0.19`$ for $`\mathrm{\Delta }T=5.1\mathrm{K}`$). We thus suppose that $`A`$ obeys: $`\tau _0(\partial _t+v_\mathrm{g}\partial _x)A=`$ (2) $`\epsilon A+`$ $`\xi _0^2(1+i\alpha )\partial _{xx}A-g(1+i\beta )|A|^2A`$ (3) where $`v_\mathrm{g}`$ is the group velocity of the waves, $`\tau _0`$ and $`\xi _0`$ are the basic time- and length-scales of the wave modulations, and $`g`$ is a real number. Below, we estimate, from the data of Fig. 2, all the coefficients of Eq. (3) and check the overall consistency of our hypothesis. The linear part of the variation of the local frequency $`\omega `$ with the local wavenumber $`k`$ yields our estimate of the group velocity: $`v_\mathrm{g}=\partial \omega /\partial k\simeq 1.16\mathrm{mm}/\mathrm{s}`$ (Fig. 3a). This is consistent with the average value of the velocity of small perturbations, estimated at $`1.15\pm 0.25\mathrm{mm}/\mathrm{s}`$, to be compared to the phase velocity $`v_\varphi =\omega _0/k_0\simeq 2.8\mathrm{mm}/\mathrm{s}`$. This confirms that the source is indeed a source, since perturbations do propagate outward. Fig. 3b shows the variation of $`|A|`$ with $`k`$ as determined from the portion of Fig. 2 at the left of the source ($`x\lesssim 90\mathrm{m}\mathrm{m}`$). The maximum amplitude is observed for the basic wavenumber: $`k_0\simeq 1.11\mathrm{mm}^{-1}`$. Space-time points away from the localized amplitude holes correspond to the large $`|A|`$ (say $`|A|>0.5`$) portion of the curve. Locally around these points, the solution of (3) is expected to be close to one of the phase-winding solutions of wavevector $`q=k+k_0`$ (see e.g. ): $`A`$ $`=`$ $`A_q\mathrm{exp}[i(qx-\omega _qt)]\mathrm{with}A_q^2=(\epsilon -\xi _0^2q^2)/g`$ (5) $`\mathrm{and}\omega _q=[\epsilon \beta +(\alpha -\beta )\xi _0^2q^2]/\tau _0+v_\mathrm{g}q`$ The linear variation of $`|A|^2`$ with $`q^2`$ is confirmed in Fig. 3c, yielding $`\xi _0/\sqrt{\epsilon }\simeq 2.53\mathrm{mm}`$ and $`\sqrt{\epsilon /g}\simeq 0.00054`$ (a.u.). Note that we thus have $`L_x\gg \xi _0/\sqrt{\epsilon }\simeq 3/k_0`$: the cell is effectively “infinite” and the variations of $`A`$ occur on scales significantly larger than the basic length $`k_0^1`$. Timescale $`\tau _0`$ can be estimated from the real part of the spatial linear growth rate of waves near the source, which is equal to $`\epsilon /(\tau _0v_\mathrm{g})`$. From Fig. 3d, we find $`\tau _0/\epsilon \simeq 8.5\pm 0.5\mathrm{s}`$, about four times the basic period $`2\pi /\omega _0\simeq 2.03\mathrm{s}`$, confirming that the variations of $`A`$ are slow compared to the basic oscillations. Note that $`\tau _0`$ is of the same order as the viscous diffusion time $`h^2/\nu =2.2\mathrm{s}`$. At this stage, all the basic scales of Eq. (3) have been estimated. To determine the remaining two parameters $`\alpha `$ and $`\beta `$, global quantities deriving from the “wave part” of the data could, in principle, be sufficient. For example, Fig. 3a could be used to extract the expected variation of $`\omega _q`$ with $`q`$. But the data are too noisy to yield any meaningful estimate of $`\alpha `$ and $`\beta `$. Moreover, as long as the source is not controlled, the “input” waves cannot be varied at will to explore the family of solutions (5), contrary to other experimental situations.
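For completeness, the fitting step behind Fig. 3c is sketched below on synthetic data: Eq. (5) gives $`|A_q|^2`$ linear in $`q^2`$, so $`\sqrt{\epsilon /g}`$ follows from the intercept and $`\xi _0/\sqrt{\epsilon }`$ from the slope-to-intercept ratio.

```python
# Sketch: the linear fit of |A|^2 vs q^2 implied by Eq. (5),
# |A_q|^2 = eps/g - (xi0^2/g) q^2, applied to synthetic data generated from
# the values quoted in the text.
import numpy as np

rng = np.random.default_rng(0)
sqrt_eps_over_g = 5.4e-4          # (a.u.), value quoted above
xi0_over_sqrt_eps = 2.53          # mm
q = np.linspace(-0.3, 0.3, 60)    # mm^-1, large-|A| portion of the data
A2 = sqrt_eps_over_g**2 * (1.0 - (xi0_over_sqrt_eps * q) ** 2)
A2_noisy = A2 * (1.0 + 0.05 * rng.standard_normal(q.size))

slope, intercept = np.polyfit(q**2, A2_noisy, 1)
print(f"sqrt(eps/g)   ~ {np.sqrt(intercept):.2e}  (input {sqrt_eps_over_g:.2e})")
print(f"xi0/sqrt(eps) ~ {np.sqrt(-slope / intercept):.2f} mm  (input {xi0_over_sqrt_eps})")
```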
We now focus instead on the localized amplitude holes already mentioned. Many localized, propagating objects connecting two phase-winding solutions have been observed numerically in the one-dimensional CGLE . Analytical methods are largely limited, so far, to solutions depending only on the reduced variable $`\xi =xv_\mathrm{h}t`$ where $`v_\mathrm{h}`$ is the (constant) velocity of the object . Using this Ansatz, the CGLE reduces to a third-order ordinary differential equation (ODE) whose fixed points are the phase-winding solutions (5). Localized objects connecting two such solutions of wavevector $`q_\mathrm{L}`$ and $`q_\mathrm{R}`$ appear, within the Ansatz, as homoclinic ($`q_\mathrm{L}=q_\mathrm{R}`$) or heteroclinic ($`q_\mathrm{L}q_\mathrm{R}`$) orbits. The holes observed in Fig. 2 are not stable structures connecting two infinite phase winding solutions, but they subsist long enough, and can be sufficiently isolated to reveal that their wings are indeed well described by phase winding solutions. (As a matter of fact, we repeatedly assimilated, above, the large-amplitude regions separating the holes to portions of these solutions.) Fig. 4 shows an isolated hole extracted from Fig. 2. One can measure rather accurately the two wavenumbers $`q_\mathrm{L}=0.13`$ and $`q_\mathrm{R}=0.32`$ (in the CGLE frame) connected by the central hole, which can thus be tentatively seen as an heteroclinic orbit in the ODE Ansatz. All sufficiently localized structures on Fig. 2 also connect two different wavenumbers. This rules out the homoclinic holes recently studied by van Hecke , and leave, as possible candidates, the family of hole solutions found by Bekki and Nozaki . The explicit form of these solutions is too lengthy to be given here (see, e.g., ). They form a one-parameter family (at fixed $`\alpha `$, $`\beta `$) which can be parametrized by, e.g., the velocity $`v_\mathrm{h}`$ of the hole. They take the shape of an exponentially-localized amplitude hole with a minimum amplitude $`|A|_{\mathrm{min}}`$ accompanied by a rapid phase shift $`\sigma `$. To compare data such as that of Fig. 4 to these solutions, we need to determine the values of $`\alpha `$ and $`\beta `$ and the “optimal” solution of the corresponding family. We proceed as follows: we estimate $`q_\mathrm{L}`$, $`q_\mathrm{R}`$, and $`|A|_{\mathrm{min}}`$ from the data since we found these were the characteristics of the hole for which the most accurate measurement can be made. We find, for all values of the $`(\alpha ,\beta )`$ plane where it exists, the Bekki-Nozaki hole solution with the measured value of $`q_\mathrm{L}+q_\mathrm{R}`$. We then select the (codimension 1) subsets of the $`(\alpha ,\beta )`$ plane where, moreover, this hole solution possesses the measured value of $`q_\mathrm{L}`$ or the estimated value of $`|A|_{\mathrm{min}}`$ (Fig. 5, dashed lines). These two lines intersect, yielding the desired values of $`\alpha `$ and $`\beta `$. Taking into account the error bars on $`q_\mathrm{L}`$, $`q_\mathrm{R}`$, and $`|A|_{\mathrm{min}}`$, we find $`\alpha =1.5\pm 0.5`$ and $`\beta =0.4\pm 0.05`$. By the same token, the hole solution is also uniquely determined (Fig. 4, dashed lines). Its velocity $`v_\mathrm{h}`$, its width, its phase shift $`\sigma `$ are all found consistent with the data. We repeated the procedure for other amplitude holes found in Fig. 2. We found almost the same $`\alpha `$ and $`\beta `$ values, to the accuracy of the estimates. 
This strengthens the confidence in the results, since different objects moving at different velocities, connecting different wavenumbers all yield the same parameter values. We also performed a final check, by plotting, for the estimated values of $`\alpha `$ and $`\beta `$, the variation of $`|A|_{\mathrm{min}}`$ with the local phase gradient at the “bottom” of the hole along the family of solutions (Fig. 3b, solid line). The agreement with the small-$`|A|`$ values measured on Fig. 2 is very good. This is an additional indication that all the low-amplitude points are indeed located “inside” Bekki-Nozaki holes. For the estimated values of $`\alpha `$ and $`\beta `$, the CGLE is in the parameter region where the phase-winding solutions (5) are linearly stable for $`|q|`$ small enough and no sustained spatiotemporal disorder exists in one space dimension . Moreover, the Bekki-Nozaki amplitude holes are linearly unstable . This is not in contradiction with the dynamics observed in the experiment: the Bekki-Nozaki holes, although unstable, exist, and can constitute important building blocks of even chaotic dynamics . The waves emitted by the source can be locally attracted to this family of unstable fixed points before escaping along its unstable manifold (a mechanism also invoked by van Hecke in ). The tendency of the waves trains to become more regular away from the source (see Fig. 2) is consistent with disorder being only transient in the CGLE with the estimated parameter values. In summary, we have presented experimental results on the dynamics of the nonlinear hydrothermal waves traveling in a laterally-heated fluid layer. We have shown that, although the regime studied here is rather far from the onset of waves, the large-scale modulations of the basic pattern are governed by a one-dimensional complex Ginzburg-Landau equation and we estimated its full set of coefficients. This was made possible by showing that the localized amplitude holes observed experimentally correspond to the Bekki-Nozaki hole solutions of the CGLE. The overall consistency of our results was checked. Since the operating regime of the CGLE at the estimated parameter values does not exhibit sustained disorder, it would be interesting to analyze experimental data collected at other parameter values in the hope of reaching spatiotemporal chaos regimes of the type exhibited by the CGLE. This is left for future work, together with an attempt to obtain a better control of the system by forcing the behavior of the source. More generally, we believe that using the localized structures or defects of pattern-forming systems to determine quantitatively their relevant amplitude equations can be a rewarding approach to this difficult experimental problem. J.B. thanks the spanish government for support through project PB95-0578A (DGICYT) and a postdoctoral grant (SEUI, Ministerio de Educación y Ciencia).
no-problem/9904/quant-ph9904032.html
ar5iv
text
# Enhancement of Magneto-Optic Effects via Large Atomic Coherence ## Abstract We utilize the generation of large atomic coherence to enhance the resonant nonlinear magneto-optic effect by several orders of magnitude, thereby eliminating power broadening and improving the fundamental signal-to-noise ratio. A proof-of-principle experiment is carried out in a dense vapor of Rb atoms. Detailed numerical calculations are in good agreement with the experimental results. Applications such as optical magnetometry or the search for violations of parity and time-reversal symmetry are feasible. Resonant magneto-optic effects such as the nonlinear Faraday and Voigt effects are important tools in high-precision laser spectroscopy. Applications to both fundamental and applied physics include the search for parity violations and optical magnetometry. In this Letter, we demonstrate that the large atomic coherence associated with Electromagnetically Induced Transparency (EIT) in optically thick samples can be used to enhance nonlinear-Faraday signals by several orders of magnitude while improving the fundamental signal-to-noise ratio. There exists a substantial body of work on nonlinear magneto-optical techniques, which have been studied both in their own right and for applications. Such techniques can achieve high sensitivity in systems with ground-state Zeeman sublevels due to the narrow spectroscopic features associated with coherent population trapping. The ultimate width of these resonances is determined by the lifetime of ground-state Zeeman coherences, which can be made very long by a number of methods (buffer gases and/or wall coating in vapor cells, or atomic cooling or trapping techniques). These resonances are easily saturated, however, and power broadening deteriorates the resolution even for very low light intensities. For this reason, earlier observations of nonlinear magneto-optic features used small light intensities and optically thin samples, which correspond to a weak excitation of Zeeman coherences. Recently, remarkable experiments by Budker and co-workers demonstrated the excellent performance of the magneto-optic techniques in this regime: very narrow magnetic resonances were observed in a cell with a special paraffin coating (effective Zeeman relaxation rate $`\gamma _0\simeq 2\pi \times 1`$ Hz). This Letter shows that by increasing atomic density and light power simultaneously the magneto-optic signal can be enhanced substantially and the fundamental noise (shot noise) can be greatly reduced. A nearly maximal Zeeman coherence generated under these conditions preserves the transparency of the medium despite the fact that the system operates with a density-length product that is many times greater than that appropriate for 1/e absorption of a weak field. At the same time this medium is extraordinarily dispersive, such that even very weak magnetic fields lead to a large magneto-optic rotation. This effect is of the same nature as those resulting in ultra-low group velocities. Our experimental results show a potential for several orders of magnitude improvement over the conventional thin-medium–low-intensity approach. Typical measurements of the nonlinear Faraday effect involve an ensemble of atoms with ground-state Zeeman sublevels interacting with a linearly polarized laser beam. In the absence of a magnetic field, the two circularly-polarized components generate a coherent superposition of the ground-state Zeeman sublevels corresponding to a dark state.
A weak magnetic field $`B`$ applied to such an atomic ensemble causes a splitting of the sublevels and induces phase shifts $`\varphi _\pm `$ which are different for right (RCP) and left (LCP) circularly polarized light. Hence, as linearly polarized light passes through the medium, the direction of polarization changes by an angle $`\varphi `$ due to the differing changes in the phase of the two circular components. In our experiment, shown schematically in Fig. 1, an external cavity diode laser (ECDL) was tuned to the 795 nm $`F=2\to F^{}=1`$ transition of the $`{}^{87}\mathrm{Rb}`$ $`D_1`$ absorption line. The laser beam was collimated with a diameter of 2 mm and propagated through a 3 cm long magnetically shielded vapor cell placed between two crossed polarizers. The cell was filled with natural Rb and a Ne buffer gas at a pressure of 3 Torr. The laser power was 3 mW at the cell entrance. The cell was heated to produce atomic densities of $`{}^{87}\mathrm{Rb}`$ near $`10^{12}`$ $`\mathrm{cm}^{-3}`$. A longitudinal magnetic field was created by a solenoid placed inside the magnetic shields and was modulated at a rate of about 10 Hz. The ground state relaxation rate was measured by sufficiently decreasing the laser power, decreasing the density until the absorption was low, and using RF-optical double resonance techniques. The measured value of $`\gamma _0\simeq 2\pi \times 5`$ kHz (FWHM) is attributed to time-of-flight broadening as well as to a residual inhomogeneous magnetic field. The frequency scale of the magnetic resonance was also determined by using an RF-optical double resonance method. Figure 2 shows the result of a direct measurement of the laser intensity at the photo-detector after transmission through the system of two crossed polarizers ($`\theta =45`$ degrees) and a vapor cell. We emphasize that no lock-in detection has been used for the data shown in Fig. 2, whereas in typical nonlinear Faraday measurements sophisticated detection techniques are usually required to obtain a reasonable signal-to-noise ratio. We note that magneto-optical rotation angles increase with optical density as does the slope $`\varphi /B`$ (curves a and b in Fig. 2). The latter increase is the essence of the method being described. Under the present conditions, rotation angles up to 0.7 radians have been observed (curve b) with a good signal-to-noise ratio. For very high densities the absorption becomes large and the amplitude of the magneto-optic signal does not grow with density any further (curve c). From our measurements of the rotation angles we obtain, for the conditions outlined above: $`\varphi /B=1.8\times 10^2\mathrm{rad}/\mathrm{G}.`$ (1) To put this result in perspective, we can estimate the shot-noise limited sensitivity of this medium. The fundamental photon-counting error accumulated over a measurement time $`t_m`$ scales inversely with the output intensity. That is, for a laser frequency $`\nu `$ $$\mathrm{\Delta }\varphi _{err}\simeq \sqrt{\hbar \nu /[P(L)t_m]}$$ (2) where $`P(L)`$ is the power transmitted through the cell. Combining this with our measured rotation angles implies a shot-noise limited sensitivity $`B_{min}=\mathrm{\Delta }\varphi _{err}/(\varphi /B)`$ of about $`10^{-10}`$ G/$`\sqrt{\mathrm{Hz}}`$, which is comparable to the best values estimated in, e.g., Ref. . It is important to note that this high sensitivity is achieved in our case despite a “natural” width of the Zeeman coherence $`\gamma _0`$ that is more than three orders of magnitude larger.
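The order of magnitude of this estimate is easy to reproduce. The sketch below combines Eq. (2) with the measured slope of Eq. (1); the transmitted power $`P(L)`$ is not quoted in the text, so an assumed value of 0.1 mW (out of the 3 mW input) is used here.

```python
# Sketch: shot-noise-limited sensitivity from Eq. (2) and the measured slope
# of Eq. (1).  The transmitted power is an assumed value, not a measured one.
import numpy as np

hbar = 1.0546e-34            # J s
c = 2.9979e8                 # m/s
nu = c / 795e-9              # laser frequency, Hz
P_L = 0.1e-3                 # W, assumed transmitted power
slope = 1.8e2                # rad/G, measured slope of Eq. (1)

dphi_per_rtHz = np.sqrt(hbar * nu / P_L)      # Eq. (2) with t_m = 1 s
B_min = dphi_per_rtHz / slope
print(f"delta-phi ~ {dphi_per_rtHz:.1e} rad/sqrt(Hz)")
print(f"B_min     ~ {B_min:.1e} G/sqrt(Hz)")
# This gives a value of order 1e-10 G/sqrt(Hz), consistent with the estimate
# quoted in the text.
```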
This demonstrates the very significant potential of the present technique. We now turn to a theoretical consideration of this result. As a simple model, let us consider the interaction of a dense ensemble of atoms with ground-state angular momentum $`F=1`$ and an excited state $`F=0`$ as shown in Fig. 3. (Although the calculation presented in Fig. 4 represents a simulation of realistic rubidium hyperfine structure, this simple model, with well-chosen parameters, represents the qualitative physics quite well.) We consider a strong laser tuned to exact resonance with the atomic transition and disregard inhomogeneous broadening. RCP and LCP intensities are then attenuated equally according to: $`{\displaystyle \frac{1}{P}}{\displaystyle \frac{dP}{dz}}`$ $`=`$ $`\kappa \gamma {\displaystyle \frac{\left[2|\mathrm{\Omega }|^2\gamma _0+\gamma (4\delta ^2+\gamma _0^2)\right]\mathrm{\Delta }\rho }{(2|\mathrm{\Omega }|^2+\gamma \gamma _02\delta ^2)^2+\delta ^2(2\gamma +\gamma _0)^2}},`$ (3) $`{\displaystyle \frac{d\varphi _\pm }{dz}}`$ $`=`$ $`\pm {\displaystyle \frac{\kappa \gamma \delta }{2}}{\displaystyle \frac{\left[4|\mathrm{\Omega }|^24\delta ^2\gamma _0^2\right]\mathrm{\Delta }\rho }{(2|\mathrm{\Omega }|^2+\gamma \gamma _02\delta ^2)^2+\delta ^2(2\gamma +\gamma _0)^2}},`$ (4) where $`\mathrm{\Omega }=\mathrm{}|E_\pm |/\mathrm{}`$ are the (equal) Rabi frequencies of the field components ($`P|\mathrm{\Omega }|^2`$), $`\gamma _0`$ and $`\gamma `$ are the relaxation rate of Zeeman and optical coherences respectively, $`\delta =g\mu _BB/\mathrm{}`$ is the Zeeman level shift caused by an magnetic field $`B`$ (g is a Landé factor), and $`\kappa =3/(4\pi )N\lambda ^2(\gamma _{ab}/\gamma )`$ is the weak field absorption coefficient (inverse absorption length), and $`\gamma _{ab}`$ is the natural width of the resonance. The population difference between the ground-state Zeeman sublevels and the upper state is $`\mathrm{\Delta }\rho `$. This quantity is affected by optical pumping into the decoupled states ($`b_0`$ in Fig. 3), and depends upon cross-relaxation rates and applied magnetic field. For a weak magnetic field, $`\mathrm{\Delta }\rho 1/3`$. One recognizes from Eq. (4) that in the case of optically thin media ($`\kappa L1`$ where $`L`$ is the cell length) the phase shifts $`\varphi _\pm `$ can be approximated by dispersive Lorentzian functions of $`\delta `$, with amplitude $`\varphi _{max}=\kappa L\mathrm{\Omega }^2/(2\mathrm{\Omega }^2+\gamma \gamma _0)\mathrm{\Delta }\rho `$ and width $`\delta _0=\gamma _0/2+\mathrm{\Omega }^2/\gamma `$. The former is typically rather small (on the order of mrad in the experiments of Ref. ) while the latter saturates when $`|\mathrm{\Omega }|^2`$ exceeds the product $`\gamma \gamma _0/2`$, which corresponds to the usual power-broadening of the magneto-optic resonance . It is important to emphasize here that a principal difference between regimes involving low and high driving power lies in the degree of Zeeman coherence excited by the optical field: $$\rho _{b_{}b_+}=\frac{2|\mathrm{\Omega }|^2\mathrm{\Delta }\rho }{2|\mathrm{\Omega }|^2+\gamma \gamma _02\delta ^2+i\delta (2\gamma +\gamma _0)}$$ (5) Large coherence corresponds to a large population difference between symmetric (i.e. “bright”) and antisymmetric (i.e. “dark”) superpositions of Zeeman sublevels. In the low-power regime this difference is small corresponding to a small coherence. 
In a regime where the width of the resonance is determined by saturation, a very large (nearly maximal) Zeeman coherence is generated, as per Eq.(5). We will now show that in a medium with large Zeeman coherence the magneto-optic signal is maximized if a large density-length product is chosen. In the case of a strong optical field ($`|\mathrm{\Omega }|^2\gamma _0\gamma `$) and weak magnetic fields $`|\delta |<|\mathrm{\Omega }|^2/\gamma ,\sqrt{\gamma _0/\gamma }|\mathrm{\Omega }|`$, integration of Eqs. (3) and (4) yields for the transmitted power and the rotation angle $`\varphi =(\varphi _+\varphi _{})/2`$ $`P(L)`$ $`=`$ $`(1\alpha _0L)P(0)`$ (6) $`\varphi (L)`$ $`=`$ $`{\displaystyle \frac{\delta }{2\gamma _0}}\mathrm{ln}\left[{\displaystyle \frac{1}{1\alpha _0L}}\right]`$ (7) where $`\alpha _0=\mathrm{\Delta }\rho \kappa \gamma \gamma _0/2|\mathrm{\Omega }_0|^2`$, and $`\mathrm{\Omega }_0`$ corresponds to the input field. Note that in the case of a strong input field and an optically thin medium $`\alpha _0L1`$. However Eq. (7) shows that maximal rotation is achieved with a large density-length product $`\alpha _0L`$. Clearly $`\alpha _0L`$ cannot be too close to unity, since then no light would be transmitted. Using Eq. (2) one finds the optimum value $`(\alpha _0L)_{opt}=1\mathrm{e}^2`$, corresponding to a density-length product $$\mathrm{\Delta }\rho \kappa L|_{opt}=(\alpha _0L)_{opt}\frac{|\mathrm{\Omega }_0|^2}{\gamma \gamma _0}1.$$ (8) In this case the total accumulated rotation angle is quite large and the slope of its dependence upon $`B`$ is maximal: $$\varphi _{opt}/B=g\mu _B/(\mathrm{}\gamma _0)$$ (9) which is in strikingly good agreement with our measured value. The significance of this result can be understood by noting that in a shot-noise limited measurement, the minimum detectable rotation $`\varphi _{err}`$ given in Eq. (2) is inversely proportional to the square root of the laser power. Working at high power, therefore, has a clear advantage, since the fundamental shot noise error is reduced even though the signal is large. To make a more realistic comparison of theory and experiment, we have carried out detailed calculations in which coupled density matrix and Maxwell equations including propagation through the medium and Doppler broadening have been solved numerically for the two components of the optical field. The calculation takes into account a 16-state atomic system with energy levels and coupling coefficients corresponding to those of the Rb $`D_1`$ line. The results of these calculations are shown in Fig. 4 and are in good agreement with the experimental results. In particular, we note that our calculations predict the maximal rotation angle, (which is apparently limited by the optical pumping into the F=1 $`S_{1/2}`$ hyperfine manifold) as well as the slope of the resonance curve. It is important to comment at this point on possible limitations for the extension of the present technique into the domain of narrow resonances. For instance, in the case of a long-lived ground state coherence, spin-exchange collisions can become a limiting factor for the Zeeman relaxation rate. In the case of Rb, this is a few tens of Hz at densities corresponding to the present operating conditions. We note however that it is possible to operate at lower densities by increasing the optical path length (e.g. by utilizing an optical cavity). Likewise, the role of the light shifts due to off-resonant coupling to e.g. 
the F=1 $`S_{1/2}`$ and F=2 $`P_{1/2}`$ hyperfine manifolds needs to be clarified. Although the noise due to classical intensity fluctuations of the circular components of the optical field is obviously canceled in a measurement of the polarization rotation, there might exist additional quantum contributions that add noise. We note, however, that even if such contributions are present, it is likely that they can be suppressed by tuning the laser to the point of minimum light shifts. For these reasons, we believe that the combination of the present approach with buffer gas or wall-coating techniques is likely to substantially improve the sensitivity of nonlinear magneto-optical measurements. Therefore, we anticipate that this method will be of interest for sensitive optical magnetometry as well as for setting new, lower bounds in tests of the violation of parity and time-reversal invariance . The authors warmly thank Leo Hollberg, Alexander Zibrov, and Michael Kash for useful discussions and Tamara Zibrova for valuable assistance. We gratefully acknowledge the support of the Office of Naval Research, the National Science Foundation, the Welch Foundation, and the Air Force Research Laboratory.
# Excitation function of nucleon and pion elliptic flow in relativistic heavy-ion collisions > Within a relativistic transport (ART) model for heavy-ion collisions, we show that the recently observed characteristic change from out-of-plane to in-plane elliptic flow of protons in mid-central Au+Au collisions as the incident energy increases is consistent with the calculated results using a stiff nuclear equation of state ($`K`$= 380 MeV). We have also studied the elliptical flow of pions and the transverse momentum dependence of both the nucleon and pion elliptic flow in order to gain further insight about the collision dynamics. The elliptic flow of hadrons in relativistic heavy ion collisions has been a subject of great interest as it may reveal the signatures of possible QGP phase transition in these collisions \[see Ref. for a recent review\]. Based on kinematical and geometrical considerations of relativistic heavy-ion collisions, Ollitrault predicted that as the incident energy increases nucleons would change from an out-of-plane elliptical flow to an in-plane one. Such a transition has recently been observed in collisions of heavy ions from the Alternating Gradient Synchrotron (AGS) at the Brookhaven National Laboratory . Data from the EOS, E895 and E877 collaborations on the proton elliptic flow in mid-central Au+Au collisions show that the beam energy ($`E_{\mathrm{tr}}`$) at which the elliptical flow changes sign is about 4 GeV/A . Studies based on transport models have indicated that the value for $`E_{\mathrm{tr}}`$ depends on the nuclear equation of state (EOS) at high densities . Using a relativistic Boltzmann-Equation model (BEM), it has been found that the experimental data can be understood if the nuclear equation of state used in the model is stiff ($`K`$=380 MeV) for beam energies below $`E_{\mathrm{tr}}`$ but soft ($`K`$=210 MeV) for beam energies above $`E_{\mathrm{tr}}`$ . Since the baryon density reached in heavy ion collisions at these energies increases with the beam energy, the above study thus suggests that the nuclear equation of state is softened at high densities. Such a softened equation of state may imply the onset of a phase change as suggested by lattice studies of the QCD at finite temperature and zero baryon chemical potential. However, to put this conclusion on a firm ground requires further studies using other models. In this Rapid Communication, we shall study the elliptical flow in heavy ion collisions at AGS energies using a Relativistic Transport (ART) Model and show that the experimental data is consistent instead with the prediction using a stiff EOS without invoking a softening at high densities . Furthermore, we shall show that by studying both the nucleon and pion elliptic flow as a function of beam energy and transverse momentum one can obtain much more information about the reaction dynamics and the origin of the transition in the sign of elliptic flow. Our study is based on the relativistic transport model ART for heavy ion collisions. We refer the reader to Ref. for details of the model and its applications in studying various aspects of relativistic heavy-ion collisions from Bevalac to AGS energies. The elliptic flow reflects the anisotropy in the particle transverse momentum ($`p_t`$) distribution at midrapidity, i.e., $`v_2<(p_x^2p_y^2)/p_t^2>`$, where the average is taken over all particles of a given kind in all events . In the upper window of Fig. 
1, we compare the excitation function of $`v_2`$ for protons in mid-central Au+Au reactions obtained using the stiff (cross), soft (filled square) EOS and the cascade (open square) with the experimental data (open circles) of Ref. . An impact parameter of 5 fm, which is consistent with that in the data analysis , is used in the calculations. In agreement with other model calculations , our calculated results also show that the transition energy in the proton elliptic flow is very sensitive to the nuclear EOS. The value of $`E_{\mathrm{tr}}`$ is more than 4 GeV/A in the case of a stiff EOS but decreases to below 3 GeV/A for a soft EOS. As discussed in Ref. , a soft EOS, which gives a smaller sound velocity than that of a stiff EOS, reduces the squeeze-out contribution and thus leads to a smaller transition energy in proton elliptic flow. In the case of cascade calculations, the absence of a repulsive potential further reduces the squeeze-out contribution and results in an essentially in-plane flow in the beam energy range considered here. On the other hand, the value of $`v_2`$ in our calculations is insensitive to the nuclear EOS for incident energies above about 6 GeV/A. This is different from the results of Ref. , where a distinct difference is seen between the elliptic flow due to a soft and a stiff EOS. Our results also differ from that of Ref. based on the Ultrarelativistic Quantum Molecular Dynamics (UrQMD), in which the elliptical flow in the case of a stiff EOS is much smaller than that from the cascade model even for incident energies above 6 GeV/A. However, in both our study and that from the UrQMD the experimental, data are found to be consistent with the calculated results using the stiff EOS in this beam energy range. These results are thus different from that of Ref. , where calculations based on the BEM model show that the experimental data suggest a softening of the EOS from a stiff one at low beam energies to a softer one at higher energies. Since different model calculations lead to different dependence of the proton elliptical flow on the nuclear EOS, it is thus not possible at present to draw conclusions from comparisons of the theoretical results with the experimental data. To test these theoretical models, simultaneous studies of other experimental observables will be useful. Since pions are abundantly produced in high energy heavy ion collisions, their elliptical flow is expected to provide further insight about the collision dynamics. In the lower window of Fig. 1, we show our predictions for the excitation function of the pion elliptic flow. All three charge states of the pion are included in the analysis. Effects due to the different charges will be discussed in the next paragraph. It is seen that pions also show a transition from out-of-plane to in-plane elliptic flow as the beam energy increases. However, both the magnitude of pion elliptic flow and the transition energy at which it changes sign are significantly smaller than those for nucleons. This can be qualitatively understood from the collision dynamics. For nucleons, the sign and magnitude of elliptic flow depends on both transverse expansion time of participant nuclear matter and the passage time of the two colliding nuclei. The latter reflects the time scale for the spectators to be effective in preventing the participant hadrons from developing an in-plane flow, thus enhancing the squeeze-out contribution to the elliptical flow. 
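As an aside on the observable itself, the coefficient $`v_2`$ defined above is straightforward to estimate from the transverse momenta of midrapidity particles. A minimal sketch (Python; the particles and the anisotropy strength are synthetic, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def elliptic_flow(px, py):
    """v2 = < (px^2 - py^2) / pt^2 >, averaged over particles (and events)."""
    pt2 = px**2 + py**2
    return np.mean((px**2 - py**2) / pt2)

# Synthetic midrapidity particles with azimuthal distribution
# dN/dphi ~ 1 + 2 v2_true cos(2 phi); x is taken along the impact parameter.
v2_true = -0.02                       # out-of-plane (squeeze-out) example value
n = 200_000
phi = rng.uniform(0.0, 2 * np.pi, 4 * n)
keep = rng.uniform(0, 1 + 2 * abs(v2_true), phi.size) < 1 + 2 * v2_true * np.cos(2 * phi)
phi = phi[keep][:n]
pt = rng.exponential(0.4, phi.size)   # GeV/c, illustrative spectrum
print("reconstructed v2 =", round(elliptic_flow(pt * np.cos(phi), pt * np.sin(phi)), 4))
```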
For pions, however, the shadowing effect due to spectator nucleons is less important as a result of the time delay in their production, i.e., a significant number of pions are emitted later in the reaction from the decay of both baryon and meson resonances after the spectator nucleons have already moved away. Therefore, both the magnitude of the squeeze-out contribution and the transition energy in the pion elliptic flow are significantly smaller than those of nucleons. The study of the excitation function of both nucleon and pion flow is useful in understanding the origin of the transition from out-of-plane to in-plane elliptic flow. In Fig. 2, we show the $`p_t`$ dependence of nucleon and pion elliptic flow in a mid-central collision of Au+Au at a beam momentum of 6 GeV/c. They are obtained from the ART model with a soft nuclear EOS. For protons, the elliptic flow increases approximately quadratically at low $`p_t`$ and then increases linearly at high $`p_t`$, as expected from the nucleon azimuthal angle distributions . For pions, their $`v_2`$ value is larger than that for protons at low $`p_t`$ but becomes similar at high $`p_t`$. Again, one can understand this result from the reaction dynamics. Low $`p_t`$ pions are more likely to be produced later in the reaction; they are thus less likely to be shadowed by spectator nucleons and therefore have a larger in-plane flow than low $`p_t`$ protons. On the other hand, high $`p_t`$ pions are mainly produced early in the reaction and thus freeze out together with high $`p_t`$ nucleons, then leading to a similar elliptic flow, which approaches that of the hydrodynamical limit . It is interesting to mention that the observed $`p_t`$ dependence of $`v_2`$ for nucleons and pions is remarkably similar to what one finds at both Bevalac/SIS and SPS energies , indicating the similarity of the collision dynamics at these different energies. We note that negative pions have a higher in-plane flow than the positive ones as a result of the Coulomb potential from protons, i.e., negative pions are attracted to while positive ones are repelled away from the reaction plane by protons. In summary, using a relativistic transport model we have found that the transition from out-of-plane to in-plane elliptic flow in mid-central Au+Au collisions as the beam energy increases is consistent with a stiff nuclear EOS without invoking a phase transition. This result is consistent with that from the UrQMD model but different from that from the BEM model. To help disentangle these different predictions, we have also shown the excitation function of the pion elliptic flow and the transverse momentum dependence of both the nucleon and pion elliptic flow, which are expected to reveal interesting information about both the reaction dynamics and the origin of the observed change in the sign of elliptic flow. This work was supported in part by NSF Grant No. PHY-9870038, the Robert A. Welch Foundation under Grant A-1358, and the Texas Advanced Research Program.
# Kruskal Coordinates and Mass of Schwarzschild Black Holes ## Abstract Schwarzschild coordinates ($`r,t`$) fail to describe the region within the event horizon (EH), $`(rr_g)`$, of a Black Hole (BH) because the metric coefficients exhibit singularity at $`r=r_g`$ and the radial geodesic of a particle appears to be null ($`ds^2=0`$) when actually it must be timelike ($`ds^2>0`$), if $`r_g>0`$. Thus, both the exterior and the interior regions of BHs are described by singularity free Kruskal coordinates. However, we show that, in this case too, $`ds^20`$ for $`rr_g`$. And this result can be physically reconciled only if the EH coincides with the central singularity or if the mass of Schwarzschild black holes $`M0`$. The concept of Black Holes (BHs) is one of the most important plinths of modern physics and astrophysics. As is well known, the basic concept of BHs actually arose more than two hundred years ago in the cradle of Newtonian gravitation. In General Theory of Relativity (GTR), the gravitational mass is less than the baryonic mass ($`MM_0`$). Further, as the body contracts and emits radiation $`M`$ keeps on decreasing progressively alongwith $`r`$. Thus, given an initial gravitational mass $`M_i`$, one can not predict with certainty the value of $`M_f`$ when we would have $`2M_f/r=1`$ ($`G=c=1`$). Neither are the values of $`M_i`$, $`M_f`$ and $`M_0`$ related by any combination of fundamental constants though, it is generally assumed that $`M_iM_f`$. Ideally, one should solve the Einstein equations analytically to fix the value of $`M_f`$ for a given initial values of $`M_i`$ and $`M_0`$ for a realistic equation of state (EOS) and energy transport properties. However even when one does away with the EOS by assuming the matter to behave like a dust, $`p0`$, one does not obtain any unique solution if the dust is inhomogeneous. Depending on the various initial conditions and assumptions (like self-similarity) employed one may end up finding either a BH or a “naked singularity”. By further assuming the dust to be homogeneous Oppenheimer and Snyder (OS) found asymptotic solution of the problem by approximating Eq.(36) of their paper. The region exterior to the event horizon ($`r>r_g=2M`$) can be described by the Schwarzschild coordinates $`r`$ and $`t`$: $$ds^2=g_{tt}dt^2+g_{rr}dr^2+g_{\theta \theta }d\theta ^2+g_{\varphi \varphi }d\varphi ^2$$ (1) where $`g_{tt}=(12M/r)`$, $`g_{rr}=(12M/r)^1`$, $`g_{\theta \theta }=r^2`$, and $`g_{\varphi \varphi }=r^2\mathrm{sin}^2\theta `$. Here, we are working with a spacetime signature of +1, -1, -1, -1 and $`r`$ has a distinct physical significance as the invariant circumference radius. For $`r>r_g=2M`$, the worldline of a freeling falling radial material particle is indeed timelike $`ds^2>0`$ and the metric coefficients have the right signature, $`g_{tt}>0`$, $`g_{rr}<0`$, $`g_{\theta \theta }<0`$ and $`g_{\varphi \varphi }<0`$. But at $`r=2M`$, $`g_{rr}`$ blows up and as $`r<2M`$, the $`g_{tt}`$ and $`g_{rr}`$ suddenly exchange their signatures though the signatures of $`g_{\theta \theta }`$ and $`g_{\varphi \varphi }`$ remain unchanged. This is interpreted by saying that, inside the event horizon, $`r`$ becomes “time like” and $`t`$ becomes “spacelike”. However, we see that actually $`r`$ continues to retain, atleast partially, its spacelike character by continuing to be “invariant circumference radius”. 
Also, note that, if physically measurable quantities like the Rimennian curvature components behaved like $`M/r^3`$ outside the EH, they continue to behave in a similar manner, and not like $`M/t^3`$ inside the EH. And it should be borne in mind here that by a fresh relabelling or by any other means, the curvature components can not be made to assume the form $`M/t^3`$. One particular reason for this is that, we would see later that, inside the EH, we have $`t=\mathrm{}`$ while, of course, the value of $`r`$ remains finite. Thus it may not actually be justified to conclude that $`r`$ becomes the “timelike coordinate” inside the EH even though $`g_{rr}`$ changes its sign. So far, it has not been possible to resolve this enigma of the duality in the behaviour of $`r`$ for $`r<2M`$, and the present paper intends to attend to this problem. Since $`ds`$ is the proper time, we may also write $$ds^2=dt^2\left(1\frac{2M}{r}\right)$$ (2) Therefore, the radial geodesic of a material particle in the Schwarzschild metric becomes, unphysically null ($`ds^2=0`$) and then spacelike ($`ds^2<0`$) as one moves inside the event horizon (EH). In contrast, any physically meaningful coordinate system must be free of such anomalies. Although $`g_{rr}`$ blows up at $`r=2M`$, as mentioned before, the curvature components of the Rimennian tensor behave perfectly normally at $`r=r_g`$, $`R_{kl}^{ij}M/r^3`$. Further, the determinant of the metric coefficients continues to be negative and finite $`g=r^4\mathrm{sin}^2\theta g_{rr}g_{tt}=r^4\mathrm{sin}^2\theta 0`$. Such realizations gave rise to the idea that the Schwarzschild coordinate system suffers from a “coordinate singularity” at the event horizon and must be replaced by some other well behaved coordinate system. It is known that a comoving coordinate system is naturally singularity free and Lemaitre suggested that the region inside $`rr_g`$ may be represented by such a coordinate system whereas the exterior region is still described by the old Schwarzschild coordinates. It is only in 1960 that Kruskal and Szekeres discovered a one-piece coordinate system which can describe both the interior and exterior regions of a BH. They achieved this by means of the following coordinate transformation for the exterior region (Sector I): $$u=f_1(r)\mathrm{cosh}\frac{t}{4M};v=f_1(r)\mathrm{sinh}\frac{t}{4M};r2M$$ (3) where $$f_1(r)=\left(\frac{r}{2M}1\right)^{1/2}e^{r/4M}$$ (4) It would be profitable to note that $$\frac{df_1}{dr}=\frac{r}{8M^2}\left(\frac{r}{2M}1\right)^{1/2}e^{r/4M}$$ (5) And for the region interior to the horizon (Sector II), we have $$u=f_2(r)\mathrm{sinh}\frac{t}{4M};v=f_2(r)\mathrm{cosh}\frac{t}{4M};r2M$$ (6) where $$f_2(r)=\left(1\frac{r}{2M}\right)^{1/2}e^{r/4M}$$ (7) and $$\frac{df_2}{dr}=\frac{r}{8M^2}\left(1\frac{r}{2M}\right)^{1/2}e^{r/4M}$$ (8) Given our adopted signature of spacetime ($`2`$), in terms of $`u`$ and $`v`$, the metric for the entire spacetime is $$ds^2=\frac{32M^3}{r}e^{r/2M}(dv^2du^2)r^2(d\theta ^2+d\varphi ^2\mathrm{sin}^2\theta )$$ (9) The metric coefficients are regular everywhere except at the intrinsic singularity $`r=0`$, as is expected. Note that, the angular part of the metric remains unchanged by such transformations and $`r(u,v)`$ continues to signal its intrinsic spacelike nature. 
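A direct numerical cross-check of the transformation (3)–(5) and of the metric (9) is simple. In the sketch below (Python) the subtractions that appear lost in the rendered formulas are restored (e.g. $`u^2-v^2`$, $`e^{-r/2M}`$, $`1-2M/r`$), and the values of $`M`$, $`r`$, $`t`$ are arbitrary illustrative choices in the exterior region:

```python
import numpy as np

M = 1.0

def f1(r):                     # Eq. (4), exterior region r > 2M
    return np.sqrt(r / (2 * M) - 1.0) * np.exp(r / (4 * M))

def df1_dr(r):                 # Eq. (5)
    return r / (8 * M**2) / np.sqrt(r / (2 * M) - 1.0) * np.exp(r / (4 * M))

def kruskal(r, t):             # Eq. (3)
    return f1(r) * np.cosh(t / (4 * M)), f1(r) * np.sinh(t / (4 * M))

r, t = 3.0, 2.0
u, v = kruskal(r, t)

# Eq. (10): u^2 - v^2 = (r/2M - 1) e^{r/2M}
assert np.isclose(u**2 - v**2, (r / (2 * M) - 1) * np.exp(r / (2 * M)))

# Radial part of the Kruskal metric (9) versus the Schwarzschild form,
# for a displacement (dr, dt) mapped with the exact partial derivatives:
dr, dt = 1e-3, 2e-3
du = df1_dr(r) * np.cosh(t / (4 * M)) * dr + f1(r) / (4 * M) * np.sinh(t / (4 * M)) * dt
dv = df1_dr(r) * np.sinh(t / (4 * M)) * dr + f1(r) / (4 * M) * np.cosh(t / (4 * M)) * dt

ds2_kruskal       = 32 * M**3 / r * np.exp(-r / (2 * M)) * (dv**2 - du**2)
ds2_schwarzschild = (1 - 2 * M / r) * dt**2 - dr**2 / (1 - 2 * M / r)
print(ds2_kruskal, ds2_schwarzschild)   # the two values agree to machine precision
```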
In either region we have $$u^2v^2=\left(\frac{r}{2M}1\right)e^{r/2M}$$ (10) so that $$u^2v^2>1;u/v>\pm 1;r>2M,$$ (11) $$u^2v^20;u=\pm v;r=2M$$ (12) and $$u^2v^2<0;u/v<\pm 1;r<2M$$ (13) So, each of these above three inequalities, and, in particular, the $`r=0`$ point corresponds to not one but two conditions! $$v=\pm (1+u^2)^{1/2}$$ (14) Here, one point needs to be hardly overemphasized; astronomical observations and experiments actually conform to the idea that atleast far from massive bodies or probable BHs, the spacetime is well described by the $`r,t`$ coordinate system. In fact, although in the (normal) physical spacetime, in a spherically symmetric spatial geometry (as defined by the implications of $`r`$ as an “invariant circumference radius”), the physical singularity corresponds to a mathematical point, in the Kruskal world view, this central singularity corresponds to a pair of hyperbolas in the ($`uv`$) plane. While the “+ve” sign of equation corresponds to the central BH singularity, the “-ve” sign corresponds to the singularity inside a so-called White Hole which may spew out mass-energy spontaneously in “our universe”. The white hole singularity belongs to “other universe” whose presence is suggested by the fact that the Kruskal metric remains unaffected by the following additional transformations: $$u=f_1(r)\mathrm{cosh}\frac{t}{4M};v=f_1(r)\mathrm{sinh}\frac{t}{4M};r2M$$ (15) defining Sector (III) and $$u=f_2(r)\mathrm{sinh}\frac{t}{4M};v=f_2(r)\mathrm{cosh}\frac{t}{4M};r2M$$ (16) defining Sector (IV). Thus not only does the region interior to the EH correspond to two different universes, (Sector II and IV) but the structure of the physical spacetime outside the EH, too, effectively corresponds to two universes (Sector I and III). If there exists $`N`$ number of BHs, the (normal) physical spacetime may be much more complex. The aim of this paper is to explicitly verify whether the (radial) geodesics of material particles are indeed timelike at the EH which they must be if this idea of a finite mass Schwarzschild BH is physically correct. First we focus attention on the region $`r2M`$ and differentiate Eq.(3) to see $$\frac{du}{dr}=\frac{u}{r}+\frac{u}{t}\frac{dt}{dr}=\frac{df}{dr}\mathrm{cosh}\frac{t}{4M}+\frac{f}{4M}\mathrm{sinh}\frac{t}{4M}\frac{dt}{dr}$$ (17) Now by using Eq. (4-6) in the above equation, we find that $$\frac{du}{dr}=\frac{ru}{8M^2}(r/2M1)^1+\frac{v}{4M}\frac{dt}{dr};r2M$$ (18) and $$\frac{dv}{dr}=\frac{rv}{8M^2}(r/2M1)^1+\frac{u}{4M}\frac{dt}{dr};r2M$$ (19) By dividing equation (18) by (19), we obtain $$\frac{du}{dv}=\frac{\frac{ru}{2M}+v\frac{dt}{dr}(r/2M1)}{\frac{rv}{2M}+u\frac{dt}{dr}(r/2M1)}$$ (20) Similarly, starting from Eq. (6), we end up obtaining a form of $`du/dv`$ for the region $`r<2M`$ which is exactly similar to the foregoing equation. Now, by using Eq.(12) ($`u=\pm v`$) in Eq. (20), we promptly find that $$\frac{du}{dv}\frac{\frac{\pm r}{2M}+\frac{dt}{dr}(r/2M1)}{\frac{r}{2M}\pm \frac{dt}{dr}(r/2M1)}\pm 1;r2M$$ (21) Thus, we are able to find the precise value of $`du/dv`$ at the EH in a most general manner irrespective of the precise relationship between $`t`$ and $`r`$. Armed with this value of $`du/dv`$, we are in a position now to complete our task by rewriting the radial part of the Kruskal metric ($`d\theta =d\varphi =0`$) as $$ds^2=\frac{32M^3}{r}e^{r/2M}dv^2\left[1\left(\frac{du}{dv}\right)^2\right]$$ (22) Or, $$ds^2=16M^2e^1dv^2(11)=0;r=2M$$ (23) We have found that for the Lemaitre coordinate too, $`ds^2=0`$ at $`r=2M`$. 
This implies that although the metric coefficients can be made to appear regular, the radial geodesic of a material particle becomes null at the event horizon of a finite mass BH in contravention of the basic premises of GTR! And since, now, we can not blame the coordinate system to be faulty for this occurrence, the only way we can explain this result is that the Event Horizon itself corresponds to the physical singularity or, in other words, the mass of the Schwarzschild BHS $`M0`$. And then, the entire conundrum of “Schwarzschild singularity”, “swapping of spatial and temporal characters by $`r`$ and $`t`$ inside the event horizon (when the angular part of all metrics suggest that $`r`$ has a spacelike character even within the horizon), “White Holes” and “Other Universes” get resolved. Here we recall the conjecture of Rosen “so that in this region $`r`$ is timelike and $`t`$ is spacelike. However, this is an impossible situation, for we have seen that $`r`$ defined in terms of the circumference of a circle so that $`r`$ is spacelike, and we are therefore faced with a contradiction. We must conclude that the portion of space corresponding to $`r<2M`$ is non-physical. This is a situation which a coordinate transformation even one which removes a singularity can not change. What it means is that the surface $`r=2M`$ represents the boundary of physical space and should be regarded as an impenetrable barrier for particles and light rays.” This idea of Rosen is also in accordance with the idea of Einstein that the Schwarzschild type singularity is unphysical and can not occur for realistic cases. And this paper indeed shows that in order that the radial worldlines of free falling material particles do not become null at a mere coordinate singularity, Nature (GTR) refuses to have any spacetime within the EH. Although, having made our basic point, we could have ended this paper at this point, for the sake of further insight, we shall study the behaviour of $`ds^2`$ for the entire spacetime by, again assuming, for a moment, the existence of a finite mass BH. It can be found that in the region $`r>2M`$, one would indeed have $`ds^2>0`$ for $`r>2M`$. And to see the behaviour of $`du/dv`$ inside the EH, we recall the relationship between $`t`$ and $`r`$ (see pp. 824 of ref. or pp. 343 of ref.): $$\frac{t}{2M}=\mathrm{ln}\frac{(r_{\mathrm{}}/2M1)^{1/2}+\mathrm{tan}(\eta /2)}{(r_{\mathrm{}}/2M1)^{1/2}\mathrm{tan}(\eta /2)}+2M\left(\frac{r_{\mathrm{}}}{2M}1\right)^{1/2}\left[\eta +\left(\frac{r_{\mathrm{}}}{4M}\right)(\eta +\mathrm{sin}\eta )\right]$$ (24) where the particle is released with zero velocity from $`r=r_{\mathrm{}}`$ at $`t=0`$ and the “cyclic” coordinate $`\eta `$ is defined by $$r=\frac{r_{\mathrm{}}}{2}(1+\mathrm{cos}\eta )$$ (25) Since $`\mathrm{tan}(\eta /2)=(r_{\mathrm{}}/r1)`$ we find from Eq. (24) that, as $`r2M`$, the logarithmic term blows up and $`t\mathrm{}`$, which is a well known result. And since $`t`$ continues to increase as the particle enters the EH, we have the general result that $`t=\mathrm{}`$ for $`r2M`$. In this limit, we have $$\mathrm{cosh}\frac{t}{4M}\mathrm{sinh}\frac{t}{4M}\frac{e^{t/4M}}{2}=\mathrm{}$$ (26) Consequently, even though, $`u^2v^2`$ continues to be finite we obtain $$\frac{u}{v}=\pm 1;r2M$$ (27) Hence we obtain a more general form of Eq. (21) $$\frac{du}{dv}\pm 1;r2M$$ (28) irrespective of the precise form of $`dt/dr`$. Then from Eq. 
(22), we find that the metric would continue to be null for $`r<2M`$: $$ds^2=0;r2M$$ (29) And this unphysical happening is of course avoided when we realize that $`M=0`$ and there is no additional spacetime between the EH and the central singularity. We may mention now that we have recently shown that the OS work too actually suggests that the mass of the resultant BH must be $`M0`$. The basic reason for this assertion is extremely simple. The Eq.(36) of OS paper connects $`t`$ and $`r`$ through a relationship which, for large values of $`t`$ is $$t\mathrm{ln}\frac{y^{1/2}+1}{y^{1/2}1}$$ (30) where at the boundary of the fluid $$y=\frac{r}{r_g}=\frac{r}{2M}$$ (31) Since the argument of a logarithmic function can not be negative, in order that $`t`$ is definable at all that we must have $$y=\frac{r}{2M}1;\frac{2M}{r}1$$ (32) Thus atleast for the collapse of a homogeneous dust, “trapped surfaces” do not form and if the collapse continues to the point $`r0`$ we must have $`M_f0`$. This independent finding is in complete agreement with what we have shown in the present paper that Schwarzschild BHs must have $`M=0`$. Although, there is no modulus here in the argument of the logarithmic of Eq. (30) (unlike Eq. ), some readers may wish there were one. Even if one imagined the existence of such a modulus, one would run into contradiction in the following way. Of course we will have $`t\mathrm{}`$ as $`r2M`$. But during the collapse if one would enter $`r<2M`$ (if $`M>0`$), $`t`$ would start decreasing! However, unlike the case of Newtonian gravity, in GTR, $`M=0`$ state need not correspond to a configuration with zero baryonic mass. The $`M=0`$ state is simply one in which the negative gravitational energy exactly offsets the positive energy associated with $`M_0`$ and internal energy, and may indeed represent a physical singularity with infinite energy density and tidal acceleration. For instance, if the collapse process leads to the $`y=1`$ limit, then the curvature components $`R_{kl}^{ij}M/r^3r^2\mathrm{}`$ as $`r0`$. Note also that, the metric coefficients $`g_{uu}`$ and $`g_{vv}`$ for the zero-mass BH blow up in a similar fashion at the EH. It may be noted that the “naked singularities” too may be characterized by $`M=0`$. In the context of the dust collapse, we see that, for, $`M=0`$, the proper time for the formation of the BH would be infinite $$\tau =\pi \left(\frac{r_{\mathrm{}}^3}{8M}\right)^{1/2}=\mathrm{}$$ (33) Further, we have shown elsewhere that the crucial condition (32), $`y1`$, is valid not only for the OS problem, but also for any generic spherical gravitational collapse. And similarly, $`\tau \mathrm{}`$ as $`r0`$ not only for dust collapse, but also for the collapse of any physical fluid. Thus at any given finite proper time there would be no BH, and on the other hand there could be dynamically collapsing configurations with arbitrary high surface redshifts. In fact it can be found that the proper length of a radial geodesic becomes infinite too. And therefore, even if, such dynamically configurations with large surface red-shifts may be collapsing with relativistic velocities, the collapse process will never terminate in any finite amount of time. This happens because spacetime would get infinitely stretched by infinite curvature near $`r=0`$. This is a purely general relativistic effect, and is difficult to comprehend by “common astronomical sense”. Observationally, such configurations may be identified as Black Holes. 
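The logarithmic divergence underlying Eq. (30) (with the subtraction in $`y^{1/2}-1`$ restored) can be made explicit numerically:

```python
import numpy as np

def t_large(y):
    """Large-t behaviour of Eq. (30): t ~ ln[(sqrt(y)+1)/(sqrt(y)-1)], with y = r/(2M)."""
    return np.log((np.sqrt(y) + 1.0) / (np.sqrt(y) - 1.0))

for y in (2.0, 1.1, 1.01, 1.0001, 1.000001):
    print(f"y = r/2M = {y:<10} t ~ {t_large(y):8.3f}")
# t grows without bound as y -> 1+, and the argument of the logarithm becomes
# negative for y < 1, which is the origin of the bound y >= 1 in Eq. (32).
```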
Moreover, if some of these configurations are collapsing at nearly the free-fall speed, accretion onto them would emit little radiation if the accretion flow happens to be advection dominated. To conclude, irrespective of the observational consequences, we have directly shown that, if GTR is correct, Schwarzschild BHs must have $`M=0`$ in order that the radial geodesics of material particles remain timelike at a finite value of $`r`$.
# 1 Introduction ## 1 Introduction It has been conjectured long ago that quark confinement in $`QCD`$ would arise because the electric flux lines are squeezed into long flux tubes by the condensation of the magnetic charge . This mechanism of confinement is known as the dual Meissner effect, since it is dual, in the sense of electric/magnetic duality, to the confinement of magnetic fluxes, that arises in a type-two superconductor, because of the condensation of the electric charge . Since the electric flux lines would span two-dimensional surfaces embedded into the four-dimensional space-time, the dual Meissner effect leads to an effective theory of $`QCD`$ in terms of closed strings . The absolute confinement of the electric flux requires indeed that the flux line cannot break into an open string. More analytically, the string functional integral would arise as the string solution of the Migdal-Makeenko equation in the large-$`N`$ limit of $`QCD`$ . This is the string program, that has received recently a new revival, most notably because of the implementation in the string setting of the zig-zag symmetry . A distinctive feature of the string program is that the existence of the string is assumed as an ansatz for the solution of the Migdal-Makeenko equation. In fact it is difficult to see how the strings would arise directly in terms of the functional integral over the four-dimensional gauge connections. A remarkable achievement in this direction was the representation of the partition function of the two-dimensional gauge theory as a sum over branched coverings of the two-dimensional space-time . These coverings are interpreted as the string world sheets, giving evidence in favour of the string solution of pure gauge theories. Yet this representation is obtained from the exact result for the two-dimensional partition function, without a direct link to gauge configurations in the functional integral. In an unrelated development, the cotangent bundle of the moduli of holomorphic bundles on a Riemann surface has played a key role in the Seiberg-Witten solution of the Coulomb branch of some four-dimensional supersymmetric gauge theories . Branched covers appear in these solutions as the spectral curves of the characteristic equation associated to a holomorphic one-form that labels cotangent directions to the moduli of holomorphic bundles . The pre-potential, the unique holomorphic function that determines the low-energy effective action of the supersymmetric theory, is constructed by means of the spectral curve. More precisely, certain submanifolds of the cotangent bundle, that correspond to moduli of representations of the fundamental group of the underlying two-dimensional base manifold, admit an integrable fibration by Jacobians of branched coverings of the base two-dimensional manifold, the Hitchin fibration, that in turn is equivalent to assign the pre-potential. While no direct link to physical four-dimensional fields may be attributed to these coverings in the framework of the Seiberg-Witten solution, a link to the string program would possibly arise, if the cotangent bundle of unitary connections in two-dimensions could be embedded into the four-dimensional $`QCD`$ functional integral. Such an embedding was found in . It was found there that the correct variables to define this embedding are neither the four-dimensional gauge connections, $`A`$, nor their dual variables, $`A^D`$, but a partial mixing of them, that correspond to a partial or fiberwise duality transformation . 
The coordinates of the cotangent bundle of unitary connections, $`T^{}𝒜`$, appear naturally as the shift $`A^D=A+\mathrm{\Psi }`$ is performed for two dualized polarizations among the four components of the four-dimensional gauge connection. In addition, it was found in , that there is a dense embedding into the $`QCD`$ functional integral, of an elliptic fibration of the moduli space of parabolic $`K(D)`$ pairs into (an elliptic fibration of) the quotient of the cotangent bundle by the action of the gauge group. The last space admits a Hitchin fibration by the moduli of line bundles over branched spectral covers, thus giving a dense embedding of these objects into the $`QCD`$ functional integral. While in the integrability properties of the Hitchin fibration were used to reduce the problem of computing the functional integral in the large-$`N`$ limit to the evaluation of the saddle-point of a certain effective action that contains the Jacobian of the change of variables to the collective field of the Hitchin fibration, in this paper we shall address the following, more qualitative issue, that relates to the string program. What is the locus in the functional integral of the confining branch of $`QCD`$, that is, what is the locus in the moduli space of parabolic $`K(D)`$ pairs, whose image by the Hitchin map contains only Riemann surfaces spanned by closed strings ? A partial answer to this question was given in . In a physical interpretation of the occurrence of Hitchin bundles in the fiberwise dual functional integral was given, in the light of ’t Hooft concept of Abelian projection . This interpretation identifies the branch points of the spectral covers as magnetic monopoles and the parabolic points as electric charges. Since confinement requires magnetic condensation and ’t Hooft alternative excludes electric condensation, the confining branch is the locus, in the parabolic $`K(D)`$ pairs, whose image by the Hitchin map has no parabolic singularity on the spectral cover , a not completely trivial condition. It should be noticed that this idea is in complete analogy with the two-dimensional case , in which the partition function is localized on branched coverings of the base compact space-time, without parabolic points. In fact the occurrence of parabolic points would imply the presence in the vacuum to vacuum amplitudes of string diagrams with the topology of open strings, a situation that it is appropriate to the Coulomb rather than the confinement phase. This last statement may be exemplified thinking to a sphere with two parabolic points as a topological cylinder, a vacuum diagram of an open string theory. We will find in this paper that the confinement locus is characterized precisely by the condition that the residues of the Higgs current, $`\mathrm{\Psi }`$, on the parabolic divisor be nilpotent. This condition turns out to be equivalent to the existence of a (dense in the large-$`N`$ limit) hyper-Kahler reduction of the cotangent bundle of unitary connections under the action of the gauge group. The confining branch of $`QCD`$ is, therefore, the hyper-Kahler locus of the Hitchin fibration of parabolic bundles, embedded in the $`QCD`$ path integral as prescribed by fiberwise duality. On the other side, this is precisely the locus for which spectral covers with the topology of closed string diagrams, but not open ones, occur in the functional integral. 
The dual mechanism of superconductivity and the string interpretation are therefore compatible, as it should be, and as it has been for long time believed . One more comment. It is a rather strange fact that the same or analogue objects, that are used to construct the Seiberg-Witten solution of four-dimensional $`SUSY`$ theories in the Coulomb branch, appear here as giving rise to a physical string interpretation of the $`QCD`$ functional integral, with an associated hyper-Kahler structure but no supersymmetry. In fact we think that the explanation of this fact has much to do with duality as opposed to supersymmetry. The Seiberg-Witten solution starts from supersymmetry, through the structure theorem for the low-energy effective action, as determined by the pre-potential, and ends up with a non-linear geometric realization of the Abelian electric magnetic/duality of the effective theory in the Coulomb branch, in terms of a Legendre transformation of the pre-potential . We start instead from the non-Abelian duality of the microscopic theory, as defined by the functional integral, to gain, by means of fiberwise duality and the embedding of parabolic bundles, control over the large-$`N`$ limit and a mathematical realization of the dual Meissner effect at the same time. ## 2 The nilpotent condition In this section we show that the spectral covers that are in the image by the Hitchin map of parabolic $`K(D)`$ pairs have no parabolic divisor if and only if the levels of the non-hermitian moment maps are nilpotent on each point of the parabolic divisor. This in turn is a necessary and sufficient condition for the moduli space of parabolic $`K(D)`$ pairs to admit a hyper-Kahler structure. In a special name was used to characterize this closed subspace: parabolic Higgs bundles. In any case the confinement criterium of this paper explains the physical meaning of the hyper-Kahler structure, a mathematical condition whose meaning was suspected to be physically relevant but not elucidated in . Indeed, there it was argued that the two cases of the parabolic $`K(D)`$ pairs and of the parabolic Higgs bundles present equivalent difficulties from the point of view of solving the large-$`N`$ limit, in fact differing by contributions of order of $`\frac{1}{N}`$. We now argue that parabolic Higgs bundles correspond to the confining branch of $`QCD`$ in the fiberwise-dual variables. The functional integral for $`QCD`$ in is defined in terms of the variables $`(A_z,A_{\overline{z}},\mathrm{\Psi }_z,\mathrm{\Psi }_{\overline{z}})`$, obtained by means of a fiberwise duality transformation from $`(A_z,A_{\overline{z}},A_u,A_{\overline{u}})`$, where $`(z,\overline{z},u,\overline{u})`$ are the complex coordinates on the product of two two-dimensional tori, over which the theory is defined. $`(A_z,A_{\overline{z}},\mathrm{\Psi }_z,\mathrm{\Psi }_{\overline{z}})`$ define the coordinates of an elliptic fibration of $`T^{}𝒜`$, the cotangent bundle of unitary connections on the $`(z,\overline{z})`$ torus with the $`(u,\overline{u})`$ torus as a base. 
The set of pairs $`(A,\mathrm{\Psi })`$ that are solutions of the following differential equations (elliptically fibered over the $`(u,\overline{u})`$ torus) is embedded into the space of parabolic $`K(D)`$ pairs : $`F_Ai\mathrm{\Psi }\mathrm{\Psi }`$ $`=`$ $`{\displaystyle \frac{1}{|D|}}{\displaystyle \underset{p}{}}\mu _p^0\delta _pidzd\overline{z}`$ $`\overline{}_A\psi `$ $`=`$ $`{\displaystyle \frac{1}{|D|}}{\displaystyle \underset{p}{}}\mu _p\delta _pdzd\overline{z}`$ $`_A\overline{\psi }`$ $`=`$ $`{\displaystyle \frac{1}{|D|}}{\displaystyle \underset{p}{}}\overline{\mu }_p\delta _pd\overline{z}dz`$ (1) where $`\delta _p`$ is the two-dimensional delta-function localized at $`z_p`$ and $`(\mu _p^0,\mu _p,\overline{\mu }_p)`$ are the set of levels for the moment maps . The space of parabolic $`K(D)`$ pairs consists of a parabolic bundle with a holomorphic connection $`\overline{}_A`$ and a parabolic morphism $`\psi `$. Eq.(1) defines a dense stratification of the functional integral over $`T^{}𝒜`$ because the set of levels is dense everywhere in function space, in the sense of the distributions, as the divisor $`D`$ gets larger and larger. According to Hitchin , there is a Hitchin fibration of parabolic $`K(D)`$ pairs, defined by U(1) bundles over the following spectral cover: $`Det(\lambda 1\mathrm{\Psi }_z)=0`$ (2) The spectral cover depends only from the eigenvalues of $`\mathrm{\Psi }_z`$. The condition that the spectral cover has no parabolic point is therefore the condition that the eigenvalues of $`\mathrm{\Psi }_z`$ have no poles. We notice that the residues of the poles of $`\mathrm{\Psi }_z`$ are determined by the levels of the non-hermitian moment maps. In fact $`\mathrm{\Psi }_z`$ can be made meromorphic with residue at the point $`p`$ conjugated to the level $`\mu _p`$ by means of a gauge transformation $`G`$ in the complexification of the gauge group, that gauges to zero the connection $`\overline{A}_z`$, fiberwise: $`\overline{}\psi {\displaystyle \frac{1}{|D|}}{\displaystyle \underset{p}{}}G\mu _pG^1\delta _pdzd\overline{z}=0`$ $`\overline{\psi }{\displaystyle \frac{1}{|D|}}{\displaystyle \underset{p}{}}\overline{G}^1\overline{\mu }_p\overline{G}\delta _pd\overline{z}dz=0`$ (3) From this equation it follows that the residues of the eigenvalues of $`\mathrm{\Psi }_z`$ are proportional to the eigenvalues of $`\mu _p`$. If the eigenvalues of $`\psi `$ have no poles on the covering, $`\mu _p`$ must have zero eigenvalues and therefore must be nilpotent and vice versa, that is the conclusion looked forward. There is however an apparent puzzle. Though the eigenvalues of $`\psi `$ cannot have poles on the covering if the levels of the non-hermitian moment maps are nilpotent, the traces of powers of $`\mathrm{\Psi }_z`$, that are expressed through symmetric polynomials in the eigenvalues, certainly are meromorphic functions on the torus. How can this happen if the eigenvalues of $`\psi `$ have no poles on the covering? The answer is the following, as we have found by a direct check in the $`SU(2)`$ case. If $`\mu _p`$ is nilpotent, the eigenvalues of $`\mathrm{\Psi }_z`$ have singularities that are not parabolic but that look in the coordinates of the $`z`$ torus branched singularities, for example $`z^{\frac{1}{2}}`$. However we should remind the reader that the eigenvalues of $`\psi `$ are really differentials on the covering. 
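The $`SU(2)`$ statement can be illustrated numerically: with a nilpotent residue the eigenvalues of $`\mathrm{\Psi }_z`$ grow only like $`|z|^{-1/2}`$ near the puncture, while $`\mathrm{tr}\mathrm{\Psi }_z^2`$ still has an ordinary pole; with a diagonalizable residue the eigenvalues themselves have a simple pole. In the sketch below (Python) the regular part $`C`$ is an arbitrary illustrative matrix:

```python
import numpy as np

nilpotent = np.array([[0.0, 1.0], [0.0, 0.0]])     # residue with zero eigenvalues
diagonal  = np.array([[1.0, 0.0], [0.0, -1.0]])    # diagonalizable residue
C = np.array([[0.3, -0.2], [0.5, 0.1]])            # smooth part of Psi_z (illustrative)

def max_eigenvalue(residue, z):
    psi = residue / z + C
    return np.max(np.abs(np.linalg.eigvals(psi)))

for name, residue in (("nilpotent", nilpotent), ("diagonalizable", diagonal)):
    print(name)
    for z in (1e-2, 1e-4, 1e-6):
        lam = max_eigenvalue(residue, z)
        tr2 = np.trace((residue / z + C) @ (residue / z + C))
        print(f"  |z| = {z:.0e}   max|lambda| = {lam:10.3e}   |tr Psi^2| = {abs(tr2):10.3e}")

# For the nilpotent residue max|lambda| scales like |z|^{-1/2} (a branch point,
# smooth as a differential on the double cover) while tr Psi^2 keeps a simple
# 1/z pole; for the diagonalizable residue the eigenvalues scale like 1/z.
```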
Therefore $`z^{\frac{1}{2}}`$ should be really interpreted as $`z^{\frac{1}{2}}dz`$, that is $`d(z^{\frac{1}{2}})`$, that is, in fact, smooth on a simply branched covering. There is no singularity on the covering. Yet, symmetric powers of the eigenvalues of $`\mathrm{\Psi }_z`$ may have meromorphic singularities on the torus. It remains to show that if the residue of $`\psi `$ is nilpotent the quotient is hyper-Kahler. This is a known result . This concludes our proof. In fact a slightly stronger statement holds. If the residues of the Higgs field are nilpotent, Eq.(1) can be interpreted as the vanishing condition for the moment maps of the action of the compact $`SU(N)`$ gauge group on the pair $`(A,\mathrm{\Psi })`$ and on the cotangent space of flags . The quotient under the action of the compact gauge group of the set: $`F_Ai\mathrm{\Psi }\mathrm{\Psi }{\displaystyle \frac{1}{|D|}}{\displaystyle \underset{p}{}}\mu _p^0\delta _pidzd\overline{z}=0`$ $`\overline{}_A\psi {\displaystyle \frac{1}{|D|}}{\displaystyle \underset{p}{}}n_p\delta _pdzd\overline{z}=0`$ $`_A\overline{\psi }{\displaystyle \frac{1}{|D|}}{\displaystyle \underset{p}{}}\overline{n}_p\delta _pd\overline{z}dz=0`$ (4) with fixed eigenvalues of the hermitian moment map is, by a general result , the same as the quotient defined by the complex moment maps: $`\overline{}_A\psi {\displaystyle \frac{1}{|D|}}{\displaystyle \underset{p}{}}n_p\delta _pdzd\overline{z}=0`$ $`_A\overline{\psi }{\displaystyle \frac{1}{|D|}}{\displaystyle \underset{p}{}}\overline{n}_p\delta _pd\overline{z}dz=0`$ (5) under the action of the complexification of the gauge group. ## 3 Conclusions Our conclusion is that if $`QCD`$ confines the electric charge the functional integral in the fiberwise dual-variables defined in must be localized on the hyper-Kahler locus of parabolic $`K(D)`$ pairs, the parabolic Higgs bundles. This space is characterized by a nilpotent residue of the Higgs current. These are precisely the parabolic $`K(D)`$ pairs whose image by the Hitchin map contain spectral covers arbitrarily branched, but with no parabolic points. The physical interpretation is that there is a monopole condensate in the vacuum but no electric condensate and only closed electric strings occur into vacuum to vacuum diagrams.
# A Regularization Scheme for the AdS/CFT Correspondence ## Abstract The prescription of the AdS/CFT correspondence is refined by using a regularization procedure, which makes it possible to calculate the divergent local terms in the CFT two-point function. We present the procedure for the example of the scalar field. It has been stated in most papers on this subject that the correspondence between a fields theory on anti-de Sitter space (AdS) and a conformal field theory (CFT) on its boundary is formally described by the formula $$\mathrm{e}^{I_{AdS}[\varphi ]}=\mathrm{exp}d^dx\varphi _0(x)𝒪(x),$$ (1) where the action $`I_{AdS}`$ is calculated on-shell for a field configuration $`\varphi `$ satisfying a Dirichlet condition on the AdS boundary and the boundary value $`\varphi _0`$ couples as a current to the conformal field $`𝒪`$ living on the AdS boundary. Thus, the formula (1) enables one to calculate correlation functions of the field $`𝒪`$ in the boundary conformal field theory. There are two subtle points, which this formal description does not address. First, a Dirichlet boundary value problem in general is not well defined for anti-de Sitter space, since a generic field does not propagate to the boundary. This point has been addressed by formulating the theory with a boundary lying inside the anti-de Sitter space. Using this approach the two-, three- and four-point functions of various fields have been calculated (see for a recent comprehensive list of references). Secondly, although the $`ϵ`$-prescription yields the non-contact contributions to the correlators in agreement with conformal field theory, the singular contributions for coincidence points have so far escaped a direct calculation. Previous work has focused on obtaining a finite action by interpreting the correlators as distributions and regularizing the action by adding boundary counter terms . Two loop corrections to Super Yang Mills correlators have been calculated in and contribute only to the contact terms. However, we think that a regularization scheme must include a prescription on how to calculate the divergent contact contributions before any counterterms are added. The aim of this letter is to provide a refined prescription of the AdS/CFT correspondence, with which non-local and local terms of the CFT two-point function can be calculated. We would like to emphasize that it is not our aim to obtain regularized CFT correlation functions. We shall use the conventional representation of anti-de Sitter space by the space $`x^i`$, ($`i=1,\mathrm{}d`$), $`x^0>0`$ with the metric $$ds^2=(x^0)^2dx^\mu dx^\mu .$$ (2) Let us consider the example of a scalar field , which satisfies the equation of motion $$(^2m^2)\varphi (x)=\left[x_0^2_\mu _\mu x_0(d1)_0m^2\right]\varphi (x)=0.$$ (3) A solution of equation (3) can be written in the form $$\varphi (x)=d^dyK(x,𝐲)\varphi _ϵ(𝐲),$$ where $`\varphi _ϵ(𝐲)`$ is some boundary field and the bulk-boundary kernel is given by $$K(x,𝐲)=\frac{d^dk}{(2\pi )^d}\left(\frac{x_0}{ϵ}\right)^{\frac{d}{2}}\frac{K_\alpha (kx_0)}{K_\alpha (kϵ)}\mathrm{e}^{i𝐤(𝐱𝐲)\mu k^2}.$$ (4) Here, $`K_\alpha `$ is a modified Bessel function (Mac Donald function) and $`\alpha `$ is related to the mass parameter by $$\alpha =\sqrt{\frac{d^2}{4}+m^2}.$$ In addition, we have introduced the regulating factor $`\mathrm{e}^{\mu k^2}`$ in order to make the integral well defined for all values of $`x_0`$ and $`𝐱𝐲`$. In the limit $`\mu 0`$ equation (4) reduces to the standard Dirichlet kernel. 
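The role of the regulator can be made concrete with a small numerical sketch (Python, boundary dimension $`d=1`$ and illustrative values of $`\alpha `$, $`ϵ`$, $`\mu `$; the regulating exponent is taken as $`e^{-\mu k^2}`$): on the cut-off boundary $`x_0=ϵ`$ the regulated kernel is a normalized Gaussian of width $`2\sqrt{\mu }`$, i.e. a mollified delta function, as appropriate for a Dirichlet kernel, and the profile spreads as $`x_0`$ moves into the bulk:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

# Illustrative parameters; d = 1 keeps the Fourier integral one-dimensional.
alpha, eps, mu = 1.0, 0.01, 1e-5      # mu of order eps^2, as anticipated in the text

def kernel(x0, dx):
    """Regulated kernel (4) for d = 1:
       int dk/(2pi) (x0/eps)^{1/2} K_alpha(|k| x0)/K_alpha(|k| eps) exp(ik dx - mu k^2)."""
    f = lambda k: (x0 / eps) ** 0.5 * kv(alpha, k * x0) / kv(alpha, k * eps) \
                  * np.cos(k * dx) * np.exp(-mu * k**2)
    val, _ = quad(f, 0.0, 20.0 / np.sqrt(mu), limit=400)
    return val / np.pi                 # the k<0 half contributes an equal cosine part

for dx in (0.0, 0.005, 0.02):
    exact = np.exp(-dx**2 / (4 * mu)) / np.sqrt(4 * np.pi * mu)
    print(f"dx = {dx:5.3f}   K(eps, dx) = {kernel(eps, dx):10.3f}   Gaussian = {exact:10.3f}")

print(f"K(2*eps, 0) = {kernel(2 * eps, 0.0):.3f}   (broader, smaller peak off the boundary)")
```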
It will turn out that the limits $`\mu 0`$ and $`ϵ0`$ should be taken simultaneously with $`\mu `$ being of order $`ϵ^2`$. The CFT two-point function is determined by the boundary normal derivative of the kernel (4), which is given by $$_0K(x,𝐲)|_ϵ=\frac{1}{ϵ}\frac{d^dk}{(2\pi )^d}\left\{\frac{d}{2}\alpha +k\frac{}{k}\mathrm{ln}\left[(kϵ)^\alpha K_\alpha (kϵ)\right]\right\}\mathrm{e}^{i𝐤(𝐱𝐲)\mu k^2}.$$ (5) Consider first the case of non-coincident points $`𝐱`$ and $`𝐲`$. As without regularization, we use the expansion $$z^\alpha K_\alpha (z)=2^{\alpha 1}\mathrm{\Gamma }(\alpha )\left[1+\underset{j=1}{\overset{\mathrm{}}{}}\frac{1}{j!(1\alpha )_j}\left(\frac{z}{2}\right)^{2j}\frac{\mathrm{\Gamma }(1\alpha )}{\mathrm{\Gamma }(1+\alpha )}\left(\frac{z}{2}\right)^{2\alpha }\underset{j=0}{\overset{\mathrm{}}{}}\frac{1}{j!(1+\alpha )_j}\left(\frac{z}{2}\right)^{2j}\right],$$ (6) where we used the notation $`(a)_j=\mathrm{\Gamma }(a+j)/\mathrm{\Gamma }(a)`$. Proceeding to expand the logarithm in equation (5) one obtains $$z\frac{}{z}\mathrm{ln}\left[z^\alpha K_\alpha (z)\right]=\frac{z^2}{2(1\alpha )}+\mathrm{}\frac{\mathrm{\Gamma }(1\alpha )}{\mathrm{\Gamma }(1+\alpha )}2^{12\alpha }\alpha z^{2\alpha }+\mathrm{},$$ (7) where the first set of dots indicates analytic terms of order $`z^{2n}`$ ($`n>1`$) and the second set higher order non-analytic terms. Substituting equation (7) into equation (5) we recognize integrals of the type $`{\displaystyle \frac{d^dk}{(2\pi )^d}k^\beta \mathrm{e}^{i𝐤𝐱\mu k^2}}`$ $`={\displaystyle \frac{|𝐱|^{1\frac{d}{2}}}{(2\pi )^{\frac{d}{2}}}}{\displaystyle \underset{0}{\overset{\mathrm{}}{}}}𝑑kk^{\frac{d}{2}+\beta }J_{\frac{d}{2}1}(k|𝐱|)\mathrm{e}^{\mu k^2}`$ $`={\displaystyle \frac{\mathrm{\Gamma }\left(\frac{d+\beta }{2}\right)}{2^d\pi ^{\frac{d}{2}}\mathrm{\Gamma }\left(\frac{d}{2}\right)\mu ^{\frac{d+\beta }{2}}}}\mathrm{\Phi }({\displaystyle \frac{d+\beta }{2}};{\displaystyle \frac{d}{2}};{\displaystyle \frac{|𝐱|^2}{4\mu }}).`$ (8) Here, $`\mathrm{\Phi }(a,c,z)`$ is the degenerate hypergeometric function. For $`𝐱0`$ we can take the $`\mu 0`$ limit and replace $`\mathrm{\Phi }`$ with the leading term of its asymptotic expansion , which yields $$\frac{d^dk}{(2\pi )^d}k^\beta \mathrm{e}^{i𝐤𝐱\mu k^2}\stackrel{\mu 0}{=}\frac{2^\beta \mathrm{\Gamma }\left(\frac{d+\beta }{2}\right)}{\pi ^{\frac{d}{2}}\mathrm{\Gamma }\left(\frac{\beta }{2}\right)}\frac{1}{|𝐱|^{d+\beta }}.$$ (9) We notice that for the analytic terms, $`\beta =2n`$, the gamma function in the denominator diverges, so the integral becomes zero. The same holds for the subleading terms of the asymptotic expansion. Hence, the analytic terms do not contribute to the finite distance two-point function. On the other hand, for the leading non-analytic term, $`\beta =2\alpha `$, we obtain (still for $`𝐱𝐲`$) $$_0K(x,𝐲)|_ϵ\stackrel{\mu 0}{=}2\alpha c_\alpha \frac{ϵ^{2\alpha 1}}{|𝐱𝐲|^{2\mathrm{\Delta }}},$$ (10) where $`\mathrm{\Delta }=d/2+\alpha `$ and $`c_\alpha =\mathrm{\Gamma }(\mathrm{\Delta })/(\pi ^{d/2}\mathrm{\Gamma }(\alpha ))`$. The higher order non-analytic terms in equation (7) contribute with higher powers of $`ϵ`$ and can be neglected in the $`ϵ0`$ limit. Hence, we find agreement with previous results . A nice and simple check of this procedure is provided by the conformally coupled scalar field, $`\alpha =1/2`$, where the calculation can be done exactly. Let us now turn to the local terms in the two-point function. 
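Before the coincident-point case, the $`\mu 0`$ behaviour of Eqs. (8)–(9) can be checked numerically. The sketch below (Python) evaluates the radial form of Eq. (8) by quadrature and compares it with the limiting power law of Eq. (9); the values of $`d`$, $`\beta =2\alpha `$ and $`|𝐱|`$ are illustrative, and the signs of the Gaussian regulator and of the $`\mathrm{\Gamma }(-\beta /2)`$ in the denominator are restored as assumed here:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, gamma as G

d, beta, x = 4, 2.6, 1.0      # beta = 2*alpha with a non-integer alpha (illustrative)

def regulated(mu):
    """Radial form of Eq. (8):
       |x|^{1-d/2}/(2pi)^{d/2} * int dk k^{d/2+beta} J_{d/2-1}(k|x|) exp(-mu k^2)."""
    kmax = np.sqrt(60.0 / mu)             # exp(-mu k^2) is negligible beyond this
    integrand = lambda k: k**(d / 2 + beta) * jv(d / 2 - 1, k * x) * np.exp(-mu * k**2)
    val, _ = quad(integrand, 0.0, kmax, limit=800)
    return x**(1 - d / 2) / (2 * np.pi)**(d / 2) * val

# mu -> 0 limit, Eq. (9), with Gamma(-beta/2) in the denominator
limit = 2**beta * G((d + beta) / 2) / (np.pi**(d / 2) * G(-beta / 2)) / x**(d + beta)

for mu in (3e-2, 1e-2, 3e-3):
    print(f"mu = {mu:.0e}   regulated = {regulated(mu):+.5e}   mu->0 limit = {limit:+.5e}")
# The regulated value approaches the limiting power law as mu decreases.
```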
In this case, it is more useful not to carry out the expansion (7), since all its terms would contribute because of $`\mathrm{\Phi }(a,c,0)=1`$. Consider instead the expression (5) directly with $`𝐱=𝐲`$. Using the identity $$z\frac{}{z}\mathrm{ln}[z^\alpha K_\alpha (z)]=z\frac{K_{\alpha 1}(z)}{K_\alpha (z)}$$ and changing the integration variable to $`𝐬=\sqrt{\mu }𝐤`$ we find $$_0K(x,𝐱)|_ϵ=\frac{1}{ϵ(4\pi \mu )^{\frac{d}{2}}}\left[\frac{d}{2}\alpha \frac{2a}{\mathrm{\Gamma }\left(\frac{d}{2}\right)}𝑑ss^d\mathrm{e}^{s^2}\frac{K_{\alpha 1}(as)}{K_\alpha (as)}\right],$$ (11) where the parameter $`a`$ denotes the ratio $`a=ϵ/\sqrt{\mu }`$. The $`s`$-integral is well-defined for all positive $`a`$, although in general not elementary. However, we know that it represents some positive number. Hence, we obtain $$_0K(x,𝐱)|_ϵ=\frac{\gamma }{ϵ(4\pi \mu )^{\frac{d}{2}}},$$ (12) where $`\gamma `$ is a regularization dependent parameter satisfying $$\alpha \frac{d}{2}<\gamma <\mathrm{}.$$ Moreover, if $`2\alpha <d`$, i.e. for $`d^2<4m^2<0`$, one could determine $`a`$ such that $`\gamma =0`$. For example, in the conformally coupled case, $`\alpha =1/2`$, one finds $$a_{\frac{1}{2}}=\frac{(d1)\mathrm{\Gamma }\left(\frac{d}{2}\right)}{2\mathrm{\Gamma }\left(\frac{d+1}{2}\right)}.$$ The contribution (12) to the two-point function can be compensated by adding a local counterterm $$I_c=\frac{\gamma }{2}d^dxϵ^d[\varphi _ϵ(𝐱)]^2$$ (13) to the AdS action of a scalar field. However, one must notice that this only eliminates the contact term, but does not regularize the two-point function. In conclusion, we have presented in this letter a refined prescription of the AdS/CFT correspondence, which invokes a regularization scheme in order to calculate the contact contributions to the CFT correlators. Our calculation shows that in general there is such a contribution to the two-point function, but it can be compensated by adding a covariant local surface term to the AdS action. Alternatively, it might serve as a regulator for the CFT two-point function and thus eliminate the need to add counterterms. We think that this is an interesting topic for further study. This work was supported in part by a grant from NSERC. W. M. is grateful to Simon Fraser University for financial support.
# Universal Finite Size Scaling Functions in the 3D Ising Spin Glass ## Model and FSS method — We consider the $`3D`$ Edwards–Anderson model, whose Hamiltonian is $$\mathcal{H}=-\sum _{\langle xy\rangle }\sigma _xJ_{xy}\sigma _y$$ (1) where $`\sigma _x`$ are Ising spins on a simple cubic lattice of linear size $`L`$ with periodic boundaries, and $`J_{xy}`$ are independent random interactions taking the values $`\pm 1`$ with probability $`\frac{1}{2}`$. The sum runs over pairs of nearest neighbor sites. Let $`\xi (T,L)`$ be a suitably defined finite–volume correlation length, and let $`𝒪(T,L)`$ be any singular observable, such as $`\xi (T,L)`$ itself or the spin–glass susceptibility (see below). Then FSS theory predicts that $$\frac{𝒪(T,L)}{𝒪(T,\infty )}=f_𝒪\left(\xi (T,\infty )/L\right),$$ (2) where $`f_𝒪`$ is a universal function and corrections to FSS are neglected. From Eq. (2) one obtains the relation $$\frac{𝒪(T,2L)}{𝒪(T,L)}=F_𝒪\left(\xi (T,L)/L\right),$$ (3) where $`F_𝒪`$ is another universal function and only finite–volume observables are involved. Our approach works as follows (see Ref. for details). We make MC runs at numerous pairs $`(T,L)`$, $`(T,2L)`$ and we plot $`𝒪(T,2L)/𝒪(T,L)`$ versus $`\xi (T,L)/L`$. If all these points fall with good accuracy on a single curve — thus verifying the Ansatz (3) — we choose a smooth fitting function $`F_𝒪`$. Then, using the functions $`F_\xi `$ and $`F_𝒪`$, we extrapolate the pair $`(\xi ,𝒪)`$ iteratively from $`L\to 2L\to 2^2L\to \cdots \to \infty `$. ## Computational details — We simulate the model in Eq.(1) with the heat–bath algorithm. We measure $`q_x=\sigma _x\tau _x`$ and $`q=L^{-3}\sum _xq_x`$ from two independent replicas $`(\sigma ,\tau )`$ with the same $`J_{xy}`$. We choose as a definition of $`\xi (T,L)`$ the second-moment correlation length $$\xi (T,L)=\frac{\left[S(0)/S(p)-1\right]^{1/2}}{2\mathrm{sin}(|p|/2)}$$ (4) where $`S(k)`$ is the Fourier transform $$S(k)=\sum _re^{ikr}\langle q_xq_{x+r}\rangle ,$$ (5) (arguments $`T,L`$ are omitted) and $`p=(0,0,2\pi /L)`$ is the smallest non–zero wave vector . The spin–glass susceptibility is $`\chi _{SG}(T,L)\equiv L^3\langle q^2\rangle =S(0)`$. The symbol $`\langle \cdots \rangle `$ represents a double average over thermal noise and $`J_{xy}`$, which is estimated from $`N_s`$ samples with different $`J_{xy}`$. The runs are done on a Cray T3E parallel computer with a fast code that exploits the parallelism of spin glass simulations. The binary variables $`\sigma _x`$ and $`J_{xy}`$ at corresponding sites of 64 samples (each represented by a single bit) are stored in a 64-bit integer variable, and 64 $`\sigma _x`$’s are updated simultaneously with only 31 logical instructions and one random number . Average speed on a single processor (PE) is $`4.5\times 10^7`$ spin updates per second (DEC Alpha EV5, 600 MHz). The PEs are arranged in a virtual parallelepiped along whose axes we can distribute independent groups of 64 samples, different “slices” of a large lattice, and different temperatures. We typically used 32 to 128 PEs. Equilibration of the runs is verified with the criterion introduced in Ref. . The sizes simulated range from $`L=4`$ to $`L=48`$, from which we form 104 pairs $`(T,L)`$, $`(T,2L)`$. In Table I some parameters of the simulations are given. The equivalent of about 2 years of computer time on a single PE was employed. ## FSS analysis — In Fig. 1 we show that, within our statistical accuracy, the FSS Ansatz (3) is well verified for $`𝒪=\chi _{SG}`$ and $`𝒪=\xi `$.
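As an aside, the second-moment estimator defined in Eqs. (4)-(5) takes only a few lines of NumPy. The sketch below is our own illustration: it uses throw-away, weakly correlated overlap configurations rather than actual heat-bath output, and adopts the standard estimator $`S(k)=L^{-3}\langle |\widehat{q}(k)|^2\rangle `$, which is consistent with Eq. (5).

```python
# Second-moment correlation length, Eqs. (4)-(5):
#   S(k) = (1/L^3) < |sum_x q_x e^{-ik.x}|^2 >,   p = (0, 0, 2*pi/L)
#   xi   = sqrt(S(0)/S(p) - 1) / (2 sin(|p|/2))
# 'samples' is an iterable of L^3 overlap configurations q_x = sigma_x * tau_x.
import numpy as np

def xi_second_moment(samples, L):
    S0, Sp, n = 0.0, 0.0, 0
    for q in samples:                        # q: (L, L, L) array of +-1 values
        qhat = np.fft.fftn(q)
        S0 += np.abs(qhat[0, 0, 0])**2 / L**3
        Sp += np.abs(qhat[0, 0, 1])**2 / L**3   # k = (0, 0, 2*pi/L)
        n += 1
    S0, Sp = S0 / n, Sp / n                  # double (thermal + disorder) average
    return np.sqrt(S0 / Sp - 1.0) / (2.0 * np.sin(np.pi / L))

# toy usage: weakly correlated +-1 configurations, giving a short xi by construction
rng = np.random.default_rng(0)
L = 8
def toy_sample():
    g = rng.standard_normal((L, L, L))
    for ax in range(3):                      # cheap nearest-neighbour smoothing
        g = g + np.roll(g, 1, axis=ax)
    return np.sign(g)

print(xi_second_moment([toy_sample() for _ in range(200)], L))
```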
No systematic deviations from the curves are detectable, but data at $`L=4`$, not displayed in Fig. 1, are significantly outside the curves for $`\xi (T,L)/L\equiv x>0.2`$. We verified that other observables, such as the Binder ratio, also satisfy Eq.(3). We emphasize that FSS was not assumed a priori and that Eq.(3) contains no adjustable parameters. Furthermore, no particular dependence of $`\xi `$ and $`\chi _{SG}`$ on $`T`$ was assumed. We fit the data in Fig. 1 to two functions $`F_{\chi _{SG}}`$, $`F_\xi `$ of the form $`F(x)=1+\sum _{i=1}^na_i\mathrm{exp}(-i/x)`$, obtaining good fits with $`n=3`$ or $`4`$ (goodness of fit parameter $`Q>0.9`$). Using $`F_{\chi _{SG}}`$, $`F_\xi `$, we then compute $`\chi _{SG}(T,\infty )`$, $`\xi (T,\infty )`$ with the iterative procedure described above. In Table II we show that extrapolations from different $`L`$ are consistent, providing a test of the method. In our final analysis, we take the weighted average of the extrapolations from different $`L`$. An implicit assumption of the iterative procedure is that the Ansatz (3) with a given function $`F_𝒪`$ will continue to hold as $`L\to \infty `$. This assumption could fail if the system exhibits a crossover at large $`L`$, as in any FSS analysis. However, as shown in Table II, extrapolations at $`T=1.4084`$ from small $`L`$ are consistent with data from large $`L`$, which have little or no finite-size effects. We therefore believe that a crossover is unlikely. In order to test for systematic errors due to corrections to FSS, we repeated the analysis excluding $`L=5,6`$ from the fits of $`F_{\chi _{SG}}`$, $`F_\xi `$ and we found that extrapolated data change within their error bars. We have a good control on the extrapolated data up to $`\xi \approx 140`$; at lower temperatures the statistical errors become quite large, and the data are more sensitive to the region of high $`x`$, where there are few data from large $`L`$. (The largest $`x`$ used for the extrapolations is $`x=0.57`$, from $`T=1.2059,L=5`$). In Fig. 2 we show that with our extrapolated data Eq.(2) is satisfied remarkably well, providing a further test of the method. If $`𝒪\propto \xi ^{\gamma _𝒪/\nu }`$ as $`\xi \to \infty `$, then $`f_𝒪(x)`$ in Eq.(2) must satisfy $`f_𝒪(x)\propto x^{-\gamma _𝒪/\nu }`$ as $`x\to \infty `$. As shown in Fig. 2 (insets), our curves indeed have a power-law asymptotic decay, with negative slopes $`-\gamma _𝒪/\nu `$: $`-2.30\pm 0.08`$ in Fig. 2(a) and $`-1`$ in Fig. 2(b). We emphasize the universality of the scaling functions in Fig. 1 and 2. It would be interesting to determine the same functions for different distributions of the $`J_{xy}`$, in order to test for possible violations of universality . ## Nature of the phase transition — We now compare our extrapolated data with the following scenarios: (i) a $`T_c\ne 0`$ continuous phase transition; (ii) a line of critical points terminating at $`T_c\ne 0`$, with an exponential divergence as $`T\to T_c^+`$; (iii) an exponential divergence at $`T=0`$. The last two scenarios imply a lower critical dimension exactly equal to three. (i) We fit our data to $`\xi (T)`$ $`=`$ $`c_\xi (T-T_c)^{-\nu }\left[1+a_\xi (T-T_c)^\theta \right]`$ (6) $`\chi _{SG}(\xi )`$ $`=`$ $`b\xi ^{2-\eta }\left[1+d\xi ^{-\mathrm{\Delta }}\right]`$ (7) with fixed correction–to–scaling exponents $`\theta `$ and $`\mathrm{\Delta }`$ . In the fit we include data with $`\xi \ge \xi _m`$, varying $`\xi _m`$ in order to test the stability of the fits.
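The iterative $`L\to 2L\to \cdots \to \infty `$ extrapolation itself is only a few lines; a schematic version follows. The coefficients $`a_i`$ below are made-up placeholders (the actual fitted $`F_\xi `$, $`F_{\chi _{SG}}`$ are not reproduced here), so the output is purely illustrative of the procedure.

```python
# Schematic FSS extrapolation of (xi, chi_SG) to L -> infinity, using fitted
# scaling functions F(x) = 1 + sum_i a_i exp(-i/x) of Eq. (3):
#   xi(T, 2L)  = xi(T, L)  * F_xi (xi(T, L)/L)
#   chi(T, 2L) = chi(T, L) * F_chi(xi(T, L)/L)
# The coefficients below are invented placeholders, not the paper's fit values.
import numpy as np

A_XI  = [0.9, 1.7, -0.4]      # hypothetical a_i for F_xi
A_CHI = [2.1, 3.0,  0.8]      # hypothetical a_i for F_chi

def F(x, coeffs):
    return 1.0 + sum(a * np.exp(-(i + 1) / x) for i, a in enumerate(coeffs))

def extrapolate(xi_L, chi_L, L, tol=1e-10, max_doublings=60):
    """Iterate the pair (xi, chi) toward the thermodynamic limit."""
    xi, chi = xi_L, chi_L
    for _ in range(max_doublings):
        x = xi / L
        if F(x, A_XI) - 1.0 < tol:         # scaling functions have saturated
            break
        xi, chi = xi * F(x, A_XI), chi * F(x, A_CHI)
        L *= 2
    return xi, chi

print(extrapolate(xi_L=3.2, chi_L=25.0, L=8))
```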
Without the corrections to scaling ($`a_\xi =d=0`$), the quality of fits is good for $`\xi _m>34`$ ($`Q1`$), but fit parameters (noticeably $`T_c,\nu `$ and $`\eta `$) show small systematic variations with $`\xi _m`$ in the whole range available. Including the corrections, we obtain excellent and stable fits with $`1\theta 2`$ and $`1\mathrm{\Delta }1.5`$, the preferred values being $`\theta =1.4`$ ($`Q>0.6`$) and $`\mathrm{\Delta }=1.3`$ ($`Q>0.98`$). Our estimates for the fitting parameters are $`T_c=1.156\pm 0.015`$, $`\nu =1.8\pm 0.2`$, $`\eta =0.26\pm 0.04`$, $`c_\xi =0.7\pm 0.2`$, $`a_\xi =0.5\pm 0.3`$, $`b=3.3\pm 0.3`$ and $`d=0.9\pm 0.1`$, where the errors take into account the uncertainties on $`\theta `$ and $`\mathrm{\Delta }`$. We then obtain $`\gamma =\nu (2\eta )=4.1\pm 0.5`$. As shown in Fig. 3 and 4, corrections to scaling are important for $`\xi 10`$ . Since the fits do not include the analytic corrections to scaling, $`\mathrm{\Delta }`$ and $`\theta `$ should be regarded as “effective” exponents. For comparison, we quote some estimates from other MC works: $`T_c=1.175\pm 0.025`$ , $`1.11\pm 0.04`$ , $`1.13\pm 0.06`$ , $`1.19\pm 0.01`$ ; $`\nu =1.3\pm 0.1`$ , $`1.20\pm 0.04`$ , $`1.7\pm 0.3`$ , $`2.00\pm 0.15`$ , $`1.33\pm 0.05`$ ; $`\eta =0.22\pm 0.05`$ , $`0.35\pm 0.05`$ , $`0.30\pm 0.06`$ , $`0.37\pm 0.04`$ , $`0.22\pm 0.02`$ (notice that in Ref. a gaussian distribution of the bonds was considered). (ii) We fit our data to $$\xi (T)=f_\xi \mathrm{exp}\left(g_\xi /(TT_c)^\sigma \right)$$ (8) testing the fit stability as above. The fits are excellent with $`\xi _m1.3`$ but, due to strong correlations between $`\sigma `$ and $`T_c`$, the errors on the fit parameters are large. For $`\xi _m=1.9`$ the best fit gives $`\sigma =0.20\pm 0.05`$, $`T_c=1.13\pm 0.02`$, $`f_\xi =(1.0\pm 0.2)\times 10^3`$, $`g_\xi =7\pm 2`$ ($`Q=0.77`$). Notice, however, that any power-law can be approximated by an exponential with sufficiently small $`\sigma `$. For $`\xi _m=3.8`$ the best fit (shown in Fig. 4) gives $`\sigma =0.5\pm 0.3`$, $`T_c=1.08\pm 0.04`$, $`f_\xi =(1.1\pm 0.8)\times 10^1`$, $`g_\xi =2.4\pm 1.5`$ ($`Q=0.69`$). The deviations of the data from this fit for $`\xi <3`$ are consistent with corrections to scaling of $`10\%`$. In general, in the presence of an exponential singularity we expect multiplicative logarithmic corrections to Eq.(7). Our data fit well to $$\chi _{SG}(\xi )=b_l\xi ^{2\eta _l}\left(\mathrm{log}\xi \right)^r$$ (9) for $`\xi _m>2`$, giving $`b_l=1.30\pm 0.03`$, $`\eta _l=0.36\pm 0.03`$, $`r=0.36\pm 0.06`$ ($`Q>0.9`$) (see also Fig. 3). (iii) When we fit our data to $$\xi (T)=f_\xi \mathrm{exp}\left(g_\xi /T^\sigma \right)$$ (10) we find that $`\sigma `$ increases continuously with $`\xi _m`$, from $`\sigma 3`$ to $`\sigma 9`$ . Even assuming that $`\sigma `$ stabilizes for higher $`\xi `$, we believe that a value $`\sigma >9`$ is implausibly large. In fact, Eq. (10) implies a renormalization group (RG) transformation $`dT/dlT^{\sigma +1}`$ ($`e^l`$ being the RG scale factor), while for $`T0`$ (at the lower critical dimension) we expect $`dT/dl=a_2T^2+a_3T^3+\mathrm{}`$ ($`a_2=0`$ in the phenomenological RG theory of Ref. ). To conclude, we have shown that FSS is verified in the $`3D`$ Ising spin glass and that the correlation length diverges at a finite temperature. Whether this is a conventional continuous phase transition (in which case the lower critical dimension is probably close to three) or a transition to a line of critical points, is still not known. 
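Fits of the kind used in scenarios (i)-(iii) are straightforward to set up; the sketch below illustrates the procedure for Eq. (6) with the correction-to-scaling exponent held fixed, using synthetic data generated near the quoted best-fit values rather than the actual extrapolated $`\xi (T)`$.

```python
# Sketch of a fit to xi(T) = c (T - Tc)^(-nu) [1 + a (T - Tc)^theta], theta fixed,
# in the spirit of Eq. (6).  Synthetic data only; "true" values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

THETA = 1.4                                        # fixed correction exponent

def xi_model(T, c, Tc, nu, a):
    dT = T - Tc
    return c * dT**(-nu) * (1.0 + a * dT**THETA)

rng = np.random.default_rng(1)
T = np.linspace(1.25, 2.2, 25)
true = dict(c=0.7, Tc=1.15, nu=1.8, a=0.5)
xi = xi_model(T, **true) * (1.0 + 0.02 * rng.standard_normal(T.size))

popt, pcov = curve_fit(xi_model, T, xi, p0=[1.0, 1.1, 1.5, 0.0],
                       sigma=0.02 * xi, absolute_sigma=True,
                       bounds=([0.1, 0.8, 0.5, -2.0], [5.0, 1.24, 3.0, 2.0]))
print(dict(zip(["c", "Tc", "nu", "a"], popt)))
print(np.sqrt(np.diag(pcov)))                      # one-sigma parameter errors
```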
We thank A. Pelissetto, A.P. Young and O.C. Martin for useful discussions. This work was supported by the INFM Parallel Computing Initiative.
# March 1999 UM-P-99/07 KIAS-P99023 UDELHP-99/101 Closing the Neutrinoless Double Beta Decay Window into Violations of the Equivalence Principle and/or Lorentz Invariance ## I Introduction During the past few years, neutrino oscillations have been used to explore exotic properties of neutrinos such as possible violations of Lorentz invariance (VLI) and/or violations of the equivalence principle (VEP). Since neutrinoless double beta decay has served as a window into neutrino masses for the last two decades , it is natural to enquire if this rare decay can tell us anything about VLI and VEP processes. We begin with the observation that the properties of neutrinos enter into the neutrino exchange diagram for the neutrinoless double beta decay amplitude (without right handed currents) in the form of the factor $$A^\nu =(1-\gamma _5)P(1-\gamma _5),$$ (1) where P is a linear combination of the propagators for each of the Majorana neutrino fields that constitute the $`\nu _e`$ field, $$P=\sum _aU_{ea}^2P_a.$$ (2) The $`U_{ea}`$ are elements of the unitary matrix connecting mass and weak eigenstate neutrinos. In the absence of external fields, and in the absence of Lorentz invariance violation, the Majorana fields have definite masses, $`m_a`$, and all neutrinos have the same limiting velocity, $`c`$. In that case the $`P_a`$ are, of course, given by $$P_a=(\gamma ^0E-\gamma ^kp_kc+m_ac^2)^{-1}.$$ (3) ## II Modifications Due to Violation of Lorentz Invariance. In the Lorentz invariance violating scheme of Coleman and Glashow, the limiting velocity of each neutrino may be distinct, so that $`c\to c_a`$. Our conclusions are unaltered by the introduction of distinct mass and velocity bases, and for the sake of clarity such a complication will be ignored. To first order in $`m_a`$, the VLI modified $`A^\nu `$ is then given by $$A_{VLI}^\nu \simeq -\sum _a\frac{2U_{ea}^2(1-\gamma _5)m_ac_a^2}{E^2-(pc_a)^2}$$ (4) where the chirality factors have been used to eliminate contributions from factors of $`E\gamma ^0`$ or $`\gamma ^kp_k`$ in the numerator. For neutrinoless double beta decay in nuclei we usually make the approximation of ignoring nuclear recoil, so the energy of the exchanged neutrino is set equal to zero, that is $`E=0`$. With this standard approximation, $$A_{VLI}^\nu \simeq \sum _a2U_{ea}^2(1-\gamma _5)m_a/p^2.$$ (5) Since this expression is independent of limiting velocities, we conclude that VLI cannot enter into neutrinoless double beta decay in any significant way. ## III Modifications Due to Violation of the Equivalence Principle. Following the formalism of Ref., the Dirac equation governing neutrino $`a`$ is modified by the presence of an external gravitational field which couples to neutrinos with strength $`f_a`$ relative to the usual universal Newtonian coupling. For the sake of clarity, we take the mass and gravitational coupling bases to be the same. In the presence of a constant Newtonian potential, $`\mathrm{\Phi }`$, the neutrino propagator becomes $$P_a=[(1+f_a\mathrm{\Phi })E\gamma ^0-(1-f_a\mathrm{\Phi })\gamma ^kp_k+m_a]^{-1}$$ (6) where we have set the common limiting vacuum (i.e. $`\mathrm{\Phi }=0`$) velocity equal to 1.
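The key algebraic step of Sec. II, namely that the chirality factors eliminate the $`E\gamma ^0`$ and $`\gamma ^kp_k`$ pieces so that only the mass term survives and the limiting velocity drops out at $`E=0`$, can be checked directly with explicit Dirac matrices. The short numerical sketch below is our own illustration (Dirac representation; the values of $`E`$, $`p`$, $`m_a`$ and $`c_a`$ are arbitrary).

```python
# Check that (1 - gamma5) P_a (1 - gamma5), with P_a from Eq. (3), is proportional
# to (1 - gamma5) times the mass term: the gamma^0 E and gamma.p pieces are killed
# by the chirality factors, and at E = 0 the limiting velocity cancels to O(m_a).
# Dirac representation; all numbers are arbitrary test values.
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g0 = np.block([[I2, 0 * I2], [0 * I2, -I2]])
gk = [np.block([[0 * I2, s], [-s, 0 * I2]]) for s in (sx, sy, sz)]
g5 = np.block([[0 * I2, I2], [I2, 0 * I2]])
chi = np.eye(4) - g5                     # the (1 - gamma5) chirality factor

E, p, m, c = 0.0, 0.3, 1e-3, 0.97        # zero-recoil approximation: E = 0
pvec = np.array([0.0, 0.0, p])

D = g0 * E - c * sum(pk * gki for pk, gki in zip(pvec, gk)) + m * c**2 * np.eye(4)
A = chi @ np.linalg.inv(D) @ chi         # the factor entering the decay amplitude

coef = -2 * m * c**2 / (E**2 - p**2 * c**2 - m**2 * c**4)
print(np.allclose(A, coef * chi))        # True: only the mass term survives
print(coef, 2 * m / p**2)                # at E = 0, to first order in m_a: c_a drops out
```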
To first order in $`m_a`$ we then have, making use of the chirality factors as before, $$A_{VEP}^\nu \simeq -\sum _a\frac{2U_{ea}^2(1-\gamma _5)m_a}{(1+f_a\mathrm{\Phi })^2E^2-(1-f_a\mathrm{\Phi })^2p^2}.$$ (7) Making the zero recoil approximation as above and retaining terms only to first order in $`\mathrm{\Phi }`$ for consistency, we then have $$A_{VEP}^\nu \simeq \sum _a2U_{ea}^2(1-\gamma _5)m_a(1+2f_a\mathrm{\Phi })/p^2.$$ (8) The expression above includes only the modification to the neutrino propagator due to the presence of $`\mathrm{\Phi }`$. To this we must also add modifications to the W-boson, quark and electron lines due to $`\mathrm{\Phi }`$. Assuming that only neutrinos have anomalous gravitational couplings, restoration of gravitational gauge invariance in the limit that all $`f_a`$ are equal guarantees that the final $`\mathrm{\Phi }`$-dependent contribution depends only upon $`\mathrm{\Delta }f_a=f_a-f_0`$, where $`f_0=1`$ in Einsteinian gravity. The $`\mathrm{\Phi }`$-dependent contribution to the total neutrinoless double beta decay rate will then be proportional to $`\sum _aU_{ea}^2m_a\mathrm{\Delta }f_a`$. Thus, the VEP effect is proportional to both $`m_a`$ and $`\mathrm{\Delta }f_a`$ and is therefore extremely small. ## IV Conclusions We have examined the modifications to the usual neutrino exchange diagram for the neutrinoless double beta decay amplitude arising from violations of Lorentz invariance and/or the equivalence principle in the neutrino sector. We find that the VLI parameters disappear from the decay amplitude in the usual zero recoil approximation and that the VEP parameters enter the amplitude only in combination with neutrino mass factors. We therefore conclude that neutrinoless double beta decay cannot provide a significant window into VLI and VEP neutrino processes. This result appears to contradict the conclusions of a recent paper . ###### Acknowledgements. A.H. would like to thank the particle theory group at The University of Melbourne and the Korean Institute for Advanced Study for their hospitality during a portion of this work. He also thanks C.N. Leung for discussions during the early stages of this work. This work was supported in part by the US Department of Energy grant DE-FG02-84ER40163. R.R.V. was supported by the Australian Research Council.
# Complexation of DNA with Cationic Surfactant ## Abstract Transfection of an anionic polynucleotide through a negatively charged membrane is an important problem in genetic engineering. The direct association of cationic surfactant to DNA decreases the effective negative charge of the nucleic acid, allowing the DNA-surfactant complex to approach a negatively charged membrane. The paper develops a theory for solutions composed of polyelectrolyte, salt, and ionic surfactant. The theoretical predictions are compared with the experimental measurements. PACS.05.70.Ce - Thermodynamic functions and equations of state PACS.61.20.Qg - Structure of associated liquids: electrolytes, molten salts, etc. PACS.61.25.Hq - Macromolecular and polymer solutions; polymer melts; swelling Gene therapy has increasingly captured public attention after the first gene transfer study in humans was completed in 1995. The procedure delivers a functional polynucleotide sequence into the cells of an organism affected by a genetic disorder. The gene delivery system that has been adopted in over $`90\%`$ of the clinical trials to date is in the form of genetically engineered non-replicating retroviral or adenoviral vectors. Unfortunately, the adverse response of the immune system has hindered application of virus based gene therapy. New strategies are now being explored . One of the approaches, pioneered by Felgner and Ringold, relies on association between the anionic nucleic acid and cationic lipid liposomes. The process of association neutralizes the excess negative charge of a polynucleotide, allowing the DNA-lipid complex to approach a negatively charged phospholipid membrane. Unfortunately, the cationic lipids and surfactants are toxic to an organism. A question that we will try to answer in this letter is: What is the minimum amount of cationic surfactant or lipid that is necessary to form a complex and how does this amount depends on various properties of a system? We study a solution consisting of DNA segments of density $`\rho _{\mathrm{DNA}}`$, surfactants of density $`\rho _{\mathrm{surf}}`$, and salt molecules of density $`\rho _{\mathrm{salt}}`$. The solvent is idealized as a uniform medium of dielectric constant $`D`$. Since the DNA molecule has a large intrinsic rigidity, we model it as a cylinder of fixed length and diameter. When in solution, the $`Z`$ phosphate groups of the DNA strand become ionized, resulting in a net molecular charge, $`Zq`$. An equivalent number of counterions of density $`Z\rho _{\mathrm{DNA}}`$ are released into solution preserving the overall charge neutrality. Similarly, the cationic surfactant molecule in aqueous solution becomes ionized, producing a free negative ion and a flexible chain consisting of one positively charged hydrophilic head group and a neutral hydrophobic tail. The ions of salt, the counterions, and the negative ions dissociated from the surfactant are modeled as hard spheres with point charge located at the center. For simplicity, we shall call the negative ions, “coions”, and the positive ions, “counterions” — independent of the species from which they are derived (see Figure $`1`$). The strong electrostatic attraction between the counterions, cationic surfactant, and the DNA favors formation of clusters consisting of one DNA molecule and $`n_{\mathrm{count}}`$ associated counterions, and $`n_{\mathrm{surf}}`$ associated surfactants. 
The process of association neutralizes $`n_{\mathrm{surf}}+n_{\mathrm{count}}`$ phosphate groups of a DNA molecule, decreasing the net charge of a complex to $`q_{\mathrm{complex}}=(Z-n_{\mathrm{surf}}-n_{\mathrm{count}})q`$ (Figure $`1`$). Our task is to determine the values of $`n_{\mathrm{count}}`$ and $`n_{\mathrm{surf}}`$ which are thermodynamically favored, i.e. which minimize the overall Helmholtz free energy of solution. For a dilute suspension, the main contributions to the free energy can be subdivided into three parts: the energy that it takes to construct an isolated complex, $`F_{\mathrm{association}}`$; an energy that it takes to solvate this complex in the ionic sea, $`F_{\mathrm{solvation}}`$; and the entropic energy of mixing, $`F_{\mathrm{mixing}}`$. To calculate the free energy of an isolated cluster, we use the following simplified model of a complex. Each monomer of a polyion is treated as free or occupied by a counterion or a surfactant (Figure $`2`$). We associate with each monomer $`i`$ two occupation variables $`\sigma (i)`$ and $`\tau (i)`$, such that $`\sigma (i)=1`$ if the site is occupied by a condensed counterion, and $`\sigma (i)=0`$ otherwise. The occupation number for surfactants, $`\tau (i)`$, behaves in a similar way. The free energy can now be calculated as a logarithm of the Boltzmann sum over all possible configurations of condensed counterions and surfactants along the polyion, $`\beta F_{\mathrm{association}}`$ $`=`$ $`-\mathrm{ln}{\displaystyle \sum _\nu }e^{-\beta E_\nu }.`$ (1) The energy of a given configuration $`\nu `$ can be subdivided into two parts, $`E_\nu =E_1+E_2`$, where the electrostatic contribution is, $$E_1=\frac{q^2}{2}\sum _{ij}\frac{[-1+\sigma (i)+\tau (i)][-1+\sigma (j)+\tau (j)]}{D|r(i)-r(j)|}.$$ (2) The energy $`E_2`$ arises from hydrophobicity of surfactant molecules. Clearly, when two adjacent sites are occupied by surfactants, the net exposure of hydrocarbon tails to water is reduced. We capture this effect by introducing an additional contribution to the overall energy of interaction, $`E_2`$, given by $$E_2=-\frac{\chi }{2}\sum _{<ij>}\tau (i)\tau (j),$$ (3) where the sum runs over the nearest neighbors. The parameter $`\chi `$, related to the decrease in overall energy due to the agglomeration of surfactants, is obtained from an independent experimental measurement of the energy that is required to move one surfactant molecule from a monolayer to bulk. The exact solution of even this one-dimensional “sub-problem” is rather difficult due to the long-ranged character of the Coulomb force. To proceed we could use a mean-field approximation, but while the mean-field theory works very well for long-range potentials, for one-dimensional systems with short-range forces, it can lead to unphysical instabilities . In order to avoid this difficulty, we treat the long-range electrostatic part of the association free energy using a mean-field approximation, while performing an exact calculation for the short-range hydrophobic interaction. Once a cluster, constructed in isolation, is introduced into solution, it gains an additional energy due to electrostatic interactions with the other entities. The free energy gained in the process of solvation can be obtained using the Debye-Hückel theory. Let us fix the position of one cluster and ask what is the electrostatic potential $`\mathrm{\Phi }`$ that this cluster feels as a result of the presence of all the other clusters, surfactants, counterions, and coions.
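To make the split between the exactly summed short-range part and the mean-field long-range part concrete, the sketch below sums the one-dimensional three-state problem (free, counterion, surfactant) with the nearest-neighbor attraction of Eq. (3) by a 3x3 transfer matrix. The single-site weights $`w_c`$, $`w_s`$ stand in for the mean-field electrostatic and chemical-potential factors and are hypothetical numbers, not values from this work.

```python
# Exact 1D sum over occupation configurations {free, counterion, surfactant}
# with a nearest-neighbour surfactant-surfactant attraction (Eq. 3), done with
# a 3x3 transfer matrix.  Site weights w_c, w_s lump together the mean-field
# electrostatic terms and chemical potentials (placeholders here).
import numpy as np

def site_statistics(beta_chi, w_c, w_s):
    w = np.array([1.0, w_c, w_s])        # weights: 0 = free, 1 = counterion, 2 = surfactant
    T = np.zeros((3, 3))
    for s in range(3):
        for t in range(3):
            bond = beta_chi if (s == 2 and t == 2) else 0.0   # -chi per surfactant pair
            T[s, t] = np.sqrt(w[s] * w[t]) * np.exp(bond)
    evals, evecs = np.linalg.eigh(T)     # T symmetric by construction
    lam, v = evals[-1], evecs[:, -1]     # Perron eigenvalue / eigenvector
    prob = v**2 / np.sum(v**2)           # site-state probabilities (infinite chain)
    f_per_site = -np.log(lam)            # beta * free energy per monomer
    return prob, f_per_site

# coverage grows with the hydrophobic coupling at fixed (hypothetical) site weights
for beta_chi in (0.0, 2.0, 4.0):
    prob, f = site_statistics(beta_chi, w_c=0.8, w_s=0.6)
    print(beta_chi, "P(surfactant) =", round(prob[2], 3))
```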
To answer this question, consider the Poisson equation, $`\nabla ^2\mathrm{\Phi }`$ $`=`$ $`-{\displaystyle \frac{4\pi }{D}}\rho _q.`$ (4) To make the problem well posed, this equation must be provided with a closure which would relate the electrostatic potential, $`\mathrm{\Phi }`$, to the net charge density, $`\rho _q`$. A simple closure motivated by ideas derived from the Debye-Hückel theory is to suppose that the free (unassociated) surfactants, counterions, and coions are distributed around the complex in accordance with the Boltzmann distribution, with other clusters providing a neutralizing background, $`\rho _q`$ $`=`$ $`q_{\mathrm{complex}}\rho _{\mathrm{DNA}}+q\rho _{\mathrm{count}}e^{-\beta q\mathrm{\Phi }}-q\rho _{\mathrm{coion}}e^{+\beta q\mathrm{\Phi }}+q\rho _{\mathrm{surf}}e^{-\beta q\mathrm{\Phi }},`$ (5) where $`\beta =1/(k_BT)`$. Inserting (5) into (4) we obtain the Poisson-Boltzmann equation, which after linearization reduces to the familiar Helmholtz form. The linearization is justified since all the non-linearities are effectively included in the renormalization of DNA charge by the formation of clusters. The Helmholtz equation can be solved analytically, yielding the electrostatic potential of a complex. The electrostatic free energy of solvation is obtained through the usual Debye charging process. The free energy due to mixing of various species is a sum of individual entropic contributions, $$F_{mixing}=F_{counterion}+F_{coion}+F_{surfactant}+F_{complex}.$$ (6) The structure of each one of these terms is similar to that of an ideal gas and can be calculated using Flory’s theory of polymer melts. For example, in the case of coions, the reduced free energy density is $`\beta F_{\mathrm{coion}}/V=\rho _{\mathrm{coion}}\mathrm{ln}(\varphi _{\mathrm{coion}}/\zeta )-\rho _{\mathrm{coion}}`$, where $`\varphi _{\mathrm{coion}}`$ is the volume fraction occupied by the negative microions, while $`\zeta `$ is a factor that takes into account the internal structure of each species. For structureless particles such as coions and counterions, $`\zeta =1`$. For a flexible linear surfactant chain, $`\zeta `$ is the number of monomers comprising a molecule . For a complex made of a DNA segment and condensed surfactants and counterions, $`\zeta `$ is related to the number of different configurations which can arise when $`n_{count}`$ counterions and $`n_{surf}`$ surfactant molecules associate to a DNA molecule. Minimizing the total free energy, $$F=F_{\mathrm{mixing}}+F_{\mathrm{association}}+F_{\mathrm{solvation}},$$ (7) with respect to the number of associated counterions and surfactants, $$\frac{\partial F}{\partial n_{\mathrm{count}}}=\frac{\partial F}{\partial n_{\mathrm{surf}}}=0,$$ (8) we find the thermodynamically preferred values for the number of condensed particles. We define a “surfoplex” to be a complex in which almost all of the DNA’s phosphate groups are neutralized by the associated surfactant molecules. As mentioned in the introduction, we are interested in the minimum amount of cationic surfactant needed to transform naked DNA into surfoplexes. To this effect, we study the dependence of the number of condensed surfactant molecules, $`n_{\mathrm{surf}}`$, on the bulk concentration of surfactant. Figure 3 demonstrates the location of the cooperative binding transition associated with the formation of surfoplexes. The transition is a result of competition between the entropic, the electrostatic, and the hydrophobic interactions.
Clearly, for small concentrations of surfactant, binding to the polyions is not thermodynamically favored, since the system can lower its free energy by keeping the surfactant in the bulk, thus gaining entropy. As the density of surfactant increases, the gain in electrostatic energy, due to binding, outweighs the loss of entropy due to confinement which, in any case, is largely compensated by the release of bound counterions. The cooperativity predicted by our theory and observed in experiments is due to the fact that once the first surfactant is bound, the binding of additional surfactants is strongly favored by the decrease in hydrophobic energy of the exposed hydrocarbon tails. The high degree of cooperativity allows us to clearly define how much surfactant is necessary to form a surfoplex. To check the predictions of our theory, we compare it to the recent experimental measurements of the binding isotherm in a solution of DNA, dodecyltrimethylammonium bromide, and salt (Figure $`3`$). The agreement is encouraging, especially since the theory does not have any fitting parameters. ACKNOWLEDGMENTS We would like to acknowledge helpful conversations with Profs. K.A. Dawson, Michael E. Fisher, and H. E. Stanley. This work was supported in part by CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico and FINEP - Financiadora de Estudos e Projetos, Brazil.
# Compression modes in nuclei: microscopic models with Skyrme interactions ## Introduction The determination of the nuclear incompressibility $`K`$ is still a matter of debate, despite a remarkable number of works on the subject<sup>1)</sup>. In the present contribution, we present self-consistent calculations of the nuclear collective modes associated with a compression and expansion of the nuclear volume, namely the isoscalar giant monopole and dipole resonances (ISGMR and ISGDR, respectively). In fact, we share the point of view that the most reliable way to extract information on $`K`$ is to perform that kind of calculations, having as the only phenomenological input a given effective nucleon-nucleon interaction, and choose the value of $`K`$ corresponding to the force which can reproduce the experimental properties of the compression modes in finite nuclei. The ISGMR, or “breathing mode”, is excited by the operator $`_{i=1}^Ar_i^2`$ and it has been identified in many isotopes along the chart of nuclei already two decades ago. However, this systematics has never allowed an unambigous determination of $`K`$<sup>1)</sup>. This was one of the motivations for the recent experimental program undertaken at the Texas A& M Cyclotron Institute, which has allowed the extraction of experimental data for the ISGMR of better quality as compared to the past, by means of the analysis of the results of inelastic scattering of 240 MeV $`\alpha `$-particles. We refer to other contributions in these proceedings for reports on these experimental data<sup>2)</sup>. Monopole strength functions turn out to be quite fragmented for nuclei lighter than <sup>90</sup>Zr. For nuclei like <sup>208</sup>Pb, <sup>144</sup>Sm, <sup>116</sup>Sn and <sup>90</sup>Zr, however, one is able to identify a single peak which, together with a high-energy extended tail, exhausts essentially all the monopole Energy Weighted Sum Rule (EWSR). These medium-heavy nuclei are, therefore, those suited for the extraction of information about the nuclear incompressibility and we concentrate ourselves on them in the present work. The ISGDR is excited by the operator $`_ir_i^3Y_{10}`$ and corresponds to a compression of the nucleus along a definite direction, so that it has been called sometimes the “squeezing mode”. Although some first indication about the energy location of this resonance dates back to the beginning of the eighties, a more clear indication about its strength distribution in <sup>208</sup>Pb has been reported only recently<sup>3)</sup>. Measurements have been done also for other nuclei, namely <sup>90</sup>Zr, <sup>116</sup>Sn and <sup>144</sup>Sm (as in the case of the giant monopole resonance). There is some expectation that the study of this mode can help to shed some light on the problem of nuclear incompressibility. Actually, at first sight this compressional mode seems to provide us with a new problem. A simple assumption like the scaling model (illustrated for the present purposes in Ref.<sup>4)</sup>) would lead to two different values of the finite-nucleus incompressibility $`K_A`$ if applied to the ISGMR and the ISGDR with the input of their experimental energies. The hydrodynamical model gives two results which are closer<sup>4)</sup> but which still make us wonder about the validity of methods based on extracting $`K_A`$ and extrapolating it to large values of $`A`$, for the determination of $`K`$. 
This points again to the necessity of reliable microscopic calculations of the compressional modes, in order to reproduce the experimental data and extract the value of $`K`$ from the properties of the force which is used. Our calculations are performed within the framework of self-consistent Hartree-Fock (HF) plus Random-Phase Approximation (RPA). We use effective forces of Skyrme type<sup>5-7)</sup> and we look at their predictions for the properties of ISGMR and ISGDR. The parametrizations we employ span a large range of values for $`K`$ (from 200 MeV to about 350 MeV). In particular, we focus mainly on two original aspects: firstly, we look at the effects of pairing correlations in open-shell nuclei; secondly, we study if the picture obtained at mean-field level is altered by the inclusion of the coupling of the giant resonances to more complicated nuclear configurations. This inclusion is necessary if one wishes to understand theoretically all the contributions to the resonance width and may shift the resonance centroid. About the first aspect, it is well known that pairing correlations are important in general to explain the properties of ground states and low-lying excited states in open-shell nuclei. Since we wish to see how these correlations affect in particular the compressional modes, we take them into account by extending the HF-RPA approach to a quasi-particle RPA (QRPA) on top of a HF-BCS calculation. About the second aspect, we recall that if we start from a description of the giant resonance as a superposition of one particle-one hole (1p-1h) excitations, in their damping process we must take care of the coupling with states of 2p-2h character. They are in fact known to play a major role and give rise to the spreading width $`\mathrm{\Gamma }^{}`$ of the giant resonance which is usually a quite large fraction of the total width. Within mean field theories, only the width associated with the resonance fragmentation (Landau width) and the escape width $`\mathrm{\Gamma }^{}`$ are included (the latter, provided that 1p-1h configurations with the particle in the continuum are considered). In the past, we have developed a theory in which all the contributions to the total width of giant resonances are consistently treated and we have obtained satisfactory results when applying it to a number of cases. In particular, we will recall what has been obtained<sup>8)</sup> for the case of the ISGMR in <sup>208</sup>Pb. We also report about a new calculation for the ISGDR in the same nucleus. ## Formalism: a brief survey For all nuclei we consider, we solve the HF equations on a radial mesh and, in the case of the open-shell isotopes, we solve HF-BCS equations. A constant pairing gap $`\mathrm{\Delta }`$ is introduced (for neutrons in the case of <sup>116</sup>Sn and for protons in the case of <sup>90</sup>Zr and <sup>144</sup>Sm), and at each HF iteration the quasi-particle energies, the occupation factors and the densities to be input at the next iteration are determined accordingly. $`\mathrm{\Delta }`$ is obtained from the binding energies of the neighboring nuclei<sup>9)</sup>. The states included in the solution of the HF-BCS equations are those below a cutoff energy given by $`\lambda _{HF}+8.3`$ MeV ($`\lambda _{HF}`$ being the HF Fermi energy), in analogy with the procedure of Ref.<sup>10)</sup>. Using the above self-consistent mean fields we work out the RPA or QRPA equations (respectively on top of HF or HF-BCS), in their matrix form. 
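For reference, the constant-gap BCS occupation factors that enter such an HF-BCS iteration follow from the standard expressions. The sketch below is ours, with a made-up single-particle spectrum, degeneracies and particle number; it also applies the quoted pairing window $`\lambda _{HF}+8.3`$ MeV and fixes the chemical potential from the particle-number condition, which is the standard prescription rather than a detail stated explicitly in the text.

```python
# Constant-gap BCS occupations for a set of single-particle levels:
#   E_k   = sqrt((eps_k - lam)^2 + Delta^2)
#   v_k^2 = 0.5 * (1 - (eps_k - lam)/E_k),  with  sum_k (2j_k + 1) v_k^2 = N.
# Levels, degeneracies, N and Delta below are illustrative placeholders.
import numpy as np
from scipy.optimize import brentq

eps = np.array([-18.0, -15.5, -12.0, -9.5, -7.8, -4.0, -1.5, 2.0])  # MeV
deg = np.array([2, 4, 6, 2, 8, 4, 6, 2])                            # 2j + 1
N, Delta = 20, 1.2                                                   # particles, gap (MeV)

lam_HF = eps[np.cumsum(deg) >= N][0]      # crude HF Fermi-energy estimate
keep = eps <= lam_HF + 8.3                # pairing window quoted in the text

def v2(lam):
    E = np.sqrt((eps[keep] - lam)**2 + Delta**2)
    return 0.5 * (1.0 - (eps[keep] - lam) / E)

number = lambda lam: np.sum(deg[keep] * v2(lam)) - N
lam = brentq(number, eps.min() - 30.0, eps.max() + 30.0)
print("lambda =", round(lam, 3), "MeV")
print("v^2    =", np.round(v2(lam), 3))
```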
Discrete positive energy states are obtained by diagonalizing the mean field on a harmonic oscillator basis and they are used to build the 1p-1h (or 2 quasi-particles) basis coupled to $`J^\pi `$=0<sup>+</sup> or 1<sup>-</sup>. The dimension of this basis is chosen in such a way that more than 95% (typically 97-99%) of the appropriate EWSR is exhausted in the RPA or QRPA calculation. More details, especially on the way the QRPA equations are implemented, will be given in Ref.<sup>11)</sup>. As mentioned in the previous section, in the case of <sup>208</sup>Pb we perform also calculations that go beyond this simple discrete RPA. This is done along the formalism described in Ref.<sup>12)</sup>, which is recalled here only very briefly. We label by $`Q_1`$ the space of discrete 1p-1h configurations in which the RPA equations are solved. To account for the escape width $`\mathrm{\Gamma }^{}`$ and spreading width $`\mathrm{\Gamma }^{}`$ of the giant resonances, we build two other orthogonal subspaces $`P`$ and $`Q_2`$. The space $`P`$ is made of particle-hole configurations where the particle is in an unbound state orthogonal to all the discrete single-particle levels; the space $`Q_2`$ is built with the configurations which are known to play a major role in the damping process of giant resonances: these configurations are 1p-1h states coupled to a collective vibration. Using the projection operator formalism one can easily find that the effects of coupling the subspaces $`P`$ and $`Q_2`$ to $`Q_1`$ are described by the following effective Hamiltonian acting in the $`Q_1`$ space: $`(E)Q_1HQ_1`$ $`+`$ $`W^{}(E)+W^{}(E)`$ $`=Q_1HQ_1`$ $`+`$ $`Q_1HP{\displaystyle \frac{1}{EPHP+iϵ}}PHQ_1`$ $`+`$ $`Q_1HQ_2{\displaystyle \frac{1}{EQ_2HQ_2+iϵ}}Q_2HQ_1,`$ where $`E`$ is the excitation energy. For each value of $`E`$ the RPA equations corresponding to this effective, complex Hamiltonian $`(E)`$ are solved and the resulting sets of eigenstates enable us to calculate all relevant quantities, in particular the strength function associated with a given operator. To evaluate the matrix elements of $`W^{}`$, we calculate the collective phonons with the same effective interaction used for the giant resonance we are studying (within RPA), and we couple these phonons with the 1p-1h components of the giant resonance by using their energies and transition densities. ## Results for the isoscalar monopole resonance As recalled in the introduction, Youngblood et al.<sup>2)</sup> have recently measured the ISGMR strength distribution with fairly good precision, in the nuclei <sup>90</sup>Zr, <sup>116</sup>Sn, <sup>144</sup>Sm and <sup>208</sup>Pb. In their work, they also compare the experimental centroid energies with the calculations of Blaizot et al.<sup>13)</sup> performed by using RPA and employing the finite-range Gogny effective interaction: a value of the nuclear incompressibility $`K`$ = 231 MeV is deduced. In the following, we denote as centroid energy the ratio $`E_0m_1/m_0`$ ($`m_0`$ and $`m_1`$ being the non-energy-weighted and energy-weighted sum rules, respectively). If we try to compare the experimental values with calculations done at the same RPA level but using the zero-range Skyrme effective interactions, we can infer a different conclusion with respect to the value of $`K`$. Among the Skyrme type interactions, the parametrization which gives probably the best account of the experimental centroid energies in the nuclei studied by the authors of Ref.<sup>2)</sup>, is the SGII force<sup>6)</sup>. 
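Given a discrete set of (Q)RPA energies and transition strengths, the centroid $`E_0\equiv m_1/m_0`$ and the fraction of the EWSR quoted throughout reduce to simple moments; a minimal sketch with placeholder numbers follows.

```python
# Moments of a discrete strength distribution:
#   m_k = sum_n (E_n)^k |<n|F|0>|^2 ,   centroid E0 = m1/m0,
#   EWSR fraction = m1 / m1(sum-rule value).
# Energies, strengths and the sum-rule value are placeholders, not RPA output.
import numpy as np

E = np.array([12.4, 14.1, 15.3, 16.8, 19.0])       # excitation energies (MeV)
B = np.array([150.0, 420.0, 910.0, 260.0, 80.0])   # |<n|F|0>|^2 (e.g. fm^4 for r^2)
ewsr_theory = 3.1e4                                # double-commutator value (placeholder)

m0 = np.sum(B)
m1 = np.sum(E * B)
print("centroid m1/m0 =", round(m1 / m0, 2), "MeV")
print("EWSR exhausted =", round(100 * m1 / ewsr_theory, 1), "%")
```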
The results are shown in Table 1. The force SGII is characterized by a value of the nuclear incompressibility $`K`$ = 215 MeV: since it reproduces very well the ISGMR centroid energy in <sup>208</sup>Pb, and it slightly overestimates those in the other isotopes, one would conclude that $`K`$ is of the order of or slightly less than 215 MeV. This conclusion is inferred by means of simple RPA. It is of course legitimate to wonder if calculations beyond this simple approximation could lead to different values of the nuclear incompressibility. We first consider the effect of pairing correlations. In the case of <sup>116</sup>Sn, the centroid energy of 17.18 MeV obtained with the SGII force in RPA, becomes 17.19 MeV if one turns to QRPA. A very small shift is found also when other forces are used (for instance, with the recently proposed SLy4 force<sup>7)</sup>, one obtains 17.51 MeV and 17.59 MeV for RPA and QRPA, respectively) and when other nuclei are considered. In general, although we know that pairing correlations play a crucial role not only to explain the ground-state of open-shell nuclei but also their low-lying excited states, it appears that they do not affect so much the giant resonances like the ISGMR (or ISGDR, anticipating results of the next section) which lie at relatively high excitation energy compared to the pairing gap $`\mathrm{\Delta }`$. Civitarese et al.<sup>14)</sup> found also small shifts (of the order of 100-150 keV) for the ISGMR and ISGQR when pairing correlations are taken into account: this shift is larger than that obtained in the present work, but it is the result of a different (non self-consistent) model. The present conclusion for the nuclear incompressibility is therefore similar to that obtained by Hamamoto et al.<sup>15)</sup>, since they find that the Skyrme interaction which provides the best results for the ISGMR is the SKM parametrization and this is very similar to SGII (the associated nuclear incompressibility being 217 MeV). Our study, however, is done in a more general framework since we have analyzed also the role of pairing correlations. If we finally consider the results of calculations beyond mean field<sup>8)</sup> (which include not only the continuum coupling but also the coupling with the 2p-2h type states) performed for <sup>208</sup>Pb we find that it is also possible to reproduce rather well the total width of the ISGMR, which is around 3 MeV. This width is actually in large part a consequence of fragmentation (or Landau damping): at least three states share, at the level of RPA, the resonance strength, but continuum as well as 2p-2h couplings are able to give to each peak the correct width so that the overall lineshape coincides with the experimental findings. We stress that the coupling with the 2p-2h type states is also responsible for a downward shift of the ISGMR centroid and peak energies, which is of the order of 0.5 MeV. One may argue that this affects the extraction of the value of $`K`$ from theoretical calculations. Actually, since the value of $`K`$ associated with a given force is obtained by a calculation of nuclear matter at the mean field level, it is legitimate to draw conclusions about $`K`$ from the comparison with the experiment of the ISGMR results for finite nuclei obtained again at the mean field level. But the fact that a given force is able to account for the ISGMR linewidth enforces our confidence about its reliability. 
And it would be of course legitimate either, to compare the centroid energies obtained after 2p-2h coupling with experiment provided the value of $`K`$ associated with the force is calculated by including the same couplings at the nuclear matter level. No such calculations in nuclear matter have been done so far, to our knowledge. ## Results for the isoscalar dipole resonance A peculiar feature of calculations of this giant resonance is the appearance of a spurious state in the calculated spectrum. When diagonalizing the RPA matrix on a 1<sup>-</sup> basis, we expect to see among all states the spurious state at zero energy corresponding to the center-of-mass motion and we expect as well that it exhausts the whole strength associated with the operator $`_ir_iY_{10}`$. Due to a lack of complete self-consistency (some part of the residual interaction, like the two-body spin-orbit and Coulomb forces, are usually neglected in the RPA because their effect should be rather small) and to numerical inaccuracies, this is not the case. The spurious state comes out in practice at finite energy and its wave function does not overlap completely with that of the exact center-of-mass motion: as a consequence, the remaining RPA eigenstates are not exactly orthogonal to the true spurious state, and their spurious component must be projected out. This is not difficult if RPA is done in the discrete p-h space. In any case, it can be shown<sup>16)</sup> that this projection procedure is equivalent to replacing the $`_ir_i^3Y_{10}`$ operator with $`_i(r_i^3\eta r_i)Y_{10}`$, $`\eta `$ being $`\frac{5}{3}r^2`$. Once this projection is done, we find that a substantial amount of $`r^3Y_{10}`$ strength still remains in the 10 - 15 MeV region, in addition to the strength in the 20 - 25 MeV region. The data do not show any strength in the lower region, i.e., experimental centroid energies correspond to the energy region above 15 MeV. Therefore, to have a meaningful comparison with experiment we will refer from now on to theoretical centroid energies calculated in the interval 15 - 40 MeV. In Table 1 we show the ISGDR centroid energies obtained with the SGII force. Especially in the case of <sup>208</sup>Pb and <sup>116</sup>Sn, it can be noticed that RPA calculations tend to overestimate the value of the centroid energy, the discrepancy being less severe in the other two cases. One may wonder if this is a special feature of SGII, although this force has been said to behave rather well for the monopole case. Fig. 1 shows that this is not the case: the centroid energies obtained in RPA with a number of different Skyrme parametrizations are plotted as a function of their incompressibility $`K`$, and it can be noticed that all forces systematically overestimate the experimental values of the centroid energies. Gogny interactions have also been used to study the ISGDR in this and other nuclei<sup>17)</sup>, but they also predict too large centroid energies. The same can be said about relativistic models like relativistic RPA<sup>18)</sup> or time-dependent relativistic mean field<sup>19)</sup>. We may conclude that the case of the ISGDR in <sup>208</sup>Pb is a kind of exception among the giant resonances studied within the self-consistent HF-RPA approach, as usually one never finds such large discrepancies between theory and experiment ($``$ 4-5 MeV). 
We finally address the question whether the coupling with more complicated configurations, which has been seen to be responsible of a downward shift of the resonance energy, can diminish this discrepancy in the case of <sup>208</sup>Pb. We have done for the ISGDR a calculation of the type described above in the monopole case. The resulting strength function, which includes continuum and 2p-2h coupling, is depicted in Fig. 2. One can see that the total width of the resonance is quite large, and the theoretical value of about 6 MeV compares well with the experimental result which is about 7 MeV<sup>20)</sup>. Although the resonance lineshape is accounted for by theory, the downward shift with respect to the RPA result is only about 1 MeV. The fundamental problem why the ISGDR energy in <sup>208</sup>Pb cannot be reproduced by theoretical models still remains. ## Conclusion In this paper, we have considered the isoscalar monopole and dipole resonances in a number of nuclei and we have tried to reproduce their properties by means of HF-RPA, or QRPA on top of HF-BCS, or more sophisticated approach which takes care of the continuum properly and of the coupling with nuclear configurations which are more complicated than the simple 1p-1h. In general, we have found that the effect of pairing correlations is quite small as these resonances lie at high energy with respect to the pairing gap $`\mathrm{\Delta }`$. Concerning the RPA results, the situation looks different in the case of monopole and dipole. In the former case, the Skyrme-type force SGII is able to reproduce well the centroid energy in <sup>208</sup>Pb, and it slightly overestimates this energy in other medium-heavy nuclei which have been measured accurately in recent experiments. This would allow us to extract a value of the nuclear incompressibility around 215 MeV. In the case of the ISGDR, however, the same Skyrme force overpredicts this centroid energy in <sup>208</sup>Pb by about 4 MeV. Other parametrizations of Skyrme type cannot do better, and the problem is not solved if one turns to Gogny interactions or to relativistic models. Therefore, although in other nuclei this discrepancy between theory and experiment can be less than in the case of <sup>208</sup>Pb, we can say that the “squeezing mode”, which could be taken as a further probe of the nuclear incompressibility besides the well-known “breathing” monopole oscillation, is challenging us with a new problem. Calculations beyond mean field do not change substantially our conclusions about the centroid energies. However, we stress that these calculations are necessary for a proper account of the giant resonances lineshape, and in fact in our case they have been able to reproduce the total width of the ISGMR in <sup>208</sup>Pb, and also of the ISGDR although its centroid is overestimated. We would like to thank Umesh Garg for stimulating discussions and for communicating experimental data prior to publication, and also Jean-Paul Blaizot for useful discussions. ## References 1) J. P. Blaizot: Phys. Rep. 64, 171 (1980); J. M. Pearson: Phys. Lett. B 271, 12 (1991); S. Shlomo and D. H. Youngblood: Phys. Rev. C 47, 529 (1993). 2) D. H. Youngblood, H. L. Clark and Y.- W. Lui: Phys. Rev. Lett. 82, 691 (1999). 3) B. Davis et al.: Phys. Rev. Lett. 79, 609 (1997). 4) S. Stringari: Phys. Lett. B106, 232 (1982). 5) M. Beiner, H. Flocard, N. Van Giai and Ph. Quentin: Nucl. Phys. A238, 29 (1975). 6) N. Van Giai and H. Sagawa: Phys. Lett. B106, 379 (1981). 7) E. Chabanat, P. Bonche, P. 
Haensel, J. Meyer and R. Schaeffer: Nucl. Phys. A635, 231 (1998). 8) G. Colò, P. F. Bortignon, N. Van Giai, A. Bracco and R. A. Broglia: Phys. Lett. B276, 279 (1992). 9) A. Bohr and B. M. Mottelson, Nuclear Structure, vol. I, W.A. Benjamin 1969, Eqs. (2.92) and (2.93). 10) N. Tajima, S. Takahara and N. Onishi: Nucl. Phys. A603, 23 (1996). 11) E. Khan, N. Van Giai and G. Colò: to be published. 12) G. Colò, N. Van Giai, P. F. Bortignon and R. A. Broglia: Phys. Rev. C50, 1496 (1994). 13) J. P. Blaizot, J. F. Berger, J. Dechargé and M. Girod: Nucl. Phys. A591, 435 (1995). 14) O. Civitarese, A. G. Dumrauf, M. Reboiro, P. Ring and M. M. Sharma: Phys. Rev. C43, 2622 (1991). 15) I. Hamamoto, H. Sagawa and X. Z. Zhang: Phys. Rev. C56, 3121 (1997). 16) N. Van Giai and H. Sagawa: Nucl. Phys. A371, 1 (1981). 17) J. Dechargé and L. Šips: Nucl. Phys. A407, 1 (1983). 18) N. Van Giai and Z. Y. Ma: these proceedings. 19) D. Vretenar et al.: these proceedings. 20) U. Garg et al.: Proc. Topical Conference on Giant Resonances, Varenna ( Nucl. Phys. A, to be published; H. L. Clark et al.: ibid. ; U. Garg: private communication.
# Thermal Conductivity as a Probe of Quasi-Particles in the Cuprates. ## Abstract In underdoped $`\mathrm{YBa}_2\mathrm{Cu}_3\mathrm{O}_\mathrm{x}(x=6.63)`$, the low-$`T`$ thermal conductivity $`\kappa _{xx}`$ varies steeply with field $`B`$ at small $`B`$, and saturates to a nearly field-independent value at high fields. The simple expression $`[1+p(T)|B|]^1`$ provides an excellent fit to $`\kappa _{xx}(B)`$ over a wide range of fields. From the fit, we extract the zero-field mean-free-path, and the low temperature behavior of the $`QP`$ current The procedure also allows the $`QP`$ Hall angle $`\theta _{QP}`$ to be obtained. We find that $`\theta _{QP}`$ falls on the $`1/T^2`$ curve extrapolated from the electrical Hall angle above $`T_c`$. Moreover, it shares the same $`T`$ dependence as the field scale $`p(T)`$ extracted from $`\kappa _{xx}`$. We discuss implications of these results. 1. Introduction Thermal conductivity is potentially a very useful probe of the quasi- particle excitations in the superconducting state of the cuprates because it is capable of detecting the quasi-particle ($`QP`$) current in the bulk . In addition, measurements of its field dependence may yield quantitative information on the $`QP`$ mean free path. At present, this seems to be the best way to investigate the low-lying excitations of the condensate. However, the task of disentangling the $`QP`$ current from the larger phonon current in the cuprates poses a difficult problem for experiment. We report recent experiments in which the direct separation of the $`QP`$ current is achieved by detailed analysis of the field dependence of the longitudinal conductivity $`\kappa _{xx}`$. This line of approach was motivated by the observation of plateau features in high-purity $`\mathrm{Bi}_2\mathrm{Sr}_2\mathrm{CaCu}_2\mathrm{O}_8`$ (Bi 2212) . The existence of the plateaus at low $`T`$ (where $`\kappa _{xx}/H=0`$) implies that, in the cuprates, vortices are essentially transparent to the phonons. Extensions of these measurements to underdoped $`\mathrm{YBa}_2\mathrm{Cu}_3\mathrm{O}_\mathrm{x}`$ (YBCO) reveal that this result may be a rather general feature of the cuprates in the clean-limit. This seems to us a significant finding since it allows a direct separation of the $`QP`$ current by the application of an intense field. In addition, we find that the zero-field mean-free-path $`\mathrm{}_0`$ of the $`QP`$ may be estimated to within a factor equal to the vortex scattering cross-section $`\sigma _{tr}`$. The isolation of the $`QP`$ current allows more specific information to be extracted from the thermal Hall conductivity $`\kappa _{xy}`$ (Righi-Leduc effect). With the electronic current independently determined, we may now obtain the Hall angle $`\mathrm{tan}\theta _{QP}`$. The $`QP`$ Hall angle uncovers a number of interesting features which we discuss below. 2. Experiment Measurements of $`\kappa _{xx}`$ in the mixed state of the cuprates have been reported by several groups. Experiments in intense magnetic fields $`𝐇`$ are complicated by problems such as cleaving of the crystal by the large torques generated. The most serious problem, however, seems to stem from the field sensitivity of the thermometers. At the resolution needed, the field dependence of the sensors is serious (in thermocouples, moreover, the field sensitivity is also history dependent). Previously, we employed a bridge-balance method to get around the field- sensitivity problem . 
In our present approach, we adopt a single-heater, two-sensor method in which the temperature difference $`\delta T`$ between the ends of the sample is detected by two closely matched resistive sensors (cernox). The thermal gradient $`T`$ is applied in the $`ab`$ plane, and $`𝐇`$ is parallel to $`𝐜`$. The temperature is regulated (with a third base-cernox), and measurements are taken after waiting about 10 min. for the field to stabilize. The readings of the two sensors are recorded at three values of the heater current $`I`$ = 0, 0.4, 0.6 mA (typically). The $`I`$ = 0 readings are used to calibrate the field dependence of the two cernox sensors, while the values of $`\delta T`$ determined with $`I`$ at the two values provide a check on the linearity of the sample response. By testing with a standard material that has no intrinsic field dependence in $`\kappa _{xx}`$ (nylon), we have found that at temperatures above 8 K this method provides an accurate and highly reproducible determination of the intrinsic conductance to a resolution of 1 in $`10^3`$. Although the bridge-balance method is capable of higher resolution, the present technique lends itself to full automation. A higher density of points may be obtained, and checks (e.g. for linearity) can be made in situ. A pair of thermocouple junctions are used to detect the $`H`$-antisymmetric (Hall) gradient to obtain $`\kappa _{xy}`$. With the high sensitivity, we have found that, in untwinned, optimally-doped $`\mathrm{YBa}_2\mathrm{Cu}_3\mathrm{O}_\mathrm{x}`$ ($`T_c`$ = 93 K, $`x`$ = 6.95), $`\kappa _{xx}`$ in a field $`𝐇𝐜`$ becomes increasingly hysteretic below 35 K. Although the hysteresis is small (about 5$`\%`$ of the total $`\kappa _{xx}`$ at 8 K), it greatly complicates the extraction of the $`QP`$ current (we discuss this later). In underdoped crystals, however, the hysteresis is unobservable up to 14 tesla (less than $`10^3`$), and the observed $`\kappa _{xx}`$ vs. $`H`$ is a faithful representation of its intrinsic dependence on $`B`$. In this report, we discuss data from a twinned, underdoped crystal in which $`T_c`$ = 63 K, and $`x`$ = 6.63. The zero- field temperature profile of $`\kappa _{xx}`$ is shown in Fig. 1. The relative magnitude of the anomaly in $`\kappa _{xx}`$ is only about a quarter of that in the 93-K YBCO, but larger than in optimum Bi 2212 and $`\mathrm{La}_{2\mathrm{x}}\mathrm{Sr}_\mathrm{x}\mathrm{CuO}_4`$ (LSCO) . Also shown (solid symbols) is our new estimate of the phonon conductivity ($`\kappa _B`$). One of our main results is that the entire field dependence of $`\kappa _{xx}`$ derives from the $`QP`$ current, while the phonon current is unaffected by $`H`$. 3. Results and Analysis Figure 2 displays the field dependence of $`\kappa _{xx}`$ at selected $`T`$. With decreasing $`T`$, the initial slope of $`\kappa _{xx}`$ increases rapidly. Below 15 K, the rapid decrease crosses over to an almost flat dependence, which recalls the plateau features observed in single-domain Bi 2212 . Within our resolution, there is no resolvable hystereses at the temperatures investigated. The higher precision and larger range of the new data enable us to compare the observed field dependence with various expressions. We find that the field dependence is accurately fitted (Fig. 2) to the expression $$\kappa _{xx}(B,T)=\frac{\kappa _e(T)}{(1+p(T)|B|)}+\kappa _B(T),$$ (1) where the entire $`B`$ dependence resides in the denominator of the first term, and the term $`\kappa _B`$ is a field-independent background. 
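The functional form in Eq. (1) is simple enough that the per-temperature fits can be reproduced with standard tools. The sketch below is our own illustration with synthetic data; the field grid, noise level and "true" parameter values are invented, not the measured traces.

```python
# Fit of the field dependence at fixed T to Eq. (1):
#   kappa_xx(B) = kappa_e / (1 + p*|B|) + kappa_B
# Synthetic data standing in for a measured kappa_xx(B) trace.
import numpy as np
from scipy.optimize import curve_fit

def model(B, kappa_e, p, kappa_B):
    return kappa_e / (1.0 + p * np.abs(B)) + kappa_B

B = np.linspace(0.0, 14.0, 29)                     # tesla
true = (3.0, 0.9, 4.5)                             # kappa_e, p, kappa_B (invented)
rng = np.random.default_rng(2)
kxx = model(B, *true) + 0.02 * rng.standard_normal(B.size)

popt, pcov = curve_fit(model, B, kxx, p0=(1.0, 0.1, 1.0))
print("kappa_e = %.2f, p = %.2f 1/T, kappa_B = %.2f" % tuple(popt))
print("1-sigma errors:", np.sqrt(np.diag(pcov)))
```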
At each temperature, the fit yields the two parameters $`\kappa _e(T)`$ and $`p(T)`$ associated with the $`QP`$ current, and the term $`\kappa _B`$ which we identify with the phonon term, viz. $`\kappa _B`$ = $`\kappa _{ph}`$. In our previous experiments on optimally-doped YBCO and LSCO , fits to Eq. 1 were ambiguous because of strong hystereses (the distortions introduced caused the extracted $`p(T)`$ to be non-monotonic in $`T`$). These extraneous effects led us to consider alternate expressions (see below), as well as a possibly field dependent $`\kappa _{ph}`$. However, the present results have clarified the problem. In addition to the absence of observable hysteresis, the curves for $`\kappa _{xx}`$ at low $`T`$ display pronounced curvature in moderate fields, corresponding to a strong attenuation of the $`QP`$ current. The attenuation uncovers a nearly field-independent background that we identify with the phonon thermal conductivity. In the main panel of Fig. 3, we display the $`T`$ depenence of the parameters $`\kappa _e`$ and $`p(T)`$. The motivation for Eq. 1 is that, near the vortex core, the steep variation of the pair potential and the circulating superfluid together present a strong scattering potential for an incident $`QP`$ . Expressing the scattering rate as a transport cross-section $`\sigma _{tr}`$, we assume additivity of the rates, and write the $`QP`$ mean-free-path in a field as $`\mathrm{}(B)=\mathrm{}_0/[1+\mathrm{}_0\sigma _{tr}|B|/\varphi _0]`$, where $`\mathrm{}_0`$ is the zero-field value of the mean-free-path, and $`\varphi _0`$ the flux quantum. Thus, in this model, the $`QP`$ conductivity is given by Eq. 1 with $$p(T)=\mathrm{}_0\sigma _{tr}/\varphi _0.$$ (2) In the Boltzmann equation approach, we may write the $`QP`$ thermal conductivity (in zero field) as $$\kappa _e(T)=\frac{1}{T}\underset{𝐤}{}(\frac{f}{E})E(𝐤)^2v_x(𝐤)^2\tau (𝐤),$$ (3) where $`E(𝐤)`$ and $`\tau (𝐤)`$ are, respectively, the energy and lifetime of a $`QP`$ in the state $`𝐤`$, and $`𝐯(𝐤)=\mathrm{}^1E(𝐤)`$ its group velocity. In a $`d`$-wave superconductor at low $`T`$, the excitations are confined to Dirac cones at the nodes where the energy may be parametrized as $`E(k_1,k_2)=\mathrm{}\sqrt{(k_1v_f)^2+(k_2v_2)^2}`$ ($`k_1`$ and $`k_2`$ are the components of $`𝐤`$ normal and parallel to the Fermi Surface), $`\kappa _e`$ reduces to $$\kappa _e(T)=\frac{\eta }{\pi }\frac{k_B^3T^2}{\mathrm{}^2}\frac{\mathrm{}_0}{v_2}(1+\frac{v_2^2}{v_f^2}),$$ (4) where $`\mathrm{}_0=v_f\tau _0`$ is the mean free path at the nodes, and $`\eta _0^{\mathrm{}}𝑑xx^3(df/dx)5.41`$. The $`T^2`$ dependence of $`\kappa _e`$ in Eq. 4 is masked by the strong $`T`$ dependence of $`\mathrm{}_0`$. However, in our experiment, the latter is obtained independently from the field dependence (with $`p(T)`$ given by Eq. 2). We may divide out the $`T`$ dependence of $`\mathrm{}_0`$, to isolate the quantity $`L_e(T)=\kappa _e(T)/p(T)`$. Comparing Eqs. 2 and 3, we are left with an expression that contains only two material-specific parameters $`v_2`$ and $`\sigma _{tr}`$, viz. $$L_e(T)=\frac{\eta }{\pi }\frac{k_B^3T^2}{\mathrm{}^2}\frac{\varphi _0}{v_2\sigma _{tr}},$$ (5) Figure 3 (inset) reveals that the measured $`L_e(T)`$ displays a nearly $`T^2`$ dependence at low $`T`$, that may be fitted to give $`L_e(T)=6.9\times 10^3T^2`$ WT/mK. Comparing this expression with Eq. 5, we determine from our experiment $$v_2\sigma _{tr}=2.11\times 10^4\mathrm{m}^2/\mathrm{s}.$$ (6) 4. 
The Hall angle The field parameter $`p(T)`$ has been extracted from $`\kappa _{xx}`$ alone. While its nominally $`1/T^2`$ variation (Fig. 3) is consistent with the identification $`p(T)\mathrm{}_0`$ (Eq. 2), it is important to see if this is consistent with a separate experiment. We turn next to the Hall conductivity $`\kappa _{xy}`$. In the previous Hall study on optimal YBCO, $`\kappa _{xy}`$ was analyzed without the benefit of information on the diagonal electronic current. From the analysis above, we may now extract the Hall angle $`\mathrm{tan}\theta _{QP}(H)=\kappa _{xy}(H)/\kappa _e(H)`$ as a continuous function of $`H`$ at each temperature. In general, $`\mathrm{tan}\theta `$ displays strong negative curvature vs. $`H`$ . Here, we restrict our discussion to the weak-field value $`\theta _{QP}(0)`$. In underdoped YBCO, the small $`QP`$ population generates a weak thermal Hall current, and the uncertainties in determining $`\theta _{QP}`$ are quite large (compared to optimum YBCO). Nevertheless, we find two interesting features of the Hall angle (see Fig. 4). First, $`\theta _{QP}(0)`$ (solid triangles) and $`p`$ (open circles) share the same $`T`$-dependence from $`T_c`$ to about 20 K. Secondly, we recall that the normal-state electrical Hall angle $`\theta _N`$ (open triangles) follows a $`1/T^2`$ dependence. The new values for $`\mathrm{tan}\theta _{QP}(0)`$ lie on the curve for $`\theta _N`$ extrapolated below $`T_c`$. As shown in Fig. 4, the three quantities $`p`$, $`\mathrm{tan}\theta _{QP}(0)`$ and $`\mathrm{tan}\theta _N`$ fall on the same curve over about 2.5 decades (in the plot $`p`$ and $`\mathrm{tan}\theta _{QP}(0)`$ are related by a constant scale factor). Just above $`T_c`$, $`\mathrm{tan}\theta _N`$ displays a slight dip associated with fluctuation effects. The similarity between $`\theta _{QP}`$ and $`\theta _N`$ has also been pointed out by Zeini et al. . We briefly discuss our interpretation. These results suggest that the $`1/T^2`$ dependence is, in fact, intrinsic to the $`QP`$’s below $`T_c`$. The ubiquitous $`T^2`$ dependence of $`\mathrm{cot}\theta _N`$ in the normal state appears to be an extension of the low-temperature behavior into the normal state. The similarity between the $`T`$-dependences of $`p`$ and $`\theta _{QP}`$ imply that the diagonal and the Hall channels relax with the same $`T`$-dependence, $`1/T^2`$, consistent with simple Drude behavior. Just as in conventional metals, the thermal Hall resistivity in the mixed state, $`W_{xy}\kappa _{xy}/\kappa _e^2=\mathrm{tan}\theta _{QP}(0)/\kappa _e`$, should provide a measure of the heat capacity of the $`QP`$ population, as it does (see inset of Fig. 3). This conventional behavior is abruptly altered when we cross $`T_c`$ into the normal state. The Hall angle continues to relax at the same numerical rate. By contrast, the scattering rate in the diagonal conductivity undergoes a dramatic change. The transport mean-free-path $`\mathrm{}_0`$ decreases abruptly by a factor of 4-6 (10 in optimal crystals) across $`T_c`$. This sharp decrease, followed by a nominally $`T`$-linear scattering rate, is responsible for the anomalously strong $`T`$-dependence of the Hall coefficient in the normal state. Thus, the anomalous channel responsible for most of the strange-metal properties is the diagonal conductivity. The Hall channel appears to be quite conventional. A detailed discussion of these results including optimal crystals will appear elsewhere. 5. 
Discussion We have discussed how the field dependence of $`\kappa _{xx}`$ in underdoped YBCO may be analyzed to extract electronic parameters, such as $`\mathrm{}_0`$ and $`\sigma _{tr}`$. The analysis derives from two striking features of $`\kappa _{xx}`$ intrinsic to high-purity 60-K YBCO crystals at low $`T`$, namely the steep decrease of $`\kappa _{xx}`$ in weak field followed by a saturation at high field, and the absence of resolvable hysteresis. The first feature allows us to fit a much broader range of field scales (as expressed by the dimensionless parameter $`p(T)B`$). The higher density of measurements also helps. We illustrate this point as follows. In a previous attempt to analyze similar measurements in LSCO, it was found that the $`\kappa _{xx}`$ vs. $`H`$ curves were equally well-fitted by Eq. 1 and the expression $`G(B)=\psi (1/2+B_0/B)+\mathrm{ln}(B_0/B),`$ with $`\psi (x)`$ the digamma function. With the data scatter and the smaller range of reduced fields $`B/B_0`$ in LSCO, the two fits could not be distinguished, and Ong et al. argued the case for adopting $`G(B)`$ to describe $`\kappa _{xx}`$ in LSCO. However, with the larger field scale $`pB`$ here, we find that Eq. 1 provides a much better fit (this is evident if the two fits are compared in a plot versus $`\mathrm{ln}H`$). The physics underlying the two fit expressions is of course quite different. In light of the present work, we now favor adopting Eq. 1, instead of the digamma function fit, for analyzing $`\kappa _{xx}`$ vs. $`H`$ curves in cuprates. The second feature (no observable hysteresis) relates to the issue of remanence and vortex pinning in cuprates. It is known that the relaxation of non-equilibrium flux distributions may produce a slow drift in $`\kappa _{xx}`$ if it is observed a few seconds after a change in $`H`$ is made . Further, vortex pinning effects at low $`T`$ can lead to step-like jumps in $`\kappa _{xx}`$ when the field sweep direction is changed. In optimum YBCO and in strongly overdoped Bi 2212, we find that $`\kappa _{xx}`$ increases step-wise when the sweep direction of $`H`$ is reversed from up to down. Recently, however, a hysteretic loop of the opposite sign has been reported by Aubin et al. in Bi 2212 ($`\kappa _{xx}`$ decreases step-wise when $`H`$ is swept down). At present, the origin of the hystereses in $`\kappa _{xx}`$ is not understood (especially the existence of hystereses with different signs). We note that the magnitude of the hystereses in is much smaller than that observed in the magnetization $`M`$ vs. $`H`$. In our measurements on underdoped YBCO, no hysteresis in $`\kappa _{xx}`$ is observed for fields as large as 14 T at temperatures down to 6 K, even though hystereses are sizeable in the $`M`$ vs. $`H`$ curves. In particular, the magnitude of $`\kappa _{xx}`$ at the plateau-like region is not hysteretic. The absence of hysteresis implies that the magnetization is too small to influence the measured $`\kappa _{xx}`$, so we may assume that $`B=\mu _0H`$, as tacitly assumed in the fits. By contrast, hysteretic effects cannot be neglected in optimally doped YBCO (as discussed above). Stronger vortex pinning is clearly responsible for the larger hysteresis in the 93-K crystal. Below 35 K in this crystal, the hysteresis steadily increases to about 5$`\%`$ at 8 K. Although the hysteresis is small, the remanence produces in the trace of $`\kappa _{xx}`$ vs. $`H`$ both a broadening at small $`H`$ and an asymmetry about $`H`$ = 0 that strongly distort the fit to Eq. 1. 
The distortions preclude a meaningful extraction of below about 35 K. Thus, of the various phases of the cuprates we investigated (YBCO, Bi 2212 and LSCO), the underdoped phase of YBCO appears to be the most suitable for our purpose of isolating the $`QP`$ current from the total thermal current. In high-purity single-domain crystals of Bi 2212, the field dependence of $`\kappa _{xx}`$ displays a distinct break in slope in $`\kappa _{xx}`$ at a characteristic field $`H_k`$, followed by a plateau region in which $`\kappa _{xx}`$ is nearly independent of $`H`$ . The field $`H_k`$, which varies approximately as $`T^2`$, was interpreted as a field-induced phase transition, possibly involving a new order parameter. We compare the present results with the two findings in Bi 2212, i.e. the kink feature at $`H_k`$ and the existence of the plateau. As shown in Fig. 2, $`\kappa _{xx}`$ in 60-K YBCO smoothly crosses over into the field-independent region at low $`T`$, instead of displaying a sharp kink. While the plateau regime is similar in the two systems, the kink feature signalling a phase transition is absent in the YBCO crystal. A difference between the two systems is the electronic anisotropy. From the resistivity anisotropy $`\rho _c/\rho _{ab}`$ ($`10^5`$ compared with $`10^3`$), Bi 2212 is much closer to the $`2D`$ limit than underdoped YBCO. Whether this is a significant factor is a subject for future investigation. In their experiment Aubin et al. observed the value of $`\kappa _{xx}`$ in Bi 2212 at the plateau (at 8 K) to be $`1\%`$ higher in the field sweep-up direction than in sweep-down. In rough analogy with the magnetization profile in the Bean model, they raise the issue that the plateau may be associated with, or reflect a specific state of the vortex system. In our response , we pointed out that the hysteresis in their sample is about 5 times larger (at 8 K) than in the two crystals used by Krishana et al.. At higher temperatures, 15-20 K, where the plateau is just as prominent, the hysteresis is almost unresolved. The hysteresis is an extrinsic effect possibly associated with stronger flux pinning in a more disordered crystal. The present results lend a fresh perspective to the question whether the plateau is intrinsic to the $`QP`$ system or the product of a particular state of the vortex system. The YBCO results show that, whenever the $`QP`$ current is very strongly suppressed by the available field, the thermal conductivity that remains is indeed field-independent and non-hysteretic. This shows that, in YBCO, a plateau regime definitely exists; the lack of observable hysteresis shows that it has nothing to do with a particular state of the vortex system. However, to access it, it may be necessary to work in the underdoped regime and to use high-purity crystals with weak pinning (as discussed above, we are unable to access the plateau region in 90-K YBCO). In principle, the analysis described yields quantitative information on the quasi-particles. Having both the diagonal and off-diagonal conductivities available reduces the uncertainties in identifying the measured parameters, as well as provides consistency checks. The weak-field Hall angle may be expressed as a ‘Hall’ mean-free-path $`\mathrm{}_H\theta _{QP}\mathrm{}k_F/e`$. The value of $`\theta _{QP}`$ at 10 K gives $`\mathrm{}_H4,200\AA `$. The similarity of the $`T`$ dependences in $`\theta _{QP}`$ and $`p`$ implies that $`\mathrm{}_H`$ is proportional to $`\mathrm{}_0`$. 
If the proportionality constant is 1, we may use $`\mathrm{}_H`$ in Eq. 2 to find that $`\sigma _{tr}90\AA `$, about 1.7 times the diameter of the vortex core ($`2\xi `$) ($`\xi 26\AA `$ if the upper critical field $`H_{c2}`$ 50 T). Using this value of $`\sigma _{tr}`$ in Eq. 6, we obtain the velocity $`v_22.3\times 10^6`$ cm/s. From the penetration depth variation $`\mathrm{\Delta }\lambda =4.3\AA /K`$, Wen and Lee obtain the velocity anisotropy $`v_f/v_27.6`$. With our estimate for $`v_2`$, we find that the Fermi velocity $`v_f1.8\times 10^7`$ cm/s. We acknowledge support from the U.S. Office of Naval Research and the U.S. National Science Foundation. Useful conversations with Philip Anderson, Duncan Haldane, Patrick Lee, Louis Taillefer and Shin-Ichi Uchida are gratefully acknowledged. This manuscript will appear in the Proceedings of the Taniguchi Symposium on the Physics and Chemistry of Transition Metal Oxides 1998, (Springer Verlag 1999). Permanent address: Department of Physics, Zhejiang University, Hangzhou, China.
no-problem/9904/chao-dyn9904011.html
ar5iv
text
# References Randomly Amplified Discrete Langevin Systems Nobuko Fuchikami Department of Physics, Tokyo Metropolitan University Hachioji, Tokyo, 192-0397, Japan Email: fuchi$`\mathrm{@}`$phys.metro-u.ac.jp (February 2, 1999) Abstract A discrete stochastic process involving random amplification with additive noise is studied analytically. If the non-negative random amplification factor $`b`$ is such that $`<b^\beta >=1`$ where $`\beta `$ is any positive non-integer, then the steady state probability density function for the process will have power law tails of the form $`p(x)1/x^{\beta +1}`$. This is a generalization of recent results for $`0<\beta <2`$ obtained by Takayasu et al. in Phys. Rev. lett. 79, 966 (1997). It is shown that the power spectrum of the time series $`x`$ becomes Lorentzian, even when $`1<\beta <2`$, i.e., in case of divergent variance. PACS numbers: 05.40.+j, 02.50.-r, 05.70.Ln, 64.60.Lx Power law behavior of distribution function is widely observed in nature . Recently, Takayasu et al. presented a new general mechanism leading to the power law distribution . They analyzed a discrete stochastic process which involves random amplification together with additive external noise. They clarified necessary and sufficient conditions to realize steady power law fluctuation with divergent variance using a discrete version of linear Langevin equation expressed as $$x(t+1)=b(t)x(t)+f(t),$$ (1) where $`f(t)`$ represents a random additive noise and $`b(t)`$ is a non-negative stochastic coefficient. They derived the following time evolution equation for the characteristic function $`Z(\rho ,t)`$ which is the Fourier transform of the probability density $`p(x,t)`$: $$Z(\rho ,t+1)=_0^{\mathrm{}}W(b)Z(b\rho ,t)𝑑b\mathrm{\Phi }(\rho ),$$ (2) where $`W(b)`$ is the probability density of $`b(t)`$ and $`\mathrm{\Phi }(\rho )`$ is the characteristic function for $`f(t)`$. They showed that when $`b^\beta =1`$ holds for $`0<\beta <2`$, the second moment $`x^2(t)`$ diverges as $`t\mathrm{}`$, but Eq. (2) has a unique steady and stable solution : $$\underset{t\mathrm{}}{lim}Z(\rho ,t)Z(\rho )=1\text{const}\times |\rho |^\beta +\mathrm{},$$ (3) which yields the power law tails in the steady probability density $$\underset{t\mathrm{}}{lim}p(x,t)p(x)1/x^{\beta +1},$$ (4) or equivalently, the cumulative distribution $$P(|x|)1/x^\beta .$$ (5) They also made numerical simulations of Eq. (1) by employing a discrete exponential distribution for $`W(b)`$, and showed that the theoretical estimate of the relation between $`\beta `$ and the parameters specifying $`W(b)`$ (Eq. (15) in ) nicely fits with the simulation “even out of the range of applicability, $`\beta >2`$”. They state that “The reason for this lucky coincidence is not clear”, although they point out at the same time that the power law distribution tails are a generic property of Eq. (1) . In this Brief Report, the following two statements will be presented: (A) Takayasu et al.’s theory can be straightforwardly extended for $`\beta >2`$: If $`b^\beta =1`$ holds for a positive non-integer $`\beta `$, then there exists a unique steady and stable solution of Eq.(2) $$Z(\rho )=\underset{m=0}{\overset{n}{}}A_{2m}(1)^{2m}\rho ^{2m}C|\rho |^\beta +O(\rho ^{2n+2}),$$ (6) where $`2n`$ is the largest even number that is smaller than $`\beta `$. This $`Z(\rho )`$ leads to $`p(x)1/x^{\beta +1}`$. 
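As a purely numerical illustration of statement (A), one can iterate Eq. (1) for an explicit choice of $`W(b)`$ and estimate the tail index of the resulting steady-state distribution. The sketch below is not taken from the original work: it uses a log-normal $`W(b)`$ with the parameter $`\mu `$ fixed by the condition $`\langle b^\beta \rangle =1`$ for $`\beta =2.5`$ (i.e. outside the range $`0<\beta <2`$), Gaussian additive noise, and a crude Hill estimate of the tail index; all sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

beta, sigma = 2.5, 0.5
# Log-normal b: <b**beta> = exp(beta*mu + beta**2*sigma**2/2) = 1 fixes mu.
mu = -beta * sigma**2 / 2.0

T, burn = 2_000_000, 10_000
b_arr = np.exp(mu + sigma * rng.normal(size=T))
f_arr = rng.normal(size=T)          # symmetric additive noise

x = np.empty(T)
xt = 0.0
for t in range(T):                  # Eq. (1): x(t+1) = b(t)*x(t) + f(t)
    xt = b_arr[t] * xt + f_arr[t]
    x[t] = xt

# Hill estimate of the tail index from the k largest |x|:
# P(|x| > X) ~ X**(-beta) implies beta ~ k / sum(log(X_i / X_(k+1))).
abs_x = np.sort(np.abs(x[burn:]))[::-1]
k = 2000
beta_hat = k / np.sum(np.log(abs_x[:k] / abs_x[k]))
print("target beta =", beta, "  rough tail-index estimate =", beta_hat)
```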
(B) When $`b^\beta =1`$ for a non-integer $`\beta `$ between 1 and 2, the power spectral density (PSD) of $`x(t)`$ is Lorentzian increasing with the observation time $`T`$ as $$S(\omega ,T)\frac{2}{T}\frac{x_0^2}{\mathrm{ln}b^2}\frac{(1/\tau _1)b^2^T}{(1/\tau _1)^2+\omega ^2}\mathrm{for}T1,$$ (7) where $$x_0^2x^2(0)+\frac{f^2}{b^21},$$ (8) $$\tau _1=\frac{1}{\mathrm{ln}b^2+\mathrm{ln}[1/b]}.$$ (9) From the statement (A), “the coincidence” found in is naturally understandable. To prove (A), we assume the following form for $`Z(\rho )`$: $$Z(\rho )=\underset{n=0}{\overset{\mathrm{}}{}}a_n\rho ^n+|\rho |^\beta \underset{n=0}{\overset{\mathrm{}}{}}c_n\rho ^n,a_01,$$ (10) and substitute it into Eq. (2) in the limit $`t\mathrm{}`$. If $`\mathrm{\Phi }(\rho )`$ is an even function (i.e., the distribution function of $`f(t)`$ is symmetric as assumed in ), we can first prove that $`a_1=0`$ because $`b1`$. Also, $`c_1=0`$ because $`b^{\beta +1}1`$. Thanks to $`a_{2m1}=0`$ and $`b^{2m+1}1`$, $`a_{2m+1}=0`$ is derived. Similarly, $`c_{2m1}=0`$ and $`b^{\beta +2m+1}1`$ yield $`c_{2m+1}=0`$. We can thus prove that $`a_n`$ and $`c_n`$ in Eq. (10) vanish for all odd numbers $`n`$, i.e., Eq. (6) holds. (Note that the $`n`$ th moment $`x^n(t)`$ with $`n>\beta `$ diverges not only for even number $`n`$ but also for odd number which corresponds to the vanishing coefficient $`a_n`$.) Taking exactly the same procedures as in , we can prove that this solution is unique and stable. In case of $`\beta >2`$, we have a finite variance but higher order moments, $`x^n(t)`$ with $`n>\beta `$, diverge as $`t\mathrm{}`$. To derive the probability density $`p(x)`$, we only need to assume that all $`k`$-th derivatives of $`Z(\rho )`$ satisfy the boundary condition $$\underset{\rho \pm \mathrm{}}{lim}d^kZ(\rho )/d\rho ^k=0.$$ (11) Using Eq. (11), we can partially integrate the expression $$p(x)\frac{1}{2\pi }_{\mathrm{}}^{\mathrm{}}e^{ix\rho }Z(\rho )𝑑\rho $$ (12) $`[\beta ]+1`$ times, where $`[\beta ]`$ is the largest integer that is smaller than $`\beta `$. Thus we obtain the asymptotic expansion as $$p(x)|x|^{(\beta +1)}_{\mathrm{}}^{\mathrm{}}e^{i\xi }|\xi |^{\beta [\beta ]1}𝑑\xi |x|^{(\beta +1)}\mathrm{\Gamma }(\beta [\beta ]),$$ (13) where $`\mathrm{\Gamma }`$ is the Gamma function. To prove the statement (B), we note that the two-time correlation function is rigorously obtained from Eq. (1): $$\varphi (\tau ,t)x(t+\tau )x(t)=x^2(t)b^\tau ,$$ (14) where $$x^2(t)=b^2^tx^2(0)+\frac{1b^2^t}{1b^2}f^2.$$ (15) If $`1<\beta <2`$, we have a relation $`0<b<1<b^2`$ because the function $`G(\gamma )b^\gamma `$ satisfies $`G(0)=1`$ and $`G^{\prime \prime }(\gamma )>0`$ . Then $`\varphi `$ increases with $`t`$, but decays with $`\tau `$ as $`e^{\tau /\tau _0}`$ for any fixed value of $`t`$ (Debye-type relaxation), with the relaxation time $$\tau _0=\frac{1}{\mathrm{ln}[1/b]}.$$ (16) Since the correlation function depends on both $`\tau `$ and $`t`$, the Wiener-Khinchin relation cannot be used to obtain the PSD. Defining the PSD which depends on the observation time $`T`$ as $`S(\omega ,T)`$ $``$ $`\left|{\displaystyle _0^T}e^{i\omega t}x(t)𝑑t\right|^2/T`$ (17) $`=`$ $`2\mathrm{R}\mathrm{e}\left\{{\displaystyle _0^T}𝑑\tau {\displaystyle _0^{T\tau }}𝑑te^{i\omega \tau }x(t+\tau )x(t)\right\}/T,`$ and using $`\varphi (\tau ,t)`$ obtained above, we arrive at the expression (7). The spectrum is $`1/f^2`$ -type for $`f1/\tau _1`$ and flat for $`f1/\tau _1`$. 
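Statement (B) can be examined in the same spirit. The sketch below simulates Eq. (1) for a log-normal $`W(b)`$ with $`\langle b^\beta \rangle =1`$ at $`\beta =1.5`$, averages the finite-time periodogram of Eq. (17) over independent runs, and compares it with the Lorentzian form (7), using $`x_0^2`$ from Eq. (8) and $`1/\tau _1=\mathrm{ln}\langle b^2\rangle +\mathrm{ln}(1/\langle b\rangle )`$ from Eq. (9). The particular distribution, run length, and number of averages are arbitrary choices, and the run-to-run scatter is large, so only rough agreement should be expected.

```python
import numpy as np

rng = np.random.default_rng(2)

beta, sigma = 1.5, 0.5
mu = -beta * sigma**2 / 2.0
m1 = np.exp(mu + sigma**2 / 2.0)              # <b>   (< 1)
m2 = np.exp(2.0 * mu + 2.0 * sigma**2)        # <b^2> (> 1)
tau1 = 1.0 / (np.log(m2) + np.log(1.0 / m1))  # Eq. (9)

T, n_runs = 2048, 200
S = np.zeros(T // 2 + 1)
for _ in range(n_runs):
    b_arr = np.exp(mu + sigma * rng.normal(size=T))
    f_arr = rng.normal(size=T)
    x = np.empty(T)
    xt = 0.0
    for t in range(T):
        xt = b_arr[t] * xt + f_arr[t]
        x[t] = xt
    S += np.abs(np.fft.rfft(x))**2 / T        # finite-time periodogram, Eq. (17)
S /= n_runs

omega = 2.0 * np.pi * np.arange(T // 2 + 1) / T
x0sq = 1.0 / (m2 - 1.0)                       # Eq. (8) with x(0) = 0 and <f^2> = 1
S_th = (2.0 / T) * (x0sq / np.log(m2)) * (1.0 / tau1) * m2**T \
       / ((1.0 / tau1)**2 + omega**2)         # Eq. (7)
print(S[1:6] / S_th[1:6])                     # should be of order unity at low frequencies
```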
Equation (7) implies that the power increases exponentially with the observation time $`T`$, which corresponds to the divergent behavior of the variance $`\langle x^2(t)\rangle `$. (We have neglected the case $`0<\beta <1`$, where even the average of $`x`$ diverges as $`\langle x(t)\rangle =\langle b\rangle ^t\langle x(0)\rangle `$ because $`\langle b\rangle >1`$.) When $`\beta >2`$, both $`0<\langle b\rangle <1`$ and $`0<\langle b^2\rangle <1`$ hold and the results are rather trivial: $`\langle x^2\rangle \equiv \underset{t\to \infty }{lim}\langle x^2(t)\rangle ={\displaystyle \frac{\langle f^2\rangle }{1-\langle b^2\rangle }},`$ (18) $`\varphi (\tau )\equiv \underset{t\to \infty }{lim}\varphi (\tau ,t)=\langle x^2\rangle \langle b\rangle ^\tau ,`$ (19) $`S(\omega )\equiv \underset{T\to \infty }{lim}S(\omega ,T)=2\langle x^2\rangle {\displaystyle \frac{(1/\tau _0)}{(1/\tau _0)^2+\omega ^2}}.`$ (20) Thus, as far as the PSD is measured, we cannot observe any singular aspect, the higher-order singularities being hidden. The stochastic process described by Eq. (1) generally leads to the power law behavior $`p(x)\sim 1/x^{\beta +1}`$, while it also yields a Lorentzian spectrum $`S(\omega )\propto 1/[(1/\tau )^2+\omega ^2]`$. A colored noise, or $`1/f^\alpha `$ fluctuation, whose PSD is proportional to $`1/\omega ^\alpha `$, has attracted much attention since $`1/f`$ noise was discovered several decades ago. Such power law behavior of the PSD is also observed widely in nature, and these two power laws, one in the probability density and the other in the PSD, are sometimes discussed together. Therefore it is interesting to know whether an extremely long time scale $`\tau `$ can be involved in the present stochastic process. In that case the observation time $`T`$, which is related to the low-frequency cut-off $`\omega _0`$ by $`\omega _0=2\pi /T`$, cannot reach this time scale, and a $`1/f^2`$ fluctuation, namely $`S(\omega )\propto 1/\omega ^2`$ (for $`\omega \gtrsim \omega _0`$), is observed practically. One can immediately see that the time constant $`\tau _0`$ or $`\tau _1`$ becomes large only in very limited cases. First, the average of $`b`$ should be close to unity, i.e., $`\langle b\rangle =1-ϵ`$ with $`0<ϵ\ll 1`$. Then $`\tau _0`$ becomes $`1/ϵ\gg 1`$. Furthermore, in the case $`\beta >2`$ we need $`\langle b^2\rangle `$ smaller than unity, while in the case $`1<\beta <2`$ the condition $`\langle b^2\rangle =1+\delta `$ with $`0<\delta \ll 1`$ is necessary. In the latter case we obtain $`\tau _1\simeq 1/(ϵ+\delta )\gg 1`$. The exponential or Poisson distribution for $`W(b)`$ does not lead to such a long time constant. One example of a large $`\tau _1`$ is obtained by choosing $`W(b)`$ to be a narrowly peaked distribution whose average is slightly smaller than unity and whose second moment is slightly larger than unity. As pointed out above, a stochastic process whose stationary density function has power law tails will not necessarily exhibit power law behavior in the PSD.
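As a small numerical illustration of these time-scale considerations, the sketch below evaluates $`\tau _0`$ and $`\tau _1`$ for two choices of $`W(b)`$ satisfying $`\langle b^\beta \rangle =1`$ at $`\beta =1.5`$: a narrowly peaked log-normal distribution of the kind just described, and an exponential distribution for comparison. The specific widths are arbitrary; only the qualitative contrast matters.

```python
import numpy as np
from scipy.special import gamma

def timescales(m1, m2):
    """tau_0 = 1/ln(1/<b>) (Eq. 16) and tau_1 = 1/(ln<b^2> + ln(1/<b>)) (Eq. 9)."""
    return 1.0 / np.log(1.0 / m1), 1.0 / (np.log(m2) + np.log(1.0 / m1))

beta = 1.5

# (i) narrowly peaked log-normal W(b); <b**beta> = 1 fixes mu = -beta*sigma**2/2
sigma = 0.1
mu = -beta * sigma**2 / 2.0
m1, m2 = np.exp(mu + sigma**2 / 2.0), np.exp(2.0 * mu + 2.0 * sigma**2)
print("narrow log-normal: <b> =", m1, " <b^2> =", m2, " (tau0, tau1) =", timescales(m1, m2))

# (ii) exponential W(b) = lam*exp(-lam*b); <b**beta> = Gamma(beta+1)/lam**beta = 1
lam = gamma(beta + 1.0)**(1.0 / beta)
m1, m2 = 1.0 / lam, 2.0 / lam**2
print("exponential:       <b> =", m1, " <b^2> =", m2, " (tau0, tau1) =", timescales(m1, m2))
```

The narrow distribution gives time constants of order $`1/\sigma ^2`$, while the exponential case gives time constants of order unity, in line with the discussion above.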
no-problem/9904/cond-mat9904201.html
ar5iv
text
# Nonuniversality in quantum wires with off-diagonal disorder: a geometric point of view ## Abstract It is shown that, in the scaling regime, transport properties of quantum wires with off-diagonal disorder are described by a family of scaling equations that depend on two parameters: the mean free path and an additional continuous parameter. The existing scaling equation for quantum wires with off-diagonal disorder \[Brouwer et al., Phys. Rev. Lett. 81, 862 (1998)\] is a special point in this family. Both parameters depend on the details of the microscopic model. Since there are two parameters involved, instead of only one, localization in a wire with off-diagonal disorder is not universal. We take a geometric point of view and show that this nonuniversality follows from the fact that the group of transfer matrices is not semi-simple. Our results are illustrated with numerical simulations for a tight-binding model with random hopping amplitudes. Universality is a key concept in any approach to study localization in disordered systems. In the scaling theory of localization, it is commonly believed that the statistical distributions of the conductance, or of energy levels and wave functions, are entirely determined by the fundamental symmetries and the dimensionality of the sample . Once symmetry and dimensionality are taken into account, all microscopic details of a sample can be represented by a single length scale $`\mathrm{}`$, the “mean free path”, such that on length scales $`L`$ larger than $`\mathrm{}`$, the sample is completely characterized by the ratio $`L/\mathrm{}`$. The concept of universality is the cornerstone of various field-theoretic, diagrammatic, and random-matrix approaches to localization . For a quasi-one dimensional geometry, i.e., for quantum wires, and for weak disorder (mean free path $`\mathrm{}`$ is much larger than the Fermi wavelength $`\lambda `$) the statistical distribution of the conductance can be described by the transfer matrix approach of Dorokhov , and Mello, Pereyra, and Kumar (DMPK). In this approach, the transport properties of the quantum wire are described in terms of a scaling equation for its transmission eigenvalues, the so-called DMPK equation . Hüffmann and Caselle have provided the DMPK equation with a geometric foundation by reformulating it in terms of a Brownian motion of the transfer matrix on a symmetric space, a certain curved manifold from the theory of Lie groups and Lie algebras . The existence of a unique natural mathematical framework to describe Brownian motion on symmetric spaces provides the geometric counterpart of the observed universality of the localization properties of disordered quantum wires. In a recent work, two of the authors, together with Simons and Altland , have proposed an extension of the scaling approach of DMPK to quantum wires with off-diagonal disorder (e.g. a lattice model with random hopping amplitudes and no on-site disorder). At the band center $`\epsilon =0`$ such a system has an extra chiral or sublattice symmetry that is not present in the standard case of a wire with diagonal (potential) disorder. Therefore they belong to a different symmetry class, which is referred to as the chiral symmetry class. Chiral symmetry also plays an important role for two-dimensional Dirac fermions in a random vector potential , the lattice random flux model , non-Hermitian quantum mechanics , supersymmetric quantum mechanics , diffusion in a random medium , and in certain problems in QCD . In Ref. 
, the extension of the DMPK equation to the chiral symmetry class was derived from a simple microscopic model. Here we discuss its geometric origin. This proves to be an exercise with implications that reach far beyond the construction of a mere chiral parallel to the geometric foundation of the DMPK equation of Refs. : Upon inspection of the geometric structure underlying the “chiral DMPK equation”, we find that the localization properties of a disordered wire with off-diagonal disorder are not universal; the geometric approach allows for a one-parameter family of scaling equations for the transmission eigenvalues. The scaling equation that was originally found in Ref. is a special point in this family. Below we discuss the geometric approach in more detail, identify the origin of this non-universality, and illustrate the results with numerical simulations of different microscopic models. We start with a brief summary of the ideas that lead to the standard DMPK equation and its geometric interpretation. The cornerstone of this approach are the symmetry properties of the transfer matrix $`M`$ of a disordered wire. For definiteness, we focus on the lattice Anderson model in a geometry of $`N`$ coupled chains, see Fig. 1. In this model, the Hamiltonian consists of nearest neighbor hopping terms and a random on-site potential. We restrict our attention to spinless particles. The wire consists of a disordered region and of two ideal leads consisting of $`N`$ (uncoupled) chains. In the leads, the wave function is represented by an $`N`$-component vector $`\psi _\pm `$ for the amplitudes of left ($`+`$) and right ($``$) moving waves. The wave functions to the left and right sides of the disordered sample are related by the $`2N\times 2N`$ transfer matrix $`M`$ , $$\left(\genfrac{}{}{0pt}{}{\psi _+}{\psi _{}}\right)_{\mathrm{right}}=M\left(\genfrac{}{}{0pt}{}{\psi _+}{\psi _{}}\right)_{\mathrm{left}}.$$ (1) In this basis of left and right movers, flux conservation implies that the transfer matrix $`M`$ obeys $$M^{}\mathrm{\Sigma }_3M=\mathrm{\Sigma }_3,$$ (2) where $`\mathrm{\Sigma }_3=\sigma _3𝟙_{}`$, $`\sigma _3`$ being the Pauli matrix and $`𝟙_{}`$ the $`N\times N`$ unit matrix. We distinguish between the presence and absence of time-reversal symmetry, labeled by the symmetry parameter $`\beta =1,2`$, respectively. In the presence of time-reversal symmetry, $`M`$ further satisfies $$\mathrm{\Sigma }_1M^{}\mathrm{\Sigma }_1=M,$$ (3) where $`\mathrm{\Sigma }_1=\sigma _1𝟙_{}`$. The transfer matrix $`M`$ can be parametrized as $$M=\left(\begin{array}{cc}U& 0\\ 0& U^{}\end{array}\right)\left(\begin{array}{cc}\mathrm{cosh}x& \mathrm{sinh}x\\ \mathrm{sinh}x& \mathrm{cosh}x\end{array}\right)\left(\begin{array}{cc}V& 0\\ 0& V^{}\end{array}\right),$$ (4) where $`U`$, $`U^{}`$, $`V`$, and $`V^{}`$ are unitary matrices, and $`x`$ is a diagonal matrix with diagonal elements $`x_j`$. For $`\beta =1`$, $`V^{}=V^{}`$ and $`U^{}=U^{}`$. The unitary matrices $`U`$, $`U^{}`$, $`V`$, and $`V^{}`$ serve as “angular coordinates” for $`M`$, the parameters $`x_j`$ serve as “radial coordinates” \[the eigenvalues of $`MM^{}`$ are $`\mathrm{exp}(\pm 2x_j)`$\]. The radial coordinates $`x_j`$ are related to the transmission eigenvalues $`T_j`$ by $`T_j=1/\mathrm{cosh}^2x_j`$. 
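The following sketch is a simple numerical check of this parametrization (not part of the original analysis): it builds a random transfer matrix of the form (4) for $`\beta =2`$ from independent random unitary blocks and arbitrarily chosen $`x_j`$, verifies the flux-conservation constraint (2), and recovers the radial coordinates and transmission eigenvalues from the eigenvalues of $`MM^{\dagger }`$.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4

def random_unitary(n):
    """Random unitary matrix from the QR decomposition of a complex Gaussian matrix."""
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2.0)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

U, Up, V, Vp = (random_unitary(N) for _ in range(4))
x = np.sort(rng.uniform(0.3, 2.0, size=N))[::-1]        # arbitrary radial coordinates
C, S = np.diag(np.cosh(x)), np.diag(np.sinh(x))
Z = np.zeros((N, N))
M = (np.block([[U, Z], [Z, Up]])
     @ np.block([[C, S], [S, C]])
     @ np.block([[V, Z], [Z, Vp]]))                     # parametrization (4), beta = 2

Sigma3 = np.kron(np.diag([1.0, -1.0]), np.eye(N))
print(np.allclose(M.conj().T @ Sigma3 @ M, Sigma3))     # flux conservation, Eq. (2)

ev = np.linalg.eigvalsh(M @ M.conj().T)                 # eigenvalues exp(+-2 x_j)
x_rec = np.sort(0.5 * np.log(ev[ev > 1.0]))[::-1]
print(x, x_rec, 1.0 / np.cosh(x_rec)**2)                # x_j, recovered x_j, T_j
```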
They determine the dimensionless conductance $`g`$ through the Landauer formula, $$g=\underset{j=1}{\overset{N}{}}T_j=\underset{j=1}{\overset{N}{}}\frac{1}{\mathrm{cosh}^2x_j}.$$ (5) In the original derivations of the DMPK equation , the wire is divided into thin slices, and the transfer matrix $`M`$ of the entire wire is found by multiplication of the transfer matrices of the individual slices. For weak disorder, the parameters $`x_j`$ of the transfer matrix undergo only small changes $`x_jx_j+\delta x_j`$ upon each such multiplication. One can view this process as a “Brownian motion” for the parameters $`x_j`$, the length $`L`$ of the wire serving as a fictitious time. With the choice of a maximum information entropy distribution for the transfer matrix of the slice , one can write down the corresponding Fokker-Planck equation, which has the form $`{\displaystyle \frac{P}{L}}`$ $`=`$ $`D{\displaystyle \underset{j}{}}{\displaystyle \frac{}{x_j}}J{\displaystyle \frac{}{x_j}}J^1P,`$ (6) $`J`$ $`=`$ $`{\displaystyle \underset{j}{}}\mathrm{sinh}(2x_j){\displaystyle \underset{j<k}{}}\mathrm{sinh}^\beta (x_jx_k)\mathrm{sinh}^\beta (x_j+x_k).`$ (7) The proportionality constant $`D`$ is determined by the details of the microscopic model.<sup>*</sup><sup>*</sup>* In the DMPK equation, $`D^1=2(\beta N+2\beta )\mathrm{}`$, where $`\mathrm{}`$ is the mean free path, see e.g. Ref. . Hüffmann pointed out that there is a beautiful geometric structure underlying the scaling equation (7). He observed that Eqs. (2)–(4) express that the transfer matrix $`M`$ is a member of a Lie group $`G_\beta `$, with $`U(N,N)`$ is the group of complex matrices $`M`$ with $`M^{}\mathrm{\Sigma }_3M=\mathrm{\Sigma }_3`$; $`SU(N,N)`$ is its subgroup of matrices with unit determinant; $`Sp(N,)`$ is the group of real $`2N\times 2N`$ matrices $`M`$ that obey $`M^T\mathrm{\Sigma }_2M=\mathrm{\Sigma }_2`$, and $`Z_2=\{\pm 1\}`$. To see that $`G_1Z_2\times Sp(N,)`$, see e.g. Ref. . $$G_1Z_2\times Sp(N,),G_2U(N,N)U(1)\times SU(N,N),$$ (8) and proposed to describe the $`L`$ evolution of $`M`$ as a “Brownian motion” on the manifold $`G_\beta `$ . For a rigorous formulation of this Brownian motion process, two further steps have to be taken. We would like to repeat them here, as we need to reconsider them when we deal with the case of a quantum wire with off-diagonal disorder. 1. The Lie groups $`G_\beta `$ are not semi-simple: they are the direct product of the two components $`Z_2`$ and $`Sp(N,)`$, or $`U(1)`$ and $`SU(N,N)`$, for $`\beta =1`$ or $`2`$ respectively. For the product $`MM^{}`$, and hence for the radial coordinates $`x_j`$, only the non-compact components $`Sp(N,)`$ and $`SU(N,N)`$ are relevant, so that we may restrict our attention to the semi-simple Lie groups $`Sp(N,)`$ and $`SU(N,N)`$. The remaining components $`Z_2`$ and $`U(1)`$ correspond to the sign or phase of $`detM`$ and do not affect the $`x_j`$. 2. The algebraic structure on the semi-simple Lie groups $`Sp(N,)`$ and $`SU(N,N)`$ gives rise to a natural metric . However, since this natural metric is not positive definite, it cannot be used to define a Brownian motion process. The problem is solved by dividing out the maximal compact subgroups $`U(N)`$ or $`S(U(N)\times U(N))`$ for $`\beta =1`$ or $`2`$, respectively. The resulting coset spaces $`S_1=Sp(N,)/U(N)`$ and $`S_2=SU(N,N)/S(U(N)\times U(N))`$ are called symmetric spaces and have a natural positive definite metric . 
In the parametrization (4), the procedure of dividing out the subgroups $`U(N)`$ or $`S(U(N)\times U(N))`$ corresponds to the identification of all transfer matrices $`M`$, $`M^{}`$ for which the product $`M^1M^{}`$ is of the form $$M^1M^{}=\left(\begin{array}{cc}V& 0\\ 0& V^{}\end{array}\right),$$ (9) where $`V`$ and $`V^{}`$ are arbitrary unitary matrices ($`V^{}=V^{}`$ for $`\beta =1`$ and $`detV^{}V=1`$ for $`\beta =2`$). In shifting to the symmetric spaces $`S_\beta `$ no information on the radial coordinates $`x_j`$ is lost, because they are well-defined in each equivalence class. The symmetric spaces $`S_\beta `$ admit a spherical coordinate system whose radial coordinates $`x_1,\mathrm{},x_N`$ are equal to the radial coordinates $`x_1,\mathrm{},x_N`$ of the transfer matrix $`M`$, cf. Eq. (4). The Brownian motion of the $`x_j`$ is described by the radial part of the Laplace-Beltrami operator on $`S_\beta `$. This radial part is known from the literature , and is found to be identical to the differential operator in Eq. (7). As a result, we find that their probability distribution $`P(x_1,\mathrm{},x_N;L)`$ satisfies the Fokker-Planck equation (7), where the constant $`D`$ is the diffusion constant on $`S_\beta `$. The appearance of a single diffusion constant $`D`$ signifies the universality of the localization properties in disordered quantum wires. Let us now consider a quantum wire with off-diagonal disorder (random hopping), at the band center $`\epsilon =0`$. We consider the same geometry as in Fig. 1, the only difference being that now the randomness is in the hopping amplitudes between neighboring lattice site, the on-site potential being zero everywhere. On a bipartite lattice (which is the case we consider here, cf. Fig. 1), and with only off-diagonal (hopping) randomness, localization is different from the standard case described above because of the existence of an additional symmetry, known as a sublattice or chiral symmetry , $$\mathrm{\Sigma }_1M\mathrm{\Sigma }_1=M.$$ (10) Hence the unitary matrices in the parametrization (4) satisfy $`U=U^{}`$ and $`V=V^{}`$. Let us now consider the extension of the DMPK equation (7) that includes the chiral symmetry (10), following Hüffmann’s geometric approach. The Lie groups $`G_\beta ^{\mathrm{ch}}`$ of transfer matrices $`M`$ that obey the chiral symmetry (10) are $`GL(N,)`$ and $`GL(N,)`$ are the multiplicative groups of $`N\times N`$ matrices with real and complex elements, respectively; $`SL(N,)`$ and $`SL(N,)`$ are their subgroups of matrices with unit determinant; $`^+`$ is the multiplicative group of the positive real numbers. The origin of these transfer matrix groups is explained below Eq. (20). $`G_1^{\mathrm{ch}}`$ $``$ $`GL(N,)Z_2\times ^+\times SL(N,),`$ $`G_2^{\mathrm{ch}}`$ $``$ $`GL(N,)U(1)\times ^+\times SL(N,).`$ The construction of a Brownian motion process for $`M`$ on $`G_\beta ^{\mathrm{ch}}`$ proceeds with the same two steps as in the standard case of diagonal disorder. However, there is an important difference. Unlike the transfer matrix groups $`G_\beta `$ discussed above, the transfer matrix groups $`G_\beta ^{\mathrm{ch}}`$ for off-diagonal disorder contain two non-compact factors, $`^+`$ and $`SL(N,)`$, or $`^+`$ and $`SL(N,)`$, for $`\beta =1`$ or $`2`$, respectively. (This difference was noted by Zirnbauer in a field-theoretic context .) 
Both of them determine the radial coordinates $`x_j`$: The factor $`^+`$ describes the position of the average $`\overline{x}=(x_1+\mathrm{}+x_N)/N`$, while the special linear group $`SL(N)`$ is connected to the relative positions of the $`x_j`$. One therefore has to consider two different Brownian motion processes: One on $`^+`$, to describe the $`L`$ evolution of $`\overline{x}`$, and one on the symmetric space $`S_1^{\mathrm{ch}}=SL(N,)/SO(N)`$ or $`S_2^{\mathrm{ch}}=SL(N,)/SU(N)`$, to describe the $`L`$ evolution of the differences of the radial coordinates $`x_j`$. \[The symmetric spaces $`S_\beta ^{\mathrm{ch}}`$ are obtained after dividing out the maximal compact subgroup of $`SL(N,)`$ or $`SL(N,)`$, as in the standard case.\] Since these two Brownian motion processes have two different and a priori unrelated diffusion constants, one needs their ratio as an extra parameter to characterize the distribution of the transfer matrix. This is the absence of universality in quantum wires with off-diagonal disorder that we announced in the introduction. In this most general case, there is a one-parameter family of Fokker-Planck equations for the distribution $`P(x_1,\mathrm{},x_N;L)`$ of the radial coordinates $`x_j`$. This family of Fokker-Planck equations is constructed in two steps. First, Brownian motion on $`^+`$ results in a simple diffusion equation for the distribution $`P_{\overline{x}}(\overline{x};L)`$ of the average $`\overline{x}`$, $$\frac{P_{\overline{x}}}{L}=\frac{D_R}{N}\frac{^2}{\overline{x}^2}P_{\overline{x}}.$$ (11) Here $`D_R/N`$ is the diffusion coefficient for the Brownian motion on $`^+`$. (We chose to write the diffusion coefficient as $`D_R/N`$ for later convenience; it is the diffusion coefficient that one would obtain for the average $`\overline{x}`$ if the $`N`$ variables $`x_j`$ would diffuse independently with diffusion coefficient $`D_R`$.) Second, Brownian motion on $`S_\beta ^{\mathrm{ch}}`$ is described in terms of radial coordinates $`y_j`$, $`j=1,\mathrm{},N`$, with the constraint $`y_1+\mathrm{}+y_N=0`$ . They correspond to the radial coordinates $`x_j`$ of the transfer matrix $`M`$ via $`y_j=x_j\overline{x}`$. Using the explicit form for the Laplace-Beltrami operator on the symmetric spaces $`S_\beta ^{\mathrm{ch}}`$, the Fokker-Planck equation for the probability distribution $`P_y(y_1,\mathrm{},y_N;L)`$ of the $`y_j`$ reads $$\frac{P_y}{L}=D_S\underset{j=1}{\overset{N}{}}\frac{}{y_j}J\frac{}{y_j}J^1P_y,J=\underset{j<k}{}\mathrm{sinh}^\beta (y_jy_k),$$ (12) where we have to restrict to the subspace $`y_1+\mathrm{}+y_N=0`$. Hence, for the distribution $`P(x_1,\mathrm{},x_N;L)`$ of the radial coordinates $`x_j=\overline{x}+y_j`$ of the transfer matrix $`M`$ we find the Fokker-Planck equations $$\frac{P}{L}=D_{\mathrm{ch}}\underset{j,k=1}{\overset{N}{}}\frac{}{x_j}\left(\delta _{jk}\frac{1\eta }{N}\right)J\frac{}{x_k}J^1P,J=\underset{j<k}{}\mathrm{sinh}^\beta (x_jx_k),$$ (13) where $`D_{\mathrm{ch}}=D_S`$ and $`\eta =D_R/D_S`$. Equation (13) contains the full one-parameter family of scaling equations that describes localization at the band center $`\epsilon =0`$ in quantum wires with off-diagonal disorder. The case $`\eta =1`$ (i.e., equal diffusion constants $`D_R`$ and $`D_S`$) corresponds to the equation that was derived before in Ref. . A solution of the scaling equation (13) in the localized regime $`LD_{\mathrm{ch}}`$ can be obtained by standard methods . 
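The decoupling of $`\overline{x}`$ expressed by Eq. (11) can be checked directly by integrating a Langevin process equivalent to Eq. (13). The sketch below is only a crude Euler discretization with arbitrary parameters ($`N=4`$, $`\beta =1`$, $`\eta =0.3`$, $`D_{\mathrm{ch}}=1`$): the drift is $`D_{\mathrm{ch}}`$ times the matrix $`A`$, with $`A_{jk}=\delta _{jk}-(1-\eta )/N`$, acting on the gradient of $`\mathrm{ln}J`$, and the noise has covariance $`2D_{\mathrm{ch}}A\mathrm{d}L`$, generated with the matrix square root of $`A`$. The check is that the variance of $`\overline{x}`$ grows as $`2(D_R/N)L`$ with $`D_R=\eta D_{\mathrm{ch}}`$, independently of the interacting relative coordinates.

```python
import numpy as np

rng = np.random.default_rng(4)

N, beta = 4, 1
D_ch, eta = 1.0, 0.3                      # D_S = D_ch, D_R = eta * D_ch
dL, n_steps, n_real = 2e-3, 2000, 300
L = n_steps * dL

A = np.eye(N) - ((1.0 - eta) / N) * np.ones((N, N))
sqrtA = np.eye(N) - ((1.0 - np.sqrt(eta)) / N) * np.ones((N, N))   # sqrtA @ sqrtA = A

xbar = np.empty(n_real)
for r in range(n_real):
    x = 0.1 * np.arange(N, 0, -1, dtype=float)       # ordered starting point x_1 > ... > x_N
    for _ in range(n_steps):
        diff = x[:, None] - x[None, :]
        coth = np.zeros((N, N))
        off = ~np.eye(N, dtype=bool)
        coth[off] = 1.0 / np.tanh(diff[off])
        grad_lnJ = beta * coth.sum(axis=1)            # d(ln J)/dx_k, with J from Eq. (13)
        drift = D_ch * (A @ grad_lnJ)
        noise = np.sqrt(2.0 * D_ch * dL) * (sqrtA @ rng.normal(size=N))
        x = x + drift * dL + noise
    xbar[r] = x.mean()

# Eq. (11): xbar diffuses freely with coefficient D_R/N, so var(xbar) ~ 2*(D_R/N)*L.
print(xbar.var(), 2.0 * eta * D_ch * L / N)
```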
The distribution of the radial coordinates is Gaussian, with average and variance given by $$x_j=(N+12j)\beta LD_{\mathrm{ch}},x_kx_jx_jx_k=2\left(\delta _{jk}\frac{1\eta }{N}\right)LD_{\mathrm{ch}}.$$ (14) For even $`N`$, the distribution of the conductance $`g`$ is log-normal, with $`\mathrm{ln}g`$ $`=`$ $`\beta s+2\left[{\displaystyle \frac{1}{\pi }}\left(12{\displaystyle \frac{1\eta }{N}}\right)\right]^{1/2}s^{1/2}+𝒪(1),`$ (15) $`\text{var}\mathrm{ln}g`$ $`=`$ $`2\left[1+\left(1{\displaystyle \frac{2}{\pi }}\right)\left(12{\displaystyle \frac{1\eta }{N}}\right)\right]s+𝒪(1),`$ (16) where $`s=2LD_{\mathrm{ch}}`$. For odd $`N`$, the fluctuations of $`\mathrm{ln}G`$ are of the same order as the average, $`\mathrm{ln}g`$ $`=`$ $`2\left[{\displaystyle \frac{2}{\pi }}\left(1{\displaystyle \frac{1\eta }{N}}\right)\right]^{1/2}s^{1/2}+𝒪(1),`$ (17) $`\text{var}\mathrm{ln}g`$ $`=`$ $`4\left(1{\displaystyle \frac{2}{\pi }}\right)\left(1{\displaystyle \frac{1\eta }{N}}\right)s+𝒪(1).`$ (18) For comparison, with diagonal disorder, $`\mathrm{ln}g=4DL`$ and $`\text{var}\mathrm{ln}g=8DL`$ . Although for even $`N`$ the conductance distribution is log-normal both for diagonal and for off-diagonal disorder, there are some important differences: The presence of a term $`L^{1/2}`$ in $`\mathrm{ln}g`$, the $`\beta `$-dependence of $`\mathrm{ln}g`$, and the absence of universal fluctuations of $`\mathrm{ln}g`$ (they depend on $`\eta `$) are special for off-diagonal disorder and do not appear in the standard case of diagonal disorder . What is $`\eta `$ for a particular microscopic model? Although we cannot answer this question in general, we can discuss some examples of microscopic models of quantum wires with hopping disorder and compare theoretical predictions from Eq. (13) with numerical simulations. We start from the Schrödinger equation for a set of coupled chains with random hopping, which in general can be written as $$\epsilon \psi _n=t_n^{}\psi _{n+1}t_{n1}^{}\psi _{n1}.$$ (19) Here $`n`$ labels the position along the wire (in units of the lattice spacing), $`\psi _n`$ is an $`N`$-component wavevector, and $`t_n`$ is an $`N\times N`$ hopping matrix, see Fig. 2a. (This is a more general formulation than the one of Fig. 1. Note that the lattice of Fig. 1 is in this class.) The hopping matrices $`t_n`$ are real (complex) for $`\beta =1`$ ($`2`$). We connect the disordered region of length $`L`$ to two ideal leads for $`n<0`$ and $`n>L`$, characterized by $`t_n=𝟙`$. For zero energy it is possible to solve for the transfer matrix explicitly, $$M(L)=\frac{1}{2}\left(\begin{array}{cc}m^{}+m^1& m^{}m^1\\ m^{}m^1& m^{}+m^1\end{array}\right),m=\underset{n=1}{\overset{L/2}{}}(t_{2n1}t_{2n}^1).$$ (20) (For convenience, we assumed that $`L`$ is even; recall that we use a basis of left and right movers.) The product $`m^{}m`$ has eigenvalues $`\mathrm{exp}(2x_j)`$, $`j=1,\mathrm{},N`$; the product $`m^1m^1`$ has eigenvalues $`\mathrm{exp}(2x_j)`$. Being an arbitrary $`N\times N`$ matrix with real or complex elements, the matrix $`m`$ is an element of the linear group $`GL(N,)`$ or $`GL(N,)`$, and, taking into account the considerations outlined above, the $`L`$ dependence of the transfer matrix $`M`$ can be described by the random trajectory of the matrix $`m`$ in $`GL(N,)`$ or $`GL(N,)`$. We now consider two examples. First, we consider the microscopic model used in Ref. (and also Ref. ). 
There, a distribution for the $`t_n`$ was assumed that was invariant under unitary transformations of the chains, $$t_n=\mathrm{exp}(W_n),(W_n)_{\mu \nu }^{}(W_n)_{\rho \sigma }^{}=\frac{1}{2}w^2\beta \delta _{\mu \rho }\delta _{\nu \sigma }.$$ (21) Here $`W_n`$ is a real (complex) matrix with independently and identically Gaussian distributed elements for $`\beta =1`$ ($`2`$). In this case, for small $`w`$, the distribution of the radial coordinates $`x_j`$ of the transfer matrix $`M`$ can be found explicitly. One obtains that the diffusion rates $`D_R`$ and $`D_S`$ are equal, and the parameters $`x_j`$ obey a Fokker-Planck equation of the type (13) with $`D_{\mathrm{ch}}=w^2`$ and $`\eta =1`$ . We mention that the most general Fokker-Planck equation (13), including the dependence on $`\eta `$, can be derived from Eq. (19) using $`t_n=\mathrm{exp}(W_n)`$ and a Gaussian distribution for the matrix $`W_n`$ that involves correlations between the diagonal elements, $$(W_n)_{\mu \nu }^{}(W_n)_{\rho \sigma }^{}=\frac{1}{2}w^2\beta \left[\delta _{\mu \rho }\delta _{\nu \sigma }(1\eta )N^1\delta _{\mu \nu }\delta _{\rho \sigma }\right].$$ (22) As a second example, we consider the random flux model . This model corresponds to a square lattice, where each plaquette contains a random flux, see Fig. 2b. The hopping matrices $`t_n`$ have the form $$t_n=\left(\begin{array}{ccccc}1& e^{i\varphi _{1,n}}& 0& \mathrm{}& 0\\ 0& 1& e^{i\varphi _{2,n}}& \mathrm{}& 0\\ \mathrm{}& \mathrm{}& & & \mathrm{}\\ 0& 0& \mathrm{}& 1& e^{i\varphi _{N1,n}}\\ 0& 0& \mathrm{}& 0& 1\end{array}\right).$$ (23) The phases $`\varphi _{j,n}`$ are distributed such that the fluxes $`\mathrm{\Phi }_{j,n}=\varphi _{j,n+1}\varphi _{j,n}`$ are independently distributed. In order to find the distribution of the radial coordinates $`x_j`$ for the random flux model, we need to compute the matrix $`m`$, and find the eigenvalues $`\mathrm{exp}(2x_j)`$, $`j=1,\mathrm{},N`$, of $`mm^{}`$, see Eq. (20). We do not need to solve this problem exactly: Each of the hopping matrices $`t_n`$ in the random flux model has $`dett_n=1`$, which implies $`\mathrm{exp}(2N\overline{x})=detmm^{}=1`$ for all lengths. Hence, the average radial coordinate $`\overline{x}`$ does not diffuse, so that $`D_R=0`$. We conclude that $`\eta =0`$ for the random flux model. The reason why $`D_R=0`$ or $`\eta =0`$ for the random flux model is that $`dett_n=1`$ for all $`n`$. More generally, all random hopping models which have $`dett_n=1`$ are described by Eq. (13) with $`\eta =0`$. One example of such a model is a random hopping model on a square lattice with only randomness in the transverse hopping amplitudes. In the general case, however, there will be both randomness in the transverse and in the longitudinal hopping amplitudes, so that $`\eta >0`$. To compare the Fokker-Planck equation (13) to numerical simulations, we considered the quantity $$c(N)\underset{L\mathrm{}}{lim}\frac{\mathrm{ln}g}{\text{var}\mathrm{ln}g}=\{\begin{array}{cc}\frac{\beta N/2}{N+(12/\pi )(N2+2\eta )},\hfill & \text{if }N\text{ is even},\hfill \\ & \\ 0,\hfill & \text{if }N\text{ is odd}.\hfill \end{array}$$ (24) Here, we used Eqs. (16) and (18) for $`\mathrm{ln}g`$ and $`\text{var}\mathrm{ln}g`$. In the standard case of on-site disorder, one has $`c(N)=1/2`$ . We have compared Eq. (24) to numerical simulations for a square lattice of width $`N`$ and length $`L`$, attached to perfect leads. 
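Before quoting the parameters of that comparison, it may help to illustrate how $`\eta `$ itself can be read off from transfer-matrix data. The sketch below (with arbitrary sizes, not the simulation reported here) builds $`m`$ from Eq. (20) for two microscopic models: the unitary-invariant model of Eq. (21) with $`\beta =1`$, for which $`\eta =1`$, and a bidiagonal model with non-random longitudinal amplitudes, for which $`dett_n=1`$ and hence $`\eta =0`$. The estimator for $`\eta `$ follows from the covariance structure of the $`x_j`$ implied by Eqs. (11)-(14) and does not require knowing $`D_{\mathrm{ch}}`$.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)

N, w, L, n_samples = 2, 0.3, 600, 300     # illustrative sizes only

def radial_coords(slice_factory):
    """x_j = log of the singular values of m = prod t_{2n-1} t_{2n}^{-1} (Eq. 20)."""
    m = np.eye(N)
    for _ in range(L // 2):
        m = m @ slice_factory() @ np.linalg.inv(slice_factory())
    return np.log(np.linalg.svd(m, compute_uv=False))

def t_invariant():
    """Unitary-invariant model, Eq. (21) with beta = 1: t_n = exp(W_n)."""
    return expm(w / np.sqrt(2.0) * rng.normal(size=(N, N)))

def t_unimodular():
    """Bidiagonal t_n with unit longitudinal amplitudes, so that det t_n = 1."""
    return np.eye(N) + np.diag(rng.uniform(-0.2, 0.2, size=N - 1), k=1)

def eta_estimate(factory):
    xs = np.array([radial_coords(factory) for _ in range(n_samples)])
    a = xs.mean(axis=1).var()          # var(xbar)
    b = xs.var(axis=0).mean()          # mean of var(x_j)
    return a * (N - 1) / (b - a)       # from cov(x_j, x_k) = 2*(delta_jk - (1-eta)/N)*L*D_ch

print("eta (Eq. 21 model) ~", eta_estimate(t_invariant))    # expected near 1
print("eta (det t_n = 1)  ~", eta_estimate(t_unimodular))   # expected 0
```

Since the overall diffusion constant cancels in the ratio, this estimator depends only on the relative weight of the fluctuations of $`\overline{x}`$, which is the quantity that distinguishes the two models.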
For $`\beta =1`$, the transverse hopping amplitudes are taken from a uniform distribution in $`[0.2,0.2]`$, whereas for $`\beta =2`$, the transverse hopping amplitudes are complex numbers with amplitude uniformly distributed in $`[0,0.1]`$ and a random phase. In both cases the longitudinal hopping amplitudes are real numbers taken uniform in $`[1w,1+w]`$. We have numerically computed the ratio $`c(N)`$ by taking an average over more than $`2\times 10^4`$ samples. Results for $`c(N)`$ as a function of $`w`$ for $`N=2`$ and $`N=4`$ and for zero energy $`\epsilon =0`$ are shown in Fig. 3. In the presence of the chiral symmetry, i.e. for zero energy, we expect that $`c(N)`$ depends on the details of the microscopic model (i.e., on the parameter $`w`$) through the parameter $`\eta `$. For $`w=0`$, we have $`\eta =0`$, and hence $`c(2)=\beta /2`$, $`c(4)=\beta /(32/\pi )`$. As the randomness $`w`$ in the longitudinal hopping amplitudes increases, we expect an increase of $`\eta `$, and hence a decrease of $`c(N)`$, which is confirmed by the numerical data shown in the figure. We have also shown data for energy $`\epsilon =0.2`$ where the chiral symmetry is broken.<sup>§</sup><sup>§</sup>§ We have not shown the data points for $`w=0`$ and $`N=2`$ at finite energy, because in that case there is an extra reflection symmetry that is not taken into account in the standard DMPK equation. In this case we find $`c(N)=1/2`$ for all $`w`$, in agreement with the literature . \[The slight increase of $`c(N)`$ with $`w`$ is attributed to a breakdown of the weak disorder condition.\] Except for the case of the random flux model and the random-hopping model with transverse random hopping, where $`\eta =0`$, we do not know how to compute the parameter $`\eta `$ explicitly. Numerical simulations show that, typically, the parameter $`\eta `$ is of order unity. The effect of a nonzero $`\eta `$ on the conductance distribution is most pronounced for small $`N`$. For large $`N`$, the effect is small, because only the radial coordinates near zero contribute to transport and the additional correlations between these coordinates caused by a nonzero $`\eta `$ decrease as $`1/N`$. Therefore, in the limit of large $`N`$, we still expect a universal conductance distribution. Here, we would like to remark that in Ref. , the Fokker-Planck equation (13) with $`\eta =1`$ was used to describe localization in the random-flux model. Despite the fact that the random flux model corresponds to $`\eta =0`$, excellent agreement was found between the theory and numerical simulations performed for $`N=15`$ and up. Before concluding, we would like to make two remarks. First, it is known that universality of one-parameter scaling breaks down in (quasi-) one dimensional disordered systems if there exist long-range correlations in the disorder, for instance in periodic-on-average systems or for random-hopping chains in the presence of a staggering of the hopping parameter , as is relevant e.g. for narrow gap semiconductors or charge-density wave materials . (In the latter case, the staggering of the hopping parameter adds a drift term to the Brownian motion of $`\overline{x}`$, whereas the Brownian motion of the relative positions of the $`x_j`$ remains unaffected .) 
The nonuniversality that we discuss in this paper is of quite a different origin: It follows directly from the geometric structure of the transfer matrix group; no long-range correlations in the disorder are involved, all the hopping amplitudes in the examples we considered have independent distributions. It should be noted that the appearance of a one-parameter family of scaling equations also occurs in the case of random Dirac fermions in two dimensions, where one finds a line of fixed points, rather than a single fixed point . Our second remark is of a more mathematical nature. Altland and Zirnbauer and Caselle have argued that Cartan’s classification of all symmetric spaces offers a complete classification of all possible random-matrix theories. These random-matrix theories appear in triplets: distributions of eigenvalues of Hermitian matrices (i.e., energy levels), distributions of eigenphases of unitary matrices (i.e., scattering phase shifts), and a Fokker-Planck equation for the radial eigenvalues of a transfer matrix. It is the latter kind of random-matrix theories that we have considered here. While Cartan’s classification is complete for all semi-simple transfer matrix groups, it does not take into account the phenomenon that we have presented in this paper: that for some disordered systems the transfer matrix group is not semi-simple, so that they cannot be represented by a single element from Cartan’s table. To summarize, we have considered the Fokker-Planck equation for the transmission eigenvalues of a quantum wire with off-diagonal (hopping) disorder from a geometric point of view. Under the same assumption of weak disorder that leads to the universal DMPK equation in the standard case of diagonal (on-site) disorder , we have found that the Fokker-Planck equation for the transmission eigenvalues of a quantum wire with off-diagonal disorder contains an extra parameter that depends on the microscopic details of the disorder. The existence of this extra parameter leads to a nonuniversality of transport properties in random-hopping chains, which is most prominent if the number $`N`$ of coupled chains is small. We would like to thank D. S. Fisher and B. I. Halperin for discussions. PWB acknowledges support by the NSF under grant nos. DMR 94-16910, DMR 96-30064, and DMR 97-14725. CM acknowledges a fellowship from the Swiss Nationalfonds. The numerical computations were performed at the Yukawa Institute Computer Facility.
no-problem/9904/astro-ph9904344.html
ar5iv
text
# The paper has been withdrawn. ## I The paper has been withdrawn.
no-problem/9904/hep-ph9904304.html
ar5iv
text
# MZ-TH/99-13, CLNS/99-1613, hep-ph/9904304, April 1999 Transcendental numbers and the topology of three-loop bubbles ## Abstract We present a proof that all transcendental numbers that are needed for the calculation of the master integrals for three-loop vacuum Feynman diagrams can be obtained by calculating diagrams with an even simpler topology, the topology of spectacles. preprint: MZ-TH/99-13, CLNS/99-1613 Feynman diagrams belong to the basic objects needed in the study of present phenomenological elementary particle physics. They provide a simple and convenient language for the bookkeeping of perturbative corrections involving many-fold integrals. In order to arrive at physical predictions the many-fold integrals have to be evaluated explicitly. If one is only aiming at practical applications a numerical evaluation of the integrals may be sufficient. However, their analytical evaluation is theoretically more appealing. This line of research has been vigorously pursued during the last few decades. Major breakthroughs were marked by the introduction of the integration-by-parts technique and the algebraic approach which lead to an intensive use of computers for the necessary symbolic calculations. The new techniques allow one to systematically classify the structure of integrands and to undertake the rather massive computations necessary for multi-loop calculations which may include tens of thousands of Feynman diagrams (see e.g. refs. ). Still the analytical evaluation of the basic prototype integrals remains an art and even has given rise to a new field of research, namely the study of the transcendental structure of the results as well as their connection to topology and number theory (see e.g. refs. ). In general, Feynman integrals are not well defined and require regularization. The dimensional regularization scheme is the regularization scheme favoured by most of the practitioners. In contrast to the Pauli-Villars regularization scheme and other subtraction techniques, the dimensional regularization method is the most natural one in the sense that it preserves various features of the diagrams. For example, massless diagrams (i.e. with vanishing bare mass) remain massless under dimensional regularization. The transcendental structure of massless multi-loop integrals is rather well understood. It is expressible mainly in terms of Riemann’s $`\zeta `$-function, $`\zeta (L)`$ where $`L`$ depends on the topology of the diagram and the number of loops. In contrast to this, the massive case is more complicated and has a richer transcendental structure. At present this field is being very actively studied and unites the community of phenomenological physicists and pure mathematicians. Very recently some novel and striking results concerning three-loop vacuum bubbles have been discovered in this field . The integration-by-parts technique within dimensional regularization reduces the calculation of a general three-loop vacuum diagram to several master configurations. This reduction involves only algebraic manipulations and is universal for any given space-time dimension $`D`$ (see e.g. ref. ). A general strategy for reducing all three-loop vacuum diagrams to a finite set of fixed master integrals through recurrence relations was described in ref. . The analytical structure of the remaining unknown master integrals with tetrahedron topology has been identified with the help of ultraprecise numerical methods . 
These new achievements allowed one to obtain the complete analytical expression for the three-loop $`\rho `$-parameter of the Standard Model which was known before only numerically . The results can be written in terms of a finite set of transcendental numbers called primitives. Which of these primitives enter the final result for a particular diagram depends on how the masses are distributed along the lines of a specific diagram. The main objects of the calculation in were the finite parts $`F_i`$ of the tetrahedron diagrams (we adopt the notation of ref. ). The diagrams were considered in four-dimensional space-time while the overall ultraviolet divergence appears as a simple pole in $`\epsilon =(4D)/2`$ within dimensional regularization. By extensive use of ultra-high precision numerical calculations it was found that only two new transcendental numbers $`U_{3,1}`$, $`V_{3,1}`$ and the square of Clausen’s dilogarithm $`\mathrm{Cl}_2(\theta )=\mathrm{Im}(\mathrm{Li}_2(e^{i\theta }))`$ (for discrete values of its argument) enter the final results. The presence of the square of Clausen’s dilogarithm $`\mathrm{Cl}_2(\theta )^2`$ (being the square of Clausen’s dilogarithm appearing already at the two-loop level) had been conjectured on the basis of the assumption that the primitives form an algebra . The quantity $`U_{3,1}`$ is related to the master integral $`B_4`$ found earlier and is expressible through the polylogarithm value $`\mathrm{Li}_4(1/2)`$ . The quantity $`V_{3,1}`$ which emerges in the analysis of vacuum diagrams with tetrahedron topology appears to be entirely new . In the present note we discuss some new results obtained in the field of three-loop vacuum bubble diagrams. We show how to identify the above mentioned primitives in the simpler spectacle topology or in the even simpler water melon topology (see e.g. ref. ). Our calculations are done within two-dimensional space-time where the ultraviolet divergences are less severe and the integrals and final results are simpler. By keeping masses finite we have good infrared behaviour. Our previous results tell us that the transcendental structure for the water melon topology in two dimensions is the same as in the case of four-dimensional space-time . There is strong evidence that this is also true for other topologies and for any even space-time dimensionality. Diagrams with tetrahedron topology (Fig. 1) have been analyzed in using arbitrary combination of massless and massive lines albeit with a single mass scale $`m`$. Analytical results for all possible mass configurations containing only few transcendental numbers have been obtained with the help of ultra-high precision (thousands decimal points) numerical calculations . The key observation presented in this note is that all necessary transcendental numbers that appear in the tetrahedron case can already be found in the simpler spectacle and water melon topology shown in Fig. 1. These topologies are sufficiently simple to allow one to perform all necessary integrations analytically. We present analytical results for the three-loop spectacle and water melon diagrams, i.e. their transcendentality structure, without ever using any numerical tools. 
The main building block for the treatment of three-loop vacuum bubbles is the one-loop two-line massive correlator $`\mathrm{\Pi }(p^2)`$ in $`D=22\epsilon `$ dimensional (Euclidean) space-time, $`\mathrm{\Pi }(p^2)={\displaystyle \frac{d^Dk}{((pk)^2+m^2)(k^2+m^2)}}`$ (1) $`=`$ $`{\displaystyle \frac{2^{3+2\epsilon }\pi ^{1\epsilon }\mathrm{\Gamma }(1+\epsilon )}{(p^2+4m^2)^{1+\epsilon }}}{}_{2}{}^{}F_{1}^{}(1+\epsilon ,{\displaystyle \frac{1}{2}};{\displaystyle \frac{3}{2}};{\displaystyle \frac{p^2}{p^2+4m^2}})`$ (2) with $`{}_{2}{}^{}F_{1}^{}(a,b;c;z)`$ being the hypergeometric function. An alternative representation of the correlator is obtained through the dispersion relation $$\mathrm{\Pi }(p^2)=_{4m^2}^{\mathrm{}}\frac{\rho (s)ds}{s+p^2}$$ (3) with $$\rho (s)=\frac{(s4m^2)^\epsilon }{2\pi \sqrt{s(s4m^2)}}\frac{\pi ^{1/2\epsilon }}{2^{4\epsilon }\mathrm{\Gamma }(1/2+\epsilon )}.$$ (4) In order to reproduce the transcendental structure of the finite parts of the tetrahedron in four dimensions we need a first order $`\epsilon `$ expansion of water melons and spectacles near two-dimensional space-time. Note that these diagrams are well-defined and ultraviolet finite in two dimensions and, formally, require no regularization. However, the sought-for transcendental structure appears only in higher orders of the $`\epsilon `$ expansion while the leading order is simple and contains only the standard transcendental numbers such as $`\zeta (3)`$ or $`\mathrm{ln}(2)`$. Therefore we write $$\mathrm{\Pi }(p^2)=\mathrm{\Pi }_0(p^2)+\epsilon \mathrm{\Pi }_1(p^2)+O(\epsilon ^2)$$ (5) and keep only the first order in $`\epsilon `$ which happens to be sufficient for our goal of finding all the transcendental numbers appearing in the tetrahedron case. Using either the explicit formula in Eq. (1) or the dispersive representation Eq. (3) with the spectral density given by Eq. (4) and expanded to the first order in $`\epsilon `$, we find $$\mathrm{\Pi }_0(4m^2\mathrm{sinh}^2(\eta /2))=\frac{\eta }{4\pi m^2\mathrm{sinh}\eta }$$ (6) where the variable $`\eta `$ has been introduced for convenience, $`\sqrt{p^2}=2m\mathrm{sinh}(\eta /2)`$. In the first order of the $`\epsilon `$ expansion we have $$\mathrm{\Pi }_1(4m^2\mathrm{sinh}^2(\eta /2))=\frac{f(e^\eta )}{4\pi m^2\mathrm{sinh}\eta }$$ (7) with $`f(t)`$ $`=`$ $`2\mathrm{L}\mathrm{i}_2(t)+2\mathrm{ln}t\mathrm{ln}(1+t){\displaystyle \frac{1}{2}}\mathrm{ln}^2(t)+\zeta (2)`$ (8) $`=`$ $`2{\displaystyle _0^t}{\displaystyle \frac{\mathrm{ln}u}{1+u}}𝑑u{\displaystyle \frac{1}{2}}\mathrm{ln}^2t+\zeta (2).`$ (9) The integral for the water melon diagram is given by $$W=2\pi m^2\mathrm{\Pi }(p^2)^2d^2p=W_0+\epsilon W_1+O(\epsilon ^2).$$ (10) Note that here and later we use a two-dimensional integration measure. This prescription differs from the standard dimensional regularization but suffices for our purposes and makes the final expressions simpler. The use of a $`D`$-dimensional integration measure would not change the functional structure of the integrands and would simply generate some additional terms that can be analyzed within the same technique. Upon expanding the integral in Eq. 
(10) in powers of $`\epsilon `$ we obtain the leading term $$W_0=2\pi m^2\mathrm{\Pi }_0(p^2)^2d^2p=\frac{7}{8}\zeta (3)$$ (11) and the first order term $$W_1=4\pi m^2\mathrm{\Pi }_0(p^2)\mathrm{\Pi }_1(p^2)d^2p=_0^1\frac{2\mathrm{ln}t}{1t^2}f(t)𝑑t.$$ (12) The spectacle diagram is given by the integral $$S=2\pi m^4\frac{\mathrm{\Pi }(p^2)^2}{p^2+M^2}d^2p=S_0+\epsilon S_1+O(\epsilon ^2),$$ (13) where the single “frame” propagator has a mass $`M`$ which differs from the other mass parameter $`m`$ in the “rim” propagators. The expression for the leading order term $`S_0`$ is simple (see ref. ) while the first order term $`S_1`$ (which is of interest for us here) reads $`S_1`$ $`=`$ $`{\displaystyle _0^1}{\displaystyle \frac{2tf(t)\mathrm{ln}tdt}{(1t^2)(t^22t\mathrm{cos}\theta +1)}}`$ (14) $`=`$ $`{\displaystyle _0^1}{\displaystyle \frac{2tf(t)\mathrm{ln}tdt}{(1t^2)(\lambda _0t)(\overline{\lambda }_0t)}}`$ (15) where $`\lambda _0=e^{i\theta }`$, $`\mathrm{cos}\theta =1M^2/2m^2`$. By partial fractioning the rational expressions in the integrands of Eqs. (12) and (14) we find that the most complicated integral in both cases has the form $$_0^1\frac{\mathrm{ln}t}{\overline{\lambda }t}f(t)𝑑t=2I(\lambda )+3\mathrm{L}\mathrm{i}_4(\lambda )\zeta (2)\mathrm{Li}_2(\lambda )$$ (16) where $`I(\lambda )`$ is a generic nonreducible term which cannot be expressed with the help of the transcendentality structure that occured earlier on. One has $$I(\lambda )=_0^1𝑑t\frac{\mathrm{ln}t}{\overline{\lambda }t}_0^t𝑑u\frac{\mathrm{ln}u}{1+u}$$ (17) while the integral of the last two terms in Eq. (8) is explicitly expressed through polylogarithms $`\mathrm{Li}_2`$ and $`\mathrm{Li}_4`$. For the relevant values of $`\lambda `$ this generic integral in Eq. (17) contains all the primitives $`U_{3,1}`$, $`V_{3,1}`$ and also Clausen’s polylogarithms $`\mathrm{Cl}_2`$ and $`\mathrm{Cl}_4`$. The value $`\lambda =1`$ occurs in both water melon and spectacle cases. For this value we obtain $$I(1)=\frac{17\pi ^4}{1440}+2U_{3,1}$$ (18) with the explicit expression for the primitive $`U_{3,1}`$ $`U_{3,1}={\displaystyle \frac{1}{2}}\zeta (4)+{\displaystyle \frac{1}{2}}\zeta (2)\mathrm{ln}^2(2){\displaystyle \frac{1}{12}}\mathrm{ln}^4(2)2\mathrm{L}\mathrm{i}_4\left({\displaystyle \frac{1}{2}}\right)`$ where $`\zeta (4)=\pi ^4/90`$ and $`\zeta (2)=\pi ^2/6`$. The part present in the spectacle diagram in Eqs. (13) and (14) depends on the mass ratio. For $`M=m`$ we have $`\theta =\pi /3`$, so $`\lambda _0=e^{i\pi /3}`$ is one of the sixth order roots of unity. This observation discloses the special role of the sixth order roots of unity which had been observed before in ref. . For $`\lambda =e^{i\pi /3}`$ we obtain $`I(e^{i\pi /3})`$ $`=`$ $`{\displaystyle \frac{197\pi ^4}{38880}}{\displaystyle \frac{1}{3}}\mathrm{Cl}_2^2\left({\displaystyle \frac{\pi }{3}}\right)+2V_{3,1}+{\displaystyle \frac{5i\pi ^3}{162}}\mathrm{ln}3`$ (20) $`+{\displaystyle \frac{13}{108}}i\pi ^2\mathrm{Cl}_2\left({\displaystyle \frac{\pi }{3}}\right){\displaystyle \frac{35i}{18}}\mathrm{Cl}_4\left({\displaystyle \frac{\pi }{3}}\right).`$ The primitive $`V_{3,1}`$ is given by $$V_{3,1}=\underset{m>n>0}{}(1)^m\mathrm{cos}\left(\frac{2\pi n}{3}\right)\frac{1}{m^3n}.$$ (21) In the case $`M=\sqrt{3}m`$ we end up with $`\lambda _0=e^{2i\pi /3}`$, another sixth order root of unity. 
For this value of $`\lambda `$ we obtain $`I(e^{2i\pi /3})`$ $`=`$ $`{\displaystyle \frac{79\pi ^4}{12960}}+{\displaystyle \frac{1}{3}}\mathrm{Cl}_2^2\left({\displaystyle \frac{\pi }{3}}\right)`$ (23) $`+{\displaystyle \frac{7i\pi ^2}{36}}\mathrm{Cl}_2\left({\displaystyle \frac{\pi }{3}}\right){\displaystyle \frac{11i}{6}}\mathrm{Cl}_4\left({\displaystyle \frac{\pi }{3}}\right).`$ This expression is simpler and does not contain the new primitive $`V_{3,1}`$. The case $`M=2m`$ is a degenerate one, $`\lambda _0=e^{i\pi }=1`$ and the expression for $`I`$ reduces to $`\zeta `$-functions only, $`I(1)`$ $`=`$ $`{\displaystyle \frac{\pi ^4}{288}}={\displaystyle \frac{5}{16}}\zeta (4).`$ (24) Finally, the complete expression for the spectacle (or its most interesting part $`S_1`$) can be easily found by collecting the above results. For the standard arrangement of masses $`M=m`$ we have $`S_1`$ $`=`$ $`{\displaystyle \frac{251\pi ^4}{58320}}+4U_{3,1}{\displaystyle \frac{16}{3}}V_{3,1}+{\displaystyle \frac{8}{9}}\mathrm{Cl}_2^2\left({\displaystyle \frac{\pi }{3}}\right).`$ (25) We have thus fulfilled our promise and have discovered all magic numbers $`U_{3,1}`$, $`V_{3,1}`$ and $`\mathrm{Cl}_2^2(\pi /3)`$ which one encounters in the evaluation of tetrahedron vacuum diagrams using different combinations of massless and massive lines with a single mass scale $`m`$. We have found the magic numbers in the simpler three-loop water melon and spectacle topologies. The reason for this accomplishment can be read off from our analytical expressions: all integrals (or the functional structures of integrands) which appeared in the calculation of the finite parts of the tetrahedron diagrams are contained in our basic object $`I(\lambda )`$. We see the origin of the important role played by the sixth order root of unity: for a single mass $`m`$ this number appears as a root of the denominator in the spectacle diagram. Within the analysis presented in ref. the special role of this sixth order root of unity had no rational explanation. Having discovered this, we extended the analysis to arbitrary values of $`\lambda `$ in Eqs. (14), (16) and (17) by introducing a second mass parameter $`M`$ in the spectacle diagram given by Eq. (13). Concerning possible future extensions of our approach we emphasize that the appearance of the relevant transcendental structure at the level of the simpler topologies is quite an essential simplifying feature if one wants to proceed to even higher-loop calculations. For example, the simplicity of the water melon topology makes them computable with any number of loops . In this sense the present calculations can be considered to be a first step towards the evaluation of four-loop vacuum bubbles. Our result finally leads us to a conjecture about the calculability of the three-loop master integrals. If they are reducible to spectacle and water melon diagrams at the analytical level, this observation may lead to a way beyond the time-consuming integration-by-parts technique for the evaluation of three-loop diagrams. To conclude, by analyzing massive three-loop vacuum bubbles belonging to the spectacle topology class within dimensional regularization for two-dimensional space-time, we discovered and identified analytically all transcendental numbers which were previously found by numerical methods for the three-loop tetrahedron topology. ###### Acknowledgements. The work is supported in part by the Volkswagen Foundation under contract No. I/73611. A.A. 
Pivovarov is supported in part by the Russian Fund for Basic Research under contracts Nos. 97-02-17065 and 99-01-00091. S. Groote gratefully acknowledges a grant given by the Max Kade Foundation.
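As a side check that is not part of the paper itself, the two quantities defined fully explicitly above — the double sum for $`V_{3,1}`$ in Eq. (21) and Clausen's values $`\mathrm{Cl}_2(\pi /3)`$, $`\mathrm{Cl}_4(\pi /3)`$ — can be evaluated directly with a few lines of Python. The sketch below assumes the standard mpmath library; the alternating factor is taken to be $`(-1)^m`$ (the minus sign appears to have been lost in the transcription of Eq. (21)), and $`\mathrm{Cl}_4`$ is taken as $`\mathrm{Im}\,\mathrm{Li}_4(e^{i\theta })`$ by analogy with the definition of $`\mathrm{Cl}_2`$ quoted earlier.

```python
# Editorial sketch (not from the paper): evaluate the double sum quoted for V_{3,1}
# in Eq. (21) by direct truncation, plus Clausen's values Cl_2(pi/3) and Cl_4(pi/3).
# Assumptions: the alternating factor is (-1)^m, and Cl_4(theta) = Im Li_4(e^{i*theta}).
from mpmath import mp, cos, pi, polylog, exp, mpc

mp.dps = 20  # working precision (decimal digits)

def V31(mmax=4000):
    """Truncated double sum  sum_{m>n>0} (-1)^m cos(2*pi*n/3) / (m^3 * n)."""
    total = mp.mpf(0)
    inner = mp.mpf(0)  # running sum_{n=1}^{m-1} cos(2*pi*n/3) / n
    for m in range(2, mmax + 1):
        inner += cos(2 * pi * (m - 1) / 3) / (m - 1)
        total += (-1) ** m * inner / m ** 3
    return total

cl2 = polylog(2, exp(mpc(0, 1) * pi / 3)).imag  # Cl_2(pi/3)
cl4 = polylog(4, exp(mpc(0, 1) * pi / 3)).imag  # Cl_4(pi/3), assumed convention

print("V_{3,1} (truncated sum) ~", V31())
print("Cl_2(pi/3) ~", cl2, "  Cl_4(pi/3) ~", cl4)
```

Since the summand falls off as $`1/m^3`$, a truncation at a few thousand terms already gives several stable digits.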
no-problem/9904/cond-mat9904144.html
ar5iv
text
# Discrete breathers in nonlinear lattices: Experimental detection in a Josephson array \[ ## Abstract We present an experimental study of discrete breathers in an underdamped Josephson-junction array. Breathers exist under a range of dc current biases and temperatures, and are detected by measuring dc voltages. We find the maximum allowable bias current for the breather is proportional to the array depinning current while the minimum current seems to be related to a junction retrapping mechanism. We have observed that this latter instability leads to the formation of multi-site breather states in the array. We have also studied the domain of existence of the breather at different values of the array parameters by varying the temperature. \] Discrete breathers are a new type of excitation in nonlinear lattices. They are characterized by an exponential localization of the energy. This localization does not occur in linear systems and it is different from Anderson localization, which is due to the presence of impurities. Thus, discrete breathers are also known as intrinsic localized modes. Breathers have been proven to be generic solutions for the dynamics of nonlinear coupled oscillator systems by the use of the novel mathematical technique of the anti-integrable limit . They have been extensively studied and have been proposed to theoretically exist in diverse systems such as in spin wave modes of antiferromagnets , DNA denaturation , and the dynamics of Josephson-junction networks . Also, they have been shown to be important in the dynamics of mechanical engineering systems . Although a number of experiments have been proposed, discrete breathers have yet to be experimentally generated and measured. In this Letter, we present, to our knowledge, the first experimental study of discrete breathers in a spatially extended system. We have designed and fabricated an underdamped Josephson-junction ladder which allows for the existence of breathers when biased by dc external currents. We have developed a method for exciting breathers and explored their existence domain and instability mechanisms with respects to the junction parameters and the applied current. A Josephson junction consists of two superconducting leads separated by a thin insulating barrier. Due to the Josephson effect, it behaves as a solid-state nonlinear oscillator and is usually modeled by the same dynamical equations that govern the motion of a driven pendulum: $`i=\ddot{\phi }+\mathrm{\Gamma }\dot{\phi }+\mathrm{sin}\phi `$. The response of the junction to a current is measured by the voltage of the junction which is given by $`v=(\mathrm{\Phi }_0/2\pi )d\phi /dt`$. By coupling junctions it is possible to construct solid-state physical realizations of different models such as the Frenkel-Kontorova model for nonlinear dynamics and the 2D XY model for phase transitions in condensed matter. Moreover, since the parameters, such as $`\mathrm{\Gamma }(T)`$, vary with temperature, a range of parameter space can be studied easily with each sample. The inset of Fig. 1 shows a schematic of the anisotropic ladder array. The junctions are fabricated using a Nb-Al<sub>2</sub>O<sub>x</sub>-Nb tri-layer technology with a critical current density of $`1000\mathrm{A}/\mathrm{cm}^2`$. The current is injected and extracted through bias resistors in order to distribute the current as uniformly as possible through the array. These resistors are large enough so as to minimize any deleterious effects on the dynamics. 
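To make the pendulum analogy above concrete, the following minimal sketch (not the authors' simulation code) integrates the single-junction equation of motion $`i=\ddot{\phi }+\mathrm{\Gamma }\dot{\phi }+\mathrm{sin}\phi `$ for a slow up-and-down ramp of the bias current and records the time-averaged voltage $`\dot{\phi }`$. The damping value $`\mathrm{\Gamma }=0.1`$ and the ramp range are illustrative assumptions; the point is the hysteresis between depinning near $`i=1`$ and retrapping near $`i\simeq 4\mathrm{\Gamma }/\pi `$ that underlies the coexistence of static and rotating junctions discussed below.

```python
# Editorial sketch: integrate the damped driven pendulum model of a single junction,
# i = phi'' + Gamma*phi' + sin(phi), for a slow up-and-down ramp of the bias current
# i, recording the time-averaged voltage <dphi/dt>.  Gamma = 0.1 is an assumed value.
import numpy as np
from scipy.integrate import solve_ivp

GAMMA = 0.1  # assumed damping (underdamped regime)

def rhs(t, y, i_bias):
    phi, v = y  # v = dphi/dt, the dimensionless voltage
    return [v, i_bias - GAMMA * v - np.sin(phi)]

def mean_voltage(i_bias, y0, t_run=400.0):
    """Integrate at fixed bias; return the time-averaged dphi/dt and the final state."""
    sol = solve_ivp(rhs, (0.0, t_run), y0, args=(i_bias,), max_step=0.05)
    mid = len(sol.t) // 2
    v_avg = (sol.y[0, -1] - sol.y[0, mid]) / (sol.t[-1] - sol.t[mid])
    return v_avg, [sol.y[0, -1], sol.y[1, -1]]

state = [0.0, 0.0]
ramp = np.concatenate([np.linspace(0.0, 1.2, 25), np.linspace(1.2, 0.0, 25)])
for i_bias in ramp:
    v_avg, state = mean_voltage(i_bias, state)
    print(f"i = {i_bias:5.2f}   <dphi/dt> = {v_avg:7.3f}")
# Ramping up, <dphi/dt> stays near zero until i ~ 1 (depinning); ramping down, the
# junction keeps rotating until i ~ 4*Gamma/pi (retrapping), producing the hysteresis
# loop in which the zero-voltage and rotating solutions coexist.
```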
The anisotropy of the array is defined by $`h`$ as the ratio of areas of the horizontal to vertical junctions. In our arrays $`h=1/4`$ and $`h=I_{ch}/I_{cv}=R_v/R_h=C_h/C_v`$, and $`\mathrm{\Gamma }_v=\mathrm{\Gamma }_h=\mathrm{\Gamma }`$. As shown in the schematic, we have placed voltage probes at various junctions in order to measure the voltages of both horizontal and vertical junctions. In Fig. 1 we show a typical current-voltage, IV, characteristic of the array. As the applied current increases from zero we measure the average voltage of the 9-th junction. The junction starts at a zero-voltage state and remains there until it reaches the array’s depinning current $`I_{dep}`$ at about $`2\mathrm{mA}`$. The depinning current can roughly be understood as the sum of the vertical junction’s intrinsic critical current $`I_{cv}`$ and the small circulating Meissner current around the array. In a pendulum analogy, the critical current is equivalent to the critical torque, which is just sufficiently strong enough to force the pendulum to start rotating. When the current is larger the junction switches from zero-voltage state to the junction’s superconducting gap voltage, $`V_g`$, which at this temperature is $`2.5\mathrm{mV}`$. At this point all of the vertical junctions are said to be rotating and the array is in its “whirling state”. One of the effects of this gap voltage is to substantially affect the junction’s resistance, and thereby damping, in a complicated nonlinear way. The current can be further increased until the junction reaches its normal state and it behaves as a resistor, $`R_n`$, of $`5\mathrm{\Omega }`$. As the current decreases the junction returns to the gap voltage and then to its zero-voltage state at the retrapping current, $`I_r`$, of $`0.2\mathrm{mA}`$. The hysteresis loop between $`I_{dep}`$ and $`I_r`$ is due to our underdamped junctions: the inertia causes the junctions to continue to rotate when the applied current is lowered from above its critical value. It is this hysteresis loop that allows for the existence of breathers in the ladder with dc bias current. In this current range the zero-voltage ($`V=0`$) and rotating ($`V=V_g`$) solutions coexist. Then, a discrete breather in the ladder corresponds to when one vertical junction is rotating while the other vertical junctions librate. This solution is easy to conceive in the limit where the vertical junctions are imagined to be completely decoupled. However, whether a localized solution can exist in the ladder will be determined by the strength of the spatial coupling between vertical junctions. This coupling occurs through three mechanisms: flux quantization, self and mutual inductances of the meshes, and the horizontal junctions. Though the effective coupling is a complicated function of the array parameters, it is most strongly controlled by $`h`$. If the anisotropy $`h`$ is too large, then the vertical junctions will not support localized solutions that can be excited by dc currents. It has been determined from simulations of the system that $`h=1/4`$ will allow for the existence of breathers in our ladders. Figure 2 shows some possible solutions for the states of our ladder. Graph (a) is the whirling state with every vertical junction rotating as indicated by the arrows. This is the state when all the junctions of the array have switched to the gap voltage. Graph (b) shows the zero voltage state with no rotating junctions. Graph (c) depicts a single-site breather solution. 
Here, one of the vertical and the horizontal neighboring junctions rotate. The horizontal junctions allow the vertical junction to rotate with a mean voltage without an overall increase of the stored magnetic energy. Graph (d) shows a two-sited breather where two vertical junctions rotate. We have experimentally detected these types of localized solutions (c and d) by measuring the average dc voltage of the junctions as labeled in (c). For our experiments, we have developed a simple reproducible method of exciting a breather: (i) bias the array uniformly to a current below depinning current; (ii) increase the current injected into the middle vertical junction \[labeled $`V_5`$ in Fig. 2(c)\] until its voltage switches to the gap; (iii) reduce this extra current in the middle junction to zero. For example, to prepare the initial state in Fig. 3 we started by increasing the applied current to 1.4 mA which is below $`I_{dep}`$. At this point the array is in the zero-voltage state. We then add an extra bias current to the middle junction (number 5) until it switches to the gap voltage of 2.5 mV and then we reduce this extra bias to zero. In a sense we have prepared the initial conditions for the experiment. We can now increase the uniform applied current while simultaneously measuring the voltages of the vertical junctions ($`V_4`$, $`V_5`$ and $`V_6`$) and the top two horizontal junctions, $`V_{4T}`$ and $`V_{5T}`$, as labeled in Fig. 2(c). Figure 3 shows the result after we have excited the breather and we have increased the array current. Close to the initial current of 1.4 mA only the fifth vertical junction is at $`V_g`$ and both the fourth and sixth vertical junctions are in the zero-voltage state. This is the breather state shown in Fig. 2(c) and in essence the signature of the localized breather: a vertical junction is rotating while its neighboring vertical junctions do not rotate. We also see that both neighboring horizontal junctions have a voltage magnitude that is precisely half of this value ($`V_{4T}=V_{5T}=V_5/2`$ and $`V_4=V_6=0`$). Both the magnitude and the sign can be understood by applying Kirchoff’s voltage law to top-bottom voltage-symmetric solutions, as sketched in Fig. 2(c). The voltage of the top horizontal junction is equal to the negative of the bottom one. Since the voltage drops around the loop must be zero, the horizontal voltages must be half that of the active vertical junction voltage. As we increase the current the breather continues to exist until the applied current approaches $`I_+2\mathrm{mA}`$. At this point the horizontal junctions switch to a zero-voltage state while all of the vertical junctions switch to $`V_g=2.5`$ mV. The array is now in its whirling state as drawn in Fig. 2(a) where $`V_4=V_6=V_5=V_g`$ and $`V_{4T}=V_{5T}=0`$. If we excite the breather again but instead of increasing the applied current we decrease it, we measure curves typical of Fig. 4. As explained above, we prepare the array in an initial condition with a breather located in junction 5 at 1.4 mA. We then decrease the applied current slowly. We start with the signature measurement of the breather: junction five is rotating at $`V_g`$ while $`V_4`$ and $`V_6=0`$. We also see that the horizontal junctions have the expected value of $`V_g/2`$. As the current is decreased the breather persists until the array is biased at 0.8 mA. The fourth vertical junction then switches to the gap voltage while $`V_{4T}`$ switches to a zero voltage state. 
The resulting array state is sketched in Fig. 2(d) with $`V_4=V_5=V_g`$ while $`V_{5T}=V_g/2`$ and $`V_{4T}=V_6=0`$. The single-site breather has destabilized by creating a two-site breather. As the applied current is further decreased beyond the single-site breather instability at $`0.8\mathrm{mA}`$, the voltage of the fourth and fifth vertical junctions decreases but then suddenly jumps back to $`V_g`$. Then the voltage decreases again, and it again jumps back to $`V_g`$. This second shift corresponds to the sixth junction switching from the zero voltage state to the gap voltage. At this current bias, all of the three measured vertical voltages are rotating. There is a further jump of the voltage as the current decreases. Finally, at 0.2 mA all of the vertical junctions return to their zero-voltage state via a retrapping mechanism analogous to that of a single pendulum. From these experiments and corroborating numerical simulations we speculate that this shifting of the voltage back to $`V_g`$ corresponds to at least one vertical junction switching from the zero-voltage state to the rotating state. The shapes of the IV curves in this multi-site breather regime are influenced by the redistribution of current when each vertical junction switches. This redistribution may also govern the evolution of the system after each transition to one of the other possible breather attractors in the phase space of the array. However, the exact nature of the selection process is not yet understood. The above data was taken at a temperature of 5.2 K. We found four current values of importance: the current when the array returns to the zero-voltage, $`I_r`$; the maximum zero-voltage state current, $`I_{dep}`$; the maximum current the breather supports, $`I_+`$; and the minimum current $`I_{}`$. By sweeping the temperature we can study how the current range in which our breather exists is affected by a change of the array parameters. Figure 5 shows the results of plotting the four special current values versus $`\mathrm{\Gamma }`$. To calculate how the junction parameters vary with temperature we take $`I_{cv}(0)R_n=1.9\mathrm{mV}`$ and assume that the junction critical current follows the standard Ambegaokar-Baratoff dependence . We estimate $`\mathrm{\Gamma }`$ from $`I_r`$ by the relation $`I_r/NI_{cv}=(4/\pi )\mathrm{\Gamma }`$, where $`N`$ is the number of vertical junctions in the array. The other relevant parameter is the dimensionless penetration depth, $`\lambda _{}=\mathrm{\Phi }_0/2\pi L_sI_{cv}`$, which measures the inductive coupling in the array. The loop inductance $`L_s`$ is estimated from numerical modeling of the circuit. By changing the temperature of the sample, we vary the $`I_{cv}`$ of the junction and hence change $`\mathrm{\Gamma }`$ and $`\lambda _{}`$. In this sample, the junction parameters can range from $`0.031<\mathrm{\Gamma }<0.61`$ and $`0.04<\lambda _{}<0.43`$ as the temperature varies from 4.2 K to 9.2 K. In Fig. 5, $`\mathrm{\Gamma }<0.2`$ corresponds to $`T<6.7\mathrm{K}`$ and $`\lambda _{}<0.05`$. At these low temperatures, there is a larger variation in $`\mathrm{\Gamma }`$ because of the sensitive dependence of this parameter to the junction’s resistance below the gap voltage. As Fig. 5 shows, the maximum current supported by the breather, $`I_+`$, is almost equal to $`I_{dep}`$. A simple circuit model gives some physical intuition. Junctions that are rotating have some effective resistance while junctions that are in the zero-voltage state have zero resistance. 
In our breather state, the center junction is rotating. Therefore, when we apply a current to our array the current will tend to flow around the rotating junction and through the outside junctions that are in the zero-voltage state. When these outside junctions reach their critical currents they will begin to rotate and the breather will disappear. In the simplest case, when we ignore any circulating Meissner currents, this model yields $`I_+/NI_{cv}=(h+1)/(2h+1)=0.8`$. Since $`I_{dep}`$ is roughly $`NI_{cv}`$, the depinning current is the upper bound for the applied current that the breather can support. The instability mechanism that determines $`I_{}`$ in our experiments is more difficult to discern. We offer two suggestions that are due to the underdamped character of our system. One possibility is via a retrapping mechanism similar to that of a single junction. As the middle junction rotates, it reaches a point where the current drive is not sufficient to support the rotation and it destabilizes. This physical picture gives $`I_{}/NI_{cv}=(2h+2)(4/\pi )\mathrm{\Gamma }`$. So that for our parameters, $`I_{}`$ should be 2.5 times larger than $`I_r`$, as it is approximately in Fig. 5. A second possible instability mechanism consist of resonances between the characteristic frequencies of the breather and the lattice eigenmodes. The breather looses energy as it excites the eigenmodes. In our experiments, the breather always looses stability at voltages close to $`V_g`$. For our parameter range, $`V_g`$ is larger than the voltages for the lattice eigenmodes, thus our data seems to favor a retrapping mechanism. Lastly, we add that since $`I_{dep}`$ and consequently $`I_+`$ decreases with $`\mathrm{\Gamma }`$, there also seems to be a critical damping where the breather will cease to exist. Experimentally we did not find a breather for $`\mathrm{\Gamma }>0.2`$. In summary, we have experimentally detected different breather and multi-site breather states in a superconducting Josephson ladder network. By varying the external current and temperature we have studied the domain of existence and the instability mechanisms of these localized solutions. In addition we have also found, but not discussed here, breathers which are not top-bottom voltage symmetric, in which only the top (bottom) horizontal junctions rotate while the bottom (top) junctions are in the zero-voltage state. These experiments are the first observations of discrete breathers and multi-site breathers in a condensed matter system. This work was supported by NSF grant DMR-9610042 and DGES (PB95-0797). JJM thanks the Fulbright Commission and the MEC (Spain) for financial support. We thank S. H. Strogatz, A. E. Duwel, F. Falo, L. M. Floría, and P. J. Martínez for insightful discussions.
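For orientation, the two circuit-model estimates quoted above can be evaluated numerically. The sketch below simply inserts the array anisotropy $`h=1/4`$ and a few illustrative damping values $`\mathrm{\Gamma }`$ (chosen here for illustration, not taken from the measurements) into $`I_+/NI_{cv}=(h+1)/(2h+1)`$ and $`I_-/NI_{cv}=(2h+2)(4/\pi )\mathrm{\Gamma }`$.

```python
# Editorial sketch: the circuit-model estimates for the breather existence window,
# evaluated for the array anisotropy h = 1/4 and a few illustrative damping values.
import math

h = 0.25
i_plus = (h + 1) / (2 * h + 1)       # I+/(N*Icv), upper limit ~ depinning current
print(f"I+/(N*Icv) = {i_plus:.3f}")

for gamma in (0.05, 0.10, 0.20):
    i_retrap = (4 / math.pi) * gamma        # single-junction retrapping, Ir/(N*Icv)
    i_minus = (2 * h + 2) * i_retrap        # I-/(N*Icv), breather lower limit
    print(f"Gamma = {gamma:.2f}:  Ir/(N*Icv) = {i_retrap:.3f},  I-/(N*Icv) = {i_minus:.3f}")
```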
no-problem/9904/cond-mat9904390.html
ar5iv
text
# Quasiparticles in the superconducting state of 𝐵⁢𝑖₂⁢𝑆⁢𝑟₂⁢𝐶⁢𝑎⁢𝐶⁢𝑢₂⁢𝑂_{8+𝛿} ## Abstract Recent improvements in momentum resolution lead to qualitatively new ARPES results on the spectra of $`Bi_2Sr_2CaCu_2O_{8+\delta }`$ (Bi2212) along the $`(\pi ,\pi )`$ direction, where there is a node in the superconducting gap. We now see the intrinsic lineshape, which indicates the presence of true quasiparticles at all Fermi momenta in the superconducting state, and lack thereof in the normal state. The region of momentum space probed here is relevant for charge transport, motivating a comparison of our results to conductivity measurements by infrared reflectivity. Landau’s concept of the Fermi liquid underlies much of our present theoretical understanding of electron dynamics in crystalline solids. Landau was able to demonstrate that, even though the electrons interact strongly with one another, one can still describe the low temperature properties of metals in terms of “quasiparticle” excitations, which are bare electrons dressed by the medium in which they move. But we now have materials, such as the high temperature superconductors (HTSCs), and other low dimensional systems, where it is becoming increasingly difficult to reconcile experimental results with the expectations of Fermi liquid theory . Clearly, if the concept of quasiparticles is to be useful, they must live long enough to be considered as independent entities. In fact, in a Fermi liquid, the quasiparticles at the Fermi momentum, $`k_F`$, have (at zero temperature) an infinite lifetime at zero excitation energy - the Fermi energy, $`E_F`$ \- with their lifetime decreasing quadratically with excitation energy. If one were to measure the spectral function of the electrons at $`k_F`$, one would observe a broad feature, corresponding to the incoherent part of the electron, with a sharp peak at $`E_F`$ with spectral weight $`z`$, the quasiparticle component. It is now well established by angle resolved photoemission spectroscopy (ARPES) measurements that, despite the existence of a Fermi surface in momentum space, there are no quasiparticles in the normal state of optimally doped or slightly overdoped HTSCs near the $`(\pi ,0)`$ point of the Brillouin zone (see inset of Fig. 1) . Below $`T_c`$, the superconducting gap is maximal here (at the anti-node point $`A`$) and sharp quasiparticle peaks are observed . This statement can be made since the energy dispersion of the electronic states in this region of the zone is very weak, and therefore the measured spectra are not artificially broadened by the finite acceptance angle (momentum window) of the detector. Along the zone diagonal, however, the intrinsic lineshape is unknown, both in the normal and superconducting states, because the spectra near point N (at the Fermi surface along the diagonal; see inset of Fig. 1) are significantly broadened by the momentum window given the highly dispersive nature (1.6 eV$`\AA `$) of the states in this region. This observation is reinforced by the lack of any temperature dependence of the spectra at point N in sharp contrast to what is observed at point A. In this work, a large improvement in experimental momentum resolution allows us to determine the intrinsic lineshape at $`N`$ for the first time. The significance of these observations stems from the fact that this region of the zone dominates many of the bulk properties of cuprate superconductors, and it is very important to know if and when quasiparticles exist. 
Above $`T_c`$, the charge and thermal transport are dominated by these states because of their rapid dispersion (large Fermi velocity). Below $`T_c`$, the superconducting energy gap vanishes at the nodal point $`N`$, and low energy excitations in its neighborhood dominate the $`T`$-dependence of various properties, e.g., the superfluid density and thermal Hall conductivity. Moreover, there has been a gamut of opinions concerning the nature of the electronic states near the node, ranging from the cold spot model where quasiparticles are assumed to be present even above $`T_c`$, and the stripes model, where quasiparticles do not exist even below $`T_c`$. Measurements were carried out at the SRC, Wisconsin, on a 4m NIM undulator beamline (resolving power of $`10^4`$ at $`10^{12}`$ photons/sec) as well as a PGM beamline. We used a Scienta SES 200 analyzer in angle resolved mode. The angular resolution and spacing of EDCs is 0.0097 $`\AA ^1`$ at 22eV as calibrated by measuring the superlattice wavevector. The energy resolution was 16 meV (FWHM). Samples were mounted with the $`\mathrm{\Gamma }X`$ direction parallel to the photon polarization, except for Fig. 2 right panel and Fig. 1 left panel, where samples were aligned along $`\mathrm{\Gamma }\overline{M}`$. ARPES gives direct information about the momentum and energy dependence of the lifetime of electrons. In quasi-2D materials, the ARPES signal is given by $`I(\omega )=I_𝐤f(\omega )A(𝐤,\omega )`$, convolved with the energy resolution and momentum aperture of the detector. Here $`I_𝐤`$ is the dipole matrix element between initial and final states, $`f(\omega )`$ the Fermi function, and $`A(𝐤,\omega )`$ the spectral function. Curve (a) in the left panel of Fig. 1 shows an ARPES spectrum in the normal state at 100K for an optimally doped Bi2212 sample with $`T_c=89`$ K. In principle, at the Fermi surface ($`k=k_F`$), $`A(𝐤,\omega )`$ should have a peak centered at zero binding energy. Since ARPES only measures the occupied part of $`A`$, the spectral peak is cut off by the Fermi function, and thus its maximum is displaced to higher energy. Therefore an estimate of its full width-half maximum (FWHM) can be obtained by doubling the measured one, yielding a value of $``$200 meV. Although quasiparticles are only expected to appear at low temperatures, we note this width is of order 2000K, well over an order of magnitude larger than the temperature, indicating that thermal broadening alone cannot be responsible for the large peak width. We can confirm that large widths are intrinsic to optimally doped cuprates by examining the spectral function of the single CuO layer compound Bi2201, which has a lower $`T_c`$, and therefore a normal state accessible at a lower temperature. In curve (b) of Fig. 1 we plot ARPES data for the slightly overdoped Bi2201 compound $`Bi_{1.6}Pb_{0.4}Sr_2CuO_6`$ ($`T_c=20K`$). Even though the normal state data are now taken at 30K, the spectral width has only narrowed to a FWHM of 100 meV, and so does not exhibit a quasiparticle peak. We therefore conclude that quasiparticles do not appear in the normal state of optimally doped compounds at point $`A`$, even at low temperature. As there is some evidence from transport that more Fermi liquid like behavior develops for heavily overdoped materials, the question arises whether ARPES sees evidence for normal state quasiparticles in that case. Curve (d) of Fig. 1 shows a spectrum for a highly overdoped Bi2201 sample ($`T_c`$=4K) at 20K. 
Indeed, the spectral peak is much narrower than the optimally doped sample, consistent with more Fermi liquid like behavior. It can be seen that the broadening of the peak in the optimally doped sample is not of thermal origin by comparing it to the overdoped sample at a higher temperature, shown in curve (c) of Fig. 1. The width of the peak of the overdoped sample at 80K is in fact narrower than that of the optimally doped sample at 30K. Although the normal state of optimally doped cuprates does not exhibit them, quasiparticles do appear in the superconducting state at point $`A`$. Curve (e) in Fig. 1 shows a spectrum in the superconducting state, where a near resolution limited peak appears. In addition, a break clearly separates the coherent quasiparticle part of the spectral function from the incoherent part, as indicated by the arrow . A significant point is that in the vicinity of the $`A`$ point, the break in the spectra appears exactly at $`T_c`$ . This is thought to be due to the onset of superconductivity, which leads to a reduction of the scattering rate of electrons over an energy range of order 2-3 times the maximum superconducting gap , thus allowing quasiparticles to exist. It is now well established that the HTSCs are $`d`$-wave superconductors with points N characterized by nodes in the energy gap . We now ask the question whether quasiparticles exist at these particular points which exhibit gapless excitations. So far, it has not been possible to address this question because of resolution issues. For example, the dashed curve in the right panel of Fig. 1 shows the spectrum obtained at $`N`$ with the momentum resolution of all previously published work, deep in the superconducting state (T=15K). Although this peak has been called a “quasiparticle peak” in the literature, there is no direct evidence for this in the raw data. The spectrum is as broad as the normal state spectrum of curve (a) in Fig. 1, but for a different reason. This is the direction in momentum space of largest dispersion, and the finite momentum window $`\delta k`$ of the analyzer broadens the peaks as $`\delta E=\mathrm{}v_F\delta k`$, where $`v_F`$ is the Fermi velocity. From the observed dispersion, this momentum broadening is of the order of 100 meV. Now, with a 32-fold improvement in momentum resolution from the Scienta detector (8 times along the direction normal to the Fermi surface, and 4 times in the transverse direction), we show in the solid curve of the right panel of Fig. 1 the qualitatively new result that there are true quasiparticles at the nodes of the d-wave superconducting state. Although these data were obtained in the superconducting state, because they are at the node of the $`d`$-wave state, they are in fact gapless. A break separating the coherent from the incoherent part of the spectral function is visible in the lineshape as indicated by an arrow. But the converse is also true: there are no signs of quasiparticles in the normal state. Fig. 2 (left panel) shows the temperature dependence of the lineshape at point $`N`$. We note that there is a distinct change in lineshape due to the appearance of quasiparticles in the optimally doped sample. In the normal state, the trailing edge of the spectral peak smoothly evolves into an incoherent tail going to high binding energy. As the temperature is lowered below $`T_c`$, a break develops which separates the coherent quasiparticle peak from the incoherent tail. 
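As a rough cross-check of the numbers quoted above (and not part of the original analysis), the momentum-window broadening $`\delta E=\mathrm{}v_F\delta k`$ can be evaluated for the old and new windows using the nodal velocity of 1.6 eV$`\AA `$ and the quoted $`\delta k`$ values; combining it in quadrature with the 16 meV energy resolution is an assumption made here purely for illustration.

```python
# Back-of-the-envelope check (values from the text; the quadrature combination is an
# assumption): momentum-window broadening dE ~ hbar*vF*dk for the old and new windows,
# combined with the 16 meV energy resolution.
import math

vF_eVA = 1.6            # nodal dispersion, eV*Angstrom
dk_new = 0.0097         # new momentum window, 1/Angstrom
dk_old = 8 * dk_new     # old window (8x coarser normal to the Fermi surface)
dE_res = 16.0           # energy resolution, meV (FWHM)

for label, dk in (("old", dk_old), ("new", dk_new)):
    dE_k = 1000.0 * vF_eVA * dk                 # broadening in meV
    total = math.hypot(dE_k, dE_res)            # quadrature combination (assumed)
    print(f"{label} window: dE_k = {dE_k:5.1f} meV, combined -> {total:5.1f} meV")
# Old window: ~120 meV, dominating the lineshape; new window: ~16 meV, giving a
# combined instrumental floor of roughly 22 meV.
```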
It is important to note that quasiparticles exist all along the Fermi surface (Fig. 2b). In Fig. 3, we plot the momentum dependence of the lineshape along the zone diagonal (at $`45^{}`$ to the Cu-O bond direction) in the normal (125K) and superconducting (40K) states for an optimally doped 89K sample. We note the sharpening of the spectral peak in the normal state as $`k_F`$ is approached, as observed before . In the superconducting state, we see the new result of a coherent peak along this direction, which only exists in a narrow momentum interval about $`k_F`$. More quantitative information can be obtained by plotting the FWHM of the spectral peak from the raw data as a function of the binding energy of the peak, as shown in Fig. 4. Above $`T_c`$, the FWHM is approximately linear in binding energy (we define the FWHM relative to the horizontal line in the right panel of Fig. 1), with a slope of 0.5. An extrapolation of the linear part to zero binding energy results in an offset of 80 meV, about an order of magnitude larger than the temperature. Below $`T_c`$, the FWHM is the same as that in the normal state for binding energies above an energy of about 2-3 times the maximum superconducting gap ($`\mathrm{\Delta }_{max}`$=40 meV), but decreases faster than this below, as expected for electron-electron scattering. The residual offset at zero binding energy is a combination of momentum and energy resolution, and a contribution from the incoherent tail of the spectra (the FWHM of the coherent peak is 30 meV, compared to a resolution estimate of 22 meV). A detailed comparison of ARPES data to optical conductivity data would require fitting the spectra to a model self-energy, using this self-energy to construct the transport scattering rate, $`1/\tau `$, and then averaging $`1/\tau `$ over the Brillouin zone with velocity weighting factors. Rather than go through such a complicated procedure, we elect to directly compare the $`1/\tau `$ from optical conductivity in Bi2212 to the FWHM versus binding energy (discussed above), which should be a rough measure of $`1/\tau `$ versus energy. This is also shown in Fig. 4, where a good match is seen between these two quantities (at low energies, there is a deviation due to the ARPES resolution). We observe that the break in the spectrum near $`k_F`$ separating the coherent peak from the incoherent tail occurs at the same energy, indicating a drop in the imaginary part of the self-energy at the same frequency that optical data see a drop in $`1/\tau `$. Finally, we note that the linear energy variation seen by the optical and APRES data is analogous to the linear temperature dependence of the resistivity, ubiquitous in the cuprate superconductors, and is in support of a marginal Fermi liquid phenomenology. In conclusion, we report the first observation of the intrinsic lineshape along the zone diagonal, from which we deduce a quasiparticle peak below $`T_c`$ and the absence of such a peak in the normal state. The non-existence of quasiparticles in the normal state appears to be a general property of cuprate superconductors. Therefore, the formation of quasiparticles must be related to the presence of a superconducting state. A proper description of high temperature superconductivity will only arise after this peculiar phenomenon is understood. Note in proof: After the completion of this work, we became aware of similar work by T. Valla et al., Science 285, 2110 (1999), with somewhat different conclusions. We thank T. 
Timusk for providing the optical data. This work was supported by the National Science Foundation DMR 9624048, and DMR 91-20000 through the Science and Technology Center for Superconductivity, the U. S. Dept. of Energy, Basic Energy Sciences, under contract W-31-109-ENG-38, the CREST of JST, and the Ministry of Education, Science, and Culture of Japan. The Synchrotron Radiation Center is supported by NSF DMR 9212658. JM is supported by the Swiss National Science Foundation, and MR in part by the Indian DST through the Swarnajayanti scheme.
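To illustrate the statement earlier in this paper that the measured EDC maximum is pushed away from zero binding energy by the Fermi-function cutoff, here is a toy model (an editorial sketch, not the authors' fitting procedure): a Lorentzian spectral peak at $`E_F`$ multiplied by the Fermi function and convolved with a Gaussian energy resolution. The 200 meV intrinsic width, 100 K temperature and 16 meV resolution follow the numbers quoted in the text; the Lorentzian form itself is an assumption.

```python
# Editorial toy model: Lorentzian spectral peak at E_F times the Fermi function,
# convolved with a Gaussian energy resolution, showing why the measured EDC maximum
# sits at finite binding energy.  The Lorentzian lineshape is an assumption.
import numpy as np

kB = 8.617e-2  # meV per K

def model_edc(omega, gamma_hwhm, T, res_fwhm=16.0):
    """omega = binding-energy grid in meV (symmetric about 0); returns model intensity."""
    A = (gamma_hwhm / np.pi) / (omega**2 + gamma_hwhm**2)   # Lorentzian centred at E_F
    f = 1.0 / (np.exp(-omega / (kB * T)) + 1.0)             # occupied (Fermi) side
    sigma = res_fwhm / 2.355                                 # FWHM -> Gaussian sigma
    kern = np.exp(-0.5 * (omega / sigma) ** 2)
    return np.convolve(A * f, kern / kern.sum(), mode="same")

omega = np.linspace(-300.0, 300.0, 2001)          # binding energy, meV
intensity = model_edc(omega, gamma_hwhm=100.0, T=100.0)   # ~200 meV intrinsic FWHM
peak = omega[np.argmax(intensity)]
print(f"EDC maximum at ~{peak:.0f} meV binding energy, not at 0")
```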
no-problem/9904/astro-ph9904347.html
ar5iv
text
# DETERMINATION OF THE HUBBLE CONSTANT USING A TWO-PARAMETER LUMINOSITY CORRECTION FOR TYPE Ia SUPERNOVAE ## 1 INTRODUCTION One requirement for measuring the Hubble constant using SNe Ia as standard candles is a sample of well-measured distant supernovae. They should be distant enough that their measured redshifts are dominated by the Hubble flow, but not so distant that the still-uncertain dynamics associated with deceleration due to the mass density of the universe and possible acceleration due to conjectured repulsive forces are important. A broad range of redshift $`z`$ from 0.01 to 0.2 is suitable, with $`z`$ about 0.05 being optimum. For this purpose the carefully measured and uniformly analyzed Calán-Tololo collection of 29 SNe Ia (Hamuy et al. 1996) covering the range $`0.01<z<0.1`$ is very well-suited. The collection has a spread in relative luminosity covering more than 1.5 magnitudes, but it has been shown that a two-parameter luminosity correction using the color and decline rate of each supernova is both necessary and sufficient to standardize them to a common luminosity (Tripp 1998). A second requirement for measuring $`H_0`$ is to establish the absolute luminosity of SNe Ia that have been standardized in the above way. At the present time there are seven nearby (typically an order of magnitude closer) galaxies that have hosted SNe Ia whose galaxy distances have been determined using Cepheid variables measured by the Hubble Space Telescope. This includes the recently discovered SN 1998bu in the Leo I group galaxy NGC 3368 whose Cepheid distance had already been measured. The Cepheid-determined distances each have a typical accuracy of about 5%, plus an additional overall uncertainty of about 7% associated with the absolute distance scale for Cepheid variables. These seven galaxies have hosted eight recorded SNe Ia. The oldest, SN 1895B, was observed photographically in only one color so cannot be used here. Three of the others preceded the use of CCDs in astronomy and, despite being well-measured by the techniques of the day, are not of the quality that can now be achieved. Thus we also consider an expanded list containing three more recent SNe Ia which, although not found in Cepheid-calibrated galaxies, are very near on the sky to such a galaxy or are generally considered to be within groups of galaxies that have at least one member that has been so calibrated. Results are presented both with and without these additional supernovae. An extensive bibliography on the use of Type Ia supernovae for measuring the Hubble constant can be found in the recent review by Branch (1998). ## 2 PROCEDURE We treat the 29 distant SNe of Hamuy et al. (1996) following the method of Tripp (1998), except that we allow the absolute magnitude to vary in a simultaneous fit with the Cepheid-calibrated supernovae. We calculate the Hubble constant $`H_0`$ for each distant supernova using $$\mathrm{log}H_0=\frac{M_B-B+52.38}{5}+\mathrm{log}\frac{1-q_0+q_0z-(1-q_0)\sqrt{1+2q_0z}}{q_0^2}$$ (1) Here $`B`$ and $`M_B`$ are the apparent and absolute blue magnitudes at maximum light, while $`q_0`$ is the deceleration parameter and $`z`$ is the measured red shift. For the 29 distant SNe, $`B`$ and $`z`$ are tabulated by Hamuy et al. (1996). These supernovae are not so remote $`(z\lesssim 0.1)`$ that the still uncertain value of $`q_0`$ leads to a significant uncertainty. Conforming to the current evidence of Riess et al. (1998) and Perlmutter et al.
(1998) for a negative $`q_0`$, we fix it at -0.45 (see Section 4). As shown previously (Tripp 1998), in order to fit the data of Hamuy et al. (1996) within their quoted errors, the above absolute magnitude $`M_B`$ for each supernova must be adjusted according to both its rate of decline $`\mathrm{\Delta }m_{15}`$ and its color $`(B-V)`$. Here $`B`$ and $`V`$ are the maximum apparent blue and visual magnitudes. Empirically, linear dependences prove to be more than adequate for both corrections. We therefore write for $`M_B`$ in equation (1) $$M_B=<M_B^0>+b(\mathrm{\Delta }m_{15}-1.05)+R(B-V)$$ (2) The parameter $`R`$ incorporates both intrinsic color differences between the SNe and any reddening caused by dust in the host galaxies. These two effects are often difficult to distinguish, particularly for the more distant supernovae. The reddening parameter due to dust alone, usually denoted by $`R_B`$, should be about 4 if the dust surrounding extragalactic supernovae is similar to dust in the Milky Way.¹ However, recent measurements of extinction from diffuse Galactic cirrus clouds (Szomoru & Guhathakurta 1999) find $`R_B`$ values $`\sim 3`$ due perhaps to a smaller average grain size. Since SNe Ia are not closely associated with star formation it may well be that cirrus values are relevant for them. In any event, all SNe with evidence of strong dust obscuration are removed from the sample for the case where we apply a color selection to the data. The weighted average value $`<M_B^0>`$ in equation (2) is obtained from the Cepheid-calibrated SNe in the following manner. We use a similar dependence to standardize each of them to the value it would have if $`\mathrm{\Delta }m_{15}=1.05`$ and $`B-V=0`$ for that supernova. Thus the corrected absolute magnitude becomes $$M_B^0=M_B-b(\mathrm{\Delta }m_{15}-1.05)-R(B-V)$$ (3) A least-squares fit of the Cepheid-calibrated SNe gives the weighted average $`<M_B^0>`$ appearing in equation (2), along with the $`\chi ^2`$ for the fit. Both are functions of $`b`$ and $`R`$. The value 1.05 in equations (2) and (3) is the average $`\mathrm{\Delta }m_{15}`$ for 13 SNe Ia compiled by Branch et al. (1996). Chosen for convenience, the choice is arbitrary and has no effect on $`H_0`$, although it will directly affect $`<M_B^0>`$. The same remarks apply to the arbitrary choice of a $`<B-V>=0`$ subtraction in the color term. For each Calán-Tololo distant supernova, we use equation (1) to evaluate $`H_0`$ using the value of $`M_B`$ from equation (2) with $`b`$ and $`R`$ as parameters. The uncertainty in $`H_0`$ for each supernova is obtained by combining in quadrature the quoted errors $`\delta B`$, $`\delta \mathrm{\Delta }m_{15}`$, and $`\delta (B-V)`$ with an uncertainty in luminosity distance due to possible peculiar motion $`\delta v=400`$ km/s of the host galaxy with respect to the Hubble flow. Neglecting correlations between errors (since they are not reported), we have: $$\delta H_0=H_0\sqrt{\left(\frac{\mathrm{ln}10}{5}\right)^2\left[\delta B^2+(b\delta \mathrm{\Delta }m_{15})^2+(R\delta (B-V))^2\right]+\left[\left(\frac{1}{z}+\frac{1-q_0}{2}\right)\delta z\right]^2}$$ (4) A weighted average of the 29 values of $`H_0\pm \delta H_0`$ then gives a least-squares value for $`H_0`$ along with its uncertainty. The $`\chi ^2`$ for this fit of $`H_0`$ is added to the $`\chi ^2`$ for the Cepheid-calibrated SNe fit of $`<M_B^0>`$, both with $`b`$ and $`R`$ as parameters. These are then varied to minimize the overall $`\chi ^2`$.
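A compact way to see how equations (1)-(4) are used in practice is the following sketch (an editorial illustration, not the authors' code). The parameter values $`b=0.55`$, $`R=2.40`$, $`<M_B^0>=-19.46`$ and $`q_0=-0.45`$ are the best-fit numbers quoted elsewhere in the text, and the supernova inputs are invented placeholders rather than Calán-Tololo data.

```python
# Editorial sketch of the fitting machinery in Eqs. (1)-(4); not the authors' code.
# b, R, <M_B^0> and q0 are set to best-fit values quoted in the text; the supernova
# inputs below are invented placeholders, not Calan-Tololo data.
import numpy as np

def H0_single(z, B, dm15, BmV, MB0=-19.46, b=0.55, R=2.40, q0=-0.45):
    """Hubble constant from one supernova, Eqs. (1)-(2)."""
    MB = MB0 + b * (dm15 - 1.05) + R * BmV                                   # Eq. (2)
    curv = (1 - q0 + q0 * z - (1 - q0) * np.sqrt(1 + 2 * q0 * z)) / q0**2
    return 10 ** ((MB - B + 52.38) / 5.0) * curv                             # Eq. (1)

def dH0_single(z, dz, dB, ddm15, dBmV, H0, b=0.55, R=2.40, q0=-0.45):
    """Uncertainty from Eq. (4); dz includes the 400 km/s peculiar-velocity term."""
    mag_term = (np.log(10) / 5.0) ** 2 * (dB**2 + (b * ddm15) ** 2 + (R * dBmV) ** 2)
    z_term = ((1.0 / z + (1 - q0) / 2.0) * dz) ** 2
    return H0 * np.sqrt(mag_term + z_term)

# one hypothetical supernova: z = 0.05, B = 17.5, dm15 = 1.1, B-V = 0.02
H0 = H0_single(0.05, 17.5, 1.1, 0.02)
dH0 = dH0_single(0.05, 400.0 / 3.0e5, 0.02, 0.05, 0.02, H0)
print(f"H0 = {H0:.1f} +/- {dH0:.1f} km/s/Mpc  (illustrative input only)")
# The 29 values obtained this way are combined in an inverse-variance weighted
# average, whose chi^2 is added to that of the Cepheid-calibrated fit of <M_B^0>.
```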
When the parameters to be varied enter into the evaluation of the uncertainty $`\delta H_0`$ and thus into the $`\chi ^2`$ of the fit, as they do here, there is a bias favoring larger values of $`b`$ and $`R`$ leading to a larger $`\delta H_0`$ and thus to a lower $`\chi ^2`$. We try to eliminate this by temporarily fixing $`b`$ and $`R`$ for the evaluation of $`\delta H_0`$ in equation (4) at the first solution values, then reminimizing and again fixing them at the new values. After a few such iterations, stable values of $`b`$ and $`R`$ are found which are also $`\chi ^2`$ minima. In the end, this results in a small decrease in $`b`$, a substantial decrease in $`R`$ (by about 0.6 unit), and a consequent increase in $`H_0`$ by about 1.3 units. ## 3 RESULTS In Table 1, we list the seven SNe from Cepheid-calibrated galaxies and the three from Cepheid-calibrated galaxy groups, along with the measured values used in our fits; these are the distance modulus $`\mu `$, the measured apparent magnitude $`B`$ and the resulting absolute magnitude $`M_B=B-\mu `$ appearing in equation (3), the decline in $`B`$ magnitude, $`\mathrm{\Delta }m_{15}`$, during the first 15 days after maximum, and the color, taken to be $`B_{\mathrm{max}}-V_{\mathrm{max}}`$. Where appropriate, the distance modulus incorporates the HST long-exposure correction of 0.05 as well as an estimate of the effect of host-galaxy absorption on the Cepheid distance determination. Conforming to the procedure used by Hamuy et al. (1996) for their more distant SNe, both $`B`$ and $`B-V`$ are corrected for Galactic absorption using the estimates of Burstein & Heiles (1984), but no correction is made for absorption within the host galaxy. Our method accommodates this host-galaxy absorption as well as any intrinsic reddening with the parameter $`R`$ used in the fits. Thus the generally uncertain dust absorption is effectively corrected for by this procedure. For the 29 Calán-Tololo SNe, we use values of red-shift, $`B`$, $`B-V`$, and $`\mathrm{\Delta }m_{15}`$ given in Table 1 of Hamuy et al. (1996). Table 2 presents the results of our joint fitting of the Cepheid-calibrated (CC) SNe + Calán-Tololo (CT) SNe for two data selections. Shown are the number of fitted supernovae in each category, the best-fit values of $`H_0`$, $`<M_B^0>`$, $`b`$, and $`R`$, and the individual confidence levels for the best joint fit of the two subsets of data. In the first row we limit the sample to the six directly calibrated CC SNe that also satisfy the color selection $`(B-V)<0.2`$ (Vaughan et al., 1998) and to the 26 color-selected CT SNe. In the second row, the full nearby sample of 10 CC SNe is jointly fitted with all 29 of the CT SNe. Both yield, fortuitously, identical values for $`H_0=62.8`$. It can be seen from Table 2 that the CT data alone fit extremely well in both cases even though the $`b`$ and $`R`$ parameters are optimized to fit the combined (CT and CC) data. The confidence levels for the CT data are in all cases considerably higher than the most likely value of 0.5. This reflects the presumed overestimate of the errors in the CT data set described in Tripp (1998) where these data alone lead for the 29 (26) SNe to an unrealistically high confidence level of 0.98 (0.97) for $`b=0.52(0.53)`$ and $`R=2.09(2.44)`$. In both cases the joint confidence levels are acceptable due in large measure to this overly good fit of the CT data. For the case of six CC SNe, good fits are also realized for the CC subsample.
But this is not the case for the 10 CC SNe where their confidence level falls below 1%. This is due to a conflict between the two most recent CC SNe: 1991T and 1998bu. They are both somewhat reddened slow decliners, with the first being superluminous and the second being subluminous according to this analysis, so that no $`b`$ and $`R`$ corrections can adequately reconcile the two and at the same time fit the CT data.² However, if we fit just the 10 CC SNe alone, then a satisfactory confidence level CL = 0.15 is found, with $`b=1.56`$ and $`R=2.80`$. Alternatively, if one or the other of the two conflicting SNe is eliminated, then (CC+CT) fits can be found that are also good CC fits. Thus, making the color cut, thereby discarding 98bu and retaining 91T, yields $`H_0=60.8`$ with a CC confidence level of 0.36, while selecting just the directly calibrated SNe and making no color cut discards 91T and retains 98bu, yielding $`H_0=64.9`$ with a CC confidence level of 0.30. It is to be expected from the extreme nature of these two SNe, lying, as they do, far out on either side of the $`\chi ^2`$ distribution, that when one or the other is eliminated it will substantially impact $`H_0`$. It has been suggested (Fisher et al. 1999) that SN 1991T forms a separate class of relatively rare superluminous objects not seen among the CT sample. It is best explained as the result of a merger of two white dwarfs leading to a super-Chandrasekhar explosion. SN 1998bu is a normal supernova, but with considerable dust obscuration. Making the conventional color cut $`(B-V)<0.2`$ eliminates it. Since there is no evidence coming from interstellar absorption lines for strong dust obscuration among the 29 CT SNe, whereas all three of the 10 CC SNe falling outside this cut show clear evidence for this, in order to minimize bias between the samples the safest procedure is to apply the color cut to both groups as we do in the CC6/CT26 fit. We list in Table 3 the values of $`M_B^0`$ for each of the Cepheid-calibrated SNe along with their residuals, i.e., $`\chi =[M_B^0-<M_B^0>]/\delta M_B^0`$, for both fits of Table 2. In Figure 1, we show the corrected values (filled circles) and the uncorrected values $`M_B`$ (open circles) for the full sample (CC10/CT29). These are displayed as a function of $`\mathrm{\Delta }m_{15}`$ in Figure 1a and as a function of $`B-V`$ in Figure 1b. The strong dependences of the uncorrected open circles on $`\mathrm{\Delta }m_{15}`$ and $`B-V`$ are mostly removed by the best-fit parameters $`b=0.55`$ and $`R=2.40`$, as seen in the corrected filled circles. However, as noted above, this fit to a common value of $`<M_B^0>=-19.44`$ has an unsatisfactory $`\chi ^2`$ with CL = 0.009 due to the conflicting demands of SN 1991T and SN 1998bu. ## 4 DISCUSSION From the confidence levels in Table 2, it is evident that the combined data from the Calán-Tololo and the Cepheid-calibrated supernovae can be simultaneously fit in a satisfactory manner. However, the good confidence level, primarily the result of exceedingly good fits of the CT SNe by themselves, disguises a significant difference in the data sets. This is apparent in Figure 2, which shows plots of $`B-V`$ vs. $`\mathrm{\Delta }m_{15}`$ for the 29 CT SNe (Figure 2a) and the 10 CC SNe (filled circles in Figure 2b). The striking difference between the two groups is the complete absence of unreddened $`(B-V<0.2)`$ events beyond $`\mathrm{\Delta }m_{15}=1.2`$ among the 10 CC SNe and their abundance among the 29 CT SNe.
Since $`B-V`$ and $`\mathrm{\Delta }m_{15}`$ are both independent of distance, we may include in Figure 2b six additional nearby SNe in the Virgo and Fornax clusters (open circles) whose distances are still uncertain. This inclusion makes the two samples more compatible, suggesting that the void in the nearby sample of 10 may be, in part, due to a statistical fluctuation. Thus, while the 29 CT SNe in Figure 2a display very little correlation between $`B-V`$ and $`\mathrm{\Delta }m_{15}`$, within the limited statistics of the 10 CC SNe there is a strong correlation, but one which is much reduced by the inclusion of the six distance-uncalibrated SNe. A possible bias affecting any measurement of the Hubble constant, both in our two-parameter fit and previous one- and zero-parameter fits, may arise because the nearby and distant samples come from galaxies of somewhat different morphologies. Since a galaxy must contain a population of young stars in order to produce Cepheid variables, the CC SNe are generally found in spiral galaxies. (An exception to this is NGC 5253, the parent of SNe 1972E and 1895B, which is often classified as an E/S0 peculiar galaxy. A putative explanation for its star forming regions is that they are the result of a galactic merger.) Thus, apart from SN 1972E, our CC sample comes from spiral galaxies, whereas the CT sample is about equally divided between spirals and E, E/S0, and S0 galaxies. Eliminating from CT26 all but the 13 spirals gives for CC6/CT13, after reminimizing $`\chi ^2`$ for this smaller sample, a value of $`61.9\pm 2.0`$ for $`H_0`$ compared to $`62.8\pm 1.6`$ for the full sample CC6/CT26. Since the former is presumably freer of potential bias we take $`H_0=62`$ as our best estimate of the Hubble constant and the associated value of -19.46 for $`<M_B^0>`$. We now discuss the uncertainty in $`H_0`$ arising from a variety of sources. The 1-$`\sigma `$ statistical error in $`\delta H_0`$ is found to be $`\pm 1.0`$. In addition, there is uncertainty in $`b`$ and $`R`$ which we display in Figure 3, where $`H_0`$, as a function of $`b`$ and $`R`$, is superposed on the 1-, 2-, 3-, and 4-$`\sigma `$ contours of the CC6/CT26 fit. From this we obtain the 1-$`\sigma `$ error from the $`b`$ and $`R`$ uncertainty of $`\delta H_0=\pm 1.2`$. Throughout this analysis we have fixed the insensitive deceleration parameter at $`q_0=-0.45`$, found by making a three-parameter $`(b,R,q_0)`$ fit of the 26 CT SNe jointly with the 9 cosmological SNe of Riess et al. (1998) for which $`B-V`$ colors were available. If we assign an uncertainty of $`\pm 0.5`$ to this value of $`q_0`$, this leads to an uncertainty $`\delta H_0=0.6`$. Combining these three sources in quadrature yields $`\delta H_0=\pm 1.7`$. To these uncertainties arising from the SNe Ia analysis must be added a larger uncertainty coming from the calibration of the Cepheid variables and their possible metallicity dependence. The Madore and Freedman (1991) distance scale in current use fixes the distance modulus to the Large Magellanic Cloud (LMC) to be 18.50 mag (50.1 kpc) and scales more distant Cepheid-based galaxy distances to this value. Because of the importance of this number for the determination of the Hubble constant, much attention has been devoted to methods for obtaining a more accurate value. A recent review from a post-Hipparcos perspective (Walker 1998) recommends a mean modulus of $`18.55\pm 0.10`$, found using a variety of distance indicators.
However, even more recent analyses of an eclipsing binary in the LMC (Guinan et al. 1998), found during the OGLE microlensing search, yield a value as low as $`18.22\pm 0.13`$ mag (Udalski et al. 1998). For this and other reasons evident in the Walker (1998) summary, we use without change the distance moduli of Cepheid-calibrated SNe Ia found by the various observers, which are all based on the 18.50 mag LMC value. We assign to this value a 1-$`\sigma `$ error of $`\pm 0.15`$ mag, with the usual statistical view that a true distance modulus to the LMC differing by twice that value would be surprising and one differing by 3-$`\sigma `$ would be very unlikely. Using $`\delta H_0=29(H_0/62)\delta M_B`$ obtained from equation (1), this uncertainty leads to a 1-$`\sigma `$ error of $`\delta H_0=\pm 4.34`$. For some years there has been concern that metallicity differences between the LMC and the Cepheid-calibrated galaxies can alter the derived distances to these galaxies. As the most easily measured proxy for metallicity, Kochanek (1997) has collected the logarithmic abundance ratio $`[O/H]`$ for a number of the Cepheid-calibrated galaxies relative to that of the LMC. For galaxies hosting SNe Ia, these cover a range of $`[O/H]`$ from -0.35 to +0.69. We have used these to alter the measured distance moduli $`\mu `$ by means of the expression $`\mu ^{\prime }=\mu +\gamma [O/H]`$, where the parameter $`\gamma `$ is varied to obtain a best fit. For CC6/CT26, the data set for which five of the Cepheid-calibrated galaxies have measured values of $`[O/H]`$, we find $`\gamma =+0.23\pm 0.77`$, in agreement with other recent findings which fall between 0.14 and 0.31 (Kochanek 1997, Kennicutt et al. 1998, Nevalainen and Roos 1998), but showing that this data set has little sensitivity to $`\gamma `$. If we fix $`\gamma =0.3`$ and follow the previous procedure of varying $`b`$ and $`R`$ to find a $`\chi ^2`$ minimum, then $`H_0`$ increases by 0.8 km s<sup>-1</sup> Mpc<sup>-1</sup>. Since the question of a metallicity dependence is still in dispute (see Beaulieu et al. 1997, Saha et al. 1997, and Saio and Gautschy 1998 for recent differing views), we retain the value of $`H_0`$ without correction but assign an uncertainty in $`\gamma `$ of $`{}_{-0}^{+0.3}`$, resulting in $`\delta H_0={}_{-0}^{+0.8}`$ due to this source. Combining all these errors in quadrature yields $`\delta H_0=\pm 4.7`$ for the overall uncertainty. Despite significant differences with other analyses, our value of $`H_0\approx 62`$ falls squarely in the middle of the range spanned by recent determinations using SNe Ia. These values of $`H_0`$, all using the distance modulus of 18.50 to the LMC, range between 55 and 69 km s<sup>-1</sup> Mpc<sup>-1</sup>. One difference between our analysis and others is that they all seem to use the nearby approximation to equation (1), which is tantamount to setting $`q_0=1`$. If instead they were to use equation (1) with a negative $`q_0`$, as is now apparently required by the cosmological data, then their values would each increase by about 1.7 units in cases when the CT SNe are used for the Hubble flow sample. The other major difference is that only our analysis uses two independent parameters ($`b`$ and $`R`$) to standardize the luminosity of the supernovae; this is required in order to obtain a fit to the CT data with an acceptable $`\chi ^2`$ (Tripp 1998). Among the recent analyses using a collection of nearby calibrating supernovae, Saha et al.
(1997), employing seven Cepheid-calibrated SNe (1895B, 1937C, 1960F, 1972E, 1981B, 1989B, 1990N) and no standardizing parameters, find $`H_0=58`$. Suntzeff et al. (1998), using five CC SNe (1937C, 1972E, 1981B, 1990N, 1998bu) along with the Hubble flow CT SNe and with a one-parameter correction for $`\mathrm{\Delta }m_{15}`$, obtain $`H_0=64`$. The difference in $`H_0`$ found in these two analyses lies primarily in the introduction of the $`b`$ parameter. Our analysis, involving both $`b`$ and $`R`$, reduces $`b`$ and leads to an intermediate result. This is immediately apparent from Figure 3. As can be discerned from the figure, the Saha et al. (1997) $`b=R=0`$ fit should yield about 60 for $`H_0`$. This is equivalent to their 58 after correcting for the above effect of $`q_0`$. Likewise the Suntzeff et al. (1998) fit with only $`b`$ as a free parameter should, after the $`q_0`$ correction, yield about 66, as can be seen from the figure for $`R=0`$. Apparently, different selections of Cepheid-calibrated SNe have only a minor effect on $`H_0`$. Thus future augmentations to the present small number of Cepheid-calibrated SNe will probably have little impact on $`H_0`$. Assuming that there are no significant observational selection differences between the CT and CC samples, only a revision of the Cepheid distance scale will be capable of altering $`H_0`$ by more than a few units. This work was supported in part by the Director, Office of Energy Research, Office of High Energy and Nuclear Physics, Division of High Energy Physics of the U.S. Department of Energy under contract AC03-76SF00098 and (for D.B.) an NSF grant AST 9417102.
REFERENCES
Beaulieu, J. P., Sasselov, D. D., Renault, C. et al. 1997, A&A, 318, L47
Branch, D., Romanishin, W. & Baron, E. 1996, ApJ, 465, 73
Branch, D. 1998, ARA&A, 36, 17
Burstein, D. & Heiles, C. 1984, ApJS, 54, 33
Fisher, A., Branch, D., Hatano, K. & Baron, E. 1999, MNRAS, in press (astro-ph/980732)
Guinan, E. F., Fitzpatrick, L. E., DeWarf, F. P. et al. 1999, ApJ, 509, L21
Hamuy, M., Phillips, M. M., Schommer, R. A. et al. 1996, AJ, 112, 2398
Kennicutt, R. C., Stetson, P. B., Saha, A. et al. 1998, ApJ, 498, 181
Kochanek, C. S. 1997, ApJ, 491, 13
Lira, P., Suntzeff, N. B., Phillips, M. M. et al. 1998, AJ, 115, 234
Madore, B. F. & Freedman, W. L. 1991, PASP, 103, 933
Nevalainen, J. & Roos, M. 1998, A&A, 339, 7
Perlmutter, S. et al. 1998, ApJ, in press
Phillips, M. M. 1993, ApJ, 413, L105
Phillips, M. M., Phillips, A. C., Heathcote, S. R. et al. 1987, PASP, 99, 592
Riess, A. G., Filippenko, A. V., Challis, P. et al. 1998, AJ, 116, 1009
Saha, A., Sandage, A., Labhardt, L. et al. 1996, ApJ, 466, 55
Saha, A., Sandage, A., Labhardt, L. et al. 1997, ApJ, 486, 1
Saio, H. & Gautschy, A. 1998, ApJ, 498, 360
Schaefer, B. E. 1996, ApJ, 460, L19
Schaefer, B. E. 1998, ApJ, 509, 80
Suntzeff, N. B., Phillips, M. M., Covarrubias, R. et al. 1998, astro-ph/9811205
Szomoru, A. & Guhathakurta, P. 1999, astro-ph/9901422
Tanvir, N. R., Shanks, T., Ferguson, H. C. & Robinson, D. R. T. 1995, Nature, 377, 27
Tripp, R. 1998, A&A, 331, 815
Turner, A., Ferrarese, L., Saha, A. et al. 1998, ApJ, 505, 207
Udalski, A., Pietrzynski, G., Wozniak, P. et al. 1999, ApJ, 509, L25
Vaughan, T. E., Branch, D., Miller, D. L. & Perlmutter, S. 1995, ApJ, 439, 558
Walker, A. 1998, in Post-Hipparcos Candles, eds. F. Caputo & A. Heck (Dordrecht: Kluwer Academic Publ.), astro-ph/9808336
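The quadrature error budget assembled in the Discussion is simple enough to verify directly. The following minimal sketch (Python; the function and variable names are ours, purely for illustration) reproduces the combinations quoted above, using only the numbers stated in the text; the small asymmetric metallicity term is omitted since it barely changes the total.

```python
import math

def quadrature(*terms):
    """Combine independent 1-sigma errors in quadrature."""
    return math.sqrt(sum(t * t for t in terms))

# 1-sigma errors on H_0 quoted in the text (km/s/Mpc)
statistical = 1.0                      # statistical error of the fit
b_and_R     = 1.2                      # from the b and R uncertainty (Figure 3)
q0          = 0.6                      # from varying q_0 by +/- 0.5
sn_analysis = quadrature(statistical, b_and_R, q0)   # ~1.7

# Cepheid/LMC zero point: delta H_0 = 29 (H_0/62) delta M_B
H0, delta_MB = 62.0, 0.15
lmc          = 29.0 * (H0 / 62.0) * delta_MB          # ~4.34

overall = quadrature(sn_analysis, lmc)                # ~4.7
print(f"SN analysis: +/-{sn_analysis:.1f}, LMC calibration: +/-{lmc:.2f}, overall: +/-{overall:.1f}")
```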
# Untitled Document International Workshop on the Future of Physics and Society Debrecen, Hungary, 4–6 March, 1999 Workshop Summary Raymond S. Mackintosh Physics Department, The Open University Milton Keynes, MK7 6AA, UK Abstract The Debrecen workshop was one of a number held in preparation for the UNESCO–ICSU World Conference on Science, which will take place in Budapest, June, 1999. A report representing the views of the workshop, prepared for that conference and containing a number of recommended actions, is included with this summary. The workshop affirmed the ongoing importance of physics for its own sake and as part of our culture, as a key element in increasingly unified science and as an essential contributor to the solution of environmental and energy problems. The problems faced by physics as an activity and as an educational subject were discussed and actions for both society as a whole and the physics community itself were put forward. Introduction A principal function of the workshop was to submit a report making recommendations to the UNESCO–ICSU World Conference on Science, to be held in Budapest, June, 1999. Nevertheless, a great many important points were raised which are addressed to the international physics community rather than to the World Conference. This workshop summary is therefore in two parts: the first part is exactly the report finally submitted to UNESCO and the second is a summary of the other points raised at the conference which were agreed to be important. Part I: Report to the World Conference on Science Preface The workshop affirmed three general conclusions: 1. The contribution of physics to all aspects of life, material and non-material, will be essential for the foreseeable future. 2. Physics currently faces serious problems in the world. Many of these problems affect science in general, but a number are specific to physics. 3. Actions are needed to assure the continued health of physics research, teaching and cultural influence. Some form of ‘contract’ between physicists and the rest of society will be required. We emphasise that the problems physics faces are not related to the subject matter but to its relations with society and the perceptions of society. By ‘physics’ we include the physical sciences in general and we affirm the growth of interdisciplinary fields and the trend for areas such as astronomy, cosmology, environmental studies and biophysics to become ever more closely linked with all aspects of physics. The workshop identified seven important actions. We recommend that the World Conference on Science organised by UNESCO and ICSU consider these for inclusion in its report “Science Agenda – Framework of Actions”. Many of them apply to other branches of science and hence ‘physics’ could be replaced in many places by ‘science’. Some actions, however, are specific to physics. The list of recommended actions follows. We present in appendices some of the points which led us to make these recommendations. The workshop expressed the view that the experience of the many relevant professional bodies should be exploited in implementing the recommendations. Recommended Actions 1. Promulgate a declaration affirming the vital importance of basic physical science and the need to protect and support curiosity-led physics. 2. Affirm the importance of making a substantial effort to educate and inform the public. A guideline should be established recommending that, say, 1 % of money spent on research should be made available for public awareness. 3. 
Provide substantial support for the improvement of the teaching of physics throughout the world, at all levels from school to university. This should involve: * establishing guidelines for what level of scientific understanding would be expected at particular stages of school education and how much time should be devoted to physics teaching at each level; * monitoring these standards and defending them from external threat; * encouraging both curricula and teaching methods to adapt to the changing social and scientific environment. In addition, support is required for teachers, for example by enhancing their prestige and providing continuing education and personal development. UNESCO should promulgate the principle that physics should be taught by persons who have been trained to become physics teachers. Reliable information concerning curricula in different countries should be established and made widely accessible. 4. Explore ways of establishing a recognised authoritative and impartial international body, set up under the auspices of UN or UNESCO, to adjudicate damaging disputes involving scientific issues. Examples of such disputes are cold fusion and a wide range of environmental issues. The new body would investigate the extent to which claims are based upon established science or are simply ungrounded opinion, perhaps influenced by pressure groups. This will provide an authoritative scientific basis for important political decisions. 5. Establish means for supporting physics within the new democracies of Europe. This should be done by facilitating international collaboration and by encouraging the support of physicists within their own countries. Find ways to support and utilise for mutual benefit the reservoir of advanced expertise in the former Soviet Union. 6. Special measures should be taken to ensure the free movement of scientists. In particular, UNESCO should encourage governments to facilitate the issuing of visas for scientists if such are required. 7. The long-term health of physics requires the establishment of guidelines linking R&D expenditure to GNP at a level appropriate to the economic state of each country. In addition, there should be guidelines and standards for coherent and stable national science policies; these policies should be developed in close consultation with national scientific communities. UNESCO should establish a committee to make recommendations to governments. In addition, the workshop agreed that there are a number of measures which the physics community itself should take. These will be publicised in due course. Appendix 1: Why the contribution of physics will continue to be essential 1. Physics is a central part of our culture and will continue to inspire many people. Physics reveals important universal truths notwithstanding certain strands of postmodern thought. 2. Physics will continue to underpin all science and technology for the foreseeable future. 3. Physics is and will continue to be essential for analysing and solving urgent environmental and energy problems. 4. Physics plays a unique educational rôle: It is recognised that other scientific disciplines more and more require knowledge of physics. Physics is becoming recognised as providing education of great value for many careers outside physics such as commerce, banking and medicine. PhDs who go into industry are an indispensable byproduct of pure physics research. 5. Physics is global and constitutes our best ‘anti-Babel’. 
Generations of physicists of the most diverse political and cultural backgrounds have collaborated on the basis of shared understanding and shared ideals. 6. Physics sets standards of rational thought in the face of irrationality; it upholds the primacy of observation. Appendix 2: Some general problems currently faced by science 1. Many people feel that science robs the world of meaning and this deeply affects their attitude to science. Science is felt by many people to be ‘cold’ and ‘alienating’. 2. Modern forms of irrationality are becoming widespread and sometimes involve outright opposition to scientific attitudes and even scientific knowledge. There is sometimes an unfortunate, even dangerous, political aspect. 3. There is a serious ‘authority problem’ in modern life with few people able to make rational judgements as to who or what to believe. This is reflected in a widespread relativism improperly invoking Einstein. Similarly, Heisenberg is improperly invoked in promoting the idea that everything is uncertain anyway. The widespread tendency to adopt conspiracy theories is a potentially dangerous aspect of this problem. There is a corresponding tendency in academe in the form of social constructivism; in extreme form this denies that science can progressively approach universal truth. 4. External pressures, sometimes commercial in nature and often exacerbated by funding problems, lead to damaging conflicts within subject areas. Damaging conflicts also arise between subject areas, particularly under pressure of inadequate funding. 5. In Europe and other places there is a squeeze on industrial research as a result of ‘short-termism’. 6. Science teaching and research face specific local problems, particularly in Eastern Europe and elsewhere. It is precisely the nature of many of these items which makes greater support for science an urgent matter in the modern world. The workshop also identified a series of problems specific to physics and a report discussing these as well as some proposed measures will be published. Part II: Physics in the modern world: some problems and possible solutions The impact of the globalization on all our institutions and our value systems was a common element in many contributions. It is clear that physics will have a key role to play in studying and solving the global environmental and energy problems the world will face in the coming century. Globalization was felt in another way: while some of the problems listed below are particular to specific regions, there was nevertheless very much common ground in the identification of the general problems faced. 1. Some problems we face The workshop identified many difficulties faced by physics as an ‘institution’ and as a subject in schools and universities. These difficulties do not arise from its own subject matter and in particular the conference affirmed that the subject is certainly not ‘worked out.’ Nevertheless, physics as an activity and as an academic subject does face problems and some of the specific points raised were: 1. For many students, physics can seem remote from their everyday concerns. This is true also for the general public. This is in great measure because physics is abstract and lacks visualizable elements (particularly modern microscopic physics, with astrophysics an exception). This presents a problem for teachers and those communicating with the public. 2. The fact that physics is essentially mathematical also presents special problems. 
While the mathematical language is a main strength of physics as a discipline, it is a major obstacle in the way of communicating the meaning of physics to the general public. 3. Many school science curriculums are relatively static and remote from exciting contemporary developments and unrelated to important contemporary issues such as medicine, energy and the environment. This is in spite of the direct relevance of physics to all these issues. 4. Physicists have acquired a negative image in some parts of society, not least because of the association with nuclear weapons. 5. The public has no clear picture of how society has benefited from physics and how physics is essential for solving environmental and energy problems. 6. There is no ‘physics industry’ in the sense that, for example, there is a ‘chemical industry’ and a ‘biotechnology industry’. The following two problems are, in part, consequences of this. 7. Students in schools are unaware of the career possibilities enabled by education in physics which exist even in countries in which high-technology industry is not strong. 8. Physics faces problems in universities: in many places there are fewer students, and many appear to be less able. Sometimes, multi-disciplinary courses at undergraduate level add to the downward trend in the academic level of courses. This lowering of standards also occurs as a result of pressure to ‘satisfy customers’. The supply of students to do PhDs is highly susceptible to economic circumstances and many countries frequently face a serious shortage. 9. In Europe and other places there is a squeeze on industrial research as a result of ‘short-termism’. 10. In many countries there is a squeeze on pure research and a growing requirement for researchers to justify their work in terms of economic benefits. 11. In many countries there is a serious lack of competent and enthusiastic physics teachers. 12. Physics is particularly subject to competition from pseudo-science. This is an aspect of the authority problem: the public is confused as it is confronted with a mixture of information and misinformation through the media, including the Internet. 2. Hopeful factors we should find ways to exploit The workshop discussed solutions to these problems and also identified some hopeful signs. Among the positive points were: 1. Politicians at the highest levels are beginning to find that the prestige arising from national success in pure science is of value in international negotiations. A related fact is that, in many countries, it is success in science (along with sport) which most arouses national pride. 2. In some countries, and potentially everywhere, there is a higher than ever interest in popular science. This point has been emphasized by professionals in the popular science business, and is also clear from the number of books published. (The simultaneously existing problems remind us that the ‘public’ is not a single undifferentiated body.) Our defence of physics, as well as science in general, must find ways of exploiting these hopeful points. It was pointed out that ‘The resource of the 21st century is knowledge…’ and certainly physical knowledge will be an important part of this. 3. Recommendations to the physics community The workshop identified a number of areas where action by the physics community and its friends, including those involved in teaching physics, could be of great benefit: 1. Physicists should present a united front; suppress factional fighting; show respect for different subject areas. 
(We are vulnerable to ‘divide and rule’.) 2. Physicists must deal responsibly with the public, avoid exaggeration, be honest and should not infringe conventions relating to peer review and publication. (‘Going public’ prior to peer review has been very damaging to biology, and physics has also been harmed by it.) 3. Physicists should assume more responsibility in the issues of the global environment, sustainable growth or equilibrium and the energy problem. Physics will have a key role to play in finding an acceptable solution to these problems. Particular presentations to the workshop made very clear the seriousness of the situation and exemplified the contribution of physics. 4. Facilitate improved means for scientists to advise (and enter into dialogue with) government and other public organisations. (Interaction should be both ways and involve the grass roots scientists.) 5. We should find ways of using the expertise of sociologists to explore in greater depth the cause and nature or anti-scientific feeling; this could even lead to entente between physics and some part, at least, of the world of sociology. This could be of great benefit. An urgent problem requiring study is the way the media treat pseudo-science in modern pluralistic societies. 6. We should find ways to encourage industry to support long term and curiosity-led research. Governments should be persuaded to encourage, facilitate or enforce this (through tax laws, etc.). 7. Research should be carried out, with the participation of both scientists and economists, which shows the long term influence of scientific research on GNP. This should be done in a way which includes such things as the contribution of the training which is an important byproduct of pure research at PhD level. 8. Many points relating to teaching physics were mentioned, and some appear in the ‘action’ statements to UNESCO. Particular points are: * Physics teaching must respond to changing social and also scientific circumstances. * There is much value in courses which relate the important findings and perspectives of cosmology etc. to common human needs and aspirations. This was demonstrated to the workshop by an account of a general course at undergraduate level. * Teachers should recognise the value of relating physics teaching to matters of everyday importance, including environmental and energy issues. Teachers should emphasise that it is everybody’s moral duty to have an elementary understanding of the physics of the threatened global environment. The abstract aspects of physics should be moderated at the introductory level. * There are many ‘modern physics’ topics which can be made very accessible with imaginative teaching methods involving pupil activity. A case was put that they can be made more accessible and more relevant than some traditional topics if they are presented with appropriate explanations. Evidently there is a need for continuing debate concerning the teaching of physics in schools. There is no accepted general solution to the apparently contradictory requirements of, on the one hand, attracting talented young people into physics and preparing them for university level studies, and, on the other hand, teaching physics in a way that does not repel and alienate future citizens. 9. Various points were put forward concerning means to educate and inform the public, (the subject of a recommended UNESCO action). 
Points mentioned include: the need to professionalize interaction with the media; the need for humour; demonstrating the openness of science by letting scientific disputes be public; the virtue of science laboratories, travelling exhibitions, science&technology weeks; the importance of the personal and biographical elements in presentations, etc. 10. Investigate and seek remedies for the anomalously low women’s participation in physics in some countries compared to others. We should do this in the first place because of the human fulfilment and beneficial productivity which is currently being lost. There is further potential benefit: the remedy may substantially improve the public status of physics in general. Acknowledgements I am deeply grateful to Herwig Schopper and Rezső Lovas for their thorough critique of this summary and for saving me from embarrassing omissions. The first section, the report for UNESCO, was a joint submission of the three of us on behalf of the workshop. The conference was supported by the UNESCO-Physics Action Council, the European Physical Society, OMFB, OTKA, MTA and MALÉV. I am personally very grateful to the Lovases for hospitality.
# An effective Hamiltonian for an extended Kondo lattice model and a possible origin of charge ordering in half-doped manganites ## I Introduction The family of doped manganites, R<sub>1-x</sub>X<sub>x</sub>MnO<sub>3</sub> (where R = La, Pr, Nd; X=Sr, Ca, Ba, Pb), has renewed both experimental and theoretical interest due to the colossal magnetoresistance and its potential technological application to magnetic storage devices. Apart from their unusual magnetotransport properties, experimental observations of a series of charge, magnetic and orbital ordering states over a wide range of doping have also stimulated extensive theoretical interest. Early theoretical studies of manganites concentrated on the existence of metallic ferromagnetism. From the so-called “Double Exchange” (DE) model, in which the mobility of itinerant electrons forces the localized spins to align ferromagnetically, one can understand qualitatively the relation between transport and magnetism. However, the rich experimental phase diagrams go far beyond the DE model. For example, according to the DE model, itinerant electrons have the lowest kinetic energy in a tight-binding model, and should be driven to form a more stable ferromagnetic phase when the system is half doped, i.e., x=0.5. On the contrary, the half-doped system is insulating at low temperature rather than the metallic ferromagnet expected theoretically. Furthermore, a charge ordered state was observed, which is characterized by an alternating arrangement of Mn<sup>3+</sup> and Mn<sup>4+</sup> ions in real space. Usually, when the repulsive interaction between charge carriers dominates over the kinetic energy, the charge carriers are driven to form a Wigner lattice. It has been shown experimentally that the charge ordering is sensitive to an applied magnetic field at low temperatures: the resistance of a sample may decrease by several orders of magnitude and the charge ordering disappears at a low temperature, which implies that the repulsive interaction should have a close relation to the spin background. Although there have been extensive theoretical efforts on the anomalous magnetic properties, a comprehensive understanding of the physical origin of the ordered states and their relation to the transport properties is still awaited. To explore the electronic origin of these phenomena, we try to establish a more unified picture of the physics starting from an electronic model, which has been used extensively to investigate the magnetic properties of the system. We derive an effective Hamiltonian in the case of strong on-site Coulomb interaction and Hund coupling by means of a projective perturbation approach. It is found that the virtual process of electron hopping produces an antiferromagnetic superexchange coupling between localized spins and a repulsive interaction between itinerant electrons. The antiferromagnetic correlation will enhance the repulsive interaction and suppress the mobility of electrons. In the half-doped case, i.e., $`x=0.5`$, the relatively strong repulsion will drive the electrons to form a Wigner lattice. In the case of the Wigner lattice, we prove that the electron spins are fully saturated (polarized) while the localized spins form an antiferromagnetic background. Strictly speaking, the ground state possesses both antiferromagnetic and ferromagnetic, i.e., ferrimagnetic, long-range order.
## II Effective Hamiltonian The electronic model for doped manganites studied in this paper is defined as $`H`$ $`=`$ $`t{\displaystyle \underset{ij,\sigma }{}}c_{i,\sigma }^{}c_{j,\sigma }+U{\displaystyle \underset{i}{}}n_{i,}n_{i,}`$ (1) $``$ $`J_H{\displaystyle \underset{i}{}}𝐒_i𝐒_{ic}+J_{AF}{\displaystyle \underset{ij}{}}𝐒_i𝐒_j.`$ (2) where $`c_{i,\sigma }^{}`$ and $`c_{i,\sigma }`$ are the creation and annihilation operators for $`e_g`$ electron at site $`i`$ with spin $`\sigma `$ $`(=,)`$, respectively. $`ij`$ runs over all nearest neighbor pairs of lattice sites. $`𝐒_{ic}=_{\sigma ,\sigma ^{}}\sigma _{\sigma \sigma ^{}}c_{i,\sigma }^{}c_{i,\sigma ^{}}/2`$ and $`\sigma `$ are the Pauli matrices. $`𝐒_i`$ is the spin operator of three $`t_{2g}`$ electrons with the maximal value $`3/2`$. $`J_H>0`$ is the Hund coupling between the $`e_g`$ and $`t_{2g}`$ electrons. The antiferromagnetic coupling originates from the virtual process of superexchange of $`t_{2g}`$ electrons. In reality, the $`e_g`$ orbital is doubly degenerated. For the sake of simplicity, we only consider one orbital per site, which amounts to assuming a static Jahn-Teller distortion and strong on-site interactions (relative to kinetic energy). Usually the Hund coupling in the doped manganites is very strong, i.e., $`J_HSt`$. Large $`J_HS`$ suggests that most electrons form spin $`S+1/2`$ states with the localized spins on the same sites, which makes it appropriate to utilize the projective perturbation technique to investigate the low-energy physics of the Hamiltonian (2). The effect of finite and large $`J_HS`$ can be regarded as the perturbation correction to the case of infinite $`J_H`$, which is described by a quantum double exchange model. Up to the second-order perturbation correction, there are two types of the virtual processes which contribute to the low energy physics (See in Fig. 1): (a). An electron hops from one site to one of the nearest neighbor empty site to form a spin $`S1/2`$ state and then hops backward. The intermediate state has a higher energy $`\mathrm{\Delta }E_a=J_H(S+1/2)`$ than the initial state. (b). One electron hops from one site to one of the singly occupied sites and then backward. The intermediate state has a higher energy $`\mathrm{\Delta }E_b=J_HS+U`$ than that of the initial state. Hence, by using a projective perturbation approach, the effective Hamiltonian is written as $`H_{eff}`$ $`=`$ $`t{\displaystyle \underset{ij,\sigma }{}}\overline{c}_{i,\sigma }^{}\overline{c}_{j,\sigma }+J_{AF}{\displaystyle \underset{ij}{}}\overline{𝐒}_i\overline{𝐒}_j`$ (3) $`+`$ $`{\displaystyle \frac{2St^2}{J_H(2S+1)^2}}{\displaystyle \underset{ij}{}}\left({\displaystyle \frac{𝐒_i}{S}}{\displaystyle \frac{\stackrel{~}{𝐒}_j}{S+\frac{1}{2}}}1\right)P_{ih}P_{js}^+`$ (4) $`+`$ $`{\displaystyle \frac{t^2}{J_HS+U}}{\displaystyle \underset{ij}{}}\left({\displaystyle \frac{\stackrel{~}{𝐒}_i}{S+\frac{1}{2}}}{\displaystyle \frac{\stackrel{~}{𝐒}_j}{S+\frac{1}{2}}}1\right)P_{is}^+P_{js}^+`$ (5) where $`\overline{𝐒}_i=𝐒_iP_{ih}+2S\stackrel{~}{𝐒}_iP_{is}^+/(2S+1)`$ and $`\overline{c}_{i,\sigma }={\displaystyle \underset{\sigma ^{}}{}}{\displaystyle \frac{𝐒_i\sigma _{\sigma \sigma ^{}}+(S+1)\delta _{\sigma \sigma ^{}}}{2S+1}}(1n_{i,\sigma ^{}})c_{i,\sigma ^{}}.`$ $`\stackrel{~}{𝐒}_i`$ is a spin operator with spin $`S+1/2`$, and a combination of spin of electron and localized spin on the same site. $`P_{ih}`$ and $`P_{is}^+`$ are the projection operators for empty site and single occupancy of spin $`S+1/2`$. 
The first term in Eq.(5) is the quantum double exchange model. It enhances ferromagnetic correlation, and may be suppressed if the antiferromagnetic exchange coupling of localized spin is very strong. The second, third and fourth terms prefer antiferromagnetism to ferromagnetism. The third term describes an attractive particle-hole interaction since the value of the operator before $`P_{ih}P_{js}^+`$ is always non-positive. In another words, an repulsive interaction between electrons in the restricted space arises when the spin background deviates from a saturated ferromagnetic case. To simplify the problem, we take the large spin approximation, and keep $`J_HS=j_h`$ and $`J_{AF}S^2=j_{af}`$. The spin operator is parameterized in polar angle $`\theta `$ and $`\varphi `$. In the approximation, the Hamiltonian is further reduced to $`H_{cl}`$ $`=`$ $`t{\displaystyle \underset{ij}{}}c_{ij}\alpha _i^{}\alpha _j2j_{af}{\displaystyle \underset{ij}{}}\mathrm{sin}^2{\displaystyle \frac{\mathrm{\Theta }_{ij}}{2}}`$ (6) $`+`$ $`{\displaystyle \underset{ij}{}}2\mathrm{sin}^2{\displaystyle \frac{\mathrm{\Theta }_{ij}}{2}}\left({\displaystyle \frac{t^2}{2j_h}}{\displaystyle \frac{t^2}{j_h+U}}\right)\alpha _i^{}\alpha _i\alpha _j^{}\alpha _j`$ (7) $``$ $`{\displaystyle \underset{ij}{}}{\displaystyle \frac{t^2}{2j_h}}\mathrm{sin}^2{\displaystyle \frac{\mathrm{\Theta }_{ij}}{2}}(\alpha _i^{}\alpha _i+\alpha _j^{}\alpha _j)`$ (8) where $`c_{ij}=\mathrm{cos}{\displaystyle \frac{\theta _i}{2}}\mathrm{cos}{\displaystyle \frac{\theta _j}{2}}+\mathrm{sin}{\displaystyle \frac{\theta _i}{2}}\mathrm{sin}{\displaystyle \frac{\theta _j}{2}}e^{i(\varphi _i\varphi _j)}`$ ; $`\mathrm{cos}\mathrm{\Theta }_{ij}=\mathrm{cos}\theta _i\mathrm{cos}\theta _j+\mathrm{sin}\theta _i\mathrm{sin}\theta _j\mathrm{cos}(\varphi _i\varphi _j);`$ $`\alpha _i=\mathrm{cos}{\displaystyle \frac{\theta _i}{2}}(1n_{i,})c_{i,}+\mathrm{sin}{\displaystyle \frac{\theta _j}{2}}(1n_{i,})c_{i,}.`$ Physically, $`\alpha `$ is an electronic operator which is fully polarized along the localized spin on the same site. $`|c_{ij}|=\mathrm{cos}(\mathrm{\Theta }_{ij}/2)`$ and approaches to zero when $`\mathrm{\Theta }_{ij}\pi `$. If we neglect the Berry phase in $`c_{ij}`$, the first term gets back to the classical DE model. Now it is clear that the ferromagnetism is always predominant in the ground state if other terms in the effective Hamiltonian (Eq. (8)) are neglected. The sign of the interaction $$V_{ij}=2\mathrm{sin}^2\frac{\mathrm{\Theta }_{ij}}{2}\frac{t^2}{2j_h}\frac{Uj_h}{U+j_h}$$ (10) is determined by the ratio $`j_h/U`$. If $`U`$ is less than $`j_h`$, the interaction is attractive, but if $`U`$ is greater than $`j_h`$, the interaction is repulsive. The attractive or repulsive interaction will lead to different physics. Hence $`U=j_h`$ is a quantum critical point. The influence of the on-site Coulomb interaction will change qualitatively (not just quantitatively) the physics of the doped manganites, which is usually ignored. In the case of small $`U`$, the attractive interaction will drive electrons to accumulate together to form an electron-rich regime. i.e., the phase separation may occur when the spin background becomes antiferromagnetism. Monte Carlo simulation by Dagotto et al. shows that the phase separation occurs in the case of $`U=0`$. However, the phenomenon was not observed in the case of large $`U`$. The phase diagram of $`U=0`$ is also seen in Ref.. 
From our analysis, the attractive interaction originates from the virtual process (b). Due to the double occupancy in the intermediate state, an extra energy $`U`$ costs in the process. When $`U`$ is sufficiently large, the process (b) will be suppressed and the process (a) becomes predominant. The net interaction between electrons is repulsive. Therefore the phase separation may occur only if $`U<j_h`$. ## III Origin of Wigner lattice We are now in the position to discuss the instability to the Wigner lattice. In the doped manganites, the on-site Coulomb interaction is much stronger than the Hund’s rule coupling, i.e., $`UJ_HS.`$ In the case, the process (b) in Fig.1 needs a much higher energy to be excited than the process (a) does. The process (a) dominates over the process (b). The effective interaction is repulsive. Hence we shall focus on the case of strong correlation (i.e., $`UJ_HS`$). To simplify the problem, we take $`U+\mathrm{}`$ and neglect the term containing $`U`$ in Eq.(8). A finite and large $`U`$ will produce minor quantitative (not qualitative) changes of the physics we shall discuss. The ratio of the repulsion to the hopping term $`r=(t/j_h)\mathrm{sin}^2\frac{\mathrm{\Theta }_{ij}}{2}/\mathrm{cos}\frac{\mathrm{\Theta }_{ij}}{2}`$ depends on not only $`t/j_h`$, which is usually very small, but also the angle of two spins. $`r=0`$ if $`\mathrm{\Theta }_{ij}=0`$, and $`+\mathrm{}`$ if $`\mathrm{\Theta }_{ij}=\pi `$. In other words, the ratio could become divergent in the antiferromagnetic spin background ($`\mathrm{\Theta }_{ij}=\pi `$) even though $`t/j_h`$ is very small. Relatively large ratio will make a state with a uniform density of electrons unstable. To understand the physical origin for the Wigner lattice at $`x=0.5`$, we first see what happens in the antiferromagnetic background. When all $`\mathrm{\Theta }_{ij}\pi `$, the average energy per bond is $`2j_h`$ if the two sites are empty or occupied, and $`(2j_{af}+t^2/j_h)`$ if one site is empty and another one is occupied. The later has a lower energy. At $`x=1/2`$, $`\alpha _i^{}\alpha _i=1/2`$. The average energy per bond is $`(2j_{af}+t^2/2j_h)`$ for a state with a uniform density of electrons. If the electrons form a Wigner lattice, i.e., $`(\alpha _i^{}\alpha _i1/2)(\alpha _j^{}\alpha _j1/2)=1/4`$, the average energy per bond is $`(2j_{af}+t^2/j_h)`$, which is lower than that of the state with a uniform density. Therefore in the antiferromagnetic background a uniform density state is not stable against the Wigner lattice even for a small $`t/j_h`$. The same conclusion can be reached by means of the random phase approximation. On the other hand, the formation of the Wigner lattice will also enhance the antiferromagnetic exchange coupling from $`j_{af}`$ to $`(j_{af}+t^2/2j_h)`$. The phase diagram of the ground state is determined by the mean field approach. Several of the features are determined in several limits: for example, the ground state is ferromagnetic at $`t/j_h=0`$ and $`j_{af}=0`$. Due to the instability to the Wigner lattice or charge density wave for finite $`t/j_h`$ and $`j_{af}`$ we take $`\alpha _i^{}\alpha _i1/2=\mathrm{\Delta }e^{i𝐐𝐫_i}`$ where $`𝐐=(\pi ,\pi ,\mathrm{})`$ and $`\mathrm{}`$ is the ground state average. We also take $`c_{ij}=\mathrm{cos}(\mathrm{\Theta }/2)`$ and $`\mathrm{sin}^2(\mathrm{\Theta }_{ij}/2)=\mathrm{sin}^2(\mathrm{\Theta }/2)`$. 
The free energy per bond is $`(\mathrm{\Delta },\mathrm{\Theta })`$ $`=`$ $`{\displaystyle \frac{dk}{(2\pi )^d}\sqrt{ϵ^2(k)\mathrm{cos}^2\frac{\mathrm{\Theta }}{2}+4\frac{t^4}{j_h^2}\mathrm{sin}^4\frac{\mathrm{\Theta }}{2}\mathrm{\Delta }^2}}`$ (11) $``$ $`(j_{af}+{\displaystyle \frac{1}{4}}{\displaystyle \frac{t^2}{j_h}})\mathrm{sin}^2{\displaystyle \frac{\mathrm{\Theta }}{2}}+{\displaystyle \frac{t^2}{j_h}}\mathrm{sin}^2{\displaystyle \frac{\mathrm{\Theta }}{2}}\mathrm{\Delta }^2`$ (12) where $`ϵ(k)=t(_{\alpha =1}^d\mathrm{cos}k_\alpha )/d`$ and d is the number of dimension. The integration runs over the reduced Brillouin zone. The phase diagram (Fig. 2) is obtained by minimizing the energy $`(\mathrm{\Delta },\mathrm{\Theta })`$. $`\mathrm{\Delta }`$ and $`\mathrm{\Theta }`$ are the order parameters for charge and magnetic orderings, respectively. $`\mathrm{\Delta }=0`$ and $`\mathrm{\Theta }=0`$ represents a full ferromagnetic (FM) phase, $`\mathrm{\Delta }=0`$ and $`\mathrm{\Theta }0`$ represents a canted ferromagnetic (CF) phase, $`\mathrm{\Delta }=1/2`$ and $`\mathrm{\Theta }=\pi `$ represents the Wigner lattice (WL), and $`\mathrm{\Delta }<1/2`$ and $`\mathrm{\Theta }0`$ represents a mixture of charge and spin density waves. A full ferromagnetic phase diagram appears at smaller $`t/j_h`$ and $`j_{af}`$, which indicates that the double exchange ferromagnetism is predominant. The Wigner lattice appears at a larger $`t/j_h`$ and $`j_{af}`$. The antiferromagnetic coupling originating from the virtual precess (a) and superexchange coupling of the localized spins can suppress the double exchange ferromagnetism completely. A canted ferromagnetic phase is between the two phases. At $`j_{af}=0`$, the transition from ferromagnetism to the Wigner lattice occurs at $`t^2/j_h=2𝑑kϵ(k)/(2\pi )^d`$ which equals to $`0.63662t`$ for $`d=1`$, $`0.405282t`$ for $`d=2`$, and $`0.336126t`$ for $`d=3`$. When the effective potential energy $`t/j_h`$ becomes to dominate over the kinetic energy, the ferromagnetic phase is unstable against the Wigner lattice. For a finite $`j_{af}`$, a smaller $`t/j_h`$ is required to form a Wigner lattice. However $`t/j_h`$ must be nonzero, even for a large $`j_{af}`$. In the double exchange model, i.e. $`j_h+\mathrm{}`$, we do not expect that the Wigner lattice could appear at low temperatures at $`x=1/2`$ unless a strong long-range Coulomb interaction is introduced. ## IV Ferrimagnetism and Wigner lattice We go back to Eq. (5) to discuss the magnetic properties of the ground state (or at zero temperature) in the case that the Wigner lattice is formed at $`x=1/2`$ ($`1x`$ is the density of electrons). The charge ordering in the manganite is an alternating Mn<sup>3+</sup> and Mn<sup>4+</sup> arrangement rather than a charge density modulation, which means $`n_i=1`$ or 0. A d-dimensional hyper-cubic lattice can be decomposed onto two sublattice $`𝒜`$ and $``$. In the charge ordering state, suppose that all electrons occupy the sublattice $`𝒜`$, then $$P_{ih}P_{js}^+=\{\begin{array}{cc}1,\hfill & \text{if }i\text{ and }j𝒜;\hfill \\ 0,\hfill & \text{otherwise.}\hfill \end{array}$$ (13) The first term in Eq.(5) must be suppressed completely as the Wigner lattice is a static real space pattern, i.e., the hopping processes are forbidden. 
In the case, the Hamiltonian is reduced to $$H_{AF}=J_{AF}^{}\underset{i,j𝒜}{}\left(\frac{𝐒_i}{S}\frac{\stackrel{~}{𝐒}_j}{S+\frac{1}{2}}1\right)$$ (14) where $`J_{AF}^{}=J_{AF}S^2+2St^2/J_H(2S+1)^2`$ and the summation runs over the nearest neighbor pairs. This is an antiferromagnetic Heisenberg model. The spin on the sublattice $`𝒜`$ is $`S+1/2`$ as the electrons on the sites form spin $`S+1/2`$ state with the localized spins, and the spin on the sublattice $``$ is $`S`$. According to Lieb-Mattis theorem, the ground state of Eq.(14) is unique apart from spin SU(2) ($`2S_{tot}+1`$)-fold degeneracy. The total spin of the ground state $`S_{tot}`$ is equal to the difference of the maximal total spins of two sublattices. In the case, $$S_{tot}=\frac{N_e}{2}$$ (15) which is also the maximal total spin of electrons ($`N_e`$ is the number of electrons). It seems to be that all electrons are saturated fully while the localized spins form a spin singlet state. Furthermore, it is shown rigorously that the ground state possesses antiferromagnetic long-range order as well as ferromagnetic one for any dimension. ## V Discussion and summary We wish to point out that, in the case that the Wigner lattice is formed, the magnetic structure established here is unlikely in full agreement with all experimental observations. The model discuss here is a simplified theoretical model which has neglected some effects, such as the orbital degeneracy of $`e_g`$ electrons, strong John-Teller effect and lattice distortion. Methodologically, we apply the projective perturbation approach to deal with the model. The strong electron-electron correlations has been successfully taken into account by the projection process. The perturbation process tells us that the effective Hamiltonian should be valid at small $`t/j_h`$, which requires a strong Hund coupling comparing with the hopping integral $`t`$. In practice, the parameters of the model for doped manganites are roughly estimated as $`U5.5eV`$, $`J_H0.76eV`$, $`t0.41eV`$, $`J_{AF}2.1meV.`$ Thus, $`U/J_HS4.82`$ and $`t/J_HS0.359`$. For these parameters, the Wigner lattice at low temperatures is stable in the phase diagram in Fig.2. Therefore, the superexchange process in Fig.1(a) should play an important role in driving electrons to form the Wigner lattice no matter whether the direct nearest neighbor Coulomb interaction is strong. It is worth mentioning that the direct Coulomb interaction will always favor to form the Wigner lattice. If the direct Coulomb interaction is also included in the electronic model, which is not much screened, the stability of Wigner lattice will be greatly enhanced. Note that the Coulomb interaction is independent of the magnetic structure, and should not be very sensitive to an external magnetic field. The effect of field-induced melting of the Wigner lattice suggests that the physical origin of the state may be closely related to the magnetic structure, which is an essential ingredient of the present theory. In the actual compounds, both the mechanisms should have important impact on the electronic behaviors. It is unlikely that only one of them is predominant. As for the mean field approximation, when it is sure that the instability of Wigner lattice occurs at low temperatures, it is an efficient and powerful tool to determine the phase diagram, although some other physical quantities, such as critical exponents, cannot be obtained accurately. 
Due to the strongly correlations of electrons, it is still lack of numerical results to verify the present theoretical prediction as this is the first time to discuss instability of the Wigner lattice in a model without nearest neighbor or long-range interactions. When the system is deviated from $`x=0.5`$, the superexchange interaction is still very important to determine the behaviors of electrons. Recently, it was observed that the charge stripes in (La,Ca)MnO<sub>3</sub> pair. However, the two pairing stripes of Mn<sup>3+</sup> ions are separated by a stripe of Mn<sup>4+</sup> ions. This fact suggests the nearest neighbor interaction should be very strong. Of course, for a comprehensive understanding of the phase diagram, including anisotropic properties of charge and magnetic orderings, we need to take other effects into account. The role of Hund’s rule coupling in the doped manganites has been emphasized since the double exchange mechanism was proposed. However, the rich phase diagrams in the doped manganites go beyond the picture. Our theory shows that the on-site Coulomb interaction also has an important impact on the physical properties of the system. In the model we investigate, the sign of the effective interaction in Eq.(10) depends on the ratio of j<sub>h</sub>/U. Repulsive or attractive interaction will lead to quite different physics. In one of our recent papers , we proposed a mechanism of phase separation based on the attractive interaction caused by the virtual process (a) in Fig. 1, and neglect the on-site interaction U. The phase separation can occur in the high and low doping regions. As the mechanism of the phase separation is completely opposite to the mechanism of the Wigner lattice we discuss in this paper, we have to address the issue which one occurs for the doped manganites. From the estimation of the model parameters for the actual compound, $`U/j_h4.82`$. Thus, the effective interaction should be repulsive, not attractive. From this sense, the phase separation we predicted in Ref. could not occur in doped manganites. In fact, both the phase separation and the Wigner lattice were observed in the family of samples with different dopings. For example the phase separation was observed in La<sub>1-x</sub>Cu<sub>x</sub>MnO<sub>3</sub> with x=0.05 and 0.08. It is worth pointing out that the electronic model is a simplified model for doped manganites since the degeneracy of e<sub>g</sub> electrons and the Jahn-Teller effect have been neglected. The importance of the orbital degeneracy of e<sub>g</sub> electron has been extensively discussed, especially for the ferromagnetism near $`x=0`$. If we take into account the orbital degeneracy, there may exist an superexchange virtual process in the ferromagnetic or A-type antiferromagnetic background, in which the superexchange coupling between different orbits instead of the spin indices in Fig. 1 could produce an attractive interaction as we predicted in Ref. . The mechanism for phase separation may still be responsible for the experimental observation. The investigation along this direction is in progress. Before ending this paper, we would like to address the stability of the Wigner lattice with respect to the transfer $`t`$. Some experimental analysis suggested that a relatively small $`t`$ would favor to form the Wigner lattice, which seems to be unlikely in contradiction with the phase diagram in Fig. 2. In the present theory, the Wigner lattice occurs in a moderate value of $`t`$. 
On one hand, a large $`t`$ ($`\gg j_h`$), of course, will lead to the instability of the Wigner lattice and destroy the double exchange ferromagnetism. In that case, a paramagnetic phase should be favored at low temperatures, and the perturbation technique used in this paper is no longer valid. So the region of the Wigner lattice in Fig. 2 cannot be naively extended to the large $`t`$ case. On the other hand, when $`t`$ becomes very small compared with $`j_h`$, the Wigner lattice should also be unstable, since a small $`t`$ enhances the ratio $`j_h/t`$ and a larger ratio is favorable to double exchange ferromagnetism. If the antiferromagnetism from $`t_{2g}`$ electrons could win over the double exchange ferromagnetism at $`x=0.5`$, it would suppress ferromagnetism over the whole range of $`x`$. The effective transfer $`t`$cos$`(\mathrm{\Theta }/2)`$ is determined by either $`t`$ or $`\mathrm{\Theta }`$, the angle between the two spins. The Wigner lattice is also accompanied by the strong antiferromagnetic correlation. The field-induced melting effect indicates that the Wigner lattice is unstable in the ferromagnetic background, which also indicates the important role of the antiferromagnetic correlation in stabilizing the Wigner lattice. A smaller $`j_{af}`$ will reduce the angle $`\mathrm{\Theta }`$ and should also lead to the instability of the Wigner lattice. Thus, a small $`t`$ does not always favor the formation of the Wigner lattice. In short, we derived an effective Hamiltonian for an extended Kondo lattice model, based on which a physical mechanism for charge ordering in half-doped manganites is naturally put forward. ###### Acknowledgements. This work was supported by a CRCG research grant at the University of Hong Kong.
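As a numerical aside (not part of the original analysis), the $`j_{af}=0`$ phase boundary quoted in Section III, $`t^2/j_h=2\int dk\,ϵ(k)/(2\pi )^d`$ with the integral over the reduced Brillouin zone, can be checked with a few lines of code. Since the reduced zone is the half of the full zone where $`ϵ(k)>0`$, the integral equals the full-zone average of $`|ϵ(k)|`$, which the sketch below (Python with NumPy; our own illustration) estimates by Monte Carlo sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

def critical_ratio(d, n_samples=2_000_000):
    """Estimate 2*Int_RBZ d^dk/(2pi)^d eps(k), in units of t, for eps(k) = t*(sum_a cos k_a)/d.
    The reduced Brillouin zone is the half of the full zone where eps(k) > 0, so the result
    equals the full-zone average of |eps(k)|."""
    k = rng.uniform(-np.pi, np.pi, size=(n_samples, d))
    return np.abs(np.cos(k).sum(axis=1)).mean() / d

for d, quoted in [(1, 0.63662), (2, 0.405282), (3, 0.336126)]:
    print(f"d = {d}: Monte Carlo {critical_ratio(d):.4f} t  (quoted: {quoted} t)")
```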
# The Spectrum of GRB 930131 (“Superbowl Burst”) from 20 keV to 200 MeV ## 1 INTRODUCTION Broad-band spectra of gamma-ray bursts (GRBs) pose a difficult challenge to any theoretical model trying to explain them. Looking only at a limited range of energy, as, for example, each of the different instruments on board the Gamma Ray Observatory (GRO) does individually, results in a featureless power law perhaps with some curvature. However, a broad-band spectrum, ranging over many decades in energy, typically contains interesting features like peaks, curvature and breaks. Such features will be diagnostic of the physical processes in the burst fireball and the spectra can be used to directly test models of burst emission. Only a few broad-band spectra have been produced (Schaefer et al. 1998; Greiner et al. 1995; Hurley et al. 1994), as only bright bursts detected by multiple instruments on board the GRO have a wide enough range of available data. The brightest such burst is GRB 930131, which reached a peak flux of $`105\text{ ph}\text{ s}^{-1}\text{ cm}^{-2}`$ (Meegan et al. 1996). This burst has BATSE trigger number 2151 and has been called the “Superbowl Burst” after its time of occurrence. EGRET and COMPTEL spectra have already appeared in the literature (Sommer et al. 1994; Ryan et al. 1994), but no BATSE spectrum has been presented due to severe deadtime problems. This paper is organized in the following way. In §2, we provide general methods for combining spectra obtained by the same instrument during different time intervals (2.1.), as well as for combining spectra taken by different instruments covering the same time interval (2.2.). These methods can be used in many common GRB applications, provided the necessary requirements are met. In §3, we carry out the construction of the broad-band spectrum of GRB 930131 from 20 keV to 200 MeV. First, we describe how the individual (BATSE, COMPTEL, and EGRET) spectra have been obtained (3.1.). Then, we argue why these independently reduced spectra can be combined with the method of §2, where we point out the non-obliging nature of this procedure in the present case. After presenting the resulting spectrum (3.2.), we compare this to theoretical models of the GRB emission mechanism (3.3.). Subsequently, we discuss evidence for spectral evolution (3.4.). Finally, §4 summarizes the spectral properties of this remarkable burst. ## 2 COMBINING SPECTRA One often faces the problem of combining individual spectra into either a time-averaged or an instrument-averaged spectrum. The second case arises in cross-calibrating spectral information from instruments that are sensitive in different energy ranges. In this section, it is assumed that the observed count spectra have already been reduced into photon spectra. In the following, we describe the method of combining spectra and give the relevant formulae, which are then applied to the case of GRB 930131 in Section 3. ### 2.1 Combining Across Time Suppose the time over which one wants to average is divided up into smaller time intervals $`k`$ with respective livetimes $`\tau _k`$. For each time interval $`k`$ and energy bin $`i`$ the photon flux (in units of photons/area/energy/time) is $`\left(\frac{dn}{de}\right)_{ik}`$ with standard deviation $`\sigma _{ik}`$. Then, constructing the time-averaged spectrum is straightforward.
With the total livetime given by $`\tau _{total}=\sum _k\tau _k`$, the time-averaged photon flux in energy bin $`i`$ is $$\left(\frac{dN}{dE}\right)_i=\tau _{total}^{-1}\sum _k\left(\frac{dn}{de}\right)_{ik}\tau _k\text{ ,}$$ (1) and the resulting standard deviation is $$\sigma _i=\tau _{total}^{-1}\sqrt{\sum _k\left(\sigma _{ik}\tau _k\right)^2}\text{ .}$$ (2) ### 2.2 Combining Across Different Instruments Spectra from different instruments can be combined just as can spectra from multiple detectors on the same instrument. We here assume that the combination process is robust, i.e., that the resulting spectrum is not greatly obliging (cf., Section 3.2.). This has to be justified on a case by case basis. Another requirement is that either the input spectra are for identical time intervals, or they cover the entire burst. This combination can be described as a four-step process: Step A: The spectra from different instruments are divided into energy bins in different ways. Therefore, as a first step, all the bin boundaries ($`E^{low}`$ and $`E^{high}`$) from all the instruments are put into increasing order and then used to define subbins. Assume that after the ordering, the following sequence arises: $`\cdots <E_{k-1}<E_k<E_{k+1}<\cdots `$ Then define the $`k`$th subbin to cover an energy interval between $`E_k`$ and $`E_{k+1}`$. Figure 1 illustrates this procedure for the case of two instruments. Step B: It is preferable to conduct the combining in $`\nu F_\nu `$-space, where $`\nu F_\nu \equiv \left(\frac{dN}{dE}\right)E^2`$. Then the spectrum is roughly constant over a given energy bin, as opposed to the usual steep decline in ordinary $`\frac{dN}{dE}`$-space. Now, for energy bin $`i`$ of instrument $`m`$, having lower and higher energies $`E_{mi}^{low}`$ and $`E_{mi}^{high}`$, respectively, define the energy flux per logarithmic energy interval $$\left(\frac{d\phi }{dE}\right)_{mi}\equiv \left(\frac{dN}{dE}\right)_{mi}\left(E_{mi}^{mid}\right)^2\pm \sigma _{mi}\text{ ,}$$ (3) where $`E_{mi}^{mid}=\sqrt{E_{mi}^{low}E_{mi}^{high}}`$ and $`\sigma _{mi}`$ is the uncertainty of $`\left(\frac{d\phi }{dE}\right)_{mi}`$. Our procedure presumes that $`\left(\frac{d\phi }{dE}\right)_{mi}`$ changes little across each energy bin, as is the case for energy bins that are small compared to either the detector resolution or the structure in the spectrum. This covers virtually all GRB applications, although a simple interpolation scheme might be appropriate for a particularly steep spectrum observed with very broad bins. Step C: Now, we want to cross-combine the spectra of different instruments. In constructing the spectrum for subbin $`k`$, we first determine whether a given instrument $`m`$ has an energy bin $`i`$ overlapping the subbin. If this is the case, we set $$\left(\frac{d\phi }{dE}\right)_{mk}=\left(\frac{d\phi }{dE}\right)_{mi}\text{ and }\sigma _{mk}=\sigma _{mi}\text{ .}$$ (4) Figure 1 shows the case of two instruments having overlapping energy bins with subbin $`k`$. The energy flux of the cross-combined spectrum is the weighted average of all contributing spectra: $$\left(\frac{d\varphi }{dE}\right)_k=\sigma _k^2\sum _m\frac{1}{\sigma _{mk}^2}\left(\frac{d\phi }{dE}\right)_{mk}\text{ ,}$$ (5) where $$\sigma _k=\left(\sum _m\sigma _{mk}^{-2}\right)^{-1/2}\text{ .}$$ (6) Step D: As a last step, put together the subbins into larger bins of width appropriate for the spectral resolution and features.
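The bookkeeping of Eqs. (1)-(6), together with the rebinning of Step D whose explicit weights are spelled out next, is simple enough to prototype directly. The following sketch is our own illustration in Python/NumPy; the function names, the data layout, and the handling of empty subbins are assumptions of this sketch and not the pipeline actually applied to GRB 930131.

```python
import numpy as np

def time_average(flux, sigma, livetime):
    """Eqs. (1)-(2): livetime-weighted average of the spectra of one instrument.
    flux and sigma have shape (n_intervals, n_bins); livetime has shape (n_intervals,)."""
    flux = np.asarray(flux, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    tau = np.asarray(livetime, dtype=float)[:, None]
    tau_tot = tau.sum()
    mean = (flux * tau).sum(axis=0) / tau_tot
    err = np.sqrt(((sigma * tau) ** 2).sum(axis=0)) / tau_tot
    return mean, err

def make_subbins(*edge_sets):
    """Step A: merge the bin edges of all instruments into one ordered set of subbin edges."""
    return np.unique(np.concatenate(edge_sets))

def to_nuFnu(dNdE, sigma, e_lo, e_hi):
    """Step B: convert dN/dE into nu*F_nu = (dN/dE)*E_mid^2, with E_mid the geometric mean (Eq. 3)."""
    e_mid = np.sqrt(e_lo * e_hi)
    return dNdE * e_mid ** 2, sigma * e_mid ** 2

def combine_instruments(subbin_edges, instruments):
    """Step C: inverse-variance weighted average (Eqs. 5-6) of all instruments covering each subbin.
    Each entry of 'instruments' is a tuple (e_lo, e_hi, nuFnu, sigma) of per-bin NumPy arrays."""
    centers = np.sqrt(subbin_edges[:-1] * subbin_edges[1:])
    flux, err = np.zeros(centers.size), np.zeros(centers.size)
    for j, c in enumerate(centers):
        wsum, fsum = 0.0, 0.0
        for e_lo, e_hi, f, s in instruments:
            hit = np.where((e_lo <= c) & (c < e_hi))[0]   # Eq. (4): adopt the overlapping bin
            if hit.size:
                w = 1.0 / s[hit[0]] ** 2
                wsum += w
                fsum += w * f[hit[0]]
        if wsum > 0.0:
            flux[j] = fsum / wsum
            err[j] = wsum ** -0.5
    return centers, flux, err

def rebin(sub_edges, sub_flux, sub_err, out_edges):
    """Step D (anticipating the explicit weights given next): each subbin enters an output
    bin with weight equal to the fraction of the output bin that it covers."""
    out_flux = np.zeros(len(out_edges) - 1)
    out_err = np.zeros_like(out_flux)
    for l in range(out_flux.size):
        lo, hi = out_edges[l], out_edges[l + 1]
        overlap = np.clip(np.minimum(sub_edges[1:], hi) - np.maximum(sub_edges[:-1], lo), 0.0, None)
        w = overlap / (hi - lo)
        out_flux[l] = np.sum(w * sub_flux)
        out_err[l] = np.sqrt(np.sum((w * sub_err) ** 2))
    return out_flux, out_err
```

Subbins covered by no instrument are simply left empty in this sketch; a real pipeline would flag them and exclude them from subsequent fits.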
Rebinning, e.g., two subbins $`k`$ and $`k+1`$ into a larger bin $`l`$ with boundaries $`E_l^{low}`$ and $`E_l^{high}`$ is accomplished by the following: $$\left(\frac{d\mathrm{\Phi }}{dE}\right)_l=w_k\left(\frac{d\varphi }{dE}\right)_k+w_{k+1}\left(\frac{d\varphi }{dE}\right)_{k+1}\text{ ,}$$ (7) where one has for the respective weights $$w_k=\frac{E_{k+1}E_l^{low}}{E_l^{high}E_l^{low}}\text{ ,}$$ (8) and $$w_{k+1}=\frac{E_l^{high}E_{k+1}}{E_l^{high}E_l^{low}}\text{ .}$$ (9) The resulting standard deviation is $$\sigma _l^2=w_k^2\sigma _k^2+w_{k+1}^2\sigma _{k+1}^2\text{ .}$$ (10) If the output bin covers more than two subbins, then equations (7)-(10) can be easily generalized or used repeatedly. ## 3 THE SPECTRUM OF GRB 930131 ### 3.1 The Individual Spectra For all 3 instruments (BATSE, EGRET, COMPTEL), their photon spectra have been obtained by the traditional forward-folding technique (Loredo & Epstein 1989). This technique assumes a variety of spectral models $`M`$, and convolves them with the respective detector response matrix (DRM), symbolically $`C_{model}=\text{DRM}M`$, where $`C_{model}`$ is the count spectrum predicted by the model. The parameters of the model are then adjusted to obtain the best fit to the observed count spectrum, $`C_{obs}=\text{DRM}P_{true}`$, where $`P_{true}`$ is the true (photon) spectrum of the source. Alternatively, a model-independent inverse technique could have been adopted, where $`P_{true}=\text{DRM}^1C_{obs}`$. Attempts at doing so have proven unconvincing, and the nearly universal practice in gamma-ray astronomy is to use forward-folding techniques. One exception is the direct inversion method of Pendleton et al. (1996), which has only been applied to low resolution (4-channel) data and introduces considerable additional error (10-15%). #### 3.1.1 BATSE Spectrum The “Superbowl Burst” suffers from severe deadtime effects, which is the reason why the original discovery paper (Kouveliotou et al. 1994) does not present a spectrum for the BATSE energy range. For this bright burst, most of the flux arrives in the first 0.06 seconds, a situation which saturates the BATSE Large Area Detectors (LADs), whereas the smaller but thicker Spectroscopy Detectors (SDs) can reliably record the intense photon flux. In constructing our spectrum, we have selected the two burst-facing Spectroscopy Detectors (SD 4 and 5), for which there are available the well suited STTE-data (SD Time-Tagged Events), which cover the first $``$ 1.5 s of the burst and which have a time resolution of $`128\text{ }\mu `$s. Therefore, we can correct for the deadtime effects by subdividing the total time into 53 individual spectra with a duration of as short as a few ms around the first, intense peak. For each time interval, the photon spectrum is obtained by following the procedure described in Schaefer et al. (1994). In carrying out the forward-folding, we assume a single power-law spectral model. Then, by applying the methods of Section 2.1., we constructed time-averaged spectra for SD 4 and 5 , which were then in turn combined (as described in Section 2.2.) to give the overall spectrum for the BATSE energy-range (21 keV to 1.18 MeV, above which the flux-errors exceed 100%). #### 3.1.2 COMPTEL And EGRET Spectra The COMPTEL and EGRET spectra have previously been published (Ryan et al. 1994, and Sommer et al. 1994, respectively) and we refer the reader to these papers for details. 
We have chosen to work with the spectrum reported by the EGRET Total Absorption Shower Counter (TASC), since the EGRET spark chamber is too severely affected by deadtime effects. The TASC spectrum covers an energy range from 1 MeV to 180 MeV. The overlap region between BATSE and TASC is nicely covered by the COMPTEL instrument, where the COMPTEL Telescope spectrum covers the range from 0.75 Mev to 30 MeV. Both spectra have been obtained by the forward-folding technique with a power-law model, and are corrected for deadtime effects. ### 3.2 The Combined Spectrum To construct the combined spectrum with the method described in Section 2.2., we first have to ascertain the robustness of this procedure. It is well known (Fenimore et al. 1983) that the resulting spectral shape can possibly depend sensitively on the details of the fitting technique (i.e., that the spectra might be “obliging”). In principle, it could make a big difference whether the low- and high-energy parts, covered by different instruments, are first unfolded separately and only then combined together, or whether the unfolding is done simultaneously to all instruments. The physical reason for this is that high-energy photons might masquerade as low-energy ones, and that, consequently, the low-energy part of the spectrum cannot be accurately unfolded independently of the high-energy part. For the present case, however, this problem does not occur. It has been convincingly shown that the BATSE Spectroscopy Detectors are non-obliging (Schaefer et al. 1994; cf., their Figures 11 and 52). This is primarily due to their thickness, which largely minimizes photon energies being underreported. The TASC and COMPTEL spectra, on the other hand, are not affected by the lower energy BATSE range. Finally, treating the COMPTEL and TASC spectra independently of each other is rendered possible by the fact that the model fitting leads to almost identical results ($`\frac{dN}{dE}E^2`$). We are therefore justified in combining the independently obtained spectra from the 3 GRO instruments (BATSE-EGRET-COMPTEL) into the overall, broad-band spectrum of GRB 930131. This combination is carried out with the method of section 2.2., where we have been careful to construct our BATSE spectrum such that it exactly matches the time coverage of the EGRET TASC instrument, and approximately that of COMPTEL. To evaluate how well the instruments agree in the mutual overlap region around 1 MeV, we compare the fluxes at 1 MeV for the 3 instruments (in units of $`10^3`$photons cm<sup>-2</sup> sec<sup>-1</sup> keV<sup>-1</sup>): BATSE 2$`\pm `$2, COMPTEL 8$`\pm `$3, and TASC $`2\pm `$0.5. The agreement between BATSE and TASC is good, although the BATSE errors approach 100% at these high energies. The COMPTEL flux is somewhat high, but due to its uncertainties it does not contribute significantly to the weighted average of the final, combined spectrum. Table 1 and Figure 2 present the $`\nu F_\nu `$ ($`\left(\frac{dN}{dE}\right)E^2`$) spectrum in units of (photons s<sup>-1</sup> cm<sup>-2</sup> keV<sup>-1</sup>)$`(E^{mid}/100\text{ keV})^2`$. The resulting spectrum is remarkably flat, as compared to other published broad-band spectra, which have a much more peaked appearance (cf., Schaefer et al. 1998). In the following section we ask, whether this rather unusual spectral shape is consistent with the model of shocked synchrotron emission, which successfully fits the characteristics of other broad-band GRB spectra. 
Subsequently, we investigate whether the flat spectrum of GRB 930131 can be understood as a result of spectral evolution. ### 3.3 Model-Fits We fit our combined spectrum to the shocked synchrotron model of Tavani (1996a, b), which gives the following analytical expression for the energy flux: $$\psi _{model}\left(\frac{d\mathrm{\Phi }}{dE}\right)=\nu F_\nu =C\nu \left[I_1+\frac{1}{e}I_2\right]$$ (11) $$I_1=_0^1y^2e^yF\left(\frac{\nu }{\nu _c^{}y^2}\right)\text{d}y$$ (12) $$I_2=_1^{\mathrm{}}y^\delta F\left(\frac{\nu }{\nu _c^{}y^2}\right)\text{d}y\text{ ,}$$ (13) where $`F(x)x_x^{\mathrm{}}\text{K}_{\frac{5}{3}}(w)\text{d}w`$ is the usual synchrotron spectral function with $`\text{K}_{\frac{5}{3}}`$ being the modified Bessel-function of order $`\frac{5}{3}`$ and $`e=2.718\mathrm{}`$. The normalization constant $`C`$ has units of specific flux. Equations (12) and (13) are summing up the synchrotron emission from a Maxwellian distribution of electron energies which breaks to a power law at high energies. Here, $`\delta `$ is the index of the supra-thermal power-law distribution of particles, resulting from relativistic shock-acceleration. The critical frequency $`\nu _c^{}`$ describes where most of the synchrotron power is emitted. We apply the Levenberg-Marquardt method of non-linear $`\chi ^2`$ fitting (cf., Numerical Recipes, Press et al. 1992) to minimize $$\chi ^2=\underset{i=1}{\overset{N}{}}\left(\frac{\psi _i\psi _{model}(\nu _i^{mid};C,\delta ,\nu _c^{})}{\sigma _i}\right)^2\text{ .}$$ (14) Our observed spectrum with flux $`\psi _i=\left(\frac{d\mathrm{\Phi }}{dE}\right)_i`$ and uncertainty $`\sigma _i`$ contains $`N=37`$ data points. Our best-fit parameters are: $$C=104\pm 8\text{ erg cm}\text{-2}\text{ sec}\text{-1}\text{ Hz}\text{-1}$$ (15) $$\delta =3.3\pm 0.1$$ (16) $$h\nu _c^{}=98\pm 14\text{ keV}$$ (17) The fit has a chi-squared of $`\chi ^2=38`$ with 34 degrees of freedom. Therefore, we can conclude that the spectrum of GRB 930131 is consistent with the Tavani-model. At low energies, the spectrum is asymptotically approaching $`\nu F_\nu \nu ^{4/3}`$, as is usual for burst spectra (Schaefer et al. 1998). This behavior is predicted by optically thin synchrotron theory (Katz 1994). ### 3.4 Spectral Evolution All of the published broad-band spectra (Schaefer et al. 1998; Greiner et al. 1995; Hurley et al. 1994) are strongly peaked and fall off steeply above the peak energy. GRB 930131, on the other hand, has a spectrum which remains constant (within a factor of 4) over four orders of magnitude in energy. Can this behavior be understood as the result of a superposition of many spectra, which individually show the usual, strongly peaked shape and whose peak energy evolves with time? For the BATSE energy range, the number of received photons is sufficiently large to allow the construction of time-resolved spectra, whereas for COMPTEL and EGRET, the dearth of photons renders this detailed treatment impossible. In Figure 3, we present the resulting BATSE spectra for 4 different times. The lightcurve of GRB 930131, as amply documented in the literature (Kouveliotou et al. 1994; Ryan et al. 1994; Sommer et al. 1994), shows a sharp, intense first pulse, lasting for $``$ 0.06 s after the BATSE trigger, followed by a second, less intense and less sharp pulse, lasting from $``$ 0.75 s to $``$ 1.00 s after the trigger. In between, the “interpulse” region of Figure 3, there is significant yet faint flux. 
Finally, there is again relatively little flux subsequently to the second pulse (lasting for another 50 s). In Figure 3, the first pulse is further subdivided into the spectrum for the time before the maximum flux is reached (0.00 - 0.03 s) and that for the time after the maximum (0.03 - 0.06 s). Since these time-resolved spectra cover only the low-energy range, a meaningful fit to the Tavani-model (cf., Section 3.3.) cannot be done, since the value of the power-law extension $`\delta `$ and the location of the peak energy $`h\nu _c^{}`$ are mostly constrained by the high-energy regime. Both the spectra for the first and second pulses are consistent, though, with the spectral fit (besides the normalization $`C`$) obtained for the overall spectrum (cf., Figure 2). Consequently, there is no evidence that the unusual flat morphology of the “Superbowl-Burst” spectrum is caused by the superposition of individually strongly-peaked, time-variable spectra. The spectrum between pulses is inconsistent with the average burst spectral shape. The observed $`\nu F_\nu `$ is close to $`\nu ^0`$ from 21 keV to 1 MeV with no significant curvature or maximum. The extreme brightness of GRB 930131 allows for this unique measure of the interpulse spectrum. ## 4 SUMMARY AND CONCLUSIONS After having given the relevant formulae for combining individual spectra, we applied these methods to construct the broad-band spectrum of GRB 930131. With appropriate deadtime corrections we first obtained the spectrum for the BATSE energy range, which we then combine with the already published spectra from the COMPTEL and EGRET TASC instruments. Broad-band spectra are fortunate occurences (multiple instruments on board the GRO have to see a bright burst), available for only a handful of bursts. Within the general framework of an expanding relativistic fireball, impacting on a surrounding medium (Mészáros & Rees 1993), an attractive model for the production of the $`\gamma `$-ray photons is synchrotron emission from a shocked and highly magnetized plasma (Tavani 1996a, b). This model is successful in fitting the strongly peaked spectral shapes (in $`\nu F_\nu `$-space) of the GRBs for which broad-band spectra have been obtained. Since our resulting spectrum is so unusually flat, it poses an interesting challenge to the Tavani-model. As described in Section 3.3., the model does fit well, although with a value for the power-law component, which lies at the extreme end of the typically encountered range, $`3<\delta <6`$. In the BATSE energy-range, we were able to construct time-resolved spectra, which show no evidence for significant evolution. We thank D. Palmer for his suggestions concerning the severe deadtime problem in the BATSE data, as well as M. Kippen and E. Schneid for their helpful discussions.
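As a numerical footnote to §3.3: the Tavani spectral shape can be evaluated directly from Eqs. (11)-(13), reading the two electron-distribution factors in the integrands as $`y^2e^{-y}`$ (Maxwellian) and $`y^{-\delta }`$ (supra-thermal power law), with $`F(x)`$ the synchrotron kernel defined above. The sketch below is our own illustration (SciPy quadrature; the function names and integration cutoffs are our choices); it returns the model $`\nu F_\nu `$ up to the normalization $`C`$ and could be handed to any non-linear least-squares routine such as the Levenberg-Marquardt fit used in §3.3.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def F_sync(x):
    """Synchrotron kernel F(x) = x * Integral_x^infinity K_{5/3}(w) dw."""
    return x * quad(lambda w: kv(5.0 / 3.0, w), x, np.inf)[0]

def tavani_nuFnu(nu, nu_c, delta, C=1.0):
    """Shocked-synchrotron shape of Eqs. (11)-(13): a Maxwellian electron population (I1)
    plus a supra-thermal power-law tail of index delta (I2), each radiating the kernel F."""
    I1 = quad(lambda y: y ** 2 * np.exp(-y) * F_sync(nu / (nu_c * y ** 2)), 1e-6, 1.0)[0]
    # the y^-delta tail is negligible beyond y ~ 100 for delta ~ 3, so truncate there
    I2 = quad(lambda y: y ** (-delta) * F_sync(nu / (nu_c * y ** 2)), 1.0, 100.0)[0]
    return C * nu * (I1 + I2 / np.e)

# best-fit shape (delta = 3.3, h*nu_c ~ 98 keV); only the ratio nu/nu_c matters here,
# the absolute frequency scale being absorbed into the normalization C
for E_keV in [25.0, 100.0, 400.0, 1600.0, 6400.0]:
    print(E_keV, tavani_nuFnu(E_keV, 98.0, 3.3))
```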
no-problem/9904/gr-qc9904041.html
ar5iv
text
# The Shapiro Conjecture: Prompt or Delayed Collapse in the head-on collision of neutron stars? ## Introduction. The study of the coalescence of neutron stars (NSs) is important for gravitational wave astronomy and high energy astronomy. However, at present we lack even a qualitative understanding of the process. One issue is the prompt vs. delayed collapse problem. While we expect that two $`1.4`$ $`M_{}`$ NSs when merged will eventually collapse to form a black hole, the collapse could be delayed by fragmentation/mass shredding, angular momentum hang-up, and/or shock heating. The time scale of the collapse has important implications on the gravitational wave signals to be detected by LIGO . We focus on the issue of prompt vs. delayed collapse in this paper. The difficulty of getting an answer to this aspect of NS coalescence is very much the same as the full coalescence problem. Namely, we need to solve the full Einstein equations coupled to the general relativistic hydrodynamic (GR-hydro) equations. Recently, Shapiro put up an argument suggesting that one may be able to answer this question without numerical simulations, at least for the case of head-on collisions. The “Shapiro conjecture” goes as follows: Given the conditions: (I) that the two NSs are colliding head-on after falling in from infinity, and (II) the NSs are described by a polytropic equation of state (EOS) $`P=K\rho ^\mathrm{\Gamma }`$ (with $`K`$ a function of the entropy and the polytropic index $`\mathrm{\Gamma }`$ remaining constant throughout the collision process), it is conjectured that no prompt collapse can occur for an arbitrary $`\mathrm{\Gamma }`$ and an arbitrary initial $`K`$. The basic argument is that the potential energy when converted to thermal energy by shock heating is always enough to support the merged object, until neutrino cooling sets in. The argument based on conservation is appealing, and provides useful understanding for a range of the NS coalescence problems. However there is a major assumption for the argument to go through, namely, the collision process can be approximated by a quasi-equilibrium process, in two senses: (A) The coalescing matter can be described by one single EOS everywhere ($`K`$ is a function of time but not space), and (B) whether it collapses or not is determined by hydro-static equilibrium conditions, i.e., whether a stable equilibrium configuration exists or not. This quasi-equilibrium assumption is not self-evident for the head-on collision of heavier NSs. It could happen that the coalesced object collapses before it can thermalize, or the collision process is so dynamic that even though a stable equilibrium state exists, it is not attained in the collapse process. The final outcome depends on the various time scales in the problem. ## Time Scale Considerations. We examine this assumption of “quasi-equilibrium” and see if it can be justified under the conditions of (I) and (II) above. We note that the collision process involves many time scales, and there are at least six of them relevant for our present consideration: 1. The time scale associated with the infall velocity: $`t_i=R/V_i`$, R=the radius of the NS, $`V_i`$= (infall velocity at the point of contact). 2. The time scale associated with the local sound velocity: $`t_s=R/V_s`$ , $`V_s`$= (sound velocity). 3. The time scale associated with the velocity of the shock (velocity in the rest frame of fluid): $`t_{sh}=R/V_{sh}`$, $`V_{sh}`$=(shock velocity). 4. 
The time scale for the merged object to thermalize, in the sense of being describable by one single EOS (same $`K`$ everywhere): $`t_e`$. 5. The time scale of neutrino cooling $`t_n`$. 6. The time scale of the gravitational collapse $`t_c`$. Some comments of these time scales are in order. We focus on the case of two 1.4 $`M_{}`$ NSs. We model them with a polytropic EOS $`P=K\rho ^\mathrm{\Gamma }`$ with a polytropic index of $`\mathrm{\Gamma }=2`$. The initial $`K`$ value of the two stars is taken to be $`1.16\times 10^5\frac{cm^5}{gs^2}`$. (Maximum stable mass of these values of $`K`$ and $`\mathrm{\Gamma }`$ is 1.46 $`M_{}`$.) We note that the argument in is applicable to all polytropic models. For this model, $`V_i`$ is (somewhat larger than) the Newtonian value $`0.28c`$, as can be estimated by $`\sqrt{GM/(2R)}`$; the diameter of the NSs is about $`2R=26km`$ (the isotropic coordinate radius of this NS is $`9.3km`$, the proper radius is $`R=13km`$). Hence the time scale associated with the infall velocity $`t_i`$ is about (smaller than) $`0.16ms`$. To estimate the second time scale $`t_s`$, note that the sound velocity $`V_s`$ depends strongly on the dynamical process and the region under consideration. For the model mentioned above, the initial central rest mass density of the NSs is about $`1.5\times 10^{15}g/cm^3`$; $`V_s`$ there is about 0.5 $`c`$. With the density elsewhere initially lower than this value, but higher in some period in the central region of the collision, $`V_s`$ varies but is roughly $`0.5c`$. Thus, $`t_s`$ is roughly $`0.1ms`$. To estimate the third time scale $`t_{sh}`$ requires an estimation of the velocity of the shock $`V_{sh}`$ produced in the collision. The locally measured proper velocity of the shock $`V_{sh}`$ is higher than, but of the same order of magnitude of, the sound speed $`V_s`$ at a fraction of $`c`$ in the head-on collision case. Hence $`t_{sh}`$ is also of order $`0.1ms`$. These three time scales determine the time scale 4 which is central to our discussion. In near static situation, or when the bulk velocity of matter is small ($`V_i<<V_s`$ and $`V_i<<V_{sh}`$), $`t_e`$ can be taken to be a few times $`t_s`$ or $`t_{sh}`$. On the other hand, its value in a highly dynamic situation with $`V_i`$ comparable to $`V_s`$ and $`V_{sh}`$ is an important issue to be discussed below. The fifth time scale $`t_n`$ governs the final settling down of the merged object after $`t_e`$. $`t_n`$ is of the order of seconds, orders of magnitude longer than the first four time scales. The gravitational collapse time scale $`t_c`$ is in turn controlled by these time scales 1-5. It can be as short as $`t_i`$, or as long as $`t_n`$. For the collision of two 1.4 $`M_{}`$ NSs, the merged object would have to collapse after $`t_c`$, if not before, for most of the reasonable EOS. We call collapse that occurs on the first four time scales prompt collapse, and collapse that occurs on a longer time scale, like $`t_n`$, delayed collapse. For more general coalescence processes, there can be other time scales involved, e.g., the time scale of angular momentum transfer $`t_a`$, and the time scale of gravitational wave emission $`t_g`$. However, for the case of head-on collision with the stars falling in from infinity, we expect strong shock heating causing $`t_n`$ to be shorter than $`t_g`$. We do not have to consider $`t_a`$ and $`t_g`$ in our present consideration. 
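For orientation, the numbers quoted above follow from a few lines of arithmetic. The sketch below is only an order-of-magnitude check, using the Newtonian estimate $`V_i\approx \sqrt{GM/(2R)}`$ and the quoted proper radius of 13 km; it is not part of the simulations described below.

```python
import numpy as np

# order-of-magnitude estimates for a 1.4 Msun neutron star (cgs units)
G, c, Msun = 6.674e-8, 2.998e10, 1.989e33
M = 1.4 * Msun
R = 13.0e5                       # proper radius ~ 13 km

V_i = np.sqrt(G * M / (2 * R))   # Newtonian infall speed at the point of contact
t_i = R / V_i                    # infall time scale
t_s = R / (0.5 * c)              # sound-crossing time, V_s ~ 0.5 c near the center

print(f"V_i = {V_i / c:.2f} c")        # ~ 0.28 c
print(f"t_i = {t_i * 1e3:.2f} ms")     # ~ 0.15 ms, cf. the 0.16 ms quoted above
print(f"t_s = {t_s * 1e3:.2f} ms")     # ~ 0.09 ms, i.e. roughly 0.1 ms
```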
In Shapiro’s argument, the time scale 4, $`t_e`$, is implicitly taken to be the shortest time scale in the problem, so that the system can be described by a single EOS at any instant in the collision process. The above discussion suggests that this may not be true for the two 1.4 $`M_{}`$ NS collision case. Indeed, the relations between the time scales 1, 2 and 3 strongly affect $`t_e`$. With $`t_i`$ comparable to $`t_s`$ and $`t_{sh}`$, dynamic effects are important, and $`t_e`$ can be longer than $`t_i`$. In particular, with matter falling in at high speed along the axis of the collision, the speed of the shock wave in that direction would be significantly reduced, until after $`t_i`$, delaying “thermalization” of the coalescing objects. For situations like this, arguments based on a uniform EOS throughout the coalescing object cannot be justified. Indeed, when the infalling time scale $`t_i`$ is comparable to the other time scales in the process, it could happen that even if a hydrostatic stable equilibrium configuration exist, the dynamics of the system might not lead to that configuration and the time scale of collapse could be as short as $`t_i`$. Another way of looking at the problem is to imagine we tie the two stars on strings and lower them towards one another in a quasi-stationary fashion while depositing the potential energy extracted back to the two stars. For this case Shapiro’s argument would be applicable. However, for a NS collision with the time scales discussed above, one would have to examine the dynamics of the infall to determine whether a prompt or delayed collapse would occur. In short, as both a thermally supported merged object and a black hole can have the same rest mass and total energy, arguments based solely on conservation of mass and energy without taking the dynamics into consideration cannot rule out one outcome from the other. We note that the above time scale considerations suggests that whether it is a delayed or prompt collapse in head-on collision can depend on the initial NS’s configuration. It does not imply prompt collapse by itself. To demonstrate that a prompt collapse results, one has to perform a fully relativistic simulation. Our NCSA/Potsdam/Wash U collaboration is developing a multi-purpose 3D numerical code, “Cactus”, for relativistic astrophysics and gravitational wave astronomy. This code contains the Einstein equations coupled to the general relativistic hydrodynamic equations. For a description of various aspects of the code and the NS grand challenge project based on it see . Testbeds and methods for evolving neutron stars have been given in , and will not be repeated in this paper. While this multi-purpose code is still under development for various capabilities in treating a broad class of astrophysical scenarios, in this paper we focus on the results obtained by applying this code to the head-on collision problem. ## Simulation results. We show the $`M=1.4`$ $`M_{}`$ head-on collision case. The stars are modeled as given above. We put the two TOV solutions at a proper distance of $`d=44km`$ apart (slightly more than 3 $`R`$ separation) along the z-axis, and boost them towards one another at the speed (as measured at infinity) of $`\sqrt{GM/d}`$ (the Newtonian infall velocity). 
The metric and extrinsic curvature of the two boosted TOV solutions are superimposed by (i) adding the off-diagonal components of the metric, (ii) adding the diagonal components of the metric and subtracting 1, and (iii) adding the components of the extrinsic curvature. The resulting matter distribution, momentum distributions, conformal part of the metric, and transverse traceless part of the extrinsic curvature are then used as input to York’s procedure for determining the initial data, in maximal slicing. With this setup, the initial data satisfies the complete set of Hamiltonian and momentum constraints to high accuracy (terms in the constraints cancel to $`10^6`$), and physically represent two NSs in head-on collision falling in from infinity, at least up to the Newtonian order. (For initial data setup on the P1N level, see ). The initial data is then evolved with the numerical methods described in . Various singularity avoiding slicings been used and tested against one another (maximal and $`1+log`$ slicings most extensively), yielding basically the same results. The simulations have been carried out with resolutions ranging from $`\mathrm{\Delta }x=1.48km`$ to $`0.246km`$ (13 to 76 grid points across each NS, with $`32^3`$ to $`192^3`$) for convergence and accuracy analysis. In Fig. 1a we show the collapse of the lapse along the $`x=y=0`$ line from $`t=0ms`$ to $`t=0.31ms`$ at intervals of $`0.044ms`$. (With the reflection symmetry across the $`z=0`$ plane and the axisymmetry of the head-on collision, we only need to evolve the first octant.) At $`t=0.31ms`$ the lapse has collapsed significantly. In Fig. 1b we show the evolution of $`g_{zz}`$ along the $`z`$ direction ($`x=y=0`$). We see the familiar grid stretching effect associated with evolving a black hole. Fig. 2 shows the time development of the lapse, the (proper) rest mass density $`\rho `$ and the pressure $`P`$ at the origin, scaled by the critical secular stability values $`\rho _{critical}`$ and $`P_{critical}`$, the values beyond which a static TOV solution is unstable to collapse for the given polytropic coefficient $`K`$ and index $`\mathrm{\Gamma }(=2)`$. We note that the effective $`K(=P/\rho ^2)`$ is time dependent due to shock heating. At coordinate time $`t=0.26ms`$ we see that both $`\rho `$ and $`P`$ surpass $`\rho _{critical}`$ and $`P_{critical}`$, indicating a collapse. In Fig.3 we show the position of the apparent horizon (AH). To confirm the location of the AH, convergence tests both in terms of resolution and in terms of location of the computational boundary have been carried out. (For a discussion of the AH finder, see ). We have also explicitly determined trapped surfaces bounded by the AH for the positive confirmation of a collapsed region. In Fig. 3, the solid and long dashed lines correspond to the AH locations at resolutions of $`\mathrm{\Delta }x=0.492km`$ and $`\mathrm{\Delta }x=0.246km`$, while the dotted line corresponds to $`\mathrm{\Delta }x=0.492km`$ but with the outer boundary two times further out. Although the coordinate position of the AH is substantially elongated in the z direction, the AH is actually quite spherical. The proper circumference on the x-y plane (equatorial) is close to the circumference on the x-z plane (polar), with the latter being $`52.9\pm 1.9km`$. For comparison, $`4\pi M_{AH}`$ is $`52.9\pm 2.1km`$, where $`M_{AH}`$ is the mass of the AH (we note that a substantial part of the matter in the system is enclosed within the AH). 
Analysis of this in relation to the hoop conjecture will be given elsewhere. Fig.4 shows the contour lines in the $`y=0`$ plane of the log of the gradient of the rest mass density $`\mathrm{log}\left(\sqrt{^i(\rho )_i(\rho )}\right)`$ at time $`t=0.31ms`$. We see a sharp peak at a coordinate radius of $`4.5km`$. The sharp change in rest mass density indicates a shock, stronger in the infalling direction ($`z`$), while weaker near the equatorial plane. The shock is moderately relativistic with Lorentz factor of about $`1.2`$. The shock is well captured in this $`192^3`$ run with high resolution shock capturing (HRSC) GR-hydro treatment. Comparing to Fig. 3, we see that the shock front is inside the AH in all direction at this time, although it is still moving outward in coordinate location. In Fig. 5 we show the convergence of the Hamiltonian and the z-momentum constraints for a measure of the accuracy of the simulation. The evolution of the $`L2`$ norms (integrated squared) of the constraints are scaled by the maximum of the matter terms in the constraints ($`16\pi \rho _{_{ADM}}`$ and $`8\pi j_{_{ADM}}^z`$ respectively). The solid, dotted, and dashed lines represent the constraints at resolutions $`\mathrm{\Delta }x=1.48km`$, $`0.492km`$ and $`0.246km`$ respectively. These long time scale convergence tests indicate that our numerical evolution is stable and convergent for the time scale of our present problem. Towards the end we see that the error is increasing rapidly; an examination of the spatial distribution of the constraint violations shows that the error is due to the familiar problem of resolving the “grid stretching” peaks of the black hole metric (cf. Fig.2). An extensive convergence analysis of many of the variables involved in the simulation has been carried out and will be presented in a follow up paper. We have also performed simulations with the initial boost velocity increased by $`10\%`$ (generating more shock heating) and confirmed that our results are not sensitive to the initial velocity. With these results we conclude that prompt collapse of the merged object formed in head-on collision infalling from infinity is possible, under the same conditions as in Shapiro’s conjecture. We have also carried out simulations of head-on collisions of lower mass NSs and have seen cases in which the shocks propagate to cover the whole star and no AH is found, indicating that the collapse would be delayed until radiative cooling. A detailed analysis of the transition point between prompt and delay collapse is computationally expensive with our 3D code used to carry out the present analysis. A 2D version of the present treatment is being developed with this specific application in mind. ## Conclusions. We pointed out that there is an assumption in Shapiro’s conjecture, namely, the head-on collision process is in quasi-equilibrium (in the sense of (A) and (B) above). We showed that this may not be true for the collision of two $`1.4`$ $`M_{}`$ NS’s. We substantiated our argument with a simulation solving the full set of the coupled Einstein and general relativistic hydrodynamic equations. We confirmed the prompt formation of a black hole in the infalling time scale $`t_i`$ with an apparent horizon found $`0.16ms`$ after the point of contact. In this paper we concentrate on the head-on collision process under the same conditions as in Shapiro’s conjecture. 
As the time scale argument given above is rather general, and in particular does not depend on the polytropic EOS, we expect the same argument to be applicable to more general situations. An investigation of the prompt vs. delayed collapse problem of head-on collisions with realistic EOSs, more realistic initial conditions (initial data setup with a Post-Newtonian formulation), and with a determination of the critical point between delayed vs. prompt collapse will be given in follow-up papers. We thank all present and past members of our NCSA/Potsdam/Wash U team for the joint “Cactus” code development effort, without which this work would not be possible. We thank in particular Miguel Alcubierre for the AH treatment and Bernd Brügmann for the elliptic equation solvers used in this work. We thank Stu Shapiro for useful discussions. This research is supported by NASA NCS5-153, NSF NRAC MCA93S025, and NSF grants PHY96-00507, 96-00049.
no-problem/9904/astro-ph9904306.html
ar5iv
text
# Acknowledgements ## Acknowledgements I thank G. Steigman for helpful comments.
no-problem/9904/cond-mat9904191.html
ar5iv
text
# Irreversible Magnetization of Pin-Free Type II Superconductors ## Abstract The magnetization curve of a type II superconductor in general is hysteretic even when the vortices exhibit no volume or surface pinning. This geometric irreversibility, caused by an edge barrier for flux penetration, is absent only when the superconductor has precisely ellipsoidal shape or is a wedge with a sharp edge where the flux lines can penetrate. A quantitative theory of this irreversibility is presented for pin-free disks and strips with constant thickness. The resulting magnetization loops are compared with the reversible magnetization curves of ideal ellipsoids. The magnetic moment of most superconductors is well known to be irreversible. After Abrikosov’s prediction of quantized flux lines it became clear that the magnetic hysteresis is caused by pinning of these vortex lines at inhomogeneities in the material. Flux-line pinning and the related critical state were subsequently confirmed quantitatively in numerous papers . However, similar hysteresis effects were also observed in type I superconductors, which do not contain flux lines but normal conducting domains, and in type II superconductors with negligible pinning. In these two cases the magnetic irreversibility is caused by a geometric (specimen-shape dependent) barrier which delays the penetration of magnetic flux but not its exit. In this respect the geometric barrier behaves similar to the Bean-Livingston barrier for vortices penetrating a parallel surface. The geometric irreversibility is most pronounced for thin films of constant thickness in a perpendicular field. It is absent only when the superconductor is of exactly ellipsoidal shape or is tapered like a wedge with a sharp edge where flux penetration is facilitated. In ellipsoids the inward directed driving force exerted on the vortex ends by the surface screening currents is exactly compensated by the vortex line tension , and thus the magnetization is reversible. In specimens with constant thickness (i.e. rectangular cross-section) this line tension opposes the penetration of flux lines at the four corner lines, thus causing an edge barrier; but as soon as two penetrating vortex segments join at the equator they contract and are driven to the specimen center by the surface currents, see Fig. 1 below. As opposed to this, when the specimen profile is tapered and has a sharp edge, the driving force even in very weak applied field exceeds the restoring force of the line tension such that there is no edge barrier. The resulting absence of hysteresis in wedge-shaped samples was nicely shown by Morozov et al. . An elegant analytical theory of the field and current profiles in thin superconductor strips with an edge barrier has been presented by Zeldov et al. , see also the extensions . With increasing applied field $`H_a`$, the magnetic flux does not penetrate until an entry field $`H_{\mathrm{en}}`$ is reached; at $`H_a=H_{\mathrm{en}}`$ the flux immediately jumps to the center, from where it gradually fills the entire strip or disk. 
This behavior in increasing $`H_a`$ is similar to that of thin films with artificially enhanced pinning near the edge , but in decreasing $`H_a`$ the behavior is different: In films with enhanced edge pinning (critical current density $`J_{c,\mathrm{edge}}`$) the current density $`J`$ at the edge immediately jumps from $`+J_{c,\mathrm{edge}}`$ to $`J_{c,\mathrm{edge}}`$ when the ramp rate inverses sign, while in pin-free films with geometric barrier the current density at the edge first stays constant or even increases and then gradually decreases and reaches zero at $`H_a=0`$. The entry field $`H_{\mathrm{en}}`$ was estimated for pin-free thin strips in Refs. , see also Refs. . In this letter the geometry-caused magnetic irreversibility of ideal pin-free type II superconductors is calculated and discussed for the two most important examples of circular disks (or cylinders) and long strips (or slabs) with rectangular profile of arbitrary aspect ratio $`b/a`$. I present flux-density profiles and magnetization loops and give explicit expressions for the entry field $`H_{\mathrm{en}}`$ and for the reversibility field $`H_{\mathrm{rev}}`$ above which the magnetization curve is reversible. Finally, the modification of these results by volume pinning is briefly mentioned. Let us first consider the magnetization of ideal ellipsoids. If the superconductor is homogeneous and isotropic, the magnetization curves $`M(H_a;N)`$ are reversible and may be characterized by a demagnetizing factor $`N`$ with $`0N1`$. If $`H_a`$ is along one of the three principal axes of the ellipsoid then $`N`$ is a scalar. One has $`N=0`$ for long specimens in parallel field, $`N=1`$ for thin films in perpendicular field, and $`N=1/3`$ for spheres. If the magnetization curve in parallel field is known, $`M(H_a;0)=B/\mu _0H_a`$ where $`B`$ is the flux density or induction inside the ellipsoid, then the homogeneous magnetization of the general ellipsoid, $`M(H_a;N)`$, follows from the implicit equation $`H_i=H_aNM(H_i;0).`$ (1) Solving Eq. (1) for the effective internal field $`H_i`$, one obtains $`M=M(H_a;N)=M(H_i;0)`$. In particular, for the Meissner state ($`B0`$) one finds $`M(H_a;0)=H_a`$ and $`M(H_a;N)={\displaystyle \frac{H_a}{1N}}\mathrm{for}|H_a|(1N)H_{c1}.`$ (2) At the lower critical field $`H_{c1}`$ one has $`H_i=H_{c1}`$, $`H_a=H_{c1}^{}=(1N)H_{c1}`$, $`B=0`$, and $`M=H_{c1}`$. Near the upper critical field $`H_{c2}`$ one has an approximately linear $`M(H_a;0)=\gamma (H_aH_{c2})<0`$ with $`\gamma >0`$, yielding $`M(H_a;N)={\displaystyle \frac{\gamma }{1+\gamma N}}(H_aH_{c2})\mathrm{for}H_aH_{c2}.`$ (3) Thus, if the slope $`\gamma 1`$ is small (and in general, if $`|M/H_a|1`$ is small), demagnetization effects may be disregarded and one has $`M(H_a;N)M(H_a;0)`$. The ideal magnetization curve of type II superconductors with $`N=0`$, $`M(H_a;0)`$ or $`B(H_a;0)=H_a+M(H_a;0)`$, may be calculated from Ginzburg-Landau (GL) theory , but any other model curve may be used provided $`M(H_a;0)=M(H_a;0)`$ has a vertical slope at $`H_a=H_{c1}`$ and decreases monotonically in size for $`H_a>H_{c1}`$. For simplicity in this letter I shall assume $`H_{c1}H_{c2}`$ (i.e. large GL parameter $`\kappa 1`$) and $`H_aH_{c2}`$. To illustrate the essential features I may thus use the realistic model $`M(H_a;0)=H_a`$ for $`|H_a|H_{c1}`$ and $`M(H_a;0)=(H_a/|H_a|)(|H_a|^3H_{c1}^3)^{1/3}H_a`$ (4) for $`|H_a|>H_{c1}`$, see the curve labeled $`\mathrm{}`$ in Fig. 3 below. 
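For later comparison it is useful to be able to generate the reversible curve $`M(H_a;N)`$ of the equivalent ellipsoid numerically. The sketch below (our own coding) solves the implicit relation of Eq. (1), read with its minus signs as $`H_i=H_a-NM(H_i;0)`$, using the model of Eq. (4), read as $`M=-H_a`$ below $`H_{c1}`$ and $`M=\mathrm{sgn}(H_a)(|H_a|^3-H_{c1}^3)^{1/3}-H_a`$ above it; the bracketing interval and the bisection tolerance are our choices.

```python
import numpy as np
from scipy.optimize import brentq

H_c1 = 1.0   # fields measured in units of H_c1

def M_parallel(H):
    """Model magnetization M(H;0) of Eq. (4): Meissner branch below H_c1,
    reversible mixed state with |M| -> 0 at large H above it."""
    if abs(H) <= H_c1:
        return -H
    return np.sign(H) * (abs(H) ** 3 - H_c1 ** 3) ** (1.0 / 3.0) - H

def M_ellipsoid(H_a, N):
    """Reversible magnetization of an ellipsoid: solve H_i = H_a - N*M(H_i;0), Eq. (1)."""
    f = lambda H_i: H_i - H_a + N * M_parallel(H_i)
    # the internal field lies between H_a and H_a/(1-N) (perfect screening)
    H_i = brentq(f, H_a - 1e-9, H_a / (1.0 - N) + 1e-9)
    return M_parallel(H_i)

N = 1.0 / 3.0   # sphere
for H_a in [0.3, 0.6, 0.9, 1.5, 3.0]:
    print(f"H_a = {H_a:.1f}  M = {M_ellipsoid(H_a, N):+.3f}")
```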
In nonellipsoidal superconductors the induction $`𝐁(𝐫)`$ in general is not homogeneous, and so the concept of a demagnetizing factor does not work. However, when the magnetic moment $`𝐦=\frac{1}{2}𝐫\times 𝐉(𝐫)d^3r`$ is directed along $`H_a`$, one may define an effective demagnetizing factor $`N`$ which in the Meissner state ($`B0`$) yields the same slope $`M/H_a=1/(1N)`$, Eq. (2), as an ellipsoid with the same volume $`V`$. Here the definition $`M=m/V`$ with $`m=\mathrm{𝐦𝐇}_a/H_a`$ is used. For long strips and circular disks or cylinders with cross-section $`2a\times 2b`$ in a perpendicular or axial magnetic field along the thickness $`2b`$, approximate expressions for the slopes $`M/H_a=m/(VH_a)`$ are given in Refs. . Using this and defining $`q(|M/H_a|1)(b/a)`$, one obtains the effective $`N`$ for any aspect ratio $`b/a`$ in the form $`N`$ $`=`$ $`11/(1+qa/b),`$ (5) $`q_{\mathrm{strip}}`$ $`=`$ $`{\displaystyle \frac{\pi }{4}}+0.64\mathrm{tanh}\left[0.64{\displaystyle \frac{b}{a}}\mathrm{ln}\left(1.7+1.2{\displaystyle \frac{a}{b}}\right)\right],`$ (6) $`q_{\mathrm{disk}}`$ $`=`$ $`{\displaystyle \frac{4}{3\pi }}+{\displaystyle \frac{2}{3\pi }}\mathrm{tanh}\left[1.27{\displaystyle \frac{b}{a}}\mathrm{ln}\left(1+{\displaystyle \frac{a}{b}}\right)\right].`$ (7) In the limits $`ba`$ and $`ba`$, formulae (5) are exact, and for general $`b/a`$ the relative error is $`<1\%`$. For $`a=b`$ (square cross-section) they yield for the strip $`N=0.538`$ (while $`N=1/2`$ for a circular cylinder in perpendicular field) and for the short cylinder $`N=0.365`$ (while $`N=1/3`$ for the sphere). Next we consider the full, irreversible magnetization curves $`M(H_a)`$ of pin-free strips and cylinders with cross section $`2a\times 2b`$. Appropriate continuum equations and algorithms (which apply also to pinning) have been proposed recently by Labusch and Doyle and by the author , based on the Maxwell equations and on constitutive laws which describe flux flow and pinning \[or thermal depinning expressed, e.g., by an electric field $`𝐄(𝐉,𝐁)`$\] and the reversible magnetization in absence of pinning, $`M(H_a;0)`$. Here I shall use the method and the model $`M(H_a;0)`$, Eq. (4). The pin-free flux dynamics will be described as viscous motion by $`𝐄=\rho _{\mathrm{FF}}(B)𝐉`$ with flux-flow resistivity $`\rho _{\mathrm{FF}}B`$. In both methods the $`M(H_a;0)`$ law enters the driving force density on the vortices, $`𝐉_𝐇\times 𝐁`$ with definition $`𝐉_𝐇=\times 𝐇`$, where $`𝐇(𝐁)`$ is obtained by inverting the relation $`𝐁(𝐇)=𝐇+𝐌(𝐇;0)`$. While method considers a magnetic charge density on the specimen surface which causes an effective field $`𝐇_i(𝐫)`$ inside the superconductor, our method couples the arbitrarily shaped superconductor to the external field $`𝐁(𝐫,t)`$ via surface screening currents: In a first step the vector potential $`𝐀(𝐫,t)`$ is calculated for given current density $`𝐉`$; then this relation (a matrix) is inverted to obtain $`𝐉`$ for given $`𝐀`$ and given $`𝐇_a`$; next the induction law is used to obtain the electric field \[in our symmetric geometry one has $`𝐄(𝐉,𝐁)=𝐀/t`$ \], and finally the constitutive law $`𝐄=𝐄(𝐉,𝐁)`$ is used to eliminate $`𝐀`$ and $`𝐄`$ and obtain one single integral equation for $`𝐉(𝐫,t)`$ as a function of $`𝐇_a(t)`$, without having to compute $`𝐁(𝐫,t)`$ outside the specimen. This method in general is fast and elegant; but so far the algorithm is restricted to moderate aspect ratios, $`0.03b/a30`$, and to a number of grid points not exceeding 1000 (on a Personal Computer). 
Improved accuracy is expected by combining methods (19) (working best for small $`b/a`$) and (20). The penetration and exit of flux computed by method is illustrated in Figs. 1 and 2 for isotropic strips and disks without volume pinning, using a flux-flow resistivity $`\rho _{\mathrm{FF}}=\rho B(𝐫)`$ with $`\rho =140`$ (strip) or $`\rho =70`$ (disk) in units where $`H_{c1}=a=\mu _0=|dH_a/dt|=1`$. The profiles of the induction $`B_y(r,y)`$ taken along the midplane $`y=0`$ of the thick disk in Fig. 2 have a pronounced minimum near the edge $`r=a`$, precisely in the region where strong screening currents flow. Away from the edges, the current density $`𝐉=\times 𝐁/\mu _0`$ is nearly zero; note the parallel field lines in Fig. 1. The quantity $`𝐉_𝐇=\times 𝐇(𝐁)`$ which enters the Lorentz force density $`𝐉_𝐇\times 𝐁`$, is even exactly zero since we assume absence of pinning. Our finite flux-flow parameter $`\rho `$ and finite ramp rate $`dH_a/dt=\pm 1`$ mean a dragging force which, similar to pinning, causes a weak hysteresis and a small remanent flux at $`H_a=0`$; this effect may be reduced by choosing larger resistivity and slower ramping. The induction $`B_y(0,0)`$ in the specimen center in Fig. 2 performs a hysteresis loop very similar to the magnetization loops $`M(H_a)`$ shown in Figs. 2, 3. Both loops are symmetric, e.g., $`M(H_a)=M(H_a)`$. The maximum of $`M(H_a)`$ defines a field of first flux entry $`H_{\mathrm{en}}`$, which closely coincides with the field $`H_{\mathrm{en}}^{}`$ at which $`B_y(0,0)`$ starts to appear. The computed entry fields are well fitted by $`H_{\mathrm{en}}^{\mathrm{strip}}/H_{c1}`$ $`=`$ $`\mathrm{tanh}\sqrt{0.36b/a},`$ (8) $`H_{\mathrm{en}}^{\mathrm{disk}}/H_{c1}`$ $`=`$ $`\mathrm{tanh}\sqrt{0.67b/a}.`$ (9) These formulae are good approximations for all aspect ratios $`0<b/a<\mathrm{}`$, see also the estimates of $`H_{\mathrm{en}}\sqrt{b/a}`$ for thin strips in Refs. . The virgin curve of the irreversible $`M(H_a)`$ of strips and disks at small $`H_a`$ coincides with the ideal Meissner straight line $`M=H_a/(1N)`$ of the corresponding ellipsoid, Eqs. (2,5). When the increasing $`H_a`$ approaches $`H_{\mathrm{en}}`$, flux starts to penetrate into the corners in form of stretched flux lines (Fig. 1) and thus $`|M(H_a)|`$ falls below the Meissner line. At $`H_a=H_{\mathrm{en}}`$ flux penetrates and jumps to the center, and $`|M(H_a)|`$ starts to decrease. In decreasing $`H_a`$, this barrier is absent. As can be seen in Fig. 3, above some field $`H_{\mathrm{rev}}`$, the magnetization curve $`M(H_a)`$ becomes reversible and exactly coincides with the curve of the ellipsoid defined by Eqs. (1, 4, 5) (in the quasistatic limit with $`\rho ^1dH_a/dt0`$). The irreversibility field $`H_{\mathrm{rev}}`$ is difficult to compute since, in our present algorithm, it slightly depends on the choices of the flux-flow parameter $`\rho `$ (or ramp rate) and of the numerical grid, and also on the model for $`M(H_a;0)`$. In the interval $`0.08b/a5`$ we find with relative error of $`3\%`$, $`H_{\mathrm{rev}}^{\mathrm{strip}}/H_{c1}`$ $`=`$ $`0.65+0.12\mathrm{ln}(b/a),`$ (10) $`H_{\mathrm{rev}}^{\mathrm{disk}}/H_{\mathrm{c1}}`$ $`=`$ $`0.75+0.15\mathrm{ln}(b/a).`$ (11) This fit obviously does not apply to $`b/a1`$ (since $`H_{\mathrm{rev}}`$ should exceed $`H_{\mathrm{en}}>0`$) nor to $`b/a1`$ (where $`H_{\mathrm{rev}}`$ should be close to $`H_{\mathrm{c1}}`$). The limiting value of $`H_{\mathrm{rev}}`$ for thin films with $`ba`$ is thus not known at present. 
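The fits quoted in this section are easy to evaluate for any aspect ratio. The short sketch below is our own coding, with Eq. (5) read with its leading term restored, $`N=1-1/(1+qa/b)`$; it reproduces the values $`N=0.538`$ and $`N=0.365`$ quoted above for square cross sections and tabulates the entry and reversibility fields, the $`H_{\mathrm{rev}}`$ fit being used only inside its stated range $`0.08\le b/a\le 5`$.

```python
import numpy as np

def q_strip(r):
    """Eq. (6): fit q(b/a) for a long strip/slab; r = b/a."""
    return np.pi / 4 + 0.64 * np.tanh(0.64 * r * np.log(1.7 + 1.2 / r))

def q_disk(r):
    """Eq. (7): fit q(b/a) for a circular disk/cylinder; r = b/a."""
    return 4 / (3 * np.pi) + 2 / (3 * np.pi) * np.tanh(1.27 * r * np.log(1 + 1 / r))

def N_eff(r, geometry="strip"):
    """Effective demagnetizing factor, Eq. (5) read as N = 1 - 1/(1 + q*a/b)."""
    q = q_strip(r) if geometry == "strip" else q_disk(r)
    return 1.0 - 1.0 / (1.0 + q / r)

def H_entry(r, geometry="strip"):
    """Field of first flux entry in units of H_c1 (the tanh-sqrt fits quoted above)."""
    return np.tanh(np.sqrt((0.36 if geometry == "strip" else 0.67) * r))

def H_rev(r, geometry="strip"):
    """Reversibility field in units of H_c1; the logarithmic fit is quoted for 0.08 <= b/a <= 5."""
    return (0.65 + 0.12 * np.log(r)) if geometry == "strip" else (0.75 + 0.15 * np.log(r))

# square cross section (b = a): reproduces N = 0.538 (strip) and N = 0.365 (disk)
for geom in ("strip", "disk"):
    print(geom, round(N_eff(1.0, geom), 3), round(H_entry(1.0, geom), 2), round(H_rev(1.0, geom), 2))
```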
Remarkably, the irreversible magnetization curves $`M(H_a)`$ of pin-free strips and disks fall on top of each other if the strip is chosen twice as thick as the disk, $`(b/a)_{\mathrm{strip}}\approx 2(b/a)_{\mathrm{disk}}`$. This striking coincidence holds for all aspect ratios $`0<b/a<\mathrm{\infty }`$ and can be seen from each of Eqs. (5-7): The effective $`N`$ \[or virgin slope $`1/(1-N)`$\], the entry field $`H_{\mathrm{en}}`$, and the reversibility field $`H_{\mathrm{rev}}`$ are nearly equal for strips and disks with half thickness, or for slabs and cylinders with half length. Another interesting feature of the pin-free magnetization loops is that the maximum of $`|M(H_a)|`$ exceeds the maximum of the reversible curve (equal to $`H_{c1}`$) when $`b/a\lesssim 0.8`$ for strips and $`b/a\lesssim 0.4`$ for disks, but at larger $`b/a`$ it falls below $`H_{c1}`$. The maximum magnetization may be estimated from the slope of the virgin curve $`1/(1-N)`$, Eq. (5), and from the field of first flux entry, Eq. (6). Finally, Fig. 4 shows how the irreversible magnetization loop is modified when volume pinning of the flux lines is switched on. Increasing critical current density $`J_c`$ (in natural units $`H_{c1}/a`$) inflates the loops nearly symmetrically about the pin-free loop or (above $`H_{\mathrm{rev}}`$) about the reversible curve, and the maximum of $`|M(H_a)|`$ shifts to higher fields. Above $`H_{\mathrm{rev}}`$ the width of the loop is nearly proportional to $`J_c`$, as expected from previous theories which assumed $`H_{c1}=0`$, but at small fields the influence of finite $`H_{c1}`$ is clearly seen up to rather strong pinning.
no-problem/9904/astro-ph9904217.html
ar5iv
text
# Optical spectroscopy of X-ray sources in the old open cluster M 67 Based on observations made with the William Herschel Telescope operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias ## 1 Introduction Two observations of M 67 with the ROSAT PSPC resulted in the detection of X-ray emission from 25 members of this old open cluster (Belloni et al. 1993, 1998). The X-ray emission of many of these sources is readily understood. For example, the X-ray emission originates in deep, hot atmospheric layers in a hot white dwarf; is due to mass transfer in a cataclysmic variable; and is caused by magnetic activity in two contact binaries and several RS CVn-type binaries. However, Belloni et al. (1998) point out several X-ray sources in M$ $67 for which the X-ray emission is unexplained. All but one of these objects are located away from the isochrone formed by the main sequence and the (sub)giant branch of M$ $67 (Fig. 1). In this paper we investigate the nature of the X-ray emission of these stars through low- and high-resolution optical spectra. In particular, we investigate whether the emission could be coronal as a consequence of magnetic activity, by looking for emission cores in the Ca ii H&K lines. Tidal interaction in a close binary orbit is thought to enhance magnetic activity at the stellar surface by spinning up the stars in the binary. Therefore, we also derive projected rotational velocities with the crosscorrelation method. Finally, we study the H$`\alpha `$ profile as a possible indicator of activity or mass transfer. The observations and the data reduction are described in Sect. 2, and the analysis of the spectra in Sect. 3. Comparison with chromospherically active binaries is made in Sect. 4. A discussion of our results is given in Sect. 5. In the remainder of the introduction we give brief sketches of the stars studied in this paper; details on many of them are given by Mathieu et al. (1990). The stars are indicated with their number in Sanders (1977), and are listed in Table 1. S 1063 and S 1113 are two binaries located below the subgiant branch in the colour-magnitude diagram of M$ $67. Their orbital periods, 18.4 and 2.82 days respectively, are too long for them to be contact binaries; also they are too far above the main sequence to be binaries of main-sequence stars. In principle, a (sub)giant can become underluminous when it transfers mass to its companion, as energy is taken from the stellar luminosity to restore hydrostatic equilibrium (e.g. Kippenhahn & Weigert 1967). However, mass transfer through Roche lobe overflow very rapidly leads to circularization of the binary orbit, whereas S 1063 has an eccentricity $`e=0.217`$. The orbit of S 1113 is circular, so mass transfer could be occurring in that system. For the moment, the nature of these binaries is not understood. In both, Pasquini & Belloni (1998) observed emission cores in the Ca ii H&K lines. S 1063 is reported to be photometrically variable with $``$ 0.10 mag (Rajamohan et al. 1988; Kaluzny & Radczynska 1991), but no period is found. For S 1113, photometric variability with a period of 0.313 days and a total amplitude of 0.6 mag was claimed by Kurochkin (1960), but this has not been confirmed by Kaluzny & Radczynska (1991), who find variability with only 0.05 mag. 
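The statement that the 4.88 day photometric period of S 1242 corresponds to corotation with the orbit at periastron can be checked with Kepler's second law: the orbital angular velocity at periastron exceeds the mean motion by a factor $`\sqrt{1+e}/(1-e)^{3/2}`$, so the corotation period is $`P_{\mathrm{orb}}(1-e)^{3/2}/\sqrt{1+e}\approx 4.9`$ d for $`P_{\mathrm{orb}}=31.8`$ d and $`e=0.66`$. A two-line check (our own script, not from the paper):

```python
import numpy as np

P_orb, e = 31.8, 0.66                              # orbital period (d) and eccentricity of S 1242
P_corot = P_orb * (1 - e) ** 1.5 / np.sqrt(1 + e)  # rotation period corotating at periastron
print(f"periastron corotation period = {P_corot:.2f} d")   # ~4.9 d, cf. the 4.88 d photometric period
```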
S 1063 is the only M 67 star in our sample that shows significantly variable X-ray emission (between 0.0081 and 0.0047 cts s<sup>-1</sup>; Belloni et al. 1998). S 1072 and S 1237 are binaries with orbital periods of 1495 and 698 days, and with eccentricities $`e=0.32`$ and 0.105, respectively. The colour and magnitude of S 1072 cannot be explained with the pairing of a giant and a blue straggler, since this is not compatible with its $`ubvy`$ photometry (Nissen et al. 1987; Mathieu & Latham 1986), nor with superposition of three subgiants, since this is excluded by the radial velocity correlations (Mathieu et al. 1990). The absence of the 6708 Å lithium feature in the spectrum of S 1072 indicates that the surface material has undergone mixing (Hobbs & Mathieu 1991; Pritchett & Glaspey 1991). S 1237 could be a binary of a giant and a star at the top of the evolved main sequence (Janes & Smith 1984); high-resolution spectroscopy should be able to detect the main-sequence star in that case (Mathieu et al. 1990). The wide orbits and significant eccentricities appear to exclude both mass transfer and tidal interaction as explanations for the X-ray emission. S 1242 has the largest eccentricity of the binaries in our sample, at $`e=0.66`$ in an orbit of 31.8 days. Its position on the subgiant branch is explained if a subgiant of 1.25 $`M_{\mathrm{}}`$ has a secondary with $`V>15`$ (Mathieu et al. 1990). Ca ii K line emission is reported by Pasquini & Belloni (1998). Photometric variability with a period of 4.88 days and amplitude of 0.0025 mag has been found by Gilliland et al. (1991). We note that this photometric period corresponds to corotation with the orbit at periastron, which suggests that the X-ray emission may be due to tidal interaction taking place at periastron. The binary would then be an interesting example of a system in transition from an eccentric to a circular orbit. Indeed, according to the diagnostic diagram of Verbunt & Phinney (1995) a giant of 1.25 $`M_{\mathrm{}}`$ with a current radius of $`2.3R_{\mathrm{}}`$ (as derived from the location of S 1242 in the colour-magnitude diagram) cannot have circularized an orbit of 31.8 days. S 1040 is a binary consisting of a giant and a white dwarf. The progenitor of the white dwarf circularized the orbit during a phase of mass transfer (Verbunt & Phinney 1995); as a result the mass of the white dwarf is very low (Landsman et al. 1997). The white dwarf is probably too cool, at 16 160 K, to be the X-ray emitter. Indications for magnetic activity are Ca ii H&K (Pasquini & Belloni 1998) and Mg ii ($`\lambda \lambda `$ 2800 Å, Landsman et al. 1997) emission lines. If the X-rays are due to coronal emission of the giant, this must be the consequence of the past evolution of the binary, since the giant is too small for significant tidal interaction to be taking place in the current orbit. S 1082 is a blue straggler. Photometric variability of 0.08 mag within a few hours was observed by Simoda (1991). Goranskii et al. (1992) found eclipses with a total amplitude of 0.12 mag and a binary period of 1.07 days; however, the radial velocities of the star do not show this period, and vary by about 2 km s<sup>-1</sup>, far too little for a 1 day eclipsing binary (Mathieu et al. 1986). Landsman et al. (1998) detect a significant excess at 1520 Å with the Ultraviolet Imaging Telescope, and ascribe this to a hot, subluminous secondary. 
Such a secondary was suggested already by Mathys (1991) on the basis of a broad component in the Na i D and O i absorption lines. ## 2 Observations and data reduction Optical spectra were obtained on February 28/29, 1996 with the 4.2m William Herschel Telescope on La Palma, under good weather conditions (seeing $`<1\mathrm{}`$ until 4$`\stackrel{h}{.}`$30 UT, $`<2\mathrm{}`$ thereafter). In addition to the X-ray sources in M$ $67 we observed two ordinary member giants of M$ $67, S 1288 and S 1402, for comparison. Furthermore one flux standard and three velocity standards were observed. The blue high-resolution spectra of S 1113 were obtained on April 7/8, 1998 with the same telescope through a service observation (seeing 1–2$`\mathrm{}`$). A log of the observations is given in Table 2. All spectra have been reduced using the Image Reduction and Analysis Facility (IRAF)<sup>1</sup><sup>1</sup>1IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. ### 2.1 Low-resolution spectra Low-resolution spectra were taken with the ISIS double-beam spectrograph (Carter et al. 1993). The blue arm of ISIS was used with the 300 lines per mm grating and TEK-CCD, resulting in a wavelength coverage of 3831 to 5404 Å and a dispersion of 1.54 Å per pixel at 4000 Å. The red arm, combined with the 316 lines per mm grating and EEV-CCD, covered a wavelength region of 5619 to 7135 Å with a dispersion of 1.40 Å per pixel at 6500 Å. The format of the frames is 1124 $`\times `$ 200 pixels which includes the under- and overscan regions. For the object exposures the slit width was set to $`4\mathrm{}`$. Flatfields were made with a Tungsten lamp while CuAr and CuNe lamp exposures were taken for the purpose of wavelength calibration. For the ISIS-spectra, basic reduction steps have been done within the IRAF ccdred-package. These steps include removing the bias signal making use of the under- and overscan regions and zero frames, trimming the frames to remove the under- and overscan, and flatfielding to correct for small pixel-to-pixel gain variations. The remaining reduction has been done with IRAF specred-package tasks. With the optimal extraction algorithm (Horne 1986) the two dimensional images are reduced to one dimensional spectra. Next, the spectra are calibrated in wavelength with the arc frames. A dispersion solution is found by fitting third (blue) and fourth (red) order polynomials to the positions on the CCD of the arclamp lines. The fluxes of the spectra are calibrated with the absolute fluxes of HZ 44, tabulated at 50 Å intervals (Massey et al. 1988), and adopting the standard atmospheric extinction curve for La Palma as given by King (1985). The estimated accuracy of the flux calibration is $`10`$%. ### 2.2 High-resolution spectra High-resolution echelle spectra were taken with the Utrecht Echelle Spectrograph (UES, Unger et al. 1993). Observations were done with a $`1\mathrm{}`$ slitwidth. For the 1996 observations, the UES was used in combination with a 1024 $`\times `$ 1024 pixels TEK-CCD, and the 31.6 lines per mm grating (E31), which resulted in a broad wavelength coverage, but small separation of the echelle orders on the CCD. In this setup, the UES resolving power is 49 000 per resolution element (two pixels), corresponding to a dispersion of 3 km s<sup>-1</sup> per pixel or 0.06 Å per pixel at 6000 Å. 
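For reference, the quoted dispersions follow directly from the resolving power; the short sketch below (illustrative only, using the numbers given above) makes the conversion explicit.

```python
# Convert the quoted UES resolving power into per-pixel dispersions.
C_KMS = 299792.458            # speed of light [km/s]
R = 49000.0                   # resolving power per resolution element (from the text)
PIXELS_PER_ELEMENT = 2        # one resolution element = two pixels

dv_pixel = C_KMS / R / PIXELS_PER_ELEMENT          # ~3.1 km/s per pixel
dlam_pixel = 6000.0 / R / PIXELS_PER_ELEMENT       # ~0.061 Angstrom per pixel at 6000 Angstrom
print(f"{dv_pixel:.2f} km/s per pixel, {dlam_pixel:.3f} Angstrom per pixel")
```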
The frames were centered on $`\lambda _{\mathrm{cen}}=4250`$ Å and $`\lambda _{\mathrm{cen}}=5930`$ Å in order to get a blue (3820 to 4920 Å) and red (4890 to 7940 Å) echelle spectrum. The number of orders recorded on the CCD is 34 in the blue and 45 in the red, each covering $``$ 45 to 80 Å increasing for longer wavelengths. Towards the red, gaps occur between the wavelength coverage of adjacent orders. Exposures of a quartz lamp were taken to make the flatfield corrections. ThAr exposures served as wavelength calibration frames. For the 1998 observations of S 1113, a 2048 $`\times `$ 2048 pixels SITe-CCD was used. Two spectra were taken with the 79.0 lines per mm (E79) grating ($`\lambda _{\mathrm{cen}}=4343`$ Å). The difference between the E79 and the E31 gratings is that E79-spectra have a larger separation of the echelle-orders on the detector, which can improve the determination of the sky-background. The spectral resolution of the gratings is the same (the central dispersion in these observations is $`0.04`$ Å per pixel). The specified wavelength-coverage for this combination of grating and detector is 3546 to 6163 Å but only the central orders were bright enough to extract spectra (3724 to 5998 Å). Flatfield and ThAr exposures were made for calibration purposes. The reduction of the UES spectra has been performed using the routines available within the IRAF echelle-package. First, the frames are debiased and the under- and overscan regions removed. After locating the orders on the CCD for both the quartz lamp and the object exposures, we flatfielded the frames. Spectra are extracted with optimal extraction. The small order separation makes sky subtraction difficult; however, our targets are bright, and the resulting error is negligible. In the step of wavelength calibration, the dispersion solution is derived by fitting third and fourth order polynomials leaving rms-residuals of 0.004 Å (red) and 0.002 Å (blue, 0.003 Å for the 1998-spectra). To find absolute fluxes for the Ca ii K ($`\lambda \mathrm{\hspace{0.17em}3933.67}`$ Å) & H ($`\lambda \mathrm{\hspace{0.17em}3968.47}`$ Å) emission lines (Sect. 3.1), the fluxes of the relevant blue orders of an object have been calibrated with the calibrated ISIS spectrum of the same object. Continuum normalization of the orders in the red spectra, required for the rotational velocity analysis, is done by fitting third to fifth order polynomials to the wavelength-calibrated spectra. ## 3 Data analysis We study two indicators of magnetic activity. The direct indicator is emission in the cores of the Ca ii H&K lines. Another indicator is the rotational speed: rapid (differential) rotation and convective motions are thought to generate magnetic fields through a dynamo. ### 3.1 Determination of Ca ii H&K emission fluxes To estimate the amount of flux emitted in the Ca ii H&K line cores, $`F_{\mathrm{Ca}}`$, we add the fluxes above the H&K absorption profiles as follows. An upper and a lower limit of the level of the absorption pseudo-continuum is estimated by eye and is marked by a straight line. For S 1113 this is illustrated in Fig. 7. We obtain a lower and upper limit of the emitted flux by adding the fluxes in each wavelength-bin above these levels. The value given in Table 3 is the average of these two results, the uncertainty is half their difference. Use of higher order fits (following Fernández-Figueroa et al. 1994) to the absorption profile gives similar results. 
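The flux summation just described is simple to spell out. The sketch below is a minimal illustration with placeholder arrays; the wavelength grid, fluxes and by-eye pseudo-continuum levels are hypothetical, not data from this paper.

```python
import numpy as np

# Placeholder spectrum around the Ca ii K core: a flat trough plus a weak
# emission bump.  All numbers are hypothetical.
wave = np.linspace(3931.5, 3936.0, 90)                       # Angstrom
flux = 1.6e-16 + 6e-17 * np.exp(-0.5 * ((wave - 3933.67) / 0.35) ** 2)
pseudo_lo = np.full_like(wave, 1.5e-16)                      # lower by-eye continuum level
pseudo_hi = np.full_like(wave, 1.8e-16)                      # upper by-eye continuum level

dlam = np.gradient(wave)                                     # wavelength-bin widths [Angstrom]

def flux_above(level):
    """Sum the flux in each wavelength bin that lies above the given pseudo-continuum level."""
    return np.sum(np.clip(flux - level, 0.0, None) * dlam)

# Summing above the upper level gives the lower limit on the emitted flux and
# vice versa; F_Ca is the mean of the two, the uncertainty half their difference.
f_lower, f_upper = flux_above(pseudo_hi), flux_above(pseudo_lo)
f_ca = 0.5 * (f_lower + f_upper)
f_ca_err = 0.5 * (f_upper - f_lower)
```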
If an emission line is not clearly visible, we obtain an upper limit by estimating the minimal detectable emission flux at the H&K line centers within a 1 Å wide region (typical width of the emission lines). Six of our sources show Ca ii H&K line emission (Fig. 2). The profiles of S 1113 appear to be double-lined, suggesting that we see activity of both stars (Fig. 7). The fluxes given in Table 3 are the total fluxes, i.e. no attempt was made to deblend the emission lines. No emission is visible in the spectrum of S 1082. ### 3.2 Determination of projected rotational velocities #### 3.2.1 Crosscorrelation In order to derive the projected rotational velocity $`v\mathrm{sin}i`$ of our targets, we apply the crosscorrelation technique (e.g. Tonry & Davis 1979). This method computes the correlation between the object spectrum and an appropriately chosen template spectrum as function of relative shift. The position of the maximum of the crosscorrelation function (ccf) provides the value of the radial-velocity difference between object and template. The width of the peak is indicative for the width of the spectral lines and can therefore be used as a measure of the rotational velocity of the stars. For rotational velocities not too large the line profiles may be approximated with Gaussians, allowing analytical treatment of the crosscorrelation method. Assuming that the binary spectrum is a shifted, scaled and broadened version of the template spectrum, the broadening can be related to the width of the ccf peak as follows (Gunn et al. 1996). With $`\tau `$ the dispersion in the template’s and $`\beta `$ the dispersion in the target’s spectral lines, $`\mu `$ the dispersion of the ccf peak and $`\sigma `$ the dispersion of the gaussian that describes the broadening of the object’s spectrum with respect to the template, one can write: $$\mu ^2=\tau ^2+\beta ^2=2\tau ^2+\sigma ^2$$ (1) Eq. 1 applies to both components in the binary spectrum and their corresponding ccf peaks. The crosscorrelations are performed with the IRAF task fxcor that uses Fourier transforms of the spectra to compute the ccf. Before performing the crosscorrelation, the continuum is subtracted from the normalized spectra. Filtering in the Fourier domain is applied to avoid undesirable contributions originating from noise or intrinsically broad lines (see Wyatt 1985). The templates are chosen from the radial-velocity standards such that their spectral types resemble those of the targets. The value of $`\tau `$ for these stars is determined for each order separately by autocorrelation of the template spectrum adopting the same filter used for the crosscorrelations. In this case $`\sigma `$ is zero and therefore $`\tau `$ is found directly from the width of the ccf peak: $`\tau ^2=\mu ^2/2`$. The template spectra are correlated with our target stars order by order, where we limit ourselves to orders in the red spectra that do not suffer from strong telluric lines. Most ccf peaks can be fitted well with a gaussian. As the final value for $`v\mathrm{sin}i`$ we give the broadening $`\sigma `$ averaged over the different orders, and for the uncertainty we take the rms of the spread around the average of $`\sigma `$ (Table 3). Equating $`v\mathrm{sin}i`$ to $`\sigma `$ implicitly assumes that $`\tau `$ is the width of the lines not related to rotation. An upper limit to $`v\mathrm{sin}i`$ is found from the other extreme in which we assume that the total width of the spectral line $`\beta `$ follows from rotation. 
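A sketch of how Eq. (1) is applied order by order follows; the per-order widths are placeholders rather than measured values, and, as noted above, identifying $`v\mathrm{sin}i`$ with $`\sigma `$ assumes that the template width $`\tau `$ accounts for everything that is not rotation.

```python
import math
import statistics

def tau_from_autocorrelation(mu_auto):
    """Template line dispersion: for the autocorrelation peak sigma = 0, so mu^2 = 2 tau^2."""
    return mu_auto / math.sqrt(2.0)

def broadening_sigma(mu, tau):
    """Rotational broadening from Eq. (1): mu^2 = 2 tau^2 + sigma^2 (all widths in km/s)."""
    s2 = mu * mu - 2.0 * tau * tau
    return math.sqrt(s2) if s2 > 0.0 else None      # undefined if the target lines are narrower

# Placeholder per-order ccf and template dispersions (km/s); not values from Table 3.
mu_orders = [11.8, 12.3, 11.5, 12.0]
tau_orders = [7.9, 8.1, 7.8, 8.0]

sigmas = [s for s in (broadening_sigma(m, t) for m, t in zip(mu_orders, tau_orders))
          if s is not None]
vsini = statistics.mean(sigmas)                     # v sin i ~ sigma, averaged over orders
vsini_err = statistics.pstdev(sigmas)               # rms spread over orders as the uncertainty

# Conservative upper limit: attribute the whole intrinsic line width beta to rotation.
betas = [math.sqrt(m * m - t * t) for m, t in zip(mu_orders, tau_orders)]
vsini_upper = statistics.mean(betas)
```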
This creates uncertainties of the order of $`\sigma `$ for S 1242. In the case of S 1237 we find that $`\beta <\tau `$ ($`\sigma <0`$), i.e. the lines in S 1237 are narrower than in the template. For these two stars, we give an upper limit of $`v\mathrm{sin}i<\beta `$ in Table 3. S1113 is the only binary observed whose ccf shows two peaks. The ccf peaks of both stars overlap in the 1998 observation. Therefore we do not use these spectra in the following analysis. For the 1996 spectra, the ccf shows two peaks, one of which is broad indicating the presence of a fast rotating star. Both peaks are clearly separated with a center-to-center velocity separation of $``$ 110 km s<sup>-1</sup>. The lines in the spectrum are smeared out and less pronounced, resulting in a noisy ccf. To improve this, we combine four sequential orders before crosscorrelating (being constrained by the maximum number of points fxcor can handle). From eq. 17 in Gunn et al. we derive the relative light contribution of both components to the spectrum from the height and dispersion of the crosscorrelation peaks, assuming that the binary stars have the same spectrum as the template ($`\alpha =1`$ in eq. 17). According to this the rapidly rotating star contributes $`82`$% of the light. Note that luminosity ratios derived from cross-correlations are uncertain and should be confirmed photometrically. #### 3.2.2 Fourier-Bessel transformation The line profile of the fast rotating star in S 1113 is not compatible anymore with a Gaussian, therefore we adopt another method to determine its $`v\mathrm{sin}i`$, described in Piters et al. (1996). This method uses the property that the Fourier-Bessel transform of a spectral line that is purely rotationally broadened has a maximum at the position of the projected rotational velocity. In practice, this position is a function of the limits over which the Fourier transform is performed. The local maxima of this velocity-versus-cutoff-frequency (vcf) function approach $`v\mathrm{sin}i`$ within half a percent, the result growing more accurate for maxima at higher frequencies. This error is negligible when compared to errors arising from noise, other line broadening mechanisms, etc. (see Piters et al. 1996). In our determination of $`v\mathrm{sin}i`$ of the primary of S 1113, we have used the first local maximum in the vcf-plot of the transformation of four isolated Fe i lines at $`\lambda \lambda `$ 6265.14, 6400.15, 6408.03 and 6411.54 Å. The $`v\mathrm{sin}i`$ in Table 3 is an average of the resulting values; the $`v\mathrm{sin}i`$ of the secondary is found from crosscorrelation. The application of the Fourier-Bessel transformation method is limited on the low velocity side by the spectral resolution: the Fourier transform cannot be performed beyond the Nyquist frequency which for slow rotators lies at a frequency that is lower than the cutoff-frequency at which the first maximum occurs in the vcf-plot. For our spectra this means that the method cannot be used for $`v\mathrm{sin}i<`$ 5.2 km s<sup>-1</sup>. Indeed, for every star for which the crosscorrelation method gives a $`v\mathrm{sin}i`$ smaller than this value, the vcf-plot does not reach the first local maximum, except for S 1082 for which we find $`v\mathrm{sin}i`$ = 9.5(1.6). For S 1072, the Fourier-Bessel transform gives a $`v\mathrm{sin}i`$ of 12.7(1.0) km s<sup>-1</sup>. Spectral lines were selected from those used in Groot et al. (1996) and from the additional lines used for S 1113. 
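Returning briefly to the double-peaked crosscorrelation of S 1113 (Sect. 3.2.1): the light ratio follows from the heights and dispersions of the two ccf peaks. The sketch below uses the simplest area-based estimate, in which each component's contribution is taken to be proportional to the area (height times dispersion) of its Gaussian ccf peak when both stars are assumed to share the template's spectrum; this is a simplified stand-in for eq. 17 of Gunn et al. (1996), not that equation itself, and the peak parameters are placeholders.

```python
import math

def light_fraction(h1, s1, h2, s2):
    """Fractional light of component 1 from two Gaussian ccf peaks.

    Assumes both components share the template's intrinsic spectrum, so each
    peak's area (~ sqrt(2*pi) * height * dispersion) scales with that star's
    share of the light.  A simplified stand-in for Gunn et al. (1996), eq. 17.
    """
    a1 = math.sqrt(2.0 * math.pi) * h1 * s1
    a2 = math.sqrt(2.0 * math.pi) * h2 * s2
    return a1 / (a1 + a2)

# Placeholder peak parameters (height, dispersion in km/s) for a broad, rapidly
# rotating primary and a narrow secondary; these are not the measured S 1113 values.
print(light_fraction(0.30, 45.0, 0.35, 8.5))   # ~0.82 for these made-up numbers
```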
## 4 Results The results of our search for emission cores in the Ca ii H&K lines are displayed in Fig. 2 and in Table 3. The emission lines are strong in S 1063, S 1113 and S 1040; and still detectable in S 1242 which indicates chromospheric activity in these stars. In S 1072 and S 1237 the emission cores are marginal, and on S 1082 we can only determine an upper limit. The (projected) rotational velocities of all of our stars are relatively small $`v\mathrm{sin}i<10`$km s<sup>-1</sup>, with the exception of S 1113. In Sect. 4.1 we investigate whether the relations between X-ray emission, strength of the emission cores in the Ca ii H&K lines, and the rotational velocities of the unusual X-ray emitters in M$ $67 are similar to the relations found for well-known magnetically active stars, the RS CVn binaries. In Sect. 4.2 we briefly discuss the behaviour of the H$`\alpha `$ line and spectral lines other than Ca ii H&K that are indicators of chromospheric activity. Individual systems are discussed in Sect. 4.3. ### 4.1 Comparison with RS CVn binaries To investigate whether the X-rays of the M$ $67 stars studied in this paper are related to magnetic activity, we compare their optical activity indicators and X-ray fluxes with those of a sample of RS CVn binaries. In particular, we select RS CVn binaries for which fluxes of the emission cores in the Ca ii H&K lines have been determined from high-resolution spectra by Fernández-Figueroa et al. (1994). To obtain X-ray countrates for these binaries, we searched the ROSAT data archive for PSPC observations of them. We then analyzed all these observations, and determined the countrates, in the same bandpass as used in the analysis of M 67, using the standard procedure described in Zimmermann et al. (1994). All pointings that we have analyzed actually led to a positive detection of the RS CVn system: even when not the target of the observation, the RS CVn system is usually the brightest object in the field of view. The results of our analysis are listed in Table 4. To compare systems at different distances, we multiply the ROSAT countrate and the flux of the emission cores for each system with the square of the distance listed in Table 4; for M$ $67 we adopt a distance of 850 pc (Twarog & Anthony-Twarog 1989). No corrections are made for interstellar absorption. The choice of the 0.4–2.4 keV bandpass minimizes the effects of interstellar absorption, which are severe at energies $`<0.4`$ keV. As it is unknown which component of the binary emits the X-rays, we plot the total X-ray and Ca ii fluxes, adding the contributions of both components where these are given separately by Fernández-Figueroa et al. (1994). The resulting ’absolute’ countrates and fluxes are shown in Fig. 3. The M$ $67 systems with Ca ii H&K emission clearly visible in Fig. 2, viz. S 1063, S 1113, S 1040 and S 1242 lie on the relation between X-ray and Ca ii H&K emission defined by the RS CVn systems, in agreement with the hypothesis that the X-ray flux of these objects is related to the magnetic activity. It is also seen that the upper limits or marginally detected emission cores in S 1082, S 1072 and S 1237 are high enough that we cannot exclude the hypothesis that the X-ray emission in these systems is related to magnetic activity. The rotational velocity is another indicator of magnetic activity. 
We investigate the relation between rotational velocity and Ca emission by selecting those stars from the sample of Fernández-Figueroa for which a value of $`v\mathrm{sin}i`$ is given in the Catalogue of Chromospherically Active Binary Stars (Strassmeier et al. 1993). In Fig. 4 the Ca ii H&K emission of these stars is compared with their $`v\mathrm{sin}i`$. In this figure we do discriminate between the separate contributions of both stars to $`F_{\mathrm{Ca}}`$, with the exception of S 1113 for which we combine the total flux $`F_{\mathrm{Ca}}`$ with the $`v\mathrm{sin}i`$ of the primary. The M$ $67 stars are found within the range occupied by chromospherically active stars. We note that the correlation between the observed H&K flux and $`v\mathrm{sin}i`$ is not tight. In particular, high and low Ca ii H&K emission flux is found at low values of $`v\mathrm{sin}i`$. Some of the scatter may be due to the use of $`v\mathrm{sin}i`$ instead of the stellar rotation period. Parameters depending on the spectral type (e.g. properties of the convective region) have been used to reduce the scatter in the activity-rotation relation; whereas this is successful for main-sequence stars with $`0.5\text{ }<BV\text{ }<0.8`$, it fails for other main-sequence stars and for giants (see discussion in Stȩpień 1994). For example, the three giants 33 Psc (K0 III), 12 Cam (K0 III) and DR Dra (K0-2 III) have $`v\mathrm{sin}i`$ values of 10, 10 and 8 km s<sup>-1</sup>, respectively, but differ in $`\mathrm{log}d^2F_{\mathrm{Ca}}`$ by three orders of magnitude (see Fig. 4). ### 4.2 Activity indicators The H$`\alpha `$ lines ($`\lambda `$ 6562.76 Å) of only two stars, S 1063 and S 1113, show clear evidence of emission, as shown in Fig. 6. This is described in more detail in their individual subsection in Sect. 4.3. For the other M$ $67 stars, we have used the few (sub)giants in the library of UES-spectra of Montes & Martín (1998) to investigate the behaviour of H$`\alpha `$. We have chosen library spectra of stars that match the spectra of the M$ $67 stars as closely as possible (see Fig. 5). S 1072 and S 1237 show no evidence for filling in of the H$`\alpha `$ profile compared to a G0IV–V and a K0III star, respectively. In S 1040 H$`\alpha `$ seems slightly filled in compared to a G8IV and a K0III star. This is also the case for S 1242 compared to a G0IV-V star, but we note that no classification for this star is found in literature. For S 1082, no matching spectrum is available in this wavelength region. Filling in of the lines in the Mg i b triplet ($`\lambda \lambda `$ 5167.33, 5172.70 and 5183.62 Å) and in the Na i D doublet ($`\lambda \lambda `$ 5889.95 and 5895.92 Å) is visible in some active stars. The presence of a He i D<sub>3</sub> ($`\lambda `$ 5876.56) absorption or emission feature can also indicate activity (see discussion in Montes & Martín (1998) and references therein). However, in none of the M$ $67 stars we see filled in Mg i b and Na i D lines. Neither do we see a clear He i D<sub>3</sub> feature. For S 1082 (Mg i b and Na i D) and S 1113 (Mg i b, Na i D and He i D<sub>3</sub>) we find no suitable library stars for these features. ### 4.3 Individual systems #### 4.3.1 S 1063 and S 1113 The two stars below the subgiant branch, S 1063 and S 1113, both show relatively strong Ca ii H&K emission, and are the only two stars in our sample showing H$`\alpha `$ in emission, shown in Fig. 6. We use the orbital solutions for both objects to try and identify the star responsible for these emission lines. 
The velocities of both components in S 1113, and of one component in S 1063 are indicated in Fig. 6. The H$`\alpha `$ line profile of S 1063 is asymmetric, showing emission which is blue-shifted with respect to the absorption. The location of the absorption line is compatible with the velocity of the primary, which dominates the flux; the emission is probably due to the secondary. Remarkably, the Ca ii H&K emission peak is at the velocity of the primary. This suggests that the H$`\alpha `$ emission is not chromospheric in nature. The H$`\alpha `$ emission of S 1063 does not show the double peak that is known to indicate accretion disk emission (Horne & Marsh, 1986). In S 1113 the H$`\alpha `$ emission profile is symmetric and broad, with full width at continuum level of 15 Å. The emission peak is centered on the more massive star, which contributes 82% of the total flux (Table 3). This suggests that the H$`\alpha `$ emission is due to the primary. The Ca ii H&K emission shows marginal evidence for a double peak, suggesting that both stars contribute to the chromospheric emission. In Fig. 7 we indicate the expected position of the H&K lines for both stars. For the phase observed, their peaks overlap in the crosscorrelation function. In the figure we use the velocities resulting from fitting the order that gives the ’cleanest’ crosscorrelation. #### 4.3.2 S 1072 and S 1237 The Ca ii H&K emission in the wide binaries S 1072 and S 1237 is only marginally significant. The level of their X-ray and Ca ii emission is more appropriate for active main-sequence stars (Fig.3). One might speculate that it is due to the invisible companion of the giant detected in the crosscorrelation; even if this were the case, we would not understand why this companion would be chromospherically active. We conclude that we do not understand why these two stars are X-ray sources. We find no indication for a faint secondary in the crosscorrelation profile of S 1237; at the time of observation, the spectra of two equally massive stars as suggested by Janes & Smith (1984) would be separated by 4 to 7 km s<sup>-1</sup> (derived from the ephemeris in Mathieu et al. 1990). Since the secondary is 1.6 magnitude fainter in V than the primary, we think that this small separation is compatible with finding a single peak in the crosscorrelation. #### 4.3.3 S 1242 S 1242 is chromospherically active, as shown by its Ca ii H&K emission. We suggest that this activity, which also explains the X-rays, is due to rapid rotation induced by tidal interaction at periastron, which tries to bring the subgiant into corotation with the orbit at periastron. If we assume that the observed period of photometric variability is the rotation period we derive an inclination of $``$ 9$`\mathrm{°}`$ using our maximum value of $`v\mathrm{sin}i`$ and the estimated radius. This would be in agreement with a companion at the high end of the range 0.14–0.94 $`M_{}`$ allowed by the mass function (Mathieu et al. 1990). #### 4.3.4 S 1040 Our detection of clear chromospheric emission indicates that the X-ray emission of S 1040 is due to the giant. The white dwarf has a low temperature, and is unlikely to contribute to the X-ray flux. We find a rather slow rotational velocity for the giant, 3 km s<sup>-1</sup>. Gilliland et al. (1991) detected a periodicity of 7.97 days in the visual flux (B and V bandpasses) of S 1040, with an amplitude of 0.012 mag. If this is the rotation period of the giant, the radius of 5.1 $`R_{\mathrm{}}`$ (Landsman et al. 
1997) implies an equatorial rotation velocity of $`v=32`$ km s<sup>-1</sup>. This is compatible with the velocity measured with our crosscorrelation, $`v\mathrm{sin}i`$, for an inclination $`i\text{ }<5.3\mathrm{°}`$. This inclination has an a priori probability less than 0.5%; and it implies an unacceptably high mass for the white dwarf, from the measured mass function $`f(m)=0.00268`$. We conclude that the 8 days period cannot be the rotation period of the giant. It is doubtful that the white dwarf can be responsible, as its contribution to the $`B`$ and $`V`$ flux is small. #### 4.3.5 S 1082 The H$`\alpha `$ absorption profile of the blue straggler S 1082 is variable. If we consider the most symmetric spectrum profile, that of 00:01 UT, as the unperturbed profile of the primary, we find that the changes are due to extra emission. This is illustrated in Fig. 8. We suggest that this variation is due to the subluminous companion, possibly to a wind of that star. We have also investigated the presence of a broad shallow depression underlying the Na i D lines (near $`\lambda `$ 5895 Å) and the O i triplet (near $`\lambda `$ 7775 Å) as found by Mathys (1991). We find that this broad component is variable, as illustrated in Fig. 9. Mathys (1991) suggests that the broad component originates in the subluminous companion. This companion outshines the primary by a factor six at $`\lambda `$ 1500 Å and thus is presumably hot (Landsman et al. 1998). We note that the star cannot be too hot or it would not show neutral lines. ## 5 Discussion and conclusions In this paper we have tried to find an explanation for the X-ray emission of seven sources in M$ $67. For S 1242 and S 1040 we have concluded from the Ca ii H&K emission cores that magnetic activity is responsible for the X-rays. This is supported by filling in of H$`\alpha `$ (see e.g. Montes et al. 1997; Eker et al. 1995). In S 1242, activity is likely to be triggered by interaction at periastron in the eccentric orbit. This is also reflected in the period of photometric variability. For S 1040, the reason for activity is less clear. The explanation could involve mass transfer from the precursor of the white dwarf to the giant and the latter’s subsequent expansion during the giant phase. As was already noted by Landsman et al. (1997), a similar system is AY Cet, a binary of a white dwarf of $`T_{\mathrm{eff}}=\mathrm{18\hspace{0.17em}000}`$ K (Simon et al. 1985) and a G5III giant in an 56.8 days circular orbit. The $`v\mathrm{sin}i`$ of that giant is also low, 4 km s<sup>-1</sup>, and the long photometric period of 77.2 days implies asynchronous rotation (Strassmeier et al. 1993). The X-ray luminosity for AY Cet is $`1.5\times 10^{31}`$ erg s<sup>-1</sup> in the 0.2–4 keV band as measured with Einstein by Walter & Bowyer (1981), somewhat higher than the luminosity of S 1040. (With the coronal model discussed by Belloni et al. (1998), the countrate for S 1040 corresponds to $`5.6\times 10^{30}`$ erg s<sup>-1</sup> in the 0.2–4 keV band.) Walter & Bowyer attribute the X-rays to coronal activity of the giant. The Ca ii H&K emission cores in S 1063 and S 1113 are very strong. In S 1113 we might even see emission of both stars. Due to the shape of the H$`\alpha `$ emission we cannot conclude with certainty that the X-rays arise in an active corona and not in a disk or stream. The wings in the emission peak of S 1113 are very broad. However, Montes et al. 
(1997) have demonstrated that the excess emission in the H$`\alpha `$ lines of the more active binaries is sometimes a composite of a narrow and a broad component, the latter having a full width at half maximum of up to 470 km s<sup>-1</sup>. They ascribe this broad component to microflaring accompanied by large scale motions. We note the similarity between S 1113 and V711 Tau, a well-known, extremely active binary of a G5IV ($`v\mathrm{sin}i`$ = 13 km s<sup>-1</sup>) and a K1IV ($`v\mathrm{sin}i=38`$ km s<sup>-1</sup>) star in a 2.84 day circular orbit with a mass ratio of 0.79 (Strassmeier et al. 1993); the mass ratio of S 1113 is 0.70 (Mathieu et al. 1998, in preparation). From the countrate of V711 Tau in Table 4 we find $`L_\mathrm{x}=6.8\times 10^{30}`$ erg s<sup>-1</sup> in the 0.1–2.4 keV band, using the same model as in Belloni et al. (1998) with $`N_H=0`$, which is comparable to the luminosity of S 1113 in the same band, $`L_\mathrm{x}=7.3\times 10^{30}`$ erg s<sup>-1</sup> (Belloni et al. 1998). The H$`\alpha `$ emission of S 1063 is more difficult to explain. As this system is not double-lined, H$`\alpha `$ emission by the (invisible) secondary star would have to be strong to rise above the continuum of the primary. In the binaries S 1072 and S 1237 we see no H$`\alpha `$ emission, while the level of Ca ii H&K emission is low in comparison with active stars of the same luminosity class. We have no explanation for this. For S 1072, one possibility is a misidentification of the X-ray source with its optical counterpart: Belloni et al. (1998) give a probability of 43% that one or two of their twelve identifications of an X-ray source with a binary in M 67 are due to chance. No Ca ii H&K emission is seen in the spectrum of the blue straggler S 1082. Possibly, the X-ray emission is connected with the hot, subluminous secondary that could also cause the photometric variability and whose signature we might have seen in the H$`\alpha `$ line.

###### Acknowledgements.

The authors wish to thank G. Geertsema for her help during our observations, P. Groot for providing his program to compute projected rotational velocities with the Fourier-Bessel transformation method and M. van Kerkwijk for comments on the manuscript. MvdB is supported by the Netherlands Organization for Scientific Research (NWO).
# Rate coefficients for rovibrational transitions in H2 due to collisions with He ## 1 Introduction Rovibrationally excited H<sub>2</sub> molecules have been observed in many astrophysical objects (for recent studies, see Weintraub et al. 1998; van Dishoeck et al. 1998; Shupe et al. 1998; Bujarrabal et al. 1998; Stanke et al. 1998). The rovibrational levels of the molecule may be populated by ultraviolet pumping, by X-ray pumping, by the formation mechanism, and by collisional excitation in shock-heated gas (Dalgarno 1995). The excited level populations are then modified by collisions followed by quadrupole emissions. The main colliding partners apart from H<sub>2</sub> are H and He. Although He is only one tenth as abundant as H, collisions with He may have a significant influence in many astronomical environments depending on the density, temperature and the initial rotational and vibrational excitation of the molecule. Collisions with He and H<sub>2</sub> are particularly important when most of the hydrogen is in molecular form, as in dense molecular clouds. To interpret observations of the radiation emitted by the gas, the collision cross sections and corresponding rate coefficients characterizing the collisions must be known. Emissions from excited rovibrational levels of the molecule provide important clues regarding the physical state of the gas, dissociation, excitation and formation properties of H<sub>2</sub>. Here we investigate the collisional relaxation of vibrationally excited H<sub>2</sub> by He. Rovibrational transitions in H<sub>2</sub> induced by collisions with He atoms have been the subject of a large number of theoretical calculations in the past (Alexander 1976, 1977; Alexander and McGuire 1976; Dove et al. 1980; Eastes and Secrest 1972; Krauss and Mies 1965; McGuire and Kouri 1974; Raczkowski et al. 1978) and continue to attract experimental (Audibert et al. 1976; Michaut et al. 1998) and theoretical attention (Flower et al. 1998; Dubernet & Tuckey 1999; Balakrishnan et al. 1999). Recent theoretical calculations are motivated by the availability of more accurate representations of the interaction potentials and the possibility of performing quantum mechanical calculations with few approximations. The potential energy surface determined by Muchnick and Russek (1994) was used by Flower et al. (1998) and by Balakrishnan et al. (1999) in recent quantum mechanical calculations of rovibrational transition rate coefficients for temperatures ranging from 100 to 5000K. Flower et al. presented their results for vibrational levels $`v=0,1`$ and 2 of ortho- and para-H<sub>2</sub>. Balakrishnan et al. (1999) reported similar results for $`v=0`$ and 1. Though both authors have adopted similar close-coupling approaches for the scattering calculations, Flower et al. used a harmonic oscillator approximation for H<sub>2</sub> vibrational wave functions in evaluating the matrix elements of the potential while the calculations of Balakrishnan et al. made use of the H<sub>2</sub> potential of Schwenke (1988) and the corresponding numerically determined wave functions. The results of the two calculations agreed well for pure rotational transitions but some discrepancies were seen for rovibrational transitions. We believe this may be due to the different choice of vibrational wave functions. The sensitivity of the rate coefficients to the choice of the H<sub>2</sub> wave function was noted previously and differences could be significant for excited vibrational levels. 
We find this to be the case for transitions involving $`v2`$. Thus, in this article, we report rate coefficients for transitions from $`v=2`$ to 6 initial states of H<sub>2</sub> induced by collisions with He atoms using numerically exact quantum mechanical calculations. We also report results of quasiclassical trajectory (QCT) calculations and examine the suitability of classical mechanical calculations in predicting rovibrational transitions in H<sub>2</sub>. ## 2 Results The quantum mechanical calculations were performed using the nonreactive scattering program MOLSCAT developed by Hutson and Green (1994) with the He-H<sub>2</sub> interaction potential of Muchnick and Russek (1994) and the H<sub>2</sub> potential of Schwenke (1988). We refer to our earlier paper (Balakrishnan, Forrey & Dalgarno, 1999) for details of the numerical implementation. Different basis sets were used in the calculations for transitions from different initial vibrational levels. We use the notation \[$`v_1`$$`v_2`$\]($`j_1`$$`j_2`$) to represent the basis set where the quantities within the square brackets give the range of vibrational levels and those in braces give the range of rotational levels coupled in each of the vibrational levels. For transitions from $`v=2,3`$ and 4 we used, respectively, the basis sets \[0–3\](0–11) & (0–3), \[0–3\](0–11) & (0–9) and \[3–5\](0–11) & (0–11). For $`v=5`$ and 6 of para H<sub>2</sub> we used, respectively, \[4–6\](0–14) & (0–8) and \[5–7\](0–14) & (0–8). During the calculations, we found that the $`\mathrm{\Delta }v=\pm 2`$ transitions are weak with cross sections that are typically orders of magnitude smaller than for the $`\mathrm{\Delta }v=\pm 1`$ transitions. Thus, for $`v=5`$ and 6 of ortho-H<sub>2</sub>, we have only included the $`\mathrm{\Delta }v=\pm 1`$ vibrational levels with $`j`$=0–13 in the basis set to reduce the computational effort. The basis sets were chosen as a compromise between numerical efficiency and accuracy and could introduce some truncation errors for transitions to levels which lie at the outer edge of the basis set. Our convergence tests show that truncation errors are small. Rovibrational transition cross sections $`\sigma _{vj,v^{}j^{}}`$ where the pairs of numbers $`vj`$ and $`v^{}j^{}`$ respectively denote the initial and final rovibrational quantum numbers, were computed for kinetic energies ranging from 10<sup>-4</sup> to 3 eV. Sufficient total angular momentum partial waves were included in the calculations to secure convergence of the cross sections. The quasiclassical calculations were carried out using the standard classical trajectory method as described by Lepp, Buch and Dalgarno (1995) in which the procedure of Blais and Truhlar (1976) was adopted for the final state analysis. Because rovibrational transitions are rare at low velocities, useful results could be obtained only for collisions at energies above 0.1 eV. The results are averages over 10000 trajectories. The quantum mechanical calculations were performed using MOLSCAT (Hutson & Green, 1994) suitably adapted for the present system with the potential represented by a Legendre polynomial expansion in which we retained nonvanishing terms of orders 0 to 10, inclusive. Calculation of rovibrational transition rate coefficients over a wide range of temperatures requires the determination of scattering cross sections at the energies spanned by the Boltzmann distribution at each temperature. 
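To make the averaging step concrete, a minimal sketch follows. It uses the textbook Maxwell–Boltzmann average over the relative kinetic energy and the detailed-balance relation quoted later with the tables (Eq. 1); it is illustrative rather than a reproduction of the authors' numerical procedure, and the cross-section array and the approximate He–H<sub>2</sub> reduced mass are placeholders.

```python
import numpy as np

KB_ERG = 1.380649e-16            # Boltzmann constant [erg/K]
KB_EV = 8.617333262e-5           # Boltzmann constant [eV/K]
EV_TO_ERG = 1.602176634e-12
AMU_TO_G = 1.66053906660e-24

def rate_coefficient(energy_ev, sigma_cm2, temperature_k, reduced_mass_amu):
    """k(T) = sqrt(8/(pi*mu)) (kB T)^(-3/2) * Int sigma(E) E exp(-E/kB T) dE, in cm^3/s."""
    e = np.asarray(energy_ev, dtype=float) * EV_TO_ERG
    sig = np.asarray(sigma_cm2, dtype=float)
    kT = KB_ERG * temperature_k
    mu = reduced_mass_amu * AMU_TO_G
    integrand = sig * e * np.exp(-e / kT)
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(e))   # trapezoid rule
    return np.sqrt(8.0 / (np.pi * mu)) * kT ** -1.5 * integral

def excitation_rate(k_deexc, j_upper, j_lower, delta_e_ev, temperature_k):
    """Upward rate from the downward one via detailed balance (Eq. 1); delta_e_ev > 0."""
    boltz = np.exp(-delta_e_ev / (KB_EV * temperature_k))
    return (2 * j_upper + 1) / (2 * j_lower + 1) * boltz * k_deexc

# Hypothetical usage: a flat 1e-17 cm^2 cross section tabulated on the
# 1e-4 to 3 eV grid mentioned above, He-H2 reduced mass ~ 4*2/(4+2) amu.
energy = np.logspace(-4, np.log10(3.0), 200)
sigma = np.full_like(energy, 1.0e-17)
k1000 = rate_coefficient(energy, sigma, 1000.0, 4.0 * 2.0 / 6.0)   # ~4e-12 (sigma times mean speed)
```

The averaging itself is cheap; the expensive ingredient is the quantum mechanical cross section $`\sigma (E)`$ on a sufficiently dense energy grid.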
This is a computationally demanding problem especially when quantum mechanical calculations are required. For many systems, QCT calculations offer a good compromise between accuracy and computational effort. However, the validity of classical mechanics is in question especially for lighter systems and at lower temperatures where quantum mechanical effects such as tunneling are important. Due to the small masses of the atoms involved, the present system offers an excellent opportunity to test the reliability of QCT calculations in predicting rovibrational transitions. Though such attempts have been made in the past (Dove et al. 1980), the classical mechanical and quantum mechanical calculations were done in different energy regimes and a one-to-one comparison was not possible. We carried out quantum mechanical and QCT calculations of rovibrational transition cross sections for the present system over a wide range of energies. In Figure 1 we compare the results for pure rotational de-excitation transitions ($`\mathrm{\Delta }j=j^{}-j=-2`$) with $`j=2`$ in $`v=0`$ to 5. There are striking similarities and differences between the quantum mechanical and QCT results. They agree quite well at higher energies and have a similar energy dependence, with both calculations predicting the same maximum for the cross sections before they fall off. In contrast, the agreement becomes less satisfactory at lower energies, with the QCT cross sections rapidly decreasing to zero as the energy is decreased. The quantum mechanical results pass through minima and subsequently increase with further decrease of the kinetic energy. This is a purely quantum mechanical effect that has important consequences for the low temperature rate coefficients (Balakrishnan et al. 1998). The quantum mechanical cross sections eventually conform to an inverse velocity dependence as the relative translational energy is decreased to zero, and the corresponding rate coefficients are finite in the limit of zero temperature, in accordance with Wigner’s threshold law. The results illustrate that QCT calculations may be used reliably to calculate rotational transitions for energies higher than 0.5 eV, but at lower energies quantum mechanical calculations must be employed. The energy regime for the validity of QCT results is more restricted for rovibrational transitions, as illustrated in Figure 2 in which we show cross sections for rovibrational transitions $`v,j\rightarrow v-1,j+2`$ with $`j=5`$ in $`v=3`$ to 5. The results indicate that the QCT method is inadequate to calculate rovibrational transition cross sections at impact energies below 1 eV. The QCT results exhibit a sharp fall below a threshold near an energy of 1 eV, whereas the quantum mechanical results vary smoothly. Similar results hold for other transitions. The higher collision energies required for the validity of the QCT method for rovibrational transitions compared to pure rotational transitions are due in part to the much smaller cross sections for rovibrational transitions, which make the results more sensitive to the details of the dynamics. Rate coefficients $`k_{vj,v^{}j^{}}(T)`$ were calculated for the temperature range $`T=100`$ to 4000 K by averaging the cross sections over a Boltzmann distribution of relative velocities. Flower et al. (1998) have reported rate coefficients for rovibrational transitions from $`v=0`$ to 2 for $`T=1000,2000`$ and 4500 K. A comparison of some of the de-excitation rate coefficients from the $`v=2`$ level calculated in this paper and those reported by Flower et al.
is given in Table 1 for $`T=1000`$ K. The agreement is good for rovibrational transitions involving $`|\mathrm{\Delta }j|=0,2`$ and 4 but larger differences, by factors between 2 and 4, are seen for transitions involving larger values of $`|\mathrm{\Delta }j|`$. Our rate coefficients are generally greater than those of Flower et al. (1998) for transitions where the discrepancy is large. Our rate coefficient calculations extend those of Flower et al. (1998) to include the $`v=4,5`$ and 6 vibrational levels. The preferential formation of H<sub>2</sub> in these vibrational levels has been discussed by Dalgarno (1995). In Tables 2 and 3, we present our comprehensive results for rovibrational de-excitation transitions from different rotational levels in para- and ortho-H<sub>2</sub> from the $`v=2`$ level as functions of temperature. The corresponding excitation rate coefficients may be obtained from detailed balance $$k_{v^{}j^{},vj}(T)=\frac{(2j+1)}{(2j^{}+1)}\mathrm{exp}[(ϵ_{v^{}j^{}}-ϵ_{vj})/k_BT]k_{vj,v^{}j^{}}(T)$$ (1) where $`k_B`$ is the Boltzmann constant and $`ϵ_{vj}`$ is the rovibrational energy of the molecule in the $`v,j`$ level. Similar results for $`v=3`$ to 6 are presented in Tables 4-11. Tables 2-11 reveal some interesting aspects of energy transfer. It can be seen that for temperatures less than 1000 K, rate coefficients for rovibrational transitions involving $`\mathrm{\Delta }j=-4\mathrm{\Delta }v`$ predominate over other transitions where changes in both $`v`$ and $`j`$ occur. This is clearly seen for ($`vj,v^{}j^{}`$) transitions with $`\mathrm{\Delta }v=v^{}-v=-1`$ for $`j=3`$ to 7 and the effect becomes stronger with increasing rotational excitation but less important as $`T`$ increases. This is an example of quasi-resonant scattering (Stewart et al. 1988; Forrey et al. 1999). A detailed study of this process in the limit of zero temperature and its correspondence with classical mechanics has been carried out recently (Forrey et al. 1999). The efficient conversion of vibrational energy into rotational energy may produce a significant population of high rotational levels in environments where molecular hydrogen is subjected to an intense flux of X-rays or ultraviolet photons.

## 3 Acknowledgments

This work was supported by the National Science Foundation (NSF), Division of Astronomy. MV was supported by the NSF through the Research Experience for Undergraduates program at the Smithsonian Astrophysical Observatory.

| $`vj,v^{}j^{}`$ | This Work | Flower et al.
| | --- | --- | --- | | 20,10 | 1.24(-15) | 1.1(-15) | | 20,12 | 2.03(-15) | 2.6(-15) | | 20,14 | 3.65(-15) | 3.7(-15) | | 20,16 | 3.28(-15) | 1.3(-15) | | 20,18 | 7.75(-16) | 1.6(-16) | | 22,10 | 3.64(-16) | 3.4(-16) | | 22,12 | 2.87(-15) | 2.8(-15) | | 22,14 | 8.05(-15) | 7.2(-15) | | 22,16 | 5.16(-15) | 4.2(-15) | | 22,18 | 2.14(-15) | 6.3(-16) | | 22,20 | 2.45(-11) | 1.8(-11) | | 21,11 | 1.82(-15) | 1.8(-15) | | 21,13 | 4.38(-15) | 4.6(-15) | | 21,15 | 3.95(-15) | 3.4(-15) | | 21,17 | 2.52(-15) | 8.0(-16) | | 23,11 | 9.48(-16) | 8.6(-16) | | 23,13 | 4.42(-15) | 3.7(-15) | | 23,15 | 1.41(-14) | 1.2(-14) | | 23,17 | 1.19(-14) | 7.7(-15) | | 23,21 | 2.55(-11) | 1.8(-11) | | | $`T(K)`$ | | | | | | | | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | $`vj,v^{}j^{}`$ | 100 | 200 | 300 | 500 | 1000 | 2000 | 3000 | 4000 | | 20,10 | 2.81(-19) | 2.27(-18) | 9.84(-18) | 7.51(-17) | 1.24(-15) | 1.67(-14) | 6.93(-14) | 1.80(-13) | | 20,12 | 9.72(-19) | 6.91(-18) | 2.80(-17) | 1.76(-16) | 2.03(-15) | 2.44(-14) | 1.05(-13) | 2.80(-13) | | 20,14 | 1.05(-19) | 1.36(-18) | 9.47(-18) | 1.29(-16) | 3.65(-15) | 5.64(-14) | 2.11(-13) | 4.88(-13) | | 20,16 | 1.39(-18) | 8.83(-18) | 3.23(-17) | 1.94(-16) | 3.28(-15) | 6.83(-14) | 3.10(-13) | 7.69(-13) | | 20,18 | 1.20(-19) | 9.40(-19) | 2.56(-18) | 1.96(-17) | 7.75(-16) | 2.92(-14) | 2.03(-13) | 6.54(-13) | | 22,10 | 6.41(-20) | 2.14(-19) | 8.50(-19) | 1.27(-17) | 3.64(-16) | 6.39(-15) | 2.67(-14) | 6.62(-14) | | 22,12 | 6.45(-19) | 2.72(-18) | 1.09(-17) | 1.26(-16) | 2.87(-15) | 4.25(-14) | 1.70(-13) | 4.19(-13) | | 22,14 | 2.89(-18) | 1.80(-17) | 7.27(-17) | 5.46(-16) | 8.05(-15) | 8.79(-14) | 3.06(-13) | 6.87(-13) | | 22,16 | 3.88(-18) | 2.36(-17) | 8.03(-17) | 3.99(-16) | 5.16(-15) | 8.97(-14) | 3.82(-13) | 9.27(-13) | | 22,18 | 4.49(-17) | 8.25(-17) | 1.17(-16) | 2.38(-16) | 2.14(-15) | 4.58(-14) | 2.71(-13) | 8.12(-13) | | 22,20 | 1.68(-12) | 3.71(-12) | 5.98(-12) | 1.10(-11) | 2.45(-11) | 4.69(-11) | 6.24(-11) | 7.36(-11) | | 24,10 | 9.46(-21) | 7.68(-20) | 4.14(-19) | 4.68(-18) | 1.60(-16) | 3.91(-15) | 1.87(-14) | 4.86(-14) | | 24,12 | 1.14(-19) | 9.15(-19) | 4.70(-18) | 4.86(-17) | 1.41(-15) | 2.82(-14) | 1.22(-13) | 2.99(-13) | | 24,14 | 8.87(-19) | 6.94(-18) | 3.26(-17) | 2.88(-16) | 6.16(-15) | 9.36(-14) | 3.61(-13) | 8.44(-13) | | 24,16 | 1.02(-17) | 7.03(-17) | 2.81(-16) | 1.90(-15) | 2.53(-14) | 2.52(-13) | 8.10(-13) | 1.72(-12) | | 24,18 | 2.63(-16) | 1.08(-15) | 2.71(-15) | 8.61(-15) | 3.62(-14) | 1.90(-13) | 6.59(-13) | 1.55(-12) | | 24,20 | 9.46(-15) | 3.58(-14) | 8.74(-14) | 2.83(-13) | 1.32(-12) | 4.83(-12) | 8.72(-12) | 1.22(-11) | | 24,22 | 2.98(-13) | 9.52(-13) | 2.02(-12) | 5.31(-12) | 1.82(-11) | 4.96(-11) | 7.85(-11) | 1.03(-10) | | 26,10 | 1.11(-21) | 1.15(-20) | 6.67(-20) | 8.71(-19) | 4.39(-17) | 1.69(-15) | 1.03(-14) | 3.09(-14) | | 26,12 | 1.26(-20) | 1.27(-19) | 7.17(-19) | 8.90(-18) | 3.93(-16) | 1.29(-14) | 7.14(-14) | 2.00(-13) | | 26,14 | 1.13(-19) | 1.04(-18) | 5.57(-18) | 6.14(-17) | 2.03(-15) | 4.86(-14) | 2.27(-13) | 5.77(-13) | | 26,16 | 1.60(-18) | 1.29(-17) | 6.08(-17) | 5.44(-16) | 1.18(-14) | 1.84(-13) | 7.02(-13) | 1.60(-12) | | 26,18 | 5.89(-17) | 3.70(-16) | 1.37(-15) | 8.29(-15) | 9.59(-14) | 8.22(-13) | 2.35(-12) | 4.50(-12) | | 26,110 | 1.31(-14) | 3.07(-14) | 5.17(-14) | 9.89(-14) | 2.22(-13) | 5.31(-13) | 1.29(-12) | 2.72(-12) | | 26,20 | 6.77(-17) | 3.80(-16) | 1.28(-15) | 6.85(-15) | 6.87(-14) | 5.18(-13) | 1.34(-12) | 2.31(-12) | | 26,22 | 1.42(-15) | 7.24(-15) | 2.21(-14) | 1.02(-13) | 
8.00(-13) | 4.63(-12) | 1.05(-11) | 1.68(-11) | | 26,24 | 4.64(-14) | 1.86(-13) | 4.60(-13) | 1.55(-12) | 7.81(-12) | 2.96(-11) | 5.41(-11) | 7.68(-11) | | | $`T(K)`$ | | | | | | | | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | $`vj,v^{}j^{}`$ | 100 | 200 | 300 | 500 | 1000 | 2000 | 3000 | 4000 | | 21,11 | 2.45(-19) | 2.25(-18) | 1.04(-17) | 9.14(-17) | 1.82(-15) | 2.75(-14) | 1.16(-13) | 2.97(-13) | | 21,13 | 1.42(-18) | 1.09(-17) | 4.66(-17) | 3.28(-16) | 4.38(-15) | 4.69(-14) | 1.72(-13) | 4.12(-13) | | 21,15 | 6.32(-19) | 4.64(-18) | 1.60(-17) | 1.32(-16) | 3.95(-15) | 7.51(-14) | 3.01(-13) | 7.02(-13) | | 21,17 | 8.43(-19) | 5.24(-18) | 2.23(-17) | 1.58(-16) | 2.52(-15) | 5.66(-14) | 2.93(-13) | 7.85(-13) | | 21,19 | 6.90(-19) | 1.67(-18) | 2.07(-18) | 6.96(-18) | 3.28(-16) | 1.52(-14) | 1.30(-13) | 4.73(-13) | | 23,11 | 6.70(-20) | 1.04(-18) | 5.29(-18) | 4.16(-17) | 9.48(-16) | 1.74(-14) | 7.33(-14) | 1.80(-13) | | 23,13 | 5.18(-19) | 7.20(-18) | 3.47(-17) | 2.46(-16) | 4.42(-15) | 6.56(-14) | 2.54(-13) | 6.01(-13) | | 23,15 | 4.76(-18) | 2.24(-17) | 1.05(-16) | 9.22(-16) | 1.41(-14) | 1.49(-13) | 5.02(-13) | 1.09(-12) | | 23,17 | 3.15(-17) | 1.80(-16) | 5.32(-16) | 1.98(-15) | 1.19(-14) | 1.18(-13) | 4.73(-13) | 1.14(-12) | | 23,19 | 1.74(-17) | 2.64(-17) | 2.44(-17) | 7.30(-17) | 1.81(-15) | 3.76(-14) | 2.34(-13) | 7.32(-13) | | 23,21 | 7.73(-13) | 2.12(-12) | 4.03(-12) | 9.09(-12) | 2.55(-11) | 5.93(-11) | 8.67(-11) | 1.08(-10) | | 25,11 | 3.36(-21) | 1.25(-19) | 6.81(-19) | 8.38(-18) | 3.34(-16) | 9.44(-15) | 4.83(-14) | 1.30(-13) | | 25,13 | 3.68(-20) | 9.71(-19) | 4.92(-18) | 5.35(-17) | 1.75(-15) | 3.86(-14) | 1.72(-13) | 4.26(-13) | | 25,15 | 7.05(-19) | 8.96(-18) | 4.23(-17) | 3.76(-16) | 8.51(-15) | 1.31(-13) | 4.98(-13) | 1.14(-12) | | 25,17 | 1.97(-17) | 1.42(-16) | 5.78(-16) | 3.80(-15) | 4.82(-14) | 4.46(-13) | 1.34(-12) | 2.69(-12) | | 25,19 | 1.74(-15) | 5.74(-15) | 1.27(-14) | 3.38(-14) | 1.04(-13) | 3.28(-13) | 8.92(-13) | 1.93(-12) | | 25,21 | 2.81(-15) | 1.46(-14) | 4.69(-14) | 2.02(-13) | 1.26(-12) | 5.89(-12) | 1.21(-11) | 1.82(-11) | | 25,23 | 7.59(-14) | 3.04(-13) | 8.30(-13) | 2.83(-12) | 1.21(-11) | 3.86(-11) | 6.57(-11) | 8.97(-11) | | 27,11 | 7.46(-22) | 1.06(-20) | 7.90(-20) | 1.30(-18) | 7.34(-17) | 3.33(-15) | 2.23(-14) | 7.02(-14) | | 27,13 | 6.52(-21) | 8.65(-20) | 6.02(-19) | 8.89(-18) | 4.15(-16) | 1.49(-14) | 8.65(-14) | 2.48(-13) | | 27,15 | 7.60(-20) | 9.01(-19) | 5.62(-18) | 6.93(-17) | 2.34(-15) | 5.81(-14) | 2.76(-13) | 7.02(-13) | | 27,17 | 1.22(-18) | 1.71(-17) | 8.62(-17) | 7.31(-16) | 1.59(-14) | 2.49(-13) | 9.32(-13) | 2.07(-12) | | 27,19 | 1.28(-16) | 8.18(-16) | 3.14(-15) | 1.87(-14) | 1.93(-13) | 1.47(-12) | 3.91(-12) | 7.10(-12) | | 27,111 | 1.41(-14) | 2.79(-14) | 4.96(-14) | 1.10(-13) | 2.76(-13) | 6.45(-13) | 1.51(-12) | 3.12(-12) | | 27,21 | 1.52(-17) | 1.35(-16) | 6.40(-16) | 4.63(-15) | 6.16(-14) | 6.02(-13) | 1.77(-12) | 3.33(-12) | | 27,23 | 2.32(-16) | 1.87(-15) | 8.06(-15) | 4.96(-14) | 4.84(-13) | 3.35(-12) | 8.23(-12) | 1.39(-11) | | 27,25 | 9.84(-15) | 5.68(-14) | 1.95(-13) | 8.65(-13) | 5.19(-12) | 2.24(-11) | 4.39(-11) | 6.47(-11) | | | $`T(K)`$ | | | | | | | | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | $`vj,v^{}j^{}`$ | 100 | 200 | 300 | 500 | 1000 | 2000 | 3000 | 4000 | | 30,20 | 1.07(-18) | 8.34(-18) | 3.53(-17) | 2.47(-16) | 3.49(-15) | 4.21(-14) | 1.66(-13) | 4.08(-13) | | 30,22 | 3.26(-18) | 2.24(-17) | 8.59(-17) | 4.87(-16) | 4.90(-15) | 5.66(-14) | 2.34(-13) | 5.94(-13) | | 30,24 | 
3.10(-19) | 5.35(-18) | 2.79(-17) | 3.64(-16) | 9.40(-15) | 1.21(-13) | 4.20(-13) | 9.36(-13) | | 30,26 | 4.37(-18) | 2.94(-17) | 1.09(-16) | 6.47(-16) | 1.01(-14) | 1.68(-13) | 6.58(-13) | 1.50(-12) | | 30,28 | 4.98(-19) | 3.76(-18) | 1.17(-17) | 9.42(-17) | 3.05(-15) | 9.39(-14) | 5.37(-13) | 1.50(-12) | | 32,20 | 2.40(-19) | 1.04(-18) | 4.18(-18) | 5.00(-17) | 1.17(-15) | 1.64(-14) | 6.15(-14) | 1.43(-13) | | 32,22 | 2.00(-18) | 1.08(-17) | 4.43(-17) | 4.24(-16) | 7.98(-15) | 1.01(-13) | 3.78(-13) | 8.92(-13) | | 32,24 | 8.28(-18) | 5.81(-17) | 2.37(-16) | 1.59(-15) | 1.97(-14) | 1.88(-13) | 6.11(-13) | 1.31(-12) | | 32,26 | 1.71(-17) | 9.45(-17) | 3.00(-16) | 1.41(-15) | 1.59(-14) | 2.19(-13) | 8.18(-13) | 1.83(-12) | | 32,28 | 1.51(-16) | 2.68(-16) | 3.84(-16) | 8.19(-16) | 7.08(-15) | 1.33(-13) | 6.72(-13) | 1.78(-12) | | 32,30 | 2.10(-12) | 4.67(-12) | 7.63(-12) | 1.41(-11) | 3.00(-11) | 5.35(-11) | 6.87(-11) | 7.93(-11) | | 34,20 | 4.51(-20) | 3.71(-19) | 1.96(-18) | 2.07(-17) | 5.97(-16) | 1.13(-14) | 4.61(-14) | 1.08(-13) | | 34,22 | 4.81(-19) | 3.94(-18) | 1.98(-17) | 1.91(-16) | 4.66(-15) | 7.41(-14) | 2.79(-13) | 6.32(-13) | | 34,24 | 3.21(-18) | 2.52(-17) | 1.15(-16) | 9.63(-16) | 1.77(-14) | 2.24(-13) | 7.83(-13) | 1.72(-12) | | 34,26 | 3.09(-17) | 2.08(-16) | 8.09(-16) | 5.18(-15) | 6.15(-14) | 5.39(-13) | 1.62(-12) | 3.27(-12) | | 34,28 | 8.59(-16) | 3.38(-15) | 8.20(-15) | 2.46(-14) | 9.56(-14) | 4.64(-13) | 1.45(-12) | 3.12(-12) | | 34,30 | 1.69(-14) | 6.25(-14) | 1.49(-13) | 4.67(-13) | 2.02(-12) | 6.64(-12) | 1.11(-11) | 1.48(-11) | | 34,32 | 4.41(-13) | 1.38(-12) | 2.86(-12) | 7.29(-12) | 2.37(-11) | 6.05(-11) | 9.19(-11) | 1.17(-10) | | 36,20 | 5.40(-21) | 6.41(-20) | 3.79(-19) | 4.77(-18) | 1.99(-16) | 5.90(-15) | 2.96(-14) | 7.69(-14) | | 36,22 | 6.08(-20) | 6.51(-19) | 3.73(-18) | 4.44(-17) | 1.62(-15) | 4.10(-14) | 1.88(-13) | 4.65(-13) | | 36,24 | 5.01(-19) | 4.60(-18) | 2.43(-17) | 2.55(-16) | 7.12(-15) | 1.35(-13) | 5.40(-13) | 1.23(-12) | | 36,26 | 5.97(-18) | 4.67(-17) | 2.15(-16) | 1.82(-15) | 3.42(-14) | 4.42(-13) | 1.51(-12) | 3.19(-12) | | 36,28 | 1.71(-16) | 1.06(-15) | 3.81(-15) | 2.18(-14) | 2.26(-13) | 1.69(-12) | 4.48(-12) | 8.16(-12) | | 36,210 | 3.35(-14) | 7.60(-14) | 1.26(-13) | 2.36(-13) | 5.03(-13) | 1.20(-12) | 2.85(-12) | 5.56(-12) | | 36,30 | 1.65(-16) | 9.11(-16) | 3.02(-15) | 1.54(-14) | 1.38(-13) | 8.91(-13) | 2.05(-12) | 3.27(-12) | | 36,32 | 2.88(-15) | 1.46(-14) | 4.43(-14) | 1.94(-13) | 1.37(-12) | 6.97(-12) | 1.45(-11) | 2.19(-11) | | 36,34 | 7.49(-14) | 3.00(-13) | 7.43(-13) | 2.41(-12) | 1.09(-11) | 3.75(-11) | 6.50(-11) | 8.92(-11) | | | $`T(K)`$ | | | | | | | | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | $`vj,v^{}j^{}`$ | 100 | 200 | 300 | 500 | 1000 | 2000 | 3000 | 4000 | | 31,21 | 9.61(-19) | 8.24(-18) | 3.76(-17) | 3.05(-16) | 5.20(-15) | 6.88(-14) | 2.72(-13) | 6.68(-13) | | 31,23 | 4.95(-18) | 3.52(-17) | 1.45(-16) | 9.36(-16) | 1.06(-14) | 1.01(-13) | 3.56(-13) | 8.23(-13) | | 31,25 | 3.48(-18) | 2.22(-17) | 7.27(-17) | 5.03(-16) | 1.15(-14) | 1.72(-13) | 6.12(-13) | 1.33(-12) | | 31,27 | 3.94(-18) | 1.97(-17) | 7.89(-17) | 5.43(-16) | 8.16(-15) | 1.51(-13) | 6.61(-13) | 1.58(-12) | | 31,29 | 4.17(-18) | 9.92(-18) | 1.23(-17) | 3.66(-17) | 1.44(-15) | 5.32(-14) | 3.68(-13) | 1.14(-12) | | 33,21 | 3.64(-19) | 3.26(-18) | 1.63(-17) | 1.45(-16) | 3.04(-15) | 4.50(-14) | 1.68(-13) | 3.82(-13) | | 33,23 | 2.04(-18) | 1.17(-17) | 5.84(-17) | 6.25(-16) | 1.23(-14) | 1.54(-13) | 5.46(-13) | 1.23(-12) | | 33,25 | 
1.50(-17) | 9.82(-17) | 3.93(-16) | 2.69(-15) | 3.41(-14) | 3.19(-13) | 9.97(-13) | 2.05(-12) | | 33,27 | 1.14(-16) | 5.67(-16) | 1.63(-15) | 5.99(-15) | 3.42(-14) | 2.91(-13) | 1.03(-12) | 2.26(-12) | | 33,29 | 7.14(-18) | 5.74(-17) | 1.18(-16) | 4.21(-16) | 6.07(-15) | 1.10(-13) | 5.85(-13) | 1.61(-12) | | 33,31 | 1.10(-12) | 2.85(-12) | 5.21(-12) | 1.15(-11) | 3.16(-11) | 6.99(-11) | 9.85(-11) | 1.20(-10) | | 35,21 | 6.90(-20) | 6.45(-19) | 3.58(-18) | 4.15(-17) | 1.32(-15) | 2.84(-14) | 1.22(-13) | 2.91(-13) | | 35,23 | 4.97(-19) | 4.37(-18) | 2.27(-17) | 2.34(-16) | 6.01(-15) | 1.03(-13) | 3.96(-13) | 8.88(-13) | | 35,25 | 4.36(-18) | 3.42(-17) | 1.58(-16) | 1.34(-15) | 2.45(-14) | 3.11(-13) | 1.06(-12) | 2.27(-12) | | 35,27 | 6.99(-17) | 4.47(-16) | 1.69(-15) | 1.03(-14) | 1.15(-13) | 9.30(-13) | 2.61(-12) | 4.96(-12) | | 35,29 | 6.11(-15) | 1.88(-14) | 3.81(-14) | 9.04(-14) | 2.52(-13) | 7.54(-13) | 1.89(-12) | 3.77(-12) | | 35,31 | 8.21(-15) | 3.58(-14) | 9.78(-14) | 3.70(-13) | 2.07(-12) | 8.61(-12) | 1.63(-11) | 2.33(-11) | | 35,33 | 1.81(-13) | 6.43(-13) | 1.48(-12) | 4.31(-12) | 1.65(-11) | 4.83(-11) | 7.85(-11) | 1.04(-10) | | 37,21 | 7.60(-21) | 8.23(-20) | 5.12(-19) | 7.35(-18) | 3.58(-16) | 1.23(-14) | 6.65(-14) | 1.79(-13) | | 37,23 | 5.66(-20) | 5.81(-19) | 3.41(-18) | 4.40(-17) | 1.76(-15) | 4.82(-14) | 2.30(-13) | 5.67(-13) | | 37,25 | 5.33(-19) | 4.96(-18) | 2.63(-17) | 2.83(-16) | 8.20(-15) | 1.60(-13) | 6.45(-13) | 1.47(-12) | | 37,27 | 8.72(-18) | 6.89(-17) | 3.12(-16) | 2.54(-15) | 4.66(-14) | 5.88(-13) | 1.95(-12) | 4.00(-12) | | 37,29 | 4.85(-16) | 2.81(-15) | 9.55(-15) | 4.92(-14) | 4.45(-13) | 2.96(-12) | 7.25(-12) | 1.24(-11) | | 37,211 | 4.72(-14) | 1.02(-13) | 1.64(-13) | 2.95(-13) | 6.10(-13) | 1.41(-12) | 3.24(-12) | 6.21(-12) | | 37,31 | 7.94(-17) | 5.07(-16) | 1.90(-15) | 1.15(-14) | 1.33(-13) | 1.11(-12) | 2.90(-12) | 4.99(-12) | | 37,33 | 1.04(-15) | 5.91(-15) | 1.99(-14) | 9.99(-14) | 8.52(-13) | 5.17(-12) | 1.16(-11) | 1.84(-11) | | 37,35 | 3.37(-14) | 1.47(-13) | 3.95(-13) | 1.43(-12) | 7.48(-12) | 2.91(-11) | 5.35(-11) | 7.58(-11) | | | $`T(K)`$ | | | | | | | | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | $`vj,v^{}j^{}`$ | 100 | 200 | 300 | 500 | 1000 | 2000 | 3000 | 4000 | | 40,30 | 3.80(-18) | 2.84(-17) | 1.15(-16) | 7.34(-16) | 8.72(-15) | 9.41(-14) | 3.45(-13) | 7.93(-13) | | 40,32 | 1.04(-17) | 6.85(-17) | 2.42(-16) | 1.22(-15) | 1.09(-14) | 1.21(-13) | 4.68(-13) | 1.13(-12) | | 40,34 | 3.10(-18) | 2.65(-17) | 1.27(-16) | 1.24(-15) | 2.35(-14) | 2.44(-13) | 7.93(-13) | 1.73(-12) | | 40,36 | 1.18(-17) | 8.57(-17) | 3.20(-16) | 1.89(-15) | 2.76(-14) | 3.72(-13) | 1.31(-12) | 2.85(-12) | | 40,38 | 2.13(-18) | 1.25(-17) | 4.47(-17) | 3.58(-16) | 9.81(-15) | 2.64(-13) | 1.31(-12) | 3.27(-12) | | 42,30 | 8.21(-19) | 4.63(-18) | 1.92(-17) | 1.86(-16) | 3.42(-15) | 3.82(-14) | 1.28(-13) | 2.78(-13) | | 42,32 | 5.78(-18) | 3.93(-17) | 1.68(-16) | 1.35(-15) | 2.06(-14) | 2.21(-13) | 7.65(-13) | 1.70(-12) | | 42,34 | 2.28(-17) | 1.73(-16) | 7.04(-16) | 4.29(-15) | 4.52(-14) | 3.77(-13) | 1.16(-12) | 2.42(-12) | | 42,36 | 6.41(-17) | 3.31(-16) | 9.85(-16) | 4.34(-15) | 4.36(-14) | 4.83(-13) | 1.63(-12) | 3.46(-12) | | 42,38 | 4.36(-16) | 7.42(-16) | 1.08(-15) | 2.45(-15) | 2.12(-14) | 3.63(-13) | 1.61(-12) | 3.86(-12) | | 42,40 | 2.72(-12) | 5.95(-12) | 9.66(-12) | 1.75(-11) | 3.54(-11) | 5.96(-11) | 7.43(-11) | 8.42(-11) | | 44,30 | 2.00(-19) | 1.68(-18) | 8.65(-18) | 8.58(-17) | 2.05(-15) | 3.04(-14) | 1.07(-13) | 2.30(-13) | | 44,32 | 
2.09(-18) | 1.45(-17) | 7.15(-17) | 6.85(-16) | 1.41(-14) | 1.80(-13) | 6.08(-13) | 1.29(-12) | | 44,34 | 1.24(-17) | 7.53(-17) | 3.37(-16) | 2.86(-15) | 4.60(-14) | 4.88(-13) | 1.57(-12) | 3.29(-12) | | 44,36 | 9.22(-17) | 5.58(-16) | 2.09(-15) | 1.29(-14) | 1.36(-13) | 1.05(-12) | 2.98(-12) | 5.78(-12) | | 44,38 | 2.44(-15) | 9.32(-15) | 2.19(-14) | 6.21(-14) | 2.25(-13) | 1.07(-12) | 3.12(-12) | 6.30(-12) | | 44,40 | 2.93(-14) | 1.07(-13) | 2.52(-13) | 7.51(-13) | 2.96(-12) | 8.63(-12) | 1.35(-11) | 1.73(-11) | | 44,42 | 6.42(-13) | 1.99(-12) | 4.07(-12) | 1.00(-11) | 3.03(-11) | 7.21(-11) | 1.05(-10) | 1.30(-10) | | 46,30 | 3.23(-20) | 3.46(-19) | 2.05(-18) | 2.42(-17) | 8.33(-16) | 1.90(-14) | 8.05(-14) | 1.86(-13) | | 46,32 | 3.15(-19) | 3.23(-18) | 1.85(-17) | 2.04(-16) | 6.14(-15) | 1.20(-13) | 4.73(-13) | 1.05(-12) | | 46,34 | 2.08(-18) | 1.97(-17) | 1.04(-16) | 9.95(-16) | 2.30(-14) | 3.49(-13) | 1.22(-12) | 2.54(-12) | | 46,36 | 1.79(-17) | 1.13(-16) | 4.95(-16) | 4.86(-15) | 9.20(-14) | 1.02(-12) | 3.14(-12) | 6.18(-12) | | 46,38 | 4.36(-16) | 2.68(-15) | 9.49(-15) | 5.13(-14) | 4.81(-13) | 3.24(-12) | 8.10(-12) | 1.41(-11) | | 46,310 | 7.36(-14) | 1.59(-13) | 2.53(-13) | 4.28(-13) | 6.92(-13) | 1.17(-12) | 2.98(-12) | 6.32(-12) | | 46,40 | 3.83(-16) | 2.10(-15) | 6.85(-15) | 3.26(-14) | 2.54(-13) | 1.37(-12) | 2.81(-12) | 4.16(-12) | | 46,42 | 5.72(-15) | 2.88(-14) | 8.65(-14) | 3.58(-13) | 2.22(-12) | 9.79(-12) | 1.87(-11) | 2.67(-11) | | 46,44 | 1.19(-13) | 4.67(-13) | 1.15(-12) | 3.59(-12) | 1.49(-11) | 4.63(-11) | 7.61(-11) | 1.01(-10) | | | $`T(K)`$ | | | | | | | | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | $`vj,v^{}j^{}`$ | 100 | 200 | 300 | 500 | 1000 | 2000 | 3000 | 4000 | | 41,31 | 3.30(-18) | 2.80(-17) | 1.27(-16) | 9.43(-16) | 1.35(-14) | 1.54(-13) | 5.66(-13) | 1.30(-12) | | 41,33 | 1.40(-17) | 1.03(-16) | 4.14(-16) | 2.45(-15) | 2.37(-14) | 2.04(-13) | 6.77(-13) | 1.49(-12) | | 41,35 | 1.43(-17) | 8.40(-17) | 2.80(-16) | 1.69(-15) | 2.86(-14) | 3.47(-13) | 1.13(-12) | 2.36(-12) | | 41,37 | 1.52(-17) | 6.71(-17) | 2.50(-16) | 1.66(-15) | 2.33(-14) | 3.56(-13) | 1.37(-12) | 3.07(-12) | | 41,39 | 2.30(-17) | 4.35(-17) | 4.87(-17) | 1.36(-16) | 5.14(-15) | 1.67(-13) | 9.87(-13) | 2.71(-12) | | 43,31 | 1.48(-18) | 1.40(-17) | 6.88(-17) | 5.38(-16) | 9.11(-15) | 1.08(-13) | 3.55(-13) | 7.47(-13) | | 43,33 | 8.07(-18) | 7.16(-17) | 3.30(-16) | 2.31(-15) | 3.22(-14) | 3.36(-13) | 1.09(-12) | 2.31(-12) | | 43,35 | 4.51(-17) | 3.45(-16) | 1.40(-15) | 7.92(-15) | 7.79(-14) | 6.37(-13) | 1.86(-12) | 3.68(-12) | | 43,37 | 3.67(-16) | 1.86(-15) | 5.39(-15) | 1.88(-14) | 9.16(-14) | 6.44(-13) | 2.04(-12) | 4.24(-12) | | 43,39 | 1.93(-16) | 2.81(-16) | 3.58(-16) | 1.24(-15) | 1.85(-14) | 3.09(-13) | 1.45(-12) | 3.62(-12) | | 43,41 | 1.53(-12) | 4.03(-12) | 7.37(-12) | 1.57(-11) | 4.00(-11) | 8.15(-11) | 1.10(-10) | 1.31(-10) | | 45,31 | 9.55(-19) | 6.74(-18) | 2.19(-17) | 1.67(-16) | 4.62(-15) | 7.89(-14) | 2.92(-13) | 6.32(-13) | | 45,33 | 3.46(-18) | 2.70(-17) | 1.02(-16) | 8.09(-16) | 1.82(-14) | 2.53(-13) | 8.60(-13) | 1.80(-12) | | 45,35 | 1.58(-17) | 1.28(-16) | 5.27(-16) | 3.85(-15) | 6.32(-14) | 6.73(-13) | 2.11(-12) | 4.26(-12) | | 45,37 | 2.09(-16) | 1.41(-15) | 4.78(-15) | 2.47(-14) | 2.46(-13) | 1.76(-12) | 4.61(-12) | 8.44(-12) | | 45,39 | 1.54(-14) | 4.50(-14) | 8.86(-14) | 2.02(-13) | 5.39(-13) | 1.64(-12) | 4.03(-12) | 7.63(-12) | | 45,41 | 1.59(-14) | 6.81(-14) | 1.78(-13) | 6.29(-13) | 3.20(-12) | 1.18(-11) | 2.06(-11) | 2.80(-11) | | 45,43 | 
2.78(-13) | 9.75(-13) | 2.18(-12) | 6.01(-12) | 2.14(-11) | 5.84(-11) | 9.08(-11) | 1.17(-10) | | 47,31 | 5.00(-19) | 3.63(-18) | 1.02(-17) | 4.71(-17) | 1.49(-15) | 4.04(-14) | 1.84(-13) | 4.37(-13) | | 47,33 | 3.29(-19) | 3.93(-18) | 2.02(-17) | 1.93(-16) | 6.56(-15) | 1.42(-13) | 5.74(-13) | 1.29(-12) | | 47,35 | 2.61(-18) | 2.86(-17) | 1.35(-16) | 1.07(-15) | 2.61(-14) | 4.17(-13) | 1.45(-12) | 2.98(-12) | | 47,37 | 3.43(-17) | 3.25(-16) | 1.34(-15) | 8.15(-15) | 1.23(-13) | 1.36(-12) | 4.06(-12) | 7.72(-12) | | 47,39 | 1.44(-15) | 1.03(-14) | 3.35(-14) | 1.31(-13) | 8.90(-13) | 5.48(-12) | 1.28(-11) | 2.12(-11) | | 47,311 | 7.76(-14) | 2.02(-13) | 3.36(-13) | 5.96(-13) | 1.13(-12) | 2.44(-12) | 5.38(-12) | 9.96(-12) | | 47,41 | 1.19(-16) | 7.62(-16) | 3.45(-15) | 2.38(-14) | 2.59(-13) | 1.77(-12) | 4.07(-12) | 6.42(-12) | | 47,43 | 2.24(-15) | 1.23(-14) | 3.94(-14) | 1.86(-13) | 1.42(-12) | 7.35(-12) | 1.50(-11) | 2.23(-11) | | 47,45 | 6.18(-14) | 2.70(-13) | 6.86(-13) | 2.25(-12) | 1.04(-11) | 3.64(-11) | 6.29(-11) | 8.55(-11) | | | $`T(K)`$ | | | | | | | | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | $`vj,v^{}j^{}`$ | 100 | 200 | 300 | 500 | 1000 | 2000 | 3000 | 4000 | | 50,40 | 1.23(-17) | 8.75(-17) | 3.31(-16) | 1.89(-15) | 1.99(-14) | 1.96(-13) | 6.65(-13) | 1.42(-12) | | 50,42 | 3.17(-17) | 1.93(-16) | 6.23(-16) | 2.74(-15) | 2.29(-14) | 2.42(-13) | 8.78(-13) | 1.98(-12) | | 50,44 | 1.64(-17) | 1.12(-16) | 4.80(-16) | 3.73(-15) | 5.15(-14) | 4.44(-13) | 1.34(-12) | 2.77(-12) | | 50,46 | 3.24(-17) | 2.40(-16) | 9.18(-16) | 5.33(-15) | 6.73(-14) | 7.11(-13) | 2.18(-12) | 4.29(-12) | | 50,48 | 6.41(-18) | 3.94(-17) | 1.27(-16) | 1.15(-15) | 3.02(-14) | 5.90(-13) | 2.31(-12) | 4.98(-12) | | 52,40 | 2.26(-18) | 1.83(-17) | 8.29(-17) | 6.33(-16) | 8.65(-15) | 7.84(-14) | 2.39(-13) | 4.87(-13) | | 52,42 | 1.64(-17) | 1.26(-16) | 5.42(-16) | 3.80(-15) | 4.74(-14) | 4.42(-13) | 1.41(-12) | 2.94(-12) | | 52,44 | 6.94(-17) | 4.57(-16) | 1.71(-15) | 9.79(-15) | 9.28(-14) | 6.81(-13) | 1.94(-12) | 3.81(-12) | | 52,46 | 2.19(-16) | 1.08(-15) | 3.09(-15) | 1.28(-14) | 1.08(-13) | 9.34(-13) | 2.73(-12) | 5.24(-12) | | 52,48 | 1.17(-15) | 1.87(-15) | 2.35(-15) | 5.07(-15) | 5.30(-14) | 7.47(-13) | 2.73(-12) | 5.74(-12) | | 52,50 | 3.56(-12) | 7.67(-12) | 1.22(-11) | 2.14(-11) | 4.10(-11) | 6.52(-11) | 7.91(-11) | 8.83(-11) | | 54,40 | 7.79(-19) | 6.74(-18) | 3.33(-17) | 3.07(-16) | 5.92(-15) | 6.67(-14) | 2.02(-13) | 3.92(-13) | | 54,42 | 6.78(-18) | 5.55(-17) | 2.59(-16) | 2.21(-15) | 3.66(-14) | 3.71(-13) | 1.10(-12) | 2.13(-12) | | 54,44 | 3.45(-17) | 2.58(-16) | 1.10(-15) | 8.04(-15) | 1.06(-13) | 9.36(-13) | 2.71(-12) | 5.25(-12) | | 54,46 | 2.34(-16) | 1.50(-15) | 5.47(-15) | 3.04(-14) | 2.76(-13) | 1.84(-12) | 4.74(-12) | 8.54(-12) | | 54,48 | 6.95(-15) | 2.46(-14) | 5.43(-14) | 1.43(-13) | 4.78(-13) | 1.99(-12) | 5.05(-12) | 9.23(-12) | | 54,50 | 5.00(-14) | 1.76(-13) | 4.04(-13) | 1.16(-12) | 4.17(-12) | 1.08(-11) | 1.59(-11) | 1.95(-11) | | 54,52 | 9.19(-13) | 2.73(-12) | 5.56(-12) | 1.34(-11) | 3.82(-11) | 8.44(-11) | 1.18(-10) | 1.42(-10) | | 56,40 | 1.54(-19) | 1.45(-18) | 7.81(-18) | 9.33(-17) | 2.81(-15) | 4.66(-14) | 1.60(-13) | 3.21(-13) | | 56,42 | 1.38(-18) | 1.25(-17) | 6.44(-17) | 7.14(-16) | 1.89(-14) | 2.73(-13) | 8.89(-13) | 1.74(-12) | | 56,44 | 7.87(-18) | 6.56(-17) | 3.10(-16) | 2.95(-15) | 6.15(-14) | 7.07(-13) | 2.10(-12) | 3.95(-12) | | 56,46 | 6.19(-17) | 4.54(-16) | 1.88(-15) | 1.41(-14) | 2.11(-13) | 1.86(-12) | 5.06(-12) | 9.16(-12) | | 56,48 | 
1.10(-15) | 6.32(-15) | 2.06(-14) | 1.03(-13) | 9.03(-13) | 5.29(-12) | 1.21(-11) | 2.00(-11) | | 56,410 | 1.52(-13) | 3.13(-13) | 5.00(-13) | 8.93(-13) | 1.74(-12) | 3.69(-12) | 7.34(-12) | 1.24(-11) | | 56,50 | 8.52(-16) | 4.58(-15) | 1.48(-14) | 6.69(-14) | 4.50(-13) | 2.01(-12) | 3.72(-12) | 5.15(-12) | | 56,52 | 1.09(-14) | 5.33(-14) | 1.59(-13) | 6.33(-13) | 3.50(-12) | 1.34(-11) | 2.35(-11) | 3.18(-11) | | 56,54 | 1.83(-13) | 6.98(-13) | 1.72(-12) | 5.24(-12) | 2.00(-11) | 5.64(-11) | 8.79(-11) | 1.13(-10) | | | $`T(K)`$ | | | | | | | | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | $`vj,v^{}j^{}`$ | 100 | 200 | 300 | 500 | 1000 | 2000 | 3000 | 4000 | | 51,41 | 6.91(-18) | 5.48(-17) | 2.48(-16) | 1.86(-15) | 2.71(-14) | 3.17(-13) | 1.15(-12) | 2.60(-12) | | 51,43 | 3.01(-17) | 2.04(-16) | 7.83(-16) | 4.50(-15) | 4.39(-14) | 3.84(-13) | 1.29(-12) | 2.83(-12) | | 51,45 | 3.41(-17) | 1.79(-16) | 6.06(-16) | 3.63(-15) | 5.32(-14) | 5.98(-13) | 1.90(-12) | 3.85(-12) | | 51,47 | 7.16(-17) | 2.16(-16) | 6.51(-16) | 3.60(-15) | 4.13(-14) | 5.31(-13) | 1.94(-12) | 4.15(-12) | | 53,41 | 2.75(-18) | 2.17(-17) | 1.02(-16) | 8.29(-16) | 1.39(-14) | 1.69(-13) | 5.85(-13) | 1.27(-12) | | 53,43 | 1.52(-17) | 1.14(-16) | 5.01(-16) | 3.72(-15) | 5.33(-14) | 5.62(-13) | 1.87(-12) | 3.98(-12) | | 53,45 | 8.49(-17) | 5.52(-16) | 2.14(-15) | 1.29(-14) | 1.33(-13) | 1.04(-12) | 2.94(-12) | 5.65(-12) | | 53,47 | 8.40(-16) | 3.52(-15) | 9.33(-15) | 3.12(-14) | 1.50(-13) | 9.87(-13) | 2.98(-12) | 5.91(-12) | | 53,49 | 1.15(-17) | 1.32(-16) | 4.77(-16) | 2.99(-15) | 3.34(-14) | 4.09(-13) | 1.69(-12) | 3.95(-12) | | 53,51 | 1.80(-12) | 4.22(-12) | 8.04(-12) | 1.87(-11) | 4.81(-11) | 9.27(-11) | 1.21(-10) | 1.41(-10) | | 55,41 | 5.66(-19) | 5.21(-18) | 2.77(-17) | 2.75(-16) | 6.23(-15) | 9.66(-14) | 3.66(-13) | 8.15(-13) | | 55,43 | 3.12(-13) | 9.90(-13) | 2.46(-12) | 7.70(-12) | 2.76(-11) | 7.04(-11) | 1.05(-10) | 1.31(-10) | | 55,45 | 2.42(-17) | 1.86(-16) | 8.49(-16) | 6.49(-15) | 9.69(-14) | 1.00(-12) | 3.11(-12) | 6.20(-12) | | 55,47 | 3.32(-16) | 2.03(-15) | 7.60(-15) | 4.29(-14) | 4.04(-13) | 2.75(-12) | 6.95(-12) | 1.23(-11) | | 55,49 | 3.26(-14) | 8.47(-14) | 1.63(-13) | 3.69(-13) | 8.94(-13) | 2.21(-12) | 4.81(-12) | 8.61(-12) | | 55,51 | 2.17(-14) | 8.85(-14) | 2.60(-13) | 1.00(-12) | 4.85(-12) | 1.58(-11) | 2.57(-11) | 3.34(-11) | | 55,53 | 3.12(-13) | 9.90(-13) | 2.46(-12) | 7.70(-12) | 2.76(-11) | 7.04(-11) | 1.05(-10) | 1.31(-10) | | 57,41 | 6.70(-20) | 8.04(-19) | 5.19(-18) | 6.21(-17) | 1.95(-15) | 4.12(-14) | 1.81(-13) | 4.40(-13) | | 57,43 | 4.14(-19) | 4.62(-18) | 2.80(-17) | 3.12(-16) | 8.54(-15) | 1.55(-13) | 6.06(-13) | 1.36(-12) | | 57,45 | 3.13(-18) | 3.11(-17) | 1.73(-16) | 1.63(-15) | 3.44(-14) | 4.79(-13) | 1.64(-12) | 3.38(-12) | | 57,47 | 4.32(-17) | 3.48(-16) | 1.63(-15) | 1.21(-14) | 1.73(-13) | 1.68(-12) | 4.93(-12) | 9.35(-12) | | 57,49 | 2.18(-15) | 1.20(-14) | 4.13(-14) | 2.00(-13) | 1.48(-12) | 7.91(-12) | 1.74(-11) | 2.77(-11) | | 57,411 | 4.40(-14) | 1.37(-13) | 3.02(-13) | 7.59(-13) | 1.78(-12) | 3.70(-12) | 7.19(-12) | 1.22(-11) | | 57,51 | 3.06(-16) | 2.16(-15) | 9.32(-15) | 5.57(-14) | 5.07(-13) | 2.91(-12) | 6.05(-12) | 8.98(-12) | | 57,53 | 2.74(-15) | 1.68(-14) | 6.63(-14) | 3.40(-13) | 2.36(-12) | 1.07(-11) | 2.03(-11) | 2.89(-11) | | 57,55 | 5.70(-14) | 2.50(-13) | 8.00(-13) | 3.15(-12) | 1.45(-11) | 4.61(-11) | 7.56(-11) | 9.97(-11) | | | $`T(K)`$ | | | | | | | | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | $`vj,v^{}j^{}`$ | 100 | 200 | 
300 | 500 | 1000 | 2000 | 3000 | 4000 | | 60,50 | 3.91(-17) | 2.58(-16) | 9.12(-16) | 4.70(-15) | 4.39(-14) | 3.92(-13) | 1.20(-12) | 2.36(-12) | | 60,52 | 9.56(-17) | 5.25(-16) | 1.54(-15) | 6.05(-15) | 4.77(-14) | 4.69(-13) | 1.56(-12) | 3.27(-12) | | 60,54 | 7.24(-17) | 4.31(-16) | 1.61(-15) | 1.01(-14) | 1.06(-13) | 7.79(-13) | 2.21(-12) | 4.29(-12) | | 60,56 | 8.01(-17) | 6.49(-16) | 2.51(-15) | 1.41(-14) | 1.52(-13) | 1.28(-12) | 3.51(-12) | 6.43(-12) | | 60,58 | 1.06(-17) | 9.97(-17) | 3.72(-16) | 3.62(-15) | 8.20(-14) | 1.22(-12) | 4.05(-12) | 7.86(-12) | | 62,50 | 8.18(-18) | 6.30(-17) | 2.69(-16) | 1.81(-15) | 1.99(-14) | 1.51(-13) | 4.21(-13) | 7.95(-13) | | 62,52 | 5.18(-17) | 3.84(-16) | 1.57(-15) | 9.81(-15) | 1.03(-13) | 8.42(-13) | 2.46(-12) | 4.76(-12) | | 62,54 | 1.92(-16) | 1.24(-15) | 4.41(-15) | 2.28(-14) | 1.84(-13) | 1.18(-12) | 3.13(-12) | 5.81(-12) | | 62,56 | 6.88(-16) | 3.38(-15) | 9.41(-15) | 3.55(-14) | 2.49(-13) | 1.71(-12) | 4.43(-12) | 7.89(-12) | | 62,58 | 2.78(-15) | 4.22(-15) | 5.52(-15) | 1.35(-14) | 1.34(-13) | 1.52(-12) | 4.76(-12) | 9.07(-12) | | 62,60 | 4.62(-12) | 9.70(-12) | 1.52(-11) | 2.58(-11) | 4.67(-11) | 7.02(-11) | 8.32(-11) | 9.15(-11) | | 64,50 | 3.16(-18) | 2.58(-17) | 1.21(-16) | 1.01(-15) | 1.54(-14) | 1.33(-13) | 3.52(-13) | 6.23(-13) | | 64,52 | 2.48(-17) | 1.92(-16) | 8.51(-16) | 6.55(-15) | 8.78(-14) | 7.09(-13) | 1.87(-12) | 3.34(-12) | | 64,54 | 1.11(-16) | 7.90(-16) | 3.20(-15) | 2.10(-14) | 2.32(-13) | 1.71(-12) | 4.50(-12) | 8.12(-12) | | 64,56 | 6.66(-16) | 4.00(-15) | 1.38(-14) | 6.94(-14) | 5.40(-13) | 3.13(-12) | 7.42(-12) | 1.26(-11) | | 64,58 | 1.75(-14) | 5.94(-14) | 1.27(-13) | 3.15(-13) | 9.72(-13) | 3.67(-12) | 8.42(-12) | 1.42(-11) | | 64,60 | 8.78(-14) | 3.00(-13) | 6.61(-13) | 1.77(-12) | 5.70(-12) | 1.31(-11) | 1.82(-11) | 2.15(-11) | | 64,62 | 1.39(-12) | 4.08(-12) | 7.96(-12) | 1.81(-11) | 4.76(-11) | 9.73(-11) | 1.31(-10) | 1.53(-10) | | 66,50 | 7.42(-19) | 7.00(-18) | 3.71(-17) | 3.85(-16) | 8.71(-15) | 1.04(-13) | 2.96(-13) | 5.26(-13) | | 66,52 | 6.07(-18) | 5.50(-17) | 2.80(-16) | 2.69(-15) | 5.38(-14) | 5.76(-13) | 1.59(-12) | 2.80(-12) | | 66,54 | 2.98(-17) | 2.50(-16) | 1.17(-15) | 9.73(-15) | 1.55(-13) | 1.37(-12) | 3.56(-12) | 6.12(-12) | | 66,56 | 1.97(-16) | 1.44(-15) | 5.94(-15) | 3.99(-14) | 4.68(-13) | 3.36(-12) | 8.21(-12) | 1.38(-11) | | 66,58 | 2.82(-15) | 1.60(-14) | 5.18(-14) | 2.42(-13) | 1.73(-12) | 8.66(-12) | 1.83(-11) | 2.86(-11) | | 66,510 | 2.94(-13) | 6.00(-13) | 9.23(-13) | 1.55(-12) | 2.85(-12) | 6.08(-12) | 1.16(-11) | 1.83(-11) | | 66,60 | 1.98(-15) | 1.01(-14) | 3.02(-14) | 1.25(-13) | 7.44(-13) | 2.79(-12) | 4.66(-12) | 6.08(-12) | | 66,62 | 2.23(-14) | 1.05(-13) | 2.86(-13) | 1.03(-12) | 5.25(-12) | 1.77(-11) | 2.86(-11) | 3.68(-11) | | 66,64 | 3.06(-13) | 1.15(-12) | 2.60(-12) | 7.11(-12) | 2.56(-11) | 6.81(-11) | 1.01(-10) | 1.25(-10) | | | $`T(K)`$ | | | | | | | | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | $`vj,v^{}j^{}`$ | 100 | 200 | 300 | 500 | 1000 | 2000 | 3000 | 4000 | | 61,51 | 2.13(-17) | 1.66(-16) | 7.25(-16) | 5.02(-15) | 6.40(-14) | 6.64(-13) | 2.21(-12) | 4.57(-12) | | 61,53 | 8.24(-17) | 5.33(-16) | 1.95(-15) | 1.03(-14) | 8.87(-14) | 7.30(-13) | 2.32(-12) | 4.79(-12) | | 61,55 | 1.22(-16) | 5.99(-16) | 1.92(-15) | 1.03(-14) | 1.23(-13) | 1.13(-12) | 3.25(-12) | 6.16(-12) | | 61,57 | 2.61(-16) | 6.80(-16) | 1.86(-15) | 9.35(-15) | 9.55(-14) | 1.04(-12) | 3.39(-12) | 6.68(-12) | | 63,51 | 9.64(-18) | 7.36(-17) | 3.26(-16) | 2.43(-15) | 3.49(-14) | 
3.55(-13) | 1.11(-12) | 2.21(-12) | | 63,53 | 4.74(-17) | 3.42(-16) | 1.43(-15) | 9.69(-15) | 1.20(-13) | 1.10(-12) | 3.38(-12) | 6.71(-12) | | 63,55 | 2.32(-16) | 1.47(-15) | 5.38(-15) | 2.95(-14) | 2.68(-13) | 1.84(-12) | 4.84(-12) | 8.84(-12) | | 63,57 | 2.51(-15) | 1.03(-14) | 2.55(-14) | 7.71(-14) | 3.31(-13) | 1.88(-12) | 5.10(-12) | 9.35(-12) | | 63,59 | 9.22(-18) | 1.47(-16) | 8.79(-16) | 7.31(-15) | 7.79(-14) | 8.44(-13) | 3.06(-12) | 6.47(-12) | | 63,61 | 2.54(-12) | 6.12(-12) | 1.14(-11) | 2.46(-11) | 5.79(-11) | 1.04(-10) | 1.31(-10) | 1.49(-10) | | 65,51 | 2.30(-18) | 2.07(-17) | 1.02(-16) | 9.05(-16) | 1.72(-14) | 2.19(-13) | 7.27(-13) | 1.47(-12) | | 65,53 | 1.21(-17) | 1.01(-16) | 4.83(-16) | 3.95(-15) | 6.42(-14) | 6.82(-13) | 2.08(-12) | 4.04(-12) | | 65,55 | 7.34(-17) | 5.52(-16) | 2.38(-15) | 1.67(-14) | 2.15(-13) | 1.89(-12) | 5.34(-12) | 9.96(-12) | | 65,57 | 8.69(-16) | 5.24(-15) | 1.84(-14) | 9.48(-14) | 7.83(-13) | 4.66(-12) | 1.09(-11) | 1.82(-11) | | 65,59 | 7.70(-14) | 1.97(-13) | 3.61(-13) | 7.44(-13) | 1.63(-12) | 3.86(-12) | 7.95(-12) | 1.33(-11) | | 65,61 | 4.24(-14) | 1.81(-13) | 4.90(-13) | 1.68(-12) | 7.10(-12) | 2.03(-11) | 3.07(-11) | 3.82(-11) | | 65,63 | 4.98(-13) | 1.68(-12) | 3.91(-12) | 1.10(-11) | 3.54(-11) | 8.29(-11) | 1.18(-10) | 1.43(-10) | | 67,51 | 3.29(-19) | 3.73(-18) | 2.16(-17) | 2.33(-16) | 6.03(-15) | 1.02(-13) | 3.87(-13) | 8.41(-13) | | 67,53 | 1.58(-18) | 1.63(-17) | 1.04(-16) | 1.08(-15) | 2.44(-14) | 3.50(-13) | 1.18(-12) | 2.41(-12) | | 67,55 | 1.15(-17) | 1.08(-16) | 5.59(-16) | 4.82(-15) | 8.56(-14) | 9.75(-13) | 2.93(-12) | 5.57(-12) | | 67,57 | 1.40(-16) | 1.09(-15) | 4.62(-15) | 3.08(-14) | 3.74(-13) | 3.08(-12) | 8.17(-12) | 1.45(-11) | | 67,59 | 5.74(-15) | 3.10(-14) | 9.76(-14) | 4.25(-13) | 2.75(-12) | 1.28(-11) | 2.59(-11) | 3.92(-11) | | 67,511 | 4.76(-14) | 2.19(-13) | 4.93(-13) | 1.17(-12) | 2.68(-12) | 5.86(-12) | 1.12(-11) | 1.81(-11) | | 67,61 | 8.54(-16) | 6.05(-15) | 2.27(-14) | 1.19(-13) | 9.11(-13) | 4.19(-12) | 7.78(-12) | 1.08(-11) | | 67,63 | 6.50(-15) | 4.07(-14) | 1.38(-13) | 6.19(-13) | 3.73(-12) | 1.44(-11) | 2.51(-11) | 3.39(-11) | | 67,65 | 1.06(-13) | 4.94(-13) | 1.39(-12) | 4.74(-12) | 1.96(-11) | 5.63(-11) | 8.72(-11) | 1.11(-10) |
no-problem/9904/astro-ph9904150.html
ar5iv
text
# A possible explanation for the anomalous acceleration of Pioneer 10 ## I Introduction Precise tracking of the Pioneer 10/11, Galileo and Ulysses spacecraft has shown an anomalous constant acceleration for Pioneer 10 with a magnitude $`8.5\times 10^{-10}\text{m.s}^{-2}`$. Additional analysis by the same team provides a new value for the acceleration $`(7.5\pm 0.2)\times 10^{-10}\text{m.s}^{-2}`$ (where the uncertainty is estimated from points in their Fig. 1) and also reveals that there is an additional annual periodic component with an amplitude of $`2\times 10^{-10}\text{m.s}^{-2}`$ directed towards the sun. The main method for monitoring the spacecraft is to measure the frequency shift of the signal returned by an active transponder. Any variation in this frequency shift that is not actually due to motion can be confused with a Doppler shift and would be attributed to anomalous velocities and accelerations. This paper argues that there is an additional frequency shift in the spacecraft signal due to a gravitational interaction with the intervening material. Because the frequency shift is proportional to the distance to the spacecraft it can easily mimic an acceleration. ## II The explanation for the constant acceleration In previous papers it was argued that photons have a gravitational interaction. This claim is based on the premise that in curved space a bundle of geodesics is focused (the “focusing theorem”, ) and as a consequence photons are also focused. This leads to an interaction in which low energy photons are emitted and the primary photon loses energy. The effect can be observed as a frequency shift in a signal that is a function of the distance traveled and the density of the local medium. Although the cosmological consequences of such an interaction are profound , it also leads to predictions which can be tested locally, including the prediction that a frequency shift should be seen in the signals from spacecraft. For a signal passing through a medium with matter density $`\rho `$ the rate of change of frequency, $`f`$, with distance is $$\frac{df}{dx}=\left(\frac{8\pi G\rho }{c^2}\right)^{1/2}f.$$ (1) Note that although point masses may distort and deviate the geodesic bundle they do not focus it, so that there is no frequency shift predicted for signals passing near stars or planets. Since the effect is very small we can write it in effective velocity units as $$\mathrm{\Delta }v=\sqrt{8\pi G\rho }\mathrm{\Delta }x.$$ (2) Differentiating gives an apparent acceleration of $`a=\sqrt{8\pi G\rho }V`$ where $`V`$ is the velocity of the spacecraft (or earth) and $`\rho `$ is the density at the current position. It is not an average density over the path length. Using the observed anomalous acceleration of $`7.5\times 10^{-10}\text{m.s}^{-2}`$ , and a Pioneer 10 velocity of $`12.3\text{km.s}^{-1}`$, the required density for the two-way path is $`5.5\times 10^{-19}\text{kg.m}^{-3}`$. The only constituent of the interplanetary medium that approaches this density is dust. One estimate of the interplanetary dust density at 1 AU is $`1.3\times 10^{-19}\text{kg.m}^{-3}`$ and more recently Grün suggests a value of $`10^{-19}\text{kg.m}^{-3}`$ which is consistent with his earlier estimate of $`9.6\times 10^{-20}\text{kg.m}^{-3}`$ . Although the authors do not give uncertainties it is clear that the densities could be in error by a factor of two or more. The main difficulties are the paucity of information and that the observations do not span the complete range of grain sizes.
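The density required to mimic a given apparent acceleration follows directly from equation (2). A short Python sketch (assuming, as implied by the "two-way path" above, that the out-and-back signal path doubles the accumulated shift, so that $`a=2\sqrt{8\pi G\rho }V`$) reproduces the figure quoted above:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def required_density(a_anom, v, two_way=True):
    """Dust density for which the shift of Eq. (2) mimics an apparent
    acceleration a_anom, given a line-of-sight velocity v."""
    factor = 2.0 if two_way else 1.0   # out-and-back path doubles the shift (assumption)
    return (a_anom / (factor * v)) ** 2 / (8.0 * math.pi * G)

a_obs = 7.5e-10   # observed anomalous acceleration, m s^-2
v_p10 = 12.3e3    # Pioneer 10 velocity, m s^-1
print(required_density(a_obs, v_p10))  # ~5.5e-19 kg m^-3, as quoted in the text
```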
Taking a density of $`10^{-19}\text{kg.m}^{-3}`$ the computed (anomalous) acceleration is $`3.4\times 10^{-10}\text{ m.s}^{-2}`$, smaller by a factor of two than the observed anomalous acceleration. However the density is required at the distance of Pioneer 10 in 1998 of 72 AU in the plane of the ecliptic (the ecliptic latitude of Pioneer 10 is 3°). The meteoroid experiment on-board Pioneer 10 measures the flux of grains with masses larger than $`10^{-10}`$ g. The results show that after it left the influence of Jupiter the flux was essentially constant (in fact there may be a slight rise) out to a distance of 18 AU. It is thought that most of the grains are being continuously produced in the Kuiper belt. As their orbits evolve inwards due to Poynting-Robertson drag and planetary perturbations they achieve a roughly constant spatial density. Given the large uncertainties in both the observed density at 1 AU (due to the limitations of the detectors), and the extrapolation of the density to 72 AU, the conclusion is that interplanetary dust could provide the required density to explain the “anomalous acceleration” by a frequency shift due to the gravitational interaction. ## III The explanation for the annual acceleration Figure 1B in shows a time varying acceleration that has a period of one year and an amplitude that both fluctuates and decreases with time. (The decrease may not be real but may instead be an effect of the solar cycle.) Their figure shows 50-day averages after the best-fit constant anomalous acceleration has been removed. For the years 1987 to 1993 where the curve is well defined the maxima occur at $`0.94\pm 0.03`$ yr and the minima at $`0.45\pm 0.03`$ yr. The amplitude changes from $`2.5\times 10^{-10}\text{m.s}^{-2}`$ in 1988 to $`1.5\times 10^{-10}\text{m.s}^{-2}`$ in 1992. In principle the gravitational interaction can explain this acceleration but now the relevant velocity is not that of Pioneer 10 but the orbital velocity of the earth. Taking the earth’s velocity as $`30\text{km.s}^{-1}`$ and a dust density of $`10^{-19}\text{kg.m}^{-3}`$ the predicted annual acceleration in 1989 has an amplitude of $`7.6\times 10^{-10}\text{ m.s}^{-2}`$. Although this acceleration is a factor of three too large, a more significant objection is that the predicted phase disagrees with the observations. With this model the maximum accelerations should occur when the earth has a maximum velocity relative to Pioneer 10, namely when it has maximum elongation as seen from the spacecraft. Since in 1989 Pioneer 10 had an ecliptic longitude of $`72^{\circ}`$ these should occur at 0.17 yr and 0.68 yr. The discrepancy in phase of $`97^{\circ}\pm 11^{\circ}`$ means that the gravitational interaction does not directly explain the annual variation. However since the gravitational interaction was not included in the complex calculations used to compute the trajectory it is feasible that the effect has been compensated for by small adjustments to other parameters and all that is left is a distorted residual. If mistakenly interpreted as a Doppler shift the annual component of the gravitational interaction is equivalent to an additional velocity of the earth (as seen by Pioneer 10) of $`3.8\text{mm.s}^{-1}`$. For a circular orbit of the earth this is equivalent to a shift in the longitude of Pioneer 10 of 0.026 arcseconds. Thus if there is a gravitational interaction it could be masked by a small error in longitude.
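The same relation, again with the assumed two-way factor of two, reproduces the annual numbers quoted above; converting the sinusoidal acceleration amplitude to a velocity amplitude by dividing by the orbital angular frequency is one way to obtain the equivalent velocity and longitude offsets:

```python
import math

G = 6.674e-11
YEAR = 3.156e7        # seconds
rho = 1.0e-19         # assumed dust density, kg m^-3
v_earth = 30.0e3      # Earth's orbital speed, m s^-1

# Predicted amplitude of the annual acceleration (two-way path assumed)
a_annual = 2.0 * math.sqrt(8.0 * math.pi * G * rho) * v_earth
print(a_annual)       # ~7.7e-10 m s^-2, consistent with the 7.6e-10 quoted above

# If misread as a Doppler shift: equivalent sinusoidal velocity amplitude
dv = a_annual * YEAR / (2.0 * math.pi)
print(dv)             # ~3.9e-3 m s^-1, i.e. the ~3.8 mm/s quoted above

# Equivalent error in the assumed ecliptic longitude of Pioneer 10
print(dv / v_earth * 206265.0)   # ~0.027 arcseconds, cf. 0.026 in the text
```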
In practice the position of Pioneer 10 must be consistent with celestial mechanics and many other observations, and it is unlikely that there would be complete compensation. The final analysis requires the inclusion of the gravitational interaction into the orbit calculations. ## IV Conclusion It has been argued that the gravitational interaction with an interplanetary dust density of $`10^{-19}\text{kg.m}^{-3}`$ predicts an anomalous acceleration of Pioneer 10 at 72 AU of $`3.4\times 10^{-10}\text{ m.s}^{-2}`$, to be compared with the observed value of $`(7.5\pm 0.2)\times 10^{-10}\text{m.s}^{-2}`$. The largest uncertainty is in the estimate of the interplanetary dust density. Since the annual period in the gravitational interaction is easily masked by a small shift in the longitude of Pioneer 10, its effects are unlikely to be observed. However the predicted magnitude is in the right range and the observed annual acceleration could be the residual after a partial compensation. ## V Acknowledgments This work is supported by the Science Foundation for Physics within the University of Sydney, and use has been made of NASA’s Astrophysics Data System Abstract Service.
no-problem/9904/astro-ph9904136.html
ar5iv
text
# ISO observations toward the reflection nebula NGC 7023: A nonequilibrium ortho- to para-H2 ratio<sup>1</sup> <sup>1</sup>Based on observations with ISO, an ESA project with instruments funded by ESA Member States (especially the PI countries France, Germany, the Netherlands and the United Kingdom) and with participation of ISAS and NASA ## 1 Introduction The Short Wavelength Spectrometer (SWS) on board the Infrared Space Observatory (ISO) has allowed us, for the first time, to observe the pure rotational spectrum of H<sub>2</sub> and study the OTP ratio in warm (T $`\sim `$ 100 - 1000 K) regions. Prior to the ISO observations, the OTP ratio of H<sub>2</sub> had been estimated towards regions heated by UV radiation and shocks to temperatures $`>`$ 2000 K using the relative strengths of the vibrational lines. In outflow regions where shock excitation is the main heating mechanism, observations of the vibrational lines typically reveal OTP ratios of $`\sim `$ 3 (Smith, Davis & Lioure 1997). This is the expected value in thermodynamic equilibrium for gas with temperatures $`>`$ 200 K. However, towards the regions heated mainly by the UV radiation (PDRs), the vibrational lines give OTP values in the range 1.5 - 2 (see e.g. Chrysostomou et al. 1993). The pure rotational lines of H<sub>2</sub> have been detected by ISO toward several Galactic regions (S140: Timmermann et al. 1996; Cepheus A West: Wright et al. 1996; BD+40°4124: Wesselius et al. 1996; HH54: Neufeld et al. 1998). The observations towards all these regions except HH54 are consistent with the H<sub>2</sub> rotational lines arising in gas with temperature $`>`$ 200 K and with an OTP ratio of 3. This value is not consistent with those derived from the vibrational lines in dense PDRs but is in agreement with theoretical predictions. Recently Sternberg & Neufeld (1999) have argued that the low OTP ratios measured from vibrational lines do not represent the actual OTP ratio in PDRs but are simply a consequence of the optical depth effects in the fluorescent pumping of the vibrational lines. The non-equilibrium OTP ratio measured towards HH54 has been interpreted as arising in hot, shock-excited gas that has not reached equilibrium yet. In this interpretation, the observed OTP ratio would be the legacy of an earlier stage in the thermal history of the gas when the gas temperature was 90 K. In this Letter we report the observations of the H<sub>2</sub> rotational lines towards the prototypical PDR associated with the reflection nebula NGC 7023. An OTP ratio in the range 1.5 - 2 is derived from our observations. This is the first detection of an OTP ratio $`<`$ 3 in a PDR based on the pure H<sub>2</sub> rotational lines. ## 2 Observations The observations of the S(0), S(1), S(2), S(3), S(4) and S(5) rotational lines of H<sub>2</sub> towards the peak of the PDR associated with NGC 7023 (R.A. (2000): 21<sup>h</sup>01<sup>m</sup>32<sup>s</sup>.5, Dec (2000): 68<sup>∘</sup>10<sup>′</sup>27<sup>′′</sup>.5) were made using the Short-Wavelength Spectrometer (SWS) (de Graauw et al. 1996) on board the Infrared Space Observatory (ISO) (Kessler et al. 1996) during revolution 514 with a total on-target time of 5149 s. At the spectral resolution of this mode ($`\lambda `$/$`\mathrm{\Delta }`$$`\lambda `$ $`\sim `$ 1000-2000) all the observed lines are unresolved. Data reduction was carried out with version 7.0 of the Off Line Processing routines and the SWS Interactive Analysis at the ISO Spectrometer Data Center at MPE. Further analysis has been made using the ISAP software package.
The uncertainties in the calibration are 15%, 25%, 25%, 25%, 20% and 30% for the S(5), S(4), S(3), S(2), S(1) and S(0) lines respectively (Salama et al. 1997). ## 3 H<sub>2</sub> rotational lines In Fig. 1 we present the spectra of the S(0), S(1), S(2), S(3), S(4) and S(5) H<sub>2</sub> lines towards the peak of the PDR associated with NGC 7023. The observed intensities and some interesting observational parameters are presented in Table 1. The data have been corrected for dust attenuation using the extinction curve of Draine & Lee (1984), and a value of 0.43 for the dust opacity at 0.55 $`\mu \mathrm{m}`$. This value has been derived from the ISO LWS01 spectrum (Fuente et al. 1999, hereafter paper II). The extinction for the S(0), S(1), S(2), S(3), S(4) and S(5) lines is very small and amounts to 0.01, 0.02, 0.02, 0.04, 0.02 and 0.01 respectively. Fig. 2 shows the rotational diagram for the H<sub>2</sub> corrected for extinction effects and assuming extended emission. (Note that the errors in Fig. 2 are entirely dominated by the calibration uncertainties.) This diagram shows that the ortho-H<sub>2</sub> levels have systematically lower N<sub>u</sub>/g<sub>u</sub> values (where N<sub>u</sub> and g<sub>u</sub> are the column densities and degeneracies of the upper levels of the transitions) than the adjacent J–1 and J+1 para-H<sub>2</sub> levels, producing a “zig-zag” distribution. In fact, the ortho-H<sub>2</sub> levels seem to define a curve which is offset from that of the para-H<sub>2</sub> levels (see Fig. 2). This is the expected trend if the OTP ratio is lower than 3. The offset between the two sets of data is systematic and seems to show a trend of being larger for the low energy levels (S(0) and S(1) transitions) than for the high energy levels (S(2), S(3), S(4) and S(5) transitions). The offset between the ortho- and para-H<sub>2</sub> curves is larger than the observational errors and cannot be due to the different apertures for the different lines. For the case of a point-like source we would have to correct the intensity of the S(0) line by a factor of 1.93 and the intensities of the S(1) and S(2) lines by a factor of 1.35. In this case, the offset between the para- and ortho- curves would increase. Based on the spatial distribution of the HI(21 cm) line (Fuente et al. 1998) and those of the CII (158 $`\mu \mathrm{m}`$) and OI (63 $`\mu \mathrm{m}`$) lines (Chokshi et al. 1988, paper II), we will assume in the following a beam filling factor of 1 for all the observed H<sub>2</sub> lines. Although the exact shape of the rotational diagram depends on the excitation conditions of the region, the “zig-zag” features cannot be explained by any model which assumes an equilibrium OTP ratio. In thermodynamic equilibrium, the rotational temperature between an ortho- (para-) H<sub>2</sub> level and the next J+1 level should always increase with the energy of the upper level, giving rise to a smooth curve in the rotational diagram regardless of the temperature profile of the region. A “zig-zag” distribution implies that for some pair of levels the rotational temperature decreases with the energy of the upper level. One can only get this effect with a non-equilibrium OTP ratio. We have compared our data with the models by Burton et al. (1992) in order to estimate the incident UV field, density and OTP ratio. Burton et al. (1992) assumed a constant value of 3 for the OTP ratio in their calculations.
The best fit to our data is for an incident UV field of G<sub>o</sub> = 10<sup>4</sup> in units of the Habing field and a density of n = 10<sup>6</sup> cm<sup>-3</sup>, but it systematically underestimates the intensity of the para-H<sub>2</sub> transitions and overestimates the intensities of the ortho-H<sub>2</sub> transitions (see open triangles in Fig. 3). This is the expected behavior if one assumes an OTP ratio of 3 and the actual OTP ratio in the region is $`<`$ 3. We have corrected the line intensities predicted by the model for different values of the OTP ratio assuming that the total amount of H<sub>2</sub> molecules at a given temperature and the line ratios between levels of the same symmetry are not affected by the OTP ratio. These assumptions are correct for the low-J transitions where the collisional excitation dominates. In Fig. 3 we compare our data with the predicted diagram for an OTP ratio of 3, which corresponds to the model without any correction (open triangles), 2 (open circles) and 1 (open squares). The diagram for an OTP ratio of 3 clearly does not fit any of our observational points. We find the best fit to our data with an OTP ratio of 2. In this case, the predicted intensities for the S(1), S(2) and S(3) lines are in agreement with the observed values. Only the predicted intensity for the S(0) line is not consistent with the observations. To fit the intensity of the S(0) line it is necessary to assume an OTP ratio of $`\sim `$ 1.5 (see the case OTP = 1 in Fig. 3), but in this case we will have a worse fit for the S(1), S(2) and S(3) lines than for an OTP ratio of 2. As discussed in Section 4, this suggests a possible variation of the OTP ratio with the temperature. The cooler gas emitting only in the S(0) line could have an OTP ratio of $`<`$ 1.5, while the warmer gas emitting in the S(1), S(2) and S(3) lines could have an OTP ratio of 2. However, to establish this variation unambiguously, it is necessary to have a very accurate knowledge of the excitation conditions in the region. To be conservative, we will conclude that the OTP ratio is in the range 1.5 - 2 in this PDR. Martini et al. (1997) found an OTP ratio of 2.5$`\pm `$0.4 using the near-IR H<sub>2</sub> vibrational lines towards their position 1, which is 20<sup>′′</sup> offset from ours. Taking into account that because of the optical depth effects in the vibrational lines this value is just a lower limit to the actual OTP ratio, it proves that the OTP ratio is close to 3 for the gas with kinetic temperatures T<sub>k</sub> $`>`$ 2000 K. This difference between the OTP ratios derived from the rotational and vibrational lines argues in favor of a variation of the OTP ratio with the kinetic temperature. Since the OTP ratio in this source is different from the equilibrium value, the rotation temperature between an ortho- and a para- level does not represent an estimate of the gas kinetic temperature. To estimate the gas kinetic temperature we have calculated the rotation temperature between levels of the same symmetry. For the para-H<sub>2</sub> levels, we have derived a rotation temperature of $`\sim `$ 290 K from the S(0) and S(2) lines and of $`\sim `$ 500 K from the S(2) and S(4) lines. For the ortho-H<sub>2</sub> levels, the derived temperature is $`\sim `$ 440 K from the S(1) and S(3) lines and $`\sim `$ 700 K from the S(3) and S(5) lines. Based on these calculations we conclude that the OTP ratio is $`\sim `$ 1.5 - 2 in the gas with kinetic temperatures $`\sim `$ 300 - 700 K.
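For reference, the equilibrium OTP ratio at a given temperature follows from the ortho and para rotational partition functions (the ortho levels carrying the nuclear-spin weight of 3). The short sketch below assumes rigid-rotor level energies E<sub>J</sub> = B J(J+1) with the standard H<sub>2</sub> rotational constant B $`\approx `$ 85 K (values not taken from this Letter); it illustrates the equilibrium behaviour referred to here, with the ratio approaching 3 above a few hundred K and dropping to of order 0.1 at the 40 K dust temperature discussed below.

```python
import math

B_ROT = 85.3  # approximate H2 rotational constant in Kelvin (rigid-rotor assumption)

def equilibrium_otp(temp, jmax=10):
    """Thermal-equilibrium ortho-to-para H2 ratio at temperature temp (K)."""
    ortho = sum(3 * (2 * j + 1) * math.exp(-B_ROT * j * (j + 1) / temp)
                for j in range(1, jmax, 2))   # odd J, nuclear-spin weight 3
    para = sum((2 * j + 1) * math.exp(-B_ROT * j * (j + 1) / temp)
               for j in range(0, jmax, 2))    # even J
    return ortho / para

for temp in (40, 100, 200, 300, 700):
    print(temp, round(equilibrium_otp(temp), 2))
# roughly 0.13 at 40 K, 1.6 at 100 K, 2.9 at 200 K, and ~3.0 for 300-700 K
```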
We derive a total H<sub>2</sub> column density of 5 10<sup>20</sup> cm<sup>-2</sup> assuming a rotation temperature of 290 K between the J = 0 and J = 2 para-levels and of 440 K between the J = 1 and J = 3 ortho-levels. ## 4 Discussion Based on the H<sub>2</sub> rotational line data we have derived an OTP ratio in the range of 1.5 - 2 for the gas with kinetic temperatures $`\sim `$ 300 - 700 K towards the reflection nebula NGC 7023. This is the second object with a non-equilibrium OTP ratio measured from the H<sub>2</sub> pure rotational lines. The first non-equilibrium OTP ratio was detected towards the outflow source HH54, and interpreted as arising in shock-heated gas that has not reached equilibrium yet. This interpretation is not plausible for NGC 7023. The high dust temperature, T<sub>d</sub> $`\sim `$ 40 K, and the detection of the SiII (34.8 $`\mu \mathrm{m}`$) and CII (157.7 $`\mu \mathrm{m}`$) lines (paper II) prove the existence of a PDR in this region. Although a shock component might also exist (see Martini et al. 1997), the heating is dominated by UV photons. It is then expected that most of the warm H<sub>2</sub> arises in the PDR. A non-equilibrium value of the OTP ratio is not expected for the physical conditions prevailing in a dense PDR. The initial OTP ratio after the H<sub>2</sub> formation is very uncertain. Because of the large exothermicity of the formation process, it is expected to be 3 . However, this OTP ratio can change if the H<sub>2</sub> molecules remain on the grain surface long enough to reach the equilibrium at the dust temperature. In this case, the OTP ratio after H<sub>2</sub> formation will be that of the equilibrium at the grain temperature. After the ejection of the H<sub>2</sub> molecules to the gas phase, exchange reactions with H and H<sup>+</sup> change the OTP ratio until achieving the equilibrium at the gas temperature. For the gas temperatures traced by the H<sub>2</sub> rotational lines ($`>`$ 200 K), the equilibrium OTP ratio is 3. One possibility to explain the low OTP ratio observed in NGC 7023 is to suppose that the OTP ratio in the H<sub>2</sub> formation is lower than 3, and once in the gas phase, the H<sub>2</sub> molecules are destroyed before attaining the equilibrium value at the gas temperature. The dust temperature in a PDR with G<sub>o</sub> = 10<sup>4</sup> and n = 10<sup>6</sup> cm<sup>-3</sup> is $`<`$ 100 K. In particular, the dust temperature measured towards the PDR peak in NGC 7023 using our LWS01 spectrum is 40 K, which corresponds to an equilibrium OTP ratio of $`\sim `$ 0.1. The OTP conversion in the atomic region is dominated by H - H<sub>2</sub> collisions with a rate of 10<sup>-13</sup> $`n`$ s<sup>-1</sup>, where $`n`$ is the atomic hydrogen density (see Sternberg & Neufeld 1999 and references therein). The hydrogen density derived from the HI image published by Fuente et al. (1998) is $`\sim `$ 5 10<sup>3</sup> cm<sup>-3</sup>. A similar value ($`\sim `$ 0.5 - 1 10<sup>4</sup> cm<sup>-3</sup>) was derived by Chokshi et al. (1988) based on the OI and CII lines. However, densities larger than 10<sup>5</sup> cm<sup>-3</sup> are derived from molecular data (Fuente et al. 1996, Lemaire et al. 1996, Gerin et al. 1998). The density of atomic hydrogen in the region with T<sub>k</sub> $`\sim `$ 300 K is expected to be about an order of magnitude lower than the H<sub>2</sub> density. Then we assume an atomic hydrogen density of $`\sim `$ 10<sup>4</sup> cm<sup>-3</sup> in our calculations.
With this value, the OTP conversion rate due to H<sub>2</sub> - H collisions is 10<sup>-9</sup> s<sup>-1</sup>. At the cloud surface the unshielded photodissociation rate is $`\sim `$ 5 10<sup>-11</sup> G<sub>o</sub> s<sup>-1</sup> $`\sim `$ 5 10<sup>-7</sup> s<sup>-1</sup>, i.e., more than 2 orders of magnitude larger than the OTP conversion rate. Then, at the cloud surface, before self-shielding is important (A<sub>v</sub> $`\sim `$ 0.3 mag, T<sub>k</sub> $`>`$ 500 K), an OTP ratio lower than 3 can be explained by assuming that the OTP ratio in the H<sub>2</sub> formation is the equilibrium value at the grain temperature. Deeper into the molecular cloud, when H<sub>2</sub> is self-shielded, one can consider that the H<sub>2</sub> destruction rate is similar to the H<sub>2</sub> reformation rate, which is given by $`\sim `$ 3 10<sup>-17</sup> $`n`$ s<sup>-1</sup>. In this region, a lower limit to the OTP conversion rate is given by the conversion rate due to H<sub>2</sub> - H<sup>+</sup> collisions, which is $`\sim `$ 10<sup>-17</sup> $`n`$ s<sup>-1</sup> (Sternberg & Neufeld 1999). The OTP conversion rate is of the same order as the destruction rate and the OTP ratio is expected to be close to the equilibrium. According to these estimates one expects to have an OTP ratio close to 3 at low temperatures and a non-equilibrium OTP ratio at higher temperatures. The contrary trend is derived from our observations. The high energy S(1), S(2) and S(3) lines are fitted with an OTP ratio of 2 while the low energy S(0) line is better fitted with an OTP ratio of 1.5. Another possibility to explain the non-equilibrium OTP value in NGC 7023 is to consider the case of a dynamic PDR, i.e. the dissociation front is advancing into the molecular cloud. In this case we do not have to assume a non-equilibrium OTP ratio in the H<sub>2</sub> formation. The PDR is being fed continuously by the cool gas of the molecular cloud, in which the equilibrium value of the OTP ratio is lower than 3. This gas is heated by the stellar UV radiation to temperatures $`>`$ 200 K but leaves the PDR before attaining the equilibrium OTP ratio at this temperature. In this scenario, the gas which is expected to have an OTP ratio smaller than the equilibrium value is the gas that has most recently been incorporated into the PDR, which is also the gas at lowest temperature. The OTP ratio is expected to increase and reach values close to 3 at high temperatures. This behavior is consistent with the trend observed in our data. As discussed above, the OTP H<sub>2</sub> conversion is mainly due to H - H<sub>2</sub> collisions with a conversion rate of $`\sim `$ 10<sup>-9</sup> s<sup>-1</sup>. To have a significant fraction of the gas with a non-equilibrium OTP ratio the photodissociation front must advance about 1 - 2 mag in the conversion time, i.e., it must penetrate into the molecular cloud at a velocity of $`\sim `$ 10<sup>7</sup> $`n^{-1}`$ km s<sup>-1</sup>. Skinner et al. (1993) proposed the existence of an anisotropic ionized wind associated with this star based on radio continuum observations. Later, Fuente et al. (1998) detected an HI outflow with a velocity of $`\sim `$ 7.5 km s<sup>-1</sup>. They proposed that this outflow is formed when the gas that has been photodissociated by the UV radiation is accelerated by the stellar winds along the walls of a biconical cavity.
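As a rough check of the front speed quoted above, the front must sweep up the column corresponding to $`\sim `$ 1 mag of visual extinction within one conversion time. The sketch below assumes the standard gas-to-extinction conversion N<sub>H</sub> $`\approx `$ 1.9 10<sup>21</sup> cm<sup>-2</sup> mag<sup>-1</sup> (a value not given in the text) together with the 10<sup>-9</sup> s<sup>-1</sup> conversion rate used above:

```python
N_H_PER_MAG = 1.9e21   # cm^-2 per magnitude of A_V (assumed standard value)
CONV_RATE = 1.0e-9     # ortho-para conversion rate at n ~ 1e4 cm^-3, s^-1
KM_PER_CM = 1.0e-5

def front_speed(n, a_v=1.0):
    """Speed (km/s) at which the dissociation front must advance for the gas
    within a_v magnitudes to be heated for less than one conversion time."""
    depth_cm = a_v * N_H_PER_MAG / n         # physical depth of the layer
    return depth_cm * CONV_RATE * KM_PER_CM  # depth divided by (1 / rate)

print(front_speed(1.0e4))  # ~2e3 km/s, of order the 10^7/n km/s quoted above
print(front_speed(1.0e5))  # ~2e2 km/s, still well above the 7.5 km/s HI outflow
```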
The velocity of the HI outflow cannot account for the velocity at which the photodissociation front must advance into the molecular cloud to have an OTP ratio lower than 3 unless the OTP conversion rates are severely overestimated (by a factor of $`>`$ 10) and/or the density of the outflow is $`>`$ 10<sup>5</sup> cm<sup>-3</sup>. In spite of this problem, a photodissociation front penetrating into the cloud because of the outflow seems to be the most plausible explanation for the low OTP ratio measured in this region. The existence of an outflow can also explain the difference between this region and other PDRs like S140 in which an OTP ratio of 3 has been derived from the H<sub>2</sub> rotational lines. We would like to thank the referee for his/her helpful comments. This work has been partially supported by the Spanish DGES under grant number PB93-0048 and the Spanish PNIE under grant number ESP97-1490-E. N.J.R-F acknowledges Conserjería de Educación y Cultura de la Comunidad de Madrid for a pre-doctoral fellowship.
no-problem/9904/hep-ex9904030.html
ar5iv
text
# Jet Cross Sections at HERA - Current Issues ## 1 Dijet Cross Sections at Low $`Q^2`$ and Virtual Photon Structure ### 1.1 Comparing Dijet Cross Sections with NLO QCD Calculations H1 have measured the triple-differential cross-section, $`\mathrm{d}^3\sigma _{ep}/\mathrm{d}Q^2\mathrm{d}\overline{E_T}^2\mathrm{d}x_\gamma ^{jets}`$ ($`\overline{E_T}^2`$ is the mean $`E_T`$ of the two highest $`E_T`$ jets) in the $`\gamma ^{}p`$ center-of-mass system (hadronic cms) . The photon virtuality spans the range $`1.6<Q^2<80\mathrm{G}\mathrm{e}\mathrm{V}^2`$ and $`y=Pq/Pk`$ is constrained to $`0.1<y<0.7`$, where $`P`$, $`q`$ and $`k`$ are the four-vectors of the proton, virtual photon, and electron. The momentum fraction of the parton from the photon entering the hard scattering, $`x_\gamma ^{jets}`$, defined as $$x_\gamma ^{jets}=\frac{_{i=1,2}E_{T_i}^{jets}\mathrm{exp}(\eta _i^{jets})}{W}$$ (1) is estimated from the two highest $`E_T`$ jets. The variable $`W^2=(P+q)^2=2PqQ^2`$ defines the hadronic cms energy.<sup>1</sup><sup>1</sup>1One can theoretically define the variable $`x_\gamma =p_0P/qP`$, where $`p_0=x_\gamma q`$ is the four-vector of the incoming parton from the photon. This definition assumes a collinear emission of the parton from the photon, which is an approximation neglecting some $`k_{}`$ contribution due to the finite $`Q^2`$. These contributions are, however, small for moderate $`Q^2`$. This variable differs from that defined in (1). When comparing with QCD calculations at LO level, partons directly give jets and there are exactly two partons in the final state. Thus, $`x_\gamma ^{jets}=x_\gamma `$. However, in NLO, $`x_\gamma `$ still gives the momentum fraction of the parton in the photon, but $`x_\gamma ^{jets}x_\gamma `$. At NLO, one also has contributions to $`x_\gamma ^{jets}<1`$ from the direct contribution, whereas at LO $`x_\gamma ^{jets}=1`$ for the direct process. The triple differential cross-section is shown in figure 1 as a function of $`x_\gamma ^{jets}`$ in ranges of $`Q^2`$ and $`\overline{E_T}^2`$. The data (points) are corrected for detector effects and the error bar shows the quadratic sum of systematic and statistical errors. The cross section decreases rapidly with increasing $`\overline{E_T}^2`$ and with increasing $`Q^2`$. Hadronization corrections are not included in the data. These are believed to be small on average but will presumably change the shape of the $`x_\gamma ^{jets}`$ distribution. QCD calculations of dijet cross-sections with virtual photons have recently been performed at next-to-leading order (NLO) . The calculations are implemented in the fixed order program JetViP . In NLO, in the direct component a logarithm $`\mathrm{ln}E_T^2/Q^2`$ occurs, which is proportional to the photon splitting function. This term is large for $`E_T^2Q^2`$ and therefore subtracted and resummed in the virtual photon structure function. The condition $`E_T^2Q^2`$ ensures that it is possible to resolve an internal structure of the virtual photon. The virtual photon PDF’s are suppressed as $`Q^2E_T^2`$ and various anzätze have been used to interpolate between the regions of known leading-log behaviour . It should be noted that the logarithmic term is only subtracted for the transversely polarized photons, since it vanishes in the case of longitudinally polarized photons for $`Q^20`$. The hadronic content of longitudinal virtual photons should be very small and therefore negligible. The results of the NLO calculations are also shown in figure 1. 
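The jet-level observable of equation (1) is computed directly from the two highest-$`E_T`$ jets. A minimal sketch, following the sign convention of equation (1) and using made-up jet values purely for illustration, is:

```python
import math

def x_gamma_jets(jets, w):
    """Estimate of the photon momentum fraction from the two highest-E_T jets,
    as defined in Eq. (1). jets: list of (E_T, eta) in the hadronic cms;
    w: gamma*-p centre-of-mass energy (same energy units as E_T)."""
    leading = sorted(jets, key=lambda jet: jet[0], reverse=True)[:2]
    return sum(et * math.exp(eta) for et, eta in leading) / w

# Illustrative (made-up) jets: (E_T in GeV, pseudorapidity in the hadronic cms)
jets = [(7.0, 2.0), (6.0, 1.5), (4.0, 0.3)]
print(x_gamma_jets(jets, w=200.0))  # ~0.39 for these values
```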
The direct component of this model is shown as the shaded histogram. The result including a resolved component in NLO is shown as the full line. The calculation including a resolved photon component compares better to the data and indicates a need for a resolved virtual photon component below $`10`$ GeV<sup>2</sup>, especially in the forward rapidity region, corresponding to low $`x_\gamma ^{jets}`$. The discrepancy of data and NLO calculation for $`x_\gamma ^{jets}>0.75`$ will presumably be cured when hadronization effects are considered, which lower the NLO results at large $`x_\gamma ^{jets}`$. ### 1.2 Effective Virtual Photon Parton Densities H1 have used their studies to extract an effective parton denstity (EPDF) of the virtual photon. By using the Single Effective Subprocess Approximation , the cross-section for dijet production in LO can be written $$\frac{\mathrm{d}^5\sigma }{\mathrm{d}y\mathrm{d}x_\gamma \mathrm{d}x_\mathrm{p}\mathrm{d}\mathrm{cos}\theta ^{}\mathrm{d}Q^2}\frac{f_{eff/\gamma }^k(x_\gamma ,P_t^2,Q^2)}{x_\gamma }\frac{f_{eff/\mathrm{p}}(x_\mathrm{p},P_t^2)}{x_\mathrm{p}}|M_{SES}(\mathrm{cos}\theta ^{})|^2.$$ Here the Effective Parton Densities (EPDF), $`f_{eff/\gamma }^k`$ and $`f_{eff/\mathrm{p}}`$, are defined as $`f_{eff/\mathrm{A}}(f_{q/A}+f_{\overline{q}/A})+\frac{9}{4}f_{g/A}.`$ The EPDF is shown in figure 2. We see that, independently of $`Q^2`$, the parton density tends to be flat or rising with $`x_\gamma `$ (not to be confused with $`x_\gamma ^{jets}`$). This behaviour is maintained as the probing scale increases. These are features characteristic of photon structure. The data are compared with predictions from the SaS and DG models which are able to describe the data quite well except where $`Q^2P_t^2`$ and various aspects of the model start to break down. The models tend to underestimate the data in these regions. The three parameterisations for the parton density all give a good description of the data both in the lowest $`x_\gamma `$ range and in the lowest two $`Q^2`$ bins but predict a more rapid suppression as $`Q^2P_t^2`$ than is seen in the data. ## 2 Three Jet Photoproduction ZEUS has measured the high-mass three-jet cross section in photoproduction, $`d\sigma /dM_{\text{3J}}`$, as shown in Figure 3 . The $`𝒪(\alpha \alpha _s^2)`$ pQCD calculations from two groups of authors provide a good description of the data, even though they are leading order for this process. Monte Carlo models also generate three-jet events through the parton shower mechanism and both PYTHIA and HERWIG reproduce the shape of the $`M_{\text{3J}}`$ distribution. For three-jet events there are two relevant scattering angles as illustrated in Figure 4(a). The distributions of $`\mathrm{cos}\theta _3`$ and $`\psi _3`$ are shown in Figure 4(b) and (c). The $`\mathrm{cos}\theta _3`$ distribution resembles that of $`\mathrm{cos}\theta ^{}`$ in dijet production and exhibits forward and backward peaks. It is well described in both $`𝒪(\alpha \alpha _s^2)`$ pQCD calculations and parton shower models. The $`\psi _3`$ distribution is peaked near 0 and $`\pi `$ indicating that the three-jet plane tends to lie near the plane containing the highest energy jet and the beam. This is particularly evident if one considers the $`\psi _3`$ distribution for three partons uniformly distributed in the available phase space. The phase space near $`\psi _3=0`$ and $`\pi `$ has been depleted by the $`E_T^{\text{j}et}`$ cuts and by the jet-finding algorithm. 
The pQCD calculations describe perfectly the $`\psi _3`$ distribution. It is remarkable that the parton shower models PYTHIA and HERWIG are also able to reproduce the $`\psi _3`$ distribution. Within the parton-shower model it is possible to determine the contribution to three-jet production from initial-state radiation (ISR) and final-state radiation. It is also possible to switch the QCD phenomenon of colour coherence on and off. From a Monte Carlo study it has been determined that ISR is predominantly responsible for three-jet production. Also, it has been found that colour coherence can account for the suppression of large angle emissions which leads to the depletion of the $`\psi _3`$ distribution near $`\psi _3=\pi /2`$ . ## 3 Summary The dijet cross section at low $`Q^2`$ has been measured by the H1 collaboration and compared to NLO QCD calculations. The comparison shows a clear need for resolved virtual photon component. First measurements of a leading order effective virtual photon PDF have been made. Existing models for virtual photon PDF’s are consistent with the measurements but systematic errors are still large. A first measurement of high transverse energy three-jet photoproduction has been performed. The distribution of the three jets is sensitive to colour coherence and is correctly predicted in $`𝒪(\alpha \alpha _s^2)`$ pQCD.
no-problem/9904/gr-qc9904013.html
ar5iv
text
# Gravitational Collapse of Gravitational Waves in 3D Numerical Relativity \[ ## Abstract We demonstrate that evolutions of three-dimensional, strongly non-linear gravitational waves can be followed in numerical relativity, hence allowing many interesting studies of both fundamental and observational consequences. We study the evolution of time-symmetric, axisymmetric and non-axisymmetric Brill waves, including waves so strong that they collapse to form black holes under their own self-gravity. The critical amplitude for black hole formation is determined. The gravitational waves emitted in the black hole formation process are compared to those emitted in the head-on collision of two Misner black holes. \] Gravitational waves have been an important area of research in Einstein’s theory of gravity for years. Einstein’s equations are nonlinear, and therefore can cause waves, which normally would disperse if weak enough, to be held together by their own gravity. This property characterizes Wheeler’s geon proposed more than 40 years ago, and is responsible for many interesting phenomena. Even in planar symmetric spacetimes, there are many interesting results, such as the formation of singularities from colliding plane waves (see and references therein). In axisymmetry, Ref. studied the formation of black holes (BHs) by imploding gravitational waves, finding critical behavior . These discoveries are all in spacetimes with special symmetries, but they raise important questions about fully general three-dimensional (3D) spacetimes, e.g. the nature of critical phenomena in the absence of symmetries has only recently been studied through a perturbative approach . 3D studies of fully nonlinear gravity can only be made through the machinery of numerical relativity. A few studies of gravitational wave evolutions have been performed in the linear and near linear regimes , in preparation for the study of fully nonlinear, strong field 3D wave dynamics. However, until now no such studies have been successfully carried out. In this paper we present the first successful simulations of highly nonlinear gravitational waves in 3D. We study the process of strong waves collapsing to form BHs under their own self-gravity. We determine the critical amplitude for the formation of BHs and show that one can now carry out these evolutions for long times. For waves that are not strong enough to form BHs, we follow their implosion, bounce and dispersal. For waves strong enough to collapse to a BH under their own self gravity, we find the dynamically formed apparent horizons (AHs), and extract the gravitational radiation generated in the collapse process. These waveforms can be compared in axisymmetry to head-on BH collisions (performed earlier and reported in ). The waveforms are similar at late times, dominated by the quasi-normal modes of the resulting BHs as expected. The difference in the waveforms at early times for these two very different collapse scenarios shows to what extent one can extract information about the BH formation process from the observation of the gravitational radiation emitted by the system. All the simulations presented here were performed with the newly developed Cactus code. For a description of the code and the numerical methods used, see . We take as initial data a pure Brill type gravitational wave, later studied by Eppley and others . 
The metric takes the form $$ds^2=\mathrm{\Psi }^4\left[e^{2q}\left(d\rho ^2+dz^2\right)+\rho ^2d\varphi ^2\right]=\mathrm{\Psi }^4\widehat{ds}^2,$$ (1) where $`q`$ is a free function subject to certain boundary conditions. Following , we choose $`q`$ of the form $$q=a\rho ^2e^{r^2}\left[1+c\frac{\rho ^2}{(1+\rho ^2)}\mathrm{cos}^2\left(n\varphi \right)\right],$$ (2) where $`a,c`$ are constants, $`r^2=\rho ^2+z^2`$ and $`n`$ is an integer. For $`c=0`$, these data sets reduce to the Holz axisymmetric form, recently studied in full 3D Cartesian coordinates in preparation for the present work . Taking this form for $`q`$, we impose the condition of time-symmetry, and solve the Hamiltonian constraint numerically in Cartesian coordinates. An initial data set is thus characterized only by the parameters $`(a,c,n)`$. For the case $`(a,0,0)`$, we found in that no AH exists in initial data for $`a<11.8`$, and we also studied the appearance of an AH for other values of $`c`$ and $`n`$. Such initial data can be evolved in full 3D using the Cactus code, which allows the use of different formulations of the Einstein equations, different coordinate conditions, and different numerical methods. Our focus here is on new physics, but since stable evolutions of such strong gravitational waves have not been obtained before, we comment briefly on the method used for the results in this paper. In , Baumgarte and Shapiro note for weak waves that a system, which is essentially the conformally decomposed ADM system of Shibata and Nakamura , shows greatly increased numerical stability over the standard ADM formulation . We will refer to this system as the BSSN formulation. The use of a particular connection variable in BSSN is reminiscent of the Bona-Massó formulation . We found that BSSN as given in with maximal slicing, a 3-step iterative Crank-Nicholson (ICN) scheme, and a radiative (Sommerfeld) boundary condition is very stable and reliable even for the strong waves considered here. The key new extensions to previous BSSN results are that the stability can be extended to (i) strong, dynamical fields and (ii) maximal slicing, where the latter requires some care. Maximal slicing is defined by vanishing of the mean extrinsic curvature, $`K`$=0, and the BSSN formulation allowed us to cleanly implement this feature numerically, in contrast with the standard ADM equations. (A related idea to improve stability with maximal slicing is that of K-drivers, which helps dramatically, but is ultimately not sufficient for very strong waves in standard ADM formulations , but compare .) We begin our discussion of the physical results with the parameter set ($`a`$=4, $`c`$=0, $`n`$=0); a rather strong axisymmetric Brill wave (BW). Even though this data set is axisymmetric, the evolution has been carried out in full 3D, exploiting the reflection symmetry on the coordinate planes to evolve only one of the eight octants. The evolution of this data set shows that part of the wave propagates outward while part implodes, re-expanding after passing through the origin. However, due to the non-linear self-gravity, not all of it immediately disperses out to infinity; again part re-collapses and bounces again. After a few collapses and bounces the wave completely disperses out to infinity. This behavior is shown in Fig. 
1a, where the evolution of the central value of the lapse is given for simulations with three different grid sizes: $`\mathrm{\Delta }x`$=$`\mathrm{\Delta }y`$=$`\mathrm{\Delta }z`$=0.16 (low resolution), 0.08 (medium resolution) and 0.04 (high resolution), using $`32^3`$, $`64^3`$ and $`128^3`$ grid points respectively. At late times, the lapse returns to 1 (the log returns to 0). Fig. 1b shows the evolution of the log of the central value of the Riemann invariant $`J`$ for the same resolutions. At late times $`J`$ settles on a constant value that converges rapidly to zero as we refine the grid. With these results, and direct verification that the metric functions become stationary at late times, we conclude that spacetime returns to flat (in non-trivial spatial coordinates; the metric is decidedly non-flat in appearance!). Next we increase the amplitude to $`a=6`$, holding other parameters fixed. Fig. 2 shows the evolution of the lapse and the Riemann invariant $`J`$ for this case, showing a clear contrast with Fig. 1. The lapse now collapses immediately, and the Riemann invariant after an initial drop grows to a large value at the origin until it is halted by the collapse of the lapse. For this amplitude the low resolution is now too crude and the code crashes at $`t10`$. We have therefore added an extra simulation with $`\mathrm{\Delta }x`$=0.053 (“intermediate” resolution) using $`96^3`$ grid points. To confirm that a BH has indeed formed, we searched for an AH in the $`a`$=6 case (using a minimization algorithm ). For high resolution, an AH was first found at $`t`$=7.7, which grows slowly in both coordinate radius and area. Fig. 2 shows the location of the AH on the $`x`$-$`z`$ plane at time $`t=10`$ for the three resolutions. The mass of the horizon at this time is about $`M_{AH}=0.87`$, but then due to poor resolution of the grid stretching (a common problem of all BH simulations with singularity avoiding slicings), it continues to grow, ultimately exceeding the initial ADM mass of the spacetime, which for this data set is $`M_{ADM}=0.99`$ (obtained in the way described in ). However, the total energy radiated is about 0.12, computed from the Zerilli functions, completely consistent with $`M_{AH}=0.87`$ and an initial mass of $`M_{ADM}=0.99`$ . CPU time constraints make it difficult to run long term, higher resolution simulations (high resolution used $`120`$ hours running on 16 processors of an SGI/Cray-Origin 2000). We also confirmed that an event horizon does not exist in the initial data by integrating null surfaces out from the origin during the simulation. From these two studies we conclude that the critical amplitude $`a^{}`$ for BH formation for the axisymmetric BW packet is $`a^{}=5\pm 1`$. We have performed more simulations within this range, and have narrowed down the interval to $`a^{}=4.85\pm 0.15`$, although near the critical solution higher resolution is required to establish convergence. Our study of these near-critical solutions is still under way and will be presented elsewhere. It is particularly exciting that the dynamical evolution can be followed long enough for the extraction of gravitational waveforms even for the BH formation case. One important question is what physical information of the gravitational collapse process can be extracted from the observation of the radiation. How much will the waveforms from different BH formation processes be different? 
For this purpose we compare the BW collapse waveforms to those of a very different collapse process, namely the head-on collision of two BHs. In Fig. 3 we show the {$`l`$=2,$`m`$=0} Zerilli function $`\psi `$, obtained from the evolution of Misner data for $`\mu =1.2,1.8,2.2`$ , and from the axisymmetric $`a=6`$ BW collapse. (The case $`\mu =1.2`$ represents a single perturbed black hole, at $`\mu =2.2`$ there are two separate black holes that are outside the perturbative regime.) To compare the waveforms, we adjust the time coordinate of the BW waveforms based on the time delay for different “detector” positions, which for the BW is at $`r=4.6M_{ADM}`$ and for the BHs at $`r=20M_{ADM}`$. We also scale the Zerilli function amplitude for the BHs by $`M_{ADM}`$ and the BW by $`10M_{ADM}`$ to put them on the same figure. We notice the following: (1) The BW waveform is dominated by quasi-normal modes (QNMs) at late times just like in the 2BH case, as expected. A QNM fit shows that at about $`10M_{adm}`$ from the beginning of the wave-train the fundamental mode dominates. (2) However, the BW waveform has more high frequency QNM components in the early phase. The waveforms start with a different offset from zero, which is substantially larger in magnitude in the Brill wave case, but note that in the BW case the detector is put much closer in (at $`4.6M_{adm}`$) and the Zerilli function extraction process gives a larger “Coulomb” component . (3) The fundamental QNMs that dominate the late time evolutions for the two cases have the same phase! We see that both waveforms dip at $`30M_{adm}`$, and peak at $`38M_{adm}`$, to high accuracy. We note that the 2BH waveforms for all $`\mu 2.2`$ have their fundamental QNM appearing with about the same phase, and we see here the BW collapse case also has the same phase. This and other interesting comparisons between the two collapse scenarios will be discussed further elsewhere. (We note that these features noted above are not sensitive to the $`\mu `$ value chosen, within the range of $`\mu =1.22.2`$.) Next we go to a pure strong wave case with full 3D features (the first ever simulated), where the initial waveform is even more dominated by details of the BH formation process. Fig. 4 shows the development of the data set ($`a`$=6, $`c`$=0.2, $`n`$=1), which has reflection symmetry across coordinate planes; it again suffices to evolve only an octant. The initial ADM mass of this data set turns out to be $`M_{ADM}=1.12`$. Fig. 4a shows a comparison of the AHs of this 3D and the previous axisymmetric cases, using the same high resolution, at $`t`$=10 on the $`x`$-$`z`$ plane. The mass of the 3D AH case is larger, weighing in at $`M_{AH}`$=0.99 (compared to $`M_{AH}(2D)=0.87`$). In Fig. 4b we show the {$`l`$=2,$`m`$=0} waveform of this 3D case, compared to the previous axisymmetric case. The $`c=0.2`$ waveform has a longer wave length at late times, consistent with the fact that a larger mass BH is formed in the 3D case. Figs. 4c and 4d show the same comparison for the {$`l`$=4,$`m`$=0} and {$`l`$=2,$`m`$=2} modes respectively. Notice that while the first two modes are of similar amplitude for both runs, the 3D {$`l`$=2,$`m`$=2} mode is completely different; as a non-axisymmetric contribution, it is absent in the axisymmetric run (in fact, it doesn’t quite vanish due to numerical error, but it remains of order $`10^6`$). We also show a fit to the corresponding QNM’s of a BH of mass 1.0. 
The fit was performed in the time interval $`(10,36)`$, and is noticeably worse if the fit is attempted to earlier times, again showing that the lowest QNM’s dominate at around $`10`$. The early parts of the waveforms $`t<10`$ reflect the details of the initial data and BH formation process. This is especially clear in the {$`l`$=2,$`m`$=2} mode, which seems to provide the most information about the initial data and the 3D BH formation process. At present no 3D BH formation simulation from other scenarios (e.g., true spiraling BH coalescence) are available for comparison, as in the axisymmetric case, but such simulations may actually be available soon . It will be interesting to compare such studies with 3D wave collapses, such as that presented here. In conclusion, we demonstrated numerical evolutions of 3D, strongly non-linear gravitational waves, and studied gravitational collapse of axisymmetric and non-axisymmetric gravitational waves. We compared the wave collapse to the head-on collision of two black holes. The research opens the door to many investigations. Acknowledgments. This work was supported by AEI, NCSA, NSF PHY 9600507, NSF MCA93S025 and NASA NCCS5-153. We thank many colleagues at the AEI, Washington University, Univeristat de les Illes Balears, and NCSA for the co-development of the Cactus code, and especially D. Holz for important discussions. Calculations were performed at AEI, NCSA, SDSC, RZG-Garching and ZIB in Berlin.
no-problem/9904/astro-ph9904254.html
ar5iv
text
# Is the cosmic microwave background really non-Gaussian? ## 1. Introduction The detection of fluctuations in the cosmic microwave background (CMB) by the Differential Microwave Radiometer (DMR) on board the Cosmic Microwave Explorer (COBE) satellite (Smoot et al. 1992) began a new era for studies of the Universe on large scales. Current and planned experiments offer the promise of tight constraints on cosmological parameters, including both quantities important for observational astronomy and inflationary parameters which may provide the ultimate testing ground for fundamental particle physics, relating phenomena on the largest and smallest observable scales. A key cosmological constraint from observations of the primordial density field is its general statistical nature. In most inflationary scenarios, the density is a Gaussian random field, (although see, e.g., Peebles 1999). This implies that the joint probability distribution of the $`N4000`$ temperatures in the galaxy-cut DMR sky map is a multivariate Gaussian. In contrast, topological defect models predict a non-Gaussian density field (e.g., Avelino et al. 1998). However, an analysis by Pen et al. (1997) suggests that defects are inconsistent with the observed power spectrum of fluctuations. At present, inflationary models, or at least generically Gaussian models, dominate the literature on large-scale structure. It is therefore quite intriguing that two groups, Ferreira, Magueijo & Gorski (1998, hereafter FMG) and Pando, Valls-Gabaud & Fang (1998, hereafter PVF), claim that the CMB fluctuations measured by COBE DMR are non-Gaussian. If substantiated, these results could potentially rule out standard inflation as the primary mechanism for cosmic structure formation in the early Universe. The results are all the more surprising because the literature documents considerable previous effort to identify non-Gaussianity in the CMB (see references given in §3), all of which failed. In this Letter, we revisit the question of whether COBE DMR rules out Gaussianity. We begin with a series of new tests of Gaussianity based on eigenmode analyses of the DMR map (§2). We then consider the results of FMG and PVF in Section 3, exploring their sensitivity to individual modes and spatial features as well as the significance of these results in light of the other statistical tests which have been applied to the data. We present our conclusions in Section 4. ## 2. Statistics of eigenmodes We analyze a sky map formed of the combined 53 and 90 GHz four year DMR data pixelized at resolution 6 in Galactic coordinates, with the “custom” Galaxy sky mask (Bennett et al. 1996), and with monopole and dipole contributions removed as in Tegmark & Bunn (1995). The resulting data set consists of temperatures for 3,881 pixels in the sky, which we arrange in a vector $`𝐱`$. We take the covariance matrix $`𝐂\mathrm{𝐱𝐱}^t`$ to be of the form $`𝐂=𝐒+𝐍`$, where the noise matrix $`𝐍`$ is diagonal and the signal matrix $`𝐒`$ is a Harrison-Zel’dovich power spectrum normalized to $`Q_{rmsPS}=18\mu `$k (e.g., Bennett et al. 1996). The pixelized data are both correlated and noisy, hence we subject the map $`𝐱`$ to both a principal component analysis (PCA) and a signal-to-noise eigenmode analysis (SNA), two standard astrophysical tools. Both of these procedures involve expanding the data in a new basis, the eigenvectors of $`𝐂`$ for the PCA case and the eigenvectors of $`𝐍^{1/2}\mathrm{𝐒𝐍}^{1/2}`$ for the SNA case. 
The eigenvectors are sorted by decreasing eigenvalue and normalized so that the expansion coefficients have unit variance. For the SNA (Bond 1994; Bunn & Sugiyama 1995; Tegmark et al. 1997), the modes are listed in order of decreasing signal-to-noise level. For the PCA, the modes explain successively less and less of the variance in the data. Since the DMR noise per pixel does not fluctuate much, the two methods give similar results. The first few hundred modes contain essentially all the cosmological information, and probe successively smaller angular scales (Bond 1994; Bunn & Sugiyama 1995; Bunn & White 1996). We use the top 250 modes for the Gaussianity tests described below. The purpose of this exercise is two-fold: First, we can determine how many cosmologically significant degrees of freedom a given statistical test should consider. Second, the decomposition into uncorrelated eigenmodes allows the data to be cast as a list of random numbers which, under the null hypothesis that the DMR data are Gaussian<sup>4</sup><sup>4</sup>4Note that as long as the true CMB sky is Gaussian, our data set will be Gaussian as well: Both the smoothing done by the DMR beam, our galaxy cut and our monopole and dipole removal are linear operations, and all linear operations preserve Gaussianity., will be independent and normally distributed. Although the statistics of these samples may not be completely testable in practice, we can still constrain general properties of the COBE data. We run both the lists of 250 PCA and SNA entries and the entire list of 3877 numbers (the rank of the covariance matrix after monopole and dipole subtraction) from the PCA basis through a smattering of tests, first for Gaussianity of the individual list elements. The null hypothesis cleanly passes Kolmogorov-Smirnoff and $`\chi `$-square tests, along with tests of cumulants up to fourth order and of the significance of the top few outliers. None of these tests manage to reject the Gaussian null hypothesis with 95% confidence. Note that these tests are sensitive only to the 1-point distribution of mode amplitudes, not to correlations between modes. This is strong though not irrefutable evidence that if the DMR data are non-Gaussian then mode correlations, not mode amplitudes, are responsible. The next step in testing the Gaussian hypothesis is to look for mode correlations. This is a difficult thing to do in any exhaustive way, even for our short lists of 250 elements, and a thorough treatment of this problem is beyond the scope of this Letter. We note only that no correlations were detected above the 95% confidence level in tests of second, third and fourth order $`N`$-point correlations. We also used mode amplitudes to simulate rolls of a die and examine the one- and two-point distributions of outcomes to see if the die is loaded. If one wishes to play dice with the Universe, evidently it would be a fair game. ## 3. Reports of Non-Gaussianity The above tests of the COBE data (26 in all) join a sizable list of previous results without significant detections of non-Gaussianity. For the 53 GHz DMR 1 year data, the three-point function was studied by Luo (1994, one test) and Hinshaw et al. (1994, two tests) while Smoot et al. (1994) considered the topological genus and kurtosis (two tests). For the 53 and 90 GHz DMR 2 year data, Hinshaw et al. 
(1995) studied the equilateral and pseudocollapsed three-point function at three $`\mathrm{}`$-cuts with 12 tests in total; the most extreme gave a 98% non-Gaussianity detection, but was deemed to suffer from a known noise problem. Kogut et al. (1996) tested the DMR 4 year data for three-point correlations, genus and peak correlations (4 tests in all), while Heavens (1998) analyzed this same data set using an optimized bispectrum statistic on 5 different scales. Gaztañaga et al. (1998) performed 5 variance-of-variance tests with the strongest rejection of Gaussianity being at the 91% level. Most recently, Diego et al. (1999) concluded that the DMR data were consistent with Gaussianity in a partition function analysis. On the other hand, there are two Gaussianity tests which the data reportedly fail. One is based on the bispectrum statistic of FMG (at 98% confidence) and other on the fourth-order wavelet statistic proposed by PVF (at 99% confidence). These are strong signals of non-Gaussianity, but some caution is in order. Given that the DMR data have been subjected to the 32 other published tests cited above which provide no evidence of non-Gaussianity (not to mention the 26 reported here and any tests, including some of our own, which were not reported because they yielded null results), are the FMG and PVF results simply expected outliers in the distribution of test results? To address this point, suppose we try to rule out some null hypothesis by subjecting a data set to $`n`$ independent statistical tests, and that the most successful one rules it out at a confidence level $`p`$, say 99%. How significant is this really? Let $`p_i`$ denote the confidence level obtained from the $`i^{th}`$ test, with $`p_{\mathrm{max}}\mathrm{max}\{p_i\}`$ corresponding to the most successful test. The probability of getting a less extreme result is then $$P(p_{\mathrm{max}}<p)=\underset{i=1}{\overset{n}{}}P(p_i<p)=\underset{i=1}{\overset{n}{}}p=p^n.$$ (1) For example, the most extreme S/N eigenmode coefficient in Section 2 is a 3.3-$`\sigma `$ outlier. If that one coefficient was all we had, then we would reject Gaussianity at the 99.9% level. However, we have 250 independent numbers and equation (1) shows that our level of confidence in rejection from that one extreme coefficient is only $`0.999^{250}78\%`$. Similarly, the list of 34 published Gaussianity tests mentioned above contains one which rules out the null hypothesis at the 99% level. If these tests were independent, we could reject Gaussianity with only $`0.99^{34}71\%`$ confidence. Of course the tests are not strictly independent. Yet if some of them capture only subsets of the information contained in the $`4000`$ COBE data points, then they may be effectively independent of each other. With this in mind we examine the FMG and PVF tests in more detail. ### 3.1. The wavelet test The detection reported by PVF of a non-Gaussian signal is made with a measure of fourth-order correlations between wavelet coefficients of the DMR data. The coefficients are obtained using a discrete transform of the northern sixth of the sky map, after the spherical plane of the sky has been projected onto the face of a cube. Taken alone, this measure reportedly gives a detection of non-Gaussian signal at 99% confidence. But we have more information, even about PVF’s wavelets: a second projection of the sky map onto the opposite face of the cube (i.e., the opposite hemisphere of the sky) lies roughly at the 40% confidence level. 
Together, a joint two-faced wavelet analysis gives a weaker rejection of the Gaussian hypothesis, formally at 97% confidence. Furthermore, the PVF detection is claimed only for wavelets on one specific scale even though they seek similar detections on two other scales but do not find them. Likewise, a detection was sought but not found for third-order moments. With $`n=2\times 3\times 2`$, equation (1) predicts a much lower confidence level, $`0.99^{12}89\%`$, for the claimed detection. In an analysis based on compact but smooth wavelets (e.g., Bromley 1994), we confirm the existence of a strong (99.6%) non-Gaussian outlier at the 11+22 scales reported by PVF. Interestingly, we can make the entire non-Gaussian signal vanish by simply zeroing or flipping the sign of a single, modestly rare principal component amplitude (a 2.7-$`\sigma `$ fluctuation of the 90<sup>th</sup> mode). Also the 17<sup>th</sup> eigenmodes from both the PCA and SNA strongly affect the wavelet detection. In the S/N basis, this mode has an amplitude of 2.1-$`\sigma `$ (slightly less in the principal component vector); by zeroing the amplitude of these modes, the wavelet statistic yields less than a 2-$`\sigma `$ detection. Furthermore, when set to zero, a single spot in the sky the size of the COBE beam (centered at $`b=64.6^{}`$, $`l=40.6^{}`$), also cuts the non-Gaussian signal down to a similar level. Of the six pixels in this spot, three are within one standard deviation of the expected noise, while two are at the 2.6-$`\sigma _n`$ level and the third is a strong outlier at 3.1-$`\sigma _n`$. (How this spot affects the PVF results depends on unspecified details of their analysis.) ### 3.2. The bispectrum test FMG introduce a measure, $`I_{\mathrm{}}^3`$, based on averaged triplets of projection coefficients from even-multipole spherical harmonics. Non-Gaussian behavior is seen only at $`\mathrm{}=16`$, but FMG are careful to consider the fact that the bispectrum at eight other $`\mathrm{}`$-values are individually consistent with the Gaussian hypothesis. The reported confidence of the non-Gaussian detection is 98%. There is nonetheless a possibility that the nine $`I_{\mathrm{}}^3`$ values given by FMG are sensitive to only a fraction of the information in the COBE data. Although there is some spherical harmonic mode coupling as a result of the sky mask, the odd $`\mathrm{}`$ multipoles are largely missing as well as multipoles above $`\mathrm{}=18`$. There may also be dependence on localized noise. We emphasize this latter point by setting to zero the five pixels in the beam-size spot on the sky centered at Galactic latitude $`b=39.5^{}`$ and longitude $`\mathrm{}=257^{}`$, shown in Fig. 1. The value of $`I_{16}^3`$ falls from about 0.92 to 0.78, approximately a 98% detection on its own<sup>5</sup><sup>5</sup>5Note that the effect of zeroing the spot depends somewhat on details of removing monopole and dipole contributions to the DMR maps. If explicit removal (e.g., Tegmark & Bunn 1995) is not performed on the custom cut map, as is apparently the case in the bispectrum analysis of Magueijo, Ferreira & Gorski (1999; Fig. 1 therein), then the effect of removing the spot (actually a nearest-neighbor) is to lower $`I_{16}^3`$ to 0.66., but well below 2-$`\sigma `$ when taken in conjunction with the other eight $`I_{\mathrm{}}^3`$ values shown in Fig. 2 ($`.98^984\%`$ if the 9 values were uncorrelated). Note that there is a single, rare pixel brightness value in the spot. 
In units of the expected noise fluctuations, it is at the level of $`3.5\sigma _n`$, and zeroing it alone cuts $`I_{16}^3`$ by 10%. The bispectrum statistic is also highly sensitive to individual eigenmodes. Zeroing or flipping the sign of principal component 151 causes $`I_{16}^3`$ to drop from 0.92 to the unambiguously Gaussian values of 0.61 and 0.20, respectively. Furthermore, there is sensitivity to S/N eigenmode 224, a 3-$`\sigma `$ fluctuation (the second most extreme of the first 250 modes) which is more strongly coupled to noise than cosmic structure (its S/N eigenvalue is 0.4). Zeroing this knocks $`I_{16}^3`$ to 0.74; flipping the sign causes the value to fall to 0.52, a decidedly Gaussian value. Since both of these eigenmodes are dominated by noise rather than cosmic signal, it is possible that the source of the alleged non-Gaussianity is detector noise rather than CMB. Note that in the above analyses, we systematically searched for the modes or pixels which affected the wavelet and bispectrum statistics the most. In each case we found only a few which cause more than an insignificant (several percent) change in these measures, and interestingly the significant modes or pixels were different for the two measures. Using randomly generated Gaussian skymaps selected for apparent non-Gaussianity similar to the DMR data, we also checked that it is quite common for a Gaussian map to give a wavelet or bispectrum measure that is sensitive to only a few individual pixels or modes, just as in the DMR case. ## 4. Conclusion The problem of the statistical nature of the CMB may be cast in a conceptually simple form: just use the observed temperature fluctuations to make a random number generator, based on assumed statistical properties, and test its quality. Here we have used the 53+90 GHz COBE sky map to generate lists of putative random numbers with principal components and signal-to-noise eigenmodes. In both cases the one-point distribution is manifestly Gaussian. If the CMB on COBE scales is non-Gaussian, it is the result of correlations between modes. Unfortunately, exhaustive tests of mode correlations are not feasible. A few tests which can pick up a fair range of non-Gaussian behavior were performed and no evidence of mode correlations was found. Here we have also considered the two statistics which reportedly detect non-Gaussianity in the COBE data. Both detections turn out to be fragile in the sense that they vanish when a single DMR beam spot or a single eigenmode is removed. Moreover, we found that the detection by PVF, based on wavelets alone, was less significant than originally claimed. Even so, with the dozens of different Gaussianity tests that have now been published, it would not be surprising if a perfectly valid analysis rejected Gaussianity at say 98% confidence purely by accident. Our results cast some doubt on the significance of the claimed non-Gaussian behavior in the CMB. If the reported detections are real nonetheless, then the eigenmodes and COBE-beam spots that we isolated for the wavelet and bispectrum statistics are candidates for potential non-Gaussian sources in the CMB. This latter possibility would perhaps be more satisfying if both measures were coupling to the same non-Gaussian structure in the sky. However, this is not obviously the case, since both the bispectrum statistic and the wavelet measure show virtually no sensitivity to the sky spots and eigenmodes which so dramatically affect the other. 
It is generally much easier to show that a bad random number generator is bad then to prove that a good one is good. Indeed, the results reported here fail to demonstrate that the CMB really is Gaussian. Conversely, the search for non-Gaussianity is also something of an uphill battle, a fight against the central limit theorem which causes both instrumental effects and the linear combinations involved in the eigenmode expansions to make things look more Gaussian. Therefore statistical measures should be tuned for the specific type of non-Gaussianity that physical models predict. This approach is taken in many recent studies (e.g., Cayon & Smoot 1995; Magueijo 1995; Torres et al. 1995; Gangui 1996; Gangui & Mollerach 1996; Ferreira & Magueijo 1997; Ferreira et al. 1997; Barrieiro et al. 1998, Lewin et al. 1999; Popa 1998) with an eye toward upcoming, high-resolution CMB data. We thank Angélica de Oliveira-Costa, Al Kogut, Alex Lewin, Bill Press, George Rybicki, and Nelson Beebe for useful comments. BCB acknowledges partial support from NSF Grant PHY 95-07695 and the use of supercomputing resources provided by NASA/JPL and Caltech/CACR. MT was funded by NASA though grant NAG5-6034 and Hubble Fellowship HF-01084.01-96A from STScI, operated by AURA, Inc. under NASA contract NAS5-26555. REFERENCES Avelino, P. P., Shellard, E. P. S., Wu, J. H. P., & Allen, B. 1998, ApJL, 507, L101 Barrieiro, R. B., Sanz, J. L., Martínez-González E, & Silk, J. 1998, MNRAS, 296, 693 Bennett, C. L. et al. 1996, ApJ, 464, L1 Bond, J. R. 1994, Phys. Rev. Lett., 74, 4369 Bromley, B. C. 1994, ApJ, 423, L81 Bunn, E. F., & Sugiyama, N. 1995, ApJ, 446, 49 Bunn, E. F., & White, M. 1995, ApJ, 480, 6 Cayon, L., & Smoot, G. F. 1995, ApJ, 452, 487 Diego, J. M., Martínez-González, E., Sanz, J. L., Mollerach, S. , & Martínez, V. 1999, MNRAS, 306, 427 Ferreira, P. G., & Magueijo, J. 1997, Phys. Rev. D, 55, 3358 Ferreira, P. G., Magueijo, J., & Gorski, K. M. 1998, ApJ, 503, 1 (“FMG”) Ferreira, P. G., Magueijo, J., & Silk, J. 1997, Phys. Rev. D, 56, 4592 Gaztañaga, E., Fosalba, P., & Elizalde E 1998, MNRAS, 295, 30P Gangui, A. 1996, Helv. Phys. Acta, 69, 215 Gangui, A., & Mollerach, S. 1996, Phys. Rev. D, 54, 4750 Hinshaw, G. et al. 1994, ApJ, 431, 1 Hinshaw, G. et al. 1995, ApJ, 446, L7 Kogut A et al. 1995, ApJL, 439, 29L Kogut A et al. 1996, ApJL, 464, L29 Lewin, A., Albrecht, A., & Magueijo, J. 1999, MNRAS, 302, 131 Luo, X. 1994, Phys. Rev. D, 49, 3810 Magueijo, J., Ferreira, P. G., & Gorski, K. M. 1999, astro-ph/9903051 Magueijo, J. 1995, Phys. Rev. D, 52, 4361 Pando, J., Valls-Gabaud, D, & Fang., L.. 1998, Phys. Rev. Lett., 81, 4568 (“PVF”) Popa, L. 1998, astro-ph/9806086 Peebles, P. J. E. 1999, ApJ, 510, 523 Pen U.-L., Seljak, U., & Turok, N. 1997, Phys. Rev. Lett., 79, 1611 Smoot, G. F. et al. 1992, ApJL, 396, L1 Smoot, G. F. et al. 1994, ApJ, 437, 1 Tegmark, M., & Bunn, E. F. 1995, ApJ, 455, 1 Tegmark, M., Taylor, A. N., & Heavens, A. F. 1997, ApJ, 480, 22 Torres, S. et al. 1995, MNRAS, 274, 853
no-problem/9904/cond-mat9904155.html
ar5iv
text
# Middle-Field Cusp Singularities in the Magnetization Process of One-Dimensional Quantum Antiferromagnets ## Abstract We study the zero-temperature magnetization process ($`MH`$ curve) of one-dimensional quantum antiferromagnets using a variant of the density-matrix renormalization group method. For both the $`S=1/2`$ zig-zag spin ladder and the $`S=1`$ bilinear-biquadratic chain, we find clear cusp-type singularities in the middle-field region of the $`MH`$ curve. These singularities are successfully explained in terms of the double-minimum shape of the energy dispersion of the low-lying excitations. For the $`S=1/2`$ zig-zag spin ladder, we find that the cusp formation accompanies the Fermi-liquid to non-Fermi-liquid transition. preprint: OUCMT-99-6 Low-dimensional antiferromagnetic (AF) quantum spin systems with various spin magnitude $`S`$ and various spatial structures, have been a field of active researches, both experimentally and theoretically. In particular, the magnetization process ($`MH`$ curve, $`M`$: magnetization, $`H`$: magnetic field) of AF spin chain at low-temperatures have recently drawn much attention, because it exhibits various phase-transition-like behaviors: e.g., the critical phenomena $`\mathrm{\Delta }M\sqrt{HH_\mathrm{c}}`$ associated with the gapped excitation (excitation gap $`H_\mathrm{c}`$) or with the saturated magnetization (at the saturation field $`H_\mathrm{s}`$), magnetization plateau, and, the first-order transition. These are field-induced phase transitions of the ground state, which reflect non-trivial energy-level structure at zero field. What we consider in this Letter is another type of singularity which has not been discussed so much: the cusp singularity at $`H=H_{\mathrm{cusp}}`$ in the middle-field region ($`H_\mathrm{c}<H_{\mathrm{cusp}}<H_\mathrm{s}`$). Existence this type of singularity was first demonstrated by Parkinson for the integrable Uimin-Lai-Sutherland (or, SU(3)) chain, and has also been known for some other integrable spin chains, and ladders. Derivation of the magnetization cusp for each model, however, relies on the model’s integrability in an essential manner, restricting the Hamiltonian to be of somewhat unrealistic form. Hence, whether or not this type of singularity can be found in realistic systems is a highly non-trivial question. In the present Letter, we make the first systematic numerical study of the “middle-field cusp singularity” (MFCS, for short) in the zero-temperature $`MH`$ curve for non-integrable systems, through which we give a positive answer for the above question. Namely, we show that the $`S=1/2`$ zig-zag spin ladder and the $`S=1`$ bilinear-biquadratic chain actually exhibit MFCS in the $`MH`$ curve. The numerical method we employ is the product-wavefunction renormalization group (PWFRG), which is a variant of the density-matrix renormalization group (DMRG). Efficiency of the PWFRG in calculation of the $`MH`$ curve has been demonstrated in previous studies; the PWFRG will allow us to obtain the $`MH`$ curve in the thermodynamic limit with enough accuracy to detect “weak” (non-divergent) singularity like the MFCS. Consider the $`S=1/2`$ zig-zag spin ladder, whose actual realization can be made as a quasi-one-dimensional material. The $`MH`$ curve of the system with bond alternation has recently been studied, where the cusp singularity has not been discussed. 
The Hamiltonian of the system is given by $`_{\mathrm{zig}\mathrm{zag}}`$ $`=`$ $`{\displaystyle \underset{i}{}}[\stackrel{}{S}_i\stackrel{}{S}_{i+1}+J\stackrel{}{S}_i\stackrel{}{S}_{i+2}]H{\displaystyle \underset{i}{}}S_i^z,`$ (1) where $`\stackrel{}{S}_i`$ is the $`S=1/2`$ spin operator at the $`i`$-th site. We have normalized the nearest neighbor coupling to unity, and have denoted the next-nearest coupling by $`J`$ ($`>0`$). In Fig. 1, we show the $`MH`$ curve calculated by using the PWFRG. We see the $`MH`$ curve for $`J>1/4`$ exhibits a clear MFCS. We take down-spin-particle picture, where one down spin in the saturated (all up) state is regarded as a particle. Then the system near $`H_s`$ can be regarded as that of interacting Bose particles, which reduces to the well-known delta-function Bose gas ($`\delta `$-BG) model in the low-energy limit. Important point is that the $`\delta `$-BG at low density is equivalent to spinless free-Fermi gas. Hence, for discussion of the $`MH`$ curve near $`H_s`$, we may treat the system as that of the spinless Fermions which is completely characterized by the one-particle excitation energy dispersion $`\omega (k)`$. The one-down-spin excitation energy $`\omega (k)`$ is calculated to be $$\omega (k)=\mathrm{cos}k1+J(\mathrm{cos}2k1),$$ (2) which we depicted in Fig.2. It should be noted that, at $`J=1/4`$, there occurs a qualitative change in the shape of $`\omega (k)`$: For $`J1/4`$, $`\omega (k)`$ have a single minimum at $`k=\pi `$, while, for $`J>1/4`$, $`\omega (k)`$ at $`k=\pi `$ changes into a local maximum and two minima newly appear (corresponding $`k`$-positions are determined from $`\mathrm{cos}k=1/(4J)`$). Then, the van Hove singularity associated with the double-minimum shape of $`\omega (k)`$ gives a simple explanation of the MFCS in the $`MH`$ curve for $`J>1/4`$. Let us make a quantitative analysis as follows. From the bottom of $`\omega (k)`$, we obtain $`H_s=2`$ for $`1/4J0`$, and $`H_s=1+2J+1/(8J)`$ for $`J1/4`$. As the applied field $`H`$ is decreased below $`H_s`$, the down-spin density becomes to be non-zero. The $`MH`$ curve is obtained from $`M`$ $`=`$ $`1/2{\displaystyle \frac{1}{2\pi }}{\displaystyle R(k)𝑑k},`$ (3) $`E(M)`$ $`=`$ $`{\displaystyle \frac{1}{2\pi }}{\displaystyle \omega (k)R(k)𝑑k},`$ (4) $`H`$ $`=`$ $`{\displaystyle \frac{E(M)}{M}},`$ (5) where $`R(k)`$ is the zero-temperature Fermi distribution function which is unity inside the “Fermi vacuum” but is zero otherwise. Hence, how the particles fills the energy band tells us the essential behavior of the $`MH`$ curve. In $`0J1/4`$, the $`MH`$ curve is smooth in the whole range of $`0M1/2`$. The one-particle energy $`\omega (k)`$ near the bottom position $`k=\pi `$ has the expansion $`\omega (k)=2+\frac{1}{2}(14J)(\mathrm{\Delta }k)^2+\frac{1}{24}(16J1)(\mathrm{\Delta }k)^4+\mathrm{}`$ where we have introduced $`\mathrm{\Delta }k=k\pi `$. The $`HM`$ curve near $`H_s`$ is then calculated from (3), (4), and (5) to be $$H_sH=\frac{\pi ^2(14J)}{2}(\mathrm{\Delta }M)^2+\frac{\pi ^4(16J1)}{24}(\mathrm{\Delta }M)^4\mathrm{},$$ (6) where $`\mathrm{\Delta }M=1/2M`$. In the correctly mapped $`\delta `$-BG treatment for a class of AF spin chains, there may be $`(\mathrm{\Delta }M)^3`$ term due to the finiteness of the effective coupling. 
Hence, for the present model, we fit the PWFRG-calculated $`MH`$ curve with $$H_sH=\alpha (\mathrm{\Delta }M)^2\left[1+\gamma (\mathrm{\Delta }M)+\delta (\mathrm{\Delta }M)^2\mathrm{}\right],$$ (7) to check whether the obtained value of $`\alpha `$ agrees with the free-Fermion prediction. The best-fit results are $`\alpha =2.6`$ ($`J=0.1`$) and $`\alpha =0.92`$ ($`J=0.2`$), which are consistent with the free-Fermion prediction: $`\alpha =2.960\mathrm{}`$ ($`J=0.1`$) and $`\alpha =0.9869\mathrm{}`$ ($`J=0.2`$). At $`J=1/4`$, $`(\mathrm{\Delta }k)^2`$-term in $`\omega (k)`$ vanishes, leading to a different form of the expansion: $`H_sH=\stackrel{~}{\alpha }(\mathrm{\Delta }M)^4\left[1+\stackrel{~}{\gamma }(\mathrm{\Delta }M)+\stackrel{~}{\delta }(\mathrm{\Delta }M)^2\mathrm{}\right]`$. The best-fit value of $`\stackrel{~}{\alpha }`$ from the PWFRG result lies in the range $`\stackrel{~}{\alpha }=1417`$, which is in reasonable agreement with the free-Fermion prediction $`\stackrel{~}{\alpha }=12.2\mathrm{}`$. Hence the free-Fermion picture also holds for $`J=1/4`$, and we have $`\mathrm{\Delta }M(HH_s)^{1/4}`$ with the critical exponent $`1/4`$ being different from the “standard” value $`1/2`$, supporting the finite-size diagonalization result. We have thus seen that the system for $`J1/4`$ is a Fermi liquid, near $`H_s`$. Hence, also for $`J>1/4`$, we may expect that the system continues to behave as a Fermi liquid. Qualitatively, the Fermi-liquid character well explains the $`MH`$ curve, in particular, the appearance of the MFCS. For quantitative discussion, however, there emerges an important difference from the case of $`J1/4`$: Due to the double-minimum shape of $`\omega (k)`$, the system becomes to be two-component liquid \[each component is composed of modes around each minimum\]. Since the separation into two components may not be complete, there should remain interactions between the components. Such a correlated multicomponent system may often behave as a non-Fermi liquid, or, Tomonaga-Luttinger (TL, for short) liquid, whose typical example is the Hubbard chain. The TL liquid is characterized by the smooth edge of the momentum distribution at the Fermi point $`k_\mathrm{F}`$: $`R(k)(k_\mathrm{F}k)^\zeta `$ ($`kk_\mathrm{F}`$, $`\zeta >0`$ ). Then, for very small particle density, we assume $$R(k)=R_0(k_\mathrm{F}|k|)^\zeta /k_\mathrm{F}^\zeta ,$$ (8) where $`R_0`$ is a constant, and we have assumed that $`R(0)`$ remains to be finite even in the vanishing particle density ($`k_\mathrm{F}0`$). Assuming that each component has a parabolic energy dispersion $`\omega (k)=\sigma \mathrm{\Delta }k^2`$ ($`\mathrm{\Delta }k=kk^{}`$) around each minimum $`k=k^{}`$, we have the following form of the ground-state energy density $`E_\mathrm{G}`$ as a function of the total particle number density $`\rho `$: $`E_\mathrm{G}`$ $`=`$ $`C_{\mathrm{TL}}{\displaystyle \frac{\sigma \pi ^2}{12}}\rho ^3,`$ (9) $`C_{\mathrm{TL}}`$ $`=`$ $`C_{\mathrm{TL}}(R_0,\zeta ){\displaystyle \frac{24(\zeta +1)^2}{R_0^2(\zeta +2)(\zeta +3)}}.`$ (10) For the two-component (non-interacting) Fermi liquid, we have $`\zeta =1`$ and $`R_0=2`$ giving $`C_{\mathrm{TL}}(R_0,0)=1`$. Therefore, deviation of $`C_{\mathrm{TL}}`$ from unity implies the non-Fermi liquid character of the system. 
Since $`\rho `$ corresponds to $`\mathrm{\Delta }M=1/2M`$, the TL-liquid expression (9) gives the coefficient $`\alpha `$ in (7) as $`\alpha =C_{\mathrm{TL}}\sigma \pi ^2/4`$ where $`\sigma `$ is calculated from the band curvature at each minimum of $`\omega (k)`$. Explicitly, we have $`\sigma =(16J^21)/(8J)`$ and $$\alpha =C_{\mathrm{TL}}\frac{(16J^21)}{32J}\pi ^2.$$ (11) The best-fit values of $`\alpha `$ from the PWFRG calculation are $`5.9\pm 0.3`$ ($`J=0.4`$) and $`7.4\pm 0.4`$ ($`J=0.5`$). These values disagree with the free-Fermion values ((11) with $`C_{\mathrm{TL}}=1`$): $`\alpha =1.20\mathrm{}`$ ($`J=0.4`$) and $`\alpha =1.85\mathrm{}`$ ($`J=0.5`$). Rather, our calculation leads to $`C_{\mathrm{TL}}4`$ in (11), showing a clear sign of the non-Fermi-liquid character of the system. Remarkably, in the Hubbard chain with on-site interaction $`U`$ ($`>0`$) which can be viewed as a two-chain $`S=1/2`$ spin ladder via the Jordan-Wigner transformation, the same factor $`C_{\mathrm{TL}}=4`$ appears in the $`U=\mathrm{}`$ limit or in the low-density limit. Therefore, we may conclude that, near $`H_s`$, the $`S=1/2`$ zig-zag ladder for $`J>1/4`$ is a two-component non-Fermi TL liquid. Let us next consider the bilinear-biquadratic (BLBQ, for short) chain with the Hamiltonian $`_{\mathrm{BLBQ}}`$ $`=`$ $`{\displaystyle \underset{i}{}}\left[\stackrel{}{S}_i\stackrel{}{S}_{i+1}+\beta (\stackrel{}{S}_i\stackrel{}{S}_{i+1})^2\right]H{\displaystyle \underset{i}{}}S_i^z,`$ (12) where $`\stackrel{}{S}_i=(S_i^x,S_i^y,S_i^z)`$ is the $`S=1`$ spin operator at the site $`i`$. In Fig.3 we show PWFRG-calculated $`M`$-$`H`$ curves for $`\beta =0.45`$, $`0.6`$, $`0.8`$, $`1.0`$, whare clear MFCS is seen. In Ref., we made a quantitative test for the square-root critical behavior, $`MA(HH_c)^{1/2}`$ ($`A`$: amplitude), of the $`MH`$ curve near the lower critical field $`H_c`$ (which is proportional to the excitation gap at $`H=0`$). There, we found a critical value $`\beta _c`$ ($`0.41`$) where the critical exponent changes from 1/2 to 1/4, which is in close similarity to the $`J=1/4`$ case of the $`S=1/2`$ zig-zag spin ladder. We should remark that the change in the shape of $`\omega (k)`$ at $`\beta _c`$ has recently been found. Therefore, the MFCS in the BLBQ chain is well explained in terms of the Fermi/Tomonaga-Luttinger-liquid picture. We should also note that the observed two-component Fermi/Tomonaga-Luttinger-liquid behavior for $`\beta >\beta _c`$, is also consistent with the appearance of the MFCS. To summarize, in this Letter we have studied the middle-field cusp singularity (MFCS) in the zero-temperature magnetization process ($`MH`$ curve) for antiferromagnetic spin systems in one dimension. For the $`S=1/2`$ zig-zag spin ladder and the $`S=1`$ bilinear-biquadratic chain, we have found clear MFCS in the $`MH`$ curve obtained by using the product-wavefunction renormalization group method which is a variant of the density-matrix renormalization group (DMRG) method. We have explained the mechanism for the MFCS in terms of the shape-change in the energy dispersion curve of the low-lying excitation. Further, for the $`S=1/2`$ zig-zag spin ladder, we have shown that the formation of the MFCS accompanies the Fermi-liquid to non-Fermi-liquid (Tomonaga-Luttinger-liquid) transition in the character of the system. As far as we know, what we have found for the $`S=1/2`$ zig-zag spin ladder is the first non-trivial example of physically observable MFCS. 
For actual experimental observation, we should of course take the finite-temperature effect into account, which can be made by the “finite-temperature DMRG”. As an important implication drawn from the present study, we should note that the essential mechanism for appearance of the MFCS is the multi-minimum structure of the low-lying excitation energy. Such structure may be well expected for systems with non-trivial spatial structures and/or competing interactions, which often accompany incommensurability in physical quantities. In fact, for the $`S=1/2`$ zig-zag spin ladder in the large $`J`$ region ($`J>0.5`$) (incommensurability at zero field is reported), we have observed another cusp near the lower critical field, whose details will be published elsewhere. This work was partially supported by the Grant-in-Aid for Scientific Research from Ministry of Education, Science, Sports and Culture (No.09640462), and by the “Research for the Future” program of the Japan Society for the Promotion of Science (JSPS-RFTF97P00201). One of the authors (K. O.) is supported by JSPS fellowship for young scientists.
no-problem/9904/quant-ph9904053.html
ar5iv
text
# Reduction of optimum light power with Heisenberg-limited photon-counting noise in interferometric gravitational-wave detectors ## Abstract We study how the behavior of quantum noise, presenting the fundamental limit on the sensitivity of interferometric gravitational-wave detectors, depends on properties of input states of light. We analyze the situation with specially prepared nonclassical input states which reduce the photon-counting noise to the Heisenberg limit. This results in a great reduction of the optimum light power needed to achieve the standard quantum limit, compared to the usual configuration. 04.80.Nn, 42.50.Dv Since the pioneering work by Caves , it is well understood that two sources of quantum noise—the photon-counting noise and the radiation-pressure noise—constitute the fundamental limitation on the sensitivity of an interferometric gravitational-wave detector. These limitations will be of potential importance in long-baseline interferometric detectors which are currently under construction (the LIGO project in the United States and the French-Italian VIRGO project in Europe are the largest ones). For example, the photon-counting shot noise will dominate at the gravitational-wave frequencies above 1 kHz in the VIRGO detector and above 200 Hz in the initial LIGO detector. With a further reduction of the thermal noise, planned in the advanced LIGO interferometer, the role of the shot noise will be even more important. For a coherent laser beam of light power $`P`$, the shot noise associated with photon-counting statistics scales as $`P^{1/2}`$ and the radiation-pressure noise scales as $`P^{1/2}`$. The contributions of these two sources of noise will be equal for some optimum value $`P_{\mathrm{opt}}`$ of the light power. Provided that classical sources of noise (such as thermal and seismic) are sufficiently suppressed, the interferometer with the optimum light power will work at the so-called standard quantum limit (SQL). A simple quantum calculation, based on the use of the Heisenberg uncertainty principle, gives the SQL for the measurement of the relative shift $`z=z_2z_1`$ in the positions of two end mirrors: $$(\mathrm{\Delta }z)_{\mathrm{SQL}}=\sqrt{2\mathrm{}\tau /m},$$ (1) where $`m`$ is the mass of each end mirror and $`\tau `$ is the measurement time. For modern long-baseline interferometers (like LIGO and VIRGO), Fabry-Perot cavities are used in the arms; so $`\tau `$ is actually the cavity storage time, $`\tau =L/\pi c`$, where $`L`$ is the cavity length, $``$ is the finesse, and $`c`$ is the velocity of light. The optimum power, for which the SQL is achieved, is $$P_{\mathrm{opt}}=\frac{mL^2}{\omega \tau ^4},$$ (2) where $`\omega `$ is the light angular frequency. For the initial LIGO configuration, the mirror mass is $`m11\mathrm{kg}`$, the cavity length is $`L4\mathrm{km}`$, the finesse is $`200`$, and the wavelength of the Nd:YAG laser is $`\lambda 1.064\mu \mathrm{m}`$ ($`\omega 1.77\times 10^{15}\mathrm{Hz}`$). This gives the cavity storage time $`\tau 8.5\times 10^4\mathrm{s}`$ and an effective number of bounces $`b=\tau c/2L32`$. The corresponding optimum laser power is $`P_{\mathrm{opt}}191\mathrm{kW}`$ and the SQL of the position shift measurement is $`(\mathrm{\Delta }z)_{\mathrm{SQL}}1.24\times 10^{19}\mathrm{m}`$. Achieving this SQL will make possible to measure gravitational waves with amplitudes $`h`$ greater than $`3\times 10^{23}`$. 
Presently, the available laser light power is insufficient for achieving the SQL (for example, in the initial LIGO configuration the input laser power is $`6\mathrm{W}`$ and the power recycling gain is about $`30`$). Therefore, in advanced LIGO configurations, it is planned to reduce the shot noise by using more powerful lasers, in conjunction with the power-recycling technique . However, for very high laser power, one encounters serious technical problems related to nonuniform heating of the cavity mirrors caused by absorption of even a small portion of circulating light. The resulting thermal aberrations can seriously deteriorate the performance of the interferometer . Therefore, it will be very interesting to study possibilities for achieving the SQL with low light power. The gravitational-wave detection community is quite familiar with the intriguing idea by Caves to reduce the photon-counting noise by squeezing the vacuum fluctuations at the unused input port. During the last decade, other interesting ideas has been developed in the field of theoretical quantum optics, based on the use of nonclassical photon states for the quantum noise reduction in idealized optical interferometers . The main theoretical motivation of all those papers was to show the possibility of beating the shot-noise limit and achieving the fundamental Heisenberg limit for the photon-counting noise in an ideal interferometric measurement. The aim of the present work is to show that the optimum light power needed for the SQL operation of an interferometric gravitational-wave detector with movable mirrors can be greatly reduced by the use of nonclassical states of light with the Heisenberg-limited photon-counting noise. This result means that Heisenberg-limited interferometry is not only interesting for a demonstration of the fundamental uncertainty principle, but can be also important for the experimental detection of gravitational waves. Let us consider a long-baseline Michelson interferometer whose arms are equipped with high-finesse Fabry-Perot cavities, with end mirrors serving as free test masses. In the quantum description, two modes of the light field enter the interferometer through the two input ports of a 50-50 beam splitter. After being mixed in the beam splitter, the light modes spent time $`\tau `$ in the Fabry-Perot cavities, and then leave the interferometer (through the same beam splitter, but in the opposite direction). The photons leaving the interferometer in the output modes are counted by two photodetectors. A gravitational wave incident on the interferometer will cause a relative shift $`z=z_2z_1`$ in the positions of two end mirrors, which results in the phase shift $`\varphi =(\omega \tau /L)z`$ between the two arms. The performance of such an interferometer can be analyzed in the Heisenberg picture, using a nice group-theoretic description proposed by Yurke et al. . Using the boson annihilation operators $`a_1`$ and $`a_2`$ of the two input modes, one constructs the operators $`J_x=(a_1^{}a_2+a_2^{}a_1)/2,`$ (3) $`J_y=\mathrm{i}(a_1^{}a_2a_2^{}a_1)/2,`$ (4) $`J_z=(a_1^{}a_1a_2^{}a_2)/2.`$ (5) These operators form the two-boson realization of the su(2) Lie algebra, $`[J_k,J_l]=\mathrm{i}ϵ_{klm}J_m`$. The Casimir operator is a constant, $`𝐉^2=j(j+1)`$, for any unitary irreducible representation of the SU(2) group; so the representations are labeled by a single index $`j`$ that takes the values $`j=0,1/2,1,3/2,\mathrm{}`$. 
The representation Hilbert space $`_j`$ is spanned by the complete orthonormal basis $`|j,m`$ ($`m=j,j1,\mathrm{},j`$). Using Eq. (3), one finds $$𝐉^2=\frac{1}{2}N\left(\frac{1}{2}N+1\right),N=a_1^{}a_1+a_2^{}a_2,$$ (6) where $`N`$ is the total number of photons entering the interferometer. We see that $`N`$ is an SU(2) invariant; if the input state of the two-mode light field belongs to $`_j`$, then $`N=2j`$. The actions of the interferometer elements on the column-vector $`𝐉=(J_x,J_y,J_z)^T`$ can be represented as rotations in the 3-dimensional space . The first mixing in the beam splitter produces a rotation around the $`y`$ axis by $`\pi /2`$, with the transformation matrix $`𝖱_y(\pi /2)`$. The second mixing corresponds to the opposite rotation, with the transformation matrix $`𝖱_y(\pi /2)`$. The relative phase shift produces a rotation around the $`z`$ axis by $`\varphi `$, with the transformation matrix $`𝖱_z(\varphi )`$. The overall transformation performed on $`𝐉`$ is the rotation by $`\varphi `$ around the $`x`$ axis, $$𝖱_x(\varphi )=𝖱_y(\pi /2)𝖱_z(\varphi )𝖱_y(\pi /2).$$ (7) The information on the phase shift $`\varphi `$ is inferred from the photon statistics of the output beams. Usually, one measures the difference between the number of photons in the two output modes, $$q_{\mathrm{out}}=2J_{z\mathrm{out}}=2[(\mathrm{sin}\varphi )J_y+(\mathrm{cos}\varphi )J_z].$$ (8) If we assume that there are no losses in the interferometer and the classical sources of noise are well suppressed, then the uncertainty in the relative position shift $`z`$ of the end mirrors is due to two factors . The first one is the photon-counting noise. Indeed, since there are quantum fluctuations in $`q_{\mathrm{out}}`$, a phase shift is detectable only if it induces a change in $`q_{\mathrm{out}}`$ which is larger than the uncertainty $`\mathrm{\Delta }q_{\mathrm{out}}`$. Consequently, the uncertainty in the phase shift due to the photon-counting noise is $$(\mathrm{\Delta }\varphi )_{\mathrm{pc}}^2=\frac{(\mathrm{\Delta }q_{\mathrm{out}})^2}{(q_{\mathrm{out}}/\varphi )^2}.$$ (9) If the detection is made on a dark fringe ($`\varphi =\pi /2`$ in the unperturbed state), then the contribution of the photon-counting noise is $$(\mathrm{\Delta }z)_{\mathrm{pc}}^2=A_{\mathrm{pc}}\frac{(\mathrm{\Delta }J_y)^2}{J_z^2},A_{\mathrm{pc}}=\left(\frac{L}{\omega \tau }\right)^2.$$ (10) The second source of noise is due to quantum fluctuations in the radiation pressure. The difference between the momenta transferred by light to the end mirrors, $`𝒫=p_2p_1`$, is easily found to be $`𝒫=(2\mathrm{}\omega \tau /L)J_x`$. The relative shift in the positions of the end mirrors, due to the transferred momenta, is $`(\tau /m)𝒫`$. Therefore, the contribution of the radiation-pressure noise is $$(\mathrm{\Delta }z)_{\mathrm{rp}}^2=A_{\mathrm{rp}}(2\mathrm{\Delta }J_x)^2,A_{\mathrm{rp}}=\left(\frac{\mathrm{}\omega \tau ^2}{mL}\right)^2.$$ (11) Consider the standard case when the coherent laser beam of amplitude $`\alpha `$ enters the interferometer’s one input port, while the vacuum enters the other. 
This input state $`|\mathrm{in}=|\alpha _1|0_2`$ (where $`|0`$ is the vacuum and $`|\alpha =\mathrm{exp}(\alpha a^{}\alpha ^{}a)|0`$ is the coherent state), satisfies $`J_x=J_y=0,J_x^2=J_y^2=|\alpha |^2/4,`$ $`J_z=|\alpha |^2/2,N\overline{N}=|\alpha |^2.`$ Using these results, one finds $$(\mathrm{\Delta }z)^2=(\mathrm{\Delta }z)_{\mathrm{pc}}^2+(\mathrm{\Delta }z)_{\mathrm{rp}}^2=A_{\mathrm{pc}}\overline{N}^1+A_{\mathrm{rp}}\overline{N}.$$ (12) Optimizing $`(\mathrm{\Delta }z)^2`$ as a function of $`\overline{N}`$, one obtains $$\overline{N}_{\mathrm{opt}}=\frac{mL^2}{\mathrm{}\omega ^2\tau ^3},$$ (13) and $`P_{\mathrm{opt}}=\mathrm{}\omega \overline{N}_{\mathrm{opt}}/\tau `$ is given by Eq. (2), while the optimum value of $`\mathrm{\Delta }z`$ is the SQL of Eq. (1). The characteristic noise behavior of Eq. (12) is sometimes explained by the Poissonian photon statistics of the coherent state (i.e., by the fact that $`\mathrm{\Delta }N_1=N_1^{1/2}`$, with $`N_1=a_1^{}a_1`$). However, it is not difficult to see that this explanation is principally wrong. A simple calculation shows that if instead of the coherent state $`|\alpha `$ at the first input port we will use an arbitrary state (pure or mixture) of the single-mode light field, the same result (12) will hold. This will be true, regardless of the statistical properties of the photon state at the first input port, as long as the vacuum enters the second input port. Caves proposed to reduce the optimum power needed to achieve the SQL by using the squeezed vacuum in the second input port. If the carrier mode entering the first input port is in the coherent state $`|\alpha `$, the two-mode input state is given by $`|\mathrm{in}=|\alpha _1|\xi _2`$, where $`|\xi =\mathrm{exp}(\frac{1}{2}\xi a^2\frac{1}{2}\xi ^{}a^2)|0`$ (with $`\xi =r\mathrm{e}^{\mathrm{i}\theta }`$) is the squeezed vacuum state. If one takes $`\theta =0`$ and real $`\alpha `$, the input state satisfies $`J_x=J_y=0,J_{x,y}^2=(\alpha ^2\mathrm{e}^{\pm 2r}+\mathrm{sinh}^2r)/4,`$ $`J_z=(\alpha ^2\mathrm{sinh}^2r)/2,\overline{N}=\alpha ^2+\mathrm{sinh}^2r.`$ In the usual situation, $`\alpha ^2\mathrm{sinh}^2r`$, so one derives $$(\mathrm{\Delta }z)^2A_{\mathrm{pc}}\mathrm{e}^{2r}\overline{N}^1+A_{\mathrm{rp}}\mathrm{e}^{2r}\overline{N}.$$ (14) The use of the squeezed vacuum reduces the photon-counting noise at the expense of the radiation-pressure noise. This results in the reduced optimum light power: $$P_{\mathrm{opt}}(r)P_{\mathrm{opt}}(r=0)\mathrm{e}^{2r},$$ (15) while the optimum value of $`\mathrm{\Delta }z`$ remains the SQL of Eq. (1). However, it is erroneous to think that the reduction of the optimum light power is the merit of the squeezed vacuum alone; actually, the state of the carrier mode is important as well. While the state in the carrier mode was mixed with the phase-insensitive vacuum, the behavior of the quantum noise was determined by the mean number of carrier photons only. However, when mixing the carrier mode with a phase-sensitive state (e.g., with the squeezed vacuum), the precise matching between the quantum states of the two modes is important. 
For example, if the state of the carrier mode satisfies $`a_1^2+a_1^2=0`$ and $`N_1\mathrm{sinh}^2r`$, then we obtain $$(\mathrm{\Delta }z)^2(A_{\mathrm{pc}}\overline{N}^1+A_{\mathrm{rp}}\overline{N})\mathrm{cosh}2r.$$ (16) This results in the same optimum power (2) as for the normal vacuum case, but the sensitivity deteriorates: $`(\mathrm{\Delta }z)_{\mathrm{opt}}^2(\mathrm{\Delta }z)_{\mathrm{SQL}}^2\mathrm{cosh}2r`$. For example, this will be the case for the carrier mode in the coherent state $`|\alpha `$ with $`\mathrm{arg}\alpha =\pi /4`$ or in any phase-insensitive state (a state is called phase-insensitive, if its density matrix is diagonal in the Fock basis; examples are the Fock states themselves or the thermal state). On the other hand, if the carrier mode in an arbitrary state is mixed with a phase-insensitive state (or, more generally, with any state satisfying $`a_2^2=0`$), then $$(\mathrm{\Delta }z)^2=(2\overline{N}_1\overline{N}_2+\overline{N}_1+\overline{N}_2)\left[A_{\mathrm{pc}}(\overline{N}_1\overline{N}_2)^2+A_{\mathrm{rp}}\right].$$ Here, we used notation $`\overline{N}_k=a_k^{}a_k`$, $`k=1,2`$. Clearly, the quantum noise cannot be reduced here, compared to the vacuum case; in particular, for $`\overline{N}_1\overline{N}_2`$, the optimum power remains as in Eq. (2), but the sensitivity deteriorates: $`(\mathrm{\Delta }z)_{\mathrm{opt}}^2(\mathrm{\Delta }z)_{\mathrm{SQL}}^2(1+2\overline{N}_2)`$. From the above arguments, one understands that the reduction of the optimum power can be achieved with a proper phase matching between the two input modes. In this relation, it is interesting to consider input states which lead to the Heisenberg-limited photon-counting noise . It is well known that the shot-noise limit $`(\mathrm{\Delta }\varphi )_{\mathrm{pc}}=\overline{N}^{1/2}`$, achieved with the vacuum at the second input port, is not a fundamental one. Using the uncertainty relation $`(\mathrm{\Delta }J_x)(\mathrm{\Delta }J_y)\frac{1}{2}|J_z|`$, one obtains $`(\mathrm{\Delta }\varphi )_{\mathrm{pc}}(2\mathrm{\Delta }J_x)^1`$. Since for any input state $`|\mathrm{in}_j`$ the relation $`(\mathrm{\Delta }J_x)^2\frac{1}{2}j(j+1)`$ holds, one finds the Heisenberg limit $$(\mathrm{\Delta }\varphi )_{\mathrm{pc}}[2j(j+1)]^{1/2}.$$ (17) Consequently, for large photon numbers ($`\overline{N}=2j1`$), the photon-counting noise $`(\mathrm{\Delta }z)_{\mathrm{pc}}`$ scales as $`1/P`$. It can be shown that the shot-noise limit can be surpassed with the so-called intelligent (minimum-uncertainty) states . (The use of intelligent states for achieving the Heisenberg limit in spectroscopy was discussed in .) The $`J_x`$-$`J_y`$ intelligent states, by their definition, equalize the uncertainty relation: $`(\mathrm{\Delta }J_x)(\mathrm{\Delta }J_y)=\frac{1}{2}|J_z|`$. These states are determined by the eigenvalue equation $$(\eta J_x\mathrm{i}J_y)|\lambda ,\eta =\lambda |\lambda ,\eta .$$ (18) The spectrum is discrete: $`\lambda =\mathrm{i}m_0\sqrt{1\eta ^2}`$, where $`m_0=j,j1,\mathrm{},j`$, and $`\eta `$ is a real parameter given by $`|\eta |=\mathrm{\Delta }J_y/\mathrm{\Delta }J_x`$. Recently, a method for experimental generation of the SU(2) intelligent states was proposed in . 
If the two-mode light field entering the interferometer is prepared in the $`J_x`$-$`J_y`$ intelligent state, the quantum noise takes the form $$(\mathrm{\Delta }z)^2=A_{\mathrm{pc}}(2\mathrm{\Delta }J_x)^{-2}+A_{\mathrm{rp}}(2\mathrm{\Delta }J_x)^2.$$ (19) For $`|\eta |<1`$, the intelligent states are squeezed in $`J_y`$ and anti-squeezed in $`J_x`$, thereby reducing the photon-counting noise below the shot-noise limit, on account of increasing contribution of the radiation-pressure noise. For $`\eta \to 0`$, one obtains $$(2\mathrm{\Delta }J_x)^2=2|\langle J_z\rangle /\eta |\simeq 2(j^2-m_0^2+j),$$ (20) and the Heisenberg limit for the photon-counting noise is achieved when $`m_0=0`$. Then, for large photon numbers ($`\overline{N}=2j\gg 1`$), we obtain $$(\mathrm{\Delta }z)^2\approx 2A_{\mathrm{pc}}\overline{N}^{-2}+\frac{1}{2}A_{\mathrm{rp}}\overline{N}^2.$$ (21) Optimizing $`(\mathrm{\Delta }z)^2`$ as a function of $`\overline{N}`$, we find $`(\mathrm{\Delta }z)_{\mathrm{opt}}\approx (\mathrm{\Delta }z)_{\mathrm{SQL}}`$, but the optimum light power needed to achieve the SQL is dramatically reduced: $$\overline{N}_{\mathrm{opt}}=\left(\frac{2mL^2}{\hbar \omega ^2\tau ^3}\right)^{1/2},\qquad P_{\mathrm{opt}}=\left(\frac{2\hbar mL^2}{\tau ^5}\right)^{1/2}.$$ (22) Using the parameters for the initial LIGO , we find the values $`\overline{N}_{\mathrm{opt}}\approx 4.3\times 10^{10}`$ and $`P_{\mathrm{opt}}\approx 9\mu \mathrm{W}`$. Compare these values with those obtained in the standard configuration (with the vacuum at the second port): $`\overline{N}_{\mathrm{opt}}\approx 9.2\times 10^{20}`$ and $`P_{\mathrm{opt}}\approx 191\mathrm{kW}`$. We see that the use of the intelligent states with the Heisenberg-limited photon-counting noise can in principle reduce the optimum light power by a factor $`\sim 2\times 10^{10}`$.

There were proposals to achieve the Heisenberg limit for the photon-counting noise by driving the interferometer with two Fock states containing equal numbers of photons . (The use of Fock states was also proposed for Heisenberg-limited interferometry with matter waves and for Heisenberg-limited spectroscopy with degenerate Bose-Einstein gases .) The corresponding input state is $`|\mathrm{in}\rangle =|n\rangle _1|n\rangle _2=|j,0\rangle `$ with $`j=n=\overline{N}/2`$. Clearly, this input state cannot be used when one measures the photon difference $`q_{\mathrm{out}}`$ at the output. However, as was shown in Ref. , the Heisenberg-limited photon-counting noise is achieved by measuring the squared difference $`S=q_{\mathrm{out}}^2=4J_{z\mathrm{out}}^2`$ at the output. (We do not discuss here technical problems involved in this kind of measurement.) In this case, the uncertainty in the phase shift due to the photon-counting noise is $$(\mathrm{\Delta }\varphi )_{\mathrm{pc}}^2=\frac{(\mathrm{\Delta }S)^2}{(\partial \langle S\rangle /\partial \varphi )^2}=\frac{\mathrm{tan}^2\varphi }{8}+\frac{2-\mathrm{tan}^2\varphi }{4j(j+1)}.$$ (23) For $`\varphi =0`$ (this corresponds to a dark fringe for the measurement of $`S`$), the Heisenberg limit is achieved: $`(\mathrm{\Delta }\varphi )_{\mathrm{pc}}=[2j(j+1)]^{-1/2}`$. Of course, this improvement is on account of the corresponding increase in the radiation-pressure noise, because $`(\mathrm{\Delta }J_x)^2=\frac{1}{2}j(j+1)`$ takes its maximum value. Therefore, for large photon numbers, we obtain $`(\mathrm{\Delta }z)_{\mathrm{pc}}\approx (2A_{\mathrm{pc}})^{1/2}/\overline{N}`$ and $`(\mathrm{\Delta }z)_{\mathrm{rp}}\approx (A_{\mathrm{rp}}/2)^{1/2}\overline{N}`$, recovering the result of Eq. (21). 
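As a consistency check (added here, it is not part of the original argument), minimizing the two noise budgets over $`\overline{N}`$ shows why the optimum sensitivity is the SQL in both cases while the optimum photon number drops so dramatically:

$$\frac{\mathrm{d}}{\mathrm{d}\overline{N}}\left(A_{\mathrm{pc}}\overline{N}^{-1}+A_{\mathrm{rp}}\overline{N}\right)=0\;\Rightarrow \;\overline{N}_{\mathrm{opt}}=\left(\frac{A_{\mathrm{pc}}}{A_{\mathrm{rp}}}\right)^{1/2},\quad (\mathrm{\Delta }z)_{\mathrm{opt}}^2=2\sqrt{A_{\mathrm{pc}}A_{\mathrm{rp}}},$$

$$\frac{\mathrm{d}}{\mathrm{d}\overline{N}}\left(2A_{\mathrm{pc}}\overline{N}^{-2}+\frac{1}{2}A_{\mathrm{rp}}\overline{N}^2\right)=0\;\Rightarrow \;\overline{N}_{\mathrm{opt}}=\sqrt{2}\left(\frac{A_{\mathrm{pc}}}{A_{\mathrm{rp}}}\right)^{1/4},\quad (\mathrm{\Delta }z)_{\mathrm{opt}}^2=2\sqrt{A_{\mathrm{pc}}A_{\mathrm{rp}}}.$$

Thus the Heisenberg-limited input leaves the optimum sensitivity at the SQL but replaces $`(A_{\mathrm{pc}}/A_{\mathrm{rp}})^{1/2}`$ by essentially its square root; indeed $`\sqrt{2\times 9.2\times 10^{20}}\approx 4.3\times 10^{10}`$, reproducing the ratio between Eqs. (13) and (22).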
In fact, this quantum noise behavior and the corresponding reduction of the optimum power are characteristic of input states with the Heisenberg-limited photon-counting noise. It should be emphasized that the above results are valid for a *lossless* interferometer, while the mirrors of realistic detectors (for example, LIGO) do have losses. The Heisenberg-limited photon-counting noise can be achieved only if the losses are sufficiently small. Let $`\mathrm{\Gamma }`$ be the dimensionless coefficient of losses, defined by $`\overline{N}_{\mathrm{out}}=\overline{N}\mathrm{e}^{-\mathrm{\Gamma }}\approx \overline{N}(1-\mathrm{\Gamma })`$. In the case of input Fock states, a simple analysis shows that the Heisenberg-limited photon-counting noise $`(\mathrm{\Delta }\varphi )_{\mathrm{pc}}\sim \overline{N}^{-1}`$ can be obtained only for $`\overline{N}\mathrm{\Gamma }<1/2`$. Admittedly, the value $`\mathrm{\Gamma }\sim 10^{-11}`$ is impossible to achieve with present technology. The problem of losses is of great practical importance, but it nevertheless does not negate the value in principle of the idea of reducing the optimum light power by using nonclassical input states. For example, one can imagine a realization of this idea with a prototype interferometer which should have smaller $`m`$ and $`L`$ and larger $`\tau `$.

In conclusion, we analyzed the behavior of quantum noise, which limits the sensitivity of interferometric gravitational-wave detectors, for various input states of the light field. We found that by using nonclassical input states exhibiting the Heisenberg-limited photon-counting noise, the optimum light power needed to achieve the SQL can be significantly reduced, compared to the usual configuration. Of course, a practical realization of Heisenberg-limited interferometry will depend on future theoretical and experimental progress in the methods for production of stable and sufficiently powerful sources of nonclassical light and on technical solutions for the reduction of losses and classical sources of noise.

The author thanks Kip S. Thorne and John Preskill for stimulating discussions and Ady Mann and Albert Lazzarini for valuable comments on the manuscript. Financial support from the Lester Deutsch Fund is gratefully acknowledged. This work was supported in part by the Institute of Theoretical Physics at the Department of Physics at the Technion, and the author is grateful to the Institute for hospitality during his visit to the Technion. The LIGO Project is supported by the National Science Foundation under the cooperative agreement PHY-9210038.
# (Hybrid) Baryons: Symmetries and Masses

## 1 Introduction

Hybrid baryons are bound states of three quarks with an explicit excitation in the gluon field of QCD. The construction of (hybrid) baryons in a model motivated from the strong coupling expansion of the hamiltonian formulation of lattice QCD, the non–relativistic flux–tube model of Isgur and Paton , was detailed in ref. . This model predicts the adiabatic potentials of (hybrid) mesons at large interquark separations, as well as the mass of the $`J^{PC}=1^{-+}`$ hybrid meson, consistent with recent estimates from lattice QCD . In ref. we studied the detailed flux dynamics and built the flux hamiltonian. We restrict our discussion to cases where the flux settles down in a Mercedes-Benz configuration (as motivated by lattice QCD ). A minimal amount of quark motion is allowed in response to flux motion, in order to work in the centre of mass frame. Otherwise, we make the so–called “adiabatic” approximation, where the flux motion adjusts itself instantaneously to the motion of the quarks. The main result is that the lowest flux excitation can, to a high degree of accuracy (about 5%), be simulated by neglecting all flux–tube motions except the vibration of a junction. This result was obtained within the small oscillation approximation. The junction acquires an effective mass $`M_{\text{eff}}`$ from the motion of the remainder of the flux–tube and the quarks. The model is then simple: a junction is connected via a linear potential to the three quarks. The ground state of the junction motion corresponds to a conventional baryon and the various excited states to hybrid baryons. The junction can move in three directions, and correspondingly be excited in three ways, giving the hybrid baryons $`H_1,H_2`$ and $`H_3`$. The junction motion is depicted in Fig. 1. The hamiltonian for the junction motion in the Mercedes-Benz configuration is simply the kinetic energy of the junction added to the sum of the lengths from the junction to the quarks multiplied by the string tension $`b`$, $`H_{\text{flux}}=\frac{1}{2}M_{\text{eff}}\dot{𝐫}^2+b\sum _{i=1}^{3}|𝐥_i-𝐫|`$ (1) We shall be taking ansatz wave functions of the form $$\widehat{𝜼}_{}\cdot 𝐫\,\mathrm{\Psi }_B(𝐫)$$ (2) for $`H_1`$ hybrid baryons, where $`\mathrm{\Psi }_B(𝐫)`$ is an exponential function. It is not difficult to show that $`\widehat{𝜼}_{}`$ lies in the plane spanned by the three quarks (the “QQQ plane”).

## 2 Quantum numbers of low–lying hybrid baryons

Angular Momentum: The hamiltonian in Eq. 1 is not invariant under rotations in the junction position $`𝐫`$, with fixed quark positions. When the junction wave function, which is hence not an eigenfunction of angular momentum, is combined with the quark motion wave functions, which are eigenfunctions of angular momentum, it must be done in such a way that the total angular momentum of the junction and quark motion is well–defined. Obtaining a well–defined total angular momentum is a technically challenging problem that is an artifact of the adiabatic approximation, which separates junction and quark motion. We here merely give an intuitive argument for why the total angular momentum $`L`$ of the $`H_1`$ baryons is expected to be 1. 
The hybrid baryon wave function is proportional to $`\widehat{𝜼}_{}\cdot 𝐫`$, and since $`\widehat{𝜼}_{}`$ lies in the QQQ plane, it can be regarded as the x–axis, so that $`\widehat{𝜼}_{}\cdot 𝐫=\sqrt{\frac{2\pi }{3}}r(-Y_{11}(\widehat{𝐫})+Y_{1-1}(\widehat{𝐫}))`$ in terms of spherical harmonics. If the mathematics of conservation of angular momentum is followed through, it is found that if the angular momentum of the quark motion is $`L_q=0`$ (corresponding to the lowest energy quark motion states), then the total angular momentum projection just equals the angular momentum projection of the junction wave functions, which in this case is $`\pm 1`$. Hence the total angular momentum projection is $`\pm 1`$, so that $`L`$ cannot be zero and should most likely be 1.

Exchange symmetry: Exchange symmetry transformations $`S_{ij}`$ exchange the positions of the quarks, $`𝐥_i\leftrightarrow 𝐥_j`$. Since the physics does not depend on the quark position labelling convention, the junction hamiltonian should be exchange symmetric, as can be seen explicitly in Eq. 1, noting that the junction position $`𝐫`$ is not determined by the positions of the quarks. We now argue that the junction wave functions of (hybrid) baryons should transform either totally symmetrically or totally anti–symmetrically under exchange symmetry. Since the hamiltonian is invariant under exchange symmetry we have the commutation relation $`[H_{\text{flux}},S_{ij}]=0`$. Combining this with the Schrödinger equation $$H_{\text{flux}}\mathrm{\Psi }=V(l_1,l_2,l_3)\mathrm{\Psi }\qquad \text{gives}\qquad H_{\text{flux}}(S_{ij}\mathrm{\Psi })=V(l_1,l_2,l_3)(S_{ij}\mathrm{\Psi })$$ (3) so that $`S_{ij}\mathrm{\Psi }`$ is degenerate in energy with $`\mathrm{\Psi }`$. Now since the baryon and each of the hybrid baryons $`H_i`$ have different energies (except when $`l_1=l_2=l_3`$), it follows that $`S_{ij}\mathrm{\Psi }`$ must be a multiple of $`\mathrm{\Psi }`$, i.e. that $`S_{ij}\mathrm{\Psi }=\varsigma \mathrm{\Psi }`$, where $`\varsigma `$ is a complex number. Now note that the square of an exchange symmetry transformation is the identity, i.e. that $$S_{ij}S_{ij}=1\qquad \text{which implies that}\qquad \varsigma ^2=1$$ (4) or $`\varsigma =\pm 1`$. Hence $`S_{ij}\mathrm{\Psi }=\pm \mathrm{\Psi }`$. Assume that $`S_{12}\mathrm{\Psi }=\varsigma \mathrm{\Psi }`$. We now show that $`S_{23}\mathrm{\Psi }=S_{13}\mathrm{\Psi }=\varsigma \mathrm{\Psi }`$, i.e. that $`\mathrm{\Psi }`$ is either totally symmetric or totally anti–symmetric under label exchange. (This result also follows by noting that $`[H_{\text{flux}},S_{ij}]=0`$ implies that $`\mathrm{\Psi }`$ must be an irreducible representation of the exchange symmetry group, i.e. totally symmetric, anti–symmetric or of mixed symmetry; but since we already showed that $`S_{ij}\mathrm{\Psi }=\pm \mathrm{\Psi }`$, it follows that $`\mathrm{\Psi }`$ is in either the totally symmetric or anti–symmetric irreducible representation.) This follows from the two identities $`S_{12}S_{23}S_{13}S_{23}=1\qquad \text{which implies that}\qquad S_{12}\mathrm{\Psi }=S_{13}\mathrm{\Psi }`$ (5) $`S_{23}S_{12}S_{13}S_{12}=1\qquad \text{which implies that}\qquad S_{23}\mathrm{\Psi }=S_{13}\mathrm{\Psi }`$ (each product of transpositions is the identity permutation, and since every $`S_{ij}\mathrm{\Psi }=\pm \mathrm{\Psi }`$ with $`S_{ij}^2=1`$, acting with either product on $`\mathrm{\Psi }`$ forces the corresponding pair of eigenvalues to be equal). For each of the hybrid baryons $`H_i`$, there are hence two varieties: the junction wave function is totally symmetric (S) or totally anti–symmetric (A) under quark label exchange, denoted by $`H_i^S`$ and $`H_i^A`$.

Parity: The inversion of all coordinates, $`𝐥_i\to -𝐥_i`$ and $`𝐫\to -𝐫`$, called “parity”, is a symmetry of the junction hamiltonian in Eq. 1. 
$`\widehat{𝜼}_{}`$ is a vector in the QQQ plane and is a linear combination of the $`\widehat{𝐥}_i`$, which span the plane, with coefficients which are functions of $`l_i`$. The lengths $`l_i`$ remain invariant under parity. However, $`\widehat{𝐥}_i\to -\widehat{𝐥}_i`$ under parity. It follows that $`\widehat{𝜼}_{}`$ is odd under parity. The junction wave function in Eq. 2 is thus even under parity, since $`\widehat{𝜼}_{}\to -\widehat{𝜼}_{}`$ and $`𝐫\to -𝐫`$. For a low–lying hybrid the quark motion wave function is even under parity, so that the full hybrid baryon wave function has even parity.

Since quarks are fermions, the wave function should be totally antisymmetric under quark label exchange, called the Pauli principle. Since our philosophy is that (hybrid) baryon dynamics is dominated by (non–perturbative) long distance physics, we consider the colour structure of the (hybrid) baryon to be motivated from the long distance limit, i.e. from the strong coupling limit of the hamiltonian formulation of lattice QCD . Here, the quarks are sources of triplet colour, which flows along the string connected to the quarks into the junction, where an $`ϵ_{ijk}`$ neutralizes the colour. The colour wave function $`ϵ_{ijk}`$ is hence totally antisymmetric under exchange of quarks for both the conventional and hybrid baryon. This imposes constraints on the combination of flavour and non–relativistic spin $`S`$ of the three quarks that is allowed. For a totally symmetric hybrid baryon junction wave function, the flavour–spin wave functions must be totally symmetric. This is because we are interested in the low–lying hybrid baryons, which have the quark motion wave function in the ground state, i.e. totally symmetric. If the flavour is $`\mathrm{\Delta }`$, which is totally symmetric, this implies that the spin must be totally symmetric, i.e. $`S=\frac{3}{2}`$. Similarly for flavour $`N`$ the spin must be $`\frac{1}{2}`$. For a totally antisymmetric junction wave function, the flavour–spin wave function must be totally antisymmetric. For $`\mathrm{\Delta }`$ flavour this implies that the spin must be totally antisymmetric, which is not realizable. Hence there are no $`\mathrm{\Delta }`$ hybrid baryons with totally antisymmetric junction wave functions. The $`N`$ flavour is found to have spin $`\frac{1}{2}`$.

The quantum numbers of the lowest–lying states that can be constructed on the $`H_1`$ adiabatic surface are indicated in Table 1. The total angular momentum is $`𝐉=𝐋+𝐒`$. Since $`L=1`$ for the ground state $`H_1`$ hybrid baryon, $`J=\frac{1}{2},\frac{3}{2}`$ for $`S=\frac{1}{2}`$, and $`J=\frac{1}{2},\frac{3}{2},\frac{5}{2}`$ for $`S=\frac{3}{2}`$. These assignments are indicated in Table 1. One notes from Table 1 that amongst the $`H_1^S`$ hybrid baryons, there are $`N\frac{1}{2}^+`$ and $`\mathrm{\Delta }\frac{3}{2}^+`$ states which have identical quantum numbers to the conventional $`N`$ and $`\mathrm{\Delta }`$ baryons. It is interesting to compare our hybrid baryons to the predictions of the bag model. Out of all the states listed under $`H_1^S`$ and $`H_1^A`$ in Table 1, only one pair of $`N^2\frac{1}{2}^+,N^2\frac{3}{2}^+`$ states have the same flavour, spin $`S`$, total angular momentum and parity as the low–lying hybrid baryons in the bag model . In fact, for the $`H_1^S`$ hybrid baryons, the bag model swaps the $`N`$ and $`\mathrm{\Delta }`$ flavours from our assignments, keeping other quantum numbers the same. Both our model and the bag model have seven low–lying hybrid baryons . 
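For completeness, a count added here (assuming Table 1 lists exactly the combinations allowed by the symmetry arguments above): combining $`L=1`$ and even parity with the allowed flavour–spin assignments gives

$$H_1^S:\;N\left(S=\frac{1}{2}\right)\to J^P=\frac{1}{2}^+,\frac{3}{2}^+;\qquad \mathrm{\Delta }\left(S=\frac{3}{2}\right)\to J^P=\frac{1}{2}^+,\frac{3}{2}^+,\frac{5}{2}^+,$$

$$H_1^A:\;N\left(S=\frac{1}{2}\right)\to J^P=\frac{1}{2}^+,\frac{3}{2}^+,$$

i.e. $`2+3+2=7`$ low–lying hybrid baryons, the number quoted above for both this model and the bag model.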
## 3 Numerical estimate of the hybrid baryon mass

The difference between the hybrid and conventional baryon adiabatic potentials (or junction energies) as a function of quark positions, $`V_{H_1}(l_1,l_2,l_3)-V_B(l_1,l_2,l_3)`$, was determined numerically from the first part of Eq. 3 by using the hamiltonian in Eq. 1, and was displayed in ref. . Now define the hybrid baryon potential as $$\mathcal{H}_{H_1}(l_1,l_2,l_3)\equiv \mathcal{H}_B(l_1,l_2,l_3)+V_{H_1}(l_1,l_2,l_3)-V_B(l_1,l_2,l_3)$$ (6) where $`\mathcal{H}_B(l_1,l_2,l_3)`$ is the phenomenologically successful relativized baryon hamiltonian with Coulomb and linear potential terms of ref. (with spin–spin, spin–orbit and tensor interactions neglected); the parameters are also those of ref. . Note that the Coulomb interaction of the conventional and hybrid baryon is assumed to be identical. We solve the Schrödinger equation for the hamiltonian in Eq. 6 with 95 spin–space basis states incorporating $`L_q=0,1,2`$ harmonic oscillator wave functions for the $`J=\frac{1}{2}`$ case, i.e. we construct 95 $`\times `$ 95 dimensional matrices. These matrices are subsequently diagonalized. The difference between the energies of the hybrid and the conventional baryon is then added to the experimental mass of the lowest baryon, taken as the spin–averaged mass of the $`N`$ and $`\mathrm{\Delta }`$, i.e. 1085 MeV . The first three quark orbital excitations $`L_q=0,1,2`$ of hybrid baryons composed of up and down quarks are found to have masses 1976, 2341 and 2619 MeV respectively. Hence, for the lowest hybrid baryon level, with the quantum numbers in Table 1, we obtain $`M_{H_1}-M_B=891`$ MeV, giving a mass estimate of $`M_{H_1}=1976`$ MeV. This mass estimate is substantially higher than other mass estimates in the literature: $`1.5`$ GeV in the bag model and $`1.5\pm 10\%`$ GeV in QCD sum rules .

There are two crucial assumptions that were made in the early work on (hybrid) meson masses in the flux–tube model: the adiabatic motion of quarks and the small oscillation approximation for flux motion . It was later shown that when the adiabatic approximation is lifted, the masses go up, and when the small oscillation approximation is lifted, the masses go down . In our study of (hybrid) baryons we have partially lifted the adiabatic approximation by working in the centre of mass frame. We have fully lifted the small oscillation approximation. The effects on the masses of (hybrid) baryons when the various approximations are lifted are the same as those found for (hybrid) mesons. In our simulation, we obtain the average values $`\sqrt{\langle \rho ^2\rangle }=\sqrt{\langle \lambda ^2\rangle }=2.12,\ 2.52`$ GeV$`^{-1}`$ for the low–lying baryon and $`H_1`$ hybrid baryon respectively. $`\langle \rho ^2\rangle =\langle \lambda ^2\rangle `$ is expected since the spatial parts of the wave functions of the low–lying states are totally symmetric under exchange symmetry. The hybrid baryon is 20% larger than the conventional baryon.

## 4 Phenomenology

The sign of the Coulomb interaction is expected to be the same for both conventional and hybrid baryons . This means that the hyperfine interaction has the same sign in both situations, so that the $`\mathrm{\Delta }`$ hybrid baryons are always heavier than the $`N`$ hybrids. This implies that only four of the original seven low–lying hybrid baryons, the $`N`$ hybrids, are truly low–lying. 
We expect a priori the most phenomenologically interesting decay of the low–lying hybrid baryons to be the P–wave decay to $`N\rho `$ and $`N\omega `$, simply because the phase space is favourable and $`\rho `$ and $`\omega `$ are easily isolated experimentally. The $`N\rho `$ decay would be especially relevant to the electro– and photoproduction of hybrid baryons at TJNAF via the vector meson dominated coupling of the photon to the $`\rho `$. Indeed, a search for excited $`N^{*}`$ resonances with mass $`<2.2`$ GeV is currently underway in Hall B . Given the mass estimate for the low–lying hybrid baryons, the detection of hybrid baryons in $`N\rho `$ or $`N\omega `$ is feasible at TJNAF. There are also planned experiments in $`\pi N`$ scattering by Crystal Ball E913 at the new D–line at Brookhaven with the capability of searching for states in $`N\{\eta ,\rho ,\omega \}`$, which would isolate states in the mass region $`\sim 2`$ GeV . The decay $`\psi \to p\overline{p}\omega `$ has been observed with a branching ratio of $`(1.30\pm 0.25)\times 10^{-3}`$ and $`\psi \to p\overline{p}\eta ^{\prime }`$ with branching ratio $`(9\pm 4)\times 10^{-4}`$ . Since gluonic hadron production is expected to be enhanced above conventional hadron production in the glue–rich decay of the $`\psi `$, it is possible that a partial wave analysis of the $`p\omega `$ or $`p\eta ^{\prime }`$ invariant masses would yield evidence for hybrid baryons. Future work at BEPC and an upgraded $`\tau `$–charm factory would be critical here.

## 5 Conclusions

The spin and flavour structure of the low–lying hybrid baryons has been specified, and differs from the structure found in the bag model. Exchange symmetry constrains the spin and flavour of the (hybrid) baryon wave function. The orbital angular momentum of the low–lying hybrid baryon is argued to be unity, with the parity even, contrary to conventional baryons, where $`L=1`$ would imply the parity to be odd. The low–lying hybrid baryon adiabatic potential and mass have been estimated numerically. The mass estimate is considerably higher than bag model and QCD sum rule estimates.
# ODP channel objects that provide services transparently for distributing processing systems ## 1 Distributed Processing Platforms ### 1.1 RPCs and Connections A distributed processing platform provides a method of making a Remote Procedure Call, an RPC, almost transparent to the application developer—see, for example, Tivoli\[TVL99\] and Orbix \[ORB99\]—the relatively mature industry standard for both of which is the Common Object Request Broker Architecture, CORBA, from the Object Management Group\[OMG98\], OMG; this defines an Object Request Broker, ORB, which is an infrastructure for RPCs. Recently Sun Microsystems \[SUN98d\] have enhanced the Remote Method Invocation package, java.rmi, for *Java*\[SUN98c\] as part of the *Java* Development Kit, JDK, 1.2. It now allows connections to be created between a client and a server which can have a different data transfer representation \[SUN98a\]. As pointed out in the documentation for this feature, this is particularly suitable for implementing the Secure Socket Layer, SSL, \[FKK95\] and could also be used to implement the proposed successor to SSL Transport Level Security, \[DA97\]. ##### Open Distributed Processing Architecture A suitable architecture to exploit this new functionality in java.rmi and in other distributed processing platforms has been proposed in the Open Distributed Processing standards, ISO–ODP, \[ODP97\]. ###### Bindings and Channels The prescriptive model, \[ODP95\], generalizes the concept of connection between client and server to be a logical binding which is realized by both client and server using a channel, see also \[Otw95\]. Figure 1 illustrates the concepts of bindings and channels. Application objects have bound with one another probably using a naming service; they have negotiated a channel configuration that requires two objects: a data presentation conversion object and a data transport object. The channel configuration is fixed by bindings. The client sends data; the server receives. The data is passed from the application object to the channel which carries out the conversion and the transport to the server. A binding is a contract between the client and the server stating the parameters by which they will communicate. A binding between two transport objects would usually specify: * Transport layer protocol to be used, e.g. UDP/IP or TCP/IP or Pipe/Unix. * Transport layer parameters * Network addresses for chosen transport layer protocol The presentation layer object would convert from the the local data representation to a network representation. The only specification needed here is the source and target representations. In this case, the binding is implicit in the implementation of the objects, they need not be initialized with parameters. More sophisticated systems will have higher demands and will insist that other services be used as well, for example: * Data security * Transaction management * Call billing * Data compression * Relocation manager These would all require that configuration parameters be specified and may also demand that they are operate together. There is no need for channels to be symmetric, the server could implement a call–logging object in its channel without having an object of the same type in the client’s channel. Bindings do need to be current. A transport object might close a connection, in which case, it would no longer be current, but if it were to leave enough information to allow a re–connection, then it is, in effect, still current. 
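To make the idea of a binding as a negotiated contract more concrete, the following sketch shows the kind of value object a client and server could agree on and then use to configure their channel objects. It is illustrative only: the class and field names are invented here and belong to no particular platform or standard.

```
import java.io.Serializable;

// Illustrative only: a record of the parameters negotiated for one binding.
// Each channel object would be initialised from the part that concerns it.
public class BindingContract implements Serializable {
    public String transportProtocol;    // e.g. "TCP/IP", "UDP/IP" or "Pipe/Unix"
    public String transportAddress;     // network address for the chosen protocol
    public String dataRepresentation;   // transfer syntax agreed for presentation
    public String[] additionalServices; // e.g. "data security", "call billing",
                                        // "transaction management", "relocation"
}
```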
###### Tri–partite Bindings Bindings need not be bi–partite. An application service that would require a tri–partite binding is relocation management, illustrated in figure 2. The idea of which is that should the server choose to relocate, it would notify a relocation manager of its new addresses and move there. When the client calls the server at its old address and fails to reach it, the relocator object in the client’s channel would call the third party and ask for the new address of the server, establish a new set of bindings with it, i.e. construct a new channel and destroy the old, and send the message again. This would all be transparent to the application object. The relocation object in the channel would need to call the relocation manager for the new addresses and would thus become an application object; it would need to establish its own channel with the relocation manager. ### 1.2 Inflexible System Designs Although the ISO–ODP defines a flexible architecture, most implementors of distributed processing platforms provide application programmers with inflexible systems: the channel can only contain a presentation and a transport object. ##### Presentation: Stubs and Skeletons Both CORBA ORBs and the RMI defined in *Java* make use of “stubs” and “skeletons” and a stub–compiler. The term infrastructure will be used for the engineering that realizes an ORB or RMI. These are used by a client when invoking a method on a remote server. The client does not have a local implementation for the service the remote server has, but it will have a definition of its interface which can be used as if it were the implementation of the service. It can present the interface definition to a stub–compiler which will generate code to invoke each method on the remote server. The stub acts as a proxy for the remote server in the client’s address space. The stub for each method needs to do the following things: 1. Construct a request A request object is a container entity that carries the invocation to the server; it contains: * The address of the remote server * The name of the method being invoked, possibly with version control information for the interface. * The parameters for the method * The return address The stubs can also contain the code to *marshall*<sup>1</sup><sup>1</sup>1The verb is to marshal, but this is so often mis-spelt that to marshall has become acceptable. the parameters into a universal transfer presentation. In this form, the stubs also comprise the presentation object. 2. Invoke the request The request, now just a sequence of bytes with an associated address for the server and a return address for the sender, is passed to the infrastructrure which sends it to the network socket for the server. 3. Get the reply The server will return a reply in another container type. 4. Re–construct the reply The reply is then re–constructed or, rather, its contents are *un–marshalled* and returned to the client. Again, this may be part of the code in the stubs. It might be best now to explain how the infrastructure manages to provide the RPC service. 1. At the server on creation An application programmer defines an interface and produces an implementation for it. A program that effectively acts as a loader issues instructions to the infrastructure to create a socket to receive requests for that server and will associate the remote server implementation with that socket. 2. 
At the naming service The application programmer will ensure that the address of his newly–created remote server is put into a well–known naming service. The client will then collect the address from the naming service. 3. At the client on invocation The address collected at the naming service will contain enough information to allow the client’s infrastructure to send the request container as a stream of bytes to the socket that the server’s infrastructure has associated with the remote server’s implementation. 4. At the server on invocation The network socket will be activated by the client’s infrastructure (a connect and a data send) and the server’s infrastructure will collect the data at the socket and, because it has recorded the object responsible for that socket, it can activate the server skeleton. These are invoked by the server’s infrastructure when the network socket for a server is activated and the data comprising a call has been collected. It invokes the implementation of the method the client wants to use at the server. A skeleton can consist of one method that unmarshalls enough of the request container to be able to look up the method to invoke—this is known as *dispatching*. This partial unmarshalling has to be done in this way, because each method will unmarshall the remainder of the request container differently to obtain the parameters to pass to the server’s method implementation. After the invocation is made the results will be marshalled into a reply container which is then sent back to the client. The stubs and the skeleton effectively form the presentation channel object. Stubs have to be produced by a stub–compiler, but skeletons can be made generic, if the underlying infrastructure supports a reflective invocation mechanism, see java.lang.reflect or the CORBA Dynamic Invocation Interface. Some stubs generated by the *Java*rmic, Remote Method Interface Compiler, are given in appendix A. These demonstrate the use of reflective language features. ##### Transport When the request containers are passed to the infrastructure the transport object is eventually invoked. Most ORBs only provide one transport mechanism which sends data through TCP/IP sockets to its destination. Although some systems do allow different protocols: UDP/IP or Pipe/Unix. ##### Limitations The problem with this is that if one wants to implement any useful application services—encryption, billing and so forth—the infrastructure does not help. For example, to encrypt and decrypt data sent as part of a remote procedure call, one would have to implement one’s own stubs and skeletons, see figure 3. The application programmer has to construct a call, marshall the data, encrypt it, send it using a generic method, which will marshall it again. At the server, the data would be delivered to the generic method, and the application programmer would then have to decrypt it, unmarshall the decrypted data, reconstruct the call and dispatch it. After dispatching, collect the results, marshall, encrypt and return the reply. ### 1.3 More flexible: *Java* RMI Custom Socket Factories A more flexible implementation has been provided by Sun in *Java* . It allows a different type of socket to be used as the transport object. ##### Method The custom socket factories have to implemented in the following way: 1. Derive and implement classes for the new socket type’s datastream from java.io.FilterOutputStream and java.io.FilterInputStream, call them MyOutputStream and MyInputStream. 2. 
Derive and implement classes for the new socket types java.net.Socket and java.net.ServerSocket that use the new streams MyOutputStream and MyInputStream. Then create socket factory implementations that can be used by RMI. 1. A client-side socket factory that implements RMIClientSocketFactory and implement the RMIClientSocketFactory.createSocket() method. 2. A server-side socket factory that implements RMIServerSocketFactory and implement the RMIServerSocketFactory.createServerSocket() method. Then one has to ensure that the constructor for the remote server is told to use the new socket factories. The infrastructure creates the new type of socket when demanded and invokes the create socket methods. The RMISecurityManager at the client will then determine that a particular type of socket has to be used and will load the custom socket factory implementations. ##### Possibilities: Implementing SSL Using custom socket factories, it is possible to implement a Secure Socket Layer. The RMIClientSocketFactory.createSocket() would be used to perform the key exchange with the server and the custom input and output streams would apply the session key to encrypt on send and decrypt on receive. ##### Limitations Unfortunately, using custom socket factories only increases the variety of the transport objects that can be employed, it does not allow different kinds of objects to be placed in the channel. ## 2 Channel Objects What is needed is a means of placing objects in the channel before and after the presentation object. These objects should have a simpler instantiation and invocation procedure than using the custom socket method in *Java* . ### 2.1 Some Requirements 1. Different interests System Configurable The objects placed in the channel between client and server are the result of a negotiated agreement between the client, its server and their respective environments. It is well–known that security requirements for messages depend on the workstation that the client is using \[Ash99\], which may be connected to a secure local area network on which both the the client and server reside and so, for example, no security measures need be taken; or, the client could be accessing the server remotely from the Internet through a modem in which the server might require that the client use encryption. It would be desirable if the client and the server could both specify their requirements and some negotiation take place that could create a mutually acceptable protocol stack in the software communications channel. 2. Different Methods for Different Actions A request and reply actually require that four different channels be traversed by messages, see figure 4: 1. Request this channel is created by the infrastructure for the client and sends a message over the network. 2. Indication this channel is used to receive from the network and is created by the infrastructure for the server. 3. Response created by the infrastructure for the server to return the results of the client’s request message. 4. Confirmation receives from the network and is created by the infrastructure for the client. Should an error occur at the server it is returned via the response and confirmation messages. The stubs are responsible for managing the thread of execution of the application object: once a message is put onto the request channel, the application object’s thread can be suspended whilst it waits for the reply to arrive on the confirmation confirmation channel. 
If the message is a cast of some kind (broadcast, multi–cast or one–cast) there will be no response or confirmation. 3. Different Implementations Most channel objects are derived from the same source and will have the same implementation. *Java* allows classes implementing different objects to be loaded over the network, so it would be possible for the client and server to agree upon and load the same class implementation, which they would be able to do with the custom socket factories method. This may not always be the case, some channel objects may be optimized to make use of a different operating system, but provide the same functionality. Audio and video data streaming are good examples of this need: some micro–processors now have support for stream data. 4. Transparency 1. Management It would be desirable if the additional services provided by the channel objects did not need to be initialized or managed by either the client or the server application programs but they were activated by their respective infrastructures. 2. Exception handling One of the difficulties of developing applications in enterprise environments is that as messaging becomes more sophisticated—supporting for example, confidentiality, authorization, call–billing—the number of possible errors increases because each of these sub–systems introduces new ones. It must be possible for channel objects to clear down their own errors, so that errors returned to application objects only involve the application. 3. Using Possession rather than Inheritance for Coupling It would also be desirable if the classes for the channel objects did not extend the existing class hierachies of the message transmission sub–system, in the way that custom socket and socket factories do. Extending class hierachies is not as flexible as specifying an order of invocation. 4. Interface Definitions Suitable for Reflection. It would also be desirable if the infrastructures could load channel objects’ classes remotely and have a simple enough invocation syntax so that reflection mechanisms could be used to invoke the channel objects without requiring a stub compiler. *Java* already does this with its version 1.2 stubs, using package java.lang.reflect, and CORBA has a Dynamic Invocation Interface which can achieve the same goal. 5. Efficiency It would also be desirable if the channel objects did not create a large stack of calls; primarily because some target environments for remote procedure call platforms will be embedded systems on SmartCards \[SMT99\]. ### 2.2 Some Nomenclature With the aid of figure 4, it is possible to be more precise with the terms used: * Initiator: starts a four–phase call sequence; may also be called the requestor. * Acceptor: accepts the call made upon it; may also be called the responder, because it generates the response. Both an initiator and an acceptor will send and receive as part of four–phase call sequence. Ordinarily, the client will be the initiator of all calls, but some remote procedure call systems permit call–backs, in which case the server is the initiator and the client the acceptor. ## 3 Architectures for Channel Objects There are basically two types of architecture that could support channel objects. ### 3.1 Stream–oriented Architecture The custom socket factory architecture is stream–oriented. It acts upon the data being sent between client and server as a stream applying a data transformation to the data when it is sent and undoing this transformation at the other end. 
The data is treated as opaque and can be delivered in packets as small as one byte. The implementor of the stream handler does not know whether the data has just started or is about to finish. One of the attractions of this approach is that many of the operations that data networks perform on data can be implemented in software: segmenting, and its converse re–assembling, can be implemented easily and this would allow remote procedure call systems to make use of packet–oriented transmission, such as UDP/IP. Segmenting and re–assembly could be implemented as channel objects and data would be segmented and then sent as packets on a UDP socket for re–assembly by another channel object at the server. Streams can be chained—the output of one providing the input for the next. Most operating systems support pipes to do this: *Java* supports the java.io.PipedInputStream, and output, classes. Multi–casting could be easily achieved with pipes: simply have a pipe that sends on a socket and echoes its input; then connect a series of these together. A stream–oriented architecture is better for real–time data delivery. All of the processing in the stream–handler is applied to the data—it is not expected to communicate with relocation or transaction managers and incur indeterminate time penalties. Consequently, the emphasis in the design of stream–handlers should be to ensure they introduce a constant latency in transmission and reception. ### 3.2 Call–oriented Architecture This architecture implicitly appreciates that the data being delivered is a call. Call–handlers usually add parameters to a remote procedure call. They do not form part of the message sent by the application object, but set its context. For example: * Timestamps logging when messages are sent and received by adding an “time–sent” parameter to the remote procedure call. * Accounting adding an account identifier to a remote procedure call so that the server can log charges for calls to a particular account. * Transactions a transaction might consist of a number of calls to different servers, they could all be identified with a transaction identifier, to synchronize committing and aborting transactions as a whole. * Authorization a Privilege Certificate could be attached to the call so that the server could check what rights and privileges the caller is allowed to exercise within the server’s work–space. A call–oriented architecture should be implemented so that it has access to the parameters passed as part of the request. If this is the case, then as well as being able to add parameters, it would be possible to perform data transformations on parameter values that are part of the call. For example: 1. Representation conversions 1. Wholly The presentation objects implemented in remote procedure call systems change the data representation of the parameters of a call so that they can be transmitted over the network as a sequence of bytes, with the receiver being responsible for converting the byte–sequence to its local representation. It may be more efficient to convert to the receiver’s format before the data is sent so that the server can use conversion methods available from its native operating system. 2. Partly It might be the case that a server has a different data context for a particular data type: internationalization of text strings and currency formats could be converted prior to transmission. 2. 
Pseudo–Objects Pseudo–objects are usually legacy systems that can be directly controlled by the client’s remote procedure call infrastructure, but for the sake of uniformity, and to simplify re–engineering and relocation of services, they are provided with the same interface as remote servers. The operating system used by a distributed processing platform is itself a legacy system. An example of a pseudo–object that application programmers use is the database driver provided in the *Java* DataBase Connectivity package, java.sql. This pseudo–object implementation establishes and drives a connection with the database. Other examples of services that could be implemented as pseudo–objects are directory and naming services that are available through native operating systems. 3. Stream A call–oriented architecture could also be used to transform data in the same way that a stream–oriented architecture could. Encryption, compression and checksum insertion could all be performed by marshalling the parameters using a data representation object to produce a sequence of bytes and then applying the stream operation to it. The output would be opaque and would replace the parameters. A call–oriented architecture is better suited to recovering from errors, since it is possible to determine which channel object is at fault and it can take measures to recover from the error. ### 3.3 Both architectures are needed The stream–oriented architecture is ideal for delivering data at high speed with a determinate latency, the call–oriented architecture is ideal for communicating control information. This is a similar design problem that faced the developers of telephone networks and it was resolved, in the Integrated Services Digital Network, ISDN \[IEE90\], by having a control channel manage the use of two bearer channels. Figure 5 illustrates a request made to the server on a control channel and a reply being received on a broadband data channel. This sort of architecture could be used for controlling the delivery of “pay–per–view” television, where the RPC mechanism is used by an application to make a payment over a control network and the data is delivered on differently constructed channels over a broadband network possibly to different hardware. Essentially the differences between the two architectures, with regard to the activation of the channel objects, are: * synchronizing with the call, and * the opacity of the call’s contents A stream–oriented architecture need not transmit data to the server in a contiguous block that represents the marshalled bytes of a message. A call–oriented architecture would have each channel object invoked with each call sent. A stream–oriented architecture only has access to the marshalled bytes that represent a message. A call–oriented structure sees the method that is invoked and the parameters for it. The most flexible architecture is the call–oriented one. But for implementing the transport objects, a stream–oriented architecture should be preferred. This means that the two architectures fall above and below the marshalling channel object, see figure 6. This figure attempts to place the channel objects in an Open Systems Interconnection model, \[ITU94\]. Objects in the call–oriented architecture provide presentation layer services and, after the marshalling object, which reduces a call to a sequence of bytes, the session objects and finally the network transport object, a socket driver, can operate upon the data as a stream. 
The session layer objects would also perform stream–oriented encryption, but a presentation layer object would negotiate keys. Channel objects that are call–oriented will be called *call–handlers* and those that are stream–oriented will be called *stream–handlers*. The marshalling object is a call–handler and, if need be, it could invoked a number of time to render the data as a sequence of bytes. This would be useful for data security, since it may be necessary to encrypt the data and make it unintelligible to other call–handlers. ## 4 Stream–Oriented Architecture: Design Sun with *Java* and the socket factory technique have implemented what is described later as a simplex system, §5.2 : each stream has two channel objects, one for sending—the output stream—and one for receiving—the input stream. There are two kinds of pairs: the client’s, or, more precisely, the initiator’s, pair and the server’s, or acceptor’s, pair. Although there is no explicit code to synchronize the two streams so that only one may be used at a time, it is not expected that an application can simultaneously send and receive. This limitation could be removed by a suitably designed channel object which could specify a different return address. ## 5 Call–Oriented Architecture: Design As pointed out above, a message passed between a sender and receiver would negotiate four different channels: 1. Request 2. Indication 3. Response 4. Confirmation but if the call is a cast of some kind, it need only have two: 1. Request 2. Indication And some parts may not do anything, for example a request handler could log all calls made by the client, but the server need not record all the calls that are made upon it. When an application programmer makes use of a remote procedure call the infrastructure sends the message to the server and blocks the thread pending the arrival of the confirmation. One could also implement the channel objects in this way. This leads to two different architectures: * Either: one channel object for each of request, indication, response and confirmation *Simplex*. * Or: two channel objects: one that requests and blocks pending the arrival of a confirmation; one that receives indications, invokes the service (and blocks waiting for it) and then sends the response *Duplex*. The latter will be discussed first. ### 5.1 Duplex: One Pair of Objects One channel object for the initiator performs request and confirmation, another channel object performs indication and response for the acceptor. This means that every channel object has a bi–directional data–flow with the channel object below it, rather like the application object has with the distinct channels in figure 4. This has the attraction that should the initiator’s channel objects for the request and the confirmation need to share state, then this is achieved implicitly because they are the same object. This has a problem with broadcasts, (or one–casts). The initiator’s channel object (request and confirmation) needs to determine if it is to block or not and that would require this information be made available to the channel object, perhaps best supplied as a parameter to the method invocation. This information is required in CORBA, the Interface Definition Language allows methods on interfaces to be marked as one way, but is not part of the *Java* specification; although it could be inferred if the definition of the remote method returns no result, in which case, the method can be invoked as a one–cast. 
However, if exceptions are returned by the remote server then the method is a call: the exception, whether raised or not, is a response. ### 5.2 Simplex: Two Pairs of Objects If a channel object is implemented for each stage of the communication, then there are, at most, two pairs of objects: a request and confirmation pair at the initiator; an indication and response pair at the acceptor. Figure 4 illustrates the engineering of this. Because the objects are distinct they will need to bind with one another should they need to share state: an example of the difficulties this might lead to can be seen when communicating and attempting to correct errors. The infrastructure would block the application programmer’s thread of control pending the arrival of the confirmation message from the server containing the results of the call. Ordinarily, the confirmation channel objects will unravel the message returned by the server, instantiate the result for the application programmer’s thread and unblock it. If there is an error in any of the channel objects, then it should be possible for the channel objects to attempt to clear the error and resend the message if need be. The problem then is that the channel objects in the request channel need to be synchronized with the error results received in the confirmation channel. If there is a failure in one of the channel objects in the indication channel, then they must be propagated via the response and confirmation channels. This is a difficult issue and is discussed later at greater length, §6.4 . The important difference between the duplex and simplex methods is the relationship to the state of the call. With the duplex model the request channel retains the state of the call awaiting an acknowledgement from the confirmation channel. What makes the duplex model retain state is that the channel objects invoke one another and form a stack of calls which can be associated with a thread. This can be emulated in the engineering of a simplex model without requiring the use of the stack, by passing a call identifier. This would only be of use if the channel objects were also to retain their internal state when they release control. ##### Example: A Secured Message Delivery Service As an example, an encryption and decryption service would require: 1. Request encrypt 2. Indication decrypt 3. Response encrypt 4. Confirmation decrypt There are only two functions—encrypt and decrypt—but performed at four locations. It should be possible to provide just one pair of implementations—an Encryptor and a Decryptor—located differently. 1. Encryptor * Request channel object initiator * Response channel object acceptor 2. Decryptor * Indication channel object acceptor * Confirmation channel object initiator Encryptor and Decryptor would both be implemented as stream–handler channel objects. Encryption is subject to replays of old messages unless the messages are time–stamped and sequenced. Usually, time–stamping and sequencing are implemented as part of the encryption and decryption objects, but with channel objects they can be provided separately. Time–stamping has two functions: 1. Request timestamp issue 2. Indication timestamp check 3. Response timestamp issue 4. Confirmation timestamp check Two functions: StampIssuer, StampChecker; four locations: 1. StampIssuer * Request channel object initiator * Response channel object acceptor 2. 
StampChecker * Indication channel object acceptor * Confirmation channel object initiator Similarly for a sequence number generator and checker. StampIssuer and StampChecker would both be implemented as call–handler channel objects. Because it is difficult to synchronize clocks in a distributed networks, some secure message delivery systems allow some skew on the clocks and use checksums to detect replayed messages. Only the acceptor’s indication channel and the initiator’s confirmation need deploy a ReplayDetector object. It would be implemented as a call–handler generating a checksum from the marshalled data incoming as an indication or as a confirmation. ### 5.3 Simplex or Duplex There is a greater similarity in the simplex architecture to the engineering underlying the messaging system than in the duplex architecture, but the duplex architecture has some attractive state–retention properties which should be emulated in a simplex architectureq. The rest of this discussion will concern itself with a simplex architecture that attempts to retain state across all four phases of a call. The other great attraction of the simplex architecture is that if the channel objects reside in different channels, then it is easier to relocate the channel. This would be especially useful when a call uses two different media as the example system in figure 5 illustrated. ## 6 Service Invocation Semantics From what has been said above, the *Java* RPC mechanism has a stream–oriented architecture and what follows is a proposed call–oriented architecture for it. It should serve as an example of how channel objects could be deployed in other CORBA RPC systems. The distributed processing infrastructure will have to determine the order in which the channel objects will be invoked. There are now two ways in which the methods of the channel objects can be invoked and how they should bind with one another. * Either: have the request object invoke a method on the indication object and block waiting for the response: *Peer–to–Peer* invocation. * Or: have the request object perform its work and return a modified call object and return immediately: *Service* invocation. The peer–to–peer option requires the duplex architecture which has been dismissed, so only the service method of invocation is left. Peer–to–peer is very attractive, it would allow channel objects to be client–server pairs and thus be able to use the stub–compiler. Unfortunately, it would demand too much memory to stack all of the calls required to navigate complicated protocol stacks. The approach taken by Sun in their own implementation of java.rmi.server.RemoteRef, shipped as sun.rmi.server.UnicastRef as part of the *Java* Runtime Environment, is suitable for the simplex architecture proposed. Referring to the the code fragment given in the appendix §A.2 , the key method is java.rmi.server.RemoteRef.invoke(). It is implemented along the lines given in the next code fragment: note the two comments indicating when the two types of channel objects should be invoked. 
``` package sun.rmi.server; public class UnicastRef implements RemoteRef { public Object invoke(Remote r, reflect.Method m, Object[] p, long l) throws Exception { // *Request channel objects should be invoked now* // Establish a connection using information in Remote r // Get the streams associated with the connection // Marshall data onto the stream // Execute the call // *Confirmation channel objects should be invoked now* // Unmarshall // Release the connection // Return the result } } ``` ### 6.1 Wrappers: Request and Response Channels The channel objects would be invoked serially and would be passed the same parameters as invoke(), collectively call these a Message object. The request channel objects would return a Message object, but these would usually have the original message as one of its parameters. This is a simple encapsulation procedure and is illustrated in figure 7, where a time–stamping channel object has been passed the application object’s message. The application object wants to invoke method query(), the channel object passes this message as a parameter of its own message. That message invokes method stampedAt(). The two message objects might duplicate target and return addresses, it should be possible to remove this redundancy, but it should still be possible to specify a different target address and a different return address if need be. ### 6.2 Unwrappers: Indication and Confirmation Channels The objects located in the indication and confirmation channels would need a skeleton rather like that described above, §1.2 , either a dispatcher or use a reflective invocation on the channel object’s service implementation. After the service has completed processing, it would return its results to the skeleton which would then return the message object to the infrastructure. ### 6.3 Counterparts and Associates Some channel objects will have counterparts in the remote channel, for example, a time–stamping request channel object should have a time–stamp checking indication channel object, but some may not, for example a request logging channel object placed in a server’s indication channel. If a channel object sends then its counterpart receives and vice–versa. A channel object can have an associate in a local channel. A request channel object could have an associate in the confirmation channel. If a channel objects sends then its associate receives and vice–versa. Figure 8 should help to clarify this. 1. Marshalling and Unmarshalling Looking at the marshalling and unmarshalling objects: the marshalling object in the client’s request channel has a counterpart in the server’s indication channel and an associate in the client’s confirmation channel. 2. Timing The top layer of channel objects are used for timing: time–stamping and time–checking, they comprise a full complement, where each channel object has an associate and a complement. 3. Usage The second layer of channel objects is used to record usage statistics and only logs requests and responses, so they have no associates but a counterpart. Clearly, this might prove to be cumbersome, it might be simpler to insist that all channel objects have a counterpart and an associate and have a default implementation which just copies the message over. 
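The reflective invocation mentioned above for the unwrapping channel objects could look roughly like the following sketch (illustrative only; the class and method names are invented here):

```
import java.lang.reflect.Method;

// Illustrative sketch of the reflective dispatch an indication- or
// confirmation-channel object could use instead of a compiled skeleton:
// the method name and arguments recovered from the wrapper message are
// invoked on the local service implementation.
public class ReflectiveDispatcher {
    private final Object service; // the channel object's service implementation

    public ReflectiveDispatcher(Object service) {
        this.service = service;
    }

    public Object dispatch(String methodName, Class[] argTypes, Object[] args)
            throws Exception {
        Method m = service.getClass().getMethod(methodName, argTypes);
        return m.invoke(service, args); // the result is re-wrapped into the reply
    }
}
```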
### 6.4 Exception Handling Any of the channel objects can raise an exception, but part of the function of channel objects is to attempt to clear exceptions, for example: * relocation objects would obtain new addresses for servers, * key management objects would obtain new keys in the event of expiry. * authorization managers could obtain new privileges. The channel objects have to retain some state that would allow them to act upon exceptions. This would require that they place state in a common object that associated channel objects can access. The state would need to be stored with an identifier unique to the call. 1. Exception Handling in One Channel 1. Clear When one channel object raises an exception, the infrastructure signals the other channel objects in the same channel, in the reverse order in which they were invoked, asking them to attempt to clear the exception. 2. Uncleared: Undo If the exception cannot be cleared then the channel objects should be signalled to undo their previous actions. 3. Cleared: Undo and Redo If the exception can be cleared then the channel objects may need to undo their previous actions and be allowed to redo them. Redo is a distinct operation, because it may allow the implementation to be optimized for error recovery. Co–ordinating this is a little difficult. There are two sequences: 1. Attempt to clear then undo and then redo Each object in turn in ascending order (i.e. reverse order to invocation) attempts to clear the exception, if any one succeeds then the undo operation is invoked in ascending order to the top of the channel. Then the redo operation is invoked in descending order. 2. Attempt to clear and undo and then redo Each object in turn in ascending order (i.e. reverse order to invocation) attempts to clear the exception and performs an undo. If any one succeeds then the undo is invoked on the remaining objects in ascending order and then the redo operation is invoked in descending order. The former might prove more efficient if errors are expected to be cleared, the latter if not. The idea is communicated in figure 9, but the details of invocation are not. If the latter scheme (Attempt to Clear and Undo Simultaneously) is used: channel object $`C`$ raises an exception, it undoes its action, passes back the original message it received to object $`B`$ which also undoes its action and returns the original message it received to $`A`$. $`A`$ clears the exception and redoes its action and passes the message on to $`B`$ which also redoes its action and thence to $`C`$. 2. Exception Handling Across Channels If a receiving counterpart raises an exception, it should be signalled to the sender: this would be achieved by the receiving counterpart sending a message to its sending associate to raise an exception with its receiving counterpart. *Java* already has a proven mechanism for this, objects of class Exception can be contained in objects of class RemoteException. 1. Exception raised in the Indication Channel As an example, see figure 10, a request channel object sends a message which raises an exception in the indication channel, the channel object in the indication channel raises an alert with its associate in the response channel, which sends an exception to its counterpart in the confirmation channel. Points to note in figure 10 are: 1. The indication channel is cleared down 2. The server receives no message 3. The response channel propagates the exception 4. 
The objects in the confirmation channel can signal their associates in the request channel. It might be possible for the request channel objects to invoke the same procedure as portrayed in figure 9, clear the exception and re–send the message, this would require that they have access to the message as issued by the client. 2. Exception raised in the Confirmation Channel If a response channel object sends a message which raises an exception in the confirmation channel, then the confirmation channel object would signal its associate in the request channel, which may, if it has retained state, i.e. the message it sent, be able to clear the exception and re–send without intervention by the client application object. ### 6.5 Interfaces and Classes #### 6.5.1 A Message Class A simple message class is needed to contain the parameters passed to the invoke() method and to all the channel objects. ``` package java.lang.rmi.channel; import java.rmi.*; import java.lang.reflect.*; public class Message { Remote remote; Method method; Object[] parms; long hash; public Message(Remote r, Method m, Object[] p, long h) { remote = r; method = m; parms = p; hash = h; } } ``` #### 6.5.2 Channel Objects ##### Basic Interface Channel objects then all have the same interface and it is their location which determines their function, i.e. whether they wrap or unwrap. ``` package java.lang.rmi.channel; public interface Handler { public Message clear(Message m, Exception e) throws Exception; public Message todo(Message m) throws Exception; public Message undo(Message m, Exception e) throws ClearedException; public Message redo(Message m) throws Exception; } ``` Both methods of clearing exceptions are covered in this interface because there is a separate clear() method. ##### Exceptions The undo() method throws an exception to indicate it has cleared the exception it was passed. ``` package java.lang.rmi.channel; public class ClearedException extends Exception { public ClearedException(String s) { super(s); } public ClearedException(String s, Exception ex) { super(s, ex); } } ``` Other exceptions which might make processing more decisive: 1. Unclearable It might also prove useful to have clear() raise an exception that the exception cannot be cleared and this would allow the infrastructure to request a re–send decision from the application object. 2. Rebind It might also prove useful if the channel objects can demand a rebind and force the destruction of their channel. This would be useful for a relocation channel object. ##### Handler Specification The next is an abstract class that provides a container to hold the channel objects which would be loaded by the infrastructue. It might be wise, for security reasons, to implement the class more fully and make the getHandler() method final. The class also provides a means of obtaining associates and counterparts. ``` package java.lang.rmi.channel; import java.rmi.*; public abstract class Handlers { public static final int REQUEST = 1; public static final int INDICATION = 2; public static final int RESPONSE = 3; public static final int CONFIRMATION = 4; public Handlers() { ; } Handler getHandler(int identity) { return null; } Remote getCounterPart(int identity) { return null; } Object getAssociate(int identity) { return null; } } ``` ##### Channels Channels themselves would be simple containers probably implemented with a vector object which can be easily iterated in the invoke() operation. 
```
package java.lang.rmi.channel;
import java.rmi.*;

public abstract class Channel {
    private int identity = -1;
    private Handlers handlers;

    public Channel(int id, Handlers h) {
        identity = id;
        handlers = h;
    }
}
```

#### 6.5.3 Channel Objects: Binding and Invoking Methods

##### Binding

Channel objects would need to bind with one another, so that they can exchange parameters. The channel object methods are defined on one interface; they can therefore support other methods on another service interface.

##### Invocation

When a channel object is passed a message object, it will need to invoke a method on its counterpart. The easiest way for it to do this is to use the same stub and skeleton mechanism for dispatching a call as remote procedure calls use. This can be seen in figure 7, where one channel object adds enough parameters to the message so that, when it is received by its counterpart, it can use a local invoke() method to pass the parameters and obtain the resulting message object from a local service interface. If channel objects have to invoke methods upon one another they can either send them with the message or they can go “out–of–band” and communicate them directly, using their own channel, if they support the java.rmi.Remote interface.

## 7 Summary

### 7.1 Previous Work

*Java* certainly has the functionality to implement channel objects. The author has already implemented a similar scheme (using a duplex, peer–to–peer architecture) for a variant of ANSAware, \[APM98\], known as DAIS, the distributed processing platform produced by ICL, \[ICL98\], which was CORBA compliant. That implementation, written wholly in C, \[KR78\], was able to support a Kerberos, \[NT94\], authentication and confidentiality service invisibly to the application programmer; the channels were constructed according to a template specified by an environment variable. The author has developed a prototype application which can support the Transport Layer Security protocol. (A fairly rigorous treatment of both it and the *Kerberos* protocol is given in \[Eav99\].) A simple channel object is put in place that negotiates the keys, whilst a custom socket is used to perform the encryption. ANSAware evolved to a product known as Reflective Java, \[WS97\], which supported channel objects, but was designed to provide support to application objects—presentation of data and so forth—and not to provide system services.

### 7.2 Difficulties of Binding

The principal difficulty faced in these prototypes is that there is no simple or well–defined mechanism for expressing how two parties should bind with one another. ANSAware had a well–developed binding model, \[Otw95\], but its final implementation had a limited number of quality of service parameters. *Java* now has a very well–developed security architecture, \[SUN98b\], but currently seems to have no means of specifying the degree of security *required*; it seems to be principally oriented towards setting permissions. (Of course, a permission one does not have is a requirement.) The SecurityManager would appear to be the best–placed component of the *Java* security architecture to negotiate and configure channels. It will be difficult for any distributed processing platform to achieve any degree of acceptance in an enterprise–wide data processing environment if it is not possible to implement many of the logging services that have long been available in mainframe systems.
The most pertinent example of which is CICS \[Wip87\], the Customer Information Control System, which was originally a messaging system, but was extended to become a transaction monitor that could manage database enquiries. Hopefully, with channel objects in place, remote procedure calls could do the same across the World–Wide Web. ## Appendix A Stubs for the RMI of *Java* The Remote Method Invocation package of *Java* needs to use a stub–compiler to generate stubs. It no longer needs to generate skeletons for servers. There is, of course, no need to distribute the stubs, the client can collect them from the server. ### A.1 Interface and Implementation ##### Interface This very simply returns a string, given a string. ``` package rmi.demo; public interface Answerer extends java.rmi.Remote { String answer(String mesg) throws java.rmi.RemoteException; } ``` ##### Implementation This is the implementation of the interface. Most of the code involves posting the server’s reference to the Naming service. ``` package rmi.demo; import java.rmi.*; import java.rmi.server.UnicastRemoteObject; public class AnswererImpl extends UnicastRemoteObject implements Answerer { private String name; public AnswererImpl(String s) throws RemoteException { super(); name = s; } // Begin // Remotely accessible methods // Parameters have to be implementations of Serializable public String answer(String message) throws RemoteException { System.out.println("Received:" + message); return new String("You said:" + message); } // End public static void main(String args[]) { // Create and install a security manager System.setSecurityManager(new RMISecurityManager()); // Without it no new classes can be loaded try { System.out.println("Construct"); AnswererImpl obj = new AnswererImpl("AnswererServer"); System.out.println("Bind"); Naming.rebind("AnswererServer", obj); System.out.println("Bound"); } catch (Exception e) { System.out.println("AnswererImpl err: " + e.getMessage()); e.printStackTrace(); } } } ``` ### A.2 Stub This is the version 1.2 stub for the rmi.demo.Answerer service interface. 1. Reflective Language Features The most important feature of these stubs is that they make use of the capabilities provided by java.lang.reflect. When the class is loaded by the client, and it is designed to be loaded from a remote location, the initialization code instantiates the java.lang.reflect.method attribute for the stub $ method\_answer\_0 using the getMethod() method of java.lang.Class. 2. Instantiating objects that implement rmi.demo.Answerer Because the client has to use the naming service to locate an object that implements the rmi.demo.Answerer, the naming service loads the stub class and calls the constructor in it rmi.demo.AnswererImpl\_Stub(). The client casts the object the naming service returns to be rmi.demo.Answerer. 3. Invoking methods The stub compiler generates proxy implementations of the methods that are declared in rmi.demo.Answerer. They use the reflective method variable $method\_answer\_0 and the java.rmi.server.RemoteRef.invoke() method. ``` // Stub class generated by rmic, do not edit. // Contents subject to change without notice. 
package rmi.demo; public final class AnswererImpl_Stub extends java.rmi.server.RemoteStub implements rmi.demo.Answerer, java.rmi.Remote { private static final long serialVersionUID = 2; private static java.lang.reflect.Method $method_answer_0; static { try { $method_answer_0 = rmi.demo.Answerer.class.getMethod("answer", new java.lang.Class[] {java.lang.String.class}); } catch (java.lang.NoSuchMethodException e) { throw new java.lang.NoSuchMethodError("stub class initialization failed"); } } // constructors public AnswererImpl_Stub(java.rmi.server.RemoteRef ref) { super(ref); } // methods from remote interfaces // implementation of answer(String) public java.lang.String answer(java.lang.String $param_String_1) throws java.rmi.RemoteException { try { Object $result = ref.invoke(this, $method_answer_0, new java.lang.Object[] {$param_String_1}, -8351992698817289230L); return ((java.lang.String) $result); } catch (java.lang.RuntimeException e) { throw e; } catch (java.rmi.RemoteException e) { throw e; } catch (java.lang.Exception e) { throw new java.rmi.UnexpectedException("undeclared checked exception", e); } } } ``` ## Appendix B Funding and Author Details Research was funded by the Engineering and Physical Sciences Research Council of the United Kingdom. Thanks to Malcolm Clarke, Russell–Wynn Jones and Robert Thurlby. > Walter Eaves > Department of Electrical Engineering, > Brunel University > Uxbridge, > Middlesex UB8 3PH, > United Kingdom > Walter.Eaves@bigfoot.com > Walter.Eaves@brunel.ac.uk > http://www.bigfoot.com/~Walter.Eaves > http://www.brunel.ac.uk/~eepgwde
no-problem/9904/cond-mat9904277.html
ar5iv
text
# Statistical Mechanics of Torque Induced Denaturation of DNA ## Abstract A unifying theory of the denaturation transition of DNA, driven by temperature $`T`$ or induced by an external mechanical torque $`\mathrm{\Gamma }`$ is presented. Our model couples the hydrogen-bond opening and the untwisting of the helicoidal molecular structure. We show that denaturation corresponds to a first-order phase transition from B-DNA to d-DNA phases and that the coexistence region is naturally parametrized by the degree of supercoiling $`\sigma `$. The denaturation free energy, the temperature dependence of the twist angle, the phase diagram in the $`T,\mathrm{\Gamma }`$ plane and isotherms in the $`\sigma ,\mathrm{\Gamma }`$ plane are calculated and show a good agreement with experimental data. PACS Numbers : 87.14.Gg, 05.20.-y, 64.10.+h Denaturation of the DNA, due to its essential relevance to transcription processes has been the object of intensive works in the last decades. Experiments on dilute DNA solutions have provided evidence for the existence of a thermally driven melting transition corresponding to the sudden opening of base pairs at a critical temperature $`T_m`$ . Later, following the work of Smith et al., micromanipulation techniques have been developed to study single-molecule behaviour under stress conditions and how structural transitions of DNA can be mechanically induced. While most single-molecule experiments have focused on stretching properties so far, the response of a DNA molecule to an external torsional stress has been studied very recently , sheding some new light on denaturation . From a biological point of view, torsional stress is indeed not unusual in the living cell and may strongly influence DNA functioning. For a straight DNA molecule with fixed ends, the degree of supercoiling $`\sigma =(TwTw_0)/Tw_0`$ measures the twist $`Tw`$ (i.e. the number of times the two strands of the DNA double-helix are intertwined) with respect to its counterpart $`Tw_0`$ for an unconstrained linear molecule. In Strick et al. experiment , a $`\lambda `$ DNA molecule, in 10 mM PB, is attached at one end to a surface and pulled and rotated by a magnetic bead at the other end. At stretching forces of $`0.5`$ pN, sufficient to eliminate plectonems by keeping the molecule straight, a torque induced transition to a partially denaturated DNA is observed. Beyond a critical supercoiling $`\sigma _c0.015`$ and an associated critical torque $`\mathrm{\Gamma }_c0.05eV/\text{rad}`$ the twisted molecule separates into a pure B-DNA phase with $`\sigma =\sigma _c`$ and denaturated regions with $`\sigma =1`$. Extra turns applied to the molecule increase the relative fraction of d-DNA with respect to B-DNA. In this letter, we provide a unifying understanding of both thermally and mechanically induced denaturation transitions. We show that denaturation can be described in the framework of first-order phase transitions with control parameters being the temperature and the external torque. This is in close analogy to the liquid-gas transition, where control parameters are the temperature and the pressure. Our theory gives a natural explanation to the BDNA-dDNA phases coexistence observed in single molecule experiments . 
We give quantitative estimates for the denaturation free-energy $`\mathrm{\Delta }G`$, the temperature dependence of the average twist angle $`\mathrm{\Delta }\theta /\mathrm{\Delta }T`$, and the critical supercoiling $`\sigma _c`$ and torque $`\mathrm{\Gamma }_c`$ at room temperature, in good agreement with the experimental data. Furthermore, the dependence of the critical torque on the temperature is predicted.

Our model reproduces the Watson-Crick double helix (B-DNA) as schematized in Fig. 1. For each base pair ($`n=1,\dots ,N`$), we consider a polar coordinate system in the plane perpendicular to the helical axis and introduce the radius $`r_n`$ and the angle $`\phi _n`$ of the base pair. The sugar phosphate backbone is made of rigid rods, the distance between adjacent bases on the same strand being fixed to $`L=6.9\AA `$. The distance $`h_n`$ between base planes $`n-1`$ and $`n`$ is expressed in terms of the radii $`r_{n-1},r_n`$ and the twist angle $`\theta _n=\phi _n-\phi _{n-1}`$ as

$$h_n(r_n,r_{n-1},\theta _n)=\sqrt{L^2-r_n^2-r_{n-1}^2+2r_nr_{n-1}\mathrm{cos}\theta _n}.$$ (1)

The potential energy associated with a configuration of the degrees of freedom $`(r_n,\phi _n)`$ is the sum of the following nearest neighbor interactions. First, hydrogen bonds inside a given pair $`n`$ are taken into account through the short-range Morse potential $`V_m(r_n)=D\left(e^{-a(r_n-R)}-1\right)^2`$ with $`R=10\AA `$. Fixing $`a=6.3\AA ^{-1}`$, the width of the well amounts to $`3a^{-1}\simeq 0.5\AA `$, in agreement with the order of magnitude of the relative motion of the hydrogen bonded bases. A base pair with diameter $`r>r_d=R+6/a`$ may be considered as open. The potential depth $`D`$, typically of the order of $`0.1eV`$, depends on the base pair type (Adenine-Thymine (AT) or Guanine-Cytosine (GC)) as well as on the ionic strength. Secondly, the shear force that opposes sliding motion of one base over another in the B-DNA conformation is accounted for by the stacking potential $`V_s(r_n,r_{n-1})=Ee^{-b(r_n+r_{n-1}-2R)}(r_n-r_{n-1})^2`$. Due to the decrease of molecular packing with base pair opening, the shear prefactor is exponentially attenuated and becomes negligible beyond a distance $`5b^{-1}=10\AA `$, which coincides with the diameter of a base pair. Thirdly, an elastic energy $`V_b(r_n,r_{n-1},\theta _n)=K[h_n-H]^2`$ is introduced to describe the vibrations of the molecule in the B phase. The helicoidal structure arises from $`H<L`$: in the rest configuration $`r_n=R`$ at $`T=0`$K, $`V_b`$ is minimum and zero for the twist angle $`\theta _n=2\pi /10`$. Choosing $`H=3\AA `$, we recover at room temperature $`T=298`$K the thermal averages $`h_n\simeq 3.4\AA `$ and $`\theta _n\simeq 2\pi /10.4`$. The above definition of $`V_b`$ holds as long as the argument of the square root in (1) is positive, that is if $`r_n,r_{n-1},\theta _n`$ are compatible with rigid rods having length $`L`$. By imposing $`V_b=\infty `$ for negative arguments, unphysical values of $`r_n,r_{n-1},\theta _n`$ are excluded. As the behaviour of a single strand ($`r>r_d`$) is uniquely governed by this rigid rod condition, the model not only describes vibrations of helicoidal B-DNA but is also appropriate for the description of the denaturated phase. As will be discussed later, the elastic constant $`K=0.014eV/\AA ^2`$ is determined to give back the torsional modulus $`C`$ of B-DNA, estimated to be $`C=860\pm 100\AA `$ at $`T=298`$K.
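To keep the ingredients of the model in one place, the following minimal Python sketch (our illustration, not part of the original work) collects the geometric relation (1) and the three potential terms with the parameter values quoted in the text. The function names and unit conventions (energies in eV, lengths in Å, angles in rad) are ours; the values of $`D`$ and $`E`$ anticipate the fit described in the next paragraph, and $`b`$ is inferred from $`5b^{-1}=10\AA `$.

```
import numpy as np

# Parameter values quoted in the text (eV, Angstrom, rad).
D, a, R = 0.16, 6.3, 10.0       # Morse depth, inverse width, base-pair diameter
E_s, b = 4.0, 0.5               # stacking prefactor E; b inferred from 5/b = 10 A
K_el, H, L = 0.014, 3.0, 6.9    # elastic constant K, rest rise H, backbone rod length L

def h(r1, r2, theta):
    """Base-plane distance of Eq. (1); NaN if the rigid-rod constraint is violated."""
    arg = L**2 - r1**2 - r2**2 + 2.0 * r1 * r2 * np.cos(theta)
    return np.sqrt(arg) if arg >= 0.0 else np.nan

def V_morse(r):
    """Hydrogen-bond (Morse) potential of a single base pair."""
    return D * (np.exp(-a * (r - R)) - 1.0) ** 2

def V_stack(r1, r2):
    """Stacking (shear) interaction between neighbouring base pairs."""
    return E_s * np.exp(-b * (r1 + r2 - 2.0 * R)) * (r1 - r2) ** 2

def V_elastic(r1, r2, theta):
    """Backbone elasticity K [h_n - H]^2; infinite outside the rigid-rod range."""
    hn = h(r1, r2, theta)
    return np.inf if np.isnan(hn) else K_el * (hn - H) ** 2

# Rest configuration of the helix: r = R, theta = 2*pi/10 gives V_elastic close to zero.
print(V_elastic(R, R, 2.0 * np.pi / 10.0))
```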
The parameters of the Morse potential $`D`$ and of the stacking interaction $`E`$ we have set to fit the melting temperature $`T_m=350`$K of the homogeneous Poly(dGdT)-Poly(dAdC)-DNA at $`20mMNa^+`$ , see inset of fig 3. This melting temperature coincides with the expected denaturation temperature of a heterogeneous DNA with a sequence GC/AT ratio equal to unity at $`10mMNa^+`$ , as the $`\lambda `$-DNA in the experimental conditions of . Among all possible pairs of parameters $`(D,E)`$ that correctly fit $`T_m`$, we have selected the pair $`(D=0.16eV,E=4eV/\AA ^2)`$ giving the largest prediction for $`\mathrm{\Delta }G`$, see inset of fig 2, that is in closest agreement with thermodynamical estimates of the denaturation free-energy. When the molecule is fixed at one end and subject to a torque $`\mathrm{\Gamma }`$ on the other extremity, an external potential $`V_\mathrm{\Gamma }(\theta _n)=\mathrm{\Gamma }\theta _n`$ has to be included. A torque $`\mathrm{\Gamma }>0`$ overtwists the molecule, while $`\mathrm{\Gamma }<0`$ undertwists it. The configurational partition function at inverse temperature $`\beta `$ can be calculated using the transfer integral method : $$Z_\mathrm{\Gamma }=_{\mathrm{}}^{\mathrm{}}𝑑\phi _NR,\phi _N|T^N|R,0$$ (2) As in the experimental conditions, the radii of the first and last base pairs are fixed to $`r_1=r_N=R`$. The angle of the fixed extremity of the molecule is set to $`\phi _1=0`$ with no restriction whereas the last one $`\phi _N`$ is not constrained. The transfer operator entries read $`<r,\phi |T|r^{},\phi ^{}>X(r,r^{})\mathrm{exp}\{\beta (V_b(r,r^{},\theta )+V_\mathrm{\Gamma }(\theta ))\}\chi (\theta )`$ with $`X(r,r^{})=\sqrt{rr^{}}\mathrm{exp}\{\beta (V_m(r)/2+V_m(r^{})/2+V_s(r,r^{}))\}`$. The $`\sqrt{rr^{}}`$ factor in $`X`$ comes from the integration of the kinetic term; $`\chi (\theta )=1`$ if $`0\theta =\phi \phi ^{}\pi `$ and 0 otherwise to prevent any clockwise twist of the chain. At fixed $`r,r^{}`$, the angular part of the transfer matrix $`T`$ is translationally invariant in the angle variables $`\phi `$, $`\phi ^{}`$ and can be diagonalized through a Fourier transform. Thus, for each Fourier mode $`k`$ we are left with an effective transfer matrix on the radius variables $`T_k(r,r^{})=X(r,r^{})Y_k(r,r^{})`$ with $$Y_k(r,r^{})=_0^\pi 𝑑\theta e^{\beta (V_b(r,r^{},\theta )+V_\mathrm{\Gamma }(\theta ))}e^{ik\theta }.$$ (3) The only mode contributing to $`Z_\mathrm{\Gamma }`$ is $`k=0`$ once $`\phi _N`$ has been integrated out in (2). The eigenvalues and eigenvectors of $`T_0`$ will be denoted by $`\lambda _q^{(\mathrm{\Gamma })}`$ and $`\psi _q^{(\mathrm{\Gamma })}(r)`$ respectively with $`\lambda _0^{(\mathrm{\Gamma })}\lambda _1^{(\mathrm{\Gamma })}\mathrm{}`$. In the $`N\mathrm{}`$ limit, the free-energy density $`f^{(\mathrm{\Gamma })}`$ does not depend on the boundary conditions on $`r_1`$ and $`r_N`$ and is simply given by $`f^{(\mathrm{\Gamma })}=k_BT\mathrm{ln}\lambda _0^{(\mathrm{\Gamma })}`$. Note that the above result can be straightforwardly extended to the case of a molecule with a fixed twist number $`Tw=N\mathrm{}`$, e.g. for circular DNA. Indeed, the twist density $`\mathrm{}`$ and the torque $`\mathrm{\Gamma }`$ are thermodynamical conjugated variables and the free-energy at fixed twist number $`\mathrm{}`$ is the Legendre transform of $`f_\mathrm{\Gamma }`$. We have resorted to a Gauss-Legendre quadrature for numerical integrations over the range $`r_{min}=9.7\AA <r<r_{max}`$. 
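As a rough illustration of how the $`k=0`$ transfer operator might be handled numerically, the sketch below (ours, not the authors’ implementation) discretizes $`T_0(r,r^{})`$ on Gauss-Legendre nodes and extracts its largest eigenvalues, reusing the potential functions sketched above. The grid sizes, the value of $`r_{max}`$, the torque value and the use of a dense symmetric eigensolver in place of Kellog’s iteration are all illustrative choices.

```
import numpy as np

kB, T = 8.617e-5, 298.0          # Boltzmann constant (eV/K) and temperature (K)
beta = 1.0 / (kB * T)
Gamma = 0.0                      # external torque (eV/rad); 0 = freely swiveling molecule
r_min, r_max, n_r = 9.7, 30.0, 80
n_theta = 200

# Gauss-Legendre nodes and weights mapped onto [r_min, r_max] for the radial integration.
x, w = np.polynomial.legendre.leggauss(n_r)
r = 0.5 * (r_max - r_min) * x + 0.5 * (r_max + r_min)
wr = 0.5 * (r_max - r_min) * w

theta = np.linspace(0.0, np.pi, n_theta)   # angular grid for Y_0(r, r')
dtheta = theta[1] - theta[0]

def X(r1, r2):
    """sqrt(r r') exp{-beta [V_m(r)/2 + V_m(r')/2 + V_s(r, r')]} as defined in the text."""
    return np.sqrt(r1 * r2) * np.exp(-beta * (0.5 * V_morse(r1) + 0.5 * V_morse(r2)
                                              + V_stack(r1, r2)))

def Y0(r1, r2):
    """Angular integral of exp{-beta [V_b + V_Gamma]} over 0 <= theta <= pi (mode k = 0)."""
    vb = np.array([V_elastic(r1, r2, t) for t in theta])
    return dtheta * np.sum(np.exp(-beta * (vb - Gamma * theta)))   # simple Riemann sum

# Symmetrized kernel sqrt(w_i) T_0(r_i, r_j) sqrt(w_j); its eigenvalues approximate those
# of the integral operator, so a dense symmetric eigensolver replaces Kellog's iteration.
T0 = np.array([[X(ri, rj) * Y0(ri, rj) for rj in r] for ri in r])
M = np.sqrt(wr)[:, None] * T0 * np.sqrt(wr)[None, :]
lam = np.linalg.eigvalsh(M)      # ascending order; lam[-1] is the largest eigenvalue
print("largest eigenvalues:", lam[-3:][::-1])
print("free energy per base pair, f = -kT ln(lambda_0):", -kB * T * np.log(lam[-1]), "eV")
```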
The Morse potential $`V_m`$ increases exponentially with decreasing $`r<R`$ and may be considered as infinite for $`r<9.7\AA `$. The extrapolation procedure to $`r_{max}\to \infty `$ depends on the torque value $`\mathrm{\Gamma }`$ and will be discussed below. Using Kellog’s iterative method , the eigenvalues $`\lambda _q^{(\mathrm{\Gamma })}`$ and associated eigenvectors $`\psi _q^{(\mathrm{\Gamma })}(r)`$ have been obtained for $`q=0,1,2`$. Like a quantum mechanical wave function, $`\psi _0^{(\mathrm{\Gamma })}(r)`$ gives the probability amplitude of a base pair to be of radius $`r`$. Two quantities of interest are the percentage of opened base pairs $`P=\int _{r_d}^{\infty }dr|\psi _0^{(\mathrm{\Gamma })}(r)|^2`$ and the averaged twist angle $`\langle \theta \rangle =-\partial f_\mathrm{\Gamma }/\partial \mathrm{\Gamma }`$. Results for a freely swiveling molecule at room temperature are as follows. $`\psi _0^{(\mathrm{\Gamma }=0)}`$ is entirely confined in the Morse potential well and describes a closed molecule. Conversely the following eigenfunctions $`\psi _1^{(\mathrm{\Gamma }=0)}`$, $`\psi _2^{(\mathrm{\Gamma }=0)},\dots `$ correspond to an open molecule: they extend up to $`r_{max}`$ and vanish for $`r<r_d`$. They are indeed orthogonal to another family of excited states that are confined in the Morse potential with much lower eigenvalues. The shapes of the open states are strongly reminiscent of purely diffusive eigenfunctions, $`\psi _q^{(\mathrm{\Gamma }=0)}(r)\propto \mathrm{sin}(q\pi (r-r_d)/(r_{max}-r_d))`$, leading to a continuous spectrum in the limit $`r_{max}\to \infty `$. This observation can be understood as follows. For $`r,r^{}>r_d`$, the transfer operator $`T_0(r,r^{})`$ is compared in Fig. 2 to the exact conditional probability $`\rho (r,r^{})`$ that the endpoint of a backbone rod of length $`L`$ is located at distance $`r^{}`$ from the vertical reference axis knowing that its other extremity lies at distance $`r`$. For fixed $`r`$, $`T_0`$ and $`\rho `$ both diverge at $`r^{}=r\pm L`$ and are essentially flat in between. The flatness of $`T_0`$ derives from the expression of $`V_b`$: a rigid rod with extremities lying in $`r,r^{}`$ may always be oriented with some angle $`\theta ^{}`$ (which tends to 0 at large distances) at zero energetic cost $`V_b(r,r^{},\theta ^{})=0`$. As a conclusion, our model can reproduce the purely entropic denaturated phase. As shown in Fig. 2, at a critical temperature $`T_m=350`$K, $`\lambda _0^{(\mathrm{\Gamma }=0)}`$ crosses the second largest eigenvalue and penetrates, as in a first-order-like transition, the continuous spectrum. For $`T>T_m`$, the bound state disappears and $`\psi _1^{(\mathrm{\Gamma }=0)}`$ in Fig. 2 becomes the eigenmode with the largest eigenvalue. The percentage of opened base pairs $`P`$ exhibits an abrupt jump from 0 to 1 at $`T_m`$, reproducing the UV absorbance vs. temperature experimental curve for Poly(dGdT)-Poly(dAdC)-DNA . The difference $`\mathrm{\Delta }G`$ between the free energy $`f_d^{(\mathrm{\Gamma }=0)}`$ of the open state ($`q=1`$) and the free energy $`f_B^{(\mathrm{\Gamma }=0)}`$ of the closed state ($`q=0`$) gives the denaturation free energy at temperature $`T`$, see Fig. 2. At $`T=298`$K, we obtain $`\mathrm{\Delta }G=0.022eV`$ in good agreement with the free energy of the denaturation bubble formation $`\mathrm{\Delta }G\simeq 0.025eV`$ estimated in AT rich regions .
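Continuing the same sketch (again ours, purely illustrative), the opening probability and the denaturation free energy discussed above could be read off from the eigen-decomposition as follows; identifying the lowest open state with the next-largest eigenvalue follows the discussion in the text.

```
# Eigenvectors of the symmetrized kernel from the previous fragment; column j pairs with lam[j].
lam, vec = np.linalg.eigh(M)
phi0 = vec[:, -1]                       # ground state, already weighted by sqrt(wr)
r_d = R + 6.0 / a                       # opening threshold r_d = R + 6/a quoted in the text

# Fraction of open base pairs, P = integral_{r_d}^{infinity} dr |psi_0(r)|^2.
P = np.sum(phi0[r > r_d] ** 2) / np.sum(phi0 ** 2)

# Free energies of the bound (q = 0) and lowest open (q = 1) states and their difference.
f_B = -kB * T * np.log(lam[-1])
f_d = -kB * T * np.log(lam[-2])
print(f"P = {P:.3f},  Delta G = f_d - f_B = {f_d - f_B:.4f} eV")
```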
The thermal fluctuations in the B-DNA phase lead to an undertwisting $`\mathrm{\Delta }\theta /\mathrm{\Delta }T\mathrm{1.4\hspace{0.17em}10}^4\text{rad/K}`$ which closely agrees with experimental measures $`\mathrm{\Delta }\theta /\mathrm{\Delta }T\mathrm{1.7\hspace{0.17em}10}^4\text{rad/K}`$ . The presence of an overtwisting (respectively undertwisting) torque $`\mathrm{\Gamma }>0`$ (resp. $`\mathrm{\Gamma }<0`$) strongly affects $`f_B^{(\mathrm{\Gamma })}`$, leaving almost unchanged the single strand free-energy $`f_d^{(\mathrm{\Gamma })}`$. The denaturation transition takes place at $`T_m(\mathrm{\Gamma })`$ , see the phase diagram shown in the inset of fig 3. We expect a critical point at a high temperature and large positive torque such that $`\psi _1^\mathrm{\Gamma }`$ is centered on $`R`$ . The supercoiling, induced by a torque at a given temperature smaller than $`T_m(\mathrm{\Gamma }=0)=350`$K, is the relative change of twist with respect to the value at zero torque in the B-DNA state, $`\sigma (\mathrm{\Gamma })=(\theta _\mathrm{\Gamma }\theta _{\mathrm{\Gamma }=0})/\theta _{\mathrm{\Gamma }=0}`$. In fig 3, we have plotted the isotherms in the $`\sigma ,\mathrm{\Gamma }`$ plane. Horizontals lines are critical coexistence regions between the B-DNA phase, on the left of the diagram and the denaturated phase on the right (with $`\sigma =1`$). The left steep line is found to define a linear relation between $`\mathrm{\Gamma }`$ and $`\sigma `$ : $`\mathrm{\Gamma }=K_\theta (\theta _\mathrm{\Gamma }\theta _{\mathrm{\Gamma }=0})`$. The slope $`K_\theta `$ does not vary with temperature over the range 298 K$`<T<`$350 K and is related to the torsional modulus $`C`$ of B-DNA through $`C=K_\theta h_n/(k_BT)`$. The value of $`K`$ appearing in the elastic potential $`V_b`$ and given above was tuned to ensure that $`C=860\AA `$. At room temperature, critical coexistence between B-DNA and d-DNA arises at torque $`\mathrm{\Gamma }_c=0.035eV/\text{rad}`$ and supercoiling $`\sigma _c=0.01`$. These theoretical results are in good agreement with the values $`\mathrm{\Gamma }_c=0.05eV/\text{rad}`$, $`\sigma _c=0.015`$ obtained experimentally. We plan to combine the present model with existing elasticity theories of DNA to understand the influence of an external stretching force on the structural transition studied in this paper. It would also be interesting to see how the above results are modified in presence of a heterogeneous sequence. Acknowledgements : The present model is the fruit of a previous collaboration of one of us (S.C.) with M. Barbi and M. Peyrard which we are particularly grateful to. We also thank B. Berge, D. Bensimon, C. Bouchiat, E. Bucci, A. Campa, A. Colosimo, V. Croquette, A. Giansanti for useful discussions.
no-problem/9904/cond-mat9904362.html
ar5iv
text
# Thermoelectric Transport Properties in Disordered Systems Near the Anderson Transition ## 1 Introduction The Anderson-type metal-insulator transition (MIT) has been the subject of investigation for decades since Anderson formulated the problem in 1958 anderson . He proposed that increasing the strength of a random potential in a three-dimensional (3D) lattice may cause an “absence of diffusion” for the electrons. Today, it is widely accepted that near this exclusively-disorder-induced MIT the d. c. conductivity $`\sigma `$ behaves as $`|EE_c|^\nu `$, where $`E_c`$ is the critical energy or the mobility edge at which the MIT occurs, and $`\nu `$ is a universal critical exponent kramer . Numerical studies based on the Anderson Hamiltonian of localization have supported this scenario with much evidence kramer ; schreiber ; bulka ; hofstetter ; slevin . In measurements of $`\sigma `$ near the MIT in semiconductors and amorphous alloys this behavior was also observed with varying values of $`\nu `$ ranging from $`0.5`$$`1.3`$ nu ; lauinger ; stupp . It is currently believed that these different exponents are caused by interactions in the system belitz . Indeed, an MIT may be induced not only by disorder but also by interactions such as electron-electron and electron-phonon interactions, among others mott2 . Nevertheless, the experimental confirmation of the critical behavior of $`\sigma `$ allows the use of the Anderson model in order to describe the transition between the insulating and the metallic states in disordered systems. Besides for the conductivity $`\sigma `$, experimental investigations can also be done for thermoelectric transport properties such as the thermoelectric power $`S`$ lauinger ; sherwood ; lakner , the thermal conductivity $`K`$ and the Lorenz number $`L_0`$. The behavior of these quantities at low temperature $`T`$ in disordered systems close to the MIT has so far not been satisfactorily explained. In particular, some authors have argued that $`S`$ diverges sherwood ; castellani or that it remains constant sivan ; enderby as the MIT is approached from the metallic side. In addition, $`|S|`$ at the MIT has been predicted enderby to be of the order of $`200\mu `$V/K. On the other hand, measurements of $`S`$ close to the MIT conducted on semiconductors for $`T1`$K lakner and on amorphous alloys in the range $`5`$K$`T350`$K lauinger yield values of the order of 0.1-1$`\mu `$V/K. They also showed that $`S`$ can either be negative or positive depending on the donor concentration in semiconductors or the chemical composition of the alloy. The large difference between the theoretical and experimental values is still not resolved. The objective of this paper is to study the behavior of the thermoelectric transport properties for the Anderson model of localization in disordered systems near the MIT at low $`T`$. We clarify the above mentioned difference in the theoretical calculations for $`S`$, by showing that the radius of convergence for the Sommerfeld expansion used in Refs. castellani ; sivan is zero at the MIT. We show that $`S`$ is a finite constant at the MIT as argued in Refs. sivan ; enderby . Besides for $`S`$, we also compute the $`T`$ dependence for $`\sigma `$, $`K`$, and $`L_0`$. Our approach is neither restricted to a low- or high-$`T`$ expansion as in Refs. castellani ; sivan , nor confined to the critical regime as in Ref. enderby . We shall first introduce the model in Sec. 2. Then in Secs. 
3 and 4 we review the thermoelectric transport properties in the framework of linear response and the present formulations in calculating them. In Sec. 5 we shall show how to calculate the $`T`$ dependence of these properties. Results of these calculations are then presented in Sec. 6. Lastly, in Sec. 7 we discuss the relevance of our study to the experiments. ## 2 The Anderson Model of Localization The Anderson model anderson is described by the Hamiltonian $$H=\underset{i}{}ϵ_i|ii|+\underset{ij}{}t_{ij}|ij|$$ (1) where $`ϵ_i`$ is the potential energy at the site $`i`$ of a regular cubic lattice and is assumed to be randomly distributed in the range $`[W/2,W/2]`$ throughout this work. The hopping parameters $`t_{ij}`$ are restricted to nearest neighbors. For this system, at strong enough disorder and in the absence of a magnetic field, the one-particle wavefunctions become exponentially localized at $`T=0`$ and $`\sigma `$ vanishes kramer . Illustrating this, we refer to Fig. 1 where we show the density of states $`\rho (E)`$ obtained by diagonalizing the Hamiltonian (1) with the Lanczos method as in Ref. milde0 ; milde . The states in the band tails with energy $`|E|>E_c`$ are localized within finite regions of space in the system at $`T=0`$ kramer . When the Fermi energy $`E_F`$ is within these tails at $`T=0`$ the system is insulating. Otherwise, if $`|E_F|<E_c`$ the system is metallic. The critical behavior of $`\sigma `$ is given by $$\sigma (E)=\{\begin{array}{cc}\sigma _0\left|1\frac{E}{E_c}\right|^\nu ,& |E|E_c,\\ 0,& |E|>E_c,\end{array}$$ (2) where $`\sigma _0`$ is a constant and $`\nu `$ is the conductivity exponent kramer . Thus, $`E_c`$ is called the mobility edge since it separates localized from extended states. At the critical disorder $`W_c=16.5`$, the mobility edge occurs at $`E_c=0`$, all states with $`|E|>0`$ are localized schreiber ; bulka and states with $`E=0`$ are multifractal schreiber ; milde0 . The value of $`\nu `$ has been computed from the non-linear sigma-model wegner , transfer-matrix methods kramer ; slevin , Green functions methods kramer , and energy-level statistics hofstetter ; els . Here we have chosen $`\nu =1.3`$, which is in agreement with experimental results in Si:P stupp and the numerical data of Ref. hofstetter . More recent numerical results kramer ; slevin , computed with higher accuracy, suggest that $`\nu =1.5\pm 0.1`$. As we shall show later, this difference only slightly modifies our results. We emphasize that the Hamiltonian (1) only incorporates the electronic degrees of freedom of a disordered system and further excitations such as lattice vibrations are not included. For comparison with the experimental results, we measure $`\sigma `$ in Eq. (2) in units of $`\mathrm{\Omega }^1\text{cm}^1`$. We fix the energy scale by setting $`t_{ij}=1`$ eV. Hence the band width of Fig. 1 is comparable to the band width of amorphous alloys haussler . Furthermore, the experimental investigations of the thermoelectric power $`S`$ in amorphous alloys lauinger have been done at high electron filling private and thus we will mostly concentrate on the MIT at $`E_c`$. ## 3 Linear Thermoelectric Effects ### 3.1 Definition of the Transport Properties Thermoelectric effects in a system are due mainly to the presence of a temperature gradient $`T`$ and an electric field $`𝐄`$ ashcroft . 
We recall that in the absence of $`\nabla T`$ with $`𝐄\ne 0`$, the electric current density $`𝐣`$ flowing at a point in a conductor is directly proportional to $`𝐄`$,

$$𝐣=\sigma 𝐄.$$ (3)

By applying a finite gradient $`\nabla T`$ in an open circuit, electrons, the thermal conductors, would flow towards the low-$`T`$ end as shown in Fig. 2. This causes a build-up of negative charges at the low-$`T`$ end and a depletion of negative charges at the high-$`T`$ end. Consequently, this sets up an electric field $`𝐄`$ which opposes the thermal flow of electrons. For small $`\nabla T`$, it is given as

$$𝐄=S\nabla T.$$ (4)

This equation defines the thermopower $`S`$. In the Sommerfeld free electron model of metals, $`S`$ is found to be directly proportional to $`T`$ ashcroft . Note that the negative sign of $`S`$ is brought about by the charge of the thermal conductors. For small $`\nabla T`$, the flow of heat in a system is proportional to $`\nabla T`$. Fourier’s Law gives this as

$$𝐣_q=K(-\nabla T)$$ (5)

where $`𝐣_q`$ is the heat current density and $`K`$ is the thermal conductivity ashcroft . At low $`T`$, the phonon contribution to $`\sigma `$ and $`K`$ becomes negligible compared to the electronic part ashcroft . As $`T\to 0`$, $`\sigma `$ approaches a constant and $`K`$ becomes linear in $`T`$. One can then verify the empirical law of Wiedemann and Franz which says that the ratio of $`K`$ and $`\sigma `$ is directly proportional to $`T`$ wiedemann ; chester . The proportionality coefficient is known as the Lorenz number $`L_0`$,

$$L_0=\frac{e^2}{k_B^2}\frac{K}{\sigma T}$$ (6)

where $`e`$ is the electron charge and $`k_B`$ is the Boltzmann constant. For metals, it takes the universal value $`\pi ^2/3`$ ashcroft ; chester . Strictly speaking, the law of Wiedemann and Franz is valid at very low $`T`$ ($`\lesssim 10`$ K) and at high (room) $`T`$. This is because in these regions the electrons are scattered elastically. At $`T\simeq 10`$–$`100`$ K deviations from the law are observed which imply that $`K/\sigma T`$ depends on $`T`$. In summary, Eqs. (3)-(6) express the phenomenological description of the transport properties.

### 3.2 The Equations of Linear Response

A more compact and general way of looking at these thermoelectric “forces” and effects is as follows: the responses of a system to $`𝐄`$ and $`\nabla T`$ up to linear order callen are

$$𝐣=|e|^{-1}\left(|e|L_{11}𝐄-L_{12}T^{-1}\nabla T\right)$$ (7)

and

$$𝐣_q=|e|^{-2}\left(|e|L_{21}𝐄-L_{22}T^{-1}\nabla T\right).$$ (8)

The kinetic coefficients $`L_{ij}`$ are the keys to calculating the transport properties theoretically. Using Ohm’s law (3) in Eq. (7), we obtain

$$\sigma =L_{11}.$$ (9)

Also from Eq. (7), $`S`$, measured under the condition of zero electric current, is expressed as

$$S=\frac{L_{12}}{|e|TL_{11}}.$$ (10)

With the same condition, Eq. (8) yields

$$K=\frac{L_{22}L_{11}-L_{21}L_{12}}{|e|^2TL_{11}}.$$ (11)

From Eq. (6) $`L_0`$ is given as

$$L_0=\frac{L_{22}L_{11}-L_{21}L_{12}}{(k_BTL_{11})^2}.$$ (12)

Therefore, we will be able to determine the transport properties once we know the coefficients $`L_{ij}`$. We note that in the absence of a magnetic field, as considered in this work, the Onsager relation $`L_{21}=L_{12}`$ holds callen . Eliminating the kinetic coefficients in Eqs. (7) and (8) in favor of the transport properties, we obtain

$$𝐣=\sigma 𝐄-\sigma S\nabla T$$ (13)

and

$$\frac{𝐣_q}{T}=S𝐣-\frac{K\nabla T}{T}.$$ (14)

Here, $`𝐣_q/T`$ is simply the entropy current density callen . Hence, the thermopower is just the entropy transported per Coulomb by the flow of thermal conductors.
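For later reference, Eqs. (9)-(12) are just a direct mapping from the kinetic coefficients to the measurable quantities. The short Python sketch below (ours, transcribing the equations as printed, with $`e`$ and $`k_B`$ in SI units) packages that mapping; the function name and argument order are our own choices.

```
# Electron charge and Boltzmann constant in SI units.
e = 1.602176634e-19
kB = 1.380649e-23

def transport_properties(L11, L12, L22, T, L21=None):
    """Return sigma, S, K and the Lorenz number L0 from the kinetic coefficients,
    transcribing Eqs. (9)-(12); L21 = L12 (Onsager relation) unless given explicitly."""
    if L21 is None:
        L21 = L12                                        # zero magnetic field
    sigma = L11                                          # Eq. (9)
    S = L12 / (e * T * L11)                              # Eq. (10)
    K = (L22 * L11 - L21 * L12) / (e**2 * T * L11)       # Eq. (11)
    L0 = (L22 * L11 - L21 * L12) / (kB * T * L11) ** 2   # Eq. (12)
    return sigma, S, K, L0
```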
According to the third law of thermodynamics, the entropy of a system and, thus, also $`𝐣_q/T`$ will go to zero as $`T0`$. We can check with Eqs. (13) and (14) that this is satisfied by our calculations in the 3D Anderson model. ### 3.3 Application to the Anderson Transition In general, the linear response coefficients $`L_{ij}`$ are obtained through the Chester-Thellung-Kubo-Greenwood (CTKG) formulation chester ; kubo . The kinetic coefficients are expressed as $$L_{11}=_{\mathrm{}}^{\mathrm{}}A(E)\left[\frac{f(E,\mu ,T)}{E}\right]𝑑E,$$ (15) $$L_{12}=_{\mathrm{}}^{\mathrm{}}A(E)\left[E\mu (T)\right]\left[\frac{f(E,\mu ,T)}{E}\right]𝑑E,$$ (16) and $$L_{22}=_{\mathrm{}}^{\mathrm{}}A(E)\left[E\mu (T)\right]^2\left[\frac{f(E,\mu ,T)}{E}\right]𝑑E,$$ (17) where $`A(E)`$ contains all the system-dependent features, $`\mu (T)`$ is the chemical potential and $$f(E,\mu ,T)=1/\left\{1+\mathrm{exp}([E\mu (T)]/k_BT)\right\}$$ (18) is the Fermi function. The CTKG approach inherently assumes that the electrons are noninteracting and that they are scattered elastically by static impurities or by lattice vibrations. A nice feature of this formulation is that all microscopic details of the system such as the dependence on the strength of the disorder enter only in $`A(E)`$. This function $`A(E)`$ can be calculated in the context of the relaxation-time approximation ashcroft . However, an exact evaluation of $`L_{ij}`$ is difficult, if not impossible, since it relies on the exact knowledge of the energy and $`T`$ dependence of the relaxation time. In most instances, these are not known. In order to incorporate the Anderson model and the MIT in the CTKG formulation, a different approach is taken: We have seen in Eq. (9) that the d.c. conductivity is just $`L_{11}`$. Thus, to take into account the MIT in this formulation, we identify $`A(E)`$ with $`\sigma (E)`$ given in Eq. (2). The $`L_{ij}`$ in Eqs. (15)-(17) can now be easily evaluated close to the MIT without any approximation, once the $`T`$ dependence of the chemical potential $`\mu `$ is known. Unfortunately, this is not known for the experimental systems under consideration nu ; lauinger ; stupp ; sherwood ; lakner , nor for the 3D Anderson model. Thus one has to resort to approximate estimations of $`\mu `$, as we do next, or to numerical calculations, as we shall do in the next sections. ## 4 Evaluation of the Transport Coefficients ### 4.1 Sommerfeld expansion in the metallic regime Circumventing the computation of $`\mu (T)`$, one can use that $`f/E`$ is appreciable only in an energy range of the order of $`k_BT`$ near $`\mu E_F`$. The lowest non-zero $`T`$ corrections for the $`L_{ij}`$ are then accessible by the Sommerfeld expansion ashcroft , provided that $`A(E)`$ is nonsingular and slowly varying in this region. Hence, in the limit $`T0`$, the transport properties are sommer $$\sigma =A(E_F)+\frac{\pi ^2}{6}(k_BT)^2\frac{d^2A(E)}{dE^2}|_{E=E_F},$$ (19) $$S=\frac{\pi ^2k_B^2T}{3|e|A(E_F)}\frac{dA(E)}{dE}|_{E=E_F},$$ (20) $$K=\frac{\pi ^2k_B^2T}{3e^2}\left\{A(E_F)\frac{\pi ^2(k_BT)^2}{3A(E_F)}\left[\frac{dA(E)}{dE}\right]_{E=E_F}^2\right\},$$ (21) and consequently $$L_0=\frac{\pi ^2}{3}\left\{1\frac{\pi ^2(k_BT)^2}{3[A(E_F)]^2}\left[\frac{dA(E)}{dE}\right]_{E=E_F}^2\right\}.$$ (22) In the derivations of $`S`$, $`K`$, and $`L_0`$, the term of order $`T^2`$ in Eq. (19) has been ignored as is customary. We remark that the terms of order $`T^2`$ in Eqs. (21) and (22) are usually dropped, too. 
In this case in the metallic regime, $`L_0`$ reduces to the universal value $`\pi ^2/3`$ ashcroft . The above approach was adopted in Refs. castellani and sivan to study thermoelectric transport properties in the metallic regime close to the MIT. From Eq. (20), the authors deduce $$S=\frac{\nu \pi ^2k_B^2T}{3|e|(E_FE_c)}.$$ (23) In the metallic regime, this linear $`T`$ dependence of $`S`$ agrees with that of the Sommerfeld model of metals ashcroft . However, setting $`A(E)=\sigma (E)`$ at the MIT castellani in Eq. (2) is in contradiction to the basic assumption of the Sommerfeld expansion, since it is not smoothly varying at $`E_F=E_c`$. Thus identifying $`A(E)=\sigma (E)`$ in Eqs. 19 \- 22 is only valid in the metallic regime with $`k_BT|E_cE_F|`$. ### 4.2 Exact calculation at $`\mu (T)=E_c`$ A different approach taken by Enderby and Barnes is to fix $`\mu =E_c`$ at finite $`T`$ and later take the limit $`T0`$ enderby . Thus, again without knowing the explicit $`T`$ dependence of $`\mu `$, the coefficients $`L_{ij}`$ can be evaluated at the MIT. For the transport properties they obtain, $$\sigma =\frac{\sigma _o\nu (k_BT)^\nu I_\nu }{\left|E_c\right|^\nu },$$ (24) $$S=\frac{k_B}{|e|}\frac{\nu +1}{\nu }\frac{I_{\nu +1}}{I_\nu },$$ (25) $$K=\frac{\sigma _o(k_BT)^{\nu +2}}{e^2T\left|E_c\right|^\nu }\left[(\nu +2)I_{\nu +2}\frac{(\nu +1)^2I_{\nu +1}^2}{\nu I_\nu }\right],$$ (26) and $$L_0=\left[\frac{(\nu +2)I_{\nu +2}}{\nu I_\nu }\frac{(\nu +1)^2I_{\nu +1}^2}{(\nu I_\nu )^2}\right].$$ (27) Here $`I_1=\mathrm{ln}2`$, $`I_\nu =(12^{1\nu })\mathrm{\Gamma }(\nu )\zeta (\nu )`$ for $`\mathrm{Re}(\nu )>0,\nu 1`$, with $`\mathrm{\Gamma }(\nu )`$ and $`\zeta (\nu )`$ the usual gamma and Riemann zeta functions. We see that at the MIT, $`S`$ does not diverge nor go to zero but remains a universal constant. Its value depends only on the conductivity exponent $`\nu `$. This is in contrast to the result (23) of the Sommerfeld expansion. In addition, we find that $`\sigma T^\nu `$ and $`KT^{\nu +1}`$ as $`T0`$. Hence, $`\sigma `$ and $`K/T`$ approach zero in the same way. This signifies that the Wiedemann and Franz law is also valid at the MIT recovering an earlier result in Ref. strinati obtained via diagrammatic methods. However, at the MIT, $`L_0`$ does not approach $`\pi ^2/3`$ but again depends on $`\nu `$. We emphasize that Eqs. (24)-(27) are exact at $`T`$ values such that $`\mu (T)E_c=0`$ enderby . Thus the $`T`$ dependence of $`\sigma `$, $`S`$, $`K`$, and $`L_0`$ for a given electron density can only be determined if one knows the corresponding $`\mu (T)`$. ### 4.3 High-temperature expansion In this section, we will study the lowest-order corrections to the results obtained before with $`\mu (T)=E_c`$. We do this by expanding the Fermi function (18) for $`|E_c\mu (T)|k_BT`$. In addition, we assume $`\mu (T)E_F`$ for the temperature range considered. This procedure gives $$\sigma =L_{11}=\frac{\sigma _o\nu (k_BT)^\nu }{\left|E_c\right|^\nu }\left[I_\nu (\nu 1)I_{\nu 1}\frac{E_cE_F}{k_BT}\right].$$ (28) For the thermopower, the leading-order correction can be obtained without expanding $`f(E,\mu ,T)`$ in $`L_{11}`$ and $`L_{12}`$. This yields a constant for $`S`$ at the MIT sivan . We obtain $$S=\frac{k_B}{|e|}\left[\frac{\nu +1}{\nu }\frac{I_{\nu +1}}{I_\nu }+\frac{E_cE_F}{k_BT}\right].$$ (29) For $`K`$ and $`L_0`$, we again have to use the expansion of $`f(E,\mu ,T)`$ as in (28) in order to get non-trivial terms. 
The resulting expressions are cumbersome and we thus refrain from showing them here. We remark that the basic ingredients used in the high-$`T`$ expansion are somewhat contradictory, namely, the expansion is valid for high $`T`$ such that $`|E_cE_F|k_BT`$, whereas $`\mu (T)=E_F`$ is true only for $`T=0`$. At present, we thus have various methods of circumventing the explicit computation of $`\mu (T)`$. However, their ranges of validity are not overlapping and it is a priori not clear whether the assumptions for $`\mu (T)`$ are justified for $`S`$ or any of the other transport properties close to the MIT. In order to clarify the situation, we numerically compute $`\mu (T)`$ in the next section and then use the CTKG formulation to compute the thermal properties without any approximation. ## 5 The Numerical Method In Eqs. (15)-(17), the explicit $`T`$ dependence of the coefficients $`L_{ij}`$ occurs in $`f(E,\mu ,T)`$ and $`\mu (T)`$. More precisely, knowing $`\mu (T)`$, it is straightforward to evaluate the $`L_{ij}`$. We recall that, for any set of noninteracting particles, the number density of particles $`n`$ can be determined as $$n(\mu ,T)=_{\mathrm{}}^{\mathrm{}}𝑑E\rho (E)f(E,\mu ,T),$$ (30) where $`\rho (E)`$ is again the density of energy levels (in the unit volume) as in Fig. 1. Vice versa, if we know $`n`$ and $`\rho (E)`$ we can solve Eq. (30) for $`\mu (T)`$. The density of states $`\rho (E)`$ for the 3D Anderson model has been obtained for different disorder strengths $`W`$ as outlined in Sec. 2. We determine $`\rho (E)`$ with an energy resolution of at least $`0.1`$ meV ($`1`$ K). Using $`\rho (E)`$, we first numerically calculate $`n`$ at $`T=0`$ for the metallic, critical and insulating regimes using the respective Fermi energies $`|E_F|<E_c`$, $`E_F=E_c`$, and $`|E_F|>E_c`$. With $`\mu =E_F`$, we have $$n(E_F)=_{\mathrm{}}^{E_F}𝑑E\rho (E).$$ (31) Next, keeping $`n`$ fixed at $`n(E_F)`$, we numerically determine $`\mu (T)`$ for small $`T>0`$ such that $`|n(E_F)n(\mu ,T)|`$ is zero. Then we increase $`T`$ and record the respective changes in $`\mu (T)`$. Using this result in Eqs. (15)–(17) in the CTKG formulation, we compute $`L_{ij}`$ by numerical integration and subsequently determine the $`T`$ dependent transport properties (9)–(12). We consider the disorders $`W=8`$, $`12`$, and $`14`$ where we do not have large fluctuations in the density of states. These values are not too close to the critical disorder $`W_c`$, so that we could clearly observe the MIT of Eq. (2). The respective values of $`E_c`$ have been calculated previously schreiber to be close to $`7.0`$, $`7.5`$, and $`8.0`$. Within our approach, we choose $`E_c`$ to be equal to these values. ## 6 Results and Discussions Here we show the results obtained for $`W=12`$ with $`E_c=7.5`$. The results for $`\sigma `$, $`K`$, and $`L_0`$ are the same at $`E_c`$ and $`E_c`$ since they are functions of $`L_{11}`$, $`L_{22}`$ and $`L_{12}^2`$, only. On the other hand, this is not true for $`S`$. ### 6.1 The Chemical Potential In Fig. 3, we show how $`\mu (T)`$ behaves for the 3D Anderson model at $`E_FE_c=0`$, and $`\pm 0.01`$. To compare results from different energy regions we plot the difference of $`\mu (T)`$ from $`E_F`$. We find that $`\mu (T)`$ behaves similarly in the metallic and insulating regions and at the MIT for both mobility edges at low $`T`$. In all cases we observe $`\mu (T)T^2`$. Furthermore, we see that $`\mu (T)`$ at $`E_c`$ equals $`\mu (T)`$ at $`E_c`$. 
This symmetric behavior with respect to $`E_F=\mu `$ reflects the symmetry of the density of states at $`E=0`$ as shown in Fig. 1. For comparison and as a check to our numerics, we also compute with our method $`\mu (T)`$ of a free electron gas. The density of states is ashcroft $$\rho (E)=\frac{3}{2}\frac{n}{E_F}\left(\frac{E}{E_F}\right)^{1/2}$$ (32) and we again use $`E_F=E_c=7.5`$. We remark that this value of the mobility edge is in a region where $`\rho (E)`$ increases with $`E`$ in an analogous way as $`\rho (E)`$ for the Anderson model at $`E_c`$ . Thus, as shown in Fig. 3, $`\mu (T)`$ of a free electron gas is concave upwards as in the case of the Anderson model at $`E_c`$. We also plot the result for $`\mu (T)`$ obtained by the usual Sommerfeld expansion for Eq. (30), $$E_F\mu (T)=\frac{E_F}{3}\left(\frac{\pi k_BT}{2E_F}\right)^2.$$ (33) We see that our numerical approach is in perfect agreement with the free electron result. ### 6.2 The d.c. Conductivity In Fig. 4 we show the $`T`$ dependence of $`\sigma `$. The values of $`E_F`$ we consider and the corresponding fillings $`n`$ are given in Tab. 1. The conductivity at $`T=0`$ remains finite in the metallic regime with $`\sigma /\sigma _o=|1E_F/E_c|^\nu `$, because $`(f/E)\delta (EE_F)`$ in Eq. (15) as $`T0`$. Correspondingly, we find $`\sigma =0`$ in the insulating regime at $`T=0`$. In the critical regime, $`\sigma (T0)T^\nu `$, as derived in Ref. enderby , see Eq. (24). We note that as one moves away from the critical regime towards the metallic regime one finds within the accuracy of our data that $`\sigma T^2`$. We observe that in the metallic regime $`\sigma `$ increases for increasing $`T`$. This is different from the behavior in a real metal where $`\sigma `$ decreases with increasing $`T`$. However, as explained in Sec. 2, the behavior of $`\sigma `$ in Fig. 4 is due to the absence of phonons in the present model. We also show in Fig. 4 results of the Sommerfeld expansion (19) and the high-$`T`$ expansion (28) for $`\sigma `$. Paradigmatic for what is to follow we see that the radius of convergence of the Sommerfeld expansion decreases for $`E_FE_c`$ and in fact is zero in the critical regime. On the other hand, the high-$`T`$ expansion is very good in the critical regime down to $`T=0`$ at $`E_c=E_F`$. The small systematic differences between our numerical results and the high-$`T`$ expansion for large $`T`$ are due to the differences in $`\mu (T)`$ and $`E_F`$. The expansion becomes worse both in the metallic and insulating regimes for larger $`T`$. All of this is in complete agreement with the discussion of the expansions in Sec. 4. ### 6.3 The Thermopower In Fig. 5, we show the behavior of the thermopower at low $`T`$ near the MIT. In the metallic regime, we find $`S0`$ as $`T0`$. At very low $`T`$, $`ST`$ as predicted by the Sommerfeld expansion (23). We see that the Sommerfeld expansion is valid for not too large values of $`T`$. But upon approaching the critical regime, the expansion becomes unreliable similar to the case of the d.c. conductivity of Sec. 6.2. This behavior persists even if we include higher order terms in the derivation of $`S`$ such as the $`\mathrm{O}(T^2)`$ term of Eq. (19) as shown in Fig. 5. Before discussing the critical regime in detail, let us turn our attention to the insulating regime. Here, $`S`$ becomes very large as $`T0`$. We have observed that it even appears to approach infinity. 
A seemingly divergent behavior in the insulating regime has also been observed for Si:P liu , where it has been attributed to the thermal activation of charge carriers from $`E_F`$ to the mobility edge $`E_c`$. However, there is a simpler way of looking at this phenomenon. We refer again to the open circuit in Fig. 2. Suppose we adjust $`T`$ at the cooler end such that $`\nabla T`$ remains constant. As $`T\to 0`$ both $`\sigma `$ and $`K`$ vanish in the case of insulators — for $`K`$ we show this in the next section. This implies that as $`T`$ decreases it becomes increasingly difficult to move a charge from $`T`$ to $`T+\delta T`$. We would need to exert a larger amount of force, and hence, a larger $`𝐄`$ to do the job. From Eq. (4), this implies a larger $`S`$ value. In the critical regime, i.e., setting $`E_F=E_c`$, we observe in Fig. 5 that for $`T\to 0`$ the thermopower $`S`$ approaches a value of $`228.4\mu `$V/K. This is exactly the magnitude predicted enderby by Eq. (25) for $`\nu =1.3`$. In the inset of Fig. 5, we show that the $`T`$ dependence of $`S`$ is linear. The nondivergent behavior of $`S`$ clearly separates the metallic from the insulating regime. Furthermore, just as for $`\sigma `$, the Sommerfeld expansion for $`S`$ breaks down at $`E_F=E_c`$, i.e., the radius of convergence is zero. Thus, the divergence of Eq. (23) at $`E_F=E_c`$ reflects this breakdown and is not physically relevant. On the other hand, the high-$`T`$ expansion sivan nicely reflects the behavior of $`S`$ close to the critical regime as also shown in Fig. 5. For $`E_F=E_c`$, the high-$`T`$ expansion (29) assumes a constant value of $`S`$ for all $`T`$ due to setting $`\mu (T)=E_F`$. This is approximately valid, the differences are fairly small as shown in the inset of Fig. 5. We stress that there is no contradiction that $`S>0`$ in our calculations whereas $`S<0`$ in Ref. enderby . In Fig. 6, we compare $`S`$ in energy regions close to $`E_c`$ and to $`-E_c`$ villa . Clearly, they have the same magnitude but $`S<0`$ at $`-E_c`$ and $`S>0`$ at $`E_c`$. The two cases mainly differ in their number density $`n`$. At $`-E_c`$ the system is at low filling with $`n=2.26\%`$ while at $`E_c`$ the system is at high filling with $`n=97.74\%`$. The sign of $`S`$ implies that at low filling the thermoelectric conduction is due to electrons and we obtain the usual picture as in Fig. 2 where the induced field $`𝐄`$ is in the direction opposite to that of $`\nabla T`$. At high filling, $`S>0`$ means that $`𝐄`$ is directed parallel to $`\nabla T`$. This can be interpreted as a change in charge transport from electrons to holes. We remark that this sign reversal also occurs in the insulating as well as in the critical regime. In Fig. 7, we take the data of Fig. 5 and plot them as a function of $`\mu -E_c`$. Our data coincides with the isothermal lines which were calculated according to Ref. enderby by numerically integrating $`L_{12}`$ and $`L_{11}`$ for a particular $`T`$ to get $`S`$. We observe that all isotherms of the insulating ($`\mu >E_c`$) and the metallic ($`\mu <E_c`$) regimes cross at $`\mu =E_c`$ and $`S=228.4\mu `$V/K. Comparing with Eq. (23), we again find that the Sommerfeld expansion does not give the correct behavior of $`S`$ in the critical regime. The data presented in Fig. 7 suggest that one can scale them onto a single scaling curve. In Fig. 8, we show that this is indeed true, when plotting $`S`$ as a function of $`(\mu -E_c)/k_BT`$.
We emphasize that the scaling is very good and the small width of the scaling curve is only due to the size of the symbols. The result for the high-$`T`$ expansion is indicated in Fig. 8 by a solid line. It is good close to the MIT. In the metallic regime, the Sommerfeld expansion correctly captures the decrease of $`S`$ for large negative values of $`(\mu -E_c)/k_BT`$. We remark that a scaling with $`(E_F-E_c)/k_BT`$ as predicted in Ref. sivan is approximately valid. The differences are very small as shown in the inset of Fig. 8. ### 6.4 The Thermal Conductivity and the Lorenz Number In Fig. 9, we show the $`T`$ dependence of the thermal conductivity $`K`$. We see that $`K\rightarrow 0`$ as $`T\rightarrow 0`$ whether it be in the metallic or insulating regime. We note again that this simple behavior is due to the fact that our model does not incorporate phonon contributions. The $`T`$ dependence of $`K`$ varies depending on whether one is in the metallic or the insulating regime and on how far one is from the MIT. Directly at the MIT, we find that $`K\rightarrow 0`$ as $`T^{\nu +1}`$ confirming the $`T`$ dependence of $`K`$ as given in Eq. (26). Near the localization MIT, the $`T`$ dependence of $`K/T`$ is thus the same as for $`\sigma `$ in agreement with Ref. strinati . Again, we see that the Sommerfeld expansion (21) is reasonable only at low $`T`$ in the metallic regime. As for $`\sigma `$ and $`S`$, we see that the high-$`T`$ expansion is again fairly good in the vicinity of the critical regime. At this point we are able to determine the behavior of the entropy in the system as $`T\rightarrow 0`$. In the metallic regime, $`S`$ and $`K`$ vanish as $`T\rightarrow 0`$, while in the critical and insulating regime, $`\sigma `$ and $`K`$ vanish as $`T\rightarrow 0`$. Applying these results to Eqs. (13) and (14) yields that for all regimes the entropy current density $`𝐣_𝐪/T`$ vanishes as $`T\rightarrow 0`$. Therefore, we find that the third law of thermodynamics is satisfied for our numerical results of the 3D Anderson model. Next, we present the Lorenz number (6) as a function of $`T`$ in Fig. 10. In the metallic regime, we obtain the universal value $`\pi ^2/3`$ as $`T\rightarrow 0`$. Note that for a metal this value should hold up to room $`T`$ ashcroft . However, our results for the Anderson model show a nontrivial $`T`$ dependence. One might have hoped that the higher-order terms in Eq. (22) could adequately reflect the $`T`$ dependence of our $`L_0`$ data. However, this is not the case as shown in Fig. 10. This indicates that even if we incorporate higher order $`T`$ corrections the Sommerfeld expansion will not give the right behavior of $`L_0`$ near the MIT. We emphasize that the radius of convergence of Eq. (22) is even smaller than for $`\sigma `$, $`S`$ and $`K`$. Similarly, the high-$`T`$ expansion is also much worse than previously for $`\sigma `$, $`S`$ and $`K`$. Thus in addition to the results for the critical regime, we only show in Fig. 10 the results for nearby data sets in the insulating and metallic regimes. The $`T`$ dependence of $`L_0`$ is linear as shown in the inset of Fig. 10. As before for $`S`$, the high-$`T`$ expansion does not reproduce this. At the MIT, $`L_0=2.4142`$. This is again the predicted enderby $`\nu `$-dependent value as given in Eq. (27). In the insulating regime, one can show analytically by taking the appropriate limits that $`L_0`$ approaches $`\nu +1`$ as $`T\rightarrow 0`$. In agreement with this, we find that $`L_0=2.3`$ at $`T=0`$ in Fig. 10. 
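The same integrals give the Lorenz number. The sketch below (same assumed $`\sigma (E)`$ as in the previous snippets, $`k_B=e=1`$) computes $`L_0=(L_{22}/L_{11}-(L_{12}/L_{11})^2)/T^2`$ and can be used to check the limits quoted above: $`\pi ^2/3`$ deep in the metallic regime and $`\nu +1`$ deep in the insulating regime. The particular values of $`E_F`$ and $`T`$ are arbitrary illustrations.

```python
import numpy as np

# Lorenz number L_0 = K/(sigma T) in units of (k_B/e)^2, from the assumed
# sigma(E) = sigma_0 (1 - E/E_c)^nu for E < E_c (zero otherwise).
E_c, nu = 7.5, 1.3

def lorenz(mu, T, halfwidth=60.0, npts=400001):
    lo = min(mu, E_c) - halfwidth * T
    E = np.linspace(lo, E_c, npts)                 # only extended states contribute
    w = (1.0 - E / E_c) ** nu / (4.0 * T * np.cosh((E - mu) / (2 * T)) ** 2)
    dE = E[1] - E[0]
    L11 = np.sum(w) * dE
    L12 = np.sum(w * (E - mu)) * dE
    L22 = np.sum(w * (E - mu) ** 2) * dE
    return (L22 / L11 - (L12 / L11) ** 2) / T ** 2

print("metallic   E_F=7.0, T=0.02:", lorenz(7.0, 0.02))   # ~ pi^2/3
print("critical   E_F=7.5, T=0.02:", lorenz(7.5, 0.02))
print("insulating E_F=8.0, T=0.02:", lorenz(8.0, 0.02))   # ~ nu + 1
```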
At first glance, it may appear surprising that a transport property in the insulating regime could be determined by a universal constant of the critical regime such as $`\nu `$. However, in the evaluation of the coefficients $`L_{ij}`$, the derivative of the Fermi function for any finite $`T`$ decays exponentially and thus one will always have a non-zero overlap with the critical regime. In the evaluation of Eq. (12), this $`\nu `$ dependence survives in the limit $`T\rightarrow 0`$. In real materials, we expect the relevant high-energy transfer processes to be dominated by other scattering events and thus $`L_0`$ should be different. Nevertheless, for the present model, this $`\nu `$ dependence holds. ### 6.5 Possible Scenarios in the Critical Regime The results presented in Sec. 6.3 for the thermopower at the MIT show that $`S=228.4\mu `$V/K for $`\nu =1.3`$. This value is $`2`$ orders of magnitude larger than those measured near the MIT lauinger ; sherwood ; lakner . However, as mentioned in the introduction, the conductivity exponents found in many experiments are either close to $`\nu =0.5`$ or to $`1`$ nu and one might hope that this difference may explain the small experimental value of $`S`$. Also, recent numerical studies of the MIT by transfer-matrix methods together with non-linear finite-size scaling find $`\nu =1.57\pm 0.03`$ slevin . In Tab. 2 we summarize the values of $`S`$ and $`L_0`$ at the MIT for these conductivity exponents. We see that all $`S`$ values still differ by $`2`$ orders of magnitude from the experimental results. Furthermore, we note that our results for $`S`$ and $`L_0`$ are independent of the unit of energy. Even if, instead of $`1`$ eV, we had used $`t_{ij}=1`$ meV, which is appropriate in the doped semiconductors nu ; stupp ; lakner ; liu , we would still obtain the values as in Tab. 2. Thus our numerical results for the thermopower of the Anderson model at the MIT show a large discrepancy from experimental results. This may be due to our assumption of the validity of Eq. (2) for a large range of energies, or due to the absence of a true Anderson-type MIT in real materials, or due to problems in the experiments. A different scenario for a disorder driven MIT has been proposed by Mott, who argued that the MIT from the metallic state to the insulating state is discontinuous mott1 . Results supporting such a behavior have been found experimentally mott2 ; moebius . According to this scenario, $`\sigma `$ drops from a finite value $`\sigma _{min}`$ to zero mott1 for $`T=0`$ at the MIT. This minimum metallic conductivity $`\sigma _{min}`$ was estimated by Mott to be $$\sigma _{min}\simeq \frac{1}{a}\frac{e^2}{\mathrm{\hbar }}$$ (34) where $`a`$ is some microscopic length of the system such as the inverse of the Fermi wave number, $`a\simeq k_F^{-1}`$. As summarized in Ref. mott2 , experiments in non-crystalline materials seem to indicate that $`\sigma _{min}>300\mathrm{\Omega }^{-1}`$cm<sup>-1</sup>. Let us assume the behavior of $`\sigma (E)`$ close to the MIT to be $$\sigma (E)=\{\begin{array}{cc}\sigma _{min},& |E|\le E_c,\\ 0,& |E|>E_c,\end{array}$$ (35) with $`\sigma _{min}=300\mathrm{\Omega }^{-1}`$cm<sup>-1</sup>. Using the numerical approach of Sec. 5, we obtain $`S=119.5\mu `$V/K at the MIT. This value is still rather large and thus the assumption of a minimum metallic conductivity as in Eq. (35) cannot explain the discrepancy from the experimental results. We remark that the order of magnitude of $`S`$ is not changed appreciably, even if we add to the metallic side of Eq. 
(35) a term as given in Eq. (2) with $`\sigma _0`$ a few hundred $`\mathrm{\Omega }^{-1}`$cm<sup>-1</sup> and $`\nu =1`$. Lastly, we note that the transport properties calculated for $`W=8`$ and $`14`$ do not differ from those obtained for $`W=12`$ in both the metallic and insulating regions provided we are at temperatures $`T\lesssim 100`$ K. For $`S`$ and $`L_0`$ at the MIT we obtain the same values as for $`W=12`$. Again we observe that both $`S`$ and $`L_0`$ approach these values linearly with $`T`$, but with different slopes. Our results show that the higher the disorder strength the smaller the magnitude of the slope. ## 7 Conclusions In this paper, we investigated the thermoelectric effects in the 3D Anderson model near the MIT. The $`T`$ dependence of the transport properties is determined by $`\mu (T)`$. We were able to compute $`\mu (T)`$ by numerically inverting the formula for the number density $`n(\mu ,T)`$ of noninteracting particles. Using the result for $`\mu (T)`$, we calculated the thermoelectric transport properties within the Chester-Thellung-Kubo-Greenwood formulation of linear response. As $`T\rightarrow 0`$ in the metallic regime we verified that $`\sigma `$ remains finite, $`S\rightarrow 0`$, $`K\rightarrow 0`$ and $`L_0\rightarrow \pi ^2/3`$. On the other hand, in the insulating regime, $`S\rightarrow \mathrm{\infty }`$. This we attribute to both $`\sigma `$ and $`K`$ going to zero. Thus, it becomes increasingly difficult to achieve equilibrium and, hence, the system requires $`𝐄\rightarrow \mathrm{\infty }`$. For $`L_0`$, we obtained a universal value of $`\nu +1`$ even in the insulating regime. Directly at the MIT, the thermoelectric transport properties agree with those obtained in Ref. enderby . Namely, as $`T\rightarrow 0`$, we found $`\sigma \propto T^\nu `$, $`K\propto T^{\nu +1}`$, while $`L_0\rightarrow \text{const}`$. The thermopower $`S`$ also remains nearly constant in the critical regime and, in particular, it does not diverge at the MIT in contrast to earlier calculations using the Sommerfeld expansion at low $`T`$ castellani . Here we showed that the difference is not so much due to an order of limits problem, but rather reflects the breakdown of convergence of the Sommerfeld expansion at the MIT sivan . Our result is supported by scaling data for $`S`$ at different values of $`T`$ and $`E_F`$ onto a single curve which is continuous across the transition. Some of the experiments lauinger ; sherwood for $`S`$ have been influenced by the Sommerfeld expansion such that the authors plot their results as $`S/T`$. We remark that in such a plot the signature of the MIT is hard to identify, since $`S/T`$ at the MIT diverges as $`T\rightarrow 0`$ solely due to the decrease in $`T`$. Our results suggest that plots as in Figs. 5 and 7 should show the MIT more clearly. The value of $`S`$ is at least two orders of magnitude larger than observed in experiments lauinger ; sherwood ; lakner . This large discrepancy may be due to the ingredients of our study, namely, we assumed that a simple power-law behavior of the conductivity $`\sigma (E)`$ as in Eq. (2) was valid even for $`E\gg E_c`$ and $`E\ll E_c`$. Furthermore, we assumed that it is enough to consider an averaged density of states $`\rho (E)`$. While the first assumption is of course crucial, the second assumption is of less importance as we have checked: Local fluctuations in $`\rho (E)`$ will lead to fluctuations in the thermoelectric properties for finite $`T`$, but do not lead to a different $`T\rightarrow 0`$ behavior: $`S`$ remains finite with values as given in Tab. 2. 
Moreover, averaging over many samples yields a suppression of these fluctuations and a recovery of the previous behavior for finite $`T`$. In this context, we remark that (naively assuming all other parts of the derivation are unchanged) implications of many-particle interactions such as a reduced single-particle density of states at $`E_F`$ coulombgap , will only modify the $`T`$ dependence of $`\mu `$. Consequently, the $`T`$ dependencies of $`S`$, $`\sigma `$, $`K`$, and $`L_0`$ may be different, but their values at the MIT remain the same. Our results also suggest that the critical regime is very small. Namely, as the filling increases slightly from $`n=97.74\%`$ to $`97.80\%`$, the behavior of the system changes from metallic to critical and finally to insulating. To the best of our knowledge, such small changes in the electron concentration have not been used in the measurements of $`S`$ as in Refs. lauinger ; sherwood ; lakner . We emphasize that such a fine tuning of $`n`$ is not essential for measurements of $`\sigma `$ as is apparent from Fig. 4. Of course, one may also speculate enderby that these results suggest that a true Anderson-type MIT has not yet been observed in the experiments. ###### Acknowledgements. We thank Frank Milde for programming help and Thomas Vojta for useful and stimulating discussions. We gratefully acknowledge stimulating communications from John E. Enderby and Yoseph Imry. This work has been supported by the DFG as part of SFB393.
no-problem/9904/chao-dyn9904009.html
# Thermodynamic limit from small lattices of coupled maps ## Abstract We compare the behaviour of a small truncated coupled map lattice with random inputs at the boundaries with that of a large deterministic lattice essentially at the thermodynamic limit. We find exponential convergence for the probability density, predictability, power spectrum, and two-point correlation with increasing truncated lattice size. This suggests that spatio-temporal embedding techniques using local observations cannot detect the presence of spatial extent in such systems and hence they may equally well be modelled by a local low dimensional stochastically driven system. Observation plays a fundamental role throughout all of physics. Until this century, it was generally believed that if one could make sufficiently accurate measurements of a classical system, then one could predict its future evolution for all time. However, the discovery of chaotic behaviour over the last 100 years has led to the realisation that this was impractical and that there are fundamental limits to what one can deduce from finite amounts of observed data. One aspect of this is that high dimensional deterministic systems may in many circumstances be indistinguishable from stochastic ones. In other words, if we have a physical process whose evolution is governed by a large number of variables, whose precise interactions are a priori unknown, then we may be unable to decide on the basis of observed data whether the system is fundamentally deterministic or not. This has led to an informal classification of dynamical systems into two categories: low dimensional deterministic systems and all the rest. In the case of the former, techniques developed over the last two decades allow the characterisation of the underlying dynamics from observed time series via quantities such as fractal dimensions, entropies and Lyapunov spectra. Furthermore, it is possible to predict and manipulate such time series in highly effective ways with no prior knowledge of the physical system generating the data. In the case of high dimensional and/or stochastic systems, on the other hand, relatively little is known about what information can be extracted from observed data, and this topic is currently the subject of intense research. Many high dimensional systems have a spatial extent and can best be viewed as a collection of subsystems at different spatial locations coupled together. The main aim of this letter is to demonstrate that using data observed from a limited spatial region we may be unable to distinguish such an extended spatio-temporal system from a local low dimensional system driven by noise. Since the latter is much simpler, it may in many cases provide a preferable model of the observed data. On the one hand, this suggests that efforts to reconstruct by time delay embedding the spatio-temporal dynamics of extended systems may be misplaced, and we should instead focus on developing methods to locally embed observed data. A preliminary framework for this is described in . On the other hand, these results may help to explain why time delay reconstruction methods sometimes work surprisingly well on data generated by high dimensional spatio-temporal systems, where a priori they ought to fail: in effect such methods only see a “noisy” local system, and providing a reasonably low “noise level” can still perform adequately. 
Overall we see that we add a third category to the above informal classification: namely that of low dimensional systems driven by noise and we need to adapt our reconstruction approach to take account of this. We present our results in the context of coupled map lattices (CML’s) which are a popular and convenient paradigm for studying spatio-temporal behaviour . In particular, consider a one-dimensional array of diffusively coupled logistic maps: $$x_i^{t+1}=(1-\epsilon )f(x_i^t)+\frac{\epsilon }{2}(f(x_{i-1}^t)+f(x_{i+1}^t)),$$ (1) where $`x_i^t`$ denotes the discrete time dynamics at discrete locations $`i=1,\dots ,L`$, $`\epsilon \in [0,1]`$ is the coupling strength and the local map $`f`$ is the fully chaotic logistic map $`f(x)=4x(1-x)`$. Recent research has focused on the thermodynamic limit, $`L\rightarrow \mathrm{\infty }`$, of such dynamical systems . Many interesting phenomena arise in this limit, including the rescaling of the Lyapunov spectrum and the linear increase in Lyapunov dimension . The physical interpretation of such phenomena is that a long array of coupled systems may be thought of as a concatenation of small-size sub-systems that evolve almost independently from each other. As a consequence, the limiting behaviour of an infinite lattice is extremely well approximated by finite lattices of quite modest size. In our numerical work, we thus approximate the thermodynamic limit by a lattice of size $`L=100`$ with periodic boundary conditions. Numerical evidence suggests that the attractor in such a system is high-dimensional (Lyapunov dimension approximately 70). If working with observed data it is clearly not feasible to use an embedding dimension of that order of magnitude. On the other hand, it is possible to make quite reasonable predictions of the evolution of a site using embedding dimensions as small as 4. This suggests that a significant part of the dynamics is concentrated in only a few degrees of freedom and that a low dimensional model may prove to be a good approximation of the dynamics at a single site. In order to investigate this we introduce the following truncated lattice. Let us take $`N`$ sites ($`i=1,\dots ,N`$) coupled as in equation (1) and consider the dynamics at the boundaries $`x_0^t`$ and $`x_{N+1}^t`$ to be produced by two independent driving inputs. The driving input is chosen to be white noise uniformly distributed in the interval $`[0,1]`$. We are interested in comparing the dynamics of the truncated lattice to the thermodynamic limit case. We begin the comparison between the two lattices by examining their respective invariant probability density at the central site (if the number of sites is even, either of the two central sites is equivalent). For a semi-analytic treatment of the probability density of large arrays of coupled logistic maps see Lemaître et al. Let us denote by $`\rho _{\mathrm{\infty }}(x)`$ the single site probability density in the thermodynamic limit and $`\rho _N(x)`$ the central site probability density of the truncated lattice of size $`N`$. We compare the two densities in the $`L_1`$ norm by computing $$\mathrm{\Delta }\rho (N)=\int _0^1|\rho _{\mathrm{\infty }}(x)-\rho _N(x)|\,dx$$ (2) for increasing $`N`$. The results are summarised in figure 1.a where $`\mathrm{log}(\mathrm{\Delta }\rho (N))`$ is plotted for increasing $`N`$ for different values of the coupling. The figure suggests that the difference between the densities decays exponentially as $`N`$ is increased (see straight lines for guidance). 
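A minimal simulation of both lattices is easy to set up. The sketch below iterates Eq. (1) with periodic boundaries ($`L=100`$) as a stand-in for the thermodynamic limit, drives a truncated lattice of $`N`$ sites with uniform white noise at the two boundary sites $`x_0^t`$ and $`x_{N+1}^t`$, and estimates $`\mathrm{\Delta }\rho (N)`$ of Eq. (2) with a 100-bin histogram. The transient length, random seed and the (much smaller) number of iterations are arbitrary choices, so the numbers will be noisier than those in figure 1.a.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: 4.0 * x * (1.0 - x)          # fully chaotic logistic map

def run_periodic(L, eps, n_iter, n_trans=1000):
    """Coupled logistic lattice, Eq. (1), periodic boundaries; central-site orbit."""
    x = rng.random(L)
    orbit = np.empty(n_iter)
    for t in range(n_trans + n_iter):
        fx = f(x)
        x = (1 - eps) * fx + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1))
        if t >= n_trans:
            orbit[t - n_trans] = x[L // 2]
    return orbit

def run_truncated(N, eps, n_iter, n_trans=1000):
    """Truncated lattice of N sites driven by uniform white noise at both ends."""
    x = rng.random(N)
    orbit = np.empty(n_iter)
    for t in range(n_trans + n_iter):
        fx = f(x)
        left, right = f(rng.random()), f(rng.random())   # random boundary inputs
        fpad = np.concatenate(([left], fx, [right]))
        x = (1 - eps) * fx + 0.5 * eps * (fpad[:-2] + fpad[2:])
        if t >= n_trans:
            orbit[t - n_trans] = x[N // 2]
    return orbit

def delta_rho(orbit_a, orbit_b, nbins=100):
    """L1 distance between single-site densities, Eq. (2), via histograms."""
    rho_a, edges = np.histogram(orbit_a, bins=nbins, range=(0, 1), density=True)
    rho_b, _ = np.histogram(orbit_b, bins=nbins, range=(0, 1), density=True)
    return np.sum(np.abs(rho_a - rho_b)) * (edges[1] - edges[0])

eps, n_iter = 0.8, 200_000                 # far fewer iterations than the paper's 1e8
ref = run_periodic(100, eps, n_iter)       # proxy for the thermodynamic limit
for N in (1, 3, 5, 9, 15):
    print(N, delta_rho(ref, run_truncated(N, eps, n_iter)))
```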
Similar results were obtained for intermediate values of the coupling parameter. The densities used to obtain the plots in figure 1.a were estimated by a box counting algorithm by using 100 boxes and $`10^8`$ points ($`10^2`$ different orbits with $`10^6`$ iterations each). The maximum resolution typically achieved by using these values turns out to be around $`\mathrm{\Delta }\rho (N)\approx \mathrm{exp}(-6.5)\approx 0.0015`$. This explains the saturation of the distance corresponding to $`\epsilon =0.2`$. For $`\epsilon =0.8`$ the saturation would occur for approximately $`N=30,35`$ given enough computing power (more refined boxes and more iterations). Nonetheless, densities separated by a distance of approximately $`\mathrm{exp}(-3)\approx 0.05`$ (see horizontal threshold in figure 1.a), or less, capture almost all the structure. Therefore, one recovers the essence of the thermodynamic limit probability density with a reasonably small truncated lattice (see figures 2.a,b). Next we compare temporal correlations in the truncated lattice with those in the full system. Denote by $`S_{\mathrm{\infty }}(\omega )`$ the power spectrum of the thermodynamic limit and $`S_N(\omega )`$ its counterpart for the truncated lattice. Figure 1.b shows the difference $`\mathrm{\Delta }S(T)`$ in the $`L_1`$ norm between the power spectra of the truncated lattice and of the thermodynamic limit for $`\epsilon =0.2`$ and 0.8 (similar results were obtained for intermediate values of $`\epsilon `$). As for the probability density, the power spectra appear to converge exponentially with the truncated lattice size. Note that for large $`N`$, particularly for small $`\epsilon `$, the difference tends to saturate around $`\mathrm{exp}(-12)\approx 10^{-6}`$; this is because the accuracy of our power spectra computations reaches its limit (with more iterations one can reduce the effects of the saturation). Our results were obtained by averaging $`10^6`$ spectra ($`|\mathrm{DFT}|^2`$) of 1024 points each. In figures 2.c,d we depict the comparison between the spectra corresponding to the thermodynamic limit and to the truncated lattice. As can be observed from the figure, the spectra for the truncated lattice give a good approximation to the thermodynamic limit. It is worth mentioning that the spectra depicted in figures 2.c,d are plotted in logarithmic scale so as to artificially enhance the discrepancy of the distance between the thermodynamic limit and the truncated lattice. The distance corresponding to these plots lies well below $`\mathrm{\Delta }S(T)<\mathrm{exp}(-7.5)\approx 5\times 10^{-4}`$. The convergence of the power spectrum is much faster than the one for the probability density (compare both scales in figures 1). To complete the comparison picture we compute the two-point correlation $$C(\xi ,\tau )=\frac{\langle uv\rangle -\langle u\rangle \langle v\rangle }{\langle u^2\rangle -\langle u\rangle ^2},$$ (3) where $`u=x_i^t`$ and $`v=x_{i+\xi }^{t+\tau }`$. Thus, $`C(\xi ,\tau )`$ corresponds to the correlation of two points in the lattice dynamics separated by $`\xi `$ sites and $`\tau `$ time steps. To obtain the two-point correlation for the truncated lattice we consider the two points closest to the central site separated by $`\xi `$. We then compute $`\mathrm{\Delta }C_{\xi ,\tau }(N)`$ defined as the absolute value of the difference of the correlation in the thermodynamic limit with that obtained using the truncated lattice of size $`N`$. In figure 3 we plot $`\mathrm{\Delta }C_{1,0}(N)`$ as a function of $`N`$ for $`\epsilon =0.2`$ and 0.8. 
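The two-point correlation of Eq. (3) can be estimated from a simulated space-time array in a few lines; the sketch below does this for the periodic lattice (the truncated-lattice version is identical once the corresponding orbit is generated). Lattice size, coupling, iteration count and the chosen $`(\xi ,\tau )`$ pairs are again arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: 4.0 * x * (1.0 - x)

def lattice_orbit(L, eps, n_iter, n_trans=1000):
    """Space-time array (n_iter x L) of the periodic coupled logistic lattice, Eq. (1)."""
    x = rng.random(L)
    out = np.empty((n_iter, L))
    for t in range(n_trans + n_iter):
        fx = f(x)
        x = (1 - eps) * fx + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1))
        if t >= n_trans:
            out[t - n_trans] = x
    return out

def two_point(orbit, i, xi, tau):
    """C(xi, tau) of Eq. (3) between sites i and i+xi, time lag tau."""
    u = orbit[:orbit.shape[0] - tau, i]
    v = orbit[tau:, i + xi]
    return (np.mean(u * v) - u.mean() * v.mean()) / (np.mean(u * u) - u.mean() ** 2)

orb = lattice_orbit(100, 0.8, 100_000)
for xi, tau in ((1, 0), (2, 0), (1, 1)):
    print(f"C({xi},{tau}) = {two_point(orb, 50, xi, tau):+.4f}")
```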
For $`\epsilon =0.2`$, due to limited accuracy of our calculations, the saturation is reached around $`N=10`$. Nonetheless it is possible to observe an exponential decrease (straight lines in the linear-log plot) before the saturation. For larger values of $`\epsilon `$ the exponential convergence is more evident (see figure 3.b). Similar results were obtained for intermediate $`\epsilon `$-values. Note that because the correlation oscillates, it is not possible to have a point by point exponential decay for $`\mathrm{\Delta }C_{1,0}(N)`$; however, the upper envelope clearly follows an exponential decay (see straight lines for guidance). Similar results were obtained for different values of $`(\xi ,\tau )`$. The above comparisons were carried out by using the data produced by the known system (1). Often, in practice, one is deprived of the evolution laws of the system. In such cases, the only way to analyse the system is by using time series reconstruction techniques. This is particularly appropriate when dealing with real spatio-temporal systems where, typically, only a fraction of the set of variables can be measured or when the dynamics is only indirectly observed by means of a scalar measurement function. In the following we suppose that the only available data is provided by the time series of a set of variables in a small spatial region. We would like to study the effects on predictability when using a truncated lattice instead of the thermodynamic limit. Instead of limiting ourselves to one-dimensional time-series (temporal embedding) we use a mix of temporal and spatial delay embeddings (spatio-temporal embedding). Therefore we use the delay map $$𝑿_i^t=(𝒚_i^t,𝒚_{i-1}^t,\dots ,𝒚_{i-(d_s-1)}^t),$$ (4) whose entries $`𝒚_i^t=(x_i^t,x_i^{t-1},\dots ,x_i^{t-(d_t-1)})`$ are time-delay vectors and where $`d_s`$ and $`d_t`$ denote the spatial and temporal embedding dimensions. The overall embedding dimension is $`d=d_sd_t`$. The delay map (4) is used to predict $`x_i^{t+1}`$. Note that we are using spatial coordinates only from the left of $`x_i^{t+1}`$ (i.e. $`x_j^t`$ such that $`j\le i`$). An obvious choice of spatio-temporal delay would be a symmetric one such as $`𝑿_i^t=(x_{i-1}^t,x_i^t,x_{i+1}^t)`$. However, this would give artificially good results (for both the full and truncated lattices) since $`x_i^{t+1}`$ depends only on these variables (cf. (1)). This is an artefact of the choice of coupling and observable and could not be expected to hold in general. Therefore, we use the delay map (4) in order to “hide” some dynamical information affecting the future state and hence make the prediction problem a non-trivial one. The best one-step predictions using the delay map (4) are typically obtained for $`d_s=d_t=2`$. Here we use the two cases $`(d_s,d_t)=(2,1)`$ and $`(d_s,d_t)=(2,2)`$; almost identical results are obtained for higher dimensional embeddings ($`(d_s,d_t)\in [1,4]^2`$). Denote by $`E(N)`$ the normalised root-mean square error for the one step prediction using the delay map (4) at the central portion of the truncated lattice of size $`N`$. The comparison between $`E(N)`$ and $`E(N\rightarrow \mathrm{\infty })`$ is shown in figure 4 where we plot the absolute value of the normalised error difference $$\mathrm{\Delta }E(N)=\left|(E(N)-E(\mathrm{\infty }))/E(\mathrm{\infty })\right|$$ (5) for increasing $`N`$ and for different spatio-temporal embeddings and coupling strengths. 
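For completeness, the following sketch builds the spatio-temporal delay vectors of Eq. (4) and evaluates a normalised one-step prediction error. The letter does not specify the predictor behind $`E(N)`$, so a simple local-constant (nearest-neighbour) predictor is used here purely as an illustration; the training/test split is also an arbitrary choice, and the commented usage example assumes the space-time array `orb` generated in the previous sketch.

```python
import numpy as np

def st_embed(orbit, i, d_s=2, d_t=2):
    """Delay vectors X_i^t of Eq. (4) and one-step targets x_i^{t+1}.

    `orbit` is a (time x sites) array; site index i must satisfy i >= d_s - 1
    since only sites to the left of i are used, as in the text."""
    T = orbit.shape[0]
    rows = []
    for t in range(d_t - 1, T - 1):
        rows.append([orbit[t - k, i - j] for j in range(d_s) for k in range(d_t)])
    return np.array(rows), orbit[d_t:, i]

def nn_prediction_error(X, y, n_train):
    """Normalised RMS one-step error of a local-constant (nearest-neighbour) predictor."""
    Xtr, ytr, Xte, yte = X[:n_train], y[:n_train], X[n_train:], y[n_train:]
    pred = np.empty(len(yte))
    for k, q in enumerate(Xte):
        nn = np.argmin(np.sum((Xtr - q) ** 2, axis=1))
        pred[k] = ytr[nn]
    return np.sqrt(np.mean((pred - yte) ** 2)) / np.std(yte)

# Example usage with the space-time array `orb` from the previous sketch:
# X, y = st_embed(orb[:6000], i=50)
# print(nn_prediction_error(X, y, n_train=5000))
```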
The figure shows a rapid decay of the prediction error difference for small $`N`$ and then a saturation region where the limited accuracy of our computation hinders any further decay. For $`\epsilon =0.2`$ the drop to the saturation region is almost immediate while for the large coupling value $`\epsilon =0.8`$ the decay is slow enough to observe an apparently exponential decay (see fitted line corresponding to $`d_s=d_t=2`$ for $`N=1,\dots ,20`$); thereafter the saturation region is again reached. For intermediate values of $`\epsilon `$, the saturation region is reached between $`N=5`$ and 20 (results not shown here). Before this saturation it is possible to observe a rapid (exponential) decrease of the normalised error difference. This corroborates again the fact that it seems impossible in practice to differentiate between the dynamics of the relatively small truncated lattice and the thermodynamic limit. All the results in this letter were obtained from the simulation of a truncated lattice with white noise inputs at the boundaries. Other kinds of inputs did not change our observations in a qualitative way. It is worth mentioning that a truncated lattice with random inputs with the same probability density as the thermodynamic limit ($`\rho _{\mathrm{\infty }}(x)`$) produces approximately the same exponential decays as above with just a downward vertical shift (i.e. same decay but smaller initial difference). The properties of the thermodynamic limit of a coupled logistic lattice we considered here (probability densities, power spectra, two-point correlations and predictability) were approximated remarkably well (exponentially close) by a truncated lattice with random inputs. Therefore, when observing data from a limited spatial region, given a finite accuracy in the computations and a reasonably small truncated lattice size, it would be impossible to discern any dynamical difference between the thermodynamic limit lattice and its truncated counterpart. The implications from a spatio-temporal systems time series perspective are quite strong and discouraging: even though in theory one should be able to reconstruct the dynamics of the whole attractor of a spatio-temporal system from a local time series (Takens theorem ), it appears that due to the limited accuracy (CPU precision, time and memory limitations, measurement errors, limited amount of data) it would be impossible to test for definite high-dimensional determinism in practice. The evidence presented here suggests the impossibility of reconstructing the state of the whole lattice from localised information. It is natural to ask whether we can do any better by observing the lattice at many (possibly all) different sites. Whilst in principle this would yield an embedding of the whole high-dimensional system, it is unlikely to be much more useful in practice. This is because the resulting embedding space will be extremely high dimensional and any attempt to characterise the dynamics, or fit a model will suffer from the usual “curse of high dimensionality”. In particular, with any realistic amount of data, it will be very rare for typical points to have close neighbours. Hence, for instance, predictions are unlikely to be much better than those obtained from just observing a localised part of the lattice. If one actually wants to predict the behaviour at many or all sites, our results suggest that the best approach is to treat the data as coming from a number of uncoupled small noisy systems, rather than a single large system. 
Of course, if one has good reason to suppose that the system is spatially homogeneous, one should fit the same local model at all spatial locations, thereby substantially increasing the amount of available data. This work was carried out under an EPSRC grant (GR/L42513). JS would also like to thank the Leverhulme Trust for financial support.
no-problem/9904/astro-ph9904032.html
# Space Telescope Imaging Spectrograph Parallel Observations of the Planetary Nebula M94-20 (Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archive at the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under the NASA contract NAS 5-26555.) ## 1 Introduction The Hubble Space Telescope (HST) Archival Pure Parallel Program, started in 1997 shortly after the second Hubble servicing mission, was implemented to provide the maximum amount of science possible with HST and its primary cameras: the Wide–Field Planetary Camera 2 (WFPC2), the Near-Infrared Camera and Multi–Object Spectrograph (NICMOS), and the Space Telescope Imaging Spectrograph (STIS). During observations of a target by the primary instrument, the other instruments make a preplanned series of observations in nearby fields, the positions of which are determined by the focal plane offset of the instrument and the roll angle of HST. The resulting observations are placed in the Hubble Data Archive at the Space Telescope Science Institute, and are made immediately available to the astronomical community. The STIS parallels include broadband camera images and full field slitless spectroscopy. Full details of the STIS parallel survey (SPS) capabilities and techniques can be found in Gardner et al. (1998). The imaging mode has a very broad bandpass (FWHM $`\sim `$ 5000Å) peaking at 5500Å and ranging from $`\sim `$ 2500Å to 10300Å. The full field spectroscopy mode has a wavelength range of 5200Å to 10300Å, and a dispersion of 4.9Å per pixel. In an imaging mode observation of 2000 seconds, point sources as faint as $`\mathrm{m}_\mathrm{V}=28`$ can be detected with a S/N of 5, and the spectra can achieve a limiting magnitude of 22 under the same circumstances. Approximately 6000 STIS parallel images and spectra were taken in the first year of the parallel program, comprising more than 1000 separate fields. Of these, over 300 images and spectra are of the Large Magellanic Cloud (LMC). These data represent a unique opportunity to peer deeply into the LMC and investigate many of its physical properties. Planetary nebulae (PNe, singular PN) provide a way to study several important properties of the LMC, including kinematics (Vassiliadis, Meatheringham & Dopita 1992), dynamical and chemical evolution and can also be used as distance indicators (for example, Feldmeier, Ciardullo & Jacoby 1997 and references therein). Nearly 300 PNe have been found in the LMC (Leisy et al. 1997). LMC PNe are of particular importance in the determination of the PNe luminosity function, since they all lie at approximately the same distance. Unfortunately, the great distance to the LMC means that most PNe are very faint and unresolvable by ground based telescopes, and the typically crowded LMC fields make studying the PNe very difficult. The high spatial resolution of HST and low sky background of Earth orbit provide an advantage for studying LMC PNe (for example, Dopita et al. 1996). In particular, STIS can record spatially resolved spectra of the larger, fainter PNe. Here we report on the serendipitous STIS parallel observations of the planetary nebula M94–20, first discovered by Morgan (1994) in a ground–based emission line survey of the LMC. The STIS camera mode images clearly resolve the nebula to be $`\sim `$2 arcseconds in diameter and also detect the central star. The spectra provide identification of several emission lines in the nebula. 
The low resolution of the STIS spectra together with the relatively large angular extent of the nebula blends the nebular diagnostic emission lines, but the images and spectra are still a useful tool in investigating extragalactic PNe and can be used to plan followup observations, both ground and space–based. ## 2 Observations Six images and two spectra were taken of the field containing the planetary nebula M94-20. The total exposure times were 1100 and 1200 seconds for the images and spectra, respectively. The observations were taken as part of the STIS Archival Pure Parallel Program, HST Program ID 7783, S. Baum, Principal Investigator. WFPC2 was the prime instrument during the STIS parallels, observing the LMC as part of the HST program “Star Formation History of the Large Magellanic Cloud”, HST ID 7382, Smecker–Hane, Principal Investigator. The primary target was LMC-DISK1 at $`\alpha =05^{\mathrm{h}}11^{\mathrm{m}}14\stackrel{\mathrm{s}}{.}92`$ and $`\delta =-71^{\circ }15^{\prime }41\stackrel{\prime \prime }{.}62`$. The STIS parallel images were formatted as 1024x1024 arrays (one pixel = 0$`\stackrel{\prime \prime }{.}`$051), while the spectra were automatically binned on-chip in the y–direction to 1024x512 (one pixel = 0$`\stackrel{\prime \prime }{.}`$051 in the spectral direction, and 0$`\stackrel{\prime \prime }{.}`$102 in the spatial direction). The observations were taken over two orbits, and there was a 0$`\stackrel{\prime \prime }{.}`$8 (15.5 STIS CCD pixels) telescope slew to the southwest between the two sets of observations. The offsets between images were determined by cross–correlating a small subsection in each, which were then used to shift and combine the images. The offsets were applied with appropriate binning to the spectra which were shifted and combined as well. The images and full field spectra were processed using STIS Investigation Definition Team pipeline calibration software which performs basic data reduction steps such as bias and dark current subtraction. The observation particulars are shown in Table 1. For the analysis using the imaging mode, the sky background was subtracted by taking a simple median of the area near the PN. The background in the full field spectral image was subtracted from the spectrum using a column–by–column median, employing sigma–clipping to remove the positive bias of the stellar contamination. ## 3 Discussion ### 3.1 The Nebula Figure 1 shows the full field processed image containing M94–20. The PN is located at the top of the image, centered roughly in the horizontal (x–axis) direction. North is to the left and East is down; we display the image in this way so that it has the same sense as the spectrum, which disperses light along the x–direction. Though close to the detector edge, the nebula is located fully inside the image. Note the small irregular galaxy located 9 arcseconds to the north of M94–20, and another located 22 arcseconds to the south. Although both lie in the dispersion direction of the PN, neither interferes significantly with the spectrum. The bright diagonal line in the image is a diffraction spike from a star located just off the detector, and the circular features near bright stars are internal reflections in STIS. The inset shows a close–up of just the nebula. The PN, unresolved in the discovery survey (Morgan 1994), is clearly resolved in the STIS image. The measured properties of the nebula and central star (see Section 3.2) are listed in Table 2. 
The nebula is slightly elliptical, with a mean diameter of roughly 2 arcseconds, making this one of the largest PNe in the LMC. Most LMC PNe are smaller than 1 arcsecond in diameter, with the notable exception of LMC–SMP72, a bipolar PN measuring approximately 2 x 3 arcseconds (Dopita et al. 1996). M94–20 has a bright elliptical rim, and the outermost parts of the nebula also show some faint structure, reminiscent of the double–elliptical structure of NGC 6543 (aka the “Cat’s Eye”). One of the outer ellipses is aligned with the inner rim, while the other has a position angle of $`145^{\circ }`$. The large angular extent of the PN implies that this is an evolved object. A subarray of the spectrum containing M94–20 is shown in Figure 2 (top). The extracted subarray covers the full spectral range of the original spectrum, but only 32 pixels ($`3\stackrel{\prime \prime }{.}3`$) in the spatial direction. The stars in the image appear as sources of continuum, while M94–20 is clearly an emission–line object. Figure 2 (middle) shows a closeup of the nebular spectrum from 6000Å to 7200Å as well as the positions of the line images using ellipses with major and minor axes corresponding to those measured in the camera mode image (bottom). The bright elliptical patch corresponds to the lines of H$`\alpha `$ and \[NII\] in M94–20. At this resolution, the \[NII\] 6548Å and 6584Å lines are separated from H$`\alpha `$ by only 3 and 4 pixels, respectively. The nebula itself is about 40 pixels or ten times that size, so the three spectral line images overlap. The blended nebular spectral image is about 4 pixels wider than the nebular image in the camera mode, also indicating that more than one line is present. The \[OI\] line at 6300Å is also clearly seen. The \[SII\] 6717, 6731Å emission line images are barely detected in the spectrum. We also note that we may have a detection of \[SIII\] 9069Å and Pa$`ϵ`$,\[SIII\] at 9545Å, which does not reproduce well in Figure 2 but can be seen very faintly in the original data. We flux calibrated the brighter emission lines using the absolute sensitivity of STIS for a point source (Collins and Bohlin 1998), correcting the sensitivity curve for a slitless spectrum of an extended line emission object, and masking out the continuum spectra of nearby stars that overlapped the PN spectrum. We find that the total observed H$`\alpha `$ \+ \[NII\] 6548, 6584Å flux is $`7.3\times 10^{-15}\mathrm{erg}/\mathrm{sec}/\mathrm{cm}^2`$. D. Morgan and Q. Parker (1998, private communication) made followup observations of many of the PNe found by Morgan (1994), and for M94–20 detected \[OIII\] 4959, 5007Å and H$`\alpha `$. They report the \[OIII\] flux to be $`2.7\times 10^{-14}\mathrm{erg}/\mathrm{sec}/\mathrm{cm}^2`$. The value of the ratio of H$`\alpha `$ \+ \[NII\] 6548, 6584Å to \[OIII\] 4959, 5007Å in LMC PNe is typically 3–4 (for example, Vassiliadis, Dopita, Morgan and Bell 1992), which is somewhat lower than but consistent with these measurements. The typical value of the ratio of \[OI\] 6300Å to H$`\alpha `$ in LMC PNe is approximately 0.07. We find the \[OI\] 6300Å flux is $`1.2\times 10^{-15}\mathrm{erg}/\mathrm{sec}/\mathrm{cm}^2`$, which again gives a ratio higher than but consistent with the typical value. Morgan and Parker also find an upper limit to the H$`\beta `$ flux of $`1.2\times 10^{-14}\mathrm{erg}/\mathrm{sec}/\mathrm{cm}^2`$, making M94–20 a relatively faint LMC PN (Vassiliadis, Dopita, Morgan and Bell 1992). 
We note that our measurements may also be upper limits, since background subtraction is difficult due to the stellar spectra superposed on the PN spectrum. The detection of \[SII\] 6717, 6731Å is very weak and is complicated by their proximity to the H$`\alpha `$ \+ \[NII\] 6548, 6584Å lines. We subtracted a linear fit to the slope of the wing of the H$`\alpha `$ \+ \[NII\] lines to find the total flux of the \[SII\] lines, and get an upper limit of $`5\times 10^{-16}\mathrm{erg}/\mathrm{sec}/\mathrm{cm}^2`$. The emission lines from the PN are remarkably featureless spatially, given the obvious structure in the camera mode image. The spectral mode of STIS has a blue cutoff at $`\sim `$5300Å, while the imaging mode goes down to $`\sim `$2000Å. The imaging mode will therefore detect the lines of H$`\beta `$, the \[OII\] doublet at 3727 and 3729Å, and the lines of \[OIII\] at 4959, 5007Å. The emission lines seen in the imaging mode but not in the spectroscopic mode must account for these features. ### 3.2 The Central Star The central star, as well as two other superposed stars, are clearly visible in the camera mode image. Relative to the Fine Guidance Sensor guide stars, we find the J2000 coordinates of the central star are $`\alpha =5^{\mathrm{h}}11^{\mathrm{m}}10\stackrel{\mathrm{s}}{.}64`$ and $`\delta =-71^{\circ }10^{\prime }26\stackrel{\prime \prime }{.}54`$, in good agreement with those found in Leisy et al. (1997). The spectrum of the star is too weak to determine a temperature, so we used the camera mode sensitivity of STIS to flux calibrate blackbody models of 20000K, 40000K and 100000K. The calibrated models were then folded into a Johnson V filter bandpass to find the V magnitude of the star. We found the central star magnitude to be $`\mathrm{m}_\mathrm{V}=26.0\pm 0.2`$. The sensitivity of STIS in the imaging mode drops rapidly with decreasing wavelength, so the calculated magnitude does not depend strongly on the temperature of the star. Given the magnitude and a distance to the LMC of 55 kpc, we find that the luminosity of the central star is 0.09 solar, and $`\mathrm{log}(\mathrm{T}_{\mathrm{eff}})=4.6`$ for a typical white dwarf radius of $`5\times 10^8\mathrm{cm}`$. The low temperature indicates this may be a relatively old white dwarf, which is consistent with the evolved nature of the PN. ### 3.3 Other PNe There are approximately 300 known PNe in the LMC (Leisy et al. 1997). The number density is highest near the Bar and tapers off rapidly with distance (Morgan 1994). M94–20 lies over a degree from the Bar, near the edge of the LMC. Over 300 observations comprising 70 separate fields have been taken in the LMC by STIS. Of these, 29 also have associated full field spectra, enhancing the ability to detect candidate PNe (i.e., emission line objects) clearly. The detection limit of an observation depends most strongly on the brightness and the size of the nebula in a given emission line and the integration time. The 1200 second exposure of the M94–20 observations is fairly typical of parallels, and noting that the \[SII\] 6717, 6731Å emission lines in the nebula are barely detected, we expect to detect any nebula the size of M94–20 brighter than about 2–3 $`\times 10^{-15}\mathrm{erg}/\mathrm{sec}/\mathrm{cm}^2`$ in a line for a typical LMC PN. A smaller nebula would be detectable at fainter limits, with the brightness scaling inversely with the area. We note here again that M94–20 is a relatively faint, large LMC PN. 
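The central-star numbers quoted above can be checked with a short back-of-the-envelope script: assuming the stated radius of 5$`\times 10^8`$ cm and luminosity of 0.09 L<sub>☉</sub>, the Stefan-Boltzmann law gives log T<sub>eff</sub> close to 4.6, and the 55 kpc distance fixes the absolute magnitude corresponding to m<sub>V</sub> = 26.0. The bandpass folding actually used by the authors is not reproduced here; this is only an arithmetic sanity check with standard constants.

```python
import math

# Hypothetical sanity check of the central-star parameters quoted in the text.
L_sun = 3.828e33          # erg/s
sigma_SB = 5.670e-5       # erg cm^-2 s^-1 K^-4
R = 5.0e8                 # cm, "typical white dwarf radius" assumed in the text
L = 0.09 * L_sun          # erg/s, luminosity quoted in the text

T_eff = (L / (4.0 * math.pi * R**2 * sigma_SB)) ** 0.25
print(f"T_eff = {T_eff:.3e} K, log10(T_eff) = {math.log10(T_eff):.2f}")   # ~4.6

# Distance modulus for d = 55 kpc and the quoted m_V
d_pc = 55_000.0
dm = 5.0 * math.log10(d_pc) - 5.0
print(f"distance modulus = {dm:.2f}, M_V = {26.0 - dm:.2f}")
```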
No other previously catalogued LMC PNe have been observed by STIS in the parallel survey. We have processed and examined the other fields in the LMC and also 20 STIS fields in the Small Magellanic Cloud (SMC) and have found no other extended PNe to within the detection limits, catalogued or otherwise, although large HII regions are common and easily detected. Unresolved or partially resolved PNe are difficult to identify in this case because most of the full–field spectra in the LMC and SMC are either single observations or have one cosmic ray split (that is, two observations without telescope movement), making the differentiation of sharp spectral features from cosmic rays and hot pixels difficult. We note here that in many other fields, STIS observations have two or more cosmic ray splits, making unresolved PN detection relatively easier. We have also searched the relevant WFPC2 parallel fields; no known LMC or SMC PNe are located within these parallel survey fields. Discovering previously unrecorded PNe using WFPC2 parallels is more difficult due to the lack of unambiguous spectral information in the broad passbands used. Although the odds of finding PNe in the random STIS parallels are low for the LMC or SMC, we are encouraged by the observations of M94–20. The ability of STIS to get deep, spatially resolved spectra makes it an excellent instrument for tagging objects for further deep ground–based spectroscopy. STIS parallels will continue to be taken in the Magellanic Clouds, and it is only a matter of time before additional PNe are observed. Interestingly, PNe in nearby galaxies can also be detected in STIS parallels, although this becomes a difficult process as PNe at that distance are no longer resolved spatially. However, it also means that the larger volumes of space on the scale of the host galaxy are observed as well. Ground–based surveys using narrow passband filters centered on \[OIII\] 5007Å and H$`\alpha `$ have found many PN candidates in M31 and M33 (Ciardullo et al. 1989; Bohannan, Conti & Massey 1985). We made a preliminary search of STIS parallel fields in M31 and NGC 205 which yielded several emission line objects. At least one of these objects is a planetary nebula, even though only a small fraction of the surface area of those galaxies has been observed. A similar search in M33 fields has also resulted in several detections. These observations are awaiting ground–based followups to get spectra with wavelength coverage that includes $`\mathrm{H}\beta `$ and \[OIII\] 5007Å to help distinguish PNe from HII regions. Future STIS parallels will undoubtedly find more PNe, especially if the narrow \[OII\] 3727Å and \[OIII\] 5007Å filters are used in imaging mode in conjunction with the G430L grating, which has a bandpass that includes these emission lines. We note finally that the use of STIS parallels goes well beyond that of finding (or adding to our knowledge of pre–existing) PNe. Moderate redshift galaxy counts (Gardner et al. 1998), research on stellar population counts, low mass stars (Plait 1999), globular clusters in nearby galaxies, and many other fields can benefit from a relatively simple search of the parallel archive. Since the data already exist and are publicly accessible, the STIS parallels are an excellent tool that should be exploited to their fullest extent. The authors wish to acknowledge the help of Nick Collins, Bob Hill, Jon Gardner and David Morgan for giving their advice during this work. 
Figure 1: Combined and processed camera–mode image of M94–20 field. The inset shows a close–up of the nebula. The inner rim and central star can be seen clearly in the close–up image.

Figure 2.– (Top) A subarray of the full field spectrum containing M94–20. The subarray covers the full spectral range of the original spectrum, and 32 pixels ($`3\stackrel{\prime \prime }{.}3`$) in the spatial direction. Several emission lines can be seen in this log scale image, while stars appear as streaks. The close–up of the region from 6000Å to 7200Å is also shown (middle) along with a schematic of the positions of the detected emission lines (bottom).

Table 1: Log of STIS M94–20 Observations (1997 October 23)

| Rootname | STIS Mode | Exposure (sec) | R.A. (2000) | Dec (2000) |
| --- | --- | --- | --- | --- |
| O48B74010 | IMAGE | 400 | 5 11 15.44 | -71 10 30.9 |
| O48B75010 | IMAGE | 400 | 5 11 15.56 | -71 10 30.3 |
| O46N43FGQ | IMAGE | 150 | 5 11 15.44 | -71 10 30.9 |
| O46N43FHQ | SPEC | 600 | 5 11 15.44 | -71 10 30.9 |
| O46N44FRQ | IMAGE | 150 | 5 11 15.56 | -71 10 30.3 |
| O46N44FUQ | SPEC | 600 | 5 11 15.56 | -71 10 30.3 |

Table 2: Properties of M94–20 and the central star

| Parameter | Value |
| --- | --- |
| Nebula | |
| Major axis | $`2\stackrel{\prime \prime }{.}1`$ |
| Minor axis | $`1\stackrel{\prime \prime }{.}6`$ |
| Major axis (rim) | $`1\stackrel{\prime \prime }{.}3`$ |
| Minor axis (rim) | $`0\stackrel{\prime \prime }{.}9`$ |
| P.A. (rim) | $`30^{\circ }`$ |
| H$`\alpha `$ \+ \[NII\] flux | $`7.3\times 10^{-15}\mathrm{erg}\mathrm{s}^{-1}\mathrm{cm}^{-2}`$ |
| \[OI\] $`\lambda `$6300 flux | $`1.2\times 10^{-15}\mathrm{erg}\mathrm{s}^{-1}\mathrm{cm}^{-2}`$ |
| Central Star | |
| $`\mathrm{m}_\mathrm{v}`$ | 26 $`\pm `$ 0.2 |
| $`\mathrm{log}(\mathrm{L}/\mathrm{L}_{\odot })`$ | $`-1.0`$ |
| $`\mathrm{log}(\mathrm{T}_{\mathrm{eff}})`$ | 4.6 |
no-problem/9904/astro-ph9904104.html
# HARD X-RAYS FROM THE GALACTIC NUCLEUS: PRESENT AND FUTURE OBSERVATIONS ## 1 The Problem of the X/$`\gamma `$-Ray Low Luminosity of the Galactic Nucleus Increasing evidence in support of the presence of a massive (2.5 10<sup>6</sup> M<sub>☉</sub>) Black Hole (BH) at the dynamical center of our Galaxy has been collected in the past years. Proper and radial motions of stars in the central parsec of the Galaxy, obtained with high resolution near-infrared observations , show indeed the presence of a dark mass with density $`>`$ 10<sup>12</sup> M<sub>☉</sub> pc<sup>-3</sup> located within $`<`$ 0.01 pc from Sgr A*, and governing the dynamics of the mass of the region. The compact synchrotron radio source Sgr A*, which coincides with the Galaxy dynamical center and shows very low proper motion, is therefore considered the visible counterpart of the Galactic Nucleus (GN) BH, and its spectrum has been studied at all wavelengths (for recent reviews on the Galactic Center see ). However, unlike stellar-mass BHs in binary systems and supermassive BHs in AGNs, the Galactic Nucleus is found to be extremely faint and underluminous in the infrared and X-ray domains. In particular, the results of the Galactic Center SIGMA/GRANAT Survey coupled to Rosat , ART-P and ASCA observations have shown that the Sgr A* total X-ray luminosity is well below 10<sup>37</sup> erg s<sup>-1</sup>, i.e. $`<`$ 10<sup>-7</sup> times the Eddington Luminosity for such a BH. This is rather intriguing since stellar winds of the nearby IRS 16 star cluster provide enough matter to power the accreting BH. Due to non uniformities in the winds the matter is accreted with substantial angular momentum and so the flow is certainly not in the form of pure spherical free fall. In standard thin accretion disks around a BH about 10$`\%`$ of the energy provided by the accreted matter ($`\dot{M}`$c<sup>2</sup>) is expected to be radiated between infrared and X-ray frequencies. Estimates of the accretion rate for Sgr A* are in the range (6–200) $`\times `$ 10<sup>-6</sup> M<sub>☉</sub> yr<sup>-1</sup> and luminosities around 10<sup>40</sup>–10<sup>42</sup> erg s<sup>-1</sup> are therefore expected. New models of accretion flow have been recently proposed, in which very low radiation efficiency is obtained, even for non-spherical infall, by assuming that most of the energy is advected into the BH rather than being radiated. These “advection dominated accretion flow” models (ADAF) seem to be able to interpret the whole Sgr A* spectrum from radio to $`\gamma `$-rays, and in particular to explain the observed low X-ray flux . Sgr A* is in fact considered the test case for this set of models, and observations in the X-ray domain of this object are crucial to establish the validity of the model assumptions and/or to constrain their parameters. ## 2 The 1990-1997 SIGMA/GRANAT Survey of the Galactic Center: 30-300 keV Upper Limits on Sgr A* The 30-1300 keV SIGMA telescope on the GRANAT satellite observed the Galactic Center between March 1990 and October 1997 about twice a year, for a total of 9.2 10<sup>6</sup> s effective time. The telescope provided an unprecedented angular resolution ($`15^{\prime }`$) at these energies and, for the quoted observing time, a typical 1$`\sigma `$ flux error of $`<`$ 2-3 mCrab (1 mCrab is about 8 $`\times `$ 10<sup>-12</sup> erg cm<sup>-2</sup> s<sup>-1</sup> in the 40-80 keV band). Analysis of a subset of these data already provided the most precise hard X-ray images of the Galactic Nucleus and proved that the Sgr A* luminosity in the 40-150 keV band is lower than 10<sup>36</sup> erg s<sup>-1</sup>. 
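As a quick consistency check on these numbers, the sketch below converts a flux level in mCrab (1 mCrab taken as 8 $`\times `$ 10<sup>-12</sup> erg cm<sup>-2</sup> s<sup>-1</sup> in the 40-80 keV band, as stated above) into a 40-80 keV luminosity for an assumed Galactic Center distance of 8.5 kpc; a 2-3 mCrab level indeed corresponds to a few 10<sup>35</sup> erg s<sup>-1</sup>, below the quoted bound of 10<sup>36</sup> erg s<sup>-1</sup>.

```python
import math

# Flux upper limit in mCrab -> 40-80 keV luminosity upper limit at the Galactic Center.
MCRAB = 8.0e-12                  # erg cm^-2 s^-1 per mCrab (40-80 keV, from the text)
KPC = 3.086e21                   # cm

def lum_upper_limit(flux_mcrab, d_kpc=8.5):
    d = d_kpc * KPC
    return 4.0 * math.pi * d**2 * flux_mcrab * MCRAB

for f in (1.0, 3.0):             # e.g. the 2-3 mCrab (1 sigma) sensitivity level
    print(f"{f:3.1f} mCrab  ->  L(40-80 keV) < {lum_upper_limit(f):.2e} erg/s")
```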
These results were important because, though it was known from the Einstein Observatory data that the GN was not bright in soft X-rays, it was still possible that Sgr A* could have, as any respectable BH in a low state, a bright hard tail extending to $`>`$ 100 keV and possibly also a component of 511 keV line emission. For example, the nearby source 1E 1740.7-2942, weak at $`<`$ 4 keV, was later observed, in particular by SIGMA, to be bright in soft $`\gamma `$-rays, found to be associated with radio jets and even to display transient events of 500 keV emission (see references in ). Indeed 1E 1740.7-2942 is now recognized to be a good BH candidate at only 40′ from the GN and to be responsible for most of the hard X-ray/$`\gamma `$-ray activity observed by early low-angular-resolution experiments from the direction of the Galactic Center and previously associated with the GN . Recently Goldoni et al. (1999) have re-analyzed the full set of SIGMA data to improve the upper limit estimates. In this analysis they included models with both point-source and diffuse emission to fit sky images. Reconstructed images of the central 4°$`\times `$4° region in different energy bands are presented in Fig. 1. Four variable point-sources are contributing to the emission from the central 1° circle (1E 1740.7-2942, A 1742-294, GROJ 1744-28, GRS 1743-290). The last one is only $`11^{\prime }`$ from Sgr A* and, if considered a single source, cannot be associated with Sgr A*. However the presence of some weak contribution from the Galactic Nucleus cannot be excluded and therefore 2 sets of limits were derived for the Sgr A* hard X-ray flux: one for which all GRS 1743-290 emission is attributed to a single source (case A) and another obtained by including a source at the Sgr A* position (fixed parameter) in the fitting procedure (case B). Results are reported in Table 1, for the four energy bands of Fig. 1. At high energy no emission at all is detected apart from 1E 1740.7-2942 and values in the two columns are identical. The most stringent upper limits of Table 1 (A column) have been reported in Fig. 2 in units of E L<sub>E</sub> for a distance of 8.5 kpc, and compared with the ADAF model predicted spectrum of Sgr A*. The reported model is the “best model” as defined in , and refers to a BH mass of 2.5 10<sup>6</sup> M<sub>☉</sub>, a mass accretion in Eddington units $`\dot{m}`$ = 1.3 $`\times `$ 10<sup>-4</sup> (i.e. 7 10<sup>-6</sup> M<sub>☉</sub> yr<sup>-1</sup>), a viscosity parameter $`\alpha `$ = 0.3, an equipartition parameter $`\beta `$ = 0.5 (exact equipartition between gas pressure and magnetic pressure), and a fraction of viscous heat converted into electron heat of $`\delta `$ = 0.001. Comparison with X-ray results, i.e. luminosities measured by Rosat (0.8-2.5 keV) and ASCA (2-10 keV), is also shown in Fig. 2. The ASCA value is actually reported as an upper limit, following Narayan et al. 1998 , because the observed flux was instead associated with the X-ray burster A 1742-289, located only 1′ from Sgr A* . Note however that this interpretation is controversial and the ASCA measurement may well contain some contribution from Sgr A* . ## 3 The IBIS/INTEGRAL Galactic Center Deep Survey The Imager on Board the INTEGRAL Satellite (IBIS) is one of the two main instruments of INTEGRAL, the ESA $`\gamma `$-ray mission to be launched in 2001. 
It provides, thanks to its coded-aperture imaging system composed of a tungsten mask and two pixellated detector layers, fine imaging (12′ FWHM resolution), good spectral resolution ($`<`$ 8 $`\%`$ at 100 keV and $`\sim `$ 6 $`\%`$ at 1 MeV) and good sensitivity over a wide (20 keV-10 MeV) energy range and a very wide field of view (29°$`\times `$29° at zero sensitivity) . The INTEGRAL Core Program includes the so-called Galactic Center Deep Exposure (GCDE) program, the deep observation of a central Milky Way region 60° wide in longitude and 20° in latitude. The GCDE will consist of a grid of 31$`\times `$11 pointings separated by 2°. The net GCDE observing time is 4.8 10<sup>6</sup> s per year, of which about 84 $`\%`$ is performed in pointing mode . Considering the GCDE scan in pointing mode and the IBIS sensitivity over its FOV we estimated that the Galactic Nucleus will actually be observed by IBIS for an effective (on-axis equivalent) time of $`\sim `$ 4 10<sup>6</sup> s in 3 years (see Fig. 3 for the IBIS sensitivity of the INTEGRAL GCDE scan). Adding slew and Galactic Plane Survey times the total net 3-year IBIS Core Program exposure on the GN will increase to 5 10<sup>6</sup> s. IBIS broad-band sensitivity is shown in Fig. 2 for the pointing GCDE time on the E L<sub>E</sub> vs. E plot to compare it with the ADAF model spectrum for Sgr A* . Two curves for IBIS are represented for two extreme estimates of the in-flight background. Also reported in Fig. 2 is the expected GCDE sensitivity for the INTEGRAL X-Ray monitor (JEM-X), which provides images in the range 3-60 keV but in a smaller field of view. This picture demonstrates that the IBIS sensitivity estimated for the GCDE will make it possible either to detect the high-energy emission predicted by the ADAF model for Sgr A* or to set tighter constraints on the model parameters. ## 4 Simulations of the Galactic Nucleus IBIS/INTEGRAL Observations ASCA, ART-P and Rosat results have shown that at low energies many point sources and also diffuse emission are present in the area and can make identifications difficult. In particular, activity of the X-ray burster A1742-289, only 1′ away from Sgr A*, would not be easily separated from the GN emission even by X-ray instruments like JEM-X. On the other hand, results at higher energies, i.e. around 80 keV, should be rather ideal to reveal emission from the GN, since at these energies, due to their softer spectra, most of the Neutron Star binaries will be significantly fainter than the expected emission from the GN BH (see NS binary typical spectrum in Fig. 2). Even at energies $`>`$ 50 keV, however, imaging will be crucial since SIGMA showed the presence of several high-energy sources located within 2° of the Galactic Nucleus (see Fig. 1). To prove the imaging capabilities of the IBIS telescope we performed simulations of a deep IBIS observation of the Galactic Center with the ISGRI low energy (20-700 keV) $`\gamma `$-ray detector layer of the telescope . The basic simulation procedure was described in ; we used results of the SIGMA survey for the source fluxes and the Sgr A* flux predicted by the ADAF spectrum. Sky image reconstruction procedures are based on standard cross-correlation techniques and iterative analysis and removal of sources as described e.g. in . One simplified assumption was that the observation was performed as a single pointing rather than a sum of pointings along a grid as it is actually expected. 
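The image-reconstruction step mentioned above (cross-correlation deconvolution of a coded-mask shadowgram followed by iterative location and removal of the brightest sources) can be illustrated with a toy model. Everything below (the random 50% open mask, the 64x64 geometry, the source positions and fluxes, and the background level) is arbitrary and has no relation to the real IBIS/ISGRI mask or to the simulations described in the text; an idealised periodic, fully coded configuration is assumed so that the correlations can be done with FFTs.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64                                             # toy detector/mask size
mask = (rng.random((n, n)) < 0.5).astype(float)    # random 50% open coded mask

# Toy sky: a few point sources on an empty field (arbitrary positions and fluxes)
sky = np.zeros((n, n))
for (i, j, flux) in [(20, 22, 200.0), (40, 45, 120.0), (32, 30, 60.0)]:
    sky[i, j] = flux

def circ_correlate(a, b):
    """Cyclic cross-correlation c[k] = sum_x a[x] b[x-k], via FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

# Detector shadowgram: mask-coded projection of the sky plus Poisson background
detector = rng.poisson(circ_correlate(mask, sky) + 50.0).astype(float)

# Standard balanced decoding: correlate with the zero-mean decoding array
decoder = mask - mask.mean()
image = circ_correlate(decoder, detector)

# Iterative cleaning: locate the brightest peak, remove its shadowgram, repeat
for _ in range(3):
    i, j = np.unravel_index(np.argmax(image), image.shape)
    amp = image[i, j] / (mask.var() * n * n)       # rough flux estimate at the peak
    print("source found at", (i, j), "estimated flux", round(amp, 1))
    model = np.zeros_like(sky)
    model[i, j] = amp
    detector = detector - circ_correlate(mask, model)   # subtract its shadowgram
    image = circ_correlate(decoder, detector)
```

With a real instrument the mask is a deterministic pattern, the detector is not fully coded for off-axis sources, and the background is structured, which is why the iterative modelling of point sources and diffuse emission described in the text is needed.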
## 4 Simulations of the Galactic Nucleus IBIS/INTEGRAL Observations
ASCA, ART-P and Rosat results have shown that at low energies many point sources, as well as diffuse emission, are present in the area and can make identifications difficult. In particular, the activity of the X-ray burster A 1742-289, only 1′ from Sgr A, would not be easily separated from that of the GN even by X-ray instruments like JEM-X. On the other hand, results at higher energies, i.e. around 80 keV, should be rather ideal to reveal emission from the GN, since at these energies, because of their softer spectra, most of the Neutron Star binaries will be significantly fainter than the expected emission from the GN BH (see the typical NS binary spectrum in Fig. 2). Even at energies $`>`$ 50 keV, however, imaging will be crucial, since SIGMA showed the presence of several high-energy sources located within 2° of the Galactic Nucleus (see Fig. 1). To prove the imaging capabilities of the IBIS telescope we performed simulations of a deep IBIS observation of the Galactic Center with ISGRI, the low-energy (20-700 keV) $`\gamma `$-ray detector layer of the telescope . The basic simulation procedure was described in ; we used the results of the SIGMA survey for the source fluxes and the predicted Sgr A flux from the ADAF spectrum. The sky image reconstruction procedures are based on standard cross-correlation techniques and on iterative analysis and removal of sources, as described e.g. in . One simplifying assumption was that the observation was performed as a single pointing rather than as a sum of pointings along a grid, as is actually foreseen. This should not influence the results, since the scanning will rather help to remove unknown background systematic effects, which are not presently included in the simulation. Work is in progress to include more realistic operational conditions. Fig. 4 shows some of the results of the simulations. These are zooms of the central part of reconstructed sky images obtained by an iterative decoding and point-source cleaning algorithm applied to the simulated detector images. A cluster of 4 high-energy sources in the central 2° circle appears in the images (Fig. 4, left). Fine image analysis allows them to be located and their contribution removed (Fig. 4, right). Sgr A is then detected at the expected position with a signal-to-noise ratio of $``$ 6 $`\sigma `$, corresponding to $`L_{50-140\mathrm{keV}}3.5\times 10^{34}`$ erg/s, the level predicted by the ADAF model. Including diffuse emission (possibly present at energies $`<50`$ keV ) showed that its presence can make the detection of faint sources more complicated, and that more refined procedures should be employed to model the composite diffuse plus point-source emission. However, diffuse emission will not influence the search for the GN emission at high energies.
## 5 Conclusions: XMM Core Program Observations of Sgr A
We have presented here the best available hard X-ray upper limits on Sgr A, obtained by the deep Galactic Center SIGMA/GRANAT survey. As shown above (Fig. 2), they do not constrain the present ADAF models, invoked to resolve the apparent contradiction between the presence of a massive black hole at the Galactic Center and its lack of activity in the X-ray domain. However, we note that the ADAF model depends critically on the mass accretion rate, since L $`\propto `$ $`\dot{m}^2`$. The value of the accretion rate assumed in the model ($`\dot{m}=1.3\times 10^{-4}`$ , actually determined by the Rosat flux) is close to the lower limit of the estimated range of $`\dot{m}`$ (i.e. $`(1-30)\times 10^{-4}`$, ). Even a factor of 3 higher in $`\dot{m}`$ would make the model inconsistent with our limits, and also incompatible with other spectral data on Sgr A. The new generation of telescopes aboard the future X-ray (AXAF/Chandra, XMM) and gamma-ray (INTEGRAL) missions will make it possible to search deeply for, and to study, the GN high-energy emission and in this way to test present models for massive BH accretion. We have shown with simulations that IBIS/INTEGRAL will be able to disentangle the hard X-ray emission of the Milky Way central square degree and to detect the ADAF emission from Sgr A, or to set appropriate upper limits over the 50-140 keV band, using 3 years of data from the INTEGRAL/GCDE core program. In case of detection, the reconstructed flux will allow tests of the ADAF spectra and of the radiation processes expected in such a massive BH. On the other hand, a set of upper limits would imply an $`\dot{M}`$ at least a factor of 1.4 lower than the present value of the ADAF model, making the problem of the low accretion rate even more difficult. The ADIOS models , in which advection is coupled to inflows/outflows of matter and which allow even lower emission than ADAF models for the same mass supply rate, may then have to be invoked. Alternatively, different hypotheses on the magnetic field will have to be considered, e.g. allowing for lower values of the $`\beta `$ equipartition parameter in order to steepen the spectrum and reduce the contribution at high energies .
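Given the L $`\propto `$ $`\dot{m}^2`$ dependence invoked above, a luminosity upper limit maps directly onto a bound on the accretion rate relative to a model prediction. The numbers below are placeholders chosen only to reproduce the kind of factor quoted in the text: a limit a factor of 2 below the prediction corresponds to $`\dot{m}`$ lower by about 1.4.

```python
import math

def mdot_upper_bound(mdot_model, L_model, L_limit):
    """Rescale the model accretion rate using the approximate L ~ mdot^2 law."""
    return mdot_model * math.sqrt(L_limit / L_model)

# Placeholder luminosities (arbitrary units); mdot_model from the ADAF fit.
print("%.2e" % mdot_upper_bound(1.3e-4, L_model=2.0, L_limit=1.0))
```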
In this context, hard X-ray results will be even more valuable if coupled to high-resolution results at lower energies, which could constrain the mass accretion rate and allow the spectral shape to be studied. XMM observations of the Galactic Center are planned in the Core Program of the first year of the mission operations. More than $`5\times 10^4`$ s of the XMM Galactic Center scan will be devoted to observing the central half-degree of the Galaxy with the EPIC cameras. Such an exposure time will provide sensitivities of the order of $`10^{32}`$ erg/s ($`5\sigma `$ for a distance of 8.5 kpc) in the energy range $`2-10`$ keV, with 6″ angular resolution (FWHM) and good spectral resolution. We expect that these observations will make it possible to resolve the confusion over the soft X-ray sources associated with Sgr A and to obtain spectral data with which to test the radiative properties of the closest supermassive accreting black hole.
# (𝑚,𝑛)-String-Like Dp-Brane Bound States ## I Introduction <br> Polchinski’s seminal work on D-brane has dramatically changed our view on perturbative superstrings. Yet, we can use many tools developed in the perturbative framework of superstrings to do D-brane calculations. These help us, at least in certain cases, to attack some hard problems in physics such as the information loss puzzle and the entropy problem in black hole physics. The D-brane picture is also the basis for the recent $`AdS/CFT`$ conjectures of Maldacena. By definition, a D-brane is a hypersurface carrying a RR charge in type II string theory on which an open string can end. From the D-brane worldvolume point of view, such an ending of a fundamental string (for short, F-string) is characterized by the non-vanishing U(1) gauge field strength on the brane at least in the low energy limit. A configuration of an F-string ending on a Dp-brane for every allowable $`p`$ can actually be BPS saturated, preserving a quarter of the spacetime supersymmetries. At the linearized approximation, this has been demonstrated by Callan and Maldacena for $`p2`$ cases and by Dasgupta and Mukhi for $`p=1`$ case. The interpretations for $`p2`$ and $`p=1`$ cases are, however, quite different. In the former case, the excitation of a worldvolume scalar field along a transverse direction is interpreted as the F-string attached to the Dp-brane. Whereas, for the latter one, the excitation of this scalar field due to the introduction of an F-string ending indicates that one half of the original D-string must bend rigidly to form a 3-string junction. In spite of our reasonably well understanding of an F-string ending on a Dp-brane from the worldvolume point of view, our understanding of this same ending from the spacetime point of view is still unsatisfactory<sup>*</sup><sup>*</sup>* For some very recent efforts in this direction see .. The well-known $`p`$-brane solitonic solutions of type II supergravity theories , nowadays called Dp-branes, are merely hypersurfaces carrying RR charges each of which preserves one half of the spacetime supersymmetries of type II string theories. The mass per unit $`p`$-brane volume for such a configuration carrying unit RR charge is just the Dp-brane tension. This BPS configuration can also be described by its worldvolume Born-Infeld action in its simplest form with flat background and vanishing worldvolume gauge field strength which clearly indicates that each of the spacetime Dp-brane solitonic solutions does not have an F-string ending on it (this also explains why they preserve 1/2 rather than 1/4 of the spacetime supersymmetries). As just pointed out, a non-vanishing worldvolume gauge field strength is an indication of a string ending on the corresponding Dp-brane. In general we expect that such a configuration preserves a quarter of the spacetime supersymmetries. The question that we intend to address here is: Does there exist a BPS state for each Dp-brane that has a non-vanishing worldvolume gauge field strength and yet preserves one half of the spacetime supersymmetries? We will argue in this paper that the answer is yes based on known Dp-brane results and the 3-string junction. Each of these BPS states is actually a non-threshold bound state of a Dp brane carrying certain units of quantized constant electric field strength or a non-threshold (F, Dp) bound state with F representing the F-strings. There actually exist more general non-threshold bound states. 
For example, by the type IIB S-duality, we should have D3 branes carrying both quantized constant electric and magnetic fields. We will discuss the $`p=3`$ case in this paper and others in subsequent publications. In the following section, we review the relevant Dp-brane results for the purpose of this paper. In section 3, we present our arguments for the existence of such BPS states and conclude this paper.
## II Review of Some D-Brane Results
This section is largely based on the discussion of BPS states of a fundamental string (F-string) ending on a Dp-brane by Callan and Maldacena for the $`p\ge 2`$ cases and by Dasgupta and Mukhi for the $`p=1`$ case, in the linearized approximation. The linear arguments can be trusted since we are interested here only in BPS states. As in , we assume that the massless excitations of a Dp-brane are described by the dimensional reduction of the 10-dimensional supersymmetric Maxwell theory. The supersymmetry variation of the gaugino is $$\delta \chi =\mathrm{\Gamma }^{MN}F_{MN}ϵ,$$ (1) where $`M,N`$ are the 10-dimensional indices. A BPS configuration is one in which $`\delta \chi =0`$ for some non-vanishing Killing spinor. The ending of an F-string on a Dp-brane is equivalent to placing a point charge on the brane. The Coulomb potential due to such a point charge gives rise to a non-vanishing $`F_{0r}`$, with $`r`$ the radial coordinate of the $`p`$ spatial dimensions of the worldvolume. With a non-vanishing $`F_{0r}`$, it is obvious from Eq. (1) and $`\delta \chi =0`$ that the existence of non-vanishing Killing spinors (i.e., the preservation of some unbroken supersymmetries) requires the excitation of one of the scalar fields, say $`X^9`$, such that $`F_{9r}=\partial _rX^9=F_{0r}`$. Then $`\delta \chi =0`$ can be expressed in the familiar form $$(1-\mathrm{\Gamma }^0\mathrm{\Gamma }^9)ϵ=0,$$ (2) which says that one half of the worldvolume supersymmetries are broken by this configuration. In other words, this configuration of an F-string ending on a Dp-brane is still a BPS state, preserving one half of the worldvolume supersymmetries or a quarter of the spacetime supersymmetries. It is easy to check that $`F_{0r}=F_{9r}=c_p(p-2)/r^{p-1}`$ for $`p>2`$, $`F_{0r}=F_{9r}=c_2/r`$ for $`p=2`$, and, for $`p=1`$ (where, for concreteness, we assume that the original D-string is placed along the $`x^1`$-axis), $`F_{01}=F_{91}=c_1`$ for $`x^1>0`$ and $`F_{01}=F_{91}=0`$ for $`x^1<0`$, satisfy the corresponding linearized equations of motion. This has to be true to guarantee the existence of the corresponding BPS states. In the above, the constant $`c_p`$ is related to the point charge and can be fixed by a charge quantization condition which will be discussed later. To have a clear picture of an F-string ending on a Dp-brane, we need to examine the above BPS configurations more closely. The cases $`p\ge 2`$ and $`p=1`$ are quite different, so we discuss them separately. Let us discuss $`p>2`$ first. Here we can solve for $`X^9`$ from $`F_{9r}=c_p(p-2)/r^{p-1}=\partial _rX^9`$ as $`X^9=c_p/r^{p-2}`$. As explained in , the excitation of $`X^9`$ amounts to giving the brane a transverse ‘spike’ protruding in the 9 direction and running off to infinity. This spike must be interpreted as an F-string attached to the Dp-brane. Callan and Maldacena have shown that the energy change due to the introduction of a point charge on the Dp-brane worldvolume equals precisely the F-string tension times $`X^9`$, which is the energy of an F-string if the spike is interpreted as the F-string.
This also says that attaching an F-string to a Dp-brane does not cost any energy, which is not true in the case of $`p=1`$. The $`p=2`$ case is not much different from the $`p>2`$ cases, apart from the fact that $`X^9`$ now behaves according to $`X^9=c_2\mathrm{ln}r/\delta `$, with $`\delta `$ a small-distance cutoff, rather than like a ‘spike’. This $`X^9`$, a sort of inverse ‘spike’, should also be interpreted as an F-string because of the underlying D-brane picture and the similar energy relation (the energy change of the D2 due to the introduction of a point charge on the worldvolume can also be expressed as the F-string tension times $`X^9`$; here a large-distance cutoff needs to be introduced to make the calculation meaningful). So far, we have considered only the single-center Coulomb solution in the above BPS states describing an F-string ending on a Dp-brane for $`p\ge 2`$. The BPS nature of these configurations allows multi-center solutions. For example, for $`p>2`$, $`X^9`$ is now $$X^9=\sum _i\frac{c_p^i}{|\vec{r}-\vec{r}_i|^{p-2}},$$ (3) where the $`c_p^i`$ can be positive or negative, depending on the side of the Dp-brane to which an F-string is attached. This solution represents multiple strings along the $`X^9`$ direction ending at arbitrary locations on the brane. It is still BPS, preserving also a quarter of the spacetime supersymmetries. The energy change of the Dp-brane due to the endings of multiple strings is again equal to the sum of the F-string tension times the individual F-string lengths and is independent of the locations of the end points. Therefore, no attachment energy is spent for such endings. These multi-center solutions are one of the important properties which we need in section 3. The story for the $`p=1`$ case is quite different. As discussed in , the excitation of $`X^9`$ is no longer interpreted as an F-string ending on a D-string, but as an indication that one side of the original infinitely long and straight D-string or (0,1)-string must be bent rigidly (with vanishing axion) with respect to the point on the D-string where the point charge is inserted. Let us look at this in some detail, since the physics picture for this case is the starting point of our arguments for the existence of the Dp-brane bound states in the next section. In the presence of this point charge, Gauss’ law in one spatial dimension states, in the case of vanishing axion, that $`F_{01}=c_1`$ for $`x^1>0`$ and $`F_{01}=0`$ for $`x^1<0`$ (one can also take the alternative solution $`F_{01}=0`$ for $`x^1>0`$ and $`F_{01}=c_1`$ for $`x^1<0`$) when the original D-string or (0,1)-string lies along the $`x^1`$ axis. The unbroken susy condition $`F_{91}=\partial _1X^9=F_{01}`$ then says $$X^9=c_1x^1\text{ for }x^1>0,$$ (4) $$X^9=0\text{ for }x^1<0.$$ (5) Because of the special properties of 1 + 1 dimensional electrodynamics, the above solution increases linearly away from the inserted charge, in contrast to the $`p\ge 2`$ cases. Before the work of Dasgupta and Mukhi, Aharony et al. concluded that three strings are allowed to meet at one point provided there exist the corresponding couplings and the charges at the junction point are conserved. Schwarz then went one step further and conjectured, based on his ($`m`$,$`n`$)-strings in type IIB theory, that there exists a BPS state for such a 3-string junction provided the three strings are semi-infinite and the angles are chosen such that the tensions, treated as vectors, add up to zero.
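Schwarz’s balance condition can be spelled out for the junction relevant here (this worked step is added only for illustration and is not part of the original argument). With vanishing axion the three tensions are $`T_{(1,0)}=T_f`$, $`T_{(0,1)}=T_f/g`$ and $`T_{(1,1)}=\sqrt{1+1/g^2}T_f`$; if the (1,1)-segment makes an angle $`\theta `$ with the $`x^1`$-axis, requiring the tension vectors to sum to zero at the junction gives $$T_{(1,1)}\mathrm{cos}\theta =\frac{T_f}{g},T_{(1,1)}\mathrm{sin}\theta =T_f,$$ so that $`\mathrm{tan}\theta =g`$: for small string coupling the bent half of the D-string is only slightly tilted, so the rigid bending is a small correction, in line with the linearized treatment above.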
With this, one should not be surprised by the above solution and by the natural interpretation, as indicated already in for an F-string ending on a D-string, that the insertion of the point charge at the origin of the D-string causes one half of the string to bend rigidly. The solution itself does not spell out the ending of the F-string or (1,0)-string. But a consistent picture requires that the point charge represents the ending of a semi-infinitely long F-string or (1,0)-string (chosen here along the positive $`x^9`$ direction) coming in perpendicular to the original D-string or (0,1)-string along the $`x^1`$ direction. The bent segment described by Eq. (5) that goes out from the junction is a D-string carrying one unit of quantized electric flux, or Schwarz’s (1,1)-string (or ($`-1`$,$`-1`$)-string, depending on the orientation) in type IIB theory, which follows from charge conservation. The 3-string junction has also been studied by Sen from the spacetime point of view, based on Schwarz’s ($`m`$,$`n`$)-strings in type IIB theory. He showed that a 3-string junction indeed preserves 1/4 of the spacetime supersymmetries and that a string network, which also preserves 1/4 of the spacetime supersymmetries, can actually be constructed using 3-string junctions as building blocks. Such a string network may, to our understanding, correspond to the multi-center solutions of the $`p\ge 2`$ cases. The energy change of the D-string due to the ending of an F-string, unlike the $`p\ge 2`$ cases, is no longer equal to the F-string tension times the attached F-string length, primarily due to the formation of the (1,1)-string bound state. In summary, the 3-string junction, as the BPS state of an F-string ending on a D-string, is just the consequence of the D-brane picture, of 1 + 1 dimensional electrodynamics and of the non-perturbative SL(2,Z) strong-weak duality symmetry in type IIB string theory. One important point to notice, which is well known nowadays and will be useful in our later discussion, is that the $`m`$ F-strings in the ($`m`$,$`n`$)-string bound state in type IIB theory are just $`m`$ units of the quantized electric flux. Therefore an ($`m`$,1)-string can be viewed as a D-string carrying $`m`$ units of quantized electric flux. The ($`m`$,1)-string tension is $`\sqrt{1/g^2+m^2}T_f`$ with $`T_f=1/2\pi \alpha ^{\prime }`$ the F-string tension. For small string coupling $`g`$ and small $`m`$, the ($`m`$,1)-string tension can be approximated as $`(1/g+gm^2/2)T_f`$. Therefore, $`(gm^2/2)T_f`$ should correspond to the linearized energy per unit length of the worldsheet constant gauge field strength $`F_{01}`$, i.e., $`((2\pi \alpha ^{\prime }F_{01})^2/2g)T_f`$. So we have $`F_{01}=gmT_f`$, which fixes $`c_1=gT_f`$ for a single F-string. By T-dualities, the electric field $`F`$ due to the ending of F-strings on a Dp-brane is quantized according to $$\frac{1}{(2\pi )^{p-2}\alpha ^{(p-3)/2}}\int _{S_{p-1}}{}^{*}F=gm,$$ (6) where $`*`$ denotes the Hodge dual in the worldvolume. This is precisely the condition used in to fix the constant $`c_p`$.
## III $`\mathrm{D}_p`$ Brane Bound States
Until now, we understand that the obvious reason for the existence of ($`m`$,$`n`$)-strings in non-perturbative type IIB string theory is the SL(2,Z) strong-weak duality symmetry under which the NSNS and RR 2-form potentials transform as a doublet. However, as we will explain below, an ($`m`$,$`n`$)-string bound state is not special at all if it is interpreted as $`n`$ D-strings carrying $`m`$ units of quantized electric flux or field strength, as discussed in the previous section.
We will argue in this section that such a kind of bound states, i.e., a Dp-brane carrying certain units of quantized electric flux or field lines, actually exist for all Dp branes for $`1p8`$. All these bound states are BPS saturated and preserve one half of the spacetime supersymmetries just like an ($`m`$,$`n`$)-string. The fact that the ($`m`$,$`n`$)-strings were discovered earlier is because they can be easily recognized in the non-perturbative type IIB string theory and it happens that $`m`$ units of quantized electric flux or field strength can be identified as $`m`$ F-strings. Now to present our argument let us begin with a 3-string junction. Without loss of generality and for simplicity, let us focus on an F-string ending on a D-string with zero axion. As discussed in the previous section, the third string is a D-string carrying one unit of quantized electric flux. This is a stable BPS configuration which preserves a quarter of the spacetime supersymmetries. Suppose that we do not have an a priori knowledge of Schwarz’s ($`m`$,$`n`$)-strings and we do the linear study as Dasgupta and Mukhi described in the previous section. The D-brane picture makes it certain that there must exist a stable BPS configuration of an F-string ending on a D-string which preserves a quarter of the spacetime supersymmetries. So we must conclude from our linear analysis that this BPS state is a 3-string junction. The F-string remains as an F-string in the junction but the electric charge at the end of the F-string will create a constant electric field or flux flowing along either side of the D-string with respect to the end point. At the final stable state, one side of the D-string remains as the original D-string but the other side becomes a D-string carrying one unit of quantized electric flux. This appears as 3 different kinds of strings meeting at one point. We know that the D-string carrying one unit of quantized electric flux or field strength in the 3-string junction is semi-infinite. Now let us push the junction point to spatial infinity in such a way that the D-string carrying one unit of quantized electric flux is along one of the axes while the F-string and the D-string are all at spatial infinity. To a local observer, this D-string carrying one unit of quantized electric flux must appear to be a stable BPS configurationIf one thinks carefully, each of the two ends of an ($`m`$,$`n`$)-string of Schwarz at spatial infinity must be either associated with a 3-string junction or attached to any other allowable object.. Further, we must conclude that the D-string carrying certain units of quantized electric flux must be a BPS one preserving one half of the spacetime supersymmetries based on the facts that the 3-string junction preserves a quarter of the spacetime supersymmetries and there exist BPS saturated configurations for both the F-string and D-string each preserving one half of the spacetime supersymmetries. In the 3-string junction, we must also conclude that supersymmetry conditions from any two constituent strings can be independent and the supersymmetry conditions from the remaining string must be related to those from the other two strings We know that these are all true from Schwarz’s ($`m`$,$`n`$)-strings and Sen’s analysis of spacetime supersymmetry for 3-string junctions.. So we conclude that there exist a bound state of $`n`$ D-strings carrying $`m`$ units of quantized electric flux based on D-brane picture, charge conservation and the linear study discussed in the previous section. 
We now know that this bound state is just Schwarz’s ($`m`$,$`n`$)-string which provides one way to identify one unit of quantized electric flux or field line as an F-string<sup>\**</sup><sup>\**</sup>\**Another way of such an interpretation is given in . . The only thing special for the bound state of $`n`$ D-strings carrying $`m`$ units of quantized electric flux is the 1 + 1 dimensional electrodynamics which states that the gauge field strength is constant on one side of a point charge. If we can consistently have a constant electric field in a Dp-brane worldvolume, we find no reason that a Dp-brane carrying certain units of quantized electric flux should not exist from the above discussion. To make our arguments for the existence of such Dp-brane bound states clear, let us first consider a specific $`p=3`$ case. We take $`p=3`$ partially because of the current fashion of $`AdS_5/CFT_4`$ correspondence and partially because of the familiarity of the 1 + 3 dimensional electrodynamics. We will discuss the general cases for $`1p8`$ afterwards. In the case of D3-brane, we do not have the property of 1 + 1 dimensional electrodynamics. In general, when an F-string ends on a D3-brane, the F-string will be spike-like, not rigid, near the end point. But this will not prevent us from doing the same as we did above for $`p=1`$ case. As we will see, insisting a constant electric field in any finite spatial region of worldvolume in a consistent fashion will automatically push the endings of F-strings to spatial infinity. Therefore, the ‘spike’ will appear to be a rigid F-string to any finite region of space. The first question is what kind of electric charge distribution in 1 + 3 dimensions gives rise to a constant electric field<sup>††</sup><sup>††</sup>††It happens in this case that we can also have a bound state of a D3 brane carrying certain units of quantized constant magnetic field by the Type IIB S-duality. There actually exist such bound states for $`2p8`$ . For p = 3, we can have a bound state of a D3 brane carrying both quantized constant electric and magnetic fields. We will discuss the p = 3 bound states later in this section. There actually exist similar and more general bound states which will be discussed in forthcoming papers.. We know that a uniform 2-d surface charge distribution will do the job. The next question is where this surface should be placed. When we say a constant electric field, we mean that the field is constant not only in magnitude but also in direction in any finite region of space. So we have to place this charge surface at spatial infinity. Otherwise, the direction of the electric field will be opposite on the two sides of the surface. For concreteness, let us say that we label $`x^1,x^2`$ and $`x^3`$ as the 3-space of D3 brane and take the charge surface as $`x^2x^3`$-plane and place it at $`x^1=\mathrm{}`$. Now where does the surface charge come from? It all comes from the endings of parallel NSNS-strings, say along $`x^9`$ direction, on the $`x^2x^3`$-plane such that the resulting surface charge density is a constant. This is possible because of the existence of the multi-center solution discussed in the previous section. Since these NSNS-strings are parallel to each other, the whole system is still a BPS one, preserving a quarter of the spacetime supersymmetries. Note that these endings of F-strings are now at spatial infinity. 
Therefore the ‘spikes’ describing the endings of these F-strings have no influence on the electric field in any finite region of space. So everything fits together nicely. In any finite region, we can detect only the D3-brane carrying a constant electric field. By the same token as in the $`p=1`$ case, we must conclude that this bound state also preserves one half of the spacetime supersymmetries. Because the charge at the end of each of these NSNS-strings is quantized, we expect that the electric field should also be quantized. If each of these NSNS-strings is $`m`$ F-strings, we should have here $`F_{01}=gmT_f`$, with $`g`$ the corresponding string coupling constant. This can be obtained by T-dualities from the $`F_{01}=gmT_f`$ of the $`p=1`$ case (to be more precise, we T-dualize the D3-brane Born-Infeld action with flat background and non-vanishing constant worldvolume field $`F_{01}`$ along the $`x^2`$ and $`x^3`$ directions; we then end up with a D-string Born-Infeld action, from which we can read off $`F_{01}=gmT_f`$; noticing the relationship between the exact tension and the linearized tension in the $`p=1`$ case, we must have the tension of the D3-brane bound state as given in Eq. (7) below, since the two cases are related to each other by T-dualities). The discussion for general $`p`$ with $`2\le p\le 8`$ is not much different from the $`p=3`$ case. To be concrete, let us take the spatial dimensions of a Dp-brane along $`x^1,\mathrm{},x^p`$. The $`(p-1)`$-dimensional surface with uniform charge distribution, resulting from the endings of parallel NSNS-strings along, say, the $`x^9`$-direction, is taken as a $`(p-1)`$-plane along the $`x^2,\mathrm{},x^p`$ directions and is placed at $`x^1=\mathrm{\infty }`$. Then the electric field resulting from this charge surface will be constant and along the $`x^1`$-direction in any finite region of space. It is also quantized as $`F_{01}=gmT_f`$ for an NSNS-string (to be thought of as $`m`$ F-strings). The rest will be the same as in the case of $`p=3`$. Since $`F_{01}=gmT_f`$, we can use the corresponding Dp-brane action to determine the tension $`T_p(m,n)`$ describing $`n`$ Dp-branes carrying $`m`$ units of quantized constant electric field, which is $$T_p(m,n)=\frac{T_0^p}{g}\sqrt{n^2+g^2m^2},$$ (7) where $`T_0^p=1/(2\pi )^p\alpha ^{(p+1)/2}`$. This expression clearly indicates that the configuration of $`n`$ Dp-branes carrying $`m`$ units of quantized constant electric field, with $`m`$ and $`n`$ relatively prime integers, is a non-threshold bound state. So we conclude that $`n`$ Dp-branes carrying $`m`$ units of quantized constant electric field form a BPS non-threshold bound state which preserves one half of the spacetime supersymmetries. Since the quantized electric flux or field lines can be interpreted as F-strings, these bound states should be identified with the (F, Dp) bound states, which are also related to the ($`m`$,$`n`$)-string or (F, D1) by T-dualities along the transverse directions. But here we must be careful about the notation ‘F’ in (F, Dp). This F actually represents an infinite number of parallel NS-strings along, say, the $`x^1`$ direction, which are distributed evenly over a $`(p-1)`$-dimensional plane perpendicular to the $`x^1`$-axis (or the strings). As indicated above, each of these NS-strings is $`m`$ F-strings if $`F_{01}=gmT_f`$. The tension formula Eq. (7) implies that we should have one NS-string (or $`m`$ F-strings) per $`(2\pi )^{p-1}\alpha ^{(p-1)/2}`$ of area over the above $`(p-1)`$-plane.
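As a quick numerical check of the non-threshold nature of Eq. (7) (added here for illustration), note that $`m`$ F-strings per $`(2\pi )^{p-1}\alpha ^{(p-1)/2}`$ of transverse area contribute an energy density $`mT_f/(2\pi )^{p-1}\alpha ^{(p-1)/2}=mT_0^p`$ per unit $`p`$-brane volume, so the constituents alone would give $`nT_0^p/g+mT_0^p`$; Eq. (7) always lies below this, the difference being the binding energy.

```python
import math

def bound_tension(m, n, g, T0=1.0):
    """Eq. (7) in units of the Dp-brane tension scale T_0^p."""
    return (T0 / g) * math.sqrt(n**2 + g**2 * m**2)

def constituent_tension(m, n, g, T0=1.0):
    """n free Dp-branes plus the smeared F-string energy density."""
    return n * T0 / g + m * T0

for (m, n) in [(1, 1), (3, 2), (5, 1)]:
    g = 0.3
    tb, tc = bound_tension(m, n, g), constituent_tension(m, n, g)
    print(m, n, round(tb, 3), round(tc, 3), "binding:", round(tc - tb, 3))
```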
Since T-dualities preserve supersymmetries, we can see in a different way that these bound states preserve one half of the spacetime supersymmetries since the original (F,D1) preserves one half of the spacetime supersymmetries. We will use this identification and perform T-dualities to construct explicitly the spacetime configurations for these bound states in a forthcoming paper. We will show there that the tension formula Eq. (7) holds and there are indeed $`m`$ F-strings per $`(2\pi )^{p1}\alpha ^{(p1)/2}`$ area of ($`p1`$)-dimensions. The spacetime configurations for (F, Dp) for $`p=3,4,6`$ have been given in , respectively. Once we have the above, it should not be difficult to have a non-threshold bound state of $`n`$ D3 branes carrying $`q`$ units of quantized constant magnetic field with $`n,q`$ relatively prime. All we need is to replace the F-strings in the above for $`p=3`$ case by D-strings. If we also choose the quantized constant magnetic field along the $`x^1`$-axis, we must have $`F_{23}=qT_f`$ from the discussion in about a D-string ending on a D3 brane. The corresponding tension is $$T_3(q,n)=\frac{T_0^3}{g}\sqrt{n^2+q^2}.$$ (8) This tension formula implies that the linearized approximation on the D3 brane worldvolume is good only if $`nq`$. This bound state should correspond to the so-called (D1, D3) bound state. Again, we should have an infinite number of D-strings in this bound state and there should be $`q`$ D-strings per $`(2\pi )^2\alpha ^{}`$ area over the $`x^2x^3`$-plane. Similarly, if we replace the F-strings or D-strings by ($`m`$,$`q`$)-strings in the above, we should end up with a non-threshold bound state of $`n`$ D3 branes carrying $`m`$ units of quantized electric flux lines and $`q`$ units of quantized magnetic flux lines with any two of the three integers relatively prime. The tension for this bound state is $$T_3(m,q,n)=\frac{T_0^3}{g}\sqrt{n^2+q^2+g^2m^2}.$$ (9) The linearized approximation on the worldvolume is good if either $`nq,nm`$ for fixed and finite $`g`$ or $`nq`$ for small $`g`$ and finite $`m`$. We denote this bound state as ((F, D1),D3). We should also have an infinite number of ($`m`$,$`q`$)-strings in this bound state. We also have one ($`m`$,$`q`$)-string per $`(2\pi )^2\alpha ^{}`$ area over the $`x^2x^3`$-plane. In , we will construct explicit configuration for ((F,D1),D3) bound state which gives the (D1, D3) bound state as a special case. We will confirm all the above mentioned properties for them. The spacetime configurations for (Dp, D(p + 2)) for $`0p4`$ have been given in . Note added: After the submission of this paper to hep-th, we were informed that the existence of the bound states of Dp branes carrying constant electric fields was also discussed in but in a completely different approach of the mixed boundary conditions. ###### Acknowledgements. JXL acknowledges the support of NSF Grant PHY-9722090.
## 1 Introduction
In recent years several molecular dynamics computer simulations have been done in order to investigate the structure and dynamics of sodium silicate melts and glasses (Smith, Greaves, and Gillan 1995, Cormack and Cao 1997). By using the potential proposed by Vessal, Amini, Fincham and Catlow (1989), these authors have found that the structure of systems like, e.g., sodium disilicate (SDS) is characterized by a microsegregation in which the sodium atoms form clusters of a few atoms between bridged $`\mathrm{SiO}_4`$ units. In order to see whether this somewhat surprising result is reproduced also by a model different from the one of Vessal et al. (1989), we have performed simulations of SDS using a different potential (discussed in detail below). In addition to the investigation of the structure we also study the dynamical properties of SDS, in order to see whether the finite size effects that have been observed in pure silica (Horbach, Kob, Binder, and Angell 1996, Horbach, Kob, and Binder 1999a, and Horbach, Kob, and Binder 1999b) are present in SDS as well.
## 2 Model
The model potential we use to describe the interactions between the ions in SDS is the one proposed by Kramer, de Man, and van Santen (1991), which is a generalization of the so-called BKS potential (van Beest, Kramer, and van Santen 1990) for pure silica. It has the following functional form: $$\varphi (r)=\frac{q_\alpha q_\beta e^2}{r}+A_{\alpha \beta }\mathrm{e}^{-B_{\alpha \beta }r}-\frac{C_{\alpha \beta }}{r^6}\alpha ,\beta \in [\mathrm{Si},\mathrm{Na},\mathrm{O}].$$ (1) Here $`r`$ is the distance between an ion of type $`\alpha `$ and an ion of type $`\beta `$. The values of the parameters $`A_{\alpha \beta },B_{\alpha \beta }`$ and $`C_{\alpha \beta }`$ can be found in the original publication. The potential (1) has been optimized by Kramer et al. for zeolites, i.e. for systems that have Al ions in addition to Si, Na and O. In that paper the authors used for silicon and oxygen the partial charges $`q_{\mathrm{Si}}=2.4`$ and $`q_\mathrm{O}=-1.2`$, respectively, whereas sodium was assigned its real ion charge $`q_{\mathrm{Na}}=1.0`$. With this choice charge neutrality is not fulfilled in systems like SDS. To overcome this problem we introduced for the sodium ions a position dependent charge $`q(r)`$ instead of $`q_{\mathrm{Na}}`$, $$q(r)=\{\begin{array}{cc}0.6\left(1+\mathrm{ln}\left[C\left(r_\mathrm{c}-r\right)^2+1\right]\right)\hfill & r<r_\mathrm{c}\hfill \\ 0.6\hfill & r\ge r_\mathrm{c}\hfill \end{array}$$ (2) which means that for $`r\ge r_\mathrm{c}`$ charge neutrality is valid ($`q(r)=0.6`$ for $`r\ge r_\mathrm{c}`$). Note that $`q(r)`$ is continuous at $`r_\mathrm{c}`$. We have fixed the parameters $`r_\mathrm{c}`$ and $`C`$ such that the experimental mass density of SDS and the static structure factor from a neutron scattering experiment (see below) are reproduced well. From this fitting we have obtained the values $`r_\mathrm{c}=4.9`$ Å and $`C=0.0926`$ Å⁻². With this choice the charge $`q(r)`$ crosses smoothly over from $`q(r)=1.0`$ at $`1.7`$ Å to $`q(r)=0.6`$ for $`r\ge r_\mathrm{c}`$. The simulations have been done at constant volume with the density of the system fixed to $`2.37\mathrm{g}/\mathrm{cm}^3`$. The equations of motion were integrated with the velocity form of the Verlet algorithm, and the Coulombic contributions to the potential and the forces were calculated via Ewald summation. The time step of the integration was $`1.6`$ fs.
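For concreteness, a minimal sketch of the interaction model of Eqs. (1) and (2) is given below. Only $`r_\mathrm{c}=4.9`$ Å and $`C=0.0926`$ Å⁻² are taken from the text; the pair parameters $`A_{\alpha \beta },B_{\alpha \beta },C_{\alpha \beta }`$ must be taken from Kramer et al. (1991) and appear here only as placeholders, the eV/Å unit choice is an assumption, and how the $`r`$-dependent sodium charge is combined with the Ewald summation follows the authors' implementation, which is not specified here.

```python
import math

R_C = 4.9        # Angstrom, cutoff quoted in the text
C_LN = 0.0926    # Angstrom^-2, quoted in the text
COULOMB_K = 14.3996  # e^2/(4*pi*eps0) in eV*Angstrom (assumed unit system)

def sodium_charge(r):
    """Position-dependent Na charge of Eq. (2); r in Angstrom."""
    if r >= R_C:
        return 0.6
    return 0.6 * (1.0 + math.log(C_LN * (R_C - r) ** 2 + 1.0))

def pair_potential(r, q_a, q_b, A, B, C):
    """Pair potential of Eq. (1) in eV; A, B, C are pair-specific
    parameters to be taken from Kramer et al. (1991)."""
    return COULOMB_K * q_a * q_b / r + A * math.exp(-B * r) - C / r ** 6

# The charge interpolates from ~1.0 near 1.7 Angstrom to 0.6 at r_c:
print(round(sodium_charge(1.7), 3), sodium_charge(4.9))
```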
In this paper we investigate the properties of SDS in the liquid state at $`T=2100`$ K and in the glass state at $`T=300`$ K. The equilibration time at $`T=2100`$ K was two million time steps, corresponding to a real time of $`3.5`$ ns. At this temperature we simulated systems with $`N=1008`$ and $`N=8064`$ particles. In order to improve the statistics, two independent runs were done for the large system and eight independent runs for the small system. The glass state was produced by cooling the system from equilibrated configurations at $`T=1900`$ K with a cooling rate of $`1.16\times 10^{12}`$ K/s. The pressure is $`4.5`$ GPa at $`T=2100`$ K and $`0.96`$ GPa at $`T=300`$ K.
## 3 Results
In order to demonstrate that our model is able to reproduce the structure of real SDS very well, we compare the static structure factor $`S^{\mathrm{neu}}(q)`$ with the one from a neutron scattering experiment by Misawa, Price, and Suzuki (1980). To calculate $`S^{\mathrm{neu}}(q)`$ one has to weight the partial structure factors from the simulation with the experimental coherent neutron scattering lengths $`b_\alpha `$ ($`\alpha \in [\mathrm{Si},\mathrm{Na},\mathrm{O}]`$): $$S^{\mathrm{neu}}(q)=\frac{1}{\sum _\alpha N_\alpha b_\alpha ^2}\sum _{kl}b_kb_l\langle \mathrm{e}^{i𝐪[𝐫_k-𝐫_l]}\rangle .$$ (3) The values of $`b_\alpha `$ are $`0.4149\times 10^{-12}`$ cm, $`0.363\times 10^{-12}`$ cm and $`0.5803\times 10^{-12}`$ cm for silicon, sodium and oxygen, respectively. They are taken from Susman, Volin, Montague, and Price (1991) for silicon and oxygen and from Bacon (1972) for sodium. Fig. 1 shows $`S^{\mathrm{neu}}(q)`$ from the simulation and the experiment at $`T=300`$ K. We see that the overall agreement between simulation and experiment is good. For $`q>2.3`$ Å⁻¹, which corresponds to length scales of next nearest Si–O and Na–O neighbors, the largest discrepancy is at the peak located at $`q=2.8`$ Å⁻¹, where the simulation underestimates the experiment by approximately $`15`$% in amplitude. Very well reproduced is the peak at $`q=1.7`$ Å⁻¹, which is called the first sharp diffraction peak and which is a prominent feature in pure silica as well. In silica this peak arises from the tetrahedral network structure, since the length scale which corresponds to it, i.e. $`2\pi /1.7`$ Å⁻¹ $`=3.7`$ Å, is approximately the spatial extent of two connected $`\mathrm{SiO}_4`$ tetrahedra. From the figure we recognize that this structure is partly present in SDS also. The peak at $`q=0.95`$ Å⁻¹ is not present in the experimental data, which might be due to the fact that in this $`q`$ region the experimental resolution is not sufficient. By looking at the coordination number distributions, discussed below, we see that the peak at $`q=0.95`$ Å⁻¹ is related to a super structure which is formed by the sodium and silicon atoms. In agreement with this interpretation, the length scale corresponding to this peak, i.e. $`2\pi /0.95`$ Å⁻¹ $`=6.6`$ Å, is two times the mean distance of nearest Na–Na or Na–Si neighbors. The coordination number distribution $`P_{\alpha \beta }(z)`$ for different pairs $`\alpha \beta `$ gives the probability that an ion of type $`\alpha `$ has exactly $`z`$ nearest neighbors of type $`\beta `$. By definition, two neighboring atoms have a distance from each other which is less than the location of the first minimum $`r_{\mathrm{min}}`$ of the corresponding partial pair correlation function $`g_{\alpha \beta }(r)`$.
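A minimal sketch of how $`P_{\alpha \beta }(z)`$ can be accumulated from a stored configuration is given below; it assumes a cubic box with the minimum-image convention and takes the neighbor cutoff $`r_{\mathrm{min}}`$ as an argument (the values actually used are listed next).

```python
import numpy as np

def coordination_distribution(pos_a, pos_b, box_length, r_min, z_max=12):
    """P_ab(z): fraction of type-a ions having exactly z type-b neighbours
    within r_min, for a cubic box with the minimum-image convention.
    For like pairs (e.g. Na-Na) the self term at distance 0 must be excluded."""
    hist = np.zeros(z_max + 1)
    for r_a in pos_a:
        d = pos_b - r_a
        d -= box_length * np.round(d / box_length)   # minimum image
        dist = np.sqrt((d ** 2).sum(axis=1))
        z = int((dist < r_min).sum())
        hist[min(z, z_max)] += 1
    return hist / len(pos_a)

# Random coordinates, only to show the call (Na-O cutoff 3.1 Angstrom):
rng = np.random.default_rng(1)
na = rng.random((50, 3)) * 20.0
ox = rng.random((125, 3)) * 20.0
print(coordination_distribution(na, ox, 20.0, r_min=3.1))
```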
From the functions $`g_{\alpha \beta }(r)`$ we find for $`r_{\mathrm{min}}`$ the values $`3.6`$ Å, $`5.0`$ Å, $`2.35`$ Å, $`5.0`$ Å, $`3.1`$ Å, and $`3.15`$ Å for the Si–Si, Si–Na, Si–O, Na–Na, Na–O, and O–O correlations. We note that $`P_{\mathrm{Si}\mathrm{O}}(z)`$ is larger than $`0.99`$ for $`z=4`$ at $`T=2100`$ K and $`T=300`$ K which means that nearly every silicon atom is four fold coordinated with oxygen atoms forming a SiO<sub>4</sub> tetrahedron. Some of the distribution functions are shown in Fig. 2. We recognize from Fig. 2a that about $`65`$% of the oxygen atoms are bridging oxygens between two tetrahedra ($`P_{\mathrm{O}\mathrm{Si}}(z=2)0.65`$), and that about $`28`$% of the oxygen atoms form dangling bonds ($`z=1`$) with corresponding silicon atoms. In the neighborhood of these dangling bonds sodium atoms are located. This means that the sodium atoms partly destroy the (disordered) tetrahedral network which is the structure for pure silica. A significant number of oxygen atoms have no silicon atoms, but only sodium atoms as direct neighbors. In Fig. 2b we show $`P_{\mathrm{Na}\mathrm{Na}}(z)`$ at $`T=2100`$ K and $`T=300`$ K, and we see that essentially the two distributions coincide with a a mean value between $`z=8`$ and $`z=9`$. Basically the same distribution is found for $`P_{\mathrm{Na}\mathrm{Si}}(z)`$. Therefore, every sodium atom is surrounded by $`8`$$`9`$ other sodium atoms and $`8`$$`9`$ silicon atoms. Since the mean distance between Na–Na and Na–Si neighbors is approximately the same, i.e. about $`3.3`$ Å, we can conclude that sodium and silicon atoms form a spherical super structure in which every sodium atom is surrounded by a first shell of oxygen atoms and a second shell of silicon and sodium atoms. In Fig. 2b we have also included $`P_{\mathrm{Na}\mathrm{O}}(z)`$ and we recognize that every sodium atom has on average about $`4`$$`5`$ nearest oxygen neighbors. There is again no essential difference between $`P_{\mathrm{Na}\mathrm{O}}`$ for $`T=2100`$ K and $`P_{\mathrm{Na}\mathrm{O}}`$ for $`T=300`$ K, although the pressure is about a factor $`4.5`$ higher at $`T=2100`$ K. This means that the structural features which we observe at $`T=2100`$ K are not formed due to the relatively high pressure. Nevertheless, we emphasize that the small difference between $`P(z)`$ in the liquid and in the glass state is partly due to the high cooling rate of about $`10^{12}\mathrm{K}/\mathrm{s}`$ we used to produce the structures at $`T=300`$ K. A careful analysis of the cooling rate effects for SDS will be presented elsewhere (Horbach, Kob, and Binder 1999c). Having described the structure of SDS we turn now our attention to a dynamical quantity, namely the self part of the dynamic structure factor $`S_\mathrm{s}(q,\nu )`$, which depends on frequency $`\nu `$ and the magnitude of the wave–vector $`q`$. It is defined by $`S_\mathrm{s}(q,\nu )`$ $`=`$ $`{\displaystyle \frac{N_\alpha }{N}}{\displaystyle _{\mathrm{}}^{\mathrm{}}}𝑑t\mathrm{e}^{2\pi \nu t}\mathrm{e}^{i𝐪[𝐫_\alpha (t)𝐫_\alpha (0)]}`$ (4) $`=`$ $`{\displaystyle \frac{N_\alpha }{N}}{\displaystyle _{\mathrm{}}^{\mathrm{}}}𝑑t\mathrm{e}^{2\pi \nu t}F_\mathrm{s}(q,t)\alpha [\mathrm{Si},\mathrm{Na},\mathrm{O}]`$ where $`𝐫_\alpha (t)`$ is the position vector of a tagged particle of type $`\alpha `$ at time $`t`$ and $`F_\mathrm{s}(q,t)`$ is the incoherent intermediate scattering function. The details of the Fourier transformation in (4) are given elsewhere (Horbach et al. 1999a). 
$`S_\mathrm{s}(q,\nu )`$ for silicon, sodium and oxygen is shown in Fig. 3a at the temperature $`T=2100`$ K and the wave–vector $`q=1.7`$ Å<sup>-1</sup> for the two system sizes $`N=1008`$ and $`N=8064`$. For sodium the curves for the small and the large system coincide over the whole frequency range. This is not the case for silicon and oxygen for which $`S_\mathrm{s}(q,\nu )`$ has no system size dependence for frequencies $`\nu 1.1`$ THz whereas for smaller frequencies there is a missing of intensity for the small system. The vibrational modes causing a shoulder around $`\nu =0.9`$ THz in $`S_\mathrm{s}(q,\nu )`$ are usually called boson peak excitations. In the small system a part of these excitations is missing due to the loss of intensity for $`\nu 1.1`$ THz. An explanation of this behavior for the case of silica, which shows qualitatively the same finite size effects, can be found in Horbach et al. (1999a and 1999b). Due to the sum rule $`𝑑\nu S_\mathrm{s}(q,\nu )=1`$ the missing of the boson peak excitations for frequencies 0.4 THz $`\nu `$ 1.1 THz in the small system has to be “reshuffled” to smaller frequencies leading to a broadening and an increase of the quasielastic line around $`\nu =0`$. Since the quasielastic line is outside the frequency resolution of our Fourier transformation the consequences in the change of the quasielastic line can be observed better in the Fourier transform of $`S_\mathrm{s}(q,\nu )`$, i.e. the incoherent intermediate scattering function $`F_\mathrm{s}(q,t)`$, which is shown in Fig. 3b for the system sizes $`N=8064`$ and $`N=1008`$ at $`q=1.7`$ Å<sup>-1</sup>. We recognize from this figure that $`F_\mathrm{s}(q,t)`$ shows a two step relaxation behavior similar to the case of silica and fragile glassformers (Ngai, Riande, and Ingram 1998). As expected from our results for $`S_\mathrm{s}(q,\nu )`$ the scattering functions $`F_\mathrm{s}(q,t)`$ have no system size dependence for sodium. In $`F_\mathrm{s}(q,t)`$ for silicon and oxygen the height of the plateau increases and the $`\alpha `$ relaxation process shifts to longer times with decreasing system size. Furthermore, the scattering functions for the small system show a pronounced oscillation for $`t>0.2`$ ps which is due to the fact that the boson peak excitations present in the small system cause a peak in $`S_\mathrm{s}(q,\nu )`$ whereas in the large system only a shoulder is present. Finally we mention that the finite size effects in the dynamics of SDS are found in the whole $`q`$ range and, moreover, do not affect the static properties of SDS. Acknowledgments: This work was supported by SFB 262/D1 and by Deutsch–Israelisches Projekt No. 352–101. We also thank the RUS in Stuttgart for a generous grant of computer time on the Cray T3E. ## References * Bacon, G. E., 1972, Acta Cryst. A, 28, 357. * Cormack, A. N., and Cao, Y., in Modelling of Minerals and Silicated Materials, Eds.: B. Silvi and P. Arco (Kluwer, Dordrecht 1997). * Horbach, J., Kob, W., Binder, K., and Angell C. A., 1996, Phys. Rev. E, 54, R5889. * Horbach, J., Kob, W., and Binder, K., 1999a (submitted to Phys. Rev. B) * Horbach, J., Kob, W., and Binder, K., 1999b, to appear in the Proceedings on Neutrons and Numerical Methods, Grenoble, 1998, preprint in cond-mat/9901162 * Horbach, J., Kob, W., and Binder, K., 1999c (to be published) * Kramer, G. J., de Man, A. J. M., and van Santen, R. A., 1991, J. Am. Chem. Soc., 113, 6435. * Misawa, M., Price, D. L., and Suzuki, K., 1980, J. Non–Cryst. Solids, 37, 85. * Ngai, K. 
L., Riande, E., and Ingram, M.D., (editors), 1998, J. Non–Cryst. Solids, 235–237 (Proceedings of the Third International Discussion Meeting on Relaxations in Complex Systems, Vigo, 1997). * Smith, W., Greaves, G. N., and Gillan, M. J., 1995, J. Chem. Phys., 103, 3091. * Susman, S., Volin, K. J., Montague, D. G., and Price, D. L., 1991, Phys. Rev. B, 43, 11076. * van Beest B. W., Kramer G. J., and van Santen R. A., 1990, Phys. Rev. Lett., 64 1955. * Vessal, B., Amini, A., Fincham, D., and Catlow, C. R. A., 1989, Philos. Mag. B, 60, 753.