[ar5iv] no-problem/9908/cond-mat9908187.html
# Spatial structure of an incompressible Quantum Hall strip

Abstract

The incompressible Quantum Hall strip is sensitive to charging of localized states in the cyclotron gap. We study the effect of localized states by a density functional approach and find the electron density and the strip width as a function of the density of states in the gap. Another important effect is electron exchange. By using a model density functional which accounts for the negative compressibility of the QH state, we find the electron density around the strip. At large exchange, the density profile becomes nonmonotonic, indicating formation of a 1D Wigner crystal at the strip edge. Both effects, localized states and exchange, lead to a substantial increase of the strip width.

1. Introduction

The theory of the QHE predicts that near integer filling the system divides into compressible regions separated by incompressible strips . The potential distribution within a QHE sample was recently imaged by using capacitance probes , an atomic force microscope , and a single electron transistor . High resolution images of the incompressible strip give a strip width several times larger than the theoretical prediction . To bridge theory and experiment one has to extend the analysis to include (i) the effect of disorder, which produces a finite density of states in the cyclotron gap, and (ii) electron exchange correlations, which affect the compressibility of the QHE state. Below we study these effects using a density functional approach, taking special care of the effect of the large dielectric constant ($\epsilon_{\mathrm{GaAs}}=12.1$). Because of the relatively small depth of the 2DEG beneath the semiconductor surface, the interparticle interaction within the 2DEG is affected by image charges. This changes the electrostatics of the strip and modifies the potential induced on the exposed surface. A finite density of states in the QHE gap gives rise to a finite screening length.
For an incompressible strip of width exceeding this screening length, we find a large departure from the theory , in agreement with . The results compare well with the experiment . The effect of electron exchange is important in determining the structure of the compressible regions adjacent to the strip. Exchange correlation gives rise to a negative compressibility of the 2DEG. We treat the negative compressibility by using a model density functional, and show that it strongly alters the distribution of electric charge, even to the extent that the potential and the charge density profiles can become nonmonotonic.

2. The effect of a finite density of states in the cyclotron gap

Incompressible strips are formed in regions of nonuniform 2DEG density, at nearly integer filling, created either by perturbing the exposed surface with an STM probe or by gating the 2DEG . The strips are aligned normal to the average 2DEG density gradient. The charge distribution around the strip is controlled by electrostatics . The density $n(r)$ in the 2DEG buried at a distance $d$ beneath the semiconductor surface can be found by minimizing a density functional:
$$U_{\mathrm{ext}}(r)=\int V(r-r')\,n(r')\,d^2r'+\mu(n),\qquad V(r)=\frac{e^2}{\epsilon|r|}+\frac{(\epsilon-1)e^2}{\epsilon(\epsilon+1)\sqrt{r^2+4d^2}}$$ (1)
Here $U_{\mathrm{ext}}(r)$ is the external potential due to donors or gates, the Hartree interaction $V(r-r')$ takes into account image charges, and the chemical potential $\mu$ includes various non-Hartree contributions: finite density of states, QHE gap, exchange effects, etc. Since the scale of the observed structures is always larger than the 2DEG depth $d$, we replace the Hartree interaction in (1) by $V(r)=2e^2/((\epsilon+1)|r|)$, assuming $|r|\gg d$.
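As a quick numerical sanity check (ours, not part of the original paper), one can verify that the image-charge interaction of Eq. (1) indeed crosses over to the form $2e^2/((\epsilon+1)|r|)$ for $r\gg d$. A minimal sketch in Python, with $e^2$ and $d$ set to 1 in arbitrary units:

```python
import math

EPS = 12.1  # GaAs dielectric constant quoted in the text

def V_full(r, d=1.0, e2=1.0, eps=EPS):
    """Interaction of Eq. (1): direct Coulomb term plus image-charge term."""
    return e2 / (eps * r) + (eps - 1) * e2 / (eps * (eps + 1) * math.sqrt(r * r + 4 * d * d))

def V_far(r, e2=1.0, eps=EPS):
    """Large-distance form 2 e^2 / ((eps + 1) |r|) used in the text for |r| >> d."""
    return 2 * e2 / ((eps + 1) * r)

ratio_far = V_full(100.0) / V_far(100.0)   # |r| = 100 d: the two forms agree
ratio_near = V_full(1.0) / V_far(1.0)      # |r| = d: image term not yet fully developed
```

At $r=100d$ the two forms agree to better than $0.1\%$, while at $r=d$ the full interaction is noticeably weaker than the asymptotic form, which is why the replacement is restricted to $|r|\gg d$.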
In this section we consider a free electron model for $\mu(n)$ which includes degenerate (i.e., infinitely compressible) Landau level states and localized states in the QH gap:
$$dn/d\mu=(n_{\mathrm{LL}}-n_{\mathrm{gap}})\sum_{m>0}\delta(\mu-m\hbar\omega_c)+n_{\mathrm{gap}}/\hbar\omega_c,\qquad n_{\mathrm{LL}}=eB/hc.$$ (2)
In the simplest model, Eq. (2), the density of localized states is constant. Below we focus on the $m=1$ QHE plateau. To introduce the 2DEG density gradient into the problem, we express $U_{\mathrm{ext}}$ in terms of a fictitious positive charge density within the 2DEG plane:
$$U_{\mathrm{ext}}(r)=\int V(r-r')\,n_{\mathrm{eff}}(r')\,d^2r',\qquad n_{\mathrm{eff}}(r)=n_{\mathrm{LL}}+\vec{r}\cdot\vec{\nabla}n$$ (3)
For $\mu(n)\equiv 0$ in (1), i.e. without magnetic field, it follows from (1) that $n(r)=n_{\mathrm{eff}}(r)$. Now we nondimensionalize the problem by choosing
$$w_0=\left(\frac{(\epsilon+1)\hbar\omega_c}{2e^2|\nabla n|}\right)^{1/2}\quad\mathrm{and}\quad n_0=\left(\frac{(\epsilon+1)\hbar\omega_c|\nabla n|}{2e^2}\right)^{1/2}$$ (4)
as the length and density units. Then the only remaining dimensionless parameter is the ratio of the fully incompressible strip width $w_0$ to the screening radius $r_s$ corresponding to the density of states $n_{\mathrm{gap}}/\hbar\omega_c$ in the gap. Up to a factor of $2\pi$, this ratio is $\gamma=n_{\mathrm{gap}}/n_0=n_{\mathrm{gap}}\left(2e^2/(\epsilon+1)\hbar\omega_c|\nabla n|\right)^{1/2}$. The nondimensionalized problem reads
$$\int\frac{(x'-\delta n(r'))\,d^2r'}{|r-r'|}=\int_0^{\delta n(r)}F_\gamma(u)\,du$$ (5)
where $\delta n(r)=n(r)-n_{\mathrm{LL}}$, and $F_\gamma(u)=\gamma^{-1}$ for $|u|<\gamma/2$, and $0$ otherwise. Here the coordinate system is such that the $x$ axis is normal to the strip and the $y$ axis is parallel to it. One can obtain exact results for $\gamma\to 0$ and $\gamma\gg 1$. The strip width at $\gamma=0$ is $2w_0/\pi$, in accord with the electrostatics problem .
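To illustrate the scales (4) and the parameter $\gamma$, here is a back-of-the-envelope evaluation (ours). The magnetic field (taken from filling factor 2 at the total density quoted in the text) and the GaAs effective mass entering $\hbar\omega_c$ are our assumptions, not values given in the paper:

```python
import math

e2 = 1.44e-7            # e^2 in eV*cm
eps = 12.1              # GaAs dielectric constant
n_total = 1.5e11        # cm^-2 (quoted in the text)
grad_n = 2e14           # cm^-3, i.e. 2*10^10 cm^-2 per micron (quoted in the text)
n_gap = 0.03 * n_total  # cm^-2 (quoted in the text)

# Assumptions: B from nu = 2 filling; hbar*omega_c = 1.728 meV/T for m* = 0.067 m_e
B_tesla = n_total * 4.136e-7 / 2 / 1e4   # flux quantum hc/e = 4.136e-7 G*cm^2
hw_c = 1.728e-3 * B_tesla                # eV

w0 = math.sqrt((eps + 1) * hw_c / (2 * e2 * grad_n))   # cm, Eq. (4)
n0 = w0 * grad_n                                       # cm^-2, since n0 = w0*|grad n|
gamma = n_gap / n0
w = (2 / math.pi + gamma) * w0                         # strip width estimate, Eq. (6)
```

With these (assumed) inputs this gives $w_0\approx 0.35\,\mu\mathrm{m}$ and $\gamma$ of order unity, consistent with the values quoted in the text, and a strip width $w\approx 0.45\,\mu\mathrm{m}$, close to the observed $0.5\,\mu\mathrm{m}$.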
At $\gamma\gg 1$ the deviation from a constant density gradient is small, because the electrostatic potential is well screened. In this case the spatial variation of the chemical potential follows that of the density, increasing by $\hbar\omega_c$ across the strip. Hence the strip width is $n_{\mathrm{gap}}/|\nabla n|$. We solve the problem numerically for all $\gamma$ (see Fig.1). In the whole range of $\gamma$ the strip width is reasonably accurately given by the formula
$$w\approx(2/\pi+\gamma)\,w_0=(2/\pi)\,w_0+n_{\mathrm{gap}}/|\nabla n|,$$ (6)
interpolating between the two exactly solvable limits. A common model for the density of states in a Landau level is a broad line (gaussian or lorentzian) with the states localized in the tail. In this model, the transition between the (less compressible) strip and the (more compressible) outer region will be more gradual than in the model considered. The estimate (6) for the width of the strip, however, will remain correct, provided that $n_{\mathrm{gap}}$ measures the total number of states in the Landau level tails. In the experiment , at the $m=2$ plateau, the density of states in the gap is $n_{\mathrm{gap}}\approx 0.03\,n_{\mathrm{total}}$, where $n_{\mathrm{total}}=1.5\times 10^{11}\,\mathrm{cm}^{-2}$. The density gradient is $\nabla n\approx 2\times 10^{10}\,\mathrm{cm}^{-2}/\mu\mathrm{m}$. Substituting this in (4), we get $w_0=0.3\,\mu\mathrm{m}$ and $\gamma\approx 1$. In the fully incompressible case , the strip width would be $2w_0/\pi=0.2\,\mu\mathrm{m}$. The observed width of $0.5\,\mu\mathrm{m}$ agrees with Eq. (6) for the estimated $\gamma$.

3. The effect of negative compressibility of the compressible edge

For a fully incompressible strip ($n_{\mathrm{gap}}=0$), the density is constant within the strip and varies outside as the square root of the distance from the strip edge . Here we study how this behavior is modified due to the finite compressibility of the Landau level states.
The Thomas–Fermi recipe is to use Eq. (1) with $\mu(n)=\delta n/\kappa$, where $\kappa$ is the compressibility. Such a model, however, is inconsistent, because of the negative sign of $\kappa$ in the QH state . The Thomas–Fermi problem with $\kappa<0$ leads to an unphysical instability. The difficulty is circumvented by realizing that the exchange interaction in the case of negative compressibility is essentially nonlocal . This motivates using in (1) an effective interaction which is simplest to write in the Fourier representation:
$$V_{\mathrm{eff}}(k)=\frac{4\pi e^2}{(\epsilon+1)|k|}\Lambda(k),\qquad \Lambda(k)>0,\quad\Lambda(0)=1,\quad\Lambda'(0)=\frac{(\epsilon+1)}{4\pi e^2\kappa}=-a,$$ (7)
where $a>0$ is the screening length. The interaction (7) with the listed restrictions on $\Lambda(k)$ ensures stability as well as the correct Hartree interaction and compressibility. Otherwise, one can make any reasonable choice of $\Lambda(k)$ at $ka>1$. The problem (1) near the strip edge, with $V_{\mathrm{eff}}$ of the form (7) accounting for exchange effects, can be solved by the Wiener–Hopf method . For that, we write $\delta n(x)=n^+(x)\theta(x)+n^-(x)\theta(-x)$, where $x>0$ is the compressible region, and Fourier transform Eq. (1): $V_{\mathrm{eff}}(k)\,n_k^+=U_{\mathrm{ext}}^+(k)+U_{\mathrm{ext}}^-(k)$, where $n_k^\pm$ and $U_{\mathrm{ext}}^\pm(k)$ are analytic in the upper and lower complex $k$ half-planes, respectively. The Wiener–Hopf trick requires factoring $V_{\mathrm{eff}}(k)=A_k^+/A_k^-$, where $\pm$ indicates the analyticity half-plane. Then $A_k^-U_{\mathrm{ext}}^+(k)=\left[A_k^-U_{\mathrm{ext}}^+(k)\right]^++\left[A_k^-U_{\mathrm{ext}}^+(k)\right]^-$, which yields $n_k^+=[A_k^-U_{\mathrm{ext}}^+(k)]^+/A_k^+$.
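The $[\,\cdot\,]^\pm$ projections used here can be illustrated numerically. As a toy example (our construction, with the Fourier convention $F(k)=\int e^{ikx}f(x)\,dx$), take $f(x)=e^{-|x|}$, whose full transform is $2/(1+k^2)$; restricting the integral to $x>0$ gives the "+" piece $1/(1-ik)$, analytic in the upper half-plane, and the mirror integral over $x<0$ gives $1/(1+ik)$:

```python
import numpy as np

x = np.linspace(0.0, 50.0, 200001)   # half-line grid for the [.]^+ projection
dx = x[1] - x[0]

def trap(y, dx):
    """Trapezoid rule (written out to avoid version-dependent numpy helpers)."""
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dx

def F_plus(k):
    """[F]^+ for f(x) = exp(-|x|): integral of e^{ikx} e^{-x} over x > 0."""
    return trap(np.exp(1j * k * x - x), dx)

def F_minus(k):
    """Mirror integral over x < 0, known in closed form; analytic for Im k < 0."""
    return 1.0 / (1.0 + 1j * k)

k_test = 0.7
plus = F_plus(k_test)            # should approach 1/(1 - i k)
total = plus + F_minus(k_test)   # should reassemble the full F(k) = 2/(1 + k^2)
```

The "+" piece decays for $\mathrm{Im}\,k>0$ because of the $e^{ikx}$ factor on the $x>0$ half-line, which is exactly the analyticity property the Wiener–Hopf step relies on.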
We use $V_{\mathrm{eff}}(k)$ of the form (7) with
$$\Lambda(k)=\exp\left[-ak\left(1-(2/\pi)\tan^{-1}(k/\lambda)\right)\right],$$ (8)
and obtain a Wiener–Hopf solution in closed form. Here the parameter $\lambda$ regularizes the interaction at large $k$ (and small $r$): $V_{\mathrm{eff}}(r\ll\lambda^{-1})=e^{-2\lambda a/\pi}V(r)$. Factoring this $V_{\mathrm{eff}}(k)$ gives
$$A_k^+=\frac{4\pi e^2}{(\epsilon+1)(k-i\delta)^{1/2}}\left(\frac{\delta+ik}{\lambda+ik}\right)^{iak/\pi},\qquad A_k^-=(k+i\delta)^{1/2}\left(\frac{\delta-ik}{\lambda-ik}\right)^{iak/\pi},$$ (9)
where $\delta=+0$. Near the edge $U_{\mathrm{ext}}(x)=Ex+c$, and thus
$$n_k^+=\frac{(\epsilon+1)E}{4\pi e^2}\,\frac{(i\delta')^{1/2}}{(k-i\delta)^{3/2}}\left(\frac{k-i\lambda}{k-i\delta}\right)^{iak/\pi},$$ (10)
where $\delta'\sim w_0^{-1}$. The inverse Fourier transform of $n_k^+$ gives the charge distribution near the strip edge. Note the asymptotic behavior of $\delta n(x)$: $\delta n(x\gg\lambda^{-1})=2n_0(x/\pi w_0)^{1/2}$, while $\delta n(x\ll\lambda^{-1})=2n_0(x/\pi w_0)^{1/2}e^{\lambda a/\pi}$. Here we expressed $E$ and $\delta'$ in terms of $w_0$ and $n_0$. The solution shows that at large screening length $a$ there is a significant departure of the density near the edge from the square root profile of . The density profile becomes nonmonotonic at $a\lambda\gtrsim 1$. The interpretation of this result is that, as the density is lowered, electrons at the edge form a one-dimensional Wigner solid at densities at which the interior of the system is still a fluid. We have studied numerically the effect of exchange on the strip width. As the exchange interaction parameter increases, the strip becomes wider (see Fig.2). In the simulation, a model interaction
$$V_{\mathrm{eff}}(r)=\alpha/|r|+(1-\alpha)/(r^2+\tilde{a}^2)^{1/2}$$ (11)
was used, with $\tilde{a}=a/(1-\alpha)$.
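A quick check (ours) that the simulated interaction (11) satisfies the constraints (7): using the standard 2D Fourier transforms $\mathrm{FT}[1/r]=2\pi/k$ and $\mathrm{FT}[1/\sqrt{r^2+b^2}]=2\pi e^{-kb}/k$, the model (11) corresponds to $\Lambda(k)=\alpha+(1-\alpha)e^{-k\tilde{a}}$, and the choice $\tilde{a}=a/(1-\alpha)$ is precisely what enforces $\Lambda(0)=1$ together with $\Lambda'(0)=-a$:

```python
import math

def Lambda_model(k, alpha, a):
    """Lambda(k) implied by the interaction (11), via the 2D transforms above."""
    atilde = a / (1.0 - alpha)
    return alpha + (1.0 - alpha) * math.exp(-k * atilde)

alpha, a = 0.4, 1.5                                     # arbitrary test values
lam0 = Lambda_model(0.0, alpha, a)                      # Hartree limit, should be 1
h = 1e-7
slope0 = (Lambda_model(h, alpha, a) - lam0) / h         # should be close to -a
```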
Similar to the Wiener–Hopf solution for an isolated edge, at large values of the exchange parameter the density profile becomes nonmonotonic. Note that our density functional, being quadratic in $n(r)$, obeys an exact particle–hole symmetry. Hence the density profiles on the upper and lower sides of the plateau in Fig.2 are identical up to a sign change. In conclusion, we find that a finite density of localized states and the electron exchange interaction have a similar effect on the width of the incompressible strip. The strip width increases as a function of the localized states density, and as a function of the electron exchange parameter. However, the density profile in these two problems evolves differently. For a high density of localized states the density gradient becomes nearly uniform, whereas at large exchange the plateau in the density distribution becomes wider. At very large exchange, the density profile becomes nonmonotonic, indicating formation of a one-dimensional Wigner crystal at the edge.

L.L. is grateful to R. Ashoori, G. Finkelstein, T. D. Fulton, and A. Yacoby for useful discussions of their data. Research at MIT is supported in part by the MRSEC Program of NSF under award 6743000 IRG.

References

1. D. B. Chklovskii, B. I. Shklovskii, and L. I. Glazman, Phys. Rev. B 46, 4026 (1992).
2. S. H. Tessmer, P. I. Glicofridis, R. C. Ashoori, L. S. Levitov, and M. R. Melloch, Nature 392, 51 (1998); G. Finkelstein, P. I. Glicofridis, S. H. Tessmer, R. C. Ashoori, and M. R. Melloch, preprint.
3. A. Yacoby, H. F. Hess, T. A. Fulton, L. N. Pfeiffer, and K. W. West, Solid State Communications 111, 1 (1999).
4. K. L. McCormick, M. T. Woodside, M. Huang, M. S. Wu, P. L. McEuen, C. Duruoz, and J. S. Harris, Phys. Rev. B 59, 4654 (1999).
5. Y. Y. Wei, J. Weis, K. v. Klitzing, and K. Eberl, Phys. Rev. Lett. 81, 1674 (1999).
6. I. A. Larkin and J. H. Davies, Phys. Rev. B 52, R5535 (1995).
7. J. P. Eisenstein, L. N. Pfeiffer, and K. W. West, Phys. Rev. Lett. 68, 674 (1992); B. Tanatar and D. M. Ceperley, Phys. Rev. B 39, 5005 (1989).
8. A. L. Efros, cond-mat/9905368.
9. I. A. Larkin and L. S. Levitov, to be published.
[ar5iv] no-problem/9908/cond-mat9908353.html
# Disorder Driven Lock-In Transitions of CDWs and Related Structures

Thomas Nattermann, Thorsten Emig and Simon Bogner
Institut für Theoretische Physik, Universität zu Köln, Zülpicherstr. 77, D-50973 Köln, Germany

Figure 1: (a) Schematic RG flow for $\delta=0$ and $p>p_c$ in the $v$–$\Delta$ plane; $I$, $C$ and $D$ denote the I-, C- and the disordered phase. (b) Possible phase diagram for finite $\delta$ at fixed $v$ for $p>p_c$, and (c) for $p<p_c$.

> Abstract. Thermal fluctuations are known to play an important role in low-dimensional systems which may undergo incommensurate–commensurate or (for an accidentally commensurate wavevector) lock-in transitions. In particular, an intermediate floating phase with algebraically decaying correlations exists only in $D=2$ dimensions, whereas in higher dimensions most features of the phase diagram are mean-field like.
>
> Here we show that the introduction of frozen-in disorder leads to strong fluctuation effects even in $D<4$ dimensions. For commensurate wavevectors the lattice pinning potential always dominates over weak impurity pinning if $p<p_c=6/\pi$ ($D=3$), where $p$ denotes the degeneracy of the commensurate phase. For larger $p$ a disorder driven continuous transition between a long-range ordered locked-in phase and a quasi-long-range ordered phase, dominated by impurity pinning, occurs. Critical exponents of this transition, which is characterized by a zero temperature fixed point, are calculated within an expansion in $4-D$. The generalization to incommensurate wavevectors is discussed. If the modulation in the quasi-long-range ordered phase has hexagonal symmetry, as e.g. for flux-line lattices, the algebraic decay is non-universal and depends on the Poisson ratio of the elastic constants. Weakly driven transport is dominated by thermally activated creep in both phases, but with different creep exponents.

Incommensurate (I) phases appear in a large variety of systems (for a review see e.g. ).
Examples are: (i) charge density waves in quasi one- and two-dimensional conductors (e.g. in TTF–TCNQ, 2H–TaSe<sub>2</sub>); (ii) spin density waves (e.g. in $\mathrm{CuGeO}_3$); (iii) mass density waves in adsorbed monolayers (e.g. He, Kr on graphite or metal surfaces) or of reconstructed surfaces (e.g. of Mo); (iv) polarization density waves in ferroelectrics with an incommensurate phase (e.g. in $\mathrm{K}_2\mathrm{SeO}_4$); (v) flux density waves in type II superconductors or Josephson junctions in an external field. Charge density waves are usually accompanied by a mass density wave, as in superionic conductors or reconstructed metal surfaces. A common feature of the I phase is that the wave vector $\mathbf{q}_0$, which describes the spatial modulation of the density wave in the absence of any coupling to the lattice, varies continuously with the parameters of the system (e.g. temperature $T$, pressure $p$, chemical potential $\mu$, magnetic field $H$, etc.). If $2\pi/q_0$ is close to a multiple of the spacing of the underlying crystal lattice, i.e. if $|\mathbf{g}/p-\mathbf{q}_0|=|\boldsymbol{\delta}|<\delta_c$, commensurability effects may become important. Here $\mathbf{g}$ denotes a reciprocal lattice vector of the crystal and $p$ is an integer. The modulation may then become commensurate (C) with the crystal lattice. The most striking effect of the C-phase is the existence of a gap in the excitation spectrum, in contrast to the I-phase, where the low-lying excitations are gapless. The systematic mean-field theory of the IC transition was worked out by Bruce, Cowley and Murray in 1978 . These authors distinguish IC transitions of type I and type II, depending on the absence or existence, respectively, of an inversion symmetry around $\mathbf{g}/p$ in the (disordered phase) soft mode dispersion. In the simplest case of a type-I transition, condensation takes place only on the wave vectors $\pm\mathbf{q}_0$.
For temperatures sufficiently below the mean-field transition temperature $T_{c0}$ the system can be described, ignoring amplitude fluctuations, by the sine–Gordon Hamiltonian
$$\mathcal{H}=\gamma\int\mathrm{d}^Dx\left\{\frac{1}{2}(\nabla\varphi-\boldsymbol{\delta})^2-g^2v\cos p\varphi\right\},$$ (1)
where $\varphi(\mathbf{x})$ describes the long-wavelength distortions of the charge (spin, mass, flux etc.) density
$$\rho(\mathbf{x})=\rho_0+\rho_1\,\mathrm{Re}\left\{e^{i\left(\frac{1}{p}\mathbf{g}\mathbf{x}+\varphi(\mathbf{x})\right)}\right\}.$$ (2)
Minimization of (1) yields for $\delta<\delta_c\propto gv^{1/2}$ the solutions of the C-phase, $\varphi=\frac{2\pi}{p}n$ ($n=0,1,\ldots,p-1$). The solution for $\delta>\delta_c$ is a regular lattice of phase solitons with spacing $l\propto\left|\ln\left(\frac{\delta-\delta_c}{\delta_c}\right)\right|$ and internal width $\xi_0\propto 1/(pg\sqrt{v})$, which describes the I-phase close to the IC transition. Far from the transition one has $\varphi(\mathbf{x})\approx\boldsymbol{\delta}\mathbf{x}$. Type-I transitions are therefore continuous. In general, a large number of C-phases is possible, which may lead in certain lattice models to a devil's staircase behaviour of the modulation vector as a function of the misfit $\delta$ . Type-II transitions, on the contrary, are discontinuous and show in the I-phase an almost sinusoidal modulation of the order parameter. A recent example is the spin–Peierls system $\mathrm{CuGeO}_3$ . Thermal fluctuations have a strong effect on the IC phase diagram in $D=2$ dimensions. If we exclude topological defects (i.e. vortices), then (i) the IC transition becomes an inverted Berezinskii–Kosterlitz–Thouless transition with a reduced transition temperature $T_c(\delta=0)=8\pi\gamma/p^2$. (ii) Inside the I-phase the solitons now interact by entropic repulsion. Close to the IC transition the soliton distance $l$ increases as a power law, $l\propto(\delta-\delta_c)^{-\beta_s}$, where $\beta_s=\frac{\zeta}{2(1-\zeta)}$.
Here $\zeta$ is the thermal roughness exponent of a single soliton (domain wall), $\zeta_{\mathrm{th}}=(3-D)/2=1/2$. (iii) The spatial variation of the density $\delta\rho(\mathbf{x})=\rho(\mathbf{x})-\rho_0$ shows in the I-phase only quasi long range order (LRO):
$$K(\mathbf{x})=\langle e^{i(\varphi(\mathbf{x})-\varphi(\mathbf{0}))}\rangle\propto|\mathbf{x}|^{-\eta}\cos(2\pi z/pl),$$ (3)
where $\boldsymbol{\delta}=\delta\,\mathbf{e}_z$. The exponent $\eta$ depends on $l$, $T/\gamma$ and $p$, and approaches $2/p^2$ for $l\to\infty$. (iv) Topological defects diminish further the ordered (C and I) phases and even change the topology of the phase diagram for $p\le 4$. In particular, for $p\le 2$ there is no direct IC transition; both phases are separated by a fluid phase . Closely related results are expected for one-dimensional quantum systems. It is interesting to remark that a qualitatively similar picture emerges also in lower dimensions $1<D<2$, but with different singularities at the transitions . On the contrary, in three (and higher) dimensions thermal fluctuations have only minor effects on the phase diagram and are relevant essentially only in the critical region . In the rest of the paper we investigate the influence of randomly distributed frozen impurities on the lock-in transition. Since impurities favour certain values of the phases, the impurity Hamiltonian can be written in the form
$$\mathcal{H}_{\mathrm{imp}}=\gamma\int\mathrm{d}^Dx\,V(\mathbf{x},\varphi),\qquad V(\mathbf{x},\varphi)=-\sqrt{\Delta}\,\cos\left(\varphi(\mathbf{x})-\alpha(\mathbf{x})\right),\qquad\Delta=\rho_1^2V_0^2n_{\mathrm{imp}}$$ (4)
where $\alpha(\mathbf{x})$ is a frozen random phase ($0\le\alpha<2\pi$), and $\gamma V_0$ and $n_{\mathrm{imp}}$ denote the strength and the concentration, respectively, of the impurities. The model defined by (1) and (4) also describes an $XY$-model in a crystalline field (corresponding to a $p$-fold axis) and a random field.
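The competition encoded in (1) and (4) can be illustrated with a toy relaxation of a one-dimensional discrete version of the model (our construction, with $\delta=v=0$ and $\gamma$ and the lattice spacing set to 1): starting from a flat phase configuration, gradient descent lets $\varphi(x)$ adjust to the frozen random phases $\alpha(x)$ at the cost of elastic energy:

```python
import numpy as np

rng = np.random.default_rng(0)
N, Delta, eta, steps = 256, 0.5, 0.1, 4000
alpha = rng.uniform(0.0, 2.0 * np.pi, N)   # frozen random phases alpha(x)
phi = np.zeros(N)                          # flat initial configuration

def energy(phi):
    # discrete version of int { (1/2)(phi')^2 - sqrt(Delta) cos(phi - alpha) }
    return 0.5 * np.sum(np.diff(phi) ** 2) - np.sqrt(Delta) * np.sum(np.cos(phi - alpha))

E0 = energy(phi)
for _ in range(steps):
    lap = np.zeros(N)
    lap[1:-1] = phi[2:] - 2 * phi[1:-1] + phi[:-2]   # open-chain Laplacian
    lap[0], lap[-1] = phi[1] - phi[0], phi[-2] - phi[-1]
    phi -= eta * (-lap + np.sqrt(Delta) * np.sin(phi - alpha))

E1 = energy(phi)
pinning = np.mean(np.cos(phi - alpha))   # how strongly the phase locks to the disorder
```

The relaxed configuration gains pinning energy ($\langle\cos(\varphi-\alpha)\rangle>0$) by locally following $\alpha(x)$, the mechanism behind the Larkin/Fukuyama–Lee destruction of translational LRO discussed next.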
We will first discuss the case of a vanishing lattice potential, $v=0$, and exclude topological defects for most of the rest of the paper. Larkin , and in the present context first Fukuyama and Lee , have shown that the impurities destroy the translational LRO of the charge density wave on scales $L\ge L_\xi\propto\Delta^{-1/(4-D)}$ in all dimensions $D\le 4$. Later studies have shown that for $2<D<4$, $K(\mathbf{x})$ decays as a power,
$$K(\mathbf{x})\propto e^{i\boldsymbol{\delta}\mathbf{x}}\left|\frac{\mathbf{x}}{L_\xi}\right|^{-\overline{\eta}},$$ (5)
with a universal exponent $\overline{\eta}=\left(\frac{\pi}{3}\right)^2\epsilon$ to lowest order in $\epsilon=4-D$. Thus we now regain quasi-LRO in the I-phase in all dimensions $2<D<4$. In systems in which the modulation of the I-phase has hexagonal symmetry, as in flux line lattices, the situation is more complicated. If we describe the distortions of the flux lines by the displacement field $\mathbf{u}(\mathbf{x})$, the relevant correlation function describing long range translational order is $K_{\mathbf{G}}(\mathbf{x})=\langle e^{i\mathbf{G}(\mathbf{u}(\mathbf{x})-\mathbf{u}(\mathbf{0}))}\rangle$, where $\mathbf{G}$ denotes a reciprocal lattice vector of the Abrikosov lattice. Recently, Emig et al. found from a functional renormalization group (FRG) calculation
$$K_{\mathbf{G}}(\mathbf{x})\propto\left[L_\xi^{-1}(\mathbf{x}_\perp^2+\kappa z_l^2)^{\frac{1}{2(1+\kappa)}}(\mathbf{x}_\perp^2+z_l^2)^{\frac{1}{2(1+1/\kappa)}}\right]^{-\overline{\eta}_{\mathbf{G}}},$$ (6)
where $\mathbf{x}=(\mathbf{x}_\perp,z)$, $z_l=(c_{11}/c_{44})^{1/2}z$ and $\kappa=c_{66}/c_{11}$. Here $c_{11}$, $c_{44}$ and $c_{66}$ are the effective elastic constants of the flux line lattice (renormalized by thermal and disorder effects). In the marginal cases $\kappa=0$ and $\kappa=1$ one finds from (6) for the structure factor
$$S(\mathbf{G}+\mathbf{q})=\int\mathrm{d}^3x\,e^{i\mathbf{q}\mathbf{x}}K_{\mathbf{G}}(\mathbf{x})\propto\left(\mathbf{q}_\perp^2+\frac{c_{44}}{c_{66}}q_z^2\right)^{-(3-\overline{\eta}_{\mathbf{G}})/2},$$ (7)
i.e. the structure factor exhibits Bragg peaks.
The exponent $\overline{\eta}_{\mathbf{G}}$ is non-universal and depends on the value of $\kappa$ ($1.143\le\overline{\eta}_{\mathbf{G}_0}\le 1.159$ for $1\ge\kappa\ge 0$). In a large range of external fields $\kappa\propto\varphi_0/(16\pi\lambda^2B)$, so that one could in principle test these predictions by measuring the field dependence of the width of the Bragg peaks. Next we come back to our original model (1), (4), keeping the lattice potential $v$ finite. Neglecting the non-Gaussian character of $\varphi(\mathbf{x})$, which is justified for $\epsilon\ll 1$, lowest order perturbation theory in $v$ yields an effective Hamiltonian with a mass
$$g^2p^2v\langle\cos p\varphi\rangle\propto e^{-p^2\langle\varphi^2\rangle/2}\propto\left(\frac{L}{L_\xi}\right)^{-p^2\overline{\eta}/2}.$$ (8)
Comparing this power of $L$ with the $L^{-2}$ behaviour of the elastic term, we conclude that the periodic perturbation is always relevant if $p<p_c=\frac{6}{\pi\sqrt{\epsilon}}$. In this case even an arbitrarily weak periodic potential is relevant, and we regain the true translational LRO of the C-phase. For $p>p_c$, on the other hand, weak periodic pinning is irrelevant, but for $p$ close to $p_c$ we expect a transition to a commensurate phase for sufficiently strong $v$. A simple estimate for the threshold value $v_c$ follows from a comparison of the forces resulting from impurity and lattice pinning. With $f_{\mathrm{imp}}\propto\gamma L_\xi^{-2}$ and $f_v\propto\gamma g^2pv$ we get for the transition line $v_c=v_c(\Delta)\propto 1/(g^2pL_\xi^2)$ (or, inversely, $\Delta_c=\Delta_c(v)\propto(g^2pv)^{(4-D)/2}$). A more accurate description of the transition can be obtained from a functional renormalization group treatment , which confirms this estimate. The transition turns out to be second order, with a divergent correlation length $\xi\propto(v-v_c)^{-\nu}$ in the C-phase, where $\nu^{-1}=4\left(\frac{p^2}{p_c^2}-1\right)$.
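The $\epsilon$-expansion numbers above are easy to tabulate. A small sketch (ours) evaluating $p_c$, $\overline{\eta}$ and $\nu$ at $D=3$ ($\epsilon=1$):

```python
import math

def cdw_exponents(D, p):
    """p_c, eta_bar and (for p > p_c) nu from the epsilon-expansion formulas above."""
    eps = 4.0 - D
    p_c = 6.0 / (math.pi * math.sqrt(eps))
    eta_bar = (math.pi / 3.0) ** 2 * eps
    nu = 1.0 / (4.0 * (p * p / (p_c * p_c) - 1.0)) if p > p_c else None
    return p_c, eta_bar, nu

# D = 3, p = 2: the case connected below to the random field Ising/XY models
p_c, eta_bar, nu = cdw_exponents(3.0, 2.0)
```

For $D=3$ this gives $p_c\approx 1.91$ (so $p=2$ is already above threshold), $\overline{\eta}=\pi^2/9\approx 1.10$, and $\nu\approx 2.6$.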
Moreover, the order parameter for translational LRO,
$$\langle\psi\rangle=\langle e^{i\varphi}\rangle\propto(v-v_c)^\beta,\qquad\beta=\nu\frac{\pi^2}{18}\epsilon,$$ (9)
is finite in this phase. On the contrary, in the I-phase the quasi-long range order of (5) is regained. The fixed point which describes this transition is at zero temperature. The temperature eigenvalue $\theta=2+\epsilon$ appears in the modified hyperscaling relation $\nu(D-\theta)=2-\alpha$, typical for zero-temperature fixed points. As a side remark we note that for $p=2$ these exponents also describe the transition between the low temperature phase of the random field Ising model and the quasi-long range ordered phase of the random field $XY$-model. Apart from the change in the correlation functions at the transition, there is also a change in the response to a small external drive $f_{\mathrm{ex}}\ll f_{\mathrm{imp}},f_v$. The creep velocity $u_{\mathrm{creep}}$ can be written in the form
$$u_{\mathrm{creep}}(f_{\mathrm{ex}})\propto e^{-\frac{E_c}{T}\left(\frac{f_c}{f_{\mathrm{ex}}}\right)^\mu}.$$ (10)
Here $E_c\propto\gamma\xi^2$, $f_c\propto\max(f_{\mathrm{imp}},\gamma\xi^{-2})$, and $\mu=(D-2)/2$ and $\mu=D-1$ for the I- and the C-phase, respectively. Because of the pronounced difference in the creep exponent $\mu$ between the two phases, a measurement of the creep should give a clear indication of which phase is present. Recently measured I–V curves of the conductor o-TaS<sub>3</sub> at temperatures below 1 K can be fitted by (10) with $\mu=1.5$–$2$ . The experimentally observed tendency to larger $\mu$ for purer crystals confirms the above interpretation. In several materials, such as $\mathrm{K}_{0.3}\mathrm{MoO}_3$, the periods are near $p=4$ commensurability at low temperatures. For this material one obtains $\xi_0\approx 10^{-6}$ cm .
The typical parameters $\gamma V_0\approx 10^{-2}$ eV, $\rho_1=10^{-2}$ and $v_{\mathrm{F}}=10^7\,\mathrm{cm/sec}$ yield, after proper rescaling of the anisotropy, the estimate $L_\xi\approx 10^{-4}$ cm for an impurity density of $100$ ppm. Thus it should in principle be possible to see commensurability effects if the misfit $\delta$ becomes small enough, i.e. at low temperatures. So far we have excluded topological defects. These can be included if we treat $\varphi(\mathbf{x})$ as a multivalued field which may jump by multiples of $2\pi$ along certain surfaces. These surfaces are bounded by vortex lines. For $\delta=0=v$ it has been shown recently that for weak enough disorder strength, $\Delta<\Delta_D$, the system is stable with respect to the formation of vortices . However, vortex lines will proliferate for $\Delta>\Delta_D$. At present it is not clear whether the corresponding transition is continuous or first order. For $\delta=0$ but $v>0$ we expect that this transition extends to a line $\Delta_D(v)$ until $v$ reaches a critical value $v_D$ with $\Delta_D(v_D)=\Delta_c(v_D)$ (see Figure 1). For larger $v$ the transition is probably in the universality class of the $p$-state clock model in a random field, which has an upper critical dimension $D_c=6$ . A non-zero value of $\delta$ will in general increase the size of the incommensurate phase, as schematically sketched in Figure 1. Our results can also be applied to the pinning of flux line lattices in type-II superconductors. In layered superconductors, the CuO<sub>2</sub> planes provide a strong pinning potential favoring, for flux lines oriented parallel to the layers, a smectic phase with translational order present only along the layering axis .
The influence of disorder on this phase is described by the CDW model studied in this paper, if the CDW phase $\varphi(\mathbf{x})$ is identified with the deviations of the smectic layers from their locked-in state. A flux line lattice oriented perpendicular to the layers also feels, in general, a weak periodic potential of the underlying crystal, but then the flux line displacements are described by a vector field. Since in this case, too, weak disorder leads only to logarithmically growing transverse displacements of the flux lines , a disorder driven roughening transition of the CDW type studied above can be expected for the flux line lattice.

References

1. G. Grüner, Density Waves in Solids (Addison–Wesley, Reading, 1994); P. Bak, Rep. Prog. Phys. 45, 587 (1982); V. L. Pokrovsky and A. L. Talapov, Theory of Incommensurate Crystals (Harwood Academic Publishers, 1984); P. M. Chaikin and T. C. Lubensky, Principles of Condensed Matter Physics (Cambridge UP, 1995).
2. A. D. Bruce, R. A. Cowley and A. F. Murray, J. Phys. C 11, 351 (1978).
3. S. M. Battacharjee, T. Nattermann and C. Ronnewinkel, Phys. Rev. B 58, 2658 (1998).
4. S. N. Coppersmith et al., Phys. Rev. Lett. 46, 549 (1981).
5. J. M. Kosterlitz, J. Phys. C 10, 3753 (1977).
6. A. Aharony and P. Bak, Phys. Rev. B 23, 4770 (1981).
7. A. I. Larkin, Sov. Phys. JETP 31, 784 (1970); H. Fukuyama and P. A. Lee, Phys. Rev. B 17, 535 (1978).
8. S. E. Korshunov, Phys. Rev. B 48, 3969 (1993); T. Giamarchi and P. Le Doussal, Phys. Rev. Lett. 72, 1530 (1994).
9. T. Emig, S. Bogner and T. Nattermann, Phys. Rev. Lett. 83 (1999), in press.
10. T. Emig and T. Nattermann, Phys. Rev. Lett. 79, 5090 (1997).
11. S. V. Zaitsev-Zotov, G. Remenyi and P. Monceau, Phys. Rev. Lett. 78, 1098 (1997).
12. M. Gingras and D. A. Huse, Phys. Rev. B 53, 15193 (1996); J. Kierfeld, T. Nattermann and T. Hwa, Phys. Rev. B 55, 626 (1997); D. S. Fisher, Phys. Rev. Lett. 78, 1964 (1997).
13. L. Balents and D. R. Nelson, Phys. Rev. B 52, 12951 (1995).
[ar5iv] no-problem/9908/quant-ph9908034.html
# Recovering coherence from decoherence: a method of quantum state reconstruction

## I Introduction

The reconstruction of quantum states is a central topic in quantum optics and related fields . During the past years several techniques have been developed, for instance the direct sampling of the density matrix of a signal mode in multiport optical homodyne tomography , tomographic reconstruction by unbalanced homodyning , cascaded homodyning , and reconstruction via photocounting . There are also proposals for measurement of the electromagnetic field inside a cavity as well as of the vibrational state of an ion in a trap . The full reconstruction of nonclassical field states, as well as of (motional) states of an ion, has already been accomplished experimentally . Quantum state reconstruction is normally achieved through a finite set of either field homodyne measurements or, in the case of cavities, selective measurements of atomic states. This makes it possible to construct a quasidistribution (such as the Wigner function) which constitutes an alternative representation of the quantum state of the field. Nevertheless, in real experiments the presence of noise and dissipation normally has destructive effects. In fact, as has already been pointed out, the reconstruction schemes themselves also indicate loss of coherence in quantum systems . Regarding this subject, a scheme for compensation of losses in quantum-state measurements has already been proposed , and the relation between losses and $s$-parameterized quasiprobability distributions has been pointed out in . The scheme on loss compensation in applies to photodetector losses, and consists essentially of a mathematical inversion formula expressing the initial density matrix in terms of the decayed one.
Our scheme, as discussed in , involves a physical process that actually enables us to store information about all the quantum coherences of the initial state in the diagonal elements (photon distribution) of the density matrix of a transformed state. Once stored in the diagonal elements, this information becomes much more robust under dissipation, allowing us to recover the Wigner function of the initial field state on a time scale of the order of the energy decay time, which is, of course, much longer than the extremely fast decoherence time scale normally associated with the dissipation of quantum coherences. We consider a single-mode high-$`Q`$ cavity in which we suppose that a (nonclassical) field state $`\widehat{\rho }(0)`$ has previously been prepared. The first step of our method consists in driving the generated quantum state with a coherent pulse. The reconstruction of the field state may be accomplished after turning off the driving field, i.e., at a time at which the cavity field has already suffered decay. We use the fact that by displacing the initial state (even while it is decaying) we make its quantum coherences robust enough to allow its experimental determination, at a later time, despite dissipation. We then show that the evolution of the cavity field is such that it directly yields the Wigner function of the initial nonclassical field simply by measuring the photon number distribution of the displaced field. For that we make direct use of the series representation of quasiprobability distributions . A numerical simulation of our method is presented, and we take into account the action of dissipation while driving the initial field. This manuscript is organized as follows: in Sec. II we discuss, taking losses into account, the process of displacement of the initial field. In Sec. III we show how to reconstruct the initial cavity field after allowing the displaced field to decay. In Sec.
IV we present a simulation of the reconstruction of a Schrödinger cat state. In Sec. V we summarize our conclusions.

## II Driving the initial field

We assume that the initial nonclassical field $`\widehat{\rho }(0)`$ is prepared inside a high-$`Q`$ cavity. The master equation in the interaction picture for the reduced density operator $`\widehat{\rho }`$ of a driven cavity mode, taking into account cavity losses at zero temperature and under the Born–Markov approximation, is given by $$\frac{\partial \widehat{\rho }}{\partial t}=-\frac{i}{\hbar }[\widehat{H}_d,\widehat{\rho }]+\frac{\gamma }{2}\left(2\widehat{a}\widehat{\rho }\widehat{a}^{\dagger }-\widehat{a}^{\dagger }\widehat{a}\widehat{\rho }-\widehat{\rho }\widehat{a}^{\dagger }\widehat{a}\right),$$ (1) with $$\widehat{H}_d=i\hbar \left(\alpha ^{*}\widehat{a}-\alpha \widehat{a}^{\dagger }\right),$$ (2) where $`\widehat{a}`$ and $`\widehat{a}^{\dagger }`$ are the annihilation and creation operators, $`\gamma `$ is the (cavity) decay constant and $`\alpha `$ is the amplitude of the driving field. We define the superoperators $`\widehat{\mathcal{D}}`$ and $`\widehat{\mathcal{L}}`$ by their action on the density operator, $$\widehat{\mathcal{D}}\widehat{\rho }=(\alpha ^{*}\widehat{a}-\alpha \widehat{a}^{\dagger })\widehat{\rho }-\widehat{\rho }(\alpha ^{*}\widehat{a}-\alpha \widehat{a}^{\dagger }),$$ (3) and $$\widehat{\mathcal{L}}\widehat{\rho }=\gamma \widehat{a}\widehat{\rho }\widehat{a}^{\dagger }-\frac{\gamma }{2}\left(\widehat{a}^{\dagger }\widehat{a}\widehat{\rho }+\widehat{\rho }\widehat{a}^{\dagger }\widehat{a}\right).$$ (4) It is not difficult to show that $$[\widehat{\mathcal{D}},\widehat{\mathcal{L}}]\widehat{\rho }=\frac{\gamma }{2}\widehat{\mathcal{D}}\widehat{\rho },$$ (5) and the formal solution of Eq.
(1) can then be written as $$\widehat{\rho }(t)=\mathrm{exp}\left[(\widehat{\mathcal{D}}+\widehat{\mathcal{L}})t\right]\widehat{\rho }(0)=\mathrm{exp}(\widehat{\mathcal{L}}t)\mathrm{exp}\left[\frac{2\widehat{\mathcal{D}}}{\gamma }\left(e^{\gamma t/2}-1\right)\right]\widehat{\rho }(0).$$ (6) After driving the initial field during a time $`t_d`$, the resulting field density operator will read $$\widehat{\rho }(t_d)=e^{\widehat{\mathcal{L}}t_d}\widehat{\rho }_\beta (0),$$ (7) where $$\widehat{\rho }_\beta (0)=\widehat{D}^{\dagger }(\beta )\widehat{\rho }(0)\widehat{D}(\beta ),$$ (8) with $`\widehat{D}(\beta )=\mathrm{exp}(\beta \widehat{a}^{\dagger }-\beta ^{*}\widehat{a})`$ the displacement operator, and $$\beta =2\alpha \frac{e^{\gamma t_d/2}-1}{\gamma }.$$ (9) This means that driving the initial field while it decays, during a time $`t_d`$, is equivalent to first displacing it by the effective amplitude $`\beta `$ given in Eq. (9) and then letting it decay freely for the same time.

## III The reconstruction method

The driving of the initial field is carried out during a time $`t_d`$. This procedure will enable us to obtain information about all the elements of the initial density matrix from the diagonal elements of the time-evolved displaced density matrix only. As diagonal elements decay much more slowly than off-diagonal ones, information about the initial state stored this way becomes robust enough to withstand the decoherence process. We will now show how this robustness can be used to obtain the Wigner function of the initial state after it has already started to dissipate. Once the injection of the coherent pulse is completed, the cavity field is left to decay, so that its dynamics will be governed by the master equation in Eq. (1) without the first (driving) term on its right-hand side.
Therefore, the cavity field density operator will be, at a time $`t`$, given by $$\widehat{\rho }_\beta (t)=e^{(\widehat{J}+\widehat{L})t}\widehat{\rho }_\beta (0),$$ (10) with $$\widehat{J}\widehat{\rho }=\gamma \widehat{a}\widehat{\rho }\widehat{a}^{\dagger },\widehat{L}\widehat{\rho }=-\frac{\gamma }{2}\left(\widehat{a}^{\dagger }\widehat{a}\widehat{\rho }+\widehat{\rho }\widehat{a}^{\dagger }\widehat{a}\right).$$ (11) The next step is to calculate the diagonal matrix elements of $`\widehat{\rho }_\beta (t)=\mathrm{exp}\left[(\widehat{J}+\widehat{L})t\right]\widehat{\rho }_\beta (0)`$ in the number state basis, or $$\langle m|\widehat{\rho }_\beta (t)|m\rangle =\frac{e^{-m\gamma t}}{q^m}\sum _{n=0}^{\infty }q^n\left(\begin{array}{c}n\\ m\end{array}\right)\langle n|\widehat{\rho }_\beta (0)|n\rangle ,$$ (12) where $`q=1-e^{-\gamma t}`$. Now we multiply those matrix elements by powers of the function $$\chi (t)=1-2e^{\gamma t}.$$ (13) If we sum the resulting expression over $`m`$, we obtain the following simple sum $$F_W=\frac{2}{\pi }\sum _{m=0}^{\infty }\chi ^m(t)\langle m|\widehat{\rho }_\beta (t)|m\rangle =\frac{2}{\pi }\sum _{n=0}^{\infty }(-1)^n\langle n|\widehat{D}^{\dagger }(\beta )\widehat{\rho }(0)\widehat{D}(\beta )|n\rangle .$$ (14) The expression in Eq. (14) is exactly the Wigner function corresponding to $`\widehat{\rho }`$ (the initial field state) at the point specified by the complex amplitude $`\beta `$. Therefore we simply need to measure the diagonal elements of the dissipated displaced cavity field, $`P_m(\beta ;t)=\langle m|\widehat{\rho }_\beta (t)|m\rangle `$, for a range of $`\beta `$’s and apply the transformation in Eq. (14) in order to obtain the Wigner function of the initial state over this range. We note that after performing the sum, the time dependence cancels out completely, leaving us with a time-independent value of the Wigner function, as it should be. Therefore the initial state may be reconstructed, at least in principle, at an arbitrary later time.
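The cancellation behind Eq. (14), the growth of χ^m(t) exactly compensating the binomial damping of Eq. (12), can be verified numerically. In the sketch below a Poissonian distribution is just a convenient stand-in for the photon statistics ⟨n|ρ̂_β(0)|n⟩ of some displaced state:

```python
import numpy as np
from math import comb, exp, factorial

N = 30
nbar = 1.3                       # test state: Poissonian photon statistics
P0 = np.array([exp(-nbar)*nbar**n/factorial(n) for n in range(N)])

def P_t(gt):                     # Eq. (12): diagonal elements after free decay, gt = gamma*t
    q = 1.0 - exp(-gt)
    return np.array([sum(comb(n, m)*exp(-m*gt)*q**(n - m)*P0[n]
                         for n in range(m, N)) for m in range(N)])

def F_W(gt):                     # left-hand side of Eq. (14), with chi(t) from Eq. (13)
    chi = 1.0 - 2.0*exp(gt)
    return (2.0/np.pi)*sum(chi**m*p for m, p in enumerate(P_t(gt)))

rhs = (2.0/np.pi)*sum((-1.0)**n*p for n, p in enumerate(P0))  # right-hand side of Eq. (14)
print([round(F_W(gt), 8) for gt in (0.0, 0.3, 0.8)], round(rhs, 8))
```

The printed values are independent of the chosen decay times, which is the numerical statement that the reconstruction can in principle be performed at any later time.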
In practice, however, the decay of the field energy will impose a limitation on the times during which we will be able to measure the $`P_m`$’s. The next step in our scheme is to measure the field photon number distribution $`P_m(\beta ;t)`$. In a cavity, particularly in the microwave regime where no photodetectors are available, experimentalists have been forced to use atoms instead to probe the intra-cavity field. One way of determining $`P_m`$ is by injecting atoms into the cavity and measuring their population inversion as they exit after an interaction time $`\tau `$ much shorter than the cavity decay time. It is convenient in this case to use three-level atoms in a cascade configuration, with the upper and the lower level having the same parity and satisfying the two-photon resonance condition. The population inversion in this case is $$W(\alpha ;t+\tau )=\sum _{n=0}^{\infty }P_n(\alpha ;t)\left[\frac{\mathrm{\Gamma }_n^2}{\delta _n^2}+\frac{\lambda ^2(n+1)(n+2)}{\delta _n^2}\mathrm{cos}\left(2\delta _n\tau \right)\right],$$ (15) where $`\mathrm{\Gamma }_n=\left[\mathrm{\Delta }+\chi (n+1)\right]/2`$, $`\delta _n^2=\mathrm{\Gamma }_n^2+\lambda ^2(n+1)(n+2)`$, $`\mathrm{\Delta }`$ is the atom–field detuning, $`\chi `$ is the Stark shift coefficient, and $`\lambda `$ is the atom–field coupling constant. Now we take $`\mathrm{\Delta }=0`$ (two-photon resonance condition) and $`\chi =0`$. It is a good approximation to make $`\left[(n+1)(n+2)\right]^{1/2}\approx n+3/2`$, so that the population inversion reduces to $$W(\alpha ;t+\tau )=\sum _{n=0}^{\infty }P_n(\alpha ;t)\mathrm{cos}\left(\left[2n+3\right]\lambda \tau \right).$$ (16) This represents the atomic response to the displaced field. In order to obtain $`P_m`$ from a family of measured population inversions, we need to invert the Fourier series in Eq.
(16), or $$P_m(\beta ;t)=\frac{2\lambda }{\pi }\int _0^{\tau _{max}}d\tau \,W(t+\tau )\mathrm{cos}\left(\left[2m+3\right]\lambda \tau \right).$$ (17) We need a maximum interaction time $`\tau _{max}=\pi /\lambda `$ much shorter than the cavity decay time, and this condition implies that we must be in the strong-coupling regime, i.e. $`\lambda \gg \gamma `$. Of course, after some time the atoms sent through the cavity will no longer be able to respond to the decaying field, hampering our reconstruction scheme. Nevertheless, this happens on a time scale of the order of the energy decay time $`1/\gamma `$, which is normally much longer than a typical decoherence time. Our scheme is easily extended to other ($`s`$-parametrized ) quasi-probability distributions, which may also be expressed as a series , $$F(\beta ;s)=\frac{2}{\pi (1-s)}\sum _{n=0}^{\infty }\left(\frac{s+1}{s-1}\right)^n\langle n|\widehat{\rho }_\beta |n\rangle .$$ (18) For this we have to multiply the photon number distribution of the displaced (and dissipated) field by a generalization of the function in Eq. (13), $$\chi (s;t)=1+\frac{2e^{\gamma t}}{s-1}.$$ (19) This widens our possibilities for measuring quantum states: several quasiprobabilities are available, and we may choose the most convenient one depending on the particular conditions of a reconstruction experiment.

## IV Reconstruction of a Schrödinger cat state

We now show how our method may be applied to a specific case, e.g., the reconstruction of a Schrödinger cat state, represented by a quantum superposition of two coherent states with distinct amplitudes, $`|\alpha \rangle `$ and $`|-\alpha \rangle `$. The density operator corresponding to such a state is $$\widehat{\rho }(0)=𝒩\left[|\alpha \rangle \langle \alpha |+|-\alpha \rangle \langle -\alpha |+e^{i\varphi }|\alpha \rangle \langle -\alpha |+e^{-i\varphi }|-\alpha \rangle \langle \alpha |\right],$$ (20) where $`\varphi `$ is a relative phase and $`𝒩`$ a normalization constant. Schrödinger cat states are very fragile under dissipation , and are therefore especially suitable for exemplifying our scheme.
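The inversion of Eq. (16) into Eq. (17) relies on the orthogonality of cos([2n+3]λτ) over [0, π/λ]. It can be exercised on synthetic data (a Poissonian photon distribution and λ = 1, chosen purely for illustration):

```python
import numpy as np
from math import exp, factorial

lam, N = 1.0, 25
P = np.array([exp(-1.0)/factorial(n) for n in range(N)])       # test photon statistics

tau = np.linspace(0.0, np.pi/lam, 4001)                        # tau_max = pi/lambda
dtau = tau[1] - tau[0]
W = sum(P[n]*np.cos((2*n + 3)*lam*tau) for n in range(N))      # simulated inversion, Eq. (16)

def invert(m):                                                 # Eq. (17), trapezoidal rule
    f = W*np.cos((2*m + 3)*lam*tau)
    return (2.0*lam/np.pi)*(dtau*f[1:-1].sum() + 0.5*dtau*(f[0] + f[-1]))

P_rec = np.array([invert(m) for m in range(N)])
print(np.abs(P_rec - P).max())                                 # tiny discretization error
```

Because the cosines are exactly orthogonal on this interval, the recovered distribution matches the input up to quadrature error; with noisy inversions (as in Sec. IV) the same integral acts as a Fourier filter.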
In a real experiment, just after the field has been prepared in the state in Eq. (20) by one of the conventional methods (see reference , for instance), the cavity will be driven by a coherent field, say, of amplitude $`\alpha _0`$. After that, at a time $`t_0`$, $`N`$ conveniently prepared three-level atoms should cross the cavity, so that we are able to assign a particular population inversion $`W_{\alpha _0}(t_0)=P_e(t_0)-P_g(t_0)`$ to that time. Measurements of this sort, for the same driving field amplitude $`\alpha _0`$, will be repeated for a range of times $`t_0,t_1,t_2,\mathrm{\dots },t_0+\pi /\lambda `$, so that we obtain a set of values of the population inversion in that time interval. Then, through numerical integration (see Eq. (17)), we obtain the required photon number distribution $`P_m(\alpha _0,t)`$. This is exactly what we need in our reconstruction scheme. The next step is to multiply $`P_m`$ by the terms $`\chi (t)^m=(1-2e^{\gamma t})^m`$ and sum over $`m`$. This directly yields, according to our scheme, the value of the Wigner function of the initial field in Eq. (20) at the point $`\alpha _0=x_0+iy_0`$ in phase space. We remark that the convergence of the series in Eq. (14) is guaranteed, because the photon number distributions of physical states normally decrease very quickly as $`m`$ increases, and the statistical errors also remain small simply because we are dealing with diagonal elements of the density operator in the number state basis . We then need to repeat this procedure $`M`$ times, for different values of the driving field amplitude $`\alpha _1,\alpha _2,\mathrm{\dots },\alpha _M`$, to cover enough points in phase space and obtain the whole Wigner function of the original field. We have produced a numerical simulation of the steps described above. Our simulation is illustrated by the Wigner functions themselves. At $`t=0.0`$, a Schrödinger cat state (Eq. 20) is generated within the cavity.
Its corresponding Wigner function is shown in Fig. 1, where we note the characteristic interference structure. After the field has decayed, say, at $`t=0.1/\gamma `$, the loss of coherence is indicated by the significant reduction of the interference structure, as seen in Fig. 2. Dissipation brings the initially pure state very close to a statistical mixture . We may instead drive the cavity with a coherent pulse (of duration $`t_d`$) at $`t=0`$, in order to start our reconstruction procedure. After following the steps described above, we obtain the reconstructed Wigner function, as shown in Fig. 3, which is essentially the one in Fig. 1. We note that both peaks as well as the interference structure characteristic of a Schrödinger cat state are entirely preserved. In our simulation we have considered fluctuations that might be present during the measurement of the atomic inversion. There might be experimental errors from various sources, such as fluctuations in the amplitude of the driving field as well as in the generated nonclassical state, which would cause distortions in the atomic inversion. In Fig. 4 we show the atomic inversion as a function of time for a given value of the driving field amplitude, $`\beta =(0,2)`$. We obtained the field’s Wigner function shown in Fig. 3 starting from a family of those “distorted” atomic inversions, for different values of $`\beta `$.

## V Conclusion

In conclusion, we have presented a method for reconstructing the Wigner function of an initial nonclassical state at times when the field would normally have lost its quantum coherence, in particular even at times at which the Wigner function would already have lost its negativity as a result of the decoherence process. A crucial step in our approach is the driving of the initial field immediately after preparation, which stores quantum coherences in the diagonal elements of the time-evolved displaced density matrix, making them robust.
We have therefore shown that the initial displacement transfers the robustness of a coherent state against dissipation to any initial state, allowing the full reconstruction of the field state under less than ideal conditions. A natural application of our method would be the measurement of quantum states in cavities, where dissipation is difficult to avoid. Moreover, the application of the driving pulse at different times after the generation of a field state would allow the “snapshotting” of the Wigner function as the state is dissipated. This means that valuable information about the (mixed) quantum state, as well as about the decay mechanism itself, could be retrieved while the state decays. The possibility of reconstructing quantum states even in the presence of dissipation may also be relevant for applications in quantum computing. Loss of coherence associated with dissipation is likely to occur in those devices, and our method could be used, for instance, as a scheme to refresh the state of a quantum computer in order to minimize the destructive action of the environment.

## ACKNOWLEDGMENTS

One of us, H.M.-C., thanks W. Vogel for useful comments. This work was partially supported by FAPESP (Fundação de Amparo à Pesquisa do Estado de São Paulo, Brazil), CONACYT (Consejo Nacional de Ciencia y Tecnología, México), CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico, Brazil) and ICTP (International Centre for Theoretical Physics, Italy).
# The yellow hypergiants HR 8752 and ρ Cassiopeiae near the evolutionary border of instability<sup>1</sup>

<sup>1</sup>Based on observations obtained with the William Herschel Telescope, operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias.

## 1 Introduction

Hypergiants are supergiant stars with strongly developed large-scale atmospheric velocity fields, excessive mass loss and extended circumstellar envelopes. They are rare objects, only 12 of them being known in our Galaxy. They are very luminous, but are not necessarily the most luminous objects in their spectral class. The yellow hypergiants and their characteristics have been reviewed recently by de Jager (1998). There are indications (relatively small mass; overabundance of Na and N with respect to the Sun) that yellow hypergiants are evolved stars, evolving from the red supergiant phase to the blue phase. Stellar evolutionary computations (e.g. Maeder & Meynet 1988) place cool hypergiants in a certain area of the H-R diagram (3.6 $`<\mathrm{log}T_{eff}<`$ 3.9, 5.3 $`<\mathrm{log}L/L_{\odot }<`$ 5.9) and predict that redward loops down to 4000$`\pm `$1000 K occur only for stars with $`M_{\mathrm{ZAMS}}\lesssim `$ 60 $`M_{\odot }`$. Once in the red supergiant phase ($`T_{\mathrm{eff}}\approx `$ 3000–4000 K), stars with $`M_{\mathrm{ZAMS}}\gtrsim `$ 10 $`M_{\odot }`$ shrink again and evolve to become blue supergiants. However, Böhm-Vitense (1958) noted that stars with $`T_{\mathrm{eff}}`$ near 9000 K have density inversions, which may indicate instability. This has led to research on the yellow evolutionary void: the area of the H-R diagram which occupies the region 3.8 $`<\mathrm{log}T_{eff}<`$ 4.0, 5.2 $`<\mathrm{log}L/L_{\odot }<`$ 5.9 (de Jager & Nieuwenhuijzen 1997).
The physics of this “forbidden” region for massive evolved stars on their blueward evolutionary loop has been studied recently by Nieuwenhuijzen & de Jager (1995) and de Jager & Nieuwenhuijzen (1997). Inside the void the atmospheres are moderately unstable, which shows up in various ways: the atmospheres have a negative density gradient at a certain depth level, the sonic point of the stellar wind is situated at photospheric levels, and the sum of all accelerations is directed outwards during part of the pulsational cycle (Nieuwenhuijzen & de Jager 1995). It is expected that stars, when approaching the void during their blueward evolution, may show signs of instability, but the very process of approaching the void has not yet been studied. This is a field where no observations have guided theory so far. Monitoring stars approaching the void will help us understand the nature of the instabilities and the hydrodynamics of unstable atmospheres, and finally answer the most important question of whether or not these stars can pass through the void. It is believed that the Galactic hypergiants HR 8752, $`\rho `$ Cas and IRC+10420 are presently “bouncing” against the “yellow evolutionary void” (de Jager 1998) at $`7500\pm 500`$ K, while there were periods when they had $`T_{eff}\approx 4000`$ K. The brightness of IRC+10420 in the $`V`$ band increased by 1 mag from 1930 to 1970 (Jones et al. 1992), and its $`T_{\mathrm{eff}}`$ has increased by 1000 K over the last 20 years (Oudmaijer et al. 1996). We do not know how rapidly these stars change their $`T_{\mathrm{eff}}`$, but there are some reasons to believe that the changes are accompanied by variations in the mass loss (de Jager 1998). Other hypergiants that appear to have a similar position in the HR diagram are Var A in M33 and V382 Car (Humphreys 1978). Another interesting object, HD 33579, appears to be located inside the void, evolving to the red (Humphreys et al. 1991).
The maximum $`T_{\mathrm{eff}}`$ ever observed in HR 8752 is 7170 K (de Jager 1999, private communication). Previous ground-based spectroscopic observations of HR 8752 and $`\rho `$ Cas have been carried out only in the optical and near-IR region (4000–9000 Å). High-resolution IUE spectra of $`\rho `$ Cas and HR 8752 have been discussed by Lobel et al. (1998) and Stickland & Lambert (1981), respectively. In this letter we report the first observations of these hypergiants in the near ultraviolet and communicate for the first time the finding of spectroscopically recorded large changes of the effective temperature of the cool hypergiant HR 8752, which cannot be ascribed to the regular variability of a supergiant atmosphere. This finding is based on a unique combination of high-resolution optical spectra which span a period of about 30 years. Thus, HR 8752 turns out to be the first cool supergiant to show the effects of stellar evolution in a study of its 30-year spectroscopic history.

## 2 The Observations

The observations were carried out on 1998 August 4 using the Utrecht Echelle Spectrograph (UES) at the Nasmyth focus of the 4.2-m WHT at the ORM (La Palma). Two spectral images of $`\rho `$ Cas and one image of HR 8752 were obtained. A UV-sensitive CCD detector, EEV 42 4200$`\times `$2148 (pixel size: 13.5$`\times `$13.5 $`\mu `$m), with 60% quantum efficiency at 3200 Å provided superb sensitivity down to the atmospheric cut-off at 3050 Å. We obtained spectra which cover the wavelength range between 3050 and 3920 Å in 40 orders at a spectral resolving power of $`R=\lambda /\mathrm{\Delta }\lambda \approx 55,000`$. For the data reduction we used standard iraf procedures<sup>2</sup> (<sup>2</sup>iraf is distributed by the National Optical Astronomical Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc., under contract with the National Science Foundation, USA). The wavelength calibration was performed with a Th–Ar lamp.
The final signal-to-noise (S/N) ratio varies for the different echelle orders, being in the range 80–160 for both stars. Additional high-resolution spectra of these stars in the wavelength range 3500–11 000 Å were acquired with the SOFIN echelle spectrograph at the 2.5-m NOT (La Palma) on 1998 October 9–10. The archival spectra from 1969 September 7, 1976 July 15 and 1978 August 8 were obtained at the Dominion Astronomical Observatory, Victoria, Canada, using the 1.2-m telescope at the coudé focus (Smoliński et al. 1994). The dispersion of the spectrograms was about 6 Å/mm and the signal-to-noise ratio at the level of 30 to 50.

## 3 Analysis and Conclusions

Simple comparison of the near-UV spectra of HR 8752 and $`\rho `$ Cas shows that these stars are no longer spectroscopic “twins”. It is enough to overplot their spectra and identify a number of lines in order to be convinced that the atmosphere of HR 8752 is hotter than that of $`\rho `$ Cas. Most of the absorption lines in this spectral range belong to the $`\alpha `$- and Fe-group elements Ti, Si, Cr, Sc, Fe, Mn and V. In fact, the spectrum of HR 8752 is considerably cleaner of blends than that of $`\rho `$ Cas because of a displacement of the ionization equilibrium. A clear illustration is presented in Fig. 1, where we compare one of the near-UV echelle orders and two unblended optical Fe i lines (selected by Lobel et al. 1998) in our targets. We have also found that many absorption lines in the near-UV spectrum of $`\rho `$ Cas are split. The first report of this phenomenon dates back to Bidelman & McKellar (1957). We confirm the findings of Sargent (1961) and Lobel (1997) that these splits in absorption appear only in lines with $`\chi _{\mathrm{up}}\lesssim `$ 3 eV. Various explanations for the split absorption cores have been suggested in the literature.
The phenomenon has been explained recently by Lobel (1997), who showed that the line splitting is caused by static emission emerging from detached and cool circumstellar shells, modelled for a fast bi-polar wind. In order to quantify the differences in the atmospheric conditions of our targets, we have employed a grid of LTE, plane-parallel, constant-flux, blanketed model atmospheres (Kurucz 1993), computed with atlas9 without overshooting. These models are interpolated for several values of $`T_{\mathrm{eff}}`$ and $`\mathrm{log}g`$. For $`\rho `$ Cas we used \[Fe/H\]=0.3 (Lobel et al. 1998) and for HR 8752 \[Fe/H\]=$`-`$0.5 (Schmidt 1998). Synthetic spectra were computed first, using the LTE code wita3 (Pavlenko 1991), which takes into account the molecular dissociation balance (note that our targets may have $`T_{\mathrm{eff}}`$ as low as 4000 K) and all important opacity sources. Atomic data were obtained from the VALD-2 database (Kupka et al. 1999). Our spectral window contains molecular bands of OH, CH and NH which can be used to derive CNO abundances and constrain the range of the atmospheric parameters. Molecular data for the CH (3145 Å), NH (3360 Å) and OH (0,0) (3120–3260 Å) bands were taken from Kurucz (1993), Cottrell & Norris (1978) and Israelian et al. (1998), respectively. To minimize the effects associated with errors in the transition probabilities of molecular lines, the oscillator strengths ($`gf`$-values) were modified from their original values to match the solar atlas (Kurucz et al. 1984) with solar abundances (Anders & Grevesse 1989). Synthetic spectra of the Sun were computed using a model with $`T_{\mathrm{eff}}`$=5777 K, $`\mathrm{log}g`$=4.4, \[Fe/H\]=0.0 and microturbulence $`\xi =1\mathrm{km}\mathrm{s}^{-1}`$. Our first attempts to fit the spectral lines located in the CH and NH regions assuming solar CNO abundances showed that these molecules are simply not present in the spectra.
We increased the abundance of nitrogen by a factor of 10 and still found no effect on the measured equivalent widths. This can be considered clear evidence that both stars had $`T_{\mathrm{eff}}>`$ 6200 K (given the values of the dissociation energies of the CH, NH and OH molecules) at the time of our observations. In fact, at $`T_{\mathrm{eff}}`$=6200 K we would still expect 10–20 mÅ lines of the OH molecule located between 3100 and 3200 Å (Israelian et al. 1998), even if oxygen is slightly underabundant in $`\rho `$ Cas, with \[O/H\]=$`-`$0.3 (Takeda & Takeda-Hidai 1998). Given the S/N of the data, we could easily have detected a minimum of 3–4 unblended OH lines if they were present in the spectra. We confirm a microturbulent velocity $`\xi =11\pm 2\mathrm{km}\mathrm{s}^{-1}`$ in both stars (de Jager 1998). Figure 2 shows the comparison between synthetic and observed spectra of both stars in the regions surrounding the CH and NH lines. We stress that these plots should not be considered “best fits”; we only want to show the basic features and blends in these regions and demonstrate the effect of varying $`T_{\mathrm{eff}}`$ on the synthetic spectra. We did not convolve the synthetic spectra with a Gaussian macro-broadening (the combined effect of rotation and macroturbulence) because it does not affect the EWs and therefore our final values of $`T_{\mathrm{eff}}`$/$`\mathrm{log}g`$. However, we convolved them with a Gaussian (FWHM=0.12 Å) to reproduce the instrumental profile. The differences between the observed and calculated equivalent widths were minimized for the best set of $`T_{\mathrm{eff}}`$, $`\mathrm{log}g`$ and $`\xi `$ (i.e. the same method as used by Lobel et al. 1998). We selected 16 spectral lines of Sc, Cr, Ti, etc. (Fig. 2), located in the windows 3130–3170 Å (near the CH band) and 3340–3380 Å (near the NH band), and measured their EWs (typically 300–800 mÅ) with the multi-Gaussian fitting option of the splot task of iraf.
The final values of the atmospheric parameters are $`T_{\mathrm{eff}}`$=7900$`\pm `$200 K and $`\mathrm{log}g`$=1.1$`\pm 0.4`$ for HR 8752, and $`T_{\mathrm{eff}}`$=7300$`\pm `$200 K and $`\mathrm{log}g`$=0.8$`\pm 0.4`$ for $`\rho `$ Cas. Because of the problem with UV opacities in presently available model atmospheres (such as ATLAS9) with $`T_{\mathrm{eff}}\approx 7500`$ K, we think that the latter value may be overestimated by about 250 K (half the amplitude of the $`T_{\mathrm{eff}}`$ variations caused by pulsation, determined from optical spectra), since the violet wing extensions are not as strongly developed as was observed by us in Nov.–Dec. 1993 (see Fig. 1). The spectrum of HR 8752 from 1969 was analyzed with a different approach. Due to the limited spectral region, covering wavelengths from 4800 to 6060 Å, severe blending and a low signal-to-noise ratio, only a limited number of relatively unblended lines were accessible for the analysis: 27 Fe i and 6 Fe ii lines. Equivalent widths were typically in the range from 200 to 600 mÅ. The atmospheric parameters were found by requiring that the derived single-line abundances be independent of the excitation potential and the equivalent width, with a unique value of the iron abundance for both neutral and ionized lines. The analysis was made using atmospheric models computed with a modified version of the TLUSTY code. The use of ATLAS9 opacity sources and ODF functions enables us to treat these models as an extension of the existing grid of ATLAS9 models (Kurucz 1993). Both plane-parallel and spherically symmetric models have been calculated. For the spherically symmetric models a luminosity of $`\mathrm{log}(L/L_{\odot })`$ = 5.50 was adopted, as determined by Schmidt (1998).
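The excitation-balance criterion used for the 1969 spectrum (single-line abundances independent of excitation potential) can be illustrated with a toy model: an error in the assumed temperature tilts the abundance-versus-χ relation with a slope proportional to 5040(1/T_assumed − 1/T_true) dex/eV, and the spectroscopic temperature is the root of that slope. All numbers below are synthetic, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
T_true, A_true = 5250.0, 7.45            # "true" temperature (K) and log abundance (dex)
chi = rng.uniform(0.0, 5.0, 33)          # excitation potentials (eV) of 33 mock Fe I lines
noise = rng.normal(0.0, 0.02, chi.size)  # 0.02 dex measurement scatter

def abundances(T_assumed):
    # toy Boltzmann tilt: the abundance error grows linearly with chi,
    # with slope 5040*(1/T_assumed - 1/T_true) dex/eV
    return A_true + 5040.0*chi*(1.0/T_assumed - 1.0/T_true) + noise

def slope(T_assumed):                    # least-squares slope of abundance versus chi
    return np.polyfit(chi, abundances(T_assumed), 1)[0]

T_spec = brentq(slope, 4000.0, 7000.0)   # excitation temperature: zero of the slope
print(round(T_spec, 1))                  # close to T_true, shifted slightly by the noise
```

In the real analysis the ionization balance between Fe i and Fe ii lines plays the role of the gravity constraint, and both conditions are solved simultaneously.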
The resulting parameters are $`T_{eff}`$=5250$`\pm `$250 K, $`\mathrm{log}g`$=$`-`$0.5$`\pm 0.5`$, \[Fe/H\]=$`-`$0.55$`\pm 0.25`$ and microturbulence $`\xi _\mu `$=10$`\pm 1`$ km s<sup>-1</sup> derived with plane-parallel models, and $`T_{eff}`$=5630$`\pm `$200 K, $`\mathrm{log}g`$=$`-`$0.7$`\pm 0.5`$, \[Fe/H\]=$`-`$0.46$`\pm 0.25`$ and $`\xi _\mu `$=11$`\pm 1`$ km s<sup>-1</sup> with spherically symmetric models. For the latter case we compute that the atmospheric extension was 23 percent (measured as the ratio of the geometrical distance between optical depths $`10^{-4}`$ and $`1`$ to the stellar radius). It is generally accepted that H$`\alpha `$ is the best indicator of global changes in the outer part of the envelope, where the wind is accelerating in a typical cool supergiant. Variations in the velocity and density structure of the upper layers produce changes in the asymmetry of the line, while an increase of the temperature (quasi-chromosphere) can force the wing into emission. This effect has been clearly observed in $`\rho `$ Cas (de Jager et al. 1997). However, changes in H$`\alpha `$ may reflect changes in the chromospheric structure rather than wind variations. For this reason it is desirable to study wind variations in other absorption lines. In general, the winds of cool stars are subtle and difficult to detect. Far shortward-extended wings due to wind absorption have been observed in many Fe i lines of $`\rho `$ Cas in the phase when $`T_{\mathrm{eff}}`$=7250 K (Lobel et al. 1998). The upper limit of the mass-loss rate was derived as 9.2×10<sup>-5</sup> $`M_{\odot }`$ yr<sup>-1</sup>. We have also detected these wings in many lines in the near-UV (Fig. 3). In addition, we have found violet wings extending up to 120 $`\mathrm{km}\mathrm{s}^{-1}`$ in the spectrum of HR 8752.
Assuming $`\mathrm{log}(L/L_{\odot })`$=5.6 (de Jager 1998) and $`\rho `$=7×10<sup>-15</sup> g cm<sup>-3</sup> as an upper limit to the density of the outermost layers of the atmosphere (from the model with $`T_{\mathrm{eff}}`$=7900 K and $`\mathrm{log}g`$=1.1), we estimate from $`\dot{M}=4\pi R_{\ast }^2\rho V_{\mathrm{\infty }}`$ an upper limit $`\dot{M}_{\mathrm{max}}`$=6.7×10<sup>-6</sup> $`M_{\odot }`$ yr<sup>-1</sup>, assuming spherically symmetric mass loss. We derived for $`\rho `$ Cas almost the same $`T_{\mathrm{eff}}`$ as it had on 1993 December 21 (Lobel et al. 1998). This suggests that $`\rho `$ Cas makes small “oscillations” with an amplitude $`\mathrm{\Delta }T_{eff}\approx `$ 500 K near the void. However, the effective temperature of HR 8752 has risen sharply over the last decades, and this places the star on the border of the void. When deriving the mass-loss rates we have assumed spherically symmetric outflow. However, one should keep in mind that the real distribution of the matter around these hypergiants is very complex and asymmetric (Lobel 1997; Petrov & Herbig 1992; Humphreys et al. 1997). It is very unlikely that the high effective temperature of HR 8752 is due to extra heating produced by the B1 secondary, which is located at 200 AU from the primary and has an orbital period of 500 yr (Piters et al. 1988). In that case, the overall spectrum of HR 8752 would be a combination of spectral lines formed in the hot upper layers (heated by the secondary) and the cool inner layers (Piters et al. 1988). This is not observed, and the spectrum is quite “normal”, as one would expect for a hypergiant with $`T_{\mathrm{eff}}\approx 8000`$ K. Understanding the final stages of stellar evolution of stars with 10 $`M_{\odot }\lesssim M_{\mathrm{ZAMS}}\lesssim `$ 60 $`M_{\odot }`$ requires detailed knowledge of the atmospheric pulsations and mass-loss mechanisms of cool hypergiants.
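The order of magnitude of the mass-loss limit quoted above can be checked by taking the stellar radius from the Stefan–Boltzmann law. The sketch reproduces the quoted upper limit to within a factor of about 1.4; the residual presumably reflects the radius and wind velocity actually adopted in the paper:

```python
import math

L_sun, M_sun = 3.828e33, 1.989e33      # erg s^-1, g
sigma_sb = 5.6704e-5                   # Stefan-Boltzmann constant, cgs
year = 3.156e7                         # s

L = 10**5.6 * L_sun                    # log(L/L_sun) = 5.6
Teff = 7900.0                          # K, from the UV analysis above
R2 = L/(4.0*math.pi*sigma_sb*Teff**4)  # R_*^2 from L = 4 pi R_*^2 sigma T^4 (cm^2)

rho = 7e-15                            # g cm^-3, upper limit on the outer density
v_inf = 120.0e5                        # cm s^-1, from the 120 km/s violet wings

Mdot = 4.0*math.pi*R2*rho*v_inf        # g s^-1
Mdot_per_yr = Mdot*year/M_sun
print(Mdot_per_yr)                     # ~1e-5 M_sun/yr, the same order as the quoted limit
```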
We are inclined to think that the large variations with $`\mathrm{\Delta }T_{\mathrm{eff}}`$ of 3000–4000 K are not caused by pulsations but reflect complex $`evolutionary`$ changes due to the active reconstruction of the stellar interior. Just how enhanced mass loss occurs at bouncing is not known. It seems significant that a number of stars moving to the blue are clustering at the low-temperature side of the void, while none of them occurs inside the void. This leads to the hypothesis that when approaching the border of that area, the star may show excessive mass loss and the development of an envelope, associated with a reduction of the effective temperature. How frequently (maybe just once?) this will happen before the star eventually passes through the void is an open question. It is quite possible that the final passage of the most massive stars through the void never takes place and that these stars finally explode as Type II supernovae. We thank C. de Jager and H. Nieuwenhuijzen for many discussions and Ilya Ilyin for help with the reduction of the SOFIN spectra. We also thank the anonymous referee for the useful comments.
# Role of break-up processes in fusion enhancement of drip-line nuclei at energies below the Coulomb barrier ## Abstract We carry out realistic coupled-channels calculations for the <sup>11</sup>Be + <sup>208</sup>Pb reaction in order to discuss the effects of break-up of the projectile nucleus on sub-barrier fusion. We discretize in energy the particle continuum states, which are associated with the break-up process, and construct the coupling form factors to these states on a microscopic basis. The incoming boundary condition is employed in solving the coupled-channels equations, which enables us to define the flux for complete fusion inside the Coulomb barrier. It is shown that complete fusion cross sections are significantly enhanced due to the couplings to the continuum states compared with the no-coupling case at energies below the Coulomb barrier, while they are hindered at above-barrier energies. Quantum tunneling in systems with many degrees of freedom has attracted much interest in recent years in many fields of physics and chemistry . In nuclear physics, heavy-ion fusion reactions at energies near and below the Coulomb barrier are typical examples of this phenomenon. In order for fusion processes to take place, the Coulomb barrier created by the cancellation between the repulsive Coulomb force and the attractive nuclear interaction has to be overcome. It has by now been well established that the coupling of the relative motion of the colliding nuclei to nuclear intrinsic excitations as well as to transfer reaction channels causes large enhancements of the fusion cross section at subbarrier energies over the predictions of a simple barrier penetration model. The effect of break-up processes on fusion, on the other hand, has not yet been understood very well, and many questions have been raised during the last few years both from the experimental and theoretical points of view. 
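The simple one-dimensional barrier penetration baseline mentioned above has a standard closed form, Wong's formula, $`\sigma (E)=(\mathrm{}\omega R_b^2/2E)\mathrm{ln}\{1+\mathrm{exp}[2\pi (EV_b)/\mathrm{}\omega ]\}`$, for an inverted-parabola barrier. The sketch below evaluates it with illustrative barrier parameters ($`V_b`$, $`\mathrm{}\omega `$, $`R_b`$) typical of a heavy system; these are placeholder numbers, not the values used in the calculations of this letter.

```python
import math

def wong_cross_section(E, Vb=38.0, hw=4.0, Rb=11.5):
    """Wong's formula for fusion through an inverted-parabola barrier.
    E, Vb, hw in MeV; Rb in fm; returns sigma in mb (1 fm^2 = 10 mb).
    Barrier parameters here are illustrative placeholders."""
    sigma_fm2 = (hw * Rb**2 / (2.0 * E)) * math.log1p(
        math.exp(2.0 * math.pi * (E - Vb) / hw))
    return 10.0 * sigma_fm2

for E in (34.0, 38.0, 42.0):
    print(f"E = {E} MeV  sigma ~ {wong_cross_section(E):.3g} mb")
```

Well above the barrier the formula reduces to the classical result $`\pi R_b^2(1V_b/E)`$; well below it, the cross section falls off exponentially, which is the regime where channel couplings produce the large enhancements discussed in the text.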
The issue has become especially relevant in recent years due to the increasing availability of radioactive beams. These often involve weakly-bound systems close to the drip lines, for which the possibility of projectile dissociation prior to or at the point of contact cannot be ignored. Different theoretical approaches to the problem have led to conflicting results, not only quantitatively but also qualitatively. The probability for fusion at energies below the barrier has in fact been predicted to be either reduced or enhanced by the coupling to the continuum states. These investigations, however, were not satisfactory in view of the rather simplified assumptions used in the treatment of both the structure and reaction aspects of the problem. In refs. the coupling to the break-up channels was incorporated in terms of a “survival factor”, a procedure that underestimates the effects of the coupling in the classically forbidden region, i.e., the dynamical modulation of the effective potential which is most relevant at energies below the barrier. Ref. took this effect into account, but the entire continuum space was mocked up by a single discrete configuration and an arbitrary function was introduced to parametrize the radial dependence of the couplings to such a state. In this letter we address the problem without resorting to these approximations. Realistic form factors to the continuum states are constructed by folding the external nuclear and Coulomb fields with the proper single-particle transition densities, obtained by promoting the last weakly-bound nucleon to the continuum states. The reaction mechanism is described within a full quantal coupled-channels description. The flux for complete fusion is separated inside the Coulomb barrier from that for incomplete fusion using the incoming boundary conditions . 
In order to isolate the genuine effect of the break-up process, we include only the continuum states in the coupling scheme, neglecting continuum-continuum coupling as well as other inelastic channels such as bound excited states in either reaction partner. For the same reason, we do not take into account static modifications on the ion-ion potential which may arise from the halo properties of the projectile. As an example for our study we choose the fusion reaction <sup>11</sup>Be + <sup>208</sup>Pb, where the projectile is generally regarded as a good example of a system with a single neutron “halo”. In a pure single-particle picture, the last neutron in <sup>11</sup>Be occupies the $`2s_{1/2}`$ state, bound by about 500 keV. The strong concentration of strength at the continuum threshold evidenced in break-up reactions has been mainly ascribed to the promotion of this last bound neutron to the continuum of $`p_{3/2}`$ and $`p_{1/2}`$ states at energy $`E_c`$ via the dipole field. Since the presence of the first excited $`1p_{1/2}`$ state (still bound by about 180 keV) may perturb the transition to the corresponding $`p_{1/2}`$ states in the continuum, we prefer here to consider only the contribution to the $`p_{3/2}`$ states. The initial $`2s_{1/2}`$ bound state and the continuum $`p_{3/2}`$ states are generated by Woods-Saxon single-particle potentials whose depths have been adjusted to reproduce the correct binding energies for the $`1p_{3/2}`$ and $`2s_{1/2}`$ bound states. In particular, one needs for the latter case a potential which is much deeper than the “standard” one. The form factor $`F(r;E_c)`$ as a function of the internuclear separation $`r`$ and of the energy $`E_c`$ in the continuum is then obtained by folding the corresponding transition density with the external field generated by the target. 
In addition to the Coulomb field, a Woods-Saxon nucleon-nucleus potential is used, with parameters $`R=r_0A_T^{1/3}`$, $`r_0=1.27`$ fm, $`a=0.67`$ fm, $`V=(-51+33(N-Z)/A)`$ MeV, and $`V_{ls}=-0.44V`$. The dipole form factors $`F(r;E_c)`$ thus constructed are shown in Fig. 1 for several values of $`r`$ and $`E_c`$. In Fig. 1(a), we display the form factor as a function of $`r`$ for a fixed value of the energy in the continuum ($`E_c`$ being 0.9 MeV). Note the long tail of the nuclear contribution as a consequence of the large radial extension of the weakly-bound wave function, resulting in the predominance of the nuclear form factor up to the unusually large distance of about 25 fm. The same reason gives rise to a deviation of the Coulomb part from the asymptotic behavior proportional to $`r^{-2}`$. Note also the constructive interference of the nuclear and Coulomb parts, due to the negative $`E1`$ effective charge of the neutron excitation. In Figs. 1(b) and 1(c), we show, instead, the energy dependence of the form factors for a fixed value of $`r`$. At large values of $`r`$ the curves are peaked at very low energies, reflecting the corresponding behavior of the $`B(E1)`$. At distances around the barrier (which are most relevant to the fusion process) the peaks of the distributions move to higher energies, especially for the nuclear part. In order to perform the coupled-channels calculation the distribution of continuum states is discretized in bins of energy, associating to each bin the form factor corresponding to its central energy. We have considered the continuum distribution up to 2 MeV, with a step of 200 keV. In this way, the calculations are performed with 10 effective excited channels. The ion-ion potential is assumed to have a Woods-Saxon form with parameters $`V_0=`$152.5 MeV, $`r_0`$=1.1 fm and $`a=`$0.63 fm, a set that leads to the same barrier height as the Akyüz-Winther potential. 
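The continuum discretization described above is pure bookkeeping and can be sketched as follows. Only the binning (200 keV bins up to 2 MeV, one effective channel per bin, each carrying the form factor at its central energy) is reproduced; the `form_factor` function below is a placeholder with a long-range tail, not the folded $`F(r;E_c)`$ of the paper.

```python
import numpy as np

E_MAX = 2.0   # MeV, upper edge of the discretized continuum
DE = 0.2      # MeV, bin width -> 10 effective channels

# Bin edges and central energies: channel i covers [i*DE, (i+1)*DE]
n_bins = int(round(E_MAX / DE))
edges = np.linspace(0.0, E_MAX, n_bins + 1)
centers = 0.5 * (edges[:-1] + edges[1:])

def form_factor(r, e_c):
    """Placeholder dipole form factor F(r; E_c) (NOT the microscopically
    folded one): a slowly decaying tail mimicking the halo neutron."""
    return np.exp(-e_c) / (1.0 + r) ** 2

# One coupling form factor per channel, evaluated at the bin center
r_grid = np.linspace(5.0, 30.0, 100)   # internuclear separation, fm
couplings = [form_factor(r_grid, ec) for ec in centers]
print(len(couplings), "channels at E_c =", centers, "MeV")
```

Each entry of `couplings` then enters one row of the coupled-channels matrix, exactly as the 10 effective excited channels do in the calculation described above.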
At the distance inside the Coulomb barrier where the incoming boundary conditions are imposed, the fluxes for the entrance channel and the excited break-up channels are evaluated separately. Cross sections for complete fusion, leading to <sup>219</sup>Rn, are then defined using only the flux for the entrance channel, while those for incomplete fusion, which leads to <sup>218</sup>Rn, are defined in terms of the flux for the break-up channels . Figure 2(a) shows the results of our calculations. The solid line represents the complete fusion cross section, while the dashed line denotes the sum of the complete and incomplete fusion cross sections. Also shown for comparison, by the thin solid line, is the cross section in the absence of the couplings to the continuum states. One can see that these couplings enhance the fusion cross sections at energies below the barrier over the predictions of a one-dimensional barrier penetration model. Note that this is the case not only for the total (complete plus incomplete) fusion probability, but also for the complete fusion in the entrance channel. This finding is qualitatively the same as that of Ref. , which used a three-body model to reach this conclusion, and supports the results of the original calculation performed in Ref. . As emphasized there, accounting properly for the dynamical effects of the coupling in the classically forbidden region is essential to arrive at this conclusion. The situation is completely reversed at energies above the barrier, where fusion in the break-up channel becomes more important and dominates at the expense of the complete fusion. As a consequence, the cross sections for complete fusion are hindered when compared with the no-coupling case. It is interesting to check whether this hindrance is caused mainly by the Coulomb interaction. Figure 2(b) shows the effect of the individual nuclear and Coulomb excitations separately. 
It is apparent from this figure that the nuclear coupling plays a more important role than the Coulomb one in fusion reactions at energies below the Coulomb barrier. This can be understood by considering that the nuclear process is essentially dominated by the values of the potentials and couplings around the Coulomb barrier. At energies above the barrier, on the other hand, both the Coulomb and the nuclear break-up processes play an important role in suppressing the fusion cross sections. This is a characteristic feature of loosely bound systems, where the nuclear form factor extends outside the Coulomb barrier. For fusion of stable nuclei, the Coulomb break-up would play the more important role in suppressing complete fusion at energies above the barrier. In summary, we have performed exact coupled-channels calculations for weakly-bound systems using realistic coupling form factors to discuss the effects of break-up on subbarrier fusion reactions. As an example, we have considered the fusion of <sup>11</sup>Be with a <sup>208</sup>Pb target, taking into account the dipole transition of the weakly-bound $`2s_{1/2}`$ neutron to the $`p_{3/2}`$ continuum, which gives the dominant contribution to the low-energy $`B(E1)`$ response. Couplings to bound excited states both in the projectile and in the target nuclei, as well as the static change of the ion-ion potential, were left aside in order to investigate the genuine effects of the break-up processes on fusion reactions. We find that the coupling to break-up channels enhances cross sections for complete fusion at energies below the Coulomb barrier, while it reduces them at energies above. Very recently, a complete fusion excitation function was measured for the <sup>9</sup>Be + <sup>208</sup>Pb reaction at near-barrier energies by Dasgupta et al. . 
They showed that cross sections for complete fusion are considerably smaller at above-barrier energies compared with a theoretical calculation that reproduces the total fusion cross section, in general agreement with our results. We note, however, that it is not at all easy to identify reliable reference measurements to compare with at energies below the barrier. This feature may make it quite difficult to settle the issue on a purely experimental basis. The authors are grateful to the ECT\* in Trento, the INT at the University of Washington (A.V.), and the INFN Padova (K.H.) for their hospitality and for partial support for this project.
# 𝐵⁢(𝐻) Constitutive Relations Near 𝐻_{𝑐⁢1} in Disordered Superconductors ## I Introduction The physics of vortex lines in high-temperature superconductors has attracted much experimental and theoretical interest . The competition between interactions, pinning, and thermal fluctuations gives rise to a wide range of novel phenomena that are both interesting in their own right and technologically important. Here, we focus primarily on the effects of disorder on vortex behavior, phenomena which are also important for low-temperature Type II superconductors. In a disorder-free sample, vortex lines always flow in response to a current, leading to a resistance even at arbitrarily small currents. By contrast, defects in superconductors attract vortex lines. Just as a few nails can hold a carpet in place, an entire vortex line system can be held in place by a few defects, leading to a large critical current—provided the vortices are in a solid phase, where they are held in place relative to each other by their mutual repulsion, which gives rise to a shear modulus. If the temperature is raised so that the vortices are in a liquid state, then vortices held in place by the disorder are pinned, but the other vortices can flow around them, leading again to zero critical current. In the case of the high-temperature superconductors, disorder is naturally present in the form of oxygen vacancies. The quantity of such vacancies can be altered by changing the doping of the crystal, i.e., by introducing more or less oxygen during the growth process. Such changes can have a strong impact on the pinning of the vortices, and hence on the critical current. Substitutional defects can play a similar role in low-temperature superconductors. Artificial defects added to superconductors are particularly effective in increasing the critical current. Of special note are columnar defects, which are generated by bombarding the sample with heavy ions that produce damage tracks in their wake . 
These tracks pin vortices strongly because their width ($`\sim 60\AA `$) is comparable to the vortex core size ($`\sim 20\AA `$). When the columnar pins are aligned with the direction of the magnetic field (and hence with the vortex lines) a large increase in the critical current is observed. However, an even greater increase in the critical current is observed when the columns are splayed, i.e., they are not all oriented in the same direction . The superior pinning properties of columnar defects with controlled splay were predicted theoretically, based on the expected reduction of a variable-range hopping vortex transport mechanism and on enhanced vortex entanglement due to splay . Despite the technological importance, much remains to be done in order to understand the behavior of vortex lines in the presence of splayed columnar disorder. Even at high temperatures, when the analysis is expected to be most tractable, standard approaches seem to break down and give nonsensical results. For example, the boson mapping works quite well in describing many of the properties of vortex lines in the presence of point or unsplayed columnar disorder. This approach utilizes a formal correspondence between vortex trajectories and the world lines of fictitious quantum-mechanical bosons in two dimensions. In this analogy, the temperature $`T`$ plays the role of Planck’s constant $`\hbar `$, the bending energy (or line tension) $`g`$ plays the role of the boson mass $`m`$, and the length of the sample $`L`$ corresponds to $`\beta \hbar `$ for the bosons. The analogy works best in the $`L\to \mathrm{\infty }`$ limit, which corresponds to the $`T\to 0`$ limit for the bosons. In this limit, in the absence of disorder and interactions, the bosons should form a condensate. In the presence of disorder or interactions, some “bosons” are kicked out of the condensate, resulting in a condensate density $`n_0`$ which is less than the total density $`n`$. 
For vortex lines, the “condensate density” is a measure of the degree of entanglement . This condensate density can be calculated for the various types of disorder discussed above. It is well-behaved for point and unsplayed columnar disorder, but in the presence of even weak splayed columnar disorder, $`n_0`$ diverges , suggesting that the boson mapping is flawed in the presence of splayed columnar disorder. Indeed, Täuber and Nelson conclude that the super-diffusive wandering of the flux lines causes the mapping onto non-relativistic bosons to break down. Unfortunately, this mapping—which has provided many insights for the cases of no disorder, point disorder, and unsplayed columnar defects—seems to be less suitable for understanding the behavior of vortex lines in the presence of splayed columnar disorder. Here, we focus on the behavior of vortex lines near $`H_{c1}`$, where the lines are dilute. In particular, we predict the $`B(H)`$ constitutive relation for vortex lines in the presence of the various types of disorder discussed above. We begin by reviewing the traditional Abrikosov result, expected to hold in the absence of disorder and thermal fluctuations. Each vortex that enters the sample will gain a free energy proportional to $`(H-H_{c1})`$ per unit length. However, there is an energy cost due to repulsive interactions between any two vortices proportional to $`\frac{1}{\sqrt{r}}\mathrm{exp}(-r/\lambda )`$ for $`r\gg \lambda `$, where $`r`$ is the distance between the vortices and $`\lambda `$ is the London penetration depth. In the dilute limit, the interactions with nearest neighbors will dominate over interactions with more distant neighbors, and the free energy density is given by $$f=-c_1(H-H_{c1})n+c_2n^{5/4}e^{-c_3/\sqrt{n}},$$ (1.1) where $`n`$ is the density of vortices, and $`c_1`$, $`c_2`$, and $`c_3`$ are constants that can be determined in terms of the vortex parameters . (We leave them general here to better elucidate the structure of the argument.) 
Upon minimizing $`f`$ with respect to $`n`$, we obtain $$n=\left\{\frac{c_3}{\mathrm{ln}\left[\frac{c_2c_3}{2c_1n^{1/4}}\frac{1}{(H-H_{c1})}\right]}\right\}^2.$$ (1.2) The dominant behavior may be obtained by substituting $`n\approx c_3^2`$ on the right hand side, so that the magnetic field (given by $`B=n\varphi _0`$, where $`\varphi _0`$ is the flux quantum) varies inversely as the square of the logarithm of $`H-H_{c1}`$. Plugging in the relevant parameters $`c_1`$, $`c_2`$, and $`c_3`$ for a triangular lattice, one finds $$n=\frac{2\varphi _0}{\sqrt{3}\lambda ^2}\left\{\frac{1}{\mathrm{ln}\left[\frac{3\varphi _0}{4\pi \lambda ^2}\frac{1}{(H-H_{c1})}\right]}\right\}^2.$$ (1.3) In the presence of disorder, Eq. (1.3) will be modified. With point disorder, we obtain $`B\propto (H-H_{c1})`$ (as calculated and measured by Bolle et al.) in 1+1 dimensional samples, where the vortices only have one direction transverse to the magnetic field in which they can wander (see Fig. 1). In 2+1 dimensional samples (Fig. 2), we obtain $`B\propto (H-H_{c1})^{3/2}`$, in agreement with calculations done by Nattermann and Lipowsky . In the presence of splayed columnar defects, we find that $`B\propto (H-H_{c1})^{3/2}`$ both for 1+1 dimensional (Fig. 3) and 2+1 dimensional (Fig. 4) samples. For columnar disorder with unbounded disorder strength, Larkin and Vinokur argue that $`B\propto e^{C(H-H_{c1})}`$ in 2+1 dimensions. However, we show that for the more physical case of bounded disorder, $`B\propto \mathrm{exp}[-C_3/(H-H_{c1})]`$ in 2+1 dimensions (Fig. 5), while $`B\propto \mathrm{exp}[-C_2/(H-H_{c1})^{1/2}]`$ in 1+1 dimensions (Fig. 6). In addition to these relations, we also estimate the prefactors (up to factors of order unity) in terms of physical constants such as the temperature, the disorder strength, and the superconducting coherence length. These prefactors are important for comparisons with experiment. 
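The density equation (1.2) is transcendental in $`n`$, but since $`n`$ enters the right hand side only inside the logarithm, it converges quickly under fixed-point iteration started from the dominant-behavior guess $`n\approx c_3^2`$. The sketch below uses dimensionless placeholder constants $`c_1=c_2=c_3=1`$, not the physical values behind Eq. (1.3).

```python
import math

def vortex_density(h, c1=1.0, c2=1.0, c3=1.0, n_iter=50):
    """Fixed-point solution of n = {c3 / ln[c2*c3/(2*c1*n^(1/4)*h)]}^2
    for h = H - H_c1 > 0, with dimensionless placeholder constants."""
    n = c3**2                      # dominant-behavior starting guess
    for _ in range(n_iter):
        n = (c3 / math.log(c2 * c3 / (2.0 * c1 * n**0.25 * h)))**2
    return n

for h in (1e-3, 1e-4, 1e-5):
    print(f"h = {h:.0e}  n = {vortex_density(h):.4e}")
```

The output illustrates the point made in the text: the density (and hence $`B=n\varphi _0`$) vanishes only inverse-logarithmically slowly as $`H`$ approaches $`H_{c1}`$ from above.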
A recent torsional oscillator experiment carried out by Bolle et al. confirms that $`B\propto (H-H_{c1})`$ for vortex lines in two dimensions in the presence of point disorder. The experiment was done by attaching a thin sheet of superconducting 2H-NbSe<sub>2</sub> to a high-$`Q`$ micro-electromechanical device, and measuring the resonance frequency of torsional oscillations of the sample. Because a sample with a magnetic moment $`𝐌`$ in a field $`𝐇`$ exerts an additional torque $`\tau =𝐌\times 𝐇`$ on the oscillator, the magnetic field which penetrates and becomes trapped in the sample can be probed by finding shifts in the resonance frequency as a function of applied field $`H`$. Small jumps in the frequency as a function of applied field were observed , which were attributed to individual vortices entering the sample. By counting the number of jumps as a function of $`H`$, these authors obtained $`B\propto (H-H_{c1})`$. In the presence of parallel or splayed columnar disorder, the $`B`$ vs. $`H`$ curve could be measured by a very similar experiment, in which the sample is first irradiated isotropically by heavy ions to produce the required disorder. In three dimensions, a similar experiment could be done using a long thin needle-shaped sample to avoid demagnetizing effects. For point disorder, this experiment was done in the 1960’s with ambiguous results . In particular, work by Finnemore et al. on Niobium samples seems to indicate $`B\propto (H-H_{c1})^x`$, with $`x>1`$. However, the decades-old data is too rough to allow quantitative comparison with the prediction for point disorder, $`B\propto (H-H_{c1})^{3/2}`$. We approach these problems differently for the different types of disorder. Parallel columnar disorder localizes vortices into finite transverse regions of the sample in both 1+1 and 2+1 dimensions. Therefore, at low densities, the vortices can easily avoid paying a large cost associated with their mutual repulsion if they are located in different areas of the sample. 
In the language of the boson mapping described above, we can approximate the intervortex interactions as merely restricting the occupancy of any given localized state to precisely one boson . The form of the $`B(H)`$ constitutive relation is then determined by the low energy tails of the density of states. This idea is explored further in Sec. III D. By contrast, as discussed in Sec. II, in the presence of point disorder or splayed columnar disorder, noninteracting vortices will be in extended states. As such, repeated collisions between vortices play a crucial role in determining the $`B(H)`$ constitutive relation, as the intervortex interactions attempt to localize each vortex into a cage surrounded by its neighbors. The energy lost due to restricted vortex wandering will determine $`B(H)`$. We investigate this case more fully beginning in Sec. II with the problem of a single flux line superimposed on a background of disorder. The partition function describing a flux line can be mapped onto the noisy Burgers, or KPZ, equation describing the fluctuations of an elastic interface in the presence of a random space and time dependent potential influencing the interface’s progress . This problem has been studied extensively for random potentials uncorrelated in space and time (appropriate to point disorder for the vortex lines) , but more recently has also been investigated for correlated potentials . Provided we restrict ourselves to the case where the splayed columnar defects are nearly isotropically oriented, we can use previous results for the wandering exponent $`\zeta `$, which describes how far the vortex line wanders transverse to the magnetic field in a distance $`l`$ in the field direction by the formula $`\left\{\overline{[𝐫(l)-𝐫(0)]^2}\right\}^{1/2}\propto l^\zeta `$, where $`𝐫(z)`$ labels the transverse coordinates of the flux line. 
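The wandering exponent is easy to illustrate numerically. For a free line (no disorder) the transverse coordinate performs a simple random walk in the field direction, and the definition above yields $`\zeta =1/2`$; the sketch below averages over independent thermal histories and is only a free-line illustration, not the disorder-averaged KPZ calculation of the text.

```python
import math
import random

def mean_sq_displacement(l, n_lines=2000, seed=1):
    """Average [r(l) - r(0)]^2 for free directed lines: at each step
    along tau the transverse coordinate takes a unit random kick."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_lines):
        r = 0.0
        for _ in range(l):
            r += rng.choice((-1.0, 1.0))
        total += r * r
    return total / n_lines

# Fit zeta from {mean[r(l)-r(0)]^2}^(1/2) ~ l^zeta between two scales
l1, l2 = 64, 256
w1 = math.sqrt(mean_sq_displacement(l1))
w2 = math.sqrt(mean_sq_displacement(l2))
zeta = math.log(w2 / w1) / math.log(l2 / l1)
print(f"measured zeta ~ {zeta:.2f} (free line: 0.5)")
```

Disorder changes this exponent: the superdiffusive wandering ($`\zeta >1/2`$) produced by point or splayed columnar disorder is exactly what the KPZ results quoted below are used to quantify.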
Columnar defects with an approximately isotropic distribution of splay can be created by using neutrons or protons to trigger fission of, e.g., bismuth nuclei in BSCCO . In Sec. III, we develop an approximate theory for a dilute set of vortex lines in the presence of point and splayed columnar disorder. We first derive the scaling of $`B`$ vs. $`H`$, and then focus on the prefactor of this relation. In some experiments , the columns do not traverse the entire sample, so we also consider the effect that finite column size will have on our predictions. Specifically, we expect a crossover from the point disorder behavior at extremely weak $`H-H_{c1}`$ to the splayed columnar behavior at somewhat larger $`H-H_{c1}`$. ## II Single vortex line ### A Model We begin with a model free energy for a single vortex line in a $`d`$ dimensional sample of thickness $`L`$. We label the direction of the magnetic field by $`\tau `$, and the transverse position of the line at $`\tau `$ by a $`d-1`$ dimensional vector $`𝐫(\tau )`$. The free energy then reads : $$F[𝐫(\tau )]=\frac{1}{2}g\int _0^L\left(\frac{d𝐫}{d\tau }\right)^2𝑑\tau +\int _0^LV[𝐫(\tau ),\tau ]𝑑\tau ,$$ (2.1) where $`g`$ is the line tension. The pinning potential $`V[𝐫,\tau ]`$ arises due to the interaction with, say, point disorder or splayed columnar defects. Its mean value merely affects the average field $`H_{c1}`$ at which vortex lines will penetrate the sample, so we subtract it out, and assume that $`\overline{V[𝐫,\tau ]}=0`$. We further assume that the noise is Gaussian with a correlator $$\mathrm{\Delta }(𝐫-𝐫^{},\tau -\tau ^{})=\overline{V[𝐫,\tau ]V[𝐫^{},\tau ^{}]}.$$ (2.2) We focus on the case of “nearly isotropic” splay. This is the defect correlator for a set of randomly tilted columnar pins, each described by a trajectory $`𝐫(\tau )=𝐑+𝐯\tau `$ with a Gaussian distribution of the tilts $`𝐯`$, $`P[𝐯]\propto e^{-v^2/2v_D^2}`$, in the limit $`v_D\to \mathrm{\infty }`$ . 
For nearly isotropic splay, the Fourier transform of the correlator, defined by $$\mathrm{\Delta }(𝐤,\omega )=\int d^{d-1}𝐫\int _0^L𝑑\tau \mathrm{\Delta }(𝐫,\tau )e^{-i(𝐤\cdot 𝐫-\omega \tau )},$$ (2.3) is given by $`\mathrm{\Delta }/k`$ , which differs from the truly isotropic limit $`\mathrm{\Delta }/(k^2+\omega ^2)^{1/2}`$. However, using the correlator for nearly isotropic splay simplifies the calculations significantly. In the language of the noisy Burgers (or KPZ) equation, truly isotropic splay has both spatial and temporal correlations, while nearly isotropic splay has only spatial correlations. By focusing on the nearly isotropic splay, we can therefore rely on work done on the KPZ equation with spatially correlated disorder and avoid the more complicated case of temporal disorder. Moreover, we believe that this difference should not affect the physical implications at large length scales significantly . The partition function associated with the free energy of Eq. (2.1), $$𝒵(𝐫,\tau )=\int 𝒟𝐫^{}(\tau ^{})e^{-F[𝐫^{}(\tau ^{})]/T},$$ (2.4) i.e., the path integral of the Boltzmann factor $`e^{-F[𝐫^{}(\tau ^{})]/T}`$ over all vortex trajectories $`𝐫^{}(\tau ^{})`$ running from $`\tau ^{}=0`$ to position $`𝐫(\tau )`$ at $`\tau ^{}=\tau `$, obeys the differential equation $$T\frac{\partial 𝒵(𝐫,\tau )}{\partial \tau }=\left[\frac{T^2}{2g}\nabla ^2+V(𝐫,\tau )\right]𝒵(𝐫,\tau ).$$ (2.5) Eq. (2.5) can be further transformed by means of the Cole-Hopf transformation (similar to the WKB transformation in quantum mechanics) $$𝒵(𝐫,\tau )=\mathrm{exp}\left[\mathrm{\Phi }(𝐫,\tau )\right]$$ (2.6) into $$T\frac{\partial \mathrm{\Phi }}{\partial \tau }=\frac{T^2}{2g}\nabla ^2\mathrm{\Phi }+\frac{T^2}{2g}(\nabla \mathrm{\Phi })^2+V(𝐫,\tau ).$$ (2.7) This equation, known as the KPZ , or noisy Burgers equation , has been studied in the case of uncorrelated (point-like) disorder in a great variety of contexts. It was studied by Forster, Nelson, and Stephen as a model for the velocity fluctuations in a randomly stirred, $`(d-1)`$-dimensional turbulent fluid. 
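The KPZ equation of the type written above can also be integrated directly. The sketch below evolves a rescaled 1+1 dimensional version with an explicit Euler scheme and uncorrelated (point-disorder-like, $`\rho =0`$) noise, and tracks the growth of the interface width; grid sizes, time step, and coupling values are arbitrary illustrative choices, and correlated (splayed-columnar) noise would instead be generated in Fourier space.

```python
import numpy as np

rng = np.random.default_rng(0)

N, dx, dt = 256, 1.0, 0.01
nu, lam, Delta = 1.0, 1.0, 0.1
phi = np.zeros(N)

def step(phi):
    """One explicit Euler step of d(phi)/dt = nu*laplacian(phi)
    + (lam/2)*(grad phi)^2 + V, with periodic boundaries."""
    lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dx**2
    grad = (np.roll(phi, -1) - np.roll(phi, 1)) / (2.0 * dx)
    noise = rng.normal(0.0, np.sqrt(2.0 * Delta / (dt * dx)), N)
    return phi + dt * (nu * lap + 0.5 * lam * grad**2 + noise)

widths = []
for t in range(1, 2001):
    phi = step(phi)
    if t in (200, 2000):
        widths.append(float(phi.std()))   # interface width w(t)

print(f"w(t1) = {widths[0]:.3f}, w(t2) = {widths[1]:.3f}")
```

The width keeps growing with time before finite-size saturation, which in the vortex language (via the Cole-Hopf map $`𝒵=e^\mathrm{\Phi }`$) mirrors the transverse wandering of the flux line.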
Later, it reappeared as a model for surface roughening proposed by Kardar, Parisi, and Zhang . In this context, it has inspired a great deal of work, summarized in a review article by Halpin-Healy and Zhang . There have also been investigations of the properties of this equation with correlated disorder $`\mathrm{\Delta }(𝐤)=\mathrm{\Delta }/k^{2\rho }`$ in the context of surface roughening. However, to date, there has not been any reliable characterization of the exponents over the entire $`d`$ vs. $`\rho `$ space. Medina et al. have found results that seem to be accurate in two dimensions . Halpin-Healy has obtained results that are accurate in two dimensions, and may describe the behavior for sufficiently small $`\rho `$ in higher dimensions as well. Recently, Frey et al. have found exact exponent solutions not only in two dimensions but also for sufficiently large values of $`\rho `$ in higher dimensions. (It is by reference to these exact results that we judge the accuracy of the results of Medina et al. and Halpin-Healy.) Although they are not able to obtain exact results for all $`d`$ and $`\rho `$, Frey et al. make a plausible conjecture for the behavior at smaller values of $`\rho `$ that is in agreement with Halpin-Healy’s short-range results and with numerical simulations . Our purpose here is to adapt this work to vortex lines in the presence of splayed columnar disorder, and explain some of the consequences for experiments. The application of the results for point disorder to experiments is also reviewed. In Sec. II B, we set the stage for a more detailed discussion with a simple, self-contained renormalization group treatment that gives results in agreement with Frey et al.’s exact results in its range of applicability. In Sec. II C, we discuss the limitations of this approach. ### B Renormalization group #### 1 Scaling To understand the vortex wandering at long length scales, we use the renormalization group method of Ref. 
to integrate out the short-distance behavior. Since, under renormalization, the coefficients of the $`\nabla ^2\mathrm{\Phi }`$ term and the $`(\nabla \mathrm{\Phi })^2`$ term will no longer remain identical, we replace Eq. (2.7) by the more conventional notation $$\frac{\partial \mathrm{\Phi }}{\partial \tau }=\nu \nabla ^2\mathrm{\Phi }+\frac{\lambda }{2}(\nabla \mathrm{\Phi })^2+V(𝐫,\tau ),$$ (2.8) where initially $`\nu =\lambda /2=T/2g`$, and we have absorbed a factor of $`T`$ into $`V`$ so that $`\mathrm{\Delta }(𝐤)=\mathrm{\Delta }/(T^2k)`$. We generalize to correlated disorder $`\mathrm{\Delta }(𝐤)=\mathrm{\Delta }/(T^2k^{2\rho })`$. It is instructive to generalize the correlator this way, because then not only does $`\rho =1/2`$ describe nearly isotropic splayed columnar disorder, but $`\rho =0`$ generates the results for point disorder as well. This renormalization group will be based on a perturbation series in powers of $`\lambda `$. We first rescale Eq. (2.8) by a scale factor $`b`$, with $`𝐫`$ $`\to `$ $`b𝐫,`$ (2.9) $`\tau `$ $`\to `$ $`b^z\tau ,`$ (2.10) $`\mathrm{\Phi }`$ $`\to `$ $`b^\chi \mathrm{\Phi },`$ (2.11) where $`z`$ and $`\chi `$ will eventually be chosen to keep various coupling constants fixed under the renormalization procedure. Since $`z`$ describes the ratio of the scaling in the timelike direction to that in the spacelike direction, $`\zeta =1/z`$ is exactly the wandering exponent that we are looking for. Upon inserting these transformations into Eq. (2.8), we see that the equation remains invariant under these changes provided we rescale $`\nu `$, $`\lambda `$, and $`\mathrm{\Delta }`$ via $`\nu `$ $`\to `$ $`b^{z-2}\nu ,`$ (2.12) $`\lambda `$ $`\to `$ $`b^{\chi +z-2}\lambda ,`$ (2.13) $`\mathrm{\Delta }`$ $`\to `$ $`b^{2\rho +1-d+z-2\chi }\mathrm{\Delta }.`$ (2.14) In the absence of the nonlinearity (i.e., $`\lambda =0`$), the equation becomes completely scale invariant if we choose $`z=2`$ and $`\chi =(2\rho +3-d)/2`$. 
However, any small $`\lambda `$ then rescales according to $$\lambda \rightarrow b^{(2\rho +3-d)/2}\lambda ,$$ (2.15) and thus the nonlinear term will be relevant for $`d<2\rho +3`$ (i.e., $`d<4`$ for splayed columnar disorder and $`d<3`$ for point disorder). For $`d>2\rho +3`$, we expect the mean field exponents displayed in Eqs. (2.12)—(2.14) to be accurate, and $`\zeta =1/2`$. However, in the physical situation of interest, $`d\le 2\rho +3`$ and we expect the scaling exponents to change due to the nonlinearity. #### 2 Perturbation theory To understand the case $`d\le 2\rho +3`$, we rewrite Eq. (2.8) in Fourier space via the definition $$\mathrm{\Phi }(𝐤,\omega )=\int d^{d-1}𝐫\int _0^L𝑑\tau \mathrm{\Phi }(𝐫,\tau )e^{-i(𝐤\cdot 𝐫-\omega \tau )},$$ (2.16) obtaining $$\mathrm{\Phi }(𝐤,\omega )=G_0(𝐤,\omega )V(𝐤,\omega )-\frac{\lambda }{2}G_0(𝐤,\omega )\int d^{d-1}𝐤^{}\int 𝑑\omega ^{}𝐤^{}\cdot (𝐤-𝐤^{})\mathrm{\Phi }(𝐤^{},\omega ^{})\mathrm{\Phi }(𝐤-𝐤^{},\omega -\omega ^{}),$$ (2.17) where $$G_0(𝐤,\omega )=\frac{1}{-i\omega +\nu k^2}.$$ (2.18) We define a renormalized Green’s function $`G(𝐤,\omega )`$ via $$\mathrm{\Phi }(𝐤,\omega )\equiv G(𝐤,\omega )V(𝐤,\omega )$$ (2.19) and calculate $`G(𝐤,\omega )`$ perturbatively in $`\lambda `$. The perturbation series can be summarized diagrammatically by giving graphical representations to the renormalized and bare propagators $`G(𝐤,\omega )`$ and $`G_0(𝐤,\omega )`$, the disorder $`V(𝐤,\omega )`$, the interaction, and the disorder correlator $`\mathrm{\Delta }(𝐤)`$ as shown in Fig. 7. As expressed in Eq. (2.19), $`\mathrm{\Phi }(𝐤,\omega )`$ is represented by a double arrow followed by a cross to represent the disorder. Eq. (2.17) can then be represented diagrammatically as in Fig. 8. We obtain an iterative solution by substituting Eq. (2.17) for each of the $`\mathrm{\Phi }(𝐤,\omega )`$ terms appearing on the right hand side of Eq. (2.17), thereby obtaining a series in powers of $`\lambda `$. 
The result for the renormalized propagator is presented diagrammatically (to second order in $`\lambda `$) in Fig. 9. We wish to use this perturbation theory to calculate renormalized versions of the parameters $`\nu `$, $`\lambda `$, and $`\mathrm{\Delta }`$ (which we will denote by $`\stackrel{~}{\nu }`$, $`\stackrel{~}{\lambda }`$, and $`\stackrel{~}{\mathrm{\Delta }}`$ respectively). We define $`\stackrel{~}{\nu }`$ by $$\underset{𝐤\rightarrow \mathrm{𝟎}}{lim}G(𝐤,0)=\frac{1}{\stackrel{~}{\nu }k^2}$$ (2.20) Upon multiplying Eq. (2.17) (or its diagrammatic equivalent, Fig. 8) by $`V(-𝐤,-\omega )`$ and averaging over the noise , we obtain the equation for the renormalized propagator represented diagrammatically to one loop order in Fig. 10. We define $`\stackrel{~}{\lambda }`$ via a renormalized vertex that contains the effects of the interactions, shown on the left hand side of Fig. 11. The vertex amplitude, in the limit of small $`𝐤`$, $`𝐪`$, $`\omega `$, and $`\mathrm{\Omega }`$, is given by $$-\frac{\stackrel{~}{\lambda }}{2}𝐪\cdot (𝐤-𝐪)G_0(𝐤,\omega )G_0(𝐪,\mathrm{\Omega })G_0(𝐤-𝐪,\omega -\mathrm{\Omega }),$$ (2.21) and serves as the definition of $`\stackrel{~}{\lambda }`$. Expanding in terms of the bare quantities, we obtain the equation determining $`\stackrel{~}{\lambda }`$ to one loop, shown in Fig. 11. We define the renormalized noise correlator $`\stackrel{~}{\mathrm{\Delta }}`$ by $$\overline{\mathrm{\Phi }^{*}(𝐤,\omega )\mathrm{\Phi }(𝐤,\omega )}\approx 2\stackrel{~}{\mathrm{\Delta }}(𝐤)G(𝐤,\omega )G(-𝐤,-\omega )$$ (2.22) in the limit of $`𝐤\rightarrow \mathrm{𝟎}`$ and $`\omega \rightarrow 0`$, where $$\stackrel{~}{\mathrm{\Delta }}(𝐤)=\frac{\stackrel{~}{\mathrm{\Delta }}}{T^2k^{2\rho }}.$$ (2.23) Expanding in terms of the bare quantities gives rise to Fig. 12. For the case $`\rho =0`$, diagrams like these are expressed in integral form and evaluated in detail in Ref. and, e.g., by Barabási and Stanley . 
The case $`\rho \ne 0`$ does not produce any new complications, so we simply report the results: $`\stackrel{~}{\nu }`$ $`=`$ $`\nu \left[1-{\displaystyle \frac{\lambda ^2\mathrm{\Delta }}{T^2\nu ^3}}{\displaystyle \frac{d-2\rho -3}{4(d-1)}}K_{d-1}{\displaystyle \int _0^\mathrm{\Lambda }}𝑑qq^{d-2\rho -4}\right]`$ (2.24) $`\stackrel{~}{\lambda }`$ $`=`$ $`\lambda `$ (2.25) $`\stackrel{~}{\mathrm{\Delta }}`$ $`=`$ $`\mathrm{\Delta }\left[1+\delta _{\rho ,0}{\displaystyle \frac{\lambda ^2\mathrm{\Delta }}{T^2\nu ^3}}{\displaystyle \frac{K_{d-1}}{4}}{\displaystyle \int _0^\mathrm{\Lambda }}𝑑qq^{d-4}\right],`$ (2.26) where $`\mathrm{\Lambda }`$ is a cutoff in momentum space and $`K_d`$ is the surface area of a d-dimensional sphere divided by $`(2\pi )^d`$. The nonlinear coupling $`\lambda `$ is unrenormalized, as required by Galilean invariance . Moreover, the noise correlator is also unrenormalized for any $`\rho >0`$; the diagram correcting the correlator produces only white noise (point disorder). This can be seen by noting that the one-loop diagram that renormalizes the noise correlator (shown in Fig. 12) is regular as $`𝐤\rightarrow \mathrm{𝟎}`$ because the momenta passing through the disorder correlator remain finite as $`𝐤\rightarrow \mathrm{𝟎}`$. Since the other diagrams in Fig. 12 diverge as $`1/k^{2\rho }`$, only white noise, rather than correlated noise, is produced. We will ignore the effect of this white noise for now, since the correlated noise is more singular, and return to discuss its effects in Sec. II C. #### 3 Renormalization group recursion relations All corrections from first order perturbation theory in Eqs. (2.24)—(2.26) are well behaved for $`d>2\rho +3`$, as expected from the earlier scaling argument. In lower dimensions, the renormalization group procedure resums this divergent perturbation series by integrating over modes with high momentum $`\mathrm{\Lambda }e^{-l}<k\le \mathrm{\Lambda }`$ and rescaling the resulting equations by $`𝐤\rightarrow e^l𝐤`$. Upon combining the scale transformations Eqs. 
(2.12)—(2.14) with the diagrammatic results above we can easily obtain the flow equations: $`{\displaystyle \frac{d\nu }{dl}}`$ $`=`$ $`\nu \left[z-2-{\displaystyle \frac{\lambda ^2\mathrm{\Delta }}{T^2\nu ^3}}{\displaystyle \frac{d-2\rho -3}{4(d-1)}}K_{d-1}\right]`$ (2.27) $`{\displaystyle \frac{d\lambda }{dl}}`$ $`=`$ $`\lambda [\chi +z-2]`$ (2.28) $`{\displaystyle \frac{d\mathrm{\Delta }}{dl}}`$ $`=`$ $`\mathrm{\Delta }\left[z-2\chi -d+1+2\rho +\delta _{\rho ,0}{\displaystyle \frac{\lambda ^2\mathrm{\Delta }}{T^2\nu ^3}}{\displaystyle \frac{K_{d-1}}{4}}\right].`$ (2.29) We can express these in a single flow equation for the combination $`g^2=\frac{\lambda ^2\mathrm{\Delta }}{T^2\nu ^3}`$: $$\frac{dg}{dl}=\frac{3+2\rho -d}{2}g+K_{d-1}g^3\frac{(3+\delta _{\rho ,0})(d-1)-6\rho -6}{8(d-1)}.$$ (2.30) We look for fixed points of Eq. (2.30). We first review the case $`\rho =0`$, i.e., point disorder. In $`d=2`$, the equation reads $$\frac{dg}{dl}=\frac{1}{2}g-\frac{1}{4}K_1g^3,$$ (2.31) which has an unstable fixed point (corresponding to no disorder) at $`g=0`$ and a stable fixed point at $`g=\sqrt{\frac{2}{K_1}}`$. Upon inserting this fixed point value into Eqs. (2.27)—(2.29), we see that the long-wavelength physics is characterized by $`z=3/2`$ and $`\chi =1/2`$, giving a wandering exponent of $`\zeta =z^{-1}=2/3`$, in agreement with other results . In $`d=3`$, by contrast, Eq. (2.30) reads $$\frac{dg}{dl}=\frac{1}{8}K_2g^3.$$ (2.32) The fixed point at $`g=0`$ is still unstable, but any small disorder flows off to $`g=\infty `$, or strong coupling, where our renormalization group is no longer accurate. Thus determination of the wandering exponent for point disorder in three dimensions is beyond the scope of this method . We now turn to the case $`\rho >0`$. 
The flow equation for $`g`$ now reads $$\frac{dg}{dl}=\frac{2\rho +3-d}{2}g+\frac{3}{8}\frac{d-(2\rho +3)}{(d-1)}K_{d-1}g^3.$$ (2.33) For $`d<2\rho +3`$, the coefficient of the linear term is positive, while the coefficient of the cubic term is negative, leading to an unstable fixed point at $`g=0`$ and a stable one at $`g=\sqrt{\frac{4(d-1)}{3K_{d-1}}}`$. Eqs. (2.27)—(2.29) now lead to $`z=(3+d-2\rho )/3`$, $`\chi =(2\rho +3-d)/3`$, yielding a wandering exponent of $$\zeta =\frac{3}{3+d-2\rho }$$ (2.34) For splayed columnar disorder, $`\rho =1/2`$, and so the wandering exponent is $`\zeta =3/4`$ in two dimensions and $`\zeta =3/5`$ in three dimensions . ### C Discussion One issue that has not been addressed is that of white noise. Recall from Sec. II B 2 that the diagram that renormalizes the disorder correlator $`\mathrm{\Delta }(𝐤)`$ does not produce any correlated disorder; however, it does produce white noise. Thus, even if point disorder were not initially present, it would be produced by the renormalization group. Naively, one would expect that correlated disorder would dominate over white noise for any $`\rho >0`$ since correlated disorder is more singular than white noise at $`𝐤=\mathrm{𝟎}`$. This is, in fact, the case for $`\rho `$ sufficiently large; however, for small $`\rho `$ the white noise will dominate. Frey et al. have shown that below $`d=3+2\rho `$, the renormalization group allows two fixed points—one long-ranged (dominated by correlated disorder), and one short-ranged (without correlated disorder). Moreover, they argue that the fixed point with the larger wandering exponent $`\zeta `$ (i.e., the smaller dynamic exponent $`z`$) will be stable, and that if the long-ranged fixed point is the stable one, its dynamic exponent is characterized by the exact result $$z_{\text{lr}}=\frac{3+d-2\rho }{3},$$ (2.35) in agreement with the simpler calculation of Sec. II B. 
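These fixed-point results are easy to verify directly. The sketch below (our own illustration; $`K_{d-1}`$ set to 1 and step sizes chosen arbitrarily) integrates the flow equation (2.33) with a forward-Euler step and checks both the fixed-point location and the resulting wandering exponents:

```python
import math

def flow_to_fixed_point(d, rho, K=1.0, g0=0.05, dl=1e-3, steps=200_000):
    """Forward-Euler integration of the flow Eq. (2.33) (valid for rho > 0):
    dg/dl = (2*rho + 3 - d)/2 * g + (3/8)*(d - (2*rho + 3))/(d - 1) * K * g**3."""
    a = (2*rho + 3 - d) / 2                        # linear term, > 0 for d < 2*rho + 3
    b = (3/8) * (d - (2*rho + 3)) / (d - 1) * K    # cubic term, < 0 for d < 2*rho + 3
    g = g0
    for _ in range(steps):
        g += dl * (a * g + b * g**3)
    return g

def zeta(d, rho):
    """Wandering exponent at the stable fixed point, Eq. (2.34)."""
    return 3 / (3 + d - 2*rho)

# Splayed columnar disorder (rho = 1/2): g flows to g* = sqrt(4(d-1)/(3K)).
assert abs(flow_to_fixed_point(2, 0.5) - math.sqrt(4/3)) < 1e-6
assert abs(flow_to_fixed_point(3, 0.5) - math.sqrt(8/3)) < 1e-6
assert zeta(2, 0.5) == 0.75 and abs(zeta(3, 0.5) - 0.6) < 1e-12
```

Any small positive initial coupling is driven to the same nonzero fixed point, which is the statement that the disorder is relevant for $`d<2\rho +3`$.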
In the case $`d=2`$, we can use the known result for point disorder $`z=3/2`$ to show that the short-ranged fixed point is stable for $`\rho <1/4`$, while the long-ranged one is stable for $`\rho >1/4`$. This establishes that for the case of splayed columnar disorder in 2 dimensions, the results are unaffected by the white noise. The situation is less clear in 3 dimensions because the dynamic exponent of the short-ranged fixed point, $`z_{\text{sr}}`$, is not known. Nevertheless, based on the above considerations, we expect that there is a curve $`\rho _c(d)`$ such that for $`\rho >\rho _c(d)`$, the long-ranged fixed point is stable, while for $`\rho <\rho _c(d)`$, the short-ranged fixed point is stable. Frey et al. conjecture that $`\rho _c(d)=\frac{d-1}{4}`$, based on the fact that $`\rho _c(2)=1/4`$ and $`\rho _c(5)=1`$. (The latter is known from the fact that in $`d=5,\rho =1`$, this equation corresponds to the Burgers equation with non-conserved noise, previously studied by Forster et al. ) This conjecture leads immediately to a result for $`z_{\text{sr}}`$, namely, $`z_{\text{sr}}=\frac{7+d}{6}`$, which is in agreement with the results of Halpin-Healy and with numerical simulations in $`d=3`$. In three dimensions, this would therefore imply that splayed columnar disorder is at the boundary between the regions of stability of the short- and long-ranged fixed points, and hence that $`\zeta =3/5`$ for both point and splayed columnar disorder. ## III Dilute vortex lines In this section, we show how the $`B`$ vs. $`H`$ constitutive relation, which can be measured experimentally , follows from a knowledge of the exponent $`\zeta `$. Our central result, that $$B\propto (H-H_{c1})^{\frac{(d-1)\zeta }{2(1-\zeta )}},$$ (3.1) applies whenever the lines are dilute and $`0<\zeta <1`$ . However, the prefactor (which is important for comparison with experiment) will depend somewhat on the experimental regime. 
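For later use, the exponent appearing in Eq. (3.1) can be tabulated for the cases computed in Sec. II (a simple numerical check of our own, using the quoted values of $`\zeta `$):

```python
def bh_exponent(d, zeta):
    """Power of (H - Hc1) in the constitutive relation Eq. (3.1):
    (d - 1)*zeta / (2*(1 - zeta))."""
    return (d - 1) * zeta / (2 * (1 - zeta))

# Point disorder in d = 2 (zeta = 2/3): B grows linearly in H - Hc1.
assert abs(bh_exponent(2, 2/3) - 1.0) < 1e-9
# Splayed columnar disorder in d = 2 (zeta = 3/4): exponent 3/2.
assert abs(bh_exponent(2, 3/4) - 1.5) < 1e-9
# Either disorder type in d = 3 (zeta = 3/5): exponent 3/2.
assert abs(bh_exponent(3, 3/5) - 1.5) < 1e-9
```

Note that the two- and three-dimensional splayed cases share the same 3/2 power only by coincidence of the exponent combination; the prefactors worked out below differ.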
Here, we first derive the above scaling relation, generalizing results for point disorder , and then find expressions for the prefactor in various regimes of temperature and disorder. Recall that parallel columnar defects localize the vortices, yielding $`\zeta =0`$. Because the vortices are localized, Eq. (3.1) does not apply to this case. In Sec. III D, we analyze this case with the boson mapping, finding $$B\propto \mathrm{exp}\left[-\frac{C_d}{(H-H_{c1})^{(d-1)/2}}\right].$$ (3.2) ### A $`B`$ vs. $`H`$ scaling relation with point or splayed columnar disorder We first review the scaling properties of the free energy. If we rescale the system by scaling the $`\tau `$ direction by a factor $`l`$, then the transverse directions rescale by a factor $`l^\zeta `$. According to Eq. (2.1), the elastic term of the free energy then scales by a factor $`l^{2\zeta -1}`$. Because the physics at low temperatures reflects a balance between the pinning and elastic energies, we expect that the pinning term of the free energy scales the same way. Thus, the pinning energy on a scale $`l`$ is given by $`U_p(l)\propto l^{2\zeta -1}`$. The wandering exponent $`\zeta `$, defined via $`\mathrm{\Delta }r(l)\propto l^\zeta `$ (where $`\mathrm{\Delta }r(l)=\left\{\overline{[𝐫(z+l)-𝐫(z)]^2}\right\}^{1/2}`$), describes the transverse wandering of lines at long scales. At sufficiently high temperatures, thermal wandering describes the physics at shorter length scales. We can then match the small-scale results onto the large-scale results at the length scales at which both should be valid. The exact short length scale mechanism will depend on the experimental conditions, so we keep it general for now. We assume that there is a distance in the $`\tau `$ direction $`l_c`$, a transverse distance $`x_c`$, and an energy scale $`U_c`$ above which the results from Sec. II are valid. (We will provide expressions for these parameters in Sec. III B.) 
Then, for $`l>l_c`$, $`\mathrm{\Delta }x(l)`$ $`=`$ $`x_c\left({\displaystyle \frac{l}{l_c}}\right)^\zeta ,`$ (3.3) $`U_p(l)`$ $`=`$ $`U_c\left({\displaystyle \frac{l}{l_c}}\right)^{2\zeta -1},`$ (3.4) which have the correct long-distance behavior and match with the required values at $`l_c`$. When a finite concentration of vortices enters the sample, their wandering becomes limited by intervortex collisions at a length scale given by $`\mathrm{\Delta }x(l^{})=a_0`$, where $`a_0`$ is the average spacing between the vortex lines; i.e., at $`l^{}=l_c(a_0/x_c)^{1/\zeta }`$. To find the optimal density of vortex lines, we can balance the energy gain (per unit length) $`g(H-H_{c1})/H_{c1}`$ of allowing a vortex line to penetrate with the pinning energy lost (per unit length) $`U_p(l^{})/l^{}`$ due to collisions . This yields a vortex spacing $$a_0=x_c\left[\frac{gl_c(H-H_{c1})}{U_cH_{c1}}\right]^{\frac{\zeta }{2(\zeta -1)}}.$$ (3.5) In this, and subsequent formulae, we neglect dimensionless constants of order unity. The field $`B`$ which penetrates the superconducting sample is related to $`a_0`$ via $`B=\varphi _0/(a_0^{d-1}W^{3-d})`$, where for the case of vortex lines confined to a plate-like geometry (as in Figs. 1 and 3), $`W`$ is the width of the sample in the third dimension. It follows that $$B=\frac{\varphi _0}{x_c^{d-1}W^{3-d}}\left[\frac{gl_c(H-H_{c1})}{U_cH_{c1}}\right]^{\frac{(d-1)\zeta }{2(1-\zeta )}}.$$ (3.6) For flux lines in two dimensions, Eq. 
(3.6) reduces to $$B=\{\begin{array}{cc}\frac{\varphi _0gl_c}{x_cWU_c}\frac{H-H_{c1}}{H_{c1}}\hfill & \text{for point disorder (}\zeta =2/3\text{)}\hfill \\ \frac{\varphi _0g^{3/2}l_c^{3/2}}{x_cWU_c^{3/2}}\left(\frac{H-H_{c1}}{H_{c1}}\right)^{3/2}\hfill & \text{for splayed columnar disorder (}\zeta =3/4\text{).}\hfill \end{array}$$ (3.7) while in three dimensions we have $$B=\frac{\varphi _0g^{3/2}l_c^{3/2}}{x_c^2U_c^{3/2}}\left(\frac{H-H_{c1}}{H_{c1}}\right)^{3/2}$$ (3.8) for either point disorder or splayed columnar disorder ($`\zeta =3/5`$). ### B Physics at shorter length scales We now estimate the values $`l_c`$, $`x_c`$, and $`U_c`$ that appear in Sec. III A. The physics at short length scales, before the effects of disorder build up, is given by application of the naive scaling analysis of Sec. II B 1 to Eq. (2.8), which gives $`z=2`$, or $`\zeta =1/2`$. The physics is similar to that of a random walk, dominated by thermal fluctuations, as a function of the time-like parameter $`l`$: $`x^2=\nu l`$ for $`l<l_c`$, where $`\nu =T/2g`$. We expect the system to cross over to the large-scale behavior when the corrections to this diffusion term become comparable to the initial value. Upon generalizing Eq. (2.24) so that we only integrate out to a length scale $`x_c`$, we see that the criterion which determines $`x_c`$ is simply $$\frac{\lambda ^2\mathrm{\Delta }}{T^2\nu ^3}\int _{x_c^{-1}}^{\xi ^{-1}}𝑑qq^{d-2\rho -4}\sim 1,$$ (3.9) where we have neglected factors of order unity. This leads to expressions for the crossover parameters $`x_c`$ $`=`$ $`\left({\displaystyle \frac{T^3}{g\mathrm{\Delta }}}\right)^{\frac{1}{3+2\rho -d}}`$ (3.10) $`l_c`$ $`=`$ $`{\displaystyle \frac{g}{T}}\left({\displaystyle \frac{T^3}{g\mathrm{\Delta }}}\right)^{\frac{2}{3+2\rho -d}}`$ (3.11) $`U_c`$ $`=`$ $`T`$ (3.12) provided $`d<3+2\rho `$. The last equality results from noting that $`U`$ is unrenormalized when $`\zeta =1/2`$. 
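These matching formulae can be sanity-checked numerically. The sketch below (our own, with arbitrary parameter values) verifies the random-walk relation between $`l_c`$ and $`x_c`$, and that $`x_c`$ reaches the coherence length $`\xi `$ precisely at the temperature $`T^{}`$ of Eq. (3.16) below:

```python
import math

def thermal_scales(T, g, Delta, d, rho):
    """High-temperature crossover scales of Eqs. (3.10)-(3.12),
    valid for d < 3 + 2*rho (O(1) factors dropped throughout)."""
    x_c = (T**3 / (g * Delta)) ** (1 / (3 + 2*rho - d))
    l_c = (g / T) * (T**3 / (g * Delta)) ** (2 / (3 + 2*rho - d))
    U_c = T
    return x_c, l_c, U_c

# Splayed columnar disorder in d = 2 (rho = 1/2), illustrative values:
g, Delta, d, rho, xi = 2.0, 0.5, 2, 0.5, 0.3
x_c, l_c, U_c = thermal_scales(1.7, g, Delta, d, rho)
assert math.isclose(l_c, (g / 1.7) * x_c**2)   # random-walk relation x_c^2 = (T/g) l_c
T_star = (g * Delta * xi**(3 + 2*rho - d)) ** (1/3)            # Eq. (3.16)
assert math.isclose(thermal_scales(T_star, g, Delta, d, rho)[0], xi)
```

The last assertion is the statement that the thermal window closes at $`T^{}`$, below which the zero-temperature estimates of Sec. III B must be used instead.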
The inequality $`d<3+2\rho `$ is satisfied for splayed columnar disorder in two or three dimensions, and for point disorder in two dimensions, but not for point disorder in three dimensions. In the latter case, one finds $`x_c`$ $`=`$ $`\xi e^{\frac{2\pi T^3}{g\mathrm{\Delta }}},`$ (3.13) $`l_c`$ $`=`$ $`{\displaystyle \frac{g\xi ^2}{T}}e^{\frac{4\pi T^3}{g\mathrm{\Delta }}},`$ (3.14) $`U_c`$ $`=`$ $`T.`$ (3.15) The above results apply at sufficiently high temperatures. However, as is evident from Eq. (3.10), $`x_c`$ decreases with decreasing temperature. If $`x_c<\xi `$, where $`\xi `$ is the (transverse) cutoff provided by the superconducting coherence length, then the thermal regime is absent entirely. From Eq. (3.10), we see that this breakdown occurs for temperatures $`T<T^{}`$, where $$T^{}=\left(g\mathrm{\Delta }\xi ^{3+2\rho -d}\right)^{1/3}.$$ (3.16) Below this temperature, we must use a zero-temperature treatment to determine the characteristic scales $`x_c`$, $`l_c`$, and $`U_c`$. Consider the free energy contributions displayed in Eq. (2.1), given that the vortex line has typically wandered a transverse distance $`x_c=\xi `$ in a longitudinal distance $`l_c`$. We assume for simplicity that $`\xi \ll l_c`$. The energy cost of this wandering arising from the first term of Eq. (2.1) is approximately $`g\xi ^2/l_c`$. This energy is offset by the line’s ability to find a more hospitable pinning environment. Let $`V(l_c)=\int _0^{l_c}V[𝐫(\tau ),\tau ]𝑑\tau `$ describe the pinning energy of the wandering line. The gain in energy due to wandering should be of order the standard deviation of this zero mean random variable, namely $`\sqrt{\overline{V^2(l_c)}}`$. 
We have $`\overline{V^2(l_c)}`$ $`=`$ $`{\displaystyle \int _0^{l_c}}𝑑\tau {\displaystyle \int _0^{l_c}}𝑑\tau ^{}\overline{V[𝐫(\tau ),\tau ]V[𝐫(\tau ^{}),\tau ^{}]}`$ (3.17) $`=`$ $`l_c\mathrm{\Delta }{\displaystyle \int \frac{d^{d-1}𝐤}{(2\pi )^{d-1}}\frac{1}{k^{2\rho }}}.`$ (3.18) The final integral has an infrared cutoff given by $`a_0^{-1}`$ and an ultraviolet cutoff given by $`\xi ^{-1}`$: due to the finite size of the vortex core, the vortex line only sees a different disorder profile when it wanders a distance $`\xi `$. For $`d>1+2\rho `$, as is the case for point disorder in two or three dimensions and for splayed columnar disorder in three dimensions, the ultraviolet cutoff dominates and we have $$\overline{V^2(l_c)}\approx \frac{l_c\mathrm{\Delta }}{\xi ^{d-1-2\rho }}.$$ (3.19) For splayed columnar disorder in two dimensions, we find a logarithmic correction, $$\overline{V^2(l_c)}=l_c\mathrm{\Delta }\mathrm{ln}(a_0/\xi ).$$ (3.20) Balancing the energy gain due to disorder with the energy loss due to wandering leads to a characteristic length $`l_c`$ $$l_c=\left(\frac{g^2\xi ^{d+3-2\rho }}{\mathrm{\Delta }}\right)^{1/3}$$ (3.21) for $`d>1+2\rho `$, and $$l_c=\left(\frac{g^2\xi ^4}{\mathrm{\Delta }\mathrm{ln}(a_0/\xi )}\right)^{1/3}$$ (3.22) for $`d=1+2\rho `$ (splayed columnar disorder in two dimensions). The corresponding energy scale is $$U_c=\left(g\mathrm{\Delta }\xi ^{3+2\rho -d}\right)^{1/3},$$ (3.23) except for splayed columnar disorder in 2 dimensions, where $$U_c=\left(g\mathrm{\Delta }\xi ^2\mathrm{ln}(a_0/\xi )\right)^{1/3}$$ (3.24) Up to logarithmic corrections, these results match smoothly onto the high-temperature formulae of Eqs. (3.10)—(3.12) in the region where they both apply, namely, $`T\sim \left(g\mathrm{\Delta }\xi ^{3+2\rho -d}\right)^{1/3}`$. The above results require $`l_c\gg \xi `$, an assumption that breaks down if $`\mathrm{\Delta }`$ is sufficiently large. 
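The zero-temperature balance can be checked in the same spirit (our own sketch; parameter values arbitrary): the $`l_c`$ of Eq. (3.21) makes the elastic cost $`g\xi ^2/l_c`$ equal to the typical pinning gain of Eq. (3.19), and both reproduce the $`U_c`$ of Eq. (3.23):

```python
import math

def low_T_scales(g, Delta, xi, d, rho):
    """Zero-temperature collision scales for d > 1 + 2*rho:
    Eqs. (3.21) and (3.23), O(1) factors dropped."""
    l_c = (g**2 * xi**(d + 3 - 2*rho) / Delta) ** (1/3)
    U_c = (g * Delta * xi**(3 + 2*rho - d)) ** (1/3)
    return l_c, U_c

# Point disorder in d = 3, illustrative values:
g, Delta, xi, d, rho = 1.5, 0.7, 0.4, 3, 0.0
l_c, U_c = low_T_scales(g, Delta, xi, d, rho)
elastic = g * xi**2 / l_c                                  # bending cost of wandering
pinning = math.sqrt(l_c * Delta / xi**(d - 1 - 2*rho))     # pinning gain, Eq. (3.19)
assert math.isclose(elastic, pinning, rel_tol=1e-9)
assert math.isclose(elastic, U_c, rel_tol=1e-9)
```

The same check can be rerun for splayed columnar disorder in three dimensions ($`\rho =1/2`$); the two-dimensional splayed case carries the extra logarithm of Eq. (3.22) and is excluded here.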
In fact, the results observed for point disorder in two dimensions by Bolle et al. indicate that for this experiment, $`l_c<\xi `$. The relevant estimates in this regime are summarized in Appendix A. Combining the results for the matching parameters from Eqs. (3.10)—(3.12) and (3.21)—(3.24) with the $`B`$ vs. $`H`$ constitutive relation of Eq. (3.6) leads to $$B=\{\begin{array}{cc}\frac{\varphi _0gT}{W\mathrm{\Delta }}\frac{H-H_{c1}}{H_{c1}}\hfill & \text{for }T\gg (g\mathrm{\Delta }\xi )^{1/3}\hfill \\ \frac{\varphi _0}{W}\left(\frac{g^4\xi }{\mathrm{\Delta }^2}\right)^{1/3}\frac{H-H_{c1}}{H_{c1}}\hfill & \text{for }T\ll (g\mathrm{\Delta }\xi )^{1/3}\hfill \end{array}$$ (3.25) for point disorder in two dimensions (Fig. 1), $$B=\{\begin{array}{cc}\frac{\varphi _0g^3\xi }{T^3}e^{\frac{KT^3}{g\mathrm{\Delta }}}\left(\frac{H-H_{c1}}{H_{c1}}\right)^{3/2}\hfill & \text{for }T\gg (g\mathrm{\Delta })^{1/3}\hfill \\ \frac{\varphi _0g^2\xi }{\mathrm{\Delta }}\left(\frac{H-H_{c1}}{H_{c1}}\right)^{3/2}\hfill & \text{for }T\ll (g\mathrm{\Delta })^{1/3}\hfill \end{array}$$ (3.26) for point disorder in three dimensions (Fig. 2), $$B=\{\begin{array}{cc}\frac{\varphi _0g^2}{W\mathrm{\Delta }}\left(\frac{H-H_{c1}}{H_{c1}}\right)^{3/2}\hfill & \text{for }T\gg (g\mathrm{\Delta }\xi ^2)^{1/3}\hfill \\ \frac{\varphi _0g^2}{W\mathrm{\Delta }\mathrm{ln}\left[\frac{\mathrm{\Delta }}{\xi g^2}\left(\frac{H_{c1}}{H-H_{c1}}\right)^{3/2}\right]}\left(\frac{H-H_{c1}}{H_{c1}}\right)^{3/2}\hfill & \text{for }T\ll (g\mathrm{\Delta }\xi ^2)^{1/3}\hfill \end{array}$$ (3.27) for splayed columnar disorder in two dimensions (Fig. 3), and $$B=\frac{\varphi _0g^2}{\mathrm{\Delta }}\left(\frac{H-H_{c1}}{H_{c1}}\right)^{3/2}$$ (3.28) for splayed columnar disorder in three dimensions (Fig. 4). 
### C Crossover between splayed columnar and point disorder: finite length columns Splayed columnar disorder arising from fission fragments often consists of columns with a typical length $`l_{\text{col}}`$ that is much smaller than the sample size $`L`$ (as seems to be the case in Refs. ). We then expect to observe a crossover from the behavior typical of splayed columnar disorder to that of point disorder sufficiently close to $`H_{c1}`$. On scales $`l`$ such that $`l_{\text{col}}\ll l\ll L`$, the vortex lines feel the finite size of the columns, and thus the behavior should be closer to that described by point disorder. However, for $`l_c\ll l\ll l_{\text{col}}`$, the vortex lines behave as if the columns were infinitely long, and thus the behavior is that of splayed columnar disorder. In other words, in two dimensions, we expect the $`B`$ vs. $`H`$ constitutive relation to be $`B\propto (H-H_{c1})`$ at very weak fields, where the length scale between collisions is above $`l_{\text{col}}`$, and $`B\propto (H-H_{c1})^{3/2}`$ at somewhat stronger fields. In three dimensions, since the constitutive relation is the same for both point and splayed columnar disorder, the crossover will appear only in the amplitude of the power law. In the regime $`l\gg l_{\text{col}}`$, we expect that the behavior can be described via the methods of Sec. III A, with splayed columnar disorder playing the role of the small-scale mechanism alluded to near the beginning of Sec. III A. Specifically, let $`x_x`$ and $`U_x`$ be the transverse length scale and energy at which the behavior will cross over from splayed columnar to point disorder. (These will play the role of $`x_c`$ and $`U_c`$ respectively, while $`l_{\text{col}}`$ will play the role of $`l_c`$.) Then, applying Eq. 
(3.3), we find $$\mathrm{\Delta }x(l)=\{\begin{array}{cc}x_c\left(\frac{l}{l_c}\right)^{3/4},\hfill & l_c\ll l\ll l_{\text{col}}\hfill \\ x_x\left(\frac{l}{l_{\text{col}}}\right)^{2/3},\hfill & l\gg l_{\text{col}}.\hfill \end{array}$$ (3.29) Matching these formulae at $`l_{\text{col}}`$, we see that $$x_x=x_c\left(\frac{l_{\text{col}}}{l_c}\right)^{3/4}.$$ (3.30) Similarly, by using Eq. (3.4) and matching at $`l_{\text{col}}`$, we obtain $$U_x=U_c\left(\frac{l_{\text{col}}}{l_c}\right)^{1/2}.$$ (3.31) Eq. (3.7) then leads to $$B=\{\begin{array}{cc}\frac{\varphi _0gl_{\text{col}}}{x_xWU_x}\frac{H-H_{c1}}{H_{c1}}\hfill & \text{for sufficiently weak fields}\hfill \\ \frac{\varphi _0g^{3/2}l_c^{3/2}}{x_cWU_c^{3/2}}\left(\frac{H-H_{c1}}{H_{c1}}\right)^{3/2}\hfill & \text{for stronger fields.}\hfill \end{array}$$ (3.32) We need to find what the field strength will be at crossover. Crossover occurs when the distance between vortex lines $`a_0`$ becomes comparable to $`x_x`$. In other words, we cross over to the splayed columnar disorder result at the field at which a vortex line typically collides with another vortex line every $`l_{\text{col}}`$ in the $`z`$ direction. This yields $$B=\frac{\varphi _0}{x_xW}=\frac{\varphi _0}{x_cW}\left(\frac{l_c}{l_{\text{col}}}\right)^{3/4}.$$ (3.33) To find the $`H`$ at which this occurs, we use Eq. (3.32). 
Whichever expression we use, the same result is obtained, which demonstrates the self-consistency of our result, $$\frac{H-H_{c1}}{H_{c1}}=\frac{U_c}{g\sqrt{l_cl_{\text{col}}}}.$$ (3.34) In summary, we conclude that $$B=\{\begin{array}{cc}\frac{\varphi _0gl_c}{x_cWU_c}\left(\frac{l_c}{l_{\text{col}}}\right)^{1/4}\frac{H-H_{c1}}{H_{c1}},\hfill & \frac{H-H_{c1}}{H_{c1}}\ll \frac{U_c}{g\sqrt{l_cl_{\text{col}}}}\hfill \\ \frac{\varphi _0g^{3/2}l_c^{3/2}}{x_cWU_c^{3/2}}\left(\frac{H-H_{c1}}{H_{c1}}\right)^{3/2},\hfill & \frac{H-H_{c1}}{H_{c1}}\gg \frac{U_c}{g\sqrt{l_cl_{\text{col}}}}.\hfill \end{array}$$ (3.35) The parameters $`x_c`$, $`l_c`$, and $`U_c`$ appearing in these equations are those of two dimensional splayed columnar disorder, which dominates at short length scales. This yields for high temperatures ($`T\gg [g\mathrm{\Delta }\xi ^2]^{1/3}`$) $$B=\{\begin{array}{cc}\frac{\varphi _0g^{3/2}}{Wl_{\text{col}}^{1/4}\mathrm{\Delta }^{3/4}}\frac{H-H_{c1}}{H_{c1}},\hfill & \frac{H-H_{c1}}{H_{c1}}\ll \frac{\mathrm{\Delta }^{1/2}}{gl_{\text{col}}^{1/2}}\hfill \\ \frac{\varphi _0g^2}{W\mathrm{\Delta }}\left(\frac{H-H_{c1}}{H_{c1}}\right)^{3/2},\hfill & \frac{H-H_{c1}}{H_{c1}}\gg \frac{\mathrm{\Delta }^{1/2}}{gl_{\text{col}}^{1/2}}\hfill \end{array}$$ (3.36) and for low temperatures ($`T\ll [g\mathrm{\Delta }\xi ^2]^{1/3}`$) $$B=\frac{\varphi _0g^{3/2}}{Wl_{\text{col}}^{1/4}\mathrm{\Delta }^{3/4}\left[\mathrm{ln}\left(\frac{\mathrm{\Delta }^{3/4}l_{\text{col}}^{1/4}}{g^{3/2}\xi }\frac{H_{c1}}{H-H_{c1}}\right)\right]^{3/4}}\frac{H-H_{c1}}{H_{c1}}$$ (3.37) for $$\frac{H-H_{c1}}{H_{c1}}\ll \frac{\mathrm{\Delta }^{1/2}\left\{\mathrm{ln}\left[\frac{\mathrm{\Delta }}{\xi g^2}\left(\frac{H_{c1}}{H-H_{c1}}\right)^{3/2}\right]\right\}^{1/2}}{gl_{\text{col}}^{1/2}},$$ (3.38) while $$B=\frac{\varphi _0g^2}{W\mathrm{\Delta }\mathrm{ln}\left[\frac{\mathrm{\Delta }}{\xi g^2}\left(\frac{H_{c1}}{H-H_{c1}}\right)^{3/2}\right]}\left(\frac{H-H_{c1}}{H_{c1}}\right)^{3/2}$$ (3.39) for $$\frac{H-H_{c1}}{H_{c1}}\gg \frac{\mathrm{\Delta }^{1/2}\left\{\mathrm{ln}\left[\frac{\mathrm{\Delta }}{\xi g^2}\left(\frac{H_{c1}}{H-H_{c1}}\right)^{3/2}\right]\right\}^{1/2}}{gl_{\text{col}}^{1/2}}.$$ (3.40) | Vortex lines | Bosons | | --- | --- | | $`g`$ | $`m`$ | | $`k_BT`$ | $`\hbar `$ | | $`L_z`$ | $`\beta \hbar `$ | | $`(H-H_{c1})\varphi _0/4\pi `$ | $`\mu `$ | | $`B/\varphi _0`$ | $`n`$ (boson density) | | Vortex lines in three-dimensional samples | Two-dimensional bosons | | Vortex lines in two-dimensional samples | One-dimensional bosons | | Parallel columnar disorder | Point disorder | TABLE I. Detailed correspondence of the parameters of the vortex line system with the parameters of the boson system. ### D $`B(H)`$ constitutive relation with parallel columnar disorder The boson mapping is particularly useful to study vortex lines in the presence of parallel columnar defects. The dimensionality of the fictitious bosons is one lower than that of the superconducting sample, i.e., vortices in three-dimensional superconductors are described by two-dimensional bosons, while those in two-dimensional superconductors correspond to one-dimensional bosons. Parallel columnar disorder plays the role of point disorder, while the temperature $`T`$ plays the role of Planck’s constant $`\hbar `$, the bending energy $`g`$ plays the role of the boson mass $`m`$, and the sample length $`L`$ plays the role of $`\beta \hbar `$ for the bosons. (See Table I for a summary.) Since all eigenstates are localized in one- and two-dimensional quantum mechanics, $`\zeta =0`$ for vortex lines in the presence of disordered parallel columnar pins in both two and three dimensions. Thus, Eq. (3.1) does not apply: the physics leading to the $`B(H)`$ constitutive relation is very different. Rather than restricting vortex wandering, intervortex repulsion will assign an energy cost to two vortex lines that occupy the same localized region. 
For $`B\ll H_{c1}`$, this energy cost will be prohibitive, and we approximate the effects of the interaction as prohibiting multiple occupancy of the same state . Thus, the vortex interactions play the role of the Pauli exclusion principle, and, in this approximation, the behavior is the same as for spinless non-interacting fermions. From Table I, the $`n(\mu )`$ relationship at $`T=0`$ for the fictitious bosons yields the $`B(H)`$ relationship in the thermodynamic limit $`L\rightarrow \infty `$. But the $`n(\mu )`$ relationship is simply given by $$n(\mu )=\int _{-\infty }^\mu g(E)𝑑E,$$ (3.41) where $`g(E)`$ is the non-interacting density of states per unit energy per unit area. Thus the $`B(H)`$ relation in the dilute limit is determined by the low energy tail of the density of states. Larkin and Vinokur determined $`B(H)`$ in the following fashion: they assumed a Gaussian disorder potential, $$\overline{V(𝐫)V(𝐫^{})}=\mathrm{\Delta }_1\delta ^2(𝐫-𝐫^{}),$$ (3.42) from which it follows that at low energies , $$g(E)\propto e^{-2.9|E|/E_0},$$ (3.43) with $`E_0=\mathrm{\Delta }_1g/T^2`$ in the vortex line language. The end result is that $$B\propto e^{-𝒩\varphi _0(H_{c1}-H)/E_0},$$ (3.44) where $`𝒩`$ is a numerical factor. However, we do not believe this to be an accurate description of real flux lines. The disorder is taken to be Gaussian, and as such is not bounded below. Therefore, there are states at arbitrarily low energy ($`E\rightarrow -\infty `$), as Eq. (3.43) shows, leading to vortex penetration at arbitrarily small fields. Indeed, according to Eq. (3.44), we would expect a small density of vortex lines parallel to the $`z`$-axis to penetrate the sample in the limit $`H_z=0`$, and even for $`H_z<0`$! This unphysical behavior is an artifact of choosing a disorder potential that is not bounded below. To fix this problem, we choose the pinning potential $`V(r)`$ from a uniform distribution over the range $`-E_0<V<E_1`$. While the real distribution of the disorder will be bounded, it may not be of this form. 
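The pathology of the unbounded Gaussian distribution can be made concrete with a short numerical sketch (ours, not part of the original argument): integrating the tail of Eq. (3.43) up to a chemical potential $`\mu <0`$ always gives a strictly positive boson density, i.e., a nonzero $`B`$ even for $`H\le H_{c1}`$:

```python
import math

def n_of_mu(mu, E0=1.0, E_min=-60.0, n_steps=200_000):
    """Trapezoidal estimate of n(mu) = int_{-inf}^{mu} g(E) dE, Eq. (3.41),
    for the Larkin-Vinokur tail g(E) = exp(-2.9*|E|/E0), with mu < 0.
    The lower limit is truncated at E_min, where the tail is negligible."""
    h = (mu - E_min) / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        E = E_min + i * h
        weight = 0.5 if i in (0, n_steps) else 1.0
        total += weight * math.exp(-2.9 * abs(E) / E0)
    return total * h

# n(mu), and hence B, stays strictly positive however negative mu is,
# which is the unphysical penetration below Hc1 described in the text:
for mu in (-1.0, -5.0, -10.0):
    assert n_of_mu(mu) > 0
    assert math.isclose(n_of_mu(mu), math.exp(-2.9 * abs(mu)) / 2.9, rel_tol=1e-3)
```

The comparison line uses the analytic integral of the exponential tail, which the quadrature reproduces to better than a tenth of a percent.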
Therefore, at the end of this section, we discuss how our results would be altered by choosing a different (bounded) distribution. Clearly, $`g(E)`$ is bounded from below by $`E=-E_0`$, yielding $`H_{c1}=\frac{4\pi E_0}{\varphi _0}`$. The form of the density of states as $`E\rightarrow -E_0`$ from above is determined by the frequency of large, rare regions where the disorder potential is always near the bottom of the band . To find $`g(E)`$, we (see, e.g., Ref. ) estimate the probability $`p(R,\delta )`$ of finding a sphere of radius $`R`$ with all energies within $`\delta `$ of the bottom of the band as $$p(R,\delta )\approx \left(\frac{\delta }{E_1+E_0}\right)^{(R/l_0)^{d_B}}=\mathrm{exp}\left[\left(\frac{R}{l_0}\right)^{d-1}\mathrm{ln}\left(\frac{\delta }{E_1+E_0}\right)\right],$$ (3.45) where $`d_B`$ is the dimension of the fictitious bosons ($`d_B=d-1`$) and $`l_0`$ is the (microscopic) transverse distance over which the disorder potential is correlated, i.e., the radius of the columnar defects. Because, in the boson mapping, the kinetic energy takes the form (see Table I) $`-\frac{T^2}{2g}\nabla ^2`$, the low-energy eigenstate produced by such an anomalous region will be given approximately by $$E\approx -E_0+\delta +\frac{cT^2}{gR^2},$$ (3.46) where $`c`$ is a numerical factor of order unity. Therefore, the probability of finding a state between energy $`E`$ and $`E+dE`$ using a sphere of radius $`R`$ is given by $$p(R,E)\approx \frac{\partial p(R,\delta )}{\partial \delta }|_{\delta =E+E_0-\frac{cT^2}{gR^2}},$$ (3.47) yielding $$p(R,E)\approx \mathrm{exp}\left[\left(\frac{R}{l_0}\right)^{d-1}\mathrm{ln}\left(\frac{E+E_0-\frac{cT^2}{gR^2}}{E_1+E_0}\right)\right].$$ (3.48) Note that from Eq. (3.46), since $`\delta \ge 0`$, the lower limit $`R`$ at which it is possible to create a state with energy $`E`$ is $$R=\sqrt{\frac{cT^2}{g}\frac{1}{E+E_0}}.$$ (3.49) Upon optimizing Eq. (3.48) with respect to $`R`$, we find that (up to logarithmic corrections) the maximum occurs at the lower limit of $`R`$ given by Eq. (3.49). 
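This optimization is easy to check numerically. The sketch below (our own illustration, with arbitrary parameter values and $`c=T=g=E_0=E_1=1`$) maximizes the exponent of Eq. (3.48) over $`R`$ and confirms that the optimum sits just above the lower limit of Eq. (3.49):

```python
import math

def log_p(R, E, E0=1.0, E1=1.0, c=1.0, T=1.0, g=1.0, l0=0.05, d=3):
    """Logarithm of Eq. (3.48); -inf where no state of energy E fits inside radius R."""
    delta = E + E0 - c * T**2 / (g * R**2)
    if delta <= 0:
        return -math.inf
    return (R / l0)**(d - 1) * math.log(delta / (E1 + E0))

E = -0.99                                   # energy just above the band bottom at -E0
R_min = math.sqrt(1.0 / (E + 1.0))          # lower limit, Eq. (3.49), with c = T = g = 1
grid = [R_min * (1 + 2 * i / 2000) for i in range(1, 2001)]   # R_min < R <= 3*R_min
R_best = max(grid, key=lambda R: log_p(R, E))
assert R_min < R_best < 1.2 * R_min         # the optimum hugs the lower limit
assert log_p(R_best, E) > log_p(3 * R_min, E)
```

The optimal radius exceeds $`R_{\mathrm{min}}`$ only by a logarithmic correction, which is the sense in which the maximum "occurs at the lower limit" in the text.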
Thus, the form of the density of states at low energies is given by $$g(E)\sim \mathrm{exp}\left[-\left(\frac{c^{}T^2}{gl_0^2}\frac{1}{E-E_0}\right)^{(d-1)/2}\right].$$ (3.50) The logarithmic corrections alluded to above will change the factor of order unity in the exponential, and may introduce pre-exponential terms. We do not, however, calculate these effects, as the results are not independent of the details of the distribution from which the disorder has been drawn. In particular, if the distribution is bounded below and does not vanish too quickly at the lower bound, only the numerical coefficient of order unity $`c^{}`$ and pre-exponential terms will change; the form of the density of states will be the same. However, if the distribution falls off faster than any power law at its lower bound (e.g., when the probability of obtaining an energy $`V`$ scales as $`\mathrm{exp}[-K/(V-E_0)]`$), then the exponent $`(d-1)/2`$ in Eq. (3.50) may change as well. Thus, presuming the disorder potential does not fall off too fast near the bottom of the band, $$B(H)\sim \mathrm{exp}\left[-C_d\left(\frac{T}{gl_0}\right)^{d-1}\left(\frac{H_{c1}}{H-H_{c1}}\right)^{(d-1)/2}\right],$$ (3.51) where $`C_d`$ is a constant of order unity. ## IV Outlook We expect that the results described above will be valid when the lines are dilute, i.e., when $`B\lesssim H_{c1}`$. Here we discuss the outlook for an understanding of vortex lines in disordered superconductors when $`B\gtrsim H_{c1}`$, in which case the effects of interactions between the vortex lines must be taken into account more carefully. At high temperatures, generalizations of “hydrodynamic” models described by Marchetti and Nelson and by Nelson and Le Doussal should describe the lines quite effectively in the presence of point, parallel columnar, and splayed columnar disorder. These models predict a liquid-like state at high temperatures. 
In the presence of point and parallel columnar disorder, there may be phase transitions to glassy vortex states at low temperatures both in 2 and in 3 dimensions . In the presence of point disorder in 3 dimensions, two types of glassy phases are possible. For sufficiently weak disorder, the vortex lines form a “Bragg glass” in which dislocations do not proliferate . At stronger disorder, dislocations enter the sample. However, it is not yet clear if there is a sharp phase transition separating this “glassy” state with dislocations from a high-temperature flux liquid. The case of splayed columnar disorder in 2 dimensions with dense lines has been investigated by Devereaux, Scalettar, Zimanyi, and Moon . They conclude that there is a transition to a “splay glass” phase at low temperatures. The situation is less clear in three dimensions. We expect that, in contrast to the case of point disorder, there will be no Bragg glass in the presence of splayed columnar disorder: the pinning produced by columns in random directions attracting the vortex lines is likely to have a much stronger entangling effect on the vortices than point disorder. Since the dislocation-free Bragg glass observed in the presence of point disorder is only marginally stable to dislocations , we expect the analogous system with splayed columnar disorder to be unstable to dislocations (especially screw dislocations, which cause entanglement). ## ACKNOWLEDGMENTS We are grateful to C. Bolle, D. J. Bishop, G. Blatter, E. Frey, B. I. Halperin, and T. Hwa for helpful conversations. One of us (drn) acknowledges numerous enlightening discussions on the effects of splay with T. Hwa, P. LeDoussal, and V. Vinokur as part of the collaboration which led to Ref. . This research was supported by the National Science Foundation through Grant No. DMR97-14725 and through the Harvard Materials Research Science and Engineering Laboratory via Grant No. DMR98-09363. 
## A Zero-temperature kink regime In this Appendix, we adapt the results of Sec. III B to the case where the effective temperature is low, so a zero-temperature approach is applicable, but the effective disorder strength is much stronger than the elastic energy tending to produce straight vortex lines. This appears to be the regime in which the experiments of Ref. are performed. This regime is characterized by the inequality $`l_c<\xi `$. It follows that the free energy of Eq. (2.1) no longer applies, because it relies on an expansion of the line tension $$\sqrt{1+\left(\frac{d𝐫}{d\tau }\right)^2}\approx 1+\frac{1}{2}\left(\frac{d𝐫}{d\tau }\right)^2.$$ (A.1) In this case, however, we can approximate the wandering on short scales as occurring via nearly transverse kinks of size $`\xi `$, with an energy cost of $`g\xi `$ rather than $`gl_c^2/\xi `$. Balancing this against the energy gain of pinning of Eqs. (3.19) and (3.20), we obtain $$l_c=\frac{g^2\xi ^{d+1-2\rho }}{\mathrm{\Delta }}$$ (A.2) for point disorder in two or three dimensions, and for splayed columnar disorder in three dimensions, while for splayed columnar disorder in two dimensions we obtain $$l_c=\frac{g^2\xi ^2}{\mathrm{\Delta }\mathrm{log}(a_0/\xi )}.$$ (A.3) In either case, we find $$U_c=g\xi ,$$ (A.4) in agreement with the results of Ref. .
# Mixed-spin cluster expansion for a quasi-one-dimensional Haldane system ## Abstract We present a novel mixed-spin cluster expansion method for a quasi-one-dimensional Haldane system with bond alternation. By mapping the $`s=1`$ antiferromagnetic spin model on square and cubic lattices to the equivalent mixed-spin model, we study the competition among the Haldane, the dimer and the magnetically ordered phases. The mixed-spin cluster expansion proposed here allows us to directly deal with the Haldane phase, which may not be reached by standard series expansion methods. The phase diagram is determined rather precisely by making use of an additional symmetry property in the effective mixed-spin model introduced. Low-dimensional spin systems with a spin gap in the excitation spectrum have been extensively studied since the Haldane conjecture, which clarified that the gap formation in the integer-spin Heisenberg chain reflects the topological nature of spins. Recent extensive experimental and theoretical investigations on the stability of the Haldane system against various perturbations have provided a variety of interesting topics. The instability of the spin-gap phase in the $`s=1`$ spin models has been studied in detail so far for one-dimensional (1D) systems. For instance, the effect of the bond alternation is understood qualitatively well by the non-linear sigma model, as well as the valence bond solid (VBS) approach. The accurate critical point between the dimer and the Haldane phases has been further obtained by the series expansion, the exact diagonalization, the quantum Monte Carlo simulations and the density matrix renormalization group (DMRG). On the other hand, the $`s=1`$ spin systems with 2D or 3D structures have not been studied as thoroughly, although the effects of the antiferromagnetic correlations due to the interchain couplings should be important for real materials. 
So far, Sakai and Takahashi investigated a quasi-1D $`s=1`$ spin system by combining the mean field theory with the exact diagonalization results for the spin chain, and gave a rough estimate for the phase-transition point to the antiferromagnetic phase. In this paper, we systematically study how the Haldane and the dimer phases for the $`s=1`$ antiferromagnetic chain are driven to the magnetically ordered phase in 2D and 3D systems by exploiting the series expansion techniques. In particular, we propose a mixed-spin cluster expansion by mapping the $`s=1`$ spin model to the equivalent mixed-spin model, which allows us to deal with the Haldane phase. This new approach is a realization of the notion of the VBS in a perturbation theory. We determine the phase diagram rather precisely both for the 2D and 3D cases by computing the spin excitation gap and the staggered susceptibility. Let us first consider the $`s=1`$ antiferromagnetic quantum spin system on a 2D square lattice, which is described by the Hamiltonian $`H`$ $`=`$ $`{\displaystyle \sum _{i,j}}\left[\mathrm{\Gamma }_i𝐒_{i,j}\cdot 𝐒_{i+1,j}+J𝐒_{i,j}\cdot 𝐒_{i,j+1}\right],`$ (1) where $`J`$ is the interchain coupling and $`𝐒_{i,j}`$ is the $`s=1`$ operator at the $`(i,j)`$-th site in the $`(xy)`$ plane. Here we have introduced the bond-alternation parameter $`\alpha `$ $`(0\le \alpha \le 1)`$ along the $`x`$ direction, $`\mathrm{\Gamma }_i=1(\alpha )`$ for even (odd) $`i`$. All the exchange couplings are assumed to be antiferromagnetic. We employ the series expansion method developed by Singh, Gelfand and Huse. Since this method combines the conventional perturbation theory with the cluster expansion, it has the advantage that it can deal with spin systems in higher dimensions even in cases where reliable results are difficult to obtain by the exact diagonalization, the DMRG, etc. 
In fact, the series expansion method has been successfully applied to the 2D spin systems with various structures, Kondo lattice, bilayer systems, etc. However, to apply the series expansion technique to the present system including the Haldane phase, a nontrivial generalization is needed, since a naive cluster expansion may not describe the Haldane state. For instance, the dimer state is adiabatically connected to the isolated $`s=1`$ dimers, but the Haldane state does not have its analogue in the isolated local singlets composed of several $`s=1`$ spins. To overcome this problem, we wish to recall the notion of the VBS, which captures the essence of the Haldane-gap formation. To realize this idea in the series expansion, we first divide half of the $`s=1`$ spins into two $`s=1/2`$ spins as schematically shown in Fig. 1, and map the system to the mixed-spin system which is equivalent to the original model except for a trivial isolated excited mode. As a starting configuration in the perturbative expansion, we can then consider two types of the mixed-spin cluster singlets formed by the solid lines in Figs. 1 (b) and (c). It is seen that by starting from the configuration (b) we can directly deal with the Haldane phase since it has the structure of the Haldane state in the VBS picture, whereas if the configuration (c) is chosen, we naturally end up with the standard dimer expansion. The above mapping thus gives us an important message that the Haldane phase is adiabatically connected to the isolated mixed-spin singlet states in Fig. 1(b), and thereby can be treated by the mixed-spin cluster expansion method. The resulting cluster expansion around the isolated mixed-spin singlets should provide a quite powerful method, which enables us to deal with the competition among the Haldane phase, the dimer phase and the magnetically ordered phase in 2D and 3D systems. 
Let us begin with the quantum phase transition between the Haldane phase and the antiferromagnetic phase in the 2D $`s=1`$ spin system with bond alternation. To this end, we consider the effective 2D mixed-spin system shown in Fig. 2. In this figure, the large (small) circle represents the $`s=1(s=1/2)`$ spin. The bold solid, the thin solid and the dashed lines indicate the coupling constant $`1`$, $`\lambda `$ and $`J\lambda `$, respectively. In this figure, the model without bond alternation is drawn for simplicity. We note that the mixed-spin system reproduces the original 2D spin system at $`\lambda =1`$. To perform the cluster expansion, the Hamiltonian is divided into two parts as $`H=H_0+\lambda H_1`$, where $`H_0`$ contains the bonds within the mixed-spin clusters and $`\lambda H_1`$ the bonds connecting different clusters. The first term is the unperturbed Hamiltonian which stabilizes the isolated mixed-spin cluster singlets. The corresponding mixed-spin cluster has the configuration, $`1/2-1-1/2`$, which is formed by the antiferromagnetic couplings $`1`$ and $`\alpha `$. These isolated clusters have the singlet ground state with the spin gap $`\mathrm{\Delta }=(3\alpha +3-\sqrt{9-14\alpha +9\alpha ^2})/4`$. The perturbed part of the Hamiltonian labeled by $`\lambda `$ connects these isolated mixed-spin singlets to form a 2D network and thus enhances the antiferromagnetic correlation. We compute the staggered susceptibility $`\chi _{\mathrm{AF}}`$, and the singlet-triplet excitation gap $`\mathrm{\Delta }`$ at the ordering wave vector. These quantities are then expanded as a power series in $`\lambda `$. We finally determine the phase boundary by the divergent staggered susceptibility and the vanishing spin gap, which are estimated by applying the Padé approximants to the quantities obtained up to finite order in $`\lambda `$. To confirm how well our mixed-spin cluster approach works, we first investigate the $`s=1`$ spin chain without bond alternation. 
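The gap of the isolated $`1/2-1-1/2`$ cluster, $`\mathrm{\Delta }=(3\alpha +3-\sqrt{9-14\alpha +9\alpha ^2})/4`$, can be verified by exact diagonalization of the 12-dimensional cluster Hilbert space. A minimal sketch (numpy only; the function names are our own):

```python
import numpy as np

def spin_matrices(s):
    """Return (Sx, Sy, Sz) for spin s in the standard |s, m> basis."""
    m = np.arange(s, -s - 1, -1)
    dim = len(m)
    Sz = np.diag(m)
    Sp = np.zeros((dim, dim))        # raising operator <m+1|S+|m>
    for i in range(1, dim):
        Sp[i - 1, i] = np.sqrt(s * (s + 1) - m[i] * (m[i] + 1))
    return (Sp + Sp.T) / 2, (Sp - Sp.T) / 2j, Sz

def cluster_gap(alpha):
    """Singlet-triplet gap of the 1/2 - 1 - 1/2 cluster, couplings 1 and alpha."""
    half, one = spin_matrices(0.5), spin_matrices(1.0)
    I2, I3 = np.eye(2), np.eye(3)
    H = np.zeros((12, 12), dtype=complex)
    for a, b in zip(half, one):
        H += np.kron(np.kron(a, b), I2)           # left bond, coupling 1
    for b, c in zip(one, half):
        H += alpha * np.kron(np.kron(I2, b), c)   # right bond, coupling alpha
    e = np.linalg.eigvalsh(H)
    return e[1] - e[0]   # nondegenerate singlet ground state, then triplet

for alpha in (0.25, 0.5, 0.75, 1.0):
    exact = (3 * alpha + 3 - np.sqrt(9 - 14 * alpha + 9 * alpha**2)) / 4
    assert abs(cluster_gap(alpha) - exact) < 1e-10
```

At $`\alpha =1`$ the gap is exactly $`1`$, and at $`\alpha =0`$ the free end spin makes the gap vanish, in accord with the closed form.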
Performing the mixed-spin cluster expansion, we calculate the ground state energy $`E_g`$, the staggered susceptibility $`\chi _{\mathrm{AF}}`$ and the singlet-triplet excitation gap $`\mathrm{\Delta }`$ up to the eleventh, the fifth and the seventh order, respectively. At first sight, the order in the series for the staggered susceptibility and the excitation gap might not be high enough to produce accurate values at $`\lambda =1`$ (the Haldane point) by means of the ordinary differential methods. It is remarkable, however, that there exists an additional symmetry property like $`Q(\lambda )=\lambda Q(1/\lambda )`$ for each quantity $`Q`$ in our effective mixed-spin chain, which enables us to expand the quantity $`Q`$ as a power series even around $`\lambda =1`$. Fitting this power series with that obtained by the cluster expansion, we end up with the rather accurate values, $`E_g=-1.4022`$, $`\chi _{\mathrm{AF}}=19.6`$ and $`\mathrm{\Delta }=0.404`$, which are compared with those of the Monte Carlo simulations: $`E_g=-1.4015\pm 0.0005`$, $`\mathrm{\Delta }=0.41`$ in ref., and also the exact diagonalization $`\mathrm{\Delta }=0.411\pm 0.001`$, $`\chi _{\mathrm{AF}}=18.4\pm 1.3`$ in ref. To study the quantum phase transition on a 2D lattice by increasing the interchain couplings, we evaluate the singlet-triplet excitation gap $`\mathrm{\Delta }`$ by means of the mixed-spin cluster expansion up to the fifth order in $`\lambda `$ for various choices of $`\alpha `$ and $`J`$. It is sufficient to consider the parameter regime near $`\lambda =1`$ to discuss the original Haldane system. In the case without bond alternation $`(\alpha =1)`$, applying the Dlog Padé approximants to the spin gap, the critical value $`J_c=0.056\pm 0.001`$ and the critical exponent $`\nu =1.86\pm 0.08`$ are obtained. 
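The Dlog Padé procedure used here can be illustrated on a synthetic series whose critical point and exponent are known in advance: the logarithmic derivative of $`\mathrm{\Delta }(\lambda )\propto (\lambda _c-\lambda )^\nu `$ has a simple pole at $`\lambda _c`$ with residue $`\nu `$. The sketch below takes $`\lambda _c=0.056`$ and $`\nu =1.86`$ (the values quoted above) purely as test inputs; the paper's actual series coefficients are not reproduced here.

```python
import numpy as np

def series_log_deriv(c, n):
    """Taylor coefficients of f'/f, given coefficients c of f (c[0] != 0)."""
    d = np.array([(k + 1) * c[k + 1] for k in range(len(c) - 1)])
    g = np.zeros(n)
    for k in range(n):                 # power-series division d / c
        s = d[k] if k < len(d) else 0.0
        for j in range(k):
            s -= g[j] * c[k - j]
        g[k] = s / c[0]
    return g

def pade_11(g):
    """[1/1] Pade approximant (a0 + a1 x)/(1 + b1 x) from g0, g1, g2."""
    b1 = -g[2] / g[1]
    return g[0], g[1] + g[0] * b1, b1

# synthetic 'gap' series: Delta(x) = (1 - x/lc)**nu
lc, nu, nmax = 0.056, 1.86, 6
c = np.zeros(nmax)
c[0] = 1.0
for k in range(1, nmax):               # binomial-series recurrence
    c[k] = c[k - 1] * (nu - k + 1) / k * (-1.0 / lc)

g = series_log_deriv(c, 3)
a0, a1, b1 = pade_11(g)
lc_est = -1.0 / b1                     # pole of the approximant -> lambda_c
nu_est = (a0 + a1 * lc_est) / b1       # residue at the pole -> exponent nu
assert abs(lc_est - lc) < 1e-10
assert abs(nu_est - nu) < 1e-8
```

Because the logarithmic derivative of a pure power law is a single simple pole, the [1/1] approximant recovers both parameters to machine precision; for real series, higher approximants such as the [2/2] and [3/2] mentioned in the text are compared to estimate error bars.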
Our results for $`\alpha =1`$ are much more accurate than those of the mean field theory combined with the exact diagonalization, which claimed the critical value to be $`J_c>0.025`$. We here note that the obtained critical exponent is different from the value $`\nu =0.71`$ expected for the 3D classical Heisenberg model. This implies that the quantum phase transition in our generalized mixed-spin model does not belong to the universality class of the 3D classical Heisenberg model in generic cases, although it should in the specific case $`\lambda =1`$. Assuming that the spin gap in the vicinity of the transition point vanishes with the same exponent even for the Haldane system with bond alternation, we determine the phase boundary shown as the dots with error bars in Fig. 3. The error bars come from the different values obtained by the different biased Padé approximants employed: \[1/2\], \[2/1\], \[2/2\], \[2/3\], \[3/2\]. Since the error bars increase as $`\alpha `$ decreases away from unity, it seems difficult to determine the phase boundary in the region close to the dimer phase. However, it is to be noted that this phase diagram should have the symmetry property $`J(\alpha )=\alpha J(1/\alpha )`$. Taking this into account, we can thus determine rather precisely the phase boundary between the Haldane phase and the antiferromagnetic phase, which is drawn by the solid line in Fig. 3. We shall see momentarily that the critical point between the dimer and the Haldane phases determined in this procedure is quite consistent with that obtained by the dimer expansion. Let us now turn to the dimer phase. In this case, our mixed-spin cluster expansion is equivalent to the standard dimer expansion. We perform the dimer expansion of the staggered susceptibility and the spin gap up to the fifth and the sixth order in $`\lambda `$ for various $`J`$, respectively. 
To estimate the phase boundary which separates the dimer phase and the antiferromagnetic phase, we use the ordinary Padé approximants as well as the biased Padé approximants, for which the phase transition is assumed to belong to the universality class of the 3D classical Heisenberg model. Using these Padé approximants, we arrive at the phase diagram shown in Fig. 3. When $`J=0`$ with small $`\alpha `$, the system is reduced to the isolated $`s=1`$ bond-alternating chain, which is known to have a disordered ground state with a spin gap due to the dimer singlet. Increasing the parameters $`J`$ and $`\alpha `$, the antiferromagnetic correlation grows, and the quantum phase transition to the magnetically ordered state occurs. We wish to note that the critical point $`(\alpha ,J)=(0.59,0)`$, which is determined from the series expansion of the spin gap, separates the Haldane phase, the dimer phase and the antiferromagnetically ordered phase in Fig. 3. Since the system in this case is reduced to independent $`s=1`$ spin chains with bond alternation, our numerical results reproduce the well-known fact that the ground state of the reduced chain with $`\alpha _c=0.59`$ is in a critical phase with neither a spin gap nor long-range order. To confirm how accurate our results for 2D cases are, we have directly analyzed the spin chain ($`J=0`$) by applying the Dlog Padé approximants to the spin gap computed up to the eighth order. This gives $`\alpha _c=0.612\pm 0.004`$, which is close to the value $`0.59`$ obtained above, and also to $`0.60\pm 0.01`$ obtained by DMRG. Judging from these results, we can say that our phase boundary determined by the excitation gap in Fig. 3 is quite accurate, while that by the staggered susceptibility has a slight deviation only around the critical point. In order to demonstrate that our approach is also powerful to compute the elementary excitation with finite momenta, we show the calculated dispersion relation in Fig. 
4 along the specific line of $`J=0`$. Reflecting the isolated spin-singlet structure, the Brillouin zone becomes half of the original one. In the dimer phase $`0<\alpha <\alpha _c`$, using the first order inhomogeneous differential method, we can obtain the dispersion relation. Here, to obtain the dispersion for the Haldane phase, we have again made use of the additional symmetry property inherent in the effective mixed-spin model mentioned above. It is to be emphasized that such a precise dispersion is obtained within the lower-order perturbations, which is indeed due to the additional symmetry we have used. We now move to the 3D system. The advantage of our approach is particularly remarkable for the 3D problem because other numerical methods may often meet difficulties in treating a large system in the 3D case. We here consider a cubic lattice system by adding the interchain couplings $`J`$ in the $`z`$ direction to the spin model discussed above. By extending our treatment to the 3D system, we thus study the competition between the two kinds of gapped states and the antiferromagnetic state. Applying the dimer expansion to calculate the spin gap and the staggered susceptibility up to the fifth order and using the Dlog \[2/2\] Padé approximants, we first determine the phase boundary which separates the dimer and the antiferromagnetically ordered phase in Fig. 5. When $`\alpha =0`$, our system reproduces the $`s=1`$ bilayer Heisenberg model. Increasing the inter-dimer coupling $`J`$ from zero, the antiferromagnetic correlation grows and the quantum phase transition occurs at $`J_c=0.143\pm 0.006`$. We note that the quantum phase transitions in the bilayer model have already been studied by Gelfand et al. with the series expansion method. On the other hand, to observe the phase transition from the Haldane phase to the ordered phase, we further perform the mixed-spin cluster expansion up to the fourth order for both of the above quantities. 
In the homogeneous case $`(\alpha =1)`$, by analyzing the data in terms of various Dlog Padé approximants we end up with the critical point $`J_c=0.026\pm 0.001`$, which is consistent with those of the non-linear $`\sigma `$ model approach and the mean field theory combined with the numerical method. The phase diagram thus determined is shown in Fig. 5. In summary, we have investigated the quantum phase transitions for the $`s=1`$ quantum systems with the 2D and 3D structures. Using the series expansion, we have discussed how the dimer phase and the Haldane phase realized in 1D compete with the magnetically ordered phase in higher dimensions. In particular, we have proposed a novel approach based on the mixed-spin cluster expansion which realizes the idea of the VBS in the perturbation theory. This new approach has made it possible to treat the Haldane phase in the series expansion framework, which was not dealt with so far by ordinary series expansion methods. For the spin chain case, we have obtained fairly good results comparable to other numerical methods. For the 2D and 3D cases, the phase diagram has been determined rather precisely by making use of an additional symmetry property in the effective mixed-spin model. It is quite interesting to further apply the mixed-spin cluster approach to the frustrated case, the anisotropic case, etc., in quasi-1D Haldane systems, which is now under consideration. The work is partly supported by a Grant-in-Aid from the Ministry of Education, Science, Sports, and Culture. A. K. is supported by the Japan Society for the Promotion of Science. Part of the numerical computations in this work was carried out at the Yukawa Institute Computer Facility.
# Surface Phonons and Other Localized Excitations ## I introduction Unlike molecules which have discrete vibrational frequencies, crystals have a continuous spectrum of vibrations which can propagate as travelling waves . This fact causes crystals to be much better heat conductors than glasses or liquids. Sometimes the spectrum is interrupted by gaps where no propagating normal modes occur. Other interesting behavior happens at frequencies inside the gap, such as localized (non-propagating) normal modes associated with defects and surfaces. The text by Ziman has a good discussion. A visualization of surface modes on the (100) surface of Cu is on the website of Ch. Wöll, Ruhr-Universität Bochum . The present paper shows how this happens for some particular cases of one-dimensional crystals, or linear chains of atoms. Our treatment uses only classical mechanics, and gives properties (frequency and displacement pattern) rigorously by pictorial arguments with no higher algebra. The surface phonon provides the simplest example of wave localization, an effect which occurs in many branches of physics. Analogous phenomena are found in the quantum treatment of electrons in single-particle approximation , and in the new field of “photonic band-gap systems” . This paper reports a simple way of understanding the surface phonon on the diatomic linear chain. The model is then extended and reinterpreted to give simple explanations of some other localized modes. ## II diatomic molecule A diatomic molecule has a single vibrational “normal mode.” Even though the restoring force of atom 1 on atom 2 has in reality a complicated quantum-mechanical origin, for small displacements away from equilibrium, it can always be well approximated by a spring obeying Hooke’s law with a spring constant $`K`$. 
Using standard physics of the two-body problem , if the two atoms have masses $`M_H`$ and $`M_L`$ ($`H`$ and $`L`$ are for heavy and light), the squared oscillation frequency $`\omega ^2`$ is $`K/M_{\mathrm{red}}`$ where $`M_{\mathrm{red}}`$ is the “reduced mass” $`M_LM_H/(M_L+M_H)`$. ## III perfect infinite chains A crystalline solid is a very large molecule, with a continuous spectrum (or band) of vibrational frequencies. Solids can also be modelled by masses connected to each other by springs. A one-dimensional chain of masses is often studied, not because it is found in nature, but because the mathematics is simple and can be generalized to more realistic three-dimensional arrangements. For a large enough collection of atoms, most of the vibrational normal modes are classified as “bulk” normal modes, which means they are essentially identical to those of a hypothetical infinite sample with no boundaries. Each “bulk” normal mode has a pattern of atomic displacements which extends throughout the system. Similar to the normal modes of a vibrating string, these are sine and cosine standing waves. Alternately, one can use linear combinations of sines and cosines to give an equivalent basis of left and right-going travelling waves. For the case of all masses equal to $`M_0`$, the $`\ell `$’th atom (located at $`R_{\ell }=\ell a`$) has a displacement $`A\mathrm{sin}(kR_{\ell }-\omega _kt)`$ in a right-going travelling wave. The corresponding squared frequency is $`(4K/M_0)\mathrm{sin}^2(ka/2)`$. There are as many such solutions ($`N`$) as there are atoms in the chain, namely solutions for each $`k`$ in the range $`(-\pi /a,\pi /a)`$. This is derived in many texts . For $`N\to \infty `$ the spectrum is continuous between the minimum squared frequency of zero and the maximum of $`\omega _{\mathrm{MAX}}^2=4K/M_0`$. A particularly original discussion is given by Martinez . 
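The monatomic dispersion relation can be checked by diagonalizing the dynamical matrix of a finite periodic ring, whose allowed wavevectors are $`k=2\pi m/Na`$. A short numerical sketch (units $`K=M_0=a=1`$ are illustrative):

```python
import numpy as np

K, M0, N = 1.0, 1.0, 24            # monatomic ring of N atoms, periodic
Phi = np.zeros((N, N))             # force-constant matrix: F_i = -sum_j Phi_ij u_j
for i in range(N):
    Phi[i, i] = 2 * K
    Phi[i, (i + 1) % N] = Phi[i, (i - 1) % N] = -K

w2 = np.sort(np.linalg.eigvalsh(Phi / M0))   # eigenvalues are omega^2

# allowed wavevectors k = 2*pi*m/N (in units of 1/a); compare with the text
k = 2 * np.pi * np.arange(N) / N
pred = np.sort(4 * K / M0 * np.sin(k / 2) ** 2)
assert np.allclose(w2, pred)                 # omega^2 = (4K/M0) sin^2(ka/2)
assert abs(w2[0]) < 1e-10                    # uniform translation, omega = 0
assert abs(w2[-1] - 4 * K / M0) < 1e-10      # band top at k = pi/a
```

The spectrum fills the band $`0\omega ^24K/M_0`$ ever more densely as $`N`$ grows, as stated in the text.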
The vibrational spectrum of a real material sometimes has a gap, an interval of frequencies where there are no travelling wave solutions. A simple model illustrating this is the “diatomic chain,” an infinite chain of alternating masses $`M_L`$, $`M_H`$. The algebra, which is more complicated than for the monatomic chain, is also given in texts . The dispersion curve for $`\omega _k^2`$ is given in Fig. 1. There are now two “branches,” labelled acoustic and optic, and a gap. Exactly in the middle of the gap, the surface may induce a localized vibrational normal mode, with amplitude which falls exponentially ($`\mathrm{exp}(-R/\xi )`$) with distance $`R`$ into the bulk. Before discussing this, we sharpen our understanding with a quantitative interpretation of the four special bulk modes indicated by circles in Fig. 1. The frequencies of these special modes can be understood without the algebra needed to find the frequencies of the modes at general $`k`$-vectors. ## IV special bulk modes The four special modes circled in Fig. 1 have the simple vibrational patterns shown in Fig. 2. First, why are these patterns “normal modes”? If we take as initial conditions the velocities of all atoms to be zero and the positions to be as shown in the figures, then Newton’s laws have simple, and perhaps even obvious solutions: the pattern is preserved, and oscillates in time as $`\mathrm{cos}(\omega t)`$ for some special choice of $`\omega `$. This is the definition of a normal mode. Second, what is the corresponding frequency of oscillation? This can be answered by careful consideration of forces and masses. Mode a is the simplest mode, with all atoms having the same displacement. It has infinite wavelength (zero wavevector), no stretch of any spring, and therefore zero restoring force and $`\omega =0`$. Mode b has oppositely directed displacements for adjacent atoms. Each unit cell of the crystal has the same displacement pattern. 
Therefore the wavelength is infinite and the wavevector is zero. The displacements in mode $`𝐛`$ are such that $`u_L`$ (the displacement of the light atom) is proportional to $`M_H`$, and similarly $`u_H`$ is proportional to $`M_L`$. Thus the center of mass of each unit cell is fixed. The mode is almost the same as in a diatomic molecule, except each atom has two springs attached, one stretched and the other compressed by the same amount. Therefore, when released from rest, each pair of atoms oscillates with fixed center of mass but with twice the restoring force of an isolated diatomic molecule, i.e. $`\omega ^2=2K/M_{\mathrm{red}}`$. This is the highest frequency normal mode in the spectrum. Mode c has light atoms stationary and heavy atoms moving in an alternating pattern. The light atoms feel equal and opposite forces which cancel, while the heavy atoms feel repulsive and attractive forces which add. This pattern also oscillates in time, with squared frequency $`\omega ^2=2K/M_H`$. Mode d is the same as mode c except heavy and light atoms are interchanged, making the squared frequency equal to $`2K/M_L`$. Modes c and d have wavelength $`4a`$ and wavevector $`\pi /2a`$. All other normal modes of the infinite crystal are more complicated and have frequencies which lie on smooth curves connecting these four modes. ## V surface mode in the gap Modes which are confined to the surface region normally must have frequencies which lie outside the “bulk” bands. Discussions of such modes are given in texts on surface physics and measurements are cataloged by Kress and de Wette . We have discovered a very simple explanation of the fact that a “gap mode” confined to the surface occurs in the diatomic chain if the endmost atom is a light atom. Consider mode $`e`$, which like mode $`b`$ has pairs of atoms vibrating with fixed center of mass. However, adjacent pairs vibrate in such a way that the connecting spring is not stretched. 
Thus each pair experiences no force from any other atom and is decoupled from the rest of the chain. The resulting decoupled pairs oscillate with $`\omega ^2=K/M_{\mathrm{red}}`$ as for isolated diatomic molecules. Since all pairs have the same frequency, this is a stable normal mode. The frequency lies exactly in the middle of the gap of the squared frequency spectrum ($`K/M_{\mathrm{red}}=(1/2)(2K/M_L+2K/M_H)`$). In order to be decoupled, the heavy atom of a given pair, and the adjacent light atom of the next pair deeper into the bulk, must have the same displacement, smaller by $`M_L/M_H`$ (and with opposite sign) than the displacement of the previous light atom closer to the surface, in order to conserve center of mass position. Since adjacent pairs have displacement ratios $`-M_L/M_H`$, the $`n`$-th pair has amplitude proportional to $`(-M_L/M_H)^n=(-1)^n\mathrm{exp}(-n\mathrm{ln}(M_H/M_L))`$. This is an exponential decay $`\mathrm{exp}(-2na/\xi )`$ with decay length $`\xi =2a/\mathrm{ln}(M_H/M_L)`$. If the surface atom had been a heavy atom, this mode would have been exponentially growing rather than decaying, which is not allowed for a normal mode. Mode e was first found by Wallis in an elegant calculation of the spectrum of finite chains. Our simple argument is not (to our knowledge) in the literature. A “standard” derivation is given in the text by Cottam and Tilley . 
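Both the special bulk frequencies of Fig. 2 and the midgap surface mode can be checked numerically. Note that for a finite open chain terminated by a light atom at one end and a heavy atom at the other, mode e is an exact eigenmode (the heavy-terminated far end forms an isolated dimer in the same construction). A sketch with illustrative masses:

```python
import numpy as np

K, ML, MH = 1.0, 1.0, 3.0
Mred = ML * MH / (ML + MH)

def spectrum(masses, periodic):
    """omega^2 eigenvalues and eigenvectors of the mass-weighted dynamical matrix."""
    n = len(masses)
    Phi = np.zeros((n, n))
    for b in range(n if periodic else n - 1):   # one spring of constant K per bond
        i, j = b % n, (b + 1) % n
        Phi[i, i] += K
        Phi[j, j] += K
        Phi[i, j] -= K
        Phi[j, i] -= K
    return np.linalg.eigh(Phi / np.sqrt(np.outer(masses, masses)))

# periodic ring of 8 unit cells: modes a-d and the empty gap
w2, _ = spectrum(np.array([ML, MH] * 8), periodic=True)
assert abs(w2[0]) < 1e-10                        # mode a: omega = 0
assert abs(w2[-1] - 2 * K / Mred) < 1e-10        # mode b: top of optic branch
assert np.any(np.isclose(w2, 2 * K / MH))        # mode c: top of acoustic branch
assert np.any(np.isclose(w2, 2 * K / ML))        # mode d: bottom of optic branch
assert not np.any((w2 > 2 * K / MH + 1e-8) & (w2 < 2 * K / ML - 1e-8))

# open chain L H L H ... L H: exact midgap surface mode
masses = np.array([ML, MH] * 6)
w2, vecs = spectrum(masses, periodic=False)
i = np.argmin(abs(w2 - K / Mred))
assert abs(w2[i] - K / Mred) < 1e-10             # omega^2 = K/M_red, midgap
u = vecs[:, i] / np.sqrt(masses)                 # physical displacements
assert abs(u[1] / u[0] + ML / MH) < 1e-8         # pair partner: factor -M_L/M_H
assert abs(u[2] / u[0] + ML / MH) < 1e-8         # next light atom: same value
```

With $`M_H/M_L=3`$ the gap runs from $`2K/M_H`$ to $`2K/M_L`$, and the single localized eigenvalue sits exactly at its midpoint $`K/M_{\mathrm{red}}`$, decaying by $`-M_L/M_H`$ per pair.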
If cut perpendicular to a conventional $`\widehat{x}`$ or $`\widehat{y}`$ axis shown in the figure by dashed lines, the surface contains equal numbers of $`A`$ and $`B`$ ions, and is referred to as “non-polar.” By contrast, the surface shown is a “polar surface” with a layer of light atoms exposed and layers of heavy and light atoms alternating underneath. There is a vibrational normal mode in which each layer oscillates perpendicular to the surface (as indicated by arrows) and which is localized at the surface. Of course, in real crystals the forces extend beyond first neighbors, so the displacement ratio $`(M_L/M_H)`$ may not be exactly obeyed and the squared frequency may not lie exactly at mid-gap, but the actual behavior will mimic reasonably well the idealized one-dimensional example of the previous section. There is actually not just one mode of this type, but a branch of such modes, with displacement patterns sinusoidally modulated along the surface. The one depicted in Fig. 3 has the surface atoms “a”, “b”, “c”, all moving in phase, corresponding to an infinite wavelength, or zero wavevector, parallel to the surface. The other extreme case of modulation is when atoms along the surface are completely out of phase; when atom “a” moves down, atom “b” moves up, and so forth, corresponding to a wavelength $`\lambda =2\sqrt{2}a`$ in the plane of the surface. Thus we anticipate a branch of surface excitations with wavevectors lying in the plane of the surface. In order for such a mode to be exponentially localized in the surface region, the frequency of oscillation must lie in a gap where there are no corresponding bulk normal modes with the same components of wavevector in the plane of the surface. A gap is almost certain to occur for the case of zero wavevector, but at increasing wavevectors the gap may disappear, and the mode ceases to be localized near the surface. 
Dimension two or three also opens new possibilities less directly related to one-dimensional models, such as surface normal modes with displacements in the plane of the surface. Many branches of surface normal modes have been seen experimentally in scattering experiments. Unfortunately we have not been able to locate in the literature any observation of the simple mode illustrated in Fig. 3. This is perhaps because polar surfaces are relatively unstable and hard to create and work with.

## VII impurity atom on the surface

Another known result is that a surface mode appears above the bulk frequency spectrum of a monatomic chain, provided the atom on the surface is lighter than the rest by at least a factor of two. This can be proven by a reinterpretation of the previous construction. For mode $`𝐞`$ in Fig. 2, let the two atoms connected by the unstretched spring be reinterpreted as a single atom of mass $`M_0=M_H+M_L`$. Then the model has new interior atoms all with mass $`M_0`$, but a surface impurity atom with mass $`M_{\mathrm{imp}}=M_L<0.5M_0`$. The surface mode $`𝐞`$ still solves Newton’s laws with $`\omega _S^2=K/M_{\mathrm{red}}`$ and $`M_{\mathrm{red}}=M_LM_H/(M_L+M_H)`$. In terms of the new variables $`M_{\mathrm{imp}}`$ and $`M_0`$ the reduced mass is $`M_{\mathrm{red}}=M_{\mathrm{imp}}(M_0-M_{\mathrm{imp}})/M_0`$. The squared frequency $`\omega _S^2`$ lies above the top of the bulk band ($`\omega _{\mathrm{MAX}}^2=4K/M_0`$) if $`M_0>2M_{\mathrm{imp}}`$, and merges into the bulk band for $`M_0\le 2M_{\mathrm{imp}}`$. This result seems also to have been first discovered by Wallis. A “standard” proof of this result is in the book by Desjonquères and Spanjaard.

## VIII localized gap mode of a stacking fault

The gap mode e of Fig. 2 generates a corresponding mode of a defective bulk crystal, shown in Fig. 4. This mode decays exponentially in both directions away from the center of symmetry.
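The surface-impurity criterion $`M_{\mathrm{imp}}<M_0/2`$ from the previous section can also be checked numerically. A minimal sketch with assumed unit host mass and force constant:

```python
import numpy as np

def top_mode(m_imp, M0=1.0, K=1.0, N=400):
    """Largest squared frequency of a free monatomic chain with an
    impurity mass m_imp at one end (all other masses M0)."""
    m = np.full(N, M0)
    m[0] = m_imp
    Kmat = np.zeros((N, N))
    for i in range(N - 1):
        Kmat[i, i] += K; Kmat[i + 1, i + 1] += K
        Kmat[i, i + 1] -= K; Kmat[i + 1, i] -= K
    inv = 1.0 / np.sqrt(m)
    return np.linalg.eigvalsh(Kmat * np.outer(inv, inv)).max()

# Light end atom, M_imp < M0/2: split-off mode at K/M_red above the band
M_imp = 0.4
M_red = M_imp * (1.0 - M_imp) / 1.0    # M_imp (M0 - M_imp) / M0
print(top_mode(M_imp), 1.0 / M_red)    # both 25/6 ~ 4.167 > omega_MAX^2 = 4

# End atom with M_imp > M0/2: no mode survives above omega_MAX^2 = 4K/M0
print(top_mode(0.6))                   # stays inside the band, below 4.0
```

The split-off eigenvalue matches $`K/M_{\mathrm{red}}`$ to machine precision, and it disappears once the end mass exceeds half the host mass.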
This center lies in the middle of a “stacking fault” where two light-mass atoms have been put adjacent to each other. It is a one-dimensional version of a planar defect which occurs in real three-dimensional crystals. The quantum-mechanical force between two light-mass atoms differs from the force which binds the atoms of unlike mass. Therefore, we must expect that the separation $`a^{\prime }`$ of the light-mass atoms will differ from the equilibrium separation $`a`$ of unlike atoms, and that the force constant $`K^{\prime }`$ between these atoms will differ from the constant $`K`$ occurring elsewhere. Notice that for the special displacement pattern of Fig. 4, there is no force between the adjacent light atoms, so the values of $`a^{\prime }`$ and $`K^{\prime }`$ are irrelevant; the squared frequency of the normal mode is exactly the same as that of the surface mode e of Fig. 2, and is pinned at midgap. The stacking fault is a simple example of a “topological defect,” that is, a defect which cannot be transformed away by any local change. As far as we know, the mid-gap normal mode of vibration found here for the stacking fault has not previously been discussed in the literature. However, a close analog is the “topological soliton” found at mid-gap in the electronic spectrum of the “Su-Schrieffer-Heeger” model for polyacetylene with a topological defect in the pattern of dimerization of carbon-carbon bonds along the chain.

## IX localized vibration of a light mass impurity in a monatomic chain

Suppose an impurity of mass $`M_{\mathrm{imp}}<M_0`$ is substituted into a monatomic chain of mass $`M_0`$ with no change in force constants. Define the fractional mass deficit to be $`ϵ=(M_0-M_{\mathrm{imp}})/M_0>0`$. It is known that this system supports a localized mode whose frequency “splits off” above the frequency $`\omega _{\mathrm{MAX}}`$ of the uppermost bulk mode.
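The claimed pinning of the stacking-fault mode at midgap, independent of $`K^{\prime }`$ (and trivially of $`a^{\prime }`$, which never enters the equations of motion), can be verified numerically. A sketch with assumed parameters $`K=M_L=1`$, $`M_H=3`$:

```python
import numpy as np

def midgap_mode(K_fault, K=1.0, M_L=1.0, M_H=3.0, n=25):
    # ...H L H L | L H L H...: two adjacent light atoms at the center,
    # joined by a spring K_fault; all other springs have constant K.
    m = np.array([M_H, M_L] * n + [M_L, M_H] * n)
    N = m.size
    springs = [K] * (N - 1)
    springs[2 * n - 1] = K_fault          # the fault bond between the two L's
    Kmat = np.zeros((N, N))
    for i, k in enumerate(springs):
        Kmat[i, i] += k; Kmat[i + 1, i + 1] += k
        Kmat[i, i + 1] -= k; Kmat[i + 1, i] -= k
    inv = 1.0 / np.sqrt(m)
    w2 = np.linalg.eigvalsh(Kmat * np.outer(inv, inv))
    target = K * (1 / M_L + 1 / M_H)      # K/M_red, the midgap value
    return w2[np.argmin(np.abs(w2 - target))]

print(midgap_mode(0.5), midgap_mode(2.0))  # both pinned at 4/3
```

Both calls return the same midgap eigenvalue $`K/M_{\mathrm{red}}=4/3`$ up to boundary corrections of order $`(M_L/M_H)^{2n}`$, confirming that the fault bond does not shift the mode.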
Specifically, the mode has squared frequency $`\omega _{\mathrm{MAX}}^2/(1-ϵ^2)`$ and is localized around the impurity with localization length $`a/\mathrm{ln}((1+ϵ)/(1-ϵ))`$. The earliest presentation of this mode known to us is by Montroll and Potts. The topic of localized modes in solids had been given a systematic formulation in three earlier papers by Lifshits, available only in Russian. A textbook derivation is given by Mihály and Martin, and a nice qualitative discussion is given by Harrison. These results follow rigorously by reinterpretation of Fig. 4. Simply regard each pair of co-moving atoms as a single atom whose mass is the sum of the two shown in the figure. Thus $`M_0`$ is $`M_H+M_L`$, $`M_{\mathrm{imp}}`$ is $`2M_L`$, and the new lattice constant is twice the previous distance $`a`$. When the impurity mass is heavier than the host mass, there is no longer a split-off bound state, but instead a “resonance” within the bulk band. In three-dimensional crystals the occurrence of a vibrational bound state requires a minimum mass deficit $`ϵ`$ which is model-dependent, whereas our 1-d example has a bound state for arbitrarily small mass deficit. This is a classical discrete-system analog of the continuum quantum-mechanical theorem that an attractive well always has a bound state in a 1-d one-electron problem (and also in 2-d) but requires a critical well-depth in 3-d. For the impurity on the surface, however, we saw that even in 1-d there is a critical mass deficit of 1/2. The quantum analog is that if the well is at the edge of a 1-d half space (the other half of space is impenetrable because of an infinite potential), then there is a critical well-depth, equal to the well-depth at which the second bound state appears for the symmetric well in the full 1-d space.
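The Montroll–Potts result can be checked numerically on a periodic ring with an assumed mass deficit $`ϵ=1/2`$ (and $`K=M_0=1`$):

```python
import numpy as np

# Monatomic ring of mass M0 with one substitutional impurity of mass
# (1 - eps) M0; force constants unchanged. Parameters are illustrative.
K, M0, eps, N = 1.0, 1.0, 0.5, 201
m = np.full(N, M0)
m[N // 2] = (1 - eps) * M0

# Periodic force-constant matrix: 2K on the diagonal, -K to each neighbor
Kmat = 2 * K * np.eye(N)
for i in range(N):
    Kmat[i, (i + 1) % N] -= K
    Kmat[i, (i - 1) % N] -= K
inv = 1.0 / np.sqrt(m)
w2 = np.linalg.eigvalsh(Kmat * np.outer(inv, inv))

print(w2[-1])    # split-off mode: omega_MAX^2 / (1 - eps^2) = 16/3
print(w2[-2])    # next mode stays inside the bulk band (below 4K/M0 = 4)
```

With $`ϵ=1/2`$ the localization length is $`a/\mathrm{ln}3`$, so the mode is essentially unaffected by the finite ring and the top eigenvalue lands at $`\omega _{\mathrm{MAX}}^2/(1-ϵ^2)=16/3`$ to machine precision.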
## X summary

Two simple surface phonons and two simple bound defect modes in one-dimensional lattices have been quantitatively explained by pictorial construction and the elementary physics of the two-body problem. This is certainly not a complete catalog of interesting localized modes, but we think that these modes can serve as useful pedagogical models for phenomena in several branches of physics.

## ACKNOWLEDGMENTS

We thank A. A. Maradudin and N. Stojic for help. This work was supported in part by NSF grant no. DMR-9725037.
# Direct Observation of Antiferro-Quadrupolar Ordering – Resonant X-ray Scattering Study of DyB2C2 –

## Abstract

Antiferroquadrupolar (AFQ) ordering has been conjectured in several rare-earth compounds to explain their anomalous magnetic properties. No direct evidence for AFQ ordering, however, has been reported. Using the resonant x-ray scattering technique near the Dy $`L_{III}`$ absorption edge, we have succeeded in observing the AFQ order parameter in DyB<sub>2</sub>C<sub>2</sub> and analyzing its energy and polarization dependence. The much weaker coupling between orbital degrees of freedom and the lattice in $`4f`$ electron systems than in $`3d`$ compounds provides an ideal platform to study orbital interactions originating from electronic mechanisms.

Magnetic ions in a highly symmetric crystalline environment may have an orbital degeneracy in the crystalline electric field (CEF) ground state. With decreasing temperature, this degeneracy is lifted by some interaction. A typical example is the cooperative Jahn-Teller (JT) distortion, where the orbital degrees of freedom, coupled with a lattice distortion, give rise to a structural phase transition to lower crystalline symmetry. Long range orbital ordering (OO) thus driven was confirmed in K<sub>2</sub>CuF<sub>4</sub> by polarized neutron scattering and in LaMnO<sub>3</sub> by resonant x-ray scattering. Although OO is not necessarily associated with the cooperative JT distortion, as reported in La<sub>0.88</sub>Sr<sub>0.12</sub>MnO<sub>3</sub>, coupling with other degrees of freedom such as charge or lattice would remain in $`3d`$ compounds. In $`4f`$ electron systems with a degenerate ground state, however, orbital degrees of freedom may remain and undergo a phase transition without structural distortion because of the much weaker coupling between the lattice and the well-localized $`4f`$ orbitals.
Such a possibility was first discussed in cubic CeB<sub>6</sub> as long range ordering of the electric quadrupole moments of $`4f`$ orbitals. Quadrupole ordering is the phenomenon in which the $`f`$ electron charge distribution that diagonalizes certain quadrupole moments orders spontaneously in space as the temperature is lowered. In a ferroquadrupole (FQ) arrangement, the aligned quadrupole moments uniformly distort the lattice through a linear coupling between the quadrupole moment $`O_\mathrm{\Gamma }`$ at the wave vector $`q=0`$ and the strain $`ϵ_\mathrm{\Gamma }`$ of the same symmetry. Therefore, the order parameter can be obtained by measuring the JT-like lattice distortion. In an antiferroquadrupole (AFQ) arrangement, however, the AFQ order parameter at $`q\ne 0`$ does not couple linearly with the uniform strain. Thus, an atomic displacement is not always expected, which makes it extremely difficult to observe the order parameter. In this Letter, we present the first direct evidence for AFQ ordering by a resonant x-ray scattering study of DyB<sub>2</sub>C<sub>2</sub> near the Dy $`L_{III}`$ absorption edge. As shown in Fig. 1, DyB<sub>2</sub>C<sub>2</sub> has the $`P4/mbm`$ tetragonal structure consisting of Dy layers and B-C networks stacked alternately along the $`c`$ direction. The covalently bonded B-C network requires no electron transfer from the Dy<sup>3+</sup> ions ($`4f^9`$, $`{}^{6}H_{15/2}`$); thus DyB<sub>2</sub>C<sub>2</sub> has three conduction electrons per formula unit in the 5$`d`$ band and is metallic. Recently, Yamauchi et al. reported that DyB<sub>2</sub>C<sub>2</sub> exhibits phase transitions at $`T_Q\simeq 25`$ K and $`T_N\simeq 16`$ K. Specific heat measurements showed two distinct $`\lambda `$-type anomalies at $`T_Q`$ and $`T_N`$, each of which releases an entropy equivalent to $`R\mathrm{ln}2`$. Since Dy<sup>3+</sup> is a Kramers ion, two Kramers doublets should be involved in these successive transitions.
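The quoted entropy release is just that of a two-level (Kramers doublet) system. As an illustration, with an arbitrary splitting rather than any fitted value, integrating the Schottky specific heat recovers $`R\mathrm{ln}2`$:

```python
import numpy as np

R = 8.314462618          # gas constant, J mol^-1 K^-1
Delta = 10.0             # illustrative two-level splitting, in kelvin

# Schottky specific heat of a two-level system with equal degeneracies:
# C(T) = R (Delta/T)^2 e^{-Delta/T} / (1 + e^{-Delta/T})^2
T = np.logspace(-2, 3, 20000)
x = Delta / T
C = R * x**2 * np.exp(-x) / (1 + np.exp(-x))**2

# Entropy released across the anomaly: S = integral of (C/T) dT = R ln 2
y = C / T
S = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(T))   # trapezoid rule
print(S)                 # ~5.76 J mol^-1 K^-1, i.e. R ln 2
```

The result is independent of the splitting, which is why each $`\lambda `$-type anomaly releasing $`R\mathrm{ln}2`$ points to one Kramers doublet per transition.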
It is thus expected that the ground and first excited Kramers states are close or nearly degenerate and that quadrupole degrees of freedom remain. In contrast to the specific heat, almost no anomaly was observed in the magnetic susceptibility at $`T_Q`$, and neither a structural transition nor a lattice distortion was observed at $`T_Q`$ and $`T_N`$. Thus, the transition at $`T_Q`$ is neither magnetic nor structural. Neutron diffraction revealed antiferromagnetic (AFM) ordering below $`T_N`$. Spins are aligned within the $`c`$ plane and the magnetic structure is basically described by the two propagation vectors $`[\mathrm{1\; 0\; 0}]`$ and $`[\mathrm{0\; 1}\frac{1}{2}]`$, indicating that the Dy magnetic moments realize a 90° arrangement along $`c`$, which can hardly be explained by magnetic interactions alone. They also found weak magnetic signals at $`[\mathrm{0\; 0\; 0}]`$ and $`[\mathrm{0\; 0}\frac{1}{2}]`$, indicating that the moments are slightly canted within the $`c`$ plane. From these results, they proposed that phase I ($`T>T_Q`$) is paramagnetic, phase II ($`T_N<T<T_Q`$) is the AFQ ordered phase, and phase III ($`T<T_N`$) is the AFM and AFQ ordered phase. We have grown a DyB<sub>2</sub>C<sub>2</sub> single crystal by the Czochralski method. The crystal was checked by powder x-ray diffraction, which shows a diffraction pattern consistent with Ref. and no detectable foreign phases. The temperature dependence of the magnetization is also in good agreement. X-ray scattering measurements were performed on a six-axis diffractometer at the beamline 16A2 of the Photon Factory in KEK. A piece of the sample (about $`2`$ mm cubic) was mounted in a closed cycle <sup>4</sup>He refrigerator so as to align the $`c`$-axis parallel to the $`\varphi `$ axis of the spectrometer. The mosaicness was about 0.07° FWHM. The azimuthal angle $`\mathrm{\Psi }`$ (rotation around the scattering vector) is defined as 0° where the scattering plane contains the $`b`$ axis, i.e., $`[\mathrm{0\; 1\; 0}]`$.
The incident energy was tuned near the Dy $`L_{III}`$ edge, which was experimentally determined to be 7.792 keV using fluorescence. To separate the linearly polarized $`\sigma ^{\prime }`$ ($`\perp `$ the scattering plane) and $`\pi ^{\prime }`$ ($`\parallel `$ the scattering plane) components of the diffracted beam, we used the PG (006) reflection, whose scattering angle is about 91° at this energy, resulting in almost complete polarization: the $`\sigma \pi ^{\prime }/\sigma \sigma ^{\prime }`$ intensity ratio at $`(\mathrm{0\; 0\; 2})`$ was less than 0.5%. In our configuration, the $`(\mathrm{0\; 0\; 2})`$ intensity at Dy $`L_{III}`$ for $`\sigma \sigma ^{\prime }`$ is $`2.5\times 10^6`$ counts per second (cps) when the ring current is 300 mA. AFQ ordering can be directly observed by exploiting the sensitivity of x-ray scattering to an anisotropic $`f`$ electron distribution. In the present study, we have utilized the ATS (anisotropic tensor of x-ray susceptibility) technique, which was originally developed for detecting “forbidden reflections” that appear due to the asphericity of the atomic electron density. The ATS reflections, which are usually very small, increase in resonant x-ray scattering near an absorption edge because the anomalous scattering factor, sensitive to an anisotropic charge distribution, is dramatically enhanced. This technique was successfully applied to OO phenomena in 3$`d`$ oxides. We thus tuned the incident energy of the x-rays to the Dy $`L_{III}`$ edge, where $`2p_{3/2}\to 5d_{5/2}`$ dipole and $`2p_{3/2}\to 4f_{7/2}`$ quadrupole transitions are expected. Figure 1 shows a schematic view of the investigated $`q`$-space. Fundamental reflections appear where $`h+k=even`$. To look for the AFQ and AFM order parameters, we made scans along $`(\mathrm{0\; 0}l)`$, $`(\frac{1}{2}0l)`$, $`(\mathrm{1\; 0}l)`$, $`(h\mathrm{0\; 2})`$, $`(\frac{1}{2}\frac{1}{2}l)`$, $`(\mathrm{1\; 1}l)`$ and $`(\mathrm{2\; 1}l)`$ at the Dy $`L_{III}`$ edge without polarization analysis. Scans at 30 K give only fundamental reflections.
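The quoted analyzer geometry can be checked from Bragg's law. In the sketch below the graphite lattice constant is an assumed literature value, not a number from the text; it shows why a PG(006) analyzer near 91° gives nearly complete polarization separation, since leakage between channels scales as $`\mathrm{cos}^2(2\theta _A)`$:

```python
import math

E = 7792.0                   # Dy L_III edge energy, eV (from the text)
lam = 12398.4 / E            # photon wavelength in angstrom (hc ~ 12398.4 eV*A)
c_graphite = 6.71            # graphite c-axis in angstrom (assumed value)
d_006 = c_graphite / 6       # PG(006) plane spacing

theta = math.asin(lam / (2 * d_006))               # Bragg angle
two_theta = math.degrees(2 * theta)                # analyzer scattering angle
crosstalk = math.cos(math.radians(two_theta))**2   # unwanted-channel factor
print(two_theta, crosstalk)  # ~91 degrees; cross-talk well below 0.5%
```

An analyzer exactly at 90° would suppress the unwanted channel completely; the residual 0.7° offset explains why the measured cross-talk is small but nonzero.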
At 20 K ($`<T_Q`$), two kinds of superlattice reflections appear; one is characterized by the propagation vector $`q_{Q1}=(\mathrm{0\; 0}\frac{1}{2})`$ and the other by $`q_{Q2}=(\mathrm{1\; 0}\frac{1}{2})`$. At 10 K ($`<T_N`$), additional reflections appear at forbidden reflection points whose propagation vector is $`q_M=(\mathrm{1\; 0\; 0})`$. These temperature dependences suggest that the $`q_{Q1}`$ and $`q_{Q2}`$ reflections correspond to the expected AFQ ordering and that the $`q_M`$ peak is the AFM order parameter. Figure 2 shows the incident energy dependences of the fluorescence as well as of the $`(\mathrm{0\; 0\; 2.5})`$, $`(\mathrm{1\; 0\; 2.5})`$ and $`(\mathrm{1\; 0\; 2})`$ reflections, i.e., the $`q_{Q1}`$, $`q_{Q2}`$ and $`q_M`$ points. The $`(\mathrm{0\; 0\; 2.5})`$ peak shows a sharp enhancement at Dy $`L_{III}`$ in both the $`\sigma \sigma ^{\prime }`$ and $`\sigma \pi ^{\prime }`$ channels. Note that there exists another enhancement for $`\sigma \pi ^{\prime }`$ at 7.782 keV, 10 eV below the Dy $`L_{III}`$ edge, which we speculate corresponds to level splitting within the $`2p`$ and $`5d`$ states or to a quadrupole transition. The $`(\mathrm{1\; 0\; 2})`$ peak shows an enhancement in $`\sigma \pi ^{\prime }`$ at the Dy $`L_{III}`$ edge at 10 K. No such enhancement was found in $`\sigma \sigma ^{\prime }`$ at $`(\mathrm{1\; 0\; 2})`$, indicating that the $`(\mathrm{1\; 0\; 2})`$ reflection is dominated by $`\sigma \pi ^{\prime }`$ scattering, as expected for resonant magnetic scattering. As for $`(\mathrm{1\; 0\; 2.5})`$, there exists a clear energy enhancement in $`\sigma \pi ^{\prime }`$ at Dy $`L_{III}`$ below $`T_N`$, indicating a magnetic contribution, which is consistent with Ref. On the contrary, the $`\sigma \sigma ^{\prime }`$ scattering exhibits a non-resonant reflection below $`T_Q`$, as shown in Fig. 2(d). We first focus upon the resonant peaks, then discuss this non-resonant contribution at $`(\mathrm{1\; 0\; 2.5})`$.
In addition to the enhancement, the resonant ATS scattering from AFQ ordering is expected to show an azimuthal angle dependence reflecting the shape of the $`f`$ electron distribution. As shown in Fig. 3, we measured the azimuthal dependence for two different polarizations by rotating the crystal around the scattering vector kept at $`(\mathrm{0\; 0\; 2.5})`$. Figure 3 demonstrates that the $`\sigma \sigma ^{\prime }`$ scattering exhibits a characteristic four-fold oscillation, compatible with the tetragonal symmetry. The intensity approaches zero at $`\mathrm{\Psi }=0`$ and $`\frac{\pi }{2}`$. The $`\sigma \pi ^{\prime }`$ scattering at $`(\mathrm{0\; 0\; 2.5})`$ also shows a four-fold oscillation. However, the oscillation for $`\sigma \pi ^{\prime }`$ is reversed with respect to that for $`\sigma \sigma ^{\prime }`$. Moreover, the intensity minimum remains finite at 10 K and approaches zero at 20 K, indicating that there exists a magnetic contribution to the $`\sigma \pi ^{\prime }`$ scattering at $`(\mathrm{0\; 0\; 2.5})`$, which is consistent with Ref. These azimuthal dependences strongly indicate the existence of an anisotropic $`f`$ electron distribution and the associated AFQ ordering below $`T_Q`$. Figure 4 shows the order parameters measured at $`(\mathrm{0\; 0\; 2.5})`$ and $`(\mathrm{1\; 0\; 2})`$ as well as the spontaneous strain $`\mathrm{\Delta }c`$ estimated from the $`(\mathrm{0\; 0\; 2})`$ peak position. The order parameters behave as continuous second-order transitions and can be fitted to the power laws indicated in the figures. The transition temperatures thus obtained are in good agreement with the values reported by Yamauchi et al. The critical exponents $`\beta `$ obtained for the AFQ ordering and the AFM ordering are both about 0.2. The spontaneous strain $`\mathrm{\Delta }c`$ has a $`\beta `$ value close to 0.5, about twice that of the AFM ordering, indicating that $`\mathrm{\Delta }c`$ is a secondary order parameter quadratically coupled to the AFM ordering.
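The power-law fits mentioned above amount to extracting $`\beta `$ from $`I(T)\propto (1-T/T_c)^{2\beta }`$. A minimal sketch on synthetic data (the numbers below are illustrative, not the measured intensities) shows the standard linearized fit with $`T_c`$ held fixed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic order-parameter data: intensity I ~ (1 - T/Tc)^{2 beta} below Tc,
# the power-law form used for the fits in the text. Illustrative values only.
Tc, beta_true = 24.7, 0.20
T = np.linspace(0.5, 0.98 * Tc, 40)
I = (1 - T / Tc) ** (2 * beta_true) * (1 + 0.01 * rng.standard_normal(T.size))

# With Tc held fixed, log I = const + 2 beta log(1 - T/Tc): a linear fit.
slope, _ = np.polyfit(np.log(1 - T / Tc), np.log(I), 1)
beta_fit = slope / 2
print(beta_fit)   # ~0.20 recovered from the noisy data
```

In practice $`T_c`$ is usually fitted as well, which correlates strongly with $`\beta `$; that is one reason the text calls for more statistics before quoting precise exponents.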
For quantitative discussions, we need more statistics, which will not only give more precise $`\beta `$ values but also provide more information, such as the correlation lengths above $`T_N`$ and $`T_Q`$. Note that no anomaly was found in $`\mathrm{\Delta }c`$ at $`T_Q`$, which implies that the quadrupole ordering has very weak coupling, if any, to the lattice of DyB<sub>2</sub>C<sub>2</sub>. Since the superlattice peak at $`(\mathrm{1\; 0\; 2.5})`$ appearing below $`T_Q`$ is non-resonant and has $`\sigma \sigma ^{\prime }`$ polarization, it might be ascribed to an atomic displacement. As a simple model for an order-of-magnitude estimate, let us assume that the Dy ions are displaced along $`c`$ and that the directions alternate between nearest neighbors. From the intensity ratio below Dy $`L_{III}`$, $`I(\mathrm{1\; 0\; 2.5})/I(\mathrm{0\; 0\; 2})=4.0\times 10^{-5}`$, we obtain the displacement $`\delta =0.0014`$ Å (0.00040 $`c`$). With such a small atomic displacement, the change of the lattice constant would not be detected by the present x-ray diffraction, whose resolution is $`\mathrm{\Delta }c/c\sim 10^{-4}`$ (see Fig. 4(c)). Recently, Benfatto et al. theoretically reexamined the resonant x-ray scattering study of LaMnO<sub>3</sub> and argued that the resonant signal is mostly due to the JT distortion resulting in anisotropic Mn-O bond lengths. This is in contrast with another theoretical description by Ishihara and Maekawa, who proposed a mechanism based upon the Coulomb interaction between the $`4p`$ conduction band and the ordered $`3d`$ orbitals. In DyB<sub>2</sub>C<sub>2</sub>, however, it is very unlikely that a lattice distortion results in the observed anisotropic electron distribution. Further study of the $`(\mathrm{1\; 0\; 2.5})`$ reflection is still required, including its azimuthal dependence, as well as consideration of other possibilities such as an asphericity of the atomic electron density due to the AFQ ordering. Let us briefly discuss the CEF of DyB<sub>2</sub>C<sub>2</sub>.
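As an aside, the displacement estimate above can be reproduced with a schematic one-dimensional kinematic model. Both the two-plane structure factor along $`c`$ and the value of the $`c`$ lattice constant used below are assumptions made for illustration:

```python
import numpy as np

# Schematic 1D kinematic check: Dy planes at z = n*c, alternately shifted by
# +delta/-delta along c (period 2c). In the doubled cell the half-integer
# superlattice reflection gives |F(l=2.5)|^2 / |F(l=2)|^2 ~ sin^2(5*pi*delta/c).
ratio = 4.0e-5                     # I(1 0 2.5)/I(0 0 2), from the text

delta_over_c = np.arcsin(np.sqrt(ratio)) / (5 * np.pi)
print(delta_over_c)                # ~4.0e-4, i.e. 0.00040 c as quoted

c = 3.5                            # approximate c-axis in angstrom (assumed)
print(delta_over_c * c)            # ~0.0014 angstrom, as quoted
```

Because the superlattice intensity grows as $`\delta ^2`$ for small displacements, a $`4\times 10^{-5}`$ intensity ratio translates into a sub-milliangstrom displacement, well below the diffractometer's $`\mathrm{\Delta }c/c`$ resolution.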
Using the equivalent operator formalism, we have constructed a point charge model, which shows that the ground ($`J_z=\pm \frac{1}{2}`$) and first excited ($`J_z=\pm \frac{3}{2}`$) Kramers doublets are nearly degenerate and well separated from the other excited states. These results confirm the existence of a pseudo-quartet ground state in which the orbital degrees of freedom remain, and are consistent with the strong planar magnetic anisotropy which aligns the magnetic moments within the $`c`$ plane. Details of the calculation will be published elsewhere. The present study unambiguously shows that the resonant scattering at $`q_{Q1}`$ corresponds to the AFQ ordering. However, the mechanism yielding such resonant scattering is not completely understood. When allowed, the dipole transition usually overwhelms the quadrupole transition in resonant scattering. Similar to a $`d`$ orbital angular momentum, a quadrupole moment has five elements, i.e., $`Q_m^{(2)}`$ ($`m=-2,-1,0,1,2`$), where $`Q_m^{(l)}=\int \rho (𝐫)r^l\sqrt{4\pi /(2l+1)}Y_{lm}(\theta ,\varphi )d𝐫`$ in polar coordinates. In the CEF, the five elements are classified into particular irreducible representations, which can be conveniently expressed by the Stevens equivalent operators. In the cubic $`O_h`$ symmetry, for example, they are proportional to $`O_2^0=\{3J_z^2-J(J+1)\}/\sqrt{3}`$ and $`O_2^2=J_x^2-J_y^2`$ in the $`\mathrm{\Gamma }_3`$ ($`e_g`$) symmetry, and $`O_{xy}=J_xJ_y+J_yJ_x`$, $`O_{yz}`$, and $`O_{zx}`$ in the $`\mathrm{\Gamma }_5`$ ($`t_{2g}`$) symmetry. The actual quadrupole moments are obtained by calculating the expectation values of these operators. Through a strong $`c`$-$`f`$ coupling between the $`5d`$ conduction band and the localized $`4f`$ orbitals, the AFQ ordering would be projected onto the $`5d`$ orbital states. To completely understand the present experimental results, it is necessary to establish a much more detailed scattering mechanism in the proper CEF symmetry.
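The operator content can be made concrete with a few lines of linear algebra. The sketch below builds the $`J=15/2`$ angular momentum matrices and the $`\mathrm{\Gamma }_3`$ quadrupole operators quoted above; it is a toy illustration of the Stevens-operator formalism only, not a reproduction of the paper's point-charge CEF calculation:

```python
import numpy as np

# Angular-momentum matrices for J = 15/2 (the Dy3+ Hund's-rule multiplet)
J = 15 / 2
m = np.arange(-J, J + 1)                 # Jz eigenvalues, -15/2 ... 15/2
dim = m.size

Jz = np.diag(m).astype(complex)
# <m+1|J+|m> = sqrt(J(J+1) - m(m+1)) on the subdiagonal (ascending-m basis)
Jp = np.diag(np.sqrt(J * (J + 1) - m[:-1] * (m[:-1] + 1)), -1).astype(complex)
Jm = Jp.conj().T
Jx = (Jp + Jm) / 2
Jy = (Jp - Jm) / 2j

# Quadrupole (Stevens) operators of Gamma_3 symmetry
O20 = (3 * Jz @ Jz - J * (J + 1) * np.eye(dim)) / np.sqrt(3)
O22 = Jx @ Jx - Jy @ Jy

# Sanity checks: su(2) algebra and tracelessness of the quadrupoles
assert np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz)
assert abs(np.trace(O20)) < 1e-10 and abs(np.trace(O22)) < 1e-10

# <O20> distinguishes |Jz| = 1/2 from |Jz| = 15/2 charge distributions
i_half = np.argmin(np.abs(m - 0.5))
i_max = np.argmin(np.abs(m - 7.5))
print(O20[i_half, i_half].real, O20[i_max, i_max].real)
```

The opposite signs of $`O_2^0`$ for the $`|J_z|=1/2`$ and $`|J_z|=15/2`$ states illustrate how different Kramers doublets carry distinguishable quadrupolar charge shapes.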
In conclusion, the present resonant ATS x-ray scattering study has directly shown the existence of long range AFQ ordering in DyB<sub>2</sub>C<sub>2</sub>, which had been theoretically conjectured in some $`f`$ electron systems, and has provided the order parameter as well as information on the final polarization and azimuthal dependence, which are directly linked to the type of AFQ moments. The authors are indebted to H. Yamauchi, H. Onodera and Y. Yamaguchi for sharing their experimental results and sample preparation techniques prior to publication. We also acknowledge N. Kimura for help with the single crystal growth and T. Arima and S. Ishihara for crucial discussions. This work was supported by Core Research for Evolutional Science and Technology (CREST).
# Estimating 𝜀'/𝜀 in the Standard Model

(June 22, 1999)

## Abstract

I discuss the comparison of the current theoretical calculations of $`\epsilon ^{\prime }/\epsilon `$ with the experimental data. Lacking reliable “first principle” calculations, phenomenological approaches may help in understanding correlations among different contributions and the available experimental data. In particular, in the chiral quark model approach the same dynamics which underlies the $`\mathrm{\Delta }I=1/2`$ selection rule in kaon decays appears to enhance the $`K\to \pi \pi `$ matrix element of the $`Q_6`$ gluonic penguin, thus driving $`\epsilon ^{\prime }/\epsilon `$ into the range of the recent experimental measurements.

The results announced by the KTeV Collaboration last February and by the NA48 Collaboration at this conference (albeit preliminary) have marked a great experimental achievement, establishing, 35 years after the discovery of CP violation in the neutral kaon system, the existence of a much smaller violation acting directly in the decays. While the Standard Model (SM) of strong and electroweak interactions provides an economical and elegant understanding of the presence of indirect ($`\epsilon `$) and direct ($`\epsilon ^{\prime }`$) CP violation in terms of a single phase, the detailed calculation of the size of these effects implies mastering strong interactions at a scale where perturbative methods break down. In addition, CP violation in $`K\to \pi \pi `$ decays is the result of a destructive interference between two sets of contributions (for a suggestive picture of the gluonic and electroweak penguin diagrams see the talk by Buras at this conference), thus potentially inflating by up to an order of magnitude the uncertainties on the individual hadronic matrix elements of the effective four-quark operators. In Fig. 1, taken from Ref. , the comparison of the theoretical predictions with the experimental results available before the Kaon 99 conference is summarized.
The gray horizontal band shows the two-sigma experimental range obtained by averaging the recent KTeV result with the older NA31 and E731 data, corresponding to $`\epsilon ^{\prime }/\epsilon =(21.8\pm 3)\times 10^{-4}`$. The vertical lines show the ranges of the most recent published theoretical predictions, identified with the cities where most of the group members reside. The figure does not include two new results announced at this conference: on the experimental side, the first NA48 measurement and, on the theoretical side, the new prediction based on the $`1/N`$ expansion, which I will refer to in the following as the Dortmund group estimate. The inclusion of the NA48 result $`\epsilon ^{\prime }/\epsilon =(18.5\pm 7.3)\times 10^{-4}`$ lowers the experimental average shown in Fig. 1 by about 4%. Looking at Fig. 1 two comments are in order. On the one hand, we should appreciate the fact that, within the uncertainties of the theoretical calculations, there is indeed an overall agreement among the different predictions. All of them agree on the presence of a non-vanishing positive effect in the SM. On the other hand, the central values of the München (phenomenological $`1/N`$) and Rome (lattice) calculations are a factor of 3 to 5 lower than the averaged experimental central value. In spite of the complexity of the calculations, I would like to emphasize that the difference between the predictions of the two estimates above and that of the Trieste group, based on the Chiral Quark Model ($`\chi `$QM), is mainly due to the different size of the hadronic matrix element of the gluonic penguin $`Q_6`$. In addition, I will show that the enhancement of the $`Q_6`$ matrix element in the $`\chi `$QM approach can be simply understood in terms of chiral dynamics and, in this respect, it is related to the phenomenological embedding of the $`\mathrm{\Delta }I=1/2`$ selection rule.
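For orientation, the kind of experimental averaging quoted above can be sketched with a standard inverse-variance combination. Treating the pre-NA48 average as a single datum is a simplification (the roughly 4% shift quoted in the text comes from re-averaging the individual measurements, possibly with error rescaling), so this only indicates the direction and rough size of the pull from NA48:

```python
# Naive inverse-variance combination of epsilon'/epsilon numbers quoted in
# the text, in units of 1e-4. Illustrative only: it ignores the error
# rescaling applied when individual measurements are mutually inconsistent.
def combine(measurements):
    weights = [1.0 / sigma**2 for _, sigma in measurements]
    mean = sum(w * x for w, (x, _) in zip(weights, measurements)) / sum(weights)
    sigma = sum(weights) ** -0.5
    return mean, sigma

pre_na48 = (21.8, 3.0)    # KTeV + NA31 + E731 average, from the text
na48 = (18.5, 7.3)        # first NA48 measurement, from the text

mean, sigma = combine([pre_na48, na48])
print(mean, sigma)        # central value moves down from 21.8
```

Because NA48's uncertainty is more than twice that of the existing average, it enters with a small weight and the combined central value moves only modestly downward.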
The $`\mathrm{\Delta }I=1/2`$ selection rule in $`K\to \pi \pi `$ decays has been known for some 40 years; it states the fact that kaons are 400 times more likely to decay into the isospin-zero two-pion state than into the isospin-two component. This rule is not justified by any symmetry consideration and, although it is common understanding that its explanation must be rooted in the dynamics of strong interactions, there is to date no derivation of this effect from first-principles QCD. As summarized by Martinelli at this conference, lattice calculations cannot at present provide us with reliable estimates of the $`I=0`$ penguin operators relevant to $`\epsilon ^{\prime }/\epsilon `$, nor of the $`I=0`$ components of the hadronic matrix elements of the tree-level current-current operators (penguin contractions), which are relevant for the $`\mathrm{\Delta }I=1/2`$ selection rule. In the München approach the $`\mathrm{\Delta }I=1/2`$ rule is used in order to determine phenomenologically the matrix elements of $`Q_{1,2}`$ and, via operatorial relations, some of the matrix elements of the left-handed penguins. Unfortunately, the approach does not allow for a phenomenological determination of the matrix elements of the penguin operators which are most relevant for $`\epsilon ^{\prime }/\epsilon `$, namely the gluonic penguin $`Q_6`$ and the electroweak penguin $`Q_8`$. Values in the ballpark of the leading $`1/N`$ estimate are assumed for these matrix elements, taking also into account that all present approaches show a suppression of $`Q_8`$ with respect to its vacuum saturation approximation (VSA). In the $`\chi `$QM approach, the hadronic matrix elements can be computed as an expansion in momenta in terms of three parameters: the constituent quark mass, the quark condensate and the gluon condensate. The Trieste group has computed the $`K\to \pi \pi `$ matrix elements of the $`\mathrm{\Delta }S=1,2`$ effective lagrangian up to $`O(p^4/N)`$ in the chiral and $`1/N`$ expansions.
Hadronic matrix elements and short distance Wilson coefficients are matched at a scale of $`0.8`$ GeV, a reasonable compromise between the ranges of validity of perturbation theory and of the chiral lagrangian. By requiring the $`\mathrm{\Delta }I=1/2`$ rule to be reproduced within a 20% uncertainty, one obtains a phenomenological determination of the three basic parameters of the model. This step is needed in order to make the model predictive, since there is no a priori argument for the consistency of the matching procedure. As a matter of fact, all computed observables turn out to be very weakly scale dependent in a few-hundred-MeV range around the matching scale. Fig. 2 shows an anatomy of the (model dependent) contributions which lead, in the Trieste approach, to reproducing the $`\mathrm{\Delta }I=1/2`$ selection rule. Point (1) represents the result obtained by neglecting QCD and taking the factorized matrix element for the tree-level operator $`Q_2`$, which is the only one present. The ratio $`A_0/A_2`$ is found equal to $`\sqrt{2}`$: a long way from the experimental point (8). Step (2) includes the effects of perturbative QCD renormalization on the operators $`Q_{1,2}`$. Step (3) shows the effect of including the gluonic penguin operators. Electroweak penguins are numerically negligible for the CP conserving amplitudes and are responsible for the very small shift in the $`A_2`$ direction. Therefore, perturbative QCD and factorization lead us from (1) to (4). Non-factorizable gluon-condensate corrections, a crucial model dependent effect, enter at the leading order in the chiral expansion, leading to a substantial reduction of the $`A_2`$ amplitude (5), as first observed by Pich and de Rafael.
Moving the analysis to $`O(p^4)`$, the chiral loop corrections, computed on the LO chiral lagrangian via dimensional regularization and minimal subtraction, lead us from (5) to (6), while the corresponding $`O(p^4)`$ tree level counterterms calculated in the $`\chi `$QM lead to point (7). Finally, step (8) represents the inclusion of $`\pi `$-$`\eta `$-$`\eta ^{\prime }`$ isospin breaking effects. This model dependent anatomy shows the relevance of non-factorizable contributions and higher-order chiral corrections. The suggestion that chiral dynamics may be relevant to the understanding of the $`\mathrm{\Delta }I=1/2`$ selection rule goes back to the work of Bardeen, Buras and Gerard in the $`1/N`$ framework using a cutoff regularization. This approach has been recently revived and improved by the Dortmund group, with particular attention to the matching procedure. A pattern similar to that shown in Fig. 2 for the chiral loop corrections to $`A_0`$ and $`A_2`$ was previously obtained in an NLO chiral lagrangian analysis, using dimensional regularization, by Missimer, Kambor and Wyler. The $`\chi `$QM approach allows us to further investigate the relevance of chiral corrections for each of the effective quark operators of the $`\mathrm{\Delta }S=1`$ lagrangian. Fig. 3 shows the contributions to the CP conserving amplitude $`A_0`$ of the relevant operators, providing us with a finer (model dependent) anatomy of the NLO chiral corrections. From Fig. 3 we notice that, because of the chiral loop enhancement, the $`Q_6`$ contribution to $`A_0`$ is about 20% of the total amplitude. As we shall see, the $`O(p^4)`$ enhancement of the $`Q_6`$ matrix element is what drives $`\epsilon ^{\prime }/\epsilon `$ in the $`\chi `$QM to the $`10^{-3}`$ ballpark. A commonly used way of comparing the estimates of hadronic matrix elements in different approaches is via the so-called $`B`$ factors, which represent the ratio of the model matrix elements to the corresponding VSA values.
However, care must be taken in the comparison of different models due to the scale dependence of the $`B`$’s and the values used by different groups for the parameters that enter the VSA expressions. Table 1 reports the $`B`$ factors used for the predictions shown in Fig. 1. An alternative pictorial and synthetic way of analyzing different outcomes for $`\epsilon ^{\prime }/\epsilon `$ is shown in Fig. 4, where a “comparative anatomy” of the Trieste and München estimates is presented. From the inspection of the various contributions it is apparent that the final difference in the central value of $`\epsilon ^{\prime }/\epsilon `$ is almost entirely due to the difference in the $`Q_6`$ component. To a lesser extent, a larger (negative) contribution of the $`Q_4`$ penguin in the München calculation goes in the direction of making $`\epsilon ^{\prime }/\epsilon `$ smaller. The difference in the $`Q_4`$ contribution is easily understood. In the München estimate the $`Q_4`$ matrix element is obtained using the operatorial relation $`\langle Q_4\rangle =\langle Q_2\rangle _0-\langle Q_1\rangle _0+\langle Q_3\rangle `$, together with the knowledge acquired on $`\langle Q_{1,2}\rangle _0`$ from fitting the $`\mathrm{\Delta }I=1/2`$ selection rule at the charm scale. As a matter of fact, the phenomenological fit of the $`\mathrm{\Delta }I=1/2`$ rule requires a large value of $`\langle Q_2\rangle _0-\langle Q_1\rangle _0`$ (which deviates by up to an order of magnitude from the naive VSA estimate). The assumption that $`Q_3`$ is given by its VSA value leads, in the München analysis, to a large value of $`\langle Q_4\rangle `$: about 5 times larger than its VSA value. On the other hand, in the $`\chi `$QM calculation $`Q_3`$ turns out to have a sign opposite to its VSA expression, in such a way that a smaller value for $`Q_4`$ is obtained. A lattice calculation of all gluonic penguins is definitely needed to disentangle such patterns. At any rate, the main difference between the $`\epsilon ^{\prime }/\epsilon `$ central values obtained in the Trieste and München calculations rests in the $`Q_6`$ matrix element. 
The nature of the difference is apparent in Fig. 5 where the various penguin contributions to $`\epsilon ^{\prime }/\epsilon `$ in the Trieste analysis are further separated into LO (dark histograms) and NLO components—chiral loops (gray histograms) and $`O(p^4)`$ tree-level counterterms (dark histograms). It is clear that chiral loop dynamics plays a subleading role in the electroweak penguin sector ($`Q_{8-10}`$) while enhancing the $`Q_6`$ matrix element by 60%. At $`O(p^2)`$ the $`\chi `$QM prediction for $`\epsilon ^{\prime }/\epsilon `$ would just overlap with the München estimate once the small effect of the $`Q_4`$ operator is taken into account. The $`\chi `$QM analysis shows that the same dynamics that is relevant to the reproduction of the CP conserving $`A_0`$ amplitude (Fig. 3) is also at work in the CP violating sector (gluonic penguins). In order to ascertain whether the model features represent real QCD effects one should wait for future improvements in lattice calculations . On the other hand, indications for such a dynamics arise from current $`1/N`$ calculations . The idea of a connection between the $`\mathrm{\Delta }I=1/2`$ selection rule and $`\epsilon ^{\prime }/\epsilon `$ is certainly not new , although at the GeV scale, where one can trust perturbative QCD, penguins are far from providing the dominant contribution to the CP conserving amplitudes. Before concluding, I would like to make a comment on the role of the strange quark mass in the $`\chi `$QM calculation of $`\epsilon ^{\prime }/\epsilon `$ : in such an approach the basic parameter that enters the relevant penguin matrix elements is the quark condensate, and the explicit dependence on $`m_s`$ appears at NLO in the chiral expansion. Varying the central value of $`\overline{m}_s(m_c)`$ from 150 MeV to 130 MeV affects $`Q_6`$ and $`Q_8`$ at the few percent level. A more sensitive quantity is $`\widehat{B}_K`$, which parametrizes the $`\overline{K}K`$ matrix element. 
This parameter, which equals unity in the VSA, turns out to be quite sensitive to $`SU(3)`$ breaking effects. Taking $`\overline{m}_s(m_c)=130\pm 20`$ MeV, $`\mathrm{\Lambda }_{\mathrm{QCD}}^{(4)}=340\pm 40`$ MeV and varying all relevant parameters, the updated $`\chi `$QM result is: $$\widehat{B}_K=1.0\pm 0.2,$$ to be compared with the value used in the 1997 analysis (Table 1). This increases the previous determination of $`\text{Im}\lambda _t`$ by roughly 10% and correspondingly $`\epsilon ^{\prime }/\epsilon `$ (an updated analysis of $`\epsilon ^{\prime }/\epsilon `$ in the $`\chi `$QM with a gaussian treatment of experimental inputs is in progress). I conclude by summarizing the relevant remarks: * Phenomenological approaches which embed the $`\mathrm{\Delta }I=1/2`$ selection rule in $`K\to \pi \pi `$ decays generally agree with present lattice calculations in the pattern and size of the $`I=2`$ components of the $`\mathrm{\Delta }S=1`$ hadronic matrix elements. * Concerning the $`I=0`$ matrix elements, where lattice calculations suffer from large systematic uncertainties, the $`\mathrm{\Delta }I=1/2`$ rule forces upon us large deviations from the naive VSA (see Table 1). * In the Chiral Quark Model calculation, the fit of the CP conserving $`K\to \pi \pi `$ amplitudes, which determines the three basic parameters of the model, feeds down to the penguin sectors, showing a substantial enhancement of the $`Q_6`$ matrix element, such that $`B_6/B_8^{(2)}\simeq 2`$. This is what drives the $`\epsilon ^{\prime }/\epsilon `$ prediction to the $`10^{-3}`$ ballpark. * Up to 40% of the present uncertainty in the $`\epsilon ^{\prime }/\epsilon `$ prediction arises from the uncertainty in the CKM elements $`\text{Im}(V_{ts}^{\ast }V_{td})`$, which is presently controlled by the $`\mathrm{\Delta }S=2`$ parameter $`B_K`$. A better determination of the unitarity triangle from B-physics is expected from the B-factories and hadronic colliders. 
From K-physics, the rare decay $`K_L\to \pi ^0\nu \overline{\nu }`$ gives the cleanest “theoretical” determination of $`\text{Im}\lambda _t`$ . * In spite of clever and interesting new-physics explanations for a large $`\epsilon ^{\prime }/\epsilon `$ , it is premature to interpret the present theoretical-experimental “disagreement” as a signal of physics beyond the SM. Ungauged systematic uncertainties presently affect all theoretical estimates. One should also not forget the long-standing puzzle of the $`\mathrm{\Delta }I=1/2`$ rule: perhaps an “anomalously” large $`\epsilon ^{\prime }/\epsilon `$ ($`B_6/B_8^{(2)}\simeq 2`$) is just the CP violating projection of $`A_0/A_2\simeq 20`$. My appreciation goes to the Organizers of the Kaon 99 Conference for assembling such a stimulating scientific programme and for the efficient logistic organization. Finally, I thank J.O. Eeg and M. Fabbrichesi for a most enjoyable and fruitful collaboration.
# Phase transition from a 𝑑_{𝑥²-𝑦²} to 𝑑_{𝑥²-𝑦²}+𝑖𝑑_{𝑥𝑦} superconductor (to appear in Physica C) ## Abstract The temperature dependencies of the specific heat and spin susceptibility of a coupled $`d_{x^2-y^2}+id_{xy}`$ superconductor in the presence of a weak $`d_{xy}`$ component are investigated in the tight-binding model (1) on a square lattice and (2) on a lattice with orthorhombic distortion. As the temperature is lowered past the critical temperature $`T_c`$, first a less ordered $`d_{x^2-y^2}`$ superconductor is created, which changes to a more ordered $`d_{x^2-y^2}+id_{xy}`$ superconductor at $`T_{c1}(<T_c)`$. This manifests itself in two second-order phase transitions identified by two jumps in the specific heat at $`T_c`$ and $`T_{c1}`$. The temperature dependencies of the superconducting observables exhibit a change from power-law to exponential behavior as temperature is lowered below $`T_{c1}`$ and confirm the new phase transition. PACS number(s): 74.20.Fg, 74.62.-c, 74.25.Bt Keywords: $`d_{x^2-y^2}+id_{xy}`$-wave superconductor, specific heat, susceptibility. The unconventional high-$`T_c`$ superconductors with a high critical temperature $`T_c`$ have a complicated lattice structure with extended and/or mixed symmetry for the order parameter . For many of these high-$`T_c`$ materials, the order parameter exhibits anisotropic behavior. The detailed nature of the anisotropy was thought to be typical of an extended $`s`$-wave, a pure $`d`$-wave, or a mixed $`(s+\mathrm{exp}(i\theta )d)`$-wave type. Some high-$`T_c`$ materials have singlet $`d`$-wave Cooper pairs and the order parameter has $`d_{x^2-y^2}`$ symmetry in two dimensions , which has been supported by recent studies of the temperature dependence of some superconducting observables . In some cases there is the signature of an extended $`s`$\- or $`d`$-wave symmetry. The possibility of a mixed angular-momentum-state symmetry was suggested some time ago by Ruckenstein et al. and Kotliar . 
There is experimental evidence based on the Josephson supercurrent for tunneling between Pb and YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7</sub> (YBCO) , and on photoemission studies on Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+x</sub> among others , which is difficult to reconcile employing a pure $`s`$\- or $`d`$-wave order parameter. These observations suggest that a mixed \[$`s+\mathrm{exp}(i\theta )d`$\] symmetry is applicable in these cases . More recently Krishana et al. reported a phase transition in the high-$`T_c`$ superconductor Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub> induced by a magnetic field from a study of the thermal conductivity as a function of temperature and magnetic field. Laughlin has suggested that the new superconducting phase is the time-reversal symmetry breaking $`d_{x^2-y^2}+id_{xy}`$ state. A similar conclusion has been reached by Ghosh in a recent work from a study of the temperature dependence of thermal conductivity. This has led to the possibility of the transition to a $`d_{x^2-y^2}+id_{xy}`$ phase from a pure $`d_{x^2-y^2}`$ phase of Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub>. From a study of a vortex in a $`d`$-wave superconductor using a self-consistent Bogoliubov-de Gennes formalism, Franz and Tešanović also predicted the possibility of the creation of a $`d_{x^2-y^2}+id_{xy}`$ superconducting state. Although the creation of the mixed superconducting state $`d_{x^2-y^2}+id_{xy}`$ is speculative, Franz and Tešanović conclude that a dramatic change should be observed (in the observables of the superconductor) as the superconductor undergoes a phase transition from a $`d_{x^2-y^2}`$ state to a $`d_{x^2-y^2}+id_{xy}`$ state. In this work we study the effect of this phase transition on the superconducting specific heat and spin susceptibility in the absence of a magnetic field. 
The general trend of the observables under the $`d_{x^2-y^2}`$\- to $`d_{x^2-y^2}+id_{xy}`$-wave phase transition of the superconductor, as studied here, is expected to be independent of the external magnetic field. We recall that there have been several studies on the formation of a mixed $`(s+id)`$-wave superconducting state from a pure $`d`$-wave state . First we study the temperature dependence of the order parameter of the mixed $`d_{x^2-y^2}+id_{xy}`$ state within the Bardeen-Cooper-Schrieffer (BCS) model . The BCS model for a mixed $`d_{x^2-y^2}+id_{xy}`$ state becomes a coupled set of equations. The ratio of the strengths of the $`d_{x^2-y^2}`$\- and $`d_{xy}`$-wave interactions should lie in a narrow region in order to have coexisting $`d_{x^2-y^2}`$\- and $`d_{xy}`$-wave phases in the case of $`d_{x^2-y^2}+id_{xy}`$ symmetry. As the $`d_{x^2-y^2}`$-wave ($`d_{xy}`$-wave) interaction becomes stronger, the $`d_{xy}`$-wave ($`d_{x^2-y^2}`$-wave) component of the order parameter quickly decreases and disappears, and a pure $`d_{x^2-y^2}`$-wave ($`d_{xy}`$-wave) state emerges. The order parameter of each of the $`d_{x^2-y^2}`$ and $`d_{xy}`$ states has nodes on the Fermi surface and may change sign along the Fermi surface. The $`s`$-wave order parameter does not have this property. Because of this, many $`d`$-wave superconducting observables have a power-law dependence on temperature, whereas the $`s`$-wave observables exhibit an exponential dependence. We find that in the present coupled $`d_{x^2-y^2}+id_{xy}`$ state the order parameter does not exhibit nodes or sign changes along the Fermi surface and shows a typical $`s`$-wave-like behavior. Consequently, the observables in the coupled $`d_{x^2-y^2}+id_{xy}`$ state do not exhibit the typical $`d`$-wave power-law dependence on temperature, but rather a typical $`s`$-wave exponential dependence. 
For a weaker $`d_{xy}`$-wave admixture, in the present study we establish in the two-dimensional tight-binding model (1) on a square lattice and (2) on a lattice with orthorhombic distortion another second-order phase transition at $`T=T_{c1}<T_c`$, where the superconducting phase changes from a pure $`d_{x^2-y^2}`$-wave state for $`T>T_{c1}`$ to a mixed $`d_{x^2-y^2}+id_{xy}`$-wave state for $`T<T_{c1}`$. The specific heat exhibits two jumps at the transition points $`T=T_{c1}`$ and $`T=T_c`$. The temperature dependencies of the superconducting specific heat and spin susceptibility change drastically at $`T=T_{c1}`$ from power-law behavior for $`T>T_{c1}`$ to exponential behavior for $`T<T_{c1}`$. We find that the observables for the normal state are closer to those for a pure superconducting $`d_{xy}`$ state than to those for a pure superconducting $`d_{x^2-y^2}`$ state. Consequently, superconductivity in the $`d_{x^2-y^2}`$ wave is more pronounced than in the pure $`d_{xy}`$ wave. Hence, as temperature decreases, the system passes from the normal state to a “less” superconducting $`d_{x^2-y^2}`$-wave state at $`T=T_c`$ and then to a “more” superconducting state $`d_{x^2-y^2}+id_{xy}`$ with dominant $`s`$-wave behavior at $`T=T_{c1}`$, signaling a second phase transition. The profound change in the nature of the superconducting state at $`T=T_{c1}`$ becomes apparent from a study of the entropy. At a given temperature the entropy for the normal state is larger than that for all superconducting states, signaling an increase in order in the superconducting state. In the case of the present $`d_{x^2-y^2}+id_{xy}`$ state we find that as the temperature is lowered past $`T_{c1}`$, the entropy of the superconducting $`d_{x^2-y^2}+id_{xy}`$ state decreases very rapidly (not shown explicitly in this work), indicating the appearance of a more ordered superconducting phase and a second phase transition. 
We base the present study on the two-dimensional tight-binding model, which we describe below. This model is sufficiently general for considering mixed angular momentum states, with or without orthorhombic distortion, employing nearest and second-nearest-neighbor hopping integrals. The effective interaction in this case can be written as $`V_{\mathrm{𝐤𝐪}}`$ $`=`$ $`-V_1(\mathrm{cos}k_x-\beta \mathrm{cos}k_y)(\mathrm{cos}q_x-\beta \mathrm{cos}q_y)`$ (1) $`-`$ $`V_2(\mathrm{sin}k_x\mathrm{sin}k_y)(\mathrm{sin}q_x\mathrm{sin}q_y).`$ (2) Here $`V_1`$ and $`V_2`$ are the couplings of the effective $`d_{x^2-y^2}`$\- and $`d_{xy}`$-wave interactions, respectively. As we shall consider Cooper pairing and subsequent BCS condensation in both these waves, the constants $`V_1`$ and $`V_2`$ will be taken to be positive, corresponding to attractive interactions. In this case the quasiparticle dispersion relation is given by $$ϵ_𝐤=-2t[\mathrm{cos}k_x+\beta \mathrm{cos}k_y-\gamma \mathrm{cos}k_x\mathrm{cos}k_y],$$ (3) where $`t`$ and $`\beta t`$ are the nearest-neighbor hopping integrals along the in-plane $`a`$ and $`b`$ axes, respectively, and $`\gamma t/2`$ is the second-nearest-neighbor hopping integral. We consider the weak-coupling BCS model in two dimensions with $`d_{x^2-y^2}+id_{xy}`$ symmetry. At a finite $`T`$, one has the following BCS equation $`\mathrm{\Delta }_𝐤`$ $`=`$ $`-{\displaystyle \underset{𝐪}{\sum }}V_{\mathrm{𝐤𝐪}}{\displaystyle \frac{\mathrm{\Delta }_𝐪}{2E_𝐪}}\mathrm{tanh}{\displaystyle \frac{E_𝐪}{2k_BT}}`$ (4) with $`E_𝐪=[(ϵ_𝐪-E_F)^2+|\mathrm{\Delta }_𝐪|^2]^{1/2},`$ where $`E_F`$ is the Fermi energy and $`k_B`$ the Boltzmann constant. The order parameter $`\mathrm{\Delta }_𝐪`$ has the following anisotropic form: $`\mathrm{\Delta }_𝐪=\mathrm{\Delta }_1(\mathrm{cos}q_x-\beta \mathrm{cos}q_y)+i\mathrm{\Delta }_2\mathrm{sin}q_x\mathrm{sin}q_y`$. Using the above form of $`\mathrm{\Delta }_𝐪`$ and potential (1), Eq. 
(4) becomes the following coupled set of BCS equations $`\mathrm{\Delta }_1=V_1{\displaystyle \underset{𝐪}{\sum }}{\displaystyle \frac{\mathrm{\Delta }_1(\mathrm{cos}q_x-\beta \mathrm{cos}q_y)^2}{2E_𝐪}}\mathrm{tanh}{\displaystyle \frac{E_𝐪}{2k_BT}}`$ (5) $`\mathrm{\Delta }_2=V_2{\displaystyle \underset{𝐪}{\sum }}{\displaystyle \frac{\mathrm{\Delta }_2(\mathrm{sin}q_x\mathrm{sin}q_y)^2}{2E_𝐪}}\mathrm{tanh}{\displaystyle \frac{E_𝐪}{2k_BT}}`$ (6) where the coupling is introduced through $`E_𝐪`$. In Eqs. (5) and (6) both the interactions $`V_1`$ and $`V_2`$ are assumed to be energy-independent constants for $`|ϵ_𝐪-E_F|<k_BT_D`$ and zero for $`|ϵ_𝐪-E_F|>k_BT_D`$, where $`k_BT_D`$ is the usual Debye cutoff. The specific heat is given by $$C(T)=\frac{2}{T}\underset{𝐪}{\sum }\left(-\frac{\partial f_𝐪}{\partial E_𝐪}\right)\left(E_𝐪^2-\frac{1}{2}T\frac{d|\mathrm{\Delta }_𝐪|^2}{dT}\right)$$ (7) where $`f_𝐪=1/(1+\mathrm{exp}(E_𝐪/k_BT))`$. The spin susceptibility $`\chi `$ is defined by $$\chi (T)=\frac{2\mu _N^2}{T}\underset{𝐪}{\sum }f_𝐪(1-f_𝐪)$$ (8) where $`\mu _N`$ is the nuclear magneton. We solved the coupled set of equations (5) and (6) numerically by the method of iteration and calculated the gaps $`\mathrm{\Delta }_1`$ and $`\mathrm{\Delta }_2`$ at various temperatures for $`T<T_c`$. We have performed calculations (1) on a perfect square lattice and (2) in the presence of an orthorhombic distortion, with Debye cutoff $`k_BT_D=0.02586`$ eV ($`T_D=300`$ K) in both cases. 
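The iterative solution of Eqs. (5) and (6) can be sketched in a few lines. The following is an illustrative reimplementation, not the production code behind the figures: the Brillouin-zone sum is approximated by an average over an $`N\times N`$ momentum grid, and the couplings, cutoff and temperature (in units of $`t`$, with $`k_B=1`$) are arbitrary demonstration values rather than the parameters listed in the text.

```python
import numpy as np

def solve_gaps(V1, V2, t=1.0, beta=1.0, EF=0.0, omega_D=1.0,
               kT=0.01, N=64, tol=1e-9, max_iter=20000):
    """Iterate the coupled gap equations (5)-(6) on an N x N
    Brillouin-zone grid (hbar = k_B = 1, energies in units of t)."""
    k = -np.pi + 2.0 * np.pi * (np.arange(N) + 0.5) / N   # shifted grid
    kx, ky = np.meshgrid(k, k, indexing="ij")
    xi = -2.0 * t * (np.cos(kx) + beta * np.cos(ky)) - EF  # gamma = 0 here
    g1 = np.cos(kx) - beta * np.cos(ky)       # d_{x^2-y^2} form factor
    g2 = np.sin(kx) * np.sin(ky)              # d_{xy} form factor
    shell = np.abs(xi) < omega_D              # Debye cutoff |eps - E_F| < k_B T_D

    D1, D2 = 0.1, 0.1                         # small positive seeds
    for _ in range(max_iter):
        E = np.sqrt(xi**2 + (D1 * g1)**2 + (D2 * g2)**2) + 1e-15
        w = np.where(shell, np.tanh(E / (2.0 * kT)) / (2.0 * E), 0.0)
        D1n = V1 * np.mean(D1 * g1**2 * w)    # RHS of Eq. (5)
        D2n = V2 * np.mean(D2 * g2**2 * w)    # RHS of Eq. (6)
        if abs(D1n - D1) + abs(D2n - D2) < tol:
            return D1n, D2n
        D1, D2 = D1n, D2n
    return D1, D2

# With the d_{xy} channel switched off, only Delta_1 survives:
d1, d2 = solve_gaps(V1=2.0, V2=0.0)
```

Starting from small positive seeds, the map converges monotonically to the stable fixed point; scanning `kT` upward, $`\mathrm{\Delta }_2`$ vanishes first and $`\mathrm{\Delta }_1`$ last, which is the two-transition structure ($`T_{c1}<T_c`$) discussed in the text.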
The parameters for these two cases are the following: (1) Square lattice: (a) $`t=0.2586`$ eV, $`\beta =1`$, $`\gamma =0`$, $`V_1=0.73t`$, and $`V_2=6.8t`$, $`T_c=71`$ K, $`T_{c1}`$ = 28 K; (b) $`t=0.2586`$ eV, $`\beta =1`$, $`\gamma =0`$, $`V_1=0.73t`$, and $`V_2=7.9t`$, $`T_c=71`$ K, $`T_{c1}`$ = 47 K; (2) Orthorhombic distortion: (a) $`t=0.2586`$ eV, $`\beta =0.95`$, and $`\gamma =0`$, $`V_1=0.97t`$, and $`V_2=6.5t`$, $`T_c`$ = 70 K, $`T_{c1}`$ = 25 K; (b) $`t=0.2586`$ eV, $`\beta =0.95`$, and $`\gamma =0`$, $`V_1=0.97t`$, and $`V_2=8.0t`$, $`T_c`$ = 70 K, $`T_{c1}`$ = 52 K. For a very weak $`d_{xy}`$-wave ($`d_{x^2-y^2}`$-wave) coupling the only possible solution corresponds to $`\mathrm{\Delta }_2=0`$ ($`\mathrm{\Delta }_1=0`$). In Figs. 1 and 2 we plot the temperature dependencies of the different $`\mathrm{\Delta }`$’s for the following two sets of $`d_{x^2-y^2}+id_{xy}`$-wave couplings corresponding to models 1 and 2 above (full line: models 1(a) and 2(a); dashed line: models 1(b) and 2(b)), respectively. In both cases the temperature dependence of the $`\mathrm{\Delta }`$’s is very similar. In the coupled $`d_{x^2-y^2}+id_{xy}`$ wave, as temperature is lowered past $`T_c`$, the parameter $`\mathrm{\Delta }_1`$ increases up to $`T=T_{c1}.`$ With further reduction of temperature, the parameter $`\mathrm{\Delta }_2`$ becomes nonzero and begins to increase, and eventually both $`\mathrm{\Delta }_1`$ and $`\mathrm{\Delta }_2`$ first increase and then saturate as temperature tends to zero. Recently, the temperature dependencies of the order parameter of the $`d_{x^2-y^2}+is`$-wave superconducting state have been studied, where at $`T=T_{c1}`$, the transition from the $`d_{x^2-y^2}`$ to the $`d_{x^2-y^2}+is`$ state takes place. In that case, below $`T=T_{c1}`$ the $`d_{x^2-y^2}`$-wave component of the order parameter is suppressed as the $`s`$-wave component becomes nonzero. 
No such suppression of the $`d_{x^2-y^2}`$ wave takes place in this case as the $`d_{xy}`$ component appears. Now we study the temperature dependence of the specific heat in some detail. The different superconducting and normal specific heats are plotted in Figs. 3 and 4 for the square lattice \[models 1(a) and 1(b)\] and orthorhombic distortion \[models 2(a) and 2(b)\], respectively. In both cases the specific heat exhibits two jumps: one at $`T_c`$ and another at $`T_{c1}`$. From Eq. (7) and Figs. 1 and 2 we see that the temperature derivative of $`|\mathrm{\Delta }_𝐪|^2`$ has discontinuities at $`T_c`$ and $`T_{c1}`$ due to the vanishing of $`\mathrm{\Delta }_1`$ and $`\mathrm{\Delta }_2`$, respectively, responsible for the two jumps in the specific heat. For a pure $`d_{x^2-y^2}`$ wave we find that the specific heat exhibits a power-law dependence on temperature. However, the exponent of this dependence varies with temperature. For small $`T`$ the exponent is approximately 2.5, and for large $`T`$ ($`T\to T_c`$) it is nearly 2. In the $`d_{x^2-y^2}+id_{xy}`$ model, for $`T_c>T>T_{c1}`$ the specific heat exhibits $`d_{x^2-y^2}`$-wave power-law behavior; for $`T<T_{c1}`$ the specific heat exhibits an $`s`$-wave-like exponential behavior. For the $`d`$-wave model $`d_{x^2-y^2}`$, $`C_s(T_c)/C_n(T_c)`$ is a function of $`T_c`$ and $`\beta `$. In Figs. 3 and 4 this ratio, for $`T_c`$ = 70 K, is approximately 3 (2.5) for $`\beta =`$ 1 (0.95). In a continuum $`d`$-wave calculation this ratio was 2 in the absence of a van Hove singularity . We also calculated the specific heat for the pure $`d_{xy}`$ case. For the square lattice with $`V_1=9.0`$ we obtain $`T_c=67`$ K and $`C_s(T_c)/C_n(T_c)=1.82`$ and $`C_s(T)/C_n(T_c)\propto (T/T_c)^{1.4}`$ for the whole temperature range. For orthorhombic distortion $`\beta =0.95`$, with $`V_1=9.0`$ we obtain $`T_c=69`$ K and $`C_s(T_c)/C_n(T_c)=1.94`$ and $`C_s(T)/C_n(T_c)\propto (T/T_c)^{1.5}`$ for the whole temperature range. 
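The contrast between the two regimes can be made explicit by evaluating the quasiparticle sum in Eq. (7) with a frozen gap. The sketch below keeps only the $`E_𝐪^2`$ term (the $`d|\mathrm{\Delta }_𝐪|^2/dT`$ term is negligible well below the transitions, where the gaps are nearly flat); the gap amplitudes and temperatures are illustrative numbers in units of $`t`$ with $`k_B=1`$, not the fitted values quoted above.

```python
import numpy as np

def quasiparticle_heat(kT, D1, D2, t=1.0, beta=1.0, EF=0.0, N=200):
    """E^2 term of Eq. (7) per lattice site at fixed gap amplitudes:
    C ~ (2/kT^2) <E^2 f (1 - f)>, with f the Fermi function."""
    k = -np.pi + 2.0 * np.pi * (np.arange(N) + 0.5) / N
    kx, ky = np.meshgrid(k, k, indexing="ij")
    xi = -2.0 * t * (np.cos(kx) + beta * np.cos(ky)) - EF
    gap2 = (D1 * (np.cos(kx) - beta * np.cos(ky)))**2 \
         + (D2 * np.sin(kx) * np.sin(ky))**2
    E = np.sqrt(xi**2 + gap2)
    f = 1.0 / (1.0 + np.exp(np.minimum(E / kT, 500.0)))
    return 2.0 * np.mean(E**2 * f * (1.0 - f)) / kT**2

# Nodal pure d_{x^2-y^2} gap versus fully gapped d+id at the same low T:
C_d   = quasiparticle_heat(0.03, D1=0.5, D2=0.0)   # power-law regime
C_did = quasiparticle_heat(0.03, D1=0.5, D2=0.5)   # exponentially small
```

At half filling the $`d_{x^2-y^2}`$ nodes on the Fermi surface lie at $`(\pm \pi /2,\pm \pi /2)`$, exactly where the $`d_{xy}`$ component is maximal, so the combined gap is nodeless and `C_did` comes out exponentially suppressed relative to `C_d`, mirroring the change from power-law to exponential behavior below $`T_{c1}`$.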
This power-law behavior with temperature in both $`d`$ waves is destroyed in the coupled $`d_{x^2-y^2}+id_{xy}`$ wave, and for $`T<T_{c1}`$ we find an $`s`$-wave-like exponential behavior in both cases. In both uncoupled $`d`$ waves the order parameter $`\mathrm{\Delta }`$ has nodes on the Fermi surface and changes sign; this property is destroyed in the coupled $`d_{x^2-y^2}+id_{xy}`$ wave, where the order parameter has a typical $`s`$-wave behavior. In Fig. 5 we study the jump $`\mathrm{\Delta }C`$ in the specific heat at $`T_c`$ for pure $`s`$\- and $`d`$-wave superconductors as a function of $`T_c`$, where we plot the ratio $`\mathrm{\Delta }C/C_n(T_c)`$ versus $`T_c`$. For a BCS superconductor in the continuum $`\mathrm{\Delta }C/C_n(T_c)`$ = 1.43 (1.0) for $`s`$-wave ($`d`$-wave) pairing, independent of $`T_c`$ . Because of the presence of the van Hove singularity in the present model this ratio increases with $`T_c`$, as can be seen in Fig. 5. For a fixed $`T_c`$, the ratio $`\mathrm{\Delta }C/C_n(T_c)`$ is larger for the square lattice ($`\beta =1`$) than for a lattice with orthorhombic distortion ($`\beta =0.95`$) for both $`s`$ and $`d_{x^2-y^2}`$ waves. However, for a $`d_{xy}`$-wave superconductor $`\mathrm{\Delta }C/C_n(T_c)`$ is smaller for the square lattice ($`\beta =1`$) than for a lattice with orthorhombic distortion ($`\beta =0.95`$). The jump in the $`d_{xy}`$ wave is smaller than that for the $`s`$ and $`d_{x^2-y^2}`$ waves. At $`T_c`$ = 100 K, in the $`s`$-wave ($`d_{x^2-y^2}`$-wave) square lattice case this ratio could be as high as 3.63 (2.92), whereas for the $`d_{xy}`$ wave this ratio at 100 K is 1.15 (1.25) for the square lattice (orthorhombic distortion). Next we study the temperature dependencies of the spin susceptibility for the square lattice and in the presence of orthorhombic distortion, which we exhibit in Figs. 6 and 7, respectively. 
There we plot the results for pure $`d_{x^2-y^2}`$, $`d_{xy}`$, and $`s`$ waves for comparison, in addition to those for models 1(a), 1(b), 2(a) and 2(b). In all cases reported in these figures $`T_c\simeq 70`$ K. For the pure $`d`$-wave cases we obtain power-law dependencies on temperature. The exponent for this power-law scaling is independent of the critical temperature $`T_c`$ but varies from a square lattice to one with an orthorhombic distortion. For the $`d_{x^2-y^2}`$ wave, the exponent for the square lattice (orthorhombic distortion, $`\beta `$ = 0.95) is 2.6 (2.4). For the $`d_{xy}`$ wave, the exponent for the square lattice (orthorhombic distortion, $`\beta `$ = 0.95) is 1.1 (1.6). For the mixed $`d_{x^2-y^2}+id_{xy}`$ wave, $`d_{x^2-y^2}`$-wave power-law behavior is obtained for $`T_c>T>T_{c1}`$. For $`T<T_{c1}`$, one has a typical $`s`$-wave behavior. In conclusion, we have studied ($`d_{x^2-y^2}+id_{xy}`$)-wave superconductivity employing a two-dimensional tight-binding BCS model on a square lattice and also with orthorhombic distortion. We have kept the potential couplings in such a domain that a coupled ($`d_{x^2-y^2}+id_{xy}`$)-wave solution is allowed. For a weaker $`d_{xy}`$ admixture, as temperature is lowered past the first critical temperature $`T_c`$, a weaker (less ordered) superconducting phase is created in the $`d_{x^2-y^2}`$ wave, which changes to a stronger (more ordered) superconducting phase in the ($`d_{x^2-y^2}+id_{xy}`$) wave at $`T_{c1}`$. The $`d_{x^2-y^2}+id_{xy}`$-wave state is similar to an $`s`$-wave-type state with no nodes in the order parameter. The phase transition at $`T_{c1}`$ from a $`d_{x^2-y^2}`$ wave to a $`d_{x^2-y^2}+id_{xy}`$ wave is marked by power-law (exponential) temperature dependencies of the specific heat and spin susceptibility for $`T>T_{c1}`$ ($`T<T_{c1}`$). Similar behavior has been observed for a $`d_{x^2-y^2}+is`$-wave state . 
As the mixed state is $`s`$-wave-like in both cases, from the present study it would not be possible to identify the proper symmetry of the order parameter ($`d_{x^2-y^2}+id_{xy}`$ as opposed to $`d_{x^2-y^2}+is`$), and further phase-sensitive tests of the pairing symmetry in cuprate superconductors are needed. We thank Conselho Nacional de Desenvolvimento Científico e Tecnológico and Fundação de Amparo à Pesquisa do Estado de São Paulo for financial support. Figure Captions: 1. The order parameters $`\mathrm{\Delta }_1`$, $`\mathrm{\Delta }_2`$ in Kelvin at different temperatures for $`d_{x^2-y^2}+id_{xy}`$-wave models 1(a) (full line) and 1(b) (dashed line) for the square lattice. 2. The order parameters $`\mathrm{\Delta }_1`$, $`\mathrm{\Delta }_2`$ in Kelvin at different temperatures for $`d_{x^2-y^2}+id_{xy}`$-wave models 2(a) (full line) and 2(b) (dashed line) in the presence of orthorhombic distortion $`(\beta =0.95)`$. 3. Specific heat ratio $`C(T)/C_n(T_c)`$ versus $`T/T_c`$ for models 1(a) and 1(b) for the square lattice: 1(a) (full line), 1(b) (dashed line), $`d_{xy}`$ (dotted line), normal (dashed-dotted line). In all cases $`T_c\simeq 70`$ K. 4. Specific heat ratio $`C(T)/C_n(T_c)`$ versus $`T/T_c`$ for models 2(a) and 2(b) for orthorhombic distortion: 2(a) (full line), 2(b) (dashed line), $`d_{xy}`$ (dotted line), normal (dashed-dotted line). In all cases $`T_c\simeq 70`$ K. 5. Specific heat jump for different $`T_c`$ for pure $`s`$ and $`d`$ waves: $`s`$ wave (solid line, square lattice), $`s`$ wave (dashed line, orthorhombic distortion), $`d_{x^2-y^2}`$ wave (dashed-dotted line, square lattice), $`d_{x^2-y^2}`$ wave (dashed-double-dotted line, orthorhombic distortion), $`d_{xy}`$ wave (dotted line, square lattice), $`d_{xy}`$ wave (dashed-triple-dotted line, orthorhombic distortion). 6. 
Susceptibility ratio $`\chi (T)/\chi (T_c)`$ for the square lattice versus $`T/T_c`$: pure $`d_{x^2-y^2}`$ wave (solid line), pure $`d_{xy}`$ wave (dashed line), pure $`s`$ wave (dashed-dotted line), model 1(a) (dotted line), model 1(b) (dashed-double-dotted line). In all cases $`T_c\simeq 70`$ K. 7. Susceptibility ratio $`\chi (T)/\chi (T_c)`$ in the presence of orthorhombic distortion versus $`T/T_c`$: pure $`d_{x^2-y^2}`$ wave (solid line), pure $`d_{xy}`$ wave (dashed line), pure $`s`$ wave (dashed-dotted line), model 2(a) (dotted line), model 2(b) (dashed-double-dotted line). In all cases $`T_c\simeq 70`$ K.
# Compressible Anisotropic States around the Half-Filled Landau Levels ## I Introduction Two-dimensional electron systems in a strong magnetic field have been providing various fascinating phenomena for the last two decades of this twentieth century. The integer quantum Hall effect (IQHE) and the fractional quantum Hall effect (FQHE) are observed around integral filling factors and rational filling factors with odd denominators, respectively. These effects are caused by the formation of an incompressible liquid state with a finite energy gap. The IQHE state has a Landau-level energy gap or Zeeman energy gap, and the FQHE state has an energy gap due to the Coulomb interaction. Remarkable progress in the composite fermion (CF) theory for the FQHE shed light on the Fermi-liquid-like state at the half-filled lowest Landau level. Evidence for the Fermi-liquid-like state was obtained in many experiments. It is believed that the state is a compressible isotropic state and has a circular Fermi surface. Recent experiments at the half-filled Landau levels have revealed the anisotropic nature of the compressible states. At the half-filled third and higher Landau levels, highly anisotropic behavior was observed in the magnetoresistance. At the half-filled lowest and second Landau levels, a transition to the anisotropic state was also observed in the presence of a periodic potential or an in-plane magnetic field, respectively. Although the origin of the anisotropy is still unknown, the unidirectional charge density wave (UCDW) state is a candidate for the anisotropic state. Theoretical works in higher Landau levels showed the possibility of the UCDW. Recent theoretical works in lower Landau levels support the UCDW or liquid-crystal state. In this paper we investigate the compressible charge density wave (CCDW) states, which include the UCDW state and the compressible Wigner crystal (CWC) state, in several lower Landau levels. 
As a result, the UCDW states are found to be the lowest-energy states among the CCDW states at the half-filled Landau levels. The CCDW state is a gapless state with an anisotropic Fermi surface and has a periodically modulated charge density. Using the von Neumann lattice representation, we construct the CCDW state and calculate the charge density profile and Hartree-Fock energy self-consistently. The von Neumann lattice representation has a quite useful property in studying the quantum Hall system with a periodic potential. The lattice structure of the von Neumann lattice can be adjusted to the periodic potential by varying the modular parameter of the unit cell. In the present case the periodic potential is caused by the charge density modulation through the Coulomb interaction. If the translational invariance on the lattice is unbroken, a Fermi surface is formed. In the Hartree-Fock approximation, we show that the self-consistency equation for the CCDW has two types of solution at half-filling. One has a belt-shaped Fermi sea, corresponding to the UCDW, and the other has a diamond-shaped Fermi sea, corresponding to the CWC. For the belt-shaped Fermi sea, one direction of the momentum space is filled and the other direction is partially filled. Therefore the UCDW is regarded as a collection of the one-dimensional lattice Fermi-gas systems which were called the quantum Hall gas (QHG) in Ref. . In the UCDW state, there are two length scales: the wavelength of the UCDW, $`\lambda _{\mathrm{CDW}}`$, and the Fermi wavelength of the lattice fermions, $`\lambda _\mathrm{F}`$. These two parameters obey a duality relation. We obtain the wavelength dependence of the energy. Moreover we calculate the kinetic energy of the gas system. The paper is organized as follows. In Sec. II, the Hartree-Fock energy for the CCDW states is calculated. The density profile, wavelength dependence of the energy, and the kinetic energy of the UCDW are obtained in Sec. III. 
The summary and discussion are given in Sec. IV. ## II Hartree-Fock energy for the CCDW state: UCDW vs. CWC In this section we construct the CCDW state in the Hartree-Fock approximation using the formalism developed in Ref. . Let us consider the two-dimensional electron system in a perpendicular magnetic field $`B`$ in the absence of impurities. The electrons interact with each other through the Coulomb potential $`V(𝐫)=q^2/\mathrm{r}`$. In this paper we ignore the spin degree of freedom and use the natural unit of $`\hslash =c=1`$. In the $`l`$-th Landau level space, the free kinetic energy is quenched as $`\omega _c(l+1/2)`$, $`l=0,1,2,\mathrm{\dots }`$, where $`\omega _c=eB/m`$. In the von Neumann lattice formalism, the electron field is expanded as $$\psi (𝐫)=\underset{l,𝐗}{\sum }b_l(𝐗)W_{l,𝐗}(𝐫),$$ (1) where $`b`$ is an anti-commuting annihilation operator and $`𝐗`$ is an integer-valued two-dimensional coordinate. The Wannier basis functions $`W_{l,𝐗}(𝐫)`$ form an orthonormal and complete basis in the $`l`$-th Landau level. Expectation values of the position of $`W_{l,𝐗}(𝐫)`$ are located at the two-dimensional lattice sites $`m𝐞_1+n𝐞_2`$ for $`𝐗=(m,n)`$, where $`𝐞_1=(ra,0)`$, $`𝐞_2=(a/r\mathrm{tan}\theta ,a/r)`$, and $`a=\sqrt{2\pi /eB}`$. The area of the unit cell is $`𝐞_1\times 𝐞_2=a^2`$, which means that a unit flux penetrates the unit cell of the von Neumann lattice. The unit cell is illustrated in Fig. 1. For simplicity we set $`a=1`$ in the following calculation. The Bloch wave basis, which is given by $`u_{l,𝐩}(𝐫)=\sum _𝐗W_{l,𝐗}(𝐫)e^{i𝐩𝐗}`$, is another useful basis on the von Neumann lattice. The lattice momentum $`𝐩`$ is defined in the Brillouin zone (BZ), $`|p_i|\le \pi `$. The wave function $`u_{l,𝐩}(𝐫)`$ extends over the whole plane and its probability density has the same periodicity as the von Neumann lattice. In momentum space, the system has a translational invariance, which is referred to as the K-invariance in the CF model. 
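As a quick check of the geometry just described: reading the second component of $`𝐞_2`$ as $`a/(r\mathrm{tan}\theta )`$ (one reading of the expression above; the result below does not depend on this choice), the unit-cell area $`𝐞_1\times 𝐞_2`$ equals $`a^2`$ for every $`r`$ and $`\theta `$, so exactly one flux quantum threads each cell.

```python
import numpy as np

def von_neumann_cell(r, theta, a=1.0):
    """Primitive vectors e1 = (r a, 0), e2 = (a/(r tan(theta)), a/r)
    of the von Neumann lattice and the unit-cell area e1 x e2."""
    e1 = np.array([r * a, 0.0])
    e2 = np.array([a / (r * np.tan(theta)), a / r])
    area = e1[0] * e2[1] - e1[1] * e2[0]   # z-component of e1 x e2
    return e1, e2, area

# e1 lies along x and the y-component of e2 is a/r, so the area is
# (r a)(a/r) = a^2 whatever the tilt of the cell.
```

At $`\theta =\pi /2`$ the cell becomes rectangular, $`𝐞_1=(ra,0)`$ and $`𝐞_2\simeq (0,a/r)`$, which is the choice made for the UCDW below.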
In the following, we show that symmetry breaking of the K-invariance generates a kinetic energy and leads to an anisotropy in the charge density. We consider a mean field state of filling factor $`\nu =l+\overline{\nu }`$, where $`l`$ is an integer and $`0<\overline{\nu }<1`$. Let us consider a mean field for the CCDW which has the translational invariance on the von Neumann lattice, $$U_l(𝐗-𝐗^{\prime })=\langle b_l^{\dagger }(𝐗^{\prime })b_l(𝐗)\rangle ,$$ (2) with $`U_l(0)=\overline{\nu }`$. Ignoring the Landau-level energy and the inter-Landau-level effect, the Hartree-Fock Hamiltonian within the $`l`$-th Landau level, $`H_{\mathrm{HF}}^{(l)}`$, reads $`H_{\mathrm{HF}}^{(l)}`$ $`=`$ $`{\displaystyle \underset{𝐗,𝐗^{\prime }}{\sum }}U_l(𝐗-𝐗^{\prime })\{\stackrel{~}{v}_l(2\pi (\widehat{𝐗}-\widehat{𝐗}^{\prime }))-v_l(\widehat{𝐗}-\widehat{𝐗}^{\prime })\}`$ (4) $`\times \{b_l^{\dagger }(𝐗)b_l(𝐗^{\prime })-{\displaystyle \frac{1}{2}}U_l(𝐗^{\prime }-𝐗)\},`$ where $`\stackrel{~}{v}_l(𝐤)`$ $`=`$ $`\{L_l({\displaystyle \frac{k^2}{4\pi }})\}^2e^{-\frac{k^2}{4\pi }}\stackrel{~}{V}(𝐤),`$ (5) $`v_l(𝐗)`$ $`=`$ $`{\displaystyle \int \frac{d^2k}{(2\pi )^2}\stackrel{~}{v}_l(𝐤)e^{i𝐤𝐗}}.`$ (6) Here $`\stackrel{~}{V}(𝐤)=2\pi q^2/k`$ for $`𝐤\ne 0`$, and $`\stackrel{~}{V}(0)=0`$ due to the charge neutrality condition. $`\widehat{𝐗}`$ is the position of the Wannier basis in real space, that is, $`\widehat{𝐗}=(rm+n/r\mathrm{tan}\theta ,n/r)`$ for $`𝐗=(m,n)`$. By Fourier transforming Eq. 
(4), we obtain the self-consistency equations for the kinetic energy $`\epsilon _l`$, $`\epsilon _l(𝐩,\overline{\nu })`$ $`=`$ $`{\displaystyle \int _{\mathrm{BZ}}}{\displaystyle \frac{d^2p^{\prime }}{(2\pi )^2}}\stackrel{~}{v}_l^{\mathrm{HF}}(𝐩^{\prime }-𝐩)\theta (\mu _l-\epsilon _l(𝐩^{\prime },\overline{\nu })),`$ (7) $`\overline{\nu }`$ $`=`$ $`{\displaystyle \int _{\mathrm{BZ}}}{\displaystyle \frac{d^2p}{(2\pi )^2}}\theta (\mu _l-\epsilon _l(𝐩,\overline{\nu })),`$ (8) where $`\mu _l`$ is the chemical potential and $`\stackrel{~}{v}_l^{\mathrm{HF}}`$ is defined by $$\stackrel{~}{v}_l^{\mathrm{HF}}(𝐩)=\underset{𝐗}{\sum }\{\stackrel{~}{v}_l(2\pi \widehat{𝐗})-v_l(\widehat{𝐗})\}e^{i𝐩𝐗}.$$ (9) Equations (7) and (8) determine a self-consistent Fermi surface. The existence of a Fermi surface inevitably breaks the K-invariance. The mean value of the kinetic energy, $`\langle \epsilon _l\rangle =\int _{\mathrm{BZ}}\epsilon _l(𝐩)d^2p/(2\pi )^2`$, is equal to $`-v_l(0)\overline{\nu }`$, which is independent of $`r`$ and $`\theta `$. The energy per particle in the $`l`$-th Landau level is calculated as $$E^{(l)}=\frac{1}{2\overline{\nu }}\underset{𝐗}{\sum }|U_l(𝐗)|^2\{\stackrel{~}{v}_l(2\pi \widehat{𝐗})-v_l(\widehat{𝐗})\}.$$ (10) $`E^{(l)}`$ is a function of $`\overline{\nu }`$, $`r`$, and $`\theta `$. The parameters $`r`$ and $`\theta `$ are determined so as to minimize the energy $`E^{(l)}`$ at fixed $`\overline{\nu }`$. There are two types of Fermi sea which satisfy Eqs. (7) and (8). One is a belt-shaped Fermi sea illustrated in Fig. 2 (a), that is, $`|p_y|\le p_\mathrm{F}`$. This solution corresponds to the UCDW state. At $`\nu =l+\overline{\nu }`$, the Fermi wave number $`p_\mathrm{F}`$ is equal to $`\pi \overline{\nu }`$, and the mean field of the UCDW becomes $$U_l(𝐗)=\delta _{m,0}\frac{\mathrm{sin}(p_\mathrm{F}n)}{\pi n},$$ (11) for $`𝐗=(m,n)`$. We take $`\theta =\pi /2`$ for the UCDW without loss of generality because the system has rotational invariance. 
Then the charge density of the UCDW is uniform in the $`y`$-direction and oscillates in the $`x`$-direction with wavelength $`\lambda _{\mathrm{CDW}}=ra`$. In the $`y`$-direction, $`p_\mathrm{F}`$ corresponds to the Fermi wave number $`k_\mathrm{F}=p_\mathrm{F}r/a=\pi \overline{\nu }r/a`$ in real space. A duality relation between $`\lambda _\mathrm{F}=2\pi /k_\mathrm{F}`$ and $`\lambda _{\mathrm{CDW}}=ra`$ exists, namely $`\lambda _\mathrm{F}\lambda _{\mathrm{CDW}}=2a^2/\overline{\nu }`$. In the CF model, the composite fermion has the wave number $`k_\mathrm{F}=\sqrt{2\pi }/a`$ at $`\nu =1/2`$. In the UCDW state for $`r=1.636`$, which minimizes the energy at $`\nu =1/2`$, $`\pi \overline{\nu }r`$ equals 2.57, which is very close to the value $`\sqrt{2\pi }=2.51`$. This implies that there exists an unknown connection between the CF state and the UCDW state. The other Fermi sea which satisfies Eqs. (7) and (8) at $`\overline{\nu }=1/2`$ is the diamond-shaped one illustrated in Fig. 2 (b). This solution corresponds to the CWC state, whose density is modulated with the same periodicity as the von Neumann lattice. The mean field of the CWC becomes $$U_l(𝐗)=\frac{2}{\pi ^2}\frac{\mathrm{sin}\frac{\pi }{2}(m+n)\mathrm{sin}\frac{\pi }{2}(m-n)}{m^2-n^2},$$ (12) for $`𝐗=(m,n)`$. Substituting Eqs. (11) and (12) into Eq. (10), we calculate the energy for various CCDW states at $`\overline{\nu }=1/2`$. By varying the parameters $`r`$ and $`\theta `$, we obtained the lowest energy numerically. The results are summarized in Table I. The unit of the energy is $`q^2/l_B`$, with $`l_B=\sqrt{1/eB}`$. As seen in the Table, the UCDW state is the lowest-energy state in all cases. Therefore the UCDW state is the most plausible among the CCDW states. For the CWC state, $`\theta =\pi /2`$ corresponds to a rectangular lattice and $`\theta =\pi /3`$ to a triangular lattice. 
($`\theta =\pi /2`$ and $`r=1`$ correspond to a square lattice, and $`\theta =\pi /3`$ and $`r=1.075`$ to a regular triangular lattice.) For the UCDW state, the wavelength $`\lambda _{\mathrm{CDW}}=ra`$ increases with increasing $`l`$. This behavior is consistent with numerical calculations in finite systems. The Hartree-Fock calculation in higher Landau levels predicts $`\lambda _{\mathrm{CDW}}=2.7`$–$`2.9\sqrt{2l+1}l_B`$. Our results agree with this at $`l=1,2,3`$. At $`l=0`$, however, our result is much smaller than this. At $`\nu =1/2`$, the energy of the UCDW state is slightly higher than the value of the gapped charge density wave (CDW) calculation. In the gapped CDW state, the higher-order correction is small because of the energy gap. In the UCDW state, however, the correction might be large compared with the CDW state. Therefore, for a more definite conclusion it is necessary to include fluctuation effects around the mean field. Although this task goes beyond the scope of this paper, the Hamiltonian on the von Neumann lattice should be useful for studying fluctuation effects. ## III Properties of the UCDW states In this section, we calculate the density profile of the UCDW and the wavelength dependence of the energy. The electron density for $`\nu =l+1/2`$ reads $$\rho (𝐫)=l+\int _{-\pi }^{\pi }\frac{dp_x}{2\pi }\int _{-\pi /2}^{\pi /2}\frac{dp_y}{2\pi }|u_{l,𝐩}(𝐫)|^2.$$ (13) Since $`|u_{l,𝐩}(𝐫)|^2`$ is a periodic function of $`𝐩`$ in the BZ and depends only on the combination $`𝐫+(p_y/2\pi )𝐞_1-(p_x/2\pi )𝐞_2`$, the $`\rho `$ of Eq. (13) is uniform in the $`y`$-direction. Here we take $`𝐞_1=(r,0)`$ and $`𝐞_2=(0,1/r)`$ for $`\theta =\pi /2`$. A translation in momentum space is equivalent to a translation of the charge density of the CCDW. Therefore the breaking of the K-invariance is the same as the breaking of translational invariance in real space. 
The density profiles of the UCDW at the half-filled $`l`$-th Landau level for $`l=0,1,2`$ are plotted in Fig. 3. The unit of the density is $`a^{-2}`$, and the wavelength $`\lambda _{\mathrm{CDW}}=ra`$ of Table I is used in Fig. 3. As seen in this figure, the amplitude of the wave decreases with increasing $`l`$.

| $`l`$ | $`E_{\mathrm{UCDW}}`$ | $`E_{\mathrm{CWC}}(\pi /2)`$ | $`E_{\mathrm{CWC}}(\pi /3)`$ |
| --- | --- | --- | --- |
| 0 | -0.4331 | -0.3939 | -0.3891 |
| 1 | -0.3490 | -0.3122 | -0.3110 |
| 2 | -0.3074 | -0.2715 | -0.2703 |
| 3 | -0.2800 | -0.2448 | -0.2436 |

| $`l`$ | $`r_{\mathrm{UCDW}}`$ | $`r_{\mathrm{CWC}}(\pi /2)`$ | $`r_{\mathrm{CWC}}(\pi /3)`$ |
| --- | --- | --- | --- |
| 0 | 1.636 | 1.000 | 1.295 |
| 1 | 2.021 | 1.000 | 1.075 |
| 2 | 2.474 | 1.000 | 1.075 |
| 3 | 2.875 | 1.205 | 1.335 |

Table I. Minimum energy and corresponding parameter $`r`$ for the CCDW states at $`\nu =l+1/2`$. For the CWC states, the energies at $`\theta =\pi /2`$ (rectangular lattice) and $`\pi /3`$ (triangular lattice) are shown. The units of the energy and $`r`$ are $`q^2/l_B`$ and $`a`$, respectively.

To minimize the energy, we calculated the $`r`$ dependence of the energy, which is plotted in Fig. 4. As seen in Fig. 4, a quasi-stable state appears for $`l\ge 2`$ near $`r=1`$, and the $`r`$ dependence of the energy becomes flatter as $`l`$ grows. This means that the UCDW state for $`l\ge 2`$ has flexibility against disorder effects. This observation agrees with the absence of the spontaneous formation of the anisotropic state for $`l\le 1`$. The kinetic energy $`\epsilon _l(𝐩,\overline{\nu })`$ of the UCDW is written as $$\epsilon _l(𝐩,\overline{\nu })=\int _{-\pi }^{\pi }\frac{dp_x^{\prime }}{2\pi }\int _{-\pi \overline{\nu }}^{\pi \overline{\nu }}\frac{dp_y^{\prime }}{2\pi }\stackrel{~}{v}_l^{\mathrm{HF}}(𝐩^{\prime }-𝐩).$$ (14) This is independent of $`p_x`$ after the integration, and we denote $`\epsilon _l(𝐩,\overline{\nu })=\epsilon _l(p_y,\overline{\nu })`$. $`\epsilon _l(p_y,1/2)`$ for $`l=`$0, 1, and 2 is shown in Fig. 5. 
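Several of the numbers quoted above can be rechecked directly from Table I. The following is an illustrative consistency check, using the identification a = √(2π) l_B, which follows from a = √(2π/eB) and l_B = 1/√(eB):

```python
import math

a_in_lB = math.sqrt(2 * math.pi)   # a = sqrt(2*pi/eB), l_B = 1/sqrt(eB)

# Table I: minimum energies (units of q^2/l_B) and optimal r (units of a)
E_UCDW = {0: -0.4331, 1: -0.3490, 2: -0.3074, 3: -0.2800}
E_CWC_rect = {0: -0.3939, 1: -0.3122, 2: -0.2715, 3: -0.2448}
E_CWC_tri = {0: -0.3891, 1: -0.3110, 2: -0.2703, 3: -0.2436}
r_UCDW = {0: 1.636, 1: 2.021, 2: 2.474, 3: 2.875}

# The UCDW has the lowest energy at every half-filled level
for l in range(4):
    assert E_UCDW[l] < E_CWC_rect[l] and E_UCDW[l] < E_CWC_tri[l]

# At nu = 1/2 the lattice Fermi wave number pi*nu_bar*r is close to the
# composite-fermion value sqrt(2*pi)/a  (2.57 vs. 2.51)
assert abs(math.pi * 0.5 * r_UCDW[0] - math.sqrt(2 * math.pi)) < 0.1

# For l = 1, 2, 3, lambda_CDW = r*a tracks the higher-Landau-level
# Hartree-Fock estimate (2.7-2.9)*sqrt(2l+1)*l_B
for l in (1, 2, 3):
    ratio = r_UCDW[l] * a_in_lB / math.sqrt(2 * l + 1)
    assert 2.6 < ratio < 3.0
```

The computed ratios come out near 2.9, 2.8, and 2.7 for $`l=1,2,3`$, consistent with the agreement claimed in the text.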
As seen in Fig. 5, the bandwidth $`\mathrm{\Gamma }_l`$ decreases with increasing $`l`$: $`\mathrm{\Gamma }_0=0.7363`$, $`\mathrm{\Gamma }_1=0.5682`$, and $`\mathrm{\Gamma }_2=0.5042`$ in units of $`q^2/l_B`$. Using the mean field of Eq. (11), the kinetic term in $`H_{\mathrm{HF}}^{(l)}`$ is written as $$K_{\mathrm{HF}}^{(l)}=\underset{m}{\sum }\int \frac{dp_y}{2\pi }a_{l,m}^{\dagger }(p_y)\epsilon _l(p_y,\overline{\nu })a_{l,m}(p_y),$$ (15) where $`a_{l,m}(p_y)=\underset{n}{\sum }b_l(𝐗)e^{ip_yn}`$ for $`𝐗=(m,n)`$. Therefore the UCDW state is regarded as a collection of one-dimensional lattice Fermi-gas systems which extend in the $`y`$-direction. In the Büttiker-Landauer formula, the conductance of a one-dimensional channel is equal to $`e^2/2\pi `$ in the absence of backscattering. Thus the conductance of the UCDW takes the anisotropic values $`\sigma _{xx}`$ $`=`$ $`0,`$ (16) $`\sigma _{yy}`$ $`=`$ $`n_x{\displaystyle \frac{e^2}{2\pi }},`$ (17) where $`n_x`$ is the number of one-dimensional channels which extend from one edge to the other edge. If we take $`\sigma _{xy}=\nu e^2/2\pi `$, the resistance becomes $`\rho _{xx}`$ $`=`$ $`{\displaystyle \frac{n_x}{\nu ^2}}{\displaystyle \frac{2\pi }{e^2}},`$ (18) $`\rho _{yy}`$ $`=`$ $`0.`$ (19) Thus the formation of the UCDW leads to an anisotropy in the magnetoresistance. For $`\nu =9/2`$, $`2\pi /e^2\nu ^2=1.3\times 10^3`$ $`\mathrm{\Omega }`$, which is of the same order as the experimental value $`1.0\times 10^3`$ $`\mathrm{\Omega }`$. Disorder effects decrease $`n_x`$ by destroying the UCDW ordering. Furthermore, the backscattering due to impurities reduces $`\sigma _{yy}`$ and $`\rho _{xx}`$. In the case of the edge modes in the quantum Hall regime, there is no backscattering because of the chirality: the left-mover lives on one edge, far from the other edge where the right-mover lives. For the UCDW state, on the other hand, each one-dimensional system has a width of at most $`ra`$. 
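The quoted 1.3×10³ Ω estimate is easy to reproduce, since in the units used here 2π/e² is the von Klitzing resistance h/e² ≈ 25.8 kΩ. A minimal sketch, taking a single channel, n_x = 1, as an illustrative assumption:

```python
R_K = 25812.807  # von Klitzing constant h/e^2 in ohms (2*pi/e^2 with hbar = 1)

def rho_xx(nu, n_x=1):
    """Longitudinal resistance of the UCDW, rho_xx = (n_x / nu^2) * (2*pi/e^2)."""
    return n_x / nu**2 * R_K

print(rho_xx(4.5))   # ≈ 1.27e3 ohm at nu = 9/2, cf. the quoted 1.3e3 ohm
assert abs(rho_xx(4.5) - 1.3e3) < 0.1e3
```

Note that rho_xx scales linearly with the number of channels n_x, which is why disorder that destroys UCDW ordering (reducing n_x) also reduces the anisotropic resistance.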
Therefore the backscattering effect strongly affects the conductance of the UCDW. To conclude this section, we point out a subtle problem concerning the K-invariance and the sliding mode. In an ordinary one-dimensional system, the difference of the chemical potentials between the left and right edges of the Fermi sea yields a net electric current. In the one-dimensional system of the UCDW, however, the difference of the chemical potentials can be canceled out by sliding the Fermi sea in Eqs. (7) and (8), thanks to the K-invariance. Then there is no net electric current, which apparently contradicts the above assertion. As mentioned before, a translation in momentum space is equal to a translation of the charge density in real space. In other words, sliding the Fermi sea is the same as sliding the CCDW in real space. The sliding mode is expected to be pinned by impurities. Therefore the violation of the K-invariance due to pinning of the CCDW can remedy the contradiction. ## IV Summary and Discussion In this paper we have studied the CCDW state, whose charge-density periodicity coincides with that of the von Neumann lattice. The CCDW state is gapless and has an anisotropic Fermi surface. We obtained two types of CCDW state, the UCDW state and the CWC state. By calculating the Hartree-Fock energy, the UCDW is found to have the lower energy at the half-filled Landau levels. Furthermore, the wavelength dependence of the energy, the density profile, and the kinetic energy of the UCDW are calculated numerically. As a result, it is found that the amplitude of the UCDW and the bandwidth of the Landau level decrease with increasing $`l`$, and a quasi-stable state appears for $`l\ge 2`$. The UCDW state has a belt-shaped Fermi sea. Consequently the system consists of many one-dimensional lattice Fermi-gas systems which extend in the uniform direction. Formation of this structure could be the origin of the anisotropy observed in experiments. 
To confirm this speculation, experimental work to detect the wavelength of the UCDW and theoretical work including fluctuations around the mean-field solution are necessary. Since there is no energy gap in the CCDW state, the fluctuation effects might be large compared with the gapped CDW state. Indeed, Fradkin and Kivelson proposed a rich phase diagram by considering fluctuations around the stripe-ordered state. We believe that the von Neumann lattice formalism provides an appropriate scheme for studying the fluctuation effects around the mean field. Recently a new insulating state was discovered around the quarter-filled third Landau level. This state appears to be gapped and to have an integer quantized Hall conductance. These facts suggest that the state is a gapped CDW state which is different from the CCDW. The CDW state whose periodicity is $`q/p`$ times that of the von Neumann lattice is gapped and has $`q`$ bands with $`p`$-fold degeneracy. In the presence of a magnetic field and a periodic potential, the Hall conductance of the free electron system in the gap region is equal to the Chern number. Thus the observed quantized Hall conductance is probably the Chern number of the periodic-potential problem. The transition between this gapped CDW and the CCDW studied in this paper is an interesting future problem. ###### Acknowledgements. I would like to thank K. Ishikawa, T. Ochiai, and J. Goryo for useful discussions. I also thank Y. S. Wu and G. H. Chen for helpful discussions. This work was partially supported by the special Grant-in-Aid for Promotion of Education and Science in Hokkaido University provided by the Ministry of Education, Science, Sports, and Culture, the Grant-in-Aid for Scientific Research on Priority Areas (Physics of CP violation) (Grant No. 11127201), the Grant-in-Aid for Basic Research (Grant No. 
10044043), and the special Grant for Basic Research (Hierarchical matter analyzing system) from the Ministry of Education, Science, Sports, and Culture, Japan.
no-problem/9908/hep-lat9908009.html
ar5iv
text
# JLAB-THY-99-21 Comment on “Valence QCD: Connecting QCD to the Quark Model” ## Abstract I criticize certain conclusions about the physics of hadrons drawn from a “valence QCD” approximation to QCD. Lattice QCD is not just useful as a technique for calculating strong interaction observables like the proton mass: it can also be used to help us understand QCD. This is the goal of the work described in Ref. . Its authors present a field theory which they call “valence QCD” (vQCD), which they hope can be identified with the valence quark model. The key feature built into vQCD is a form of suppression of Z-graphs, i.e., of quarks propagating “backward in time” . The authors make sound arguments for the importance of trying to capture the essence of the quark model in a field-theoretic framework, and present some interesting results (both theoretical and numerical) on vQCD. This comment is not directed at the goals of vQCD, but rather at certain conclusions about the physics of hadrons which the authors have drawn from their work and which I consider unjustified. Foremost among these is the claim highlighted in their abstract that baryon hyperfine interactions are “…largely attributed to the Goldstone boson exchanges between the quarks…” and not to standard one-gluon-exchange (OGE) forces . Fig. 1(a): Z-graph-induced meson exchange between two quarks. Fig. 1(b): A cartoon of the space-time development of the Z-graph-induced meson exchange in a baryon in the flux tube model. For diagrammatic clarity three different flavors of quarks are shown. Note that if the created meson rejoins the flux tube from which it originated, the produced $`q\overline{q}`$ pair can be of any flavor; however, such a process would be a closed $`q\overline{q}`$ loop and therefore not part of the quenched approximation. 
Also possible, but not shown, are OZI-violating graphs with the creation or annihilation of a disconnected $`q\overline{q}`$ meson; these are irrelevant to octet meson exchange in the SU(3) limit and enter in broken SU(3) only through the $`\eta -\eta ^{\prime }`$ mixing angle. The origin of this claim is that in vQCD the $`\mathrm{\Delta }-N`$ splitting is only about 50 MeV, in contrast to the 180 MeV found on the same lattices in standard quenched QCD (qQCD) . The authors of Ref. argue that since vQCD differs from qQCD only in the suppression of Z-graphs, Z-graph-induced meson exchange between quarks (see Figs. 1), and in particular the exchange of the octet of pseudoscalar mesons (OPE), must be the origin of most of the hyperfine interaction. My objection to this conclusion is that vQCD also appears to produce a very different spectrum (and thus, one presumes, very different internal hadronic structures) from qQCD, so that the reduced hyperfine splittings cannot necessarily be associated with a reduction in the strength of the hyperfine interaction. To see this we begin with an examination of the spectrum of vQCD, which I have extracted from Ref. and display in Fig. 2 in comparison with the spectra of nature and of qQCD . Most masses are rough estimates based on the graphs displayed in Ref. and, except for the vQCD $`a_1`$ mass, I have made no attempt to estimate statistical or systematic uncertainties of the lattice “data”. However, the problem displayed in Fig. 2 does not need such refinements to stand out clearly: the spectrum of vQCD is dramatically different from both qQCD and nature! Fig. 2: The meson and baryon spectra of nature, quenched QCD, and valence QCD. I indicate by the shaded band an estimated error for the vQCD $`a_1`$ mass, since this mass is important to the arguments of the text. Fig. 3: The meson spectra of quenched QCD and valence QCD as functions of the quark mass $`m`$. 
The heavy quark center of gravity $`\frac{1}{4}m_\pi +\frac{3}{4}m_\rho `$ obtained from qQCD at $`m=440`$ MeV is shown as a dotted line and is used to define an origin for all the other spectra, since it is the $`a_1`$ excitation energy relative to this center of gravity that is of interest here. The authors of Ref. argue quite cogently that the physical mechanisms which produce a constituent quark mass $`m_{const}\simeq 300`$ MeV in qQCD will be suppressed in vQCD, and proceed to analyze their results assuming that their spectra will be repaired by an overall shift of meson and baryon masses by $`2m_{const}`$ and $`3m_{const}`$, respectively. With this picture in mind, they conclude that the $`\mathrm{\Delta }-N`$ hyperfine interaction in vQCD is much too small and are led to the conclusion quoted above. However, it is clear from Fig. 2 that such a simple shift of the spectral zeros cannot fix the vQCD spectrum: they would still have essentially zero orbital excitation energy (the $`a_1-\rho `$ mass difference). At one point in the paper the authors mention the possibility of repairing the vQCD spectra by “tuning” the quark mass, though they never discuss this option. I have extracted their meson spectra for heavier quark masses and plotted them in Fig. 3 for both qQCD and vQCD. The qQCD spectra are very reminiscent of the experimental spectra shown in Fig. 4, but vQCD appears to be very different even for relatively heavy quark masses. There is certainly no indication that quark masses $`m\simeq 300`$ MeV, which fix the spectral zero problems in the mesons and baryons as expected, fix the problem of the vQCD excitation spectrum. The authors of Ref. are silent on this matter. Fig. 4: The experimental spectra of $`b\overline{b}`$, $`c\overline{c}`$, $`s\overline{s}`$, and isovector light quarkonia with the center of gravity of the $`S`$-wave mesons aligned. The $`2^{++}`$ states have been used in lieu of the $`1^{++}`$ states because $`\varphi _1`$ is not yet clearly identified. 
The pseudoscalar $`s\overline{s}`$ state (“$`\eta +\eta ^{\prime }`$”) has been located by unmixing a $`2\times 2`$ matrix assumed to consist of primordial $`s\overline{s}`$ and $`\frac{1}{\sqrt{2}}(u\overline{u}+d\overline{d})`$ states. The $`\eta _b`$ is not yet discovered, but the theoretical prediction is shown as a dotted spectral line. The spectra are shown to scale, which may conveniently be calibrated from the $`\chi _{c2}-\psi `$ splitting of 459 MeV. There is a complication in completing my criticism of Ref. . The preceding discussion is focused on the meson spectrum because it is only for mesons that the orbital excitation spectrum is given in Ref. . Fig. 2 certainly gives the impression that vQCD has similar problems in both the meson and baryon sectors: in both the masses drop precipitously and the hyperfine interactions are much weakened. However, it is a logical possibility that the zeroth-order baryon and meson spectra have very different dynamical origins and that the baryon spectrum would be satisfactory after an overall shift (or tuning of $`m`$). In fact, even before showing us spectra, the authors of Ref. show us lattice data on vQCD which indicates that the nucleons are of normal size, a result which suggests that the baryons are normal . It would therefore certainly be interesting to know where the negative-parity baryons are in vQCD. Let me note, however, that if the zeroth-order baryon and meson spectra have different physical origins, it would contradict much that we think we understand about QCD and would turn such apparent empirical confirmations of that understanding as the equality of the slopes of meson and baryon Regge trajectories into misleading accidents. If this were true, it would represent a far more profound conclusion than the ones the authors of Ref. draw: it would destroy rather than just modify the quark model. If vQCD eventually leads us to this result, it would thus make the other physics conclusions of Ref. irrelevant. 
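As an aside to the Fig. 4 caption above, the kind of "unmixing" used there to locate the pseudoscalar s-sbar state can be illustrated in a few lines. The mixing angle below is an illustrative quark-flavor-basis value, not a number taken from the paper, and a linear (rather than quadratic) mass matrix is assumed:

```python
import math

m_eta, m_eta_prime = 548.0, 958.0   # physical eta and eta' masses in MeV
phi = math.radians(42.0)            # quark-basis mixing angle (illustrative)

# |eta>  =  cos(phi)|n nbar> - sin(phi)|s sbar>
# |eta'> =  sin(phi)|n nbar> + cos(phi)|s sbar>
# Rotating back gives the diagonal (primordial) elements:
m_ss = math.sin(phi)**2 * m_eta + math.cos(phi)**2 * m_eta_prime
m_nn = math.cos(phi)**2 * m_eta + math.sin(phi)**2 * m_eta_prime

print(round(m_ss))   # → 774 (MeV), the primordial s-sbar pseudoscalar
assert 650 < m_ss < 850
assert abs(m_ss + m_nn - (m_eta + m_eta_prime)) < 1e-9  # trace preserved
```

The rotation preserves the trace of the mass matrix, so the two primordial masses always bracket the physical ones; the precise value of m_ss depends on the assumed angle and mass convention.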
The simplest possibility is surely that the baryon excitation spectrum is as different from qQCD and nature as was the meson spectrum. In summary, the spectrum of vQCD does not appear to represent a good approximation to nature, to qQCD, or to the quark model. This does not mean that we can’t learn many things from it, but it does mean that one must be very cautious in drawing conclusions from vQCD. In particular, hyperfine splittings are normally especially sensitive functions of the internal structure of the states being perturbed (e.g., in the nonrelativistic quark model they are proportional to the square of the spatial wave function at zero separation). There is certainly no reason to believe that a ground state system belonging to such a poorly described spectrum will have reasonable short distance matrix elements. I would like to point out that, independent of the basic objections I have raised to drawing physics conclusions from vQCD at this stage, the association of the missing hyperfine strength of vQCD with Goldstone boson exchange (OPE) between quarks suffers from some serious problems. The first of these is apparent from Fig. 2: while qQCD describes both the $`\rho -\pi `$ and $`\mathrm{\Delta }-N`$ splittings, they are both poorly described in vQCD. It would be natural and economical to identify a common origin for these problems. However, the Z-graph-induced meson exchange of Fig. 1(a) can only act between two quarks and not between a quark and an antiquark. Of course it is logically possible that there are different mechanisms in operation in the two systems but, as we will see, this is very difficult to arrange. Figure 4 showed the evolution of quarkonium spectroscopy as a function of the quark masses. In heavy quarkonia ($`b\overline{b}`$ and $`c\overline{c}`$) we know that hyperfine interactions are generated by one-gluon-exchange (OGE) perturbations of wave functions which are solutions of the Coulomb-plus-linear potential problem. 
I find it difficult to look at this diagram and not see a smooth evolution of the wavefunction (characterized by the slow evolution of the orbital excitation energy) convoluted with the predicted $`1/m_Q^2`$ strength of the OGE hyperfine interaction. This same conclusion can be reached by approaching the light quarkonia from another angle. Figure 5(a) shows the evolution of ground state heavy-light meson hyperfine interactions from the heavy quark limit to the same isovector quarkonia shown in Fig. 4. In this case we know that in the heavy quark limit the hyperfine interaction is given by the matrix element of $`\vec{\sigma }_Q\cdot \vec{B}/2m_Q`$, consistent with the OGE mechanism and the striking $`1/m_Q`$ behaviour of the data on ground state splittings as $`m_Q`$ is decreased from $`m_b`$ to $`m_c`$ to $`m_s`$ to $`m_d`$. Since the OPE mechanism cannot contribute here, the OGE mechanism is therefore certainly the natural candidate for generating all meson hyperfine interactions. The problem that arises for the OPE hypothesis is that it is then nearly impossible to avoid the conclusion that OGE is also dominant in baryon hyperfine interactions: the OGE $`q\overline{q}`$ and $`qq`$ hyperfine interactions are related by a simple factor of $`1/2`$, and given the similarities of meson and baryon structure (for example, their charge radii, orbital excitation energies, and magnetic moments are all similar), it is inevitable that the matrix elements of OGE in baryons and mesons are similar. This connection is explicitly realized in quark models . Fig. 5: Ground state meson (a) and baryon (b) hyperfine splittings in heavy-light systems as a function of the mass $`m_Q`$ of the heavy quark. The spectra on the far left are the $`m_Q\rightarrow \infty `$ limits of heavy quark symmetry. 
The $`\mathrm{\Sigma }_Q^{*}-\mathrm{\Lambda }_Q`$ splitting and the positions of $`\mathrm{\Sigma }_b^{*}`$ and $`\mathrm{\Sigma }_b`$ are estimates from the quark model; all other masses are from experiment. The spectra are shown to scale; the meson scale may conveniently be calibrated with the $`D^{*}-D`$ splitting of 141 MeV and the baryon scale with the $`\mathrm{\Sigma }_c-\mathrm{\Lambda }_c`$ splitting of 169 MeV. There is another very serious problem with the OPE mechanism which surfaces in mesons. I have explained that there are no Z-graph-induced meson exchanges in mesons. However, Fig. 6 shows how the same meson exchanges which are assumed to exist in baryons will drive OZI-violating mixings in isoscalar channels via annihilation graphs. I have argued above that the structure of mesons and baryons is so similar that it is impossible to avoid their having similar OGE matrix elements. The same is true for OPE matrix elements: it is impossible to maintain that OPE is strong enough to produce the $`\mathrm{\Delta }-N`$ splitting in baryons without predicting a matrix element of comparable strength associated with Fig. 6 in mesons. Such matrix elements will violate the OZI rule . To see this, consider the mixing between the pure $`\omega `$-like state $`\frac{1}{\sqrt{2}}(u\overline{u}+d\overline{d})`$ and the pure $`\varphi `$-like state $`s\overline{s}`$. This mixing will be driven by kaon exchange, and from the preceding very general arguments we must expect that the amplitude $`A_{OZI}`$ for this OZI-violating process will have a strength of the same order as the 200 MeV $`\mathrm{\Sigma }^{*}-\mathrm{\Sigma }`$ splitting (which is also driven purely by kaon exchange). Empirically $`A_{OZI}`$ is very tiny, of the order of 10 MeV, in all known meson nonets (except the pseudoscalars); amplitudes an order of magnitude larger would lead to dramatic violations of the OZI rule. Fig. 
6: OZI-violating mixing in isoscalar mesons via the exchange of a $`q\overline{q}^{\prime }`$ meson. The mesons thus produce some disastrous conclusions for the Goldstone boson exchange hypothesis. The first is the very unaesthetic conclusion that two totally distinct mechanisms are in operation producing meson and baryon hyperfine interactions: OGE in mesons and OPE in baryons. The second is the virtual impossibility of having strong OGE matrix elements in mesons without also producing strong OGE matrix elements in baryons, in conflict with the basic hypothesis of that model. The third is that the OPE mechanism produces unacceptably large OZI violation in meson nonets. As shown in Figure 5(b), experiment actually provides strong evidence in support of the dominance of OGE and not OPE in the baryons themselves! Fig. 5(b) is the baryon analog of Fig. 5(a), where once again one knows rigorously that the hyperfine interactions are controlled by the matrix elements of $`\vec{\sigma }_Q\cdot \vec{B}/2m_Q`$ for heavy quarks. It is clear from this figure that in the heavy quark limit the OPE mechanism is not dominant: exchange of the heavy pseudoscalar meson $`P_Q`$ would produce a hyperfine interaction that scales with the heavy quark mass like $`1/m_Q^2`$, while for heavy-light baryons the splittings behave like $`1/m_Q`$, as demanded by heavy quark theory (with which the OGE mechanism is automatically consistent). It is difficult to look at this diagram and not see a smooth evolution of this $`1/m_Q`$ behaviour from $`m_c`$ to $`m_s`$ to $`m_d`$, where by SU(3) symmetry $`\mathrm{\Sigma }_{SU(3)}^{*}-\mathrm{\Lambda }_{SU(3)}=\mathrm{\Delta }-N`$, the splitting under discussion here. One might try to escape this conclusion by arguing that between $`m_c`$ and $`m_s`$ the OGE-driven $`1/m_Q`$ mechanism turns off and the $`1/m_Q^2`$ OPE mechanism turns on. From the baryon spectra alone, one cannot rule out this baroque possibility. However, in the heavy-light mesons of Fig. 
5(a) there is no alternative to the OGE mechanism, and since the $`Q\overline{q}`$ interaction continues to grow like $`1/m_Q`$ as $`m_Q`$ gets lighter, so must the $`Qq`$ interaction. I see no escape from the conclusion that OGE is dominant in all ground state hyperfine interactions. The preceding discussion of the generic problems of the OPE mechanism was an extended digression designed to dampen any remaining enthusiasm for one of the highlighted conclusions of Ref. . It is not logically connected to, and should not distract the reader from, the main argument presented in this Comment. Valence QCD is a potentially interesting field-theoretic approximation to QCD from which we could in principle learn a great deal about the physics driving hadron structure and dynamics. My criticisms of the conclusion extracted from vQCD in Ref. about the physics of hyperfine interactions are based on the fact that vQCD has a very different spectrum from qQCD and nature. It is thus unclear whether the reduced $`\mathrm{\Delta }-N`$ splitting of vQCD is due to a diminished hyperfine interaction or to a change in the short distance structure of the hadrons. Until it is better understood, vQCD must only be used with great care in drawing conclusions about the physics of hadrons. ACKNOWLEDGEMENTS This work was supported by DOE contract DE-AC05-84ER40150 under which the Southeastern Universities Research Association (SURA) operates the Thomas Jefferson National Accelerator Facility.
# Role of spectroscopic factors in the potential-model description of the 7Be(𝒑,𝜸)8B reaction \[ ## Abstract In standard potential-model descriptions of the $`{}_{}{}^{7}\mathrm{Be}(p,\gamma ){}_{}{}^{8}\mathrm{B}`$ reaction the $`{}_{}{}^{7}\mathrm{Be}+p`$ spectroscopic factors $`𝒮`$ appear in the cross section. We argue that the microscopic substructure effects which are represented by $`𝒮`$ are short-ranged and cannot affect the asymptotic normalization of the wave function. We believe that the standard way of describing reactions in a potential model may be incorrect and the low-energy cross section should not depend on $`𝒮`$ in the case of external capture reactions, like $`{}_{}{}^{7}\mathrm{Be}(p,\gamma ){}_{}{}^{8}\mathrm{B}`$ \] Recently the $`{}_{}{}^{7}\mathrm{Be}(p,\gamma ){}_{}{}^{8}\mathrm{B}`$ radiative capture reaction has been studied extensively both experimentally and theoretically. This interest is rooted in the fact that the <sup>8</sup>B produced by this reaction in our sun is the main source of the high-energy solar neutrinos . The high-energy solar neutrino flux is directly proportional to the low-energy ($`E_{\mathrm{cm}}=20`$ keV) astrophysical cross section factor, $`S_{17}(E)`$, of $`{}_{}{}^{7}\mathrm{Be}(p,\gamma ){}_{}{}^{8}\mathrm{B}`$. Among the recent experimental results are a new direct measurement of the low-energy cross section by using a <sup>7</sup>Be target , the determination of $`S_{17}(E)`$ from the inverse process, the Coulomb dissociation of <sup>8</sup>B , and the utilization of transfer reactions in order to determine the asymptotic normalization constant of the bound-state <sup>8</sup>B wave function , which in turn can be used to extract $`S_{17}(0)`$. On the theoretical side, the capture reaction has been studied recently in $`{}_{}{}^{7}\mathrm{Be}+p`$ potential models , in three-body models , in shell-models , and in microscopic cluster models . 
Interesting results came also from the R-matrix study of the experimental data, from the investigation of the energy-dependence of $`S(E)`$, and from the studies of the asymptotic normalization constants of the <sup>8</sup>B wave function. Yet, despite all these and other advances, $`S_{17}(0)`$ is still the most uncertain input parameter in solar models. In the present work we would like to clarify a few points of this problem in connection with the potential models. At solar energies the $`{}_{}{}^{7}\mathrm{Be}(p,\gamma ){}_{}{}^{8}\mathrm{B}`$ reaction takes place deep below the Coulomb barrier. This means that the capture cross section receives contributions almost exclusively from those parts of the initial scattering and final bound state wave functions that describe large $`{}_{}{}^{7}\mathrm{Be}p`$ separations. At low energies the scattering wave functions are almost fully known, as the phase shifts practically coincide with the hard core phase shifts. The asymptotic behavior of the bound state wave function is also known, as it is proportional to the Coulomb-Whittaker function, $$\chi _I^{\mathrm{bound}}(r)=\overline{c}_IW_{\eta ,l+1}^+(kr)/r,\qquad r\rightarrow \infty ,$$ (1) where $`\eta `$ is the Coulomb parameter, $`l`$ is the relative angular momentum between <sup>7</sup>Be and $`p`$, and $`I=1,2`$ is the channel spin, which comes from the coupling of the <sup>7</sup>Be spin and the spin of the proton. Therefore, the zero-energy $`{}_{}{}^{7}\mathrm{Be}(p,\gamma ){}_{}{}^{8}\mathrm{B}`$ cross section depends only on the asymptotic normalization constants $`\overline{c}_I`$. Using a generic formula, which is specified later for the various models, this means $$S_{17}(0)=N(\overline{c}_1^2+\overline{c}_2^2)\mathrm{eVb}.$$ (2) (We note that the different notations followed in various papers are slightly confusing. Our $`\overline{c}`$ quantity is the equivalent of $`\beta `$ in Ref. , while the spectroscopic factor $`𝒮`$, see below, is denoted by $`𝒥`$ there.)
We would like to emphasize that we use the Eq. (1) definition of the asymptotic normalization constant in all cases. In the case of a potential-model description, $`\chi `$ is the so-called single-particle wave function, while in the case of a microscopic model, $`\chi `$ is the wave function describing the relative motion between <sup>7</sup>Be and $`p`$. The precise value of $`N`$ depends on the details of the scattering wave function. In the case of our scattering state coming from the microscopic cluster model , which corresponds roughly to $`r_c=2.4`$ fm hard-sphere radius , $`N`$ is 37.8, therefore $$S_{17}(0)^{\mathrm{micr}.}=37.8(\overline{c}_1^2+\overline{c}_2^2)\mathrm{eVb}.$$ (3) Note that in Refs. the integration of the cross section was not done to a sufficiently large radius. All $`S_{17}(0)`$ values given there should be increased by roughly 0.4 eVb. Note also that in Ref. the hard-core scattering states, used in a potential model, were chosen to be in sync with those coming from Refs. , but once again the integration distance was too short. Thus, the corrected $`S_{17}(20\mathrm{keV})=37.2(\overline{c}_1^2+\overline{c}_2^2)`$ eVb relation found there, is in agreement with Eq. (3). The peripheral nature of the $`{}_{}{}^{7}\mathrm{Be}(p,\gamma ){}_{}{}^{8}\mathrm{B}`$ reaction is illustrated in Fig. 1. A schematic local potential is shown between <sup>7</sup>Be and $`p`$. One can see that at 20 keV, which is the most effective reaction energy in our sun, the proton hits the Coulomb barrier at about 250 fm. It has to tunnel through a huge barrier in order to allow the capture to take place. Therefore, the cross section is really sensitive almost exclusively to the asymptotic parts of the wave functions. One can also see in Fig. 1 that the asymptotic normalization of the bound state wave function is most sensitive to the radius of the $`{}_{}{}^{7}\mathrm{Be}p`$ potential. A slightly bigger radius (shown by the long-dashed line in Fig. 
1; for the sake of illustration the change in the radius is strongly exaggerated) leads to a smaller and narrower barrier, and thus to a significantly larger tunneling probability, which gives a larger cross section. In Eqs. (2-3) there is one point which is not yet specified, namely the full normalization of the bound-state wave function. In the case of an 8-body model of the reaction, the full 8-body wave function should be normalized to unity. However, the $`{}_{}{}^{7}\mathrm{Be}p`$ relative motion wave functions, as one-dimensional functions which contain the $`\overline{c}`$ constants, are obviously not normalized the same way (see below). In the case of a potential model, the effects of the internal structure of <sup>7</sup>Be, which are neglected in the model, have to be taken into account in some implicit way. Conventionally this is done through the spectroscopic factors $`𝒮`$. The spectroscopic amplitude functions, $`g`$, of the $`{}_{}{}^{7}\mathrm{Be}+p`$ configuration in <sup>8</sup>B are given as $$g(𝐫)=\langle \mathrm{\Psi }_𝐫|\mathrm{\Psi }^{{}_{}{}^{8}\mathrm{B}}\rangle ,$$ (4) where $`\mathrm{\Psi }^{{}_{}{}^{8}\mathrm{B}}`$ is the normalized antisymmetrized 8-body wave function of <sup>8</sup>B, while $`\mathrm{\Psi }_𝐫`$ is defined as $$\mathrm{\Psi }_𝐫=𝒜\left[\mathrm{\Phi }^{{}_{}{}^{7}\mathrm{Be}}\mathrm{\Phi }^p\delta (𝐫-𝝆)\right].$$ (5) Here $`𝒜`$ is the intercluster antisymmetrizer between <sup>7</sup>Be and $`p`$, $`\mathrm{\Phi }^{{}_{}{}^{7}\mathrm{Be}}`$ and $`\mathrm{\Phi }^p`$ are the normalized antisymmetrized internal wave function of <sup>7</sup>Be and a spin-isospin eigenstate of the proton, respectively, and $`𝝆`$ is the relative coordinate between <sup>7</sup>Be and $`p`$. The quantum numbers carried by $`g`$, like the channel spin $`I`$, the angular momentum coupling, etc., are not indicated here for simplicity.
The spectroscopic factor is given as $$𝒮=\int |g(𝐫)|^2𝑑𝐫.$$ (6) This quantity is a measure of the cluster substructure effects which are neglected in a potential model, and can be calculated using a microscopic model, like the shell model or the cluster model, or can be extracted from nuclear reaction measurements. The various quantities that are calculated from potential models, like the decay widths of resonances, cross sections, etc., contain $`𝒮`$ in order to take into account the effects of the neglected microscopic substructure. In other words, the norm of the potential-model wave function is assumed to be different from unity, depending on how large the neglected microscopic effects are. In potential models, the $`{}_{}{}^{7}\mathrm{Be}(p,\gamma ){}_{}{}^{8}\mathrm{B}`$ cross section contains $`𝒮`$ and thus the Eqs. (2-3) expressions are modified as $$S_{17}(0)^{\mathrm{pot}.}=37.8(\overline{c}_1^2𝒮_1+\overline{c}_2^2𝒮_2)\mathrm{eVb}$$ (7) (here the same hard-core scattering state is assumed as in the microscopic model), where $`𝒮_1`$ and $`𝒮_2`$ are the channel-spin spectroscopic factors. It is important to note that this way of taking into account the microscopic effects relies on the assumption that these effects can be handled separately from the calculation of the matrix element of the cross section. We note also that most of the potential-model calculations generate the $`I=1`$ and $`I=2`$ channel wave functions of the <sup>8</sup>B ground state separately, not in a correct coupled-channel description. In any case, the total single-particle wave function (containing both the $`I=1`$ and $`I=2`$ components) is assumed here to be normalized to unity. We would like to argue, however, that the conventional definition of the cross section in the potential model might be incorrect.
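The numerical stakes of this argument can be made concrete. Below is a minimal sketch in Python that evaluates the microscopic prescription of Eq. (3) and the conventional potential-model prescription of Eq. (7), using the asymptotic normalization constants ($`\overline{c}_1=0.302`$, $`\overline{c}_2=0.763`$) and channel-spin spectroscopic factors ($`𝒮_1=0.205`$, $`𝒮_2=0.915`$) quoted later in the text for the microscopic cluster model:

```python
# Zero-energy S-factor of 7Be(p,gamma)8B from the asymptotic
# normalization constants (ANCs), following Eqs. (3) and (7).
N = 37.8                 # eV b; fixed by the hard-core scattering state
c1, c2 = 0.302, 0.763    # channel-spin I=1,2 ANCs (cluster model values)
S1, S2 = 0.205, 0.915    # channel-spin spectroscopic factors

# Eq. (3): microscopic prescription -- no spectroscopic factors
S17_micro = N * (c1**2 + c2**2)

# Eq. (7): conventional potential-model prescription
S17_pot = N * (c1**2 * S1 + c2**2 * S2)

print(f"S17(0) without S factors: {S17_micro:.1f} eVb")  # 25.5 eVb
print(f"S17(0) with    S factors: {S17_pot:.1f} eVb")    # 20.8 eVb
```

The roughly 20% reduction introduced by the spectroscopic factors is precisely the effect which, for a peripheral capture reaction, should be absent according to the argument developed here.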
The effects of the microscopic substructure, which lead to the appearance of the spectroscopic factors in the cross section formula, are short-distance corrections because they originate from short-range effects, like the antisymmetrization. Therefore, these effects should only affect the internal parts of the wave functions and should not modify the asymptotic normalization constants. To illustrate this, in Fig. 2 we show the $`{}_{}{}^{7}\mathrm{Be}p`$ relative-motion wave function $`\chi `$ of <sup>8</sup>B in the $`I=2`$ channel, coming from the microscopic cluster model of Refs. (dashed line). The MN effective nucleon-nucleon interaction was used and the total wave function contained only $`({}_{}{}^{4}\mathrm{He}+{}_{}{}^{3}\mathrm{He})+p`$ terms with cluster size parameters $`\beta =0.4`$ fm<sup>-2</sup>. Note that this relative-motion wave function is the one behind the antisymmetrizer in the <sup>8</sup>B wave function, thus its norm, which is not one, has no physical meaning. Also shown in Fig. 2 are the spectroscopic amplitude $`g(r)`$ (solid line) defined in Eq. (4), and the Coulomb-Whittaker function of the $`{}_{}{}^{7}\mathrm{Be}p`$ relative motion multiplied by the asymptotic normalization constant $`\overline{c}_2=0.763`$ (for $`I=1`$, $`\overline{c}_1=0.302`$), coming from the model (dotted line). The spectroscopic amplitude was calculated using the procedure discussed in Ref. . As one can see, the relative motion function $`\chi `$ and the spectroscopic amplitude $`g`$ coincide beyond $`r\approx 7`$ fm. The difference between the two functions in the internal region gives a measure of the antisymmetrization effect. The effect of taking into account the microscopic substructure in the potential model would be similar (although much smaller, because the norm of $`g`$ is close to one) on the potential-model wave function.
Therefore, it seems to us that the usual way of treating microscopic effects in the potential model, through the spectroscopic factors, cannot be right. Multiplying the potential-model wave function by $`\sqrt{𝒮}`$ modifies it not only in the internal region but asymptotically as well. We suggest that the correct way to take into account the effects of the microscopic substructure in the potential model would be to use the spectroscopic amplitude functions in the expressions of the various matrix elements. In certain cases where the internal parts of the wave functions play the major role, like the decay widths of resonances or the cross sections of non-peripheral reactions, the results would be close to those coming from the conventional definition, because of the Eq. (6) relation. However, in certain cases, like the present peripheral reaction cross section, where only the asymptotic parts of the wave functions are important, there would be no effect coming from the difference between the wave function and $`g`$. If our suggestion turns out to be correct, then the Eq. (7) cross section formula for potential models should not contain the spectroscopic factors. We realize of course that our present arguments are rather heuristic. A thorough study of the connection between microscopic and macroscopic approaches to capture reactions, similar to that presented in Ref. for nuclear structure, would be highly welcome. Using the spectroscopic amplitudes, one of which is shown in Fig. 2, we calculated the spectroscopic factors predicted by our present cluster model. They are $`𝒮_{I_7,I}^J=𝒮_{3/2,2}^2=0.915`$ and $`𝒮_{3/2,1}^2=0.205`$. Here $`I_7`$ and $`I`$ are the spin of <sup>7</sup>Be and the channel spin, respectively, while $`J`$ is the total spin of the <sup>8</sup>B ground state. The spectroscopic factor of the state which contains the excited state of <sup>7</sup>Be ($`I_7=1/2`$) is $`𝒮_{1/2,1}^2=0.25`$ in our model. 
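The spectroscopic-factor bookkeeping in this and the following paragraph is simple enough to verify directly. A sketch (the uncorrected shell-model values are back-computed from the corrected ones quoted in the next paragraph, since only the latter appear in the text):

```python
# Bookkeeping check: total cluster-model spectroscopic factor and the
# A/(A-1) center-of-mass correction discussed in the text.
S_32_2, S_32_1 = 0.915, 0.205      # channel-spin components for I7=3/2, J=2
S_total = S_32_2 + S_32_1
print(f"total S = {S_total:.2f}")  # 1.12

A = 8                              # number of nucleons in 8B
com_factor = A / (A - 1)           # 8/7
corrected = [1.177, 1.166, 1.143]  # corrected shell-model values (CK, B, K)
uncorrected = [s / com_factor for s in corrected]
print([f"{s:.3f}" for s in uncorrected])  # ['1.030', '1.020', '1.000']
```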
The total spectroscopic factor corresponding to $`I_7=3/2`$ and $`J=2`$ is $`𝒮=𝒮_{3/2,2}^2+𝒮_{3/2,1}^2=1.12`$, in good agreement with the shell-model predictions of Ref. . We note that the spectroscopic factors given in Refs. erroneously do not take into account the $`A/(A-1)=8/7`$ center-of-mass correction factor. The corrected numbers are $`𝒮=1.177`$, 1.166, and 1.143 for the CK, B, and K shell-model interactions, again in good agreement with Ref. . We calculated the spectroscopic factors also for the other models and interactions used in Refs. and found them to be between 1.07 and 1.12. One can observe a correlation between $`S_{17}(0)`$ and $`𝒮`$ if the peak position of $`g`$ is roughly the same: a larger $`𝒮`$ leads to a slight increase of $`g`$ in the asymptotic region, and thus to a bigger $`S_{17}(0)`$. All our results with the MN and MHN interactions fit into this trend. In the case of the V2 interaction the peak position of $`g`$ is shifted to a slightly larger radius, which makes the corresponding $`𝒮=1.07`$–$`1.1`$ small compared to the large $`S_{17}(0)=28.8`$–$`29.8`$ eVb cross section predicted by the V2 force. We note that our spectroscopic factors are slightly smaller than, but compatible with, the shell-model result, $`𝒮=1.1`$–$`1.2`$. A conservative estimate of $`𝒮`$, based on the shell model and the cluster model, could be around $`1.05`$–$`1.2`$. In summary, we presented circumstantial evidence indicating that the conventional way of treating microscopic substructure effects in potential models may be wrong. We believe that the correct way of handling those effects would be the use of the spectroscopic amplitude functions, instead of the potential-model wave functions and spectroscopic factors. This would lead to a zero-energy cross section for peripheral reactions which is independent of the spectroscopic factors. ###### Acknowledgements.
This work was supported by OTKA Grants F019701, F033044, and D32513, and by the Bolyai Fellowship of the Hungarian Academy of Sciences. Some preliminary investigations related to the present work benefited from discussions with R. G. Lovas and K. Varga.
# Forming the Dusty Ring in HR 4796A
## 1. INTRODUCTION
HR 4796A is a nearby A star with a large infrared (IR) excess. Jura (1991) measured the far-IR excess of this wide binary using IRAS data. Jura et al. (1995, 1998) associated the excess with the A0 primary and derived the ratio of the far-IR to stellar luminosity, $`L_{FIR}/L_{\ast }\sim 5\times 10^{-3}`$. In 1998, two groups reported extended thermal emission at $`\lambda `$ = 20 $`\mu `$m from a dusty disk with an inner hole at $`\sim `$ 40–70 AU assuming the Hipparcos distance of 67 $`\pm `$ 3.5 pc (Jayawardhana et al. (1998); Koerner et al. (1998)). Observations with NICMOS aboard HST have revealed a thin annulus of scattered light, with a width of $`\sim `$ 17 AU at a distance of $`\sim `$ 70 AU from the central star (Schneider et al. (1999)). With an age of $`\sim `$ 10 Myr (Stauffer et al. (1995); Barrado y Navascues et al. (1997)), the A0 star is older than most pre–main-sequence stars and younger than stars like $`\beta `$ Pic and $`\alpha `$ Lyr with ‘debris disks’. The dusty ring in HR 4796A challenges theories of planet formation. In most planetesimal accretion calculations, planet-sized objects do not form on short timescales at large distances from the central star. Kenyon & Luu (1999; KL99 hereafter) estimate formation times of 10–40 Myr for Pluto at 35 AU from the Sun. Achieving shorter timescales at 70 AU in HR 4796A requires large initial masses, which might conflict with masses derived from IR observations. In the inner Solar System, planet formation cannot be confined to a narrow ring, because high velocity objects in adjacent annuli interact and ‘mix’ planetary growth over a large area (Weidenschilling et al. (1997)). This problem may be reduced at larger distances from the central star, where planetary growth is “calmer”. Our goal in this paper is to develop planetesimal accretion models that can lead to the dusty ring observed in HR 4796A.
We begin in §2 with Monte Carlo calculations to constrain the geometry and optical depth of dust in the ring. In §3, we derive plausible initial conditions which produce the observed dust distribution on 10 Myr timescales. These models also satisfy constraints on the dust mass from IRAS observations and lead to a self-consistent picture for ring formation. We conclude in §4 with a brief summary and discussion of the implications of this study for planet formation in other star systems.
## 2. MODEL IMAGES
Current data constrain the geometry and optical depth of the ring. Near-IR images measure the amount of scattered light from the ratio of the 1.1–1.6 $`\mu `$m radiation to the stellar luminosity, $`L_{NIR}/L_{\ast }\sim 2\times 10^{-3}`$ (Schneider et al. (1999)). The far-IR luminosity limits the amount of stellar radiation absorbed and reradiated. To construct a physical model, we assume an annulus of width $`\mathrm{\Delta }a`$ and height $`z`$ at a distance $`a`$ = 70 AU from the central star. The luminosity ratios depend on the solid angle $`\mathrm{\Omega }/4\pi =2\pi az/4\pi a^2=z/2a`$, the radial optical depth $`\tau `$, and the albedo $`\omega `$: $`L_{NIR}/L_{\ast }`$ = $`\tau \omega (z/2a)`$ and $`L_{FIR}/L_{\ast }`$ = $`\tau (1-\omega )(z/2a)`$. These equations assume gray opacity and scattering in the geometric optics limit. If the annulus contains planetesimals and dust in dynamical equilibrium, $`z/\mathrm{\Delta }a\approx 1`$ (Hornung et al. (1985)). Anticipating the results of our coagulation calculations, where $`z/a\sim 10^{-2}`$, we then have $`\omega \approx `$ 0.3 – close to observed values in $`\beta `$ Pic (Backman & Paresce (1993)) – and $`\tau \approx 1`$. We construct scattered light images using a 3D Monte Carlo code (Wood & Reynolds (1999)) with forced first scattering (Witt 1977) and a “peeling-off” procedure (Yusef-Zadeh, Morris, & White 1984).
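The two luminosity-ratio relations can be inverted for the albedo and radial optical depth. A sketch in Python, using the measured luminosity ratios and the anticipated $`z/a\approx 10^{-2}`$:

```python
# Invert  L_NIR/L* = tau*omega*(z/2a)  and  L_FIR/L* = tau*(1-omega)*(z/2a)
# for the albedo omega and the radial optical depth tau.
ratio_nir = 2e-3     # L_NIR/L*, from the NICMOS scattered-light data
ratio_fir = 5e-3     # L_FIR/L*, from the IRAS excess
z_over_a = 1e-2      # anticipated from the coagulation calculations

omega = ratio_nir / (ratio_nir + ratio_fir)     # scattered fraction
tau = (ratio_nir + ratio_fir) / (z_over_a / 2)  # since z/2a = (z/a)/2

print(f"omega ~ {omega:.2f}, tau ~ {tau:.1f}")  # omega ~ 0.29, tau ~ 1.4
```

These values reproduce the $`\omega \approx 0.3`$ and $`\tau \approx 1`$ quoted above.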
We adopt a dust number density, $`n=n_0\mathrm{e}^{-z^2/2H^2}\mathrm{e}^{-(a-70)^2/2A^2}`$, where the scale height $`H`$ and scale length $`A`$ are in AU. We assume $`\omega `$ = 0.3, isotropic scattering (see Figure 12 of Augereau et al. (1999)), and adjust $`\tau `$ until the models yield $`L_{NIR}/L_{\ast }`$ = $`1.5\times 10^{-3}`$ for an input $`H`$ and $`A`$. Model images with $`\omega \tau `$ = constant are identical in the optically thin limit. Figure 1 compares several models with the NICMOS 1.1 $`\mu `$m image (from FITS data kindly sent by G. Schneider). We convolved Monte Carlo images with a gaussian point-spread function with FWHM = 0$`^{\prime \prime }.`$12 to approximate the 0$`^{\prime \prime }.`$12 resolution of NICMOS (Schneider et al. (1999)). Model images with $`H>`$ 5 AU (FWHM = 14 AU) or $`A>`$ 10 AU (FWHM = 27 AU) are more extended than the data (Augereau et al. (1999)). Our preferred model with $`H=0.5`$ AU, $`A=5`$ AU, and $`\omega \tau `$ = 0.25 reproduces the size and shape of the NICMOS image as well as the limb brightening observed towards the ring edges. These results match NICMOS flux ratios best for our adopted geometry; larger $`H`$ implies smaller $`\omega \tau `$. The 3$`\sigma `$ limit, $`\omega \tau `$ = 0.12–0.35, agrees with previous estimates (cf. Koerner et al. (1998); Schneider et al. (1999); Augereau et al. (1999)). We disagree, however, with the $`\tau \sim 10^{-3}`$ of Schneider et al.; their result is valid only for scattering in a spherical shell.
## 3. COAGULATION MODEL
To calculate dust evolution in HR 4796A, we use a coagulation code based on the particle-in-a-box method (KL99). This formalism treats planetesimals as a statistical ensemble of bodies with a distribution of horizontal and vertical velocities about Keplerian orbits (Safronov (1969)). We begin with a size distribution of $`N_i`$ bodies having total mass $`M_i`$ in each of $`i`$ mass batches.
Collisions among these bodies produce (i) growth through mergers along with cratering debris for low impact velocities or (ii) catastrophic disruption into numerous small fragments for high impact velocities. Inelastic collisions, long range gravitational interactions (dynamical friction and viscous stirring), and gas drag change the velocities of the mass batches with time. The code has been tested against analytic solutions of the coagulation equation and published calculations of planetesimal growth. Although inappropriate for the last stages of planet formation, our approach approximates the early stages well (Kokubo & Ida (1996)). We model planetesimal growth in an annulus of width $`\mathrm{\Delta }a`$ = 12 AU centered at $`a`$ = 70 AU. The central star has a mass of 2.5 $`\mathrm{M}_{\odot }`$. The input size distribution has equal mass in each of 38 mass batches with initial radii $`r_i`$ = 1–80 m. For a Minimum Mass Solar Nebula with mass $`M_{MMSN}`$, the total mass in the annulus is $`M_0\approx `$ 15 $`M_E`$; the initial number of bodies with $`r_i`$ = 1 m is $`N_0\approx 3\times 10^{20}`$. All batches start with the same initial velocity. The mass density $`\rho _0`$ = 1.5 g cm<sup>-3</sup>, intrinsic strength $`S_0=`$ $`2\times 10^6`$ erg g<sup>-1</sup>, and other bulk properties of the grains are adopted from earlier work (see KL99). Planetesimal growth at 70 AU follows the evolution described previously (KL99). The 80 m bodies first grow slowly into 1 km objects. During this slow growth phase, frequent collisions damp the velocity dispersion of all bodies. “Runaway growth” begins when the gravitational range of large objects exceeds their geometric cross-section. These bodies grow from 1 km up to $`\sim `$ 100 km in several Myr. During runaway growth, collisional debris, dynamical friction, and viscous stirring increase the velocity dispersion of small bodies from $`\sim `$ 1 m s<sup>-1</sup> up to $`\sim `$ 40 m s<sup>-1</sup>.
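The quoted initial particle number can be checked against the stated bulk properties and the equal-mass batch assumption. A sketch (the Earth mass value is supplied here; it is not given in the text):

```python
import math

# Consistency check: number of r = 1 m bodies for M0 ~ 15 Earth masses
# spread over 38 equal-mass batches, with bulk density 1.5 g cm^-3.
M_earth = 5.97e27          # g (assumed value, not from the text)
M0 = 15 * M_earth          # total annulus mass, g
n_batches = 38
rho = 1.5                  # g cm^-3
r = 100.0                  # cm (a 1 m body)

m_body = rho * (4.0 / 3.0) * math.pi * r**3   # mass of one body, ~6.3e6 g
N0 = (M0 / n_batches) / m_body
print(f"N0 ~ {N0:.1e}")    # ~3.8e+20, consistent with the quoted ~3e20
```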
This evolution reduces the gravitational range of the 100 km objects and ends runaway growth. The largest objects then grow slowly to 1000+ km sizes. Figure 2(a) shows the growth of the largest object in several models. For $`M_0`$ = 10 $`M_{MMSN}`$ and $`e_0`$ = $`10^{-3}`$, Pluto-sized objects form in $`t_P`$ = 2.1 Myr at $`a`$ = 35 AU, 13 Myr at 70 AU, and 93 Myr at 140 AU. Models with smaller $`M_0`$ take longer to make “Pluto”. Plutos form more quickly for $`e_0<10^{-3}`$, because gravitational focusing factors are larger. Figure 2(b) shows the evolution of the scale height $`H`$ for small objects. Initially, $`H=2\pi a\mathrm{sin}i\approx 0.003a`$ for $`e_0\approx 10^{-3}`$. Collisional damping cools the bodies during the slow-growth phase; $`H`$ remains small. $`H`$ increases dramatically during runaway growth, when dynamical processes heat up the smallest bodies. Once runaway growth ends, $`H`$ slowly increases to 0.3–0.6 AU independent of $`M_0`$, $`e_0`$, and other input parameters. When $`H`$ begins to increase, high velocity collisions produce numerous “dust grains” with sizes $`\lesssim `$ 1 m. We do not follow explicitly the evolution of these bodies. Instead, we assume that collisional debris is (i) swept up by 1 m or larger objects, (ii) ejected by radiation pressure, or (iii) dragged inwards by the Poynting-Robertson effect. Grains with sizes exceeding 4–5 $`\mu `$m are stable against radiation pressure (Jura et al. (1998); Augereau et al. (1999)). Poynting-Robertson drag reduces the mass in small grains on a timescale $`t_{PR}\approx `$ 1.0 Myr ($`r_i/4\mu `$m). With the short collision times, $`\sim 10^5`$ yr, in our model annulus, 1 Myr seems a reasonable estimate of the timescale for collisions to produce 4 $`\mu `$m grains which are removed by radiative processes. For this paper, we calculate the accretion explicitly and adopt a 1 Myr timescale for dust removal. Figure 2(c) shows the dust mass as a function of time.
The results are not sensitive to the adopted mass distribution for grains with $`r_i\lesssim `$ 4 $`\mu `$m or to factor of 2–3 variations in the removal timescale. The dust mass is initially large due to the starting conditions. The dust mass decreases with time, because (i) collisional damping of the smaller bodies leads to less collisional debris and (ii) radiative processes and accretion by large bodies remove dust. Once runaway growth begins, collisions between small bodies produce more dust. The dust mass then reaches a rough equilibrium between collision debris and dust removed by radiation forces and by the larger bodies. These results indicate that large dust masses correlate with runaway growth and the formation of 1 or more Plutos in the outer parts of the disk. To predict the amount of radiation absorbed and scattered by dust and larger bodies, we compute $`\tau `$ from the model size distribution. We assume the geometric optics limit because $`r_i\gg \lambda `$. For the large bodies $`\tau =\sum _{i=1}^Nn_i\sigma _i\mathrm{\Delta }a`$, where $`n_i`$ is the number density in mass batch $`i`$, $`\sigma _i`$ is the extinction cross-section, and $`N`$ is the number of mass batches. We adopt $`\sigma _i=2\pi r_i^2`$ and a volume $`V_i=2\pi a\mathrm{\Delta }aH_i`$ to compute $`n_i=N_i/V_i`$ and hence $`\tau `$ for material with $`r_i\ge `$ 1 m. Estimating $`\tau `$ for small particles requires an adopted cumulative size distribution, $`N_C\propto r_i^{-q}`$. We consider three choices: (i) $`q=2.5`$, the collisional limit for coagulation; (ii) $`q=3`$, equal mass per mass interval; and (iii) $`q=3.5`$, the approximate distribution for grains in the interstellar medium. Our calculations produce $`q\approx 2.7`$ for 1–100 m bodies. We expect a slightly steeper mass distribution for smaller bodies, because collisions between smaller bodies produce fewer mergers and more debris. Figure 2(d) shows how $`\tau `$ evolves for a single model.
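The sensitivity of $`\tau `$ to the cumulative size index $`q`$ can be illustrated with a back-of-the-envelope integral over the grain population. A sketch in Python under the geometric-optics assumption, taking grains from the 4 $`\mu `$m radiation-pressure cutoff up to 1 m (the normalization is arbitrary; only ratios between $`q`$ values are meaningful):

```python
import math

def opacity_per_mass(q, r_min=4e-4, r_max=100.0):
    """Relative geometric cross-section per unit mass for grains with
    cumulative size distribution N_C ~ r^-q (differential dN/dr ~ r^-(q+1)).
    Radii in cm; only ratios between different q values are meaningful."""
    def power_int(p):
        # integral of r^p dr from r_min to r_max
        if abs(p + 1) < 1e-12:
            return math.log(r_max / r_min)
        return (r_max ** (p + 1) - r_min ** (p + 1)) / (p + 1)
    sigma = power_int(1 - q)   # ~ integral of r^2 dN  (total cross-section)
    mass = power_int(2 - q)    # ~ integral of r^3 dN  (total mass)
    return sigma / mass

base = opacity_per_mass(2.5)
for q in (2.5, 3.0, 3.5):
    print(f"q = {q}: relative opacity {opacity_per_mass(q) / base:.0f}")
```

Per unit dust mass, the $`q=3.5`$ population carries roughly two orders of magnitude more geometric cross-section than the $`q=2.5`$ one; this is why the small grains can dominate the opacity for steep distributions but not when most of the mass sits in the largest bodies.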
The large bodies initially have modest radial optical depth, $`\tau _L\approx `$ 0.2. This optical depth decreases with time, except for a brief period when runaway growth produces 10–100 km objects with small scale height above the disk midplane. The large bodies are transparent once a Pluto forms. The small grains are also initially opaque. This dust is transparent at late times if most of the mass is in the largest grains, $`q\lesssim `$ 2.8. The dust is opaque for $`q\gtrsim `$ 3. Table 1 summarizes results for various initial conditions. Models with $`M_0\approx `$ 10–20 $`M_{MMSN}`$ and $`e_0\approx 10^{-4}`$–$`10^{-3}`$ achieve $`\tau \approx 1`$ in 10 Myr. Less massive disks produce less dust on longer timescales. The results are not sensitive to other input parameters, including the size distribution and the bulk properties of the bodies. Table 1 also shows why dust in HR 4796A lies in a ring. In disks with surface density $`\mathrm{\Sigma }\propto a^{-3/2}`$, the Pluto formation timescale (Pluto is a handy reference: 1000+ km objects form roughly in the middle of the rapid increase in $`H`$ which produces large dust masses) is $`t_P\approx `$ 13 Myr $`(M_0/10M_{MMSN})^{-1}`$ $`(a/70\mathrm{AU})^{2.7}`$. Once an annulus at $`a`$ begins to form dust, material at $`a+\mathrm{\Delta }a`$ must wait a time, $`\mathrm{\Delta }t/t_p\approx 2.7\mathrm{\Delta }a/a`$, to reach the same state. This result sets a hard outer limit to the ring, $`\mathrm{\Delta }a/a\approx 0.4\mathrm{\Delta }t/t_p\approx `$ 0.1–0.2, if $`\mathrm{\Delta }t`$ is the time for $`H`$ to double in size during runaway growth, 2–3 Myr. We expect a hard inner edge, because particle velocities reach the shattering limit of $`\sim `$ 100 m s<sup>-1</sup> (KL99) or planets sweep up the dust (e.g., Pollack et al. (1996)) or both.
## 4. DISCUSSION AND SUMMARY
Our results indicate that the dusty ring in HR 4796A is a natural outcome of planetesimal evolution. Planet formation at 70 AU in 10 Myr is possible with an initial disk mass of 10–20 $`M_{MMSN}`$.
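The scaling relation for the Pluto formation timescale can be checked against the simulation times quoted in §3 (2.1, 13, and 93 Myr at 35, 70, and 140 AU). A sketch:

```python
def t_pluto(a_au, m0_mmsn=10.0):
    """Pluto (1000+ km body) formation timescale in Myr, from the
    scaling t_P ~ 13 Myr (M0 / 10 M_MMSN)^-1 (a / 70 AU)^2.7."""
    return 13.0 * (m0_mmsn / 10.0) ** -1 * (a_au / 70.0) ** 2.7

for a in (35, 70, 140):
    print(f"a = {a:3d} AU: t_P ~ {t_pluto(a):.1f} Myr")
# quoted simulation values: 2.1, 13, and 93 Myr (agreement to ~10%)
```

The pure power law slightly underpredicts the 140 AU run, but reproduces the overall trend that drives the ring confinement argument.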
Dust production associated with planet formation is then confined to a ring with $`\mathrm{\Delta }a\approx `$ 7–15 AU. The optical depth in this ring satisfies current constraints on scattered light at 1–2 $`\mu `$m and on thermal emission at 10–100 $`\mu `$m if the size distribution of the dust is $`N_C\propto r_i^{-q}`$ with $`q\gtrsim 3`$ for $`r_i\lesssim `$ 1 m. Models with disk masses smaller than 10 $`M_{MMSN}`$ fail to produce planets and an observable dusty ring in 10 Myr. An uncertainty in our model is the timescale to produce 1–80 m bodies from small dust grains in a turbulent, gaseous disk. Cuzzi et al. (1993) show that grains grow very rapidly once they decouple from eddies in the disk. The decoupling timescale depends on the unknown disk viscosity at 70 AU. Our model makes several observational predictions. We expect $`L_{NIR}/L_{\ast }`$ = constant for $`\lambda \lesssim `$ 5 $`\mu `$m; current data are consistent with this prediction at the 1.5$`\sigma `$ level. Better measurements of the ring flux at $`\lambda \gtrsim `$ 1.6 $`\mu `$m would test our optical depth assumptions and yield interesting constraints on grain properties. Deep images at $`\lambda \approx `$ 10–20 $`\mu `$m with high spatial resolution should detect material outside the ring. We predict $`\tau \approx `$ 0.1 in large bodies for $`a\gtrsim `$ 80 AU; the surface brightness and temperature of this material should decrease markedly with radius. This material should have negligible mass in small objects, because coagulation concentrates mass in the largest objects when $`H`$ is small. We also expect a flux of dust grains into the central star, although we cannot yet compare quantitative predictions with observations. Future calculations of radiative processes within the ring will address this issue. Applying this HR 4796A model to other stars with circumstellar disks is challenging due to small number statistics and unfavorable circumstances. Nearby companion stars probably influence the dynamics of dusty rings in HD 98800 and HD 141569 (Pirzkal et al.
(1997); Low et al. (1999); Lagrange et al. (2000)). In HR 4796A, the M-type companion lies well outside the ring radius and cannot modify ring dynamics significantly. Older systems like $`\beta `$ Pic and $`\alpha `$ Lyr require time-dependent treatment of dust to allow the ring to spread with time (e.g., Artymowicz (1997)). We plan to incorporate this time-dependent behavior in future calculations to see whether the ring in HR 4796A can evolve into a debris disk (as in e.g. $`\beta `$ Pic and $`\alpha `$ Lyr) on a timescale of 100–200 Myr. The main alternative to in situ ring formation at 70 AU is migration of a planet formed at a smaller radius. Weidenschilling & Marzari (1996) show that gravitational interactions can scatter large objects into the outer disk in less than 1 Myr. Migration reduces the required ring mass by a factor of 10–100. However, the scattered body has a large eccentricity, $`e\approx 0.5`$. Dynamical friction might circularize the orbit in 10 Myr, but would induce large eccentricities in smaller bodies. The width of the dusty ring would probably exceed observational constraints. Future calculations can address these issues. We thank B. Bromley for helping us run our code on the HP Exemplar “Neptune” at JPL and for a generous allotment of computer time through funding from the NASA Offices of Mission to Planet Earth, Aeronautics, and Space Science.
no-problem/9908/astro-ph9908059.html
# Photometric and Kinematic Studies of Open Star Clusters I: NGC 581 (M 103) ## 1 Introduction The shape of the initial mass function (IMF) is an important parameter for understanding the fragmentation of molecular clouds and therefore the formation and development of stellar systems. Besides studies of the Solar neighbourhood (Salpeter salpeter (1955), Tsujimoto et al. tsuji (1997)), work on star clusters plays an important role (Scalo scalo1 (1986)), as age, metallicity, and distance of all cluster stars can be assumed to be equal. Most of the previous studies indicate that the IMF of a star cluster has the shape of power laws $$N(m)\propto m^{-\mathrm{\Gamma }}$$ (1) within different mass intervals. The following are typical values of the exponents, as given in Scalo (scalo2 (1998)): $`\mathrm{\Gamma }=1.3`$ for $`m>10M_{\odot },`$ $`\mathrm{\Gamma }=1.7`$ for $`1M_{\odot }<m<10M_{\odot },\text{ and}`$ (2) $`\mathrm{\Gamma }=0.2`$ for $`m<1M_{\odot }.`$ Knowledge of membership is essential to derive the IMF especially of open star clusters, where the contamination of the data with field stars is a major problem. Two methods for membership determination are in use nowadays, and each of them has its advantages and disadvantages: * The classical method is to distinguish between cluster and field stars by their proper motions: All cluster stars can be expected to move in the same way, whereas the field stars show more widely spread and differently centred proper motions (see e.g. the work of Francic francic (1989)). For each star a membership probability can be specified. To obtain a sufficient epoch difference, old as well as recent photographic plates are needed to measure proper motions, so that this method is limited by the comparably poor sensitivity of the old plates. * With the introduction of CCD imaging to astronomy, statistical field star subtraction became more popular. 
Assuming (almost) identical field star distributions in the cluster region itself and the surrounding area, the distribution of the field stars can be subtracted from that of the (contaminated) cluster area. This makes sense for the fainter stars, but for the bright stars one deals with small-number statistics. Our work combines these two methods of membership determination: The proper motions are investigated for the bright stars of the cluster, whereas the fainter stars are treated with statistical field star subtraction. From the cleaned data we derive the luminosity and mass functions of the cluster. NGC 581 (M 103), which is located at $`\alpha _{2000.0}=1^h33.2^m`$, $`\delta _{2000.0}=+60^{\circ }42\mathrm{^{\prime }}`$, was chosen as a first test object for our technique because the cluster and a sufficiently large field star region can be covered within the field of view of the telescope used. A $`V`$ image of NGC 581 is shown in Fig. 1. In Sect. 2 we present our CCD photometry, and in Sect. 3 a proper motion study of NGC 581 and another cluster which is located on the photographic plates, Trumpler 1. The resulting colour magnitude diagram (CMD) is discussed in Sect. 4, leading to the determination of the IMF of NGC 581 in Sect. 4.4. ## 2 CCD photometry The photometry is based on 22 CCD frames taken in Johnson $`B`$ and $`V`$ filters at the 1m Cassegrain telescope of Hoher List Observatory. The telescope was equipped with a focal reducing system and a 2k $`\times `$ 2k CCD camera called HoLiCam (Sanner et al. holicam (1998)), which has a pixel size of $`15\mu \text{m}\times 15\mu \text{m}`$ and a resolution of $`0.8\mathrm{^{\prime \prime }}\text{pix}^{-1}`$. The field of view covered in this configuration is a circular area with a diameter of $`28\mathrm{^{\prime }}`$. Information about the images used for the photometry is summarized in Table 1. The images of equal exposure times were averaged, resulting in integrated exposure times of 35 min in $`V`$ and 60 min in $`B`$ for the longest exposures. 
The shorter images were used to gain information about the bright stars which were saturated after longer exposure times. After standard image processing the photometry was performed with DAOPHOT II (Stetson stetson (1991)) running under IRAF. After an error selection process, the data were calibrated from instrumental to Johnson magnitudes using the photoelectric sequence of Hoag et al. (navy (1961)). Their standard stars as well as our instrumental magnitude values and their deviations are given in Table 2. We applied the following equations to transform instrumental to apparent magnitudes: $`v-V`$ $`=`$ $`a_0-a_1(B-V)`$ (3) $`(b-v)-(B-V)`$ $`=`$ $`a_0^{}-a_1^{}(B-V)`$ (4) with $`a_0=5.536\pm 0.04,`$ $`a_1=0.094\pm 0.02`$ (5) $`a_0^{}=2.418\pm 0.04,`$ $`a_1^{}=0.139\pm 0.02`$ (6) where $`B`$ and $`V`$ represent apparent and $`b`$ and $`v`$ instrumental magnitudes, respectively. Mean photometric errors in different magnitude intervals are given in Table 3. One can see that the errors for $`B`$ increase more rapidly, a consequence of HoLiCam’s poorer sensitivity at blue wavelengths. Although the total exposure time in $`B`$ was almost twice that in $`V`$, the limiting magnitude of the photometry is defined by the $`B`$ images. From these data, we determined a CMD which is shown in Fig. 2. It represents a total of 2134 stars for which both $`V`$ and $`B-V`$ magnitudes are available. We present the photometric data of all these objects in Table 4. The CMD shows two main sequence features and a scattered giant branch in a colour range around $`B-V\approx 1.4\text{ mag}`$. A more detailed analysis of the CMD is presented in Sect. 4. ## 3 Proper motion study ### 3.1 Data reduction For the proper motion study eight photographic plates from the Bonn Doppelrefraktor (until 1965 located in Bonn, thereafter at Hoher List Observatory) were used, covering an epoch difference of 81 years. 
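The calibration of Eqs. (3)–(6) above can be inverted to recover Johnson magnitudes from the instrumental ones. A minimal sketch, assuming the equations read $`v-V=a_0-a_1(B-V)`$ and $`(b-v)-(B-V)=a_0^{}-a_1^{}(B-V)`$; the function name is illustrative and the coefficient defaults are the published values:

```python
def instrumental_to_johnson(b, v, a0=5.536, a1=0.094, a0p=2.418, a1p=0.139):
    """Invert the calibration equations
        v - V           = a0  - a1  * (B - V)
        (b - v) - (B-V) = a0' - a1' * (B - V)
    and return the apparent (B, V) for instrumental (b, v)."""
    BV = ((b - v) - a0p) / (1.0 - a1p)   # solve the colour equation for B-V
    V = v - a0 + a1 * BV                 # then solve the magnitude equation for V
    return V + BV, V                     # B = V + (B-V)
```

A round trip (apparent to instrumental and back) recovers the input magnitudes exactly, which is a quick self-test of the algebra.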
The 16 cm $`\times `$ 16 cm plates of the $`D=0.3\text{ m}`$, $`f=5.1\text{ m}`$ instrument represent a region of $`1.6^{\circ }\times 1.6^{\circ }`$. The plates were digitized with the PDS machines of the Astronomisches Institut Münster and the Tautenburg Plate Scanner (TPS) of Thüringer Landessternwarte Tautenburg (Brunzendorf & Meusinger TPS (1998)). The positions gained with DAOPHOT II from 15 CCD frames were added to the plate data. Tables 1 and 5 give an overview of the material included in the proper motion study. The celestial positions of the stars were determined from the plate coordinates with respect to six HIPPARCOS stars (ESA hipp (1997)) using an astrometric software package developed by Geffert et al. (geffert (1997)). We obtained good results using quadratic polynomials (6 plate constants) for transforming $`(x,y)`$ to $`(\alpha ,\delta )`$ for the photographic plates and cubic polynomials (10 constants) for the CCD images, respectively. The mean positional deviations of the HIPPARCOS stars after the first reduction step were of the order of $`0.1\mathrm{^{\prime \prime }}`$ in both right ascension and declination. Using the output positions and proper motions of each step as the basis of the next run, we derived a stable solution of proper motions for a total of 2,387 stars on the whole field after four iterations, with a mean error of the proper motions of approx. $`1.1\text{ mas\hspace{0.17em}yr}^{-1}`$ in both coordinates. The differences between our measurements and the HIPPARCOS data (“observed – calculated” or $`O-C`$ values) are listed in Table 6. Compared with our measurements, HIPPARCOS star no. 7155 showed high deviations — probably caused by the double star nature of this object (Wielen et al. wielen (1998)) — and was therefore excluded before the data reduction. We present the proper motions of all stars in Table 7. ### 3.2 NGC 581 228 stars were located within $`10\mathrm{^{\prime }}`$ of the centre of NGC 581. Only these shown in Fig. 
3 were taken into account for the vector point plot diagram. Due to the cluster’s large distance (see Sect. 4.2), the centres of the distributions of field and cluster stars are more or less the same. Therefore, the separation between field and cluster stars is not very apparent. Following the method presented by Sanders (sanders (1971)), we fitted a sharp (for the members) and a more widely spread (for the field stars) Gaussian distribution to the distribution of the stars in the vector point plot diagram. We computed the parameters of the two distributions with a maximum likelihood method. From the values of the distributions at the location of the stars in the diagram we derived the membership probabilities. Due to the small difference between the centres of the two distributions, the stars with proper motions far away from the maximum are clearly identified as non-members, whereas it is difficult to decide which objects of the central region belong to the cluster and which do not. This is reflected in the histogram of the membership probabilities (Fig. 4), with a clear peak at $`P\approx 0`$ and a much less pronounced increase towards $`P=1`$. Fig. 5 shows the positions of the member and non-member stars. With this method 77 stars were classified as members of NGC 581 with a probability of at least 0.8, while 151 objects show a lower membership probability. The average proper motions for cluster stars are: $`\mu _\alpha \mathrm{cos}\delta `$ $`=`$ $`(0.11\pm 1.07)\text{ mas\hspace{0.17em}yr}^{-1}`$ $`\mu _\delta `$ $`=`$ $`(0.95\pm 1.24)\text{ mas\hspace{0.17em}yr}^{-1}`$ and for the field stars: $`\mu _\alpha \mathrm{cos}\delta `$ $`=`$ $`(1.14\pm 4.46)\text{ mas\hspace{0.17em}yr}^{-1}`$ $`\mu _\delta `$ $`=`$ $`(0.91\pm 3.77)\text{ mas\hspace{0.17em}yr}^{-1}.`$ Although the values are very similar, the standard deviations show a large difference between field and cluster stars. 
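The Sanders-style membership probability described above is simply the cluster Gaussian's share of the total mixture density at each star's proper motion. A minimal sketch with isotropic 2-D Gaussians; the centres, widths, and cluster fraction used in the self-test are illustrative stand-ins, not fitted values:

```python
import numpy as np

def gauss2d(mu_a, mu_d, c_a, c_d, sigma):
    """Isotropic 2-D Gaussian density in the vector point plot diagram."""
    r2 = (mu_a - c_a) ** 2 + (mu_d - c_d) ** 2
    return np.exp(-0.5 * r2 / sigma**2) / (2.0 * np.pi * sigma**2)

def membership_probability(mu_a, mu_d, f_c, cl_centre, cl_sigma, fd_centre, fd_sigma):
    """P(member) = f_c * N_cluster / (f_c * N_cluster + (1 - f_c) * N_field),
    where N_cluster is the sharp and N_field the broad Gaussian."""
    n_c = f_c * gauss2d(mu_a, mu_d, cl_centre[0], cl_centre[1], cl_sigma)
    n_f = (1.0 - f_c) * gauss2d(mu_a, mu_d, fd_centre[0], fd_centre[1], fd_sigma)
    return n_c / (n_c + n_f)
```

A star at the sharp cluster centre receives a high probability, while one far out in the broad field distribution receives a probability near zero, reproducing the peak at low $`P`$ and the rise towards $`P=1`$ seen in the histogram.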
We should remark that the “true” members might be more concentrated towards the cluster centre, but because of the problems in distinguishing between cluster and field stars, some non-members might be taken for member stars and vice versa. ### 3.3 Trumpler 1 Near the edge of the photographic plates we found the open star cluster Trumpler 1, for which we attempted a proper motion study, too, as a by-product of our work. In a region of $`0.15^{\circ }`$ around the centre of the cluster at $`\alpha _{2000}=01^\text{h}35^\text{m}.7`$, $`\delta _{2000}=+61^{\circ }17\mathrm{^{\prime }}`$, we detected only 64 stars on the basis of the photographic plates. CCD photometry is not available. A membership analysis with the same methods as described above resulted in average proper motions of $`\mu _\alpha \mathrm{cos}\delta `$ $`=`$ $`(0.59\pm 1.71)\text{ mas\hspace{0.17em}yr}^{-1}`$ $`\mu _\delta `$ $`=`$ $`(2.68\pm 1.19)\text{ mas\hspace{0.17em}yr}^{-1}`$ for the cluster members and $`\mu _\alpha \mathrm{cos}\delta `$ $`=`$ $`(1.41\pm 5.73)\text{ mas\hspace{0.17em}yr}^{-1}`$ $`\mu _\delta `$ $`=`$ $`(4.59\pm 3.93)\text{ mas\hspace{0.17em}yr}^{-1}`$ for the field stars. We present a vector point plot diagram in Fig. 6. It should be mentioned that these proper motions are less accurate than our results for NGC 581, as Trumpler 1 is located close to the edge of the plates and we did not include CCD positions in our analysis. However, the results indicate that Trumpler 1 and NGC 581 have not only roughly the same ages and distances (see Phelps & Janes phelps1 (1993)), but — within the errors — the same absolute proper motions. Therefore, the two objects may be taken as a candidate for a binary star cluster (see e.g. Subramaniam et al. subram (1995)) as they are known for the Magellanic Clouds (Dieball & Grebel dieball (1998)). ## 4 Analysis of the colour magnitude diagram ### 4.1 Field star subtraction The CMD derived in Sect. 2 is contaminated with field stars. 
Before trying to analyse the CMD, it is essential to subtract these stars to emphasise the features which belong to NGC 581 and to prepare the data for the determination of the cluster IMF. In the magnitude range covered by the proper motion study it is possible to distinguish between field and cluster stars using the membership probabilities from Sect. 3. The proper motion study is complete down to $`V=14\text{ mag}`$, so that down to this limit a membership probability of 0.8 or higher is considered a suitable criterion for the definition of the members of NGC 581. Below this limit, the star numbers are high enough to justify a statistical field star subtraction. For this, the CCD field was divided into two regions of the same area of $`1.3\cdot 10^6`$ pixels: a circular region with the star cluster in its centre (radius: 654 pix. or $`18\mathrm{^{\prime }}`$) and a ring encircling this region (outer radius: 925 pix. or $`26\mathrm{^{\prime }}`$). With this division, all cluster members are safely located in the central area. For both parts separate CMDs were computed. Each of them was divided into rectangular bins with a length of $`0.5\text{ mag}`$ in magnitude and $`0.1\text{ mag}`$ in colour. (Variation of the bin sizes did not affect the results.) The numbers of stars in the field CMD cells were determined, and as many stars of the corresponding cell of the inner region CMD were randomly chosen and removed. Assuming a homogeneous distribution of field stars over the whole field of view, the resulting CMD, which is presented in Fig. 8, represents only the cluster members. To illustrate the advantage of our method, Fig. 7 shows the luminosity functions obtained with the proper motions and the statistical field star subtraction, respectively, in the range of the photographic plates ($`V<14\text{ mag}`$). It can be seen that the proper motion analysis leads to fewer member stars than the statistical subtraction. 
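The binned statistical subtraction described above can be sketched in a few lines; the bin sizes follow the text, while the function name, array layout, and random seed are arbitrary choices:

```python
import numpy as np
from collections import Counter

def subtract_field(cluster_cmd, field_cmd, dv=0.5, dbv=0.1, seed=0):
    """For each (V, B-V) bin, randomly remove from the cluster-region CMD as
    many stars as the equal-area field CMD contains in that bin.

    Both inputs are (N, 2) arrays with columns (V, B-V)."""
    rng = np.random.default_rng(seed)

    def bins(cmd):
        return list(zip(np.floor(cmd[:, 0] / dv).astype(int),
                        np.floor(cmd[:, 1] / dbv).astype(int)))

    cluster_bins = bins(cluster_cmd)
    keep = np.ones(len(cluster_cmd), dtype=bool)
    for b, n in Counter(bins(field_cmd)).items():
        idx = [i for i, cb in enumerate(cluster_bins) if cb == b and keep[i]]
        if not idx:
            continue
        drop = rng.choice(idx, size=min(n, len(idx)), replace=False)
        keep[drop] = False
    return cluster_cmd[keep]
```

If the cluster-region CMD is a clean member sequence plus a copy of the field sample, the routine removes exactly the field contribution, which is the assumption of homogeneous field star distribution made in the text.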
### 4.2 Morphology of the cluster CMD and age determination The brightest stars in the field of NGC 581 (marked with letters A to C in Fig. 1) were saturated even in the shortest exposures, so that no direct photometric information was available for them. However, the three stars were the subject of previous photoelectric studies (Hoag et al. navy (1961), Purgathofer wien (1964)), so that their $`V`$ magnitudes and $`B-V`$ colours could be added to our CMD manually. Star A was too bright to be measured on the photographic plates with sufficient accuracy, so that we could not derive its proper motion, either. However, since star A is a HIPPARCOS star (ESA hipp (1997), star no. 7232), we could take its proper motion from this source. Calculating its membership probability led to a value of 0.89. Star B has a proper motion far away from that of the cluster, so that it can be considered a non-member, while star C belongs to NGC 581 with a probability of 0.90. After field star subtraction, we found three red stars with $`B-V\approx 2\text{ mag}`$ from $`V\approx 10.8\text{ mag}`$ to $`V\approx 12.4\text{ mag}`$. Compared with the well populated main sequence in the same magnitude range, it seems obvious that these objects cannot be cluster members. We assume these objects to be field stars which are classified as member stars as a consequence of the small difference between field and cluster proper motions. The secondary main sequence and giant-branch-like structures completely vanished within the range of the proper motions. Below $`V=14\text{ mag}`$, we still find some stars remaining in the region of these features. Since the vector point plot diagram does not give any evidence for a second cluster in the same line of sight, we assume that these stars belong to the field star population(s). 
It is also very unlikely that the remaining stars are pre-main-sequence cluster members, because at the age of NGC 581 (see the following paragraph) such objects should be much closer to the main sequence than the stars in the CMD are (Iben iben (1965)). Therefore we assume these stars to be remnants of the field star subtraction due to the imperfect statistics of the sample. We fitted isochrones of the Geneva group (Schaller et al. schaller (1992)) to the resulting CMD, from which we derived the distance modulus, reddening, age, and metallicity of NGC 581. The best fitting isochrone is plotted into the cleaned CMD of Fig. 8. The parameters of the selected isochrone are shown in Table 8. Isochrones of the Padua group (Bertelli et al. padua (1994)) were fitted to the CMD for comparison and led to the same set of parameters. ### 4.3 Completeness correction To obtain comprehensive luminosity and initial mass functions, the data have to be corrected for completeness. As crowding is not a problem in our images, the completeness in the field and cluster regions is the same, so that it is not necessary to correct for completeness before the field star subtraction. With artificial star experiments using the DAOPHOT II routine addstar we derived the completeness function shown in Fig. 9. As the $`V`$ images reach down to much fainter magnitudes, these experiments were only performed with the $`B`$ exposures. Down to $`B=17.25\text{ mag}`$ the sample is more than $`80\%`$ complete, with a sharp drop afterwards to almost $`0\%`$ at $`B=19\text{ mag}`$. Figure 9 shows that the completeness level of $`60\%`$ is reached around $`B=18\text{ mag}`$, so that we adopt this value as a reasonable cut-off for our studies. With a main sequence star colour of $`B-V=0.8\text{ mag}`$ at $`B=18\text{ mag}`$, this corresponds to a limiting magnitude of $`V=17.2\text{ mag}`$. ### 4.4 Initial mass function With the stars of the cleaned CMD we determined the luminosity (Fig. 
10) and initial mass functions of NGC 581, after deleting from the CMD all objects outside a $`\mathrm{\Delta }(B-V)=0.3\text{ mag}`$ wide strip around the main sequence and above the turnover of the isochrone around $`V=10\text{ mag}`$, and applying the completeness correction described in Sect. 4.3. As mentioned in Sect. 3.2, some field stars will have stayed in the sample, while a few cluster stars might have been rejected. However, we assume that these two effects do not influence the IMF of NGC 581. From the initial stellar masses given in the Geneva isochrone data, we computed a mass-luminosity relation represented by a $`6^{\text{th}}`$ order polynomial: $$m[M_{\odot }]=\underset{i=0}{\overset{6}{\sum }}d_iV^i[mag]$$ (7) with the following parameters: $`d_0`$ $`=`$ $`-523.815`$ $`d_1`$ $`=`$ $`+221.123`$ $`d_2`$ $`=`$ $`-36.347`$ $`d_3`$ $`=`$ $`+3.055`$ $`d_4`$ $`=`$ $`-0.140`$ $`d_5`$ $`=`$ $`+0.003`$ $`d_6`$ $`=`$ $`-0.0000327.`$ A $`5^{\text{th}}`$ order polynomial still did not fit the faint part of the mass-luminosity relation well enough. With Eq. (7) we derived the initial masses of the stars classified as objects of NGC 581 on the basis of their $`V`$ magnitudes. $`V`$ was preferred over the $`B`$ values as the photometric errors are smaller at equal magnitudes. We included all stars with a mass higher than $`m=1.41M_{\odot }`$ (corresponding to the previously mentioned value of $`V=17.2\text{ mag}`$ or $`\mathrm{log}m=0.15`$). In this range, the completeness is $`60\%`$ or higher. 198 stars were selected by this criterion. The stars were divided into $`\mathrm{\Delta }\mathrm{log}m=0.1`$ wide bins. The star numbers were corrected using the results of the completeness calculation, resulting in 247 “virtual” stars from $`9.45M_{\odot }`$ down to $`1.41M_{\odot }`$. The resulting histogram is presented in Fig. 11. 
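A 6th-order polynomial like Eq. (7) is most conveniently evaluated with Horner's scheme. Note that the published coefficients are rounded too coarsely for a faithful numerical re-evaluation (a rounding of $`d_5`$ alone shifts $`m`$ by hundreds at $`V\approx 17`$), so the coefficient list in this sketch is an arbitrary example, not the $`d_i`$ above:

```python
def horner(coeffs, x):
    """Evaluate sum_i coeffs[i] * x**i with coeffs[0] = d_0 (Horner's scheme)."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

# Arbitrary illustrative coefficients (NOT the published d_i):
d_example = [1.0, -0.5, 0.25, -0.01]
m = horner(d_example, 2.0)   # 1 - 0.5*2 + 0.25*4 - 0.01*8 = 0.92
```

Horner's scheme gives the same value as the naive power sum but with fewer multiplications and better numerical behaviour for high-order polynomials.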
The single star with a mass around $`9.45M_{\odot }`$ ($`\mathrm{log}m=0.98`$) was not taken into account for the IMF determination, since this single object at one end of the histogram might heavily influence the IMF slope. The slope of the IMF was determined by linear regression to the histogram. We obtained $`\mathrm{\Gamma }=1.80\pm 0.19`$ from the stars with masses from $`1.41M_{\odot }`$ to $`8.70M_{\odot }`$ (an experiment including the $`9.45M_{\odot }`$ star led to $`\mathrm{\Gamma }=2.11\pm 0.23`$ and therefore to a difference in the exponent of more than 0.3!). The resulting IMF is shown in Fig. 11. ## 5 Summary and discussion NGC 581 is a young open star cluster with an age of 16 Myr. It is located at a distance of 2900 pc from the Sun. We derived proper motions of 228 stars in the region of the cluster down to $`14\text{ mag}`$. 77 of those can be considered members of the object. A study of the IMF of NGC 581 leads to a power law with a slope of $`\mathrm{\Gamma }=1.80`$. The results of the photometry largely coincide with the findings of Phelps & Janes (phelps1 (1993)); however, we do not find evidence for star formation over a period as long as 10 Myr: While Phelps & Janes claim the necessity of two isochrones of different ages to fit both the blue and red bright stars in their CMD, the Geneva $`\mathrm{log}t=7.2`$ isochrone fits all features of our CMD sufficiently well. The IMF slope might be slightly higher than Scalo’s (scalo2 (1998)) figure for this mass range, as Phelps & Janes (phelps2 (1994)) find a steep slope for NGC 581, too. Their value of $`\mathrm{\Gamma }=1.78`$ marks the steepest IMF of their entire study of open star clusters. Although their IMF is based on photometry with a deeper limiting magnitude, our determination has its advantages, too: Phelps & Janes only used statistical field star subtraction and no proper motion information, which may have great impact on the high mass range and therefore might affect the slope of the IMF. 
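The slope determination described in Sect. 4.4 (a histogram in $`\mathrm{\Delta }\mathrm{log}m=0.1`$ bins followed by a linear regression of the logarithmic star counts) can be sketched as follows; the function name and the synthetic sampling in the self-test are illustrative:

```python
import numpy as np

def imf_slope(masses, dlogm=0.1):
    """Estimate Gamma from a linear fit of log10(N) per log-mass bin,
    assuming dN/dlog m ~ m**-Gamma, so the fitted slope is -Gamma."""
    logm = np.log10(np.asarray(masses, dtype=float))
    lo = np.floor(logm.min() / dlogm) * dlogm
    edges = np.arange(lo, logm.max() + dlogm, dlogm)
    counts, _ = np.histogram(logm, bins=edges)
    centres = 0.5 * (edges[:-1] + edges[1:])
    ok = counts > 0                       # skip empty bins
    slope, _ = np.polyfit(centres[ok], np.log10(counts[ok]), 1)
    return -slope
```

Sampling a synthetic population from a known power law and checking that the input exponent is recovered is a quick self-test of the binning and regression.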
Furthermore, they only took one single region in the vicinity of their star clusters for the field star subtraction, while we were able to use the immediate surroundings of the cluster itself. And finally, their field of view is so small that they probably did not cover the entire cluster, which, assuming the presence of mass segregation, might have flattened their IMF due to a lack of low mass stars from the outer regions of the cluster. Had we not based part of the membership determination on the proper motions, we would have ended up with 9 more stars in the magnitude range of $`V<14\text{ mag}`$ (see Fig. 7). Since the total star number in this region is low, this would have decreased the IMF slope to $`\mathrm{\Gamma }=1.73\pm 0.28`$. Within the errors, this result would still be consistent with our value above; however, the shape of this IMF is less well represented by a power law, which is expressed by the higher error. Therefore we claim that adding the proper motion information to the statistical field star subtraction did improve the reliability of our IMF study. ###### Acknowledgements. The authors acknowledge Wilhelm Seggewiss for allocating time at Hoher List Observatory and Albert Bruch for time at the Münster PDS machines. Thanks a lot to Andrea Dieball for the field star subtraction software and to Klaas S. de Boer and Andrea Dieball for carefully reading the manuscript of this publication. J.Sa. especially thanks Thomas Schimpke for introducing him to the use of PDS and the software for handling the digitised photographic plates and Georg Drenkhahn for his support in programming IRAF scripts. J.B. acknowledges financial support from the Deutsche Forschungsgemeinschaft under grant ME 1350/3-2.
no-problem/9908/cond-mat9908301.html
# Configurational Entropy and Diffusivity of Supercooled Water ## Abstract We calculate the configurational entropy $`S_{\text{conf}}`$ for the SPC/E model of water for state points covering a large region of the $`(T,\rho )`$ plane. We find that (i) the $`(T,\rho )`$ dependence of $`S_{\text{conf}}`$ correlates with the diffusion constant and (ii) that the line of maxima in $`S_{\text{conf}}`$ tracks the line of density maxima. Our simulation data indicate that the dynamics are strongly influenced by $`S_{\text{conf}}`$ even above the mode-coupling temperature $`T_{\text{MCT}}(\rho )`$. Computer simulation studies are contributing to the understanding of the slow dynamical processes in simple and molecular liquids on approaching the glass transition. In particular, the space, time and temperature dependence of many dynamical quantities have been calculated and compared with the predictions of the mode-coupling theory (MCT) , which provides a description of the dynamics in weakly supercooled states of many atomic and molecular liquids, including the SPC/E model of water studied here . Simulations have also been used to investigate the “thermodynamic approach” to the glass transition , which envisages a relation between diffusion constant $`D`$ and configurational entropy $`S_{\text{conf}}`$ by relating the dynamics of liquids at low $`T`$ to the system’s exploration of its configuration space. The properties of the liquid are dominated by the basins of attraction of local potential energy minima; the liquid experiences vibrations localized around a basin and rearranges via relatively infrequent inter-basin jumps . Motivated by the separation of time scales, the total entropy $`S`$ may be separated into two parts: (i) an intra-basin contribution $`S_{\text{vib}}`$, which measures the vibrational entropy of a system constrained to reside within a basin, and (ii) a configurational contribution, $`S_{\text{conf}}`$, quantifying the multiplicity of basins . 
Explicit calculation of $`S_{\text{conf}}`$ has been performed for hard spheres , soft spheres, Lennard-Jones systems and tetravalent network glasses with the aim of evaluating the Kauzmann temperature $`T_K`$, at which $`S_{\text{conf}}`$ appears to vanish . One of the most studied models of molecular liquids is the SPC/E potential, designed to mimic the behavior of water . The dynamical properties of this model have been studied in detail in the weakly supercooled regime and have been shown to be consistent with the predictions of MCT . The SPC/E potential is of particular interest for testing theories of supercooled-liquid dynamics, since, as observed experimentally for water, along isotherms $`D`$ has a maximum as a function of the pressure $`P`$ or of the density $`\rho `$ . This maximum becomes more pronounced upon cooling. Furthermore, the line of isobaric density maxima, which corresponds to the line of isothermal entropy maxima via the Maxwell relation $`(\partial V/\partial T)_P=-(\partial S/\partial P)_T`$, appears strongly correlated with the line of $`D`$ maxima \[Fig. 1(a)\], providing motivation to test possible relationships between $`S_{\text{conf}}`$ and $`D`$. A test of such a relationship, based on the analysis of experimental data at ambient pressure, was performed by Angell and coworkers in 1976. Here we calculate the entropies $`S`$, $`S_{\text{vib}}`$ and $`S_{\text{conf}}`$ for the SPC/E potential for state points covering a large region of the $`(T,\rho )`$ phase diagram. We then compare the behavior of $`S_{\text{conf}}`$ with $`D`$. We also estimate $`T_K`$ and consider its relation with the MCT temperature $`T_{\text{MCT}}`$. We first calculate $`S`$ for a reference point ($`T=1000`$ K and $`\rho =1.0`$ g/cm<sup>3</sup>), and then calculate the entropy as a function of $`(T,\rho )`$ via thermodynamic integration using the state points simulated in Ref. , as well as new simulations extending to higher temperatures . 
We show an example of $`S`$ along the $`\rho =1.0`$ g/cm<sup>3</sup> isochore in Fig. 2(a). We calculate $`S_{\text{vib}}`$, i.e. the entropy of the liquid constrained in a typical basin, using the properties of the basins visited in equilibrium by the liquid. For each ($`T`$, $`\rho `$), we calculate, using the conjugate gradient algorithm , the corresponding local minima – called inherent structures (IS) – for 100 configurations . We estimate $`S_{\text{vib}}`$ by adding anharmonic corrections to the harmonic contribution $`S_{\text{harm}}`$ $$S_{\text{harm}}=k_B\underset{i=1}{\overset{6N-3}{\sum }}\left[\mathrm{ln}\frac{k_BT}{\mathrm{\hbar }\omega _i}-1\right],$$ (1) where $`\omega _i`$ are the normal-mode frequencies of the IS determined from the Hessian matrix . Ref. found that the harmonic approximation for a binary Lennard-Jones mixture is a valid estimate of $`S_{\text{vib}}`$ for temperatures around $`T_{\text{MCT}}`$. However, in the case of the SPC/E potential, we find that there are significant anharmonicities in the basins. This can be seen from the fact that, if the system were purely harmonic, the energy $`E`$ should equal the energy of a minimum $`E_{\text{conf}}`$ plus the contribution $`E_{\text{harm}}=(6N-3)k_BT/2`$ of the harmonic solid approximation. In contrast with the binary mixture Lennard-Jones case, we find the vibrational contribution $`E_{\text{vib}}\equiv E-E_{\text{conf}}`$ to be roughly 10% larger than the harmonic approximation, even at the lowest temperatures studied. We then estimate the anharmonic contributions to $`S_{\text{vib}}`$ by heating the IS at constant volume and measuring the deviation of $`E_{\text{vib}}`$ from the harmonic approximation . 
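Eq. (1) is a direct sum over the normal-mode frequencies of each inherent structure. A literal transcription of the expression as printed (SI constants; the frequencies used in the self-test are arbitrary values, not from the simulations):

```python
import numpy as np

K_B = 1.380649e-23      # Boltzmann constant, J/K
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def s_harm(omega, T):
    """Harmonic basin entropy per Eq. (1):
    S_harm = k_B * sum_i [ ln(k_B*T / (hbar*omega_i)) - 1 ],
    with omega_i the normal-mode angular frequencies (rad/s) of the IS."""
    omega = np.asarray(omega, dtype=float)
    return K_B * np.sum(np.log(K_B * T / (HBAR * omega)) - 1.0)
```

Whatever the constant term, the temperature dependence is fixed by the logarithm: doubling $`T`$ raises $`S_{\text{harm}}`$ by $`k_B\mathrm{ln}2`$ per mode, and stiffer basins (larger $`\omega _i`$) have lower vibrational entropy.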
For each collection of basins corresponding to a particular $`(T,\rho )`$ state point, we calculate $`E_{\text{vib}}`$ in the range $`T=0`$–$`200`$ K and fit our results using the approximation $$E_{\text{vib}}=E_{\text{harm}}+aT^2+bT^3.$$ (2) By integrating the relation $`dS_{\text{vib}}=dE_{\text{vib}}/T`$, we find $$S_{\text{vib}}=S_{\text{harm}}+2aT+\frac{3}{2}bT^2$$ (3) We use Eq. (3) to extrapolate $`S_{\text{vib}}`$ to higher temperatures. From the knowledge of $`S_{\text{vib}}`$, we calculate $`S_{\text{conf}}=S-S_{\text{vib}}`$. As an example of the entire procedure, we show in Figs. 2(a) and 2(b) the $`T`$ dependence of $`S_{\text{vib}}`$ and $`S_{\text{conf}}`$ for the $`\rho =1.0`$ g/cm<sup>3</sup> isochore. In Fig. 3(a), we show $`S_{\text{conf}}`$ as a function of $`\rho `$ for several $`T`$, and in Fig. 3(b) we show the behavior of $`D`$ along the same isotherms. Fig. 4 shows that at high densities the behavior of $`\mathrm{ln}(D)`$ is nearly linear when plotted as a function of $`(TS_{\text{conf}})^{-1}`$, as proposed by Adam and Gibbs over 30 years ago. Fig. 1(b) shows the lines of maxima for $`D`$ and $`S_{\text{conf}}`$ in the region $`T\lesssim 260`$ K, where we have been able to clearly detect a maximum. To highlight the differences between $`S`$ and $`S_{\text{conf}}`$, we also show the line along which $`S(\rho ,T)`$ has a maximum. Fig. 1(b) shows that the lines of maxima in $`D`$ and in $`S_{\text{conf}}`$ track each other within the uncertainties of our calculations. Note that the density where $`D`$ has a maximum ($`\rho \approx 1.15`$ g/cm<sup>3</sup>), as well as the density where the maximum in $`S_{\text{conf}}`$ occurs, depends weakly on $`T`$. One possible explanation is that, for the values of $`\rho `$ where the maxima in $`D`$ and $`S_{\text{conf}}`$ occur, two density-dependent (and primarily $`T`$-independent) mechanisms balance. 
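The step from Eq. (2) to Eq. (3) is the constant-volume integration of $`dS=dE/T`$ applied to the anharmonic part: since $`dE_{\text{anh}}/dT^{}=2aT^{}+3bT^2`$, the integrand $`2a+3bT^{}`$ is regular at $`T^{}=0`$. A quick numerical cross-check of the closed form (the fit coefficients below are illustrative numbers only):

```python
def s_anh_closed(a, b, T):
    """Anharmonic entropy from Eq. (3): 2*a*T + (3/2)*b*T**2."""
    return 2.0 * a * T + 1.5 * b * T * T

def s_anh_numeric(a, b, T, n=1000):
    """Trapezoidal integration of (dE_anh/dT')/T' = 2*a + 3*b*T' from 0 to T."""
    h = T / n
    total = 0.0
    for i in range(n):
        t0, t1 = i * h, (i + 1) * h
        total += 0.5 * ((2*a + 3*b*t0) + (2*a + 3*b*t1)) * h
    return total

a, b = 1.0e-3, 2.0e-6   # illustrative fit coefficients
T = 250.0
assert abs(s_anh_closed(a, b, T) - s_anh_numeric(a, b, T)) < 1e-9
```

Because the integrand is linear in $`T^{}`$, the trapezoidal rule is exact here, so the two routines agree to floating-point precision.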
Increasing $`\rho `$ in water from the “ideal” tetrahedral density (i.e., that of ice Ih, about 0.92 g/cm<sup>3</sup>) leads to the progressive destruction of the hydrogen-bond network, and hence increases the number of minima (and therefore $`S_{\text{conf}}`$), since there are more configurations corresponding to a disordered tetrahedral network. However, at large enough density, core repulsion begins to dominate the liquid properties, as expected for “typical” liquids. In such a case, one expects that increasing $`\rho `$ decreases the number of minima in the potential energy landscape, as the system becomes more densely packed, and hence fewer configurations are possible. We observe that the close connection between $`D`$ and $`S_{\text{conf}}`$ shown in Fig. 3 occurs in the same region where the dynamics of SPC/E water can be rather well described by MCT, suggesting that MCT may be able to capture the reduction of mobility due to entropic effects. Moreover, in the same region of the ($`T`$, $`\rho `$) plane, $`D`$ also correlates well with the number of directions in configuration space connecting different basins. This suggests the possibility of a statistical relation between the number of minima and their connectivity . Before concluding, we note that the present approach allows us to estimate the locus of the Kauzmann temperature $`T_K(\rho )`$ at which $`S_{\text{conf}}`$ would vanish upon extrapolation to lower $`T`$. Fig. 5 shows the calculated values of $`T_K(\rho )`$ together with the locus of mode coupling temperatures $`T_{MCT}(\rho )`$. The ratio $`T_{MCT}/T_K`$ has been used as an indication of the fragility of a liquid ; we find $`T_{MCT}/T_K\approx 1.05`$–$`1.15`$, suggesting that SPC/E is an extremely fragile liquid in the region of temperatures near $`T_{MCT}`$ — as also found in experimental measurements. 
We note that the values of $`T_K(\rho )`$ depend strongly on the validity of the extrapolation of the potential energy to temperatures lower than those we can equilibrate with the present computer facilities. Slower changes in $`E_{\text{conf}}`$ below the lowest simulated state point would produce a much slower decrease of $`S_{\text{conf}}`$. This may be quite plausible, since it appears that $`E_{\text{conf}}`$ is approaching the crystal value, which is expected to always be less than the liquid value. A much slower change in $`S_{\text{conf}}`$ is expected to be accompanied by a slower decrease of $`D`$, which may be related to a possible fragile-to-strong transition in water. We thank C.A. Angell, S. Sastry, and R.J. Speedy for helpful discussions, and the NSF for support; FS acknowledges partial support from MURST (PRIN 98).
# Formation and Disruption of Cosmological Low Mass Objects ## 1 Introduction Today, we have a great deal of observational data concerning the early universe. However, we have very little information about the era referred to as the ‘dark ages’. Information regarding the era of recombination (with redshift $`z`$ of about $`10^3`$) can be obtained by the observation of cosmic microwave background radiation. After the recombination era little information is accessible until $`z\sim 5`$; thereafter we can observe luminous objects such as galaxies and QSOs. On the other hand, the reionization of the intergalactic medium and the presence of heavy elements at high-$`z`$ suggest that there is another population of luminous objects, which precedes normal galaxies. Thus, a theoretical approach to reveal the formation mechanism of such unseen luminous objects is very important. It is now widely accepted that luminous objects are formed from overdense regions in the early universe. These overdense regions collapse to form luminous objects, provided that they fragment into many stellar-size clouds and many massive stars are formed. In order to understand the way in which luminous objects are formed, the physical processes of the clouds in the various stages of evolution should be studied individually. The formation process of luminous objects is roughly divided into three steps: the formation of cold clouds by H and/or $`\mathrm{H}_2`$ line cooling, the formation of the first generation stars in the cold clouds, and star formation throughout the host clouds. These steps are disturbed by the feedback from the first stars. The first step has been investigated by many authors (e.g., Haiman, Thoul & Loeb (1996); Ostriker & Gnedin (1996); Tegmark et al. (1997); Gnedin & Ostriker (1997); Abel et al. (1998)) and it has been shown that low mass clouds (with virial temperatures of several $`\times 10^3`$ K) become the earliest cooled dense clouds. 
The second step, however, has not been investigated thoroughly, although the initial mass function and the formation efficiency of the first generation stars in the clouds are very challenging and crucial problems. Many authors have attacked this problem (Matsuda, Sato & Takeda (1969); Hutchins (1976); Carlberg (1981); Palla, Salpeter & Stahler (1983); Uehara et al. (1996)) and obtained various conclusions. However, the masses of the first generation stars are now estimated through detailed investigation to be fairly large (Nakamura & Umemura (1999); Omukai & Nishi (1998)). For the third step, the feedback from the luminous objects on the other clouds has been studied by several authors (Haiman, Rees & Loeb (1996, 1997); Ferrara (1998); Haiman, Abel & Rees (1999)). Haiman, Abel & Rees (1999) examined the build-up of the UV background in hierarchical models and its effects on star formation inside small halos that collapse prior to reionization. They stressed that an early UV background below 13.6 eV suppresses the H<sub>2</sub> abundance, so that there exists a negative feedback even before reionization. Moreover, the feedback from the formed stars on their own host cloud is more serious. The main feedback consists of two different processes: UV radiation from the stars and energy input by SNe. Through ionization of H (Lin & Murray (1992)) and dissociation of H<sub>2</sub> (Silk (1977); Omukai & Nishi (1999)), UV radiation has a negative feedback on further star formation in the host clouds. In particular, H<sub>2</sub> is dissociated in such a large region that the whole of an ordinary low mass cloud is influenced by a single O5-type star (Omukai & Nishi (1999)). The feedback from SNe on the host clouds is probably negative (e.g., Mac Low & Ferrara (1999)). SNe can disrupt the host clouds before they become luminous, because the explosion energy is comparable with the typical binding energy of the host clouds. 
In this Letter, we investigate the evolution of low mass primordial clouds systematically, and assess the mass of the first luminous objects. ## 2 Cooling diagram The formation of cold dense clouds, i.e., progenitors of luminous objects, is basically understood by the comparison between the free-fall time and the cooling time. The ‘cooling diagram’ originally introduced by Rees & Ostriker (1977) and Silk (1977) shows the region of the $`\rho `$–$`T`$ plane where the cooling time is shorter than the free-fall time, and vice versa. In this section, we present the cooling diagram on the $`\rho `$–$`T`$ plane including H<sub>2</sub> cooling. With this diagram, we can predict whether a cloud virialized at $`z=z_{\mathrm{vir}}`$ with virial temperature $`T_{\mathrm{vir}}`$ cools. ### 2.1 H<sub>2</sub> fraction with given virial temperature In order to estimate the cooling rate at $`T<10^4`$ K, we need the fraction of H<sub>2</sub>. The number fraction of H<sub>2</sub> (hereafter denoted as $`y_{\mathrm{H}_2}`$) is generally not in equilibrium for $`T<10^4`$ K in the epoch of galaxy formation. The value of $`y_{\mathrm{H}_2}`$ at a given time depends not only on $`\rho `$ and $`T`$ but also on the initial condition. Consequently, we cannot evaluate the cooling rate on the $`\rho `$–$`T`$ plane without estimating the non-equilibrium $`y_{\mathrm{H}_2}`$. Tegmark et al. (1997) calculate $`y_{\mathrm{H}_2}`$ numerically; however, their primordial $`y_{\mathrm{H}_2}`$ is overestimated by about two orders of magnitude because the destruction rate of $`\mathrm{H}_2^+`$ by cosmic microwave background radiation at high-$`z`$ is underestimated (Galli & Palla (1998)). Since their primordial value of $`10^{-4}`$ is comparable to the value necessary for cooling, their cooling criterion is generally not reliable. 
Here, using recent reaction rates and the cooling rate of H<sub>2</sub> (Galli & Palla (1998)), we adopt a simplified and generalized method to estimate the cooling function of H<sub>2</sub>, different from that of Tegmark et al. (1997). We introduce four important time scales, $`t_{\mathrm{dis}}`$, $`t_{\mathrm{form}},t_{\mathrm{cool}},`$ and $`t_{\mathrm{rec}}`$. They represent the dissociation and formation times of H<sub>2</sub>, the cooling time, and the recombination time, respectively. Comparing these time scales, we assess the non-equilibrium fraction of H<sub>2</sub> for a given virial temperature and redshift. Our estimation is summarized below (see Fig. 1a). 1. The case $`t_{\mathrm{dis}}<\mathrm{min}(t_{\mathrm{cool}},t_{\mathrm{rec}})`$ (Region of “$`t_{\mathrm{dis}}`$ fastest” in Fig. 1a): H<sub>2</sub> is in chemical equilibrium. In this case, $`y_{\mathrm{H}_2}=y_{\mathrm{H}_2}^{\mathrm{eq}}`$, where $`y_{\mathrm{H}_2}^{\mathrm{eq}}`$ denotes the fraction of H<sub>2</sub> in chemical equilibrium (the solution of $`t_{\mathrm{form}}=t_{\mathrm{dis}}`$). 2. The case $`t_{\mathrm{rec}}<\mathrm{min}(t_{\mathrm{cool}},t_{\mathrm{dis}})`$ (Region of “$`t_{\mathrm{rec}}`$ fastest” in Fig. 1a): H<sub>2</sub> is out of chemical equilibrium, and H<sub>2</sub> molecules are formed until the recombination process significantly reduces the electron fraction. As a result, $`y_{\mathrm{H}_2}`$ is determined by the equation $`t_{\mathrm{form}}=t_{\mathrm{rec}}`$. Combined with the relation $`t_{\mathrm{form}}=(y_{\mathrm{H}_2}/y_{\mathrm{H}_2}^{\mathrm{eq}})t_{\mathrm{dis}}`$ (Susa et al. (1998)), $`y_{\mathrm{H}_2}`$ is obtained as $`y_{\mathrm{H}_2}=y_{\mathrm{H}_2}^{\mathrm{eq}}\left(t_{\mathrm{rec}}/t_{\mathrm{dis}}\right)`$. 3. The case $`t_{\mathrm{cool}}<\mathrm{min}(t_{\mathrm{rec}},t_{\mathrm{dis}})`$ (Region of “$`t_{\mathrm{cool}}`$ fastest” in Fig. 
1a): When the cooling time is the shortest of the three time scales, $`y_{\mathrm{H}_2}`$ is determined by the equation $`t_{\mathrm{form}}=t_{\mathrm{cool}}`$. In other words, $`y_{\mathrm{H}_2}`$ increases until the system is cooled significantly. In case H<sub>2</sub> cooling dominates the other cooling processes, $`y_{\mathrm{H}_2}\approx y_{\mathrm{H}_2}^{\mathrm{eq}}\sqrt{t_{\mathrm{cool}}^{\mathrm{eq}}/t_{\mathrm{dis}}}`$. Otherwise, $`y_{\mathrm{H}_2}`$ is the solution of a quadratic equation. Here, $`t_{\mathrm{cool}}^{\mathrm{eq}}`$ represents the cooling time scale by H<sub>2</sub> rovibrational transitions with $`y_{\mathrm{H}_2}^{\mathrm{eq}}`$. The electron fraction of a virialized cloud is assumed to be $`y_e=\mathrm{max}(y_e^{\mathrm{rel}},y_e^{\mathrm{eq}})`$. Here $`y_e^{\mathrm{rel}}`$ is the fraction of cosmologically relic electrons calculated by Galli & Palla (1998); it equals $`3.02\times 10^{-4}`$ for their standard model. The chemical equilibrium fraction of electrons is denoted $`y_e^{\mathrm{eq}}`$. With this electron fraction, we estimate the fraction of H<sub>2</sub>. In Fig. 1b, $`y_{\mathrm{H}_2}`$ is plotted as a function of virial temperature for four redshifts. For low redshift ($`z\lesssim 100`$) and high temperature ($`T\sim 10^4`$ K), H<sub>2</sub> is in chemical equilibrium with the given ionization degree. As the temperature drops, H<sub>2</sub> gets out of equilibrium because the cooling time becomes shorter than the other time scales. Below $`\sim 2000`$ K, the recombination time scale is the shortest, and $`y_{\mathrm{H}_2}`$ takes its relic value. For high redshift, the destruction of H<sup>-</sup> ($`z\gtrsim 100`$) and H$`_2^+`$ ($`z\gtrsim 200`$) by cosmic microwave background radiation reduces $`y_{\mathrm{H}_2}`$ significantly. 
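The branching above can be condensed into a small helper; this is only a sketch in which all four time scales are supplied externally, and the $`t_{\mathrm{cool}}`$-fastest branch assumes that H<sub>2</sub> cooling dominates (otherwise the quadratic solution mentioned in the text is needed).

```python
import math

def h2_fraction(y_h2_eq, t_dis, t_rec, t_cool, t_cool_eq):
    """Non-equilibrium H2 fraction from the time-scale comparison of Sect. 2.1.

    y_h2_eq   : chemical-equilibrium H2 fraction
    t_cool_eq : H2 cooling time evaluated with y_h2_eq
    """
    fastest = min(t_dis, t_rec, t_cool)
    if fastest == t_dis:
        # dissociation fastest: chemical equilibrium
        return y_h2_eq
    elif fastest == t_rec:
        # recombination freezes out the electrons: t_form = t_rec
        return y_h2_eq * (t_rec / t_dis)
    else:
        # cooling fastest and H2-dominated: t_form = t_cool
        return y_h2_eq * math.sqrt(t_cool_eq / t_dis)
```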
### 2.2 Comparison between free-fall time and cooling time We are able to assess the cooling rate with the H<sub>2</sub> fraction evaluated in the previous subsection. We compare the time scale of collapse ($`t_{\mathrm{ff}}`$) with the cooling time ($`t_{\mathrm{cool}}`$), which includes the contribution from H<sub>2</sub> cooling. They are $$t_{\mathrm{ff}}=\left(\frac{3\pi }{32G\rho _{\mathrm{vir}}}\right)^{1/2},t_{\mathrm{cool}}=\frac{1.5\mu ^{-1}kT_{\mathrm{vir}}}{n_{\mathrm{vir}}\mathrm{\Lambda }(y_{\mathrm{H}_2},T_{\mathrm{vir}},n_{\mathrm{vir}})}.$$ (1) Here, $`\rho _{\mathrm{vir}}\approx 18\pi ^2\mathrm{\Omega }\rho _{\mathrm{cr}}`$ and $`n_{\mathrm{vir}}\approx \mathrm{\Omega }_b\rho _{\mathrm{vir}}/m_p`$, where $`\rho _{\mathrm{cr}}\approx 1.9\times 10^{-29}h^2(1+z_{\mathrm{vir}})^3\mathrm{g}\mathrm{cm}^{-3}`$. We adopt $`\mathrm{\Omega }=1`$, $`\mathrm{\Omega }_b=0.06`$ and $`h=0.5`$ in this paper. Equating $`t_{\mathrm{ff}}`$ and $`t_{\mathrm{cool}}`$ in eq. (1), we obtain the boundary between the region that is cooled during the collapse and the rest, which is drawn on the $`(1+z_{\mathrm{vir}})`$–$`T_{\mathrm{vir}}`$ plane in Fig. 2a. The objects virialized in the region denoted as $`t_{\mathrm{ff}}>t_{\mathrm{cool}}`$ will be cooled by H<sub>2</sub> during the gravitational collapse. In this case, the collapsing cloud will be a mini-pancake, because the thermal pressure becomes negligible. We remark that the cooling region extends to $`T_{\mathrm{vir}}\lesssim 10^4`$ K, which is different from classical cooling diagrams such as the one in Rees & Ostriker (1977). We also compare the cooling time scale with the Hubble expansion time ($`H^{-1}`$). The line $`t_{\mathrm{cool}}=H^{-1}`$ is also drawn in Fig. 2a. The region that cools within the Hubble expansion time is slightly larger than the previous one, because the Hubble expansion time is longer than the free-fall time. In this case, the collapse proceeds semi-statically. 
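For orientation, the two time scales of Eq. (1) can be evaluated directly with the cosmological parameters adopted above; in this sketch the cooling function $`\mathrm{\Lambda }`$ and the mean molecular weight $`\mu `$ are assumed inputs, not the tabulated Galli & Palla rates.

```python
import math

# CGS constants and the cosmological parameters adopted in the text
G, k_B, m_p = 6.674e-8, 1.381e-16, 1.673e-24
Omega, Omega_b, h = 1.0, 0.06, 0.5

def rho_vir(z_vir):
    """Virial density rho_vir ~ 18 pi^2 Omega rho_cr [g cm^-3]."""
    rho_cr = 1.9e-29 * h**2 * (1.0 + z_vir)**3
    return 18.0 * math.pi**2 * Omega * rho_cr

def t_ff(z_vir):
    """Free-fall time of Eq. (1) [s]."""
    return math.sqrt(3.0 * math.pi / (32.0 * G * rho_vir(z_vir)))

def t_cool(T_vir, z_vir, Lambda, mu=1.2):
    """Cooling time of Eq. (1) [s]; Lambda [erg cm^3 s^-1] and the mean
    molecular weight mu (here ~1.2 for neutral primordial gas) are assumed."""
    n_vir = Omega_b * rho_vir(z_vir) / m_p
    return 1.5 * k_B * T_vir / (mu * n_vir * Lambda)
```

For example, a cloud virialized at $`z_{\mathrm{vir}}=30`$ has $`t_{\mathrm{ff}}`$ of order $`10^{14}`$ s, i.e. roughly ten million years.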
The central region of such a semi-statically collapsing cloud will then proceed to the runaway collapse phase (Tsuribe & Inutsuka (1999)). ## 3 SNe and disruption of the bound objects As the collapse proceeds, a small amount of the total gas is cooled to $`\sim 100`$ K by H<sub>2</sub>. In those clouds, massive stars ($`10`$–$`100M_{\odot }`$) will eventually be formed (Nakamura & Umemura (1999); Omukai & Nishi (1998)). After the massive first generation stars form, the evolution of the host clouds becomes slower because of strong regulation by UV radiation (Omukai & Nishi (1998)). Thus, next generation stars are hardly formed before the first generation stars die. Subsequent SNe might disrupt the gas binding before a significant amount of the total gas is transformed into stars. Here, we derive the cloud disruption condition by SNe, assuming for simplicity that the cloud is spherical and of constant density. We estimate the kinetic energy transferred from the SNe to the gas. The velocity of the expanding shock front from the center of a supernova remnant (SNR) is $`v_\mathrm{s}(t)=\left({\displaystyle \frac{7.64\times 10^{-3}\left(\gamma ^2-1\right)KE}{\rho _1}}\right)^{1/5}t^{-3/5},`$ (2) where $`K=1.53`$, $`E`$ is the total thermal energy given by the SN, $`\rho _1`$ is the density of the cloud before the explosion, and $`t`$ denotes the elapsed time since the explosion (Spitzer (1978)). Integrating eq. (2), we obtain the location of the shock front: $`R_\mathrm{s}(t)=\left({\displaystyle \frac{0.746\left(\gamma ^2-1\right)KE}{\rho _1}}\right)^{1/5}t^{2/5}.`$ (3) The mass of the hot bubble is also obtained as $`m_{\mathrm{SNR}}=\frac{4}{3}\pi R_\mathrm{s}^3(t)\rho _1`$. The hot bubble keeps pushing the surrounding gas until the thermal energy is pumped off by radiative cooling. Thus, the total momentum transferred from the SNR to the gas cloud is $`p_{\mathrm{tot}}=m_{\mathrm{SNR}}(t_{\mathrm{cool}})v_\mathrm{s}(t_{\mathrm{cool}})`$. 
Equating this momentum with the momentum of the whole cloud, we obtain the expanding velocity of the cloud: $`v_{\mathrm{tot}}={\displaystyle \frac{m_{\mathrm{SNR}}\left(t_{\mathrm{cool}}\right)}{m_{\mathrm{tot}}}}v_s\left(t_{\mathrm{cool}}\right).`$ (4) If this velocity $`v_{\mathrm{tot}}`$ is smaller than the escape velocity ($`v_{\mathrm{esc}}`$) of the cloud, the cloud will still be bound. Otherwise, it is disrupted. Now, we replace $`m_{\mathrm{tot}}`$ in equation (4) with $`m_\mathrm{J}(T_{\mathrm{vir}},z_{\mathrm{vir}})`$, which is the virialized mass of the cloud collapsed at $`z_{\mathrm{vir}}`$ with $`T_{\mathrm{vir}}`$. The escape velocity from the cloud is directly related to the virial temperature. Consequently, we can draw the disruption boundary ($`v_{\mathrm{esc}}=v_{\mathrm{tot}}`$) on the $`(1+z_{\mathrm{vir}})`$–$`T_{\mathrm{vir}}`$ plane. On the cooling diagram (Fig. 2a), the boundaries ($`v_{\mathrm{esc}}=v_{\mathrm{tot}}`$) are superimposed for two cases. These lines are obtained with the assumption that the input thermal energy from the SNe is $`10^{51}`$ erg and $`10^{52}`$ erg, respectively. The input energy essentially reflects the number of SNe: the values $`10^{51}`$ erg and $`10^{52}`$ erg represent the case of a single SN and of $`10`$ SNe, respectively. The former should correspond to the clouds in $`t_{\mathrm{ff}}<t_{\mathrm{cool}}<H^{-1}`$, because they will have a runaway collapsing central core. The core evolves much faster than the envelope and will probably become a massive star, followed by a single SN. The latter case represents the clouds in $`t_{\mathrm{ff}}>t_{\mathrm{cool}}`$. They will have a shocked pancake, and the cooled region will fragment into stars, so that multiple SNe are expected. However, we should note that the disruption criteria by SNe strongly depend on the geometry of the objects (Mac Low & Ferrara (1999); Ciardi et al. (1999) and references therein). 
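Eqs. (2)–(4) translate into a short disruption test; this is a sketch in CGS units with $`K=1.53`$ and $`\gamma =5/3`$ as in the text, while $`E`$, $`\rho _1`$, $`t_{\mathrm{cool}}`$, $`m_{\mathrm{tot}}`$ and $`v_{\mathrm{esc}}`$ are free input parameters of the sketch.

```python
import math

K, GAMMA = 1.53, 5.0 / 3.0  # values adopted in the text

def v_shock(t, E, rho1):
    """Shock velocity of Eq. (2) [cm/s]."""
    A = (7.64e-3 * (GAMMA**2 - 1.0) * K * E / rho1) ** 0.2
    return A * t ** (-0.6)

def r_shock(t, E, rho1):
    """Shock radius of Eq. (3) [cm]; dR/dt reproduces Eq. (2)."""
    B = (0.746 * (GAMMA**2 - 1.0) * K * E / rho1) ** 0.2
    return B * t ** 0.4

def cloud_disrupted(E, rho1, t_cool, m_tot, v_esc):
    """Disruption criterion of Eq. (4): the cloud disrupts if v_tot > v_esc."""
    m_snr = (4.0 / 3.0) * math.pi * r_shock(t_cool, E, rho1) ** 3 * rho1
    v_tot = (m_snr / m_tot) * v_shock(t_cool, E, rho1)
    return v_tot > v_esc
```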
In the case that cooling is efficient ($`t_{\mathrm{ff}}>t_{\mathrm{cool}}`$), a cloud evolves dynamically and develops a complicated, probably flattened, shape. The geometric effect makes the momentum transfer from the SNR to the surrounding gas less efficient than in our evaluation. On the other hand, if cooling is not efficient ($`t_{\mathrm{ff}}<t_{\mathrm{cool}}`$), a cloud becomes fairly spherical and has a centrally condensed density profile. In this case, the effect of SNe may become stronger than the above estimate (e.g., Morgan & Lake (1989)). Moreover, if the duration of the multiple SNe is longer than or comparable to the evolution time scale of a SNR, the disruption criterion cannot be evaluated from the total energy of the multiple SNe alone (e.g., Ciardi & Ferrara (1997)). Thus, our estimate shows the qualitative tendency, and detailed calculations for individual clouds are necessary to derive the SN effects accurately. ## 4 Evolution of low mass objects and mass of the first luminous objects According to the argument in the previous section, the clouds in the region $`t_{\mathrm{ff}}>t_{\mathrm{cool}}`$ will experience multiple SNe. Assuming a total energy of $`10^{52}`$ erg for the multiple SNe, the surviving region is the dark shaded upper region of Fig. 2b (denoted as “LO”).<sup>1</sup><sup>1</sup>1Of course there exists some ambiguity in the total energy, but Fig. 2b does not change qualitatively because of this ambiguity. As shown in Fig. 2b, the surviving clouds are fairly massive ($`T_{\mathrm{vir}}\gtrsim 10^4`$ K) and they evolve into luminous objects through the following processes (Nishi et al. (1998)): (1) By pancake collapse of an overdense region or collision between subclouds in a potential well, a quasi-plane shock forms (e.g., Susa, Uehara & Nishi (1996)). (2) If the shock-heated temperature is higher than $`10^4`$ K, the post-shock gas is ionized and cooled efficiently by H line cooling. 
After it is cooled below $`10^4`$ K, $`\mathrm{H}_2`$ is formed fairly efficiently and the gas is cooled to several hundred K by $`\mathrm{H}_2`$ line cooling (e.g., Shapiro & Kang (1987); Susa et al. (1998)). (3) The shock-compressed layer fragments into cylindrical clouds when $`t_{\mathrm{dyn}}\sim t_{\mathrm{frag}}`$ (Yamada & Nishi (1998); Uehara & Nishi (1999)). (4) The cylindrical cloud collapses dynamically and fragments into cloud cores when $`t_{\mathrm{dyn}}\sim t_{\mathrm{frag}}`$ (Uehara et al. (1996); Nakamura & Umemura (1999)). (5) Primordial stars form in the cloud cores (Omukai & Nishi (1998)). (6) Since the gravitational potential of the cloud is deep enough, subsequent SNe cannot disrupt the cloud. Star formation regulation by UV radiation is also weak because of the highly flattened configuration of the host cloud. (7) Next generation stars can form efficiently and the cloud evolves into a luminous object. On the other hand, the clouds in the region $`t_{\mathrm{ff}}<t_{\mathrm{cool}}<H^{-1}`$ will experience a single SN. As a result, the surviving region is bounded by the line denoted as $`10^{51}`$ erg (the region denoted as “Ly-$`\alpha `$” in Fig. 2b), and these clouds evolve into luminous objects if they are isolated. However, the evolution time scale of these clouds is rather long because of the large disturbance by the SN, and they are reionized at low-$`z`$. <sup>2</sup><sup>2</sup>2If the first star dies without a SN and becomes a black hole, metal pollution of the cloud does not occur. However, considering the star formation regulation by UV radiation (Omukai & Nishi (1999)), the evolution of the cloud is still slow and it is not likely to evolve into a luminous object. After reionization, they may be observed as Ly-$`\alpha `$ clouds. Since the baryonic mass of these clouds is several $`\times 10^5M_{\odot }`$ and the metal mass ejected by a SN is several $`M_{\odot }`$, their metallicity is estimated to be $`10^{-3}Z_{\odot }`$. 
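The metallicity quoted above is straightforward arithmetic; in this sketch “several” is taken to be 3 in both cases, and a solar metal mass fraction of about 0.02 is assumed.

```python
# Metals from one SN mixed into the cloud's baryonic gas (masses in M_sun)
Z_SUN = 0.02        # assumed solar metal mass fraction
m_metal = 3.0       # "several" M_sun of ejected metals
m_baryon = 3.0e5    # "several" x 1e5 M_sun of baryons

Z_over_Zsun = (m_metal / m_baryon) / Z_SUN  # of order 1e-3
```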
The observation of QSO absorption line systems implies a similar metallicity for Ly-$`\alpha `$ clouds (Cowie et al. (1995); Songaila & Cowie (1996); Songaila (1997); Cowie & Songaila (1998)). They are typically the 1$`\sigma `$ objects and collapse at $`z\sim 10`$. Our estimate basically agrees with the more detailed calculation in Ciardi & Ferrara (1997). In the unshaded lower right region ($`T_{\mathrm{CMB}}>T_{\mathrm{vir}}`$) and the lightly shaded region “NC” of Fig. 2b, clouds are diffuse and do not become luminous, because radiative cooling is not efficient. In the shaded region “IG”, SNe destroy the binding of the host objects, followed by the diffusion of heavy elements into the surrounding medium. Therefore, the first luminous objects are probably formed in the region “LO” and their mass is estimated to be several $`\times 10^7M_{\odot }`$, if we consider the 2–3$`\sigma `$ objects. The formation epoch of the first luminous objects is $`z\sim 30`$ (considering the 3$`\sigma `$ objects) or $`z\sim 20`$ (considering the 2$`\sigma `$ objects). This estimated mass is larger than the one obtained by Tegmark et al. (1997), because small clouds may be blown up by their own SNe. We would like to thank the anonymous referee for valuable comments. This work is supported by Research Fellowships of the Japan Society for the Promotion of Science for Young Scientists, No. 2370 (HS), by the Japanese Grant-in-Aid for Scientific Research on Priority Areas (No. 10147105) (RN), and by a Grant-in-Aid for Scientific Research of the Ministry of Education, Science, Sports and Culture of Japan, No. 08740170 (RN).
# The X–ray spectra of VW Hydri during the outburst cycle ## 1 Introduction X–rays from dwarf novae arise very near the white dwarf, presumably in a boundary layer between the white dwarf and the accretion disk surrounding it. Information on the properties of the X–ray emitting gas as a function of the mass transfer rate through the accretion disk is provided by observations through the outburst cycle of dwarf novae. It may be hoped that such observations help to elucidate the nature of the X–ray emission in cataclysmic variables, and by extension in accretion disks in general. VW Hyi is a dwarf nova that has been extensively studied during outbursts and in quiescence, at wavelengths from optical to hard X–rays. It is a dwarf nova of the SU UMa type, i.e. in addition to ordinary dwarf nova outbursts it occasionally shows brighter and longer outbursts, which are called superoutbursts. Ordinary outbursts of VW Hyi occur every 20–30 d and last 3–5 days; superoutbursts occur roughly every 180 d and last 10–14 d (Bateson bateson77 (1977)). A multi-wavelength campaign combining data obtained with EXOSAT, Voyager, the International Ultraviolet Explorer, and by ground based optical observers covered three ordinary outbursts, one superoutburst, and the three quiescent intervals between these outbursts (Pringle et al. pringle87 (1987), Van Amerongen et al. amerongen87 (1987), Verbunt et al. verbunt87 (1987), Polidan, Holberg polidan87 (1987), van der Woerd & Heise woerd87 (1987)). The EXOSAT data show that the flux in the 0.05–1.8 keV range decreases during the quiescent interval; the flux evolution at lower energies and at higher energies (1–6 keV) are compatible with this, but the count rates provided by EXOSAT are insufficient to show this independently. Folding the EXOSAT data of three outbursts showed that a very soft component appears early in the outbursts and decays faster than the optical flux (Wheatley et al. wheatley96 (1996)). 
The ROSAT Position Sensitive Proportional Counter (PSPC) and Wide Field Camera (WFC) covered a dwarf nova outburst of VW Hyi during the ROSAT All Sky Survey (Wheatley et al. wheatley96 (1996)). The PSPC data show that the flux in the 0.1–2.5 keV range is lower during outburst. The ROSAT data showed no significant difference between the outburst and quiescent X–ray spectra. The best spectral constraints are obtained for the quiescent X–ray spectrum by combining ROSAT WFC data from the All Sky Survey with data from ROSAT PSPC and GINGA pointings. A single-temperature fit is not acceptable; the sum of two optically thin plasma spectra, at temperatures of 6 keV and 0.7 keV, is somewhat better. The spectrum of a plasma which cools from 11 keV and has emission measures at lower temperatures proportional to the cooling time provides an acceptable fit of the spectrum in the 0.05–10 keV energy range (Wheatley et al. wheatley96 (1996)). In this paper we report on a series of BeppoSAX observations of VW Hyi, which cover an ordinary outburst and a substantial part of the subsequent quiescent interval. The observations and data reduction are described in Sect. 2, the results in Sect. 3, and a discussion and comparison with earlier work is given in Sect. 4. ## 2 Observations and data reduction VW Hyi is monitored at optical wavelengths by the American Association of Variable Star Observers (AAVSO). On Sep 23 1998 the optical magnitude of VW Hyi started to decrease. The outburst lasted for 5–6 days and reached a peak magnitude of 9.2. This outburst served as a trigger for a sequence of six observations by BeppoSAX between Sep 24 and Oct 18. As a result we have obtained one X–ray observation during outburst and five observations during quiescence. Since VW Hyi appears as an on-axis source, the Low Energy Concentrator Spectrometer (LECS, Parmar et al. parmar97 (1997)) source counts are extracted from a circular region with a 35 pixel radius centered on the source. 
We use the Sep 1997 LECS response matrices centered at the mean raw pixel coordinates (130,124) for the channel-to-energy conversion and to fold the model spectra when fitted to the data. The combined Medium Energy Concentrator Spectrometer (MECS2 and MECS3, Boella et al. boella97 (1997)) source counts are extracted from a circular region with a 4′ (30 pixel) radius. The September 1997 MECS2 and MECS3 response matrices have been used; these matrices are added together. The background has been subtracted using an annular region around the source region, with inner and outer radii of 35 and 49.5 pixels for the LECS and 30 and 42.5 pixels for the MECS. We ignore the data of the High Pressure Gas Scintillation Proportional Counter (HPGSPC, Manzo et al. manzo97 (1997)) and the Phoswitch Detection System (PDS, Frontera et al. frontera97 (1997)) since their background-subtracted spectra have a very low signal-to-noise ratio. The LECS and MECS data products are obtained by running the BeppoSAX Data Analysis System pipeline (Fiore et al. fiore99 (1999)). We rebin the energy channels of all four instruments to $`\frac{1}{3}\times \text{FWHM}`$ of the spectral resolution and require a minimum of 20 counts per energy bin to allow the use of the chi-squared statistic. The total LECS and MECS net exposure times are 82.5 ksec and 181.4 ksec, respectively. The factor 2.2 between the LECS and MECS exposure times is due to the non-operability of the LECS on the daytime side of the Earth. ## 3 Results In Fig. 1 we show the optical lightcurve of VW Hyi at the time of our X–ray observations, provided by the American Association of Variable Star Observers and the Variable Star Network. These optical observations show that our first BeppoSAX observation was obtained during an ordinary outburst that peaked on Sep 24, whereas observations 2–6 were obtained in quiescence. 
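The minimum-counts grouping used in the data reduction above can be sketched as follows; this illustrates the grouping logic only, not the actual BeppoSAX pipeline, and it ignores the accompanying $`\frac{1}{3}\times \text{FWHM}`$ rebinning.

```python
def group_channels(counts, min_counts=20):
    """Group adjacent spectral channels until each bin holds at least
    min_counts counts, as required for the chi-squared statistic.
    A trailing underfilled group is merged into the previous bin."""
    bins, acc = [], 0
    for c in counts:
        acc += c
        if acc >= min_counts:
            bins.append(acc)
            acc = 0
    if acc and bins:
        bins[-1] += acc
    elif acc:
        bins.append(acc)
    return bins
```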
The last ordinary outburst preceding our first BeppoSAX observation was observed by the AAVSO to peak on Sep 8; the first outburst observed after our last BeppoSAX observation was a superoutburst that started on Nov 5 and lasted until Nov 19. ### 3.1 Lightcurve In Fig. 1 we also show the count rates detected with the BeppoSAX LECS and MECS. For the latter instrument we show the count rates separately for the full energy range 1.5–10 keV, and for the hard energies only in the range 5–10 keV. In both the LECS and the MECS the count rate is lower during the outburst than in quiescence. In quiescence the count rate decreases significantly between our second and third (only in the MECS data), and between the third and fourth observations (both LECS and MECS data), but is constant after that (see Table 1). The MECS count rate decreases during our first observation, when VW Hyi was in outburst, as is shown in more detail in Fig. 2. This decrease can be described as an exponential decline $`N_{\mathrm{ph}}\propto e^{-t/\tau }`$ with $`\tau \approx 1.1`$ d. The count rates in the LECS are compatible with the same decline, but the errors are too large for an independent confirmation. The count rates at lower energies, 0.1–1.5 keV, are compatible with both a constant value and the exponential decay during our first observation. ### 3.2 Spectral fits We have made spectral fits to the combined MECS and LECS data for each of the six separate BeppoSAX observations and computed the luminosities assuming a distance of 65 pc to VW Hyi (see Warner warner87 (1987)). As expected on the basis of earlier work, described in the introduction, we find that the observed spectra cannot be fitted with a single-temperature plasma. The combination of spectra of optically thin plasmas at two different temperatures does provide acceptable fits. The parameters of these fits are listed in Table 2, and their variation between the separate observations is illustrated in Fig. 3. 
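The exponential decline fitted in Sect. 3.1 can be recovered with a log-linear least-squares fit; the light curve below is synthetic, generated with the quoted $`\tau =1.1`$ d and an assumed normalization, and only stands in for the actual MECS data.

```python
import math

def fit_exponential_decay(times, rates):
    """Fit N(t) = N0 * exp(-t/tau) by linear regression in log space;
    returns (N0, tau).  times in days, rates in counts/s."""
    n = len(times)
    logs = [math.log(r) for r in rates]
    tm = sum(times) / n
    lm = sum(logs) / n
    slope = sum((t - tm) * (l - lm) for t, l in zip(times, logs)) \
        / sum((t - tm) ** 2 for t in times)
    return math.exp(lm - slope * tm), -1.0 / slope

# Synthetic light curve mimicking the observed decline (tau = 1.1 d)
t_days = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
rates = [0.30 * math.exp(-t / 1.1) for t in t_days]
N0, tau = fit_exponential_decay(t_days, rates)
```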
The need for a two-temperature fit is illustrated in Figs. 4 and 5 for the outburst spectrum of observation 1 and for the quiescent spectrum of the combined observations 3–6: the low temperature component is required to explain the excess flux near 1 keV. The Fe-K emission line near $`6.70\pm 0.05\text{ keV}`$ is clearly present in our data, and is due to hydrogen- or helium-like iron from the hot component of the plasma. The LECS data in observations 3–6 are poorly fitted above $`\sim 5`$ keV, which is probably due to calibration uncertainties of the instrument (Fiore et al. fiore99 (1999)). We fix $`n_\mathrm{H}`$ at $`4\times 10^{19}\text{ cm}^{-2}`$, the best-fit value of the combined observation 3–6. (Fixing $`n_\mathrm{H}`$ at $`6\times 10^{17}\text{ cm}^{-2}`$, which was found by Polidan et al. (polidan90 (1990)), does not change the fit parameters, except for the chi-squared values of observations 2, 3 and 3–6, which become slightly worse: 98, 111 and 158 respectively.) The temperature of both the cool and the hot component of the two-temperature plasma is higher during quiescence than during the outburst, increasing from 0.7 keV and 3.2 keV respectively in outburst to 1.3 keV and 6 keV in quiescence. The temperatures immediately after outburst – in our second observation – are intermediate between those of outburst and quiescence. The emission measure (i.e. the integral of the square of the electron density over the emission volume, $`\int n_\mathrm{e}^2dV`$) of both the cool and the hot component of the two-temperature plasma is also higher in quiescence; immediately after outburst the emission measure of the hot component is higher than during the later phases of quiescence. The temperatures and emission measures of the two-temperature plasma are constant, within the errors, in the later phases of quiescence of our observations 3–6. 
For that reason, we have also fitted the combined data of these four observations to obtain better constraints on the fit parameters (see Table 2). Note that the decrease of the count rate between observations 3 and 4, mentioned in Sect. 3.1, is significant even though it is not reflected in the emission measures and luminosities of the two components separately. This is due to the combined spectral fitting of the LECS and the MECS, since the decrease in count rate is less significant for the LECS. Moreover, the errors on the count rates are much smaller than those on the emission measures ($`10\%\text{ and }20\%`$ respectively). We fit the first 31 ksec and the next 46 ksec of the outburst spectrum (1a and 1b) separately. Both fits are good with $`\chi ^2<1`$. From the fit results we compute the MECS and ROSAT PSPC count rates. The results are shown in Table 3. We have only indicated the temperature and emission measure of the hot component since the cool component is responsible for the iron line emission outside the MECS bandwidth and does not have a large impact upon the continuum emission. Note from Table 3 that the decay in count rate is entirely due to the decrease of the emission measure. To compare our observations with the results obtained by Wheatley et al. (wheatley96 (1996)) we consider next the cooling flow model (cf. Mushotzky, Szymkowiak mushotzky88 (1988)) for our observations 1, 2 and 3–6. In this model the emission measure for each temperature is restricted by the demand that it is proportional to the cooling time of the plasma. The results of the fits are shown in Table 2. Note that these results are not better than the two-temperature model fits. Due to the poor statistics of the LECS outburst observation we cannot constrain the lower temperature limit. The MECS is not sensitive to this temperature regime at all. A contour plot of the upper and lower temperature limits for the combined quiescent observations 3–6 is shown in Fig. 6. 
The boundaries of the low temperature in Fig. 6 are entirely determined by the Fe-L and Fe-M line emission: for a low temperature of $`0.35`$ keV the contribution to the line flux, integrated over all higher temperatures, exceeds the observed line flux, while for a low temperature of $`1.2`$ keV there is not sufficient line flux left in the model. The boundaries of the high temperature are determined by the continuum slope: for a high temperature of $`8.5`$ or $`11.5`$ keV the model spectrum is too soft or too hard, respectively, to fit the data.

## 4 Comparison with previous X–ray observations

### 4.1 Time variability

We predict the ROSAT count rates of VW Hyi during outburst and quiescence from the observed BeppoSAX flux of the two-temperature fit (see Table 2). Here we do apply $`n_\mathrm{H}=6\times 10^{17}\text{ cm}^{-2}`$ (Polidan et al. polidan90 (1990)), since ROSAT is probably more sensitive to $`n_\mathrm{H}`$ than BeppoSAX. The predicted count rates during outburst and quiescence are 0.31 and $`0.87\text{ cts s}^{-1}`$; the observed ROSAT count rates are 0.4 and $`1.26\text{ cts s}^{-1}`$, respectively (Belloni et al. belloni91 (1991); Wheatley et al. wheatley96 (1996)). Both predictions differ from the observations by a factor of $`0.75`$. From Fig. 2, we observe a decrease in the MECS count rate by a factor of $`4`$ during outburst. This is inconsistent with the constant 0.4 $`\text{ cts s}^{-1}`$ observed by the ROSAT PSPC during outburst (Wheatley et al. wheatley96 (1996)). Using the LECS data during the outburst in a bandwidth (0.1–1.5 keV) comparable to that of the ROSAT PSPC, we cannot discriminate observationally between a constant flux and the exponential decay observed by the MECS. However, our spectral fits to the data require that the 0.1–2.5 keV flux decreases in tandem with the hard flux.
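The quoted factor of $`0.75`$ can be checked directly from the numbers in the text; the snippet below simply divides the predicted by the observed PSPC count rates.

```python
# Predicted vs. observed ROSAT PSPC count rates quoted in the text (cts/s).
predicted = {"outburst": 0.31, "quiescence": 0.87}
observed = {"outburst": 0.40, "quiescence": 1.26}

for state in predicted:
    ratio = predicted[state] / observed[state]
    print(f"{state}: predicted/observed = {ratio:.2f}")
```

Both ratios come out between about 0.69 and 0.78, consistent with the quoted factor of roughly 0.75.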
Thus the difference between the ROSAT PSPC and the BeppoSAX MECS lightcurves during outburst may be due either to variations between individual outbursts or to the different spectral bandwidths of the observing instruments. The predicted decay of the count rate significantly exceeds the range allowed by the ROSAT observations of the Nov 1990 outburst. We interpret the time variability of the count rate shown in Figs. 2 and 3 mainly as a change in the amount of gas in the inner disk that emits keV photons. At the end of the outburst, while the inner disk is still predominantly optically thick, the mass accretion rate onto the white dwarf is decreasing. As a result, the amount of hot, optically thin gas drops gradually; this is observed in Fig. 2. The transition to a predominantly optically thin inner disk occurs just before observation 2. As a result the amount of optically thin emitting material in the disk increases strongly. This is shown by the increase of the emission measure of the hot component in Fig. 3, observation 2, which even peaks above the quiescent value. The settling of the accretion rate towards quiescence is shown in Fig. 3, observations 3–6, for both the temperature and the emission measure. In contrast to the emission measure, the temperature of the hot component increases only gradually throughout observations 1–6, as it reflects the slowly decreasing accretion rate rather than the amount of optically thin emitting material in the disk.

### 4.2 Spectral variability

Both a two-temperature plasma model and a cooling flow model fit the spectrum of our BeppoSAX observations of VW Hyi better than a one-temperature model. The contribution of the cool component lies mainly in the presence of strong Fe-L line emission around 1 keV; the hot component contributes the continuum and the Fe-K line emission at $`6.7`$ keV. Adding a soft atmospheric component in the form of a $`10`$ eV blackbody model does not improve our fits.
This blackbody component, reported by Van der Woerd et al. (woerd86 (1986)) and Van Teeseling et al. (teeseling93 (1993)), is too soft to be detected by the BeppoSAX LECS. Based upon the $`\chi ^2`$-values, the BeppoSAX observation of VW Hyi does not discriminate between a continuous temperature distribution (the cooling flow model) and a discrete temperature distribution (the two-component model) of the X–ray emitting region. Wheatley et al. (wheatley96 (1996)) derive a lower and upper temperature of $`0.53`$ and $`11_{-2}^{+3}`$ keV, respectively, for a cooling flow fit to the combined ROSAT PSPC and GINGA LAC data during quiescence. These temperatures are consistent with our cooling flow fits to the BeppoSAX data during quiescence; there is a small overlap between the 2 and 3$`\sigma `$ contours shown in Fig. 6 of Wheatley et al. and the contours of our Fig. 6.

## 5 Conclusions

BeppoSAX does not discriminate between a continuous (cooling flow) and a discrete temperature distribution. Our observation of a decreasing count rate, followed by a constant count rate during quiescence, is in contradiction with the disk instability models. These models predict a slightly increasing mass transfer onto the white dwarf, which must show up as an increase in the X–ray flux. Ad hoc modifications to disk instability models, such as interaction of the inner disk with a magnetic field of the white dwarf (Livio, Pringle livio92 (1992)), evaporation of the inner disk (Meyer, Meyer-Hofmeister meyer94 (1994)), or irradiation of the inner disk by the white dwarf (King king97 (1997)), are possibly compatible with the decrease of ultraviolet flux (e.g. Van Amerongen et al. amerongen90 (1990)) and X–ray flux during quiescence. If we assume a continuous temperature distribution, the upper temperature limit of our quiescence spectrum is consistent with the observations by Wheatley et al. (wheatley96 (1996)).
The cooling flow model requires an accretion rate of $`3\times 10^{-12}\text{ M}_{\odot}\text{ yr}^{-1}`$ to explain the X–ray luminosity late in quiescence. A similar result is obtained when we convert the luminosity derived from the two-temperature model into an accretion rate. Any outburst model must accommodate this accretion rate. The BeppoSAX MECS observes a significant decrease in the count rate during outburst. Our simulations show a similar decrease for the ROSAT PSPC, which would have been detected significantly. The fact that the ROSAT count rate during outburst was constant (Wheatley et al. wheatley96 (1996)), together with the results from our cooling flow model fits, suggests that the outburst of Sep 24 1998 behaved differently from the outburst of Nov 3 1990.

###### Acknowledgements.

This work has been supported by funds of the Netherlands Organization for Scientific Research (NWO).
no-problem/9908/math9908074.html
ar5iv
text
# On some dimensional properties of 4-manifolds

## 1. Introduction

It was shown in that under the assumption of Jensen’s principle $`\diamondsuit `$ there exists a differentiable $`n`$-manifold $`M_m^n`$, $`n\ge 4`$, of any given Lebesgue dimension $`m`$, where $`m>n`$. This manifold is countably compact, perfectly normal and hereditarily separable. Under the same set-theoretical assumption $`\diamondsuit `$, for any countable ordinal number $`\alpha >4`$ there exists a $`4`$-manifold $`M_\alpha ^4`$ with $`\mathrm{Ind}M_\alpha ^4=\alpha `$. That paper also contains examples of: (a) weakly infinite-dimensional $`4`$-manifolds without the large inductive dimension and (b) strongly infinite-dimensional $`4`$-manifolds. Recently it was shown that for a given countable complex $`L`$, with $`\left[L\right]\ge \left[S^4\right]`$ and which serves as the extension dimension of a metrizable compactum, there exists a differentiable $`4`$-manifold $`M=M^{4,L}`$ with $`\mathrm{e}\mathrm{dim}M=\left[L\right]`$. It should be emphasized that it is still unknown whether the extension dimension of a metrizable compactum is realized by a countable complex. Below we construct a differentiable $`4`$-manifold with similar properties for any, not necessarily countable, complex.

## 2. Preliminaries

We recall that a subset $`U\subset X`$ of a space $`X`$ is functionally open in $`X`$ if there is a continuous map $`\phi :X\to \mathbb{R}`$ such that $`U=\phi ^{-1}(\mathbb{R}\setminus \{0\})`$. Also, we say that $`X`$ is at most $`n`$-dimensional (and write $`dimX\le n`$) if every finite functionally open cover $`𝒰`$ of $`X`$ has a finite functionally open refinement $`𝒱`$ of order $`\le n+1`$. The latter means that $`\bigcap 𝒱^{}=\emptyset `$ for any family $`𝒱^{}\subset 𝒱`$ consisting of at least $`n+2`$ elements. For normal spaces this definition is equivalent to the usual definition of Lebesgue dimension. The next statement is well known (see, , for example).

###### Proposition 2.1.

For every space $`X`$ we have $`dimX=dim\beta X`$.
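The order condition in the definition of $`dimX\le n`$ — every $`n+2`$ members of the refinement have empty intersection — has a finite combinatorial analogue that is easy to compute. The sketch below is illustrative only: it works with finite sets rather than open covers of a topological space, and returns the order of a finite cover.

```python
from itertools import combinations

def cover_order(cover):
    """Largest k such that some k members of `cover` share a point.
    In the topological setting, dim X <= n corresponds to the existence
    of refinements of order <= n + 1."""
    order = 0
    for k in range(1, len(cover) + 1):
        if any(set.intersection(*map(set, sub)) for sub in combinations(cover, k)):
            order = k
    return order

# A cover of {0,...,4} by three overlapping sets: order 2,
# i.e. some two sets meet, but no point lies in all three.
cover = [{0, 1, 2}, {1, 2, 3}, {3, 4}]
print(cover_order(cover))  # -> 2
```

For a cover of a 1-dimensional space such as an interval, one can always refine to covers of order 2, which is exactly the combinatorial content of `dim = 1`.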
We assume that the reader is familiar with the notions of a CW-complex, a simplicial complex with the metric topology and an absolute neighborhood retract in the category $`\mathcal{M}`$ of metrizable spaces ($`ANR(\mathcal{M})`$-space) (see, for instance, ). In what follows, by a simplicial complex we mean any simplicial complex with the metric topology. Let us note here that all simplices are assumed to be closed, which implies that every finite simplicial complex is compact. By an $`ANR`$ we mean an $`ANR(\mathcal{M})`$-space.

###### Theorem 2.2.

\[14, Theorem 3.3.10\] Every simplicial complex is an $`ANR`$.

The next statement, which is a corollary of \[14, Theorem 5.2.1\], allows us to consider only simplicial complexes.

###### Theorem 2.3.

Every CW-complex has the homotopy type of a simplicial complex.

###### Definition 2.4.

Following we say that a space $`Z`$ is an absolute extensor of a normal space $`X`$ and write $`Z\in AE(X)`$ if for each closed subspace $`Y`$ of $`X`$ any map $`f:Y\to Z`$ has an extension $`\overline{f}:X\to Z`$.

The next statement is an immediate corollary of the above definition.

###### Proposition 2.5.

If $`Z\in AE(X)`$, $`X`$ is a normal space and $`Y`$ is closed in $`X`$, then $`Z\in AE(Y)`$.

###### Definition 2.6.

Let $`X`$ and $`Z`$ be normal spaces. Recall that $`Z`$ is an absolute neighborhood extensor of a space $`X`$ (notation: $`Z\in ANE(X)`$) if for every closed subspace $`Y\subset X`$ any map $`f:Y\to Z`$ has an extension $`\overline{f}:U\to Z`$, where $`U`$ is a neighborhood of $`Y`$ in $`X`$.

###### Proposition 2.7.

Let $`X`$ be a normal countably compact space and let $`L`$ be a simplicial complex. Then $`L\in ANE(X)`$.

###### Proof.

Let $`f:Y\to L`$ be a map of a closed subset $`Y\subset X`$. Since $`L`$ is metrizable and $`Y`$ is countably compact, $`f(Y)`$ is compact. Hence $`f(Y)`$ is contained in some finite subcomplex $`K\subset L`$. But every finite complex is an $`ANE`$ for any normal space. Thus, there is an extension $`\overline{f}:U\to K`$ of $`f`$ defined on an open neighborhood $`U`$ of $`Y`$ in $`X`$. ∎

###### Proposition 2.8.
The following conditions are equivalent for every countably compact normal space $`X`$ and every simplicial complex $`L`$:

* $`L\in AE(X)`$.
* $`L\in AE(\beta X)`$.

###### Proof.

$`(1)\Rightarrow (2)`$. By Definition 2.4, we need to check that for every closed set $`Y\subset \beta X`$ any map $`f:Y\to L`$ has an extension $`\overline{f}:\beta X\to L`$. By Proposition 2.7, there is an extension $`f_1:U\to L`$, where $`U`$ is a neighborhood of $`Y`$ in $`\beta X`$. Let $`U_1`$ be a smaller neighborhood of $`Y`$ in $`\beta X`$ such that $`\mathrm{cl}_{\beta X}U_1\subset U`$. Set $`F=X\cap \mathrm{cl}_{\beta X}U_1`$ and let $`f_2=f_1|F`$. By condition (1), there is an extension $`\overline{f}_2:X\to L`$. As in the proof of Proposition 2.7, $`\overline{f}_2(X)`$ is contained in some finite complex $`K\subset L`$. But, as was noted above, $`K`$ is compact. Hence the map $`\overline{f}_2`$ can be extended to a map $`\overline{f}:\beta X\to K\subset L`$. It remains to show that $`\overline{f}|Y=f`$. But $`\overline{f}|F=f_1|F`$. Hence, since $`F`$ is dense in $`\mathrm{cl}_{\beta X}U_1`$, we have $`\overline{f}|\mathrm{cl}_{\beta X}U_1=f_1`$. On the other hand, $`f_1|Y=f`$.

$`(2)\Rightarrow (1)`$. Let $`Y`$ be a closed subset of $`X`$ and let $`f:Y\to L`$ be a map. Set $`F=\mathrm{cl}_{\beta X}Y`$. Since $`Y`$ is closed in the normal space $`X`$, $`F=\beta Y`$. Then $`f`$ can be extended to a map $`f_1:F\to L`$ because $`f(Y)`$ lies in some finite complex $`K\subset L`$. Now, by condition $`(2)`$, the map $`f_1:F\to L`$ can be extended to a map $`\overline{f}_1:\beta X\to L`$. It only remains to note that the map $`\overline{f}=\overline{f}_1|X`$ extends $`f`$. Proposition 2.8 is proved. ∎

###### Proposition 2.9.

Let $`X`$ be a countably compact normal space, let $`F`$ be a closed subset of $`X`$ and let $`U=X\setminus F`$. Suppose $`L\in AE(F)`$ and $`L\in AE(Y)`$ for every set $`Y\subset U`$ closed in $`X`$. Then $`L\in AE(X)`$.

###### Proof.

By Definition 2.4, we need to verify that for every closed set $`A\subset X`$ any map $`f:A\to L`$ has an extension $`\overline{f}:X\to L`$. Let $`f_0=f|(A\cap F)`$.
Since $`L\in AE(F)`$, the map $`f_0`$ can be extended to a map $`\overline{f}_0:F\to L`$. Define the map $`f_1:A\cup F\to L`$ by letting $`f_1|A=f`$ and $`f_1|F=\overline{f}_0`$. Clearly, $`f_1`$ is continuous. By Proposition 2.7, the map $`f_1`$ has an extension $`\overline{f}_1:V\to L`$, where $`V`$ is a neighborhood of $`A\cup F`$ in $`X`$. Take a neighborhood $`V_1`$ of $`A\cup F`$ such that $`\mathrm{cl}(V_1)\subset V`$ and let $`Y=X\setminus V_1`$, $`Y_1=\mathrm{Bd}(V_1)`$, $`g=\overline{f}_1|Y_1`$. Then $`Y`$ is closed in $`X`$ and $`Y_1`$ is closed in $`Y`$. By hypothesis, $`L\in AE(Y)`$. Hence, the map $`g`$ has an extension $`\overline{g}:Y\to L`$. Finally, define a map $`\overline{f}:X\to L`$ by letting $$\overline{f}|Y=\overline{g}\quad \text{and}\quad \overline{f}|\mathrm{cl}(V_1)=\overline{f}_1.$$ Evidently, $`\overline{f}`$ is well defined and continuous. It is also clear that $`\overline{f}|A=f`$. Proposition 2.9 is proved. ∎

Next we define a relation $`\le `$ on the class of all simplicial complexes. Following we say that $`K\le L`$ if for every normal countably compact space $`X`$ the condition $`K\in AE(X)`$ implies the condition $`L\in AE(X)`$. The relation $`\le `$ is reflexive and transitive and, consequently, it is a preorder. This preorder induces the following equivalence relation: $$K\sim L\iff K\le L\text{ and }L\le K.$$ For a simplicial complex $`L`$, by $`\left[L\right]`$ we denote the class of all complexes which are equivalent to $`L`$. These classes $`\left[L\right]`$ are called extension types.

###### Remark 2.10.

The relation $`L\in AE(X)`$, the preorder $`\le `$ and the extension types $`\left[L\right]`$ can be defined for different classes of spaces $`X`$. A. N. Dranishnikov defined the relation $`L\in AE(X)`$ for the class $`LC`$ of all metrizable locally compact spaces. In he defined this relation for the class $`𝒞`$ of all compact Hausdorff spaces. One can define a relation $`\le _\sigma `$ and the associated concepts for an arbitrary class $`\sigma `$ of topological spaces.
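The passage from the preorder $`\le `$ to extension types $`\left[L\right]`$ is the generic construction of equivalence classes induced by a preorder: $`x\sim y`$ iff $`x\le y`$ and $`y\le x`$. The toy sketch below illustrates this construction for integers preordered by absolute value; it is, of course, only an analogy, not the actual preorder on simplicial complexes.

```python
def equivalence_classes(elements, le):
    """Group `elements` by the equivalence x ~ y iff le(x, y) and le(y, x),
    as induced by a preorder `le` (assumed reflexive and transitive)."""
    classes = []
    for x in elements:
        for cls in classes:
            rep = cls[0]  # one representative suffices, by transitivity
            if le(x, rep) and le(rep, x):
                cls.append(x)
                break
        else:
            classes.append([x])
    return classes

# Toy preorder on integers: compare by absolute value.
# Then n ~ m exactly when |n| == |m|.
le = lambda a, b: abs(a) <= abs(b)
print(equivalence_classes([-2, 1, 2, -1, 3], le))  # -> [[-2, 2], [1, -1], [3]]
```

Checking a single representative per class is enough precisely because the relation is transitive, which mirrors why the extension types $`\left[L\right]`$ are well defined.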
Let $`C`$ be the class of all compact metrizable spaces and $`𝒞C`$ be the class of all normal countably compact spaces.

###### Proposition 2.11.

For any simplicial complexes $`K`$ and $`L`$ the following conditions are equivalent:

* $`K\le _CL`$;
* $`K\le _𝒞L`$;
* $`K\le _{𝒞C}L`$;
* $`K\le _{LC}L`$.

The equivalence $`(1)\iff (2)`$ was proved in \[10, Theorem 11\]. For the equivalence $`(2)\iff (3)`$ consult Proposition 2.8. As for the equivalence $`(1)\iff (4)`$, it follows from Theorem 2.18 and the next trivial statement.

###### Proposition 2.12.

Let $`X_\alpha `$, $`\alpha \in A`$, be a discrete family of normal spaces. Then for any simplicial complex $`L`$ $$L\in AE\left(\bigoplus \{X_\alpha :\alpha \in A\}\right)\iff L\in AE(X_\alpha )\text{ for each }\alpha \in A.$$

If $`\sigma `$ is a class of topological spaces, then by $`\text{𝔼}_\sigma `$ we denote the class of all extension types of all simplicial complexes generated by the relation $`\le _\sigma `$. In view of Proposition 2.11 we shall use the simpler notation: 𝔼 and $`\le `$.

###### Definition 2.13.

(see ) Let $`X`$ be a countably compact normal space. Its extension dimension $`\mathrm{e}\mathrm{dim}X`$ is defined as the smallest extension type $`\left[L\right]`$ of simplicial complexes satisfying the condition $`L\in AE(X)`$.

###### Proposition 2.14.

\[8, §3, Proposition 1\] For any compactum $`X`$ there exists a unique extension type $`\left[L\right]`$ such that $`\mathrm{e}\mathrm{dim}X=\left[L\right]`$.

###### Proposition 2.15.

\[8, §3, Proposition 2\] The correspondence $`\mathrm{e}\mathrm{dim}`$ maps the class $`𝒞`$ epimorphically onto the class 𝔼.

Propositions 2.8 and 2.14 yield

###### Proposition 2.16.

For any normal countably compact space $`X`$ there exists a unique extension type $`\left[L\right]`$ such that $`\mathrm{e}\mathrm{dim}X=\mathrm{e}\mathrm{dim}\beta X=\left[L\right]`$.

Propositions 2.8 and 2.15 yield

###### Proposition 2.17.

The correspondence $`\mathrm{e}\mathrm{dim}`$ maps the class $`𝒞C`$ epimorphically onto the class 𝔼.

###### Theorem 2.18.
Suppose that a normal countably compact space $`X`$ is the union of its closed subsets $`X_i`$, $`i\in \omega `$. If $`\mathrm{e}\mathrm{dim}X_i\le \left[L\right]`$ for each $`i\in \omega `$, then $`\mathrm{e}\mathrm{dim}X\le \left[L\right]`$.

The proof of the above statement repeats the proof (see, for instance, ) of the classical countable sum theorem for the Lebesgue dimension $`dim`$ of normal spaces by means of extension of mappings into $`S^n`$. The main feature of the sphere $`S^n`$ exploited in that proof is $`S^n\in ANE(X)`$. The corresponding property $`L\in ANE(X)`$ in our case is guaranteed by Proposition 2.7. For further reference we formulate the just mentioned description of the Lebesgue dimension as a separate statement. Obviously it provides the main link between the theory of Lebesgue dimension and the theory of extension dimension.

###### Theorem 2.19.

For any normal space $`X`$, $$dimX\le n\iff \mathrm{e}\mathrm{dim}X\le \left[S^n\right].$$

## 3. On a realization of dimensional types by manifolds

We recall one result from in a form more convenient for us.

###### Theorem 3.1.

For an arbitrary metrizable compactum $`C`$, assuming $`\diamondsuit `$, there exists a differentiable, countably compact, perfectly normal, hereditarily separable $`4`$-manifold $`M_C^4`$ such that $`\beta M_C^4\setminus M_C^4`$ is a metrizable compactum homeomorphic to the disjoint sum of $`C`$ and some open subset $`U`$ of the $`3`$-dimensional sphere $`S^3`$.

The manifold $`M_C^4`$ is a manifold of type $`M_\phi ^4`$ from , where $`\phi =\phi _C:B^4\to B_\phi ^4`$ is a quotient mapping, defined on the closed ball $`B^4`$, with the following properties. Let the sphere $`S^3`$ be the boundary of $`B^4`$. There exists a closed set $`A\subset S^3`$ such that

* $`A=\phi ^{-1}\phi (A)`$;
* $`\phi (A)=C`$;
* each fiber $`\phi ^{-1}(y)`$, $`y\in C`$, is a non-degenerate continuum nowhere dense in $`S^3`$;
* $`\phi ^{-1}\phi (x)=\{x\}`$ for every $`x\in B^4\setminus A`$.

Thus, $`\phi (S^3)=S_\phi ^3`$ is homeomorphic to the disjoint sum of $`C`$ and $`S^3\setminus A`$.
By \[13, Proposition 2.3\], $`\beta M_C^4\setminus M_C^4=S_\phi ^3`$. Let $`\mathrm{\Lambda }`$ be the class of all complexes and let $$\mathrm{\Lambda }^0=\{L\in \mathrm{\Lambda }:\left[L\right]=\mathrm{e}\mathrm{dim}X\text{ for some metrizable compactum }X\}.$$ By Proposition 2.14, for every metrizable compactum $`X`$ there is a complex $`L\in \mathrm{\Lambda }^0`$ such that $`\mathrm{e}\mathrm{dim}X=\left[L\right]`$. Set $$\mathrm{\Lambda }_4^0=\{L\in \mathrm{\Lambda }^0:\left[L\right]\ge \left[S^4\right]\}.$$ The next theorem is the main result of this section.

###### Theorem 3.2.

For an arbitrary complex $`L\in \mathrm{\Lambda }_4^0`$, assuming $`\diamondsuit `$, there exists a differentiable, countably compact, perfectly normal, hereditarily separable $`4`$-manifold $`M=M^{4,L}`$ such that $`\mathrm{e}\mathrm{dim}M=\left[L\right]`$.

###### Proof.

We use the scheme of the proof of \[5, Theorem 3.1\], where a similar result was obtained for countable complexes. The only difference is that in our situation we cannot apply the auxiliary results for countable complexes which were used in . Consider a complex $`L\in \mathrm{\Lambda }_4^0`$. By the definition of $`\mathrm{\Lambda }_4^0`$, there is a metrizable compactum $`C`$ such that (3.1) $$\mathrm{e}\mathrm{dim}C=\left[L\right].$$ Set $`M=M_C^4`$, where $`M_C^4`$ is a manifold from Theorem 3.1. We claim that this is a required manifold. First of all, $`M`$ is countably compact. Hence, in view of Proposition 2.8, (3.2) $$\mathrm{e}\mathrm{dim}M=\mathrm{e}\mathrm{dim}\beta M.$$ Further, by Proposition 2.5 and Theorem 3.1, we have (3.3) $$\mathrm{e}\mathrm{dim}\beta M\ge \mathrm{e}\mathrm{dim}(\beta M\setminus M)\ge \mathrm{e}\mathrm{dim}C=\left[L\right].$$ Now we apply Proposition 2.9 to the pair $`(S_\phi ^3,C)`$. Since $`S_\phi ^3\setminus C`$ is open in $`S^3`$, Theorem 2.19 yields (3.4) $$\mathrm{e}\mathrm{dim}S_\phi ^3\le \mathrm{max}\{\left[S^3\right],\left[L\right]\}=\left[L\right].$$ Finally, let us apply Proposition 2.9 to the pair $`(\beta M,S_\phi ^3)`$.
Since $`dimY\le 4`$ for any compactum $`Y\subset M`$, by Theorem 2.19 and inequality (3.4) we obtain (3.5) $$\mathrm{e}\mathrm{dim}\beta M\le \mathrm{max}\{\left[S^4\right],\left[L\right]\}=\left[L\right].$$ Inequalities (3.3) and (3.5) yield $$\mathrm{e}\mathrm{dim}\beta M=\left[L\right].$$ Thus, equality (3.2) finishes the proof of Theorem 3.2. ∎

As corollaries of Theorem 3.2 we discuss several examples of complexes $`L\in \mathrm{\Lambda }_4^0`$ with certain curious properties. First of all we recall two results needed for our discussion.

###### Proposition 3.3.

\[4, Proposition 2.6\] Let $`K`$ be an $`n`$-dimensional locally compact polyhedron. Then $`\mathrm{e}\mathrm{dim}K=\left[S^n\right]`$.

###### Proposition 3.4.

\[4, Corollary 2.3\] Let $`L`$ be a simplicial complex homotopy dominated by a finite complex. Then there exists a metrizable compactum $`X^L`$ such that $`\mathrm{e}\mathrm{dim}X^L=\left[L\right]`$.

###### Remark 3.5.

It follows from the proof of Theorem 3.2 that for every complex $`L\in \mathrm{\Lambda }_4^0`$ there exists a metrizable compactum $`C_L`$ such that (3.6) $$\mathrm{e}\mathrm{dim}M^{4,L}=\mathrm{e}\mathrm{dim}M_{C_L}^4=\mathrm{e}\mathrm{dim}C_L=\left[L\right].$$

###### Example 3.6.

Consider the family $`\{S^n:n\ge 4\}`$ and let $`C_n=I^n`$. Then from (3.6) and Proposition 3.3 we obtain that $`dimM^{4,S^n}=n`$ – a fact proved earlier in .

###### Definition 3.7.

Let $`L_n=M(\mathbb{Z}_2,n+1)\vee S^{n+1}`$, where $`M(\mathbb{Z}_2,n+1)`$ is the Moore complex, i.e. the complex obtained from the $`(n+1)`$-dimensional disk $`B^{n+1}`$ by attaching to its boundary $`S^n`$ the disk $`B^{n+1}`$ via the map $`S^n\to S^n`$ of degree $`2`$. It is clear that $`L_n`$ is a finite complex with $`\left[S^n\right]<\left[L_n\right]<\left[S^{n+1}\right]`$.

###### Example 3.8.

Consider the family $`\{L_n:n\ge 4\}`$ and let $`C_n`$ be a metrizable compactum with $`\mathrm{e}\mathrm{dim}C_n=\left[L_n\right]`$ (see Proposition 3.4). Then $`\mathrm{e}\mathrm{dim}M^{4,L_n}=\left[L_n\right]`$.

###### Corollary 3.9.
Assuming $`\diamondsuit `$, there exists a differentiable, countably compact, perfectly normal, hereditarily separable $`4`$-manifold $`M^4`$ such that $`\left[S^4\right]<\mathrm{e}\mathrm{dim}M^4<\left[S^5\right]`$.

## 4. On the dimension of products of manifolds

The Stone-Čech remainder $`\beta X\setminus X`$ of a space $`X`$ is denoted by $`X^{}`$.

###### Lemma 4.1.

Let $`M_i`$ be a countably compact $`n_i`$-manifold of dimension $`dimM_i=m_i`$, $`i=1,2`$. Then $$dim(M_1\times M_2)=\mathrm{max}\{n_1+m_2,n_2+m_1,dim(M_1^{}\times M_2^{})\}.$$

###### Proof.

Because each manifold is a $`k`$-space (being first countable), it follows from \[11, Theorem 3.10.13\] that $`M_1\times M_2`$ is countably compact. Hence, by Glicksberg’s theorem , $`M_1\times M_2`$ is pseudocompact and $`\beta (M_1\times M_2)=\beta M_1\times \beta M_2`$. By Proposition 2.1, $`dim(M_1\times M_2)=dim\beta (M_1\times M_2)`$. So we have to find the exact value of $`dim\beta (M_1\times M_2)`$. Let $`X=\beta M_1\times \beta M_2`$, $`F=M_1^{}\times M_2^{}`$ and $`U=X\setminus F`$. By Dowker’s theorem , (4.1) $$dimX=\mathrm{max}\{dimF,k\},$$ where (4.2) $$k=sup\{dimY:Y\subset U,Y\text{ is closed in }X\}.$$ It is clear that each $`Y`$ from (4.2) is contained in some $`Y^{}=(K_1\times \beta M_2)\cup (\beta M_1\times K_2)`$, where $`K_i\subset M_i`$ is a finite sum of $`n_i`$-dimensional cubes, $`i=1,2`$. By Morita’s theorem , (4.3) $$dim(K\times Z)=dimK+dimZ,$$ whenever $`Z`$ is a paracompact space and $`K`$ is a compact polyhedron. By the finite sum theorem for $`dim`$, (4.3) yields (4.4) $$dim(K_i\times \beta M_j)=dimK_i+dim\beta M_j.$$ Consequently, $$\begin{array}{c}dimY^{}=\mathrm{max}\{dim(K_1\times \beta M_2),dim(\beta M_1\times K_2)\}\stackrel{\text{by }(\text{4.4})}{=}\hfill \\ \hfill \mathrm{max}\{n_1+dim\beta M_2,dim\beta M_1+n_2\}\stackrel{\text{by Proposition }\text{2.1}}{=}\\ \hfill \mathrm{max}\{n_1+dimM_2,dimM_1+n_2\}=\mathrm{max}\{n_1+m_2,m_1+n_2\}.\end{array}$$ Equality (4.1) finishes the proof of Lemma 4.1.
∎

###### Corollary 4.2.

Let $`M_i`$ be a countably compact $`n_i`$-manifold of dimension $`dimM_i=m_i`$, $`i=1,2`$. If $`\mathrm{max}\{n_1+m_2,n_2+m_1\}\le dim(M_1^{}\times M_2^{})`$, then $`dim(M_1\times M_2)=dim(M_1^{}\times M_2^{})`$.

###### Proposition 4.3.

Let $`M_1`$ and $`M_2`$ be countably compact manifolds. Then $$dim(M_1\times M_2)\le dimM_1+dimM_2.$$

###### Proof.

According to Lemma 4.1, we only need to check that $`dim(M_1^{}\times M_2^{})\le m_1+m_2`$. But for any compact spaces $`X_1`$ and $`X_2`$ we have (see ) $$dim(X_1\times X_2)\le dimX_1+dimX_2.$$ Hence, $$\begin{array}{c}dim(M_1^{}\times M_2^{})\le dimM_1^{}+dimM_2^{}\le dim\beta M_1+dim\beta M_2=\hfill \\ \hfill dimM_1+dimM_2=m_1+m_2.\end{array}$$ Proposition 4.3 is proved. ∎

The next statement is an immediate corollary of the countable sum theorem for Lebesgue dimension.

###### Lemma 4.4.

Let $`X_i`$ be metrizable compacta, $`i=1,2`$. Let $`F_i`$ be a closed subset of $`X_i`$, and let $`U_i=X_i\setminus F_i`$. Then $$\begin{array}{c}dim(X_1\times X_2)=\hfill \\ \hfill \mathrm{max}\{dim(U_1\times U_2),dim(U_1\times F_2),dim(F_1\times U_2),dim(F_1\times F_2)\}.\end{array}$$

###### Theorem 4.5.

Let $`m`$ be a natural number such that $`m\ge 5`$. Then, assuming $`\diamondsuit `$, there exists a differentiable, countably compact, perfectly normal, hereditarily separable $`4`$-manifold $`M=M_m`$ such that $`dimM=m`$ and $`dim(M\times M)=2m-1<2dimM`$.

###### Proof.

Let $`B`$ be a two-dimensional metrizable compactum such that $`dim(B\times B)=3`$. Such a compactum was constructed by V. G. Boltyanski . Let $`C=B\times I^{m-2}`$. Then, in accordance with (4.3), (4.5) $$dimC=m,$$ (4.6) $$dim(C\times C)=2m-1.$$ Let $`M=M_C^4`$ be a manifold from Theorem 3.1. We claim that $`M`$ is a required manifold. Indeed, by the properties of $`M_C^4`$, the set $`M^{}\setminus C=U`$ is homeomorphic to an open subset of $`S^3`$. Consequently, Lemma 4.4 and (4.6) yield $`dim(M^{}\times M^{})=2m-1`$. In this situation Corollary 4.2 finishes the proof of Theorem 4.5.
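The arithmetic behind (4.5), (4.6) and the final application of Lemma 4.1 can be spelled out term by term, using Morita’s theorem (4.3) together with $`dim(B\times B)=3`$:

```latex
\begin{align*}
\dim C &= \dim\bigl(B\times I^{\,m-2}\bigr)
        \overset{(4.3)}{=} \dim B + (m-2) = 2 + (m-2) = m,\\
\dim(C\times C) &= \dim\bigl(B\times B\times I^{\,2m-4}\bigr)
        \overset{(4.3)}{=} \dim(B\times B) + (2m-4) = 3 + 2m - 4 = 2m-1,\\
\dim(M\times M) &= \max\{\,4+m,\; 4+m,\; 2m-1\,\} = 2m-1
        \qquad\text{by Lemma 4.1, since } m\ge 5 .
\end{align*}
```

The maximum is attained by $`2m-1`$ precisely because $`m\ge 5`$ gives $`2m-1\ge m+4`$.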
∎

###### Question 4.6.

Does there exist a $`4`$-manifold $`M`$ such that $$2dimM-dim(M\times M)\ge 2\mathrm{?}$$

A similar question about two different manifolds has a positive solution.

###### Theorem 4.7.

Let $`m_1,m_2`$ and $`r`$ be natural numbers such that $`5\le m_1\le m_2`$ and $`4+m_2\le r<m_1+m_2`$. Then, assuming $`\diamondsuit `$, there exist differentiable, countably compact, perfectly normal, hereditarily separable $`4`$-manifolds $`M_1`$ and $`M_2`$ of dimension $`dimM_i=m_i`$ such that $$dim(M_1\times M_2)=r<m_1+m_2=dimM_1+dimM_2.$$

###### Proof.

We follow the proof of Theorem 4.5. First let us recall the following result \[7, §2, Corollary 2\].

* For all natural numbers $`m_1,m_2`$ and $`r`$ such that $`m_1\le m_2`$ and $`m_2<r\le m_1+m_2`$, there exist metrizable compacta $`X_1`$ and $`X_2`$ such that $`dimX_i=m_i`$ and $`dim(X_1\times X_2)=r`$.

Set $`M_i=M_{X_i}^4`$, where $`X_1`$ and $`X_2`$ are the above mentioned compacta with $`m_1,m_2`$ and $`r^{}`$ satisfying the inequalities $`m_2<r^{}\le m_1+m_2`$. From Lemma 4.4 we get $`dim(M_1^{}\times M_2^{})=\mathrm{max}\{3+m_2,r^{}\}`$. On the other hand, for $`k`$ from (4.2) we have $`k=dimY^{}=4+m_2`$. In view of Lemma 4.1 we have $`dim(M_1\times M_2)=\mathrm{max}\{4+m_2,r^{}\}`$. Theorem 4.7 is proved. ∎

###### Remark 4.8.

As we have seen, the dimension of the product of manifolds can be much less than the sum of their dimensions. But, since our manifolds $`M_i`$ are countably compact, by Proposition 4.3 we have $`dim(M_1\times M_2)\le dimM_1+dimM_2`$.

###### Question 4.9.

Are there manifolds $`M_1`$ and $`M_2`$ such that $$dim(M_1\times M_2)>dimM_1+dimM_2\mathrm{?}$$

## 5. On the dimension of subsets of $`4`$-manifolds

###### Theorem 5.1.

Assuming $`\diamondsuit `$, there exists an infinite-dimensional, differentiable, countably compact, perfectly normal, hereditarily separable $`4`$-manifold $`M^4`$ such that for every closed set $`F\subset M^4`$ we have $$\text{either }dimF\le 4\text{ or }dimF=\infty .$$

###### Proof.
Let $`M^4=M_C^4`$, where $`M_C^4`$ is a manifold from Theorem 3.1 and $`C`$ is Henderson’s well-known infinite-dimensional compactum with no compact subsets of finite positive dimension . By Proposition 2.1 and Theorem 3.1, $`dimM^4=\infty `$. Now let $`F`$ be a closed subset of $`M^4`$ such that $`dimF\ge 5`$. Then, in view of Proposition 2.1, $$dim\beta F\ge 5.$$ But, since $`M^4`$ is normal, $`\beta F=\mathrm{cl}_{\beta M^4}F`$. Set $$A=\left(\mathrm{cl}_{\beta M^4}F\right)\cap \left(\beta M^4\setminus M^4\right).$$ By Proposition 2.9 applied to the pair $`(\beta F,A)`$, we have $`dimA\ge 5`$. But in accordance with Theorem 3.1, $`\beta M^4\setminus M^4`$ is a disjoint sum of $`C`$ and some open subset of $`S^3`$. Hence, $$dim(A\cap C)\ge 5.$$ Therefore, by the property of Henderson’s compactum, $`dim(A\cap C)=\infty `$. Consequently, $$dimF=dim\beta F\ge dimA\ge dim(A\cap C)=\infty .$$ Theorem 5.1 is proved. ∎

The next statement is a generalization of Theorem 3.2.

###### Theorem 5.2.

Let $`\mathcal{L}`$ be a countable subset of $`\mathrm{\Lambda }_4^0`$ (see Theorem 3.2). Then, assuming $`\diamondsuit `$, there exists a differentiable, countably compact, perfectly normal, hereditarily separable $`4`$-manifold $`M^4`$ which admits a family $`U_L`$, $`L\in \mathcal{L}`$, of open subsets of extension dimension $`\mathrm{e}\mathrm{dim}U_L=\left[L\right]`$. Moreover, one can choose the sets $`U_L`$ in such a way that

* either $`\mathrm{cl}(U_L)\cap \mathrm{cl}(U_{L^{}})=\emptyset `$ if $`L\ne L^{}`$,
* or $`\bigcap \{U_L:L\in \mathcal{L}\}\ne \emptyset `$.

###### Proof.

By Theorem 3.2, there is a metrizable compactum $`X_L`$ of extension dimension $`\mathrm{e}\mathrm{dim}X_L=\left[L\right]`$. Let $`C`$ be the Alexandroff compactification of the discrete sum $`\bigoplus \{X_L:L\in \mathcal{L}\}`$ of these compacta. Now we define $`M^4`$ as the manifold $`M_C^4`$ from Theorem 3.1. Since $`\{X_L:L\in \mathcal{L}\}`$ is a discrete family in the compact space $`\beta M^4`$, there is a disjoint family of neighborhoods $`V_L`$ of $`X_L`$ in $`\beta M^4`$.
We may also assume that $$\mathrm{cl}_{\beta M^4}(V_L)\cap \mathrm{cl}_{\beta M^4}(V_{L^{}})=\emptyset \text{ if }L\ne L^{}.$$ Now, in case (1), we set (5.1) $$U_L=V_L\cap M^4.$$ To realize case (2) we take an open metrizable subset $`U\subset M^4`$ and set (5.2) $$U_L=(U\cup V_L)\cap M^4.$$ Claim. $`\mathrm{e}\mathrm{dim}(\mathrm{cl}_{M^4}U_L)=\left[L\right]`$. Proof. By Proposition 2.8, it suffices to verify that (5.3) $$\mathrm{e}\mathrm{dim}\beta \left(\mathrm{cl}_{M^4}U_L\right)=\left[L\right].$$ But (5.4) $$\beta \left(\mathrm{cl}_{M^4}U_L\right)=\mathrm{cl}_{\beta M^4}\left(\mathrm{cl}_{M^4}U_L\right)=\mathrm{cl}_{\beta M^4}\left(U_L\right).$$ On the other hand, according to (5.1), $`U_L`$ is dense in $`V_L`$. Hence, (5.5) $$\mathrm{cl}_{\beta M^4}\left(U_L\right)=\mathrm{cl}_{\beta M^4}\left(V_L\right).$$ Let $`\mathrm{\Phi }_L=M^4\cap \mathrm{cl}_{\beta M^4}\left(V_L\right)`$ and let $`F_L=\mathrm{cl}_{\beta M^4}\left(V_L\right)\setminus M^4`$. Since $`\mathrm{\Phi }_L`$ is dense in $`\mathrm{cl}_{\beta M^4}\left(V_L\right)`$, we have (5.6) $$\mathrm{cl}_{\beta M^4}\mathrm{\Phi }_L=\mathrm{\Phi }_L\cup F_L=\mathrm{cl}_{\beta M^4}\left(V_L\right).$$ Hence, (5.7) $$\beta \mathrm{\Phi }_L=\mathrm{\Phi }_L\cup F_L.$$ For every compactum $`Y\subset \mathrm{\Phi }_L`$ we have $`dimY\le 4`$. On the other hand, $`F_L\supset X_L`$ and $`F_L\cap X_{L^{}}=\emptyset `$ whenever $`L\ne L^{}`$. Hence, $`\mathrm{e}\mathrm{dim}F_L=\left[L\right]`$. Now we apply Proposition 2.9 to the pair $`(\beta \mathrm{\Phi }_L,F_L)`$. We have (5.8) $$\mathrm{e}\mathrm{dim}\beta \mathrm{\Phi }_L=\mathrm{e}\mathrm{dim}F_L=\left[L\right].$$ Finally, conditions (5.7), (5.6), (5.5) and (5.4) give us the required equality (5.3). This finishes the proof of our Claim. In order to prove the equality $`\mathrm{e}\mathrm{dim}U_L=\left[L\right]`$ we need a more general version of Theorem 2.18. Its proof is based on the fact that a countable simplicial complex is an $`ANE`$ for the class of all normal spaces.
* Suppose that a normal space $`X`$ is the union of its closed countably compact subsets $`X_i`$, $`i\in \omega `$. If $`\mathrm{e}\mathrm{dim}X_i\le \left[L\right]`$ for each $`i\in \omega `$, then $`\mathrm{e}\mathrm{dim}X\le \left[L\right]`$. In order to finish the proof of Theorem 5.2, we represent $`U_L`$ as the union of an increasing sequence $$U_L^0\subset \mathrm{cl}_{M^4}U_L^0\subset U_L^1\subset \mathrm{\cdots }$$ and apply our Claim. Theorem 5.2 is proved. ∎
# X-ray Clusters and the Search for Cosmic Flows ## 1 Introduction Efforts to improve the sensitivity of redshift-independent distance estimators as peculiar velocity probes exploit correlations in the physical properties of the objects used. One such correlation, between absolute magnitude, $`M_R`$, and Hoessel's (1980) 'structure parameter', $`\alpha `$ ($`\alpha \equiv -0.921\left[\delta M_R/\delta \mathrm{ln}r\right]|_{r_m}`$; the logarithmic slope of the luminosity profile at a metric aperture, $`r_m`$), was employed by Lauer & Postman (1994, hereafter LP) for a sample of optically selected Brightest Cluster Galaxies (BCGs) in 119 ACO (Abell, Corwin & Olowin 1989) hosts, to reduce the scatter in BCG $`M_R`$ from 0.33 to 0.24 mag. Subsequent analysis suggested a large-scale coherent bulk flow, a result in conflict with current cosmological models (e.g. Strauss & Willick 1995). Hudson & Ebeling (1997) speculated that this scatter may be further reduced via corrections for BCG environment. We explore this possibility with an independent, X-ray selected data set. ## 2 X-ray Selected Sample X-ray selection has significant advantages over optical selection: ($`i`$) Diffuse X-ray emission from cluster cores identifies bona-fide clusters and reduces superposition effects which, in optical catalogues, can overestimate cluster richness. ($`ii`$) X-ray parameters provide a more physical reflection of the nature of cluster environments. ($`iii`$) X-ray surveys are background limited and thus free from problems in estimating the local background galaxy number density. Additionally, surveying the whole sky with a single detector avoids biases present in optical catalogues assembled from surveys with disparate characteristics (e.g. ACO). We search for diffuse emission above an X-ray flux limit of $`3\times 10^{-12}`$ erg s<sup>-1</sup> cm<sup>-2</sup> in the latest reduction of the ROSAT all-sky survey (RASS II). 
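As an aside on the measurement itself, the structure parameter defined above can be estimated numerically from an aperture-magnitude growth curve. The sketch below is illustrative only: the radial grid, the default aperture radius and the sign convention (chosen so that $`\alpha `$ is positive for a luminosity profile that rises with aperture) are our assumptions, not part of the LP procedure.

```python
import numpy as np

def structure_parameter(r_kpc, M_R, r_m=10.0):
    """Hoessel structure parameter alpha = -0.921 dM_R/d(ln r) at r = r_m.

    r_kpc : aperture radii (h^-1 kpc); M_R : aperture magnitudes at those radii.
    The factor 0.921 = ln(10)/2.5 converts the magnitude slope to d(ln L)/d(ln r).
    """
    ln_r = np.log(r_kpc)
    dM_dlnr = np.gradient(M_R, ln_r)              # numerical slope dM_R/d(ln r)
    return -0.921 * np.interp(np.log(r_m), ln_r, dM_dlnr)

# sanity check: a power-law growth curve L ~ r^0.6 should give alpha ~ 0.6
r = np.logspace(0.0, 2.0, 200)                    # 1 .. 100 h^-1 kpc
M = -2.5 * np.log10(r ** 0.6)
print(round(structure_parameter(r, M), 2))        # → 0.6
```

In practice the growth curve would come from elliptical-aperture photometry of the BCG image rather than an analytic profile.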
The resulting database is paired with coordinates for ACO clusters with published redshifts, $`z_{LG}<0.1`$. The $`\sim 250`$ clusters for which the ACO coordinate lies within $`15`$ arc-minutes of the X-ray centroid form a parent sample. Composite images frequently show coincidences between peaks of X-ray emission and the projected positions of dominant galaxies (Lazzati & Chincarini 1998). BCG candidates for our sample are identified via this positional coincidence. In over 90% of cases, positional coincidences are unambiguous. When no candidate is found close to a local X-ray centroid, the dominant elliptical closest to the global cluster X-ray centroid is adopted as the BCG candidate. For $`\sim 200`$ clusters, for which the photometric zero-point accuracy is $`\mathrm{\Delta }R<0.03`$ mag., we measure the structure parameter, $`\alpha `$, at 10 $`h^{-1}`$ kpc and the $`R`$-band absolute metric aperture magnitude, $`M_R`$, within elliptical apertures of this semi-major axis (excluding contaminating sources), for all ellipticals within X-ray peaks. ## 3 Results: BCG Structure and Environment Figure 1 compares the $`\alpha `$ distributions for the LP sample and a subset of the X-ray sample (for which positional coincidences are unambiguous). (a) The mean value of the X-ray distribution ($`\overline{\alpha }=0.71`$) is higher than that of the LP sample ($`\overline{\alpha }=0.57`$). (b) The LP data show an excess of low-$`\alpha `$ ($`<0.5`$) galaxies. Conversely, the X-ray selected sample shows an excess contribution from high-$`\alpha `$ ($`>0.6`$) galaxies. The distribution of X-ray selected BCG candidates appears shifted to high-$`\alpha `$ values. (c) Despite the larger sample size, the X-ray selected distribution occupies a narrower range of $`\alpha `$ than the LP case. (d) The X-ray sample histogram is more Gaussian than the LP sample. Comparing identifications in 49 clusters common to both samples, we find $`\sim 30\%`$ of these hosts have different BCG candidates, depending upon selection method. 
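The positional pairing described above reduces to an angular cross-match within a 15 arc-minute radius. The following is a minimal sketch; the tuple format for catalogue entries and the function names are ours, and a real match against RASS would also handle duplicate and ambiguous pairings.

```python
import numpy as np

def ang_sep_arcmin(ra1, dec1, ra2, dec2):
    """Angular separation in arc-minutes between two sky positions in degrees."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    # haversine formula: numerically stable at small separations
    h = (np.sin((dec2 - dec1) / 2) ** 2
         + np.cos(dec1) * np.cos(dec2) * np.sin((ra2 - ra1) / 2) ** 2)
    return np.degrees(2 * np.arcsin(np.sqrt(h))) * 60.0

def match_clusters(aco, xray, max_sep=15.0):
    """Pair ACO clusters with X-ray centroids lying within max_sep arc-minutes.

    aco, xray: iterables of (name, ra_deg, dec_deg); format is illustrative.
    """
    return [(a[0], x[0]) for a in aco for x in xray
            if ang_sep_arcmin(a[1], a[2], x[1], x[2]) <= max_sep]
```

A 0.1-degree offset, for example, corresponds to 6 arc-minutes and therefore passes the cut.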
In every common cluster in which LP find a low-$`\alpha `$ ($`<0.5`$) BCG, our objective technique selects a different candidate, always with higher $`\alpha `$ than the optically selected BCG. Thus, X-ray selection reduces contaminating effects evident in optical samples, preferentially selecting high-$`\alpha `$ BCG candidates and producing a more homogeneous sample. Figure 2 does not show any $`M_R`$-$`\alpha `$ correlation (cf. LP fig. 4). Appealing to a speculated correction for environment (Hudson & Ebeling 1997), we consider 152 clusters with reliably determined X-ray luminosities, $`L_{44}`$. The response of $`M_R`$ to $`L_{44}`$ (figure 3) does not indicate a correlation in any interval of $`\alpha `$. Hence, despite the homogeneity of X-ray selected samples, these candidates remain relatively poor standard candles. The scatter remains $`\sim 0.34`$ mag., dominating any dipole signal that may be present as the result of a coherent bulk streaming motion. ## 4 Discussion The $`M_R`$-$`\alpha `$ correlation in optical BCG samples (Hoessel 1980, LP) is principally constrained at low $`\alpha `$. Above $`\alpha \sim 0.5`$, the scatter about the mean relation increases and no correlation is evident. X-ray selection preferentially samples this high-$`\alpha `$ regime. Therefore, $`M_R`$-$`\alpha `$ correlations result from galaxies not coincident with X-ray centroids. Such galaxies are less likely to reflect the dynamics of the underlying cluster potential than the X-ray centroid coincident BCG(s) in the same host. The existence of $`M_R`$-$`\alpha `$ correlations may signal biases in the selection procedure via the inclusion of poor tracers of cluster peculiar velocity. Since homogeneous samples of X-ray coincident sources fail to improve BCG reliability as standard candles, perhaps their usefulness in cosmic flow studies has been overstated.
# Heuristics of the center vortex picture ## Heuristics of the center vortex picture A discussion of the deconfinement transition in Yang-Mills theory presupposes a picture of the phenomenon of confinement. Conversely, any picture of confinement should be able to accomodate the deconfinement phase transition. The work presented here is concerned specifically with the so-called center vortex picture of confinement; this picture is based on the conjectured presence of center vortices in typical Yang-Mills gauge configurations. These vortices represent closed magnetic flux lines in three space dimensions, describing closed two-dimensional world-sheets in four space-time dimensions. Space-time in the following will always be considered Euclidean. The magnetic flux represented by the vortices is furthermore quantized such that a Wilson loop linking vortex flux takes a value corresponding to a nontrivial center element of the gauge group. In the case of $`SU(2)`$ color discussed here, the only such element is $`(-1)`$. For $`N`$ colors, there are $`N-1`$ different possible vortex fluxes corresponding to the $`N-1`$ nontrivial center elements of $`SU(N)`$. Consider an ensemble of center vortex configurations in which the vortices are distributed randomly, specifically such that intersection points of vortices with a given two-dimensional plane in space-time are found at random, uncorrelated locations. In such an ensemble, confinement results in a very simple manner. Let the universe be a cube of length $`L`$, and consider a two-dimensional slice of this universe of area $`L^2`$, with a Wilson loop embedded into it, circumscribing an area $`A`$. On this plane, distribute $`N`$ vortex intersection points at random, cf. Fig. 1 (left). According to the specification above, each of these points contributes a factor $`(-1)`$ to the value of the Wilson loop if it falls within the area $`A`$ spanned by the loop; the probability for this to occur for any given point is $`A/L^2`$. 
The expectation value of the Wilson loop is readily evaluated in this simple model. The probability that $`n`$ of the $`N`$ vortex intersection points fall within the area $`A`$ is binomial, and, since the Wilson loop takes the value $`(-1)^n`$ in the presence of $`n`$ intersection points within the area $`A`$, its expectation value is $$W=\underset{n=0}{\overset{N}{\sum }}(-1)^n\left(\begin{array}{c}N\\ n\end{array}\right)\left(\frac{A}{L^2}\right)^n\left(1-\frac{A}{L^2}\right)^{N-n}=\left(1-\frac{2\rho A}{N}\right)^N\stackrel{N\to \mathrm{\infty }}{\longrightarrow }\mathrm{exp}(-2\rho A)$$ (1) where in the last step, the size of the universe $`L`$ has been sent to infinity while leaving the planar density $`\rho =N/L^2`$ of vortex intersection points constant. Thus, one obtains an area law for the Wilson loop, with the string tension $`\sigma =2\rho `$. This simple mechanism lies at the core of the center vortex picture of confinement. After having been proposed already in , evidence that the Yang-Mills dynamics actually favors the formation of magnetic flux tubes arose in the framework of the Copenhagen vacuum . Also lattice studies were initiated with the aim of studying vortices . These studies in essence defined vortices via their effect on Wilson loops, as discussed above. While this definition has the advantage of being gauge invariant, it does not allow to easily localize vortices, i.e. associate a collection of vortex world-surfaces with any given lattice gauge configuration. The absence of techniques allowing to carry out such an identification for a long time posed a considerable obstacle to the study of center vortex physics, especially the study of their global properties. These properties, however, constitute a crucial aspect for many applications, as a closer examination of the above heuristic picture shows. 
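The limit in equation (1) is easy to verify numerically: draw random configurations of intersection points (a Poisson number per configuration, each landing inside the loop with probability $`A/L^2`$) and average $`(-1)^n`$. The parameter values and sample size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def wilson_loop_mc(rho, A, L, n_config=200_000):
    """Monte Carlo <W> for uncorrelated vortex piercings of a planar loop."""
    N = rng.poisson(rho * L**2, size=n_config)   # intersection points per config
    n_in = rng.binomial(N, A / L**2)             # how many land inside the loop
    return np.mean((-1.0) ** n_in)

rho, A, L = 0.5, 1.0, 10.0
print(wilson_loop_mc(rho, A, L), np.exp(-2 * rho * A))   # both ≈ 0.368
```

The agreement with $`\mathrm{exp}(-2\rho A)`$ to within the statistical error illustrates that the area law requires nothing beyond uncorrelated piercings.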
Namely, for vortex intersection points to be distributed in a sufficiently random manner on a space-time plane to induce an area law for the Wilson loop, the vortices must form networks which percolate throughout space-time. To see this, consider the converse, namely that vortices can be separated into clusters of bounded extension. This implies that any vortex intersection point on a plane comes with a partner a finite distance (smaller than the bound on the cluster extension) away, because vortices are closed. For simplicity, assume the pairs of intersection points to occur with a fixed mutual distance $`d`$, and distribute $`N`$ pairs on a space-time plane containing a Wilson loop of area $`A`$, cf. Fig. 1 (right), where the lines between the points in the figure are merely to guide the eye in identifying pairs of points. Now, the probability that any given pair contributes a factor $`(-1)`$ to the Wilson loop is $`pPd/L^2`$, where $`P`$ denotes the perimeter of the loop, since only pairs whose midpoints lie within a strip of width $`d`$ around the Wilson loop are able to contribute a factor $`(-1)`$, and they do this with a probability $`p`$ related to the angular distribution of the pairs. Note that $`p`$ is independent of the dimensions of the Wilson loop. The probability that $`n`$ pairs contribute a factor $`(-1)`$ is again binomial, in complete analogy to above, and one consequently obtains a perimeter law for the expectation value of the Wilson loop in the limit of an infinite universe, $$W=\left(1-\frac{2pPd}{L^2}\right)^N\stackrel{N\to \mathrm{\infty }}{\longrightarrow }\mathrm{exp}(-\rho pPd)$$ (2) where $`\rho =2N/L^2`$ again denotes the density of points. Thus, in the absence of percolation, confinement disappears. This leads to the conjecture that the deconfinement phase transition in the vortex picture may take the guise of a percolation transition. 
However, as already indicated above, to test such global properties of vortices in lattice experiments, new techniques are needed which allow one to associate a vortex world-sheet configuration with any given lattice gauge configuration. These techniques have only been furnished quite recently, sparking renewed interest in the vortex picture. The present work is one contribution to these efforts. ## Locating vortices on the lattice The abovementioned techniques, introduced in , employ a two-step procedure familiar from the dual superconductor picture of confinement. First, one uses the gauge freedom to bring a given gauge configuration as close as possible to the collective degrees of freedom under consideration; in the case of the dual superconductor, that is, the Abelian degrees of freedom, in particular, the monopoles. The second step consists of projecting onto these degrees of freedom, i.e. neglecting residual deviations away from, say, Abelian configurations in the case of the dual superconductor. This second step clearly constitutes a truncation of the theory. This idea was adapted to the case of vortex degrees of freedom as follows . One fixes gauge configurations to the maximal center gauge, $$\text{max }\underset{i}{\sum }|\text{tr }U_i|^2$$ (3) where the $`U_i`$ are the link variables on a space-time lattice. This procedure biases links towards elements of the center of the gauge group. Next, one performs a truncation of the configurations, namely center projection, $$U\to \text{ sign tr }U$$ (4) i.e. one replaces each $`SU(2)`$ link variable by the center element closest to it in the group. Thus, one is left with a lattice of center elements. Such a lattice can be associated in the standard fashion with a vortex configuration. One examines all plaquettes on the lattice, and if a plaquette takes the value $`(-1)`$, a vortex is said to pierce that plaquette. Thus, vortices in the lattice formulation are defined on the dual lattice, i.e. 
the lattice shifted by the vector $`(a/2,a/2,a/2,a/2)`$ w.r.t. the original one, $`a`$ denoting the lattice spacing. One can easily convince oneself that the vortices defined in this way have all the properties postulated further above. Having isolated vortices on the lattice, the first question to answer is whether these degrees of freedom do indeed determine the physics of confinement, i.e. whether they furnish the full string tension found in exact calculations without any truncations. Without this basis, more detailed considerations of vortex properties run the risk of being academical. One carries out two lattice experiments, both times using the full Yang-Mills action as a weight, but in one experiment, one calculates the observable in question, such as the Wilson loop, using the full configurations; in the other experiment, one uses the center projected configurations. If the results agree, the observable is said to display center dominance. Center dominance for the string tension has indeed been verified in $`SU(2)`$ lattice gauge theory both at zero temperature and at finite temperatures , including the so-called “spatial string tension” all the way into the deconfined regime. Furthermore, the vortex density obeys the proper scaling law as dictated by the renormalization group for physical quantities, cf. (note erratum in ) and . ## Vortex percolation properties Given techniques allowing to locate vortex world-sheets in space-time, or vortex loops on three-dimensional slices thereof, it is possible to discriminate between different vortex clusters. In the following, three-dimensional slices of space-time, where one of the space directions is left away, will be considered, since this displays the relevant percolation properties most clearly. To define a cluster, one finds a link on the dual lattice which is part of a vortex and furthermore locates all adjacent links which are also part of the vortex. 
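The projection and plaquette steps (4) are simple to sketch. The toy example below projects Haar-random $`SU(2)`$ links on a small two-dimensional slice to $`Z(2)`$ and marks pierced plaquettes; the lattice size is illustrative, and the random, uncoupled links stand in for actual Monte Carlo configurations fixed to maximal center gauge (so gauge fixing plays no role here).

```python
import numpy as np

rng = np.random.default_rng(0)
Ls = 8                                      # side of a small 2D slice

def random_su2():
    """Haar-random SU(2) matrix from a uniform unit 4-vector (quaternion)."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[3],  a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1], a[0] - 1j * a[3]]])

# center projection U -> sign tr U for every link of the slice
links = np.empty((2, Ls, Ls))
for mu in range(2):
    for x in range(Ls):
        for y in range(Ls):
            tr = np.trace(random_su2()).real
            links[mu, x, y] = 1.0 if tr >= 0 else -1.0

def vortex_pierces(x, y):
    """Z(2) plaquette at (x, y); value -1 marks a vortex piercing it."""
    xp, yp = (x + 1) % Ls, (y + 1) % Ls
    return (links[0, x, y] * links[1, xp, y]
            * links[0, x, yp] * links[1, x, y]) < 0

n_vortex = sum(vortex_pierces(x, y) for x in range(Ls) for y in range(Ls))
```

For uncorrelated random links roughly half of all plaquettes are pierced; in a real center-projected ensemble the vortex density is much lower and scales with the lattice spacing.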
This is repeated with all new links found, until no further links exist which are connected with the cluster in question. Having detected all vortex clusters in this manner, it is possible to determine the space-time extension of each cluster, i.e. the largest distance between any pair of points on the cluster. In a percolating phase, most of the available vortex length will be organized into clusters of the maximal possible extension, whereas in a phase with no vortex percolation, most of the vortex material present in the configuration will be concentrated in clusters much smaller than the typical extension of the universe. To generate “vortex material distributions” which allow one to read off which scenario is realized, one simply measures both the extension of each cluster and the number of links contained in it, and adds the latter number to a bin corresponding to the cluster extension in question. Fig. 2 displays such distributions, obtained for $`\beta =2.4`$ on $`12^3\times N_t`$ lattices , which have been normalized such that the integral over the distributions gives unity, and where the cluster extension on the horizontal axis is in units of the maximal extension possible in the universe in question. In view of Fig. 2, one indeed obtains a transition from a confining phase, in which vortices percolate, to a deconfining phase, in which they cease to percolate. This confirms the conjecture proposed above in the introductory section. If one analyzes the small vortex clusters dominating the deconfined phase in more detail, one finds that a large fraction of these vortices winds in the (Euclidean) temporal direction, i.e. the space-time direction whose extension is identified with the inverse temperature. Therefore, one finds that the typical configurations in the two phases can be characterized as displayed in Fig. 3 in a three-dimensional slice of space-time, where one space direction has been left away. Note that Fig. 
3 also furnishes an explanation of the spatial string tension in the deconfined phase. A spatial Wilson loop embedded into Fig. 3 (right) can exhibit an area law, since intersection points of winding vortices with the minimal area spanned by the loop can occur in an uncorrelated fashion despite those vortices having small extension. Note also the dual nature of this (magnetic) picture as compared with electric flux models . In such models, electric flux percolates in the deconfined phase, while it does not percolate in the confining phase. ## Outlook While it has thus been established how vortices generate the confining and deconfining phases of Yang-Mills theory, it remains to be clarified what the essential features of the dynamics underlying their behavior are. One interesting observation in this context is that a simple model of vortices as random surfaces in four-dimensional space-time already is able to generate the vortex phenomenology described above, i.e. a percolating confining and a non-percolating deconfining phase, separated by a transition as a function of temperature. The necessary ingredients are an action per unit vortex area (i.e. a Nambu-Goto term), and an action penalty related to the curvature of the vortex surfaces. By construction, this model can be understood in terms of the entropy associated with random surfaces in a given space-time domain; it contains no further dynamics. Evaluating the partition function of such a model amounts to counting possible vortex surface configurations given a certain vortex density (enforced by the Nambu-Goto term), and given an ultraviolet cutoff on the space-time fluctuations of the surfaces (enforced by the curvature penalty). A detailed report on a lattice investigation of this model will be given in an upcoming publication. 
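The cluster search described earlier is a standard connected-component (breadth-first) traversal. A minimal sketch on a 3D slice might look as follows, with vortex links represented as unordered pairs of dual-lattice sites; the data format is illustrative, and periodic boundary conditions are ignored for brevity.

```python
from collections import deque
import numpy as np

def vortex_clusters(links):
    """Group vortex links into connected clusters.

    links: iterable of frozenset({site, site}) pairs of dual-lattice sites
    (integer tuples); two links are connected when they share a site.
    """
    by_site = {}
    for ln in links:
        for site in ln:
            by_site.setdefault(site, []).append(ln)
    unseen, clusters = set(links), []
    while unseen:
        seed = unseen.pop()
        cluster, queue = {seed}, deque([seed])
        while queue:                        # breadth-first growth of the cluster
            for site in queue.popleft():
                for nb in by_site[site]:
                    if nb in unseen:
                        unseen.remove(nb)
                        cluster.add(nb)
                        queue.append(nb)
        clusters.append(cluster)
    return clusters

def extension(cluster):
    """Largest distance between any pair of dual-lattice sites on the cluster."""
    pts = np.array([s for ln in cluster for s in ln])
    diff = pts[:, None, :] - pts[None, :, :]
    return np.sqrt((diff ** 2).sum(-1)).max()
```

Binning the link count of each cluster by its extension, as described above, then yields the vortex material distributions of Fig. 2.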
Further issues being, or recently having been, investigated include: The Pontryagin index associated with center vortex configurations , and the breaking of chiral symmetry ; the continuum meaning of the maximal center gauge ; generalizations to $`SU(3)`$ color ; and whether a random surface model for vortices can be justified in terms of a low-energy effective theory describing infrared Yang-Mills dynamics .
# RELATIVISTIC DYNAMOS IN MAGNETOSPHERES OF ROTATING COMPACT OBJECTS ## 1 INTRODUCTION Interesting high-energy phenomena have been observed in various compact astrophysical systems, such as pulsars, X-ray binaries and active galactic nuclei. Though the energy-release processes via radiation emission and jet formation have not yet been fully understood, strong magnetic fields near the central objects can be a crucial component for explaining the observational features, and many works have been devoted to the development of relativistic magnetohydrodynamical (MHD) models. In particular, Khanna & Camenzind (1994, 1996a) have recently proposed a self-excitation mechanism of axisymmetric magnetic fields, based on relativistic Ohm's law in Kerr geometry. This effect, called the self-excited gravitomagnetic dynamo, is due to the coupling between the relativistic frame-dragging of a rotating central object and the rotational motion of the surrounding plasma, and such dynamo action is expected to play an important role in the astrophysical phenomena as a trigger of relativistic plasma flows. Unfortunately, numerical calculations done by Brandenburg (1996) have shown that in a wide set of standard thin disk models around a rotating black hole no magnetic field can be maintained against ohmic dissipation. Cowling's antidynamo theorem for axisymmetric magnetic fields still holds even near a rotating black hole in the situations previously considered. Therefore, a new viewpoint will be necessary if the kinematic theory of resistive MHD presented by Khanna & Camenzind (1994, 1996a) is applied to the problem of generation of axisymmetric magnetic fields (see also Núñez 1997). In this paper we do not adhere to the accretion disk models, but we pursue analytically the processes permissible in rotating magnetospheres of compact objects. 
Though the kinematic treatment, which neglects the feedback of the magnetic field on the plasma velocity through the Lorentz force, makes the basic field equations much simpler than in the full MHD theory, some additional approximations are still required to make an analytical approach possible. We would like to restrict attention to simplified cases in which only the essential interactions between poloidal and azimuthal magnetic fields that allow the existence of growing or stationary modes are preserved. The plasma injected into the rotating magnetosphere will partially accrete onto the central object and partially escape to infinity. The poloidal velocity of plasma flows, however, can remain sub-Alfvenic in some intermediate (quasi-equilibrium) region between the outer light cylinder and the surface of the central object, where the plasma angular velocity may differ from the Keplerian one, because gravitational forces do not dominate over the other interactions (see, e.g., Camenzind 1987; Takahashi et al. 1990). The stationary axisymmetric structure of the magnetosphere is mainly determined in the framework of ideal MHD theory under the frozen-in condition. We consider the evolution of electromagnetic fields perturbed by the presence of small magnetic diffusivity. Though the motion of the plasma is assumed to be non-perturbed, a complicated dynamical evolution of electromagnetic fields due to the poloidal flows will occur. Therefore, by fixing the poloidal velocity to be zero in the quasi-equilibrium region, we study the generation of magnetic fields, which goes on slowly on a long diffusion timescale. Our purpose is to point out some basic aspects of the perturbed fields governed by the plasma rotation and the frame-dragging effect. 
We obtain the main results such that (i) if a background uniform magnetic field exists, it can sustain excited poloidal and azimuthal multipolar modes even in slow-rotation cases, and (ii) a sufficient charge separation generated by plasma rotation with a relativistic speed can cause a self-excited dynamo without any frame-dragging effect. We discuss these processes of generation of magnetic fields (which have been missed in the above-mentioned numerical models) in relation to active phenomena observed in the relativistic magnetospheres. In the following we use units such that $`c=G=1`$, and the axisymmetric stationary metric denoted by $`g_{ab}`$ has signature $`(-+++)`$. ## 2 KINEMATIC EQUATIONS FOR AXISYMMETRIC DYNAMOS A contribution of accreting plasma and electromagnetic fields around a rotating compact object to the stationary axisymmetric gravitational field remains negligibly small. Therefore, we can always study the MHD interaction under a fixed gravitational field with the line element of the form $$ds^2=g_{tt}dt^2+2g_{t\varphi }dtd\varphi +g_{\varphi \varphi }d\varphi ^2+g_{rr}dr^2+g_{\theta \theta }d\theta ^2,$$ (1) where $`r`$, $`\theta `$ and $`\varphi `$ are the spherical coordinates, and the metric $`g_{ab}`$ is assumed to be independent of the time coordinate $`t`$ and the azimuthal angle coordinate $`\varphi `$. The angular velocity of the dragging of inertial frames is denoted by $`\omega \equiv -g_{t\varphi }/g_{\varphi \varphi }`$. We define $`g`$ to be minus the determinant of $`(g_{ab})`$, and for the lapse function $`\alpha \equiv \sqrt{-g_{tt}+(g_{t\varphi }^2/g_{\varphi \varphi })}`$ we have $`\sqrt{g}=\alpha \sqrt{g_{\varphi \varphi }g_{rr}g_{\theta \theta }}`$. We do not limit the metric to the Kerr form for later discussions. 
Further, we do not use explicitly the 3+1 formalism developed by Thorne, Price, & Macdonald (1986), and we treat relativistic Ohm's law in the covariant form $$F_{ab}u^b=4\pi \eta (j_a-Qu_a),$$ (2) where $`\eta `$ is the magnetic diffusivity. According to the kinematic MHD theory the plasma 4-velocity $`u^a(r,\theta )`$ is also fixed, and $`Q\equiv -j^au_a`$ is the electric charge density measured by an observer comoving with the plasma. The electric current density $`j^a`$ should be rewritten in terms of the electromagnetic field $`F_{ab}`$ via the Maxwell equations $`4\pi j^a=_bF^{ab}`$, where $`_b`$ denotes the covariant derivative with respect to the metric $`g_{ab}`$. Because we consider time-dependent fields under the assumption of axisymmetry, the poloidal magnetic components $`F_{r\varphi }`$ and $`F_{\theta \varphi }`$ and the azimuthal electric component $`F_{t\varphi }`$ are given by the single scalar potential $`\mathrm{\Psi }(t,r,\theta )`$ via the equation $$F_{a\varphi }=_a\mathrm{\Psi }.$$ (3) Then the four field variables $`\mathrm{\Psi }`$, $`F^{rt}`$, $`F^{\theta t}`$ and $`F^{r\theta }`$ remain to be solved, and the equation added to relativistic Ohm's law is the azimuthal part of the Faraday law $$_tF_{r\theta }+_rF_{\theta t}+_\theta F_{tr}=0.$$ (4) These field equations for the kinematic evolution are still too complicated for an analytical discussion of the behaviors of solutions. Hence, the following investigation is limited to the models with no poloidal flow, i.e., $$u^r=u^\theta =0,$$ (5) which will be justified if plasma is located in a quasi-equilibrium region slightly distant from the surface of the central object, and a dynamical evolution of electromagnetic fields caused by the poloidal plasma flows with sub-Alfvenic velocities is not essential to the problem of dynamo action. Further, we assume $`\eta `$ to be a very small constant, and the time variation of fields is described by a long diffusion timescale such that $`t\sim r^2/\eta `$. 
Then it is convenient to introduce the variable $`T\equiv \eta t`$ instead of $`t`$. As a result of this assumption concerning the order of $`\eta `$ the field equations can have consistent solutions, if the ratio of the amplitudes is understood to be $$\eta F_{r\theta }/\mathrm{\Psi }=O(1).$$ (6) Now let us give the field equations which are simplified according to the above-mentioned approximations. By virtue of equation (5) the poloidal part of equation (2) leads to $$F^{At}u_t+F^{A\varphi }u_\varphi =\eta j^A,$$ (7) where the poloidal current density is approximately given by $$\eta j^A=\frac{1}{\sqrt{g}}ϵ^{AB}_BF,ϵ^{r\theta }=-ϵ^{\theta r}=1,$$ (8) in which the displacement current is neglected. (Hereafter the superscripts or subscripts $`A`$ and $`B`$ denote the coordinates $`r`$ and $`\theta `$.) Because we have $`F_{A\varphi }=_A\mathrm{\Psi }`$, these relations are used to express $`F^{At}`$ and $`F^{A\varphi }`$ by $`\mathrm{\Psi }`$ and $`F\equiv \eta \sqrt{g}F^{r\theta }`$, for example, in the approximated form of the proper charge density $`Q`$ given by $$4\pi Q=-\frac{u_a}{\sqrt{g}}_b(\sqrt{g}F^{ab})\approx F^{tA}_Au_t+F^{\varphi A}_Au_\varphi ,$$ (9) which should be derived from equation (2) by using the current conservation $`_aj^a=0`$ and the inequality $`|Q|\gg |\eta _a(Qu^a)|`$. Note that for the stationary and axisymmetric velocity field $`u_a`$ we have $`_tu_a=_\varphi u_a=0`$, while the rotational motion can generate the non-vanishing components $`_Au_t`$ and $`_Au_\varphi `$ for $`A=r,\theta `$. Then, except in the case that both $`u_t`$ and $`u_\varphi `$ are constant, $`_bu_a`$ becomes non-symmetric under the permutation of $`a`$ and $`b`$, which assures the validity of equation (9) for the estimation of charge separation. 
From the Maxwell equations we also obtain approximately the azimuthal current density $`j_\varphi `$, in order to substitute it into the azimuthal part of equation (2) of the form $$u^t_T\mathrm{\Psi }=-4\pi (j_\varphi -Qu_\varphi ),$$ (10) with equation (9) for the proper charge density $`Q`$. Then, we arrive at the final result of the evolution equation for $`\mathrm{\Psi }`$ $$_T\mathrm{\Psi }=S_1+S_2+S_3,$$ (11) where $$S_1=\frac{g_{\varphi \varphi }}{u^t\sqrt{g}}_A(\frac{\sqrt{g}}{g_{\varphi \varphi }}^A\mathrm{\Psi }),$$ (12) $$S_2=\frac{u_\varphi }{(\alpha u^t)^2g_{\varphi \varphi }}^A\mathrm{\Psi }(g_{\varphi \varphi }_A\omega +u_t_Au_\varphi -u_\varphi _Au_t),$$ (13) $$S_3=\frac{1}{(\alpha u^t)^2\sqrt{g}}ϵ^{AB}_AF\{g_{\varphi \varphi }_B\omega -u_\varphi (_Bu_t+\omega _Bu_\varphi )\}.$$ (14) The final term $`S_3`$ can contribute to the excitation of $`\mathrm{\Psi }`$ through the coupling to $`F`$. The ohmic diffusion is mainly due to the first term $`S_1`$, and the role of $`S_2`$ (i.e., self-generation or self-destruction) will depend on the topology of $`\mathrm{\Psi }`$. The Faraday law (4) is the evolution equation for $`F`$, which approximately reduces to the form $$_TF=\frac{\alpha ^2g_{\varphi \varphi }}{\sqrt{g}}\{_A(\frac{\sqrt{g}}{\alpha ^2u^tg_{\varphi \varphi }}^AF)-ϵ^{AB}_A\mathrm{\Psi }_B\mathrm{\Omega }\},$$ (15) where $`\mathrm{\Omega }\equiv u^\varphi /u^t`$ is the plasma angular velocity. Note that the right-hand side of equation (15) is also decomposed into terms describing diffusion and amplification of $`F`$. In the following sections we will study the coupled equations (11) and (15) for $`\mathrm{\Psi }`$ and $`F`$ to see the efficiency of excitation mechanisms. 
## 3 THE FRAME-DRAGGING EFFECT In the case of no poloidal velocity $`u^A=0`$ we can give the specific energy and angular momentum of the plasma, denoted by $`u_t`$ and $`u_\varphi `$, as follows, $$u_t=-\gamma \{\alpha ^2+g_{\varphi \varphi }\omega (\mathrm{\Omega }-\omega )\},u_\varphi =\gamma g_{\varphi \varphi }(\mathrm{\Omega }-\omega )$$ (16) where $`\gamma \equiv u^t=1/\sqrt{\alpha ^2-g_{\varphi \varphi }(\mathrm{\Omega }-\omega )^2}`$ is the Lorentz factor of the rotating plasma. If the plasma is co-rotating with the angular velocity of the background magnetosphere, the Lorentz factor $`\gamma `$ becomes very large in a region close to the light cylinder surface, and the term originating from the convection current density $`Qu_\varphi `$ will dominate in $`S_3`$. In this section we would like to restrict attention to the frame-dragging effect, by neglecting any contribution of such charge separation under the condition $`\mathrm{\Omega }=\omega `$. ### 3.1 The $`\omega `$-$`\mathrm{\Omega }`$ Dynamo Recall that in the numerical models of $`u_\varphi =0`$ calculated by Brandenburg (1996) the dynamo action can work only if $`\omega `$ is taken to be artificially large. To see roughly this result from equations (11) and (15) with $`u_\varphi =0`$, let us consider simplified forms of the metric components as functions of the coordinate $`\theta `$ such that $`g_{\varphi \varphi }`$ and $`g`$ are proportional to $`\mathrm{sin}^2\theta `$, and $`\omega `$, $`g_{rr}`$ and $`g_{\theta \theta }`$ are independent of $`\theta `$. (Now the Lorentz factor $`u^t=1/\alpha `$ depends only on $`r`$. This simplified metric may be regarded as the Kerr metric in the slow-rotation approximation.) 
Then, by setting the time behavior of $`\mathrm{\Psi }`$ and $`F`$ to be $`\mathrm{exp}(\mu T)`$, we obtain $$\frac{g_{\theta \theta }}{1-x^2}(\frac{\mu }{\alpha }-L_1)\mathrm{\Psi }=\partial _x^2\mathrm{\Psi }-\frac{\sigma }{\alpha }\partial _xF,$$ (17) and $$\frac{g_{\theta \theta }}{1-x^2}(\frac{\mu }{\alpha }-L_2)F=\partial _x^2F+\alpha \sigma \partial _x\mathrm{\Psi },$$ (18) where $`x\equiv \mathrm{cos}\theta `$. The efficiency of dynamo action will be determined by the behavior of $`\sigma (r)`$ dependent on the metric components as follows, $$\sigma =-\frac{g_{\varphi \varphi }g_{\theta \theta }}{\mathrm{sin}\theta \sqrt{g}}\frac{d\omega }{dr}.$$ (19) Further, $`L_1\mathrm{\Psi }`$ and $`L_2F`$ in the left-hand sides of equations (17) and (18) represent the ohmic diffusion of $`\mathrm{\Psi }`$ and $`F`$ in the radial direction, which are given by $$L_1\mathrm{\Psi }=\frac{g_{\varphi \varphi }}{\sqrt{g}}\partial _r(\frac{\sqrt{g}}{g_{\varphi \varphi }g_{rr}}\partial _r\mathrm{\Psi }),$$ (20) and $$L_2F=\frac{\alpha g_{\varphi \varphi }}{\sqrt{g}}\partial _r(\frac{\sqrt{g}}{\alpha g_{\varphi \varphi }g_{rr}}\partial _rF).$$ (21) We can decompose $`\mathrm{\Psi }`$ and $`F`$ into modes symmetric or antisymmetric with respect to the equatorial plane (Núñez 1996), and in this paper we would like to consider only the configuration of magnetic fields with the symmetry such that $`\mathrm{\Psi }(-x)=\mathrm{\Psi }(x)`$ and $`F(-x)=-F(x)`$. (This corresponds to dipole-type topology of $`\mathrm{\Psi }`$. Quadrupole-type fields may be more easily excited near the equatorial plane. Here we do not pursue this possibility.) For physical modes the functions $`\mathrm{\Psi }`$ and $`F`$ should satisfy the boundary condition that both $`\mathrm{\Psi }/(1-x^2)`$ and $`F/(1-x^2)`$ remain finite on the polar axis ($`x^2=1`$). 
Then, if the diffusion terms $`L_1\mathrm{\Psi }`$ and $`L_2F`$ are neglected, it is easy to obtain $$\mathrm{\Psi }=\mathrm{\Psi }_0\{1-(-1)^n\mathrm{cos}(\sigma x)\},F=(-1)^n\alpha \mathrm{\Psi }_0\mathrm{sin}(\sigma x)$$ (22) as a stationary eigenmode with $`\mu =0`$. By virtue of the boundary condition for $`\mathrm{\Psi }`$ and $`F`$ the eigenvalue of $`\sigma `$ is given by $`\sigma =n\pi `$, where $`n=1,2,\dots `$. Of course, if one takes account of a diffusion effect due to the terms $`L_1\mathrm{\Psi }`$ and $`L_2F`$, the minimum value of $`\sigma `$ should become larger than $`\pi `$. However, for the Kerr metric on the equatorial plane, we can estimate the value of $`\sigma `$ to be $$\sigma =\frac{2Ma(3r^2+a^2)}{r^4+a^2(r^2+2Mr)}\le 2,$$ (23) where $`M`$ and $`a`$ are the mass and rotation parameters, respectively. Therefore, one can expect no self-excitation of fields to occur near the Kerr black hole. In this sense the rotation of the black hole turns out to be too slow to excite the dynamo action. ### 3.2 Induced Excitation Now let us propose an alternative process which can work even in the slow-rotation case and may be called induced excitation instead of self-excitation. The key assumption is the existence of a background poloidal field denoted by $`\mathrm{\Psi }_B`$ as a stationary solution of the vacuum Maxwell equations $`\nabla _bF^{ab}=0`$. (The typical example is given by Wald’s (1974) solution for the Kerr hole immersed in a uniform magnetic field with the form $`\mathrm{\Psi }_B=B_0\{ag_{t\varphi }+(g_{\varphi \varphi }/2)\}`$.) If plasma is injected into the magnetosphere, the structure should be deformed by the motion of plasma. However, the original vacuum field in the background magnetosphere can remain dissipation-free and play the role of a stationary source field in equations (17) and (18). Then, we will be able to find a stationary solution written as $`\mathrm{\Psi }=\mathrm{\Psi }_B+\mathrm{\Psi }_L`$ and $`F=F_L`$. 
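The estimate (23) is easy to probe numerically. The short script below (our illustration, in geometrized units with M = 1; the choice to scan outward from the extremal horizon r = M is ours) confirms that the expression never exceeds 2, far below the threshold σ = π of the lowest stationary eigenmode:

```python
import math
import numpy as np

def sigma_kerr(r, M=1.0, a=1.0):
    # Equation (23): estimate of sigma on the equatorial plane of the Kerr metric
    return 2 * M * a * (3 * r**2 + a**2) / (r**4 + a**2 * (r**2 + 2 * M * r))

# Scan outward from the extremal horizon r = M
r = np.linspace(1.0, 50.0, 200001)
sigma = sigma_kerr(r)
print(sigma.max() <= 2.0, sigma.max() < math.pi)  # -> True True
```

The maximum value 2 is reached only at the horizon of an extremal hole, so even the fastest allowed black-hole rotation falls short of the self-excitation condition.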
These perturbed parts $`\mathrm{\Psi }_L`$ and $`F_L`$ represent localized fields with amplitudes decreasing for large $`r`$, and the dynamical balance between the ohmic dissipation and the excitation via the frame-dragging effect for $`\mathrm{\Psi }_L`$ and $`F_L`$ is induced by the background field $`\mathrm{\Psi }_B`$. (This process has also been mentioned in Khanna & Camenzind (1996b) and Khanna (1997), and numerical examples have been presented by Khanna (1998c).) To verify this induced excitation as a viable process, we write the stationary fields $`\mathrm{\Psi }`$ and $`F`$ satisfying equations (17) and (18) in the expansion forms $$\mathrm{\Psi }=(1-x^2)\underset{n=0}{\overset{\mathrm{\infty }}{\sum }}q_{2n}(r)\frac{dP_{2n+1}(x)}{dx},$$ (24) and $$F=(1-x^2)\underset{n=0}{\overset{\mathrm{\infty }}{\sum }}q_{2n+1}(r)\frac{dP_{2n+2}(x)}{dx},$$ (25) according to the boundary condition on the polar axis and the symmetry with respect to the equatorial plane. With the help of the recurrence relations for the Legendre polynomials $`P_n`$ we have the equations for the coefficients $`q_n`$ as follows, $$g_{\theta \theta }(L_1q_{2n})-(2n+1)(2n+2)q_{2n}=\frac{\sigma }{\alpha }(c_{2n+1}q_{2n+1}-c_{2n-1}q_{2n-1}),$$ (26) and $$g_{\theta \theta }(L_2q_{2n+1})-(2n+2)(2n+3)q_{2n+1}=-\alpha \sigma (c_{2n+2}q_{2n+2}-c_{2n}q_{2n}),$$ (27) where $`c_n=(n+1)(n+2)/(2n+3)`$. It is clear that the higher multipolar modes are generated from the lower ones through the action of the dragging of inertial frames denoted by $`\sigma `$. Our main purpose is to point out a remarkable difference in efficiency between the self-excitation and the induced one. Hence, for further analytical study, we consider a distant region where the metric components are approximately given by $$\alpha =1,g_{\varphi \varphi }=r^2\mathrm{sin}^2\theta ,g_{rr}=g_{\theta \theta }/r^2=1,$$ (28) keeping the dragging of inertial frames written as $$\omega =2J/r^3$$ (29) for the angular momentum $`J`$ of a central object. 
Equation (29) leads to $$\sigma =6J/r^2\ll 1.$$ (30) In order to assure $`\sigma =-r^2d\omega /dr`$ to remain very small even for a small $`r`$ in the following calculation, one may assume the behavior $$\sigma =(r/r_c)^2\sigma _c,\sigma _c=6J/r_c^2$$ (31) in the inner region $`r<r_c`$, where $`r_c`$ will be of the order of the radius of a central object. In the slow-rotation limit the recurrence relations (26) and (27) for $`n\ge 1`$ reduce to the approximate form $$\frac{d^2q_{n+1}}{dr^2}-\frac{(n+2)(n+3)}{r^2}q_{n+1}=(-1)^n\frac{\sigma c_n}{r^2}q_n,$$ (32) because we can neglect $`q_{n+2}`$ in comparison with $`q_n`$ in the right-hand sides. (Consistently with equation (32), the ratio $`q_{n+1}/q_n`$ is assumed to be of the order of $`\sigma `$, and the convergence of the expansion forms (24) and (25) is assured.) Now, if $`q_n`$ is known, it is easy to obtain the higher multipolar mode $`q_{n+1}`$. Note that in the flat spacetime the azimuthal component of magnetic field measured in an orthonormal frame is given by $`B_T=F/\eta r\mathrm{sin}\theta `$. Then, $`q_{2n+1}/r`$ should vanish in the limit $`r\to \mathrm{\infty }`$ and be regular in the limit $`r\to 0`$. The function $`q_{2n}`$ for any localized poloidal flux should satisfy the same boundary conditions. For $`\sigma =0`$ we have the two independent solutions for each $`q_{n+1}`$ as follows, $$q_{n+1}=r^{-(n+2)},q_{n+1}=r^{n+3},$$ (33) violating either the outer boundary condition or the inner one. If there exists a localized field corresponding to a lower multipolar mode $`q_n`$, however, the frame-dragging effect giving $`\sigma \ne 0`$ can generate the higher one $$q_{n+1}=r^{-(n+2)}\int _0^rb_n(\rho )\rho ^{2n+4}𝑑\rho ,$$ (34) where $$b_n(r)=\int _r^{\mathrm{\infty }}(-1)^{n+1}\frac{\sigma c_n}{\rho ^{n+4}}q_n(\rho )𝑑\rho .$$ (35) Therefore, the key problem is the generation of the lowest mode $`q_0`$, for which we obtain the equation $$\frac{d^2q_0}{dr^2}-\frac{2}{r^2}q_0=\frac{6\sigma }{5r^2}q_1,$$ (36) because $`c_n=0`$ for $`n=-1`$. 
Note that the remaining source field for $`q_0`$ is only the lowest mode $`q_1`$ of the azimuthal magnetic field, satisfying the equation $$\frac{d^2q_1}{dr^2}-\frac{6}{r^2}q_1=\frac{2\sigma }{3r^2}q_0.$$ (37) Both modes should be self-consistently generated according to these coupled equations. As was previously mentioned, a localized solution as a result of a self-excited dynamo will be prohibited for a small $`\sigma `$. For example, we can check the suppression of the action even in the extreme case of assuming a sharp gradient in the angular velocity $`\omega `$ at $`r=r_c`$: The value of $`\omega `$ is given by a nonzero constant $`\mathrm{\Delta }\omega `$ in the inner region $`r<r_c`$, while it becomes zero in the outer region $`r>r_c`$. In this case, though the localized modes $`q_0`$ and $`q_1`$ can be continuous with values denoted by $`q_{c0}`$ and $`q_{c1}`$, the gradients $`dq_0/dr`$ and $`dq_1/dr`$ must have discontinuous gaps estimated to be $`-3q_{c0}/r_c`$ and $`-5q_{c1}/r_c`$, respectively. (We have $`q_n\propto r^{n+2}`$ for $`r<r_c`$, while $`q_n\propto r^{-(n+1)}`$ for $`r>r_c`$.) Then, integrating equations (36) and (37) over the narrow region with the sharp gradients, we obtain the eigenvalue of the discontinuous gap given by $`r_c\mathrm{\Delta }\omega =5\sqrt{5}/2`$. Núñez (1997) has also treated a case with a sharp gradient in $`\mathrm{\Omega }`$ by using equation (29) for $`\omega `$, and he has claimed the existence of growing modes for a smaller discontinuous gap $`r_c\mathrm{\Delta }\mathrm{\Omega }\sim 1`$. This difference will be mainly due to the estimation of the diffusion term given by $`\partial ^2\mathrm{\Psi }/\partial r^2`$, which was rewritten into the form $`ϵ^2\partial ^2\mathrm{\Psi }/\partial x^2`$ under the scale transformation $`r=r_c-(ϵ/2)+ϵx`$. It seems to be unacceptable that the amplitude of the diffusion term is suppressed by the small factor $`ϵ^2`$. 
In fact, the straightforward application of the scale transformation will lead to $`\partial ^2\mathrm{\Psi }/\partial r^2=ϵ^{-2}\partial ^2\mathrm{\Psi }/\partial x^2`$. In our calculation the diffusion terms $`d^2q_0/dr^2`$ and $`d^2q_1/dr^2`$ are estimated to be very large by virtue of the steep change of the gradients $`dq_0/dr`$ and $`dq_1/dr`$. Then, the ohmic dissipation due to the diffusion terms dominates the evolution, unless the discontinuous gap $`\mathrm{\Delta }\omega `$ itself becomes unphysically large. Therefore, let us consider the induced excitation for the physically plausible behavior of $`\sigma `$ given by equations (30) and (31). In the case $`\sigma =0`$, the two independent solutions for $`q_0`$ have the forms $`r^2`$ and $`r^{-1}`$ corresponding to a uniform magnetic field and a dipole one, which will be relevant to the magnetospheres of black holes and neutron stars. Of course, these vacuum fields should be regarded as background fields in the magnetospheres, whose origin cannot be attributed to the dynamo action discussed here. Our problem is rather to study the field generation against the ohmic dissipation in the background magnetospheres. We limit the analysis to the case of the uniform background field with the strength $`B_0`$. Then, for $`\sigma \ne 0`$ due to the frame-dragging effect the solution $`q_0`$ is modified into the form $$q_0=\frac{B_0r^2}{2}+\sigma _c^2p_0.$$ (38) To obtain $`q_1`$ in the first order of $`\sigma _c`$, it is sufficient to substitute the uniform background field into equation (37), and the lowest mode of the localized azimuthal field is found to be $$q_1=B_0J(\frac{2r_c^2}{15r^2}-\frac{1}{3})$$ (39) for $`r>r_c`$, and $$q_1=B_0J(-\frac{8r^3}{15r_c^3}+\frac{r^4}{3r_c^4})$$ (40) for $`r<r_c`$. 
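The matching of the two branches of q₁ can be verified symbolically. In the sketch below (our check, with the interior minus signs of (39) and (40) written out explicitly) the branches agree in value and in slope at r = r_c, and each solves equation (37) with the uniform background q₀ = B₀r²/2 and the σ profile of equations (30)-(31):

```python
from sympy import symbols, Rational, diff, simplify

r, r_c, B0, J = symbols('r r_c B_0 J', positive=True)

# Equations (39)-(40), outer and inner branches of q_1
q1_out = B0 * J * (2 * r_c**2 / (15 * r**2) - Rational(1, 3))       # r > r_c
q1_in = B0 * J * (-8 * r**3 / (15 * r_c**3) + r**4 / (3 * r_c**4))  # r < r_c

# The two branches join continuously and smoothly at r = r_c
assert simplify(q1_out.subs(r, r_c) - q1_in.subs(r, r_c)) == 0
assert simplify(diff(q1_out, r).subs(r, r_c) - diff(q1_in, r).subs(r, r_c)) == 0

# Each branch solves equation (37) with q_0 = B_0 r^2 / 2 and the
# corresponding sigma profile from equations (30)-(31)
sigma_out = 6 * J / r**2
sigma_in = 6 * J * r**2 / r_c**4
for q1, sg in ((q1_out, sigma_out), (q1_in, sigma_in)):
    residual = diff(q1, r, 2) - 6 * q1 / r**2 - 2 * sg / (3 * r**2) * (B0 * r**2 / 2)
    assert simplify(residual) == 0
print("q_1 branches match at r_c and satisfy (37)")
```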
This azimuthal part can work as a source of the localized poloidal field $`p_0`$ in (36), and we obtain $$p_0=\frac{B_0r_c^3}{675r}(\frac{r_c^3}{r^3}-\frac{45r_c}{4r}+\frac{104}{7})$$ (41) for $`r>r_c`$, and $$p_0=\frac{B_0r^2}{675}(-\frac{4r^3}{r_c^3}+\frac{45r^4}{28r_c^4}+7)$$ (42) for $`r<r_c`$. The generated poloidal part $`p_0(r)`$ has a maximum point in the outer region $`r>r_c`$, and the poloidal field lines along which $`(1-x^2)p_0(r)`$ is constant show a loop structure on the poloidal plane, which is maintained against the ohmic dissipation. In the slow-rotation limit such a modification of the background uniform field due to the generated poloidal field remains small. Nevertheless, we can expect that a remarkable structure of poloidal field lines as a result of the induced excitation appears in the magnetosphere, if the result presented here is extended to a fast-rotation case of the Kerr geometry. The role of the generated azimuthal field $`B_T=3\mathrm{sin}\theta \mathrm{cos}\theta q_1(r)/\eta r`$ with the strength of the order of $`B_0J/\eta r`$ may be astrophysically more important even in the slow-rotation limit. In the presence of azimuthal magnetic fields, outflows of plasma from the central region may be efficiently driven by the Lorentz force. Further, a Poynting flux is excited if azimuthal magnetic fields exist in the magnetosphere, and in our approximation the Poynting flux vector $`P^A`$ ($`A=r,\theta `$) is given by $$P^A=\frac{1}{4\pi }F_{Bt}F^{AB}.$$ (43) In the asymptotic region $`r\gg r_c`$ the components $`P^r`$ and $`P^\theta `$ are easily estimated to be $$P^r=\frac{B_0^2J^2}{2\pi \eta r^3}\mathrm{sin}^2\theta \mathrm{cos}^2\theta ,$$ (44) and $$P^\theta =-\frac{B_0^2J^2}{4\pi \eta r^4}\mathrm{sin}\theta \mathrm{cos}\theta (1+\mathrm{cos}^2\theta ).$$ (45) Interestingly, the Poynting flux propagates outward from the central region and converges toward the rotation axis along the curves given by $`(1+\mathrm{cos}^2\theta )/r=`$ const. 
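A parallel check applies to p₀. The fragment below (again our verification, with the sign conventions used above) confirms the smooth matching of (41) and (42) at r = r_c, and the statement that p₀ peaks outside r_c, which is what closes the field lines into a poloidal loop:

```python
from sympy import symbols, Rational, diff, simplify

r, r_c, B0 = symbols('r r_c B_0', positive=True)

# Equations (41)-(42), outer and inner branches of p_0
p0_out = B0 * r_c**3 / (675 * r) * (r_c**3 / r**3 - 45 * r_c / (4 * r) + Rational(104, 7))
p0_in = B0 * r**2 / 675 * (-4 * r**3 / r_c**3 + 45 * r**4 / (28 * r_c**4) + 7)

# Continuity of p_0 and of its gradient at the matching radius
assert simplify(p0_out.subs(r, r_c) - p0_in.subs(r, r_c)) == 0
assert simplify(diff(p0_out, r).subs(r, r_c) - diff(p0_in, r).subs(r, r_c)) == 0

# The outer slope is positive at r_c and negative farther out, so p_0
# attains its maximum at some r > r_c (units r_c = B_0 = 1 for the evaluation)
slope = diff(p0_out, r).subs([(r_c, 1), (B0, 1)])
assert slope.subs(r, 1) > 0 and slope.subs(r, 3) < 0
print("p_0 matches smoothly at r_c and peaks outside it")
```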
Though the generation of azimuthal magnetic fields is also possible through ideal and non-relativistic MHD processes, the dynamical balance between the ohmic dissipation and the induced excitation due to the frame-dragging effect can be an origin of the magnetospheric structure responsible for producing highly energetic phenomena in the polar region. ## 4 THE EFFECT OF CHARGE SEPARATION In the case of vanishing $`u_\varphi `$ no self-excitation of magnetic fields was found, even if an artificial sharp gradient of the frame-dragging angular velocity $`\omega `$ was assumed. This is mainly because a sufficiently large $`\omega `$ is not allowed for any gravity around a central object rotating with a limited angular momentum. We also obtain the upper limit of the angular velocity $`\mathrm{\Omega }`$ of plasma by the requirement $`(\mathrm{\Omega }-\omega )^2<\alpha ^2/g_{\varphi \varphi }`$ (see eq. (16)). Therefore, even in the cases $`\mathrm{\Omega }\ne \omega `$, the $`\omega `$-$`\mathrm{\Omega }`$ coupling will remain inefficient for self-excited dynamo, at least within the analytical framework developed here. (As was previously mentioned, the existence of growing modes claimed by Núñez (1997) may be a possible way to a self-excited dynamo, if his estimation of the diffusion effect can be justified.) However, the situation may crucially change in the presence of charge separation given by equation (9). For the plasma co-rotating with the angular velocity of the background magnetosphere, the value of $`u_\varphi `$ can become large without limit near the light cylinder surface, and a large gradient of $`u_\varphi `$ (i.e., a large proper charge density $`Q`$) will also be allowed to occur there. Then, the azimuthal convection current $`Qu_\varphi `$ which appears in equation (10) can play a key role for self-excitation of poloidal flux instead of the frame-dragging effect. 
To study the $`Q`$-$`\mathrm{\Omega }`$ coupling as a mechanism of self-excited dynamo, let us restrict the following discussion to the case of no gravity and assume the plasma angular velocity to behave as $`\mathrm{\Omega }=\mathrm{\Omega }(R)`$. (Hereafter, we use the cylindrical coordinates $`R`$, $`Z`$ and $`\varphi `$.) Now the modes $`\mathrm{\Psi }`$ and $`F`$ satisfying equations (11) and (15) can have the forms $$\mathrm{\Psi }=\psi (R)\mathrm{cos}(kZ)e^{\mu T},F=f(R)\mathrm{sin}(kZ)e^{\mu T},$$ (46) according to the assumed symmetry with respect to the equatorial plane. Using the Lorentz factor given by $`\gamma =1/\sqrt{1-(R\mathrm{\Omega })^2}`$, the special relativistic versions of equations (11) and (15) reduce to $$L\psi -R\mathrm{\Omega }^2\gamma ^2\frac{d\psi }{dR}=kR\mathrm{\Omega }\frac{d\gamma }{dR}f,$$ (47) and $$Lf=kR\frac{d\mathrm{\Omega }}{dR}\gamma \psi ,$$ (48) where $`L`$ is the differential operator defined by $$L=\gamma R\frac{d}{dR}(\frac{1}{\gamma R}\frac{d}{dR})-(k^2+\gamma \mu ).$$ (49) The plasma angular velocity $`\mathrm{\Omega }`$ may be nearly constant in an inner region (i.e., $`\mathrm{\Omega }\simeq \mathrm{\Omega }_i`$, which is equal to the angular velocity of the background stationary magnetosphere with a rigid rotation), while it should decrease as $`R\to 1/\mathrm{\Omega }_i`$. For mathematical simplicity, we represent such a behavior as a sharp gradient in $`\mathrm{\Omega }`$ at $`R=R_c<1/\mathrm{\Omega }_i`$. (Núñez (1997) has also studied this case in terms of the $`\omega `$-$`\mathrm{\Omega }`$ coupling without considering charge separation.) By virtue of the discontinuous gap $`\mathrm{\Delta }\mathrm{\Omega }`$ the gradients $`d\psi /dR`$ and $`df/dR`$ should also have the discontinuous gaps $`\mathrm{\Delta }(d\psi /dR)`$ and $`\mathrm{\Delta }(df/dR)`$ at $`R=R_c`$. 
Then, using the equality $$\frac{R\mathrm{\Omega }}{\gamma }d\gamma =-d(R\mathrm{\Omega }+\frac{1}{2}\mathrm{ln}\frac{1-R\mathrm{\Omega }}{1+R\mathrm{\Omega }}),$$ (50) the integrations of equations (47) and (48) over the narrow region with the sharp gradient in $`\mathrm{\Omega }`$ lead to $$\mathrm{\Delta }(\frac{1}{\gamma }\frac{d\psi }{dR})=kf_c\mathrm{\Delta }(R\mathrm{\Omega }+\frac{1}{2}\mathrm{ln}\frac{1-R\mathrm{\Omega }}{1+R\mathrm{\Omega }}),$$ (51) and $$\mathrm{\Delta }(\frac{1}{\gamma }\frac{df}{dR})=k\psi _c\mathrm{\Delta }(R\mathrm{\Omega }),$$ (52) where $`\psi =\psi _c`$ and $`f=f_c`$ at $`R=R_c`$. For further analysis we consider the discontinuous change from $`\mathrm{\Omega }=\mathrm{\Omega }_i`$ (i.e., $`\gamma =1/\sqrt{1-R_c^2\mathrm{\Omega }_i^2}\equiv \gamma _i`$) to $`\mathrm{\Omega }=0`$ (i.e., $`\gamma =1`$). Then, for the modes with $`k\gg 1/R_c`$, the amplitudes of $`\psi `$ and $`f`$ should decrease with the forms $`\mathrm{exp}(-k_o(R-R_c))`$ at $`R>R_c`$ and $`\mathrm{exp}(k_i(R-R_c))`$ at $`R<R_c`$, where $$k_o=\sqrt{k^2+\mu },$$ (53) and $$k_i=\sqrt{k^2+\gamma _i\mu }.$$ (54) From these behaviors of $`\psi `$ and $`f`$ we can easily estimate the discontinuous gaps $`\mathrm{\Delta }(d\psi /dR)`$ and $`\mathrm{\Delta }(df/dR)`$ at $`R=R_c`$, and the growing rate $`\mu `$ is found to be $$\frac{\mu }{k^2}=\frac{a(\gamma _i)}{b(\gamma _i)},$$ (55) if the eigenvalue problem of equations (51) and (52) is solved, where $$a=\sqrt{\frac{\gamma _i-1}{\gamma _i+1}}\mathrm{ln}(\gamma _i+\sqrt{\gamma _i^2-1})-2,$$ (56) and $$b=1+\frac{2}{\gamma _i+1}(\gamma _i+\frac{(\gamma _i-1)^{3/2}}{\sqrt{\gamma _i+1}\mathrm{ln}(\gamma _i+\sqrt{\gamma _i^2-1})-2\sqrt{\gamma _i-1}})^{1/2}.$$ (57) The stationary mode with $`\mu =0`$ corresponds to the case $`\gamma _i=\gamma _m`$ satisfying $`a(\gamma _m)=0`$, and the numerical value is estimated to be $`\gamma _m\simeq 5.55`$. 
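The quoted root of a(γ_m) = 0 is easy to reproduce with a few lines of bisection (our sketch; the bracketing interval [2, 10] is an arbitrary choice):

```python
import math

def a_coeff(g):
    # Equation (56): a(gamma_i)
    return math.sqrt((g - 1) / (g + 1)) * math.log(g + math.sqrt(g * g - 1)) - 2

# a is negative at gamma = 2 and positive at gamma = 10; bisect for the root
lo, hi = 2.0, 10.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if a_coeff(mid) < 0 else (lo, mid)
gamma_m = 0.5 * (lo + hi)
print(round(gamma_m, 2))  # -> 5.55
```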
If plasma rotates with a relativistic velocity giving $`\gamma >\gamma _m`$ in a region around a central object, a sufficient decrease of $`\mathrm{\Omega }`$ within a range $`R_c(1-\delta )<R<R_c(1+\delta )`$ ($`\delta \ll 1`$) can excite growing magnetic fields with the scale of $`1/k`$ much smaller than the distance $`R_c`$. An interesting possibility is that this self-excitation of magnetic fields produces relativistic outflows of plasma across the light cylinder as a result of the back-reaction, though this is a process beyond the scope of the kinematic theory considered in this paper. The spatial variation $`\mathrm{\Delta }\mathrm{\Omega }`$ of the angular velocity may also become smaller as the back-reaction works. Then, the self-excited dynamo will stop, and the acceleration of outflows will occur only through a stationary MHD process. The very active phase of self-excitation of magnetic fields and violent plasma acceleration (which will be responsible for flare-like events of radiation flux) can resume only when a region with a highly relativistic angular velocity appears again. ## 5 CONCLUSIONS In a very simplified model we have succeeded in showing the self-excited dynamo through the coupling between $`\mathrm{\Omega }`$ and $`Q`$ without any frame-dragging effect: Poloidal magnetic field generates azimuthal magnetic field with the help of differential rotation of plasma, and the azimuthal field induces charge separation if the Lorentz factor of the plasma rotation is not constant. The azimuthal convection current carried with the rotating charged plasma can amplify the original poloidal field. 
Though in our calculation a sharp gradient of $`\mathrm{\Omega }`$ has been assumed, the condition essential to the self-excited dynamo would be the existence of large spatial variations $`\mathrm{\Delta }(R\mathrm{\Omega })`$ and $`\mathrm{\Delta }\gamma `$ such that $`\mathrm{\Delta }(R\mathrm{\Omega })\mathrm{\Delta }\gamma \gtrsim 4.5`$ within a radial scale $`\mathrm{\Delta }R`$ smaller than $`R`$. Different from the angular velocity $`\omega `$ of the dragging of inertial frames, we can expect $`\mathrm{\Delta }\gamma \gg 1`$ as an astrophysically permissible case, e.g., if we consider a rotational motion of plasma near the light cylinder in magnetospheres of relativistic compact objects. However, a significant poloidal motion may occur near the light cylinder, before $`\gamma `$ of plasma rotation becomes much larger than unity. Then, the charge separation may be suppressed in the magnetosphere. The self-excited dynamo due to the $`\mathrm{\Omega }`$-$`Q`$ coupling will be able to work only if strong background fields balance the centrifugal force in a plasma rotating with highly relativistic speed. Hence, this dynamo action should be understood to be an origin of flare-like instability permissible near the light cylinder rather than a mechanism of generation of strong background fields. We have also discussed the induced excitation of magnetic fields which occurs through the frame-dragging effect acting on a uniform background magnetic field. This is a process leading to a new equilibrium configuration of magnetic field lines against the ohmic dissipation. Our analysis has been limited to the case of slow rotation, in which the higher poloidal multipoles remain very small, and the extension to rapid rotation is an important problem to be solved. Though no flare-like activity is expected for this process, the magnetospheric structure with outgoing Poynting flux convergent toward the rotation axis is an interesting result. 
In this paper we have clarified only the fundamental aspects of kinematic generation of magnetic fields based on relativistic Ohm’s law, and the astrophysical implications for black-hole or neutron-star magnetospheres should be confirmed in more realistic models. In particular, taking account of the presence of poloidal plasma flows will be an important task, which may work as an antidynamo effect. Further, in the cases $`\mathrm{\Omega }\ne \omega `$, the $`Q`$-$`\mathrm{\Omega }`$ coupling can also be an important origin of the induced excitation even if the self-excited dynamo does not occur. The combined effects of the $`\omega `$-$`\mathrm{\Omega }`$ and $`Q`$-$`\mathrm{\Omega }`$ couplings remain unclear in this paper. To treat the problem of charge separation more appropriately, one may need two-component plasma theories (see Khanna 1998a, 1998b). Axisymmetric dynamo action in charged plasma is an interesting problem for future investigations. The author thanks Masaaki Takahashi and Masashi Egi for valuable discussions and Ramon Khanna, the referee, for suggestions on improving the manuscript. This work was supported in part by the Grant-in-Aid for Scientific Research (C) of the Ministry of Education, Science, Sports and Culture of Japan (No.10640257).
# August 1999 MPI-Pth/99-34 On the Relation Between Quantum Mechanical and Classical Parallel Transport ## Abstract We explain how the kind of “parallel transport” of a wavefunction used in discussing the Berry or Geometrical phase induces the conventional parallel transport of certain real vectors. These real vectors are associated with operators whose commutators yield diagonal operators; or in Lie algebras those operators whose commutators are in the (diagonal) Cartan subalgebra. In discussing the Berry or Geometrical phase one uses the concept of a “parallel transport” of a quantum mechanical state $`\psi `$, by which is meant $$<\psi (t)|\dot{\psi }(t)>=0$$ (1) One may also consider a complete (orthonormal) basis set of states $`|n>`$ obeying this condition. If the time-dependent states $`|n(t)>`$ are obtained from a basis of initial states $`|n>`$ by a unitary transformation $`U`$ (as would be generated by a hamiltonian) $$U(t)|n>=|n(t)>$$ (2) we can say we have an orthonormal “frame” undergoing this kind of parallel transport . In this case the “parallel transport” condition, namely $$<n(t)|\dot{n}(t)>=0$$ (3) for all $`n`$, can also be written as $$<n(t)|\dot{U}U^{\dagger }|n(t)>=0$$ (4) (Quantities without a time argument refer to the fixed basis, while those with an argument (t) refer to the moving basis, thus $`|n>=|n(0)>`$.) Now the “parallel transport” and “moving frames” implied by these equations are not the same as those of usual differential geometry. Rather, there, in the viewpoint where one studies a euclidean frame moving in a higher dimensional space and then restricts to a submanifold, there is a set of real vectors $`𝐞_a`$ instead of quantum mechanical state vectors, and parallel transport among a set of vectors $`a,b,c\mathrm{}`$ on the submanifold means $$\dot{𝐞}_a(t)\cdot 𝐞_b(t)=0$$ (5) for all pairs $`a,b,c\mathrm{}`$ in the submanifold. That is, the set $`𝐞_a`$ are not a complete set, but rather form a moving subspace in a larger space. 
In this formulation the dot symbol means the ordinary derivative in the ambient space, while in the “intrinsic” formulation of differential geometry the dot symbol would mean the covariant derivative with a connection. This condition Eq. (5), which we might call “classical” parallel transport, looks quite different from Eq. (3). What is the relationship between the two kinds of “parallel transport”, if any? It seems there should be some such relationship. For example, in our treatment of the geometric phase in SU(2), where $`U`$ is an SU(2) group element, we could view the “quantum mechanical parallel transport” as inducing the “classical parallel transport” of the $`x`$ and $`y`$ vectors of a “dreibein” sliding, but not rotating, on the sphere. Here we would like to briefly elucidate why this is and to indicate how to generalize the idea, including its application to higher groups. Briefly, we will show how the condition Eq. (3) for a complex basis leads to a “classical” parallel transport, Eq. (5), of certain vectors associated with the problem, such as the $`x`$, $`y`$ vectors of the “dreibein”. Our first task is to identify the vectors $`𝐞_a`$, which we do as follows. Consider a complete set of operators or matrices $`\lambda _a`$, like the generators of a Lie group, complete in the sense that they transform among each other under $`U`$. That is, there are the time dependent operators $`\lambda (t)=U(t)\lambda _aU^{\dagger }(t)`$, which may be reexpressed in terms of the original, fixed, $`\lambda `$. These then generate the vectors $`𝐞_a`$ via (summation convention) $$\lambda _a(t)=U(t)\lambda _aU^{\dagger }(t)=𝐞_a^j(t)\lambda ^j$$ (6) If we choose the $`\lambda _a`$ such that $`Tr(\lambda _a\lambda _b)=N\delta _{ab}`$, where $`N`$ is a normalization factor, we can write explicitly $$𝐞_a(t)^j=1/NTr[\lambda _a(t)\lambda ^j]=𝒯r[\lambda _a(t)\lambda ^j]$$ (7) where we define $`𝒯r`$ to include the normalization factor. Furthermore, with hermitian $`\lambda _a`$ the $`𝐞_a(t)`$ are real. 
The scalar product of two vectors is then given by the trace of the product of the corresponding $`\lambda `$, as in $`𝐞_a(t)\cdot 𝐞_b(t)=𝒯r[\lambda _a(t)\lambda _b(t)]`$. The definition of the $`\lambda (t)`$ is chosen so that $`<n(t)|\lambda (t)|m(t)>=<n|\lambda |m>`$. Now a main point of that treatment was that the information conveyed by the condition Eq. (3) or Eq. (4) could be interpreted, in the group theoretical context, by saying that in the “local frame” there was no rotation with respect to the subspace of diagonal generators, that is in the Cartan subspace. We can formulate this point in a general manner by viewing the evolution of the states as being determined by a hamiltonian $`h(t)`$, where $`h(t)=i\dot{U}U^{\dagger }`$. (We reserve the symbol $`H`$ for the more usual hamiltonian, which however is absent in the present considerations. $`H`$ includes the “dynamical phase” which usually has been removed from the problem before we get to Eq. (1)). Thus $$i|\dot{n}(t)>=h(t)|n(t)>$$ (8) and Eq. (3) states that the diagonal elements of $`h`$ are zero in the moving basis: $$<n(t)|h(t)|n(t)>=0$$ (9) We would now like to explain how Eq. (9) can lead to $`\dot{𝐞}_a(t)\cdot 𝐞_b(t)=0`$ among some of the $`𝐞(t)`$. The desired quantity $`\dot{𝐞}_a(t)\cdot 𝐞_b(t)`$ may be found from $$\dot{𝐞}_a(t)\cdot 𝐞_b(t)=𝒯r[\dot{\lambda }_a(t)\lambda _b(t)]=-i𝒯r[[h(t),\lambda _a(t)]\lambda _b(t)]$$ (10) where we use the equation of motion $`i\dot{\lambda }_a(t)=[h(t),\lambda _a(t)]`$ following from the definition of $`\lambda _a(t)`$. Rearranging the last expression we have $$\dot{𝐞}_a(t)\cdot 𝐞_b(t)=-i𝒯r[h(t)[\lambda _a(t),\lambda _b(t)]]$$ (11) Now consider two $`\lambda `$’s such that their commutator gives a diagonal matrix, $`[\lambda _a,\lambda _b]=(diag)`$. Applying $`U`$, the same holds in the moving basis for the $`\lambda _a(t)`$ namely $$<m(t)|[\lambda _a(t),\lambda _b(t)]|n(t)>\propto \delta _{nm}$$ (12) For such a pair in Eq. (11), while $`h`$ has no diagonal elements, the commutator has only diagonal elements. 
But this gives zero for the trace, and hence $`\dot{𝐞}_a(t)\cdot 𝐞_b(t)=0`$. We thus arrive at our conclusion: “classical parallel transport”, Eq. (5), follows as a result of “quantum parallel transport” Eq. (3) for those vectors $`𝐞_a(t)`$ whose corresponding commutators among the $`\lambda _a`$ yield diagonal matrices. In an abbreviated language with a “matrix valued vector” $`e_a^j\lambda ^j`$, we could say “for those vectors whose mutual commutators are diagonal”. In group theory this is the requirement that the commutator lie in the Cartan subalgebra, when the latter, as usual, has been chosen diagonal. Precisely this was the case in our SU(2) example where the $`S_x=\lambda _x`$ and $`S_y=\lambda _y`$ generators are the two non-Cartan generators. Their commutator yields only the Cartan generator $`S_z=\lambda _z`$, which in the usual choice of basis is diagonal. This is why under a $`U`$ obeying Eq. (3) they undergo parallel transport in the sense of Eq. (5). Note, however, that it is necessary to explicitly take the Cartan operators diagonal. Comments A striking difference between the quantum Eq. (3) and the classical Eq. (5) is that in the classical case the concept is linear; if two vectors are parallel transported then their sum is also. However, for the parallel transport of Eq. (3), as may be easily verified, this is not true. This implies that Eq. (3) is not in general preserved under linear transformation; a new “frame” $`|n^{\prime }>=u_{n^{\prime }n}|n>`$ will not in general be parallel transported, even if the $`|n>`$ are. Thus a full statement of the problem involves a specification as to which set of vectors satisfy Eq. (3), as reflected in the necessity to choose a definite basis, one in which the Cartan generators are diagonal. 
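The vanishing of ė_a·e_b for such a pair can be illustrated numerically. The snippet below (our construction, not from the paper; the particular h is an arbitrary zero-diagonal Hermitian matrix, evaluated at t = 0 where λ_a(t) = λ_a) exhibits the cancellation for λ_a = σ_x, λ_b = σ_y:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# A Hermitian h(t) with vanishing diagonal elements, as required by eq. (9);
# the off-diagonal entry is an arbitrary illustrative value
z = 0.7 - 1.3j
h = np.array([[0, z], [np.conjugate(z), 0]])

# At t = 0, d(lambda_x)/dt = -i [h, lambda_x], so the rotation rate of
# e_x toward e_y is (1/2) Tr[(-i)[h, sx] sy]
lam_x_dot = -1j * (h @ sx - sx @ h)
coupling = 0.5 * np.trace(lam_x_dot @ sy).real
print(coupling)  # -> 0.0, since [sx, sy] = 2i sz is diagonal and diag(h) = 0
```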
That is, given that all quantities are real, $`<n(t)|\dot{n}(t)>+<\dot{n}(t)|n(t)>=2<n(t)|\dot{n}(t)>=0`$ follows simply from preservation of the norm. Thus we would be led by our above arguments to the nonsensical result that any orthogonal transformation will automatically induce parallel transport. However, this argument would neglect the requirement that the Cartan operators be diagonal. In fact for real orthogonal representations the generators are antisymmetric, or in the above notation, the $`\lambda `$ are pure imaginary. But we need the Cartan operators in diagonal form, and antisymmetric operators cannot be brought to diagonal form without introducing a complex basis. Thus complex numbers are reintroduced and Eq. (3) does indeed represent a second condition, and not just simply unitarity or preservation of the norm. We would like to thank the Institute of Advanced Studies, Jerusalem, for its hospitality in the spring of 1998, when this work was begun.
# Resonant Raman Study of Superconducting Gap and Electron-Phonon Coupling in $`\mathrm{YbBa}_2\mathrm{Cu}_3\mathrm{O}_{7-\delta }`$

## Abstract

We investigate the electronic background as well as the O2-O3 mode at $`330\,\mathrm{cm}^{-1}`$ of highly doped $`\mathrm{YbBa}_2\mathrm{Cu}_3\mathrm{O}_{7-\delta }`$ in $`B_{1\mathrm{g}}`$ symmetry. Above the critical temperature $`T_\mathrm{c}`$ the spectra consist of an almost constant electronic background and superimposed phononic excitations. Below $`T_\mathrm{c}`$ the superconducting gap opens and the electronic background redistributes, exhibiting a $`2\mathrm{\Delta }`$ peak at $`320\,\mathrm{cm}^{-1}`$. We use a model that allows us to separate the background from the phonon. In this model the phonon intensity is assigned to the coupling of the phonon to inter- and intraband electronic excitations. For excitation energies between 1.96 eV and 2.71 eV the electronic background exhibits hardly any resonance. Accordingly, the intraband contribution to the phonon intensity is not affected. In contrast, the interband contribution vanishes below $`T_\mathrm{c}`$ at 1.96 eV while it remains almost unaffected at 2.71 eV.

PACS numbers: 74.25.Gz, 74.25.Kc, 74.62.Dh, 74.72.Bk, 78.30.Er

The Fano-type line shape of the $`B_{1\mathrm{g}}`$ mode in Raman experiments in the $`R\mathrm{Ba}_2\mathrm{Cu}_3\mathrm{O}_7`$ ($`R`$-123) system with $`R`$ = rare earth or yttrium has been the subject of several investigations. Using extended Fano models like those presented by Chen et al. and Devereaux et al., the self-energy contributions to the phonon parameters as a consequence of the interaction of the phonon with low-energy electronic excitations can in principle be obtained. Moreover, a measure of the electron-phonon coupling and the “bare” phonon intensity, i.e., the one resulting from a coupling to interband excitations, can be estimated from a detailed analysis of the Raman spectra.
Therefore, a simultaneous description of the real and the imaginary part of the electronic response function $`\chi ^e(\omega )=R^e(\omega )+i\varrho ^e(\omega )`$ of the intraband excitations is needed. Such a description has recently been presented by us and applied to Ca- and Pr-doped Y-123 films. Here, we will use our description to investigate the different contributions to the $`B_{1\mathrm{g}}`$ phonon intensity in overdoped $`\mathrm{YbBa}_2\mathrm{Cu}_3\mathrm{O}_7`$ (Yb-123) and their resonance properties, as well as the resonance of the pair-breaking peak ($`2\mathrm{\Delta }`$ peak) in $`B_{1\mathrm{g}}`$ symmetry. We study a fully oxygenated high-quality Yb-123 single crystal grown with a self-flux method. Due to the high oxygen content and the small rare-earth ion radius the crystal is overdoped ($`T_\mathrm{c}`$=76 K). $`B_{1\mathrm{g}}`$ Raman spectra \[$`\mathrm{z}(\mathrm{x}^{\prime },\mathrm{y}^{\prime })\overline{\mathrm{z}}`$ in Porto notation\] have been taken using laser lines at 458, 514, and 633 nm (2.71, 2.41, and 1.96 eV) in a setup described elsewhere. They have been corrected for the spectral response of spectrometer and detector. For a comparison of the spectra obtained with different excitation energies, the cross-section is calculated from the efficiencies using ellipsometric data of Y-123. All given temperatures are actual spot temperatures with typical heating between 5 K and 15 K.
In order to describe the line shape of the $`B_{1\mathrm{g}}`$ phonon we subdivide the Raman cross-section $`I_\mathrm{c}(\omega )`$ into a sum of the electronic response $`\varrho _{}(\omega )`$ and an electron-phonon interference term $`I_\mathrm{p}(\omega )`$: $$I_\mathrm{p}(\omega )=\frac{C}{\gamma (\omega )\left[1+ϵ^2(\omega )\right]}\times \frac{R_{\mathrm{tot}}^2(\omega )-2ϵ(\omega )R_{\mathrm{tot}}(\omega )\varrho _{}(\omega )-\varrho _{}^2(\omega )}{C^2}.$$ (1) The constant $`C=A\gamma ^2/g^2`$ is a parameter for the intensity where $`\gamma `$ represents the symmetry element of the electron-phonon vertex projected out by the measurement geometry and $`g`$ is the lowest order expansion coefficient of the electron-phonon vertex describing the coupling to non-resonant intraband electronic excitations. $`R_{}(\omega )+i\varrho _{}(\omega )=Cg^2\chi ^e(\omega )`$ is the electronic response in the measured units. While $`\varrho _{}(\omega )`$ can be obtained directly from the spectra, $`R_{}(\omega )`$ has to be calculated via a Hilbert transformation. $`R_{\mathrm{tot}}(\omega )=R_{}(\omega )+R_0`$ with $`R_0=Cg(g_{\mathrm{pp}}/\gamma )`$ where $`g_{\mathrm{pp}}`$ is an abbreviated “photon-phonon” vertex that describes the coupling to resonant interband electronic excitations. The renormalized phonon frequency $`\omega _\nu (\omega )`$ and linewidth $`\gamma (\omega )`$ are given by $`\omega _\nu ^2(\omega )=\omega _\mathrm{p}^2-2\omega _\mathrm{p}R_{}(\omega )/C`$ and $`\gamma (\omega )=\mathrm{\Gamma }+\varrho _{}(\omega )/C`$, respectively, and $`ϵ(\omega )=\left[\omega ^2-\omega _\nu ^2(\omega )\right]/\left[2\omega _\mathrm{p}\gamma (\omega )\right]`$. The bare phonon intensity $`I_{\mathrm{pp}}`$ resulting from the coupling to interband excitations is given by $`I_{\mathrm{pp}}=\frac{\pi }{C}R_0^2`$ (Ref. 4).
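As a numerical illustration of Eq. (1), the sketch below evaluates $`I_\mathrm{p}(\omega )`$ for a toy case with a frequency-independent electronic response; all parameter values are invented for illustration and are not fit results for Yb-123:

```python
# Toy evaluation of the interference term I_p(omega) of Eq. (1).
# A frequency-independent electronic response is assumed, and all
# parameter values are invented for illustration (frequencies in cm^-1).
C       = 100.0   # intensity parameter, C = A*gamma^2/g^2
omega_p = 330.0   # bare phonon frequency
Gamma   = 5.0     # bare phonon linewidth
rho     = 20.0    # Im part of electronic response, taken constant
R_e     = 10.0    # Re part of electronic response, taken constant
R_0     = 20.0    # interband ("photon-phonon") contribution
R_tot   = R_e + R_0

def I_p(omega):
    gamma_w  = Gamma + rho / C                                # gamma(omega)
    omega_nu = (omega_p**2 - 2.0 * omega_p * R_e / C) ** 0.5  # omega_nu(omega)
    eps = (omega**2 - omega_nu**2) / (2.0 * omega_p * gamma_w)
    num = R_tot**2 - 2.0 * eps * R_tot * rho - rho**2
    return C / (gamma_w * (1.0 + eps**2)) * num / C**2

# Fano-type asymmetry: for this sign of the coupling the interference
# term enhances the cross-section below and suppresses it above the
# renormalized phonon frequency.
print(I_p(320.0), I_p(340.0))
```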
The imaginary part of the measured electronic response (background) is modeled by two contributions: $`I_{\mathrm{\infty }}\mathrm{tanh}(\omega /\omega _\mathrm{T})`$ and $`I_{\mathrm{red}}(\omega ,\omega _{2\mathrm{\Delta }},\mathrm{\Gamma }_{2\mathrm{\Delta }},I_{2\mathrm{\Delta }},I_{\mathrm{supp}})`$. The first term describes the incoherent background with a crossover frequency $`\omega _\mathrm{T}`$, and the second the redistribution below $`T_\mathrm{c}`$ using two Lorentzians: one centered at the $`2\mathrm{\Delta }`$ peak with intensity $`I_{2\mathrm{\Delta }}`$, and the other, proportional to $`I_{\mathrm{supp}}`$, describing the suppression between $`\omega =0`$ and $`\omega =2\mathrm{\Delta }`$. Figure 1 (a) displays the $`B_{1\mathrm{g}}`$ cross-section of Yb-123 at 20 K obtained with $`\mathrm{\hbar }\omega _\mathrm{i}=2.71`$ eV as well as its description. For the description we use Eq. (1) for the $`B_{1\mathrm{g}}`$ phonon and the Ba mode, Lorentzians for all other modes, and the background contributions stated above. The description yields a $`2\mathrm{\Delta }`$ peak at $`320\,\mathrm{cm}^{-1}`$. Assuming that the background is not resonant, a calculated spectrum with vanishing bare phonon intensity ($`R_0=0`$) for the $`B_{1\mathrm{g}}`$ phonon is drawn in Fig. 1 (b), where other phonons are dropped for clarity. We compare this calculation to the cross-section obtained with $`\mathrm{\hbar }\omega _\mathrm{i}=1.96`$ eV and 20 K and find good agreement. Results of the analysis of the $`B_{1\mathrm{g}}`$ phonon line shape for measurements with $`\mathrm{\hbar }\omega _\mathrm{i}=2.71`$ eV are shown in Fig. 2. It turns out that the strong broadening as well as the slight softening of the renormalized phonon can be entirely assigned to the redistributing background, leaving anharmonic decays for the bare phonon parameters.
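The background parametrization described above can be sketched as a simple function of frequency; the parameter values below are placeholders, not the fitted Yb-123 values:

```python
import math

# Toy version of the electronic-background model: an incoherent tanh
# term plus a redistribution term built from two Lorentzians (a
# 2*Delta peak and a low-frequency suppression). All values invented.
I_inf    = 1.0     # asymptotic incoherent intensity
omega_T  = 100.0   # crossover frequency (cm^-1)
omega_2D = 320.0   # 2*Delta peak position (cm^-1)
G_2D     = 60.0    # 2*Delta peak width (cm^-1)
I_2D     = 0.8     # 2*Delta peak intensity
I_supp   = 0.5     # strength of the suppression below 2*Delta
G_supp   = 120.0   # width of the suppression term (cm^-1)

def lorentzian(omega, omega0, gamma):
    return gamma**2 / ((omega - omega0)**2 + gamma**2)

def background(omega, superconducting=True):
    bg = I_inf * math.tanh(omega / omega_T)          # incoherent part
    if superconducting:
        bg += I_2D * lorentzian(omega, omega_2D, G_2D)    # 2*Delta peak
        bg -= I_supp * lorentzian(omega, 0.0, G_supp)     # suppression
    return bg
```

With these placeholder values, the superconducting background is enhanced near the $`2\mathrm{\Delta }`$ peak and suppressed at low frequency relative to the normal state, as described in the text.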
As we obtained similar results with $`\mathrm{\hbar }\omega _\mathrm{i}=2.41`$ eV, we used a fixed parameter set $`\mathrm{\Gamma }(T)`$, $`\omega _\mathrm{p}(T)`$ for all excitation energies. This is especially important for the spectra recorded with $`\mathrm{\hbar }\omega _\mathrm{i}=1.96`$ eV, where the decreasing or even vanishing bare phonon intensity hinders a reliable determination of the phonon parameters. The upper panels of Fig. 3 display the peak height of the $`2\mathrm{\Delta }`$ peak obtained with our description of the electronic background. Obviously, the $`2\mathrm{\Delta }`$ peaks vanish above $`T_\mathrm{c}`$. The remaining peak intensity above $`T_\mathrm{c}`$ for $`\mathrm{\hbar }\omega _\mathrm{i}=2.71`$ eV is just a compensation of a slightly underestimated electron-phonon coupling. Below $`T_\mathrm{c}`$ the intensities increase in a monotonic fashion, saturating, more or less pronounced, at low temperatures. With respect to the resonance properties of the $`2\mathrm{\Delta }`$ peak we find a decreasing intensity with decreasing excitation energy. This partly explains the discrepancy between the calculated and the measured cross-sections shown in Fig. 1 (b). The energy of the $`2\mathrm{\Delta }`$ peak decreases only slightly with increasing temperature, from $`\sim `$ 310 cm<sup>-1</sup> at 30 K down to $`\sim `$ 260 cm<sup>-1</sup> at 70 K. Regarding the temperature dependencies of the bare phonon intensity $`I_{\mathrm{pp}}`$ in Fig. 3 we find similar behavior for the data sets obtained with $`\mathrm{\hbar }\omega _\mathrm{i}=`$2.71 eV and 2.41 eV. They exhibit a slight decrease with decreasing temperature and are not affected by $`T_\mathrm{c}`$. In contrast, we find a dramatically decreasing intensity with $`\mathrm{\hbar }\omega _\mathrm{i}=`$1.96 eV below $`T_\mathrm{c}`$. The overall decrease of $`I_{\mathrm{pp}}`$ for $`T>T_\mathrm{c}`$ with decreasing excitation energy is similar to the behavior of $`I_{2\mathrm{\Delta }}`$ for $`T\to 0`$, but more pronounced.
The sudden drop of $`I_{\mathrm{pp}}`$ below $`T_\mathrm{c}`$ for $`\mathrm{\hbar }\omega _\mathrm{i}=1.96`$ eV suggests a superconductivity-induced closing of the resonant excitation channel of the phonon. For even lower excitation energies $`I_{\mathrm{pp}}`$ vanishes almost completely for $`T\to 0`$, as we have observed with $`\mathrm{\hbar }\omega _\mathrm{i}=1.71`$ eV and 1.58 eV. Above $`T_\mathrm{c}`$, however, the decreasing intensity with decreasing excitation energy appears to continue monotonically. This suggests that more fundamental changes of the band structure take place below $`T_\mathrm{c}`$. They will most likely appear around the van Hove singularity at $`(\frac{\pi }{a},0)`$ where the electron-phonon coupling is enhanced. It remains open at present how far the band structure changes inferred from our data are related to the anomalies around 2 eV observed in thermal-difference reflectance spectroscopy, or to the missing spectral weight deduced from a sum-rule type analysis of c-axis optical conductivity data. The authors thank U. Merkt for encouragement. S. O. acknowledges a grant of the Deutsche Forschungsgemeinschaft via the Graduiertenkolleg “Physik nanostrukturierter Festkörper”.
## INTRODUCTION The structure of the parity violation in the electroweak interaction can be probed directly in the production and decay of polarized $`Z^0`$ bosons. The parity violation of all three leptonic states is characterized by the $`Z^0`$-lepton coupling asymmetry parameters: $`A_e`$, $`A_\mu `$ and $`A_\tau `$. The standard model assumes lepton universality, so that all three species of leptonic asymmetry parameters are expected to be identical and directly related to the effective weak mixing angle, $`\mathrm{sin}^2\theta _W^{eff}`$. Measurements of leptonic asymmetry parameters at the $`Z^0`$ resonance provide an important test of lepton universality and the weak mixing angle . We report new results on direct measurements of the asymmetry parameters $`A_e`$, $`A_\mu `$ and $`A_\tau `$ using leptonic $`Z^0`$ decays. The measurements are based on the data collected by the SLD experiment at the SLAC Linear Collider (SLC). The SLC produces polarized $`Z^0`$ bosons in $`e^+e^{-}`$ collisions using a polarized electron beam. The polarization allows us to form the left-right cross section asymmetry to extract the initial-state asymmetry parameter $`A_e`$. It also enables us to extract the final-state asymmetry parameter for lepton $`l`$, $`A_l`$, directly using the polarized forward-backward asymmetry. Experiments at the $`Z^0`$ resonance without beam polarization have measured the product of the initial- and final-state asymmetry parameters, $`A_eA_l`$. Those same experiments have also measured the tau polarization, which yields $`A_e`$ and $`A_\tau `$ separately. The SLC beam polarization enables us to present the only existing direct measurement of $`A_\mu `$. The polarized asymmetries enhance the statistical power for the final-state asymmetry parameter by a factor of about 25 compared to the unpolarized forward-backward asymmetry. In this report, we use the data recorded in 1996-98 at the SLD with the upgraded vertex detector.
The obtained results are combined with earlier published results . There are two principal goals of this study. One is to test lepton universality by comparing the three asymmetry parameters. The other is to complement the weak-mixing-angle result from the left-right cross section asymmetry using the hadronic event sample and to add additional precision to the determination of the weak mixing angle. ## THE SLC AND THE SLD This analysis relies on the Compton polarimeter, tracking by the vertex detector and the central drift chamber (CDC), and the liquid argon calorimeter (LAC). Details about the SLC, the polarized electron source and the measurements of the electron-beam polarization with the Compton polarimeter can be found in Refs. and . A full description of the SLD and its performance has also been given elsewhere . Only the details most relevant to this analysis are mentioned here. In the previous measurements , the analysis was restricted to the polar-angle range of $`|\mathrm{cos}\theta |<0.7`$ because the trigger efficiency for muon-pair events and the tracking efficiency fall off beyond $`|\mathrm{cos}\theta |=0.7`$. The upgraded vertex detector and a new additional trigger system improved the acceptance. The upgraded vertex detector (VXD3) , a pixel-based CCD vertex detector, was installed in 1996. The VXD3 consists of 3 layers which enable a self-tracking capability independent of the CDC and provides 3-layer and 2-layer coverage out to $`|\mathrm{cos}\theta |=0.85`$ and 0.90, respectively. The self-tracking capability and wide acceptance of VXD3 give significant improvement in solid-angle coverage because high-precision VXD3-hit vectors in 3-D are powerful additions to the global track-finding capability. The detailed implementation of this new strategy to recover deficiencies in track finding with the CDC alone has already been developed, and works well on recent SLD reconstruction data .
The new additional trigger for lepton-pair events is the WIC Muon Trigger (WMT). The purpose of the WMT is to trigger muon-pair events passing through the endcaps. The WMT uses data from the endcap Warm Iron Calorimeter (WIC) which consists of inner and outer sections. The WMT requires straight back-to-back tracks in the endcap WIC passing through the interaction point. In order to increase the efficiency of the WMT, only one pair of back-to-back tracks, in either the inner or the outer section, is required. Angular coverage of the WMT is $`0.68<|\mathrm{cos}\theta |<0.95`$ with reasonable trigger efficiency, covering the region that lacked a leptonic trigger in the previous analysis. ## THEORY ### $`A_{LR}`$ and $`\stackrel{~}{A}_{FB}^l`$ Polarization-dependent asymmetries are easily computed from the tree-level differential cross section for the dominant process $`e_{L,R}^{-}+e^+\to Z^0\to l^{-}+l^+`$ at $`Z^0`$ resonance, where $`l`$ represents either a $`\mu `$\- or a $`\tau `$-lepton. The differential cross section is expressed as follows: $$\frac{d\sigma }{d\mathrm{cos}\theta }\propto \left(1-PA_e\right)\left(1+\mathrm{cos}^2\theta \right)+2\left(A_e-P\right)A_l\mathrm{cos}\theta ,$$ (1) where $`\mathrm{cos}\theta `$ is the angle of the outgoing lepton ($`l^{-}`$) with respect to the electron-beam direction. Photon exchange terms and, if final-state leptons are electrons, $`t`$-channel contributions have to be taken into account. The leptonic asymmetry parameters which refer to the initial- and final-state lepton appear in this expression as $`A_e`$ and $`A_l`$, respectively. Note that the first term, symmetric in $`\mathrm{cos}\theta `$, exhibits initial-state coupling to the electron by its dependence on $`A_e`$. The second term, asymmetric in $`\mathrm{cos}\theta `$, is mostly influenced by $`A_l`$. $`P`$ is the signed longitudinal polarization of the electron beam in the convention that left-handed bunches have negative sign .
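The behavior of Eq. (1) can be illustrated directly: the left-handed beam (negative $`P`$) yields the larger cross section, and the sign of the forward-backward asymmetry flips with beam helicity. A short sketch with illustrative parameter values:

```python
# Evaluate the tree-level shape of Eq. (1) for left- and right-handed
# beams. Parameter values are illustrative, not measured.
A_e, A_l = 0.15, 0.15
P_left, P_right = -0.73, +0.73   # left-handed bunches carry negative sign

def dsigma(c, pol):
    """Differential cross section of Eq. (1), arbitrary normalization."""
    return (1.0 - pol * A_e) * (1.0 + c * c) + 2.0 * (A_e - pol) * A_l * c

# Parity violation: the left-handed beam yields the larger cross section ...
assert dsigma(0.5, P_left) > dsigma(0.5, P_right)
# ... and the asymmetric (cos theta) term changes sign with beam helicity:
assert dsigma(0.5, P_left) > dsigma(-0.5, P_left)    # forward excess, left
assert dsigma(0.5, P_right) < dsigma(-0.5, P_right)  # backward excess, right
print("Eq. (1) checks passed")
```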
The relationship between the asymmetry parameter and the vector and axial-vector, or left-right, couplings is given as follows: $$A_l=\frac{2g_V^lg_A^l}{g_{V}^{l}{}_{}{}^{2}+g_{A}^{l}{}_{}{}^{2}}=\frac{g_{L}^{l}{}_{}{}^{2}-g_{R}^{l}{}_{}{}^{2}}{g_{L}^{l}{}_{}{}^{2}+g_{R}^{l}{}_{}{}^{2}}.$$ (2) where $`g_L^l=g_V^l+g_A^l`$ and $`g_R^l=g_V^l-g_A^l`$. The Standard Model relates the weak mixing angle to the couplings by the expressions $`g_V^l=-\frac{1}{2}+2\mathrm{sin}^2\theta _W^{eff}`$ and $`g_A^l=-\frac{1}{2}`$. Simple asymmetries can be used to extract $`A_l`$ from data: the left-right asymmetry and the left-right-forward-backward asymmetry. The left- and right-handed cross sections are obtained by integrating Eq. (1) over all $`\mathrm{cos}\theta `$ giving $`\sigma _L^l`$ or $`\sigma _R^l`$ for left- and right-handed electron beams, respectively. (For convenience, we drop the superscript in the following discussions since the meaning of the expressions will be clear enough in context.) Parity violation causes $`\sigma _L`$ and $`\sigma _R`$ to be different. Hence, we define the left-right cross section asymmetry, $`A_{LR}`$, $$A_{LR}=\frac{1}{|P|}\frac{\sigma _L-\sigma _R}{\sigma _L+\sigma _R}.$$ (3) Four cross sections are obtained by integrating forward (F) and backward (B) hemispheres separately, along with left- and right-handed polarization. Here forward (backward) means $`\mathrm{cos}\theta >0`$ ($`\mathrm{cos}\theta <0`$). Based on these four possibilities, we define the polarized forward-backward asymmetry, $`\stackrel{~}{A}_{FB}^l`$, as follows: $$\stackrel{~}{A}_{FB}^l=\frac{(\sigma _{LF}-\sigma _{LB})-(\sigma _{RF}-\sigma _{RB})}{(\sigma _{LF}+\sigma _{LB})+(\sigma _{RF}+\sigma _{RB})}.$$ (4) ### Leptonic Asymmetry Parameters: $`A_e`$, $`A_\mu `$ and $`A_\tau `$ With equal luminosities for left- and right-handed electron beams, the cross sections in Eq. (3) may be replaced with the numbers of events: $`N_L`$ and $`N_R`$. After integrating Eq.
(1) over all angles to get expressions for $`N_L`$ and $`N_R`$ in terms of $`P`$, $`A_e`$ and $`A_l`$, and after substituting in Eq. (3) for both signs of polarization, what remains is given by $$A_e=A_{LR}.$$ (5) In a similar fashion, integrating over forward or backward hemispheres, and substituting both signs of polarization in Eq. (4), gives the expression $$A_l=(\stackrel{~}{A}_{FB}^l/|P|)(1+\frac{x_{max}^2}{3})/x_{max},$$ (6) where $`x_{max}=\mathrm{cos}\theta _{max}`$ is the maximum polar angle accepted by the lepton-event trigger and reconstruction efficiencies. The leptonic asymmetry parameters are particularly potent ways to measure the weak mixing angle precisely because $`A_l`$ is expressed as follows: $$A_l=\frac{2\left(1-4\mathrm{sin}^2\theta _W^{eff}\right)}{1+\left(1-4\mathrm{sin}^2\theta _W^{eff}\right)^2}$$ (7) making $`A_l`$ very sensitive to the weak mixing angle: $$\frac{dA_l}{d\mathrm{sin}^2\theta _W^{eff}}\approx -7.9.$$ (8) ### The Maximum Likelihood Method The essence of the measurement is expressed in Eqs. (3) and (4), but instead of simply counting events we perform a maximum likelihood fit, event by event, to incorporate the contributions of all the terms in the cross section and to include the effect of initial state radiation. The likelihood function for muon- and tau-pair events is defined as follows: $$\mathcal{L}(A_e,A_l,x)=\int ds^{\prime }H(s,s^{\prime })\left(Z(s^{\prime },A_e,A_l,x)+Z\gamma (s^{\prime },A_e,A_l,x)+\gamma (s^{\prime },x)\right),$$ (9) where $`A_e`$ and $`A_\mu `$ are free parameters. $`A_e`$ and $`A_\mu `$ ($`A_\tau `$) are determined simultaneously with the muon-pair (tau-pair) events. The integration over $`s^{\prime }`$ is done with the program MIZA to take into account the initial-state radiation from two times the beam energy $`\sqrt{s}`$ to the invariant mass of the propagator $`\sqrt{s^{\prime }}`$ described by the radiator function $`H(s,s^{\prime })`$. The spread in the beam energy has a negligible effect.
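The algebra behind Eqs. (2), (6) and (7) can be checked with a few lines of arithmetic. The sketch below verifies that the coupling form and the weak-mixing-angle form of $`A_l`$ agree, that the sensitivity of Eq. (8) is close to $`-7.9`$, and that the acceptance-corrected inversion of Eq. (6) recovers the input $`A_l`$ from the tree-level hemisphere integrals of Eq. (1); the numerical inputs are illustrative:

```python
# Consistency of Eqs. (2), (6) and (7), and the sensitivity of Eq. (8).
def A_l_couplings(sin2):
    gV, gA = -0.5 + 2.0 * sin2, -0.5     # Standard Model lepton couplings
    return 2.0 * gV * gA / (gV**2 + gA**2)          # Eq. (2)

def A_l_mixing(sin2):
    x = 1.0 - 4.0 * sin2
    return 2.0 * x / (1.0 + x**2)                    # Eq. (7)

s = 0.23099                                          # illustrative value
assert abs(A_l_couplings(s) - A_l_mixing(s)) < 1e-12

# Sensitivity, Eq. (8): numerical derivative near the measured value
h = 1e-6
slope = (A_l_mixing(s + h) - A_l_mixing(s - h)) / (2.0 * h)

# Eq. (6): hemisphere integrals of Eq. (1) for |cos(theta)| < x,
# assuming lepton universality (A_e = A_l) for simplicity.
P, x = 0.73, 0.8
A_l_true = A_l_mixing(s)
A_e = A_l_true
sym  = lambda pol: (1.0 - pol * A_e) * (x + x**3 / 3.0)   # (1+c^2) part
asym = lambda pol: (A_e - pol) * A_l_true * x**2          # 2c part
num = 2.0 * asym(-P) - 2.0 * asym(P)      # (F-B)_left - (F-B)_right
den = 2.0 * sym(-P) + 2.0 * sym(P)        # total cross section
A_fb = num / den                          # polarized FB asymmetry, Eq. (4)
A_l_rec = (A_fb / P) * (1.0 + x**2 / 3.0) / x             # Eq. (6)
assert abs(A_l_rec - A_l_true) < 1e-12
print(round(A_l_true, 4), round(slope, 2))
```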
The maximum likelihood fit is less sensitive to detector acceptance as a function of polar angle than the counting method, and has more statistical power. $`Z`$, $`\gamma `$ and $`Z\gamma `$ are the tree-level differential cross-section terms for Z exchange, photon exchange and their interference. The integration is performed before the fit to obtain the coefficients $`f_Z`$, $`f_{Z\gamma }`$ and $`f_\gamma `$, and the likelihood function becomes $$\mathcal{L}(A_e,A_l,x)=f_ZZ(A_e,A_l,x)+f_{Z\gamma }Z\gamma (A_e,A_l,x)+f_\gamma \gamma (x).$$ (10) These coefficients give the relative sizes of the three terms at the SLC center-of-mass energy. As for the electron final state, it includes both $`s`$-channel and $`t`$-channel $`Z^0`$ and photon exchanges, which give four amplitudes and ten cross section terms. All ten terms are energy-dependent. We define a maximum likelihood function for electron-pair events by modifying Eqs. (9) and (10) to include all ten terms. The integration over $`s^{\prime }`$ is performed with DMIBA to obtain the coefficients giving the relative size of the ten terms. ## ANALYSIS ### Data Sample This study includes the data obtained during the 1996 and 1997-98 SLD runs. Results are combined with published analyses from data taken during the 1993 SLD run and the 1994-95 run. The 1996 data set consisted of about 50,000 $`Z^0`$’s with about $`77\%`$ polarization. The 1997-98 data sample contains about 340,000 $`Z^0`$’s. The beam polarization for the 1997-98 runs averaged about 73%. The data were recorded at a mean center-of-mass energy of 91.26 GeV and 91.23 GeV during 96-97 and 98 runs, respectively . The branching ratio $`Z^0\to l^+l^{-}`$ is $`3.4\%`$, so that the total branching ratio into all three lepton species combined is about 10%. ### Event Selection Leptonic $`Z^0`$ decays are characterized by their low multiplicity and high-momentum charged tracks.
Muons and electrons are particularly distinctive as they emerge back-to-back with little curvature from the primary interaction vertex, and tau pairs form two tightly collimated cones directed in well-defined opposite hemispheres. Lepton-pair candidates are chosen on the basis of the momentum of the charged tracks as well as from energy deposited in the calorimeter. The criteria used for the event selection give a high efficiency for finding the signal events while the backgrounds remain so low as to be almost entirely negligible. The pre-selection: * Lepton-pair candidates are initially selected by requiring between two and eight charged tracks to reduce background from hadronic $`Z^0`$ decays. * The product of the sums of the charges of the tracks in each hemisphere must be -1. This ensures a correct determination of the sign of the scattering angle. * Requiring that at least one track have momentum greater than 1 GeV reduces two-photon background while leaving candidate events with a high efficiency. After the pre-selection, additional selection criteria are applied. The Bhabha selection: * A single additional cut effectively selects $`e^+e^{-}`$ final states. We require that the sum of the energies associated with the two highest-momentum tracks in the event be greater than 45 GeV as measured in the calorimeter. The muon-pair selection: * Muon final state selection starts by demanding that the invariant mass of the event, based on charged tracks, be greater than 70 GeV. Tau final states usually fail this selection. * Since muons deposit little energy as they traverse the calorimeters, we require also that the largest energy recorded in the calorimeter by a charged track in each hemisphere be greater than zero and less than 10 GeV. Electron pairs are removed by this requirement.
The tau-pair selection: * Tau selection requires that the largest calorimeter energy associated with a charged track in each hemisphere is less than 27.5 GeV and 20.0 GeV for the magnitude of $`\mathrm{cos}\theta `$ less than or greater than 0.7, respectively, to distinguish them from $`e^+e^{-}`$ pairs. * We take the complement of the muon event mass cut and require the event mass to be less than 70 GeV. * At least one track must have momentum above 3 GeV to reduce backgrounds from two-photon events. * We define the event acollinearity from the vector sums of the momenta of the tracks in the separate hemispheres, and the angle between the resultant momentum vectors must be greater than 160 degrees. This also removes two-photon events. * Finally, the invariant mass of charged tracks in each hemisphere is required to be less than 1.8 GeV to further suppress hadronic backgrounds. The results from the event selections are summarized in Table 1. Each event is assigned a polar production angle based on the thrust axis defined by the charged tracks. Our published results based on the 1993-95 data were restricted to the polar-angle range $`\left|\mathrm{cos}\theta \right|<0.7`$ because of the lepton trigger and the tracking acceptance . In the 1996-98 data sets, we can use a wider polar-angle range $`\left|\mathrm{cos}\theta \right|<0.8`$ for the analysis. Polar-angle distributions for electron-, muon- and tau-pair final states from the 1996 through 1998 data sets are shown in Fig. 1. The left-right cross section asymmetries and forward-backward angular distribution asymmetries are clearly seen. The acceptance in $`\mathrm{cos}\theta `$ out to $`\pm 0.7`$ is uniform, but falls off at larger $`\mathrm{cos}\theta `$. Since the data are plotted out to $`\mathrm{cos}\theta =\pm 0.8`$, it was necessary to correct the data for the acceptance efficiency in order that the fitted curve could be compared with the data. Similar fits were done for the 1993 and 1994-95 data sets.
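The selection criteria listed above can be summarized as predicates on a simplified event record. The record layout and field names below are hypothetical; only the numerical thresholds (in GeV and degrees) are taken from the text:

```python
# Sketch of the lepton-pair selection described above. The event record
# and its field names are hypothetical; thresholds are those quoted
# in the text (energies in GeV, acollinearity in degrees).
def passes_preselection(ev):
    if not (2 <= len(ev["tracks"]) <= 8):            # charged multiplicity
        return False
    q_pos = sum(t["charge"] for t in ev["tracks"] if t["cos_theta"] > 0)
    q_neg = sum(t["charge"] for t in ev["tracks"] if t["cos_theta"] < 0)
    if q_pos * q_neg != -1:                          # hemisphere charge product
        return False
    return max(t["p"] for t in ev["tracks"]) > 1.0   # two-photon rejection

def is_bhabha(ev):
    top2 = sorted(ev["tracks"], key=lambda t: t["p"], reverse=True)[:2]
    return sum(t["e_cal"] for t in top2) > 45.0      # calorimetric energy

def is_muon_pair(ev):
    return (ev["mass"] > 70.0 and
            all(0.0 < e < 10.0 for e in ev["hemi_e_cal_max"]))

def is_tau_pair(ev):
    e_cut = 27.5 if ev["abs_cos_thrust"] < 0.7 else 20.0
    return (all(e < e_cut for e in ev["hemi_e_cal_max"]) and
            ev["mass"] < 70.0 and
            max(t["p"] for t in ev["tracks"]) > 3.0 and
            ev["acollinearity"] > 160.0 and
            all(m < 1.8 for m in ev["hemi_mass"]))

# A hypothetical tau-pair-like event for illustration
ev = {"tracks": [{"charge": +1, "p": 20.0, "cos_theta": 0.3, "e_cal": 5.0},
                 {"charge": -1, "p": 15.0, "cos_theta": -0.3, "e_cal": 6.0}],
      "mass": 40.0, "acollinearity": 175.0, "abs_cos_thrust": 0.3,
      "hemi_mass": (0.5, 0.7), "hemi_e_cal_max": (5.0, 6.0)}
assert passes_preselection(ev) and is_tau_pair(ev)
assert not is_bhabha(ev) and not is_muon_pair(ev)
```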
In all cases the curves fit the data well. ### Systematic Effects and Corrections to Asymmetry Parameters The maximum likelihood procedure gives an excellent first estimate of the asymmetry parameters and the statistical error on each parameter. However there are several systematic effects which can bias the result: * Uncertainty in beam polarization; * Background; * Uncertainty in beam energy; and * V-A structure in $`\tau `$ decay. We must estimate the systematic errors on these effects. We discuss these effects in this section and summarize in Table 2 and Table 3. Effect of polarization asymmetries: Asymmetry measurements at SLD rely critically on the time-dependent polarization. SLD has three detectors to measure the polarization, Cherenkov detector (CKV) , Polarized Gamma Counter (PGC) and Quartz Fiber Calorimeter (QFC) . Due to beamstrahlung backgrounds produced during luminosity running, only the CKV detector can make polarization measurements during beam collisions. Hence it is the primary detector and the most carefully analyzed. Dedicated electron-only runs are used to compare electron polarization measurements between the CKV, PGC and QFC detectors. The PGC and QFC results are consistent with the CKV result at the level of 0.5%. Details on the polarization measurements are discussed in Ref. . The preliminary estimates of the error on the polarization are given by $`\delta P/P=0.67\%`$ and 1.08% for 1996 and 1997-98, respectively . Effect of Backgrounds: Muon-pair samples are almost background free but tau-pair candidates are contaminated by electron-pairs, two-photon and hadronic events. A small percentage of tau-pairs are identified as electron-pairs. Beam-gas and cosmic ray backgrounds have been estimated and found negligible. Estimates of backgrounds are given in Table 1. These estimates have been derived from detailed Monte Carlo simulations as well as from studying the effect of cuts in background-rich samples of real data. 
Tau pairs are the only non-negligible backgrounds in the electron- and muon-pair samples. The tau-pair background in the muon-pair sample is negligible since the world-averaged measurements show $`A_\mu `$ and $`A_\tau `$ to be consistent within $`13\times 10^{-3}`$, and the effect would be smaller than $`5\times 10^{-5}`$. For the same reason, the muon-pair background in the tau-pair sample can be neglected. The $`t`$-channel electron-pair background, the two-photon background and the hadronic background cause small corrections to $`A_\tau `$. We estimate how the backgrounds discussed above affect each asymmetry parameter by creating an ensemble of fast simulations. The simulation-data sets are generated from the same formula for the cross-sections used to fit the real data. Trial backgrounds are then superimposed on each data set, where the shape of the background has been obtained from the shape of the data that form the particular background. Each background is normalized relative to the signal according to detailed Monte Carlo estimates. The effect of each background on each asymmetry parameter is determined from the differences between the fitted parameter values before and after inclusion of the backgrounds. The net corrections due to backgrounds and their uncertainties are given in Table 2. Effect of uncertainty of center of mass energy: The calculation of the maximum likelihood function depends on the average beam energy $`\sqrt{s}`$ since the coefficients in the likelihood functions (see Eq. (10)) will depend on the center-of-mass energy. During the 1998 run, a $`Z^0`$-peak scan was performed to provide a calibration of the SLD energy spectrometer. It shows that the spectrometer measurements had a small bias and that SLD has been running slightly below the $`Z^0`$ resonance. Hence we redetermine the coefficients in Eq. (10) for 1998 data to correct the effect.
The uncertainty due to a $`\pm 1\sigma `$ (50 MeV) variation of the center-of-mass energy is estimated by computing them for the peak energy as well as for the $`1\sigma `$ variation. V-A structure in tau decays: The largest systematic effect for the tau analysis, indicated in Table 2, comes about because we measure not the taus themselves, but their decay products. The longitudinal spin projections of the two taus from $`Z^0`$ decay are 100% anti-correlated: one will be left-handed and the other right-handed. So, given the V-A structure of tau decay , the decay products from the $`\tau ^+`$ and the $`\tau ^{-}`$ from a particular $`Z^0`$ decay will take their energies from the same set of spectra. For example, if both taus decay to $`\pi \nu `$, then both pions will generally be low in energy (in the case of a left-handed $`\tau ^{-}`$ and right-handed $`\tau ^+`$) or both will be generally higher in energy. The effect is strong at SLD because the high beam polarization induces very high tau polarization as a function of polar-production angle. And, most importantly, the sign of the polarization is basically opposite for left- and right-handed polarized beam events. So a cut on event mass causes polar-angle dependence in selection efficiency for taus which has the opposite effect for taus from events produced with the left- and right-handed polarized electron beam. Taking all tau decay modes into account, using Monte Carlo simulation, we find an overall shift of $`+0.0130\pm 0.0029`$ on $`A_\tau `$ (where the uncertainty is mostly from Monte Carlo statistics and the value extracted from the fit must be reduced by this amount). $`A_e`$ is not affected since the overall relative efficiencies for left-beam and right-beam events are not changed much (only the polar angle dependence of the efficiencies are changed). The above-mentioned systematic effects are non-negligible, although small compared with current statistical errors.
Other potential corrections are discussed below. Their effect on the asymmetry parameters is deemed negligible for the current measurements. Effect of detector asymmetries: Since there will generally be no bias in the fit as long as the efficiency is symmetric in $`\mathrm{cos}\theta `$, there will be a problem only if the efficiency for detecting positive tracks is different from that of negative tracks. We estimate this effect by examining the relative numbers of opposite sign back-to-back tracks with positive-positive and negative-negative pairs. The latter will occur whenever one of the two back-to-back tracks in a two-pronged event has a wrong sign of measured charge. Double charge mismeasurement is less likely. The correction for biases due to charge mismeasurement is found to be negligible. Final state thrust angle resolution: We have also studied the effect of uncertainty in the thrust axis by smearing the directions of outgoing tracks. Final state QED radiation can affect the determination of the track angle, particularly for electrons, although we find the angle to be well-determined in that case as well. The result depends somewhat on how final pairs are selected, but this source of correction is also deemed negligible from our studies. Summary of systematic errors: Table 3 summarizes the systematic errors on the asymmetry parameters due to the contributing factors discussed above. The superscript on each parameter indicates the lepton species from which that particular parameter was determined. For example, $`A_e^\mu `$ refers to the estimate of $`A_e`$ obtained through the dependence expressed in Eq. (1) by analyzing the muon pairs.
## RESULTS Preliminary results from fits to the 1996-98 data are summarized below: $`A_e(1996\text{–}98)`$ $`=`$ $`0.1572\pm 0.0069\pm 0.0027\text{ (from }e^+e^{-}\text{, }\mu ^+\mu ^{-}\text{ and }\tau ^+\tau ^{-}\text{);}`$ $`A_\mu (1996\text{–}98)`$ $`=`$ $`0.147\pm 0.018\pm 0.002\text{ (from }\mu ^+\mu ^{-}\text{); and}`$ $`A_\tau (1996\text{–}98)`$ $`=`$ $`0.127\pm 0.018\pm 0.004\text{ (from }\tau ^+\tau ^{-}\text{),}`$ where the first error is statistical and the second is due to systematic effects. The numbers have been corrected for the effect of backgrounds and for the “V-A effect” for taus. The estimates for $`A_e`$, $`A_\mu `$ and $`A_\tau `$ are obtained by fitting each lepton sample separately with the maximum likelihood procedure. $`A_e`$ is obtained from all lepton species combined (combining $`A_e^e`$, $`A_e^\mu `$ and $`A_e^\tau `$). Adding our published results from the 1993-95 data, our current best estimates for the leptonic asymmetry parameters at SLD are as follows: $`A_e(1993\text{–}98)`$ $`=`$ $`0.1558\pm 0.0064\text{ (from }e^+e^{-}\text{, }\mu ^+\mu ^{-}\text{ and }\tau ^+\tau ^{-}\text{);}`$ $`A_\mu (1993\text{–}98)`$ $`=`$ $`0.137\pm 0.016\text{ (from }\mu ^+\mu ^{-}\text{);}`$ $`A_\tau (1993\text{–}98)`$ $`=`$ $`0.142\pm 0.016\text{ (from }\tau ^+\tau ^{-}\text{); and}`$ $`A_l(1993\text{–}98)`$ $`=`$ $`0.1523\pm 0.0057,`$ where statistical and systematic errors are combined. The asymmetry parameters are consistent with lepton universality. The global asymmetry parameter is referred to as $`A_l`$ (for $`A_l`$, systematic errors are conservatively taken to be fully correlated between lepton species). ## SUMMARY We report new direct measurements of the $`Z^0`$-lepton coupling asymmetry parameters $`A_e`$, $`A_\mu `$ and $`A_\tau `$, with polarized $`Z^0`$’s collected from 1993 through 1998 by the SLD detector at the SLAC Linear Collider. Maximum likelihood fits to the reactions $`e_{L,R}^{-}+e^+\to Z^0\to e^+e^{-}`$, $`\mu ^+\mu ^{-}`$ and $`\tau ^+\tau ^{-}`$ are used to measure the parameters. 
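The per-channel values above are merged into combined estimates by error-weighted averaging. The paper does not spell out its averaging procedure (and notes that some systematics are treated as correlated between lepton species), but a minimal uncorrelated inverse-variance average can be sketched as follows; the input values here are hypothetical, chosen only for illustration:

```python
import math

def combine(measurements):
    """Inverse-variance weighted average of (value, error) pairs,
    assuming uncorrelated uncertainties."""
    weights = [1.0 / err ** 2 for _, err in measurements]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    return value, 1.0 / math.sqrt(total)

# Illustrative only: two hypothetical measurements of an asymmetry parameter
val, err = combine([(0.157, 0.0074), (0.154, 0.010)])
```

The combined error is always smaller than the smallest individual error, and the combined value lies between the inputs, weighted toward the more precise one.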
The probability density function used in the fit incorporates all three $`s`$-channel terms required by the tree-level calculations for the muon- and tau-pair final states. The electron-pair final states are described by both $`s`$- and $`t`$-channel $`Z^0`$ and photon exchange, requiring ten cross section terms, all of which are included in the probability density function. Whether three or ten terms, the probability density function used in the fit results from convolving the energy-dependent cross section formulas with a spectral function. The function incorporates initial state QED radiation, the intrinsic beam-energy spread and the effect of energy-dependent selection criteria. The parameters obtained from these fits require no further corrections for these effects. However, $`A_\tau `$ is corrected for a bias that results from the V-A structure of tau decays, and both tau- and electron-pair events require additional small corrections due to backgrounds. Preliminary results are summarized in the previous section. Comparison of $`A_e`$, $`A_\mu `$ and $`A_\tau `$ shows no significant differences among these asymmetry parameters. Assuming lepton universality, the weak mixing angle corresponding to the global asymmetry parameter, $`A_l`$, is given by $$\mathrm{sin}^2\theta _W^{eff}=0.23085\pm 0.00073.$$ (11) The weak mixing angle from $`A_{LR}`$ using hadrons yields $$\mathrm{sin}^2\theta _W^{eff}=0.23101\pm 0.00029.$$ (12) These results are consistent, and the combined preliminary result for the weak mixing angle at SLD is $$\mathrm{sin}^2\theta _W^{eff}=0.23099\pm 0.00026.$$ (13) Our result still differs from the most recent combined LEP I result by about $`2.7\sigma `$. It is interesting to note that the LEP leptonic average ($`\mathrm{sin}^2\theta _W^{eff}=0.23151\pm 0.00033`$), from the tau-polarization and the unpolarized forward-backward leptonic asymmetries, is consistent with our result within $`1.2\sigma `$. 
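The quoted consistency between the combined SLD value of Eq. (13) and the LEP leptonic average can be checked directly from the numbers given above; the short sketch below assumes uncorrelated uncertainties:

```python
import math

def tension(a, ea, b, eb):
    """Difference between two measurements in units of the
    quadrature-combined uncertainty, assuming no correlation."""
    return abs(a - b) / math.hypot(ea, eb)

sld = (0.23099, 0.00026)           # combined SLD value, Eq. (13)
lep_leptonic = (0.23151, 0.00033)  # LEP leptonic average quoted in the text

sig = tension(sld[0], sld[1], lep_leptonic[0], lep_leptonic[1])
print(round(sig, 1))  # about 1.2, matching the quoted 1.2 sigma
```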
However the LEP hadronic average ($`\mathrm{sin}^2\theta _W^{eff}=0.23230\pm 0.00032`$), from the $`b`$-quark and $`c`$-quark unpolarized forward-backward asymmetries and the hadronic charge asymmetry, is different from our result by $`3.1\sigma `$. ## Acknowledgment We thank the personnel of the SLAC accelerator department and the technical staffs of our collaborating institutions for their outstanding efforts on our behalf. This work was supported by the Department of Energy, the National Science Foundation, the Istituto Nazionale di Fisica Nucleare of Italy, the Japan-US Cooperative Research Project on High Energy Physics, and the Science and Engineering Research Council of the United Kingdom. ## \**List of Authors Kenji Abe,<sup>(21)</sup> Koya Abe,<sup>(33)</sup> T. Abe,<sup>(29)</sup> I. Adam,<sup>(29)</sup> T. Akagi,<sup>(29)</sup> H. Akimoto,<sup>(29)</sup> N.J. Allen,<sup>(5)</sup> W.W. Ash,<sup>(29)</sup> D. Aston,<sup>(29)</sup> K.G. Baird,<sup>(17)</sup> C. Baltay,<sup>(40)</sup> H.R. Band,<sup>(39)</sup> M.B. Barakat,<sup>(16)</sup> O. Bardon,<sup>(19)</sup> T.L. Barklow,<sup>(29)</sup> G.L. Bashindzhagyan,<sup>(20)</sup> J.M. Bauer,<sup>(18)</sup> G. Bellodi,<sup>(23)</sup> A.C. Benvenuti,<sup>(3)</sup> G.M. Bilei,<sup>(25)</sup> D. Bisello,<sup>(24)</sup> G. Blaylock,<sup>(17)</sup> J.R. Bogart,<sup>(29)</sup> G.R. Bower,<sup>(29)</sup> J.E. Brau,<sup>(22)</sup> M. Breidenbach,<sup>(29)</sup> W.M. Bugg,<sup>(32)</sup> D. Burke,<sup>(29)</sup> T.H. Burnett,<sup>(38)</sup> P.N. Burrows,<sup>(23)</sup> R.M. Byrne,<sup>(19)</sup> A. Calcaterra,<sup>(12)</sup> D. Calloway,<sup>(29)</sup> B. Camanzi,<sup>(11)</sup> M. Carpinelli,<sup>(26)</sup> R. Cassell,<sup>(29)</sup> R. Castaldi,<sup>(26)</sup> A. Castro,<sup>(24)</sup> M. Cavalli-Sforza,<sup>(35)</sup> A. Chou,<sup>(29)</sup> E. Church,<sup>(38)</sup> H.O. Cohn,<sup>(32)</sup> J.A. Coller,<sup>(6)</sup> M.R. Convery,<sup>(29)</sup> V. Cook,<sup>(38)</sup> R.F. Cowan,<sup>(19)</sup> D.G. 
Coyne,<sup>(35)</sup> G. Crawford,<sup>(29)</sup> C.J.S. Damerell,<sup>(27)</sup> M.N. Danielson,<sup>(8)</sup> M. Daoudi,<sup>(29)</sup> N. de Groot,<sup>(4)</sup> R. Dell’Orso,<sup>(25)</sup> P.J. Dervan,<sup>(5)</sup> R. de Sangro,<sup>(12)</sup> M. Dima,<sup>(10)</sup> D.N. Dong,<sup>(19)</sup> M. Doser,<sup>(29)</sup> R. Dubois,<sup>(29)</sup> B.I. Eisenstein,<sup>(13)</sup> I.Erofeeva,<sup>(20)</sup> V. Eschenburg,<sup>(18)</sup> E. Etzion,<sup>(39)</sup> S. Fahey,<sup>(8)</sup> D. Falciai,<sup>(12)</sup> C. Fan,<sup>(8)</sup> J.P. Fernandez,<sup>(35)</sup> M.J. Fero,<sup>(19)</sup> K. Flood,<sup>(17)</sup> R. Frey,<sup>(22)</sup> J. Gifford,<sup>(36)</sup> T. Gillman,<sup>(27)</sup> G. Gladding,<sup>(13)</sup> S. Gonzalez,<sup>(19)</sup> E.R. Goodman,<sup>(8)</sup> E.L. Hart,<sup>(32)</sup> J.L. Harton,<sup>(10)</sup> K. Hasuko,<sup>(33)</sup> S.J. Hedges,<sup>(6)</sup> S.S. Hertzbach,<sup>(17)</sup> M.D. Hildreth,<sup>(29)</sup> J. Huber,<sup>(22)</sup> M.E. Huffer,<sup>(29)</sup> E.W. Hughes,<sup>(29)</sup> X. Huynh,<sup>(29)</sup> H. Hwang,<sup>(22)</sup> M. Iwasaki,<sup>(22)</sup> D.J. Jackson,<sup>(27)</sup> P. Jacques,<sup>(28)</sup> J.A. Jaros,<sup>(29)</sup> Z.Y. Jiang,<sup>(29)</sup> A.S. Johnson,<sup>(29)</sup> J.R. Johnson,<sup>(39)</sup> R.A. Johnson,<sup>(7)</sup> T. Junk,<sup>(29)</sup> R. Kajikawa,<sup>(21)</sup> M. Kalelkar,<sup>(28)</sup> Y. Kamyshkov,<sup>(32)</sup> H.J. Kang,<sup>(28)</sup> I. Karliner,<sup>(13)</sup> H. Kawahara,<sup>(29)</sup> Y.D. Kim,<sup>(30)</sup> M.E. King,<sup>(29)</sup> R. King,<sup>(29)</sup> R.R. Kofler,<sup>(17)</sup> N.M. Krishna,<sup>(8)</sup> R.S. Kroeger,<sup>(18)</sup> M. Langston,<sup>(22)</sup> A. Lath,<sup>(19)</sup> D.W.G. Leith,<sup>(29)</sup> V. Lia,<sup>(19)</sup> C.Lin,<sup>(17)</sup> M.X. Liu,<sup>(40)</sup> X. Liu,<sup>(35)</sup> M. Loreti,<sup>(24)</sup> A. Lu,<sup>(34)</sup> H.L. Lynch,<sup>(29)</sup> J. Ma,<sup>(38)</sup> M. Mahjouri,<sup>(19)</sup> G. Mancinelli,<sup>(28)</sup> S. 
Manly,<sup>(40)</sup> G. Mantovani,<sup>(25)</sup> T.W. Markiewicz,<sup>(29)</sup> T. Maruyama,<sup>(29)</sup> H. Masuda,<sup>(29)</sup> E. Mazzucato,<sup>(11)</sup> A.K. McKemey,<sup>(5)</sup> B.T. Meadows,<sup>(7)</sup> G. Menegatti,<sup>(11)</sup> R. Messner,<sup>(29)</sup> P.M. Mockett,<sup>(38)</sup> K.C. Moffeit,<sup>(29)</sup> T.B. Moore,<sup>(40)</sup> M.Morii,<sup>(29)</sup> D. Muller,<sup>(29)</sup> V. Murzin,<sup>(20)</sup> T. Nagamine,<sup>(33)</sup> S. Narita,<sup>(33)</sup> U. Nauenberg,<sup>(8)</sup> H. Neal,<sup>(29)</sup> M. Nussbaum,<sup>(7)</sup> N. Oishi,<sup>(21)</sup> D. Onoprienko,<sup>(32)</sup> L.S. Osborne,<sup>(19)</sup> R.S. Panvini,<sup>(37)</sup> C.H. Park,<sup>(31)</sup> T.J. Pavel,<sup>(29)</sup> I. Peruzzi,<sup>(12)</sup> M. Piccolo,<sup>(12)</sup> L. Piemontese,<sup>(11)</sup> K.T. Pitts,<sup>(22)</sup> R.J. Plano,<sup>(28)</sup> R. Prepost,<sup>(39)</sup> C.Y. Prescott,<sup>(29)</sup> G.D. Punkar,<sup>(29)</sup> J. Quigley,<sup>(19)</sup> B.N. Ratcliff,<sup>(29)</sup> T.W. Reeves,<sup>(37)</sup> J. Reidy,<sup>(18)</sup> P.L. Reinertsen,<sup>(35)</sup> P.E. Rensing,<sup>(29)</sup> L.S. Rochester,<sup>(29)</sup> P.C. Rowson,<sup>(9)</sup> J.J. Russell,<sup>(29)</sup> O.H. Saxton,<sup>(29)</sup> T. Schalk,<sup>(35)</sup> R.H. Schindler,<sup>(29)</sup> B.A. Schumm,<sup>(35)</sup> J. Schwiening,<sup>(29)</sup> S. Sen,<sup>(40)</sup> V.V. Serbo,<sup>(29)</sup> M.H. Shaevitz,<sup>(9)</sup> J.T. Shank,<sup>(6)</sup> G. Shapiro,<sup>(15)</sup> D.J. Sherden,<sup>(29)</sup> K.D. Shmakov,<sup>(32)</sup> C. Simopoulos,<sup>(29)</sup> N.B. Sinev,<sup>(22)</sup> S.R. Smith,<sup>(29)</sup> M.B. Smy,<sup>(10)</sup> J.A. Snyder,<sup>(40)</sup> H. Staengle,<sup>(10)</sup> A. Stahl,<sup>(29)</sup> P. Stamer,<sup>(28)</sup> H. Steiner,<sup>(15)</sup> R. Steiner,<sup>(1)</sup> M.G. Strauss,<sup>(17)</sup> D. Su,<sup>(29)</sup> F. Suekane,<sup>(33)</sup> A. Sugiyama,<sup>(21)</sup> S. Suzuki,<sup>(21)</sup> M. Swartz,<sup>(14)</sup> A. 
Szumilo,<sup>(38)</sup> T. Takahashi,<sup>(29)</sup> F.E. Taylor,<sup>(19)</sup> J. Thom,<sup>(29)</sup> E. Torrence,<sup>(19)</sup> N.K. Toumbas,<sup>(29)</sup> T. Usher,<sup>(29)</sup> C. Vannini,<sup>(26)</sup> J. Va’vra,<sup>(29)</sup> E. Vella,<sup>(29)</sup> J.P. Venuti,<sup>(37)</sup> R. Verdier,<sup>(19)</sup> P.G. Verdini,<sup>(26)</sup> D.L. Wagner,<sup>(8)</sup> S.R. Wagner,<sup>(29)</sup> A.P. Waite,<sup>(29)</sup> S. Walston,<sup>(22)</sup> S.J. Watts,<sup>(5)</sup> A.W. Weidemann,<sup>(32)</sup> E. R. Weiss,<sup>(38)</sup> J.S. Whitaker,<sup>(6)</sup> S.L. White,<sup>(32)</sup> F.J. Wickens,<sup>(27)</sup> B. Williams,<sup>(8)</sup> D.C. Williams,<sup>(19)</sup> S.H. Williams,<sup>(29)</sup> S. Willocq,<sup>(17)</sup> R.J. Wilson,<sup>(10)</sup> W.J. Wisniewski,<sup>(29)</sup> J. L. Wittlin,<sup>(17)</sup> M. Woods,<sup>(29)</sup> G.B. Word,<sup>(37)</sup> T.R. Wright,<sup>(39)</sup> J. Wyss,<sup>(24)</sup> R.K. Yamamoto,<sup>(19)</sup> J.M. Yamartino,<sup>(19)</sup> X. Yang,<sup>(22)</sup> J. Yashima,<sup>(33)</sup> S.J. Yellin,<sup>(34)</sup> C.C. Young,<sup>(29)</sup> H. Yuta,<sup>(2)</sup> G. Zapalac,<sup>(39)</sup> R.W. Zdarko,<sup>(29)</sup> J. Zhou.<sup>(22)</sup> (The SLD Collaboration) <sup>(1)</sup>Adelphi University, Garden City, New York 11530, <sup>(2)</sup>Aomori University, Aomori , 030 Japan, <sup>(3)</sup>INFN Sezione di Bologna, I-40126, Bologna, Italy, <sup>(4)</sup>University of Bristol, Bristol, U.K., <sup>(5)</sup>Brunel University, Uxbridge, Middlesex, UB8 3PH United Kingdom, <sup>(6)</sup>Boston University, Boston, Massachusetts 02215, <sup>(7)</sup>University of Cincinnati, Cincinnati, Ohio 45221, <sup>(8)</sup>University of Colorado, Boulder, Colorado 80309, <sup>(9)</sup>Columbia University, New York, New York 10533, <sup>(10)</sup>Colorado State University, Ft. Collins, Colorado 80523, <sup>(11)</sup>INFN Sezione di Ferrara and Universita di Ferrara, I-44100 Ferrara, Italy, <sup>(12)</sup>INFN Lab. 
Nazionali di Frascati, I-00044 Frascati, Italy, <sup>(13)</sup>University of Illinois, Urbana, Illinois 61801, <sup>(14)</sup>Johns Hopkins University, Baltimore, Maryland 21218-2686, <sup>(15)</sup>Lawrence Berkeley Laboratory, University of California, Berkeley, California 94720, <sup>(16)</sup>Louisiana Technical University, Ruston,Louisiana 71272, <sup>(17)</sup>University of Massachusetts, Amherst, Massachusetts 01003, <sup>(18)</sup>University of Mississippi, University, Mississippi 38677, <sup>(19)</sup>Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, <sup>(20)</sup>Institute of Nuclear Physics, Moscow State University, 119899, Moscow Russia, <sup>(21)</sup>Nagoya University, Chikusa-ku, Nagoya, 464 Japan, <sup>(22)</sup>University of Oregon, Eugene, Oregon 97403, <sup>(23)</sup>Oxford University, Oxford, OX1 3RH, United Kingdom, <sup>(24)</sup>INFN Sezione di Padova and Universita di Padova I-35100, Padova, Italy, <sup>(25)</sup>INFN Sezione di Perugia and Universita di Perugia, I-06100 Perugia, Italy, <sup>(26)</sup>INFN Sezione di Pisa and Universita di Pisa, I-56010 Pisa, Italy, <sup>(27)</sup>Rutherford Appleton Laboratory, Chilton, Didcot, Oxon OX11 0QX United Kingdom, <sup>(28)</sup>Rutgers University, Piscataway, New Jersey 08855, <sup>(29)</sup>Stanford Linear Accelerator Center, Stanford University, Stanford, California 94309, <sup>(30)</sup>Sogang University, Seoul, Korea, <sup>(31)</sup>Soongsil University, Seoul, Korea 156-743, <sup>(32)</sup>University of Tennessee, Knoxville, Tennessee 37996, <sup>(33)</sup>Tohoku University, Sendai 980, Japan, <sup>(34)</sup>University of California at Santa Barbara, Santa Barbara, California 93106, <sup>(35)</sup>University of California at Santa Cruz, Santa Cruz, California 95064, <sup>(36)</sup>University of Victoria, Victoria, British Columbia, Canada V8W 3P6, <sup>(37)</sup>Vanderbilt University, Nashville,Tennessee 37235, <sup>(38)</sup>University of Washington, Seattle, Washington 
98105, <sup>(39)</sup>University of Wisconsin, Madison,Wisconsin 53706, <sup>(40)</sup>Yale University, New Haven, Connecticut 06511.
# Impact of atmospheric parameters on the atmospheric Cherenkov technique ## 1 Introduction The atmospheric Cherenkov technique for air-shower detection, in particular the imaging atmospheric Cherenkov technique, has become an increasingly mature experimental method of very-high-energy (VHE) $`\gamma `$-ray astronomy in recent years. Large effort has gone, for example, into the optimisation of $`\gamma `$-hadron separation, the energy calibration, and the evaluation of spectra of $`\gamma `$-ray sources. Imaging and non-imaging methods play an increasingly important part in measuring the spectrum and composition of cosmic rays, from TeV energies well into the knee region. Among the non-imaging techniques, (former) solar power plants with dedicated Cherenkov equipment have begun to achieve unprecedentedly low energy threshold for ground-based $`\gamma `$-ray detectors. The technique of imaging Cherenkov telescopes is also evolving, with several stereoscopic arrays of ten-meter class telescopes now under development , even larger single telescopes , and some hope for further progress in photon detection techniques. One very important common aspect in all the variants of the atmospheric Cherenkov technique is the atmosphere itself, as the target medium for the VHE cosmic particles, as the emitter of Cherenkov photons, and as the transport medium for those photons. The present paper tries to further our understanding of several important atmospheric parameters, mainly by extensive numerical simulations. Among the parameters investigated are the vertical profile of the atmosphere, the transmission and scattering of Cherenkov light, the importance of spherical versus plane-parallel geometry in shower simulations (or its insignificance, depending on zenith angle), and the refraction of Cherenkov light. 
Since available tools were not really adequate for most of the questions involved, the Cherenkov part of the CORSIKA air shower simulation program has been substantially extended and a flexible and very detailed simulation procedure for imaging Cherenkov telescopes developed (although the latter is of less relevance for the present paper). The major goal of this study is to be of practical usefulness for the experimentalist. ## 2 Atmospheric profiles For the detection of air showers by particle detectors, a pressure correction is usually sufficient to account for different atmospheric density profiles. For the atmospheric Cherenkov technique the situation is more complex since the shower is not only sampled at one altitude but light is collected from all altitudes. In addition to the different longitudinal shower development for different atmospheric density profiles, the atmospheric Cherenkov technique is also sensitive to the index of refraction $`n`$. Both the amount of Cherenkov light emitted and its emission angle are affected by the index of refraction at each altitude. The number of Cherenkov photons emitted per unit path length (in the wavelength range $`\lambda _1`$ to $`\lambda _2`$) is described by the well-known equation $$\frac{dN}{dx}=2\pi \alpha z^2\int _{\lambda _1}^{\lambda _2}\left(1-\frac{1}{(\beta n(\lambda ))^2}\right)\frac{1}{\lambda ^2}\,d\lambda ,$$ (1) with $`\alpha `$ being the fine structure constant ($`\approx `$ 1/137), $`z`$ being the charge number, and $`\beta =v/c`$. Particles with $`\beta <1/n(\lambda )`$ cannot emit Cherenkov light at wavelength $`\lambda `$. At visible wavelengths this results in an energy threshold of more than 20 MeV (35 MeV) for electrons or positrons and about 4.5 GeV (8 GeV) for muons at sea level (at 10 km altitude), respectively. The amount of light emitted by particles above threshold depends on the index of refraction. 
The opening angle $`\theta _\mathrm{c}`$ of the Cherenkov light cone above threshold also depends on $`n`$: $$\mathrm{cos}\theta _\mathrm{c}=1/(n\beta )$$ (2) which in the limit $`\beta =1`$ and for $`(n-1)\ll 1`$ corresponds to $$\theta _\mathrm{c}\approx \sqrt{2(n-1)}\ \text{radians}.$$ (3) Different atmospheric density profiles, generally, result in different indices of refraction near shower maximum and, thus, in different amounts of Cherenkov light emitted. Most of the light arriving in the inner region of fairly flat light density (of about 120 m radius at 2000 m altitude) is emitted near and after the shower maximum and is particularly affected by the longitudinal shower development as compared to the total amount of Cherenkov light. Since $`n(\lambda )-1`$ changes by only 5% over the wavelength range 300–600 nm, the range typically covered by photomultipliers, air-shower Cherenkov simulations are usually simplified by assuming a wavelength-independent index of refraction $`n`$, obtained at an effective wavelength. Another frequent simplification is to assume that $`n-1`$ is proportional to air density but, strictly, $`n`$ is a more complex function of pressure, temperature, and water vapour content. For the present work, the wavelength independence is also assumed but the dependence on water vapour content etc. is taken into account. For the purpose of this study the CORSIKA shower simulation program has been adapted to read tables of atmospheric profiles, including density and index of refraction, and to interpolate suitably between tabulated values. For the electromagnetic part of the shower development, based on EGS , several layers of exponential density profile are used in CORSIKA. Fitting of the corresponding parameters to tabulated vertical profiles can now be done at program start-up. The Cherenkov part of CORSIKA, originally based on work of M. Rozanska, S. Martinez, and F. 
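The threshold energies and opening angle quoted above follow directly from the Cherenkov condition $`\beta >1/n`$ and Eqs. (2)–(3). A short sketch, taking $`n-1\approx 2.93\times 10^{-4}`$ as an assumed typical sea-level value, reproduces the roughly 20 MeV electron and 4.5 GeV muon thresholds:

```python
import math

def cherenkov_threshold_mev(mass_mev, n):
    """Total-energy threshold E = m c^2 / sqrt(1 - 1/n^2), from the
    Cherenkov condition beta > 1/n."""
    return mass_mev / math.sqrt(1.0 - 1.0 / n ** 2)

def cherenkov_angle_rad(n, beta=1.0):
    """Opening angle from cos(theta_c) = 1/(n beta), Eq. (2)."""
    return math.acos(1.0 / (n * beta))

n_sea_level = 1.000293  # assumed typical value at sea level

e_thr = cherenkov_threshold_mev(0.511, n_sea_level)    # electron, ~21 MeV
mu_thr = cherenkov_threshold_mev(105.66, n_sea_level)  # muon, ~4.4 GeV
theta = cherenkov_angle_rad(n_sea_level)               # ~0.024 rad (~1.4 deg)
```

The exact angle agrees with the small-$`(n-1)`$ approximation of Eq. (3) to well below a microradian at sea level.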
Arqueros, has been rewritten to account for tabulated indices of refraction and for atmospheric refraction and includes an interface to arbitrary systems of telescopes or other Cherenkov light detectors. Seven atmospheric tables have been used for this work. Six tables were obtained from the MODTRAN program for atmospheric transmission and radiance calculations (tropical, mid-latitude summer and winter, subarctic summer and winter, and U.S. standard atmosphere 1976). An antarctic winter profile was constructed from radiosonde measurements above the Amundsen-Scott (south pole, 2800 m altitude) and Neumayer (latitude 70 S, near sea level) stations. Figure 1 shows the quite significant impact of the various atmospheric profiles on the lateral density of Cherenkov light in 100 GeV gamma-ray showers observed at an altitude of 2200 m (the impact being similar for any altitude far beyond shower maximum). The same atmospheric light transmission model is used in all cases (see section 3). The energy of 100 GeV has been chosen here because it will be a rather typical energy for the next generation of atmospheric Cherenkov experiments and because enough showers can easily be simulated such that shower fluctuations cancel out to a negligible level. A 60% higher light density near the shower axis is obtained for the antarctic winter as compared to the tropical profile. At moderate latitudes a seasonal effect of 15–20% is apparent and should be included in energy calibrations of IACT installations. Air-shower Cherenkov simulations used for the energy calibration of various experiments have to date mainly used the U.S. Standard Atmosphere 1976 profile. Inappropriate atmospheric models could lead to systematic errors in absolute flux calibrations of the Crab nebula – the de-facto standard VHE $`\gamma `$-ray source. Flux calibrations relative to the (not very accurately known) flux of cosmic rays would be less subject to assumed atmospheric profiles. 
These relative flux calibrations are most useful for comparison between different experiments but require that the same cosmic-ray flux and composition are assumed. The relative method also depends on applied hadronic interaction models which are still less accurate than electromagnetic shower codes even at energies where accelerator data are available. Note also that, among absolute calibration methods for Cherenkov telescopes, both calibration with a reference light source and with muon rings require detailed knowledge of the spectral response curve since neither the light source nor the muon rings have the same spectrum as the Cherenkov light from near the shower maximum. To a lesser extent this is also true for the relative method because hadron showers with a deeper shower maximum and some light from penetrating muons have, on average, less short-wavelength extinction than gamma showers. The atmospheric profile is not only important for the average light density at small core distances but also for the radial fall-off. At multi-TeV energies, this radial fall-off is useful as a means to discriminate between hadron and gamma-ray initiated showers and to estimate the cosmic-ray mass composition. Simulations with inappropriate atmospheric profiles could lead to systematics in both cases. The reasons for the different light profiles are illustrated to some extent by Figure 2, showing the average longitudinal development of showers for four profiles. For profiles with lower temperatures in the lower stratosphere and troposphere the maximum of Cherenkov emission is shifted downwards – to regions of higher density, i.e. higher index of refraction and thus higher Cherenkov efficiency – with respect to profiles with higher temperatures. 
It should be noted that the atmospheric thickness corresponding to the height of maximum of all Cherenkov emission remains largely unaffected (not more than 5 g/cm<sup>2</sup>), but the thickness of the maximum of emission into the inner 50 m is increasing substantially from the tropical to the antarctic winter profile (by about 30 g/cm<sup>2</sup>). The amount of Cherenkov light within 500 m from the core is roughly proportional to $`(n-1)`$ at the shower maximum, with about 15% difference between tropical and antarctic winter. If light arriving very far from the shower core is included, the differences are even smaller. Near the core, however, differences are large (see Figure 1) which is due to several effects: * The amount of Cherenkov emission is roughly proportional to $`(n-1)`$ at median altitude $`h_{\mathrm{med}}`$ of Cherenkov emission (or at maximum, as before). * With increasing $`(n_{\mathrm{med}}-1)`$ at $`h_{\mathrm{med}}`$ the Cherenkov cone opening angle is increased and the light is spread over a larger area – decreasing the central light density. * With decreasing $`h_{\mathrm{med}}`$ the distance between emission maximum and observer is decreased – increasing the central light density. * For Cherenkov light near the core, the median height of emission $`h_{\mathrm{med}}^{\prime }`$ is typically 1000–1500 m below that of all Cherenkov light ($`h_{\mathrm{med}}`$), which emphasises the geometrical factor even more. Qualitatively, the central light density $`\rho _\mathrm{c}`$ for vertical showers follows $$\rho _\mathrm{c}\propto \frac{n_{\mathrm{med}}-1}{(n_{\mathrm{med}}-1)(h_{\mathrm{med}}^{\prime }-h_{\mathrm{obs}})^2}=(h_{\mathrm{med}}^{\prime }-h_{\mathrm{obs}})^{-2}$$ (4) where $`h_{\mathrm{obs}}`$ is the observation level altitude. The numerator accounts for the Cherenkov efficiency, the denominator for the area of the light pool. 
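The purely geometric scaling of Eq. (4) can be illustrated with a short sketch; the emission heights below are hypothetical and not taken from the simulations:

```python
# Sketch of the scaling of Eq. (4): the central light density varies as
# the inverse square of the distance between the median emission height
# of near-core light (h'_med) and the observation level (h_obs).

def central_density_ratio(h_med_prime_a, h_med_prime_b, h_obs):
    """Ratio rho_a / rho_b of central light densities for two profiles
    with different median emission heights, under Eq. (4)."""
    return ((h_med_prime_b - h_obs) / (h_med_prime_a - h_obs)) ** 2

# A profile whose emission maximum sits 1 km lower (heights in metres)
# gives more light near the core, all else being equal:
ratio = central_density_ratio(8.0e3, 9.0e3, 2.2e3)
```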
The index of refraction cancels out in this approximation, leaving the distance between $`h_{\mathrm{med}}^{\prime }`$ and $`h_{\mathrm{obs}}`$ as the dominating factor. Since for increasing primary energy $`h_{\mathrm{med}}`$ approaches the observation altitude, the geometrical factor will be increasingly important. In simulations, it is possible to separate the effects of the atmospheric profiles on shower development and on Cherenkov emission. In Figure 3 the shower development is treated with the CORSIKA built-in U.S. standard atmosphere approximation but the index of refraction is taken from different atmospheric profiles. Since the impact of the different distances between observation level and median emission altitude is not present in this case, any differences in lateral light density are much smaller. The position and shape of the rim of the ‘light pool’ 100–120 m from the core are the most obvious differences remaining, which are due to different Cherenkov cone opening angles in the lower stratosphere. ## 3 Transmission of Cherenkov light The atmospheric extinction of light is another source of concern for the energy calibration of atmospheric Cherenkov experiments and to some extent also for the image parameters of telescopes. There are several sources of extinction: absorption bands of several molecules, molecular (Rayleigh) scattering as well as aerosol (Mie) scattering and absorption. For a detailed introduction see for example . The relevance of the various absorbers or scatterers at different wavelengths is illustrated in Figure 4. At wavelengths below 340 nm ozone (O<sub>3</sub>) is a very important absorber – not only in the ozone layer but even near ground. Relevant absorption bands are the Hartley bands in the 200–300 nm range and the Huggins bands extending to 340 nm. Near 600 nm there are the weak Chappuis bands. Normal oxygen (O<sub>2</sub>) can be dissociated by light below 242 nm, leading to the Herzberg continuum. 
In addition, there is the Herzberg band at 260 nm. The O<sub>2</sub> absorption is of no concern to most Cherenkov experiments – typically using photomultipliers (PMs) with borosilicate glass windows which are insensitive below 290 nm – and is in fact frequently neglected. However, O<sub>2</sub> absorption is a limiting factor for UV observations. Other molecules are of little relevance in the near-UV and visible range. Most Cherenkov light in the PM sensitivity range is actually lost by molecular scattering. Although some of the light may also be scattered into the viewing angle, such scattered light is generally not important and scattering can be considered as an absorption process. The same argument applies to aerosols, where both scattering and absorption play a role. The relevance of scattered light is discussed in Section 4. While molecular scattering and O<sub>2</sub> absorption are easily predictable and almost constant at any site, both aerosols and ozone are site-dependent and variable. Aerosols are mainly limited to the boundary layer of typically 1–2.5 km thickness above the surrounding terrain, where the diurnal variation and the dependence on ground material and wind speed is largest. In the boundary layer, the heating of the ground by solar radiation leads to turbulence and rapid vertical exchange of air and dust. Not just near ground but even in the stratosphere the aerosols play a role – including meteoric and volcanic dust. Ozone also shows diurnal and seasonal variations. The total extinction of star light is easily measured (and a routine procedure at optical observatories), by fitting the function $$\mathrm{ln}I(\lambda )=\mathrm{ln}I_0(\lambda )-\tau _1(\lambda )\mathrm{sec}z$$ (5) to several observations of a reference star (here in the plane-parallel atmosphere approximation). 
In this equation $`I`$ is the measured intensity, $`I_0`$ the true intensity, $`\tau _1`$ the optical depth per unit airmass and $`\mathrm{sec}z`$ the secant of the zenith angle. For this procedure one or several sources are measured at widely different zenith angles, which allows $`I_0`$ and $`\tau _1`$ to be fitted. This procedure, however, cannot disentangle the vertical structure of absorbers. Different assumptions on this structure easily lead to differences of 5–10% in the amount of Cherenkov light, even at mountain altitude. At sea level, even differences of up to 30% between different calculations can be traced back to different assumptions on the extinction. A bad assumption, for example, is to take the density of aerosols as proportional to the air density; one such case is illustrated in Figure 5. The aerosol-air proportionality assumption leads to an over-estimate (by 4–8%) of Cherenkov light even if the measured star-light extinction at the actual (mountain) altitude is taken into account. The reason is that the Cherenkov light is produced, say, halfway down in the atmosphere, implying 50% of the star-light extinction under this assumption, whereas actually some 80–90% of the aerosol extinction happens below the average Cherenkov production altitude. The aerosol-density proportionality assumption together with the extrapolation of mountain-altitude extinction measurements down to sea level, for example, leads to a severe over-estimate of the Cherenkov light intensity at sea level. A much more realistic model of the aerosol vertical structure and aerosol properties, plus all the relevant molecular absorption and scattering, is included in the MODTRAN program. MODTRAN has been used for the extinction models used in this paper. Unless otherwise noted, a U.S. standard profile with rural haze model of 23 km sea-level horizontal visibility has been used. Transmission curves obtained with this model are shown in Figure 6. 
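The extinction fit of Eq. (5) is a straight-line fit of $`\mathrm{ln}I`$ against $`\mathrm{sec}z`$. It can be sketched with synthetic, noiseless measurements; the "true" values of $`\mathrm{ln}I_0`$ and $`\tau _1`$ below are assumed only for illustration:

```python
import math

# Assumed true values used to generate synthetic star observations
ln_i0_true, tau1_true = 2.0, 0.15

# "Measure" the star at several zenith angles, per Eq. (5)
zenith_deg = [10.0, 30.0, 45.0, 60.0]
sec_z = [1.0 / math.cos(math.radians(z)) for z in zenith_deg]
ln_i = [ln_i0_true - tau1_true * s for s in sec_z]

# Least-squares straight-line fit of ln I against sec z:
# the slope gives -tau1, the intercept gives ln I0
n = len(sec_z)
mean_x = sum(sec_z) / n
mean_y = sum(ln_i) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(sec_z, ln_i))
         / sum((x - mean_x) ** 2 for x in sec_z))
tau1_fit = -slope
ln_i0_fit = mean_y - slope * mean_x
```

With noiseless data the fit recovers the assumed values exactly; with real observations the spread in zenith angle controls how well the slope (and hence $`\tau _1`$) is constrained.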
In the case of the observatory on La Palma (operated by the IAC), the tropical profile and navy maritime haze model – with rather little aerosol extinction – is in excellent agreement with measured extinction curves as well as with the long-term average V band extinction, as provided by the Isaac Newton Group on La Palma and the Royal Greenwich Observatory, Cambridge. Temperature and pressure profiles of the MODTRAN tropical profile are also in quite good agreement with radiosonde measurements from the nearby island of Tenerife . Tropospheric ozone measurements (with the same radiosondes), however, exceed the MODTRAN model by a factor of 1.5–2. In Figure 7 the transmission curves obtained with the built-in tropical profile and with the profile taken from the radiosonde measurements are compared. For Cherenkov measurements with borosilicate window PMs the differences are insignificant, but for UV observations they are important. As a further atmospheric variable, the impact of volcanic dust was studied. The 1991 Pinatubo eruption, for example, led to 30 million tons of stratospheric dust – compared to 1 million tons before the eruption. It is visible in La Palma extinction measurements (obtained with the Carlsberg Meridian Circle and made available by the Royal Greenwich Observatory, Cambridge) for a period of two years; the eruption had a large (5–10%) impact on the extinction of stellar light for one and a half years. MODTRAN provides a set of options to enhance the amount of volcanic dust in the calculations. Results for different amounts of dust and different Cherenkov emission altitudes are shown in Figure 8. The volcanic dust extinction is insignificant for altitudes below about 14 km and as such has little impact on Cherenkov measurements. A calibration of Cherenkov telescopes with stellar light under high volcanic dust conditions could lead to an over-estimate of shower energies and, thus, fluxes. 
Some of the forthcoming Cherenkov installations will likely be installed at the base of a mountain instead of at the top, for environmental or infrastructure reasons. It seems appropriate to compare the expected atmospheric transmission for sites at different altitudes. Since the aerosol absorption is strongest in the boundary layer, the altitude of the surroundings (on a scale of the order of a hundred kilometers and more) is also relevant. If the ‘base’ of the mountain is still above the boundary layer, the reduced altitude should not affect the transmission very much. If the base is already at the bottom of the boundary layer, even a move from 2.4 km to 1.8 km altitude results in 10% less Cherenkov light (Figure 9, as deduced with the MODTRAN rural haze model). These calculations still assume clear nights, while in practice the base of the mountain may be more frequently under a cloud layer or affected by ground fog than the top. This, of course, can only be resolved by a long-term site comparison. ## 4 Scattering of light In the preceding discussion, all molecular and aerosol scattering of Cherenkov light is treated as an absorption process. This assumption has apparently been used in all atmospheric Cherenkov simulations so far. Estimates of the impact of scattered Cherenkov light have, however, been made for the fluorescence technique . In this section quantitative results of full Cherenkov simulations with scattered light are presented.<sup>1</sup><sup>1</sup>1Since fluorescence light is not available with CORSIKA yet, these simulations could not be applied to the fluorescence technique at this stage. When considering scattered light one has to take the relevant integration time into account. Hardly any scattered light arrives within or even ahead of the Cherenkov light shower front; most scattered light arrives with quite significant delay due to its detour. 
For short integration times, small-angle scattering is responsible for most of the scattered light. Rayleigh scattering (of unpolarised light) is described by the simple normalised phase function of scattering angle $`\gamma `$ $$P_\mathrm{R}(\gamma )=\frac{3}{16\pi }\frac{2}{2+\delta }\left((1+\delta )+(1-\delta )\mathrm{cos}^2\gamma \right)$$ (6) with $`\delta `$ being the depolarisation factor due to anisotropic molecules ($`\delta \approx 0.029`$). For aerosol scattering – in principle described by Mie scattering theory – the situation is much more complicated and depends on size distribution, composition, and shapes of aerosol particles. In all practical cases, aerosol scattering is quite asymmetric with a forward peak. Due to its forward peak, aerosol scattering generally dominates over molecular scattering in Cherenkov light measurements with short integration times. Although Cherenkov light of air showers is partially polarised, the polarisation is ignored in the following because it is only relevant for large-angle scattering. It should also be noted that the amount of aerosol scattering (and to some extent also its phase function) can be highly variable – a fact that is very important for the air shower fluorescence technique where the contamination of the weak fluorescence light by scattered (in addition to direct) Cherenkov light has to be (and usually is) taken into account. In the following, an average amount and phase function for aerosol scattering is assumed which should be more or less typical for a good astronomical site situated well above the boundary layer in which turbulent mixing due to the diurnal temperature cycle is relevant. Aerosol scattering and absorption coefficients have been calculated with the MODTRAN program. 
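As a quick numeric check of Eq. (6), the following sketch evaluates the Rayleigh phase function and verifies its normalisation over the full sphere (the grid size is an arbitrary choice):

```python
import numpy as np

DELTA = 0.029  # depolarisation factor for anisotropic air molecules

def rayleigh_phase(gamma, delta=DELTA):
    """Normalised Rayleigh phase function P_R(gamma) of Eq. (6)."""
    return (3.0 / (16.0 * np.pi)) * (2.0 / (2.0 + delta)) * (
        (1.0 + delta) + (1.0 - delta) * np.cos(gamma) ** 2)

# Sanity check: P_R integrates to 1 over the full sphere,
# with solid-angle element dOmega = 2*pi*sin(gamma)*dgamma.
gamma = np.linspace(0.0, np.pi, 200001)
integrand = 2.0 * np.pi * np.sin(gamma) * rayleigh_phase(gamma)
norm = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(gamma)))
```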
The phase function can be approximated by a Henyey-Greenstein phase function with asymmetry parameter $`g`$: $$P_{\mathrm{HG}}(\gamma )=\frac{1}{4\pi }\frac{(1-g^2)}{(1-2g\mathrm{cos}\gamma +g^2)^{3/2}}.$$ (7) The tropospheric aerosol phase function in MODTRAN has an asymmetry $`g=\langle \mathrm{cos}\gamma \rangle \approx 0.7`$ and is well represented in the angle range $`0^{\circ }<\gamma <140^{\circ }`$ by a Henyey-Greenstein phase function of $`g\approx 0.66`$ (see Figure 10). In the following, $`g=0.7`$ is used. For the shower simulations with a modified CORSIKA 5.70 program a U.S. standard atmospheric profile was used unless otherwise noted. Atmospheric transmission coefficients (including absorption and scattering on aerosols) were used as calculated with the MODTRAN rural haze model. The scattering algorithm used with CORSIKA includes multiple scatterings although these turned out to be insignificant. An observation level at an altitude of 2200 m is assumed. Since the relevance of scattered light is wavelength dependent, usual observation conditions are simulated by applying the quantum efficiency curve of a photomultiplier (PM) with borosilicate glass window and bi-alkali photocathode and the reflectivity of an aluminised mirror. The relevance of scattered light integrated over the whole sky is shown in Figure 11 for 100 TeV proton showers. Note that within the central 200 m, 1–3% of the total Cherenkov light is scattered light (for integration times below 100 ns). This fraction increases with distance since the lateral distribution of scattered light is flatter than that of the direct light. Within the central kilometer, aerosol scattered light dominates over Rayleigh scattered light. Beyond a few kilometers from the core and for integration times of more than one microsecond, scattered light eventually exceeds the direct light. 
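Similarly, Eq. (7) can be checked numerically: the sketch below verifies that $`P_{\mathrm{HG}}`$ is normalised and that its mean cosine $`\langle \mathrm{cos}\gamma \rangle `$ equals the asymmetry parameter, using the value $`g=0.7`$ adopted in the text:

```python
import numpy as np

def henyey_greenstein(gamma, g):
    """Henyey-Greenstein phase function P_HG(gamma) of Eq. (7)."""
    return (1.0 / (4.0 * np.pi)) * (1.0 - g**2) / (
        1.0 - 2.0 * g * np.cos(gamma) + g**2) ** 1.5

def integrate(y, x):
    """Simple trapezoidal rule."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

g = 0.7
gamma = np.linspace(0.0, np.pi, 400001)
weight = 2.0 * np.pi * np.sin(gamma)     # solid-angle element dOmega/dgamma
norm = integrate(weight * henyey_greenstein(gamma, g), gamma)
mean_cos = integrate(weight * np.cos(gamma) * henyey_greenstein(gamma, g),
                     gamma)
```

Both the normalisation and the relation $`\langle \mathrm{cos}\gamma \rangle =g`$ hold analytically for the Henyey-Greenstein form; the numerical integrals recover them to high precision.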
Note that, for the short integration times, the smaller field of view of non-imaging Cherenkov counters like AIROBICC or even BLANCA will not greatly reduce the amount of scattered light. Most of the scattered light arriving within a few tens of ns of the direct light is scattered by no more than ten degrees. In the imaging atmospheric Cherenkov technique the Cherenkov light is integrated over a much smaller field of view and, generally, over a very short time interval (20 ns or even less). In this case the impact of scattered light is even smaller. As a conservative measure of scattered light all light in a 5° diameter field of view of a Cherenkov telescope centered on a gamma source is counted here – although in practice pixels more than 0.5° from the image major axis would generally be below a minimum threshold and not counted. Note that almost all light scattered by less than 2.5° has a delay of less than 10 ns with respect to direct light and short integration times would not significantly suppress scattered light. In the small field of view of such telescopes the scattered light has approximately the same path length as direct light and the ratio of scattered to direct light approximately scales with the airmass ($`1/\mathrm{cos}z`$ for a plane-parallel atmosphere). Since experimental groups are more concerned about scattered light in large-zenith-angle observations than near vertical, Figure 12 shows the case for zenith angle $`z=60^{\circ }`$. Even in this case, scattered light is quite marginal (of the order of $`10^{-3}`$). It should be noted, however, that some aerosol scattering phase function models for the boundary layer (e.g. as in MODTRAN, see Figure 10) have an additional peak in the very forward direction, which is not present in the MODTRAN 2–10 km tropospheric scattering model and not accounted for by the Henyey-Greenstein phase function. 
Under such circumstances – mainly Cherenkov experiments not far above the surrounding terrain or observing in the presence of thin clouds or fog – small-angle scattering of the Cherenkov light could be up to an order of magnitude more severe. This, for example, is the case in simulations of sea level observations with the MODTRAN rural haze phase function for aerosol scattering in the lower 2 km, where scattered light in the 5° field of view amounts to 1% of direct light at 60° zenith angle. Even at that level, scattered light should not be of major concern. The impact of thin clouds – which has not been simulated here – should be primarily on the trigger rate. Depending on the cloud altitude, the image length parameter could be reduced by the loss of light emitted above the layer, but the impact of scattered light on image parameters would still be small. Image parameters could rather be affected by the change of night-sky noise – which could either be reduced due to absorption or increased due to scattering of urban light. ## 5 Planar versus spherical atmosphere The CORSIKA program at present uses a plane-parallel atmosphere for shower simulations – except for a special horizontal version that is not Cherenkov-enabled. The qualitative implications of a planar versus a spherical atmosphere are nevertheless easy to show (see Figure 13). With spherical geometry the shower maximum (at constant atmospheric depth) is at a lower altitude, where the index of refraction is larger and more Cherenkov light is emitted. This geometry effect is only relevant for large zenith angles. Instead of shower simulations by the Monte Carlo method, an analytical approximation of the average Cherenkov emission of gamma showers is used here. It takes the longitudinal shower profile, the Cherenkov emission threshold depending on the index of refraction, and the emission of electrons above threshold into account. 
This approximation reproduces the longitudinal profile of Cherenkov emission in CORSIKA simulations very well for all model atmospheres. It has been used with both spherical and planar atmospheric geometry to show that the difference is insignificant below 60° zenith angle, and only marginally significant below 70° (see Figure 14). For hadronic showers, the spherical geometry has the small additional effect at very large zenith angles that fewer pions and kaons decay before their next interaction, resulting in fewer muons. ## 6 Refraction One of the recent achievements in VHE $`\gamma `$-astronomy is the fact that TeV $`\gamma `$-ray sources can be located with sub-arcminute accuracy . In addition, observations at large zenith angles are carried out by more and more Cherenkov telescope experiments, either to extend the observation time for a source or the effective area for high-energy showers, or to detect sources only visible at large zenith angles. Refraction of Cherenkov light in the atmosphere is therefore of increasing concern but is usually either neglected entirely or only considered in a qualitative way. The following discussion is based on numerical ray-tracing. The refraction method built into recent CORSIKA versions is based on a fit to such ray-tracing. For a plane-parallel atmosphere Snell’s law of refraction is $$n(z)\mathrm{sin}\theta (z)=\mathrm{const}.$$ (8) with $`n(z)`$ being the index of refraction at altitude $`z`$ and $`\theta (z)`$ being the zenith angle of the ray at this altitude. For a spherical atmosphere $$n(z)(R_\mathrm{E}+z)\mathrm{sin}\theta (z)=\mathrm{const}.$$ (9) has to be used instead, with $`R_\mathrm{E}`$ being the earth radius. The refraction of Cherenkov light emitted in the atmosphere is evidently smaller than that of star light seen from the same direction. 
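The spherical form of Snell’s law, Eq. (9), lends itself directly to ray-tracing. The sketch below uses an assumed exponential refractivity profile (the sea-level refractivity and scale height are illustrative values, not the profile used in CORSIKA) to evaluate the local zenith angle of a ray at altitude from the conserved invariant:

```python
import math

R_E = 6.371e6    # Earth radius [m]
H = 8.0e3        # refractivity scale height [m], an assumed value
N0 = 2.8e-4      # n - 1 at sea level, an assumed value

def n(z):
    """Toy exponential model of the index-of-refraction profile."""
    return 1.0 + N0 * math.exp(-z / H)

def local_zenith(z, theta0):
    """Local zenith angle theta(z) of a ray reaching the ground (z = 0)
    at zenith angle theta0, from the invariant of Eq. (9):
    n(z) * (R_E + z) * sin(theta(z)) = const."""
    const = n(0.0) * R_E * math.sin(theta0)
    return math.asin(const / (n(z) * (R_E + z)))

theta0 = math.radians(60.0)
theta_up = local_zenith(10.0e3, theta0)   # ray direction at 10 km altitude

# The invariant is the same at both altitudes by construction:
inv_ground = n(0.0) * R_E * math.sin(theta0)
inv_up = n(10.0e3) * (R_E + 10.0e3) * math.sin(theta_up)
```

Full numerical ray-tracing integrates this relation along the entire path; the ray is slightly more vertical at altitude than at the ground, which is the origin of the bending.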
Thus, even when using guide stars for tracking of Cherenkov telescopes, a correction for refraction has to be applied to take full advantage of measured shower directions. For $`\gamma `$-showers of 0.1–1 TeV the Cherenkov light is refracted typically 60–50% (70–60%) as much as stellar light up to 40° (near 60°) zenith angle, with less refraction for showers of higher energy (see Figure 15). The different amount of refraction of light from the beginning and the end of the shower, respectively, leads to a change of image length. When an inclined shower is seen from below the axis, it appears slightly shorter, and when seen from above the axis, it appears longer – by a fraction of an arcminute. Apart from the change in the Cherenkov light direction, refraction also affects the arrival position and time. The impact on the arrival time is marginal if measured in a plane perpendicular to the shower axis and is well below one nanosecond even at 80° zenith angle. The arrival position in the shower plane is affected by typically 3 meters (18 m) in TeV $`\gamma `$-showers of 60° (75°) zenith angle. Actual changes depend on the height of emission and lead to a small distortion of the lateral shape – the average shift being irrelevant. ## 7 Conclusions The impact of a number of atmospheric parameters on the atmospheric Cherenkov technique in general has been presented. It turns out that there are several such parameters which deserve more attention in the experimental analysis. This applies to the imaging technique in VHE $`\gamma `$-ray astronomy and to non-imaging techniques – both in $`\gamma `$-ray and cosmic-ray studies. The vertical structure of the atmosphere is the most striking of these parameters, with up to 60% difference in Cherenkov light between tropical and polar models and some 15–20% seasonal variation at mid-latitude sites. 
The appropriate structure could easily be applied in shower simulations for particular sites – while most calculations so far were restricted to US Standard Atmosphere 1976 or similar profiles. Relevant measurements are a routine procedure at many meteorological institutions, and data are readily available. Atmospheric extinction of Cherenkov light appears as another area which perhaps deserves more care, as the accuracy of the experiments improves. In particular, the assumption that extinction (scattering and absorption) by aerosols is, like Rayleigh scattering, only a function of traversed atmospheric thickness, is not a very good one. Extrapolation of high-altitude extinction measurements to low-level sites must be avoided. Cherenkov experiments should, at least, apply standard astronomical extinction monitoring procedures – if not already available from co-located optical observatories. In order to reduce energy systematics below ten percent this might, however, not be enough. Measurement of the aerosol vertical structure is – in contrast to stellar light extinction or to the air density profile – rather difficult. Lidar remote sensing methods, for example, measure primarily the back-scattered amount of light, although lidar methods are available that also measure the extinction profile with a reasonable 10% accuracy. The ratio of back-scattered light to total extinction depends very much on aerosol composition. Even otherwise similar phase functions (see Figure 10) easily differ by a factor of two in the back-scattering regime. As a consequence, some model-dependence will likely remain in transmission calculations, but the best available models for a site should be used. For UV Cherenkov experiments sensitive below 300 nm wavelength, the variations of ozone profiles are an additional area of concern and absorption by oxygen can no longer be ignored. 
Scattered Cherenkov light has a rather minor impact for Cherenkov experiments – in contrast to air shower fluorescence experiments. Scattering should, however, be of some concern to wide-angle non-imaging experiments using the Cherenkov light lateral distribution either to discriminate between hadronic and gamma showers or to disentangle the cosmic-ray mass composition. The assumption of a plane-parallel atmosphere in air shower simulations becomes a problem when going to zenith angles beyond about $`75^{\circ }`$. These large zenith angles are very important for fluorescence experiments in order to achieve the largest possible effective areas. For Cherenkov $`\gamma `$-ray experiments the increase in effective area, however, will – under most experimental conditions that can be envisaged – be more than counterbalanced by the much worse gamma-hadron discrimination when observing showers from more than 60 km distance. As a consequence, implementation of the proper spherical geometry in shower simulations is of less importance to the Cherenkov than to the fluorescence method. Atmospheric Cherenkov light refraction – until recently negligible compared to experimental errors – has to be accounted for when locating sources with sub-arcminute accuracy as now possible. For the time being, an approximate correction, e.g. as taken from Figure 15, should generally be sufficient. For more accurate results, enhancements to the CORSIKA code should be available with new CORSIKA versions. In addition to those atmospheric parameters covered in this paper, there are other, more subtle effects at work. One such example would be the time-variable night-sky background, e.g. due to airglow and scattering of urban illumination – something that should be kept in mind but is probably beyond the scope of what should be or can be accurately modelled in shower simulations. The geomagnetic field – not covered by this paper – has an additional impact. 
For present IACT experiments, say at energies of the order of 1 TeV, the main impact is a variation of the Cherenkov light intensity by a few percent between observations parallel to and perpendicular to the field direction. Image parameters are little affected at these energies. For future experiments observing lower energy showers with better angular resolution the geomagnetic field can be expected to be more significant. ## Acknowledgements Radiosonde measurements for Tenerife were kindly provided by the Izana Global Atmospheric Watch (GAW) Observatory. The use of further radiosonde data from the Amundsen-Scott and Neumayer stations is acknowledged, as well as the use of La Palma extinction data provided by the Isaac Newton group on La Palma, Canary Islands, and the Royal Greenwich Observatory, Cambridge. Most of this paper is based on calculations with CORSIKA 5.7 and it is a pleasure to thank its authors, in particular D. Heck, for their support. The MODTRAN 3 v1.5 program was kindly provided by the Phillips Laboratory, Geophysics Directorate, at Hanscom AFB, Massachusetts (USA).
# Quantitative wave-particle duality and non-erasing quantum erasure ## I Experimental setup and procedure In our experiments single photons (at $`670\mathrm{nm}`$) were directed into a compressed Mach-Zehnder interferometer (see Fig. 1). An adjustable half waveplate (HWP) in path 1 was used to entangle the photon’s path with its polarization (i.e., with the WWM), thus yielding WW Knowledge . Our adjustable analysis system — quarter waveplate (QWP), HWP, and calcite prism (PBS) — allowed the polarization WWM to be measured in any arbitrary basis. The photons were detected using Geiger-mode avalanche photodiodes — Single Photon Counting Modules (EG&G #SPCM-AQ, efficiency $`60\%`$). The input source, described below, was greatly attenuated so that the maximum detection rates were always less than $`50,000\mathrm{s}^{-1}`$; for the interferometer passage time of 1 ns, this means that on average fewer than $`10^{-4}`$ photons were in the interferometer at any time. The probability for having no photon at all is close to unity at any arbitrary instant, but state reduction removes this part a posteriori as soon as a detector “clicks.” The reduced state is virtually indistinguishable from a one-photon Fock state because the probability for two or more photons is negligibly small. This one-photon-at-a-time operation is essential to allow sensible discussion of the likely path taken by an individual light quantum . Perhaps unnecessarily, we emphasize that our experiment is not intended to be a direct proof of the quantum nature of light. Rather, we accept the existence of photons as an established experimental fact . The quantized electromagnetic field has a classical limit as a field (unlike other quantum fields that have, at best, a limit in terms of particles), and some properties of the quantum field have close classical analogs. 
In particular, the counting rates of single-photon interferometers, such as the one used in our experiment, are proportional to the intensities of the corresponding classical electromagnetic field. But there is no allowance for individual detector clicks in Maxwell’s equations , nor for the quantum entanglement of photonic degrees of freedom that we exploit. And clearly, the trajectory of a light quantum through the interferometer is a concept alien to classical electrodynamics, as is the experimenter’s knowledge $`K`$ about this trajectory. For Visibility measurements the polarization analyzer was lowered out of the beam path, and the maximum and minimum count rates on detector 1 were measured as the length of path 2 was varied slightly (via a piezoelectric transducer). After subtracting out the separately-measured detector background (i.e., the count rate when the input to the interferometer was blocked, typically 100–400 $`\mathrm{s}^{-1}`$), the visibility was calculated in the standard manner: $`V=(\mathrm{Max}-\mathrm{Min})/(\mathrm{Max}+\mathrm{Min})`$. For the determination of the Likelihood, and hence the Knowledge, the following procedure was used. With the polarization analyzer in place, and path 2 blocked, the counts on the two detectors were measured. Detector 1 (2) looked at polarization $`\lambda `$ ($`\lambda ^{\prime }`$), determined by the analysis settings. After subtracting the backgrounds measured for each detector, the count rates from detector 1 were scaled by the relative efficiency of the two detectors: $`\eta _2/\eta _1=1.11\pm 0.01`$. (In this way our calculated value of the Knowledge corresponds to what would have been measured if our detectors had been identical and noiseless.) Call the resulting scaled rates $`R_{1\lambda }\equiv R(\text{path 1},\text{polarization }\lambda )`$ and $`R_{1\lambda ^{\prime }}\equiv R(\text{path 1},\text{polarization }\lambda ^{\prime })`$. Next, we measure the corresponding quantities for path 2: $`R_{2\lambda }`$ and $`R_{2\lambda ^{\prime }}`$. 
The betting strategy is the one introduced by Wootters and Zurek and optimized in Ref. : Pick the path which contributes most to the probability of triggering the detector that has actually fired. The Likelihood is then $$L=\frac{\mathrm{max}\{R_{1\lambda },R_{2\lambda }\}+\mathrm{max}\{R_{1\lambda ^{\prime }},R_{2\lambda ^{\prime }}\}}{R_{1\lambda }+R_{2\lambda }+R_{1\lambda ^{\prime }}+R_{2\lambda ^{\prime }}}.$$ (2) ## II Experimental results ### A Wave-particle duality for pure states Figure 2 shows the results when a pure vertical-polarization state (V) was input to the interferometer, as a function of the internal HWP’s orientation. As expected, when the HWP is aligned to the vertical ($`\theta _{\mathrm{HWP}}=0`$), therefore leaving the polarization unchanged, we see nearly complete Visibility and get no WW Knowledge. The measured values of $`V`$ are slightly lower than the theoretical curve because the intrinsic visibility of the interferometer (even without the HWP) is only $`98\%`$, due to nonideal optics . Conversely, with the HWP set (at $`\theta _{\mathrm{HWP}}=45^{\circ }`$) to rotate the polarization in path 1 to horizontal (H), the Visibility is essentially zero, and the Knowledge nearly equal to 1. Formally, the spatial wave function and the polarization WWM wave function are completely entangled by the HWP: $`|\psi \rangle \propto |1\rangle |𝖧\rangle _{\mathrm{WWM}}+e^{i\varphi }|2\rangle |𝖵\rangle _{\mathrm{WWM}}`$, where $`\varphi `$ is the relative phase between paths 1 and 2. Tracing over the WWM effectively removes the coherence between the spatial modes. That a small visibility persists in our results can be explained by slight residual polarization transformations by the interferometer mirrors and beam splitters, so that the polarizations from the two paths are not completely orthogonal; and by the remarkable robustness of interference — both theoretically and experimentally, $`V>4.4\%`$ even though $`L>99.9\%`$! In Fig. 
2 we also display two sets of Knowledge data, one taken in the optimal basis , the other fixed in the horizontal-vertical basis. These data demonstrate that Knowledge can depend on the measurement technique. With the optimal basis, the value of $`V^2+K^2`$ is always very close to the predicted unit value; our experiment is the first to verify this. The average of all the data points in Fig. 2 gives $`0.976\pm 0.017`$. The slight discrepancy with the predicted value of $`1`$ is mostly due to the intrinsic visibility of the interferometer — for the minimum-visibility arrangement, $`V^2+K^2=0.998`$. ### B Wave-particle duality for (partially-)mixed states Using photons from an attenuated quartz halogen lamp that was spectrally-filtered with a narrow-band interference filter (centered at $`670\mathrm{nm}`$, $`1.5\mathrm{nm}`$ FWHM) and spatially-filtered via a single-mode optical fiber, we explored Eq. (1) for mixed states (slight polarizing effects from the fiber actually led to $`4\%`$ residual net polarization). The measurements of Visibility and Knowledge for this nearly completely-mixed input state have values close to the theoretical prediction of $`0`$ (Fig. 3a). $`K=0`$ for a completely-mixed WWM state because any unitary transformations on an unpolarized input also yield an unpolarized state (the density matrix is unaffected), so there is no WW information. That $`V=0`$ can be understood by examining the behavior of orthogonal pure WWM states, with no definite phase relationship between them. In the basis where the HWP rotates the WWM states by $`90^{\circ }`$, the orthogonal polarizations from paths 1 and 2 cannot interfere; in the basis aligned with the HWP’s axes, each polarization individually interferes, but the interference patterns are shifted relatively by $`180^{\circ }`$ (due to the birefringence of the HWP), so the sum is a fringeless constant. 
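A minimal numeric illustration of the quantities entering the duality relation (the count rates below are made up, and the standard relation $`K=2L-1`$ between Knowledge and Likelihood is assumed here):

```python
def visibility(i_max, i_min):
    """Fringe visibility V = (Max - Min) / (Max + Min)."""
    return (i_max - i_min) / (i_max + i_min)

def likelihood(r1l, r2l, r1lp, r2lp):
    """Likelihood of a correct path guess, Eq. (2): bet on the path
    contributing most to the detector that actually fired."""
    return (max(r1l, r2l) + max(r1lp, r2lp)) / (r1l + r2l + r1lp + r2lp)

# Perfect which-way marking (made-up rates): path 1 only triggers the
# lambda detector, path 2 only the lambda' detector.
L_perfect = likelihood(1000.0, 0.0, 0.0, 1000.0)
K_perfect = 2.0 * L_perfect - 1.0   # assumed relation K = 2L - 1

# No which-way information: all four rates equal.
L_none = likelihood(500.0, 500.0, 500.0, 500.0)
K_none = 2.0 * L_none - 1.0
```

For the perfect marker sketched here the guess always succeeds ($`L=1`$, $`K=1`$), while equal rates in both paths reduce the guess to a coin toss ($`L=1/2`$, $`K=0`$), consistent with the pure-state limits discussed above.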
To enable production of an even more mixed input, and to allow generation of arbitrary partially-mixed states, we used a “tunable” diode-laser scheme (see Fig. 1b). By rotating the (pure linear) polarization input to the first polarizing beam splitter, one can control the relative contribution of horizontal and vertical components. For example, for incident photons at $`45^{\circ }`$, one has equal H and V amplitudes which are then added together with a random and rapidly varying phase to produce an effectively completely mixed state of polarization . With 5 times more vertical than horizontal, the state is then 1/3 completely-mixed to 2/3 pure. This case is shown in Fig. 3b. Note that the maximum Visibility (and Knowledge) is numerically equal to the state purity, as the mixed-component displays no interference (and contains no WW information). The data taken for various input states show excellent agreement with theoretical predictions (Fig. 3c). ### C Quantum erasure (erasing and non-erasing) In contrast to many interference situations where the WW information may be inaccessible, the quantum state of our WWM is easily manipulated. One can then in fact “erase” the distinguishing information and recover interference (though this simple physical picture fails when non-pure WWM states are considered). In our experiments, such an erasure consists of using a polarization analysis to reduce or remove the WW labeling. For example, if the path 1 and 2 polarizations are horizontal and vertical, respectively, analysis at $`\pm 45^{\circ }`$ will recover complete fringes; any photon transmitted through a $`45^{\circ }`$ polarizer is equally likely to have come from either path. Figure 4a shows quantum eraser data under the condition that a pure vertical photon is input to the interferometer, and rotated by the HWP in path 1 by either $`90^{\circ }`$ or $`20^{\circ }`$. 
The visibility on detector 1 after the analyzer can assume any value from $`0`$ to $`1`$, the latter case being a complete quantum erasure. Even for a completely-mixed state, it is still possible to recover interference (Fig. 4b). With no WW information to erase, this non-erasing quantum erasure may seem quite remarkable at first. However, the essential feature of quantum erasure is not that it destroys the possibly available WW information, but that it sorts the photons into sub-ensembles (depending on the quantum state of the WWM) each exhibiting high-visibility fringes. Complete interference is recoverable by analyzing along the eigenmodes of the internal HWP — along one axis we see fringes, along the other we see “anti-fringes,” shifted by $`180^{\circ }`$ . More generally, one post-selects one of the WWM eigenstates as determined by the interaction Hamiltonian of the interfering quantum system and the WWM . For a partially-mixed WWM state, just as the value of $`V^2+K^2`$ lies partway between $`1`$ (pure state) and $`0`$ (completely-mixed state), the analysis angles yielding zero visibility also fall between those for pure and mixed states (Fig. 4c). Quantitatively, for a fractional purity $`s`$, the angles are at $`\theta _{\mathrm{HWP}}\pm \frac{1}{2}\mathrm{arccos}\left(s\mathrm{cos}(2\theta _{\mathrm{HWP}})\right)`$; consult Ref. for further details. A convenient geometrical visualization of our results can be had by considering the polarization analysis in the Poincaré sphere representation , in which all linear polarizations lie on the equator of the sphere, circular polarizations lie on the poles, and arbitrary elliptical states lie in between. Any two orthogonal states lie diametrically opposed on the sphere. For pure polarization input states to our interferometer, there are in general exactly two points on the sphere for which the interference visibility will be exactly equal to zero. 
These correspond to the polarizations where a detector sees light from only one of the interferometer paths. Along the entire great circle that bisects the line connecting these two points, the quantum eraser will yield perfect visibility. Curiously, the situation for a completely-mixed input state is reversed. Here there are in general exactly two polarization states for which the quantum eraser recovers unit visibility, corresponding to the eigenmodes of the polarization elements inside the interferometer; on the great circle equidistant from these two points the Visibility vanishes. For example, in some mixed-state experiments described in Ref. , the eigenmodes are the poles on the Poincaré sphere, and the great circle corresponds to the equator — no visibility is observed for any linear polarization analysis. ## III Discussion Our results demonstrate the validity of Eq. (1) at the percent level. Moreover, they highlight some features associated with mixed states, which may not have been widely appreciated. Namely, that it is possible for both the interference visibility and the path distinguishability to equal zero. We have also seen that in some cases where the visibility is intrinsically equal to zero, it is possible to perform quantum erasure on the photons, and recover the interference. Remarkably, this is true even when the input state is completely mixed, and there exists no WW information to erase. The operation of the polarizer is essentially to select a sub-ensemble of photons. Depending on how this selection is performed, we may recover fringes, anti-fringes, no fringes, or any intermediate case. The WW labeling in our experiment arose from an entanglement between the photon’s spatial mode and polarization state. It could just as well have been with another photon altogether, as in the experiments in , or even with a different kind of quantum system . 
The same results are predicted, as long as the WW information is stored in a 2-state quantum system (e.g., internal energy states, polarization, spin, etc.). More generally, our findings are extendible to analogous experiments with quanta of different kinds such as, for example, interferometers with electrons , neutrons , or atoms . To counter a possible misunderstanding let us note that, quite generally, entanglement concerns different degrees of freedom (DoF’s), not different particles. For certain purposes, such as quantum dense coding or quantum teleportation , it is essential that the entangled DoF’s be carried by different particles and can thus be manipulated at a distance. For other purposes, however, one can just as well entangle an internal DoF of the interfering particle itself with its center-of-mass DoF . In our experiment the photon’s polarization DoF is entangled with the spatial mode DoF represented by the binary alternative “reflected at the entry beam splitter, or transmitted?” Analogously, hyperfine levels of an atom were used to mark its path in the experiments of Refs. . In the extreme situation of perfect WW distinguishability, the entangled state is of the form stated in Sec. II A, namely $`|\psi \rangle \propto |1\rangle |𝖧\rangle +e^{i\varphi }|2\rangle |𝖵\rangle `$. Appropriate measurements on the spatial DoF (defined by $`|1\rangle `$ and $`|2\rangle `$) and the polarization DoF ($`|𝖧\rangle `$ and $`|𝖵\rangle `$) would show that the entanglement is indeed so strong that Bell’s inequality is violated — clear evidence that a description based solely on classical electrodynamics cannot account for all features of our experiment. Of course, inasmuch as one cannot satisfy the implicit assumption that the measurements on the entangled subsystems be space-like separated, this violation of Bell’s inequality implies nothing about the success or failure of local-hidden-variable theories; however, this is not relevant here. 
Finally we’d like to mention that further progress was made since the completion of the work reported here. Experimental tests of more sophisticated inequalities than (1) were performed , and there was progress in theory as well . In particular, the quantitative aspects of quantum erasure were investigated beyond the initial stage reached in Ref. . ## Acknowledgments BGE is grateful to Helmut Rauch and collaborators for their hospitality at the Atominstitut in Vienna, where part of this work was done, and he thanks the Technical University of Vienna for financial support. PGK and PDDS acknowledge Andrew White for helpful discussions and assistance. Correspondence should be addressed to PGK (email: Kwiat@lanl.gov).
# Using lattice methods in non-canonical quantum statistics ## 1 Introduction The usefulness of the canonical ensemble in statistical mechanics is remarkable. The standard explanation of this success relies on taking the “thermodynamical limit” which corresponds to increasing the volume of the system to infinity while keeping all the relevant intensive quantities, i.e. densities, fixed and finite. From this point of view, the canonical ensemble should not have much utility for small systems consisting of only a few particles. This, however, does not seem to be the case, and in the following we propose a new approach which explains why and how the canonical ensemble can help also in the analysis of small equilibrium systems. We also explain how lattice simulations in general can be employed in this analysis. ## 2 Ensembles from coarse-graining of the energy fluctuation spectrum The standard approach to quantum statistics uses a density matrix $`\widehat{\rho }`$, which is a non-negative, hermitian, trace class operator normalized to one and which gives the expectation value of an observable $`\widehat{A}`$ by the formula $`\langle \widehat{A}\rangle =\mathrm{Tr}(\widehat{A}\widehat{\rho })`$. In some complete eigenbasis $`|\psi _i\rangle `$, the density matrix can thus be expanded as $`\widehat{\rho }=\sum _ip_i|\psi _i\rangle \langle \psi _i|`$, where the eigenvalues $`p_i`$ satisfy $`p_i\ge 0`$ and the normalization condition $`\sum _ip_i=1`$. Suppose now that the system has a discrete energy spectrum, which in quantum mechanics is achieved for every potential that grows sufficiently fast at infinity. An equilibrium, i.e. time-independent, ensemble is then given by a density matrix which has time-independent eigenvalues and which satisfies $`[\widehat{H},\widehat{\rho }]=0`$. In this case, the eigenvectors $`\psi _i`$ can be chosen so that they are also energy eigenvectors with an eigenvalue $`E_i`$. 
After these preliminaries, it is not hard to see that for any equilibrium ensemble which is given by a density matrix and which has energy as the only relevant conserved quantity, we can find a smooth mapping $`F`$ so that $`\widehat{\rho }=F(\widehat{H})`$, i.e. that $`\widehat{\rho }=\sum _iF(E_i)|\psi _i\rangle \langle \psi _i|`$. We will call such a smooth mapping the fluctuation spectrum of the ensemble and we will use the term precanonical ensemble for those equilibrium ensembles which satisfy: 1. The canonical partition function converges, $`\mathrm{Tr}\mathrm{e}^{-\beta \widehat{H}}<\mathrm{\infty }`$, for all $`\beta >0`$. 2. The energy fluctuations decay at least exponentially at high energies: $`\mathrm{e}^{\beta E}F(E)`$ is a rapidly decreasing function for all $`\beta <\beta _+`$, where $`\beta _+>0`$ is a parameter. The following representation is then valid for any precanonical ensemble and for all $`0<\beta <\beta _+`$ as long as $`\widehat{A}`$ is a positive observable which satisfies $`\mathrm{Tr}[\widehat{A}\mathrm{e}^{-\beta \widehat{H}}]<\mathrm{\infty }`$ in the same range of $`\beta `$: $$\mathrm{Tr}\left[\widehat{A}F(\widehat{H})\right]=\int _{\beta -\mathrm{i}\mathrm{\infty }}^{\beta +\mathrm{i}\mathrm{\infty }}\frac{\mathrm{d}w}{2\pi \mathrm{i}}\overline{F}(w)\mathrm{Tr}\left[\widehat{A}\mathrm{e}^{-w\widehat{H}}\right],$$ (1) where $`\overline{F}`$ is the Laplace transform of $`F`$ and the integrand in the above equation is an analytic function in the half-plane $`0<\mathrm{Re}w<\beta _+`$. This result follows from Fourier-transform formulae for rapidly decreasing functions; the precise mathematical details can be found in . The value of the integral representation (1) can be computed by saddle point methods and, for instance when $`\widehat{A}=\widehat{1}`$, there is a unique positive saddle point $`\beta `$ which dominates the value of the integral. 
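The representation (1) can be verified numerically on a toy spectrum; in the sketch below (our own conventions, with $`\widehat{A}=\widehat{1}`$, a truncated harmonic-oscillator spectrum, and $`\overline{F}(w)=\int \mathrm{d}x\mathrm{e}^{wx}F(x)`$ for a Gaussian $`F`$, for which the transform is known in closed form) the direct trace and the contour integral agree to high accuracy.

```python
import numpy as np

# Toy discrete spectrum: truncated harmonic oscillator, E_n = n + 1/2.
E = np.arange(120) + 0.5

# Gaussian fluctuation spectrum F and its two-sided Laplace transform
E0, eps = 5.0, 1.0
def F(x):
    return np.exp(-(x - E0)**2/(2*eps**2)) / np.sqrt(2*np.pi*eps**2)
def Fbar(w):
    return np.exp(E0*w + 0.5*(eps*w)**2)      # integral of e^{w x} F(x)

lhs = F(E).sum()                              # Tr F(H), computed directly

# Right-hand side of Eq. (1): integral along the line w = beta + i y
beta = 0.5
y = np.linspace(-15.0, 15.0, 20001)
dy = y[1] - y[0]
w = beta + 1j*y
Z = np.exp(-np.outer(w, E)).sum(axis=1)       # Tr e^{-w H} at each point of the line
rhs = ((Fbar(w)*Z).sum()*dy/(2*np.pi)).real   # dw/(2 pi i) = dy/(2 pi)

print(lhs, rhs)                               # both ~ 1, equal to high accuracy
```

The Gaussian factor in $`\overline{F}`$ makes the integrand decay like $`\mathrm{e}^{-\epsilon ^2y^2/2}`$, so a straightforward quadrature along the line suffices here; for lattice kernels this decay is what the saddle point analysis exploits.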
For $`\widehat{A}=\widehat{1}`$ this saddle point can be solved for small $`\beta `$ from the saddle point equation $$\langle \widehat{H}\rangle _\beta ^{\mathrm{can}}=E+\beta \epsilon ^2+𝒪(\beta ^2),$$ (2) where, assuming the normalization $`\int \mathrm{d}xF(x)=1`$, $$E=\int \mathrm{d}xF(x)x,\qquad \epsilon ^2=\int \mathrm{d}xF(x)(x-E)^2.$$ (3) Since the precise form of the fluctuation spectrum is difficult to measure, we need some way of parameterizing its large scale properties. As usual, these can be extracted from the original fluctuation spectrum by a coarse-graining transformation—here we used a convolution with a Gaussian distribution, $$F(x)\to F_\mathrm{\Lambda }(x)\equiv \int _{-\mathrm{\infty }}^{\mathrm{\infty }}\mathrm{d}yF(y)\frac{1}{\sqrt{2\pi \mathrm{\Lambda }^2}}\mathrm{e}^{-\frac{1}{2\mathrm{\Lambda }^2}(x-y)^2}.$$ Under this transformation, the Laplace-transform in (1) will change to $`\overline{F}_\mathrm{\Lambda }(w)=\mathrm{e}^{\frac{1}{2}\mathrm{\Lambda }^2w^2}\overline{F}(w)`$. When $`\mathrm{\Lambda }`$ approaches infinity it is clear that the positive saddle point value $`\beta `$ must go to zero. Since $`\overline{F}_\mathrm{\Lambda }`$ is analytic near the origin, we can in this limit use the approximation $`Ew+\frac{1}{2}\epsilon ^2w^2`$ for $`\mathrm{ln}\overline{F}_\mathrm{\Lambda }(w)`$—the parameters are obtained from (3) by replacing $`F`$ with $`F_\mathrm{\Lambda }`$. Taking the inverse Laplace-transform then shows that this corresponds to using the Gaussian ansatz $`F_\mathrm{\Lambda }(x)=G_\epsilon (E-x)`$ for the fluctuation spectrum. This should not come as a surprise; the argumentation is the same as used with the central limit theorem of probability theory. For large $`\mathrm{\Lambda }`$ the positive saddle point typically becomes dominant. On the other hand, the trace left in the positive saddle point approximation is simply the canonical trace, $`\mathrm{Tr}(\widehat{A}\mathrm{e}^{-\beta \widehat{H}})`$, and often the canonical expectation value becomes a good approximation of the coarse-grained one. 
The precise condition for the use of the canonical ensemble can be given in terms of the canonical variance $`\sigma ^2=\langle (\widehat{H}-\langle \widehat{H}\rangle )^2\rangle _\beta ^{\mathrm{can}}`$ and the normalized canonical energy operator $`\widehat{h}=(\widehat{H}-\langle \widehat{H}\rangle )/\sigma `$. The condition for using the canonical approximation for the partition function and the one for using the canonical expectation value for a positive observable $`\widehat{A}`$ are, respectively, $$a\equiv \frac{\sigma ^2}{2\epsilon ^2}\ll 1\quad \mathrm{and}\quad a\langle \widehat{A}\widehat{h}^2\rangle /\langle \widehat{A}\rangle \ll 1.$$ Similarly, the Gaussian ensemble can be used if the left hand sides in the previous equations are not too large. A more complete explanation of these results can be found in . The canonical approximation of the Gaussian expectation values has already been analysed in and we quote here only the results: The simple bounds, already referred to in the above, for the approximation of positive observables are $$-a\frac{\langle \widehat{A}\widehat{h}^2\rangle }{\langle \widehat{A}\rangle }\le \mathrm{ln}\frac{\langle \widehat{A}\rangle _{E,\epsilon }^{\mathrm{gauss}}}{\langle \widehat{A}\rangle _\beta ^{\mathrm{can}}}\le a$$ and this approximation can be improved for $`a\ll 1`$ by using the asymptotic series $$\langle \widehat{A}\rangle _{E,\epsilon }^{\mathrm{gauss}}=\langle \widehat{A}\rangle _\beta ^{\mathrm{can}}+\langle (1-\widehat{h}^2)\widehat{A}\rangle a+𝒪(a^2).$$ (4) ## 3 Gaussian ensemble on a lattice There are two different ways of using lattice simulations in the evaluation of Gaussian expectation values. In the direct approach, the lattice approximation is applied to the complex temperature trace in (1) which, after an exchange of the order of the integration and the continuum limit, leads to an integral kernel for the lattice simulations. Unfortunately the kernel is not a positive function and the results are in most cases obtained as a delicate cancellation of oscillations of the kernel. 
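The series (4), read as above together with the saddle point equation (2) (both identifications are our reading of the formulas), is easy to check on a toy example. For a two-level system with $`\widehat{A}=\widehat{H}`$ and a wide Gaussian ensemble the first-order correction shrinks the canonical error by roughly two orders of magnitude:

```python
import numpy as np
from math import exp

levels = np.array([0.0, 1.0])          # two-level toy system, observable A = H
E, eps = 0.3, 2.0                      # Gaussian-ensemble parameters (a << 1 here)

# Gaussian-ensemble expectation value <A>_{E,eps}^gauss
wgt = np.exp(-(levels - E)**2/(2*eps**2))
A_gauss = (levels*wgt).sum()/wgt.sum()

# Saddle point beta from Eq. (2): <H>_beta^can = E + beta eps^2
can_H = lambda b: 1.0/(1.0 + exp(b))   # canonical <H> of the two-level system
lo, hi = 0.0, 1.0
for _ in range(60):                    # bisection; can_H(b) - E - b eps^2 decreases
    mid = 0.5*(lo + hi)
    if can_H(mid) - E - mid*eps**2 > 0.0:
        lo = mid
    else:
        hi = mid
beta = 0.5*(lo + hi)

# Canonical moments and the O(a) correction of Eq. (4) for A = H
mu = can_H(beta)                       # <H>
var = mu*(1.0 - mu)                    # sigma^2
a = var/(2*eps**2)
h2A = (mu*(1.0 - mu)**2)/var           # <h^2 H>; only the level E_1 = 1 contributes
corrected = mu + (mu - h2A)*a          # <H> + <(1 - h^2)H> a

print(A_gauss, mu, corrected)          # the correction moves mu much closer to A_gauss
```

Here $`a\approx 0.03`$, the uncorrected canonical value misses the Gaussian one by $`𝒪(a)`$, and the corrected value agrees to $`𝒪(a^2)`$, as the series promises.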
Such an oscillatory cancellation, however, is likely to be an unavoidable feature of any space-lattice simulation of microcanonical expectation values as the energy wavefunctions themselves are typically highly oscillatory. This approach is most useful when canonical simulations at a complex temperature are possible. Then the trace in the integrand in (1) can be evaluated in a number of points and the integral computed by a discrete Fourier-transform. This would yield results for a range of values of the parameters $`E`$ and $`\epsilon `$ and, therefore, it would enable an inspection of a whole energy range at the same time. The main difficulty in this approach is, of course, in the complex temperature lattice simulation with its oscillatory kernel function. If an expectation value at only one value of the parameters $`E`$ and $`\epsilon `$ is needed, then a second alternative is also possible: perform first the Fourier-transformation of the canonical lattice kernel and do the lattice simulations with the resulting kernel function. In this case, the evaluation of the kernel function becomes an obstacle slowing down the simulation. The second approach to the Gaussian evaluation problem is to use the asymptotic series given in (4) which requires the computation of canonical expectation values only. The problem in this case is to find the correct lattice operators which would correspond to the different powers of $`\widehat{h}`$ in the continuum limit. We will now show how these can be found in a simple quantum mechanical case and comment on some general features which should be relevant also for field theory lattice simulations—a more complete analysis of this kind of lattice system can be found in . Consider, for simplicity, a non-relativistic particle in a potential $`V(x)`$. 
The Hamiltonian of this system is $`\widehat{H}=\frac{1}{2m}\widehat{p}^2+V(\widehat{x})`$ and if the potential is bounded from below and increases sufficiently fast at infinity, the complex temperature trace has a rigorous lattice approximation given by $$\mathrm{Tr}\mathrm{e}^{-w\widehat{H}}=\underset{L\to \mathrm{\infty }}{lim}\int \mathrm{d}^Lx[mL/(2\pi w)]^{L/2}\mathrm{e}^{-\frac{1}{w}P_L-wV_L},$$ $$P_L=\frac{Lm}{2}\underset{k=1}{\overset{L}{\sum }}\left|x_{k-1}-x_k\right|^2,\qquad V_L=\frac{1}{L}\underset{k=1}{\overset{L}{\sum }}V(x_k),$$ with the periodic identification $`x_0\equiv x_L`$. A straightforward differentiation of this result then gives the operators which will measure in the continuum limit the expectation values of different powers of the Hamiltonian. We have given the first few of them in Table 1. Two features of these results are worth pointing out: first, the kinetic energy is given by the operator $`\frac{L}{2\beta }-\frac{1}{\beta ^2}P_L`$, which shows that $`P_L`$ diverges as the lattice size in the continuum limit and thus needs to be “renormalized”. This reflects the well-known result that the continuum path-integral is concentrated on paths which are continuous, but non-differentiable. Secondly, each power of the Hamiltonian needs a separate renormalization term in the sense that using the powers of the operator giving the expectation value, $`c_1`$, is not enough. This is exactly analogous to the situation of composite operators in field theory. ## 4 Conclusions We have introduced the Gaussian ensemble as a means of refining the accuracy of the canonical ensemble and we have shown by a coarse-graining procedure why this would have applications also for non-thermal equilibrium systems. The canonical, complex temperature, lattice simulations offer one way of inspecting the behavior of the Gaussian expectation values. A second way, applicable for systems near the thermodynamical limit, uses correction terms which can be computed with the well-established methods of canonical lattice simulations.
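The energy estimator of Sec. 3 is easy to exercise in a minimal Metropolis simulation of the discretized trace (our own sketch, with $`m=\omega =1`$ and a harmonic potential, not the simulations of the text): the combination $`\frac{L}{2\beta }-\frac{1}{\beta ^2}P_L+V_L`$ should reproduce the exact $`\langle \widehat{H}\rangle =\frac{1}{2}\mathrm{coth}(\beta /2)`$, while $`P_L`$ itself grows with $`L`$.

```python
import numpy as np

rng = np.random.default_rng(3)
beta, L = 1.0, 32                      # temperature and number of time slices
x = np.zeros(L)                        # periodic path; x[k-1] is the left neighbour
V = lambda q: 0.5*q*q                  # harmonic potential, m = omega = 1

samples = []
for sweep in range(20000):
    for k in range(L):                 # local Metropolis update of one bead
        xm, xp = x[k-1], x[(k+1) % L]
        new = x[k] + rng.normal(0.0, 0.25)
        dS = (L/(2*beta))*((new-xm)**2 + (xp-new)**2
                           - (x[k]-xm)**2 - (xp-x[k])**2) \
             + (beta/L)*(V(new) - V(x[k]))
        if dS < 0.0 or rng.random() < np.exp(-dS):
            x[k] = new
    if sweep >= 2000 and sweep % 10 == 0:
        dx = x - np.roll(x, 1)
        P_L = (L/2.0)*np.sum(dx*dx)                     # kinetic part of the action
        samples.append(L/(2*beta) - P_L/beta**2 + np.mean(V(x)))

exact = 0.5/np.tanh(beta/2)            # exact <H> = (1/2) coth(beta/2) ~ 1.082
print(np.mean(samples), exact)
```

The naive average of $`P_L/\beta ^2`$ alone is close to $`L/(2\beta )`$, which is the divergence-with-lattice-size noted above; only the subtracted combination has a finite continuum limit.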
# NORDITA-1999/50 HE Interactions of heavy-light mesons (presented by P. Pennanen, petrus@hip.fi) ## 1 TWO HEAVY-LIGHT MESONS Experimental candidates for bound states of four quarks (of which two are antiquarks) lie close to meson-antimeson thresholds. In our case e.g. a heavy $`\mathrm{Y}`$ particle could be a $`B^{*}\overline{B}^{*}`$ system. Systems with heavy quarks should be more easily bound as the repulsive kinetic energy is smaller with the attractive potential being flavour independent. The binding of four-quark systems has been studied e.g. in bag, string-flip and deuson models, states with two $`b`$ quarks being stable in most models. In the lattice calculation we measure two diagrams for both meson-meson and meson-antimeson cases – see Fig. 1. One of them is the unconnected one without light quark interchange and the other connected, where the light quarks hop from one meson to another. The light quark mass we use is approximately that of $`s`$. Because of the static approximation for the heavy quarks their spin and isospin decouples, making the pseudoscalar $`B`$ and the vector $`B^{*}`$ degenerate, while physically they have a 46 MeV (1%) separation. We call this degenerate set $`\mathcal{B}`$. For the two-meson system combinations of $`B`$ and $`B^{*}`$ do have different energies in our case. We measure wavefunctions symmetric under interchange of the mesons with the light quark spin and isospin being singlet or triplet; these then couple to $`B,B^{*}`$ combinations . We obtain estimates of light quark propagators using maximally variance reduced pseudo-fermionic ensembles , 24 of which are generated for each gauge configuration. A second nested Monte Carlo calculation is then performed for the pseudofermions of each gauge configuration. The variance reduction requires “halving” of the lattice into two separate regions with the propagators going from one region to the other. 
Thus the connected diagram for $`\mathcal{B}\overline{\mathcal{B}}`$ and the cross correlator between $`Q\overline{Q}`$ and $`\mathcal{B}\overline{\mathcal{B}}`$ need to have one of the normally spatial axes as the temporal axis, introducing considerable technical complications. The parameters of our calculation are $`\beta =5.2,C_{SW}=1.76,a\approx 0.14\mathrm{fm},M_{PS}/M_V=0.72`$ for unquenched and $`\beta =5.7,C_{SW}=1.57,a\approx 0.17\mathrm{fm},M_{PS}/M_V=0.65`$ for the quenched case with a $`16^3\times 24`$ lattice being used for both. We have 20 and 54 gauge configurations for quenched and unquenched respectively; with pseudofermionic fields these take some 60 GB of diskspace. A variational basis of local and fuzzed mesonic operators is used, diagonalization of which maximizes overlap with the ground state of the system. We measure the different spin and isospin components as discussed in Ref. . For the $`\mathcal{B}\overline{\mathcal{B}}`$ case $`I_q=1`$ has only the unconnected diagram, whereas $`I_q=0`$ has the connected one subtracted. ## 2 RESULTS The raw correlators show that for the $`\mathcal{B}\mathcal{B}`$ system the unconnected diagram is much noisier and does not contribute to the binding at $`R>1`$. The connected diagram, on the other hand, gives a small binding for larger $`R`$ and also contributes to the observables where the spin of the light quarks changes. At $`R=0`$ the heavy quarks are at the same point and the $`\mathcal{B}\mathcal{B}`$ case looks like a baryon with an antitriplet string. We can compare to previously measured energies of the $`\mathrm{\Lambda }_b,\mathrm{\Sigma }_b`$ baryons for $`I_q,S_q=(0,0)`$ and $`I_q,S_q=(1,1)`$ respectively, finding excellent agreement . States with a sextet string $`(0,1;1,0)`$ lie higher. The $`\mathcal{B}\overline{\mathcal{B}}`$ singlet at $`R=0`$ looks like a pion, and we find agreement with the energy of a pion with non-zero momentum. The meson-meson potentials are shown in Fig. 2. The $`I_q,S_q=(1,1)`$ case is similar to $`(0,0)`$ but less bound; the level ordering observed at $`R=0`$ is retained and the attraction disappears earlier. 
For the $`(1,0)`$ a remnant of the sextet string makes the small-$`R`$ potential repulsive. The $`(0,1)`$ at $`R=0`$ is attractive for unquenched and repulsive for quenched; this is the only qualitative ($`2.5\sigma `$) difference between the quenched and unquenched results visible in our data. Both $`(0,1)`$ and $`(1,0)`$ seem to have attraction at $`r\approx 0.3`$ fm, which is a meson exchange effect as opposed to the small distance behaviour governed by gluonic effects. From a crude two-body Schrödinger approach using these potentials we expect binding for all of these cases except perhaps $`(1,0)`$. The meson-antimeson potential for $`I_q,S_q=(1,0)`$ is shown in Fig. 3, the $`(1,1)`$ case being similar. For both of these a $`Q\overline{Q}+q\overline{q}`$ state with the same quantum number is lighter for small $`R`$. For $`(1,0);(1,1)`$ the relevant energies are $`V(R)+\pi `$ and $`V(R)+\rho `$ respectively, where $`V(R)`$ is the static $`Q\overline{Q}`$ potential. An estimate of $`V(R)+\pi `$ is included in Fig. 3. The contribution from meson exchange can be examined e.g. by looking at the crossed diagram measured in the quenched approximation at fixed $`T`$ as a function of $`R`$. The $`BB\to BB`$ case should have a contribution from $`\rho `$ exchange, and we indeed find agreement by using the previously measured $`m_\rho `$ and normalizing by hand. For $`BB^{*}\to B^{*}B`$ we should have a contribution from $`\pi `$ exchange. In this case we can use a recent determination of the $`BB^{*}\pi `$ coupling , the experimental decay constant and our $`m_\pi `$. In the one-$`\pi `$ exchange formula everything is thus known, and we find excellent quantitative agreement for $`R\gtrsim 0.5`$ fm. This is strong support for deuson models in this distance range. ## 3 STRING BREAKING In our quenched and unquenched calculations the ground state $`\mathcal{B}\overline{\mathcal{B}}`$ and $`Q\overline{Q}`$ potentials cross at $`r\approx 1.2`$ fm. 
We are investigating the breaking of the $`Q\overline{Q}`$ string by using a variational approach similar to that used in Higgs models by several groups. The cross correlator between two-meson and two-quark states allows us to study their mixing also in the quenched theory – in the unquenched case additional fermion bubbles induce corrections. The quenched mixing matrix element can then be used to estimate the splitting of energy levels at the string breaking point, even though no actual splitting occurs with quenching. With an unquenched calculation the energy splitting can be studied directly using the full variational approach. One might think that an excited string would break at a smaller distance than the ground state. This is not necessarily the case, as e.g. the first excited state has $`J_z=1`$ with quark separation along $`z`$ and only breaks into mesons $`\mathcal{B}_L\overline{\mathcal{B}}_{L^{\prime }}`$ with $`L+L^{\prime }>0`$. In general it is an open question if a state with particular quantum numbers has the lowest energy at a particular heavy quark separation as a) a hybrid $`Q\overline{Q}`$ meson with excited glue, b) a ground state $`Q\overline{Q}`$ meson and a $`q\overline{q}`$ meson or c) two heavy-light mesons. These energy levels and their mixing can be studied on the lattice with our techniques.
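The variational step used above (diagonalizing a matrix of correlators between a basis of operators) is a generalized eigenvalue problem. The following toy sketch, with a synthetic two-state correlator whose energies and overlaps are invented for illustration (they are not the lattice data of this study), shows how the energy levels are extracted from such a diagonalization:

```python
import numpy as np

E_true = np.array([0.5, 1.2])          # mock ground/excited energies (lattice units)
Z = np.array([[0.9, 0.4],              # mock overlaps of two operators onto the states
              [0.5, 0.8]])

def C(t):
    """2x2 correlator matrix C_ij(t) = sum_n Z_in Z_jn exp(-E_n t)."""
    return sum(np.outer(Z[:, n], Z[:, n])*np.exp(-E_true[n]*t) for n in range(2))

t0 = 1
for t in (2, 3, 4):
    # Generalized eigenvalue problem C(t) v = lambda C(t0) v
    lam = np.sort(np.linalg.eigvals(np.linalg.solve(C(t0), C(t))).real)[::-1]
    E_eff = -np.log(lam)/(t - t0)      # effective energies from the eigenvalues
    print(t, E_eff)                    # -> [0.5, 1.2] at every t
```

With as many operators as contributing states the generalized eigenvalues are pure exponentials and the effective energies are exact; with real data, excited-state contamination and noise make the choice of basis and of $`t_0`$ the delicate part.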
# Nonequilibrium Phase Transition in Non-Local and Nonlinear Diffusion Model \[ ## Abstract We present the results of analytical and numerical studies of a one-dimensional nonlocal and nonlinear diffusion equation describing non-equilibrium processes ranging from aggregation phenomena to cooperation of individuals. We study a dynamical phase transition that is obtained on tuning the initial conditions and demonstrate universality and characterize the critical behavior. The critical state is shown to be reached in a self-organized manner on dynamically evolving the diffusion equation subjected to a mirror symmetry transformation. \] The study of equilibrium phase transitions is now a mature field and, increasingly, attention is being paid to outstanding problems in nonequilibrium statistical physics . Such problems are often challenging because of inherent non-linearities. There are many examples of non-equilibrium phenomena which are intrinsically non-local such as the growth of thin films in the presence of shadowing and the sculpting of the drainage basin of river networks due to erosional processes . A striking development in the field of non-equilibrium statistical physics is the development of the paradigm of self-organized criticality , entailing the competition between two dynamical processes leading to a critical state without any fine tuning of parameters. The principal theme of this letter is the study of a simple one dimensional nonlinear and nonlocal diffusion equation to elucidate the nature of a non-equilibrium phase transition. The equation is essentially one describing biased diffusion with the magnitude of the bias determined by the instantaneous configuration of the random walkers undergoing diffusion. Physically, the equation describes aggregation, population dynamics, and represents a simple model for cooperative behavior in game theory . 
Strikingly, the behavior changes from that of a conventional critical point (which requires tuning) to that of self-organized criticality on considering the time evolution of a transformed equation obtained by a mirror symmetry transformation with $`x`$ replaced by $`-x`$. Our basic equation for $`P(x,t)`$, the probability that the diffusing particle is at position $`x`$ at time $`t`$, is $$\frac{\partial P}{\partial t}=-v(t)\frac{\partial P}{\partial x}+\frac{1}{4}\frac{\partial ^2P}{\partial x^2}$$ (1) where both the nonlinearity as well as the nonlocality are introduced in the bias velocity $`v`$ defined by $$v(t)=\int _0^{\mathrm{\infty }}𝑑xP(x,t)-1/2.$$ (2) On setting $`v`$ equal to zero in Eq. (1), one recovers the standard unbiased diffusion equation , whereas one obtains simple biased diffusion when $`v`$ is a constant. In our equation, $`v`$ is a measure of the imbalance between the population of walkers in the right and left and the drift bias promotes further aggregation. Eq. (1) describes the temporal evolution of the distribution function $`P(x,t)`$ and leads to one of two outcomes in the large time limit. Depending on the initial distribution, one ends up with the bias to the right or to the left winning so that $`P`$ becomes 1 either at $`+\mathrm{\infty }`$ or at $`-\mathrm{\infty }`$. Our focus is on the non-equilibrium transition between these limiting behaviors. Note that there is a set of initial conditions (of measure zero) that correspond to the dynamical critical point – we will demonstrate that the critical behavior is universal. Let us define a new variable $`w(t)=\int _0^t𝑑\tau v(\tau )`$, and introduce $`y(t)=x-w(t)`$ so that Eq. (1) is cast in the form of a standard diffusion equation, $$\frac{\partial P}{\partial t}=\frac{1}{4}\frac{\partial ^2P}{\partial y^2}.$$ (3) For simplicity, let us first consider an initial Gaussian distribution of $`P(x,t)`$ centered around $`x_0`$ and with variance $`\sigma _0`$. The solution of Eq. 
(3) is then given by $$P(x,t)=N\mathrm{exp}\left\{-\frac{[x-x_0-w(t)]^2}{t+2\sigma _0^2}\right\}.$$ (4) where $`N=1/\sqrt{\pi (t+2\sigma _0^2)}`$ is the normalization constant. With $`\sigma _0=0`$, $`P(x,t)`$ is also the fundamental solution , which will be used to obtain the distribution at time $`t`$, starting from more general initial conditions. Expression (4) is only a formal solution because $`w(t)`$ is itself a function of $`P(x,t)`$. The transition is between two phases corresponding to aggregation on the right or on the left and therefore one would expect that the critical point would correspond to a situation with no bias (i.e. $`x_0=0`$). In this case, the distribution is symmetric with respect to the origin at all times, and there is nothing to choose between left and right, thus an unbiased behaviour ensues. In order to probe the behaviour in the vicinity of this critical point, one could start with an initial distribution with a tiny value of $`x_0`$ (small bias) and watch how the system evolves. Combining Eqs. (1) and (2) and noting that $`\dot{w}=v`$, we find $$\dot{w}=\frac{1}{2}\text{Erf}\left\{\frac{w(t)+x_0}{\sqrt{t+2\sigma _0^2}}\right\}$$ (5) At criticality ($`x_0=0`$), the velocity is zero at all times and $`w`$ vanishes too. In the critical regime, one expects that $`w(t)\ll 1`$, for all $`t`$ less than a crossover time $`T_r`$, so that one may linearize Eq. (5) to find that $$\dot{w}=\frac{1}{\sqrt{\pi }}\frac{w+x_0}{\sqrt{t+2\sigma _0^2}}.$$ (6) This equation can be easily solved and yields $$w(t)=x_0\left\{\text{e}^{2(\sqrt{t+2\sigma _0^2}-\sqrt{2\sigma _0^2})/\sqrt{\pi }}-1\right\}.$$ (7) One may define a characteristic transient time, $`T_r`$, spent in the critical region during which the linearization approximation holds. 
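This transient can be measured by direct numerical integration of Eq. (5). In the sketch below (our own discretization, with $`2\sigma _0^2=1`$ and an arbitrary exit threshold $`|w|=0.2`$) the exit time grows as $`\mathrm{ln}^2|x_0|`$: each factor of 100 reduction in $`x_0`$ increases $`\sqrt{T_r}`$ by a constant step.

```python
import numpy as np
from math import erf, sqrt

def transient_time(x0, two_sigma0_sq=1.0, w_exit=0.2, dt=0.01, tmax=1000.0):
    """Integrate w' = (1/2) Erf[(w + x0)/sqrt(t + 2 sigma0^2)]  (Eq. (5))
    with a midpoint rule; return the time at which |w| reaches w_exit."""
    w, t = 0.0, 0.0
    while t < tmax:
        k1 = 0.5*erf((w + x0)/sqrt(t + two_sigma0_sq))
        k2 = 0.5*erf((w + 0.5*dt*k1 + x0)/sqrt(t + 0.5*dt + two_sigma0_sq))
        w += dt*k2
        t += dt
        if abs(w) >= w_exit:
            return t
    return tmax

Ts = [transient_time(x0) for x0 in (1e-2, 1e-4, 1e-6)]
print(Ts)   # sqrt(T_r) increases by a roughly constant step per factor 100 in x0
```

The measured exit times closely track the linearized solution (7), since the Erf argument stays small throughout the critical region.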
From (7), one finds that $`T_r`$ diverges as $`x_0\to 0`$ as $$T_r\sim \mathrm{ln}^2|x_0|.$$ (8) In the critical region, the characteristic length-scale $`\xi `$ is expected to follow the diffusion law, and therefore $$\xi \sim \sqrt{T_r}\sim |\mathrm{ln}|x_0||,$$ (9) as we will verify numerically later on. Alternatively, in the initial condition, one may introduce a bias by fixing $`x_0=0`$ and instead letting $`P(-\mathrm{\infty },0)=\mathrm{\Phi }_0`$ as an effective asymmetric boundary condition. With the same procedure used to derive Eq. (5) we have $$\dot{w}=-\frac{\mathrm{\Phi }_0}{2}+\frac{1-\mathrm{\Phi }_0}{2}\text{Erf}\left\{\frac{w(t)}{\sqrt{t+2\sigma _0^2}}\right\}.$$ (10) $`\mathrm{\Phi }_0=0`$ corresponds to the critical point, and by linearizing Eq. (10) for small $`\mathrm{\Phi }_0`$, it is straightforward to show that one again obtains $`T_r\sim \mathrm{ln}^2(\mathrm{\Phi }_0)`$, with $`\xi \sim \sqrt{T_r}`$. We have verified that the same critical behaviour holds for other families of initial conditions. For example, when $$P(x,0)=(1/2-\epsilon )\delta (x-1)+(1/2+\epsilon )\delta (x+1),$$ (11) we find that $$T_r\sim \mathrm{ln}^2|\epsilon |$$ (12) and $`\xi \sim |\mathrm{ln}|\epsilon ||`$. Note that $`\epsilon `$ is now a measure of the deviation from the critical point and $`\epsilon =0`$ corresponds to the unbiased, zero drift situation. We now turn to the results of numerical experiments on a 1-dimensional lattice which are useful for validating our analytic predictions and for probing the nonlinear regime. The discrete version of Eq. (1), used in our simulations, reads $$P_x(t+1)=\frac{P_x(t)}{2}+\frac{\mathrm{\Phi }(t)}{2}P_{x-1}(t)+\frac{1-\mathrm{\Phi }(t)}{2}P_{x+1}(t)$$ (13) with $`\mathrm{\Phi }(t)=\sum _{x\ge 0}P_x(t)`$. The velocity is thus given by $`v(t)=\mathrm{\Phi }(t)-1/2`$. This equation was proposed by Nowak and Sigmund as a simplified model of the evolution of indirect reciprocity by image scoring. 
Indirect reciprocity is determined by reputation and status and is characterized by each individual having an image score. A potential donor and recipient of an altruistic act have an interaction in which the donor helps the recipient provided the recipient’s image score is positive. Such an altruistic act increases the image score of the donor by $`1`$ (the selfish act would have decreased it by $`1`$) and the image score of the recipient is unchanged. Eq. (13) is the governing equation for the time evolution of $`P_x`$, the fraction of players with image score $`x`$. The two phases that we have considered correspond to cooperation and defection and our finding is that not much time is spent agonizing over which phase to select even in the vicinity of the critical point. Indeed, the time scale to decide on one of the two different phases only diverges logarithmically as one approaches the critical point. Fig. 1 shows the divergence of $`T_r`$ as $`\epsilon \to 0`$ for the initial distributions given in Eq. (11) and for a lattice of $`6\times 10^5`$ sites. Numerical results are in excellent accord with our theoretical predictions (see Eq. (12)). Let us now consider an interesting generalization of the initial conditions that we studied analytically: $`P(-\mathrm{\infty },0)=\mathrm{\Phi }_0`$ and $`P(z,0)=1-\mathrm{\Phi }_0`$ and $`P=0`$ at all other locations, initially. The critical value of $`\mathrm{\Phi }_0`$ increases monotonically to a non-zero value $`\mathrm{\Phi }_c(z)`$ for positive $`z`$ with $`\mathrm{\Phi }_c(z\to \mathrm{\infty })=1/2`$. The smallest value of $`\mathrm{\Phi }_c(z)`$ on a discrete lattice occurs when $`z=1`$, which is the case we focus on. $`\mathrm{\Phi }_c\equiv \mathrm{\Phi }_c(z=1)`$ is found to be 0.261970531164…, a result that was noted earlier by Nowak and Sigmund . Fig. 2 shows the behaviour of the bias velocity as a function of time as $`\mathrm{\Phi }_0`$ approaches $`\mathrm{\Phi }_c`$ from above and from below. 
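Eq. (13) with this initial condition is straightforward to reimplement. In the sketch below the mass at $`x=-\mathrm{\infty }`$ is kept in a reservoir that always counts as $`x<0`$, and mass reaching the lattice edges is absorbed into $`\pm \mathrm{\infty }`$; these finite-lattice conventions are our own. Starting well below or well above $`\mathrm{\Phi }_c\approx 0.262`$ drives $`v=\mathrm{\Phi }-1/2`$ to fixed points of opposite sign:

```python
import numpy as np

def final_velocity(phi0, M=400, steps=4000):
    """Evolve Eq. (13): mass phi0 held at x = -infinity, mass 1 - phi0
    started at x = z = 1; return v = Phi - 1/2 after `steps` updates."""
    P = np.zeros(2*M + 1)              # lattice sites x = -M .. M
    P[M + 1] = 1.0 - phi0              # z = 1
    right_res = 0.0                    # mass absorbed at +infinity (counts as x >= 0)
    for _ in range(steps):
        Phi = right_res + P[M:].sum()  # total mass at x >= 0
        new = P/2
        new[1:] += (Phi/2)*P[:-1]          # hops to the right, weight Phi/2
        new[:-1] += ((1 - Phi)/2)*P[1:]    # hops to the left, weight (1 - Phi)/2
        right_res += (Phi/2)*P[-1]         # outflow at x = +M
        P = new                            # outflow at x = -M joins x = -infinity
    return right_res + P[M:].sum() - 0.5

v_coop = final_velocity(0.05)   # well below Phi_c: the right ("cooperation") wins
v_def = final_velocity(0.45)    # well above Phi_c: the left wins
print(v_coop, v_def)
```

Bisecting on $`\mathrm{\Phi }_0`$ with this routine (at larger $`M`$ and more steps) is a simple way to bracket the critical value quoted above.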
$`T_r`$ can be identified as the time after which $`v(t)`$ becomes equal to its asymptotic values of either $`1/2`$ or $`\mathrm{\Phi }_0-1/2`$ and its scaling behaviour is shown in Fig. 3. The average location of the random walkers (excluding the number fixed at $`x=-\mathrm{\infty }`$) behaves with time (in the vicinity of the fixed point) as $$\langle x(t)\rangle \sim \sqrt{t+2\sigma _0^2}.$$ (14) In order to derive this result, we note that the velocity increases very slowly in the linear critical regime and can be approximately considered constant (this is consistent with the assumption that, for time $`t\ll T_r`$, $`w`$ is much smaller than $`1`$). The derivative of the velocity given by Eq. (2) is then vanishing, and using Eq. (1) to eliminate $`\dot{P}`$ we obtain, after integrating over $`x`$, the expression $$4vP(0,t)-\partial _xP(0,t)=0.$$ (15) Using the formal solution (4), we obtain $$w(t)\sim \sqrt{t+2\sigma _0^2}$$ The result (14) then follows on noting that the average position of the walkers is given by $`w(t)+x_0`$. This critical regime behaviour of $`\langle x(t)\rangle `$ crosses over to a linear temporal behavior when the bias reaches a sufficient strength. There is indeed a sharp onset of the linear behaviour at a value of $`x`$, which one may identify with $`\xi `$. The scaling behaviour of $`\xi `$ is shown in Fig. 4. We now turn to a simple mechanism for obtaining self-organized critical behavior in our model. In Eq. (1) the transformation $`x\to -x`$ is equivalent to a change of the sign of the bias velocity. In this situation, the system spontaneously organizes in such a way that the aggregation of walkers is disfavoured. As a consequence, the asymptotic distribution becomes symmetric (characterized by $`v=0`$) and this corresponds to a self-tuning to the critical state, a behaviour typical of self-organized criticality . In this asymptotic regime, the scaling $`\langle x(t)\rangle \sim \sqrt{t}`$ derived in (14) still ought to hold and is confirmed by simulations performed on the discretized diffusion equation (see Fig. 5). 
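The self-organized regime is equally easy to see in the discrete model: flipping $`x\to -x`$ amounts to exchanging the two hopping weights in Eq. (13), so the bias now opposes the majority. In this sketch (conventions and parameters ours) the velocity settles at $`v\approx 0`$ and the width of the distribution grows diffusively, roughly doubling when $`t`$ is quadrupled:

```python
import numpy as np

M, steps = 300, 1600
P = np.zeros(2*M + 1)
P[M + 20] = 1.0                        # start everything off-centre at x = +20
xgrid = np.arange(-M, M + 1)
widths = {}
for t in range(1, steps + 1):
    Phi = P[M:].sum()
    v = Phi - 0.5
    new = P/2
    new[1:] += ((1 - Phi)/2)*P[:-1]    # mirrored weights: right hop (1 - Phi)/2
    new[:-1] += (Phi/2)*P[1:]          # left hop Phi/2; bias opposes aggregation
    P = new
    if t in (400, 1600):
        mu = (xgrid*P).sum()
        widths[t] = np.sqrt(((xgrid - mu)**2*P).sum())

print(v, widths[1600]/widths[400])     # v -> 0, width ratio close to 2 (diffusive)
```

The initial off-centre packet is pulled back toward the origin until the two halves balance, after which the global bias self-tunes to zero and only diffusion remains.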
In this letter, we have introduced and studied a diffusion equation with nonlinear and nonlocal features. In this model, spontaneous fluctuations in the population of walkers are able to drive the entire population towards one of the two boundaries located at $`(x=\pm \mathrm{\infty })`$. This mechanism is of interest as the basis for the development of more realistic models of self-aggregation and self-organization in cooperative states of populations of interacting individuals. A mirror symmetry transformation applied to the equation reveals a dynamical evolution corresponding to generic behavior associated with self-organized criticality. This work was supported by INFN, NASA, NATO and The Donors of the Petroleum Research Fund administered by the American Chemical Society.
# U(1) staggered Dirac operator and random matrix theory (partially funded by the Department of Energy under contracts DE-FG02-97ER41022 and DE-FG05-85ER2500) ## Abstract We investigate the spectrum of the staggered Dirac operator in 4d quenched U(1) lattice gauge theory and its relationship to random matrix theory. In the confined as well as in the Coulomb phase the nearest-neighbor spacing distribution of the unfolded eigenvalues is well described by the chiral unitary ensemble. The same is true for the distribution of the smallest eigenvalue and the microscopic spectral density in the confined phase. The physical origin of the chiral condensate in this phase deserves further study. By now it is a well-known fact that the spectrum of the QCD Dirac operator $$iD+im=\left(\begin{array}{cc}im& T\\ T^{\dagger }& im\end{array}\right)\quad \text{in a chiral basis}$$ (1) is related to universality classes of random matrix theory (RMT), i.e. determined by the global symmetries of the QCD partition function . In RMT the matrix $`T`$ in Eq. (1) is replaced by a random matrix with appropriate symmetries, generating the chiral orthogonal (chOE), unitary (chUE), and symplectic (chSE) ensemble, respectively . For SU(2) and SU(3) gauge groups numerous results exist confirming the expected relations . We have investigated 4d U(1) gauge theory described by the action $`S\{U_l\}=\sum _p(1-\mathrm{cos}\theta _p)`$ with $`U_l=U_{x,\mu }=\mathrm{exp}(i\theta _{x,\mu })`$ and $`\theta _p=\theta _{x,\mu }+\theta _{x+\widehat{\mu },\nu }-\theta _{x+\widehat{\nu },\mu }-\theta _{x,\nu }\;(\nu \ne \mu ).`$ At $`\beta _c\approx 1.01`$ U(1) gauge theory undergoes a phase transition between a confinement phase with mass gap and monopole excitations for $`\beta <\beta _c`$ and the Coulomb phase which exhibits a massless photon for $`\beta >\beta _c`$. 
The question of the order of this phase transition, and hence the issue of a continuum limit $`\beta \to \beta _c-0`$ of the massive phase, has remained a subject of endless debate, see for instance and references therein. For $`\beta >\beta _c`$ a critical line of continuum theories ought to exist as the photon is massless for all these $`\beta `$-values. We are interested in the relationship between U(1) gauge theory and RMT across this phase transition. The Bohigas-Giannoni-Schmit conjecture states that quantum systems whose classical counterparts are chaotic have spectral fluctuation properties (measured, e.g., by the nearest-neighbor spacing distribution $`P(s)`$ of unfolded eigenvalues) given by RMT, whereas systems whose classical counterparts are integrable obey a Poisson distribution, $`P(s)=\mathrm{exp}(-s)`$. As the Dirac operator is a quantum-mechanical object without classical counterpart, the meaning of the Bohigas-Giannoni-Schmit conjecture for lattice gauge theory is somewhat unclear. Nevertheless, as for SU(2) and SU(3) gauge groups, we expect the confined phase to be described by RMT, whereas free fermions are known to yield the Poisson distribution. The question arose whether the Coulomb phase will be described by RMT or by the Poisson distribution, with apparently no clear theoretical arguments in favor of either scenario. In Ref. some of the authors have resolved this question by numerical analysis. We generated twenty (or more) gauge configurations at each of the following parameter values: $`8^3\times 4`$ lattice at $`\beta =0`$, $`0.9`$, $`0.95`$, $`1`$, $`1.05`$, $`1.1`$, $`1.5`$ and $`8^3\times 6`$ lattice at $`\beta =0.9`$, $`1.1`$, $`1.5`$. The nearest-neighbor spacing distributions for the $`8^3\times 6`$ lattice at $`\beta =0.9`$ (confined phase) and at $`\beta =1.1`$ (Coulomb phase), averaged over 20 independent configurations, are depicted in Fig. 1. Both are well described by the chUE. In contrast, the right plot in Fig. 
1 shows that free fermions are described by the Poisson distribution. The large prime numbers for the lattice size are needed to remove the degeneracies of the spectrum. We have continued the above investigation with a study of the distribution of small eigenvalues in the confined phase. The Banks-Casher formula relates the eigenvalue density $`\rho (\lambda )`$ at $`\lambda =0`$ to the chiral condensate, $$\mathrm{\Sigma }=|\langle \overline{\psi }\psi \rangle |=\underset{m\to 0}{lim}\underset{V\to \infty }{lim}\pi \rho (0)/V.$$ (2) The microscopic spectral density $$\rho _s(z)=\underset{V\to \infty }{lim}\frac{1}{V\mathrm{\Sigma }}\rho \left(\frac{z}{V\mathrm{\Sigma }}\right)$$ (3) should be given by the result for the chUE of RMT. This function also generates the Leutwyler-Smilga sum rules. To study the smallest eigenvalues, spectral averaging is not possible, and one has to produce large numbers of configurations. Our present results are for $`\beta =0.9`$ in the confined phase with 10000 configurations on a $`4^4`$, 10000 configurations on a $`6^4`$, and 1106 configurations on an $`8^3\times 6`$ lattice. The upper plot in Fig. 2 exhibits the distribution $`P(\lambda _{\mathrm{min}})`$ of the smallest eigenvalue $`\lambda _{\mathrm{min}}`$ in comparison with the prediction of the (quenched) chUE of RMT for topological charge $`\nu =0`$, $$P(\lambda _{\mathrm{min}})=\frac{(V\mathrm{\Sigma })^2\lambda _{\mathrm{min}}}{2}\mathrm{exp}\left(-\frac{(V\mathrm{\Sigma }\lambda _{\mathrm{min}})^2}{4}\right).$$ (4) The agreement is excellent for all lattices. For the chiral condensate we obtain $`\mathrm{\Sigma }\approx 0.35`$ by extrapolating the histogram for $`\rho (\lambda )`$ to $`\lambda =0`$ and using Eq. (2). (With staggered fermions on a finite lattice one always has $`\rho (0)=0`$, but within reasonable limits the extrapolated value is independent of the choice of the bin size.) 
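The chUE prediction for the smallest eigenvalue at $`\nu =0`$, $`P(\lambda _{\mathrm{min}})=\frac{(V\mathrm{\Sigma })^2\lambda _{\mathrm{min}}}{2}\mathrm{exp}(-(V\mathrm{\Sigma }\lambda _{\mathrm{min}})^2/4)`$ (the minus sign in the exponent is required for normalizability), is a normalized probability density and can be checked numerically. In the sketch below the choice $`V\mathrm{\Sigma }=1`$ only fixes the scale and is ours, not the paper's; plain trapezoidal integration recovers the normalization and the mean $`\sqrt{\pi }/(V\mathrm{\Sigma })`$:

```python
import math

def p_min(lam, v_sigma=1.0):
    """Smallest-eigenvalue distribution of the quenched chUE at nu = 0."""
    u = v_sigma * lam
    return (v_sigma * u / 2.0) * math.exp(-u * u / 4.0)

def trapz(f, a, b, n=200000):
    """Plain trapezoidal rule on [a, b] with n panels."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

norm = trapz(p_min, 0.0, 40.0)                   # normalization, should be 1
mean = trapz(lambda x: x * p_min(x), 0.0, 40.0)  # mean, should be sqrt(pi) for V*Sigma = 1
```

The mean being a pure number divided by $`V\mathrm{\Sigma }`$ is exactly why $`\lambda _{\mathrm{min}}`$ shrinks like $`V^{-1}`$ at fixed $`\mathrm{\Sigma }`$ with increasing lattice volume.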
Since the average value of $`\lambda _{\mathrm{min}}`$ goes like $`V^{-1}`$, $`\lambda _{\mathrm{min}}`$ decreases with increasing lattice size. In the lower plot of Fig. 2 the same comparison with RMT is done for the microscopic spectral density (3) up to $`z=10`$, and the agreement is again quite satisfactory. Here, the analytical RMT result for the (quenched) chUE and $`\nu =0`$ is given by $$\rho _s(z)=\frac{z}{2}[J_0^2(z)+J_1^2(z)].$$ (5) The quasi-zero modes which are responsible for the chiral condensate $`\mathrm{\Sigma }\approx 0.35`$ build up when we cross from the Coulomb into the confined phase. For our $`8^3\times 6`$ lattice, Fig. 3 compares on identical scales the densities of the small eigenvalues at $`\beta =0.9`$ (left plot) and at $`\beta =1.1`$ (right plot), averaged over 20 configurations. The quasi-zero modes in the left plot are responsible for the non-zero chiral condensate $`\mathrm{\Sigma }>0`$ via Eq. (2), whereas no such quasi-zero modes are found in the Coulomb phase. This is as expected. However, it may be worthwhile to understand the physical origin of the U(1) quasi-zero modes in more detail. For 4d SU(2) and SU(3) gauge theories a general interpretation is to link them, and hence the chiral condensate, to the existence of instantons. As there are no instantons in 4d U(1) gauge theory, one needs another explanation, and it may be interesting to study similarities and differences to the 4d SU(2) and SU(3) situations. An analogous case exists in 3d QCD. In conclusion, the nearest-neighbor spacing distribution of 4d U(1) quenched lattice gauge theory is described by the chUE in both the confinement and the Coulomb phase. In the confinement phase we also find that the $`P(\lambda _{\mathrm{min}})`$ distribution and the microscopic spectral density (3) are described by the chUE. A better physical understanding of the origin of the quasi-zero modes, which are responsible for the non-zero chiral condensate, is desirable.
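Equation (5) can be evaluated with nothing more than the integral representation of the Bessel functions (the quadrature step count below is our own choice). Two limits provide a useful consistency check: $`\rho _s(z)\to z/2`$ as $`z\to 0`$, and $`\rho _s(z)\to 1/\pi `$ for large $`z`$, matching the flat macroscopic density far from the origin:

```python
import math

def bessel_j(n, z, steps=4000):
    """J_n(z) from the integral representation (1/pi) * int_0^pi cos(n*t - z*sin(t)) dt."""
    h = math.pi / steps
    f = lambda t: math.cos(n * t - z * math.sin(t))
    s = 0.5 * (f(0.0) + f(math.pi)) + sum(f(i * h) for i in range(1, steps))
    return s * h / math.pi

def rho_s(z):
    """Eq. (5): microscopic spectral density of the quenched chUE at nu = 0."""
    return 0.5 * z * (bessel_j(0, z) ** 2 + bessel_j(1, z) ** 2)
```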
# Selfsimilar Domain Growth, Localized Structures and Labyrinthine Patterns in Vectorial Kerr Resonators ## Abstract We study domain growth in a nonlinear optical system useful to explore different scenarios that might occur in systems which do not relax to thermodynamic equilibrium. Domains correspond to equivalent states of different circular polarization of light. We describe three dynamical regimes: a coarsening regime in which dynamical scaling holds with a growth law dictated by curvature effects, a regime in which localized structures form, and a regime in which polarization domain walls are modulationally unstable and the system freezes in a labyrinthine pattern. The problem of the growth of spatial domains of different phases has been thoroughly studied in the context of the dynamics of phase transitions: a system is placed in an unstable state and one considers its relaxation to the state of thermodynamic equilibrium. This process is dominated by the motion of domain walls and other defects. It is in this context that seminal ideas of selfsimilar evolution and dynamical scaling were introduced for nonequilibrium processes. Asymptotic domain growth laws, with their underlying physical mechanisms, have been well established, and dynamical scaling has been generally demonstrated. A growth law $`R(t)\sim t^{1/2}`$ holds for dynamics with no conservation law and domains made of equivalent phases. This law follows from the minimization of surface energy, and it has been shown to be robust against the appearance of point defects in systems with a discrete number of phases, three dimensional vortices or chiral domain walls. Other well known growth laws are $`R(t)\sim t^{1/3}`$ for systems with conserved order parameter and $`R(t)\sim t`$ for nonconserved dynamics with a metastable phase, and also for hydrodynamic systems in spatial dimension $`d>2`$. Domain growth in systems that do not approach a final state of thermodynamic equilibrium is much less understood. 
For example, the mechanisms underlying a growth law $`R(t)\sim t^{1/5}`$ in pattern forming systems in which the spatial coupling is not purely diffusive (Swift-Hohenberg equation) have not been clearly identified. Other general issues that need to be considered are the role of Hamiltonian vs. dissipative dynamics, the effects of nonrelaxational dynamics such as one-dimensional motion of fronts between equivalent states and spiral formation, the emergence of localized structures (LS), or transverse instabilities of domain walls leading to labyrinthine patterns. Driven nonlinear optical systems offer a wealth of opportunities for the study of pattern formation and other nonequilibrium processes in which the spatial coupling is caused by diffraction instead of diffusion. These systems are especially interesting because they naturally lead to the consideration of vectorial complex fields, the vector character being associated with the polarization of light, and also because they often support the formation of LS. Such bright light spots are being actively considered for applications in parallel optical processing. Only very recently has domain growth been considered in some of these systems, and some growth laws obtained from numerical simulations have been reported. However, clear mechanisms for the growth laws have often not been identified, and some of these laws do not correspond unambiguously to an asymptotic regime. In addition, the question of dynamical scaling has, in general, not been addressed. In this letter we consider a Kerr medium as a clear example of a nonlinear optical system in which many of the issues and scenarios mentioned above can be explored. We show that after switching on a pump field, domain walls are formed which separate regions with different polarization of light. The dynamical evolution of these polarization domain walls leads to three different regimes. 
For high pump values there is a coarsening regime for which we demonstrate dynamical scaling with a growth law $`R(t)\sim t^{1/2}`$. For lower pump values this process is contaminated by the emergence of LS formed by the collapse of polarization domain walls to a stable bound structure. In a third regime the system evolves into a nearly frozen labyrinthine pattern caused by a transverse modulational instability of the polarization domain wall. These three qualitatively different regimes have been experimentally observed in another optical system and considered in the realm of Swift-Hohenberg models. Our calculations are based on a mean field model that describes the transverse spatio-temporal evolution of the two circularly polarized components of the electric field complex envelope, $`E_+`$ and $`E_{-}`$, in an optical cavity filled with an isotropic self-defocusing Kerr medium and pumped with a linearly polarized real field $`E_0`$: $$\partial _tE_\pm =-(1-i\theta )E_\pm +i\nabla _{\perp }^2E_\pm +E_0-\frac{1}{4}i\left[|E_\pm |^2+\beta |E_{\mp }|^2\right]E_\pm .$$ (1) Here $`\theta `$ is the cavity detuning, and $`\nabla _{\perp }^2`$ is the laplacian in the transverse plane. Equations (1) are damped and driven coupled Nonlinear Schrödinger equations which can be rewritten as $$\partial _tE_\pm =-E_\pm -i\frac{\delta F}{\delta E_\pm ^{*}},$$ (2) where $`F[E_+,E_{-}]`$ is a real functional. Therefore, except for the linear dissipative term, the dynamics can be written in Hamiltonian form. This corresponds to a rather different dynamics than the normal relaxational dynamics considered in systems that approach a state of thermodynamic equilibrium. We will study different regimes for different values of the pump $`E_0`$. Equations (1) admit symmetric ($`I_{s+}=I_{s-}`$) and asymmetric ($`I_{s+}\neq I_{s-}`$) steady state homogeneous solutions, where $`I_\pm =|E_\pm |^2`$. 
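For the homogeneous symmetric state, the mean-field model without the laplacian reduces to an implicit relation between pump and intracavity intensity. Assuming the equation of motion reads $`\partial _tE_\pm =-(1-i\theta )E_\pm +i\nabla _{\perp }^2E_\pm +E_0-\frac{i}{4}[|E_\pm |^2+\beta |E_{\mp }|^2]E_\pm `$ (sign conventions we infer, which may differ from the authors'), setting the time derivative and laplacian to zero with $`E_+=E_{-}=E_s`$ gives $`|E_0|^2=I_s[1+((1+\beta )I_s/4-\theta )^2]`$ with $`I_s=|E_s|^2`$. The sketch below inverts this relation by bisection; the parameter values $`\theta =1`$, $`\beta =0.5`$, $`E_0=1.5`$ are illustrative choices of ours, not the paper's, and for this detuning the response curve is single-valued:

```python
def pump_intensity(i_s, theta, beta):
    """|E_0|^2 for the symmetric homogeneous state, I_s = |E_+|^2 = |E_-|^2.

    Assumed steady-state relation (our reconstruction of the model's signs):
    |E_0|^2 = I_s * (1 + ((1 + beta) * I_s / 4 - theta)^2).
    """
    phi = 0.25 * (1.0 + beta) * i_s - theta
    return i_s * (1.0 + phi * phi)

def solve_i_s(e0, theta, beta, iters=100):
    """Invert pump_intensity by bisection; valid where the response is single-valued."""
    lo, hi = 0.0, e0 * e0  # I_s <= |E_0|^2 because the bracket above is >= 1
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if pump_intensity(mid, theta, beta) < e0 * e0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

theta, beta, e0 = 1.0, 0.5, 1.5  # illustrative values only
i_s = solve_i_s(e0, theta, beta)
```

The bound $`I_s\le |E_0|^2`$ used for the bracket follows directly from the relation itself, since the bracket multiplying $`I_s`$ never drops below one.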
The homogeneous symmetric solution is linearly stable for $`E_0<E_{0,a}`$, while the asymmetric solutions only exist for $`E_{0,a}<E_{0,b}<E_0`$ and they are linearly stable for $`E_{0,b}<E_{0,c}<E_0`$. There are two equivalent homogeneous stable solutions for $`E_{0,c}<E_0`$, one in which $`I_{s+}>I_{s-}`$ and the other one, obtained by interchanging $`E_+`$ and $`E_{-}`$, in which $`I_{s+}<I_{s-}`$. These solutions are elliptically polarized, but very close to being circularly polarized, because one of the two circularly polarized components dominates. For simplicity we will call them the right and left circularly polarized solutions. If the pump field $`E_0`$ is switched on from $`E_0=0`$ to a value $`E_0>E_{0,c}`$, only the mode with zero wavenumber can initially grow from the initial condition $`E_\pm =0`$. One then expects that either of the two equivalent homogeneous solutions will locally grow and that domains separated by polarization walls will emerge. This is indeed the process that we study. We note, however, that a solution with a stripe pattern orthogonally polarized to the pump exists for $`E_0>E_{0,a}`$. This pattern solution is the one obtained by continuity from the homogeneous symmetric solution through a Turing-like instability. We have numerically checked that such a solution remains stable for pump values $`E_0>E_{0,c}`$, but it is not the solution approached by the physical process just described of switching on the pump to a value $`E_0>E_{0,c}`$. We find three different dynamical regimes for $`E_0>E_{0,c}`$, summarized in fig. 1. For $`E_0>E_{0,2}`$ domains grow and the system coarsens, for $`E_{0,2}>E_0>E_{0,1}`$ stable LS are formed, while for $`E_{0,1}>E_0>E_{0,c}`$ a labyrinthine pattern emerges. These regimes are better understood by considering the evolution of an initial isolated polarization droplet: a circular domain of one of the solutions surrounded by the other solution. 
We find that the radius of the circular domain varies consistently with curvature-driven front motion. The normal front velocity $`v_n`$ (eikonal equation) follows a law of the form $`v_n(𝐫,t)=-\gamma (E_0)\kappa (𝐫,t)`$, where $`\kappa `$ is the local curvature of the domain wall and $`\gamma (E_0)`$ is a coefficient that depends on the pump field amplitude. For a circular domain we get $`dR(t)/dt=-\gamma (E_0)/R(t)`$. In figure 1 we show the function $`\gamma (E_0)`$ as obtained from the numerical solution of eqs. (1) in a two-dimensional system for relatively large initial droplets. Notice that $`\gamma (E_0)`$ changes sign at $`E_0=E_{0,1}`$, which indicates a change from droplet shrinkage to droplet growth. We first consider the regime of domain coarsening which occurs for $`E_0>E_{0,2}`$. In this regime $`\gamma (E_0)>0`$ and an isolated drop shrinks to zero radius. In the general dynamics starting from random initial conditions around $`E_\pm =0`$, sharp domain walls are initially formed and they evolve reducing their curvature. The system approaches a final homogeneous state in which one of the two circularly polarized solutions fills the whole system. In order to characterize the coarsening process we have calculated the pair correlation function of $`I_+`$ and $`I_{-}`$, defined as $`C_{I_\pm }(𝐫,t)=\langle I_\pm (𝐱+𝐫,t)I_\pm (𝐱,t)\rangle `$. The average $`\langle \mathrm{}\rangle `$ is performed over the set of points $`𝐱`$ (and additionally over a set of 100 different random initial conditions). Due to the symmetry of the problem $`C_{I_+}=C_{I_{-}}\equiv C`$. Results for the circularly averaged correlation function $`C(r,t)`$ are shown in fig. 2. The mean size $`L(t)`$ of the domains is calculated as the distance at which $`C(r,t)`$ takes half its value at the origin, i.e., $`C(L(t),t)=\frac{1}{2}C(0,t)`$. We obtain a well defined asymptotic growth law $`L(t)\sim t^{1/2}`$ that follows from domain wall motion driven by curvature effects. 
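The curvature-driven law $`dR/dt=-\gamma (E_0)/R`$ (writing the sign so that $`\gamma >0`$ corresponds to shrinkage, as described in the text) integrates in closed form to $`R(t)^2=R_0^2-2\gamma t`$, so an isolated droplet of initial radius $`R_0`$ disappears at $`t^{}=R_0^2/2\gamma `$. A quick numerical cross-check, with arbitrary illustrative values of $`\gamma `$ and $`R_0`$:

```python
import math

def integrate_radius(r0, gamma, t_end, n=100000):
    """Forward-Euler integration of the curvature-driven law dR/dt = -gamma / R."""
    dt = t_end / n
    r = r0
    for _ in range(n):
        r += dt * (-gamma / r)
    return r

gamma, r0, t = 0.3, 5.0, 10.0  # illustrative numbers only
numerical = integrate_radius(r0, gamma, t)
exact = math.sqrt(r0 ** 2 - 2.0 * gamma * t)  # closed-form solution R(t)^2 = R0^2 - 2*gamma*t
```

The same closed form, run backwards in time (or with $`\gamma <0`$), reproduces the $`R(t)\sim t^{1/2}`$ droplet growth quoted later for the labyrinthine regime.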
We have further obtained that the dynamics is selfsimilar, i.e., that there is dynamical scaling. This is seen in fig. 2 where we plot $`C(r,t)`$ before and after rescaling the spatial coordinate of the system with the characteristic domain size $`L(t)`$. We observe that curves for different times in the scaling regime collapse to a single scaling function after rescaling. These results coincide with those obtained for many thermodynamic systems with nonconserved order parameter. We note, however, that in our case the dynamics does not follow the minimization of any obvious energy and that surface tension is not a proper concept for the diffractive spatial coupling considered in optical systems. We next address the regime of formation of LS ($`E_{0,2}>E_0>E_{0,1}`$). In this regime, as in the previous case, $`\gamma (E_0)>0`$, and a large isolated droplet initially shrinks with a radius decreasing as $`R(t)\sim t^{1/2}`$. However the shrinkage stops at a well defined final value of the radius. Initial droplets with a smaller radius grow to this final stable radius. In the general dynamics following the switch-on of the pump, domain walls are initially formed. They first evolve reducing their length as in the coarsening regime. But while in that regime a closed loop disappears, here it collapses to a stable LS formed by a bound state of the domain wall. The final state is composed of stretched domain walls and LS. To understand this process it is convenient to consider the form of the polarization domain walls in a $`d=1`$ geometry, as shown in figure 3. An isolated $`d=1`$ domain wall is stationary. We observe that the intensity profiles of the walls do not approach monotonically the asymptotic value of the homogeneous state. When several domain walls are created in the transient dynamics, they interact with each other. Since the front profiles have oscillatory tails, the interaction between two walls can lead to repulsive forces. 
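The half-height construction used to extract the domain size $`L(t)`$ from the pair correlation function can be illustrated with a minimal 1D toy of our own (not from the paper): for a periodic field made of one bright and one dark domain of equal length, $`C(r)`$ drops linearly and reaches half of $`C(0)`$ at exactly a quarter of the period.

```python
def pair_correlation(field, r):
    """C(r) = <I(x + r) I(x)>, averaged over x with periodic boundaries."""
    n = len(field)
    return sum(field[x] * field[(x + r) % n] for x in range(n)) / n

def domain_size(field):
    """Smallest r at which C(r) has dropped to half of C(0)."""
    c0 = pair_correlation(field, 0)
    for r in range(1, len(field) // 2 + 1):
        if pair_correlation(field, r) <= 0.5 * c0:
            return r
    return len(field) // 2

n = 400
field = [1] * (n // 2) + [0] * (n // 2)  # one bright and one dark domain
```

For this square-wave profile $`C(0)=1/2`$ and $`C(r)=(n/2-r)/n`$ for $`r\le n/2`$, so the half-height point sits at $`r=n/4=100`$.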
As a consequence, LS formed by bound domain walls can appear, which stop the coarsening process. These oscillatory tails are less important the larger $`E_0`$ is (see figure 3). However, we find that for all the values of $`E_0`$ which we have explored (up to $`E_0=10`$), this effect is enough to stop coarsening in $`d=1`$: a frozen pattern state is always dynamically reached. What happens in our $`d=2`$ situation is a competition between the $`d=1`$ repulsive effect between walls and the curvature effect that tends to reduce a droplet to zero radius. When the repulsive force is large enough, it might counterbalance the shrinkage process driven by curvature, and thus lead to the formation of a LS. This happens for $`E_{0,1}<E_0<E_{0,2}`$. The mechanism is the one also discussed in . These structures can be seen as a hole of $`I_+`$ ($`I_{-}`$) in the background of a circularly $`+`$-polarized ($`-`$-polarized) state, together with a peak of $`I_{-}`$ ($`I_+`$). Since the oscillatory tails are larger as $`E_0`$ decreases, the size of the LS decreases with $`E_0`$. We have found a perfect linear dependence of the radius of the LS on $`E_0`$. In figure 4 we show a plot of a LS together with its transverse profile. Note that the intensity in the LS is greater than in the surrounding background. We finally discuss the regime of labyrinthine pattern formation which occurs for $`E_0<E_{0,1}`$: switching on the pump produces a very dense pattern of domain walls that repel each other. In this regime $`\gamma (E_0)<0`$, and an isolated droplet of arbitrarily small size grows as $`R(t)\sim t^{1/2}`$. In an infinitely large system the droplet would grow without limit, but with periodic boundary conditions it grows until the domain wall interacts with itself. Repulsion of the domain wall leads to a labyrinthine pattern as shown in fig. 5. 
An independent way of identifying the value $`E_0=E_{0,1}`$, below which labyrinthine patterns emerge, is by a linear stability analysis in $`d=2`$ of the $`d=1`$ domain wall profile. We have numerically obtained that such a flat domain wall has a transverse modulational instability for values of the pump amplitude for which $`\gamma (E_0)<0`$. We find a longwavelength instability in which arbitrarily small wavenumbers become unstable for $`E_0<E_{0,1}`$ (see fig. 6). This is reminiscent of the situation described for vectorial Second Harmonic Generation. In physical terms, both the droplet growth and the modulational instability indicate that the system prefers to have the longest possible domain walls, or equivalently the largest possible curvature. This leads to a nearly frozen state in which the oscillatory tails of the domain walls prevent their self-crossing and in which coarsening is suppressed. LS might form, but their natural tendency to grow is stopped by surrounding walls. In summary, we have described a situation in nonlinear optics in which many of the generic issues and possible scenarios of domain growth in nonthermodynamic systems occur. In spite of the nonrelaxational dynamics we have found a regime of selfsimilar evolution with a growth law characteristic of curvature-driven motion. In other regimes, obtained just by changing the pump amplitude, domain growth is contaminated by the emergence of LS or suppressed by an instability of the domain wall that leads to a nearly frozen labyrinthine pattern. Domain walls and LS are here associated with the polarization vectorial degree of freedom of light. Financial support from DGICYT (Spain, Projects PB94-1167 and PB97-0141-C02-01) is acknowledged. Helpful discussions with P. Colet, M. Hoyuelos and B. Malomed are also acknowledged.
# Dropping 𝜎-Meson Mass and In-Medium S-wave 𝜋-𝜋 Correlations ## Abstract The influence of a dropping $`\sigma `$-meson mass on previously calculated in-medium $`\pi \pi `$ correlations in the $`J=I=0`$ ($`\sigma `$-meson) channel is investigated. It is found that the invariant-mass distribution around the vacuum threshold experiences a further strong enhancement with respect to standard many-body effects. The relevance of this result for the explanation of recent $`A(\pi ,2\pi )X`$ data is pointed out. In-medium s-wave pion-pion correlations have recently attracted much attention on both the theoretical and experimental sides. These studies are of relevance for the behavior of the in-medium chiral condensate and its fluctuations with increasing density. In earlier studies we have shown that standard p-wave coupling of the pion to $`\mathrm{\Delta }`$-h and p-h configurations induces a strong enhancement of the $`\pi \pi `$ invariant-mass distribution around the $`2m_\pi `$ threshold, thus signalling increased fluctuations in the $`\sigma `$-channel. This fact was independently confirmed in . It has been argued in that this effect could possibly explain the $`A(\pi ,2\pi )`$ knockout reaction data from the CHAOS collaboration . More recently Vicente Vacas and Oset have claimed that the theory underestimates the experimentally found $`\pi \pi `$ mass enhancement. This claim may be partly questioned, since the reaction theory calls for a calculation with a finite total three-momentum of the in-medium pion pairs <sup>*</sup><sup>*</sup>*It was shown in that considering the finite three-momenta of the pion pair may further increase the effect of the in-medium pion-pion final-state interaction on the final $`\pi ^+\pi ^{-}`$ invariant-mass distribution at threshold. Furthermore, at finite total three-momenta of the pair, the sigma-meson couples directly to particle-hole excitations as well, which results in more strength enhancement at threshold, as shown in . 
On the other hand, Hatsuda et al. argued that the partial restoration of chiral symmetry in nuclear matter, which leads to a dropping of the $`\sigma `$-meson mass , induces effects similar to the standard many-body correlations mentioned above. It is therefore natural to study the combination of both effects. This is the objective of the present note. As a model for $`\pi \pi `$ scattering we consider the linear sigma model treated in leading order of the $`1/N`$-expansion . The scattering matrix can then be cast in the following form $`T_{ab,cd}(s)`$ $`=`$ $`\delta _{ab}\delta _{cd}{\displaystyle \frac{D_\pi ^{-1}(s)-D_\sigma ^{-1}(s)}{3\sigma ^2}}{\displaystyle \frac{D_\sigma (s)}{D_\pi (s)}},`$ (1) where $`s`$ is the Mandelstam variable. In Eq. (1) $`D_\pi (s)`$ and $`D_\sigma (s)`$ are respectively the full pion and sigma propagators, while $`\sigma `$ is the sigma condensate. The expression in Eq. (1) reduces in fact, in the soft-pion limit, to a Ward identity which links the $`\pi \pi `$ four-point function to the $`\pi `$\- and $`\sigma `$ two-point functions as well as to the $`\sigma `$ one-point function. To this order, the pion propagator and the sigma condensate are obtained from the Hartree-Bogoliubov (HB) approximation . In terms of the pion mass $`m_\pi `$ and the decay constant $`f_\pi `$, they are given by $$D_\pi (s)=\frac{1}{s-m_\pi ^2},f_\pi =\sqrt{3}\sigma .$$ (2) The sigma meson, on the other hand, is obtained from the Random Phase Approximation (RPA) involving $`\pi `$-$`\pi `$ scattering and reads $$D_\sigma (s)=\left[s-m_\sigma ^2-\frac{2\lambda ^4\sigma ^2\mathrm{\Sigma }_{\pi \pi }(s)}{1-\lambda ^2\mathrm{\Sigma }_{\pi \pi }(s)}\right]^{-1},$$ (3) where $`\mathrm{\Sigma }_{\pi \pi }(s)`$ is the $`\pi \pi `$ self-energy regularized by means of a form factor which is used as a fit function and allows one to reproduce the experimental $`\pi \pi `$ phase shifts. 
The coupling constant $`\lambda ^2`$ denotes the bare quartic coupling of the linear $`\sigma `$-model, related to the mean-field pion mass $`m_\pi `$, sigma mass $`m_\sigma `$, and the condensate $`\sigma `$ via the mean-field saturated Ward identity $$m_\sigma ^2=m_\pi ^2+2\lambda ^2\sigma ^2.$$ (4) It is clear from what was said above that the $`\sigma `$-meson propagator in this approach is correctly defined, since it satisfies a whole hierarchy of Ward identities. In cold nuclear matter the pion is dominantly coupled to $`\mathrm{\Delta }`$-h, p-h, as well as to the 2p-2h excitations which, on the other hand, are renormalized by means of repulsive nuclear short-range correlations (see for details). Since the pion is a (near) Goldstone mode, its in-medium s-wave renormalization does not induce considerable changes. The sigma meson, on the other hand, is not protected by chiral symmetry against an important s-wave renormalization. Therefore, following a very economical procedure, we extract an approximate density dependence of the mean-field sigma-meson mass by taking into account the density dependence of the condensate. From eq. (4) it is clear that the density dependence of the sigma meson is essentially dictated by the density dependence of the condensate. For densities below and around nuclear saturation density $`\rho _0`$ we take for the in-medium $`\sigma `$-meson mass the simple ansatz (see also ) $$m_\sigma (\rho )=m_\sigma (1-\alpha \frac{\rho }{\rho _0})$$ (5) where $`\rho `$ is the nuclear matter density and $`m_\sigma `$ is the vacuum $`\sigma `$-meson mass. The parameter $`\alpha `$ can be estimated from model calculations or QCD sum rules and lies in the range from 0.2 to 0.3. These are the values which we will also use in this work. The result for the sigma-meson mass distribution $`\mathrm{Im}D_\sigma (E_{\pi \pi })`$, as calculated from Eq. (3) by using the in-medium mass (5), is shown in Fig. 2 for various densities. 
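The Ward identity (4) and the linear ansatz $`m_\sigma (\rho )=m_\sigma (1-\alpha \rho /\rho _0)`$ combine into a few lines of arithmetic. In the sketch below the vacuum inputs $`m_\pi =138`$ MeV, $`f_\pi =92`$ MeV and a vacuum sigma mass of 550 MeV are standard illustrative values inserted by us; the text does not quote its fitted numbers:

```python
import math

m_pi, f_pi = 0.138, 0.092  # GeV, vacuum pion mass and decay constant (illustrative inputs)
m_sigma_vac = 0.550        # GeV, assumed vacuum sigma mass, not quoted in the text
sigma = f_pi / math.sqrt(3.0)                                # condensate, from f_pi = sqrt(3)*sigma
lam2 = (m_sigma_vac ** 2 - m_pi ** 2) / (2.0 * sigma ** 2)   # quartic coupling, from Eq. (4)

def m_sigma_medium(rho_over_rho0, alpha):
    """Eq. (5): linear dropping of the sigma mass with density rho / rho_0."""
    return m_sigma_vac * (1.0 - alpha * rho_over_rho0)
```

At $`\rho =\rho _0`$ this gives a 20% ($`\alpha =0.2`$) or 30% ($`\alpha =0.3`$) reduction of $`m_\sigma `$, which is what pushes the spectral strength of $`\mathrm{Im}D_\sigma `$ towards the $`2m_\pi `$ threshold in Fig. 2.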
One sees that, as density increases, a strong downward shift of the sigma-mass distribution occurs. The enhancement at low energies is strongly reinforced as the in-medium $`\sigma `$-meson mass is included. For $`\alpha =0.2`$ and $`\alpha =0.3`$ the peak height is increased by a factor of 2 and 3, respectively. Similarly for the T-matrix, a sizeable effect can be noticed in its imaginary part. There is therefore a large flexibility to explain the findings of the CHAOS collaboration. Comparing the curve for $`\alpha =0.2`$, for instance, with the curves of figure 4 (from ref. ), which were used to compute the $`\pi ^+\pi ^{-}`$ mass distribution (figure 5 of ), one realizes that there is indeed enough strength at threshold to reproduce the experimental data. Work in this direction is in progress. These findings call for some comments. It is clear that vertex corrections, usually a source of repulsion and not taken into account in this work, could weaken the effects. In particular the influence of the nuclear density on the coupling constants should be considered. This seems, however, to be of minor importance as was recently shown by Chanfray et al. . More care should also be taken in properly incorporating Pauli blocking when renormalizing the pion pairs in matter, although preliminary investigations have shown that this effect is weak. In conclusion we have shown that a dropping sigma-meson mass, linked to the partial restoration of chiral symmetry in nuclear matter, further enhances the build-up of previously found $`\pi \pi `$ strength in the $`I=J=0`$ channel. Further studies are necessary to show how precisely this is linked to the recent findings by Bonutti et al. .
# 1 Introduction The importance of theoretically well-founded event generators which describe experimental data to a satisfactory degree can in general not be overstated. But since the whole of this workshop is dedicated to Monte Carlo event generators, we feel it is unnecessary to dwell on this here. Event generators based on perturbative QCD cascade models have been immensely successful in reproducing the bulk of the data recorded at LEP . Some of us were hoping for a similar development at HERA, but it soon became clear that this was not possible. Before this workshop there was not one QCD-cascade-based generator which even came close to the agreement with data achieved at LEP. This was especially true for deep inelastic scattering (DIS) at small $`x`$: none of the conventional generators with DGLAP-based initial-state cascades was able to even qualitatively describe the measured final state in the proton direction . It was the hope that generators based on BFKL or CCFM evolution could be developed and would be able to describe this region, but the only two such generators available before the workshop, SMALLX and LDCMC , did not live up to these expectations. The only generators which gave a fair description of small-$`x`$ final states were generally not considered to be on firm theoretical grounds: RAPGAP, modeling a resolved virtual photon contribution , and ARIADNE, based on the colour dipole cascade model . One could hope that the generators at least would give a good description of data at high $`Q^2`$ and high $`x`$, where DGLAP evolution should be a good approximation of the underlying parton dynamics. But most of the generators were not even able to reproduce data in this region . With this in mind the work of our group was quickly divided into two directions. One was concerned with developing new generators implementing CCFM evolution to better understand the small-$`x`$ region. 
The other direction was to look at existing generators and try to understand why some of them failed to describe the high-$`Q^2`$ region and, if these discrepancies could be fixed, try to tune the parameters in the programs to get as good a description as possible of available data. ## 2 Confronting the Generators ### High $`Q^2`$ and the Current Breit Hemisphere Over the course of the workshop much understanding and development of the event generators and the data in the high-$`Q^2`$ region and the current fragmentation region of the Breit frame has been achieved. One of the puzzles before the workshop was why the ARIADNE program, which gives a good overall description of the HERA DIS data, had problems describing the high-$`Q^2`$ region. In this region one naïvely expected our understanding of the underlying physics to be on a firmer theoretical basis than at low $`x`$ and $`Q^2`$. Even in the current fragmentation region of the Breit frame, where it is expected that DIS events should resemble one hemisphere of $`\mathrm{e}^+\mathrm{e}^{-}`$ annihilation events, ARIADNE had difficulties in describing the data. This is despite the fact that ARIADNE gives a very good description of $`\mathrm{e}^+\mathrm{e}^{-}`$ data. The main difference in the treatment of colour dipoles in ARIADNE between $`\mathrm{e}^+\mathrm{e}^{-}`$ and DIS is due to the fact that in DIS the initial parton configuration is not point-like; the proton remnant is treated as an extended object. It was observed that at high $`Q^2`$ the phase space available for radiation was restricted even in the current fragmentation region due to this treatment of the extended proton remnant. This deficiency in the model has been understood and overcome, and modifications have been introduced within ARIADNE . These modifications went a long way towards removing the discrepancies between ARIADNE and the data, though problems still persist in describing the data and are under investigation . 
*Edén* questioned the assumptions behind the equivalence of the current fragmentation region of the Breit frame and a single hemisphere in $`\mathrm{e}^+\mathrm{e}^{-}`$ experiments . It was shown that in DIS QCD radiation can give rise to high $`p_T`$ emissions which have no correspondence in an $`\mathrm{e}^+\mathrm{e}^{-}`$ event; these emissions lead to a de-population of the current fragmentation region. In order to limit the effect of these high $`p_T`$ emissions, a jet algorithm was applied and DIS events with a jet $`p_T>Q/2`$ were removed from the comparison. This produced a sizeable reduction in the discrepancy between the predictions of the mean charged multiplicity of $`\mathrm{e}^+\mathrm{e}^{-}`$ and DIS Monte Carlo generators at low $`Q^2`$. This cut on jet $`p_T`$, though, suppresses the contribution to the DIS sample from boson-gluon fusion. In so doing, charm production is reduced by approximately a factor of 2. Further improvement in the agreement between the $`\mathrm{e}^+\mathrm{e}^{-}`$ and DIS generators could then be achieved by artificially removing heavy quark contributions from the $`\mathrm{e}^+\mathrm{e}^{-}`$ generator, generating events with just light quarks. It was proposed that the experiments perform further studies of the current fragmentation region, applying this jet selection criterion, to compare with light quark enriched samples from the LEP1 experiments. A comparison between MLLA predictions and the ARIADNE Monte Carlo at the parton level was made during the workshop . It was shown that for the $`\mathrm{e}^+\mathrm{e}^{-}`$ scenario there was good agreement between the MLLA truncated parton spectra and those generated from ARIADNE . The MLLA predictions for the current region of the Breit frame in DIS are identical to the $`\mathrm{e}^+\mathrm{e}^{-}`$ predictions, so the study was extended to DIS. 
It was shown that a reasonable agreement only became possible with the introduction of the previously discussed modifications of the proton remnant in ARIADNE . At lower $`Q^2`$ discrepancies between the MLLA predictions and ARIADNE were observed. These discrepancies could well be due to the problems discussed by *Edén* . ### Comparisons with Data During the workshop a forum was established between the H1 and ZEUS collaborations for a joint coordinated investigation of the generators, working closely with the programs’ authors. The HzTool package was substantially updated to include the majority of H1 and ZEUS hadronic final state data, both for DIS and photoproduction. A concerted effort was also made to include preliminary data from the collaborations in this program. During the course of the workshop there were two investigations comparing the Monte Carlo predictions to the HERA DIS data . One attempted to ‘tune’ the programs in the higher $`(x,Q^2)`$ region . In this study the ARIADNE , HERWIG and LEPTO Monte Carlo generators for DIS data were investigated . Other programs such as RAPGAP and those developed over the duration of the workshop are planned to be examined as part of a continuing program of work by the authors. The other study concentrated on a comparison with new energy flow data and particle spectra data and, in addition, included a comparison of the data with the RAPGAP program. Both studies found that modifications to the generators (e.g. the new soft colour interaction implementation in LEPTO and the high $`Q^2`$ changes in ARIADNE ) helped tremendously in describing the data. Unfortunately, it proved difficult to find sets of parameters within the generators studied that would describe the whole range of distributions at both low and high $`Q^2.`$ As a compromise, various settings have been given, optimised for particular regions of phase space. 
The momentum generated by the workshop in ‘tuning’ the generators will allow more detailed investigations, including generators not so far studied. The ultimate aim of the forum, set up as a consequence of the workshop, is to have event generators that are able to describe the HERA data as impressively as they do the LEP data ! ## 3 Developing New Generators for small $`x`$ The fact that DGLAP-based generators have not been able to reproduce the small-$`x`$ DIS final states measured at HERA has often been taken as an indication that effects of BFKL or CCFM evolution are visible in the data. This view has been strengthened since the ARIADNE generator, which has the feature of $`k_{\perp }`$-non-ordering in common with BFKL, qualitatively describes the data. To really verify that BFKL evolution is responsible for e.g. the high rate of forward jets, or the large forward transverse energy flow, it is necessary to have an event generator with BFKL dynamics properly implemented. One such generator, SMALLX , has been available for quite some time. It implements CCFM evolution, which in the small-$`x`$ limit is equivalent to BFKL, but it could only generate events at the parton level. Not long before the workshop, the LDCMC generator was released, which implements the Linked Dipole Chain model, a reformulation of the CCFM evolution. Although this generator describes the small-$`x`$ region slightly better than DGLAP-based ones, it was clearly not able to explain the data in the forward region to a satisfactory level . Before the workshop it was already clear that small-$`x`$ DIS final states could be described by adding, to a normal DGLAP-based generator, a contribution corresponding to a resolved virtual photon. This model was treated in detail in Working Group 30, while our group concentrated on developing CCFM-based models. So, although resolved virtual photons may be a reasonable way to describe small-$`x`$ final states, we will not discuss them further here. 
During this workshop, a lot of effort was put into developing old and new generators based on CCFM evolution. *Goerlich and Turnau* have developed the SMALLX generator so that it is now interfaced to the Lund string fragmentation model implemented in JETSET . Using a simple parameterisation of the input gluon density they can evolve $`F_2`$, but fail to find a good description of the HERA data. Regardless of this poor agreement, they use this gluon parameterisation to generate the hadronic final state. They find that they cannot describe the transverse energy flow, but contrary to LDCMC they overshoot the data rather than undershoot it . Much effort has also been put into trying to understand the discrepancies between LDCMC and the data. The LDC model should, to leading order, be equivalent to CCFM, but the program also includes estimates of non-leading effects and has e.g. included the evolution of quark chains and the correction of splitting functions to reproduce $`2\to 2`$ matrix elements for local hard sub-collisions in the ladder. However, no significant progress was made during this workshop, and LDCMC still does not reproduce data at small $`x`$. Similarly disappointing results were presented by *Salam* , who reported on the work done by the Milan group on CCFM phenomenology. They investigated different possible formulations of the so-called non-Sudakov form factor . Although their modifications were formally sub-leading and only important as $`z\to 1`$ in the splittings, large effects were noticed. With the modification which led to the largest correction, they were able to reproduce the $`F_2`$ measured at HERA, but not the final state properties, such as forward jet rates. It should be noted that they did not try to include the, formally sub-leading, $`1/(1-z)`$ pole in the splitting function, which could also give rise to large corrections. Also *Jung* has used a somewhat non-standard form of the non-Sudakov form factor introduced in . 
Implementing this in the SMALLX program (together with some other modifications, the resulting version of SMALLX is called SMMOD ), he obtains a good description of $`F_2`$. For the final state properties, he finds a large dependence on the so-called kinematical, or consistency, constraint, which was introduced to ensure that the standard form of the non-Sudakov form factor stays below unity in the allowed phase space. Since the non-Sudakov form factor used by *Jung* does not suffer from this problem, the consistency constraint is not needed, and without it a good description of the data, e.g. forward jet rates, is obtained. From the SMALLX program it is possible to extract the evolved unintegrated gluon density. This is used by *Jung* in a completely new program, CASCADE , which implements CCFM in a backward evolution framework . Also with this program a reasonable description of small-$`x`$ data is obtained, although the agreement between CASCADE and SMMOD is not perfect due to differences in the normalizations of the unintegrated gluon density between the two programs. Clearly much progress has been made during this workshop, although much work is still needed. The fact that there exist three hadron-level generators which all claim to (in leading order) implement the same CCFM evolution, but which give completely different results, urgently calls for further investigation. In comparing these models among themselves and with data, it is important to have good observables which are sensitive to the characteristics of BFKL/CCFM evolution. Two new such observables were suggested during the workshop. *Goerlich and Turnau* suggested measuring the difference in transverse energy flow with and without a forward high-$`k_{\perp }`$ particle. This showed a good separation power between the DGLAP-based LEPTO generator, their own SMALLX generator and ARIADNE . 
A similarly good separation power was shown for some observables based on transverse momentum transfer proposed by *Van Mechelen and De Wolf*. By summing vectorially all transverse momentum on either side of a given rapidity cut, the $`k_{\perp }`$ of the propagator gluon is reconstructed, and correlations can be measured as a function of rapidity. In this way it should be possible to test the $`k_{\perp }`$-non-ordering property of BFKL evolution . More general advice to event generator authors was given by *Levin* , who discussed the recently calculated next-to-leading corrections to BFKL , and their implications for small-$`x`$ evolution. He also presented indications that so-called screening corrections, due to the large gluon density at small $`x`$, are becoming visible at HERA, and urged Monte Carlo authors to consider including such corrections in their programs. ## 4 Conclusions As far as QCD cascades are concerned, this has been a very productive workshop. During the workshop we have increased our understanding of the high $`Q^2`$ region and of why the standard generators had problems describing the corresponding hadronic final states. Now, most of these problems have been solved and attempts to tune the parameters of these generators have started. But many difficulties still remain, and it has not been possible to find a single consistent set of parameters for any of the generators which can describe all observables at high $`Q^2`$. At small $`x`$ the situation is even worse. But here too much work has been invested and much progress has been made during the workshop. There are now three hadron-level generators implementing CCFM evolution. Although only one of them has been shown to be able to reproduce the characteristics of small-$`x`$ final states, the situation is much better than before the workshop, when only the ARIADNE and RAPGAP (with resolved virtual photons) programs gave a reasonable description. 
By carefully comparing all these different models we may soon be able to understand better the underlying parton dynamics. Although the workshop is now over, the work will continue and so will the fruitful collaboration between experimentalists and event generator authors.
# Resonant Two-body $`D`$ Decays<sup>1</sup>

SLAC-PUB-8216, hep-ph/9908237, August 1999

Michael Gronau<sup>2</sup>

Stanford Linear Accelerator Center, Stanford University, Stanford, CA 94309

<sup>1</sup>Supported in part by the Department of Energy under contract number DE-AC03-76SF00515. <sup>2</sup>Permanent Address: Physics Dept., Technion – Israel Institute of Technology, 32000 Haifa, Israel.

ABSTRACT

> The contribution of a $`K_0^{*}(1430)`$ $`0^+`$ resonance to $`D^0\to K^{-}\pi ^+`$ is calculated by applying the soft pion theorem to $`D^+\to \overline{K}_0^{*}\pi ^+`$, and is found to be about 30$`\%`$ of the measured amplitude and to be larger than the $`\mathrm{\Delta }I=3/2`$ component of this amplitude. We estimate a 70$`\%`$ contribution to the total amplitude from a higher $`K_0^{*}(1950)`$ resonance. This implies large deviations from factorization in $`D`$ decay amplitudes, a lifetime difference between $`D^0`$ and $`D^+`$, and an enhancement of $`D^0\overline{D}^0`$ mixing due to SU(3) breaking.

To be published in Physical Review Letters

Hadronic two-body and quasi two-body weak decays of $`D`$ mesons, which constitute a sizable fraction of all hadronic $`D`$ decays , involve nonperturbative strong interactions. Long distance QCD effects spoil the simplicity of the short distance behavior of the weak interactions . Therefore, a simplified approach in which the amplitudes of these processes are given by a factorizable short-distance current-current effective Hamiltonian is not expected to work too well. Various approaches have been employed to include long distance effects. The most frequently used prescription, motivated by $`1/N_c`$ arguments , is to apply “generalized factorization” : The two relevant Wilson coefficients ($`c_1,c_2`$), multiplying appropriate four-quark short distance operators, are replaced by scale-dependent free parameters ($`a_1,a_2`$). 
In this prescription, the magnitudes of isospin amplitudes are calculated from experimentally determined decay constants and form factors, while strong phases (to be determined from experiment) are assigned to these amplitudes to account for final state interactions. In spite of its somewhat ad hoc and disputable procedure (evidently final state phases do not occur only in elastic scattering, but are largely due to inelastic processes), this phenomenological treatment works reasonably well in Cabibbo-favored $`D`$ decays . Its failure in the Cabibbo-suppressed $`D\to \pi \pi `$ and $`D\to K\overline{K}`$ processes is believed to be associated with inelastic hadronic rescattering. It was pointed out almost twenty years ago that the observed resonance states in the $`K\pi ,K\rho /K^{*}\pi ,\pi \pi ,\pi \rho `$ channels, with masses close to the $`D`$ mass, may strongly affect final state interactions in $`D`$ decays . The idea is clear and simple; however, its implementation involves a multi-channel rescattering S-matrix which cannot be quantified in a model-independent manner . In practice, it is impossible to calculate the effect of s-channel resonance states in two-body $`D`$ decays without knowing the weak couplings of the $`D`$ meson to these resonances. If some of these couplings are sufficiently large, the corresponding resonances may have large or even dominating contributions in certain decays. In this case the apparent success in describing two-body and quasi two-body decays in terms of “generalized factorized” amplitudes would be an accident which ought to be further investigated. Large resonance contributions in $`D^0`$ decays could explain the observed difference between the $`D^+`$ and $`D^0`$ lifetimes. Contrary to the $`D^0`$, the final states in Cabibbo-favored $`D^+`$ decays, made of $`\overline{d}s\overline{d}u`$, are pure $`I=3/2`$ and do not receive such contributions. Also, resonant amplitudes involve large SU(3) breaking in the resonance masses and widths. 
Consequently, intermediate resonance states are expected to lead to large $`D^0\overline{D}^0`$ mixing. We return to these questions in our conclusion. The purpose of this Letter is to present the first model-independent quantitative study of direct channel resonance contributions to two-body $`D`$ decays. We will calculate the contribution of $`\overline{K}_0^{*0}(1430)`$, a particular excited $`K`$-meson $`0^+`$ state ($`s\overline{d}`$ in a P-wave), to the Cabibbo-favored $`D^0\to K^{-}\pi ^+`$ decay process. In spite of the fact that this resonance peaks at 436 MeV below the $`D`$ mass, we find its contribution to amount to a sizable fraction, approximately 30$`\%`$, of the measured $`D\to K^{-}\pi ^+`$ amplitude. Another $`0^+`$ $`K\pi `$ resonance, observed around 1900 MeV, is likely to have a larger contribution due to its close proximity to the $`D`$ meson mass. Assuming that its weak coupling to the $`D`$ is approximately equal to that of the resonance at 1430 MeV, we estimate its contribution to be about 70$`\%`$. An important step in our analysis is the evaluation of the weak interaction matrix element between a $`D`$ meson and the 1430 MeV resonance state. For this purpose, we apply the soft pion theorem which relates this amplitude to the measured $`I=3/2`$ $`D^+\to \overline{K}_0^{*0}\pi ^+`$ amplitude . It is crucial in our argument that the final state $`\overline{K}_0^{*0}\pi ^+`$ is “exotic”, in which case the amplitude does not involve a pole term (“surface term”) and varies smoothly and only slightly in the soft pion limit. The $`0^+`$ $`K_0^{*}(1430)`$ resonance contribution to $`D^0\to K^{-}\pi ^+`$ is given by the Breit-Wigner form $$A(1430,K^{-}\pi ^+)=\frac{h_1g}{m^2(D^0)-m^2+im\mathrm{\Gamma }},$$ (1) where $`h_1\equiv <\overline{K}_0^{*0}|H_W|D^0>,m(D^0)=1864.6\pm 0.5\mathrm{MeV},m\equiv m(K_0^{*})=1429\pm 6\mathrm{MeV},\mathrm{\Gamma }\equiv \mathrm{\Gamma }(K_0^{*})=287\pm 23\mathrm{MeV}`$ . 
The strong $`K_0^{*}K\pi `$ coupling $`g`$ is obtained from the $`K_0^{*}`$ width $$g^2=\frac{8\pi m^2\mathrm{\Gamma }f}{p_\pi },f\equiv \mathrm{BR}(\overline{K}_0^{*0}\to K^{-}\pi ^+)=0.62\pm 0.07,p_\pi =621\mathrm{MeV}.$$ (2) The hadronic weak matrix element $`h_1`$ is related to the measured $`I=3/2`$ amplitude $`h_2\equiv <\overline{K}_0^{*0}\pi ^+(q_\pi )|H_W|D^+>`$ through the soft pion theorem $$\underset{q_\pi \to 0}{lim}<\overline{K}_0^{*0}\pi ^+(q_\pi )|H_W|D^+>=-\frac{i}{f_\pi }<\overline{K}_0^{*0}|[Q_5^{-},H_W]|D^+>,$$ (3) where $`f_\pi =130`$ MeV and $`Q_5^{-}`$ is the axial charge. Note that the amplitude on the left-hand side involves no pole term since $`\overline{K}_0^{*0}\pi ^+`$ is an $`I=3/2`$ state. (On the other hand, the $`I=1/2`$ $`D\to K\pi `$ amplitude contains such a pole term from an intermediate $`0^{-}`$ $`K(1460)`$ resonance , and consequently does not vary smoothly in the soft pion limit.) The (V-A)(V-A) structure of the $`\mathrm{\Delta }I=1`$ weak Hamiltonian implies $$[Q_5^{-},H_W]=-[Q^{-},H_W],$$ (4) and the isospin-lowering operator $`Q^{-}`$ obeys $`Q^{-}|D^+>=|D^0>,<\overline{K}_0^{*0}|Q^{-}=0`$. Neglecting the small variation in the $`D^+\to \overline{K}_0^{*0}\pi ^+`$ amplitude as one moves the pion four-momentum from its physical value to zero, one finds $$|h_1|\approx f_\pi |h_2|.$$ (5) The amplitude $`h_2`$ is obtained from the measured width $`\mathrm{\Gamma }(D^+\to \overline{K}_0^{*0}\pi ^+)`$ $$h_2^2=\frac{8\pi m^2(D^+)\mathrm{\Gamma }(D^+\to \overline{K}_0^{*0}\pi ^+)}{q_\pi },m(D^+)=1869\pm 0.5\mathrm{MeV},q_\pi =368\mathrm{MeV},$$ $$\mathrm{\Gamma }(D^+\to \overline{K}_0^{*0}\pi ^+)=\frac{0.023\pm 0.003}{\tau (D^+)f},\tau (D^+)=1.051\pm 0.013\mathrm{ps}.$$ (6) Combining (1), (2), (5) and (6), one finds $$|A(1430,K^{-}\pi ^+)|=(7.85\pm 0.65)\times 10^{-7}\mathrm{GeV}.$$ (7) The error contains only experimental errors. The uncertainty due to taking the soft pion limit $`q_\pi \to 0`$ in the smoothly varying amplitude is assumed to be smaller and is neglected. It would be interesting to study this correction, which could slightly increase or decrease the amplitude. 
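The chain of numbers entering Eqs. (1)-(7) can be checked in a few lines. The following sketch uses only values quoted in the text (the constant ħ = 6.582×10⁻²⁵ GeV·s, needed to convert the $`D^+`$ lifetime into a width, is the one addition) and reproduces the quoted $`|A(1430,K^{-}\pi ^+)|`$:

```python
import numpy as np

hbar = 6.582e-25                   # GeV s, converts the D+ lifetime to a width
m_D0, m_Dp = 1.8646, 1.869         # D0 and D+ masses, GeV
m_res, gamma_res = 1.429, 0.287    # K0*(1430) mass and width, GeV
f = 0.62                           # BR(K0*(1430) -> K- pi+), Eq. (2)
p_pi, q_pi = 0.621, 0.368          # GeV: pion momenta in K0* -> K pi and D+ -> K0* pi+
f_pi = 0.130                       # GeV
tau_Dp = 1.051e-12                 # s

# Eq. (2): strong coupling g from the resonance width
g = np.sqrt(8 * np.pi * m_res**2 * gamma_res * f / p_pi)

# Eq. (6): weak D+ -> K0*(1430) pi+ amplitude from the measured rate
width_Dp = 0.023 / f * hbar / tau_Dp               # GeV
h2 = np.sqrt(8 * np.pi * m_Dp**2 * width_Dp / q_pi)

# Eq. (5): soft pion theorem, |h1| ~ f_pi |h2|
h1 = f_pi * h2

# Eq. (1): Breit-Wigner amplitude evaluated at the D0 mass
A = h1 * g / (m_D0**2 - m_res**2 + 1j * m_res * gamma_res)
print(abs(A))   # close to the central value of Eq. (7), 7.85e-7 GeV
```

Running this gives |A| ≈ 7.9×10⁻⁷ GeV, in agreement with Eq. (7) within the quoted errors.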
In order to compare the calculated $`K_0^{*}(1430)`$ resonance contribution to the measured $`I=1/2`$ term in $`D^0\to K^{-}\pi ^+`$, one expresses all three $`D\to \overline{K}\pi `$ amplitudes in terms of isospin amplitudes. Using a somewhat different normalization than elsewhere , we write $$A(D^0\to K^{-}\pi ^+)=A_{1/2}+A_{3/2},$$ $$\sqrt{2}A(D^0\to \overline{K}^0\pi ^0)=A_{1/2}-2A_{3/2},$$ $$A(D^+\to \overline{K}^0\pi ^+)=3A_{3/2}.$$ (8) Consequently $$|A_{1/2}|^2=\frac{2}{3}\left[|A(D^0\to K^{-}\pi ^+)|^2+|A(D^0\to \overline{K}^0\pi ^0)|^2-\frac{1}{3}|A(D^+\to \overline{K}^0\pi ^+)|^2\right],$$ $$|A_{3/2}|^2=\frac{1}{9}|A(D^+\to \overline{K}^0\pi ^+)|^2,$$ $$\mathrm{cos}\delta _I=\frac{|A(D^0\to K^{-}\pi ^+)|^2-2|A(D^0\to \overline{K}^0\pi ^0)|^2+\frac{1}{3}|A(D^+\to \overline{K}^0\pi ^+)|^2}{6|A_{1/2}A_{3/2}|},$$ (9) where $`\delta _I`$ is the relative phase between the isospin amplitudes. One then finds from the experimental rates the values $$|A_{1/2}|=(24.5\pm 1.2)\times 10^{-7}\mathrm{GeV},|A_{3/2}|=(4.51\pm 0.22)\times 10^{-7}\mathrm{GeV},\delta _I=(90\pm 7)^{\circ }.$$ (10) This and (7) imply $$\frac{|A(1430,K^{-}\pi ^+)|}{|A_{1/2}|}=0.32\pm 0.03.$$ (11) That is, the 1430 MeV $`K\pi `$ resonance contribution is about 30$`\%`$ of the dominant $`I=1/2`$ amplitude in $`D\to K\pi `$. Its contribution to $`D^0\to K^{-}\pi ^+`$ is larger than the $`I=3/2`$ component of this amplitude. Note that $`A(D^0\to K^{-}\pi ^+)\approx A_{1/2}`$, since $`|A_{3/2}|^2\ll |A_{1/2}|^2`$ and $`\delta _I\approx 90^{\circ }`$. In view of this sizable result, which is rather striking for a resonance peaking 436 MeV below the $`D`$ mass, one raises the question of possibly larger contributions to $`D\to K\pi `$ from resonances lying closer to the $`D`$. 
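The isospin inversion above is easy to verify numerically: build the three physical amplitudes from assumed isospin amplitudes (using the standard decomposition $`A(K^{-}\pi ^+)=A_{1/2}+A_{3/2}`$, $`\sqrt{2}A(\overline{K}^0\pi ^0)=A_{1/2}-2A_{3/2}`$, $`A(\overline{K}^0\pi ^+)=3A_{3/2}`$, with the relative phase placed on $`A_{1/2}`$), then recover them from the rate formulas. The inputs are the quoted central values; this is a self-consistency sketch, not a fit:

```python
import numpy as np

A12, A32, delta = 24.5e-7, 4.51e-7, np.pi / 2   # GeV, GeV, radians (central values)

# Physical amplitudes from the isospin decomposition, relative phase on A_{1/2}
A_Kmpip = A12 * np.exp(1j * delta) + A32
A_K0pi0 = (A12 * np.exp(1j * delta) - 2 * A32) / np.sqrt(2)
A_K0pip = 3 * A32

a, b, c = abs(A_Kmpip)**2, abs(A_K0pi0)**2, abs(A_K0pip)**2

# Invert back to |A_{1/2}|, |A_{3/2}| and cos(delta_I) from the rates alone
A12_out = np.sqrt(2.0 / 3.0 * (a + b - c / 3.0))
A32_out = np.sqrt(c / 9.0)
cos_delta = (a - 2 * b + c / 3.0) / (6 * A12_out * A32_out)
```

The round trip returns the input magnitudes and cos δ_I = 0, i.e. δ_I = 90°, matching the quoted values.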
One such resonance state, around 1900 MeV (denoted $`K_0^{*}(1950)`$ in ), was observed in $`K\pi `$ scattering , with a mass $`m^{\prime }=1945\pm 22`$ MeV and a width $`\mathrm{\Gamma }^{\prime }=201\pm 86`$ MeV. Somewhat different values, $`m^{\prime }=1820\pm 40`$ MeV, $`\mathrm{\Gamma }^{\prime }=250\pm 100`$ MeV, were obtained in a K-matrix analysis . Since this resonance lies right at the $`D`$ mass, its contribution to $`D^0\to K^{-}\pi ^+`$ is likely to be larger than that of $`K_0^{*}(1430)`$. In order to calculate this contribution, one must know the matrix element $`<K_0^{*}(1950)|H_W|D>`$, for instance by relating it to $`<K_0^{*}(1430)|H_W|D>`$. The higher resonance is most likely a radial n=2 excitation of the state at 1430 MeV, which is an n=1 P-wave $`s\overline{d}`$ state. In both amplitudes the local $`H_W`$ connects a $`c\overline{u}`$ S-wave state to an $`s\overline{d}`$ P-wave state which is more spread out. The radially excited n=2 state is slightly less localized than the n=1 state. Consequently, one expects $`<K_0^{*}(1950)|H_W|D>`$ to be slightly smaller than $`<K_0^{*}(1430)|H_W|D>`$. Assuming approximately equal weak amplitudes for the two resonance states, one estimates from (1) and (2) $$\frac{|A(1950,K^{-}\pi ^+)|}{|A(1430,K^{-}\pi ^+)|}\approx \sqrt{\frac{[(m^2(D^0)-m^2)^2+m^2\mathrm{\Gamma }^2]m^{\prime 2}\mathrm{\Gamma }^{\prime }f^{\prime }p_\pi }{[(m^2(D^0)-m^{\prime 2})^2+m^{\prime 2}\mathrm{\Gamma }^{\prime 2}]m^2\mathrm{\Gamma }fp_\pi ^{\prime }}},$$ $$f^{\prime }\equiv \mathrm{BR}(K_0^{*}(1950)\to K^{-}\pi ^+)=0.35,p_\pi ^{\prime }=904\mathrm{MeV},$$ (12) which gives 2.1–2.4, depending somewhat on $`m^{\prime }`$ and $`\mathrm{\Gamma }^{\prime }`$. Namely, in the absence of a radial suppression of its weak coupling to the $`D`$, the resonance around 1900 MeV contributes about 70$`\%`$ of the $`I=1/2`$ $`D\to K\pi `$ amplitude. In reality the contribution may be somewhat (but not very much) smaller. The combined contribution of the two resonances, at 1430 MeV and in the range 1820–1945 MeV, is considerably larger than the $`I=3/2`$ amplitude in $`D\to K\pi `$. 
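Eq. (12) is a pure Breit-Wigner and phase-space ratio, so both quoted mass-width determinations of the higher resonance can be evaluated directly (all numbers are from the text; equal weak matrix elements are assumed, as stated above):

```python
import numpy as np

m_D0 = 1.8646                                 # GeV
m, gam, f, p = 1.429, 0.287, 0.62, 0.621      # K0*(1430): mass, width, BR, p_pi (GeV)
f2, p2 = 0.35, 0.904                          # K0*(1950): BR(K- pi+), p_pi' (GeV)

def amplitude_ratio(m2, gam2):
    """Eq. (12): |A(1950)| / |A(1430)| assuming equal weak matrix elements."""
    num = ((m_D0**2 - m**2)**2 + (m * gam)**2) * m2**2 * gam2 * f2 * p
    den = ((m_D0**2 - m2**2)**2 + (m2 * gam2)**2) * m**2 * gam * f * p2
    return np.sqrt(num / den)

r_scattering = amplitude_ratio(1.945, 0.201)  # K pi scattering values
r_kmatrix = amplitude_ratio(1.820, 0.250)     # K-matrix analysis values
print(r_scattering, r_kmatrix)                # both lie in the quoted 2.1-2.4 range
```

Both determinations indeed land inside the 2.1–2.4 range quoted after Eq. (12).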
These contributions dominate the $`I=1/2`$ amplitude if the two resonances interfere constructively. This is the case if the mass of the second resonance is lower than $`m_D`$, as claimed in . This explains the $`I=1/2`$ dominance observed in these decays. In view of its important role in $`D`$ decays, it would be helpful to determine the mass of the higher resonance more precisely. The above calculations show that direct channel resonances have very large contributions in certain two-body $`D`$ decays. In a four-quark operator language (or in a diagram language) these contributions are manifestations of annihilation (or W-exchange) amplitudes. A possible phenomenological way of incorporating them in $`D`$ decays is by employing a diagrammatic language , decomposing the $`D\to K\pi `$ amplitudes, for instance, into a color-favored “tree” amplitude $`T`$, a “color-suppressed” amplitude $`C`$ and an “exchange” amplitude $`E`$. In a more general context this description is based on flavor SU(3) . Here we only assume isospin symmetry. The three amplitudes $`T,C,E`$ are an over-complete set. Only two combinations are required to describe the two isospin amplitudes $$3A_{1/2}=2T-C+3E,3A_{3/2}=T+C.$$ (13) The amplitude $`T`$ may be chosen to be real, $`C`$ obtains a complex phase from rescattering, while $`E`$ is given by the sum of two Breit-Wigner forms, representing the two resonances in $`D\to K\pi `$. Clearly this scheme, which is more appropriate for the case of large resonance contributions, deviates substantially from the “generalized factorization” framework . In the latter prescription one combines the real amplitudes $$T=\frac{G_F}{\sqrt{2}}|V_{ud}V_{cs}|a_1f_\pi (m_D^2-m_K^2)F^{DK}(m_\pi ^2),$$ $$C=\frac{G_F}{\sqrt{2}}|V_{ud}V_{cs}|a_2f_K(m_D^2-m_\pi ^2)F^{D\pi }(m_K^2),$$ $$E=0,$$ (14) into isospin amplitudes (13) which are assigned arbitrary phases. 
A large nonzero $`E`$ term, which is required in order to describe resonating amplitudes, modifies the values obtained from the experimental data for $`a_1`$ and $`a_2`$ relative to their values in the generalized factorization prescription. Although the numerical changes may not be very large, which is the reason for the apparent success of the generalized factorization approach, the difference between the physical interpretations of the two descriptions, with and without the $`E`$ term, is evident. A fit of $`D`$ decays to $`\overline{K}\pi ,\overline{K}\eta ,\overline{K}\eta ^{\prime }`$ in terms of diagrammatic amplitudes, assuming flavor SU(3) by which $`T,C,E`$ can be separated, was carried out recently by Rosner . He finds (in units of $`10^{-6}`$ GeV) $`|T|\approx 2.7,|C|\approx 2.0,|E|\approx 1.6`$. A large phase ($`114^{\circ }`$) is found in $`E/T`$. The large magnitude of $`E`$, comparable to the other two amplitudes, and its sizable phase relative to $`T`$, are evidence for the important role of resonances in these decays. To demonstrate the insensitivity of the naive factorization prescription to large nonfactorizable resonant contributions, we note the following: Extracting $`a_1`$ and $`a_2`$ from the above values of $`|T|`$ and $`|C|`$, using in (14) the values $`F^{DK}(m_\pi ^2)=0.77`$ , $`F^{D\pi }(m_K^2)=0.70,f_K=160`$ MeV, gives $`|a_1|=1.06,|a_2|=0.64`$. These values do not differ by too much from $`a_1=c_1(m_c)=1.26,a_2=c_2(m_c)=-0.51`$, obtained in the traditional way which disregards resonance contributions . While intermediate resonances were shown here to be important in $`D^0`$ decays, they do not contribute to Cabibbo-favored $`D^+`$ decays, where the final states, consisting of $`\overline{d}s\overline{d}u`$, are pure $`I=3/2`$. This can be a qualitative explanation for the measured longer $`D^+`$ lifetime. A calculation of the $`D^+/D^0`$ lifetime ratio, including resonance contributions in $`D^0`$ decay, is a challenging task. 
To conclude, we comment on the possible effect of direct channel resonances on $`D^0\overline{D}^0`$ mixing. Reasonably small SU(3) breaking in $`D`$ decays to two pseudoscalar mesons was shown to enhance the mixing by several orders of magnitude relative to the short distance box diagram contribution . The actual enhancement was argued to be much smaller when summing over all decay modes, if a large energy gap existed between the charmed quark mass and $`\mathrm{\Lambda }_{\mathrm{QCD}}`$ . Resonance states close to the $`D`$ mass violate this assumption. Moreover, resonant contributions lead to particularly large SU(3) breaking between SU(3)-related $`D`$ decay rates. For instance, mass and width differences between $`K\pi `$ and $`\pi \pi `$ resonances show up as large rate differences (when CKM factors are included), since direct channel resonance amplitudes peak strongly when the resonance mass approaches the $`D`$ mass. This raises the possibility that SU(3) breaking in resonance amplitudes enhances $`D^0\overline{D}^0`$ mixing beyond predictions based on the contributions of a few two-body decays . Such effects were discussed recently in , where it was noted that in the absence of information about weak Hamiltonian matrix elements between a $`D`$ meson and the resonances, some crude assumptions must be made. The authors assume vacuum saturation for these matrix elements, implying that P-wave $`0^+`$ resonances (for which the wave functions vanish at the origin) do not contribute to $`D^0\overline{D}^0`$ mixing. Our model-independent calculation finds a large matrix element for the $`0^+`$ $`K\pi `$ resonance at 1430 MeV, which indicates that the mixing can indeed be larger than estimated in . This interesting possibility deserves further study. Acknowledgements: I thank Yuval Grossman, Alex Kagan, David Leith, Alexey Petrov, Dan Pirjol, Jon Rosner, Michael Scadron and Michael Sokoloff for useful discussions. 
I am grateful to the SLAC Theory Group for its very kind hospitality. This work was supported in part by the United States $``$ Israel Binational Science Foundation under research grant agreement 94-00253/3, and by the Department of Energy under contract number DE-AC03-76SF00515.
# Random Matrices and the Convergence of Partition Function Zeros in Finite Density QCD

## I Introduction

In contrast with the numerous successes of lattice QCD, simulations at finite chemical potential have oscillated between mildly promising and outright frustrating. The source of the trouble lies in the following. In the Euclidean formulation of QCD, the chemical potential spoils the anti-Hermiticity of the Dirac operator. As a result, the fermion determinant is no longer a real number. In general, it has a complex phase. Hence, the action cannot serve as a statistical weight in a Monte Carlo sampling of field configurations. Quenched simulations, where the fermion determinant is not included in the statistical weight, may provide a fairly reliable approximation to selected observables of the true unquenched theory. However, in the presence of a chemical potential, quenched simulations have produced consistently unphysical results . The reason is that the quenched theory is the $`N_f\to 0`$ limit of an unphysical theory where the fermion determinant is replaced by its absolute value . This is a theory with a second, “conjugate” set of anti-quark species together with the normal quarks. Because of Goldstone bosons consisting of a quark and a conjugate anti-quark, the critical chemical potential in quenched QCD is half the pion mass. This phenomenon was demonstrated in lattice simulations by Gocksch using a $`U(1)`$ toy model and was understood analytically in by using a random matrix model inspired by QCD. One important conclusion of further studies of the same RMM is that the phase of the fermion determinant leads to very large cancellations in the ensemble averaging . A measure of this phenomenon is given by the fact that the partition function is proportional to $`\mathrm{exp}(\mu ^2N)`$, where $`N`$ is the size of the random matrix, corresponding to the number of sites in a lattice simulation. 
Cancellations of this magnitude would require prohibitive statistics in order for a brute force simulation (where the determinant is included as an observable in a quenched ensemble) to be successful. We wish to note that in some models the construction of clever algorithms makes it possible to deal with these cancellations in Monte Carlo simulations . It would be hard to overrate the potential importance of a successful lattice approach to QCD at finite chemical potential. At this time, there is not even an estimate for the value of the critical chemical potential from lattice simulations. In the absence of the guidance provided by the lattice, it is difficult to assess the many semi-empirical descriptions of nuclear matter at high density . For instance, an extension of the RMM to include temperature via the first Matsubara frequency gives reasonable predictions about the phase diagram of QCD, even while ignoring most of its dynamics. In the present paper we wish to exploit the qualitative similarity between our simple RMM and $`N_c=3`$ QCD at finite chemical potential in an attempt to understand certain lattice results on the problem of finite $`\mu `$. We are interested in the analytic dependence of the QCD partition function on the chemical potential $`\mu `$. This can be obtained by computing the coefficients of the expansion of the partition function in powers of the chemical potential or the fugacity. The Glasgow method of lattice QCD is designed to do this. The unquenched partition function can be seen as the quenched average of the fermion determinant, i.e. averaged only with the gauge action. 
In general, one may also use an unquenched ensemble at some fixed chemical potential $`\mu =\mu _0`$, and include the inverse fermion determinant at that same value, $`Z(\mu )=\left\langle \mathrm{det}\text{ /}D(\mu )\right\rangle _{\mathrm{gauge}}=C\left\langle {\displaystyle \frac{\mathrm{det}\text{ /}D(\mu )}{\mathrm{det}\text{ /}D(\mu _0)}}\right\rangle _{\mathrm{gauge},\mu =\mu _0}.`$ (1) Here $`C`$ is an irrelevant constant. One expects the efficiency of the averaging process to depend on the overlap between the quantity being averaged and the distribution used to generate the ensemble. When these two functions have their largest values in vastly different places, this is known as an overlap problem. In the lattice Glasgow method the fermion determinant is expanded in powers of the fugacity $`\xi =\mathrm{exp}(\mu )`$. The expansion is finite and exact, since the fermion matrix is just an $`N\times N`$ matrix ($`N`$ is the number of lattice points times the number of degrees of freedom per site). It is obtained by writing $`\mathrm{det}\text{ /}D\left(\mu \right)=\xi ^N\mathrm{det}\left(P+\xi \right)`$, where $`P`$ is called the propagator matrix. The expansion coefficients, written in terms of the eigenvalues of $`P`$, are then obtained by ensemble averaging. The zeros of the partition function in the complex $`\mu `$ plane map out the phase structure. In particular, the ones close to the real axis define the critical value(s) of $`\mu `$. Since the original paper by Yang and Lee this approach has been widely used in statistical mechanics. For some recent applications we refer to . With the Glasgow method one obtains information about the full $`\mu `$-dependence of $`Z`$, using an ensemble generated at one fixed value of the chemical potential. Of course, the question of the overlap between whatever ensemble one is using and the fermion determinant for the given $`\mu `$ remains.
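The mechanics of this expansion are easy to demonstrate numerically. In the sketch below a generic complex matrix stands in for the propagator matrix (it is an illustration, not the lattice object): the coefficients of $`\mathrm{det}(P+\xi )`$ follow directly from the eigenvalues of $`P`$.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
# stand-in for the propagator matrix P (a generic complex matrix here)
P = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

# det(P + xi) = prod_k (xi + p_k), a degree-N polynomial in the fugacity xi
p = np.linalg.eigvals(P)
coeffs = np.polynomial.polynomial.polyfromroots(-p)  # c[0] + c[1]*xi + ... + xi^N

# cross-check against a direct determinant at one test value of the fugacity
xi = 0.3 + 0.1j
det_direct = np.linalg.det(P + xi * np.eye(N))
det_poly = np.polynomial.polynomial.polyval(xi, coeffs)
print(abs(det_direct - det_poly) / abs(det_direct))
```

In the Glasgow method it is these coefficients, not the roots, that are subsequently ensemble averaged.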
The issue is even more ominous considering that we are unable to even define the notion of an ensemble at nonzero $`\mu `$, even in this random matrix model, due to the complex action. In this paper, we analyze the Glasgow method using a random matrix model at nonzero chemical potential. As we have already discussed before, this model mimics the problems of the QCD partition function at nonzero chemical potential. Since the phase structure of this model is known analytically, it is an ideal testing ground for evaluating this algorithm and obtaining a better understanding of its problems. In section 2 we introduce the random matrix model and derive some of its analytical properties. The bulk of this paper is in section 3. It contains our numerical analysis of the Glasgow method, and an explanation is given of the poor convergence of certain zeros of the partition function. Concluding remarks are made in section 4. ## II Random matrix model We consider a random matrix model (RMM) defined by the partition function $`Z(m,\mu )`$ $`=`$ $`{\displaystyle \int 𝒟C\,e^{-N\mathrm{tr}CC^{\dagger }}\,\mathrm{det}\left(D(m,\mu )\right)},`$ (2) $`D(m,\mu )`$ $`=`$ $`\left[\begin{array}{cc}m& iC+\mu \\ iC^{\dagger }+\mu & m\end{array}\right].`$ (5) Here, $`C`$ is an $`N\times N`$ complex matrix, and the integration is over the Haar measure, $`{\displaystyle \int 𝒟C}={\displaystyle \prod _{ij=1}^{N}\int dC_{ij}\,dC_{ij}^{*}}.`$ (6) This model was first formulated for $`\mu =0`$ in order to describe the correlations of the smallest eigenvalues of the Dirac operator. In this case it has been shown rigorously that the model describes the zero-momentum sector of the low-energy effective partition function of QCD . In the present context, the matrix $`D(m,\mu )`$ mimics the QCD Dirac operator for quark mass $`m`$ and chemical potential $`\mu `$. The integration over matrix elements replaces the integration over gauge field configurations.
The massless part of our random Dirac operator is anti-Hermitian for $`\mu =0`$, but for $`\mu \ne 0`$ it has no definite Hermiticity properties. In order to study the properties of the partition function in the chemical potential plane $`\mu `$, we rewrite the fermion determinant as follows, $`\mathrm{det}\left(D(m,\mu )\right)=\mathrm{det}\left(\begin{array}{cc}iC+\mu & m\\ m& iC^{\dagger }+\mu \end{array}\right)=\mathrm{det}\left(F(m)+\mu \mathrm{𝟏}\right).`$ (9) The matrix $`F(m)`$ is analogous to the propagator matrix from lattice QCD in the sense that its eigenvalues are the values of $`\mu `$ for which the fermion determinant vanishes. In terms of the eigenvalues $`\lambda _k`$ of $`F(m)`$ the RMM partition function is $`Z(m,\mu )={\displaystyle \int 𝒟C\,e^{-N\mathrm{tr}CC^{\dagger }}\prod _k(\lambda _k+\mu )}.`$ (10) The quantity $`n=\partial _\mu \mathrm{ln}Z(m,\mu )/N`$ is the analog of the baryon number density of QCD. It is equal to the ensemble average of $`\frac{1}{N}\sum _k\frac{1}{\lambda _k+\mu }`$. In QCD at zero temperature, one expects the baryon number density to be identically zero for small $`\mu `$, and then to increase starting from a certain critical value of $`\mu `$ . Similarly, our model shows a phase transition with an increase of the baryon number density. However, $`n`$ is not zero below the critical value of $`\mu `$. See for an explanation of the relationship between $`n`$ and the baryon number density of QCD. It is not clear how to define a statistical ensemble of gauge field configurations (or random matrices for that matter), corresponding to the true partition function with finite $`\mu `$. In the quenched approximation one discards the fermion determinant, so the partition function does not depend any more on $`\mu `$ or $`m`$. However, a ‘number density’ can still be computed by taking the average of $`\frac{1}{N}\sum _k\frac{1}{\lambda _k+\mu }`$. Similarly, one can define a quenched ‘chiral condensate’ from the ensemble average of $`\frac{1}{N}\mathrm{Tr}\left(D(m,\mu )^{-1}\right)`$.
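The propagator-matrix rewriting (9) can be checked directly for a single random configuration. The sketch below is our own illustration (parameter values are arbitrary); an even $`N`$ is used, since for odd $`N`$ the two determinants can differ by an overall sign.

```python
import numpy as np

rng = np.random.default_rng(1)
N, m, mu = 12, 0.1, 0.3  # even N; entries scaled so <|C_ij|^2> = 1/N
C = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2 * N)
I = np.eye(N)

# random-matrix Dirac operator, eq. (5)
D = np.block([[m * I, 1j * C + mu * I],
              [1j * C.conj().T + mu * I, m * I]])

# propagator-matrix form, eq. (9): det D equals det(F(m) + mu) for even N
F = np.block([[1j * C, m * I],
              [m * I, 1j * C.conj().T]])

lam = np.linalg.eigvals(F)
det_via_F = np.prod(lam + mu)
det_direct = np.linalg.det(D)

# single-configuration analog of (1/N) sum_k 1/(lambda_k + mu), up to normalization
n_val = np.mean(1.0 / (lam + mu))
print(det_direct, det_via_F, n_val)
```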
The quenched approximation can be interpreted either as the limit of a process where one takes the number of flavors to zero, or the quark mass $`m`$ to infinity. The quantity $`n(\mu _0,\mu )={\displaystyle \int 𝒟C\,e^{-N\mathrm{tr}CC^{\dagger }}\prod _k(\lambda _k+\mu _0)\,\frac{1}{N}\sum _k\frac{1}{\lambda _k+\mu }}`$ (11) is called the partially quenched baryon number density. The “sea” $`\mu _0`$ defines the ensemble, and the “valence” $`\mu `$ probes the eigenvalue distribution. ### A Quenched eigenvalue distribution For $`m=0`$ the propagator matrix is block-diagonal, and its eigenvalues are $`i`$ times those of $`C`$ and of $`C^{\dagger }`$. The exact distribution of the eigenvalues of a general complex matrix has been calculated a long time ago by Ginibre . In our normalization, it is given by $`\rho (\lambda _1,\mathrm{},\lambda _N)=𝒞\,e^{-N\sum _k\left|\lambda _k\right|^2}\prod _{k>l}\left|\lambda _k-\lambda _l\right|^2`$ (12) The corresponding one-point function is $`\rho (\lambda )=𝒞\,e^{-N|\lambda |^2}{\displaystyle \sum _{k=0}^{N-1}}{\displaystyle \frac{\left(N|\lambda |^2\right)^k}{k!}}`$ (13) In the large $`N`$ limit the eigenvalues are uniformly distributed in the complex unit circle. This follows from the properties of the truncated exponential, to be discussed in more detail later. For $`|\lambda |<1`$, the truncated exponential is a good approximation of the complete one, so $`\rho `$ is a constant. For $`|\lambda |>1`$, the truncated exponential behaves like a power of $`|\lambda |^2`$ which is quickly suppressed by $`\mathrm{exp}(-N|\lambda |^2)`$, so the distribution vanishes with a sharp tail of width of order $`1/\sqrt{N}`$. For nonzero $`m`$ we can calculate the eigenvalue distribution of the propagator matrix in the large $`N`$ limit using the conjugate replica trick .
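Staying with the $`m=0`$ case for a moment, the uniform-disk form of the one-point function (13) is easy to confirm by direct sampling; finite-$`N`$ edge effects of order $`1/\sqrt{N}`$ are expected. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 500
# Gaussian weight exp(-N tr C C^dagger)  <=>  <|C_ij|^2> = 1/N
C = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2 * N)
r = np.abs(np.linalg.eigvals(C))

# uniform density on the unit disk => P(|lambda| < rho) = rho^2
frac_half = np.mean(r < 0.5)   # should be near 0.25
frac_edge = np.mean(r < 1.05)  # essentially all eigenvalues, up to the soft edge
print(frac_half, frac_edge)
```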
We consider the partition function, where we have replaced the fermion determinant with its absolute value squared, $`Z(m,m^{*},\mu ,\mu ^{*})`$ $`=`$ $`{\displaystyle \int 𝒟C\,e^{-N\mathrm{tr}CC^{\dagger }}\left|\mathrm{det}\left(\begin{array}{cc}m& iC+\mu \\ iC^{\dagger }+\mu & m\end{array}\right)\right|^{2\stackrel{~}{n}_f}}.`$ (16) Here we can use either $`m`$ or $`\mu `$ as a dummy variable probing the eigenvalue distribution of the corresponding operator as a function of the other variable which is made real and therefore has a physical meaning. For the spectrum of the propagator matrix we set $`m=m^{*}`$ and probe the eigenvalues with the complex dummy variable $`\mu `$. This is the reverse of what was done in , where the spectrum of the Dirac operator was investigated for given $`\mu `$. The conjugate replica trick also allows us to calculate the eigenvalue distribution in the present case. Because of the absolute value of the determinant under the integral, the partition function is expected to be a smooth function of $`\stackrel{~}{n}_f`$ and the limit $`\stackrel{~}{n}_f\to 0`$ should be obtainable from the partition function for positive integral values of $`\stackrel{~}{n}_f`$. For any $`\stackrel{~}{n}_f`$, the eigenvalue density is positive definite so it cannot be an analytic function of a complex variable. The resolvent is defined as $`G(z)=\frac{1}{N}\sum _k\frac{1}{\lambda _k-z}`$. The average eigenvalue density is then given, up to normalization, by $`N\,\mathrm{Re}\left(\partial _{z^{*}}G(z)\right)`$. For the partition function (16) the resolvent in the $`\mu `$ plane is obtained from $`\partial _\mu \mathrm{ln}Z(m,\mu ,\mu ^{*})`$. By standard manipulations the partition function can be rewritten as an integral over a $`2\stackrel{~}{n}_f\times 2\stackrel{~}{n}_f`$ complex matrix.
For $`\stackrel{~}{n}_f=1`$, one finds $`Z(m,m^{*},\mu ,\mu ^{*})={\displaystyle \int d^2a\,d^2b\,d^2c\,d^2d\;e^{-N(aa^{*}+bb^{*}+cc^{*}+dd^{*})}\,\mathrm{det}\left[\begin{array}{cccc}m+a& \mu & 0& id\\ \mu & m+a^{*}& ic& 0\\ 0& id^{*}& m^{*}+b^{*}& \mu ^{*}\\ ic^{*}& 0& \mu ^{*}& m^{*}+b\end{array}\right]}.`$ (21) The resulting saddle point equations have two kinds of nontrivial solutions, depending on whether the off-diagonal quantities $`c,c^{*},d,d^{*}`$ are identically zero or not. If they vanish, the partition function factorizes into pieces that depend only on $`m,\mu `$ or on $`m^{*},\mu ^{*}`$. Then, $`G(z)`$ is always an analytic function of $`z`$. Therefore, the region in the complex plane of $`z`$ ($`m`$ or $`\mu `$) where the eigenvalues are located must be dominated by the other kind of solution, which mixes the parameters and their conjugates. It turns out that this solution is such that $`c=c^{*}=d=d^{*}`$, and the quantity $`cc^{*}`$ is positive. For $`m=m^{*}`$ real, writing $`\mu =x+iy`$, it is given by $`cc^{*}={\displaystyle \frac{x^2}{x^2-m^2}}-{\displaystyle \frac{m^2}{4(x^2-m^2)}}+{\displaystyle \frac{y^2m^2}{2x^2(x^2-m^2)}}-{\displaystyle \frac{(x^2+y^2)y^2m^2}{4x^2}}-(x^2+y^2).`$ (22) The boundary of the domain of eigenvalues is given by the curve where the two types of solutions merge, i.e. by the condition $`cc^{*}=0`$. One can see immediately that for $`m=0`$ the boundary $`cc^{*}=0`$ reduces to the unit circle. In Fig. 1 we show the distribution of the eigenvalues in the complex plane of the propagator matrix of size $`N=96`$ for masses of value $`m=0`$ and $`m=0.0625`$. Each plot consists of eigenvalues from 20 independent configurations. The analytic curve given by $`cc^{*}=0`$ is also drawn in each case. We clearly see that this does give the correct result for the boundary of eigenvalues. ### B Unquenched partition function The unquenched partition function defined in eq. (2) can be computed analytically .
It is given by $`Z(m,\mu )={\displaystyle \int d\sigma \,d\sigma ^{*}\,\mathrm{e}^{-N\mathrm{tr}\sigma \sigma ^{*}}\,\mathrm{det}\left(\begin{array}{cc}m+\sigma & \mu \\ \mu & m+\sigma ^{*}\end{array}\right)^N}`$ (25) where the integration is over the $`N_f\times N_f`$ matrix $`\sigma `$. In the $`N\to \mathrm{\infty }`$ limit the integrals can be evaluated via saddle point approximation. The saddle points are given by a cubic equation, $`\sigma ^{*}`$ $`=`$ $`\sigma `$ (26) $`\sigma (m+\sigma )^2-\mu ^2\sigma `$ $`=`$ $`m+\sigma ,`$ (27) where the matrix variable is diagonal and $`\sigma `$ is now a number. For fixed real $`m`$, there are four branch points in the complex $`\mu `$ plane, given by four of the six $`\mu `$ values for which the discriminant of the above equation, $`D={\displaystyle \frac{1}{27}}\left(m^4\mu ^2-m^2\left(2\mu ^4-5\mu ^2-{\displaystyle \frac{1}{4}}\right)+\left(1+\mu ^2\right)^3\right),`$ (28) vanishes. The branch points are connected by two branch cuts. The derivatives of the partition function are discontinuous across these cuts. The points where the cuts cross the real axis are interpreted as critical values of the parameters. They indicate a first-order phase transition in the thermodynamic limit . For finite $`N`$, the partition function can be evaluated as a polynomial in either $`m`$ or $`\mu `$. The coefficients are obtained by expanding the determinant and performing the integrals. One exact expansion of the partition function is given by $`Z(m,\mu )={\displaystyle \frac{\pi N!}{N^{N+1}}}{\displaystyle \sum _{k=0}^{N}}{\displaystyle \sum _{j=0}^{N-k}}{\displaystyle \frac{(Nm^2)^k}{(k!)^2}}{\displaystyle \frac{(-N\mu ^2)^j}{j!}}{\displaystyle \frac{(N-j)!}{(N-j-k)!}}.`$ (29) The zeros of this polynomial can be readily calculated. They are located along lines in the complex plane and in the limit $`N\to \mathrm{\infty }`$ they converge to the exact cuts found by the saddle point analysis; see (33) below.
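The finite-$`N`$ coefficients are simple to tabulate. The sketch below uses the sign conventions of (29) as written here, drops the overall constant $`\pi N!/N^{N+1}`$, checks that at $`m=0`$ the coefficients reduce to a truncated exponential, and extracts the zeros in the $`\mu `$ plane:

```python
import numpy as np
from math import factorial

def z_coeffs_mu2(N, m):
    """Coefficients a_j of Z(m, mu) as a polynomial in mu^2, following eq. (29)
    without the overall constant pi N! / N^(N+1)."""
    a = np.zeros(N + 1)
    for j in range(N + 1):
        s = sum((N * m**2) ** k / factorial(k) ** 2
                * factorial(N - j) / factorial(N - j - k)
                for k in range(N - j + 1))
        a[j] = (-N) ** j / factorial(j) * s
    return a

N = 24
a0 = z_coeffs_mu2(N, 0.0)
trunc_exp = np.array([(-N) ** j / factorial(j) for j in range(N + 1)])

# zeros in the mu plane: roots in mu^2, then both square roots of each
mu2 = np.roots(a0[::-1])
mu_zeros = np.concatenate([np.sqrt(mu2 + 0j), -np.sqrt(mu2 + 0j)])
print(np.allclose(a0, trunc_exp), len(mu_zeros))
```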
### C Phase structure and zeros of a partition function When a partition function has a nontrivial phase structure in the thermodynamic limit, the complex plane of one of its thermodynamic parameters is split into regions separated by cuts. Inside each region, the partition function is analytic in the parameters, so that its derivatives are continuous. The (first-order) transitions occur across the cuts, where the (logarithmic) derivatives of the partition function will in general have discontinuities. Before taking the thermodynamic limit (i.e. at finite $`N`$), the partition function is an analytic function of the parameters. If this partition function at finite $`N`$ is a polynomial in any of its thermodynamic parameters, such as in our case the mass or the chemical potential, it will have zeros in the complex plane of these parameters. Its logarithmic derivatives will have poles at the locations of the zeros. In the thermodynamic limit, the zeros coalesce into the same cuts that define the phase structure. Our partition function illustrates nicely the connection between the zeros and the cuts. For $`m=0`$ it is a truncated exponential, $`Z_N(0,\mu )={\displaystyle \frac{\pi N!}{N^{N+1}}}{\displaystyle \sum _{j=0}^{N}}{\displaystyle \frac{(-N\mu ^2)^j}{j!}}.`$ (30) For $`\mu ^2\ll 1`$ the largest term in this sum occurs well before the truncation so we have a good approximation of the exponential. On the other hand, for $`\mu ^2\gg 1`$ the series is dominated by the term with $`j=N`$. One can obtain a better estimate using the incomplete gamma function, which is closely related to the truncated exponential : $`n!\,e^{-x}{\displaystyle \sum _{k=0}^{n}}{\displaystyle \frac{x^k}{k!}}`$ $`=`$ $`\mathrm{\Gamma }(1+n,x)={\displaystyle \int _x^{\mathrm{\infty }}}e^{-t}t^n\,dt.`$ (31) For real positive arguments the situation is simple.
If $`x`$ is less than the value $`t=n`$ which maximizes the integrand, then the saddle point is integrated over, the integral is a good approximation to the gamma function and the exponential is obtained. For $`x>n`$ the integral is dominated by its endpoint and the integrand is well approximated by the last term of the series. If $`x`$ is complex, the saddle point dominates if the integration contour can be deformed across the saddle point. For our partition function $`Z_N(0,\mu )`$ $`=`$ $`\pi e^{-\mu ^2N}{\displaystyle \int _{-\mu ^2}^{\mathrm{\infty }}}e^{-Nu}u^N\,du,`$ (32) this happens if $`\mathrm{Re}(\mu ^2+\mathrm{ln}\mu ^2)<-1`$ and $`\mathrm{Re}\,\mu ^2>-1`$. In this case $`Z_N(0,\mu )\sim \mathrm{exp}(-\mu ^2N-N)`$. If the integral is dominated by the endpoint then $`Z_N(0,\mu )\sim \mu ^{2N}`$. Nontrivial zeros of the partition function are obtained for values of the chemical potential where the saddle point contribution and the end point contribution are of comparable order of magnitude. The zeros are thus given by the equation $`\mathrm{Re}(1+\mu ^2+\mathrm{log}\mu ^2)=0.`$ (33) At finite $`N`$ the deviation of the zeros from this critical curve is of order $`1/\sqrt{N}`$. ## III Glasgow averaging in the RMM The unquenched partition function for given $`\mu `$ and $`m`$ can be thought of as the quenched expectation value of the fermion determinant. If the zeros of the partition function $`\xi _k`$ are known, we can express the partition function as a product over its zeros $`\left\langle {\displaystyle \prod _k}\left(\lambda _k+\mu \right)\right\rangle ={\displaystyle \prod _k}\left(\xi _k+\mu \right).`$ (34) In this identity, the zeros of the partition function appear as a kind of “averaged eigenvalues” of the propagator matrix. The Glasgow method from lattice QCD attempts to perform this averaging. For a given matrix from the quenched ensemble, one writes out the coefficients of the polynomial on the left-hand side.
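Both ingredients of this argument can be verified numerically: the identity (31) by straightforward quadrature, and the real-axis intercept of the critical curve (33), which should reproduce the value of $`\mu _c`$ quoted for this model. A sketch (the quadrature grid is an arbitrary choice):

```python
import math
import numpy as np

# check n! e^{-x} sum_{k<=n} x^k/k! = int_x^inf e^{-t} t^n dt  (eq. 31)
n, x = 10, 5.0
lhs = math.factorial(n) * math.exp(-x) * sum(x**k / math.factorial(k)
                                             for k in range(n + 1))
t = np.linspace(x, x + 60.0, 200001)   # the tail beyond this range is negligible
y = np.exp(-t) * t**n
rhs = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))  # trapezoid rule
rel_err = abs(lhs - rhs) / lhs

# real-axis intercept of Re(1 + mu^2 + ln mu^2) = 0  (eq. 33), by bisection
def f(mu):
    return 1 + mu**2 + math.log(mu**2)

lo, hi = 0.1, 1.0  # f(lo) < 0 < f(hi), and f is increasing on this interval
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
mu_c = 0.5 * (lo + hi)
print(rel_err, mu_c)
```

The bisection lands near $`\mu _c\approx 0.527`$, the critical value used later in the text.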
The average coefficients are the coefficients of the right-hand side. The main focus of this paper is to study the convergence properties of the zeros to the exact ones for finite ensembles. The eigenvalues are in general complex. Because of the structure of $`F(m)`$, the eigenvalues occur in pairs $`\{\lambda _k,-\lambda _k^{*}\}`$. Also the matrix $`-C`$ occurs with the same probability as $`C`$ in (2). Therefore the ensemble average will also contain the pair $`\{-\lambda _k,\lambda _k^{*}\}`$. Upon ensemble averaging, one can then easily show that the odd coefficients vanish and the remaining coefficients are real. These simplifications should not mislead us into believing that we have safely avoided the trouble of averaging over the complex phase of the determinant. As we will see shortly, the problem will show up in the form of a very high precision needed for the coefficients in order to calculate the roots reliably. Since the suppression achieved by averaging over the phase of the determinant is on the order of the magnitude of the unquenched partition function, which for $`m=0`$ is $`\mathrm{exp}\left(-N(1+\mu ^2)\right)`$, it becomes exponentially difficult to achieve such precision. The quenched ensemble is not the only way to sample the set of all matrices. One may multiply and divide by any convenient function in the above formula. One factor modifies the ensemble, i.e., the way the individual matrices are generated, and the other one is used to compensate each configuration for the modified weight. One obvious choice is to use the unquenched ensemble at $`\mu =0`$. In the next subsection we report the results of numerical experiments performed using Glasgow averaging for the partition function (2), both with the quenched ensemble and the unquenched ensemble at $`\mu =0`$. ### A Numerical simulations We performed two different kinds of simulations. For $`m\ne 0`$, we generated matrix elements with the Gaussian distribution given by (2).
We constructed the propagator matrix and obtained its eigenvalues. For each set, the coefficients of the corresponding polynomial are calculated and added to the average. For $`m=0`$, a much more economical procedure is possible, namely, generating sets of eigenvalues directly, using the exact Ginibre distribution (12). In this case we have employed a Metropolis algorithm. A given set is varied using small steps and the modified set is accepted depending on the corresponding value of the weight function. We have found that both cases have similar convergence properties, and in this paper we will only report on the case $`m=0`$. In Fig. 2 we show the zeros in the complex $`\mu `$ plane for a quenched ensemble of matrices of size $`N=32`$ averaged over $`N_E=10^6,10^7`$ configurations. The averaged roots do converge to the exact values as expected, but the convergence is extremely slow. The unconverged zeros are the ones situated closer to the real axis. They are situated in a cloud of a well defined shape and most of them are located on its edge. As the averaging proceeds the cloud shrinks and finally disappears. All the roots situated outside the cloud are obtained correctly for a given $`N`$, $`N_E`$ combination. This is illustrated in Fig. 3 where we perform ensemble averaging as large as $`N_E=10^8`$. In order to obtain more converged roots, we had to reduce the size of the matrices to $`N=16`$. In this figure we also show results for $`N_E=10^4`$ and $`10^6`$. The roots next to the real axis are always the last to converge. This is unfortunate since the value of the critical chemical potential (in our case $`\mu _c=0.527`$) is determined precisely by the discontinuity across the real axis. The same pattern is observed for various matrix sizes $`N`$. The shape of the cloud of unconverged zeros is very similar. However, the number of configurations $`N_E`$ corresponding to a given degree of convergence increases sharply with the matrix size $`N`$.
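A toy version of this averaging, at $`m=0`$ and at far smaller $`N`$ and $`N_E`$ than in the text (and generating $`C`$ directly rather than by Metropolis), already shows the expected structure of the averaged coefficients: odd ones are statistically suppressed and even ones are real up to roundoff.

```python
import numpy as np

rng = np.random.default_rng(3)
N, N_E = 8, 4000

avg = np.zeros(2 * N + 1, dtype=complex)
for _ in range(N_E):
    C = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2 * N)
    z = np.linalg.eigvals(C)
    lam = np.concatenate([1j * z, 1j * z.conj()])  # eigenvalues of F(0)
    # coefficients of prod_k (mu + lambda_k), lowest power first
    avg += np.polynomial.polynomial.polyfromroots(-lam)
avg /= N_E

glasgow_zeros = np.roots(avg[::-1])  # roots of the averaged polynomial
print(np.max(np.abs(avg[1::2])), np.max(np.abs(avg[0::2].imag)))
```

At these small ensemble sizes the zeros nearest the real axis remain the least reliable, mirroring the behavior reported above.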
One is able to determine $`\mu _c`$ with reasonable accuracy only for small values of $`N`$. As we can see in Fig. 3, even in that case a very large number of configurations is required. Only after averaging over $`10^8`$ configurations do we get a good estimate of $`\mu _c`$, but the roots are still not completely converged. Convergence is not improved by including the determinant at $`\mu =0`$ in the statistical weight used to generate the eigenvalues. It was suggested previously that using an unquenched ensemble at zero chemical potential might improve the efficiency of the averaging. Our results indicate that, at best, using the unquenched ensemble has no effect, but as it appears from Fig. 3 the results are actually worse than for the quenched ensemble. An easy explanation is that a small number of configurations with the determinant close to zero are assigned a very large weight factor (cf. (1)) which has a destructive effect. Using a “negative” number of flavors is another possibility but it is not likely to improve convergence. We were able to run simulations which show some noticeable degree of convergence with $`N`$ up to $`48`$. There are many ways one could measure the degree of convergence of a given set of roots. The measure we will employ is the distance $`\mathrm{\Delta }(N_E)`$ between where the two curves forming the boundary of the unconverged roots cross the real axis. For the lower boundary we use the smallest imaginary part of the eigenvalue near the real axis. The upper estimate is made by running a circle through the points near the real axis and choosing the one with the largest radius. In both cases the determination of how close to the real axis the points should be is made by eye, choosing a cutoff that appears to give a reasonable fit to the boundary near the real axis. 
Ideally these are the points where the contour of the cloud of unconverged roots crosses the real axis, corresponding to a lower and an upper estimate for $`\mu _c`$ using the given ensemble. The dependence of $`\mathrm{\Delta }(N_E)`$ on the size of the ensemble, $`N_E`$, is illustrated in Fig. 5 where we show results for $`N=12`$, $`N=24`$ and $`N=48`$. The thickness of the cloud of zeros, $`\mathrm{\Delta }(N_E)`$, shows a logarithmic dependence on $`N_E`$. We have included logarithmic fits for the range $`N_E=10^5`$ through $`10^8`$ ($`10^6`$ for $`N=12`$). Of course, once $`\mathrm{\Delta }(N_E)`$ becomes close to zero, it stays that way. The slopes of the fits for $`N=24,48`$ vary only slightly with $`N`$. Finally, the issue of most practical interest is how the number of configurations $`N_E`$ required to achieve a fixed value of $`\mathrm{\Delta }(N_E)`$ varies with the size of the matrix. This was estimated by fitting lines to the available data for $`\mathrm{\Delta }(N_E)`$ versus $`\mathrm{log}_{10}(N_E)`$. As we saw in Fig. 5 this does give nice linear fits. We then chose values of $`\mathrm{\Delta }(N_E)`$ and calculated where the linear fits intersected these values for each value of $`N`$. Our results are plotted in Fig. 5. We conclude that the number of configurations required to obtain a given precision increases exponentially with the matrix size $`N`$. This is the most important conclusion of this paper. In the remaining sections we will attempt to reinforce this conjecture by studying the sensitivity of the zeros of the exact partition function to small random perturbations of the corresponding polynomial coefficients. ### B Perturbed exact zeros It is a well known fact that extreme care has to be taken when calculating zeros of very high order polynomials. In particular, this is the case for a finite representation of a partition function, such as the one under investigation, where one is ultimately interested in large values of $`N`$.
In our numerical experiments the polynomial coefficients are obtained as a result of a statistical averaging process. The averaged coefficients are approximations of the exact ones. As the simulation proceeds, the error decreases. We wish to investigate the sensitivity of the zeros of the partition function to the precision with which the coefficients are obtained. We consider the exact polynomial and add a fixed relative error to each coefficient: $`\stackrel{~}{c}_k=c_k(1+R_kϵ),`$ (35) where $`R_k`$ are random numbers in the interval $`[-1,1]`$, and $`ϵ`$ is a small real positive number. We then calculate the zeros of the polynomial $`\sum _k\stackrel{~}{c}_kz^k`$. The effect of obtaining approximate coefficients by Glasgow averaging is quite similar to that of simply adding “artificial noise” to the exact coefficients. In Fig. 6 we plot the roots obtained for $`N=32`$ via Glasgow averaging from $`N_E=10^7`$ configurations and the roots of the exact polynomial perturbed by noise of magnitude $`ϵ=10^{-3}`$. The patterns of the two sets of roots are hardly distinguishable. In this subsection we study the effect of such small random perturbations. In particular, we are interested in the dependence of the precision $`\mathrm{\Delta }(N_E)`$ (in obtaining $`\mu _c`$) on $`N`$ and $`ϵ`$. This approach has the advantage that we can consider larger values of $`N`$ than in the case of direct Glasgow averaging. We expect that the relation between the error parameter $`ϵ`$ and the number of Glasgow configurations necessary to achieve the same accuracy is given by the central limit theorem, $`ϵ\sim 1/\sqrt{N_E}`$. One striking feature for larger $`N`$ is the extreme sensitivity of the roots to small perturbations. For $`N=96`$, a noise factor of $`10^{-18}`$ already leads to a 20% error in the critical value of the chemical potential. Therefore, computing the zeros for $`N=96`$ is already beyond the capability of standard double precision.
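At small $`N`$ the experiment fits comfortably in double precision. The sketch below perturbs the $`m=0`$ coefficients of (30) (dropping the overall constant) and measures how far the zeros move; the displacement grows with the noise level, consistent with the convergence threshold $`\mathrm{log}_{10}(ϵ)\approx -N/4`$ found below.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(4)
N = 24
# exact coefficients of the truncated exponential (30), polynomial in mu^2
c = np.array([(-N) ** j / factorial(j) for j in range(N + 1)])
exact = np.roots(c[::-1])

def max_shift(eps):
    """Largest distance from a perturbed root to the nearest exact root."""
    R = rng.uniform(-1.0, 1.0, size=N + 1)
    noisy = np.roots((c * (1.0 + R * eps))[::-1])
    return max(np.min(np.abs(exact - z)) for z in noisy)

d_large, d_small = max_shift(1e-3), max_shift(1e-12)
print(d_large, d_small)
```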
In all our calculations we use multiprecision arithmetic, implemented either using the GNU multiprecision package or the one made publicly available by NASA . In Fig. 8 we plot zeros with different degrees of artificial noise (see label of figure) for $`N=96`$. The zeros for $`ϵ=10^{-25}`$ coincide with the exact roots. The noise factor for the remaining four sets of zeros increases by the same factor of $`10^3`$ from one set to the next. The intercepts of the edge of the eigenvalue cloud with the real axis are practically equally spaced for the different sets of zeros. The precision $`\mathrm{\Delta }(ϵ)`$ appears to be a linear function of $`\mathrm{log}(ϵ)`$ until $`\mathrm{\Delta }(ϵ)`$ vanishes. In Fig. 8 we plot $`\mathrm{\Delta }(ϵ)`$ versus $`\mathrm{log}_{10}(ϵ)`$ for $`N=48,96,192,384`$. For error values not too far away from convergence, we observe a linear relationship between $`\mathrm{\Delta }(ϵ)`$ and $`\mathrm{log}(ϵ)`$. In all cases convergence is approximately achieved when $`\mathrm{log}_{10}(ϵ)\approx -N/4`$. This shows that the accuracy required to achieve a given precision increases exponentially with $`N`$. These results are consistent with results obtained with Glasgow averaging where the relative variance of the coefficients is roughly constant and varies as $`1/\sqrt{N_E}`$ with only a weak dependence on $`N`$. We conclude that adding random noise to the exactly known coefficients has the same effect on the roots of the partition function as Glasgow averaging. This confirms once more that an exponentially large number of configurations is required to obtain a given precision. Extrapolating the $`N`$-dependence of the number of configurations needed given by the linear interpolation in Fig. 5 to lattice simulations leads to extremely large numbers for even a small lattice size. Consider for example a matrix size of $`N=128`$ (so $`D`$ in (5) is $`256\times 256`$), corresponding to a $`4^4`$ lattice with one degree of freedom per site.
To achieve a precision of $`\mathrm{\Delta }=0.3`$ we would need approximately $`N_E=10^{28}`$ configurations, and for $`\mathrm{\Delta }=0.2`$ we would need $`N_E=10^{32}`$ configurations. A much more reasonable number of configurations can only be achieved if we consider a small lattice of $`2^4`$ which corresponds to $`N=24`$ for Kogut-Susskind (staggered) fermions. Here we would need $`N_E=10^6`$ configurations for $`\mathrm{\Delta }=0.3`$ or $`N_E=10^8`$ configurations for $`\mathrm{\Delta }=0.2`$. This exponential dependence was noted previously . In , a signal was achieved for a lattice of $`2^4`$ with $`10^5`$ configurations but for a $`4^4`$ lattice only a very weak signal was found after $`2\times 10^6`$ configurations. ### C Further analysis of the convergence of Glasgow averaging In the previous two subsections we hope to have convinced the reader that the phenomena accompanying Glasgow averaging are nothing but the effect of knowing the polynomial coefficients of the partition function only with limited precision. This follows from the fact that the pattern of the false (unconverged) zeros is reproduced by adding small random numbers to the exact polynomial coefficients. In our model the zeros close to the real axis are the last to converge. These are also the most interesting in practice, since they determine the critical value $`\mu _c`$. It is conceivable that for certain situations in QCD (such as with finite temperature) the roots close to the real axis are among the faster converging ones, which would provide a glimmer of hope for the Glasgow method. In this subsection we consider the effect of artificial noise in detail. We wish to understand this phenomenon as well as why the false zeros are always concentrated in a cloud of a well defined shape. We will find that there is a clear correlation between the magnitude of the partition function and the stability of the zeros. In Fig. 
9 we give a topographic map of the absolute value of the RMM partition function on a logarithmic scale as a function of $`\mu `$. That is, we plot the level curves of the free energy per site given (up to a constant) by $`\mathrm{ln}(|Z_N|^2)/N`$. This is the natural quantity to study, since the partition function scales exponentially with $`N`$ . We observe a discontinuity in the derivative of the absolute value of the partition function in the complex $`\mu `$ plane. This is the locus of the zeros of the partition function. The deepest points are those around the (real) critical value of the chemical potential. A quick comparison with the preceding scatter plots should convince the reader that the higher the value of $`\mathrm{ln}(|Z|)`$ the more robust are the zeros in that neighborhood. However, the level curves do not match the contour of the cloud of false zeros. The situation is a little more subtle, as we will see below. #### 1 The noisy partition function It is useful to separate the approximate (“noisy”) partition function into the exact one and the part due to the error in the coefficients, $`Z_{\mathrm{tot}}(\mu )=Z_0(\mu )+Z_{\mathrm{err}}(\mu ).`$ (36) The partition function $`Z_{\mathrm{err}}(\mu )`$ is a polynomial whose coefficients are the differences between the exact coefficients and the approximate ones, $`Z_{\mathrm{err}}(\mu )={\displaystyle \sum _{k=0}^{N}}\delta c_k\,\mu ^k;\qquad \stackrel{~}{c}_k=c_k+\delta c_k.`$ (37) The differences are generated as $`\delta c_k=ϵR_kc_k`$ in the ‘artificial noise’ case. Our initial assumption was that for true Glasgow averaging the relative error is approximately the same for all coefficients. This assumption was reinforced by the similarity between the two types of results discussed in previous subsections. In the following, we will discuss mainly the ‘artificial noise’ partition function.
In terms of $`Z_{\mathrm{err}}(\mu )`$, an explanation of the qualitative picture we have observed is the following. In the regions of the complex $`\mu `$ plane where $`|Z_{\mathrm{err}}(\mu )|\ll |Z_0(\mu )|`$ one may ignore $`Z_{\mathrm{err}}(\mu )`$. The roots of the total partition function located in these regions coincide to a good degree with the corresponding exact roots. On the contrary, in regions where $`Z_{\mathrm{err}}(\mu )`$ dominates, the roots are determined by the latter and their general pattern has no similarity to their exact counterparts. This is consistent with the fact that the robust zeros are the ones situated in the region with higher $`\mathrm{ln}(|Z_0|)`$. The shrinking of the cloud of false zeros with decreasing $`ϵ`$ is a consequence of the corresponding decrease in the magnitude of $`Z_{\mathrm{err}}`$. Throughout most of the complex $`\mu `$ plane, $`Z_0`$ is clearly larger than $`Z_{\mathrm{err}}`$. However, near the roots of $`Z_0`$ the error part has its chance to dominate. This is because the value of the exact partition function is the result of a major cancellation in the region where $`Z_0`$ is well approximated by $`\mathrm{exp}(N\mu ^2)`$. The sum of the (finite) series is much smaller than the general term, which is typically of order $`1`$. By multiplying each term of this series by a random number we spoil the cancellations. The sum of the perturbed series may therefore be significantly larger in absolute value than the exact sum.
From our decomposition, we have $`Z_0(\mu _k+\delta \mu _k)+Z_{\mathrm{err}}(\mu _k+\delta \mu _k)=0\mathrm{\Rightarrow }{\displaystyle \frac{|Z_{\mathrm{err}}(\mu _k+\delta \mu _k)|}{|Z_0(\mu _k+\delta \mu _k)|}}=1.`$ (38) Since the modified roots are not zeros of $`Z_0`$ or $`Z_{\mathrm{err}}`$, the above equality is not fulfilled trivially. It indicates that the false zeros should be located in the transition region where $`|Z_0|`$ and $`|Z_{\mathrm{err}}|`$ are comparable, i.e., on the border of the region where $`Z_{\mathrm{err}}`$ dominates, rather than scattered inside it. In Fig. 10 we show how the contour of the cloud of false zeros is obtained using the formula above. We can also make a quantitative estimate of how well converged a given root is. If we expand the first equality in (38) to first order we get $`Z_0(\mu _k)+\delta \mu _kZ_0^{\prime }(\mu _k)+Z_{\mathrm{err}}(\mu _k)+\delta \mu _kZ_{\mathrm{err}}^{\prime }(\mu _k)\approx 0.`$ (39) The first term is simply zero. The last term we will neglect as being of higher order in $`\delta \mu _k`$. The remaining two terms can be rearranged to give $`\delta \mu _k\approx {\displaystyle \frac{Z_{\mathrm{err}}(\mu _k)}{Z_0^{\prime }(\mu _k)}}.`$ (40) In other words, the variation of a given root $`\mu _k`$ is proportional to the ratio of $`Z_{\mathrm{err}}`$ to $`Z_0^{\prime }`$. If this quantity is negligible, the root is close to the exact one. If the ratio is close to $`1`$ or larger, the shift in $`\mu _k`$ is large, and the root is not obtained correctly. Therefore we can say that in the region where $`|Z_{\mathrm{err}}|\ll |Z_0^{\prime }|`$, the roots of $`Z_{\mathrm{tot}}`$ are reliable. #### 3 Error piece as additional phase We may check our arguments in the previous sections by comparing the magnitudes of the exact, the total and the noise partition functions (or free energies) in the complex $`\mu `$ plane. Fig. 11 shows the absolute value of the free energy per site, $`\mathrm{ln}(|Z_0|)/N`$, in one quadrant of the complex $`\mu `$ plane. The location of the exact zeros is on the cusp which runs from $`\mu =i`$ to $`\mu =0.527`$.
In Fig. 12 we have the same type of plot now for the total partition function with a noise factor $`ϵ=10^{-8}`$. The cusp in the noisy result bifurcates. The exact and the noisy surfaces coincide exactly except for the region between the two new branches of the cusp, where the noisy partition function is larger. The new cusps coincide with the locus of the false zeros. A few false zeros are also scattered inside the region. It is clear from Fig. 13, where we plot the surfaces corresponding to $`Z_0`$ and $`Z_{\mathrm{err}}`$ simultaneously, that the new cusps are located at the intersection of the free energy surfaces. The exact piece dominates everywhere except for inside the bifurcation, where the error piece dominates. The $`\mu `$ dependence is so steep for both of them that the smaller piece becomes negligible very fast as one moves away from the intersection line. When we discussed earlier the properties of polynomials as partition functions, we associated the different analytic functions which are approximated by the polynomial in different regions of the complex parameter space with the phases of the partition function. We mentioned that the zeros are typically located along the phase transition lines, since within any given phase the partition function is a smooth analytic function which does not vanish. The partition function $`Z_{\mathrm{err}}`$ has a quasi-analytic behavior similar to that of the partition function itself. The bifurcation of the line of zeros can be interpreted as the presence of an additional spurious ‘phase’ in the partition function, namely, the region where the error piece dominates. Then the fact that the false zeros are located on the $`|Z_{\mathrm{err}}|=|Z_0|`$ line is natural. #### 4 Analytic approximation of the error partition function The question is whether the error partition function can be at all approximated by an analytic function. In the present case the answer is very simple.
Let us consider the error partition function, $`Z_N^{\mathrm{error}}(\mu )=ϵ{\displaystyle \underset{k=0}{\overset{N}{\sum }}}R_k{\displaystyle \frac{(-N\mu ^2)^k}{k!}}.`$ (41) It is the same truncated inverse exponential we have discussed, only now each term is multiplied by a random number $`R_k`$ of order $`1`$. The terms in the original series conspire to achieve a major cancellation, of order $`e^{-N\mathrm{Re}\mu ^2}`$. The random numbers spoil this, and each term in the series is left to fend for itself, and the sum is dominated by the largest terms. From the ratio of two consecutive terms in the sum, $`{\displaystyle -\frac{N\mu ^2}{k}}`$ (42) we conclude that the largest term (in absolute value) is the one with $`k=k_{\mathrm{max}}=|\mu ^2N|`$. If $`|\mu ^2|>1`$ then the largest term is the one with $`k=N`$. Of course the surrounding terms must have a significant contribution, but the end result should still be proportional to this largest term. For $`|\mu |<1`$ our estimate is therefore $`|Z_N^{\mathrm{noise}}(\mu )|\approx 𝒞{\displaystyle \frac{k_{\mathrm{max}}^{k_{\mathrm{max}}}}{k_{\mathrm{max}}!}}\approx 𝒞{\displaystyle \frac{e^{N|\mu |^2}}{|\mu |\sqrt{N}}}`$ (43) In Fig. 14 we plot the absolute value of the two surfaces corresponding to the continuum limit, $`e^{-\mu ^2N}`$ and $`e^N\mu ^{2N}`$, and the one corresponding to our estimate of the error partition function given above. The intersections of the three surfaces follow the pattern of the corresponding zeros. We also checked that (43) approximates the error partition function well. As an added bonus, we can now explain the scaling with $`N`$ of the required precision. We found that the error partition function, i.e., the exact partition function whose coefficients have been multiplied by random numbers of order $`1`$, is well approximated by an analytic expression. This expression shares an important property of the true partition function, namely, it scales exponentially with $`N`$.
$`{\displaystyle \frac{1}{N_1}}\mathrm{ln}(Z_{N_1}^{exact}(\mu )){\displaystyle \frac{1}{N_2}}\mathrm{ln}(Z_{N_2}^{exact}(\mu ));{\displaystyle \frac{1}{N_1}}\mathrm{ln}(Z_{N_1}^{err}(\mu )){\displaystyle \frac{1}{N_2}}\mathrm{ln}(Z_{N_2}^{err}(\mu ))`$ (44) The magnitude of the error partition function is also controlled by the factor $`ϵ`$ which mimics the precision to which the coefficients are approximated in the averaging process. The locus of the false zeros is controlled by the relative magnitude of the true partition function and the error partition function. To have the same pattern of zeros, we must also scale $`ϵ`$ exponentially with $`N`$, $`ϵ(N)=\alpha ^N`$ . ## IV Conclusions We have investigated Glasgow averaging using a random matrix model at nonzero chemical potential. We have found that in a quenched ensemble the method converges, but that it requires an exponentially large number of configurations. The roots of the averaged polynomial are initially distributed similarly to the eigenvalues of individual configurations. As the averaging proceeds, the roots approach their exact values. After averaging over a finite number of configurations, the roots clearly separate into two groups. Some roots are close to the corresponding exact ones. Typically, these are the zeros far from the real axis. The remaining roots are situated in a cloud around the intersection of the real axis and the locus of the exact zeros. The zeros outside the cloud are practically exact, while those inside and on the boundary of the cloud are very badly determined. They cannot be traced to individual exact zeros. The shape of the cloud is similar for different matrix sizes. It shrinks as more configurations are taken into account. However it only shrinks as a logarithmic function of the total number of configurations. 
By fitting the number of configurations needed to reach a given precision at several matrix sizes, we were able to estimate how this number scales with the matrix size. Our conclusion is that the number of configurations needed to reach a given level of precision grows exponentially with the size of the matrix. The results of the corresponding unquenched simulations, using an ensemble generated with $`N_f=1`$ and $`\mu =0`$, are similar to the quenched ones. The unquenched ensemble may have been helpful in improving the overlap between the simulation and the ‘true’ ensemble corresponding to fixed nonzero $`\mu `$. However, this would still necessitate simulating an ensemble at nonzero $`\mu `$, which is precisely what we were trying to avoid. The exponential statistics observed by us are more likely to be the signature of the sign problem itself, i.e., the magnitude of the cancellation brought about by the varying phase of the determinant. Hence the Glasgow method is unable to surmount the sign problem. However, in a problem where the latter is absent, such as $`SU(2)`$ simulations , the Glasgow method – even quenched – should be helpful. It would be interesting to see how the overlap issue manifests itself in this case. We obtained results very similar to those of Glasgow averaging by perturbing the exact coefficients in the RMM partition function polynomial. We studied empirically the dependence of the reliability of the zeros on the precision with which the coefficients are known and on the size $`N`$ of the model matrix, for larger matrices. Just as in the case of averaging, this dependence is exponential. That is, given a desired error bound for the zeros, the necessary precision on the coefficients grows exponentially with $`N`$.
The extrapolation of this dependence to Glasgow averaging translates into exponentially large statistics, since the precision on the coefficients should be proportional to the square root of the number of configurations. There is a correlation between the phenomenon of slower or faster converging zeros and the magnitude of the continuum partition function. The zeros that converge slowly are in the region where the partition function is suppressed. Large cancellations require better precision, hence more statistics. More formally, the effect of perturbing the coefficients can be understood as the addition of an extra (error) polynomial to the true partition function. This extra piece is found to scale exponentially with $`N`$, just like the true partition function. The effect on the phase structure can be seen as the introduction of a spurious phase, which replaces the true ones in the regions of parameter space where the error partition function dominates. The zeros in these regions follow the modified phase boundaries. The scaling property of the error partition function explains the need for exponential statistics in Glasgow averaging. The negative result regarding unquenched simulations at $`\mu =0`$ is perhaps disappointing. It indicates that the quenched ensemble and the unquenched ensemble at $`\mu =0`$ are equally relevant to the behavior of the model close to the critical $`\mu `$. The upside of the failure of the unquenched ensemble at $`\mu =0`$ is that in future applications of the Glasgow method it might be worth trying to use a quenched ensemble. We remind the reader that using the quenched ensemble is not equivalent to the quenched approximation. Another glimmer of hope is in the phenomenon of fast converging zeros. In the present random matrix model the zeros that determine the critical $`\mu `$ are the last to converge. From our analysis there is no indication that the sensitive zeros are generally those close to the real (physical) axis. 
There is a possibility that in QCD or in other interesting non-Hermitian models the critical parameter values are determined by the robust zeros. It is hard to make any statement in this respect from the currently available QCD data . Perhaps a schematic model with features closer to QCD in terms of eigenvalue distribution in the complex $`\mu `$ plane would help clarify this issue. In general, it would be interesting to see how much of the analysis in this work regarding zeros of approximately known polynomial partition functions applies to other models. ## V Acknowledgements We thank I.M. Barbour, S. Hands, E. Klepfish, M.P. Lombardo and S.E. Morrison for fruitful discussions. R.D. Amado and M. Plümacher are thanked for several critical readings of the manuscript. This work was supported in part by NSF grants PHY-98-00098 and PHY-97-22101 as well as the DOE grant DE-FG-88ER40388. Part of the calculations presented in this paper have been carried out at the National Scalable Cluster Project Center at the University of Pennsylvania, which is supported by a grant from the NSF. The multiprecision calculations were performed using the GNU multiprecision package or the package made available by NASA .
# An alternative method for calculating the energy of gravitational waves Miroslav Súkeník and Jozef Sima Slovak Technical University, Radlinského 9, SK-812 37 Bratislava, Slovakia Abstract In the expansive nondecelerative universe model, matter is continuously created, which is why the Vaidya metrics is applied. This fact allows for localizing gravitational energy and for calculating the energy of gravitational waves using an approach alternative to the well-established procedure based on the quadrupole formula. A rationalization of the gradual increase in the entropy of the Universe, using the relation describing the total curvature of space-time, is given too. Any physical system with a time-dependent mass distribution can be considered a source of gravitational waves. The energy emitted per unit time is described by the well-known quadrupole formula: $`\frac{dE}{dt}=P_{gw}=\frac{G}{45c^5}\stackrel{\dot{}\dot{}\dot{}}{K}_{\alpha \beta }\stackrel{\dot{}\dot{}\dot{}}{K}^{\alpha \beta }`$ (1) where $`K_{\alpha \beta }`$ is the quadrupole tensor of the mass distribution in the source of emission. For $`K_{\alpha \beta }`$ it holds: $`K_{\alpha \beta }(t)={\textstyle \sum }\rho (t,x)(3X_\alpha X_\beta -\delta _{\alpha \beta }X_\kappa X^\kappa )\mathrm{\Delta }V`$ (2) In the expansive nondecelerative universe (ENU) model , the creation of matter and of gravitational energy occur simultaneously. The laws of energy conservation still hold, since the energy of the gravitational field is negative in the ENU. The total energy of the Universe is thus exactly zero and the Universe can continuously expand with the velocity of light $`c`$. This postulate is expressed in the ENU by the equation: $`a=c.t_c=\frac{2GM_u}{c^2}`$ (3) where $`a`$ is the gauge factor, $`t_c`$ is the cosmological time, and $`M_u`$ is the mass of the ENU. In such a model, owing to the matter creation, the Vaidya metrics must be used, which makes it possible to localise the gravitational energy.
Weak fields are in first approximation described by Tolman’s relation $`ϵ_g=\frac{R.c^4}{8\pi .G}=\frac{3m.c^2}{4\pi a.r^2}`$ (4) in which $`ϵ_g`$ is the density of the gravitational energy being emitted by a body with the mass $`m`$ at the distance $`r`$, and $`R`$ denotes the scalar curvature (contrary to the more frequently used Schwarzschild metrics, in the Vaidya metrics $`R\ne 0`$). Using relations (3) and (4), an equation expressing the gravitational output $`P_g`$ is obtained: $`P_g=\frac{d}{dt}{\textstyle \int }ϵ_g𝑑V=\frac{m.c^3}{a}=\frac{m.c^2}{t_c}`$ (5) Substituting the total mass of the Universe $`M_u`$ from (3) into (5), the absolute value of the total gravitational energy output of the Universe follows: $`P_{tot}=\frac{c^5}{2G}\approx 2\times 10^{52}\mathrm{W}`$ (6) This output is time independent and represents, at the same time, an upper limit on any gravitational energy output: no machine or physical phenomenon may create a higher output than $`P_{tot}`$. To illustrate, the radiant output of our Galaxy is about 10<sup>37</sup> W, and the number of galaxies in the observable part of the Universe is about 10<sup>11</sup>, which corresponds to a total radiant output of some 10<sup>48</sup> W. Sources of gravitational waves can be either periodic or aperiodic. Examples of aperiodic sources are accelerated linear motion, supernova bursts, or nonspherical gravitational collapse. In what follows we focus on periodic sources, such as planets orbiting the Sun or a double star rotating around the common centre of inertia. The emitted gravitational energy cannot exceed the value of $`P_{tot}`$ from (6). In any case, the condition $`P_{gw}\le E_k.\omega \le \frac{c^5}{2G}`$ (7) must be observed, where $`P_{gw}`$ is the energy of gravitational waves emitted within a time unit and $`E_k`$ is the kinetic energy of a body moving on a circular or elliptic orbit with the angular velocity $`\omega `$.
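The numerical value quoted in (6) can be checked in one line from the constants (a simple sketch in SI units, not part of the original derivation):

```python
# Check of Eq. (6): the maximum gravitational output P_tot = c^5 / (2G).
c = 2.998e8       # speed of light [m s^-1]
G = 6.674e-11     # Newton's gravitational constant [m^3 kg^-1 s^-2]

P_tot = c ** 5 / (2 * G)
print(f"P_tot = {P_tot:.2e} W")   # ≈ 1.8e52 W, i.e. of order the quoted 2e52 W
```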
In these circumstances (the eccentricity of an elliptic orbit is neglected in deriving the following relations), the ratio of the emitted gravitational output $`P_{gw}`$ to $`E_k.\omega `$ will be comparable to the ratio of $`E_k.\omega `$ to $`P_{tot}`$ (in the limiting case, both values are identical and equal to unity), i.e. $`\frac{P_{gw}}{E_k.\omega }\approx \frac{E_k.\omega }{P_{tot}}`$ (8) It follows from (8) that the output of emitted gravitational waves is $`P_{gw}=\frac{2G.E_k^2.\omega ^2}{c^5}`$ (9) The validity of equation (9) was tested on a system consisting of two bodies with nearly identical masses, $`m_1\approx m_2=m`$ (10) These bodies rotate around the common centre of inertia on a circular orbit with diameter $`r`$ and angular velocity $`\omega `$. The kinetic energy of each body is $`E_k=\frac{1}{2}m.r^2.\omega ^2`$ (11) Since the masses of the bodies are identical, both bodies must emit the same quantity of gravitational energy. It then follows from (9), (10) and (11) that $`P_{gw}=\frac{4G.E_k^2.\omega ^2}{c^5}=\frac{G.m^2.r^4.\omega ^6}{c^5}`$ (12) The quadrupole formula leads to the equation $`P_{gw}=\frac{32G.r^4.\omega ^6}{5c^5}\left(\frac{m_1.m_2}{m_1+m_2}\right)^2`$ (13) which, when (10) holds, transforms into $`P_{gw}=\frac{8G.m^2.r^4.\omega ^6}{5c^5}`$ (14) Equation (14) differs from our relation (12) only in the coefficient 8/5, which, bearing in mind the simplifications used in our derivation, can be assessed as excellent agreement. A further domain of application of our approach lies in the possibility of rationalizing some still-open questions, the entropy of the early Universe being one of them.
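The comparison between (12) and (13)–(14) made above can be checked numerically: for an equal-mass binary the two expressions differ by exactly the factor 8/5, independently of the parameters. A short sketch (the binary parameters are purely illustrative):

```python
G, c = 6.674e-11, 2.998e8   # SI units

def p_enu(m, r, omega):
    # Eq. (12): the ENU estimate for two equal masses m
    return G * m**2 * r**4 * omega**6 / c**5

def p_quadrupole(m1, m2, r, omega):
    # Eq. (13): the standard quadrupole result
    reduced_sq = (m1 * m2 / (m1 + m2)) ** 2
    return 32 * G * r**4 * omega**6 * reduced_sq / (5 * c**5)

# Illustrative values only: roughly a solar-mass pair on a ~1 AU orbit.
m, r, omega = 2.0e30, 1.5e11, 2.0e-7
ratio = p_quadrupole(m, m, r, omega) / p_enu(m, r, omega)
print(ratio)   # 8/5 = 1.6, up to floating-point rounding
```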
A problem arises in the interpretation of the individual terms in a simplified equation expressing the total curvature of space-time, $`R=W+R^og`$ (15) where $`R`$ is the Riemann tensor representing the total curvature of space-time, $`W`$ is the Weyl tensor describing deformation and tidal forces, $`R^o`$ is the Ricci tensor, and $`g`$ is the metric tensor. Application of the Vaidya metrics shows, via (4), that the scalar curvature decreases in time. Since this curvature is a contraction of the Ricci tensor, the latter must decrease in time too. Under identical conditions the Riemann tensor is time independent, which leads to the conclusion that the Weyl tensor must gradually increase in time and, in turn, that the Universe had to start its expansion in a highly ordered state, i.e. in a state with minimal entropy . Conclusions The values of the energy of gravitational waves obtained using our simple alternative approach, based on a first approximation applied to the domain of weak gravitational fields, are comparable to those derived from the exact quadrupole relation. In addition to our previous results, this further underlines the promise of the ENU model and the Vaidya metrics. References
V. Skalský, M. Súkeník: Astrophys. Space Sci., 178 (1991) 169
S. Hawking: Sci. Amer., 236 (1980) 34
P.C. Vaidya: Proc. Indian Acad. Sci., A33 (1951) 264
J. Sima, M. Súkeník: General Relativity and Quantum Cosmology, Preprint in: US National Science Foundation, E-print archive: gr-qcxxx.lanl.gov., paper 9903090
R. Penrose: The Large, the Small and the Human Mind, Cambridge University Press, 1997, p. 24
# HST/NICMOS2 observations of the HD 141569 A circumstellar disk Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract No. NAS5-26555. ## 1 Introduction The recent discovery of a disk around the old Pre-Main Sequence A0 star HR 4796 A (Koerner et al. 1998; Jayawardhana et al. 1998; Augereau et al. 1999a; Schneider et al. 1999) opened new perspectives in our understanding of the evolution of circumstellar disks and the early stages of planetary formation (Lagrange et al. 1999). The comparable age, spectral type and IRAS infrared excess of the post Herbig Ae/Be star HD 141569 A are pertinent clues for suspecting the presence of an optically thin disk around this star at a similar evolutionary status ($`t\sim 10`$ Myr, B9.5Ve star, van den Ancker et al. 1998). A main difference with HR 4796 A may concern the multiplicity. Whereas HR 4796 A has a physically bound companion (HR 4796 B), which may play a role in the dynamics of the disk, the two stellar companions of HD 141569 A detected so far (Gahm et al. 1983; Pirzkal et al. 1997) may not be gravitationally linked to the primary, as postulated by Lindroos (1985). In addition, no spectroscopic companion is detected by Corporon & Lagrange (1999). Since IRAS, further investigations of the material around HD 141569 A have been performed. HD 141569 A shows a very small intrinsic polarization consistent with that of the prototype Vega-like stars (Yudin & Evans 1998; Yudin et al. 1999). Emission spectral features from circumstellar dust grains have also been observed by Sylvester et al. (1996a) at 10–20 $`\mu `$m. Whereas the spectral energy distribution (SED) constrains the grain composition, it poorly constrains the shape of the dust distribution.
In particular, models predict an inner edge for the disk ranging between 10 AU (Malfait et al. 1998) and 650 AU (Sylvester & Skinner 1996b). In this Letter, we present the first resolved scattered light images of the HD 141569 A circumstellar disk obtained with the coronagraph on the HST/NICMOS2 camera. A disk detection is also reported independently by Weinberger et al. (1999) at $`1.1\mu `$m. We afterwards detail the morphology of the resolved structure, its brightness at $`1.6\mu `$m and finally discuss some disk properties. ## 2 Observations and reduction procedure ### 2.1 The data HST/NICMOS2 coronagraphic observations of HD 141569 A ($`V=7.1`$) were obtained on 1998 August 17 in MULTIACCUM mode. Six consecutive exposures on the target were performed in filter F160W (central wavelength $`1.6\mu `$m, bandwidth $`0.4\mu `$m) corresponding to a total integration time of 14 m 23 s. The reduction procedure of coronagraphic data requires a comparison star to assess the Point Spread Function (PSF). For this, the A1V star HD 145788, which shows no evidence of circumstellar material, was observed during the same orbit for 6 m 24 s to achieve a similar signal to noise ratio given its own flux ($`V=6.25`$). A narrow band (filter F171M) view of the field around HD 141569 A taken during the target acquisition confirms the presence of two bright companions HD 141569 B and HD 141569 C previously identified in K band by Pirzkal et al. (1997) (Fig.1). Measured position angles (hereafter PA) and separations from the primary star are summarized in Figure 1. PAs are fully consistent with Pirzkal et al. (1997) results whereas distances from this work are about 11% larger than those measured by these last authors. ### 2.2 Reduction procedure for the coronagraphic data : basic cleaning and PSF subtraction For both HD 141569 A and the PSF reference (HD 145788), we added the calibrated files provided by the STScI. 
We cleaned the bad pixels and ’grots’ (STScI 1997) using Eclipse reduction procedures (Devillard 1997). Blurred stripes on the images associated to electronic echos of the source (also called ’Mr Staypuft’ ghosts, STScI 1997) were subtracted by averaging a profile perpendicular to the stripes in a region free of astronomical sources. Before being subtracted, the reference star flux has to be scaled to that of the target object. The ratio of the HD 141569 A image to the reference star image gives the scaling factor. At this stage of the reduction, this is also an unbiased and powerful way to detect circumstellar material. Indeed, a circumstellar structure is expected to appear as a continuous feature in the ratio at a level significantly higher than the background level. Figure 2 (left) shows this ratio. An annular structure centered on the star clearly appears especially in the right and bottom parts of the image whereas the light of the stellar companions (mainly HD 141569 B) contaminates the opposite side. Azimuthally averaged radial profiles on different areas of the ratio confirm the presence of an excess (Fig.2, right). The true linear resolution is higher in an angular sector close to the major axis of the annular structure than in the perpendicular direction (resp. solid line and bold asterisks Fig.2, right). The superimposition of the two azimuthally averaged radial profiles shows that these profiles have a similar behaviour and are roughly constant up to 2″, then show a strong discrepancy between 2″ and 4″. We assume then that the region up to 2″ is free of a significant amount of resolved dust and that the scaling factor is $`0.400\pm 0.015`$ (between 0.8″ and 2″). ## 3 Results ### 3.1 Orientation of the disk Figure 3 (left) shows the final reduced image of the disk and brings out the annular structure evidenced in the ratio of HD 141569 A to the reference star (Fig.2).
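The scaling-factor determination described in Sect. 2.2 — reading a constant off the target-to-reference image ratio in an annulus assumed free of resolved circumstellar emission — can be sketched as follows. This is an illustrative reconstruction with synthetic images; the median estimator, the function name, and the pixel radii are our assumptions, not the actual pipeline.

```python
import numpy as np

def psf_scale(target, psf, center, r_in, r_out):
    """Scaling factor between a target image and a PSF reference,
    estimated as the median pixel ratio inside an annulus (radii in
    pixels) assumed to be free of resolved circumstellar emission."""
    ny, nx = target.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - center[0], y - center[1])
    mask = (r >= r_in) & (r <= r_out) & (psf > 0)
    return np.median(target[mask] / psf[mask])

# Synthetic check: a Gaussian "PSF" and a target that is 0.4 times it.
yy, xx = np.indices((64, 64))
psf = np.exp(-0.5 * ((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / 6.0 ** 2)
target = 0.4 * psf
scale = psf_scale(target, psf, (32, 32), 10, 25)
print(scale)   # recovers the injected factor of 0.4
```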
Unsubtracted secondary spider diffraction spikes are responsible for the bright areas inside the annulus. These patterns do not correspond to any realistic excess. An image of what would be observed without these spikes is shown in the bottom-right corner of the reduced image. We computed the distance from the star corresponding to the maximum surface brightness of the annulus versus the position angle over a 90$`\mathrm{°}`$ range (Southern part of the disk). Least-squares ellipse fitting constrains the major axis of the observed annulus to be at a position angle of $`355.4\mathrm{°}\pm 1\mathrm{°}`$ and leads to an upper limit for the disk inclination from edge-on of $`37.5\mathrm{°}\pm 4.5\mathrm{°}`$ assuming that the disk is axisymmetrical with respect to the star. The fit is superimposed on the reduced image of the disk (Fig.3, right) and agrees well with all the observed structure. ### 3.2 Surface brightness distribution and photometry Both radial surface brightness profiles along the Northern and the Southern semi-major axis of the disk peak at $`325\pm 10`$ AU from the star, according to the Hipparcos star distance of $`99_{-8}^{+9}`$ pc (Fig.4). Discrepancies between the two profiles are due, inside 2.5″, to the above-mentioned diffraction spikes, and outside 3.6″, to an imperfect elimination of the blurred light from the stellar companions HD 141569 B and HD 141569 C. We now focus on the Southern profile which shows more clearly the annular shape of the disk. Both inside and outside the peak at 325 AU, the surface brightness steeply decreases (FWHM ∼150 AU). The decline becomes smoother further than 420 AU. More precisely, Table 1 gives the steepness indexes which match the surface brightness for different ranges of distance from the star. The change of slope around 4.2″ has to be confirmed.
Indeed, this distance corresponds to the position of a different detector matrix, and we cannot exclude that shading caused this effect (STScI 1997). Nevertheless, the disk is positively detected at least up to 6″. We performed the photometry of the disk on elliptic contours with semi-major axis between 2″ and 9″. We find a total scattered flux density of $`4.5\pm 0.5`$ mJy ($`13.5`$ mag), which must be considered a lower limit since the scattered light below the spider diffraction patterns is not taken into account. This corresponds to a ratio of scattered to stellar flux at 1.6 $`\mu `$m of about $`2.2\times 10^{-3}`$. ## 4 Discussion ### 4.1 Disk properties Assuming an optically thin ring and an inclination from edge-on of $`35\mathrm{°}`$, we reproduce the main shape of the surface brightness along the major axis of the disk with an annulus peaked at 330 AU from the star and a radial surface dust distribution proportional to $`r^4`$ inside the peak and to $`r^{-6.8}`$ outside. This profile does not depend on the exact anisotropic scattering properties of the grains because it is measured along the major axis of the disk, i.e. where the scattering angle is always close to $`90\mathrm{°}`$. For simplicity, we have therefore assumed that grains scatter isotropically. The predicted surface brightness is superimposed in Figure 4 on the observed one. The dust population derived, assumed to be made of amorphous fluffy grains as described in Augereau et al. (1999a) larger than about a half micrometer, fits the 20–100 $`\mu `$m SED quite well but does not explain the shorter wavelength data. Emission features at 7.7 $`\mu `$m, 8.6 $`\mu `$m and 11.3 $`\mu `$m, tentatively attributed to aromatic molecules (e.g. Polycyclic Aromatic Hydrocarbons), have been detected by Sylvester et al. (1996a). The present model cannot reproduce these features.
Anyway, the presence of dust closer to the star is required to reproduce at least the 10 $`\mu `$m continuum. This second population is expected to be too close to the star (typically inside the first hundred AU) to be detectable in the present data. Such hot grains probably contribute to the $`20\mu `$m SED and may also be responsible for all or part of the 12.5 $`\mu `$m and 17.9 $`\mu `$m extended emissions (0.75″ (75 AU) in radius) resolved by Silverstone et al. (1998). More data and modeling are needed to confirm that issue. ### 4.2 Comparison with HR 4796 A It is particularly instructive to further compare HD 141569 A to HR 4796 A:
* both stars exhibit a circumstellar ring, but the HD 141569 A annulus is about 9–10 times wider than the HR 4796 A one,
* two dust populations are required to fit both full SEDs (Koerner et al. 1998; Augereau et al. 1999a),
* the inner edge of the HD 141569 A disk suggests a truncation process as already proposed for HR 4796 A,
* the outer disk distribution (beyond ∼325 AU) seems steeper than the spatial distribution of dust supplied by colliding or evaporating bodies (Lecavelier des Etangs et al. 1996) as for HR 4796 A. A perturbing body as a source for this outer truncation is possible. Nevertheless, no massive perturbing body is detected so far, unlike for HR 4796 A.
Most of the above remarks regarding the properties and the dynamics of the HD 141569 A circumstellar disk will be further investigated in a forthcoming paper.
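As a side note, the link between the fitted axis ratio and the inclination quoted in Sect. 3.1 is simple projection geometry. A minimal sketch, assuming an intrinsically circular, axisymmetric ring (the function name is ours):

```python
from math import asin, degrees, radians, sin

def inclination_from_edge_on(b_over_a):
    """Inclination from edge-on of an intrinsically circular ring whose
    projected ellipse has a minor-to-major axis ratio b/a."""
    return degrees(asin(b_over_a))

# An apparent axis ratio b/a = sin(37.5 deg) ~ 0.61 corresponds to the
# quoted inclination of 37.5 deg from edge-on.
print(round(inclination_from_edge_on(sin(radians(37.5))), 1))   # → 37.5
```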
# Soft X-ray Absorption by High-Redshift Intergalactic Helium ## 1 Introduction The Ly$`\alpha `$ absorption by intergalactic He ii has now been observed in four quasars at redshifts $`2.4<z<3.2`$ (Jakobsen et al. 1994; Davidsen et al. 1996; Reimers et al. 1997; Anderson et al. 1999; Heap et al. 1999). While it is known that hydrogen (together with He i) was ionized at redshift $`z\gtrsim 5`$, given the presence of transmitted flux to the blue of the hydrogen Ly$`\alpha `$ wavelength in sources up to that redshift, the double ionization of helium can take place at a lower redshift. The reason is simple: even though there are only $`\sim 8`$ helium atoms for every 100 hydrogen atoms, He ii recombines at a rate $`\sim 5.5`$ times faster than hydrogen. Consequently, if a large number of recombinations take place for each helium ion, then the He ii reionization will be delayed relative to the hydrogen reionization as long as the number of photons emitted to the intergalactic medium (hereafter IGM) above $`54.4`$ eV is less than 0.44 times the number of photons emitted above $`13.6`$ eV. This condition is satisfied by the known sources of ionizing radiation (quasars and galaxies). In practice, the mean number of recombinations of He ii may not be very large: for the uniform IGM, the He ii recombination rate is $`\sim 4`$ times the Hubble rate at $`z=4`$, and then a somewhat softer emission spectrum is required for a delayed He ii reionization. However, the recombination rate is enhanced by the clumping factor of the ionized gas, which increases as progressively denser gas is reionized, as described recently in Miralda-Escudé, Haehnelt, & Rees (1999; hereafter MHR). 
The present state of the observations of intergalactic He ii can be summarized as follows: at redshift $`z\lesssim 2.8`$, there is a He ii “Ly$`\alpha `$ forest”, with a flux decrement that is consistent with the hydrogen Ly$`\alpha `$ forest flux decrement, and a background spectrum produced by quasars, plus a possible contribution from galaxies to increase the ratio $`J_{\mathrm{H}\mathrm{i}}/J_{\mathrm{He}\mathrm{ii}}`$ of intensities at the H i and He ii ionization edges (Davidsen et al. 1996, Miralda-Escudé et al. 1996, Bi & Davidsen 1997, Croft et al. 1997). At $`z\gtrsim 2.8`$, the He ii Ly$`\alpha `$ spectra appear to be divided into two different types of regions: those where transmitted flux is still observed (with a similar $`J_{\mathrm{H}\mathrm{i}}/J_{\mathrm{He}\mathrm{ii}}`$ implied for the ionizing background), and those where the transmitted flux is undetectable or, at least, much smaller (see Heap et al. 1999 and references therein). The regions with greater transmitted flux occupy a small fraction of the spectrum at $`z\simeq 3`$, and they have typical widths of $`\sim 10^3\mathrm{km}\mathrm{s}^{-1}`$. Some of them can be attributed to the proximity effect of the source being observed (e.g., Hogan et al. 1997). These observations are not yet conclusive in clarifying the ionization history of He ii in the IGM. The reason is that the optical depth of a homogeneous medium with the mean baryon density of the universe, where all the helium is He ii, is very large: $$\tau _{0,\mathrm{He}\mathrm{ii}}=1.7\times 10^3\frac{\mathrm{\Omega }_bhY}{0.007}\frac{H_0(1+z)^{3/2}}{H(z)}\left(\frac{1+z}{4}\right)^{3/2}.$$ (1) Therefore, if only a fraction $`\sim 10^{-3}`$ of the average IGM helium density is in the form of He ii, the flux transmission can already be reduced to very low levels. 
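For orientation, equation (1) can be evaluated directly; the sketch below assumes a flat cosmology with $`\mathrm{\Omega }_m=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$, and sets the prefactor $`\mathrm{\Omega }_bhY/0.007`$ to unity (our illustrative choices, not values fixed by the text):

```python
import math

def tau0_heii(z, omega_m=0.3, omega_l=0.7, prefactor=1.0):
    """Gunn-Peterson optical depth of fully singly-ionized He, eq. (1).

    prefactor stands for Omega_b*h*Y/0.007; H(z)/H0 is taken for flat LCDM.
    """
    hz_over_h0 = math.sqrt(omega_m * (1.0 + z) ** 3 + omega_l)
    return (1.7e3 * prefactor
            * (1.0 + z) ** 1.5 / hz_over_h0   # H0 (1+z)^{3/2} / H(z)
            * ((1.0 + z) / 4.0) ** 1.5)

tau = tau0_heii(3.0)  # ~3e3: a He II fraction of only 1e-3 still gives tau ~ 3
```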
Even when the line of sight crosses a void with density $`\rho \simeq 0.1\overline{\rho }`$ (about the lowest densities in the photoionized IGM at $`z=3`$ according to numerical simulations of Cold Dark Matter models; see MHR and references therein), the void will still be optically thick to He ii Ly$`\alpha `$ photons if its He ii fraction is greater than $`\sim 0.01`$. The implication is that even a very stringent upper limit to the He ii Ly$`\alpha `$ flux transmission does not imply that most of the He ii had not yet been reionized at the observed redshift. Although the observations quoted earlier could be simply interpreted as revealing an IGM with a patchy ionization of He ii, with He iii regions surrounding luminous quasars and a pure He ii IGM filling the volume between the He iii regions, the correct picture is probably much more complicated. In the regions where no transmitted flux is detected, the helium might also be doubly ionized over most of the volume, although with a much lower ionizing background intensity than in the regions near luminous quasars. The higher He ii fraction implied in the regions with low intensity, although still much smaller than unity, could be enough to completely absorb the Ly$`\alpha `$ photons. The presence of this low-intensity, but widespread, background above the He ii edge is quite plausible, as discussed in MHR, if there is a modest contribution to the emissivity from sources of luminosity much lower than quasars. A more direct observational probe of the mean fraction of He ii in the IGM at different redshifts would be highly valuable to test models for the reionization history, which is in turn important for the history of galaxy formation owing to the strong heating of the IGM gas produced by the He ii reionization (e.g., Efstathiou 1992; Miralda-Escudé & Rees 1994). It is shown in this paper that this new probe may be found in the soft X-ray continuum absorption spectra of high-redshift quasars. 
The distributed He ii in the IGM should cause, in addition to the He ii forest absorption in Ly$`\alpha `$ and the other Lyman series lines, continuum absorption (due to photoionization) below a rest-frame wavelength of $`228`$ Å. This continuum absorption should be very large near the edge, but the flux should recover at shorter wavelengths. We shall see that when most of the helium in the IGM is He ii, this recovery of the flux should occur at $`\sim 0.5\mathrm{keV}`$, or an observed frequency of $`\sim 0.1\mathrm{keV}`$ at $`z=4`$. ## 2 X-Ray Absorption by Intergalactic Helium Let $`F(z)`$ be the mean fraction of helium in the IGM in the form of He ii as a function of redshift. For the moment, we consider a homogeneous IGM (with a uniform fraction $`F(z)`$), but because we shall consider continuum absorption only, the results will be valid also for a clumpy IGM, as long as the clumps have a large covering factor. The continuum absorption spectrum by He ii on a source at redshift $`z_s`$ is given by $$\tau (\nu )=\int _0^{z_s}dz\frac{dl}{dz}n_{\mathrm{He},0}(1+z)^3F(z)\sigma _{\mathrm{He}}[\nu (1+z)],$$ (2) where $`dl`$ is the proper length element along the line of sight, $`n_{\mathrm{He},0}`$ is the mean primordial helium density extrapolated to the present epoch, $`\sigma _{\mathrm{He}}(\nu )`$ is the photoionization cross section of He ii, and $`\nu `$ is the observed frequency. If He ii is highly ionized at low redshift, most of the contribution to $`\tau (\nu )`$ will be from high redshift, where we can use $`dl/dz\approx cH_0^{-1}\mathrm{\Omega }_0^{-1/2}(1+z)^{-5/2}`$. We also define $`\mathrm{\Sigma }_{\mathrm{He}}(\nu )\equiv \sigma _{\mathrm{He}}(\nu )(\nu /\nu _{\mathrm{He}})^3`$, where $`\nu _{\mathrm{He}}`$ is the frequency at the ionization edge of He ii (the function $`\mathrm{\Sigma }_{\mathrm{He}}`$ varies only slowly with frequency and is plotted below in Fig. 1). 
Equation (2) can be rewritten as $$\tau (\nu )=\frac{cn_{\mathrm{He},0}}{H_0\mathrm{\Omega }_0^{1/2}}\sigma _{\mathrm{He}}[\nu (1+z_s)]\left(1+z_s\right)^3\int _0^{z_s}\frac{dzF(z)}{(1+z)^{5/2}}\frac{\mathrm{\Sigma }_{\mathrm{He}}[\nu (1+z)]}{\mathrm{\Sigma }_{\mathrm{He}}[\nu (1+z_s)]},$$ (3) or, using $`\mathrm{\Omega }_0=0.3`$, $`\mathrm{\Omega }_bh^2=0.019`$, $`Y=0.25`$, $`H_0=65\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$: $$\tau (\nu )=550\left(\frac{\nu _{\mathrm{He}}}{\nu }\right)^3\frac{\mathrm{\Sigma }_{\mathrm{He}}[\nu (1+z_s)]}{\mathrm{\Sigma }_{\mathrm{He}}(\nu _{\mathrm{He}})}\int _0^{z_s}\frac{dzF(z)}{(1+z)^{5/2}}\frac{\mathrm{\Sigma }_{\mathrm{He}}[\nu (1+z)]}{\mathrm{\Sigma }_{\mathrm{He}}[\nu (1+z_s)]}.$$ (4) As an example, we consider the simple case of a sudden and complete reionization of He ii at $`z=z_i`$: $`F=1`$ at $`z>z_i`$ and $`F=0`$ at $`z<z_i`$. The integral in the above equation is then well approximated as $`(z_s-z_i)/(1+z_s)^{5/2}`$ (for small $`z_s-z_i`$). For $`z_i=3`$ and $`z_s=4`$, $`\tau (\nu )\approx 10(\nu _{\mathrm{He}}/\nu )^3`$, so the optical depth reaches unity at an observed frequency $`\nu \approx 0.12`$ keV. ## 3 Comparison to Galactic absorption Any extragalactic source will also be absorbed in soft X-rays by Galactic hydrogen and helium, with a hydrogen column density $`N_{\mathrm{H}\mathrm{i}}`$ that can be determined from the H i 21 cm emission. The absorption optical depth is $$\tau _G(\nu )=N_{\mathrm{H}\mathrm{i}}\left[\sigma _{\mathrm{H}\mathrm{i}}(\nu )+0.084\sigma _{\mathrm{He}\mathrm{i}}(\nu )\right]\equiv N_{\mathrm{H}\mathrm{i}}\sigma _{\mathrm{H}\mathrm{i}}(\nu )R_{\mathrm{He}}(\nu ),$$ (5) where the ratio of helium to hydrogen atoms is $`(Y/3.97)/(1-Y)=0.084`$, $`\sigma _{\mathrm{H}\mathrm{i}}`$ and $`\sigma _{\mathrm{He}\mathrm{i}}`$ are the photoionization cross sections of H i and He i, and we have defined the quantity $`R_{\mathrm{He}}(\nu )`$. 
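The sudden-reionization estimate above can be checked numerically; this sketch neglects the slowly varying $`\mathrm{\Sigma }_{\mathrm{He}}`$ ratios, as in the text:

```python
# Sudden, complete He II reionization at z_i, observed in a source at z_s.
NU_HEII_KEV = 0.0544  # He II ionization edge (54.4 eV) in keV

def tau_igm(nu_kev, z_i=3.0, z_s=4.0):
    """Eq. (4) with the integral approximated by (z_s - z_i)/(1+z_s)^{5/2}."""
    integral = (z_s - z_i) / (1.0 + z_s) ** 2.5
    return 550.0 * (NU_HEII_KEV / nu_kev) ** 3 * integral

# tau ~ 10 (nu_He/nu)^3, so tau = 1 at nu = nu_He * tau(nu_He)^{1/3} ~ 0.12 keV
nu_unity = NU_HEII_KEV * tau_igm(NU_HEII_KEV) ** (1.0 / 3.0)
```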
At the frequencies that will be of interest to us, $`h\nu \lesssim 0.2`$ keV, this quantity is $`R_{\mathrm{He}}(\nu )\approx 3`$ (this is shown in detail below, in Fig. 1). Dividing the optical depth due to the high-redshift IGM from (4) by the Galactic optical depth in (5), we obtain the ratio: $$\frac{\tau (\nu )}{\tau _G(\nu )}=\frac{56}{N_{\mathrm{H}\mathrm{i},20}R_{\mathrm{He}}(\nu )}\frac{\mathrm{\Sigma }_{\mathrm{He}}[\nu (1+z_s)]}{\mathrm{\Sigma }_{\mathrm{He}}(4\nu )}\int _0^{z_s}\frac{dzF(z)}{(1+z)^{5/2}}\frac{\mathrm{\Sigma }_{\mathrm{He}}[\nu (1+z)]}{\mathrm{\Sigma }_{\mathrm{He}}[\nu (1+z_s)]},$$ (6) where $`N_{\mathrm{H}\mathrm{i},20}=N_{\mathrm{H}\mathrm{i}}/(10^{20}\mathrm{cm}^{-2})`$. This fiducial H i column density is around the lowest value that is usually reached at high Galactic latitude (e.g., Laor et al. 1997). Using the same example as in §2, where the integral is given by $`(z_s-z_i)/(1+z_s)^{5/2}\approx 0.018`$, we find $`\tau (\nu )/\tau _G(\nu )\approx 1/[N_{\mathrm{H}\mathrm{i},20}R_{\mathrm{He}}(\nu )]\approx 1/(3N_{\mathrm{H}\mathrm{i},20})`$. Thus, we conclude that the absorption by high-redshift intergalactic helium should be smaller than the Galactic absorption only by a factor $`\sim 3`$ when the Galactic column density has the lowest value normally found in sources, $`N_{\mathrm{H}\mathrm{i}}\simeq 10^{20}\mathrm{cm}^{-2}`$. Laor et al. (1997) measured the Galactic column densities in a sample of 23 low-redshift quasars from the absorption in soft X-rays. Comparing the column densities they derived with those measured from the 21 cm emission, they show that there are no significant differences within observational error. In particular, the two quasars with the smallest error (10%) in the column density determined from the soft X-ray spectrum by Laor et al. also agree with the 21 cm column density (see their Fig. 2). 
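Plugging the same sudden-reionization integral into equation (6) reproduces the quoted ratio of roughly $`1/(3N_{\mathrm{H}\mathrm{i},20})`$ (again a sketch with the $`\mathrm{\Sigma }_{\mathrm{He}}`$ ratios set to one and $`R_{\mathrm{He}}\approx 3`$):

```python
def tau_ratio(n_hi_20=1.0, r_he=3.0, z_i=3.0, z_s=4.0):
    """Eq. (6) for sudden reionization, with the Sigma_He ratios set to one."""
    integral = (z_s - z_i) / (1.0 + z_s) ** 2.5  # ~0.018
    return 56.0 / (n_hi_20 * r_he) * integral

ratio = tau_ratio()  # ~1/3 for the lowest Galactic columns, N_HI ~ 1e20 cm^-2
```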
Provided that the Galactic absorption can be subtracted from the measured X-ray absorption of a high-redshift quasar, given the column density measured from the 21 cm emission, it should be possible to detect the continuum absorption of intergalactic helium in the soft X-rays. A natural question to ask here is whether the shape of the absorption can be used to distinguish Galactic absorption from the high-redshift He ii absorption. Unfortunately, the shape is almost identical in the two cases, as we shall now see. In Figure 1, the solid line is the hydrogen cross section $`\sigma _{\mathrm{H}\mathrm{i}}(\nu )`$, multiplied by $`(\nu /\nu _{\mathrm{H}\mathrm{i}})^3`$, and the dashed line is $`[\sigma _{\mathrm{H}\mathrm{i}}(\nu )+(n_{\mathrm{He}}/n_\mathrm{H})\sigma _{\mathrm{He}\mathrm{i}}(\nu )](\nu /\nu _{\mathrm{H}\mathrm{i}})^3`$ \[so the ratio of the dashed line to the solid line is $`R_{\mathrm{He}}(\nu )`$, defined in eq. (5)\]. We have used the exact analytical expression for $`\sigma _{\mathrm{H}\mathrm{i}}(\nu )`$ (e.g., Spitzer 1978), and the fit of Verner et al. (1996) for $`\sigma _{\mathrm{He}\mathrm{i}}(\nu )`$. The dashed line is therefore the expected shape of the Galactic absorption, except for the fact that some of the ionized gas in the Galaxy may not have its helium doubly ionized, and there could then be some additional He ii absorption at $`z=0`$. This is shown by the dotted line, equal to $`\sigma _{\mathrm{He}}(\nu )(n_{\mathrm{He}}/n_\mathrm{H})(\nu /\nu _{\mathrm{H}\mathrm{i}})^3`$, and we see that it has almost the same slope as the dashed line at energies $`\gtrsim 0.15`$ keV. The intergalactic He ii absorption should have the same shape as the solid line, at redshift $`z=3`$ (because the He ii cross section is identical to the H i cross section shifted to a frequency 4 times higher). Notice from equation (4) that the frequency dependence of $`\tau (\nu )`$ is basically the same as that of the redshifted cross section $`\sigma _{\mathrm{He}}(\nu )`$. 
Thus, we see from Fig. 1 that the high-redshift absorption has a slightly steeper cross section than the Galactic one. In the range from $`0.1`$ keV to $`0.2`$ keV, the Galactic cross section is $`\sigma \propto \nu ^{-3.0}`$, and the intergalactic helium cross section is $`\sigma _{\mathrm{He}}\propto \nu ^{-3.15}`$. In practice, it will be extremely difficult to reach the high signal-to-noise ratio required to distinguish between these two slopes; moreover, the intrinsic emission spectrum of the source introduces an additional uncertainty. Therefore, the high-redshift helium absorption can probably be detected only as an excess of absorption over that expected from the Galactic column density derived from the 21 cm emission. ## 4 A Reionization Model In any realistic model, the reionization of all the He ii in the universe will take place over a substantial length of time, of the order of a Hubble time. In the reionization model proposed in MHR, the low-density gas is ionized earlier and reionization advances outside-in. The stage of reionization depends at every redshift on the gas overdensity $`\mathrm{\Delta }_{\mathrm{He}}`$ up to which the helium is mostly ionized to He iii. Using results of numerical simulations for the density distribution and the photon mean free path in the IGM, the model in MHR predicts the He ii Ly$`\alpha `$ flux decrement and the fraction of gas at overdensities greater than $`\mathrm{\Delta }_{\mathrm{He}}`$, assumed to be still in the form of He ii. The observed flux decrement of $`0.7`$ at $`z=2.6`$ (Davidsen et al. 1996) requires $`\mathrm{\Delta }_{\mathrm{He}}\simeq 70`$, and a fraction of He ii $`F\simeq 0.2`$ (see Figs. 2 and 9 in MHR). At the slightly higher redshift $`z\simeq 3.1`$, a flux decrement as high as $`0.99`$ (see Heap et al. 
1999) requires $`\mathrm{\Delta }_{\mathrm{He}}\simeq 12`$, and the fraction of gas at greater overdensities increases only to $`F\simeq 0.3`$, showing how a small increase in the He ii ionization can result in a dramatic decline of the transmitted flux, owing to the fact that the most underdense voids rapidly become optically thick to the He ii Ly$`\alpha `$ photons. As an example of a reasonable model matching the above conditions, where the He ii reionization takes place over an extended period of time, we consider the simple case $`F=[(1+z)/7]^{5/2}`$, where the first He ii sources would turn on at $`z=6`$. For this case, equation (6) yields (neglecting the ratios of the function $`\mathrm{\Sigma }_{\mathrm{He}}`$) $`\tau (\nu )/\tau _G(\nu )\approx 0.43z_s/N_{\mathrm{H}\mathrm{i},20}/R_{\mathrm{He}}(\nu )`$ (for $`z_s<6`$). It must be borne in mind here that whatever contribution arises from low redshifts to the integral in equation (6) should originate mostly in dense systems, where the helium is self-shielded and remains in the form of He ii (these are the observed Lyman limit and damped absorption systems). When the mean free path between these absorption systems is not much smaller than the Hubble length (i.e., when the absorbers do not have a large covering factor), the He ii column density on a particular line of sight will no longer be equal to the mean value, but will have large fluctuations: it will most often be smaller than the mean value, but it will exceed this mean value when a rare, strong absorption system is intersected. The presence of these rare absorption systems, which should dominate the total He ii content of the universe at low redshift only, can in principle be determined independently for every line of sight from observations of the H i absorption spectrum. 
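The coefficient in the estimate $`\tau /\tau _G\approx 0.43z_s/N_{\mathrm{H}\mathrm{i},20}/R_{\mathrm{He}}`$ follows at once, since for $`F=[(1+z)/7]^{5/2}`$ the integrand in equation (6) is the constant $`7^{-5/2}`$ (a quick check, again dropping the $`\mathrm{\Sigma }_{\mathrm{He}}`$ ratios):

```python
# For F(z) = [(1+z)/7]^{5/2}, F(z)/(1+z)^{5/2} = 7^{-5/2}, so the integral
# in eq. (6) is simply z_s / 7^{5/2} and the prefactor of z_s is:
def coefficient():
    """Prefactor of z_s/N_HI,20/R_He in tau/tau_G for this F(z)."""
    return 56.0 / 7.0 ** 2.5

c = coefficient()  # ~0.43, as quoted in the text
```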
In practice, though, the Lyman limit absorption from higher-redshift systems, and the He ii Ly$`\alpha `$ absorption itself, will prevent the determination of the H i column density of any low-redshift absorber in a high-redshift source. The absence of strong metal lines can still be a good indication that a strong H i absorber is not present. In order for a low-redshift absorber to cause soft X-ray absorption comparable to that of the high-redshift IGM, it must have a column density $`N_{\mathrm{H}\mathrm{i}}\sim 10^{19.5}(1+z)^3\mathrm{cm}^{-2}`$. Only $`\sim `$ 20% of the lines of sight have an absorber with this strength (e.g., Storrie-Lombardi, Irwin, & McMahon 1996). Nevertheless, the fluctuation in the low-redshift absorption will be an additional source of uncertainty (especially if the low-redshift H i Ly$`\alpha `$ absorption cannot be measured), implying that the excess soft X-ray absorption will need to be detected in several sources before one can be sure that the effect of the helium in the low-density IGM at high redshift has been detected. The fluctuations in the He ii column density due to high-redshift absorbers of high column density are less important, because these absorbers do not contain a large fraction of the baryons. For example, at $`z=3`$ we need an absorber with $`N_{\mathrm{H}\mathrm{i}}\sim 10^{21}\mathrm{cm}^{-2}`$ to produce an absorption similar to the IGM. These absorbers are rare, and they can also be readily identified in the H i Ly$`\alpha `$ spectrum when their redshift is not much lower than the source redshift. ## 5 Conclusions Soft X-ray absorption can be a powerful new tool to measure the fraction of helium in the form of He ii as a function of redshift. This direct determination of the He ii fraction provides a straightforward test of any model of reionization based on the observed emitting sources (quasars and galaxies). 
Although the observations of the He ii Ly$`\alpha `$ absorption spectra provide a much greater wealth of information, their interpretation is more complicated due to the complexities introduced by the highly inhomogeneous IGM and the fluctuating intensity of the ionizing background. The main challenge in detecting this effect will be to find enough high-redshift, X-ray bright sources, and to accurately measure the 21 cm H i column density to subtract the Galactic contribution to the soft X-ray absorption. A systematic uncertainty that will be faced is the possible existence of some Galactic He ii along the line of sight, which can also produce excess soft X-ray absorption not accounted for by the Galactic H i column density. In addition, extragalactic damped Ly$`\alpha `$ absorbers at low redshifts can also make a significant contribution, which may be difficult to correct for in high-redshift sources. These systematic uncertainties can be put under control once the effect is measured in many sources over a wide range of redshift. I am grateful to Andy Fabian for encouraging discussions on the possibilities for observing the effect proposed here.
# Baryon mass extrapolation ## 1 FORMALISM In recent years there has been tremendous progress in the computation of baryon masses within lattice QCD. Still, it remains necessary to extrapolate the calculated results to the physical pion mass ($`\mu =140`$ MeV) in order to make a comparison with experimental data. In doing so one necessarily encounters some non-linearity in the quark mass (or $`m_\pi ^2`$), including the non-analytic behavior associated with dynamical chiral symmetry breaking. We recently investigated this problem for the case of the nucleon magnetic moments. It is vital to develop a sound understanding of how to extrapolate to the physical pion mass. ### 1.1 Self-Energy Contributions Chiral symmetry is dynamically broken in QCD and the pion alone is a near-Goldstone boson. It is strongly coupled to baryons and plays a significant role in the $`N`$ and $`\mathrm{\Delta }`$ self-energies. The one-loop pion-induced self-energies of the $`N`$ and $`\mathrm{\Delta }`$ are given by the processes shown in Fig. 1. 
In the standard heavy baryon limit, the analytic expressions for the pion cloud corrections to the masses of the $`N`$ and $`\mathrm{\Delta }`$ are of the form $$\delta M_N=\sigma _{NN}+\sigma _{N\mathrm{\Delta }};\delta M_\mathrm{\Delta }=\sigma _{\mathrm{\Delta }\mathrm{\Delta }}+\sigma _{\mathrm{\Delta }N},$$ (1) where $$\sigma _{NN}=\sigma _{\mathrm{\Delta }\mathrm{\Delta }}=-\frac{3g_A^2}{16\pi ^2f_\pi ^2}\int _0^{\infty }dk\frac{k^4u_{NN}^2(k)}{w^2(k)},$$ (2) $$\sigma _{N\mathrm{\Delta }}=-\frac{6g_A^2}{25\pi ^2f_\pi ^2}\int _0^{\infty }dk\frac{k^4u_{N\mathrm{\Delta }}^2(k)}{w(k)(\mathrm{\Delta }M+w(k))},$$ (3) $$\sigma _{\mathrm{\Delta }N}=\frac{3g_A^2}{50\pi ^2f_\pi ^2}\int _0^{\infty }dk\frac{k^4u_{N\mathrm{\Delta }}^2(k)}{w(k)(\mathrm{\Delta }M-w(k))}.$$ (4) Here $`\mathrm{\Delta }M=M_\mathrm{\Delta }-M_N`$, $`g_A=1.26`$ is the axial charge of the nucleon, $`w(k)=\sqrt{k^2+m_\pi ^2}`$ is the pion energy and $`u_{NN}(k)`$, $`u_{N\mathrm{\Delta }}(k)`$, $`\mathrm{}`$ are the $`NN\pi `$, $`N\mathrm{\Delta }\pi `$, $`\mathrm{}`$ form factors associated with the emission of a pion of three-momentum $`k`$. The form factors reflect the finite size of the baryonic source of the pion field and suppress the emission probability at high virtual pion momentum. As a result, the self-energy integrals are not divergent. The leading non-analytic (LNA) contribution of these self-energy diagrams is associated with the infrared behavior of the corresponding integrals, i.e. the behavior as $`k\rightarrow 0`$. As a consequence, the leading non-analytic behavior does not depend on the details of the form factors. Indeed, the well known results of chiral perturbation theory are reproduced even when the form factors are approximated by $`u(k)=\theta (\mathrm{\Lambda }-k)`$. Of course, our concern with respect to lattice QCD is not so much the behavior as $`m_\pi \rightarrow 0`$, but the extrapolation from high pion masses down to the physical pion mass. 
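With the sharp cut-off $`u(k)=\theta (\mathrm{\Lambda }-k)`$, the $`\sigma _{NN}`$ integral has a closed form, $`\mathrm{\Lambda }^3/3-m_\pi ^2\mathrm{\Lambda }+m_\pi ^3\mathrm{arctan}(\mathrm{\Lambda }/m_\pi )`$. A small numerical sketch follows; the parameter values $`g_A=1.26`$, $`f_\pi =93`$ MeV and $`\mathrm{\Lambda }=0.5`$ GeV are our illustrative choices, not fitted values, and we include the overall minus sign appropriate to an attractive pion-cloud shift:

```python
import math

G_A, F_PI, M_PI = 1.26, 0.093, 0.140  # GeV units; f_pi, m_pi illustrative

def sigma_nn(lam, m_pi=M_PI):
    """Sharp cut-off sigma_NN = -(3 g_A^2/16 pi^2 f_pi^2) * I(Lambda, m_pi).

    I = int_0^Lambda k^4/(k^2 + m^2) dk = Lam^3/3 - m^2 Lam + m^3 atan(Lam/m).
    """
    pref = 3.0 * G_A ** 2 / (16.0 * math.pi ** 2 * F_PI ** 2)
    integral = lam ** 3 / 3.0 - m_pi ** 2 * lam + m_pi ** 3 * math.atan(lam / m_pi)
    return -pref * integral

s = sigma_nn(0.5)  # a pion-cloud shift of order -0.1 GeV for Lambda = 0.5 GeV
```

The magnitude shrinks as $`m_\pi `$ grows, the behavior responsible for the self-energy tending to zero in the heavy quark limit.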
In this context the branch point at $`m_\pi ^2=\mathrm{\Delta }M^2`$, associated with the transitions $`N\rightarrow \mathrm{\Delta }`$ and $`\mathrm{\Delta }\rightarrow N`$, is at least as important as the LNA behavior near $`m_\pi =0`$. Heavy quark effective theory suggests that as $`m_\pi \rightarrow \infty `$ the quarks become static and hadron masses become proportional to the quark mass. In this spirit, corrections are expected to be of order $`1/m_q`$ where $`m_q`$ is the heavy quark mass. The presence of a cut-off associated with the form factor acts to suppress the pion-induced self-energy for increasing pion masses, as evidenced by the $`m_\pi ^2`$ in the denominators of Eqs. (2), (3) and (4). While some $`m_\pi ^2`$ dependence in the form factor is expected, this is a second-order effect and does not alter the qualitative feature of the self-energy corrections tending to zero as $`1/m_\pi ^2`$ in the heavy quark limit. Rather than simplifying our expressions to just the LNA terms, we retain the complete expressions, as they contain important physics that would be lost by such a simplification. We note that keeping the entire form is not in contradiction with $`\chi `$PT. However, as one proceeds to larger quark masses, differences between the full forms and the expressions in the chiral limit will become apparent, highlighting the importance of the branch point and of the form factor reflecting the finite size of baryons. As a result of these considerations, we propose to use the analytic expressions for the self-energy integrals corresponding to a sharp cut-off in order to incorporate the correct LNA structure in a simple three-parameter description of the $`m_\pi `$ dependence of the $`N`$ and $`\mathrm{\Delta }`$ masses. In the heavy quark limit hadron masses become proportional to the quark mass. Hence we can simulate a linear dependence of the baryon masses on the quark mass, $`m_q`$, in this region, by adding a term involving $`m_\pi ^2`$. 
The functional form for the mass of the nucleon suggested by this analysis is then: $$M_N=\alpha _N+\beta _Nm_\pi ^2+\sigma _{NN}(\mathrm{\Lambda }_N)+\sigma _{N\mathrm{\Delta }}(\mathrm{\Lambda }_N),$$ (5) while that for the $`\mathrm{\Delta }`$ is: $$M_\mathrm{\Delta }=\alpha _\mathrm{\Delta }+\beta _\mathrm{\Delta }m_\pi ^2+\sigma _{\mathrm{\Delta }\mathrm{\Delta }}(\mathrm{\Lambda }_\mathrm{\Delta })+\sigma _{\mathrm{\Delta }N}(\mathrm{\Lambda }_\mathrm{\Delta }).$$ (6) ### 1.2 Model Dependence The use of a sharp cut-off, $`u(k)=\theta (\mathrm{\Lambda }-k)`$, as a form factor may seem somewhat unfortunate given that phenomenology suggests a dipole form factor better approximates the axial-vector form factor. However, the sensitivity to such model-dependent issues is shown to be negligible in Fig. 2. There, the self-energy contribution $`\sigma _{NN}(=\sigma _{\mathrm{\Delta }\mathrm{\Delta }})`$ for a 1 GeV dipole form factor (solid curve) is compared with a sharp cut-off form factor combined with the standard $`\alpha +\beta m_\pi ^2`$ terms of (5) or (6). Optimizing $`\mathrm{\Lambda }`$, $`\alpha `$ and $`\beta `$ provides the fine-dash curve of Fig. 2. Differences are at the few-MeV level, indicating negligible sensitivity to the actual analytic structure of the form factor. Here we have focused on the pion self-energy contribution to the $`N`$ and $`\mathrm{\Delta }`$ masses. Only the pion displays a rapid mass dependence as the chiral limit is approached. Other mesons participating in similar diagrams do not give rise to such rapidly changing behavior and can be accommodated in the $`\alpha +\beta m_\pi ^2`$ terms of (5) or (6). Moreover, the form factor suppresses the contributions from more massive intermediate states, including multiple pion dressings. Other multi-loop pion contributions renormalize the vertex, and hence we use the renormalized coupling $`g_A`$ as a measure of the pion-nucleon coupling. 
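Since (5) and (6) are linear in $`(\alpha ,\beta )`$ once $`\mathrm{\Lambda }`$ is fixed, fitting reduces to solving 2×2 normal equations for each trial $`\mathrm{\Lambda }`$. A minimal sketch on noise-free synthetic data follows; the data points, parameter values, and the restriction to the $`\sigma _{NN}`$ term alone are all our illustrative simplifications:

```python
import math

G_A, F_PI = 1.26, 0.093  # GeV units, illustrative values

def sigma_nn(m_pi, lam):
    """Sharp cut-off sigma_NN(Lambda): closed form of the eq. (2) integral."""
    pref = 3.0 * G_A ** 2 / (16.0 * math.pi ** 2 * F_PI ** 2)
    return -pref * (lam ** 3 / 3.0 - m_pi ** 2 * lam
                    + m_pi ** 3 * math.atan(lam / m_pi))

def fit_alpha_beta(m2_vals, masses, lam):
    """Least-squares (alpha, beta) for M = alpha + beta m_pi^2 + sigma_NN."""
    y = [m - sigma_nn(math.sqrt(m2), lam) for m2, m in zip(m2_vals, masses)]
    n, sx = len(m2_vals), sum(m2_vals)
    sxx, sy = sum(x * x for x in m2_vals), sum(y)
    sxy = sum(x * yi for x, yi in zip(m2_vals, y))
    det = n * sxx - sx * sx
    return (sxx * sy - sx * sxy) / det, (n * sxy - sx * sy) / det

# Synthetic "lattice" points generated from known parameters, then refitted.
TRUE_ALPHA, TRUE_BETA, TRUE_LAM = 1.0, 1.0, 0.5
m2_grid = [0.1, 0.2, 0.4, 0.6, 0.8]
data = [TRUE_ALPHA + TRUE_BETA * m2 + sigma_nn(math.sqrt(m2), TRUE_LAM)
        for m2 in m2_grid]
alpha, beta = fit_alpha_beta(m2_grid, data, TRUE_LAM)  # recovers (1.0, 1.0)
```

In a real analysis one would repeat this linear solve over a grid of $`\mathrm{\Lambda }`$ values and keep the minimum of the resulting chi-square.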
## 2 ANALYSIS We consider two independent dynamical-fermion lattice simulations of the $`N`$ and $`\mathrm{\Delta }`$ masses. We select results from CP-PACS’s $`12^3\times 32`$ and $`16^3\times 32`$ simulations at $`\beta =1.9`$, and UKQCD’s $`12^3\times 24`$ simulations at $`\beta =5.2`$. Figure 3 displays fits of (5) to the lattice data. In order to perform fits in which $`\mathrm{\Lambda }`$ is unconstrained, it is essential to have lattice simulations at light quark masses approaching $`m_\pi ^2\sim 0.1`$ GeV<sup>2</sup>. It is common to see the use of the following $`\chi `$PT-motivated expression for the mass dependence of hadron masses, $$M_N=\alpha +\beta m_\pi ^2+\gamma m_\pi ^3.$$ (7) The result of such a fit for the $`N`$ is shown as the dashed curve in Fig. 3. The coefficient of the $`m_\pi ^3`$ term in a three-parameter fit is $`-0.761`$. This disagrees with the coefficient of $`-5.60`$ known from $`\chi `$PT (which is correctly incorporated in (5) and illustrated as the solid and dash-dot curves of Fig. 3) by almost an order of magnitude. This clearly indicates the failings of (7). The dotted curve of Fig. 3 indicates that the leading non-analytic term of the chiral expansion dominates from the chiral limit up to the branch point at $`m_\pi =\mathrm{\Delta }M\simeq 300`$ MeV, beyond which $`\chi `$PT breaks down. The curvature around $`m_\pi =\mathrm{\Delta }M`$, neglected in previous extrapolations of the lattice data, leads to shifts in the extrapolated masses of the same order as the departure of lattice estimates from experimental measurements.
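The $`\chi `$PT value quoted for the $`m_\pi ^3`$ coefficient is simply the leading non-analytic term, $`\delta M_N=-(3g_A^2/32\pi f_\pi ^2)m_\pi ^3`$; a back-of-the-envelope check (the precise number depends mildly on the $`g_A`$ and $`f_\pi `$ values adopted, which are our choices here):

```python
import math

def lna_coefficient(g_a=1.26, f_pi=0.093):
    """Magnitude of the chiral m_pi^3 coefficient, 3 g_A^2/(32 pi f_pi^2)."""
    return 3.0 * g_a ** 2 / (32.0 * math.pi * f_pi ** 2)

c_chipt = lna_coefficient()  # ~5.5 GeV^-2, vs. 0.761 from the naive cubic fit
ratio = c_chipt / 0.761      # nearly an order of magnitude apart
```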
# TeV Scale Quantum Gravity and Mirror Supernovae as Sources of Gamma Ray Bursts ## I Introduction The origin of the gamma ray bursts (GRBs) observed for over three decades still remains unclear. The GRBs are short, intense photon bursts with photon energies in the keV and MeV range, although bursts with energy spectra extending above a GeV have been observed. The isotropy and $`\frac{dN}{dV}`$ (intensity) distributions and the high-redshift galaxies associated with some GRBs indicate that the sources of GRBs are located at cosmological distances. The specific nature of the sources remains however unclear. If unbeamed, the sources must emit total $`\gamma `$-ray energies of $`10^{51}`$ to $`10^{53}`$ ergs. (Beaming reduces this by $`\frac{\mathrm{\Delta }\mathrm{\Omega }}{4\pi }`$ and increases the required burst rate by $`\frac{4\pi }{\mathrm{\Delta }\mathrm{\Omega }}`$ over the few per day seen in the universe.) This is very much reminiscent of typical supernova energies. However, most supernovae (e.g. type II supernovae) cannot be these sources, since $`\gamma `$-rays with typical radiation lengths of 100 g/cm<sup>2</sup> cannot penetrate the large amount ($`10M_{\odot }`$) of overlying ejecta. Many of the models for unbeamed (beamed) GRBs use massive compact sources to produce neutrinos which annihilate to form fireballs of $`e^+e^{-}`$’s and $`\gamma `$’s. The fireballs expand and cool adiabatically, until the temperature (or the transverse energy) is low enough that the $`e^+e^{-}`$’s annihilate into $`\gamma `$’s. To avoid the “baryon load” problem and the absorption of $`\gamma `$’s, fairly “bare collapses” are required. Accretion-induced collapses and binary neutron star mergers were considered, but it is not clear whether these are sufficiently “baryon clean”. One “baryon clean” source candidate based on novel particle physics is a neutron star to strange quark star transition. Other recent suggestions invoked the existence of sterile neutrinos. 
If the emitted neutrinos undergo maximal oscillation into the sterile neutrinos, the latter can penetrate the baryon barrier, and subsequently normal neutrinos will reappear via the $`\nu _s\rightarrow \nu `$ oscillation. In this scenario, the last “back” conversion occurs at relatively large distances (both $`\nu \rightarrow \nu _s`$ and $`\nu _s\rightarrow \nu `$ are quenched by dense matter if $`\mathrm{\Delta }m^2\gtrsim 10^{-4}`$ eV<sup>2</sup>), and the $`\nu \overline{\nu }\rightarrow e^+e^{-}`$ annihilation, which goes like $`R^{-8}`$, is inefficient (a disklike and beamed geometry may partially alleviate this problem). Similar difficulties are encountered by models utilizing exactly “mirror” symmetric theories, where the sterile (mirror) neutrinos emitted in a mirror star collapse oscillate into ordinary neutrinos. In this note, we propose another GRB scenario in the context of the asymmetric mirror models. It utilizes the conversions $`\nu ^{}+\overline{\nu }^{},\gamma ^{}+\gamma ^{}\rightarrow e^+e^{-},\gamma \gamma `$ etc. inside the mirror star, where primed symbols denote mirror particles. Since the familiar electrons and photons do not interact with mirror matter, the expanding fireball is not impeded and we have an ideal bare collapse. The resulting photons, expected to have initial energies of $`\sim `$ GeV, can be processed by this expansion down to the MeV part of the GRB spectra observed. Furthermore, if the source is embedded in the disk of a galaxy, further degrading can take place due to the “minibaryon load” of the disk, resulting in keV gamma rays as well as possibly structure in the gamma ray spectrum. The key requirement is that the conversion process be fast enough so that a finite fraction of the collapse energy is indeed converted into ordinary matter. 
As we will see, this naturally obtains if we have a low scale (of order a TeV) for quantum gravity. (In the p-Brane construction, ordinary and mirror matter could reside on two sets of branes with a relatively large (compared to $`\mathrm{\Lambda }^{-1}\sim (\mathrm{TeV})^{-1}`$) separation $`r_0`$. The gauge group is of the form $`G=G_{matter}\times G_{mirror}`$ where each factor is $`SU(3)\times SU(2)_L\times U(1)_Y`$. The detailed model implementing this scenario will have to be such that it leads to an enhanced amplitude for the four-Fermi operators that produce familiar particles in collisions of mirror particles, while suppressing the coefficients of those that lead to neutrino mixing. The latter in general involve the exchange of fermions, and the desired suppression is therefore not implausible. We thank Markus Luty for discussions on this point.) In section 2 we give a brief review of the assumptions of the mirror matter models within which we work. In section 3 we outline our scenario, computing the initial $`\gamma `$ energies, with a brief discussion of a possible fireball mechanism for the degradation of the photon energies. We also discuss the effect of a baryon cloud (“mini-baryon load”), which can lead to further degradation of the gamma energies. We work within the framework of TeV scale gravity, using the results of Silagadze for the production of familiar matter from mirror matter. We conclude in section 4 with a brief discussion. ## II Asymmetric mirror model and large scale structure in the mirror sector Let us begin with a brief overview of the asymmetric mirror matter model and the parameters describing fundamental forces in the mirror sector. In asymmetric mirror matter models, one considers a duplicate version of the standard model with an exact mirror symmetry which is broken in the process of gauge symmetry breaking. Denoting all particles and parameters of the mirror sector by a prime over the corresponding familiar sector symbol (e.g. 
mirror quarks are $`u^{\prime },d^{\prime },s^{\prime },`$ etc, the mirror Higgs field as $`H^{\prime }`$, the mirror QCD scale as $`\mathrm{\Lambda }^{\prime }`$) we assume that $`<H^{\prime }>/<H>=\mathrm{\Lambda }^{\prime }/\mathrm{\Lambda }\equiv \zeta `$. This is admittedly a strong assumption for which there is no particle physics proof, but it does provide a certain degree of economy. Of course, if one envisioned the weak interaction symmetry to be broken by a new strong interaction such as technicolor in both sectors, then it is possible to argue that such a relation emerges under certain assumptions. There also exists a cosmological motivation for assuming $`<H^{\prime }>/<H>=\mathrm{\Lambda }^{\prime }/\mathrm{\Lambda }\simeq 15`$. One can show that in this case the mirror baryons can play the role of the cold dark matter of the universe. The argument goes as follows: one way to reconcile the mirror universe picture with the constraints of big bang nucleosynthesis (BBN) is to assume asymmetric inflation with the reheating temperature in the mirror sector being slightly lower than that in the normal one. Taking the allowed extra number of neutrinos at the BBN to be 1 implies $`(T_R^{\prime }/T_R)^3\simeq 0.25`$. One can then calculate the contribution of the mirror baryons to $`\mathrm{\Omega }`$ to be $`\mathrm{\Omega }_B^{\prime }\simeq (T_R^{\prime }/T_R)^3\zeta \mathrm{\Omega }_B`$ (1) Since one expects, under the above assumption, the masses of the proton and neutron to scale as $`\mathrm{\Lambda }`$ in both sectors, if we assume that $`\mathrm{\Omega }_B\simeq 0.07`$, then this implies $`\mathrm{\Omega }_B^{\prime }\simeq 0.26`$, leading to a total matter content $`\mathrm{\Omega }_m\simeq 0.33`$. Thus familiar and mirror baryons together could explain the total matter content of the universe without the need for any other kind of new particle. An important implication of this class of mirror models is that the interaction strengths of weak as well as electromagnetic processes (such as Compton scattering cross sections etc) are much smaller than in the familiar sector. 
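The arithmetic behind Eq. (1) is simple enough to check directly. A minimal sketch, using only the numbers quoted in the text:

```python
# Check of Eq. (1): Omega_B' ~ (T_R'/T_R)^3 * zeta * Omega_B,
# with the values quoted in the text.
ratio_cubed = 0.25   # (T_R'/T_R)^3 allowed by BBN (about one extra neutrino)
zeta = 15.0          # <H'>/<H> = Lambda'/Lambda
omega_b = 0.07       # familiar baryonic contribution to Omega

omega_b_mirror = ratio_cubed * zeta * omega_b   # ~0.26
omega_matter = omega_b + omega_b_mirror         # ~0.33
print(omega_b_mirror, omega_matter)
```

The product reproduces the quoted $`\mathrm{\Omega }_B^{\prime }\simeq 0.26`$ and $`\mathrm{\Omega }_m\simeq 0.33`$.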
The weakness of the mirror-sector interactions has implications for the formation of structure in the mirror sector. Structure formation in a similar asymmetric mirror model was studied in Ref. where it was shown that, despite the weakness of the mirror particle processes, there are cooling mechanisms that allow mirror condensates to form as the universe evolves. The basic idea is that the mirror matter provides gravitational wells into which the familiar matter is attracted, providing galaxies and their clusters. However, due to the weakness of the physical processes, the mirror matter is not as strongly dissipative as normal matter. So for instance in our galaxy, the familiar matter is in the form of a disk due to dissipative processes, whereas the mirror stars which form the halo are not in disk form. In contrast, in the symmetric mirror model the mirror matter would also be in a disk form and therefore could not help in explaining the observed spherical galactic halos. Furthermore, since mirror matter condensed first in view of the lower temperature, it is reasonable to expect that mirror star formation largely took place fairly early (say $`z\gtrsim 1`$) and the subsequent rate is much lower. In what follows, to account for the observed GRBs we would require a mirror star formation rate of about one per million years per galaxy (to be contrasted with about 10/year/galaxy for familiar stars). In the asymmetric mirror model, it has been shown that there are simple scaling laws (first reference in ) for the parameters of the mirror stars: (i) the mass of the mirror stars scales as $`\zeta ^{-2}`$; (ii) the radius of the mirror stars also scales like $`\zeta ^{-2}`$; whereas (iii) the core temperature scales slightly faster than $`\zeta `$. Here $`\zeta `$ denotes the ratio of the mass scales in the mirror and familiar sectors and is expected to be of order 15-20 from considerations of neutrino physics. 
Due to their higher temperature, the mirror stars will “burn” much faster and will reach the final stage of stellar evolution sooner. Because of the $`\zeta ^4`$ decrease of the weak cross sections and the increase in particle masses, we do not expect mirror star collapse to result in an explosion. Rather there should be neutrino emission and black hole formation. Thus we would expect that there will be an abundant supply of mirror “supernovae.” We will show in the next section that these could be the sources of the GRBs. ## III Low quantum gravity scale and production of familiar photons in mirror supernovae In a mirror supernova, one would expect most of the gravitational binding energy to be released via the emission of mirror neutrinos, as in the familiar case. However, in the asymmetric mirror matter model, we expect the temperature of the collapsing star to be higher. We have $`NT=GM^2/R`$ where $`N`$ is the number of mirror baryons in the star (about $`M_{\odot }/\zeta m_p`$). At $`\zeta =10`$ the maximum mirror star mass is about $`M_{\odot }`$, so that $`T`$ is about a GeV, where we have taken the radius of the collapsed mirror star to be about a kilometer. Let us now estimate the production cross section for the familiar photons in the collision of the mirror photons in the core. The most favorable case occurs if we assume that the quantum gravity scale is in the TeV range. In this case, assuming two extra dimensions and following reference , we estimate the cross section $`\sigma _{\gamma ^{\prime }\gamma ^{\prime }\to \gamma \gamma }`$ to be, $`\sigma _{\gamma ^{\prime }\gamma ^{\prime }\to \gamma \gamma }\simeq {\displaystyle \frac{1}{10}}{\displaystyle \frac{s^3}{\mathrm{\Lambda }^8}}`$ (2) where $`s`$ is the square of the total center of mass energy. For $`s=1`$ GeV<sup>2</sup> and $`\mathrm{\Lambda }\simeq 1`$ TeV, we get $`\sigma _{\gamma ^{\prime }\gamma ^{\prime }\to \gamma \gamma }\simeq 10^{-52}`$ cm<sup>2</sup>. 
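The quoted number can be checked with the standard unit conversion $`(\mathrm{}c)^2\simeq 0.3894`$ GeV<sup>2</sup> mb; a minimal sketch using only the inputs given in the text:

```python
# Order-of-magnitude check of Eq. (2): sigma ~ (1/10) s^3 / Lambda^8,
# converted from natural units (GeV^-2) to cm^2.
HBARC2_GEV2_MB = 0.3894   # (hbar*c)^2 in GeV^2 * mb
MB_TO_CM2 = 1e-27         # 1 mb = 1e-27 cm^2

def sigma_gg_cm2(s_gev2, lambda_gev):
    sigma_nat = 0.1 * s_gev2**3 / lambda_gev**8       # cross section in GeV^-2
    return sigma_nat * HBARC2_GEV2_MB * MB_TO_CM2     # cross section in cm^2

sigma = sigma_gg_cm2(1.0, 1000.0)   # s = 1 GeV^2, Lambda = 1 TeV
print(f"{sigma:.1e} cm^2")          # a few 1e-53, i.e. ~1e-52 as quoted
```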
We estimate the rate of energy loss per unit volume into familiar (not mirror) photons to be roughly $`{\displaystyle \frac{dQ}{dtdV}}\simeq cn_{\gamma ^{\prime }}^2\,2E_{\gamma ^{\prime }}\sigma _{\gamma ^{\prime }\gamma ^{\prime }\to \gamma \gamma }`$ (3) Multiplying by the volume of the one-kilometer black hole gives about $`10^{52}`$ erg/s. This energy is of the right order of magnitude for the total energy release in the case of unbeamed or mildly beamed GRBs. However, the initial energy of individual photons obtained via $`\nu ^{\prime }\to \gamma `$ conversion is essentially that of the mirror neutrinos, i.e. $`E_\gamma (t=0)\simeq E_{\nu ^{\prime }}\simeq 3T_{mirror}`$. The spectrum of the latter - just like that of ordinary neutrinos obtained in the core cooling of ordinary type II supernovae - is expected to be roughly thermal with $`T_{mirror}\simeq `$ GeV, which is roughly 100 times higher than for familiar collapse. While in some GRBs photons with energies in the range of GeVs to TeVs have been observed, the bulk of the spectrum is in the MeV/keV region. Reprocessing of the initial photons, leading to energy degradation, is therefore important. Two distinct mechanisms contribute to the reprocessing: (i) fireball evolution and (ii) overlying putative familiar material. Let us discuss both mechanisms. Mechanism (i): At t=0 we have, because of the universality of gravitational interactions, an equal number of familiar $`e^+e^{-}`$ produced with the photons. The resulting dense $`e^+e^{-}\gamma `$ “fireball” constitutes a highly opaque plasma. There is an extensive literature dealing with the evolution of such fireballs. In the case where this evolution is free from the effects of overlying matter (i.e. the effects of (ii) are negligible), the discussion becomes almost model independent and many features can be deduced from overall energetics and thermodynamic considerations. Thus at t=0, when a fraction $`ϵ`$ of the mirror neutrinos convert to $`\gamma `$’s (and/or $`e^+e^{-}`$’s), the latter have a blackbody spectrum with temperature $`T_{\nu ^{\prime }}`$. 
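The luminosity following from Eq. (3) can be checked at the order-of-magnitude level. The mirror-photon density $`n_{\gamma ^{\prime }}`$ is not given explicitly in the text; the sketch below assumes a thermal (blackbody) photon gas at $`T\simeq 1`$ GeV, which is our own assumption:

```python
# Rough check of Eq. (3) integrated over a ~1 km sphere.
# ASSUMPTION (ours): n_gamma' is that of a blackbody at T ~ 1 GeV.
import math

HBARC_GEV_CM = 1.97327e-14   # hbar*c in GeV*cm
GEV_TO_ERG = 1.602e-3
C_CGS = 3.0e10               # speed of light, cm/s

T = 1.0                                                       # GeV
n_gamma = (2 * 1.20206 / math.pi**2) * (T / HBARC_GEV_CM)**3  # photons/cm^3
sigma = 4e-53                                                 # cm^2, Eq. (2)
E_gamma = T                                                   # typical photon energy, GeV

dq_dtdv = C_CGS * n_gamma**2 * 2 * E_gamma * GEV_TO_ERG * sigma  # erg/s/cm^3
volume = 4.0 / 3.0 * math.pi * 1e5**3                         # 1 km sphere, cm^3
luminosity = dq_dtdv * volume
print(f"{luminosity:.1e} erg/s")                              # ~1e52 erg/s
```

Under this assumption the estimate lands at the quoted $`10^{52}`$ erg/s.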
However, the overall normalization of this blackbody spectrum, i.e. the energy density $`U_\gamma =ϵU_{\nu ^{\prime }}=ϵaT_{\nu ^{\prime }}^4`$ (4) falls short by a factor $`ϵ`$ of the universal black body energy density at such a temperature. Fast processes of the form $`\gamma \gamma \leftrightarrow e^+e^{-}\to 3\gamma `$ (allowed in the thermal environment) will then immediately reequilibrate the system at $`T_\gamma \simeq ϵ^{\frac{1}{4}}T_{\nu ^{\prime }}`$, i.e. between $`T_{\nu ^{\prime }}/3`$ and $`T_{\nu ^{\prime }}/30`$ (5) (corresponding to GRB energies between $`10^{48}`$ and $`10^{52}`$ ergs and mirror supernova energies of $`10^{52}`$ to $`10^{53}`$ ergs). Subsequent evolution can further increase $`N_\gamma `$ and correspondingly decrease $`\overline{E}_\gamma `$ down towards the MeV range. Independent of this, mere expansion reduces the transverse photon energy according to $`E_\gamma ^{tr}\simeq (R/r)T_\gamma (t=0)`$, where R is the size of the source and $`r`$ is the current $`\gamma `$ location. (The last expression, which parallels that for adiabatic cooling, simply reflects the geometrical convergence of the trajectories of colliding $`\gamma `$’s, which become more and more parallel with distance $`r`$.) Since $`E_{tr}`$ controls the center of mass energy of the $`\gamma \gamma `$ collisions, the $`\gamma \gamma \to e^+e^{-}`$ processes become kinematically forbidden and the density of $`e^+e^{-}`$ pairs falls exponentially, i.e. $`n_{e^+e^{-}}\propto e^{-\frac{m_e}{T_{tr}(r)}}`$, eventually leaving freely propagating $`\gamma `$’s. Mechanism (ii): A “mini-baryon load” of familiar material encountered by the outgoing $`\gamma `$’s could further reduce the photon energy. Also, the presence of such matter in conjunction with mild beaming could induce the very short time structure often observed. In order to have effective degrading of the emitted photon energies, we will need an appropriate density of familiar matter, which can be estimated as follows. 
Let us assume a density profile of the form: $`\rho (R)={\displaystyle \frac{\rho _0R_0^2}{R^2+R_0^2}}`$ (6) Then we demand the constraint that $`\int \rho (R)𝑑R\simeq 100`$ gm/cm<sup>2</sup>, where 100 gm/cm<sup>2</sup> represents the radiation length of photons in matter. This implies $`\rho _0R_0\simeq 100`$ gm/cm<sup>2</sup>. The kinematical requirement of having a comoving baryonic plus fireball system requires $`\gamma _B\simeq {\displaystyle \frac{fW_{GRB}}{M_{Baryo}}}\simeq \gamma _{Fireball}\simeq {\displaystyle \frac{E_{e^+e^{-}}}{2m_e}}`$ (7) where $`f`$ is the fraction of energy imparted to baryons and $`\gamma _B`$ is the Lorentz factor. Using $`M_{baryo}\simeq \frac{4\pi }{3}(\rho _0R_0)R_0^2`$, we find $`R_0=10^{12}\mathrm{cm}\left[{\displaystyle \frac{(W/(10^{50}\mathrm{ergs}))}{(E/100\mathrm{MeV})(\rho _0R_0/100\mathrm{gm}\mathrm{cm}^{-2})}}\right]^{1/2}`$ (8) so that for the nominal values of the total GRB energy, the fireball-processed energy of individual $`e^+,e^{-},\gamma `$, and the column density, we find $`R_0=10^{12}`$ cm, so that $`\rho _0=10^{-10}`$ gr/cm<sup>3</sup> and $`M_{baryo}\simeq 10^{25}\mathrm{gr}\simeq 10^{-8}M_{\odot }`$. It is interesting to note that in the present scenario, GRBs originating from mirror supernovae in galactic halos, which most likely would not face the “minibaryon load”, may undergo only the first stage, i.e. energy degradation by the fireball mechanism, and hence will have a harder spectrum and a smoother time profile. (Clearly, discerning such a component in the GRB population would be quite interesting.) On the other hand, GRBs originating from supernovae in the disks of galaxies will suffer degradation due to both mechanisms and therefore show more structure in the spectrum as well as a softer spectrum. Beyond the immediate neighbourhood of the mirror star there would be further energy degradation from interaction with interstellar matter, ranging from molecular clouds to interstellar comets. There is not, however, sufficient material in one kiloparsec to overcome the small value of the Thomson cross section, i.e. 
$`n_e\sigma _T\mathrm{}\simeq 10^{-2}`$ as against a required value of one. ## IV Discussion Section 3 shows, we believe, that mirror matter supernovae, within the asymmetric mirror matter model, can provide a plausible explanation for gamma ray bursts. The scenario requires some coupling between the mirror and familiar sectors. In Section 3, we have used the couplings provided by TeV range quantum gravity following the estimate of reference (), but other coupling mechanisms (such as a small $`\gamma \gamma ^{\prime }`$ mixing) might be possible as well. Given TeV scale gravity, it is noteworthy that the same value of $`\zeta `$ required by other ”manifestations” of mirror matter gives both an appropriate upper limit to the energy of the familiar gammas produced and an appropriate cross section for their production. A major advantage of this GRB explanation is that it solves the baryon load problem in a natural way. In this model, we would expect production of GeV neutrinos at nearly the same rate as $`e^+e^{-}`$ and $`\gamma \gamma `$ etc. GRBs located in our galaxy should then be observable in detectors such as Super-Kamiokande. If this model is correct, given the short lifetime of the mirror stars, the GRB frequency of $`10^{-6}`$/year/galaxy must be a result of a low mirror star formation rate, which as mentioned above is not an unreasonable assumption. Finally, it is tempting to speculate that, if the primary GRB mechanism is to produce a fireball in the many-MeV temperature range, there should exist a GRB population with temperatures in that range. In view of the fact that most of the data on GRBs comes from the BATSE detector, which triggers mostly on $`\gamma `$’s below 300 keV, it appears that such a population is not necessarily excluded by current data. The possibility that mirror matter can explain GRBs adds to a growing list of arguments that asymmetric mirror matter should be taken seriously. 
These include: (1) the requirement in many string theories that mirror matter exist; (2) the fact that the same range for $`\zeta `$ that was required in Section 3 for GRBs gives a mirror neutrino at the proper mass difference from $`\nu _e`$ to be the sterile neutrino responsible for simultaneously solving all the neutrino puzzles; (3) the fact that the same range of $`\zeta `$ gives an appropriate amount of dark matter to give an overall $`\mathrm{\Omega }_M`$ in the range 0.2 to 0.3; and (4) the fact that the same range of $`\zeta `$ gives an explanation of the MACHO microlensing events as being caused by mirror black holes of about $`M_{\odot }/2`$ mass. Acknowledgments We appreciate a helpful communication from Tom Siegfried. The work of RNM is supported by the National Science Foundation under grant number PHY-9802551 and the work of V. L. T. is supported by the DOE under grant no. DE-FG03-95ER40908.
# Rescattering of Vector Meson Daughters in High Energy Heavy Ion Collisions ## A Introduction Chiral symmetry, the symmetry which acts separately on left- and right-handed fields in QCD, is spontaneously broken in nature, resulting in associated Goldstone bosons in the pion field as well as the observed hadron mass spectra. However, it is expected that the spontaneously broken part of chiral symmetry may be restored at high temperatures and densities . Such conditions might be reached in the interior of neutron stars or during a relativistic heavy ion collision. In fact, a primary goal of high energy nuclear collisions is the creation of such matter and the study of its properties. One expected signal of the restoration of chiral symmetry is a change in the vector meson properties . In the nuclear system, the self energy of hadrons is changed by the medium they inhabit. A change in the effective mass of daughter particles can change the effective lifetime, and consequently the observed width, of the parent particle that decays in medium. Such modifications could be accompanied by a measurable change in the branching ratio of mesons decaying in the medium. The light vector mesons ($`\rho `$, $`\omega `$, $`\varphi `$) offer an especially promising channel for the study of chiral symmetry effects due to their multi-channel decays and lifetimes comparable to the space-time extent of the system produced in heavy ion collisions. The $`\varphi `$ is of particular interest as the sum of the daughter masses of its di-kaon decay channel is very close to the mass of the $`\varphi `$. As a result, even a small change in daughter or parent masses could measurably alter the decay channel . Study of vector mesons through their leptonic decay channels, either $`e^+e^{-}`$ or $`\mu ^+\mu ^{-}`$, should provide the cleanest signal of the changing masses, as leptons interact with the nuclear medium predominantly electromagnetically. 
In contrast, decays to hadrons are affected by strong final state interactions. If a $`\varphi `$ decays in the center of the reaction volume in a heavy ion collision, the survival probability of a daughter kaon traversing a length $`L`$ of hadronic matter of density $`d`$ is $`\mathrm{exp}(-Ld\sigma )`$. Approximating the heavy ion collision as a pion bath with density $`d=0.5`$/fm<sup>3</sup> and cross section $`\sigma =10`$ to $`100`$ mb, the 1/e pathlength of a kaon would be between 0.2 fm and 2 fm, much smaller than the size of the collision region. However, while the hadronic daughters interact in the medium, leptons should escape unscathed. The rescattering of hadronic daughters in the nuclear medium could mimic or obscure effects of chiral symmetry restoration by causing the reconstructed invariant mass to fall outside the vector meson peak, effectively decreasing the measured yields. While studies of the viability of the di-kaon channel of the $`\varphi `$ for studying chiral symmetry restoration have been done , the effect of the rescattering of daughters on the experimentally measured branching ratio has not previously been considered. ## B Model Results To study the effect of hadronic scattering of daughters, we implemented the hadronic cascade code RQMD version 2.4 to describe the space-time distribution of $`\varphi `$’s and their daughters. In RQMD one can follow the history of all $`\varphi `$’s that decay throughout the collision along with their daughter kaons. Upon simulation of an event we determine the positions and momenta of all kaons originating from $`\varphi `$ decays. Fig. 1 shows the invariant mass distribution of all kaon pairs from $`\varphi `$ decays in simulated central Pb+Pb events at 158 GeV·A/c beam energy. The right hand figures show $`\varphi `$’s whose daughters escaped from the collision zone without rescattering (top figure) and those whose daughters did rescatter (bottom) yet still escaped the collision zone as kaons. 
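The quoted 1/e path lengths follow directly from $`\lambda =1/(d\sigma )`$; a small sketch using only the density and cross sections given above (with 1 mb = 0.1 fm<sup>2</sup>):

```python
# 1/e path length and survival probability of a kaon in a pion bath:
# lambda = 1/(d * sigma),  P(L) = exp(-L * d * sigma).
import math

MB_TO_FM2 = 0.1   # 1 mb = 0.1 fm^2

def path_length_fm(density_fm3, sigma_mb):
    return 1.0 / (density_fm3 * sigma_mb * MB_TO_FM2)

def survival_prob(L_fm, density_fm3, sigma_mb):
    return math.exp(-L_fm * density_fm3 * sigma_mb * MB_TO_FM2)

print(path_length_fm(0.5, 10.0), path_length_fm(0.5, 100.0))  # 2.0 fm, 0.2 fm
```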
The left hand figure of Fig. 1 gives an overlay for comparison. Since the $`\varphi `$ lifetime is comparable to, though larger than, the expected lifetime of a heavy ion collision, most $`\varphi `$’s do not decay in medium, resulting in a tight peak of the invariant mass ($`M_{inv}`$) with a width of about 4 MeV. However, $`17`$% of $`\varphi `$’s decaying to two kaons have at least one daughter kaon that rescatters, and another (non-orthogonal) $`17`$% of $`\varphi `$’s have at least one daughter that is absorbed. The sum of these two processes leads to $`26`$% of decaying $`\varphi `$’s with an absorbed or rescattered daughter. The $`M_{inv}`$ distribution of these kaons is much broader than the original distribution; broad enough to escape detection, in particular in experiments with low statistics or large combinatorial backgrounds. Tables I and II itemize the channels contributing to the first hadronic scattering of a daughter kaon from a phi meson. The dominant interactions for both $`K^+`$ and $`K^{-}`$ daughters proceed through the $`K^{*}`$ resonance with a pion or through annihilation of the strange quark by interaction with another kaon. Nearly one-quarter of all rescatterings occur through a variety of high mass resonances, and a sizeable fraction of the rescattered $`K^{-}`$ daughters proceed by a reaction with a nucleon into a $`\mathrm{\Sigma }`$ or other high mass strange baryon. The circles in Fig. 2 display the $`\varphi `$ survival probability, i.e. the probability that neither of the daughter kaons rescatters, as a function of $`m_t`$ for the rapidity region $`|y|<1`$. The probability decreases with decreasing $`m_t`$, approaching about 60% at the lowest $`m_t`$. As a result, the measured yield of $`\varphi `$’s through the kaon channel should be lower at low $`m_t`$ than that measured in the dilepton channels. The ratio of the yield of $`\varphi \to K^+K^{-}`$ to $`\varphi \to l^+l^{-}`$, corrected by the branching ratio, should then approximately follow the circle symbols in Fig. 
2 if daughter rescattering is the sole mechanism contributing to the difference. Figs. 3 and 4 show reconstructed $`\varphi `$ yields from RQMD as a function of $`m_t`$ and space-time, respectively, for $`|y|<1`$. Fig. 3 shows $`d^2N_\varphi /m_tdm_tdy`$ at mid rapidity for all $`\varphi `$ mesons which decay in RQMD, overlayed with the $`m_t`$ distributions for those $`\varphi `$’s whose daughter kaons do not rescatter and those $`\varphi `$’s whose daughters do rescatter. The depletion of reconstructed $`\varphi `$’s at low $`m_t`$ results in a higher effective temperature of the $`\varphi `$ meson at low $`m_t`$. The inverse slope for $`0<m_t-m_\varphi <1`$ GeV of the original $`\varphi `$’s is $`T=220\pm 3`$ MeV, while for $`\varphi `$’s observed through the kaon decay channel RQMD predicts $`T=242\pm 4`$ MeV. The extracted yields and temperatures of $`\varphi `$’s should therefore be measurably different due to daughter rescattering. The $`m_t`$ dependence of the rescattering effect upon $`\varphi `$’s is a reflection of the phase space freeze-out distribution of particles in heavy ion collisions. Fig. 4 shows the freeze-out distribution in time and space of the $`\varphi `$’s from RQMD. The left hand figure shows the freeze-out longitudinal proper time $`\tau =\sqrt{t^2-z^2}`$, where $`t`$ and $`z`$ are the freeze-out time and z-position, while the right hand figure displays the radial freeze-out distribution. The solid line corresponds to those $`\varphi `$’s whose daughters escaped unscathed from the collision, while the dashed line shows the distribution for those whose daughters did rescatter. It is clear from the figure, and intuitive, that those $`\varphi `$’s with rescattered daughters predominantly decayed at early times in the dense nuclear medium, during that part of the collision of particular interest for chiral symmetry measurements. This picture is echoed in the rapidity distributions in Fig. 
5, where in the top plot we present the rapidity of (1) all $`\varphi `$’s, (2) those $`\varphi `$’s whose daughter kaons escape the collision zone unperturbed and (3) those $`\varphi `$’s whose daughters do not escape. Here we see the approximately 25% of all $`\varphi `$’s that are lost in the collision. The effect of rescattering is similar to that seen in the $`m_t`$ distribution, as those $`\varphi `$’s closer to $`y=0`$ have a larger probability of being rescattered. In the bottom half of Fig. 5 we plot the probability of survival of both daughter kaons, which has a clear rapidity dependence. This dependence leads to a marginal widening of the measured $`\varphi KK`$ rapidity distribution relative to that of the $`\varphi ll`$ channel. The conclusions from the RQMD simulations are intuitive. Those $`\varphi `$’s that decay at early times are more apt to have rescattered daughter kaons due to the amount of hadronic matter through which they must travel. Further, rescattering and pressure build-up during the collision imply that particles that freeze out early in the collision will have a softer transverse mass distribution than those which freeze out later . We then expect that the rescattering effects at early times will be reflected in the transverse mass distribution, as confirmed in RQMD. In order to compare these observations with experimental measurements, we note that RQMD only includes the imaginary part of the cross section in its calculations. While Shuryak and Thorsson have shown that the real cross section for kaons in a dense nuclear medium is much smaller than the imaginary cross section, we estimate here the maximal effect on the observed cross section expected from the inclusion of the real part. The addition of the real cross section can, at most, rescatter or absorb all kaons from $`\varphi `$’s that decay within the freeze-out volume. Shown in the bottom two plots of Fig. 4 is the point of last rescattering for kaons from RQMD. 
We define the freeze-out volume for kaons to be that within which 95% of all kaons have had their last interaction, which corresponds to $`\tau \simeq 36.5`$ fm and $`r\simeq 20`$ fm. The maximal effect of adding the real cross section to the RQMD calculation would then be that all $`\varphi `$’s that decay within these ($`\tau `$,r) bounds are unreconstructable and lost from the invariant multiplicity. The square symbols in Fig. 2 show the result of making such a drastic assumption. In the lowest $`m_t`$ bin approximately 25% more $`\varphi `$’s are depleted than in the imaginary-only calculation. Note that the inclusion of the real part of the cross section will only change the quantitative result from RQMD slightly, while the qualitative shape remains the same. ## C Discussion Recently, two experiments at the SPS studying Pb+Pb collisions with a beam energy of 158 GeV/c have made preliminary reports of $`\varphi `$ measurements. Experiment NA50 measured $`\varphi \to \mu ^+\mu ^{-}`$ at mid rapidity over the transverse mass ($`m_t`$) range $`1.7<m_t<3.2`$ GeV/c<sup>2</sup>, while experiment NA49 reported a $`\varphi \to K^+K^{-}`$ distribution, also at midrapidity, but for $`1<m_t<2.2`$ GeV/c<sup>2</sup>. The reported $`m_t`$ inverse slopes are strikingly different; NA50 quotes $`T=218\pm 10`$ MeV while NA49 finds $`T=305\pm 15`$ MeV. Although, within the present accuracy, the data seem to be more consistent in the $`m_t`$ range where they overlap, the extrapolated yields are significantly different. This points either towards a drastic softening of the $`m_t`$ distribution with increasing $`m_t`$, or towards distinctly different spectra reconstructed from $`\varphi \to KK`$ and $`\varphi \to \mu \mu `$. The latter is in qualitative agreement with the effect of rescattering of the decay kaons, which depletes the low $`m_t`$ region. Quantitatively, RQMD predicts a 17 MeV difference in the slope, much smaller than the observed difference. The curve in Fig. 
2 shows the ratio of the two extrapolated spectra, $`\varphi KK/\varphi \mu \mu `$, compared to the calculations from RQMD for the expected and maximal effect of rescattered daughter kaons. The curve is well below what could be described by even the maximal daughter rescattering, and we conclude that this effect cannot by itself describe the experimental data. It is interesting to note, however, that RQMD does reproduce the observed $`\varphi KK`$ rapidity distribution. The line in Fig. 5 corresponds to the experimentalists’ Gaussian fit to their data, with no renormalization on our part. This line corresponds quite nicely to the RQMD curve for the measured $`\varphi KK`$ for those phi’s whose daughters did not rescatter. The Gaussian width and height of the distribution from RQMD ($`\sigma =.96\pm .1`$, $`A=2.45\pm .46`$) are approximately consistent with the fit to the experimental data ($`\sigma =1.22\pm .17`$, $`A=2.43\pm .15`$), though the RQMD distribution is not particularly well described by a Gaussian. Comparisons of $`\varphi KK`$ with $`\varphi ll`$ may be very informative if the spectra are measured over the same range of $`m_t`$, with similar systematics. If the ratio of the $`m_t`$ spectra has the shape characteristic of rescattering, the low $`m_t`$ dip in the $`\varphi KK/ll`$ ratio reflects the amount of rescattering and therefore the time spent in the dense hadronic phase. This could help to clear up uncertainties about how long the hadronic system interacts before freezing out . Including effects of chiral symmetry restoration on kaon properties may alter these arguments, but it is unlikely that both effects will produce identical $`m_t`$ dependencies. ## D Acknowledgements We would like to thank Dr. H. Sorge for the use of the RQMD code and gratefully acknowledge helpful discussions with Dr. E. Shuryak. We further acknowledge the aid of Drs. C. Hoehne and N. Willis in acquiring and interpreting the NA49 and NA50 data, respectively.
# The X-ray variability of the Seyfert 1 galaxy MCG-6-30-15 from long ASCA and RXTE observations ## 1 INTRODUCTION Observational and theoretical progress over the past decade have greatly improved our understanding of accreting black holes in Seyfert galaxies. Time-averaged X-ray spectra reveal a hard power-law continuum with a broad iron line and continuum reflection components (Pounds et al 1990; Nandra & Pounds 1994; Tanaka et al 1995; Nandra et al 1997). The power-law emission is produced by thermal Comptonization in hot gas at about 100 keV (Zdziarski et al 1994) above a thin accretion disc, which causes the reflection (Guilbert & Rees 1988; Lightman & White 1988; George & Fabian 1991; Matt, Perola & Piro 1991). Spectral variability in some objects shows that the emission is produced in flare-like events which may move about over the inner accretion flow. The information from such variability is complex and its interpretation is not necessarily straightforward. Barr & Mushotzky (1986) and Wandel & Mushotzky (1986), by invoking the criterion of the fastest doubling time found in observations of AGN, demonstrated that a correlation between X-ray luminosity and variability time scales exists, and used this to place upper limits on the black hole masses in their sample of AGN observed with a slew of X-ray telescopes that includes Ariel V, HEAO-1, and the Einstein observatory. Such a method has however been criticized for its dependence on data quality and coverage (Lawrence et al. 1987; McHardy & Czerny 1987). It does nevertheless provide a strong limit on the size of the emitting region. For example, Reynolds et al. (1995) and Yaqoob et al. (1997) have reported a factor of 1.5 increase in flux in as little as 100 s in MCG$`-`$6-30-15. Others have resorted to employing power density spectrum (PDS) techniques (and variants thereof) as potential black hole mass estimators (e.g. Edelson & Nandra 1999 for NGC 3516; Nowak & Chiang 1999 for MCG$`-`$6-30-15; Hayashida et al. 
1998 introduce the normalized power spectrum density (NPSD) for their sample of AGN observed with Ginga). However, even this is not without its caveats. X-ray variability studies of AGNs thus far (with the exception of IRAS 18325-5926; Iwasawa et al. 1998) have shown that the variability is essentially chaotic, with no characteristic period. The problem of unevenly sampled data streams, characteristic of X-ray observations of AGN, is overcome by the solution of Lomb (1976) and Scargle (1982; see Press et al 1992). Others still have searched for time lags and leads using cross-correlation function techniques. However, the problem associated with unevenly sampled data is still a concern, and has been addressed by e.g. Edelson & Krolik (1988), Yaqoob et al (1997), Reynolds (1999), and in this paper. MCG$`-`$6-30-15 is a bright nearby (z=0.0078) Seyfert 1 galaxy that has been extensively studied by every major X-ray observatory since its identification. Recent studies that take advantage of the high energy and broad-band coverage of RXTE by Lee et al. (1998, 1999), with respectively a 50 ks observation in 1996 and a 400 ks observation in 1997, have confirmed the clear existence of a broad iron line and reflection component in this object. A previous study by Iwasawa et al. (1996) (hereafter I96) using a 4.5 day observation with ASCA revealed the iron line profile to be variable, which is confirmed in this long 1997 ASCA observation (Iwasawa et al. 1999, hereafter I99), which was observed simultaneously with RXTE . Additionally, Lee et al. (1999) have been able to constrain the iron abundance - reflection fraction relation for the first time using the RXTE observation. Guainazzi et al. (1999) give good bounds on the high energy cutoff of the continuum from a BeppoSAX observation of MCG$`-`$6-30-15. 
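The Lomb-Scargle approach mentioned above can be sketched in a few lines. `scipy.signal.lombscargle` (which takes angular frequencies) is applied here to a synthetic, unevenly sampled series; the sampling and signal are purely illustrative, not data from this paper:

```python
# Lomb-Scargle periodogram of an unevenly sampled light curve.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1000.0, 500))     # uneven sampling times (s)
f0 = 0.01                                      # true signal frequency (Hz)
y = np.sin(2 * np.pi * f0 * t) + 0.1 * rng.normal(size=t.size)

freqs = np.linspace(0.001, 0.05, 2000)         # trial frequencies (Hz)
power = lombscargle(t, y - y.mean(), 2 * np.pi * freqs)
f_peak = freqs[np.argmax(power)]
print(f_peak)                                  # recovers ~0.01 Hz
```

The periodogram peaks at the injected frequency despite the irregular sampling, which is precisely why this estimator is preferred over a plain FFT for AGN light curves.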
We investigate in this paper changes in the direct and reflected components with our simultaneous RXTE and ASCA observations spanning a good time interval of $`\sim`$ 400 ks, in order to identify possible causes of the variability. In Section 3, we investigate rapid variability with colour ratios. This is followed in Section 4 by flux-correlated studies, and a detailed analysis of the time intervals surrounding the bright ASCA and RXTE flares shown in Fig. 1. In Section 5, we shift to temporal studies and search (using cross-correlation techniques) for the presence of time lags and leads, in order to better understand how the various spectral components may drive one another. We discuss in Section 6 tentative evidence for a possible 33 hour period and a break in the power spectrum. Finally, we discuss the spectral and temporal findings in Section 7. In particular, we discuss the challenge that the spectral findings may pose to current models for reflection by cold material, and the implications of the temporal studies for placing constraints on the size of the emitting region and the mass of the black hole in MCG$`-`$6-30-15. We also present a simple model to explain some of our enigmatic spectral findings. We conclude in Section 8 with a summary of the pertinent findings from this study. ## 2 Observations MCG$`-`$6-30-15 was observed by RXTE over the period from 1997 Aug 4 to 1997 Aug 12 with both the Proportional Counter Array (PCA) and High-Energy X-ray Timing Experiment (HEXTE) instruments. We note that good data covered $`\sim`$400 ks, even though the RXTE observation spanned $`\sim`$700 ks. The source was simultaneously observed for $`\sim 200`$ ks by the ASCA Solid-state Imaging Spectrometers (SIS) over the period 1997 August 3 to 1997 August 10, with a half-day gap part way through the observation. We concentrate only on the RXTE PCA observations in this paper.
### 2.1 Data Analysis We extract PCA light curves and spectra from only the top Xenon layer using the ftools 4.2 software. The light curves were extracted with the default 16 s time bins for Standard 2 PCA data time resolution. Data from PCUs 0, 1, and 2 are combined to improve signal-to-noise at the expense of slightly blurring the spectral resolution. Data from the remaining PCUs (PCU 3 and 4) are excluded because these instruments are known to periodically suffer discharge and are hence sometimes turned off. Good time intervals were selected to exclude any Earth occultations or South Atlantic Anomaly (SAA) passages, and to ensure stable pointing. We also filter out electron contamination events. We generate background data using pcabackest v2.0c in order to estimate the internal background caused by interactions between the radiation/particles and the detector/spacecraft at the time of observation. This is done by matching the conditions of the observations with those in various model files. The model files used are the L7-240 background models, which are intended for application to faint sources with count rates less than 40 $`\mathrm{cts}\,\mathrm{s}^{-1}\,\mathrm{PCU}^{-1}`$. The PCA response matrix for the RXTE data set was created using pcarsp v2.36. Background models and response matrices are representative of the most up-to-date PCA calibrations. Figure 1 shows the background-subtracted light curves for ASCA and RXTE, with 256 s binning. The time intervals corresponding to the ASCA and RXTE flares are decomposed further and analyzed in depth in Section 4.3. Significant variability can be seen in both light curves on short and long timescales. Flare and minima events are seen to correlate temporally in both light curves. ## 3 Rapid Variability Intraday variability has been seen in most quasars and AGN; in MCG$`-`$6-30-15, rapid X-ray variability on the order of 100 s has been reported by a number of workers (e.g. Reynolds et al. 1995; Yaqoob et al. 1997).
These significant luminosity changes on time scales as short as minutes can have strong implications for restricting the size of the emitting region and the efficiency of the central engine. Colour ratios are good tools for discerning the processes that may give rise to these rapid variability effects. For the purposes of our study, we define the following count-rate hardness ratios in order to assess the processes that may give rise to variability: $$R_1=\frac{5{-}7\,\mathrm{keV}}{3{-}4.5\,\mathrm{keV}}$$ (1) $$R_2=\frac{7.5{-}10\,\mathrm{keV}}{3{-}4.5\,\mathrm{keV}}\qquad R_3=\frac{7.5{-}10\,\mathrm{keV}}{5{-}7\,\mathrm{keV}}$$ (2) $$R_4=\frac{10{-}20\,\mathrm{keV}}{3{-}4.5\,\mathrm{keV}}\qquad R_5=\frac{10{-}20\,\mathrm{keV}}{5{-}7\,\mathrm{keV}}$$ (3) $$R_6=\frac{10{-}20\,\mathrm{keV}}{7.5{-}10\,\mathrm{keV}}$$ We expect the reflection component to dominate above 10 keV. At lower energies, the dominant features include the iron line between 5.5-6.5 keV and the warm absorber below 2 keV. We note that the iron line contribution to the overall flux in the 5-7 keV band is only $`\sim`$ 15 per cent; we will nevertheless refer to this band hereafter as the iron line band. The $`\mathrm{R}_1`$ comparison of the (5-7 keV) iron line band with the (3-4.5 keV) lower continuum (Fig. 2a) shows that the source is intrinsically harder during the minima. Similarly, the $`\mathrm{R}_2`$ hardness ratio of the (7.5-10 keV) upper continuum to the (3-4.5 keV) lower continuum reveals a similar trend (Fig. 3a), as does the $`\mathrm{R}_4`$ ratio of the (10-20 keV) reflection hump to the (3-4.5 keV) lower continuum (Fig. 4a). $`\mathrm{R}_3`$, $`\mathrm{R}_5`$ and $`\mathrm{R}_6`$ show no obvious trends in the count-rate versus hardness ratio plots (respectively Figs. 3b, 4b, 4c), although subtle correlations can be seen in the time versus hardness ratio plots. In particular, Figs. 4b and 4c show dramatic hardening of the spectrum during the time periods following the RXTE flare. (We mark the beginning of the ASCA and RXTE flares respectively with ‘A’ and ‘X’ in the light curves shown in Figs. 2$`-`$4.)
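The bookkeeping behind $`R_1`$–$`R_6`$ can be sketched as follows; the band edges are those of equations (1)-(3), while the example count rates are invented for illustration, not measured PCA rates:

```python
# Hardness ratios R1-R6 built from band count rates, following the band
# edges defined in the text; the example rates below are illustrative only.
def hardness_ratios(rates):
    """rates: count rates [cts/s] keyed by (lo, hi) band edges in keV."""
    return {
        "R1": rates[(5.0, 7.0)] / rates[(3.0, 4.5)],
        "R2": rates[(7.5, 10.0)] / rates[(3.0, 4.5)],
        "R3": rates[(7.5, 10.0)] / rates[(5.0, 7.0)],
        "R4": rates[(10.0, 20.0)] / rates[(3.0, 4.5)],
        "R5": rates[(10.0, 20.0)] / rates[(5.0, 7.0)],
        "R6": rates[(10.0, 20.0)] / rates[(7.5, 10.0)],
    }

example = {(3.0, 4.5): 8.0, (5.0, 7.0): 4.0, (7.5, 10.0): 2.0, (10.0, 20.0): 1.0}
print(hardness_ratios(example))
```

Tracking each ratio against time and against total count rate, as in Figs. 2-4, then separates hardening of the continuum from changes confined to the iron line or reflection bands.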
In order to further quantify the degree or absence of variability, we apply a $`\chi ^2`$ test to the hardness ratio trends shown in the left panels of Figs. 2$`-`$4. Table 1 details these results for 138 degrees of freedom. ### 3.1 The unusual properties of the deep minimum period following the hard RXTE flare We mark the regions corresponding to the ASCA and RXTE flare events in Figs. 2$`-`$4. The time spans corresponding to the ASCA and RXTE flares were chosen so that the periods surrounding each flare and the minimum following it can be compared. The spectral behaviour during these periods is thoroughly investigated in Section 4.3. For present purposes, we draw the reader’s attention to the deep minimum immediately following the bright RXTE flare. (The general location of this minimum in the light curve is labeled ‘DM’ in Figs. 2$`-`$4.) Despite the hardness ratio results discussed previously (e.g. no obvious trends seen in $`\mathrm{R}_3`$, $`\mathrm{R}_5`$, $`\mathrm{R}_6`$), the spectrum during this time period hardens in all colours, whereas this is not necessarily the case for the other minima events (e.g. the deep minimum following the soft ASCA flare). In addition, it is not clear whether the spectrum reaches its lowest minimum slightly before or after the observed hardening. We also draw attention (by means of a horizontal line through the hardness ratio versus time plots in Figs. 2$`-`$4) to the general trend for harder spectra in the $`\sim`$ 260 ks time period that begins with the RXTE flare. ### 3.2 Possible causes of Hardness Ratio Variability Many complicated processes contribute to the overall temporal and spectral behaviour of an AGN. It is necessary to disentangle these components in order to assess the physical processes that are responsible for the observed variability.
Effects due to reprocessing are significant only at the higher energies, the (10-20 keV) band being the most sensitive probe of any subtle changes in the reflection component. Martocchia & Matt (1996) suggested that gravitational bending/focusing as the X-ray source gets closer to the black hole will enhance the amount of reflection. It is therefore possible that flares closer to the hole during the minima enhance the amount of reflection, through increased beaming of the emission towards the disk. We conclude, however, from the combined hardness ratio findings described thus far that the spectral hardening during periods of diminished intensity points largely to changes in the intrinsic photon index $`\mathrm{\Gamma }`$ as the main culprit for the observed spectral variability. We are led to this conclusion because spectral changes are seen in all bands, which is most likely to be due to changes in the spectral index. In other words, we may be seeing the effects of changes in the coronal temperature affecting the intrinsic power-law slope, or changes in $`\mathrm{\Gamma }`$ due to coverage of flares, such that we are observing the effects of flares occurring at different heights (e.g. Poutanen & Fabian 1999). However, as we shall now show, direct spectral analysis reveals a more complex situation. ## 4 Spectral Fitting Having established that the variability effects are many, we next investigate spectral changes in time sequence in order to obtain a better understanding of the variability phenomena between the different flux states, with particular emphasis on the flares and deep minima. Data analysis is restricted to the 3 to 20 keV PCA energy band. The lower energy bound at 3 keV is selected so as to remove the need to model photoelectric absorption due to Galactic ISM material or the warm absorber known to be present in this object. (Lee et al.
1999 have shown that effects due to the warm absorber at 3 keV are negligible for this data set.) ### 4.1 Spectra selected in time sequence and by flux We choose the time intervals (Fig. 1) in order to contrast and study the different variability states. They are defined such that intervals i1 and i6 correspond to the two RXTE deep minima, intervals i2 and i7 to the relatively calm periods following these minima, and i5 to a flare event for comparison. We investigate in detail in Section 4.3 the times surrounding the RXTE counterpart of the soft ASCA flare, and the hard flare observed by RXTE (unfortunately missed by ASCA), as depicted in Fig. 1 and subsequently in Figs 9 and 11. Both the ASCA and RXTE light curves for the full observation are shown in Fig. 1. We also separate the data according to flux in order to assess whether a clear picture can be developed of the dominant processes that may be at work in a given flux state. To do so, we divide the 400 ks observation by flux, with f2 and f3 being the intermediate states between the lowest f1 ($`2.84\times 10^{-11}`$ $`\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$) and highest f4 ($`4.79\times 10^{-11}`$ $`\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$) fluxes. Properties of these flux levels (in the 2-60 keV and 3-20 keV energy bands) and of the intervals i1$`-`$i7 are detailed respectively in Table 2 and Table 7. ### 4.2 Temporal Variability and Spectral Features A nominal fit to the entire data set (i.e. ASCA, RXTE PCA and HEXTE) demonstrated the clear existence of a redshifted broad iron K$`\alpha `$ line at $`\sim`$ 6.0 keV and a reflection hump between 20 and 30 keV, as shown in Lee et al. (1998, 1999). In general, fits to the solar abundance models of George & Fabian 1991 (hereafter GF91) are better than simple power-law fits and further reinforce the preference for a reflection component.
Since the reflected continuum does not contribute significant flux to the observed spectrum below 10 keV, a simple power-law and iron line fit to the data below 10 keV will reveal changes in the intrinsic power law of the X-ray source. If spectral variability is indeed due to changes in the intrinsic power-law slope, which would implicate changes in the conditions of the X-ray emitting corona, we should see noticeable changes in the values of $`\mathrm{\Gamma }_{(3-10)}`$ between the different temporal states. However, because reflection only significantly affects energies above 10 keV, changes seen in $`\mathrm{\Gamma }_{(10-20)}`$ would suggest that variability may be due to the amount of reprocessing. (We have tested this premise by comparing simulated spectra in xspec using the pexrav model and find that the overall flux contribution from reflection alone is $`>`$ 60 per cent above 10 keV.) This would have strong implications for geometry (i.e. where the direct X-ray flares are partially obscured), motion of the source (e.g. Reynolds & Fabian 1997; Beloborodov 1998), or gravity (light-bending effects that beam/focus more of the emission down towards the disk; Martocchia & Matt 1996). #### 4.2.1 The different flux states We first investigate whether a correlation exists between flux and the various fit parameters, and in particular whether changes in reflection are dramatic. We do so by first fitting a simple model consisting of a power law and a redshifted Gaussian to the 3-10 keV band, and comparing that with a power-law fit to the 10-20 keV band. In doing so, we find that while the intrinsic 3-10 keV power-law slope steepens, $`\mathrm{\Gamma }_{10-20}`$ appears to flatten with increasing flux (Fig. 5).
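As a toy illustration of this band separation (not the xspec fitting actually used here), a photon index in a restricted band can be estimated by a straight-line fit in log-log space, since a power law $`N(E)=AE^{-\mathrm{\Gamma }}`$ is linear there:

```python
import numpy as np

# Toy photon-index estimate in a restricted band: a power law
# N(E) = A * E**(-Gamma) is a straight line of slope -Gamma in log-log space.
# This ignores detector response, binning and noise; illustration only.
def fit_photon_index(e_kev, flux, e_lo, e_hi):
    """Return Gamma from a least-squares fit of log(flux) vs log(E)."""
    m = (e_kev >= e_lo) & (e_kev <= e_hi)
    slope, _ = np.polyfit(np.log(e_kev[m]), np.log(flux[m]), 1)
    return -slope

# Sanity check on a fake Gamma = 2 spectrum
e = np.geomspace(3.0, 20.0, 200)
n = 5.0 * e ** -2.0
print(fit_photon_index(e, n, 3.0, 10.0))   # recovers ~2.0
print(fit_photon_index(e, n, 10.0, 20.0))  # recovers ~2.0
```

For a pure power law both band indices agree; in the real data a reflection component steepens or flattens the 10-20 keV slope relative to 3-10 keV, which is exactly the diagnostic exploited above.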
Additionally, the iron $`\mathrm{K}\alpha `$ flux $`F_{\mathrm{K}\alpha }`$ does not change with any statistical significance while the flux between the lowest f1 and highest f4 states nearly doubles (Table 2); similar results for the constancy of the iron line in MCG$`-`$6-30-15 were noted by McHardy et al. (1998), and by Chiang et al. (1999) for the Seyfert 1 galaxy NGC5548. The ratio plot of data-to-model using a simple power-law fit to the 3-20 keV data clearly illustrates the difference in the line and reflection component between the lowest and highest flux states in Fig. 7a. Having demonstrated the existence of a strong reflection component in Fig. 7a, and assessed its nature with more complicated fits in Lee et al. (1999) for this data set, we next investigate the features of reflection in detail for the different fluxes by fitting the data with a multicomponent model that includes the reflected spectrum. The underlying continuum is fit with the model pexrav, a power law with an exponential cutoff at high energies reflected by an optically thick slab of neutral material (Magdziarz & Zdziarski 1995). We fix the inclination angle of the reflector at $`30^{\circ }`$ so as to agree with the disk inclination one obtains when fitting accretion disk models to the iron line profile as seen by ASCA (Tanaka et al. 1995). Due to the strong coupling between the fit parameters $`\mathrm{\Gamma }`$, abundances, and reflection, we fix the low-Z and iron abundances respectively at 0.5 and 2 times solar, as determined by Lee et al. (1999), for the fits presented in Table 3; the high-energy cutoff is fixed at 100 keV, appropriate for this object (Guainazzi et al. 1999; Lee et al. 1999). An additional Gaussian component is added to model the iron line. Using this complex model, we find that the 4-20 keV power-law slope and reflection fraction $`R`$ increase with flux (Table 3) while the strength of the iron line, $`F_{\mathrm{K}\alpha }`$, decreases (Fig.
6a), the latter in contrast to the findings for a constant $`F_{\mathrm{K}\alpha }`$ discussed in the context of the simpler fits. ($`F_{\mathrm{K}\alpha }`$ is defined as the total photon flux in the line.) We note that $`F_{\mathrm{K}\alpha }`$ is consistent with constancy if unity abundances are assumed; this is in agreement with the simple power-law fits. However, this leaves us with difficult-to-constrain errors, and worse fits in a $`\chi ^2`$ sense. (With the exception of $`F_{\mathrm{K}\alpha }`$, all other parameters, e.g. $`\mathrm{\Gamma }`$ shown in Table 3, follow similar trends whether unity or non-unity abundances are assumed.) It is clear that degeneracies exist and cannot be resolved with 3$`-`$20 keV RXTE data; we suspect that the dependence on abundance is largely due to the modelling of the iron edge in these complex fits. Nevertheless, we test this hypothesis by including an edge feature in the simple power-law model of Table 2. The results, presented in Table 4, show that $`F_{\mathrm{K}\alpha }`$ is indeed consistent with constancy, and strengthen the argument that the behaviour of the iron line as given by the complex model of Table 3 is complicated by degeneracies. This, and the flux constancy of the iron line, is well illustrated in Fig. 7b, which shows the ratio of the best-fitting data against the model of Table 3. Certainly, the case for a requirement of supersolar iron abundances is strong and is reflected in the strength of the iron line. We discuss this in depth in Lee et al. (1999). While $`F_{\mathrm{K}\alpha }`$ apparently decreases with flux in the complex fits, we find that the reflection fraction and the absolute normalization of the reflection component ($`Rnorm`$) increase with flux (Table 3 and Fig. 6b). We define $`Rnorm=AR`$, where $`A`$ is the power-law flux at 1 keV, in units of $`10^{-3}`$ $`\mathrm{ph}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{keV}^{-1}`$. (Fig. 7a clearly shows that stronger reflection is present during higher flux states.)
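The interplay between line flux, continuum level and equivalent width invoked throughout this section follows directly from the definition of equivalent width; a minimal sketch with invented numbers (our illustration, not fit results from the tables):

```python
# Equivalent width W = F_line / F_cont(E_line), for a line of total photon
# flux F_line [ph/cm^2/s] on a power-law continuum A * E**(-Gamma), where
# A [ph/cm^2/s/keV at 1 keV] is the continuum normalization.
# All numbers below are made up for illustration.
def equivalent_width_ev(f_line, a_cont, gamma, e_line_kev=6.4):
    cont_at_line = a_cont * e_line_kev ** (-gamma)  # ph/cm^2/s/keV
    return 1000.0 * f_line / cont_at_line           # keV -> eV

# A fixed line flux over a doubled continuum gives half the equivalent
# width: a constant F_Kalpha implies W_Kalpha anticorrelates with flux.
print(equivalent_width_ev(4e-5, 0.02, 2.0))
print(equivalent_width_ev(4e-5, 0.04, 2.0))
```

This is why a constant $`F_{\mathrm{K}\alpha }`$ with a brightening continuum automatically produces the observed anticorrelation between $`W_{\mathrm{K}\alpha }`$ and flux, without any change in the line-emitting material itself.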
The anticorrelation between $`F_{\mathrm{K}\alpha }`$ and reflection (Fig. 6c) can be due to an ‘artificial’ effect in which the presence of a strong reflection spectrum during the high flux states removes part of the flux from the line in the fits, thereby resulting in a lower observed $`F_{\mathrm{K}\alpha }`$ during the higher flux states. This is linked to the strong coupling between the fit parameters $`\mathrm{\Gamma }`$, $`R`$ and abundance discussed previously. There is also the possibility that iron becomes more ionized as the flux increases, which would weaken the observed line flux. (We discuss ionization scenarios in Section 7.) As always, we caution that inadequate spectral resolution and possibly incomplete models may affect these results. Additionally, we find that $`W_{\mathrm{K}\alpha }`$ anticorrelates with $`R`$ (Fig. 6d). This lack of proportionality between $`R`$ and $`W_{\mathrm{K}\alpha }`$ is unexpected in the context of the standard corona/disk geometry, the implications of which are discussed in Section 7. Chiang et al. (1999) find similar results in their multi-wavelength campaign on NGC5548. In light of the degeneracies associated with complex fits, we primarily present our results from simple power-law fits. ### 4.3 The bright ASCA and RXTE flares We next investigate in greater detail the time sequences surrounding the brightest ASCA and RXTE flares. In particular, we are prompted by the peculiar behaviour of the iron line surrounding the time interval of the ASCA flare, as reported by I99 for this time sequence in the ASCA data. According to I99, there is a dramatic change in both the profile and the intensity of the line during this period: interval a as depicted in Fig.
8 is marked by an extremely redshifted line profile, characterized by a sharp decline in the line energy at 5.6 keV (far below the 6.4 keV rest energy of the line emission) with a red wing that extends to 3.5 keV, and a line intensity that is $`\sim`$3 times that of the time-averaged data. The transition to time interval b is marked by an abrupt factor of 2.2 drop in the averaged 0.6-10 keV count rate, and a similar $`\sim`$2 drop in line intensity. (We present in this section the behaviour as seen by RXTE.) Fig. 8 shows the ASCA and corresponding RXTE light curves for this period of interest. We note that the two observations are slightly offset from each other; this allows for a good assessment of the events immediately preceding and following this flare event. Unfortunately, no ASCA data exist to coincide with the most prominent RXTE flare during the time interval between $`2.8\times 10^5`$ and $`3.8\times 10^5`$ s. We fit the 3-10 keV data with a number of different models: (1) power law + Gaussian, (2) (power law + Gaussian) modified by absorption, (3) power law + diskline and (4) (power law + diskline) modified by absorption. For the majority of the fits, the diskline model of Fabian et al. (1989) provided the best results; these power law + diskline best-fitting values are detailed in Table 5 and Table 6, respectively for the ASCA and RXTE flare events. (We note however that the differences in $`\chi ^2`$ between fits to the different models are not great: $`\mathrm{\Delta }\chi ^2`$ less than 2 for 1 extra parameter.) Because the resolution of RXTE in the iron line band is insufficient for it to be sensitive to the details that a diskline model can provide, we fix the diskline parameters at the best-fitting ASCA values reported by I99 for this 1997 observation: emissivity $`\alpha =4.1`$, inclination $`i=30^{\circ }`$, and respectively inner and outer radii $`R_{in}=6.7`$ and $`R_{out}=24`$.
We caution that fixing these values is likely to be an oversimplification of the true scenario, since the line profile is known to vary on short time scales (e.g. Iwasawa et al. 1996), the impact of which is not easily assessable given the present data quality. Nevertheless, we find that it is presently the best option for giving a first-order approximation of what is likely to be happening. #### 4.3.1 The ASCA flare Figs. 8 & 9 illustrate the RXTE time intervals corresponding to the periods surrounding the ASCA flare, with periods A1 and A2 chosen to correspond respectively to the times immediately preceding and following the flare ‘a’ seen in ASCA. Table 5 details fit results for the time intervals depicted in Fig. 9. There are noticeable changes in the intrinsic power-law slope throughout the intervals of interest. In particular, the most dramatic changes occur between intervals q and e. $`\mathrm{\Gamma }_{3-10}`$ steepens from q to A1, the intervals immediately preceding the flare event seen in ASCA, which may have the effect of producing the noticeable decrease in $`W_{\mathrm{K}\alpha }`$. In the $`\sim`$ 11 ks separating the ‘pre-’ and ‘post-’ flare intervals (respectively A1 and A2), $`F_{\mathrm{K}\alpha }`$ (from $`1.67\pm 0.44`$ to $`2.61\pm 0.38`$ $`\mathrm{ph}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$) and $`W_{\mathrm{K}\alpha }`$ (from $`278\pm 74`$ to $`543\pm 79`$ eV) nearly double. This is consistent with the I99 findings from the ASCA data of a factor of $`\sim`$ 2 difference in $`F_{\mathrm{K}\alpha }`$ between the flare a and post-flare b intervals shown in Fig. 8. It also appears that $`\mathrm{\Gamma }_{10-20}`$ steepens during this transition, although the errors are too large to make this claim definitively. Subsequent changes in $`\mathrm{\Gamma }_{3-10}`$ are still noticeable; $`\mathrm{\Gamma }_{10-20}`$ appears to flatten in e, and along with $`W_{\mathrm{K}\alpha }`$ and the flux remains fairly constant through g.
Unfortunately, the errors associated with $`F_{\mathrm{K}\alpha }`$ are generally too large to make statistically significant statements about changes in the iron line flux, even though there are indications of such changes (e.g. Fig. 10); notice especially the difference in $`F_{\mathrm{K}\alpha }`$ between A1 and A2. #### 4.3.2 The RXTE flare We next investigate the time intervals surrounding the RXTE flare (Fig. 11); unfortunately, due to the absence of ASCA data during this time interval, we cannot make analogous comparisons. However, we find that changes in the intrinsic power-law slope during these intervals follow trends similar to those presented for the RXTE counterpart of the ASCA flare in the previous section, although events surrounding this bright RXTE flare appear to be much more erratic and complicated. (We remind the reader that this is also apparent in the hardness ratio comparisons during this time interval.) The similarity with the ASCA flare lies in the observed trends in $`\mathrm{\Gamma }_{3-10}`$. In particular, there is a noticeable flattening of $`\mathrm{\Gamma }_{3-10}`$ in the transition between the ‘pre-’ and ‘post-’ flare states X1 to X2, which continues through interval m, before suddenly steepening in the following interval n. Unfortunately, the errors are again such that we are unable to make any statements regarding changes in $`F_{\mathrm{K}\alpha }`$ or $`\mathrm{\Gamma }_{10-20}`$. However, we note that $`F_{\mathrm{K}\alpha }`$ is generally high for the time intervals associated with this hard RXTE flare event. Table 6 details these results. As with the ASCA flare, Fig. 12 for the RXTE flare hints at changes in $`F_{\mathrm{K}\alpha }`$ on short time intervals (e.g. X1 and m). ### 4.4 The intervals i1 to i7 It is clear from the spectral analysis thus far that conditions can alter suddenly and erratically.
In order to assess whether a more simplified picture exists, we investigate the spectral features of the deep minima in contrast with flare-type events, using a model that consists of a simple power law and a redshifted Gaussian component. Table 8 confirms the findings of Section 4.2.1: in general, $`\mathrm{\Gamma }_{3-10}`$ tends to be flatter during the minima than in the flare states. A close comparison of $`\mathrm{\Gamma }_{3-10}`$ versus $`\mathrm{\Gamma }_{10-20}`$ for the differing states suggests that we are largely seeing intrinsic changes in the power-law slope rather than reflection, although it is likely that we are seeing contributions from both effects. Additionally, ratio plots of data against model using a power-law fit show that there is a noticeable change in the line flux and profile, as well as in the reflection component, similar to that seen in Fig. 7a. ### 4.5 Summary of spectral findings We find evidence from flux-correlated studies that $`\mathrm{\Gamma }_{3-10}`$ steepens significantly with flux ($`\mathrm{\Delta }\mathrm{\Gamma }_{3-10}\sim 0.06`$ for a doubling of the flux from f1 to f4; Fig. 13a) while, surprisingly, the iron line strength appears to remain constant (at most differing by $`1.5\times 10^{-5}`$ $`\mathrm{ph}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$ in the flux-correlated studies). Changes in $`\mathrm{\Gamma }_{10-20}`$ ($`\mathrm{\Delta }\mathrm{\Gamma }_{10-20}\sim 0.3`$) and $`W_{\mathrm{K}\alpha }`$ ($`\mathrm{\Delta }W_{\mathrm{K}\alpha }\sim 200`$ eV) are also evident, and both anticorrelate with flux (Fig. 13b for the latter). A close look at events corresponding to deep minima versus flares reinforces the finding that changes in the intrinsic power-law slope are evident, with a comparably steeper $`\mathrm{\Gamma }_{3-10}`$ value during the flares. Figs. 13 illustrate that the behaviour of the intrinsic photon index and $`W_{\mathrm{K}\alpha }`$ during the flares is consistent with the flux-correlated behaviour.
We find that reflection increases with flux when fitting the flux-separated data with a complex model that includes the reflected spectrum. Curiously, the reflection fraction $`R`$ anticorrelates with $`W_{\mathrm{K}\alpha }`$; similarly, the absolute normalization of the reflection component ($`Rnorm`$) anticorrelates with $`F_{\mathrm{K}\alpha }`$. We note that contrary to the findings from simple power-law fits, $`F_{\mathrm{K}\alpha }`$ is seen to decrease with flux when non-unity abundances are assumed. We caution, however, about the large degeneracies associated with complex fits when only the 3$`-`$20 keV RXTE data are considered. (The case for requiring supersolar abundances is discussed in Lee et al. 1999.) With the exception of the behaviour of $`F_{\mathrm{K}\alpha }`$, all other parameters, e.g. $`\mathrm{\Gamma }`$, display similar trends in the complex and simple power-law fits. The apparent discrepancy in $`F_{\mathrm{K}\alpha }`$ between the simple and complex fits, in conjunction with the large errors associated with $`F_{\mathrm{K}\alpha }`$, suggests that an ambiguity exists in measurements of the iron line itself which RXTE is unable to resolve. A detailed investigation of the time intervals surrounding the bright ASCA and RXTE flares reveals further complexities. While changes in $`\mathrm{\Gamma }_{3-10}`$ are consistent with the flux-correlated studies, we find evidence to suggest that a change in $`F_{\mathrm{K}\alpha }`$ occurs during the time intervals immediately associated with the flare event; in other words, tentative evidence for changes in $`F_{\mathrm{K}\alpha }`$ is apparent on short time scales. In particular, a large increase in the line flux (e.g. A2) is evident in the interval immediately following the flare; $`W_{\mathrm{K}\alpha }`$ increases similarly.
It is curious that $`F_{\mathrm{K}\alpha }`$ shows a significant increase after the flare rather than during it; this may be an indication that we are witnessing some type of response to the flare (e.g. Fig. 10), but we caution that the evidence is very tentative. We note also that $`F_{\mathrm{K}\alpha }`$ is comparably high ($`\sim`$ a factor of 1.7 increase) during the times surrounding the RXTE flare, and the times following the ASCA flare, in contrast to the values presented in Tables 2, 3, and 8. ## 5 Time Lags, Leads, and Reverberation Motivated by the temporal findings of e.g. Miyamoto et al. (1988) and Cui et al. (1997) for Cygnus X-1, we next investigate whether the collecting area of RXTE coupled with this long observation is sufficient to discern time lags and leads, and in particular whether reverberation effects can finally be seen. We acknowledge that a good assessment can be hampered by unevenly sampled data, the nature of which has been investigated by a number of workers, from e.g. Edelson & Krolik (1988) and Yaqoob et al. (1997) for AGN to In‘t Zand & Fenimore (1996) for gamma-ray bursts. We adopt a method similar to that presented by Edelson & Krolik (1988). Accordingly, we define the following formalism for our calculations of the autocorrelation function (ACF) and cross-correlation function (CCF): $$CCF(\tau )=\frac{1}{M}\sum _{i=1}^{n}\frac{y^{\prime }(i)\,z^{\prime }(i+\tau )}{n_\tau }\quad \text{for }\tau \ge 0$$ (4) $$\text{where }z^{\prime }(i)=z_i-\frac{1}{n}\sum _{j=1}^{n}z_j$$ (and $`y^{\prime }`$ is defined analogously) $$\text{and }CCF(\tau _0)=\sum _{i=1}^{n}(y_i-\overline{y})(z_i-\overline{z})=M.$$ The time lag $`\tau `$ is incremented in multiples of 64 s ($`\tau =n\times 64`$ s); $`n_\tau `$ is then defined as the number of pairs of bins whose time separation satisfies the current value of $`\tau `$.
The variables $`K`$ and $`M`$ correspond respectively to the ACF and CCF values at $`\tau =0`$; these terms are used to normalize the ACF and CCF so that coherent noise addition at $`\tau =0`$ is eliminated. In order to better understand the nature of our findings shown in Figs. 15-17, we run our CCF algorithm on simulated light curves. The light curves over which the CCFs are evaluated are identical except that one has a specified fraction shifted in phase or time. In principle, time- and phase-shifted light curves are identical at certain Fourier frequencies, for example in the case in which the power spectrum can be represented by a single sine curve. However, if the light curve is the sum of e.g. several sine curves (more representative of actual physical systems), then a time shift will move the entire pattern by the specified time interval, whereas a phase shift will shift each individual sine curve along by that phase. The final phase-shifted light curve will consist of the additive components of the individually shifted sine curves, i.e. the individual frequencies are shifted and added together separately. The CCFs of the simulated phase- and time-shifted light curves are shown in Figs. 14; solid lines correspond to the CCF, and dashed lines to its mirror image. We point out that the CCFs show subtle differences. For the phase-shifted light curve of Fig. 14a, a constant offset from zero is seen between the CCF and its mirror image (represented by the solid and dashed lines respectively). This is contrasted with the time-shifted light curve, in which the CCF and its mirror image become comparable (i.e. there is no persistent offset) at $`\sim`$ 30 bins (i.e. $`\sim`$1900 s). Simulated light curves are generated using Monte Carlo techniques for the flux, with power inversely proportional to frequency, while the times are identical to those in the real data.
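A compact numpy sketch of a discrete correlation estimator in the spirit of equation (4) follows (our own simplified implementation, not the code used for Figs. 14-17; the half-bin pair-selection tolerance is an assumption):

```python
import numpy as np

# Discrete cross-correlation for (possibly unevenly sampled) light curves:
# mean-subtract both series, pair up samples separated by ~tau, average the
# products over the n_tau pairs, and normalize by the zero-lag value M.
def ccf(t, y, z, lags, bin_s=64.0):
    yp, zp = y - y.mean(), z - z.mean()
    m = np.sum(yp * zp)                     # zero-lag normalization M
    dt = t[None, :] - t[:, None]            # dt[i, j] = t_j - t_i
    prod = np.outer(yp, zp)                 # yp_i * zp_j
    out = []
    for tau in lags:
        sel = np.abs(dt - tau) < bin_s / 2  # pairs separated by ~tau
        n_tau = max(sel.sum(), 1)
        out.append(prod[sel].sum() / (m * n_tau))
    return np.array(out)

# Sanity check: a sinusoid and a copy delayed by 256 s peak at tau = 256 s.
t = np.arange(0.0, 64000.0, 64.0)
y = np.sin(2 * np.pi * t / 1600.0)
z = np.sin(2 * np.pi * (t - 256.0) / 1600.0)
lags = np.arange(0.0, 513.0, 64.0)
print(lags[np.argmax(ccf(t, y, z, lags))])  # -> 256.0
```

Because each lag bin collects only the sample pairs whose actual time separation matches $`\tau `$, the estimator tolerates the gaps inherent in the real ASCA and RXTE sampling rather than requiring interpolation onto a regular grid.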
We note that the similarity between the phase- and time-shifted data between 0 and $``$10 bins (i.e. $``$ 0-640 s) is due to the fact that only 20 per cent of the light curve has been shifted, while the other 80 per cent of the light curve remains identical between the phase-shifted and time-shifted simulations. Assessment of the CCFs between the upper continuum (E3 : 7.5-10 keV) and the lower continuum (E1 : 3-4.5 keV) and iron line region (E2 : 5-7 keV), shown in Fig. 15, reveals evidence for a possible phase shift. In a comparison with the CCF of the simulated light curve shown in Fig. 14a, in which an artificial phase lag of $`\varphi `$ 0.6 rad is introduced, we find that a similar trend exists in the actual data. This would suggest that $``$ 10-20 per cent of the upper continuum band lags the lower energy bands E1 and E2. We next assess the nature of the (10-20 keV) reflection component relative to the other, lower energy bands. In contrast to the previous findings, the CCFs of the reflection component (E4) with the lower continuum (E1) and iron line region (E2) point at possible time shifts ($`<`$ 1000 s), as suggested in Fig. 16. Fig. 14b illustrates that the CCF of a time-shifted light curve is marked by a non-symmetric bodily shift of the CCF, whereas this is not the case for the CCF of a light curve with a phase shift. Fig. 17 shows that neither a time nor a phase shift is seen in a comparison of the iron line region (E2) with the lower continuum (E1), or of the reflection component (E4) with the upper continuum (E3). The CCFs of the individual energy bands with themselves, for all energy bands mentioned thus far, look nearly identical to Fig. 17. We note that the errors are such that we are unable to make definitive statements about either a phase or a time lag at this time. ### 5.1 Large scale bumps versus small scale flicker It is interesting to compare our time-lag results with those of Nowak & Chiang (1999) and Reynolds (1999). 
Nowak & Chiang use similar cross-correlation techniques to those employed here to search for time lags between the soft band (0.5–1.0 keV) ASCA light curve and the hard band (8–15 keV) RXTE light curve. They found that the hard band lags the soft band by $`1.6\pm 0.5`$ ksec. In this work, we find lags $`<1000`$ s between RXTE bands E1 (2–4.5 keV) and E4 (10–20 keV). Noting that thermal Comptonization predicts a time lag that varies logarithmically with energy, our results are consistent with those of Nowak & Chiang. Reynolds (1999) uses an interpolation method to constrain trial transfer functions linking two given bands. He found a lag of 50–100 s between the 2–4 keV and 8–15 keV RXTE bands, rather smaller than that found here. To reconcile these results, one must appreciate that these methods probe lags at different Fourier frequencies. The CCF methods tend to probe lags across a broad spectrum of Fourier frequencies. Due to the red nature of the power spectrum, such methods are naturally biased towards the lower Fourier frequencies. The method of Reynolds (1999), instead, probes the higher Fourier frequencies since he uses fairly spiky trial transfer functions. Hence, to paraphrase these technical results, the rapid flickering seems to get transmitted up the observed energy spectrum with a smaller time lag (by an order of magnitude) than experienced by the slower variations. ## 6 Power spectra and periodicity ? While spectral studies of X-ray variability in time sequence may hold the key to understanding the underlying processes that are responsible for producing the observed dramatic flux changes, it is insufficient for constraining the size of the emitting region in the absence of a good understanding of the flare mechanisms. The line profile obtained from ASCA observations suggest that the X-ray emission originates from $``$ 10-20 gravitational radii of the black hole. This together with a periodic signal can constrain the size of the emitting region. 
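The consistency argument can be made concrete with a rough back-of-the-envelope check; the band centres below are our assumed numbers for illustration, not values quoted in either paper:

```python
import math

# Thermal Comptonization predicts lag(E2, E1) ~ t0 * ln(E2 / E1).
# Calibrate t0 from the Nowak & Chiang lag of ~1.6 ks between assumed
# band centres of ~0.75 keV (ASCA soft band) and ~11.5 keV (RXTE hard band):
t0 = 1600.0 / math.log(11.5 / 0.75)        # seconds per e-folding in energy

# Predicted lag between assumed centres of the E1 (~3.7 keV) and
# E4 (~15 keV) bands used here:
predicted_lag = t0 * math.log(15.0 / 3.7)  # ~0.8 ks, below the <1 ks found
```

Under these assumed centres the logarithmic-lag prediction lands below 1 ks, which is why the two measurements can be called consistent.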
Alternatively, we can attempt to estimate the black hole mass by assessing where the break frequency in the PDS occurs (assuming that the break frequency scales with mass). We calculate the power density spectrum using the Lomb-Scargle method (Lomb 1976; Scargle 1982; Press et al. 1992) appropriate for unevenly sampled data, and fit a power law slope independently to the RXTE and ASCA data between $`10^{-5}`$ and $`10^{-4}`$ Hz. (We ignore data above the latter in order to avoid contamination of the fit from the 96 minute orbital period of RXTE .) Figs. 18 show that $`f^{-1.3}`$ and $`f^{-1.5}`$ are sufficiently representative of the RXTE and ASCA data respectively, down to $``$ $`4`$–$`5\times 10^{-6}`$ Hz, where the break in the power spectrum may occur. (We note that this value is only given as a limit to $`f_{br}`$ – while there appears to be no additional evidence for a break below $`4`$–$`5\times 10^{-6}`$ Hz, the observations are insufficiently long to claim a definitive determination.) This is consistent with the findings of Hayashida et al. (1998) for MCG$``$6-30-15. Since the count rate throughout these observations remains steadily between 6-24 ct $`\mathrm{s}^{-1}`$ for RXTE and 0.5-3 ct $`\mathrm{s}^{-1}`$ for ASCA , the findings above suggest that large scale power does not exist in abundance even though much shorter time scale variability is highly evident. We note that Papadakis & Lawrence (1993) caution against standard Fourier analysis techniques for estimating power spectra, in the form of a bias of the periodogram due to a windowing effect, in addition to a possible ‘red noise leak’. The former is tied to the concern that the sampling window function can alias power from frequencies above and below the central frequency $`\omega _p`$, thereby distorting the true shape of the power spectrum; the latter ‘red noise leak’ effect concerns a transfer of power from low to high frequencies. 
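As a concrete sketch of the Lomb-Scargle estimator used here, the classical Scargle (1982) periodogram can be written out directly (an equivalent routine exists as `scipy.signal.lombscargle`); the synthetic light curve below, with its injected signal frequency and noise level, is purely illustrative:

```python
import numpy as np

def lomb_scargle(t, y, omegas):
    """Classical Lomb-Scargle periodogram for unevenly sampled y(t),
    evaluated at angular frequencies `omegas`."""
    y = y - y.mean()
    power = np.empty(len(omegas))
    for i, w in enumerate(omegas):
        # Scargle's offset tau makes the estimator invariant under
        # global time translations.
        tau = np.arctan2(np.sin(2 * w * t).sum(), np.cos(2 * w * t).sum()) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        power[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 4.0e5, 2000))   # uneven sampling over ~400 ks
f_true = 2.0e-4                               # Hz, injected test signal
y = np.sin(2 * np.pi * f_true * t) + 0.3 * rng.standard_normal(t.size)

f_trial = np.logspace(-5.5, -3.0, 400)        # Hz
pdgram = lomb_scargle(t, y, 2 * np.pi * f_trial)
f_peak = f_trial[np.argmax(pdgram)]           # recovers the injected frequency
```

The method needs no interpolation onto a regular grid, which is what makes it suitable for light curves with orbital gaps.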
(The problem of the ‘red noise leak’ was first noted by Deeter & Boynton (1982) and Deeter (1984).) However, we conclude from equations (5) and (6) below that the effect of this bias is negligible for the data sets in question. The expected value of the periodogram is defined such that: $$E[I(\omega _p)]=\int _{-\frac{\pi }{\mathrm{\Delta }T}}^{\frac{\pi }{\mathrm{\Delta }T}}f(\omega )F_N(\omega -\omega _p)d\omega $$ (5) where $`F_N(\omega -\omega _p)`$ is the Fejer kernel (Priestley 1981), which assumes the shape of a $`\mathrm{sin}^2`$ function peaked near $`\omega _p`$. It follows that the bias is: $$b(\omega )=E[I(\omega _p)]-f(\omega )$$ (6) For lengthy time series (e.g. 400 ks), the mean value of the periodogram tends increasingly towards the true value of the power spectrum at frequency $`\omega _p`$, as the Fejer kernel becomes increasingly concentrated around this frequency. In any case, we only wish to note that a potential break is seen in the power spectrum at $`4`$–$`5\times 10^{-6}`$ Hz. (A similar break is noted in this object from Ginga data by Hayashida et al. 1998.) We wish additionally to point out an interesting possibility of a 33hr periodicity. This is illustrated in Fig. 19 with 33hr interval tickmarks superimposed upon the ASCA and RXTE light curves. The period has been determined by taking the mean of the times of the 2 brightest flares in the RXTE light curve (the peak of the RXTE flare X1 and i5, shown in Fig. 1). We note that the power of these peaks is not sufficient to be significant in Lomb-Scargle power spectra (i.e. the sharp peaks do not carry very much power). Accordingly, we have not attempted to quantify the significance of the peaks, in part also because of the red noise nature of the PDS. We merely point out that 5 out of the 6 tickmarks in the RXTE light curve with flux $`>14\mathrm{ct}\mathrm{s}^{-1}`$ occur at 33 hr intervals, with the same trend seen in the ASCA light curve. 
## 7 Discussion There is evidence that the observed spectral variability is complex. However, all evidence points predominantly to a steepening of the spectral index with increasing flux. (Other AGNs that have exhibited spectral variability include NGC7314 Turner 1987; NGC2992 Turner & Pounds 1989; NGC4051 Matsuoka et al. 1990; NGC3227 Turner & Pounds 1989, Pounds et al. 1990; 3C273 Turner et al. 1989, 1990; NGC5548 Nandra et al. 1991, Chiang et al. 1999; NGC4151 Perola et al. 1986, Yaqoob & Warwick 1991; 1H0419-577 Guainazzi et al. 1998.) This may be indicative of changes in the temperature or optical depth of the Comptonizing medium, or of changes in the soft local radiation field. For instance, we can postulate that during periods of intense flux, a substantial amount of the hard X-rays are absorbed by the disk and thermalized, resulting in a source of soft photons. These will pass through the corona and Compton cool it, thereby giving rise to a steeper spectral slope. However, unless we fully understand the nature of the competition between coronal heating and Compton cooling, it is not clear what physics dominates to give the observed behaviour. The ejection model of Beloborodov (1998) can explain the observed relationship between reflection fraction and spectral index shown in Table 3. In this model, observed spectral features are due to a non-static corona, in which flares are accompanied by plasma ejection from the active regions. In other words, the bulk velocity ($`\beta =v/c`$) in the flare is $`>`$ 0. As the ‘blobs’ move away (are ejected) from the disk, a decrease in reflection is seen as a result of diminished reprocessing due to special relativistic beaming of the primary continuum radiation away from the disk. The model does not, however, account for the constancy of the iron line flux. We note that the possibility that some of the iron line and reflection may be due to some distant material such as e.g. 
a torus is ruled out for these data based on (1) the I96 and I99 findings of a non-constant, and sometimes absent, narrow core seen in long ASCA observations of MCG$``$6-30-15 in 1994 and 1996, and (2) the RXTE finding in this paper that the reflection fraction is seen to increase with flux. ### 7.1 Reflection and the iron line A major result from the present observations is the enigmatic behaviour of the iron line: the inverse proportionality between $`W_{\mathrm{K}\alpha }`$ and $`R`$, and of $`Rnorm`$ with $`F_{\mathrm{K}\alpha }`$. GF91 predict that the bulk of the line arises from fluorescence in optically thick material. In this simple reflection picture, we would expect $`R`$ and $`W_{\mathrm{K}\alpha }`$ to be proportional to each other provided (1) the Compton reflection continuum does not dominate the iron line region, (2) the state of the illuminated region does not change, and (3) the primary continuum has a fixed spectral shape. We note for the last point that GF91 point out that differences in the photon index up to $`\mathrm{\Delta }\mathrm{\Gamma }0.2`$ (as compared to our flux-correlated findings of $`\mathrm{\Delta }\mathrm{\Gamma }0.06`$) will contribute less than a 10 per cent effect to $`W_{\mathrm{K}\alpha }`$. This lack of proportionality between $`W_{\mathrm{K}\alpha }`$ and $`R`$ (and $`Rnorm`$ with $`F_{\mathrm{K}\alpha }`$), in conjunction with an apparently constant iron line, may point at changes to the ionization of the disk in MCG$``$6-30-15, which is explored further below. We note that tentative evidence for observed changes in $`F_{\mathrm{K}\alpha }`$ does exist during time intervals surrounding flare events. Such variability is clear from the ASCA analysis (I99). It is possible that $`F_{\mathrm{K}\alpha }`$ changes on time scales shorter than is resolvable from time-averaged spectra; in other words, changes to $`F_{\mathrm{K}\alpha }`$ may only become resolvable during bright flares. We note that Chiang et al. 
(1999) find similar results for the constancy of the iron line and the inverse proportionality between $`W_{\mathrm{K}\alpha }`$ and $`R`$ in their multi-wavelength campaign of the Seyfert 1 galaxy NGC5548. ### 7.2 A simple model for the observed spectral variations We propose the following model in order to explain some of the enigmatic properties of the observed variability phenomena. Spectral variability is no doubt complex, and does not conform to the present picture of a cold disk geometry for MCG$``$6-30-15. If however the variable emission from MCG$``$6-30-15 is from a part of the disc which is more ionized, say with an ionization parameter $`\xi 100`$ (see spectra in Ross & Fabian 1993, Ross et al. 1999), then the reflection continuum will respond to the flux while the iron line does so only weakly. At that ionization parameter the iron line can be resonantly scattered by the matter in the surface of the disc and its energy lost to the Auger process (Ross, Fabian & Brandt 1996). The more highly ionized region could either be the innermost regions of the disc, perhaps within say $`6r_\mathrm{g}`$, or the regions directly beneath the most energetic flares. We note that flux-correlated changes in the surface density of the disk can lead to changes in ionization state without requiring large changes in luminosity (Young et al. 1999, in preparation). To illustrate in the context of the RXTE light curve shown in Fig. 1, assume that the flux below 10 $`\mathrm{ct}\mathrm{s}^{-1}`$ reflects physical processes that occur within the radius range $`6`$–$`40r_g`$. Next, assume that this is enhanced in the $`>10\mathrm{ct}\mathrm{s}^{-1}`$ variability by flares within $`6r_g`$, where the Auger destruction effect becomes important. Accordingly, this will lead to observable changes in reflection (i.e. stronger reflection during the higher flux periods), with minimal changes in $`F_{\mathrm{K}\alpha }`$. 
We note that our interpretation that variability largely comes from within the innermost stable orbit for a Schwarzschild black hole may be consistent with the scenario for a very active corona (and hence strong hard X-ray emission) within $`6r_g`$, proposed by Krolik (1999). In this model, magnetic fields within the radius of marginal stability are strong and amplified through shearing of their footpoints, which can enhance variability. ### 7.3 Implications for Mass Estimates #### 7.3.1 Constraints from spectral studies The constancy of the iron line on day-to-day scales suggests that the variability timescales we are naively probing (i.e. the observed periods over which dramatic flux changes occur) are, in the ‘standard’ scenario, much larger than the light-crossing time of the fluorescing region. In other words, on a naive model, slow changes would imply much larger crossing times and hence a large fluorescing region. A light-crossing time of the fluorescing region larger than $``$ 50 ks (assuming an average radius $`20r_s`$) leads to an estimate for the black hole mass $`10^8\mathrm{M}_{}`$. Reynolds (1999) points out however that the bulge/hole mass relationship of Magorrian et al. (1998) implies a much lower mass estimate for MCG$``$6-30-15, by an order of magnitude, of about $`10^7\mathrm{M}_{}`$. In the scenario of the simple model presented above, and given the evidence for short timescale variability of the iron line (here and I99) as well as the location of the flare line found in I99, the constancy of the line instead suggests that the region whose variability we are probing is much smaller than the fluorescing region, which reconciles the above mass problem. #### 7.3.2 PDS : Analogies with Galactic Black Hole Candidates Of further interest is the apparent break in the power spectrum of MCG$``$6-30-15 seen in both the RXTE and ASCA data. The origin of the break is not yet known (but see e.g. 
Edelson & Nandra 1999; Poutanen & Fabian 1999; Kazanas, Hua, & Titarchuk 1997; and Cui et al. 1997 for possible explanations), but it does provide a useful means to determine the black hole mass, through scaling from similar breaks in the power spectrum of the famous galactic black hole candidate (GBH) Cygnus X-1, and other objects like it. The behaviour of the PDS in MCG$``$6-30-15 is not unlike that of GBHs in the ‘low’ (hard) state (see e.g. Belloni & Hasinger 1990; Miyamoto et al. 1992; van der Klis 1995). Power law slopes (of the form $`f^\alpha `$) of order $`\alpha \approx -1`$ to $`-2`$ are observed at high frequencies and flatten to $`\alpha \approx 0`$ at lower frequencies. If we bridge the gap between AGNs and GBHs and assume that similar physics is at play, we can make predictions for the black hole mass in MCG$``$6-30-15 (using the values for the cutoff frequencies $`f_{br}`$) by a simple scaling relation with Cygnus X-1. Belloni & Hasinger (1990) report that $`f_{br}`$ 0.04-0.4 Hz for Cygnus X-1; for MCG$``$6-30-15, we find evidence that $`f_{br}`$ $`4`$–$`5\times 10^{-6}`$ Hz. The resulting ratio between the 2 cutoff frequencies is $`10^4`$–$`10^5`$. Herrero et al. (1995) argue that the black hole mass in Cygnus X-1 is $`10\mathrm{M}_{}`$, which leads us to conclude that the mass of the black hole in MCG$``$6-30-15 is $`10^5`$–$`10^6\mathrm{M}_{}`$, smaller than anticipated. Our mass estimate agrees with that of Hayashida et al. (1998) and Nowak & Chiang (1999), who also used the break frequency and scaling arguments. However, such mass estimates should be treated with extreme caution. The break frequencies in any one given GBHC can vary by one or two orders of magnitude depending upon the exact flux/spectral state of the source. Given that we do not know how to map AGN spectral states into analogous GBHC states, the mass estimate derived above (and that of Nowak & Chiang 1999) will also be uncertain by up to two orders of magnitude. 
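The scaling arithmetic can be written out explicitly; the numbers below are those quoted in the text, and the inverse-mass scaling of the break frequency is the stated assumption:

```python
# Assume the PDS break frequency scales as f_br ∝ 1/M_BH, and scale
# from Cygnus X-1 using the values quoted in the text.
m_cyg = 10.0                              # M_sun (Herrero et al. 1995)
f_br_cyg_lo, f_br_cyg_hi = 0.04, 0.4      # Hz (Belloni & Hasinger 1990)
f_br_mcg = 4.5e-6                         # Hz, midpoint of the 4-5e-6 Hz break

m_lo = m_cyg * f_br_cyg_lo / f_br_mcg     # lower end of the mass range
m_hi = m_cyg * f_br_cyg_hi / f_br_mcg     # upper end of the mass range
```

Both ends of the range come out near $`10^5`$–$`10^6\mathrm{M}_{}`$, which is the estimate quoted above; the two-orders-of-magnitude caveat in the text applies directly to this one-line scaling.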
Additionally, it is not entirely clear what timescales to identify the cutoff frequency with. A small black hole mass (i.e. $`<`$ $`2\times 10^6\mathrm{M}_{}`$) would also imply the presence of a super-Eddington black hole in MCG$``$6-30-15. #### 7.3.3 A possible 33 hour period Finally, we address what implications a 33 hr period would have on the black hole mass in MCG$``$6-30-15. If we make the assumption that this is the orbital time scale for e.g. a flare to circumnavigate the black hole in MCG$``$6-30-15, then we can estimate the mass via the relation: $$M_{BH}(R)=3.1\times 10^7t_{orb}(\frac{R}{10R_s})^{-1.5}M_{}$$ (7) where $`R`$ is the distance from the center, $`R_s=2r_g`$ is the Schwarzschild radius (the gravitational radius of the black hole being $`r_g=GM/c^2`$), and $`t_{orb}`$ is in days. The diskline model constrains $`R_{in}`$ and $`R_{out}`$ assuming some power law emissivity function ($`R^{-\alpha }`$) that declines towards larger radii. (We note that beyond $`r_{out}`$ the line emission is negligible.) Accordingly, we expect that most of the power is concentrated in the inner radii. Using time averaged ASCA data, I96 and I99 constrain $`r_{in}=(6.7\pm 0.8)r_g`$ for MCG$``$6-30-15. This combined with $`t_{orb}`$ 33hr (= 1.375 days) gives a mass for the black hole in MCG$``$6-30-15 of $``$ $`2.6\times 10^7\mathrm{M}_{}`$. ## 8 Conclusion We summarize below the spectral and timing results of this paper. It is clear that complicated processes are present, the nature of which is not obviously apparent, and which may prove to be a challenge to present theoretical models. $``$ Hardness ratios reveal that spectral variability may be largely attributed to changes in the intrinsic photon index. In particular, spectral hardening is observed during periods of diminished intensity in comparisons of the (7.5$``$10 keV) upper continuum, (5$``$7 keV) iron line region, and (10$``$20 keV) reflection hump with the (3$``$4.5 keV) lower continuum. 
Particularly hard spectra are noted in a time interval corresponding to $`260\mathrm{ks}`$ that begins shortly after the hard RXTE flare. $``$ We find from flux correlated studies that changes to the photon index are evident. In particular, $`\mathrm{\Gamma }_{3-10}`$ steepens while $`\mathrm{\Gamma }_{10-20}`$ flattens with flux; for a doubling of the flux, $`\mathrm{\Delta }\mathrm{\Gamma }_{3-10}0.06`$ and $`\mathrm{\Delta }\mathrm{\Gamma }_{10-20}0.3`$. This, coupled with the findings for a constant iron line, can contribute to the reduced fractional variability in the iron line band noted by Reynolds (1999). We note that changes to $`\mathrm{\Gamma }_{10-20}`$ are significant only with large changes in flux, whereas changes in $`\mathrm{\Gamma }_{3-10}`$ are apparent even with subtle changes in flux. This point is well illustrated in detailed studies of the time intervals surrounding the ASCA and RXTE flares. Nevertheless, it would appear that changes in the intrinsic power law slope (reflected by changes to $`\mathrm{\Gamma }_{3-10}`$) and in reflection (reflected by changes to $`\mathrm{\Gamma }_{10-20}`$) both contribute in varying degrees to the overall spectral variability. $``$ We find curiously that the iron line flux is consistent with being constant over large time intervals on the order of days (but this is not the case on much shorter time intervals of order $`<`$ 12 ks), and the equivalent width anticorrelates with the continuum flux. (Observed changes to $`F_{\mathrm{K}\alpha }`$ on short time intervals are summarized in the next point.) This may point at evidence for a partially ionized disk. $``$ From concentrated studies of the time intervals surrounding the RXTE and ASCA flares, we find tentatively that $`F_{\mathrm{K}\alpha }`$ shows a noticeable increase after the flare events. (This is less significant for the periods of the RXTE flare.) This may be an indication that we are witnessing some type of response to the flare. 
We note that $`F_{\mathrm{K}\alpha }`$ is comparably high ($``$ factor of 1.7 larger) during the times surrounding the RXTE flare, and times following the ASCA flare, in contrast to the time averaged analysis of flux-correlated data. $``$ We find tentative evidence from cross correlation techniques for a possible phase lag of $`\varphi 0.6`$ rad between the (7.5$``$10 keV) upper continuum and both the (5$``$7 keV) iron line band and (3$``$4.5 keV) lower continuum. $``$ CCFs further reveal possible time lags (time delays $`<`$ 1 ks) between the (10$``$20 keV) reflection hump and the iron line band, and between the reflection hump and the lower continuum. $``$ We report an apparent break (from $`f^0`$ to $``$ $`f^{-1.5}`$) in the power spectrum of MCG$``$6-30-15 at $`4`$–$`5\times 10^{-6}`$ Hz ($`56`$–$`70`$ hrs) seen by both ASCA and RXTE . Scaling with the mass of the GBH Cygnus X-1 gives a smaller than expected black hole mass of $`10^5`$–$`10^6\mathrm{M}_{}`$ for MCG$``$6-30-15. However, this is unlikely to be a proper estimate of the mass of the black hole in MCG$``$6-30-15. (A black hole mass $`<`$ $`2\times 10^6\mathrm{M}_{}`$ would make the black hole in MCG$``$6-30-15 super-Eddington.) $``$ We report on the possibility of a 33 hr period seen in both the ASCA and RXTE light curves. This, combined with a value of $`7r_g`$ for the inner radius, implies a black hole mass $`2.6\times 10^7\mathrm{M}_{}`$ for MCG$``$6-30-15. ## ACKNOWLEDGEMENTS We thank Juri Poutanen for useful discussions about cross correlation techniques. We thank all the members of the RXTE GOF for answering our inquiries in such a timely manner. JCL thanks the Isaac Newton Trust, the Overseas Research Studentship programme (ORS) and the Cambridge Commonwealth Trust for support. ACF thanks the Royal Society for support. KI and WNB thank PPARC and NASA RXTE grant NAG5-6852 respectively for support. CSR thanks the National Science Foundation for support under grant AST9529175, and NASA for support under the Long Term Space Astrophysics grant NASA-NAG-6337. 
CSR also acknowledges support from Hubble Fellowship grant HF-01113.01-98A awarded by the Space Telescope Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555.
# Experimental Constraints on Four-Neutrino Mixing ## I Introduction Neutrino oscillations were proposed by B. Pontecorvo more than forty years ago in analogy with $`K^0`$–$`\overline{K}^0`$ oscillations. In 1967 B. Pontecorvo predicted the possibility that the flux of solar $`\nu _e`$’s could be suppressed because of neutrino oscillations . About one year later R. Davis and collaborators reported the first measurement of the Homestake <sup>37</sup>Cl$`(\nu _e,e^{-})`$<sup>37</sup>Ar radiochemical detector, which gave an upper limit for the flux of solar $`\nu _e`$’s on the Earth significantly smaller than the prediction of the existing Standard Solar Model . This was the first experimental indication in favor of neutrino oscillations. It is interesting to notice that in the 1967 paper B. Pontecorvo introduced the concept of “sterile” neutrinos. He considered the possibility of oscillations of left-handed neutrinos created in weak interaction processes into left-handed “antineutrino” states, quanta of the right-handed component of the neutrino field that does not participate in weak interactions. He called these states “sterile”, as opposed to the usual “active” neutrinos. He also pointed out that in this case the mass eigenstates are Majorana particles. As we will see later, the present experimental data indicate the existence of at least one sterile neutrino. In 1969 V. Gribov and B. Pontecorvo wrote a famous paper in which they formulated the theory of $`\nu _e\nu _\mu `$ oscillations. In 1976 S.M. Bilenky and B. Pontecorvo introduced the general scheme with Dirac and Majorana neutrino mass terms, laying the foundations of the theory of neutrino mixing. The early studies of neutrino mixing and neutrino oscillations are beautifully summarized in the 1978 review of S.M. Bilenky and B. Pontecorvo . Today neutrino oscillations are the subject of intensive experimental and theoretical research . 
This beautiful quantum mechanical phenomenon provides information on the masses and mixing of neutrinos and is considered to be one of the best ways to explore the physics beyond the Standard Model. Indeed, the smallness of neutrino masses may be due to the existence of a very large energy scale at which the conservation of lepton number is violated and small Majorana neutrino masses are generated through the see-saw mechanism or through non-renormalizable interaction terms in the effective Lagrangian of the Standard Model . The best evidence in favor of the existence of neutrino oscillations has been recently provided by the measurement in the Super-Kamiokande experiment of an up–down asymmetry of high-energy $`\mu `$-like events generated by atmospheric neutrinos: $$𝒜_\mu =(D_\mu -U_\mu )/(D_\mu +U_\mu )=0.311\pm 0.043\pm 0.01.$$ (1) Here $`D_\mu `$ and $`U_\mu `$ are, respectively, the number of downward-going and upward-going events, corresponding to the zenith angle intervals $`0.2<\mathrm{cos}\theta <1`$ and $`-1<\mathrm{cos}\theta <-0.2`$. Since the fluxes of high-energy downward-going and upward-going atmospheric neutrinos are predicted to be equal with high accuracy on the basis of geometrical arguments , the Super-Kamiokande evidence in favor of neutrino oscillations is model-independent. It provides a confirmation of the indications in favor of oscillations of atmospheric neutrinos found through the measurement of the ratio of $`\mu `$-like and $`e`$-like events (Kamiokande, IMB, Soudan 2, Super-Kamiokande) and through the measurement of upward-going muons produced by neutrino interactions in the rock below the detector (Super-Kamiokande, MACRO) . 
Large $`\nu _\mu \nu _e`$ oscillations of atmospheric neutrinos are excluded by the absence of an up–down asymmetry of high-energy $`e`$-like events generated by atmospheric neutrinos and detected in the Super-Kamiokande experiment ($`𝒜_e=0.036\pm 0.067\pm 0.02`$) and by the negative result of the CHOOZ long-baseline $`\overline{\nu }_e`$ disappearance experiment . Therefore, the atmospheric neutrino anomaly consists in the disappearance of muon neutrinos and can be explained by $`\nu _\mu \nu _\tau `$ and/or $`\nu _\mu \nu _s`$ oscillations (here $`\nu _s`$ is a sterile neutrino that does not take part in weak interactions). Other indications in favor of neutrino oscillations have been obtained in solar neutrino experiments (Homestake, Kamiokande, GALLEX, SAGE, Super-Kamiokande) and in the LSND experiment . The flux of electron neutrinos measured in all five solar neutrino experiments is substantially smaller than the one predicted by the Standard Solar Model, and a comparison of the data of different experiments indicates an energy dependence of the solar $`\nu _e`$ suppression, which represents rather convincing evidence in favor of neutrino oscillations . The disappearance of solar electron neutrinos can be explained by $`\nu _e\nu _\mu `$ and/or $`\nu _e\nu _\tau `$ and/or $`\nu _e\nu _s`$ oscillations . The accelerator LSND experiment is the only one that claims the observation of neutrino oscillations in specific appearance channels: $`\overline{\nu }_\mu \overline{\nu }_e`$ and $`\nu _\mu \nu _e`$. Since the appearance of neutrinos with a different flavor represents the true essence of neutrino oscillations, the LSND evidence is extremely interesting and its confirmation (or disproof) by other experiments should receive high priority in future research. Four such experiments have been proposed and are under study: BooNE at Fermilab, I-216 at CERN, ORLaND at Oak Ridge and NESS at the European Spallation Source . 
Among these proposals only BooNE is approved and will start in 2001. Neutrino oscillations occur if neutrinos are massive and mixed particles , i.e. if the left-handed components $`\nu _{\alpha L}`$ of the flavor neutrino fields are superpositions of the left-handed components $`\nu _{kL}`$ ($`k=1,\mathrm{},N`$) of neutrino fields with definite mass $`m_k`$: $$\nu _{\alpha L}=\sum _{k=1}^{N}U_{\alpha k}\nu _{kL},$$ (2) where $`U`$ is a $`N\times N`$ unitary mixing matrix. From the measurement of the invisible decay width of the $`Z`$-boson it is known that the number of light active neutrino flavors is three , corresponding to $`\nu _e`$, $`\nu _\mu `$ and $`\nu _\tau `$. This implies that the number $`N`$ of massive neutrinos is greater than or equal to three. If $`N>3`$, in the flavor basis there are $`N_s=N-3`$ sterile neutrinos, $`\nu _{s_1}`$, …, $`\nu _{s_{N_s}}`$. In this case the flavor index $`\alpha `$ in Eq. (2) takes the values $`e,\mu ,\tau ,s_1,\mathrm{},s_{N_s}`$. ## II The necessity of at least three independent $`\mathrm{\Delta }𝐦^2`$’s The three evidences in favor of neutrino oscillations found in solar and atmospheric neutrino experiments and in the accelerator LSND experiment imply the existence of at least three independent neutrino mass-squared differences. This can be seen by considering the general expression for the probability of $`\nu _\alpha \nu _\beta `$ transitions in vacuum, which can be written as $$P_{\nu _\alpha \nu _\beta }=\left|\sum _{k=1}^{N}U_{\alpha k}^{}U_{\beta k}\mathrm{exp}\left(-i\frac{\mathrm{\Delta }m_{kj}^2L}{2E}\right)\right|^2,$$ (3) where $`\mathrm{\Delta }m_{kj}^2\equiv m_k^2-m_j^2`$, $`j`$ is any of the mass-eigenstate indices, $`L`$ is the distance between the neutrino source and detector and $`E`$ is the neutrino energy. 
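Eq. (3) can be evaluated numerically. Below is a minimal sketch (our own helper function, not code from the paper), with the phase written via the usual $`1.27\mathrm{\Delta }m^2[\mathrm{eV}^2]L[\mathrm{km}]/E[\mathrm{GeV}]`$ convention; for two flavors it reproduces the standard $`\mathrm{sin}^22\theta \mathrm{sin}^2(1.27\mathrm{\Delta }m^2L/E)`$ formula:

```python
import numpy as np

def osc_prob(U, dm2, L_km, E_GeV, alpha, beta):
    """Vacuum probability P(nu_alpha -> nu_beta) from Eq. (3).

    U     : N x N real or complex mixing matrix, indexed U[alpha, k]
    dm2   : Delta m^2_{k1} = m_k^2 - m_1^2 in eV^2, one entry per k
    Phase : Delta m^2 L / (2E) = 2 * 1.267 * dm2 * L[km] / E[GeV]
    """
    phases = np.exp(-2j * 1.267 * np.asarray(dm2) * L_km / E_GeV)
    amplitude = np.sum(np.conj(U[alpha]) * U[beta] * phases)
    return float(np.abs(amplitude) ** 2)

# Two-flavor check at maximal mixing: the first oscillation maximum.
theta = np.pi / 4
U2 = np.array([[np.cos(theta), np.sin(theta)],
               [-np.sin(theta), np.cos(theta)]])
dm2 = [0.0, 2.5e-3]                        # eV^2
L_max = (np.pi / 2) / (1.267 * dm2[1])     # km, maximum at E = 1 GeV
p_transition = osc_prob(U2, dm2, L_max, 1.0, 0, 1)
```

Because the common phase of the $`j`$-th mass eigenstate cancels in the modulus, only the differences $`\mathrm{\Delta }m_{kj}^2`$ are observable, which is the point exploited in the counting argument of this section.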
The range of $`L/E`$ characteristic of each type of experiment is different: $`L/E10^{11}`$–$`10^{12}\mathrm{eV}^{-2}`$ for solar neutrino experiments, $`L/E10^2`$–$`10^3\mathrm{eV}^{-2}`$ for atmospheric neutrino experiments and $`L/E1\mathrm{eV}^{-2}`$ for the LSND experiment. From Eq. (3) it is clear that neutrino oscillations are observable in an experiment only if there is at least one mass-squared difference $`\mathrm{\Delta }m_{kj}^2`$ such that $$\frac{\mathrm{\Delta }m_{kj}^2L}{2E}\gtrsim 0.1$$ (4) (the precise lower bound depends on the sensitivity of the experiment) in a significant part of the energy and source-detector distance intervals of the experiment (if the condition (4) is not satisfied, $`P_{\nu _\alpha \nu _\beta }\left|\sum _kU_{\alpha k}^{}U_{\beta k}\right|^2=\delta _{\alpha \beta }`$). Since the range of $`L/E`$ probed by the LSND experiment is the smallest one, a large mass-squared difference is needed for LSND oscillations: $$\mathrm{\Delta }m_{\mathrm{LSND}}^2\gtrsim 10^{-1}\mathrm{eV}^2.$$ (5) Specifically, the maximum likelihood analysis of the LSND data in terms of two-neutrino oscillations gives $$0.20\mathrm{eV}^2\lesssim \mathrm{\Delta }m_{\mathrm{LSND}}^2\lesssim 2.0\mathrm{eV}^2.$$ (6) Furthermore, from Eq. (3) it is clear that a dependence of the oscillation probability on the neutrino energy $`E`$ and the source-detector distance $`L`$ is observable only if there is at least one mass-squared difference $`\mathrm{\Delta }m_{kj}^2`$ such that $$\frac{\mathrm{\Delta }m_{kj}^2L}{2E}\sim 1.$$ (7) Indeed, all the phases $`\mathrm{\Delta }m_{kj}^2L/2E\gg 1`$ are washed out by the average over the energy and source-detector ranges characteristic of the experiment. 
Since a variation of the oscillation probability as a function of neutrino energy has been observed both in solar and atmospheric neutrino experiments and the ranges of $`L/E`$ characteristic of these two types of experiments are different from each other and different from the LSND range, two more mass-squared differences with different scales are needed: $`\mathrm{\Delta }m_{\mathrm{sun}}^2\sim 10^{-12}-10^{-11}\mathrm{eV}^2\text{(VO)},`$ (8) $`\mathrm{\Delta }m_{\mathrm{atm}}^2\sim 10^{-3}-10^{-2}\mathrm{eV}^2.`$ (9) The condition (8) for the solar mass-squared difference $`\mathrm{\Delta }m_{\mathrm{sun}}^2`$ has been obtained under the assumption of vacuum oscillations (VO). If the disappearance of solar $`\nu _e`$’s is due to the MSW effect , the condition $$\mathrm{\Delta }m_{\mathrm{sun}}^2\lesssim 10^{-4}\mathrm{eV}^2\text{(MSW)}$$ (10) must be fulfilled in order to have a resonance in the interior of the sun. Hence, in the MSW case $`\mathrm{\Delta }m_{\mathrm{sun}}^2`$ must be at least one order of magnitude smaller than $`\mathrm{\Delta }m_{\mathrm{atm}}^2`$. One may ask whether three different scales of neutrino mass-squared differences are needed even if the results of the Homestake solar neutrino experiment are neglected, allowing an energy-independent suppression of the solar $`\nu _e`$ flux. The answer is that the data still cannot be fitted with only two neutrino mass-squared differences, because an energy-independent suppression of the solar $`\nu _e`$ flux requires large $`\nu _e\rightarrow \nu _\mu `$ or $`\nu _e\rightarrow \nu _\tau `$ transitions generated by $`\mathrm{\Delta }m_{\mathrm{atm}}^2`$ or $`\mathrm{\Delta }m_{\mathrm{LSND}}^2`$. These transitions are forbidden by the results of the Bugey and CHOOZ reactor $`\overline{\nu }_e`$ disappearance experiments and by the non-observation of an up-down asymmetry of $`e`$-like events in the Super-Kamiokande atmospheric neutrino experiment .
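The counting argument can be checked against the rough numbers quoted above. The sketch below is purely illustrative, taking representative values at the upper end of the quoted $`L/E`$ ranges and applying condition (4):

```python
# representative L/E (in eV^-2) and Delta m^2 (in eV^2) from the ranges in the text
L_over_E = {"solar": 1e11, "atmospheric": 1e3, "LSND": 1.0}
dm2 = {"sun(VO)": 1e-11, "atm": 1e-3, "LSND": 1.0}

def observable(dm2_value, L_over_E_value, threshold=0.1):
    # condition (4): Delta m^2 L / 2E >~ 0.1
    return dm2_value * L_over_E_value / 2.0 >= threshold
```

Each experiment is sensitive only to its own mass-squared scale: the solar and atmospheric $`\mathrm{\Delta }m^2`$ give phases far below 0.1 at LSND's $`L/E`$, so a third, larger scale is needed there.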
## III Four-neutrino schemes The existence of three different scales of $`\mathrm{\Delta }m^2`$ implies that at least four light massive neutrinos must exist in nature. Here we consider the schemes with four light and mixed neutrinos, which constitute the minimal possibility that allows one to explain all the existing data with neutrino oscillations. In this case, in the flavor basis the three active neutrinos $`\nu _e`$, $`\nu _\mu `$, $`\nu _\tau `$ are accompanied by a sterile neutrino $`\nu _s`$. Let us notice that the existence of four light massive neutrinos is also favored by the possibility that active-sterile neutrino transitions in neutrino-heated supernova ejecta could enable the production of $`r`$-process nuclei. The six types of four-neutrino mass spectra with three different scales of $`\mathrm{\Delta }m^2`$ that can accommodate the hierarchy $`\mathrm{\Delta }m_{\mathrm{sun}}^2\ll \mathrm{\Delta }m_{\mathrm{atm}}^2\ll \mathrm{\Delta }m_{\mathrm{LSND}}^2`$ are shown qualitatively in Fig. III. In all these mass spectra there are two groups of close masses separated by the “LSND gap” of the order of 1 eV. In each scheme the smallest mass-squared difference corresponds to $`\mathrm{\Delta }m_{\mathrm{sun}}^2`$ ($`\mathrm{\Delta }m_{21}^2`$ in schemes I and B, $`\mathrm{\Delta }m_{32}^2`$ in schemes II and IV, $`\mathrm{\Delta }m_{43}^2`$ in schemes III and A), the intermediate one to $`\mathrm{\Delta }m_{\mathrm{atm}}^2`$ ($`\mathrm{\Delta }m_{31}^2`$ in schemes I and II, $`\mathrm{\Delta }m_{42}^2`$ in schemes III and IV, $`\mathrm{\Delta }m_{21}^2`$ in scheme A, $`\mathrm{\Delta }m_{43}^2`$ in scheme B) and the largest mass-squared difference $`\mathrm{\Delta }m_{41}^2=\mathrm{\Delta }m_{\mathrm{LSND}}^2`$ is relevant for the oscillations observed in the LSND experiment.
The six schemes are divided into four schemes of class 1 (I–IV), in which there is a group of three masses separated from an isolated mass by the LSND gap, and two schemes of class 2 (A, B), in which there are two couples of close masses separated by the LSND gap. ## IV The disfavored schemes of class 1 In the following we will show that the schemes of class 1 are disfavored by the data if also the negative results of short-baseline accelerator and reactor disappearance neutrino oscillation experiments are taken into account. Let us remark that in principle one could check which schemes are allowed by doing a combined fit of all data in the framework of the most general four-neutrino mixing scheme, with three mass-squared differences, six mixing angles and three CP-violating phases as free parameters. However, at the moment it is not possible to perform such a fit because of the enormous complications due to the presence of too many parameters and to the difficulties involved in a combined fit of the data of different experiments, which are usually analyzed by the experimental collaborations using different methods. Hence, we think that it is quite remarkable that one can exclude the schemes of class 1 with the following relatively simple procedure. Let us define the quantities $`d_\alpha `$, with $`\alpha =e,\mu ,\tau ,s`$, in the schemes of class 1 as $$d_\alpha ^{(\mathrm{I})}\equiv |U_{\alpha 4}|^2,d_\alpha ^{(\mathrm{II})}\equiv |U_{\alpha 4}|^2,d_\alpha ^{(\mathrm{III})}\equiv |U_{\alpha 1}|^2,d_\alpha ^{(\mathrm{IV})}\equiv |U_{\alpha 1}|^2.$$ (11) Physically $`d_\alpha `$ quantifies the mixing of the flavor neutrino $`\nu _\alpha `$ with the isolated neutrino, whose mass is separated from the other three by the LSND gap.
The probability of $`\nu _\alpha \rightarrow \nu _\beta `$ ($`\beta \ne \alpha `$) and $`\nu _\alpha \rightarrow \nu _\alpha `$ transitions (and the corresponding probabilities for antineutrinos) in short-baseline experiments are given by $$P_{\nu _\alpha \rightarrow \nu _\beta }=A_{\alpha ;\beta }\mathrm{sin}^2\frac{\mathrm{\Delta }m_{41}^2L}{4E},P_{\nu _\alpha \rightarrow \nu _\alpha }=1-B_{\alpha ;\alpha }\mathrm{sin}^2\frac{\mathrm{\Delta }m_{41}^2L}{4E},$$ (12) with the oscillation amplitudes $$A_{\alpha ;\beta }=4d_\alpha d_\beta ,B_{\alpha ;\alpha }=4d_\alpha (1-d_\alpha ).$$ (13) The probabilities (12) have the same form as the corresponding probabilities in the case of two-neutrino mixing, $`P_{\nu _\alpha \rightarrow \nu _\beta }=\mathrm{sin}^2(2\vartheta )\mathrm{sin}^2(\mathrm{\Delta }m^2L/4E)`$ and $`P_{\nu _\alpha \rightarrow \nu _\alpha }=1-\mathrm{sin}^2(2\vartheta )\mathrm{sin}^2(\mathrm{\Delta }m^2L/4E)`$, which have been used by all experimental collaborations for the analysis of the data in order to get information on the parameters $`\mathrm{sin}^2(2\vartheta )`$ and $`\mathrm{\Delta }m^2`$ ($`\vartheta `$ and $`\mathrm{\Delta }m^2`$ are, respectively, the mixing angle and the mass-squared difference in the case of two-neutrino mixing). Therefore, we can use the results of their analyses in order to get information on the corresponding parameters $`A_{\alpha ;\beta }`$, $`B_{\alpha ;\alpha }`$ and $`\mathrm{\Delta }m_{41}^2`$. The exclusion plots obtained in short-baseline $`\overline{\nu }_e`$ and $`\nu _\mu `$ disappearance experiments imply that $$d_\alpha \le a_\alpha ^0\text{or}d_\alpha \ge 1-a_\alpha ^0(\alpha =e,\mu ),$$ (14) with $$a_\alpha ^0=\frac{1}{2}\left(1-\sqrt{1-B_{\alpha ;\alpha }^0}\right)(\alpha =e,\mu ),$$ (15) where $`B_{e;e}^0`$ and $`B_{\mu ;\mu }^0`$ are the upper bounds, which depend on $`\mathrm{\Delta }m_{41}^2`$, of the oscillation amplitudes $`B_{e;e}`$ and $`B_{\mu ;\mu }`$ given by the exclusion plots of $`\overline{\nu }_e`$ and $`\nu _\mu `$ disappearance experiments.
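Eq. (15) is simply the small-mixing root of $`B_{\alpha ;\alpha }=4d_\alpha (1-d_\alpha )`$ evaluated at the experimental bound $`B_{\alpha ;\alpha }^0`$; a minimal sketch (the numerical bound used below is made up for illustration):

```python
import math

def small_mixing_bound(B0):
    """a_alpha^0 of Eq. (15): the smaller root of B = 4 d (1 - d) at B = B0."""
    return 0.5 * (1.0 - math.sqrt(1.0 - B0))
```

By construction $`4a_\alpha ^0(1-a_\alpha ^0)=B_{\alpha ;\alpha }^0`$, so a small disappearance bound $`B^0`$ forces $`d_\alpha `$ below $`a_\alpha ^0\approx B^0/4`$ or above $`1-a_\alpha ^0`$.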
From the exclusion curves of the Bugey reactor $`\overline{\nu }_e`$ disappearance experiment and of the CDHS and CCFR accelerator $`\nu _\mu `$ disappearance experiments it follows that $`a_e^0\lesssim 3\times 10^{-2}`$ for $`\mathrm{\Delta }m_{41}^2=\mathrm{\Delta }m_{\mathrm{LSND}}^2`$ in the LSND range (6) and $`a_\mu ^0\lesssim 0.2`$ for $`\mathrm{\Delta }m_{41}^2\gtrsim 0.4\mathrm{eV}^2`$. Therefore, the negative results of short-baseline $`\overline{\nu }_e`$ and $`\nu _\mu `$ disappearance experiments imply that $`d_e`$ and $`d_\mu `$ are either small or large (close to one). Taking into account the unitarity limit $`d_e+d_\mu \le 1`$, for each value of $`\mathrm{\Delta }m_{41}^2`$ above about $`0.3\mathrm{eV}^2`$ there are three regions in the $`d_e`$–$`d_\mu `$ plane that are allowed by the results of disappearance experiments: region SS with small $`d_e`$ and $`d_\mu `$, region LS with large $`d_e`$ and small $`d_\mu `$ and region SL with small $`d_e`$ and large $`d_\mu `$. These three regions are illustrated qualitatively by the three shadowed areas in Fig. IV. For $`\mathrm{\Delta }m_{41}^2\lesssim 0.3\mathrm{eV}^2`$ there is no constraint on the value of $`d_\mu `$ from the results of short-baseline $`\nu _\mu `$ disappearance experiments and there are two regions in the $`d_e`$–$`d_\mu `$ plane that are allowed by the results of $`\overline{\nu }_e`$ disappearance experiments: region S with small $`d_e`$ and region LS with large $`d_e`$ and small $`d_\mu `$ (the smallness of $`d_\mu `$ follows from the unitarity bound $`d_e+d_\mu \le 1`$). These two regions are illustrated qualitatively by the two shadowed areas in Fig. IV. Let us consider now the results of solar neutrino experiments, which imply a disappearance of electron neutrinos.
The survival probability of solar $`\nu _e`$’s averaged over the fast unobservable oscillations due to $`\mathrm{\Delta }m_{41}^2`$ and $`\mathrm{\Delta }m_{31}^2`$ is bounded by $$P_{\nu _e\rightarrow \nu _e}^{\mathrm{sun}}\ge d_e^2.$$ (16) Therefore, only the possibility $$d_e\le a_e^0$$ (17) is acceptable in order to explain the observed deficit of solar $`\nu _e`$’s with neutrino oscillations. Indeed, the solar neutrino data imply an upper bound for $`d_e`$, which is shown qualitatively by the vertical lines in Figs. IV and IV. It is clear that the regions LS in Figs. IV and IV are disfavored by the results of solar neutrino experiments. In a similar way, since the survival probability of atmospheric $`\nu _\mu `$’s and $`\overline{\nu }_\mu `$’s is bounded by $$P_{\nu _\mu \rightarrow \nu _\mu }^{\mathrm{atm}}\ge d_\mu ^2,$$ (18) large values of $`d_\mu `$ are incompatible with the asymmetry (1) observed in the Super-Kamiokande experiment. The upper bound for $`d_\mu `$ that follows from atmospheric neutrino data is shown qualitatively by the horizontal lines in Figs. IV and IV. It is clear that the region SL in Fig. IV, which is allowed by the results of $`\nu _\mu `$ short-baseline disappearance experiments for $`\mathrm{\Delta }m_{41}^2\lesssim 0.3\mathrm{eV}^2`$, and the large–$`d_\mu `$ part of the region S in Fig. IV are disfavored by the results of atmospheric neutrino experiments. Therefore, only the region SS in Fig. IV and the small–$`d_\mu `$ part of the region S in Fig. IV are allowed by the results of solar and atmospheric neutrino experiments. In both cases $`d_\mu `$ is small. But such small values of $`d_\mu `$ are disfavored by the results of the LSND experiment, which imply a lower bound $`A_{\mu ;e}^{\mathrm{min}}`$ for the amplitude $`A_{\mu ;e}=4d_ed_\mu `$ of $`\nu _\mu \rightarrow \nu _e`$ oscillations. Indeed, we have $$d_ed_\mu \ge A_{\mu ;e}^{\mathrm{min}}/4.$$ (19) This bound, shown qualitatively by the LSND exclusion curves in Figs. IV and IV, excludes region SS in Fig.
IV and the small-$`d_\mu `$ part of region S in Fig. IV. From Figs. IV and IV one can see in a qualitative way that in the schemes of class 1 the results of the solar, atmospheric and LSND experiments are incompatible with the negative results of short-baseline experiments. A quantitative illustration of this incompatibility is given in Fig. IV, in which the shadowed area is excluded by the bound $`d_\mu \le a_\mu ^0`$ or $`d_\mu \ge 1-a_\mu ^0`$ obtained from the exclusion plot of the short-baseline CDHS $`\nu _\mu `$ disappearance experiment. The horizontal line in Fig. IV represents the upper bound $$d_\mu \le 0.55\equiv a_\mu ^{\mathrm{SK}}$$ (20) (the vertically hatched area above the line is excluded) obtained from the Super-Kamiokande asymmetry (1) and the exclusion curve of the Bugey $`\overline{\nu }_e`$ disappearance experiment. It is clear that the results of short-baseline disappearance experiments and the Super-Kamiokande asymmetry (1) imply that $`d_\mu \le 0.55`$ for $`\mathrm{\Delta }m_{41}^2\lesssim 0.3\mathrm{eV}^2`$ and that $`d_\mu `$ is very small for $`\mathrm{\Delta }m_{41}^2\gtrsim 0.3\mathrm{eV}^2`$. These small values of $`d_\mu `$ are disfavored by the curve in Fig. IV labelled LSND + Bugey, which represents the constraint $$d_\mu \ge \frac{A_{\mu ;e}^{\mathrm{min}}}{4a_e^0}$$ (21) (the diagonally hatched area is excluded), derived from the inequality (19) using the bound (17). Hence, in the framework of the schemes of class 1 there is no range of $`d_\mu `$ that is compatible with all the experimental data. Another way of seeing the incompatibility of the experimental results with the schemes of class 1 is presented in Fig. IV, where we have plotted in the $`A_{\mu ;e}`$–$`\mathrm{\Delta }m_{41}^2`$ plane the upper bound $`A_{\mu ;e}\le 4a_e^0a_\mu ^0`$ for $`\mathrm{\Delta }m_{41}^2>0.26\mathrm{eV}^2`$ and $`A_{\mu ;e}\le 4a_e^0a_\mu ^{\mathrm{SK}}`$ for $`\mathrm{\Delta }m_{41}^2<0.26\mathrm{eV}^2`$ (solid line, the region on the right is excluded).
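The exclusion argument amounts to checking whether the interval allowed for $`d_\mu `$ by Eqs. (20)–(21) is empty. The following sketch is schematic: in the real analysis $`a_e^0`$, $`a_\mu ^{\mathrm{SK}}`$ and $`A_{\mu ;e}^{\mathrm{min}}`$ all depend on $`\mathrm{\Delta }m_{41}^2`$ and must be read off the published exclusion plots, so the numbers below are invented for illustration only.

```python
def d_mu_window(A_min, a_e0, d_mu_max):
    """Interval allowed by Eqs. (20)-(21):
    A_min / (4 a_e0) <= d_mu <= d_mu_max; returns None if empty."""
    lower = A_min / (4.0 * a_e0)
    return (lower, d_mu_max) if lower <= d_mu_max else None

# illustrative only: the window closes once the LSND lower bound
# exceeds the disappearance upper bound
open_window = d_mu_window(A_min=1e-3, a_e0=3e-2, d_mu_max=0.55)
closed_window = d_mu_window(A_min=4e-2, a_e0=3e-2, d_mu_max=0.2)
```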
One can see that this constraint is incompatible with the LSND-allowed region (shadowed area). Summarizing, we have reached the conclusion that the four schemes of class 1 shown in Fig. III are disfavored by the data. ## V The favored schemes of class 2 The four-neutrino schemes of class 2 are compatible with the results of all neutrino oscillation experiments if the mixing of $`\nu _e`$ with the two mass eigenstates responsible for the oscillations of solar neutrinos ($`\nu _3`$ and $`\nu _4`$ in scheme A and $`\nu _1`$ and $`\nu _2`$ in scheme B) is large and the mixing of $`\nu _\mu `$ with the two mass eigenstates responsible for the oscillations of atmospheric neutrinos ($`\nu _1`$ and $`\nu _2`$ in scheme A and $`\nu _3`$ and $`\nu _4`$ in scheme B) is large. This is illustrated qualitatively in Figs. V and V, as we are going to explain. Let us define the quantities $`c_\alpha `$, with $`\alpha =e,\mu ,\tau ,s`$, in the schemes A and B as $$c_\alpha ^{(\mathrm{A})}\equiv \underset{k=1,2}{}|U_{\alpha k}|^2,c_\alpha ^{(\mathrm{B})}\equiv \underset{k=3,4}{}|U_{\alpha k}|^2.$$ (22) Physically $`c_\alpha `$ quantifies the mixing of the flavor neutrino $`\nu _\alpha `$ with the two massive neutrinos whose $`\mathrm{\Delta }m^2`$ is relevant for the oscillations of atmospheric neutrinos ($`\nu _1`$, $`\nu _2`$ in scheme A and $`\nu _3`$, $`\nu _4`$ in scheme B). The exclusion plots obtained in short-baseline $`\overline{\nu }_e`$ and $`\nu _\mu `$ disappearance experiments imply that $$c_\alpha \le a_\alpha ^0\text{or}c_\alpha \ge 1-a_\alpha ^0(\alpha =e,\mu ),$$ (23) with $`a_\alpha ^0`$ given in Eq. (15). The shadowed areas in Figs. V and V illustrate qualitatively the regions in the $`c_e`$–$`c_\mu `$ plane allowed by the negative results of short-baseline $`\overline{\nu }_e`$ and $`\nu _\mu `$ disappearance experiments.
Figure V is valid for $`\mathrm{\Delta }m_{41}^2\gtrsim 0.3\mathrm{eV}^2`$ and shows that there are four regions allowed by the results of short-baseline disappearance experiments: region SS with small $`c_e`$ and $`c_\mu `$, region LS with large $`c_e`$ and small $`c_\mu `$, region SL with small $`c_e`$ and large $`c_\mu `$ and region LL with large $`c_e`$ and $`c_\mu `$. The quantities $`c_e`$ and $`c_\mu `$ can be both large, because the unitarity of the mixing matrix implies that $`c_\alpha +c_\beta \le 2`$ and $`0\le c_\alpha \le 1`$ for $`\alpha ,\beta =e,\mu ,\tau ,s`$. Figure V is valid for $`\mathrm{\Delta }m_{41}^2\lesssim 0.3\mathrm{eV}^2`$, where there is no constraint on the value of $`c_\mu `$ from the results of short-baseline $`\nu _\mu `$ disappearance experiments. It shows that there are two regions allowed by the results of short-baseline $`\overline{\nu }_e`$ disappearance experiments: region S with small $`c_e`$ and region L with large $`c_e`$. Let us take now into account the results of solar neutrino experiments. Large values of $`c_e`$ are incompatible with solar neutrino oscillations because in this case $`\nu _e`$ has large mixing with the two massive neutrinos responsible for atmospheric neutrino oscillations and, through the unitarity of the mixing matrix, small mixing with the two massive neutrinos responsible for solar neutrino oscillations. Indeed, in the schemes of class 2 the survival probability $`P_{\nu _e\rightarrow \nu _e}^{\mathrm{sun}}`$ of solar $`\nu _e`$’s is bounded by $$P_{\nu _e\rightarrow \nu _e}^{\mathrm{sun}}\ge c_e^2/2,$$ (24) and its possible variation $`\mathrm{\Delta }P_{\nu _e\rightarrow \nu _e}^{\mathrm{sun}}(E)`$ with neutrino energy $`E`$ is limited by $$\mathrm{\Delta }P_{\nu _e\rightarrow \nu _e}^{\mathrm{sun}}(E)\le \left(1-c_e\right)^2.$$ (25) If $`c_e`$ is large as in the LS or LL regions of Fig. V or in the L region of Fig.
V, we have $$P_{\nu _e\rightarrow \nu _e}^{\mathrm{sun}}\ge \frac{\left(1-a_e^0\right)^2}{2}\simeq \frac{1}{2},\mathrm{\Delta }P_{\nu _e\rightarrow \nu _e}^{\mathrm{sun}}(E)\le (a_e^0)^2\lesssim 10^{-3},$$ (26) for $`\mathrm{\Delta }m_{41}^2=\mathrm{\Delta }m_{\mathrm{LSND}}^2`$ in the LSND range (6). Therefore $`P_{\nu _e\rightarrow \nu _e}^{\mathrm{sun}}`$ is bigger than 1/2 and practically does not depend on neutrino energy. Since this is incompatible with the results of solar neutrino experiments interpreted in terms of neutrino oscillations, we conclude that the regions LS and LL in Fig. V and the region L in Fig. V are disfavored by solar neutrino data, as illustrated qualitatively by the vertical exclusion lines in Figs. V and V. Let us consider now the results of atmospheric neutrino experiments. Small values of $`c_\mu `$ are incompatible with atmospheric neutrino oscillations because in this case $`\nu _\mu `$ has small mixing with the two massive neutrinos responsible for atmospheric neutrino oscillations. Indeed, the survival probability of atmospheric $`\nu _\mu `$’s is bounded by $$P_{\nu _\mu \rightarrow \nu _\mu }^{\mathrm{atm}}\ge \left(1-c_\mu \right)^2,$$ (27) and it can be shown that the Super-Kamiokande asymmetry (1) and the exclusion curve of the Bugey $`\overline{\nu }_e`$ disappearance experiment imply the lower bound $$c_\mu \ge 0.45\equiv b_\mu ^{\mathrm{SK}}.$$ (28) This limit is depicted qualitatively by the horizontal exclusion lines in Figs. V and V. Therefore, we conclude that the regions SS and LS in Fig. V and the small-$`c_\mu `$ parts of the regions S and L in Fig. V are disfavored by atmospheric neutrino data. Finally, let us consider the results of the LSND experiment. In the schemes of class 2 the amplitude of short-baseline $`\nu _\mu \rightarrow \nu _e`$ oscillations is given by $$A_{\mu ;e}=4\left|\underset{k=1,2}{}U_{ek}U_{\mu k}^{}\right|^2=4\left|\underset{k=3,4}{}U_{ek}U_{\mu k}^{}\right|^2.$$ (29) The second equality in Eq. (29) is due to the unitarity of the mixing matrix.
Using the Cauchy–Schwarz inequality we obtain $$c_ec_\mu \ge A_{\mu ;e}^{\mathrm{min}}/4\text{and}\left(1-c_e\right)\left(1-c_\mu \right)\ge A_{\mu ;e}^{\mathrm{min}}/4,$$ (30) where $`A_{\mu ;e}^{\mathrm{min}}`$ is the minimum value of the oscillation amplitude $`A_{\mu ;e}`$ observed in the LSND experiment. The bounds (30) are illustrated qualitatively in Figs. V and V. One can see that the results of the LSND experiment confirm the exclusion of the regions SS and LL in Fig. V and the exclusion of the small-$`c_\mu `$ part of region S and of the large-$`c_\mu `$ part of region L in Fig. V. Summarizing, if $`\mathrm{\Delta }m_{41}^2\gtrsim 0.3\mathrm{eV}^2`$ only the region SL in Fig. V, with $$c_e\le a_e^0\text{and}c_\mu \ge 1-a_\mu ^0,$$ (31) is compatible with the results of all neutrino oscillation experiments. If $`\mathrm{\Delta }m_{41}^2\lesssim 0.3\mathrm{eV}^2`$ only the large-$`c_\mu `$ part of region S in Fig. V, with $$c_e\le a_e^0\text{and}c_\mu \ge b_\mu ^{\mathrm{SK}},$$ (32) is compatible with the results of all neutrino oscillation experiments. Therefore, in any case $`c_e`$ is small and $`c_\mu `$ is large. However, it is important to notice that, as shown clearly in Figs. V and V, the inequalities (30) following from the LSND observation of short-baseline $`\nu _\mu \rightarrow \nu _e`$ oscillations imply that $`c_e`$, albeit small, has a lower bound and $`c_\mu `$, albeit large, has an upper bound: $$c_e\ge A_{\mu ;e}^{\mathrm{min}}/4\text{and}c_\mu \le 1-A_{\mu ;e}^{\mathrm{min}}/4.$$ (33) ## VI Conclusions We have seen that only the two four-neutrino schemes A and B of class 2 in Fig. III are compatible with the results of all neutrino oscillation experiments. Furthermore, we have shown that the quantities $`c_e`$ and $`c_\mu `$ in these two schemes must be, respectively, small and large. Physically $`c_\alpha `$, defined in Eq.
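The bounds (30) hold for any unitary mixing matrix, which makes them easy to verify numerically. The sketch below is a generic check with a random unitary $`U`$ (rows labeled by flavor), using the normalization $`A_{\mu ;e}=4|_{k=1,2}U_{ek}U_{\mu k}^{}|^2`$ consistent with Eq. (12):

```python
import numpy as np

rng = np.random.default_rng(0)
# a random 4x4 unitary matrix from the QR decomposition of a complex Gaussian matrix
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(M)

e, mu = 0, 1  # flavor row indices
A_mu_e = 4.0 * abs(np.sum(U[e, :2] * np.conj(U[mu, :2]))) ** 2
c_e = float(np.sum(np.abs(U[e, :2]) ** 2))
c_mu = float(np.sum(np.abs(U[mu, :2]) ** 2))

# Cauchy-Schwarz: A/4 <= c_e c_mu, and by unitarity also A/4 <= (1 - c_e)(1 - c_mu)
```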
(22), quantifies the mixing of the flavor neutrino $`\nu _\alpha `$ with the two massive neutrinos whose $`\mathrm{\Delta }m^2`$ is relevant for the oscillations of atmospheric neutrinos ($`\nu _1`$, $`\nu _2`$ in scheme A and $`\nu _3`$, $`\nu _4`$ in scheme B). The smallness of $`c_e`$ implies that electron neutrinos do not oscillate in atmospheric and long-baseline neutrino oscillation experiments. Indeed, one can obtain rather stringent upper bounds for the probability of $`\nu _e`$ transitions into any other state and for the size of CP or T violation that could be measured in long-baseline experiments in the $`\nu _\mu \rightarrow \nu _e`$ and $`\overline{\nu }_\mu \rightarrow \overline{\nu }_e`$ channels. Let us consider now the effective Majorana mass in neutrinoless double-$`\beta `$ decay, $$|m|=\left|\underset{k=1}{\overset{4}{}}U_{ek}^2m_k\right|.$$ (34) In scheme A, since $`c_e`$ is small, the effective Majorana mass is approximately given by $$|m|\simeq \left|U_{e3}^2+U_{e4}^2\right|m_4\simeq \left|U_{e3}^2+U_{e4}^2\right|\sqrt{\mathrm{\Delta }m_{\mathrm{LSND}}^2}.$$ (35) Therefore, in scheme A the effective Majorana mass can be as large as $`\sqrt{\mathrm{\Delta }m_{\mathrm{LSND}}^2}`$. Since $`c_e`$ is small, in scheme A we have $`|U_{e3}|^2\simeq \mathrm{cos}^2\vartheta _{\mathrm{sun}}`$ and $`|U_{e4}|^2\simeq \mathrm{sin}^2\vartheta _{\mathrm{sun}}`$, where $`\vartheta _{\mathrm{sun}}`$ is the mixing angle determined from the two-generation analysis of solar neutrino data. In the case of the small mixing angle MSW solution of the solar neutrino problem $`|U_{e4}|\ll |U_{e3}|\simeq 1`$ and from Eq. (35) one can see that $`|m|\simeq \sqrt{\mathrm{\Delta }m_{\mathrm{LSND}}^2}`$. In the case of the large mixing angle MSW solution, $`\mathrm{sin}^22\vartheta _{\mathrm{sun}}`$ is constrained to be less than about 0.97 at 99% CL and the effective Majorana mass lies in the range $`7\times 10^{-2}\mathrm{eV}\lesssim |m|\lesssim 1.4\mathrm{eV}`$.
In the case of the vacuum oscillation solution, $`\mathrm{sin}^22\vartheta _{\mathrm{sun}}`$ can be as large as one and there is no lower bound for $`|m|`$. If future measurements show the correctness of a large mixing angle solution of the solar neutrino problem (due to vacuum oscillations or the MSW effect), the measurement of $`|m|`$ would give information on the value of a Majorana phase in the mixing matrix $`U`$, which does not contribute to neutrino oscillations. In scheme B the contribution of the “heavy” neutrino masses $`m_3`$ and $`m_4`$ to the effective Majorana mass is strongly suppressed: $$|m|_{34}\equiv \left|U_{e3}^2m_3+U_{e4}^2m_4\right|\le c_em_4\lesssim a_e^0\sqrt{\mathrm{\Delta }m_{\mathrm{LSND}}^2}\simeq 2\times 10^{-2}\mathrm{eV}.$$ (36) Therefore, if future neutrinoless double-$`\beta `$ decay experiments find that $`|m|\gtrsim 2\times 10^{-2}\mathrm{eV}`$, it would mean that scheme B is excluded, or that neutrinoless double-$`\beta `$ decay proceeds through other mechanisms not involving the effective Majorana mass $`|m|`$. Finally, if the upper bound $`N_\nu ^{\mathrm{BBN}}<4`$ for the effective number of neutrinos in Big-Bang Nucleosynthesis is correct, the mixing of $`\nu _s`$ with the two mass eigenstates responsible for the oscillations of atmospheric neutrinos must be very small. In this case atmospheric neutrinos oscillate only in the $`\nu _\mu \rightarrow \nu _\tau `$ channel and solar neutrinos oscillate only in the $`\nu _e\rightarrow \nu _s`$ channel. This is very important because it implies that the two-generation analyses of solar and atmospheric neutrino data give correct information on neutrino mixing in the two four-neutrino schemes A and B. Otherwise, it will be necessary to reanalyze the solar and atmospheric neutrino data using a general formalism that takes into account the possibility of simultaneous transitions into active and sterile neutrinos in solar and atmospheric neutrino experiments. ## Acknowledgments I would like to thank S.M.
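The sensitivity of Eq. (34) to a Majorana phase, noted above for the large-mixing solutions, can be made explicit with a small sketch (hypothetical values: only the two "heavy" states of scheme A are kept, with $`m_3\simeq m_4\simeq \sqrt{\mathrm{\Delta }m_{\mathrm{LSND}}^2}`$ set to 1 eV and maximal solar mixing):

```python
import numpy as np

def effective_majorana_mass(U_e_row, masses):
    """|m| = |sum_k U_ek^2 m_k| of Eq. (34); U_ek may carry Majorana phases."""
    U_e_row = np.asarray(U_e_row, dtype=complex)
    return float(np.abs(np.sum(U_e_row ** 2 * np.asarray(masses))))

theta = np.pi / 4   # maximal solar mixing (illustrative)
m = [1.0, 1.0]      # m_3 ~ m_4 ~ 1 eV (illustrative)
no_phase = effective_majorana_mass([np.cos(theta), np.sin(theta)], m)
max_phase = effective_majorana_mass([np.cos(theta), 1j * np.sin(theta)], m)
# a relative Majorana phase of pi/2 (exp(2 i phi) = -1) cancels |m| entirely,
# which is why the vacuum oscillation solution admits no lower bound on |m|
```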
Bilenky for his friendship, for teaching me a lot of physics, for many interesting and stimulating discussions and for a fruitful collaboration lasting several years. I would also like to thank W. Grimus and T. Schwetz for an enjoyable collaboration on the topics presented in this report.
# Observational Test of Environmental Effects on The Local Group Dwarf Spheroidal Galaxies ## 1 Introduction Recent observations have been revealing the physical properties of the Local Group dwarf spheroidal galaxies (dSphs). The dSphs have luminosities of order $`10^5`$–$`10^7L_{}`$, and are characterized by their low surface brightnesses (Gallagher & Wyse 1994 for review). The observations of such low-luminosity objects are important for several reasons. One of them is that the observational data allow us to examine environmental effects in detail, because such weakly bound objects may be easily affected by their environments. For the Local Group dSphs, the tidal forces exerted by the Galaxy or M31 are likely to be the most important environmental effects. In fact, for example, Kroupa (1997) and Klessen & Kroupa (1998) theoretically discussed the fate of dwarf satellite galaxies based on the tidal effects, and Bellazzini et al. (1996) presented observational support by examining correlations between surface brightness and tidal force (but see Hirashita, Kamaya & Takeuchi 1999). If the tidal forces really have major effects on the dSphs, a dwarf galaxy closer to a giant galaxy (in the Local Group, the Galaxy or M31) should be more disturbed and have a more extended surface brightness profile. In this paper, we independently examine this point from the observational point of view by introducing a “compactness parameter” derived from the core radius of the surface brightness profile (§3.1). Since new data on the companions of M31 have recently become available (e.g., Armandroff et al. 1998; Caldwell 1999; Grebel & Guhathakurta 1999; Hopp et al. 1999), we use these dSphs as well as those surrounding the Galaxy. This paper is organized as follows. First of all, in the next section, we present the sample and data. Then, we introduce the “compactness parameter” and present the result of our analysis in §3.
In §4, the dark matter problem for the dSphs is discussed based on the result in §3. Finally, we summarize the content of this paper in §5. ## 2 Sample and Data The physical parameters ($`V`$ band absolute magnitude, core radius, and galactocentric distance from the parent galaxy) of the Local Group dSphs in Mateo (1998) are used, except for And V, VI, VII. We refer to Caldwell (1999) for these three dSphs. The adopted quantities are presented in Table 1. The galactocentric distances for the companions of our Galaxy are derived from their heliocentric distances (Mateo 1998 and references therein). For the companions of M31, we calculate the distances from M31 by using both their projected and heliocentric distances, taking into account the distance from M31 to us (770 kpc; Mateo 1998), and these are presented in the column of $`R_{GC}`$. ## 3 Results ### 3.1 Definition of compactness parameter $`C`$ The physical parameters (luminosity, radius, and velocity dispersion) of the dwarf elliptical galaxies (dEs) and dSphs as well as normal elliptical galaxies are known to correlate (e.g., Peterson & Caldwell 1993). Since the dSph sample shows a more significant scatter in the correlation than the other ellipticals (Caldwell et al. 1992), we examine whether the scatter is caused by the environmental effect from the parent galaxies. For this purpose, we define the “compactness parameter” by utilizing the relation between core radius ($`r_c`$) and $`V`$ band absolute magnitude ($`M_V`$). We present the data plotted on the $`\mathrm{log}M_V`$–$`\mathrm{log}r_c`$ plane in Figure 1. The locus of dwarf elliptical galaxies in Peterson & Caldwell (1993) is also shown by the dotted square marked with dEs. In the following, we present the definition of the “compactness parameter”.
First of all, we determine the standard core radius ($`r_{c,0}`$) for each dSph from the following relation, $`\mathrm{log}r_{c,0}=aM_V+b.`$ (1) Peterson & Caldwell (1993) analyzed 17 dEs and found that there is a scaling relation between their effective radii ($`R_e`$) and $`V`$ band luminosities ($`L_V`$) of the form $`L_V\propto R_e^{5.0\pm 0.5}.`$ (2) Here, we use this scaling relation to obtain the constant “$`a`$” in the equation (1). Although we use core radii unlike Peterson & Caldwell (1993) (they used effective radii), this has little effect on the following result since there is only a small difference between core radii and effective radii (e.g., Caldwell 1999). Assuming that $`L_V\propto r_{c,0}^{5.0},`$ (3) we obtain the following relation between $`r_{c,0}`$ and $`M_V`$: $`\mathrm{log}r_{c,0}=-0.080M_V+1.42.`$ (4) We adopt the zero point “$`b`$” so that the averaged values of $`\mathrm{log}r_c`$ and $`M_V`$ for our sample galaxies (⟨$`\mathrm{log}r_c`$⟩ $`=2.34`$, ⟨$`M_V`$⟩ $`=-11.6`$) satisfy the above equation, though the way to determine the zero point does not matter to the following analysis. Note that we use $`r_{c,0}`$ to indicate a core radius obtained for each dSph by substituting the observed $`M_V`$ of the galaxy into the above mean relation. Finally, we define the “compactness parameter” ($`C`$) as $`C\equiv \mathrm{log}(r_c/r_{c,0}),`$ (5) where the values of $`r_c`$ are listed in Table 1. Here, we comment on the error of $`C`$ (referred to as $`\mathrm{\Delta }C`$ hereafter), which is determined from the error of $`r_\mathrm{c}`$. Since errors of $`r_\mathrm{c}`$ are presented in Mateo (1998), we find that almost all the absolute values of $`\mathrm{\Delta }C`$ are smaller than 0.1, as given in the column of $`C`$ in Table 1. It should be noted that most of the errors of $`\mathrm{log}R_{\mathrm{GC}}`$ are smaller than 0.1, as seen in Table 1. These errors are small enough to make our following discussions valid.
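The parameter is straightforward to compute; the sketch below (illustrative, not the paper's own code) adopts a slope of $`-0.080`$ in the mean relation, the sign required for Eq. (4) to pass through the quoted sample averages:

```python
import math

def compactness(r_c_pc, M_V):
    """C = log10(r_c / r_c0) with log10(r_c0) = -0.080 M_V + 1.42 (Eqs. 4-5).
    r_c_pc: observed core radius in pc; M_V: V-band absolute magnitude."""
    log_rc0 = -0.080 * M_V + 1.42
    return math.log10(r_c_pc) - log_rc0
```

A dSph at the sample averages ($`\mathrm{log}r_c=2.34`$, $`M_V=-11.6`$) gets $`C\approx 0`$; a positive $`C`$ flags a galaxy more extended than expected for its luminosity.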
If $`r_c`$ is larger than $`r_{c,0}`$, in other words, if $`C>0`$ for a dSph, we should consider that the galaxy is more extended than it should be for its luminosity. In the context of the environmental effect, we could expect a negative correlation between $`C`$ and $`R_{\mathrm{GC}}`$ for the sample dSphs, since a dSph closer to a giant galaxy should be more disturbed and more extended by the tidal effects. ### 3.2 Result We present the data of our sample dSphs plotted on the $`\mathrm{log}R_{\mathrm{GC}}`$–$`C`$ plane in Figure 2. There, the Galaxy’s companions are indicated by filled squares, and M31’s by open squares. The correlation coefficient between $`\mathrm{log}R_{\mathrm{GC}}`$ and $`C`$ with all the data is $`-0.28`$. Dividing our sample into the Galaxy’s companions and the M31’s ones, the coefficients become $`-0.36`$ and $`+0.12`$, respectively. That is, we find no significant correlation. Note that this conclusion is not altered even if a different slope of the $`\mathrm{log}r_{\mathrm{c},0}`$–$`M_V`$ relation ($`a`$ in eq.1) is adopted within a reasonable range. Although the mean value of $`C`$ for the M31 companions may be smaller than that for the Galaxy’s, this difference is not significant considering the large scatter of $`C`$. For the Galaxy’s companions, Sculptor could be seen as an exception, and if it is removed from them, there might be a correlation (the correlation coefficient could be $`-0.66`$). This may suggest that the group of so-called dSph satellites is heterogeneous, as discussed in §4.2. ## 4 Discussion ### 4.1 The dark matter problem To date, stellar velocity dispersions of dSphs have been extensively measured (e.g., Mateo et al. 1993 and references therein), which, in general, indicate too large a mass to be accounted for by the visible stars in the dSphs. In other words, dSphs generally have high mass-to-light ratios. This fact may imply the presence of dark matter (DM) in these systems (e.g., Mateo et al. 1993).
The existence of DM is also supported by the extended spatial distribution of stars in the outer regions of dSphs (Faber & Lin 1983) and by the relation between the physical quantities of the dSphs (Hirashita et al. 1999, but see Bellazzini et al. 1996). Moreover, on the basis of this DM picture, the relation between the ratio of the virial mass to the $`V`$-band luminosity and the virial mass (the $`M_{\mathrm{vir}}/L`$-$`M_{\mathrm{vir}}`$ relation) for the Local Group dSphs is naturally understood as a sequence of star formation histories in their forming phases by quasistatic collapse in the DM halo (Hirashita et al. 1998). However, the above arguments may be challenged if we consider the tidal force exerted by the Galaxy. If a dwarf galaxy orbiting a giant galaxy (the Galaxy or M31 in the Local Group) is significantly perturbed by the tides of the giant galaxy, the observed velocity dispersion of the dwarf galaxy can be larger than the equilibrium dispersion (Kuhn & Miller 1989; Kroupa 1997). In this tidal picture, the large velocity dispersions do not necessarily indicate the existence of DM. Indeed, Kleyna et al. (1998) demonstrated that Ursa Minor has a statistically significant asymmetry in its stellar distribution which can be attributed to tidal effects. In summary, two major models can explain the large stellar velocity dispersions and the large $`M_{\mathrm{vir}}/L`$ of the dSphs: tidal heating without DM, and the presence and dominance of DM. Although we cannot give a clear answer as to which of these models is more valid, we discuss this problem taking into consideration the result obtained in §3.2. 
### 4.2 Remarks based on our result #### 4.2.1 Tidal picture The absence of a correlation between $`C`$ and $`\mathrm{log}R_{\mathrm{GC}}`$ shown in §3.2 does not suggest that the tidal forces have major effects on the dSphs, irrespective of whether the resonant orbital coupling (Kuhn & Miller 1989) occurs or not. However, we emphasize that once the sample is split into the well-studied Galactic satellites on the one hand, and the satellites of M31 on the other, the values and behaviour of $`C`$ with $`R_{\mathrm{GC}}`$ are consistent with the tidal forces being important, at least for the companions of the Galaxy. Note that the orbit of a satellite may not be circular. If the orbits of the dSphs are elliptical, their present $`R_{\mathrm{GC}}`$’s might not reflect their $`R_{\mathrm{GC}}`$ averaged from past to present, and $`R_{\mathrm{GC}}`$ may not be a good measure of tidal effects unless $`R_{\mathrm{GC}}`$ is very small or very large. On the other hand, a satellite can pass near the parent galaxy frequently enough to allow serious tidal perturbation within a Hubble time even if the semi-major axis is 100 kpc. Thus, the correlation could be washed out, and the tidal picture cannot be rejected completely from our result. However, a dSph in an elliptical orbit that is observed at a location relatively far from the giant galaxy could not have experienced the galactic tide unless the orbit is highly elliptical. In this case, since the time spent near apogalacticon is much longer (§VI of Searle & Zinn 1973), the galaxy would not suffer enough tidal perturbation from the giant galaxy to be disturbed. Moreover, it should be noted that the $`R_{\mathrm{GC}}`$’s in our sample spread widely from about 20 to 300 kpc. 
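At the largest of these distances the orbital timescale itself becomes an issue. A rough Keplerian sketch follows; the enclosed mass of $`10^{12}`$ $`M_{}`$ is our illustrative assumption, not a value quoted in the paper:

```python
import math

G_MSUN = 1.327e20   # G * M_sun in m^3 s^-2
KPC = 3.086e19      # one kiloparsec in meters
GYR = 3.156e16      # one gigayear in seconds

def kepler_period_gyr(a_kpc, enclosed_mass_msun):
    """Period (Gyr) of a Keplerian orbit of semi-major axis a about a point mass."""
    a = a_kpc * KPC
    return 2.0 * math.pi * math.sqrt(a**3 / (G_MSUN * enclosed_mass_msun)) / GYR

# For a ~ 300 kpc and an assumed enclosed mass of 1e12 M_sun,
# the period comes out comparable to a Hubble time (~15 Gyr).
print(f"{kepler_period_gyr(300.0, 1e12):.1f} Gyr")
```

Since the period scales as $`M^{1/2}`$, even a factor-of-a-few change in the assumed mass leaves the conclusion unchanged: the outermost satellites can have completed only of order one orbit.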
If $`R_{\mathrm{GC}}`$ is about 300 kpc, the orbital period is expected to be comparable to a Hubble time, assuming Keplerian motion (adopted because the orbits of the dSphs are still not known in detail). Consequently, though some difficulties may exist, a dwarf galaxy closer to a giant galaxy (in the Local Group, the Galaxy or M31) should be more disturbed, and thus the correlation would be likely to appear. That the sample of M31 companions shows a weaker correlation than the Galactic sample may be due to the fact that the three-dimensional distance estimates for the M31 satellites are very uncertain, so that projection can hide the tidal signature for the sample of M31 satellites. #### 4.2.2 DM picture In the DM model, the DM in the formation epoch may have determined the star formation efficiency (Hirashita et al. 1998) and also the present physical state of the galaxy. Assuming that the masses of the dSphs are dominated by the DM, the tidal forces could have no effect on the dSphs because their tidal radii are larger than their core radii (Pryor 1996). Since there is no reason why more extended dSphs should be closer to the Galaxy or M31 in the DM model, it seems rather natural that no significant correlation between $`R_{\mathrm{GC}}`$ and $`C`$ is found. In other words, the physical conditions of the dSphs should be determined by their DM contents, not by their environments. Thus, the DM model does not break down in the face of our result. #### 4.2.3 Possible heterogeneity of the dSph sample It should be noted that the group of so-called dSph satellites may be heterogeneous. Some may well be evolved “normal” low-mass galaxies in the sense that they contain dark matter and have a cosmological origin, and some may be secondary satellites that formed during mergers of gas-rich protogalactic clumps, contain little DM, and have a globular cluster-like origin. 
The latter of these systems will not contain dark matter, and will be significantly affected by tides while orbiting around the larger parent galaxy. Moreover, remnants of these can be long-lived and may fake domination by DM (Kroupa 1997, Klessen & Kroupa 1998). Therefore, it is worthwhile to examine whether our result poses any difficulty for accepting such heterogeneity. In Figure 2, all the Milky Way satellites, except for Sculptor, have positive values of $`C`$, and in addition, their $`\mathrm{log}R_{GC}`$ and $`C`$ seem to correlate. Note that Sagittarius is believed to be experiencing serious tidal modification. Thus it may be true that, among the satellites of the Galaxy in our sample, only Sculptor is exceptional as a DM-dominated dSph, and the tidal effects are the dominating factor for the others. However, since there is no clear evidence of the heterogeneity, all we can do here is to mention it as one possibility. ## 5 Summary In order to investigate the tidal effect on the Local Group dSphs, we examined the correlation between the distances of the dSphs from their host galaxy (the Galaxy or M31) and the compactness of their surface brightness profiles, measured by the parameter “$`C`$” newly defined in this paper. As a result, we find no significant correlation and thus no direct evidence that tidal effects have a major effect on the dSphs. However, in most cases, $`C`$ is sufficiently large to allow the possibility of tidal effects, especially since $`C`$ decreases for the furthest dSph satellites of the Galaxy. Based on this result, we discussed the validity of the existing pictures which have been suggested to explain the fundamental properties of the dSphs, especially the origin of their large mass-to-luminosity ratios. We are grateful to the anonymous referee for many useful comments that greatly improved the paper. We thank Dr. K. Ohta and Dr. S. Mineshige for continuous encouragement. 
One of us (H.H.) acknowledges the Research Fellowship of the Japan Society for the Promotion of Science for Young Scientists. Figure Caption Fig. 1— The relation between the core radius ($`r_c`$) and $`V`$ band absolute magnitude ($`M_V`$) for the sample dSphs is shown. Filled squares indicate the dSphs which are companions of the Galaxy and open squares indicate those of M31. The solid line indicates the relation that we use to derive the standard core radius for each dSph (see text for details). The area marked with dEs represents a typical locus of dwarf elliptical galaxies (e.g., Peterson & Caldwell 1993). Fig. 2— Log $`R_{\mathrm{GC}}`$-$`C`$ relation (see text for the detailed definitions) of our sample dSphs. Filled squares and open squares have the same meanings as those in Fig. 1.
# A New Giant Branch Clump Structure In the Large Magellanic Cloud ## 1 INTRODUCTION The Large Magellanic Cloud (LMC) has long been a favorite stellar laboratory, providing us not only with valuable information about its own complex star formation history but also with important clues for understanding the formation and evolution of distant galaxies. Moreover, the interest in studying different astrophysical aspects of this galaxy has been increasing rapidly, mainly due to the advent of more powerful telescope/instrument combinations and computing facilities. Recently, Geisler et al. (1997, hereinafter Paper I) carried out a search for the oldest star clusters in the LMC by observing candidate old clusters spread throughout the LMC disk with the Washington $`C,T_1`$ filters. Although they did not find any genuine old cluster like the Galactic globular clusters, their study has considerably increased the sample of intermediate-age clusters ($`t`$ 1-3 Gyr) with ages determined with a high degree of confidence. In addition, their results reinforce the conclusion that an important epoch of cluster formation, which began $``$ 3 Gyr ago, must have been preceded by a quiescent period of many billion years, unless dissipation processes have been more effective than previously thought (e.g., Olszewski 1993). In addition, they determined not only the properties of the clusters but also those of their surrounding fields. From the relatively wide field covered by their images ($``$ 15$`\mathrm{}`$ on a side) they found that clusters and fields have on average similar ages and metallicities, except in 3 cases where clusters are $``$ 0.3 dex more metal-poor than the surrounding field, suggesting that the chemical evolution was not globally homogeneous (Bica et al. 1998, hereinafter Paper II). 
A further intriguing result of Paper II was the discovery of what appeared to be a well populated secondary clump in the Color-Magnitude Diagrams (CMDs) of two fields located in the northern part of the LMC near the clusters SL 388 and SL 509. This unusual feature, made up of stars distributed uniformly across the fields, lies below the prominent Red Giant Clump (RGC), slightly toward its bluest color, and extends 0.45 mag fainter. The feature also appears in the very populous SL 769 field located $``$ 6<sup>o</sup> away, thus representing around 10% of the whole sample of fields observed in Paper I. Since this feature appeared as a roughly distinct secondary clump, Bica et al. coined the term “dual clump” to describe this phenomenon. The authors tentatively suggested that these stars are evidence of a depth effect, with a secondary component located behind the LMC disk at a distance comparable to that of the Small Magellanic Cloud (SMC), perhaps due to debris from previous interactions of the LMC with the Galaxy and/or the SMC. However, they also mentioned arguments against this scenario and noted other possible explanations. Westerlund et al. (1998) have also found a similar feature in the CMDs of three fields located in the NE of the LMC. On the basis of their $`BV`$ photometry they suggested that the red giant clump is bimodal and contains stars from an old population ($`t`$ $``$ 10 Gyr) and from a younger population ($`t`$ $``$ 0.3-4 Gyr), in the sense that the fainter the clump the older the stars. Besides these observational findings, Girardi et al. (1998) and Girardi (1999) have theoretically predicted that stars slightly heavier than the maximum mass for developing degenerate He cores should define a secondary clumpy structure, about 0.3-0.4 mag in the $`I`$ band below the bluest extremity of the red clump. 
According to Girardi (1999) this evolutionary effect should be seen in CMDs of composite stellar populations containing $``$ 1 Gyr old stars and with mean metallicities higher than $`Z`$ = 0.004. However, the current state of both observational and theoretical results makes it impossible to determine whether the intriguing feature is caused by the presence of an old stellar population, by an evolutionary effect, or by a layer of stars located behind the LMC. Furthermore, not only its origin but also its morphology remains uncertain, which must be known before the magnitude of the red giant clump can be used as a robust distance indicator (e.g., Paczyński & Stanek 1998). In this paper we report on the first observations carried out with the aim of mapping the extent and determining the nature of the “dual clump” phenomenon. Indeed, the apparent dual clumps from the limited sample of Paper II are now found to merge and form a continuous feature. The selection and observation of the fields, as well as the reduction of the data, are presented in Section 2. In Section 3 we present the results and discuss them in the light of recent theoretical and observational interpretations. Finally, in Section 4 we summarize our main conclusions. ## 2 OBSERVATIONS AND REDUCTIONS The fields for mapping out the extent of the secondary clump phenomenon were selected on the basis of their proximity to SL 388 and SL 509, and the presence of star clusters which had not been observed with Washington photometry. The first criterion aims at observing LMC regions located not only in the line of sight between SL 388 and SL 509, but also around them, within one degree from the midpoint of the two clusters. The nearest cluster to this center from Paper II is SL 262, which is located 1.5<sup>o</sup> from SL 388, and no dual clump structure is visible in its CMD. 
In order to make the best use of the assigned telescope time, we centered the fields so that they included clusters without, or with only very unreliable, age and metallicity determinations for our continuing study of the chemical evolution of the LMC. The clusters having integrated $`UBV`$ photometry were taken from Bica et al. (1996), and the fainter ones from the recent revised catalog of star clusters, associations and emission nebulae (Bica et al. 1999). We also observed the field of NGC 2209 for the purpose of checking the possible evolutionary origin of the dual clump. This cluster is located $``$ 14<sup>o</sup> toward the south-east from SL 509 and was placed by the Corsi et al. (1994) photometric data in the minimum of the relationship between red giant clump and Main Sequence (MS) termination magnitudes. Table 1 lists the selected fields and the clusters contained within these fields. Note that fields # 5, 16, 17, 23, and 26 were not observed. The observations were carried out at the CTIO 0.9m telescope during 6 photometric nights in November 1998. The Cassegrain Focus IMager (CFIM) and the CCD Tek 2K #3 were employed in combination with the Washington $`C`$ and Kron-Cousins $`R`$ filters. Geisler (1996) has shown that the $`R_{KC}`$ filter is a very efficient and accurate substitute for the Washington $`T_1`$ filter. The pixel size of the detector was 0.4$`\mathrm{}`$/pixel, resulting in a field $``$ 13.5$`\mathrm{}`$ wide. We used the Arcon 3.3 data acquisition system in $`quad`$ mode (four amplifiers) with a mean gain and readout noise of 1.5 $`e^{}`$/ADU and 4.2$`e^{}`$, respectively. During each night exposures of 2400<sup>s</sup> in $`C`$ and 900<sup>s</sup> in $`R_{KC}`$ were taken for the selected fields as well as for standard fields (Geisler 1996), with airmasses approximately ranging from 1.1 up to 1.6. In addition, a series of 10 bias and 5 dome and sky flatfield exposures per filter were obtained nightly. 
The weather conditions remained very stable, with a typical seeing of 1.0$`\mathrm{}`$-1.2$`\mathrm{}`$, although some images have slightly larger FWHMs due to temperature changes of up to 2<sup>o</sup> C. In general, the secondary mirror was focused twice per night. We covered a total area of $``$ 1 deg<sup>2</sup> spread over $``$ 2.6 deg<sup>2</sup>. The collected data for a total of 21 selected fields and the NGC 2209 field were fully processed at the telescope using the QUADPROC package in IRAF<sup>1</sup><sup>1</sup>1IRAF is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.. The distribution of the observed fields is shown in Fig. 1. After applying the overscan-bias subtraction for the four amplifiers independently, we carried out flatfield corrections using a combined skyflat frame, which was previously checked for a non-uniform illumination pattern with the averaged domeflat frame. Then, we did aperture photometry for the standard fields ($``$ 30 stars per night) using the PHOT task within DAOPHOT II (Stetson 1991). The relationships between instrumental and standard magnitudes were obtained by fitting the following equations: $$c=a_1+C+a_2\times X_C+a_3\times (C-T_1)$$ (1) $$r=b_1+T_1+b_2\times X_R+b_3\times (C-T_1),$$ (2) in which $`a_i`$ and $`b_i`$ ($`i`$ = 1, 2 and 3) are the coefficients derived through the FITPARAM routine in IRAF, and $`X`$ represents the effective airmass. Capital and lowercase letters represent standard and instrumental magnitudes, respectively. The resulting coefficients and their standard deviations are listed in Table 2, the typical rms errors of eqs. (1) and (2) being 0.017 and 0.015 mag, respectively. 
Point Spread Function (PSF) photometry for the LMC fields and the NGC 2209 field was performed using the stand-alone version of the DAOPHOT II package (Stetson 1994), which provided us with $`X`$ and $`Y`$ coordinates and instrumental $`c`$ and $`r`$ magnitudes for all the stars identified in each field. The PSFs were generated from two samples of 30-35 and $``$ 100 stars selected interactively. For each frame a quadratically varying PSF was derived by fitting the stars in the larger sample, once their neighbors were eliminated using a preliminary PSF obtained from the smaller star sample, which contained the brightest, least contaminated stars. Then, we used the ALLSTAR program to apply the resulting PSF to the identified stellar objects and to create a subtracted image, which was used for finding and measuring magnitudes of additional fainter stars. The PSF magnitudes were determined using as zero points the aperture magnitudes yielded by PHOT. This procedure was iterated three times on each frame. Next, we computed aperture corrections from the comparison of PSF and aperture magnitudes for the neighbor-subtracted PSF star sample, resulting in typical values around -0.016$`\pm `$0.010 mag. Notice that the PSF stars are distributed throughout the whole CCD frame, so that variations of the aperture correction should be negligible. Finally, the standard magnitudes and colors for all the measured stars were computed by inverting eqs. (1) and (2), once the positions and instrumental $`c`$ and $`r`$ magnitudes of stars in the same field were matched using Stetson’s DAOMATCH and DAOMASTER programs. Thus, we achieved accurate photometry for $``$ 242,000 stars, with mean magnitude and color errors for stars brighter than V = 19 of $`\sigma `$$`(T_1)`$ = 0.014 and $`\sigma `$$`(CT_1)`$ = 0.022 mag, respectively. 
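Equations (1) and (2) are linear in the standard magnitude and color, so the inversion mentioned above amounts to solving a 2x2 system. A minimal sketch follows; the coefficient values below are invented for illustration (the actual ones are in Table 2):

```python
def invert_calibration(c_inst, r_inst, x_c, x_r, a, b):
    """Solve eqs. (1)-(2) for the standard T1 magnitude and C-T1 color.

    a = (a1, a2, a3) and b = (b1, b2, b3) are the fitted coefficients;
    x_c and x_r are the effective airmasses of the C and R exposures.
    """
    u = c_inst - a[0] - a[1] * x_c          # u = C  + a3*(C-T1)
    v = r_inst - b[0] - b[1] * x_r          # v = T1 + b3*(C-T1)
    color = (u - v) / (1.0 + a[2] - b[2])   # C - T1
    t1 = v - b[2] * color
    return t1, color

# Round-trip check with invented coefficients: forward-apply eqs. (1)-(2)...
a, b = (0.5, 0.3, 0.1), (0.4, 0.2, -0.02)
t1_true, color_true = 18.50, 1.60
c_inst = a[0] + (t1_true + color_true) + a[1] * 1.2 + a[2] * color_true
r_inst = b[0] + t1_true + b[1] * 1.2 + b[2] * color_true
# ...then invert and recover the standard values
print(invert_calibration(c_inst, r_inst, 1.2, 1.2, a, b))
```

The subtraction `u - v` uses the identity $`C=T_1+(C-T_1)`$, so the color term carries the factor $`(1+a_3-b_3)`$.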
Later on, with the aim of gathering both astrometric and photometric information in a self-consistent way, we built a master file containing the positions of all the stars referred to a single coordinate system. For some fields we only applied the appropriate offsets in the $`X`$ and $`Y`$ values, while in other fields we matched from tens up to hundreds of stars in common using DAOMASTER and our own routines. We also averaged their $`T_1`$ magnitudes and $`CT_1`$ colors and recomputed their photometric errors based on the differences. The typical mean difference (absolute value) for approximately five hundred stars brighter than V = 19 in common turned out to be $`\mathrm{\Delta }`$$`T_1`$ = 0.026$`\pm `$0.021 and $`\mathrm{\Delta }`$$`(CT_1)`$ = 0.030$`\pm `$0.023. In total, 7760 stars have two measurements of their magnitude and color. This photometry can be obtained from the first author upon request. ## 3 ANALYSIS AND DISCUSSION ### 3.1 Description of CMDs The CMDs of the 21 observed LMC fields certainly show a mixture of different stellar populations. They appear to be dominated by a 3-4 Gyr old population, as deduced from the $`\delta `$$`T_1`$ index, which measures the difference in magnitude between the mean magnitude of the clump/HB and the MS turnoff (see Paper I). Fig. 2 illustrates a typical field CMD of the surveyed region. Likewise, the MS is well-populated and extends over $``$ 3 mag. Assuming that the MS comes from the superposition of MSs with different turnoffs, we estimated an age range from 3 up to 7.5 Gyr using the $`\delta `$$`T_1`$ index, with an average of 4.5$`\pm `$1.0 Gyr for all the fields. This significant age range is also supported by the presence of a Sub Giant Branch (SGB) with a broad vertical extension due to the transition of MS stars with different ages to the SGB. The Red Giant Branch (RGB) is also clearly visible, covering a wide range in color from $`CT_1`$ $``$ 1.8 up to 3.6. 
However, the most striking feature of these CMDs is the giant clump region. In addition to the normal Red Giant Clump (RGC), most of the field CMDs also show a vertical structure (VS) composed of stars which lie below the RGC and extend from the bottom of the RGC to $``$ 0.45 mag fainter. The VS spans the bluest color range of the RGC and also appears in the CMD of NGC 2209. This intriguing feature does not clearly appear in the CMDs of our previous LMC cluster survey, but only a dual clumpy structure in around 10% of the cluster sample (see Paper II). To our knowledge, such a feature has not been observed previously. In order to delimit and characterize this intriguing feature, we first estimated its position and size, and examined its shape going through the individual field CMDs. An enlargement of the area of interest in these CMDs is shown in Fig. 3. As can be seen, the RGCs are located at nearly the same magnitude and color, centered at $``$ 18.5 and 1.60 mag, respectively. The constancy of the location in the CMD also appears to be the case for the VS, even in those fields where the VS arises as a small and sparse group of stars. In two fields (marked with an asterisk in Fig. 1) the RGC is slightly tilted, following approximately the reddening vector, but the mean positions of both the RGC and VS remain unchanged. Moreover, the VS maintains not only its mean position but also its verticality. Therefore, given that the locus of the VS in the CMD does not seem to show any correlation with position in the LMC and that reddening variations over our survey field should be minimal, and in order to highlight the VS phenomenon, we built a composite CMD using all the measured stars. The resulting diagram is shown in Fig. 4, in which we also included our published Washington photometry for SL 388 and SL 509 (see Section 3.2). We define VS stars as those stars which fall into the rectangle $`T_1`$ $`=`$ 18.75-19.15 and $`CT_1`$ $`=`$ 1.45-1.55. 
This definition results in a compromise between maximizing the number of VS stars and minimizing contamination from, among other sources, MS, SGB, RGB, and Red Horizontal Branch stars. The continuous nature of this feature is clearly evident. We are unsure of the reason why this feature appeared as a “dual clump” in two of our Paper II fields. In none of our present fields is there any significant bifurcation. The composite CMD of Fig. 4 thus should present the best representation of this feature. ### 3.2 The VS phenomenon It was mentioned above that VS stars appear to be present in most of the fields of Fig. 1, although in some of them they can hardly be recognized. Therefore, it is appropriate to map out the extent of the VS phenomenon in order to have a more quantitative estimate of its dimensions. For that purpose, we counted the number of stars lying within the VS rectangle, assuming for all the fields the same Galactic field star distribution. This assumption is borne out by comparing the CMDs of cluster fields located in different parts of the LMC disk (see Fig. 4 in Paper II) with that of the outermost field (OHSC 37), for which we found no evidence of LMC field stars (see Santos et al. 1999, hereinafter Paper III). Furthermore, in an area of the same size as the selected LMC fields, the OHSC 37 field has only two stars within the VS box, so that we did not perform any correction for foreground star contamination. The numbers of stars we counted for the northern LMC fields are shown in Fig. 1. The VS stars show a strong spatial variation, reaching their highest density just north-east of SL 509. We also repeated the same counting procedure for the fields of SL 126, SL 262, SL 388, SL 509 and SL 842 by revisiting our Washington data published in Paper II. SL 126, SL 262 and SL 842 are the nearest clusters to SL 388 and SL 509. 
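The box counting just described is simple to reproduce. A sketch, using the VS rectangle $`T_1`$ = 18.75-19.15, $`CT_1`$ = 1.45-1.55 defined in §3.1 (the star list below is invented for illustration, not real photometry):

```python
def count_in_box(stars, t1_range=(18.75, 19.15), color_range=(1.45, 1.55)):
    """Count (T1, C-T1) pairs falling inside a rectangular CMD box."""
    t1_lo, t1_hi = t1_range
    c_lo, c_hi = color_range
    return sum(1 for t1, color in stars
               if t1_lo <= t1 <= t1_hi and c_lo <= color <= c_hi)

# Invented sample: two stars inside the VS box, two outside
stars = [(18.90, 1.50), (18.50, 1.60), (19.00, 1.48), (19.30, 1.52)]
print(count_in_box(stars))  # → 2
```

Passing a different `t1_range` and `color_range` gives the counts for the larger RG box of §3.2 with the same helper.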
Both the present and published data sets were obtained following the same stellar object selection criteria, so that they can be compared directly. The numbers of VS stars in SL 388 and SL 509 are 37 and 106, which match the trend followed by the selected LMC fields (see Fig. 1). On the other hand, the fields of SL 126, SL 262 and SL 842 turned out to have 4, 9 and 9 VS stars, respectively, placing an upper limit on the size of the VS region. All these results suggest that the region in which the VS phenomenon is concentrated extends over at least $``$ 2 deg<sup>2</sup> and that the feature is clearly seen neither in the field of SL 262, which is $``$ 1.5<sup>o</sup> to the NW of SL 388, nor in SL 842, located $``$ 4.5<sup>o</sup> to the NE of SL 509. With the aim of looking into whether VS stars are also found in LMC clusters, we took advantage of the fact that there is roughly one star cluster in each selected field, the total cluster sample being 21. First, we determined the cluster centers and selected their radii by judging by eye the variation of the stellar density in the cluster surroundings. Cluster radii vary between 20 and 110 pixels with an average of 45 pixels (20$`\mathrm{}`$). Then, we performed VS star counts within both the cluster radii and four circular LMC field areas on the same image, chosen for comparison purposes. The four circular field areas were distributed throughout the entire image, none of them closer to the corresponding cluster than three cluster radii, thus avoiding cluster star contamination. They were also located far away from the image edges, avoiding vignetting and flatfield residual effects. Finally, the radii of the four comparison areas in each image were fixed at one-half of the cluster radius. Eighty percent of the selected LMC comparison fields contained no VS stars, while one and two VS stars were found in two and one LMC comparison fields, respectively. 
These results are in very good agreement with the total number of VS stars found in each frame once they are scaled to the cluster area. Similarly, the star clusters themselves apparently have no VS stars, except in the case of SL 515, which has 12 VS stars, five of them lying inside 1/2 of the cluster radius ($`r`$$``$45$`\mathrm{}`$). SL 515 is located in one of the selected LMC fields with the highest VS star densities, as can be seen in Fig. 1. However, 9$`\mathrm{}`$ to the SE of SL 515 there is another star cluster (SL 529) which has only two VS stars. Therefore, the excess of VS stars in SL 515 would seem to be related not to the peak in the field VS star distribution, but to some property of the cluster itself, presumably its mass (see Section 3.3). By looking at the images and comparing the cluster radii we found that SL 515 is the largest and perhaps the most massive cluster in the sample. We also made the same comparison for the SL 388 and SL 509 fields and found one and five cluster VS stars, respectively, and no field VS stars. Both clusters have relatively large radii ($`r`$$``$30$`\mathrm{}`$). An additional test for exploring the nature of the VS phenomenon consists of comparing the number of VS stars with the total number of stars in the CMDs of different regions, to investigate whether there is any trend of the ratio between them with their spatial distributions (No. VS stars/Total number CMD stars $`=F`$(position)). For this test, we decided not to use the whole CMD of each selected LMC field because of different incompleteness factors at fainter magnitudes. Thus we did not consider MS stars but rather a box defined by $`T_1`$ = 17.5-19.7 and $`CT_1`$ = 1.0-2.2, which are precisely the limits of the CMD in Fig. 4. This box (hereinafter the RG box) includes all the red giant phases, so that if there were any correlation between VS and LMC giant stars (strictly VS $`=f`$(RG-VS)), it should arise without any bias due to the presence of MS or other kinds of stars. 
Fig. 5 shows the resulting relationship, in which we also include the fields of SL 126, SL 262 and SL 842. There is a strong correlation between the number of VS stars in the field and the number of LMC giants in the same zone. The lowest VS star counts occur in the outermost LMC fields, such as that of OHSC 37, where the number of red giants is also a minimum. ### 3.3 The NGC 2209 case The giant clump luminosity is one of the best indicators of the development of the RGB, and consequently, an important tool for studying the nature of the VS phenomenon. Indeed, the RGC luminosity varies along a sequence which depends on the age (mass) of the giant stars. Furthermore, Corsi et al. (1994) data and Girardi (1999) models show that the clump magnitude ($`V_{clump}`$) has a maximum (faintest value) as a function of the MS termination magnitude ($`V_{TAMS}`$), which corresponds to an age of $``$ 1.0-1.5 Gyr. Precisely, our interest in observing NGC 2209 comes from the fact that this cluster falls onto the faintest magnitude in the $`V_{clump}`$ vs. $`V_{TAMS}`$ relationship shown in Corsi et al. (1994), thus providing us with a valuable opportunity to test whether the VS is caused by evolutionary effects. NGC 2209 is located $``$ 14<sup>o</sup> away from the selected LMC fields and therefore any local effects in our VS area should be negligible. Using our Washington photometry we performed an analysis similar to that carried out for the selected LMC fields, i.e., we first looked at the NGC 2209 field CMD. Its main features resemble those of the northern LMC fields, as shown in Fig. 6. The RGC is tilted and shifted with respect to the composite CMD of Fig. 4 by $`\mathrm{\Delta }`$$`(CT_1)`$ $``$ 0.20 and $`\mathrm{\Delta }`$$`T_1`$ $``$ 0.30 mag. According to the relations $`E(CT_1)`$ = 1.966$`E(BV)`$ and $`A_{T_1}`$ = 2.62$`E(BV)`$ (Geisler 1996), these offsets are consistent with a mean reddening $``$ 0.10 mag higher. Fig. 
6 also reveals the presence of a VS at the same position relative to the RGC, reinforcing the conclusion that VS stars belong to the LMC. Its shape and magnitude extent are essentially the same as described in Section 3.1, while its color range is somewhat wider. The tilted RGC following approximately the reddening vector (see Fig. 6) could suggest the existence of differential reddening, although evolutionary effects could also yield an inclined clump. According to Catelan & Freitas Pacheco (1996) horizontal branch (HB) stars could result in a tilted clump if the helium content were very high (Y=0.30). They also argued that a differential reddening as small as $`\delta `$$`E(BV)`$ = 0.06 mag cannot cause a CMD dispersion as large as the one originating from the evolution away from the Zero Age HB itself. Notice also that tilted clumps also appear in two fields marked with an asterisk in Fig. 1, but their positions and sizes (magnitude and color dispersions) are nearly the same as the remaining fields (see Section 3.1). On the other hand, Hodge (1960) noticed an apparently dark patch in NGC 2209 of $``$ 15$`\mathrm{}`$ in diameter, about 10$`\mathrm{}`$ from the cluster center, suggesting either an internal or foreground origin for the globule. In addition, using $`BV`$ CCD photometry and CMD analysis, Dottori et al. (1987) concluded that the globule should be internal to the cluster, so that differential reddening is not unexpected. Indeed, we estimated a VS width approximately twice that of the northern selected fields. The extracted CMD of NGC 2209 also shows a remarkable color dispersion not only for giant clump stars, but also for SGB stars, which appear distributed at both edges of their whole color range (see Fig. 6). Next, we counted the VS stars distributed in the NGC 2209 field using a box with the same dimensions as for the northern LMC fields and reddened by $`\mathrm{\Delta }`$$`E(BV)`$ = 0.10. 
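The Geisler (1996) relations quoted in §3.3 can be inverted to turn the observed Washington offsets into an E(B-V) estimate. A minimal sketch; averaging the two independent estimates is our own illustrative choice, not a procedure stated in the paper:

```python
def ebv_from_offsets(d_color=None, d_t1=None):
    """Mean E(B-V) implied by offsets in C-T1 color and T1 magnitude,
    using E(C-T1) = 1.966 E(B-V) and A_T1 = 2.62 E(B-V) (Geisler 1996)."""
    estimates = []
    if d_color is not None:
        estimates.append(d_color / 1.966)
    if d_t1 is not None:
        estimates.append(d_t1 / 2.62)
    return sum(estimates) / len(estimates)

# Offsets measured for the NGC 2209 clump: ~0.20 in C-T1 and ~0.30 in T1,
# both pointing at a reddening roughly 0.1 mag higher than the northern fields
print(f"E(B-V) ~ {ebv_from_offsets(d_color=0.20, d_t1=0.30):.2f}")
```

Note that the magnitude offset also absorbs any distance difference, so the color-based estimate is the cleaner of the two when the two disagree.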
We also applied the same shift to the RG box, thus centering the RGC. The number of VS stars in the NGC 2209 field and in the cluster itself ($`r`$$``$45$`\mathrm{}`$) was 69 and 10, respectively, whereas no VS stars were found in four circular field areas (equal cluster area criterion), as expected. This result is in very good agreement with that found for SL 515, in the sense that relatively massive clusters can develop giant clumps with a considerable number of VS stars. Finally, if we compare the number of field VS stars with that corresponding to the RG box, we can conclude that LMC regions with a noticeably large giant population can also be reservoirs of VS stars. The fact that NGC 2209, located many degrees from our main VS area, also shows this feature argues against a depth effect interpretation (e.g., background galaxies or debris) and for an evolutionary origin. ### 3.4 Comparison with theory It is known that stars defining the RGC in CMDs of intermediate-age and old open clusters are in the stage of central helium burning (Cannon 1970, Faulkner & Cannon 1973). However, according to the Girardi (1999) models - computed using a grid of masses with a resolution of $``$ 0.1 $`M_{}`$ in the vicinity of the mass for the onset of helium burning - the position of GC stars in the CMD depends on the masses of the stars. In particular, stars with $`M`$ $``$ $`M_{Hef}`$ $``$ 2-2.5 $`M_{}`$ form electron-degenerate cores with nearly constant masses ($`M_c`$ $``$ 0.45 $`M_{}`$) after the central hydrogen exhaustion, thus allowing stars to reach similar luminosities. These stars correspond to our RGC stars. On the other hand, for stars with $`M>M_{Hef}`$ helium ignition takes place under non-degenerate conditions and both $`M_c`$ and the luminosity increase with $`M_{Hef}`$, the minimum luminosity being about 0.4 mag fainter than that of stars with slightly lower masses.
Girardi’s models predict that such stars should define a secondary clumpy feature located below the RGC and at its bluest extremity, reminiscent of our VS feature. The spread in the intrinsic luminosity of stars burning helium in their cores evidenced by this feature provides a further constraint on using the magnitude of the GC stars as a self-consistent distance indicator. Now, we can check Girardi’s (1999) predictions in the light of the present observational findings, so that new constraints on the theory can improve our knowledge of stellar evolution and the star formation history in the LMC. In contrast with the tentative explanation of debris from a dwarf galaxy located behind the LMC suggested in Paper II, Girardi claimed that the secondary clump in the CMDs of the SL 388 and SL 509 fields might have been caused by a population younger (of higher mass) than the RGC stars. However, in the present work we did not find such a separated fainter clump but rather a VS having approximately the same number of stars per magnitude interval and peaking at its brightest limit. The peak of the VS luminosity function has approximately 25% more stars than the remaining fainter part of the VS, independent of the bin sizes (see Table 3 and Fig. 7). Therefore, the VS can be described as the faint tail of a long continuous vertical distribution formed by stars developing non-degenerate helium cores; the upper part of this long VS is represented by the so-called “vertical red clump” (VRC), recently extensively discussed in the literature (e.g., Zaritsky & Lin 1997; Beaulieu & Sacket 1998; Gallart 1998; Ibata et al. 1998). The presence of VRC stars in the Hess diagram (density-coded CMD) of a 2<sup>o</sup>$`\times `$1.5<sup>o</sup> region located $``$ 2<sup>o</sup> northwest of the center of the LMC was interpreted by Zaritsky & Lin (1997) as red clump stars that are closer to us than those in the LMC. However, according to Girardi et al.
(1998) and Beaulieu & Sacket (1998), among others, evolutionary effects appear to describe its nature more satisfactorily. Thus, VRC stars should be the more massive clump stars, while stars with $`M`$ $``$ 2 $`M_{}`$ should define the lower magnitude limit (Girardi’s secondary clump); stars with even smaller masses are grouped in the RGC. Fig. 4 shows the presence of not only VS stars but also VRC and HB stars. Note that even though both the Zaritsky & Lin and our present surveyed areas are roughly similar in size, VRC stars are clearly much less numerous than VS stars in our Fig. 4, which surprisingly contrasts with their Hess diagrams where no VS stars are seen despite the presence of the VRC. Certainly, if an LMC field contains both high mass (VRC) and low mass (RGC) stars, we should also expect to find intermediate-mass stars (VS stars), which were not detected in the Zaritsky & Lin survey data. We are uncertain as to what causes this paradox. On the other hand, bearing in mind the differences mentioned above, it would be interesting to investigate how fundamental properties of VS stars compare with those predicted by Girardi for secondary clump stars. Note that Girardi predicted that secondary clumps should be observed in fields with an important number of 1 Gyr old stars ($`M`$ $``$ 2 $`M_{}`$) mixed with older stars, and with metallicities higher than about $`Z`$ = 0.004 (\[$`Fe/H`$\] $``$ -0.7). In addition, he pointed out that neither main nor secondary clumps should be mixed due to differential reddening, distance dispersions and photometric errors. We first derived the ages of NGC 2209 and SL 515 and compared them with those of Girardi’s models. The cluster ages were estimated using the $`\delta `$$`T_1`$ index as defined in Paper I, yielding values of 1.5 and 1.6 Gyr for NGC 2209 and SL 515, respectively. These values are in good agreement with the ages associated with stars having $`M_{Hef}`$ just in the limit for non-degenerate helium cores. 
We also estimated the ages for the remaining star clusters contained in the selected LMC fields, all of which turn out to be on average younger than 1.5 Gyr. The ages derived for SL 388 and SL 509 in Paper II are 2.2 and 1.2 Gyr, so that stars slightly older than those predicted by Girardi’s models could also fall into the CMD VS region. However, most of the clusters and surrounding fields of Paper II - except ESO 121-SC03 - have ages in the range 1.0-2.2 Gyr, but only in two of them were secondary clumps clearly distinguished. Moreover, the metallicity of SL 509 is \[Fe/H\] = -0.85, while those of the surrounding cluster fields are all in the range $``$0.35 - $``$0.7. In order to find some explanation for such a paradoxical result, which would appear to be contrary to the predictions of Girardi’s secondary clump models, we counted VS and RG box stars for all the surrounding fields of the clusters analyzed in Paper II, and placed these values in the VS vs. RG box plane. We applied reddening corrections with respect to the SL 388 and SL 509 fields ($`E(BV)`$=0.03) and adopted the same foreground star contamination for all the fields, given the similar galactic star distribution in their CMDs compared to the field CMD of OHSC 37 (see Fig. 4 of Paper II). Fig. 5 shows the resulting relationship (open squares), in which we also included the 9 Gyr old field centered on ESO 121-SC03, the outermost OHSC 37 field, and the inner disk SL 769 field. In particular, the turnoffs of the younger SL 769 field are younger than 1 Gyr, thus providing a substantial number of RG stars. As can be seen, fields with only a few RG box stars do not have many VS stars either, independent of their ages and metallicities, while VS stars become more numerous as the number of RG box stars increases. However, the surrounding fields do not seem to show the same correlation in Fig. 5 as the selected LMC fields (star symbols).
In the case of SL 769, the number of VS stars is near the average of those in the selected LMC fields, but the RG box stars are nearly three times more numerous. Furthermore, the fields around SL 388 and SL 817 share similar ages, metallicities and numbers of VS stars, while the number of RG box stars in the SL 817 field is twice that in the SL 388 field, which suggests that a large number of RG stars alone is not a sufficient requirement for the appearance of the VS phenomenon. The numbers of VS stars in the fields of SL 509 and SL 862 are also quite different, although their ages, metallicities and numbers of RG box stars are very similar. Furthermore, SL 509 itself has 5 VS stars (see Section 3.2), whereas no VS stars appear to be associated with SL 862. All these results apparently suggest that there should be other conditions, besides age, metallicity and the necessary RG star density, that would trigger the formation of VS stars, such as the environment of the VS star forming regions, a different star formation rate, the mass function, etc. Nevertheless, the non-uniform spatial distribution of VS stars in the LMC reveals that non-homogeneously distributed star formation events occurred in this galaxy about 1-2 Gyr ago. ## 4 CONCLUSIONS From the analysis of Washington photometry for 21 selected fields located in the northern part of the LMC, and 14 cluster fields distributed throughout the LMC disk, we conclusively identify the existence of a vertical structure of stars that lies below the RGC at its bluest color and up to 0.45 mag fainter. Our previous data (Paper II) uncovered two northern fields which contained what appeared to be a “dual clump”, with a secondary clump lying fainter and bluer than the RGC. Stars lying in the same CMD region were described as very old stars ($`t`$ $``$ 10 Gyr) by Westerlund et al. (1998) from $`BV`$ photometry of three fields located in the NE of the LMC.
However, our much larger present database indicates that there exists a continuous distribution of stars, which we term VS (“vertical structure”) stars, not only in the CMDs of field stars, but also in certain intermediate-age star clusters. These results demonstrate that VS stars belong to the LMC and that they are not composed of old objects in the LMC or of a background population of RGC stars. We also determine that VS stars are only found in those fields which satisfy some particular conditions, such as containing a significant number of 1-2 Gyr old stars and having metallicities higher than \[Fe/H\]$``$ -0.9 dex, in good agreement with Girardi’s (1999) models, which predicted that a minimum in the luminosity of core He burning giants is reached just before degeneracy occurs. These conditions constrain the VS phenomenon to appear only in some isolated parts of the LMC, particularly those with a noticeably large giant population. However, a large number of RG stars, of the appropriate age and metallicity, is not a sufficient condition for forming VS stars. Thus, for example, we found an area spread over 2.6 square degrees centered just to the north-east of SL 509 with 3 times fewer RG stars than the inner disk cluster SL 769, but with approximately the same number of VS stars. Clusters with the appropriate age and metallicity to contain a significant number of VS stars are also required to be relatively massive; NGC 2209, for example, constitutes a good example of Girardi’s predictions. Finally, although Girardi’s models successfully predict the existence of red giants fainter and bluer than RGC stars on the basis of an evolutionary effect, there is still a need for more detailed studies explaining, for example, the VS vs. RG relationship, the ratio between the number of VS and VRC stars, whether tilted RGCs are related to VS features, the VS luminosity function, etc.
The fact that Zaritsky & Lin (1997) found red clump stars with high and low masses, but none at the intermediate degenerate mass limit to form VS stars, also remains unexplained. Indeed, 1-2 Gyr old stars with \[Fe/H\] $``$ -0.3 dex are very common in the LMC, although VS stars are only clearly seen in certain parts of the galaxy, which constitutes an unresolved mystery. The authors would like to thank the CTIO staff for their kind hospitality during the observing run. A.E.P. greatly appreciates the opportunity provided by CTIO of spending two months of his Gemini Fellowship at its Headquarters in La Serena, Chile. Support for this work was provided by the National Science Foundation through grant number GF-1003-98 from the Association of Universities for Research in Astronomy, Inc., under NSF Cooperative Agreement No. AST-8947990. Peter Stetson is also sincerely acknowledged for his help in the installation and execution of DAOPHOT and DAOMASTER. D.G. would like to acknowledge a very useful conversation with D. Alves, who pointed out the importance of observing NGC 2209. We also thank the referee for his valuable comments and suggestions. A.E.P. and J.J.C. acknowledge the Argentinian institutions CONICET and CONICOR for their partial support. J.F.C.S. Jr. also acknowledges the Brazilian institutions CNPq and FAPEMIG for their support.
# Signals of the quark-gluon plasma in nucleus-nucleus collisions ## 1 INTRODUCTION The main motivation for studying nucleus-nucleus collisions at high energy is to learn the properties of the densest and hottest forms of matter that one can produce in the laboratory. One hopes in particular to reach the conditions under which hadronic matter is expected to turn into the quark-gluon plasma, a new phase of matter whose degrees of freedom are the hadron constituents, the quarks and the gluons. Understanding the behaviour of bulk matter governed by QCD elementary degrees of freedom and interactions, and studying how it turns into hadronic matter, offers challenging perspectives and touches on fundamental issues in the study of Quantum Chromodynamics in its non-perturbative regime, such as the nature of confinement, of chiral symmetry breaking, etc. Unfortunately, the tools with which we are probing these fascinating features of dense and hot matter are not ideal. The dynamics of nuclear collisions is complicated and at present allows at best for semi-quantitative predictions. In the absence of a definite signal to look for, we need to learn the details of how nucleus-nucleus collisions work before we can draw any general conclusion about the properties of hadronic matter. Progress in the field is therefore largely conditioned by progress in experiments which, in fact, has been quite impressive. The data accumulated over the last 10 years both at the BNL/AGS and at CERN/SPS, in particular those involving the heavier projectiles and targets, start to draw a consistent picture of what happens in nucleus-nucleus collisions at high energy. There is clear evidence that at the highest energy achieved so far, nuclear collisions deviate substantially from a naive picture based on a mere superposition of independent nucleon-nucleon collisions; collective behaviour is seen.
As the collision energy is tuned up, the relevant degrees of freedom change, from nucleons to hadronic resonances and hadronic strings, and hints that quark degrees of freedom are playing a role have been obtained. But while a coherent picture of the collision dynamics is emerging, finding unambiguous signatures of quark-gluon plasma formation remains an open problem. Presumably, unless one is very lucky, confirmation of plasma production will not come from a unique signal, and evidence based on systematic and well-focused observations will have to be accumulated. Some may wish to argue that, perhaps, we are lucky. Indeed, several anomalies have been observed in the data. Among these are the suppression of $`J/\mathrm{\Psi }`$ production, the excess emission of lepton pairs in the mass range below the $`\rho `$ resonance and the enhanced production of strange and multistrange baryons. The temptation is great to associate these anomalies with the production of the quark-gluon plasma but, in my opinion, this is a bit premature. This Quark-Matter meeting is special. It is the last conference before RHIC starts to produce data, opening a new era in the field. This is also the time when the CERN SPS program is coming to an end. RHIC physics will be reviewed in a special session organized by M. Gyulassy, while E. Shuryak will discuss the CERN SPS program and its future perspectives. The present talk is a brief introduction to the field. My goal is to indicate where the focus of the present discussions lies without going into the details of the interpretations of the various results. Recent, more systematic, reviews can be found in Refs. , and there is some overlap, unavoidable, with E. Shuryak’s talk . The talk is organized as follows. I start with theoretical considerations on the phase diagram of hot and dense hadronic matter and the properties of the quark-gluon plasma. Then I review the general patterns of ultrarelativistic collisions.
In the last part of the talk I briefly discuss the specific signals for which anomalous behaviour has been observed. The last section contains conclusions. ## 2 THEORETICAL CONSIDERATIONS There is some rough analogy between the transition from hadronic matter to the quark-gluon plasma and that from a neutral atomic gas to the corresponding ionized plasma. In both cases, as the temperature or the density rises, the basic degrees of freedom in the system change, becoming, in the high temperature/high density phase, the elementary constituents. However, in spite of the fact that the neutral atomic gas and the completely ionized plasma have very distinct physical properties, no phase transition separates them, and the process of ionization is a very gradual one. In contrast, QCD predicts that the transition from hadronic matter to the quark-gluon plasma is a sharp one, accompanied by a rapid increase of the entropy density corresponding to the liberation of the quark and gluon degrees of freedom. Because of QCD asymptotic freedom, it is not surprising that quarks and gluons become free at high temperature (or high density); what is not a priori obvious is the sharpness of the transition, and also the fact that it occurs at a relatively low temperature. The critical temperature is now determined with an accuracy of a few percent in pure SU(3) gauge theory: $`T_c`$ $``$ 264 MeV. With dynamical quarks, calculations are more complicated and the resulting critical temperature more uncertain, $`T_c`$ $``$ 150-200 MeV. The nature of this (phase) transition has been somewhat clarified, but not entirely . In pure SU(3) gauge theory, one has evidence of a first-order transition. With massless or light quarks, the transition seems to be dominated by the effects of chiral symmetry breaking and the associated soft modes. Universality arguments then suggest an O(4) critical behaviour for 2 light flavors, whereas for 3 massless quarks the transition is first order.
What happens in the real world depends on whether the mass of the strange quark can be considered as large or small. A heavy strange quark is inert in the transition which is then, as for two flavors, a second-order one. If the strange quark is effectively light, the transition is first order. Unfortunately, the actual mass of the strange quark is of the order of the typical QCD scale, and the situation is still controversial. It cannot be excluded that with non vanishing masses for all quarks, the sharp phase transition disappears and becomes simply a crossover. How can we characterize the phases before and after the transition? This question, related to that of finding an unambiguous signature of the quark-gluon plasma, has no simple answer. In the real world where quark masses are non vanishing, no order parameter has been found to distinguish the two phases. What is meant then by confinement or deconfinement transition? A plausible picture which is receiving support from lattice calculations is that of the dual superconductor involving color magnetic monopole condensation. This picture is reviewed here by A. Di Giacomo . One interesting consequence of this picture is the existence of strings between heavy quarks. This leads to a linearly increasing potential at zero or low temperature, and provides a simple view of color confinement. Quite remarkably, the string tension drops rapidly as $`T`$ approaches $`T_c`$ and vanishes at $`T_c`$ . Above $`T_c`$ the potential is a screened potential, with the screening radius a decreasing function of the temperature. We shall refer to this behaviour in our discussion of $`J/\mathrm{\Psi }`$ suppression. Another perspective on what happens at the transition is given by chiral symmetry. The quark condensates are expected to decrease with increasing temperature and density and, in the case where the quarks are massless, to vanish for some values of these parameters: at this point chiral symmetry is restored. 
In the limit of small temperature $`T`$ and baryon density $`\rho `$, the variations of the condensates can be obtained from model-independent considerations: $`\delta \overline{q}q_T`$ $``$ $`T^2`$ and $`\delta \overline{q}q_\rho `$ $``$ $`\rho `$. For higher density however, interaction effects must be taken into account; these may strongly affect in particular the value of the density at which chiral symmetry is restored . Lattice results show that the ideal gas limit is approached as $`T`$ becomes large, but this approach is slow: typically, the energy density at $`2T_c`$ is about 85% of the Stefan-Boltzmann limit value. These results can be accounted for reasonably well by phenomenological fits involving massive quasi-particles . Although the quasiparticle picture suggested by such fits is a rather crude one, it supports the idea that one should be able to give an accurate description of the thermodynamics of the quark-gluon plasma in terms of its elementary excitations, and encourages the development of analytical calculations of the thermodynamic potential using weak coupling techniques. Such calculations are difficult: although the gauge coupling $`g`$ is small if the temperature $`T`$ is sufficiently high, the perturbative series shows rather poor convergence properties (see , and also ). However, sophisticated rearrangements of the perturbative expansion and various resummations have been applied to the calculation of the thermodynamic potential . Particularly promising in this context are self-consistent approximations of the entropy, which have been shown recently to accurately reproduce lattice data at high temperature . Although much remains to be done, the results obtained so far are quite encouraging and give us hope that an analytical control of the high temperature phase of QCD is within reach. A main limitation of present lattice calculations is their inability to deal with finite chemical potentials.
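Two of the quantities mentioned above can be sketched numerically: the low-temperature $`T^2`$ depletion of the condensate (the coefficient $`1/(8f_\pi ^2)`$ is the standard leading-order result for two massless flavors; the numerical value of $`f_\pi `$ is our input) and the Stefan-Boltzmann limit against which the lattice energy density is compared:

```python
import math

def condensate_ratio(T, f_pi=0.0924):
    """Leading-order chiral expansion for two massless flavors:
    <qbar q>(T)/<qbar q>(0) = 1 - T^2/(8 f_pi^2); units in GeV.
    Higher-order and density corrections are neglected."""
    return 1.0 - T ** 2 / (8.0 * f_pi ** 2)

def sb_energy_density(T, n_f=2):
    """Stefan-Boltzmann energy density of an ideal quark-gluon gas:
    eps = (pi^2/30) g* T^4, with g* = 16 (gluons) + 10.5 n_f (quarks),
    converted to GeV/fm^3 using hbar*c = 0.1973 GeV fm."""
    g_star = 16.0 + 10.5 * n_f
    hbarc = 0.1973
    return (math.pi ** 2 / 30.0) * g_star * T ** 4 / hbarc ** 3

print(condensate_ratio(0.100))   # modest depletion already at T = 100 MeV
print(sb_energy_density(0.200))  # a few GeV/fm^3 near T ~ 200 MeV
```

The lattice statement quoted above says the measured energy density reaches only about 85% of this ideal-gas value even at $`2T_c`$.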
There is little progress to report on this issue, although some of the pathologies of the quenched approximation have been clarified with the use of random matrix models . There have, however, been exciting recent developments in analytical investigations of the high density part of the phase diagram with the resurgence of the old idea of color superconductivity, but with totally new perspectives. Among all possible quark pairings, it has been recognized in particular that a special condensate involving a correlation between color and flavor, possible with massless flavors, has very special properties. This “color-flavor locked state” leads to substantial gaps, of the order of 100 MeV, and has many remarkable features. A most intriguing one is the apparent continuity between this phase of quark matter and some form of nuclear matter, as suggested in Ref. . This may offer the chance to explore such properties as confinement or chiral symmetry breaking using weak coupling calculations at high density, a most fascinating possibility indeed! I would like to end this section by returning to the real world and listing a few questions for experiments. Clearly, given the complexity of nuclear collisions, we cannot probe all the detailed features discussed above. So what can we hope to “see”? Can we observe changes in the equation of state, see the phase transition? Can we trace deconfinement, or chiral symmetry restoration, in an unambiguous way? Does matter produced at the early stages of the collisions behave as non-interacting quarks and gluons? Note that most of these questions refer to systems in thermal equilibrium; it is for such systems that our theoretical tools are best developed. Colliding nuclei are not at all systems in equilibrium, although there is some evidence that local thermal equilibrium may be achieved at the late stages of the collisions.
There may exist interesting genuine non-equilibrium phenomena, an example being provided by the so-called Disoriented Chiral Condensate . Finally, let us remark that in all these experimental studies we have very few control parameters at our disposal, essentially the nuclear sizes (and the impact parameter) and the beam energy. However, data with high statistics allow various cuts and may effectively provide new ones. ## 3 NUCLEUS-NUCLEUS COLLISIONS. GENERAL PATTERNS Measurements of the transverse energy distributions provide access to the energy density achieved in the collisions. Simple estimates using Bjorken’s formula, $$ϵ_0=\frac{1}{\tau _0\pi R^2}\frac{\mathrm{d}E_T}{\mathrm{d}y},$$ lead to $`ϵ_0`$ $``$ 1.3 GeV/fm<sup>3</sup> at the AGS ($`\mathrm{d}E_T/\mathrm{d}\eta =200`$ GeV for Au+Au central) and $`ϵ_0`$ $``$ 3 GeV/fm<sup>3</sup> at the SPS ($`\mathrm{d}E_T/\mathrm{d}\eta =450`$ GeV for Pb-Pb central), taking for $`\tau _0`$ the generic value of 1 fm/c. Although they should be viewed as crude estimates, these numbers indicate that appropriate conditions are possibly met for the formation of a transient quark-gluon plasma at the SPS. The measurements of baryon densities reveal a large baryon stopping at the SPS , larger than expected, although the maximum baryon density is lower there than that achieved at the AGS. There is evidence that hadronic matter at freeze-out is nearly “thermal” (for a review see ). Rather remarkably, particle ratios are well fitted by simple statistical models involving only two parameters, a temperature $`T_f`$ and a baryon chemical potential $`\mu _B`$. One should distinguish here between the chemical freeze-out, at which point the matter composition is frozen, and the thermal freeze-out, where particles undergo their last collisions. Particle ratios determine the parameters of the chemical freeze-out, while the parameters of the thermal freeze-out can be obtained with additional information from momentum distributions.
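The Bjorken estimates above are easy to reproduce; the nuclear radius parametrization R = 1.2 A^(1/3) fm is a standard assumption (not spelled out in the text), and tau_0 = 1 fm/c is the generic value quoted:

```python
import math

def bjorken_energy_density(dET_deta, A, tau0=1.0, r0=1.2):
    """Bjorken estimate eps_0 = (1/(tau0 * pi * R^2)) * dE_T/deta
    in GeV/fm^3, with R = r0 * A^(1/3) fm."""
    R = r0 * A ** (1.0 / 3.0)
    return dET_deta / (tau0 * math.pi * R ** 2)

print(bjorken_energy_density(200.0, 197))  # AGS Au+Au: ~1.3 GeV/fm^3
print(bjorken_energy_density(450.0, 208))  # SPS Pb-Pb: ~2.8 GeV/fm^3
```

With these conventional inputs the function recovers the quoted 1.3 and ~3 GeV/fm<sup>3</sup>.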
Transverse momentum spectra are seen to be “blue-shifted”, reflecting the collective flow of particles moving towards the observer. This effect is seen in the spectra as an increase of the inverse slope (effective temperature) with the mass of the particle considered. The freeze-out temperature and the collective expansion velocity thus determined are compatible with interferometry measurements . Thermal freeze-out occurs at a temperature lower than chemical freeze-out, that is, given the expansion, at a later time. Freeze-out parameters evolve with beam energy, moving from high density/low temperature at small beam energy to low density/high temperature at high beam energy. These parameters can be displayed in a phase diagram which has been shown many times at the last Quark Matter meeting . A striking feature of this diagram is that the line representing matter at freeze-out is quite close to that representing the phase boundary toward the quark-gluon plasma . Another interesting observation is that the energy per particle on the freeze-out line is about 1 GeV . Although they are highly suggestive, the significance of all these observations is still unclear. The fact that similar models reproduce the particle production in $`e^+e^{}`$ collisions signals a universal behavior, reminiscent of Hagedorn’s picture, and suggests that phase space is statistically populated in the prehadronization phase. Various collective flows have been observed (for a review see , and Danielewicz at this meeting). For central collisions, particle emission is azimuthally symmetric, and leads to a transverse or radial flow responsible for the distortions of the momentum distributions referred to earlier. In non-central collisions, directed flow and elliptic flow can occur.
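Both effects just described have simple standard parametrizations; the sketch below uses the common approximation T_eff ~ T_f + m <beta_t>^2 for the inverse slope (valid for heavy particles and non-relativistic flow) and the Fourier definition v_n = <cos n(phi - Psi_RP)> for the flow coefficients. The numerical T_f and beta_t are illustrative assumptions, not fitted values:

```python
import math

def inverse_slope(T_f, mass, beta_t):
    """Approximate inverse slope ("effective temperature") of a thermal
    source with radial flow: T_eff ~ T_f + m * <beta_t>^2, for m >> T_f."""
    return T_f + mass * beta_t ** 2

def flow_coefficients(phis, psi_rp=0.0):
    """Directed (v1) and elliptic (v2) flow as Fourier coefficients of
    the azimuthal distribution: v_n = <cos n(phi - Psi_RP)>."""
    n = len(phis)
    v1 = sum(math.cos(p - psi_rp) for p in phis) / n
    v2 = sum(math.cos(2.0 * (p - psi_rp)) for p in phis) / n
    return v1, v2

for name, m in [("pion", 0.140), ("kaon", 0.494), ("proton", 0.938)]:
    print(name, inverse_slope(0.120, m, 0.5))  # heavier -> larger slope
print(flow_coefficients([0.0, math.pi]))       # back-to-back pair: v1 = 0, v2 = 1
```

In a real analysis the reaction-plane angle Psi_RP is not known a priori and must itself be estimated event by event.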
The nature of the elliptic flow changes with the beam energy, from low energy, where it is out of plane (also referred to as squeeze-out), to high energy, where the flow is enhanced in the direction of the impact parameter, in which the pressure gradient is the strongest. Analysis of the various flow patterns, and of their evolution with the collision energy, can give information on the equation of state. With the increasingly accurate measurements of two-particle correlations and interferometry , the space-time picture of nucleus-nucleus collisions at high energy is altogether becoming more and more precise. New perspectives are offered by the analysis of event-by-event fluctuations , which is made possible by the high-statistics data now available. ## 4 SPECIFIC SIGNALS I turn now to specific “signals”, that is, observables for which anomalous behavior has been detected. At least for two of them, namely the enhancement of strangeness and the suppression of $`J/\mathrm{\Psi }`$ production , the observed effects were anticipated, although it is fair to say that the predictions were not quantitative: the results obtained in Pb-Pb collisions came as a surprise, and their interpretation is still under debate. On each of these subjects there is much to say (strangeness for instance having its own topical meetings!) and I shall only give a few indications. ### 4.1 Strangeness An enhancement of strangeness (properly defined) is observed in all experiments which measure particles carrying strange quarks: $`K`$, $`\varphi `$, $`\mathrm{\Lambda }`$, $`\mathrm{\Xi }`$, $`\mathrm{\Omega }`$. While at the AGS most of the strangeness production can be understood in terms of independent nucleon-nucleon collisions and hadronic reinteractions , this is not so at the SPS.
In particular, precise measurements of the individual hyperon yields ($`\mathrm{\Lambda }`$, $`\mathrm{\Xi }`$, $`\mathrm{\Omega }`$, and the corresponding antiparticles) in Pb-Pb collisions have revealed a systematic increase with respect to p-Pb as the strangeness content of the particle increases . This observation is difficult to account for by invoking hadronic rescattering, and in fact none of the hadronic models which successfully reproduce the bulk of the data fit the strange particle yields without ad hoc adjustments. Note that $`K`$ and $`\mathrm{\Lambda }`$ are relatively easy to produce in a hadron gas, but the cross section for the production of $`\mathrm{\Omega }+\overline{\mathrm{\Omega }}`$ is small . One could imagine a scenario where multistrange hadrons are produced in steps, but this seems to take more time than is available. Another important observation is that the mechanism responsible for the strangeness enhancement in Pb+Pb collisions seems to be independent of the centrality when the number of participants is $`N_{part}>100`$ . It is clearly important to explore smaller systems or more peripheral collisions to determine where the effect sets in. A similar remark applies to the energy dependence of the effect, and pinning down the onset of the phenomenon as a function of beam energy is one of the most compelling motivations to perform a low energy run at the CERN/SPS. These observations, combined with the remarks made earlier on the properties of matter at freeze-out, and in particular the fact that the ratios of strange particles are compatible with statistical models , convey the impression that much of the strangeness observed in Pb-Pb collisions is already present in the early stages of the collision. ### 4.2 Dileptons The vector mesons, through their decay into dileptons, provide access to the properties of the dense matter at various stages of the collisions.
The $`\rho `$ meson plays a particular role because its lifetime (1-2 fm/c) is such that it decays much of the time while still in matter. Since the dileptons produced in the decay interact weakly with the surrounding matter, they carry direct information about the state of matter at the time of the decay. The CERES Collaboration has obtained evidence for a significant excess of dileptons with invariant mass below that of the $`\rho `$. The excess, which is concentrated at low transverse momentum $`p_T`$, is not accounted for by $`\pi ^+\pi ^{-}`$ annihilation in vacuum. Typical dilepton production processes involve the $`\rho `$ meson as an intermediate state, and the $`\rho `$ meson can be affected by the surrounding medium. There has been much effort devoted lately to understanding how the basic properties of hadrons are modified in matter. The way hadronic interactions modify the spectral density of the $`\rho `$ meson, leading in particular to a shift of its mass and an increase of its width, has been analyzed with increasingly sophisticated theoretical models (see, for instance, the talk by R. Rapp at this meeting). Certainly the most exciting issue here is the potential relation of such modifications to the onset of chiral symmetry restoration, which could occur under the conditions realized at the SPS. Much progress in the field is expected from the coming runs, which should provide high precision data, allowing in particular the behaviors of the various resonances in matter to be distinguished, while the low energy run should demonstrate the sensitivity of the effect to a change in the baryonic density. ### 4.3 $`J/\mathrm{\Psi }`$ suppression With the $`J/\mathrm{\Psi }`$, a tiny bound state made of a heavy charm quark-antiquark pair, we can probe the very early stages of the collisions. In contrast to the $`\rho `$, the $`J/\mathrm{\Psi }`$ has a very long lifetime and it decays into dileptons only when it is far from the collision zone.
However, as pointed out by Matsui and Satz, the binding of the $`J/\mathrm{\Psi }`$ meson is sensitive to the screening of the $`c\overline{c}`$ potential by a quark-gluon plasma, and the meson bound state will not survive in a hot enough quark-gluon plasma. Hence the original argument that a decrease of the observed $`J/\mathrm{\Psi }`$ yield could signal the formation of the quark-gluon plasma. An alternative scenario for the $`J/\mathrm{\Psi }`$ suppression involves $`J/\mathrm{\Psi }`$ collisions with hard “deconfined” gluons present in the quark-gluon plasma. The first run of experiments at CERN indeed showed that the rate of $`J/\mathrm{\Psi }`$ production was less than the rate expected from extrapolations of nucleon-nucleon collisions. But it soon appeared that this phenomenon, as well as the corresponding one observed in proton-nucleus collisions, could be accounted for by what is usually referred to as nuclear absorption. A $`J/\mathrm{\Psi }`$ produced somewhere in the nucleus has to cross a certain region of nuclear matter before escaping, and because it can interact inelastically with nucleons on its way out, it may be destroyed. A survival probability can then be defined, $`\mathrm{exp}\{-L/\lambda \}`$, where $`L`$ is the distance traveled by the $`J/\mathrm{\Psi }`$ in nuclear matter, and $`\lambda =1/(n\sigma _{abs})`$ an absorption mean free path, with $`n`$ the nuclear density. Several analyses lead to a value of $`\sigma _{abs}`$ of the order of 6 to 7 mb. The fact that the Pb-Pb data do not obey this simple behaviour was a surprise. And the temptation to speculate about a new mechanism at work has been irresistible. Interestingly, if one assumes that the extra suppression observed in Pb-Pb collisions is a local phenomenon (in space-time), sensitive for instance only to the local energy density, one can account quantitatively for the bulk of the data.
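As a rough numerical illustration of the nuclear absorption picture, the survival probability $`\mathrm{exp}\{-L/\lambda \}`$ can be evaluated for indicative inputs; the density and cross section below are illustrative numbers only (normal nuclear density and a $`\sigma _{abs}`$ in the quoted 6-7 mb range), not fitted values.

```python
import math

# Illustrative numbers only: normal nuclear density and an absorption
# cross section in the 6-7 mb range quoted above (1 mb = 0.1 fm^2).
n = 0.17                      # nuclear density, nucleons/fm^3
sigma_abs = 6.5 * 0.1         # 6.5 mb expressed in fm^2
lam = 1.0 / (n * sigma_abs)   # absorption mean free path, in fm

for L in (2.0, 5.0, 8.0):     # distance traveled in nuclear matter, fm
    print(f"L = {L} fm  ->  survival exp(-L/lambda) = {math.exp(-L / lam):.2f}")
```

Any suppression beyond this simple exponential is the “anomalous” component under discussion for Pb-Pb.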
But to prove that we are indeed dealing with a phenomenon related to local energy density, one would need data at lower beam energy, which is hard, if not impossible, to get. Alternatively, one could explore smaller systems for which the new mechanism would set in within the covered $`E_T`$ range. While the present NA50 data show clear deviations from the normal nuclear absorption pattern, whether or not they present a threshold remains a subject of controversy. Observing a threshold behaviour would be very suggestive of a qualitative change in the properties of matter at some energy density. Progress has been made in the analysis of both the low $`E_T`$ and the high $`E_T`$ regions, with, in the latter case, the elimination of spurious reinteractions in the target. Worth emphasizing is the development of a new method of analysis making use of minimum bias events; the new procedure removes in particular the statistical fluctuations in the Drell-Yan spectrum, which were a limitation in previous analyses. ## 5 THE EARLY STAGES OF NUCLEAR COLLISIONS AT RHIC AND LHC With RHIC coming, one may attempt to extrapolate the knowledge gained at the AGS and SPS to higher energies. The corresponding “predictions” of various models will be reviewed at the end of this meeting. Let me concentrate here on a few conceptual issues of relevance when exploring higher energies. At high energy, semi-hard interactions leading to minijets are believed to play an important role. They start to compete with “soft” phenomena presumably already at RHIC, where they may contribute a significant fraction of the total transverse energy. The reason theorists are interested in this regime is that it is one where one could hope to do reliable calculations from first principles in QCD. The regime is indeed one of many weakly coupled particles, which is perhaps amenable to a classical description.
In fact, Monte Carlo simulations, the so-called parton cascade calculations, treat partons as free particles and study their evolution, taking into account QCD interactions, and assuming that the initial distributions in phase-space are given by the structure functions of the nuclei. These calculations provide a detailed description, at the partonic level, of the beginning of a nucleus-nucleus collision. They allow the study of thermalisation, the build-up of energy density, and so on. However, they raise a number of theoretical questions. To what extent can partons be treated as classical particles? What is the role of quantum mechanical coherence effects? In the last few years, efforts have been made to provide a more satisfying theoretical framework. Simple and physically appealing pictures of the initial wave function have been constructed. But relating these initial wave functions to the initial conditions for parton cascades, that is, understanding how the gluons initially in the wave functions are freed in the collision, remains an open problem. ## 6 CONCLUSIONS The analysis of the various data collected at the BNL/AGS and CERN/SPS has led to truly impressive progress in our understanding of nucleus-nucleus collisions at high energy: A-A collisions are clearly distinct from N-N collisions. Large energy densities are produced, providing appropriate conditions for the creation of a quark-gluon plasma. Collective behaviour of matter is seen, and we have hints from several observables that quark degrees of freedom play a role in the collision dynamics. There are good indications that thermal and chemical equilibrium may be reached at freeze-out. The space-time pictures of the collisions are becoming more and more precise with flow and interferometry measurements, and the large multiplicities of Pb-Pb collisions allow for promising event-by-event analyses. Flow patterns are starting to be used to reveal details of the equation of state.
Finally, several anomalies have been identified in observables considered as potential “signatures” of the quark-gluon plasma. It is fair to say, however, that we are not yet in a position to offer outsiders to the field compelling evidence that a quark-gluon plasma has been produced. For one thing, our theoretical picture of the quark-gluon plasma is still very incomplete. Calculating the properties of the quark-gluon plasma, even in equilibrium, turns out to be a difficult task beyond the trivial level of the free gas (on which most experimental analyses rely). And the off-equilibrium properties of the plasma are essentially unknown. Thus we have so far no unique “signature”, and what we are looking for is evolving as we progress in our understanding of nucleus-nucleus collisions. But the progress which has been made since the very first run of experiments is really remarkable, and gives great confidence in the future of the field. And with RHIC now coming online, there is every reason to be optimistic.
# Investigation of the Mesoscopic Aharonov-Bohm effect in Low Magnetic Fields ## I Introduction The Aharonov-Bohm (AB) effect, first proposed in 1959, was experimentally realized in a normal metal system in 1982. Later the AB effect was observed in a semiconductor system, and was the subject of a number of investigations which expanded our general understanding of mesoscopic physics. These investigations focused their attention on relatively high magnetic fields; only a few works directly addressed the phase of the oscillations. Recently, owing to the perfection of e-beam lithography, the AB effect has been the subject of renewed interest. AB rings are now used to perform phase sensitive measurements on, e.g., quantum dots, or on rings where a local gate only affects the properties in one of the arms of the ring. The technique used in these reports is to locally change the properties of one of the arms of the ring, and to study the AB effect as a function of this perturbation. Information about the changes in phase can be extracted from the measurements. Especially the observation of a period halving from $`h/e`$ to $`h/2e`$ and of phase-shifts of $`\pi `$ in the magnetoconductance signal has attracted large interest. ## II Experiment We fabricate the AB rings in a standard two dimensional electron gas (2DEG) situated 90 nm below the surface of a GaAs/GaAlAs heterostructure. The 2DEG electron density is $`2.0\times 10^{15}\mathrm{m}^{-2}`$ and the mobility is $`90\mathrm{T}^{-1}`$. This corresponds to a mean free path of approximately $`6`$ $`\mu \mathrm{m}`$. The ring is defined by e-beam lithography and realized with a shallow etch technique. The etched AB structure has a ring radius of $`0.65\mu \mathrm{m}`$ and an arm width of $`200\mathrm{n}\mathrm{m}`$ (Fig. 1, left insert). A $`30\mu \mathrm{m}`$ wide gold gate covers the entire ring, and is used to change the electron density. A positive voltage $`V_g`$ must be applied to the gate for the structure to conduct.
The sample was cooled to $`0.3\mathrm{K}`$ in a <sup>3</sup>He cryostat equipped with a copper electromagnet. Measurements were performed using a conventional voltage biased lock-in technique with an excitation voltage of $`V=7.7\mu \mathrm{V}`$ oscillating at a frequency of $`131\mathrm{H}\mathrm{z}`$. Here we show measurements performed on one device; similar results have been obtained with another device in a total of six different cool-downs. ## III Results We first consider the conductance as a function of the voltage applied to the global gate at zero magnetic field. This is shown in Fig. 1 (right insert), at $`T`$=$`4.2\mathrm{K}`$. Steps are observed at approximately integer values of $`e^2/h`$. At least four steps are seen as the voltage is increased by $`0.18\mathrm{V}`$ from pinch-off. Such steps have previously been reported in AB rings. The steps show that our system, in the gate voltage regime used here, only has a few propagating modes. When the temperature is lowered, a fluctuating signal is superposed on the conductance curve. At $`0.3`$K, the steps are completely washed out by the fluctuations. We ascribe the fluctuations to resonances. They appear at the temperatures where the AB oscillations become visible and are the signature of a fully phase coherent device. We show in Fig. 1 an example of a magnetoconductance measurement. Here the amplitude of the oscillations is approximately 7 % around zero field. We have seen oscillation amplitudes of up to 10 %. The conductance measurement is, due to a long distance from the voltage probes to the sample, effectively two-terminal. Hence the magnetoconductance must be symmetric, $`G(B)`$ = $`G(-B)`$, due to the Onsager relations. Here $`B`$ is the applied magnetic field. This means that there can only be a maximum or a minimum of the conductance at zero field, or, stated differently, that the phase of the oscillations is $`0`$ or $`\pi `$. In Fig.
2 we show the conductance $`G(\mathrm{\Phi },V_g)`$, with the fluctuating zero-field conductance $`G(0,V_g)`$ subtracted, as a function of magnetic flux $`\mathrm{\Phi }`$ through the ring and gate voltage $`V_g`$. The conductance is symmetric. Note that the dark (light) regions correspond to magnetoconductance traces with an AB phase of $`0`$ ($`\pi `$). To exemplify this, we show single traces in Fig. 3. Another remarkable feature is the occurrence of traces with half the expected period in magnetic flux, see Fig. 3. We observe phase-shifts in the magnetoconductance, and occasional halving of the period, in all our measurements. The transitions between situations with AB-phase $`0`$ and $`\pi `$ are smooth as the gate voltage is changed, as can be seen in Fig. 2. In between, magnetoconductance traces appear that have both $`h/e`$\- and $`h/2e`$-periodicity, Fig. 4. The zero-field conductance $`G(0,V_g)`$ for the measurement shown in Fig. 2 varies between $`2.5`$ and $`4.5`$ in units of $`e^2/h`$. We find in general that, for conductances of the AB ring less than approximately $`2e^2/h`$, the AB oscillations are weak or not present at all. This might be due to one of the arms pinching off before the other. ## IV Discussion The fact that the AB oscillations can have a minimum at zero field implies that the AB ring is on these occasions not symmetric, in the sense that the quantum phase acquired by traversing the two arms is not the same. In order to understand the behaviour, we compare our measurements with the theory, which is derived for a phase coherent device with 1D independent electrons and only one incident mode. This is the simplest possible theoretical model one can think of.
The conductance $`G`$ is given by $`\mathrm{G}(\theta ,\varphi ,\delta )=`$ $`{\displaystyle \frac{2e^2}{h}}2ϵ\mathrm{g}(\theta ,\varphi )`$ (2) $`(\mathrm{sin}^2\varphi \mathrm{cos}^2\theta +\mathrm{sin}^2\theta \mathrm{sin}^2\delta -\mathrm{sin}^2\varphi \mathrm{sin}^2\delta ).`$ Here, $`\varphi =k_FL`$, where $`k_F`$ is the Fermi wave number and $`L`$ is half the circumference of the ring, is the average phase due to spatial propagation. $`\delta =\mathrm{\Delta }(k_FL)`$ is the phase difference between the two ways of traversing the ring. When $`\delta `$ is not $`0`$, the AB oscillations might be phase-shifted by $`\pi `$. $`\theta =\pi \mathrm{\Phi }/\mathrm{\Phi }_0`$ is the phase originating from the magnetic flux. The coupling parameter $`ϵ`$ can vary between $`0`$, for a closed ring, and $`1/2`$. The function $`\mathrm{g}(\theta ,\varphi )`$ is given by $`\mathrm{g}(\theta ,\varphi )=`$ (3) $`{\displaystyle \frac{2ϵ}{(a_{-}^2\mathrm{cos}2\delta +a_+^2\mathrm{cos}2\theta -(1-ϵ)\mathrm{cos}2\varphi )^2+ϵ^2\mathrm{sin}^22\varphi }},`$ (4) where $`a_\pm =(1/2)(\sqrt{1-2ϵ}\pm 1)`$. Overall, we find the best agreement with the lineshape of the oscillations by taking $`ϵ`$ = $`1/2`$, as expected for an open system. Previously, we estimated $`\varphi =k_FL`$ to be approximately $`100`$–$`160`$ for the gate voltage regime used here. However, note that varying $`\varphi `$ and $`\delta `$ between $`0`$ and $`\pi /2`$ in the expression (2) exhausts all possible lineshapes of the magnetoconductance oscillations. Equation (2) gives a conductance that can oscillate between $`0`$ and $`2(e^2/h)`$. The scale of the oscillations, as seen in Fig. 2, is at most $`0.3(e^2/h)`$.
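A minimal numerical sketch of eqs. (2)-(4) (in units of $`2e^2/h`$, with the minus signs placed as assumed here and arbitrary illustrative parameter values) makes it easy to check the symmetry and periodicity of the predicted lineshape:

```python
import numpy as np

def G_ring(theta, phi, delta, eps=0.5):
    """Conductance of eqs. (2)-(4), in units of 2e^2/h (signs as assumed here)."""
    a_m = 0.5 * (np.sqrt(1.0 - 2.0 * eps) - 1.0)   # a_-
    a_p = 0.5 * (np.sqrt(1.0 - 2.0 * eps) + 1.0)   # a_+
    den = (a_m**2 * np.cos(2 * delta) + a_p**2 * np.cos(2 * theta)
           - (1.0 - eps) * np.cos(2 * phi))**2 + eps**2 * np.sin(2 * phi)**2
    g = 2.0 * eps / den                            # eq. (4)
    bracket = (np.sin(phi)**2 * np.cos(theta)**2
               + np.sin(theta)**2 * np.sin(delta)**2
               - np.sin(phi)**2 * np.sin(delta)**2)
    return 2.0 * eps * g * bracket                 # eq. (2)

theta = np.linspace(-np.pi, np.pi, 801)            # theta = pi * Phi / Phi_0
trace = G_ring(theta, phi=1.0, delta=0.3)
```

Since $`\theta `$ enters only through even, $`\pi `$-periodic functions, any such trace is symmetric in the flux and $`h/e`$-periodic, and for $`\varphi =\delta =\pi /4`$ it becomes purely $`h/2e`$-periodic.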
In order to match the lineshape of the magnetoconductance oscillations to the measurements, we use the form $$\mathrm{G}(\mathrm{B})=\mathrm{G}_\mathrm{o}+\mathrm{G}_\mathrm{\Delta }\mathrm{G}(\theta (B),\varphi ,\delta )_{ϵ=1/2}.$$ (5) The introduction of the parameters $`\mathrm{G}_\mathrm{o}`$ and $`\mathrm{G}_\mathrm{\Delta }`$ is partly justified by the following facts: 1) the experiment is performed at a finite temperature, where the device might not be perfectly coherent, and incoherent transmission will on average not contribute to the magnetoconductance oscillations; 2) for a system with more than one incident mode, there will again be a constant background and the amplitude of the oscillations will be diminished. The lines in Fig. 3 are fits with the form (5). Note first that the expression (2) can indeed produce both phase-shifts and halving of the period. (For $`\varphi =\delta =\pi /4`$ the period is purely $`h/2e`$.) Next, in Fig. 4 several magnetoconductance traces are fitted with (5). The lineshapes of (2) agree nicely with the measurements. Note, however, that the introduction of the two extra parameters $`\mathrm{G}_\mathrm{o}`$ and $`\mathrm{G}_\mathrm{\Delta }`$, in addition to $`\varphi `$ and $`\delta `$, gives four adjustable parameters in the fit. In order to extract solid information on the variation of $`\varphi `$ and $`\delta `$ in the experiment, an independent assessment of $`\mathrm{G}_\mathrm{o}`$ and $`\mathrm{G}_\mathrm{\Delta }`$ will be needed. ## V Conclusion The oscillatory magnetoconductance of an AB ring, and in particular the phase of the oscillations, is systematically studied as a function of electron density. We observe phase-shifts of $`\pi `$ in the magnetoconductance oscillations, and halving of the fundamental $`h/e`$ period, as the density is varied. All these features are reproduced by a simple theoretical model, when allowing for an asymmetry in the electron density in the two arms of the ring.
Our interpretation gives a simple explanation of why period-halving and phase shifts appear in mesoscopic AB rings. Further, our measurements suggest that variations in single-mode characteristics might be probed by studying the lineshape of the AB oscillations. ## VI acknowledgements We wish to thank David H. Cobden and Per Hedegård for enlightening discussions. This work was financially supported by Velux Fonden, the Ib Henriksen Foundation, the Novo Nordisk Foundation, the Danish Research Council (grants 9502937, 9601677 and 9800243) and the Danish Technical Research Council (grant 9701490)
# Patterns of consumption in socio-economic models with heterogeneous interacting agents. ## I Introduction A great body of literature has been devoted to the effects that interactions among consumers or firms (henceforth called agents) have on the macroeconomic variables of socio-economic systems (Aoki 1996). Applications range from consumption theory and economic growth to opinion formation, stock market prices and the emergence of de facto standards in technology choice (see Kirman (1997) and Durlauf (1997) for recent reviews). A common feature of many aggregate variables is that they exhibit oscillatory behaviour, showing boom-and-bust patterns. Examples are fad and bandwagon behaviour in sociology, business cycles in economics, bubbles in stock market prices, and wave behaviour in the adoption of innovative technology. Boom-and-bust cycles have also been observed (Persons et al. 1996) in the adoption of financial technologies, like the CMO (collateralized mortgage obligations) and LBO (leveraged-buyout) waves of the 1980s, the merger and acquisition waves of the 1890s, 1920s and 1960s, as well as the takeover wave of the 1980s. Such oscillatory behaviour can be explained in terms of interacting agents if an individual’s consumption decision depends on the behaviour of different groups of consumers. Various models have been introduced (Kirman 1996, Durlauf 1996), with interactions among agents depending on the agents’ “distance” (such as, for example, differences in wealth), but typically in these models agents interact with each other in a symmetric way. Recently Cowan, Cowan and Swan (1998), hereafter CCS, introduced a model where the utility of an individual agent can be positively or negatively affected by the choices of different groups of agents and consumption is driven by peering, imitation and distinction effects.
In CCS (1998), consumers are ordered according to their social status and are affected by the behaviour of other agents depending on their relative location on the spectrum. Agents wish to distinguish themselves from those who are below and to emulate their peers and those who are above in the social spectrum. Even though this model is capable of reproducing consumption waves, which emerge from the interplay between aspiration and distinction effects, it lacks an important ingredient, namely costs of consumption. We reformulate here the model of CCS (1998) in the framework of the statistical mechanics of disordered systems, incorporating costs of consumption which vary across agents and are exogenously determined. The dynamical properties of the model are explored, by numerical simulations, for different choices of the parameters. Various complex patterns are found. ## II Model We consider N agents, ordered on a one-dimensional continuum space according to their “wealth” $`w_i`$. In this paper wealth serves as an index of social status rather than as the source of a budget constraint, as discussed below. A more realistic situation, with consumers arranged over a multidimensional space (accounting, for example, for differences in age, education, etc.), should be considered, but for simplicity we will only discuss here the case $`d=1`$. Agents’ wealth is chosen randomly, uniformly distributed between zero and $`W_0`$, and does not change with time. Time evolves discretely and at each step the state of agent $`i`$ is characterized by a variable $`S_i`$ which can take only two values, $`\pm 1`$; if agent $`i`$ chooses to consume then $`S_i=1`$, while if (s)he chooses not to consume then $`S_i=-1`$.
At time zero a new product appears in the market and each agent decides whether or not to consume one indivisible unit of it, doing so if this provides positive utility (for example, a new restaurant opens at time zero and in each subsequent time period agents decide whether to visit it or not). Utility $`Y_i(t)`$ <sup>*</sup><sup>*</sup>*It corresponds to a Hamiltonian interaction: $`H=-\sum _{i=1}^{N}Y_i(t)S_i(t)`$. is affected by idiosyncratic preferences and costs as well as by externalities from other agents: $$Y_i(t)=\frac{1}{N}\underset{j=1}{\overset{N}{\sum }}J_{ij}S_j(t)+G(w_i)+Cc_i+Nn_i(t)$$ (1) Individual costs $`c_i`$ are uniformly distributed in the interval $`(-1,0)`$ and the functional form of the intrinsic value $`G(w_i)`$ of the good for different agents will be specified below. In this partial equilibrium set-up with only one good, the decision on whether to buy the good or not depends only on whether the “marginal” utility of buying one unit exceeds the “marginal” cost. In this sense, all agents are assumed to possess sufficient liquidity at all points of time and wealth constraints are never binding. A product with $`G=0`$ is called a “fashion” good, while a “status” good has a positive intrinsic value that might be well-suited to the characteristics or tastes of a particular class of consumers. The idiosyncratic noise term $`n_i`$ describes individual preferences which can change with time, and is uniformly distributed in the interval $`(-1,1)`$. Each agent interacts with all the others, and the coupling constants $`J_{ij}`$ are functions of the agents’ status according to: $$J_{ij}=-J_A\mathrm{arctan}(w_i-w_j)+J_S\left[\frac{\pi }{2}-\mathrm{arctan}|w_i-w_j|\right]$$ (2) The coefficients $`J_A,J_S`$ are taken positive. The asymmetric term, proportional to $`J_A`$, gives (for a consuming agent $`j`$, $`S_j=1`$) a negative contribution to the utility function $`Y_i`$ if $`w_i>w_j`$ and a positive contribution if $`w_i<w_j`$.
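A minimal sketch of how eqs. (1)-(2) can be iterated with parallel dynamics; the parameter values below are illustrative, the sign conventions are the ones assumed here, and the decision rule is simply to consume when utility is positive:

```python
import numpy as np

rng = np.random.default_rng(0)
V, W0 = 800, 100.0                  # number of agents and wealth scale
J_A, J_S = 1.0, 25.0                # asymmetric and peering couplings
G0, C, N_amp = 0.0, 0.0, 0.0        # fashion good, no costs, no noise

w = np.sort(rng.uniform(0.0, W0, V))   # fixed wealths (social status)
c = rng.uniform(-1.0, 0.0, V)          # individual costs in (-1, 0)
S = -np.ones(V)                        # initially nobody consumes

d = w[:, None] - w[None, :]            # w_i - w_j
J = -J_A * np.arctan(d) + J_S * (np.pi / 2 - np.arctan(np.abs(d)))  # eq. (2)

def step(S):
    n = rng.uniform(-1.0, 1.0, V)              # idiosyncratic preferences
    Y = J @ S / V + G0 + C * c + N_amp * n     # eq. (1), constant G(w_i) = G0
    return np.where(Y > 0.0, 1.0, -1.0)        # consume iff utility is positive

consumption = []
for _ in range(200):
    S = step(S)
    consumption.append(int((S == 1.0).sum()))  # total consumption vs. time
```

Tracking `consumption` over time, together with the wealth indices of the consuming agents, gives the total-consumption and spatio-temporal patterns discussed in the simulations.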
This means that agent $`i`$ wishes to distinguish herself from the poorer while imitating the richer. The second contribution, proportional to $`J_S`$, always generates positive utility and expresses peering effects among consumers of similar status. Both of these contributions saturate with distance. In Fig. (1.a) $`J_{ij}`$ is plotted as a function of $`(w_i-w_j)`$. At each time $`t`$, we let agents re-evaluate their consumption decision, relative to the one taken one step before, on the basis of the utility $`Y_i(t)`$ they receive at the same time. This amounts to assuming that agents are myopic, or that they face a complete absence of opportunities for intertemporal arbitrage, i.e. there are no capital markets. Agents choose simultaneously, that is, our system evolves with parallel dynamics. We have further allowed for the possibility that only a fraction $`\alpha `$ of the total population reconsider their decision, with this subpopulation picked randomly at each time step. In the following, though, we shall consider only the case $`\alpha =1`$. An alternative scenario that will be considered (which we call Model II henceforth) is when $`J_S=0`$ and the asymmetric influences exerted by agents on the others are given by: $$J_{ij}=\mathrm{arctan}\frac{J_A}{w_i-w_j}$$ (3) $`J_{ij}`$ is shown in Fig. (1.b). In this case there are only distinction-aspiration effects, which are stronger for agents of similar status and vanish at large distance. This describes the situation where agents segregate into socio-economic groups, so that among members of the same group there is a tendency to outshine their peers and to ignore the rest. In both cases, the coupling constants $`J_{ij}`$ are random numbers (positive or negative), which provides an analogy with spin-glass models (Mezard et al. 1987). The peculiar properties of spin-glasses have attracted considerable attention (Aoki 1996, Arthur et al.
1988, Arthur et al. 1997) in socio-economic disciplines, particularly towards the possible implications, in that context, of the degeneracy of the equilibrium state and of the slow relaxation dynamics to equilibrium (aging). We stress, though, that in our formulation the coupling constants are not symmetric and we do not expect our model to share the nice properties of spin glasses (Crisanti et al. 1988, Cugliandolo et al. 1997). Nonetheless a study (Iori et al. 1997) of the Sherrington-Kirkpatrick (SK) model with a non-hamiltonian contribution (i.e. taking $`J_{ij}=J_{ij}^S+ϵJ_{ij}^A`$) has shown that both the dynamical behaviour and the nature of the equilibrium (at least for small lattices) survive a small asymmetric perturbation. In view of this result it could be interesting to repeat the study of Iori et al. (1997) when $`ϵ`$ is zero for all but a certain fraction of the $`(i,j)`$ pairs, for which $`ϵ`$ can be large. This case would be similar to the model proposed in eq. (1). Finally, the case of our model in eq. (1) with $`J_A=0`$ (i.e. considering only peering effects) would be a natural framework to study the problem of technological progress. In this case, indeed, while peering finds a natural interpretation in terms of the advantage that interacting firms (for example those who share a partnership) have in being at a similar technological level (see Arenas et al. (1999)), aspiration and distinction effects do not affect the payoff a firm receives from adopting a new technology. When $`J_A=0`$, eq. (1) describes a modified Random Field Ising Model (RFIM) with site dependent couplings. At zero temperature, when adiabatically driven by an external field, the RFIM is characterized by avalanche dynamics. In our model the role of the external field is played by costs. ## III Simulations and Results ### A Model I We have performed computer simulations to study the model in eq.
(1) above for different choices of the parameters and explored the various patterns observed. We studied both the time evolution of total consumption and the shape of the distribution of consumption across the social spectrum. The pictures we show refer to a relatively small lattice of $`V=800`$ consumers (for the graphics we are limited by the size of the screen). We have, however, analyzed larger lattices ($`V=4000`$), and the results do not change qualitatively with the lattice size. Agents’ wealth is uniformly distributed in the interval $`(0,W_0)`$ with $`W_0=100`$ and does not change with time. We assume that, when a new good enters the market, agents initially do not consume it ($`S_i(0)=-1`$). We could alternatively have chosen a nonhomogeneous initial condition (with consumption distributed across agents with a certain probability) and studied how the system evolves towards the equilibrium state. Initially we study the case of a fashion good ($`G=0`$), choosing $`C=N=0`$. We explore the different consumption patterns when fixing $`J_A`$ and increasing $`J_S`$. For $`J_S`$ small (region I) the total consumption exhibits waves, with amplitude less than $`V`$ and high frequency. As $`J_S`$ is increased, the period of the oscillations increases and the amplitude becomes almost equal to $`V`$. The situation is depicted in Fig. (2) for $`J_A=1`$ and $`J_S=2,15,25`$. In Fig. (3) we show a typical pattern, for $`J_A=1,J_S=25`$, as a function of time (time increases from top to bottom). Notice that it is the rich who start consuming first, in order to distinguish themselves from the poor. At the beginning of each cycle consumption propagates rather slowly, but it spreads faster as it moves down to the poorer. As we keep increasing $`J_S`$, though, the waves disappear and the system develops a steady-state behaviour with a variety of consumption patterns: above the value $`J_S^c`$ where waves disappear, we first find a narrow region (region II) where all but the richest consume.
Still increasing $`J_S`$, we find a region (region III) where only the rich consume but not the rest, while if we further increase $`J_S`$ (region IV) nobody consumes, as distinction effects are not strong enough to initiate the process. Simulations for these three regions are shown in Fig. (4). The precise location of these four regions in the $`(J_A,J_S)`$ plane is depicted in Fig. (5) for an 800-agent lattice. Up to now we have assumed that agents do not have specific preferences for the new good. When introducing preferences, these act like the temperature in a physical system and can induce a transition from one equilibrium state to another. In our model, for example, as we turn on idiosyncratic noise ($`N>0`$), in region I waves become blurred, with consumption greater than zero and less than $`V`$, while for $`J_A=1,J_S=0,N=1`$, Fig. (6), we see, curiously, the emergence of modulated waves. Eventually, for values of $`N>0`$, consumption waves emerge also in region IV (a larger $`N`$ is required the deeper one is in region IV) where, at $`N=0`$, nobody would consume. Next we introduce costs ($`C>0`$) while keeping $`G=0`$ (or small compared to costs). For costs below a certain value (we find $`C_c=0.465`$ for $`J_A=1`$ and $`J_S=25`$) waves still emerge and propagate across the whole social spectrum, but if $`C>C_c`$ the equilibrium state is characterized by nobody consuming. Adding noise, at $`N\simeq 1`$ consumption waves of varying amplitude emerge spontaneously even at $`C>C_c`$, though at irregular time intervals (see Fig. (7)). Nonetheless, if $`N`$ becomes much larger, waves disappear and total consumption, disorderly distributed, fluctuates around a constant level. Initial conditions can also play an important role in the dynamical selection of the different asymptotic states: in Fig. (8) we show that, imposing an initial condition with a fraction (here the top 10% of the population) of the agents consuming, a consumption wave is generated even for $`C>C_c`$.
Finally we consider the general case of a status good, which has an intrinsic value ($`G(w_i)>0`$) for some agents $`i`$ and whose consumption entails costs ($`C_i>0`$). We consider the case of a good which is most suitable to the group of consumers whose wealth is distributed around a given value $`w_m`$, and we choose: $$G(w_i)=\frac{G}{(w_i-w_m)^2}$$ (4) The results which follow refer to the case $`J_A=1,J_S=25,G=1,N=0`$. We fix costs at a level higher than $`C_c`$, estimated above for the case $`G=0`$ (below this value waves propagate even in the case of a fashion good). Different behaviours are found, depending on costs and/or $`w_m`$. For $`w_m<85`$ (we recall that agents’ wealth is distributed between zero and $`W_0=100`$), the good enters the social spectrum around $`w_m`$, possibly migrates through the closest social classes and then finds a stable niche (see top three cases in Fig. (9)). If $`w_m\approx W_0`$, for costs not too high (but still larger than $`C_c`$), waves emerge and spread throughout the whole social spectrum (bottom case in Fig. (9)). The effect of adding idiosyncratic preferences is again to induce waves in regions where, in their absence, consumption would reach a constant level and would be limited to a few consumers. ### B Model II In this case, as we said, only aspiration/distinction effects affect the consumption behaviour of agents. We will focus only on short-range interactions (e.g. choosing $`J_A=1`$), while for larger values of $`J_A`$, as can be seen from Fig. (1a), the potential becomes longer range. We will compare Model II with Model I assuming $`J_S=0`$. While consumers feel in both cases (only) distinction and aspiration effects, their strength of interaction changes differently with distance, becoming zero at large distances for Model II, while it is finite and non-zero for Model I. Moreover, at short distances the potential is large and sharp for Model II, while it is almost zero for Model I. 
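The intrinsic-value profile of Eq. (4) is sharply peaked around the reference wealth $`w_m`$, which is what lets the status good find a niche in a narrow band of social classes. A quick numerical illustration (the function name and example parameters are ours, not the paper's):

```python
def intrinsic_value(w, w_m, G=1.0):
    # Eq. (4): G(w) = G / (w - w_m)^2, sharply peaked around w = w_m.
    # (Diverges at w = w_m exactly; in a simulation one would cap it.)
    return G / (w - w_m) ** 2

# With w_m = 50, an agent 5 wealth units from w_m values the good
# four times more than an agent 10 units away (inverse-square falloff):
print(round(intrinsic_value(55.0, 50.0) / intrinsic_value(60.0, 50.0), 6))  # 4.0
```

This inverse-square concentration is why, for $`w_m<85`$, consumption stays localized around $`w_m`$ rather than sweeping the whole spectrum.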
Starting with $`C=G=N=0`$, both models show oscillations in total consumption, the amplitude of which is smaller in Model II than in Model I (Fig. (10)). Nevertheless the consumption pattern in the two models is clearly different (Fig. (11)): in Model II there are no diffusing waves but a more chaotic situation, with different social groups consuming at different times. This effect is better captured by Fig. (12), where we plot the consumption density at different locations along the social spectrum and at different times. We can see that consumption as a function of class is multi-peaked. We also found that the period of oscillations in Model II changes very little with $`J_A`$, despite the fact that the potential becomes longer-range as $`J_A`$ increases. This is a consequence of the sharp increase of the potential at short distance, so that the main contribution to the utility function $`Y_i`$ comes, even at large values of $`J_A`$, mainly from nearest-agent interactions. Adding costs, while keeping $`G=N=0`$, generates market segmentation. The larger the costs (Fig. (13a) and (13b)), the more sparse the consumption becomes. It is interesting, though, that even with high costs some agents still choose to consume out of their desire to distinguish themselves from their peers. Finally, we study the case of a value good (Fig. (13c), (13d) and (13e)) which would mostly suit consumers whose wealth is close to a reference level $`w_m`$. In this case the good finds a stable niche: those agents who benefit from its consumption continue to consume the good (possibly inducing fringes among their neighbours), but their behaviour does not affect distant social groups (not even when it is for the richest that the good has a high intrinsic value). Adding idiosyncratic preferences in all of the above cases only generates occasional additional consumption in the patterns above but does not change the asymptotic behaviour of the system. 
Let us finally comment that even if we take $`J_A`$ large (in which case the interaction becomes long-range), we do not obtain waves that spread smoothly as in Model I, and this is due to the absence of peering among nearby consumers. ## IV Conclusions In this paper we have focused on a potentially important mechanism that drives consumption decisions: the interaction among heterogeneous consumers. In the sociology literature, interactions among individuals, belonging to similar or different social circles, are often seen as a major mechanism that determines new styles of behaviour. In Model I we studied how peering, distinction and aspiration effects, together with costs and intrinsic values of a good, generate different consumption patterns, under the assumption that information on the consumption behaviour of agents is public ($`J_A`$ and $`J_S`$ are long range and saturate to a finite value at large distances). However, collective behaviour may be affected by the structure of the communication channels. Models with imperfect information diffusion and social segregation in the way knowledge is transmitted have been proposed to explain fashion cycles (Corneo et al. 1994). We take into account the possible local nature of interactions in Model II, where the $`J_{ij}`$ decay rapidly to zero with social distance. Both models present different consumption regimes: a dynamical one characterized by waves and cycles, plus a variety of stationary patterns. The two models, though, appear very different in the way consumption is spatially distributed. Initial conditions combined with random individual preferences may have a major role in pushing the dynamics of the system into one of the different asymptotic states. 
To conclude, we point out that it would be interesting to compare patterns of consumption for different choices of the distribution of agents’ wealth, such as, for example, a discontinuous distribution with gaps in the social spectrum or a Gaussian distribution centered around a given social class. ## Acknowledgments We are grateful to S. Jafarey for helpful comments. V.K. acknowledges the support of the General Secretariat of Research and Technology, Greece, under contract 45890. V.K. also wishes to thank the University of Essex for the kind hospitality provided during the initial stages of this work. ## References Aoki M., New approaches to macroeconomic modeling, Cambridge University Press (1996). Arenas A., Diaz-Guilera A., Perez C. J. and Vega-Redondo F., “Self-organized evolution in socio-economics environments”, cond-mat/9905387 (1999). Arthur W. B., Anderson P. W., Arrow K. J. and Pines D., eds., The economy as an evolving complex system I, Addison-Wesley (1988). Arthur W. B., Durlauf S. N. and Lane D., eds., The economy as an evolving complex system II, Redwood City: Addison-Wesley (1997). Corneo G. and Jeanne O., A Theory of Fashion Based on Segmented Communication, e-preprint http://netec.wustl.edu/WoPEc/data/Papers/bonbonsfa462.html (1994). Cowan R., Cowan W. and Swan P., Waves in Consumption with Interdependence among Consumers, e-preprint http://netec.wustl.edu/WoPEc/data/Papers/dgrumamer1998011.html (1998). Crisanti A. and Sompolinsky H., Dynamics of spin systems with randomly asymmetric bonds, Phys. Rev. A 37 (1988) 4865. Cugliandolo L. F. et al., Glassy behaviour in disordered systems with non-relaxational dynamics, Phys. Rev. Lett. 78, 350 (1997). Durlauf S., Statistical mechanics approaches to socioeconomic behavior, in The economy as an evolving complex system II, W. B. Arthur, S. N. Durlauf and D. Lane, eds., Redwood City: Addison-Wesley (1997). Iori G. 
and Marinari E., On the Stability of the Mean-Field Glass Broken Phase under Non-Hamiltonian Perturbations, J. Phys. A: Math. Gen. (1997) 4489-4511. Kirman A., Economies with interacting agents, in The economy as an evolving complex system II, W. B. Arthur, S. N. Durlauf and D. Lane, eds., Redwood City: Addison-Wesley (1997). Mezard M. et al., “Spin Glass Theory and Beyond”, (World Scientific, Singapore 1987). Persons J. and Warther V., Boom and Bust Patterns in the Adoption of Financial Innovations, e-preprint http://netec.wustl.edu/WoPEc/data/Papers/wopohsrfe9601.html (1995).
# RIKEN BNL Research Center preprint Domain wall fermions and the strange quark mass Talk presented at QCD 99, Montpellier, France ## 1 INTRODUCTION The strange quark mass is a fundamental parameter of the Standard Model, one whose value is poorly known. The two methods which have been used extensively to determine $`m_s`$ are QCD sum rules and lattice QCD simulations. Before one can compare results from these different methods fairly, a basic understanding of each method’s systematic uncertainties and approximations is necessary. This talk is focused on recent lattice calculations of $`m_s`$, emphasizing such uncertainties. The calculation of $`m_s`$ using domain wall fermions by the RIKEN/BNL/Columbia (RBC) lattice collaboration is taken as a case study. The details of this work were presented at Lattice 99, and there was another description of the relevant lattice techniques here by Becirevic. ## 2 DOMAIN WALL FERMIONS Symmetries are of fundamental importance in physics; in the case of QCD, chiral symmetry is responsible for much of light hadron dynamics. However, chirally symmetric discretizations of the Dirac operator suffer from a “doubling” of the particle spectrum. For example, replacing $`\partial _\mu `$ with a nearest–neighbor difference operator leads to the free quark propagator $`S(p)=[i\gamma _\mu \mathrm{sin}(p_\mu a)+m]^{-1}`$ which, in the massless limit, has poles at all corners of the Brillouin zone, $`p_\mu a\in \{0,\pi \}`$. Wilson added to the lattice action a second derivative–like operator which breaks chiral symmetry and gives the doubler states a mass inversely proportional to the lattice spacing. Another approach, by Kogut and Susskind (KS), is to stagger the spin degrees of freedom and interpret the doublers as different quark flavors. Although this method maintains a $`U(1)`$ remnant of the continuum chiral symmetry, flavor symmetry is badly broken. Both actions recover the symmetries of continuum quarks only in the $`a\to 0`$ limit. 
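The doubling of the naive action can be seen directly from the propagator denominator: at the massless point, $`\mathrm{sin}(p_\mu a)`$ vanishes whenever each component $`p_\mu a`$ is 0 or $`\pi `$, so every corner of the Brillouin zone hosts a pole. A quick numerical check in 4d (our own illustration):

```python
import itertools
import math

def naive_inverse_prop_sq(p, a=1.0):
    # Squared denominator of the free naive propagator at m = 0:
    # sum_mu sin^2(p_mu * a). A zero signals a massless pole.
    return sum(math.sin(pm * a) ** 2 for pm in p)

# Corners of the Brillouin zone: each of the 4 momentum components is 0 or pi/a.
corners = list(itertools.product([0.0, math.pi], repeat=4))
poles = [p for p in corners if abs(naive_inverse_prop_sq(p)) < 1e-12]
print(len(poles))  # 16 massless species: one physical fermion plus 15 doublers
```

The Wilson term mentioned above lifts 15 of these 16 poles to masses of order $`1/a`$, at the price of breaking chiral symmetry.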
An exciting and fruitful suggestion for decoupling the chiral and continuum limits started with Kaplan, who pointed out to the lattice community that a chiral $`2k`$–dimensional mode is bound to a mass defect (or domain wall) in a $`2k+1`$–dimensional space. Specifically, in Shamir’s boundary fermion variant, there is a massless right–handed (RH) mode stuck to the $`4d`$ boundary of a semi-infinite $`5d`$ space, given an appropriate choice for the $`5d`$ mass $`M`$. When the fifth dimension is made finite, a left–handed (LH) mode appears on the other boundary (see Fig. 1). The wavefunctions of the light modes decay exponentially within the fifth dimension, so there is a residual mixing between the RH and LH states which decreases as one increases $`N_s`$, the number of sites in the fifth dimension. This coupling of chiral modes describes a Dirac fermion with mass $`m_{\mathrm{res}}`$. In order to have control over the mass of the light Dirac fermion, the RH and LH boundaries are explicitly coupled with a weight $`am`$. For $`am\gg am_{\mathrm{res}}`$ the mixing within the bulk of the fifth dimension is negligible. The important feature of the domain wall (DW) fermion action is that the continuum limit $`a\to 0`$ is decoupled from the chiral limit $`N_s\to \infty `$ at the expense of an extra dimension of size $`N_s`$. Simulations with DW fermions have shown that for lattice spacings of roughly 0.1 fm, $`N_s=10`$–$`20`$ is sufficient for lattice QCD to exhibit continuum–like chiral properties, e.g. Ward identities and suppression of wrong–chirality operator mixings. ## 3 RESULTS OF RBC SIMULATIONS There are two ingredients which contribute to the determination of quark masses from the lattice: the calculation of hadron masses, decay constants, and/or matrix elements; and the calculation of the quark mass renormalization constant. In Refs. the pseudoscalar meson, vector meson, and nucleon masses are computed as functions of the bare quark mass, $`am`$. 
The $`am`$ corresponding to the light quark mass is determined by extrapolating $`(aM_{\mathrm{PS}}/aM_\mathrm{V})^2`$ to $`(M_\pi /M_\rho )^2`$, and then the lattice spacing $`a`$ can be defined by setting the $`\rho `$ or $`N`$ mass to its physical value. Finally, the bare $`am`$ corresponding to the strange quark mass is fixed by interpolating the pseudoscalar mass to $`M_K`$ or the vector mass to $`M_K^{}`$ or $`M_\varphi `$. Figs. 2 and 3 show these hadron masses as functions of $`am`$. Due to a combination of finite spatial volume effects, finite $`N_s`$ effects, and quenching errors, the chiral limit is at $`am=-0.0038(6)`$. Interpolating in the hadron masses as described above, one finds (with statistical errors only):

| bare | $`1/a=1.91(0.04)`$ GeV | |
| --- | --- | --- |
| mass | lattice units | MeV |
| $`m_l`$ | 0.00166(0.00005) | 3.17(0.11) |
| $`m_s(K)`$ | 0.042(0.003) | 80(6) |
| $`m_s(\varphi )`$ | 0.053(0.004) | 101(7) |

The scale above has been defined by setting the $`\rho `$ mass to its experimental value; however, $`M_N/M_\rho =1.37(5)`$ compared to the experimental ratio 1.22, so there is at least a 10% uncertainty due to the ambiguity in whether the nucleon or $`\rho `$ is used to set the scale. This is common among quenched lattice simulations with lattice spacings $`O(0.1\mathrm{fm})`$. In Ref. the renormalization constant is computed nonperturbatively using the regularization independent momentum subtraction (RI/MOM) scheme as advocated by the Rome–Southampton group, applied for the first time to DW fermions. The result at 2 GeV for $`Z_m^{\overline{\mathrm{MS}}}\equiv 1/Z_S^{\overline{\mathrm{MS}}}`$ is $`1.63\pm 0.07`$ (stat.) $`\pm 0.09`$ (sys.), which should be compared to 1.32, the one–loop perturbative $`Z_m`$ for these simulation parameters. One inherent assumption of this method is that perturbation theory is reliable at the scale of the inverse lattice spacing, roughly 2 GeV here. 
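The conversion between the two columns of the table is just $`m=(am)\times a^{-1}`$. A small sketch (our own; we assume the statistical errors on $`am`$ and $`1/a`$ combine in quadrature, which reproduces the quoted $`m_s(K)`$ error and the central values, though the paper's exact error treatment may differ):

```python
def to_mev(am, d_am, inv_a_gev=1.91, d_inv_a=0.04):
    # m = (am) * a^{-1}, converted from GeV to MeV; errors in quadrature
    # (assumed independent, which is only approximate).
    m = am * inv_a_gev * 1000.0
    dm = m * ((d_am / am) ** 2 + (d_inv_a / inv_a_gev) ** 2) ** 0.5
    return m, dm

m_s, dm_s = to_mev(0.042, 0.003)
print(round(m_s), round(dm_s))  # 80 6, matching the m_s(K) row of the table
```

The same call with `to_mev(0.00166, 0.00005)` reproduces the light-quark central value of 3.17 MeV.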
As Chetyrkin pointed out, the convergence of the perturbation series for the matching between the RI/MOM and $`\overline{\mathrm{MS}}`$ schemes is slow at 2 GeV. The convergence would be much improved for $`1/a\approx 2.5`$–$`3`$ GeV. The result from Ref. at this lattice spacing (corresponding to $`6/g_0^2=6.0`$) is $$m_s^{\overline{\mathrm{MS}}}(2\mathrm{GeV})=130\pm 11\pm 18\mathrm{MeV},$$ (1) where the first error is the statistical uncertainty and the second is the systematic uncertainty. ## 4 COMPARISON OF RESULTS Fig. 4 shows several quenched lattice calculations of $`m_s`$ plotted vs. lattice spacing squared. Except for the point at $`a^2=0`$, the quark masses were determined using the procedure described above and are plotted with statistical error bars only. The square corresponds to the DW fermion simulation of Ref. . The crosses are from KS simulations, the big (small) diamonds are from Wilson (tree–level improved Wilson) simulations, and the octagon from a nonperturbatively improved Wilson action. Finally, the asterisk displays the author’s average of three recent continuum extrapolations of $`m_s`$, as described in the following paragraph. Recently there have been three quenched lattice calculations of $`m_s`$ which use nonperturbative renormalization and take the continuum limit: one with KS fermions and two with nonperturbatively improved Wilson (NP IW) fermions (see table below). In order to precisely compare among different lattice simulations which contain similar, if not identical, systematic effects, it is common practice to quote only statistical errors. However, before using the results phenomenologically, one must do one’s best to include these systematic errors. As these simulations were performed on large volumes, the source of the largest systematic error (within the quenched approximation) is the ambiguity of which physical quantity is used to set the lattice spacing. Ref. uses the lattice spacing set by the $`\rho `$ mass, $`a(M_\rho )`$, while Refs. 
use $`a(f_K)`$ (obtaining identical results with the lattice spacing set by the hadronic radius parameter $`r_0`$) and report that using $`a(M_N)`$ instead would increase $`m_s`$ by 10 MeV. It is commonly seen in quenched lattice QCD that $$a(f_K)<a(M_\rho )<a(M_N).$$ (2) Since the latter two references quote a one–sided systematic uncertainty, the central value is shifted up by half of that uncertainty and the new error contains the statistical and systematic uncertainties added in quadrature. A 5% scale uncertainty is added to the statistical error of Ref. . The weighted average of the three results is $`106\pm 4`$ MeV.

| Ref. | quoted $`m_s`$ | $`a^{-1}`$ syst. | adjusted $`m_s`$ |
| --- | --- | --- | --- |
| | 106(7) MeV | $`\pm 5`$ MeV | 106(9) MeV |
| | 97(4) MeV | +10 MeV | 102(6) MeV |
| | 105(4) MeV | +10 MeV | 110(6) MeV |
| weighted average | | | 106(4) MeV |

Of course, in nature there are sea quarks, so the quenched approximation could prove unreliable. Last year the CP-PACS collaboration reported deviations of the quenched hadron spectrum from experiment in the continuum limit at the level of 10%; one might expect the same level of disagreement for the strange quark mass. However, this year the CP-PACS collaboration has a preliminary result from two–flavor dynamical Wilson fermion simulations of $`m_s^{\overline{\mathrm{MS}}}(2\mathrm{GeV})=84\pm 7`$ MeV. Although the mass renormalization has been computed only in (improved) perturbation theory, such a small strange quark mass is in conflict with lower bounds based on the positivity of hadronic spectral functions. ## 5 CONCLUSIONS The RBC collaboration has completed the calculation of the strange quark mass at a single gauge coupling $`6/g_0^2=6.0`$ using the domain wall fermion action. The mass renormalization is determined nonperturbatively through the RI/MOM scheme. The result, $`130\pm 11\pm 18`$ MeV, is consistent with others at the same gauge coupling. Note this is a result at only one lattice spacing. 
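The last row of the table follows from standard inverse-variance weighting of the three adjusted results. A quick check (our own sketch):

```python
def weighted_average(values, errors):
    # Inverse-variance weighting: w_i = 1/sigma_i^2,
    # mean = sum(w_i x_i)/sum(w_i), error = 1/sqrt(sum(w_i)).
    weights = [1.0 / e ** 2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    err = (1.0 / sum(weights)) ** 0.5
    return mean, err

mean, err = weighted_average([106.0, 102.0, 110.0], [9.0, 6.0, 6.0])
print(round(mean), round(err))  # 106 4, the quoted weighted average
```

Note this treats the three adjusted errors as independent, which is the usual assumption behind such an average.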
Only in the continuum limit can a reliable comparison be made to the NP IW and KS average of $`106\pm 4`$ MeV. A final answer for $`m_s`$ awaits several unquenched simulations exploring all the different systematic uncertainties already probed within the quenched approximation. ## ACKNOWLEDGMENTS The author thanks the organizers of this conference for a stimulating forum for discussion. Thanks also to RIKEN, Brookhaven National Lab, and the U.S. Department of Energy for providing the facilities essential for the completion of this work.
# Constraints on Jets and Luminosity Function of Gamma-ray Bursts Associated with Supernovae ## 1 Introduction Since the suggested association between a Gamma-Ray Burst (GRB) and Supernova 1998bw (Galama et al. 1998), the possible general connection between GRBs and SNe has been discussed (Cen 1998, Wang et al. 1998). In contrast, Kippen et al. (1998) found no evidence for a GRB - SNe connection, and Grazianni, Lamb & Marion (1998) even conclude that not all SN Ib,c/II can be associated with GRBs, although a fraction might be. On the other hand, Wang et al. (1998) found seven candidate SN-GRB correlations, while Bloom et al. (1998) estimated that the isotropic explosion energies of these SN-GRBs range from $`10^{44}`$ to $`10^{48}`$ erg/s, and suggested that a subclass of GRBs is associated with supernovae Ib,c/II. Jets have been introduced to solve two problems raised in the GRB-SN united picture: 1) the great difference between the BATSE-inferred GRB rate with a narrow luminosity function ($`10^{-8}`$ h<sup>3</sup> Mpc<sup>-3</sup> yr<sup>-1</sup>) (Kommers et al. 1999; Schmidt 1999) and the supernova rate ($`10^{-4}`$ h<sup>3</sup> Mpc<sup>-3</sup> yr<sup>-1</sup>), and 2) the great difference between the explosion energies of GRBs and supernovae. With a jet, the total energy necessary to produce the observed fluxes will be reduced by a factor $`1/\gamma ^2`$ ($`\gamma `$ is the Lorentz factor of a relativistic jet), and the total rate will be greater by a factor $`\gamma ^2`$ (Cen 1998; Wang et al. 1998). If a wide GRB luminosity function or a jet were to operate, BATSE-detected GRBs would be only a small fraction of all GRBs that occur. In this Letter, we compare GRB rates derived from supernova rates and density evolution to the measured rates from the Fourth BATSE Catalog of GRBs, to investigate possible constraints on the GRB luminosity function and jet parameters. 
We also discuss two other related interesting questions: 1) Can the special relativistic effects of a jet by themselves provide an acceptable luminosity function for GRBs? 2) Can the large number of undetected bursts contribute to the observed gamma-ray background near 100 KeV for an acceptable GRB luminosity function? The former provides an alternative way to test whether the gamma-ray burst emission is isotropic or random in the jet’s comoving frame. ## 2 Model If supernovae are the mysterious sources of GRBs, then the GRB “equivalent isotropic” explosion energy could lie within a relatively large range: from $`10^{54}`$ erg/s, deduced for GRB 990123 (Kulkarni et al 1999), to $`10^{44}`$ erg/s, estimated from SN Ib/c - GRB associations (Wang & Wheeler 1998; Bloom et al. 1998). We assume the unknown intrinsic luminosity distribution follows a power-law: $$\varphi (L)=A(\frac{L}{L_{max}})^{-\beta },L_{min}\le L\le L_{max},\beta >0,$$ (1) where $`A`$ is the normalization factor. We fix $`L_{max}=10^{53}`$ erg/s. We find that varying $`L_{max}`$ from $`10^{53}`$ to $`10^{55}`$ erg/s affected our results only slightly; $`L_{min}`$ was allowed to vary in the range $`10^{44}`$ \- $`L_{max}`$. Observed spectra of GRBs can be approximated as a power-law: $`dL(E)/dE\propto E^{-\alpha }`$ with spectral index $`\alpha =1.1`$ (Mallozzi et al. 1996). For the source density, we adopt the broken power-law function we have used before as a rough description of SN Ib/c evolution (Che et al. 1999): $`n(z)=n_0\left(\frac{1+z}{1+z_0}\right)^\eta \{\begin{array}{cc}\eta =\eta _1>0,z<z_0\hfill & \\ \eta =\eta _2<0,z>z_0\hfill & \end{array}`$ where $`n_0`$ is the comoving GRB density at redshift $`z=z_0`$ per Mpc<sup>3</sup> yr<sup>-1</sup>. We set $`z_0=1`$, $`\eta _1=4`$ (Lilly 1996), and $`\eta _2=0`$ (Madau 1998, Sadat 1998). We will assume a universe where the calculations are particularly simple: a flat Friedmann universe with cosmological constant $`\mathrm{\Lambda }=0`$. 
We do not expect the results would differ significantly in universes with a significant $`\mathrm{\Lambda }`$. In our assumed universe, the number of observed gamma-ray bursts with peak flux brighter than P is: $$N(>P)=\int _{L_{min}}^{L_{max}}\varphi (L)dL\int _0^{z_{max}(L,P)}\frac{4\pi n(z)}{1+z}r^2(z)\frac{dr(z)}{dz}dz$$ (2) The observed peak flux of a GRB at redshift $`z`$ with intrinsic luminosity $`L`$ is: $$P(L,z)=L(1+z)^{-\alpha }/4\pi r^2(z)$$ (3) where we fixed the spectral index to be $`\alpha =1.1`$ (Mallozzi et al. 1996), and $`P_{min}\approx 2\times 10^{-7}`$ erg cm<sup>-2</sup> s<sup>-1</sup> $`\approx 1`$ photon cm<sup>-2</sup> s<sup>-1</sup>. $$L=\int _{50}^{300}\frac{dL(E)}{dE}dE$$ (4) where $`E`$ is measured in KeV and $`r(z)`$ is the proper motion distance. The total number of bursts we observe every year is $`N_{total}(>P_{min})`$, as derived from Eq. (2). ## 3 Results ### 3.1 Luminosity Function Constraints from Log $`N`$ \- Log $`P`$ We selected the 939 bursts with peak flux greater than $`P_{min}=2\times 10^{-7}erg/cm^2s`$ from the Fourth BATSE GRB catalog (BATSE 4B) to obtain the measured Log $`N`$ \- Log $`P`$ relation that will be used for comparison. We compared this Log $`N`$ \- Log $`P`$ to that of GRBs following the evolving supernova rate, finding the best fits to the luminosity function range parameter $`L_{min}`$ and slope parameter $`\beta `$. The rate was normalized by $`N_{total}`$ as given in Eq. (2). We searched the acceptable range of $`L_{min}`$ and $`\beta `$ at the K-S confidence level of $`1\%`$. The luminosity function width was allowed to vary over $`1<L_{max}/L_{min}<10^9`$, while the luminosity function slope was allowed to vary over $`0<\beta <6`$. The search result is shown in Figure 1: the acceptable range of luminosity function widths was $`L_{max}/L_{min}>100`$, while the acceptable range for the luminosity function slope was $`1.8<\beta <2.6`$. 
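Eqs. (2)–(4) can be evaluated by direct numerical integration. The sketch below is our own illustration, not the paper's code: it assumes an Einstein–de Sitter form for the proper-motion distance, $`r(z)=(2c/H_0)(1-1/\sqrt{1+z})`$, takes $`H_0=70`$ km/s/Mpc, uses a single luminosity rather than integrating over $`\varphi (L)`$, writes the $`(1+z)`$ K-correction of Eq. (3) as a dimming factor (the sign of that exponent may have been garbled in extraction), and uses illustrative flux units throughout.

```python
import math

C_OVER_H0 = 3.0e5 / 70.0  # c/H0 in Mpc, assuming H0 = 70 km/s/Mpc

def r(z):
    # Proper-motion distance in a flat, Lambda = 0 (Einstein-de Sitter) universe.
    return 2.0 * C_OVER_H0 * (1.0 - 1.0 / math.sqrt(1.0 + z))

def drdz(z):
    return C_OVER_H0 * (1.0 + z) ** -1.5

def n(z, n0=1.0, z0=1.0, eta1=4.0, eta2=0.0):
    # Broken power-law comoving density: rises as (1+z)^4 out to z0 = 1, flat after.
    eta = eta1 if z < z0 else eta2
    return n0 * ((1.0 + z) / (1.0 + z0)) ** eta

def peak_flux(L, z, alpha=1.1):
    # Eq. (3), with the K-correction written as a (1+z)^-alpha dimming term.
    return L * (1.0 + z) ** -alpha / (4.0 * math.pi * r(z) ** 2)

def N_above(P, L, dz=1e-5, z_top=2.0):
    # Single-luminosity version of Eq. (2): integrate the comoving rate out to
    # z_max(L, P), the redshift at which the burst drops below threshold P.
    total, z = 0.0, dz
    while z < z_top and peak_flux(L, z) >= P:
        total += 4.0 * math.pi * n(z) / (1.0 + z) * r(z) ** 2 * drdz(z) * dz
        z += dz
    return total

# Bright bursts sample a nearly Euclidean volume, so N(>P) scales roughly
# as P^(-3/2); a factor-10 flux cut changes the counts by about 10**1.5.
print(N_above(1e-4, 1.0) / N_above(1e-3, 1.0))  # of order 10**1.5
```

The full calculation behind Figure 1 additionally integrates over the power-law $`\varphi (L)`$ of Eq. (1) and compares the resulting Log $`N`$ \- Log $`P`$ to the BATSE 4B sample with a K-S test.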
### 3.2 Constraints on GRB Fireballs and Jets Setting $`N_{total}=800`$ yr<sup>-1</sup>, which is the BATSE yearly rate corrected for the duty cycle (Meegan et al. 1992), and constraining Log $`\frac{L_{max}}{L_{min}}`$ and $`\beta `$ from Eq. (2), we can find lines of constant rate ($`n_0`$) in the $`\beta `$ \- Log $`\frac{L_{max}}{L_{min}}`$ frame. If we further suppose that GRBs are connected with supernovae, then for the fireball model the GRB event rate should follow the supernova rate, $`10^{-4}`$ h<sup>3</sup> Mpc<sup>-3</sup> yr<sup>-1</sup>. For a jet with a Lorentz factor $`\gamma =100`$, however, the GRB rate will decrease to $`10^{-8}`$ h<sup>3</sup> Mpc<sup>-3</sup> yr<sup>-1</sup>. Two lines of constant GRB rate are plotted in Figure 1 and labeled as lines II and III, representing isotropically emitting fireball GRBs and GRBs beamed into $`\gamma =100`$ jets, respectively. To the left of these two lines, GRBs will have a higher event rate than detected by BATSE. For the fireball model, line II traverses the K-S test acceptable region at around $`L_{min}\approx 10^{46}`$ erg/s, which is enough to produce a gamma-ray burst by shocks, but is quite close to the hypothesized theoretical minimum (Meszaros 1999). Therefore, GRBs occurring at the supernova rate are not in conflict with the isotropic fireball model. Line III shows that the minimum isotropic explosion energy of GRBs from a jet is $`10^{49}`$ erg/s. From lines II and III, it is not possible to discern whether a jet is preferred over isotropic emission, since both cross the allowed K-S fit region. Most GRBs in the jet model, however, expend more energy than those in an isotropic fireball model, possibly indicating that jet models are more likely to be related to massive stars (Woosley 1998, Sari 1999) or hypernovae (Paczynski, 1998; Iwamoto et al. 1998; Woosley et al. 1998; Wheeler et al. 1999). 
### 3.3 Possible Contribution to Cosmic Gamma-ray Background As the luminosity function width $`L_{max}/L_{min}`$ increases, the fraction of undetected bursts becomes larger, which leads to more GRB photons contributing to the diffuse gamma-ray background near 100 KeV. Quantitatively, this can be written as: $$I(E)=\int _0^{\infty }\frac{n(z)\overline{L}(E(1+z))}{4\pi (1+z)^2}\frac{dr(z)}{dz}dz,$$ (5) where $`\overline{L}(E(1+z))`$ is the average luminosity of individual sources, and $`I(E)`$ represents the observed gamma-ray background intensity. We can deduce the required $`n_0`$ from this equation and estimate the minimum GRB rate required by observations of the cosmic gamma-ray background (Watanabe et al 1998, Comastri 1998). This result is shown by line I in Figure 1. Line I lies outside of the K-S test region, which indicates that GRBs cannot dominate the diffuse cosmic gamma-ray background. ### 3.4 Beaming-Induced Luminosity Function A relativistic jet or fireball has been considered as a way to explain the enormous energy release of GRBs (Paczynski & Xu 1994; Rees & Meszaros 1994; Sari & Piran 1997; Pilla & Loeb 1998). Though models of shocks (internal or external) for producing a gamma-ray burst usually assume a magnetic field and relativistic particle acceleration (Meszaros et al 1994; Panaitescu et al 1999), the gamma-ray burst radiation should be confined within the beam. The simple assumption that the GRB radiation direction is random in the rest frame of the shocks cannot be excluded absolutely. In this section, we test this assumption by checking whether the GRB luminosity function can be induced by a purely special relativistic effect. 
We assume that the angle between the line of sight and the beaming axis is $`\theta `$, the velocity of the beam is $`v`$, and $`L_{in}`$ is the intrinsic luminosity of the GRB in the beam’s comoving frame; then the observed flux from a source at redshift z is (Blandford & Konigl 1979): $$P(z,\theta )=\mathrm{\Gamma }^{2+\alpha }\frac{L_{in}}{4\pi (1+z)^2r(z)^2}$$ (6) $$\mathrm{\Gamma }=\frac{1}{\gamma (1-\frac{v}{c}\mathrm{cos}\theta )}$$ (7) where $`\gamma =1/\sqrt{1-v^2/c^2}`$ is the Lorentz factor. We assume the beaming-induced $`(L_{in})_{min}=\mathrm{\Gamma }_{min}^{2+\alpha }L_{in}`$, where $`\mathrm{\Gamma }_{min}`$ corresponds to the largest viewing angle $`\theta _{max}`$ at which we can see a burst, and $`(L_{in})_{max}=L_{in}\mathrm{\Gamma }^{2+\alpha }(\theta =0)`$, seen by the observer within the beam; $`\frac{L}{L_{in}}=\mathrm{\Gamma }^{2+\alpha }`$. The induced luminosity function $`\varphi (\frac{L}{L_{in}})`$ is: $$\varphi (\frac{L}{L_{in}})d\frac{L}{L_{in}}=\varphi (\mathrm{cos}\theta )\frac{d\mathrm{cos}\theta }{d\frac{L}{L_{in}}}d\frac{L}{L_{in}}$$ (8) $$\varphi (\frac{L}{L_{in}})=\frac{(L/L_{in})^{-(\frac{3+\alpha }{2+\alpha })}}{(2+\alpha )\gamma \frac{v}{c}}$$ (9) With $`\alpha =1.1\pm 0.3`$, the induced luminosity function index is constrained to be $`1.3<\beta <1.36`$, far below the required index for Log $`N`$ \- Log $`P`$. We indicate this allowed region in Figure 1. We note that this allowed region does not intersect the allowed region created by the necessity of fitting the measured BATSE Log $`N`$ \- Log $`P`$. This contradiction implies that the GRB luminosity function is not created by purely special relativistic bulk motion. Perhaps the real luminosity function of GRBs has only a partial component due to beaming. If so, the beaming-induced luminosity function cannot dominate, but only modulate, the intrinsically wide luminosity function. For this to be true, the dynamic range induced by beaming must be quite modest. 
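Two quick numerical checks on Eqs. (6)–(9) (plain-Python sketches of our own; the function names are assumptions): the induced slope $`(3+\alpha )/(2+\alpha )`$ evaluated at the endpoints of $`\alpha =1.1\pm 0.3`$ reproduces the quoted narrow range of induced indices, and the Doppler factor of Eq. (7) shows that a beam of opening angle $`\theta _{max}\sim 1/\gamma `$ induces a luminosity spread well below 100.

```python
import math

def induced_slope(alpha):
    # Slope of the beaming-induced luminosity function, read off Eq. (9):
    # phi(L/L_in) ~ (L/L_in)^(-(3+alpha)/(2+alpha)).
    return (3.0 + alpha) / (2.0 + alpha)

def doppler(gamma, theta):
    # Eq. (7): Gamma = 1 / (gamma (1 - (v/c) cos(theta))).
    beta_v = math.sqrt(1.0 - 1.0 / gamma ** 2)
    return 1.0 / (gamma * (1.0 - beta_v * math.cos(theta)))

def dynamic_range(gamma, theta_max, alpha=1.1):
    # L ~ Gamma^(2+alpha) (Eq. 6): ratio of on-axis to edge-of-beam luminosity.
    return (doppler(gamma, 0.0) / doppler(gamma, theta_max)) ** (2.0 + alpha)

print(round(induced_slope(1.4), 2), round(induced_slope(0.8), 2))  # 1.29 1.36
print(dynamic_range(100.0, 1.0 / 100.0))  # ~8.6, well below 100
```

The first line recovers the quoted induced-index range, and the second anticipates the point made below: for $`\theta _{max}\sim 1/\gamma `$ the on-axis Doppler factor is only about twice the edge value, so the induced spread stays under $`L_{max}/L_{min}\sim 100`$.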
From inspection of Figure 1, we see that if beaming creates only $`L_{max}/L_{min}<100`$, it will not dominate the wider intrinsic luminosity function. From Eqn. 7, we see that if the gamma-ray burst is beamed into an angle of $`\theta _{max}\sim 1/\gamma `$, then the extra beaming generated by relativistic motion will produce $`L_{max}/L_{min}<100`$ for $`0<\theta <\theta _{max}`$, and so could be hidden by the intrinsic luminosity function of GRBs. This result can be tested by detecting the off-axis X-ray emission from beamed GRBs caused by purely special relativistic effects, as proposed by Woods and Loeb (1999). ## 4 Discussion and Conclusions Up to now, the sources of GRBs remain mysterious. Though some observations suggest that GRBs are associated with supernovae of type Ib,c/II, we need more information to confirm this kind of connection. However, the nature of the intrinsic luminosity function is a more general problem. In this Letter, we have studied the luminosity function from various aspects and given a clearer sketch of how the luminosity function dynamic range is constrained by the gamma-ray burst rate, and of what contradictions exist between the observationally favored luminosity function and a beaming-induced luminosity function. Our results are based on the assumption of a connection between GRBs and SNe. Our results show that 1) when a model for GRBs is tested for consistency with the GRB rate, we cannot ignore the contribution of the possible intrinsic dynamic range of the luminosity function to that rate; 2) if we consider the source density evolution of GRBs, the required width of the luminosity function, Log $`\frac{L_{max}}{L_{min}}`$, cannot be less than 2; 3) the required minimum explosion energy for GRBs from the relative GRB rate, the limits on the width of the GRB luminosity function, and the exclusion of special relativistic motion as the cause of the luminosity function may turn out to be helpful to those studying GRB radiation and progenitor mechanisms. 
This research was supported by grants from NASA and the NSF. I would like to thank Robert Nemiroff for helpful discussions and a critical reading of the manuscript. I also thank Peter Meszaros for his valuable discussions and comments.
# CP Violating Phases, Nonuniversal Soft Breaking And D-brane Models ## I Introduction It has been realized for some time that supersymmetric (SUSY) models allow for an array of CP violating phases not found in the Standard Model (SM), and that these phases will in general give rise to electric dipole moments (EDMs) of the electron and the neutron which might violate the experimental bounds . The current 90 $`\%`$ C.L. bounds for $`d_n`$ and 95 $`\%`$ C.L. bounds for $`d_e`$ are quite stringent: $`(d_n)_{exp}`$ $`<`$ $`6.3\times 10^{-26}ecm;(d_e)_{exp}<4.3\times 10^{-27}ecm`$ (1) While these bounds can always be satisfied by assuming sufficiently small phases (i.e. O(10<sup>-2</sup>)) and/or a heavy SUSY mass spectrum (i.e. $`\stackrel{>}{}`$1 TeV) , recently it has been pointed out that cancellations may occur allowing for “naturally” large phases (i.e. O(10<sup>-1</sup>)) and a light mass spectrum, and this has led to considerable analysis both within the MSSM framework and the gravity mediated supergravity (SUGRA) GUT framework . In the latter type of models the theory is specified by assigning the SUSY parameters at the GUT scale, and using the renormalization group equations (RGEs), one determines the physical predictions at the electroweak scale ($`M_{\mathrm{EW}}`$) (which we take here to be the t-quark mass, $`m_t`$). Thus in SUGRA models, “naturalness” is to be determined in terms of the GUT parameters. In a previous paper , we examined the minimal model, mSUGRA, which depends on the four universal soft breaking parameters at $`M_G`$ \[$`m_0`$ (scalar mass), $`m_{1/2}`$ (gaugino mass), $`A_0`$ (cubic term mass) and $`B_0`$ (quadratic term mass)\] and the Higgs mixing parameter $`\mu _0`$. 
Since $`m_0`$ is real and we can choose phases such that $`m_{1/2}`$ is real, one has only three phases at the GUT scale in mSUGRA: $`A_0`$ $`=`$ $`|A_0|e^{i\alpha _{0A}};B_0=|B_0|e^{i\theta _{0B}};\mu _0=|\mu _0|e^{i\theta _{0\mu }}.`$ (2) In Ref., it was shown that for the t-quark cubic soft breaking parameter at $`M_{\mathrm{EW}}`$, $`A_t=|A_t|e^{i\alpha _t}`$, the nearness of the t-quark Landau pole automatically suppresses $`\alpha _t`$ (the phase of $`A_t`$ at $`M_{\mathrm{EW}}`$), and one can satisfy the EDM bounds with a light SUSY spectrum for large $`\alpha _{0A}`$, even $`\alpha _{0A}=\pi /2`$. However, the situation is more difficult for $`\theta _{0B}`$. The experimental requirements of Eq.(1) combined with radiative breaking of $`SU(2)\times U(1)`$ at $`M_{\mathrm{EW}}`$ imply that $`\theta _{0B}`$ is large, i.e. O(1) (unless $`\alpha _{0A}`$ is small, and then all phases are small), and, more seriously, must be tightly fine tuned unless tan$`\beta `$ is small (tan$`\beta \stackrel{<}{}`$3). For example, fixing $`|A_0|`$, $`m_0`$ and $`m_{1/2}`$ to be light and $`\alpha _{0A}`$ large, one characteristically finds that $`\theta _{0B}`$ needs to be specified to 1 part in 10<sup>4</sup> for tan$`\beta `$=10. Without this fine tuning, the GUT theory would not achieve electroweak symmetry breaking at $`M_{\mathrm{EW}}`$ and/or satisfaction of Eq.(1). Nonminimal models were also examined in , with results similar to the above. In this paper we examine the nonminimal models in more detail. We then discuss an interesting D-brane model , where the Standard Model gauge group is associated with two 5-branes. This model results in nonuniversal gaugino and scalar masses and is able to allow larger values of $`\theta _B`$ at $`M_{\mathrm{EW}}`$. However, the same fine tuning problem at $`M_G`$ for $`\theta _{0B}`$ results in this model as well. 
Our paper is organized as follows: Sec.2 reviews the basic formulae and notation of the SUGRA GUT models for calculating the EDMs. Sec.3 examines a general class of nonuniversalities. Sec.4 considers the D-brane model of Ref. , and conclusions are given in Sec.5. ## II EDMs in SUGRA Models We consider here supersymmetric GUT models possessing R-parity invariance where SUSY is broken in a hidden sector at a scale above $`M_G\simeq 2\times 10^{16}`$ GeV. This breaking is then transmitted by gravity to the physical sector. The GUT group is assumed to be broken to the Standard Model (SM) $`SU(3)_C\times SU(2)_L\times U(1)_Y`$ at $`M_G`$, but is otherwise unspecified. The gauge kinetic function, $`f_{\alpha \beta }`$, and the Kahler potential, K, can then give rise to nonuniversal gaugino masses at $`M_G`$, which we parametrize by $`m_{1/2i}`$ $`=`$ $`|m_{1/2i}|e^{i\varphi _{0i}};i=1,2,3.`$ (3) and we choose the phase convention where $`\varphi _{02}=0`$. We also allow nonuniversal Higgs and third generation masses at $`M_G`$, which can arise from the Kahler potential: $`m_{H_1}^2`$ $`=`$ $`m_0^2(1+\delta _1);m_{H_2}^2=m_0^2(1+\delta _2)`$ (4) $`m_{q_L}^2`$ $`=`$ $`m_0^2(1+\delta _3);m_{u_R}^2=m_0^2(1+\delta _4);m_{e_R}^2=m_0^2(1+\delta _5);`$ (5) $`m_{d_R}^2`$ $`=`$ $`m_0^2(1+\delta _6);m_{l_L}^2=m_0^2(1+\delta _7);`$ (6) where $`q_L\equiv (\stackrel{~}{t}_L,\stackrel{~}{b}_L)`$, $`u_R\equiv \stackrel{~}{t}_R`$, $`e_R\equiv \stackrel{~}{\tau }_R`$, etc., $`m_0`$ is the universal mass of the first two generations and $`\delta _i`$ are the deviations from this for the Higgs bosons and the third generation. 
In addition, there may be nonuniversal cubic soft breaking parameters at $`M_G`$: $`A_{0t}`$ $`=`$ $`|A_{0t}|e^{i\alpha _{0t}};A_{0b}=|A_{0b}|e^{i\alpha _{0b}};A_{0\tau }=|A_{0\tau }|e^{i\alpha _{0\tau }}.`$ (7) The electric dipole moment $`d_f`$ for fermion f is defined by the effective Lagrangian: $`L_f`$ $`=`$ $`{\displaystyle \frac{i}{2}}d_f\overline{f}\sigma _{\mu \nu }\gamma ^5fF^{\mu \nu }`$ (8) Our analysis follows that of . Thus the basic diagrams leading to the EDMs are given in Fig.1. In addition there are gluonic operators $`L^G`$ $`=`$ $`{\displaystyle \frac{1}{3}}d^Gf_{abc}G_{a\mu \alpha }G_{b\nu }^\alpha \stackrel{~}{G}_c^{\mu \nu }`$ (9) and $`L^C`$ $`=`$ $`{\displaystyle \frac{i}{2}}d^C\overline{q}\sigma _{\mu \nu }\gamma ^5T^aqG_a^{\mu \nu }`$ (10) contributing to $`d_n`$, arising from the one loop diagram of Fig.1 (when the outgoing vector boson is a gluon), the two loop Weinberg type diagram and the two loop Barr-Zee type diagram. (In Eq.(9), $`\stackrel{~}{G}_c^{\mu \nu }=\frac{1}{2}ϵ^{\mu \nu \alpha \beta }G_{c\alpha \beta }`$, $`ϵ^{0123}=+1`$, $`T^a=\frac{1}{2}\lambda _a`$, where $`\lambda _a`$ are the SU(3) Gell-Mann matrices and $`f_{abc}`$ are the SU(3) structure constants). We use naive dimensional analysis to relate these to the electric dipole moments, and the QCD factors $`\eta ^{\mathrm{ED}}`$, $`\eta ^G`$, $`\eta ^C`$ to evolve these results to 1 GeV. The quark dipole moments are related to $`d_n`$ using the nonrelativistic quark model to relate the u and d quark moments to $`d_n`$, i.e. $`d_n`$ $`=`$ $`{\displaystyle \frac{1}{3}}(4d_d-d_u)`$ (11) and we assume the s-quark mass is 150 MeV. Thus QCD effects produce considerable uncertainty in $`d_n`$ (perhaps a factor of 2-3). 
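The quark-model combination above is simple enough to check numerically. The sketch below implements Eq.(11); the input quark EDMs are hypothetical values chosen only to illustrate the scale at which the neutron bound of Eq.(1) becomes constraining.

```python
# Combine u- and d-quark EDMs into the neutron EDM via the
# nonrelativistic quark-model relation d_n = (4*d_d - d_u)/3, Eq.(11).
def neutron_edm(d_u, d_d):
    """Neutron EDM from quark EDMs (units: e cm)."""
    return (4.0 * d_d - d_u) / 3.0

# Hypothetical quark EDMs, chosen near the experimental scale:
d_u = 1.0e-26  # e cm (illustrative, not a model prediction)
d_d = 5.0e-26  # e cm (illustrative, not a model prediction)
print(neutron_edm(d_u, d_d))  # ~6.3e-26 e cm, right at the Eq.(1) bound
```

Note that the d-quark moment enters with four times the weight of the u-quark moment, so cancellations in $`d_d`$ dominate the behavior of $`d_n`$.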
Our matter phase conventions are chosen so that the chargino ($`\chi ^\pm `$), neutralino ($`\chi ^0`$), squark and slepton mass matrices take the following form: $`M_{\chi ^\pm }`$ $`=`$ $`\left(\begin{array}{cc}\stackrel{~}{m}_2& \sqrt{2}M_Wsin\beta \\ \sqrt{2}M_Wcos\beta & |\mu |e^{i\theta }\end{array}\right)`$ (12) $`M_{\chi ^0}`$ $`=`$ $`\left(\begin{array}{cccc}|\stackrel{~}{m}_1|e^{i\varphi _1}& 0& a& b\\ 0& \stackrel{~}{m}_2& c& d\\ a& c& 0& |\mu |e^{i\theta }\\ b& d& |\mu |e^{i\theta }& 0\end{array}\right)`$ (13) and $`M_{\stackrel{~}{q}}^2`$ $`=`$ $`\left(\begin{array}{cc}m_{q_L}^2& e^{-i\alpha _q}m_q(|A_q|+|\mu |R_qe^{i(\theta +\alpha _q)})\\ e^{i\alpha _q}m_q(|A_q|+|\mu |R_qe^{-i(\theta +\alpha _q)})& m_{q_R}^2\end{array}\right).`$ (14) In the above $`a=M_Zsin\theta _Wcos\beta `$, $`b=M_Zsin\theta _Wsin\beta `$, $`c=cot\theta _Wa`$, $`d=cot\theta _Wb`$, $`tan\beta =v_2/v_1`$ ($`v_{1,2}=|<H_{1,2}>|`$), $`R_q=cot\beta (tan\beta )`$ for u(d) quarks. All parameters are evaluated at the electroweak scale using the RGEs, e.g. for quark q one has $`A_q=|A_q|e^{i\alpha _q}`$. (Similar formulae hold for the slepton mass matrices.) Electroweak symmetry breaking gives rise to Higgs VEVs which we parametrize by $`<H_{1,2}>`$ $`=`$ $`v_{1,2}e^{iϵ_{1,2}}.`$ (15) These enter through the phase $`\theta `$ appearing in Eqs.(12,13,14): $`\theta `$ $`\equiv `$ $`ϵ_1+ϵ_2+\theta _\mu `$ (16) The Higgs VEVs are calculated by minimizing the Higgs effective potential: $`V_{eff}`$ $`=`$ $`m_1^2v_1^2+m_2^2v_2^2+2|B\mu |cos(\theta +\theta _B)v_1v_2+{\displaystyle \frac{g_2^2}{8}}(v_1^2+v_2^2)^2+{\displaystyle \frac{g'^2}{8}}(v_2^2-v_1^2)^2+V_1`$ (17) where $`m_i^2=|\mu |^2+m_{H_i}^2`$ and $`m_{H_{1,2}}^2`$ are the Higgs running masses at $`M_{\mathrm{EW}}`$. $`V_1`$ is the one loop contribution. 
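As a concreteness check, the chargino matrix of Eq.(12) can be diagonalized numerically; since it is not Hermitian, the physical masses are its singular values. The parameter values below ($`\stackrel{~}{m}_2`$, $`|\mu |`$, $`\theta `$) are purely illustrative and are not fits to data.

```python
import numpy as np

MW, tanb = 80.4, 3.0          # W mass (GeV) and an illustrative tan(beta)
beta = np.arctan(tanb)
m2, mu_abs, theta = 200.0, 300.0, 0.3  # GeV, GeV, rad (all hypothetical)

# Chargino mass matrix of Eq.(12):
M = np.array([[m2,                          np.sqrt(2) * MW * np.sin(beta)],
              [np.sqrt(2) * MW * np.cos(beta), mu_abs * np.exp(1j * theta)]])

# Biunitary diagonalization: the chargino masses are the singular values.
masses = np.sort(np.linalg.svd(M, compute_uv=False))
print(masses)  # (m_chi1, m_chi2) in GeV
```

The same treatment applies to the complex symmetric neutralino matrix of Eq.(13) (via a Takagi factorization), which is where the extra phase $`\varphi _1`$ enters.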
$`V_1={\displaystyle \frac{1}{64\pi ^2}}{\displaystyle \underset{a}{\sum }}C_a(-1)^{2j_a}(2j_a+1)m_a^4(ln{\displaystyle \frac{m_a^2}{Q^2}}-{\displaystyle \frac{3}{2}})`$ (18) where $`m_a`$ is the mass of particle $`a`$ with spin $`j_a`$, $`Q`$ is the electroweak scale (which we take to be $`m_t`$) and $`C_a`$ is the number of color degrees of freedom. In the following we include the full third generation states, $`t`$, $`b`$ and $`\tau `$, in $`V_1`$, which allows us to treat the large tan$`\beta `$ regime. From Eq.(14) this implies that $`V_1`$ depends only on $`\theta +\alpha _q`$, $`\theta +\alpha _l`$ (though the rotation matrices which diagonalize $`M_{\stackrel{~}{q}}^2`$, $`M_{\stackrel{~}{l}}^2`$ will depend on $`\theta `$, $`\alpha _q`$ and $`\alpha _l`$ separately). Minimizing $`V_{eff}`$ with respect to $`ϵ_1`$ and $`ϵ_2`$ then determines $`\theta `$: $`\theta =\pi -\theta _B+f_1(\pi -\theta _B+\alpha _q,\pi -\theta _B+\alpha _l)`$ (19) where $`f_1`$ is the one loop correction. In general, $`f_1`$ is small, but can become significant for large tan$`\beta `$, as discussed in . Minimizing $`V_{eff}`$ with respect to $`v_1`$ and $`v_2`$ yields two equations which can be arranged in the usual fashion to determine $`|\mu |^2`$ and $`|B|`$ at $`M_{\mathrm{EW}}`$: $`|\mu |^2`$ $`=`$ $`{\displaystyle \frac{\mu _1^2-tan^2\beta \mu _2^2}{tan^2\beta -1}}-{\displaystyle \frac{1}{2}}M_Z^2`$ (20) $`|B|`$ $`=`$ $`{\displaystyle \frac{1}{2}}sin2\beta {\displaystyle \frac{m_3^2}{|\mu |}}`$ (21) where $`\mu _i^2=m_{H_i}^2+\mathrm{\Sigma }_i`$, $`m_3^2=2|\mu |^2+\mu _1^2+\mu _2^2`$ and $`\mathrm{\Sigma }_i=\partial V_1/\partial v_i^2`$. Note that $`|\mu |`$ and $`|B|`$ depend implicitly on the CP violating phases, since the RGEs that determine $`m_{H_i}^2`$ couple to the $`A`$ and $`\stackrel{~}{m}_i`$ equations, and $`\mathrm{\Sigma }_i`$ depend on the phases. 
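The one-loop piece $`V_1`$ is straightforward to evaluate once a spectrum is specified. The sketch below implements Eq.(18) literally; the masses are hypothetical and serve only to show how heavy states dominate through the $`m_a^4`$ weighting, with bosons and fermions entering with opposite sign via $`(-1)^{2j_a}`$.

```python
import numpy as np

def V1(states, Q):
    """One-loop effective potential of Eq.(18).
    states: iterable of (mass, spin, color_dof); Q: renormalization scale (GeV).
    """
    total = 0.0
    for m, j, C in states:
        total += C * (-1.0)**(2 * j) * (2 * j + 1) * m**4 * (np.log(m**2 / Q**2) - 1.5)
    return total / (64.0 * np.pi**2)

# Hypothetical third-generation spectrum at Q = m_t: top quark plus two stops.
Q = 175.0
states = [(175.0, 0.5, 3), (400.0, 0.0, 3), (500.0, 0.0, 3)]
print(V1(states, Q))  # GeV^4
```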
## III NonMinimal Models The renormalization group equations that relate $`M_{EW}`$ to $`M_G`$ are in general complicated differential equations requiring numerical solution, and all results given here are consequences of accurate numerical integration. Approximate analytic solutions can however be found for low and intermediate tan$`\beta `$ (neglecting the b and $`\tau `$ Yukawa couplings) and in the SO(10) limit of very large tan$`\beta `$ (neglecting the $`\tau `$ Yukawa coupling). These analytic solutions give some insight into the nature of the more general numerical solutions. For low and intermediate tan$`\beta `$, the $`A_t`$ and Yukawa RGEs read $`{\displaystyle \frac{dA_t}{dt}}`$ $`=`$ $`-6Y_tA_t+{\displaystyle \frac{1}{4\pi }}({\displaystyle \underset{i=1}{\overset{3}{\sum }}}a_i\alpha _i\stackrel{~}{m}_i)`$ (22) $`{\displaystyle \frac{dY_t}{dt}}`$ $`=`$ $`-6Y_t^2+{\displaystyle \frac{1}{4\pi }}({\displaystyle \underset{i=1}{\overset{3}{\sum }}}a_i\alpha _i)Y_t`$ (23) where $`Y_t=h_t^2/16\pi ^2`$, $`h_t`$ is the t-quark Yukawa coupling constant and $`a_i`$=(13/15,3,16/3). We follow the sign conventions of Ref., and $`t=2ln(M_G/Q)`$. The solutions of Eqs.(22,23) can be written as $`A_t(t)`$ $`=`$ $`D_0A_{0t}-\stackrel{~}{H_2}+{\displaystyle \frac{1-D_0}{F}}\stackrel{~}{H}_3`$ (24) where $`\stackrel{~}{H_2}`$ $`=`$ $`{\displaystyle \frac{\alpha _G}{4\pi }}t{\displaystyle \underset{i}{\sum }}{\displaystyle \frac{a_i|m_{1/2i}|e^{i\varphi _i}}{1+\beta _it}}\equiv {\displaystyle \underset{i}{\sum }}H_{2i}|m_{1/2i}|e^{i\varphi _i}`$ (25) and $`\stackrel{~}{H_3}`$ $`=`$ $`{\displaystyle \int _0^t}𝑑t^{\prime }E(t^{\prime })\stackrel{~}{H_2}(t^{\prime })\equiv {\displaystyle \underset{i}{\sum }}H_{3i}|m_{1/2i}|e^{i\varphi _i}.`$ (26) Here $`D_0=1-6(F(t)/E(t))Y(t)`$ vanishes at the t-quark Landau pole and hence is generally small i.e. ($`D_0\stackrel{<}{}0.2`$ for $`m_t=175`$ GeV). The functions F and E depend on the SM gauge beta functions and are given in . (E=12.3, F=254 for $`t=2ln(M_G/m_t)`$.) 
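The suppression factor $`D_0`$ can be evaluated directly from the numbers quoted above. In the sketch below, the top Yukawa at the electroweak scale is estimated as $`h_t\simeq m_t/(v\mathrm{sin}\beta )`$ with $`v\simeq 174`$ GeV, which is an input assumption of this illustration rather than a result of the text.

```python
import numpy as np

E, F = 12.3, 254.0            # values quoted for t = 2 ln(M_G/m_t)
mt, v, tanb = 175.0, 174.0, 10.0
sinb = tanb / np.sqrt(1.0 + tanb**2)
ht = mt / (v * sinb)          # rough top Yukawa at the electroweak scale
Y = ht**2 / (16.0 * np.pi**2)

D0 = 1.0 - 6.0 * (F / E) * Y  # Landau-pole suppression factor
print(D0)                     # ~0.2: alpha_t inherits little of alpha_0t

Y_pole = E / (6.0 * F)        # quasi-fixed-point value at which D0 vanishes
print(Y_pole)
```

The smallness of $`D_0`$ is what decouples the electroweak-scale phase $`\alpha _t`$ from its GUT-scale value $`\alpha _{0t}`$.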
We note the identity $`{\displaystyle \frac{1}{F}}{\displaystyle \underset{i}{\sum }}H_{3i}`$ $`=`$ $`t{\displaystyle \frac{E}{F}}-1\simeq 2.1.`$ (27) and so if we write Eq.(24) as $`A_t(t)`$ $`=`$ $`D_0A_{0t}+{\displaystyle \underset{i}{\sum }}\mathrm{\Phi }_i|m_{1/2i}|e^{i\varphi _i}.`$ (28) the $`\mathrm{\Phi }_i`$ are real and $`O(1)`$. (In the SO(10) large tan$`\beta `$ limit, one obtains an identical result with the factor 6 replaced by 7 in $`D_0`$. Thus Eq.(28) gives a valid qualitative picture over a wide range in tan$`\beta `$.) Nonuniversal gaugino masses affect the EDMs in two ways. First, taking the imaginary part of Eq.(28) one has ($`\varphi _2`$=0 in our phase convention): $`|A_t(t)|sin\alpha _t`$ $`=`$ $`|A_{0t}|D_0\mathrm{sin}\alpha _{0t}+{\displaystyle \underset{i=1,3}{\sum }}\mathrm{\Phi }_i|m_{1/2i}|\mathrm{sin}\varphi _i.`$ (29) As in the universal case, the smallness of $`D_0`$ suppresses the effects of any large $`\alpha _{0t}`$ on the electroweak scale phase $`\alpha _t`$. However, large gaugino phases $`\varphi _i`$ will generally make $`\alpha _t`$ large. Second, Eqs.(13) and (12) show that the phase $`\varphi _1`$ enters into the neutralino mass matrix though the chargino mass matrix remains unchanged ($`\varphi _2=0`$). Thus the $`\varphi _1`$ phase will affect any cancellation occurring between the neutralino and chargino contributions to the EDMs. Some of the above effects are illustrated in Figs. 2 and 3, where we plot K vs. the phase $`\theta _B`$ at the electroweak scale for $`d_e`$. Here K is defined by $`K=log_{10}{\displaystyle \frac{d_f}{(d_f)_{exp}}}`$ (30) Thus K=0 corresponds to the case where the theory saturates the current experimental EDM bound, while K=-1 would be the situation if the experimental bounds were reduced by a factor of 10. Fig.2 considers universal scalar masses and universal $`A_0`$ with $`\alpha _{0A}=\pi /2`$ at the GUT scale, and $`\varphi _1=\varphi _3=-1.1\pi `$, -1.3$`\pi `$, -1.5$`\pi `$ for tan$`\beta `$=3. 
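The logarithmic measure K is trivial to compute but convenient when scanning parameter space, since cancellation regions appear as deep negative dips. A minimal sketch, using the experimental electron EDM bound of Eq.(1):

```python
import numpy as np

def K(d_f, d_f_exp):
    """K = log10(d_f / d_f_exp), Eq.(30); K <= 0 is experimentally allowed."""
    return np.log10(d_f / d_f_exp)

de_exp = 4.3e-27  # e cm, electron EDM bound from Eq.(1)
print(K(4.3e-27, de_exp))  # 0.0  -> saturates the current bound
print(K(4.3e-28, de_exp))  # -1.0 -> bound tightened by a factor of 10
```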
We see that as $`|\varphi _1|`$ is increased from $`|\varphi _1|`$=1.1$`\pi `$ to 1.3 $`\pi `$, the allowed values of $`\theta _B`$ increase significantly, since the $`\varphi _1`$ phase in Eq.(13) aids the cancellation between the neutralino and the chargino contributions. However, increasing $`|\varphi _1|`$ further to $`|\varphi _1|`$=1.5$`\pi `$ overcompensates, causing the allowed values of $`\theta _B`$ to decrease. Fig. 3 for tan$`\beta `$=10 shows a similar effect. The experimentally allowed parameters require $`K\le 0`$. The allowed range $`\mathrm{\Delta }\theta _B`$ of $`\theta _B`$ decreases with tan$`\beta `$. It is very small for tan$`\beta `$=10 and is quite small even for tan$`\beta `$=3. ## IV D-Brane Models Recent advances in string theory leading to possible D=4, N=1 supersymmetric vacua after compactification have restimulated interest in phenomenological string motivated model building. A number of approaches exist, including models based on Type IIB orientifolds, Horava-Witten M theory compactification on $`CY\times S^1/Z_2`$ and perturbative heterotic string vacua. The existence of open string sectors in Type IIB strings implies the presence of $`Dp`$-branes, manifolds of p+1 dimensions in the full D=10 space of which 6 dimensions are compactified e.g. on a six torus $`T^6`$. (For a survey of properties of Type IIB orientifold models see ). One can build models containing 9-branes (the full 10 dimensional space) plus $`5_i`$-branes, i=1, 2, 3 (each containing two of the compact dimensions), or 3-branes plus 7<sub>i</sub>-branes, i=1, 2, 3 (each having two compactified dimensions orthogonal to the brane). Associated with a set of n coincident branes is a gauge group U(n). Thus there are a large number of ways one might embed the Standard Model gauge group in Type IIB models. We consider here an interesting model recently proposed based on 9-branes and 5-branes. In this model, $`SU(3)_C\times U(1)_Y`$ is associated with one set of 5-branes, i.e. 
$`5_1`$, and SU(2)<sub>L</sub> is associated with a second, intersecting set 5<sub>2</sub>. Strings starting on 5<sub>2</sub> and ending on 5<sub>1</sub> have massless modes carrying the joint quantum numbers of the two branes (we assume these are the SM quark and lepton doublets and Higgs doublets), while strings beginning and ending on 5<sub>1</sub> have massless modes carrying $`SU(3)_C\times U(1)_Y`$ quantum numbers (right quark and right lepton states). A number of general properties of such models have been worked out . Thus to accommodate the phenomenological requirement of gauge coupling constant unification at $`M_G\simeq 2\times 10^{16}`$ GeV, one may assume that $`M_c`$, the compactification scale of the Kaluza-Klein modes, obeys $`M_c=M_G`$. Above $`M_c`$, the gauge interactions on the 5-branes see a D=6 dimensional space (with Kaluza-Klein modes), while above $`M_c`$ gravity sees the full D=10 space. Gravity and gauge unification are then to take place at the string scale $`M_{\mathrm{str}}=1/\sqrt{\alpha ^{}}`$ given by $`M_{\mathrm{str}}=(\alpha _GM_cM_{Planck}/\sqrt{2})^{1/2}\simeq 8\times 10^{16}`$ GeV (for $`\alpha _G\simeq 1/24`$). The gauge kinetic functions for 9-branes and 5<sub>i</sub>-branes are given by $`f_9=S`$ and $`f_{5_i}=T_i`$, where S is the dilaton and $`T_i`$ are moduli. The origin of the spontaneous breaking of N=1 supersymmetry and of compactification is not yet understood within this framework. Further, CP violation must also occur as a spontaneous breaking. One assumes these effects can be phenomenologically accounted for by the F-components growing VEVs, parametrized as $`F^S`$ $`=`$ $`2\sqrt{3}<\mathrm{Re}S>\mathrm{sin}\theta _be^{i\alpha _s}m_{3/2}`$ (31) $`F^{T_i}`$ $`=`$ $`2\sqrt{3}<\mathrm{Re}T_i>\mathrm{cos}\theta _b\mathrm{\Theta }_ie^{i\alpha _i}m_{3/2}`$ (32) where $`\theta _b`$, $`\mathrm{\Theta }_i`$ are Goldstino angles ($`\mathrm{\Theta }_1^2+\mathrm{\Theta }_2^2+\mathrm{\Theta }_3^2=1`$). 
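The quoted unification scale follows directly from the relation above. The sketch below just evaluates it, taking $`M_{Planck}=1.2\times 10^{19}`$ GeV as an input assumption of the illustration:

```python
import numpy as np

alpha_G = 1.0 / 24.0
M_c = 2.0e16        # GeV: compactification scale set equal to M_G
M_Planck = 1.2e19   # GeV (assumed value for this estimate)

# M_str = (alpha_G * M_c * M_Planck / sqrt(2))**(1/2)
M_str = np.sqrt(alpha_G * M_c * M_Planck / np.sqrt(2.0))
print(f"M_str = {M_str:.2e} GeV")  # ~8e16 GeV, as quoted in the text
```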
CP violation is thus incorporated in the phases $`\alpha _s`$, $`\alpha _i`$. In the following we will assume, for simplicity, that $`\mathrm{\Theta }_3=0`$ (i.e. that the 5<sub>3</sub>-brane does not affect the physical sector). We also assume isotropic compactification (the $`<ReT_i>`$ are equal) to guarantee grand unification at $`M_G`$, and $`<ImT_i>`$=0 so that the spontaneous breaking does not grow a $`\theta `$-QCD type term. For models of this type, T-duality determines the Kahler potential and, combined with Eqs.(31,32), generates the soft breaking terms. One finds at $`M_G`$: $`\stackrel{~}{m_1}`$ $`=`$ $`\sqrt{3}\mathrm{cos}\theta _b\mathrm{\Theta }_1e^{i\alpha _1}m_{3/2}=\stackrel{~}{m_3}=-A_0`$ (33) $`\stackrel{~}{m_2}`$ $`=`$ $`\sqrt{3}\mathrm{cos}\theta _b\mathrm{\Theta }_2e^{i\alpha _2}m_{3/2}`$ (34) and $`m_{5_15_2}^2`$ $`=`$ $`(1-{\displaystyle \frac{3}{2}}\mathrm{sin}^2\theta _b)m_{3/2}^2`$ (35) $`m_{5_1}^2`$ $`=`$ $`(1-3\mathrm{sin}^2\theta _b)m_{3/2}^2`$ (36) Here $`A_0`$ is a universal cubic soft breaking mass, $`m_{5_15_2}^2`$ are the soft breaking masses for $`q_L`$, $`l_L`$, $`H_{1,2}`$ and $`m_{5_1}^2`$ are those for $`u_R`$, $`d_R`$ and $`e_R`$. We see that the brane models give rise to nonuniversalities that are strikingly different from what one might expect in SUGRA GUT models. Thus it would be difficult to construct a GUT group which, upon spontaneous breaking at $`M_G`$, yields gaugino masses $`\stackrel{~}{m}_1=\stackrel{~}{m}_3\ne \stackrel{~}{m}_2`$, and similarly the above pattern of sfermion and Higgs soft masses. Brane models can achieve the above pattern since they have the freedom of associating different parts of the SM gauge group with different branes. The above model does not determine the $`B`$ and $`\mu `$ parameters. 
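For orientation, the soft-breaking pattern of Eqs.(33)-(36) can be evaluated at a representative parameter point; the values below ($`m_{3/2}=150`$ GeV, $`\theta _b=0.2`$, $`\mathrm{\Theta }_1=0.85`$) match those used in the figures discussed next, with the CP phases set to zero purely for this numerical illustration.

```python
import numpy as np

m32, theta_b, Theta1 = 150.0, 0.2, 0.85
Theta2 = np.sqrt(1.0 - Theta1**2)  # from Theta1^2 + Theta2^2 = 1 (Theta3 = 0)

# Gaugino mass magnitudes and |A_0| (phases omitted here), Eqs.(33)-(34):
m1 = np.sqrt(3.0) * np.cos(theta_b) * Theta1 * m32  # |m~1| = |m~3| = |A_0|
m2 = np.sqrt(3.0) * np.cos(theta_b) * Theta2 * m32  # |m~2|

# Scalar masses squared, Eqs.(35)-(36):
m_5152_sq = (1.0 - 1.5 * np.sin(theta_b)**2) * m32**2  # q_L, l_L, H_1, H_2
m_51_sq   = (1.0 - 3.0 * np.sin(theta_b)**2) * m32**2  # u_R, d_R, e_R

print(m1, m2, np.sqrt(m_5152_sq), np.sqrt(m_51_sq))  # all in GeV
```

Note that $`m_{5_1}^2`$ turns negative for $`\mathrm{sin}^2\theta _b>1/3`$, so a small Goldstino angle $`\theta _b`$ is required for a sensible spectrum.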
We therefore will phenomenologically parametrize these at $`M_G`$ by $`B_0=|B_0|e^{i\theta _{0B}};\mu _0=|\mu _0|e^{i\theta _{0\mu }}.`$ (37) with two additional CP violating phases $`\theta _{0B}`$ and $`\theta _{0\mu }`$. We also set $`\alpha _2=0`$ in the following. We consider first the electron EDM. (We use the interactions of Ref. including the Erratum on the sign of Eq.(5.5) of Ref.). Fig.4 plots K as a function of $`\theta _B`$ for tan$`\beta `$=2 (solid), 5 (dashed), 10 (dotted) with phases $`\varphi _1=\varphi _3=-\pi +\alpha _{0A}=-1.25\pi `$ and $`m_{3/2}=150`$ GeV, $`\theta _b=0.2`$, $`\mathrm{\Theta }_1`$=0.85. We see that the EDM bounds allow remarkably large values of $`\theta _B`$ in this model even for large tan$`\beta `$, e.g. $`\theta _B\simeq 0.4`$ for tan$`\beta `$=2 and $`\theta _B\simeq 0.25`$ for tan$`\beta `$=10. (A second allowed region, occurring approximately for $`\theta _B\rightarrow \pi +\theta _B`$, also exists. However, this corresponds to the sign of $`\mu `$ that is mostly excluded by the $`b\rightarrow s\gamma `$ data.) Fig.5 shows a similar plot for somewhat smaller phases, $`\varphi _1=\varphi _3=-\pi +\alpha _{0A}=-1.1\pi `$. Again relatively large phases can exist at the electroweak scale. As discussed in Ref., the largeness of $`\theta _B`$ is due to an enhanced cancellation between the neutralino and chargino contributions as a consequence of the additional $`\varphi _1`$ dependence in Eq.(13), allowing $`\theta _B`$ to be O(1). However, in spite of this, the range in $`\theta _B`$ at the electroweak scale where the experimental bound $`K\le 0`$ is satisfied is quite small, e.g. from Fig.4, $`\mathrm{\Delta }\theta _B\simeq 0.015`$ even for tan$`\beta `$=2. As discussed in Ref., this implies that the radiative breaking condition makes the allowed range $`\mathrm{\Delta }\theta _{0B}`$ at the GUT scale very small, particularly for large tan$`\beta `$. This is illustrated in Figs. 6 and 7. 
In Fig.6, we have plotted the central value of $`\theta _{0B}`$ which satisfies $`K\le 0`$ as a function of tan$`\beta `$. Thus $`\theta _{0B}`$ is generally quite large. In Fig.7 we have plotted the allowed range $`\mathrm{\Delta }\theta _{0B}`$ satisfying the EDM constraints. One sees that even for small tan$`\beta `$ the allowed range $`\mathrm{\Delta }\theta _{0B}`$ is very small. Thus as in the mSUGRA model of Ref., one has a serious fine tuning problem at the GUT scale due to the combined conditions of radiative breaking and the EDM bound: $`\theta _{0B}`$ must be large but very accurately determined by the string model if it is to agree with low energy phenomenology. The neutron dipole moment is more complicated due to the additional contributions arising from $`L^C`$ and $`L^G`$ of Eqs.(10) and (9). While there are significant uncertainties in the calculation of $`d_n`$, it is of interest to see if the experimental bounds on $`d_n`$ can be satisfied in the same region of parameter space as occurs for $`d_e`$ above. The fact that the brane model requires $`\varphi _3=\varphi _1`$ allows the $`L^C`$ gluino contribution to aid in canceling the chargino contribution. This generally aids in broadening the overlap region of joint satisfaction of the $`d_n`$ and $`d_e`$ bounds of Eq.(1). However, in addition to this, there is a contribution from $`L^G`$ from the Weinberg type diagram. While this term is enhanced due to the factor of $`m_t`$, it is a two loop diagram and is suppressed by a factor of $`\alpha _3^2`$($`g_3/4\pi `$), and in most models it is usually a small contribution. However, for the D-brane model, where $`\varphi _3=\varphi _1`$, the presence of a large $`\varphi _3`$ phase increases the significance of this diagram, reducing the $`d_n`$–$`d_e`$ overlap region. This is illustrated in Fig.8, where $`\mathrm{\Theta }_1`$ is plotted as a function of $`\theta _B`$ for parameters tan$`\beta `$=2, $`m_{3/2}=150`$ GeV, $`\theta _b`$=0.2. 
(LEP189 bounds of $`m_{1/2}\stackrel{>}{}150`$ GeV imply here that $`\mathrm{\Theta }_1\stackrel{<}{}0.94`$.) As one proceeds from $`\varphi _1=\varphi _3=-1.25\pi `$ of Fig.8a to $`\varphi _1=\varphi _3=-1.95\pi `$ of Fig.8d, one goes from no overlap of the allowed $`d_e`$ and $`d_n`$ regions to a significant overlap. However, the large $`\theta _B`$ phase allowed separately by $`d_e`$ and $`d_n`$ (e.g. $`\theta _B\simeq 0.6`$) in Fig.8a is sharply reduced in Fig.8d in the overlap region, by a factor of 10. Further, the region of parameter space where the experimental constraints for $`d_e`$ and $`d_n`$ can be simultaneously satisfied generally decreases with increasing tan$`\beta `$. Fig.9 gives the allowed region for the parameters of Fig. 8b with $`\varphi _3=\varphi _1=-1.90\pi `$, for tan$`\beta `$=2, 3 and 5. The allowed parameter space disappears for tan$`\beta \stackrel{>}{}5`$. If, however, the overlap in the allowed parameter region between $`d_e`$ and $`d_n`$ occurs for smaller $`\varphi _1`$, i.e. $`\varphi _1=O(10^{-1})`$, one can have larger values of tan$`\beta `$. This is illustrated in Fig.10 for $`\varphi _1=\varphi _3=-1.97\pi `$ (i.e. 2$`\pi `$+$`\varphi _1`$=0.03$`\pi `$) for tan$`\beta `$=10. The region of overlap however now requires $`\theta _B`$ to be quite small, i.e. $`\theta _B=O(10^{-2})`$. Of course the fine tuning of $`\theta _{0B}`$ at the GUT scale becomes quite extreme for larger tan$`\beta `$ . While the quark mass ratios are well determined, the values of $`m_u`$ and $`m_d`$ remain very uncertain due to the uncertainty in $`m_s`$ . As pointed out in Ref., this uncertainty contributes significantly to the uncertainty in the calculation of $`d_n`$. This effect for the model of Ref. is illustrated in Fig.11 for $`\varphi _1=\varphi _3=-1.90\pi `$, tan$`\beta `$=2. Fig. 11a corresponds to the choice of light quarks ($`m_s(1\mathrm{G}\mathrm{e}\mathrm{V})\simeq 95`$ MeV) while Fig. 11b to heavy quarks ($`m_s(1\mathrm{G}\mathrm{e}\mathrm{V})\simeq 225`$ MeV). 
For light quarks, the Weinberg three gluon term makes a relatively larger contribution and aids more in the cancellation needed to satisfy the EDM constraint. In general, though, the Weinberg term can be several times the upper bound on $`d_n`$ of Eq.(1), and so makes a significant contribution. In the other figures of this paper, we have used a central value of $`m_s`$, i.e. $`m_s(1\mathrm{G}\mathrm{e}\mathrm{V})=150`$ MeV, corresponding to $`m_u(1\mathrm{G}\mathrm{e}\mathrm{V})\simeq 4.4`$ MeV and $`m_d(1\mathrm{G}\mathrm{e}\mathrm{V})\simeq 8`$ MeV. ## V Conclusions In minimal SUGRA models with universal soft breaking, it has previously been seen that the current EDM constraints can be satisfied without fine tuning the CP violating phases at the electroweak scale. For this case the EDMs are most sensitive to $`\theta _B`$, the phase of the B parameter, and experiment can be satisfied with $`\theta _B`$=O(10<sup>-1</sup>). It was seen, however, that at the GUT scale $`\theta _{0B}`$ was generally large (unless masses were large or the other phases were small), and in order to satisfy both the EDM constraints and radiative electroweak breaking, $`\theta _{0B}`$ had to be fine tuned, the fine tuning becoming more serious as tan$`\beta `$ increased. In this paper we have examined nonuniversal models, and have found generally that the same phenomenon exists. We have studied in some detail an interesting D-brane model involving CP violating phases where the Standard Model gauge group is embedded on two sets of 5-branes, $`SU(3)_C\times U(1)_Y`$ on $`5_1`$ and $`SU(2)_L`$ on $`5_2`$, so that the gaugino phases obey $`\varphi _3=\varphi _1\ne \varphi _2`$. This is a symmetry breaking pattern that is different from what one normally expects in GUT models. If one examines $`d_e`$ and $`d_n`$ separately, one finds that this model can accommodate remarkably large values of $`\theta _B`$, i.e. $`\theta _B`$ as large as 0.7. 
However, the same fine tuning problem arises at the GUT scale for $`\theta _{0B}`$. Further, the region in parameter space where the experimental bounds on both $`d_e`$ and $`d_n`$ are satisfied shrinks considerably. Thus the model cannot actually realize the very largest $`\theta _B`$ (though $`\theta _B`$ as large as $`0.4`$ is still possible). The Weinberg three gluon diagram typically is several times the current experimental upper bound on $`d_n`$, and so makes a significant contribution, particularly if the quark masses are light. (The Barr-Zee term is generally small if the SUSY parameters are $`\stackrel{<}{}1`$ TeV.) The allowed region in parameter space which simultaneously satisfies the $`d_n`$ and $`d_e`$ constraints also shrinks as tan$`\beta `$ is increased, the $`d_e`$ and the $`d_n`$ allowed regions narrowing. In general, if one assumes large $`\varphi _i`$ phases, one needs tan$`\beta \stackrel{<}{}`$5 to get a significant overlap between the allowed $`d_e`$ and allowed $`d_n`$ regions in parameter space, though small overlap regions exist even for tan$`\beta =10`$ and higher (though with $`\theta _B=O(10^{-2})`$). In the search for the SUSY Higgs, the Tevatron in RUN II/III will be able to explore almost the entire region of tan$`\beta \stackrel{<}{}`$50 (for SUSY parameters $`\stackrel{<}{}`$1 TeV), and it should be possible to experimentally verify whether tan$`\beta `$ is in fact small. As commented in Sec.2, the theoretical calculation of $`d_n`$ contains a number of uncertainties due to QCD effects. We have used here the conventional analysis. However, these uncertainties could affect the overlap between the allowed $`d_e`$ and $`d_n`$ regions, and modify the bounds on $`\theta _B`$ and tan$`\beta `$. However, if QCD effects are not too large, we expect the general features described above to survive. ## VI Acknowledgement This work was supported in part by National Science Foundation Grant No. PHY-9722090. We should like to thank M. 
Brhlik for discussions of the results of Ref., and JianXin Lu for useful conversations.
## 1 Introduction The study of polarized parton distributions in the nucleon presents a major challenge to both experiment and theory. Particularly subtle issues are the polarized antiquark distributions, and the precise flavor decomposition of the polarized quark and antiquark distributions, including the strangeness contributions. Knowledge of the latter is a prerequisite e.g. for the identification of the gluon contribution to the proton spin . Traditional inclusive lepton scattering experiments, i.e., measurements of the polarized structure functions, do not allow one to distinguish directly between the polarized quark– and antiquark distributions. Rather, quark– and antiquark contributions have to be identified from the study of scaling violations, which implies a considerable loss of accuracy . Moreover, the flavor decomposition can only be studied by way of comparing experiments with different targets, typically proton and light nuclei (deuteron, helium), which is rendered difficult by nuclear binding effects. Much more direct access to the individual quark– and antiquark distributions is possible in polarized semi-inclusive DIS (deep–inelastic scattering), where one measures e.g. the spin asymmetry of the cross section for producing a certain hadron in the fragmentation of the struck quark or antiquark in the target. Such measurements have recently been performed by the SMC and HERMES experiments. The unpolarized quark and antiquark fragmentation functions needed in the QCD description of these asymmetries can be measured independently in $`e^+e^{-}`$–annihilation into hadrons , and also in hadron production in unpolarized DIS off the nucleon . There have also been attempts to estimate the flavor asymmetry of the polarized antiquark distributions theoretically, using models for the structure of the nucleon. 
A large flavor asymmetry of the polarized antiquark distribution was first obtained in a calculation of the quark– and antiquark distributions at a low scale in the large–$`N_c`$ limit ($`N_c`$ is the number of colors), where the nucleon can be described as a soliton of an effective chiral theory . The unpolarized quark– and antiquark distributions , as well as the polarized distributions of quarks plus antiquarks calculated in this approach, are in good agreement with the standard parametrizations obtained from fits to inclusive DIS data . In the $`1/N_c`$–expansion the isovector polarized distributions are leading compared to the isoscalar ones, and calculations in the chiral quark–soliton model, using standard parameters, give an isovector antiquark distribution, $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$, considerably larger than the isoscalar one, $`\mathrm{\Delta }\overline{u}(x)+\mathrm{\Delta }\overline{d}(x)`$. Such a large polarized antiquark flavor asymmetry should lead to observable effects in semi-inclusive spin asymmetries as measured e.g. by the HERMES experiment. It should be noted that the same approach describes well the observed violation of the Gottfried sum rule and the recent data for the $`x`$–dependence of the flavor asymmetry of the unpolarized antiquark distribution from Drell–Yan pair production and semi-inclusive DIS , see Refs. for details. Recently, the polarized antiquark flavor asymmetry has been estimated in approaches which generalize the meson cloud picture of DIS off the nucleon to the polarized case . It is known that pion exchange contributions to DIS off the nucleon provide a qualitative explanation for the observed flavor asymmetry of the unpolarized antiquark distribution . In Ref. 
the authors considered the contribution of polarized rho meson exchange to the polarized antiquark distributions in the nucleon and obtained an estimate of the flavor asymmetry considerably smaller than the large–$`N_c`$ result of Refs.. In this paper we offer new arguments in favor of a large flavor asymmetry of the polarized antiquark distributions. We make two main points. First, on the theoretical side, we comment on the estimates of the polarized flavor asymmetry in the meson exchange picture in Refs.. Specifically, we argue that the polarized rho meson exchange contributions considered in Ref. are not the dominant contributions within that approach, so that a small value obtained for this contribution does not imply smallness of the total polarized antiquark flavor asymmetry. Second, we study the implications of the flavor asymmetry of the polarized antiquark distributions for the spin asymmetries measured in hadron production in semi-inclusive DIS. Combining information on the polarized quark and antiquark distributions available from inclusive DIS with the large–$`N_c`$ model calculation of the polarized flavor asymmetries, we make quantitative predictions for the spin asymmetries in semi-inclusive pion, kaon, and charged particle production. We discuss the sensitivity of these observables to the flavor asymmetries of the polarized antiquark distributions. With the quantitative estimate of $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$ from the chiral quark–soliton model we obtain a sizable contribution of the flavor asymmetry to semi-inclusive spin asymmetries. In fact, once the effects of the large flavor asymmetry are incorporated, our results fit the recent HERMES data for spin asymmetries in semi-inclusive charged hadron production well . We discuss the assumptions made in the analysis of the HERMES data in Ref., and argue that they are too restrictive and might have led to a bias in favor of a small flavor asymmetry. 
We also make predictions for experiments that can measure spin asymmetries of individual charged hadrons ($`\pi ^+,\pi ^{-},K^+,K^{-}`$), which could be feasible at HERMES or CEBAF. ## 2 Flavor asymmetry of the polarized antiquark distribution in the large–$`N_c`$ limit Quark and antiquark distributions in the large–$`N_c`$ limit. A very useful tool for connecting QCD with the hadronic world is the theoretical limit of a large number of colors. Qualitatively speaking, at large $`N_c`$ QCD becomes equivalent to a theory of mesons, with baryons appearing as solitonic excitations . The $`1/N_c`$–expansion allows one to classify baryon and meson masses, as well as weak and strong coupling characteristics, in a model–independent way; usually the estimates agree surprisingly well with phenomenology. One example is the pair of isovector and isoscalar axial coupling constants of the nucleon, which are of the order $`g_A^{(3)}`$ $`\sim `$ $`N_c,g_A^{(0)}\sim N_c^0,`$ (1) in qualitative agreement with the numerical values extracted from experiments, $`g_A^{(3)}\approx 1.25`$ and $`g_A^{(0)}\approx 0.3`$. The same technique has been applied to the parton distributions in the nucleon at a low normalization point . There one finds that the isoscalar unpolarized and the isovector polarized distributions of quarks and antiquarks are leading in the $`1/N_c`$–expansion, while the respective other flavor combinations, the isovector unpolarized and isoscalar polarized distributions, appear only at next–to–leading order. 
More precisely, at large $`N_c`$ the distributions scale as $`\text{leading:}\begin{array}{c}u(x)+d(x),\overline{u}(x)+\overline{d}(x)\hfill \\ \mathrm{\Delta }u(x)-\mathrm{\Delta }d(x),\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)\hfill \end{array}\}`$ $`\sim `$ $`N_c^2f(N_cx),`$ (4) $`\text{subleading:}\begin{array}{c}u(x)-d(x),\overline{u}(x)-\overline{d}(x)\hfill \\ \mathrm{\Delta }u(x)+\mathrm{\Delta }d(x),\mathrm{\Delta }\overline{u}(x)+\mathrm{\Delta }\overline{d}(x)\hfill \end{array}\}`$ $`\sim `$ $`N_cf(N_cx),`$ (7) where $`f(y)`$ is a stable function in the large $`N_c`$–limit, which depends on the particular distribution considered. Note that the large $`N_c`$–behavior of the polarized quark and antiquark distributions is related to that of the corresponding axial coupling constants, Eq.(1), by the sum rules for the first moments of these distributions (Bjorken and Ellis–Jaffe sum rules). It is interesting that the isovector polarized antiquark distribution is parametrically larger than the isoscalar one. While the $`1/N_c`$–expansion gives only a parametric estimate, it is nevertheless an indication that $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$ could also be numerically large. This is indeed confirmed by model calculations (see below). Polarized vs. unpolarized antiquark flavor asymmetry. We would like to comment briefly on the assumptions about the polarized antiquark flavor asymmetry made in the recent analysis of the HERMES data for semi-inclusive DIS . From the point of view of the $`1/N_c`$–expansion the flavor asymmetry of the polarized antiquark distribution is parametrically larger than that of the unpolarized one. Thus, the assumption made in the fit of Ref. that the flavor asymmetries of the polarized and unpolarized antiquark distributions are proportional, namely $`\mathrm{\Delta }\overline{u}(x)/\overline{u}(x)=\mathrm{\Delta }\overline{d}(x)/\overline{d}(x)`$, is inconsistent with the $`1/N_c`$–expansion and appears unnatural. 
The consequences of this assumption can be seen more clearly if one notes that it implies $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$ $`=`$ $`\left[\overline{u}(x)-\overline{d}(x)\right]{\displaystyle \frac{\mathrm{\Delta }\overline{u}(x)+\mathrm{\Delta }\overline{d}(x)}{\overline{u}(x)+\overline{d}(x)}}.`$ (8) The ratio of the isoscalar polarized to the isoscalar unpolarized distribution on the R.H.S. is always less than unity in magnitude, which follows from the probabilistic interpretation of the leading–order distributions considered here. Thus, under the above assumption the polarized antiquark flavor asymmetry can never be numerically larger than the unpolarized one. Consequently, a fit under the assumption $`\mathrm{\Delta }\overline{u}(x)/\overline{u}(x)=\mathrm{\Delta }\overline{d}(x)/\overline{d}(x)`$ cannot be regarded as a real alternative to the reference fit assuming zero polarized antiquark flavor asymmetry, $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)=0`$. In this sense the analysis of Ref. contained an implicit bias in favor of a small polarized antiquark flavor asymmetry. Model calculation of quark and antiquark distributions at a low normalization point. In order to make quantitative estimates of the parton distributions at a low normalization point one needs to supplement the large–$`N_c`$ limit with some dynamical information. It is known that at low energies the behavior of strong interactions is largely determined by the spontaneous breaking of chiral symmetry. A concise way to summarize the implications of this non-perturbative phenomenon is by way of an effective field theory, valid at low energies. Such a theory has been derived “microscopically” within the framework of the instanton description of the QCD vacuum, which provides a dynamical explanation for the breaking of chiral symmetry in QCD . 
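The bound implied by Eq.(8) can be made concrete with a small numerical sketch. The toy $`x`$–shapes below are invented purely for illustration (they are not fitted distributions); the point is only that, once a common polarization ratio is imposed, the polarized flavor asymmetry can never exceed the unpolarized one in magnitude:

```python
# Illustrative check of the consequence of the proportionality assumption
# Delta qbar(x)/qbar(x) equal for ubar and dbar: the polarized antiquark
# flavor asymmetry is then bounded in magnitude by the unpolarized one.
# The toy x-shapes below are NOT fitted distributions, just test functions.

def ubar(x):  return 0.18 * (1 - x)**7 / x**0.2
def dbar(x):  return 0.30 * (1 - x)**7 / x**0.2   # dbar > ubar, as observed

def ratio(x): return 0.4 * (1 - x)                # common ratio, |ratio| < 1

def delta_ubar(x): return ratio(x) * ubar(x)      # proportionality assumption
def delta_dbar(x): return ratio(x) * dbar(x)

for x in [0.05, 0.1, 0.2, 0.4]:
    pol_asym   = delta_ubar(x) - delta_dbar(x)
    unpol_asym = ubar(x) - dbar(x)
    # Eq.(8): pol_asym = unpol_asym * (delta_ubar+delta_dbar)/(ubar+dbar)
    assert abs(pol_asym) <= abs(unpol_asym)
```

Whatever positive shapes are chosen, the assertion holds, which is exactly the implicit restriction discussed in the text.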
It can be expressed in terms of an effective Lagrangian describing quarks with a dynamical mass, interacting with pions, which appear as Goldstone bosons in the spontaneous breaking of chiral symmetry (here $`x`$ denotes the space–time coordinates): $$L_{\mathrm{eff}}=\overline{\psi }(x)\left[i\gamma ^\mu \partial _\mu -MU^{\gamma _5}(x)\right]\psi (x),$$ (9) $$U^{\gamma _5}(x)\equiv \frac{1+\gamma _5}{2}U(x)+\frac{1-\gamma _5}{2}U^{\dagger }(x).$$ (10) Here $`U(x)`$ is a unitary matrix containing the Goldstone boson degrees of freedom, which can be parametrized as $`U(x)`$ $`=`$ $`{\displaystyle \frac{1}{F_\pi }}\left[\sigma (x)+i\tau ^a\pi ^a(x)\right],\sigma ^2+(\pi ^a)^2=F_\pi ^2.`$ (11) ($`F_\pi =93\mathrm{MeV}`$ is the weak pion decay constant). The effective theory is valid up to an ultraviolet cutoff, whose value is of the order $`600\mathrm{MeV}`$ . In the large–$`N_c`$ limit the nucleon in the effective theory defined by Eq.(9) is described by a classical pion field which binds the quarks (chiral quark–soliton model) . The field is of “hedgehog” form; in the nucleon rest frame it is given by $`U_{\mathrm{cl}}(𝐱)`$ $`=`$ $`\mathrm{exp}\left[i{\displaystyle \frac{x^a\tau ^a}{r}}P(r)\right],r\equiv |𝐱|,`$ (12) where $`P(r)`$ is called the profile function; $`P(0)=\pi `$, and $`P(r)\to 0`$ for $`r\to \mathrm{\infty }`$. Nucleon states with definite spin/isospin and momentum emerge after quantizing the collective rotations and translations of the soliton. The parton distributions in the nucleon at large $`N_c`$ can be computed by summing over the contributions of quark single–particle states in the background field; the normalization point is of the order of the ultraviolet cutoff of the effective theory, $`\mu \approx 600\mathrm{MeV}`$ (see Refs. for details). Gradient expansion of the isovector polarized antiquark distribution. 
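The hedgehog ansatz of Eq.(12) is simple enough to verify numerically. Since $`(\widehat{x}\tau )^2=1`$, the exponential reduces to $`\mathrm{cos}P(r)+i(\widehat{x}\tau )\mathrm{sin}P(r)`$, an $`SU(2)`$ matrix. The profile used below, $`P(r)=\pi e^{r/R}`$, is an arbitrary smooth function with the correct boundary values, not the self-consistent soliton profile:

```python
import numpy as np

# Hedgehog ansatz, Eq.(12): U(x) = exp[i (xhat . tau) P(r)]
#                                = cos P(r) + i (xhat . tau) sin P(r).
# P(r) = pi * exp(-r/R) is an illustrative profile with P(0)=pi, P(inf)=0,
# NOT the profile obtained by minimizing the soliton energy.

tau = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def profile(r, R=1.0):
    return np.pi * np.exp(-r / R)

def U_hedgehog(x):
    r = np.linalg.norm(x)
    P = profile(r)
    xhat_dot_tau = sum(x[a] / r * tau[a] for a in range(3))
    return np.cos(P) * np.eye(2) + 1j * np.sin(P) * xhat_dot_tau

U = U_hedgehog(np.array([0.3, -0.5, 1.1]))
assert np.allclose(U @ U.conj().T, np.eye(2))      # unitary
assert np.isclose(np.linalg.det(U), 1.0)           # det U = 1, i.e. SU(2)
assert np.isclose(profile(0.0), np.pi)             # U = -1 at the origin
assert profile(50.0) < 1e-6                        # U -> 1 at large r
```

The unit determinant follows from $`\mathrm{cos}^2P+\mathrm{sin}^2P=1`$, independently of the choice of profile.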
Analytic expressions for the parton distributions in the large–$`N_c`$ nucleon can be obtained in the theoretical limit of large soliton size, where one can perform an expansion of the sum over quark levels in gradients of the classical pion field, Eq.(12) . The isovector polarized antiquark distribution in leading–order gradient expansion is given by $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$ $`=`$ $`{\displaystyle \frac{F_\pi ^2M_N}{3}}{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}{\displaystyle \frac{d\xi }{2\pi }}{\displaystyle \frac{\mathrm{cos}M_N\xi x}{\xi }}{\displaystyle \int d^3y\mathrm{tr}\left[\tau ^3(-i)U_{\mathrm{cl}}(𝐲+\xi 𝐞_3)U_{\mathrm{cl}}^{\dagger }(𝐲)\right]}`$ (13) $`=`$ $`{\displaystyle \frac{4M_N}{3}}{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}{\displaystyle \frac{d\xi }{2\pi }}{\displaystyle \frac{\mathrm{cos}M_N\xi x}{\xi }}{\displaystyle \int d^3y\pi _{\mathrm{cl}}^3(𝐲+\xi 𝐞_3)\sigma _{\mathrm{cl}}(𝐲)},`$ where $`M_N`$ denotes the nucleon mass, $`𝐞_3`$ the three–dimensional unit vector in the 3–direction, and $`\tau ^3`$ the isospin Pauli matrix.<sup>1</sup><sup>1</sup>1We remark that, when combined with the corresponding gradient expansion expression for the isovector polarized quark distribution, the first moment of Eq.(13) reproduces the well–known expression for the gradient expansion of the isovector axial coupling constant, $`g_A^{(3)}`$, of the large–$`N_c`$ nucleon. The reason why we are interested in this theoretical limit is that it allows one to make explicit the dependence of the polarized antiquark distribution on the classical chiral fields of the soliton. This will be useful for the discussion in Section 3. Aside from this, comparison with the result of exact numerical calculations shows that the leading–order gradient expansion, Eq.(13), already gives a very realistic numerical estimate of the isovector polarized antiquark distribution . Numerical result for $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$. 
In the numerical estimates of semi-inclusive asymmetries in Section 4 we shall use not the gradient expansion formula, Eq.(13), but a more accurate numerical estimate obtained by adding the bound–state level contribution and using an interpolation–type formula for the continuum contribution . This result for the distribution is shown in Fig.1. One sees that the flavor asymmetry of the polarized antiquark distribution is numerically larger than the unpolarized one , in agreement with the fact that it is leading in the $`1/N_c`$–expansion. Note that the unpolarized antiquark flavor asymmetry calculated in this approach is in agreement with the results of the analysis of the E866 Drell–Yan data as well as with the HERMES measurements in semi-inclusive DIS (for details, see Refs.). Including strangeness. For a realistic description of semi-inclusive spin asymmetries one has to take into account the polarized distribution of strange quarks and antiquarks in the nucleon. Within the large–$`N_c`$ description of the nucleon it is possible to include strangeness by extending the effective low–energy theory in the chiral limit, Eq.(9), to three quark flavors and treating corrections due to the finite strange current quark mass perturbatively. In this approach the nucleon is described by embedding the $`SU(2)`$ hedgehog, Eq.(12), in the $`SU(3)`$ flavor space and quantizing its flavor rotations in the full $`SU(3)`$ flavor space. Flavor symmetry breaking can then be included perturbatively by computing matrix elements of symmetry–breaking operators between $`SU(3)`$–symmetric nucleon states . For our estimates here we limit ourselves to the simplest case of unbroken $`SU(3)`$ symmetry ($`m_s=0`$). 
In this case collective quantization of the $`SU(3)`$ rotations of the soliton leads to a simple relation between the flavor–octet and triplet polarized antiquark distributions, namely $`\mathrm{\Delta }\overline{u}(x)+\mathrm{\Delta }\overline{d}(x)-2\mathrm{\Delta }\overline{s}(x)`$ $`=`$ $`{\displaystyle \frac{3F-D}{F+D}}\left[\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)\right].`$ (14) The value of $`F/D`$ has been estimated in the chiral quark–soliton model . When one regards the radius of the soliton as a free parameter (in reality it is determined by minimizing the energy of the soliton), the result for $`F/D`$ interpolates between the $`SU(6)`$ quark model value (for small soliton size), $`F/D=2/3`$, and the value obtained in the Skyrme model (for large soliton size), $`F/D=5/9`$.<sup>2</sup><sup>2</sup>2It is interesting to note that in this approach the value of $`F/D`$ is one-to-one related to the isoscalar axial coupling constant, $`g_A^{(0)}`$, see Ref.. Using the value $`F/D=5/9`$ corresponding to the limit of large soliton size, the ratio in Eq.(14) comes to $`3/7`$. We shall use the relation Eq.(14) with this value in our estimates of semi-inclusive spin asymmetries in Section 4. ## 3 On the flavor asymmetry of the polarized antiquark distribution in the meson cloud picture A widely used phenomenological model for the flavor asymmetry of the unpolarized antiquark distributions in the nucleon is the meson cloud picture . Recently there have been attempts to estimate also the polarized antiquark flavor asymmetry in this approach . In particular, the authors of Ref. obtained an estimate for $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$ more than an order of magnitude smaller than the result of the large–$`N_c`$ model calculation, Fig.1. This striking disagreement may lead to the impression that the present theoretical understanding of the polarized antiquark flavor asymmetry is very poor. 
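The arithmetic behind the quoted value $`3/7`$ can be checked directly: writing the coefficient in Eq.(14) as $`(3F-D)/(F+D)`$ and dividing through by $`D`$, the two limiting $`F/D`$ values mentioned in the text give exact rational results:

```python
from fractions import Fraction

# Coefficient (3F - D)/(F + D) relating the octet and triplet polarized
# antiquark combinations, Eq.(14), evaluated for the two limiting F/D values
# quoted in the text.

def octet_over_triplet(F_over_D):
    F = Fraction(F_over_D)
    return (3 * F - 1) / (F + 1)   # numerator and denominator divided by D

assert octet_over_triplet(Fraction(5, 9)) == Fraction(3, 7)  # large soliton (Skyrme)
assert octet_over_triplet(Fraction(2, 3)) == Fraction(3, 5)  # SU(6) quark model
```

Note that the value $`3/5`$ for the $`SU(6)`$ limit is our own evaluation of the same formula, included only to show the range spanned as the soliton size varies; the text uses the large–soliton value $`3/7`$ throughout.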
In this situation we consider it helpful to comment briefly on the estimates within the meson cloud picture. Specifically, we want to show how the very small estimate obtained in Ref. could be reconciled with our large–$`N_c`$ result. To avoid misunderstandings, we stress at this point that, in spite of many superficial similarities, the meson cloud picture described here differs in many crucial respects from the large–$`N_c`$ approach (more on this below), so the two approaches should not be confused. The meson cloud picture of DIS off the nucleon assumes that the nucleon can be described as a “bare” nucleon, characterized by flavor–symmetric quark and antiquark distributions, and a “cloud” of virtual mesons. The flavor asymmetry of the antiquark distributions is then attributed to processes in which the hard probe couples to such a virtual meson. For instance, the sign of the observed unpolarized antiquark asymmetry in the proton, $`\overline{d}(x)-\overline{u}(x)>0`$, can qualitatively be explained by the photon coupling to a pion in the “cloud” (Sullivan mechanism ), if one takes into account that the emission of a $`\pi ^+`$ by the proton, with transition to a neutron or $`\mathrm{\Delta }^0`$ intermediate state, is favored compared to that of a $`\pi ^{-}`$, which is possible only by a transition to a $`\mathrm{\Delta }^{++}`$ state, as illustrated in Fig.2.<sup>3</sup><sup>3</sup>3For a discussion of the role of the $`\pi NN`$ and $`\pi N\mathrm{\Delta }`$ form factors in quantitative estimates of these contributions, see Ref.. Figure 2: The Sullivan mechanism contributing to the unpolarized antiquark flavor asymmetry in the proton, $`\overline{d}(x)-\overline{u}(x)`$. The simple Sullivan mechanism involving the pion “cloud” does not contribute to the polarized asymmetry, which has often been taken as an argument in favor of the smallness of this asymmetry. 
Recently, Boreskov and Kaidalov made the interesting observation that at small $`x`$ a sizable polarized antiquark asymmetry is generated by the interference of the amplitudes for the photon coupling to a pion and to a rho meson emitted by the nucleon, as shown schematically in Fig.3. Figure 3: The pi–rho interference contributions to the isovector polarized structure function at small $`x`$ considered in Ref.. This type of exchange corresponds to the leading Regge cut contributing to the imaginary part of the high–energy photon–nucleon scattering amplitude. Previously, Fries and Schäfer had considered the Sullivan–type contribution from polarized rho meson exchange to the polarized antiquark asymmetry, $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$, at larger values of $`x`$ (i.e., not restricted to small $`x`$), see Fig.4. Figure 4: The polarized rho meson exchange contributions to $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$ considered in Ref.. They obtained a strikingly small contribution to $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$, roughly two orders of magnitude smaller than the result of the calculation in the large–$`N_c`$ limit shown in Fig.1. It should be noted, however, that within the meson cloud picture contributions of the type of Fig.4 are not special. In fact, one can obtain a contribution to the polarized antiquark distribution already from the exchange of spin–$`0`$ mesons. To see this it is instructive to take a look at the gradient expansion of the isovector polarized antiquark distribution in the large–$`N_c`$ limit, Eq.(13). Although the fields in Eq.(13) are the classical chiral fields of the soliton, and no direct interpretation of this expression in terms of simple meson exchange diagrams is possible, it can provide some qualitative insight as to which quantum numbers can contribute to the polarized flavor asymmetry. 
Eq.(13) suggests that, in the language of the meson cloud model, a contribution to the polarized antiquark asymmetry at average values of $`x`$ should come already from the interference of pion and “sigma meson” exchange, as illustrated in Fig.5. Figure 5: Schematic illustration of the “pi–sigma” interference likely to give a large contribution to $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$ in the meson cloud picture (for details, see the text). Given the large mass of the rho meson compared to the pion, and the additional suppression due to the need to have a polarized rho meson, it is not difficult to imagine that this interference could give a much larger contribution to $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$ than the Sullivan–type polarized rho meson exchange of Fig.4. It is difficult in QCD to meaningfully speak about exchanges of mesons other than pions, which play a special role as Goldstone bosons of spontaneously broken chiral symmetry, and as mediators of strong interactions at long distances.<sup>4</sup><sup>4</sup>4It should be noted that even in the case of pure pion exchange, which can in principle be properly defined in the soft–pion limit, the notion of meson exchange contributions to the nucleon structure functions presents severe conceptual difficulties, since in graphs of the type of Fig.2 the typical momenta of the exchanged pions are not small, but can run up to momenta of the order of $`1\mathrm{GeV}`$ . In a “pure” pion cloud picture, contributions of the type indicated in Fig.5 should be referred to as interference between the photon scattering off a flavor–symmetric “bare” nucleon and a pion in the “cloud”. We emphasize that the large–$`N_c`$ results for the flavor asymmetries of the antiquark distributions in the nucleon cannot generally be interpreted as single meson exchange diagrams such as Figs.2–5. 
This can be seen from the fact that the large–$`N_c`$ limit of individual meson exchange diagrams typically gives rise to a wrong large–$`N_c`$ behavior of the resulting quark and antiquark distributions, different from Eqs.(4) and (7). The large–$`N_c`$ approach avoids the arbitrary separation of nucleon structure functions into “core” and “cloud” contributions. At the same time, however, this approach retains the possibility of describing genuine Goldstone boson exchange contributions at large distances. For example, the large–$`N_c`$ result for the helicity skewed quark distribution correctly reproduces the singularity found in this distribution in the chiral limit in QCD, which can be attributed to Goldstone boson exchange . ## 4 Semi-inclusive spin asymmetries In leading–order QCD the spin asymmetry of the cross section for semi-inclusive production of a hadron of type $`h`$ in the deep–inelastic scattering of a virtual photon off a hadronic target is given by $`A_1^h(x,z;Q^2)`$ $`=`$ $`{\displaystyle \frac{\sum _ae_a^2\mathrm{\Delta }q_a(x,Q^2)D_a^h(z,Q^2)}{\sum _be_b^2q_b(x,Q^2)D_b^h(z,Q^2)}}\left[{\displaystyle \frac{1+R(x,Q^2)}{1+\gamma ^2}}\right],`$ (15) where $`x`$ is the Bjorken variable, $`Q^2=-q^2`$ the photon virtuality, and $`q_a(x,Q^2)`$ and $`\mathrm{\Delta }q_a(x,Q^2)`$ denote, respectively, the unpolarized and polarized quark and antiquark distributions in the target, at the scale $`Q^2`$. The sum over $`a,b`$ runs over the light quark flavors as well as over quarks and antiquarks: $$a,b=\{u,\overline{u},d,\overline{d},s,\overline{s}\}.$$ Furthermore, $`D_a^h(z,Q^2)`$ denotes the quark and antiquark fragmentation functions, describing the probability for the struck quark of type $`a`$ to fragment into a hadron of type $`h`$ carrying fraction $`z`$ of its longitudinal momentum. 
Finally, in Eq.(15) $`R(x,Q^2)=\sigma _L/\sigma _T`$ is the usual ratio of the total longitudinal to the transverse photon cross section, and $`\gamma =2xM_N/\sqrt{Q^2}`$ is a kinematical factor. Instead of the spin asymmetry for fixed $`z`$, Eq.(15), one usually considers the so-called integrated asymmetry, which is defined as $`A_1^h(x;Q^2)`$ $`=`$ $`{\displaystyle \frac{_ae_a^2\mathrm{\Delta }q_a(x;Q^2)D_a^h(Q^2)}{_be_b^2q_b(x;Q^2)D_b^h(Q^2)}}\left[{\displaystyle \frac{1+R(x,Q^2)}{1+\gamma ^2}}\right],`$ (16) where $`D_a^h(Q^2)`$ $`=`$ $`{\displaystyle _{z_{\mathrm{min}}}^1}𝑑zD_a^h(z;Q^2).`$ (17) Here $`z_{\mathrm{min}}>0`$ is a cutoff which ensures that the observed hadron was in fact produced by fragmentation of the struck quark in the target (suppression of target fragmentation) . Due to the presence of the (anti–) quark fragmentation functions in the expression for the asymmetries Eqs.(15) and (16) the quark and antiquark distributions in the target enter with different coefficients. This is different from the inclusive spin asymmetry, which at the same level of approximation is given by $`A_1(x,Q^2)`$ $`=`$ $`{\displaystyle \frac{_ae_a^2\mathrm{\Delta }q_a(x,Q^2)}{_be_b^2q_b(x,Q^2)}}\left[{\displaystyle \frac{1+R(x,Q^2)}{1+\gamma ^2}}\right].`$ (18) \[Up to kinematical factors this quantity is equal to the ratio of polarized to unpolarized structure functions, $`g_1(x,Q^2)/F_1(x,Q^2)`$\]. In the following it will be convenient to rewrite the expression for the semi-inclusive asymmetry, Eq.(16), in such a way as to explicitly separate the contributions of those combinations of parton distributions which are known well from inclusive DIS, from others which discriminate between quark and antiquark distributions. 
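Structurally, the integrated asymmetry Eq.(16) is a ratio of charge-weighted sums over flavors. The following minimal sketch evaluates it at a single value of $`x`$; every number in it is a placeholder chosen only to make the bookkeeping concrete (none are fitted distributions or measured fragmentation functions), and the kinematical factor $`(1+R)/(1+\gamma ^2)`$ is set to unity:

```python
# Direct evaluation of the integrated semi-inclusive asymmetry, Eq.(16),
# at one value of x. All numbers are illustrative placeholders, NOT fitted
# parton distributions or measured fragmentation functions.

e2 = {'u': 4/9, 'ubar': 4/9, 'd': 1/9, 'dbar': 1/9, 's': 1/9, 'sbar': 1/9}

q  = {'u': 1.20, 'ubar': 0.15, 'd': 0.60, 'dbar': 0.20, 's': 0.10, 'sbar': 0.10}
dq = {'u': 0.45, 'ubar': 0.04, 'd': -0.12, 'dbar': -0.03, 's': -0.02, 'sbar': -0.02}

# z-integrated fragmentation functions into pi+, cf. Eq.(17):
D_pip = {'u': 0.30, 'dbar': 0.30,   # favored
         'd': 0.10, 'ubar': 0.10,   # unfavored
         's': 0.10, 'sbar': 0.10}   # strange ~ unfavored, cf. Eq.(31)

num = sum(e2[a] * dq[a] * D_pip[a] for a in e2)
den = sum(e2[a] * q[a]  * D_pip[a] for a in e2)
A1_pip = num / den
print(f"A1(pi+) = {A1_pip:.3f}")
```

With these placeholder inputs the u-quark term dominates both sums, reflecting the large squared charge $`e_u^2=4/9`$ discussed below.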
We write $`A_1^h(x;Q^2)`$ $`=`$ $`\left[A_{1,u}^h+A_{1,d}^h+A_{1,s}^h+A_{1,\mathrm{\hspace{0.17em}0}}^h+A_{1,\mathrm{\hspace{0.17em}3}}^h+A_{1,\mathrm{\hspace{0.17em}8}}^h\right](x;Q^2),`$ (19) where the contributions are defined as (we omit the $`Q^2`$–dependence for brevity) $`A_{1,u}^h(x)`$ $`=`$ $`X_u^h\left[\mathrm{\Delta }u(x)+\mathrm{\Delta }\overline{u}(x)\right]\text{(analogously for }d,s\text{)},`$ (20) $`A_{1,\mathrm{\hspace{0.17em}0}}^h(x)`$ $`=`$ $`X_0^h\left[\mathrm{\Delta }\overline{u}(x)+\mathrm{\Delta }\overline{d}(x)+\mathrm{\Delta }\overline{s}(x)\right],`$ (21) $`A_{1,\mathrm{\hspace{0.17em}3}}^h(x)`$ $`=`$ $`X_3^h\left[\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)\right],`$ (22) $`A_{1,\mathrm{\hspace{0.17em}8}}^h(x)`$ $`=`$ $`X_8^h\left[\mathrm{\Delta }\overline{u}(x)+\mathrm{\Delta }\overline{d}(x)-2\mathrm{\Delta }\overline{s}(x)\right].`$ (23) The coefficients $`X_u^h,\mathrm{\dots },X_8^h`$ are given by the following combinations of quark charges and quark fragmentation functions: $`X_u^h(x)`$ $`=`$ $`{\displaystyle \frac{e_u^2D_u^h}{Y}}\text{(analogously for }d,s\text{)},`$ (24) $`X_0^h`$ $`=`$ $`{\displaystyle \frac{1}{3Y}}\left[-e_u^2(D_u^h-D_{\overline{u}}^h)-e_d^2(D_d^h-D_{\overline{d}}^h)-e_s^2(D_s^h-D_{\overline{s}}^h)\right],`$ (25) $`X_3^h`$ $`=`$ $`{\displaystyle \frac{1}{2Y}}\left[-e_u^2(D_u^h-D_{\overline{u}}^h)+e_d^2(D_d^h-D_{\overline{d}}^h)\right],`$ (26) $`X_8^h`$ $`=`$ $`{\displaystyle \frac{1}{6Y}}\left[-e_u^2(D_u^h-D_{\overline{u}}^h)-e_d^2(D_d^h-D_{\overline{d}}^h)+2e_s^2(D_s^h-D_{\overline{s}}^h)\right],`$ (27) with $`Y`$ $`=`$ $`{\displaystyle \frac{1+\gamma ^2}{1+R(x,Q^2)}}{\displaystyle \sum _a}e_a^2q_a(x;Q^2)D_a^h(Q^2).`$ (28) The terms $`A_{1,u}^h,A_{1,d}^h`$ and $`A_{1,s}^h`$ contain the contributions of the sums of quark and antiquark distributions, which appear also in the inclusive polarized spin asymmetry (polarized structure functions), Eq.(18), and can therefore be measured independently in DIS. 
\[Actually, in DIS with proton or nuclear targets one is able to measure directly only two flavor combinations of these three distributions; the third one can be inferred using $`SU(3)`$ symmetry arguments.\] The term $`A_{1,\mathrm{\hspace{0.17em}0}}^h`$ in Eq.(19) contains the flavor–singlet polarized antiquark distribution. The terms $`A_{1,\mathrm{\hspace{0.17em}3}}^h`$ and $`A_{1,\mathrm{\hspace{0.17em}8}}^h`$, finally, are proportional to the flavor–nonsinglet (triplet and octet, respectively) combinations of the polarized antiquark distributions, which do not contribute to inclusive DIS and are therefore left essentially unconstrained in the parametrizations of parton distributions derived from fits to inclusive data . The decomposition Eq.(19) now allows us to consistently combine the information available from inclusive DIS, contained in the standard parametrizations of polarized parton distributions, with the results of our model calculation of the flavor asymmetries of the antiquark distributions, when computing the total semi-inclusive spin asymmetry. To evaluate the numerators of the contributions $`A_{1,u}^h,A_{1,d}^h`$, and $`A_{1,s}^h`$ we use the GRSV LO parametrizations of the distributions $`\mathrm{\Delta }u(x)+\mathrm{\Delta }\overline{u}(x),\mathrm{\Delta }d(x)+\mathrm{\Delta }\overline{d}(x)`$, $`\mathrm{\Delta }s(x)+\mathrm{\Delta }\overline{s}(x)`$ .<sup>5</sup><sup>5</sup>5To be consistent with our treatment of $`SU(3)`$ flavor symmetry breaking in the model calculation we take the so-called “standard” scenario. For the contribution $`A_{1,\mathrm{\hspace{0.17em}0}}^h`$ involving the flavor–singlet antiquark distribution, $`\mathrm{\Delta }\overline{u}(x)+\mathrm{\Delta }\overline{d}(x)+\mathrm{\Delta }\overline{s}(x)`$, we also use the GRSV LO parametrization, which is in good agreement with the result of the calculation in the chiral quark–soliton model . 
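The decomposition into the six terms of Eq.(19) can be cross-checked numerically: summing them must reproduce the direct flavor-by-flavor evaluation of Eq.(16), for any inputs. In the sketch below the coefficients $`X_0,X_3,X_8`$ are written with the signs dictated by the algebra of the decomposition; all numerical inputs are arbitrary placeholders, and the factor $`(1+R)/(1+\gamma ^2)`$ is set to unity:

```python
# Consistency check of the decomposition Eqs.(19)-(28): summing the six
# terms reproduces the direct evaluation of Eq.(16). Inputs are arbitrary
# placeholders, not fitted distributions or measured fragmentation functions.

e2 = {'u': 4/9, 'd': 1/9, 's': 1/9}
q  = {'u': 1.20, 'ubar': 0.15, 'd': 0.60, 'dbar': 0.20, 's': 0.10, 'sbar': 0.10}
dq = {'u': 0.45, 'ubar': 0.05, 'd': -0.12, 'dbar': -0.08, 's': -0.02, 'sbar': -0.01}
D  = {'u': 0.30, 'ubar': 0.08, 'd': 0.12, 'dbar': 0.28, 's': 0.09, 'sbar': 0.11}

# Direct evaluation, Eq.(16)
num = sum(e2[f] * (dq[f] * D[f] + dq[f + 'bar'] * D[f + 'bar']) for f in 'uds')
Y   = sum(e2[f] * (q[f] * D[f] + q[f + 'bar'] * D[f + 'bar']) for f in 'uds')
A1_direct = num / Y

# Decomposition, Eqs.(20)-(27)
diff = {f: e2[f] * (D[f] - D[f + 'bar']) for f in 'uds'}  # e_a^2 (D_a - D_abar)
X0 = -(diff['u'] + diff['d'] + diff['s']) / (3 * Y)
X3 = (-diff['u'] + diff['d']) / (2 * Y)
X8 = (-diff['u'] - diff['d'] + 2 * diff['s']) / (6 * Y)

A1_sum  = sum(e2[f] * D[f] * (dq[f] + dq[f + 'bar']) for f in 'uds') / Y
A1_sum += X0 * (dq['ubar'] + dq['dbar'] + dq['sbar'])
A1_sum += X3 * (dq['ubar'] - dq['dbar'])
A1_sum += X8 * (dq['ubar'] + dq['dbar'] - 2 * dq['sbar'])

assert abs(A1_direct - A1_sum) < 1e-12
```

The identity holds because the singlet, triplet, and octet combinations form a complete basis for the three antiquark distributions; the check is independent of the particular placeholder values.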
To estimate the contributions of the flavor–asymmetric antiquark distributions to the spin asymmetry, $`A_{1,\mathrm{\hspace{0.17em}3}}^h`$ and $`A_{1,\mathrm{\hspace{0.17em}8}}^h`$, we take the results of the calculation in the chiral quark–soliton model, cf. Fig.1 and Eq.(14), evolved from the scale $`\mu ^2=(600\mathrm{MeV})^2`$ up to the experimental scale. Note that in leading order $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$ and $`\mathrm{\Delta }\overline{u}(x)+\mathrm{\Delta }\overline{d}(x)-2\mathrm{\Delta }\overline{s}(x)`$ do not mix with the total distributions under evolution \[we neglect $`SU(3)`$–symmetry breaking effects due to the finite strange quark mass in the evolution\]. Finally, to evaluate the denominators in Eqs.(24)–(27), we use the GRV LO parametrization of unpolarized parton distributions , which agrees well with the model calculations of Refs.. We emphasize that with the above “hybrid” set of polarized parton distributions we automatically fit all inclusive data, since the GRSV parametrization was determined from fits to the structure function data. In particular, the GRSV parametrization describes well the HERMES data for the inclusive spin asymmetry, Eq.(18), as can be seen from Fig.6.<sup>6</sup><sup>6</sup>6The GRSV parametrization was derived by fitting to the ratio Eq.(18) including the factor $`[1+R(x,Q^2)]`$, so that this factor is contained in the polarized parton distribution functions and should not be included explicitly when evaluating Eq.(18). The effect of this factor is shown in Fig.6. Note that the HERMES data are in good agreement with the SLAC E143 data for the inclusive spin asymmetry . Spin asymmetries in charged pion production. 
In order to determine the contributions of the various combinations of polarized quark and antiquark distributions to the semi-inclusive spin asymmetry, Eq.(16), we need to evaluate the coefficients $`X_u^h,\mathrm{\dots },X_8^h`$, Eqs.(24)–(27), using a set of quark and antiquark fragmentation functions. Like the parton distributions, the fragmentation functions are process–independent quantities, which can in principle be determined from a variety of hard processes with hadronic final states, such as $`e^+e^{-}`$ annihilation, or semi-inclusive hadron production in unpolarized DIS. Unfortunately, experimental knowledge of fragmentation functions is still comparatively poor. This applies in particular to the so-called unfavored fragmentation functions, which we need in order to study the influence of the flavor asymmetry of the antiquark distribution, see Eqs.(26) and (27). The low–scale parametrizations of unpolarized quark and antiquark fragmentation functions of Binnewies et al., which fit a variety of $`e^+e^{-}`$ annihilation and semi-inclusive DIS data, describe only fragmentation into positively and negatively charged particles combined, which is not sufficient for our purposes. The $`\pi ^+`$ and $`\pi ^{-}`$ fragmentation functions separately have been extracted at low $`Q^2`$ from semi-inclusive pion production at HERMES . The number of independent pion fragmentation functions is reduced by isospin and charge conjugation invariance: $`D_u^{\pi +}=D_{\overline{d}}^{\pi +}=D_d^{\pi -}=D_{\overline{u}}^{\pi -}`$ $`\equiv `$ $`D,`$ (29) $`D_d^{\pi +}=D_{\overline{u}}^{\pi +}=D_u^{\pi -}=D_{\overline{d}}^{\pi -}`$ $`\equiv `$ $`\stackrel{~}{D},`$ (30) where $`D`$ and $`\stackrel{~}{D}`$ are called, respectively, the favored and unfavored fragmentation functions. In addition, in Ref. 
it was assumed that the strange quark fragmentation function into pions is approximately equal to the unfavored fragmentation function for $`u`$ and $`d`$ quarks: $`D_s^{\pi +}=D_{\overline{s}}^{\pi -}\approx D_{\overline{s}}^{\pi +}=D_s^{\pi -}`$ $`\equiv `$ $`\stackrel{~}{D}.`$ (31) With these assumptions we can compute the integrals Eq.(17) for $`h=\pi ^+,\pi ^{-}`$ using the HERMES fragmentation functions (extraction method 1, corrected for $`4\pi `$ acceptance) . The values of the coefficients $`X_u^{\pi \pm },\mathrm{\dots },X_8^{\pi \pm }`$ obtained with a cutoff $`z_{\mathrm{min}}=0.2`$ (the value used in the analysis of the HERMES charged hadron data ) are given in rows 1 and 2 of Table 1. The numerical values of the coefficients reveal two things: First, the dominant contribution to the charged hadron asymmetry comes from the sum of the polarized $`u`$–quark and antiquark distributions in the target, $`\mathrm{\Delta }u(x)+\mathrm{\Delta }\overline{u}(x)`$, which is a consequence of the large squared charge of the $`u`$–quark. Second, among the various flavor combinations of the antiquark distributions the isovector one, $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$, enters with the largest coefficient, a fortunate circumstance for attempts to extract this distribution from the data. The results for the spin asymmetry in semi-inclusive $`\pi ^+`$ and $`\pi ^{-}`$ production are shown in Fig.7. The dashed lines show the results obtained taking into account only the contributions $`A_{1,u}^{\pi \pm },A_{1,d}^{\pi \pm },A_{1,s}^{\pi \pm }`$ and $`A_{1,\mathrm{\hspace{0.17em}0}}^{\pi \pm }`$ to the spin asymmetry, Eq.(19), i.e., what would be obtained without a flavor asymmetry in the polarized antiquark distribution of the proton. The contributions $`A_{1,\mathrm{\hspace{0.17em}3}}^{\pi \pm }`$, proportional to $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$, are shown by the dotted lines. 
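The content of Eqs.(29)–(31) is that the full table of pion fragmentation functions collapses to two independent functions. A small sketch, with placeholder values for the favored and unfavored functions (they are illustrative, not the HERMES extraction), makes the bookkeeping and the charge-conjugation symmetry explicit:

```python
# Isospin / charge-conjugation relations, Eqs.(29)-(31): all pion
# fragmentation functions reduce to a favored (Dfav) and an unfavored (Dunf)
# function. The two numerical values are placeholders, not the HERMES data.

Dfav, Dunf = 0.32, 0.11

D_pi = {
    # Eq.(29): favored transitions
    ('u', 'pi+'): Dfav, ('dbar', 'pi+'): Dfav,
    ('d', 'pi-'): Dfav, ('ubar', 'pi-'): Dfav,
    # Eq.(30): unfavored transitions
    ('d', 'pi+'): Dunf, ('ubar', 'pi+'): Dunf,
    ('u', 'pi-'): Dunf, ('dbar', 'pi-'): Dunf,
    # Eq.(31): strange quarks assumed to fragment like unfavored light quarks
    ('s', 'pi+'): Dunf, ('sbar', 'pi+'): Dunf,
    ('s', 'pi-'): Dunf, ('sbar', 'pi-'): Dunf,
}

# Charge conjugation: D_q^{pi+} = D_qbar^{pi-} for every flavor
conj = {'u': 'ubar', 'ubar': 'u', 'd': 'dbar', 'dbar': 'd', 's': 'sbar', 'sbar': 's'}
for (a, h), val in D_pi.items():
    other = 'pi-' if h == 'pi+' else 'pi+'
    assert D_pi[(conj[a], other)] == val
```

The loop at the end verifies that the table as filled in is invariant under charge conjugation, as required.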
The total results for the asymmetries, including all contributions, are shown by the solid lines. [The contributions $`A_{1,\mathrm{\hspace{0.17em}8}}^{\pi \pm }`$ are very small, of the order of 10% of $`A_{1,\mathrm{\hspace{0.17em}3}}^{\pi \pm }`$, and are not shown separately.] One sees that in both cases the effect of the flavor asymmetry of the antiquark distribution is noticeable. Spin asymmetries in charged hadron production and comparison with the HERMES data. We now turn to the spin asymmetries in charged hadron production, which have been measured by the SMC and HERMES experiments. Unfortunately, no complete set of quark fragmentation functions for charged hadrons ($`K^+,K^{-},p,\overline{p}`$) at the HERMES scale ($`Q^2=2.5\mathrm{GeV}^2`$) is available. We therefore take recourse to the older EMC results for the fragmentation functions , in which positively and negatively charged hadrons were separated. These data were taken at a higher scale of $`Q^2=25\mathrm{GeV}^2`$. Since it turns out that the dominant contribution to the semi-inclusive spin asymmetry for production of charged hadrons comes from the pions, we combine the HERMES result for the pion fragmentation functions with the EMC fragmentation functions for kaons and protons , ignoring the scale dependence of the kaon and proton fragmentation functions, which give only a small contribution anyway.<sup>7</sup>In principle the evolution equations for fragmentation functions would allow us to parametrize the EMC results in terms of fragmentation functions at a lower scale; however, in order to do so consistently we would also need the gluon fragmentation functions at the higher scale, which have not been measured by EMC. 
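As a numerical illustration of the $`z`$-integrals of Eq.(17): the sketch below integrates toy favored and unfavored pion fragmentation functions above the cutoff $`z_{\mathrm{min}}=0.2`$ quoted above. The $`z^a(1-z)^b`$ shapes are assumptions made purely for illustration, not the HERMES or Binnewies et al. parametrizations.

```python
import math

# Illustrative favored / unfavored pion fragmentation functions.
# These z**a * (1-z)**b shapes are ASSUMPTIONS for the sketch, not the
# HERMES or Binnewies et al. parametrizations discussed in the text.
def D_fav(z):
    return 1.0 * z ** -0.8 * (1.0 - z) ** 1.2

def D_unf(z):
    return 0.4 * z ** -0.8 * (1.0 - z) ** 2.0

def z_integral(D, z_min=0.2, n=1000):
    """Trapezoidal integral of D(z) over [z_min, 1], cf. Eq.(17)."""
    h = (1.0 - z_min) / n
    total = 0.5 * (D(z_min) + 0.0)  # D vanishes at z = 1 for b > 0
    for i in range(1, n):
        total += D(z_min + i * h)
    return total * h

I_fav = z_integral(D_fav)   # favored integral, e.g. for D_u^{pi+}
I_unf = z_integral(D_unf)   # unfavored integral, e.g. for D_d^{pi+}
print(I_fav, I_unf)
```

With any such shapes the favored integral comes out larger than the unfavored one, which is the qualitative feature behind the coefficient pattern in Table 1.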
Again, isospin and charge conjugation invariance allow us to write: $`D_u^{K^+}=D_{\overline{u}}^{K^{-}}\equiv D^K,`$ (32) $`D_d^{K^+}=D_{\overline{d}}^{K^{-}}\equiv \stackrel{~}{D}^K,`$ (33) $`D_u^p=D_{\overline{u}}^{\overline{p}}\equiv D^p,`$ (34) $`D_u^{\overline{p}}=D_{\overline{u}}^p\equiv \stackrel{~}{D}^p.`$ (35) Furthermore, following the analysis in Ref., we shall assume that $`D_s^{K^{-}}=D_{\overline{s}}^{K^+}\equiv D^K,`$ (36) $`D_s^{K^+}=D_{\overline{s}}^{K^{-}}\equiv \stackrel{~}{D}^K,`$ (37) $`D_d^p=D_{\overline{d}}^{\overline{p}}\equiv D^p,`$ (38) $`D_s^p=D_{\overline{s}}^{\overline{p}}\approx D_{\overline{d}}^p=D_d^{\overline{p}}=\stackrel{~}{D}^p.`$ (39) With these assumptions all relevant fragmentation functions for $`h^+=\pi ^++K^++p`$ and $`h^{-}=\pi ^{-}+K^{-}+\overline{p}`$ can be estimated in terms of the six functions $`D,\stackrel{~}{D},D^K,\stackrel{~}{D}^K,D^p`$ and $`\stackrel{~}{D}^p`$. Evaluating the integrals with $`z_{\mathrm{min}}=0.2`$ (the cutoff used in the analysis of HERMES data ) we obtain the values shown in rows 3 and 4 of Table 1. One sees that the values are not too different from those obtained for $`\pi ^+`$ and $`\pi ^{-}`$ production; only the sensitivity to the strange quark distributions has increased somewhat due to the inclusion of kaon production. In Fig.8 we show the results for the spin asymmetries in $`h^+`$ and $`h^{-}`$ production. As in Fig.7 for $`\pi ^+`$ and $`\pi ^{-}`$, we plot the asymmetry that would be obtained without flavor asymmetry of the polarized antiquark distribution in the target ($`A_{1,u}^{h\pm }+A_{1,d}^{h\pm }+A_{1,s}^{h\pm }+A_{1,\mathrm{\hspace{0.17em}0}}^{h\pm }`$), the contributions $`A_{1,\mathrm{\hspace{0.17em}3}}^{h\pm }`$ and $`A_{1,\mathrm{\hspace{0.17em}8}}^{h\pm }`$ containing the effect of the flavor asymmetry, and the total result. 
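The symmetry relations (29)–(39) reduce the full set of parton-to-hadron fragmentation functions to six independent ones. A minimal bookkeeping sketch of that reduction (the string labels such as "D" and "~D" are illustrative, not standard notation):

```python
# Map (parton, hadron) -> one of the six independent fragmentation
# functions, following relations (29)-(39).  The string labels ("D",
# "~D", ...) are illustrative bookkeeping, not standard notation.
FF = {
    # pions, Eqs.(29)-(31)
    ("u", "pi+"): "D",     ("dbar", "pi+"): "D",
    ("d", "pi-"): "D",     ("ubar", "pi-"): "D",
    ("d", "pi+"): "~D",    ("ubar", "pi+"): "~D",
    ("u", "pi-"): "~D",    ("dbar", "pi-"): "~D",
    ("s", "pi+"): "~D",    ("sbar", "pi-"): "~D",
    ("sbar", "pi+"): "~D", ("s", "pi-"): "~D",
    # kaons, Eqs.(32)-(33) and (36)-(37)
    ("u", "K+"): "DK",     ("ubar", "K-"): "DK",
    ("s", "K-"): "DK",     ("sbar", "K+"): "DK",
    ("d", "K+"): "~DK",    ("dbar", "K-"): "~DK",
    ("s", "K+"): "~DK",    ("sbar", "K-"): "~DK",
    # protons, Eqs.(34)-(35) and (38)-(39)
    ("u", "p"): "Dp",      ("ubar", "pbar"): "Dp",
    ("d", "p"): "Dp",      ("dbar", "pbar"): "Dp",
    ("u", "pbar"): "~Dp",  ("ubar", "p"): "~Dp",
    ("s", "p"): "~Dp",     ("sbar", "pbar"): "~Dp",
    ("dbar", "p"): "~Dp",  ("d", "pbar"): "~Dp",
}

independent = sorted(set(FF.values()))
print(len(independent), independent)  # six independent functions survive
```

Evaluating the coefficients then only requires the six corresponding $`z`$-integrals, which is what makes the analysis tractable despite the many quark-hadron channels.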
A preliminary comparison of the theoretical results for $`A_1^{h\pm }`$ shows that the spin asymmetries computed including the effects of $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$ (solid lines in Fig.8) are consistent with the HERMES and SMC data. The accuracy of the present data, in particular for $`A_1^h`$, seems not to be sufficient for a definite choice between the theoretical results obtained with (solid lines) and without (dashed lines in Fig.8) the flavor asymmetry of the polarized antiquark distribution. Also, one should be aware that there are several sources of uncertainty in our theoretical predictions. The greatest uncertainty comes from our imperfect knowledge of the fragmentation functions, mostly from the pion fragmentation functions. In the analysis of Ref., significant corrections were applied to the measured fragmentation functions in order to compensate for the acceptance of the HERMES detector. (We use the $`4\pi `$-corrected fragmentation functions in our estimates above.) A full error analysis would require keeping track of the systematic error in the fragmentation functions and is outside the scope of this paper. One should consider the possibility of modifying the above analysis so as to work directly with the fragmentation functions specific to the HERMES detector, avoiding the $`4\pi `$ corrections. It is conceivable that in this way one could significantly reduce the systematic error in the fragmentation functions. Recently, Morii and Yamanishi have attempted to extract $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$ from the data for polarized semi-inclusive asymmetries by combining data taken with proton and helium targets . (It was shown in Ref. that the asymmetry calculated in the chiral quark–soliton model is consistent with the bounds obtained in Ref. .) 
Since in the case of the HERMES experiment the statistics of the helium data is significantly worse than that of the proton data, this combination of data results in a loss of accuracy. Also, this approach requires accurate compensation for nuclear binding effects. In contrast, our method of analysis relies on proton data only. Spin asymmetries in charged kaon production. It is also interesting to consider separately the spin asymmetries in the production of charged kaons. In particular, $`K^{-}`$ cannot be produced by favored fragmentation of either $`u`$ or $`d`$ quarks in the target, which provides the bulk of the contribution to the semi-inclusive spin asymmetry in $`\pi ^\pm `$ or $`h^\pm `$ production. In this case one might therefore expect a large sensitivity of the spin asymmetry to the flavor asymmetries of the polarized antiquark distributions. For a rough estimate we can use the EMC fragmentation functions to evaluate the coefficients Eqs.(24)–(27) for $`K^+`$ and $`K^{-}`$; the results are shown in rows 5 and 6 of Table 1. The contributions to the spin asymmetries are shown in Fig.9. One sees that, in particular in the case of $`K^{-}`$ production, the contribution proportional to $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$ in the proton is large. Thus, semi-inclusive charged kaon production could be a sensitive test of the flavor decomposition of the antiquark distribution in the nucleon. ## 5 Conclusions and Outlook Starting from the observation that the $`1/N_c`$-expansion predicts large flavor asymmetries of the polarized antiquark distributions, and the quantitative estimates for $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$ and $`\mathrm{\Delta }\overline{u}(x)+\mathrm{\Delta }\overline{d}(x)-2\mathrm{\Delta }\overline{s}(x)`$ obtained from the chiral quark–soliton model, we have explored several consequences of a large flavor asymmetry of the polarized antiquark distributions. 
On the theoretical side, we have argued that the very small value for the polarized antiquark flavor asymmetry obtained from polarized rho meson exchange in the meson cloud picture does not rule out a large asymmetry, since polarized rho meson exchange is far from being the dominant contribution to $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$ in that approach. Comparison with the large-$`N_c`$ result suggests that, in terms of the meson cloud model, a large contribution is likely to come from the interference of pion and “sigma meson” exchange. It would be interesting to explore this qualitative suggestion in more detail within the meson cloud picture. As to experimental consequences, we have found that the large flavor asymmetry predicted by the chiral quark–soliton model is consistent with the recent HERMES data on spin asymmetries in semi-inclusive charged hadron production. Our conclusions are based on the presently available information on quark and antiquark fragmentation functions; however, to the extent that we have explored it, they seem to be robust with respect to the systematic uncertainties in the fragmentation functions. We do not claim that at the present level of accuracy the HERMES data for $`A_1^{h\pm }`$ necessitate a large flavor asymmetry of the antiquark distribution in the proton. However, assuming a large flavor asymmetry we obtain a good fit to the data. In this sense the HERMES results should not be seen as evidence for a small flavor asymmetry of the polarized antiquark distribution. Also, we have argued that the assumption $`\mathrm{\Delta }\overline{u}(x)/\overline{u}(x)=\mathrm{\Delta }\overline{d}(x)/\overline{d}(x)`$ made in the analysis of the HERMES data in Ref. artificially limits the contributions from the polarized antiquark flavor asymmetry, and thus does not constitute a real alternative to the reference fit assuming zero flavor asymmetry. 
At present, one source of uncertainty in our theoretical results is the systematic error in the HERMES quark and antiquark fragmentation functions introduced by the corrections for $`4\pi `$ acceptance. It would be worthwhile to investigate whether the above analysis could be carried out directly with the fragmentation functions for the HERMES acceptance, which could considerably reduce the systematic error. We have shown that the flavor asymmetry of the polarized antiquark distribution makes a particularly large contribution to the spin asymmetries in charged kaon production. Such measurements could be an interesting option with detectors which allow discrimination between pions and kaons in the final state, e.g. at HERMES or CEBAF. Acknowledgements The authors are grateful to A. Schäfer and M. Strikman, from discussions with whom arose the idea to write this note, and to M. Düren for comments. Helpful hints from Ph. Geiger, R. Jakob, B. Kniehl, P.V. Pobylitsa, and P. Schweitzer are also gratefully acknowledged. This work has been supported in part by the Deutsche Forschungsgemeinschaft (DFG), by a joint grant of the DFG and the Russian Foundation for Basic Research, by the German Ministry of Education and Research (BMBF), and by COSY, Jülich.
no-problem/9909/nucl-ex9909013.html
ar5iv
text
# Investigation of the Exclusive ³He(e,e′pp)n Reaction ## Abstract Cross sections for the $`{}_{}{}^{\text{3}}\text{He}(e,e^{\prime }pp)n`$ reaction were measured at an energy transfer of 220 MeV and three-momentum transfers $`q`$ of 305, 375, and 445 MeV/$`c`$. Results are presented as a function of $`q`$ and the final-state neutron momentum for slices in specific kinematic variables. At low neutron momenta, comparison of the data to results of continuum Faddeev calculations performed with the Bonn-B nucleon-nucleon potential indicates a dominant role for two-proton knockout induced by a one-body hadronic current. The ground-state properties of few-nucleon systems are an excellent testing ground for models of the nucleon-nucleon (*NN*) interaction. Calculations of the structure of few-nucleon systems have been performed successfully with realistic *NN* potentials, both phenomenological ones and those based on the exchange of mesons, including a phenomenological description of the short-range *NN* behaviour . The parameters of these models were fitted to the phase shifts obtained from *NN* scattering data. Significant advances in solving the three-nucleon (*3N*) continuum currently allow exact calculations of electron-induced nuclear reactions leading to the breakup of the tri-nucleon system . The exclusive $`{}_{}{}^{3}\text{He}(e,e^{\prime }NN)N`$ reaction is sensitive to details of the initial nuclear state, as well as to the reaction mechanism and rescattering effects in the final state. The three-fold coincidence experiments necessary to measure the cross sections for these reactions have recently proven feasible . At intermediate electron energies, the cross section for electron-induced two-nucleon knockout is driven by several processes. The *NN* interaction induces initial-state correlations between nucleons, and therefore the coupling of a virtual photon to one of these nucleons via a one-body hadronic current can lead to knockout of both nucleons. 
The interaction of the virtual photon with two-body currents, either via coupling to mesons or via intermediate $`\mathrm{\Delta }`$ excitation, will also contribute to the cross section. In addition, final-state interactions (FSI) among the nucleons after absorption of the virtual photon may cause breakup of the tri-nucleon system. By studying the dependence of the cross section on various kinematic quantities one may hope to unravel the tightly connected properties of the *NN* interaction, short-range correlations, two-body currents and FSI. In this Letter we present results of a $`{}_{}{}^{3}\text{He}(e,e^{\prime }pp)n`$ experiment that was performed in the dip region, i.e., the kinematic domain between the peaks due to quasi-elastic scattering and $`\mathrm{\Delta }`$ excitation. The measurements were performed with the high duty-factor electron beam extracted from the Amsterdam Pulse Stretcher ring at NIKHEF. The incident electrons had an energy of 564 MeV. A cryogenic, high-pressure barrel cell containing gaseous $`{}_{}{}^{\text{3}}\text{He}`$ was used. The luminosity amounted to $`5\times 10^{35}`$ atoms cm<sup>-2</sup>s<sup>-1</sup>. The scattered electrons were detected in the QDQ magnetic spectrometer and the emitted protons in highly segmented plastic scintillator arrays . The kinematic settings of the QDQ correspond to a virtual-photon energy of $`\omega `$=220 MeV and three-momentum transfer values of $`q`$=305, 375, and 445 MeV/$`c`$. At $`q`$=305 MeV/$`c`$, protons were detected in the angular ranges $`5^{\circ }<\gamma _1<60^{\circ }`$ and $`-170^{\circ }<\gamma _2<-110^{\circ }`$, where $`\gamma _i`$ is the angle between the momentum $`𝐩_i^{\prime }`$ of proton $`i`$ and the transferred three-momentum $`𝐪`$. For $`q`$=375 and 445 MeV/$`c`$, $`\gamma _1`$ ranges between $`5^{\circ }`$ and $`30^{\circ }`$. The acceptance of the second scintillator array is $`\pm 20^{\circ }`$ both in-plane and out-of-plane. 
The central value of $`\gamma _2`$ was chosen such that each configuration includes the point for which the neutron is left at rest. The detection threshold of the proton detectors was 72 and 48 MeV for the kinetic energy of the protons emitted in the forward and backward directions, respectively. Accidental coincidences were subtracted from the measured yield via a procedure described in Ref. . The data were corrected for electronics dead time and for inefficiencies due to multiple scattering and inelastic processes of the protons in the detection systems. Cross sections were obtained by normalizing the yield to the detection volume and integrated luminosity. The results are presented as a function of the missing momentum $`|𝐩_m|=|𝐪-𝐩_1^{\prime }-𝐩_2^{\prime }|`$, of $`\gamma _1`$ and of $`p_{13}`$, defined as $`p_{13}=|𝐩_1^{\prime }-𝐩_3^{\prime }|`$. At missing energies $`E_m=\omega -T_{p_1^{\prime }}-T_{p_2^{\prime }}-T_{p_3^{\prime }}`$ below the pion production threshold the kinematics of the $`{}_{}{}^{\text{3}}\text{He}(e,e^{\prime }pp)n`$ reaction is completely determined, as $`𝐩_3^{\prime }=𝐩_m`$. The measured missing-energy spectrum contains a peak, corresponding to the three-body breakup of $`{}_{}{}^{3}\text{He}`$, which has a width of 6 MeV (FWHM) due to the resolution of the detectors. Strength has been shifted from this peak towards higher missing energies due to radiative processes. The nine-fold differential cross section was integrated over a range in excitation energy up to 14 MeV, taking into account the Jacobian $`\partial T_{p_2^{\prime }}/\partial E_m`$. The strength beyond this cutoff was estimated with a formalism similar to that for the $`(e,e^{\prime })`$ reaction and applied as an overall correction factor to the data. In all figures only the statistical errors are indicated. The systematic error on the cross sections is 7%. 
It is mainly determined by the uncertainty in the integrated luminosity (3%), the uncertainty in the correction applied for hadronic interactions and multiple scattering in the proton detectors (6%), and the determination of the electronics dead time (2%). The data are compared to results of Faddeev calculations , where both the three-nucleon bound-state and final-state wave functions are exact solutions of the *3N* Faddeev equations solved in a partial-wave decomposition using the Bonn-B *NN* interaction. By including the rescattering contributions to all orders in the continuum, final-state interaction effects are taken into account completely. To ensure convergence of the calculations, *NN* force components are included up to two-body angular momenta $`j=3`$. Two types of calculations were performed. The first one employs only a one-body hadronic current operator. The other one also includes processes in which the virtual photon interacts with a $`\pi `$ or $`\rho `$ meson (MECs), either in flight or in a nucleon-meson vertex. Hence, the hadronic current operator is augmented with additional currents $`𝐣_\pi `$ and $`𝐣_\rho `$ as proposed by Schiavilla *et al.* . At the vertices, cutoff parameters $`\mathrm{\Lambda }_\pi =1.7\text{GeV}`$ and $`\mathrm{\Lambda }_\rho =1.85\text{GeV}`$ have been introduced for the $`\pi `$ and $`\rho `$ meson-exchange interactions to take the finite size of baryons and mesons into account. In the case of direct two-proton knockout, the contribution of MECs to the cross section is suppressed, as the virtual photon, to first relativistic order, cannot interact with a neutral meson exchanged in a *pp* system. The calculation of the differential cross section was performed in two stages. The process of solving the Faddeev equations, which is computationally the most involved, was performed once per given $`(\omega ,q)`$ setting. 
Cross sections could subsequently be calculated for specific three-nucleon final states, determined by $`𝐩_1^{\prime }`$ and $`𝐩_2^{\prime }`$. This is especially important when comparing theoretical predictions to data obtained with large-acceptance detectors, since the cross section varies significantly within the region of phase space covered by the experimental detection volume. An orthogonal grid in the laboratory quantities $`(\theta _1,\varphi _1,\theta _2,\varphi _2,T_1)`$ was employed to compute the cross section in each bin by averaging over the contributing part of the experimental phase space. A sufficiently large number of grid points ($`2.5\times 10^6`$ points per setting) was taken to ensure convergence within 5%. The cross sections measured at $`q`$=305 MeV/$`c`$ are shown in Fig. 1 as a function of the neutron momentum ($`p_3^{\prime }=p_m`$) in the final state. The three panels correspond to different ranges in $`\gamma _1`$. The largest range in $`p_m`$ is spanned for $`25^{\circ }<\gamma _1<35^{\circ }`$. Here, the cross section was determined down to $`p_m`$ values as low as 10 MeV/$`c`$. In all panels the cross sections decrease roughly exponentially, with approximately the same slope, as a function of $`p_m`$. This decrease reflects the neutron momentum distribution inside $`{}_{}{}^{\text{3}}\text{He}`$, for a specific range of $`pp`$ relative momenta. In the measured configuration the domain of relative momenta around 300 MeV/$`c`$ per nucleon was probed. The curves show the results of continuum Faddeev calculations. The solid curve represents results obtained with a one-body hadronic current only, while the dashed line also includes contributions due to MECs. As can be seen in the middle panel, for $`p_m\lesssim 100\text{MeV}/c`$ there is a fair description of the data by the solid curve and the contribution due to MECs is almost negligible. At higher missing momenta, calculations including only one-body currents fall short by about a factor of five. 
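The bin averaging over the orthogonal grid in $`(\theta _1,\varphi _1,\theta _2,\varphi _2,T_1)`$ described above can be sketched as follows. The cross-section function here is a toy stand-in (an assumption for illustration only); in the actual analysis the values come from the Faddeev amplitudes.

```python
import itertools, math

# Toy stand-in for the cross section at a phase-space point; the real
# values come from the Faddeev amplitudes (this shape is an ASSUMPTION).
def toy_cross_section(theta1, phi1, theta2, phi2, T1):
    return math.exp(-T1 / 100.0) * (1.0 + 0.1 * math.cos(theta1 - theta2))

def bin_average(xsec, ranges, n=4):
    """Average xsec over an orthogonal grid of n midpoints per axis,
    spanning the (lo, hi) ranges of one detection-volume bin."""
    axes = [[lo + (i + 0.5) * (hi - lo) / n for i in range(n)]
            for (lo, hi) in ranges]
    points = list(itertools.product(*axes))
    return sum(xsec(*p) for p in points) / len(points)

# Illustrative bin: angles in rad, kinetic energy T1 in MeV
ranges = [(0.1, 0.5), (0.0, 0.3), (2.0, 2.6), (-0.3, 0.3), (50.0, 150.0)]
avg = bin_average(toy_cross_section, ranges)
print(avg)
```

In practice the grid is refined (here of order millions of points per setting, as quoted above) until the bin averages are stable at the few-percent level.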
The discrepancy between data and calculations is likely to be due to two-body hadronic processes, which involve coupling of the virtual photon to a proton-neutron system in the initial state, thus generating neutrons with a large momentum in the final state. This is reflected in the increase of the MEC contribution from 5% to 40% of the calculated cross section towards high $`p_m`$, which is clearly not enough to explain the discrepancy with the data. Also excitation of the $`\mathrm{\Delta }`$ resonance followed by its non-mesonic decay has to be considered. The contribution of this process strongly depends on the invariant mass $`W_{\gamma NN}`$ of the virtual photon plus two-nucleon system. In a direct reaction mechanism, i.e., at low $`p_m`$, this is equal to the invariant mass of the two-proton system in the final state, $`W_{p_1^{\prime }p_2^{\prime }}`$. For all kinematic settings we have $`2030\lesssim W_{p_1^{\prime }p_2^{\prime }}\lesssim 2055\text{MeV}/c^2`$, which is well below the mass of the $`N\mathrm{\Delta }`$ system. Moreover, in the two-proton case the contribution of $`pp\rightarrow \mathrm{\Delta }^+p\rightarrow ppp`$ will be further suppressed because of angular-momentum and parity conservation selection rules. At high $`p_m`$, intermediate $`\mathrm{\Delta }`$ excitation via the absorption of the virtual photon by a $`pn`$ pair will contribute substantially to the reaction. This process is known to dominate the $`(\gamma ,pn)`$ reaction at $`E_\gamma >180\text{MeV}`$ . The invariant mass $`W_{p^{\prime }n^{\prime }}`$ of a $`pn`$ system at $`p_m\approx 300\text{MeV}/c`$ is around 2130 MeV/$`c^2`$, which is close to the resonant mass of the $`N\mathrm{\Delta }`$ system. It should be noted that, for a fixed value of $`p_m`$, the angle $`\gamma _1`$ implicitly defines the kinematic configuration of the final state, provided that the direction of $`𝐩_2^{\prime }`$ is kept within a limited range. 
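The invariant-mass argument can be made concrete with the standard two-nucleon invariant mass, $`W=\sqrt{(E_a+E_b)^2-|𝐩_a+𝐩_b|^2}`$. The momenta below are illustrative numbers, not measured events; the nucleon mass is the usual average value.

```python
import math

M_N = 938.9  # average nucleon mass in MeV/c^2 (approximate)

def invariant_mass(p_a, p_b, m=M_N):
    """W = sqrt((E_a + E_b)^2 - |p_a + p_b|^2) for two outgoing
    nucleons, with E = sqrt(m^2 + |p|^2)."""
    energy = sum(math.sqrt(m * m + sum(x * x for x in p))
                 for p in (p_a, p_b))
    p_sum = [a + b for a, b in zip(p_a, p_b)]
    return math.sqrt(energy * energy - sum(x * x for x in p_sum))

# Illustrative momenta in MeV/c (not data): a pair at rest gives
# W = 2*M_N; relative momentum pushes W upward, toward the
# Delta-resonance region for a fast pn pair.
print(invariant_mass((0.0, 0.0, 0.0), (0.0, 0.0, 0.0)))
print(invariant_mass((400.0, 0.0, 0.0), (0.0, 300.0, 0.0)))
```

This is why a slow pp pair stays well below the $`N\mathrm{\Delta }`$ mass while a pn pair involving a fast neutron can approach it.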
For $`\gamma _1\lesssim 25^{\circ }`$ and $`\gamma _1\gtrsim 35^{\circ }`$, the three nucleons are always emitted at sizeable angles with respect to each other, which reduces their mutual interactions. Enhanced probability for rescattering occurs in so-called ‘FSI configurations’, where two nucleons are emitted with approximately the same momentum and angle. Within the interval $`25^{\circ }<\gamma _1<35^{\circ }`$, an ‘FSI configuration’ between proton-1 and the undetected neutron occurs around $`p_m=300\text{MeV}/c`$, which introduces the bump observed in the data for $`200\lesssim p_m\lesssim 300\text{MeV}/c`$. A similar structure is seen in the calculated curves. The enhanced rescattering effects occurring around ‘FSI configurations’ are the dominant factor that determines the cross section in these regions. The occurrence of such configurations cannot be avoided in experiments with large-solid-angle detectors. On the other hand, their presence offers the possibility to test the treatment of FSI in calculations. For these kinematic conditions, the cross section is best represented as a function of the momentum difference between the two outgoing nucleons involved: $`p_{ij}=|𝐩_i^{\prime }-𝐩_j^{\prime }|`$. Here $`p_{ij}=0\text{MeV}/c`$ corresponds to the actual ‘FSI configuration’. As mentioned before, the influence of such an ‘FSI configuration’ is already apparent in Fig. 1 at $`p_m\approx 250\text{MeV}/c`$ and $`\gamma _1\approx 30^{\circ }`$. Figure 2 shows the cross section as a function of $`p_{13}`$ for the measurements at $`q`$=445 MeV/$`c`$, where the detection volume extends to $`p_{13}=0\text{MeV}/c`$. The acceptance in $`p_m`$ has been limited to $`380<p_m<400\text{MeV}/c`$ to ensure complete coverage of the detection volume for the entire domain $`0<p_{13}<320\text{MeV}/c`$. This also limits $`p_1^{\prime }`$ to the range 380–430 MeV/$`c`$. The range in $`p_{13}`$ is therefore mainly due to angular variations between the undetected neutron and the forward proton. A typical final-state configuration is shown in the inset of Fig. 2. 
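A short sketch of the kinematic variables used in this analysis: the missing momentum $`p_m=|𝐪-𝐩_1^{\prime }-𝐩_2^{\prime }|`$, the missing energy $`E_m=\omega -T_{p_1^{\prime }}-T_{p_2^{\prime }}-T_{p_3^{\prime }}`$, and the pair variable $`p_{ij}=|𝐩_i^{\prime }-𝐩_j^{\prime }|`$ that vanishes in an ‘FSI configuration’. The numerical values below are illustrative, not measured events.

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def missing_momentum(q, p1, p2):
    """p_m = |q - p1' - p2'|; below pion threshold this equals the
    undetected neutron momentum p3'."""
    return norm(tuple(a - b - c for a, b, c in zip(q, p1, p2)))

def missing_energy(omega, T1, T2, T3):
    """E_m = omega - T_{p1'} - T_{p2'} - T_{p3'}."""
    return omega - T1 - T2 - T3

def p_ij(pi, pj):
    """Pair variable |p_i' - p_j'|; an 'FSI configuration' has p_ij = 0."""
    return norm(tuple(a - b for a, b in zip(pi, pj)))

# Illustrative momenta in MeV/c (not measured events):
q  = (0.0, 0.0, 305.0)
p1 = (80.0, 0.0, 250.0)
p2 = (-60.0, 10.0, 30.0)
print(missing_momentum(q, p1, p2))                # ~33.5 MeV/c
print(missing_energy(220.0, 100.0, 60.0, 10.0))  # 50.0 MeV
```

For three-body breakup the missing momentum directly gives the neutron momentum, which is why the spectra above are binned in $`p_m`$.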
The cross section data presented in Fig. 2 show an increase as $`p_{13}`$ approaches 0 MeV/$`c`$. This trend is well reproduced by the continuum Faddeev calculations, which include rescattering among the three outgoing nucleons. Figure 3 shows the dependence of the cross section on the three-momentum transfer $`q`$. As the cross section is primarily determined by the momentum of the neutron, the data are presented for two regions in $`p_m`$. The region above $`p_m`$=220 MeV/$`c`$ is disregarded in this discussion, as ‘FSI configurations’ occur in this domain and the cross section is affected differently at various values of $`q`$. For $`20<p_m<120\text{MeV}/c`$ the reaction can be considered as a direct process, leaving the spectator neutron ‘at rest’ in the final state. The measured cross section decreases by a factor of four between $`q`$=305 and 375 MeV/$`c`$. The agreement in size and slope between the data and the continuum Faddeev calculations performed with a one-body hadronic current suggests that in this domain the cross section is dominated by knockout of correlated proton pairs. For $`120<p_m<220\text{MeV}/c`$ theory underestimates the data for all three $`q`$ values by a factor of five. In conclusion, differential cross sections for the $`{}_{}{}^{3}\text{He}(e,e^{\prime }pp)`$ reaction were measured at $`\omega `$=220 MeV and $`q`$=305, 375, and 445 MeV/$`c`$ with good statistical accuracy. The measured cross section decreases roughly exponentially as a function of $`p_m`$ in the domain from 10 to 350 MeV/$`c`$. Continuum Faddeev calculations using the Bonn-B $`NN`$ potential, which include rescattering of the outgoing nucleons to all orders and also take meson-exchange currents into account, describe the data reasonably well for $`p_m<100\text{MeV}/c`$. However, an increasing discrepancy, up to a factor of three, is observed at larger $`p_m`$ values. 
In this domain meson-exchange currents account for 40% of the calculated cross section, and intermediate $`\mathrm{\Delta }`$ excitation, especially in the $`pn`$ system, is also expected to contribute strongly to the cross section. Comprehensive treatment of intermediate $`\mathrm{\Delta }`$ excitation in the calculations is needed for a detailed interpretation of the data in this high-$`p_m`$ region. Rescattering of the nucleons in the final state influences the cross section in specific kinematic orientations; within such a subset of the data, the observed trend is well reproduced by the results of the continuum Faddeev calculations. At small $`p_m`$ values the dependence of the cross section on $`q`$ is indicative of direct knockout of two protons by a virtual photon. The fair agreement between the data and calculations based on a one-body hadronic current indicates that in this domain the cross section mainly originates from knockout of two correlated protons. This opens the opportunity to exploit this low-$`p_m`$ domain for a detailed study of $`pp`$ correlations in few-nucleon systems. This work is part of the research program of the Foundation for Fundamental Research on Matter (FOM) and was sponsored by the Stichting Nationale Computerfaciliteiten (National Computing Facilities Foundation, NCF) for the use of supercomputer facilities. Both organisations are financially supported by the Netherlands Organisation for Scientific Research (NWO). The support of the Science and Technology Cooperation Germany-Poland, the Polish Committee for Scientific Research (grant No. 2P03B03914), and the US Department of Energy is gratefully acknowledged. Part of the calculations was performed on the Cray T90 and T3E of the John von Neumann Institute for Computing, Jülich, Germany.
no-problem/9909/astro-ph9909246.html
ar5iv
text
# Commentary on ”The Theory of the Fluctuations in Brightness of the Milky Way. V” by S. Chandrasekhar and G. Munch (1952) ## Abstract The series of papers by Chandrasekhar and Munch in the 1950s was concerned with the use of statistical models to infer the properties of interstellar clouds based on the observed spatial brightness fluctuations of the Milky Way. The present paper summarizes the subsequent influence of this work, concentrating on the departure from their earlier discrete cloud model to a continuous stochastic model in Paper V of the series. The contrast between the two models anticipated and parallels current tensions in the interpretation of interstellar structure, as well as intergalactic Lyman alpha clouds. The case of interstellar structure is discussed in some detail. Implications concerning the reification of models and the ability of scientific abstraction to model complex phenomena are also briefly discussed. slugcomment: Centennial Issue of The Astrophysical Journal, in press To present-day astrophysicists studying the spatial structure of the interstellar medium, the intergalactic medium, or the large-scale distribution of galaxies in the universe, a methodology involving the comparison of the spatial statistics of a model with the observed statistics seems standard. But in fact this approach to interstellar structure, and the models employed, trace back to the work of Ambartzumian in the 1940s (see Ambartzumian 1950 and Kaplan & Pikelner 1970, p.173), and were developed most extensively in a series of papers by Chandrasekhar and Munch in the 1950s (hereinafter CM, Papers I through V). These papers and their results were influential in a number of subsequent works. 
For example, the relation between the two-point spatial correlation function of galaxies and the angular correlation function, which played a dominant role in the study of large-scale structure in the 1970s and 1980s (before the availability of large redshift surveys and the recognition of the need for additional structure descriptors), involves an equation derived by Limber (1953a, see Peebles 1993, pp.216-217) and inspired by the formulation of CM’s 1952 Paper V. For the interstellar medium, several studies of interstellar reddening and HI emission and absorption have been used, assuming a discrete cloud model, to infer the mean number of clouds per unit length along a line of sight and the mean extinction per cloud, based on results given in the CM papers (e.g. Knude 1979). Perhaps the most significant aspect of this early work, however, is the manner in which the rather radical departure from their earlier discrete cloud model to a continuous stochastic model in Paper V parallels current tensions in the interpretation of both the interstellar medium and even intergalactic Lyman alpha clouds, as explained below. Prior to their 1952 paper, CM presented four papers that were concerned with inferring a few basic properties of a discrete cloud model from the observed spatial fluctuations in the brightness of the Milky Way. The brightness fluctuations were interpreted as being due primarily to the varying number of discrete absorbing clouds along a line of sight. These papers are mathematically daunting, but the basic ideas are simple and have direct relevance today. In Paper I CM derived an integro-differential equation for the brightness fluctuations in terms of the frequency distribution of cloud extinctions. 
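The discrete cloud model behind these papers is easy to illustrate with a small Monte Carlo: a Poisson-distributed number of clouds along each line of sight, each transmitting a fixed fraction $`q`$ of the light behind it, so the observed brightness fluctuates with the cloud count. The parameters below are illustrative, not fitted to the Milky Way; for Poisson clouds the mean transmission is $`e^{-\overline{n}(1-q)}`$.

```python
import random, math

random.seed(1)  # reproducible sketch

def transmitted_fraction(mean_clouds, q):
    """One line of sight: a Poisson number of clouds, each transmitting
    a fraction q of the starlight behind it."""
    # Poisson sampling via unit-rate exponential waiting times
    n, t = 0, 0.0
    while True:
        t += random.expovariate(1.0)
        if t > mean_clouds:
            break
        n += 1
    return q ** n

# Illustrative parameters (NOT fitted to the Milky Way):
nbar, q = 5.0, 0.8
samples = [transmitted_fraction(nbar, q) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# Analytic mean transmission for Poisson clouds: exp(-nbar*(1-q))
print(mean, math.exp(-nbar * (1.0 - q)))
```

The spread of the sampled transmissions is the model's analogue of the observed brightness fluctuations that CM inverted to infer the mean number of clouds and the extinction per cloud.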
This equation can be seen as a specific example of the Chapman-Kolmogorov equation describing the probability distribution of variables that undergo both continuous changes as well as ”jumps.” (A simpler approximate derivation can be found in Kaplan & Pikelner 1970.) In subsequent papers, they solved this equation for the cases in which the system of clouds has infinite extent and a particular distribution of extinction (Paper II), the case of finite extent but constant extinction per cloud (Paper III), and the general case of arbitrary distributions of extinction for infinite extent (Paper IV). Limber (1953b) generalized to finite extent, while Munch (1955, Paper VI) used the model to estimate the decorrelation length in the Milky Way. All of this work assumed a model in which clouds are discrete entities. A substantial departure, however, is seen in the fifth, 1952, paper V, in which CM considered replacing the discrete cloud model by a continuous stochastic model for the density field. This continuous model was generalized to finite extent by Ramakrishnan (1954). As CM say in their abstract, the new picture ” may be considered as an alternative to (or a refinement of) the current picture, which visualizes the interstellar medium as consisting of a distribution of discrete clouds.” Besides presaging the study of structure in terms of spatial statistics defined for a continuous random variable, this paper resonates with current work suggesting that the density distribution of Lyman alpha absorbers at intermediate and large redshift (Bi & Davidson 1997, Croft et al. 1998, Haehnelt et al. 1998) and of cool interstellar gas (e.g. Falgarone 1990 for an observational review, Ballesteros-Paredes et al. 
1999 for a theoretical discussion) should be viewed as a continuous, albeit intermittent, density and velocity field rather than as a collection of discrete entities called “clouds.” The utility of the discrete cloud model in furnishing quantitative results is not in question. The question is whether the discrete cloud model omits some basic physics that is essential to understanding the evolution of the gas and its ability to form stars and galaxies, or leads to an account of such processes that is simply incorrect at some (possibly severe) level. A useful analogy is to consider geologists trying to understand mountain formation and evolution by studying only the peaks high enough to receive snow at some given time, as though they were separate entities. Could such an approach ever discover the fundamental role of stress-driven “wrinkling” of the Earth’s crust in the evolution of mountain ranges? The assumption by CM of discrete clouds in Papers I-IV reflected the general belief among astronomers at the time that the apparent discreteness in velocity space of the absorption lines seen toward OB stars, catalogued most comprehensively by Adams (1949), reflected a discrete spatial structure. Since that time such a correspondence has become “clouded” by evidence that the spectral lines are usually blends of multiple components, and the realization that these velocity features only represent a small fraction of the density structure that later became accessible to observers using a number of other techniques. The observed morphology of interstellar gas and dust is now acknowledged to be much more complex than allowed for by the discrete cloud model, in terms of the geometry of density enhancements, nested hierarchical structure, and connectedness. Other authors have argued against the discrete cloud model on grounds other than morphology.
For example, Dickey & Lockman (1990), in their review paper on neutral hydrogen in our Galaxy, discuss “the difficulty of objectively delimiting discrete features in emission surveys” in connection with the discrete cloud model. Falgarone et al. (1992), in a study of small-scale molecular cloud structure, use the modest inferred density contrasts to conclude that “…the overall picture resembles that of the interstellar medium described in 1952 by Chandrasekhar and Munch as a continuous distribution of density fluctuations of small amplitude at each scale…rather than an ensemble of discrete clouds.” There is also substantial evidence that the density and velocity fields are scale-free over a large range of scales (see Elmegreen & Falgarone 1996 and references therein), suggesting a continuous distribution (although a system of discrete clouds with a continuous distribution of sizes, or tiny discrete clouds of the same size arranged hierarchically, could also explain the observational results). Nevertheless, sensitivity-related selection effects and the usually poor extent of spatial sampling have continued to reinforce the prevalent interpretation of observations in terms of a collection of discrete, (usually) spherical, or at least smooth, clouds. Any contour map that is based on fewer than of order ten thousand spatial resolution elements, and which has a sensitivity limit that omits a significant areal fraction of the region being studied, will appear to be a system of more-or-less spherical discrete clouds. What has changed is only an increase in the number of categories of discrete clouds, such as diffuse, dark, giant molecular, clumps, cores, etc., which usually reflect the distinct observational techniques and resolutions employed more than clear evidence of real category boundaries. There is another, just as significant, fact that has helped entrench the discrete cloud model: the model is intrinsically easier to visualize and to model theoretically.
For example, evolution of structure in the discrete model can be abstracted into a generalized “coalescence equation,” in which the clouds interact only through collisions, reducing the interstellar medium to a more complex version of the kinetic theory of gases. This model was first treated in some detail by Field & Saslaw (1965), and in a large number of subsequent works (an especially illuminating treatment can be found in Silk & Takahashi 1979). There is little connection to the flows on scales larger than an individual cloud size, or coupling in velocity space, except through collisions. Concerning the origin of the clouds, an even simpler model pictures discrete clouds that continually condense by thermal instability, resulting in a conceptually static two-phase model for the interstellar medium (Field, Goldsmith, & Habing 1969). Even though this model has been generalized to include a hot phase involving supernova explosions, as well as other effects, most notably in the influential paper by McKee & Ostriker (1977), the discreteness assumption for the clouds, and the weak connection to the hydrodynamics, persists. In contrast, if the discreteness assumption is abandoned, one is left, theoretically, with having to infer statistical properties from the hydrodynamic equations for a turbulent magnetized gas, a problem that defies satisfactory solution, let alone conceptualization, even in studies of laboratory and terrestrial unmagnetized incompressible turbulence. The problem lies in the complex and unpredictable behavior of the nonlinear advection operator in the momentum equation. The problem of empirical description is also made more difficult, since instead of counting objects with various attributes, less intuitive statistical descriptors must be employed. Even conceptually, the connection to the fluid equations makes the picture inherently more dynamic and difficult to visualize.
(However the continuous CM 1952 model did not address the dynamics, since they did not consider the available information \[Adams 1949\] concerning the velocity field. The interpretation of the dynamics in terms of the stochastic model was apparently first treated by Kaplan and by von Hoerner in the 1950s–see Kaplan & Pikelner 1970.) The tension between the complexity of empirical and simulated turbulence, on the one hand, and the many attempts to reduce it to a conceptual model, on the other, can be seen as a major theme in the history of the study of turbulence. It seems very likely that the continuous stochastic variable alternative proposed in CM’s 1952 paper was influenced by Chandrasekhar’s active involvement in turbulence theory around the same time. Therefore, while providing the basic conceptual basis for the discrete cloud model for several subsequent generations of astronomers studying reddening, extinction, HI, molecular line, and more recently submillimeter continuum observations, this series of papers culminates by asking whether their own discrete model should be replaced by a turbulence-like formulation of the problem of interstellar structure, a question which can be seen as a major theme in contemporary studies of the interstellar medium. Although CM (1952) apparently favored the continuous stochastic approach, they gave virtually no guidance concerning criteria by which this approach could be evaluated relative to the discrete cloud model, except for brief reference to the degree to which the models can match observations (see also Limber 1953b). In fact it is only relatively recently that attention has returned (from the time of Kaplan and von Hoerner in the 1950s) to descriptions in terms of continuous density and velocity fields, using, for example, the correlation function (see Miesch & Bally 1994 and references therein), functions related to wavelet transforms (Langer et al. 1995), and principal component analysis (Heyer & Schloerb 1997).
For the most part, however, theoreticians and observers alike treat the interstellar gas as some version of the discrete model, which has its theoretical and visualistic advantages, as outlined above. Part of the continued attraction of the discrete model is its conceptual simplicity, which is usually seen as a positive value in much astrophysical research and science in general. Yet CM were presciently aware of the negative reification effects which accompany the discrete cloud model: “We wish to emphasize a tendency to argue in circles can be noted in the literature in that confirmation for the picture \[italics in original\] of interstellar matter as occurring in the form of discrete clouds is sought in the data analyzed” (CM 1952, p. 104). Such reification continues to the present day, despite occasional warnings. The fact that CM did not make an explicit decision between the two models based on either empirical or theoretical considerations can be interpreted as acknowledgement of the utility of some form of an abstraction, or reduction, of the irregular, dynamic, complex, magnetized, and undeniably continuous, interstellar turbulent flow. Indeed, short of simulations, which cannot be regarded as explanatory theories per se, the nature of science seems to require some kind of abstraction to a conceptual model in order to generalize from the particular to the universal. The interesting questions that therefore arise from the series of papers by CM are: 1. What kind of conceptual model can best bridge the gap, and how can we avoid reifying the model? 2. Is any kind of conceptual model capable of bridging the gap? These questions address issues usually relegated to the philosophy or sociology of science.
But it seems clear that they should be of concern to scientists in general, since the first question calls for a new approach to the problem at hand, while the second question challenges the basis of the traditional scientific enterprise in the sense that it questions the idea that complex physical phenomena can be adequately described by universal models. Whether CM recognized these broader implications in their shift from the discrete to the continuous model is of course unknown. Yet the fact that their two models still in effect drive most of the contemporary work in the field of interstellar (and intergalactic) structure and star formation, and that the above questions have not yet been answered in the nearly five decades since their work appeared, underscores the fundamental importance of the CM papers, and provides continued motivation for the many astronomers who are trying to bridge the gap in order to understand the evolution of gas and star formation in galaxies.
# An extreme X-ray flare observed on EV Lac by ASCA in July 1998 ## 1 Introduction One of the basic standing problems of stellar coronal astronomy is the determination of the spatial structuring of the coronal plasma. While for the study of stellar interior structure the first-order approximation of spherical symmetry is a good starting point, stellar coronae are, as shown by the large body of extant imaging observations of the solar corona, far from spherically symmetric. The solar corona shows a high degree of spatial structuring: most of the X-ray luminous plasma is confined in coronal loops preferentially located at mid-latitudes, with an average position which tracks the migration of sunspots through the solar cycle. The lack of spatial information constitutes a strong limitation for the study of stellar coronae: low-resolution coronal X-ray spectra are insensitive to the plasma’s density, so that non-dispersive, CCD- or proportional counter-based spectral observations do not allow one to distinguish between a large diffuse corona and a compact, structured, high-pressure one. It is thus not possible, if the solar analogy is postulated, to determine how the solar corona scales toward higher levels of activity, i.e. whether largely through a spatial filling of the available volume with coronal loops (yielding a relatively symmetric corona) or through the filling of a relatively small number of coronal structures with significantly higher density plasma. Thus far the main tools to study the spatial distributions of the coronal plasma have been eclipse experiments and the study of flares. While the observation of rotational modulation should also in principle allow one to derive the spatial distribution of the emitting plasma, as discussed by Schmitt (1998) convincing examples of rotationally modulated X-ray emission are rare.
Under a given set of assumptions, the study of the decay phase of a flare can yield information about the size of the flaring structure; different methods for this type of analysis have been developed and applied in the past to several observations of stellar flares. The widely applied quasi-static approach (see below) almost invariably results, when applied to intense stellar flares, in long, tenuous coronal loops extending out to several stellar radii. Stronger flares in general yield larger sizes. Detailed hydrodynamic modeling of flaring loops has provided useful insight on stellar X-ray flares (Reale et al. (1988)); more recently, diagnostic tools have been developed for the derivation of the size of stellar flaring loops and of the heating evolution (Reale et al. (1993); Reale et al. (1997); Reale & Micela (1998)). In the solar case, in addition to the “compact” flares, in which the plasma appears to be confined to a single loop whose geometry does not significantly evolve during the event (an assumption shared by the quasi-static method and by the hydrodynamic modeling quoted above), a second class of flares is usually recognized, i.e. the “two-ribbon” events, in which an entire arcade of loops first opens and then closes back; the footpoints of the growing system of loops are anchored in H$`\alpha `$-bright “ribbons”. These flares are generally characterized by a slower rise and decay, and a larger energy release. Compact flares have often been considered to be due to “impulsive” heating events, while the longer decay times of two-ribbon events have been considered as a sign of sustained heating. However, sustained heating has been shown to be frequently present also in compact flares (Reale et al. (1997)), so that the distinction may indeed be less clear-cut than often thought. Fitting of X-ray spectra with physical models of static loops (Maggio & Peres (1997); Sciortino et al.
(1999)) can also yield the surface filling factor of the plasma as one of the fit parameters. However, for loops smaller than the pressure scale height this method only gives an upper limit to the filling factor, which needs to be further constrained, for example, with estimates of the plasma density based on EUV line ratios (Maggio & Peres (1997)). With few exceptions<sup>1</sup><sup>1</sup>1Notably the observation of $`\alpha `$ CrB, Schmitt & Kurster (1993), in which the observed total eclipse constrains the corona on the G5V star to have a scale height much less than a solar radius., eclipse studies of the quiescent emission of eclipsing binaries have thus far failed to yield strong constraints on spatial structuring of the plasma (Schmitt (1998)). In part this is due to the inversion of the weak observed modulation being mathematically an intrinsically ill-posed problem, where few compact structures can mimic the emission from a more diffuse medium (see discussion in Schmitt (1998)). Recently, the observation of the total eclipse of a large flare on Algol (Schmitt & Favata (1999)) has for the first time yielded a strong geometrical constraint on the size of a flaring structure. The geometrical loop size is significantly smaller than the size derived from the analysis of the flare’s decay (Favata & Schmitt (1999)) using the quasi-static method, showing how such an approach can over-estimate the actual loop size. The characteristics of the Algol flare are such that the hydrodynamic decay, sustained heating framework which we also use here yields a large range of allowed loop semi-lengths, with the lower end of the range marginally compatible with the geometrically derived size. The presence of intense X-ray flares on flare stars<sup>2</sup><sup>2</sup>2We use the term to mean “M-type dwarfs, either single or members of a multiple system, which show frequent sudden enhancements of their X-ray, UV and optical luminosity”.
was well established with *Einstein* observations – with some by now classic observations such as that of a flare on Proxima Cen (Haisch (1983)). However, *Einstein* observations were usually relatively short (a few ks), thus imposing a bias on the type of events which could be detected (Ambruster et al. (1987)). In particular, the short total integration times reduced the chance of detecting rare event types. The EXOSAT observatory featured long, uninterrupted observations, and thus allowed a more unbiased view of flares on low-mass stars to be collected, resulting in a database (Pallavicini et al. (1990)) of about 300 hr of flare star observations, from which it is apparent that flares come in a large variety of sizes for both their time scales and their energetics. Pallavicini et al. (1990) did not attempt to derive the spatial scales of the observed flares, although they tentatively divided them into two classes reminiscent of the solar compact and two-ribbon flares. The next generation of flare observations came with the ROSAT All-Sky Survey (RASS), which, thanks to its sky scanning strategy, allowed a search for rare, long-lasting events. Some flares of previously undetected magnitude and duration are indeed present in it (Schmitt (1994)); in particular, EV Lac showed a long X-ray flare lasting approximately a day, superimposed on a much shorter but more intense event. We have observed EV Lac for two days with the ASCA observatory, detecting the most intense X-ray flare thus far observed on a main-sequence star, with a 300-fold peak increase of the X-ray count rate. This paper presents a detailed analysis of the flaring event, and it is organized as follows: the ASCA observations and their reduction, together with the determination of the spectral parameters for the flare are discussed in Sect.
2, the parameters of the flaring region are determined (using both the quasi-static formalism of van den Oord & Mewe (1989) and the hydrodynamic decay, sustained heating framework of Reale et al. (1997)) in Sect. 3, with a discussion (Sect. 4) closing the paper. ## 2 Observations and data reduction EV Lac was observed by ASCA continuously for $`\sim 150`$ ks from 13 July 1998 06:10 UT to 15 July 1998 01:40 UT, as an AO-6 guest investigator target (P.I. F. Favata). The data were analyzed using the ftools 4.1 software, extracting both the spectra and the light curves with the xselect package and performing the spectral analysis with the xspec 10.0 package. The mekal plasma emission model (Mewe et al. (1995)) was adopted for the spectral analysis. The peak count-rate of the flare ($`\sim 100`$ cts s<sup>-1</sup> in the SIS) exceeds both the telemetry limit ($`40`$ cts s<sup>-1</sup>) and the 1% pile-up limit throughout the whole point-spread function ($`60`$ cts s<sup>-1</sup>), thus preventing a reliable spectroscopic analysis. We therefore only used the GIS data in the following. Source photons have been extracted, for both GIS-2 and GIS-3 detectors, from a circular region 18.5 arcmin in diameter (74 pixels, somewhat larger than the suggested extraction radius for GIS data; given the strength of the source this ensures that as many source photons as possible are collected) centered on the source position, while background photons have been extracted from a circular region identical in size to the source region but symmetrically placed with respect to the optical axis of the X-ray telescope. For point sources such a strategy allows the background to be extracted from the same observation (and thus with the same screening criteria) while ensuring that the effect of telescope vignetting on cosmic background photons is properly accounted for. Given the high intensity of the source emission during the flare, the background is however effectively negligible.
The GIS-3 background-subtracted light curve for EV Lac for the complete duration of the ASCA observation is shown in Fig. 1. The light curve shows evidence for variability on many time scales, and at least three individual flares can be identified: the exceptional event at $`\sim 51`$ ks from the beginning of the observation, and two minor (though still rather sizable) flares at $`\sim 10`$ and $`\sim 75`$ ks. The light curves of both minor events show a clear exponential decay, but their peak is not covered by the observations. To derive the temporal evolution of the temperature and emission measure of the large flare we have subdivided it into 9 time intervals, shown together with the GIS-3 light curve of the event in Fig. 2. Individual GIS-2 and GIS-3 spectra have been extracted for each of these intervals and merged using the procedure described in the ASCA ABC Guide (1997). The exposure time of each individual spectrum has been corrected for the dead-time of the GIS (which, at these high count rates, is rather significant, with values up to 1.2 – note that both light curves in Figs. 1 and 2 are *not* corrected for detector dead-time). The quiescent spectrum has been taken from the interval immediately preceding the flare (interval 1 in Fig. 2, covering $`\sim 5`$ ks). A two-temperature model has been fit to the spectrum extracted from interval 1, with the resulting spectral parameters shown in Table 1. The sequence of GIS flare spectra is shown in Fig. 3. The left panel shows the spectra collected during the rising phase of the flare, up to the peak in the light-curve (interval 4), while the decay-phase spectra are plotted in the right panel. Individual flare spectra from time intervals 2 to 9 have been fit with a single-temperature mekal model; given the lack of soft response in the GIS no absorbing column density was included.
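The dead-time correction mentioned above amounts to scaling the naive count rate by the dead-time factor (equivalently, dividing the exposure by it). The following is a minimal illustration with an invented helper function and invented numbers; the actual correction is performed per interval by the ftools pipeline:

```python
# Minimal sketch of a dead-time correction (illustrative only; the real
# correction is applied per time interval by the reduction pipeline).
# A dead-time factor f > 1 means the detector was live for exposure/f
# seconds, so the true count rate is f times the naive counts/exposure.

def corrected_rate(counts, exposure_s, deadtime_factor):
    """Return the dead-time-corrected count rate in cts/s."""
    live_time = exposure_s / deadtime_factor
    return counts / live_time

# Hypothetical numbers: 1e5 counts in a 1 ks bin with a factor-1.2 dead time;
# the naive 100 cts/s becomes 120 cts/s after correction.
rate = corrected_rate(1.0e5, 1000.0, 1.2)
```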
The quiescent emission was modeled by adding a frozen-parameter two-temperature mekal model to the fit (with the parameters as in Table 1); the results of the spectral fits are shown in Table 2. During intervals 5 and 6 the single-temperature fit does not yield a satisfactory reduced $`\chi ^2`$ (see Table 2), due to the large residuals present in the region around 1 keV, with the observed spectra showing some additional emissivity with respect to the models. Similar effects are also seen in the ASCA SIS Capella spectrum (Brickhouse (1998)) and in the flaring spectra of Algol as seen by SAX (Favata & Schmitt (1999)), suggesting that current plasma emission codes (as the mekal one used here) under-predict the observed spectrum around 1 keV, likely due to a large number of high quantum-number ($`n>5`$) Fe L lines from Fe xvii, Fe xviii and Fe xix (Brickhouse (1998)), and thus the formally unacceptable $`\chi ^2`$ resulting from the fit does not necessarily imply that the one-temperature model is not correctly describing the observed spectrum. Indeed, we have verified that adding further temperature components does not significantly improve the fit for intervals 5 and 6. The time evolution of the flare’s spectral parameters (temperature, emission measure, plasma metal abundance) is shown in Fig. 4, together with the flare’s GIS-3 light-curve binned in the same time intervals. ## 3 Flare analysis We have analyzed the present flare using two different frameworks (quasi-static cooling and hydrodynamic decay, sustained heating), which both make the assumption that the flaring plasma is confined in a closed loop structure whose geometry is not significantly evolving during the event. Although direct support for this assumption is of course missing, the relatively short duration of the event allows an analogy with solar compact flares.
In any case the second method provides reliable scale sizes of the flaring structure even in the presence of some readjustment of the magnetic field, the crucial assumption being plasma confinement. ### 3.1 The quasi-static cooling framework Many stellar X-ray flares observed to date have been studied using the quasi-static formalism first discussed in detail by van den Oord & Mewe (1989). It is thus of interest to analyze the present event with the quasi-static approach, to allow a homogeneous comparison with literature data, even if, as discussed by Favata & Schmitt (1999), this method can significantly over-estimate the size of the flaring loops (see also Reale et al. (1997)). According to van den Oord & Mewe (1989), for the decay to be quasi-static (i.e. to happen through a sequence of states each of which can be described by the scaling laws applicable to stationary coronal loops) the ratio between the characteristic times for radiative and conductive cooling must be constant during the flare decay (although its absolute value needs not be known). This ratio is parameterized by the quantity $$\mu =\frac{\tau _\mathrm{r}}{\tau _\mathrm{c}}=C\times \frac{T^{13/4}}{EM},$$ (1) where the normalization $`C`$ depends on the details of the loop’s geometry and is not relevant here. The evolution of $`\mu `$ is plotted in Fig. 5; within the error bars $`\mu `$ is constant during the whole decay, so that the conditions for the applicability of the quasi-static decay framework are met in this case. The scaling laws discussed by Stern et al. (1992) – linking the loop semi-length $`L`$ and the plasma density $`n`$ with the peak flare temperature $`T`$ and the effective decay time $`\tau `$ – yield results very similar to the ones obtained through a full fit to the equations of van den Oord & Mewe (1989), so that we will limit ourselves to their application.
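The constancy condition of Eq. (1) can be checked numerically once the fitted temperatures and emission measures of the decay intervals are in hand: one simply verifies that $`T^{13/4}/EM`$ stays constant within the errors (the normalization $`C`$ drops out). A minimal pure-Python sketch, with numbers invented for illustration rather than taken from Table 2:

```python
# Sketch of the quasi-static applicability test: mu ~ T**(13/4) / EM should
# stay constant during the decay; the normalization C is irrelevant here.

def mu_values(T, EM):
    """Un-normalized mu = T**(13/4) / EM for each decay interval."""
    return [t ** (13.0 / 4.0) / em for t, em in zip(T, EM)]

# Invented decay sequence in which EM tracks T**(13/4) exactly, so mu is
# constant by construction (arbitrary units).
T = [8.0, 6.0, 4.5, 3.0]
EM = [t ** (13.0 / 4.0) for t in T]

mus = mu_values(T, EM)
spread = (max(mus) - min(mus)) / mus[0]
# For real data one would compare this spread with the error bars.
```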
Specifically, $`L\propto \tau T^{7/8}`$ and $`n\propto \tau ^{-1}T^{6/8}`$, valid for temperature regimes in which the plasma emissivity scales as $`\mathrm{\Psi }_0T^\gamma `$ with $`\gamma \simeq 0.25`$. In practice this is valid for $`T\gtrsim 20`$ MK, i.e. for most of the decay of the flare discussed here. Scaling the values determined for the EV Lac flare from the parameters derived by van den Oord & Mewe (1989) for the EXOSAT flare observed on Algol, the derived loop semi-length is $`L\simeq 5\times 10^{10}`$ cm ($`\simeq 2R_{\ast }`$), and the plasma density $`n\simeq 6\times 10^{11}`$ cm<sup>-3</sup>. ### 3.2 The hydrodynamic decay, sustained heating framework A different approach to the study of a flare’s decay phase, with the same aim of determining the physical parameters of the flaring region, has been developed by Reale et al. (1997). It has been recognized from the modeling of solar X-ray flares that the slope of the locus of the flare decay in the $`\mathrm{log}n`$–$`\mathrm{log}T`$ plane is a powerful diagnostic of the presence of additional heating during the decay itself (Sylwester et al. (1993)); by making use of extensive hydrodynamic modeling of decaying flaring loops, with different heating time scales, Reale et al. (1997) have derived an empirical relationship between the light curve decay time (in units of $`\tau _{\mathrm{th}}`$, the loop thermodynamic decay time, Serio et al. (1991)) and the slope $`\zeta `$ in the $`\mathrm{log}n`$–$`\mathrm{log}T`$ diagram. This allows the length of the flaring loop to be derived as a function of observable quantities, i.e. the decay time of the light curve, the flare maximum temperature and the slope of the decay in the $`\mathrm{log}n`$–$`\mathrm{log}T`$ diagram (the square root of the emission measure of the flaring plasma is actually used as a proxy to the density).
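The decay slope in the $`\mathrm{log}\sqrt{EM}`$–$`\mathrm{log}T`$ plane is a simple least-squares fit over the decay intervals. A pure-Python sketch follows; the synthetic data are built with a known slope of 0.56 purely to check that the fit recovers it, and are not the flare's measurements:

```python
import math

# Least-squares estimate of the decay slope zeta, defined here as the slope
# of log T versus log sqrt(EM) during the decay phase.
def decay_slope(T, EM):
    x = [0.5 * math.log10(em) for em in EM]   # log sqrt(EM)
    y = [math.log10(t) for t in T]            # log T
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Synthetic decay with slope 0.56 built in (EM in cm^-3, T in K; invented).
EM = [1e53, 6e52, 3e52, 1.5e52, 8e51]
T = [10 ** (0.56 * 0.5 * math.log10(em) - 7.0) for em in EM]

zeta = decay_slope(T, EM)   # recovers the input slope of 0.56
```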
Since the characteristics of the observed decay depend on the specific instrument response, the parameters of the actual formulas used have to be calibrated for the telescope used to observe the flare. The method reported in Reale et al. (1997) was tested on a sample of solar flares observed with Yohkoh-SXT, for which both images (from which the length of the flaring loop was measured) and spectral parameters (from the temperature and emission measure diagnostic derived from Yohkoh filter ratios) were available, and has been shown to provide reliable results for most of the studied events. A first application of the method to stellar flares observed with ROSAT PSPC is described by Reale & Micela (1998). For the present study the method has been recalibrated for stellar flares observed with ASCA GIS. The thermodynamic decay time $`\tau _{\mathrm{th}}`$ of a closed coronal loop with semi-length $`L`$, and maximum temperature $`T_{\mathrm{max}}`$ is given by Serio et al. (1991) as $$\tau _{\mathrm{th}}=\frac{\alpha L}{\sqrt{T_{\mathrm{max}}}}$$ (2) where $`\alpha =3.7\times 10^{-4}\,\mathrm{cm}^{-1}\,\mathrm{s}\,\mathrm{K}^{1/2}`$. By means of a grid of hydrostatic loop models (see Reale & Micela (1998)) we have determined an empirical relationship linking the loop maximum temperature $`T_{\mathrm{max}}`$, typically found at the loop apex (e.g. Rosner et al. (1978)), to the maximum temperature $`T_{\mathrm{obs}}`$ determined from the GIS spectrum: $$T_{\mathrm{max}}=0.085\times T_{\mathrm{obs}}^{1.176}$$ (3) Following the same procedure as in Reale et al.
(1997) and Reale & Micela (1998) (using extensive hydrodynamic modeling of decaying flaring loops) we have determined the ratio between $`\tau _{\mathrm{LC}}`$ (the observed $`e`$-folding time of the flare’s light curve, determined by fitting the light curve from the peak of the flare down to 10% of the peak level) and $`\tau _{\mathrm{th}}`$ as a function of the slope $`\zeta `$ in the $`\mathrm{log}\sqrt{EM}`$–$`\mathrm{log}T`$ diagram to be, for the ASCA GIS, $$\frac{\tau _{\mathrm{LC}}}{\tau _{\mathrm{th}}}=F(\zeta )=c_ae^{-\zeta /\zeta _a}+q_a$$ (4) where $`c_a=10.9`$, $`\zeta _a=0.56`$ and $`q_a=0.6`$. The formula for the loop semi-length $`L`$ is therefore: $$L=\frac{\tau _{\mathrm{LC}}\sqrt{T_{\mathrm{max}}}}{\alpha F(\zeta )}\qquad 0.38<\zeta \le 1.7$$ (5) where the second part of the relationship gives the range of $`\zeta `$ values allowed according to the modeling. The uncertainty on $`L`$ is the sum of the propagation of the errors on the observed parameters $`\tau _{\mathrm{LC}}`$ and $`\zeta `$ with the standard deviation of the difference between the true and the derived loop lengths. The latter amounts to $`\sim 15`$%. Equation 5 has been tuned on exponentially decaying light curves; however it has been shown to provide reliable results also for solar flares with more complex decay trends, e.g. a double exponential decay (as for the flare studied here), provided that the whole decay is fitted with a single exponential (F. Reale, private communication). The evolution of the EV Lac flare in the $`\mathrm{log}\sqrt{EM}`$–$`\mathrm{log}T`$ plane is shown in Fig. 6, together with a least-square fit to the decay phase. The resulting best-fit slope for the decaying phase, computed for time intervals from 4 to 9 inclusive, is $`\zeta =0.56\pm 0.04`$. Application of Eq. 4 yields a ratio between the observed cooling time scale $`\tau _{\mathrm{LC}}`$ and the thermodynamic cooling time scale for the flaring loop $`\tau _{\mathrm{th}}`$ of $`F(\zeta )=4.6`$.
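Plugging the observables quoted for this flare ($`\zeta =0.56`$, $`\tau _{\mathrm{LC}}=1.80`$ ks, $`T_{\mathrm{max}}\simeq 1.5\times 10^8`$ K) into Eqs. (2), (4) and (5) reproduces the quoted loop size. A minimal numerical sketch; note that the exponential in $`F(\zeta )`$ must carry a negative exponent for $`F(0.56)`$ to come out close to the quoted 4.6:

```python
import math

# Loop semi-length from the flare decay (sketch of the GIS-calibrated
# method described in the text): L = tau_LC * sqrt(T_max) / (alpha * F(zeta)).
ALPHA = 3.7e-4                      # cm^-1 s K^(1/2), Serio et al. (1991)
CA, ZETA_A, QA = 10.9, 0.56, 0.6    # ASCA GIS calibration constants

def F(zeta):
    # Negative exponent: F(0.56) ~ 4.6, as quoted in the text.
    return CA * math.exp(-zeta / ZETA_A) + QA

def loop_semilength(tau_lc_s, t_max_K, zeta):
    return tau_lc_s * math.sqrt(t_max_K) / (ALPHA * F(zeta))

L = loop_semilength(1800.0, 1.5e8, 0.56)   # ~1.3e10 cm
tau_th = 1800.0 / F(0.56)                  # ~400 s thermodynamic decay time
```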
Such a large value implies that the observed decay is driven by the time-evolution of the heating process and not by the free cooling of the loop. Also, the actual loop length will be significantly smaller than would be estimated by assuming that the spectral parameters reflect free cooling of the decaying loop. The actual value of $`\tau _{\mathrm{LC}}`$ has been determined by fitting the GIS light curve, binned in the same time intervals used for extracting the individual flare spectra (as plotted in Fig. 2), considering the intervals from 4 to 9 inclusive. In this case $`\tau _{\mathrm{LC}}=1.80\pm 0.15`$ ks, and therefore $`\tau _{\mathrm{th}}\simeq 400`$ s. The intrinsic flare peak temperature is, applying Eq. 3 to the observed maximum temperature, $`T_{\mathrm{max}}\simeq 150`$ MK. From Eq. 5 the loop semi-length is $`L=(1.3\pm 0.3)\times 10^{10}`$ cm, i.e. $`L\simeq 0.5R_{\ast }`$. This loop length is much smaller than the pressure scale height<sup>3</sup><sup>3</sup>3defined as $`H=2kT/\mu g`$, where $`T`$ is the plasma temperature in the loop, $`\mu `$ is the molecular weight and $`g`$ is the surface gravity of the star. In this case $`H\simeq 8\times 10^{11}`$ cm of the flaring plasma on EV Lac and also significantly smaller (by a factor of 4) than the one derived through the quasi-static formalism. A simple consistency check can be obtained by comparing the pressure obtained by assuming that the flaring loop is not, at maximum, far from a steady-state condition (thus applying the scaling laws of Rosner et al. (1978)) with the pressure implied by the derived values of $`L`$ and $`T`$ for a plausible geometry. In practice the geometry is parameterized by the ratio $`\beta `$ between the radius of the loop and its length.
The plasma density is then $$n=\sqrt{\frac{EM}{2\pi L^3\beta ^2}}$$ (6) If we assume $`\beta \approx 0.1`$–$`0.3`$ (a typical range for solar coronal loops), the loop volume is $`1.4`$–$`13\times 10^{29}`$ cm<sup>3</sup>, and the resulting plasma density at the peak of the flare is $`n\approx 2`$–$`0.2\times 10^{12}`$ cm<sup>-3</sup>. The corresponding pressure is $`p_{\mathrm{max}}\approx 8`$–$`0.9\times 10^4`$ dyne cm<sup>-2</sup>. Using the scaling law of Rosner et al. (1978) applicable to steady-state loops, $$T_{\mathrm{max}}=1.4\times 10^3(p_0L)^{1/3}$$ (7) where $`p_0`$ is the pressure at the base of the loop, one obtains $`p_0\approx 10^5`$ dyne cm<sup>-2</sup>, slightly larger than $`p_{\mathrm{max}}`$ for $`\beta =0.1`$. This implies that, at flare maximum, the plasma evaporated from the chromosphere has not ($`\beta =0.3`$), or has only nearly ($`\beta =0.1`$), filled the flaring loop up to hydrostatic equilibrium conditions. ### 3.3 Energetics We have computed, for each of the 8 time intervals into which the flare has been subdivided, the X-ray luminosity in the 0.1–10 keV band. For this purpose the spectrum has been extrapolated outside the formal spectral range covered by the GIS detectors; if anything, this is likely to underestimate the true luminosity in the extended band, as it may miss any softer component present in the spectrum and invisible to the GIS. The time evolution of the flare X-ray luminosity is shown in Fig. 7, in which the data are plotted both in absolute units and in units of the star’s bolometric luminosity. During interval 4 (at the peak of the light curve) the X-ray luminosity of the flare is about one quarter of the photospheric (bolometric) luminosity of the star. Soft X-ray radiation is only one of the energy loss terms for the flaring plasma: kinetic energy, conduction to the chromosphere, and white-light, UV and XUV flaring emission also contribute significantly to the energy budget. In the solar case, detailed analyses of flares of different types (Wu et al. 
(1986)) show that at the peak of the event soft X-ray radiation accounts for only 10–20% of the total energy budget; similarly, Houdebine et al. (1993) analyzed a large optical flare on the dMe star AD Leo, concluding that the kinetic energy of plasma motions during the event is likely to be at least as large as the energy radiated during the flare. A detailed assessment of the energy balance of the present flare is not possible, given the lack of multi-wavelength coverage and of velocity information which could help in assessing the plasma kinetic energy. The total energy radiated in the X-rays is $`E\approx 1.5\times 10^{34}`$ erg (obtained by simple trapezoidal integration of the instantaneous X-ray luminosity) over $`\approx 10`$ ks, equivalent to $`\approx 300`$ s of the star’s bolometric energy output. From the scaling laws of Rosner et al. (1978) we can estimate the heating rate per unit volume at the peak of the flare, assumed uniform along the loop, as $$\frac{dH}{dV\,dt}\simeq 10^5p^{7/6}L^{-5/6}\simeq 240\mathrm{\ erg\ cm}^{-3}\mathrm{\ s}^{-1}$$ (8) The total heating rate at flare maximum is therefore $$\frac{dH}{dt}\simeq \frac{dH}{dV\,dt}\times V\simeq 3.3\times 10^{31}\mathrm{\ erg\ s}^{-1}$$ (9) a factor of $`\approx 3`$ higher than the flare’s maximum X-ray luminosity (see Fig. 7), compatible with X-ray radiation being only one of the energy loss terms during the flare. If we assume that the heating is constant during the initial rising phase, which lasts $`t_\mathrm{r}\approx 300`$ s, and then decays exponentially with an $`e`$-folding time $`\tau _\mathrm{H}\approx 4.6\tau _{\mathrm{th}}\approx 1800`$ s (i.e. similar to $`\tau _{\mathrm{LC}}`$), the total energy released during the flare is $$H_{\mathrm{tot}}\simeq \frac{dH}{dt}\times (t_\mathrm{r}+\tau _\mathrm{H})\simeq 7\times 10^{34}\mathrm{\ erg}$$ (10) approximately five times the energy radiated in X-rays. Energy losses by thermal conduction are indeed expected to be large at such high temperatures. 
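The chain of estimates in Eqs. 6–9 can be reproduced in a few lines. This is an illustrative sketch, not the authors' code; the peak emission measure EM ≈ 5×10^53 cm^-3 is an assumed value (the fitted value is given earlier in the paper, not in this section), chosen to be consistent with the quoted n and V, and the pressure is taken as p = 2nkT with T ≈ 150 MK:

```python
import math

K_B = 1.381e-16  # erg K^-1

def loop_estimates(em, L, beta, T):
    """Volume, density (Eq. 6), pressure, and RTV heating rate (Eq. 8)."""
    V = 2 * math.pi * beta**2 * L**3       # cylindrical loop volume, cm^3
    n = math.sqrt(em / V)                  # Eq. 6, cm^-3
    p = 2 * n * K_B * T                    # electron + ion pressure
    heat = 1e5 * p**(7 / 6) * L**(-5 / 6)  # Eq. 8, erg cm^-3 s^-1
    return V, n, p, heat

# Assumed EM ~ 5e53 cm^-3; L = 1.3e10 cm; beta = 0.1; T ~ 150 MK
V, n, p, heat = loop_estimates(5e53, 1.3e10, 0.1, 1.5e8)
print(V, n, p, heat)   # ~1.4e29, ~2e12, ~8e4, ~2e2
print(heat * V)        # total heating rate dH/dt, ~3e31 erg/s

# Base pressure from inverting the RTV law (Eq. 7): p0 ~ 1e5 dyne cm^-2
p0 = (1.5e8 / 1.4e3)**3 / 1.3e10
print(p0)
```

The numbers agree with the text to within rounding: n ≈ 2×10^12 cm^-3, p ≈ 8×10^4 dyne cm^-2, a volumetric heating rate of ≈ 2×10^2 erg cm^-3 s^-1 and dH/dt ≈ 3×10^31 erg s^-1.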
### 3.4 Metal abundance The metal abundance of the flaring plasma shows a significant evolution, rising at the peak of the flare to a value $`\approx 2`$ times higher than the abundance measured for the quiescent emission, and decaying back to the quiescent value during the terminal phase of the flare. Evolution of the metal abundance has also been observed in other flares – for example, it is evident in the Algol flare observed with SAX (Favata & Schmitt (1999)) – and the behavior observed here appears to follow the same general pattern, with the abundance enhancement going in parallel with the flare’s light curve. The simplest explanation suggested for this is to assume that a fractionation mechanism is at work in the quiescent corona, and that the evaporation of pristine photospheric material during the flare temporarily sets the coronal chemical equilibrium off balance. Unfortunately, few reliable abundance determinations are available for M dwarfs; in particular, no state-of-the-art photospheric abundance analysis of EV Lac is known to us, and thus it is not possible to assess whether the quiescent coronal abundance is indeed depleted with respect to the photospheric value. ## 4 Discussion The most remarkable characteristic of the EV Lac flare discussed in the present paper is certainly its very large X-ray luminosity: at flare peak, for a few minutes, the flare nearly outshines the star’s photosphere. A detailed analysis of the flare’s decay, however, shows that this is an interesting event on other accounts as well. Typical loop lengths derived for strong flares on active stars, mostly using the quasi-static cooling formalism, are large, comparable to or often greater than the stellar radius. The picture which has emerged from most of the literature is thus one of long, tenuous plasma structures, with the attendant challenge of maintaining sufficiently strong magnetic fields far away from the stellar surface. 
In the present case, instead, the loop semi-length derived from the analysis of the flare decay using the method of Reale et al. (1997) is relatively compact, at about 0.5 stellar radii (implying a maximum height above the stellar surface of $`\approx 0.3`$ stellar radii). The length derived using the quasi-static formalism is about 4 times larger, and would thus again lead to the “classic” picture of long, tenuous loops. Although certainly not small, flaring loops with $`L\approx 0.5R_{\ast }`$ are by no means exceptional, even by the relatively modest solar standards. The analysis of the large flare on Algol seen by the SAX observatory (Schmitt & Favata (1999); Favata & Schmitt (1999)) shows that the picture of large tenuous loops implied by the results of the quasi-static analysis can be quite misleading: at least in that case, the geometrical size of the flaring plasma constrained by the light-curve eclipse is significantly smaller than the loop heights derived with the quasi-static method. An analysis based on the method of Reale et al. (1997) yields, in that case as well, loop dimensions significantly smaller than the ones implied by the quasi-static analysis. The much more compact loop size derived through the Reale et al. (1997) method is linked to the presence of significant plasma heating during the flare decay, as the implied small, dense loop has a very short thermodynamic decay time ($`\tau _{\mathrm{th}}\approx 400`$ s). No large diffuse plasma structures at several stellar radii from the surface are needed to model the flaring region; a more appropriate picture appears to be one of a rather compact, high-pressure plasma structure, whose decay is completely dominated by the time evolution of the heating mechanism. The present large EV Lac flare shows several characteristics in common with other large, well observed stellar flares discussed in the literature. 
The light curve has a “double exponential” decay, with the initial time scale being more rapid and a slower decay setting in afterwards. A very similar decay pattern is observed in the large Algol flare seen by SAX, as well as in many large solar flares (see Feldman et al. (1995) for an example). The best-fit metal abundance of the flaring plasma also shows what by now appears to be a characteristic behavior: it increases in the early phases of the flare, peaking more or less with the peak of the light curve, and then decreases again to the pre-flare level. In the case of the SAX Algol flare, the long duration and high intensity of the event make it possible to show that the metallicity decays to its pre-flare value on faster time scales than either the plasma temperature or the emission measure. The coarser time coverage and shorter duration of the present flare do not allow such a detailed assessment. The heating mechanism of the solar (and stellar) corona remains in many respects an unsolved puzzle, and even more so the mechanism responsible for large flares. However, it is by now rather clear that most sizable flares cannot be explained by a simple picture of a (mostly) impulsive heating event followed by a decay dominated by the free thermodynamic cooling of the plasma structure. On the contrary, the evidence from the recent flow of well studied flare data (including the one in the present paper) is that the decay of large flares is dominated by the time evolution of the heating mechanism. Thus, the double exponential decay observed here, as well as in other large solar and stellar flares, is likely to be a characteristic of the heating mechanism rather than of the flare decay. The interpretation of the increase in best-fit plasma metallicity during the flare’s peak is still unclear. 
If the presence of a fractionation mechanism is accepted, causing differences between the metal abundances in the photosphere and in the corona, the abundance increase during the flare can plausibly be explained as due to the evaporation of photospheric plasma during the early phases of the flare, on time scales faster than the ones on which the fractionation mechanism operates. Since the coronal plasma shortly after the impulsive heating is almost entirely made of material evaporated from the chromosphere, if this scenario were correct the observations presented here would imply that the chromospheric metallicity should be about three times the coronal one in quiescent conditions. Recent calculations (G. Peres, private communication) show that the plasma in a flaring loop such as the one responsible for the EV Lac flare discussed here is not optically thin in the strongest lines. This is in particular true for the Fe xxv complex at $`\approx 6.7`$ keV, which drives the determination of the metallicity of the flaring plasma. However, optical-thickness effects would in general depress the strong lines, leading to a lower metallicity estimate, and cannot therefore explain the metallicity increase observed during the flare. Another possible bias in the best-fit metallicity can derive from the thermal structure of the flaring plasma, which is not isothermal even though it is being fitted with an isothermal model. To assess the magnitude of this effect we analyzed the synthetic spectra produced with an ad hoc hydrodynamic simulation of a flaring loop, showing that the effect is small ($`\approx 30`$%) in comparison with the observed magnitude of the change (a factor of $`\approx 3`$). If the heating mechanism responsible for the present flare is essentially due to some form of dissipation of magnetic energy, an obvious question to ask is what field strength would be required to accumulate the emitted energy, and to keep the plasma confined in a *stable* magnetic loop configuration. 
A related question is whether the present flare could be interpreted as a reasonable scaling-up of the conditions usually observed in the solar corona, or whether a different configuration and/or mechanism for energy release needs to be invoked. Our flare analysis allows us to make some relevant estimates, under the assumptions that the energy release is indeed of magnetic origin and that it occurs entirely within a single coronal loop structure, with the characteristics inferred from the analysis of the flare decay. An estimate of the minimum magnetic field $`B`$ necessary to produce the event can then be obtained from the relation: $$E_{\mathrm{tot}}=\frac{(B^2-B_0^2)\times V_{\mathrm{loop}}}{8\pi }$$ (11) where $`E_{\mathrm{tot}}\approx 7\times 10^{34}`$ erg is the total energy released (Sect. 3.3), $`B_0`$ is the magnetic field surviving the flare and $`V_{\mathrm{loop}}`$ is the total volume of the flaring plasma. As an estimate of $`B_0`$ we take the magnetic field necessary to keep the plasma confined in a rigid loop structure throughout the flare, thus implicitly assuming that the loop geometry does not change during the flare. From a plasma density $`n\approx 2\times 10^{12}`$ cm<sup>-3</sup> and a temperature $`T\approx 100`$ MK, the estimated maximum plasma pressure is $`\approx 6\times 10^4`$ dyn cm<sup>-2</sup>; in order to support such a pressure, a field of $`\approx 1.2`$ kG is required. Hence, the total minimum magnetic field required to explain the flare is, from Eq. 11, $`B\approx 3.7`$ kG, a value compatible with the average magnetic field of 3.8 kG, with a surface filling factor of about 60% (and evidence for field components of up to $`\approx 9`$ kG), measured on EV Lac at the photospheric level by Johns-Krull & Valenti (1996). Although we have used the loop volume in the derivation of $`B`$, this is not to say that the field fills up the whole volume. Rather, our estimates can be interpreted in the framework of the flare energy being stored in a magnetic field configuration (e.g. 
a large group of spots) with a field strength of several kG, covering a volume comparable to that of the flaring loop. What remains a matter of speculation is how often such a large energy release may occur, or in other words, what conditions are required to accumulate such large amounts of magnetic energy, especially when the photosphere is so permeated by magnetic fields as shown by Johns-Krull & Valenti (1996). In any case, the above scenario suggests the presence of large-scale, organized magnetic fields. This is somewhat in contrast with the hypothesis that EV Lac is a fully convective star whose activity is powered by a turbulent dynamo, which would be expected to produce small-scale magnetic fields. Most dynamo theories suggest (Durney et al. (1993)) that less magnetic flux should be generated by a turbulent dynamo (as compared to the case of the solar-type “shell” dynamo), because there is no stable overshoot layer where the fields can be stored and amplified, and that only small-scale magnetic regions should emerge uniformly to the surface, because the crucial ingredient is the small-scale turbulent flow field rather than large-scale rotational shear. On the other hand, the presence of a magnetic field may substantially modify the stellar interior structure. Magnetic fields – even smaller than dictated by equipartition arguments – alter the convective instability conditions (Ventura et al. (1998)), and thus likely modify the structure of the convective envelope. At the same time, convection has the tendency to pump magnetic fields downward (“turbulent pumping”; Brandenburg et al. (1992); Tobias et al. (1998)), so that – in a fully convective star – fields may accumulate near the center. Hence, magnetic fields are likely to be an important (and thus far essentially unaccounted-for) term in determining the actual stellar structure, and any realistic calculation at the low-mass end should consider the dynamo-generated magnetic fields as an essential part. 
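Returning to the field estimate of Eq. 11, it can be checked directly in a few lines. This is our illustrative sketch, not the authors' code, using the quoted values E_tot ≈ 7×10^34 erg, V ≈ 1.4×10^29 cm^3 (the β = 0.1 loop volume) and p ≈ 6×10^4 dyn cm^-2:

```python
import math

def min_field(E_tot, V, p):
    """Minimum field from Eq. 11, with B0 set by pressure confinement."""
    B0 = math.sqrt(8 * math.pi * p)                  # confinement field, G
    B = math.sqrt(B0**2 + 8 * math.pi * E_tot / V)   # Eq. 11 solved for B
    return B, B0

B, B0 = min_field(7e34, 1.4e29, 6e4)
print(B0)   # ~1.2e3 G: the ~1.2 kG confinement field
print(B)    # ~3.7e3 G, compatible with the measured 3.8 kG average field
```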
Indeed, the possibility that a strong magnetic field can lead to the formation of a radiative core has been discussed by Cox et al. (1981), and this may be the seed for a resurrection of a “shell” dynamo mechanism. ###### Acknowledgements. We would like to thank J. Schmitt for the helpful discussions regarding the choice of the best target for this observing program, and G. Peres and S. Orlando for the illuminating discussions on many details of flare evolution. FR, GM, SS and AM acknowledge the partial support of ASI and MURST. ## Appendix A Physical characteristics of EV Lac EV Lac (Gl 873) is a dMe dwarf, classified as M3.5 (Reid et al. (1995)), at a distance (from the Hipparcos-measured parallax) of $`d=5.05`$ pc. It is considered a single star, with no evidence of companions, and is a slow rotator, with a photometrically determined rotation period of 4.378 d (Pettersen et al. (1992)). The projected equatorial velocity has been determined from Doppler broadening of the spectral lines as $`v\mathrm{sin}i=4.5\pm 0.5`$ km s<sup>-1</sup> (Johns-Krull & Valenti (1996)) and $`v\mathrm{sin}i=6.9\pm 0.8`$ km s<sup>-1</sup> (Delfosse et al. (1998)); this rotation velocity can be reconciled with the observed starspot modulation period only if the inclination is high ($`\approx 60`$ deg). The rotational velocity distribution of M dwarfs (Delfosse et al. (1998)) is characterized by the bulk of them forming a narrow distribution with $`v\mathrm{sin}i\approx 5`$ km s<sup>-1</sup>, and a tail of rare fast rotators with velocities of up to $`\approx 50`$ km s<sup>-1</sup>. EV Lac lies at the upper end of the slow-rotator distribution. The absolute magnitude is $`M_V=11.73`$, which, given a color index $`(R-I)_\mathrm{C}=1.52`$, translates into $`M_{\mathrm{bol}}=9.40`$ (Delfosse et al. (1998)), corresponding to $`L_{\mathrm{bol}}=5.25\times 10^{31}`$ erg s<sup>-1</sup>. Using the mass-luminosity relationship of Baraffe et al. 
(1998) and the $`K`$-band luminosity (to be preferred, given the independence from metallicity of the relationship between mass and $`K`$-band luminosity), $`M_K=6.78`$ (Delfosse et al. (1998)), we derive a mass of $`0.35M_{\odot }`$. No photospheric abundance analyses of EV Lac are known to us, although Fleming et al. (1995) report a near-solar metallicity based on broad-band photometry. The radius of a solar-metallicity $`0.35M_{\odot }`$ dwarf is, from the models of Chabrier & Baraffe (1997), $`R\approx 2.5\times 10^{10}`$ cm, or $`R\approx 0.36R_{\odot }`$ (assuming the star is old enough, given that such a low-mass star contracts gravitationally until it is $`\approx 300`$ Myr old). Stellar structure models show that the radiative core of low-mass stars shrinks with decreasing mass, disappearing completely in mid-M dwarfs, so that late-M dwarfs are expected to be completely convective. Solar-type dynamos are thought to require an interface between the convective envelope and the radiative core (the $`\alpha `$–$`\mathrm{\Omega }`$ shell dynamo model) and are thus not expected to be present in the cooler M dwarfs. Given however that X-ray activity is present and common even in purportedly fully convective late-M dwarfs (Barbera et al. (1993); Schmitt et al. (1995)), a different type of dynamo mechanism must be operating in them; it has recently been suggested that small-scale magnetic fields can be generated in convection zones by a turbulently driven dynamo (Durney et al. (1993)). This mechanism – which could also be at work in higher-mass stars with varying degrees of efficiency, depending on the rotation rate – would therefore provide the only magnetic field generation mechanism in fully convective low-mass dwarfs. The predicted mass at which stars become fully convective depends on the details of the physics adopted in the stellar models. Chabrier & Baraffe (1997) use non-grey atmospheres to put the fully convective limit at $`0.35M_{\odot }`$ (i.e. 
just at the estimated mass of EV Lac), independent of metallicity in the range $`0.01\times Z_{\odot }<Z<Z_{\odot }`$. Different calibrations of the mass-luminosity relationship (see the discussion in Delfosse et al. (1998)) push the fully convective limit toward somewhat higher masses, so that EV Lac is most likely fully convective; its activity is thus likely to be driven purely by a turbulent dynamo, which is not, among other things, expected to show a solar-like cyclic behavior, and which is expected to generate magnetic fields on a smaller spatial scale (related to the scale of the turbulent flow fields) than an $`\alpha `$–$`\mathrm{\Omega }`$ dynamo. The picture is complicated by the fact that strong magnetic fields may influence the stellar interior structure, maintaining a radiative core at masses lower than the theoretical limits for spherically symmetric, non-magnetic stars (Cox et al. (1981)); the magnetic fields of very active, low-mass stars could thus still be (partially) powered by an $`\alpha `$–$`\mathrm{\Omega }`$ shell dynamo. ### A.1 Previous X-ray observations The high activity of EV Lac had been noted well before the advent of high-energy observations, from its large optical and UV flaring rate, with some truly exceptional optical flares observed: Roizman & Shevchenko (1982) report the occurrence of a 6.4 mag $`U`$-band flare, lasting 6.4 hr, with a peak luminosity of $`\approx 10^{32}`$ erg s<sup>-1</sup> and a total energy output of $`\approx 10^{35}`$ erg. EV Lac was first observed as a quiescent soft X-ray source by EXOSAT (Schmitt & Rosso (1985)) – although it had been detected earlier by HEAO-1 during a strong flare with $`\mathrm{log}L_\mathrm{X}=28.7`$ erg s<sup>-1</sup>. It was not observed by the *Einstein* observatory, while it was observed by ROSAT both in the All Sky Survey (RASS – in which a major flare was also detected) and in pointed mode. The RASS quiescent X-ray luminosity was $`\mathrm{log}L_\mathrm{X}=29.08`$ erg s<sup>-1</sup> (Schmitt et al. 
(1995)), corresponding to $`\mathrm{log}(L_\mathrm{X}/L_{\mathrm{bol}})=-2.6`$. It was also the subject of one SAX and several PSPC pointings, analyzed in Sciortino et al. (1999), during which its coronal emission showed both continuous variability by a factor of about 2–3 and an intense flare, with an approximately 10-fold increase of the emission in the PSPC. The HEAO A-1 Sky Survey experiment (Ambruster et al. (1984)) detected two flares from EV Lac, with peak X-ray luminosities (in the 0.5–20 keV band) of $`\mathrm{log}L_\mathrm{X}=29.5`$ erg s<sup>-1</sup> and $`\mathrm{log}L_\mathrm{X}=30.3`$ erg s<sup>-1</sup>. The more energetic of the two flares represented a peak enhancement of $`\approx 50`$ over the quiescent X-ray luminosity ($`L_\mathrm{X}\approx 10^{28.5}`$ erg s<sup>-1</sup>), and its decay $`e`$-folding time was roughly estimated to be of the order of $`\approx 5`$ ks (although the very sparse temporal coverage prevents an accurate determination of the flare’s decay). Ambruster et al. (1984) also estimated the physical parameters of the two flares (*assuming* a peak temperature of $`2\times 10^7`$ K), resulting, for the smaller of the two, in a peak emission measure $`EM\approx 2\times 10^{53}`$ cm<sup>-3</sup>, a density $`n\approx 5\times 10^{11}`$ cm<sup>-3</sup> and a loop length $`L\approx 5\times 10^9`$ cm. The magnetic field strength necessary to confine the plasma was estimated at $`\approx 200`$ G. Only the peak emission measure is reported for the second flare ($`EM\approx 2\times 10^{53}`$ cm<sup>-3</sup>). A small flare was seen in one of the EXOSAT observations, in the LE detector, as discussed in detail by Pallavicini et al. (1990). Its rise time ($`1/e`$ time) was $`\approx 600`$ s, and its decay time was $`\approx 4.5`$ ks. At peak the flare represented an enhancement of only $`\approx 3`$ times over the quiescent X-ray flux, with a peak X-ray luminosity $`L_\mathrm{X}\approx 10^{29}`$ erg s<sup>-1</sup> and a total energy released in the X-rays of $`E\approx 10^{32}`$ erg. 
The lack of energy resolution of the EXOSAT LE detector made it impossible to perform an analysis of the flare’s decay. Schmitt (1994) derived loop parameters for the long RASS flare by fitting the flare decay parameters within the framework of the quasi-static formalism of van den Oord & Mewe (1989). The maximum observed temperature is $`T\approx 30`$ MK, the decay timescale is $`\tau \approx 38`$ ks, and the peak emission measure is $`EM\approx 1.5\times 10^{52}`$ cm<sup>-3</sup>. The loop length derived through the quasi-static analysis is large, at $`L\approx 6\times 10^{11}`$ cm $`\approx 10R_{\odot }`$ (with an inferred flaring volume $`V\approx 3\times 10^{31}`$ cm<sup>3</sup>), and the plasma density is correspondingly low, at $`n\approx 3\times 10^{10}`$ cm<sup>-3</sup>. The total thermal energy was estimated at $`E\approx 9\times 10^{34}`$ erg. Schmitt (1994) also analyzed the EV Lac PSPC flare decay using the two-ribbon model of Kopp & Poletto (1984).
# Dielectric Response Near the Density-Driven Mott Transition in infinite dimensions ## Abstract We study the dielectric response of correlated systems which undergo a Mott transition as a function of band filling within the dynamical mean-field framework. We compute the dielectric figure of merit (DFOM), which is a measure of dielectric efficiency and an important number for potential device applications. We suggest how the DFOM can be optimized in real transition-metal oxides. The structures seen in the computed Faraday rotation are explained on the basis of the underlying local spectral density of the $`d=\mathrm{\infty }`$ Hubbard model. The ac conductivity and dielectric tensor provide valuable information concerning the finite-frequency, finite-temperature charge dynamics of the electronic fluid in a metal. The potential for microwave devices, for example, is enormous, given the progress in the synthesis of strongly correlated materials with electronic properties that are sensitive to external perturbations such as applied electromagnetic fields, pressure (doping), etc. Recent improvements in the theoretical tools at our disposal allow controlled, physically meaningful calculations to be undertaken. These advances have spurred, and will continue to spur, investigations to tap their full technological potential. The dielectric figure of merit (DFOM) reflects the quality of the dielectric response of a material, and is a quantity of interest for potential microwave device applications. It is formally defined as $$DFOM=i\frac{|ϵ_{xy}(\omega )|}{2|Imϵ_{xx}(\omega )|}$$ (1) so that the full dielectric (ac conductivity) tensor is required to access this quantity. It also follows that the DFOM is linked to the magneto-optical response of the material under study. In weakly correlated metals, the details of the ac conductivity tensor, and hence of the dielectric and Hall response, are determined by the vagaries (shape and size) of the Fermi surface. 
That such a connection is untenable for strongly correlated metals has been pointed out by Shastry et al., who show that the Hall constant, for example, is affected by contributions coming from the whole Brillouin zone, and may have nothing to do with Fermi-surface effects. On the other hand, it has been observed that the physics of strongly correlated metals undergoing metal-insulator transitions is understandable in terms of spectral weight transfer over large energy scales. The consequences for the dielectric response and the DFOM have, however, not been studied at all. In this Letter, we address this issue. We are primarily interested in materials like $`La_{1-x}Y_xTiO_3`$, which undergo Mott transitions with doping. We stress that 3D transition-metal oxides are the most interesting candidates for the kind of effects we want to study, as filling-driven insulator-metal transitions are realized in a variety of them over a wide range of parameters. We consider the one-band Hubbard model, $$H=\sum _{ij\sigma }t_{ij}c_{i\sigma }^{\dagger }c_{j\sigma }+U\sum _in_{i\uparrow }n_{i\downarrow }-\mu \sum _{i\sigma }n_{i\sigma }$$ (2) as a prototype model describing the electronic degrees of freedom in TM oxides. To study the 3D case, we employ the $`d=\mathrm{\infty }`$ approximation, which is the best approximation possible at the present time. Since this method has been extensively reviewed, we only summarize the relevant aspects. All transport properties, which follow from the conductivity tensor, are obtained from a $`𝐤`$-independent self-energy in $`d=\mathrm{\infty }`$; the only information about the lattice structure comes from the free band dispersion in the full Green function: $$G(k,\omega )=G(ϵ_k,\omega )=\frac{1}{\omega +\mu -ϵ_k-\mathrm{\Sigma }(\omega )}$$ (3) To solve the model in $`d=\mathrm{\infty }`$ requires a reliable way of solving the single-impurity Anderson model (SIAM) embedded in a dynamical bath described by the hybridization function $`\mathrm{\Delta }(\omega )`$. 
There is an additional condition that completes the self-consistency: $$\int 𝑑ϵ\,G(ϵ,i\omega )\rho _0(ϵ)=\frac{1}{i\omega +\mu -\mathrm{\Delta }(i\omega )-\mathrm{\Sigma }(i\omega )}$$ (4) where $`\rho _0(ϵ)`$ is the free DOS ($`U=0`$). In $`d=\mathrm{\infty }`$, this is sufficient to compute the transport, because the vertex corrections in the Bethe-Salpeter equation for the conductivity vanish identically in this limit. Thus, the conductivity is fully determined by the basic bubble diagram made up of fully interacting local Green functions of the lattice model. The optical conductivity and the Hall conductivity are computable in terms of the full $`d=\mathrm{\infty }`$ Green functions as follows: $$\sigma _{xx}(i\omega )=\frac{1}{i\omega }\int 𝑑ϵ\,\rho _0(ϵ)\sum _{i\nu }G(ϵ,i\nu )G(ϵ,i\nu +i\omega )$$ (5) and the Hall conductivity has been worked out by Lange, so we use the approach developed there. Explicitly, after a somewhat tedious calculation, the imaginary part of $`\sigma _{xy}(\omega )`$ is given by $`\sigma _{xy}^{\prime \prime }(\omega )`$ $`=`$ $`c_{xy}{\displaystyle \int _{-\mathrm{\infty }}^{+\mathrm{\infty }}}𝑑ϵ\,\rho _0(ϵ)ϵ{\displaystyle \int _{-\mathrm{\infty }}^{+\mathrm{\infty }}}𝑑\omega _1𝑑\omega _2A(ϵ,\omega _1)A(ϵ,\omega _2)`$ (7) $`{\displaystyle \frac{1}{\omega }}\left[{\displaystyle \frac{F(ϵ,\omega _1;\omega )-F(ϵ,\omega _2;\omega )}{\omega _1-\omega _2}}+(\omega \to -\omega )\right]`$ where $$F(ϵ,\omega ;\omega _1)=A(ϵ,\omega _1-\omega )[f(\omega _1)-f(\omega _1-\omega )]$$ (8) and $`A(ϵ,\omega )=-\mathrm{Im}[\omega -ϵ-\mathrm{\Sigma }(\omega )]^{-1}/\pi `$ is the single-particle spectral function in $`d=\mathrm{\infty }`$. Knowledge of $`\sigma _{xy}^{\prime \prime }(\omega )`$ permits us to use its analyticity properties to compute its real part via a Kramers-Kronig transform. 
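The Kramers-Kronig step mentioned above can be implemented with a simple principal-value quadrature; offsetting the integration nodes by half a grid step from the evaluation frequency lets the nearly singular contributions on either side of the pole cancel. This is a generic numerical sketch, not the authors' code, tested on a Drude-like spectrum chi''(w) = w/(1 + w^2), whose exact real part is 1/(1 + w^2):

```python
import numpy as np

def kk_real_part(omega, chi_im_vals, grid, dgrid):
    """Real part of a causal response from its imaginary part:
    chi'(w) = (2/pi) P int_0^inf  w' chi''(w') / (w'^2 - w^2) dw'."""
    return (2 / np.pi) * np.sum(grid * chi_im_vals
                                / (grid**2 - omega**2)) * dgrid

dw = 0.01
grid = (np.arange(20000) + 0.5) * dw        # nodes offset from omega = 1.0
chi_im = grid / (1 + grid**2)               # Drude-like test spectrum
print(kk_real_part(1.0, chi_im, grid, dw))  # exact value is 0.5
```

The small residual deviation from 0.5 comes from the finite frequency cutoff of the quadrature.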
The dielectric tensor is determined from $$ϵ_{xx}(\omega )=1+\frac{4\pi }{\omega }i\sigma _{xx}(\omega )$$ (9) and $$ϵ_{xy}(\omega )=\frac{4\pi }{\omega }i\sigma _{xy}(\omega )$$ (10) The paramagnetic Faraday rotation is directly related to $`\sigma _{xy}(\omega )`$ via $$\theta _F(\omega )=C(n)i\frac{\sigma _{xy}^{\prime \prime }(\omega )}{\omega }$$ (11) The DFOM can readily be computed from the dielectric tensor determined above, as can the ac Hall constant and Hall angle, as well as the Raman intensity, which has been discussed in detail elsewhere. Before embarking on our results and their analysis, it is instructive to summarize what is known about the $`d=\mathrm{\infty }`$ Hubbard model. At large $`U/t`$, and away from half-filling ($`n=1`$), the ground state is a paramagnetic Fermi liquid (FL) if one ignores the possibility of symmetry breaking towards antiferromagnetism, as well as disorder effects, which are especially important near $`n=1`$. This can be achieved formally by introducing a next-nearest-neighbor (nnn) hopping, which in $`d=\mathrm{\infty }`$ leaves the free DOS essentially unchanged. This metallic state is characterized by two energy scales: a low-energy coherence scale $`T_{coh}`$, below which local FL behavior sets in, and a scale of $`O(D)`$ ($`D`$ is the free bandwidth) characterizing high-energy, incoherent processes across the remnant of the Mott-Hubbard insulator at $`n=1`$. At $`T<T_{coh}`$, the quenching of the local moments leads to a response characteristic of a FL at small $`\omega \ll 2D`$ (but with the dynamical spectral weight transfer with doping, a signature of correlations), while at higher $`T>T_{coh}`$ the picture is that of carriers scattered by effectively local moments, which makes the system essentially a non-FL (note that a metal with disordered local moments is not a FL). Armed with this information, we are ready to discuss our results. 
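Given σ_xx and σ_xy on a frequency grid, Eqs. 9–10 and the DFOM follow in a few lines. This is our illustrative sketch, not the authors' code; since Eq. 1 carries an overall factor of i, we evaluate the real-valued magnitude |ε_xy|/(2|Im ε_xx|), which is the quantity a figure of merit requires (an interpretation on our part):

```python
import numpy as np

def dielectric_tensor(omega, sigma_xx, sigma_xy):
    """Eqs. 9 and 10 (omega and the conductivities in consistent units)."""
    eps_xx = 1 + 4j * np.pi * sigma_xx / omega
    eps_xy = 4j * np.pi * sigma_xy / omega
    return eps_xx, eps_xy

def dfom(eps_xx, eps_xy):
    """Magnitude of Eq. 1: |eps_xy| / (2 |Im eps_xx|)."""
    return np.abs(eps_xy) / (2 * np.abs(eps_xx.imag))

# Toy numbers, for illustration only
eps_xx, eps_xy = dielectric_tensor(1.0, 1.0 + 0.0j, 0.1 + 0.0j)
print(eps_xx)                 # 1 + 4*pi*1j
print(dfom(eps_xx, eps_xy))   # sigma_xy / (2 sigma_xx) = 0.05 here
```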
We choose a Gaussian unperturbed DOS and $`U/D=3.0`$ to access the strongly correlated FL metallic state off $`n=1`$, ignoring the possible instability to an AF-ordered phase. All calculations are performed at a low temperature, $`T=0.01D`$. Fig. 1 shows the optical conductivity and the longitudinal dielectric constant. $`\sigma _{xx}(\omega )`$ agrees with calculations performed earlier in all the main respects; in particular, it clearly exhibits the low-energy quasicoherent Drude form, the transfer of optical spectral weight from the high-energy, upper-Hubbard-band states to the low-energy, band-like states with increasing hole doping, and the isosbestic point at which all the $`\sigma _{xx}(\omega )`$ curves as a function of filling cross at one point, to within numerical accuracy. It is interesting to point out that such features have also been observed in experimental studies. Correspondingly, Im $`ϵ_{xx}(\omega )`$ also shows the isosbestic point, the explanation for which is identical to that provided recently by us for the case of $`\sigma _{xx}(\omega )`$. In Fig. 2, we show the DFOM obtained using Eq. (1) and the dielectric tensor calculated as above. We are mainly interested in the variation of the DFOM with hole doping, given here by $`\delta =(1-n)`$. This fixes the chemical potential and the FL resonance position, and the iterated perturbation theory (IPT) describes the evolution of spectral features in good agreement with exact-diagonalization studies. In view of the ability of the IPT to reproduce all the qualitative aspects observed in $`\sigma _{xx}(\omega )`$, we believe that it is a good tool in the present case. The DFOM shows a sharp peak at low energy (around $`0.05`$ eV) in the metallic state off half-filling, with a maximum value of order 3 for $`\delta =0.3`$. It also reaches a minimum value at around 1.8 eV for $`\delta =0.1,0.2`$; this is related to the frequency dependence of the optical conductivity tensor with filling. 
These features are understandable in terms of the dynamical transfer of optical spectral weight from the high-energy (upper-Hubbard band) incoherent states to low-energy quasicoherent states upon hole doping the Mott insulator (at $`n=1`$). The sharp peak is related to the fact that the action of the current operator on the lower-Hubbard band states creates well-defined elementary excitations in a strongly correlated Fermi liquid, while the increase of the low-energy weight is understood in terms of the increasing weight of the quasicoherent, itinerant part of the spectrum relative to that of the atomic-like, incoherent part with increasing $`\delta =(1-n)`$. Thus, the absolute DFOM is determined by the outcome of the competition between the itinerant and atomic parts of the spectrum. The above suggests that the DFOM in the IR and the mid-IR can be increased even further by enhancing the weight of the transitions within the lower-Hubbard band and those in the (lower-Hubbard band + central FL peak) manifold, by suitable materials engineering. In reality, a multi-orbital situation would be more favorable, since one might expect increased contributions to the mid-IR part coming from interorbital hopping, as well as from spin-orbit coupling terms present in multi-orbital systems. However, static disorder, e.g., in $`d=\mathrm{\infty }`$, would shift coherent spectral weight to higher energy and destroy low-energy coherence, limiting the DFOM to modest values; this suggests that good sample quality is one of the prerequisites for increasing the DFOM (something that may be hard to achieve in the 3d transition-metal compounds undergoing doping/filling-driven Mott transitions). Next, we describe our results for the paramagnetic Faraday rotation, computed from eqn. (11). $`\theta _F(\omega )`$ shows a monotonic fall-off with $`\omega `$ in the IR and the mid-IR region, before increasing around $`\omega /D\sim 1`$, whereafter it passes through a broad maximum at $`\omega /D\sim 1.4`$.
This is followed by another broad peak around $`\omega /D=3.0`$. The origins of these features are linked to the nature of the transitions reflected in the interacting LDOS of the $`d=\mathrm{\infty }`$ Hubbard model. We assign the central feature to the “quasiparticle peak” near $`\omega =0`$, the maximum around $`\omega /D\sim 1.4`$ to the “$`U/2`$ peak”, corresponding to transitions between the QP peak and the upper- (or lower-) Hubbard bands, and the broad feature at $`\omega /D=3.0`$ to transitions between the lower- and the upper Hubbard bands. The above suggests that strongly correlated metals near the borderline of the filling-driven Mott transition might be good candidates for dielectric device applications. A multi-orbital situation (plus spin-orbit couplings) would be more effective in enhancing the DFOM, while static disorder, which inevitably accompanies doping, would act to limit it. Consideration of the real bandstructure would be desirable; in our approach, this simply entails replacing the free (gaussian) DOS used here by the actual bandstructure DOS as an input into the DMFT calculation. Effects of static disorder will be especially important near the doping-induced Mott transition; these require the consideration of correlations and disorder on an equal footing, and are left for future work. However, one expects that the destruction of low-energy coherent spectral weight which accompanies static disorder will limit the DFOM to more modest values. In conclusion, we have investigated the dielectric figure of merit (DFOM) and the paramagnetic Faraday rotation near the density-driven Mott transition. We have shown how a consistent treatment of the interplay between the atomic and itinerant aspects inherent in strongly correlated systems leads to encouraging values for the DFOM.
In practice, however, the estimate provided here would be an overestimate, since real bandstructure and disorder effects, as well as the multi-orbital character of real $`d`$-band systems, can well change our conclusions quantitatively. We have also provided a simple explanation for the origin of structure in the Faraday rotation; in real materials, the above effects will also affect $`\theta _F(\omega )`$. Nevertheless, we have provided a theoretical framework within which such calculations can be undertaken, and all of the above effects can be included in a suitable extension within the $`d=\mathrm{\infty }`$ framework. ###### Acknowledgements. One of us (MSL) thanks Prof. E. Müller-Hartmann for useful discussions, and SFB341 for financial support. LC acknowledges financial support of Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP).
no-problem/9909/astro-ph9909185.html
# The evolution of groups and clusters ## 1 Introduction Cosmological scenarios with cold dark matter alone cannot explain structure formation on both small and very large scales. Scenarios with a non-zero cosmological constant $`\mathrm{\Lambda }`$ have proved very successful in describing most of the observational data at both low and high redshifts. Moreover, from a recent analysis of 42 high-redshift supernovae Perlmutter et al. (1999) found direct evidence for $`\mathrm{\Omega }_\mathrm{\Lambda }=0.72`$ within a flat cosmology. It is generally believed that the dark matter component of galaxies, the dark matter halos, plays an important role in the formation of galaxies. Properties and evolution of the halos depend on the environment (Avila-Reese et al. 1999, Gottlöber et al. 1999a,b), which implies, in turn, that properties of galaxies can be expected to depend on the cosmological environment too. Many of the dark matter halos are much more extended than galaxies and contain several subhalos. Here we present results of a study of the formation and evolution of these large halos which host groups and clusters of galaxies. ## 2 Dark matter halos in the simulation We simulate the evolution of $`256^3`$ cold dark matter particles in a $`\mathrm{\Lambda }`$CDM model ($`\mathrm{\Omega }_M=1-\mathrm{\Omega }_\mathrm{\Lambda }=0.3`$; $`\sigma _8=1.0`$; $`H_0=70`$ km/s/Mpc). As a compromise, we have chosen a simulation box of $`60h^{-1}\mathrm{Mpc}`$ in order to study the statistical properties of halos in a cosmological environment and also to have sufficient mass resolution (particle mass of $`1.1\times 10^9h^{-1}\mathrm{M}_{\odot }`$). The simulations were done using the Adaptive Refinement Tree (ART) code (Kravtsov, Klypin & Khokhlov 1997). The code used a $`512^3`$ homogeneous grid on the lowest level of resolution and six levels of refinement, each successive refinement level doubling the resolution.
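The quoted particle mass is fixed by the box size, particle number, and $`\mathrm{\Omega }_M`$. A quick arithmetic check (using $`\rho _{crit}2.775\times 10^{11}h^2\mathrm{M}_{\odot }\mathrm{Mpc}^{-3}`$; working throughout in $`h^{-1}`$ units so $`h`$ cancels):

```python
rho_crit = 2.775e11     # critical density, h^2 Msun / Mpc^3
omega_m = 0.3           # matter density parameter
box = 60.0              # box side, h^-1 Mpc
n = 256                 # particles per dimension

# mean matter density times the volume per particle, in h^-1 Msun
m_particle = omega_m * rho_crit * (box / n) ** 3
print(f"{m_particle:.2e}")  # ~1.1e9, matching the quoted particle mass
```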
The sixth refinement level corresponds to a formal dynamical range of 32,000 in high-density regions. Thus, in a 60 $`h^{-1}`$ Mpc box, we can reach a force resolution of $`2h^{-1}`$ kpc. Identification of halos in dense environments and reconstruction of their evolution is a challenge. In order to find halos in the simulation we have developed two algorithms, which we called the hierarchical friends-of-friends (HFOF) and the bound density maxima (BDM) algorithms (Klypin et al. 1999). These algorithms work on a snapshot of the particle distribution. They are able to identify “halos inside of halos”, i.e., stable gravitationally self-bound halos which move inside a larger region of virial overdensity. Therefore, we can find dark matter halos which correspond to galaxies inside groups or clusters, but also small satellites bound to a larger galaxy (like the LMC and the Milky Way or the M51 system). The algorithm identifies halos at different redshifts. In a second step we establish the “ancestor-descendant” relationships for all halos at all times (Gottlöber et al. 1999a,b). The total mass of a halo depends on its assumed radius. However, for satellite halos inside larger bound systems the notion of radius becomes somewhat arbitrary. The formal virial radius of a satellite halo is simply equal to the virial radius of its host halo. We have chosen to define the outer tidal radius of the satellite halos as the radius at which their density profile starts to flatten. We try to avoid the problem of mass determination by assigning not only a mass to a halo, but also finding its maximum “circular velocity” ($`\sqrt{GM/R}`$), $`v_{circ}`$. This is the quantity which is more meaningful observationally. Numerically, $`v_{circ}`$ can be measured more easily and more accurately than mass.
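The circular-velocity proxy is easy to evaluate. A hedged numerical example (the mass and radius below are chosen for illustration only, not taken from the simulation):

```python
import math

G = 4.30091e-6  # Newton's constant in kpc (km/s)^2 / Msun

def v_circ(mass_msun, radius_kpc):
    """Maximum 'circular velocity' sqrt(GM/R) in km/s."""
    return math.sqrt(G * mass_msun / radius_kpc)

# a ~1e12 Msun halo at ~200 kpc sits comfortably above the
# v_circ > 100 km/s selection threshold used for the halo samples
v = v_circ(1.0e12, 200.0)
print(round(v, 1))  # ~146.6 km/s
```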
In order to define a complete halo sample that is not affected by the numerical details of the halo-finding procedure, we have constructed the differential velocity functions at $`z=0`$ for different mass thresholds and maximum radii. We found that the halo samples do not depend on the numerical parameters of the halo finder for halos with $`v_{circ}\genfrac{}{}{0pt}{}{_>}{^{}}100`$ km/s (Gottlöber et al. 1999a). Here we want to study samples of halos at redshifts $`z\le 4`$ with circular velocities $`v_{circ}>100`$ km/s and, for comparison, samples of more massive halos with circular velocities $`v_{circ}>120`$ km/s. Moreover, we include only halos which consist of at least 50 bound particles. This reduces the possible detection of fake halos in the simulation, in particular the detection of very small fake satellite halos. At redshift $`z=0`$ we have detected 7786 halos with $`v_{circ}>100`$ km/s in our simulation, which corresponds to a number density of $`0.036h^3\mathrm{Mpc}^{-3}`$. 4787 of these halos have circular velocities $`v_{circ}>120`$ km/s ($`0.022h^3\mathrm{Mpc}^{-3}`$). These densities roughly agree with the number densities of galaxies with magnitudes $`M_r\genfrac{}{}{0pt}{}{_<}{^{}}-16.3`$ and $`M_r\genfrac{}{}{0pt}{}{_<}{^{}}-18.4`$ estimated from the luminosity function of the Las Campanas redshift survey (Lin et al. 1996), which we have extended to $`-16.3`$. The time evolution of the total number of halos in the box is shown in Fig. 1. At $`z=4`$ we found the same number of halos independent of the threshold of the circular velocity. This means that we have reached here the resolution limit of the simulation. In fact, we had excluded low-mass halos with fewer than 50 particles. ## 3 Environment of Halos To find the cosmological environment of each of these halos at $`z\le 4`$ we ran a friends-of-friends analysis over the dark matter particles with a linking length of 0.2 times the mean interparticle distance.
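The friends-of-friends idea itself is compact enough to sketch. The toy implementation below (a plain O(N²) pair search with union–find, no periodic boundaries; the production analysis is of course far more elaborate) links any two particles closer than $`b`$ times the mean interparticle spacing:

```python
import numpy as np

def fof_groups(pos, box, b=0.2):
    """Toy friends-of-friends: link pairs closer than b * mean spacing.

    pos : (N, 3) array of positions; box : box side; returns group labels.
    """
    n = len(pos)
    link = b * box / n ** (1.0 / 3.0)      # linking length
    parent = list(range(n))

    def find(i):                           # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pos[i] - pos[j]) < link:
                parent[find(i)] = find(j)  # merge the two chains

    return np.array([find(i) for i in range(n)])

# two tight clumps far apart should come out as two distinct groups
rng = np.random.default_rng(0)
clump_a = rng.normal(loc=10.0, scale=0.1, size=(20, 3))
clump_b = rng.normal(loc=50.0, scale=0.1, size=(20, 3))
labels = fof_groups(np.vstack([clump_a, clump_b]), box=60.0)
print(len(set(labels)))  # 2
```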
By this procedure we find clusters of dark matter particles with an overdensity of about 200. At $`z=0`$ the virial overdensity in the $`\mathrm{\Lambda }`$CDM model under consideration is about 330, which corresponds to a linking length of about 0.17. With increasing $`z`$ the virial overdensity decreases and reaches 200 at $`z=1`$. Here we have used the same linking length for all $`z`$. Therefore, the objects which we find at $`z<1`$ are slightly larger than the objects at virial overdensity. We have increased the linking length at $`z=0`$ because, in the vicinity of galaxy clusters, we found halos which had already interacted with the cluster but were at the moment of detection just outside the region of virial overdensity. In a second step we find, for each of the halos at all $`z`$, the cluster of dark matter particles to which the halo belongs. We call the halo a cluster galaxy if the halo belongs to a particle cluster with a total mass larger than $`10^{14}h^{-1}\mathrm{M}_{\odot }`$. We call it an isolated galaxy if only one halo belongs to the overdensity-200 object found by the friends-of-friends algorithm. This definition ensures that the “isolated galaxies” really do not interact with other galaxies. We found a substantial number (about 10%) of doublets. In some cases we found pairs of halos with approximately the same mass, but in most cases these doublets consist of one big halo with a small bound satellite which is inside of, or partly overlaps with, the larger halo. On the one hand, there could be more small satellites which we do not detect due to the limited mass resolution, so that the doublet is in fact a small group. On the other hand, there could really be only one small satellite, so that the object would better be treated as an isolated galaxy. In order to avoid difficult decisions, we have handled the galaxy pairs separately, keeping in mind that some of them could belong to the isolated galaxies whereas others are seeds of a small group.
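As a sanity check, the two (linking length, overdensity) combinations quoted above are consistent with the familiar rough scaling $`\mathrm{\Delta }b^{-3}`$ (a shorter linking length selects a higher overdensity); the scaling is only approximate:

```python
# (linking length, overdensity) pairs quoted in the text
b_virial, delta_virial = 0.17, 330.0   # virial overdensity at z = 0
b_used = 0.2                            # linking length actually used

# approximate friends-of-friends scaling: delta ~ b^-3
delta_predicted = delta_virial * (b_virial / b_used) ** 3
print(round(delta_predicted))  # ~203, close to the quoted overdensity of 200
```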
The remaining halos are called group galaxies. The 7786 halos with $`v_{circ}>100`$ km/s are distributed over 18 clusters, 252 groups, and 373 pairs; 4892 halos are isolated. ## 4 The Evolution of Halos The total number of halos with $`v_{circ}>100`$ km/s increases from 5290 at $`z=4`$ to the maximum value of 11847 at redshift $`z=1.5`$ and then decreases to 7786 at redshift $`z=0`$ (see Fig. 1). Assuming that each of these halos contains a galaxy, we compute the evolution of the fraction of isolated “galaxies” and “galaxies” in clusters, groups, and pairs. The first object of virial mass $`>10^{14}h^{-1}\mathrm{M}_{\odot }`$ forms between $`z=2.5`$ and $`z=2`$. At $`z=2`$ this cluster of $`1.3\times 10^{14}h^{-1}\mathrm{M}_{\odot }`$ already contains 68 satellite halos. The most massive central halo of this cluster has a circular velocity of 760 km/s. The number of cluster galaxies increases up to $`z=0`$, where 868 galaxies have been detected in clusters. At all analyzed time epochs in the simulation, we found that approximately 10% of the galaxies are in pairs. We have detected 113 groups already at $`z=4`$. These groups contain 430 galaxies. The number of group galaxies increases very rapidly and reaches a maximum of 2607 at $`z=1.5`$. Afterwards it decreases to 1280 at $`z=0`$. Finally, the number of isolated galaxies also increases from $`z=4`$ (4194) until $`z=1.5`$ (7854) and then decreases until $`z=0`$ (4892). In Fig. 2 we show the time evolution of the fraction of isolated galaxies and galaxies in clusters, groups, and pairs for dark matter halos with $`v_{circ}>100`$ km/s. The sample of more massive halos ($`v_{circ}>120`$ km/s) shows essentially the same behavior, but the total number of halos is smaller. Now let us discuss the evolution of clusters and groups. To this end, analogous to the chain of progenitors of each halo, we find the progenitors of the clusters and groups. The first cluster has formed before $`z=2`$.
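The $`z=0`$ counts quoted above can be cross-checked for internal consistency: galaxies in clusters, groups, pairs, and isolation must add up to the full sample (each pair contributing two halos):

```python
n_total = 7786       # halos with v_circ > 100 km/s at z = 0
n_cluster = 868      # galaxies in the 18 clusters
n_group = 1280       # galaxies in the 252 groups
n_pairs = 373        # pairs -> 2 galaxies each
n_isolated = 4892

n_accounted = n_cluster + n_group + 2 * n_pairs + n_isolated
print(n_accounted, n_accounted == n_total)  # 7786 True
```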
Due to accretion and merging of groups, the number of clusters increases to 18 at $`z=0`$ (Fig. 3). At $`z=4`$ we found 113 groups and 333 pairs in our simulation. It is remarkable that these numbers do not depend on the chosen threshold of the circular velocity. The numbers of both groups and pairs increase up to $`z=1.5`$, where we have 465 (363) groups and 666 (567) pairs for $`v_{circ}>100`$ km/s ($`v_{circ}>120`$ km/s). At $`z=0`$, 252 (152) groups and 373 (258) pairs remain. The total number of groups and pairs is reduced with an increasing threshold of the circular velocity. In fact, if we took into account only more massive objects, some of the pairs would be identified as isolated galaxies, whereas some of the groups would be identified as pairs or isolated galaxies. However, as we already mentioned, the overall fraction of galaxies in groups and pairs is rather insensitive to this threshold. From Fig. 2, we see that the fraction of galaxies in groups decreases from about 22% at $`z=0.7`$ to about 16% at $`z=0`$. At the same time, the fraction of isolated halos slightly increases. The decreasing number of groups is mostly due to the fact that groups merge into more massive objects. However, we also found a small fraction of groups which merge into isolated halos. In the simulation we found 20 halos with masses larger than $`10^{12}h^{-1}\mathrm{M}_{\odot }`$ whose progenitor at $`z=1`$ was a group with four to seven members. Such a merged group could appear today as an isolated elliptical with a group-like X-ray halo (Mulchaey & Zabludoff 1999; Vikhlinin et al. 1999). ## 5 Conclusions The general trend during the evolution of clustering is the formation of small bound systems of halos at $`z\genfrac{}{}{0pt}{}{_>}{^{}}1.5`$. After $`z=1.5`$ these systems tend to merge and to increase mass by accretion, so that large galaxy clusters grow in the simulation.
The total number of small bound systems and the total number of galaxies in these small systems decrease rapidly after $`z=1.5`$. The fraction of isolated galaxies remains approximately constant after $`z=1`$, whereas the fraction of galaxies in groups decreases. Some of the groups merge after $`z=1`$ to form large isolated galaxies. ###### Acknowledgements. This work was funded by NSF and NASA grants to NMSU. SG acknowledges support from the Deutsche Akademie der Naturforscher Leopoldina with funds from the Bundesministerium für Bildung und Forschung, grant LPD 1996. We acknowledge support by NATO grant CRG 972148.
no-problem/9909/nucl-ex9909015.html
# Elastic Compton scattering from the deuteron and nucleon polarizabilities ## Abstract Cross sections for elastic Compton scattering from the deuteron were measured over the laboratory angles $`\theta _\gamma =`$ 35°–150°. Tagged photons in the laboratory energy range $`E_\gamma =`$ 84–105 MeV were scattered from liquid deuterium and detected in the large-volume Boston University NaI (BUNI) spectrometer. Using the calculations of Levchuk and L’vov, along with the measured differential cross sections, the isospin-averaged nucleon polarizabilities in the deuteron were estimated. A best-fit value of $`(\overline{\alpha }-\overline{\beta })`$=2.6$`\pm `$1.8 was determined, constrained by dispersion sum rules. This is markedly different from the accepted value for the proton of $`(\overline{\alpha }-\overline{\beta })_p`$=10.0$`\pm `$1.5$`\pm `$0.9. Elastic photon scattering from deuterium can, in principle, yield basic information on the substructure of the deuteron and hence the nucleons themselves. Compton scattering from the proton has been used extensively to determine the polarizabilities of the proton (see Ref. and references contained therein). The electric ($`\overline{\alpha }_p`$) and magnetic ($`\overline{\beta }_p`$) polarizabilities constitute the first-order responses of the internal structure of the proton to externally applied electric and magnetic fields. The current status of the proton polarizabilities has been reported in Ref. . $`(\overline{\alpha }-\overline{\beta })_p`$ $`=`$ $`10.0\pm 1.5\pm 0.9,`$ (1) $`(\overline{\alpha }+\overline{\beta })_p`$ $`=`$ $`15.2\pm 2.6\pm 0.2,`$ (2) where the first error is the combined statistical and systematic, and the second is due to the model dependence of the dispersion-relation extraction method. The units of $`\overline{\alpha }`$ and $`\overline{\beta }`$ are $`10^{-4}`$ fm<sup>3</sup>. There is also a dispersion sum rule which relates the sum of the polarizabilities to the nucleon photoabsorption cross section.
The generally accepted result for the Baldin sum rule is $$(\overline{\alpha }+\overline{\beta })_p=14.2\pm 0.5,$$ (3) although recent reevaluations yield $`13.69\pm 0.14`$ and $`14.0\pm (0.3–0.5)`$ . Note that the experimental value (Eq. 2) is in agreement with the sum rule. The polarizabilities, obtained from Eqs. 1 and 3, are $`\overline{\alpha }_p`$ $`=`$ $`12.1\pm 0.8\pm 0.5,`$ (4) $`\overline{\beta }_p`$ $`=`$ $`2.1\mp 0.8\mp 0.5.`$ (5) The status of the neutron polarizabilities is much less satisfactory. The majority of measurements of the electric polarizability of the neutron have been done by low-energy neutron scattering from the Coulomb field of a heavy nucleus. There is considerable disagreement between the two most recent measurements . Schmiedmayer et al. reported a value for the static electric polarizability of $`\alpha _n=12.0\pm 1.5\pm 2.0`$, where the first uncertainty is statistical and the second systematic. The difference between the static ($`\alpha `$) and the Compton ($`\overline{\alpha }`$) polarizability is small for the neutron. These data have been reinterpreted by Enik et al. , and they have suggested that a value of $`\alpha _n=7`$–$`19`$ was more appropriate. In a separate experiment, Koester et al. have reported a value of $`\alpha _n=0\pm 5`$. Clearly, the current experimental value of $`\alpha _n`$ has large uncertainties. Once $`\alpha _n`$ is obtained, $`\beta _n`$ can be determined via the sum-rule relation for the neutron , $$(\overline{\alpha }+\overline{\beta })_n=15.8\pm 0.5.$$ (6) The recent reevaluations of this sum rule yield $`14.40\pm 0.66`$ and $`15.2\pm (0.3–0.5)`$ . An alternate method of measuring the neutron polarizabilities is through the use of the quasi-free Compton scattering reaction $`d(\gamma ,\gamma ^{\prime }n)p`$ in which the scattered photon is detected in coincidence with the recoil neutron. In certain kinematic regions, the proton behaves as a spectator and the scattering is primarily from the neutron.
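Equations (4)–(5) follow from Eqs. (1) and (3) as a simple linear system; combining the two input errors in quadrature also reproduces the quoted $`\pm 0.8`$ (the resulting errors on $`\overline{\alpha }_p`$ and $`\overline{\beta }_p`$ are equal in size and anti-correlated). A quick check:

```python
import math

# inputs (units of 1e-4 fm^3): Eq. (1) difference, Eq. (3) Baldin sum rule
diff, d_diff = 10.0, 1.5
summ, d_summ = 14.2, 0.5

alpha_p = (summ + diff) / 2
beta_p = (summ - diff) / 2
d_pol = math.sqrt(d_summ**2 + d_diff**2) / 2  # quadrature error on each

print(round(alpha_p, 1), round(beta_p, 1), round(d_pol, 1))  # 12.1 2.1 0.8
```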
There has been one measurement reported on this reaction using bremsstrahlung photons with an endpoint of 130 MeV . However, due to poor statistics, the resulting determination of $`\overline{\alpha }_n`$ effectively gives only an upper limit. Experiments have recently been completed at Mainz , LEGS, and SAL, but results have yet to be reported. A third method to determine the polarizability of the neutron is through the elastic Compton scattering reaction $`d(\gamma ,\gamma )d`$. The only reported measurement of this reaction was conducted at Illinois . The low photon energy of this experiment resulted in reduced sensitivity to the polarizabilities and hence large error bars. Since the deuteron amplitude is sensitive to the sum of the proton and neutron polarizabilities, to obtain specific information on the neutron, it is necessary to subtract the proton polarizabilities. More serious concerns are the contributions from meson exchange currents and other nuclear effects. Therefore, the final results for the neutron polarizabilities depend on a model calculation as well as knowledge of the proton polarizabilities. The present measurement was performed at the Saskatchewan Accelerator Laboratory (SAL). The facility houses a 300 MeV linear accelerator (LINAC) that injects electrons into a pulse-stretcher ring (PSR), producing a nearly continuous wave (CW) electron beam. The LINAC and PSR were used in conjunction with a high-resolution, high-rate photon tagger, a cryogenic target system, and a large-volume NaI detector. Complete details of the experiment can be found in Ref. . The $`d(\gamma ,\gamma )d`$ cross section was measured using tagged photons in the energy range 84–105 MeV with a resolution of 0.3–0.4 MeV. An electron beam of 135 MeV and $`\sim `$65% duty factor was incident on a 115 $`\mu `$m aluminum radiator producing bremsstrahlung photons, which were tagged via the standard photon tagging technique using the SAL photon tagger .
The average tagged flux was $`6\times 10^7`$ photons/s integrated over the photon energy range. The tagging efficiency was measured approximately every 8 hours during the experiment by using a lead-glass Čerenkov detector directly in the beam to detect photons in coincidence with electrons in the focal plane. The tagging efficiency was approximately 53%. Photons scattered from the 12.7 cm long, liquid-deuterium target were detected in the large-volume Boston University NaI (BUNI) gamma-ray spectrometer. BUNI is composed of five optically-isolated segments of NaI, each 55.9 cm in length: the core (26.7 cm in diameter) and four quadrants that form an 11.4 cm thick annulus around the core. Since the inelastic contribution to $`d(\gamma ,\gamma ^{\prime })d`$ begins only 2.2 MeV below the elastic peak, it was essential that the photon detector have at least 2% resolution at 100 MeV. The excellent resolution of BUNI is mainly due to the fact that it effectively contains 100% of the electromagnetic showers created by the incident photons. Scattered photons were detected at lab angles of $`35^{\circ }`$, $`60^{\circ }`$, $`90^{\circ }`$, $`120^{\circ }`$, and $`150^{\circ }`$. A zero-degree (or in-beam) calibration of the detector was done once, in the middle of the experiment, in order to obtain both the lineshape of BUNI and an energy calibration for the NaI core. The NaI quadrants were calibrated daily with a radioactive source (Th-C). Figure 1 depicts the BUNI energy spectrum, after randoms have been subtracted, summed over the incident photon energy range. The contribution from empty-target backgrounds has also been subtracted. Each channel in the focal plane of the tagger corresponds to a different incident photon energy and hence a different energy for the Compton-scattered photon. To sum over tagger channels, the detected photon energy in BUNI was shifted to a value corresponding to the maximum incident photon energy of 105 MeV.
The magnitude of the shift was determined by which tagger channel registered the photon. After background subtraction, the elastic peak in the BUNI energy spectrum was integrated over a specific region of interest (ROI), depending on the angle, to obtain the yields. This region was chosen to ensure that no inelastic contributions were contained in the integrated region. The ROI for the $`150^{\circ }`$ photon energy spectrum is shown in Fig. 1 (vertical lines). The inelastic contribution is evident in the energy range 80–90 MeV. Systematic errors associated with the energy calibration were investigated by comparing the chi-square between the data and the in-beam lineshape for different calibrations. At the 95% confidence level, the uncertainties in the cross sections were 1–4%, depending on angle. The detection efficiency for the scattered photons was largely constrained by the small size of the ROI and photon absorption before reaching the detector. Since the ROI (Fig. 1) only selected the peak of the elastic scattering distribution, a significant number of events from the tail of the lineshape were excluded. The ROI efficiency was deduced using the zero-degree Compton spectrum, shifted to the appropriate energy, and represents the response of monochromatic photons in BUNI. Comparisons of measurements and EGS simulations of zero-degree and scattering geometries confirmed that the lineshape of BUNI measured at zero degrees was consistent with the measured lineshape in the scattering geometry within $`\sim `$3%. The ratio of counts inside the ROI to the total counts in the lineshape yielded an efficiency of (62$`\pm `$2)%. The absorption efficiency was broken into two parts: that due to absorption of photons in the target and associated apparatus, and that due to absorption of photons in the materials located between the NaI crystal and the target enclosure. The first absorption factor was obtained with an EGS simulation .
The second was found by integrating the entire zero-degree lineshape from the in-beam calibration and comparing it to the incident photon flux as determined by the photon tagger. The absorption factor was approximately (89$`\pm `$1)%, which gave an overall detection efficiency of (55$`\pm `$2)%, relatively independent ($`<`$1%) of scattering angle. The effect of the high-rate photon flux on the measured yield was investigated. Rate effects were found to be $`<`$2% in all cases . Sources of systematic errors in this experiment included target thickness (2.5%), solid angle (1.6%), detection efficiency (3.6%), incident photon flux (1%), and energy calibration (1–4% depending on angle). Adding contributions in quadrature gives a total systematic error of 6.4% (35°), 4.9% (60°), 4.8% (90°), 4.9% (120°), and 5.2% (150°). The final center-of-mass system (CMS) differential cross sections for the $`d(\gamma ,\gamma )d`$ reaction as a function of CMS scattering angle are displayed in Fig. 2 and are listed in Table I. The data were averaged over the incident photon energy range of 84–105 MeV. The error bars in the figure are the quadratic sum of the statistical and systematic errors. Levchuk and L’vov have calculated the differential cross section for the $`d(\gamma ,\gamma )d`$ reaction in the framework of a diagrammatic approach. The scattering amplitude is expressed in terms of resonance and seagull amplitudes. The resonance amplitudes correspond to two-step processes and include rescattering of the intermediate nucleons. The one- and two-body seagull amplitudes involve a photon being absorbed and emitted at the same instant on the relevant energy scale. The one-body seagull diagrams include the nucleon polarizabilities. The contribution of the various ingredients to the $`d(\gamma ,\gamma )d`$ reaction at 94 MeV is illustrated in Fig. 3. The solid line in Figs.
2 and 3 is the full calculation of Levchuk and L’vov with nominal free values of the isospin-averaged polarizabilities, $`\overline{\alpha }=12.0`$, $`\overline{\beta }=3.0`$, or $`(\overline{\alpha }-\overline{\beta })=9.0`$. By imposing the sum-rule constraints (Eqs. 3 and 6), the difference of the polarizabilities can be taken as a free parameter and fitted to the experimental data. The dashed curve in Fig. 2 corresponds to the best-fit value of $$(\overline{\alpha }-\overline{\beta })=2.6\pm 1.8,$$ (7) which was determined by minimizing the chi-square ($`\chi ^2/N_{d.o.f.}=2.8/4`$) between the calculation and the data. Choosing the sum-rule results of Babusci et al. gives a $`<`$4% increase in $`(\overline{\alpha }-\overline{\beta })`$. The fitted value is substantially different from the nominal free value, and is driven by the back-angle cross sections. The extracted $`(\overline{\alpha }-\overline{\beta })`$ may be interpreted in terms of (1) medium modifications to the nucleon polarizabilities ($`\mathrm{\Delta }\overline{\alpha }`$ and $`\mathrm{\Delta }\overline{\beta }`$), (2) neutron polarizabilities which are quite different from the proton, or (3) missing physics in the calculation used to extract the polarizabilities. It is important to note that the extracted value of $`(\overline{\alpha }-\overline{\beta })`$ is dependent upon the theoretical model used. For a discussion of the model dependence of the existing calculations see Refs. . Medium modifications to the free polarizabilities for Compton scattering from light nuclei have been postulated . Recent measurements on <sup>4</sup>He , <sup>12</sup>C , and <sup>16</sup>O have suggested that changes on the order of $`\mathrm{\Delta }\overline{\beta }`$ = $`-\mathrm{\Delta }\overline{\alpha }`$ = 4–8 were required in order to describe the data.
These differences were driven by the back-angle cross sections which tended to be larger than the theoretical predictions, as is also seen in the current $`d(\gamma ,\gamma )d`$ results. Other measurements, performed at Lund, have reported no modification of the free polarizabilities . Clearly this issue has yet to be resolved. In the current measurement, modifications of $`\mathrm{\Delta }\overline{\beta }\approx 3`$ would be required to explain the data. However, for a lightly bound system like the deuteron, medium modifications might be expected to be small. Assuming that the proton polarizabilities are unmodified in the deuteron, the neutron polarizabilities can be extracted from the fitted $`(\overline{\alpha }-\overline{\beta })`$. Using the fitted value along with Eq. 1 yields $$(\overline{\alpha }-\overline{\beta })_n=-4.8\pm 3.9,$$ (8) which can be used along with Eq. 6 to determine the neutron polarizabilities, $`\overline{\alpha }_n`$ $`=`$ $`5.5\pm 2.0,`$ (9) $`\overline{\beta }_n`$ $`=`$ $`10.3\mp 2.0.`$ (10) The error bars do not include any model dependence introduced by the theoretical calculation and are anti-correlated due to application of the sum rule. These results are surprising, since the neutron values are not expected to be radically different from the proton. Evidence for modification of the backward spin polarizability ($`\gamma _\pi `$) has been reported . The suggested modification would increase the theoretical back-angle cross sections (Fig. 2) and hence increase the extracted $`(\overline{\alpha }-\overline{\beta })`$ by $`\sim `$3. However, a recent measurement of $`d(\gamma ,\gamma ^{\prime }p)n`$ reports no evidence for such a modification of $`\gamma _\pi `$ . There has also been a recent calculation of the $`d(\gamma ,\gamma )d`$ differential cross section within the framework of baryon chiral perturbation theory .
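The arithmetic behind Eqs. (8)–(10) is transparent: the deuteron fit constrains the isospin average, so the neutron difference follows by subtraction, and the neutron sum rule (Eq. 6) then fixes $`\overline{\alpha }_n`$ and $`\overline{\beta }_n`$ separately (note that $`\overline{\alpha }_n-\overline{\beta }_n`$ comes out negative, i.e. $`\overline{\beta }_n>\overline{\alpha }_n`$):

```python
# units of 1e-4 fm^3
avg_diff = 2.6    # fitted isospin-averaged alpha-bar - beta-bar, Eq. (7)
p_diff = 10.0     # proton value, Eq. (1)
n_sum = 15.8      # neutron Baldin sum rule, Eq. (6)

n_diff = 2 * avg_diff - p_diff      # Eq. (8)
alpha_n = (n_sum + n_diff) / 2      # Eq. (9)
beta_n = (n_sum - n_diff) / 2       # Eq. (10)

print(round(n_diff, 1), round(alpha_n, 1), round(beta_n, 1))  # -4.8 5.5 10.3
```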
The results of the calculation of Beane et al., which includes $`O(Q^3)`$ terms and only a partial set of the $`O(Q^4)`$ corrections, are similar in magnitude to those of Levchuk and L’vov. However, Beane et al. warn that their prediction at 95 MeV has considerable uncertainty due to the slow convergence of the series. Understanding what physics may be missing in the theoretical calculations will have to wait for the full $`O(Q^4)`$ treatment. Other theoretical calculations for the $`d(\gamma ,\gamma )d`$ reaction have been reported. The calculations of Karakowski and Miller underpredict the present measurement by a factor of $`\sim 2`$ at backward angles when using the nominal free values of the polarizabilities. Wilbois et al. have reported a calculation at 100 MeV but with polarizabilities set to zero. Finally, Chen et al. have reported results only up to a photon energy of 69 MeV. Forthcoming results on the quasi-free Compton scattering from the deuteron, with detection of the recoiling nucleon, should shed some light on this controversy. By doing the experiments in quasi-free kinematics (in the energy region E<sub>γ</sub>=200–300 MeV and backward angles for the scattered photons) the model dependence is minimized, and the polarizabilities can be extracted separately for the proton and the neutron. The authors would like to thank M.I. Levchuk and A.I. L’vov for supplying the code for their theoretical calculations. This work was supported in part by a grant from the Natural Science and Engineering Research Council of Canada.
## Abstract A novel approach to intrinsic Josephson-detection of far infrared radiation is reported utilizing near-zone field effects at electric contacts on c-axis oriented YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> films. While only a bolometric signal was observed focusing the radiation far off the contacts on c-axis normal films, irradiating the edge of contacts yielded an almost wavelength independent fast signal showing the characteristic intensity dependence of Josephson-detection. The signal is attributed to a c-axis parallel component of the electric radiation field being generated in the near-zone field of diffraction at the metallic contact structures. Key words: near field, far infrared, Josephson effect, YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub>. ## 1 Introduction Thin granular superconducting films of high-T<sub>c</sub> superconductors, biased close to their critical currents, have been shown to be fast and sensitive detectors for the far infrared (FIR) and millimeter wave regime . These films consist of stoichiometric crystallites, each of which is coupled to its neighbors by weak links. The films thus comprise a random array of Josephson junctions, the critical currents of which are depressed by below-gap radiation. Measured responsivities are, not surprisingly, found to vary considerably between samples, since the detectivity relies on the properties of the random array. Inherent Josephson junctions have also been found in small single crystals of Bi- and Tl-based high-$`T_c`$ superconducting materials . In this case, in contrast to granular films, a regular microscopic array of junctions is formed along the c-axis by the quasi two-dimensionality of the material. However, for efficient FIR radiation detection with these small crystallites, a component of the electric field of the radiation must be applied along the c-axis of the superconductor.
This is only realizable by using delicate antennas to contact a small crystal with $`\mu `$m dimensions. A considerable improvement in FIR Josephson detection, including difference frequency mixing, has been achieved by using thin film structures grown with a controllable misalignment between the c-axis and the substrate surface normal. The coherent growth of such films on appropriately oriented substrates has been established by the observation of a tilt-angle-dependent lateral thermoelectric effect in the normal state . In such films, normal-incidence irradiation applies an electric field along the c-axis. Here we report on a new approach to FIR-Josephson detection using c-axis oriented epitaxially grown YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> films, which are of much better quality than those prepared with a tilt angle to the c-axis. The necessary component of the electric field of the FIR radiation normal to the film surface is achieved by making use of the optical near-zone field effect of diffraction on suitable electrode structures on the sample surface. Similar diffraction-generated near-zone field enhancement of electron tunneling has previously been observed applying high-power FIR radiation on n-GaAs tunnel Schottky diodes and between metallic contacts and $`\delta `$-doped 2DEG in n-GaAs 20 nm below the surface. ## 2 Experimental Epitaxial YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub>-layers were prepared by UV-laser evaporation on (100)LaAlO<sub>3</sub>-substrates. The thickness of the films was 300 nm. The samples were oxygen depleted by thermally forced oxygen diffusion in Ar atmosphere at 600 K for several hours. In Fig. 1 a plot of resistance as a function of temperature is shown for the samples used here, with T<sub>c</sub> around T = 80 K. Due to oxygen deficiency, the samples show semiconductor-like behavior below and above T<sub>c</sub>: the slope of the resistance versus temperature is negative away from T<sub>c</sub>.
This makes it easy to distinguish between the Josephson and bolometric signals, as they differ in polarity. On top of the films, diffracting electrode structures were deposited by sputtering large non-transparent parallel Au stripe contacts with spacing widths around 100 $`\mu `$m, i.e. in the range of the employed FIR wavelengths. In addition, stripe contact pairs separated by a large distance of 3 mm were used, allowing irradiation of one contact only. To allow polarization-independent diffraction measurements at contact edges, silver paint dots of 1 mm diameter were applied on the edge of one metal contact of the 3 mm slit sample, partly covering the plain superconductor surface (see Fig. 4). The rough boundary of the silver paint spot yields diffraction for all polarizations of the radiation field. Measurements of the photoresponse were carried out using a TEA-CO<sub>2</sub> laser pumped molecular FIR laser with NH<sub>3</sub> as active gas, yielding laser lines of $`\lambda `$ = 76, 90, 148 and 280 $`\mu `$m with a peak intensity up to 2 MW/cm<sup>2</sup> in 100 ns pulses. The intensity of the laser pulses was controlled by calibrated attenuators and monitored using a fast photon drag p-Ge detector <sup>1</sup><sup>1</sup>1ARTAS GmbH, model PD5F. The superconducting samples were biased by a fast constant current source with bias currents up to 100 mA, and the voltage across the sample in response to FIR laser pulses was recorded with a digital storage oscilloscope. The measurements were carried out in a temperature-controlled cryostat with optical access in the range 25 to 60 K.
The lower left plate shows a signal trace obtained when the laser was focused on the edge of the silver paint contact, whereas the lower right plate shows the signal observed when the laser was focused on the superconductor away from the contacts. In the first case we see a fast positive signal ($`\tau <10`$ ns), synchronous with the leading edge of the reference detector signal, and a slower negative signal ($`\tau \sim 100`$ ns). When solely the superconductor was irradiated, only the slow negative signal is found. Thus, the slow negative signal can be identified with a bolometric effect, whereas the fast positive signal can be attributed to the Josephson effect. In the lower left signal trace the Josephson signal is truncated by the bolometric signal of opposite sign. Fig. 3 shows the intensity dependence of the signal voltage for different wavelengths, demonstrating the typical behavior of Josephson response following power laws. The left and right plates show the response irradiating one contact edge of a 3 mm slit sample and a 100 $`\mu `$m slit, respectively. In both cases, at low intensities an almost wavelength independent power law is observed, which crosses over at high intensities into a signal $`\propto I^{\frac{1}{2}}`$, where $`I`$ is the intensity. The crossover occurs at practically the same intensity for all wavelengths, except for the 100 $`\mu `$m slit sample at the $`\lambda =90.5\mu `$m wavelength, which is almost equal to the contact distance. In Fig. 4 the signal voltage is shown as a function of the spatial location of the laser focus scanned across the sample surface perpendicular to the silver paint contact. The peak signal is plotted as a function of the center of the focal spot with respect to the contact geometry. The spatial signal distribution corresponds to the intensity profiles of the laser focus as obtained by a high resolution pyroelectric camera<sup>2</sup><sup>2</sup>2Spiricon, Pyrocam I.
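Power-law branches like those in Fig. 3 are conventionally quantified as slopes on a log–log plot. A minimal sketch with synthetic data — the prefactor and intensity range below are illustrative assumptions, not measured values:

```python
import numpy as np

def powerlaw_exponent(intensity, signal):
    """Least-squares slope of log(signal) versus log(intensity)."""
    slope, _ = np.polyfit(np.log(intensity), np.log(signal), 1)
    return slope

# Synthetic high-intensity branch: signal proportional to I**0.5.
I = np.logspace(0, 3, 30)   # arbitrary intensity units (assumed range)
V = 0.7 * np.sqrt(I)        # illustrative prefactor
exponent = powerlaw_exponent(I, V)   # recovers the 1/2 exponent
```

Applied separately below and above the crossover, two such fits give the low-intensity exponent and the $`I^{1/2}`$ branch.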
A signal is observed only in the case of incidence onto the silver paint contact edge and vanishes if only the plain YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> surface is irradiated. This observation gives evidence that the signal must be due to a component of the electric field of the radiation normal to the surface of the c-axis oriented sample, modulating the critical current of the intrinsic Josephson junctions. Such a longitudinal electric field component may be generated in the near-zone field of diffraction at the silver paint contact. The formation of effective intrinsic Josephson junctions along the c-axis of YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub>-layers depends on the oxygen deficiency $`\delta `$. A particular $`\delta `$ can be achieved by thermally forced oxygen diffusion in and out of the sample along the ab-crystal planes. Due to our sample geometry, with large contact surfaces and its c-axis orientation, this diffusion is locally inhibited, leading to an inhomogeneous oxygen distribution and therefore inhomogeneous superconducting characteristics across the sample, such as different critical temperatures and currents, as well as a semiconductor-like residual resistance in the superconducting state. Electrical measurements also indicate an inhomogeneous current distribution between the contacts, showing several distinct current paths. ## 4 Conclusions In c-axis oriented, thin, oxygen-depleted YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub>-films a fast response to far infrared laser radiation has been observed only if diffracting metallic structures on top of the film have been irradiated. It is therefore concluded that the signal is due to suppression of the critical current of the intrinsic Josephson coupling between adjacent superconducting layers. This suppression is caused by an electric field component of the high frequency radiation which is normal to the plane of the superconducting film and is generated in the near-zone field of diffraction.
## 5 Acknowledgments Financial support by the Deutsche Forschungsgemeinschaft is gratefully acknowledged. We thank H. Lengfellner for advice in sample preparation and A. Ya. Shulman, IRE Moscow, for helpful discussions.
# On the Translation Invariance of Wavelet Subspaces ## 1. Introduction A wavelet $`\psi \in L^2(\mathbb{R})`$ is a complete wandering vector for the unitary system $`\{D^nT^l:n,l\in \mathbb{Z}\}`$, i.e. the collection $`\{D^nT^l\psi :n,l\in \mathbb{Z}\}`$ is an orthonormal basis for $`L^2(\mathbb{R})`$, where $`D`$, $`T`$ are defined on $`L^2(\mathbb{R})`$ as: $`Df(x)=\sqrt{2}f(2x)`$ and $`Tf(x)=f(x-1)`$. Every wavelet can be associated with a *Generalized Multiresolution Analysis*, or GMRA (see ). Indeed, define the subspaces $`V_j=\overline{span}\{D^nT^l\psi :n<j,l\in \mathbb{Z}\}`$; then it is routine to verify that these subspaces satisfy the following four conditions: 1. $`V_j\subset V_{j+1}`$, 2. $`DV_j=V_{j+1}`$, 3. $`\bigcap _jV_j=\{0\}`$ and $`\bigcup _jV_j`$ has dense span in $`L^2(\mathbb{R})`$, 4. $`V_0`$ is invariant under $`T`$. We shall call $`V_0`$ the *core space* for $`\psi `$. Item 4 is of interest since the core space is invariant under translations by the group $`\mathbb{Z}`$. A natural question is: are there other groups of translations under which $`V_0`$ is invariant? This paper will answer this question by looking at groups of translations by dyadic rationals. Denote by $`T_\alpha `$ the unitary operator $`T_\alpha f(x)=f(x-\alpha )`$. $`T`$ is to be understood as $`T_1`$. Note that $`\widehat{T_\alpha }=M_{e^{-i\alpha \xi }}`$. In this paper, we shall consider the groups of translations $`𝒢_n=\{T_{\frac{m}{2^n}}:m\in \mathbb{Z}\}`$, and the group $`𝒢_{\mathrm{}}=\{T_\alpha :\alpha \in \mathbb{R}\}`$. Denote by $`𝒲_n`$ the collection of all wavelets whose core space is invariant under $`𝒢_n`$. Note that these collections are nested: $$𝒲_0\supseteq 𝒲_1\supseteq 𝒲_2\supseteq \mathrm{}\supseteq 𝒲_n\supseteq 𝒲_{n+1}\supseteq \mathrm{}\supseteq 𝒲_{\mathrm{}}$$ We can then define an equivalence relation whose equivalence classes are given by $`ℰ_n=𝒲_n\backslash 𝒲_{n+1}`$, with $`ℰ_{\mathrm{}}=𝒲_{\mathrm{}}`$. Hence, $`ℰ_n`$ is the collection of all wavelets such that $`V_0`$ is invariant under $`𝒢_n`$ but not $`𝒢_{n+1}`$. The goal of this paper is to characterize these equivalence classes, while showing that several of them are not empty.
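The fact, noted above, that $`\widehat{T_\alpha }`$ acts as multiplication by a unimodular exponential has a discrete analogue that is easy to check numerically. A small sketch, with np.roll standing in for $`T`$ and the phase convention fixed by numpy's FFT:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 128, 5
x = rng.standard_normal(N)

# Discrete shift theorem: (T_m x)^hat[k] = exp(-2*pi*i*k*m/N) * xhat[k],
# where np.roll(x, m) realizes the shift x[n] -> x[n - m mod N].
lhs = np.fft.fft(np.roll(x, m))
k = np.arange(N)
rhs = np.exp(-2j * np.pi * k * m / N) * np.fft.fft(x)

assert np.allclose(lhs, rhs)
```

The same phase factor, with $`m`$ a half-integer multiple of the grid spacing, is the discrete counterpart of the dyadic translations $`T_{m/2^n}`$ studied below.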
In general, $`V_0`$ can be quite complicated in structure. Indeed, it may not even be generated by translations of a finite number of functions. Hence, we wish to restrict our analysis to $`W_0`$. Recall that $`W_j`$ is defined by $`V_{j+1}=V_jW_j`$. Clearly, $`W_j=\overline{span}\{D^jT^l\psi :l\}`$. ###### Lemma 1. Let $`r<n`$ be integers, and let $`p=nr`$. The space $`V_r`$ (resp. $`W_r`$) is invariant under $`𝒢_k`$ if and only if the space $`V_n`$ (resp. $`W_n`$) is invariant under $`𝒢_{k+p}`$. ###### Proof. By definition, $`fV_r`$ if and only if $`D^pfV_n`$. Suppose that $`gV_n`$ and define $`fV_r`$ such that $`D^pf=g`$. Consider the following commutation relation: $`D^pT_{\frac{m}{2^k}}f(x)`$ $`=2^{\frac{p}{2}}T_{\frac{m}{2^k}}f(2^px)`$ $`=2^{\frac{p}{2}}f(2^px+{\displaystyle \frac{m}{2^k}})`$ $`=D^pf(x+{\displaystyle \frac{m}{2^{k+p}}})`$ $`=T_{\frac{m}{2^{k+p}}}D^pf(x).`$ This calculation establishes the statement. ∎ By lemma 1, another way to describe $`_n`$ is that $`\psi _n`$ if $`n`$ is the largest integer such that $`V_n`$ is invariant under integral translations. If $`\psi _n`$, we shall say $`\psi `$ has the *translation invariance of order n* property. ###### Theorem 2. The core space $`V_0`$ for $`\psi `$ is invariant under the action of $`𝒢_n`$ if and only if $`W_0`$ is invariant under the action of $`𝒢_n`$. ###### If. Suppose that $`W_0`$ is invariant under $`𝒢_n`$. Then, by lemma 1, for $`k>0`$, $`W_k`$ is invariant under $`𝒢_{n+k}`$, whence $`V_0^{}=_{k=0}^{\mathrm{}}W_k`$ is invariant under $`𝒢_n`$. If follows that $`V_0`$ is invariant under $`𝒢_n`$. *Only If.* Suppose $`V_0`$ is invariant under $`𝒢_n`$. Then, again by lemma 1, $`V_1`$ is invariant under $`𝒢_{n+1}`$, and hence $`𝒢_n`$. Since $`V_1=V_0W_0`$, it follows that $`W_0`$ is also invariant under $`𝒢_n`$. 
∎ For the purposes of this paper, we shall say that a set $`E`$ is *partially self-similar* with respect to $`\alpha `$ if there exists a set $`F`$ of non-zero measure such that both $`F`$ and $`F+\alpha `$ are subsets of $`E`$. Additionally, if $`G,H`$ are two subsets of $`\mathbb{R}`$, we shall say that $`G`$ is $`2\pi `$ translation congruent to $`H`$ if there exists a measurable partition $`G_n`$ of $`G`$ such that the collection $`\{G_n+2n\pi :n\in \mathbb{Z}\}`$ forms a partition of $`H`$, modulo sets of measure zero. The letter $`\lambda `$ will denote Lebesgue measure. Define a mapping $`\tau :\mathbb{R}\to [0,2\pi )`$ such that $`\tau (x)-x=2\pi k`$ for some integer $`k`$. ## 2. A Characterization of $`ℰ_{\mathrm{}}`$ Recall that a wavelet set is a set $`W`$ such that the function $`\psi `$ defined by $`\widehat{\psi }=\frac{1}{\sqrt{2\pi }}\chi _W`$ is a wavelet. Such a wavelet $`\psi `$ is called a Minimally Supported Frequency (MSF) wavelet. The following theorem reveals some structure of wavelet sets. ###### Theorem 3. Let $`f\in L^2(\mathbb{R})`$, and let $`E=supp(f)`$. Then $`\{e^{inx}f(x):n\in \mathbb{Z}\}`$ is an orthonormal basis for $`L^2(E)`$ if and only if the following two conditions hold: 1. supp(f) is $`2\pi `$ translation congruent to $`[0,2\pi )`$, 2. $`|f(x)|=\frac{1}{\sqrt{2\pi }}`$ for almost every $`x`$. This next theorem completely characterizes wavelet sets; its proof uses the previous theorem. ###### Theorem 4. Let $`W\subset \mathbb{R}`$. Then $`W`$ is a wavelet set if and only if the following two conditions hold: 1. $`W`$ is $`2\pi `$ translation congruent to $`[0,2\pi )`$, 2. $`\bigcup _j2^jW=\mathbb{R}`$ modulo null sets. The proof of both theorems can be found in , chapter 4. ###### Theorem 5. Let $`\psi `$ be a wavelet. Then, the following are equivalent: 1. $`\psi `$ is a MSF wavelet, 2. the subspace $`V_0`$ is invariant under translations by all real numbers, 3. the subspaces $`V_j`$ of the corresponding GMRA are invariant under integral translations. 4.
the subspaces $`W_j`$ of the corresponding GMRA are invariant under integral translations. ###### Proof. i) $``$ ii). If $`\psi `$ is a MSF wavelet with wavelet set $`W`$, then by theorem 3, $`\widehat{W_0}=L^2(W)`$. Clearly, $`\alpha `$, $`W_0`$ is invariant under $`T_\alpha `$ since $`\widehat{W_0}`$ is invariant under multiplication by $`e^{i\alpha }`$. It follows by theorem 2 that $`V_0`$ is invariant under all translations. ii) $``$ iii). Since $`V_0`$ is invariant under $`𝒢_n`$ for all $`n0`$, by lemma 1, $`V_n`$ is invariant under $`𝒢_0`$. iii) $``$ iv). By definition, $`V_{j+1}=V_jW_j`$. If both $`V_{j+1}`$ and $`V_j`$ are invariant under integral translations, it follows immediately that $`W_j`$ is also invariant under integral translations. iv) $``$ i). Let $`𝒞`$ be the collection of all operators for which $`W_0`$ is invariant. An easy calculation shows that $`𝒞`$ is WOT (weak operator topology) closed. If $`W_j`$ is invariant under integral translations, then again by lemma 1, $`W_0`$ is invariant under $`𝒢_j`$ for all $`j`$. Since $`_{n0}𝒢_n`$ is dense in $`𝒢_{\mathrm{}}`$, in the strong operator topology, it follows that $`W_0`$ is invariant under $`𝒢_{\mathrm{}}`$. If we take the Fourier transform, then we get that $`\widehat{W_0}`$ is invariant under multiplication by $`e^{i\alpha \xi }`$. The linear span of these operators are dense in the collection $`\{M_h:hL^{\mathrm{}}()\}`$ with respect to the WOT. It follows that $`\widehat{W_0}`$ is invariant under multiplication by any $`L^{\mathrm{}}()`$ function. Next, we wish to show that $`\widehat{W_0}=L^2(E)`$, where $`E=supp(\widehat{\psi })`$. First note that since $`\{e^{in\xi }\widehat{\psi }(\xi )\}`$ forms an orthonormal basis for $`\widehat{W_0}`$, $`\widehat{\psi }(\xi )`$ has maximal support in the sense that if $`\widehat{f}\widehat{W_0}`$, then the support of $`\widehat{f}`$ is contained in the support of $`\widehat{\psi }`$. This immediately implies that $`\widehat{W_0}L^2(E)`$. 
Let $`g(\xi )`$ be a compactly supported simple function, whose support $`F`$ is contained in $`E`$. Define $`E_n=\{\xi :\frac{1}{n1}\widehat{\psi }(\xi )>\frac{1}{n}\}`$, and define $`F_n=FE_n`$. Since $`g`$ is a simple function, it is uniformly bounded by some constant $`M`$. Let $`ϵ>0`$ be given. Choose an $`N`$ such that $`\lambda (_{n>N}F_n)<\frac{ϵ}{M}`$, and define $`h_0`$ to be $`\frac{g}{\widehat{\psi }}\chi _{_{nN}F_n}`$. Then, $`h_0(\xi )\widehat{\psi }(\xi )=g(\xi )`$ on $`_{nN}F_n`$, so that $`h_0\widehat{\psi }g<ϵ`$. Since $`\widehat{W_0}`$ is closed, $`g\widehat{W_0}`$; furthermore all such $`g`$’s are dense in $`L^2(E)`$, whence $`L^2(E)\widehat{W_0}`$. Since $`W_jW_k`$, $`2^jEE`$ is a set of measure zero, and since $`W_j`$ is dense in $`L^2()`$, it follows that $`_j2^jE=`$. Furthermore, by theorem 3, $`E`$ is $`2\pi `$ translation congruent to $`[0,2\pi )`$, hence, by theorem 4, $`E`$ is a wavelet set, and $`\psi `$ is a MSF wavelet. ∎ ###### Corollary 6. The equivalence class $`_{\mathrm{}}`$ can be characterized in the following two ways: 1. $`_{\mathrm{}}=_{n=0}^{\mathrm{}}_n`$, 2. $`_{\mathrm{}}`$ is precisely the collection of all MSF wavelets. ###### Proof. By theorem 5, $`V_0`$ is invariant under $`𝒢_n`$ for all $`n`$ if and only if $`V_0`$ is invariant under translations by all real numbers. This is equivalent to $`\psi `$ being a MSF wavelet. ∎ ## 3. A Characterization of $`_n`$ Suppose $`\psi `$ is a wavelet that is in $`_1`$. If $`T_{\frac{m}{2}}fW_0`$, then by taking the Fourier Transform, we have $`e^{i\frac{m}{2}}\widehat{f}\widehat{W_0}`$, and vice versa, so $`W_0`$ is invariant under translations by half integers if and only if $`\widehat{W_0}`$ is invariant under multiplication by $`e^{i\frac{m}{2}}`$. Because of this, we shall proceed with the analysis in the frequency domain. 
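As a concrete example to keep in mind for the frequency-domain analysis, the two wavelet-set conditions of Theorem 4 can be checked exactly for the classical Shannon wavelet set W = [−2π, −π) ∪ [π, 2π) — a standard example supplied here for illustration, not one taken from the text. Working in units of π keeps the arithmetic exact:

```python
from fractions import Fraction as F

# Shannon wavelet set in units of pi: W = [-2, -1) U [1, 2).
# (Standard illustrative example; it does not appear in the text.)
W = [(F(-2), F(-1)), (F(1), F(2))]

# Condition 1 of Theorem 4: shifting the negative piece by 2*pi
# (here +2) makes the pieces tile [0, 2*pi) exactly.
pieces = sorted([(a + 2, b + 2) if a < 0 else (a, b) for (a, b) in W])
assert pieces == [(F(0), F(1)), (F(1), F(2))]

# Condition 2: the dyadic dilates 2^j W cover R modulo null sets;
# spot-check that sample points each land in some dilate.
def in_W(t):
    return any(a <= t < b for (a, b) in W)

def covered(t):
    return any(in_W(t / F(2) ** j) for j in range(-40, 41))

assert all(covered(t) for t in [F(1, 3), F(7, 2), F(-5, 8), F(100)])
```

The corresponding MSF wavelet, $`\widehat{\psi }=\frac{1}{\sqrt{2\pi }}\chi _W`$, is then an example of the class characterized in Corollary 6.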
If $`fW_0`$, then we can write $`f=_kc_kT^k\psi `$, so taking the Fourier transform of both sides yields $`\widehat{f}=h\widehat{\psi }`$ for some $`hL^2([0,2\pi ))`$. Hence, we can describe $`\widehat{W_0}`$ by $`\{h(\xi )\widehat{\psi }(\xi ):hL^2([0,2\pi ))\}`$. Suppose that $`\xi E=supp(\widehat{\psi })`$. If $`\widehat{W_0}`$ is invariant under multiplication by $`e^{i\frac{m}{2}\xi }`$, then for $`m=1`$, $$e^{i\frac{1}{2}\xi }h(\xi )\widehat{\psi }(\xi )=g(\xi )\widehat{\psi }(\xi )$$ for some $`gL^2([0,2\pi ))`$. Note that if $`\xi supp(\widehat{\psi })`$, then $`e^{i\frac{1}{2}\xi }h(\xi )=g(\xi )`$. Let $`\xi supp(\widehat{\psi })`$ and let $`k`$ be an odd integer. Then, $`g(\xi )\widehat{\psi }(\xi +2k\pi )`$ $`=g(\xi +2k\pi )\widehat{\psi }(\xi +2k\pi )`$ $`=e^{i(\frac{1}{2})(\xi +2k\pi )}h(\xi +2k\pi )\widehat{\psi }(\xi +2k\pi )`$ $`=e^{i\frac{1}{2}\xi }h(\xi )\widehat{\psi }(\xi +2k\pi )`$ $`=g(\xi )\widehat{\psi }(\xi +2k\pi )`$ This calculation shows that $`\widehat{\psi }`$ cannot have both $`\xi `$ and $`\xi +2k\pi `$ in its support. We have established the first characterization theorem. ###### Theorem 7. Let $`\psi `$ be a wavelet. Then $`\psi _1`$ only if $`E=supp(\widehat{\psi })`$ is not partially self similar with respect to any odd multiple of $`2\pi `$. ###### Corollary 8. If $`supp(\widehat{\psi })=`$, then $`\psi _0`$. ###### Corollary 9. If $`\psi `$ is compactly supported, then $`\psi _0`$. It is interesting to note that most of the wavelets used in practice have this property. It is unclear at this point if this has a meaningful interpretation from a numerical analysis point of view. Theorem 7 extends to $`_n`$ in the following natural way. ###### Theorem 10. Let $`\psi `$ be a wavelet. Then $`\psi _n`$ only if the support of $`\widehat{\psi }`$ is not partially self similar with respect to any odd multiple of $`2^j\pi `$ for all $`j=1,2,\mathrm{},n`$. ###### Proof. Let $`\psi _n`$. 
Hence, $$e^{i\frac{1}{2^n}\xi }h(\xi )\widehat{\psi }(\xi )=g(\xi )\widehat{\psi }(\xi )$$ for some $`gL^2([0,2\pi ))`$. Let $`1jn`$, and let $`k`$ be an odd integer. Then, by a similar computation, $`g(\xi )\widehat{\psi }(\xi +2^jk\pi )`$ $`=g(\xi +2^jk\pi )\widehat{\psi }(\xi +2k\pi )`$ $`=e^{i(\frac{1}{2^n})(\xi +2^jk\pi )}h(\xi +2^jk\pi )\widehat{\psi }(\xi +2^jk\pi )`$ $`=e^{i\frac{k}{2^{nj}}\pi }e^{i\frac{1}{2^n}\xi }h(\xi )\widehat{\psi }(\xi +2^jk\pi )`$ $`=e^{i\frac{k}{2^{nj}}\pi }g(\xi )\widehat{\psi }(\xi +2^jk\pi )`$ as above. ∎ We have now established necessary conditions for wavelets to be in the equivalence classes $`_k`$ for $`k`$ not equal to 1 or $`\mathrm{}`$. This does not shed light onto whether such wavelets exist. Fortunately, to aid in the search, the converse of theorem 10 also holds. ###### Theorem 11. Let $`\psi `$ be a wavelet and let $`E=supp(\widehat{\psi })`$ be such that it is not partially self similar with respect to any odd multiple of $`2^j\pi `$ for $`j=1,2,\mathrm{},n`$. Then $`\psi _n`$. ###### Proof. It suffices to show that $$e^{i\frac{1}{2^n}\xi }\widehat{\psi }(\xi )=g(\xi )\widehat{\psi }(\xi )$$ for some $`gL^2([0,2\pi ))`$. Let $`FE`$ be such that $`\tau :F[0,2\pi )`$ is a bijection. (It can be easily shown that $`\tau :E[0,2\pi )`$ is a surjection.) The injective property of $`\tau `$ can be assured in the following manner: for each $`\xi [0,2\pi )`$, define the set $`Z_\xi =\{m_\xi :\xi +2m_\xi \pi E\}`$, then for $`\xi `$ choose $`k_\xi `$ to be 0 if $`\xi E`$, if not, choose $`k_\xi =min\{m>0:minZ_\xi \}`$, else choose $`k_\xi =max\{m<0:mZ_\xi \}`$. Let $`F=\{\xi +2k_\xi \pi :\xi [0,2\pi )\}`$. Note that by construction, $`F`$ is $`2\pi `$ translation congruent to $`[0,2\pi )`$. Hence, $$e^{i\frac{1}{2^n}\xi }\chi _F(\xi )=g(\xi )$$ where $`g(\xi )L^2(F)`$ and is $`2\pi `$ periodic. 
Thus, for $`\xi F`$, $$e^{i\frac{1}{2^n}\xi }\widehat{\psi }(\xi )=g(\xi )\widehat{\psi }(\xi ).$$ For almost any $`\xi EF`$, there exists a $`\xi ^{}F`$ and an integer $`l_\xi `$ such that $`\xi \xi ^{}=2l_\xi \pi `$. Moreover, by hypothesis, $`l_\xi `$ is an even multiple of $`2^n`$, since $`E`$ is not partially self similar with respect to any odd multiple of $`2^j\pi `$. Since $`e^{i\frac{1}{2^n}\xi }`$ is $`2^n\pi `$ periodic, we have that for $`\xi EF`$, $`e^{i\frac{1}{2^n}\xi }\widehat{\psi }(\xi )`$ $`=e^{i\frac{1}{2^n}(\xi ^{}+2l_\xi \pi )}\widehat{\psi }(\xi ^{}+2l\xi \pi )`$ $`=e^{i\frac{1}{2^n}\xi ^{}}\widehat{\psi }(\xi ^{}+2l\xi \pi )`$ $`=g(\xi ^{})\widehat{\psi }(\xi ^{}+2l\xi \pi )`$ $`=g(\xi )\widehat{\psi }(\xi ).`$ This completes the proof. ∎ We have established the following characterization of the $`_n`$’s. ###### Corollary 12. The equivalence class $`_n`$ consists of all wavelets $`\psi `$ such that the support of $`\widehat{\psi }`$ is not partially self similar with respect to any odd multiples of $`2^k\pi `$, for $`k=1,2,\mathrm{},n`$ but is partially self similar with respect to some odd multiple of $`2^{n+1}\pi `$. ## 4. Examples In this section, we will present examples of wavelets that are in the first four equivalence classes, with the last being in $`_0`$ but it is not an MRA wavelet, and hence cannot be compactly supported. The tool used to generate these wavelets is operator interpolation. Let $`\psi _{W_1}`$ and $`\psi _{W_2}`$ be MSF wavelets, with corresponding wavelet sets $`W_1`$ and $`W_2`$, respectively. By theorem 4, $`W_1`$ is $`2\pi `$ translation congruent to $`W_2`$. If $`\sigma :W_1W_2`$ is effected by this translation congruence, then $`\sigma `$ can be extended to a measurable bijection of $``$ by defining $`\sigma (x)=2^n\sigma (2^nx)`$ where $`n`$ is such that $`2^nxW_1`$. If $`\sigma `$ is *involutive*, i.e. 
$`\sigma ^2`$ is the identity, and if $`h_1`$ and $`h_2`$ are measurable, essentially bounded, 2-dilation periodic functions (i.e. $`h_1(2x)=h_1(x)`$), then $`\psi `$ defined by $$\widehat{\psi }=h_1\widehat{\psi }_{W_1}+h_2\widehat{\psi }_{W_2}$$ is again a wavelet provided the matrix (1) $$\left(\begin{array}{cc}h_1& h_2\\ h_2\sigma ^1& h_1\sigma ^1\end{array}\right)$$ is unitary almost everywhere. (Since $`\sigma ^1`$ is 2-homogeneous, and the $`h_i`$’s are 2-dilation periodic, in general it suffices to check this condition on $`W_1`$.) A complete discussion of this can be found in . Note that the interpolated wavelet $`\psi `$ has the property that $`supp(\widehat{\psi })W_1W_2`$. Further, note that since $`\sigma `$ on $`W_1`$ is given by translations by integral multiples of $`2\pi `$, $`\sigma `$ completely describes the partial self similarity of $`W_1W_2`$ with respect to multiples of $`2\pi `$. In the following examples, $`\sigma `$ will always be involutive. ###### Example 1. We shall now present an example of a wavelet in $`_1`$, which by corollary 12 is equivalent to $`E=supp(\widehat{\psi })`$ being not partially self similar with respect to any odd multiples of $`2\pi `$, but does have partially self similarity with respect to some multiple of $`4\pi `$. 
Consider the following two wavelet sets: $`W_1=`$ $`[{\displaystyle \frac{8\pi }{7}},{\displaystyle \frac{4\pi }{7}})[{\displaystyle \frac{4\pi }{7}},{\displaystyle \frac{6\pi }{7}})[{\displaystyle \frac{24\pi }{7}},{\displaystyle \frac{32\pi }{7}})`$ $`W_2=`$ $`[{\displaystyle \frac{8\pi }{7}},{\displaystyle \frac{4\pi }{7}})[{\displaystyle \frac{2\pi }{7}},{\displaystyle \frac{3\pi }{7}})[{\displaystyle \frac{24\pi }{7}},{\displaystyle \frac{30\pi }{7}})`$ $`[{\displaystyle \frac{31\pi }{7}},{\displaystyle \frac{32\pi }{7}})[{\displaystyle \frac{60\pi }{7}},{\displaystyle \frac{62\pi }{7}})`$ A routine calculation shows: $$\sigma (\xi )=\{\begin{array}{cc}\xi ,\hfill & \xi W_1W_2\hfill \\ \xi 4\pi ,\hfill & \xi [\frac{30\pi }{7},\frac{31\pi }{7})\hfill \\ \xi +8\pi ,\hfill & \xi [\frac{4\pi }{7},\frac{6\pi }{7})\hfill \end{array}$$ This $`\sigma `$ is involutive. Indeed, since $`\sigma ([\frac{30\pi }{7},\frac{31\pi }{7}))=[\frac{2\pi }{7},\frac{3\pi }{7})`$ and $`[\frac{2\pi }{7},\frac{3\pi }{7})=2[\frac{4\pi }{7},\frac{6\pi }{7})`$, for $`\xi [\frac{30\pi }{7},\frac{31\pi }{7})`$, $`\sigma ^2(\xi )=\sigma (\xi 4\pi )=\frac{1}{2}\sigma (2(\xi 4\pi ))=\frac{1}{2}(2\xi 8\pi +8\pi )=\xi `$. A similar computation shows that $`\sigma ^2`$ is the identity on $`[\frac{4\pi }{7},\frac{6\pi }{7})`$. Construct $`h_1`$ and $`h_2`$ as follows: $`h_1`$ $`=\chi _{W_1W_2}+{\displaystyle \frac{1}{\sqrt{2}}}\chi _{[\frac{4\pi }{7},\frac{6\pi }{7})[\frac{30\pi }{7},\frac{31\pi }{7})}`$ $`h_2`$ $`={\displaystyle \frac{1}{\sqrt{2}}}\left(\chi _{[\frac{2\pi }{7},\frac{3\pi }{7})}\chi _{[\frac{60\pi }{7},\frac{62\pi }{7})}\right)`$ We need to check the condition of the matrix in equation 1. It suffices to verify that the matrix is unitary on $`W_1`$. Clearly, on $`W_1W_2`$ the matrix is unitary, indeed it is the identity there. On $`[\frac{30\pi }{7},\frac{31\pi }{7})`$, $`h_1=h_2\sigma ^1=\frac{1}{\sqrt{2}}`$. 
Furthermore, if $`\xi [\frac{30\pi }{7},\frac{31\pi }{7})`$, $`h_2(\xi )=h_2(2\xi )=\frac{1}{\sqrt{2}}`$. Finally, $`\sigma ^1(\xi )=\xi 4\pi [\frac{2\pi }{7},\frac{3\pi }{7})=\frac{1}{2}[\frac{4\pi }{7},\frac{6\pi }{7})`$, hence $`h_1\sigma ^1(\xi )=\frac{1}{\sqrt{2}}`$. Thus, the matrix is simply: $$\left(\begin{array}{cc}\frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}}\end{array}\right)$$ which is unitary as required. A similar computation shows that the matrix is also unitary on $`[\frac{4\pi }{7},\frac{6\pi }{7})`$. ###### Example 2. Here we give an example of a wavelet in $`_2`$, which by corollary 12 is equivalent to $`E=supp(\widehat{\psi })`$ being not partially self similar with respect to any odd multiples of $`2\pi `$ or $`4\pi `$, but does have partially self similarity with respect to some multiple of $`8\pi `$. Consider the following two wavelet sets: $`W_1=`$ $`[8\pi ,{\displaystyle \frac{112\pi }{15}})[{\displaystyle \frac{16\pi }{15}},\pi )[{\displaystyle \frac{14\pi }{15}},{\displaystyle \frac{8\pi }{15}})`$ $`[{\displaystyle \frac{8\pi }{15}},{\displaystyle \frac{14\pi }{15}})[\pi ,{\displaystyle \frac{16\pi }{15}})[{\displaystyle \frac{112\pi }{15}},8\pi )`$ $`W_2=`$ $`[8\pi ,{\displaystyle \frac{112\pi }{15}})[{\displaystyle \frac{14\pi }{15}},{\displaystyle \frac{\pi }{2}})`$ $`[{\displaystyle \frac{8\pi }{15}},{\displaystyle \frac{14\pi }{15}})[{\displaystyle \frac{225\pi }{30}},8\pi )[{\displaystyle \frac{224\pi }{15}},15\pi )`$ A routine calculation shows: $$\sigma (\xi )=\{\begin{array}{cc}\xi ,\hfill & \xi W_1W_2\hfill \\ \xi 8\pi ,\hfill & \xi [\frac{112\pi }{15},\frac{225\pi }{30})\hfill \\ \xi +16\pi ,\hfill & \xi [\frac{16\pi }{15},\pi )\hfill \end{array}$$ As in example 1, $`\sigma `$ is involutive, and define $`h_1`$ and $`h_2`$ analogously: $`h_1`$ $`=\chi _{W_1W_2}+{\displaystyle \frac{1}{\sqrt{2}}}\chi _{[\frac{16\pi }{15},\pi )[\frac{112\pi }{15},\frac{225\pi }{30})}`$ $`h_2`$ $`={\displaystyle 
\frac{1}{\sqrt{2}}}\left(\chi _{[\frac{8\pi }{15},\frac{\pi }{2})}\chi _{[\frac{224\pi }{15},15\pi )}\right)`$ These functions satisfy 1. ###### Example 3. We shall now present an example of a wavelet in $`_3`$. Consider the following wavelet sets. $`W_1=`$ $`[16\pi ,{\displaystyle \frac{480\pi }{31}})[{\displaystyle \frac{32\pi }{31}},\pi )[{\displaystyle \frac{30\pi }{31}},{\displaystyle \frac{16\pi }{31}})`$ $`[{\displaystyle \frac{16\pi }{31}},{\displaystyle \frac{30\pi }{31}})[\pi ,{\displaystyle \frac{32\pi }{31}})[{\displaystyle \frac{480\pi }{31}},16\pi )`$ $`W_2=`$ $`[16\pi ,{\displaystyle \frac{480\pi }{31}})[{\displaystyle \frac{30\pi }{31}},{\displaystyle \frac{\pi }{2}})`$ $`[{\displaystyle \frac{16\pi }{31}},{\displaystyle \frac{30\pi }{31}})[\pi ,{\displaystyle \frac{32\pi }{31}})[{\displaystyle \frac{31\pi }{2}},16\pi )[{\displaystyle \frac{960\pi }{31}},31\pi )`$ Then, $`\sigma `$ is given by: $$\sigma (\xi )=\{\begin{array}{cc}\xi ,\hfill & \xi W_1W_2\hfill \\ \xi 16\pi ,\hfill & \xi [\frac{480\pi }{31},\frac{31\pi }{2})\hfill \\ \xi +32\pi ,\hfill & \xi [\frac{32\pi }{31},\pi )\hfill \end{array}$$ Again, as in example 1, $`\sigma `$ is involutive; analogously define $`h_1`$ and $`h_2`$ as: $`h_1`$ $`=\chi _{W_1W_2}+{\displaystyle \frac{1}{\sqrt{2}}}\chi _{[\frac{32\pi }{31},\pi )[\frac{480\pi }{31},\frac{31\pi }{2})}`$ $`h_2`$ $`={\displaystyle \frac{1}{\sqrt{2}}}\left(\chi _{[\frac{16\pi }{31},\frac{\pi }{2})}\chi _{[\frac{960\pi }{31},31\pi )}\right)`$ ###### Example 4. In this example we shall construct a non-MRA wavelet in $`_0`$. 
Consider the following wavelet sets: $`W_1=`$ $`[-{\displaystyle \frac{32\pi }{7}},-{\displaystyle \frac{28\pi }{7}})\cup [-{\displaystyle \frac{7\pi }{7}},-{\displaystyle \frac{4\pi }{7}})\cup [{\displaystyle \frac{4\pi }{7}},{\displaystyle \frac{7\pi }{7}})\cup [{\displaystyle \frac{28\pi }{7}},{\displaystyle \frac{32\pi }{7}})`$ $`W_2=`$ $`[-{\displaystyle \frac{8\pi }{7}},-{\displaystyle \frac{4\pi }{7}})\cup [{\displaystyle \frac{4\pi }{7}},{\displaystyle \frac{6\pi }{7}})\cup [{\displaystyle \frac{24\pi }{7}},{\displaystyle \frac{32\pi }{7}})`$ Both of these wavelets are non-MRA wavelets. It is shown in that the interpolated wavelet also is not an MRA wavelet. We have that $`\sigma `$ is given by: $$\sigma (\xi )=\{\begin{array}{cc}\xi ,\hfill & \xi \in [-\frac{7\pi }{7},-\frac{4\pi }{7})\cup [\frac{4\pi }{7},\frac{6\pi }{7})\cup [\frac{28\pi }{7},\frac{32\pi }{7})\hfill \\ \xi -2\pi ,\hfill & \xi \in [\frac{6\pi }{7},\frac{7\pi }{7})\hfill \\ \xi +8\pi ,\hfill & \xi \in [-\frac{32\pi }{7},-\frac{28\pi }{7})\hfill \end{array}$$ Construct $`h_1`$ and $`h_2`$ as follows: $`h_1`$ $`=\chi _{W_1\cap W_2}+{\displaystyle \frac{1}{\sqrt{2}}}\chi _{[-\frac{32\pi }{7},-\frac{28\pi }{7})\cup [\frac{6\pi }{7},\frac{7\pi }{7})}`$ $`h_2`$ $`={\displaystyle \frac{1}{\sqrt{2}}}\left(\chi _{[-\frac{8\pi }{7},-\frac{7\pi }{7})}-\chi _{[\frac{24\pi }{7},\frac{28\pi }{7})}\right)`$
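The unitarity claim in Example 1 can be checked numerically. The sketch below is illustrative, not from the paper; it assumes the reconstructed sign pattern with a single entry equal to $`-\frac{1}{\sqrt{2}}`$, since a real 2×2 matrix with all four entries equal to $`+\frac{1}{\sqrt{2}}`$ cannot be unitary.

```python
import numpy as np

# 2x2 interpolation matrix of Example 1; the -1/sqrt(2) entry is a
# reconstruction (a sign was lost in transcription), forced by unitarity.
s = 1.0 / np.sqrt(2.0)
M = np.array([[s,  s],
              [s, -s]])

# A real matrix is unitary iff M M^T = I.
print(np.allclose(M @ M.T, np.eye(2)))  # True
```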
no-problem/9909/cond-mat9909050.html
ar5iv
text
# High-frequency dynamics of wave localisation ## Abstract We study the effect of localisation on the propagation of a pulse through a multi-mode disordered waveguide. The correlator $`\langle u(\omega _1)u^{*}(\omega _2)\rangle `$ of the transmitted wave amplitude $`u`$ at two frequencies differing by $`\delta \omega `$ has for large $`\delta \omega `$ the stretched exponential tail $`\mathrm{exp}(-\sqrt{\tau _D\delta \omega /2})`$. The time constant $`\tau _D=L^2/D`$ is given by the diffusion coefficient $`D`$, even if the length $`L`$ of the waveguide is much greater than the localisation length $`\xi `$. Localisation has the effect of multiplying the correlator by a frequency-independent factor $`\mathrm{exp}(-L/2\xi )`$, which disappears upon breaking time-reversal symmetry. PACS numbers: 42.25.Dd, 42.25.Bs, 72.15.Rn, 91.30.-f The frequency spectrum of waves propagating through a random medium contains dynamical information of interest in optics , acoustics , and seismology . A fundamental issue is how the phenomenon of wave localisation affects the dynamics. The basic quantity is the correlation of the wave amplitude at two frequencies differing by $`\delta \omega `$. A recent microwave experiment by Genack et al. measured this correlation for a pulse transmitted through a waveguide with randomly positioned scatterers. The waves were not localised in that experiment, because the length $`L`$ of the waveguide was less than the localisation length $`\xi `$, so the correlator could be computed from the perturbation theory for diffusive dynamics . The characteristic time scale in that regime is the time $`\tau _D=L^2/D`$ it takes to diffuse (with diffusion coefficient $`D`$) from one end of the waveguide to the other. According to diffusion theory, for large $`\delta \omega `$ the correlator decays $`\propto \mathrm{exp}(-\sqrt{\tau _D\delta \omega /2})`$ with time constant $`\tau _D`$.
What happens to the high-frequency decay of the correlator if the waveguide becomes longer than the localisation length? That is the question addressed in this paper. Our prediction is that, although the correlator is suppressed by a factor $`\mathrm{exp}(-L/2\xi )`$, the time scale for the decay remains the diffusion time $`\tau _D`$ — even if diffusion is only possible on length scales $`<L`$. The exponential suppression factor disappears if time-reversal symmetry is broken (by some magneto-optical effect). Our analytical results are based on the formal equivalence between a frequency shift and an imaginary absorption rate, and are supported by a numerical solution of the wave equation. We consider the propagation of a pulse through a disordered waveguide of length $`L`$. In the frequency domain the transmission coefficient $`t_{nm}(\omega )`$ gives the ratio of the transmitted amplitude in mode $`n`$ to the incident amplitude in mode $`m`$. (The modes are normalized to carry the same flux.) We seek the correlator $`C(\delta \omega )=\langle t_{nm}(\omega +\delta \omega )t_{nm}^{*}(\omega )\rangle `$. (The brackets $`\langle \mathrm{}\rangle `$ denote an average over the disorder.) We assume that the (positive) frequency increment $`\delta \omega `$ is sufficiently small compared to $`\omega `$ that the mean free path $`l`$ and the number of modes $`N`$ in the waveguide do not vary appreciably, and may be evaluated at the mean frequency $`\omega `$ . We also assume that $`l\gg c/\omega `$ (with $`c`$ the wave velocity). The localisation length is then given by $`\xi =(\beta N+2-\beta )l`$, with $`\beta =1(2)`$ in the presence (absence) of time-reversal symmetry. For $`N\gg 1`$ the localisation length is much greater than the mean free path, so that the motion on length scales below $`\xi `$ is diffusive (with diffusion coefficient $`D`$). Our approach is to map the dynamic problem without absorption onto a static problem with absorption .
The mapping is based on the analyticity of the transmission amplitude $`t_{nm}(\omega +iy)`$, at complex frequency $`\omega +iy`$ with $`y>0`$, and on the symmetry relation $`t_{nm}(-\omega +iy)=t_{nm}^{*}(\omega +iy)`$. The product of transmission amplitudes $`t_{nm}(\omega +z)t_{nm}(-\omega +z)`$ is therefore an analytic function of $`z`$ in the upper half of the complex plane. If we take $`z`$ real, equal to $`\frac{1}{2}\delta \omega `$, we obtain the product of transmission amplitudes $`t_{nm}(\omega +\frac{1}{2}\delta \omega )t_{nm}^{*}(\omega -\frac{1}{2}\delta \omega )`$ considered above (the difference with $`t_{nm}(\omega +\delta \omega )t_{nm}^{*}(\omega )`$ being statistically irrelevant for $`\delta \omega \ll \omega `$). If we take $`z`$ imaginary, equal to $`i/2\tau _\mathrm{a}`$, we obtain the transmission probability $`T=|t_{nm}(\omega +i/2\tau _\mathrm{a})|^2`$ at frequency $`\omega `$ and absorption rate $`1/\tau _\mathrm{a}`$. We conclude that the correlator $`C`$ can be obtained from the ensemble average of $`T`$ by analytic continuation to imaginary absorption rate: $$C(\delta \omega )=\langle T\rangle \text{ for }1/\tau _\mathrm{a}\to -i\delta \omega .$$ (1) Two remarks on this mapping: 1. The effect of absorption (with rate $`1/\tau ^{}`$) on $`C(\delta \omega )`$ can be included by the substitution $`1/\tau _\mathrm{a}\to -i\delta \omega +1/\tau ^{}`$. This is of importance for comparison with experiments, but here we will for simplicity ignore this effect. 2. Higher moments of the product $`𝒞=t_{nm}(\omega +\frac{1}{2}\delta \omega )t_{nm}^{*}(\omega -\frac{1}{2}\delta \omega )`$ are related to higher moments of $`T`$ by $`\langle 𝒞^p\rangle =\langle T^p\rangle `$ for $`1/\tau _\mathrm{a}\to -i\delta \omega `$. This is not sufficient to determine the entire probability distribution $`P(𝒞)`$, because moments of the form $`\langle 𝒞^p(𝒞^{*})^q\rangle `$ can not be obtained by analytic continuation .
To check the validity of this approach and to demonstrate how effective it is we consider briefly the case $`N=1`$. A disordered single-mode waveguide is equivalent to a geometry of parallel layers with random variations in composition and thickness. Such a randomly stratified medium is studied in seismology as a model for the subsurface of the Earth . The correlator of the reflection amplitudes $`K(\delta \omega )=\langle r(\omega +\delta \omega )r^{*}(\omega )\rangle `$ has been computed in that context by White et al. (in the limit $`L\to \mathrm{\infty }`$). Their result was $$K(\delta \omega )=(2l/c)\delta \omega \int _0^{\mathrm{\infty }}𝑑x\mathrm{exp}[-x(2l/c)\delta \omega ]\frac{x}{x-i}.$$ (2) The distribution of the reflection probability $`R=|r|^2`$ through an absorbing single-mode waveguide had been studied many years earlier as a problem in radio-engineering , with the result $$\langle R\rangle =(l/c\tau _\mathrm{a})\int _1^{\mathrm{\infty }}𝑑z\mathrm{exp}[-(z-1)(l/c\tau _\mathrm{a})]\frac{z-1}{z+1}.$$ (3) One readily verifies that Eqs. (2) and (3) are identical under the substitution of $`1/\tau _\mathrm{a}`$ by $`-i\delta \omega `$. In a similar way one can obtain the correlator of the transmission amplitudes by analytic continuation to imaginary absorption rate of the mean transmission probability through an absorbing waveguide. The absorbing problem for $`N=1`$ was solved by Freilikher, Pustilnik, and Yurkevich . That solution will not be considered further here, since our interest is in the multi-mode regime, relevant for the microwave experiments . The transmission probability in an absorbing waveguide with $`N\gg 1`$ is given by $$\langle T\rangle =\frac{l}{N\xi _\mathrm{a}\mathrm{sinh}(L/\xi _\mathrm{a})}\mathrm{exp}\left(-\delta _{\beta ,1}\frac{L}{2Nl}\right),$$ (4) for absorption lengths $`\xi _\mathrm{a}=\sqrt{D\tau _\mathrm{a}}`$ in the range $`l\ll \xi _\mathrm{a}\ll \xi `$. The length $`L`$ of the waveguide should be $`\gg l`$, but the relative magnitude of $`L`$ and $`\xi `$ is arbitrary.
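As a numerical sketch (illustrative parameter values, not taken from the paper), one can continue Eq. (4) to an imaginary absorption rate, $`1/\tau _\mathrm{a}\to -i\delta \omega `$, and check that the modulus of the continued expression follows the stretched exponential tail quoted below; with the opposite sign of the continuation the expression would grow rather than decay.

```python
import numpy as np

# Continue Eq. (4) (beta = 2, i.e. without the suppression factor) to an
# imaginary absorption rate and compare its modulus with the tail
# |C| = (2l/NL) sqrt(tau_D dw) exp(-sqrt(tau_D dw / 2)).
# All parameter values are illustrative.
l, N, L, D = 1.0, 10.0, 50.0, 1.0
tau_D = L**2 / D

def T_eq4(inv_tau_a):                 # Eq. (4); complex rates allowed
    xi_a = np.sqrt(D / inv_tau_a + 0j)
    return l / (N * xi_a * np.sinh(L / xi_a))

dw = 300.0 / tau_D                    # deep in the tail: tau_D * dw = 300
C = T_eq4(-1j * dw)                   # analytic continuation

tail = 2*l/(N*L) * np.sqrt(tau_D*dw) * np.exp(-np.sqrt(tau_D*dw/2))
print(abs(C) / tail)                  # -> 1.000... in the tail regime
```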
Substitution of $`1/\tau _\mathrm{a}`$ by $`-i\delta \omega `$ gives the correlator $$C(\delta \omega )=\frac{l\sqrt{-i\tau _D\delta \omega }}{NL\mathrm{sinh}\sqrt{-i\tau _D\delta \omega }}\mathrm{exp}\left(-\delta _{\beta ,1}\frac{L}{2Nl}\right),$$ (5) where $`\tau _D=L^2/D`$ is the diffusion time. The range of validity of Eq. (5) is $`L/\xi \ll \sqrt{\tau _D\delta \omega }\ll L/l`$, or equivalently $`D/\xi ^2\ll \delta \omega \ll c/l`$. In the diffusive regime, for $`L\ll \xi `$, the correlator (5) reduces to the known result from perturbation theory. For $`\mathrm{max}(D/L^2,D/\xi ^2)\ll \delta \omega \ll c/l`$ the decay of the absolute value of the correlator is a stretched exponential, $$|C|=\frac{2l}{NL}\sqrt{\tau _D\delta \omega }\mathrm{exp}\left(-\sqrt{\frac{1}{2}\tau _D\delta \omega }-\delta _{\beta ,1}\frac{L}{2Nl}\right).$$ (6) In the localised regime, when $`\xi `$ becomes smaller than $`L`$, the onset of this tail is pushed to higher frequencies, but it retains its functional form. The weight of the tail is reduced by a factor $`\mathrm{exp}(-L/2Nl)`$ in the presence of time-reversal symmetry. There is no reduction factor if time-reversal symmetry is broken. To test our analytical findings we have carried out numerical simulations. The disordered medium is modeled by a two-dimensional square lattice (lattice constant $`a`$, length $`L`$, width $`W`$). The (relative) dielectric constant $`\epsilon `$ fluctuates from site to site between $`1\pm \delta \epsilon `$. The multiple scattering of a scalar wave $`\mathrm{\Psi }`$ (for the case $`\beta =1`$) is described by discretizing the Helmholtz equation $`[\nabla ^2+(\omega /c)^2\epsilon ]\mathrm{\Psi }=0`$ and computing the transmission matrix using the recursive Green function technique . The mean free path $`l`$ is determined from the average transmission probability $`\langle \mathrm{Tr}tt^{\dagger }\rangle =N(1+L/l)^{-1}`$ in the diffusive regime .
The correlator $`C`$ is obtained by averaging $`t_{nm}(\omega +\delta \omega )t_{nm}^{*}(\omega )`$ over the mode indices $`n,m`$ and over different realisations of the disorder. We choose $`\omega ^2=2(c/a)^2`$, $`\delta \epsilon =0.4`$, leading to $`l=22.1a`$. The width $`W=11a`$ is kept fixed (corresponding to $`N=5`$), while the length $`L`$ is varied in the range 400–1600 $`a`$. These waveguides are well in the localized regime, $`L/\xi `$ ranging from 3–12. A large number (some $`10^4`$–$`10^5`$) of realisations were needed to average out the statistical fluctuations, and this restricted our simulations to a relatively small value of $`N`$. For the same reason we had to limit the range of $`\delta \omega `$ in the data set with the largest $`L`$. Results for the absolute value of the correlator are plotted in Fig. 1 (data points) and are compared with the analytical high-frequency prediction for $`N\gg 1`$ (solid curve). We see from Fig. 1 that the correlators for different values of $`L/\xi `$ converge for large $`\delta \omega `$ to a curve that lies somewhat above the theoretical prediction. The offset is about 0.6, and could be easily explained as an $`𝒪(1)`$ uncertainty in the exponent in Eq. (1) due to the fact that $`N`$ is not $`\gg 1`$ in the simulation. Regardless of this offset, the simulation confirms both analytical predictions: The stretched exponential decay $`\mathrm{exp}(-\sqrt{\tau _D\delta \omega /2})`$ and the exponential suppression factor $`\mathrm{exp}(-L/2\xi )`$. We emphasize that the time constant $`\tau _D=L^2/D`$ of the high-frequency decay is the diffusion time for the entire length $`L`$ of the waveguide — even though the localisation length $`\xi `$ is up to a factor of 12 smaller than $`L`$. We can summarize our findings by the statement that the correlator of the transmission amplitudes factorises in the high-frequency regime: $`C\approx f_1(\delta \omega )f_2(\xi )`$.
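The mean-free-path determination quoted above, $`\langle \mathrm{Tr}tt^{\dagger }\rangle =N(1+L/l)^{-1}`$, is easily inverted for $`l`$. A minimal sketch with numbers chosen to match those quoted for the simulation (the averaged transmission value itself is constructed, not simulated):

```python
import numpy as np

# Invert <Tr t t^dagger> = N (1 + L/l)^(-1) for the mean free path l.
N, L = 5, 400.0             # modes; shortest waveguide length (units of a)
l_true = 22.1               # mean free path quoted in the text (units of a)

g = N / (1.0 + L / l_true)  # what the disorder-averaged simulation yields
l_est = L / (N / g - 1.0)   # inverted relation

print(round(l_est, 1))      # -> 22.1
```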
The frequency dependence of $`f_1`$ is set by the diffusion time through the waveguide, even if the waveguide is longer than the localisation length. Localisation has no effect on $`f_1`$, but only on $`f_2`$. We can contrast this factorisation with the high-frequency asymptotics $`K\approx f_3(\delta \omega )`$ of the correlator of the reflection amplitudes. In the corresponding absorbing problem the high-frequency regime corresponds to an absorption length smaller than the localisation length, so it is obvious that $`K`$ becomes independent of $`\xi `$ in that regime. The factorisation of $`C`$ is less obvious. Since the localized regime is accessible experimentally , we believe that an experimental test of our prediction should be feasible. Discussions with M. Büttiker, L. I. Glazman, K. A. Matveev, M. Pustilnik, and P. G. Silvestrov are gratefully acknowledged. This work was supported by the Dutch Science Foundation NWO/FOM.
no-problem/9909/physics9909029.html
ar5iv
text
# Scale Specific and Scale Independent Measures of Heart Rate Variability as Risk Indicators ## Abstract We study Heart Rate Variability (HRV) using scale specific variance and scaling exponents as measures for distinguishing healthy from cardiac impaired individuals. Our results show that the variance and the scaling exponent are uncorrelated. We find that the variance measure at certain scales is well suited to separate healthy subjects from heart patients. However, for cumulative survival probability the scaling exponents outperform the variance measure. Our risk study is based on a database containing recordings from 428 individuals after myocardial infarct (MI) and on a database containing 105 healthy subjects and 11 heart patients. The results have been obtained by applying three recently developed methods (DFA - Detrended Fluctuation Analysis, WAV - Multiresolution Wavelet Analysis, and DTS - Detrended Time Series analysis) which are shown to be highly correlated. The study of heart rate variability (HRV) has been in use for the last two decades as part of clinical and prognostic work; international guidelines for evaluating conventional HRV-parameters do exist . The conventional parameters are power spectra and standard deviation . Recently three new methods of analyzing heart interbeat interval (RR) time series have been developed, all of them showing signs of improved diagnostic performance. The three methods are: Detrended Fluctuation Analysis (DFA) , Multiresolution Wavelet Analysis (WAV) and Detrended Time Series Analysis (DTS) . The question of which method and which measure yields better separation between cardiac impaired and healthy subjects has been recently debated . In this Letter we show that variance, which is a scale specific measure, is well suited to separate healthy subjects from heart patients. However, for the myocardial infarct (MI) group the scaling exponent, which is a scale independent measure, serves as a better risk indicator.
Moreover, we show that the three above mentioned methods, for both variance and scaling exponent, are correlated and converge to similar results, while the variance and the scaling exponent are uncorrelated. In our study we use two groups, the MI group, containing 428 heart patients after MI and a control group, consisting of 105 healthy individuals and 11 cardiac impaired patients (9 diabetic patients, one diabetic patient after myocardial infarct, and one heart transplanted patient). Our analysis is based on 24 hour heart interbeat interval time series . We applied the following methods. The DFA Method. The detrended fluctuation analysis was proposed by Peng et al . This method avoids spurious detection of correlations that are artifacts of nonstationarity. The interbeat interval time series is integrated after subtracting the global average and then divided into windows of equal length, $`n`$. In each window the data are fitted with a least-squares straight line which represents the local trend in that window. The integrated time series is detrended by subtracting the local trend in each window. The root mean square fluctuation, the standard deviation $`\sigma _{\mathrm{dfa}}(n)`$ of the integrated and detrended time series is calculated for different scales (window sizes); the standard deviation can be characterized by a scaling exponent $`\alpha _{\mathrm{dfa}}`$, defined as $`\sigma (n)\propto n^\alpha `$. The WAV Method. In the WAV method one finds the wavelet coefficients $`W_{m,j}`$, where $`m`$ is a ‘scale parameter’ and $`j`$ is a ‘position’ parameter (the scale $`m`$ is related to the number of data points in the window by $`n=2^m`$ ), by means of a wavelet transform. The standard deviation $`\sigma _{\mathrm{wav}}(m)`$ of the wavelet coefficients $`W_{m,j}`$ across the parameter $`j`$ is used as a parameter to separate healthy from sick subjects. The corresponding scaling exponent is denoted by $`\alpha _{\mathrm{wav}}`$. The DTS Method.
The detrended time series method was suggested in . In this method one detrends the RR time series by subtracting the local average in a running window from the original time series, resulting in a locally detrended time series. The standard deviation $`\sigma _{\mathrm{dts}}`$ is calculated for various window scales with a scaling exponent $`\alpha _{\mathrm{dts}}`$. The first suggestion to use a scale independent measure of the HRV as a separation parameter was by Peng et al. who found that a critical value of the DFA scaling exponent $`\alpha _{\mathrm{dfa}}`$ can distinguish between healthy individuals and heart patients. Thurner et al. used the scale specific WAV variance $`\sigma _{\mathrm{wav}}`$ in order to better separate the same two groups. The debate, which method performs better was continued in two recent Letters . Later on, another independent study on a different database yielded a better separation using the scale specific $`\sigma _{\mathrm{wav}}`$ measure. In Fig. 1 we compare the conventional measures for HRV for the control group: the variance (which is calculated for a fixed scale) for the DTS and WAV method ($`\sigma _{\mathrm{dts}}`$ and $`\sigma _{\mathrm{wav}}`$) and the scaling exponent (which is calculated for a range of scales) for the DFA method ($`\alpha _{\mathrm{dfa}}`$). One notes that the scale specific measures, $`\sigma _{\mathrm{dts}}`$ and $`\sigma _{\mathrm{wav}}`$, yield a nearly perfect separation between healthy and sick subjects (the $`p`$ value of the Student t-test is less than $`10^{-14}`$), compared with the scale independent measure $`\alpha _{\mathrm{dfa}}`$ which yields less pronounced separation ($`p`$ value of the Student t-test is less than $`10^{-4}`$). This outcome is reversed when we applied the measures on the MI group.
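The DFA procedure described above is straightforward to implement. The following is a minimal sketch (first-order detrending, non-overlapping windows), not the authors' code; for white noise the integrated series is a random walk, so the exponent should come out near $`\alpha =0.5`$.

```python
import numpy as np

def dfa(x, window_sizes):
    """sigma(n) of the integrated, locally detrended series (DFA, order 1)."""
    y = np.cumsum(x - np.mean(x))          # integrate after removing the mean
    sigmas = []
    for n in window_sizes:
        f2 = []
        for k in range(len(y) // n):       # non-overlapping windows of length n
            seg = y[k*n:(k+1)*n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear trend
            f2.append(np.mean((seg - trend)**2))
        sigmas.append(np.sqrt(np.mean(f2)))
    return np.array(sigmas)

rng = np.random.default_rng(0)
x = rng.normal(size=2**14)                 # white-noise stand-in for RR data
ns = np.array([16, 32, 64, 128, 256])
alpha = np.polyfit(np.log(ns), np.log(dfa(x, ns)), 1)[0]
print(alpha)   # close to 0.5 for uncorrelated increments
```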
Since we have no diagnostics on this group, but rather do know the follow-up history for 328 individuals from the total 428 individuals of the larger group, we investigate the survival probability of these subjects as expressed through the so-called survival curve . In these curves one divides the entire group by means of a specific value of the $`\sigma `$ or $`\alpha `$ measure, called the critical value $`\sigma _c`$ or $`\alpha _c`$. For each subgroup we calculate the cumulative survival probability given by $`P(t+\mathrm{\Delta }t)=P(t)[1-\mathrm{\Delta }N/N(t)]`$, where $`P(t)`$ is the probability to survive up to $`t`$ days after the ECG recording, $`N(t)`$ denotes the number of individuals alive at $`t`$ days after the examination, and $`\mathrm{\Delta }N`$ is the number of individuals who died during the time interval $`\mathrm{\Delta }t`$. In Fig. 2 we show a comparison of survival curves where the separating measure in figures (a) and (c) is the critical standard deviation $`\sigma _c`$ and in figures (b) and (d) the critical scaling exponent $`\alpha _c`$. Individuals with $`\sigma >\sigma _c`$ (or $`\alpha >\alpha _c`$) belong to the subgroup with the higher survival probability; the upper panel extracts the subgroup with a high survival probability, whereas the lower panel extracts the subgroup with a low survival probability. This comparison shows that the scale independent scaling exponent $`\alpha `$ serves as a better prognostic predictor than the scale specific variance $`\sigma `$ (although Fig. 2a and b are similar the survival curves of Fig. 2d are more separated than the survival curves of Fig. 2c). In Fig. 2 we use the $`\sigma `$ and $`\alpha `$ measures obtained through the WAV method. However, as we show below all three methods discussed above are highly correlated and no significant difference is noticeable in the survival curves when using DFA and DTS measures.
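The cumulative survival probability defined above, $`P(t+\mathrm{\Delta }t)=P(t)[1-\mathrm{\Delta }N/N(t)]`$, can be sketched in a few lines; the follow-up data here are invented placeholders, not the study's records.

```python
import numpy as np

# Kaplan-Meier-style evaluation of P(t+dt) = P(t)[1 - dN/N(t)]
# on invented follow-up data: 10 subjects, 4 observed deaths.
death_day = np.array([100, 250, 400, 900])
alive, p = 10, 1.0
P = np.ones(1001)
for t in range(1001):
    dN = int(np.sum(death_day == t))
    if dN:
        p *= 1.0 - dN / alive
        alive -= dN
    P[t] = p

# (9/10)(8/9)(7/8)(6/7) = 0.6 once all four deaths have occurred
print(round(P[-1], 6))   # -> 0.6
```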
The advantage of the scale independent measure $`\alpha `$ over the scale specific measure $`\sigma `$ is also shown in Fig. 3. Here we scan the possible critical values by the Receiver Operating Characteristic (ROC) analysis ; this analysis is usually used as a medical diagnostic test and also was the basic diagnostic test of Ref. . The idea of the ROC method is to compare the result of a medical test (positive or negative) with the clinical status of the patient (with or without disease). The efficiency of the medical test is judged on the basis of its sensitivity (the proportion of diseased patients correctly identified) and its specificity (the proportion of healthy patients correctly identified). The ROC curve is a graphical presentation of sensitivity versus specificity as a critical parameter is swept. In our case the patient status is determined according to its mortality (death or survival up to time $`t`$) and according to its mortality prediction (a patient with a parameter value smaller than the critical value is predicted to die while a patient with a parameter value larger than the critical value is predicted to survive). In Fig. 3a and b we present two examples of the ROC curves at different times (18 months and 36 months). In both cases the ROC of the scale independent ($`\alpha _{\mathrm{wav}}`$) curve is located above the scale specific ($`\sigma _{\mathrm{wav}}`$) curve; the larger the area under the ROC curve is, the better is the parameter. In the ideal case a patient with a small parameter value will die before the patient with a higher parameter value. In this case the area under the ROC curve will be 1. On the other hand, when there is no relation between the value of the parameter and the mortality of the patient the area under the ROC curve will be 1/2. In Fig. 3c we show the area under the ROC curves as a function of time ($`A(t)`$).
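A minimal sketch of the ROC construction just described, on invented data in which a low scaling exponent predicts death: sweep the critical value, record sensitivity and specificity, and integrate the area. The subject values and outcomes below are placeholders.

```python
import numpy as np

# Invented data: alpha values for 8 subjects; three of the four lowest died.
alpha_val = np.array([0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3])
died      = np.array([1,   1,   0,   1,   0,   0,   0,   0], dtype=bool)

cuts = np.concatenate(([-np.inf], np.sort(alpha_val), [np.inf]))
sens = np.array([np.mean(alpha_val[died] < c) for c in cuts])    # deaths flagged
spec = np.array([np.mean(alpha_val[~died] >= c) for c in cuts])  # survivors cleared

# Area under the ROC curve (sensitivity vs 1-specificity), trapezoidal rule.
x = 1.0 - spec
auc = np.sum((x[1:] - x[:-1]) * (sens[1:] + sens[:-1]) / 2.0)
print(auc)   # 14/15 = 0.933..., well above the chance value 0.5
```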
The scale independent ($`\alpha _{\mathrm{wav}}`$) curve is located above the scale specific ($`\sigma _{\mathrm{wav}}`$) curve. Thus, the scale independent measure $`\alpha _{\mathrm{wav}}`$ is more suitable for prognosis. In order to investigate if the three methods we use are correlated, we apply them on the larger MI group consisting of 428 subjects. The top panel of Fig. 4 shows that the variances (the scale specific measure) of the three methods are highly correlated. This is also true for the scaling exponents (the scale independent measure, middle panel). These comparisons indicate that indeed the various methods yield the same results in terms of variance and scaling exponents. On the other hand, the lower panel of Fig. 4 shows that the scale specific variance and the scale independent scaling exponent are uncorrelated for the DTS and DFA methods and are only faintly correlated for the WAV method. From this we conclude that the $`\alpha `$ and $`\sigma `$ measures characterize the interbeat interval series in different ways; the variance, which is a measure in the time domain (and thus is almost invariant to shuffling ), performs better as a diagnostic tool, while the scaling exponent, which is a measure in the frequency domain, depends on the order of events and performs better as a prognostic tool. Thus we suggest that the scale specific variance reflects changes in either the sympathetic or the parasympathetic activities of the neuro-autonomic nervous system which affect the cardiac ability of contraction; the scale specific variance may hint on the instant condition of the physical properties of the heart. From the above we also suggest that the scale independent scaling exponent characterize the memory interplay of the two competing branches of the autonomic nervous system (the sympathetic and the parasympathetic systems) and is thus an expression of the underlying mechanism of heart regulation (which influences the conventional power spectrum ). 
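The correlation analysis of Fig. 4 amounts to computing correlation coefficients between the per-subject measures. Sketched here on synthetic stand-in data in which $`\sigma `$ and $`\alpha `$ are drawn independently, mimicking the reported lack of correlation (the distributions are invented, not fitted to the study):

```python
import numpy as np

# Synthetic stand-in for 428 per-subject measures: sigma and alpha
# drawn independently, so their correlation should be near zero.
rng = np.random.default_rng(1)
sigma = rng.lognormal(mean=-3.0, sigma=0.5, size=428)
alpha = rng.normal(loc=1.0, scale=0.15, size=428)

r = np.corrcoef(sigma, alpha)[0, 1]
print(abs(r) < 0.25)   # True: consistent with "uncorrelated"
```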
We wish to thank Nachemsohns Foundation for financial support.
no-problem/9909/astro-ph9909053.html
ar5iv
text
# The Ultraviolet Peak of the Energy Distribution in 3C 273: Evidence for an Accretion Disk and Hot Corona Around a Massive Black Hole ## 1. Introduction It has long been known that the energy distribution of quasars, expressed as $`\nu L_\nu `$, must have a peak somewhere in the far-ultraviolet to soft X-ray region of the spectrum. Shields (1978) first suggested that the optical and near UV flux of quasars might be due to the Rayleigh-Jeans portion of a black body spectrum whose peak lies in the unobserved extreme UV region, arising from an optically thick accretion disk surrounding a massive black hole (Lynden-Bell (1969)), and dubbed the “big blue bump.” This idea was then further developed and applied to UV observations of quasars obtained with IUE (Malkan & Sargent (1982); Malkan (1983)). It was subsequently found that the X-ray spectra of quasars also showed a “soft excess” of flux, above the extrapolation of the relatively flat power-law that fit the spectrum at energies above 2 keV (Turner & Pounds (1988); Masnou et al. (1992)), and which might arise from the Wien portion of the same thermal spectrum. In addition to whatever light this spectral region might shed on the black-hole accretion-disk model for quasars, it is also a fundamental input to studies of the photoionization of the broad emission line clouds in quasars (Krolik & Kallman (1988)) as well as the photoionization of the intergalactic medium (Haardt & Madau (1996)). Consequently there has been widespread interest in determining the detailed shape of the spectrum of quasars between the optical ($`10^{15}`$ Hz) and the soft X-ray ($`10^{17}`$ Hz), and in particular, establishing more precisely where in the extreme UV the peak of the big blue bump is located (Arnaud et al. (1985); Bechtold et al. (1987); O’Brien, Gondhalekar, & Wilson (1988)). 
The quasar 3C 273 provides an excellent candidate for addressing this problem because of its low redshift (z=0.158) and its high flux at both UV and X-ray wavelengths. It was the first quasar detected in X-rays (Bowyer et al. (1970)) and also the first quasar observed in the far-ultraviolet (Davidsen, Hartig, & Fastie (1977)), both detections having been achieved with sounding rocket experiments before the advent of effective satellite observatories. At X-ray energies the spectrum of 3C 273 in the 2–10 keV range is well fit by a power-law, $`F_\nu \propto \nu ^{-\alpha }`$, with energy index $`\alpha \approx 0.5`$ (Worrall et al. (1979)), but observations at lower energies with EXOSAT and Ginga (Turner et al. (1985), Courvoisier et al. (1987), Turner et al. (1990)), Einstein (Wilkes & Elvis (1987), Turner et al. (1991), Masnou et al. (1992)), and ROSAT (Staubert et al. (1992); Leach, McHardy, & Papadakis (1995), hereafter LMP95) have established the existence of a soft excess. Reporting on simultaneous observations of 3C 273 in 1990 December with ROSAT (0.1 - 2 keV) and Ginga (2–10 keV), Staubert et al. (1992) obtained a good fit with the sum of two power laws, with $`\alpha =0.56`$ dominating at high energies and $`\alpha =2.5`$ at low energies. However, the soft component could also be modeled with a black body spectrum or with thermal bremsstrahlung (Staubert et al. (1992)). More extensive observations of 3C 273 with ROSAT have been reported by LMP95. They established that “two physically distinct emission components are present”, with the spectrum best modeled by a combination of two power-laws with absorption by the Galactic column of gas, $`N_H=1.84\times 10^{20}`$ $`\mathrm{cm}^{-2}`$. Constraining the harder component to have $`\alpha _h=0.5`$ as observed by EXOSAT and Ginga, the soft component was found to be significantly steeper, with $`\alpha _s=1.7`$ for most of the observations.
Attempts to fit the soft component with various other models, including blackbody and bremsstrahlung radiation were found unacceptable (LMP95). At ultraviolet wavelengths the first determination of the spectral index of 3C 273 (Davidsen, Hartig, & Fastie (1977)) gave $`\alpha =0.6`$ over the range from the optical to Lyman $`\alpha `$, with a broad bump around 3500 Å, perhaps associated with Balmer continuum and Fe ii line emission (Baldwin (1975)), that has sometimes been called the “small blue bump”. It was also pointed out by Davidsen et al. that the spectral index must increase to $`\alpha >1.2`$ somewhere near or above the Lyman limit in order to agree with the X-ray observations. Subsequent extensive observations with IUE give $`\alpha =0.66`$ between 3100 and 1250 Å (Zheng & Malkan (1993)). A very low resolution observation with the Voyager UV Spectrometer in the 900–1200 Å band suggested the possibility of a spectral break at about 1000 Å (Reichert et al. (1988)), perhaps associated with the Lyman edge. Recent observations in the 900–1200 Å region with ORFEUS imply a turnover in the spectrum of 3C 273 near the IUE/ORFEUS transition wavelength at about 1200 Å (Appenzeller et al. (1998)). Here we report the results of observations of 3C 273 in the 900–1800 Å region made with the Hopkins Ultraviolet Telescope (HUT) on both the Astro-1 and Astro-2 space shuttle missions. The Astro-1 far-ultraviolet data are accompanied by contemporaneous observations at optical, ultraviolet, and soft and hard X-rays, yielding a quasi-simultaneous broad-band spectrum for this quasar extending over more than 3 decades of frequency. The HUT data alone show a definite spectral break, with a peak in the energy spectrum $`\nu L_\nu `$ at about 920 Å in the rest frame. The best-fit power-law at shorter wavelengths from 920 Å to 787 Å, when extrapolated, nearly matches the simultaneous soft X-ray spectrum. 
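The spectral indices quoted above are two-point slopes of $`F_\nu \propto \nu ^{-\alpha }`$. A minimal sketch of the conversion from a pair of flux measurements to an index (the flux values and wavelengths below are placeholders, not the measured data):

```python
import numpy as np

# Two-point spectral index for F_nu ∝ nu^(-alpha):
#   alpha = -log(F2/F1) / log(nu2/nu1)
c = 2.998e18                       # speed of light in Angstrom/s

def spectral_index(lam1, F1, lam2, F2):
    nu1, nu2 = c / lam1, c / lam2  # wavelengths (Angstrom) -> frequencies (Hz)
    return -np.log(F2 / F1) / np.log(nu2 / nu1)

# Consistency check: fluxes generated from F_nu = K nu^(-0.66) must
# return alpha = 0.66 regardless of the wavelengths chosen.
K, lam1, lam2 = 1.0, 3100.0, 1250.0
F1 = K * (c / lam1) ** -0.66
F2 = K * (c / lam2) ** -0.66
print(spectral_index(lam1, F1, lam2, F2))   # -> 0.66
```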
The Astro-2 HUT data are of even higher quality than those obtained on Astro-1 and yield similar results, confirming that the peak of the “big blue bump” has now definitely been seen in 3C 273. A comparison of these data with a composite quasar spectrum in the ultraviolet (Zheng et al. (1997)) and the X-ray bands (Laor et al. (1997)), suggests strongly that a peak in the energy spectrum of all quasars occurs near the Lyman limit, and that the ionizing continuum from the Lyman limit to 1 keV is a power-law of $`\alpha =1.7`$–$`2.2`$. ## 2. Observations The Hopkins Ultraviolet Telescope (HUT) incorporates a 0.9-m primary mirror and a prime-focus Rowland circle spectrograph as described by Davidsen et al. (1992) and Kruk et al. (1995). First-order spectra recorded by the photon counting detector cover the 820–1840 Å spectral range with a sampling of about 0.52 Å per pixel and a point source resolution of about 3 Å. Absorption by interstellar hydrogen limits the observed spectra to wavelengths longer than the Lyman limit at 912 Å (787 Å in the quasar rest frame). HUT was used to observe 3C 273 on the Astro-1 space shuttle mission in 1990 December and again on the Astro-2 mission in 1995 March. The performance and calibration of the instrument are described for Astro-1 by Davidsen et al. (1992) and Kruk et al. (1997) and for Astro-2 by Kruk et al. (1995, 1999). Because substantial changes to HUT’s performance were made between the two flights, the results of the spectrophotometric observations may be regarded as independent measurements, effectively produced by two different instruments. The Astro-1 observations were (necessarily) conducted primarily during daylight portions of the shuttle orbit, where there is substantial contamination by geocoronal emission lines that must be subtracted in the data reduction process. The Astro-2 observations, however, were obtained mostly during night-time portions of the orbit, where the geocoronal contamination is greatly reduced.
Only the night portion of the Astro-2 data is used. Details of the observations are given as an observation log in Table 1. For Astro-1 there are several nearly simultaneous observations that can be used to determine the broad-band spectrum of 3C 273. Also mounted on the shuttle with HUT was the Broad Band X-Ray Telescope (BBXRT; Serlemitsos et al. 1992), which observed 3C 273 in the 0.3–12 keV band with a resolution of about 100 eV. ROSAT observations in the 0.1–2.4 keV band were obtained less than two weeks after the Astro-1 observations (Staubert et al. 1992), and Ginga observations in the 2–20 keV band were also obtained during 1990 December (Staubert et al. 1992). In the ultraviolet, IUE spectra in both the short and long wavelength cameras were obtained between one and two weeks after the HUT observations, and these have been extracted from the IUE archive. Finally, an optical spectrum of 3C 273 was kindly obtained for us by R. Green on the KPNO 2.1 m telescope during the Astro-1 mission. All these contemporaneous measurements are listed in the observation log in Table 1. The HUT observations of 3C 273 from Astro-2 in 1995 March are also listed in Table 1. Unfortunately, we have been unable to find any nearly simultaneous observations at other wavelengths for this epoch.

## 3. Results from the HUT Observations

### 3.1. Observed Spectra

The two HUT observations of 3C 273 from Astro-1 were summed and then processed and reduced to an absolute flux scale following our standard procedures as described by Kruk et al. (1997). For Astro-2 the two observations were reduced separately because of the variation of the instrument sensitivity during the mission (Kruk et al. 1995), and the resulting spectra were then averaged, weighted by their exposure time, to yield the final spectrum presented here. As noted in Table 1, the Astro-2 data had slight flux corrections of 2–5% applied due to light loss at the slit induced by pointing errors.
The results for both missions are displayed in Fig. 1. It is worth emphasizing that the Astro-1 and Astro-2 spectra were obtained, reduced, and calibrated under markedly different circumstances. The Astro-1 data were obtained with an instrument of relatively modest sensitivity, almost entirely during orbital day when there is substantial contamination by geocoronal emission lines that must be subtracted, and were absolutely calibrated by reference to an observation of the DA white dwarf G191-B2B using a model atmosphere computed for this star by P. Bergeron (Davidsen et al. 1992). The Astro-2 data, on the other hand, were obtained with an instrument of much higher sensitivity, entirely during orbital night with minimal geocoronal contamination, and were absolutely calibrated by reference primarily to observations of the DA white dwarf HZ43 using a model atmosphere computed by D. Finley using D. Koester’s model atmosphere codes (Kruk et al. 1995). In spite of these major differences, the resulting spectra in Fig. 1 are remarkably similar. Of course, 3C 273 is known to vary, so the close agreement between observations made more than four years apart is fortuitous. Indeed there are clear changes observed both in the continuum and in the emission line strengths (discussed below). However, both spectra show a change of continuum slope near the Lyman limit in the rest frame of the quasar, which is further discussed in the next section.

### 3.2. Empirical Models for the Spectra

We first remove the airglow lines from the spectra. We use a geocoronal Ly$`\alpha `$ template that is derived from blank fields and scaled by the exposure time, and subtract it from the HUT spectrum of 3C 273. Other airglow lines are fitted and removed, but significant residuals still exist in the regions around the strongest airglow lines at 1216, 1302 and 989 Å.
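The template subtraction described above reduces to a scaled vector operation. A minimal sketch (the array sizes, count rates, and exposure time below are illustrative mock values, not the actual HUT pipeline or data):

```python
import numpy as np

# Illustrative sketch of geocoronal template subtraction (NOT the actual HUT
# reduction code): an airglow template accumulated from blank fields, in
# counts/s per pixel, is scaled by the target's exposure time and subtracted.

rng = np.random.default_rng(0)

template_rate = np.zeros(1000)     # mock blank-field airglow rate (counts/s/pixel)
template_rate[580:590] = 0.5       # mock geocoronal emission line near pixel 585

exposure = 1200.0                  # illustrative target exposure time (s)
target_counts = rng.poisson(5.0, 1000) + exposure * template_rate

# Subtract the scaled template; the airglow line is removed in the mean,
# while the Poisson noise it contributed remains in the residuals.
cleaned = target_counts - exposure * template_rate
```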
We use the IRAF task specfit (Kriss 1994) to fit both the Astro-1 and Astro-2 spectra with various components for the continuum, emission lines and absorption features. We use the same overall model for both spectra, and identical fitting windows spanning the wavelength intervals 912–1194, 1238–1287, and 1319–1820 Å. For the Astro-1 data, we also model the remaining airglow residuals near 989 Å. The intrinsic emission lines of Ly$`\alpha `$, C iv and O vi are modeled with dual Gaussians, and other emission components are modeled with single Gaussians. We constrain many components of the fit by tying related parameters together. The full-widths at half maximum (FWHM) of the weaker broad emission lines (S vi, C iii, N iii, O i, C ii) are all linked, and their wavelengths are linked to that of narrow Ly$`\alpha `$ by the ratio of their laboratory wavelengths. The FWHM of broad O vi $`\lambda `$1034 is linked to broad C iv $`\lambda `$1549. The wavelengths and FWHM of the N v components are linked to the corresponding C iv components. We include one unidentified weak, broad emission feature at a rest wavelength of 1074 Å. This is not He ii $`\lambda `$1085, and it is approximately at the location of the unidentified feature also seen in Faint Object Spectrograph (FOS) observations by Laor et al. (1995). For the absorption lines, we start by including features identified in previous, higher resolution observations (FOS: Bahcall et al. 1991; GHRS: Morris et al. 1991; ORFEUS: Hurwitz et al. 1998). We then add additional features as required to obtain a good fit to the HUT data. As with the emission lines, we constrain many components of the fit by tying the FWHM of the lines together in groups by wavelength regions where the instrumental resolution is roughly the same. All lines are assumed to be unresolved, except for the blend of O vi, C ii, and O i around 1038 Å, the blended Si ii lines at 1192 Å, and the blended C iv doublet at 1549 Å.
The wavelength region below 950 Å is heavily absorbed by the high-order lines of Galactic hydrogen. To model this, we use a set of absorption profiles, each including 50 Lyman-series lines, with a fixed column density of $`1.8\times 10^{20}`$ cm<sup>-2</sup>. The absorption lines are modeled using Voigt profiles covering a range of 10–20 km s<sup>-1</sup> in the Doppler parameter which are then convolved with the instrument resolution, a Gaussian of 3 Å FWHM. For our extinction correction, we adopt $`E(B-V)=0.032`$, as established by Lockman & Savage (1995), and use the extinction curve of Cardelli, Clayton & Mathis (1989), with $`R_V=3.1`$. This value of the extinction is also consistent with the $`E(B-V)=0.03`$ adopted by Lichti et al. (1995) and Appenzeller et al. (1998). For both the Astro-1 and Astro-2 spectra, we obtain our best fit using a broken power-law model for the continuum shape. To ascertain the significance of the apparent spectral break near the Lyman limit, we compare fits performed using a single power-law for the continuum shape to a broken power-law. The single power-law fits are summarized in Table 2, and the broken power-law fits are summarized in Table 3. For the Astro-1 data, the fits with a single power law do not produce satisfactory results. These fits are shown as smooth curves in Fig. 2 and Fig. 3. Note that the single power-law fit deviates significantly at wavelengths shortward of 1200 Å. Even allowing for reddening corrections higher than the Galactic value, the fits are still poor, as shown in Table 2. The spectral break in the sub-Ly$`\alpha `$ region is too sharp to be accounted for by reddening.
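The improvement of the broken power-law over the single power-law can be given a rough significance using the chi-squared values quoted in Tables 2 and 3. An F-test for additional model terms is only approximate in cases like this (a well-known statistical caveat), but it conveys the scale of the improvement; a sketch using the Astro-1 values at the Galactic extinction:

```python
# Rough significance of the Astro-1 spectral break: compare the single
# power-law fit at E(B-V) = 0.032 (chi^2 = 1619 for 1544 dof, Table 2)
# with the broken power-law fit (chi^2 = 1584 for 1542 dof, Table 3).
# The F-test for extra terms is approximate here, but indicative.

chi2_single, dof_single = 1619.0, 1544
chi2_broken, dof_broken = 1584.0, 1542

delta_chi2 = chi2_single - chi2_broken            # 35 for 2 extra parameters
delta_dof = dof_single - dof_broken
f_stat = (delta_chi2 / delta_dof) / (chi2_broken / dof_broken)
print(f"F = {f_stat:.1f} for {delta_dof} extra parameters")   # F ≈ 17
```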
TABLE 2
Single Power-Law Fits to 3C 273

| $`E(B-V)`$ | Astro-1 $`\alpha `$ | Astro-1 $`\chi ^2`$/dof | Astro-2 $`\alpha `$ | Astro-2 $`\chi ^2`$/dof |
| --- | --- | --- | --- | --- |
| 0.01 | −0.94 | 1689/1544 | 1.16 | 1723/1501 |
| 0.02 | −0.81 | 1651/1544 | 1.00 | 1698/1501 |
| 0.032 | −0.62 | 1619/1544 | 0.79 | 1697/1501 |
| 0.04 | −0.49 | 1615/1544 | 0.67 | 1708/1501 |
| 0.05 | −0.34 | 1602/1544 | 0.50 | 1746/1501 |
| 0.06 | −0.18 | 1607/1544 | ⋯ | ⋯ |
| 0.08 | 0.15 | 1650/1544 | ⋯ | ⋯ |

TABLE 3
Broken Power-Law Fits to 3C 273<sup>a</sup>

| Parameter | Astro-1 | Astro-2 |
| --- | --- | --- |
| $`f_0`$<sup>b</sup> | 4.37 ± 0.04 | 4.41 ± 0.02 |
| $`\lambda _0`$ (Å) | 1064 ± 8 | 1036 ± 6 |
| $`\alpha _1`$ | 1.69 ± 0.16 | 1.19 ± 0.15 |
| $`\alpha _2`$ | 0.46 ± 0.03 | 0.74 ± 0.02 |
| $`\chi ^2`$/dof | 1584/1542 | 1692/1499 |

<sup>a</sup>Assumes extinction fixed at $`E(B-V)=0.032`$.
<sup>b</sup>Measured at the break wavelength $`\lambda _0`$, in units of $`10^{-13}`$ ergs s<sup>-1</sup> cm<sup>-2</sup>.

Although the Astro-2 spectrum is fairly similar to the Astro-1 data, the continuum shape is noticeably less blue at wavelengths longward of Ly$`\alpha `$, and the broad emission lines are significantly brighter. One can see this most easily in the comparison of the fitted spectra shown in Fig. 4. Since the Astro-2 spectrum is not as blue, the break at shorter wavelengths is not as dramatic, and a single power law reddened at the nominal Galactic value fits only slightly worse than the broken power law, as one can see in the top panels of Figures 2 and 3. Thus, we conclude that a short-wavelength spectral break is required by our Astro-1 data, but not by the Astro-2 data.

Fig. 2.— Observed spectra of 3C 273 with the best-fitting empirical models as described in Section 3.2. The models include emission and absorption lines and a continuum described either by a single power-law (dotted line) or by a broken power-law (solid lines). Reddening with $`E(B-V)=0.032`$ is applied to the models. Broken power-laws provide the best fit for both spectra.
A single power-law gives an unacceptable fit for the Astro-1 spectrum; for Astro-2 a single power-law is formally worse than the broken power-law, but still acceptable. Parameters of the models are given in Tables 2–7. As the broken power-law continuum models give our best fits for both data sets, we quote the parameters for the emission and absorption lines using these continuum models. Tables 4 and 5 give the observed wavelengths, the observed fluxes, and full-widths at half-maximum (FWHM) for the Astro-1 and Astro-2 observations of the emission lines, respectively. Tables 6 and 7 list the observed wavelengths, equivalent widths ($`W_\lambda `$), and FWHM for the absorption lines. None of the line widths are corrected for the instrumental resolution. For all absorption lines, the best-fit widths are consistent with the instrumental resolution and should therefore be considered unresolved. The error bars quoted on all parameters are determined from the error matrix of the fit, and they represent the 1$`\sigma `$ confidence interval for a single interesting parameter (Kriss 1994).

Fig. 3.— Same as Fig. 2 but showing the crucial 900–1200 Å region at an enlarged scale. Positions of the Lyman series of interstellar hydrogen absorption lines are indicated. Series lines up to L7 are clearly seen in the data. The continuum in this region is well-fit by the broken power-law model (solid curves), while the best single power-law model (dotted) lies above the data at the shortest wavelengths.

## 4. The Broad-band Spectrum of 3C 273

The BBXRT data were obtained simultaneously with the Astro-1 HUT data, and we have retrieved them from the National Space Science Data Center (NSSDC) archive. To obtain the unfolded X-ray spectrum, we use the data reduction package xspec (Arnaud 1996). The data points in channels 1–20 and 450–512 are excluded because of high uncertainties. We bin the data so that each bin has at least 20 counts.
Our preliminary fits with single power laws or broken power laws yielded poor fits, with a significant deviation around 0.6 keV. Done (1993) suggests that this is a blueshifted O viii resonance absorption feature. Empirically modeling this feature as an absorption edge, we use a dual power-law with fixed Galactic absorption to model the X-ray spectrum:
$$dN/dE=\left(f_1E^{-\alpha _1}+f_2E^{-\alpha _2}\right)\mathrm{exp}\left[-\tau \left(E/E_{th}\right)^{-3}\right]\mathrm{exp}\left[-\tau (N_H,E)\right],$$
where the absorption-edge factor applies for $`E\ge E_{th}`$ (and is unity below the threshold), and $`\tau (N_H,E)`$ is the interstellar photoelectric optical depth. This model gives a good fit ($`\chi ^2=52.5`$ for 52 binned data points and 6 free parameters). The parameters for our best fit are: photon power-law indices $`\alpha _1=4.3\pm 0.9`$ and $`\alpha _2=1.64\pm 0.09`$; fluxes at 1 keV of $`f_1=0.0081\pm 0.0030`$ and $`f_2=0.017\pm 0.0025`$ $`\mathrm{photons}\mathrm{s}^{-1}\mathrm{cm}^{-2}\mathrm{keV}^{-1}`$; and an absorption edge at $`E_{th}=0.51\pm 0.03`$ keV with an optical depth $`\tau =1.17\pm 0.31`$. This fit and the residuals are shown in Fig. 5.

Fig. 4.— Direct comparison of the fits to the HUT Astro-1 (dotted lines) and Astro-2 (solid lines) spectra. Although the continuum fluxes are roughly the same, there are clear differences in the line fluxes and continuum shapes.

The nearly simultaneous optical, ultraviolet, and X-ray data collected in 1990 December during or shortly after the Astro-1 mission were listed in Table 1 and are shown in Fig. 6. The BBXRT and ROSAT data shown in Fig. 6 are corrected for Galactic absorption, using a Morrison & McCammon (1983) model with $`N_H=1.8\times 10^{20}`$ cm<sup>-2</sup>. (Note that we show unfolded X-ray data in Fig. 6, and that the plotted fluxes are model dependent.) Also shown is the power-law that best fits the short wavelength portion of the HUT spectrum as part of the broken power-law fit described in Section 3.2. This component has $`\alpha =1.7`$.
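The dual power-law model of §4 can be evaluated numerically. A sketch assuming the standard edge parameterization, exp(−τ(E/E_th)^−3) for E ≥ E_th and unity below (the usual xspec-style edge model), with the interstellar photoelectric term exp(−τ(N_H, E)) omitted for brevity:

```python
import numpy as np

# Sketch of the empirical dual power-law plus absorption-edge model, using
# the best-fit values quoted in the text. Interstellar photoelectric
# absorption is omitted here; a full evaluation would multiply by
# exp(-sigma(E) * N_H) with a cross-section model such as Morrison & McCammon.

def xray_model(E, f1=0.0081, a1=4.3, f2=0.017, a2=1.64, tau=1.17, E_th=0.51):
    """Photon spectrum dN/dE in photons/s/cm^2/keV; E in keV."""
    continuum = f1 * E ** (-a1) + f2 * E ** (-a2)
    # Edge: optical depth tau at threshold, declining as (E/E_th)^-3 above it.
    edge = np.where(E >= E_th, np.exp(-tau * (E / E_th) ** -3.0), 1.0)
    return continuum * edge

E = np.array([0.3, 0.51, 1.0, 5.0])    # sample energies in keV
model = xray_model(E)
# At 1 keV the unabsorbed continuum is f1 + f2 = 0.0251 photons/s/cm^2/keV,
# reduced by the edge factor exp(-1.17 * (1/0.51)^-3).
```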
Two additional power-laws are also shown as dashed lines with indices of $`1.7\pm 0.36`$, representing the 1-$`\sigma `$ errors from our fit added linearly to the uncertainty of $`\pm 0.2`$ corresponding to a range in $`E(B-V)`$ of $`\pm 0.01`$, which is our estimate of the uncertainty in the extinction for 3C 273. It is clear from Fig. 6 and Fig. 2 that the energy distribution of 3C 273 peaks within the HUT spectral range, a little longward of the Lyman limit in the quasar rest frame, and that the short wavelength power-law that fits the HUT data extrapolates well to fit the soft X-ray data from ROSAT and BBXRT. There still remains a considerable gap of one decade in frequency that cannot be filled for 3C 273, however, because the ultraviolet spectrum cannot be extended past the Galactic Lyman limit at 787 Å in the quasar rest frame, and the soft X-ray spectrum is severely attenuated by interstellar absorption below 0.2 keV. Indeed this extreme ultraviolet gap cannot readily be decreased in the spectrum of any single quasar, since the higher redshifts that would allow the ultraviolet spectrum to be extended will at the same time limit the observed X-ray spectrum to higher rest-energies. We therefore turn to a composite quasar spectrum assembled from many different objects to try to reduce the size of the extreme ultraviolet gap. In a previous paper (Zheng et al. 1997) we have presented the results of an analysis of HST FOS spectra of 101 quasars, most with redshifts from 0.33 to 1.5, with a few high redshift objects with $`z`$ up to 3.6. The composite quasar spectrum was also fit with a broken power-law, the break occurring around 1050 Å in the rest frame. The short wavelength component was found to have $`\alpha =2.0`$ for the total sample (which is subject to unknown selection effects), while the subset of 60 radio-loud quasars had $`\alpha =2.2`$ and the subset of 41 radio-quiet quasars had $`\alpha =1.8`$. In the lower panel of Fig.
6 we plot the radio-loud composite quasar spectrum from Zheng et al. (1997) scaled to match the flux observed for 3C 273 at 1000 Å, near the break point in both spectra. Also shown as a solid line (with a break at 2 keV) is a composite quasar X-ray spectrum for radio-loud quasars from Laor et al. (1997), which has been scaled in flux to match our UV-optical composite using the average $`\alpha _{ox}`$ of 1.445 from the sample of Laor et al.

Fig. 5.— Upper panel: the best-fit model to the Astro-1 BBXRT X-ray spectrum. As described in §4, the solid line is a dual power law model with Galactic absorption and an empirical absorption edge at 0.5 keV folded through the BBXRT response function. The error bars are 1$`\sigma `$. Lower panel: residuals to the fit.

TABLE 4
Astro-1 Emission Lines

| Feature | $`\lambda _{obs}`$ (Å) | Flux<sup>a</sup> | FWHM (km s<sup>-1</sup>) |
| --- | --- | --- | --- |
| S VI, $`\lambda `$933.38 | 1079.42 ± 0.20 | 2.0 ± 1.0 | 4905 ± 721 |
| S VI, $`\lambda `$944.52 | 1092.32 ± 0.20 | 1.0 ± 0.5 | 4905 ± 721 |
| C III, $`\lambda `$977.03 | 1129.95 ± 0.20 | 3.8 ± 1.2 | 4905 ± 721 |
| N III, $`\lambda `$991.00 | 1146.12 ± 0.20 | 2.9 ± 1.1 | 4905 ± 721 |
| O VI, $`\lambda `$1034.00 | 1191.46 ± 0.20 | 16.1 ± 3.9 | 4225 ± 776 |
| O VI, $`\lambda `$1034.00 | 1191.46 ± 0.20 | 13.7 ± 5.5 | 10554 ± 723 |
| O VI, $`\lambda `$1034.00<sup>b</sup> | ⋯ | 29.8 ± 6.8 | ⋯ |
| No ID | 1243.66 ± 1.15 | 5.3 ± 1.1 | 4905 ± 721 |
| C III, $`\lambda `$1175.70 | 1360.28 ± 0.20 | 6.6 ± 1.8 | 4905 ± 721 |
| Ly$`\alpha `$, $`\lambda `$1215.67 | 1406.21 ± 0.20 | 56.3 ± 3.8 | 2782 ± 113 |
| Ly$`\alpha `$, $`\lambda `$1215.67 | 1398.72 ± 1.66 | 49.8 ± 5.1 | 8375 ± 733 |
| Ly$`\alpha `$, $`\lambda `$1215.67<sup>b</sup> | ⋯ | 106.1 ± 6.3 | ⋯ |
| N V, $`\lambda `$1240.15 | 1431.80 ± 0.52 | 6.1 ± 1.8 | 2782 ± 308 |
| N V, $`\lambda `$1240.15 | 1434.68 ± 1.89 | 41.0 ± 3.8 | 10554 ± 723 |
| N V, $`\lambda `$1240.15<sup>b</sup> | ⋯ | 47.1 ± 4.2 | ⋯ |
| O I, $`\lambda `$1304.35 | 1508.05 ± 0.20 | 5.8 ± 1.1 | 4905 ± 721 |
| C II, $`\lambda `$1335.30 | 1544.34 ± 0.20 | 2.1 ± 1.2 | 4905 ± 721 |
| Si IV, $`\lambda `$1402.77 | 1620.89 ± 1.62 | 15.1 ± 2.4 | 6978 ± 1325 |
| C IV, $`\lambda `$1549 | 1789.44 ± 0.52 | 17.8 ± 2.5 | 2782 ± 308 |
| C IV, $`\lambda `$1549 | 1793.03 ± 1.89 | 36.1 ± 4.1 | 10554 ± 723 |
| C IV, $`\lambda `$1549<sup>b</sup> | ⋯ | 53.9 ± 4.8 | ⋯ |

<sup>a</sup>In units of $`10^{-13}`$ erg cm<sup>-2</sup> s<sup>-1</sup>.
<sup>b</sup>Total flux for the narrow and broad components.

TABLE 6
Astro-1 Absorption Lines

| Feature | $`\lambda _{obs}`$ (Å) | $`W_\lambda `$ (Å) | FWHM<sup>a</sup> (km s<sup>-1</sup>) |
| --- | --- | --- | --- |
| O I, $`\lambda `$948.69 | 947.09 ± 0.41 | 0.4 ± 0.2 | 820 ± 162 |
| N I blend, $`\lambda `$953.00 | 953.28 ± 0.41 | 0.4 ± 0.2 | 820 ± 162 |
| P II, N I, $`\lambda `$964.00 | 964.94 ± 0.41 | 0.5 ± 0.1 | 820 ± 162 |
| C III, $`\lambda `$977.03 | 975.68 ± 0.30 | 0.7 ± 0.1 | 820 ± 162 |
| S III, $`\lambda `$1012.50 | ⋯ | <0.2<sup>b</sup> | 1147 |
| Si II, $`\lambda `$1020.70 | 1020.70 ± 0.30 | 0.1 ± 0.1 | 1162 ± 128 |
| O VI, $`\lambda `$1031.93 | 1031.93 ± 0.29 | 0.6 ± 0.1 | 1162 ± 128 |
| O VI, C II, O I, $`\lambda `$1038.00 | 1037.19 ± 0.29 | 1.5 ± 0.1 | 1344 ± 148 |
| Ar I, $`\lambda `$1048.22 | 1049.04 ± 0.39 | 0.5 ± 0.1 | 1162 ± 128 |
| Fe II, $`\lambda `$1063.18 | 1063.18 ± 0.39 | 0.5 ± 0.1 | 1162 ± 128 |
| Ar I, $`\lambda `$1066.66 | 1067.51 ± 0.39 | 0.5 ± 0.1 | 1162 ± 128 |
| N II, $`\lambda `$1084.19 | 1084.19 ± 0.72 | 0.5 ± 0.2 | 1162 ± 128 |
| Fe II, $`\lambda `$1096.88 | ⋯ | <0.2<sup>b</sup> | 779 |
| Fe II, $`\lambda `$1121.97 | 1121.97 ± 0.72 | 0.3 ± 0.1 | 779 ± 127 |
| N I, $`\lambda `$1134.63 | 1134.96 ± 0.72 | 0.4 ± 0.2 | 779 ± 127 |
| Fe II, $`\lambda `$1143.22 | 1142.81 ± 0.72 | 0.4 ± 0.1 | 779 ± 127 |
| Si II blend, $`\lambda `$1192.00 | 1191.56 ± 0.72 | 1.1 ± 0.3 | 1536 ± 497 |
| N V, $`\lambda `$1238.82 | ⋯ | 0.2<sup>c</sup> | 536 |
| N V, $`\lambda `$1242.80 | ⋯ | 0.1<sup>c</sup> | 536 |
| S II, $`\lambda `$1250.50 | 1250.40 ± 0.29 | 0.5 ± 0.1 | 536 ± 137 |
| S II, Si II, $`\lambda `$1260.00 | 1260.43 ± 0.67 | 0.4 ± 0.1 | 536 ± 137 |
| H I, $`\lambda `$1275.19 | ⋯ | 0.1<sup>c</sup> | 536 |
| C II, $`\lambda `$1335.30 | 1335.19 ± 0.34 | 1.0 ± 0.2 | 964 ± 176 |
| H I, $`\lambda `$1361.42 | ⋯ | 0.1<sup>c</sup> | 964 |
| Ni II, $`\lambda `$1370.09 | ⋯ | 0.1<sup>c</sup> | 589 |
| Si IV, $`\lambda `$1393.76 | 1393.35 ± 0.51 | 0.2 ± 0.1 | 589 ± 100 |
| Si IV, $`\lambda `$1402.77 | 1402.31 ± 0.51 | 0.1 ± 0.1 | 589 ± 100 |
| Si II, $`\lambda `$1527.17 | 1526.25 ± 0.56 | 0.7 ± 0.2 | 826 ± 225 |
| C IV, $`\lambda `$1549.50 | 1548.73 ± 0.26 | 1.3 ± 0.2 | 715 ± 137 |
| Al II, $`\lambda `$1670.79 | ⋯ | <0.4<sup>b</sup> | 554 |

<sup>a</sup>All tabulated widths are consistent with the instrumental resolution, and all features should be considered unresolved.
<sup>b</sup>The upper limit for this line assumed a feature at the nominal wavelength with a fixed FWHM as shown.
<sup>c</sup>Our fits included a feature at the nominal wavelength with the EW and FWHM fixed at the values shown.
TABLE 5
Astro-2 Emission Lines

| Feature | $`\lambda _{obs}`$ (Å) | Flux<sup>a</sup> | FWHM (km s<sup>-1</sup>) |
| --- | --- | --- | --- |
| S VI, $`\lambda `$933.38 | 1079.59 ± 0.15 | 4.2 ± 0.2 | 5145 ± 151 |
| S VI, $`\lambda `$944.52 | 1092.47 ± 0.15 | 2.1 ± 0.1 | 5145 ± 151 |
| C III, $`\lambda `$977.03 | 1130.05 ± 0.15 | 7.4 ± 0.5 | 5145 ± 151 |
| N III, $`\lambda `$991.00 | 1146.20 ± 0.15 | 6.5 ± 0.8 | 5145 ± 151 |
| O VI, $`\lambda `$1034.00 | 1191.12 ± 1.18 | 13.6 ± 9.2 | 2332 ± 590 |
| O VI, $`\lambda `$1034.00 | 1191.12 ± 1.18 | 38.7 ± 3.0 | 10416 ± 477 |
| O VI, $`\lambda `$1034.00<sup>b</sup> | ⋯ | 52.3 ± 9.6 | ⋯ |
| No ID | 1243.66 ± 1.15 | 5.6 ± 0.8 | 5145 ± 151 |
| C III, $`\lambda `$1175.70 | 1360.08 ± 0.15 | 10.0 ± 1.0 | 5145 ± 151 |
| Ly$`\alpha `$, $`\lambda `$1215.67 | 1405.94 ± 0.15 | 70.7 ± 3.3 | 2799 ± 76 |
| Ly$`\alpha `$, $`\lambda `$1215.67 | 1399.49 ± 0.70 | 93.5 ± 4.4 | 8445 ± 322 |
| Ly$`\alpha `$, $`\lambda `$1215.67<sup>b</sup> | ⋯ | 164.2 ± 5.5 | ⋯ |
| N V, $`\lambda `$1240.15 | 1433.61 ± 0.40 | 6.6 ± 1.6 | 3070 ± 213 |
| N V, $`\lambda `$1240.15 | 1433.79 ± 1.22 | 52.8 ± 3.5 | 10416 ± 477 |
| N V, $`\lambda `$1240.15<sup>b</sup> | ⋯ | 59.4 ± 3.8 | ⋯ |
| O I, $`\lambda `$1304.35 | 1507.65 ± 0.15 | 6.5 ± 0.9 | 5145 ± 151 |
| C II, $`\lambda `$1335.30 | 1543.90 ± 0.15 | 5.3 ± 1.3 | 5145 ± 151 |
| Si IV, $`\lambda `$1402.77 | 1622.38 ± 0.95 | 21.5 ± 2.0 | 6800 ± 671 |
| C IV, $`\lambda `$1549 | 1791.30 ± 0.40 | 28.6 ± 2.8 | 3070 ± 213 |
| C IV, $`\lambda `$1549 | 1791.53 ± 1.22 | 57.8 ± 3.2 | 10416 ± 477 |
| C IV, $`\lambda `$1549<sup>b</sup> | ⋯ | 86.4 ± 4.2 | ⋯ |

<sup>a</sup>In units of $`10^{-13}`$ erg cm<sup>-2</sup> s<sup>-1</sup>.
<sup>b</sup>Total flux for the narrow and broad components.
TABLE 7
Astro-2 Absorption Lines

| Feature | $`\lambda _{obs}`$ (Å) | $`W_\lambda `$ (Å) | FWHM<sup>a</sup> (km s<sup>-1</sup>) |
| --- | --- | --- | --- |
| O I, $`\lambda `$948.69 | 947.61 ± 0.47 | 0.4 ± 0.1 | 894 ± 130 |
| N I blend, $`\lambda `$953.00 | 953.28 ± 0.22 | 0.8 ± 0.1 | 894 ± 130 |
| P II, N I, $`\lambda `$964.00 | 964.04 ± 0.44 | 0.3 ± 0.1 | 894 ± 130 |
| C III, $`\lambda `$977.03 | 976.48 ± 0.15 | 1.0 ± 0.1 | 894 ± 130 |
| Si II, $`\lambda `$989.87 | 989.31 ± 0.15 | 0.5 ± 0.1 | 894 ± 130 |
| N III, $`\lambda `$991.00 | 990.44 ± 0.15 | 0.3 ± 0.1 | 894 ± 130 |
| S III, $`\lambda `$1012.50 | 1011.80 ± 0.47 | 0.6 ± 0.1 | 1202 ± 129 |
| Si II, $`\lambda `$1020.70 | 1020.29 ± 0.62 | 0.4 ± 0.1 | 1202 ± 129 |
| O VI, $`\lambda `$1031.93 | 1031.00 ± 0.41 | 0.6 ± 0.1 | 1202 ± 129 |
| O VI, C II, O I, $`\lambda `$1038.00 | 1037.43 ± 0.21 | 1.6 ± 0.1 | 1386 ± 149 |
| Ar I, $`\lambda `$1048.22 | 1051.70 ± 0.69 | 0.5 ± 0.1 | 1410 ± 318 |
| Fe II, $`\lambda `$1063.18 | 1063.74 ± 0.67 | 0.4 ± 0.1 | 1410 ± 318 |
| Ar I, $`\lambda `$1066.66 | ⋯ | <0.2<sup>b</sup> | 1410 |
| N II, $`\lambda `$1084.19 | 1083.46 ± 0.52 | 0.7 ± 0.2 | 1410 ± 318 |
| Fe II, $`\lambda `$1096.88 | ⋯ | <0.2<sup>b</sup> | 767 |
| Fe II, $`\lambda `$1121.97 | 1121.89 ± 0.46 | 0.2 ± 0.1 | 767 ± 126 |
| N I, $`\lambda `$1134.63 | 1134.83 ± 0.18 | 0.7 ± 0.1 | 767 ± 126 |
| Fe II, $`\lambda `$1143.22 | ⋯ | <0.2<sup>b</sup> | 767 |
| Si II blend, $`\lambda `$1192.00 | 1191.33 ± 0.76 | 1.9 ± 1.8 | 1654 ± 513 |
| N V, $`\lambda `$1238.82 | 1238.91 ± 0.77 | 0.2 ± 0.1 | 588 ± 143 |
| N V, $`\lambda `$1242.80 | 1242.60 ± 0.77 | 0.1 ± 0.1 | 588 ± 143 |
| S II, $`\lambda `$1250.50 | 1250.63 ± 0.64 | 0.2 ± 0.1 | 588 ± 143 |
| S II, Si II, $`\lambda `$1260.00 | 1260.75 ± 0.20 | 0.5 ± 0.1 | 588 ± 143 |
| H I, $`\lambda `$1275.19 | ⋯ | 0.1<sup>c</sup> | 588 |
| C II, $`\lambda `$1335.30 | 1334.34 ± 0.21 | 0.7 ± 0.1 | 651 ± 103 |
| H I, $`\lambda `$1361.42 | ⋯ | 0.1<sup>c</sup> | 651 |
| Ni II, $`\lambda `$1370.09 | ⋯ | 0.1<sup>c</sup> | 599 |
| Si IV, $`\lambda `$1393.76 | 1393.68 ± 0.17 | 0.5 ± 0.1 | 599 ± 100 |
| Si IV, $`\lambda `$1402.77 | 1402.63 ± 0.17 | 0.2 ± 0.1 | 599 ± 100 |
| No ID, $`\lambda `$1492.00 | 1492.28 ± 0.54 | 0.4 ± 0.1 | 599 ± 100 |
| No ID, $`\lambda `$1522.00 | 1522.85 ± 0.25 | 0.2 ± 0.1 | 246 ± 92 |
| Si II, $`\lambda `$1527.17 | 1526.68 ± 0.17 | 0.7 ± 0.1 | 453 ± 96 |
| No ID, $`\lambda `$1533.00 | 1531.85 ± 0.25 | 0.5 ± 0.1 | 453 ± 96 |
| C IV, $`\lambda `$1549.50 | 1549.23 ± 0.33 | 1.0 ± 0.2 | 884 ± 147 |
| Al II, $`\lambda `$1670.79 | 1670.24 ± 0.42 | 0.6 ± 0.2 | 539 ± 170 |

<sup>a</sup>All tabulated widths are consistent with the instrumental resolution, and all features should be considered unresolved.
<sup>b</sup>The upper limit for this line assumed a feature at the nominal wavelength with a fixed FWHM as shown.
<sup>c</sup>Our fits included a feature at the nominal wavelength with the EW and FWHM fixed at the values shown.

Fig. 6.— Upper panel: Quasi-simultaneous broad-band spectrum of 3C 273 from optical to X-rays obtained in December 1990, corrected to the rest frame. The ultraviolet and optical data have been corrected for reddening with $`E(B-V)=0.032`$, and the ROSAT and BBXRT data have been corrected for interstellar absorption with $`N_H=1.84\times 10^{20}`$ cm<sup>-2</sup>. To show the continuum shape more clearly, the UV and optical data have been smoothed with a 9-pixel boxcar filter. The best-fit power law for the short wavelength HUT data with $`\alpha =1.7`$ is indicated, along with 1 $`\sigma `$ uncertainties for the slope. Lower panel: Composite radio-loud quasar spectrum in the UV-optical from Zheng et al. (1997) and in the X-ray from Laor et al. (1997).
The UV-optical spectrum has been scaled to match the flux of 3C 273 at 1000 Å, near the break found in the composite spectrum (1050 Å rest) and in the 3C 273 spectrum (919 Å rest). The X-ray spectrum has been scaled to the UV-optical composite using the mean $`\alpha _{ox}`$ for the Laor et al. sample. The dotted and dashed curves represent the spectral fits to 14 observations of 3C 273 over a 30-day period with ROSAT (LMP95). All but two of these observations (dashed) gave $`\alpha _{soft}=1.64`$–1.78. Finally we have also plotted the results of an extensive set of observations of 3C 273 with ROSAT in 1992 December to 1993 January by LMP95. These authors obtained spectra at 2-day intervals over a period of a month. They found that the only successful fits to their spectra were obtained with a sum of two power laws, where they constrained the hard component to have $`\alpha _h=0.5`$ as determined by EXOSAT and Ginga 2–10 keV observations, and also constrained the interstellar absorption at low energies to be given by the Galactic value of $`N_H=1.84\times 10^{20}`$ cm<sup>-2</sup>. Their soft component was then found to have $`\alpha _s=1.7`$ for all but two of their observations, where they found $`\alpha _s=1.4`$. We have plotted the fits given in Table 6 of LMP95 as dotted-line curves in Fig. 6, with the two discrepant observations as dashed-line curves. It is immediately apparent that these observations of 3C 273 match quite well the composite radio-loud X-ray spectrum of Laor et al. (1997) as we have scaled it to match the UV-optical composite, which itself was scaled to match our observation of 3C 273 at 1000 Å. They are also seen to extrapolate very well to match the extreme ultraviolet part of our composite UV-optical spectrum (Zheng et al. 1997). The upper panel of Fig.
6 thus demonstrates the ultraviolet peak in the energy distribution of a single quasar, 3C 273, and a likely power-law connection extending from the Lyman limit to the soft X-ray region, with $`\alpha _{EUV}=1.7\pm 0.36`$, though a substantial extreme ultraviolet gap necessarily remains unfilled. The lower panel of Fig. 6, on the other hand, makes use of composite quasar spectra constructed in the UV-optical and the X-ray regions (suitably scaled), along with soft X-ray data for 3C 273 (LMP95), to reveal a very similar spectrum, with a significantly smaller extreme ultraviolet gap remaining unfilled. The striking similarity between the 3C 273 spectrum and the spectral composite provides genuine physical support for the reality of the composite spectral shape derived by Zheng et al. (1997). (We note that 3C 273 made no contribution to the shape of the composite, as only quasars with $`z>0.33`$ were included in the sample used.) One can argue that composite spectra in and of themselves are unphysical. Zheng et al. (1997) note that there is a wide dispersion in spectral indices among individual quasars. Koratkar & Blaes (1999) show several examples of widely differing QSO spectral shapes. So, there is no guarantee that the composite assembled by Zheng et al. (1997) actually represents a real spectrum. The fact that the spectrum of a single object such as 3C 273 exhibits such remarkably similar characteristics bolsters the case for the broad applicability of the composite spectrum. Taken together, the composite spectrum and the 3C 273 spectrum provide compelling evidence for a power-law spectrum from the Lyman limit to the soft X-ray band in quasars, with a typical value of $`\alpha =1.7`$–2.2.

## 5. Accretion Disk Models

To place our empirical results on the continuum spectral shape of 3C 273 in a physical context, we compare our spectra to accretion disk models.
Such models are well suited to producing spectra that peak in the far-UV, and the spectral break near the Lyman limit in the quasar rest frame suggests a mechanism related to the large change in opacity at the intrinsic Lyman edge. The presence of a break rather than an edge, however, plus the seeming extrapolation of the high-frequency power law to soft X-ray energies, are suggestive of the appearance of an accretion disk spectrum with an intrinsic Lyman edge feature that has been Comptonized by an external medium. Comptonization (and relativistic effects in the disk) smear out any intrinsic features in the disk spectrum as well as adding a power-law high-energy tail (Czerny & Zbyszewska 1991). As we will note below, however, the observed break is still sharper than what can be accommodated by our simple, semi-empirical models. To fit the broad-band spectral shape, we compute disk spectra in the Schwarzschild metric that are a sum of blackbody spectra representing rings in the disk running from an inner radius of 6 gravitational radii ($`r_G=GM_{BH}/c^2`$) to 1000 $`r_G`$. The blackbody spectra are modified by an empirical Lyman-limit feature as described by Lee, Kriss, & Davidsen (1992) and Lee (1995). For the Comptonization we assume that a hot spherical medium with an optical depth to Compton scattering of $`\tau _e=1`$ surrounds the disk. We use the formulation of Czerny & Zbyszewska (1991) as described by Lee, Kriss, & Davidsen (1992) to calculate the effects of the Comptonization.
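The core of such a model is the ring sum itself. The sketch below shows only that ingredient, under simplifying assumptions: a Newtonian thin-disk temperature profile T(r) ∝ r^(−3/4) and an illustrative inner temperature T_in; the actual model in the text is computed in the Schwarzschild metric and includes the Lyman-limit feature and Comptonization, none of which appears here:

```python
import numpy as np

# Minimal multicolor-disk sketch: sum blackbody rings from r_in = 6 r_G
# (taken as the unit radius here) out to 1000 r_G. The temperature profile
# T(r) = T_in * (r/r_in)^(-3/4) and T_in = 3e4 K are illustrative assumptions.

H = 6.626e-27      # Planck constant, erg s (cgs)
K = 1.381e-16      # Boltzmann constant, erg/K
C = 2.998e10       # speed of light, cm/s

def disk_spectrum(nu, T_in=3.0e4, r_in=1.0, r_out=1000.0 / 6.0, n_rings=400):
    """Summed ring spectrum L_nu (arbitrary normalization); nu in Hz."""
    r = np.logspace(np.log10(r_in), np.log10(r_out), n_rings)
    T = T_in * (r / r_in) ** -0.75
    nu_grid, T_grid = np.meshgrid(nu, T)
    with np.errstate(over="ignore"):   # Wien tail of cool rings overflows to 0
        planck = (2 * H * nu_grid**3 / C**2) / np.expm1(H * nu_grid / (K * T_grid))
    ring_area = 2 * np.pi * r * np.gradient(r)
    return (planck * ring_area[:, None]).sum(axis=0)

nu = np.logspace(12.0, 16.0, 300)
L = disk_spectrum(nu)
# At low frequencies every ring is in the Rayleigh-Jeans regime, so the sum
# approaches L_nu ~ nu^2; at high frequencies the hottest (inner) ring's
# Wien cutoff dominates, so the summed spectrum peaks in between.
```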
The spectral index of the high energy tail in this calculation depends almost entirely on the Compton $`y`$ parameter, $`y\propto \tau _e^2T_c`$, where $`T_c`$ is the temperature of the Comptonizing medium (Lee 1995), so the free parameters in our accretion disk model are the mass accretion rate, $`\dot{m}`$, the mass of the central black hole, $`M_{BH}`$, the optical depth at the Lyman edge, $`\tau _{Ly}`$, the inclination of the disk, $`i`$, and $`T_c`$ (since we keep $`\tau _e`$ fixed at 1). Our fits use the same wavelength intervals as the power-law and broken power-law fits in §3.2 and the same emission and absorption components. All fits have the extinction fixed at $`E(B-V)=0.032`$. Since there is little data in our HUT spectrum shortward of the spectral break at ∼920 Å in the rest frame, it is the slope and intensity of the soft X-ray spectrum that largely determines $`T_c`$. This has a best-fit value of $`T_c=4.1\times 10^8`$ K in all our fits, regardless of the other parameters. The shape of the intrinsic disk spectrum is invariant for a fixed ratio of $`\dot{m}/M_{BH}^2`$; this ratio is determined largely by the wavelength of the peak in the spectral energy distribution. The normalization is determined by the mass accretion rate $`\dot{m}`$. For conversion of flux to luminosity, we assume a Hubble constant $`H_0=75\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ and a deceleration parameter $`q_0=0`$. The strength of the spectral break in our model is related both to the peak in the spectral energy distribution as well as the optical depth at the Lyman edge, $`\tau _{Ly}`$. The break wavelength and its sharpness constrain the best-fit inclination $`i`$. (Higher inclinations lead to breaks at shorter wavelengths that are more smeared out.) Our best fit to the Astro-1 spectrum, for which simultaneous X-ray data and nearly simultaneous near-UV and optical data are also available, yields $`\chi ^2=1609`$ for 1622 data points and 82 free parameters.
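As an aside, the flux-to-luminosity conversion under the stated cosmology is a one-line formula: for $`q_0=0`$ the luminosity distance has the closed form d_L = (c/H0) z (1 + z/2). The redshift follows from the text (the 912 Å Galactic Lyman limit corresponds to 787 Å in the quasar rest frame):

```python
# Luminosity distance for H0 = 75 km/s/Mpc and q0 = 0, using the closed form
# d_L = (c/H0) * z * (1 + z/2). The redshift is derived from the wavelengths
# quoted in the text: 912 Å observed = 787 Å rest frame.

C_KM_S = 299792.458             # speed of light, km/s
H0 = 75.0                       # km/s/Mpc
z = 912.0 / 787.0 - 1.0         # ≈ 0.159

d_L = (C_KM_S / H0) * z * (1.0 + z / 2.0)    # Mpc
print(f"z = {z:.3f}, d_L = {d_L:.0f} Mpc")   # d_L ≈ 685 Mpc
```

Luminosities then follow from L = 4π d_L² f with d_L converted to cm.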
The best-fit parameters are summarized in the first column of Table 8. (The emission and absorption line parameters have nearly identical values to those presented from our previous empirical fits.) The quoted errors are 90% confidence for a single interesting parameter, and they correspond to $`\mathrm{\Delta }\chi ^2=4.6`$. The Comptonized accretion disk fits significantly better than a single power law at the nominal value for Galactic extinction, but worse than the best fitting broken power law. The sharp break present in the spectrum is difficult to match with the smoother shape of our simple accretion disk model, even with a Lyman-limit feature.

TABLE 8
Accretion Disk Models for 3C 273

| Parameter | Astro-1 | Astro-2 |
| --- | --- | --- |
| $`M_{BH}(10^8M_{\odot })`$ | $`7.1\pm 0.3`$ | $`12\pm 0.4`$ |
| $`\dot{m}(M_{\odot }\mathrm{yr}^{-1})`$ | $`13\pm 0.13`$ | $`12\pm 0.12`$ |
| $`i(\mathrm{deg})`$ | $`60_{-10}^{+20}`$ | $`60_{-40}^{+25}`$ |
| $`\tau _{Ly}`$ | $`0.5\pm 0.5`$ | $`0.0_{-0.0}^{+0.2}`$ |
| $`T_8(10^8\mathrm{K})`$ | $`4.1\pm 0.2`$ | $`4.1\pm 0.2`$ |
| $`E(B-V)`$ | 0.032 | 0.032 |
| $`\chi ^2/dof`$ | 1609/1540 | 1752/1497 |

The best fit to the Astro-2 spectrum is also summarized in Table 8. The most significant difference between the Astro-1 and the Astro-2 fits is that since the Astro-2 spectrum is redder overall, its fit favors a higher mass black hole. In all other respects, however, the Astro-1 and Astro-2 disk fits are similar. Reflecting the nearly identical flux levels of the two observations, the mass accretion rates are comparable. Both fits favor an inclination $`i=60^{\circ }`$, and the Lyman-limit optical depth in each is low. For Astro-1, $`\tau _{Ly}=0.5`$, and $`\tau _{Ly}=0.0`$ also gives an acceptable fit; for Astro-2, $`\tau _{Ly}=0.0`$. Thus in the context of our simple accretion disk model a Lyman edge feature is not formally required to achieve an acceptable fit to our data for either Astro-1 or Astro-2. 
However, the fact that a broken power-law actually provides a better fit, and that the break is very close to the Lyman limit, hints that there probably is an effect associated with an opacity change at the Lyman limit. We suggest that future more sophisticated disk models should endeavor to match this feature. Fig. 7 shows the best-fit accretion disk spectrum in comparison to the HUT Astro-1 data and longer-wavelength points. Here one can see that the rather smooth disk model does not fully accommodate the sharp spectral break at $`\sim `$1000 Å. In the near-UV range the accretion disk continuum falls below the data largely due to Balmer continuum and Fe ii line emission that is not included in our model. At longer wavelengths, the accretion disk falls below the data even more as we have not included any emission components (either a power law or dust emission) that are mainly responsible for the near-IR continuum. Fig. 8 again shows the best fit over the whole frequency range from the hard X-ray to 7000 Å in the visible. The Comptonized tail of the accretion disk spectrum provides an excellent match to the soft X-ray excess present in the ROSAT and the BBXRT data.

Fig. 7.— Quasi-simultaneous UV-optical spectrum of 3C 273 from 1990 December, including data from HUT, IUE, and the KPNO 2.1 m telescope (kindly provided by R. Green), corrected for reddening with $`E\left(B-V\right)=0.032`$. The data have been smoothed with a 9-pixel boxcar filter. The smooth curve is the best-fitting accretion disk model as described in Section 5, with parameters given in Table 8.

Several aspects of our accretion disk fits illustrate the shortcomings of our simple model. Our best-fit inclinations of $`i=60^{\circ }`$ are a bit high given the superluminal jet in 3C 273 that suggests the disk normal should be directed close to our line of sight. However, inclinations in superluminal sources can be as high as 30–45°, and this provides an acceptable fit for our Astro-2 observations. 
An inclination of $`0^{\circ }`$ is excluded at high confidence in both data sets ($`\mathrm{\Delta }\chi ^2=27`$ for the Astro-1 data). Also, as mentioned earlier, the inclination is determined mostly by the wavelength of the spectral break. A Kerr disk would produce bluer breaks that are more smeared out at much lower inclination than in a Schwarzschild disk (e.g., see Laor & Netzer (1989)). Both the Astro-1 and Astro-2 fits have accretion rates that exceed the thin-disk limit of $`L/L_{Edd}\lesssim 0.3`$ for a Schwarzschild metric. For the Astro-1 fit, $`L/L_{Edd}=0.46`$; for Astro-2, $`L/L_{Edd}=0.25`$. A Kerr metric would also alleviate this problem: the limit for a thin disk in this metric is higher, and, at the lower inclinations it would require, the accretion rate would also be lower. Finally, while the Astro-1 and Astro-2 fits have similar mass accretion rates, their best-fit black hole masses differ by nearly a factor of two. This illustrates a fundamental difficulty of simple steady-state accretion disk models in dealing with variability: it occurs on time scales more rapid than the viscous time scale that governs the applicability of steady-state models. In a steady-state model for a given object, the only variable that can vary freely is the mass accretion rate. Therefore, one should observe correlated changes in flux and color, with brighter states exhibiting bluer disk spectra. However, the Astro-1 and Astro-2 data are nearly identical in flux, but they have significantly different colors. Without a good model for accommodating variability in disk models, one can see that factors of several uncertainty in the actual physical parameters are easily present in our results. It is reassuring to note, however, that our values for the black hole mass based on accretion disk models ($`7.1`$–$`12\times 10^8M_{\odot }`$) bracket the independent value of $`7.4\times 10^8M_{\odot }`$ obtained by Laor (1998) using reverberation-mapping data. 
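The quoted Eddington ratios are easy to reproduce. The sketch below assumes the standard Schwarzschild thin-disk radiative efficiency, η = 1 − √(8/9) ≈ 0.057, and the usual electron-scattering Eddington luminosity (both assumed values; the paper does not spell out the efficiency it uses):

```python
import math

M_SUN_G = 1.989e33          # g
YEAR_S = 3.156e7            # s
C_CM_S = 2.998e10           # cm/s
L_EDD_PER_MSUN = 1.26e38    # erg/s per solar mass, electron-scattering opacity
ETA_SCHW = 1.0 - math.sqrt(8.0 / 9.0)   # ~0.057, Schwarzschild thin-disk efficiency

def eddington_ratio(m_bh_msun, mdot_msun_yr, eta=ETA_SCHW):
    """L / L_Edd for a disk radiating L = eta * mdot * c^2."""
    mdot_g_s = mdot_msun_yr * M_SUN_G / YEAR_S
    lum = eta * mdot_g_s * C_CM_S**2
    l_edd = L_EDD_PER_MSUN * m_bh_msun
    return lum / l_edd

# Table 8 best-fit values
astro1 = eddington_ratio(7.1e8, 13.0)   # paper quotes L/L_Edd = 0.46
astro2 = eddington_ratio(12.0e8, 12.0)  # paper quotes L/L_Edd = 0.25
```

Under these assumptions the two ratios come out near 0.47 and 0.26, consistent with the values quoted in the text.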
We conclude that our accretion disk models of the Astro-1 and Astro-2 spectra provide an overall qualitative physical characterization of the 3C 273 spectrum. While the models have quantitative shortcomings, they provide an empirical guide for potentially more sophisticated models. The most important aspects of our simple description in terms of blackbody emission and Comptonization are that it (1) accounts for the peak of the ultraviolet spectrum, (2) produces a spectral break in the far-UV, and (3) has a hard power-law tail that extends to the soft-X-ray band and matches the observed soft-X-ray spectrum.

## 6. Summary

We have presented absolutely-calibrated ultraviolet spectrophotometry over the 900–1800 Å range for the quasar 3C 273, obtained with the Hopkins Ultraviolet Telescope on the Astro-1 mission in December 1990 and on the Astro-2 mission in March 1995. In both observations the continuum displays a change of slope near the Lyman limit in the quasar rest frame. At longer UV wavelengths the continuum is well-represented by a power-law of $`\alpha _1=0.5`$. Shortward of the Lyman limit, however, the continuum slope has $`\alpha _2=1.7\pm 0.36`$, where the uncertainty includes our uncertainty about $`E(B-V)`$ at the level of $`\pm 0.01`$. The energy distribution per logarithmic frequency interval $`\nu f_\nu `$ therefore has a peak close to the quasar Lyman limit. The short wavelength extreme UV power-law extrapolates very well to match the soft x-ray spectrum of 3C 273 obtained nearly simultaneously in the case of Astro-1 with BBXRT and ROSAT. The soft x-ray data themselves give $`\alpha _s=1.7(\pm 0.1)`$ (LMP95), so the combined UV and X-ray data suggest $`\alpha _{UVX}=1.7`$. 
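The statement that $`\nu f_\nu `$ peaks at the Lyman limit follows directly from the two measured slopes: with $`f_\nu \propto \nu ^{-\alpha }`$, $`\nu f_\nu \propto \nu ^{1-\alpha }`$ rises while $`\alpha =0.5<1`$ and falls once $`\alpha =1.7>1`$. A minimal numerical check of this piecewise power law:

```python
NU_LL = 3.29e15                      # Hz, Lyman-limit frequency
ALPHA_LONG, ALPHA_SHORT = 0.5, 1.7   # measured slopes on either side of the break

def f_nu(nu):
    """Broken power law f_nu ~ nu^-alpha, normalized to 1 at the break."""
    alpha = ALPHA_LONG if nu < NU_LL else ALPHA_SHORT
    return (nu / NU_LL) ** (-alpha)

# nu * f_nu over two decades on each side of the break
grid = [NU_LL * 10 ** (-2 + 4 * i / 400) for i in range(401)]
nufnu = [nu * f_nu(nu) for nu in grid]
peak_nu = grid[nufnu.index(max(nufnu))]
```

The maximum of the grid lands at the break frequency itself, i.e. the spectral energy distribution peaks at the Lyman limit.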
While some models for the photoionizing extreme ultraviolet radiation from quasars have a peak — the so-called “big blue bump” — at a wavelength much shorter than the Lyman limit at 912 Å with a peak flux much higher than what is actually observed at both longer and shorter wavelengths (e.g., Mathews & Ferland (1987); Bechtold et al. (1987); Gondhalekar et al. (1992)), we find that in 3C 273 the peak of the big blue bump occurs very close to the Lyman limit. The photoionizing flux from 3C 273 is given by $`f_\nu =f_{LL}(\nu /\nu _{LL})^{-1.7}`$ extending from the Lyman limit to 1 keV, although there is still a gap in the data covering one decade of frequency in the extreme ultraviolet. Analysis of composite quasar spectra constructed in the ultraviolet (Zheng et al. (1997)) and the X-ray (Laor et al. (1997)) bands leads to the same result. The similarity between the spectrum of a single object such as 3C 273 and these composites lends physical credence to their applicability. It therefore appears to be true of quasars in general that the peak of the continuum energy distribution occurs near the Lyman limit, and that the ionizing continuum is well-represented by a power-law of energy index $`\alpha _{UVX}=1.7`$–2.2 extending to about 1 keV, where a separate hard X-ray component begins to dominate at higher energies. The “extreme ultraviolet gap” in the composite quasar spectrum is now only half a decade in frequency, however. This can be reduced somewhat further by extending the work of Zheng et al. (1997) to include more observations of high redshift quasars in the far ultraviolet band. The general shape of the optical, ultraviolet, and soft x-ray spectrum of 3C 273 is consistent with that of an optically thick accretion disk around a massive black hole, which is itself surrounded by a hot medium that modifies the emergent spectrum through the inverse Compton effect, as suggested by Czerny & Zbyszewska (1991), Lee et al. (1992) and Lee (1995). 
In our model the spectral break is due to the thermal peak of the accretion disk spectral energy distribution with a minor contribution from Lyman edge absorption. Both features are broadened and blurred by Comptonization in a surrounding hot medium, perhaps a corona or wind emanating from the disk. Comptonization of the thermal photons from the disk also produces the power-law tail extending to the soft X-ray region (e.g., Czerny & Elvis (1987); Maraschi & Molendi (1990); Ross, Fabian, & Mineshige (1992)). Our simple model is unable to fully account for the sharpness of the spectral break, however, and we suggest that a more sophisticated treatment of the Lyman-edge region may eventually provide a better match to these data. Assuming a Schwarzschild black hole with an inclination of 60 degrees, the UV spectrum is fit with a black hole mass $`M_{bh}=7\times 10^8M_{\odot }`$ with an accretion rate $`\dot{M}=13M_{\odot }/\mathrm{yr}`$. Superposed on the disk spectrum is an empirically determined Lyman edge with $`\tau _{Ly}=0.5`$. The hot corona or wind is required to have a Compton parameter $`y\simeq 1`$, which is obtained, for example, with $`\tau _{es}=1`$ and $`T_e=4\times 10^8`$ K. A temperature of this order appears plausible for a corona where Compton cooling balances Compton heating by the hard x-ray flux in 3C 273. These results are quite similar to those we found previously by fitting similar models to a composite quasar spectrum (Zheng et al. (1997)). There we found, assuming a typical inclination of 30 degrees, a black hole mass $`M_{bh}=1.4\times 10^9M_{\odot }`$ and an accretion rate $`\dot{M}=2.8M_{\odot }/\mathrm{yr}`$. While the best-fit parameters would change somewhat in the more realistic case of a rotating (Kerr) black hole (which we have not calculated), we believe these results provide strong evidence in favor of the hypothesis that quasars are powered by accretion onto massive black holes. We thank R. Green for providing the optical data, J. Kruk and C. 
Bowers for help with the Astro-1 data, and K. Weaver for assistance with the X-ray data reduction. The Hopkins Ultraviolet Telescope project has been supported by NASA contract NAS–5-27000 to the Johns Hopkins University.
# Order-out-of-disorder in a gas of elastic quantum strings in 2+1 dimensions.

## Abstract

A limiting case of a dynamical stripe state which is of potential significance to cuprate superconductors is considered: a gas of elastic quantum strings in 2+1 dimensions, interacting merely via a hard-core condition. It is demonstrated that this gas solidifies always, by a mechanism which is the quantum analogue of the entropic interactions known from soft condensed matter physics.

The analysis of systems of quantum particles has been traditionally the focus point of quantum many body theory. On the other hand, much less is known about systems composed of extended objects. Here I will analyze one of the simplest examples of such a system: a gas of quantum strings with finite line tension, embedded in 2+1 dimensional space-time. A motivation to study this problem is found in the context of the cuprate stripes. It is popular to view these stripes as preformed line-like textures which can either order in a regular pattern, or stay in a disordered state due to strong quantum fluctuations. The question arises whether it is possible to quantum melt a system of completely intact, infinitely long stripes. Even in this limit the stripes themselves can still execute quantum meandering motions and a consensus has been growing that a single stripe is like a quantum string with finite line-tension. I define the ideal string gas as the low density limit where the width of the strings can be neglected, while the strings only interact via the requirement that they cannot intersect. This is obviously the limit where quantum kinetic energy is most important. I will show that in 2+1 dimensions even in this limit this string system turns into a solid at zero temperature. This solidification is driven by the quantum-mechanical analogue of the entropic interactions known from statistical mechanics. 
In a system with steric interactions between its constituents, entropy is paid at collisions in the classical system and kinetic energy in the quantum system. This causes an effective repulsion and these ‘quantum entropic’ interactions dominate to such an extent in the string gas that they cause it to solidify always. In the path-integral representation, a quantum-mechanical problem of interacting particles becomes equivalent to a statistical physics problem of interacting elastic lines (‘worldlines’). Likewise, the quantum string gas becomes equivalent to the statistical physics problem of a stack of elastic membranes (‘worldsheets’) which do not interact except for the requirement that the membranes do not intersect. A seminal contribution in the study of entropic interactions in classical systems composed of extended entities is the analysis by Helfrich of a system of extrinsic curvature membranes in 3D, interacting only via an excluded volume constraint. I will illustrate this method in the quantum context by analyzing the hard-core bose gas in 1+1D, which is closely related to Helfrich’s extrinsic curvature membranes in 3D. The string gas will turn out to be a straightforward, but non-trivial extension of the bose gas: different from the latter, the quantum entropic interactions of the string gas are driven by long wavelength fluctuations. To acquire some insights in the Helfrich method in the context of quantum mechanics, consider the familiar problem of hard-core (but otherwise non-interacting) bosons in 1+1D. This is solved by mapping onto a non-interacting spinless-fermion gas. Although mathematically trivial, this problem does exhibit the conceptual ambiguity associated with Luttinger liquids. 
On the one hand, it is clearly a gas of particles characterized by a kinetic scale $`E_F`$, while at the same time the long wavelength density-density correlator exhibits the algebraic decay characteristic for a harmonic crystal in 1+1D: $`<n(x)n(0)>\sim cos(2k_Fx)/x^2`$. The concept of entropic interaction offers a simple explanation. The hard-core bose gas at zero temperature corresponds with the statistical physics problem of a gas of non-intersecting elastic lines embedded in 2D space-time, which are directed along the time direction. The space-like displacement of the $`i`$-th worldline is parametrized in terms of a field $`\varphi _i(\tau )`$ ($`\tau `$ is imaginary time) and the partition function is, $`Z`$ $`=`$ $`\mathrm{\Pi }_{i=1}^N\mathrm{\Pi }_\tau {\displaystyle \int 𝑑\varphi _i(\tau )e^{-S/\hbar }},`$ (1) $`S`$ $`=`$ $`{\displaystyle \int 𝑑\tau \sum _i\frac{M}{2}(\partial _\tau \varphi _i)^2},`$ (2) supplemented by the avoidance condition, $$\varphi _1<\varphi _2<\cdots <\varphi _N.$$ (3) The hard-core condition Eq.(3) renders this to be a highly non-trivial problem. Helfrich considered the related classical problem of a stack of linearized and directed extrinsic curvature membranes embedded in 3D space. Although this is a higher dimensional problem, the action depends on double derivatives instead of the single derivatives in Eq. (2), $`(\partial _\mu \varphi )^2\to (\partial _\mu ^2\varphi )^2`$, and it follows from powercounting that this problem is equivalent to the hard-core bose gas in the present context. In order to determine the ‘entropic’ elastic modulus at long wavelength Helfrich introduced the following construction. Assume that the long wavelength modulus $`B_0`$ is finite. 
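The quoted $`<n(x)n(0)>\sim cos(2k_Fx)/x^2`$ decay can be checked directly in the fermionized picture: for free spinless fermions on a ring, Wick's theorem gives the connected density correlator as $`-|G(x)|^2`$ in terms of the one-particle propagator $`G`$. The lattice size and filling below are arbitrary choices for illustration:

```python
import math

L, N = 200, 50          # sites and fermions (quarter filling), illustrative values
n = N / L
k_F = math.pi * n       # Fermi momentum of the equivalent spinless-fermion gas

def G(x):
    """Equal-time propagator <c_0^dag c_x> with the N lowest momenta filled
    (Dirichlet-kernel closed form on an L-site ring)."""
    if x % L == 0:
        return n
    return math.sin(math.pi * N * x / L) / (L * math.sin(math.pi * x / L))

def density_corr(x):
    """Connected <n(0)n(x)> for x != 0: Wick's theorem gives -|G(x)|^2,
    an oscillation at wavevector 2 k_F under a 1/x^2 envelope."""
    return -G(x) ** 2
```

The correlator vanishes at the nodes of $`sin(k_Fx)`$ (here every 4 sites) and its envelope falls off as $`1/x^2`$, reproducing the harmonic-crystal-like decay quoted above.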
For the bose gas this implies that the long wavelength action is that of a 1+1D harmonic solid, $$S_{eff}=\frac{1}{2}\int 𝑑\tau 𝑑x\left[\rho (\partial _\tau \psi )^2+B_0(\partial _x\psi )^2\right],$$ (4) where $`\psi (x,\tau )`$ is a coarse grained long-wavelength displacement field, $`\rho =M/d`$ the mass density, and $`d`$ the average interworldline distance ($`n=1/d`$ is the density). Obviously, for finite $`B_0`$ fluctuations are suppressed relative to the case that $`B_0`$ vanishes and this cost of kinetic energy in the quantum problem (entropy in the classical problem) raises the free-energy. Define this ‘free-energy of membrane joining’ as $$\mathrm{\Delta }F(B_0)=F(B_0)-F(B_0=0),$$ (5) At the same time, by general principle it has to be that the ‘true’ long wavelength modulus $`B`$ in the $`x`$ direction should satisfy ($`V`$ is the volume), $$B=d^2\frac{\partial ^2(\mathrm{\Delta }F(B_0)/V)}{\partial d^2}.$$ (6) In case of the steric interactions, the only source of long wavelength rigidity in the space direction is the fluctuation contribution to $`\mathrm{\Delta }F`$. This means that $`B_0=B`$ and $`B`$ can be self-consistently determined from the differential equation, Eq. (6). In fact, the only ambiguity in this procedure is the choice for the short distance cut-off for the fluctuations in the $`x`$ direction, which is expected to be proportional to the distance between the worldlines, $`x_{min}=\eta d`$. The shortcoming of the method is that mode-couplings are completely neglected and this is not quite correct since the outcomes do depend crucially on short wavelength fluctuations. However, it appears that these effects can be absorbed in the non-universal ‘fudge-factor’ $`\eta `$, giving rise to changes in numerical prefactors without affecting the dependence of $`B`$ on the dimensionful quantities in the problem. The free energy difference for the bose gas, Eq. (5), is easily computed from the Gaussian action Eq. 
(4) and expanding up to leading order in $`\lambda =(\sqrt{B}\tau _0)/(\sqrt{\rho }d)`$ ($`\tau _0`$ is the cut-off time), becoming small in the low density limit, $$\frac{\mathrm{\Delta }F}{V}=\frac{\pi \hbar }{4\eta ^2}\sqrt{\frac{B}{M}}\frac{1}{d^{3/2}}+O(\lambda ^2).$$ (7) Inserting Eq. (7) on the r.h.s. of the self-consistency equation Eq. (6) and solving the differential equation up to leading order in $`\lambda `$ yields, $$B=\frac{9\pi ^2}{\eta ^4}\frac{\hbar ^2}{Md^3}.$$ (8) It is easily checked that this corresponds with the elasticity modulus appearing in the bosonized action of the hard-core boson problem, taking $`\eta =\sqrt{6}`$. Hence, the space-like rigidity of the bose gas at long wavelength can be understood as a consequence of entropic interactions living in Euclidean space-time. Let us now turn to the string-gas problem. In fact, the string gas in 2+1D is related to the hard-core bose gas in 1+1D: the latter can be viewed as the compactified version of the former. Imagine that the hard-core bose gas lives actually in 2+1D where the additional dimension $`y`$ is rolled up to a cylinder with a compactification radius $`R_y`$ of order of the lattice constant $`a`$, while the bosons are spread out in elastic strings wrapped around the y-axis. Let $`R_y`$ go to infinity. This has the effect that the embedding space becomes $`2+1`$ dimensional, while the boson worldlines spread out in string worldsheets. This ‘directed string-gas’ is not yet the one of interest, since the worldsheets are not only directed along the imaginary time directions (as required by quantum mechanics) but also in the $`xy`$ plane (Fig. 1a). The difficulty is that in the string gas dislocations can occur (Fig. 1b), and if these proliferate they will destroy the generic long range order of the directed string gas. However, two objections can be raised against a dislocation mediated quantum melting. 
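Before turning to those objections, Eqs. (6)-(8) can be checked numerically: inserting the modulus of Eq. (8) into the free-energy cost of Eq. (7) and applying the self-consistency condition of Eq. (6) by finite differences must return the same modulus. A minimal sketch in units ħ = M = 1, with η = √6 as quoted:

```python
import math

HBAR, M = 1.0, 1.0
ETA = math.sqrt(6.0)

def B(d):
    """Eq. (8): claimed fluctuation-induced modulus of the hard-core Bose gas."""
    return 9.0 * math.pi ** 2 * HBAR ** 2 / (ETA ** 4 * M * d ** 3)

def dF_per_V(d):
    """Eq. (7), leading order: free-energy density of 'worldline joining'."""
    return math.pi * HBAR / (4.0 * ETA ** 2) * math.sqrt(B(d) / M) / d ** 1.5

def self_consistent_rhs(d, h=1e-4):
    """Eq. (6): d^2 times the second d-derivative of dF/V (central differences)."""
    second = (dF_per_V(d + h) - 2.0 * dF_per_V(d) + dF_per_V(d - h)) / h ** 2
    return d * d * second
```

At any interworldline spacing `d` the right-hand side of Eq. (6) reproduces Eq. (8), confirming that the B ∝ 1/d³ form is the self-consistent solution.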
The first objection involves a further specification: already a single string tends to acquire spontaneously a direction, if it is regularized on a lattice (like the stripes). As pointed out by Eskes et al., the reason is that ‘overhangs’ like in Fig. 1b are events where transversal fluctuations are suppressed, relative to those around directed configurations. The second argument is more general. It is a classic result that at any finite temperature dislocations proliferate in the string gas. However, in the presence of a finite range interaction of any strength the Kosterlitz-Thouless transition will occur at a finite temperature. Hence, by letting this interaction become arbitrarily weak a $`T=0`$ transition can always be circumvented. When dislocations can be excluded the directed string gas remains and this is just the decompactified Bose gas. In Euclidean space-time it corresponds with a sequentially ordered stack of elastic membranes. Orienting the worldsheets in the $`y,\tau `$ planes, the action becomes in terms of the displacement fields $`\varphi _i(y,\tau )`$ describing the motion of the strings in the $`x`$ direction, $`Z`$ $`=`$ $`\mathrm{\Pi }_{i=1}^N\mathrm{\Pi }_{y,\tau }{\displaystyle \int 𝑑\varphi _i(y,\tau )e^{-S/\hbar }},`$ (9) $`S`$ $`=`$ $`{\displaystyle \int 𝑑\tau 𝑑y\sum _i\left[\frac{\rho _c}{2}(\partial _\tau \varphi _i)^2+\frac{\mathrm{\Sigma }_c}{2}(\partial _y\varphi _i)^2\right]},`$ (10) again supplemented by the avoidance condition Eq. (3). In Eq. (10), $`\rho _c`$ is the mass density and $`\mathrm{\Sigma }_c`$ the string tension, such that $`c=\sqrt{\mathrm{\Sigma }_c/\rho _c}`$ is the velocity. In the remainder I choose a lattice regularization with lattice constant $`a`$, such that the average string-string distance is $`d=a/n`$ ($`n`$ is the density), and UV momentum and frequency cut-offs on a single string $`q_0=\pi /a`$ and $`\omega _0=cq_0`$, respectively. 
Turning to the Helfrich method, the effective long wave-length action is written as, $$S_{eff}=\frac{1}{2}\int 𝑑\tau 𝑑x𝑑y\left[\rho (\partial _\tau \psi )^2+B(\partial _x\psi )^2+\mathrm{\Sigma }(\partial _y\psi )^2\right],$$ (11) with $`\psi (x,y,\tau )`$ as the coarse grained long-wavelength displacement field, while $`\mathrm{\Sigma }=\mathrm{\Sigma }_c/d`$, $`\rho =\rho _c/d`$ and $`B`$ has to be determined. From the action Eq. (11) it follows that the free energy difference Eq. (5) is, $$\frac{\mathrm{\Delta }F}{V}=-\frac{\hbar c}{8\pi ^2}\int _0^{q_0}𝑑q^2\int _0^{\pi /(\eta d)}𝑑q_x\mathrm{ln}\left[\frac{\mathrm{\Sigma }q^2}{\mathrm{\Sigma }q^2+Bq_x^2}\right].$$ (12) This integral is easily solved analytically and expanding in the small parameter $`\lambda =(\sqrt{B}a)/(\sqrt{\mathrm{\Sigma }}\eta d)`$, $$\frac{\mathrm{\Delta }F}{V}=\frac{\pi \hbar c}{24\eta ^3\mathrm{\Sigma }_c}(\frac{B}{d^2})(\frac{5}{3}+\mathrm{ln}\left[\frac{\eta ^2\mathrm{\Sigma }_c}{a^2}\frac{d}{B}\right])+O(\lambda ^4).$$ (13) The free-energy difference is proportional to $`B/d^2`$ except for a logarithmic ‘correction’ $`(B/d^2)\mathrm{ln}(d/B)`$. Since $`B`$ tends to zero in the low density limit, it is actually this logarithmic ‘correction’ which determines the low density asymptote of the differential equation which is obtained after substitution of Eq.(13) in the self-consistency condition Eq. (6). The physical meaning of this logarithm will be discussed later. The differential equation determining the fluctuation induced modulus $`B`$ becomes, $$f(d)=C_0\frac{\partial ^2}{\partial d^2}\left[f(d)\left(\frac{5}{3}-\mathrm{ln}(C_1df(d))\right)\right],$$ (14) where $`f(d)=B/d^2`$ and $`C_0=(\pi \hbar c)/(24\eta ^3\mathrm{\Sigma }_c),C_1=a^2/(\eta ^2\mathrm{\Sigma }_c)`$. Eq.(14) can be simplified using the Ansatz $`f(d)=exp[-\mathrm{\Phi }(d)]`$. It is easy to see that for large $`d`$ the second derivative terms $`\partial _d^2\mathrm{\Phi }`$ can be neglected relative to the first derivative terms (‘quasiclassical approximation’). 
Neglecting the other terms which do not contribute in the low density asymptote (including the one derived from the ‘$`5/3`$’ term in Eq. (14)) $`\mathrm{\Phi }`$ obeys asymptotically the simple differential equation, $$(\mathrm{\Phi }-2)(\frac{\partial \mathrm{\Phi }}{\partial d})^2=\frac{1}{C_0},$$ (15) and it follows that $`\mathrm{\Phi }(d)\propto d^{2/3}`$. The full expression for the induced modulus is up to leading order in the density, $$B=Ad^2e^{-\eta (\frac{54}{\pi })^{1/3}\frac{1}{\mu ^{1/3}}},$$ (16) where $`A`$ is an integration constant and $`\mu `$ is the ‘coupling constant’ for the string-gas, $$\mu =\frac{\hbar }{\rho _ccd^2}.$$ (17) Eq.’s (16,17) represent my central result. What is the significance of this result? Most importantly, it demonstrates that in parallel with the hard-core bose gas (and Helfrich’s membranes), the string gas is characterized by a fluctuation induced elastic modulus at long wavelength which will be small but finite even at low density. This modulus $`B`$ appears in the action Eq. (11) which describes an elastic manifold covering 2+1D space-time. Eq. (16) describes the counter-intuitive fact that upon increasing the kinetic energy of a single string, the rigidity of this medium is actually increasing. The parameter $`\mu `$ is the dimensionless quantity measuring the importance of quantum fluctuations. In order to prohibit diverging fluctuations on the lattice scale, $`\mu `$ should be less than one, while the classical limit is approached when $`\mu \to 0`$. According to Eq. (16), $`B`$ depends on $`\mu `$ in a stretched exponential form, such that $`B`$ increases when $`\mu `$ is increasing. Since quantum dislocation melting is prohibited, the string gas is always a solid, and this solid becomes more rigid when the microscopic quantum fluctuations become more important. This might appear as less surprising when the (directed) string gas is viewed as a decompactified bose gas. 
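As a brief aside, the quasiclassical solution of Eq. (15) can be verified directly: $`\mathrm{\Phi }(d)=Kd^{2/3}`$ with $`K=(9/4C_0)^{1/3}`$ makes the left-hand side approach $`1/C_0`$ at large $`d`$, since $`\mathrm{\Phi }(\mathrm{\Phi }^{})^2=4K^3/9`$ exactly while the correction from the constant term dies off as $`d^{-2/3}`$; this coefficient is what produces the $`\eta (54/\pi )^{1/3}/\mu ^{1/3}`$ combination in the exponent of Eq. (16). A minimal numerical check with an arbitrary value of $`C_0`$:

```python
C0 = 0.1                                  # arbitrary positive constant for the check
K = (9.0 / (4.0 * C0)) ** (1.0 / 3.0)     # coefficient of the d^(2/3) solution

def phi(d):
    return K * d ** (2.0 / 3.0)

def phi_prime(d):
    return (2.0 * K / 3.0) * d ** (-1.0 / 3.0)

def lhs_eq15(d):
    """Left-hand side of Eq. (15): (Phi - 2) * (dPhi/dd)^2."""
    return (phi(d) - 2.0) * phi_prime(d) ** 2
```

The deviation of `lhs_eq15(d) * C0` from 1 shrinks as `d` grows, confirming the asymptotic solution.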
On the one hand, the larger internal dimensionality of the worldsheets as compared to the worldlines weakens the ‘quantum-entropic’ interactions, but the enlarged overall dimensionality causes the algebraic long range order of the 1+1D bose gas to become the true long range order of the 2+1D string gas. The mechanism behind the ‘quantum entropic’ interaction is actually different from the one in the bose gas. In the bose gas it builds up at short-wavelengths, while in the string gas it is driven by the long-wavelength fluctuations living on the strings. An alternative, more intuitive understanding is available for the Bose-gas result, Eq. (8). This is based on the simple notion that every time membranes/worldlines collide an amount of entropy $`k_B`$ is paid because the membranes cannot intersect. Hence, these collisions raise the free energy of the system and this characteristic free energy cost $`\mathrm{\Delta }F_{coll}\sim k_BTn_{coll}`$. The density of collisions $`n_{coll}`$ is easily calculated: for the worldlines, the mean-square transversal displacement as a function of (time-like) arclength increases like $`<(\varphi (\tau )-\varphi (0))^2>=(\hbar /M)\tau `$. The characteristic time $`\tau _c`$ it takes for one collision to occur is obtained by imposing that this quantity becomes of order $`d^2`$ and a characteristic collision energy scale is obtained, $`E_F\sim \hbar /\tau _c\sim (\hbar ^2/M)n^2`$. $`E_F`$ is of course the Fermi-energy: it is the scale separating a regime where worldlines are effectively isolated ($`E>E_F`$, free particles) from one dominated by the collisions ($`E<E_F`$, Luttinger liquid). The induced modulus follows from naive coarse graining: $`E_F`$ is the characteristic energy associated with density change, while $`d`$ is the characteristic length. Therefore, $`B\sim E_F/d`$, reproducing the result Eq. (8), within a prefactor of order unity. For the string gas this procedure yields a simple exponential instead of the stretched exponential Eq. 
(16). The mean-square transversal displacement now depends logarithmically on the worldsheet area $`A`$: $`<(\mathrm{\Delta }\varphi (A))^2>=(\hbar /(\rho _cc))\mathrm{ln}(A)`$. Demanding this to be equal to $`d^2`$, the degeneracy scale follows immediately. The characteristic worldsheet area $`A_c`$ for which on average one collision occurs is given by $`(\hbar /(\rho _cc))\mathrm{ln}(A_c)\sim d^2`$ where $`A_c=c^2\tau _c^2/a^2`$ in terms of the collision time $`\tau _c`$. It follows that $`\tau _c\sim (a/c)e^{1/(2\mu )}`$ and the ‘Fermi energy’ of the string gas is of order $`E_F^{str}=\hbar /\tau _c\sim (\hbar c/a)\mathrm{exp}(-1/(2\mu ))`$ and thereby $`B\sim \mathrm{exp}(-1/\mu )`$. In fact, the same $`\mu `$ dependence is obtained from Helfrich’s method if the logarithm in Eq. (13) is neglected. Hence, this collision picture misses entirely the origin of the quantum-entropic repulsions in the string gas! The reason becomes clear by inspecting the origin of the logarithmic term in the integrations leading to Eq. (13). Cutting off the smallest allowed momenta in the $`x,\tau `$ directions by $`q_{min}`$ one finds that $`\mathrm{ln}[(\eta ^2\mathrm{\Sigma }_cd)/(a^2B)]\to -\mathrm{ln}[(a^2B)/(\eta ^2\mathrm{\Sigma }_cd)+a^2q_{min}^2]`$, and this is unimportant for any finite $`q_{min}`$ in the low density limit. Therefore, the logarithm and thereby the induced modulus are driven by the long wavelength fluctuations on the strings, and these are not considered in the collision point picture. In summary, I have analyzed the fluctuation induced interactions in the ‘ideal’ gas of elastic quantum strings in 2+1D. A novelty is that in this system the induced elasticity is due to long wavelength fluctuations, qualitatively different from the short distance physics of the Bose gas. It remains to be seen if these interactions are of relevance in real physical systems. On the one hand, these are rather weak and easily overwhelmed by direct interactions. 
However, direct string-string interactions which decay exponentially are generic and in this case the induced interactions can dominate at sufficiently low density because of their stretched exponential dependence on density: in principle the induced interactions can be physical observables. The immediate relevance of my findings lies elsewhere. The strings considered here are idealizations of the stripes but these idealizations are nevertheless close to a popular way of viewing these matters. I have demonstrated that in the absence of zero temperature stripe long range order it has to be that these ideal stripes are broken up in one way or another. Many stimulating interactions with S. I. Mukhin and W. van Saarloos are gratefully acknowledged. I thank P. van Baal for his suggestion to study the literature on extrinsic curvature membranes.
# $`N_f`$ DEPENDENCE OF THE QUARK CONDENSATE FROM A CHIRAL SUM RULE

IPNO-DR 99-22

Bachir Moussallam<sup>1</sup><sup>1</sup>1Address after Sept. 1: MIT, Center for Theoretical Physics, 77 Massachusetts Av., Cambridge MA 02139-4307

I.P.N., Groupe de Physique Théorique, Université Paris-Sud, F-91406 Orsay Cédex

## Abstract

How fast the quark condensate in QCD-like theories varies as a function of $`N_f`$ is inferred from real QCD using chiral perturbation theory at order one-loop. A sum rule is derived for the single relevant chiral coupling-constant, $`L_6`$. A model independent lower bound is obtained. The spectral function satisfies a Weinberg-type superconvergence relation. It is discussed how this, together with chiral constraints, allows a solid evaluation of $`L_6`$, based on experimental $`\pi \pi \to K\overline{K}`$ S-wave T-matrix input. The resulting value of $`L_6`$ is compatible with a strong $`N_f`$ dependence possibly suggestive of the proximity of a chiral phase transition.

## 1 Introduction

By analogy with recent results obtained in supersymmetric theories, one expects that QCD-like theories will undergo a number of phase transitions at zero temperature upon varying $`N_f`$, the number of different flavour fermions, at fixed number of colours ($`N_c=3`$ in the following). If the number of fermions is large, $`N_f>\frac{11}{2}N_c`$, the theory has no asymptotic freedom and no confinement. Decreasing $`N_f`$ below $`\frac{11}{2}N_c`$ one encounters a conformal phase, as indicated by the fact that the $`\beta `$-function at two loops has a zero. Assuming the fermions to be all massless, the chiral $`SU(N_f)\times SU(N_f)`$ symmetry of the QCD action remains unbroken in this phase. If one further decreases $`N_f`$ to small values, $`N_f=2`$–3, then QCD is in a confining phase in which the chiral group is spontaneously broken to $`SU(N_f)`$. 
It is generally believed that, in this phase, the quark condensate is non-vanishing and large<sup>2</sup><sup>2</sup>2Experimental verification of this conjecture necessitates specific and very precise data. This question is discussed in ref.. At larger $`N_f`$, there could exist different phases of chiral symmetry breaking. A physical picture of such phases is proposed in ref.. An interesting open question concerns the value of $`N_f^{crit}`$ for which a chiral phase transition takes place. Recent lattice results suggest that a transition could occur for $`N_f`$ as small as four. In an instanton vacuum model, the quark condensate vanishes for $`N_f`$ of the order of five, while another theoretical model obtains a much larger value. In this paper, we use the fact that nature solves ordinary QCD in order to extract information on how the quark condensate varies with $`N_f`$ for small values of $`N_f`$. More specifically, if it can be shown that the ratio $$R_{32}=\frac{<\overline{u}u>_{N_f=3}}{<\overline{u}u>_{N_f=2}}$$ (1) is significantly smaller than one, one may expect a rather small value of $`N_f^{crit}`$. In the functional integral, the dependence upon $`N_f`$ arises from the fermion determinant part of the measure: setting all quark masses equal, one has $$d\mu \propto d\mu (A)\left(det(iD/+m)\right)^{N_f}.$$ (2) In other terms, it is a Dirac sea effect. In the quenched approximation, which is often used in lattice simulations, the fermion determinant is set equal to one and $`R_{32}`$ is exactly one. The same result also follows in the leading large $`N_c`$ expansion of QCD, since the determinant contributes to graphs with internal quark loops, which are subleading.
Using chiral perturbation theory (CHPT), one can access the ratio $$\stackrel{~}{R}_{32}=\frac{<\overline{u}u>_{(m_u=m_d=m_s=0)}}{<\overline{u}u>_{(m_u=m_d=0,m_s\ne 0)}}.$$ (3) This ratio is different from $`R_{32}`$, but it is also a measure of the influence of the fermion determinant in the evaluation of the quark condensate. Again, this ratio would be exactly one in the leading large $`N_c`$ expansion or in the quenched approximation, for any value of the strange quark mass $`m_s`$. The point here is that the physical value of the strange quark mass is sufficiently small compared to the scale of the chiral expansion, $`\mathrm{\Lambda }\sim 1`$ GeV, for the chiral expansion in $`m_s/\mathrm{\Lambda }`$ to make sense and, at the same time, $`m_s`$ is not so small that $`\stackrel{~}{R}_{32}`$ would trivially be close to one. In this paper, we will provide an estimate of $`\stackrel{~}{R}_{32}`$. The plan of the paper is as follows. In sec.2, the expression of $`\stackrel{~}{R}_{32}`$ in CHPT at order one-loop is given. This expression involves a single low-energy coupling constant, $`L_6(\mu )`$, in the nomenclature of Gasser and Leutwyler. In that paper, $`L_6`$ was simply assumed to be OZI suppressed. Here, we attempt a more careful estimate on the basis of a chiral sum rule. Analogous chiral sum rules were discussed in the recent literature and can eventually provide very good precision. In sec.3, a sum rule expression for $`L_6`$ in terms of the correlation function $`\mathrm{\Pi }_6(s)`$ of two scalar currents $`\overline{u}u+\overline{d}d`$ and $`\overline{s}s`$ is established, and $`Im\mathrm{\Pi }_6(s)`$ is shown to satisfy a Weinberg-type sum rule. This will be an important constraint on our evaluation. The construction of the spectral function is discussed in sec.4.
Important ingredients are the pion and the kaon scalar form-factors, which can be related to experimental information on pion-pion scattering using analyticity, unitarity, and high-energy constraints as well as low-energy constraints from chiral symmetry. This was first performed in ref.. Extension to the region of 1.5 GeV, where an important resonance contribution is expected, is then discussed. Finally, the result can be found in sec.6. 2. Ratio of quark condensates from CHPT Consider QCD in a limit where $`N_f`$ quarks are exactly massless. We will consider the cases of $`N_f=2`$ and $`N_f=3`$ and assume, a priori, that chiral symmetry is spontaneously broken in QCD in both cases and that the value of the condensate is sufficiently large, also in both cases, such that the conventional chiral expansion applies. In nature, none of the quark masses $`m_u`$, $`m_d`$, $`m_s`$ is exactly vanishing, but they are in an asymmetric configuration where $`m_u,m_d\ll m_s`$ (by a factor of twenty or so) and $`m_s`$ is itself sufficiently small compared to the scale of the chiral expansion $`\mathrm{\Lambda }`$. Using this fact, one can express the ratio $`\stackrel{~}{R}_{32}`$ as an expansion in powers of $`m_s`$. At chiral order $`O(p^4)`$, making use of the formulae of ref., one obtains $$\stackrel{~}{R}_{32}=1-\frac{m_sB_0}{F_\pi ^2}\left[32L_6(\mu )-\frac{1}{16\pi ^2}\left(\frac{11}{9}\mathrm{ln}\frac{m_sB_0}{\mu ^2}+\frac{2}{9}\mathrm{ln}\frac{4}{3}\right)\right]+O(m_s^2).$$ (4) At order $`O(p^2)`$ one has $$m_sB_0=\frac{1}{2}\left(M_{K^+}^2+M_{K^0}^2-M_{\pi ^+}^2\right),$$ (5) and this value can be used consistently in the equation above. The size of $`\stackrel{~}{R}_{32}`$ depends on the value of a single low-energy coupling constant, $`L_6`$. The low-energy coupling constants of CHPT may be related to QCD correlation functions evaluated near zero momenta.
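As a rough numerical orientation (not part of the original analysis), eqs.(4) and (5) are easy to evaluate; the sketch below uses physical meson masses and $`F_\pi `$ (PDG values), and a purely illustrative value of $`L_6`$ of the size discussed later in the paper:

```python
import math

# inputs: physical masses and F_pi in GeV (PDG values); L6 is illustrative
MKp, MK0, Mpi, Meta, Fpi = 0.4937, 0.4977, 0.1396, 0.5473, 0.0924

msB0 = 0.5 * (MKp**2 + MK0**2 - Mpi**2)      # eq.(5), in GeV^2
L6   = 0.5e-3                                # hypothetical value of L6(M_eta)
mu2  = Meta**2                               # scale mu = M_eta

# one-loop expression eq.(4) at the scale mu
bracket = 32*L6 - (11.0/9.0 * math.log(msB0/mu2)
                   + 2.0/9.0 * math.log(4.0/3.0)) / (16*math.pi**2)
R32_tilde = 1 - msB0 / Fpi**2 * bracket
```

With these inputs one finds $`m_sB_0\simeq 0.24`$ GeV² and a ratio well below one, illustrating how sensitive $`\stackrel{~}{R}_{32}`$ is to the value of $`L_6`$.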
This can be exploited, in particular for two-point functions, in order to express these constants in the form of sum rules using analyticity (and the fact that chiral correlators have non-singular short distance behaviour). A number of these are exhibited in ref.. A classic example concerns the coupling constant $`L_{10}`$, which is related to the correlation function of two vector currents minus two axial currents. A very reasonable estimate for $`L_{10}`$ can be obtained using simply the idea of vector meson dominance as well as Weinberg sum rules. Our aim is to estimate $`L_6`$ along similar lines. One specific reason for interest in $`L_6`$ is in connection with the Kaplan-Manohar transformation. These authors observed that the effective lagrangian is left invariant under the following transformation of the quark mass matrix $`\mathcal{M}`$, $$\mathcal{M}\to \mathcal{M}+\alpha \left(\frac{32B_0}{F_\pi ^2}\right)(\mathcal{M}^{\dagger })^{-1}det\,\mathcal{M}^{\dagger },$$ (6) together with a transformation of certain low-energy constants. At chiral order $`O(p^4)`$ three coupling constants are affected, $`L_6`$, $`L_7`$ and $`L_8`$, which get transformed as $$L_6\to L_6-\alpha ,\quad L_7\to L_7-\alpha ,\quad L_8\to L_8+2\alpha .$$ (7) One consequence is that using low-energy data alone, one can only determine combinations which are invariant under this transformation and not the individual values of $`L_6`$, $`L_7`$ and $`L_8`$. These values are of some importance. In particular, the value of $`L_8`$ determines the ratio of quark masses $`2m_s/(m_u+m_d)`$ beyond the leading chiral order. It is therefore of interest to explore means of separately determining these constants (or at least one of them). 3. Sum rule for $`L_6`$ Consider the correlation function of the two scalar, isoscalar currents $`\overline{u}u+\overline{d}d`$ and $`\overline{s}s`$, $$\mathrm{\Pi }_6(p^2)=iB_0^{-2}\int d^4xe^{ipx}<T\left[(\overline{u}u(x)+\overline{d}d(x))\overline{s}s(0)\right]>_c,$$ (8) where the subscript $`c`$ means that only connected graphs are to be retained.
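A one-line numerical check of the transformation (7) (the values of the couplings and of $`\alpha `$ below are arbitrary) shows which combinations are left invariant and hence accessible to low-energy data:

```python
# arbitrary illustrative values of the couplings and of alpha
L6, L7, L8, alpha = 0.2e-3, -0.4e-3, 0.9e-3, 0.37e-3

# Kaplan-Manohar transformation, eq.(7)
L6p, L7p, L8p = L6 - alpha, L7 - alpha, L8 + 2*alpha

# the combinations 2L6+L8, 2L7+L8 and L6-L7 are invariant
assert abs((2*L6p + L8p) - (2*L6 + L8)) < 1e-15
assert abs((2*L7p + L8p) - (2*L7 + L8)) < 1e-15
assert abs((L6p - L7p) - (L6 - L7)) < 1e-15
```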
The factor $`B_0^{-2}`$ is introduced to simplify forthcoming expressions and furthermore makes $`\mathrm{\Pi }_6`$ a renormalisation scale invariant object; $`B_0`$ is defined as $$B_0=-\underset{m_u=m_d=m_s=0}{lim}\frac{<\overline{u}u>}{F_\pi ^2}.$$ (9) For small momenta we can express $`\mathrm{\Pi }_6(p^2)`$ using CHPT. In particular, at zero momentum, from CHPT at $`O(p^4)`$ one obtains $$\mathrm{\Pi }_6(0)=64L_6(\mu )-\frac{1}{16\pi ^2}\left[2\mathrm{ln}\frac{(m_s+m)B_0}{\mu ^2}+\frac{4}{9}\mathrm{ln}\frac{(4m_s+2m)B_0}{3\mu ^2}+\frac{22}{9}\right]+O(m,m_s).$$ (10) Here, and in the following, isospin breaking is neglected and we set $`m_u=m_d=m`$. This expression is at the basis of our sum rule estimate for $`L_6`$. The quark condensate ratio $`\stackrel{~}{R}_{32}`$ has a very simple expression in terms of $`\mathrm{\Pi }_6`$. Combining equations (4) and (10) one obtains $$\stackrel{~}{R}_{32}=1-\frac{\overline{M}_K^2}{32\pi ^2\overline{F}_\pi ^2}\left[16\pi ^2\overline{\mathrm{\Pi }}_6(0)+\frac{22}{9}\right]+O(m_s^2),$$ (11) where barred quantities are to be taken in the limit $`m_u=m_d=0`$. This relation (and eq.(4)) can be recovered alternatively by noting that $$\frac{\partial }{\partial m_s}<\overline{u}u+\overline{d}d>=-B_0^2\mathrm{\Pi }_6(0),$$ (12) and integrating this equation from $`m_s=0`$ to its physical value using the CHPT expression (10). It is possible to derive a lower bound on $`L_6`$ and on $`\overline{\mathrm{\Pi }}_6(0)`$ based on general properties of the QCD measure. First, it is not difficult to show that $`\mathrm{\Pi }_6(0)`$ must be positive in the case of equal quark masses.
Let $`Z`$ be the partition function of euclidian QCD, $$Z=\int d\mu (A)e^{-S_{YM}(A)}\mathrm{det}(iD/_A+M),\qquad M=diag(m,m,m_s).$$ (13) We can express $`\mathrm{\Pi }_6(0)`$ in terms of $`Z`$ as $$\mathrm{\Pi }_6(0)=\frac{1}{Z}\frac{d^2Z}{dmdm_s}-\frac{1}{Z}\frac{dZ}{dm}\frac{1}{Z}\frac{dZ}{dm_s}.$$ (14) In the limit $`m_s=m`$ this can be written as an average of a manifestly positive quantity, $$\underset{m_s=m}{lim}\mathrm{\Pi }_6(0)=<\left(\mathrm{Tr}\frac{1}{iD/+m}-<\mathrm{Tr}\frac{1}{iD/+m}>\right)^2>,$$ (15) where averages are defined as $$<O>=\frac{1}{Z}\int d\mu (A)e^{-S_{YM}(A)}\mathrm{det}(iD/_A+M)O(A).$$ (16) This is because $`\mathrm{Tr}(iD/+m)^{-1}`$ can be shown to be real and the averaging is performed with respect to an integration measure which is real and positive in euclidian QCD (assuming the vacuum angle $`\theta =0`$ and a proper regularisation of the fermion determinant). We can now use this positivity of $`\mathrm{\Pi }_6(0)`$ in conjunction with its one-loop expression, eq.(10), setting all three quark masses there equal to the physical $`m_s`$ value. One gets $$L_6(\mu )\ge \frac{11}{4608\pi ^2}\left[\mathrm{log}\frac{2m_sB_0}{\mu ^2}+1\right]+O(m_s).$$ (17) Ignoring higher loop corrections, this gives $$10^3L_6(M_\eta )\ge 0.35,\qquad 16\pi ^2\overline{\mathrm{\Pi }}_6(0)\ge 1.57.$$ (18) This shows, in particular, that the condensate must be a decreasing function of $`N_f`$. Another property, which will prove an important constraint for the sum rule estimate of $`L_6`$, is that $`\overline{\mathrm{\Pi }}_6`$ satisfies a Weinberg-type sum rule (WSR) $$\int _0^{\infty }Im\overline{\mathrm{\Pi }}_6(s)ds=0.$$ (19) The proof is analogous to that of the ordinary Weinberg sum rules (see e.g. ref.). The operators in the operator-product expansion at short distances must transform in the same way as $`(\overline{u}u+\overline{d}d)\overline{s}s`$ under the chiral group.
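The numbers quoted in eq.(18) can be reproduced directly; the short check below evaluates the bound (17) at $`\mu =M_\eta `$ with physical meson masses (PDG values) and then inserts the result into the $`O(p^4)`$ expression (10), with $`m=0`$, for $`16\pi ^2\overline{\mathrm{\Pi }}_6(0)`$:

```python
import math

MKp, MK0, Mpi, Meta = 0.4937, 0.4977, 0.1396, 0.5473   # GeV (PDG)
msB0 = 0.5 * (MKp**2 + MK0**2 - Mpi**2)                # eq.(5)
mu2  = Meta**2                                         # mu = M_eta

# lower bound on L6, eq.(17)
L6_min = 11.0 / (4608 * math.pi**2) * (math.log(2*msB0/mu2) + 1)

# corresponding bound on 16 pi^2 Pi6bar(0), from eq.(10) with m = 0
Pi6_min = (16*math.pi**2 * 64 * L6_min
           - (2*math.log(msB0/mu2)
              + 4.0/9.0*math.log(4*msB0/(3*mu2)) + 22.0/9.0))
```

Both numbers land on the values of eq.(18).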
The masses $`m_u`$ and $`m_d`$ being set equal to zero, the operator of lowest dimensionality having the correct transformation property is $`m_s(\overline{u}u+\overline{d}d)`$, and it has dimension four. Furthermore, a factor of $`(\alpha _s)^2`$ is generated from the fact that all connected graphs contain at least two gluon lines. Taking into account the scale dependence of $`\alpha _s`$ and that of $`B_0`$ in the perturbative region, we learn that $`\overline{\mathrm{\Pi }}_6`$ vanishes asymptotically faster than $`1/q^2`$, $$\overline{\mathrm{\Pi }}_6(q^2)=\frac{C<m_s(\overline{u}u+\overline{d}d)>}{q^2\left[\mathrm{ln}(q^2)\right]^{2+24/27}}+O(\frac{1}{q^4}),$$ (20) with $`C`$ a constant, which implies the WSR eq.(19). Furthermore, this behaviour ensures convergence of an unsubtracted dispersion relation for $`\overline{\mathrm{\Pi }}_6`$, so that one can express its value at zero as $$\overline{\mathrm{\Pi }}_6(0)=\frac{1}{\pi }\int _0^{\infty }ds\frac{Im\overline{\mathrm{\Pi }}_6(s)}{s}.$$ (21) While the WSR must be satisfied for arbitrary values of $`m_s`$, it is clear from eq.(20) that convergence of the integral (19) will be faster if $`m_s=0`$. In that situation, one expects the sum rule to be saturated in an energy interval of, say, $`[0,2]`$ GeV, by analogy with the ordinary Weinberg sum rules. Experimental data is known for $`m_s\ne 0`$, but while there may be large differences locally in $`Im\mathrm{\Pi }_6(s)`$ upon varying the value of $`m_s`$ (notably because of threshold effects), it is expected that these differences will be smoothed out to a large extent in the integral. One therefore expects that the WSR will also be approximately saturated in a finite energy region for the physical value of $`m_s`$. Let us make some qualitative remarks on the practical significance of the sum rule. In the large $`N_c`$ counting, $`\mathrm{\Pi }_6`$ is suppressed compared to a generic QCD correlation function: it is of order $`O((N_c)^0)`$ instead of $`O(N_c)`$.
As a byproduct, one observes that the contribution of a single resonance is not enhanced by a factor of $`N_c`$ compared to the non-resonant background. In the large $`N_c`$ world, the coupling of a resonance to either $`\overline{u}u+\overline{d}d`$ or $`\overline{s}s`$ will be suppressed. For a glueball, both couplings will be suppressed. The real world, in the scalar sector, seems to be quite different from these large $`N_c`$ considerations. For instance, the $`f_0(980)`$ meson is found experimentally to be rather light and narrow, and it couples strongly to both $`K\overline{K}`$ and $`\pi \pi `$ channels, in violation of the large $`N_c`$ expectation. As a consequence, one expects a strong contribution of the $`f_0(980)`$ to $`Im\mathrm{\Pi }_6`$. In order to satisfy the WSR (19) the contribution from the $`f_0(980)`$ has to be canceled by a higher energy contribution. It seems plausible that this will be resonance dominated as well. The Particle Data Book quotes several resonances in the 1.5 GeV region: the f<sub>0</sub>(1370), a rather wide resonance, the f<sub>0</sub>(1500), which is well defined and rather narrow, and (possibly) the f<sub>0</sub>(1700). A first guess is that the sum rule (19) should be essentially satisfied from an interplay between the f<sub>0</sub>(980) and the f<sub>0</sub>(1500). The remaining problem is to estimate the couplings of these resonances to the scalar currents. This cannot be extracted directly from experiment because of the absence of a physical scalar isoscalar source (which is the same reason why $`L_6`$, $`L_7`$, $`L_8`$ cannot be individually determined from low-energy experiments). One way of getting around this difficulty, which was used in QCD sum rule estimates of the light quark masses, is to impose smooth matching of the resonance contribution onto the low-energy domain, which is known from CHPT at leading order. This procedure can be checked to be a reasonable one in the case of vector currents.
Implementation of this idea in the present context is discussed below. 4. Construction of the spectral function A. The role of two-body channels In the construction of $`Im\mathrm{\Pi }_6(s)`$ it is convenient to consider separately the two energy regions I) $`0<s<1\mathrm{GeV}^2`$ and II) $`s>1\mathrm{GeV}^2`$. Let us consider region I first. The only intermediate states allowed to contribute to $`Im\mathrm{\Pi }_6(s)`$ there are $`2\pi `$, $`4\pi `$ and $`K\overline{K}`$. When $`s\ll 1`$ $`\mathrm{GeV}^2`$, the $`4\pi `$ contribution is suppressed by the chiral counting (being of order $`O(p^8)`$ while the leading contribution is $`O(p^4)`$). Close to 1 $`\mathrm{GeV}^2`$, chiral counting is no longer effective, but it is found experimentally that the $`f_0(980)`$ has very little coupling to $`4\pi `$ (in fact, no decay of the $`f_0(980)`$ into four pions has been observed yet). It is extremely likely, then, that the $`4\pi `$ contribution to $`Im\mathrm{\Pi }_6(s)`$ is negligible in this whole energy range. As a result, the spectral function can be expressed in terms of the pion and of the kaon scalar form-factors. It is convenient to introduce the following normalisations, $`F_1(s)={\displaystyle \frac{1}{B_0}}\sqrt{{\displaystyle \frac{3}{2}}}<0|\overline{u}u+\overline{d}d|\pi ^0\pi ^0>,\qquad G_1(s)={\displaystyle \frac{1}{B_0}}\sqrt{{\displaystyle \frac{3}{2}}}<0|\overline{s}s|\pi ^0\pi ^0>`$ $`F_2(s)={\displaystyle \frac{1}{B_0}}\sqrt{2}<0|\overline{u}u+\overline{d}d|K^+K^->,\qquad G_2(s)={\displaystyle \frac{1}{B_0}}\sqrt{2}<0|\overline{s}s|K^+K^->.`$ (22) The values of these form-factors at $`s=0`$ are proportional to the derivatives of $`M_\pi ^2`$ and $`M_K^2`$ with respect to the quark masses. At leading chiral order, one has $$F_1(0)=\sqrt{6},\quad G_1(0)=0,\quad F_2(0)=\sqrt{2},\quad G_2(0)=\sqrt{2}.$$ (23) One-loop corrections to these values can consistently be ignored because they are of the same order as the $`O(p^6)`$ contributions in eq.(10).
In the energy range I, the spectral function has the following expression $$Im\mathrm{\Pi }_6(s)=\frac{1}{16\pi }\underset{i=1}{\overset{2}{\sum }}\sqrt{\frac{s-4M_i^2}{s}}F_i(s)G_i^{*}(s)\theta (s-4M_i^2),$$ (24) (where $`M_1\equiv M_\pi `$, $`M_2\equiv M_K`$). We consider now the energy region II. More approximations will have to be made in this region. We will work out the spectral function from several models in order to illustrate how the WSR can be satisfied. For the final purpose of evaluating the dispersive integral eq.(21) we will mainly rely on information from energy range I. As $`s`$ increases a new two-body channel opens, $`\eta \eta `$. At some point, the $`4\pi `$ channel will become important. Studies of $`\pi \pi `$ scattering suggest that this should happen at $`\sqrt{s}>1.4`$ GeV. This can be seen from the $`\pi \pi `$ inelasticity: below 1.4 GeV, inelasticity is found to be saturated to a good approximation by a single inelastic channel, $`K\overline{K}`$ (see e.g. fig. 7 of ref.), and then one observes a strong onset of $`\pi \pi \to 4\pi `$. It is very likely that this is caused by the presence of the nearby scalar resonances $`f_0(1370)`$ and $`f_0(1500)`$, which were both observed to couple to four-pion states. Another experimental finding of these references is that the $`4\pi `$ system, in this energy region, likes to cluster into two resonances. This suggests that in an energy range sufficiently large to saturate the chiral sum rule, the contributions to the spectral function are either two-body channels or behave to a good approximation as quasi two-body channels. We will utilise below a model in which the $`4\pi `$ system is treated as an effective $`\sigma \sigma `$ two-body channel.
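A minimal sketch of eq.(24), using the leading-order values (23) as constant form factors (a crude toy assumption, for orientation only): the pion term then drops out, since $`G_1(0)=0`$, and the whole spectral function comes from the $`K\overline{K}`$ channel.

```python
import math

Mpi, MK = 0.1396, 0.4937                 # GeV
# leading-order form-factor values, eq.(23), frozen at their s=0 values (toy)
F = [math.sqrt(6.0), math.sqrt(2.0)]
G = [0.0, math.sqrt(2.0)]
M = [Mpi, MK]

def im_pi6(s):
    """Two-channel spectral function of eq.(24) with constant form factors."""
    total = 0.0
    for Fi, Gi, Mi in zip(F, G, M):
        if s > 4*Mi**2:                  # a channel contributes above its threshold
            total += math.sqrt((s - 4*Mi**2)/s) * Fi * Gi
    return total / (16*math.pi)
```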
Correspondingly, we will introduce the scalar form-factors $$F_3(s)=\frac{1}{B_0}<0|\overline{u}u+\overline{d}d|\sigma \sigma >,\qquad G_3(s)=\frac{1}{B_0}<0|\overline{s}s|\sigma \sigma >.$$ (25) This is certainly somewhat schematic, as it is known that $`4\pi `$ also clusters as an effective $`\rho \rho `$ channel. In this model, furthermore, the $`\eta \eta `$ channel is ignored. This is a questionable approximation, perhaps, although there is experimental evidence that the coupling of $`\eta \eta `$ to $`\pi \pi `$ appears to be relatively suppressed. In the quasi two-body approximation of multimeson channels one can express the spectral function in terms of form-factors $`F_1(s),\ldots ,F_n(s)`$ and $`G_1(s),\ldots ,G_n(s)`$ in the same way as eq.(24), except that the sum extends from 1 to $`n`$. Upon introducing effective two-body channels, one faces the difficulty that one can no longer rely on CHPT in order to determine the values of the form-factors at the origin. These values are needed in the construction to be described below. In practice, we will make the simple ansatz that these values are vanishing, $$F_i(0)=0,\quad G_i(0)=0,\quad i\ge 3.$$ (26) Evaluation of the scalar form-factors of the pion and the kaon was discussed in ref., based on a set of Muskhelishvili-Omnès equations. We review this evaluation and its extension below. B. Muskhelishvili-Omnès representation of scalar form-factors The form-factors $`F_i(s)`$ which occur in the expression for the spectral function (24) (generalised to $`n`$ channels) are themselves analytic functions everywhere in the complex plane except for a right-hand cut. Let $`T_{ij}`$ be the T-matrix elements which describe scattering among the various channels.
A standard normalisation is adopted where the S and T matrices are related as $$S_{ij}=\delta _{ij}+2\mathrm{i}\sigma _i^{\frac{1}{2}}T_{ij}\sigma _j^{\frac{1}{2}}\theta (s-4M_i^2)\theta (s-4M_j^2),\quad \mathrm{with}\quad \sigma _i=\sqrt{\frac{s-4M_i^2}{s}}.$$ (27) The discontinuity of the form-factors along the cut, generated from the two-body channels considered above, has the following form, $$ImF_i(s)=\underset{j=1}{\overset{n}{\sum }}T_{ij}^{*}(s)\sigma _j(s)F_j(s)\theta (s-4M_j^2).$$ (28) We expect the form-factors to vanish asymptotically as $$F_i(s)\sim 1/s,\quad s\to \infty ,$$ (29) and therefore to satisfy an unsubtracted dispersion relation. Clearly, the approximation of quasi two-body channels cannot hold for arbitrarily large energies, and eq.(28) is a reasonable approximation to the exact discontinuity only in a finite energy range. However, as we are interested in constructing $`F_i(s)`$ in a finite energy region also, say below two GeV, the detailed behaviour of the spectral function at much higher energies is unimportant and we may as well assume that eq.(28) holds up to infinite energies, only requiring that the T-matrix behaves in a way that ensures the correct asymptotic decrease of the form-factors. Under these assumptions the form-factors must satisfy a set of coupled Muskhelishvili-Omnès (MO for short) singular integral equations, $$F_i(s)=\frac{1}{\pi }\underset{j=1}{\overset{n}{\sum }}\int _{4M_\pi ^2}^{\infty }ds^{\prime }\frac{1}{s^{\prime }-s}T_{ij}^{*}(s^{\prime })\sqrt{\frac{s^{\prime }-4M_j^2}{s^{\prime }}}\theta (s^{\prime }-4M_j^2)F_j(s^{\prime }).$$ (30) One observes that off-diagonal T-matrix elements are needed outside of the physical scattering region. Except in the one-channel case, this means that one not only needs physical scattering data but also a parametrisation model which allows for extrapolation. C. Asymptotic conditions on the T-matrix Let us now specify which asymptotic conditions are required from the T-matrix.
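The normalisation (27) is easy to check numerically: with a real symmetric K-matrix (a toy two-channel model — the masses and couplings below are hypothetical, not from the paper), the unitarised T-matrix $`T=(K^{-1}-i\mathrm{\Sigma })^{-1}`$ yields, through eq.(27), a unitary S-matrix above both thresholds:

```python
import numpy as np

s = 1.2                               # GeV^2, above both toy thresholds
M = np.array([0.14, 0.50])            # toy channel masses
sigma = np.sqrt((s - 4*M**2) / s)     # phase-space factors of eq.(27)

K = np.array([[1.3, 0.8], [0.8, -0.4]])          # toy real symmetric K-matrix
Sig = np.diag(sigma)
T = np.linalg.inv(np.linalg.inv(K) - 1j * Sig)   # unitarised T-matrix
S = np.eye(2) + 2j * np.sqrt(Sig) @ T @ np.sqrt(Sig)

assert np.allclose(S @ S.conj().T, np.eye(2))    # S is unitary
```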
Consider first the single channel case, for which an analytic solution to the MO equation is available, $$F(s)=P(s)\mathrm{\Omega }(s),\qquad \mathrm{\Omega }(s)=\mathrm{exp}\left[\frac{s}{\pi }\int _{4M_\pi ^2}^{\infty }ds^{\prime }\frac{\delta (s^{\prime })}{(s^{\prime }-s)s^{\prime }}\right],$$ (31) where $`\delta (s)`$ is the scattering phase-shift and $`P(s)`$ an arbitrary polynomial. Integrating by parts, it is simple to verify that, as $`s\to \infty `$, one has $$\mathrm{\Omega }(s)\sim s^{-l},\qquad l=\frac{1}{\pi }(\delta (\infty )-\delta (4M_\pi ^2)).$$ (32) Compatibility with the assumed high-energy behaviour of the form-factor is ensured provided $`l\ge 1`$. It is not difficult to see how this condition extends to the situation of $`n`$ coupled channels, even though no analytical solution is known in general. Let us form a vector $`\vec{F}`$ of components $`(F_1(s),\ldots ,F_n(s))`$; we learn from Muskhelishvili’s book that there will be in general $`n`$ independent solution vectors $`\vec{F}_a`$, $`a=1,\ldots ,n`$, to the set of equations. Let us form an $`n\times n`$ matrix from these, $$\mathbf{F}(s)=(\vec{F}_1(s),\ldots ,\vec{F}_n(s)).$$ (33) All matrix elements of $`\mathbf{F}(s)`$ are analytic functions of $`s`$ in the cut complex plane and the discontinuity across the cut can be formulated in matrix form, $$\mathbf{F}(s+iϵ)=(1+2iT\mathrm{\Sigma })\mathbf{F}(s-iϵ),\qquad \mathrm{\Sigma }_{ij}=\delta _{ij}\sigma _i(s)\theta (s-4M_i^2).$$ (34) Taking the determinant of both sides, we obtain a one-dimensional discontinuity equation, $$f(s+iϵ)=D(s)f(s-iϵ),\qquad f=det\,\mathbf{F}.$$ (35) As $`f(s)`$ is also an analytic function, this equation can be recast as a one-channel MO equation. As a consequence, the determinant of the solution matrix $`\mathbf{F}`$ can always be expressed in analytical form even though the individual entries are not known analytically. This is an interesting property which we have used as a check of the accuracy of our numerical calculations.
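For orientation, the asymptotic statement (32) can be verified numerically with a toy phase shift rising from zero at threshold to $`\pi `$ at infinite energy, so that $`l=1`$; the phase, grid and cutoff below are illustrative choices, not taken from the paper:

```python
import numpy as np

s_th = 4 * 0.1396**2        # two-pion threshold in GeV^2
Lam2 = 1.0                  # toy scale controlling how fast the phase reaches pi

def delta(sp):              # toy phase shift: delta(s_th)=0, delta(inf)=pi
    x = sp - s_th
    return np.pi * x / (x + Lam2)

y = np.linspace(np.log(s_th), np.log(1e12), 20001)   # log grid in s'
sp = np.exp(y)

def omega(s):               # Omnès function of eq.(31) at spacelike s < 0
    g = delta(sp) / (sp - s)                          # integrand, with ds' = s' dy
    main = 0.5 * np.sum((g[1:] + g[:-1]) * np.diff(y))
    tail = -np.pi / s * np.log((1e12 - s) / 1e12)     # delta ~ pi beyond the grid
    return np.exp(s / np.pi * (main + tail))

# effective exponent between s = -1e5 and s = -1e7: should approach l = 1
l_est = -(np.log(omega(-1e7)) - np.log(omega(-1e5))) / np.log(100.0)
```

The fitted slope reproduces the exponent $`l=1`$ of eq.(32) to a few percent, consistent with the $`1/s`$ fall-off required of the form-factor.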
It is easy to verify that, for a given value of the energy $`s`$ with $`m\le n`$ channels being open, $`D(s)`$ is the determinant of the $`m\times m`$ S-matrix, so that it is a complex number of unit modulus, $$D(s)\equiv \mathrm{exp}(2i\mathrm{\Delta }(s)).$$ (36) Letting $`s`$ go to infinity, $`det\,\mathbf{F}`$ behaves as an inverse power of $`s`$, so that the matrix $`\mathbf{F}`$ must be of the following form, $$\underset{s\to \infty }{lim}\mathbf{F}(s)=\frac{1}{s^\nu }\mathbf{C},$$ (37) with $`\mathbf{C}`$ a constant $`n\times n`$ matrix with non-vanishing determinant. Taking the determinant of this equation and using the one-channel result one finds that $`1/s`$ asymptotic behaviour is ensured by the asymptotic condition, $$\mathrm{\Delta }(\infty )-\mathrm{\Delta }(4M_\pi ^2)\ge n\pi .$$ (38) For instance, in the case of three channels, the T-matrix must be such that the three eigen-phase-shifts sum up to $`3\pi `$ (or more) when the energy goes to infinity. If the sum is exactly $`3\pi `$, then the form-factors are uniquely determined at any energy from their values at zero. D. Models of the $`\pi \pi `$ scattering T-matrix In principle, $`\pi \pi `$ phase-shifts and inelasticities can be determined from di-pion production experiments, in which high-energy pions are scattered on proton targets. A major source of information in this area so far is the high statistics experiment by the CERN-Munich collaboration, from which $`\pi \pi `$ S-matrix elements were extracted by a number of people. Various determinations of S-wave phase-shifts are generally in reasonable mutual agreement below 1.4 GeV, while marked differences are seen above. From these early analyses there was no clear evidence for resonances at 1.4 or 1.5 GeV. The CERN-Munich data themselves, however, are not incompatible with the presence of scalar resonances at these energies. This was demonstrated recently by Bugg et al.
who obtained a good fit to the CERN-Munich data while constraining the S-matrix to have resonance poles and residues conforming to the PDG results. Unfortunately, it is not possible to use their parametrisation of the S-matrix for solving the MO equations because, on the one hand, it is not designed to satisfy the full set of two-body unitarity constraints and, on the other hand, the corresponding T-matrix parametrisation does not allow for extrapolation away from the physical scattering region. A set of $`\pi \pi `$ S-wave scattering phase-shifts and inelasticities was obtained recently, based on high statistics di-pion production data employing polarised proton targets. In principle, polarisation information is extremely useful in reducing the problems of phase ambiguities. Two solutions consistent with unitarity were found, called up-flat and down-flat. We will consider the latter one only here because, on the one hand, it is in good agreement with earlier phase-shift determinations below 1.4 GeV and, on the other hand, up-type solutions can usually be eliminated upon using the Roy equations, which encode crossing symmetry and high-energy constraints. This determination shows a marked resonance effect in the 1.4-1.5 GeV region and thus appears as a good candidate for use in our sum rule analysis. One notes that, above 1.4 GeV, the $`\pi \pi `$ phase-shifts determined by these authors and those determined in ref. are not in good agreement. For the purpose of solving the MO equation system one further needs a T-matrix parametrisation allowing convenient (and reliable) extrapolation below physical thresholds. As pointed out in ref., a useful check on the extrapolation of $`T_{12}`$ is to compare it to the chiral expansion in the region where the latter is valid. Close to $`s=0`$ one has $$T_{12}=\frac{\sqrt{3}}{64\pi F_\pi ^2}s+O(p^4).$$ (39) One-loop corrections to this result have been worked out.
A simple T-matrix model, which is very useful for performing checks of numerical calculations, is that proposed by Truong and Willey. This model has the property that the MO set of equations can be solved analytically. A somewhat more sophisticated T-matrix model, fitted to reproduce the $`\pi \pi `$ data of ref., was proposed in ref.. Fits with both 2-coupled and 3-coupled channels were performed. In this model, unitarity is ensured by solving a Lippmann-Schwinger equation with a potential matrix chosen to have the following separable form, $$V_{ij}(p,q)=\underset{l,m}{\sum }\frac{1}{p^2+\mu _{il}^2}\frac{1}{q^2+\mu _{jm}^2}\lambda _{lm}.$$ (40) The T-matrix can be computed analytically and it can be checked to have the correct chiral magnitude at low energy (in other terms, it vanishes linearly with $`s`$ and $`M_\pi ^2`$). It seems possible to adjust the parameters (and also the propagator) in order to reproduce exactly the correct T-matrix chiral expansion at $`O(p^2)`$ and even, we believe, at $`O(p^4)`$, but this has not yet been done. In this model, the MO equations must be solved numerically. For this purpose, we have developed an algorithm which is described in the appendix. The $`\lambda `$ and $`\mu `$ arrays in eq.(40) are constant parameters fitted to the data. In the case of 3-coupled channels, the available data is not sufficiently constraining and several different sets of parameters can provide comparable fits. Two different sets of parameters were obtained in refs. (called A and B) and two further sets in ref. (called E and F). Sets A, B and E generate fits with comparable $`\chi ^2`$ with the set of data considered in ref.. Set F has a good $`\chi ^2`$ at low energy only. Close to 1.4 GeV it has a very narrow resonance which, perhaps, could be interpreted as a glueball. Although not producing a very good $`\chi ^2`$, the authors of ref. suggest that this scenario is not totally excluded by the data.
The data which was used in these fits consists of a) the set of $`\pi \pi `$ phase shifts $`\delta _\pi (E)`$ and inelasticities $`\eta _\pi (E)`$ as determined in ref. in the energy range $`0.6\le E_{\pi \pi }\le 1.6`$ GeV and b) the set of phases $`\varphi _{12}(E)`$ of the $`\pi \pi \to K\overline{K}`$ amplitudes from the particular experiment of Cohen et al.. One must keep in mind here that there is some discrepancy in the lower energy part between the result of this experiment and others, notably by Etkin et al., as far as the phase is concerned. This point is discussed in some detail by Au et al., whose K-matrix parametrisation could more easily reproduce the latter phase results. Regrettably, the absolute values of the $`\pi \pi \to K\overline{K}`$ amplitudes, which are also available from experiment, were not included in the fits of refs.. The various parameter sets differ in the behaviour of the phase-shift in the third channel, $`\delta _3(E)`$, which is unconstrained by experiment, and also, to some extent, in the detailed structure of inelasticities. These differences, as we will see, will result in fairly different behaviour of the spectral functions as well, so that the sum rule (19) appears as an interesting theoretical constraint in this kind of analysis. The T-matrices generated from this model do not satisfy the asymptotic constraints eq.(38), either in the 2-channel case or (for any of the parameter sets discussed above) in the 3-channel case. Thus, they cannot be used up to infinite energies for our purposes. We must impose the proper asymptotic behaviour, i.e. that the eigen-phase shifts must sum to $`2\pi `$ in the case of two channels and $`3\pi `$ for three channels, by hand<sup>3</sup><sup>3</sup>3These are the minimal asymptotic values which ensure existence of a solution. We will assume that a possible further rise above the minimal values can only occur for $`\sqrt{s}>2`$ GeV and will have no influence on lower energy results..
For this purpose, we have introduced a cutoff energy $`E_0`$. For $`E\le E_0`$ the T-matrix is computed from the model, while for $`E>E_0`$ the phase-shifts are interpolated as follows $$\delta _\pi (E)=n\pi +(\delta _\pi (E_0)-n\pi )f\left(\frac{E}{E_0}\right),\qquad \delta _i(E)=\delta _i(E_0)f\left(\frac{E}{E_0}\right),\quad i\ge 2,$$ (41) with $`n`$ the number of channels and the cutoff function $`f(x)=2/(1+x^m)`$. In practice, we have taken $`E_0=1.5`$ GeV, which ensures a smooth rise of $`\delta _\pi (E)`$, and $`m=3`$. Changing these parameters will modify the details of the shape of the spectral function in the higher energy region. Inelasticities are computed from the model in the whole energy range. Other elements of the S-matrix can then be deduced from unitarity and continuity. 6. Results A. Two-channel models We first calculate the scalar form-factors and the spectral function from $`\pi \pi -K\overline{K}`$ 2-channel models. The result for the spectral function, using the T-matrix model of Au et al.<sup>4</sup><sup>4</sup>4As observed in ref., upon using the values of the parameters at the precision given in the Au et al. paper, a spurious very narrow resonance appears close to the $`K\overline{K}`$ threshold: we removed this resonance by linearly interpolating the $`\pi \pi `$ phase-shift on both sides. If not removed, the spectral function would be identical to that shown in the figure except at the very position of the resonance, where a very narrow dip would appear., is shown in Fig.1, together with the result from the 2-channel version of the potential model of ref.. Consider first the region $`\sqrt{s}\le 1`$ GeV: there, the spectral functions from the two models have the same sign and are rather similar in shape.
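The interpolation (41) is continuous at $`E_0`$ because $`f(1)=1`$, and drives $`\delta _\pi `$ to $`n\pi `$ at large energy; a minimal sketch (the value taken for $`\delta _\pi (E_0)`$ is purely illustrative):

```python
import math

E0, m, n = 1.5, 3, 3                    # cutoff energy (GeV), exponent, channels

def f(x):                               # cutoff function f(x) = 2/(1 + x^m)
    return 2.0 / (1.0 + x**m)

def delta_pi(E, delta_at_E0):           # eq.(41), valid for E >= E0
    return n*math.pi + (delta_at_E0 - n*math.pi) * f(E / E0)

assert f(1.0) == 1.0                                  # continuity at E = E0
assert abs(delta_pi(E0, 2.0) - 2.0) < 1e-12           # matches the model value there
assert abs(delta_pi(1e3, 2.0) - n*math.pi) < 1e-5     # tends to n*pi at high energy
```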
For sum rule applications it is useful to introduce the following spectral function integrals over this energy region $$I_n=16\pi \int _{4M_\pi ^2}^{4M_K^2}\frac{Im\mathrm{\Pi }_6(s)}{s^n}\,ds.$$ (42) Some numerical results for the integrals $`I_n`$, $`n=0,1`$ are shown in table 1 below. The predictions from the two models are seen to differ by less than 10% for these quantities. We have also computed, for these two models, the low-energy observables associated with the pion form-factors. In the neighbourhood of $`s=0`$, one defines (we follow the notations of ref.) $$F_1(s)=F_1(0)\left[1+\frac{1}{6}<r^2>_s^\pi s+c_\pi s^2+\cdots \right],$$ (43) and, similarly, for the matrix element of the $`\overline{s}s`$ current $$\sqrt{\frac{2}{3}}\overline{M}_K^2G_1(s)=d_Fs\left[1+b_\mathrm{\Delta }s+\cdots \right].$$ (44) As shown in ref., the parameter $`d_F`$ is proportional to the derivative of $`F_\pi `$ with respect to the strange quark mass. Upon using the T-matrix from Au et al., we have verified that our calculation reproduces the results obtained previously. The numbers are displayed in table 2. We also show the results corresponding to the T-matrix from ref.. The numbers quoted in the table correspond to an improved T-matrix in which the $`\pi \pi `$ phase shift is constrained in the low-energy region $`\sqrt{s}\le 0.6`$ GeV to match the predictions from CHPT at two loops for the scattering length and the scattering range, i.e., $`a_0^0=0.21`$, $`b_0^0=0.26M_{\pi ^+}^{-2}`$. If we do not make this modification, the results from the T-matrix of ref. are only slightly different: for instance, one would have $`<r^2>_s^\pi =0.580\,fm^2`$, $`d_F=0.075\,\mathrm{GeV}^{-2}`$, reflecting the reasonable low-energy behaviour of the T-matrix in this particular model. These results are compatible with the known low-energy coupling constants from CHPT. 
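The integrals (42) are simple quadratures once the spectral function is known on the interval $`[4M_\pi ^2,4M_K^2]`$. As a sketch (with a purely illustrative constant spectral function rather than the models discussed in the text), one can check the quadrature against a closed-form result:

```python
import numpy as np

M_PI, M_K = 0.13957, 0.49368   # charged-pion and kaon masses in GeV

def trapezoid(y, x):
    # Simple trapezoidal rule (kept explicit for portability).
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def I_n(im_pi6, n, num=20001):
    """Spectral integrals I_n = 16*pi * int_{4 M_pi^2}^{4 M_K^2} Im Pi_6(s)/s^n ds (eq. 42)."""
    s = np.linspace(4 * M_PI**2, 4 * M_K**2, num)
    return 16 * np.pi * trapezoid(im_pi6(s) / s**n, s)

# Cross-check against a case with a closed form: for a (purely
# illustrative) constant spectral function Im Pi_6 = 1,
#   I_0 = 16*pi*4*(M_K^2 - M_pi^2)  and  I_1 = 16*pi*log(M_K^2/M_pi^2).
I0 = I_n(lambda s: np.ones_like(s), 0)
I1 = I_n(lambda s: np.ones_like(s), 1)
print(I0, I1)
```

In practice `im_pi6` would be replaced by the spectral function built from the form-factors of the models under discussion.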
At order one loop, the chiral expansions of $`<r^2>_s^\pi `$ and $`d_F`$ involve the constants $`L_4`$ and $`L_5`$: $$<r^2>_s^\pi =\frac{24}{F_\pi ^2}\left\{2L_4(\mu )+L_5(\mu )-\frac{1}{64\pi ^2}\left[\mathrm{log}\frac{M_\pi ^2}{\mu ^2}+\frac{1}{4}\mathrm{log}\frac{M_K^2}{\mu ^2}+\frac{4}{3}\right]\right\}$$ (45) $$d_F=\frac{8\overline{M}_K^2}{F_\pi ^2}\left\{L_4(\mu )-\frac{1}{256\pi ^2}\left[1+\mathrm{log}\frac{\overline{M}_K^2}{\mu ^2}\right]\right\}.$$ (46) While $`L_4`$ is not easily determined elsewhere, $`L_5`$ can be extracted from the ratio $`F_K/F_\pi `$, which gives $`L_5(M_\rho )=(1.4\pm 0.5)\times 10^{-3}`$. Using eqs.(45),(46) and the results from Table 2, one deduces $$L_4(M_\rho )\simeq 0.4\times 10^{-3},\qquad L_5(M_\rho )\simeq 1\times 10^{-3}.$$ (47) The value of $`L_4`$ can be considered as a prediction, and the result for $`L_5`$ appears to be compatible with $`F_K/F_\pi `$. One must note, though, that this agreement might be somewhat fortuitous, because the error of this determination is rather large. If we assume, for instance, 10% relative errors on $`<r^2>_s^\pi `$ and on $`d_F`$, the resulting uncertainty on $`L_5`$ would be $`\mathrm{\Delta }L_5(M_\rho )=\pm 0.7\times 10^{-3}`$. In the region $`E>1`$ GeV, by contrast, the spectral functions from the two models differ considerably, see Fig.1. The one corresponding to the model of ref. exhibits a strong resonance effect. Its contribution to the integral goes in the sense of canceling the positive contribution from the $`f_0(980)`$. This is qualitatively as expected from the WSR (19) and indicates that it seems possible to satisfy this constraint in a 2-channel model. Quantitatively, however, if one uses the set of parameters of ref. 
without alteration, one finds that the contribution from the $`f_0(1500)`$ is somewhat too strong and overcompensates that of the $`f_0(980)`$. At any rate, in the 1.5 GeV region it is no longer a good approximation to retain only two channels in the unitarity relations. Other two-body channels are open, like $`\eta \eta `$, and experiment indicates significant coupling of the $`f_0(1500)`$ to the $`4\pi `$ channel as well. We will now investigate how such additional channels can affect the results, in an approximation where they are represented by a single effective additional channel. B. Three-channel models We have computed the spectral function based on the 3-channel T-matrix model from ref., using several parameter sets determined in this reference and in a subsequent one. Results are shown in Figs.2-4. Fig.2 corresponds to the parameter set A from ref. (we recall that set B from this reference was discarded because $`T_{12}`$ has an unphysical low-energy pole in this case), and Figs.3 and 4 correspond to the sets E and F from ref., respectively. Again, let us consider first the energy region $`\sqrt{s}\le 1`$ GeV: there, the spectral functions from the 3-channel models are comparable to those from the 2-channel ones. This can be seen, for instance, from the integrals $`I_0`$ and $`I_1`$ displayed in table 1. One can also see from Table 2 that the results for $`<r^2>_s^\pi `$ are rather stable. A somewhat less stable quantity is $`d_F`$, the derivative of the pion form-factor of the $`\overline{s}s`$ current, which increases in the 3-channel model for parameter sets A and E. This results in a slight increase of $`L_4`$ and a significant decrease of $`L_5`$, which becomes too small and incompatible with $`F_K/F_\pi `$. On the contrary, a large value of $`L_5`$ emerges if one uses parameter set F. 
Keeping in mind the uncertainty in the determination of $`L_5`$ from the scalar form-factors, none of the 3-channel parameter sets considered here is as satisfactory as the 2-channel models as far as the very low energy behaviour of the form-factors is concerned. Let us now consider the energy region $`E>1`$ GeV. One observes from Figs.2-4 that a variety of shapes can be generated from different parameter sets. Set F (see Fig.4) has a negatively contributing resonance, but it is much too strong and does not obey the WSR eq.(19). Set A has a positively contributing resonance and does not obey the WSR constraint either. Set E (see Fig.3) displays a more complicated structure: the $`f_0(1500)`$ also has a positive contribution, but there is a wide negative contribution centered at 1.7 GeV, and the WSR is approximately obeyed. C. Estimate of $`\overline{\mathrm{\Pi }}_6(0)`$ The main conclusion from the above results is that while there seems to be reasonable agreement on the shape of the spectral function in the energy range $`\sqrt{s}\le 1`$ GeV, its structure above 1 GeV is subject to considerable uncertainty. In models with more than two coupled channels the parameters are not sufficiently constrained by the experimental $`\pi \pi `$ and $`K\overline{K}`$ data. Another source of uncertainty in those models, which again can be checked to affect the spectral function above one GeV, concerns the values of the form-factors at the origin, $`F_i(0),G_i(0),i\ge 3`$, which are not given by chiral symmetry. At least, we have seen that there exist models which fit the data and can also accommodate the WSR constraint. In order to calculate $`\overline{\mathrm{\Pi }}_6(0)`$ from the spectral representation eq.(21) one needs, in principle, to know the spectral function both below and above 1 GeV. However, the lower energy range is expected to generate the largest contribution. 
Qualitative information on the high energy sector, such as the existence of the WSR constraint and the experimental positions of the resonances, is sufficient if one is not asking for very high precision. Firstly, we expect the contribution to $`\overline{\mathrm{\Pi }}_6(0)`$ from the range $`\sqrt{s}>1`$ GeV to be negative, giving the upper bound $$\overline{\mathrm{\Pi }}_6(0)<\frac{1}{16\pi ^2}I_1.$$ (48) The most plausible scenario is that of a contribution dominated by a single resonance around $`\sqrt{s}\simeq 1.5`$ GeV. In this scenario, the following estimate of $`\overline{\mathrm{\Pi }}_6(0)`$ is valid, $$\overline{\mathrm{\Pi }}_6(0)\simeq \frac{1}{16\pi ^2}\left(I_1-\frac{I_0}{(1.5)^2}\right),$$ (49) in which the WSR has been used. In this case, the correction from the higher energy range is approximately 30%. The other possibility is that several resonances, the $`f_0(1500)`$ and the $`f_0(1700)`$, play a role in the sum rules. If the two resonances make negative contributions, one expects the correction to $`\overline{\mathrm{\Pi }}_6(0)`$ to be smaller than 30%, because the $`1/s`$ factor suppresses the $`f_0(1700)`$ contribution. If the contribution from the $`f_0(1500)`$ is positive, the correction is even smaller (this was realised in one of the 3-channel models considered above). The last possibility is that of a negative $`f_0(1500)`$ and a positive $`f_0(1700)`$, in which case the contribution from the higher energy region is largest, but simple estimates like (49) show that a 50% correction is a generous upper limit. 
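The size of the high-energy correction implied by the single-resonance estimate (49), relative to the upper bound (48), can be illustrated with a few lines of arithmetic. The values of $`I_0`$ and $`I_1`$ below are hypothetical placeholders (not the table 1 numbers), chosen so that the correction comes out near the 30% quoted above:

```python
import numpy as np

def pi6_estimate(I0, I1):
    """Upper bound (48) and single-resonance estimate (49) for Pi_6-bar(0),
    with the resonance placed at sqrt(s) = 1.5 GeV."""
    upper = I1 / (16 * np.pi**2)
    estimate = (I1 - I0 / 1.5**2) / (16 * np.pi**2)
    return estimate, upper

# Hypothetical values for the low-energy integrals (NOT the table 1
# numbers), chosen only so the high-energy correction is near 30%.
I0, I1 = 1.0, 1.5
est, up = pi6_estimate(I0, I1)
print(est, up, 1.0 - est / up)   # last number: relative size of the correction
```

The relative correction is simply $`I_0/[(1.5)^2I_1]`$, so it depends only on the ratio of the two low-energy integrals.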
This gives us a lower bound on $`\overline{\mathrm{\Pi }}_6(0)`$, $$\overline{\mathrm{\Pi }}_6(0)>\frac{1}{32\pi ^2}I_1.$$ (50) From these considerations and the numbers of table 1 we infer that the value of $`\overline{\mathrm{\Pi }}_6(0)`$ must lie in the following range $$2<16\pi ^2\overline{\mathrm{\Pi }}_6(0)<6.$$ (51) Using chiral perturbation theory to one loop, eq.(10), this result can be recast into an estimate of the coupling constant $`L_6`$ $$0.4<10^3L_6(M_\eta )<0.8.$$ (52) We note that the bound (18) obtained in sec.3 is satisfied. This number can be compared with the estimate of ref. $$L_6(M_\eta )=(0.0\pm 0.3)\times 10^{-3}.$$ (53) The central value there is obtained from the assumption that the OZI rule applies. Indeed, the OZI rule implies that $`\stackrel{~}{R}_{32}`$ is identically equal to one, and inserting $`L_6(M_\eta )=0`$ in eq.(4) one finds $`\stackrel{~}{R}_{32}=0.96`$, which is very close to one. The value of $`L_6`$ that we obtained from the sum rule implies the following result for the ratio of quark condensates $`\stackrel{~}{R}_{32}`$ $$\stackrel{~}{R}_{32}\simeq 1-0.54\pm 0.27.$$ (54) In order to obtain this estimate, we have used for $`L_6`$ the central value which emerges from the sum rule discussion above and, for the error, we have used the same value as that estimated in ref. (see eq.(53)). Within the substantial error band, the main observation is that the deviation of the quark condensate ratio from one is negative, and it seems to be rather large. 7. Summary We started by noting that the sensitivity of the quark condensate to $`N_f`$ can be tested by studying its variation as a function of the strange quark mass. This variation may be related to the correlation function $`\mathrm{\Pi }_6(q^2)`$. Two different expressions for $`\mathrm{\Pi }_6(0)`$ are equated, one based on chiral perturbation theory and one which uses a dispersive representation. 
We then discussed the spectral function which enters this dispersive integral. We first argued that $`Im\mathrm{\Pi }_6`$ satisfies a Weinberg-type sum rule. This sum rule essentially relates resonance contributions from the two energy regions, $`0\le s\le 1`$ $`\mathrm{GeV}^2`$ and $`1\le s<4`$ $`\mathrm{GeV}^2`$. In the first energy region the spectral function can be expressed with good accuracy in terms of the scalar form-factors of the pion and the kaon. In turn, these form-factors can be constructed from experimental scattering data on $`\pi \pi `$ and $`K\overline{K}`$ following the method of ref.. The determination of the spectral function in the higher energy range is more uncertain. We considered the prediction from a model which treats the $`4\pi `$ channel as an effective two-body channel. Including this third channel in the unitarity relations was found to have a relatively minor influence on the results in the lower energy region, but a strong influence on the region above one GeV. The dispersive integral, fortunately, receives its main contribution from the lower energy range. Using experimental information on the positions of the resonances, as well as the WSR constraint, allowed us to obtain an estimate of $`\overline{\mathrm{\Pi }}_6(0)`$. The conclusion of this analysis is that the properties of the $`f_0(980)`$ meson translate into a value of the coupling constant $`L_6`$ which is significantly different from that expected from the OZI rule (or, alternatively, from large $`N_c`$ considerations). If one uses the central value obtained for $`L_6`$, one finds that the condensate $`\overline{u}u`$ decreases by as much as a factor of two as one decreases the strange quark mass from its physical value down to zero. A qualitatively similar behaviour is expected if one varies $`N_f`$ from $`N_f=2`$ to $`N_f=3`$. This surprising result possibly suggests that $`N_f=3`$ is not extremely far from a chiral phase transition point. 
One must bear in mind, however, that the relationship that we used between $`\mathrm{\Pi }_6(0)`$ and the condensate ratio receives corrections from two-loop CHPT. It remains to be seen whether these are significant or not. Acknowledgments Robert Kaminski is thanked for discussions and for communicating several data files. Jan Stern is thanked for discussions, suggestions and comments on the manuscript. This work is partly supported by the EEC-TMR contract ERBFMRXCT98-0169. Appendix: Numerical method The general idea for solving a linear integral equation is to approximate it by an ordinary linear system of equations by discretising the integral. The main difficulty in the case of the MO equation is to handle the principal-value integral with high accuracy. Let us illustrate the method we have used on the one-channel MO equation; the generalisation to several channels is straightforward. First, one can transform the equation into one for the real part of the form-factor, $`R(s)=Re(F(s))`$, $$R(s)=\frac{1}{\pi }\int _{4M^2}^{\mathrm{\infty }}ds^{\prime }\frac{1}{s^{\prime }-s}X(s^{\prime })R(s^{\prime }),\qquad X(s^{\prime })=\mathrm{tan}\delta (s^{\prime }),$$ (55) where the integral is understood as a principal value. It is useful to split the integration region into several sub-intervals in order to accommodate fast variations of the integrand (we have used up to seven intervals in our numerical work). Then, every sub-interval $`[a,b]`$ is mapped to $`[-1,1]`$ and the quantity $`X(s^{\prime })R(s^{\prime })`$ is expanded over a basis of Legendre polynomials. This allows us to perform the principal-value integration in (55) using the exact formula $$\int _{-1}^1du\,\frac{P_L(u)}{u-z}=-2Q_L(z),$$ (56) where $`Q_L(z)`$ is the so-called Legendre function of the second kind. It is crucial, in order to ensure the success of the calculation, that it be computed to very high accuracy. An algorithm based on using the recursion relations in the forward direction if $`|z|<1`$, and in the backward direction otherwise, proves adequate. 
One obtains a discretised approximation to the integral over $`[a,b]`$, $$\int _a^bds^{\prime }\frac{1}{s^{\prime }-s_k}X(s^{\prime })R(s^{\prime })\simeq \underset{i=1}{\overset{N}{\sum }}\widehat{W}_i\left[1+\frac{2(s_k-b)}{b-a}\right]X(s_i)R(s_i),$$ (57) where $$s_i=\frac{1}{2}\left(a+b+(b-a)u_i\right),\qquad \widehat{W}_i[z]=-w_i\underset{j=0}{\overset{N-1}{\sum }}(2j+1)P_j(u_i)Q_j(z),$$ (58) and $`u_1,\dots ,u_N`$ are the $`N`$ Gauss-Legendre integration points (i.e., the zeros of $`P_N(u)`$) and $`w_1,\dots ,w_N`$ the associated weights. In the case where $`b=\mathrm{\infty }`$ (last sub-interval), we use $$s_i=\frac{2a}{1-u_i}$$ (59) and $$\int _a^{\mathrm{\infty }}ds^{\prime }\frac{1}{s^{\prime }-s_k}X(s^{\prime })R(s^{\prime })\simeq \frac{2a}{s_k}\underset{i=1}{\overset{N}{\sum }}\widehat{W}_i\left[1-\frac{2a}{s_k}\right]\frac{X(s_i)R(s_i)}{1-u_i}.$$ (60) In this manner, the functional equation for the function $`R(s)`$ gets transformed into a set of $`M`$ linear equations for $`R(s_1),\dots ,R(s_M)`$, where $`M=nN`$, $`n`$ being the number of sub-intervals. We note that this is a homogeneous system which, strictly speaking, has no nontrivial solution unless the determinant vanishes. In practice, it does not exactly vanish; it is only in the limit $`N\to \mathrm{\infty }`$ that the determinant vanishes. In addition, one wants to specify the value at zero, $`R(0)`$, and this generates one additional equation, which is non-homogeneous. A solution can be defined by dropping one of the homogeneous equations. A numerically stable way of performing this is to use the singular-value decomposition of the $`(M+1)\times M`$ matrix of the linear equation system. 
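A minimal Python sketch of this quadrature scheme, for a single sub-interval mapped to $`[-1,1]`$, might look as follows. It uses the principal-value identity $`P\int _{-1}^1P_L(u)/(u-z)\,du=-2Q_L(z)`$ for $`|z|<1`$ together with the forward recursion for $`Q_L`$; the paper's actual implementation may differ in details such as sign conventions and the backward recursion for $`|z|>1`$.

```python
import numpy as np

def legendre_Q(L, z):
    """Legendre functions of the second kind Q_0..Q_L at real |z| < 1
    (principal-value branch), computed by the forward recursion, which
    is adequate in this region."""
    Q = np.empty(L + 1)
    Q[0] = 0.5 * np.log((1.0 + z) / (1.0 - z))
    if L > 0:
        Q[1] = z * Q[0] - 1.0
    for j in range(1, L):
        Q[j + 1] = ((2 * j + 1) * z * Q[j] - j * Q[j - 1]) / (j + 1)
    return Q

def pv_weights(N, z):
    """Weights W_i such that P int_{-1}^{1} g(u)/(u-z) du ~ sum_i W_i g(u_i):
    expand g over Legendre polynomials at the Gauss-Legendre nodes and
    integrate each P_L exactly via Q_L (in the spirit of eqs. 57-58)."""
    u, w = np.polynomial.legendre.leggauss(N)
    Q = legendre_Q(N - 1, z)
    j = np.arange(N)
    W = np.empty(N)
    for i in range(N):
        # P_0..P_{N-1} evaluated at the node u_i.
        P = np.polynomial.legendre.legvander(u[i:i + 1], N - 1)[0]
        W[i] = -w[i] * np.sum((2 * j + 1) * P * Q)
    return u, W

# Check against a known principal-value integral:
#   P int_{-1}^{1} u^2/(u-z) du = 2z - 2 z^2 Q_0(z).
z, N = 0.3, 8
u, W = pv_weights(N, z)
approx = np.sum(W * u**2)
exact = 2 * z - 2 * z**2 * legendre_Q(0, z)[0]
print(approx, exact)   # agree to machine precision for polynomial integrands
```

Because Gauss-Legendre quadrature computes the expansion coefficients of low-degree polynomials exactly, the principal-value integral is reproduced exactly for such test integrands.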
We have performed several checks of the numerical calculations: a) we have verified that, upon using a T-matrix of the Truong-Willey type, the analytical result is accurately reproduced; b) we have verified the stability of the result when varying the number of integration points up to several hundred points; and c) we have also verified that the determinant of the $`n`$ independent solutions obtained numerically accurately reproduces the result which is known analytically (see sec. 4.C).
# Observation of Kosterlitz-Thouless spin correlations in the colossally magnetoresistive layered manganite La1.2Sr1.8Mn2O7 ## Abstract The spin correlations of the bilayer manganite La<sub>1.2</sub>Sr<sub>1.8</sub>Mn<sub>2</sub>O<sub>7</sub> have been studied using neutron scattering. On cooling within the paramagnetic state, we observe purely two-dimensional behavior with a crossover to three-dimensional scaling close to the ferromagnetic transition. Below $`T_C`$, an effective finite size behavior is observed. The quantitative agreement of these observations with the conventional quasi two-dimensional Kosterlitz-Thouless model indicates that the phase transition is driven by the growth of magnetic correlations, which are only weakly coupled to polarons above $`T_C`$. It is now generally accepted that colossal magnetoresistance (CMR) in doped manganese oxides involves a strong coupling between spin, charge, and lattice degrees of freedom close to the ferromagnetic transition. However, the nature of the transition and the relative importance of the different interactions in controlling the magnetotransport above the critical temperature $`T_C`$ have not been clearly established. A number of studies have indicated that the magnetic phase transition may be unconventional in CMR compounds, with the observation of an anomalous spin diffusion component below $`T_C`$ and a non-divergent correlation length . This has been attributed to the development of magnetic polarons above $`T_C`$ and their possible persistence below $`T_C`$, although it can be difficult to distinguish them from standard critical fluctuations. In other studies, conventional critical scaling of bulk properties has been observed . It is important, therefore, to study the nature of the phase transition in CMR compounds, and determine if unconventional magnetic correlations are essential to the mechanism of CMR. 
Naturally layered manganites are derived from the perovskite structure of the three-dimensional (3D) compounds by the addition of non-magnetic blocking layers . The bilayer compounds La<sub>2-2x</sub>Sr<sub>1+2x</sub>Mn<sub>2</sub>O<sub>7</sub>, in which $`x`$ represents the hole concentration on the MnO<sub>2</sub> planes, have been extensively studied in recent years because of the insights they provide into the mechanisms of CMR . The motivation of the present study is to utilize the low-dimensionality of these compounds to perform a detailed investigation of the spin correlations close to $`T_C`$ using neutron scattering. The reduced dimensionality of the spin fluctuations extends the temperature region over which critical fluctuations may be studied, and makes them easier to distinguish from other dynamic processes in the sample . This has allowed us to compare the temperature evolution of magnetic correlations with other two-dimensional (2D) systems exhibiting similar magnetic ordering. In previous work on the 40$`\%`$ hole-doped bilayer system we observed strong 2D in-plane ferromagnetic fluctuations above $`T_C`$, with evidence of competing ferromagnetic and antiferromagnetic interactions perpendicular to the planes . The in-plane correlation length $`\xi `$ was measured in scans through the 2D rods of magnetic scattering chosen to optimize the energy integration. However, these measurements are necessarily performed away from the wavevector corresponding to 3D magnetic ordering, and so were not sensitive to a possible crossover to 3D correlations. We have now investigated the correlations close to the 3D ordering wavevector as a function of temperature, both above and below $`T_C`$. 
We show that the spin correlations are quantitatively consistent with a quasi-2D XY model, exhibiting Kosterlitz-Thouless correlations above $`T_C`$ with a crossover to 3D correlations at a correlation length consistent with the known in-plane and interbilayer exchange constants. This agreement with other 2D systems indicates that the magnetic correlations develop conventionally, and that the transition is a true second-order phase transition, although there is evidence that it is smeared by weak inhomogeneity. This is in contrast to a magnetic polaron model, in which the magnetic correlations are strongly bound to charge degrees of freedom. Recent x-ray and neutron scattering results provide evidence of charge localization and the development of charge correlations above $`T_C`$ . Nevertheless, they do not have a strong influence on the magnetic correlations above $`T_C`$. Instead, the incipient charge ordering is preempted by the onset of ferromagnetic ordering, and the charge correlations collapse at $`T_C`$. The neutron scattering experiments were performed at the NIST Center for Neutron Research, using the same single crystal of the 40% hole-doped bilayer manganite La<sub>1.2</sub>Sr<sub>1.8</sub>Mn<sub>2</sub>O<sub>7</sub> (lattice parameters $`a=3.862`$ $`\AA `$ and $`c=20.032`$ $`\AA `$ at 125 K) that has already been studied in various other experiments . This compound shows a transition to long-range ferromagnetic order, with the Mn spins aligned entirely within the $`a`$-$`b`$ plane, at $`T_C\simeq 113`$ K, coincident with the metal-insulator transition observed in resistivity measurements. Details on the preparation and characterization of the sample are given by Mitchell et al. . The magnetic diffuse scattering was measured on the BT2 triple-axis spectrometer operating in two-axis mode, i.e., without analyzing the energy of the scattered neutrons. 
Scans were taken around the (002) Bragg position, where the nuclear scattering contribution is extremely weak, with a fixed incident energy of 13.7 meV and horizontal collimations of $`60^{\prime }`$-$`20^{\prime }`$-$`20^{\prime }`$, full width at half maximum (FWHM). Pyrolitic graphite was used both as monochromator and as filter against higher order contamination. In this two-axis mode, the diffuse scattering is, in the quasistatic approximation, proportional to the wavevector-dependent susceptibility $`\chi _T(𝐪)`$, where $`𝐪=𝐐-\tau `$, Q is the momentum transfer and $`\tau `$ denotes a reciprocal lattice vector of the magnetic structure. Figure 1 shows the magnetic diffuse scattering observed in scans along Q = \[$`h`$02\]. The susceptibility is well described by the usual Lorentzian profile except for temperatures close to $`T_C`$. In this region, however, there is also a strong tail in the order parameter that was attributed to an inhomogeneous broadening of the transition, common to disordered systems like the doped manganites . Because of the strong dependence of $`T_C`$ on hole doping in the bilayer manganites , such sample inhomogeneities would manifest themselves as a distribution of $`T_C`$’s. We have therefore reanalyzed the scaling behavior of the order parameter, allowing for a Gaussian distribution of $`T_C`$’s. As the inset to Fig. 1 shows, such a distribution is in excellent agreement with the observations. The newly obtained values are $`\beta `$ = 0.14(1) and $`T_C`$ = 113.2(2) K, compared to $`\beta `$ = 0.13(1) and $`T_C`$ = 111.7(2) K reported in Ref. . The derived standard deviation $`\sigma _{T_C}`$ = 1.6 K of the Gaussian $`T_C`$-distribution corresponds to less than 0.4% variation in the total hole doping . This shows that even a very small sample inhomogeneity strongly affects the measurements and has to be included in the analysis of critical quantities. 
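The reanalysis described above amounts to averaging a power-law order parameter over a Gaussian distribution of transition temperatures. A sketch of this convolution in Python, using the quoted values $`\beta =0.14`$, $`T_C=113.2`$ K and $`\sigma _{T_C}=1.6`$ K (the functional form of the order parameter below $`T_C`$ is the standard power law, assumed here for illustration):

```python
import numpy as np

def order_parameter(T, Tc=113.2, beta=0.14):
    # Power-law order parameter, M ~ (1 - T/Tc)^beta below Tc and zero above.
    t = np.clip(1.0 - T / Tc, 0.0, None)
    return t**beta

def smeared_order_parameter(T, Tc_mean=113.2, sigma=1.6, beta=0.14):
    """Order parameter averaged over a Gaussian distribution of Tc's,
    mimicking the inhomogeneity analysis described in the text."""
    Tc = np.linspace(Tc_mean - 6 * sigma, Tc_mean + 6 * sigma, 2001)
    dTc = Tc[1] - Tc[0]
    g = np.exp(-0.5 * ((Tc - Tc_mean) / sigma)**2)
    g /= np.sum(g) * dTc                     # normalize the distribution
    return np.array([np.sum(g * order_parameter(T1, Tc, beta)) * dTc
                     for T1 in np.atleast_1d(T)])

T = np.array([100.0, 113.2, 116.0])
M = smeared_order_parameter(T)
print(M)   # a tail survives above the mean Tc, unlike the sharp transition
```

The smeared curve is nonzero slightly above the mean $`T_C`$, which is exactly the "tail" in the order parameter attributed to inhomogeneity.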
An analysis of the wavevector-dependent susceptibility including the $`T_C`$-distribution requires a detailed knowledge of the phase transition, i.e., a model for the temperature dependence of the correlation length $`\xi `$ and the static susceptibility $`\chi _T(0)`$, which will be derived below. For the following discussion, we use the correlation lengths obtained from a Lorentzian lineshape analysis, which are shown in Fig. 2. For these to be valid, the scans need to integrate over all fluctuation frequencies. In 2D systems, an optimal energy integration is achieved for a scattering geometry that has the scattered wavevector parallel to the c-axis , but this is not kinematically possible in scans along Q = \[$`h`$02\]. However, the agreement with previous measurements performed in the optimal configuration at Q = \[$`1+h`$,0,1.833\] (Ref. ) validates the quasistatic approximation in the present experiments. In the layered manganites, the separation of the MnO<sub>2</sub> bilayers by insulating (La,Sr)O layers leads to a strong anisotropy, both in transport properties and in magnetic correlations. In a simple nearest-neighbor exchange Hamiltonian for the magnetic interactions, one therefore expects large differences between the various exchange constants: $`J_1`$ between spins within the same plane, $`J_2`$ between spins in different layers within a bilayer, and $`J_3`$ between spins in different bilayers. This quasi-2D behavior has been verified by spin-wave measurements, which yield $`J_1/J_3\simeq 150`$, similar to the anisotropy observed in the resistivity . To our knowledge, the critical behavior of quasi-2D bilayer systems has not yet been investigated thoroughly, but there exist detailed theoretical and experimental studies of single-layer quasi-2D XY magnets (Q2DXY), in which there is a strong exchange anisotropy $`J/J^{\prime }`$, where $`J`$ and $`J^{\prime }`$ are the in-plane and interlayer exchange constants, respectively . 
In such systems, both above and below $`T_C`$, spin fluctuations on length scales that are small compared to the characteristic length $`L_{\text{eff}}\simeq \sqrt{J/J^{\prime }}`$ (in units of the nearest-neighbor Mn-Mn distance $`a`$) are not affected by the interlayer coupling and are therefore purely 2D. When approaching $`T_C`$ from above, one expects the correlation length and the static susceptibility $`\chi _T(𝐪=0)`$ to increase according to the Kosterlitz-Thouless expressions $$\xi /a=\xi _0\mathrm{exp}[b(T/T_{\text{KT}}-1)^{-1/2}]$$ (1) and $$\chi _T(0)=C\mathrm{exp}[B(T/T_{\text{KT}}-1)^{-1/2}],$$ (2) where $`T_{\text{KT}}`$ is the topological ordering temperature of the 2D XY model . Once the interlayer interaction becomes important, i.e., when the correlation length reaches the order of $`L_{\text{eff}}`$, a crossover to 3D scaling is expected. Renormalization group theory estimates a 3D ordering temperature $`T_C`$ that is related to $`T_{\text{KT}}`$ by $$T_C=T_{\text{KT}}[1+(b/\mathrm{ln}L_{\text{eff}})^2].$$ (3) Below $`T_C`$, the correlation length decreases rapidly to $`L_{\text{eff}}`$, where spin fluctuations are again unaffected by the interlayer coupling and the correlation length remains constant. From this point on, although the magnetization is 3D, the fluctuations are 2D and the system can be modeled by a 2D system of effective size $`L_{\text{eff}}`$ . This behavior has been observed, for example, in Rb<sub>2</sub>CrCl<sub>4</sub> . As is shown in Fig. 2, our observations are in remarkable agreement with the above predictions of the Q2DXY model. From a least-squares fit of Eq. (1) to the observed correlation length above 120 K, we obtain $`\xi _0=0.3(1)`$, $`T_{\text{KT}}`$ = 64(5) K, and $`b=2.1(2)`$, in excellent agreement with the theoretical value of $`b\simeq 1.9`$ . For the static susceptibility we obtain $`B=3.9(4)`$, very close to the theoretical value $`B=b(2-\eta )`$, where the critical exponent $`\eta <1/4`$. 
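The internal consistency of these fitted values is easy to check: inverting Eq. (1) gives the temperature at which $`\xi `$ reaches a given length, and Eq. (3) gives the predicted 3D ordering temperature. A Python sketch, using the fit results just quoted together with $`a=3.862`$ $`\AA `$, $`L_{\text{eff}}=\sqrt{150}`$, and the crossover length of about 12 $`\AA `$ discussed below:

```python
import numpy as np

# Lattice constant (Angstrom) and the KT fit values quoted in the text.
a, xi0, b, T_KT = 3.862, 0.3, 2.1, 64.0

def xi(T):
    # Kosterlitz-Thouless correlation length, Eq. (1), in Angstrom.
    return a * xi0 * np.exp(b * (T / T_KT - 1.0)**-0.5)

def T_of_xi(xi_target):
    # Closed-form inversion of Eq. (1).
    return T_KT * (1.0 + (b / np.log(xi_target / (a * xi0)))**2)

# Temperature at which xi reaches the observed crossover length ~12 Angstrom:
T_cross = T_of_xi(12.0)
# 3D ordering temperature from Eq. (3) with L_eff = sqrt(J1/J3) = sqrt(150):
T_C = T_KT * (1.0 + (b / np.log(np.sqrt(150.0)))**2)
print(T_cross, T_C)   # both land within a few kelvin of the measured 113.2 K
```

Both numbers come out within a few kelvin of the measured $`T_C=113.2`$ K, which is the consistency argument made in the text.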
Using the fitted values for $`b`$ and $`T_{\text{KT}}`$ and $`J/J^{\prime }=150`$, we obtain $`T_C\simeq 109`$ K, consistent with the value $`T_C=113.2`$ K derived from the order parameter. The deviation from the purely 2D behavior above $`T_C`$ occurs at a correlation length $`\xi _{\text{2D}}\simeq 12`$ $`\AA `$, which also corresponds to the effective finite size observed below $`T_C`$. This is in rough agreement with the value of $`L_{\text{eff}}\simeq 40`$ $`\AA `$ predicted from the measured in-plane and interbilayer exchange constants. In the region around $`T_C`$, we approximate the behavior of both the correlation length and the static susceptibility with an abrupt crossover to standard 3D scaling, although a more complete description would have to include a crossover scaling function . This model for the correlation length and the static susceptibility (shown as solid lines in Figs. 2 and 3) can now be used for an analysis of the peak lineshapes that includes the $`T_C`$-distribution. Figure 1 shows that this model gives a good description of the lineshape even close to $`T_C`$. It is important to note that the model includes a divergence at $`T_C`$ of both $`\xi `$ and $`\chi _T(0)`$, which is not apparent in the observed data. From a simple Lorentzian lineshape analysis, one would conclude erroneously that the correlation length does not diverge at $`T_C`$. Although the inhomogeneous broadening prevents a reliable determination of the critical scaling within the 3D regime, it is evident from the above analysis that the magnetic correlations are quantitatively consistent with a conventional model of quasi-2D behavior. To underline the significance of this conclusion for our understanding of CMR, we compare our results with the recent observation of charge correlations in the same compound . In the paramagnetic phase, we have observed a growth of diffuse x-ray and neutron scattering, arising from the strain field produced by quasistatic polarons. 
Furthermore, broad incommensurate peaks modulating this diffuse scattering show that these polarons become increasingly correlated with each other, producing short range charge ordering on lowering the temperature towards $`T_C`$. Both the quasistatic polaronic scattering and the charge correlation peaks start to collapse just above $`T_C`$, and disappear in the ferromagnetic, metallic state (see Fig. 3). It has been argued that these polarons explain the low hole mobility in the paramagnetic state, which cannot be due to double exchange alone . The present observations are strong evidence that the phase transition in La<sub>1.2</sub>Sr<sub>1.8</sub>Mn<sub>2</sub>O<sub>7</sub> is driven by the growth of magnetic correlations, which are only weakly coupled to the polarons above $`T_C`$. The polarons are likely to induce some exchange disorder, but it is evidently not sufficient to disrupt the development of critical magnetic fluctuations. This conclusion is not consistent with a magnetic polaron model, in which the spin correlations are strongly coupled to the charge degrees of freedom. In such models, the ferromagnetic phase transition is induced by a transition from small to large polarons , but it is unlikely that such a transition would mimic the scaling behavior observed here. This does not mean that such models cannot be relevant to other CMR compounds, particularly those with much stronger electron-phonon coupling, but they are not essential to a description of the CMR process. Although the spin-charge coupling is weak above $`T_C`$, the magnetic correlations ultimately drive the metal-insulator transition; once the spin correlations extend over a large enough region, the double exchange interaction can overcome the mechanism responsible for localizing the charges, inducing the polaron collapse . 
This may affect the critical scaling in the 3D regime and could be responsible for the unusually low value of $`\beta `$ (in quasi-2D XY systems, $`\beta `$ is predicted to be approximately 0.23 ). Our results demonstrate that the critical properties of the bilayer CMR manganite La<sub>1.2</sub>Sr<sub>1.8</sub>Mn<sub>2</sub>O<sub>7</sub>, both above and below $`T_C`$, are in quantitative agreement with an effective finite-size 2D XY model. The magnetic and charge degrees of freedom are therefore only weakly coupled except close to $`T_C`$, where the growth of magnetic order delocalizes the polaronic charges. It is possible that some form of charge ordering would occur at lower temperature if it were not preempted by the magnetic ordering. From these observations, we conclude that magnetic polaron models are not appropriate to the present bilayer compound and are therefore not universal to CMR. This work was supported by the U.S. DOE BES DMS W-31-109-ENG-38 and NSF DMR 97-01339.
no-problem/9909/astro-ph9909291.html
ar5iv
text
# Degeneracies of the radial mass profile in lens models ## 1 The mass sheet degeneracy The degeneracies examined in this poster are a consequence of the so-called “mass sheet degeneracy”. Falco et al. (1985) and Gorenstein et al. (1988) have shown that a lens model can be transformed to an equivalent model by scaling it with a factor of $`(1-\kappa )`$ and adding a constant surface mass density $`\kappa `$. Constant mass densities as well as external shear do not contribute to the time delay. As $`H_0\mathrm{\Delta }t`$ depends linearly on the mass density, $`H_0`$ also scales with $`(1-\kappa )`$. This has the effect that only an upper limit for $`H_0`$ can be determined when additional mass sheets may be present. When modelling lens systems, no decision between different models can be made if the models are equivalent just for the positions of the observed images. We want to examine the impact of the degeneracy for simple parametric lens models. ## 2 Perturbed spherical lens models We use the very simple approach of a spherically symmetric mass distribution plus external shear $`\gamma `$ to illustrate the degeneracy. For a radial deflection angle of $`\alpha (r)`$ we can find an equivalent model by scaling $`\alpha (r)`$ and the shear by $`(1-\kappa )`$ and adding the surface mass density which gives a contribution of $`\kappa r`$ to the radial deflection angle. Whether this leads to a degeneracy of model parameters depends on the number of images, the image configuration and the number of parameters used to describe the radial deflection angle $`\alpha `$. A very simple yet important case is the spherical power law model with a deflection angle of $`\alpha =\alpha _0r^{\beta -1}`$. It has two parameters, the scale $`\alpha _0`$ and the mass index $`\beta `$. Special values of $`\beta `$ are 0 (point mass), 1 (isothermal sphere) and 2 (constant mass sheet). 
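The invariance behind the mass sheet degeneracy can be checked numerically. The sketch below (hypothetical Python, point-mass deflection law, all numbers in Einstein-radius units and chosen arbitrarily) verifies that if $`\theta `$ is an image of a source at $`\beta `$ for deflection $`\alpha (\theta )`$, it is also an image of a source at $`(1-\kappa )\beta `$ for the transformed deflection — so image positions alone cannot distinguish the two models:

```python
# Toy illustration of the mass-sheet degeneracy (Falco et al. 1985):
# transforming alpha -> (1-kappa)*alpha + kappa*theta and rescaling the
# (unobservable) source position by (1-kappa) leaves image positions fixed.
import math

def alpha_point(theta):
    # point-mass deflection in units of the Einstein radius
    return 1.0 / theta

def alpha_sheet(theta, kappa):
    # mass-sheet transformed deflection
    return (1.0 - kappa) * alpha_point(theta) + kappa * theta

beta = 0.3                                  # assumed source position
# the two images of a point lens: theta = (beta +/- sqrt(beta^2 + 4)) / 2
images = [(beta + s * math.sqrt(beta**2 + 4.0)) / 2.0 for s in (+1.0, -1.0)]

kappa = 0.4
for theta in images:
    # lens equation beta = theta - alpha(theta) for the transformed model
    beta_new = theta - alpha_sheet(theta, kappa)
    print(theta, beta_new, (1.0 - kappa) * beta)
```

Both images of the original model solve the transformed lens equation with the source rescaled by $`(1-\kappa )`$, in line with the degeneracy described above.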
This simple model may not describe the mass distribution of real galaxies exactly but is quite useful to illustrate the parameter degeneracies which exist for other models as well. ## 3 Application to Einstein cross systems In systems like the Einstein cross 2237+0305, the images are located at more or less the same distance (about one Einstein radius) from the centre of the lens. Two spherical models are equivalent to first order if the deflection angle as well as the first derivative is the same for both. We use the Einstein radius as unit of $`r`$ and an isothermal model as reference. A model with $`\alpha =\alpha _0r^{\beta -1}`$ is equivalent to the isothermal model to first order near $`r=1`$ if $`\alpha _0=1`$ and $`\beta =1+\kappa `$. This leads to a very simple scaling of the shear, the time delay and $`H_0`$: $$1-\kappa =2-\beta =\frac{\gamma _{(\beta )}}{\gamma _{(\mathrm{iso})}}=\frac{H_{0(\beta )}}{H_{0(\mathrm{iso})}}$$ (1) We see that the Hubble constant for the more general $`\beta `$ model $`H_{0(\beta )}`$ can differ significantly from the value determined for the isothermal model $`H_{0(\mathrm{iso})}`$. Even more important is the fact that the possible systematic error which is made by assuming isothermal models when the real $`\beta `$ is different is the same for all systems of this type and does not show as scatter in the results for $`H_0`$. Comparison of our simple analytical results with numerical models from Wambsganss & Paczyński (1994) for the Einstein cross and Schechter et al. (1997) for the “triple quasar” 1115+080 show that the agreement is excellent for the former and still quite good for the latter. The case of 1115+080 is especially interesting, because it is one of the few systems with a measured time delay used for the determination of $`H_0`$ (Schechter et al. 1997). 
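The first-order equivalence can be confirmed with a few lines of arithmetic. The sketch below (assumed value of $`\kappa `$) checks that the power law $`\alpha =r^{\beta -1}`$ with $`\beta =1+\kappa `$ matches the mass-sheet-transformed isothermal deflection $`(1-\kappa )+\kappa r`$ in both value and slope at the Einstein radius, and evaluates the resulting $`H_0`$ rescaling of Eq. (1):

```python
# First-order equivalence near r = 1 between the power-law model and the
# mass-sheet-transformed isothermal model, checked by finite differences.
kappa = 0.25
beta = 1.0 + kappa

def alpha_pow(r):
    # power-law deflection, alpha_0 = 1 in Einstein-radius units
    return r ** (beta - 1.0)

def alpha_iso_transformed(r):
    # isothermal alpha = 1, scaled by (1-kappa) with the sheet term added
    return (1.0 - kappa) * 1.0 + kappa * r

h = 1e-6
slope_pow = (alpha_pow(1.0 + h) - alpha_pow(1.0 - h)) / (2.0 * h)
slope_iso = (alpha_iso_transformed(1.0 + h) - alpha_iso_transformed(1.0 - h)) / (2.0 * h)

# H_0 inferred with the beta model, relative to the isothermal value (Eq. 1)
H0_ratio = 2.0 - beta          # equals 1 - kappa
print(slope_pow, slope_iso, H0_ratio)
```

Both deflection laws equal 1 at $`r=1`$ with common slope $`\kappa `$, and the inferred $`H_0`$ is reduced by the factor $`1-\kappa `$, exactly the systematic shift discussed in the text.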
## 4 Further information The original much more detailed poster presented at the conference is available from http://www.hs.uni-hamburg.de/english/persons/wucknitz.html or on request. An article with a discussion of the degeneracy for different types of lens systems is in preparation.
no-problem/9909/hep-ph9909496.html
ar5iv
text
# Acknowledgements ## Acknowledgements C.G. is grateful to the financial support of the Service de Physique Théorique, CEA Saclay where this work has been initiated. This work was supported in part by the Director, Office of Energy Research, Office of High Energy and Nuclear Physics, Division of High Energy Physics of the U.S. Department of Energy under Contract DE-AC03-76SF00098 and in part by the National Science Foundation under grants PHY-95-14797.
no-problem/9909/astro-ph9909286.html
ar5iv
text
# Meson Synchrotron Emission from Central Engines of Gamma-Ray Bursts with Strong Magnetic Fields ## 1 Introduction Accumulating observational evidence shows the existence of astrophysical objects with extremely strong magnetic fields ∼$`10^{15}`$ G. Kouveliotou et al. (1998) have revealed that a galactic X-ray pulsar with an estimated magnetic field of $`8\times 10^{14}`$ G causes recurrent bursts of soft $`\gamma `$-rays, which are called soft gamma repeaters. Gamma-ray bursts (GRBs) with short and more intense bursts of 100 keV–1 MeV photons still remain puzzling although it has been progressively clearer that they are likely to have a cosmological origin. Some of the theoretical models of GRBs (Usov 1992, Kluźniak & Ruderman 1998, and Paczyński 1998) invoke compact objects for their central engines such as neutron stars or rotating black holes with extremely strong magnetic fields $`H\sim 10^{16}`$–$`10^{17}`$ G in order to produce an ultra-relativistic energy flow of a huge Lorentz factor $`\mathrm{\Gamma }\sim 10^3`$. Furthermore, neutron stars or black holes with strong magnetic fields have been proposed by many authors (Hillas 1984, and references therein) as an acceleration site of ultra-high-energy cosmic rays (UHECR). Milgrom & Usov (1995) suggested possible association of the two highest energy UHECRs with strong GRBs in their error boxes. An ultra-relativistic nucleus gives rise to efficient meson emission, in analogy with the canonical photon synchrotron radiation, in such a strong magnetic field because it couples strongly to meson fields as well as to an electromagnetic field. Due to its large coupling constant the meson synchrotron emission is ∼$`10^3`$ times stronger than the usual photon synchrotron radiation. Ginzburg & Syrovatskii (1965a, b) calculated the intensity of $`\pi ^0`$ emission by a proton in a given magnetic field. At that time, however, they could hardly find the astrophysical sites to which their formulae could be applied. 
In the present paper we propose that GRB central engines could be a viable site for strong meson emission. Waxman & Bahcall (1997) proposed that high energy neutrinos with ∼$`10^{14}`$ eV are produced by photomeson interactions on shock-accelerated protons in the relativistic fireball (Rees & Mészáros 1992). Paczyński & Xu (1994) suggested that charged pions, which are produced in pp collisions when the kinetic energy of the fireball is dissipated through internal collisions within the ejecta, produce a burst of ∼$`10^{10}`$ eV neutrinos. Our proposed process of meson production is different from these two, which can operate without magnetic fields. The very rapid variability time scale, ∼0.1 ms, of many GRBs implies that each sub-burst reflects the intrinsic primary energy release from the central engine (Sari & Piran 1997). Thus, the length scale of the central engine is estimated to be ∼10 km. The BATSE detector on board of the Compton Gamma-Ray Observatory found the shortest burst of a duration of 5 ms with substructure on a scale of 0.2 ms (Bhat et al. 1992). Accumulated BATSE data (Fishman & Meegan 1995) confirmed that more than 25% of total events are short ($`<2`$ s) bursts. Therefore, if the central engines of GRBs were the compact stellar objects like neutron stars or rotating black holes associated with strong magnetic fields, relativistic protons or heavy nuclei would trigger meson synchrotron emission whose decay products could provide several observational signals even before the hidden explosion energy is transported to the radiation in the later stage of a relativistic fireball. 
If the produced mesons are $`\pi ^0`$s or heavier mesons which have main decay modes onto $`\pi ^0`$s, they produce high energy photons which are immediately converted to lower energy $`\gamma `$s or $`e^\pm `$ pairs through $`\gamma \to \gamma +\gamma `$ or $`\gamma \to e^++e^{-}`$ (Erber 1966) for their large optical depth in strong magnetic fields. However, if they are the charged mesons like $`\pi ^\pm `$s, decay modes are very different and result in more interesting consequences. Since the source emitters of charged nuclei are accelerated to high energy (Hillas 1984) to some extent, very high energy neutrinos are produced by $`\pi ^+\to \mu ^++\nu _\mu \to e^++\nu _e+\overline{\nu _\mu }+\nu _\mu `$ or its mirror conjugate process for the similar mechanism as proposed by Waxman & Bahcall (1997). These neutrinos could be a signature for strong meson emission from the GRB central engines and would be detected in the very early phase of the short GRBs. We formulate the meson synchrotron emission in the next section, and the calculated results are shown and discussed in §3. ## 2 Semi-Classical Treatment of Meson Emission We follow the established semi-classical treatment (Peskin & Schroeder 1995, Itzykson & Zuber 1980) of synchrotron emission of the quantum fields interacting with classical source current in strong external fields. In our present study the meson field is second quantized, while the nucleons are not and obey classical motion. The energy spectrum of $`\pi ^0`$ synchrotron emission by a proton was first derived by Ginzburg & Syrovatskii (1965a, b) and it is given by $$\frac{dI_\pi }{dE_\pi }=\frac{g^2}{\sqrt{3}\pi }\frac{E_\pi }{\hbar ^2c}\frac{1}{\gamma _p^2}\int _{y(x)}^{\infty }K_{1/3}(\eta )𝑑\eta ,$$ (1) where $`g`$ is the strong coupling constant, $`g^2/\hbar c\simeq 14`$ (Sakurai 1967), $`E_\pi `$ is the energy of the emitted pion, and $`\gamma _p`$ is the Lorentz factor, $`\gamma _p=E_p/m_pc^2`$, of the rotating proton in a given magnetic field. 
$`K_{1/3}`$ is the modified Bessel function of order $`1/3`$. The function $`y(x)`$ is given by $$y(x)=\frac{2}{3}\frac{m_\pi }{m_p}\frac{1}{\chi }x\left(1+\frac{1}{x^2}\right)^{3/2},$$ (2) where $`m_pc^2=938MeV`$ and $`m_\pi c^2=135MeV`$ are the rest masses of proton and $`\pi ^0`$ meson, and the parameter $`\chi `$, which characterizes the synchrotron emission, is determined by the proton energy and the strength of magnetic field as $`\chi =\frac{H}{H_0}\gamma _p`$ with $`H_0\equiv \frac{m_p^2c^3}{e\hbar }=1.5\times 10^{20}G`$. In the above equations, the variable $`x`$ is introduced for mathematical simplicity, $`x=\frac{E_\pi }{E_p}\times \frac{m_p}{m_\pi }`$. Since the available energy of the pion satisfies $`m_\pi c^2\le E_\pi \le E_p`$, the corresponding variable range of $`x`$ is $`\gamma _p^{-1}\le x\le \frac{m_p}{m_\pi }`$. Applying the same treatment to vector mesons, we extensively obtain the energy spectrum of $`\rho `$ meson synchrotron emission $$\frac{dI_\rho }{dE_\rho }=\frac{g^2}{\sqrt{3}\pi }\frac{E_\rho }{\hbar ^2c}\frac{1}{\gamma _p^2}\left(1+\frac{1}{x^2}\right)\int _{y(x)}^{\infty }K_{5/3}(\eta )𝑑\eta ,$$ (3) where $`K_{5/3}`$ is the modified Bessel function of order $`5/3`$. The function $`y(x)`$ is defined by Eq. (2) by replacing $`m_\pi `$ with the $`\rho `$ meson mass $`m_\rho c^2=770MeV`$, and $`x=\frac{E_\rho }{E_p}\times \frac{m_p}{m_\rho }`$. Note that one can easily get the expression for photon synchrotron radiation in the limit of $`m_\rho \to 0`$ by replacing the strong coupling constant $`g`$ with the electromagnetic coupling constant $`e`$. The total intensity of a scalar or vector meson as a function of $`\chi `$ is obtained by integrating equation (1) or (3) over meson energies $`m_{\pi ,\rho }c^2\le E_{\pi ,\rho }\le E_p`$ or equivalently over $`\gamma _p^{-1}\le x\le \frac{m_p}{m_{\pi ,\rho }}`$. 
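The shape of the $`\pi ^0`$ spectrum in Eqs. (1)–(2) can be evaluated numerically. The sketch below (standard-library Python, dimensionless: the overall prefactor $`g^2/(\sqrt{3}\pi \hbar ^2c\gamma _p^2)`$ is omitted) uses the integral representation of $`K_{1/3}`$ and an exchange of integration order so that the tail integral reduces to a single quadrature:

```python
import math

M_RATIO = 135.0 / 938.0                      # m_pi c^2 / m_p c^2

def y_lower(x, chi):
    # Eq. (2): lower limit of the Bessel integral
    return (2.0 / 3.0) * (M_RATIO / chi) * x * (1.0 + 1.0 / x**2) ** 1.5

def k13_tail(y, n=4000, t_max=30.0):
    # Integral_y^inf K_{1/3}(eta) d eta.  Using
    # K_nu(z) = Integral_0^inf exp(-z cosh t) cosh(nu t) dt
    # and doing the eta integral analytically leaves a single t-integral of
    # exp(-y cosh t) cosh(t/3) / cosh t, evaluated by the trapezoid rule.
    h = t_max / n
    total = 0.5 * math.exp(-y)               # t = 0 endpoint (cosh 0 = 1)
    for k in range(1, n + 1):
        t = k * h
        w = 0.5 if k == n else 1.0
        total += w * math.exp(-y * math.cosh(t)) * math.cosh(t / 3.0) / math.cosh(t)
    return total * h

def spectrum_shape(x, chi):
    # dimensionless shape of Eq. (1); E_pi enters through x
    return x * k13_tail(y_lower(x, chi))

print(spectrum_shape(1.0, 10.0))
```

A useful check: $`y(x)`$ is minimized at $`x=\sqrt{2}`$, where it equals $`\sqrt{3}\,(m_\pi /m_p)/\chi `$ — exactly the argument of the exponential suppression factor that appears in the small-$`\chi `$ limit of Eq. (4).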
It is useful to give an approximate formula of the total intensity in the limit of large or small $`\chi `$: $$I_\pi =\{\begin{array}{cc}\frac{g^2}{6}\frac{m_p^2c^3}{\hbar ^2},\hfill & :\chi \gg 1\hfill \\ \frac{g^2}{\sqrt{3}}\frac{m_\pi m_pc^3}{\hbar ^2}\chi \mathrm{exp}\left(-\frac{\sqrt{3}}{\chi }\frac{m_\pi }{m_p}\right),\hfill & :\chi \ll 1\hfill \end{array}$$ (4) and $$I_\rho =\{\begin{array}{cc}\frac{27\sqrt[6]{3}}{16\pi }\mathrm{\Gamma }(5/3)\frac{2g^2}{3}\frac{m_p^2c^3}{\hbar ^2}\chi ^{2/3},\hfill & :\chi \gg 1\hfill \\ \frac{3}{2}\sqrt{\frac{3}{2}}\left(\frac{m_p}{m_\rho }\right)\left(1+\left(\frac{m_\rho }{m_p}\right)^2\right)^{1/4}\left(2\left(\frac{m_\rho }{m_p}\right)^2-1\right)^{-1}\hfill & \\ \times \frac{g^2}{\sqrt{3}}\frac{m_\rho m_pc^3}{\hbar ^2}\chi ^{3/2}\mathrm{exp}\left(-\frac{2}{3\chi }\left(1+\left(\frac{m_\rho }{m_p}\right)^2\right)^{3/2}\right),\hfill & :\chi \ll 1\hfill \end{array}$$ (5) where we have made the approximations $`K_\nu (\eta )\simeq \frac{2^{\nu -1}\mathrm{\Gamma }(\nu )}{\eta ^\nu }`$ for $`\eta \ll 1`$ ($`\chi \gg 1`$), or $`K_\nu (\eta )\simeq \sqrt{\frac{\pi }{2\eta }}\mathrm{exp}(-\eta )`$ for $`\eta \gg 1`$ ($`\chi \ll 1`$). These approximations are in reasonable agreement with exact numerical integrals within $`\pm `$ 3% for the $`\pi `$ meson and $`\pm `$ 10% for the $`\rho `$ meson at $`\chi \lesssim 0.01`$ or $`10^2\lesssim \chi `$. Let us make a short remark on our classical treatment. When the proton energy is very high or the external magnetic field is strong, i.e. $`\chi \gtrsim 1`$, the quantum effects not only in the meson field but in the source nucleon current may not be negligible. In the case of photon synchrotron radiation, quantum effects were carefully studied (Erber 1966), and the semi-classical treatment was found to be a good approximation to the exact solution within a few percent. In our treatment, the prefactor in Eq. (5) is $`\left(\frac{27\sqrt[6]{3}}{16\pi }\mathrm{\Gamma }(5/3)\right)\simeq 0.583`$. Taking the limit of $`m_\rho \to 0`$ and $`g=e`$, Eq. 
(5) is applied to photon synchrotron radiation. It is shown (Erber 1966) that this factor should be 0.5563 in a fully quantum mechanical calculation. It therefore is expected to hold true for the hadron processes as well. ## 3 Results and Discussions Figure 1a shows a comparison between the calculated spectra of scalar $`\pi ^0`$ meson emission and $`\gamma `$ synchrotron radiation. Since Usov (1992), Kluźniak & Ruderman (1998), and Paczyński (1998) suggested strong magnetic fields of order $`H\sim 10^{16}`$–$`10^{17}`$ G, which are presumed to associate with neutron stars or black holes of the GRB central engines, we here take $`H=1.5\times 10^{16}G`$. The observed Lorentz factor of the fireball is $`\mathrm{\Gamma }\sim 10^3`$, which indicates that the energy of charged particles is at least $`10^{12}eV`$. Although the acceleration mechanism in GRBs is still unknown, there are suggestions (Milgrom & Usov 1995) that the GRBs associate with UHECRs. Six events of UHECRs beyond the Greisen-Zatsepin-Kuz’min cutoff energy ∼$`10^{20}eV`$ (Hill & Schramm 1985) have been detected by the AGASA group (Takeda et al. 1998). We therefore vary the proton energy from $`E_p=10^{12}eV`$ to $`10^{22}eV`$. All calculated spectra cut off sharply at the incident proton energy $`E_{\pi ,\gamma }=E_p`$. As seen in this figure, the intensity of $`\pi ^0`$ emission for $`E_p=10^{12},10^{14}`$ and $`10^{16}eV`$, which correspond respectively to $`\chi \simeq 0.1,10`$ and $`1000`$, is $`10`$–$`10^3`$ times stronger than that of $`\gamma `$ radiation in the high energy parts of the spectra. This reflects the different coupling constants, $`g^2/e^2\sim 10^3`$. The very sharp decline of the $`\pi ^0`$ spectra at lower energy arises from the integral in Eq. (1) for finite pion mass, because $`K_\nu (\eta )\simeq \sqrt{\frac{\pi }{2\eta }}\mathrm{exp}(-\eta )`$ for $`\eta \gg 1`$, which is a good approximation in this energy region where $`y(x)\gg 1`$. 
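The two Bessel-function limits invoked above, and the classical prefactor of Eq. (5), can be checked with standard-library Python alone. The sketch below (tolerances chosen by us) builds $`K_\nu `$ from its integral representation and compares it with both asymptotic forms:

```python
import math

def bessel_k(nu, z, n=6000, t_max=40.0):
    # K_nu(z) = Integral_0^inf exp(-z cosh t) cosh(nu t) dt, trapezoid rule
    h = t_max / n
    total = 0.5 * math.exp(-z)                    # t = 0 endpoint
    for k in range(1, n + 1):
        t = k * h
        w = 0.5 if k == n else 1.0
        total += w * math.exp(-z * math.cosh(t)) * math.cosh(nu * t)
    return total * h

nu = 1.0 / 3.0

def small_eta(eta):
    # form used in the chi >> 1 limit
    return 2.0 ** (nu - 1.0) * math.gamma(nu) / eta ** nu

def large_eta(eta):
    # form used in the chi << 1 limit
    return math.sqrt(math.pi / (2.0 * eta)) * math.exp(-eta)

# classical prefactor of Eq. (5), quoted as 0.583 in the text
prefactor = 27.0 * 3.0 ** (1.0 / 6.0) / (16.0 * math.pi) * math.gamma(5.0 / 3.0)
print(prefactor)
```

Evaluating at a representative small argument ($`\eta =10^{-4}`$) and a large one ($`\eta =20`$) reproduces the asymptotic forms at the sub-percent level, and the prefactor comes out at the quoted 0.583, to be compared with the quantum value 0.5563.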
Figure 1b shows a comparison between the calculated spectra of $`\rho `$ meson emission and $`\gamma `$ synchrotron radiation. In this figure both spectra look very similar to each other, except for the absolute intensity and the sharp decline of the low energy spectra due to the finite $`\rho `$ meson mass. This is because the interactions between $`\rho `$ meson and proton and between photon and proton are of vector type. For proton energies above $`10^{14}eV`$ the intensity of $`\rho `$ meson emission is roughly a thousand times larger than that of $`\gamma `$ synchrotron radiation, reflecting again its stronger coupling constant. Figure 2 displays the calculated total intensities of synchrotron emission of $`\pi ^0`$ and $`\rho `$ mesons as a function of $`\chi `$. These are the integrated spectra shown in Figs. 1a & 1b over available meson energies. The total intensity of $`\gamma `$ synchrotron radiation is also shown in this figure. $`I_{\pi ^0}`$ exceeds $`I_\gamma `$ at $`3\times 10^{-2}<\chi <3\times 10^4`$, and $`I_\rho `$ exceeds $`I_\gamma `$ at $`\chi >0.2`$. Both $`I_{\pi ^0}`$ and $`I_\rho `$ decrease exponentially with decreasing $`\chi `$ due to their finite masses. (See the asymptotic forms at $`\chi \ll 1`$ in Eqs. (4) and (5).) This sharp decline of $`I_\rho `$ takes place at higher $`\chi `$ than that of $`I_{\pi ^0}`$ because the $`\rho `$ meson mass is larger than the pion mass. $`I_\rho `$ resembles $`I_\gamma `$ at $`\chi >1`$, except that the intensities differ from each other by a factor of about a thousand, reflecting that both $`\rho `$ meson and photon have the same vector type coupling to the proton with different coupling constants, $`g^2/e^2\sim 10^3`$. A nucleon strongly couples to the $`\pi ^\pm `$ as well as the $`\pi ^0`$ field. The interaction Hamiltonian is charge independent and $`H_{\mathrm{int}}=ig(\sqrt{2}\overline{\psi _n}\gamma _5\psi _p\varphi _{\pi ^+}+\overline{\psi _p}\gamma _5\psi _p\varphi _{\pi ^0})+c.c.`$ (Sakurai 1967). 
The initial state $`\psi _p`$ in both $`p\to n+\pi ^+`$ and $`p\to p+\pi ^0`$ processes is identical. The difference comes from the final states: Proton and $`\pi ^+`$ couple to the external magnetic field, but neutron and $`\pi ^0`$ do not. (Neutron has too small a magnetic moment.) When we describe these processes in quantum mechanics, we should use wave functions from the solution of the Dirac and Klein-Gordon equations for the nucleon and the pion separately. However, final states are more or less the same for the same charge state, aside from the different masses. These might not change the reaction amplitudes by many orders at ultra-relativistic energies where the rest mass is neglected. Note that the conjugate processes $`n\to p+\pi ^{-}`$ and $`n\to n+\pi ^0`$ can also occur when a composite nucleus like $`{}_{}{}^{56}Fe`$ orbits in the strong magnetic fields. Heavy mesons including the $`\rho `$ meson have several appreciable branching ratios for the decay onto $`\pi ^\pm `$. Let us discuss what kinds of observational signals they may make. We assume that some fraction of $`10^{51}`$–$`10^{53}`$ ergs of the gravitational or magnetic field energy is released by some unknown mechanism operating at the GRB central engine during the very short time duration, ∼0.1 ms, of the first sub-burst. We also assume that an appreciable part of this energy is deposited into the relativistic motion of the material leading to UHECRs. In a somewhat different context, Waxman & Bahcall (1997) proposed that photomeson production in the ejecta of the fireball would make a burst of ∼$`10^{14}`$ eV neutrinos. Although their proposed mechanism of meson production is completely different from ours, we can apply similar discussion on the physical consequence of $`\pi ^\pm `$ decay. Extending our previous discussions of $`\pi ^0`$ in Fig. 
2 to $`\pi ^\pm `$, we expect that a thousand times stronger intensity of high energy neutrinos than photons can be produced universally at $`0.1\lesssim \chi `$ from $`\pi ^+\to \mu ^++\nu _\mu \to e^++\nu _e+\overline{\nu _\mu }+\nu _\mu `$ and its mirror conjugate process. Since neutrinos can escape from the ambient matter, these neutrinos could be a clear signature showing strong meson synchrotron emission near the central engines of GRBs associated with extremely strong magnetic fields. The neutron emerging from $`p\to n+\pi ^+`$ inherits almost all the proton energy and can escape from the region of strong magnetic field. If there is not a dense shell surrounding the central engine, it travels ∼$`10^5\times (\gamma _n/10^{10})`$ pc before beta decay. This process may also produce a very high energy neutrino. The generic picture of GRBs (Mészáros & Rees 1993, Piran 1999) suggests that a baryon mass ∼$`10^{-5}M_{\odot }`$ is involved in a single explosion. If this is the case, such an amount of baryon mass is large enough to stop almost all neutrons before running through the ambient matter. If the central engines are neutron stars or black holes, the material ejected from these compact objects contains heavy nuclei such as oxygen and iron because these are the products from evolved massive stars. Therefore, meson emission from a heavy nucleus as well as from a proton is worth being considered. Let us consider a nucleus of total energy $`E_{\mathrm{tot}}`$, mass number A, and charge $`Z`$, in a magnetic field of strength $`H`$. The energy of each nucleon is $`E=E_{\mathrm{tot}}/A`$. When the strength of the effective magnetic field is $`H_{\mathrm{eff}}=\frac{Z}{A}H`$, the orbital trajectory of a proton is the same as the trajectory of the nucleus in the magnetic field $`H`$. The intensity of $`\pi ^0`$ emission by the nucleus should be the sum of each nucleonic contribution provided that the synchrotron emission is incoherent. 
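The per-nucleon scaling just described can be sketched numerically (assumed field strengths and energies; only the relations $`E=E_{\mathrm{tot}}/A`$, $`H_{\mathrm{eff}}=(Z/A)H`$ and $`\chi =(H_{\mathrm{eff}}/H_0)\gamma `$ are taken from the text):

```python
# Effective synchrotron parameter chi of each nucleon inside a nucleus:
# the nucleon carries E = E_tot / A and feels H_eff = (Z/A) H.
H0_GAUSS = 1.5e20        # critical field m_p^2 c^3 / (e hbar), in gauss
MPC2_EV = 938.0e6        # proton rest energy in eV

def chi_per_nucleon(E_tot_eV, H_gauss, A=1, Z=1):
    gamma = (E_tot_eV / A) / MPC2_EV       # Lorentz factor per nucleon
    return (Z / A) * (H_gauss / H0_GAUSS) * gamma

chi_p = chi_per_nucleon(1e18, 1.5e12)                 # single proton
chi_fe = chi_per_nucleon(1e18, 1.5e12, A=56, Z=26)    # 56Fe at the same E_tot
print(chi_p, chi_fe, chi_p / chi_fe)
```

At equal total energy the per-nucleon $`\chi `$ of $`{}_{}{}^{56}Fe`$ is smaller than a proton's by the factor $`A^2/Z\approx 120`$, which is why the sharp low-energy decline of $`I_{\pi ^0}^{(56)}`$ in Fig. 3 sits at correspondingly higher total energy than for a single proton.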
Thus the total intensity is given by $$I_{\pi ^0}^{(\mathrm{A})}(E_{\mathrm{tot}},H)\simeq A\times I_{\pi ^0}^{(\mathrm{p})}(E,H_{\mathrm{eff}}),$$ (6) where $`I_{\pi ^0}^{(\mathrm{p})}(E,H_{\mathrm{eff}})`$ is the intensity of $`\pi ^0`$ emission by the nucleon of energy $`E`$ in magnetic field $`H_{\mathrm{eff}}`$. Note that both proton and neutron emit $`\pi ^0`$. Figure 3 shows $`I_{\pi ^0}^{(56)}(E_{\mathrm{tot}},H)`$ for $`{}_{}{}^{56}Fe`$ as a function of total energy $`E_{\mathrm{tot}}`$ with a fixed magnetic field of $`H=1.5\times 10^{12}G`$. The sharp decline of $`I_{\pi ^0}^{(56)}`$ takes place at higher energy than that of $`I_{\pi ^0}^{(\mathrm{p})}`$ because each nucleon in $`{}_{}{}^{56}Fe`$ has effectively smaller energy than a single proton. There is now more motivation to study the GRBs in association with UHECRs. It is highly desirable to proceed with a project like the Orbiting Wide-angle Light-collectors (OWL), in order to detect ultra-relativistic neutrinos from GRBs for finding the true nature of the central engines. This work has been supported in part by the Grant-in-Aid for Scientific Research (10640236, 10044103, 11127220) of the Ministry of Education, Science, Sports and Culture of Japan and also by the JSPS-NSF Grant of the Japan-U.S. Joint Research Project. We thank the Institute for Nuclear Theory at the University of Washington for its hospitality and the Department of Energy for partial support during the completion of this work. ## 4 References Bhat, P.N., et al. 1992, Nature, 359, 217. Erber, T. 1966, Rev. Mod. Phys. 38, 626. Fishman, G.J., & Meegan, C.A. 1995, Ann. Rev. Astron. Astrophys. 33, 415. Ginzburg, V.L., & Syrovatskii, S.I. 1965a, Uspekhi Fiz. Nauk. 87, 65. Ginzburg, V.L., & Syrovatskii, S.I. 1965b, Ann. Rev. Astron. Astrophys. 3, 297. Hill, C.T., & Schramm, D.N. 1985, Phys. Rev. D31, 564. Hillas, A.M. 1984, Ann. Rev. Astron. Astrophys., 22, 425. Itzykson, C. & Zuber, J.-B. 1980, Quantum Field Theory (McGraw-Hill, Inc.). Kouveliotou, C. 
et al. 1998, Nature, 393, 235. Kluźniak, W., & Ruderman, M. 1998, Astrophys. J., 505, L113. Mészáros, P., & Rees, M.J. 1993, Astrophys. J. 405, 278. Milgrom, M., & Usov, V. 1995, Astrophys. J., 449, L37. Paczyński, B. 1998, Astrophys. J., 494, L45. Paczyński, B., & Xu, G. 1994, Astrophys. J., 427, 708. Piran, T. 1999, Phys. Rep. 314, 575. Sari, R., & Piran, T. 1997, Astrophys. J. 285, 270. Peskin, M.E., & Schroeder, D.V. 1995, An Introduction to Quantum Field Theory (Addison-Wesley Publishing Company, Inc.). Rees, M.J., & Mészáros, P. 1992, Mon. Not. R. Astron. Soc. 258, 41P. Sakurai, J.J. 1967, Advanced Quantum Mechanics (Addison-Wesley Publishing Company, Inc.). Takeda, M., et al. 1998, Phys. Rev. Lett. 81, 1163. Usov, V. 1992, Nature, 357, 472. Waxman, E., & Bahcall, J.N. 1997, Phys. Rev. Lett. 78, 2292. ## 5 Figure Captions Figure 1: (a) Calculated energy spectra of scalar $`\pi ^0`$ meson emission (solid curve) and photon synchrotron radiation (dashed curve) for various incident proton energies $`E_p`$ with a fixed magnetic field of $`H=1.5\times 10^{16}G`$. Denoted numbers in the figure are the proton energies $`E_p=10^{12},10^{14},10^{16},10^{18},10^{20},`$ and $`10^{22}eV`$ from left to right. (b) The same as those in (a) for vector $`\rho `$ meson emission. Figure 2: Calculated total intensities of the emission of scalar $`\pi ^0`$ meson (solid curve), vector $`\rho `$ meson (long-dashed curve), and photon $`\gamma `$ (dashed curve) as a function of $`\chi =\frac{H}{H_0}\gamma _p`$. Figure 3: Calculated total intensities of scalar $`\pi ^0`$ meson emission by the proton (dashed curve) and iron nucleus (solid curve) as a function of the total energy $`E_{tot}`$ with a fixed magnetic field of $`H=1.5\times 10^{12}G`$.
no-problem/9909/chao-dyn9909021.html
ar5iv
text
# Does mesoscopic disorder imply microscopic chaos? We argue that Gaspard and coworkers do not give evidence for microscopic chaos in the sense in which they use the term. The effectively infinite number of molecules in a fluid can generate the same macroscopic disorder without any intrinsic instability. But we argue also that the notion of chaos in infinitely extended systems needs clarification: In a wider sense, even some systems without local instabilities can be considered chaotic. In a beautiful recent experiment, Gaspard and coworkers verified that the position of a Brownian particle behaves like a Wiener process with positive resolution-dependent entropy. More surprisingly and dramatically, they claim that this observation gives a first proof of “microscopic chaos”, a term they illustrate by examples of finite dimensional dynamical systems which are intrinsically unstable. While the recent literature finds such chaos on a molecular level quite plausible, we argue that the observed macroscopic disorder cannot be taken as direct evidence. In fact, Brownian motion can be derived for systems which would usually be called non-chaotic; think of a tracer particle in a non-interacting ideal gas. All that is needed for diffusion is molecular chaos in the sense of Boltzmann, i.e. the absence of observable correlations in the motion of single molecules. Part of the confusion is due to the lack of a unique definition of “microscopic chaos” for systems with infinitely many degrees of freedom. The authors introduce the term by extrapolating finite dimensional dynamical systems for which chaos is a well defined concept: Initially close states on average separate exponentially when time goes to infinity. The rate of separation, the Lyapunov exponent, is independent of the particular method to measure “closeness”. However, the notions of diffusion and Brownian motion involve by necessity infinitely many degrees of freedom. 
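The point that macroscopic diffusion requires no intrinsic instability can be illustrated with a toy simulation (all parameters arbitrary): a tracer receiving statistically independent kicks, as from a non-interacting ideal gas, shows the linear growth of the mean squared displacement characteristic of Brownian motion, even though nothing in the dynamics amplifies small perturbations locally.

```python
import numpy as np

rng = np.random.default_rng(0)          # fixed seed for reproducibility
n_walkers, n_steps = 2000, 400

# independent +/-1 kicks: Boltzmann's "molecular chaos", no local instability
kicks = rng.choice([-1.0, 1.0], size=(n_walkers, n_steps))
paths = np.cumsum(kicks, axis=1)

msd = (paths**2).mean(axis=0)           # mean squared displacement vs time
print(msd[99], msd[399])                # diffusive growth: MSD ~ t
```

After 100 and 400 steps the ensemble-averaged squared displacement is close to 100 and 400 respectively, i.e. MSD grows linearly in time, exactly as for a Wiener process.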
In this thermodynamic limit, Lyapunov exponents are no longer independent of the metric. As a consequence, the large system limit of a finite non-chaotic system will remain non-chaotic with one particular metric and become chaotic with another. Let us illustrate this astonishing fact with an example introduced by Wolfram in the context of cellular automata. Consider two states $`𝐱=(\dots ,x_{-2},x_{-1},x_0,x_1,x_2,\dots )`$ and $`𝐲=(\dots ,y_{-2},y_{-1},y_0,y_1,y_2,\dots )`$ of a one-dimensional bi-infinite lattice system. If the distance between $`𝐱`$ and $`𝐲`$ is defined by $`d_{\mathrm{max}}(𝐱,𝐲)=\mathrm{max}_i|x_i-y_i|`$, it can grow exponentially only if the local differences do. This is what one usually means by “chaos”, and this is what the authors had meant by “microscopic chaos”. This mechanism is absent in the thermodynamic limit of finite non-chaotic systems. Therefore, some authors would also call the limit non-chaotic. However, the metric $`d_{\mathrm{exp}}(𝐱,𝐲)=\sum _i|x_i-y_i|e^{-|i|}`$ can also show exponential divergence if an initially far away perturbation just moves towards the origin without growing. Arguably, when observing a localised tracer, the latter choice of metric seems more appropriate. In finite dimensional dynamical systems, chaos arises due to the de-focusing microscopic dynamics. The positive entropy is generated by the initially insignificant digits of the initial condition which are brought forth by the dynamics. In the thermodynamic limit, a completely different mechanism also exists: Perturbations coming from far away regions kick the tracer particle once and move again away to infinity. The entropy is positive due to information stored in remote parts of the initial condition. Associated with this, one can also define suitable Lyapunov exponents. To resolve the confusion, we suggest to follow Sinai and first let the system size tend to infinity, and only then the observation time. 
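The metric dependence can be made concrete with a toy "travelling kick" (hypothetical sizes, finite lattice as a stand-in for the bi-infinite one): a perturbation of fixed amplitude $`\delta `$ starts far out and moves one site per time step towards the origin, growing nowhere.

```python
import math

L = 60                                   # lattice sites -L..L (index k <-> site k-L)

def d_max(x, y):
    return max(abs(a - b) for a, b in zip(x, y))

def d_exp(x, y):
    # weight e^{-|i|}, i being the lattice site
    return sum(abs(a - b) * math.exp(-abs(k - L)) for k, (a, b) in enumerate(zip(x, y)))

delta, n = 1e-8, 40
x = [0.0] * (2 * L + 1)                  # reference state

def perturbed(t):
    # at time t the kick sits at site -(n - t): moving towards the origin
    y = list(x)
    y[-(n - t) + L] = delta
    return y

dmax = [d_max(x, perturbed(t)) for t in range(n)]
dexp = [d_exp(x, perturbed(t)) for t in range(n)]
```

In $`d_{\mathrm{max}}`$ the two states stay at constant distance $`\delta `$ forever, while in $`d_{\mathrm{exp}}`$ the distance grows by a factor $`e`$ per time step — a positive Lyapunov exponent with no local instability anywhere, exactly the situation described above.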
In that case, a system observed in a particular metric $`\mu `$ is called $`\mu `$-chaotic when we find a positive Lyapunov exponent using this metric. However, Gaspard et al. had obviously in mind the type of chaos detectable with the metric $`d_{\mathrm{max}}`$, and arising from a local instability. For this, they fall short of giving experimental evidence.
no-problem/9909/hep-lat9909005.html
ar5iv
text
# The Role of Center Vortices in QCD (Talk presented by C. Alexandrou) ## 1 Introduction Within the vortex theory of confinement, put forward about twenty years ago, the QCD vacuum is considered as a condensate of colour magnetic vortices with a flux quantized in terms of the center of the group $`Z_N`$. A center vortex is associated with a singular gauge transformation with a discontinuity given by a gauge group center element. It has the effect of multiplying the Wilson loop linked to this vortex by an element of $`Z_N`$, i.e. $`W(C)\to e^{2\pi ni/N}W(C)`$, $`n=1,\dots ,N-1`$. Assuming that center vortices condense in the QCD vacuum, the area law behaviour of large Wilson loops follows from fluctuations in the number of vortices linking the loops. A procedure to identify these vortices on the lattice via gauge fixing was shown recently to yield results in accord with vortex condensation theory. The main idea consists of choosing a gauge that makes the link variables $`U`$ as close as possible to the center of the gauge group. The Direct Maximal Center (DMC) gauge proposed by Del Debbio et al. determines a gauge transformation $`g\in \mathrm{SU}(N)`$ that maximises the quantity $$R[U]=\sum _{x,\mu }|\mathrm{Tr}U_\mu ^{GF}(x)|^2$$ (1) where $`U_\mu ^{GF}(x)=g(x)U_\mu (x)g^{\dagger }(x+\widehat{\mu })`$. Then the gauge fixed links are projected onto $`Z_N`$, i.e. one replaces each $`U^{GF}`$ by its closest center element $`Z`$ in the evaluation of the observables. For SU(2) the center projection is defined by $$Z_\mu (x)=\mathrm{sign}\left[\mathrm{Tr}U_\mu ^{GF}(x)\right]𝐈$$ (2) and from now on, for simplicity, we will be discussing only SU(2). A plaquette in the $`Z_2`$-projected theory with value $`-1`$ represents a defect called a P-vortex. The idea of center dominance is that center vortices, identified as P-vortices in the DMC gauge, are the relevant nonperturbative degrees of freedom. 
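The center projection of Eq. (2) can be sketched for a single random SU(2) link (seed and parametrisation chosen by us). The sketch also verifies numerically the SU(2) trace identity $`|\mathrm{Tr}U|^2=\mathrm{Tr}U^2+2`$, and that $`\mathrm{sign}[\mathrm{Tr}U]\,𝐈`$ is indeed the nearer of the two center elements $`\pm 𝐈`$:

```python
import numpy as np

rng = np.random.default_rng(1)

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def random_su2():
    # SU(2) as unit quaternions: U = a0*1 + i(a1 sx + a2 sy + a3 sz), |a| = 1
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return a[0] * np.eye(2) + 1j * (a[1] * SX + a[2] * SY + a[3] * SZ)

def center_project(U):
    # Eq. (2): closest Z_2 element
    return np.sign(np.trace(U).real) * np.eye(2)

U = random_su2()
Z = center_project(U)
```

Because $`\mathrm{Tr}U=2a_0`$ is real for SU(2), the sign of the trace picks the center element minimising the Frobenius distance to $`U`$, which is what maximising Eq. (1) link by link amounts to.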
It is supported by numerical results: the $`Z_2`$-theory shows a string tension similar to that of the full theory ; the deconfinement phase transition is well described in the center vortex picture as a percolation transition. The center-dominance scenario received further support by the following test carried out in Ref. : The P-vortices were removed by considering a modified ensemble with links $`U_\mu ^{\prime }(x)=Z_\mu (x)U_\mu (x)`$ which projects onto the trivial $`Z_2`$ vacuum. It was shown that in this modified ensemble, both confinement is lost and chiral symmetry is restored. We have investigated the chiral content of the $`Z_2`$-projected theory further, by looking at the quark condensate $`\langle \overline{\psi }\psi \rangle `$ in the $`Z_2`$ sector as a function of the quark mass $`m_q`$, at zero and finite temperature. At zero temperature it extrapolates linearly to a non-zero value as $`m_q\rightarrow 0`$ as shown in Fig. 1. For very small quark masses it diverges as $`1/m_q`$, revealing the presence of a few extremely small eigenvalues, which may be caused by the non-trivial topology of the original $`SU(2)`$ gauge field. This behaviour is strikingly similar to that observed with domain-wall fermions . It is well described by the functional form $$\langle \overline{\psi }\psi \rangle _{Z_2}=a+b/m_q+cm_q.$$ (3) This ansatz also works well at finite temperature. The comparison of data on $`16^3\times N_t`$ volumes at $`\beta =2.4`$, with $`N_t=16,8`$ and $`4`$ (the latter in the deconfined phase) reveals that $`a,b`$ and $`c`$ do not vary much with temperature. $`c`$ remains close to its free-field value. $`a`$ tends to decrease somewhat in the deconfined, chirally symmetric phase, but remains surprisingly large. $`b`$ may also show some decrease, but not in the same proportion as the fluctuations of the $`SU(2)`$ topological charge. Therefore, one has to be cautious in relating the $`Z_2`$ condensate to the chiral properties of $`SU(2)`$.
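Since the ansatz $`a+b/m_q+cm_q`$ is linear in the coefficients, it can be fitted by ordinary least squares. A minimal sketch with synthetic data (the numbers are purely illustrative, not the fitted values of this work):

```python
import numpy as np

# Illustrative coefficients only; the actual fitted values differ.
a_true, b_true, c_true = 0.05, 0.002, 0.8
m_q = np.linspace(0.01, 0.2, 20)
cond = a_true + b_true / m_q + c_true * m_q   # condensate ansatz, Eq. (3)

# Design matrix for the basis {1, 1/m_q, m_q}; the fit is linear in (a, b, c).
A = np.column_stack([np.ones_like(m_q), 1.0 / m_q, m_q])
(a, b, c), *_ = np.linalg.lstsq(A, cond, rcond=None)
assert np.allclose([a, b, c], [a_true, b_true, c_true])
```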
## 2 Laplacian Center Gauge A local iterative maximization of the DMC gauge-fixing condition Eq. 1 selects any one of the many possible maxima. Each of these Gribov copies will have its own set of P-vortices, which may show dramatically different properties . In order to find a center gauge fixing procedure that is free of Gribov copies, we first note that DMC is equivalent to maximizing $`\sum _{x,\mu }\mathrm{Tr}_{\mathrm{adj}}U_\mu (x)`$, since $$|\mathrm{Tr}U|^2=\mathrm{Tr}U^2+2=\mathrm{Tr}_{\mathrm{adj}}U+1$$ (4) taking $`\mathrm{Tr}\sigma _a\sigma _b=2\delta _{ab}`$. The idea is thus to smooth the center-blind, adjoint component of the gauge field as much as possible, then to read the center component off the fundamental gauge field. Therefore, Maximal Center Gauge is just another name for adjoint Landau gauge. The problem of Gribov copies in the fundamental Landau gauge was solved in : If one relaxes the requirement that $`g\in SU(2)`$, the maximization of the gauge-fixing functional is achieved by taking for $`g^{\dagger }`$ the eigenvector $`\vec{v}`$ associated with the smallest eigenvalue of the covariant Laplacian $`\mathrm{\Delta }_{xy}=2d\delta _{xy}-\sum _{\pm \widehat{\mu }}U_{\pm \widehat{\mu }}(x)\delta _{x\pm \widehat{\mu },y}`$. At each site, $`v(x)`$ has 2 complex color components. The Laplacian gauge condition consists of taking for $`g^{\dagger }`$ the $`SU(2)`$ projection of $`v`$, thus rotating $`v(x)`$ along direction $`(1,1)`$ at all sites. We follow this construction for the adjoint representation. The covariant Laplacian is now constructed from adjoint links $`U^{ab}=\frac{1}{2}\text{Tr}[U\sigma ^aU^{\dagger }\sigma ^b],a,b=1,2,3`$. It is a real symmetric matrix. The lowest-lying eigenvector $`\vec{v}`$ has 3 real components $`v_i,i=1,2,3`$ at each site $`x`$. One can apply a local gauge transformation $`g(x)`$ to rotate it along some fixed direction.
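The center-blindness of the adjoint links is easy to verify numerically. The sketch below (illustrative, not code from this work) constructs $`U^{ab}`$ from the Pauli matrices and checks that it is real orthogonal, invariant under $`U\rightarrow -U`$, and satisfies the trace identity $`|\mathrm{Tr}U|^2=\mathrm{Tr}_{\mathrm{adj}}U+1`$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
sigma = [sx, sy, sz]

def adjoint(U):
    """Adjoint link U_adj[a, b] = (1/2) Tr[U sigma_a U^dag sigma_b]."""
    Ud = U.conj().T
    return np.array([[0.5 * np.trace(U @ sigma[a] @ Ud @ sigma[b]).real
                      for b in range(3)] for a in range(3)])

# Random SU(2) element via the exponential map U = cos(t) + i sin(t) n.sigma.
rng = np.random.default_rng(2)
n = rng.normal(size=3); n /= np.linalg.norm(n)
theta = 0.7
U = np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * (
    n[0] * sx + n[1] * sy + n[2] * sz)

V = adjoint(U)
assert np.allclose(V @ V.T, np.eye(3))                    # real orthogonal
assert np.allclose(adjoint(-U), V)                        # blind to the center
assert np.isclose(np.trace(V), abs(np.trace(U))**2 - 1)   # |Tr U|^2 = Tr_adj U + 1
```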
Note, however, that this does not specify the gauge completely: Abelian rotations around this reference direction are still possible. What we have achieved at this stage is a variation of Maximal Abelian Gauge which is free of Gribov ambiguities. This Laplacian Abelian Gauge has been proposed in and, as it was shown there, monopoles are directly identifiable by the condition $`|v(x)|=0`$ for smooth fields. Abelian monopole worldlines appear naturally as the locus of ambiguities in the gauge-fixing procedure: the rotation to apply to $`v(x)`$ cannot be specified when $`|v(x)|=0`$. To fix to center gauge, we must go beyond Laplacian Abelian Gauge and specify the Abelian rotation. This is done most naturally by considering the second-lowest eigenvector $`\vec{v}^{\prime }`$ of the adjoint covariant Laplacian, and requiring that the plane $`(v(x),v^{\prime }(x))`$ be parallel to, for instance, $`(\sigma _3,\sigma _1)`$ at every site $`x`$. This fixes the gauge completely, except where $`v(x)`$ and $`v^{\prime }(x)`$ are collinear. Collinearity occurs when $`\frac{v_1}{v_1^{\prime }}=\frac{v_2}{v_2^{\prime }}=\frac{v_3}{v_3^{\prime }}`$, i.e. 2 constraints must be satisfied. Thus, gauge-fixing ambiguities have codimension 2: in 4$`d`$, they are 2$`d`$ surfaces. They can be considered as the center-vortex cores . ## 3 Results We have applied Laplacian Center Gauge fixing and center projection to an ensemble of $`SU(2)`$ configurations. The main difference with DMC gauge is an increase in the $`P`$-vortex density ($`11\%`$ vs $`5.5\%`$ on a $`16^4`$ lattice at $`\beta =2.4`$), similar to the increase in the monopole density for Laplacian Abelian Gauge . As in , the string tension, the quark condensate and the topological charge all vanish upon removal of the $`P`$-vortices. Fig. 2 displays the Creutz ratios $`\chi (R,R)=-\mathrm{ln}\left[W(R,R)W(R-1,R-1)/W(R,R-1)^2\right]`$ constructed from averages $`W(R,T)`$ of $`R`$ by $`T`$ Wilson loops.
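With an area-plus-perimeter ansatz for the Wilson-loop averages, the Creutz ratio (with the usual sign convention $`\chi (R,R)=-\mathrm{ln}[W(R,R)W(R-1,R-1)/W(R,R-1)^2]`$) cancels the perimeter and constant pieces and isolates the string tension exactly. A quick check with made-up parameters (illustrative only):

```python
import math

def creutz(W, R):
    """chi(R,R) = -ln[ W(R,R) W(R-1,R-1) / W(R,R-1)^2 ]."""
    return -math.log(W(R, R) * W(R - 1, R - 1) / W(R, R - 1) ** 2)

sigma, mu, c = 0.04, 0.3, 1.7   # made-up string tension, perimeter term, constant

def W(R, T):
    """Area + perimeter + constant ansatz for the Wilson-loop average."""
    return c * math.exp(-sigma * R * T - mu * (R + T))

for R in range(2, 6):
    # the perimeter term and the constant c drop out of the ratio
    assert abs(creutz(W, R) - sigma) < 1e-12
```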
For large $`R`$, $`\chi (R,R)`$ approaches the string tension $`\sigma `$. On the modified configuration the string tension goes to zero whereas in the $`Z_2`$-projected theory it reproduces the $`SU(2)`$ value of Ref. . The quark condensates behave as in Fig. 1. If one applies Laplacian Center Gauge fixing to a cooled one-instanton configuration, one finds a very small number ($`\lesssim 100`$) of P-vortices, regardless of the original instanton size. These P-vortices are all near the instanton center, as illustrated in Fig. 3. ## 4 Conclusions Any Gribov ambiguities that cast doubt on the physical relevance of P-vortices (pointed out e.g. by ) are removed by the Laplacian Center Gauge fixing. This gauge appears naturally as an extension of Laplacian Abelian Gauge. It allows the direct identification of center vortices (and monopoles) by inspection of the two lowest eigenmodes of the covariant adjoint Laplacian, without gauge-fixing. As in DMC gauge, center dominance seems to hold: (i) At zero temperature the string tension is well reproduced in the $`Z_2`$-projected theory, and even the $`Z_2`$ quark condensate is non-zero; (ii) In the deconfined phase, the $`Z_2`$ string tension vanishes: however the $`Z_2`$ quark condensate does not. Although here we have only discussed SU(2), our gauge fixing procedure readily generalizes to $`SU(N)`$: complete gauge-fixing is achieved by rotating the first $`(N^2-2)`$ eigenvectors of the adjoint Laplacian along some reference directions. Ambiguities arise whenever these $`(N^2-2)`$ eigenvectors \[each with $`(N^2-1)`$ real components\] become linearly dependent. This again defines codimension-2 center-vortex cores.
no-problem/9909/cond-mat9909122.html
# Non-Gaussian distribution of nearest-neighbour Coulomb peak spacings in metallic single electron transistors ## I Introduction A single electron transistor (SET) consists of a small conductive island, coupled to two leads via tunnel barriers, and a nearby gate used to tune the electrochemical potential of the island. The Coulomb blockade, characterized by the charging energy $`E_\mathrm{C}`$ needed to add a single electron to the island, governs the electronic properties of such devices. This leads to the observation of pronounced conductance oscillations, commonly denoted as Coulomb blockade (CB) oscillations, as a function of the gate voltage $`V_\mathrm{g}`$. These effects are of electrostatic origin and can be analyzed in a purely classical picture. However, a variety of additional effects can be studied in SETs, depending on the material they are made of. In superconducting islands, for example, Cooper pair formation leads to significant modifications of the device characteristics. SETs can also be realized in two-dimensional electron gases residing in semiconductor hosts such as Si MOSFETs or Ga(Al)As heterostructures. Discrete energy levels and phase coherence effects superimposed on the Coulomb blockade can be observed. Such devices, also known as ‘quantum dots’, have therefore become model systems to investigate numerous distinct effects. Broad attention has recently been paid to experiments measuring the distribution of nearest-neighbour spacings (NNS) of the CB oscillation peaks in quantum dots. From random-matrix theory calculations, the NNS distribution is expected to obey Wigner-Dyson statistics. However, the experimentally observed distributions differ significantly from the random-matrix theory predictions. 
In order to separate classical charging effects from quantum mechanics, it is common practice to use a constant-interaction (CI) model, which assumes that the electrostatics of the system is invariant under a change of the charge on the island by integer multiples of the elementary charge $`e`$. In this paper, we report on NNSs of CB peaks in metallic, i.e. purely electrostatic or ‘classical’ SETs. In a simple picture appropriate for metallic devices, one would expect to observe constant peak spacings $`\mathrm{\Delta }V_\mathrm{g}=e/C_\mathrm{g}`$ (with a distribution broadened by thermal fluctuations only), where $`C_\mathrm{g}`$ is the capacitance between island and gate. However, we observe a strongly asymmetric NNS distribution with a pronounced tail to small peak separations. ## II Experiments and results We have measured high quality Al/AlO<sub>x</sub>/Al SETs written by electron beam lithography and fabricated by the standard two-angle evaporation technique, with intermediate room temperature oxidation of the first layer (oxygen pressure of 2.5 mbar for 20 min) to develop the tunnel barriers. The substrate was silicon covered by 600 nm thermally grown SiO<sub>x</sub>. The design of the SETs is drawn schematically in fig. 1 a). The typical parameters of the devices were $`R_\mathrm{t}=1\ldots 10`$ M$`\mathrm{\Omega }`$ for the tunnel resistances, $`C_\mathrm{j}=40\ldots 200`$ aF and $`C_\mathrm{g}\approx 50`$ aF for the junction and gate capacitances, respectively. The measurements were performed in a dilution refrigerator at temperatures down to 5 mK, whereas the effective electron temperature of the devices was determined to be 45 mK (as deduced from the thermal smearing of the charge occupation number in an electron box). The device IV-characteristics are well understood on the basis of ‘orthodox theory’ calculations, taking into account also non-equilibrium effects and the influence of the electromagnetic environment.
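For the quoted gate capacitance the naive peak spacing follows immediately (a small sketch using the order-of-magnitude value $`C_\mathrm{g}`$ of about 50 aF given above):

```python
# Naive Coulomb-blockade peak spacing, Delta V_g = e / C_g
e = 1.602176634e-19      # elementary charge in C
C_g = 50e-18             # gate capacitance, ~50 aF as quoted in the text

dVg = e / C_g
print(f"Delta V_g = {dVg * 1e3:.2f} mV")   # about 3.2 mV
```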
Measurements were performed over periods of several days. The devices were highly stable for constant voltages applied, showing no drifts or spontaneous jumps for days. We attribute this stability to the very slow device cooling of about 1 day, allowing the impurities to be frozen in their lowest, most stable state. The measured $`1/f`$ noise was identified as dominant SET input noise due to background charge fluctuation, being of magnitude comparable with typical noise figures ($`10^{-4}e/\sqrt{\mathrm{Hz}}`$ at 10 Hz) reported for other metallic SETs. A magnetic field of $`B=1\ldots 4`$ T was applied to suppress superconductivity. The DC current through the devices was measured as a function of bias and gate voltages $`V_\mathrm{b}`$ and $`V_\mathrm{g}`$, respectively. Due to the DC measurement technique we took large sets of sufficiently dense points by variation of the voltages in the ranges $`|V_\mathrm{b}|\le \frac{1}{4}E_\mathrm{C}/e`$ and $`|V_\mathrm{g}|\le 1`$ V. The latter corresponds to a difference of some hundred electrons on the island. The data was analyzed by fitting the conductance peaks with $`G(V_\mathrm{g})=\frac{1}{2}G_0(\delta V_\mathrm{g}/w)/\mathrm{sinh}(\delta V_\mathrm{g}/w)`$, where $`\delta V_\mathrm{g}=|V_\mathrm{g}^0-V_\mathrm{g}|`$, yielding amplitude $`G_0`$, width $`w=(k_\mathrm{B}T/2E_\mathrm{C})(e/C_\mathrm{g})`$ and position $`V_\mathrm{g}^0`$. A partial trace of typical CB oscillations is shown in fig. 1 b) together with a theoretical curve fitting. As expected for our devices, the amplitudes are found to be constant over the entire $`V_\mathrm{g}`$ range, with a standard deviation of typically $`1.5\%`$, as shown in fig. 1 c). However, the distribution $`P(\mathrm{\Delta }V_\mathrm{g})`$ of NNS values $`\mathrm{\Delta }V_\mathrm{g}(n)=V_\mathrm{g}^0(n+1)-V_\mathrm{g}^0(n)`$, where $`n`$ is the peak index, is not Gaussian but shows a significant number of events with reduced values, cf. fig. 1 d).
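The peak shape and the thermal width formula can be combined in a short sketch (the charging energy $`E_\mathrm{C}/k_\mathrm{B}=2`$ K used here is an assumed, illustrative value, not a measured one):

```python
import numpy as np

kB, e = 1.380649e-23, 1.602176634e-19
T, E_C, C_g = 45e-3, 2.0 * kB, 50e-18   # T = 45 mK; E_C/k_B = 2 K (assumed)

# Thermal peak width in gate voltage, w = (k_B T / 2 E_C)(e / C_g): ~36 uV here.
w = (kB * T / (2 * E_C)) * (e / C_g)

def G_peak(V_g, V0, G0, w):
    """G = (1/2) G0 x / sinh(x), x = |V0 - V_g| / w; the limit at x = 0 is G0/2."""
    x = np.abs(V0 - V_g) / w
    return 0.5 * G0 * np.where(x < 1e-12, 1.0, x / np.sinh(np.maximum(x, 1e-12)))

V = np.linspace(-5 * w, 5 * w, 1001)
G = G_peak(V, 0.0, 1.0, w)
assert np.isclose(G.max(), 0.5)   # maximum G0/2 at V_g = V0
assert G[0] < G[500]              # falls off away from the peak
```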
The NNS distributions were quantitatively independent of $`V_\mathrm{b}`$ variations. The 8 samples investigated all showed similar behaviour. The main peak in $`P(\mathrm{\Delta }V_\mathrm{g})`$, containing the majority of the events ($`60\ldots 80\%`$ for different samples), fits well to a Gaussian, whose width scales linearly with temperature. We should mention that samples cooled at a much faster rate typically show strongly enhanced noise levels. Consequently, measurements of CB peak statistics with such devices yielded significantly broadened NNS distributions (not shown). In order to investigate the reproducibility of the reduced NNS events, we have performed measurements on a smaller $`V_\mathrm{g}`$ range, where only very few NNSs with reduced values are detected. Figure 2 shows traces with 16 conductance peaks each, taken from two consecutive $`V_\mathrm{g}`$ scans in the same direction. Both traces show two shifts in $`\mathrm{\Delta }V_\mathrm{g}`$ at the same positions. It has been found in general that the position range where NNSs significantly smaller than the mean value $`\mathrm{\Delta }V_\mathrm{g}`$ occur is well reproduced as a function of $`V_\mathrm{g}`$. In addition, a clear difference in low $`\mathrm{\Delta }V_\mathrm{g}`$ positions between up and down scans was observed, suggesting a hysteretic behaviour. More details on the reproducibility and the hysteresis effect will be published elsewhere. The CB peak position fluctuations do not show significant correlation as a function of $`V_\mathrm{g}`$. The standard deviation of the $`n`$-th neighbour peak spacings is very closely proportional to $`\sqrt{n}`$, which is expected for uncorrelated events. We could not find any specific periodicity from a Fourier analysis of the CB peak position spectrum either. The measured noise was essentially proportional to the SET gain $`\mathrm{d}I/\mathrm{d}V_\mathrm{g}`$, i.e. dominated by device input noise.
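The $`\sqrt{n}`$ behaviour follows if successive spacings are independent: the $`n`$-th neighbour spacing is a sum of $`n`$ uncorrelated terms, so its standard deviation grows like $`\sqrt{n}`$. A quick check on synthetic peak positions (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(0)
mean_dV, jitter, n_peaks = 3.2e-3, 5e-5, 200_000

# Uncorrelated nearest-neighbour spacings -> peak positions do a random walk.
spacings = mean_dV + jitter * rng.standard_normal(n_peaks)
positions = np.cumsum(spacings)

for n in (1, 4, 16):
    s_n = positions[n:] - positions[:-n]        # n-th neighbour spacings
    ratio = s_n.std() / (jitter * np.sqrt(n))
    assert abs(ratio - 1.0) < 0.05              # std grows like sqrt(n)
```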
In very few cases a significantly increased noise level was observed, with a non-zero correlation with events of reduced CB peak width. This is attributed to the well-known dynamic switching of background charges (‘random telegraph noise’, RTN) for certain $`V_\mathrm{g}`$ values close to the fluctuator threshold. Considering the rare occurrence of correlated excess noise with a non-average NNS, we conclude that the fluctuators producing dynamic noise are not primarily responsible for the observed reduction of NNSs. In addition, we should emphasize that the measured fluctuation distributions did not depend on the absolute $`V_\mathrm{g}`$ range considered (cf. also inset in fig. 1 d) ). ## III Discussion Based on the theoretically expected behaviour of our SET transistors and the experimental results discussed above, we explain the observations with discontinuous switching of two-level tunnelling systems (TLTS), where the displacement of a single charge modifies the transistor island potential. Figure 3 a) shows schematically how such switching events can explain the systematic occurrence of reduced NNSs. Consider a TLTS in a metastable state, located in between the gate and the SET island. Exceeding a particular $`V_\mathrm{g}`$ threshold, a charge rearrangement can be induced in the TLTS. In response to this, the electrochemical potential $`\mu _{\mathrm{SET}}`$ of the SET island, which is usually tuned continuously by $`V_\mathrm{g}`$, experiences a sudden jump in the same direction as the $`V_\mathrm{g}`$ variation, independent of the scan direction. Consequently, a smaller $`\mathrm{\Delta }V_\mathrm{g}`$ is needed in order to change the island occupation number by one. Within this picture, the tail in the NNS distribution reflects the spatial and energetical distribution of TLTSs in some region between the island and the gate electrode. 
The adjustment of a dipole following the variation of an electric field is equivalent to the picture of introducing a medium with increased dielectric constant, increasing $`C_\mathrm{g}`$ and decreasing $`\mathrm{\Delta }V_\mathrm{g}`$. Assuming a simple system of an electron switching locally between two sites, the measured CB peak spacing statistics $`P(\mathrm{\Delta }V_\mathrm{g})`$ can be analyzed considering the spatial distribution and type of such TLTSs. According to electrostatic dipole calculations we derive the position of the electron and its displacement which allow a variation of island charge on the order of $`10\%`$ (as in our experiments): an electron located very close to the island ($`\lesssim `$ 1 nm) and facing the gate electrode requires a displacement (radially away from the island) by $`2\ldots 4`$ nm. A process of a charge displacement by a few nanometres is very well consistent with other studies on charge trapping. However, by slightly increasing the electron’s distance from the island, the necessary displacement quickly grows to length scales for which the observed reproducibility of TLTS switching becomes very unlikely. The largest electric fields are found between island and gate electrode, whereas the field is shielded or strongly reduced elsewhere, thereby reducing the trap switching effect to a negligible level. This explains the asymmetry of $`P(\mathrm{\Delta }V_\mathrm{g})`$ with a tendency to lower values. Hence, the shape of the $`P(\mathrm{\Delta }V_\mathrm{g})`$ distribution is determined by the geometry and materials of the device and the surroundings. SET transistors are known to be the best electrometers to date, with a sensitivity to charge variations by a small fraction of the electron charge $`e`$. In contrast to the random dynamic background charge fluctuations, the TLTSs in our case are highly stable in time, depending (in first approximation) on electrical potential variations only.
The thermal activation energy is typically much larger than our temperature range investigated. The reproducibility of the effects suggests a well-defined system of individual charge traps with negligible interaction. Defects, acting as charge traps, may particularly reside in oxides, at semiconductor heterointerfaces, or generally at any disordered interface or lattice. The low-frequency noise in metallic SET transistors is commonly attributed to background charge fluctuations mainly in the substrate, with a small probability of traps in the tunnel junctions. A few studies on RTN have also been reported for semiconducting nanostructure devices. Switching of background charges can be detected directly with the SET provided the device is in a sensitive state of non-zero gain, i.e. within a conductance peak. In our case, the switchings predominantly reduce the width of the peak. Under low bias conditions, most of these events occur in between the CB peaks and are not seen in the peak width. However, we can increase the detection range of the peaks for TLTS switching by making them wider, e.g. by increasing the bias voltage. The CB peak widths $`w`$ show a distribution $`P(w)`$ qualitatively similar to $`P(\mathrm{\Delta }V_\mathrm{g})`$. The peak width in $`P(w)`$ reflects the temperature of the system. As plotted in fig. 3 b), the position of the main $`P(\mathrm{\Delta }V_\mathrm{g})`$ peak shows no variation with $`V_\mathrm{b}`$, and the fraction of the tail events (FTE) in $`P(\mathrm{\Delta }V_\mathrm{g})`$ also remains constant within experimental errors. On the other hand, the $`P(w)`$ main peak position has a thermally broadened minimum at $`V_\mathrm{b}=0`$ and increases almost linearly with increasing $`|V_\mathrm{b}|`$, cf. fig. 3 c). The FTE of the $`P(w)`$ distribution behaves proportionally to $`w`$, which is indicated by their ratio in fig. 3 d). 
This confirms that the discontinuous jumps in $`V_\mathrm{g}`$ are uniformly distributed along the $`V_\mathrm{g}`$ axis, independent of the state of the SET transistor, i.e. whether it is in the CB regime or not. Apparently, the positions of the jumps depend on the gate potential only. Our arguments are further supported by correlated low tail events between the $`P(\mathrm{\Delta }V_\mathrm{g})`$ and $`P(w)`$ distributions. On one hand, our experimental results reveal important information for the understanding of charge fluctuation mechanisms in nanostructures, hopefully leading to an improvement of reliable and stable devices. This is particularly crucial for developments like quantum computing, ultra-low noise electrometers or metrological applications. We have demonstrated the possibility of directly detecting discrete fluctuations of the background charge configuration, allowing a quantitative characterization of substrates or other dielectrics of interest. On the other hand, we wish to emphasize in particular the impact of our results on the lively discussion on CB peak statistics in semiconductor quantum dots. Our experiments show that even in the absence of single particle energy levels on the SET island, the NNS distribution can deviate significantly from a Gaussian, since it is an intrinsic feature of the island to react with high sensitivity to background charge rearrangements. So far, all experiments studying the NNS distribution of quantum dots have been performed by measuring CB oscillations as a function of a gate electrode. The observed peak spacings in $`V_\mathrm{g}`$ have been corrected using the CI model. However, the remaining NNS distribution does contain the modification of level spacings as a consequence of rearrangements in the random background charge configuration and cannot solely be attributed to the energy spectrum of the quantum dot.
The charge and potential distribution in a two-dimensional electron gas (2DEG) is known to be fairly inhomogeneous and sensitive to even small perturbations of electromagnetic field. Furthermore, single electron charging effects among isolated regions due to non-uniform potential distribution in a 2DEG have recently been observed. Consequently, charge sensitive nanodevices made of semiconducting structures may reveal a significantly modified behaviour due to charging effects, of the origin described above. In terms of our model explanations, it can be easily understood, e.g., why Simmel et al. observe much broader NNS distributions in quantum dots defined in Si MOSFETs than those distributions observed in Ga(Al)As heterostructures, since there are more traps in SiO<sub>2</sub> than in heterostructures grown by molecular beam epitaxy. In order to go beyond the CI model, we therefore suggest that the charge rearrangements in the vicinity of the quantum dot should be measured independently. In detail, one could define a metallic ‘control’ SET on top of a quantum dot, which is used to correct each individual peak spacing of the quantum dot for the charge fluctuations in the environment. In summary, we have measured non-Gaussian distributions of nearest-neighbour spacings in normal conducting aluminum single electron transistors. A significant part of the peak spacings is reduced to lower values. We interpret this effect in terms of reproducible background charge rearrangements, which take place in close vicinity to the SET island, and are predominantly induced by gate voltage changes. ## Acknowledgements Helpful discussions with A. B. Zorin, J. E. Mooij and A. Cohen are gratefully acknowledged. This work is supported by the Swiss Federal Office for Education and Science and by ETH Zürich.
no-problem/9909/cond-mat9909336.html
# Modified Renormalization Strategy for Sandpile Models ## Abstract Following the Renormalization Group scheme recently developed by Pietronero et al., we introduce a simplifying strategy for the renormalization of the relaxation dynamics of sandpile models. In our scheme, five sub-cells at a generic scale $`b`$ form the renormalized cell at the next larger scale. Now the fixed point has a unique nonzero dynamical component that allows for a great simplification in the computation of the critical exponent $`z`$. The values obtained are in good agreement with both numerical and theoretical results previously reported. The concept of Self-Organized Criticality (SOC) introduced by Bak, Tang and Wiesenfeld (BTW) has attracted a wide interest to understand a class of dynamically driven systems which self-organize into a statistically stationary state characterized by the lack of any typical time or length scale. Numerical results for systems displaying SOC behavior have been extensively reported , but only a few theoretical approaches are known to be in agreement with numerical simulations in all dimensions. The major source of difficulties in the study of SOC systems lies in their inherent complexity that makes the models analytically tractable only in a few cases. The Abelian version of the BTW sandpile model, early addressed by Dhar , turned out to be one of these exceptions. Recently, Pietronero, Vespignani, and Zapperi developed a new type of real space renormalization group approach for dynamically driven systems, able to describe the self-organized critical state of sandpile models by defining a characterization of the phase space in which the renormalization of the dynamics under repeated change of scale is possible. In addition, it is also possible to compute the critical exponents analytically . The method also reveals the nature of the SOC problems and provides a picture about the universality classes of different sandpile models.
This scheme of renormalization has been recently improved by considering increasingly complex proliferation paths and extended to forest-fire models . In this report, we follow the renormalization procedure of references but using a Greek cross-shaped cell instead of a square cell in the renormalization of the relaxation dynamics. The critical exponents that characterize the stationary state are then computed and they are found to be in good agreement with previous theoretical results and large scale numerical simulations, both for the BTW model and the two-state model of Manna. We will see that the use of this particular choice of cells simplifies the renormalization equations for the BTW model. In what follows, we will focus on the sandpile critical height models in two dimensions. Sandpile models are cellular automata defined on a lattice, where one assigns to each site a variable (to which we will refer as energy). We let the system evolve by randomly adding units of energy to the system. When the energy of a site reaches a critical value, it relaxes, releasing its entire energy to the neighboring sites. The affected sites may become unstable, triggering new toppling events and so on until all sites are again stable. Three different classes of sites can be distinguished: (i) those sites for which the addition of a unit of energy does not induce relaxation (stable sites), (ii) those sites for which the addition of a unit of energy causes them to become unstable (critical sites), and (iii) unstable sites that will relax at the next time step. Open boundary conditions allow the energy to leave the system. In this formalism, we will denote by $`\rho `$ the density of critical sites. These definitions can be extended to a generic scale $`b`$ by considering coarse grained variables.
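The microscopic relaxation rule just described can be sketched in a few lines (illustrative code, not the renormalization scheme itself; the threshold $`E_c=4`$ corresponds to the BTW model on the square lattice):

```python
import numpy as np

def relax(E, E_c=4):
    """Topple every unstable site (E >= E_c) until the grid is stable.
    Open boundaries: units pushed past the edge leave the system."""
    L = E.shape[0]
    while True:
        unstable = np.argwhere(E >= E_c)
        if unstable.size == 0:
            return E
        for x, y in unstable:
            E[x, y] -= E_c                       # release the full threshold
            for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                if 0 <= nx < L and 0 <= ny < L:  # neighbours inside the lattice
                    E[nx, ny] += 1

rng = np.random.default_rng(3)
E = np.zeros((11, 11), dtype=int)
for _ in range(500):                 # slow driving: add single units of energy
    E[rng.integers(11), rng.integers(11)] += 1
    relax(E)
assert (E < 4).all()                 # the stationary state is everywhere stable
```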
Thus, a cell at scale $`b`$ is considered critical if the addition of a unit of energy $`\delta E(b)`$ induces a relaxation of the size of the cell, that is, the subrelaxation processes span the cell and transfer energy to some neighbors. According to , the relaxation process can lead to four different possibilities at coarse grained levels: the energy can be distributed to one, two, three, or four neighbors with probabilities $`p_1`$, $`p_2`$, $`p_3`$, and $`p_4`$ respectively. Of course, it is also possible that in certain cases the unstable sites at coarse grained scale do not transfer energy to their nearest neighbors as well as to consider different proliferation problems. We, as in , will not consider these cases . Then, the probability distribution is defined by the vector $$\vec{P}=(p_1,p_2,p_3,p_4)$$ (1) with the normalization condition $`\sum _{i=1}^4p_i=1`$. So, the properties of the system are fully characterized by the distribution $`(\rho ,\vec{P})`$ at each scale. The relation between $`\rho `$ and $`\vec{P}`$ can be derived by noting that in the stationary state the inflow of energy equals the flow of energy out of the system . This implies : $$\rho ^{(k)}=\frac{1}{\sum _iip_i^{(k)}},$$ (2) which allows us to evaluate the stationary distribution of critical sites at each scale $`k`$ of coarse graining. Now, we define a renormalization transformation for the relaxation dynamics. We will use a cell-to-site transformation on a square lattice, in which each cell at scale $`b^{(k)}`$ is formed by five sub-cells at the finer scale $`b^{(k-1)}`$ (see Fig. 1).
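Eq. (2) is easy to evaluate for the two microscopic dynamics considered below, where the Manna two-state model always transfers energy to two neighbours and the BTW model to all four (a small sketch):

```python
def rho(p):
    """Stationary density of critical sites, rho = 1 / sum_i i*p_i  (Eq. 2)."""
    return 1.0 / sum(i * p_i for i, p_i in zip((1, 2, 3, 4), p))

# Small-scale dynamics: Manna transfers to 2 neighbours, BTW to 4.
assert rho((0, 1, 0, 0)) == 0.5    # Manna: rho = 1/2
assert rho((0, 0, 0, 1)) == 0.25   # BTW:   rho = 1/4
```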
We have chosen this type of cells for two reasons: one, because it implies the use of greater cells formed by five sub-cells at the finer scale, that is, when we scale up, five sub-cells form a new one at the larger scale; and second, one is intuitively tempted to follow the geometry of the relaxation that takes place in numerical simulations of sandpile models with energy transfer to N, E, S, W neighbors . The length scaling factor is then $`\frac{b^{(k)}}{b^{(k-1)}}=\sqrt{5}`$ (see Fig.1). Therefore, at a generic scale $`b^{(k)}`$, each cell is characterized by an index $`\alpha `$, ranging from one to five, indicating its number of critical sub-cells at the smaller scale $`b^{(k-1)}`$. In order to ensure the connectivity properties of the avalanche in the renormalization procedure, only those configurations with three or more critical sub-cells at scale $`b^{(k-1)}`$ can span the cell, transferring energy to $`i`$ neighboring cells. Thus, the starting relaxation processes $`p_i^{(k-1)}`$ at scale $`b^{(k-1)}`$ are renormalized into the corresponding processes $`p_i^{(k)}`$ at scale $`b^{(k)}`$. Besides, it has been shown that site correlations are averaged out in the stationary state. Therefore, taking into account this fact and the spanning rule we can write the weight of each configuration $`\alpha `$ in the stationary state as: $`W_{(\alpha =3)}=2\rho ^3(1-\rho )^2`$ (3) $`W_{(\alpha =4)}=4\rho ^4(1-\rho )`$ (4) $`W_{(\alpha =5)}=\rho ^5`$ (5) Eqs. (3)-(5) give the probability that a cell at scale $`b^{(k)}`$ has the corresponding number of critical sub-cells at scale $`b^{(k-1)}`$. As an example of the general procedure, in Fig.2 we have drawn a series of relaxation processes $`p_1p_1p_2`$ at scale $`b^{(k-1)}`$ that contributes to the renormalization of $`p_1^{(k)}`$ at the larger scale $`b^{(k)}`$, starting from a configuration of $`\alpha =3`$ critical sub-cells.
The process consists of the following relaxation events that span the cell from left to right satisfying the spanning condition. First, the unstable sub-cell on the left relaxes toward the other critical sub-cell (the center one, Fig.2b) which occurs with probability $`(1/4)p_1^{(k-1)}`$, where the index $`(k-1)`$ denotes that the relaxation takes place at scale $`b^{(k-1)}`$. Second, we consider the process in which the new unstable sub-cell also relaxes toward the sub-cell on the right through another $`p_1`$ process (Fig.2c) which again happens with a probability $`(1/4)p_1^{(k-1)}`$. Finally, the sub-cell on the right has become unstable and transfers with probability $`(2/3)p_2^{(k-1)}`$ two units of energy, one inside and one outside the original cell of size $`b^{(k)}`$ (Fig.2d). The series of processes described contributes to the renormalization of $`p_1^{(k)}`$. Nevertheless, it is necessary to note that the relaxations displayed in Fig 2a-2d are not all the processes that contribute to the renormalization of $`p_1^{(k)}`$ through a $`p_1p_1p_2`$ series. Fig.2e shows a $`p_2`$ relaxation event that, although it involves two neighboring sites outside the original cell of size $`b^{(k)}`$, also contributes to the renormalization of $`p_1^{(k)}`$ with probability $`(1/6)p_2^{(k-1)}`$. This is a new characteristic inherent to the cell-to-site transformation chosen.
Now, if we take into account all the processes that lead to $`p_1^{(k)}`$, for $`\alpha =3`$, one gets $`p_1^{(k)}={\displaystyle \frac{1}{3}}\left\{\left({\displaystyle \frac{1}{6}}p_2^{(k-1)}+{\displaystyle \frac{1}{2}}p_3^{(k-1)}+p_4^{(k-1)}\right)\left({\displaystyle \frac{1}{4}}p_1^{(k-1)}\right)\left({\displaystyle \frac{3}{2}}p_1^{(k-1)}+{\displaystyle \frac{4}{3}}p_2^{(k-1)}+{\displaystyle \frac{1}{2}}p_3^{(k-1)}\right)\right\}`$ (6) $`+{\displaystyle \frac{2}{3}}\left\{\left({\displaystyle \frac{1}{4}}p_1^{(k-1)}+{\displaystyle \frac{1}{2}}p_2^{(k-1)}+{\displaystyle \frac{3}{4}}p_3^{(k-1)}+p_4^{(k-1)}\right)\left({\displaystyle \frac{1}{4}}p_1^{(k-1)}\right)\left({\displaystyle \frac{3}{4}}p_1^{(k-1)}+{\displaystyle \frac{7}{6}}p_2^{(k-1)}+{\displaystyle \frac{1}{2}}p_3^{(k-1)}\right)\right\}`$ (7) where the factors $`\frac{1}{3}`$ and $`\frac{2}{3}`$ refer to the multiplicities of the configurations (see Fig.3). In a similar way (though much more complicated), one obtains expressions for $`p_i^{(k)}`$, $`(i=2,3,4)`$, for $`\alpha =3`$ and imposes the normalization condition $`\sum _{i=1}^4p_i^{(k)}=1`$. The procedure is repeated taking into account the configurations with $`\alpha =4`$ and $`\alpha =5`$ critical sites, and the renormalized probabilities at level $`k`$ are finally derived by averaging over the configurations of different $`\alpha `$-values, including their statistical weights $`W_\alpha (\rho ^{(k-1)})`$. Therefore, the probabilities $`p_i^{(k)}`$ at length scale $`b^{(k)}`$ will be given by $$p_i^{(k)}=\sum _{\alpha =3}^{5}W_\alpha (\rho ^{(k-1)})p_i^{(k-1)}(\alpha )$$ (8) with $`W_\alpha (\rho ^{(k-1)})`$ and $`\rho ^{(k-1)}`$ given by Eqs. (3)-(5) and Eq. (2), respectively. As the computation of the probabilities $`p_i^{(k)}`$ in Eq. (8) is rather lengthy and cumbersome, we have developed a C-code to compute all the polynomial term coefficients that contribute to the renormalization transformation. 
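The $`\alpha =3`$ channel can be checked with a few lines of code (a minimal Python sketch, not the authors' C-code): the statistical weights of Eqs. (3)-(5) and the $`p_1`$ renormalization of Eqs. (6)-(7) can be evaluated directly, and every term of Eqs. (6)-(7) carries a factor $`p_1`$, so a vector with vanishing $`p_1`$ stays at $`p_1=0`$ in this channel.

```python
# Minimal Python sketch (not the authors' C-code): the statistical weights
# of Eqs. (3)-(5) and the alpha = 3 contribution to p_1 at the larger
# scale, Eqs. (6)-(7).

def weights(rho):
    """Statistical weights W_alpha of the spanning configurations."""
    return {
        3: 2 * rho ** 3 * (1 - rho) ** 2,
        4: 4 * rho ** 4 * (1 - rho),
        5: rho ** 5,
    }

def p1_next_alpha3(p):
    """alpha = 3 contribution to p_1^(k), Eqs. (6)-(7)."""
    p1, p2, p3, p4 = p
    term6 = (1 / 3) * ((p2 / 6 + p3 / 2 + p4)
                       * (p1 / 4)
                       * (3 * p1 / 2 + 4 * p2 / 3 + p3 / 2))
    term7 = (2 / 3) * ((p1 / 4 + p2 / 2 + 3 * p3 / 4 + p4)
                       * (p1 / 4)
                       * (3 * p1 / 4 + 7 * p2 / 6 + p3 / 2))
    return term6 + term7

# Every term carries a factor p_1, so a vector with p_1 = 0 generates no
# p_1 component; in particular (0, 0, 0, 1) is self-consistent here.
print(p1_next_alpha3((0.0, 0.0, 0.0, 1.0)))  # 0.0
print(weights(0.25))
```
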
Now, we proceed to explore the scale-invariant behavior of the model by finding the fixed-point solution $`p_i^{(k-1)}=p_i^{(k)}`$. In order to do this, we start from the shortest length scale, characterized by $`(\rho ^{(0)},\vec{p}^{\,(0)})`$, and study how it evolves under repeated iteration of the transformation (8). For the two-state model of Manna the parameters $`(\rho ^{(0)},\vec{p}^{\,(0)})`$ are $`(\rho ^{(0)},0,1,0,0)`$, whereas for the BTW sandpile we have $`(\rho ^{(0)},0,0,0,1)`$. Here, the initial value of the density of critical sites $`\rho ^{(0)}`$ is irrelevant for the dynamics, since the system evolves to a fixed point whatever the distribution of critical sites at the small-scale dynamics. As in Refs , both models have the same fixed point, but here there is an important difference in relation to the value of the fixed point. We obtain for the fixed point the value $`(\rho ^{*},\vec{p}^{\,*})=(\frac{1}{4},0,0,0,1)`$; that is, in the BTW model one starts from the fixed point! This is indeed not the case for the two-state model of Manna, for which we need to iterate Eq. (8) more than twenty times to reach the same fixed point. We believe that this is a consequence of our renormalization strategy for the relaxation dynamics, and it constitutes a great simplification in the calculation of the dynamical exponent $`z`$. In fact, we were expecting a critical fixed-point value different from that reported in references , although the critical exponents should be very close, since they are determined by the properties of the system at large scales. The exponent $`\tau `$ that characterizes the power-law avalanche size distribution can be obtained following the procedure of . Consider the probability $`K_{b^{(k-1)},b^{(k)}}`$ that the relaxation processes that are active at scale $`b^{(k-1)}`$ do not extend beyond the larger scale $`b^{(k)}`$. 
This is expressed as $$K=\frac{\int _{b^{(k-1)}}^{b^{(k)}}P(r)dr}{\int _{b^{(k-1)}}^{\mathrm{\infty }}P(r)dr}=1-\left(\frac{b^{(k)}}{b^{(k-1)}}\right)^{2(1-\tau )}=1-\left(\sqrt{5}\right)^{2(1-\tau )}.$$ (9) Eq. (9) also satisfies $$K=p_1^{*}(1-\rho ^{*})+p_2^{*}(1-\rho ^{*})^2+p_3^{*}(1-\rho ^{*})^3+p_4^{*}(1-\rho ^{*})^4$$ (10) Then, the exponent $`\tau `$ is given by $$\tau =1-\frac{1}{2}\frac{\mathrm{log}(1-K)}{\mathrm{log}(\sqrt{5})}=1.235.$$ (11) This value of $`\tau `$ is in very good agreement with the value obtained in and with large-scale numerical simulations, which give $`\tau =1.27`$ for the two-state model of Manna, and $`\tau =1.29`$ for the BTW sandpile model . A second independent critical exponent can also be computed. This is the so-called dynamical exponent $`z`$ that relates the spatial scale $`r`$ to the time scale $`t`$ through the power law $`t\sim r^z`$. As pointed out in , the calculation of $`z`$ could be an enormous and laborious task, because knowledge of the fixed-point value is not sufficient and we have to know the complete form of the renormalized dynamics. Nevertheless, as we said before, the use of our larger cells in the renormalization transformation leads to a fixed point with a unique nonzero component in the vector $`\vec{p}^{\,*}`$. This constitutes a great simplification in the derivation of the complete structure of the renormalized dynamics. In what follows, we derive at a glance the dynamical critical exponent for the BTW sandpile model. In order to obtain the dynamical exponent we have to calculate the average number $`\langle t\rangle `$ of noncontemporary processes at scale $`b^{(k-1)}`$ needed to have a relaxation process at the larger scale $`b^{(k)}`$, which is related to $`z`$ through $$z=\frac{\mathrm{ln}\langle t\rangle }{\mathrm{ln}\left(\frac{b^{(k)}}{b^{(k-1)}}\right)}=\frac{\mathrm{ln}\langle t\rangle }{\mathrm{ln}\left(\sqrt{5}\right)}.$$ (12) In Fig.3 we have depicted the possible starting configurations for the different values of $`\alpha `$. 
The figure also shows the time steps needed to have a relaxation process at the larger scale. Such a simplification in the calculation is possible because we have to consider only the relaxations that contribute to the renormalization of $`p_4`$ at the larger scale. As can be seen, we need two time steps for the symmetric configurations (those in which the initial unstable site is located at the center of the cell) and three for the non-symmetric configurations (those in which the initial unstable site is located in one of the critical boundary sites of the cell). Therefore, $$\langle t\rangle =\frac{1}{\sum _\alpha W_\alpha (\rho )}\sum _\alpha t^{*}(\alpha )W_\alpha (\rho )$$ (13) where $`t^{*}(\alpha )`$ is the weighted average of time steps, taking into account the additional statistical weights due to the multiplicities $`\omega `$ of each configuration $`\alpha `$ (see Fig.3). Now, evaluating Eq. (13) at the fixed point, we obtain $$z=1.236$$ (14) The value (14) is in remarkably good agreement with the numerical result $`z=1.21`$ and with the exact value $`z=5/4`$ . The other critical exponents can be derived from scaling relations . Table I summarizes the values of the critical exponents obtained for the BTW sandpile model and those reported by the previous renormalization scheme and numerical simulations. In this report, we have introduced an alternative renormalization strategy that simplifies the analytical derivation of the critical exponents that characterize the dynamics of sandpile models. By using larger cells, formed by five sub-cells of the finer scale, we obtain a fixed point with a unique nonzero dynamical component, which allows us to derive the whole form of the renormalized dynamics in a more direct and simple way. The values of the exponents obtained here are in good agreement with those previously reported. Besides, as in similar analytical predictions, the two-state model of Manna and the BTW sandpile model belong to the same universality class . 
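The exponent values quoted above can be reproduced numerically from Eqs. (9)-(12) evaluated at the fixed point $`(\rho ^{*},\vec{p}^{\,*})=(\frac{1}{4},0,0,0,1)`$; a short sketch (small rounding differences from the quoted 1.235 are expected):

```python
import math

# Numerical check of the exponents quoted above, evaluated at the fixed
# point (rho*, p*) = (1/4, (0, 0, 0, 1)) with scaling factor sqrt(5).

rho_star = 0.25
p_star = (0.0, 0.0, 0.0, 1.0)

# Eq. (10): at the fixed point only the p_4 term survives.
K = sum(p * (1 - rho_star) ** (i + 1) for i, p in enumerate(p_star))

# Eq. (11): tau = 1 - (1/2) * log(1 - K) / log(sqrt(5))
tau = 1 - 0.5 * math.log(1 - K) / math.log(math.sqrt(5))
print(round(tau, 3))  # 1.236, vs. the 1.235 quoted in the text

# Eq. (12) inverted: z = 1.236 corresponds to an average number of time
# steps <t> = (sqrt(5))**z of about 2.7, i.e. between the two steps of the
# symmetric and the three steps of the non-symmetric configurations.
t_avg = math.sqrt(5) ** 1.236
print(round(t_avg, 2))
```
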
The results confirm the robustness of the renormalization group approach. It is a pleasure to thank A. Vespignani for stimulating discussions and G. Caldarelli for useful correspondence. Y.M. would like to thank the AECI for financial support.
no-problem/9909/hep-ph9909413.html
ar5iv
text
# On the ξ-Distribution of Inclusively Produced Particles in e⁺e⁻ Annihilation ## Abstract We discuss the momentum distributions of inclusively produced particles in $`e^+e^{-}`$ annihilation. We show that the dependence of the position of the maxima of the $`\xi =\mathrm{ln}(1/z)`$ spectra on the mass of the produced particles follows naturally from the general definition of fragmentation functions when energy-momentum conservation is correctly incorporated. preprint: ADP-99-39/T376 IU/NTC 99-09 $`e^+e^{-}`$ annihilation provides an excellent opportunity to study the fragmentation of quarks into hadrons. In particular, inclusive measurements of particle spectra allow us to extract fragmentation functions from such experiments and to test different theoretical models of fragmentation. Until now, fragmentation functions have not been calculated from first principles; rather, they have to be modeled in some way. Most current approaches use algorithms, such as string and shower algorithms, that model the fragmentation of a high energy quark in two phases: one purely perturbative, describing the radiation and branching of the initial quarks, and the other describing the subsequent non-perturbative hadronisation of the low energy quarks. Here, we follow a different approach. Starting from the general definition of the fragmentation functions, and explicitly guaranteeing energy-momentum conservation, we discuss the following interesting property of inclusive particle spectra in $`e^+e^{-}`$ annihilation. When the number of inclusively produced particles is plotted as a function of $`\xi =\mathrm{ln}(1/z)`$ (where $`z`$ is the momentum fraction of the fragmenting quark carried by the produced hadron), it exhibits an approximate Gaussian shape around a maximum, $`\xi ^{*}`$. The position of the maximum depends both on the total centre of mass energy and on the mass of the produced particle . 
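As a small numerical illustration of the change of variables: for any smooth $`z`$-spectrum, the $`\xi `$-spectrum is $`d\sigma /d\xi =z\,d\sigma /dz`$, so the maximum in $`\xi `$ sits where $`z`$ times the $`z`$-spectrum peaks. The spectrum used below is a hypothetical toy form, not a fitted fragmentation function:

```python
import math

# Toy illustration (hypothetical spectrum, not a fitted fragmentation
# function): for a smooth z-spectrum dN/dz, the xi = ln(1/z) spectrum is
# dN/dxi = z * dN/dz, so the maximum xi* sits where z * dN/dz peaks.

def dN_dz(z):
    return z * (1 - z) ** 3   # assumed toy shape

def dN_dxi(xi):
    z = math.exp(-xi)
    return z * dN_dz(z)       # Jacobian |dz/dxi| = z

# locate the maximum of the xi-spectrum on a grid
xis = [i * 0.001 for i in range(1, 8000)]
xi_star = max(xis, key=dN_dxi)
print(round(xi_star, 2))      # ln(1/0.4) ~ 0.92 for this toy shape
```

For this toy shape, $`z\,dN/dz=z^2(1-z)^3`$ peaks at $`z=2/5`$, so the grid search lands on $`\xi ^{*}=\mathrm{ln}(2.5)`$.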
While the shape and the energy dependence of the spectrum can be understood in perturbative QCD, as a consequence of the coherence of gluon radiation , the position of the maximum is a free parameter which has to be extracted from experiment. Our main purpose in this paper is to show that the dependence of the location of the maximum on the mass of the produced particle follows naturally from the general definition of fragmentation functions when energy-momentum conservation is correctly incorporated. Our starting point is the general definition of fragmentation functions $$\frac{1}{z}D_q(z)=\frac{1}{4}\sum _n\int \frac{d\xi ^{-}}{2\pi }e^{ip^+\xi ^{-}/z}\text{Tr}\{\gamma ^+\langle 0|\psi (0)|Pn;pp_n\rangle \langle Pn;pp_n|\overline{\psi }(\xi ^{-})|0\rangle \}.$$ (1) (Here, we discuss only the twist two part of the unpolarized fragmentation functions.) $`\gamma ^+`$ is defined as $`\gamma ^+=\frac{1}{\sqrt{2}}(\gamma ^0+\gamma ^3)`$ and the plus components of the momenta are defined as $`p^+=\frac{1}{\sqrt{2}}(p^0+p^3)`$. $`p`$ and $`p_n`$ are the momenta of the produced particle, $`P`$, and the associated hadronic system $`n`$. Using translational invariance to remove the $`\xi ^{-}`$ dependence of the second matrix element, using the integral representation of the delta function, and projecting out the plus components of $`\psi `$, we obtain $$\frac{1}{z}D_q(z)=\frac{1}{2\sqrt{2}}\sum _n\delta [(1/z-1)p^+-p_n^+]|\langle 0|\psi _+(0)|Pn;pp_n\rangle |^2.$$ (2) Here, the plus projection is defined as $`\psi _+=\frac{1}{2}\gamma _+\gamma _{-}\psi `$. Using Eq. (2) rather than Eq. (1) has the advantage that energy-momentum conservation is built in before any approximation is made for the states in the matrix element. This is similar to the case of quark distributions, where the corresponding expression ensures the correct support of the distribution functions, as discussed in Ref. . 
While the detailed structure of the fragmentation function depends on the exact form of the matrix element, some general properties follow already from Eq. (2). The delta function, for example, implies that the function, $`D_q(z)/z`$, peaks at $$z_{max}\approx \frac{M}{M+M_n}.$$ (3) Here, $`M`$ and $`M_n`$ are the masses of the produced particle and the produced system, $`n`$, and we work in the rest frame of the produced particle. Here, we consider the intermediate state as a state having a definite mass. (In general, we have to integrate over a spectrum of all possible masses.) We see that the location of the maximum of the fragmentation function depends on the mass of the system $`n`$. While the high $`z`$ region is dominated by the fragmentation of a quark into the final particle and a small mass system, large mass systems contribute to the fragmentation at lower $`z`$ values. We can go a step further and eliminate the $`\delta `$-function in Eq. (2) by integrating over the momentum of the unobserved state $`n`$, $$\frac{1}{z}D_q(z)=\frac{1}{2\sqrt{2}}\int _{p_{min}}^{\mathrm{\infty }}p_ndp_n|\langle 0|\psi _+(0)|Pn;pp_n\rangle |^2,$$ (4) with $$p_{min}=\left|\frac{M^2(1-z)^2-z^2M_n^2}{2Mz(1-z)}\right|.$$ (5) The significance of Eqs. (4) and (5) is that $`D_q(z)`$ vanishes for both $`z\to 1`$ and $`z\to 0`$. Thus, fragmentation functions have the correct support. It is interesting to see how the integration region depends on the momentum fraction, $`z`$, for various values of the mass of the produced particle system, $`M_n`$. In Fig. 1, we plot $`p_{min}`$ for the production of protons as a function of $`z`$ for different values of $`M_n`$. $`p_{min}=0`$ gives the value of $`z`$ at which the contribution of a given $`M_n`$ to the fragmentation function is largest. This gives $`z_{max}`$ according to Eq. (3). Further, the region of $`z`$ where the lower integration limit is sufficiently small that the integral will be significant becomes narrower with increasing $`M_n`$. 
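Equations (3) and (5) are easy to explore numerically. The sketch below (masses in GeV with $`c=1`$; the $`M_n`$ values are illustrative) checks that $`p_{min}`$ vanishes at $`z_{max}=M/(M+M_n)`$ and that the window in $`z`$ where $`p_{min}`$ stays small narrows as $`M_n`$ grows:

```python
# Sketch of Eqs. (3) and (5): the lower integration limit p_min as a
# function of z for a produced particle of mass M and an unobserved
# system of mass M_n (masses in GeV, units with c = 1; the M_n values
# below are illustrative).

def p_min(z, M, M_n):
    return abs((M**2 * (1 - z)**2 - z**2 * M_n**2) / (2 * M * z * (1 - z)))

def z_max(M, M_n):
    # Eq. (3): the point where p_min vanishes
    return M / (M + M_n)

M = 0.938  # proton mass
for M_n in (2.0, 5.0, 10.0):
    zm = z_max(M, M_n)
    # fraction of the z range where the lower limit stays below 0.5 GeV
    window = sum(1 for i in range(1, 1000)
                 if p_min(i / 1000, M, M_n) < 0.5) / 1000
    print(M_n, round(zm, 3), window)
```
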
Thus, large mass states contribute to the fragmentation function at low $`z`$ and only in a very narrow region of $`z`$. At a given centre of mass energy, there will be a maximum value for the mass of the intermediate state which can be produced in the fragmentation. This maximal mass determines the “lower” edge of the spectrum. <sup>*</sup><sup>*</sup>*Since the $`\xi `$-distribution is given by $`d\sigma /d\xi =zd\sigma /dz\sim zD(z)`$, it is proportional to $`z^2`$ times $`D(z)/z`$. Although Eq.(3) describes the location of the maximum of the distribution $`D(z)/z`$, we can expect that Eq.(3) is also a good approximation for the $`\xi `$-distribution, since the $`z`$ region where the lower integration limit ($`p_{min}`$) is sufficiently small is very narrow for large masses, $`M_n`$. Thus, the contributions from a given $`M_n`$ to the fragmentation functions are very narrow functions of $`z`$ for large $`M_n`$. Note that the square of the matrix element in Eq. (4) must decrease faster than $`1/p_n^2`$ in order to give a finite result for the fragmentation functions. Eq. (3) gives the maximum of the $`\xi `$-distribution exactly in the limiting case when the contribution of a given $`M_n`$ to the fragmentation function is a $`\delta `$-function. We can use Eq. (3) to estimate the maximum of the $`\xi `$-distribution associated with this particular mass. This maximum determines the maximum of the fragmentation function in first approximation. Although $`M_n`$ is not known, it should be proportional to the available total energy $`E_{CM}`$. However, the precise value of $`M_n`$ is not needed if we are only interested in the relative position of the maxima of the $`\xi `$ distributions of different particles. Taking the difference of the maxima of the $`\xi =\mathrm{ln}(1/z)`$ distributions of two different particles, $`a`$ and $`b`$, the dependence on the unknown value of $`M_n`$ drops out for sufficiently large $`M_n`$. 
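The large-$`M_n`$ argument can be made concrete with Eq. (3): writing $`\xi ^{*}=\mathrm{ln}(1/z_{max})=\mathrm{ln}((M+M_n)/M)`$, differences of maxima lose their $`M_n`$ dependence as $`M_n`$ grows. The sketch below uses the standard proton and Λ masses; the $`M_n`$ values are illustrative:

```python
import math

# Sketch of the large-M_n argument: from Eq. (3), xi* = ln(1/z_max)
# = ln((M + M_n)/M), so differences of maxima lose their M_n dependence
# as M_n grows. Proton and Lambda masses are standard values in GeV;
# the M_n values are illustrative.

def xi_star(M, M_n):
    return math.log((M + M_n) / M)

M_p, M_lam = 0.938, 1.116
for M_n in (5.0, 20.0, 80.0):
    delta = xi_star(M_p, M_n) - xi_star(M_lam, M_n)
    print(M_n, round(delta, 3))

print(round(math.log(M_lam / M_p), 3))  # limiting value ln(M_b / M_a)
```
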
It follows from Eq.(3) that $$\mathrm{\Delta }\xi ^{*}=\xi _a^{*}-\xi _b^{*}\simeq \mathrm{ln}\left(\frac{M_a+M_n}{M_b+M_n}\right)+\mathrm{ln}\frac{M_b}{M_a}\approx \mathrm{ln}\frac{M_b}{M_a}$$ (6) Thus, the difference of the maxima is determined by the logarithm of the ratio of the masses of the produced particles. Since the values of $`M_n`$ at finite centre of mass energies are in general different for mesons and baryons, Eq. (6) will only be valid for the difference of the maxima of the mesons or the baryons separately. We calculated the maxima of the $`\xi `$ distributions using this formula, taking the maxima of the $`\eta ^{\prime }`$ and of the proton distributions as reference values for mesons and baryons, respectively. The results are compared to the experimental data in Fig. 2. The location of the maxima as a function of the mass of the produced particles is reasonably well described both for mesons and baryons. In conclusion, we have shown that the dependence of the position of the maxima of the $`\xi =\mathrm{ln}(1/z)`$ spectra on the mass of the produced particles follows from the general definition of the fragmentation functions and from energy-momentum conservation. Our results are in remarkably good agreement with the data. ###### Acknowledgements. This work was partly supported by the Australian Research Council. One of the authors \[JTL\] was supported in part by National Science Foundation research contract PHY-9722706. One author \[JTL\] wishes to thank the Special Research Centre for the Subatomic Structure of Matter for its hospitality during the time this work was carried out.
no-problem/9909/astro-ph9909078.html
ar5iv
text
# Detection Techniques of Microsecond Gamma-Ray Bursts using Ground-Based Telescopes ## 1 Introduction The astrophysical band for the detection of high energy $`\gamma `$-rays has recently been expanded to energies from hundreds of GeV (Weekes et al. 1989) to beyond 10 TeV (Aharonian et al. 1997; Tanimori et al. 1998a; Krennrich et al. 1999a) using the ground-based atmospheric Cherenkov imaging technique. The proposed coverage from 20 MeV - 300 GeV with the future satellite-based GLAST detector (Gehrels & Michelson 1999), providing a large field of view, is complemented by proposals for ground-based detectors such as VERITAS (Weekes et al. 1999), HESS (Hofmann et al. 1997) and MAGIC (Barrio et al. 1998) with an energy threshold in the tens of GeV range. Ground-based Cherenkov imaging detectors provide large collection areas of the order of $`10^5\mathrm{m}^2`$ and hence are well suited to the study of $`\gamma `$-ray flare phenomena. This technique has already proven successful in the study of AGN flares on minute time scales (Gaidos et al. 1996) and is expected to improve in sensitivity by an order of magnitude with future detectors. In this paper we explore the possibility of using imaging atmospheric Cherenkov telescopes to detect $`\gamma `$-ray flare phenomena on shorter time scales of microseconds and with energies in the sub-GeV regime. Extremely short astrophysical bursts of $`\gamma `$-rays could be the signature of Hawking’s predicted $`\gamma `$-ray burst radiation from the evaporation of primordial black holes (Hawking 1974). The lifetime of a black hole is proportional to the mass cubed. In the early Universe primordial black holes (PBHs) of small mass may have formed (Hawking 1971; Carr 1976). PBHs created with initial masses slightly greater than $`5\times 10^{14}\mathrm{grams}`$ would be evaporating now by the quantum-gravitational Hawking mechanism. 
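The $`5\times 10^{14}`$ gram scale can be checked with a rough back-of-the-envelope estimate (a sketch only: the photon-only Hawking lifetime formula $`t_{ev}\simeq 5120\pi G^2M^3/\mathrm{\hbar }c^4`$ is used, and the model-dependent number of emitted particle species, which shortens the lifetime by a factor of a few, is ignored):

```python
import math

# Rough check of the evaporating-mass scale: the photon-only Hawking
# lifetime t_ev ~ 5120*pi*G^2*M^3/(hbar*c^4), ignoring the model-dependent
# number of emitted particle species (which shortens the lifetime by a
# factor of a few), gives ~1e14 g for a hole evaporating today.

G = 6.674e-11         # m^3 kg^-1 s^-2
hbar = 1.0546e-34     # J s
c = 2.998e8           # m / s
t_universe = 4.35e17  # s, roughly the age of the Universe

M_kg = (t_universe * hbar * c ** 4 / (5120 * math.pi * G ** 2)) ** (1 / 3)
M_g = 1000 * M_kg
print(f"{M_g:.2e} g")
```
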
A PBH’s existence ends in a dramatic explosion, where the final stage of the evaporation is determined by particle physics at extremely high energies. Hagedorn (1970) suggested a particle physics model in which the number of species of particles increases exponentially with energy. In this scenario, a black hole loses its energy quickly when reaching a critical temperature. A burst of $`\gamma `$-rays as short as $`10^{-7}\mathrm{s}`$, with a total energy of $`10^{34}`$ ergs, would be the signature of such an event. However, the burst time scale and average photon energy depend on the particle physics model, with highly uncertain predictions at high energies. This has prompted searches over much larger time and energy scales, ranging from $`10^{-7}\mathrm{s}`$ at 250 MeV to seconds at 10 TeV, as suggested by the standard model of particle physics (Halzen et al. 1991). Cline & Hong (1992), using a mixture of a Hagedorn and a QCD-like spectrum, suggested that these bursts occur on the millisecond time scale in the MeV range. Classical $`\gamma `$-ray bursts (GRBs) detected with the satellite experiment BATSE on CGRO show $`\gamma `$-ray emission on surprisingly short time scales. GRB time scales in the millisecond range have been reported by Kouveliotou et al. (1994) - the detection of the so-called “Superbowl” burst (GRB930131) revealed temporal variations on time scales as short as 2 ms. In fact, evidence for sub-millisecond (200 $`\mu \mathrm{sec}`$) structures was found in the BATSE data of GRB910305 (Bhat et al. 1992). EGRET, which was sensitive from 30 MeV to 30 GeV, was limited by an instrumental dead time effect to burst time scales above 200 ms. A search for microsecond scale bursts using EGRET has been made by looking for multiple-$`\gamma `$-ray events arriving almost simultaneously (within a single spark chamber gate, i.e., 600 ns); it produced only an upper limit of $`5\times 10^{-2}/\mathrm{yr}/\mathrm{pc}^3`$ (Fichtel et al. 
1994). Searches by Cline et al. (1997) using archival data from the BATSE experiment found some events on millisecond time scales, but it was not possible to prove that they were not just classical $`\gamma `$-ray bursts. Also, in an early experiment, first generation ground-based atmospheric Cherenkov detectors were used to search on the shortest time-scales predicted ($`10^{-7}\mathrm{s}`$), giving an upper limit of $`4\times 10^{-2}/\mathrm{yr}/\mathrm{pc}^3`$ (Porter & Weekes 1978). The possibility of using atmospheric Cherenkov imaging telescopes to detect wavefront events was considered elsewhere (Connaughton, 1996); for a single telescope with a camera with a relatively small field of view, it was shown to be difficult to recognize the bursts and distinguish them from background cosmic-ray events. The technique described here can be used to search for microsecond $`\gamma `$-ray emission in the sub-GeV regime with a more sensitive ground-based instrument. The proposed detection technique will clearly identify these events. A short burst can be approximated as a thin plane wavefront of $`\gamma `$-rays traveling through space (wavefront event, hereafter), starting a multi-photon-initiated cascade when entering the earth’s atmosphere. Measuring the angular distribution of Cherenkov light from a short burst using an atmospheric Cherenkov imaging detector is a new approach to distinguishing short bursts from the cosmic-ray background. Previous efforts (Porter & Weekes 1978) used non-imaging Cherenkov detectors, and the suppression of cosmic rays was achieved by simultaneous recording with two telescopes separated by a distance of 400 km. Imaging enables the identification of a $`\gamma `$-ray wavefront event in a single telescope and the measurement of its arrival direction. With some modifications (§4), future ground-based $`\gamma `$-ray detectors using arrays of imaging telescopes, e.g., VERITAS (Weekes et al. 1999) and HESS (Hofmann et al. 
1997), would be ideally suited to exploring this observational window of microsecond bursts. In §2 we describe the phenomenology of the wavefront events and how they differ from single-particle-initiated air showers. In §3, using Monte Carlo simulations, we describe an analysis technique, including timing characteristics, to separate wavefront events from the background arising from cosmic-ray showers. We also discuss the design considerations (§4) for the implementation in imaging Cherenkov telescopes. In §5, estimates of the flux sensitivity and the energy range of the existing Whipple Observatory 10 m instrument are shown. ## 2 Phenomenology of multi-photon-initiated showers The technique proposed here builds upon the atmospheric Cherenkov imaging technique that has been pivotal in establishing the field of TeV $`\gamma `$-ray astrophysics (for a review see Ong et al. 1998). The technique provides the highest sensitivity for detecting $`\gamma `$-ray sources above 200 GeV. In this technique, Cherenkov light from an electromagnetic atmospheric cascade is focused onto a camera of fast photomultiplier tubes. The images are analyzed to select $`\gamma `$-ray events while rejecting over 99.7% of cosmic-ray background events. This has led to the discovery of TeV $`\gamma `$-rays from the Crab Nebula (Weekes et al. 1989), PSR 1706-44 (Kifune et al. 1995), Vela (Yoshikoshi et al. 1997) and SN 1006 (Tanimori et al. 1998b), and from three active galactic nuclei: Mrk 421 (Punch et al. 1992), Mrk 501 (Quinn et al. 1996) and 1ES 2344+514 (Catanese et al. 1998). Atmospheric Cherenkov telescopes have a high collection area ($`50,000\mathrm{m}^2`$ for a single 250 GeV $`\gamma `$-ray), making them uniquely sensitive to short time-scale phenomena. Cherenkov light from a plane wavefront of multiple $`\mathrm{E}>`$ 200 MeV $`\gamma `$-rays can be detected with ground-based optical telescopes. 
A low energy multi-photon-initiated cascade differs significantly from a single-particle-initiated cascade, e.g., a TeV photon or proton induced shower. Individual low energy $`\gamma `$-rays, when reaching the upper atmosphere, will typically generate one or a few (depending on energy) generations of electrons and positrons (collectively called electrons hereafter) by pair production and subsequent bremsstrahlung. The electrons, before falling below the critical energy, radiate Cherenkov light (6000 photons per electron per radiation length), which can be collected by an optical reflector at ground level. The average number of Cherenkov photons associated with a single sub-GeV $`\gamma `$-ray is small, and therefore its Cherenkov flash is too faint to be detectable at ground level. However, a large number of $`\gamma `$-rays arriving within a short time can produce a Cherenkov signal strong enough to be detectable by an atmospheric Cherenkov telescope. Previous efforts to detect wavefront events were based on the fact that multi-photon-initiated showers have a large lateral extent. They can be detected by using relatively simple non-imaging atmospheric Cherenkov telescopes (Porter & Weekes 1978). For logistical and cost reasons it is difficult to operate two telescopes several hundred miles apart solely dedicated to a search for bursts. By contrast, existing imaging telescopes, or future arrays of imaging telescopes, can be used in parallel with standard TeV $`\gamma `$-ray observations to search for wavefront events from microsecond bursts. These instruments also provide a significant improvement over previous efforts: the imaging capability provides clear recognition of wavefront events from the measurement of the angular Cherenkov light distribution in the focal plane, combined with the Cherenkov pulse width. There are three unique characteristics of the Cherenkov light image produced by a wavefront event. 
a) The first is the very large extent of the wavefront, which means it can be detected simultaneously by telescopes separated by vast distances. The images in all telescopes in an array (for example, the VERITAS array) should be identical, regardless of their separation. This is different from single-particle-initiated shower images, which are detectable over a limited area on the ground and, if detected, show a parallactic displacement between telescopes. b) The second characteristic is the time profile of the Cherenkov pulse, which can range from $`\sim `$ 100 nanoseconds to microseconds - and thus is quite different from the Cherenkov flashes of conventional air showers, which have durations of 5-30 nanoseconds. The Cherenkov light time profile of relatively long (microsecond) bursts is dominated by the intrinsic width of the burst itself. However, the time profile of a wavefront event is also determined by the geometry of the multi-photon-initiated cascade: the detection is based on collecting Cherenkov photons that have been emitted by secondary electrons of the cascades initiated by $`\gamma `$-ray primaries with a large range of impact points. Cherenkov photons can be collected up to several hundred meters from the extrapolated impact point of the primary at ground level. The intrinsic differences in time-of-flight between Cherenkov photons from different primary particle impact points cause multi-photon-initiated showers to have a minimum width of $`\sim 40`$ nanoseconds (see §3.3), assuming the time profile of the $`\gamma `$-rays is a delta function. The time structure of the images shows a concentric symmetry: the closer to the center, the earlier the pulse. c) The third characteristic, which is hinted at by Figure 1 but is not entirely obvious, is that the images in the camera plane from a wavefront event are circular. 
They will also provide information about the arrival direction of the wavefront: the displacement of the image centroid from the optic axis of the telescope measures the arrival direction of the burst. Figure 2 shows the simulated image (see §3.1) of a Cherenkov flash from a 300 MeV $`\gamma `$-ray burst (pulse width of 100 ns with 0.5 $`\gamma `$-rays$`/\mathrm{m}^2`$; fluence = $`2.4\times 10^{-8}\mathrm{ergs}/\mathrm{cm}^2`$) in the focal plane of the Whipple Observatory 10 m telescope. The background from night-sky fluctuations for a 100 ns exposure has been included and a standard image cleaning procedure (Reynolds et al. 1993) applied. The light distribution in the image center is relatively flat and smooth. The flatness of the light distribution arises from a uniform lateral density distribution of electrons. Shower fluctuations have very little effect on the Cherenkov light distribution because of the huge number of showers contributing to the Cherenkov flash. This results in a smooth light distribution with mainly statistical variations due to the night-sky background and instrumental noise. The Cherenkov light angular distribution is determined by the Cherenkov angle at a given height ($`0.4^{\circ }`$ at 15 km height) and the multiple-scattering angle of the electrons in the cascade. The convolution of both effects leads to images that show a prominent plateau with a radial extension of $`0.3^{\circ }`$ and a “halo” extending further, with a scale of $`2^{\circ }`$ F.W.H.M. These image shapes clearly differ from single $`\gamma `$-ray or cosmic-ray initiated Cherenkov images (Hillas 1996) and provide an important constraint for classifying these short bursts. Together with the timing information of the Cherenkov pulse-shape, imaging can be used to reject background events from cosmic rays. Because we are describing a burst detection technique, fluence and sensitive area are used to describe the detector properties and are defined as follows: 1. 
Fluence: A detector is triggered whenever the number of $`\gamma `$-rays during an integration time bin of some duration exceeds a threshold. In the following, we use the term fluence for the total energy S received from a given burst, in units of $`\mathrm{ergs}/\mathrm{cm}^2`$, over the full duration of the burst. Since we are not trying to resolve individual photons during the burst, such an integral measure is sufficient. 2. Sensitive area: Cherenkov photons emitted by an electron at 20 km atmospheric height are most likely spread over an area of 500 m in radius. However, a few photons, emitted by electrons with large multiple-scattering angles, reach up to 800 m from the impact point of the primary $`\gamma `$-ray. This results in a large sensitive area ($`2\times 10^6\mathrm{m}^2`$) over which individual $`\gamma `$-rays make a contribution to the total amount of light of a Cherenkov flash. The efficiency for a single sub-GeV $`\gamma `$-ray triggering a reasonably sized atmospheric Cherenkov imaging telescope ($`<20`$ m reflector diameter) is essentially zero. For a 300 MeV $`\gamma `$-ray, the efficiency for contributing a single photoelectron in the photomultiplier camera of the Whipple Observatory 10 m telescope reaches a maximum of approximately 1%. The sensitive area is the area over which an individual low energy shower makes a significant contribution to the Cherenkov light flash. ## 3 Simulations We have carried out Monte Carlo simulations to characterize the signatures of multi-photon-initiated cascades. The Monte Carlo code ISUSIM (Mohanty et al. 1998) was used, which includes the detector model of the Whipple Observatory 10 m telescope equipped with a $`4.8^{\circ }`$ field-of-view 331-photomultiplier camera (Quinn et al. 1999). The underlying goal was to achieve good background suppression while maintaining maximum detection efficiency. A microsecond burst consists of multiple primary $`\gamma `$-rays producing independent cascades. 
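The fluence definition above can be checked against the numbers used for the simulated bursts in this paper: a wavefront of 0.5 $`\gamma `$-rays$`/\mathrm{m}^2`$ at 300 MeV corresponds to a fluence of about $`2.4\times 10^{-8}\mathrm{ergs}/\mathrm{cm}^2`$.

```python
# Consistency check of the fluence numbers used in this paper: a wavefront
# of 0.5 gamma-rays per m^2 at 300 MeV carries a fluence of
# ~2.4e-8 ergs/cm^2.

MEV_TO_ERG = 1.602e-6
photons_per_m2 = 0.5
energy_mev = 300.0

fluence = photons_per_m2 * energy_mev * MEV_TO_ERG / 1e4  # per m^2 -> per cm^2
print(f"{fluence:.1e} erg/cm^2")  # 2.4e-08 erg/cm^2
```
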
We have generated $`10^8`$ individual $`\gamma `$-rays randomly spread out over a range of impact radius 0 to 1000 m to study the properties of bursts. The Cherenkov photons which hit the mirror and are reflected into the focal plane detector have been superimposed. In order to trigger on a wavefront event, the number of photoelectrons created in several photomultipliers used for forming a coincidence has to significantly exceed the number of photoelectrons initiated by fluctuations from the night-sky background. Therefore, a minimum number of $`\gamma `$-rays per square meter (Cherenkov photon yield $`\propto `$ number of primary $`\gamma `$-rays) is required to detect a signal in the photomultiplier camera (for trigger specifications, see §4). The fluence required for the detection of a wavefront event of given $`\gamma `$-ray energies is proportional to the number of incoming $`\gamma `$-rays$`/\mathrm{m}^2`$ during the time of the burst, ultimately determining the number of Cherenkov photons arriving at the detector.

### 3.1 Image characteristics

The technique of recording the Cherenkov images of single-particle-induced showers has proven to be effective in distinguishing $`\gamma `$-ray induced showers from the more numerous background images from cosmic rays. The usefulness of imaging to identify wavefront events is addressed in this section. Figure 2 shows the Cherenkov light image from a simulated wavefront event (300 MeV $`\gamma `$-rays traveling parallel to the optic axis) in the 331-phototube camera of the Whipple Observatory 10 m telescope. The area and the gray-scale of the filled circles indicate the number of photoelectrons detected in each pixel. The fluence of the event in Figure 2 is $`2.4\times 10^{-8}\mathrm{ergs}/\mathrm{cm}^2`$ (0.5 $`\gamma `$-rays$`/\mathrm{m}^2`$ at 300 MeV). Figure 3 shows the image of a wavefront event arriving $`1.13^{}`$ off-axis, and it can be seen that the image is offset by $`1.1^{}`$ from the center of the camera.
The image center can be used to measure the arrival direction of the burst. In both cases the light distribution shows a circular image. In the case of the image in Figure 3, the fluence is two times higher than in Figure 2 and a smoothly decreasing “halo” can be seen. The light beyond the central plateau ($`0.3^{}`$ in radius) is caused mainly by the multiple-scattering of relatively low energy electrons. This halo is not easily recognizable in Figure 2 (where the burst has a lower fluence), because the amount of light is comparable to the noise fluctuations from the night-sky background. The structure of the image can be described by its circular shape and its characteristic radius. The image shape is described here using a combination of the parameters $`Width`$ and $`Length`$ (Hillas 1985). The $`Eccentricity`$ of an image, characterizing its circular shape, is defined by: $`Eccentricity=\sqrt{1-Width^2/Length^2}`$. A perfectly circular image would have an $`Eccentricity`$ equal to zero. The radial extent of the images is described by $`Radius`$, defined by: $`Radius=(Width+Length)/2`$. The $`Radius`$ and $`Eccentricity`$ distributions for wavefront events (individual $`\gamma `$-rays of 200 MeV-5 GeV sampled from a power-law distribution with a differential spectral index of -2.5) are shown in Figure 4b and Figure 5a, respectively. The $`Radius`$ of the images is well defined and substantially bigger than for most cosmic-ray showers. A selection of images with $`Radius`$ $`>0.70^{}`$ would reject most cosmic-ray images. The $`Eccentricity`$ distribution peaks at 0.2 (Figure 5a) which corresponds to mostly circular images, establishing their circular shape. Given the $`Radius`$ and $`Eccentricity`$ distribution of recorded cosmic-ray showers (dotted line in Figure 4b, 5a), a strong background suppression can be achieved in the search for multi-photon-initiated cascades.
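The shape selection just described can be sketched in a few lines. The $`Radius>0.70^{}`$ cut is taken from the text; the eccentricity cut value of 0.5 below is our illustrative choice (the text only states that the wavefront distribution peaks at 0.2), and the function names are ours:

```python
import math

# Shape cuts on Hillas-style image parameters (Width <= Length assumed).
def eccentricity(width, length):
    return math.sqrt(1.0 - (width / length) ** 2)

def radius(width, length):
    return (width + length) / 2.0

def is_wavefront_candidate(width, length,
                           min_radius=0.70, max_eccentricity=0.5):
    # extended AND nearly circular, as described for wavefront images
    return (radius(width, length) > min_radius and
            eccentricity(width, length) < max_eccentricity)

# a nearly circular, extended image passes the cut...
passes = is_wavefront_candidate(width=0.72, length=0.78)
# ...while a narrow elliptical cosmic-ray-like image does not
fails = is_wavefront_candidate(width=0.15, length=0.60)
```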
An additional feature that can be used is the relatively smooth light distribution, which is very different from most cosmic-ray shower images. The images from wavefront events reflect the fact that many showers contribute to an image: their light distribution is extremely smooth. The smoothness of an image can be quantified, e.g., by calculating the R.M.S. of the light content of all pixels.

### 3.2 Angular resolution

The image center position provides an estimate of the true arrival direction of a wavefront event. The angle $`\mathrm{\Theta }`$ is the difference between the reconstructed and the true arrival direction in degrees. Figure 5b shows the $`\mathrm{\Theta }^2`$ distribution of simulated bursts with each burst containing energies between 0.2 - 5 GeV sampled from a power law $`\mathrm{E}^{-2.5}`$. The reconstruction accuracy also depends on the total amount of light collected and therefore on the fluence of the burst. The angular resolution $`\sigma _\mathrm{\Theta }`$ is defined here so that 72% of the image centers would fall within a radius of $`\sigma _\mathrm{\Theta }`$. The resolution for a burst with a fluence of $`1.5\times 10^{-8}\mathrm{erg}/\mathrm{cm}^2`$ is $`\sigma _\mathrm{\Theta }=0.12^{}`$. However, for a burst with a fluence of $`6.0\times 10^{-8}\mathrm{erg}/\mathrm{cm}^2`$ the resolution is $`\sigma _\mathrm{\Theta }=0.06^{}`$ and improves approximately with the square root of the fluence.

### 3.3 Timing characteristics

Images of wavefront events have a characteristic shape, but even so, image analysis might not remove the background from cosmic-ray induced showers completely. The pulse shape of the Cherenkov light pulse provides an additional signature to identify and distinguish multi-photon-initiated cascades from single-particle-initiated air showers. Pulse shapes from cosmic-ray air showers are typically a few nanoseconds wide. Multi-particle-initiated showers from bursts show a minimum time scale of at least 40 ns.
We have used the Whipple Observatory 10 m telescope to record pulse shapes of cosmic-ray induced air showers utilizing a 4-channel digital oscilloscope (Hewlett-Packard 54540A) with a 500 MHz sampling rate. Four channels were used to record pulses from phototubes which were spread out over an area of $`0.5^{}\times 0.5^{}`$ in the focal plane. The trigger required all four channels, to ensure that the system would only record pulses from fairly extended images, similar to multi-photon-initiated events. Smaller images can be distinguished by imaging, e.g., by measuring their $`Radius`$ and $`Eccentricity`$. The oscilloscope readout was initiated whenever four channels exceeded a threshold of 30 mV with a time overlap of at least 10 ns. The length of each record was chosen to be 2 microseconds with a time resolution of 4 nanoseconds. The recording system, including the photomultiplier, cables and amplifiers used, was sensitive to pulse widths ranging from 10 ns up to several hundred ns. Figure 6a shows the pulse shape of a typical Cherenkov light flash recorded with the Whipple Observatory 10 m telescope. For comparison we show (Figure 6b) the simulated pulse profiles from a multi-photon-initiated cascade from a 100 ns burst of 500 MeV $`\gamma `$-rays in two different pixels: in the center of the image (solid line) and a pixel $`1^{}`$ off-center (dashed line). The pulse profiles of the multi-photon-initiated cascade are broad and only slightly shifted with respect to each other. The fluence for the simulated wavefront event is $`1.1\times 10^{-7}\mathrm{ergs}/\mathrm{cm}^2`$, about 7 times higher than the sensitivity limit of the technique using the Whipple Observatory 10 m telescope. It is important to point out that our measurement gives a limit for the background expected for a burst sensitivity of $`1.1\times 10^{-7}\mathrm{ergs}/\mathrm{cm}^2`$. Operating at a lower threshold might imply a higher background.
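A minimal sketch of the pulse-width discrimination idea: compute the F.W.H.M. of a sampled pulse and flag pulses longer than the 40 ns minimum quoted above for multi-particle-initiated showers. The function and the threshold wiring are our illustration, not the actual analysis code:

```python
def fwhm(samples, dt_ns):
    """Full width at half maximum of a sampled pulse, with linear
    interpolation between samples; dt_ns is the sampling interval."""
    peak = max(samples)
    half = peak / 2.0
    rise = fall = None
    for i in range(1, len(samples)):
        # first upward half-maximum crossing
        if rise is None and samples[i - 1] < half <= samples[i]:
            rise = i - 1 + (half - samples[i - 1]) / (samples[i] - samples[i - 1])
        # last downward half-maximum crossing
        if samples[i - 1] >= half > samples[i]:
            fall = i - 1 + (samples[i - 1] - half) / (samples[i - 1] - samples[i])
    return (fall - rise) * dt_ns

# a crude triangular pulse with 100 ns F.W.H.M., sampled every 4 ns
pulse = [max(0.0, 1.0 - abs(i - 50) / 25.0) for i in range(100)]
width = fwhm(pulse, 4.0)
is_burst_candidate = width > 40.0   # cosmic-ray pulses are a few ns wide
```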
Figure 7 shows the distribution of pulse widths for a data sample consisting of 10,000 events taken during 6 hours of observation time. The longest pulse recorded shows a F.W.H.M. of 33 nanoseconds. It is important to notice that the pulse widths presented here are broadened by 180 feet of coax cable (RG-58). The intrinsic widths of Cherenkov pulses are somewhat shorter. This clearly indicates that pulse profiles provide excellent background discrimination against cosmic rays. Note that even with this relatively simple set-up a sensitivity of $`1.1\times 10^{-7}\mathrm{ergs}/\mathrm{cm}^2`$ for bursts of 500 MeV $`\gamma `$-rays would be reached.

### 3.4 Other sources of background

A second background showing similar time profiles to those of multi-photon-initiated cascades could arise from fluorescence light of ultra-high-energy cosmic rays (UHECR) at E $`>10^{16}`$ eV. Although rare, they could constitute a background of slow pulses. The recorded image of the event will help to reject fluorescence events: the image would appear as an extended band through the camera, as opposed to a circular flat image from a wavefront of multiple $`\gamma `$-rays. The pulse profiles of fluorescence light depend on the impact parameter and the arrival direction of the UHECR shower (Baltrusaitis et al. 1985). For the pixellation of the Whipple camera ($`0.25^{}`$), the pulse width of a UHECR shower ranges from 70 ns to 350 ns for an impact parameter of 5 km and 1 km, respectively. Fluorescence light events can be distinguished from wavefront events by the average arrival time of photons (center of the pulse) in various pixels across the field of view. They differ according to the geometrical time-of-flight difference between the telescope and different parts of the shower.
As a consequence, the pulses in different pixels of a fluorescence event should be substantially shifted with respect to each other along the shower axis, whereas the average arrival times of pulses from a wavefront event have a small intrinsic time spread and a circularly symmetric arrival time pattern. Therefore, it is expected that even with a single telescope, rare fluorescence events could be eliminated. Light flashes from meteors and lightning have to be considered as a potential source of background. The time constant of faint meteors is of order 10 msec or greater (Cook et al. 1980) and is not in the range of microsecond bursts. Lightning pulses are in the range of tens to hundreds of microseconds (Krider 1999).

## 4 Trigger criteria

The properties of images from wavefront events are vastly different from those of typical $`\gamma `$-ray images, for which imaging Cherenkov telescopes are usually optimized. Images from TeV $`\gamma `$-ray primaries exhibit a small angular spread, requiring a trigger sensitive to an elliptical image extending over an area of $`0.15^{}\times 0.30^{}`$ in the field-of-view. Thus Cherenkov telescopes often have a trigger requirement of a two-fold (four-fold for high resolution cameras) coincidence. The large angular extent of wavefront images puts a very different requirement on the trigger geometry: covering a solid angle of $`1.5^{}`$ in diameter. The limiting factor in both cases is fluctuations from the night-sky background light. The signal-to-noise ratio needs to be optimized in order to achieve the highest sensitivity. In the case of wavefront events, where the image is bright within the central $`1.5^{}`$, the highest signal-to-noise ratio would be achieved by triggering on the total light covering the central $`1.5^{}`$ of the image. A high-fold coincidence over pixels could be used to trigger efficiently on wavefront events, helping to reduce random triggers arising from night-sky background light fluctuations.
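The high-fold coincidence idea can be sketched as follows. The pixel count, the noise model and the burst model below are our illustrative assumptions, not the actual Whipple trigger electronics:

```python
import math
import random

# Require at least `fold` pixels above the night-sky mean by `nsigma`
# standard deviations within one integration window.
def wavefront_trigger(pixel_counts, sky_mean, sky_sigma, fold=40, nsigma=3.0):
    threshold = sky_mean + nsigma * sky_sigma
    return sum(1 for c in pixel_counts if c > threshold) >= fold

random.seed(1)
sky_mean = 20.0                      # mean night-sky photoelectrons / pixel
sky_sigma = math.sqrt(sky_mean)      # Poisson-like fluctuations
# pure night sky over a 331-pixel camera
sky_only = [random.gauss(sky_mean, sky_sigma) for _ in range(331)]
# burst: a bright central plateau of ~50 pixels on top of the night sky
burst = [c + (30.0 if i < 50 else 0.0) for i, c in enumerate(sky_only)]

fires_on_sky = wavefront_trigger(sky_only, sky_mean, sky_sigma)
fires_on_burst = wavefront_trigger(burst, sky_mean, sky_sigma)
```

Because roughly 0.13% of pixels exceed a $`3\sigma `$ threshold by chance, a 331-pixel camera yields well under one accidental pixel per window on average, so a 40-fold coincidence suppresses accidental triggers very strongly.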
Using a pixellation of $`0.25^{}`$ (see Figure 2), a high-fold coincidence of 40 pixels would provide a reasonable trigger condition. Also, the timing characteristics of a trigger providing good sensitivity for wavefront events differ from those for TeV $`\gamma `$-ray observations. Single $`\gamma `$-ray detection uses a typical coincidence time of 10 nanoseconds. Wavefront event recording would be based on an integration time scale on the order of 100 nanoseconds up to a few microseconds, depending on the putative astrophysical burst time scale. It is important to point out that in the case of wavefront detection an integration of the signal over the burst time scale is most efficient to increase the signal-to-noise ratio at the trigger level. To search for astrophysical phenomena whose emission time scale is uncertain, a trigger operating at several different time scales in parallel is necessary, similar to the technique used for the Fly’s Eye detector (Baltrusaitis et al. 1985).

## 5 Sensitivity

The detection of bursts using the imaging technique as described in this paper involves two steps: triggering on the Cherenkov light flash associated with the multi-photon-initiated cascade and discriminating a wavefront event from cosmic-ray showers. Both requirements impact the sensitivity at a given energy and burst time scale. The sensitivity for a Whipple-type 10 m telescope equipped with a $`4.8^{}`$ field-of-view camera with 331 pixels is estimated (see also Quinn et al. 1999). The trigger threshold for the detection of short bursts is a function of the fluence of the burst, expressed in $`\mathrm{ergs}/\mathrm{cm}^2`$. The fluence is the product of the energy of the incoming particles and the number of particles per unit area impinging on the upper atmosphere. In order to trigger on a wavefront event we require 40 pixels to exceed the night-sky background fluctuations by $`3\sigma `$.
This not only prevents triggering on night-sky background fluctuations; it also ensures a good image reconstruction. Figure 8 shows the fluence sensitivity as a function of energy for 100 ns and 1 $`\mu \mathrm{s}`$ burst time scales. For comparison with previous efforts we also show the sensitivity of the EGRET detector. EGRET had a sensitivity for bursts lasting for 600 ns, where it records multiple events within one readout cycle. We have assumed here a collection area of $`0.15\mathrm{m}^2`$ and a minimum of 5 $`\gamma `$-rays to be detected. Over the energy range of 300 MeV to 1 GeV the sensitivity of the wavefront technique could exceed EGRET’s sensitivity by a factor of 100 to 500 for 100 ns bursts. The energy threshold for the detection of wavefront events is limited at low energies by the multiple-scattering angle and by the Cherenkov radiation threshold for electrons, 90 MeV at 20 km atmospheric height. This results in a natural barrier for the atmospheric Cherenkov technique. We have limited our simulations to energies between 200 MeV and 5 GeV.

## 6 Summary

We have shown that sub-GeV $`\gamma `$-ray bursts lasting for $`>`$ 100 nanoseconds to microseconds could be efficiently detected using a single ground-based imaging Cherenkov telescope. The technique described is based on previous attempts to detect multi-photon-initiated cascades from short bursts. However, we show for the first time that the angular Cherenkov light distribution together with the pulse shape can be used to advantage to search for short bursts with a single imaging telescope. Measurements of Cherenkov pulse shapes of cosmic-ray induced showers indicate that pulse shapes from multi-photon-initiated cascades are well separated from background showers. A search for microsecond bursts would use this criterion as a first filter.
If events with long pulse durations were found, image analysis could verify whether those events are consistent with the very distinct image shapes of a multi-photon-initiated cascade. The image also contains valuable information about the arrival direction, with an angular resolution of $`0.06^{}`$ to $`0.12^{}`$, depending on the fluence. The fluence sensitivity of the Whipple telescope with a microsecond trigger exceeds EGRET’s sensitivity by more than two orders of magnitude. In addition, arrays of telescopes could be used to further improve this technique. In contrast to air showers, wavefront events would appear identical in the field-of-view of arrays of telescopes with a typical spacing of $`\sim 100`$ m. Single-particle-initiated air showers show a parallactic displacement because of the different distances to the shower core. In view of several proposed next generation detectors (VERITAS, HESS; for an overview see Krennrich 1999b), the implementation of this technique in telescope arrays could provide the highest fluence sensitivity of any existing $`\gamma `$-ray detector for microsecond bursts at sub-GeV to several-GeV energies. This research is supported by grants from the U.S. Department of Energy.
# On a Toeplitz Determinant Identity of Borodin and Okounkov

Estelle L. Basor<sup>1</sup><sup>1</sup>1Supported by National Science Foundation grant DMS–9970879. and Harold Widom<sup>2</sup><sup>2</sup>2Supported by National Science Foundation grant DMS–9732687.

> In this note we give two other proofs of an identity of A. Borodin and A. Okounkov which expresses a Toeplitz determinant in terms of the Fredholm determinant of a product of two Hankel operators. The second of these proofs yields a generalization of the identity to the case of block Toeplitz determinants.

The authors of the title proved in an elegant identity expressing a Toeplitz determinant in terms of the Fredholm determinant of an infinite matrix which (although not described as such) is the product of two Hankel matrices. The proof used combinatorial theory, in particular a theorem of Gessel expressing a Toeplitz determinant as a sum over partitions of products of Schur functions. The purpose of this note is to give two other proofs of the identity. The first uses an identity of the second author for the quotient of Toeplitz determinants in which the same product of Hankel matrices appears, and the second, which is more direct and extends the identity to the case of block Toeplitz determinants, consists of carrying the first author’s collaborative proof of the strong Szegö limit theorem one step further. We begin with the statement of the identity of , changing notation slightly. If $`\varphi `$ is a function on the unit circle with Fourier coefficients $`\varphi _k`$ then $`T_n(\varphi )`$ denotes the Toeplitz matrix $`(\varphi _{i-j})_{i,j=0,\mathrm{\dots },n-1}`$ and $`D_n(\varphi )`$ its determinant. Under general conditions $`\varphi `$ has a representation $`\varphi =\varphi _+\varphi _{-}`$ where $`\varphi _+`$ (resp. $`\varphi _{-}`$) extends to a nonzero analytic function in the interior (resp. exterior) of the circle.
We assume that $`\varphi `$ has geometric mean 1, and normalize $`\varphi _\pm `$ so that $`\varphi _+(0)=\varphi _{-}(\infty )=1`$. Define the infinite matrices $`U_n`$ and $`V_n`$ acting on $`\ell ^2(𝐙^+)`$, where $`𝐙^+=\{0,1,\mathrm{\dots }\}`$, by $$U_n(i,j)=(\varphi _{-}/\varphi _+)_{n+i+j+1},\qquad V_n(i,j)=(\varphi _+/\varphi _{-})_{-n-i-j-1}$$ and the matrix $`K_n`$ acting on $`\ell ^2(\{n,n+1,\mathrm{\dots }\})`$ by $$K_n(i,j)=\sum _{k=1}^{\infty }(\varphi _{-}/\varphi _+)_{i+k}(\varphi _+/\varphi _{-})_{-k-j}.$$ Notice that $`K_n`$ becomes $`U_nV_n`$ under the obvious identification of $`\ell ^2(\{n,n+1,\mathrm{\dots }\})`$ with $`\ell ^2(𝐙^+)`$. It is easy to check that, aside from a factor $`(-1)^{i+j}`$ which does not affect its Fredholm determinant, the entries of $`K_n`$ are the same as given by the integral formula (2.2) of . The formula of Borodin and Okounkov is $$D_n(\varphi )=Z\,det(I-K_n),$$ (1) where $$Z=\mathrm{exp}\{\sum _{k=1}^{\infty }k(\mathrm{log}\varphi )_k(\mathrm{log}\varphi )_{-k}\}=\underset{n\to \infty }{\mathrm{lim}}D_n(\varphi ).$$ (The last identity is the strong Szegö limit theorem.) This identity is especially useful for obtaining refined asymptotic results as $`n\to \infty `$. Two versions of (1) were proved in . One was algebraic and was an identity of formal power series and the other was analytic and assumed that the regions of analyticity of $`\varphi _\pm `$ included neighborhoods of the unit circle although, as the authors point out, an approximation argument can be used to extend the range of validity. The requirements for our proofs are that $`\mathrm{log}\varphi _\pm `$ be bounded and $`\sum _{k=-\infty }^{\infty }|k||(\mathrm{log}\varphi )_k|^2<\infty `$.
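Identity (1) lends itself to a direct numerical spot-check; the following sketch is our addition, not part of the original note. For the scalar symbol $`\varphi (z)=\mathrm{exp}(t(z+1/z))`$ one has $`\varphi _+=e^{tz}`$, $`\varphi _{-}=e^{t/z}`$, geometric mean 1, and $`Z=e^{t^2}`$, and the kernel $`K_n`$ decays fast enough that a modest truncation suffices:

```python
import cmath
import math

t = 0.5
N = 256  # grid size for Fourier coefficients; the symbols are entire

def fourier(f):
    """Return a cached evaluator for the Fourier coefficients of f."""
    vals = [f(cmath.exp(2j * math.pi * m / N)) for m in range(N)]
    cache = {}
    def coeff(k):
        if k not in cache:
            cache[k] = sum(v * cmath.exp(-2j * math.pi * k * m / N)
                           for m, v in enumerate(vals)) / N
        return cache[k]
    return coeff

phi = fourier(lambda z: cmath.exp(t * (z + 1 / z)))
ratio = fourier(lambda z: cmath.exp(t * (1 / z - z)))  # phi_- / phi_+
rinv = fourier(lambda z: cmath.exp(t * (z - 1 / z)))   # phi_+ / phi_-

def det(M):
    """Determinant via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    n, d = len(M), 1.0 + 0j
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

n, trunc = 4, 24  # size of D_n and truncation order for K_n
Dn = det([[phi(i - j) for j in range(n)] for i in range(n)])
# K_n(n+i, n+j) = sum_{k>=1} (phi_-/phi_+)_{n+i+k} (phi_+/phi_-)_{-n-k-j}
K = [[sum(ratio(n + i + k) * rinv(-n - k - j) for k in range(1, trunc))
      for j in range(trunc)] for i in range(trunc)]
IK = [[(1 if i == j else 0) - K[i][j] for j in range(trunc)]
      for i in range(trunc)]
Z = math.exp(t * t)
assert abs(Dn - Z * det(IK)) < 1e-8
```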
<sup>3</sup><sup>3</sup>3The bounded functions $`f`$ satisfying $`\sum _{k=-\infty }^{\infty }|k||f_k|^2<\infty `$ form a Banach algebra under a natural norm and for any such $`f`$ the Hankel matrix $`(f_{i+j})`$ acting on $`\ell ^2(𝐙^+)`$ is Hilbert-Schmidt. Thus if $`\mathrm{log}\varphi _\pm `$ belong to this algebra so do $`\varphi _{-}/\varphi _+`$ and $`\varphi _+/\varphi _{-}`$ and it follows that $`U_n`$ and $`V_n`$ are Hilbert-Schmidt so $`K_n`$ is trace class. Moreover the Szegö limit theorem holds for such $`\varphi `$. See or, for this and a lot more, .

First proof

To state the relevant result of we define the vectors $`U_n\delta `$ and $`V_n\delta `$ in $`\ell ^2(𝐙^+)`$ by $$U_n\delta (i)=(\varphi _{-}/\varphi _+)_{n+i},\qquad V_n\delta (i)=(\varphi _+/\varphi _{-})_{-n-i}.$$ (These are not the results of acting on a vector $`\delta `$ by the operators $`U_n`$ and $`V_n`$ since $`-1\notin 𝐙^+`$, but the notation suggests this.) The result is the following proposition. If $`I-U_nV_n`$ is invertible then so is $`T_n(\varphi )`$ and $$\frac{D_{n-1}(\varphi )}{D_n(\varphi )}=1-((I-U_nV_n)^{-1}U_n\delta ,V_n\delta ),$$ (2) where the inner product denotes the sum of the products of the components. The formula appears on p. 341 of in different notation. To derive (1) from this we assume temporarily that $`I-U_nV_n`$ is invertible for all $`n`$ (and therefore so is $`I-V_nU_n`$) and compute the upper-left entry of $`(I-V_{n-1}U_{n-1})^{-1}`$ in two different ways. This entry equals the upper-left entry of $`(I-K_{n-1})^{-1}`$, and Cramer’s rule says that the inverse of the entry equals $$\frac{det(I-K_{n-1})}{det(I-K_n)}.$$ On the other hand, there is a general formula which says that if one has a $`2\times 2`$ block matrix $`\left(\begin{array}{cc}A& B\\ C& D\end{array}\right)`$ then the upper-left block of its inverse equals $`(A-BD^{-1}C)^{-1}`$. Here $`A`$ and $`D`$ are square and the various inverses are assumed to exist. In our case the large matrix is $`I-V_{n-1}U_{n-1}`$ and $`A`$ is $`1\times 1`$.
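The quoted block-inverse fact is easy to verify numerically; here is a small pure-Python check (our addition, not part of the note) for a $`3\times 3`$ matrix with $`A`$ of size $`1\times 1`$:

```python
# Check: for M = ((A, B), (C, D)) with A scalar, the upper-left entry
# of M^{-1} equals (A - B D^{-1} C)^{-1}.
def inv3(M):
    """Inverse of a 3x3 matrix via the adjugate formula."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[x / det for x in row] for row in adj]

M = [[ 2.0, 0.5, -1.0],
     [ 0.3, 1.5,  0.2],
     [-0.4, 0.1,  1.2]]
A = M[0][0]
B = M[0][1:]                      # 1x2 block
C = [M[1][0], M[2][0]]            # 2x1 block
D = [[M[1][1], M[1][2]],
     [M[2][1], M[2][2]]]
detD = D[0][0]*D[1][1] - D[0][1]*D[1][0]
Dinv = [[ D[1][1]/detD, -D[0][1]/detD],
        [-D[1][0]/detD,  D[0][0]/detD]]
DinvC = [Dinv[0][0]*C[0] + Dinv[0][1]*C[1],
         Dinv[1][0]*C[0] + Dinv[1][1]*C[1]]
schur = A - (B[0]*DinvC[0] + B[1]*DinvC[1])   # A - B D^{-1} C
upper_left = inv3(M)[0][0]
assert abs(upper_left - 1.0 / schur) < 1e-12
```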
It is easy to see that $$A=1-(V_n\delta ,U_n\delta ),\quad D=I-V_nU_n,\quad C=-V_nU_n\delta ,\quad B=-U_nV_n\delta ,$$ the last interpreted as a row vector. The formula says that the inverse of the upper-left entry of the inverse equals $$1-(V_n\delta ,U_n\delta )-((I-V_nU_n)^{-1}V_nU_n\delta ,U_nV_n\delta )$$ $$=1-(V_n\delta ,U_n\delta )-(U_n(I-V_nU_n)^{-1}V_nU_n\delta ,V_n\delta )$$ $$=1-(V_n\delta ,U_n\delta )-(\left[(I-U_nV_n)^{-1}-I\right]U_n\delta ,V_n\delta )$$ $$=1-((I-U_nV_n)^{-1}U_n\delta ,V_n\delta ),$$ which is the right side of (2). Thus we have established $$\frac{D_{n-1}(\varphi )}{D_n(\varphi )}=\frac{det(I-K_{n-1})}{det(I-K_n)},$$ which shows that (1) holds for some constant $`Z`$. That $`Z`$ is as stated follows by letting $`n\to \infty `$. To remove the restriction that $`I-U_nV_n`$ be invertible for all $`n`$, we introduce a complex parameter $`\lambda `$ and replace $`\varphi `$ by $`\varphi ^\lambda =\mathrm{exp}(\lambda \mathrm{log}\varphi )`$. Then both sides of (1) are entire functions of $`\lambda `$ and are equal when $`\lambda `$ is so small that $`\|\varphi _{-}^\lambda /\varphi _+^\lambda -1\|_\infty <1`$ and $`\|\varphi _+^\lambda /\varphi _{-}^\lambda -1\|_\infty <1`$, for then all $`U_n`$ and $`V_n`$ have operator norm less than 1 so all $`I-U_nV_n`$ are invertible. Since the two sides of (1) are equal for small $`\lambda `$ they are equal for all $`\lambda `$.

Second proof

Denote by $`T(\varphi )`$ the semi-infinite Toeplitz matrix $`(\varphi _{i-j})_{i,j\ge 0}`$. Then $`T(\varphi _{-})`$ and $`T(\varphi _+)`$ are upper-triangular and lower-triangular respectively. It follows that if $`P_n`$ is the diagonal matrix whose first $`n`$ diagonal entries are all 1 and whose other entries are 0 then $$P_nT(\varphi _+)=P_nT(\varphi _+)P_n,\qquad T(\varphi _{-})P_n=P_nT(\varphi _{-})P_n.$$ Observe that $`T_n(\varphi )`$ is the upper-left $`n\times n`$ block of $`P_nT(\varphi )P_n`$.
Using the above, we can write <sup>4</sup><sup>4</sup>4It is an easy general fact that if $`\psi _1\in \overline{H^{\infty }}`$ or $`\psi _2\in H^{\infty }`$ then $`T(\psi _1\psi _2)=T(\psi _1)T(\psi _2)`$. In particular $`T(\varphi _\pm )`$ are invertible with inverses $`T(\varphi _\pm ^{-1})`$. Recall that $`H^{\infty }`$ consists of all $`\psi \in L^{\infty }`$ such that $`\psi _k=0`$ when $`k<0`$. $$P_nT(\varphi )P_n=P_nT(\varphi _+)T(\varphi _+^{-1})T(\varphi )T(\varphi _{-}^{-1})T(\varphi _{})P_n$$ $$=P_nT(\varphi _+)P_nT(\varphi _+^{-1})T(\varphi )T(\varphi _{-}^{-1})P_nT(\varphi _{-})P_n.$$ Now the upper-left blocks of $`P_nT(\varphi _\pm )P_n`$ are $`T_n(\varphi _\pm )`$, which are triangular matrices with diagonal entries all 1, by our assumed normalization. Therefore they have determinant one, so $`D_n(\varphi )`$ equals the determinant of the upper-left block of $`P_nT(\varphi _+^{-1})T(\varphi )T(\varphi _{-}^{-1})P_n`$. Set $$T(\varphi _+^{-1})T(\varphi )T(\varphi _{-}^{-1})=A.$$ Then the determinant of the upper-left block of $`P_nAP_n`$ equals $`det(P_nAP_n+Q_n)`$, where $`Q_n=I-P_n`$. Now $`A`$ is invertible and differs from $`I`$ by a trace class operator (we shall see this in a moment). Therefore $$det(P_nAP_n+Q_n)=det\,A\,det(A^{-1}P_nAP_n+A^{-1}Q_n)$$ $$=det\,A\,det(A^{-1}(I-Q_n)AP_n+A^{-1}Q_n)=det\,A\,det(P_n-A^{-1}Q_nAP_n+A^{-1}Q_n)$$ $$=det\,A\,det(P_n+A^{-1}Q_n)\,det(I-Q_nAP_n),$$ since $`P_nQ_n=0`$. The determinant of the operator on the right equals one, again since $`P_nQ_n=0`$. Moreover $$det(P_n+A^{-1}Q_n)=det(I-(I-A^{-1})Q_n)=det(I-Q_n(I-A^{-1})Q_n).$$ We have shown $$D_n(\varphi )=det\,A\,det(I-Q_n(I-A^{-1})Q_n).$$ (3) It remains to show that this is the same as (1). First, $`A`$ is similar via the invertible operator $`T(\varphi _+)`$ to $`T(\varphi )T(\varphi _{-}^{-1})T(\varphi _+^{-1})`$. Therefore $$det\,A=det\,T(\varphi )T(\varphi _{-}^{-1})T(\varphi _+^{-1})=det\,T(\varphi )T(\varphi ^{-1}).$$ (4) This is a representation of the constant $`Z`$ in the strong Szegö limit theorem .
Next $$A^{-1}=T(\varphi _{-})T(\varphi )^{-1}T(\varphi _+)=T(\varphi _{-})T(\varphi _+^{-1})T(\varphi _{-}^{-1})T(\varphi _+)=T(\varphi _{-}/\varphi _+)T(\varphi _+/\varphi _{-}).$$ (5) Because $`\varphi _{-}/\varphi _+`$ and $`\varphi _+/\varphi _{-}`$ are reciprocals we see that the $`i,j`$ entry of this matrix equals $$\delta _{i,j}-\sum _{k=1}^{\infty }(\varphi _{-}/\varphi _+)_{i+k}(\varphi _+/\varphi _{-})_{-k-j},$$ and so $`det(I-Q_n(I-A^{-1})Q_n)`$ equals $`det(I-K_n)`$. (This also shows that $`A^{-1}`$ differs from $`I`$ by a trace class operator, so the same is true of $`A`$.) This gives (1) with $`Z=det\,T(\varphi )T(\varphi ^{-1})`$. Let us see how to modify this argument for the case of block Toeplitz determinants, where $`\varphi `$ is a matrix-valued function. We assume the factorization $`\varphi =\varphi _+\varphi _{-}`$, the order of the factors being important now, where $`\varphi _\pm ^{\pm 1}`$ belong to the algebra described in footnote 3 and $`\varphi _+^{\pm 1}\in H^{\infty },\varphi _{-}^{\pm 1}\in \overline{H^{\infty }}`$. Then (3) is derived without change as is formula (4) for $`det\,A`$ since $`\varphi ^{-1}=\varphi _{-}^{-1}\varphi _+^{-1}`$. But (5) no longer holds because it would require $`\varphi =\varphi _{-}\varphi _+`$, which does not hold.
But if we also assume a factorization $`\varphi =\psi _{-}\psi _+`$, with $`\psi _\pm `$ having properties analogous to those of $`\varphi _\pm `$, we can replace (5) by $$A^{-1}=T(\varphi _{-})T(\psi _+^{-1})T(\psi _{-}^{-1})T(\varphi _+)=T(\varphi _{-}\psi _+^{-1})T(\psi _{-}^{-1}\varphi _+).$$ Now $`\varphi _{-}\psi _+^{-1}`$ and $`\psi _{-}^{-1}\varphi _+`$ are mutual inverses and we deduce that in this case (1) holds with $`Z=det\,T(\varphi )T(\varphi ^{-1})`$ and $`K_n`$ the matrix, thought of as acting on $`\ell ^2(\{n,n+1,\mathrm{\dots }\})`$, with $`i,j`$ entry $$\sum _{k=1}^{\infty }(\varphi _{-}\psi _+^{-1})_{i+k}(\psi _{-}^{-1}\varphi _+)_{-k-j}.$$

Acknowledgement

The authors thank Alexander Its who, after seeing and the first proof presented here, asked the first author whether there was a direct proof. The second proof was the result.

References

E. Basor and J. W. Helton, A new proof of the Szegö limit theorem and new results for Toeplitz operators with discontinuous symbol, J. Operator Th. 3 (1980) 23–39.

A. Borodin and A. Okounkov, A Fredholm determinant formula for Toeplitz determinants, preprint, math.CA/9907165.

A. Böttcher and B. Silbermann, Analysis of Toeplitz Operators, Akademie-Verlag, Berlin, 1989.

H. Widom, Toeplitz determinants with singular generating functions, Amer. J. Math. 95 (1973) 333–383.

H. Widom, Asymptotic behavior of block Toeplitz matrices and determinants II, Adv. Math. 21 (1976) 1–29.

| Department of Mathematics | | Department of Mathematics |
| --- | --- | --- |
| California Polytechnic State University | | University of California |
| San Luis Obispo, CA 93407 USA | | Santa Cruz, CA 95064 USA |
| ebasor@calpoly.edu | | widom@math.ucsc.edu |

AMS Subject Classification: 47B35
# A thermostable trilayer resist for niobium lift-off

## I Introduction

The field of single charge tunneling phenomena , mesoscopic superconductivity or superconducting devices has opened a new demand in high performance nanofabrication techniques with superior self-alignment capabilities. A common technique makes use of shadow evaporation through a suspended stencil mask prepared by electron beam lithography. This technique allows self-alignment with nanometer-scale accuracy as required for the fabrication of ultrasmall tunnel junctions. Excellent results are currently obtained using conventional techniques based upon masks with PolyMethylMethAcrylate (PMMA) as the e-beam sensitive resist. The high resolution stencil mask is formed on top of a sub-layer with a well controlled undercut. The mask is easy to remove by a lift-off process. In the conventional two-layer process the upper layer is a thin layer of PMMA (casting solvent Chlorobenzene) while the bottom layer is a copolymer PMMA-MAA containing MethAcrylic Acid (MAA) monomers. These MAA co-monomers make the copolymer soluble in polar solvents such as acetic acid and provide good chemical selectivity with respect to the top layer. Alternately, the stencil mask can be made of a thin film of germanium or silicon patterned using an additional PMMA top layer (trilayer process). Such a process, as well as more complex alternative processes with more than three layers, generally ensures excellent control of each intermediate step. It is widely used for the fabrication of devices made of soft materials such as aluminum, gold, copper, chromium, permalloy, etc. Structures of high complexity can be realized by multiple angle evaporation using one or two rotation axes. Unfortunately, this technique cannot be extended to refractory metals such as niobium, molybdenum, tungsten or tantalum, which require both high vacuum and high evaporation temperatures.
As a consequence of the excessive heat produced by the electron beam evaporation, conventional resist masks are mechanically unstable. In addition, contamination due to the resulting outgassing of the resist degrades the electronic properties of the metal. It is a well known fact that the superconducting properties of niobium are extremely sensitive to a small amount of oxygen contamination. Various methods have been attempted to extend the shadow evaporation technique to refractory metals with more stable mask structures. In Ref. , a combination of a chromium mask with a metallic aluminum sub-layer was successfully used to fabricate arrays of micron-size niobium/niobium Josephson junctions. More recently, Harada et al developed a four-layer resist system composed of PMMA, a hard-baked photoresist, germanium and PMMA. This process was used successfully to fabricate submicron niobium/aluminum oxide/niobium superconducting single electron transistors. However, the measured critical temperature of the niobium electrode was far below that of the bulk material ($`9.2`$ K) and therefore the device could not reach optimum operation. In this paper, we describe a new process based upon a thermostable polymer, the Poly PhenyleneEtherSulfone (PES), which greatly improves the quality of the devices. We present a detailed comparison between the thermal characteristics of this polysulfone and those of the PMMA polymer. We show in particular why the latter should be avoided as a sub-layer resist. As a demonstrator for this process we have fabricated submicron niobium/copper/niobium Josephson junctions.

## II Selection of the thermostable base layer for the trilayer process

We have chosen to develop a new trilayer process with a thermostable polymer as the base layer and silicon (alternately germanium) as the high resolution stencil mask.
The thermal stability of the PMMA upper layer is irrelevant, since this layer only serves for patterning the silicon mask and is removed before the evaporation process. In order to select an alternative to PMMA (or PMMA-MAA) as the bottom layer, we have explored a number of thermostable polymers of the phenolic family. The first series of experiments was carried out on a PHS polymer (Poly ParaHydroxyStyrene) whose chemical formula is $`(CH_2CHX)_n`$ where X is the phenolic group. This polymer is a polystyrene with a hydroxyl function which ensures solubility in polar solvents. Three different molecular weights were used: 23,600, 30,000 and 109,000 g/mol. The powder was dissolved in ”Diglyme” (2-Methoxyethyl Ether). The glass transition temperature of this polymer is about 180°C. It can be safely used at temperatures below 240°C; above this temperature, cross-linking makes the polymer insoluble. Good results were obtained with this polymer as a bottom layer in the trilayer process. However, it turned out to be damaged by the solvents used for the development and rinsing of the PMMA upper layer. The best results were actually obtained with a polymer presenting a lower solubility: poly PhenyleneEtherSulfone (PES), originally commercially available under the name Victrex from ICI. This polymer is currently used as a thermostable organic material for industrial applications at 180°C. The chemical formula of PES is a sequence of aromatic groups attached to sulfur atoms. The monomer has the structure shown in Fig. 1a. The high thermal stability of PES is ensured by the aromatic groups. We have used a Poly PhenyleneEtherSulfone with a high molecular weight: PES Victrex 5003P (ICI). The PES is dissolved in N-methyl Pyrrolidone ($`10\%`$ w/w) to form a solution that yields a standard thickness of a few hundred nanometers after spinning. For comparison, the monomer of PMMA is shown in Fig. 1b. The chemical bond between MMA monomers is weak.
This is the reason both for the excellent properties of PMMA as a high-resolution, high-sensitivity electron beam resist and for its only moderate thermal stability. The chemical properties of PES are also very attractive, since this polymer exhibits good compatibility with the organic solvents used in the subsequent steps of the trilayer process. It is inert in the PMMA solvents - Chlorobenzene and Methyl-Iso-Butyl Ketone (MIBK) - and is insoluble in IsoPropyl Alcohol. Its most convenient solvents are DiMethyl Sulfoxide (DMSO) and N-Methyl Pyrrolidone (NMP). The latter is much less dangerous and volatile than Chlorobenzene, which is a standard solvent of PMMA. ## III Thermal characteristics of the base resist layer The thermal properties of the PMMA, PMMA-MAA and PES polymers were investigated using a TA Instruments 2950 Thermogravimetry Analyser (TGA) and a TA Instruments 2920 Differential Scanning Calorimetry (DSC) Analyser. The heating ramp for all experiments was 10°C/min. The enhanced thermostability of PES is illustrated in Fig. 2 (lower graph), which demonstrates that its weight loss is negligible at temperatures up to 400°C, while a significant loss of weight is observed in PMMA at only 150°C. This temperature can easily be reached during e-beam evaporation of a refractory metal. The peaks in the derivative curve (dashed line) indicate the activation temperatures for chemical transformations such as anhydride formation or hydrocarbon outgassing, which are important at moderate temperatures for PMMA. PES, by contrast, presents extremely high chemical stability between 275°C (the hard-bake temperature) and 400°C; above this temperature, its properties degrade sharply. Fig. 3 shows a Differential Scanning Calorimetry (DSC) thermogram of the two polymers. We found glass transition temperatures of 235°C for PES and 121°C for PMMA, respectively, again showing the superior thermal properties of PES.
In general, PMMA as well as its copolymers are well known to exhibit poor thermal properties. In particular, the PMMA-MAA copolymer exhibits a low $`T_G`$ (133°C for 8.5 $`\%`$ w/w of acid) and a high rate of weight loss at moderate temperatures. As a result, very strong outgassing of the resist bottom layer takes place in the vicinity of the device, even though no pressure increase was recorded by the vacuum gauge in our experiment. Indeed, we have observed that niobium structures evaporated in an ultra-high vacuum chamber (base pressure of the chamber $`10^{-10}`$ mbar, sample 25 cm from the heated niobium target) through a PMMA/PMMA-MAA bilayer were not superconducting at $`1K`$. We believe that the niobium film traps moisture, oxygen and hydrocarbons outgassed from the heated polymer sub-layer. We should mention that, with careful limitation of the heat radiation, e.g. using copper shields, sequential evaporation and liquid nitrogen cooling of the substrate holder, intermediate superconducting critical temperatures could be achieved with a PMMA sub-layer. Additional improvement could even be obtained by encapsulating the PMMA resist with either silicon or aluminum oxide. As discussed below, the process based upon the thermostable PES sublayer is free of these constraints. ## IV Process Implementation The optimized trilayer fabrication process is as follows. Firstly, the PES 5003P solution ($`10\%`$ w/w in NMP) was spin-coated for 5 min at 2000 rpm on a 2 inch silicon wafer to form the 300 nm thick bottom layer. After baking at 275°C for one minute on a hot-plate, a 40 nm thick silicon layer was e-beam evaporated at room temperature on the PES bottom layer. Alternatively, germanium could be used without any change in the rest of the process. Finally, an 85 nm thick PMMA layer ($`2\%`$ w/w PMMA 950K in chlorobenzene) was spun on top of the silicon.
This thin PMMA layer was then patterned by electron beam lithography using a modified Cambridge S240 Scanning Electron Microscope with homemade interfaces for e-beam writing. Subsequently, the PMMA layer was developed in a (1:3) solution of MIBK and IsoPropyl Alcohol (IPA) for 20 seconds. The unprotected silicon was then removed by Reactive Ion Etching (R.I.E.) in a Plassys MG200 reactor. The etching parameters were: 15 sec with 20 $`sccm`$ $`SF_6`$ at a pressure of $`2\times 10^{-2}`$ mbar, incident RF power 20 $`W`$, chamber at 15°C. We found it useful to place the substrate on a $`4\mathrm{"}`$ pure silicon wafer in order to increase the etching time and optimize its reproducibility. Depending on the desired depth of undercut, two alternative processes can be used to etch the PES bottom layer: * small undercut: To obtain undercuts below 50 nm we used a dry process consisting of an oxygen R.I.E. in the same reactor as above (but without the $`4\mathrm{"}`$ pure silicon wafer). The etching conditions were the following: 3 min, 20 sccm oxygen at a pressure of $`4\times 10^{-1}`$ mbar, incident RF power $`50W`$ and chamber temperature 15°C. An example of this dry process is shown in Fig. 4. This process can be extended to produce larger undercuts. However, residues of typical size 50 nm were usually obtained on the substrate. These residues are strongly resistant to R.I.E. and could only be eliminated using a wet process followed by an additional oxygen plasma. * large undercut: To obtain undercuts larger than 50 nm, a wet process based on DiMethyl Sulfoxide (DMSO) was found to be better. Here, excellent control of both the etching time and the temperature is extremely important, since the etch rate is thermally activated. The sample was dipped and shaken in DMSO solvent kept at its melting temperature (18.6°C). We found that the undercut obeys a quasi-linear dependence on the etching time (see Table I), with a typical rate of 28 $`nm/sec`$.
Ethanol was used both to stop the etching and to rinse the sample. An example of this wet process is shown in Fig. 5. Let us now discuss some practical points related to PES: Humidity sensitivity: The control of ambient humidity is crucial during spinning of the polysulfone. The relative humidity level in a standard clean-room is about $`50\%`$. Under these conditions, the resist surface after spinning appears grey with white spots due to local phase inhomogeneities. The upper limit appears to be $`25\%`$ relative humidity during the spin-coating step. A convenient solution consists of drying the atmosphere by blowing dry nitrogen into a small chamber surrounding the spinner. We use a 6 $`dm^3`$ cylindrical plexiglass chamber which contains a hygrometer, a pipe for the nitrogen flow and a small aperture on top to inject the resist onto the sample with a polypropylene syringe. The relative humidity is reduced to below $`15\%`$ in a few minutes. The nitrogen flow was maintained during spinning; this also accelerates the evaporation of the NMP solvent from the PES layer. A homogeneously colored surface is then easily obtained. Ageing sensitivity: Without special storage precautions (humidity-free room), trilayers have to be used within 3 months of preparation. After this time, the PES layer could no longer be undercut in the specific solvent given above. HydroFluoric acid sensitivity: The PES polymer shows excellent stability against HydroFluoric acid (HF) rinsing. This allows the silicon substrate surface to be prepared through the mask before the metal evaporation. HF rinsing was used to remove the native silicon oxide. It ensures a high-quality deposited metal, as it decreases hydrocarbon contaminants and chemically passivates the silicon surface. The major result was excellent adhesion of the thin lithographic structures after the final lift-off step.
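For planning purposes, the quasi-linear undercut law quoted for the wet process above can be inverted to pick a dip time. The sketch below is illustrative only: the function names are introduced here, and the 28 nm/s default is the typical rate from Table I, valid only for DMSO held at its 18.6 °C melting point.

```python
def undercut_nm(dip_time_s, rate_nm_per_s=28.0):
    """Quasi-linear undercut vs. dip time for the DMSO wet etch (Table I).

    The etch rate is thermally activated, so any bath temperature other
    than the 18.6 C melting point of DMSO needs a newly calibrated rate.
    """
    return rate_nm_per_s * dip_time_s

def dip_time_for_undercut_s(target_nm, rate_nm_per_s=28.0):
    # invert the model to choose a dip time for a desired undercut
    return target_nm / rate_nm_per_s
```

An undercut of about 140 nm, for example, then calls for a 5 s dip, comfortably in the large-undercut regime described above.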
The surface preparation consists of shaking the final stencil mask for 1 minute in a $`10\%`$ (by volume) HF solution, followed by a 10 minute rinse in deionized water to completely remove the acid. ## V Test device Various niobium devices have been fabricated using the process described above. The niobium was evaporated at room temperature from an electron beam gun in an ultra-high vacuum chamber. No deformation of the silicon mask was observed after the electron beam evaporation. The stencil mask was lifted off in NMP at 80°C for 10 min, followed by a few seconds of low-power ultrasound (NEY Ultrasonik 300). We measured the resistance of all these niobium structures in a four-probe configuration with standard lock-in techniques at low temperature. To validate the recipe, thin niobium wires were realized. We first designed a mask using the dry etching step described above. The geometry was a single metallic line 5 $`\mu `$m long, 0.3 $`\mu `$m wide and 60 nm thick. The superconducting critical temperature obtained was 7.2 K (see the first line of Table II). Another mask, fabricated using the wet etching step, was also tested. The structure was an array (size 10 $`\mu `$m x 5 $`\mu `$m, step 1 $`\mu `$m) of 0.15 $`\mu `$m wide lines. A 60 nm thick niobium film was evaporated through this stencil mask. The critical temperature obtained was 7.1 K, with a residual resistivity ratio (RRR) of 1.6. The same array with 0.35 $`\mu `$m wide lines exhibited a $`T_c`$ of 8.1 K (see lines 2 and 3 of Table II). These results may be compared to a measurement on a reference plain niobium film of the same thickness, for which $`T_c`$ is about 8 K with an RRR of 1.86. A geometry currently studied in mesoscopic physics is the superconductor - normal metal - superconductor (SNS) structure, where the N island is viewed as a ”quantum dot” coupled to superconducting electrodes through the Andreev process . We are studying such SNS junctions with highly transparent interfaces.
The mask shown in Fig. 5 allows the in-situ fabrication of self-aligned SNS junctions by shadow evaporation, using copper or palladium as the normal metal island and niobium as the superconductor. The alignment of the normal island is achieved with nanometer-scale resolution by the proper choice of the evaporation angle. High-quality S-N interfaces free of contamination were ensured, as the metals were evaporated within one cycle in an ultra-high vacuum (UHV) chamber. A typical sample with palladium as the normal metal is shown in Fig. 6. Systematic low-temperature measurements of niobium-copper-niobium SNS junctions have been carried out. Interface resistances were estimated to be below 0.2 $`\mathrm{\Omega }`$. The niobium lines were 0.3 $`\mu `$m wide and 60 nm thick, with a resistivity in the range of 17-20 $`\mu \mathrm{\Omega }`$.cm. The copper lines were 0.3 $`\mu `$m wide and 60 nm thick, with a resistivity of 4.3 $`\mu \mathrm{\Omega }`$.cm. In such an SNS junction the superconducting critical temperature was above 7 K. Because the alignment was made in-situ under high vacuum, this technique also allows excellent control of the interface between the central island and the external superconducting electrodes: from metallic contact (high-transparency barrier) to weak tunnel junction (low-transparency barrier). Masks with large undercuts (see Fig. 5) can also serve to elaborate niobium-based tunnel nano-junctions. Controlled oxidation of artificial or natural tunnel barriers can be performed in the UHV chamber between the two evaporations, ensuring both high-quality barriers and a high energy gap for the niobium.
The above recipe has been successfully tested on submicron niobium-copper mesoscopic structures, with excellent superconducting properties of the niobium film and excellent control of the interfaces. Using this technique we have also fabricated niobium micro-SQUID gradiometers. This process is very promising in the area of single-electron transistors, since it makes the shadow evaporation technique accessible to new materials with superior electronic properties. ## VII Acknowledgements We would like to acknowledge discussions with Th. Fournier, O. Buisson and H. Courtois. We would like to thank D. Mariolle and F. Martin for the S.E.M. micrographs taken at LETI-Grenoble and also D. Cousins for careful reading of the manuscript.
no-problem/9909/hep-th9909039.html
ar5iv
text
# Acknowledgement ## Acknowledgement One of the authors (R.B.) would like to thank the Alexander von Humboldt Foundation for providing financial support making this collaboration possible.
no-problem/9909/astro-ph9909190.html
ar5iv
text
# Density and Velocity Fields from the PSCz Survey ## 1 Introduction One of the motivations for the PSCz survey at its inception in 1992 was the huge effort going into peculiar velocity surveys; neither the QDOT nor the 1.2Jy surveys were deep and dense enough to provide a satisfactory model for the gravity field. Our goals were to (a) maximise sky coverage, and (b) obtain the best possible completeness and flux uniformity within well-defined area and redshift ranges. The survey consists of 15,000 IRAS galaxies and its sky coverage is 84%. The median depth is just $`8100\mathrm{km}\mathrm{s}^{-1}`$, although useful information is available out to $`30,000\mathrm{km}\mathrm{s}^{-1}`$ at high latitudes and $`15,000\mathrm{km}\mathrm{s}^{-1}`$ everywhere. A more detailed description of the survey specification is given in Saunders et al. (1999). The distribution of identified galaxies and the mask are shown in figure 1. The $`N(z)`$ distribution is shown in figure 2a. The selection function $`\psi (r)`$ (defined here as the expected density of galaxies seen in the survey as a function of distance in the absence of clustering) is derived using the methods of Mann et al. (1996) and shown in figure 2b. The uncertainties are less than 10% in the range $`10`$–$`300h^{-1}\mathrm{Mpc}`$. Knowledge of the selection function enables one to weight each galaxy properly and to construct the (redshift-space) density field. A 3D view of this is given in figure 3. Figure 4 shows the PSCz density field in the Supergalactic Plane after removing the effect of redshift-space distortions. A variable smoothing length, increasing linearly in the radial direction, has been used. The continuous line shows the $`\delta =0`$ contour (Branchini et al. 1999).
## 2 A New Method for Smoothing and Interpolating Galaxy Surveys For dynamical tests where we wish to compare the observed distribution of galaxies with observed peculiar velocities, it is necessary to estimate the mass distribution within any masked area. Previous attempts have involved filling the mask with a uniform distribution, or crudely interpolating or cloning the density on either side. A better way was pioneered by Lahav et al. (1994), who introduced Wiener filtering to produce a minimum-variance interpolation, given a prior estimate of the power spectrum and assuming linear theory. It is even feasible to perform such studies far into the nonlinear clustering regime by assuming that the field can be reasonably described by lognormal statistics, which is indeed supported by several theoretical assessments. In this case, fitting the log of the density field as a Fourier sum will lead to random phases and Gaussian Fourier amplitudes. The mean field for the interpolation will be the mean density. Our approach is to find the set of harmonics maximising the probability that the galaxies, assumed to be Poisson-sampled from the density field, are at the positions actually observed. So we have a set of galaxies at positions $`𝐫_m`$, a set of basis functions $`f_n`$, and a set of amplitudes $`a_n`$ which we wish to determine. The amplitude of the underlying density field at $`𝐫`$ is $$\rho (𝐫)=\mathrm{exp}\left[\sum _na_nf_n(𝐫)\right]$$ (1) and the log-likelihood for the whole survey as a function of $`\{a_n\}`$ is given by $$\mathrm{ln}\mathcal{L}(\{a_n\})=\mathrm{ln}\prod _m\rho (𝐫_m)=\sum _m\sum _na_nf_n(𝐫_m).$$ (2) The integral constraint that the total number of galaxies predicted by the density field equals the number actually observed is invoked via a Lagrange multiplier.
This yields $`N`$ equations $$\sum _mf_n(𝐫_m)=\int _Vf_n(𝐫)\psi (r)\mathrm{exp}\left[\sum _{n^{\prime }}a_{n^{\prime }}f_{n^{\prime }}(𝐫)\right]\mathrm{d}V,$$ (3) which state that, for each harmonic, the sum of its values at the galaxy positions is equal to the integral over the continuous density field. The equations are non-linear whenever the density field itself is, and we solve them by using the multidimensional Newton-Raphson technique. We have used a spherical harmonic expansion, and we have transformed the radial coordinate of the survey so as to make the selection function unity, rendering the shot noise constant everywhere in the final maps. We have added a simple regularisation term to the likelihood. It is not mandatory for the radial and angular parts to be separable, and we have used a ‘tapered’ mask in which the redshift out to which completeness is assumed is taken from Saunders et al. (1999). The resulting 3-D density field is shown on the PSCz web page http://www-astro.physics.ox.ac.uk/~wjs/pscz.html. A single shell is shown in figure 5. ## 3 The PSCz dipole The PSCz dipole has been investigated and presented by Rowan-Robinson et al. (1999) and Schmoldt et al. (1999a,b). We here present a somewhat different analysis: we have corrected for redshift-space distortions according to Valentine et al. (1999) and used the interpolation described above. We have weighted the gravity dipole by $`4\pi J_3\psi (r)/(1+4\pi J_3\psi (r))`$, as in Strauss et al. (1992) (and we have assumed that $`4\pi J_3=10^4h^{-3}\mathrm{Mpc}^3`$), to produce a minimum-variance cumulative dipole, given our knowledge of the power spectrum. The results are shown in figures 6a and 6b. The suppression amounts to a factor 1.1 at $`100h^{-1}\mathrm{Mpc}`$, 2 at $`180h^{-1}\mathrm{Mpc}`$ and 10 at $`300h^{-1}\mathrm{Mpc}`$. There is no evidence for any significant contribution to the dipole beyond 150 $`h^{-1}\mathrm{Mpc}`$, and the angular convergence is spectacular. The misalignment with the CMB dipole is $`20\mathrm{deg}`$.
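The maximum-likelihood interpolation of Sec. 2 is compact enough to demonstrate end to end. The sketch below is a toy one-dimensional version under stated assumptions: a cosine basis stands in for the spherical harmonics, the selection function is set to $`\psi =1`$, no regularisation term is included, and all numerical values are illustrative rather than those of the survey. Solving the stationarity condition (Eq. (3)) by Newton-Raphson then recovers the input density from the sampled positions alone.

```python
import numpy as np

# Toy 1D illustration of the maximum-likelihood fit of Sec. 2: galaxies are
# Poisson-sampled from rho(x) = exp(sum_n a_n f_n(x)) and the amplitudes a_n
# are recovered by Newton-Raphson.
rng = np.random.default_rng(1)
x_grid = np.linspace(0.0, 1.0, 2001)
dx = x_grid[1] - x_grid[0]

def basis(x):
    # f_0 = 1 carries the normalization; three cosine harmonics stand in
    # for the spherical-harmonic expansion used in the paper
    return np.vstack([np.ones_like(x)] + [np.cos(np.pi * n * x) for n in (1, 2, 3)])

def rho_of(a, x):
    return np.exp(a @ basis(x))

def integrate(y):
    # simple rectangle rule on the uniform grid (adequate for the demo)
    return y.sum(axis=-1) * dx

# draw synthetic "galaxy" positions from a known density by rejection sampling
a_true = np.array([4.0, 0.8, -0.5, 0.3])
rho_max = rho_of(a_true, x_grid).max()
x_gal = []
while len(x_gal) < 500:
    x = rng.uniform(0.0, 1.0)
    if rng.uniform(0.0, rho_max) < rho_of(a_true, np.array([x]))[0]:
        x_gal.append(x)
x_gal = np.array(x_gal)

# Newton-Raphson on ln L = sum_m sum_n a_n f_n(x_m) - int rho dx;
# its stationarity condition is exactly Eq. (3) with psi = 1
F = basis(x_grid)                      # shape (n_modes, n_grid)
data_term = basis(x_gal).sum(axis=1)   # sum_m f_n(x_m), left-hand side of Eq. (3)
a = np.zeros(4)
a[0] = np.log(float(len(x_gal)))       # start from the uniform density
for _ in range(50):
    rho = rho_of(a, x_grid)
    model_term = integrate(F * rho)                        # int f_n rho dx
    hess = integrate(F[:, None, :] * F[None, :, :] * rho)  # int f_n f_k rho dx
    a += np.linalg.solve(hess, data_term - model_term)     # Newton step
```

Because $`f_0=1`$ is in the basis, the $`n=0`$ component of Eq. (3) enforces the integral count constraint automatically, which is the role played by the Lagrange multiplier in the text.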
## 4 Model Velocity Fields We have applied several different methods to obtain a self-consistent model for the density and velocity fields from the PSCz dataset. All methods assume gravitational instability and linear biasing: some are based on linear theory (Branchini et al. 1999 and Schmoldt et al. 1999b), others on the Zel’dovich approximation (Valentine et al. and D’Mellow and Taylor, these proceedings), and others on the Least Action Principle (Sharpe et al. 1999, Nusser et al. 1999). Figure 7 shows the density and velocity fields in a slice along the Supergalactic Plane, reconstructed by Branchini et al. (1999). The dominant features in the velocity fields are the infall patterns towards the Great Attractor (-30,20), Perseus Pisces (50,-10), Cetus Wall (20,-50) and Coma (5,70). The most striking property, however, is the large-scale coherence of the velocity field, apparent as a long ridge along the Perseus Pisces - Virgo - Great Attractor - Shapley Region baseline. The same general features are found using the other dynamical methods. ## 5 The Value of $`\beta `$ The comparison between measured peculiar velocities and our PSCz model gravity field allows one to measure the $`\beta `$ parameter. Matching the amplitude of the PSCz and CMB dipoles yields $`\beta =0.54\pm 0.1`$, consistent with similar comparisons by Schmoldt et al. (1999a,b) and with the likelihood analysis of Branchini et al. (1999), which also uses the Mark III bulk flow, the PSCz predicted bulk flow and the local shear. Similar values are also found by Sharpe et al. (1999) by considering the dynamics of the PSCz galaxies in the Local Group’s neighborhood, and by Tadros et al. (1999) from considering the large-scale statistical distortion of the density field in redshift space. An estimate of $`\beta `$ to within $`10\%`$ can be achieved by independent comparisons between observed and predicted galaxy velocities.
An analysis along these lines, based on the observed velocities of the SFI catalog, is in progress. ## 6 Nonlinear Biasing The dense sampling of PSCz galaxies and the volume of the sample are large enough to measure the biasing relation. Narayanan et al. and Sigad et al. (these proceedings) have presented two independent methods for measuring the biasing relation, which they apply to the PSCz catalog. In both cases they detect deviations from the simple linear prescription which are well described by semi-analytic models of galaxy formation (Kauffman et al. 1999, Benson et al. 1999). Figure 8 shows the mean biasing relation for PSCz galaxies obtained by Sigad et al. . It is compared with semi-analytic predictions from Kauffman et al. (1999) in two different cosmological models. ## 7 Acknowledgements The PSC-z survey has only been possible because of the generous assistance from many people in the astronomical community. We are particularly grateful to John Huchra, Tony Fairall, Karl Fisher, Michael Strauss, Marc Davis, Raj Visvanathan, Luis DaCosta, Riccardo Giovanelli, Nanyao Lu, Carmen Pantoja, Tadafumi Takata, Tim Conrow, Mike Hawkins, Delphine Hardin, Mick Bridgeland, Renee Kraan-Kortweg, Amos Yahil, Alberto Caraminana, Esperanza Carrasco, Brent Tully, and the staff at IPAC and the INT, AAT, CTIO and INOAE telescopes. We have made very extensive use of the NED, LEDA and Simbad databases. ## 8 Data access Long and short versions of the catalogue, maskfiles, notes, and an expanded version of this paper will shortly be available via the PSCz web site (http://www-astro.physics.ox.ac.uk/~wjs/pscz.html).
no-problem/9909/cond-mat9909453.html
ar5iv
text
# Self-similar chain conformations in polymer gels ## Abstract We use molecular dynamics simulations to study the swelling of randomly end-cross-linked polymer networks in good solvent conditions. We find that the equilibrium degree of swelling saturates at $`Q_{eq}\sim N_e^{3/5}`$ for mean strand lengths $`\overline{N_s}`$ exceeding the melt entanglement length $`N_e`$. The internal structure of the network strands in the swollen state is characterized by a new exponent $`\nu =0.72\pm 0.02`$. Our findings are in contradiction to de Gennes’ $`c^{*}`$-theorem, which predicts $`Q_{eq}\sim N_s^{4/5}`$ and $`\nu =0.588`$. We present a simple Flory argument for a self-similar structure of mutually interpenetrating network strands, which yields $`\nu =7/10`$ and otherwise recovers the classical Flory-Rehner theory. In particular, $`Q_{eq}\sim N_e^{3/5}`$, if $`N_e`$ is used as the effective strand length. Polymer gels are soft solids governed by a complex interplay of the elasticity of the polymer network and the polymer/solvent interaction. They are sensitive to the preparation conditions and can undergo large volume changes in response to small variations of a control parameter such as temperature, solvent composition, pH or salt concentration. In this letter we reexamine a classical but still controversial problem of polymer physics , the equilibrium swelling of a piece of rubber in good solvent. Experimentally, gels have been studied extensively by combining thermodynamic and rheological investigations with neutron or light scattering . Here we use computer simulations , since they offer some advantages in the access to and the control over microscopic details of the network structure. We concentrate on the role of entanglements in limiting the swelling process and, in particular, the structure of the network strands in the swollen gel. Questions relating to the structural heterogeneity in our gels and the butterfly effect will be addressed in a future publication.
There are several theories for the swelling of polymer networks prepared from a melt of linear precursor chains. In the dry state of preparation, the network strands have Gaussian statistics, i.e. the mean square end-to-end distance is related to the average length, $`\overline{N_s}`$, by $`\langle r^2\rangle _{dry}\simeq b^2\overline{N_s}^{2\nu }`$, where $`\nu =1/2`$ and $`b`$ is the monomer radius. The same relation also holds for all internal distances, leading to the characteristic structure factor $`S(q)\sim q^{-1/\nu }`$ for the scattering at wave vector $`q`$ from a fractal object. The classical Flory-Rehner theory writes the gel free energy $`F`$ as a sum of two independent terms: a free energy of mixing with the solvent (favoring swelling and estimated from the Flory-Huggins theory of semi-dilute solutions of linear polymers) and an elastic free energy (due to the affine stretching of the network strands which are treated as Gaussian, concentration-independent, linear entropic springs). Minimizing $`F`$ yields $`Q_{eq}\sim \overline{N_s}^{3/5}`$ for the equilibrium degree of swelling. The Flory-Rehner theory implies that the structure factor of long paths through the network is of the form $`S(q)\sim q^{-2}`$ both locally, where the chains are unperturbed, and on large scales, where they deform affinely ($`\langle r^2\rangle _{eq}\simeq \langle r^2\rangle _{dry}Q_{eq}^{2/3}`$) with the outer dimensions of the sample. The stretching should be visible in the crossover region around $`q\simeq 2\pi /(b\overline{N_s}^{1/2})`$ with $`S(q)\sim q^{-1}`$. More recent treatments are based on the scaling theory of semi-dilute solutions of linear polymers . Locally, inside of so-called “blobs”, the chains behave as isolated, self-avoiding walks with $`\nu \simeq 3/5`$. A blob containing $`g`$ monomers has a typical diameter $`\xi \simeq bg^\nu =bQ^{3/4}`$. Chains with $`N\gg g`$ can again be regarded as ideal, however, with a renormalized chain length $`N/g`$ and a renormalized monomer size $`\xi `$.
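The minimization behind the Flory-Rehner result quoted above can be written out in a few lines. In a minimal, dimensional version (prefactors dropped, $`v`$ an excluded-volume parameter introduced here only for the estimate), the free energy per monomer is the sum of the affine elastic term and the binary-contact mixing term, $$\frac{F}{k_BT}\sim \frac{Q^{2/3}}{N_s}+\frac{v}{b^3}\frac{1}{Q},$$ where the first term follows from $`\langle r^2\rangle \simeq \langle r^2\rangle _{dry}Q^{2/3}`$ for affinely stretched Gaussian strands and the second from the contact probability per monomer, which scales as the concentration $`\sim 1/Q`$. Setting $`\partial F/\partial Q=0`$ gives $$\frac{Q^{-1/3}}{N_s}\sim \frac{v}{b^3}\frac{1}{Q^2}\quad \Rightarrow \quad Q_{eq}\sim \left(\frac{v}{b^3}\right)^{3/5}N_s^{3/5}.$$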
Quite interestingly, refining the Flory-Rehner ansatz along these lines recovers the classical result $`Q_{eq}\sim N_s^{3/5}`$. Such models imply for the structure factor a crossover from $`S(q)\sim q^{-5/3}`$ to $`S(q)\sim q^{-1}`$ at $`q\simeq 2\pi /\xi _{eq}`$, where the blob diameter at equilibrium swelling is given by $`\xi _{eq}\simeq bQ_{eq}^{3/4}`$ and much smaller than the strand extension. An open point is the length scale on which the systems begin to deform affinely with the macroscopic strain. The two theories mentioned above consider linear entropic springs, where due to the global connectivity (disregarding fluctuation effects) this length scale is given by the strand size. A drastically different point of view has been advanced by de Gennes, who argues that the swelling is limited by the local connectivity, which only begins to be felt at the overlap concentration $`c^{*}\simeq \overline{N_s}/(b\overline{N_s}^\nu )^3`$ of a semi-dilute solution of linear polymers of average length $`\overline{N_s}`$, corresponding to $`Q_{eq}\sim \overline{N_s}^{4/5}`$. As a motivation for his $`c^{*}`$–theorem , de Gennes considers crosslinking in dilute solution, but postulates that the same results also hold for swelling of networks prepared by cross-linking dense melts. The $`c^{*}`$–theorem predicts $`S(q)\sim q^{-5/3}`$ for $`q>2\pi /(b\overline{N_s}^{3/5})`$ as well as unusual elastic properties due to the non-linear elasticity of the network strands . Both, the Flory-Rehner theory and the $`c^{*}`$-theorem are supported by part of the experimental evidence . While the results are very sensitive to the details of the preparation process, it seems well confirmed that highly cross-linked networks behave according to Flory’s prediction $`Q_{eq}\sim N_s^{3/5}`$. On the other hand, SANS experiments and computer simulations of lightly cross-linked gels show the weak dependence of the strand extensions on the degree of swelling predicted by the $`c^{*}`$-theorem.
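The overlap estimate behind the $`c^{*}`$-theorem invoked above can be made explicit: if swelling stops when the strands just fill space at their dilute-solution size, then, with $`c^{*}\simeq \overline{N_s}/(b\overline{N_s}^\nu )^3`$ as above and a melt concentration $`c_{melt}\simeq b^{-3}`$, $$Q_{eq}\sim \frac{c_{melt}}{c^{*}}\sim \overline{N_s}^{3\nu -1}\simeq \overline{N_s}^{4/5}\quad (\nu \simeq 3/5).$$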
To our knowledge, all SANS studies have concentrated on the low $`q`$ Guinier regime (i.e. the radius of gyration of the strands) and no particular attention has been paid to the local chain structure. As in earlier investigations of polymer melts and networks , we used a coarse-grained polymer model where beads interacting via a truncated, purely repulsive Lennard-Jones (LJ) potential are connected by FENE springs. With $`ϵ`$, $`\sigma `$ and $`\tau `$ as the LJ units of energy, length and time, the equations of motion were integrated by a velocity-Verlet algorithm with a weak local coupling to a heat bath at $`k_BT=1ϵ`$. The potentials were parametrized in such a way that chains were effectively uncrossable, i.e. the network topology was conserved for all times. In our studies we did not simulate the solvent explicitly, but rather used vacuum, which can be considered as a perfect solvent for our purely repulsive (athermal) network chains. The relevant length and time scales for chains in a melt are the average bond length, $`\sqrt{\langle l^2\rangle }=0.965(5)\sigma `$, the mean-square end-to-end distance $`\langle r^2(N)\rangle _{dry}=1.74(2)l^2N`$ , the melt entanglement length, $`N_e=33(2)`$ monomers, and the Rouse time $`\tau _{Rouse}(N)\simeq 1.35\tau N^2`$ . In dilute solutions, single chains adopt self-avoiding conformations with $`\langle r^2(N)\rangle \simeq 1.8l^2N^{2\times 3/5}`$. Using this model, it is possible to study different network structures including randomly cross-linked, randomly end-cross-linked and end-linked melts as well as networks with the regular connectivity of a crystal lattice . Here we investigate end-cross-linked model networks created from an equilibrated monodisperse melt with $`M`$ precursor chains of length $`N`$ at a melt-like density $`\rho _{dry}=0.85\sigma ^{-3}`$ by connecting the end monomers of the chains to a randomly chosen adjacent monomer of a different chain.
This method yields defect-free tri-functional systems with an exponential distribution of strand lengths $`N_s`$ with an average of $`\overline{N_s}=N/3`$. The Gaussian statistics of the strands remains unperturbed after crosslinking . The systems studied range from $`M/N=3200/25`$ (i.e. the average strand size $`\overline{N_s}=8.3`$) up to $`M/N=500/700`$ ($`\overline{N_s}=233`$), some systems being as large as $`MN=5\times 10^5`$. All simulations used periodic boundary conditions in a cubic box and were performed at constant volume. Starting from $`V_{dry}=MN/\rho _{dry}`$, the size of the simulation box was increased in small steps alternating with equilibration periods of at least 5 entanglement times $`\tau _R(N_e)\simeq 1400\tau `$. The isotropic pressure $`P`$ was obtained from the microscopic virial tensor and the condition $`P_{eq}\simeq 0`$ was used to define equilibrium swelling with $`Q_{eq}=V_{eq}/V_{dry}`$. Tests with a part of the networks using open boundaries did not show any significant changes of the results. We investigated the equilibrium swelling of our model networks as a function of the average strand length $`N_s`$. Fig. 1 shows $`Q_{eq}^{-1}N_e^{3/5}`$ as a function of the average strand length, i.e. of $`(N_e/N_s)^{3/5}`$. Our results for short strands are compatible with the Flory-Rehner prediction $`Q_{eq}\sim \overline{N_s}^{3/5}`$, but do not allow for an independent determination of the exponent. In contradiction to this theory we observe a saturation of the equilibrium swelling degree for large $`\overline{N_s}`$. The crossover occurs for $`\overline{N_s}\simeq N_e`$. The extrapolated maximal degree of equilibrium swelling $`Q_{max}(\overline{N_s}\rightarrow \infty )=6.8(3)`$ is close to the swelling degree of an ideal Flory-gel with average strand length $`N_e`$: $`1.15N_e^{3/5}=9.5`$, where the prefactor is empirically obtained from the slope of the straight line in Fig. 1.
In contrast, the corresponding estimate based on the $`c^{*}`$-theorem, $`Q_{eq}\approx (b^3/\sigma ^3)N_e^{4/5}\approx 36`$, is clearly too high ($`b=1.3\sigma `$ is the statistical segment length in good solvent). Our interpretation is that, to a first approximation, entanglements act as chemical cross-links in limiting the swelling of polymer networks. The situation is analogous to an “olympic gel” of topologically linked ring polymers. In contrast to solutions of linear polymers, systems containing trapped entanglements cannot be arbitrarily diluted. The chain conformations at equilibrium swelling are best characterized by their structure factor $`S(q)`$. Fig. 2 shows $`S(q)`$ of the precursor chains within the network for our most weakly cross-linked $`N=700`$ sample. We have chosen the Kratky representation ($`q^2S(q)`$ vs. $`q`$) to show the deviation from the Gaussian case ($`S(q)\propto q^{-2}`$) more clearly. The observed power law form $`S(q)\propto q^{-1/\nu }`$ is characteristic of fractals and common in polymeric systems. However, the observed exponent $`\nu =0.72(2)`$ is unexpected. Furthermore, the fractal structure is observed over the $`q`$-range $`2\sigma \lesssim \frac{2\pi }{q}\lesssim 15.5\sigma \approx bN_e^{0.72}`$, suggesting that the mean extension of the effective strands of length $`N_e`$ is the only relevant length scale in the problem. For smaller $`q`$ we see the onset of the expected scattering of a Gaussian chain consisting of randomly oriented parts of length $`bN_e^{0.72}`$. Our precursor chains (even $`N=700`$) are too short to see it clearly developed. Since the scattering from the precursor chains could be affected by polydispersity effects, we have investigated the conformations of the network strands as a function of their contour length $`N_s`$. For high $`q`$ all structure functions fall on top of each other and show the same fractal structure with $`S(q)\propto q^{-1/0.72(2)}`$ (Fig. 2). The complementary Fig. 
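The numerical estimates quoted above can be cross-checked directly from the stated values $`N_e=33`$ and $`b=1.3\sigma `$ (a plain arithmetic sketch, lengths in units of $`\sigma `$):

```python
# Cross-checks of the estimates quoted in the text, with N_e = 33 and b = 1.3
# (lengths in units of sigma).
N_E, B_SEG = 33.0, 1.3

flory_gel = 1.15 * N_E ** (3.0 / 5.0)        # ideal Flory-gel swelling, quoted as 9.5
cstar_est = B_SEG ** 3 * N_E ** (4.0 / 5.0)  # c*-theorem estimate, quoted as ~36
q_range_top = B_SEG * N_E ** 0.72            # upper end of the fractal q-range, ~15.5

print(flory_gel, cstar_est, q_range_top)
```

The small residual mismatches (9.37 vs. 9.5 and 16.1 vs. 15.5) lie within the quoted uncertainties $`N_e=33(2)`$ and $`\nu =0.72(2)`$.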
3 shows a log-log plot of the mean square strand extension $`r^2_{eq}(N_s)`$ versus strand length. In agreement with the results for the structure functions, we find a power law $`r^2_{eq}\approx b^2N_s^{2\times 0.72}`$ for strands which are shorter than the effective strand length $`N_e`$ and which therefore deform sub-affinely. Long strands, on the other hand, deform affinely with $`r^2_{eq}=r^2_{dry}Q_{eq}^{2/3}`$. Clearly, the results of our simulations do not agree with the predictions of any of the theories presented in the introduction. While the neglect of entanglements seems fairly simple to repair by treating them as effective cross-links (with $`N_e`$ supplanting the average strand length $`\overline{N_s}`$ ), the fractal structure of the strands and the exponent $`\nu =0.72(2)`$ come as a surprise. In the following, we discuss a possible explanation for the stronger swelling of network strands ($`\nu \approx 0.72`$) than of single chains ($`\nu \approx 3/5`$) under good solvent conditions. We begin by recalling Flory’s argument for the typical size $`R_F\approx bN^\nu `$ of a single polymer chain of length $`N`$ and statistical segment size $`b`$ in a good solvent. The equilibrium between an elastic energy $`R_F^2/(b^2N)`$ (in units of $`k_BT`$) of a Gaussian chain stretched to $`R_F`$ and a repulsive energy $`b^dR_F^d(N/R_F^d)^2`$ due to binary contacts between monomers in $`d`$ dimensions leads to $$\nu =\frac{3}{d+2}.$$ (1) The simplest models for swollen networks have the regular connectivity of a crystal lattice. In agreement with the $`c^{*}`$-theorem they adopt equilibrium conformations with strand extensions of the order of $`R_F`$ . However, these systems are hardly good models for the swelling process of networks prepared in the dry state, since the hypothetical initial state at melt density has an unphysical local structure with average strand extensions $`R_FQ^{-1/3}\approx bN_s^{1/3}`$ as in dense globules. 
In contrast, if the corresponding semi-dilute solution is compressed, the chains shrink only weakly from $`R_F`$ to the Gaussian coil radius $`R\approx bN_s^{1/2}`$. Instead, they become highly interpenetrating, with $`n\approx N_s^{1/2}`$ of them sharing a volume of $`R^3`$. Moreover, at least the simplest model for highly cross-linked networks prepared in the dry state, $`n\approx N_s^{1/2}`$ mutually interpenetrating regular networks with strand extensions of the order of $`R`$ , cannot possibly comply with the $`c^{*}`$-theorem, if one disregards macroscopic chain separation: either the strands extend to $`R_F`$, leading to internal concentrations of $`c^{*}N_s^{1/2}`$, or the systems swell to $`c^{*}`$, in which case the strands are stretched to $`R_FN_s^{1/6}`$. The same conclusions should hold for any network without too many defects, where the global connectivity forces neighboring chains to share the same volume independent of the degree of swelling. The consequences can be estimated using a simple Flory argument. Instead of a single polymer, we now consider a group of chains which can swell but not de-interpenetrate, i.e. $`n\approx N^{1/2}`$ chains of length $`N`$ which span a volume $`R_{FR}^3`$. The equilibrium between the elastic energy $`nR_{FR}^2/(b^2N)`$ and the repulsive energy $`b^dR_{FR}^d(nN/R_{FR}^d)^2`$ leads to $`\nu `$ $`=`$ $`{\displaystyle \frac{4+d}{4+2d}}`$ (2) Quite interestingly, in three dimensions this local argument reproduces the results of the classical Flory-Rehner theory of gels, with $`Q_{eq}\propto N^{d/(d+2)}=N^{3/5}`$ and $`R_{FR}\approx Q^{1/d}bN^{1/2}\propto N^{7/10}`$. However, in analogy to the Flory argument for single chains, Eq. (2) should also apply to subchains of length $`G`$ with $`1\ll G<N`$ which share their volume with a correspondingly smaller number of other subchains. 
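Both Flory balances above can be verified symbolically by equating the logarithms of the elastic and repulsive energies; the sketch sets $`b`$ and $`k_BT`$ to 1 and takes $`n=N^{d/2-1}`$ overlapping chains in the network case (which reduces to $`n\approx N^{1/2}`$ in $`d=3`$):

```python
import sympy as sp

lR, lN, d = sp.symbols('lR lN d', positive=True)

# Single chain, Eq. (1): elastic R^2/N balances repulsive N^2/R^d
# (log-space balance, with lR = log R and lN = log N).
nu1 = sp.solve(sp.Eq(2 * lR - lN, 2 * lN - d * lR), lR)[0] / lN
assert sp.simplify(nu1 - 3 / (d + 2)) == 0            # Eq. (1)

# Interpenetrating strands, Eq. (2): n = N^(d/2 - 1) chains share the volume;
# elastic n*R^2/N balances repulsive n^2*N^2/R^d.
ln_n = (d / 2 - 1) * lN
nu2 = sp.solve(sp.Eq(ln_n + 2 * lR - lN, 2 * ln_n + 2 * lN - d * lR), lR)[0] / lN
assert sp.simplify(nu2 - (4 + d) / (4 + 2 * d)) == 0  # Eq. (2)

print(nu1.subs(d, 3), nu2.subs(d, 3))                 # 3/5 and 7/10
```

In $`d=3`$ the two exponents are $`3/5`$ and $`7/10`$, i.e. exactly the single-chain and network values discussed in the text.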
In particular, the local degree of swelling, $`\propto G^{1/5}`$, should be sub-affine, and the exponent $`\nu =7/10`$ should characterize the entire local chain structure up to the length scale of the effective strand length, $`N_e`$. This is in excellent agreement with the main findings of our simulations (see Figs. 2 and 3). Before we conclude, some additional remarks are in order: (i) For swelling in a Theta-solvent, the analogous scaling argument yields $`Q\propto N^{3/8}`$ in agreement with previous theories and experiments and predicts local chain structures characterized by $`\nu =5/8`$. (ii) Eq. (2) can also be derived along the lines of from an equilibrium between the elastic energy of blob chains and the osmotic pressure of a semi-dilute polymer solution. Note that the appropriate blob size is a function of the size $`G`$ of the subchains under consideration and that isolated chain behavior is only expected below the original correlation length $`\xi _{prep}`$ for systems prepared by cross-linking semi-dilute solutions. (iii) Sommer, Vilgis and Heinrich have argued that the effective inner fractal dimension $`d_i`$ of a polymer network is larger than the value $`d_i=1`$ for linear chains, leading to stronger swelling with $`\nu =\frac{d_i+2}{d+2}`$. While the correction goes in the right direction, it is difficult to explain a strand-length-independent effective inner fractal dimension of $`d_i=1.5`$ as an effect of the local connectivity. (iv) However, such effects may well be important in systems with a sufficient number of defects such as dangling ends or clusters. If the global connectivity is weak, the chains may locally de-interpenetrate, leading to a behavior which agrees much better with the $`c^{*}`$-theorem . In summary, we have used large scale computer simulations and scaling arguments to investigate the swelling behavior of defect-free model networks prepared at melt density. 
We find that for networks with average strand lengths $`\overline{N_s}>N_e`$ the swelling is limited by entanglements to $`Q_{eq}\propto N_e^{3/5}`$ and that the strands locally exhibit a fractal structure characterized by an exponent $`\nu \approx 7/10`$, which should be directly observable in neutron scattering experiments. We acknowledge the support of the Höchstleistungsrechenzentrum Jülich and the Rechenzentrum of the MPG in München and thank G. S. Grest for discussions and a careful reading of the manuscript.
# Detailed Topography of the Fermi Surface of Sr2RuO4 \[ ## Abstract We apply a novel analysis of the field and angle dependence of the quantum-oscillatory amplitudes in the unconventional superconductor Sr<sub>2</sub>RuO<sub>4</sub> to map its Fermi surface (FS) in unprecedented detail, and to obtain previously inaccessible information on the band dispersion. The three quasi-2D FS sheets not only exhibit very diverse magnitudes of warping, but also entirely different dominant warping symmetries. We use the data to reassess recent results on $`c`$-axis transport phenomena. \] The layered perovskite oxide Sr<sub>2</sub>RuO<sub>4</sub> has attracted considerable experimental and theoretical attention since the discovery of superconductivity in this compound five years ago . Fermi liquid behaviour of several bulk transport and thermodynamic properties was observed in early work , and the existence of mass enhanced fermionic quasiparticles was demonstrated explicitly by the observation of quantum oscillations in the magnetization (de Haas-van Alphen or dHvA effect) and resistivity . Quantitative similarities between the Fermi liquid in <sup>3</sup>He and that in Sr<sub>2</sub>RuO<sub>4</sub> hint at the possibility of p-wave superconducting pairing . Evidence supporting such a scenario has come from the existence of a very strong impurity effect , a temperature independent Knight shift into the superconducting state , a muon spin rotation study indicating broken time reversal symmetry and a number of other experiments . Taken together, these favour spin triplet superconductivity with a p-wave vector order parameter and a nodeless energy gap. 
Sr<sub>2</sub>RuO<sub>4</sub> appears to be an ideal material in which to investigate unconventional superconductivity in real depth: of all known compounds exhibiting this phenomenon, Sr<sub>2</sub>RuO<sub>4</sub> offers the best prospects of a complete understanding of the normal state properties within standard Fermi liquid theory . This would provide a solid foundation for all theoretical models. It has become clear, however, that further progress will require very detailed knowledge of the electronic structure of Sr<sub>2</sub>RuO<sub>4</sub>. For example, one of the most successful current theories relies on the assumption that the FS consists of two sheets derived from bands with strong Ru d<sub>xz,yz</sub> character and one with strong Ru d<sub>xy</sub> character . These are supposed to form weakly coupled subsystems with very different pairing interactions . This theory has successfully predicted non-hexagonal vortex lattice structures , but it is less clear whether, in its simplest form, it will provide a satisfactory explanation for recent measurements of the temperature dependence of the density of normal excitations in the superconducting state . Of central importance to the understanding of Sr<sub>2</sub>RuO<sub>4</sub> is the origin of the quasiparticle mass enhancement and how it relates to magnetic fluctuations. Recent observations of cyclotron resonances give the promise of separating the various contributions to the enhancement, but identifying the type of resonance and the extent to which electron interactions are affecting the observed masses requires more detailed knowledge of the Fermi surface than has been available to date. Clues to the magnetic fluctuation spectrum have come from nuclear magnetic resonance and neutron scattering experiments and from calculations . Both ferro- and antiferromagnetic fluctuations appear to be present, the latter due to nesting of the FS. 
Angle-dependent magnetoresistance oscillations (AMRO) can give information about the in-plane topography of the FS and the extent to which it is nested. However, with three bands crossing the Fermi level, the interpretation of AMRO data on Sr<sub>2</sub>RuO<sub>4</sub> has been somewhat ambiguous. Progress on all the issues discussed above requires high resolution, sheet-by-sheet knowledge of the FS of Sr<sub>2</sub>RuO<sub>4</sub>. As shown in previous studies , the dHvA effect is ideally suited to this, as data from individual FS sheets can be identified without ambiguity. For this reason, we have performed a comprehensive angular dHvA study in Sr<sub>2</sub>RuO<sub>4</sub>. Full analysis of the data required extension and generalisation of previously reported theoretical treatments of dHvA in nearly 2D materials. As a result, we present an unprecedentedly detailed picture of the warping of each FS sheet. A series of recent measurements of $`c`$-axis transport phenomena are discussed in light of the full experimentally determined dispersion. Quantum-oscillatory effects in a crystal arise from the quantization of the cyclotron motion of the charge carriers in a magnetic field $`𝑩`$. For three-dimensional metals, only the extremal cyclotron orbits in $`𝒌`$-space lead to a macroscopic magnetization, and the quantitative treatment has been known for decades . For a quasi-2D metal, the FS consists of weakly corrugated cylinders. While such weak distortions have little effect on the cross-sectional areas which determine the dHvA frequency, they still affect the interference of the magnetization contributions of different parts of the FS and therefore lead to a characteristic amplitude reduction. Conversely, as we will show, analysis of the experimental dHvA amplitude behaviour can reveal fine details of the topography of the underlying Fermi cylinders. In the most basic case, a simple corrugation of the Fermi cylinder leads to a beating pattern in the magnetization. 
Analysis of the beats for on-axis fields gives some information about the magnitude of the warping, but further conclusions have to rely on assumptions about the precise form of the corrugation . At the next level of approximation, the FS dispersion can be determined within the traditional scope of the Yamaji effect . A preliminary attempt to extract information on Sr<sub>2</sub>RuO<sub>4</sub> in this way, however, has not achieved agreement between the data and the predictions of the simple model . We show that a much more extensive treatment is needed that (a) considers Fermi cylinder corrugation of arbitrary shape and (b) if necessary, goes beyond the extremal orbit approximation. Also, to extract meaningful information from experiments, one needs data of much higher quality than has been available to date. In the following, we will briefly present the results of a full quantitative treatment of the oscillatory magnetization for a Fermi cylinder that is warped arbitrarily but still compliant with the Brillouin zone (BZ) symmetry of Sr<sub>2</sub>RuO<sub>4</sub>; details will be presented elsewhere . It is convenient to parameterize the corrugation of the cylinder through an expansion of the local Fermi wavevector, $$k_F(\varphi ,\kappa )=\underset{\begin{array}{c}\mu ,\nu \geq 0\\ \mu \mathrm{even}\end{array}}{\sum }k_{\mu \nu }\mathrm{cos}\nu \kappa \{\begin{array}{cc}\mathrm{cos}\mu \varphi \hfill & (\mu \mathrm{mod}4=0)\hfill \\ \mathrm{sin}\mu \varphi \hfill & (\mu \mathrm{mod}4=2)\hfill \end{array}$$ (1) (see Table I for illustration). Here, $`\kappa =ck_z/2`$ where $`c`$ is the height of the body-centered tetragonal unit cell, and $`\varphi `$ is the azimuthal angle of $`𝒌`$ in the $`(k_x,k_y)`$-plane. The $`\beta `$- and $`\gamma `$-cylinders are centered in the BZ; symmetry allows nonzero $`k_{\mu \nu }`$ only for $`\mu `$ divisible by 4. 
The $`\alpha `$-cylinder runs along the corners of the BZ and has nonzero $`k_{\mu \nu }`$ only for $`\nu `$ even and $`\mu `$ divisible by 4, or for $`\nu `$ odd and $`\mu \mathrm{mod}4=2`$. The average Fermi wavevector is given by $`k_{00}`$, which is assumed to be much larger than the higher-order $`k_{\mu \nu }`$. One also has to consider the effect of the magnetic field: spin-splitting drives the spin-up and spin-down surfaces apart; the parameters $`k_{\mu \nu }`$ can be taken to expand weakly and linearly in the field, as in $`k_{\mu \nu }^{\pm }=k_{\mu \nu }\pm \chi _{\mu \nu }B`$. Ordinary spin-splitting is described by $`\chi _{00}`$ alone, while higher-order contributions would correspond to the underlying electronic band structure being flatter at some points on the Fermi surface than at others. Indeed, this anomalous spin-splitting will prove essential for describing the $`\alpha `$-sheet in Sr<sub>2</sub>RuO<sub>4</sub>. If a magnetic field is applied at polar and azimuthal angles $`\theta `$ and $`\varphi `$, the Fermi surface cross-sectional area perpendicular to the field which cuts the cylinder axis at $`\kappa `$ is given by the Bessel function expansion $`a^{\pm }(\kappa )`$ $`=`$ $`{\displaystyle \frac{2\pi k_{00}}{\mathrm{cos}\theta }}{\displaystyle \underset{\begin{array}{c}\mu ,\nu \geq 0\\ \mu \mathrm{even}\end{array}}{\sum }}(k_{\mu \nu }\pm \chi _{\mu \nu }B)`$ $`\times J_\mu (\nu \kappa _F\mathrm{tan}\theta )\mathrm{cos}\nu \kappa \{\begin{array}{cc}\mathrm{cos}\mu \varphi \hfill & (\mu \mathrm{mod}4=0)\hfill \\ \mathrm{sin}\mu \varphi \hfill & (\mu \mathrm{mod}4=2)\hfill \end{array}`$ (8) which is a generalization of Yamaji’s earlier treatment ; here, $`\kappa _F=ck_{00}/2`$. 
The total quantum oscillatory magnetization now arises from the interference of the individual contributions of the cylinder cross-sections; at constant chemical potential, the fundamental component of the oscillations is given by $$\stackrel{~}{M}\propto \underset{\pm }{\sum }\int _0^{2\pi }d\kappa \mathrm{sin}\left(\frac{\hbar a^{\pm }(\kappa )}{eB}\right).$$ (9) Eqs. (8) and (9) describe oscillations at the undistorted frequency $`\hbar k_{00}^2/(2e\mathrm{cos}\theta )`$, with a characteristic amplitude modulation induced by the interference. One can thus infer the warping parameters $`k_{\mu \nu }`$ by modeling experimentally obtained amplitude data . We have performed a thorough dHvA rotation study in the $`[001][110]`$ plane on a high-quality crystal of Sr<sub>2</sub>RuO<sub>4</sub> with $`T_c>1.3`$K. The experiments were carried out on a low noise superconducting magnet system in field sweeps from 16 T to 5 T, at temperatures of 50 mK. A modulation field of 5.4 mT amplitude was applied to the grounded sample, and the second harmonic of the voltage induced at a pick-up coil around the sample was recorded, essentially measuring $`\partial ^2M/\partial B^2`$, with well-established extra contributions from the field modulation, impurity scattering, and thermal smearing . A typical signal trace, demonstrating the high quality of our data, can be viewed in Fig. 1. Overall, 35 such sweeps were performed, at angular intervals of $`2^{\circ }`$. For each of the dHvA frequencies corresponding to the three FS sheets, we have extracted the local dHvA amplitude through filtering in the Fourier domain. The dHvA amplitude can then be visualized versus magnitude and direction of the field, as shown in the density plot in Fig. 2 for the $`\alpha `$-sheet. 
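Eqs. (8) and (9) are straightforward to evaluate numerically. The sketch below computes the warping-induced amplitude reduction factor of the fundamental oscillation as the modulus of the phase factor averaged over the cylinder; the warping parameters are illustrative placeholders, not the actual Table I values (the table is not reproduced here), and the spin-splitting terms are dropped.

```python
import numpy as np
from scipy.special import jv

HBAR, E = 1.054571817e-34, 1.602176634e-19   # SI
C_AXIS = 12.74e-10        # approximate c-axis lattice constant of Sr2RuO4 (m)
K00 = 3.0e9               # mean Fermi wavevector, ASSUMED magnitude (1/m)
WARP = {(0, 1): 1.0e6, (2, 1): 6.0e6}   # {(mu, nu): k_mu_nu}, illustrative (1/m)

def area(kappa, theta, phi):
    """Cross-sectional area a(kappa) of Eq. (8), one spin branch, chi = 0."""
    kF = C_AXIS * K00 / 2.0
    s = K00 * np.ones_like(kappa)            # the (mu, nu) = (0, 0) term
    for (mu, nu), kmn in WARP.items():
        ang = np.cos(mu * phi) if mu % 4 == 0 else np.sin(mu * phi)
        s = s + kmn * jv(mu, nu * kF * np.tan(theta)) * np.cos(nu * kappa) * ang
    return 2.0 * np.pi * K00 * s / np.cos(theta)

def warping_reduction(theta, phi, B, nk=4096):
    """|<exp(i hbar a/eB)>_kappa|: 1 for an unwarped cylinder, < 1 otherwise."""
    kappa = np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False)
    return abs(np.mean(np.exp(1j * HBAR * area(kappa, theta, phi) / (E * B))))
```

At $`\theta =0`$ only the $`k_{01}`$ term survives ($`J_\mu (0)=\delta _{\mu 0}`$) and the factor reduces to the familiar beating envelope $`|J_0(2\pi \hbar k_{00}k_{01}/eB)|`$; scanning $`\theta `$ then produces Yamaji-type amplitude structure.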
We have also performed similar data analysis for a second extensive rotation study of short high-field ($`18\mathrm{T}`$ to $`15\mathrm{T}`$) sweeps in the $`[001][100]`$ plane; these runs used a different sample. We now turn to the results of the analysis for all three FS sheets, where Table I presents the deduced values for the $`k_{\mu \nu }`$. $`\alpha `$-Sheet — The most striking features of the data in Fig. 2 are qualitative differences with similar data in the $`[001][100]`$ plane (Ref. and the present study), and the absence of spin-zeroes that should be visible as vertical black lines. We are able to account for both effects, and produce near-perfect agreement with experiment, as seen by comparing the two panels of Fig. 2, using the $`k_{\mu \nu }`$ as given in Table I. The dominant $`k_{21}`$-term affects the dHvA amplitude for $`\varphi =45^{\circ }`$ but has no effect for fields in the $`[001][100]`$ plane. The absence of spin-zeroes arises from the presence of a finite $`\chi _{21}\approx 5\times 10^5\mathrm{m}^{-1}\mathrm{T}^{-1}`$ in addition to $`\chi _{00}=10.4\times 10^5\mathrm{m}^{-1}\mathrm{T}^{-1}`$. The same parameters account equally well for data from $`[001][100]`$ rotations and for the amplitude of the second harmonic for both rotation directions. It should be noted that the warping of the $`\alpha `$-cylinder is so weak — at some angles and fields it is smaller than the Landau level spacing — that the success of the model requires the use of our exact treatment beyond the extremal orbit approximation. $`\beta `$-Sheet — The warping is comparatively large, and the extremal orbit approximation is valid over most of the angular range. The Yamaji angle of $`32^{\circ }`$ and the variation of the (relatively fast) beating frequency $`\mathrm{\Delta }F`$ with $`\theta `$ (Fig. 
3) reveal that the dominant warping parameters are $`k_{01}`$ and $`k_{41}`$, as tabulated in Table I. They have opposite signs, so the $`c`$-axis dispersion is largest along the zone diagonals. It is difficult to extract meaningful information on the higher-order terms, but we believe that the data set an upper bound for a double warping contribution of $`|k_{02}|<10^7\mathrm{m}^{-1}`$. The spin-splitting behaviour is intricate, and while it is certain that higher-order $`\chi _{\mu \nu }`$ are needed to explain the data, it is impossible at this stage to extract them without ambiguity. $`\gamma `$-Sheet — Due to stronger impurity damping, the $`\gamma `$-signal is only observable at fields of more than 13 T. Along $`[001][110]`$, its amplitude peaks at $`\theta =\pm 13.7^{\circ }`$, implying that the dominant corrugation is double warping, i.e. $`k_{02}\gg k_{01}`$. We obtained a rough estimate of its strength from the sharpness of this amplitude maximum — it is difficult to assess $`k_{02}`$ from an on-axis beating pattern, as that cannot be established in the short field range over which $`\gamma `$-oscillations are visible. For the $`[001][100]`$ rotation, the amplitude maximum occurs at $`\theta =14.6^{\circ }`$. The deviation of the two measured $`\theta `$-values from each other and from the simple Yamaji prediction of $`14.1^{\circ }`$ yields $`k_{42}`$, whose sign implies that the $`c`$-axis dispersion is largest along the zone axes. At present, it is not possible to extract reliable information on the spin-splitting parameters. To the order of expansion given here, it is possible to calculate the contribution of each FS sheet to the $`c`$-axis conductivity. The $`\beta `$-sheet dominates with an 86% share, compared to 8% for the $`\alpha `$- and 6% for the $`\gamma `$-sheet. This provides new insight into recent AMRO experiments by Ohmichi et al. 
as it strongly suggests that the AMRO signal originates predominantly from the $`\beta `$-sheet (rather than the $`\alpha `$-sheet as had previously been assumed). The AMRO data then fix the “squareness” parameter of the $`\beta `$-cylinder as $`k_{40}=5.3\times 10^8\mathrm{m}^{-1}`$, giving quantitative information about the FS nesting on that sheet. Our work also helps to clarify the interpretation of absorption spectra in cyclotron resonance , which have recently been used to assess quasiparticle mass enhancements in Sr<sub>2</sub>RuO<sub>4</sub>. The “periodic orbit resonance” geometry used in those experiments assesses modulations in the $`c`$-axis Fermi velocity. The strongest signals from such resonances in their simplest form should again stem from the $`\beta `$-sheet, and the unusual warping symmetry would lead to dominant resonances at $`4\omega _c,8\omega _c,\mathrm{\dots }`$ for the $`\beta `$- and $`\gamma `$-orbits, and at $`2\omega _c,4\omega _c,\mathrm{\dots }`$ for the $`\alpha `$-orbit. High-precision dHvA itself can provide normally inaccessible information on spin-dependent many-body effects, by measuring both the specific heat and the spin susceptibility mass enhancements. For the $`\alpha `$-sheet, we have $`m^{*}/m=3.4`$ and $`m_{\mathrm{susc}}^{*}/m=4.1`$, the latter obtained from spin-splitting analysis assuming $`g\approx 2`$. We would expect the ratio $`m_{\mathrm{susc}}^{*}/m^{*}`$ to diverge at a ferromagnetic quantum critical point; the small ratio here suggests that, at least for the $`\alpha `$-sheet, the paramagnetic susceptibility enhancement is matched by specific heat contributions from phonons or large-$`q`$ spin fluctuations . Finally, an intriguing feature of the present study is the qualitative difference between the dominant warping symmetries observed for the three FS sheets of Sr<sub>2</sub>RuO<sub>4</sub>. 
Detailed comparison with the results of band structure calculations would be very informative to test the accuracy of these computations for the weak out-of-plane dispersions in quasi-2D metals in general. For Sr<sub>2</sub>RuO<sub>4</sub> in particular, this would also be a check on the commonly made assignment of the Ru d<sub>xz,yz</sub> orbital character to the $`\alpha `$\- and $`\beta `$-sheets, and the d<sub>xy</sub> character to the $`\gamma `$-sheet, which lies at the foundation of theories of orbital dependent superconductivity. In summary, the full analysis of angle-dependent dHvA amplitude data has emerged as an extremely powerful tool to determine the exact topography of the FS in quasi-2D metals. We have been able to extract quantitative information on the corrugation of all three Fermi cylinders of Sr<sub>2</sub>RuO<sub>4</sub>. The single warping of the $`\beta `$-sheet provides most of the $`c`$-axis dispersion, while the ellipsoidal warping of the $`\alpha `$\- and the double warping of the $`\gamma `$-sheet are less significant. We wish to thank S. Hill, E. M. Forgan, A. J. Schofield, G. J. McMullan, and G. G. Lonzarich for stimulating and fruitful discussions. The work was partly funded by the U.K. EPSRC. C.B. acknowledges the financial support of Trinity College, Cambridge; and A.P.M. gratefully acknowledges the support of the Royal Society.
# UTCCP-P-70 UTHEP-410 Sept. 1999 Light hadron spectrum and quark masses in QCD with two flavors of dynamical quarks Talk presented by T. Kaneko ## 1 Introduction Understanding sea quark effects in the light hadron spectrum is an important issue, sharpened by the recent finding of a systematic deviation of the quenched spectrum from experiment. To this end, we have been pursuing $`N_\mathrm{f}=2`$ QCD simulations using an RG-improved gauge action and a tadpole-improved clover quark action , to be called RC simulations in this article. The parameters of these simulations are listed in Table 1. The statistics at $`\beta =2.2`$ have been increased since Lattice’98, and the runs at $`\beta =2.1`$ are new. In addition we have carried out quenched simulations with the same improved action, referred to as qRC, for a direct comparison of the full and quenched spectrum. The $`\beta `$ values of these runs, given in Table 1, are chosen so that the lattice spacing fixed by the string tension matches that of full QCD for each value of sea quark mass at $`\beta =1.95`$ and 2.1. Quenched hadron masses are calculated for valence quark masses such that $`m_{\mathrm{PS}}/m_\mathrm{V}`$ covers 0.8–0.5, which is similar to the range in the RC runs. In this report we present updated results of the full QCD spectrum and light quark masses. We also discuss sea quark effects by comparing the RC and qRC results. For reference we use quenched results with the plaquette gauge and Wilson quark action as well, which we denote as qPW. ## 2 Full QCD spectrum The analysis procedure of our full QCD spectrum data follows that in Ref. : $`m_\pi `$ and $`m_\rho `$ are used to set the scale and determine the up and down quark mass $`m_{ud}`$, while the strange quark mass $`m_s`$ is fixed from either $`m_K`$ or $`m_\varphi `$. 
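As a hypothetical illustration of this procedure (not the collaboration's actual analysis code), suppose the chiral fits give $`(am_{\mathrm{PS}})^2=B(am_q)`$ and $`am_\mathrm{V}=c_0+c_1(am_q)`$; the physical ratio $`m_\pi /m_\rho `$ then fixes $`am_{ud}`$, and $`m_\rho `$ sets the scale:

```python
import numpy as np

M_PI, M_RHO = 0.13957, 0.7684   # physical masses in GeV

def solve_amud(B, c0, c1, ratio=M_PI / M_RHO):
    """Solve sqrt(B*m)/(c0 + c1*m) = ratio for the bare light quark mass m.
    Squaring gives r^2*c1^2*m^2 + (2*r^2*c0*c1 - B)*m + r^2*c0^2 = 0."""
    r2 = ratio ** 2
    roots = np.roots([r2 * c1 ** 2, 2.0 * r2 * c0 * c1 - B, r2 * c0 ** 2])
    phys = [m.real for m in roots if abs(m.imag) < 1e-12 and m.real > 0]
    return min(phys)            # the light-quark (small-mass) branch

def inverse_lattice_spacing(amud, c0, c1):
    """a^{-1} in GeV from a*m_rho evaluated at the physical point."""
    return M_RHO / (c0 + c1 * amud)
```

The fit coefficients $`B`$, $`c_0`$, $`c_1`$ are placeholders to be supplied by actual fits to the measured lattice masses.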
We tested several fitting forms for the continuum extrapolation, and found that the fit is stable; e.g., for the meson masses, linear extrapolations in $`a`$ and in $`a\alpha _{\overline{\mathrm{MS}}}`$ are consistent with each other, and a quadratic fit in $`a`$ is also consistent within 2 standard deviations. Here, we present results from the linear extrapolation in $`a`$. Fig. 1 shows an update of results for vector meson and octet baryon masses in comparison to those from the qPW simulation. With increased statistics at $`\beta =2.2`$ and new points at $`\beta =2.1`$, we find our conclusion to remain unchanged since Lattice’98, i.e., meson masses in full QCD extrapolate significantly closer to experiment than in quenched QCD. For baryons, the statistical errors are still too large to draw definitive conclusions. ## 3 Sea quark mass dependence In order to obtain a deeper understanding of the sea quark effect on meson masses, we investigate how their values depend on the sea quark mass. In this test, the valence strange quark mass is fixed by a phenomenological value of the ratio $`m_{\eta _{\overline{s}s}}/m_\varphi =0.674`$. To avoid uncertainties that may arise from chiral extrapolations, the light dynamical quark mass is set to one of the values corresponding to $`m_{\mathrm{PS}}/m_\mathrm{V}=0.7,0.6`$ or 0.5. The values of the masses “$`m_{K^{*}}`$” and “$`m_\rho `$” of fictitious mesons for such quark masses can then be determined by interpolations or short extrapolations of hadron mass results. In Fig. 2, we plot “$`m_{K^{*}}/m_\rho `$” as a function of the lattice spacing normalized by “$`m_\rho `$” for different sea quark masses. Making linear extrapolations in $`a`$, we observe that the continuum limits of the two quenched simulations qRC and qPW are consistent. On the other hand, the full QCD result from RC exhibits an increasingly clearer deviation from the quenched value toward lighter sea quark masses. 
We consider that this result provides a clear demonstration of the sea quark effect on vector meson masses. ## 4 Quark masses We plot our results for light quark masses in the $`\overline{\mathrm{MS}}`$ scheme at $`\mu =`$2 GeV in Fig. 3, together with the quenched results of Ref. . Continuum extrapolations are made linearly in $`a`$ with the constraint that the three definitions (using the axial vector Ward identity (AWI) or the vector Ward identity (VWI) with either $`K_c`$ from sea quarks or partially quenched $`K_c`$ ) yield the same value. We confirm our previous finding that i) quark masses in full QCD are much smaller than those in quenched QCD, and ii) the large discrepancy in the strange quark mass determined from $`m_K`$ or $`m_\varphi `$, observed in quenched QCD, is much reduced. Our current estimates for the quark masses in $`N_\mathrm{f}=2`$ QCD are $`m_{ud}=3.3(4)`$ MeV, $`m_s=84(7)`$ MeV ($`K`$-input) and $`m_s=87(11)`$ MeV ($`\varphi `$-input). The quoted errors include our estimate of the systematic errors due to the choice of functional form of the continuum extrapolations and the definition of the $`\overline{\mathrm{MS}}`$ coupling used in the one-loop tadpole-improved renormalization factor. Our results for quark masses are smaller than the values often used in phenomenology, though the ratio $`m_{ud}/m_s=`$ 26(3) is consistent with the result 24.4(1.5) from chiral perturbation theory. The small values are quite interesting, especially for the strange quark mass; a smaller strange quark mass raises the prediction of the Standard Model for the direct CP violation parameter $`\mathrm{Re}(ϵ^{\prime }/ϵ)`$, as strongly favored by the experimental results from the KTeV and NA31 Collaborations . This work is supported in part by Grants-in-Aid of the Ministry of Education (Nos. 09304029, 10640246, 10640248, 11640250, 11640294, 10740107, 11740162). SE and KN are JSPS Research Fellows. 
AAK, TM and HPS are supported by the Research for the Future Program of JSPS, and HPS also by the Leverhulme foundation.
# Looking for Effects of Topology in the Dirac Spectrum of Staggered Fermions ## 1 Introduction In the finite-volume scaling regime $`L\rightarrow \mathrm{\infty }`$ with $`L\ll 1/m_\pi `$ there are detailed analytical predictions for the rescaled microscopic Dirac operator spectrum in gauge field sectors of fixed topological charge $`\nu `$ . We confront these predictions with quenched simulations of staggered fermions, with lattice size $`8^4`$ and $`\beta =5.1`$. More details can be found in our paper . Random Matrix Theory, or equivalently finite-volume partition functions, can be used to compute *exactly* the microscopic Dirac operator spectral density $$\rho _s(\zeta )=\underset{V\rightarrow \mathrm{\infty }}{lim}\frac{1}{V}\rho (\frac{\zeta }{V\mathrm{\Sigma }})$$ (1) where $$\mathrm{\Sigma }=\underset{m\rightarrow 0}{lim}\underset{V\rightarrow \mathrm{\infty }}{lim}\langle \overline{\psi }\psi \rangle =\pi \rho (0)$$ (2) is the infinite-volume chiral condensate. At finite lattice spacing, staggered fundamental fermions of SU(3) gauge theory lead to a microscopic Dirac spectrum in the universality class known as the chiral unitary ensemble (chUE). In a quenched gauge field sector of topological charge $`\nu `$, this microscopic spectral density reads $$\rho _s^{(\nu )}(\zeta )=\pi \rho (0)\frac{\zeta }{2}\{J_\nu (\zeta )^2-J_{\nu -1}(\zeta )J_{\nu +1}(\zeta )\}$$ (3) As first observed by Verbaarschot , staggered fermions give good agreement with the analytical prediction of just the $`\nu `$=0 sector, even when *all* gauge field configurations are summed over. To test whether staggered fermions at similar $`\beta `$-values *are* sensitive to topology at all, we have divided a large number of configurations ($`\sim `$17,000) into different topological sectors based on a variant of APE-smearing (see ref. for the details). 
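The prediction (3) is easy to evaluate; a minimal sketch, with the overall factor $`\pi \rho (0)`$ set to 1:

```python
import numpy as np
from scipy.special import jv

def rho_s(zeta, nu):
    """Quenched chUE microscopic spectral density of Eq. (3), in units of
    pi*rho(0); jv handles the negative order J_{nu-1} for nu = 0 correctly."""
    z = np.asarray(zeta, dtype=float)
    return 0.5 * z * (jv(nu, z) ** 2 - jv(nu - 1, z) * jv(nu + 1, z))
```

For $`\nu =0`$ this is $`(\zeta /2)(J_0^2+J_1^2)`$, which approaches the flat value $`1/\pi `$ at large $`\zeta `$; increasing $`\nu `$ pushes spectral weight away from the origin, which is precisely the effect searched for in the figures.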
## 2 Classification scheme & Analysis Using the naively latticized topological charge $$\nu =\frac{1}{32\pi ^2}\int d^4x\text{Tr}[F_{\mu \nu }F_{\rho \sigma }]ϵ_{\mu \nu \rho \sigma }$$ (4) a given gauge field configuration is assigned an integer topological charge, $`\nu `$, if the rounded-off value of the ‘naive’ $`\nu `$ is stable between 200 and 300 APE-smearing steps. Other configurations ($`37\%`$) are simply rejected. The rounding-off of the naive $`\nu `$-values makes good sense, as can be seen in figure 2. It shows the distribution of the measured topological charge after 200 smearing steps. The distribution is strongly peaked around quantized values, and our measured $`\nu `$-values are rounded off to the obvious integer assignment. We see some “renormalization” of the topological charge as measured in this way, but this is of no concern for us here. We shall only use the measured $`\nu `$-values to make a classification of the original, un-smeared, configurations. The staggered Dirac operator spectrum has a $`\pm `$ symmetry, and when we trace a few small eigenvalues as a function of smearing in figure 3, we show only the positive ones. As expected, we find that $`4\nu `$ eigenvalues become small compared with the rest after very many smearing steps. These are the $`\nu `$ “would be” zero modes of 4 (continuum) flavors on the very smooth configurations. All measurements of the microscopic Dirac operator spectrum are performed on the original gauge field configurations, classified into different $`\nu `$-sectors according to the smeared values of $`\nu `$. Shown in figure 4 are the spectral densities obtained on configurations classified by $`\nu `$ in that way. Dashed lines are the analytical predictions for $`\nu `$=1 and 2 in the appropriate graphs, and the solid curve is the analytical prediction for $`\nu `$=0. 
At this $`\beta `$-value ($`\beta =5.1`$) there is no discernible deviation from the $`\nu `$=0 prediction even on configurations that have been classified as $`\nu =\pm 1,\pm 2.`$ We have also compared the exact predictions $`P_{min}^{\nu =0}(\zeta )`$ $`=`$ $`\pi \rho (0){\displaystyle \frac{\zeta }{2}}e^{-\frac{\zeta ^2}{4}}`$ $`P_{min}^{\nu =1}(\zeta )`$ $`=`$ $`\pi \rho (0){\displaystyle \frac{\zeta }{2}}I_2(\zeta )e^{-\frac{\zeta ^2}{4}}`$ $`P_{min}^{\nu =2}(\zeta )`$ $`=`$ $`\pi \rho (0){\displaystyle \frac{\zeta }{2}}\{I_2^2(\zeta )-I_1(\zeta )I_3(\zeta )\}e^{-\frac{\zeta ^2}{4}}`$ for the distribution of just the smallest eigenvalue in the different topological sectors. Even here we see no deviation at all from the $`\nu `$=0 prediction. It has recently been shown that it is possible to recover the correct sensitivity of staggered fermions to topology at very weak gauge coupling (in the Schwinger model) . With fermions sensitive to gauge field topology, nice agreement with the analytical predictions has been seen even far from the continuum limit, as it should be . The work of P.H.D. and K.R. was partially supported by EU TMR grant ERBFMRXCT97-0122 and the work of U.M.H. by DOE contracts DE-FG05-85ER250000 and DE-FG05-96ER40979. In addition P.H.D. and U.M.H. acknowledge support by NATO Science Collaborative Research Grant CRG 971487.
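The smallest-eigenvalue distributions quoted above become unit-normalized probability densities once the scale $`\pi \rho (0)`$ is set to one; a numerical sketch of that consistency check (our own illustration, not part of the analysis reported in the text):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv  # modified Bessel function I_nu

def p_min(zeta, nu):
    """Smallest-eigenvalue distribution in the quenched chUE, pi*rho(0)=1."""
    gauss = 0.5 * zeta * np.exp(-zeta**2 / 4.0)
    if nu == 0:
        return gauss
    if nu == 1:
        return gauss * iv(2, zeta)
    if nu == 2:
        return gauss * (iv(2, zeta)**2 - iv(1, zeta) * iv(3, zeta))
    raise ValueError("only nu = 0, 1, 2 are implemented here")

# Each density integrates to one; the peak moves to larger zeta as nu
# grows (level repulsion from the would-be zero modes).
norms = [quad(p_min, 0.0, 30.0, args=(nu,))[0] for nu in (0, 1, 2)]
```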
no-problem/9909/hep-lat9909072.html
ar5iv
text
# Edinburgh 99/11 September 1999 The topological susceptibility in ‘full’ (UK)QCD. ## Abstract We report first calculations of the topological susceptibility measured using the field theoretic method on SU(3) gauge configurations produced by the UKQCD collaboration with two flavours of dynamical, improved, Wilson fermions. Using three ensembles with matched lattice spacing but differing sea quark mass we find that hybrid Monte Carlo simulation appears to explore the topological sectors efficiently, and measure a topological susceptibility consistent with a linear increase with the quark mass. The latest UKQCD runs generate SU(3) field configurations using a Wilson gauge action coupled to $`n_f=2`$ flavours of Wilson sea quarks, non–perturbatively improved such that the leading order discretisation errors in spectral quantities should be $`𝒪(a^2)`$. In dynamical simulations the lattice spacing, $`a`$, is influenced by the gauge coupling, $`\beta `$, and the quark mass parameter, $`\kappa `$. The ensembles studied represent three points on a trajectory in the $`(\beta ,\kappa )`$ space of (approximately) constant $`a`$ and physical volume ($`V=16^3\times 32a^4`$). \[The lattice spacing has been defined from the static inter–quark potential, using $`r_0=0.49`$ fm .\] Along such trajectories, however, there is a change in chiral behaviour as seen in the pseudoscalar to vector mesonic mass ratio in Table 1. \[Dimensionless lattice quantities are denoted with circumflexes throughout.\] We measure the topological charge density on the lattice using the field theoretic operator $$\widehat{Q}(n)=\frac{1}{2}\times \frac{1}{16}\epsilon _{\mu \nu \sigma \tau }^\pm \text{Tr}U_{\mu \nu }(n)U_{\sigma \tau }(n),$$ (1) symmetrised such that $`\epsilon _{\mu \nu \sigma ,-\tau }^\pm =-\epsilon _{\mu \nu \sigma ,+\tau }^\pm `$ etc. to improve the signal on moderately cooled lattices. 
The topological charge, $`\widehat{Q}`$, and susceptibility, $`\widehat{\chi }`$, are then $$\widehat{Q}=\frac{1}{32\pi ^2}\underset{n}{\sum }\widehat{Q}(n)\text{ }\widehat{\chi }=\frac{\langle \widehat{Q}^2\rangle }{V}.$$ (2) Although formally in the weak coupling limit $`\widehat{Q}`$ is related to that of the continuum by $`\widehat{Q}=a^4Q+𝒪(a^6),`$ the prefactors of the higher order terms typically have momenta on the scale of $`1/a`$ such that they are $`𝒪(1)`$, and $`\widehat{Q}`$ becomes increasingly dominated by ultraviolet noise near the continuum limit. It is also subject to a multiplicative renormalisation that reduces the signal at the $`\beta `$ couplings accessible to simulation. An established, and well understood, procedure to extract the topological signal is to cool the configurations prior to measuring $`\widehat{Q}`$. This locally smoothens the lattice fields to remove the ultraviolet fluctuations, and drives the renormalisation constant to unity. We move through the lattice links in a staggered fashion, updating each Cabibbo–Marinari subgroup in turn so as to exactly minimise the SU(2) Wilson plaquette action. This action is the most local and thus is expected to do least damage to correlation functions on physical length scales. Updating every link once corresponds to one cooling ‘sweep’. Such a cooling action, however, also destroys topological features when applied in extremis. Instanton–anti-instanton pairs in the vacuum are not stable minima of the action (even in the continuum) and under cooling there is an attractive force leading to annihilation. Whilst this is an issue in measurements of instanton size distributions, there is no net change in the topological charge and the susceptibility is thus stable. The lattice regularisation breaks scale invariance, and the Wilson action of an instanton is an increasing function of the core size, $`\rho `$. 
(Isolated) instantons shrink and disappear under cooling, leading to a net change in $`\widehat{Q}`$ and $`\widehat{\chi }`$. Whilst such events may be monitored in relatively smooth configurations, it is still desirable to perform as few cooling steps as possible to expose the signal before making measurements. In Fig. 1 we plot the normalised correlation between the topological charge after a given number of cooling sweeps, $`n_c`$, relative to that after $`n_c=25`$: $$\frac{\langle \widehat{Q}(n_c)\widehat{Q}(25)\rangle }{\frac{1}{2}\left(\langle \widehat{Q}(n_c)^2\rangle +\langle \widehat{Q}(25)^2\rangle \right)}$$ (3) We find remarkable stability in $`\widehat{Q}`$ from 5 cooling sweeps out to at least 25 cooling steps. One technical point of interest is the rate at which configurations become topologically independent under Monte Carlo updating; the topological charge is related to the small eigenvalues of the fermion matrix and should be one of the slowest modes to decorrelate. We show a Monte Carlo time history plot for our most chiral ensemble in Fig. 2, and we estimate the integrated autocorrelation times in units of the 40 HMC trajectories between configurations in Table 2, albeit using a small ensemble, indicating pretty good decorrelation at these quark masses. We note that autocorrelation times are longer for the larger $`\beta `$ ensembles despite these being the less chiral. Also at issue is the ergodicity of the MC update. Measuring $`\widehat{Q}`$ over the whole configuration we find the ensemble averages to be within $`1.5`$ (relatively large) standard deviations of zero in Table 2, and Fig. 3 shows a relatively Gaussian sampling of the topological sectors. In the continuum, the topological charge of a configuration is integer. On the lattice this is not so, nor is the charge measured using our operator on a configuration particularly close to integer. This can be attributed to the presence of narrow instantons whose charge is significantly less than unity. 
Attempts can be made to calibrate and correct for this but we defer application of such procedures until . We have both maintained the charge as a non–integer for the purposes of calculating the topological susceptibility, and also rounded the value on each configuration to the nearest integer, and found that the topological susceptibilities obtained are consistent within statistical errors. In Table 2 we show our estimates for the topological susceptibility measured after 25 cooling sweeps. In the chirally broken, confining phase at low temperatures, the sea quarks induce an attractive interaction which leads to instanton–anti-instanton pairing and a suppression of the topological susceptibility as the dynamical quark mass is lowered: $$\chi =\frac{m_q\langle \overline{\psi }\psi \rangle }{n_f^2}+𝒪(m_q^2)$$ (4) where $`\langle \overline{\psi }\psi \rangle `$ is summed over light quark flavours, and should be evaluated in the $`m_q\rightarrow 0`$ limit. The quark mass is not known a priori but may be re-expressed in terms of the pseudoscalar decay constant using the PCAC relation $`m_q\langle \overline{\psi }\psi \rangle =f_\pi ^2m_\pi ^2`$ such that for sufficiently chiral sea quarks, the topological susceptibility should be quadratic in the pseudoscalar mass and decay constant: $$\chi =\frac{f_\pi ^2m_\pi ^2}{n_f^2}+𝒪(m_\pi ^4).$$ (5) In Fig. 4 we plot the susceptibility versus the pseudoscalar mass in units of $`r_0`$, and find the data are consistent with such a leading order fit through the origin. We may use the fitted slope of this graph to provide an estimate of the pseudoscalar decay constant, finding it to be very low compared to the experimental value. Although such an estimate must be regarded as preliminary given the volume of data so far analysed, it is clear from Fig. 
4 that the heaviest of our sea quark masses yields a topological susceptibility that is in statistical agreement with the quenched value, and attempting to fit the leading order chiral behaviour to this point is likely to lead to an underestimate of $`f_\pi `$; $`𝒪(m_q^2)`$ terms are likely to be large, including the effects of not extrapolating $`\langle \overline{\psi }\psi \rangle `$ to $`m_q\rightarrow 0`$. Ongoing analysis of further configurations at these and lighter sea quark masses should clarify the situation. Acknowledgments. A.H.’s work was supported in part by United Kingdom PPARC grant GR/K22744, and that of M.T. by grants GR/K55752 and GR/K95338.
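As a side note, the susceptibility estimator of Eq. (2), together with the integer-rounding variant discussed above, is straightforward to script; a minimal sketch (the per-configuration charges below are hypothetical, chosen only to exercise the estimator; the volume matches the $`16^3\times 32`$ lattice):

```python
import numpy as np

def susceptibility(Q, volume, round_to_int=False):
    """chi-hat = <Q^2> / V from a list of per-configuration charges."""
    Q = np.asarray(Q, dtype=float)
    if round_to_int:
        Q = np.rint(Q)  # round each measured charge to the nearest integer
    return np.mean(Q**2) / volume

V = 16**3 * 32  # lattice volume in lattice units
Q_meas = [0.9, -1.1, 0.1, 2.2, -0.2, 1.0, -0.9, 0.0]  # hypothetical values
chi_raw = susceptibility(Q_meas, V)
chi_int = susceptibility(Q_meas, V, round_to_int=True)
# For charges clustering near integers the two estimates differ only
# mildly, as found in the text.
```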
no-problem/9909/quant-ph9909037.html
ar5iv
text
# Universal cloning of continuous quantum variables ## Abstract The cloning of quantum variables with continuous spectra is analyzed. A universal—or Gaussian—quantum cloning machine is exhibited that copies equally well the states of two conjugate variables such as position and momentum. It also duplicates all coherent states with a fidelity of 2/3. More generally, the copies are shown to obey a no-cloning Heisenberg-like uncertainty relation. PACS numbers: 03.65.Bz, 03.67.-a, 89.70.+c Most of the concepts of quantum computation have been initially developed for discrete quantum variables, in particular, binary quantum variables (quantum bits). Recently, however, a lot of attention has been devoted to the study of continuous quantum variables in informational or computation processes, as they might be intrinsically easier to manipulate than their discrete counterparts. Variables with a continuous spectrum such as the position of a particle or the amplitude of an electromagnetic field have been shown to be useful to perform quantum teleportation, quantum error correction, or, even more generally, quantum computation. Also, quantum cryptographic schemes relying on continuous variables have been proposed, while the concept of entanglement purification has been extended to continuous variables. In this context, a promising feature of quantum computation over continuous variables is that it can be performed in quantum optics experiments by manipulating squeezed states with linear optics elements such as beam splitters. In this Letter, the problem of copying the state of a system with continuous spectrum is investigated, and it is shown that a particular unitary transformation, called cloning, can be found that universally copies the position and momentum states. Let us first state the problem in physical terms. Consider, as an example of a continuous variable, the position $`x`$ of a particle in a one-dimensional space, and its canonically conjugate variable $`p`$. 
If the wave function is a Dirac delta function—the particle is fully localized in position space, then $`x`$ can be measured exactly, and several perfect copies of the system can be prepared. However, such a cloning process fails to exactly copy non-localized states, e.g., momentum states. Conversely, if the wave function is a plane wave with momentum $`p`$—the particle is localized in momentum space, then $`p`$ can be measured exactly and one can again prepare perfect copies of this plane wave. However, such a “plane-wave cloner” is then unable to copy position states exactly. In short, it is impossible to copy perfectly the states of two conjugate variables such as position and momentum or the quadrature amplitudes of an electromagnetic field. This simply illustrates the famous no-cloning theorem for continuous variables. In what follows, it is shown that a unitary cloning transformation can nevertheless be found that provides two copies of a system with a continuous spectrum, but at the price of a non-unity cloning fidelity. More generally, we show that a class of cloning machines can be defined that yield two imperfect copies of a continuous variable, say $`x`$. The quality of the two copies obeys a no-cloning uncertainty relation akin to the Heisenberg relation, implying that the product of the $`x`$-error variance on the first copy times the $`p`$-error variance on the second one remains bounded from below by $`(\hbar /2)^2`$—it cannot be zero. Within this class, a universal cloner can be found that provides two identical copies of a continuous system with the same error distribution for position and momentum states. This cloner effects Gaussian-distributed position- and momentum-errors on the input variable, and is the continuous counterpart of the universal cloner for quantum bits. More generally, it duplicates in a same manner the eigenstates of linear combinations of $`\widehat{x}`$ and $`\widehat{p}`$, such as Gaussian wave packets or coherent states. 
The latter states are shown to be cloned with a fidelity that is equal to 2/3. In the following, we shall work in position basis, whose states $`|x`$ are normalized according to $`x|x^{}=\delta (xx^{})`$. We assume $`\mathrm{}=1`$, so that the momentum eigenstates are given by $`|p=(2\pi )^{1/2}𝑑x\mathrm{e}^{ipx}|x`$. Let us define the maximally-entangled states of two continuous variables, $$|\psi (x,p)=\frac{1}{\sqrt{2\pi }}_{\mathrm{}}^{\mathrm{}}𝑑x^{}\mathrm{e}^{ipx^{}}|x^{}_1|x^{}+x_2$$ (1) where 1 and 2 denote the two variables, while $`x`$ and $`p`$ are two real parameters. Equation (1) is akin to the original Einstein-Podolsky-Rosen (EPR) state, but parametrized by the center-of-mass position and momentum. It is easy to check that $`|\psi (x,p)`$ is maximally entangled as $`\mathrm{Tr}_1\left(|\psi \psi |\right)=\mathrm{Tr}_2\left(|\psi \psi |\right)=𝟙/(\mathrm{𝟚}\pi )`$ for all values of $`x`$ and $`p`$, with $`\mathrm{Tr}_{1,2}`$ denoting partial traces with respect to variable 1 and 2, respectively. The states $`|\psi `$ are orthonormal, i.e., $`\psi (x^{},p^{})|\psi (x,p)=\delta (xx^{})\delta (pp^{})`$, and satisfy a closure relation $$_{\mathrm{}}^{\mathrm{}}𝑑x𝑑p|\psi (x,p)\psi (x,p)|=𝟙_\mathrm{𝟙}𝟙_\mathrm{𝟚}$$ (2) so they form an orthonormal basis of the joint Hilbert space of variables 1 and 2. Interestingly, applying some unitary operator on one of these two variables makes it possible to transform the EPR states into each other. Specifically, let us define a set of displacement operators $`\widehat{D}`$ parametrized by $`x`$ and $`p`$, $$\widehat{D}(x,p)=\mathrm{e}^{ix\widehat{p}}\mathrm{e}^{ip\widehat{x}}=_{\mathrm{}}^{\mathrm{}}𝑑x^{}\mathrm{e}^{ipx^{}}|x^{}+xx^{}|$$ (3) which form a continuous Heisenberg group. Physically, $`\widehat{D}(x,p)`$ denotes a momentum shift of $`p`$ followed by a position shift of $`x`$. 
If $`\widehat{D}(x,p)`$ acts on one of two entangled continuous variables, it is straightforward to check that $$𝟙\otimes \widehat{D}(x,p)|\psi (0,0)\rangle =|\psi (x,p)\rangle $$ (4) This will be useful to specify the errors induced by the continuous cloning machines considered later on. Assume that the input variable of a cloner is initially entangled with another (so-called reference) variable, so that their joint state is $`|\psi (0,0)\rangle `$. If cloning induces, say, a position-shift error of $`x`$ on the copy, then the joint state of the reference and copy variables will be $`|\psi (x,0)\rangle `$ as a result of Eq. (4). Similarly, a momentum-shift error of $`p`$ will result in $`|\psi (0,p)\rangle `$. More generally, if these $`x`$ and $`p`$ errors are distributed at random according to the probability $`P(x,p)`$, then the joint state will be the mixture $$\int _{-\infty }^{\infty }dxdpP(x,p)|\psi (x,p)\rangle \langle \psi (x,p)|$$ (5) Let us now consider a cloning machine defined as the unitary transformation $`\widehat{𝒰}`$ acting on three continuous variables: the input variable (variable 2) supplemented with two auxiliary variables, the blank copy (variable 3) and an ancilla (variable 4). After applying $`\widehat{𝒰}`$, variables 2 and 3 are taken as the two outputs of the cloner, while variable 4 (the ancilla) is simply traced over. Assume now that variable 1 denotes a reference variable that is initially entangled with the cloner input—their joint state is $`|\psi (0,0)\rangle _{1,2}`$, while the auxiliary variables 3 and 4 are initially prepared in the entangled state $$|\chi \rangle _{3,4}=\int _{-\infty }^{\infty }dxdpf(x,p)|\psi (x,p)\rangle _{3,4}$$ (6) where $`f(x,p)`$ is an (arbitrary) complex amplitude function of $`x`$ and $`p`$. The cloning transformation is defined as $$\widehat{𝒰}_{2,3,4}=\mathrm{e}^{-i(\widehat{x}_4-\widehat{x}_3)\widehat{p}_2}\mathrm{e}^{-i\widehat{x}_2(\widehat{p}_3+\widehat{p}_4)}$$ (7) where $`\widehat{x}_k`$ ($`\widehat{p}_k`$) is the position (momentum) operator for variable $`k`$. 
Then, the joint state of the reference, the two copies, and the ancilla after cloning is given by $`|\mathrm{\Phi }\rangle _{1,2,3,4}=𝟙_1\otimes \widehat{𝒰}_{2,3,4}|\psi (0,0)\rangle _{1,2}|\chi \rangle _{3,4}`$. Using Eqs. (6) and (7), it can be written as $$|\mathrm{\Phi }\rangle =\int _{-\infty }^{\infty }dxdpf(x,p)|\psi (x,p)\rangle _{1,2}|\psi (x,p)\rangle _{3,4}$$ (8) This is a very peculiar 4-variable state in that it can be reexpressed in a similar form by exchanging variables 2 and 3, namely $$|\mathrm{\Phi }\rangle =\int _{-\infty }^{\infty }dxdpg(x,p)|\psi (x,p)\rangle _{1,3}|\psi (x,p)\rangle _{2,4}$$ (9) with $$g(x,p)=\frac{1}{2\pi }\int _{-\infty }^{\infty }dx^{}dp^{}\mathrm{e}^{i(px^{}-xp^{})}f(x^{},p^{})$$ (10) Thus, interchanging the two cloner outputs amounts to substituting the function $`f`$ with its two-dimensional Fourier transform $`g`$. This property is crucial as it ensures that the two copies suffer from complementary position and momentum errors. Indeed, using Eq. (8) and tracing over variables 3 and 4, we see that the joint state of the reference and the first output is given by Eq. (5), with $`|f|^2`$ playing the role of $`P`$. Hence, the first copy (called copy $`a`$ later on) is imperfect in the sense that the input variable gets a random position- and momentum-shift error drawn from the probability distribution $`P_a(x,p)=|f(x,p)|^2`$. Similarly, tracing the state (9) over variables 2 and 4 implies that the second copy (or copy $`b`$) is affected by a position- and momentum-shift error distributed as $`P_b(x,p)=|g(x,p)|^2`$. The complementarity between the quality of the two copies originates from the relation between the amplitude functions $`f`$ and $`g`$, i.e., Eq. (10), in close analogy with what was shown for discrete quantum cloners. Now, let us apply the cloning transformation $`\widehat{𝒰}`$ to an input position state $`|x_0\rangle `$. We simply need to project the reference variable onto the state $`|x_0\rangle `$. 
Applied to the initial joint state of the reference and the input $`|\psi (0,0)\rangle _{1,2}`$, this projection operator $`|x_0\rangle \langle x_0|\otimes 𝟙`$ yields $`|x_0\rangle _1|x_0\rangle _2`$ up to a normalization, so the input is indeed projected onto the desired state. Applying this projector to the state $`|\mathrm{\Phi }\rangle `$ as given by Eq. (8) results in the state $$\int _{-\infty }^{\infty }dxdpf(x,p)\mathrm{e}^{ipx_0}|x_0+x\rangle _2|\psi (x,p)\rangle _{3,4}$$ (11) for the remaining variables 2, 3, and 4. The state of copy $`a`$ (or variable 2) is then obtained by tracing over variables 3 and 4, $$\rho _a=\int _{-\infty }^{\infty }dxP_a(x)|x_0+x\rangle \langle x_0+x|$$ (12) where $`P_a(x)=\int _{-\infty }^{\infty }dpP_a(x,p)`$ is the position-error (marginal) distribution affecting copy $`a`$. Hence, the first copy undergoes a random position error which is distributed as $`P_a(x)`$. Similarly, applying the projector to the alternate expression for $`|\mathrm{\Phi }\rangle `$, Eq. (9), and tracing over variables 2 and 4 results in a state $`\rho _b`$ of the second copy that is akin to Eq. (12) with $`P_b(x)=\int _{-\infty }^{\infty }dpP_b(x,p)`$. The result of cloning an input momentum state $`|p_0\rangle `$ can also be easily determined by applying a projector onto $`|p_0\rangle `$ to the reference variable, so that the initial joint state of the reference and the input is projected onto $`|p_0\rangle _1|p_0\rangle _2`$. Using Eqs. (8) and (9), we obtain the analogous expressions for the states of copies $`a`$ and $`b`$, $$\rho _{a(b)}=\int _{-\infty }^{\infty }dpP_{a(b)}(p)|p_0+p\rangle \langle p_0+p|$$ (13) where $`P_{a(b)}(p)=\int _{-\infty }^{\infty }dxP_{a(b)}(x,p)`$. Consequently, the two copies undergo a random momentum error distributed as $`P_{a(b)}(p)`$. The tradeoff between the quality of the copies can be expressed by relating the variances of the distributions $`P_a(x,p)`$ and $`P_b(x,p)`$. 
Let us analyze this no-cloning complementarity by applying the Heisenberg uncertainty relation to the state $`|\zeta \rangle _{1,2}`$ $`=`$ $`{\displaystyle \int _{-\infty }^{\infty }dxdpf(x,p)|x\rangle _1|p\rangle _2}`$ (14) $`=`$ $`{\displaystyle \int _{-\infty }^{\infty }dxdpg(x,p)|p\rangle _1|x\rangle _2}`$ (15) where $`|p\rangle _{1(2)}`$ denote the momentum states of the first (second) variable. The two pairs of conjugate operators $`(\widehat{x}_1,\widehat{p}_1)`$ and $`(\widehat{p}_2,\widehat{x}_2)`$ give rise, respectively, to the two no-cloning uncertainty relations $`(\mathrm{\Delta }x_a)^2(\mathrm{\Delta }p_b)^2`$ $`\ge `$ $`1/4`$ (16) $`(\mathrm{\Delta }x_b)^2(\mathrm{\Delta }p_a)^2`$ $`\ge `$ $`1/4`$ (17) where $`(\mathrm{\Delta }x_a)^2`$ and $`(\mathrm{\Delta }x_b)^2`$ denote the variances of $`P_a(x)`$ and $`P_b(x)`$, respectively. (The analogous notation holds for the momentum-shift distributions affecting both copies.) Consequently, if the cloning process induces a small position (momentum) error on the first copy, then the second copy is necessarily affected by a large momentum (position) error. We now focus our attention on a symmetric and universal continuous cloner that saturates the above no-cloning uncertainty relations. We restrict ourselves to solutions of the form $`f(x,p)=q(x)Q(p)`$ where $`Q(p)=(2\pi )^{-1/2}\int _{-\infty }^{\infty }dx\mathrm{e}^{-ipx}q(x)`$ is the Fourier transform of $`q(x)`$. It can be checked that this choice satisfies the symmetry requirement $`g(x,p)=f(x,p)`$. Now, for the cloner to act equally on position and momentum states, $`q(x)`$ must be equal to its Fourier transform. Hence, the universal continuous cloner corresponds to the choice $$f(x,p)=\frac{1}{\sqrt{\pi }}\mathrm{e}^{-\frac{x^2+p^2}{2}}$$ (18) so that $`P_{a(b)}(x,p)=\mathrm{e}^{-(x^2+p^2)}/\pi `$ is simply a bi-variate Gaussian of variance 1/2 along the $`x`$\- and $`p`$-axes. 
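The unit normalization of $`P_{a(b)}(x,p)`$ and the quoted error variance of 1/2 are easy to confirm numerically; a sketch (in units where $`\hbar =1`$, as in the text):

```python
import numpy as np
from scipy.integrate import dblquad

def P(x, p):
    """Error distribution of the universal cloner: a bi-variate Gaussian."""
    return np.exp(-(x**2 + p**2)) / np.pi

inf = np.inf
norm = dblquad(P, -inf, inf, lambda x: -inf, lambda x: inf)[0]
var_x = dblquad(lambda x, p: x**2 * P(x, p), -inf, inf,
                lambda x: -inf, lambda x: inf)[0]
# norm -> 1 and var_x -> 1/2, so the two identical copies saturate the
# no-cloning bounds (Delta x)^2 (Delta p)^2 >= 1/4 of Eqs. (16)-(17).
```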
The two auxiliary variables must then be prepared in the state $$|\chi \rangle =\frac{1}{\sqrt{\pi }}\int _{-\infty }^{\infty }dydz\mathrm{e}^{-(y^2+z^2)/2}|y\rangle |y+z\rangle $$ (19) The resulting transformation effected by this universal cloner on an input position state $`|x\rangle `$ is given by $`|x\rangle |0\rangle |0\rangle \rightarrow {\displaystyle \frac{1}{\sqrt{\pi }}}{\displaystyle \int _{-\infty }^{\infty }dydz\mathrm{e}^{-(y^2+z^2)/2}}`$ (21) $`\times |x+y\rangle |x+z\rangle |x+y+z\rangle `$ where the three variables denote the two copies and the ancilla, respectively. It is easy to check that Eq. (21) implies Eq. (12) and its counterpart for copy $`b`$ with $`P_a(x)=P_b(x)=\mathrm{exp}(-x^2)/\sqrt{\pi }`$, so that both copies are affected by a Gaussian-distributed position error of variance 1/2. The choice $`(\mathrm{\Delta }x_a)^2=(\mathrm{\Delta }x_b)^2=(\mathrm{\Delta }p_a)^2=(\mathrm{\Delta }p_b)^2=1/2`$ ensures that the cloner is universal, that is, position and momentum states are copied with the same error variance. The value 1/2 implies that the cloner is optimal among the class of cloners considered here in view of Eq. (16). Furthermore, the cylindric symmetry of $`f(x,p)`$ \[i.e., it depends only on the radial coordinate $`(x^2+p^2)^{1/2}`$\] implies that this cloner copies the eigenstates of any operator of the form $`c\widehat{x}+d\widehat{p}`$ with the same error distribution, as we will show. Let us first determine the operation of this universal cloner on an arbitrary state $`|\xi \rangle `$ expressed in the position basis as $`\int _{-\infty }^{\infty }dx\xi (x)|x\rangle `$. For this, we project the reference variable onto the state $`|\xi ^\ast \rangle `$, i.e., the state obtained by changing $`\xi (x)`$ into its complex conjugate. This projector $`|\xi ^\ast \rangle \langle \xi ^\ast |\otimes 𝟙`$ applied on the initial state $`|\psi (0,0)\rangle _{1,2}`$ yields the state $`|\xi ^\ast \rangle _1|\xi \rangle _2`$ up to a normalization, so the input is indeed projected onto $`|\xi \rangle `$. 
Now, applying this projector to the state $`|\mathrm{\Phi }\rangle `$ after cloning implies that the three remaining variables are projected onto the state $$\int _{-\infty }^{\infty }dxdpf(x,p)|\xi (x,p)\rangle _2|\psi (x,p)\rangle _{3,4}$$ (22) where $`|\xi (x,p)\rangle =\widehat{D}(x,p)|\xi \rangle =\int _{-\infty }^{\infty }dx^{}\xi (x^{})\mathrm{e}^{ipx^{}}|x^{}+x\rangle `$ is the input state $`|\xi \rangle `$ affected by a momentum shift of $`p`$ followed by a position shift of $`x`$. This yields $$\rho _{a(b)}=\int _{-\infty }^{\infty }dxdpP_{a(b)}(x,p)|\xi (x,p)\rangle \langle \xi (x,p)|$$ (23) so that the two outputs are mixtures of the $`|\xi (x,p)\rangle `$ states, with $`x`$ and $`p`$ distributed according to $`P_{a(b)}(x,p)`$. Expressed in terms of Wigner distributions, Eq. (23) implies that $`W_{\mathrm{out}}(x,p)=W_{in}(x,p)\ast P(x,p)`$ with $`\ast `$ denoting convolution. In particular, the universal cloner simply effects a spreading out of the input Wigner function by a bi-variate Gaussian of variance 1/2. These considerations can be easily generalized to any pair of canonically conjugate variables in a rotated phase space. First note that, using the Baker-Hausdorff formula and $`[\widehat{x},\widehat{p}]=i`$, the displacement operator can be rewritten as $`\widehat{D}(x,p)=\mathrm{e}^{-ixp/2}\mathrm{e}^{i(p\widehat{x}-x\widehat{p})}`$. Consider now any pair of observables $`\widehat{u}`$ and $`\widehat{v}`$ satisfying the commutation rule $`[\widehat{u},\widehat{v}]=i`$. Let $`\widehat{u}=c\widehat{x}+d\widehat{p}`$ and $`\widehat{v}=-d\widehat{x}+c\widehat{p}`$, where $`c`$ and $`d`$ are real and satisfy $`c^2+d^2=1`$. It is easy to check that $`v\widehat{u}-u\widehat{v}=p\widehat{x}-x\widehat{p}`$, where the variables $`u`$ and $`v`$ are defined just as $`\widehat{u}`$ and $`\widehat{v}`$, so that $`\widehat{D}`$ takes a similar form in terms of $`\widehat{u}`$ and $`\widehat{v}`$ (up to an irrelevant phase). 
Therefore, as a consequence of the cylindric symmetry of the Gaussian $`|f(x,p)|^2`$, the eigenstates $`|u\rangle `$ of the observable $`\widehat{u}`$ undergo a random shift of $`u`$ that is distributed as $`\mathrm{exp}(-u^2)/\sqrt{\pi }`$. (The position and momentum states are just two special cases of this.) We can also treat the cloning of coherent states (Gaussian wave packets) by considering the complex rotation that defines the annihilation and creation operators $`\widehat{a}=(\widehat{x}+i\widehat{p})/\sqrt{2}`$ and $`\widehat{a}^{\dagger }=(\widehat{x}-i\widehat{p})/\sqrt{2}`$. The displacement operator can then be written (up to an irrelevant phase) in the usual form $`\widehat{D}(\alpha )=\mathrm{e}^{\alpha \widehat{a}^{\dagger }-\alpha ^\ast \widehat{a}}`$, where $`\alpha =(x+ip)/\sqrt{2}`$ is a c-number that characterizes the position and momentum shift. This operator transforms the coherent state $`|\alpha _0\rangle `$ (i.e., the eigenstate of $`\widehat{a}`$ with eigenvalue $`\alpha _0`$) into $`\widehat{D}(\alpha )|\alpha _0\rangle =\mathrm{e}^{i\theta }|\alpha _0+\alpha \rangle `$, where $`\theta =\mathrm{Im}(\alpha \alpha _0^\ast )`$. Thus, if the input of the universal cloner is a coherent state $`|\alpha _0\rangle `$, its two outputs are a mixture of coherent states characterized by the density operator $$\rho =\int d^2\alpha G(\alpha )|\alpha _0+\alpha \rangle \langle \alpha _0+\alpha |$$ (24) where the integral is over the complex plane, and $`G(\alpha )=2\mathrm{exp}(-2|\alpha |^2)/\pi `$ is a Gaussian distribution in $`\alpha `$ space. Then, using $`|\langle \alpha |\alpha ^{}\rangle |^2=\mathrm{exp}(-|\alpha -\alpha ^{}|^2)`$, it is easy to calculate the cloning fidelity: $$f=\langle \alpha _0|\rho |\alpha _0\rangle =\frac{2}{\pi }\int d^2\alpha \mathrm{e}^{-3|\alpha |^2}=\frac{2}{3}$$ (25) This fidelity does not depend on $`\alpha _0`$, so it is universal for all coherent states. 
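The Gaussian integral giving the fidelity in Eq. (25) can be evaluated in polar coordinates over the complex $`\alpha `$ plane; a numerical sketch of the same computation:

```python
import numpy as np
from scipy.integrate import quad

# f = (2/pi) * Int d^2alpha exp(-3|alpha|^2)
#   = (2/pi) * 2*pi * Int_0^inf r exp(-3 r^2) dr
radial = quad(lambda r: r * np.exp(-3.0 * r**2), 0.0, np.inf)[0]
fidelity = 4.0 * radial  # the factors (2/pi)*(2*pi) combine to 4
# radial -> 1/6, so fidelity -> 2/3 for every input coherent state
```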
Finally, we consider the cloning of quadrature squeezed states, defined as the eigenstates of $`\widehat{b}=(\widehat{x}/\sigma +i\sigma \widehat{p})/\sqrt{2}`$, where $`\sigma `$ is a real parameter. These states can be denoted as $`|\beta \rangle `$, where $`\beta =(x/\sigma +i\sigma p)/\sqrt{2}`$ is a c-number. We have again $`\widehat{D}(\beta )=\mathrm{e}^{\beta \widehat{b}^{\dagger }-\beta ^\ast \widehat{b}}`$, so that $`\widehat{D}(\beta )|\beta _0\rangle =|\beta _0+\beta \rangle `$ up to a phase. In order to keep the fidelity maximum, however, we must use here a non-universal cloner defined by $$f(x,p)=\frac{1}{\sqrt{\pi }}\mathrm{e}^{-\frac{x^2}{2\sigma ^2}-\frac{\sigma ^2p^2}{2}}$$ (26) Both copies yielded by this cloner are affected by an $`x`$-error of variance $`\sigma ^2/2`$ and a $`p`$-error of variance $`1/(2\sigma ^2)`$, which implies that the density operator has the same form as Eq. (24) with $`G(\beta )=2\mathrm{exp}(-2|\beta |^2)/\pi `$. As a consequence, there exists a specific cloning machine for each value of $`\sigma `$ that copies all squeezed states corresponding to that $`\sigma `$ with a fidelity of 2/3. In contrast, cloning these states using the universal cloner gives a fidelity that decreases as squeezing increases. We have shown that a universal cloning machine for continuous quantum variables can be defined that transforms position (momentum) states into a Gaussian-distributed mixture of position (momentum) states with an error variance of 1/2. It is universal, as the eigenstates of any linear combination of $`\widehat{x}`$ and $`\widehat{p}`$ are copied with the same error distribution. In particular, it duplicates all coherent states with a fidelity of 2/3. We conjecture that this cloning fidelity is optimal. An experimental realization of this universal cloner could be envisaged based on the manipulation of modes of the electromagnetic field. The cloning transformation $`\widehat{𝒰}`$ would then couple two auxiliary modes to the input mode to be copied. 
Since $`\widehat{𝒰}`$ amounts to a sequence of “continuous controlled-not gates”, it could be implemented by pairwise optical QND coupling between these three modes. As a final remark, it is worth noting that the two auxiliary modes must be prepared in state (19), which is simply the product vacuum state $`|0_3|0_4`$ processed by a controlled-not gate $`\mathrm{e}^{i\widehat{x}_3\widehat{p}_4}`$. This suggests that the noise that inevitably arises in the cloning of the input mode is simply linked to the vacuum fluctuations of the two auxiliary modes. This work was supported in part by DARPA/ARO under grant # DAAH04-96-1-3086 through the QUIC Program. N. J. C. is grateful to Samuel Braunstein, Jonathan Dowling, and Serge Massar for very helpful comments on this manuscript.
no-problem/9909/hep-lat9909078.html
ar5iv
text
# First Signs for String Breaking in Two-Flavor QCD ## 1 INTRODUCTION During the past few years there has been renewed interest in the important phenomenon of string breaking (SB), which is predicted by QCD, but which lattice QCD simulations for a long time have failed to show conclusively. String breaking is observed as a leveling off of the static quark-antiquark potential at large separation. The potential is the separation-dependent ground-state eigenvalue of the QCD hamiltonian in the presence of the static quark-antiquark pair. Traditionally, this eigenvalue was sought in the Wilson loop observable, which is the expectation value of the transfer matrix on a “string” state with the string of color flux connecting the static quark and antiquark. Experience has shown, not surprisingly, that the string state ($`S`$) is a very poor variational ansatz for a state that looks more like two static-light mesons. Including an admixture of a two-meson component ($`M`$) should help . In the following we report on preliminary results that have been obtained using the relevant string-string and string-meson operators for QCD. ## 2 STUDYING STRING BREAKING ON THE LATTICE Our conventional Wilson loop is computed with APE smearing of the space-like gauge links. In hamiltonian language the expectation value of this operator is the transfer matrix $`G_{SS}(R,T)`$ between an initial and final state $`S`$ consisting of a static quark-antiquark pair separated by a fat string of color flux. We enlarge the space by including a meson-antimeson state $`M`$ with an extra light quark in the vicinity of the static antiquark and an extra light antiquark in the vicinity of the static quark. Thus we also compute the additional transfer matrix elements $`G_{MM}(R,T)`$, $`G_{MS}(R,T)`$ and $`G_{SM}(R,T)`$. They are diagrammed in Figures 1 and 2. In principle, both channels couple to a common set of eigenvalues. 
At large $`T`$ we expect to reach the ground state, defined by the largest generalized eigenvalue $$G(R,T+1)u(R,T)=\lambda (R,T)G(R,T)u(R,T)$$ (1) The potential is then given by $`V(R)=-\mathrm{lim}_{T\mathrm{}}\mathrm{log}|\lambda (R,T)|`$. The vector $`u(R,T)`$ defines the variationally optimum admixture of $`S`$ and $`M`$ with the largest overlap with the ground state. A key unitarity condition is that the eigenvalues and eigenstates approach a constant with increasing $`T`$. This condition together with a demonstration of a smooth transition with increasing $`R`$ between a string-dominated state and a two-meson-dominated state constitutes a true test of string breaking and should distinguish a quenched calculation from a proper calculation with dynamical quarks. ## 3 NUMERICAL METHOD To maximize statistics we generate “all-to-all” propagators for the light quark, using a Gaussian random source method. Results reported here are based on 15 such sources per gauge configuration, but we plan to increase this number. When constructing operators involving both staggered fermions and gauge links, one has to pay careful attention to staggered fermion phases. A consistent treatment results from interpreting the static quark as an infinitely massive staggered quark. For example, a hopping parameter expansion around an on-axis $`R\times T`$ rectangular path gives, in addition to the Wilson-loop gauge-link product, a net phase factor $`(-1)^{RT}\times (-1)^{R+T}`$, independent of the Dirac phase conventions. We use a similar construction to get the phases for the nonclosed gauge link products in the diagrams of Figures 1 and 2. In that case the Dirac phase convention for the gauge link products must be consistent with that of the light quark. A peculiar consequence of this construction is that the transition matrix elements must vanish for off-axis displacements $`\stackrel{}{R}`$ that have more than one odd Cartesian-displacement component. 
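The variational step in Eq. (1) can be sketched with synthetic data. In the toy model below the energy levels and the overlap matrix of the $`S`$ and $`M`$ operators are invented for illustration (they are not taken from the simulation); solving the generalized eigenvalue problem recovers the assumed ground-state energy:

```python
import numpy as np

# Two assumed energy levels and a hypothetical overlap matrix Z of the
# string (S) and two-meson (M) operators onto the energy eigenstates.
E = np.array([0.5, 0.9])
Z = np.array([[0.8, 0.6],
              [0.6, -0.8]])

def G(T):
    # Synthetic 2x2 correlator matrix: G(T) = Z diag(exp(-E*T)) Z^T
    return Z @ np.diag(np.exp(-E * T)) @ Z.T

# Generalized eigenvalue problem G(T+1) u = lambda G(T) u,
# solved here as an ordinary eigenproblem of G(T)^-1 G(T+1).
T = 4
lam = np.linalg.eigvals(np.linalg.solve(G(T), G(T + 1))).real

V = -np.log(lam.max())  # static potential from the largest eigenvalue
print(V)  # ≈ 0.5, the assumed ground-state energy
```

In this noise-free toy the eigenvalues are exactly $`e^{E}`$ at every $`T`$; with Monte Carlo data one would instead watch for a plateau in $`T`$, as described above.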
## 4 LATTICE SIMULATION We use a set of stored configurations of size $`20^3\times 24`$ at $`\beta `$=5.415 generated with two flavors of dynamical staggered fermions of mass $`ma=0.0125`$. This set gives a ratio $`m_\pi /m_\rho \approx 0.358`$ and has a lattice spacing of about 0.17 fm, which gives a spatial lattice size of about 3.4 fm and a temporal size of about 4 fm. For the light fermions in the static-light mesons we use the same parameters as for the dynamical fermions. With the choice of 15 random sources we have currently analyzed about 70 out of the 200 archived configurations. ## 5 RESULTS In Figure 3 we show the results for the two-meson to two-meson transition $`G_{MM}`$. For short distances the operator behaves like the Wilson loop, with a linearly rising value, while for distances greater than 3 in lattice units it starts to level off. The dashed line is twice the mass of the heavy-light meson. The results for the transition matrix element $`G_{MS}`$ shown in Figure 4 are similar within large errors, with the energy leveling off at $`Ra\approx 5`$ in lattice units. In Figure 5 we show the combined results — including those for the pure Wilson loop correlator. The string-like behavior of the $`G_{MM}`$ and, especially, of the $`G_{MS}`$ correlators at short distances is clearly visible. ## 6 CONCLUSIONS Our present results with low statistics show a behavior expected with string breaking. The cross-over region is found to be at a distance of $`Ra=5`$–6, or about 0.8–1.1 fm. However, to demonstrate string breaking convincingly, one must find a smooth transition with increasing $`R`$ between a string-dominated state and a two-meson-dominated state and demonstrate that the eigenvalues and eigenvectors of the transfer matrix approach a constant at large $`T`$. Work in this direction is currently underway. We thank our colleagues of the MILC Collaboration for their help. 
This work is supported by the US National Science Foundation and Department of Energy and used computer resources at the San Diego Supercomputer Center (NPACI) and the University of Utah (CHPC).
no-problem/9909/astro-ph9909339.html
ar5iv
text
# Polarization Evolution in Strong Magnetic Fields ## 1 Introduction Understanding the structure of the magnetic fields surrounding neutron stars may provide a key to developing models for the radio and X-ray emission from pulsars, pulsar spindown, soft-gamma repeaters and the generation of the magnetic fields themselves. Although the magnetic field is instrumental in models of many phenomena associated with neutron stars, measuring its structure over a range of radii is problematic. Observations of the thermal emission from the surface may constrain the magnetic field geometry near the star, and the slowing of the pulsar’s rotation may yield an estimate of the strength of the field near the speed-of-light cylinder. Connecting these regimes is difficult. The intense magnetic fields associated with neutron stars influence many physical processes – cooling (\[Heyl & Hernquist 1997c\]; \[Shibanov et al. 1995\]), atmospheric emission (\[Rajagopal, Romani & Miller 1997\]; \[Pavlov et al. 1994\]) and the insulation of the core (\[Heyl & Hernquist 1998b\]; \[Heyl & Hernquist 1998a\]; \[Schaaf 1990\]; \[Hernquist 1985\]). Even stronger fields, such as those thought to exist near AXPs and SGRs, alter the propagation of light through the magnetosphere by way of quantum-electrodynamic (QED) processes and may further process the emergent radiation (\[Heyl & Hernquist 1997a\]; \[Heyl & Hernquist 1997b\]; \[Baring 1995\]; \[Baring & Harding 1995\]; \[Adler 1971\]), and distort our view of the neutron star surface (\[Shaviv, Heyl & Lithwick 1999\]). Even for weaker fields, QED renders the vacuum anisotropic. The speed at which light travels through the vacuum depends on its direction, polarization and the local strength of the magnetic field. Although for neutron stars with $`B<10^{14}`$ G this effect is too weak to grossly affect images and light curves of these objects, it is strong enough to decouple the propagating modes through the pulsar magnetosphere. 
To lowest order in the ratio of the photon energy to the electron rest mass-energy, the index of refraction of a photon in a magnetic field is independent of frequency. Near the pair-production threshold, the photon propagation adiabatically merges with a positronium state (\[Shabad & Usov 1986\]). However, well above the pair-production threshold for weak fields ($`m_ec^2\ll E\ll m_ec^2\times 4.4\times 10^{13}\text{G}/B`$) the low energy results again provide a good approximation (\[Tsai & Erber 1975\]). In the context of the field surrounding a neutron star, only photons with sufficiently high wavenumbers (the optical and blueward) will travel through a portion of the rapidly weakening magnetic field without their two polarization modes mixing. At radio frequencies, the plasma surrounding the neutron star produces a similar effect (e.g. \[Cheng & Ruderman 1979\], \[Barnard 1986\]). One would expect that radio emission will be initially polarized according to the direction of the local magnetic field. When one observes a pulsar at a particular instant, one sees emission from regions with various magnetic field directions; therefore, one would expect the polarization to cancel out substantially. However, one finds that pulsars exhibit a significant linear polarization. As the polarized radiation travels from its source, its polarization direction changes as the local magnetic field direction changes. At a distance from the star that is large when compared to its radius, the local magnetic field in the plane of the sky is parallel across the observed portion of the star. Since the field changes gradually, the polarization modes are decoupled, and the disparate linear polarizations can add coherently. Previous authors have focussed on the propagation of polarized radio waves through the magnetosphere. At these frequencies, the vacuum polarization is safely neglected. Furthermore, they have assumed that the coupling of the two polarization modes occurs instantaneously. 
In this paper, we will treat the problem of vacuum polarization in particular and how the gradual coupling of the polarization modes affects the final polarization of the emergent radiation. At sufficiently high frequency the plasma only negligibly affects the radiation as it travels through the magnetosphere. The precise frequency at which the vacuum birefringence begins to dominate depends on the charge density of the magnetospheric plasma. If we assume the Goldreich-Julian value (\[Goldreich & Julian 1969\]), we find that for $$E_{\text{photon}}>0.035\text{eV}\left(\frac{B}{10^{12}\text{G}}\frac{P}{1\text{sec}}\frac{n_{\text{GJ}}}{n_e}\right)^{1/2},$$ (1) the vacuum contribution to the birefringence dominates that of the plasma (the modes collapse where equality obtains (\[Mészáros 1992\])). Our study of the vacuum-dominated regime, the optical and blueward, complements the work of \[Cheng & Ruderman 1979\] and \[Barnard 1986\] and will help to interpret observations of high-energy polarized radiation from neutron stars. ## 2 Small Amplitude Waves in the QED Vacuum \[Kubo & Nagata 1981\] (also \[Kubo & Nagata 1983\]) derive the equation of motion of polarization direction on the Poincaré sphere as light travels through an inhomogeneous birefringent medium. Since their results assume that the medium is polarized but not magnetized, we first extend them to include magnetization. The general equations derived describe how polarized radiation travels through any birefringent medium in the limit of geometric optics. We then focus on the propagation of high frequency radiation through pulsar magnetospheres and how measurements of the polarization of this radiation can constrain both the structure of the magnetic field and the emission process of the radiation. This extended formalism is more than adequate to also describe the plasma induced birefringence of radio waves. This could be used to extend the works of \[Cheng & Ruderman 1979\] and \[Barnard 1986\]. 
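Equation (1) is straightforward to evaluate; the helper below simply encodes the quoted scaling (the function name and the Crab-like parameter values are our own illustrative choices):

```python
import math

def vacuum_dominance_threshold_eV(B_gauss, P_sec, ne_over_nGJ=1.0):
    """Photon energy above which vacuum birefringence dominates the plasma
    contribution, per Eq. (1); assumes the 0.035 eV prefactor quoted in the
    text for a Goldreich-Julian charge density."""
    return 0.035 * math.sqrt((B_gauss / 1e12) * P_sec / ne_over_nGJ)

# Canonical pulsar: B = 1e12 G, P = 1 s  ->  0.035 eV (far infrared)
print(vacuum_dominance_threshold_eV(1e12, 1.0))

# Crab-like parameters (illustrative): B ~ 1.9e12 G, P = 0.033 s
print(vacuum_dominance_threshold_eV(1.9e12, 0.033))  # ≈ 0.0088 eV
```

For everything blueward of the far infrared, then, the vacuum dominates for typical pulsar parameters, which is why the analysis below drops the plasma term.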
\[Kubo & Nagata 1983\] find that the evolution of the polarization of a wave traveling through a birefringent and dichroic medium in the limit of geometric optics is given by $$\frac{\partial 𝐬}{\partial x_3}=\widehat{𝛀}\times 𝐬+\left(\widehat{𝐓}\times 𝐬\right)\times 𝐬,$$ (2) where $`x_3`$ is the direction of propagation, $`𝐬`$ is the normalized Stokes vector (\[Jackson 1975\]), and $`\widehat{𝛀}`$ and $`\widehat{𝐓}`$ are the birefringent and dichroic vectors. The Stokes vector consists of the four Stokes parameters, $`S_0,S_1,S_2`$ and $`S_3`$. The vector $`𝐬`$ consists of $`S_1/S_0,S_2/S_0`$ and $`S_3/S_0`$. Waves traveling through a magnetized (or electrified) vacuum are best treated by linearizing the constitutive relations about the external field. \[Heyl & Hernquist 1997b\] find that the permeability and dielectric tensors consist of an isotropic component plus an added component along the direction of the external field. Using the dielectric and permeability tensors of a magnetized vacuum in the formalism of \[Kubo & Nagata 1983\] yields $$\widehat{𝛀}=\frac{k_0}{2\sqrt{\mu _i\epsilon _i+1/2\left(\mu _i\epsilon _f+\mu _f\epsilon _i\right)\mathrm{sin}^2\theta }}\left(\mu _i\epsilon _f-\mu _f\epsilon _i\right)\mathrm{sin}^2\theta \left[\begin{array}{c}\mathrm{cos}2\varphi \\ \mathrm{sin}2\varphi \\ 0\end{array}\right].$$ (3) where $`\theta `$ is the angle between the direction of propagation ($`𝐤`$) and the external field and $`\varphi `$ is the angle between the component of the external field perpendicular to $`𝐤`$ and the $`x`$-axis defined by the observer. $`\mu _i`$ and $`\mu _f`$ are the isotropic and along-the-field components of the permeability tensor. $`\epsilon _i`$ and $`\epsilon _f`$ are the equivalent dielectric tensors. 
For the case of vacuum QED with an external magnetic field to one loop, we find that the amplitude of $`\widehat{𝛀}`$ is proportional to the difference between the indices of refraction for the two polarization states: $`\mathrm{\Omega }/k_0`$ $`=`$ $`\mathrm{\Delta }n=n_{\parallel }-n_{\perp }`$ (4) $`=`$ $`{\displaystyle \frac{\alpha }{4\pi }}\mathrm{sin}^2\theta \left[X_0^{(2)}\left({\displaystyle \frac{1}{\xi }}\right)\xi ^{-2}+X_0^{(1)}\left({\displaystyle \frac{1}{\xi }}\right)\xi ^{-1}-X_1\left({\displaystyle \frac{1}{\xi }}\right)\right]`$ (5) to lowest order in $`\alpha `$, the fine-structure constant. $`\xi =B/B_{\text{QED}}`$ ($`B_{\text{QED}}\approx 4.4\times 10^{13}`$ G) and the functions $`X_0(x)`$ and $`X_1(x)`$ are defined in \[Heyl & Hernquist 1997a\]. An external electric field yields similar results. However, in this case the vacuum is also dichroic so the vector $`\widehat{𝐓}`$ is nonzero. In the weak magnetic field limit ($`\xi \lesssim 0.5`$) we obtain, $$n_{\parallel }-n_{\perp }=\frac{\alpha }{4\pi }\frac{2}{15}\xi ^2\mathrm{sin}^2\theta ,$$ (6) and the strong field limit ($`\xi \gtrsim 0.5`$) yields $$n_{\parallel }-n_{\perp }=\frac{\alpha }{4\pi }\frac{2}{3}\xi \mathrm{sin}^2\theta .$$ (7) ### 2.1 Exact Solutions \[Kubo & Nagata 1983\] found that Equation 2 yields exact solutions for restricted values of $`\widehat{𝛀}`$ and $`\widehat{𝐓}`$. In the case of a uniformly magnetized vacuum, $`\widehat{𝐓}=0`$ and $`\widehat{𝛀}`$ is constant. In this case, the polarization vector $`𝐬`$ traces a circle on the Poincaré sphere about the vector $`\widehat{𝛀}`$ at a rate of $`|\widehat{𝛀}|`$. \[Kubo & Nagata 1981\] examine the case where $`\widehat{𝛀}`$ is constant in magnitude but satisfies $$\frac{\partial \widehat{𝛀}}{\partial x_3}=𝚼\times \widehat{𝛀}$$ (8) where $`𝚼`$ is a constant. In this case $`\widehat{𝛀}`$ rotates about $`𝚼`$ at a rate of $`|𝚼|`$. 
If we follow the equations in a rotating coordinate system such that in it $`\widehat{𝛀}^{}`$ is constant, we find that $`𝐬^{}`$ satisfies the following equation $$\frac{\partial 𝐬^{}}{\partial x_3}=\widehat{𝛀}_{\text{eff}}\times 𝐬^{}$$ (9) where $$\widehat{𝛀}_{\text{eff}}^{}=\widehat{𝛀}^{}-𝚼.$$ (10) This equation holds even if $`𝚼`$ is not constant. However, if $`𝚼`$ is constant, we obtain a new exact solution where $`𝐬`$ circles a guiding center displaced from $`\widehat{𝛀}`$ which in turn rotates about $`𝚼`$. If we take $`𝐬\parallel \widehat{𝛀}`$ initially and $`𝚼\nparallel \widehat{𝛀}`$, we find that $`𝐬`$ develops a component perpendicular to $`\widehat{𝛀}`$. In the case of vacuum QED, $`\widehat{𝛀}`$ lies in the $`12`$-plane. If the magnetic field rotates uniformly in the plane transverse to the wave, we find that $`𝐬`$ will leave the $`12`$-plane and a circularly polarized component will develop. ### 2.2 Adiabatic Approximation If the parameters describing the motion of a system vary slowly compared to the characteristic frequency of the system, the system evolves adiabatically such that at a given time it executes a motion given by the instantaneous values of the parameters as if they were static. The exact solution given by Equation 9 has this feature if $`|\widehat{𝛀}|\gg |𝚼|`$. If this limit applies, $`\widehat{𝛀}_{\text{eff}}`$ is nearly parallel to $`\widehat{𝛀}^{}`$ and $`𝐬`$ circles the instantaneous guiding center $`\widehat{𝛀}`$. Furthermore, if $`𝐬`$ is initially parallel or antiparallel to $`\widehat{𝛀}`$, it will remain so (corresponding to polarizations that are parallel or perpendicular to the direction of the magnetic field). That is, the polarization modes are decoupled, and the polarization direction follows the direction of the magnetic field. We can also build an adiabatic approximation onto the exact solution of Equation 9 by allowing the magnitude of $`\widehat{𝛀}`$ to vary. We take a wave polarized parallel to the initial value of $`\widehat{𝛀}`$. 
As $`\widehat{𝛀}`$ rotates about $`𝚼`$ and decreases in magnitude, $`𝐬`$ rotates about $`\widehat{𝛀}_{\text{eff}}`$ and follows the instantaneous direction of $`\widehat{𝛀}_{\text{eff}}`$ as long as $$\left|\widehat{𝛀}\left(\frac{1}{|\widehat{𝛀}|}\frac{\partial |\widehat{𝛀}|}{\partial x_3}\right)^{-1}\right|\gg 1.$$ (11) Numerical integration of Equation 2 for $`|\widehat{𝛀}|=Ax_3^{-6}`$ and $`𝚼=(1/10)\widehat{x}_3`$ ($`\widehat{𝛀}`$ rotates by one radian after the photon has travelled 10 units of distance) shows that the polarization follows the analytic solution described in the previous paragraph for $$\left|\widehat{𝛀}\left(\frac{1}{|\widehat{𝛀}|}\frac{\partial |\widehat{𝛀}|}{\partial x_3}\right)^{-1}\right|>0.5$$ (12) and then freezes for values of $`A`$ ranging from 10 to $`10^8`$. Figure 1 depicts both the numerical results and the analytic approximation. Generically, if the polarization modes of the medium are linear, and the wave begins in one of the modes, the final direction of the polarization projected into the $`S_1S_2`$-plane will depend on how much the field direction has changed by the time of mode coupling. The circular component of the polarization depends on how fast the field direction is changing at the time of mode coupling. By applying these results to a magnetic dipole in vacua, we find that an outgoing photon’s polarization will follow the analytic solution until $$\frac{3}{2}\frac{\mathrm{d}\mathrm{ln}\mathrm{\Delta }n}{\mathrm{d}\mathrm{ln}B}\frac{B^{1/3}}{\mathrm{\Delta }n}=k_0R_0B_0^{1/3},$$ (13) after which its polarization in the observer’s system freezes. The left side of the equality describes the environment at the point when the polarization freezes; the right side depends on the point of emission of the photon. Figure 2 shows the magnetic field strength at coupling as a function of photon energy for typical pulsar parameters. 
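The toy integration described above is short enough to reproduce. In the sketch below the integrator settings are our own choices and $`A=100`$ lies within the quoted range; the Stokes vector starts in one linear mode, follows the rotating birefringence vector adiabatically, and then freezes, acquiring a circular component:

```python
import numpy as np
from scipy.integrate import solve_ivp

A, rate = 100.0, 0.1   # |Omega| = A x^-6; Omega rotates at 0.1 rad per unit length

def omega(x):
    phi = rate * x
    return A * x**-6 * np.array([np.cos(phi), np.sin(phi), 0.0])

def rhs(x, s):
    # ds/dx = Omega x s  (Equation 2 with the dichroic term dropped)
    return np.cross(omega(x), s)

s0 = omega(1.0) / np.linalg.norm(omega(1.0))   # start parallel to Omega
sol = solve_ivp(rhs, [1.0, 100.0], s0, rtol=1e-10, atol=1e-12)
s = sol.y[:, -1]

# |s| is conserved; s[2] (circular polarization) is generated near decoupling,
# where |Omega| falls to the rotation rate, i.e. x ~ (A/rate)^(1/6) ~ 3.2.
print(np.linalg.norm(s), s[2])
```

Beyond the decoupling point the birefringence is negligible and the Stokes vector no longer changes, which is the freezing seen in Figure 1.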
If the modes begin to mix where $`B\ll B_{\text{QED}}`$ (this is appropriate for all but the lowest energy photons near the most strongly magnetized neutron stars), we obtain $`{\displaystyle \frac{B_{\text{couple}}}{B_0}}`$ $`=`$ $`\left[{\displaystyle \frac{\alpha }{4\pi }}{\displaystyle \frac{2}{45}}\left({\displaystyle \frac{B_0}{B_{\text{QED}}}}\right)^2k_0R_0\right]^{-5/3}`$ (14) $`=`$ $`1.95\times 10^{-5}\left({\displaystyle \frac{E}{1\text{eV}}}\right)^{-5/3}\left({\displaystyle \frac{B_0}{10^{12}\text{G}}}\right)^{-10/3}\left({\displaystyle \frac{R_0}{10^6\text{cm}}}\right)^{-5/3}.`$ (15) If this ratio is less than unity for a photon, the photon will travel with its polarization modes decoupled during some portion of its journey away from the star. That is, if $$E>1.49\times 10^{-3}\text{eV}\left(\frac{B_0}{10^{12}\text{G}}\right)^{-2}\left(\frac{R_0}{10^6\text{cm}}\right)^{-1},$$ (16) the vacuum will decouple the polarization modes. For parameters typical to neutron stars, the polarization modes for all photons with $`\lambda <800\mu `$m will be decoupled at least as they begin their journey from the vicinity of the neutron star. For photons of such low energy, however, the mode coupling induced by the plasma is likely to dominate (e.g. \[Cheng & Ruderman 1979\]). These effects may also be important for photons travelling through the magnetospheres of strongly magnetized white dwarfs. The most strongly magnetized white dwarfs have $`B\sim 10^9`$ G and $`R_0\sim 10^9`$ cm; in the magnetospheres of these stars, the polarization modes of photons more energetic than 1.5 eV will be decoupled by the vacuum birefringence. ## 3 Rotating Pulsar Magnetospheres The photons that we observe from neutron stars have to travel through a portion of the neutron star’s magnetosphere before reaching us. If they travel through a region where Equation 11 holds, the photon’s polarization relative to the observer’s axes will change as the magnetic field direction changes. 
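The decoupling condition can be packaged as a one-line scaling relation. The helper below assumes a threshold scaling $`EB_0^2R_0^1`$ with a $`1.49\times 10^{-3}`$ eV prefactor at $`B_0=10^{12}`$ G and $`R_0=10^6`$ cm, the values consistent with both the quoted $`\lambda <800\mu `$m statement for neutron stars and the $``$1.5 eV figure for white dwarfs:

```python
def decoupling_threshold_eV(B0_gauss, R0_cm):
    """Minimum photon energy whose polarization modes are vacuum-decoupled
    near the star; the 1.49e-3 eV prefactor is the value consistent with
    the lambda < 800 micron statement in the text (an assumption here)."""
    return 1.49e-3 * (B0_gauss / 1e12) ** -2 * (R0_cm / 1e6) ** -1

# Typical neutron star: threshold in the far infrared (~830 microns)
E_ns = decoupling_threshold_eV(1e12, 1e6)
print(E_ns, 1.24 / E_ns)   # eV, and wavelength in microns (lambda = 1.24 eV um / E)

# Strongly magnetized white dwarf: B ~ 1e9 G, R0 ~ 1e9 cm
print(decoupling_threshold_eV(1e9, 1e9))  # ≈ 1.49 eV, i.e. the ~1.5 eV quoted
```

The stronger field of the neutron star more than compensates for its smaller radius, which is why the white-dwarf threshold lands in the optical rather than the far infrared.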
If we examine a photon which leaves the surface of the star, once it has travelled a distance of several radii it will be travelling approximately radially. If the magnetic moment of the star remains fixed during the journey and the modes remained decoupled, the polarization directions of all the photons travelling in a particular direction and polarization mode will align with each other. The plasma in the vicinity of a neutron star is polarized by passing waves of sufficiently low frequencies. \[Cheng & Ruderman 1979\] used this effect to explain the high linear polarization of radio emission from pulsars even though one would expect the polarization modes where the radiation is produced to vary from place to place. The same effect will operate at high frequencies through the vacuum polarization of QED. High frequency photons travelling through a pulsar magnetosphere may travel a large distance before the modes couple. During this travel, the magnetic field of the pulsar may rotate a significant amount, thereby changing the polarization modes in the observer’s frame as the photon travels. Since high energy photons travel further than less energetic ones, the higher energy photons will have their polarizations dragged further by the rotating magnetic field. ### 3.1 Analytic Treatment If we assume that the rotation rate of the projection of the magnetic field onto the plane transverse to a photon’s propagation changes sufficiently slowly, we can apply the results of § 2.2 to calculate the final polarization of the photon. What is required is the direction of magnetic field at the point of recoupling and its rate of change. This analytic treatment will be restricted to photons travelling in the radial direction. For a significant phase lag to develop, the polarization modes must remain decoupled until well away from the star. Here outgoing photons emitted from near the stellar surface will have approximately radial trajectories. 
Furthermore, at these distances both gravitational light bending and magnetic lensing (\[Shaviv, Heyl & Lithwick 1999\]) may be neglected. The photon’s trajectory will be characterized by the angles $`\xi `$ and $`\theta `$, the longitude and colatitude where the photon left the star. We shall take $`\varpi `$, the phase of the star’s rotation, to be zero when the photon leaves the surface. The magnetic pole lies at zero longitude and colatitude $`\mathrm{\Psi }`$. As the photon travels away from the neutron star, the phase $`\varpi `$ increases and the local magnetic field direction changes. The criterion for recoupling in the weak-field limit when travelling through a dipole field is $$\frac{2}{15}\frac{\alpha }{4\pi }\left(\frac{B_{0,\text{equator}}}{B_{\text{QED}}}\right)^2R_0^6k_0\frac{\mathrm{sin}^2\alpha }{r^6}\left|\frac{\mathrm{d}\mathrm{sin}^2\alpha }{\mathrm{d}r}\frac{1}{\mathrm{sin}^2\alpha }-\frac{6}{r}\right|^{-1}=\frac{1}{2}$$ (17) In this equation, the angle $`\alpha `$ designates the magnetic colatitude of the photon, which changes as the star rotates underneath it. It satisfies the following expression $$\mathrm{cos}\alpha =\mathrm{sin}\theta \mathrm{cos}(\xi +\varpi )\mathrm{sin}\mathrm{\Psi }+\mathrm{cos}\mathrm{\Psi }\mathrm{cos}\theta .$$ (18) The second important angle is the angle between the observer’s axes and the local magnetic field direction in the plane of the sky. For a magnetic dipole, the projection of the local magnetic direction and the projection of the magnetic moment ($`\widehat{𝐦}`$) onto the tangential plane are collinear. Unless the line of sight is directed down the rotation axis, we can use the projection of the rotation axis ($`\widehat{𝐳}`$) as one of the reference axes for measuring the polarization of the radiation. 
These projected vectors are defined by $`𝐳_\text{p}`$ $`=`$ $`\widehat{𝐳}-\widehat{𝐫}\mathrm{cos}\theta `$ (19) $`𝐦_\text{p}`$ $`=`$ $`\widehat{𝐦}-\widehat{𝐫}\mathrm{cos}\alpha .`$ (20) The travelling photon does not distinguish between a magnetic field pointing in one direction or in the exactly opposite direction (see Equation 3); therefore, we only need to know the angle between $`𝐳_\text{p}`$ and $`𝐦_\text{p}`$ modulo $`\pi `$. Calculating the cross product suffices to give the magnitude of the angle, $$\left(𝐳_\text{p}\times 𝐦_\text{p}\right)\cdot \widehat{𝐫}=z_\text{p}m_\text{p}\mathrm{sin}\varphi ,$$ (21) which yields $$\mathrm{tan}\varphi =\frac{\mathrm{sin}(\xi +\varpi )}{\mathrm{cos}\theta \mathrm{cos}(\xi +\varpi )-\mathrm{cot}\mathrm{\Psi }\mathrm{sin}\theta }.$$ (22) $`𝚼`$ in this case is given by $$𝚼=2\frac{\mathrm{d}\varphi }{\mathrm{d}\varpi }\frac{2\pi }{cP}\widehat{3}.$$ (23) So the final polarization of the photon in the observer’s frame is given by Equation 10 evaluated at the moment of recoupling. The calculation of the polarization evolution in the analytic treatment proceeds as follows: 1. Choose the colatitude of the observer ($`\theta `$), the longitude where the photon is emitted ($`\xi `$), a reference frequency and magnetic field strength. 2. Given the period of the pulsar ($`P`$), a photon’s radius is given by $$r=\frac{cP}{2\pi }\varpi +R_0.$$ (24) For a given value of $`\varpi `$, the phase, solve for $`B_{0,\text{equator}}^2k_0`$ assuming that Equation 17 is satisfied, and calculate the magnitude of $`\widehat{𝛀}`$ at recoupling. 3. Substitute this value of $`\varpi `$ into Equation 22 to calculate the direction of $`\widehat{𝛀}`$ at recoupling. 4. Finally, calculate $`𝚼`$ at recoupling using Equation 23. The final polarization according to this analytic adiabatic approximation lies along $$\widehat{𝛀}_{\text{eff}}=\widehat{𝛀}-𝚼.$$ (25) 5. This procedure can be repeated for other values of the longitude ($`\xi `$) to generate a light curve. 
In the case where the line of sight is aligned with the rotation axis, many of these geometric considerations vanish, and the problem reduces identically to that treated in § 2.2. In this situation we can derive expressions for both the final position angle, $`\varphi `$ $`=`$ $`{\displaystyle \frac{2\pi }{cP}}\left\{{\displaystyle \frac{1}{90}}\left[\alpha {\displaystyle \frac{90^4}{\pi }}k_0R_0^6\left(\mathrm{sin}\mathrm{\Psi }{\displaystyle \frac{B_{0,\text{equator}}}{B_{\text{QED}}}}\right)^2\right]^{1/5}-R_0\right\}`$ (26) $`\approx `$ $`7.7\times 10^{-4}\left(\mathrm{sin}\mathrm{\Psi }{\displaystyle \frac{B_{0,\text{equator}}}{10^{12}\text{G}}}\right)^{2/5}\left({\displaystyle \frac{E_{\text{photon}}}{1\text{eV}}}\right)^{1/5}\left({\displaystyle \frac{P}{1\text{sec}}}\right)^{-1}\left({\displaystyle \frac{R_0}{10^6\text{cm}}}\right)^{6/5}`$ (27) and the circular component, $$s_3/|𝐬|=\frac{4\pi }{cP}\left[\left(\frac{3}{r}\right)^2+\left(\frac{4\pi }{cP}\right)^2\right]^{-1/2},$$ (28) where $`r=\varphi {\displaystyle \frac{cP}{2\pi }}+R_0.`$ (29) If $`|s_3|/|𝐬|\ll 1`$, the following approximation holds, $$s_3/|𝐬|\approx \frac{2}{3}\varphi +\frac{4}{3}\pi \frac{R_0}{cP}.$$ (30) For a more general pulsar geometry we can still use Equation 26 to characterize how radiation at various frequencies is polarized. Specifically, the position angle can be converted to a time lead for the polarization angle of high-frequency relative to low-frequency radiation. This time lead, given by $`P\varphi /(2\pi )`$, is independent of the period of the pulsar. Figure 4 plots the known radio pulsars (\[Taylor, Manchester & Lyne 1993\]) and the approximate phase lead ($`r_{\text{couple}}/r_{\text{lc}}`$) expected at 5.2 keV relative to zero-energy photons. If $`r_{\text{couple}}`$ is a large fraction of the radius of the light cylinder, it may be possible to study the geometry of the magnetic field at large distances from the pulsar. 
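For the aligned geometry the closed-form position angle is trivial to tabulate. The sketch below encodes the quoted scaling, reading the prefactor as $`7.7\times 10^{-4}`$ rad at $`P=1`$ s (our reading of the numerical coefficient), and shows that the corresponding time lead $`P\varphi /(2\pi )`$ is period-independent:

```python
import math

def position_angle_lead_rad(B12_sin_psi, E_eV, P_sec, R0_1e6cm=1.0):
    """Position-angle lead for the aligned rotator; the 7.7e-4 rad
    prefactor at P = 1 s is read from the text (an assumption here)."""
    return 7.7e-4 * B12_sin_psi**0.4 * E_eV**0.2 * R0_1e6cm**1.2 / P_sec

# Time lead P*phi/(2*pi) at 5.2 keV for three very different periods:
for P in (0.033, 1.0, 8.391):
    phi = position_angle_lead_rad(1.0, 5200.0, P)
    print(P, phi, P * phi / (2 * math.pi))   # the last column is constant
```

The constancy of the last column is just the statement in the text that the time lead is independent of the pulsar period, while the position-angle swing itself is largest for the fastest rotators.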
For the Crab pulsar the radius at coupling for 5.2 keV is about 16% of the light-cylinder radius. The millisecond pulsars, PSR J1939+2134 and PSR J1824-2452, have $`r_{\text{couple}}`$ of over 9% of $`r_{\text{lc}}`$ at 5.2 keV. ### 3.2 Numerical Treatment The analytic techniques outlined in the previous subsection provide insights to interpret and extend the results from a direct numerical integration of Equation 2. Nevertheless, a numerical integration is unavoidable if we wish to calculate the phase shifts under general conditions – when the observer’s inclination angle is nontrivial and the field is not strictly dipolar. The numerical integration rests on two basic assumptions: 1. We assume that either 1. The magnetosphere is co-rotating with the NS. This implies that the local magnetic field is at any given moment aligned with the NS, irrespective of the distance. (A deviation from this behavior will indicate where the co-rotating magnetosphere ends), or 2. The magnetic field is given by that of a magnetized conducting sphere rotating in vacua (\[Deutsch 1955\], \[Barnard 1986\]) 2. The photon is traveling radially. That is to say, the emission process takes place well within the region where the polarization states couple. With these assumptions considered, the integration is achieved in a straightforward manner. A photon is followed from an initial radius $`R_0`$ to a radius $`r_{\text{final}}`$ that is beyond the recoupling distance using the equation: $$\frac{\partial 𝐬}{\partial r}=\widehat{𝛀}\times 𝐬.$$ (31) Each radial step is integrated forward using the fourth order Runge-Kutta algorithm. To do so, one has to calculate the birefringent vector $`\widehat{𝛀}`$ using Equation 22 for the angle of the magnetic field and Equation 6 for the strength of the birefringence. To calculate this vector for the \[Deutsch 1955\] fields, we use Eq. 7 of \[Barnard 1986\]. The rotation phase of the star is related to the radial coordinate through Equation 24. 
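A stripped-down version of this integration (corotating dipole only, observer on the rotation axis, weak-field birefringence, and our own illustrative parameters) can be sketched as follows; the logarithmic radial grid resolves the steep r^-6 falloff of the birefringence near the star:

```python
import numpy as np

# RK4 integration of ds/dr = Omega x s (Eq. 31) for a corotating dipole
# viewed down the rotation axis. All parameters are illustrative (cgs
# units; roughly a 0.1 eV photon, a Crab-like star, sin(Psi) = 1).
c, R0, P = 3e10, 1e6, 0.033
B0, BQED = 1.9e12, 4.4e13
k0 = 5.07e3                         # cm^-1, roughly a 0.1 eV photon
pref = (1 / 137.036) / (4 * np.pi) * (2 / 15) * (B0 / BQED) ** 2 * k0

def Omega(r):
    # Weak-field Delta-n with B = B0 (R0/r)^3; the transverse field angle
    # rotates with the star, and angles double on the Poincare sphere.
    ang = 2 * (2 * np.pi / (c * P)) * (r - R0)
    return pref * (R0 / r) ** 6 * np.array([np.cos(ang), np.sin(ang), 0.0])

def rk4_step(r, s, h):
    f = lambda r, s: np.cross(Omega(r), s)
    k1 = f(r, s); k2 = f(r + h / 2, s + h / 2 * k1)
    k3 = f(r + h / 2, s + h / 2 * k2); k4 = f(r + h, s + h * k3)
    return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

s = np.array([1.0, 0.0, 0.0])       # one linear mode at the surface
radii = np.geomspace(R0, 1e9, 100001)
for r, r_next in zip(radii[:-1], radii[1:]):
    s = rk4_step(r, s, r_next - r)

print(np.linalg.norm(s), s[2])      # norm ~ 1; s[2] is the frozen circular component
```

As in the analytic treatment, the position angle follows the rotating field until the birefringence falls to the rotation rate, after which the Stokes vector freezes with a residual circular component.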
#### 3.2.1 The Crab pulsar The Crab pulsar is among the most thoroughly studied astronomical objects. Its fast period and moderately strong magnetic field make it an ideal example for this process. We take $`\theta =54^{\circ }`$ and $`\mathrm{\Psi }=64^{\circ }`$, which reproduces the magnitude of the polarization swing for the pulse and interpulse (\[Smith et al. 1988\]), and the field strength at the magnetic equator to be $`1.9\times 10^{12}`$ G (\[Taylor, Manchester & Lyne 1993\]). The results of this calculation are depicted in Figure 5 and Figure 6. If the field is indeed a corotating dipole, the delayed coupling of the polarization modes results in the polarization of high frequency radiation leading that of lower energies. The lead time estimate from Equation 26 works well away from the pulse or interpulse, where it slightly over- or underestimates the lead time. However, if the neutron star is surrounded by a Deutsch field, this lead is retarded slightly. Since the coupling of the modes occurs gradually, a significant circular component will develop when the modes are weakly coupled. If the radiation initially has a circular component, this initial component is washed out when one measures the circular polarization over a finite bandpass, leaving only the value of $`s_3`$ produced by the coupling. Both the Deutsch fields and the corotating dipole fields result in a significant amount of circular polarization (up to 30% of the total polarization for the corotating dipole at 5.2 keV or 7% of it at 4.5 eV). The circular polarization generated by the rotating Deutsch field peaks at about 65% of the total polarization at the highest frequency studied. #### 3.2.2 RX J0720.4-3125 RX J0720.4-3125 is an isolated neutron star candidate suspected to be close to the Earth. \[Heyl & Hernquist 1998c\], from cooling arguments and its current period of 8.391 s, estimate that it has a magnetic field of $`B\sim 10^{14}`$ G. 
From Equation 26 we estimate the time lead for a given frequency to be approximately five times larger than for the Crab pulsar. As we see from Figure 9, since the period of RX J0720.4-3125 is much larger than that of the Crab, the expected change in position angle over the larger lead time is very small. Furthermore, since the coupling of the modes occurs well within the light cylinder, the difference between the Deutsch and the corotating dipole model is negligible. Since the circular polarization that develops during coupling is proportional to the rotation frequency of the pulsar, very little circular polarization is evident in the emergent radiation – $`S_3`$ peaks at 2.7% of the total polarization, coincident with the pulse and interpulse. ## 4 Discussion The vacuum polarization induced by quantum electrodynamics decouples the polarization modes in the strong magnetic field surrounding neutron stars. As radiation travels away from the neutron star, the modes of low-frequency radiation couple first; consequently, the position angle of low-frequency emission will lag behind that of higher-frequency radiation emitted coincidently and in the same mode. Pulsars often exhibit mode switching as a function of phase and frequency. An inelegant way to avoid confusion is to examine the position angle modulo $`\pi /2`$. By measuring the position angle as a function of frequency, one can determine not only the location of the emission but also probe the structure of the magnetic field near the light cylinder using sufficiently high-frequency radiation. Previous authors have calculated a similar effect where the plasma decouples the modes at radio frequencies, assuming that the coupling occurs instantly (\[Cheng & Ruderman 1979\]; \[Barnard 1986\]). To relax this assumption, we use the equations describing the propagation of polarized radiation through a polarized and magnetized medium, under the assumptions of geometric optics.
These results extend those of \[Kubo & Nagata 1981\] to the case of a magnetized and polarized medium. Linearly polarized radiation will develop a circular component during the gradual coupling of the modes. In the case of QED in a vacuum, the propagating modes are purely linear; therefore, the decoupling will wash out any initial circular component averaged over a finite bandpass. The only circular component present in the outgoing radiation is produced during coupling and measures the ratio of the coupling radius to the radius of the light cylinder. This circular component provides an independent validation of the effect. Since high-frequency radiation is generally measured incoherently, determining the Stokes parameters becomes more difficult with increasing photon energy. Blueward of the ultraviolet, measuring circular polarization is impractical. The Spectrum-X-Gamma mission will carry a stellar x-ray polarimeter (SXRP) which will be especially sensitive at 2.6 keV and 5.2 keV, the first- and second-order Bragg reflections off of graphite. Future instruments may be able to measure both linear and circular polarization at yet higher energies. From a theoretical point of view, the study of the propagating modes of a strongly magnetized vacuum is simpler than analysing those of a strongly magnetized plasma, especially when the properties of the plasma are not well known. However, given the magnetic permeability and the dielectric permittivity of the circumpulsar plasma, these results can be applied to study the effect of plasma-induced decoupling on the properties of polarized radio emission from pulsars. We leave the details of this process for a subsequent paper; however, our general arguments apply here as well. Both the plasma-induced and vacuum-induced decoupling can be used to probe the structure of pulsar magnetospheres. A measurement of the circular polarization produced by the mode coupling yields an estimate of the radius of the coupling.
The frequency corresponding to a given degree of circular polarization measures the radial gradient of the third power of the difference between the indices of refraction of the two modes. This results from the approximate inequality Equation 12. Like the arguments used in § 2.2, it is generic to waves travelling through a birefringent medium in the limit of geometric optics. Unlike the phase-lead effect found by previous authors for the case of the plasma alone, the circular polarization provides a local probe of the pulsar magnetosphere without having to compare observations over a wide range of energy. As long as the radiation is produced in a region where the modes are decoupled, the interpretation of the circular polarization is straightforward. Measurements of the position angle of the radiation as a function of phase and frequency can also yield the same information, since the induced circular polarization is simply related to the phase lead. Furthermore, measuring all four Stokes parameters can elucidate the location of the emission process, since below a critical frequency the modes will be coupled at the point of emission and the observed position angle will be constant with frequency. The circular polarization of the first modes which do travel through a decoupled region yields the radius of the emission. The details of the interpretation will only depend weakly on the medium which decouples the modes – plasma or vacuum. We have derived equations which describe the propagation of polarized radiation through a magnetized and polarized medium. By applying these results to the strongly magnetized regions surrounding neutron stars, we have found several observational probes of pulsar magnetospheres and emission.
# Crossing the c=1 Barrier in 2d Lorentzian Quantum Gravity ## 1 Introduction It may come as a surprise to practitioners of two-dimensional gravity that there is more than one way of constructing a viable quantum theory by path-integral methods, and that there is indeed “life beyond Liouville gravity”. The new, alternative theory of 2d quantum gravity in question was first constructed as the continuum limit of an exactly soluble model of dynamically triangulated two-geometries , which could be interpreted as representing Lorentzian geometries with a causal structure and a preferred time direction. It has recently been shown that there is a whole universality class of such Lorentzian models, some of which are obtained by adding a curvature term to the gravity action or by using building blocks different from triangles in the construction of geometries . An investigation of Lorentzian gravity coupled to Ising spins led to the conclusion that in spite of strong fluctuations of the underlying geometries, the critical matter behaviour in the coupled system is governed by the Onsager exponents (which one also finds for the Ising model on a fixed, regular lattice). This immediately raises the following questions: If we continue to add matter to the system, do we eventually observe a qualitative change in the behaviour of geometry and/or matter? Is there an analogue of the $`c=1`$ barrier of Liouville quantum gravity beyond which the combined gravity-matter system degenerates? We address these and related issues below, by studying numerically 8 Ising models (corresponding to a $`c=4`$ conformal field theory) coupled to Lorentzian quantum gravity. In order to set the stage for our present investigation, let us recall some salient features of the Lorentzian gravity model . 
One idea behind the formulation of such a model is to take the Lorentzian structure seriously within a path-integral approach and in this way bridge the gap between the canonical quantization and the (Euclidean) path-integral formulation of gravity. The Lorentzian aspects of the model are two-fold: compared with the Euclidean case, the state sum is taken over a restricted class of triangulated two-geometries, namely, those which are generated by evolving a one-dimensional spatial slice and allow for the introduction of a causal structure. Secondly, the Lorentzian propagator is obtained by a suitable analytic continuation in the coupling constant. During time evolution, we do not permit the spatial slice to split into several components (i.e. change its topology), because the resulting space-time geometry would not be compatible with our discrete notion of causality. (In a continuum picture, the local lightcone structure associated with a Lorentzian metric must necessarily become degenerate at such branching points.) This is exactly the situation described by usual canonical (quantum) gravity. In the pure gravity model, the loop-loop correlator and various geometric properties can be calculated exactly and compared to Euclidean 2d quantum gravity, as given by Liouville gravity or 2d quantum gravity defined by dynamical triangulations or matrix models. The two models turn out to be inequivalent. For example, the Hausdorff dimension of the Lorentzian quantum geometry is $`d_H=2`$, indicating a much smoother behaviour than that of the Euclidean case where $`d_H=4`$. The difference between the fractal structures of Lorentzian and Euclidean quantum gravity can be traced to the absence or presence of so-called baby universes. These are outgrowths of the geometry taking the form of branchings-over-branchings, which are known to dominate the typical geometry contributing to the Euclidean state sum. 
Such branchings and associated topology changes with respect to the preferred spatial slicing are absent from the histories contributing to the Lorentzian state sum. Baby universes, i.e. discrete evolution moves resulting in spatial topology changes, may be re-introduced by hand in the Lorentzian formulation (if one is willing to give up causality). This corresponds to “switching on” an additional term in the differential equation for the propagator, in such a way that the scaling limit must be modified in order to produce well-defined continuum physics. A further difference between 2d Lorentzian and Euclidean gravity is revealed by coupling them to conformal matter. In the Euclidean case this is governed by the famous KPZ scaling relations. They describe how the critical exponents of a conformal field theory change when it is coupled to Euclidean quantum gravity, and how the entropy exponent $`\gamma _{str}`$ for two-geometries (the so-called string susceptibility) changes due to their coupling to the conformal matter fields. In 2d Lorentzian gravity, the continuum limit of the quantum geometry was found to be unchanged under coupling to a $`c=1/2`$ conformal field theory, in the form of an Ising model at its critical point<sup>1</sup><sup>1</sup>1The critical point of the Ising model we refer to is the critical point of the combined Ising-gravity system. See for details.. The Hausdorff dimension remains equal to two, and an appropriately rescaled distribution of spatial volumes coincides with the distribution found in pure Lorentzian gravity. In addition, the values of the critical exponents of the Ising model agree with those of the Ising model on a regular lattice. In other words, coupling the Ising model to Lorentzian gravity does not affect the nature of its (second-order) phase transition. Summarizing, one may say that the coupling between $`c=1/2`$ conformal matter and geometry in Lorentzian quantum gravity is weak.
To avoid a frequent misunderstanding, we must emphasize that this is not a trivial consequence of the fact that $`d_H=2`$ in Lorentzian quantum gravity. Although a flat space-time implies $`d_H=2`$ for the Hausdorff dimension, the converse is by no means true. In fact, the geometry does fluctuate strongly in Lorentzian gravity, as was demonstrated in . There are other examples to illustrate that the Hausdorff dimension is only a very rough measure of geometry. Consider 2d Euclidean quantum gravity coupled to conformal field theories with $`c>1`$: in these models the geometry fluctuates so wildly that the two-dimensional surfaces are torn apart and degenerate into so-called branched polymers, which again have $`d_H=2`$, the same as for smooth surfaces! An important conclusion one can draw from the results obtained in is that the strong coupling between Euclidean quantum gravity and conformal matter is directly caused by the presence of baby universes. Various qualitative arguments have been put forward in the past to support this idea, which is of course not new. However, one never had a model which prohibited the creation of baby universes, and which could be used to verify explicitly that the coupling between geometry and matter in this case is weak. The observed weak-coupling behaviour in the Lorentzian model opens up the intriguing possibility that one might be able to cross the $`c=1`$ barrier in Lorentzian 2d quantum gravity coupled to matter. This is the issue we will study numerically in the remainder of this article, by coupling eight Ising models to Lorentzian quantum gravity, corresponding at the critical point of the combined system to a $`c=4`$ conformal field theory. 
## 2 Coupling gravity to multiple Ising spins In our previous work we have defined the two-loop function of Lorentzian 2d gravity as the state sum $$G(\lambda ,t)=\underset{T\in 𝒯_t}{\sum }e^{-\lambda N_T},$$ (1) where the summation is over all triangulations $`T`$ of cylindrical topology with $`t`$ time-slices, $`N_T`$ counts the number of triangles in the triangulation $`T`$, and $`\lambda `$ is the bare cosmological constant. Since we are primarily interested in the bulk behaviour of the gravity-Ising system, we use periodic boundary conditions by identifying the top and bottom spatial slices of the cylindrical histories contributing to the state sum (1). Clearly this is not going to affect the local properties of the model. A geometry characterized by a toroidal triangulation $`T`$ of volume $`N_T`$ contains $`N_T`$ time-like links, $`N_T/2`$ space-like links, $`N_T/2`$ vertices and $`3N_T/2`$ nearest-neighbour pairs. The partition function of $`n`$ Ising models coupled to 2d Lorentzian quantum gravity is given by $$G(\lambda ,t,\beta )=\underset{T\in 𝒯_t}{\sum }e^{-\lambda N_T}Z_T^n(\beta ),$$ (2) where $`T`$ is now a triangulation with toroidal topology. The partition function for a single Ising model on the triangulation $`T`$ is denoted by $`Z_T(\beta )`$, where the spins are located at the vertices of $`T`$ and $`\beta `$ is the inverse temperature of the Ising model. On a fixed lattice there are no interactions among the $`n`$ Ising spin copies if the partition function is simply taken as the $`n`$-fold product of $`Z(\beta )`$ for a single Ising model. In the presence of gravity, given by the definition (2), the situation is different. Although the spin partition function $`Z_T^n(\beta )`$ still factorizes for any given $`T`$, this is no longer the case after the sum over $`T`$ has been performed.
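To make the weights in (2) concrete, the spin factor $`Z_T(\beta )`$ can be evaluated by brute-force enumeration on a small graph. The sketch below uses a hypothetical 4-vertex nearest-neighbour graph (not an actual Lorentzian triangulation), and the value $`\lambda =0.7`$ in the last line is an arbitrary illustrative choice:

```python
import itertools, math

def ising_partition_function(n_vertices, edges, beta):
    """Z_T(beta) = sum over all spin configurations of exp(beta * sum_<ij> s_i s_j)."""
    Z = 0.0
    for spins in itertools.product((-1, 1), repeat=n_vertices):
        energy = sum(spins[i] * spins[j] for i, j in edges)
        Z += math.exp(beta * energy)
    return Z

# Hypothetical toy graph: 4 vertices on a ring with one diagonal.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
Z = ising_partition_function(4, edges, beta=0.25)

# Weight this 'triangulation' would carry in the state sum (2) for n = 8 spin
# copies, with an arbitrary lambda = 0.7 and N_T = 4 as stand-in values.
weight = math.exp(-0.7 * 4) * Z**8
```

At $`\beta =0`$ every spin configuration has unit weight, so the function returns $`2^{\#\text{vertices}}`$, reproducing the free spin entropy; for $`\beta >0`$ the ferromagnetic interaction enhances $`Z`$, which is how the matter back-reacts on the relative weights of different triangulations.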
The different spin copies are effectively interacting via the triangulations (or in a continuum language: via the geometry); the weight of each triangulation is a function of all the $`n`$ Ising models. It is straightforward to perform computer simulations of the combined gravity-matter system given by (2) (see for details). The only non-trivial aspect of the Monte Carlo simulation is the updating of geometry, and for this the procedure used in can readily be generalized to the extended spin system of (2). All results discussed in the following have been obtained at the critical coupling $`\beta `$ of the combined gravity-matter system (2), with $`n=8`$, i.e. with central charge $`c=4`$. Our motivation for choosing $`c=4`$ comes from our experience with Euclidean quantum gravity coupled to matter fields. In that case the phase transition at $`c=1`$ is not very clearly visible in simulations. Only for $`c\gtrsim 4`$ can the changes in geometry be detected easily. We have therefore chosen to work with 8 Ising spins in Lorentzian quantum gravity, to have both $`c`$ sufficiently large to detect potential effects on the geometry, but still small enough to make computer simulations feasible within a limited amount of time. ## 3 Numerical results We have performed our simulations on dynamically triangulated surfaces of torus topology with $`N_T`$ triangles (corresponding to $`N=N_T/2`$ vertices) and $`t`$ time slices. For reasons that will become apparent in the following we have used geometric configurations with different ratios of temporal length $`t`$ versus average spatial extent, satisfying $`N=t^2/\tau `$ with $`\tau =1,2,3`$ and $`4`$. The choice $`\tau =1`$, previously used in , corresponds to a square lattice (with opposite sides identified), while for $`\tau >1`$ one obtains tori elongated in the $`t`$-direction. The system sizes $`N`$ used in the simulations at various values of $`\tau `$ are listed in Table 1.
The geometry is updated using the move described in , and for each geometry update (corresponding to approximately $`N`$ accepted moves) the Ising spins are updated with the Swendsen–Wang algorithm. The focus of our attention is on the multiple Ising model with $`c=4`$, although for comparison some data for $`c=0,1/2`$ will also be reported. The first step of the numerical simulation consists in determining the critical values $`(\lambda _c,\beta _c)`$ of the cosmological and the matter coupling constants. For the pure gravity model ($`c=0`$), the cosmological constant $`\lambda _c=\mathrm{ln}2`$ is known exactly . For a single Ising model ($`c=1/2`$), we know from our previous simulations that $`(\lambda _c,\beta _c)=(0.742(5),0.2521(1))`$ , where the normalization for $`\lambda _c`$ is such that $`\lambda _c=\mathrm{ln}2`$ at $`\beta =\infty `$. For the case of eight Ising models, using finite-size scaling as in , for system sizes $`N=1K`$–$`8K`$ and $`\tau =1,3`$ we have obtained $`(\lambda _c,\beta _c)=(1.081(5),0.2480(4))`$. As expected, this result is insensitive to the value of $`\tau `$. Having established the critical values, we investigate finite-size scaling of the system at $`(\lambda _c,\beta _c)`$. The statistics vary; for example, we performed $`1.88\times 10^6`$ sweeps for the $`N=19200`$ system and $`0.6\times 10^6`$ sweeps for the $`N=36963`$ system. The data are binned for error estimation. We apply finite-size scaling to a variety of observables, in order to extract universal properties which characterize both the quantum geometry and the matter interacting with it. The first of them involves a measurement of the distribution $`SV(l)`$ of spatial volumes (cf. ), that is, the lengths $`l`$ of slices at constant time $`t`$. For sufficiently large lengths $`l`$ and space-time volumes $`N`$, one expects a finite-size scaling relation of the form $$SV_N(l)=F_S(l/N_T^{1/\delta _h}),$$ (3) for some function $`F_S`$.
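The Swendsen–Wang update mentioned above works on any nearest-neighbour graph, so it carries over unchanged from regular lattices to dynamical triangulations. A minimal sketch, with a simple union–find for the cluster search (the graph and the numbers below are generic illustrations, not the actual code used in the simulations):

```python
import math, random

def swendsen_wang_step(spins, edges, beta, rng=random):
    """One Swendsen-Wang cluster update on an arbitrary neighbour graph.
    Bonds between aligned spins are frozen with probability 1 - exp(-2*beta);
    each resulting cluster is then flipped with probability 1/2."""
    n = len(spins)
    parent = list(range(n))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    p_freeze = 1.0 - math.exp(-2.0 * beta)
    for i, j in edges:
        if spins[i] == spins[j] and rng.random() < p_freeze:
            parent[find(i)] = find(j)

    flip = {}
    for i in range(n):
        root = find(i)
        if root not in flip:
            flip[root] = rng.random() < 0.5
        if flip[root]:
            spins[i] = -spins[i]
    return spins

# Example: at strong coupling all aligned bonds freeze, so an ordered ring
# stays a single cluster (and is flipped as a whole or not at all).
ring_edges = [(i, (i + 1) % 8) for i in range(8)]
spins = swendsen_wang_step([1] * 8, ring_edges, beta=50.0, rng=random.Random(0))
```

In the combined simulation, one such cluster sweep per spin copy would alternate with the geometry moves.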
If such a relation holds, it defines a relative dimensionality of space (characterized by the average length $`l`$) and (proper) time since from $$N_T\sim tl\quad \Rightarrow \quad t\sim N_T^{1-1/\delta _h},\qquad l\sim N_T^{1/\delta _h}.$$ (4) By relating the geodesic distance $`t`$ in time direction to the total volume, we can define a global or cosmological Hausdorff dimension $`d_H`$ of space-time through $$t\sim N_T^{1/d_H}\quad \Rightarrow \quad d_H=\frac{\delta _h}{\delta _h-1}.$$ (5) This definition is motivated by a similar notion in Euclidean quantum gravity. In that case there is no distinction between spatial and time directions, and one can extract the global Hausdorff dimension by measuring the volumes of spherical shells at geodesic distance $`r`$ (the analogue of the geodesic time $`t`$ above) from a given point. (Note that a “shell” need not be a connected curve.) In a discretized context this amounts to counting the number $`n_N(r)`$ of vertices at geodesic (link) distance $`r`$. For this quantity one expects a scaling behaviour of the type $$n_N(r)=N_T^{1-1/d_H}F_1(x),\qquad x=\frac{r}{N_T^{1/d_H}},$$ (6) which has been verified for the case of 2d Euclidean quantum gravity. Eq. (6) is a typical example of a finite-size scaling relation. It tells us how a radial or proper time coordinate has to scale with space-time volume in order to obtain a non-trivial continuum limit ($`N\to \infty `$, $`r\to \infty `$). In this sense $`d_H`$ describes long-range properties of the system, which is our rationale for calling it the cosmological Hausdorff dimension. It does not necessarily tell us about the short-distance behaviour of space-time, for example, how the volume of a spherical shell behaves at small radius $`r\ll N_T^{1/d_H}`$ (but still with $`r`$ much larger than the lattice spacing, to avoid lattice artifacts). At such distances one expects the shell volume to grow with a power law $$n_N(r)\sim r^{d_h-1},$$ (7) where $`d_h`$ is now a “short-distance” fractal dimension .
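On a discretized geometry, the shell volumes $`n_N(r)`$ just described are obtained by a breadth-first search over the vertex neighbour graph. A sketch (the ring graph at the end is only a toy check of the routine, not a triangulation):

```python
from collections import deque

def shell_volumes(adjacency, origin):
    """n(r): number of vertices at link distance r from the origin vertex,
    computed by breadth-first search."""
    dist = {origin: 0}
    queue = deque([origin])
    while queue:
        v = queue.popleft()
        for w in adjacency[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    n = {}
    for d in dist.values():
        n[d] = n.get(d, 0) + 1
    return n  # maps r -> shell volume, r = 0, 1, 2, ...

# Toy check on a ring of 10 vertices: each shell with 0 < r < 5 holds two
# vertices, and the antipodal vertex at r = 5 is alone.
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
n = shell_volumes(ring, 0)
```

In practice one averages such histograms over many reference vertices and many configurations before attempting the finite-size scaling of eq. (6).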
There is no a priori reason for $`d_h`$ to coincide with the cosmological Hausdorff dimension. However, in models of simplicial Euclidean quantum gravity we have always observed $`d_h=d_H`$, such that (6) was valid for all $`r`$ (much larger than the lattice spacing). This points to a unique fractal structure of space-time, with $`F_1(x)=x^{d_h-1}`$ for $`x\ll 1`$. Nevertheless, there also exist related models with $`d_h\ne d_H`$ . We will see below that Lorentzian gravity coupled to enough matter provides another example of this kind. For illustrative purposes we have generated 3D visualizations of the two-dimensional dynamically triangulated geometries produced during the simulations. For Lorentzian geometries this can easily be done: as a consequence of the causality requirement each 2d history consists of an ordered sequence of 1d spatial slices of constant time. Each such slice is embedded isometrically in three-dimensional flat space and then the vertices of neighbouring slices are connected. Different colours indicate clusters of spin-up and spin-down states. For the case of multiple Ising models, one of them is chosen arbitrarily to determine the surface colouring. We have cut open the toroidal geometries along one of their spatial slices, so that in the pictures they appear as cylinders (with top and bottom slices to be identified). The visualizations are well suited for comparing the qualitative behaviour of the geometric and spin degrees of freedom as well as their interaction, for different values of the conformal charge $`c`$. Animations of some of the simulations can be found at . ### 3.1 Lorentzian quantum gravity with $`c\le 1/2`$ To put our current results into context, let us recall the situation for pure Lorentzian gravity ($`c=0`$) and for Lorentzian gravity coupled to one critical Ising model ($`c=1/2`$).
In that case, independent measurements of $`SV_N(l)`$ and $`n_N(t)`$ both yield $`\delta _h=d_H=2`$, corroborating the existence of a universal fractal dimension $`d=2`$, which moreover coincides with the naïvely expected continuum value. In addition, we have found the Onsager exponents for the case of a single Ising model coupled to Lorentzian gravity. The fact that both the fractal dimensions and the critical matter exponents retain their “canonical” values is in sharp contrast with the situation in 2d dynamically triangulated Euclidean quantum gravity. For later comparison with the case of eight Ising models coupled to Lorentzian gravity, we show in Fig. 1 two typical configurations of the pure-gravity system for $`N=8100`$, $`\tau =1`$, and for $`N=9408`$, $`\tau =3`$, generated during the Monte Carlo simulations. Apart from an overall rescaling, the $`\tau =1`$ geometry looks qualitatively similar to its $`\tau =3`$ counterpart. This observation can be made into a quantitative statement by showing that the distribution $`SV_N(l)`$ is independent of $`\tau `$, as indeed we have done. From the point of view of the space-time geometry, the situation is similar for $`c=1/2`$ coupled to Lorentzian gravity. We illustrate this by two typical configurations, depicted in Fig. 2. Also in this case we have checked that the distribution $`SV_N(l)`$ is independent of the relative temporal extension $`\tau `$ of space-time. ### 3.2 Properties of the quantum geometry for $`c=4`$ #### 3.2.1 The length distribution $`SV_N(l)`$ and the dimension of proper time In the same manner as discussed above, we can extract some large-scale characteristics of the quantum geometry of the $`c=4`$ system coupled to Lorentzian gravity by studying the scaling properties of the distribution $`SV_N(l)`$ of one-dimensional spatial slices of volume $`l`$. It turns out that for $`c=4`$ one has to simulate systems with $`\tau \ge 3`$ to observe a clear scaling behaviour. As illustrated by Fig.
3, the system exhibits a tendency to develop a large number of very short spatial slices, with length of the order of the cut-off. The length distribution $`SV_N(l)`$ has a peak at small $`l`$, whose height increases with $`N`$, but whose position has a very weak dependence on the system size. Since the volume is kept fixed, there are strong finite-size effects which artificially prohibit the system from forming such a peak whenever $`N`$ and $`\tau `$ are simultaneously small. This is obvious from the data taken for the $`\tau =1`$ system (Fig. 3). In that case the peak appears clearly only for a system with more than $`8100`$ vertices. Such finite-size effects are absent for the $`\tau =3`$ system. These properties are well illustrated by Figs. 4 and 5, which show some typical geometries at $`c=4`$. They should be compared to our previous Figs. 1 and 2 for $`c\le 1/2`$. Fig. 4 contains space-time configurations at $`\tau =1`$, for three different volumes $`N`$. Their tendency to separate into two distinct regions increases with $`N`$ (remember that the time direction has been chosen periodic). This is a typical finite-$`N`$ behaviour associated with a phase transition, in this case, of the geometry. Likewise, for increasing $`\tau `$ (and constant volume) it becomes easier to form long and thin “necks”, along which the spatial volumes $`l`$ stay close to the cut-off size (see Fig. 5). Note in particular the space-time history with the largest volume ($`N=36963`$), where the separation of space-time into two different phases is very pronounced, underscoring at the same time the effect of increasing $`N`$. This figure also illustrates the fact that in the limit as $`N\to \infty `$, the neck region will carry a vanishing space-time volume. It happens only rarely that the extended region shows a tendency to break up into smaller parts. Generally speaking, the fluctuations in its shape constitute the slowest modes of the simulation.
Occasionally we observe a (much) smaller extended region splitting off from the main one. However, our statistics was insufficient to establish whether for large $`\tau `$ there is an underlying pattern governing the size and frequency of these events. For our present purposes, this effect can be safely ignored, since the number and size of such secondary space-time regions was small. Let us now quantify the scenario just outlined by a study of the scaling properties of $`SV_N(l)`$. As expected, the length distributions $`SV_N(l)`$ show no sign of scaling for small $`l`$. For large $`l`$, however, a scaling relation of type (3) is well satisfied, as illustrated by the plots in Fig. 6. The optimal values for $`\delta _h`$ are contained in Table 1. There is a clear tendency for $`\delta _h\to 3/2`$ as $`\tau `$ becomes large. From Fig. 6 we can read off at which value of the parameter $`x=l/N_T^{1/\delta _h}`$ the scaling sets in. This happens for $`x\gtrsim c`$, where $`c\approx 0.5`$, or (setting $`\delta _h=3/2`$) for lengths $$l\gtrsim cN_T^{2/3}.$$ (8) As mentioned above, the neck region does not contribute significantly to the volume for large $`N`$. We have measured that the volume $`V_{ext}`$ of the extended phase (now defined as the scaling region of $`SV(l)`$) is asymptotically proportional to the total volume $`V`$ ($`\sim N_T`$) of the surface. Let $`t_{ext}`$ denote the temporal extension of this extended region and $`l_{ext}`$ the typical length of a spatial slice in that region (such that $`t_{ext}l_{ext}=V_{ext}\sim N`$). If we assume for the sake of definiteness that indeed $`\delta _h=3/2`$, it follows from (5) and (4) that $`\mathrm{dim}V_{ext}`$ $`=`$ $`{\displaystyle \frac{3}{2}}\mathrm{dim}l_{ext}.`$ (9) From this we immediately deduce the relations $`\mathrm{dim}l_{ext}`$ $`=`$ $`2\mathrm{dim}t_{ext},`$ (10) $`\mathrm{dim}V`$ $`=`$ $`3\mathrm{dim}t_{ext},`$ (11) and that the cosmological Hausdorff dimension is given by $`d_H=3`$.
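The exponent bookkeeping in eqs. (9)–(11) can be checked mechanically. Taking eq. (9) and the factorization $`V\sim t\,l`$ (dimensions add) as input, and measuring everything in units of $`\mathrm{dim}t_{ext}`$:

```python
from fractions import Fraction

# Inputs: dim V = dim t + dim l (from V ~ t*l) and eq. (9), dim V = (3/2) dim l.
dim_t = Fraction(1)      # measure all exponents in units of dim(t_ext)
dim_l = 2 * dim_t        # solving dim t + dim l = (3/2) dim l gives dim t = dim l / 2
dim_V = dim_t + dim_l    # = 3 dim t, which is eq. (11)

delta_h = dim_V / dim_l  # l ~ N^(1/delta_h)  =>  delta_h = dim V / dim l = 3/2
d_H = dim_V / dim_t      # t ~ N^(1/d_H)      =>  d_H = dim V / dim t = 3
```

The last two lines reproduce $`\delta _h=3/2`$ and $`d_H=3`$, and also satisfy the general relation $`d_H=\delta _h/(\delta _h-1)`$ of eq. (5).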
Our main conclusion is that the coupling of 8 Ising models to Lorentzian gravity produces a phase transition in which some universal properties of the geometry are changed. At large distances, proper time and spatial length develop anomalous dimensions relative to each other and to the space-time volume, as expressed by eqs. (10) and (11). #### 3.2.2 The shell volume $`n_N(r)`$ and the short-distance dimension $`d_h`$ Next we discuss the measurement of the one-dimensional volumes $`n_N(r)`$ of spherical shells at distance $`r`$. It turns out that for $`c=4`$ Lorentzian gravity plus matter, these functions do not exhibit the universal scaling properties found elsewhere in models of two-dimensional gravity . (That a universal behaviour at all length scales is unlikely is already illustrated by the separation of typical configurations into a thin and an extended region apparent in Fig. 5.) As discussed at the beginning of this section, this is no reason for concern; it simply reflects the fact that the underlying quantum geometry is more complex. We will identify several well-defined scaling regions and encounter the more general situation where the short-distance and the cosmological Hausdorff dimensions are different. As can be seen in Fig. 7, the short-distance behaviour of $`n_N(r)`$ is independent of $`N`$ and can be fitted nicely to $`n_N(r)\sim r^{d_h-1}`$, with $`d_h\approx 2`$ (which coincides with the value found for $`c=1/2`$ and also happens to be the “canonical” dimension expected from classical considerations). The best fit gives $`d_h=2.1(2)`$. Going out to length scales of the order $`r\sim N^{1/3}`$, we see a different scaling behaviour. Here (6) is valid with a cosmological Hausdorff dimension $`d_H=3`$, in accordance with the value extracted from the measurements of the length distribution $`SV_N(l)`$. Finite-size scaling in this region, computed from the scaling of the peaks of $`n_N(r)`$ (Fig. 8), yields $`d_H=3.07(9)`$.
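The fitting step behind a value like $`d_h=2.1(2)`$ is just a linear regression in log–log variables over the short-distance window. A sketch with noise-free synthetic data (real shell-volume data would carry statistical errors and require choosing a fit window):

```python
import numpy as np

def fit_short_distance_dimension(r, n_r):
    """Fit n(r) ~ r^(d_h - 1) on a log-log scale and return d_h."""
    slope, _ = np.polyfit(np.log(r), np.log(n_r), 1)
    return slope + 1.0

# Synthetic short-distance data generated with d_h = 2.1, as a sanity check
# that the fit recovers the input exponent.
r = np.arange(2, 20)
n_r = 0.7 * r**1.1
d_h = fit_short_distance_dimension(r, n_r)
```

For the cosmological regime one would instead fit the peak positions of $`n_N(r)`$ against $`N`$ to extract $`1/d_H`$, exactly as done for Fig. 8.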
Finally, at very large $`r`$ near the tail of the distribution, we found that the value of $`n_N(r)`$ is almost independent of $`N`$, indicating a dominance of configurations with $`d_H=1`$. Recalling the typical shape of configurations at $`c=4`$ (Fig. 5), this suggests the following interpretation. Each measurement of $`n_N(r)`$ involves the choice of a reference point, from which the geodesic distances $`r`$ are measured. Since almost no space-time volume is contained in the thin necks, the randomly chosen reference point will typically be located somewhere in the extended region. However, moving outwards from such a bulk point in spherical shells will for large $`r`$ eventually bring us back to the neck region, which in the large-$`N`$ limit has a length proportional to $`\sqrt{N}`$ (simply because $`N\sim t^2`$). Once the spherical shells have reached the neck region, the volume function $`n_N(r)`$ will just measure a one-dimensional structure. ### 3.3 Matter behaviour in the extended phase We have seen above how a Lorentzian geometry separates into two distinct regions under the coupling to 8 Ising models. Since the thin, stalk-like region is effectively one-dimensional, a non-trivial matter behaviour can be expected only in the remaining, spatially extended space-time region. The computation of the matter exponents in this phase is subtle and requires some care. At the critical matter coupling $`\beta _c`$ we have measured the same set of observables as in our previous simulations .
Together with their expected finite-size scaling behaviour they are $$\chi =N\left(\langle m^2\rangle -\langle |m|\rangle ^2\right)\sim N^{\gamma /\nu d_H}\qquad (\mathrm{susceptibility})$$ (12) $$D_{\mathrm{ln}|m|}=N\left(\langle e\rangle -\frac{\langle e|m|\rangle }{\langle |m|\rangle }\right)\sim N^{1/\nu d_H}\qquad (D_{\mathrm{ln}|m|}\equiv \frac{d\mathrm{ln}\langle |m|\rangle }{d\beta })$$ (13) $$D_{\mathrm{ln}m^2}=N\left(\langle e\rangle -\frac{\langle em^2\rangle }{\langle m^2\rangle }\right)\sim N^{1/\nu d_H}\qquad (D_{\mathrm{ln}m^2}\equiv \frac{d\mathrm{ln}\langle m^2\rangle }{d\beta }),$$ (14) where $`\gamma `$ and $`\nu `$ are the critical exponents of the susceptibility and of the divergent spin-spin correlation length. Initially we checked that for the case of a single Ising model, extending the geometries in the temporal direction (i.e. taking $`\tau >1`$) does not affect the Onsager exponents found in . The results are tabulated in Table 3, and do not differ significantly from our previous results. If one repeats this analysis naïvely for $`c=4`$, without taking into account geometric properties, no consistent scaling behaviour is found. For example, we find Onsager exponents for $`\tau =1`$, but these change when $`\tau `$ is increased. Although the spins in the “thin” phase cannot be critical, and contribute little to the space-time volume, the “transition” region, where the spatial length $`l`$ changes from cut-off length to $`l`$’s satisfying (8), apparently spoils the measurements, and there are considerable finite-size effects. The situation does not improve when the volume $`V_{ext}`$ is used instead of the total volume in the finite-size scaling. It seems that the only way to study the critical matter behaviour for the case of eight Ising models is to isolate explicitly the contributions from the spins on the extended part of the Lorentzian geometry. For this we adopt the following procedure: for each configuration we measure the energy $`E`$ and magnetization $`M`$ on all vertices belonging to spatial slices whose length is greater than a cut-off $`l_0(N)=c_\tau N^{1/\delta _h}`$, and on all links contained in such slices or connecting two of them.
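Given (binned) Monte Carlo time series for the energy density $`e`$ and magnetization density $`m`$, the estimators (12)–(14) are simple moment combinations. A sketch (the Gaussian series at the end are synthetic, only to exercise the function, and are not representative of real spin data):

```python
import numpy as np

def fss_observables(e, m, N):
    """Estimators (12)-(14) from Monte Carlo time series of the energy
    density e and magnetization density m on a system of N vertices."""
    e = np.asarray(e, dtype=float)
    m = np.asarray(m, dtype=float)
    am = np.abs(m)
    chi = N * (np.mean(m**2) - np.mean(am)**2)
    d_ln_abs_m = N * (np.mean(e) - np.mean(e * am) / np.mean(am))
    d_ln_m2 = N * (np.mean(e) - np.mean(e * m**2) / np.mean(m**2))
    return chi, d_ln_abs_m, d_ln_m2

# Synthetic, uncorrelated test series (real data would come from the
# simulation and be binned to control autocorrelations).
rng = np.random.default_rng(0)
chi, d1, d2 = fss_observables(rng.normal(-1.0, 0.1, 5000),
                              rng.normal(0.0, 0.3, 5000), N=1024)
```

Fitting the $`N`$-dependence of these quantities to the power laws above then yields $`\gamma /\nu d_H`$ and $`1/\nu d_H`$; note that $`\chi `$ is $`N`$ times the variance of $`|m|`$ and is therefore non-negative by construction.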
The constants $`c_\tau `$ and $`\delta _h`$ are determined from the scaling regions of the length distributions $`SV_N(l)`$, with $`c_\tau \approx 0.5`$, as discussed in connection with eq. (8). Denote the numbers of such vertices and links by $`N^{\prime }`$ and $`N_L^{\prime }`$. We then compute the averages $`e^{\prime }=E/N_L^{\prime }`$ and $`m^{\prime }=M/N^{\prime }`$, and measure the expectation values $`\langle N^{\prime }\rangle `$ and $`\langle N_L^{\prime }\rangle `$. Looking at the Monte Carlo time histories, we observe that whenever $`N^{\prime }\ne 0`$, $`e^{\prime }`$ and $`m^{\prime }`$ fluctuate stably around their mean values (even when the vertex number $`N^{\prime }`$ is close to $`0`$), whereas $`E`$ and $`M`$ vary slowly but considerably together with $`N^{\prime }`$. We can thus safely ignore the (relatively few) configurations with $`N^{\prime }=0`$. We have also computed the volume $`V^{\prime }\approx V_{ext}`$ contributing to the scaling region of $`SV(l)`$ and performed finite-size scaling of the observables computed from the modified energy and magnetization averages $`e^{\prime }`$ and $`m^{\prime }`$. The results of this final analysis are summarized in Table 4. We have used a variety of different definitions of the system size, to demonstrate that the critical matter exponents extracted from finite-size scaling do not depend on them. We conclude that the critical matter behaviour of our model of eight Ising spins, on the part of space-time that possesses a non-trivial spatial extension, is governed by the Onsager exponents, and therefore lies in the same universality class as the model containing only a single copy of Ising spins. ## 4 Discussion In order to provide an interpretation for some of our results on 2d Lorentzian gravity coupled to multiple Ising spins, we first need to recall some characteristic geometric features of 2d Euclidean gravity.
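The slice-filtering step can be summarized schematically as follows (the data layout and function name are our own; the actual code works directly on the triangulation and distinguishes the link count $`N_L^{\prime }`$ from the vertex count $`N^{\prime }`$, which we conflate here for brevity):

```python
def filtered_averages(slices, N, c_tau=0.5, delta_h=3.0):
    """Keep only spatial slices longer than l0(N) = c_tau * N**(1/delta_h);
    return (e', m', N') computed from the surviving vertices.
    Each slice is a dict with keys 'l' (length), 'E' (energy on its
    links), 'M' (magnetization on its vertices)."""
    l0 = c_tau * N ** (1.0 / delta_h)
    kept = [s for s in slices if s['l'] > l0]
    Np = sum(s['l'] for s in kept)       # N': vertices kept
    if Np == 0:
        return None                      # skip configurations with N' = 0
    E = sum(s['E'] for s in kept)
    M = sum(s['M'] for s in kept)
    return E / Np, M / Np, Np            # N' used for links too, for brevity

# Toy configuration: an extended region plus cut-off-size stalk slices.
slices = [{'l': 40, 'E': -60.0, 'M': 35.0},
          {'l': 35, 'E': -50.0, 'M': 30.0},
          {'l': 1, 'E': -1.0, 'M': 1.0},   # stalk: below cut-off
          {'l': 1, 'E': -1.0, 'M': 1.0}]
N = sum(s['l'] for s in slices)
out = filtered_averages(slices, N)
print(out)
```

The point of the construction is visible even in this toy example: the stalk slices contribute to $`N`$ but are excluded from $`e^{\prime }`$ and $`m^{\prime }`$, so the averages probe only the extended region.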
Consider the one-dimensional spherical “shell” consisting of all points separated from a given reference point<sup>2</sup><sup>2</sup>2When talking about “reference points”, we always have in mind averages, calculated in the statistical ensemble of 2d Euclidean geometries, with each geometry weighted by the exponential of its classical action. by a geodesic distance $`r`$. This curve will in general be multiply connected. Let $`\rho (l,r)`$ denote the number of connected shell components of length $`l`$ at distance $`r`$. It is a remarkable and universal result in 2d Euclidean quantum gravity that $`r`$ and $`l`$ have a relative anomalous scaling of the form $$l\sim r^2.$$ (15) For pure 2d Euclidean quantum gravity this was first proved analytically in , where in the limit of infinite space-time volume $`\rho `$ was found to be $$\rho (l,r)\sim \frac{1}{r^2}\left(c_1z^{-5/2}+c_2z^{-1/2}+c_3z^{1/2}\right)\text{e}^{-z},\qquad z=l/r^2.$$ (16) It was later checked numerically for various values $`c<1`$ of the central charge that in the infinite-volume limit the length distribution $`r^2\rho (l,r)`$ is only a function of the variable $`z=l/r^2`$. In addition, for $`z>1`$ the functional dependence on $`c`$ turns out to be rather weak. For a finite space-time volume $`N`$, it was found that $`r^2\rho (l,r)`$ can be approximated well by $$r^2\rho (l,r)\approx f(z,l/N^{2/d_H}).$$ (17) We can use this relation to calculate the expectation values of integer powers of the length $`l`$,<sup>3</sup><sup>3</sup>3For $`n=1`$ eq. (18) is not valid and one obtains instead $$\langle l\rangle _r\sim r^{d_H-1}H(r/N^{1/d_H}),\qquad H(0)>0,$$ where the Hausdorff dimension $`d_H`$ is a function of the central charge $`c`$ of the conformal matter theory coupled to 2d Euclidean quantum gravity. This contribution comes entirely from small loop lengths $`l\ll r^2`$, and is suppressed in the higher moments of $`l`$.
$$\langle l^n\rangle _{r,N}\equiv \underset{l}{\sum }l^n\rho (l,r)\stackrel{N\text{ large}}{\sim }N^{2n/d_H}F_n(r/N^{1/d_H}),\qquad n>1,$$ (18) where the functions $`F_n`$ behave like $$F_n(x)\sim x^{2n}\quad \mathrm{for}\quad x\ll 1.$$ (19) For small $`r\ll N^{1/d_H}`$ we thus obtain $$\langle l^n\rangle _{r,N}\sim r^{2n}\quad \mathrm{for}\quad n>1,$$ (20) which is in accordance with relation (15), whereas for “cosmological” distances $`r\sim N^{1/d_H}`$ one finds $$\langle l^n\rangle _{r,N}\sim N^{2n/d_H}\quad \mathrm{for}\quad n>1.$$ (21) It should be emphasized again that eqs. (19) and (20) seem to be universally true for 2d Euclidean quantum gravity theories with $`c<1`$ and require no cut-off in the continuum limit. To our knowledge they are the only non-trivial relations in 2d Euclidean quantum gravity independent of the central charge $`c`$. For Lorentzian gravity coupled to a $`c=4`$ conformal field theory we saw above that the geometry had undergone a phase transition compared to $`c=0`$ and $`c=1/2`$. In those cases, a continuum limit could only be obtained if time and space had identical scaling dimensions, dim$`l`$ = dim$`t`$. Under the natural identification of Lorentzian proper time $`t`$ with the geodesic distance $`r`$ of the Euclidean formulae, this should be contrasted with dim$`l`$ = 2 dim$`r`$, which follows immediately from relation (15). The analogue of relation (21) for Lorentzian gravity with $`c=0,1/2`$ is given by $$\langle l^n\rangle _{t,N}\sim N^{n/d_H},\qquad d_H=2,\quad n>0.$$ (22) More precisely, eq. (22) can be computed exactly for $`c=0`$ and is deduced for $`c=1/2`$ by numerical comparison of the length distributions. However, the scaling relation we observed for $`c=4`$ was not (22), but (21) (for $`n>0`$)! The surprising conclusion is that with increasing central charge $`c`$ the geometry undergoes a transition from a state characterized by (22), to one satisfying (21), which is a generic property of Euclidean quantum gravity with $`c<1`$. What causes this transition as more and more matter is added to the model? As discussed in , matter has a tendency to “squeeze off” parts of space-time.
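The scaling (20) can be checked directly against the infinite-volume distribution (16): substituting $`z=l/r^2`$ gives $`\langle l^n\rangle _r=r^{2n}\int dz\,z^n(c_1z^{-5/2}+c_2z^{-1/2}+c_3z^{1/2})\text{e}^{-z}`$, which converges for $`n>1`$. A numerical sketch (the constants $`c_i`$ below are arbitrary placeholders, since only the $`z`$-dependence matters for the scaling):

```python
import math

def rho(l, r, c=(1.0, 0.5, 4.0)):
    """Infinite-volume loop-length distribution of the form
    rho(l, r) ~ r^-2 (c1 z^-5/2 + c2 z^-1/2 + c3 z^1/2) e^-z,
    with z = l / r^2 and placeholder constants c_i."""
    z = l / r ** 2
    c1, c2, c3 = c
    return (c1 * z ** -2.5 + c2 * z ** -0.5 + c3 * z ** 0.5) \
        * math.exp(-z) / r ** 2

def moment(n, r, zmax=50.0, steps=200000):
    """<l^n>_r = integral dl l^n rho(l, r), by the midpoint rule,
    integrating out to l = zmax * r^2."""
    h = zmax * r ** 2 / steps
    total = 0.0
    for i in range(steps):
        l = (i + 0.5) * h
        total += l ** n * rho(l, r)
    return total * h

# <l^n>_r should scale as r^(2n) for n > 1 (eq. (20)); check n = 2:
ratio = moment(2, 20.0) / moment(2, 10.0)
print(round(ratio / 2 ** 4, 6))  # → 1.0, i.e. ratio = (20/10)^(2n) with n = 2
```

For $`n=1`$ the integrand behaves as $`z^{-3/2}`$ near $`z=0`$ and the integral diverges at the lower end, which is the origin of the separate small-loop behaviour quoted in the footnote.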
In 2d Euclidean quantum gravity this pinching can take place anywhere and results in an ever-increasing number of baby universes. Eventually, for $`c>1`$, the fractal geometry degenerates into branched polymers, which can simply be viewed as a conglomerate of baby universes of the size of the cut-off. In the Lorentzian case by construction no baby universes can be formed. The only possible way for matter to squeeze the geometry is to pinch constant-time slices to their minimal allowed spatial length $`l=1`$. This effect is very obvious in the Monte Carlo simulations and becomes more pronounced as the central charge is increased. In going to $`c=4`$ the influence of the matter has become so strong that a genuine phase transition has taken place. Only $`t^{2/3}`$ of the $`t`$ spatial slices (which typically occur together in a single extended region) have an extension beyond the cut-off scale. On the other hand their average spatial extension behaves like $`t^{4/3}`$. The remaining spatial slices have been pinched to the cut-off scale. On the fraction of slices with a macroscopic extension, one can then define a scaling limit, which at large distances is characterized by a Hausdorff dimension $`d_H=3`$. Likewise the relative dimensions of space and time are changed from their naïve canonical values dim$`l`$ = dim$`r`$, derived from (22), to dim$`l`$ = 2 dim$`r`$, dictated by (21). From our experience with Euclidean quantum gravity, this behaviour may seem unexpected. In that case, a large influence of the matter on the geometry is always accompanied by a large back reaction of the geometry on the matter, in the sense that the critical matter and gravity exponents always change simultaneously. (An exception to this is the relation (21), which is valid for all $`c<1`$ and therefore contains no information about the conformal field theory and its coupling to geometry.)
The Lorentzian gravity model behaves differently: the matter strongly affects the geometry (changing bulk properties like the Hausdorff dimension and the relative scaling between time and spatial directions), but these apparently drastic changes are still not sufficient to alter any of the universal matter properties. Even when 8 Ising models are coupled to Lorentzian gravity, the critical matter exponents still retain their Onsager values. This situation provides further support for the viewpoint advanced in our previous work that the critical gravity and matter behaviour of the Euclidean models is entirely determined by the presence of baby universes. In the light of our new results, the argument may be put as follows. So far it has been unclear whether the change in the critical exponents of conformal field theories when coupled to Euclidean quantum gravity was due to the strong back reaction of the geometry on the matter or to the baby universes that were present a priori. Lorentzian gravity with 8 Ising spins provides an example where undeniably the interaction of the matter and gravity sectors is strong. Nevertheless the critical Ising exponents remain unchanged. This strongly suggests that in the Euclidean case it is really the baby universes which are responsible for the observed changes in the universal properties of the matter. While we have not undertaken a systematic search for the exact value of $`c`$ where the phase transition in geometry takes place, it is tempting to conjecture that it occurs at $`c_{\mathrm{crit}}=1`$. Independent of the exact value of $`c_{\mathrm{crit}}`$, we have identified a weak analogue of the $`c=1`$ barrier also in Lorentzian gravity. From the point of view of matter, nothing dramatic happens when the barrier is crossed.
However, the behaviour of the quantum geometry undergoes a qualitative change and even shares some features with the non-singular part of the quantum geometries described by 2d Euclidean gravity coupled to matter with $`c<1`$. This highlights the universal nature of the relation $`l\sim r^2`$ and motivates the search for a simple underlying explanation, which may have a status similar to that of $`d_H=2`$ for Brownian motions. ## 5 Outlook In closing, let us step back to examine the potential larger implications of all we have learned so far about two-dimensional Lorentzian quantum gravity. Our original aim was to find a non-perturbative path-integral formulation for quantum gravity. Previous attempts in this direction had largely been confined to the sector of Euclidean space-time metrics. Since for general metrics there is no straightforward analogue of the Wick rotation, a path integral over Lorentzian geometries is likely to require a more radical modification (compared with the Euclidean theory) than a mere analytic continuation of the action. We chose to make the path integral Lorentzian by requiring each individual space-time geometry contributing to the state sum to carry a causal structure associated with a Lorentzian geometry. In order to make the construction well defined, a regularization is necessary, and we used the method of dynamical triangulations (where geometric manifolds are represented as gluings of $`d`$-dimensional simplices), which had previously been employed successfully in a Euclidean context. An ideal testing ground for such a proposal is gravity in dimension $`d=2`$, whose Euclidean sector (“Liouville gravity”) has been studied extensively by a variety of methods. We performed the Lorentzian state sum exactly, over a set of dynamically triangulated 2-geometries satisfying a discrete analogue of causality, and took a continuum limit.
Maybe surprisingly, the resulting continuum theory turned out to be a new, bona fide theory of 2d quantum gravity fundamentally different from (Euclidean) Liouville gravity. As already mentioned in the introduction, it describes an ensemble of strongly fluctuating geometries, with local curvature degrees of freedom. Nevertheless, the geometry is less fractal than its Euclidean counterpart, and therefore closer to our intuitive, classical notions of smooth geometry. The existence of at least two different quantum gravities constructed by rigorous path-integral methods in 2d raises the question of “which one is the right theory?” There is no ultimate answer to this, since two-dimensional gravity (never mind its signature) does not describe any phenomena of the real world. Aficionados of Liouville gravity might object by saying that the Lorentzian version of quantum geometry was surely the less interesting, with not enough “happening” compared with the Euclidean case where, for example, the Hausdorff dimension $`d_H`$ changes with the matter content. However, even if this were the case, it would not disqualify Lorentzian gravity from being a good candidate for a quantum gravity theory, since we do not know what the geometry of “real” quantum gravity looks like at the Planck scale. So far there is little evidence to suggest non-smoothness of the space-time geometry up to the grand unified scale, which itself after all is only a few orders of magnitude removed from the Planck scale. At any rate, our present investigation shows that also in two-dimensional Lorentzian gravity things “do happen”. There is a strong interaction if one couples a sufficient amount of matter to Lorentzian geometry, affecting the universal properties of the combined system. 
In fact, one can argue that the resulting structure of quantum geometry is richer than that of the corresponding Euclidean model with matter, since its fractal structure (measured by the Hausdorff dimension) acquires a scale-dependence. Moreover, the interaction in Lorentzian gravity is strong, but – unlike in Euclidean gravity – not too strong in the sense of leading to a complete degeneration of the carrier geometry. In a similar vein, evidence is accumulating that the structure of Euclidean quantum gravity with and without matter is governed entirely by the presence of baby universes (branching configurations not present in the Lorentzian version because of their incompatibility with causality). There is nothing wrong with this: statistical mechanical models of Euclidean gravity provide examples of generally covariant systems which are highly interesting in their own right. What they might teach us about quantum gravity proper is much less clear, since related (and from a classical point of view highly degenerate) branched-polymer configurations and their associated fractal structure play a central role in Euclidean gravity in higher dimensions too. There they seem to affect the theory in an undesirable way, making it difficult to obtain an interesting continuum limit of the statistical models of Euclidean quantum gravity in dimension $`d>2`$. There is then a conclusion to be drawn for our eventual goal, the quantization of the physical theory of gravity in four space-time dimensions, whose character we know to be Lorentzian. Judging from our experience in two dimensions, the Lorentzian and Euclidean theories (if they both exist and are unique) may not be as closely related as is sometimes hoped for, invoking the example of standard, non-generally covariant quantum field theory. Our research has highlighted the importance of imposing causality (and thereby suppressing spatial topology changes) in the path integral over geometries. 
It remains to be seen which effect an analogous prescription has for quantum gravity theories in higher dimensions. ###### Acknowledgments. J.A. and K.A. thank MaPhySto, Centre for Mathematical Physics and Stochastics, funded by a grant from The Danish National Research Foundation, for financial support. We acknowledge the use of Geomview as our underlying 3D geometry viewing program.
# Compactified Little String Theories and Compact Moduli Spaces of Vacua

hep-th/9909219, UCSD/PTH 99-12, IASSNS-HEP-99/86

Kenneth Intriligator

UCSD Physics Department, 9500 Gilman Drive, La Jolla, CA 92093, and School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540, USA (address for Fall term, 1999)

It is emphasized that compactified little string theories have compact moduli spaces of vacua, which globally probe compact string geometry. Compactifying various little string theories on $`T^3`$ leads to 3d theories with exact, quantum Coulomb branch given by: an arbitrary $`T^4`$ of volume $`M_s^2`$, an arbitrary K3 of volume $`M_s^2`$, and moduli spaces of $`G=SU(N)`$, $`SO(2N)`$, or $`E_{6,7,8}`$ instantons on an arbitrary $`T^4`$ or K3 of fixed volume. Compactifying instead on a $`T^2`$ leads to 4d theories with a compact Coulomb branch base which, when combined with the exact photon gauge coupling fiber, is a compact, elliptically-fibered space related to the above spaces.

9/99

1. Introduction

Over the past few years, there have been a variety of connections between the moduli spaces of supersymmetric gauge theories and stringy geometry. For example, singular background geometry or gauge bundles can lead to enhanced, non-perturbative, gauge theories, whose moduli spaces reproduce the local singularity \[1,2\]. Another connection is via branes, whose world-volume supersymmetric gauge theory has a moduli space which “probes” \[3,4\] the geometry in which the branes live. In an extreme form of this connection, we perhaps actually live in the moduli space of a supersymmetric theory. It is thus interesting to consider, generally, what types of geometry can be reproduced via moduli spaces of vacua. A basic issue is whether moduli spaces of vacua can be compact.
Moduli spaces of vacua of standard gauge theories are generally non-compact cones: if a given set of scalar expectation values $`\varphi _i`$ is a D-flat vacuum, so is $`\lambda \varphi _i`$ for an arbitrary scaling factor $`\lambda `$. (This is slightly modified by Fayet-Iliopoulos terms for $`U(1)`$ factors.) An exception is the Coulomb branch moduli, associated with the Wilson lines, of gauge theories which are compactified on tori; these moduli live on dual tori, modded out by the Weyl group. The present note is devoted to emphasizing that toroidally compactified “little string theories” \[6,7\] <sup>1</sup> The extent to which these 6d theories decouple from the 10d bulk, for energies above some gap value, is subtle \[8,9,10\]; we will ignore these issues and only discuss the vacuum manifold. can have a variety of interesting, compact, moduli spaces of vacua. The present discussion is an elaboration of a footnote which appeared in . The basic message is that, while world-volume gauge theories only locally probe the geometry transverse to the brane, little string extensions can globally probe compact geometry. While this fact is perhaps well-known to some experts, it is hoped that some readers will find it of interest. For example, the basic $`𝒩=(1,0)`$ heterotic little string theory, when compactified on $`T^3`$, is argued to have a Coulomb branch moduli space of vacua which is a K3 of volume $`M_s^2`$. (Since a 3d scalar has mass dimension $`\frac{1}{2}`$, this has the correct dimensions.) The Coulomb branch is a non-linear sigma model with exact, quantum metric equal to the Ricci-flat metric of K3. The K3 is an arbitrary one of fixed volume, whose parameter space coincides with that of the $`T^3`$ compactified heterotic little string theory; the map between these parameter spaces is the same one that enters in the duality between the 10d heterotic string on $`T^3`$ and M theory on K3.
Geometric symmetries of the K3 Coulomb branch map to non-trivial T-dualities of the $`T^3`$ compactified little string theory. More generally, it will be argued that the little string theories obtained from $`K`$ heterotic (or type II) NS branes at a transverse $`\mathbb{C}^2/\mathrm{\Gamma }_G`$ singularity, when compactified on $`T^3`$, have a compact Coulomb branch moduli space of vacua given by the moduli space of $`K`$ $`G`$-instantons on K3 (or $`T^4`$). Here $`G`$ is an arbitrary $`A,D,E`$ group and $`\mathrm{\Gamma }_G`$ is the corresponding $`SU(2)`$ subgroup. The K3 or $`T^4`$ appearing here is precisely that of $`M`$ theory duality, which the compactified little string theory globally probes. The volume of the compact Coulomb branch is again set by $`M_s`$. In each case, the Coulomb branch sigma model metric must be the unique one which is Ricci flat. Similarly, it will be argued that little string theories, when compactified to 4d on a $`T^2`$, have Coulomb branches which globally probe $`F`$ theory. The Coulomb branch is the base space of $`F`$ theory, and the photon kinetic terms are the elliptic fibration. For example, compactifying the basic $`𝒩=(1,0)`$ little string theory on $`T^2`$ leads to a 4d theory whose total space of Coulomb branch base and Seiberg-Witten curve is an elliptically fibered K3 of volume $`M_s^2`$. The map between the $`T^2`$ compactification data and the parameter space of fixed volume, elliptically fibered, K3 spaces is the same as in the duality between the 10d heterotic theory on $`T^2`$ and $`F`$ theory on an elliptically fibered K3. The next section will review little string theories and their compactification, with several new minor comments included. Sect. 3 outlines classical $`T^3`$ dimensional reduction of ordinary 6d $`U(1)`$ and $`SU(2)`$ gauge fields. This already leads to compact Coulomb branches, of fixed volume $`g_6^{-2}`$; the Coulomb branch for $`U(1)`$ is $`T^4`$, while that of $`SU(2)`$ is K3. Sect.
4 extends the probe argument to argue for our main message: that compactifying little string theories on $`T^3`$ leads to theories whose exact Coulomb branch is compact and globally probes the $`T^4`$ or $`K3`$ of M theory duality. Sect. 5 discusses $`T^2`$ compactification of little string theories, which similarly have compact Coulomb branches that globally probe compactification of $`F`$ theory, e.g. on elliptically fibered K3s. The proposed relation between compactified type II little string theories and moduli spaces of instantons on $`T^4`$ also entered in \[14,15,16\], where it was extended to moduli spaces of instantons on a non-commutative $`T^4`$ by introducing R-symmetry twists in the compactification. A relation between the twisted, compactified $`(2,0)`$ theory and K3 was proposed in ; this appears to be unrelated to the presently discussed appearance of K3 in the context of the untwisted, compactified, heterotic little string theories. Much as in \[15,16\], it should also be possible to introduce R-symmetry twists for the compactified heterotic little string theories, perhaps leading to moduli spaces of instantons on a non-commutative K3, though this will not be done here.

2. Review of little string theories and their compactification

Four classes of 6d little string theories were obtained via the world-volume of 5-branes in the limit $`g_s\to 0`$ with $`M_s`$ held fixed:

(iia) $`𝒩=(1,1)`$ supersymmetric, via $`K`$ IIB NS five-branes or via IIA or M theory with a $`\mathbb{C}^2/\mathrm{\Gamma }_G`$ ALE singularity \[17,18\].

(iib) $`𝒩=(2,0)`$ supersymmetric, via $`K`$ type IIA five branes or type IIB with a $`\mathbb{C}^2/\mathrm{\Gamma }_G`$ singularity.

(o) $`𝒩=(1,0)`$ supersymmetric, via $`K`$ $`SO(32)`$ heterotic small instantons.

(e) $`𝒩=(1,0)`$ supersymmetric, via $`K`$ heterotic $`E_8\times E_8`$ small instantons.

Cases (iia) and (o) contain gauge fields with coupling $`g_6^2=M_s^{-2}`$ and are IR free.
Instantons in the 6d gauge theories are fundamental strings, with tension $`g_6^{-2}=M_s^2`$. Cases (iib) and (e) instead contain tensor multiplet two-form gauge fields, with self-dual field strength, and lead to interacting RG fixed point field theories in the IR. $`𝒩=(1,0)`$ tensor multiplet theories (of which $`𝒩=(2,0)`$ is a special case) always have an associated group $`G`$. For cases (iib) it is $`SU(K)`$ or the $`ADE`$ singularity group $`G`$, while for (e) it is $`Sp(K)`$. There is an $`r=`$rank$`(G)`$ dimensional, compact Coulomb branch moduli space, with the real scalars in the $`𝒩=(1,0)`$ tensor multiplets taking values $`\vec{\mathrm{\Phi }}`$ in the “$`G`$-Coxeter box” $`(S^1)^r/W_G`$, where the $`S^1`$ is of radius $`M_s^2`$ and $`W_G`$ is the Weyl group of $`G`$. The theory is interacting at the boundaries of the Coxeter box but, in the bulk, behaves in the IR as $`r`$ free self-dual tensor multiplets. Strings are charged under the $`r`$ 2-form gauge fields of these tensor multiplets, with charge vectors $`\vec{\alpha }`$ in the $`G`$ root lattice<sup>2</sup> Because of the self-duality, these strings can be regarded as either “electrically” or “magnetically” charged. The Dirac quantization condition thus implies that the lattice $`\mathrm{\Lambda }`$ must be an integer lattice, i.e. the dot product of any two lattice vectors is an integer, so $`\mathrm{\Lambda }\subseteq \stackrel{~}{\mathrm{\Lambda }}`$, where $`\stackrel{~}{\mathrm{\Lambda }}`$ is the dual lattice. This is, of course, a weaker condition than self-duality of the lattice. For example, the root lattice of a simple group $`G`$ is generally not self-dual but, rather, a subgroup, of degree given by the center of $`G`$, in the dual lattice, which is the weight lattice. Via a BPS formula, a string with charges $`\vec{\alpha }`$ has tension $`Z=\vec{\alpha }\cdot \vec{\mathrm{\Phi }}`$, becoming tensionless at the origin of the Coulomb branch.
Reducing to 5d leads to a gauge theory with non-Abelian gauge group $`G`$ at the origin, so the 6d theory can be regarded as a non-Abelian self-dual two-form gauge theory with group $`G`$ (whatever that means). Each of the four above classes has either vector or tensor multiplets, but not both. Theories containing both vector and tensor multiplets were discussed in by combining 5-branes with $`\mathbb{C}^2/\mathrm{\Gamma }_G`$ orbifold singularities in the transverse dimensions, using results obtained in \[19--22\]. In this way, new theories can be obtained for each of the four classes of branes, type IIA, IIB, $`SO(32)`$ heterotic, and $`E_8\times E_8`$ heterotic, at $`\mathbb{C}^2/\mathrm{\Gamma }_G`$ singularities. All of these theories generally have $`𝒩=(1,0)`$ supersymmetry. E.g. $`K`$ type IIB NS 5-branes at a $`\mathbb{C}^2/\mathrm{\Gamma }_G`$ singularity has a quiver gauge theory, based on the extended Dynkin diagram of the $`ADE`$ singularity group $`G`$, with gauge group $`U(1)_D\times \prod _{\mu =0}^rSU(Kn_\mu )`$ and bi-fundamental matter. $`n_\mu `$ are the $`G`$ Dynkin indices and $`r=`$rank$`(G)`$. There are $`r`$ $`𝒩=(1,0)`$ tensor multiplets, which are associated, as described above, with the singularity group $`G`$. Via an anomaly cancellation mechanism, $`SU(Kn_\mu )`$ has gauge coupling $`g_{\mu ,eff}^{-2}=M_s^2\delta _{\mu ,0}+\vec{\alpha }_\mu \cdot \vec{\mathrm{\Phi }}`$ and an $`SU(Kn_\mu )`$ instanton, which is a string in 6d, has tensor-multiplet charges $`\vec{\alpha }_\mu `$ and BPS tension $`Z_\mu =g_{\mu ,eff}^{-2}`$. Here the $`\vec{\alpha }_\mu `$ are the $`G`$ root vectors ($`\vec{\alpha }_0`$ is the extending root) and the condition that all $`g_{\mu ,eff}^{-2}\ge 0`$ is precisely that the Coulomb branch $`\vec{\mathrm{\Phi }}`$ is the $`G`$ Coxeter box, of side length $`M_s^2`$. The $`SU(Kn_\mu )`$ instanton string charges span the $`G`$ root lattice.
The instanton string for a diagonal $`SU(K)_D\subset \prod _{\mu =0}^rSU(Kn_\mu )`$, with index of embedding $`n_\mu `$ in $`SU(Kn_\mu )`$, has tension $`\sum _\mu n_\mu Z_\mu =M_s^2`$, and is identified with the fundamental IIB string. The other $`r`$ independent instanton strings in $`\prod _{\mu =0}^rSU(Kn_\mu )`$ are to be identified with the strings obtained by wrapping the type IIB 3-brane on the $`r`$ independent, fully collapsed, two-cycles of the $`\mathbb{C}^2/\mathrm{\Gamma }_G`$ singularity; $`m=1,\ldots ,r`$ of these strings become tensionless for $`\vec{\mathrm{\Phi }}`$ at a codimension $`m`$ boundary of the Coulomb branch Coxeter box. The simplest heterotic case is $`K`$ $`SO(32)`$ 5-branes at a $`\mathbb{C}^2/\mathrm{\Gamma }_G`$ singularity \[19--22\]. The theories are associated with a subgroup $`H`$ of the singularity group $`G`$, with $`G\to H`$ as $`SU(2P)\to Sp(P)`$, $`SO(4P+2)\to SO(4P+1)`$, $`SO(4P)\to SO(4P)`$, $`E_6\to F_4`$, $`E_7\to E_7`$, $`E_8\to E_8`$. The gauge group and matter content is given by a quiver diagram, which is the extended $`H`$ Dynkin diagram, with $`SO`$, $`Sp`$, and $`SU`$ groups at various nodes, e.g. the group at the $`\mu =0`$ node is $`Sp(K)`$. There are $`r=`$rank$`(H)`$ $`𝒩=(1,0)`$ tensor multiplets, which are associated with the group $`H`$. Via an anomaly cancellation mechanism, the gauge group at node $`\mu =0,\ldots ,r`$ of the quiver diagram has coupling $`g_{\mu ,eff}^{-2}=M_s^2\delta _{\mu ,0}+\vec{\alpha }_\mu \cdot \vec{\mathrm{\Phi }}`$, and an instanton string in this group has tensor multiplet charges $`\vec{\alpha }_\mu `$ and BPS tension $`Z_\mu =g_{\mu ,eff}^{-2}`$. Here $`\vec{\alpha }_\mu `$ are the simple and extending roots of $`H`$, so the instanton strings span the $`H`$ root lattice. Instantons in a diagonal $`Sp(K)_D`$ are identified with the fundamental heterotic string, of tension $`M_s^2`$.
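The statement that the diagonal instanton string always has tension $`M_s^2`$ follows from the extended-root identity $`\sum _\mu n_\mu \vec{\alpha }_\mu =0`$ together with $`Z_\mu =M_s^2\delta _{\mu ,0}+\vec{\alpha }_\mu \cdot \vec{\mathrm{\Phi }}`$, so the $`\vec{\mathrm{\Phi }}`$-dependence cancels. A quick numerical check for $`G=SU(3)`$ (root vectors in an orthonormal basis of our choosing; an illustration, not code from the paper):

```python
import math, random

# Simple roots of SU(3) (length^2 = 2, angle 120 degrees) in an
# orthonormal 2d basis; extending root alpha_0 = -(alpha_1 + alpha_2);
# all Dynkin indices n_mu = 1 for SU(3).
alpha1 = (math.sqrt(2), 0.0)
alpha2 = (-1 / math.sqrt(2), math.sqrt(1.5))
alpha0 = (-(alpha1[0] + alpha2[0]), -(alpha1[1] + alpha2[1]))
roots = [alpha0, alpha1, alpha2]
n_mu = [1, 1, 1]

Ms2 = 1.0                      # string scale M_s^2 in arbitrary units
random.seed(1)
# A generic point Phi (for the identity below, Phi need not lie
# inside the Coxeter box).
Phi = (random.uniform(0, 0.2), random.uniform(0, 0.2))

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

# BPS tensions Z_mu = g_{mu,eff}^-2 = Ms^2 delta_{mu,0} + alpha_mu . Phi
Z = [Ms2 * (mu == 0) + dot(roots[mu], Phi) for mu in range(3)]

# Diagonal instanton (index n_mu in each factor) = fundamental string:
# sum_mu n_mu Z_mu = Ms^2, independent of Phi.
total = sum(n * z for n, z in zip(n_mu, Z))
print(round(total, 12))  # → 1.0, i.e. Ms^2
```

The same cancellation works for any $`ADE`$ group, with the Dynkin indices $`n_\mu `$ of the extended diagram replacing the $`SU(3)`$ values $`(1,1,1)`$.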
The other $`r`$ independent instanton strings can again be identified with 3-branes wrapped on collapsed two-cycles; $`m=1,\ldots ,r`$ of these become tensionless at a codimension $`m`$ boundary of the Coulomb branch (the $`H`$ Coxeter box). The other heterotic case, $`K`$ $`E_8\times E_8`$ 5-branes at a $`\mathbb{C}^2/\mathrm{\Gamma }_G`$ singularity, leads to little string theories with a more involved spectrum of tensor multiplets, gauge groups, and matter content \[22,11\]. Compactifying on a circle, 6d vector and tensor multiplets both lead to 5d vector multiplets. A 6d $`𝒩=(1,0)`$ theory with a gauge group of rank $`r_V`$ and $`n_T`$ tensor multiplets, when compactified, leads to a 5d theory with a Coulomb branch moduli space of vacua of dimension $`d_C=r_V+n_T`$. Compactifying to 4d, the Coulomb branch has real dimension $`2(r_V+n_T)`$ and in 3d, upon dualizing the $`d_C`$ photons, there is a Coulomb branch of real dimension $`4(r_V+n_T)`$. Little string theories exhibit T-duality when compactified on a circle , with the (iia) theory on a circle of radius $`R`$ identical to the (iib) theory on a circle of radius $`1/M_s^2R`$. Similarly, the (o) heterotic theory, on a circle of radius $`R`$, and with a Wilson line around the circle breaking $`SO(32)`$ to $`SO(16)\times SO(16)`$, is identical to the (e) heterotic theory on a circle of radius $`1/M_s^2R`$, again with a Wilson line breaking $`E_8\times E_8`$ to $`SO(16)\times SO(16)`$. (See for the heterotic T-duality with general Wilson lines.) In these cases, T-duality exchanges 6d tensor and vector multiplets, $`r_V\leftrightarrow n_T`$. This is nice because the 5d classical kinetic term for the scalars coming from 6d tensor multiplets, $`M_s^4R(d\mathrm{\Phi })^2`$, is indeed exchanged with the kinetic term, $`g_6^{-2}R(R^{-1}d\mathrm{\Phi })^2`$, of a vector multiplet on a circle of radius $`R`$.
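That the two kinetic coefficients are exchanged under $`R\to (M_s^2R)^{-1}`$, with $`g_6^2=M_s^{-2}`$, is a one-line substitution; a trivial numerical check (units and the value of $`M_s^2`$ chosen by hand):

```python
# 5d scalar kinetic coefficients from a 6d tensor multiplet, M_s^4 R,
# and from a 6d vector multiplet on a circle, g_6^-2 R (R^-1)^2 = M_s^2 / R
# (using g_6^2 = M_s^-2). The T-duality R -> 1/(M_s^2 R) swaps them.
Ms2 = 2.0  # M_s^2 in arbitrary units

def k_tensor(R):
    return Ms2 ** 2 * R          # M_s^4 R

def k_vector(R):
    return Ms2 / R               # M_s^2 / R

def t_dual(R):
    return 1.0 / (Ms2 * R)

for R in (0.3, 1.0, 4.5):
    assert abs(k_tensor(t_dual(R)) - k_vector(R)) < 1e-12
    assert abs(k_vector(t_dual(R)) - k_tensor(R)) < 1e-12
print("kinetic terms exchanged under T-duality")
```

This makes the point of the following paragraph quantitative: the exchange of the classical kinetic terms only works once the dimensionful couplings are assigned consistently.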
In both cases $`\mathrm{\Phi }`$ is a compact scalar, normalized so $`\mathrm{\Phi }\in [0,1]`$, and the two kinetic terms are exchanged by $`R\to (M_s^2R)^{-1}`$ upon setting $`g_6^2=M_s^{-2}`$. More generally, there is an expected T-duality, with $`R\to (M_s^2R)^{-1}`$ exchanging the theories coming from IIA and IIB or $`SO(32)`$ and $`E_8\times E_8`$ heterotic branes at $`\mathbb{C}^2/\mathrm{\Gamma }_G`$ singularities. T-dual theories must have $`r_V+n_T=\stackrel{~}{r}_V+\stackrel{~}{n}_T`$. As was noted in \[25,11\], this is the case for the $`SO(32)`$ and $`E_8\times E_8`$ branes at singularities: both cases have $`r_V+n_T=C_2(G)K-|G|`$, where $`C_2(G)`$ is the dual Coxeter number of the singularity group $`G`$ and $`|G|`$ is its dimension. This formula will be important in what follows. A point of concern mentioned in is that a stronger condition, $`r_V=\stackrel{~}{n}_T`$ and $`n_T=\stackrel{~}{r}_V`$, needed for T-duality to exchange the classical kinetic terms as above, is not satisfied. The present situation is, in fact, closely connected to that of , where it was argued that T-duality can fail. Here, however, there is a simpler resolution: the Coulomb branch metric can get quantum corrections and, while the quantum corrected metrics are expected to agree, the classical metrics need not. The stronger condition is thus unnecessary. All little string theories, when compactified on $`T^D`$, have the parameter space $$O(D+y,D;\mathbb{Z})\backslash O(D+y,D)/O(D+y)\times O(D),$$ where $`y=0`$ for the type II cases and $`y=16`$ for the heterotic cases. These are the $`T^D`$ metric and $`B_{NS}`$ fields ($`D^2`$ real parameters), and also the $`SO(32)`$ or $`E_8\times E_8`$ Wilson lines in the heterotic cases ($`16D`$ real parameters). $`O(D+y,D;\mathbb{Z})`$ is the full $`T`$ duality group. 3.
Compactification preliminaries We first consider the classical dimensional reduction of a $`6d`$ $`U(1)`$ gauge field, $$\int d^6x\frac{1}{4g_6^2}F_{\mu \nu }F^{\mu \nu }+\int B_{NS}\wedge F\wedge F,$$ on a $`T^3`$ to three dimensions. $`B_{NS}`$ is an external, background, two-form gauge field. We take space to be $`\mathbb{R}^3\times T^3`$, with $`\mathbb{R}^3`$ coordinates $`x^i`$, $`i=1,2,3`$, and periodic coordinates $`\rho ^a\in [0,1]`$, $`a=1,2,3`$, for the $`T^3`$; the metric is $`ds^2=\delta _{ij}dx^idx^j+h_{ab}d\rho ^ad\rho ^b`$. Taking all fields to be independent of the $`T^3`$ coordinates $`\rho ^a`$, (3.1) becomes $$S=\int d^3x\left[\frac{\sqrt{\det h}}{g_6^2}\left(\frac{1}{4}F_{ij}F^{ij}+\frac{1}{2}(h^{-1})^{ab}\partial _i\varphi _a\partial ^i\varphi _b\right)+\theta ^a\epsilon ^{ijk}F_{ij}\partial _k\varphi _a\right],$$ where $`B_{NS}=\epsilon _{abc}\theta ^ad\rho ^b\wedge d\rho ^c`$ for some constants $`\theta ^a`$, $`a=1,2,3`$. The three real scalars $`\varphi _a`$ are associated with the Wilson lines of the gauge field around the cycles $`d\rho ^a`$ of the $`T^3`$ and are periodic, normalized so that $`\varphi _a\in [0,1]`$. The 3d $`U(1)`$ gauge field can be dualized to another real scalar, which also lives on a circle. This is done as in : we replace $`F_{ij}\to H_{ij}`$ in (3.1) and introduce an additional term $`\epsilon ^{ijk}H_{ij}\partial _k\varphi _4`$, with the scalar $`\varphi _4`$ periodic, normalized so that $`\varphi _4\in [0,1]`$. Integrating out $`\varphi _4`$ first leads back to the original theory; integrating out $`H`$ first sets $`F_{ij}=0`$ and leads to $`\varphi _4`$ kinetic terms. Combining with the $`\varphi _a`$ kinetic terms in (3.1), the upshot is a $`T^4`$ Coulomb branch moduli space of vacua $`\varphi _A`$, $`A=1,\dots ,4`$, with metric $$ds^2=\frac{\sqrt{\det h}}{g_6^2}(h^{-1})^{ab}d\varphi _ad\varphi _b+\frac{g_6^2}{\sqrt{\det h}}(d\varphi _4-\theta ^ad\varphi _a)^2\equiv G^{AB}d\varphi _Ad\varphi _B,$$ where $`a`$ runs over $`1,2,3`$ and $`A=1,\dots ,4`$. 
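As a concrete check of this metric, the sketch below (an illustration under the stated conventions, not code from the paper) assembles $`G^{AB}`$ from random $`T^3`$ data $`(h_{ab},\theta ^a)`$ and verifies that its volume is $`1/g_6^2`$ independent of that data:

```python
import numpy as np

# Illustration: build the T^4 Coulomb-branch metric from T^3 data (h_ab, theta^a)
# and check Volume(T^4) = sqrt(det G) = 1/g_6^2 for any choice of the data.
rng = np.random.default_rng(0)
g6sq = 0.25                                  # g_6^2 (assumed value), so 1/g_6^2 = 4

A = rng.normal(size=(3, 3))
h = A @ A.T + 3.0 * np.eye(3)                # random positive-definite T^3 metric
theta = rng.normal(size=3)                   # B_NS moduli

dh = np.sqrt(np.linalg.det(h))
G = np.zeros((4, 4))
G[:3, :3] = (dh / g6sq) * np.linalg.inv(h)   # Wilson-line block
G[:3, :3] += (g6sq / dh) * np.outer(theta, theta)  # from (dphi_4 - theta^a dphi_a)^2
G[:3, 3] = G[3, :3] = -(g6sq / dh) * theta
G[3, 3] = g6sq / dh

volume = np.sqrt(np.linalg.det(G))
assert abs(volume - 1.0 / g6sq) < 1e-8       # = M_s^2, independent of h and theta
```

With a diagonal $`h_{ab}`$ and $`\theta ^a=0`$ the same construction also makes the relabeling symmetry discussed in the text explicit: swapping the last two $`T^4`$ coordinates reproduces a two-circle T-duality.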
This metric $`G^{AB}`$ has 10 real components, which depend on the 9 real parameters $`h_{ab}`$ and $`\theta ^a`$, and thus satisfies one constraint. The relation is that the $`T^4`$ of (3.1) has fixed volume, independent of the $`h_{ab}`$ and the $`\theta ^a`$: $$\mathrm{Volume}(T^4)=\sqrt{\det (G^{AB})}=\frac{1}{g_6^2}=M_s^2.$$ Although the above discussion was purely classical, the map (3.1) between the $`T^3`$ metric $`h_{ab}`$ and $`B`$ fields $`\theta ^a`$ is exactly the relevant one for relating type IIB string theory on $`T^3`$ to M theory on $`T^4`$, to be discussed in the next section. Indeed, the map (3.1) was also obtained in in the context of the compactified $`(2,0)`$ theory via a chain of string duality gymnastics. We pause to note that the metric (3.1) nicely exhibits properties to be expected based on its connection to M theory. In particular, the obvious, geometric $`SL(4;\mathbb{Z})`$ discrete symmetries of the $`T^4`$ correspond to non-trivial T-dualities, in a subgroup of the T-duality group appearing in (2.1). For example, consider the obvious requirement that the $`T^4`$ be invariant under the relabeling exchange $`\varphi _3\leftrightarrow \varphi _4`$. Taking, for simplicity, $`T^3`$ with $`h_{ab}=L_a^2\delta _{ab}`$ and $`\theta _a=0`$, it follows from (3.1) that this exchange corresponds to the operation $$L_1\to (M_s^2L_2)^{-1},\quad L_2\to (M_s^2L_1)^{-1},\quad L_3\to L_3,$$ where we set $`g_6^{-2}=M_s^2`$. This is a T-duality in two circles, which is non-trivial but, nevertheless, a symmetry taking the IIA or IIB theory back to itself. The generalization of the T-duality (3.1) for general $`h_{ab}`$ and $`\theta _a`$ is quite complicated, see e.g. ; remarkably, it is indeed reproduced from (3.1) by simply requiring the $`\varphi _3\leftrightarrow \varphi _4`$ symmetry. On the other hand, T-duality in an odd number of cycles, such as the $`O(3,3;\mathbb{Z})`$ element taking all $`L_i\to (M_s^2L_i)^{-1}`$, for $`i=1,2,3`$, is not a geometric $`SL(4;\mathbb{Z})`$ symmetry of (3.1). 
This is sensible, since such operations are not symmetries of IIA or IIB string compactifications but, rather, exchange IIA and IIB. In particular, starting instead from a 6d tensor multiplet, dimensional reduction on a $`T^3`$ leads to a $`T^4`$ Coulomb branch moduli space, with metric related to (3.1) by T-duality in an odd number of the $`T^3`$ cycles, corresponding to the exchange of IIA and IIB. Now consider $`T^3`$ reduction of a 6d $`SU(2)`$ gauge theory. The above discussion for $`U(1)`$ carries over to this case with almost no changes. The only difference is that the real scalars $`\varphi _A`$ must be modded out by the Weyl group action $`\varphi _A\to -\varphi _A`$. Modding out the $`T^4`$ by this $`\mathbb{Z}_2`$ action leads to a $`K3`$. Thus the Coulomb branch of a 6d $`SU(2)`$ gauge theory reduced to 3d on a $`T^3`$ is given by $`\varphi _A`$ in a compact K3. The volume of the K3 is again set by $`g_6^2`$, and equal to $`M_s^2`$. The full parameter space of K3 metrics of fixed volume is 57 dimensional and given by (2.1) with $`D=3`$ and $`y=16`$, while that obtained here only depends on the 9 dimensional subspace given by (2.1) with $`D=3`$ and $`y=0`$. The remaining parameters will come from 3 real masses for each of 16 $`SU(2)`$ fundamental matter flavors; these enter as the Wilson loop parameters in (2.1). 4. The probe argument, checks, and comments The parameter space (2.1) for $`T^3`$ compactified heterotic (or type II) little string theories coincides with the geometric parameter space of a K3 (or $`T^4`$) of fixed volume. These are, of course, the standard miracles which enter in the duality between the 10d heterotic (or type II) string on $`T^3`$ and M theory on $`Y=`$K3 (or $`T^4`$). The fundamental string arises as the M5 brane wrapped on $`Y`$, so $`M_p^6\mathrm{Vol}(Y)=M_s^2`$. 
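The dimension counts quoted here follow from the coset (2.1); as a quick sketch (not from the paper), one can check $`\mathrm{dim}[O(D+y,D)/O(D+y)\times O(D)]=(D+y)D`$ for the relevant cases:

```python
# Illustration: dimension of the coset appearing in (2.1) is (D+y)*D.
def coset_dim(D, y):
    dim_O = lambda k: k * (k - 1) // 2     # dim O(k)
    return dim_O(2 * D + y) - dim_O(D + y) - dim_O(D)

assert coset_dim(3, 16) == 57              # fixed-volume K3 metrics (heterotic case)
assert coset_dim(3, 0) == 9                # T^3 metric plus B-field data (type II case)
assert coset_dim(3, 16) - coset_dim(3, 0) == 16 * 3   # the 3 masses per 16 flavors
```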
We will here extend the probe argument of to argue that these M theory dualities provide the solution for the exact, quantum, Coulomb branch metric of the $`T^3`$ compactified little string theories. Recall that the argument of started with 3d $`𝒩=4`$ supersymmetric (8 supercharges) $`SU(2)`$ gauge theory with fundamental matter, which is the world-volume field theory in a D2 brane in type I’ string theory on $`T^3`$. This maps to a M2 brane in M theory on K3, which can be at an arbitrary point in the transverse $`\mathbb{R}^4\times K3`$. The $`\mathbb{R}^4`$ corresponds to a decoupled hypermultiplet in the world-volume theory. The $`K3`$ factor is more interesting: it was thus argued in that the full, quantum-corrected metric on the Coulomb branch of the D2 brane world-volume field theory must be a local piece of the corresponding K3; this was confirmed in purely in the context of 3d field theory. The D2 brane world-volume field theory only locally probes the K3 because of the particular limit taken to decouple the bulk dynamics: $`g_s\to 0`$ and $`M_s\to \infty `$. On the other hand, we can take $`g_s\to 0`$, but with $`M_s`$ held fixed. This theory is precisely the 6d heterotic little string theory (o), compactified to 3d on the same $`T^3`$ as the 10d heterotic or type I’ bulk theory. The $`T^3`$ compactified little string theory (o) globally probes the fixed volume K3 of M theory, and must thus have a Coulomb branch moduli space of vacua which is the same K3. The geometric K3 has volume $`M_s^2M_p^{-6}`$ and, taking into account how the properly normalized Coulomb moduli scalars probe geometry, the volume of the Coulomb branch K3 is $`M_s^2`$. This matches with the result of the previous section. This compact Coulomb branch properly becomes non-compact in the field theory limit $`M_s^2\to \infty `$. The K3 Coulomb branch can have singularities, depending on the choice of parameters in (2.1). 
As in , these singularities mark the intersection of the Coulomb branch with a Higgs branch, with an interacting 3d infra-red conformal field theory at the intersection. Unfortunately, both sides in the present equivalence, between the quantum Coulomb branch of the $`T^3`$ compactified little string theory on the one hand, and the metric of K3 on the other, are presently not well understood. Perhaps the present equivalence will eventually be useful for using one of the two sides to learn about the other. A direct generalization of the above is to consider a $`T^3`$ compactification of the little string theory (o) associated with $`K`$ $`SO(32)`$ heterotic small instantons. This maps to $`K`$ M2 branes at points on $`\mathbb{R}^4\times K3`$. The Coulomb branch is, correspondingly, the symmetric product $`(K3)^K/S_K`$, where each K3 is again of fixed volume $`M_s^2`$. The geometric symmetries (see, e.g. ) of the Coulomb branch K3 correspond to non-trivial T-dualities in (2.1) though, as in (3.1), only the subgroup which takes the $`SO(32)`$ heterotic theory back to itself. An additional $`\mathbb{Z}_2`$ component of T-dualities in $`O(19,3;\mathbb{Z})`$ reflects the fact that compactifying instead the $`E_8\times E_8`$ heterotic little string (e), with T-dual $`T^3`$ compactification data, also yields the same 3d theory, with the same K3 compact Coulomb branch as described above. Each of the little string theories reviewed in sect. 2 can be compactified to 3d on a $`T^3`$, and each has an exact quantum Coulomb branch which globally probes the dual M theory compactification. In each case, there is a compact Coulomb branch component, with unit volume in units of $`M_s`$. The 3d field theory limit is recovered by taking $`M_s\to \infty `$. 
The $`𝒩=(1,1)`$ little string theories with group $`U(K)`$, when compactified on $`T^3`$, have a Coulomb branch which is $`(\mathbb{R}^4\times T^4)^K/S_K`$ (more generally, $`(\mathbb{R}^4\times T^4)^{\mathrm{rank}(G)}/\mathrm{Weyl}(G)`$), which probes the duality between type II strings on $`T^3`$ and M theory on $`T^4`$. The Coulomb branch $`T^4`$ has metric $`G^{AB}`$ which is given exactly in terms of the $`T^3`$ compactification data by (3.1), with volume $`M_s^2`$. There is a similar statement for the $`𝒩=(2,0)`$ little string theory on $`T^3`$, differing from the $`𝒩=(1,1)`$ case by a T-duality in one of the $`T^3`$ cycles; the fixed volume $`T^4`$ in this context was also discussed in \[15,16\]. The $`𝒩=(1,0)`$ little string theories associated with $`K`$ type II or heterotic 5-branes at an $`X_G\equiv \mathbb{C}^2/\mathrm{\Gamma }_G`$ singularity, when compactified on $`T^3`$, similarly probe M theory geometry. In the heterotic (or type II) cases, the M theory dual is given by $`K`$ M2 branes with a transverse space $`X_G\times K3`$ (or $`X_G\times T^4`$). In both cases, M theory with an $`X_G`$ singularity has an enhanced $`G`$ gauge symmetry and M2 branes, when sitting directly on top of the $`G`$ singularity of $`X_G`$, can be interpreted as small $`G`$ instantons. In the heterotic (or type II) cases, these $`K`$ $`G`$-instantons have the fixed volume K3 (or $`T^4`$) as their four spatial coordinates. There is a moduli space for these instantons given by their positions in these four spatial coordinates, as well as their moduli for fattening up and rotating in $`G`$. Thus, by the probe argument, the little string theory associated with $`K`$ heterotic (or type II) branes at a $`\mathbb{C}^2/\mathrm{\Gamma }_G`$ singularity, when compactified on a $`T^3`$, has a compact Coulomb branch moduli space of vacua which is exactly given by the moduli space of $`K`$ $`G`$-instantons on a K3 (or $`T^4`$) of volume $`M_s^2`$. 
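As a numerical companion to this identification (an illustrative sketch, not from the paper; the group data below are standard), the instanton moduli dimension $`4KC_2(G)-\frac{1}{2}|G|(\chi +\sigma )`$ can be tabulated against the Coulomb branch counting $`4(r_V+n_T)`$ for a few $`A,D,E`$ groups:

```python
# Illustration: compare the instanton moduli-space dimension on T^4 and K3
# with the Coulomb-branch counting quoted in the text.
# (dual Coxeter number C2(G), dimension |G|) for some A, D, E groups:
groups = {"SU(2)": (2, 3), "SU(3)": (3, 8), "SO(8)": (6, 28), "E8": (30, 248)}

def instanton_dim(K, C2, dimG, chi, sigma):
    """dim of moduli space of K G-instantons on a 4-manifold with (chi, sigma)."""
    return 4 * K * C2 - dimG * (chi + sigma) // 2

for C2, dimG in groups.values():
    for K in (1, 2, 5):
        # T^4 (chi = sigma = 0): 4 K C2(G), matching the type II counting.
        assert instanton_dim(K, C2, dimG, 0, 0) == 4 * K * C2
        # K3 (chi = 24, sigma = -16): 4 (K C2(G) - |G|), the heterotic counting.
        assert instanton_dim(K, C2, dimG, 24, -16) == 4 * (K * C2 - dimG)
```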
A quick check is that the dimension of the Coulomb branch of the $`T^3`$ compactified little string theories indeed agrees with the dimension of the moduli space of $`K`$ $`G`$-instantons<sup>3</sup> For a general 4 manifold with Euler character $`\chi `$ and signature $`\sigma `$, the dimension is $`4KC_2(G)-\frac{1}{2}|G|(\chi +\sigma )`$. For $`T^4`$, $`\chi =\sigma =0`$ and, for K3, $`\chi =24`$ and $`\sigma =-16`$. on $`T^4`$ or K3: the type II cases indeed have $`4(r_V+n_T)=4KC_2(G)`$ and the heterotic cases indeed have $`4(r_V+n_T)=4(KC_2(G)-|G|)`$. This latter fact also played a role in the mirror symmetry of . Another check is to consider the limit $`M_s^2\to \infty `$, where $`T^4\to \mathbb{R}^4`$ or $`K3`$ becomes a non-compact piece of K3, and where the compactified little string theory goes over to its 3d field theory limit. In the type II cases, the resulting 3d field theory has the quiver gauge group $`\prod _{\mu =0}^rU(Kn_\mu )`$, based on the extended $`G`$-Dynkin diagram, which was indeed argued in \[30,31\] to have a quantum Coulomb branch which is the moduli space of $`K`$ $`G`$-instantons on $`\mathbb{R}^4`$. This theory was argued in to have a hidden, global $`G`$ symmetry. Because $`M`$ theory has $`G`$ gauge symmetry even for finite $`M_s`$, the full compactified little string theory is expected to also have this hidden global symmetry. Similar statements should hold in the heterotic cases. Moduli spaces of instantons on $`T^4`$ or $`K3`$ have made a variety of appearances in physics and mathematics, though usually with $`G=U(N)`$ as the gauge group. In that case, the moduli space also depends on $`v_a=\int _{\mathrm{\Sigma }_a}\mathrm{Tr}F`$, where $`\mathrm{\Sigma }_a`$ is a basis for the two cycles of $`T^4`$ or K3. In the present case, $`G`$ is a simple $`A,D,E`$ group so $`\mathrm{Tr}F=0`$. ($`B`$ fields can possibly still contribute to $`v_a\ne 0`$, e.g. as in .) The moduli spaces of instantons obtained above have many interesting singularities. 
At these Coulomb branch singularities, there is an attached Higgs branch, with an interacting 3d IR CFT at the intersection. All of the above compact Coulomb branches are hyper-Kähler, with $`c_1=0`$, and the sigma model metric is the unique one which is Ricci-flat. 5. $`T^2`$ compactification: probing $`F`$ theory Compactifying the heterotic (or type II) little string theories to 4d on a $`T^2`$ leads to quantum Coulomb branches which globally probe $`F`$ theory compactifications on a fixed volume, elliptically fibered K3 (or $`T^4`$). For example, consider the $`K=1`$ case of the heterotic little string theory (o), whose low-energy field theory content is that of the world-volume of D3 branes in type I’ on $`T^2`$. This latter theory has a non-compact, quantum Coulomb branch which was argued \[32,4\] to locally probe the duality to F-theory on an elliptically fibered K3. The Coulomb branch in the 4d field theory is the non-compact complex $`u`$ plane, over which the photon coupling $`\tau _{eff}(u)`$ is fibered according to the Seiberg-Witten curve; the total space of $`u`$-plane base and $`\tau _{eff}(u)`$ fiber is a local, non-compact piece of K3. This is the $`M_s\to \infty `$ limit of the $`T^2`$ compactified little string theory. Considering now the $`T^2`$ compactified little string theory for finite $`M_s^2`$, the $`u`$-plane base is a compact box of volume $`M_s^2`$ (this is the correct mass dimension for 4d scalars). As in sect. 3, reducing a 6d $`U(1)`$ gauge field on a $`T^2`$ with metric $`h_{ab}d\rho ^ad\rho ^b`$ leads to scalars living on a dual $`T^2`$, with metric $`g_6^{-2}\sqrt{\det h}(h^{-1})^{ab}d\varphi _ad\varphi _b`$, which has volume $`g_6^{-2}=M_s^2`$ for all $`h_{ab}`$. For $`SU(2)`$ rather than $`U(1)`$, we mod out by the Weyl group $`\varphi _a\to -\varphi _a`$, yielding a 2d box of volume $`M_s^2`$. 
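The claim that the dual $`T^2`$ volume is $`g_6^{-2}=M_s^2`$ for every $`h_{ab}`$ is easy to check numerically; the following sketch (an illustration, not from the paper) does so for a handful of random metrics:

```python
import numpy as np

# Illustration: the dual-torus metric g_6^{-2} sqrt(det h) (h^{-1})^{ab}
# has volume g_6^{-2} = M_s^2, independent of the T^2 metric h_ab.
rng = np.random.default_rng(1)
Msq = 3.0                                   # M_s^2 = g_6^{-2} (assumed value)
for _ in range(5):
    A = rng.normal(size=(2, 2))
    h = A @ A.T + np.eye(2)                 # random positive-definite T^2 metric
    dual = Msq * np.sqrt(np.linalg.det(h)) * np.linalg.inv(h)
    assert abs(np.sqrt(np.linalg.det(dual)) - Msq) < 1e-9
```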
Considering the elliptic fiber $`\tau (u)`$ over the compact base as a dimensionless coordinate, the total space of base and fiber is an elliptically fibered K3 of volume $`M_s^2`$. This elliptically fibered K3 of fixed volume is that of the F-theory dual to the 10d heterotic string on $`T^2`$. As was the case there, the parameter space (2.1) of data in the $`T^2`$ compactification of the heterotic string matches that of the fixed volume, elliptically fibered K3s. This can be regarded as a special case of the $`T^3`$ compactification considered in the previous sections, where one of the radii is taken to infinity. It is thus good that we again get a K3 of volume $`M_s^2`$, since that was the case in the previous sections for all radii. More generally, $`T^2`$ compactifying the little string theories associated with $`K`$ type II 5-branes at a $`\mathbb{C}^2/\mathrm{\Gamma }_G`$ singularity leads to a compact Coulomb branch which is a $`2(r_V+n_T)=2KC_2(G)`$ dimensional torus of unit volume in units of $`M_s`$. Including the $`KC_2(G)`$ complex dimensional elliptic fiber, associated with the kinetic terms of the $`KC_2(G)`$ photons, the total space is the moduli space of $`K`$ $`G`$-instantons on a $`T^4`$ of volume $`M_s^2`$, where both the $`T^4`$ and the resulting instanton moduli space are regarded as an elliptic fibration. Compactifying on a $`T^2`$ the little string theories associated with $`K`$ heterotic 5-branes at a $`\mathbb{C}^2/\mathrm{\Gamma }_G`$ singularity leads to a compact Coulomb branch which is a $`2(r_V+n_T)=2(KC_2(G)-|G|)`$ dimensional box, of unit volume in units of $`M_s`$. Including the fiber associated with the photons, the total space is an elliptically fibered space which is exactly the moduli space of $`K`$ $`G`$-instantons on an elliptically fibered K3 of volume $`M_s^2`$. Acknowledgments I would like to thank M. Douglas, D. Morrison, R. Plesser, and N. Seiberg for discussions. 
This work was supported by UCSD grant DOE-FG03-97ER40546 and IAS grant NSF PHY-9513835.
References
1. E. Witten, hep-th/9503124, Nucl. Phys. B443 (1995) 85.
2. E. Witten, hep-th/9511030, Nucl. Phys. B460 (1996) 541.
3. M.R. Douglas, hep-th/9512077.
4. T. Banks, M. Douglas, and N. Seiberg, hep-th/9605199, Phys. Lett. B387 (1996) 278.
5. T. Banks, W. Fischler, S.H. Shenker, and L. Susskind, hep-th/9610043, Phys. Rev. D55 (1997) 5112.
6. M. Berkooz, M. Rozali, and N. Seiberg, hep-th/9704089, Phys. Lett. B408 (1997) 105.
7. N. Seiberg, hep-th/9705221, Phys. Lett. B408 (1997) 98.
8. J. Maldacena and A. Strominger, hep-th/9710014, JHEP 9712 (1997) 008.
9. A. Peet and J. Polchinski, hep-th/9809022, Phys. Rev. D59 (1999) 065011.
10. S. Minwalla and N. Seiberg, hep-th/9904142, JHEP 9906 (1999) 007.
11. K. Intriligator, hep-th/9708117, Adv. Theor. Math. Phys. 1 (1997) 271.
12. C. Vafa, hep-th/9602022, Nucl. Phys. B469 (1996) 403.
13. N. Seiberg, hep-th/9606017, Phys. Lett. B384 (1996) 81.
14. O.J. Ganor and S. Sethi, hep-th/9712071, JHEP 01 (1998) 007.
15. Y.K. Cheung, O.J. Ganor, and M. Krogh, hep-th/9805045, Nucl. Phys. B536 (1998) 175.
16. Y.E. Cheung, O.J. Ganor, M. Krogh, and A.Y. Mikhailov, hep-th/9812172.
17. E. Witten, seminar at Aspen Center for Physics, Aug. ’97.
18. E. Witten, hep-th/9710065, Adv. Theor. Math. Phys. 2 (1998) 61.
19. P.S. Aspinwall, hep-th/9612108, Nucl. Phys. B496 (1997) 149.
20. K. Intriligator, hep-th/9702038, Nucl. Phys. B496 (1997) 177.
21. J.D. Blum and K. Intriligator, hep-th/9705030, Nucl. Phys. B506 (1997) 223; J.D. Blum and K. Intriligator, hep-th/9705044, Nucl. Phys. B506 (1997) 199.
22. P.S. Aspinwall and D.R. Morrison, hep-th/9705104, Nucl. Phys. B503 (1997) 533.
23. E. Witten, hep-th/9507121, Proc. Strings ’95.
24. P. Ginsparg, Phys. Rev. D35 (1987) 648.
25. E. Perevalov and G. Rajesh, hep-th/9706005, Phys. Rev. Lett. 79 (1997) 2931.
26. P.S. Aspinwall and M.R. Plesser, hep-th/9905036, JHEP 9908 (1999) 001.
27. N. Seiberg and E. Witten, hep-th/9607163.
28. A. Giveon, M. Porrati, and E. Rabinovici, hep-th/9401139, Phys. Rept. 244 (1994) 77.
29. P.S. Aspinwall and D.R. Morrison, hep-th/9404151.
30. K. Intriligator and N. Seiberg, hep-th/9607207, Phys. Lett. B387 (1996) 513.
31. J. de Boer, K. Hori, H. Ooguri, and Y. Oz, hep-th/9611063, Nucl. Phys. B493 (1997) 101.
32. A. Sen, hep-th/9605150, Nucl. Phys. B475 (1996) 562.
33. N. Seiberg and E. Witten, hep-th/9408009, Nucl. Phys. B431 (1994) 484.
# Pulsar Constraints on Neutron Star Structure and Equation of State ## I Introduction The sudden spin jumps, or glitches, commonly seen in isolated neutron stars are thought to represent angular momentum transfer between the crust and the liquid interior . In this picture, as a neutron star’s crust spins down under magnetic torque, differential rotation develops between the stellar crust and a portion of the liquid interior. The more rapidly rotating component then acts as an angular momentum reservoir which occasionally exerts a spin-up torque on the crust as a consequence of an instability. The Vela pulsar, one of the most active glitching pulsars, typically undergoes fractional changes in rotation rate of $`10^{-6}`$ every three years on average. With the Vela pulsar having exhibited 13 glitches, meaningful study of the statistical properties of these events is now possible. In this Letter we study the time distribution of Vela’s glitches and determine the average angular momentum transfer rate in Vela and in six other pulsars. We present evidence that glitches in Vela represent a self-regulating instability for which the star prepares over a waiting interval. We obtain a lower limit on the fraction of the star’s liquid interior responsible for glitches. Assuming that glitches are driven by the liquid residing in the inner crust, as in most glitch models, we show that Vela’s ‘radiation radius’ is $`R_{\infty }\gtrsim 12`$ km for a mass of $`1.4M_{\odot }`$. Future measurements of neutron star radii will check the universality of this constraint and hence test our understanding of neutron star structure and the origin of glitches. ## II Regularity of Angular Momentum Transfer A glitch of magnitude $`\mathrm{\Delta }\mathrm{\Omega }_i`$ requires angular momentum $$\mathrm{\Delta }J_i=I_c\mathrm{\Delta }\mathrm{\Omega }_i,$$ (1) where $`I_c`$ is the moment of inertia of the solid crust plus any portions of the star tightly coupled to it. 
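For scale, eq. (1) can be evaluated with rough numbers (an illustration, not from the paper; the value of $`I_c`$ below is an assumed typical neutron-star moment of inertia):

```python
# Illustration: angular momentum of one Vela-sized glitch via eq. (1).
I_c = 1.0e45                 # g cm^2, assumed typical crust-coupled moment of inertia
Omega = 70.4                 # rad/s, Vela's average spin rate
frac_glitch = 1.0e-6         # typical fractional glitch size, dOmega/Omega
dJ = I_c * frac_glitch * Omega   # = I_c * dOmega, in erg s
assert 6.0e40 < dJ < 8.0e40      # about 7e40 erg s per glitch
```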
Most of the core liquid is expected to couple tightly to the star’s solid component, so that $`I_c`$ makes up at least 90% of the star’s total moment of inertia. Glitches are driven by the portion of the liquid interior that is differentially rotating with respect to the crust. The cumulative angular momentum imparted to the crust over time is $$J(t)=I_c\overline{\mathrm{\Omega }}\sum _i\frac{\mathrm{\Delta }\mathrm{\Omega }_i}{\overline{\mathrm{\Omega }}},$$ (2) where $`\overline{\mathrm{\Omega }}=70.4`$ rad s<sup>-1</sup> is the average spin rate of the crust over the period of observations. Fig. 1 shows the cumulative dimensionless angular momentum, $`J(t)/I_c\overline{\mathrm{\Omega }}`$, over $`30`$ years of glitch observations of the Vela pulsar, with a linear least-squares fit. The average rate of angular momentum transfer associated with glitches is $`I_c\overline{\mathrm{\Omega }}A`$, where $`A`$ is the slope of the straight line in Fig. 1: $$A=(6.44\pm 0.19)\times 10^{-7}\mathrm{yr}^{-1}.$$ (3) This rate $`A`$ is often referred to as the pulsar activity parameter. The angular momentum flow is extremely regular; none of Vela’s 13 glitches caused the cumulative angular momentum curve to deviate from the linear fit shown in Fig. 1 by more than 12%. To assess the likelihood that the linear trend could have arisen by chance, we tested the statistical robustness of this result. We generated many sets of simulated data in which the occurrence times of the glitches remained as observed, but the magnitudes of the 13 glitches were randomly shuffled. We compared the observed $`\chi ^2`$ to those for the deviations of the randomly shuffled data from linear fits. The $`\chi ^2`$ for the shuffled data was less than the real $`\chi ^2`$ in only $`1.4`$% of cases, strongly suggesting that the rate of angular momentum flow associated with glitches is reasonably constant. Additionally, the near uniformity of the intervals between the glitches in Fig. 
1 suggests that glitches occur at fairly regular time intervals. The standard deviation in observed glitch intervals is $`0.53\mathrm{\Delta }t`$, where $`\mathrm{\Delta }t=840`$ d is the average glitch time interval. The probability of 13 randomly-spaced (Poisson) events having a standard deviation less than that observed is only $`1`$%. The data of Fig. 1 indicate that Vela’s glitches are not random, but represent a self-regulating process which gives a relatively constant flow of angular momentum to the crust with glitches occurring at fairly regular time intervals. ## III The Glitch Reservoir’s Moment of Inertia The average rate of angular momentum transfer in Vela’s glitches constrains the properties of the angular momentum reservoir that drives the spin jumps. In particular, the frequent occurrence of large glitches requires that a significant fraction of the interior superfluid spins at a higher rate than the crust of the star. Between glitches, the reservoir acquires excess angular momentum as the rest of the star slows under the magnetic braking torque acting on the crust. Excess angular momentum accumulates at the maximum rate if the reservoir does not spin down between glitches. Hence, the rate at which the reservoir accumulates angular momentum capable of driving glitches is limited by $$\dot{J}_{\mathrm{res}}\le I_{\mathrm{res}}|\dot{\mathrm{\Omega }}|,$$ (4) where $`\dot{\mathrm{\Omega }}`$ is the average spin-down rate of the crust, and $`I_{\mathrm{res}}`$ is the moment of inertia of the angular momentum reservoir (not necessarily one region of the star). 
Equating $`\dot{J}_{\mathrm{res}}`$ to the average rate of angular momentum transfer to the crust, $`I_c\overline{\mathrm{\Omega }}A`$, gives the constraint, $$\frac{I_{\mathrm{res}}}{I_c}\ge \frac{\overline{\mathrm{\Omega }}}{|\dot{\mathrm{\Omega }}|}A\equiv G,$$ (5) where the coupling parameter $`G`$ is the minimum fraction of the star’s moment of inertia that stores angular momentum and imparts it to the crust in glitches. Using the observed value of Vela’s activity parameter $`A`$ and $`\overline{\mathrm{\Omega }}/|\dot{\mathrm{\Omega }}|=22.6`$ kyr, we obtain the constraint $$\frac{I_{\mathrm{res}}}{I_c}\ge G_{\mathrm{Vela}}=1.4\%.$$ (6) A similar analysis for six other pulsars yields the results shown in Fig. 2. An earlier analysis of glitches in Vela gave $`I_{\mathrm{res}}/I_c\ge 0.8`$%. After Vela, the most significant limit is obtained from PSR 1737-30 which gives $`I_{\mathrm{res}}/I_c\ge G_{1737}=1`$%. The similarity of $`G`$ for the five objects of intermediate age suggests that glitches in all these objects are driven by internal components with about the same fractional moment of inertia. In terms of $`G`$, the Crab pulsar and PSR 0525+21 appear to be unusual. It may be that the Crab’s angular momentum reservoir loses its excess angular momentum between glitches, perhaps through thermal creep of superfluid vortices (see, e.g., ). The value of $`G`$ for PSR 0525+21 is not well determined, since only two glitches from this object have been measured. ## IV Implications for the Dense Matter Equation of State The constraint $`I_{\mathrm{res}}/I_c\ge 1.4`$% for Vela applies regardless of where in the star glitches originate. Many glitch models, however, assume that the internal angular momentum reservoir is the superfluid that coexists with the inner crust lattice , where the pinning of superfluid vortex lines sustains a velocity difference between the superfluid and the crust. Here we explore the implications of this interpretation. 
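The number in eq. (6) follows directly from eq. (5) and the quoted observational values; a minimal check (not code from the paper):

```python
# Check of eqs. (5)-(6): G = (Omega_bar / |Omega_dot|) * A for the Vela pulsar.
A = 6.44e-7                  # yr^-1, activity parameter from the linear fit, eq. (3)
spindown_time = 22.6e3       # yr, Omega_bar / |Omega_dot|
G = spindown_time * A
assert abs(G - 0.014) < 0.001    # G_Vela = 1.4%
```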
We begin by describing how the moment of inertia of the superfluid in the neutron star crust relates to the nuclear matter equation of state (EOS) and the observable properties of neutron stars. Ravenhall & Pethick have shown that, for various equations of state, the total moment of inertia $`I`$ is given by the approximate expression $$\left[1+\frac{2GI}{R^3c^2}\right]I\simeq \frac{8\pi }{3}\int _0^Rr^4(\rho +P/c^2)e^\lambda dr\equiv \stackrel{~}{J},$$ (7) where $`\rho `$ is the mass-energy density, $`P`$ is the pressure, and $`e^\lambda `$ is the local gravitational redshift. This expression, which holds in the limit of slow rotation, defines the integral $`\stackrel{~}{J}`$. This integral can be evaluated following Lattimer & Prakash who noted that $`\rho \propto 1-(r/R)^2`$ throughout most of the interior of a neutron star (but not in the crust), for all commonly-used equations of state. With this approximation, it can be shown that $$\stackrel{~}{J}\simeq \frac{2}{7}MR^2\mathrm{\Lambda },$$ (8) where $`\mathrm{\Lambda }\equiv (1-2GM/Rc^2)^{-1}`$ and $`M`$ is the total stellar mass. Equation (7) can also be used to determine the moment of inertia of the crust plus liquid component. In the crust $`P`$ is $`\ll \rho c^2`$, and the TOV equation is $$\frac{dP}{dr}\simeq -GM\rho (r)\frac{e^\lambda }{r^2}.$$ (9) Using this approximation in eq. (7) gives the fraction of the star’s moment of inertia contained in the solid crust (and the neutron liquid that coexists with it): $$\frac{\mathrm{\Delta }I}{I}\simeq \frac{8\pi }{3\stackrel{~}{J}}\int _{R-\mathrm{\Delta }R}^R\rho r^4e^\lambda dr\simeq \frac{8\pi }{3\stackrel{~}{J}GM}\int _0^{P_t}r^6dP.$$ (10) Here $`\mathrm{\Delta }R`$ is the radial extent of the crust and $`P_t`$ is the pressure at the crust-core interface. A similar approximation is obtained in Ref. 7 (equation 17); either approximation is adequate for the estimates we are making here. 
In most of the crust, the equation of state has the approximately polytropic form $`P\propto \rho ^{4/3}`$, giving $$\int _0^{P_t}r^6dP\simeq P_tR^6\left[1+\frac{8P_t}{n_tm_nc^2}\frac{4.5+(\mathrm{\Lambda }-1)^{-1}}{\mathrm{\Lambda }-1}\right]^{-1},$$ (11) where $`n_t`$ is the density at the core-crust transition and $`m_n`$ is the neutron mass. $`\mathrm{\Delta }I/I`$ can thus be expressed as a function of $`M`$ and $`R`$ with an additional dependence upon the EOS arising through the values of $`P_t`$ and $`n_t`$. However, $`P_t`$ is the main EOS parameter as $`n_t`$ enters chiefly via a correction term. In general, the EOS parameter $`P_t`$ varies over the range $`0.25<P_t<0.65`$ MeV fm<sup>-3</sup> for realistic equations of state . Larger values of $`P_t`$ give larger values for $`\mathrm{\Delta }I/I`$, as can be seen from eq. (11). Combining eqs. (10) and (11) with a lower limit on $`\mathrm{\Delta }I`$ and an upper limit on $`P_t`$ gives a lower limit on the neutron star radius for a given mass. In order to relate our observational bound on $`I_{\mathrm{res}}/I_c`$ to $`\mathrm{\Delta }I`$, we assume that the angular momentum reservoir is confined to the neutron superfluid that coexists with the nuclei of the inner crust. In this case, $`I_{\mathrm{res}}\le \mathrm{\Delta }I`$ and $`I_c\simeq I-\mathrm{\Delta }I`$. Our observational limit on $`I_{\mathrm{res}}`$ then gives $`\mathrm{\Delta }I/(I-\mathrm{\Delta }I)\simeq \mathrm{\Delta }I/I_c\ge I_{\mathrm{res}}/I_c\ge 0.014`$. To obtain a strong lower limit on the neutron star radius, we take $`P_t=0.65`$ MeV fm<sup>-3</sup> and $`n_t=0.075`$ fm<sup>-3</sup>. Combining relations (10) and (11) gives the heavy dashed curve in Fig. 3. This curve is given approximately by $$R=3.6+3.9M/M_{\odot }.$$ (12) Stellar models that are compatible with the lower bound on $`I_{\mathrm{res}}`$ must fall below this line. Smaller $`P_t`$ reduces the crustal moment of inertia and gives a more restrictive constraint. 
For example, $`P_t=0.25`$ MeV fm<sup>-3</sup> moves the constraining contour to approximately $`R=4.7+4.1M/M_{\odot }`$ (thin dashed curve of Fig. 3). ## V Discussion To summarize our conclusions regarding the statistics of Vela’s glitches, we find that angular momentum is imparted to the crust at regular time intervals at a rate that has remained nearly constant for $`30`$ yr. These data narrowly constrain the coupling parameter $`G`$ which is the minimum fraction of the star’s moment of inertia that is responsible for glitches. For Vela we find $`G=0.014`$, indicating that at least $`1.4`$% of the star’s moment of inertia acts as an angular momentum reservoir for driving the glitches, regardless of where in the star this angular momentum reservoir is, or how it is coupled to the crust. Variation of $`G`$ by a factor of less than $`3`$ for stars in the age group $`10^4`$–$`10^5`$ yr suggests that glitches in stars in this age group all involve regions of about the same fractional moment of inertia. Mass measurements of radio pulsars in binary systems and of neutron star companions of radio pulsars give neutron star masses consistent with a very narrow distribution, $`M=1.35\pm 0.04M_{\odot }`$ , indicated by the pair of horizontal dotted lines in Fig. 3. If Vela’s mass falls in this range, eq. (12) constrains $`R\gtrsim 8.9`$ km, under the assumption that glitches arise in the inner crust superfluid. The quantity constrained by observations of the stellar luminosity and spectrum is the ‘radiation radius’ $`R_{\infty }\equiv \mathrm{\Lambda }^{1/2}R=(1-2GM/Rc^2)^{-1/2}R`$. If $`M=1.35M_{\odot }`$ for Vela, the above constraint gives $`R_{\infty }\gtrsim 12`$ km if glitches arise in the inner crust. For comparison, we show in Fig. 3 the mass-radius curves for several representative equations of state (heavy solid lines). Measurement of $`R_{\infty }\lesssim 13`$ km would be inconsistent with most equations of state if $`M\simeq 1.35M_{\odot }`$. 
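The quoted radii follow from eq. (12) and the definition of the radiation radius; a minimal check (not code from the paper; $`GM_{\odot }/c^2=1.4766`$ km is a standard constant):

```python
import math

# Check: eq. (12) at M = 1.35 M_sun, then convert to the radiation radius
# R_inf = R / sqrt(1 - 2GM/Rc^2).
GM_sun_over_c2 = 1.4766      # km, GM_sun/c^2
M = 1.35                     # mass in solar masses
R = 3.6 + 3.9 * M            # km, minimum radius from eq. (12)
R_inf = R / math.sqrt(1.0 - 2.0 * GM_sun_over_c2 * M / R)
assert abs(R - 8.865) < 0.01     # R >~ 8.9 km, as quoted in the text
assert 11.8 < R_inf < 12.2       # R_inf >~ 12 km
```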
Stronger constraints could be obtained if improved calculations of nuclear matter properties indicate $`P_t`$ significantly less than 0.65 MeV fm<sup>-3</sup>. For example, for $`M1.35M_{}`$, $`R_{\mathrm{}}\genfrac{}{}{0pt}{}{_>}{^{}}13`$ km would be required if $`P_t=0.25`$ MeV fm<sup>-3</sup>. A measurement of $`R_{\mathrm{}}\genfrac{}{}{0pt}{}{_<}{^{}}11`$ km would rule out most equations of state regardless of mass or the angular momentum requirements of glitches. A promising candidate for a decisive measurement of a neutron star’s radiation radius is RX J185635-3754, an isolated, non-pulsing neutron star . A black body fit to the X-ray spectrum gives $`R_{\mathrm{}}=7.3(D/120\mathrm{pc})`$ km where $`D`$ is the distance (known to be less than 120 pc). However, either a non-uniform surface temperature or radiative transfer effects in the stellar atmosphere could raise this estimate significantly . HST observations planned for this year should determine the star’s proper motion and parallax, and hence, the distance. Future CHANDRA observations should yield more detailed spectral data and could establish the composition of the atmosphere if absorption lines are identified. These distance and spectral data may establish whether this object’s radius is consistent with an inner crust explanation of neutron star glitches. We thank P. M. McCulloch for providing us with glitch data for the Vela pulsar. This work was performed under the auspices of the U.S. Department of Energy, and was supported in part by NASA EPSCoR Grant #291748, NASA ATP Grants \# NAG 53688 and # NAG 52863, by the USDOE grant DOE/DE-FG02-87ER-40317, and by IGPP at LANL. Fig. 1 – Cumulative dimensionless angular momentum, $`J/I_c\overline{\mathrm{\Omega }}`$, imparted to the Vela pulsar’s crust as a function of time. The straight line is a least-squares fit. Fig. 2 – The coupling parameter G. 
The strongest constraints are obtained for Vela and PSR 1737-30, for which 13 and 9 glitches have been observed, respectively. Diamonds indicate objects with only two observed glitches, for which error bars could not be obtained. References: 0525+21 , Crab , Vela , 1338-62 , 1737-30 , 1823-13 . Fig. 3 – Limits on the Vela pulsar’s radius. The heavy dashed curve delimits allowed masses and radii that are compatible with the glitch constraint $`\mathrm{\Delta }I/(I\mathrm{\Delta }I)1.4`$% for $`P_t=0.65`$ MeV fm<sup>-3</sup>. The thin dashed curve corresponds to $`P_t=0.25`$ MeV fm<sup>-3</sup> and gives a more stringent though less conservative constraint. The dot-dashed curve corresponds to $`\mathrm{\Delta }I/(I\mathrm{\Delta }I)2.8`$% and $`P_t=0.65`$ MeV. The horizontal dashed lines indicate the mass limits for the survey of 26 radio pulsars of Ref. 19. Also displayed are mass-radius relations for the equations of state of Akmal & Pandharipande (curves a and b), Wiringa, Fiks & Fabrocini (curves c and d), Müller & Serot (curves e and f) and the kaon EOS of Glendenning & Schaffner-Bielich (curves g and h). The crosses indicate where a given EOS has $`\mathrm{\Delta }I/(I\mathrm{\Delta }I)=1.4`$% (with $`P_t=0.65`$ MeV fm<sup>-3</sup>). Curves without crosses have $`\mathrm{\Delta }I/(I\mathrm{\Delta }I)>1.4`$% for all stable $`R`$. Thin curves are contours of constant radiation radius $`R_{\mathrm{}}`$.
# Probability distributions consistent with a mixed state ## I Introduction The density matrix was introduced as a means of describing a quantum system when the state of the system is not completely known. In particular, if the state of the system is $`|\psi _i\rangle `$ with probability $`p_i`$, then the density matrix is defined by $`\rho \equiv {\displaystyle \sum _i}p_i|\psi _i\rangle \langle \psi _i|.`$ (1) For a fixed density matrix it is natural to ask: what class of ensembles $`\{p_i,|\psi _i\rangle \}`$ gives rise to that density matrix? This problem was addressed by Schrödinger , whose results have been extended by Jaynes , and by Hughston, Jozsa, and Wootters . The result of these investigations, the classification theorem for ensembles, has been of considerable utility in quantum statistical mechanics, quantum information theory, quantum computation, and quantum error-correction. In this paper we use the classification theorem for ensembles to obtain an explicit classification of probability distributions $`(p_i)`$ such that there exist pure states $`|\psi _i\rangle `$ satisfying $`\rho =\sum _ip_i|\psi _i\rangle \langle \psi _i|`$, for some fixed density matrix $`\rho `$. This is done in Section II. Section III illustrates the result with several simple applications to quantum mechanics and quantum information theory. Section IV concludes the paper. ## II Probability distributions consistent with a mixed state To state and prove our results we need to introduce some notions from the theory of majorization . Majorization is an area of mathematics concerned with the problem of comparing two vectors to determine which is more “disordered”. Suppose $`x`$ and $`y`$ are two $`d`$-dimensional real vectors. Then we say $`x`$ is majorized by $`y`$, written $`x\prec y`$, if $`{\displaystyle \sum _{i=1}^k}x_i^{\downarrow }\le {\displaystyle \sum _{i=1}^k}y_i^{\downarrow }`$ (2) for $`k=1,\mathrm{},d-1`$, with equality required when $`k=d`$. 
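The definition translates directly into a small predicate; the following is our sketch (not from the paper), using NumPy:

```python
import numpy as np

def majorized(x, y, tol=1e-12):
    """Return True if x is majorized by y (x ≺ y): the partial sums of the
    decreasingly ordered components of x never exceed those of y, with
    equality for the total sum."""
    xs = np.sort(np.asarray(x, dtype=float))[::-1]  # x ordered decreasingly
    ys = np.sort(np.asarray(y, dtype=float))[::-1]
    if len(xs) != len(ys):
        raise ValueError("vectors must have equal length (pad with zeros)")
    cx, cy = np.cumsum(xs), np.cumsum(ys)
    return bool(np.all(cx[:-1] <= cy[:-1] + tol) and abs(cx[-1] - cy[-1]) <= tol)

# The uniform distribution is majorized by every distribution:
print(majorized([0.25, 0.25, 0.25, 0.25], [0.7, 0.1, 0.1, 0.1]))  # True
print(majorized([0.7, 0.1, 0.1, 0.1], [0.25, 0.25, 0.25, 0.25]))  # False
```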
The $`\downarrow `$ notation indicates that the vector components are to be ordered into decreasing order. The usual interpretation is that $`x`$ is more “disordered” or “mixed” than $`y`$. When $`x`$ and $`y`$ are probability distributions it can be shown that $`x\prec y`$ implies many quantities commonly used as measures of disorder, such as the Shannon entropy, are never lower for $`x`$ than for $`y`$. There is a close relation between unitary matrices and majorization. Any matrix $`D`$ whose components may be written in the form $`D_{ij}=|u_{ij}|^2`$ for some unitary matrix $`u=(u_{ij})`$ is said to be unitary-stochastic. The following theorem connects the unitary-stochastic matrices to majorization. Theorem 1: Let $`x`$ and $`y`$ be $`d`$-dimensional vectors. Then $`x\prec y`$ if and only if there exists unitary-stochastic $`D`$ such that $`x=Dy`$. The proof of this theorem is constructive in nature. That is, given $`x\prec y`$ it is possible to explicitly construct a unitary matrix $`u=(u_{ij})`$ such that $`x=Dy`$ where $`(D_{ij})=(|u_{ij}|^2)`$. Indeed, even more is true: for the forward implication in Theorem 1 it turns out to be sufficient to consider only orthogonal matrices $`u`$, that is, real matrices satisfying $`uu^T=u^Tu=I`$, where <sup>T</sup> is the transpose operation. The corresponding matrix $`D_{ij}=u_{ij}^2`$ is known as an ortho-stochastic matrix. Note that the expression $`u_{ij}^2`$ indicates the square of the $`ij`$th component of the matrix $`u`$, not the $`ij`$th component of $`u^2`$. The Appendix to this paper gives an outline of the construction needed for the reverse implication in Theorem 1, somewhat different to the proof in . The second result we need is the classification theorem for ensembles : Theorem 2: Let $`\rho `$ be a density matrix. 
Then $`\{p_i,|\psi _i\rangle \}`$ is an ensemble for $`\rho `$ if and only if there exists a unitary matrix $`u=(u_{ij})`$ such that $`\sqrt{p_i}|\psi _i\rangle ={\displaystyle \sum _j}u_{ij}|e_j\rangle ,`$ (3) where $`|e_j\rangle `$ are eigenvectors of $`\rho `$ normalized so that $`\lambda _j^\rho =\langle e_j|e_j\rangle `$ are the corresponding eigenvalues. In the statement of Theorem 2 it is understood that there may be more elements in the ensemble $`\{p_i,|\psi _i\rangle \}`$ than there are eigenvectors $`|e_j\rangle `$. When this is the case one appends extra zero vectors to the list of eigenvectors, until the number of elements in the two lists matches. Combining Theorem 1 and Theorem 2 in an appropriate way gives the following classification theorem for the class of probability distributions consistent with a given density matrix: Theorem 3: Suppose $`\rho `$ is a density matrix. Let $`(p_i)`$ be a probability distribution. Then there exist normalized quantum states $`|\psi _i\rangle `$ such that $`\rho ={\displaystyle \sum _i}p_i|\psi _i\rangle \langle \psi _i|`$ (4) if and only if $`(p_i)\prec \lambda ^\rho `$, where $`\lambda ^\rho `$ is the vector of eigenvalues of $`\rho `$. In the statement of Theorem 3 it is understood that if the vector $`(p_i)`$ contains more elements than the vector $`\lambda ^\rho `$, then one should append sufficiently many zeros to $`\lambda ^\rho `$ that the two vectors be of the same length. Proof of Theorem 3: Suppose there exists a set of states $`|\psi _i\rangle `$ such that $`\rho =\sum _ip_i|\psi _i\rangle \langle \psi _i|`$. By Theorem 2 equation (3) must hold. Multiplying (3) by its adjoint gives $`p_i={\displaystyle \sum _{jk}}u_{ik}^{*}u_{ij}\lambda _j^\rho \delta _{jk},`$ (5) which simplifies to $`p_i`$ $`=`$ $`{\displaystyle \sum _j}|u_{ij}|^2\lambda _j^\rho .`$ (6) Setting $`D_{ij}\equiv |u_{ij}|^2`$, we have $`(p_i)=D\lambda ^\rho `$ for unitary-stochastic $`D`$, and by Theorem 1, $`(p_i)\prec \lambda ^\rho `$. Conversely, if $`(p_i)\prec \lambda ^\rho `$ then by Theorem 1 we can find unitary $`u`$ such that (6) is satisfied. 
Now define states $`|\psi _i\rangle `$ by Equation (3); since $`u_{ij},p_i`$ and $`|e_j\rangle `$ are known this equation determines the $`|\psi _i\rangle `$ uniquely. By Theorem 2 we need only check that these are properly normalized pure states to complete the proof. Multiplying the definition of $`|\psi _i\rangle `$, Equation (3), by its adjoint gives $`p_i\langle \psi _i|\psi _i\rangle `$ $`=`$ $`{\displaystyle \sum _{jk}}u_{ij}u_{ik}^{*}\langle e_k|e_j\rangle `$ (7) $`=`$ $`{\displaystyle \sum _j}|u_{ij}|^2\lambda _j^\rho `$ (8) $`=`$ $`p_i,`$ (9) where the last step follows from the choice of $`u`$ to satisfy (6). It follows that $`|\psi _i\rangle `$ is a normalized pure state. QED Theorem 3 is the central result of this paper. Many elements of the proof are already implicit in the paper of Hughston, Jozsa and Wootters ; however, they do not explicitly draw the connection with majorization. The forward implication has been proved by Uhlmann , who conjectured but did not find an explicit construction for the reverse implication. ## III Applications The remaining sections of this paper demonstrate several illustrative applications of Theorem 3 to elementary quantum mechanics and quantum information theory. ### A Uniform ensembles exist for any density matrix As our first application of Theorem 3, suppose $`d`$ is the rank of $`\rho `$, and that $`m\ge d`$. Then it is easy to verify that $`(1/m,1/m,\mathrm{},1/m)\prec \lambda ^\rho `$, and therefore there exist pure states $`|\psi _1\rangle ,\mathrm{},|\psi _m\rangle `$ such that $`\rho `$ is an equal mixture of these states with probability $`1/m`$, $`\rho ={\displaystyle \sum _i}{\displaystyle \frac{|\psi _i\rangle \langle \psi _i|}{m}}.`$ (10) Indeed, if we choose $`m\ge d`$ where $`d`$ is the dimension of the underlying space, then for any $`\rho `$ there exists a set of states such that (10) holds. 
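As a numerical illustration (ours, not the paper's), Eq. (3) can be exercised directly: any unitary $`u`$ yields an ensemble for $`\rho `$, the resulting $`(p_i)`$ is majorized by the spectrum, and a Fourier-matrix choice of $`u`$ reproduces the uniform ensembles just described:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(d):
    """A random unitary from the QR decomposition of a complex Gaussian matrix."""
    q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q

lam = np.array([0.5, 0.3, 0.2])              # spectrum of rho
v = random_unitary(3)                         # eigenvectors of rho (columns)
rho = v @ np.diag(lam) @ v.conj().T

# Eq. (3): sqrt(p_i)|psi_i> = sum_j u_ij |e_j>, with |e_j> = sqrt(lam_j) v[:, j].
u = random_unitary(3)
unnorm = (u * np.sqrt(lam)) @ v.T             # row i is sqrt(p_i)|psi_i>
p = np.sum(np.abs(unnorm) ** 2, axis=1)       # Eq. (6)

rho_rec = unnorm.T @ unnorm.conj()            # sum_i p_i |psi_i><psi_i|
assert np.allclose(rho_rec, rho)

# Theorem 3, forward direction: (p_i) is majorized by the spectrum.
cp, cl = np.cumsum(np.sort(p)[::-1]), np.cumsum(np.sort(lam)[::-1])
assert np.all(cp <= cl + 1e-12) and np.isclose(cp[-1], cl[-1])

# With u a Fourier matrix, D_ij = 1/d: the uniform ensembles of Sec. III A.
f = np.exp(2j * np.pi * np.outer(np.arange(3), np.arange(3)) / 3) / np.sqrt(3)
p_unif = (np.abs(f) ** 2) @ lam
assert np.allclose(p_unif, 1 / 3)
print("rho reconstructed from the ensemble; (p_i) majorized by the spectrum")
```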
A priori it is not at all obvious that such a set of pure states should exist for any density matrix $`\rho `$; however, Theorem 3 guarantees that this is indeed the case: any density matrix may be regarded as the result of picking uniformly at random from some ensemble of pure states. ### B Schur-convex functions of ensemble probabilities A second application of Theorem 3 relates functions of the eigenvalues of $`\rho `$ to functions of the probabilities $`(p_i)`$. The theory of isotone functions is concerned with functions which preserve the majorization order. More specifically, the Schur-convex functions are real-valued functions $`f`$ such that $`x\prec y`$ implies $`f(x)\le f(y)`$. Examples of Schur-convex functions include $`f(x)\equiv \sum _ix_i\mathrm{log}(x_i)`$, $`f(x)\equiv \sum _ix_i^k`$ (for any constant $`k\ge 1`$), $`f(x)\equiv -\prod _ix_i`$, and $`f(x)\equiv x_1^{\downarrow }`$. More examples and a characterization of the Schur-convex functions may be found in . Each such Schur-convex function gives rise to an inequality relating the vector of probabilities $`(p_i)`$ in Equation (4) to the vector $`\lambda ^\rho `$. For example, we see from the Schur-convexity of $`\sum _ix_i\mathrm{log}(x_i)`$ the useful inequality that $`H(p_i)\ge S(\rho )`$, where $`H(\cdot )`$ is the Shannon entropy, and $`S(\cdot )`$ is the von Neumann entropy. (This result was obtained by Lanford and Robinson using different techniques.) In general, any Schur-convex function will give rise to a similar inequality relating $`(p_i)`$ and $`\lambda ^\rho `$. A similar property related to convex functions has previously been noted (see the review for an overview, as well as the original references ); however, those results are a special case of the more general result given here based upon Schur-convex functions. The earlier results may be obtained by noting that if $`f(x)`$ is convex then the map $`(p_i)\mapsto \sum _if(p_i)`$ is Schur-convex. 
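These inequalities are easy to spot-check numerically; in the sketch below (ours), the three assertions correspond to the Shannon entropy, the power sum with $`k=2`$, and the largest component:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(d):
    q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q

def shannon(p):
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log2(p)))

lam = np.array([0.7, 0.2, 0.1])       # spectrum of rho, so S(rho) = H(lam)
for _ in range(100):
    u = random_unitary(3)
    p = (np.abs(u) ** 2) @ lam        # ensemble probabilities, Eq. (6)
    assert shannon(p) >= shannon(lam) - 1e-9           # H(p_i) >= S(rho)
    assert p.max() <= lam.max() + 1e-9                 # largest component
    assert np.sum(p ** 2) <= np.sum(lam ** 2) + 1e-9   # sum of squares
print("Schur-convex inequalities verified for 100 random ensembles")
```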
### C Representation of bipartite pure states A third application of Theorem 3 gives us insight into the properties of pure states of bipartite systems. We state the result formally as follows: Corollary 4: Suppose $`|\psi \rangle `$ is a pure state of a composite system $`AB`$ with Schmidt decomposition $`|\psi \rangle ={\displaystyle \sum _i}\sqrt{p_i}|i_A\rangle |i_B\rangle .`$ (11) Then given a probability distribution $`(q_i)`$ there exists an orthonormal basis $`|i_A^{\prime }\rangle `$ for system $`A`$ and corresponding pure states $`|\psi _i\rangle `$ of system $`B`$ such that $`|\psi \rangle ={\displaystyle \sum _i}\sqrt{q_i}|i_A^{\prime }\rangle |\psi _i\rangle `$ (12) if and only if $`(q_i)\prec (p_i)`$. In the statement of Corollary 4 it is understood that if $`(q_i)`$ contains more terms than $`(p_i)`$ then the latter vector should be extended by adding extra zeros. In the case where the number of terms in $`(q_i)`$ exceeds the number of dimensions of $`A`$’s Hilbert space, $`A`$’s Hilbert space must be extended so its dimension matches the number of terms in $`(q_i)`$. Proof of Corollary 4: To prove the forward implication, note that tracing out system $`A`$ in equations (11) and (12) gives $`\sum _ip_i|i_B\rangle \langle i_B|=\sum _iq_i|\psi _i\rangle \langle \psi _i|`$, and thus by Theorem 3, $`(q_i)\prec (p_i)`$. Conversely, suppose $`|\psi \rangle `$ has Schmidt decomposition given by (11), and that $`(q_i)\prec (p_i)`$. Let $`\rho `$ be the reduced density matrix of system $`B`$ when $`A`$ is traced out, $`\rho =\text{tr}_A(|\psi \rangle \langle \psi |)={\displaystyle \sum _i}p_i|i_B\rangle \langle i_B|.`$ (13) By Theorem 3, $`\rho =\sum _iq_i|\psi _i\rangle \langle \psi _i|`$ for some set of pure states $`|\psi _i\rangle `$. The state $`|\varphi \rangle `$ defined by $`|\varphi \rangle \equiv {\displaystyle \sum _i}\sqrt{q_i}|i_A\rangle |\psi _i\rangle `$ (14) is a purification of $`\rho `$, that is, a pure state of system $`AB`$ such that when system $`A`$ is traced out, $`\text{tr}_A(|\varphi \rangle \langle \varphi |)=\rho `$. Thus $`|\psi \rangle `$ and $`|\varphi \rangle `$ are both purifications of $`\rho `$. 
It can easily be shown that there exists a unitary matrix $`U`$ acting on system $`A`$ such that $`U|\varphi \rangle =|\psi \rangle `$. Defining $`|i_A^{\prime }\rangle \equiv U|i_A\rangle `$ we see that $`|\psi \rangle ={\displaystyle \sum _i}\sqrt{q_i}|i_A^{\prime }\rangle |\psi _i\rangle ,`$ (15) as claimed. QED ### D Communication cost of entanglement transformation Corollary 4 can be used to give insight into a recent result in the study of entanglement transformation . Suppose Alice and Bob are in possession of an entangled pure state $`|\psi \rangle `$. They wish to transform this state into another pure state $`|\varphi \rangle `$, with the restriction that they may only use local operations on their respective systems, together with a possibly unlimited amount of classical communication. It was shown in that the transformation can be made if and only if $`\lambda _\psi \prec \lambda _\varphi `$, where $`\lambda _\psi `$ denotes the vector of eigenvalues of the reduced density matrix of Alice’s system when the joint Alice-Bob system is in the state $`|\psi \rangle `$, and $`\lambda _\varphi `$ is defined similarly for the state $`|\varphi \rangle `$. To see how Corollary 4 applies in this context, suppose $`|\psi \rangle `$ and $`|\varphi \rangle `$ are bipartite states with Schmidt decompositions $`|\psi \rangle `$ $`=`$ $`{\displaystyle \sum _i}\sqrt{p_i}|i\rangle |i\rangle `$ (16) $`|\varphi \rangle `$ $`=`$ $`{\displaystyle \sum _i}\sqrt{q_i}|i\rangle |i\rangle ,`$ (17) where without loss of generality we may assume the two states have the same Schmidt bases, since local unitary transformations can be used to inter-convert between different Schmidt bases. Note that $`\lambda _\psi =(p_i)`$ and $`\lambda _\varphi =(q_i)`$. Suppose that $`\lambda _\psi =(p_i)\prec \lambda _\varphi =(q_i)`$. By Corollary 4, and ignoring unimportant local unitary transformations, it is possible to write $`|\psi \rangle `$ and $`|\varphi \rangle `$ in the form $`|\psi \rangle `$ $`=`$ $`{\displaystyle \sum _i}\sqrt{p_i}|i\rangle |i\rangle `$ (18) $`|\varphi \rangle `$ $`=`$ $`{\displaystyle \sum _i}\sqrt{p_i}|i\rangle |\psi _i\rangle ,`$ (19) for some set of pure states $`|\psi _i\rangle `$. 
This form makes it quite plausible that the state $`|\psi `$ can be transformed into the state $`|\varphi `$ by local operations and classical communication: all that needs to be done is for Bob to transform $`|i`$ into $`|\psi _i`$ in such a way as to preserve coherence between different terms in the sum. I have not found a general method utilizing this fact to transform $`|\psi `$ into $`|\varphi `$. However, it will now be shown how Corollary 4 can be applied successfully in the special case where $`|\psi `$ is a maximally entangled state of a $`d`$ dimensional system with a $`d^{}d`$ dimensional system, $`|\psi ={\displaystyle \underset{i}{}}{\displaystyle \frac{|i|i}{\sqrt{d}}}.`$ (20) The new proof has the feature that it is exponentially more efficient from the point of view of classical communication than the protocol described in . The argument runs as follows. By Corollary 4 we can find pure states $`|\varphi _i`$ such that $`|\varphi ={\displaystyle \underset{i}{}}{\displaystyle \frac{|i|\varphi _i}{\sqrt{d}}},`$ (21) up to local unitary transformations. Define an operator on Bob’s system, $`F{\displaystyle \underset{i}{}}|\varphi _ii|,`$ (22) Ideally, we’d apply $`F`$ to the system $`B`$ taking $`|\psi `$ directly to $`|\varphi `$. This doesn’t work because $`F`$ isn’t unitary. Instead, we use $`F`$ to define a quantum measurement with essentially the same effect. Define $`E{\displaystyle \frac{F}{\sqrt{\text{tr}(F^{}F)}}}.`$ (23) Let $`|0,\mathrm{},|d1`$ be the Schmidt basis for Bob’s system. Define operators $`X`$ and $`Z`$ by $`X|j|j1;Z|j\omega ^j|j,`$ (24) where $``$ denotes addition modulo $`d`$, and $`\omega `$ is a $`d`$th root of unity. Define unitary operators $`U_{s,t}`$ by $`U_{s,t}X^sZ^t.`$ (25) The indices $`s`$ and $`t`$ are integers in the range $`0`$ to $`d1`$. 
By checking on an operator basis and applying linearity it is easily verified that for any Hermitian $`A`$, $`{\displaystyle \underset{st}{}}U_{s,t}^{}AU_{s,t}=\text{tr}(A)I.`$ (26) Therefore, defining $`E_{s,t}EU_{s,t}`$ gives $`{\displaystyle \underset{st}{}}E_{s,t}^{}E_{s,t}=I.`$ (27) The set $`\{E_{s,t}\}`$ therefore defines a generalized measurement on Bob’s system with $`d^2`$ outcomes. Suppose Bob performs this measurement. If he obtains the result $`(s,t)`$ then the state of the system after the measurement is $`{\displaystyle \underset{i}{}}{\displaystyle \frac{\omega ^{it}|i|\varphi _{is}}{\sqrt{d}}}.`$ (28) Bob sends the measurement result to Alice, which requires $`2\mathrm{log}_2d`$ bits of communication, and then Alice performs $`X^sZ^t`$ (where $`X`$ and $`Z`$ are now defined with respect to Alice’s Schmidt basis) on her system, giving the state $`{\displaystyle \underset{i}{}}{\displaystyle \frac{|is|\varphi _{is}}{\sqrt{d}}},`$ (29) which is just $`|\varphi `$. This protocol for entanglement transformation requires only $`2\mathrm{log}_2(d)`$ bits of communication, compared with the protocol in , which required $`d1`$. Another method for achieving this result is as follows: Alice prepares locally a system $`A^{}B^{}`$ in a copy of $`|\varphi `$. She then uses the shared maximal entanglement $`|\psi `$ with Bob to teleport system $`B^{}`$ to Bob, creating the desired state $`|\varphi `$. Again, this protocol requires $`2\mathrm{log}_2(d)`$ bits of communication. The present approach is interesting, in that it does not require knowledge of the teleportation protocol in order to succeed. Moreover, the method used strongly suggests that it may be possible to always perform the transformation using $`O(\mathrm{log}_2d)`$ bits of communication, even when $`|\psi `$ is not maximally entangled, a result that does not appear obvious from the teleportation protocol. 
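The protocol can be simulated end to end; the sketch below (ours) uses $`d=2`$ and an arbitrary choice of the states $`|\varphi _i\rangle `$. One caveat: in this implementation the completeness relation (27) comes out exactly only with the normalization $`E=F/\sqrt{d\mathrm{tr}(F^{}F)}`$, an extra factor $`1/\sqrt{d}`$ relative to Eq. (23), since summing $`U_{s,t}^{}AU_{s,t}`$ over all $`d^2`$ operators returns $`d\mathrm{tr}(A)I`$:

```python
import numpy as np

d = 2
w = np.exp(2j * np.pi / d)                    # d-th root of unity
X = np.roll(np.eye(d), 1, axis=0)             # X|j> = |j+1 mod d>
Z = np.diag(w ** np.arange(d))                # Z|j> = w^j |j>
mp = np.linalg.matrix_power

# Bob's target states |phi_i> from Eq. (21); any normalized choice works.
phis = [np.array([1.0, 0.0]), np.array([0.6, 0.8])]
ket = np.eye(d)                               # ket[i] = |i>
F = sum(np.outer(phis[i], ket[i]) for i in range(d))   # F = sum_i |phi_i><i|

# Normalization chosen so that sum_{s,t} E_{s,t}^dag E_{s,t} = I (see lead-in).
E = F / np.sqrt(d * np.trace(F.conj().T @ F).real)
Es = {(s, t): E @ mp(X, s) @ mp(Z, t) for s in range(d) for t in range(d)}
assert np.allclose(sum(M.conj().T @ M for M in Es.values()), np.eye(d))

psi = sum(np.kron(ket[i], ket[i]) for i in range(d)) / np.sqrt(d)   # Eq. (20)
phi = sum(np.kron(ket[i], phis[i]) for i in range(d)) / np.sqrt(d)  # Eq. (21)

for (s, t), M in Es.items():
    post = np.kron(np.eye(d), M) @ psi        # Bob's measurement, outcome (s,t)
    assert np.isclose(np.linalg.norm(post) ** 2, 1 / d ** 2)  # uniform outcomes
    post /= np.linalg.norm(post)
    corr = mp(X, s) @ mp(Z, (d - t) % d)      # Alice applies X^s Z^{-t}
    post = np.kron(corr, np.eye(d)) @ post
    assert np.isclose(abs(np.vdot(phi, post)), 1.0)
print(f"all {d * d} outcomes recover |phi> after Alice's correction")
```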
A method for doing so has recently been found using different methods, and will be reported elsewhere. ## IV Conclusion The results reported here answer a fundamental question about the nature of the density matrix as a representation for ensembles of pure states, and give some elementary applications of this result to quantum mechanics and quantum information theory. I expect that the connection revealed here between majorization and ensembles of pure states will be of considerable use in future investigations of fundamental properties of quantum systems. ## acknowledgments Thanks to Sumit Daftuar and Andrew Landahl for pointing out some glitches in earlier versions of this work, and Armin Uhlmann for discussions on majorization. This work was supported by a Tolman Fellowship, and by DARPA through the Quantum Information and Computing Institute (QUIC) administered through the ARO. ## A Unitary-stochastic matrices and majorization In this appendix we outline the constructive steps in the proof of Theorem 1. To begin, we first take a slight detour connecting majorization with a class of matrices known as T-transforms. By definition, a T-transform is a matrix which acts as the identity on all but $`2`$ dimensions, where it has the form: $`T=\left[\begin{array}{cc}t& 1t\\ 1t& t\end{array}\right],`$ (A3) for some parameter $`t`$, $`0t1`$. The following result connects majorization and T-transforms : Theorem 5: If $`xy`$ there exists a finite set of T-transforms $`T_1,T_2,\mathrm{},T_n`$ such that $`x=T_1T_2\mathrm{}T_ny`$. The converse of Theorem 5 is also true , but will not be needed. For convenience we provide details of the construction of the sequence $`T_1,\mathrm{},T_n`$ here. Proof of Theorem 5: The result is proved by induction on $`d`$, the dimension of the vector space $`x`$ and $`y`$ live in. 
For notational convenience we assume that the components of $`x`$ and $`y`$ have been ordered into decreasing order; if this is not the case then one can easily reduce to this case by insertion of appropriate transposition matrices (which are T-transforms). The result is clear when $`d=2`$, so let’s assume the result is true for arbitrary $`d`$, and try to prove it for $`d+1`$-dimensional $`x`$ and $`y`$. Choose $`k`$ such that $`y_kx_1y_{k1}`$. Such a $`k`$ is guaranteed to exist because $`xy`$ implies that $`x_1y_1`$ and $`x_1x_{d+1}y_{d+1}`$. Choose $`t`$ such that $`x_1=ty_1+(1t)y_k.`$ (A4) Now define $`z`$ to be the result of applying a T-transform $`T`$ with parameter $`t`$ to the $`1`$st and $`k`$th components of $`y`$, so that $`z`$ $`=`$ $`Ty`$ (A5) $`=`$ $`(x_1,y^{}),`$ (A6) where $`y^{}(y_2,\mathrm{},y_{k1},(1t)y_1+ty_k,y_{k+1},\mathrm{},y_{d+1}).`$ (A7) Define $`x^{}(x_2,x_3,\mathrm{},x_{d+1})`$. It is not difficult to verify that $`x^{}y^{}`$ (see for details), and thus by the inductive hypothesis, $`x^{}=T_1\mathrm{}T_ry^{}`$ for some sequence of T-transforms in $`d`$ dimensions. But these T-transforms can equally well be regarded as T-transforms on $`d+1`$ dimensions by acting as the identity on the first dimension, and thus $`x=T_1\mathrm{}T_rTy`$, that is, $`x`$ can be obtained from $`y`$ by a finite sequence of T-transforms, as we set out to show. QED Note that the inductive step of the proof of Theorem 5 can immediately be converted into an iterative procedure for constructing the matrices $`T_1,\mathrm{},T_n`$, and also implies that $`n=d1`$ in a $`d`$-dimensional space. The proof of Theorem 1, which we now give, is also inductive in nature, and is easily converted into an iterative procedure for constructing an orthogonal matrix $`u=(u_{ij})`$ such that $`D`$ defined by $`D_{ij}u_{ij}^2`$ satisfies Theorem 1. 
Note again the convention that expressions like $`u_{ij}^2`$ represent the square of the real number $`u_{ij}`$, not the $`ij`$th component of the matrix $`u^2`$. To prove Theorem 1 we use the decomposition $`x=T_1T_2\mathrm{}T_ny`$ from the proof of Theorem 5. The strategy is to use induction on $`n`$ to prove that $`T_1T_2\mathrm{}T_n=(W_{ij}^2)`$ for some orthogonal matrix $`W`$. Suppose $`n=1`$. Omitting components on which $`T_1`$ acts as the identity, we have $`T_1=\left[\begin{array}{cc}t& 1t\\ 1t& t\end{array}\right]`$ (A10) for some $`t`$, $`0t1`$. Define a unitary matrix $`U`$ to act as the identity on all components on which $`T_1`$ acts as the identity, and as $`U\left[\begin{array}{cc}\sqrt{t}& \sqrt{1t}\\ \sqrt{1t}& \sqrt{t}\end{array}\right],`$ (A13) on the components where $`T_1`$ acts non-trivially. It is clear that $`T_1=(U_{ij}^2)`$, as required. To do the inductive step, suppose that products of $`n`$ T-transforms of the form used in the proof of Theorem 5 are ortho-stochastic, and consider the product $`T_1T_2\mathrm{}T_{n+1}`$. We assume $`T_{n+2k}`$ acts on components $`k`$ and component $`d_k>k`$, as per the proof of Theorem 5. Let $`P`$ be the permutation matrix which transposes components $`2`$ and $`d_1`$. (The following proof is more transparent if one assumes that $`d_1=2`$, and drops all reference to $`P`$, which is a technical device to make certain equations more compact.) Then $`PT_{n+1}P=\left[\begin{array}{ccc}t& 1t& 0\\ 1t& t& 0\\ 0& 0& I_{d2}\end{array}\right],`$ (A17) where $`I_{d2}`$ is the $`d2`$ by $`d2`$ identity matrix. Furthermore, let us define a $`d1`$ by $`d1`$ matrix $`\mathrm{\Delta }`$ by $`T_1T_2\mathrm{}T_n=\left[\begin{array}{cc}1& 0\\ 0& \mathrm{\Delta }\end{array}\right].`$ (A20) By the inductive hypothesis there is a $`d1`$ by $`d1`$ orthogonal matrix $`U_{ij}`$ such that $`\mathrm{\Delta }_{ij}=U_{ij}^2`$. 
Define a new matrix $`U^{}`$ by interchanging the role of the first and $`(d_11)`$th co-ordinates in $`U`$, $`U^{}=P^{}UP^{}`$, where $`P^{}`$ transposes the first and $`(d_11)`$th co-ordinates, and similarly define $`\mathrm{\Delta }^{}`$ by $`\mathrm{\Delta }^{}P^{}\mathrm{\Delta }P^{}`$. Then $`\mathrm{\Delta }_{ij}^{}=U_{ij}^2`$. Also we have $`PT_1T_2\mathrm{}T_nP=\left[\begin{array}{cc}1& 0\\ 0& \mathrm{\Delta }^{}\end{array}\right].`$ (A23) Multiplying the previous equation by $`PT_{n+1}P`$ gives, from (A17) and the identity $`P^2=I`$, $`PT_1T_2\mathrm{}T_{n+1}P=\left[\begin{array}{ccc}t& 1t& 0\\ (1t)\stackrel{}{\delta }& t\stackrel{}{\delta }& \stackrel{~}{\mathrm{\Delta }},\end{array}\right],`$ (A26) where $`\stackrel{}{\delta }`$ is the first column of $`\mathrm{\Delta }^{}`$, and $`\stackrel{~}{\mathrm{\Delta }}`$ is the $`d2`$ by $`d1`$ matrix that results when the first column of $`\mathrm{\Delta }^{}`$ is removed. Let $`\stackrel{~}{U}`$ denote the $`d2`$ by $`d1`$ matrix that results when the first column of $`U^{}`$ is removed, and let $`\stackrel{}{u}`$ denote the first column of $`U^{}`$. Define a $`d`$ by $`d`$ matrix $`V`$ by $`V\left[\begin{array}{ccc}\sqrt{t}& \sqrt{1t}& 0\\ \sqrt{1t}\stackrel{}{u}& \sqrt{t}\stackrel{}{u}& \stackrel{~}{U}\end{array}\right].`$ (A29) We claim that $`V`$ is an orthogonal matrix. To see this we need to show that the columns of $`V`$ are of unit length and orthogonal. The length of the first column is $`\sqrt{t+(1t)\stackrel{}{u}\stackrel{}{u}}=\sqrt{1}=1.`$ (A30) A similar calculation shows that the second column is of unit length. The remaining columns are all of unit length since they are all columns of the unitary matrix $`U^{}`$. Simple algebra along similar lines can be used to check that the correct orthogonality relations between columns of $`V`$ are satisfied. 
Observe that $`PT_1T_2\mathrm{}T_{n+1}P=(V_{ij}^2)`$, so if we define $`WPVP`$, we see that $`W`$ is an orthogonal matrix such that $`T_1T_2\mathrm{}T_{n+1}=(W_{ij}^2)`$, which completes the induction.
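The inductive step above converts directly into an iterative routine; the sketch below (ours) tracks the working vector, with the re-sorting between steps playing the role of the transposition matrices mentioned in the proof:

```python
import numpy as np

def t_transform_chain(x, y):
    """Realize x ≺ y (both given in decreasing order) as a chain of
    T-transforms, following the inductive proof of Theorem 5.  The re-sort
    between steps stands in for transpositions (T-transforms with t = 0).
    Returns the transforms and the final vector, which should equal x."""
    x = np.asarray(x, dtype=float)
    z = np.asarray(y, dtype=float).copy()
    d = len(z)
    transforms = []
    for lead in range(d - 1):
        k = lead
        while k < d - 1 and z[k] > x[lead] + 1e-12:
            k += 1                    # find k with z_k <= x_lead <= z_{k-1}
        if k == lead:                 # already matched (x_lead == z_lead)
            z[lead] = x[lead]
            continue
        t = (x[lead] - z[k]) / (z[lead] - z[k])   # x_lead = t z_lead + (1-t) z_k
        T = np.eye(d)
        T[lead, lead] = T[k, k] = t
        T[lead, k] = T[k, lead] = 1.0 - t
        transforms.append(T)
        z = T @ z
        z[lead + 1:] = np.sort(z[lead + 1:])[::-1]  # keep the tail decreasing
    return transforms, z

x = np.array([0.4, 0.3, 0.2, 0.1])
y = np.array([0.7, 0.2, 0.1, 0.0])   # x is majorized by y
Ts, z = t_transform_chain(x, y)
assert np.allclose(z, x)
print(len(Ts), "T-transforms map y to x")
```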
# Topical Seminar on Neutrinos and Astroparticle Physics. San Miniato. May 1999. RAL-TR/1999 -066 Tri-Maximal vs. Bi-Maximal Neutrino Mixing ## 1 TRI-MAXIMAL MIXING Threefold maximal mixing, ie. tri-maximal mixing, undeniably occupies a special place in the space of all $`3\times 3`$ mixings. In some weak basis the two non-commuting mass-matrices $`m_l^2`$, $`m_\nu ^2`$ ($`m_im_i^{}m_i^2`$) may simultaneously be written as ‘circulative’ matrices (‘of degree zero’) : $$\left(\begin{array}{ccc}a_l& b_l& \overline{b}_l\omega \\ \overline{b}_l& a_l& b_l\omega \\ b_l\overline{\omega }& \overline{b}_l\overline{\omega }& a_l\end{array}\right)\left(\begin{array}{ccc}a_\nu & b_\nu & \overline{b}_\nu \overline{\omega }\\ \overline{b}_\nu & a_\nu & b_\nu \overline{\omega }\\ b_\nu \omega & \overline{b}_\nu \omega & a_\nu \end{array}\right)$$ (1) respectively invariant under monomial cyclic permutations (‘circulation’ matrices ) of the form: $$C=\left(\begin{array}{ccc}.& 1& .\\ .& .& \omega \\ \overline{\omega }& .& .\end{array}\right)\overline{C}=\left(\begin{array}{ccc}.& 1& .\\ .& .& \overline{\omega }\\ \omega & .& .\end{array}\right)$$ (2) ($`C^{}m_l^2C=m_l^2`$, etc.) just as circulant matrices are invariant under simple cyclic permutations . The mass matrices Eq. 1 are diagonalised by the (eg. 
circulant) unitary matrices $`V`$ and $`\overline{V}`$: $$\frac{1}{\sqrt{3}}\left(\begin{array}{ccc}1& \overline{\omega }& 1\\ 1& 1& \overline{\omega }\\ \overline{\omega }& 1& 1\end{array}\right)\frac{1}{\sqrt{3}}\left(\begin{array}{ccc}1& \omega & 1\\ 1& 1& \omega \\ \omega & 1& 1\end{array}\right)$$ (3) respectively, leading to threefold maximal mixing: $$V\overline{V}^{\dagger }=\frac{1}{3}\left(\begin{array}{ccc}2+\omega & 2\overline{\omega }+1& 2\overline{\omega }+1\\ 2\overline{\omega }+1& 2+\omega & 2\overline{\omega }+1\\ 2\overline{\omega }+1& 2\overline{\omega }+1& 2+\omega \end{array}\right)$$ (4) where in all the above and in what follows $`\omega `$ and $`\overline{\omega }`$ represent complex cube-roots of unity and the overhead ‘bar’ denotes complex conjugation. After some rephasing of rows and columns the tri-maximal mixing matrix may be re-written: $$U=\frac{1}{\sqrt{3}}\begin{array}{cc}& \begin{array}{ccc}\nu _1& \nu _2& \nu _3\end{array}\\ \begin{array}{c}e\\ \mu \\ \tau \end{array}& \left(\begin{array}{ccc}1& 1& 1\\ 1& \omega & \overline{\omega }\\ 1& \overline{\omega }& \omega \end{array}\right)\end{array}$$ (5) where the matrix in the parenthesis is identically the character table for the cyclic group $`C_3`$ (group elements vs. irreducible representations). In threefold maximal mixing the CP violation parameter $`|J_{CP}|`$ is maximal ($`|J_{CP}|=1/(6\sqrt{3})`$) and if no two neutrinos are degenerate in mass, CP and T violating asymmetries can approach $`\pm 100\%`$. Observables depend on the squares of the moduli of the mixing matrix elements : $$(|U_{l\nu }|^2)=\begin{array}{cc}& \begin{array}{ccc}\nu _1& \nu _2& \nu _3\end{array}\\ \begin{array}{c}e\\ \mu \\ \tau \end{array}& \left(\begin{array}{ccc}1/3& 1/3& 1/3\\ 1/3& 1/3& 1/3\\ 1/3& 1/3& 1/3\end{array}\right)\end{array}.$$ (6) In tri-maximal mixing all survival and appearance probabilities are universal (ie. 
flavour independent) and in particular if two neutrinos are effectively degenerate tri-maximal mixing predicts for the locally averaged survival probability: $$<P(l\rightarrow l)>=(1/3+1/3)^2+(1/3)^2=5/9$$ (7) and appearance probability: $$<P(l\rightarrow l^{\prime })>=2\times (1/3)(1/3)=2/9.$$ (8) If all three neutrino masses are effectively non-degenerate: $`<P(l\rightarrow l)>=<P(l\rightarrow l^{\prime })>=1/3`$. ## 2 ATMOSPHERIC NEUTRINOS The SUPER-K multi-GeV data show a clear $`50`$% suppression of atmospheric $`\nu _\mu `$ for zenith angles $`\mathrm{cos}\theta \lesssim 0`$, as shown in Fig. 1a. At the same time the corresponding distribution for $`\nu _e`$ seems to be very largely unaffected, as shown in Fig. 1b. The best fit is for (twofold) maximal $`\nu _\mu `$–$`\nu _\tau `$ mixing with $`\mathrm{\Delta }m^2\simeq 3.0\times 10^{-3}`$ eV<sup>2</sup>, as shown by the solid/dotted curves in Fig. 1. From the measured suppression we have: $$(1-|U_{\mu 3}|^2)^2+(|U_{\mu 3}|^2)^2\approx 0.52\pm 0.05$$ (9) whereby the $`\nu _\mu `$ must have a large $`\nu _3`$ content, ie. $`|U_{\mu 3}|^2\approx 1/2`$, or more precisely: $$1/3\lesssim |U_{\mu 3}|^2\lesssim 2/3$$ (10) where the range quoted corresponds to the $`1\sigma `$ error above (which is largely statistical). ## 3 THE CHOOZ DATA The apparent lack of $`\nu _e`$ mixing at the atmospheric scale is supported by independent data from the CHOOZ reactor (Fig. 2), which rules out large $`\nu _e`$ mixing over most of the $`\mathrm{\Delta }m^2`$ range allowed in the atmospheric neutrino experiments. Taken together, the CHOOZ and SUPER-K data indicate that the $`\nu _3`$ has no $`\nu _e`$ content, ie. there is a zero (or near-zero) in the top right-hand corner of the lepton mixing matrix, $`|U_{e3}|^2\lesssim 0.03`$. 
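The numbers quoted in Eqs. (7)-(10) for tri-maximal mixing can be checked mechanically; a short sketch (ours, not part of the paper):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
U = np.array([[1, 1, 1],
              [1, w, np.conj(w)],
              [1, np.conj(w), w]]) / np.sqrt(3)      # Eq. (5)
assert np.allclose(U @ U.conj().T, np.eye(3))        # unitarity
P = np.abs(U) ** 2                                    # Eq. (6): every entry 1/3

e, mu = 0, 1

def surv_two_scale(P, l):
    """<P(l->l)> with nu_1, nu_2 effectively degenerate (atmospheric regime)."""
    return (P[l, 0] + P[l, 1]) ** 2 + P[l, 2] ** 2

def appear_two_scale(P, l, lp):
    """<P(l->l')> in the same regime; by unitarity this is 2|U_l3|^2|U_l'3|^2."""
    return 2 * P[l, 2] * P[lp, 2]

assert np.isclose(surv_two_scale(P, mu), 5 / 9)       # Eq. (7)
assert np.isclose(appear_two_scale(P, mu, e), 2 / 9)  # Eq. (8)
assert np.isclose(np.sum(P[e] ** 2), 1 / 3)           # fully averaged limit
# Eq. (9): the tri-maximal value 5/9 sits inside the measured 0.52 +/- 0.05 band
assert abs(surv_two_scale(P, mu) - 0.52) < 0.05
```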
### 3.1 The Fritzsch-Xing Ansatz Remarkably, the Fritzsch-Xing hypothesis (published well before the initial CHOOZ data) predicted just such a zero: $$(|U_{l\nu }|^2)=\begin{array}{cc}& \begin{array}{ccc}\nu _1& \nu _2& \nu _3\end{array}\\ \begin{array}{c}e\\ \mu \\ \tau \end{array}& \left(\begin{array}{ccc}1/2& 1/2& .\\ 1/6& 1/6& 2/3\\ 1/3& 1/3& 1/3\end{array}\right)\end{array}$$ (11) The Fritzsch-Xing ansatz (Eq. 11) is readily obtained from threefold maximal mixing (Eq. 5) by the re-definitions: $`\nu _e\rightarrow (\nu _e-\nu _\mu )/\sqrt{2}`$ and $`\nu _\mu \rightarrow (\nu _e+\nu _\mu )/\sqrt{2}`$ (up to phases), keeping the $`\nu _\tau `$ tri-maximally mixed. While the a priori argument for these particular linear combinations is far from convincing, it is clear that Eq. 11 is so far consistent with the atmospheric data: $$<P(\mu \mu )>=(1/6+1/6)^2+(2/3)^2=5/9$$ (12) (cf. Eq. 9), while beyond the second threshold: $$<P(ee)>=(1/2)^2+(1/2)^2+(0)^2=1/2$$ (13) The famous ‘bi-maximal’ scheme is very similar to the Fritzsch-Xing ansatz and likewise predicts a $`\sim 50`$% suppression for the solar data. ## 4 THE SOLAR DATA Taken at face value, the results from the various solar neutrino experiments imply an energy-dependent suppression. In particular, taking BP98 fluxes (and correcting for the NC contribution in SUPER-K), the results from HOMESTAKE and SUPER-K: $`P(ee)\simeq 0.3-0.4`$, lie significantly below the results from the gallium experiments: $`P(ee)\simeq 0.5-0.6`$, as shown in Fig. 3.
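The locally averaged probabilities quoted for the various ansätze (Eqs. 7, 12, 13) follow from the |U|² matrices alone. A small helper (our own construction, assuming numpy) reproduces Eqs. 12–13 and the |U<sub>μ3</sub>|² window implied by Eq. 9:

```python
import numpy as np

def avg_survival(row, coherent=()):
    """Locally averaged survival probability from one flavour row of |U|^2.

    `coherent` lists the indices of effectively degenerate mass states,
    whose contributions add before squaring; the rest average incoherently.
    """
    rest = [i for i in range(len(row)) if i not in coherent]
    return sum(row[i] for i in coherent) ** 2 + sum(row[i] ** 2 for i in rest)

# Fritzsch-Xing |U|^2 matrix of Eq. 11 (rows e, mu, tau)
FX = [[1/2, 1/2, 0.0],
      [1/6, 1/6, 2/3],
      [1/3, 1/3, 1/3]]

P_mumu = avg_survival(FX[1], coherent=(0, 1))   # Eq. 12 -> 5/9
P_ee   = avg_survival(FX[0])                    # Eq. 13 -> 1/2

# SUPER-K constraint of Eq. 9: (1 - x)^2 + x^2 = 0.52 for x = |U_mu3|^2
x = np.roots([2, -2, 1 - 0.52]).real            # -> 0.4 and 0.6
print(P_mumu, P_ee, sorted(x))
```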
### 4.1 ‘Optimised’ Bi-Maximal Mixing Mindful of the popularity of the large-angle MSW solution and the undenied phenomenological promise of the ‘original’ bi-maximal scheme, we have ourselves proposed a phenomenologically viable (and even phenomenologically favoured) a posteriori ‘straw-man’ alternative to tri-maximal mixing: $$(|U_{l\nu }|^2)=\begin{array}{cc}& \begin{array}{ccc}\nu _1& \nu _2& \nu _3\end{array}\\ \begin{array}{c}e\\ \mu \\ \tau \end{array}& \left(\begin{array}{ccc}2/3& 1/3& .\\ 1/6& 1/3& 1/2\\ 1/6& 1/3& 1/2\end{array}\right)\end{array}$$ (14) which we refer to here as ‘optimised’ bi-maximal mixing. This scheme is of course just one special case of the ‘generalised’ bi-maximal hypotheses of Altarelli and Feruglio (and see also Ref. ). At the atmospheric scale Eq. 14 gives: $$<P(\mu \mu )>=(1/6+1/3)^2+(1/2)^2=1/2$$ (15) while beyond the second scale it gives: $$<P(ee)>=(2/3)^2+(1/3)^2+(0)^2=5/9$$ (16) There is then the added possibility to exploit a large-angle MSW solution with the base of the ‘bathtub’ (given by the $`\nu _e`$ content of the $`\nu _2`$) given by $`P(ee)=1/3`$, as shown in Fig. 3. Although clearly the matrix Eq. 14 is readily obtained from the tri-maximal mixing matrix Eq. 5 by forming linear combinations of the heaviest and lightest mass eigenstates: $`\nu _1\rightarrow (\nu _1+\nu _3)/\sqrt{2}`$ and $`\nu _3\rightarrow (\nu _1-\nu _3)/\sqrt{2}`$ (up to phases), with the $`\nu _2`$ remaining tri-maximally mixed, we emphasise that we claim no understanding of why these redefinitions should be necessary. ## 5 TERRESTRIAL MATTER EFFECTS <br>IN TRI-MAXIMAL MIXING In general, matter effects deform the mixing matrix and shift the neutrino masses away from their vacuum values, depending on the local matter density. In the tri-maximal mixing scenario, the CHOOZ data require $`\mathrm{\Delta }m^2`$ $`\stackrel{<}{\sim }`$ $`10^{-3}`$ eV<sup>2</sup> (Fig. 2), so that matter effects can be very important.
For ‘intermediate’ densities, the matter mass eigenstates become: $`\nu _1\rightarrow (\nu _1-\nu _2)/\sqrt{2}`$ and $`\nu _2\rightarrow (\nu _1+\nu _2)/\sqrt{2}`$ (up to phases) while the $`\nu _3`$ remains tri-maximally mixed: $$(|U_{l\nu }|^2)\simeq \begin{array}{cc}& \begin{array}{ccc}\nu _1& \nu _2& \nu _3\end{array}\\ \begin{array}{c}e\\ \mu \\ \tau \end{array}& \left(\begin{array}{ccc}.& 2/3& 1/3\\ 1/2& 1/6& 1/3\\ 1/2& 1/6& 1/3\end{array}\right)\end{array}$$ (17) As evidenced in Fig. 4, the phenomenology of Eq. 17 can be almost indistinguishable from that of ‘optimised’ bi-maximal mixing, Eq. 14. Beyond the ‘matter threshold’ we have for $`\nu _\mu `$: $$<P(\mu \mu )>=(1/2)^2+(1/6)^2+(1/3)^2=7/18$$ (18) while for $`\nu _e`$: $$<P(ee)>=(0)^2+(2/3)^2+(1/3)^2=5/9$$ (19) For atmospheric neutrinos, account must be taken of the initial flux ratio, $`\varphi (\nu _\mu )/\varphi (\nu _e)`$. For $`\varphi (\nu _\mu )/\varphi (\nu _e)`$ $`\simeq `$ $`2/1`$, the effective $`\nu _\mu `$ suppression is: $$7/18+1/2\times 2/9=1/2$$ (20) (cf. Eq. 15) while the $`\nu _e`$ rate is fully compensated: $$5/9+2\times 2/9=1$$ (21) so that $`\nu _e`$ appear not to participate in the mixing. The up/down ratio (Fig. 5) measures the effective suppression. For energies $`E`$ $`\stackrel{>}{\sim }`$ $`1`$ GeV the initial flux ratio $`\varphi (\nu _\mu )/\varphi (\nu _e)`$ $`\stackrel{>}{\sim }`$ $`2/1`$ and the $`\nu _e`$ rate becomes ‘over-compensated’, while, for sufficiently high energies, compensation effects vanish as the complete decoupling limit $`\nu _e\rightarrow \nu _3`$ is approached. The resulting maximum in the up/down ratio for $`\nu _e`$ (Fig. 5) is described as a ‘resonance’ by Pantaleone. ### 5.1 Matter Induced CP-violation As regards the mixing matrix, CP and T violating effects are maximal in tri-maximal mixing, but in spite of this, due to the extreme hierarchy of $`\mathrm{\Delta }m^2`$ values involved, for most accessible $`L/E`$ values, observable asymmetries are expected to be unmeasurably small (in vacuum).
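The flux bookkeeping of Eqs. 17–21 can be checked in the same spirit. A sketch (our own code, assuming numpy, fully incoherent averaging beyond the matter threshold, and the quoted 2:1 atmospheric flux ratio):

```python
import numpy as np

# Matter |U|^2 matrix of Eq. 17 (rows e, mu, tau; columns nu_1..nu_3)
M = np.array([[0.0, 2/3, 1/3],
              [1/2, 1/6, 1/3],
              [1/2, 1/6, 1/3]])

# Fully averaged oscillation probabilities: P[a, b] = <P(a -> b)>
P = M @ M.T
P_mumu, P_ee, P_emu = P[1, 1], P[0, 0], P[0, 1]   # 7/18, 5/9, 2/9 (Eqs. 18-19)

r = 2.0                                # phi(nu_mu)/phi(nu_e), roughly 2/1
mu_rate = P_mumu + (1 / r) * P_emu     # Eq. 20 -> 1/2
e_rate  = P_ee + r * P_emu             # nu_e rate fully compensated -> 1
print(mu_rate, e_rate)
```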
In the presence of matter (or indeed anti-matter), however, significant asymmetries can occur, ‘enhanced’ or ‘induced’ by matter effects. Thus for example if atmospheric neutrinos are separated $`\nu /\overline{\nu }`$, interesting matter-induced asymmetries become observable (Fig. 6) in tri-maximal mixing. Such effects could be investigated using atmospheric neutrino detectors equipped with magnetic fields, and/or by using sign-selected beams in long-baseline experiments. ## 6 TRI-MAXIMAL MIXING AND <br>THE SOLAR DATA The vacuum predictions for the solar data in tri-maximal mixing are largely unmodified by matter effects in the Sun, as is well known, or, as we have seen, by matter effects in the Earth (Eq. 7 vs. Eq. 16). Thus we expect $`P(ee)=5/9`$ in tri-maximal mixing with no energy dependence, as shown in Fig. 7. (Note that also the ‘optimised’ bi-maximal scheme predicts $`P(ee)=5/9`$ with no energy dependence for $`\mathrm{\Delta }m^2`$ outside the ‘bathtub’). In Fig. 7, the <sup>8</sup>B (and <sup>7</sup>Be) flux, affecting both the SUPER-K and HOMESTAKE data-points, has been rescaled by an arbitrary factor (0.72) for comparison to the energy-independent prediction. Given the flux errors ($`\pm 14\%`$ for <sup>8</sup>B), the fit (Fig. 7) seems not unreasonable. Fig. 8 shows the solar, atmospheric, accelerator and reactor data in overall perspective, within the tri-maximal context. Note that only disappearance results are represented: if the LSND appearance result were ever to be confirmed, tri-maximal mixing would be instantly excluded. ## 7 CONCLUSION The lepton mixing really does look to be either tri-maximal or bi-maximal at this point. Tri-maximal mixing is currently ‘squeezed’ in $`\mathrm{\Delta }m^2`$ by CHOOZ vs. SUPER-K (Figs. 1–2). For some $`\mathrm{\Delta }m^2`$, bi-maximal mixing (in particular the ‘optimised’ version discussed here) clearly fits the data better. Tri-maximal mixing remains, however, the ‘simplest’, most ‘symmetric’ possibility.
Acknowledgement I wish to thank Paul Harrison and Don Perkins for continued collaboration, and numerous helpful discussions relating to the above material.
no-problem/9909/astro-ph9909488.html
ar5iv
text
# X-ray/Optical Bursts from GS 1826–24 ## 1 Introduction GS 1826–24 was discovered serendipitously in 1988 by the Ginga satellite (Makino et al. 1988) at an average flux of 26 mCrab (1–40 keV) and was fitted by a single power law spectrum with $`\alpha \simeq 1.7`$. Whilst showing some evidence for variability during 1988–89 (Tanaka & Lewin 1995; In’t Zand 1992), ROSAT PSPC observations in 1990 and 1992 (Barret et al. 1995) found comparable flux levels and no X-ray bursts were detected during 8 hours exposure on the source. The spectrum was well fitted by a single power law with $`\alpha \simeq 1.5-1.8`$ and an absorption column, $`N_H\simeq 5\times 10^{21}`$ cm<sup>-2</sup>. Temporal analysis of both the Ginga and ROSAT data yielded a featureless $`f^{-1}`$ power spectrum extending from $`10^{-4}`$–500 Hz (Tanaka & Lewin 1995; Barret et al. 1995), with neither quasi-periodic oscillations (QPOs) nor pulsations being detected. Since there was no detection prior to Ginga, the source was catalogued as an X-ray transient. Its similarities to Cyg X–1 and GX 339–4 in the low state, both in spectrum and temporal behaviour (hard X-ray spectrum and strong flickering), led to an early suggestion by Tanaka (1989) that it was a soft X-ray transient with a possible black-hole primary. Following its detection by CGRO OSSE in the 60–200 keV energy range, Strickman et al. (1996) doubted the suggestion of a black-hole primary after examining the combined spectrum from both Ginga and OSSE. They found that this required a model with an exponentially cut-off power law plus reflection term. The observed cut-off energy around 58 keV is typical of the cooler neutron star hard X-ray spectra. The suggestion that GS 1826–24 contains a neutron star was also discussed in detail by Barret et al. (1996), where they compared the luminosity of the source with other X-ray bursters. The recent report of 70 X-ray bursts in 2.5 years by BeppoSAX WFC (Ubertini et al. 1999) and an optical burst by Homer et al.
(1998) confirms the presence of a neutron star accretor. Following the first ROSAT PSPC all-sky survey observations in September 1990, and the determination of a preliminary X-ray position, a search for the counterpart yielded a time variable, UV-excess, emission line star (Motch et al. 1994; Barret et al. 1995). The source had $`B=19.7`$ and an uncertain V magnitude ($`\sim 19.3`$), due to contamination by a nearby star. Subsequent high-speed CCD photometry by Homer et al. (1998) yielded a $`\sim `$ 2.1 hr optical modulation, but confirmation of its stability requires observation over a longer time interval. We therefore carried out an ASCA observation and simultaneous RXTE/optical observations of GS 1826–24 in order to study its spectral behaviour and very short timescale variability, as well as the 2.1 hr optical modulation. In Table 1, we summarize the ASCA, RXTE and SAAO observations used in this work. This paper is structured as follows. An outline of all the X-ray and optical observations is given in section 2. In section 3 we report the spectral analysis of ASCA data for both persistent and burst emission. Simultaneous RXTE/optical observations, including the analysis of a simultaneous X-ray/optical burst, are also presented. We discuss the implications of the X-ray bursts and constrain the nature of the source from the delay between the X-rays and optical in section 4. We present the overall timing study of the ASCA and RXTE/optical observations in a companion paper (Homer et al. 1999, henceforth paper II). ## 2 Observations and Data Reduction ### 2.1 ASCA The ASCA satellite consists of four co-aligned telescopes, each of which is a conical foil mirror that focuses X-rays onto two Solid State Imaging Spectrometers (SIS) and two Gas Imaging Spectrometers (GIS) (Tanaka, Inoue & Holt 1994). The SIS detectors are sensitive to photons in the 0.4–10.0 keV energy band with nominal spectral resolution of 2% at 6 keV.
The GIS detectors provide imaging in the 0.7–10 keV energy range and have a relatively modest spectral resolution of 8% at 6 keV, in comparison to the SIS, but with a larger effective area at higher energies. For our observation of GS 1826–24 on 1998 March 31 (see Table 1), one CCD was activated for each SIS, giving an 11 $`\times `$ 11 field of view and temporal resolution of 4 s. The GIS detectors were set to MPC mode (i.e. no image could be extracted) so that the temporal resolution would be improved to 0.5 s. The data were filtered with standard criteria including the rejection of hot and flickering pixels and event grade selection. We extracted the SIS source spectra from circular regions of $`3^{}`$ radius, yielding 11.2 ks total exposure. The background spectra were extracted from source-free regions of the instruments during the same observation. For GIS, after the standard selection procedure, the net on-source time was 18.9 ks. ### 2.2 RXTE We also observed GS 1826–24 with the Proportional Counter Array (PCA) instrument on RXTE (Bradt, Rothschild & Swank 1993) between 1998 June 23 and July 29 (see Table 1). The PCA consists of five nearly identical Proportional Counter Units (PCUs) sensitive to X-rays in the 2–60 keV energy range, with a total effective area of $`\sim `$ 6500 cm<sup>2</sup>. The PCUs each have a multi-anode xenon-filled volume, with a front propane volume which is primarily used for background rejection. For the entire PCA and across the complete energy band, the Crab Nebula produces a count rate of 13,000 counts s<sup>-1</sup>. The PCA spectral resolution at 6 keV is approximately 18% and the maximum timing resolution available was 1 $`\mu `$s. However, in order to maximize our timing and spectral resolution, we adopted a 125 $`\mu `$s time resolution, 64 spectral energy channel mode over 2–60 keV in addition to the standard mode configuration. All light curves and spectra presented here have been corrected for background and dead-time.
For most of the time, at least four PCUs were turned on, and so we utilized only data from these PCUs in order to minimize systematic uncertainties. ### 2.3 Optical Observations of a small ($`50\times 33`$ arcsec) region surrounding the optical counterpart of GS 1826–24 were made using the UCT-CCD fast photometer (O’Donoghue 1995), at the Cassegrain focus of the 1.9m telescope at SAAO, Sutherland from 1998 June 23 to 26. The UCT-CCD fast photometer is a Wright Camera 576$`\times `$420 coated GEC CCD which was used here half-masked so as to operate in frame transfer mode, allowing exposures as short as 2 s with no dead-time. The conditions were generally good with typical seeing $`\sim `$ 1.5–2.5 arcsec and the timing resolution for the source was 5 s. An observing log is presented in Table 1. We performed data reduction using IRAF, including photometry with the implementation of DAOPHOT II (Stetson 1987). Due to moderate crowding of the counterpart with a nearby but fainter neighbour and the variable seeing, point spread function (PSF) fitting was employed in order to obtain good photometry. The details of this procedure are given in Homer et al. (1998). ## 3 Analysis and Results ### 3.1 ASCA Observations Given the better spectral resolution and higher sensitivity below 2 keV of the SIS detectors, we use those data to study the persistent emission of GS 1826–24. The spectrum (excluding the burst intervals, see below) was fitted with a blackbody plus a power law component. The fit quality was good, with $`\chi _\nu ^2=1.14`$ for 307 degrees of freedom (d.o.f.) and these results are summarized in Table 2 and Fig. 1 (whenever an error for a spectral parameter is quoted throughout this paper, it refers to the single parameter 1$`\sigma `$ error). Our results are consistent with those obtained by In’t Zand et al. (1999) using the BeppoSAX NFI, taken six days later.
Moreover, their 2–10 keV flux of $`5.41\times 10^{-10}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, which is only $`9\%`$ smaller than our determination, is consistent with the $`10\%`$ fall in count rate seen by the RXTE ASM during that interval. Two X-ray bursts were detected by the ASCA GIS and one of them was caught by SIS. The time interval between the two bursts was $`\sim `$ 5.4 hr and is consistent with the $`5.76\pm 0.6`$ hr quasi-periodicity of the burst recurrence as found by BeppoSAX WFC observations (Ubertini et al. 1999). Figure 2 shows the time profiles of the two bursts in the 0.7–10 keV range with 4 s binning. Their rise times (difference between time of the peak and time of the start of the burst using a linear-rise exponential-decay model) and e-folding times are comparable (see Table 3). Since the GIS was set to MPC mode, which has no positional information, the lack of background estimation limits the usefulness of the spectra. We therefore analyse here the first burst, which was detected with SIS. We extracted a series of spectral slices through the burst with 4 s time resolution in the rising phase and 20 s resolution during decay. Spectral analyses of these slices were performed over the 0.5–10 keV range using a variety of approaches. The most straightforward ‘standard’ approach was to choose a 300 s section of data immediately prior to the burst and use this as our ‘background’ for spectral fits to the individual spectra through the burst. The net (burst – ‘background’) emission was well fitted ($`\chi _\nu ^2\simeq 0.6-1.2`$) with a simple blackbody. Figure 3 (left) shows the time variation of the bolometric flux, blackbody temperature and radius assuming a distance of 8 kpc (In’t Zand et al. 1999). The radii are rather low ($`\sim 4`$ km) and show an anti-correlation with temperature. However, the analysis of X-ray burst data can be complicated in cases where the persistent emission contains a blackbody contribution from the outer layers of the neutron star.
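The blackbody radii quoted in this kind of analysis follow from the spherical-emitter normalisation $`F_{bol}=\sigma T^4(R/d)^2`$. A sketch of the conversion (our own code; the numbers below are illustrative, not the fitted values of Table 4):

```python
import numpy as np

SIGMA_SB = 5.6704e-5     # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
KEV_TO_K = 1.1605e7      # kT = 1 keV corresponds to T ~ 1.16e7 K
KPC_CM = 3.086e21        # 1 kpc in cm

def bb_radius_km(F_bol, kT_keV, d_kpc):
    """Blackbody radius (km) from bolometric flux (erg cm^-2 s^-1),
    colour temperature (keV) and distance (kpc), assuming a sphere."""
    T = kT_keV * KEV_TO_K
    R_cm = d_kpc * KPC_CM * np.sqrt(F_bol / (SIGMA_SB * T ** 4))
    return R_cm / 1e5

# Illustrative burst-peak numbers: F ~ 2.8e-8 erg cm^-2 s^-1, kT ~ 2 keV, d = 8 kpc
print(bb_radius_km(2.8e-8, 2.0, 8.0))   # ~ 10 km
```

Note how strongly the inferred radius depends on the fitted temperature (as $`T^2`$), which is why the colour-temperature correction discussed below matters.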
Failing to account for this component may lead to errors in the temperature determination and severe underestimates in the derived blackbody radius during the later stages of the burst (van Paradijs & Lewin 1985). We thus repeated our analysis, fitting the above two-component model to each gross (continuum + burst) spectrum, rather than a single blackbody component to the net burst spectrum. The power-law component was held constant at its continuum emission value while the blackbody component was permitted to vary. The results of our two-component spectral fits are shown in Figure 3 (right), once again for an assumed distance of 8 kpc. The most important difference with the results of the ‘standard’ approach is that the blackbody radius is now in the range for a typical neutron star ($`\sim `$ 10 km). The blackbody radii show moderate variations, which appear to be anti-correlated with temperature. This indicates that the blackbody radiation from the neutron star contributes significantly to the persistent emission. The blackbody temperatures are higher during the beginning of the burst and then decline as expected for a type I burst (see Figure 3). The change of the apparent blackbody radius may be affected by the non-Planckian shape of the spectrum of a hot neutron star (see Sztajno et al. 1986 and references therein). As a result, the temperature fitted using a blackbody is simply a ‘colour temperature’ ($`T_{bb}`$) which is higher than the effective temperature ($`T_{eff}`$); the ratio of $`T_{bb}/T_{eff}`$ increases with $`T_{eff}`$ (e.g. London, Taam & Howard 1984). Moreover, the fitted blackbody radii are also affected, and so we used the average relation between blackbody radius and temperature obtained for 4U 1636–53 (Sztajno et al. 1985) to make an empirical correction to the radii obtained from the above two-component fits (see van Paradijs et al. 1986).
As this method is strictly empirical, it is independent of possible uncertainties in model-atmosphere calculations (e.g. Sztajno et al. 1986). We have assumed that the radii are unaffected for $`kT_{bb}`$$`<`$1.25 keV whilst for $`kT_{bb}`$$`>`$1.25 keV the radii decrease linearly with temperature (e.g. van Paradijs et al. 1986). The blackbody radii obtained following this final stage of the analysis (see Figure 4) do not show significant differences compared with the gross spectral analysis except that the radii during the peak of the burst are now in the range for a typical neutron star. There is no evidence for photospheric expansion since the radius remains almost constant throughout the burst (Figure 4). We also plotted the flux $`F_{bol}`$ versus $`F_{bol}^{1/4}/kT_{bb}`$ but we do not find evidence for any increase of the X-ray emitting area (Strohmayer, Zhang & Swank 1997). This ratio is a constant proportional to $`(R/d)^{1/2}`$, where $`R`$ and $`d`$ are the radius and distance of the source if we assume it is blackbody emission from a spherical surface. The unabsorbed bolometric peak flux of the blackbody radiation and other burst parameters are listed in Table 4. The ratio of the average luminosity emitted in the persistent emission (since the previous burst) to that emitted in the burst is $`L_{pers}/L_{burst}=55\pm 5`$, assuming the separation of the two bursts is 5.4 hr. ### 3.2 Simultaneous RXTE/Optical Burst Analysis During simultaneous X-ray/optical observations on 24 June 1998, both RXTE PCA and the SAAO 1.9m + UCT CCD detected a burst (see Figure 5). The burst lasted for $`\sim `$ 150 s and the time profiles of the burst in both X-ray and optical are of the fast-rise exponential-decay form with the e-folding times in the different energy bands given in Table 5. The optical and X-ray bursts started almost at the same time but a delay is present between the peaks.
The optical burst resembles the low energy (2–3.5 and 3.5–6.4 keV) X-ray light curves in that they all have a flat peak and a shoulder during the decay phase. At higher energies ($`>`$ 6.4 keV), the peak is much sharper and the decay is faster during the initial decay phase. X-ray spectral analysis is performed in the same way as for the ASCA data. The persistent emission is fitted with a single power-law spectrum with $`N_H=(7.3\pm 1.5)\times 10^{21}`$ cm<sup>-2</sup> and photon index, $`\alpha =1.7\pm 0.01`$ ($`\chi _\nu ^2=1.11`$ for 23 d.o.f.) which reveals an absorbed flux of $`(1.2\pm 0.01)\times 10^{-9}`$ erg cm<sup>-2</sup> s<sup>-1</sup> in the 2–20 keV range. The photon index, $`\alpha `$, is much higher than that seen by ASCA and suggests a softer spectrum during the RXTE observations. We also perform spectral fitting with a power-law plus blackbody model but the fit does not improve and leads to a large error in $`N_H`$. Both the ‘standard’ method (net burst spectrum) and gross spectrum were almost indistinguishable from those presented in Figure 6, presumably because the blackbody provides such a small contribution to the continuum emission. We also undertook the non-Planckian analysis as mentioned in the previous section, with results very similar to Figure 6, indicating that the effect due to the non-Planckian shape of the neutron star spectrum is very small. Once again the neutron star shows the spectral cooling during the burst typical of a type-I burst. The unabsorbed bolometric peak flux of the blackbody radiation and other burst parameters are listed in Table 4. The ratio $`L_{pers}/L_{burst}=50\pm 4`$ if we assume that the separation of the two bursts is 5.76 hr (Ubertini et al. 1999). The blackbody radius increases to a maximum as the burst rises, but does not show the simultaneous drop in $`kT_{bb}`$ and increase in $`R_{bb}`$ that is the overt signature of photospheric radius expansion (see Lewin, van Paradijs & Taam 1995).
However, when we plot the flux $`F_{bol}`$ versus $`F_{bol}^{1/4}/kT_{bb}`$ ($`\propto (R/d)^{1/2}`$) we do find evidence for an increase in the X-ray burning area on the star (Strohmayer, Zhang & Swank 1997). This is shown in Figure 7 where the burst begins in the lower left and evolves diagonally to the upper right and then across to the left at an essentially constant value until near the end of the burst. This is an indication that indeed the X-ray burning area is not a constant but increases with time during the rising phase. $`F_{bol}^{1/4}/kT_{bb}`$ eventually reaches a nearly constant value and the neutron star surface simply cools during the decay phase. Kilohertz QPOs between 200 and 1200 Hz were not detected during the burst with an upper limit of 1% (at 99% confidence). We also set upper limits on the presence of any coherent pulsations during the burst: $`<`$ 1% between 100–500 Hz and 600–1200 Hz, and $`<3`$% between 1000–4000 Hz (the Nyquist limit). A detailed timing analysis of the remaining simultaneous X-ray/optical data will be presented in paper II. #### 3.2.1 X-ray/optical time delay Figure 5 shows the simultaneous optical/X-ray burst in different energy bands where there is a delay of a few seconds at the peak of the burst. In order to quantify this delay, we performed: (i) a cross-correlation analysis, and (ii) modelling of the optical burst by convolving the X-ray light curve with a Gaussian transfer function. 3.2.1.1 Cross-correlation We cross-correlated the optical data with X-ray data from different energy bands as well as the total (2–60 keV) X-ray light curve. This allows us to determine the correlation and estimate any time lag between X-ray and optical variability. The measurement of the cross-correlation function provides a characteristic delay which does not depend on particular model fitting. The results show that the optical burst lags the X-ray burst by $`\sim `$ 4 s, which is marginally larger than expected in this system.
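The cross-correlation estimate can be sketched as follows, on synthetic fast-rise exponential-decay light curves rather than the actual data (our own code, assuming numpy):

```python
import numpy as np

def xcorr_lag(x_ray, optical, dt):
    """Lag (s) of `optical` behind `x_ray` from the peak of the
    discrete cross-correlation of the mean-subtracted light curves."""
    x = x_ray - x_ray.mean()
    o = optical - optical.mean()
    cc = np.correlate(o, x, mode="full")
    lags = (np.arange(cc.size) - (x.size - 1)) * dt
    return lags[np.argmax(cc)]

dt = 1.0
t = np.arange(0.0, 200.0, dt)
burst = np.where(t > 20, np.exp(-(t - 20) / 30.0), 0.0)  # FRED profile
optical = np.roll(burst, 4)                              # delayed by 4 s
print(xcorr_lag(burst, optical, dt))                     # -> 4.0
```

As the text notes, such a peak-based estimate is only as fine as the coarser (here optical) sampling, which motivates the transfer-function modelling that follows.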
The separation of the compact object and companion star is 2–3 light-s if we assume an orbital period of $`\sim `$ 2.1 hr (Homer et al. 1998), a neutron star mass of 1.4 $`M_{\odot }`$ and a companion star mass of 0.1–1.1 $`M_{\odot }`$. However, in using a cross-correlation method we are essentially limited to the 5 s time resolution of the optical data (the PCA data have a much higher time resolution). Moreover, the delay appears to vary, from almost nothing at the start of the burst to a few seconds at the peak. This suggests that the delay might be a function of flux. Therefore a cross-correlation analysis cannot provide a full picture of the delay between the X-ray and optical fluxes and instead we model the optical burst by convolving the X-ray light curve with a transfer function. 3.2.1.2 Transfer function In order to model the time delay between the optical and X-ray bursts, we convolve a Gaussian transfer function with the X-ray light curve and use $`\chi ^2`$ fitting to model the optical light curve. The same method was used by Hynes et al. (1998) to model the HST light curve of GRO J1655–40 from the RXTE light curve. The Gaussian transfer function is given by: $$\psi (\tau )=\frac{\mathrm{\Psi }}{\sqrt{2\pi \mathrm{\Delta }\tau }}e^{-\frac{1}{2}(\frac{\tau -\tau _0}{\mathrm{\Delta }\tau })^2}$$ (1) where $`\tau _0`$ is the mean time delay and $`\mathrm{\Delta }\tau `$ is the dispersion or ‘smearing’ which is a measure of the width of the Gaussian. $`\mathrm{\Psi }`$ is the strength of the response. We performed a series of convolutions of the transfer function with the lightcurves from the four energy bands, varying both $`\tau _0`$ and $`\mathrm{\Delta }\tau `$ independently. Essentially we adjust the overall delay and the degree of ‘smearing’, until the transferred X-ray lightcurve reproduces the optical response. Finally, Figure 8 shows the best fit predicted light curves from each convolution superimposed on the optical light curve.
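The forward model of Eq. 1 is just a convolution of the X-ray light curve with a discretised Gaussian kernel; a sketch (our own code, assuming numpy; the actual analysis also minimised $`\chi ^2`$ over $`\tau _0`$, $`\mathrm{\Delta }\tau `$ and $`\mathrm{\Psi }`$ against the measured optical points):

```python
import numpy as np

def gaussian_kernel(tau0, dtau, dt, n_sigma=5.0):
    """Discretised, unit-area version of the transfer function in Eq. 1."""
    tau = np.arange(0.0, tau0 + n_sigma * dtau, dt)
    psi = np.exp(-0.5 * ((tau - tau0) / dtau) ** 2)
    return psi / psi.sum()

def predict_optical(x_ray, tau0, dtau, strength, dt):
    """Convolve the X-ray light curve with the scaled transfer function."""
    psi = strength * gaussian_kernel(tau0, dtau, dt)
    return np.convolve(x_ray, psi)[: x_ray.size]

dt = 1.0
t = np.arange(0.0, 200.0, dt)
x = np.where(t > 20, np.exp(-(t - 20) / 30.0), 0.0)   # model X-ray burst
opt = predict_optical(x, tau0=3.0, dtau=3.0, strength=0.014, dt=dt)
```

Here `tau0` shifts the predicted optical peak while `dtau` smears the sharp X-ray features — exactly the two quantities reported per energy band in Table 6.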
The principal features of the optical burst profile are reproduced well in the predicted light curves from the 2–3.5 and 3.5–6.4 keV energy bands. Table 6 summarizes the results of the Gaussian transfer function fitting to all four energy bands. The fits are good ($`\chi _\nu ^2`$ $`<`$ 1 for 51 d.o.f.) for the two lower energy bands but not for the higher energy ones. The mean delay and dispersion between the optical and lower X-ray energy bands are both $`\sim `$ 3 s (Table 6 and Figure 9). The strength of the response, $`\mathrm{\Psi }`$, is almost the same for the two lower X-ray energy bands, at a value of $`\sim 0.0137`$, while it is 2–3 times smaller for the higher X-ray energy band. This is an indication that a greater proportion of the reprocessing occurs at lower energies. ## 4 Discussion X-ray and optical bursts from GS 1826–24 were previously reported by Ubertini et al. (1999), In’t Zand et al. (1999) and Homer et al. (1998). Our X-ray/optical observations of GS 1826–24 showed three X-ray bursts, of which two were observed by ASCA, whereas one was observed by RXTE simultaneously with the optical. On the basis of their spectral properties and time profiles, all the bursts have a cooling trend during their decays and exhibit blackbody spectra with temperatures of a few keV. The time profiles show fast rise times of 7–9 s and long decay times ranging from 20–60 s depending on the energy band. We therefore interpret the three bursts detected from GS 1826–24 as type I bursts (Hoffmann et al. 1978). The relatively long rise time ($`7-9`$ s), as compared to bursts in other systems, indicates that the burst front may have enough time to spread over the whole neutron star surface during the rise of the burst and suggests that the burning is homogeneous over the surface of the neutron star. This is consistent with the fact that we see no evidence for pulsations during the burst.
We note that X-ray burst rise times in other sources have been observed to be smaller than $`\sim `$ 6 s (see e.g. Sztajno et al. 1986; Lewin et al. 1987), making this system rather unique. The burst rise times derived by In’t Zand et al. (1999) range from 5–8 s, which is also consistent with our observations. For the simultaneous burst we may compare the ratio of the persistent to peak fluxes, in both the X-ray and optical, using the results of Lawrence et al. (1983). They derived a simple power law relation between the changes in U, B and V band fluxes and the corresponding X-ray flux variations during a well-studied burst of X1636–536, with $`(\frac{F_{X,max}}{F_{X,pers}})=(\frac{F_{opt,max}}{F_{opt,pers}})^\beta `$, where $`\beta `$ varies with passband. Our burst shows that $`\beta \simeq 4`$ which is comparable to the value $`\beta _V\simeq 3`$ found for X1636–536 in the $`V`$ band, which is the closest approximation to our white light passband. Hence, we may imply that the reprocessed emission from the GS 1826–24 burst is also approximately that from a blackbody (with a temperature set by the degree of X-ray irradiation), where the optical passband is on the Rayleigh-Jeans tail (Lawrence et al. 1983). The X-ray burst observed by RXTE shows evidence for an increase in the burning area during the early rise phase, but no evidence for photospheric radius expansion. Note that this is consistent with the fact that most X-ray bursts showing photospheric radius expansion have rise times less than $`\sim `$ 1 s (see Lewin et al. 1995). However, by assuming that our observed peak flux of $`F_{max}=(2.8\pm 0.4)\times 10^{-8}`$ erg cm<sup>-2</sup> s<sup>-1</sup> corresponds to a luminosity near the Eddington limit of $`1.8\times 10^{38}`$ ergs s<sup>-1</sup> for a 1.4 $`M_{\odot }`$ neutron star, we can set an upper limit to the distance to GS 1826–24. We derive a maximum distance of $`d=7.5\pm 0.5`$ kpc. This estimate is consistent with the upper limit from BeppoSAX NFI observations ($`7.4\pm 0.7`$ kpc; In’t Zand et al.
1999) and the optical lower limit of 4 kpc (Barret et al. 1995). The luminosity ratios are $`L_{pers}/L_{burst}`$ $`\sim `$ 55 and $`\sim `$ 50 for the bursts observed with ASCA and RXTE, respectively. This is comparable with that found by BeppoSAX WFC ($`60\pm 7`$; Ubertini et al. 1999). Coupling this value with an estimated stable accretion rate of $`1.5\times 10^{-9}`$ $`M_{\odot }`$ yr<sup>-1</sup> (Ubertini et al. 1999), the burst must involve a combined hydrogen-helium burning phase (Lewin, van Paradijs & Taam 1995). This relatively long burst also resembles the theoretical results of X-ray bursts driven by the rapid proton capture process, or rp-process (see Hanawa & Fujimoto 1984; Taam 1981; Bildsten 1998; Schatz et al. 1998). Pedersen et al. (1982) have shown that the optical burst mainly reflects the geometry of the system and that the contribution of intrinsic radiative processes is small. Hence the correlated optical and X-ray bursts discussed above are useful as probes of the structure and geometry of the compact object surroundings. Within the framework of the low-mass X-ray binary system, the reprocessing can occur in the accretion disk and the companion star. Based on our observed mean delay of $`3\pm 1`$ s for the optical burst with respect to low energy X-rays, we can then constrain the orbital period of the system. By Kepler’s law, the light travel time of 2–4 s corresponds to an orbital period of 1.6–5.5 hr if we assume a 1.4 $`M_{\odot }`$ neutron star and a companion star mass of 0.1–1.1 $`M_{\odot }`$ (i.e. for a low-mass main sequence star and stable mass transfer). Hence, this range of periods provides support for the $`2.1\pm 0.1`$ hr orbital period proposed by Homer et al. (1998). Lastly, from only one simultaneous optical/X-ray burst, we cannot draw a firm conclusion as to whether the optical burst is due to reprocessing in the disk or on the surface of the companion star. However, given that the source is a low inclination system ($`<`$ 70°; Homer et al.
1998) and the ratio of smearing to delay is $`\sim 1`$, the reprocessing is expected to be dominated by the accretion disk. It is important in future studies to search for the possibly variable delays if the reprocessing occurs on the surface of the companion star (Matsuoka et al. 1984) or the ‘thick spot’ in the disk proposed by Pedersen et al. (1981). Whether the dominant reprocessor is the companion star or the ‘thick spot’, one expects the ratio of optical to X-ray flux in a burst to vary periodically. Moreover, the optical delay would vary as a function of orbital phase as suggested by Pedersen, van Paradijs & Lewin (1981). Ubertini et al. (1999) recently proposed a 5.76-hr quasi-periodicity in the occurrence of X-ray bursts in GS 1826–24, which makes this problem difficult to resolve with current low-Earth-orbit satellites. However, with upcoming missions such as Chandra and XMM, much longer continuous X-ray coverage will be possible, and together with ground-based telescopes will enable us to probe the structure of this source in much greater detail. ## Acknowledgements We are grateful to Darragh O’Donoghue (SAAO) for his advice on the use of the UCT-CCD for high-speed photometry and his help with the subsequent reductions. We also thank Lars Bildsten for valuable comments and Fred Marang (SAAO) for his support at the telescope, and the RXTE SOC team for their efforts in scheduling the simultaneous time. This paper also utilizes results provided by the ASM/RXTE team.
no-problem/9909/astro-ph9909297.html
ar5iv
text
# Chemical Abundances of Bright Giants in the Mildly Metal-Poor Globular Cluster M4 ## 1 Introduction As isolated laboratories of stellar evolution, individual globular clusters were once considered to be simple systems, having formed coevally, out of the same material, and exhibiting cluster-to-cluster differences due only to metallicity and age effects. In reality, clusters of similar age and metallicity exhibit differences in their colour-magnitude diagrams and many of the elemental abundance patterns deviate from the predictions of stellar evolution theory. Many low-metallicity globular clusters exhibit large star-to-star variations of C, N, O, Na, Mg, and Al abundances. These elements are those that are sensitive to proton-capture nucleosynthesis. In clusters where giant star samples have been sufficiently large, the abundances of O and Na are anticorrelated, as are those of O and Al (as well as sometimes Mg and Al). Previous clusters studied by the Lick-Texas group (including M3, M5, M10, M13, M15, M71, M92, and NGC7006) span a range in metallicities, from –0.8 $`\ge `$ \[Fe/H\] $`\ge `$ –2.24. In the higher-metallicity clusters, the abundance swings are muted. In all of the clusters, the abundance swings are observed to be a function of giant branch position. This relationship is consistent with material having undergone proton-capture nucleosynthesis (via the CN-, ON-, NeNa-, and/or MgAl-cycles) and brought to the surface by a deep-mixing mechanism. Deep mixing, according to theory (Sweigart & Mengel 1979), should become less efficient and possibly cut off as metallicity increases. The metallicity of M4 places it among clusters in which the O versus Na and Mg versus Al anticorrelations might be expected to be largely diminished. There are also some clusters (including M5, NGC3201, NGC6752, 47 Tuc, and $`\omega `$ Cen) for which distinctly bimodal distributions of cyanogen strengths at nearly all giant branch positions have been uncovered.
These clusters apparently have had different nucleosynthesis histories. Among these CN-bimodal clusters is M4 (Norris 1981; Smith & Norris 1993), the nearest, brightest, and one of the most accessible targets to study the CN-bimodal phenomena. For the purposes of this colloquium, only certain aspects of our M4 work will be highlighted. Details of the full analysis are presented in Ivans et al (1999). ## 2 Getting the Red Out While M4 may be the closest globular cluster, it also suffers from interstellar extinction that is large and variable across the cluster face. The line-of-sight to the cluster passes through the outer parts of the Scorpius-Ophiuchus dark cloud complex. A reddening gradient exists across the face of the cluster (Cudworth & Rees 1990; Liu & Janes 1990; Minniti et al 1992). And, the dust extinction probably varies on small spatial scales as well. This is suggested by the colour-magnitude diagram of M4, where the subgiant and giant branches are broader than expected, given the errors in the photometry (see figure 1), as well as by observations of the K i $`\lambda 7699`$ Å interstellar line towards individual M4 stars (Lyons et al 1995). Figure 1 also shows that the reddening cannot reliably be estimated for individual M4 stars to the level needed to map broad-band photometric indices onto stellar parameters. Instead, another reliable temperature estimation method is required. We combined traditional spectroscopic abundance methods with modifications to the line-depth ratio technique pioneered by Gray (1994) to determine the atmospheric parameters of our stars. The “Gray” method relies on ratios of the measured central depths of lines having very different functional dependences on photometric indices and/or Teff to derive accurate relative temperature rankings (e.g. vanadium versus neutral or ionized iron). Gray’s work was done on Pop.
I main sequence stars and has since been expanded by Hiltgen (1996) for applications to subgiants of a range of disk metallicities. Happily, many of Gray’s line depth ratios are also sensitive Teff indicators for lower-metallicity, very cool RGB stars. The line depth ratios vary more than one dex in spectra of giants of moderately metal-poor clusters, and thus can indicate very small Teff changes. However, these relationships begin to approach unity among the coolest stars. While a tremendously useful tool, the “Gray” method cannot be applied to all stars of all clusters: these ratios probably will be less useful as temperature indicators for the coolest stars of appreciably more metal-rich globular clusters (where the lower temperatures and higher metallicities conspire to saturate and blend virtually all of the “Gray” spectral features). However, the method was successfully employed in our work on M4 and is currently being applied to other clusters in the process of analysis. Our initial Teff calibration of the M4 line depth ratios was set through a similar analysis of RGB stars of M5 (a cluster of very similar metallicity to M4 that suffers little from interstellar dust extinction). We discuss the details of the correlations and transformations in Ivans et al (1999). While we used the line-ratio method to rank the stars, final temperatures were determined from full spectral analyses. Our results for individual stellar parameters compare well with M4 stars in the literature. Taking advantage of the non-photometric means by which we obtained our temperatures, we then derived an average $`E(B-V)`$ reddening of 0.33 ± 0.01 (which is significantly lower than that estimated by using the dust maps made by Schlegel et al 1998 but is in good agreement with the M4 RR Lyrae studies by Caputo et al 1985).
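As an illustration of the line-depth-ratio idea, the sketch below measures the ratio of central depths of two Gaussian absorption lines in a synthetic, continuum-normalized spectrum. The wavelengths, depths, and line identifications are invented for this example; the real analysis uses measured line lists and a Teff calibration from the M5 stars.

```python
import numpy as np

def central_depth(wave, flux, line_center, window=0.3):
    """Depth below the unit continuum at the deepest point
    within +/- window Angstroms of the line center."""
    mask = np.abs(wave - line_center) < window
    return 1.0 - flux[mask].min()

# Synthetic continuum-normalized spectrum with two Gaussian lines:
# a temperature-sensitive "V I" line and a less sensitive "Fe I" line
# (hypothetical wavelengths and depths, for illustration only).
wave = np.linspace(6240.0, 6260.0, 2000)
flux = np.ones_like(wave)
flux -= 0.55 * np.exp(-0.5 * ((wave - 6243.1) / 0.08) ** 2)  # "V I"
flux -= 0.40 * np.exp(-0.5 * ((wave - 6252.6) / 0.08) ** 2)  # "Fe I"

ratio = central_depth(wave, flux, 6243.1) / central_depth(wave, flux, 6252.6)
# Measured in stars of known Teff, a grid of such ratios turns this
# single number into a relative temperature ranking.
```

Because both depths are measured from the same exposure, the ratio is insensitive to the reddening and flux calibration that plague broad-band indices.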
Finally, as a confirmation of the method, we derived individual stellar extinctions that correlate extremely well not only with IRAS 100 micron fluxes but also with $`E(B-V)`$ estimates derived independently in interstellar absorption studies of potassium by Lyons et al (1995). ## 3 Abundance Results We performed line-by-line abundance analyses to determine the final model atmosphere specifications. Our final models satisfied the following constraints: consistent abundances from lines of neutral and ionized Fe and Ti; reasonable predictions for colour-magnitude diagram positions from the derived gravities; no obvious trends of neutral Fe line abundances with EWs; and no obvious trends of neutral Fe line abundances with corresponding excitation potentials. Finally, there is no astrophysical reason for Fe-peak abundances to vary significantly from star to star along the M4 giant branch; V, Ti, Fe, and Ni showed no significant drifts along the RGB. We present the abundance analyses in the “boxplot” shown in figure 2. The boxplot illustrates the median, data spread, skew and distribution of the range of values we derived for each of the elements from our program stars, along with possible outliers. We determined a metallicity of $`<`$\[Fe/H\]$`>`$ = –1.18 ($`\sigma `$ = 0.02) and found a large abundance ratio range for proton-capture elements such as oxygen, sodium and aluminum. However, the star-to-star variations are small for the heavier elements. Our M4 abundances generally agree well with those of past M4 investigators. The abundances of Ca, Sc, Ti, V, and Ni are also in accord with those of M5 and the halo field. However, the M4 abundances of Ba and La are both elevated with respect to comparison samples. Yet, the overabundance of Ba in M4 stars has been observed in independent studies by both Brown & Wallerstein (1992) and Lambert et al (1992).
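The logarithmic bracket notation used throughout can be unpacked with a one-liner; the sketch below simply converts the quoted mean \[Fe/H\] = –1.18 and its 0.02 dex spread into linear factors relative to the solar ratio.

```python
def dex_to_linear(bracket_value):
    """[X/Y] is log10 of the star's X/Y number ratio minus log10 of
    the solar ratio, so 10**[X/Y] is the linear factor vs. the Sun."""
    return 10.0 ** bracket_value

feh_m4 = -1.18
iron_fraction = dex_to_linear(feh_m4)   # ~6.6% of the solar Fe/H ratio

# A star-to-star spread of sigma = 0.02 dex is under 5% in linear
# abundance -- effectively a single iron abundance for the cluster:
spread = dex_to_linear(0.02) - 1.0
```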
We also derived high silicon and aluminum abundances, in agreement with previous studies of M4 but significantly higher than the abundances found in either M5 or the field. We explore these issues in the following sub-sections. ### 3.1 Proton-Capture Nucleosynthesis Although several M4 giants exhibit oxygen deficiencies, most M4 giants show little evidence for the severe oxygen depletions observed in M13 (Kraft et al 1997) and M15 (Sneden et al 1997). Low oxygen abundances are accompanied by low carbon and elevated nitrogen. In addition, the sum of C+N+O is essentially constant, as expected if all stars draw on the same primordial material. We find that the behaviour of O is anti-correlated with that of Na, and, to a lesser degree, with that of Al. These findings are compatible with a proton-capture scenario in which Na and Al are enhanced at the expense of Ne and Mg, respectively (Langer et al 1997, Cavallo et al 1998). And, we find that the CN-strong stars are those that are more highly processed via proton-capture nucleosynthesis: the CN-strong group has a mean Na abundance that is a factor of two larger than the CN-weak group, and our CN-strong group also has higher Al abundances but the CN-strong/CN-weak difference is much less pronounced. As shown by Langer & Hoffman (1995), very modest hydrogen depletion of the envelope material can lead to an enhancement of Al by +0.4 dex when Na is enhanced by +0.7 dex, exactly as observed in our M4 sample. In this picture, unlike that found in M13 (Shetrone 1996), the enhancement of Al comes about entirely by destruction of <sup>25</sup>Mg and <sup>26</sup>Mg: <sup>24</sup>Mg remains untouched. We attempted to derive the Mg isotope ratios in our spectra and, while there is a hint that the isotopic ratios may not be the same as those found in halo field stars, much higher resolution data is required before one can make statements regarding any differences with certainty. 
### 3.2 $`\alpha `$-element Enhancements Both the magnesium abundances and silicon abundances in M4 exceed those in M5 by a factor of two. However, Ca and Ti abundances in the two clusters are essentially the same and have the usual modest overabundances with respect to the scaled solar ratio. The $`\alpha `$-element ratios in M4 mimic those found in the very metal-poor cluster M15. M15, like M4, also exhibits a high aluminum abundance (that is, a high “floor” of aluminum, on top of which is the proton-capture nucleosynthetic contribution described in the previous subsection). Substructure in $`\alpha `$\- and light odd-element abundances is also found among relatively metal-rich disk dwarfs (e.g. Edvardsson et al 1993) and galactic nuclear bulge giants (McWilliam & Rich 1994). While the abundance pairings and trends are not matched between the cluster and disk/bulge populations, it is clear that the differences must arise from some property of the primordial nucleosynthetic sites. ### 3.3 The Abundances of Ba, La, Eu and $`\omega `$ Cen The \[Ba/Eu\] ratio is often used as a measure of $`s`$\- to $`r`$-process nucleosynthesis in the primordial material of a cluster. Typically, clusters show –0.6 $`<`$ \[Ba/Eu\] $`<`$ –0.2. In M4, \[Ba/Eu\] is 0.25 dex higher than the total solar-system r + s value and more than four times higher than that of the “normal” cluster M5. However, the high \[Ba/Eu\] in M4 is not because Eu is low (as is the case in very metal-poor M15); rather, the \[Eu/Fe\] we find for M4 is not very different from that of M5. The high \[Ba/Eu\] is due to a high \[Ba/Fe\]. And, the high abundance of Ba is supported by a high abundance of La. We performed numerical experiments by combining the results of our derived Ba, La, and Eu abundances and found that M4 has a larger $`s`$:$`r`$-process contribution than the sun; the Ba abundance in M4 cannot be attributed to the $`r`$-process.
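A numerical experiment of this kind can be sketched as follows. Europium is treated as a pure r-process tracer, and the assumed 85:15 solar Ba s:r split and the \[Ba/Fe\], \[Eu/Fe\] inputs below are illustrative placeholders, not the values used in the actual analysis.

```python
BA_S_FRACTION_SUN = 0.85   # assumed fraction of solar Ba from the s-process

def ba_s_to_r(ba_fe, eu_fe):
    """Estimate a star's Ba s:r ratio from [Ba/Fe] and [Eu/Fe],
    treating Eu as a pure r-process tracer so that the star's
    r-process Ba scales with its Eu."""
    ba_total = 10.0 ** ba_fe                      # linear, relative to solar Ba/Fe
    eu_total = 10.0 ** eu_fe
    ba_r = (1.0 - BA_S_FRACTION_SUN) * eu_total   # r-process Ba follows Eu
    ba_s = ba_total - ba_r
    return ba_s / ba_r

ratio_sun = BA_S_FRACTION_SUN / (1.0 - BA_S_FRACTION_SUN)

# Hypothetical M4-like inputs: Ba strongly enhanced, Eu only mildly so.
ratio_m4 = ba_s_to_r(ba_fe=0.6, eu_fe=0.4)
# ratio_m4 exceeds ratio_sun: a Ba excess unaccompanied by a matching
# Eu excess must be s-process material.
```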
We find no dependence of the Ba or La abundance on evolutionary state in M4: these excesses cannot result from neutron captures on Fe-peak elements during a He shell flash episode on the AGB of the stars we observed. It must be a signature of $`s`$-process enrichment of the primordial material out of which the low-mass M4 stars we observed were formed. This excess of the $`s`$-process elements is evidence that the period of star formation and mass-loss that preceded the formation of the observed stars in M4 was long enough for stars of 3–10 solar masses to evolve into AGB stars and contribute their ejecta into the ISM of the cluster. $`s`$-process contributions such as those we found in M4 are also well documented in the globular cluster $`\omega `$ Cen (Vanture et al 1994). Interestingly enough, there exists in $`\omega `$ Cen a subset of stars whose overabundance characteristics in \[Ba/Fe\], \[Al/Fe\], \[Si/Fe\], and \[La/Fe\] are nearly identical to those found in our M4 stars. However, the multi-metallicity cluster $`\omega `$ Cen also possesses a more complicated nucleosynthetic history than M4. The important point here is that the high Ba and La abundances of M4 stars are surely a primordial, not an evolutionary, effect. ## 4 Conclusions and Future Work Evidence for proton-capture nucleosynthesis in M4 was expected, and was found to be in good agreement with both observations and theory. However, the overabundances of Ba, La, Si, and Al were not expected and these are still a puzzle. While there are similarities in metallicity and evolutionary age as observed in the colour-magnitude diagrams of M4 and M5, what nucleosynthetic histories can explain such large differences in the elemental abundances? The Mg isotope ratio in M4 may also be found to be different from that of the field stars, yet another difference that the environment may have imposed on the nucleosynthetic history.
Many of the colloquium talks have emphasized the need to attack both more of the outer halo clusters and the disk clusters, and to do analyses similar to those that have been done for the closest halo clusters. With the successful application of the line-ratio techniques, more detailed abundance analyses will become possible. ## Acknowledgements I am indebted to Jerry Lodriguss for generously sharing the electronic files containing his excellent deep sky images of the M4 Sco-Oph region. I also very gratefully acknowledge the contributions made by my collaborators to this work, without whom the successful completion of this project would not have been possible. ## References > Brown, J. A. & Wallerstein, G., 1992, AJ 104, 1818. > > Caputo, F., Castellani, V., & Quarta, M. L., 1985, A&A 143, 8. > > Cavallo, R. M., Sweigart, A. V., & Bell, R. A., 1998, ApJ 492, 575. > > Cudworth, K. M. & Rees, R. F., 1990, AJ 99, 1491. > > Edvardsson, B., Andersen, J., Gustafsson, B., Lambert, D. L., Nissen, P. E., & Tomkin, J., 1993, A&A 275, 101. > > Gray, D. F., 1994, PASP 106, 1248. > > Gustafsson, B., Bell, R. A., Eriksson, K., & Nordlund, A., 1975, A&A 42, 407. > > Hiltgen, D., 1996, Ph.D. Thesis, University of Texas, Austin. > > Ivans, I. I., Sneden, C., Kraft, R. P., Suntzeff, N. B., Smith, V. V., Langer, G. E., Fulbright, J. P., 1999, AJ 118, 1273. > > Kraft, R. P., Sneden, C., Smith, G. H., Shetrone, M. D., Langer, G. E., & Pilachowski, C. A., 1997, AJ 113, 279. > > Lambert, D. L., McWilliam, A. & Smith, V. V., 1992, ApJ 386, 685. > > Langer, G. E. & Hoffman, R., 1995, PASP 107, 1177. > > Langer, G. E., Hoffman, R., & Zaidins, C. S., 1997, PASP 109, 244. > > Liu, T. & Janes, K. A., 1990, ApJ 360, 561. > > Lyons, M. A., Bates, B., Kemp, S. N., & Davies, R. D., 1995, MNRAS 277, 113. > > McWilliam, A. & Rich, R. M., 1994, A&AS 91, 749. > > Minniti, D., Coyne, G. V., & Claria, J. J., 1992, AJ 103, 871. > > Norris, J., 1981, ApJ 248, 177. > > Schlegel, D. J., Finkbeiner, D.
P., & Davis, M., 1998, ApJ 500, 525. > > Shetrone, M. D., 1996, AJ 112, 2639. > > Smith, G. H. & Norris, J. E., 1993, AJ 105, 173. > > Sneden, C., 1973, ApJ 184, 839. > > Sneden, C., Kraft, R. P., Shetrone, M. D., Smith, G. H., Langer, G. E., & Prosser, C. F., 1997, AJ 114, 1964. > > Sweigart, A. V. & Mengel, J. G., 1979, ApJ 229, 624. > > Vanture, A. D., Wallerstein, G., & Brown, J. A., 1994, PASP 106, 835.
no-problem/9909/nucl-th9909001.html
ar5iv
text
# Systematic Study of the Kaon to Pion Multiplicity Ratios in Heavy-Ion Collisions ## I Introduction Nuclear matter at high energy density has been extensively studied through high energy heavy-ion collisions . The primary goal of these studies is to observe the possible phase transition from hadronic matter to quark-gluon plasma (QGP) , in which quarks and gluons are deconfined from individual hadrons, forming an extended region. The phase transition is predicted by lattice QCD calculations to occur at a temperature of 140–170 MeV and an energy density on the order of 0.5–1.5 GeV/fm<sup>3</sup> . It is believed that the QGP state existed in the early universe shortly after the big bang , and may also exist in the cores of neutron stars . If a QGP is produced in a heavy-ion collision, the collision system will evolve in stages from deconfined quarks and gluons to interacting hadrons, and finally to freeze-out particles which are detected. In order to extract information about the postulated quark-gluon plasma stage of heavy-ion collisions, systematic studies of multiple observables at freeze-out as a function of the collision volume and bombarding energy are necessary. These observables include strangeness production , charm production , lepton production , jet quenching , elliptic , and other types of flow . For a critical review of these observables, see Ref. . In this article, we restrict ourselves to one of the above observables, namely, strangeness production. In particular, we focus on the kaon to pion multiplicity ratios ($`K/\pi `$ for both charge signs), since the bulk of strangeness produced in heavy-ion collisions is carried by kaons. It is not clear how much the $`K/\pi `$ ratios in heavy-ion reactions reflect the properties of the phase transition between QGP and hadronic matter.
Originally it was argued that $`K/\pi `$ may serve as a signature of the QGP, and indeed early measurements at the BNL AGS and CERN SPS showed a significant enhancement of $`K^+/\pi ^+`$ in heavy-ion collisions over $`p+p`$ interactions. The idea was that, in the deconfined state, strange quark pairs ($`s\overline{s}`$) may be copiously produced through gluon-gluon fusion ($`gg\to s\overline{s}`$), while in the hadronic gas such pairs are produced via pairs of strange hadrons, with a higher production threshold. Therefore the time needed for a hadronic gas system in kinetic equilibrium to reach chemical equilibration is significantly longer than the lifetime of a heavy-ion collision, which is typically on the order of 10 fm/$`c`$ . This idea, however, has been challenged because it neglects pre-equilibrium dynamics of the initial stage which may considerably speed up chemical equilibration : Initial interactions between produced particles and ingoing baryons are “harder” than in kinetic equilibrium. This is confirmed by detailed transport calculations which can reproduce the early experimental data on $`K^+/\pi ^+`$ . Even on the level of equilibrium physics, it is not entirely clear which difference to expect in a comparison of the scenarios with and without a phase transition. Strangeness in a chemically equilibrated hadronic gas might be as high as or higher than in a QGP . In fact, the thermal hadron gas model requires a so-called strangeness suppression factor to fit to experimental particle ratios; this has been interpreted as an indication of a non-equilibrium transition from QGP to hadron gas . In this picture, a “lower” than expected $`K/\pi `$ ratio is a signature of the QGP. Recently several measurements of $`K/\pi `$ have been made at both the AGS and SPS by different experiments . There has been renewed interest in the interpretation of the measured $`K/\pi `$ ratios .
Interestingly, the Giessen group found the strongest deviations of their hadronic model (HSD) calculations from experimental data not at SPS energy but at the lower AGS energy . Despite these theoretical uncertainties, the $`K/\pi `$ ratios may still be valuable observables to be studied not only to address questions of the phase transition but also to obtain a better understanding of the pre-equilibrium dynamics, the hadronization processes, and the dynamics of hadrons in the medium. In these studies, the beam energy is an essential control parameter whose variation allows one to vary two important variables: the initial baryon and energy densities. Both densities are of importance as to whether the system enters into the quark-gluon phase or remains hadronic all the time. If the “strangeness content” of the hadronic matter is substantially different from that of a QGP, a discontinuity might be expected in the excitation function of the $`K/\pi `$ ratios. With data emerging in the near future from RHIC, where a QGP is likely to be formed, the first things to look for are departures of these observables from what has been observed at lower energies. In this paper we present a systematic study of the $`K/\pi `$ ratios using the Relativistic Quantum Molecular Dynamics (RQMD) model . For simplicity, we consider only collisions of equal, large-mass nuclei. The goal of this paper is to provide an understanding of the underlying mechanisms for $`K/\pi `$ enhancement by comparing model results with available experimental data at various energies. We also present predictions of the $`K/\pi `$ ratios at RHIC energy. Throughout the paper, $`K/\pi `$ stands for the integrated total multiplicity ratios unless otherwise noted. The paper is organized as follows. In Sec. II, we give a brief description of the RQMD model. In Sec. III, we demonstrate that the calculated pion and kaon multiplicities are in good agreement with experimental data in elementary $`p+p`$ interactions. In Sec.
IV, we present our results on $`K/\pi `$ ratios in calculated heavy-ion collisions. This section is divided into five parts. Part A presents a systematic study of $`K/\pi `$ as a function of the number of participants, parts B and C investigate the effects of particle rescattering and rope formation on the ratios, part D considers excitation functions of $`K/\pi `$ spanning AGS, SPS, and RHIC energies ($`\sqrt{s}\simeq 5`$, 17, and 200 AGeV, respectively), and part E derives the $`K/\pi `$ enhancement factors. Section V summarizes our findings. ## II The RQMD model RQMD is a semi-classical microscopic model which combines classical propagation with stochastic interactions . The degrees of freedom in RQMD depend on the relevant length and time scales of the processes considered. In low energy collisions ($`\sim `$1 AGeV), RQMD reduces to solving transport equations for a system of nucleons, other hadrons and resonances interacting in binary collisions or via mean field. At high beam energies ($`\gtrsim `$10 AGeV), color strings and hadronic resonances are excited in elementary collisions; their fragmentation and decay lead to production of particles. Overlapping strings do not fragment independently but form “ropes” . The secondaries which emerge from the fragmenting strings, ropes and resonances may re-interact. For all the results presented here, the so-called “cascade mode” is used in RQMD (no “mean field”). In order to calculate heavy-ion collisions at RHIC energy, the RQMD code evolved from version 2.3 to 2.4. The physics is the same in the two versions. The change of version was due to technical reasons. We use version 2.3 in calculations for all results presented in this paper, except for heavy-ion collisions at RHIC energy, where version 2.4 is used. We have checked our AGS and SPS results with those from version 2.4 and our RHIC results of peripheral heavy-ion collisions with those from version 2.3; consistent results were found.
Let us comment on the “rope” mechanism implemented in the model. In RQMD (and also in QGSM and the Spanish version of DPM ), strings fuse into color “ropes” if they overlap in transverse and longitudinal configuration space; the transverse dimension of strings is on the order of 0.8 fm . The strong $`s\overline{s}`$ enhancement in the coherent rope fields is a consequence of the large chromoelectric field strength, because quark pair production rates depend strongly on the ratio between the squared quark mass and the field strength . It has been shown that the rope mechanism strongly enhances multi-strange baryon production . However, its effect on single-strange hadron production is small once particle rescattering is considered, resulting in negligible change in the $`K/\pi `$ ratios . On the other hand, rescattering between hadrons and resonances considerably changes the ratio between produced strange and non-strange quarks. This change is about 80% for central Pb+Pb collisions at SPS energy . Multi-step processes such as $`\pi +N\to \mathrm{\Delta }`$ and $`\pi +\pi \to \varrho `$, followed by $`\mathrm{\Delta }+\varrho \to \mathrm{\Delta }^{}`$, are of essential importance for strangeness production, because intermediate resonances act as an “energy store.” Such multi-body interactions are only frequent in a system of sufficient density of roughly 1 fm<sup>-3</sup>, because the lifetime of the intermediate states is typically 1–2 fm/$`c`$ . Therefore, strangeness enhancement via rescattering is not expected to occur in the late dilute stages of the reactions. Since feed-down affects the extracted particle yields, let us finally mention which hadrons are kept stable when we calculate the $`K/\pi `$ ratios. Members of the baryon octet (mainly $`\mathrm{\Lambda },\mathrm{\Xi },\mathrm{\Omega }`$), and the $`K_S^0,K_L^0,\eta `$, and $`\varphi `$ mesons are not decayed in the model.
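The “energy store” role of the intermediate resonances can be made concrete with a threshold estimate (rounded PDG masses; the $`\mathrm{\Lambda }K`$ channel is chosen here as a representative strangeness-production final state):

```python
# Masses in GeV (PDG values, rounded)
M_PI, M_N = 0.138, 0.939
M_DELTA, M_RHO = 1.232, 0.775
M_LAMBDA, M_K = 1.116, 0.494

# Minimum invariant mass for strangeness production in a
# meson + baryon collision ending in Lambda + K:
sqrt_s_threshold = M_LAMBDA + M_K          # ~1.61 GeV

# Invariant mass of two particles at rest (relative momentum only adds):
sqrt_s_piN = M_PI + M_N                    # ~1.08 GeV: below threshold
sqrt_s_delta_rho = M_DELTA + M_RHO         # ~2.01 GeV: above threshold

# Kinetic energy absorbed into the Delta and rho masses in earlier
# collisions lets even a slow Delta + rho pair reach the Lambda K channel.
```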
In reality, all these particles decay into pions via single and/or multi-step channels; $`\varphi `$’s and $`\mathrm{\Omega }`$’s decay into kaons in addition. Unless noted otherwise, these decay products are not included in the charged pion or kaon multiplicity. According to RQMD, the $`\eta `$-decay contribution to the charged pion multiplicities is 2–4%; $`K_S^0`$ multiplicity is, to a good approximation, one half of the charged kaon multiplicity; therefore the $`K_S^0`$-decay contribution to pion multiplicities can be deduced from $`K^+/\pi ^+`$ and $`K^{}/\pi ^{}`$; the contributions from all other sources to $`\pi ^+`$ ($`\pi ^{}`$) multiplicity are less than 3% (5%) and 5% (7–15%), respectively, for $`p+p`$ and heavy-ion collisions; and the $`\varphi `$-decay contributions range from a few percent to 10% in $`K^+`$ and 7–15% in $`K^{}`$ multiplicity in all collisions studied. ## III RQMD results for $`p+p`$ interactions Figure 1 (top panel) shows the inclusive $`\pi ^+`$, $`\pi ^{}`$, $`K^+`$, and $`K^{}`$ multiplicities in $`p+p`$ interactions as a function of the center-of-mass energy ($`\sqrt{s}`$) from AGS to SPS energy. The $`K^+/\pi ^+`$ and $`K^{}/\pi ^{}`$ ratios are shown in the bottom panel. The symbols are results of RQMD calculations. The dashed curves are parametrizations of the experimental $`p+p`$ data , with experimental uncertainties shown in the shaded areas. Good agreement is found between the RQMD results and the experimental data, especially for $`\sqrt{s}>6`$ AGeV. We note that similar degrees of agreement have also been achieved for $`p+A`$ collisions . These agreements provide a realistic basis for RQMD calculations of heavy-ion collisions. It should be noted that the agreement between the RQMD results and the experimental $`p+p`$ data is not automatically guaranteed by inputs into RQMD, such as parametrizations of exclusive cross sections.
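To see how decay feed-down of the size quoted above moves the ratios, a toy correction with hypothetical primary multiplicities (the 8% and 7% figures below are illustrative choices within the quoted ranges, not values from RQMD):

```python
def ratio_with_feeddown(k_mult, pi_mult, pi_feed_frac, k_feed_frac):
    """K/pi after decay feed-down inflates the pion multiplicity by
    pi_feed_frac and the kaon multiplicity by k_feed_frac."""
    return k_mult * (1.0 + k_feed_frac) / (pi_mult * (1.0 + pi_feed_frac))

# Hypothetical primary multiplicities giving a 'bare' ratio of 0.18:
k_plus, pi_plus = 18.0, 100.0
bare = k_plus / pi_plus

# e.g. ~3% eta + ~5% other sources feeding pi+, ~7% phi decays feeding K+:
corrected = ratio_with_feeddown(k_plus, pi_plus,
                                pi_feed_frac=0.08, k_feed_frac=0.07)
# With these inputs the net shift in K+/pi+ is only at the percent level;
# it can grow once the K_S^0 -> pi pi contribution is included as well.
```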
Depending on energy, resonances or strings are excited in the elementary inelastic nucleon-nucleon collisions (see Refs. for details). While the resonance parameters are taken from the data tables, the parameters of string fragmentation are fixed from the properties of strings created in $`e^+e^{}q\overline{q}`$ annihilations. One of the most important differences concerning particle production in $`e^+e^{}`$ versus hadron-hadron interactions is the fragmentation of the ingoing valence quarks (e.g., the leading-particle effect ). It is treated in a constituent quark spectator fragmentation approach which keeps track of the gluonic “junction” connecting all three quarks of a nucleon. This is important for the stopping of nucleons on a nuclear target. ## IV RQMD results for heavy-ion collisions We have calculated Au+Au collisions at lab beam energies 8, 11, 14.6, 20, and 30 AGeV ($`\sqrt{s}=4.3,4.8,5.4,6.3`$, and $`7.6`$ AGeV), Pb+Pb collisions at lab beam energies 40, 60, 80, 100, 120, 140, 158, and 180 AGeV ($`\sqrt{s}=8.8,10.7,12.3,13.8,15.1,16.3,17.3`$, and $`18.4`$ AGeV), and Au+Au collisions at RHIC energy ($`\sqrt{s}=200`$ AGeV). The comparisons of $`K^+/\pi ^+`$ between RQMD and lower energy data are reported elsewhere . By studying these collisions, we shall try to identify the underlying physics within RQMD for the enhancement in $`K/\pi `$ ratios at the AGS and SPS energies, and present RQMD predictions for the ratios at RHIC energy. ### A $`K/\pi `$ systematics versus centrality In this section, we study $`K/\pi `$ ratios as a function of the collision centrality. We choose the number of participants ($`N_p`$) as a characterization of the collision centrality, considering that $`N_p`$ is experimentally accessible via forward energy measurements. For RQMD results, $`N_p`$ is taken to be the number of initial nucleons that interact at least once with other particles. 
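For orientation, $`N_p`$ can also be estimated purely geometrically with a schematic Monte Carlo Glauber calculation. Everything here is a simplification: nucleons are sampled from a uniform hard sphere rather than a Woods-Saxon profile, and the radius (6.38 fm) and inelastic NN cross section (40 mb) are typical assumed values, not parameters taken from RQMD.

```python
import math
import random

def sample_nucleus(A, R):
    """A nucleons uniform in a sphere of radius R (fm); only the
    transverse (x, y) coordinates matter for straight-line trajectories."""
    pts = []
    while len(pts) < A:
        x, y, z = (random.uniform(-R, R) for _ in range(3))
        if x * x + y * y + z * z <= R * R:
            pts.append((x, y))
    return pts

def n_participants(b, A=197, R=6.38, sigma_nn_fm2=4.0):
    """Count nucleons that interact at least once: a nucleon participates
    if any nucleon of the other nucleus passes within d = sqrt(sigma/pi)."""
    d2 = sigma_nn_fm2 / math.pi
    proj = [(x + b, y) for (x, y) in sample_nucleus(A, R)]
    targ = sample_nucleus(A, R)

    def hit(p, others):
        return any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= d2
                   for q in others)

    return sum(hit(p, targ) for p in proj) + sum(hit(t, proj) for t in targ)

random.seed(0)
np_central = n_participants(b=1.0)      # near-central Au+Au: N_p close to 2A
np_peripheral = n_participants(b=10.0)  # peripheral: only the overlap zone
```

In an experiment the same quantity is reached indirectly, e.g. via forward (spectator) energy, which is why it is a convenient common centrality scale for data and model.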
Preliminary experimental data on the centrality dependence of $`K/\pi `$ are available from AGS E866 and SPS NA49 . Before discussing the RQMD results at various energies, we compare the RQMD results at AGS and SPS energies to the available data in Fig. 2. In order to facilitate the comparison, contributions from $`\eta ,\varphi ,K_S^0`$, and strange (anti)baryon decays are included in the RQMD results. These contributions reduce the RQMD $`K^+/\pi ^+`$ ratio by 5% (and is negligible for $`K^{}/\pi ^{}`$) at AGS energy, and by 10% for both $`K^+/\pi ^+`$ and $`K^{}/\pi ^{}`$ at SPS energy. Good agreement is found between the RQMD results and the AGS data (within 20%). RQMD systematically overpredicts the ratios at SPS energy by 10–30%. The agreement is fair with regard to the relatively large systematic errors on the data. See Parts B and C for further discussion. Figure 3 (left panel) shows $`K^+/\pi ^+`$ calculated by RQMD as a function of $`N_p`$ in heavy-ion collisions at various bombarding energies. $`K^+/\pi ^+`$ increases with $`\sqrt{s}`$ and increases from peripheral to central collisions. For all collision energies studied except RHIC energy, the dependence of the ratio on $`N_p`$ is similar. The increase in the value of $`K^+/\pi ^+`$ is of the same magnitude from peripheral to central collisions. Consequently, the relative increase in $`K^+/\pi ^+`$ from peripheral to central collisions is larger at lower energies. Figure 3 (right panel) shows $`K^{}/\pi ^{}`$ as a function of $`N_p`$. $`K^{}/\pi ^{}`$ has similar dependence on $`N_p`$ as $`K^+/\pi ^+`$ for collisions at energies between AGS and SPS. At each centrality, the ratio increases with $`\sqrt{s}`$. Unlike $`K^+/\pi ^+`$, the absolute increase in $`K^{}/\pi ^{}`$ from peripheral to central collisions is larger at higher energies. At RHIC energy as shown by Fig. 3, the $`N_p`$ dependence of $`K^+/\pi ^+`$ is distinctly different from those at the low AGS and SPS energies. 
Here, a saturation of the ratio seems to appear in central collisions. Consequently, the absolute increase in the ratio from peripheral to central collisions is smaller. On the other hand, $`K^-/\pi ^-`$ continues to increase with $`N_p`$, similar to what is observed at the lower energies. The absolute value of $`K^-/\pi ^-`$ at RHIC energy is significantly higher than the low-energy values. It is also interesting to note that in central collisions the absolute value of $`K^+/\pi ^+`$ is lower at RHIC than at SPS energy. What makes $`K^+`$ and $`K^-`$ different is, of course, the presence of the net baryon number. The effect of the net baryon number is larger at lower energies. As a result, associated production of $`K^+`$ via $`NN\to NK^+Y`$ (where $`Y=\mathrm{\Lambda }`$ or $`\mathrm{\Sigma }`$) dominates over that of $`K^-`$ via $`\overline{N}\overline{N}\to \overline{N}K^-\overline{Y}`$. Note that pair production results in the same number of $`K^+`$’s and $`K^-`$’s. On the other hand, the difference between associated production of $`K^+`$ and $`K^-`$ becomes negligible at RHIC energy, where particle production from string and rope decays becomes important. Therefore, $`K^+/\pi ^+`$ and $`K^-/\pi ^-`$ display a more similar shape and magnitude at RHIC than at SPS energy, as shown in Fig. 3. ### B Effect of particle rescattering It is shown in Refs. , within the framework of RQMD, that the increase in $`K^+/\pi ^+`$ in heavy-ion collisions at AGS energy with respect to $`p+p`$ and $`p+A`$ is largely due to secondary rescattering among the particles. Therefore, we shall study the $`K/\pi `$ ratios as a function of the average number of collisions ($`\nu _p`$) suffered by each nucleon. $`\nu _p`$ includes both nucleon-baryon and nucleon-meson interactions. Interestingly, $`\nu _p`$ shows little variation with bombarding energy; it depends mainly on the collision geometry.
For both $`K^+/\pi ^+`$ and $`K^-/\pi ^-`$, the $`\nu _p`$ dependence displays features similar to those seen in Fig. 3. These similarities are not surprising, because the average values of $`\nu _p`$ and $`N_p`$ are correlated (although the distribution of $`\nu _p`$ versus $`N_p`$ is fairly broad). For large $`\nu _p`$, however, $`K^+/\pi ^+`$ appears to saturate for nuclear collisions at high $`\sqrt{s}`$, including those in the SPS energy regime. To elaborate further on the effect of particle rescattering on the $`K/\pi `$ ratios, we calculate the ratios using RQMD with particle rescattering turned off. To be specific, we include only the primary interactions of the ingoing nucleons (including rope formation) while the projectile and target pass through each other, but turn off the meson-induced interactions. The results for $`K^+/\pi ^+`$ and $`K^-/\pi ^-`$ are shown as the dashed curves in Fig. 4 for heavy-ion collisions at AGS and SPS energies. The RHIC energy results are shown in Fig. 5. For comparison, the default RQMD results are also shown. The top axis indicates the average impact parameter $`b`$ corresponding to $`N_p`$. The $`b`$ values in Fig. 4 are extracted from Pb+Pb collisions at SPS energy, and in Fig. 5 from Au+Au collisions at RHIC energy. Note that the average $`b`$ has a systematic deviation of $`\pm 0.5`$ fm for collisions at AGS energy and with different settings (default versus no rescattering and/or no rope). Referring to Fig. 4 and Fig. 5, the $`K^+/\pi ^+`$ and $`K^-/\pi ^-`$ ratios obtained with no particle rescattering show only a shallow increase (if not constant) with $`N_p`$ in heavy-ion collisions at all three energies. Thus meson-induced interactions are responsible for the significant increase of the ratios with $`N_p`$ obtained with the default RQMD, especially at AGS energy. This confirms the early results obtained in Refs. .
Meson-induced interactions mainly increase the kaon production rate; they do not change the pion multiplicity significantly. The changes in the pion multiplicities due to rescattering are about $`-20`$%, $`-10`$–0%, and $`+20`$% at AGS, SPS, and RHIC energies, respectively. Likewise, the corresponding changes in the kaon multiplicity are $`+300`$%, $`+20`$–30%, and $`+60`$%. Consequently, the $`K/\pi `$ ratios are increased by including meson-induced interactions. The above results can be understood if one keeps in mind that resonances play the most important role in overcoming the kaon production threshold. To illustrate this, compare $`\pi \mathrm{\Delta }(1232)`$ and $`\pi N`$ collisions at the same relative momentum 700 MeV/$`c`$ in the rest frame. In addition to pion production, the $`\pi \mathrm{\Delta }`$ collision has a certain probability to produce a $`K\mathrm{\Lambda }`$ pair. On the other hand, the $`\pi N`$ collision can only produce pions. Thus $`\pi N`$ collisions do not contribute directly to the strangeness enhancement. Only as part of a many-body process do they play a role (e.g., by involving a $`\varrho `$ as an intermediate state which subsequently interacts with another hadron). Similarly, the contribution from $`\pi \pi `$ collisions turns out to be irrelevant. With the meson-induced interactions turned off, $`\nu _p`$ is significantly lower than that obtained by the default RQMD, because only the nucleon-baryon interactions are counted. At AGS energy, the $`K/\pi `$ ratios are found to be constant over $`\nu _p`$, indicating that multiple baryon-baryon interactions do not alter the $`K/\pi `$ ratios: they increase the individual kaon and pion multiplicities with similar relative magnitude. At the higher SPS and RHIC energies, the physics in RQMD is different from that at AGS energy: strings and ropes are formed and their fragmentation results in particle production. Thus, the meaning of $`\nu _p`$ is questionable.
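The multiplicity changes quoted above combine multiplicatively into the change of the $`K/\pi `$ ratio; a quick arithmetic check with the rounded AGS and RHIC figures (illustration only):

```python
# Fractional multiplicity changes from rescattering translate into a
# multiplicative change of K/pi.  Rounded figures from the text.
def ratio_factor(kaon_change, pion_change):
    return (1.0 + kaon_change) / (1.0 + pion_change)

f_ags = ratio_factor(3.0, -0.2)   # AGS: kaons +300%, pions -20%
f_rhic = ratio_factor(0.6, 0.2)   # RHIC: kaons +60%, pions +20%
```

At AGS energy the ratio grows roughly fivefold, while at RHIC the net effect is a much milder increase of about one third, consistent with rescattering being most important near the production threshold.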
Since the rope mechanism is a novel implementation in RQMD, introduced in an attempt to take into account the string-string interactions during the hot and dense stage of heavy-ion collisions, we study its effect on $`K/\pi `$ in the next section. ### C Effect of rope formation The dot-dashed curves in Figs. 4 and 5 show the calculated $`K/\pi `$ ratios in heavy-ion collisions at SPS and RHIC energies, respectively, obtained with RQMD with rope formation turned off (and including particle rescattering). The effect of rope formation is negligible at AGS energies. The differences between these results and the default RQMD results (solid curves) are small. The small differences indicate that the effect of rope formation on the $`K/\pi `$ ratios is small once particle rescattering is considered. It is interesting to investigate the case with both particle rescattering and rope formation turned off, as this case corresponds to simple nucleon-nucleon collisions with only the complication of initial multiple scattering. The RQMD results for this simple case are shown as the dotted curves in Figs. 4 and 5 for SPS and RHIC energies, respectively. Comparing these results with the dashed curves (with rope and no rescattering) indicates that rope formation is able to make a significant difference in $`K^+/\pi ^+`$ at both SPS and RHIC energies if there is no subsequent rescattering. On the other hand, the effect on $`K^-/\pi ^-`$ is small. Generally speaking, rope formation is an early phenomenon in RQMD and particle rescattering takes place later. Rope formation increases $`K^+/\pi ^+`$; particle rescattering washes out these increases. Particle rescattering produces larger increases in $`K^+/\pi ^+`$ without rope formation (dot-dashed versus dotted curves) than with rope formation (solid versus dashed curves). In other words, particle rescattering and rope formation do not act on $`K^+/\pi ^+`$ additively.
These observations are understandable if one considers rope formation a form of “string/parton rescattering” at the early stages of heavy-ion collisions. The $`K^-/\pi ^-`$ ratio at SPS energy behaves differently from $`K^+/\pi ^+`$: rope formation seems not to have an effect, and the effect of particle rescattering is smaller than on $`K^+/\pi ^+`$. On the other hand, $`K^-/\pi ^-`$ at RHIC energy behaves similarly to $`K^+/\pi ^+`$. This is expected because the net baryon density is low at RHIC energy. If the net baryon density were zero, then $`K^+/\pi ^+`$ and $`K^-/\pi ^-`$ would be identical. When both particle rescattering and rope formation are turned off, as shown by the dotted lines in Figs. 4 and 5, the $`K/\pi `$ ratios are rather constant over $`N_p`$. Therefore, it is interesting to compare these results to $`p+p`$. The calculated $`p+p`$ results are indicated by the shaded areas in Fig. 4 for AGS and SPS energies and in Fig. 5 for RHIC energy. Note that $`K^+/\pi ^+`$ is slightly lower in the isospin-weighted nucleon-nucleon ($`N+N`$) interactions than in $`p+p`$, while $`K^-/\pi ^-`$ is slightly higher . As seen in the figures, the heavy-ion results with no rescattering and no rope are consistent with the $`p+p`$ values (with two exceptions: $`K^+/\pi ^+`$ at SPS and $`K^-/\pi ^-`$ at RHIC energy). The two exceptions are currently not understood. We remark that the $`p+p`$ results are consistent with extrapolations of the measured heavy-ion data shown in Fig. 2. We now come back to the comparison between data and the RQMD results at SPS energy. None of the four curves for SPS energy shown in Fig. 4 seems to agree completely with the data. We remark that the central-collision data favor particle rescattering. In the following two sections, we only consider the default RQMD, which includes both particle rescattering and rope formation.
### D $`K/\pi `$ excitation functions We study the excitation functions of $`K/\pi `$ in the most central Au+Au/Pb+Pb collisions ($`N_p>350`$) with RQMD, as a necessary theoretical baseline for the search for anomalous behavior. The total multiplicity ratios are shown as open circles in Fig. 6 for $`K^+/\pi ^+`$ (left panel) and $`K^-/\pi ^-`$ (right panel). The multiplicity ratios in the midrapidity region ($`1/3<y/y_{\mathrm{beam}}<2/3`$) are shown as open squares. The midrapidity ratios are larger than the total multiplicity ratios. Data from AGS E866 and SPS NA44 and NA49 are shown as the filled squares (total multiplicity ratios) and filled triangles (midrapidity ratios). In order to exclude the effect of decay contributions in the data, the AGS $`K^+/\pi ^+`$ data are scaled up by 5%, and the SPS $`K^+/\pi ^+`$ and $`K^-/\pi ^-`$ data are scaled up by 10%. The AGS data are well reproduced, and the SPS data are slightly overpredicted by RQMD. The overall agreement is good. For comparison, the results for $`p+p`$ interactions from Fig. 1 are replotted in Fig. 6. The $`K/\pi `$ ratios are significantly larger in heavy-ion collisions than in $`p+p`$ interactions at the same energy. Note that the isospin-weighted $`N+N`$ interactions may be better suited for comparison to heavy-ion collisions. However, comparing heavy-ion results to $`p+p`$ is more feasible because only $`p+p`$ data are experimentally available. The $`K^+/\pi ^+`$ ratio in central heavy-ion collisions increases at low energy, partially due to the rapid increase in kaon production near threshold in $`p+p`$ interactions . However, $`K^+/\pi ^+`$ saturates at high energies. Clearly, the continuous increase of $`K^+/\pi ^+`$ with $`\sqrt{s}`$ in $`p+p`$ is not seen in central heavy-ion collisions. This is a direct reflection of the amount of baryon stopping at different energies. The baryon midrapidity density decreases with $`\sqrt{s}`$ , which is well described by the model.
The ratio at midrapidity even decreases with $`\sqrt{s}`$ at high energies. The maximum of $`K^+/\pi ^+`$ appears at a beam energy of about 40 AGeV. This energy range is currently being studied at the SPS. As seen in Fig. 6 (right panel), $`K^-/\pi ^-`$ continuously increases with $`\sqrt{s}`$. The dependence is distinctly different from that of $`K^+/\pi ^+`$. The ratio is larger in heavy-ion collisions than in $`p+p`$ interactions at all corresponding energies, but follows the general trend seen in $`p+p`$. The increase in $`K^-/\pi ^-`$ from $`p+p`$ to heavy-ion collisions at the same energy is significantly smaller than that in $`K^+/\pi ^+`$. This statement becomes even stronger once one takes into account that the charge asymmetry results in a higher $`K^+/\pi ^+`$ and a lower $`K^-/\pi ^-`$ in $`p+p`$ than in the isospin-weighted $`N+N`$ interactions. At RHIC energy, RQMD predicts total multiplicity ratios of $`0.19`$ and $`0.15`$, and midrapidity multiplicity ratios of $`0.19`$ and $`0.17`$, for $`K^+/\pi ^+`$ and $`K^-/\pi ^-`$, respectively. The predictions provide a baseline for comparison to experimental data, which are expected soon. The distinct difference between the $`K^+/\pi ^+`$ and $`K^-/\pi ^-`$ excitation functions results from the different production mechanisms for $`K^+`$ and $`K^-`$, which are connected to the presence of the baryon density . The effect of the baryon density can be seen more clearly in the kaon multiplicity ratio ($`K^+/K^-`$) versus $`\sqrt{s}`$. Figure 7 shows the midrapidity ratio $`K^+/K^-`$ as a function of $`\sqrt{s}`$. The ratio decreases steadily with $`\sqrt{s}`$. For comparison, the E866 , NA44 , and NA49 data are also shown. The data are well reproduced by RQMD. ### E $`K/\pi `$ enhancement Now let us turn to the enhancement in the $`K/\pi `$ ratios.
We define the enhancement factors as the ratios of $`K^+/\pi ^+`$ and $`K^-/\pi ^-`$ in central heavy-ion collisions over those in $`p+p`$ interactions at the corresponding energy. Note that this is not a perfect definition. One may argue that a division by the isospin-weighted $`N+N`$ results is a better definition, but it is less practical experimentally. Figure 8 shows the results for the enhancement in $`K^+/\pi ^+`$ (left panel) and $`K^-/\pi ^-`$ (right panel). The enhancement is larger at low energy for both $`K^+/\pi ^+`$ and $`K^-/\pi ^-`$. This is partially due to the effect of the kaon production threshold. As discussed before, meson-baryon interactions involving resonances are the primary processes for the $`K/\pi `$ enhancement at AGS energy. At low energies near threshold, particle rescattering is most prominent. Consider the extreme case in which the beam energy is below the threshold: no kaons can be produced in $`p+p`$ interactions; however, kaons can still be produced in heavy-ion collisions by invoking multi-step processes in which more than two particles cooperate to overcome the production threshold. In this extreme case, the enhancement is infinite. The enhancement in $`K^+/\pi ^+`$ drops more quickly at lower $`\sqrt{s}`$. Over the whole range of $`\sqrt{s}`$ studied between AGS and SPS energies, the enhancement factor decreases smoothly with $`\sqrt{s}`$. The quick drop at low $`\sqrt{s}`$ is partially due to the threshold effect mentioned above. The shallow tail of the enhancement factor at large $`\sqrt{s}`$ results from the flattening of $`K^+/\pi ^+`$ at high $`\sqrt{s}`$ in heavy-ion collisions and the continuous increase of the ratio with $`\sqrt{s}`$ in $`p+p`$ interactions. As shown in Fig. 8 (right panel), the enhancement in $`K^-/\pi ^-`$ drops very quickly at low $`\sqrt{s}`$. This behavior is consistent with the high $`K^-`$ production threshold. However, at large $`\sqrt{s}`$ the enhancement factor is almost constant.
Since the enhancement factors of both $`K^+/\pi ^+`$ and $`K^-/\pi ^-`$ have a weak dependence on $`\sqrt{s}`$ at high energies, it is possible, without RQMD calculations at RHIC energy, to predict the enhancement factors at RHIC energy by extrapolation. In order to do so, we parametrize the enhancement factor as a function of $`\sqrt{s}`$. We choose to parametrize the $`K^+/\pi ^+`$ enhancement factor by the functional form $`A+B/\sqrt{s}`$. Fits to the enhancement factors shown in the left panel of Fig. 8 (of course excluding that at RHIC energy) yield $`1.8+11.2\mathrm{GeV}/\sqrt{s}`$ and $`1.8+16.8\mathrm{GeV}/\sqrt{s}`$, respectively, for the enhancement in the overall $`K^+/\pi ^+`$ and the midrapidity $`K^+/\pi ^+`$. The $`\chi ^2/N_{DF}`$ of the fits is on the order of 1.5. The fit results are superimposed in Fig. 8 (left panel) as the dashed and dotted curves. The two curves essentially overlap at RHIC energy, and slightly miss the calculated results. Because of the sharp drop of the $`K^-/\pi ^-`$ enhancement factor at low energy, we choose to parametrize the $`K^-/\pi ^-`$ enhancement factor by the functional form $`A+B/(\sqrt{s}-2.87\mathrm{GeV})^2`$, where 2.87 GeV is the kaon pair production threshold in $`NN\to NNK^+K^-`$. Fits to the enhancement factor in the right panel of Fig. 8 (excluding that at RHIC energy) yield $`1.4+6.4\mathrm{GeV}/(\sqrt{s}-2.87\mathrm{GeV})^2`$ and $`1.7+9.1\mathrm{GeV}/(\sqrt{s}-2.87\mathrm{GeV})^2`$, respectively, for the enhancement in the overall $`K^-/\pi ^-`$ and the midrapidity $`K^-/\pi ^-`$. The $`\chi ^2/N_{DF}`$ of these fits is on the order of 2.5. The large $`\chi ^2/N_{DF}`$ is mainly due to the lowest-$`\sqrt{s}`$ point. If the lowest-$`\sqrt{s}`$ point is excluded from the fits, $`\chi ^2/N_{DF}`$ becomes about 1.5 without essential change in the fit results. The fit results are superimposed in Fig. 8 (right panel) as the dashed and dotted curves.
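Evaluating the quoted overall-ratio fits at RHIC energy makes clear why the extrapolation is stable: both energy-dependent terms are tiny at $`\sqrt{s}=200`$ GeV. A minimal sketch, using the fitted coefficients from the text (units taken as quoted):

```python
# Extrapolating the fitted enhancement-factor parametrizations to RHIC.
# Coefficients are the overall-ratio fit values quoted in the text.
def enh_kplus(sqrt_s):
    return 1.8 + 11.2 / sqrt_s                  # A + B/sqrt(s)

def enh_kminus(sqrt_s):
    return 1.4 + 6.4 / (sqrt_s - 2.87) ** 2     # A + B/(sqrt(s) - 2.87 GeV)^2

e_plus_rhic = enh_kplus(200.0)    # ~1.86: dominated by the constant A
e_minus_rhic = enh_kminus(200.0)  # ~1.40: the threshold term is negligible
```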
These fitted curves satisfactorily describe the enhancement factor at RHIC energy as directly calculated by RQMD. ## V Conclusions We have reported a systematic study of the kaon to pion multiplicity ratios, $`K^+/\pi ^+`$ and $`K^-/\pi ^-`$, in heavy-ion collisions as a function of the bombarding energy from AGS to RHIC energy, using the RQMD model. We have demonstrated that the kaon and pion multiplicity data in $`p+p`$ interactions are well reproduced by RQMD, validating the comparison between data and RQMD for heavy-ion collisions. The RQMD-calculated ratios in heavy-ion collisions are higher than those in $`p+p`$ interactions at the same energy, and increase from peripheral to central collisions. By comparing the results to those calculated by RQMD with particle rescattering (meson-induced interactions) turned off, we conclude that the $`K/\pi `$ enhancement in central collisions with respect to peripheral collisions and $`p+p`$ interactions is mainly due to meson-induced interactions, especially at the low AGS energy. We further conclude that rope formation does not change the $`K/\pi `$ ratios significantly once particle rescattering is considered. It is found that the $`K/\pi `$ enhancement in central heavy-ion collisions over $`p+p`$ interactions is larger at AGS than at SPS energy, and decreases smoothly with bombarding energy. This behavior is consistent with the combination of the threshold effect of kaon production in $`p+p`$ interactions and the dropping baryon density in heavy-ion collisions with increasing bombarding energy. The RQMD model reasonably describes the available $`K^+/\pi ^+`$ and $`K^-/\pi ^-`$ data in heavy-ion collisions. The observed $`K/\pi `$ enhancement at the AGS and SPS can be understood in the RQMD model with hadronic rescattering and string degrees of freedom. The RQMD $`K/\pi `$ results agree better with the experimental data at midrapidity in the most central collisions.
This is not surprising, since this is the case in which equilibrium is most likely to be reached; there, the details of the description in any model are less relevant. The total multiplicity ratios at RHIC energy are predicted by RQMD to be $`K^+/\pi ^+=0.19`$ and $`K^-/\pi ^-=0.15`$. The midrapidity ratios at RHIC energy are predicted by RQMD to be $`K^+/\pi ^+=0.19`$ and $`K^-/\pi ^-=0.17`$. ## Acknowledgments We are grateful to Dr. G. Odyniec, Dr. G. Rai, and Dr. H.G. Ritter for valuable discussions. This work was supported by the U. S. Department of Energy under Contracts No. DE-AC03-76SF00098, DE-FG02-88ER40388, and DE-FG02-89ER40531, and used resources of the National Energy Research Scientific Computing Center.
# Microscopic Deterministic Dynamics and Persistence Exponent ## Abstract Numerically we solve the microscopic deterministic equations of motion with random initial states for the two-dimensional $`\varphi ^4`$ theory. Scaling behavior of the persistence probability at criticality is systematically investigated and the persistence exponent is estimated. Recently the persistence exponent has attracted much attention. This exponent was first introduced in the context of non-equilibrium coarsening dynamics at zero temperature . It characterizes the power-law decay of the persistence probability that a local order parameter keeps its sign during a time $`t`$ after a quench from a very high temperature to zero temperature. For critical dynamics, the local order parameter (usually, a spin) flips rapidly and the persistence probability does not obey a power law. In this case, however, the persistence exponent $`\theta _p`$ can be defined by the power-law decay of the global persistence probability $`p(t)`$ that the global order parameter has not changed sign within a time $`t`$ after the quench from a very high temperature to the critical temperature , $$p(t)\sim t^{-\theta _p}.$$ (1) An interesting property of the persistence exponent is that its value is highly non-trivial even for simple systems. For the quench to zero temperature, for example, $`\theta _p`$ is apparently not a simple fraction for the simple diffusion equation and the Potts model in one dimension . For the quench to the critical temperature, it is shown that the persistence exponent is generally a new independent critical exponent, i.e. it cannot be expressed in terms of the known static exponents, the dynamic exponent $`z`$ and the recently discovered exponent $`\theta `$ . This relies on the fact that the time evolution of the global magnetization is not a Markovian process.
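Operationally, $`p(t)`$ is the survival function of the first sign-change time of the global order parameter over an ensemble of independent runs; a minimal estimator (the first-flip times below are invented, not simulation data):

```python
# Global persistence probability estimated from an ensemble of runs:
# the fraction whose order parameter has not changed sign up to time t.
# Toy first-flip times, for illustration only.
def persistence_probability(first_flip_times, t):
    n = len(first_flip_times)
    return sum(1 for tf in first_flip_times if tf > t) / n

flips = [5.0, 40.0, 120.0, 800.0, 1500.0]
p_100 = persistence_probability(flips, 100.0)   # 3 of 5 runs survive: 0.6
```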
Recent Monte Carlo simulations for the Ising and Potts models at criticality support the power-law decay of the global persistence probability and also detect the non-Markovian effect . Up to now, the persistence exponent has been studied only in stochastic dynamic systems, described typically by Langevin equations or Monte Carlo algorithms. From a fundamental point of view, both the equilibrium and non-equilibrium properties of statistical systems can be described by the microscopic deterministic equations of motion (e.g. Newton, Hamiltonian and Heisenberg equations), even though a general proof does not exist. (Langevin equations at zero temperature are also deterministic, but they are at the mesoscopic level and generally different from the microscopic deterministic equations of motion.) With the recent development of computers, it has gradually become possible to solve the microscopic deterministic equations of motion numerically. For example, the $`O(N)`$ vector model and the $`XY`$ model have been investigated . The results confirm that the deterministic equations describe second-order phase transitions correctly. The static critical exponents have been estimated and agree with existing values. More interestingly, in a recent paper the short-time dynamic behavior of the deterministic dynamics starting from random initial states was studied . Short-time dynamic scaling was found, and the estimated value of the dynamic exponent $`z`$ is the same as that of the Monte Carlo dynamics of the Ising model. The purpose of this letter is to study the critical scaling behavior of the global persistence probability and to measure the persistence exponent in microscopic deterministic dynamic systems, taking the two-dimensional $`\varphi ^4`$ theory as an example.
The Hamiltonian of the two-dimensional $`\varphi ^4`$ theory on a square lattice is written as $$H=\sum _i\left[\frac{1}{2}\pi _i^2+\frac{1}{2}\sum _\mu (\varphi _{i+\mu }-\varphi _i)^2-\frac{1}{2}m^2\varphi _i^2+\frac{1}{4!}\lambda \varphi _i^4\right]$$ (2) with $`\pi _i=\dot{\varphi }_i`$, and it leads to the equations of motion $$\ddot{\varphi }_i=\sum _\mu (\varphi _{i+\mu }+\varphi _{i-\mu }-2\varphi _i)+m^2\varphi _i-\frac{1}{3!}\lambda \varphi _i^3.$$ (3) Energy is conserved during the dynamic evolution governed by Eq. (3). As discussed in Refs. , a microcanonical ensemble is assumed to be generated by the solutions. In this case, the temperature cannot be introduced externally as in a canonical ensemble, but can only be defined internally as the averaged kinetic energy. In the dynamic approach, the total energy is actually an even more convenient controlling parameter of the system, since it is conserved and can be set through the initial state. From the viewpoint of ergodicity, to achieve a correct equilibrium state the microscopic deterministic dynamic system should start from a random initial state. Interestingly, this is similar to the dynamic relaxation in stochastic dynamics after a quench from a very high temperature. Therefore, similar dynamic behavior may be expected for both dynamic systems. The order parameter of the $`\varphi ^4`$ theory is the magnetization $`M(t)=\sum _i\varphi _i(t)/L^2`$, with $`L`$ being the lattice size. In this paper, we are interested in the global persistence probability $`p(t)`$ at the critical point, defined as the probability that the (not averaged) order parameter has not changed sign within a time $`t`$, starting from a random state with small initial magnetization $`m_0`$. Following Ref. , we take the parameters $`m^2=2`$ and $`\lambda =0.6`$ and prepare the initial configurations as follows. For simplicity, we set the initial kinetic energy to zero, i.e. $`\dot{\varphi }_i(0)=0`$.
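A minimal numerical sketch of how Eq. (3) can be integrated on a periodic $`L\times L`$ lattice with a central-difference scheme, using the parameter values $`m^2=2`$, $`\lambda =0.6`$ and a step $`\mathrm{\Delta }t=0.05`$ (an illustration under these assumptions, not the authors' code):

```python
import numpy as np

def step(phi, phi_prev, m2=2.0, lam=0.6, dt=0.05):
    """One central-difference update of Eq. (3) with periodic boundaries."""
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)
    force = lap + m2 * phi - lam * phi**3 / 6.0   # right-hand side of Eq. (3)
    return 2.0 * phi - phi_prev + dt**2 * force   # phi at time t + dt

# random-sign initial field of fixed (illustrative) magnitude,
# zero initial velocity approximated by taking phi(-dt) = phi(0)
rng = np.random.default_rng(0)
phi0 = 0.1 * rng.choice([-1.0, 1.0], size=(16, 16))
phi1 = step(phi0, phi0)
```

A run would iterate `step`, tracking the magnetization `phi.mean()` until it changes sign.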
We fix the magnitude of the initial field to a constant $`c`$, $`|\varphi _i(0)|=c`$, then randomly assign the sign of $`\varphi _i(0)`$ with the restriction of a fixed magnetization in units of $`c`$, and finally the constant $`c`$ is determined by the given energy. To solve the equations of motion (3) numerically, we simply discretize $`\ddot{\varphi }_i`$ by $`(\varphi _i(t+\mathrm{\Delta }t)+\varphi _i(t-\mathrm{\Delta }t)-2\varphi _i(t))/(\mathrm{\Delta }t)^2`$. According to the experience in Ref. , $`\mathrm{\Delta }t`$ is taken to be $`0.05`$. After an initial configuration is prepared, we update the equations of motion until the magnetization changes its sign. The maximum observation time is $`t=1000`$. We then repeat the procedure with other initial configurations and measure the persistence probability $`p(t)`$. In our calculations, we use fairly large lattices, $`L=128`$ and $`256`$, and the number of samples of initial configurations ranges from $`\mathrm{3\; 500}`$ to $`\mathrm{30\; 000}`$, depending on the initial magnetization $`m_0`$ and the lattice size: the smaller $`m_0`$ and $`L`$ are, the more samples of initial configurations we use. Errors are estimated simply by dividing the total samples into two or three subsamples. Compared with Monte Carlo simulations, our calculations here are much more time consuming due to the small $`\mathrm{\Delta }t`$. According to analytical analyses and Monte Carlo simulations in stochastic dynamic systems, at the critical point and in the limit $`m_0=0`$, $`p(t)`$ should decay by a power law as in Eq. (1). Our first effort is to investigate whether $`p(t)`$ obeys this power law in microscopic deterministic dynamics as well, and to measure the persistence exponent $`\theta _p`$. Here we adopt the critical energy density $`ϵ_c=21.1`$ from the literature . In Fig. 1, the persistence probability $`p(t)`$ is displayed on a log-log scale for lattice sizes $`L=256`$ and $`128`$ with solid lines and a dashed line, respectively.
For $`L=256`$, simulations have been performed with two values of the initial magnetization, $`m_0=0.003`$ and $`0.0015`$. These straight lines convince us of the power-law behavior of $`p(t)`$. Skipping data within a microscopic time scale $`t\lesssim 100`$, from the slopes of the curves one estimates the persistence exponent $`\theta _p=0.252(6)`$ for both values of $`m_0`$. This shows that there is no remaining effect of the finite $`m_0`$. The curve for $`L=128`$ and $`m_0=0.0015`$ is roughly parallel to that for $`L=256`$. One measures the slope $`\theta _p=0.251(1)`$ in the time interval $`[100,500]`$ but $`\theta _p=0.232(10)`$ in $`[100,1000]`$. This indicates that some finite-size effect still exists for $`L=128`$ after $`t=500`$ but is negligibly small for $`L=256`$. If the time evolution of the magnetization were a Markovian process, from a theoretical point of view the persistence exponent would not be an independent critical exponent; it would take the value $`\alpha _p`$, which relates to the other exponents through $$\alpha _p=-\theta +(d/2-\beta /\nu )/z.$$ (4) In Table I, the values of the exponents $`\theta _p`$, $`z`$, $`\theta `$ and $`\alpha _p`$ for the $`\varphi ^4`$ theory are given in comparison with those of the kinetic Ising model induced by local Monte Carlo algorithms. As in the case of the Ising model, the exponents $`\theta _p`$ and $`\alpha _p`$ for the $`\varphi ^4`$ theory also differ by about $`10`$ percent. This represents a rather visible non-Markovian effect in the time evolution of the magnetization. For equilibrium states, it is generally believed that the $`\varphi ^4`$ theory and the Ising model are in the same universality class. Results from numerical solutions of the deterministic equations also support this .
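Plugging commonly quoted two-dimensional values into the Markovian relation makes the roughly 10-percent gap explicit; the inputs below ($`\theta \approx 0.191`$, $`z\approx 2.155`$, $`\beta /\nu =1/8`$, a measured $`\theta _p\approx 0.237`$ for the kinetic Ising model) are representative literature numbers used for illustration, not the exact Table I entries:

```python
# Markovian prediction alpha_p = -theta + (d/2 - beta/nu)/z, evaluated
# with representative 2-d values (illustrative inputs, not Table I).
theta, z, beta_over_nu, d = 0.191, 2.155, 0.125, 2
alpha_p = -theta + (d / 2 - beta_over_nu) / z   # ~0.215
gap = abs(0.237 - alpha_p) / 0.237              # ~10% below theta_p ~ 0.237
```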
From the short-time dynamic approach , within statistical errors the dynamic exponent $`z`$ for the microscopic deterministic dynamics of the $`\varphi ^4`$ theory is the same as that of the kinetic Ising model with Monte Carlo algorithms, but the exponent $`\theta `$ differs by several percent. In Table I, we see that $`\theta _p`$ for the $`\varphi ^4`$ theory is also several percent bigger than that of the Ising model. Nevertheless, we still believe that the $`\varphi ^4`$ theory and the Ising model are very probably in the same persistence universality class. These few-percent differences of the exponents probably come from the fact that the critical point $`ϵ_c`$ is not known very accurately, or from corrections to scaling (in deterministic dynamics, energy is conserved and couples to the order parameter; therefore, the deterministic dynamics is believed to belong to model C rather than model A, while standard local Monte Carlo dynamics of the Ising model is model A dynamics; in two dimensions, models A and C are the same, possibly up to a logarithmic correction ), and from uncontrolled systematic errors. Actually, we will see below that the critical energy density could be somewhat lower than $`ϵ_c=21.1`$, and this would yield a slightly smaller $`\theta _p`$. Our second step is to investigate the scaling behavior of the persistence probability in the neighborhood of the critical energy density. From general physical considerations, one may expect the scaling form $$p(t,\tau )=t^{-\theta _p}F(t^{1/\nu z}\tau ).$$ (5) Here $`\tau =(ϵ-ϵ_c)/ϵ_c`$ is the reduced energy density. When $`\tau =0`$, the power-law behavior in Eq. (1) is recovered. When $`\tau `$ differs from zero, the power law will be modified by the scaling function $`F(t^{1/\nu z}\tau )`$. In principle, this fact may be used for the determination of the critical energy density. In Fig.
2, $`p(t,\tau )`$ for $`L=256`$ and $`m_0=0.003`$ is plotted for three different energy densities, $`ϵ=20.9`$, $`21.1`$ and $`21.3`$. In the figure, we see that the solid line shows the best power-law behavior among the three curves. However, a very accurate estimate of $`ϵ_c`$ cannot be achieved so easily, since $`p(t,\tau )`$ is not very sensitive to the energy. According to our data, we estimate $`ϵ_c=21.06(12)`$. Within errors, it is consistent with $`ϵ_c=21.1`$ given in and $`ϵ_c=21.11(3)`$ in . We should point out that the exponent $`\theta _p`$ becomes $`0.245`$ if it is measured at $`ϵ_c=21.06`$. This is closer to that of the kinetic Ising model, as discussed above. In order to gain more understanding of the scaling form (5), we differentiate with respect to the energy density on both sides of the equation and obtain $$\partial _\tau \mathrm{ln}p(t,\tau )|_{\tau =0}\sim t^{1/\nu z}.$$ (6) Using the data of Fig. 2, we can approximately calculate $`\partial _\tau \mathrm{ln}p(t,\tau )|_{\tau =0}`$, and the result is displayed in Fig. 3. Even though there are some fluctuations, power-law behavior is still seen. The best-fitted slope of the curve gives $`1/\nu z=0.47(4)`$ in the time interval $`[100,1000]`$. Taking $`z=2.15(2)`$ as input, one obtains $`\nu =0.99(8)`$. Compared with $`\nu =1`$ for the Ising model, this result supports that the $`\varphi ^4`$ theory with deterministic dynamics and the Ising model are in the same universality class. Finally, we study the scaling behavior of the persistence probability in the case that the initial magnetization is not so small and its effect cannot be neglected. Following Ref. , we assume the finite-size scaling form $$p(t,L,m_0)=t^{-\theta _p}F(t^{1/z}L^{-1},t^{x_{0p}/z}m_0).$$ (7) Here the energy density has been set to its critical value, and $`x_{0p}`$ is the scaling dimension of the initial magnetization $`m_0`$. It was discovered that for the Ising model with Monte Carlo dynamics, the value $`x_{0p}=1.01(1)`$ is ’anomalous’, i.e.
it is different from the scaling dimension of the initial magnetization $`x_0=0.536(2)`$ measured from the time evolution of the magnetization or the auto-correlation . The origin is presumably that $`p(t,L,m_0)`$ is an observable that is non-local in time $`t`$: it remembers the history of the time evolution. To verify the scaling form (7) and estimate $`x_{0p}`$, we perform a simulation with the lattice size $`L_1=256`$ and initial magnetization $`m_{01}=0.0151`$. Supposing the scaling form (7) holds, one can find an initial magnetization $`m_{02}`$ for the lattice size $`L_2=128`$ such that the curves of $`p(t,L,m_0)`$ for both lattice sizes collapse. In practice, we performed simulations for $`L_2=128`$ with two initial magnetizations, $`m_0=0.0272`$ and $`0.0349`$. By linear extrapolation, we obtain data with $`m_0`$ between $`0.0272`$ and $`0.0349`$. Searching for the curve best fitted to the curve for $`L_1=256`$, we determine $`m_{02}`$. In Fig. 4, such a scaling plot is displayed. The lower and upper solid lines are the persistence probability for $`L_2=128`$ with $`m_0=0.0272`$ and $`0.0349`$ respectively, while the dashed line is the properly rescaled one for $`L_1=256`$ with $`m_{01}=0.0151`$. The solid line fitted to the dashed line represents the persistence probability for $`L_2=128`$ with $`m_{02}=0.0313(3)`$. Since the microscopic time scale is $`t_{mic}\sim 100`$, nice collapse of the two curves can be observed only after $`t\sim 100`$. From the scaling form (7), $`m_{02}=2^{x_{0p}}m_{01}`$ and one estimates $`x_{0p}=1.05(1)`$. This value is very close to $`x_{0p}=1.01(1)`$ for the kinetic Ising model. In conclusion, we have numerically solved the microscopic deterministic equations of motion with random initial states for the two-dimensional $`\varphi ^4`$ theory and systematically investigated the critical scaling behavior of the persistence probability.
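As a consistency check, two of the numerical steps above can be reproduced directly: the collapse condition $`m_{02}=2^{x_{0p}}m_{01}`$ can be inverted for $`x_{0p}`$ using the quoted magnetizations, and the persistence exponent is the (negative) slope of a log-log fit. A minimal Python sketch; the power-law data in the second part are synthetic, generated only to illustrate the fitting procedure:

```python
import math

# Scaling relation from Eq. (7): collapsing L1 = 256 onto L2 = L1/2 = 128
# requires m02 = 2**x0p * m01; invert for x0p using the measured values.
m01, m02 = 0.0151, 0.0313
x0p = math.log(m02 / m01) / math.log(2)
print(f"x0p = {x0p:.2f}")   # close to the quoted 1.05(1)

# Illustration of the power-law fit for theta_p on synthetic data:
# p(t) ~ t**(-theta_p), so a least-squares line in log-log coordinates
# recovers the exponent from its slope.
theta_p_true = 0.245
ts = [10.0 * 1.5**k for k in range(12)]
ps = [t**(-theta_p_true) for t in ts]
xs = [math.log(t) for t in ts]
ys = [math.log(p) for p in ps]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar)**2 for x in xs)
theta_p_fit = -slope
print(f"theta_p = {theta_p_fit:.3f}")
```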
As summarized in Table I, the estimated exponents $`\theta _p`$ and $`x_{0p}`$ are very close to those of the kinetic Ising model realized with local Monte Carlo algorithms. The dynamic and static behavior of the deterministic dynamics starting from more general initial states remains an interesting topic for future work. Acknowledgements: Work supported in part by the Deutsche Forschungsgemeinschaft under the project TR 300/3-1.
no-problem/9909/cond-mat9909123.html
ar5iv
text
# Non-equilibrium dynamics in an interacting nanoparticle system ## I Introduction The dynamics of interacting nanoparticle systems has been the subject of considerable interest concerning the existence of a low temperature spin glass phase. Evidence for a phase transition from a high temperature superparamagnetic phase to a low temperature spin glass like phase has been provided by reports of critical slowing down and a divergent behavior of the nonlinear susceptibility. Spin glasses have been intensively investigated during the last two decades, not only concerning the phase transition but also the nature of the spin glass phase. The non-equilibrium dynamics in the spin glass phase has been extensively studied by conventional and also more sophisticated dc-magnetic relaxation and low frequency ac-susceptibility experiments. A non-equilibrium character of the low temperature spin glass like phase of interacting particle systems has been revealed by measurements of dc-magnetic relaxation and relaxation of the low frequency ac-susceptibility. In this paper, the non-equilibrium dynamics in the spin glass like phase is further elucidated, with special attention paid to a dynamic memory effect which has recently been observed in spin glasses. In a “memory experiment” the ac-susceptibility is measured at a low frequency, and the cooling of the spin glass sample is halted at one (or more) temperatures. During a halt, $`\chi `$ slowly decays, but when cooling is resumed $`\chi `$ gradually regains the amplitude of a continuous cooling experiment. Upon the following heating, $`\chi `$ shows dips at the temperatures where the temporary halts occurred; a memory of the cooling history has been imprinted in the spin structure. (For an illustration see Fig. 1 in Ref. .) A similar memory of the cooling process has recently been reported in an interacting nano-particle system.
## II Theoretical background For spin glasses, the nonequilibrium dynamics has been interpreted within a phenomenological real space model, adopting important concepts from the droplet model. We will use the same real space model to discuss similarities and dissimilarities between the low temperature phase of interacting particle systems and that of spin glasses. The droplet model was derived for a short-range Ising spin glass, but important concepts like domain growth, chaos with temperature, and the overlap length should be applicable also to particle systems exhibiting strong dipole-dipole interaction and random orientations of the anisotropy axes. Chaos with temperature means that a small temperature shift changes the equilibrium configuration of the magnetic moments completely on sufficiently long length scales. The length scale, up to which no essential change in the configuration of the equilibrium state is observed after a temperature step $`\mathrm{\Delta }T`$, is called the overlap length $`l(\mathrm{\Delta }T)`$. The development towards equilibrium is governed by the growth of equilibrium domains. The typical domain size after a time $`t_\mathrm{w}`$ at a constant temperature $`T`$ is $$R(T,t_\mathrm{w})\sim \left(\frac{T\mathrm{ln}(t_\mathrm{w}/\tau )}{\mathrm{\Delta }(T)}\right)^{1/\psi },$$ (1) where $`\tau `$ is the relaxation time of an individual magnetic moment, $`\mathrm{\Delta }(T)`$ sets the free energy scale and $`\psi `$ is a barrier exponent. For spin glasses, the atomic relaxation time is of the order of $`10^{-13}`$ s independent of temperature, while for magnetic particles, the individual particle relaxation time is given by an Arrhenius law as $$\tau =\tau _0\mathrm{exp}\left(\frac{KV}{k_BT}\right),$$ (2) where $`\tau _0\sim 10^{-12}`$–$`10^{-9}`$ s, $`K`$ is an anisotropy constant, $`V`$ the particle volume and thus $`KV`$ the anisotropy energy barrier.
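The exponential factor in Eq. (2) turns a modest spread of anisotropy barriers into an enormous spread of relaxation times, a point made quantitative in the following minimal sketch; the values of $`\tau _0`$, $`T`$ and the reduced barrier $`KV/k_BT`$ are illustrative assumptions, not parameters of any sample:

```python
import math

# Sketch: how a modest spread in anisotropy barriers KV becomes a huge
# spread in Arrhenius relaxation times tau = tau0 * exp(KV / (kB*T)).
# tau0 and the barrier scale are illustrative values only.
tau0 = 1e-10          # s, within the quoted 1e-12 -- 1e-9 s window

def tau(barrier_over_kT):
    """Relaxation time for a given reduced barrier KV / (kB*T)."""
    return tau0 * math.exp(barrier_over_kT)

# A +/-20% spread in particle volume (hence in KV) around a reduced
# barrier of 25 ...
barriers = [25.0 * f for f in (0.8, 1.0, 1.2)]
taus = [tau(b) for b in barriers]

# ... spans a factor exp(10) ~ 2e4 in relaxation time:
print(taus[2] / taus[0])   # ~ e**10
```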
Due to the inevitable polydispersity of particle systems the individual particle relaxation time will also be distributed, and due to the exponential factor this distribution will be broader than the distribution of anisotropy energy barriers. In the droplet model there exist only two degenerate equilibrium spin configurations, $`\mathrm{\Psi }`$ and its spin reversal counterpart $`\stackrel{~}{\mathrm{\Psi }}`$. It is thus possible to map all spins to either of the two equilibrium configurations. Let us now consider a spin glass quenched to a temperature $`T_1`$ lower than the spin glass transition temperature $`T_g`$. At $`t=0`$ the spins will randomly belong to either of the two equilibrium states, resulting in fractal domains of many sizes. The subsequent equilibration process at $`t>0`$ is governed by droplet excitations yielding domain growth to a typical length scale, which depends on temperature and wait time $`t_{\mathrm{w}_1}`$ according to Eq. (1). After this time, fractal structures typically smaller than $`R(T_1,t_{\mathrm{w}_1})`$ have become equilibrated while structures on longer length scales persist. If the system thereafter is quenched to a lower temperature $`T_2`$, the spins can be mapped to the equilibrium configuration at $`T_2`$, yielding a new fractal domain structure. The domain structure achieved at temperature $`T_1`$ still fits the equilibrium configuration at $`T_2`$ on length scales smaller than the overlap length $`l(T_1-T_2)`$. The domain growth at $`T_2`$ will start at the overlap length, increase with wait time $`t_{\mathrm{w}_2}`$, and end up at the size $`R(T_2,t_{\mathrm{w}_2})`$. Heating the system back to $`T_1`$, the fractal domain growth that occurred at $`T_2`$ has only introduced a new and dispersed domain structure at $`T_1`$ on length scales $`l(T_1-T_2)\lesssim R\lesssim R(T_2,t_{\mathrm{w}_2})`$. Note that the large length scale structures, $`R\gtrsim R(T_1,t_{\mathrm{w}_1})`$, essentially persist.
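The slow, logarithmic domain growth that underlies this picture can be sketched directly from Eq. (1); the parameter values below ($`\psi `$, $`\mathrm{\Delta }`$, $`\tau `$) are illustrative assumptions only:

```python
import math

# Sketch of the droplet-model growth law, Eq. (1):
#   R(T, tw) ~ (T * ln(tw / tau) / Delta(T))**(1/psi),
# illustrating how slowly the domain size creeps up with wait time.
psi = 1.0            # barrier exponent (illustrative)
Delta = 30.0         # free-energy scale, in units of temperature
tau = 1e-6           # effective single-moment relaxation time, s

def R(T, tw):
    """Typical domain size (arbitrary units) after wait time tw at T."""
    return (T * math.log(tw / tau) / Delta) ** (1.0 / psi)

# Increasing the wait time by a factor 10 barely grows the domains:
for tw in (3e2, 3e3, 3e4):
    print(f"tw = {tw:8.0f} s  ->  R = {R(30.0, tw):.2f}")
```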
The time-dependent response $`m(t_{\mathrm{obs}})`$ to a weak magnetic field applied at $`t_{\mathrm{obs}}=0`$ is due to a continuous magnetization process, governed by polarization of droplets of size $$L(T,t_{\mathrm{obs}})\sim \left(\frac{T\mathrm{ln}(t_{\mathrm{obs}}/\tau )}{\mathrm{\Delta }(T)}\right)^{1/\psi }.$$ (3) Since $`L(T,t_{\mathrm{obs}})`$ grows with the same logarithmic rate as $`R(T,t_\mathrm{w})`$, the relevant droplet excitations and the actual domain sizes become comparably large at time scales $`\mathrm{ln}t_{\mathrm{obs}}\approx \mathrm{ln}t_\mathrm{w}`$. For $`\mathrm{ln}t_{\mathrm{obs}}\ll \mathrm{ln}t_\mathrm{w}`$ the relevant excitations occur mainly within equilibrated regions, while for $`\mathrm{ln}t_{\mathrm{obs}}\gg \mathrm{ln}t_\mathrm{w}`$ these excitations occur on length scales of the order of the growing domain size, and involve domain walls, yielding a nonequilibrium response. A crossover from equilibrium to nonequilibrium dynamics occurs for $`\mathrm{ln}t_{\mathrm{obs}}\approx \mathrm{ln}t_\mathrm{w}`$, seen as a maximum (or only a bump) in the relaxation rate $`S(t)=h^{-1}\partial m(t)/\partial \mathrm{ln}t`$ vs. $`\mathrm{ln}t`$ curves. Dc-relaxation and ac-susceptibility experiments are related through the relations $`M(t)\propto \chi ^{\prime }(\omega )`$ and $`S(t)\propto \chi ^{\prime \prime }(\omega )`$, with $`t=1/\omega `$. Hence, the aging behavior is also observed in low frequency ac-susceptibility measurements. Different from dc-relaxation measurements is that the observation time is constant, $`t_{\mathrm{obs}}=1/\omega `$, implying that the probing length scale $`L(T,1/\omega )`$ is fixed for a given temperature. ## III Experimental The sample consisted of ferromagnetic particles of the amorphous alloy Fe<sub>1-x</sub>C<sub>x</sub> ($`x\approx 0.22`$) prepared by thermal decomposition of Fe(CO)<sub>5</sub> in an organic liquid (decalin) in the presence of surfactant molecules (oleic acid), as described in Ref. .
After the preparation, the carrier liquid was evaporated and the particles were transferred to xylene in an oxygen-free environment, resulting in a ferrofluid with a particle concentration of 5 vol% (determined by atomic absorption spectroscopy). The particles were separated by the surfactant coating and could only interact via magnetic dipole-dipole interactions. During measurements, the ferrofluid was contained in a small sapphire cup sealed with epoxy glue to prevent oxidation of the particles. The sample was only measured and exposed to magnetic fields well below the melting point of xylene ($`\approx `$ 248 K). A droplet of the ferrofluid was dripped onto a grid for transmission electron microscopy and was left to oxidize prior to the study. The electron micrographs revealed a nearly spherical particle shape. The volume-weighted size distribution (after correction for the change in density due to oxidation) was well described by the log-normal distribution, $`f(V)dV=(\sqrt{2\pi }\sigma _VV)^{-1}\mathrm{exp}[-\mathrm{ln}^2(V/V_m)/(2\sigma _V^2)]dV`$, with the median volume $`V_m=8.6\times 10^{-26}`$ m<sup>3</sup> and logarithmic standard deviation $`\sigma _V=0.19`$. These parameters correspond to a log-normal volume-weighted distribution of particle diameters with the median value $`d_m=(6V_m/\pi )^{1/3}=5.5`$ nm and $`\sigma _d=\sigma _V/3=0.062`$. This particle size is slightly larger and the value of $`\sigma _V`$ slightly lower than those reported in recent studies performed on a different batch of particles. The 5 vol% sample has been the subject of a detailed study of the correlations and magnetic dynamics using a wide range of observation times and temperatures. Collective behavior is observed at temperatures below 40 K. A non-commercial low-field SQUID magnetometer was used for the measurements. All ac-susceptibility measurements were performed at a frequency of 510 mHz and an rms value of the ac-field of 0.1 Oe. The cooling and heating rate was 0.25 K/min.
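The conversion from the fitted volume distribution to the diameter parameters quoted above can be checked in a few lines, since $`d\propto V^{1/3}`$ for a log-normal distribution:

```python
import math

# Check of the size-distribution numbers: the median volume
# V_m = 8.6e-26 m^3 corresponds to a median diameter
# d_m = (6 V_m / pi)**(1/3), and sigma_d = sigma_V / 3 because
# d scales as V**(1/3).
V_m = 8.6e-26          # m^3
sigma_V = 0.19

d_m = (6.0 * V_m / math.pi) ** (1.0 / 3.0)   # m
sigma_d = sigma_V / 3.0

print(f"d_m = {d_m * 1e9:.1f} nm")   # -> 5.5 nm
print(f"sigma_d = {sigma_d:.3f}")    # -> 0.063 (quoted rounded to 0.062)
```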
For the relaxation measurements the dc field was 0.05 Oe, while the background field was less than 1 mOe. ## IV Results and Discussion ### A General behavior Fig. 1 shows the ac susceptibility of the particle sample measured while cooling. The onset of the out-of-phase component of the susceptibility is less sharp than for spin glasses. Nevertheless, there exists collective spin glass like dynamics in the particle system, which can be studied by magnetic aging experiments. The ‘conventional’ aging experiment is a measurement of the wait time dependence of the response to a field change, after either field cooling (thermoremanent magnetization, TRM) or zero field cooling (ZFC). In Fig. 2, the relaxation rate from two ZFC magnetic relaxation measurements is shown. The particle system was cooled directly from a high temperature to 30 K, where it was kept for a wait time of 300 and 3000 s, respectively, before the magnetic field was applied. A clear wait time dependence is seen; the relaxation rate displays a maximum at $`\mathrm{ln}t_{\mathrm{obs}}\approx \mathrm{ln}t_\mathrm{w}`$. Both particle systems and spin glasses exhibit magnetic aging that affects the response function in a similar way, but the wait time dependence appears weaker for the particle system. Comparing the ratio between the magnitude of $`S(t)`$ at the maximum and in the short time limit (0.3 s in our current experiment), we find a value of order 1.3 in the current sample, whereas most spin glasses would show a value $`\gtrsim `$ 2 at a corresponding temperature and wait time. The illustrated differences between the dynamics of a particle system and an archetypical spin glass can be assigned to the wide distribution of particle relaxation times and to the fact that some particles may relax independently of the collective spin glass phase due to sample inhomogeneities. The relaxation in the ac-susceptibility, after cooling the sample to 23 K with the same cooling rate as in the reference measurement, is shown in Fig. 3.
The relaxation in absolute units is larger for $`\chi ^{\prime }`$ than for $`\chi ^{\prime \prime }`$ and, in contrast to spin glasses where the aging phenomena are best exposed in $`\chi ^{\prime \prime }`$, the effects of aging are more clearly seen in $`\chi ^{\prime }`$ for this particle system. Measuring the ac-susceptibility with slower cooling rates gives lower values of the ac-susceptibility. The effect of a slower cooling rate is largest just below the transition temperature, where the effect of aging also is largest. ### B Temperature cycling One experimental procedure that has been used for spin glasses to confirm the overlap length concept is temperature cycling. The measurements are similar to conventional aging experiments, but after the wait time $`t_{\mathrm{w}_1}`$ at the measuring temperature $`T_\mathrm{m}`$, a temperature change $`\mathrm{\Delta }T`$ is made, and the system is exposed to a second wait time $`t_{\mathrm{w}_2}`$ before changing the temperature back to $`T_\mathrm{m}`$, where the field is applied and the magnetization is recorded as a function of time. Fig. 4 shows results from measurements with negative temperature cycling, at $`T_\mathrm{m}=30`$ K, using $`t_{\mathrm{w}_1}`$= 3000 and $`t_{\mathrm{w}_2}`$=10000 s. For $`0>\mathrm{\Delta }T\gtrsim -2`$ K the additional wait time at the cycling temperature mainly shifts the maximum in the relaxation rate to longer times. The shift is largest for the smallest $`|\mathrm{\Delta }T|`$ and the maximum returns continuously towards $`t_{\mathrm{w}_1}`$ as $`|\mathrm{\Delta }T|`$ is increased. For $`\mathrm{\Delta }T=-2`$ K, the maximum appears at $`\mathrm{ln}t\approx \mathrm{ln}t_{\mathrm{w}_1}`$ and the relaxation rate behaves rather similarly to the curve without the cycling, only showing a somewhat larger magnitude (see Fig. 4). For a larger temperature step, $`\mathrm{\Delta }T=-4`$ K, the relaxation rate is enhanced at times shorter than $`t_{\mathrm{w}_1}`$, while the relaxation appears unaffected at longer time scales.
The behavior can be interpreted in terms of an interplay between the domain growth and the overlap length: for small temperature steps, the domain size $`R_1(T_\mathrm{m},t_{\mathrm{w}_1})`$ attained during $`t_{\mathrm{w}_1}`$ at $`T_\mathrm{m}`$ is shorter than the overlap length at the cycling temperature, and the domain growth proceeds essentially unaffected by the temperature change, only at a slower rate. For sufficiently large temperature steps the overlap length is shorter than $`R_1(T_\mathrm{m},t_{\mathrm{w}_1})`$ and the domain growth at the lower temperature creates a new domain structure on short length scales, $`R_2(T_\mathrm{m}+\mathrm{\Delta }T,t_{\mathrm{w}_2})`$. For the temperatures and wait times used in the experiments $`R_2(T_\mathrm{m}+\mathrm{\Delta }T,t_{\mathrm{w}_2})<R_1(T_\mathrm{m},t_{\mathrm{w}_1})`$. Returning to $`T_\mathrm{m}`$, the new $`R_2`$ structures do not overlap with the equilibrium configuration and yield the apparent non-equilibrium nature of the dynamics on short time scales (cf. Fig. 4 for $`\mathrm{\Delta }T=-4`$ K). Measurements with positive temperature cycling with $`T_\mathrm{m}=27`$ K are shown in Fig. 5. For small temperature steps ($`\mathrm{\Delta }T\lesssim 2`$ K) the maximum of the relaxation rate remains at $`\mathrm{ln}t\approx \mathrm{ln}t_{\mathrm{w}_1}`$, but for larger temperature steps it is seen that the relaxation rate approaches that of an aging measurement with a short wait time. This can again be explained by the overlap length becoming shorter than $`R_1(T_\mathrm{m},t_{\mathrm{w}_1})`$ when $`\mathrm{\Delta }T\gtrsim 2`$ K. Since the domain growth rate increases quickly with increasing temperature, the system will then look more and more reinitialized when returning to $`T_\mathrm{m}`$ after larger $`\mathrm{\Delta }T`$ and/or longer $`t_{\mathrm{w}_2}`$. ### C Memory The experimental procedure for a memory experiment is illustrated in Fig. 6.
$`\chi ^{\prime }(T)`$ is recorded during cooling, employing one or more halts at constant temperature, followed by continuous heating. Fig. 7 shows results from three different memory experiments: the first with a single temporary halt at 33 K for 1 h 30 min, the second with a single temporary halt at 23 K for 10 h, and the third with temporary halts at both 33 K and 23 K. The displayed curves show the difference between the measured curve and a reference curve. Reference curves are taken from a measurement with continuous cooling followed by continuous heating. Similar to spin glasses, a memory of the cooling history is observed on heating for both one and two temporary halts. It is seen that on the low temperature side, the susceptibility approaches the reference level more slowly than on the high temperature side. This asymmetry of the dip is more pronounced in the particle system than for spin glasses. The relaxation at 23 K after a temporary halt at 33 K is equally large as the relaxation at 23 K without an earlier halt, except that it starts at a lower level. In fact, $`\chi ^{\prime }-\chi _{\mathrm{ref}}^{\prime }`$, measured on heating for the experiment with two temporary halts at 33 K and 23 K, is just the sum of $`\chi ^{\prime }-\chi _{\mathrm{ref}}^{\prime }`$ of the two experiments with a single temporary halt at 33 K and 23 K, respectively. Fig. 8 shows a memory experiment similar to the one in Fig. 7 except that the two temporary halts are closer in temperature. The ac-susceptibility has been measured with temporary halts on cooling at 33 K for 1 h 30 min and at 28 K for 7 h. On heating, the double halt experiment only shows one dip. However, this dip ($`\chi ^{\prime }-\chi _{\mathrm{ref}}^{\prime }`$) still constitutes the sum of the two heating curves with one halt, as was the case for the experiments shown in Fig. 7. Also for this experiment, the ac-relaxation curve at the lower temperature is the same with and without a temporary halt at 33 K, except for a constant.
We have performed supplementary experiments to the ones shown in Figs. 7 and 8 by measuring the dc-relaxation. In Fig. 9, the relaxation rate of three dc-relaxation curves measured at 33 K is shown. In the three experiments the system was quenched to 33 K and then aged for 1 h 30 min. In one of the measurements the relaxation was measured immediately after this wait time, but in the two other measurements a negative temperature cycling was performed, to 23 K for 10 h and to 28 K for 7 h, respectively. The dynamics is sizably affected by the temperature cycling to 28 K, but only weakly affected by the temperature cycling to 23 K. The observed increase of the relaxation rate indicates that the equilibrium structure created at 33 K is erased on short length scales by the temperature cycling, as discussed in Sec. IV B. On time scales shorter than 10 s, $`S(t)`$ is even lower for the experiment with a temperature cycling to 28 K than for the other two experiments. We conclude that there is an overlap between the equilibrium configurations of the magnetic moments at 33 K and 28 K on length scales shorter than $`L(33\mathrm{K},10\mathrm{s})`$. Let us now compare these dc-measurements with the corresponding ac-measurements. The in-phase component of the ac-susceptibility at a frequency of 510 mHz records the integrated response corresponding to observation times $`1/\omega \approx 0.3`$ s and shorter. In the ac-experiment with two temporary halts, at 33 and at 23 K (Fig. 7), no effect of the aging at 23 K is seen when heating through 33 K. However, $`\chi ^{\prime }(33\mathrm{K})`$ measured on heating for the ac-measurement with temporary halts at 33 and 28 K is lower than $`\chi ^{\prime }(33\mathrm{K})`$ for the measurement with only a single halt at 33 K. A lower level of $`\chi ^{\prime }(T)`$ indicates a more equilibrated system and thereby also a lower relaxation rate.
This supports the above conclusion of some overlap between the equilibrium configurations at 28 K and 33 K on the length scales probed by observation times shorter than 0.3 s. Fig. 10 shows the relaxation rate measured at 28 and 23 K after a quench to $`T_\mathrm{m}`$ followed by a wait time of 7 h and 10 h, respectively. The same measurements, except for a temporary halt at 33 K for 1 h 30 min, are also shown. The relaxation rate is lower for the measurement with a temporary halt at 33 K at both temperatures, even though the difference between a temporary halt and a direct quench is larger at 28 K. These measurements, like the corresponding ac-measurements, show that there is some overlap on short length scales between the equilibrium configuration at 33 K and those at all lower temperatures for which we have measured the ac-susceptibility. ## V Conclusions The non-equilibrium dynamics in the low temperature spin glass like phase of an interacting Fe-C nano-particle sample is found to largely mimic the corresponding spin glass dynamics. The observed differences may be accounted for by the strongly temperature dependent and widely distributed relaxation times of the particle magnetic moments, compared to the temperature independent and monodisperse relaxation times of the spins in a spin glass. Within a droplet scaling picture of a particle system, these factors strongly affect the associated length scales and growth rates of the domains and droplet excitations, as well as the roughness of the domain walls. In addition, there might exist individual particles that do not take part in the collective low temperature spin glass phase but relax independently. ###### Acknowledgements. This work was financially supported by The Swedish Natural Science Research Council (NFR).
no-problem/9909/astro-ph9909093.html
ar5iv
text
# Spiral Structure as a Recurrent Instability
Rutgers Astrophysics Preprint 256. To appear in Astrophysical Dynamics – in commemoration of F. D. Kahn, eds. D. Berry, D. Breitschwerdt, A. da Costa & J. E. Dyson (Dordrecht: Kluwer) ## 1 Introduction It is about a century and a half since Lord Rosse first noted the spiral appearance of M51, but a satisfying and robust theory for the general spiral phenomenon in disc galaxies still eludes us. There has been substantial progress, of course. Early theoretical efforts focused on the gas, since no spirals are seen in S0 galaxies, which have little or no interstellar matter, and spirals are most striking where gas is abundant. However, most researchers are now convinced that spirals are driven by the stellar disc through some kind of collective gravitational process. The most compelling reason is that spiral arms are smoother in images of galaxies in the near IR (Schweizer 1976; Rix & Zaritsky 1995; Block & Puerari 1999), indicating that the old disc stars participate in the pattern. In addition, we have known that spiral patterns develop spontaneously in $`N`$-body simulations ever since Lindblad’s pioneering work in the early 1960s (e.g. Lindblad 1960). The problem is thus largely one of classical dynamics – an application of nothing more sophisticated than Newton’s laws of gravity and motion – with gas playing an important, but secondary, role. Spirals in tidally interacting galaxies could well result from the interaction itself, and some spirals may be driven by bars. But spirals in a substantial fraction of galaxies cannot be ascribed to either of these triggers, and therefore present the most insistent problem. In this article, I describe my recent work on this subject, which stems from my collaboration with Franz Kahn in the late 1980s. He was a great help to me then, as he had been much earlier when I began my career as a graduate student in Manchester.
It is clear that he had been interested in spiral structure throughout his career, and was present at the seminal meeting (Woltjer 1962) which seems to mark the change of focus in the wider community to gravitational theories for the phenomenon (which B. Lindblad had pioneered for many years before then). ## 2 Short- or Long-lived Patterns? While most theorists agree that spirals are density variations in the disc which are organized by gravity, opinions diverge quite quickly from this starting point. There is not even a consensus on something as fundamental as the lifetime of spiral patterns. Our snapshot view of galaxies gives us no direct information, and two separate schools of thought exist on the longevity of the structures we observe. C. C. Lin and his co-workers (e.g. Bertin & Lin 1996) favour long-lived quasi-stationary spiral patterns. They suggest that spirals are global instabilities which grow rather slowly in a cool disc with a smooth distribution function (DF). Such modes could be quasi-stationary because of mild non-linear effects, such as gas damping. They can achieve low growth rates for spiral modes by invoking a “$`Q`$-barrier” where in-going travelling waves reflect off a dynamically hot and largely unresponsive inner disc. The standing wave pattern which makes up the mode consists of short and long trailing waves that are trapped between reflections at the $`Q`$-barrier and at co-rotation; it is excited by mild over-reflection at co-rotation. This type of mode can survive only if the inner reflection occurs outside the inner Lindblad resonance (ILR) so that the waves are shielded from the fierce damping which must occur if that resonance were exposed. Most other workers favour short-lived patterns, with fresh spirals appearing in rapid succession, as indicated by $`N`$-body simulations. 
Such spirals develop through swing-amplification in some form or another, either as shearing waves (Goldreich & Lynden-Bell 1965) or as forced responses (Julian & Toomre 1966). Both these interpretations are manifestations of the same underlying mechanism (see Toomre 1981). The role of gas in this picture is as follows: All forms of density wave arise through collective motions of the stars and are therefore weaker when stars move more randomly. Thus fluctuating spiral structure is self-limiting, since the transient patterns themselves gradually scatter stars away from near-circular orbits. If no gas were present, the spirals must fade over time – in less than ten galactic rotations (Sellwood & Carlberg 1984) – as stellar random motions rise. Clouds of gas, when present in a galaxy disc, collide and dissipate most of the random motion they acquire, and therefore remain a dynamically cool and responsive component. Moreover, young stars reduce the rms spread of the total stellar distribution, because they possess similar orbits to those of their parent massive gas clouds. Quite a modest star formation rate is needed to preserve the participation of the stellar disc in spiral waves – a few solar masses per year over the disc of a galaxy is enough. Toomre & Kalnajs (1991) advocate one theory of this type in which spirals are the polarized disc response to random density fluctuations. If the $`10^{10}`$ individual stars of the disc were smoothly distributed, density variations would be tiny; but real galaxy discs are much less smooth because they contain a number of massive clumps, such as star clusters and giant molecular clouds (an extra role for gas). Density fluctuations can be decomposed into a spectrum of plane waves of every pitch angle, which shear continuously from the leading to the trailing direction because of differential rotation. 
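The continuous shearing of such plane waves can be sketched in the standard shearing-sheet description (Goldreich & Lynden-Bell 1965): a wave of fixed azimuthal wavenumber $`k_y`$ has a radial wavenumber that drifts linearly in time, $`k_x(t)=k_x(0)+2Atk_y`$, with $`A`$ the local shear rate. Sign conventions vary, and the numbers below are purely illustrative:

```python
import math

# Minimal kinematic sketch of a shearing density wave in the local
# (shearing-sheet) approximation: kx(t) = kx0 + 2*A*ky*t, so a wave
# set up leading (kx/ky < 0 in this sign convention) opens up, is
# maximally open when kx = 0, and then winds into a trailing spiral.
A = 0.5          # shear rate in units of the local orbital frequency
ky = 1.0         # azimuthal wavenumber (arbitrary units)
kx0 = -3.0       # initially leading

def kx(t):
    return kx0 + 2.0 * A * ky * t

def pitch_angle_deg(t):
    """Angle between the wave crest and the azimuthal direction."""
    return math.degrees(math.atan2(ky, abs(kx(t))))

for t in (0.0, 3.0, 6.0):
    sense = "leading" if kx(t) * ky < 0 else "trailing"
    print(f"t={t:3.1f}  kx={kx(t):+.1f}  pitch={pitch_angle_deg(t):5.1f} deg  {sense}")
```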
The smooth background disc responds enthusiastically to forcing from this shearing spectrum of density fluctuations, and the entire disc develops a transient spiral streakiness. An equivalent viewpoint is to regard the background disc as polarized, with each of the orbiting disturbance masses inducing a spiral response in the surrounding medium. The spiral responses are much stronger than the forcing density variations, but remain directly proportional to them in this picture, the proportionality constant depending on the responsiveness of the background disc. The shot noise from the finite number of particles is itself the source of the spiral responses in the $`N`$-body simulations of Toomre & Kalnajs. While these authors have developed a rather detailed understanding of their local simulations, the amplitudes of spirals in global $`N`$-body simulations seem to be independent of the particle number (Sellwood 1989), rather than declining as $`N^{-1/2}`$ as this theory would predict. Further, it is unclear whether the combination of noise amplitude and responsiveness is sufficient to give rise to spirals of the amplitude we observe. I prefer a model in which the spirals are true instabilities, which recur in rapid succession, as a flag flaps repeatedly in a breeze. I describe this idea in the next few sections, but I emphasize here that it differs radically in two further respects from the quasi-steady waves advocated by Bertin & Lin. First, the ILR in my model is not shielded, but plays a central role, and second, I expect the DF to be far from smooth; in fact, the features in the DF drive the instabilities. I do not claim a fully worked out theory, however; the weakest link in the picture is the manner in which the cycle recurs.
## 3 Groove Modes My paper with Franz Kahn, which appeared in Monthly Notices in 1991, described a new kind of global spiral instability of a stellar disc, which we called a “groove mode.” Our paper presents both $`N`$-body simulations and a local theoretical description of the instability. I attempt to provide only a physical interpretation here and leave the interested reader to refer back to our paper for a more thorough development. As its name implies, the instability occurs in a “groove” in the particle distribution. It is fundamentally a groove in angular momentum density, which would give rise to a groove in surface density only if the stars were all on circular orbits. As the epicyclic radii of typical stellar orbits are some 10% – 20% of their mean radius, whereas we invoke a deficiency of stars over a range of $`\lesssim 1`$% of their angular momentum, the surface density of stars is not significantly reduced anywhere. Nevertheless, it is much easier to envisage the mechanism for a cold disc in which the radial density profile has a sharp notch. The mechanism for a groove mode is illustrated in Figure 1, which shows a small patch of the disc so far from its centre that curvature is negligible. Wave-like disturbances on the groove edges bring high surface density material into the groove in the dark shaded regions. The changes in density give rise to disturbance forces; those between the density excesses on either side of the groove are marked by the pairs of opposing arrows within the groove. Material displaced upwards into the groove from its lower edge is therefore pulled forward by the density excess from the opposite edge. Material that is pulled forward gains angular momentum, moves to an orbit of larger mean radius, and, in a differentially rotating disc, lags behind its original azimuthal motion. It therefore rises further into the groove, moving less rapidly relative to the groove centre than before.
Similarly, material displaced downwards into the groove from its upper edge is pulled backward by the density excess from the opposite edge, loses angular momentum and sinks further into the groove. Thus each density excess causes the other to continue to grow, and the system is unstable. It is worth mentioning that Franz Kahn did not see the instability this way. Instead, he instinctively saw that the dispersion relation for waves in a groove with a smooth profile, such as a Lorentzian, would have a solution in the upper half of the complex plane. To my mind, such an insight conveys no intuitive feel for what is happening, but Franz rightfully trusted it more than physical intuition in situations where the dynamics can be quite subtle. The instability of the groove alone would be of little consequence were it not for the supporting response of the surrounding disc. A polarized response grows with the density excesses in the groove, as shown in the lower part of Figure 1. The groove itself is unstable over a wide range of wavelengths, but the vigorous supporting response from the surrounding disc favours long wavelengths only. The result is a fiercely growing, global, spiral instability for which co-rotation lies near the groove centre. Our local theory estimates of the mode frequencies based on this picture (Sellwood & Kahn 1991) were in tolerable agreement with those found in the global simulations. These spiral responses extend as far as the Lindblad resonances on either side, which are marked by the dashed lines. Recall that these resonances occur where stars moving relative to the pattern encounter the periodic disturbance at their epicyclic frequency, $`\kappa `$. If the pattern speed is $`\mathrm{\Omega }_p`$ and the circular frequency at radius $`R`$ is $`\mathrm{\Omega }(R)`$, a star encounters an $`m`$-armed wave at frequency $`\omega =m(\mathrm{\Omega }\mathrm{\Omega }_p)`$; the condition for a Lindblad resonance is $`|\omega |=\kappa `$. 
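As a concrete illustration of this resonance condition, the sketch below locates the co-rotation and Lindblad radii numerically for an arbitrary rotation curve. The flat (Mestel-like) rotation curve and the pattern speed $`\mathrm{\Omega }_p=0.2V_0/R_0`$ are illustrative assumptions, not values taken from the text.

```python
import math

def kappa(R, Omega, dR=1e-6):
    """Epicyclic frequency: kappa^2 = R d(Omega^2)/dR + 4 Omega^2."""
    dOm2 = (Omega(R + dR)**2 - Omega(R - dR)**2) / (2.0 * dR)
    return math.sqrt(R * dOm2 + 4.0 * Omega(R)**2)

def resonance_radius(Omega, Omega_p, m, l, Rlo=0.1, Rhi=100.0, tol=1e-10):
    """Bisect for the radius where m*(Omega - Omega_p) + l*kappa = 0:
    l = 0 gives co-rotation, l = -1 the ILR and l = +1 the OLR."""
    f = lambda R: m * (Omega(R) - Omega_p) + l * kappa(R, Omega)
    lo, hi = Rlo, Rhi
    assert f(lo) * f(hi) < 0.0, "resonance not bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Flat rotation curve, V0 = R0 = 1, with an illustrative m = 2 pattern:
Omega = lambda R: 1.0 / R
Omega_p = 0.2
R_ILR = resonance_radius(Omega, Omega_p, 2, -1)   # = (2 - sqrt(2))/(2*Omega_p)
R_CR  = resonance_radius(Omega, Omega_p, 2,  0)   # = 1/Omega_p
R_OLR = resonance_radius(Omega, Omega_p, 2, +1)   # = (2 + sqrt(2))/(2*Omega_p)
print(R_ILR, R_CR, R_OLR)
```

For a flat rotation curve $`\kappa =\sqrt{2}\mathrm{\Omega }`$, so the Lindblad radii reduce to $`(1\mp \sqrt{2}/m)V_0/\mathrm{\Omega }_p`$ with co-rotation at $`V_0/\mathrm{\Omega }_p`$, which the bisection reproduces.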
I should stress that this linear, global instability also occurs in discs with random motion, in which case we invoke a deficiency of stars over a narrow angular momentum range. The above arguments carry over to this more realistic situation.

## 4 Subsequent behaviour

The linear mode saturates, and ceases to grow, about the time that the density excesses meet across the groove. At this point, the mode has generated a periodic density variation around the groove which will disperse only rather slowly, as the stars which comprise it all have similar angular momenta. The amplitude of the polarized response of the surrounding disc tracks the variations of the mass clump which induced it.

### 4.1 Angular momentum transport

The inclined spirals exert a torque, however, which redistributes angular momentum. The effect can be viewed as a gravitational stress (Lynden-Bell & Kalnajs 1972), or as wave action carried at the group velocity (Toomre 1969; Kalnajs 1971). The group velocity, $`\partial \omega /\partial k`$ as usual, is directed radially away from co-rotation for all but the most open trailing waves (Binney & Tremaine 1987, §6.2). As the wave is being set up, the stars inside co-rotation do work on the wave and lose angular momentum to it, while those outside gain energy and angular momentum; the net change over the whole disc is zero, as it must be for a self-excited disturbance. Thus the wave action carried at the group velocity is negative angular momentum inside co-rotation and positive angular momentum outside. When combined with the opposite signs of the group velocity, there is an outward flux of angular momentum everywhere – in agreement with the sign of the gravity torque.

### 4.2 Exchanges at resonances

The disturbance produced by the groove mode therefore generates both positive and negative angular momentum, in equal measure, at co-rotation which is then carried away by the spirals towards the Lindblad resonances on either side.
Secular exchanges between the wave and the stars, which are possible only at resonances (Lynden-Bell & Kalnajs 1972), lead to the wave action being absorbed at these locations, which is why a mode could not be set up with an exposed ILR. Stars moving in a non-axisymmetric potential that rotates at a steady rate conserve neither their specific energy, $`E`$, nor their specific angular momentum, $`J`$. But the combination
$$I_\mathrm{J}\equiv E-\mathrm{\Omega }_pJ,$$ (1)
known as Jacobi’s invariant, is conserved. Thus at a Lindblad resonance, where a star gains angular momentum $`\mathrm{\Delta }J`$ from the wave, it also changes its energy as
$$\mathrm{\Delta }E=\mathrm{\Omega }_p\mathrm{\Delta }J.$$ (2)
Figure 2 shows the Lindblad diagram for a differentially rotating disc with an infinitesimal non-axisymmetric perturbation. The full-drawn curve marks the locus of circular orbits in this $`(J,E)`$ plane; no particle can lie below this curve, but bound particles with $`E>E_c`$ move on eccentric orbits in this potential. The resonance condition for a non-circular orbit generalizes to
$$m(\mathrm{\Omega }_\varphi -\mathrm{\Omega }_p)+l\mathrm{\Omega }_R=0,$$ (3)
where $`\mathrm{\Omega }_\varphi (E,J)`$ and $`\mathrm{\Omega }_R(E,J)`$ are the azimuthal and radial frequencies of orbits (Binney & Tremaine 1987, §3.1), and $`l=0`$ for co-rotation and $`l=\pm 1`$ for Lindblad resonances. The loci of these three principal resonances for arbitrarily eccentric orbits are marked by the broken curves in this Figure. The displacements caused by wave-particle interactions all have slope $`\mathrm{\Omega }_p`$ in this diagram. As this is the slope of the circular orbit curve at co-rotation, stars which exchange energy and angular momentum there do not move further from that curve, to first order. The two vectors show that at the Lindblad resonances, on the other hand, stars are moved onto more eccentric orbits when angular momentum is redistributed outwards.
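The tangency argument can be checked numerically; the sketch below assumes, purely for illustration, a flat-rotation-curve (Mestel) potential $`\mathrm{\Phi }=V_0^2\mathrm{ln}(R/R_0)`$ and an arbitrary pattern speed, neither of which is a value from the text.

```python
import math

V0, R0 = 1.0, 1.0          # illustrative units
Omega_p = 0.2              # illustrative pattern speed

def E_circ(J):
    """Energy of a circular orbit of angular momentum J in the Mestel
    potential Phi = V0^2 ln(R/R0), where J = V0*R on circular orbits."""
    return 0.5 * V0**2 + V0**2 * math.log(J / (V0 * R0))

# Co-rotation: Omega(R) = V0/R = Omega_p
J_CR = V0 * (V0 / Omega_p)

# Slope of the circular-orbit curve is dE_c/dJ = V0^2/J = Omega(R),
# which equals the pattern speed exactly at co-rotation:
h = 1e-6 * J_CR
slope = (E_circ(J_CR + h) - E_circ(J_CR - h)) / (2.0 * h)
print(slope, Omega_p)

# A resonant exchange moves a star along Delta E = Omega_p * Delta J;
# at co-rotation this is tangent to the curve, so the departure from
# the circular-orbit locus is only second order in Delta J:
dJ = 0.05 * J_CR
residual = (Omega_p * dJ) - (E_circ(J_CR + dJ) - E_circ(J_CR))
print(residual)   # ~ 0.5 * Omega_p * dJ**2 / J_CR, not first order
```

The same finite displacement applied at a Lindblad radius, where the local slope $`\mathrm{\Omega }(R)`$ differs from $`\mathrm{\Omega }_p`$, leaves a first-order residual, which is the sense in which resonant scattering there drives stars onto eccentric orbits.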
### 4.3 Results from a simulation

Figure 3 shows a spiral pattern extracted from one of my simulations. This calculation is of a $`Q=1.5`$, half-mass $`V=V_0=`$const (aka Mestel) disc, with an inner taper applied to give the disc a characteristic length scale, $`R_0`$, and an outer cut off to limit it to a finite radial extent. The properties of this disc are described in Binney & Tremaine (1987, §4.5) and I have previously reported (Sellwood 1991) confirmation of Toomre’s (1981) prediction that this model with a smooth DF has no true linear instabilities. The 1 million particles in this simulation were drawn from the appropriate DF and placed at random azimuths at the start. A number of transient spiral patterns develop from the particle noise over time, and that illustrated in Figure 3 was extracted by fitting a coherent wave to $`m=2`$ Fourier coefficients of the mass distribution for the time interval $`400\le tV_0/R_0\le 600`$, i.e. long after the start. The best-fit pattern speed for this wave is $`2\mathrm{\Omega }_p=0.364V_0/R_0`$ and co-rotation (full drawn circle) and the Lindblad resonances (dotted circles) for this pattern speed are marked. The distribution of particles in this run at time 600 is shown in Figure 4. The abscissae are the (instantaneous) specific angular momenta of the particles, and the ordinates are the excess of energy over and above that needed for a circular orbit at this $`J`$. Overlaid on the lower plot are the expected loci of particles scattered from nearly circular orbits at the Lindblad resonances for this wave, and the loci of the generalized Lindblad resonances defined by equation (3). The very close similarity in this potential between the scattering trajectory and the locus of the generalized ILR is remarkable, and leads to the strong and coherent tongue of particles scattered up this curve. The impressive agreement between the simulation and the prediction, which has no free parameters, is reassuring.
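Assuming near-circular orbits and the flat rotation curve of the Mestel disc ($`\mathrm{\Omega }=V_0/R`$, $`\kappa =\sqrt{2}V_0/R`$), the resonance circles follow in closed form from the quoted best-fit frequency; a minimal sketch:

```python
import math

V0, R0 = 1.0, 1.0
m = 2
Omega_p = 0.364 / 2          # best-fit m = 2 wave: 2*Omega_p = 0.364 V0/R0

# Flat rotation curve: Omega = V0/R and kappa = sqrt(2)*V0/R, so the
# near-circular condition m*(Omega - Omega_p) = +kappa (ILR) or
# = -kappa (OLR) solves analytically:
R_CR  = V0 / Omega_p
R_ILR = R_CR * (1.0 - math.sqrt(2.0) / m)
R_OLR = R_CR * (1.0 + math.sqrt(2.0) / m)
print(R_ILR, R_CR, R_OLR)    # roughly 1.6, 5.5 and 9.4 in units of R0
```

The resulting radii can be compared with the circles drawn in Figure 3.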
The near coincidence of the scattering trajectory with the generalized ILR is not repeated at the OLR; there the two curves are practically orthogonal. Consequently, scattering at the OLR does not produce striking changes in the particle distribution in this plot.

### 4.4 Recurrent cycle

The resonant scattering of stars by spiral waves, leading to deficiencies in the DF over narrow regions in this plot, opens up the possibility of a recurrent cycle. Similar, though not identical, deficiencies are responsible for the groove modes discovered by Franz and myself, and it is therefore likely that a fresh instability should develop, perhaps with co-rotation near to one of the Lindblad resonances. A recurrent instability cycle of this nature was observed by Sellwood & Lin (1989) in simulations of a low mass disc in nearly Keplerian rotation about a central mass. Their model, in which perturbation forces were also restricted to a single Fourier component, differs in many important respects from real galaxies, but the present Mestel disc is much more realistic. Scattering caused by the waves raises the level of random motion of disc particles in the vicinity of the resonance. Sellwood & Lin found that only the groove carved in a previously undisturbed part of the disc caused a new mode to grow – the other Lindblad resonance lay in a region which had already been heated by two previous waves, and was unable to support a new instability. Thus the spiral “disease” progresses radially in a single direction. Once the entire disc has been heated, no further spiral waves can be sustained – unless some cooling mechanism is in effect (e.g. Sellwood & Carlberg 1984; Carlberg & Freedman 1986).

## 5 Hipparcos stars

This model for a recurrent spiral wave instability cycle is now rather complex, and rests heavily on results from simulations.
While the simulations have been conducted with as much care as possible, and their behaviour seems physically reasonable, the possibility always exists that the results arise through some pernicious, undetected numerical artifact and bear no relation to what actually happens in real galaxies. It therefore seemed appropriate to me to seek some observational confirmation before plunging any further down what could be a blind alley. A possibility of an observational test presented itself once the data became available from ESA’s Hipparcos satellite. This space mission measured proper motions and parallaxes for many stars. Of these, some 14 000 were selected by Binney & Dehnen (1998) as being a kinematically unbiased subsample within 100 pc of the Sun, so that parallaxes were known to a precision of 10% or better. The satellite measured five out of the six phase-space coordinates of each of these stars – the radial velocity was not measured. However, Dehnen (1998) cleverly realized that the missing information could be obtained, at least in a statistical sense, if it could be assumed that the velocity distribution of these stars were homogeneous throughout this small volume. This reasonable assumption allows one to infer the distribution of missing velocities in one direction on the sky by equating it to the distributions observed in orthogonal directions. In this way, Dehnen was able to construct the full phase space distribution function. Figure 5, kindly made for me by Dehnen, shows contours of the phase space density of Solar neighbourhood stars in $`(E_{\mathrm{ran}},J)`$-space. The parabolic lower boundary of the contour distribution reflects the fact that stars on close to circular orbits, whose guiding centres are far from the Sun, would never visit the solar neighbourhood, and are therefore missing from this sample. The asymmetry between left and right results from the usual asymmetric drift, because the stellar density rises towards the Galactic centre.
But it is clear from this Figure that the underlying distribution function is not smooth. There is a clear hint of a scattering line (maybe two), which is reminiscent of scattering at an ILR. If this interpretation is correct, it would confirm that spiral arms are transient and lend considerable support to the idea of a recurrent transient cycle of instabilities discussed above. The missing radial velocity information is now being obtained (Pont et al. 1999), which will enable this plot to be redrawn without invoking Dehnen’s stratagem and, hopefully, confirm the sub-structure. It should be noted that Dehnen (1999) suggests these data could also be interpreted as scattering at the OLR of the bar (see also Raboud et al. 1998). We are working to try to determine which interpretation is the more plausible, but either way, the assumption that spiral arms in galaxies are formed in a system having a smooth DF looks rather too idealized in the light of these data.

## 6 Conclusions

Theoretical effort in an area starved of observational input often loses momentum and may be aimed in quite the wrong direction. Spiral structure theory has not been devoid of observational input, since evidence in favour of density waves has been accumulating for many years. However, the near IR intensity variations (Schweizer 1976 and others), or coherent velocity perturbations (Visser 1978 and others), support only the existence of density waves, and do not test ideas for their origin. Simulations of disc galaxies were strongly motivated by the desire to fill this gap, and have been reporting for decades that spiral arms are transient. As this result has still not been fully understood, the simulators themselves have worried that it could be an artifact. Such healthy scepticism has prompted them to devote many years of effort to refining, testing and cross-checking their codes (Miller 1976; Sellwood 1983; Inagaki et al. 1984; etc.).
But new results from simulations cannot, almost by definition, be checked, and the best we can do is to try to show that the behaviour is physically reasonable. Despite all this care and effort, those intent on calculating slowly growing, quasi-stationary, spiral modes have totally discounted the reported behaviour on the grounds that, in their view, spiral structure is simply too “delicate” a problem for simulated results to have any validity! The Hipparcos data provide the first observational confirmation that it is wrong to assume that the DF of a disc galaxy is smooth. In retrospect, it is hard to see how it could remain smooth, since almost any realistic disturbance in a disc will scatter stars from (or maybe trap them into) resonant regions of phase space. Obviously, the DF could relax back to a smooth state if scattering were efficient, but in a purely collisionless disc, the feature could be smoothed only by further collective effects, which themselves will have resonances elsewhere. The Hipparcos result appears to vindicate the simulations, and it seems highly likely that spirals in real galaxies are recurring, transient patterns. They result from true instabilities provoked by narrow features in the DF, and are of a global nature because long wavelength disturbances are supported most vigorously by the swing-amplifier. While the mechanism for these linear instabilities is now reasonably clear, exactly how the DF is affected, and how the instabilities might recur is not. The resonant scattering peaks, if confirmed, suggest at least one of the processes which sculpt the DF, but it may not be the only, or even the dominant, source of inhomogeneities in the DF. The Hipparcos data have provided a much needed pointer to the way forward in this erstwhile stalled area.

###### Acknowledgements.

This work was supported by NSF grant AST 96/17088 and NASA LTSA grant NAG 5-6037.