no-problem/9909/cond-mat9909363.html
# Quasiparticle Resonant States Induced by a Unitary Impurity in a $`d`$-Wave Superconductor

## Abstract

The quasiparticle resonant states around a single nonmagnetic impurity with unitary scattering in a $`d`$-wave superconductor are studied by solving the Bogoliubov-de Gennes equations based on a $`t`$-$`J`$ model. Both the spatial variation of the order parameter and the local density of states (LDOS) around the impurity have been investigated. We find: (i) a particle-hole symmetric system has a single symmetric zero-energy peak in the LDOS regardless of the size of the superconducting coherence length $`\xi _0`$; (ii) for the particle-hole asymmetric case, an asymmetric splitting of the zero-energy peak is intrinsic to a system with a small value of $`k_F\xi _0`$.

It is now well established that high-$`T_c`$ superconductors (HTSC’s) have essentially a $`d_{x^2-y^2}`$-wave pairing symmetry. In conventional $`s`$-wave superconductors, nonmagnetic impurities affect neither the transition temperature nor the superfluid density, as dictated by the Anderson theorem . But in a $`d`$-wave superconductor (DWSC), whose energy gap has nodes, such impurities can cause a strong pair-breaking effect . Recently, the local electronic properties in the immediate vicinity of an isolated nonmagnetic impurity in a DWSC have become the topic of increased investigation , as these properties may provide a distinctive signature of the pairing symmetry. It has been theoretically predicted by Balatsky, Salkola, and co-workers that, in a DWSC, a single nonmagnetic impurity can generate quasiparticle resonant states at subgap energies. They showed that, for a moderately strong impurity, an asymmetry of the resonance peak near the Fermi energy is induced by the fact that the impurity locally breaks the particle-hole symmetry.
However, their theory also says that increasing the impurity strength pushes the resonance peak toward the Fermi level, so that, in the unitary limit, the resonance occurs right at the Fermi level, and only a single symmetric zero-energy peak (ZEP) appears in the LDOS near the impurity. It has also been shown by a finite-size diagonalization that, in the unitary limit, the lowest eigenvalues are essentially zero, indicative of the appearance of zero-energy states (ZES’s). Note that, in Ref. , the chemical potential $`\mu `$ was taken to be at the center of the tight-binding energy band (i.e., $`\mu =0`$), so that the system has particle-hole symmetry. This symmetry is also upheld in the continuum-theory treatment of impurities, where the self-consistent $`t`$-matrix approximation is employed. A question which arises naturally is whether, in the unitary limit, the “ZEP” in the LDOS due to the “ZES’s” exhibits an asymmetric splitting when particle-hole symmetry is broken in the system. Recently, Tanaka et al. concluded from their numerical study that such a splitting is still present, whereas Tsuchiura et al. reached the opposite conclusion in their numerical study, and asserted that the system studied by Tanaka et al. was too small for their results to be reliable. Experimentally, an asymmetric splitting was clearly observed by Yazdani et al. , whereas Hudson et al. observed only an off-centered peak with no indication of a splitting. It thus appears important to settle the issue of whether a unitary nonmagnetic impurity in a pure DWSC can indeed give rise to such an asymmetric splitting of the “ZEP”, since the answer will decide whether the experimental observation of this feature in HTSC’s necessarily implies either that these superconductors do not have pure $`d`$-wave symmetry, or that the impurity is not in the unitary limit (in which case the asymmetry is tied to the sign of the impurity potential); either conclusion may well be misleading.
Based on a $`t`$-$`J`$ model, this paper presents an extensive study of the electronic states around a unitary single-site impurity in a DWSC. The spatial variation of the superconducting order parameter (OP) near the impurity, including an induced $`s`$-wave component, is determined self-consistently. By investigating the sensitivity of the LDOS to both $`\mu `$ and $`\xi _0`$, we find: (i) when $`\mu =0`$, so that the system is particle-hole symmetric, a single ZEP occurs in the LDOS spectrum, which is symmetric with respect to zero energy, regardless of the size of $`\xi _0`$; (ii) as the particle-hole symmetry is broken by letting $`\mu \ne 0`$, a critical value $`\gamma _c`$ exists, which is larger for larger $`|\mu |`$, such that for $`\gamma \equiv k_F\xi _0<\gamma _c`$ the “ZEP” exhibits an asymmetric splitting. (Here $`k_F`$ is the Fermi wavevector.) Thus we find that, for a particle-hole asymmetric system, a sufficiently small coherence length can cause the “ZEP” to exhibit an asymmetric splitting. Treating such a system by the self-consistent $`t`$-matrix approximation, which restores the particle-hole symmetry, will then lose this feature and be misleading in this respect. We consider a $`t`$-$`J`$ model Hamiltonian defined on a two-dimensional square lattice:

$$\mathcal{H}=-t\sum_{\langle \mathbf{ij}\rangle \sigma }c_{\mathbf{i}\sigma }^{\dagger }c_{\mathbf{j}\sigma }+\sum_{\mathbf{i}}U_{\mathbf{i}}n_{\mathbf{i}}-\mu \sum_{\mathbf{i}}n_{\mathbf{i}}+\frac{J}{2}\sum_{\langle \mathbf{ij}\rangle }\Big[\mathbf{S}_{\mathbf{i}}\cdot \mathbf{S}_{\mathbf{j}}-\frac{1}{4}n_{\mathbf{i}}n_{\mathbf{j}}\Big]+\frac{W}{2}\sum_{\langle \mathbf{ij}\rangle }n_{\mathbf{i}}n_{\mathbf{j}},$$ (2)

where the Hilbert space is made of empty and singly occupied sites only; $`\langle \mathbf{ij}\rangle `$ denotes a sum over nearest-neighbor sites; $`n_{\mathbf{i}}=\sum_{\sigma }c_{\mathbf{i}\sigma }^{\dagger }c_{\mathbf{i}\sigma }`$ is the electron number operator on site $`\mathbf{i}`$; $`\mathbf{S}_{\mathbf{i}}`$ is the spin-$`\frac{1}{2}`$ operator on site $`\mathbf{i}`$; and $`J>0`$ gives the antiferromagnetic superexchange interaction.
As in Ref. , we have also included a direct nearest-neighbor interaction term. $`W=0`$ and $`W=J/4`$ correspond to two versions of the standard $`t`$-$`J`$ model. This term is introduced to adjust the magnitude of the resultant $`d`$-wave OP. The scattering potential from the single-site impurity is modeled by $`U_{\mathbf{i}}=U_0\delta _{\mathbf{i}I}`$, with $`I`$ the index of the impurity site. The slave-boson method is employed to write the electron operator as $`c_{\mathbf{i}\sigma }=b_{\mathbf{i}}^{\dagger }f_{\mathbf{i}\sigma }`$, where $`f_{\mathbf{i}\sigma }`$ and $`b_{\mathbf{i}}`$ are the operators for a spinon (a neutral spin-$`\frac{1}{2}`$ fermion) and a holon (a spinless charged boson). Due to the holon Bose condensation at low temperatures, the quasiparticles are determined by the spinon degrees of freedom only. Within the mean-field approximation, the Bogoliubov-de Gennes (BdG) equations are derived to be

$$\sum_{\mathbf{j}}\begin{pmatrix}H_{\mathbf{ij}}&\mathrm{\Delta }_{\mathbf{ij}}\\ \mathrm{\Delta }_{\mathbf{ij}}^{\ast }&-H_{\mathbf{ij}}^{\ast }\end{pmatrix}\begin{pmatrix}u_{\mathbf{j}}^{n}\\ v_{\mathbf{j}}^{n}\end{pmatrix}=E_{n}\begin{pmatrix}u_{\mathbf{i}}^{n}\\ v_{\mathbf{i}}^{n}\end{pmatrix},$$ (3)

with

$$H_{\mathbf{ij}}=-\Big[t\delta +\Big(\frac{J}{2}+W\Big)\chi _{\mathbf{ij}}\Big]\delta _{\mathbf{i}+\boldsymbol{\delta },\mathbf{j}}+(U_{\mathbf{i}}-\mu )\delta _{\mathbf{ij}}.$$ (4)

Here $`u_{\mathbf{i}}^{n}`$ and $`v_{\mathbf{i}}^{n}`$ are the Bogoliubov amplitudes corresponding to the eigenvalue $`E_n`$; $`\delta `$ and $`\chi _{\mathbf{ij}}`$ are the doping rate and the bond OP, respectively; and $`\boldsymbol{\delta }`$ are the unit vectors $`\pm \widehat{\mathbf{x}}`$, $`\pm \widehat{\mathbf{y}}`$.
The resonant-valence-bond (RVB) OP $`\mathrm{\Delta }_{\mathbf{ij}}`$, $`\chi _{\mathbf{ij}}`$, and $`\delta `$ are determined self-consistently:

$$\mathrm{\Delta }_{\mathbf{ij}}=\frac{J-W}{2}\sum_{n}[u_{\mathbf{i}}^{n}v_{\mathbf{j}}^{n}+u_{\mathbf{j}}^{n}v_{\mathbf{i}}^{n}]\mathrm{tanh}(E_n/2k_BT)\,\delta _{\mathbf{i}+\boldsymbol{\delta },\mathbf{j}},$$ (5)

$$\chi _{\mathbf{ij}}=\sum_{n}\{u_{\mathbf{i}}^{n}u_{\mathbf{j}}^{n}f(E_n)+v_{\mathbf{i}}^{n}v_{\mathbf{j}}^{n}[1-f(E_n)]\},$$ (6)

and

$$\delta =1-\frac{2}{N_a}\sum_{\mathbf{i},n}\{|u_{\mathbf{i}}^{n}|^2f(E_n)+|v_{\mathbf{i}}^{n}|^2[1-f(E_n)]\},$$ (7)

where $`k_B`$ is the Boltzmann constant; $`f(E)=[\mathrm{exp}(E/k_BT)+1]^{-1}`$ is the Fermi distribution function; and $`N_a=N_x\times N_y`$ is the number of lattice sites. The BdG equations are first solved fully self-consistently for the bulk state. We then fix the values of $`\delta `$ and $`\chi `$ and solve the BdG equations in the presence of a single impurity with the self-consistent $`d`$-wave RVB OP. The thermally broadened local density of states (LDOS) is then evaluated according to

$$\rho _{\mathbf{i}}(E)=-2\sum_{n}[|u_{\mathbf{i}}^{n}|^2f^{\prime }(E_n-E)+|v_{\mathbf{i}}^{n}|^2f^{\prime }(E_n+E)],$$ (8)

where the factor 2 arises from the spin sum, and $`f^{\prime }(E)\equiv df(E)/dE`$. The LDOS $`\rho _{\mathbf{i}}(E)`$ is proportional to the local differential tunneling conductance which can be measured in a scanning tunneling microscopy/spectroscopy (STM/S) experiment . In the numerical calculation, we construct a superlattice with the square lattice $`N_x\times N_y`$ as a unit supercell. As detailed in Ref. , this method can provide the energy resolution required for the possible resonant states. Throughout this work, we take the size of the unit supercell $`N_a=35\times 35`$, the number of supercells $`N_c=6\times 6`$, the temperature $`T=0.01J`$, and the single-impurity potential in the unitary limit $`U_0=100J`$. The values of the other parameters ($`\mu `$, $`W`$, and $`t`$) are varied in order to investigate the electronic states around a single impurity for various ways of bringing about particle-hole asymmetry.
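The procedure above can be sketched numerically. The following minimal illustration (our own sketch, not the authors' code) diagonalizes the lattice BdG matrix of Eq. (3) for a single small cell with a uniform $`d`$-wave bond OP and one strong on-site impurity, then evaluates the thermally broadened LDOS of Eq. (8) on the impurity's nearest-neighbor site; it is non-self-consistent, and the lattice size, $`\mathrm{\Delta }_0`$, and $`T`$ are arbitrary illustrative values.

```python
import numpy as np

def bdg_ldos(N=16, t=1.0, mu=0.0, delta0=0.4, U0=100.0, T=0.04):
    """Diagonalize the lattice BdG equations for a d-wave superconductor
    with a single strong on-site impurity at the cell center, and return
    the LDOS on the impurity's nearest-neighbor site."""
    ns = N * N
    idx = lambda x, y: (x % N) * N + (y % N)       # periodic boundary conditions
    H = np.zeros((ns, ns))
    D = np.zeros((ns, ns))
    for x in range(N):
        for y in range(N):
            i = idx(x, y)
            for dx, dy, sgn in ((1, 0, +1), (-1, 0, +1), (0, 1, -1), (0, -1, -1)):
                j = idx(x + dx, y + dy)
                H[i, j] = -t                        # nearest-neighbor hopping
                D[i, j] = sgn * delta0              # d-wave: +Delta on x bonds, -Delta on y bonds
            H[i, i] = -mu
    imp = idx(N // 2, N // 2)
    H[imp, imp] += U0                               # unitary-limit single-site impurity
    M = np.block([[H, D], [D, -H]])                 # BdG matrix of Eq. (3), real Delta
    E, W = np.linalg.eigh(M)
    u, v = W[:ns, :], W[ns:, :]
    site = idx(N // 2 + 1, N // 2)                  # nearest neighbor of the impurity
    kernel = lambda x: 1.0 / (4 * T * np.cosh(x / (2 * T)) ** 2)   # -f'(x), broadening
    egrid = np.linspace(-2.0, 2.0, 201)
    rho = np.array([2 * np.sum(np.abs(u[site]) ** 2 * kernel(E - w)
                               + np.abs(v[site]) ** 2 * kernel(E + w))
                    for w in egrid])
    return egrid, rho, E
```

Because the BdG matrix anticommutes with an off-diagonal symmetry operator, its eigenvalues always come in $`\pm E_n`$ pairs, which is a useful sanity check on any implementation.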
The obtained spatial variation of the $`d`$-wave and the induced extended-$`s`$-wave OP components around the impurity, which are defined as $`\mathrm{\Delta }_d(\mathbf{i})=\frac{1}{4}[\mathrm{\Delta }_{\widehat{x}}(\mathbf{i})+\mathrm{\Delta }_{-\widehat{x}}(\mathbf{i})-\mathrm{\Delta }_{\widehat{y}}(\mathbf{i})-\mathrm{\Delta }_{-\widehat{y}}(\mathbf{i})]`$ and $`\mathrm{\Delta }_s(\mathbf{i})=\frac{1}{4}[\mathrm{\Delta }_{\widehat{x}}(\mathbf{i})+\mathrm{\Delta }_{-\widehat{x}}(\mathbf{i})+\mathrm{\Delta }_{\widehat{y}}(\mathbf{i})+\mathrm{\Delta }_{-\widehat{y}}(\mathbf{i})]`$, is similar to Fig. 1 of Ref. . These OP components have the following characteristics: The $`d`$-wave component decreases continuously to zero from its bulk value as the impurity site is approached, on the scale of the coherence length $`\xi _0\sim \hbar v_F/\pi \mathrm{\Delta }_{max}`$, with the depleted region extending farther along the nodal directions if $`\xi _0`$ is larger. Here $`\mathrm{\Delta }_{max}=4\mathrm{\Delta }_0`$, with $`\mathrm{\Delta }_0`$ the bulk value of the $`d`$-wave OP defined in real space on a nearest-neighbor bond, and $`v_F`$ is the Fermi velocity. The $`s`$-wave component vanishes at the impurity site and also at infinity. It has line nodes along the $`\{110\}`$ and $`\{1\overline{1}0\}`$ directions, and changes sign across any nodal line. Unlike the pairing state at a $`\{110\}`$ surface of a DWSC, which can break time-reversal symmetry, the pairing state near a single impurity conserves time-reversal symmetry. This difference can be understood from the Ginzburg-Landau (GL) theory : a mixed gradient term favors the $`d`$\- and induced $`s`$-wave OP components being in phase, but this term vanishes near a $`\{110\}`$ surface, whence the fourth-order $`s`$-$`d`$ coupling term can establish an $`s+id`$ pairing state. Figure 1 shows the LDOS as a function of energy on sites one and two lattice spacings from the impurity along the $`(100)`$ direction, and on the corner site of the unit supercell.
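The projection onto the $`d`$-wave and extended-$`s`$-wave channels defined above is a simple four-bond average; a minimal sketch (the function and variable names are ours, not from the paper):

```python
def op_components(d_px, d_mx, d_py, d_my):
    """Project the four bond OPs around a site onto the d-wave and
    extended-s-wave channels: Delta_d combines the x and y bonds with
    opposite signs, Delta_s with equal signs."""
    delta_d = 0.25 * (d_px + d_mx - d_py - d_my)
    delta_s = 0.25 * (d_px + d_mx + d_py + d_my)
    return delta_d, delta_s
```

For the uniform bulk pattern $`\mathrm{\Delta }_{\pm \widehat{x}}=\mathrm{\Delta }_0`$, $`\mathrm{\Delta }_{\pm \widehat{y}}=-\mathrm{\Delta }_0`$ this returns $`(\mathrm{\Delta }_0,0)`$, i.e., a pure $`d`$-wave state with no $`s`$-wave admixture.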
The values of the parameters are labeled on the figure. Note that the LDOS at the corner site has recovered the bulk DOS, exhibiting a gaplike feature with the gap edges at $`\pm \mathrm{\Delta }_{max}`$. This resemblance indicates that the unit-cell size and the number of unit cells are large enough for uncovering the physics intrinsic to an isolated impurity. As shown in Fig. 1, we find that the LDOS spectrum near the impurity is highly sensitive to the position of $`\mu `$ within the energy band and to the parameter $`\gamma `$. In Fig. 1(a), where $`\mu =0`$ and $`\gamma =0.80`$, a single ZEP occurs in the LDOS on the nearest-neighbor site of the impurity, similar to the prediction of the continuum theory and the eigenvalue calculation in Ref. . In addition, as a reflection of the particle-hole symmetry, the whole LDOS spectrum is symmetric about $`E=0`$. We have also studied the cases (not shown) with the same $`\mu =0`$ and $`t=4J`$ but with $`W=0`$ and $`W=0.5J`$ (corresponding to $`\gamma =0.27`$ and $`2.0`$), and found that the above feature remains unchanged, which allows us to conclude that, as long as the system is particle-hole symmetric, only a single symmetric ZEP exists for all $`\gamma `$. When $`\mu `$ is not zero, the system is particle-hole asymmetric, and the LDOS spectrum becomes asymmetric. \[See Fig. 1(b)-(g).\] In Fig. 1(b)-(e), $`\mu =0.32J`$ is fixed, and $`W`$ and $`t`$ are varied in order to change $`\gamma `$. For a large $`\gamma =91.7`$, we see a single ZEP in the LDOS. \[See Fig. 1(b).\] When $`\gamma `$ is lowered to $`16.5`$, the “ZEP” begins to evolve into a double-peaked structure, with the $`E>0`$ peak having the dominant spectral weight over the $`E<0`$ peak. When $`\gamma `$ is decreased further, to $`6.05`$, the spectral weight of the peak at $`E<0`$ is enhanced. (See Fig. 1(d).) As seen in Fig.
1(e), this enhancement becomes even more pronounced when $`\mu `$ is brought close to the edge of a very narrow energy band, so that $`\gamma `$ becomes as small as $`2.85`$. When $`\mu =0.16J`$, we observe only a single ZEP even though $`\gamma `$ is as small as 5.7 \[for Fig. 1(f)\] and $`2.9`$ \[for Fig. 1(g)\]; only a tendency toward splitting can be identified in the latter case. This tendency toward splitting has been observed clearly in STM tunneling spectroscopy measurements (see Fig. 4(A) of Ref. ). It should be emphasized that the ZEP splitting obtained here has a different origin from that found by Tanaka et al. . We have re-examined their results by choosing the same parameter values and system size ($`18\times 18`$). When the LDOS spectrum is displayed over a wide energy range, many split DOS peaks appear, with no well-defined gaplike feature identifiable. But as the system size is enlarged by the supercell technique, the calculation shows only a single ZEP in the LDOS, which indicates that the splitting of the ZEP obtained in Ref. is in fact a finite-size effect. On the other hand, we have also calculated the excess charge distribution due to the presence of the impurity ($`\delta n_{\mathbf{i}}=n_{\mathbf{i}}-n_0`$, where $`n_0`$ is the average particle occupation on each site for the bulk system). We find that this distribution is anisotropic, with tails extending along the nodal directions. \[See Fig. 2.\] Because Fig. 2 is obtained with the parameter values given in Fig. 1(e), which give a small value $`\gamma =2.85`$, the exhibited tails are short. A similar calculation with the model parameters given in Fig. 1(f) (not shown) yields a charge distribution similar to that displayed in Fig. 2, except for longer tails along the nodal directions due to the larger $`\gamma =5.7`$. This similarity between the charge distributions for a split and an unsplit ZEP disproves the assertion made in Ref.
that the local charge-density oscillation is the cause of the ZEP splitting. We mention in passing that we have also found that the excess charge density decays exponentially along the nodal directions, rather than with an $`r^{-2}`$ dependence on the distance from the impurity. We do not think, however, that this finding invalidates the assertion in Ref. that the wavefunction of the impurity resonant state has a $`1/r`$ decay along the nodal directions, which can lead to a long-range interaction between impurities. Since we have obtained essentially the bulk density of states at several neighboring points near the corner of the supercell, the interaction between neighboring impurities should be negligible for the cell size we have chosen to work with. Thus we believe it is very unlikely that the splitting of the ZEP we obtain is due to this interaction. Since the $`s`$-wave OP component induced near the impurity is in phase with the dominant $`d`$-wave component, the splitting of the ZEP we found is not due to a locally broken time-reversal symmetry. Finally, as shown in Fig. 1(e), the splitting also appears in a non-self-consistent calculation with a spatially uniform bulk $`d`$-wave OP, showing that the suppression of the $`d`$-wave OP component, and the induction of the $`s`$-wave component, have little to do with the splitting. All of these points lead us to the conclusion that, for the particle-hole asymmetric case, the splitting of the ZEP is intrinsic to a system with a short coherence length, and that the critical value $`\gamma _c`$, below which the ZEP splits into an asymmetric double peak, simply reflects that the system has reached a critical extent in its deviation from particle-hole symmetry.
We thus propose to understand these results qualitatively as follows: The “ZES’s” induced by a unitary nonmagnetic impurity have essentially the same physical origin as the “midgap states” predicted to exist at the surfaces/interfaces of a DWSC . Their existence is implied topologically by the Atiyah-Singer index theorem , which applies to particle-hole-symmetric Dirac-like operators. When this symmetry is mildly broken, the midgap states are expected to survive, but no longer exactly at “midgap”. The BdG equations become Dirac-like equations only under the WKBJ approximation (which is a part of the self-consistent $`t`$-matrix approximation), the error of which is measured by the parameters $`|\mu |`$ and $`\gamma ^{-1}`$. For $`\mu =0`$, the system has exact particle-hole symmetry for all $`\gamma `$. Thus a smaller $`|\mu |`$ should imply that a smaller $`\gamma `$ is needed to reach the same deviation from particle-hole symmetry, hence a smaller $`\gamma _c`$ below which an asymmetric splitting of the ZEP appears. In summary, we have presented an extensive study of the quasiparticle resonant states induced by a unitary nonmagnetic impurity in a DWSC. The results clarify some conflicting conclusions in the literature, and should be of value for the proper analysis of STM/S results obtained on HTSC’s around an isolated impurity. We are grateful to M. Salkola, M. E. Flatté, and A. Yazdani for valuable discussions. This work was supported by the Texas Center for Superconductivity at the University of Houston, the Robert A. Welch Foundation, and the Texas Higher Education Coordinating Board. Free computing time from the Texas A&M Supercomputer Center is also gratefully acknowledged.
no-problem/9909/nucl-th9909025.html
## 1 INTRODUCTION

Within a relativistic light-front (LF) constituent-quark ($`CQ`$) model, we performed an extended investigation of elastic and transition electromagnetic (e.m.) hadron form factors in the momentum-transfer region relevant for the experimental research programme at TJNAF . The main features of the model are: i) eigenstates of a mass operator which reproduces a large part of the hadron spectrum; ii) a one-body current operator with phenomenological Dirac and Pauli form factors for the $`CQ`$’s. The $`CQ`$’s are assumed to interact via the $`qq`$ potential of Capstick and Isgur ($`CI`$) , which includes a linear confining term and an effective one-gluon-exchange ($`OGE`$) term. The latter produces a huge amount of high-momentum components in the baryon wave functions and contains a central Coulomb-like potential, a spin-dependent part, responsible for the hyperfine splitting of baryon masses, and a tensor part. A comparable amount of high-momentum components was obtained (f) with the $`qq`$ interaction based on the exchange of the pseudoscalar Goldstone bosons . This fact suggests that the hadron spectrum itself dictates the high-momentum behaviour of hadron wave functions. In this paper a review of our results for the elastic and transition form factors of $`J\le \frac{3}{2}`$ hadrons is presented (a-g).

## 2 ELECTROMAGNETIC HADRON FORM FACTORS IN THE LF DYNAMICS

In the $`LF`$ formalism the space-like e.m. form factors can be related to the matrix elements of the plus component of the e.m. current, $`\mathcal{J}^{+}=\mathcal{J}^{0}+\mathcal{J}_{z}`$, in the reference frame where $`q^+=q^0+q_z=P_f^+-P_i^+=0`$. We have evaluated elastic and transition form factors (f.f.) by assuming the $`\mathcal{J}^{+}`$ component of the e.m. current to be the sum of one-body $`CQ`$ currents , i.e.
$$\mathcal{J}^{+}(0)\simeq \sum_{j=1}^{3}I_{j}^{+}(0)=\sum_{j=1}^{3}\left(e_j\gamma ^{+}f_1^j(Q^2)+i\kappa _j\frac{\sigma ^{+\rho }q_\rho }{2m_j}f_2^j(Q^2)\right),$$

with $`e_j`$ ($`\kappa _j`$) the charge (anomalous magnetic moment) of the $`j`$-th quark, and $`f_{1(2)}^j`$ its Dirac (Pauli) form factor. We first studied pion and nucleon elastic form factors and showed that an effective one-body e.m. current, with a suitable choice of the $`CQ`$ form factors, is able to give a coherent description of the pion and nucleon experimental data (a). In this paper our fit of the $`CQ`$ form factors is updated to describe the most recent data for the nucleon f.f., in particular for the ratio $`\mu _pG_E^p/G_M^p`$ . In Fig. 1 the elastic proton form factors are shown, in order to illustrate the high-quality fit one can reach (a fit of the same quality is obtained for the neutron and the pion as well). It is interesting to note that, while effective $`CQ`$ f.f. are required to describe the nucleon f.f., the experimental data for the ratio $`\mu _pG_E^p/G_M^p`$ can also be reproduced by the current with pointlike $`CQ`$’s (see the dashed line in Fig. 1(b)). Therefore this ratio appears to be directly linked to the structure of the nucleon wave function.

## 3 NUCLEON-RESONANCE TRANSITION FORM FACTORS

Once the $`CQ`$ form factors have been determined, we can obtain parameter-free predictions for the nucleon-resonance transition form factors. In Fig. 2 our evaluations of the helicity amplitude $`A_{1/2}`$ are shown for $`N\to S_{11}(1535)`$, $`S_{11}(1650)`$, and $`S_{31}(1620)`$, and compared with the results of a non-relativistic model . In the case of $`S_{31}(1620)`$ the results for $`p`$ and $`n`$ coincide (as for $`P_{33}(1232)`$), since only the isovector part of the $`CQ`$ current is effective.
Our predictions yield an overall agreement with the available experimental data for the $`P`$-wave resonances and show a sizeable sensitivity to relativistic effects, but more accurate data are needed to reliably discriminate between different models. Our parameter-free predictions for the $`N\to \mathrm{\Delta }(1232)`$ transition form factors, obtained using the prescriptions i) and ii) defined in (d), are compared with existing data in Fig. 3 (a),(b),(c). In Fig. 3 (d) the ratio between $`G_M^{N\mathrm{\Delta }}\left(Q^2\right)`$ and the isovector part of the nucleon magnetic form factor, $`G_M^p\left(Q^2\right)-G_M^n\left(Q^2\right)`$, is shown to be largely insensitive to the presence of $`CQ`$ form factors, whereas it is sharply affected by the spin-dependent part of the $`CI`$ potential, which is generated by the chromomagnetic interaction. It can clearly be seen that: i) the effect of the tiny $`D`$-wave component ($`P_D^\mathrm{\Delta }=1.1\%`$), and hence of the tensor part of the $`qq`$ interaction, is small for $`G_M^{N\mathrm{\Delta }}`$, as well as for $`E_1/M_1`$ and $`S_1/M_1`$ (the $`L=0`$ component gives non-zero values of $`E_1/M_1`$ and $`S_1/M_1`$ because of the relativistic nature of our calculation); ii) the effect of the spin-dependent part of the $`qq`$ interaction, which is responsible for the $`N`$-$`\mathrm{\Delta }`$ mass splitting and for the different high-momentum tails of the $`N`$ and $`\mathrm{\Delta }`$ wave functions, is essential to reproduce the faster-than-dipole fall-off of $`G_M^{N\mathrm{\Delta }}\left(Q^2\right)`$ (see also Ref. (f)). Neither of these results depends on the prescription used to extract the $`N\to \mathrm{\Delta }`$ transition form factors.
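The “faster-than-dipole fall-off” quoted above is judged against the standard dipole parameterization $`G_D(Q^2)=(1+Q^2/0.71\,\mathrm{GeV}^2)^{-2}`$. A small sketch of such a comparison (the $`0.71\,\mathrm{GeV}^2`$ dipole mass is the conventional nucleon value; the second, smaller mass parameter is purely illustrative, not a fitted value from this paper):

```python
def dipole(q2, m2=0.71):
    """Standard dipole form factor G_D(Q^2) = (1 + Q^2/m2)^-2, Q^2 and m2 in GeV^2."""
    return (1.0 + q2 / m2) ** -2

# A form factor falling faster than the dipole (modeled here by a smaller
# dipole mass) has a ratio G(Q^2)/G_D(Q^2) that decreases with Q^2.
ratios = [dipole(q2, m2=0.5) / dipole(q2) for q2 in (0.0, 1.0, 2.0, 3.0)]
```

A monotonically decreasing ratio of this kind is exactly what Fig. 3 (d) probes for $`G_M^{N\mathrm{\Delta }}`$ relative to the nucleon isovector form factor.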
no-problem/9909/astro-ph9909216.html
# A Near-Infrared Stellar Census of the Blue Compact Dwarf Galaxy VII Zw 403<sup>1</sup>

<sup>1</sup>Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.

## 1 Introduction

Blue Compact Dwarf galaxies (BCDs) are defined by their low luminosities (M<sub>B</sub> ≳ -18), blue spectra with narrow emission lines, small optical sizes, large H-I mass fractions (Thuan & Martin 1981), and low oxygen abundances in their ionized interstellar gas (e.g., Izotov & Thuan 1999). BCDs are known locally (z ≲ 0.02–0.03), with a few as far as z ∼ 0.1 (Thuan et al. 1994). They are among the most vigorously star-forming dwarfs in the nearby Universe. The study of stellar populations in BCDs is interesting for several reasons. First, ever since their discovery, it has been an open question whether BCDs are “young” galaxies which are forming their dominant stellar population at the present epoch, or “old” galaxies which formed their first generation of stars earlier and are currently being rejuvenated by a starburst (Searle & Sargent 1972). What is the nature of the BCDs? Second, dwarf galaxies are the most common kind of galaxy at the present epoch (e.g., Marzke & da Costa 1997), and they may have been even more numerous in the past (Ellis 1997). Guzmán et al. (1998) propose that the faint blue galaxies of type CNELG (Compact Narrow Emission Line Galaxy) at redshifts of z ∼ 0.5 could be luminous BCDs that are experiencing a strong starburst. The ensuing supernova explosions are hypothesized to blow out the entire gas supply, resulting in a rapid fading through passive evolution (e.g., Dekel & Silk 1986; Babul & Ferguson 1996). However, HDF-N observations do not show the large numbers of faint red dwarf-galaxy remnants predicted by this scenario (Ferguson & Babul 1998).
What happened to the faint blue excess in the last few Gyr? This paper addresses the star-formation history (SFH) of BCDs. The stellar content of BCDs provides a fossil record of their SFHs. However, most BCDs are at such large distances that only their most luminous stars can be resolved, even with the HST. VII Zw 403 is a very nearby (4.5 Mpc) example of the BCD class. It is a key object to study with HST, since it is close enough to be well resolved into individual stars (Schulte-Ladbeck et al. 1998, hereafter SCH98; Lynds et al. 1998). VII Zw 403 is a type “iE” BCD in the classification of Loose & Thuan (1986). This is by far the most common type of BCD, and is considered characteristic of the BCD phenomenon. The outer isophotes of iEs are elliptical, whereas the brightest star-forming regions are distributed somewhat irregularly in the vicinity of the center, although not at the exact center. This description is strikingly similar to that of early-type (dSph, dIrr/dSph) galaxies in the Local Group (see Mateo 1998). VII Zw 403 exhibits a smooth, elliptical background sheet (see the R-band image of Hopp & Schulte-Ladbeck 1995) with an integrated color consistent with an old and metal-poor stellar population (Schulte-Ladbeck & Hopp 1998). The background sheet resolves in HST/WFPC2 images into individual red giant stars. The scale length derived from the resolved stars is about 50% larger than that of the large spheroidals of the Local Group. Schulte-Ladbeck et al. (1999, hereafter SHCG99) propose that VII Zw 403 possesses an old underlying stellar population. This supports earlier suggestions that all BCDs showing extended halos of red color might be old galaxies. It also strengthens the possible evolutionary link between BCDs and early-type dwarfs (e.g., Sung et al. 1998). VII Zw 403 has a present-day metallicity of about Z<sub>⊙</sub>/20 (see SHCG99), a moderate star-formation rate of 0.013 M<sub>⊙</sub> yr<sup>-1</sup> (Lynds et al.
1998), a large H-I mass of about 7×10<sup>7</sup> M<sub>⊙</sub> (SHCG99), an extended H-I envelope (Thuan 1999, private communication), and a large outflow of hot gas detected in X-rays (Papaderos et al. 1994). Therefore, VII Zw 403 is clearly in the process of cycling gas into and out of the deepest part of its gravitational potential; this may regulate its star-formation rate and promote transitions in optical morphology between early and late type over Gyr time-scales. VII Zw 403 has an H-I derived heliocentric velocity of -92 km/s (Tully et al. 1981). Tully et al. combined this with the fact that VII Zw 403 is somewhat resolved in their ground-based images to associate it with the M81 group, at an assumed distance of 3.25 Mpc. When we applied the tip-of-the-red-giant-branch (TRGB) method to derive its distance from the I-band luminosity function of the halo stars, we found that VII Zw 403 is about 40% further away, at about 4.5 Mpc (SHCG99). A survey of several emission-line galaxy catalogs (e.g., Kunth & Sèvre 1986, Thuan & Martin 1991, Terlevich et al. 1991, Salzer et al. 1995, Pustil’nik et al. 1995, Popescu et al. 1996, and references therein) reveals additional BCDs with small positive recession velocities corresponding to the 5 to 10 Mpc distance range. We selected another four well-studied BCD/dIrrs for NICMOS observation, and these will be discussed in future papers (Hopp et al. 2000). Near-IR observations have already been used to investigate the nature of BCDs. Tully et al. (1981) observed VII Zw 403 in J and H in an aperture centered on the starburst, and attributed the near-IR flux to red supergiants. Thuan (1983, 1985) obtained integrated near-IR photometry of a large sample of BCDs and argued that he had found an old population of K and M giants.
Unfortunately, as Thuan (1983) recognized, it is difficult to discriminate between a population of young red supergiants and one of old red giants using only the total optical/infrared colors of a star-forming galaxy, because the effective temperature ranges of red supergiants and red giants overlap. Campbell & Terlevich (1984) obtained photometric CO indices of BCDs and asserted that the population detected in the near-IR is primarily composed of supergiants from the current starburst. Subsequent near-IR imaging has been used to separate the more centrally concentrated star-forming regions, which are dominated by the younger supergiants, from the more extended background sheets, which are potentially dominated by older red giants (e.g., James 1994, Vanzi 1997, Davies et al. 1998). However, James (1994) found that intermediate-age, asymptotic giant branch (AGB) stars could be responsible for as much as 50% of the near-IR emission of some BCDs, further complicating the interpretation of near-IR data in terms of stellar populations. Whereas integrated colors and color profiles of mixed-age stellar populations yield ambiguous results, it is in principle possible to distinguish among contributions from young red supergiants (RSGs), intermediate-age AGB stars, and stars on the first-ascent red giant branch (RGB) with the help of color-magnitude diagrams (CMDs). There is little work in the literature regarding resolved stellar populations of similar star-forming galaxies in the near-infrared. The local starburst cluster R 136 in the 30 Dor region of the LMC has been studied with adaptive optics in the near-IR, but it contains no old population (Brandl et al. 1996). NIC2 results on R 136 (Walborn et al. 1999) focus on the young, and still embedded, massive stars. Adaptive-optics near-IR photometry has not yet been applied successfully to the study of stellar populations in the more distant star-forming galaxies of the Local Group (e.g., Bedding et al. 1997).
The DeNIS survey, which is currently in progress at ESO, is mapping the Magellanic Clouds (MC) simultaneously in I, J, and K<sub>s</sub>, to limiting magnitudes of about 18, 16, and 14, respectively. DeNIS data clearly reach below the TRGB, as evidenced by preliminary color-luminosity diagrams and luminosity functions published by Cioni et al. (1999). In addition, the CMDs are well populated with blue stars, presumably the upper main sequence and blue supergiants. Recently, the post-starburst dIrr IC 10 and the dIrr IC 1613 were resolved in J and H with limiting magnitudes of J ∼ 18 and H ∼ 17.5 (Borissova et al. 1999). These data are considered deep enough to show the TRGB (m-M = 24.0 and 24.2, or distances of a few hundred kpc, respectively). However, inspection of their luminosity functions shows that the suspected TRGB occurs at the very limit of their data, where incompleteness due to crowding is a severe problem. The dIrr galaxy NGC 3109, at a distance of m-M = 25.6 or 1.4 Mpc (Alonso et al. 1999), is located in the outer regions of the Local Group. Alonso et al. observed it in the near-IR to limiting magnitudes of about 20 and 19 in J and H. The RGB is below the detection limits in the near-IR. There are no known BCDs in the Local Group. Much deeper limiting magnitudes and better spatial resolution are required if we wish to resolve the old stars in galaxies at distances of up to 10 Mpc (m-M = 30.0). In order to reach the TRGB in such BCDs, we can make use of recent improvements in near-IR imaging. The peak of the spectral energy distribution of RGB stars occurs in the near-IR. A K5 giant, for example, has colors of V-I = 2.1 (Johnson 1966) and V-H = 3.5 (Koornneef 1983), suggesting that a significant gain of near-IR over optical imaging is possible. Near-IR observations are therefore a promising route to mining the old stellar populations of BCDs at large distances.
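The TRGB method invoked throughout this section locates the sharp rise of the red-giant luminosity function and converts the tip magnitude into a distance via the distance modulus, m - M = 5 log<sub>10</sub>(d/10 pc). A minimal sketch of the standard edge-detection approach (the synthetic magnitudes below are illustrative, not VII Zw 403 data, and the I-band fiducial M<sub>I</sub> ≈ -4 is the conventional assumption, not a value from this paper):

```python
import numpy as np

def trgb_from_lf(mags, bin_width=0.05):
    """Magnitude at which the binned luminosity function rises most steeply,
    taken as the tip of the red giant branch (discrete Sobel edge filter)."""
    edges = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    counts, _ = np.histogram(mags, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    sobel = counts[2:] - counts[:-2]         # edge-detection response
    return centers[1:-1][np.argmax(sobel)]

def distance_mpc(m_tip, M_tip=-4.0):
    """Distance in Mpc from the apparent TRGB magnitude, assuming an
    absolute TRGB magnitude (M_I ~ -4 is the usual I-band fiducial)."""
    mu = m_tip - M_tip                        # distance modulus m - M
    return 10 ** ((mu + 5.0) / 5.0) / 1.0e6

# Illustrative synthetic luminosity function: a sparse AGB sequence above
# the tip, a well-populated RGB below it (fainter = larger magnitude).
rng = np.random.default_rng(0)
mags = np.concatenate([rng.uniform(23.5, 24.4, 60),
                       rng.uniform(24.4, 26.0, 2000)])
tip = trgb_from_lf(mags)                      # recovers a tip near 24.4
```

As a consistency check on the conversion, a tip observed at m = 26.0 with M<sub>I</sub> = -4.0 gives m - M = 30.0, i.e., the 10 Mpc limit quoted above.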
The NICMOS instrument aboard HST offered the opportunity to study stellar populations in the near-IR at high spatial resolution and deep limiting magnitudes. VII Zw 403, for which optical and near-IR single-star photometry is discussed in this paper, provides the “proof of concept.” Hopp et al. (2000) give a preview of our intended applications to the additional four galaxies observed by us with HST/NICMOS. Here we present near-IR images of VII Zw 403 obtained with HST’s NIC2 camera. The galaxy is resolved into single stars in the near-IR, several magnitudes deeper than previously achieved for dIrrs from the ground, and sufficiently deep to yield useful measurements of stellar magnitudes and colors. We compare these measurements with WFPC2 observations of the same galaxy, to form an empirical characterization of the stellar content in the near-IR. We measure the TRGB in the HST equivalents of the J and H bands, and provide a calibration of the TRGB method. In future publications, we will apply these TRGB fiducials to NICMOS observations of other resolved BCD and dIrr galaxies for which there are no optical TRGB data. We compute the fractional light that different stellar types contribute to the integrated colors in optical and near-IR bands. Finally, we comment on the nature of BCDs.

## 2 Observations and reductions

The star-forming regions of VII Zw 403 are located near the center of an elliptical background-light distribution (Hopp & Schulte-Ladbeck 1995, Schulte-Ladbeck & Hopp 1998). In the HST/WFPC2 observations with which we compare our NICMOS photometry, the star-forming centers are situated in the PC chip. An image of this region was published as Plate 1 of SCH98. The three WF chips cover part of the elliptical background-light distribution of VII Zw 403, but, due to the geometry of the arrangement of the WF chips, not the entire galaxy. 
A record of the UV/optical imaging obtained with the WFPC2 on 1995 July 7 in F336W, F555W, F814W, and F656N filters, the equivalent of the U, V, I and H$`\alpha `$ passbands, was given in SCH98. Errors and completeness fractions for stellar photometry in the continuum bands, and comments on the transformation into the Johnson-Cousins system, can be found in SHCG99.

### 2.1 NICMOS imaging

NICMOS observations of VII Zw 403 were obtained on 1998 July 23 as part of GO program 7859. Information about the observations can be gleaned directly from the STScI WWW pages linked to this program ID. The NICMOS instrument houses three cameras, the NIC1, NIC2 and NIC3, in a linear arrangement. Data for VII Zw 403 were obtained with all three cameras operating simultaneously, to mitigate some of the detector problems which became known during the phase-II stage of the proposal. However, although all three detectors were collecting photons, not all three yield data which are useful for this study. The NIC3 cannot be brought into focus simultaneously with NIC1 and NIC2, and even NIC1 and NIC2 are not completely confocal. We conducted the observations at the compromise focus position between NIC1 and NIC2. We performed all of our imaging observations in the F110W and F160W filters, the rough equivalents of the J and H bands. Although our primary goal was to obtain deep imaging with both high resolution and large area coverage of regions located in “Baade’s red sheet” (the term has its roots in Baade’s foundation of the stellar population concept, i.e., the definition of Population I and II; an interesting historical perspective on his ideas was given by Sandage 1986), our observing strategy was designed to optimize the scientific return and minimize the risk of using a new instrument with somewhat uncertain performance. 
We therefore chose to locate the NIC2 chip, which has a larger area and a higher sensitivity, albeit at lower resolution, than the NIC1 chip, within the central starburst region of the galaxy rather than in the elliptical background sheet. In order not to constrain the schedulability of the observations, which was limited by the lifetime of the NICMOS cryogen, we also decided not to request a specific orientation of the observations, and so the NIC1 chip was positioned within the outskirts of the background sheet of VII Zw 403 at that spacecraft roll which happened to occur on the observing date. The geometry of the NICMOS observations relative to the WFPC2 observations can be gleaned by comparing Figure 1 to Plate 1 of SCH98. For reference, the NIC2 camera has a field of view of 19.2”x19.2” and a pixel scale of 0.075”. Thus the area of the NIC2 is only about 30% of the area of the PC. The observations were split into several exposures and read out in MULTIACCUM mode. The parameters were chosen to address several issues which are laid out in the NICMOS documentation (detector read-out noise, saturation of bright sources, amplifier glow), and to optimally fill the total time available per HST orbit. From one exposure to the next, the NICMOS camera was dithered in the X direction. This procedure allowed for a better sampling of the PSF, better flux measurements, and the removal of cosmic rays and background in the processing of the data. Due to its sensitivity and very small field of view, the NIC1 camera produced images which contain very few point sources. We will not discuss the NIC1 data in this paper. The NIC3 images exhibit several bright stars, but the stellar images are out of focus. Therefore, we will not discuss the NIC3 data in this paper either. In the F110W filter, we gathered a total of six individual exposures, three having integration times of 1023.95 sec each, and three of 767.96 sec. 
Severe cosmic-ray persistence in the images (see below) convinced us not to include one of the short exposures in the final dataset. The total integration time for the F110W image is therefore 4607.77 sec. In the F160W filter, we took a total of five exposures, each with an integration time of 767.96 sec. Again, one of the datasets was so badly affected by cosmic-ray persistence that we chose not to add it into the final image. The integration time of the F160W filter image used here is therefore 3071.84 sec. The NIC2 observations were reduced with the latest reference files available in the calnica pipeline, and the individual observations were combined into a mosaic with the calnicb pipeline. However, this reduction was not satisfactory. The data showed surprisingly few point sources compared to the WFPC2 images. They displayed a high and spatially non-uniform background. Aperture photometry immediately indicated that the anticipated limiting magnitudes were not achieved. In fact, initially the DAOPHOT (Stetson, Davis & Crabtree 1990) software did not recognize many of the point sources visible by eye. The observations of VII Zw 403 are severely affected by what are known as the “pedestal” and the “cosmic-ray persistence” problems. Both of these problems result in the addition of spatially non-uniform, high backgrounds to the data, preventing the detection of faint stellar IR point sources in our images. We are very grateful that Dr. M. Dickinson allowed us to use his personal software to remove, to a large extent, the effect of the pedestal-induced background in our data. To accomplish removal of the pedestal, the raw data were first run through the calnica pipeline without applying a flat-field correction. We then used MD’s software to interactively fit a pedestal-free sky level to the background in the four quadrants of the NIC2 detector. The data were re-processed through calnica with the flat-field flag on. 
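The quadrant-by-quadrant pedestal fit described above can be caricatured in a few lines. This is only a toy sketch under our own assumptions (a constant pedestal per quadrant, the median as a robust sky estimator, a square frame with an even side length); it is not Dr. Dickinson's software.

```python
# Toy pedestal removal: subtract a constant, median-estimated sky level
# from each of the four quadrants of a NIC2-like frame.
# Assumes a square frame with an even side length (256x256 for NIC2).
def remove_pedestal(img):
    """img: list of rows of floats; returns a pedestal-subtracted copy."""
    out = [row[:] for row in img]
    half = len(img) // 2
    for qy in (0, half):
        for qx in (0, half):
            vals = sorted(img[y][x]
                          for y in range(qy, qy + half)
                          for x in range(qx, qx + half))
            sky = vals[len(vals) // 2]      # median of the quadrant
            for y in range(qy, qy + half):
                for x in range(qx, qx + half):
                    out[y][x] -= sky
    return out
```

In the real reduction the level is fitted interactively, and the frames are then re-flat-fielded through calnica as described above.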
The individual exposures so reduced exhibited a much smoother and lower background than the raw data. In order to avoid the cosmic-ray persistence, we had requested that our data be obtained well away from the South Atlantic Anomaly; however, this was not possible due to the pressure on NICMOS time and the ensuing scheduling difficulties during its short lifetime. Because all of the exposures are affected by this problem at some level, the final data do not reach as deeply as they would otherwise. We proceeded to combine the reduced data with calnicb. However, careful inspection of the mosaiced images revealed that the PSF of the point sources varied noticeably across the images. Since DAOPHOT photometry is sensitive to the shape of the PSF, we needed to minimize these PSF variations. To do this, we combined our distortion-corrected exposures using a drizzling routine instead of the calnicb pipeline (which, at the time of our data reduction, did not take geometric distortion into account). The drizzling process required a careful manual masking of detector blemishes before the image combination. The resulting images in F110W and F160W have a more uniform and lower background, and a more uniform and rounder PSF, than the data reduced by the pipeline. Owing to the dithering procedure, areas at the edges of the images in the X direction have a lower signal-to-noise ratio, and we trimmed them off.

### 2.2 Single-star photometry on NICMOS images

The NIC2 single-star photometry is the result of an iterative process. After the images were reduced as described above, we fitted a PSF to about one hundred fairly isolated sources in each image and carried out photometry. We set the zeropoint using the most recent photometric keywords available for the F110W and the F160W filters used with NIC2. We examined about a dozen sources in each image to calculate the aperture correction that we applied to the DAOPHOT PSF photometry. 
The CMD in F110W, F160W reached deeply enough to reveal the TRGB. However, the photometric errors for the faint sources were large. Therefore, we once again examined the background. The background was still non-uniform due to imperfect pedestal correction, the remaining cosmic-ray persistence, and bound-free and free-free emission from ionized gas in the H-II regions. Smaller photometric errors for the faint sources were obtained when the background was smoothed with a 23x23 pixel median filter after all the identified point sources were removed from the data. Photometry on this background yielded smaller photometric errors for the stars that were previously identified by DAOPHOT, and as a consequence, a tighter distribution of the stars on the CMD. While an even more extended sea of very faint sources was visible on the images by eye before the background smoothing was applied, the high photometric errors on the sources that were identified with DAOPHOT convinced us that photometry of these potentially real, faint red giants was not feasible. The images which had the smoothed background subtracted off are shown in Figure 1. When we overlaid the sources identified by DAOPHOT on the images, we noticed that the extended PSF, especially in the F160W filter, yielded multiple identifications of very bright sources. Possibly as a result of the pedestal removal and the drizzling, the first diffraction ring was of non-uniform brightness, and DAOPHOT identified parts of the ring as 3-4 additional point sources. These sources were usually several magnitudes below the main peak and had a high error. More severely, however, we noticed that in several cases the main core of the PSF of the brightest stars was identified with two PSFs of equal brightness. After much experimentation with the PSF and other clipping parameters within DAOPHOT, we found that the best results were achieved when the images were smoothed with a 3x3 pixel filter. 
In this way the sizes of the cores of the PSFs were degraded slightly to a FWHM of 2.9 pixels (0.22”) in F110W and 3.6 pixels (0.27”) in F160W, but the brightness distribution in the diffraction features was more uniform. An illustration is provided in Fig. 2. We reapplied DAOPHOT, and the bright stars were found as single sources, while the sources erroneously identified as faint stars within the PSFs of the bright stars vanished. With this procedure we measured 2134 individual sources with residual errors smaller than 0.55 mag in F110W, and 1500 sources in F160W. In constructing the CMD we required spatial coincidence of the sources to within 3 pixels; 998 such sources were found. Finally, we re-investigated the photometry of the stars used for the photometric calibration and the aperture correction. By measuring these stars in a series of ever larger circular apertures, we found that the PSF had not been encompassed sufficiently by the 0.5” aperture. We calculated an aperture correction to the 0.5” aperture (for which the photometric conversion is given by the NICMOS photometric calibration) and re-applied the appropriate corrections to the data sets. In Fig. 3, we display the internal errors of the photometry. The errors for F110W exceed 0.1 mag for a magnitude of 23.7; for F160W they reach 0.1 mag at a magnitude of 22.5. We performed completeness tests using the same procedure described in SHCG99. In Fig. 4, we summarize the results. The data are nearly complete ($`>`$95% of test stars recovered) for magnitudes $`<`$23; completeness drops to 50% at 25.4 in F110W and 24.1 in F160W.

## 3 Results and Discussion

The WFPC2 CMD in V and I contains 5459 sources, and is displayed as Fig. 5. Throughout most of this section, the \[(V-I)<sub>o</sub>, M<sub>Io</sub>\] CMD serves as a guide to interpreting the NICMOS results. Main-sequence (MS), blue supergiant (BSG), blue-loop (BL), RSG, AGB and RGB stars are all represented. 
This CMD is extensively discussed in SCH98, Lynds et al. (1998) and SHCG99; we have good knowledge of the location of various stellar phases on this diagram. Figure 5 shows the evolutionary tracks with Z=0.0004; Z=0.004 (Z/50 and Z/5, respectively) from the Padova library (Fagotto et al. 1994). These have been transformed into the observational (HST) plane by using bolometric corrections and color tables (Origlia & Leitherer 2000) produced by folding the HST filter/system response with model atmospheres from Bessell, Castelli & Plez (1998) with \[M/H\] = -1.5; \[M/H\] = -0.5. These transformation tables adopt M<sub>V,⊙</sub> = 4.83 (i.e., M<sub>bol,⊙</sub> = 4.75, BC<sub>V,⊙</sub> = -0.08) and colors equal to zero for the model atmosphere representing $`\alpha `$ Lyrae. The F555W and F814W magnitudes of these two grids were transformed to Johnson-Cousins V and I magnitudes in the ground system using Table 10 of Holtzman et al. (1995), so that the tracks are on the same photometric system as the VII Zw 403 observations. In Fig. 5, we overlay a few tracks of either metallicity onto the observations. The tracks nicely illustrate the well-known age-metallicity degeneracy of the RGB in broad-band colors, i.e., the tip of the first-ascent red giant branch of the Z=0.0004, 1 M<sub>⊙</sub> model, which has an age of about 7 Gyr, virtually coincides with that of the Z=0.004, 4 M<sub>⊙</sub> model, which has an age of about 160 Myr, in the \[(V-I)<sub>o</sub>, M<sub>Io</sub>\] plane. Thus, additional arguments based on the positional dependence of the RGB were employed in SHCG99 to suggest the presence of an old and metal-poor stellar population. A CMD of the near-IR photometry is displayed as Figure 6, in terms of instrumental magnitudes in the Vega system. A transformation of the F110W and F160W photometry into J and H is attempted below. Since additional systematic errors are added in the process, we first discuss the F110W and F160W photometry. 
The foreground extinction towards VII Zw 403 (E(B-V) = 0.025) is negligible at the central wavelengths of the near-IR filters (computed using Cardelli, Clayton & Mathis 1989). Because VII Zw 403 is situated at high Galactic latitude, and the NIC2 camera has a very small size, the contribution to the CMDs by Galactic foreground stars is also negligible. The NIC2 images were situated well within the PC images of VII Zw 403. We cross-identified sources found in both cameras by transforming the NIC2 coordinates into the WFPC2 system. We then merged our photometry lists, and investigated the distribution of stars on a variety of color-color diagrams and CMDs. There are 549 sources found in V, I, J, and H.

### 3.1 Two-color diagrams and internal extinction

We examined two-color diagrams (TCDs) for all combinations of the data sets (Figure 7). The TCDs show two clumps of stars corresponding to the blue and red plumes of the CMDs (see below), with only a few sources located outside of this main distribution of stars. The reddening vectors are approximately parallel to the distribution of stars, even using our seemingly advantageous long color baseline of V-F110W vs. V-F160W. This can also be seen in Koornneef (1983), Fig. 1, for ground-based V, J, H data. We do not discuss TCDs involving the U band here, due to the paucity of sources found in U and the near-IR bands. We investigated the reason for observing a few sources off the main clumps of stars in the TCDs, and found that the deviant data points can all be explained by large photometric errors ($`>`$ 0.1 mag) in one of three filters. We thus attribute the sources which are offset from the main distribution to measurement error and not to internal reddening. 
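The statement that a given color excess is negligible in the near-IR can be illustrated with a short calculation. The band ratios below are generic round-number values for a Cardelli, Clayton & Mathis (1989) law with R<sub>V</sub> = 3.1; they are our own assumption for illustration, not numbers taken from this paper.

```python
# Scale a color excess E(B-V) into broad-band extinctions using
# approximate A_band / A_V ratios for a CCM (1989) law with R_V = 3.1.
# The ratio values are assumed round numbers for illustration only.
CCM_RATIOS = {"V": 1.00, "I": 0.60, "J": 0.28, "H": 0.18}

def band_extinction(e_bv, band, r_v=3.1):
    """Return A_band in magnitudes for a given E(B-V)."""
    return r_v * e_bv * CCM_RATIOS[band]
```

For the foreground E(B-V) = 0.025 quoted above, this gives A<sub>J</sub> and A<sub>H</sub> of only a few hundredths of a magnitude.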
This result of undetectable internal reddening in VII Zw 403 is consistent with the seemingly unreddened location of the blue plume in the \[(V-I)<sub>o</sub>, M<sub>Io</sub>\] CMD, centered on 0 mag, and with the fact that few stars change position from the red to the blue side when we plot CMDs with a larger color baseline (see below). Furthermore, Lynds et al. (1998) derived E(B-V) = 0.04–0.08 for the internal extinction of stellar associations in VII Zw 403. This corresponds to A<sub>V</sub> = 0.12–0.25, and, using the Galactic extinction law of Cardelli, Clayton & Mathis (1989), translates into A<sub>J</sub> = 0.03–0.07 and A<sub>H</sub> = 0.02–0.05. As expected, the extinction in the near-IR is very small.

### 3.2 Luminosity functions in the near-IR and the TRGB method for deriving distances

Briefly, the TRGB method (Lee et al. 1993) makes use of the relative insensitivity to metallicity of M<sub>bol</sub> at the tip of the first-ascent red giant branch, as well as the insignificance of line-blanketing for the I magnitude of metal-poor red giants, and the availability of well-calibrated bolometric corrections to the I band based on the V-I color of the RGB. However, similar calibrations for the F110W or F160W filters do not exist. Furthermore, as illustrated in Fig. 8, while the absolute I magnitude at the TRGB is constant below an \[Fe/H\] of about -0.7, those in J and H display a metallicity dependence. In this section, we discuss an empirical near-IR TRGB calibration based on the VII Zw 403 data, which is therefore valid for the VII Zw 403 metallicity. In the following sections, we will investigate the dependence of the near-IR TRGB on metallicity, both empirically and using models. In SHCG99, we derived the distance modulus of VII Zw 403 to be (m-M)<sub>o</sub>=28.23 from the V-I color of the RGB for the halo population combined with the location of the TRGB in the I-band. 
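Once a tip magnitude and a calibration are in hand, the distance step of the TRGB method is a one-line inversion of the distance modulus. The function below is our own illustrative sketch, not part of the published analysis.

```python
# TRGB distance step: distance modulus and distance from an observed
# (apparent) tip magnitude and an adopted absolute tip magnitude.
def trgb_distance(m_tip, M_tip):
    mu = m_tip - M_tip                 # distance modulus (m - M)
    d_pc = 10.0 ** ((mu + 5.0) / 5.0)  # distance in parsecs
    return mu, d_pc
```

For example, an apparent F110W tip of 23.95 combined with M<sub>F110W,TRGB</sub> = -4.28 returns the modulus of 28.23 adopted here, i.e. a distance of roughly 4.4 Mpc.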
This immediately allows us to place an absolute magnitude scale on the near-IR CMDs of VII Zw 403, to identify the TRGB here, and to interpret the stellar content of the near-IR CMDs. The RGB is clearly distinguishable in the near-IR CMDs, as expected, as a densely populated region at red colors and faint magnitudes, the red tangle. Comparing Fig. 5 with Fig. 6 we notice that, while showing great morphological similarity, the near-IR CMDs do not exhibit the pronounced red tail of AGB stars seen in the optical CMD. In the near-IR CMDs, the red plume continues into the red tangle as a strikingly linear feature with colors 0.75$`<`$(F110W-F160W)$`<`$1.5. The RSG, AGB and RGB stars are not well-separated in color, as we further illustrate below. However, tracing along the red plume it is also evident that a vast increase in the star counts occurs at faint magnitudes, and a TRGB can be distinguished from these data. Luminosity functions in F110W and F160W were derived by counting stars in 0.1 mag bins in the color interval 0.75$`<`$(F110W-F160W)$`<`$1.5. As Fig. 9 shows, the luminosity functions display a sharp rise of the star counts towards fainter magnitudes. We identify this rise with the TRGB. We emphasize that the TRGB occurs at apparent magnitudes in F110W and F160W where the completeness of our data is still very high (above 90%). Using the above distance modulus, we find the TRGB is located at M<sub>F110W,TRGB</sub> = -4.28 $`\pm `$ 0.10 $`\pm `$ 0.18 and M<sub>F160W,TRGB</sub> = -5.43 $`\pm `$ 0.10 $`\pm `$ 0.18, where errors were computed from a combination of the random and systematic errors as discussed in SCH98. The near-IR TRGB values are presumably valid at the metallicity of the halo population of VII Zw 403, namely at $`<`$\[Fe/H\]$`>`$ = -1.92$`\pm `$0.04 (Z/83). 
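The star-counting step just described can be sketched as follows. This is an illustrative reimplementation under our own assumptions (a plain histogram and a largest-jump edge detector), not the code used for the paper; the color cut matches the interval quoted above, and the input list is hypothetical.

```python
# Build a 0.1 mag luminosity function inside the red-plume color cut and
# take the sharpest rise in the star counts as a crude TRGB estimate.
from collections import Counter

def luminosity_function(mags, bin_width=0.1):
    counts = Counter(round(m / bin_width) for m in mags)
    return [(b * bin_width, counts[b]) for b in sorted(counts)]

def trgb_from_lf(stars, color_cut=(0.75, 1.5), bin_width=0.1):
    """stars: iterable of (f110w, f160w) magnitude pairs."""
    in_cut = [f110w for f110w, f160w in stars
              if color_cut[0] < f110w - f160w < color_cut[1]]
    lf = luminosity_function(in_cut, bin_width)
    # Edge detection: the bin with the largest count increase over its
    # brighter neighbor marks the candidate tip.
    jumps = [(lf[i + 1][1] - lf[i][1], lf[i + 1][0]) for i in range(len(lf) - 1)]
    return max(jumps)[1]
```

A real application would also smooth the luminosity function and verify, as done above, that the candidate tip sits well above the completeness limit.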
We compare the empirical values with the 15 Gyr TRGB in our low-metallicity grid (evolution at \[Fe/H\]=-1.7, atmospheres at \[Fe/H\]=-1.5, see above) and find that the models yield M<sub>F110W,TRGB</sub> $`\approx `$ -4.6 and M<sub>F160W,TRGB</sub> $`\approx `$ -5.6, in good agreement with the data. These results provide one “calibration” for the TRGB in the near-IR, and could in principle be applied to other galaxies with old and similarly metal-poor giant branches. We now discuss the accuracy of the method by comparing our CMD in J and H to those of globular clusters (GCs).

### 3.3 Transformation to J and H

The STScI NICMOS team supplies on their home page data for five standard stars (release Nov. 25, 1998), including magnitudes in HST filters and standard ground-based filters. This small sample consists of a white dwarf, a G star, and three red stars of increasingly redder color. The total color range of the stars is -0.04 $`\le `$ J-H $`\le `$ +2.08 and the total magnitude range is 7.3 $`\le `$ H $`\le `$ 12.7. The ground-based data of the NICMOS standard stars are mostly unpublished (private communication from the NICMOS IDT to the STScI NICMOS team). At least one red star comes from the list of Elias et al. (1982; CIT system of JHK magnitudes). The STScI NICMOS team mentions discrepancies of the order of 0.1 mag between the ground-based data in their Table and those of Persson et al. (1998, LCO system; see this paper and references therein for a comparison of the various ground-based systems). A complete transformation into a ground system is not yet available. 
We used the data of the NICMOS team to establish transformation equations from the F110W, F160W magnitudes and color to the ground-based system, applying a simple linear least-squares fit to the data: $`J-H=(0.035\pm 0.14)+(0.758\pm 0.061)(F110W-F160W)`$ $`H=(0.001\pm 0.06)+F160W-(0.091\pm 0.027)(F110W-F160W)`$ The errors give the quality of the fit, but of course do not take into account any of the systematics (such as the offset mentioned above, or the limited sampling of the color range by the few standards). It is therefore difficult to estimate the uncertainties introduced by applying these transformations. We have to assume that the accuracy is no better than 0.1 mag, and probably worse. In Fig. 10, we display the CMDs of VII Zw 403 in J and H. Overlaid are the tracks for the same stellar masses and metallicities as we used in Fig. 5. The tracks were transformed to the ground system using the above equations. There is no major qualitative change in these CMDs as compared to those in the instrumental system (cf., Fig. 6). Quantitatively, the main effect is a shift of the red plume towards the blue plume. An advantage of transforming the data into the JHK system is that our CMDs and luminosity functions can be more readily compared with the few available ground-based data of similar galaxies in the near-IR (see Alonso et al. 1999, Borissova et al. 1999, Cioni et al. 1999). The transformation also enables a comparison of our data with ground-based photometry of GCs in the Milky Way and the LMC. We can also give the J, H magnitudes of the TRGB of VII Zw 403 (see Table 1). In making comparisons of near-IR data sets, additional offsets may be encountered because the various ground-based observations were obtained in different realisations of the JHK filter system. For a careful comparison, we need to transform all of the data into the same system. Persson et al. (1998) extend the bright near-IR standard star measurements of Elias et al. (1982) to fainter magnitudes. 
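Applying these linear fits is straightforward; the sketch below uses the central values of the coefficients, with the signs as we read them from the fits, and ignores the quoted fit uncertainties (which, as stated above, imply an accuracy no better than about 0.1 mag). The function name is ours.

```python
# Convert NIC2 instrumental magnitudes to approximate ground-based J, H
# using the paper's linear fits (central coefficient values only; the
# coefficient signs are as we read them from the quoted fits).
def nic2_to_ground(f110w, f160w):
    color = f110w - f160w
    j_minus_h = 0.035 + 0.758 * color
    h = 0.001 + f160w - 0.091 * color
    return h + j_minus_h, h   # (J, H)
```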
They discuss how the LCO system compares to others (UKIRT, see Casali & Hawarden, 1992; CIT, see Elias et al., 1982; AAO, see Bessell & Brett, 1988) and find rather good agreement between the UKIRT, CIT, and LCO systems in H and K. The J band is less straightforward. For instance, Elias et al. (1983) derive transformation formulae between the AAO system and their CIT system. Again, H and K show good agreement, and the two systems can be treated as identical for our purposes. However, a significant transformation coefficient (about 0.8 mag) has to be taken into account for J. This may be due to the large contribution of atmospheric features in this band. In our discussion below, we shall therefore focus our comparisons on the H-band results, where the three systems discussed above can be assumed to be identical within the accuracy of our transformation to the ground system.

### 3.4 TRGB H magnitudes in GCs and stellar models

If we wish to use the VII Zw 403-derived calibration as a distance indicator for other galaxies, we have to investigate the sensitivity to metallicity of the TRGB luminosity in the near-IR. To study this limitation of our estimator, we followed two approaches, using both stellar evolution models and GC data for guidance. First, we read off the magnitudes from the Padova isochrones (Bertelli et al. 1994, see also Fagotto et al. 1994) for 15 Gyr old stars for the largest available metallicity range. These data are shown in Fig. 8. Bertelli et al. provide their isochrones in the Johnson-Cousins system in the optical, and in the near-IR passbands defined by Bessell & Brett (1988, AAO system). The I-band shows the well-known quasi-constant absolute magnitude level for metallicities, \[Fe/H\], below -0.7. According to the models, the absolute J-band magnitudes are brighter than the I ones (an advantage) but also vary strongly, displaying a rapid monotonic increase with \[Fe/H\] (a disadvantage). 
The H-band magnitudes offer the advantage of being about 2 mag brighter, on average, than M<sub>I</sub>. Unfortunately, the models indicate a complex dependence between \[Fe/H\] and the TRGB in H. M<sub>H</sub> exhibits a plateau at about -6, in the range -0.4$`>`$\[Fe/H\]$`>`$-1.3. Fig. 8 thus suggests that an H-band TRGB might be useful at slightly higher metallicities than those for which the I-band TRGB is applicable. However, towards the very low metallicities of interest for us, M<sub>H</sub> suddenly changes by about 0.4 mag. Furthermore, the theoretical isochrones barely approach the metallicity regime of our dwarf galaxy data. VII Zw 403 in particular lies below the lowest metallicity point covered by published models (see Fig. 11). (We add that we also investigated the TRGB magnitudes in our two grids, which use the Bessell, Castelli & Plez (1998) atmospheres rather than the pure Kurucz atmospheres adopted in the Padova grids, and that they agree to within 0.05 mag.) Second, we compiled JHK data of GCs in the Milky Way and the LMC from the literature. These data cover a large metallicity range. We used the observed CMDs to read off the magnitudes at the TRGB; together with the GC distances, this yields another indication of the dependence of the absolute TRGB magnitudes on metallicity. The major difficulty of this approach is that frequently, the empirical RGBs of GCs are not sufficiently populated near the tip to provide a reliable tip magnitude. Other sources of error include uncertainties in the GC distances and metallicities. Kuchinski et al. (1995a, b) observed several GCs of the Milky Way belonging to the so-called disk system. These have \[Fe/H\] metallicities between -1 and 0. The authors also discuss older data for 47 Tuc and M 71. From these published CMDs, we read off the K magnitudes of the TRGBs, and their H-K colors, to derive H-TRGB magnitudes. We adopted the distance moduli and extinction corrections as presented by Kuchinski et al. 
to derive absolute H-TRGB values for these nine clusters. Kuchinski et al. use the Elias et al. (1982) standards for the CIT system. As they are interested in disk clusters, extinction is a severe problem for some of the data sets, especially for M 71 and Ter 2. The Kuchinski et al. cluster data have only a small overlap in \[Fe/H\] with the dwarf galaxies we are interested in, but they serve as a comparison with the models at high metallicity. Ferraro et al. (1995) presented ground-based JHK photometry of stars in 12 GCs of the LMC. The data are in the CIT system of Elias et al. (1982). Unfortunately, the sampling at the TRGB is so poor in most of these clusters that a secure determination of the TRGB magnitude in either of the near-IR colors is not possible. Furthermore, only some of the GCs observed by Ferraro et al. have published \[Fe/H\] values, and these have rather large uncertainties. We finally used only four of their GCs (see Fig. 11). The LMC clusters also have abundances which are higher than the range of interest for us, but, together with the Kuchinski et al. results, they allow an independent check of the results derived from the isochrones. There is a rather large dispersion in the observed data for these GCs (all with \[Fe/H\]$`\ge `$-1), but they do cluster about the TRGBs of the isochrones, which is somewhat reassuring. Recently, Davidge & Courteau (1999) observed four metal-poor Galactic halo GCs in the near-IR. They used the standard stars of Casali & Hawarden (1992) and supplied values of the TRGB in J, H, and K. The brighter parts of the RGBs are rather well sampled. The \[Fe/H\] values of these four clusters range from -2.3 to -1.5. These are in the range of interest for studies of metal-poor, old stellar populations in galaxies. Most importantly, they bracket the abundance of VII Zw 403 (\[Fe/H\] = -1.92). 
The two more metal-rich GCs of their sample overlap with the isochrone result, and there is good agreement between the models and the data. These four GCs indicate only a weak dependence of M<sub>H,TRGB</sub> on \[Fe/H\] for small values of \[Fe/H\]. We use the four metal-poor GCs of Davidge & Courteau, the model TRGB value for 15 Gyr and Z=0.0004, and our data point for VII Zw 403, to derive M<sub>H,TRGB</sub>. As can be seen from Fig. 11, a good approximation is $`<`$M<sub>H,TRGB</sub>$`>`$ = -5.5($`\pm `$0.1) for -2.3$`<`$\[Fe/H\]$`<`$-1.5. We propose that this value may be used to derive the distances to dwarf galaxies containing metal-poor stellar populations, with sufficient accuracy to be useful for stellar-population studies. We found the “downward” trend of M<sub>H,TRGB</sub> at low \[Fe/H\] suggested by the Padova isochrones disturbing when compared with the “flattening” of M<sub>H,TRGB</sub> in the range -2.3$`<`$\[Fe/H\]$`<`$-1.5 indicated by the observations. We therefore secured a stellar evolutionary grid at \[Fe/H\] = -2.3 from the Frascati group (Cassisi, private communication). The 14 Gyr isochrone of their stellar-evolution code yields M<sub>H,TRGB</sub> $`\approx `$ -5.6 at \[Fe/H\] = -2.3. This provides a consistency check of our empirical result. We do not show this additional point in Figs. 8 and 11, since it is based on a different stellar evolutionary code than the one to which we compare our data throughout the remainder of the paper. The TRGB is the preferred distance indicator when no observations of Cepheids are available, because its physics is well understood, and because, when well calibrated (such as in the I band), it achieves a similar accuracy (Madore & Freedman 1998). The large number of stars populating the near-IR CMDs of star-forming dwarf galaxies near the TRGB (cf. also Hopp et al. 
2000) indicates that the TRGB method does not suffer from the statistical uncertainty which is sometimes encountered when investigating GC ridgelines in the near-IR. Therefore, the transformation and calibration uncertainties are the dominant sources of error in using the near-IR TRGB as a distance indicator for these galaxies. This method is superior to other distance indicators for galaxies in the 5-10 Mpc range; however, care must be taken not to mis-identify the tip of the AGB (TAGB) as the TRGB. The method of the three brightest RSGs, on the other hand, is always severely affected by small-number statistics (Schulte-Ladbeck & Hopp 1998, Greggio 1986).

### 3.5 M<sub>H</sub> as a measure of M<sub>bol</sub> for red stars

Bessell & Wood (1984) investigated bolometric corrections for late-type stars. From observations of individual red giants and supergiants in the MWG, LMC, and SMC, they find BC<sub>H</sub> = 2.6($`\pm `$0.2). In other words, the empirical value for BC<sub>H</sub> is derived to be independent of color or spectral type, and of metallicity. In Figure 12, we show this empirical relation between M<sub>H</sub> and M<sub>bol</sub> as a straight line. An inspection of Fig. 5 in Bessell & Wood (1984) suggests the possibility that the lowest-metallicity data (those for the SMC stars) might lie slightly below their recommended value, at a BC<sub>H</sub> closer to 2.4. To see how the recommendation of a constant BC<sub>H</sub> by Bessell & Wood compares to theoretical expectations, we overlay in Figure 12 the locations of the TRGBs and TAGBs from the Padova isochrone library. Their near-IR photometry is in the same system as the data of Bessell & Wood. We again used the 15 Gyr isochrones over the range of available metallicities, and the values of M<sub>H</sub> and M<sub>bol</sub> associated with these red giant models. 
These points lie near the empirical calibration, but are clearly offset toward lower M<sub>H</sub> for a given M<sub>bol</sub>. The lowest-metallicity point in the stellar-evolutionary models (Z=0.0004) corresponds to a BC<sub>H</sub> of 2.1. In order to make a comparison at high luminosities, we use the theoretical luminosities at the tip of the AGB. We show the TAGB points for solar metallicity and one fifth of solar, and for ages from 150 Myr to 15 Gyr. As illustrated in Fig. 12, while the stellar models with solar metallicity follow the empirical relation well enough, the lower-metallicity models are offset to a smaller M<sub>H</sub> for a given M<sub>bol</sub>. The constancy of BC<sub>H</sub> in Bessell & Wood is somewhat surprising, since observed giants will be located along the giant branches, and not just at the tips. In order to better understand how BC<sub>H</sub> varies along the giant branches, we have investigated our transformed tracks in detail. It turns out that in the models, BC<sub>H</sub> increases along the tracks and depends on the stellar mass (i.e., the age of the population). These variations are particularly pronounced for the Z=0.004 grid, with BC<sub>H</sub> at the tips ranging between 2.3 and 2.6, and between similar limits for the 0.9 M<sub>⊙</sub> model in the brightest 1.5 mag of its RGB evolution. The situation improves slightly for the Z=0.0004 grid, where BC<sub>H</sub> ranges between just 2.0 and 2.2 at the various tips, and within approximately the same limits for the 0.8 M<sub>⊙</sub> model in the brightest 1.5 mag of its RGB evolution. A constant BC<sub>H</sub> is not justified by the models, and the interpretation of transformations from the observational to the theoretical plane will require simulations, since there may be multiple choices for the theoretical parameters of any individual star given its observed color and magnitude. 
In summary, for the low metallicities which occur in our objects, a smaller BC<sub>H</sub> than that suggested by the data of Bessell & Wood appears to be more appropriate. While BC<sub>H</sub> clearly varies strongly with stellar mass, at Z=0.0004 the TRGBs for models over a wide range of stellar masses can be approximated by a constant. We hence employ the following relation to characterize the dependence of M<sub>bol</sub> on M<sub>H</sub>: $`<`$M<sub>bol</sub>$`>`$ = M<sub>H</sub> + 2.1($`\pm `$0.1) for Z $`=`$ 0.0004. We use this relation to derive the bolometric luminosity function of the red stars in VII Zw 403 (Fig. 13). This bolometric luminosity function is valid for the stars at the TRGB (in terms of their metallicity, color or temperature, and hence BC<sub>H</sub>), but is only a rough approximation (to within several tenths of a mag) for other stars. We caution that using a constant BC<sub>H</sub> does introduce uncertainties in going from the observed to the theoretical plane, and that the interpretation of luminosity functions or HR diagrams (cf. also Alonso et al. 1999, Borissova et al. 1999) requires simulations. The TRGB in Fig. 13 is measured to occur at a bolometric luminosity of -3.4($`\pm `$0.1). This agrees very well with the theoretically expected M<sub>bol</sub> from Fig. 8. Our result also compares very well with preliminary results from the DeNIS project by Cioni et al. (1999), who find M<sub>bol,TRGB</sub> to be -3.4 for the LMC and -3.6 for the SMC. These data were transformed to M<sub>bol</sub> using different relations, since H is not one of the survey bands. The agreement is thus all the more encouraging. We detect a large number of stars below the TRGB. Above the TRGB, the stellar numbers are small and thus highly affected by statistical errors. The bulk of the stars which we observe in the luminosity range -3.4 $`>`$ M<sub>bol</sub> $`>`$ -6.5 is compatible with these stars being largely AGB stars in VII Zw 403 (cf. also Fig. 15). 
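Since the adopted relation is just a constant bolometric correction, the bolometric luminosity function follows directly from the H-band one; a sketch with invented input magnitudes (not our actual photometry):

```python
from collections import Counter

BC_H = 2.1  # adopted bolometric correction for Z = 0.0004 (+/- 0.1)

def bolometric_lf(abs_h_mags, bin_width=0.5):
    """Bin M_bol = M_H + BC_H into a luminosity function.
    Returns {bin center: count}, sorted from bright to faint."""
    m_bol = [m + BC_H for m in abs_h_mags]
    bins = Counter(round(m / bin_width) * bin_width for m in m_bol)
    return dict(sorted(bins.items()))

# invented magnitudes near the TRGB (M_H,TRGB ~ -5.5 implies M_bol ~ -3.4)
lf = bolometric_lf([-5.6, -5.4, -5.3, -4.9, -6.2])
```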
The data are consistent with the most luminous AGB stars being only a few hundred Myr old, while the least luminous, oldest ones have ages of several Gyr. The four most luminous red supergiants span the luminosity range from approximately -6.5 $`>`$ M<sub>bol</sub> $`>`$ -8.5. We note that the cool part of the Humphreys-Davidson limit, the upper luminosity boundary of RSGs, is thought to occur at an M<sub>bol</sub> of -9.5 (Humphreys & Davidson 1994). The uncertainties in the transformation from the observed to the theoretical plane prevent us from stating how close our most luminous, near-IR detected objects are to this boundary. A comparison of Fig. 14 with Fig. 5 reveals that, while we are missing some of the bright supergiants in the near-IR as compared to the optical, we do see the most luminous object, at an M<sub>I</sub> of about -9, so we are sampling the entire upper luminosity range. We miss a few of the brightest supergiants in the near-IR because the area of the NIC2 chip covers less than a third of the star-forming centers, while the PC chip encompasses these regions very well. The total absolute blue magnitude of VII Zw 403 is small, about -14 (see Schulte-Ladbeck & Hopp 1998). At such small galaxy luminosities, fluctuations in the numbers of the most luminous, and hence most massive, stars are expected (Greggio 1986). Statistical effects have been known to be important in the UV and optical, and continue to be important for the interpretation of near-IR luminosity functions and CMDs. We discourage the use of the brightest RSGs as a distance indicator. Similarly, comparative studies of the RSG populations of galaxies are severely affected by small-number statistics (cf. the stellar frequencies for M<sub>bol</sub> $`<`$ -7 in Fig. 17 of Massey 1998).

### 3.6 Decoding the near-IR CMDs

In Fig. 
14, we color-code the main areas of the CMDs using the terminology that an observer would employ to classify stars according to the different morphological features seen on the CMD. The classification is based on the \[(V-I)<sub>o</sub>, M<sub>Io</sub>\] CMD of Fig. 5. The stars marked in blue are mainly MS and BSG stars; stellar evolutionary tracks tell us that BL stars occur here as well. The stars indicated in magenta are considered to be RSGs; the tracks suggest there may be BL stars at the faint end. More importantly, the tracks indicate that bright AGB stars populate this part of the red plume as well, and we cannot easily differentiate them from RSGs. We are able to clearly distinguish AGB stars when they form the red tail; these stars are marked in black. Finally, we colored all of the data points which occur below the TRGB in red, suggesting that mainly RGB stars are found here. However, stellar evolution tracks indicate we must be aware of BL and faint AGB stars in this part of the CMD as well. With this broad classification of stars, we now compare the morphology of the optical–near-IR CMDs. In Fig. 14, we use the stellar classification to investigate the stellar content of the \[(J-H)<sub>o</sub>, J<sub>o</sub>\] and \[(J-H)<sub>o</sub>, H<sub>o</sub>\] CMDs. This elucidates a problem which we alluded to earlier, namely that even those AGB stars which populate the redward-extended tail of optical CMDs overlap with the RSGs in near-IR CMDs. As we stated before, the colors of RSG, AGB and RGB stars are quite degenerate with respect to (J-H)<sub>o</sub> color, or temperature. Combined with the color errors that arise from the measurements, only luminosity can help us distinguish between RSG and bright AGB stars on the one hand, and RGB (plus faint AGB) stars on the other hand. At the top of the red plume, we can distinguish the most luminous RSGs from the bright AGB stars based on luminosity (AGB stars are not expected at an M<sub>bol</sub> of -8, cf. Fig. 12). 
We notice from Fig. 14 some mingling of faint stars between blue and red colors. This is most pronounced for stars below the TRGB. Above the TRGB, only a few stars are seriously “misplaced” based on optical color in the near-IR CMDs. We also note that our placement of the TRGB derived from the luminosity functions in F110W and F160W is consistent with our stellar classification scheme. In other words, no stars classed as RGB stars based on the \[(V-I)<sub>o</sub>, M<sub>Io</sub>\] CMD migrated above the TRGB limit in the near-IR CMDs, while just a few stars classed as RSG or AGB stars wander below the TRGB. Hence, there is overall good agreement between the TRGB magnitudes based on stellar classification and on the luminosity functions. We may ask what kind of stars we are missing in the optical CMDs that are present in the near-IR CMDs. Comparing Fig. 14 with Fig. 10, some differences for faint stars are apparent, in the sense that some faint stars of extreme color in Fig. 10 do not appear in Fig. 14. This is not unexpected, as objects that are extremely red may have been missed in the optical, and objects that are extremely blue may be blends or spurious detections in the near-IR. There is very good agreement for the brightest, and hence most luminous, objects – all of the brightest supergiants seen in J and H were seen in V and I as well, giving some confidence that we have a complete sample of the brightest supergiants encompassed by the NIC2 chip’s area in the near-IR. The only potentially significant difference between Figs. 10 and 14 then occurs in the color range (J-H)<sub>o</sub>$`>`$ 1.1. Five stars are found here with very red colors, and with brightnesses above the TRGB. They have small measurement errors and must be considered to be real. In this parameter space, we thus picked up additional objects in the near-IR photometry. It is possible, judging from their colors, that these objects are Miras or Carbon stars. In Fig. 
15, we show a CMD that uses the I and H bands. It may be compared with Fig. 5, which employs the I and V bands. Fig. 15 illustrates that our expectations for observing in the near-IR were realized: the H band allowed for an over 1 mag gain in stellar brightness for the red stars. The tracks cross the data in slightly different places on the \[(V-I)<sub>o</sub>, M<sub>Io</sub>\] and the \[(I-H)<sub>o</sub>, M<sub>Ho</sub>\] planes, with offsets of a few tenths of a mag. These discrepancies are not too surprising considering the uncertainties in the transformations from the space to the ground systems on the one hand, and the difficulties of model atmospheres in reproducing empirical color-temperature relations on the other hand. They suggest that the state of the art for comparing empirical and synthetic CMDs is to reproduce the gross features of the stellar distributions, including the absolute magnitude of the TRGB, quite well, whereas other quantitative results, such as metallicities from the location of the RGB or detailed SFHs, should be considered somewhat more uncertain.

### 3.7 Long color-baseline CMDs

We now present and interpret CMDs with long color baselines. Fig. 16 shows the CMDs of I<sub>o</sub> or M<sub>Io</sub> versus (V-I)<sub>o</sub>, (V-J)<sub>o</sub>, and (V-H)<sub>o</sub>. The same color scheme as that used for Fig. 14 is employed; however, notice the change in color scale. These CMDs in principle offer advantages for separating out different stellar phases from one another. In practice, since the photometric errors vary from band to band and depend on the colors of the objects, the potential of these CMDs to distinguish different stellar phases is not fully realized. An impressive feature of the CMDs of Fig. 16 is the increasingly larger spread of the data in color. For instance, the data points in the (J-H)<sub>o</sub> CMD of Fig. 10 subtend only about 1.5 mag, while those in the (V-I)<sub>o</sub> CMD already cover 4.5 mag. 
In (V-J)<sub>o</sub> this baseline has grown to about 6.5 mag, and in (V-H)<sub>o</sub>, the data are distributed over a range of almost 8 mag. As we study CMDs with increasing color baseline, the color errors at the faint end of the stellar distributions become very large. Therefore, at faint magnitudes, we observe some mingling of stars across the CMDs. We do not consider this to be a real effect. Only one bright star migrates from the red plume of the (V-I)<sub>o</sub> CMD into the blue plume of the (V-H)<sub>o</sub> CMD. This could potentially be a heavily reddened blue object, or else a blend of unresolved objects weighted very differently in different colors. The red plume of the (V-I)<sub>o</sub> CMD was considered to be composed of RSGs plus AGB stars mainly in a narrow region where it appears to form a linear sequence from low to high luminosity. Stars offset to red colors from this band were classed as AGB stars only. Comparing the three CMDs with each other, we see that we judged the location of the AGB quite well from the (V-I)<sub>o</sub> CMD. Two luminous objects considered RSGs based on the (V-I)<sub>o</sub> CMD might also be classed as AGB stars based on the (V-H)<sub>o</sub> CMD. At least half a dozen or so of the faint red stars near the dividing line between the RSG, AGB, and RGB stars in the (V-I)<sub>o</sub> CMD could be additional AGB stars based on the (V-H)<sub>o</sub> CMD. In Fig. 14, two BSGs wandered from the blue plume into the red plume of the CMDs, demonstrating the further difficulty of disentangling the nature of red stars in this luminosity range based on CMDs. A feature of the four brightest RSGs detected in all four bands is that they appear to develop progressively larger redward color offsets in the (V-J)<sub>o</sub> and (V-H)<sub>o</sub> CMDs from the linear band that was used to class them as RSGs on the (V-I)<sub>o</sub> CMD. Since these are bright objects with small measurement errors, we assume that this effect is real. 
We do not expect to see AGB stars at such high luminosities, and so must assume that these colors are intrinsic to the most luminous RSGs sampled by the NIC2 frame. A significant spreading out of colors occurs for the bluest stars and for stars between the MS and the RSG plume/red tangle, presumably high- and intermediate-mass stars on blue loops. This is in part due to high measurement errors for blue objects in the near-IR bands. The effect of differential reddening could play a role in this re-distribution of sources, but we cannot constrain it on the basis of our data.

### 3.8 Comments on the Blue Hertzsprung Gap and the blue-to-red supergiant ratio at low metallicity

An interesting feature of the optical/near-IR CMDs of Figs. 14-16 is the appearance of a gap in the distribution of stars along the blue plume. We first noticed this in the optical PC data centered on the star-forming regions of VII Zw 403 (SCH98). We interpret this gap as the Blue Hertzsprung Gap (BHG) which is predicted by stellar-evolution theory to occur between the distribution of massive stars at the red edge of the MS (core-H burning) and the blue edge of the BL phase (core-He burning). Simulations of the VII Zw 403 CMD with the code of Greggio et al. (1998), which use the low-metallicity grid also employed in this paper, were presented in SHCG99. The synthetic CMDs shown as Fig. 6 in SHCG99 display the predicted BHG for young ages of the stellar population. In comparing Fig. 6 of SHCG99 with the CMDs presented in this paper, it appears that the BHG occurs at a lower luminosity in the models than the feature which we identify with the BHG in the data. This difference can readily be explained by recalling that the simulations were carried out for stellar evolution at Z=0.0004 (Z/50). The extent of the BLs in stellar evolution models is very sensitive to metallicity. In Fig. 
15, we connect with a straight line the locations of the blue edges of the blue loops for the 20 and the 9 M<sub>⊙</sub> tracks in the Z=0.0004 and the Z=0.004 (Z/5) grids. These nicely bracket the location of the observed BHG. The appearance of the BHG between the tracks of these two grids is consistent with the metallicity of the ionized gas of VII Zw 403, which suggests the present generation of stars has metallicities closer to Z/20. In future simulations, we will incorporate into the code the Z=0.001 metallicity grid, as well as all the bands now available from observations. The BHG is seen very well in the CMD of the nearby Local Group dIrr Sextans A (Dohm-Palmer et al. 1998 and references therein), for which the WFPC2 photometric errors in the \[(V-I)<sub>o</sub>, M<sub>Io</sub>\] CMD are even smaller than those for the more distant VII Zw 403. The oxygen abundances of Sextans A (log (O/H) = -4.48, Skillman 1989) and VII Zw 403 (log (O/H) = -4.42(0.06), Martin 1997, and -4.31(0.01), Izotov, Thuan & Lipovetsky 1997) are quite similar, around Z/20. A gap in the distribution of stars was not seen in the H-R diagram of the slightly metal-poor (Z/3) LMC by Fitzpatrick & Garmany (1990). Since it was predicted by stellar-evolution theory, its absence in these data gave rise to the problem of the missing BHG (e.g. the reviews of Maeder & Conti 1994 and Chiosi 1998). Theoretical work has therefore been aimed at filling in the “missing” BHG with stars, for instance by developing and incorporating into the theory of stellar evolution new prescriptions for internal mixing and/or mass loss in massive stars (Salasnich, Bressan, & Chiosi 1999). We note that the BHG is now also being discovered in CMDs of the LMC (Zaritsky 1999, private communication) which are being assembled as part of the digital Magellanic Cloud Photometric Survey (Zaritsky, Harris & Thompson 1997). 
These data will give insight into why the BHG was not observed by Fitzpatrick & Garmany (1990); first suspicions include the smaller sample size and the spatial distribution of stars across the LMC. The fact that the BHG has now been seen in the CMDs of several metal-poor galaxies indicates that more observational work on the HR diagrams of metal-poor star-forming galaxies is needed, and that such CMDs may provide important guidance for stellar-evolution theory. They suggest that additional mixing and mass loss may not be needed to the extent currently anticipated by theorists. A related issue is the blue-to-red supergiant ratio in galaxies. Recent reviews/papers on this open problem of massive-star evolution are those of Langer & Maeder (1995) and Deng, Bressan & Chiosi (1996). In brief, the B/R supergiant ratio as a function of Z has not yet been predicted consistently by stellar-evolution models. However, the data being used to address this problem largely date back quite some time now, to the excellent series of papers by Humphreys et al. on the supergiant stellar content of Local Group galaxies (e.g., Humphreys & McElroy 1984). Our data allow us to contribute a new measurement at low metallicity, but only in a limited way; again, small-number statistics rules. Since the largest problems for predicting the B/R ratios generally occur at low metallicities, the limited insight gained here may nevertheless be useful. Owing to the visibility of the BHG on the CMDs, we are quite well able to separate the core-H burning MS from the core-He burning BSG stars. Counting stars in the various CMDs shown in this paper yields about 20$`\pm `$3 BSGs, with an RMS error of 4. We anticipate that the largest systematic error arises from preferentially missing BSGs due to crowding in the star-forming centers; this would have the effect of increasing the B/R ratio. Assessing the number of (core-He burning) RSGs turns out to be more difficult. 
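The role of small-number statistics in a blue-to-red count ratio can be made concrete with standard error propagation; a sketch (the red-star count and the use of simple Poisson errors here are illustrative assumptions, not the paper's exact error budget):

```python
import math

def count_ratio(n_blue, n_red, sigma_blue=None, sigma_red=None):
    """Ratio of two star counts with propagated uncertainty.
    Defaults to Poisson (sqrt(N)) errors when no sigma is given."""
    sb = math.sqrt(n_blue) if sigma_blue is None else sigma_blue
    sr = math.sqrt(n_red) if sigma_red is None else sigma_red
    r = n_blue / n_red
    # standard quadrature propagation for a ratio of independent counts
    sigma_r = r * math.hypot(sb / n_blue, sr / n_red)
    return r, sigma_r

# e.g. 20 blue supergiants against a hypothetical 4 red supergiants
r, s = count_ratio(20, 4)
```

With only a handful of red stars, the fractional error on the ratio is dominated by the red count, which is the point made in the text.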
This is owing to the much-discussed fact that it is difficult to distinguish massive RSGs from intermediate-mass AGB stars. This is true for our photometry as well as for the other existing data sets which attempt to address the B/R ratio (Brunish, Gallagher & Truran 1986). Based on their high luminosity, we can clearly identify 4 stars as RSGs. In this case, assuming the counts are dominated by the RMS errors in the stellar numbers, the B/R ratio for VII Zw 403 is 5$`\pm `$4. This ratio is consistent with the SMC (Z/10) numbers (cf. Langer & Maeder 1995). If additional stars in the red plume of VII Zw 403 are true red supergiants in the evolutionary sense, then the ratio goes down. This demonstrates the large effect that statistical and systematic errors have on empirical assessments of the B/R supergiant ratio.

### 3.9 From CMDs to integrated photometry — implications for detecting old stellar populations

In this section, we perform an exercise aimed at clarifying the interpretation of integrated photometry of BCDs. For this purpose, consider the NIC2 chip to be a single-aperture photometer of about 19” x 19” centered on the active star-formation regions of the BCD. As discussed in the previous section, HST single-star photometry and stellar-evolution tracks allow us to broadly distinguish the main stellar phases on the CMDs of VII Zw 403. We add the light of stars in different phases to investigate their contributions to the total light in the V<sub>o</sub>, I<sub>o</sub>, J<sub>o</sub>, and H<sub>o</sub> bands. In other words, we perform population synthesis based on single-star colors and luminosities. The results of the summation are shown in Table 2. We divide the stars into three categories: stars in the blue plume (MS, BSG and BL stars), red stars above the TRGB (RSG and bright AGB stars), and red stars below the TRGB (RGB stars, some faint AGB stars, and possibly some BL stars). 
Among the stars below the TRGB, the dominant population is considered to be RGB stars. However, depending on the age and early SFH of this BCD (which we cannot completely determine from the morphology of the red tangle alone) there may also be faint AGB stars. Because we are positioned on the star-forming centers, some faint BL stars might contribute here; for the less massive stars, the blue and red portions of their evolution are not as widely separated on our diagrams as those for the more massive stars. Looking at Table 2, one simple result is clear: A few luminous stars outshine even a large number of faint stars. The light in an aperture centered on the star-forming regions is dominated in the I<sub>o</sub>, J<sub>o</sub>, and H<sub>o</sub> bands by RSG and luminous AGB stars. These obviously outshine the older RGB stars. The MS, BSG and BL stars belonging to the young and blue population are also significantly brighter in all three near-IR bands than the old stellar population. The sea of red giants so well resolved with HST contributes less than 15% to the integrated light in any of these filters. Notice also that three quarters of the light in V<sub>o</sub> comes from the young, blue stars, and less than 10% from the evolved stars, reinforcing the results of Schmidt, Alloin & Bica (1995) that even a small mass fraction of young stars superimposed on the background sheet of a dwarf Elliptical (dE) or dSph galaxy will completely outshine and render undetectable the underlying old population in optical passbands. Our results have implications for the integrated photometry of BCDs in general. The actively star-forming regions of BCDs are the visually brightest regions. Aperture photometry, including that in the near-IR (e.g. Tully et al. 1981, Thuan 1983, 1985), was traditionally carried out with photometric apertures centered on these bright H-II regions. The apertures used were usually about 10” in size, thus encompassing mainly the star-forming centers. 
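The bookkeeping behind such a summation amounts to converting magnitudes to fluxes and summing by group; a minimal sketch with invented magnitudes (not the actual VII Zw 403 photometry):

```python
def light_fractions(mags_by_group):
    """Fraction of the total flux contributed by each stellar group.
    Input: dict mapping group name -> list of magnitudes in one band."""
    flux = {g: sum(10.0 ** (-0.4 * m) for m in mags)
            for g, mags in mags_by_group.items()}
    total = sum(flux.values())
    return {g: f / total for g, f in flux.items()}

# illustration: a few bright RSG/AGB stars versus many faint RGB stars
frac = light_fractions({
    "RSG+AGB": [15.0, 15.5, 16.0],
    "RGB": [20.0 + 2.0] * 200,   # 200 red giants, each 7 mag fainter
})
```

In this toy example, 200 giants each 7 mag fainter than the brightest supergiant supply only about 14% of the total flux, illustrating how a few luminous stars dominate an integrated measurement.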
As our exercise suggests, such observations are dominated by the young (less than about 50 Myr old) red supergiants from the most recent star-formation activity and by luminous AGB stars with ages of less than a few Gyr. We concur with James (1994) that luminous AGB stars must be considered an important component of BCDs, contributing around half of the light in the star-forming region, and reflecting star-forming activity in the last few Gyr. It is therefore not possible to conclude, from integrated photometry centered on the starburst alone, whether an underlying, old ($`>`$10 Gyr) stellar population is present. Since even the near-IR bands predominantly measure the light from the younger stars, forming colors such as V-J or V-H which involve long baselines does not help to break this ambiguity. In Schulte-Ladbeck et al. (2000), we address the possibility of using spectrophotometric indices based on long-slit spectra that exclude the star-forming regions (see Hopp, Schulte-Ladbeck & Crone 1998) in order to age-date the background sheets of BCDs. This method has its roots in the work on spectrophotometric dating of Elliptical galaxies (e.g. Worthey 1994); and while it suffers from its own ambiguities, it may represent an important alternative to single-star photometry for dating distant BCDs in the future. The morphological properties of VII Zw 403 are representative of the vast majority of BCDs. We have previously demonstrated (SHCG99) the presence of a “core-halo” morphology for the resolved stars in this galaxy. Young stars are exclusively found in the core near the center of the extended light distribution of the halo (approximately in the inner 40”). At large radii (out to 100” or about 4 disk scale lengths), this young population is absent, but we find some AGB stars and a very strong red tangle dominated by RGB stars. 
As discussed in Schulte-Ladbeck & Hopp (1998), stellar population synthesis of integrated colors in the outer halos of BCDs (using color gradients derived from multi-filter surface photometry) suggests that even here, a mixed-age population (resulting from a complex SFH) is consistent with the data. In the case of VII Zw 403, where we know from single-star photometry that bright RSGs are absent in the halo, we have interpreted the halo color profile as evidence for the presence of an underlying population which is at least a few Gyr old (and possibly truly ancient, $`>`$ 10 Gyr old). It is likely that the halos of other iE BCDs are similarly composed of old and/or intermediate-age stars. One of the most interesting implications of our results, however, is that those BCDs which do not exhibit an outer halo do not necessarily lack old stars. It is entirely possible that their star-forming regions are scattered through an older population, totally obscuring it.

## 4 Summary and Conclusions

The quality of the NICMOS photometry in terms of crowding and limiting magnitudes represents an improvement by several apparent magnitudes over existing ground-based near-IR data of similar galaxies. In terms of absolute magnitudes, comparable data are only available for the MCs. The results of our single-star photometry demonstrate that all of the important stellar phases of composite stellar populations can be traced on near-IR CMDs. The optical/near-IR CMDs of VII Zw 403 show the “missing” Blue Hertzsprung Gap and a blue-to-red supergiant ratio of about 5, providing new input for stellar-evolution models at low metallicity. 
Several steps could be undertaken to improve our analysis: 1) modeling of stellar atmospheres and stellar evolution at the low metallicities appropriate for the earliest stellar populations in dwarf galaxies; 2) additional modeling and observations of the AGB stellar phase, to better understand the effects of AGB ages and metallicities; 3) near-IR observations providing a uniform set of cluster CMDs in a well-established JHK system, for comparison of such simple stellar populations with stellar theory and galaxy observations. Our data reach the RGB of VII Zw 403 to completeness levels that allow a secure measurement of the J, H magnitudes at the tip. We compare our results to those of clusters, DeNIS data of the MCs, and the Padova tracks of stellar evolution, and give a conversion from apparent to absolute TRGB magnitude. Providing a TRGB fiducial in the near-IR has advantages for estimating the distances to more distant galaxies. As red giants are observed near the peak of their energy distributions, the near-IR TRGB method can potentially reach to larger distances than the optical TRGB method. It may also prove useful for highly reddened galaxies. The bolometric luminosity function of the red stars shows that for any stellar component other than the red giants, the RMS error in the star counts is large, even in the near-IR. Statistical fluctuations in stellar numbers in low-luminosity galaxies are well known to become large for massive stars. We caution against using the method of the brightest red supergiants as a distance indicator, even in the near-IR. Summing up the light of the resolved stars in different evolutionary phases, we illustrate that published integrated photometry of BCDs, in both the optical and the near-IR, is dominated by the light from luminous, young red supergiants from the current starburst (the last 50 Myr or so) and from luminous young and intermediate-age AGB stars. 
Attempts to distinguish an old stellar component using near-IR colors are bound to fail because of the color degeneracy among RSG, AGB, and RGB stars. Even a large number of red giants is undetectable underneath just a few luminous RSG and/or AGB stars. This shows why contradictory conclusions have been reached in the literature regarding the oldest stellar components of BCDs. The purpose of our study is to use SFHs to answer very general questions about the nature of BCDs: Are they “young” galaxies? Are they related to the faint blue excess? We note that AGB stars are clearly identified in the resolved stellar content, and also contribute significantly to the integrated near-IR light of VII Zw 403. The presence of an intermediate-age stellar population suggests that star formation was active in this galaxy at times which correspond to redshifts of a few tenths. VII Zw 403 is clearly not a young galaxy. We argue that the AGB stars may provide a further link between the type iE BCDs and the faint-blue-galaxy population. The trick will be to determine whether or not this component of AGB stars can correctly account for the right amount of star formation at the right time, to identify iE galaxies as the (non-merged) remnants of the (CNELG) faint blue excess. In conclusion, we present the first exploration of the near-IR CMD of a BCD galaxy, to limiting magnitudes deep enough to detect the evolved descendants of low-mass stars. Together with recent results regarding the structural parameters of BCDs (e.g. Sung et al. 1998), our data provide further support for the idea that type iE BCDs possess dynamically relaxed, old stellar populations. The nature of the iE BCDs is inferred to be one of flashing dEs/dSphs. An interrelated evolution between BCD and dE/dSph galaxy types (via gas cycling through starbursts) could potentially account for precursors as well as remnants of the faint blue galaxies. 
We acknowledge financial support through archival HST research grants to RSL (AR-06404.01-95A, AR-08012.01-96A) and guest observer HST grants to RSL and MMC (GO-7859.01-96A). This project benefited from the hospitality of the Universitätssternwarte München, where RSL found a stimulating environment during her sabbatical leave from the University of Pittsburgh. We are grateful to Dr. M. Dickinson for sharing his time and software to help us remove the pedestal in the NICMOS data. We also thank D. Gilmore for excellent support during our visit to STScI. We further thank Dr. L. Origlia for sharing her HST magnitudes of the Bessell, Castelli & Plez (1998) stellar atmospheres with us prior to publication. UH acknowledges financial support from SFB375.
# The ortho-to-para ratio of ammonia in the L1157 outflow Based on observations made at the Nobeyama Radio Observatory (NRO). Nobeyama Radio Observatory is a branch of the National Astronomical Observatory, an inter-university research institute operated by the Ministry of Education, Science, Sports, and Culture, Japan. ## 1 INTRODUCTION The inversion lines of metastable ammonia have been extensively used for observations of dense ($`\sim `$10<sup>4</sup> cm<sup>-3</sup>) molecular cloud cores (e.g. Myers & Benson 1983; Benson & Myers 1989), although the fractional abundance of NH<sub>3</sub> varies among the clouds (Benson & Myers 1983). On the basis of a systematic survey, Suzuki et al. (1992) pointed out that NH<sub>3</sub> tends to be more abundant in star-forming cores than in starless cores. Such a trend is interpreted in terms of gas-phase chemical evolution, which predicts that NH<sub>3</sub> is deficient in the early stages of chemical evolution and becomes abundant in the later stages (e.g. Herbst & Leung 1989; Suzuki et al. 1992). In addition to the gas-phase chemical evolution, the gas-grain interaction is also considered to be important for the abundance variation of NH<sub>3</sub> (e.g. d’Hendecourt, Allamandola, & Greenberg 1985; Brown & Charnley 1990; Nejad, Williams, & Charnley 1990; Flower, Pineau des Forêts & Walmsley 1995). In star-forming dense cores, the NH<sub>3</sub> molecules retained on dust grains can be released into the gas phase by shock waves caused by the outflow and by radiation from the newly formed star (e.g. Nejad et al. 1990; Flower et al. 1995), and are expected to increase the NH<sub>3</sub> abundance. Usually, it is not easy to distinguish the contribution of molecules desorbed from dust grains from that of molecules formed in the gas phase. In the case of NH<sub>3</sub>, however, the ortho-to-para ratio provides us with such information. 
Ammonia has two distinct species, ortho-NH<sub>3</sub> ($`K=3n`$) and para-NH<sub>3</sub> ($`K\ne 3n`$), which arise from different relative orientations of the three hydrogen spins. The ortho-to-para ratio is expected to take the statistical equilibrium value of 1.0 when the NH<sub>3</sub> molecules are formed in gas-phase reactions or grain-surface reactions. This is because the reactions that form NH<sub>3</sub> release excess energies that are large compared to the energy difference between the lowest ortho and para states, which drives the ortho-to-para ratio to the statistical value. However, the ortho-to-para ratio is expected to be larger than unity when NH<sub>3</sub> molecules adsorbed on grain surfaces are released into the gas phase with an excess energy for desorption comparable to the energy difference between the ortho and para states. Because the lowest energy level of the para species lies 23 K above that of the ortho species, the para species requires more energy for desorption than the ortho species. The time scale of the interconversion between ortho-NH<sub>3</sub> and para-NH<sub>3</sub> in the gas phase is considered to be of the order of 10<sup>6</sup> yr (Cheung et al. 1969). Therefore, the ortho-to-para ratio provides us with valuable information on the physical conditions and chemical processes at the time the NH<sub>3</sub> molecules were released into the gas phase. Since the lowest energy level of ortho-NH<sub>3</sub>, ($`J,K`$) = (0, 0), has no inversion doubling, it is necessary to observe transitions at ($`J,K`$) = (3, 3) and higher to measure the ortho-to-para ratio. Such high transitions are hardly excited in cold dark clouds; recent observations have revealed, however, that shocked gas associated with molecular outflows in dark clouds is heated enough to excite the transitions at ($`J,K`$) = (3, 3) and higher. 
One of the prototypical objects with shock-heated gas is the bipolar outflow in the dark cloud L1157 (e.g. Umemoto et al. 1992), which is located at 440 pc from the Sun (Viotti 1969). Previous NH<sub>3</sub> observations have revealed that the gas in the outflow is heated to more than 50–100 K (Bachiller, Martín-Pintado, & Fuente 1993; Tafalla & Bachiller 1995). The good morphological coincidence of the NH<sub>3</sub> distribution (Tafalla & Bachiller 1995) with that of the SiO ($`J`$=2–1) emission (Zhang et al. 1995; Gueth et al. 1998), which is considered to be a good tracer of shocked molecular gas, indicates that the hot ammonia arises from the shocked gas. The NH<sub>3</sub> abundance enhancement observed in the shocked gas (Tafalla & Bachiller 1995) suggests the possibility that NH<sub>3</sub> retained on grain mantles is released into the gas phase by shocks. In this Letter, we report observations of six metastable inversion lines of NH<sub>3</sub>, from ($`J,K`$) = (1, 1) to (6, 6), toward the blueshifted lobe of the L1157 outflow. We have detected the high-excitation NH<sub>3</sub> (5, 5) and (6, 6) lines for the first time in a low-mass star-forming region. Of the six observed transitions, the (3, 3) and (6, 6) states belong to ortho-NH<sub>3</sub> and the other four states belong to para-NH<sub>3</sub>. The detection of both the (6, 6) and (3, 3) emission enables us to measure the ortho-to-para ratio, which provides us with information on the contribution of ammonia molecules desorbed from grains. ## 2 OBSERVATIONS The observations were carried out in 1991 June and 1992 May with the 45m telescope of the Nobeyama Radio Observatory. We observed six inversion transitions of metastable ammonia, ($`J,K`$) = (1, 1), (2, 2), (3, 3), (4, 4), (5, 5), and (6, 6), toward three positions along the blueshifted lobe of the CO outflow. 
The position offsets from IRAS 20386+6751 (hereafter referred to as L1157 IRS), $`\alpha `$(1950) = 20<sup>h</sup>38<sup>m</sup>39.<sup>s</sup>6, $`\delta `$(1950) = 67°51<sup>′</sup>33<sup>′′</sup>, are (0<sup>′′</sup>, 0<sup>′′</sup>), (20<sup>′′</sup>, -60<sup>′′</sup>), and (45<sup>′′</sup>, -105<sup>′′</sup>), which were referred to as positions A, B, and C, respectively, by Mikami et al. (1992). Positions B and C correspond to the tips of two CO cavities (Gueth, Guilloteau, & Bachiller 1996), toward which strong NH<sub>3</sub> (3, 3) and SiO ($`J`$=2–1) emission lines are observed (Tafalla & Bachiller 1995; Zhang et al. 1995; Gueth et al. 1998). We also observed the (1, 1), (2, 2), and (3, 3) transitions toward two additional positions outside of the lobe and one position toward the peak of the redshifted CO emission. The observed positions are shown in Figure 1, superposed on the CO ($`J`$=1–0) outflow map observed by Umemoto et al. (1992). The lowest three transitions were observed simultaneously, as were the higher three. At the observed frequencies, the telescope had a beam size of 72<sup>′′</sup> and a beam efficiency of 0.8. The frontend was a cooled HEMT receiver whose typical system noise temperature was $`\sim `$200 K. The backend was a bank of eight acousto-optical spectrometers with 37 kHz resolution, which corresponds to a velocity resolution of 0.50 km s<sup>-1</sup>. All the observations were made in position switching mode. We integrated for 45 minutes per position; the resulting rms noise level per channel was $`\sim `$0.02 K. ## 3 RESULTS In Figure 2, we show the spectra of the six transitions observed at positions A, B, and C. At position A, a strong (1, 1) line and weak (2, 2) and (3, 3) lines are observed. No emission from the transitions higher than (4, 4) has been detected. 
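As a consistency check on the spectrometer setup described in section 2, the velocity resolution follows from $`\mathrm{\Delta }v=c\mathrm{\Delta }\nu /\nu `$. The short sketch below assumes an observing frequency near the NH<sub>3</sub> (1, 1) rest frequency of 23.694 GHz, a value not quoted explicitly in the text:

```python
# Velocity resolution corresponding to the quoted 37 kHz channel width.
# The line frequency is an assumption: the NH3 (1,1) inversion line rest
# frequency, ~23.694 GHz (the six observed lines span roughly 23.7-25.1 GHz).
c_km_s = 299792.458        # speed of light [km/s]
delta_nu = 37e3            # spectrometer resolution [Hz]
nu = 23.694e9              # assumed observing frequency [Hz]

delta_v = c_km_s * delta_nu / nu
print(f"velocity resolution: {delta_v:.2f} km/s")
```

At the (6, 6) frequency near 25 GHz the value is slightly smaller; in either case it is close to the 0.50 km s<sup>-1</sup> quoted in the text.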
The (1, 1) line observed at position A shows a quiescent component peaked at V<sub>LSR</sub> = 2.85 km s<sup>-1</sup>, which is almost the same as the cloud systemic velocity of $`\sim `$2.7 km s<sup>-1</sup> derived from the <sup>13</sup>CO observations (Umemoto et al. 1992), with a weak blueshifted wing. The (2, 2) and (3, 3) lines appear in the velocity range of the (1, 1) wing emission, suggesting that these line profiles are contaminated by the blueshifted component in the southern lobe because of the large beam size of 72<sup>′′</sup>. Since the (2, 2) and (3, 3) lines show no significant emission at the velocity of the quiescent component, the rotational temperature, $`T_{\mathrm{rot}}`$, of the quiescent component is estimated to be $`\sim `$10 K. At positions B and C, all six transitions up to (6, 6) were detected; the (5, 5) and (6, 6) lines are detected here for the first time in a low-mass star-forming region. All transition lines except the (1, 1) one have peak velocities blueshifted by 1–2 km s<sup>-1</sup> from the cloud systemic velocity and show broad widths of $`\mathrm{\Delta }V`$ $`\sim `$4–10 km s<sup>-1</sup> (measured at the 1$`\sigma `$ level), indicating that the emission from the higher energy levels arises from the high-velocity gas. The (3, 3) line profiles at positions B and C resemble the profiles of the SiO ($`J`$=2–1) and CS ($`J`$=2–1) lines observed by Mikami et al. (1992). Among the six transition lines, the (3, 3) line is the strongest, as previously pointed out by Bachiller et al. (1993). The peak brightness temperatures of the (2, 2), (3, 3), and (4, 4) lines observed at position B are lower by a factor of 1.5–2 than those observed by Bachiller et al. (1993) at the same position. This may be due to a beam dilution effect, because the spatial extent of the ammonia-emitting area is smaller than our beam (Bachiller et al. 1993; Tafalla & Bachiller 1995). 
Toward the three other positions outside the blueshifted lobe, we have detected only the (1, 1) emission line at the systemic velocity. ## 4 DISCUSSION ### 4.1 Temperature and Ortho-to-para Ratio of the High-Velocity Gas To obtain the rotational temperature $`T_{\mathrm{rot}}`$ of the blueshifted gas, we assumed optically thin emission and constructed rotation diagrams, i.e., plots of the logarithm of the NH<sub>3</sub> column density divided by the statistical weight of the transition against the energy above the ground state. The contribution of the quiescent component to the (1, 1) line was eliminated by performing a multi-component gaussian fitting. The rotational temperature estimated from the (1, 1) and (2, 2) data, $`T_{\mathrm{rot}}`$(1,1; 2,2), is 38 K at position B and 48 K at position C. These temperatures are a factor of 4 to 5 higher than that of the quiescent component at position A. Figure 3 shows that the $`T_{\mathrm{rot}}`$ obtained from the data of the higher transitions are significantly higher than $`T_{\mathrm{rot}}`$(1,1; 2,2). The rotational temperature obtained from the para-NH<sub>3</sub> (4, 4) and (5, 5) data, $`T_{\mathrm{rot}}`$(4,4; 5,5), is $`\sim `$130 K at both positions B and C. The slopes between the ortho-NH<sub>3</sub> (3, 3) and (6, 6) data are almost parallel to those between the para-NH<sub>3</sub> (4, 4) and (5, 5) ones, indicating that $`T_{\mathrm{rot}}`$(3,3; 6,6) is comparable to $`T_{\mathrm{rot}}`$(4,4; 5,5). It should be noted that the column densities of the ortho species are higher than those of the para species; this suggests that the ortho species are more abundant than the para species. If we assume an ortho-to-para ratio of $`\sim `$1.5, the data of the four transitions involving the ortho and para states align on straight lines; the best fits give an ortho-to-para ratio of 1.7$`{}_{-0.3}^{+0.2}`$ at position B and 1.3$`\pm `$0.2 at position C. 
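The rotation-diagram and ortho-to-para analysis described above can be illustrated with a short synthetic sketch. The level energies below are approximate values for the metastable states, and the single-temperature model with a uniform ortho enhancement is a simplification of the actual fit, not a reproduction of it:

```python
import numpy as np

# Approximate energies (K) above ground of the metastable inversion levels.
E = {'11': 23.0, '22': 64.0, '33': 123.0, '44': 200.0, '55': 295.0, '66': 408.0}
ortho = {'33', '66'}              # K = 3n levels; the other four are para

# Synthetic ln(N/g) data from a single rotational temperature, with the
# ortho levels enhanced by an assumed ortho-to-para ratio (illustrative values).
T_true, opr_true = 130.0, 1.5
y = {k: -E[k] / T_true + (np.log(opr_true) if k in ortho else 0.0) for k in E}

# Fit a straight line to the para points alone: slope = -1/T_rot.
para = sorted(set(E) - ortho)
slope, intercept = np.polyfit([E[k] for k in para], [y[k] for k in para], 1)
T_rot = -1.0 / slope

# The ortho points sit above the para line by ln(OPR).
offsets = [y[k] - (slope * E[k] + intercept) for k in ortho]
opr = float(np.exp(np.mean(offsets)))
print(f"T_rot = {T_rot:.1f} K, OPR = {opr:.2f}")   # recovers the input values
```

With noiseless input the fit recovers the assumed temperature and ratio exactly; with real data the quoted uncertainties come from the scatter of the points about the line.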
We then obtain $`T_{\mathrm{rot}}`$ = 140$`{}_{-3}^{+4}`$ K at position B (the upper and lower limits of $`T_{\mathrm{rot}}`$ correspond to those of the ortho-to-para ratio, respectively) and $`T_{\mathrm{rot}}`$ = 125$`{}_{-4}^{+3}`$ K at position C. The discrepancy between $`T_{\mathrm{rot}}`$(1,1; 2,2) and the $`T_{\mathrm{rot}}`$ derived from the higher transition data could be explained by two components of gas with different kinetic temperatures, as suggested by Avery & Chiao (1996) from their SiO observations. However, we consider that most of the gas traced by the NH<sub>3</sub> emission is heated to 130–140 K, for the following reason. It is known that the metastable populations may deviate from a true Boltzmann distribution due to collisional depopulation of the higher nonmetastable levels. As argued by Danby et al. (1988), $`T_{\mathrm{rot}}`$ tends to underestimate the gas kinetic temperature $`T_\mathrm{k}`$ in the range of $`T_\mathrm{k}`$ higher than $`\sim `$30 K. Since the difference is more pronounced for the $`T_{\mathrm{rot}}`$ derived from the lower-$`J`$ transitions, the $`T_{\mathrm{rot}}`$ estimated from the higher transitions is considered a better indicator of the kinetic temperature than $`T_{\mathrm{rot}}`$(1,1; 2,2). Two such rotational temperatures were also observed in the M17SW molecular cloud (Güsten & Fiebig 1988). ### 4.2 Ammonia Abundance Enhancement The beam-averaged NH<sub>3</sub> column density in the quiescent gas at position A is estimated to be $`N`$(NH<sub>3</sub>) = 2 $`\times `$ 10<sup>14</sup> cm<sup>-2</sup>. 
Employing the H<sub>2</sub> column density of the ambient gas derived from the <sup>13</sup>CO and C<sup>18</sup>O observations ($`N`$(H<sub>2</sub>) = 3$`\times `$10<sup>21</sup> cm<sup>-2</sup>), the NH<sub>3</sub> abundance of the quiescent component is estimated to be $`X`$(NH<sub>3</sub>) = 7 $`\times `$10<sup>-8</sup>, which is comparable to the values measured in nearby dense cores ($`X`$(NH<sub>3</sub>) = (3–10) $`\times `$10<sup>-8</sup>; Benson & Myers 1983). The $`N`$(NH<sub>3</sub>) in the blueshifted gas averaged over the beam at positions B and C are calculated to be 1$`\times `$10<sup>14</sup> cm<sup>-2</sup> and 7$`\times `$10<sup>13</sup> cm<sup>-2</sup>, respectively. The H<sub>2</sub> column densities of the blueshifted gas were estimated from the CO ($`J`$=1–0) data obtained at the NRO 45m telescope (beam size 16<sup>′′</sup>) by convolving the data with the 72<sup>′′</sup> gaussian beam and assuming optically thin CO emission, $`T_{\mathrm{ex}}`$ = 130 K, and an H<sub>2</sub>/CO abundance ratio of 10<sup>4</sup>. They are estimated to be 7$`\times `$10<sup>21</sup> cm<sup>-2</sup> toward position B and 2$`\times `$10<sup>21</sup> cm<sup>-2</sup> toward position C. By using these H<sub>2</sub> column densities, we obtained $`X`$(NH<sub>3</sub>) = 1$`\times `$10<sup>-7</sup> at position B and $`X`$(NH<sub>3</sub>) = 3$`\times `$10<sup>-7</sup> at position C, which are a factor of 2–5 higher than that of the quiescent component. The NH<sub>3</sub> column densities derived by Tafalla & Bachiller (1995) from their high-resolution VLA data are $`\sim `$5$`\times `$10<sup>14</sup> cm<sup>-2</sup> toward both positions B and C. 
If we compare these values with the $`N`$(H<sub>2</sub>) in the 16<sup>′′</sup> beam derived from the CO ($`J`$=1–0) data by assuming an excitation temperature of 130 K, which are 2$`\times `$10<sup>21</sup> cm<sup>-2</sup> for position B and 8$`\times `$10<sup>20</sup> cm<sup>-2</sup> for position C, we obtain $`X`$(NH<sub>3</sub>) = 3$`\times `$10<sup>-7</sup> and $`X`$(NH<sub>3</sub>) = 7$`\times `$10<sup>-7</sup> for positions B and C, respectively. The H<sub>2</sub> column densities used here may be somewhat underestimated because the sizes of the NH<sub>3</sub> enhanced regions ($`<`$ 10<sup>′′</sup>) in the map of Tafalla & Bachiller (1995) are smaller than the beam size of the CO data. When we take this into account, the ammonia abundances obtained from the higher resolution data are consistent with those from the 72<sup>′′</sup> resolution data. Therefore, we conclude that the NH<sub>3</sub> abundance in the shocked regions is enhanced by a factor of $`\sim `$5. ### 4.3 Ortho-to-para Ratio and Its Implication for the Contribution of Desorbed Ammonia The derived ortho-to-para ratio in the blueshifted gas, which is larger than the statistical value of 1.0, suggests that a significant amount of the NH<sub>3</sub> observed in the blueshifted gas has been evaporated from dust grains. The evaporation of ammonia is supported by the fact that the ammonia abundance is enhanced in the blueshifted gas. If we assume that all of the NH<sub>3</sub> molecules observed in the blueshifted gas were desorbed from dust grains, the observed ratio of 1.3–1.7 suggests that the population of the rotational levels of NH<sub>3</sub> at the time of ejection from the grain surface is represented by a Boltzmann distribution with a temperature of 18–25 K. 
This formation temperature would be related to the excess energy distributed to the rotational degrees of freedom in the desorption process, and is not necessarily equal to the dust temperature or the gas kinetic temperature (Takayanagi, Sakimoto, & Onda 1987). It is most likely that the NH<sub>3</sub> molecules retained on grains have been provided with sufficient energy to desorb from the grain surfaces by the passage of shocks (e.g. Nejad et al. 1990; Flower et al. 1995). The rotational temperature of 130–140 K suggests that shock heating may be responsible for the desorption of the NH<sub>3</sub> molecules. Sandford & Allamandola (1993) revealed that the sublimation of ammonia increases drastically as a function of temperature: the residence time of NH<sub>3</sub> on a dust grain, which is 10<sup>13</sup> yr at 40 K, becomes only 10<sup>-7</sup> yr at 100 K. Recently, infrared spectroscopic observations with the Infrared Space Observatory (ISO) and ground-based telescopes reported that the abundance of solid NH<sub>3</sub> (relative to H<sub>2</sub>O) in icy grain mantles is no more than a few percent (e.g. Whittet et al. 1996a, b). These results imply that NH<sub>3</sub> desorbed from grains is less important than previously expected (e.g. d’Hendecourt et al. 1985; Brown & Charnley 1990; Nejad et al. 1990). However, the ortho-to-para ratio measured in the L1157 outflow indicates that a significant amount of NH<sub>3</sub> arises from dust grains and that gas-grain chemistry plays an important role in determining the NH<sub>3</sub> abundance. We would like to thank the staff of NRO for the operation of the telescope during our observations and for their support in data reduction. We also thank Drs. S. Takano, K. Kawaguchi, Y. Hirahara, Y. Taniguchi, and J. Takahashi for helpful comments. N.H. acknowledges support from a Grant-in-Aid from the Ministry of Education, Science, Sports, and Culture of Japan, No. 09640315.
# Eigenvalue Distribution In The Self-Dual Non-Hermitian Ensemble ## I Introduction and Classification There are ten known universality classes of Hermitian random matrices. Dyson proposed the existence of three symmetry classes, depending on spin and the existence of time-reversal symmetry. These give the three classes known as Gaussian Unitary, Orthogonal, and Symplectic (GUE, GOE, GSE). Another three ensembles are the chiral Gaussian ensembles (chGUE, chGOE, chGSE). These ensembles are of relevance to low energy QCD. Altland and Zirnbauer introduced four more ensembles which can appear in superconducting systems. Finally, Zirnbauer demonstrated a relationship between the different classes of random matrix theory and symmetric spaces, and from this argued that the ten distinct known universality classes exhaust all possible universality classes. In this section we discuss various universality classes of non-Hermitian random matrices, including the ensemble of arbitrary self-dual matrices, the subject of this paper. We mention the concept of weak non-Hermiticity, but do not consider it further in this paper. We argue that the various classes of non-Hermitian matrices, the self-dual ensemble and four others, exhaust all possible universality classes. Finally, possible applications of the self-dual ensemble are discussed, including relations with the one-component plasma. In section II, we further discuss the relationship with the one-component plasma. In section III, numerical results for the self-dual ensemble are discussed, in particular the eigenvalue density as a function of radius and the two-eigenvalue correlation functions. Several ensembles of non-Hermitian random matrices are common in the literature. Ginibre introduced three classes of such matrices: one is an ensemble of matrices with arbitrary complex elements, one an ensemble with arbitrary real elements, and the third an ensemble with arbitrary real quaternion elements. 
Another ensemble of non-Hermitian matrices is an ensemble of complex, symmetric matrices. This ensemble arises particularly in problems of open quantum systems. This gives a total of four known universality classes. For each of these ensembles, there exists a weakly non-Hermitian version of that ensemble. This idea of weak non-Hermiticity was introduced by Fyodorov et al. In this case the anti-Hermitian part of the matrix is small; we only consider strongly non-Hermitian matrices in the present paper and do not consider weakly non-Hermitian matrices, even though they are the most relevant for scattering problems. The strongly non-Hermitian ensembles can be obtained from a general three-parameter family of non-Hermitian matrices introduced by Fyodorov et al. This family includes parameters measuring the strength of the real and imaginary, symmetric and anti-symmetric parts of the matrix. By adjusting the parameters, one can obtain the various ensembles. One possibility, which does not appear to have been considered much, is an ensemble of anti-symmetric matrices with arbitrary complex elements. Now, let us show that this ensemble is equivalent to an ensemble of self-dual matrices with arbitrary complex elements; this is the ensemble considered in this paper. Let $`A`$ be an arbitrary anti-symmetric matrix. Let $`Z`$ be the matrix given by $$\left(\begin{array}{ccccc}0& 1& & & \\ -1& 0& & & \\ & & 0& 1& \\ & & -1& 0& \\ & & & & \ddots \end{array}\right)$$ (1) Then, $`Z^\mathrm{T}=-Z`$ and $`Z^2=-1`$. Let $`M=ZA`$. It is trivial to verify that $`ZM^\mathrm{T}Z=-M`$. So, $`M`$ is self-dual. The advantage of using self-dual matrices instead of anti-symmetric matrices is that self-dual matrices have pairs of equal eigenvalues while anti-symmetric matrices have pairs of opposite eigenvalues; this makes the correlation functions clearer. 
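The equivalence between the anti-symmetric and self-dual ensembles, and the pairing of their eigenvalues, can be verified numerically. The sketch below assumes the symplectic convention $`Z^2=-1`$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4                                    # number of 2x2 blocks; matrices are 2N x 2N

# Symplectic unit: block-diagonal copies of [[0, 1], [-1, 0]],
# so that Z^T = -Z and Z^2 = -1 (the convention assumed here).
Z = np.kron(np.eye(N), np.array([[0.0, 1.0], [-1.0, 0.0]]))

# An arbitrary complex anti-symmetric matrix A (A^T = -A) ...
B = rng.normal(size=(2 * N, 2 * N)) + 1j * rng.normal(size=(2 * N, 2 * N))
A = B - B.T

# ... maps to a self-dual matrix M = ZA, satisfying Z M^T Z = -M.
M = Z @ A
assert np.allclose(Z @ M.T @ Z, -M)

# Anti-symmetric matrices have pairs of opposite eigenvalues ...
evA = np.linalg.eigvals(A)
for lam in evA:
    assert np.min(np.abs(evA + lam)) < 1e-8

# ... while self-dual matrices have pairs of equal eigenvalues.
evM = np.linalg.eigvals(M)
for lam in evM:
    assert np.sort(np.abs(evM - lam))[1] < 1e-5
```

The pairing of equal eigenvalues is what allows the self-dual spectrum to be read directly as a set of "charges", rather than as a set of opposite-sign pairs.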
When choosing matrices from the ensemble, we will use the Gaussian weight $$e^{-\frac{1}{2}\mathrm{Tr}(M^{\dagger }M)}$$ (2) Given these five classes (the three ensembles of Ginibre together with the symmetric non-Hermitian and the self-dual non-Hermitian ensembles), let us ask whether all possible universality classes of strongly non-Hermitian random matrices have been found. Feinberg and Zee introduced the method of hermitization for non-Hermitian matrices. A similar technique was used by Efetov. The basic idea is to take a non-Hermitian matrix $`M-E`$, where $`E`$ is a complex number, and form the Hermitian matrix $$H=\left(\begin{array}{cc}M_h-E_R& M_a-iE_I\\ M_a^{\dagger }+iE_I& M_h+E_R\end{array}\right)$$ (3) where $`M_h`$ is the Hermitian component of $`M`$, $`M_a`$ is the anti-Hermitian component of $`M`$, and $`E_R`$ and $`E_I`$ are the real and imaginary components of $`E`$. Equivalently, one can form the Hermitian matrix $$H=\left(\begin{array}{cc}0& M-E\\ M^{\dagger }-\overline{E}& 0\end{array}\right)$$ (4) From the zero eigenvalues of $`H`$, one may extract the zero eigenvalues of $`M-E`$. So, to each universality class of non-Hermitian random matrices, there corresponds a universality class of Hermitian random matrices. If we hermitize the three non-Hermitian ensembles introduced by Ginibre, we obtain the three chiral ensembles (chGUE, chGOE, chGSE). The relation with the chiral ensembles is most clear using equation (4), instead of equation (3). If we hermitize the ensemble of symmetric, complex matrices we obtain the ensemble with symmetry class CI, according to the nomenclature of Altland and Zirnbauer. If we hermitize the ensemble of self-dual complex matrices, we obtain the ensemble with symmetry class DIII. Here the relation with the Hermitian ensembles is most clear using equation (3). 
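The hermitization trick can be checked directly: the block matrix of equation (4) is Hermitian by construction, and it acquires a zero eigenvalue exactly when $`E`$ is an eigenvalue of $`M`$. A minimal numerical sketch, using a generic complex matrix for $`M`$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

E = np.linalg.eigvals(M)[0]          # choose E equal to an eigenvalue of M

# Hermitization as in equation (4): H = [[0, M - E], [M^dag - conj(E), 0]]
H = np.block([
    [np.zeros((n, n)), M - E * np.eye(n)],
    [M.conj().T - np.conj(E) * np.eye(n), np.zeros((n, n))],
])

assert np.allclose(H, H.conj().T)    # H is Hermitian by construction

# Since E is an eigenvalue of M, M - E is singular, so H has a zero eigenvalue.
assert np.min(np.abs(np.linalg.eigvalsh(H))) < 1e-8
```

The eigenvalues of this block form of $`H`$ are plus and minus the singular values of $`M-E`$, which is what ties the zero modes of $`H`$ to the spectrum of the non-Hermitian matrix.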
The other five classes of Hermitian random matrices cannot be obtained by hermitizing a non-Hermitian ensemble: the GOE, GUE, and GSE classes lack the needed block structure, while the C and D ensembles lack the symmetry that relates the elements in the upper left and lower right blocks. This suggests that all possible universality classes of non-Hermitian matrices have been obtained. One possible interest in the ensemble of self-dual complex matrices would be experimental, such as in open systems with spin-orbit scattering. Another interest is theoretical, considering the relationship of this ensemble to the $`\beta =4`$ one-component plasma in two dimensions. Although the level distribution in the ensemble differs from the distribution of charges in the plasma, there are some close relations between the two, discussed further in the next section. It is known that the ensemble of matrices with arbitrary complex elements is equivalent to the $`\beta =2`$ plasma. The correlation function of the $`\beta =2`$ system is monotonic, with Gaussian decay. From perturbative calculations, it has been suggested that, for $`\beta >2`$, the two-level correlation function becomes non-monotonic, indicating the appearance of short-range order. This makes it very interesting to examine the correlation function of the ensemble of self-dual matrices, although no significant sign of any non-monotonicity is found in the numerical calculations here. Numerical calculations on the one-component plasma suggest that there is a phase transition at $`\beta \simeq 144`$; so, any order that exists for $`\beta =4`$ must be short range. An exact study for a finite number of particles showed non-monotonicity of the correlation functions for $`\beta =4,6`$. Even for $`\beta =4`$ there is a definite peak in the correlation function. 
## II $`\beta =4`$ One-Component Plasma Consider a system of $`N`$ particles, located at positions $`z_i`$, with partition function $$\int \prod _{i=1}^{N}dz_id\overline{z}_i\phantom{\rule{0.2em}{0ex}}e^{-|z_i|^2}\prod _{i<j}e^{\beta \mathrm{log}(|z_i-z_j|)}$$ (5) This defines the two-dimensional one-component plasma. For $`\beta =4`$, there exists some relation between this system and the ensemble considered here. First, the density of the plasma, $`\rho `$, is equal to $`\frac{1}{2\pi }`$, where the density is measured in charges per unit area. The plasma has constant charge density $`\rho `$ in a disc about the origin, and vanishing charge density outside. The self-dual ensemble has the same charge density, as found numerically in the next section, and as can be shown with a replica or SUSY technique. Second, there exists a relationship between the joint probability distribution (j.p.d.) of the eigenvalues of $`M`$ and the probability distribution of charges in the one-component plasma. The j.p.d. of the eigenvalues of $`M`$ is different from the charge distribution in the plasma, but we will argue that for widely separated eigenvalues the j.p.d. of the eigenvalues behaves the same as the probability distribution of the charges. Let $`M`$ be a matrix within the ensemble of self-dual, complex matrices. We can write $`M`$ as $`M=X\mathrm{\Lambda }X^{-1}`$, where $`\mathrm{\Lambda }`$ is a diagonal matrix of eigenvalues of $`M`$. The eigenvalues of $`\mathrm{\Lambda }`$ exist in pairs, with $`[\mathrm{\Lambda },Z]=0`$. The requirement that $`M`$ be self-dual is equivalent to the requirement that $`X^\mathrm{T}Z=ZX^{-1}`$ and $`Z(X^{-1})^\mathrm{T}=XZ`$; if this constraint on $`X`$ holds it is easy to verify that $`ZM^\mathrm{T}Z=-M`$. If we were to impose the additional constraint that $`X`$ be unitary, then we would find that $`X`$ must be an element of the symplectic group. In this case, with $`X`$ in the symplectic group, the matrix $`M`$ must be normal, such that $`[M,M^{\dagger }]=0`$. 
In this case the distribution of eigenvalues of $`M`$ exactly matches the charge distribution in the $`\beta =4`$ plasma. In the general case, $`M`$ is not normal and $`X`$ is not unitary, and the distribution of eigenvalues of $`M`$ will be different from the charge distribution of the plasma. Still, consider a situation in which we fix $`\mathrm{\Lambda }`$ and integrate over $`X`$, with Gaussian weight $`e^{-\frac{1}{2}\mathrm{Tr}(M^{\dagger }M)}`$. This is how one obtains the j.p.d. of the eigenvalues. The measure $`[\mathrm{d}M]`$ on matrices $`M`$ is equivalent to the measure $`[\mathrm{d}\lambda _i][\mathrm{d}X]\prod _{i<j}|\lambda _i-\lambda _j|^8`$. The j.p.d. of the eigenvalues is defined by $$\prod _{i<j}|\lambda _i-\lambda _j|^8\int [\mathrm{d}X]\phantom{\rule{0.2em}{0ex}}e^{-\frac{1}{2}\mathrm{Tr}(M^{\dagger }M)}$$ (6) with $`M=X\mathrm{\Lambda }X^{-1}`$. The Gaussian weight, $`e^{-\frac{1}{2}\mathrm{Tr}(M^{\dagger }M)}`$, will depend on $`X`$. It will be greatest when $`X`$ is chosen to be symplectic, so that $`M`$ is normal. If the eigenvalues of $`\mathrm{\Lambda }`$ are well separated, then the Gaussian weight is sharply peaked as a function of $`X`$, and we can evaluate the integral by a saddle point method: we restrict our attention to a saddle point manifold of matrices $`M`$ which are normal, as well as weak fluctuations away from this saddle point manifold. If we parametrize the fluctuations away from the saddle point manifold and then treat these fluctuations in a Gaussian approximation, valid when the eigenvalue separation is large, we obtain that the j.p.d. for the self-dual ensemble is equal to, in this particular approximation, $$\prod _{i=1}^{N}e^{-|z_i|^2}\prod _{i<j}|z_i-z_j|^4\prod _{i=1}^{N}dz_id\overline{z}_i$$ (7) up to a constant factor. This is, of course, the same as the probability distribution of the charges in the one-component plasma at $`\beta =4`$. 
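The statement that the eigenvalue-pair density matches the plasma value $`\rho =\frac{1}{2\pi }`$ can be checked with a small Monte Carlo sketch. Sampling $`A`$ with independent complex Gaussian elements of unit mean square realizes the Gaussian weight, since $`\mathrm{Tr}(M^{\dagger }M)=\mathrm{Tr}(A^{\dagger }A)`$ for $`M=ZA`$; the matrix size and sample count below are kept small for speed and differ from those used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n_samples, r = 50, 40, 4.0      # 50 eigenvalue pairs per matrix; count radius r

# Symplectic unit Z (block-diagonal copies of [[0, 1], [-1, 0]]).
Z = np.kron(np.eye(N), np.array([[0.0, 1.0], [-1.0, 0.0]]))

count = 0
for _ in range(n_samples):
    # Complex Gaussian anti-symmetric A with E|A_ij|^2 = 1, so that the
    # induced weight on M = ZA is proportional to exp(-Tr(M^dag M)/2).
    B = (rng.normal(size=(2 * N, 2 * N))
         + 1j * rng.normal(size=(2 * N, 2 * N))) / np.sqrt(2.0)
    A = (B - B.T) / np.sqrt(2.0)
    ev = np.linalg.eigvals(Z @ A)
    count += int(np.sum(np.abs(ev) < r))

# Eigenvalues come in equal pairs; divide by 2 to count pairs (plasma charges).
density = count / (2 * n_samples * np.pi * r**2)
print(density)  # should be close to 1/(2*pi), i.e. roughly 0.16
```

With this normalization the eigenvalue support has radius roughly $`\sqrt{2N}`$, so the counting radius is kept well inside the bulk to avoid edge effects.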
In general, we expect that for well separated eigenvalues, the level repulsion in the self-dual ensemble will match the charge repulsion in the plasma; it is only the short distance interaction that will be different. Further, it may be shown explicitly by calculations on small matrices that the short distance interactions in the j.p.d. for the self-dual ensemble cannot be written as a product of two-body terms. Given these similarities, one might hope that the correlation functions of the self-dual ensemble will shed some light on correlations within the plasma. In the next section, we discuss a numerical investigation of the self-dual ensemble. ## III Numerics Mathematica was used to generate 4940 600-by-600 self-dual matrices. The matrices were chosen with Gaussian weight $`e^{-\frac{1}{2}\mathrm{Tr}(M^{\dagger }M)}`$ as in equation (2). The matrices have 300 pairs of eigenvalues. A picture of these eigenvalues for a typical matrix is shown in figure 1. The eigenvalue density as a function of radius is shown in figure 2. The density obeys the circular law: it is nonvanishing and roughly constant within a disc, and vanishing outside. For the $`\beta =4`$ one-component plasma with a confining potential $`e^{-\overline{z}z}`$ (see equation (5)), the expected density of particles per unit area, from the circular law, is $`\frac{1}{2\pi }`$. The single particle eigenvalue density observed numerically for the self-dual matrices agrees with this result; note that, since eigenvalues come in pairs, we expect $`e^{-\overline{z}z}`$ to be the confining potential that corresponds to the weight of equation (2), as each eigenvalue in the pair contributes a factor of $`e^{-\overline{z}z/2}`$. One interesting feature of figure 2 is that the eigenvalue density near the edge rises before dropping. It is not clear why this happens. The two-level correlation function is shown in figures 3 and 4. 
In figure 3, we look at all eigenvalues within a distance of 6 or less from the origin, and plot the probability to find another eigenvalue at a given distance from the first eigenvalue. In figure 4, to reduce effects due to the finite size of the matrix $`M`$, we require that the first eigenvalue lie within a distance of 3.5 or less from the origin. No significant differences are found between figure 3 and figure 4, indicating that the effects due to the finite size of $`M`$ are small even in figure 3. One can see finite-size effects in both figures 3 and 4 at large distances: the correlation function rises for distances of around 20. This is simply due to the rise in eigenvalue density near the edge, as shown in figure 2, and has no deep meaning. Looking at figures 3 and 4, there is a definite “shoulder” at a distance of slightly less than 3. There is no definite sign of any non-monotonicity; certainly, if there is any peak in the correlation function near the shoulder, it is much smaller than the peak found in the $`\beta =4`$ plasma. As a quick estimate of the expected spacing between levels, assume that the levels form a perfect hexagonal lattice, so that they are very ordered and packed as closely as possible. In this case, if the levels have a density of $`\frac{1}{2\pi }`$, then the closest spacing between levels is $`\frac{2\sqrt{\pi }}{3^{1/4}}`$, which is approximately 2.7. For other arrangements of levels, the spacing will be slightly less. This length agrees quite well with the size of the shoulder. So, the shoulder length matches reasonably well with the length scale expected from the particle spacing.
## IV Conclusion
In conclusion, we have considered an ensemble of strongly non-Hermitian, self-dual matrices. The two-level correlation function of this ensemble is particularly interesting, although the hoped-for non-monotonicity has not emerged. It seems that all possible universality classes of non-Hermitian matrices are now known.
# Highlights of the Rome Workshop on Gamma-Ray Bursts in the Afterglow Era
## 1 Introduction
As a result of the initiative and the effort of many people on the BeppoSAX team, we now live in the era of gamma-ray burst (GRB) afterglows. Having advocated for the High Energy Transient Explorer mission over the years, I know how unlikely it seemed to most astronomers that GRBs would produce detectable X-ray, let alone optical, afterglows. The mounting evidence from the Burst and Transient Source Experiment onboard the Compton Gamma-Ray Observatory that GRBs are extragalactic only seemed to strengthen this view. The subsequent discoveries that GRBs have X-ray, optical and radio afterglows have transformed the field. This workshop shows how great an impact these discoveries have had on the study of GRBs. The vast majority of the observational and theoretical results that were presented at this Workshop come from, or are motivated by, studies of the radio, optical and X-ray properties of afterglows and host galaxies, the latter identified by their positional coincidence with the afterglows. Here I describe some of the highlights of the Workshop, and discuss some of the questions these results pose about the nature and origin of GRBs. Of necessity, this review reflects my personal point of view. Also, I cannot discuss all of the important observational and theoretical results reported at this meeting, given the limited space available. This summary is therefore regretfully incomplete.
## 2 How Many Classes of GRBs Are There?
The discovery six months ago of an unusual Type Ic supernova, SN 1998bw (Galama et al. 1998, Galama 1999), in the BeppoSAX WFC error circle for GRB 980425 (Soffitta et al. 1998) (see Figure 1) has focused attention once again on the question: How many distinct classes of GRBs are there? If GRB 980425 were associated with SN 1998bw, the luminosity of the burst would be $`10^{46}`$ erg s<sup>-1</sup> and its energy would be $`10^{47}`$ erg. 
These values are five orders of magnitude less than those of the other BeppoSAX bursts, whose luminosities range from $`10^{50}`$ to $`10^{53}`$ ergs s<sup>-1</sup> and whose energies range from $`10^{52}`$ to $`10^{55}`$ ergs (see below). Moreover, the behaviors of the X-ray and optical afterglows would be very different from those of the other BeppoSAX bursts, yet the burst itself is indistinguishable from other BeppoSAX and BATSE GRBs with respect to duration, time history, spectral shape, peak flux, and fluence (Galama et al. 1998). There is another troubling aspect about the proposed association between GRB 980425 and SN 1998bw: also inside the BeppoSAX WFC error circle was a fading X-ray source (Pian et al. 1998a,b; Pian et al. 1999; Piro et al. 1998) (see Figure 1). Connecting this fading X-ray source with the burst gives a power-law index of $`1.2`$ for the temporal decay rate (Pian et al. 1998b), which is similar to the behavior of the other X-ray afterglows observed using BeppoSAX, ROSAT and ASCA. This fading X-ray source must therefore be viewed as a strong candidate for the X-ray afterglow of GRB 980425. There is also strong statistical evidence that Type Ib-Ic supernovae (SNe) do not all produce observable GRBs (Graziani, Lamb & Marion 1999a,b). Approaching the possible association between SN 1998bw and GRBs from the opposite direction, one can ask: What fraction $`f_{\mathrm{GRB}}`$ of the GRBs detected by BATSE could have been produced by Type Ib-Ic SNe, assuming that the proposed association between GRB 980425 and SN 1998bw is correct, and that the bursts produced are similar to GRB 980425? 
Assuming that the association between SN 1998bw and GRB 980425 is real, using it to estimate the BATSE sampling distance for such events under the admittedly dubious assumption that the GRBs produced by Type Ib-Ic SNe are roughly standard candles, and assuming that all Type Ib-Ic SNe produce observable GRBs, Graziani, Lamb & Marion (1999a,b) find that no more than $`90`$ such events could have been detected by BATSE during the lifetime of the Compton Gamma-Ray Observatory, indicating that the fraction $`f_{\mathrm{GRB}}`$ of such events in the BATSE catalog can be no more than about 5%. This result suggests that the observation of another burst like GRB 980425 is unlikely to happen any time soon, even assuming that the association is real; consequently, the question of whether Type Ib-Ic SNe can produce extremely faint GRBs is likely to remain open for a long time. Earlier studies have shown that gamma-ray bursts can be separated into two classes: short, harder, more variable bursts; and long, softer, smoother bursts (see, e.g., Lamb, Graziani & Smith 1993; Kouveliotou et al. 1993). Recently, Mukherjee et al. (1999) have provided evidence for the possible existence of a third class of bursts, based on these same duration, hardness and smoothness properties of the bursts. Also, the hardest long bursts exhibit a pronounced deviation from the -3/2 power law required for a homogeneous spatial distribution of sources, whereas the short bursts and the softest long bursts do not (Pizzichini 1995; Kouveliotou 1996; Belli 1997, 1999; Tavani 1998). These results contradict the expectation that the most distant bursts should be the most affected by cosmological energy redshift and time dilation. 
Some bursts show considerable high-energy ($`E>300`$ keV) emission whereas others do not, but it is doubtful that this difference signifies two separate GRB classes, since a similar difference in behavior is seen for peaks within a burst (Pendleton et al. 1998). It is not clear whether the short and long classes, and the other differences among various burst properties, reflect distinct burst mechanisms, or whether they are due to beaming (or some other property of the bursts) and different viewing angles. Some theorists say, however, that the “collapsar” or “hypernova” model cannot explain the short bursts (see, e.g., Woosley 1999). Because of observational selection effects, all of the GRBs that have been detected by the BeppoSAX GRBM and observed by the WFC have been long bursts. It may be possible for BeppoSAX to revise its GRB detection algorithm in order to detect short bursts. We also expect that HETE-2 will detect short bursts and determine their positions (Kawai et al. 1999, Ricker et al. 1999). If so, follow-up observations may well lead to a breakthrough in our understanding of the nature of the short bursts similar to that which has occurred for the long bursts. A nightmare I sometimes have is that HETE-2 provides accurate positions for a number of short bursts, but the positions are not coincident with any host galaxies, because the bursts are due to merging compact-object binaries that have drifted away from their galaxy of origin (see below). Furthermore, the bursts exhibit no soft X-ray, optical, or radio afterglows, because any envelope that the progenitors of the compact objects might have expelled has been left behind, and the intergalactic medium is too tenuous to dissipate efficiently the energy in the relativistic external shock that is widely thought to be the origin of GRB afterglows. 
The redshifts of such bursts would be difficult, if not impossible, to determine, since they could not be inferred from the redshift of any host galaxy, nor constrained by the observation of absorption-line systems in the spectrum of any optical afterglow. On a more positive note, future radio, optical, and X-ray observations of GRB afterglows and host galaxies may well lead to the identification of new subclasses of GRBs.
## 3 GRB Host Galaxies
The detection of burst X-ray and optical afterglows has led in eight cases to identification of the likely host galaxy by positional coincidence with the optical afterglow. At $`R=25.5`$–$`26`$, the typical R-band magnitudes of these galaxies, galaxies cover 10-15% of the sky for ground-based observations, because of smearing of the galaxy images due to seeing. Therefore one expects 1/10 to 1/7 of ground-based “identifications” to be incorrect. If we are lucky, all of the identifications made to date are correct, but if we are unlucky, one or two are incorrect. On the other hand, it is highly probable that all of the host galaxies identified from HST observations are correct (e.g., the host galaxies for GRBs 970228, 970508, 971214, and 980329), since HST images are free of the effects of seeing that bedevil observations from the ground. It is also reassuring that in two cases (GRBs 970508 and 971214), the host galaxy identified from ground-based observations has been confirmed by later HST observations. Let me mention a related concern. 
Until very recently, all GRB host galaxies had $`R=25.7\pm 0.3`$, no matter what their redshift and no matter how the afterglow on which the identification is based was discovered (i.e., whether detected in the optical, NIR, or radio); that is, the R-band magnitude of the GRB host galaxy appeared to be a kind of “cosmological constant.” In contrast, if the GRB rate is proportional to the star formation rate (SFR) (see below), one expects a relatively broad distribution of R-band magnitudes for GRB host galaxies (Hogg & Fruchter 1999, Fruchter 1999, Madau 1999) (see Figure 2). The recent discovery of the host galaxy of GRB 980703 at $`R=22.6`$ broadens the observed distribution of host galaxy R-band magnitudes, provided the identification is correct. However, it also increases the asymmetry of the R-band magnitude distribution, which exhibits a tail toward the bright end and a cutoff toward the dim end. This is the opposite of what one expects if the GRB rate is proportional to the SFR. This raises the possibility that in some cases we are merely finding the first galaxy along the line of sight to the burst. If so, in some cases the galaxy found may be a foreground galaxy, and the actual host galaxy may lie behind it. Or it might be that the GRB rate is not proportional to the SFR (see the discussion below). Or, most likely of all, the asymmetry may merely reflect the fact that we are still very much in the regime of small-number statistics. Additional confirmations and/or identifications of host galaxies using HST will resolve this question. Castander and Lamb (1998) showed that the light from the host galaxy of GRB 970228, the first burst for which X-ray and optical afterglows were detected, is very blue, implying that the host galaxy is undergoing copious star formation and suggesting an association between GRB sources and star-forming galaxies. Subsequent analyses of the color of this galaxy (Castander & Lamb 1999; Fruchter et al. 
1999; Lamb, Castander & Reichart 1999) and other host galaxies (see, e.g., Kulkarni et al. 1998; Fruchter 1999) have strengthened this conclusion, as do the morphology of several host galaxies and the detection of \[OII\] and Ly$`\alpha `$ emission lines from them (see, e.g., Metzger et al. 1997b; Kulkarni et al. 1998; Bloom et al. 1998) (see Figure 3). The positional coincidences between several burst afterglows and the bright blue regions of the host galaxies (see Figure 4), and the evidence for extinction by dust of some burst afterglows (see, e.g., Reichart 1998; Kulkarni et al. 1998; Lamb, Castander & Reichart 1999), suggest that these GRB sources lie near or in the star-forming regions themselves. The inferred size ($`R\sim 1`$–$`3`$ kpc) and the morphology of GRB host galaxies strongly suggest that they are primarily low-mass ($`M\sim 0.01M_{\mathrm{Galaxy}}`$) but not necessarily sub-luminous galaxies, because of the ongoing star formation in them (most have $`L\sim 0.01`$–$`0.1L_{\mathrm{Galaxy}}`$, but some have $`L\sim L_{\mathrm{Galaxy}}`$; here $`M_{\mathrm{Galaxy}}`$ and $`L_{\mathrm{Galaxy}}`$ are the mass and luminosity of a galaxy like the Milky Way). A point sometimes not fully appreciated is that, while the total star formation rate in GRB host galaxies is often modest (resulting in modest \[OII\] and Ly$`\alpha `$ emission line strengths), the star formation per unit mass in them is very large.
## 4 GRB Distance Scale and Rate
The breakthrough made possible by the discovery that GRBs have X-ray (Costa et al. 1997), optical (Galama et al. 1997) and radio (Frail et al. 1997) afterglows cannot be overstated. The discovery by Metzger et al. (1997a) of redshifted absorption lines at $`z=0.83`$ in the optical spectrum of the GRB 970508 afterglow showed that most, perhaps all, GRB sources lie at cosmological distances. Yet we must remember that GRB 970508 remains the only GRB whose distance we have measured directly. 
The current situation is summarized below, in order of increasing uncertainty in the redshift determination. In the cases of two other bursts, GRB 980703 (Bloom et al. 1999) and GRB 971214 (Kulkarni et al. 1998, Kulkarni 1999), we infer the redshifts ($`z=0.96`$ and 3.42) of the bursts from the redshift of a galaxy coincident with the burst afterglow (and therefore likely to be the host galaxy; but recall my earlier comments). In the case of a fourth burst, GRB 980329, a redshift $`z\sim 5`$ was inferred by attributing the precipitous drop in the flux of the optical afterglow between the I- and R-bands to the Ly$`\alpha `$ forest (Fruchter 1999; Lamb, Castander & Reichart 1999). However, Djorgovski et al. (1999) recently reported that this burst must lie at a redshift $`z<3.9`$, based on the absence of any break longward of 6000 Å in the spectrum of the host galaxy. In the case of GRB 980703 (Piro 1999) and of a fifth burst, GRB 980828 (Yoshida 1999), there are hints of an emission-like feature in the X-ray spectrum, which, if interpreted as a redshifted Fe K-shell emission line, would provide redshift distances for these bursts. However, substantial caution is in order because the statistical significance of these features is slight. Indeed, all three “indirect” means of establishing the redshift distances of GRBs need verification by cross-checking the redshift distances found using these methods against those measured directly using redshifted absorption lines in the optical spectra of their afterglows. One-arcminute or better angular positions in near real-time, like those that HETE-2 will provide (Ricker et al. 1999, Kawai et al. 1999), will greatly facilitate this task. 
The table below summarizes the current situation, in order of increasing uncertainty in the redshift determination: $$\begin{array}{cc}\text{Redshifts of Afterglows}& \hfill 1\\ \text{Redshifts of Coincident Galaxies}& \hfill 2\\ \text{Redshifts from Afterglow Broad-Band Spectra (?)}& \hfill 0\\ \text{Redshifts from Fe Lines in X-Ray Afterglows (??)}& \hfill 0\\ & \hfill \text{——}\\ \text{Total}& \hfill 3\end{array}$$ Even with the paucity of GRB redshift distances currently known, and the uncertainties about these distances, it is striking how our estimate of the GRB distance scale continues to increase. Not so long ago, adherents of the cosmological hypothesis for GRBs favored a redshift range $`0.1\lesssim z\lesssim 1`$, derived primarily from the brightness distribution of the bursts under the assumption that GRBs are standard candles. (Of course, adherents of the galactic hypothesis argued for much smaller redshifts!) Now we routinely talk about redshift distances in the range $`2\lesssim z\lesssim 6`$, and such a redshift range is supported by the three burst redshifts that have been determined so far. Much of the motivation for considering such a redshift range for GRBs comes from the appealing hypothesis that the GRB rate is proportional to the star-formation rate (SFR) in the universe, a hypothesis that arose partly in response to the accumulating evidence, described earlier, that GRBs occur in star-forming galaxies, and possibly near or in the star-forming regions themselves. How far have we been able to go in testing this hypothesis? The answer: not very far. First of all, as Madau (1999) discussed at this meeting, the SFR as a function of redshift is itself as yet poorly known. 
The few points derived from the relatively small Hubble Deep Field may not be characteristic of the SFR in the universe at large, not to mention concerns about star-forming galaxies at high redshift whose light might be extinguished by dust in the star-forming galaxies themselves, as well as uncertainties in the epoch and magnitude of star formation in elliptical galaxies. Second, we have redshift determinations for only four GRBs and R-band magnitudes for only eight GRB host galaxies. Much further work establishing the star formation rate as a function of redshift in the universe, as well as the redshift distances for many more GRBs, will be needed before this hypothesis can really be tested. One thing is now clear: GRBs are a powerful probe of the high-$`z`$ universe. GRB 971214 would still be detected by BATSE and would be detected by HETE-2 at a redshift distance $`z\sim 10`$, and it would be detected by Swift (whose sensitivity threshold is a factor of 5 below that of BATSE and HETE-2) at $`z\sim 20`$! If GRBs are produced by the collapse of massive stars in binaries, one expects them to occur out to redshifts of at least $`z\sim 10`$–$`12`$, the redshifts at which the first massive stars are thought to have formed, which are far larger than the redshifts expected for the most distant quasars. The occurrence of GRBs at these redshifts may give us our first information about the earliest generation of stars; the distribution of absorption-line systems in their infrared afterglow spectra will give us information about both the growth of metallicity at early epochs and the large-scale structure of the universe; and the presence or absence of the Lyman-$`\alpha `$ forest in the infrared afterglow spectra will place constraints on the Gunn-Peterson effect and may give us information about the epoch at which the universe was re-ionized (Lamb & Reichart 1999a). The increase in the GRB distance scale also implies that the GRB phenomenon is much rarer than was previously thought. 
This implication has been noted at this meeting by Schmidt (1999), who finds that the GRB rate must be $$R_{\mathrm{GRB}}\sim 10^{-11}\,\mathrm{GRBs}\,\mathrm{yr}^{-1}\,\mathrm{Mpc}^{-3}$$ (1) in order both to match the brightness distribution of the bursts and to accommodate the redshift distance of $`z=3.42`$ inferred for GRB 971214. By comparison, the rate of neutron star-neutron star (NS-NS) binary mergers (Totani 1999) and the rate of Type Ib-Ic supernovae (Cappellaro et al. 1997) are $$R_{\mathrm{NS}\mathrm{NS}}\sim 10^{-6}\,\mathrm{mergers}\,\mathrm{yr}^{-1}\,\mathrm{Mpc}^{-3}$$ (2) $$R_{\mathrm{Type}\mathrm{Ib}\mathrm{Ic}}\sim 3\times 10^{-5}\,\mathrm{SNe}\,\mathrm{yr}^{-1}\,\mathrm{Mpc}^{-3}.$$ (3) The rate of neutron star-black hole (NS-BH) binary mergers will be smaller. Nevertheless, it is clear that, if either of these events is the source of GRBs, only a tiny fraction of them produce an observable GRB. Even if one posits strong beaming (i.e., $`f_{\mathrm{beam}}\sim 10^{-2}`$; see below), the fraction is small: $$R_{\mathrm{GRB}}/R_{\mathrm{NS}\mathrm{NS}}\sim 10^{-3}(f_{\mathrm{beam}}/10^{-2})^{-1}$$ (4) $$R_{\mathrm{GRB}}/R_{\mathrm{Type}\mathrm{Ib}/\mathrm{c}}\sim 3\times 10^{-5}(f_{\mathrm{beam}}/10^{-2})^{-1}.$$ (5) Therefore, if such events are the sources of GRBs, either beaming must be incredibly strong ($`f_{\mathrm{beam}}\sim 10^{-5}`$–$`10^{-3}`$) or only rarely are the physical conditions necessary to produce a GRB satisfied. Can any theoretical astrophysicist be expected to explain such incredible beaming, or alternatively, such a non-robust, “flaky” phenomenon? I have a solution, at least in the case of SNe: we theorists need merely define those supernovae that produce GRBs to be a new class of SNe (Type I<sub>grb</sub> SNe), and then challenge the observers to go out and find the other observational criteria that define this class! 
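The arithmetic behind these rate comparisons is easy to make explicit. A minimal sketch, taking the rates to be $`10^{-11}`$, $`10^{-6}`$ and $`3\times 10^{-5}`$ events yr<sup>-1</sup> Mpc<sup>-3</sup> respectively and treating the beaming fraction as a free parameter:

```python
# Fraction of candidate parent events (mergers or SNe) that must produce
# a GRB: the observed GRB rate, corrected upward by 1/f_beam for beaming,
# divided by the rate of the parent events.
R_GRB = 1e-11     # observed GRB rate, yr^-1 Mpc^-3 (Schmidt 1999)
R_NS_NS = 1e-6    # NS-NS merger rate, yr^-1 Mpc^-3 (Totani 1999)
R_IBC = 3e-5      # Type Ib-Ic SN rate, yr^-1 Mpc^-3 (Cappellaro et al. 1997)

def grb_fraction(R_parent, f_beam=1e-2):
    """Fraction of parent events that must yield an observable GRB."""
    return (R_GRB / f_beam) / R_parent

frac_mergers = grb_fraction(R_NS_NS)   # ~ 1e-3 for f_beam = 1e-2
frac_sne = grb_fraction(R_IBC)         # ~ 3e-5 for f_beam = 1e-2
```

Setting `f_beam = 1` (no beaming) drops both fractions by another factor of a hundred, which is the sense in which either the beaming must be extreme or the conditions for producing a GRB are only rarely met.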
## 5 Implied Energies and Luminosities
The maximum energy $`(E_{\mathrm{GRB}})_{\mathrm{max}}`$ that has been observed for a GRB imposes an important requirement on GRB models, and is therefore of great interest to theorists. $`(E_{\mathrm{GRB}})_{\mathrm{max}}`$ has increased as the number of GRB redshift distances that have been determined has increased. Currently, the record holder is GRB 971214 at $`z=3.42`$, which implies $`E_{\mathrm{GRB}}\sim 5\times 10^{53}`$ erg from its gamma-ray fluence, assuming isotropic emission and $`\mathrm{\Omega }_M=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ (Kulkarni 1999). The table below summarizes the redshifts and energies of the bursts for which these are currently known: $$\begin{array}{ccc}\text{Gamma-Ray Burst }& z& \text{Energy (if isotropic)}\\ & & \\ 970508& 0.835& 7\times 10^{51}\mathrm{erg}\\ 971214& 3.42& 5\times 10^{53}\mathrm{erg}\\ 980703& 0.966& 8\times 10^{52}\mathrm{erg}\end{array}$$ This kind of energy is difficult to accommodate in NS-NS or NS-BH binary merger models without invoking strong beaming. “Collapsar” or “hypernova” models have an easier time of it, and can perhaps reach $`10^{54}`$ erg without invoking strong beaming by assuming a high efficiency for the conversion of gravitational binding energy into gamma-rays (Woosley 1999). Both classes of models can be “saved” by invoking strong beaming ($`f_{\mathrm{beam}}\sim 1/10`$–$`1/100`$; but see the lack of evidence for beaming discussed below). Even if GRBs are strongly beamed, they are still far and away the brightest electromagnetic phenomenon in the Universe, as the following comparison illustrates:
* $`L_{\text{SNe}}\sim 10^{44}\,\mathrm{erg}\,\mathrm{s}^{-1}`$
* $`L_{\text{SGR}}\sim 10^{45}\,\mathrm{erg}\,\mathrm{s}^{-1}`$
* $`L_{\text{AGN}}\sim 10^{45}\,\mathrm{erg}\,\mathrm{s}^{-1}`$
* $`L_{\text{GRB}}\sim 10^{51}(f_{\mathrm{beam}}/10^{-2})\,\mathrm{erg}\,\mathrm{s}^{-1}`$

The luminosities of GRB 970508 and GRB 971214 differ by a factor of about one hundred. 
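The isotropic-equivalent energies in the table follow from a burst's gamma-ray fluence $`S`$ and redshift via $`E=4\pi d_L^2S/(1+z)`$. A sketch for GRB 971214 in a flat $`\mathrm{\Omega }_M=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ cosmology; the fluence value and Hubble constant below are illustrative assumptions, and the quoted $`5\times 10^{53}`$ erg also folds in a spectral correction, so the agreement is only to order of magnitude:

```python
import math

def lum_dist_cm(z, H0=65.0, Om=0.3, OL=0.7, steps=10000):
    """Luminosity distance in cm for a flat Om + OL = 1 cosmology,
    by trapezoidal integration of the comoving-distance integral."""
    c_km_s = 2.998e5                     # speed of light, km/s
    mpc_cm = 3.086e24                    # 1 Mpc in cm
    E = lambda x: math.sqrt(Om * (1 + x) ** 3 + OL)
    dz = z / steps
    integral = sum(0.5 * (1 / E(i * dz) + 1 / E((i + 1) * dz)) * dz
                   for i in range(steps))
    d_comoving = (c_km_s / H0) * integral          # Mpc
    return (1 + z) * d_comoving * mpc_cm

def e_iso(fluence, z, **cosmo):
    """Isotropic-equivalent energy (erg) from fluence (erg cm^-2) and z."""
    d_L = lum_dist_cm(z, **cosmo)
    return 4 * math.pi * d_L ** 2 * fluence / (1 + z)

# GRB 971214: z = 3.42; a gamma-ray fluence of ~1e-5 erg cm^-2 is assumed
E_971214 = e_iso(1e-5, 3.42)             # of order 10^53 erg
```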
Thus (if there was previously any doubt), determination of the redshift distances for just three GRBs has put to rest once and for all the idea that GRBs are “standard candles.” The extensive studies by Loredo & Wasserman (1998a,b) and the study by Schmidt (1999) reported at this workshop show that the luminosity function for GRBs can be, and almost certainly is, exceedingly broad, with $`\mathrm{\Delta }L_{\mathrm{GRB}}/L_{\mathrm{GRB}}\sim 10^3`$. The results of Loredo & Wasserman (1998a,b) show that the burst luminosity function could be far broader; and indeed, if GRB 980425 is associated with SN 1998bw, $`\mathrm{\Delta }L_{\mathrm{GRB}}/L_{\mathrm{GRB}}\sim 10^5`$. Even taking a luminosity range $`\mathrm{\Delta }L_{\mathrm{GRB}}/L_{\mathrm{GRB}}\sim 10^3`$ implies that $`\mathrm{\Delta }F_{\mathrm{GRB}}/F_{\mathrm{GRB}}\sim 10^4`$, given the range in the distances of the three GRBs whose redshifts are known. This is far broader than the range of peak fluxes in the BATSE GRB sample, and implies that the flux distribution of the bursts extends well below the BATSE threshold. The enormous breadth of the luminosity function of GRBs suggests that the differences (such as time stretching and spectral softening) between the apparently bright and the apparently dim bursts are due to intrinsic differences between intrinsically bright and faint bursts, rather than to cosmology. Finally, a broad luminosity function is naturally expected in models with ultra-relativistic radial outflow and strong beaming (jet-like behavior). But then why is no large special-relativistic Doppler redshift seen in GRB spectra; i.e., why is the spread in $`E_{\mathrm{peak}}`$ so narrow?
## 6 Burst Models
NS-NS or NS-BH binary mergers and “collapsar” or “hypernova” events continue to be the leading models for the energy source of GRBs. Rees (1999) described what he termed the “best buy” model, which involves a NS-BH binary merger and a magnetically powered jet. 
Woosley (1999) reported a series of calculations and hydrodynamic simulations that explore various stages of the collapsar scenario, including the production of a hydrodynamic jet (although the jet might also be magnetically powered in this scenario, if magnetic fields were included). The increasingly strong evidence that the bursts detected by BeppoSAX originate in galaxies undergoing star formation, and may occur near or in the star-forming regions themselves, favors the collapsar model and disfavors the binary merger model as the explanation for long, softer, smoother bursts. Simulations of the kicks given to NS-NS and NS-BH binaries by the SNe that form them show that most binary mergers are expected to occur well outside any galaxy (Bulik & Belczynski 1999). This is particularly the case given that the GRB host galaxies identified so far have small masses, as discussed earlier, and therefore low escape velocities. The fact that all of the optical afterglows of the BeppoSAX bursts are coincident with the disk of the host galaxy therefore also disfavors the binary merger model as the explanation for the long, softer, smoother bursts. Current models of the bursts themselves fall into three general categories: those that invoke a central engine, those that invoke internal shock waves in a relativistically expanding wind, and those that invoke a relativistic external shock wave. Dermer (1999) argued that the external shock wave model explains many of the observed properties of the bursts. By contrast, Fenimore (1999; see also Fenimore et al. 1999) argued that several features of GRBs, such as the large gaps seen in burst time histories, cannot be explained by the external shock wave model, and that the bursts must therefore be due either to a central engine or to internal shocks in a relativistically expanding wind. Either way, the intensity and spectral variations seen during the burst must originate at a central engine. 
This implies that the lifetime of the central engine must in many cases be $`t_{\mathrm{engine}}\sim 100`$–$`1000`$ s, which poses a severe difficulty for NS-NS or NS-BH binary merger models, if such models are invoked to explain the long, softer, smoother bursts, and may pose a problem for the collapsar model. Fenimore (1999) reported at this meeting that he finds no evidence of relativistic expansion in the time histories and spectra of the GRBs themselves, presenting a possible difficulty for the internal shock wave model. One puzzle about the bursts themselves is: Why are GRB spectra so smooth? The shock synchrotron model agrees well with observed burst spectra. But this agreement is surprising, since strong deviations from the simplest spectral shape are expected due to inverse Compton scattering, and if forward and reverse shock contributions to the prompt gamma-ray emission occur simultaneously or at different times (Tavani 1999). Another puzzle is: Why is the spread in the peak energy $`E_{\mathrm{peak}}`$ of the burst spectra so narrow? In the external shock model, this requires that all GRBs have nearly the same ultra-relativistic value of $`\mathrm{\Gamma }`$. The narrow range in $`E_{\mathrm{peak}}`$ is, if anything, more difficult to understand in the internal shock model. If $`\mathrm{\Delta }\mathrm{\Gamma }/\mathrm{\Gamma }\ll 1`$ in the relativistic outflow, the range in $`E_{\mathrm{peak}}`$ will be narrow, but then it is hard to understand why most of the energy of the relativistic outflow is dissipated during the burst rather than in the afterglow. Conversely, if $`\mathrm{\Delta }\mathrm{\Gamma }/\mathrm{\Gamma }\gg 1`$ in the relativistic outflow, most of the energy of the relativistic outflow is dissipated during the burst rather than in the afterglow, but then one expects a wide range of $`E_{\mathrm{peak}}`$’s. 
This is a hint, like the problem discussed earlier that one would expect strong beaming to produce a large special-relativistic Doppler redshift yet none is seen in burst spectra, that there may be something missing in our picture of the dissipation and radiation mechanisms in GRBs.
## 7 Beaming
Most theorists expect GRBs to be significantly beamed; many energetic astrophysical phenomena are (examples include young protostars; the so-called “microquasars,” which are black hole binaries in the Galaxy; radio galaxies; and AGN). And theorists desire beaming because it saves their models. Several speakers at this workshop have emphasized these points (see, e.g., Dar 1999, Fargion 1999, and Rees 1999). Strong beaming probably requires strong magnetic fields, but no detailed physical model of how this might happen has been put forward as yet. One can ask: Where is the observational evidence for beaming? Fenimore (1999) told us that there is none in the time histories of the bursts themselves. Worse yet, Greiner (1999) reported that $`f_{\mathrm{GRB}}/f_{\mathrm{afterglow}}^{\mathrm{X}\text{-}\mathrm{ray}}\sim 1`$ from an analysis of the ROSAT all-sky survey. This constraint may not be as strong as it appears, because the duration of the temporal exposure in the ROSAT all-sky survey is only a few hundred seconds, and thus the sensitivity of the survey is relatively poor. Consequently, soft X-ray afterglows would be detectable by the ROSAT all-sky survey only within a day or so after the burst, when (in the relativistic external shock model of afterglows; see below) the soft X-ray emission is still highly beamed. Constraints on so-called “orphan” optical afterglows, and therefore on the beaming of GRBs, will be strengthened by new low-$`z`$ SN Ia searches that will soon be underway. 
These searches will monitor an area of the sky that is roughly ten times larger than that monitored by current high-$`z`$ SN Ia searches, down to the same limiting magnitude ($`m_B\sim 20`$) (Perlmutter 1999).
## 8 Afterglow Models
The simple fireball model (i.e., a spherically symmetric relativistic external shock wave expanding into a homogeneous medium) has been “in and out of the hospital” for months, but notices of its death appear to be premature. This amazes me, given the wealth of complexities one can easily imagine in the fireball itself and in its environment (Mészáros 1999). If the simple relativistic fireball model (or even more complex variants of it) suffices to explain burst afterglows (see Figure 5), much can be learned, including the energy of the fireball per unit solid angle, the ratio of the energy in the magnetic field to that in relativistic electrons, and the density of the external medium into which the fireball expands (Wijers & Galama 1999; van Paradijs 1999; Lamb, Castander & Reichart 1999). It should be possible, in principle, to use the effects on the afterglow spectrum of extinction due to dust in the host galaxy and of absorption by the Lyman-$`\alpha `$ forest to determine the redshift of the burst itself, but so far this goal has eluded modelers (see, e.g., Lamb, Castander & Reichart 1999). Currently, we are in the linear regime in terms of what we learn from each individual afterglow because, given the diversity of GRBs, GRB afterglows, and host galaxies, we have yet to sample the full “phase space” of afterglow or host galaxy properties. Still less have we sampled the full “phase space” of combinations of burst, afterglow, and host properties. At the same time, we are in the strongly non-linear regime in terms of what we learn from each individual observation of a burst afterglow: the value of each astronomer’s observation is enhanced by the observations made by all other astronomers. 
As we have heard from several speakers at this workshop, the amount of information that can be gleaned from a given afterglow depends greatly on the number of measurements that exist both simultaneously in time and in wavelength, from the radio through the millimeter, sub-millimeter, near-infrared, optical, and X-ray bands. Furthermore, since the range of redshifts for the bursts (and therefore also their afterglows) is large, we cannot know in advance which bands will be crucial. Thus simultaneous or near-simultaneous multi-wavelength observations of burst afterglows are essential, and therefore observations by as many observers as possible must be encouraged. Finally, greater co-operation and co-ordination among observers is important, and should be facilitated, as has been done by setting up the invaluable service represented by the Gamma-Ray Burst Coordinate Network (GCN) (Barthelmy et al. 1999). ## 9 Star-Forming Regions Star-forming regions consist of a cluster of O/B stars that lie in and around a clumpy cloud of dust and gas. We expect $`A_V\gg 1`$ for O/B stars embedded in the cloud, and $`A_V\approx 0`$ for O/B stars that have drifted out of the cloud and/or lie near the surface of the cloud and have expelled the gas and dust in their vicinity. Thus the optical/UV spectrum of star-forming regions is a sum of the spectra of many hot (blue) stars, some of which are embedded in the cloud, and therefore heavily extinguished, and some of which lie on the surface or around the cloud, and are therefore essentially un-extinguished. This composite spectrum is rather blue, and yields a value $`A_V^{\mathrm{eff}}\lesssim 1`$ when a single extinction curve is fitted to it. The situation is very different when we consider an individual line-of-sight, as is appropriate for the afterglow of a GRB. 
If the GRB source lies outside and far away from any star-forming region, we expect $`A_V^{\mathrm{afterglow}}\lesssim 1`$; if the GRB source lies outside but near a star-forming region, we expect $`A_V^{\mathrm{afterglow}}\lesssim 1`$ about half the time and $`A_V^{\mathrm{afterglow}}\gg 1`$ about half the time. Finally, if the GRB source is embedded in the star-forming region, we expect $`A_V^{\mathrm{afterglow}}\gg 1`$. Thus, if GRB sources actually lie in star-forming regions, one would expect $`A_V^{\mathrm{afterglow}}\gg 1`$ (values of $`A_V\approx 10`$–$`30`$ are not uncommon for dense, cool molecular clouds in the Galaxy). Is this consistent with what we see? No. However, this may not mean that GRB sources do not lie in star-forming regions. The reason is that the soft X rays and the UV radiation from the GRB and its afterglow are capable, during the burst and immediately afterward, of vaporizing all of the dust in their path (Lamb & Reichart 1999b). Thus the value of $`A_V^{\mathrm{afterglow}}`$ that we measure may have nothing to do with the pre-existing value of the extinction through the star-forming region in which the burst source is embedded, but may instead reflect merely the extinction due to dust and gas in the disk of the host galaxy. The GRB, and its soft X-ray and UV afterglow, are also capable of ionizing gas in any envelope material expelled by the progenitor of the burst source and in the interstellar medium of the host galaxy. This will produce Strömgren spheres or very narrow cones (if the burst and its afterglow are beamed) in hydrogen, helium and various metals (Bisnovatyi-Kogan & Timokhin 1998, Timokhin & Bisnovatyi-Kogan 1999, Mészáros 1999). Recombination of the ionized gas eventually produces intense \[CII\], \[CIV\], \[OVI\] and \[CIII\] emission lines in the UV, and intense H$`\alpha `$ and H$`\beta `$ emission lines in the optical. However, the line fluxes may still not be strong enough to be detectable at the large redshift distances of GRB host galaxies. 
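The extinction values quoted above translate into flux suppression through the usual magnitude scaling, $`F/F_0=10^{-A_V/2.5}`$. A minimal sketch with illustrative values (the specific numbers are not from the text):

```python
def transmitted_fraction(A_mag):
    """Fraction of flux surviving an extinction of A_mag magnitudes."""
    return 10.0 ** (-A_mag / 2.5)

# A_V ~ 1 dims an optical afterglow by only a factor of a few,
# while A_V ~ 10-30 (a dense molecular cloud) suppresses it by
# roughly 4 to 12 orders of magnitude, i.e. essentially completely.
modest = transmitted_fraction(1.0)
cloud = transmitted_fraction(10.0)
```

This is why an embedded source would be expected to show a heavily extinguished afterglow unless, as argued above, the burst itself destroys the dust along the line of sight.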
Interaction of the GRB and its soft X-ray afterglow with any envelope material expelled by the progenitor of the burst source and with the surrounding interstellar medium can also produce intense fluorescent iron line emission (see, e.g., Mészáros 1999), but it is again difficult to see how the line flux could be large enough to be detectable or to explain the hints of a fluorescent iron emission line in the X-ray afterglows of GRB 980703 (Piro et al. 1999) and GRB 980828 (Yoshida et al. 1999). ## 10 Future Each person has their own favorite list of future observational needs. Here is mine:
- We need a high rate ($`>100`$ GRBs yr$`^{-1}`$) of bursts with good locations, in order to change the sociology of ground-based optical and radio observations. This many good GRB positions to follow up each year would make it possible to propose and carry out GRB afterglow monitoring programs at many medium-to-large aperture telescopes.
- The diversity of GRBs, GRB afterglows, and host galaxies means that we need a large number ($`>1000`$) of good GRB positions in order to be able to study the correlations between these properties. This is important for determining whether or not there are distinct subclasses of bursts, and more than one burst mechanism. Any correlations found will also impose important constraints on burst mechanisms and models.
- We need many rapid (near real time) one-arcminute GRB positions in order to determine whether or not significant optical emission accompanies the bursts (Park 1999), and to make it possible to take spectra of the burst afterglows while the afterglows are still bright – and thereby obtain redshifts of the bursts themselves from absorption line systems, and, if there are bursts at high redshifts, from the Ly$`\alpha `$ break.
- All of the GRBs that BeppoSAX has detected are “long” bursts. Currently we know nothing about the afterglow properties, the distance scale, and the hosts (if any) of “short” bursts. 
Therefore we need good/quick positions for short bursts, in order to determine these properties for short bursts in the same way that BeppoSAX has enabled us to determine these properties for long bursts.
- Currently, there is a largely unexplored gap in our knowledge of the X-ray and optical behavior of burst afterglows during the $`10^4`$–$`10^5`$ seconds immediately following the bursts, corresponding to the time needed to bring the BeppoSAX NFIs to bear on a burst. We need to fill in this unexplored gap, in order to see if bursts always, often, or rarely join smoothly onto their X-ray and optical afterglows, and to explore the geometry and kinematics of GRB afterglows (Sari 1999).
- We also need to search for variability in the X-ray and optical afterglows. Observations of such variability would impose severe constraints on models, including the widely-discussed relativistic fireball model of burst afterglows (see, e.g., Fenimore 1999). ## 11 Acknowledgement The Rome Workshop provided a feast of observational and theoretical results, and the opportunity to discuss them. On behalf of all of the Workshop participants, I would like to thank Enrico Costa, Luigi Piro, Filippo Fontana, and everyone else who helped to organize this meeting for bringing all of us together and for providing us with such “fine dining.”
# Neutron star properties in a chiral SU(3) model ## Abstract We investigate various properties of neutron star matter within an effective chiral $`SU(3)_L\times SU(3)_R`$ model. The predictions of this model are compared with a Walecka-type model. It is demonstrated that the importance of hyperon degrees of freedom depends strongly on the interaction used, even if the equation of state near saturation density is nearly the same in both models. While the Walecka-type model predicts a strange star core with strangeness fraction $`f_S\approx 4/3`$, the chiral model allows only for $`f_S\lesssim 1/3`$ and predicts that $`\mathrm{\Sigma }^0`$, $`\mathrm{\Sigma }^+`$ and $`\mathrm{\Xi }^0`$ will not exist in the star, in contrast to the Walecka-type model. The internal constitution and properties of neutron stars chiefly depend on the nature of strong interactions. The accepted underlying theory of strong interactions, QCD, is however not solvable in the nonperturbative regime. So far, numerical solutions of QCD on a finite space-time lattice are unable to describe finite nuclei or infinite nuclear matter . As an alternative approach, several effective models of hadronic interactions have been proposed . Especially the Walecka model (QHD) and its nonlinear extensions have been quite successful and widely used for the description of hadronic matter and finite nuclei. These models are relativistic quantum field theories of baryons and mesons, but they do not consider essential features of QCD, like approximate $`SU(3)_R\times SU(3)_L`$ chiral symmetry or broken scale invariance. The Nambu–Jona-Lasinio (NJL) model is an effective theory which has these features of QCD implemented, but it lacks confinement and thereby fails to describe finite nuclei and nuclear matter. The chiral SU(3) models, for example the linear SU(3)-$`\sigma `$ model , have been quite successful in modeling meson-meson interactions. The $`KN`$ scattering data have been well reproduced using the chiral effective SU(3) Lagrangian . 
However, all these chiral models lack the feature of including the nucleon-nucleon interaction on the same chiral SU(3) basis and therefore do not provide a consistent extrapolation to moderate and high densities relevant to the interior of a neutron star. This has led us to construct a QCD-motivated chiral $`SU(3)_L\times SU(3)_R`$ model as an effective theory of strong interactions, which implements the main features of QCD. The model has been found to describe reasonably well the hadronic masses of the various SU(3) multiplets, finite nuclei, hypernuclei and excited nuclear matter . The basic assumptions in the present chiral model are: (i) The Lagrangian is constructed with respect to the nonlinear realization of chiral $`SU(3)_L\times SU(3)_R`$ symmetry; (ii) The masses of the heavy baryons and mesons are generated by spontaneous symmetry breaking; (iii) The masses of the pseudoscalar mesons are generated by explicit symmetry breaking, since they are the Goldstone bosons of the model; (iv) A QCD-motivated field $`\chi `$ enters, which describes the gluon condensate (dilaton field) ; (v) Baryons and mesons are grouped according to their quark structure. In this letter we investigate the composition and structure of neutron star matter with hyperons in this chirally invariant model. The total Lagrangian of the chiral $`SU(3)_L\times SU(3)_R`$ model for neutron star matter can be written in the mean-field approximation as (for details see Ref. 
) $$\mathcal{L}=\mathcal{L}_{\mathrm{kin}}+\mathcal{L}_{\mathrm{BM}}+\mathcal{L}_{\mathrm{BV}}+\mathcal{L}_{\mathrm{vec}}+\mathcal{L}_0+\mathcal{L}_{\mathrm{SB}}+\mathcal{L}_{\mathrm{lep}},$$ (1) where $`\mathcal{L}_{\mathrm{BM}}+\mathcal{L}_{\mathrm{BV}}`$ $`=`$ $`-{\displaystyle \underset{i}{\sum }}\overline{\psi }_i\left[m_i^{\ast }+g_{i\omega }\gamma _0\omega ^0+g_{i\varphi }\gamma _0\varphi ^0+g_{N\rho }\gamma _0\tau _3\rho _0\right]\psi _i,`$ (2) $`\mathcal{L}_{\mathrm{vec}}`$ $`=`$ $`{\displaystyle \frac{1}{2}}m_\omega ^2{\displaystyle \frac{\chi ^2}{\chi _0^2}}\omega ^2+{\displaystyle \frac{1}{2}}m_\varphi ^2{\displaystyle \frac{\chi ^2}{\chi _0^2}}\varphi ^2+{\displaystyle \frac{1}{2}}{\displaystyle \frac{\chi ^2}{\chi _0^2}}m_\rho ^2\rho ^2+g_4^4(\omega ^4+2\varphi ^4+6\omega ^2\rho ^2+\rho ^4),`$ (3) $`\mathcal{L}_0`$ $`=`$ $`-{\displaystyle \frac{1}{2}}k_0\chi ^2(\sigma ^2+\zeta ^2)+k_1(\sigma ^2+\zeta ^2)^2+k_2({\displaystyle \frac{\sigma ^4}{2}}+\zeta ^4)+k_3\chi \sigma ^2\zeta `$ (5) $`-k_4\chi ^4-{\displaystyle \frac{1}{4}}\chi ^4\mathrm{ln}{\displaystyle \frac{\chi ^4}{\chi _0^4}}+{\displaystyle \frac{\delta }{3}}\mathrm{ln}{\displaystyle \frac{\sigma ^2\zeta }{\sigma _0^2\zeta _0}},`$ $`\mathcal{L}_{\mathrm{SB}}`$ $`=`$ $`\left({\displaystyle \frac{\chi }{\chi _0}}\right)^2\left[m_\pi ^2f_\pi \sigma +(\sqrt{2}m_K^2f_K-{\displaystyle \frac{1}{\sqrt{2}}}m_\pi ^2f_\pi )\zeta \right],`$ (6) $`\mathcal{L}_{\mathrm{lep}}`$ $`=`$ $`{\displaystyle \underset{l=e,\mu }{\sum }}\overline{\psi }_l[i\gamma _\mu \partial ^\mu -m_l]\psi _l.`$ (7) Here $`\mathcal{L}_{\mathrm{kin}}`$ is the kinetic energy term of the baryons and of the scalar ($`\sigma ,\zeta `$) and vector ($`\omega ,\varphi ,\rho `$) mesons. The interaction Lagrangians of the different baryons with the various spin-0 and spin-1 mesons are $`\mathcal{L}_{\mathrm{BM}}`$ and $`\mathcal{L}_{\mathrm{BV}}`$, respectively. The sum over $`i`$ extends over all the charge states of the baryon octet ($`p,n,\mathrm{\Lambda },\mathrm{\Sigma }^-,\mathrm{\Sigma }^0,\mathrm{\Sigma }^+,\mathrm{\Xi }^-,\mathrm{\Xi }^0`$). 
$`\mathcal{L}_{\mathrm{vec}}`$ generates the masses of the spin-1 mesons through the interactions with spin-0 mesons, and $`\mathcal{L}_0`$ gives the meson-meson interaction term which induces the spontaneous breaking of chiral symmetry. A salient feature of the model is the dilaton field $`\chi `$, which can be identified with the gluon condensate . It accounts for the broken scale invariance of QCD at tree level through the logarithmic potential. $`\mathcal{L}_{\mathrm{SB}}`$ introduces an explicit symmetry breaking of the U(1)$`_A`$, the SU(3)$`_V`$, and the chiral symmetry. The last term, $`\mathcal{L}_{\mathrm{lep}}`$, represents the free lepton Lagrangian. The effective masses of the baryons in the nonlinear realization of chiral symmetry are given by $`m_N^{\ast }`$ $`=`$ $`m_0-{\displaystyle \frac{1}{3}}g_{O8}^S(4\alpha _{OS}-1)(\sqrt{2}\zeta -\sigma )`$ (8) $`m_\mathrm{\Lambda }^{\ast }`$ $`=`$ $`m_0-{\displaystyle \frac{2}{3}}g_{O8}^S(\alpha _{OS}-1)(\sqrt{2}\zeta -\sigma )`$ (9) $`m_\mathrm{\Sigma }^{\ast }`$ $`=`$ $`m_0+{\displaystyle \frac{2}{3}}g_{O8}^S(\alpha _{OS}-1)(\sqrt{2}\zeta -\sigma )`$ (10) $`m_\mathrm{\Xi }^{\ast }`$ $`=`$ $`m_0+{\displaystyle \frac{1}{3}}g_{O8}^S(2\alpha _{OS}+1)(\sqrt{2}\zeta -\sigma ),`$ (11) with $`m_0=g_{O1}^S(\sqrt{2}\sigma +\zeta )/\sqrt{3}`$, in the usual notation . The parameters $`g_{O1}^S`$, $`g_{O8}^S`$ and $`\alpha _{OS}`$ are used to fit the vacuum baryon masses to their experimental values. The thermodynamic potential of the grand canonical ensemble per unit volume at zero temperature for neutron star matter can be written as $$\mathrm{\Omega }/V=-\mathcal{L}_{\mathrm{vec}}-\mathcal{L}_0-\mathcal{L}_{\mathrm{SB}}-𝒱_{\mathrm{vac}}+\underset{i}{\sum }\frac{\gamma _i}{(2\pi )^3}\int d^3k\left[E_i^{\ast }(k)-\mu _i^{\ast }\right]-\frac{1}{3}\underset{l}{\sum }\frac{1}{\pi ^2}\int \frac{dk\,k^4}{\sqrt{k^2+m_l^2}}.$$ (12) In Eq. (12) the vacuum energy $`𝒱_{\mathrm{vac}}`$ has been subtracted. 
For a given baryon species $`i`$, the single particle energy and chemical potential are, respectively, $`E_i^{\ast }(k)`$ $`=`$ $`\sqrt{k_i^2+m_i^{\ast 2}},`$ (13) $`\mu _i`$ $`=`$ $`b_i\mu _n-q_i\mu _e=\mu _i^{\ast }+g_{i\omega }\omega _0+g_{i\varphi }\varphi _0+g_{i\rho }I_{3i}\rho _0,`$ (14) with $`\mu _i^{\ast }\equiv E_i^{\ast }(k=k_{F_i})`$; $`b_i`$ and $`q_i`$ are the baryon number and charge of the $`i`$th species. The energy density and pressure follow from the Gibbs-Duhem relation, $`\epsilon =\mathrm{\Omega }/V+\underset{k=i,l}{\sum }\mu _k\rho _k`$ and $`P=-\mathrm{\Omega }/V`$. At a given baryon density $`\rho _B`$, the field equations obtained by extremizing $`\mathrm{\Omega }/V`$ are solved self-consistently in conjunction with the charge neutrality and $`\beta `$-equilibrium conditions. The parameters of the chirally invariant potential, $`k_0`$ and $`k_2`$, are used to ensure an extremum in the vacuum, while $`k_3`$ is constrained by the $`\eta ^{\prime }`$ mass, and $`k_1`$ is varied to give $`m_\sigma =500`$ MeV. The vacuum expectation values of the fields $`\sigma `$ and $`\zeta `$ are constrained by the pion and kaon decay constants, $`\sigma _0=f_\pi `$ and $`\zeta _0=(2f_K-f_\pi )/\sqrt{2}`$. The parameters $`g_{N\omega }`$ and $`\chi _0`$ are used to fit the binding energy of nuclear matter, $`B/A=\epsilon /\rho _B-m_N=-16`$ MeV, at the saturation density $`\rho _0=0.15`$ fm$`^{-3}`$. In the present calculation we have employed the parameter set C1 of Ref. . The predicted values of the effective nucleon mass, incompressibility, and symmetry energy at the saturation density are $`m_N^{\ast }/m_N=0.61`$, $`K=276`$ MeV, and $`a_{\mathrm{sym}}=40.4`$ MeV. 
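The bookkeeping behind the chemical equilibrium condition, $`\mu _i=b_i\mu _n-q_i\mu _e`$, can be made concrete with a short sketch. The numerical values of $`\mu _n`$ and $`\mu _e`$ below are purely illustrative placeholders, not outputs of the model:

```python
# (baryon number b_i, electric charge q_i) for the baryon octet
octet = {
    "p": (1, +1), "n": (1, 0), "Lambda": (1, 0),
    "Sigma-": (1, -1), "Sigma0": (1, 0), "Sigma+": (1, +1),
    "Xi-": (1, -1), "Xi0": (1, 0),
}

def chem_potential(mu_n, mu_e, b, q):
    """Chemical potential mu_i = b_i*mu_n - q_i*mu_e in beta equilibrium."""
    return b * mu_n - q * mu_e

mu_n, mu_e = 1200.0, 150.0  # MeV, illustrative values only
mus = {name: chem_potential(mu_n, mu_e, b, q) for name, (b, q) in octet.items()}
# A negatively charged hyperon gains mu_e: mu(Sigma-) = mu_n + mu_e,
# while mu(Sigma+) = mu_n - mu_e, which is why negative hyperons appear first.
```

This makes explicit why, as discussed below, negatively charged species reach their threshold at lower density than their neutral or positive partners.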
The remaining couplings to the strange baryons are then determined by the additive quark model constraints: $$g_{\mathrm{\Lambda }\omega }=g_{\mathrm{\Sigma }\omega }=2g_{\mathrm{\Xi }\omega }=\frac{2}{3}g_{N\omega }=2g_{O8}^V;g_{\mathrm{\Lambda }\varphi }=g_{\mathrm{\Sigma }\varphi }=\frac{g_{\mathrm{\Xi }\varphi }}{2}=\frac{\sqrt{2}}{3}g_{N\omega }.$$ (15) Figure 1 shows the energy per baryon as a function of the baryonic density $`\rho _B`$ for varying neutron-proton asymmetries, $`\delta =(\rho _n-\rho _p)/\rho _B`$, calculated in the chiral model. The curve $`\delta =0`$ describes infinite symmetric nuclear matter with a minimum at $`\rho _0`$. With increasing asymmetry ($`\delta >0`$), the binding energy decreases and the saturation density is shifted to lower values. The binding in nuclear matter for small values of $`\delta `$ stems from the isospin-symmetric nuclear forces. At asymmetries $`\delta \gtrsim 0.84`$ (i.e., a neutron-to-proton ratio $`>11`$), the system starts to become unbound even in the low-density regime. The stiffest equation of state (EOS) is obtained for pure neutron matter with $`\delta =1`$. Due to the $`\beta `$-equilibrium conditions, the EOS for neutron star matter (labeled NS) composed of neutrons, protons and electrons ($`npe`$) is softer than that of pure neutron matter. The gravitational attraction provides the necessary binding of neutron stars. The present results from the chiral model corroborate those obtained in Walecka-like models and in the relativistic Brueckner-Hartree-Fock calculations . Let us now discuss the inclusion of hyperons. With the choice of the parameter set discussed above, the chiral model is found to produce unrealistically large hyperon potential depths in nuclear matter in comparison to the experimental values of $`U_{\mathrm{\Lambda },\mathrm{\Xi }}^N\approx -28`$ MeV for the $`\mathrm{\Lambda }`$ and $`\mathrm{\Xi }`$ particles. 
Parameter sets that reproduce reasonable values of $`U_{\mathrm{\Lambda },\mathrm{\Xi }}^N`$ are, however, found to yield unsatisfactory nuclear properties . Fortunately, explicit symmetry breaking can be introduced in the nonlinear realization without affecting, e.g., the partially conserved axial-vector current relations. This allows for the inclusion of additional terms for the hyperon-scalar meson coupling : $$\mathcal{L}_{\mathrm{hyp}}=m_3\mathrm{Tr}\left(\overline{\psi }\psi +\overline{\psi }[\psi ,S]\right)\mathrm{Tr}\left(X-X_0\right),$$ (16) where $`X`$ represents a scalar meson matrix and $`S_b^a=[\sqrt{3}(\lambda _8)_b^a-\delta _b^a]/3`$, with the $`\lambda `$’s the usual Gell-Mann matrices. In the mean field approximation this leads to the following additional mass term $$\stackrel{~}{m}_i^{\ast }=m_i^{\ast }+am_3\left[\sqrt{2}(\delta -\delta _0)+(\zeta -\zeta _0)\right],$$ (17) where $`m_i^{\ast }`$ is given by Eqs. (8)–(11). With $`a=n_s`$, where $`n_s`$ is the number of strange quarks in the baryon, and with the parameter $`m_3`$ adjusted to $`U_\mathrm{\Lambda }^N=-28`$ MeV, the other hyperon potentials obtained are $`U_\mathrm{\Sigma }^N=+3.2`$ MeV and $`U_\mathrm{\Xi }^N\approx +30`$ MeV. Since the potential of the $`\mathrm{\Xi }`$ in ground state nuclear matter is still not satisfactory, we have used $`a=1`$ as an alternative parametrization. For $`U_\mathrm{\Lambda }^N=-28`$ MeV, one now obtains $`U_\mathrm{\Sigma }^N=+3.2`$ MeV and $`U_\mathrm{\Xi }^N=-42`$ MeV. We are well aware that our choice of the parametrization of the hyperon potentials is not unique; the examination of different ways of generating the experimentally favored values for $`U_\mathrm{\Sigma }^N`$ and $`U_\mathrm{\Xi }^N`$ is in progress and will be reported in a different context. Hereafter we employ $`a=1`$ in our calculations. Let us compare the results obtained in the chiral model with those of a Walecka-type model. 
The latter model employed here has cubic and quartic self-interactions for the $`\sigma `$ field, $`U(\sigma )=g_2\sigma ^3/3+g_3\sigma ^4/4`$, introduced to obtain a correct nuclear matter compressibility , and a quartic self-interaction for the vector field $`\omega `$, $`\mathcal{L}_v=g_4(\omega ^\mu \omega _\mu )^2`$. This modification leads to a reasonable reproduction of the Dirac-Brueckner calculation . We have used the parameter set TM1 of Ref. , which gives a saturation density and binding energy of $`\rho _0=0.145`$ fm$`^{-3}`$ and $`B/A=-16.3`$ MeV. It is to be noted that for the TM1 set, the values $`m_N^{\ast }/m_N=0.63`$, $`K=281`$ MeV, and $`a_{\mathrm{sym}}=36.9`$ MeV, which primarily influence the bulk properties of neutron star matter, are nearly identical to those in the chiral model. The hyperon-$`\sigma `$ couplings are obtained from a potential depth of $`U_Y^N=-28`$ MeV, while the $`\omega `$-$`Y`$ couplings are obtained from the SU(6) symmetry relations, Eq. (15). Two additional strange mesons, $`\zeta `$ and $`\varphi `$, are introduced which couple only to the hyperons. The $`\zeta `$-$`Y`$ couplings are fixed by the condition $`U_\mathrm{\Xi }^\mathrm{\Xi }\approx 2U_\mathrm{\Lambda }^\mathrm{\Lambda }\approx -40`$ MeV . Figure 2 shows the particle fractions versus the baryonic density in $`\beta `$-equilibrated matter in the chiral model (top panel) and the Walecka-type model, TM1 (bottom panel). With increasing density, it becomes energetically favorable for nucleons at the top of the Fermi sea to convert into other baryons. Note that the sequence of appearance of the hyperon species is the same in both models. The first strange particle to appear is the $`\mathrm{\Sigma }^-`$, since its somewhat higher mass (as compared to the $`\mathrm{\Lambda }`$) is compensated by the electro-chemical potential in the chemical equilibrium condition, Eq. (14). Because of its negative charge, charge neutrality can be achieved more economically, which causes a drop in the lepton fraction. 
More massive and positively charged particles appear at higher densities. These are in fact generic features found in neutron star calculations with hyperons . Because of the equilibrium condition, Eq. (14), the threshold densities of the different hyperon species are, however, strongly dependent on the magnitude of the scalar and vector fields at a given density and on their interactions with the baryons. The nucleon-nucleon interaction, which determines the variation of the neutron chemical potential with density, also determines the thresholds. The relatively weaker attractive scalar fields, and thereby larger effective masses of all the baryons, cause a significant shift of the density at which the hyperons ($`\mathrm{\Sigma }^0,\mathrm{\Sigma }^+,\mathrm{\Xi }^0`$) appear in the chiral model calculation. This enhances the saturation values of both the neutron and proton fractions (at a level of $`\sim 30\%`$) in the chiral model. Consequently, the cores of massive stars in the TM1 model (and also in other Walecka-type models ) are in general predicted to be very ($`>50\%`$) hyperon-rich. The chiral model, on the other hand, predicts neutron stars with considerably smaller hyperon fractions in the core. The composition of the star is crucial for neutrino and antineutrino emission, which can be responsible for the rapid cooling of neutron stars via the URCA process. It was demonstrated that for a $`npe`$ system rapid cooling by the nucleon direct URCA process is allowed when the momentum conservation condition $`k_{F_p}+k_{F_e}\geq k_{F_n}`$, corresponding to a proton fraction $`Y_p\gtrsim 0.11`$, is satisfied. The magnitude of $`Y_p`$ at a given $`\rho _B`$, in turn, is determined mainly by the symmetry energy, which is nearly identical in the two models. Therefore, in both models this condition is satisfied at densities $`\rho _B\gtrsim 2.2\rho _0`$; thus, rapid cooling by the direct URCA process can occur. 
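The quoted proton-fraction threshold for the direct URCA process follows from simple Fermi-momentum arithmetic: with $`k_F\propto \rho ^{1/3}`$ and charge neutrality ($`\rho _e=\rho _p`$, muons neglected), the condition $`k_{F_p}+k_{F_e}\geq k_{F_n}`$ becomes $`2\rho _p^{1/3}\geq \rho _n^{1/3}`$, i.e. $`\rho _n\leq 8\rho _p`$ and hence $`Y_p\geq 1/9`$. A one-line check:

```python
# Direct URCA threshold in npe matter:
# k_Fp + k_Fe >= k_Fn, with k_F ~ rho^(1/3) and rho_e = rho_p
# => 2 * rho_p^(1/3) = rho_n^(1/3) at threshold
# => rho_n = 8 * rho_p
# => Y_p = rho_p / (rho_p + rho_n) = 1/9, the ~0.11 quoted in the text
Y_p_threshold = 1.0 / 9.0
```

Including muons raises this threshold somewhat (to roughly 0.14), which is why the electrons-only value 0.11 is the relevant lower bound here.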
The most profound implications of the constitution of neutron star matter for its bulk properties are manifested in the equation of state (EOS). The pressure $`P`$ versus the energy density $`\epsilon `$ is displayed in Fig. 3 for neutron star matter in the chiral (thick lines) and TM1 (thin lines) models. For $`npe`$ stars (solid lines), although the incompressibility $`K`$ and the effective nucleon mass $`m_N^{\ast }`$ at the normal nuclear matter density are similar in the two models, the EOS at large densities is found to be considerably softer in the chiral model than in TM1. In the latter model, $`m_N^{\ast }`$ decreases rapidly with density; consequently, the EOS passes quickly to one that is dominated by the repulsive vector mesons ($`\omega `$ and $`\rho `$), leading to a stiffer EOS. This has a strong bearing on the mass and radius of such stars, as discussed below. The structure of static neutron stars can be determined by solving the Tolman-Oppenheimer-Volkoff equations . We use the results of Baym, Pethick and Sutherland to describe the crust, consisting of leptons and nuclei, at the low-density ($`\rho _B<0.001`$ fm$`^{-3}`$) end of the EOS. For the mid-density regime ($`0.001<\rho _B<0.08`$ fm$`^{-3}`$) the results of Negele and Vautherin are employed. Above this density, the EOS of the relativistic models has been adopted. The masses $`M`$ of the nonrotating neutron star sequence are shown in Fig. 4 as a function of the central energy density $`\epsilon _c`$ in the two models. The corresponding mass-radius relationship is presented in Fig. 5. It is observed that the stiffer EOS for a $`npe`$ star in the TM1 model can support a larger maximum mass of $`M_{\mathrm{max}}=2.16M_{\odot }`$ with a corresponding radius of $`R_{M_{\mathrm{max}}}=11.98`$ km at a central baryonic density of $`\rho _c=6.08\rho _0`$. The corresponding values obtained in the chiral model are $`M_{\mathrm{max}}=1.84M_{\odot }`$, $`R_{M_{\mathrm{max}}}=11.10`$ km, and $`\rho _c=6.98\rho _0`$. 
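The Tolman-Oppenheimer-Volkoff integration mentioned above can be sketched in a few lines. The sketch below uses a simple $`\mathrm{\Gamma }=2`$ polytrope (a common textbook stand-in, not the EOS of either model) and an illustrative central energy density, in geometrized units $`G=c=M_{\odot }=1`$:

```python
import math

def tov_polytrope(eps_c, K=100.0, gamma=2.0, dr=1e-3):
    """Integrate dP/dr and dm/dr of the TOV equations outward from the center
    for P = K * eps**gamma; returns (mass in M_sun, radius in km)."""
    p = K * eps_c ** gamma          # central pressure
    r = dr
    m = (4.0 / 3.0) * math.pi * r ** 3 * eps_c
    p_surface = 1e-12 * p           # stop when pressure has dropped ~12 decades
    while p > p_surface:
        eps = (p / K) ** (1.0 / gamma)
        dpdr = -(eps + p) * (m + 4.0 * math.pi * r ** 3 * p) / (r * (r - 2.0 * m))
        dmdr = 4.0 * math.pi * r ** 2 * eps
        p += dpdr * dr
        m += dmdr * dr
        r += dr
    return m, r * 1.4766            # 1 geometrized solar mass = 1.4766 km

M, R = tov_polytrope(1.0e-3)        # illustrative central energy density
```

With these placeholder parameters the integration yields a star of roughly solar-mass scale and a radius of order 15 km, the same ballpark as the sequences in Figs. 4 and 5; the realistic calculation of course tabulates $`P(\epsilon )`$ from the field equations instead.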
The large difference in the $`M_{\mathrm{max}}`$ values obtained in the two models, with nearly identical $`K`$ and $`m_N^{\ast }`$ values at $`\rho _0`$, clearly demonstrates that measurements of the maximum masses of neutron ($`npe`$) stars at high densities cannot be used to constrain the incompressibility and the effective nucleon mass around nuclear matter densities. Constraints on $`K`$ and $`m_N^{\ast }`$ from radius measurements of massive stars will be even more uncertain, since about $`40\%`$ of these stars’ radius originates from the low-density EOS. In fact, no precise radius measurements currently exist. When the hyperon degrees of freedom are included, the EOS (represented by dash-dotted lines in Fig. 3) for both models is appreciably softer than for $`npe`$ stars. This is caused by the opening of the hyperon degrees of freedom, which relieves some of the Fermi pressure of the nucleons. The decrease of the pressure exerted by the leptons (they are replaced by negatively charged hyperons to maintain charge neutrality) also contributes to the softening of the EOS. Since the threshold densities for the appearance of the hyperons, especially ($`\mathrm{\Sigma }^0,\mathrm{\Sigma }^+,\mathrm{\Xi }^0`$), are smaller in the TM1 model, these stars contain more baryon species. This leads to an enhanced softening as compared to that in the chiral model. In fact, both models with hyperons predict quite similar values of the pressure at moderate and high densities; the structures observed in the EOS (Fig. 3) correspond to the population of different hyperon species. These should be reflected in both the masses and the radii of the stars. 
The maximum masses and corresponding radii of stars with hyperons in the two models are almost identical, with $`M_{\mathrm{max}}=1.52M_{\odot }`$, $`R_{M_{\mathrm{max}}}=11.64`$ km, and $`\rho _c=5.92\rho _0`$ in the chiral model, while in the TM1 model $`M_{\mathrm{max}}=1.55M_{\odot }`$, $`R_{M_{\mathrm{max}}}=12.14`$ km, and $`\rho _c=5.97\rho _0`$. The magnitude of the central densities indicates that the hyperons ($`\mathrm{\Sigma }^0,\mathrm{\Sigma }^+,\mathrm{\Xi }^0`$) are entirely precluded in stars for the chiral model, whereas all hyperon species appear with comparable abundances in the TM1 model for the maximum-mass star. The strangeness fractions, $`f_S=\underset{i}{\sum }|S_i|\rho _i/\rho _B`$, are vastly different at the centers of the maximum-mass stars: 0.33 vs. 0.75 in the chiral and the TM1 model, respectively. Thus, models with similar nuclear matter incompressibilities and effective nucleon masses, leading to similar maximum star masses and correspondingly similar radii, can nevertheless have a widely different baryonic constitution! For progressively smaller central densities, the masses of the stars with hyperons are larger in the TM1 model (see Fig. 4), although the pressure at a given density is smaller than in the chiral model. This is because the masses of stars are determined by the overall EOS, and the TM1 model possesses a distinctively stiffer EOS at moderate densities (see Fig. 3). The radii of these small-mass hyperon-rich stars are quite distinct in the two models (see Fig. 5), because the radius of a star depends most sensitively on the low-density behavior of the EOS. In both models, stars of mass $`1.44M_{\odot }`$ (corresponding to the lower limit imposed by the larger mass of the binary pulsar PSR 1913+16 ) also contain many hyperon species with sizeable concentrations. 
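The strangeness fraction defined above is a one-line computation once a composition is given. The densities below are made-up numbers chosen only so that the result matches the chiral-model central value 0.33; they are not the model's actual particle fractions:

```python
# |strangeness| per baryon species
S = {"n": 0, "p": 0, "Lambda": 1, "Sigma-": 1, "Sigma0": 1,
     "Sigma+": 1, "Xi-": 2, "Xi0": 2}

def strangeness_fraction(rho):
    """f_S = sum_i |S_i| * rho_i / rho_B for a given set of number densities."""
    rho_B = sum(rho.values())
    return sum(S[b] * n for b, n in rho.items()) / rho_B

# hypothetical composition (arbitrary units), not the model output:
rho = {"n": 0.50, "p": 0.17, "Lambda": 0.20, "Sigma-": 0.13}
f_S = strangeness_fraction(rho)   # (0.20 + 0.13) / 1.00 = 0.33
```

Note that the cascades count twice, so a Ξ-rich core (as in TM1) drives $`f_S`$ up quickly even at moderate hyperon number fractions.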
In conclusion, we have investigated the composition and structure of neutron star matter in a novel chiral SU(3) model, and compared its predictions with those of a Walecka-type model based on different underlying assumptions. The two models, with nearly identical values of the effective nucleon mass, incompressibility, and symmetry energy at normal nuclear density, yield widely different maximum neutron star masses and radii. When the hyperon degrees of freedom are included, the maximum masses and the corresponding radii in the two models are found to be rather similar. However, the softness of the nucleonic contribution in the chiral model precludes the hyperons $`\mathrm{\Sigma }^0,\mathrm{\Sigma }^+,\mathrm{\Xi }^0`$, leading to much smaller hyperon abundances in these stars. ###### Acknowledgements. The authors are thankful to N. Glendenning, F. Weber and J. Schaffner-Bielich for helpful discussions. This work was funded in part by the Gesellschaft für Schwerionenforschung (GSI), the DFG and the Hessische Landesgraduiertenförderung. S.P. acknowledges support from the Alexander von Humboldt Foundation.
# A new resonance in 𝐾⁺⁢Λ electroproduction: the 𝐷₁₃(1895) and its electromagnetic form factors ## 1 Introduction The physics of nucleon resonance excitation continues to provide a major challenge to hadronic physics due to the nonperturbative nature of QCD at these energies. While methods like Chiral Perturbation Theory are not amenable to $`N^{\ast }`$ physics, lattice QCD has only recently begun to contribute to this field. Most of the theoretical work on the nucleon excitation spectrum has been performed in the realm of quark models. Models that contain three constituent valence quarks predict a much richer resonance spectrum than has been observed in $`\pi N\to \pi N`$ scattering experiments. Quark model studies have suggested that those “missing” resonances may couple strongly to other channels, such as the $`K\mathrm{\Lambda }`$ and $`K\mathrm{\Sigma }`$ channels or final states involving vector mesons. ## 2 The Elementary Model Using new SAPHIR data we reinvestigate the $`p(\gamma ,K^+)\mathrm{\Lambda }`$ process employing an isobar model described in Ref. . We are especially interested in a structure around W = 1900 MeV, revealed in the $`K^+\mathrm{\Lambda }`$ total cross section data for the first time. Guided by a recent coupled-channels analysis , the low-energy resonance part of this model includes three states that have been found to have significant decay widths into the $`K^+\mathrm{\Lambda }`$ channel: the $`S_{11}`$(1650), $`P_{11}`$(1710), and $`P_{13}(1720)`$ resonances. In order to approximately account for unitarity corrections at tree level we include energy-dependent widths along with partial branching fractions in the resonance propagators . The background part includes the standard Born terms along with the $`K^{\ast }`$(892) and $`K_1`$(1270) vector meson poles in the $`t`$-channel. As in Ref. , we employ the gauge method of Haberzettl to include hadronic form factors. 
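A minimal sketch of what an "energy-dependent width in the resonance propagator" means in practice: a Breit-Wigner denominator $`W^2-M^2+iM\mathrm{\Gamma }(W)`$ with $`\mathrm{\Gamma }(W)`$ scaled by the decay momentum. The kaon and Λ masses are physical, but the width value, the single $`K\mathrm{\Lambda }`$ channel, and the simple $`(q/q_0)^{2L+1}`$ scaling are illustrative assumptions, not the prescription of the isobar model used here:

```python
import math

def q_cm(W, m1, m2):
    """c.m. momentum of a two-body final state with masses m1, m2 (GeV)."""
    if W <= m1 + m2:
        return 0.0
    s = W * W
    return math.sqrt((s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2)) / (2.0 * W)

def breit_wigner(W, M, Gamma0, m1, m2, L):
    """1 / (W^2 - M^2 + i M Gamma(W)), with Gamma(W) = Gamma0 * (q/q0)^(2L+1)."""
    q, q0 = q_cm(W, m1, m2), q_cm(M, m1, m2)
    Gamma = Gamma0 * (q / q0) ** (2 * L + 1)
    return 1.0 / complex(W * W - M * M, M * Gamma)

m_K, m_Lam = 0.494, 1.116  # GeV; a D13 -> K Lambda decay proceeds in a d wave (L = 2)
amp_peak = abs(breit_wigner(1.895, 1.895, 0.2, m_K, m_Lam, L=2))
amp_off = abs(breit_wigner(2.100, 1.895, 0.2, m_K, m_Lam, L=2))
# |amplitude| peaks near W = M and falls off away from the resonance
```

The momentum-dependent width opens smoothly at the $`K\mathrm{\Lambda }`$ threshold, which is what lets a tree-level isobar model mimic unitarity corrections.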
The fit to the data was significantly improved by allowing for separate cut-offs for the background and resonant sectors. For the former, the fits produce a soft value around 800 MeV, leading to a strong suppression of the background terms, while the resonant cut-off is determined to be 1900 MeV. ## 3 Results from Kaon Photoproduction: a new $`D_{13}`$ State at 1895 MeV Figure 1 compares our model described above with the SAPHIR total cross section data. Our result shows only one peak near threshold and cannot reproduce the data at higher energies without the inclusion of a new resonance with a mass of around 1900 MeV. While there are no 3- or 4-star isospin-1/2 resonances around 1900 MeV in the Particle Data Book, several 2-star states are listed, such as the $`P_{13}(1900)`$, $`F_{17}(1990)`$, $`F_{15}(2000)`$ and $`D_{13}(2080)`$. On the theoretical side, the constituent quark model by Capstick and Roberts predicts many new states around 1900 MeV; however, only a few of them have been calculated to have a significant $`K\mathrm{\Lambda }`$ decay width . These are the $`[S_{11}]_3`$(1945), $`[P_{11}]_5`$(1975), $`[P_{13}]_4`$(1950), and $`[D_{13}]_3`$(1960) states, where the subscript refers to the particular band that the state is predicted in. We have performed fits for each of these possible states, allowing the fit to determine the mass, width and coupling constants of the resonance. While we found that all four states can reproduce the structure at $`W`$ around 1900 MeV, it is only the $`[D_{13}]_3`$(1960) state that is predicted to have a large photocoupling along with a sizeable decay width into the $`K\mathrm{\Lambda }`$ channel. Table 1 presents the remarkable agreement, up to a sign, between the quark model predictions and our extracted results for the $`[D_{13}]_3`$(1960) state. In our fit, the mass of the $`D_{13}`$ comes out to be 1895 MeV; we will use this energy to refer to this state below. How reliable are the quark model predictions?
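The kind of parameter extraction used here, determining a resonance mass and width by a least-squares fit to a cross-section bump, can be illustrated with a toy model. This is not the isobar amplitude itself: the Breit-Wigner shape, grid ranges, noise level, and unit amplitude below are our own illustrative choices.

```python
import numpy as np

def bw(W, M, G):
    """Schematic relativistic Breit-Wigner bump with peak height 1 at W = M."""
    return (M * G) ** 2 / ((W**2 - M**2) ** 2 + (M * G) ** 2)

# Synthetic "total cross section": a bump at 1.895 GeV plus Gaussian noise.
W = np.linspace(1.7, 2.2, 101)
rng = np.random.default_rng(0)
data = bw(W, 1.895, 0.15) + rng.normal(scale=0.02, size=W.size)

# Coarse grid search for the (mass, width) pair minimizing chi^2.
Ms = np.linspace(1.80, 2.00, 81)
Gs = np.linspace(0.05, 0.30, 51)
chi2 = np.array([[np.sum((data - bw(W, M, G)) ** 2) for G in Gs] for M in Ms])
iM, iG = np.unravel_index(np.argmin(chi2), chi2.shape)
print(Ms[iM], Gs[iG])  # recovered mass and width, close to 1.895 and 0.15
```

In the actual analysis the same idea is applied with the full isobar amplitude and the couplings also left free, but the logic of scanning the resonance parameters against the data is identical.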
Clearly, one test is to confront its predictions with the extracted couplings for the well-established resonances in the low-energy regime of the $`p(\gamma ,K^+)\mathrm{\Lambda }`$ reaction, the $`S_{11}(1650)`$, $`P_{11}(1710)`$ and $`P_{13}(1720)`$ excitations. Table 1 shows that the magnitudes of the extracted partial widths for the $`S_{11}(1650)`$, $`P_{11}(1710)`$, and $`P_{13}(1720)`$ are in good agreement with the quark model. Therefore, even though the striking quantitative agreement for the decay widths of the $`D_{13}`$(1895) is probably accidental, we believe the structure in the SAPHIR data is in all likelihood produced by a state with these quantum numbers. Further evidence for this conclusion is found below in our discussion of the recent JLab kaon electroproduction data. As shown in Ref. , the difference between the two calculations is much smaller for the differential cross sections. Including the $`D_{13}`$(1895) does not affect the threshold and low-energy regime, while it does improve the agreement at higher energies. The difference between the two models can be seen more clearly in Fig. 2, where the differential cross section is plotted in a three-dimensional form. As shown by the lower part of Fig. 2, the signal for the missing resonance at $`W`$ around 1900 MeV is most pronounced in the forward and backward directions. Therefore, in order to see such an effect in the differential cross section, angular bins should be finer in these two kaon directions. Figure 3 shows that the influence of the new state on the recoil polarization is rather small for all angles, which demonstrates that the recoil polarization is not the appropriate observable to further study this resonance. On the other hand, the photon asymmetry of $`K^+\mathrm{\Lambda }`$ photoproduction shows larger variations between the two calculations, especially at higher energies.
Here the inclusion of the new state leads to a sign change in this observable, a signal that should be easily detectable by experiments with linearly polarized photons. Figure 4 shows double polarization observables for an experiment with circularly polarized photons and a polarized recoil. As expected, we find no influence of the $`D_{13}`$(1895) at threshold. At resonance energies there are again clear differences between the two predictions. ## 4 Results from Kaon Electroproduction: Electromagnetic Form Factors of the $`D_{13}`$(1895) All previous descriptions of the kaon electroproduction process have performed fits to both photo- and electroproduction data simultaneously, in an attempt to provide a better constraint on the coupling constants. This method clearly runs the danger of obscuring - rather than clarifying - the underlying production mechanism. For example, anomalous behavior of the response functions in a certain $`k^2`$ range would be parameterized into the effective coupling constants, rather than be expressed in a particular form factor. Here, we adopt the philosophy used in pion electroproduction over the last decade: we demand that the kaon electroproduction amplitude be fixed at the photon point by the fit to the photoproduction data. Thus, all hadronic couplings, photocouplings and hadronic form factors are fixed; the only remaining freedom comes from the electromagnetic form factors of the included mesons and baryons. Extending our isobar model of Ref. to finite $`k^2`$ requires the introduction of additional contact terms in the Born sector in order to properly incorporate gauge invariance . We choose standard electromagnetic form factors for the nucleon ; for the hyperons we use the hybrid vector meson dominance model .
We use monopole form factors for the meson resonances, with their cut-offs taken as free parameters, determined to be $`\mathrm{\Lambda }=1.107`$ GeV and $`0.525`$ GeV for the $`K^{}`$(892) and $`K_1`$(1270), respectively. That leaves the resonance form factors to be determined, which in principle can be obtained from pion electroproduction. In practice, the quality of the data at higher W has not permitted such an extraction. For the $`S_{11}`$(1650) state, we use a parameterization given by Ref. . For the $`P_{11}`$(1710), $`P_{13}(1720)`$ and $`D_{13}`$(1895) states we adopt the following functional form for their Dirac and Pauli form factors $`F_1`$ and $`F_2`$: $`F(k^2)`$ $`=`$ $`\left(1-{\displaystyle \frac{k^2}{\mathrm{\Lambda }^2}}\right)^n,`$ (1) with the parameters $`\mathrm{\Lambda }`$ and $`n`$ to be determined by the kaon electroproduction data. The resulting parameters are listed in Table 2. Figure 5 shows the result of our fit. Clearly, the amplitude that includes the $`D_{13}(1895)`$ resonance yields much better agreement with the new experimental data from Hall C at JLab. The model without this resonance produces a transverse cross section which drops monotonically as a function of $`k^2`$, while in the longitudinal case this model dramatically underpredicts the data at small momentum transfer. At $`W=1.83`$ GeV the data are close in energy to the new state, thus allowing us to study the $`k^2`$ dependence of its form factors. The contribution of the Born terms is negligibly small for the transverse cross section but remains sizeable for the longitudinal one. We point out that without the $`D_{13}(1895)`$ we did not find a reasonable description of the JLab data, even if we provided for maximum flexibility in the functional form of the other resonance form factors. The same holds true if the new resonance is assumed to be an $`S_{11}`$ or a $`P_{11}`$ state.
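As a concrete reading of Eq. (1): with the spacelike convention $`k^2<0`$, an exponent $`n=-1`$ recovers the familiar monopole shape $`1/(1+Q^2/\mathrm{\Lambda }^2)`$ with $`Q^2=-k^2`$. The short sketch below evaluates this with the $`K^{}`$(892) cutoff quoted above; the kinematic points and the choice $`n=-1`$ are our own illustrative assumptions, not the fitted exponents of Table 2.

```python
def form_factor(k2, lam, n):
    """Eq. (1): F(k^2) = (1 - k^2/lam^2)^n, with k^2 < 0 spacelike,
    so that F falls with increasing Q^2 = -k^2 whenever n < 0."""
    return (1.0 - k2 / lam**2) ** n

# Monopole (n = -1) with the fitted K*(892) cutoff lam = 1.107 GeV; Q^2 in GeV^2.
for Q2 in (0.0, 0.25, 0.52, 1.0):
    print(Q2, form_factor(-Q2, 1.107, -1))
```

The normalization $`F(0)=1`$ guarantees that the electroproduction amplitude reduces to the photoproduction one at the photon point, which is the constraint imposed in the fit.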
Even including an additional $`P_{13}`$ state around 1900 MeV does not improve the fit to the electroproduction data. It is only with the interference of the two form factors given by the coupling structure of a different spin-parity state, viz. the $`D_{13}`$, that a description becomes possible. We therefore find that these new kaon electroproduction data provide additional evidence supporting our suggestion that the quantum numbers of the new state indeed correspond to a $`D_{13}`$. The form factors extracted for the $`D_{13}(1895)`$ are shown in Fig. 6, in comparison to the Dirac and Pauli form factors of the proton and those of the $`\mathrm{\Delta }`$(1232). While the $`F_2(k^2)`$ form factors look similar for all three baryons, $`F_1(k^2)`$ of the $`D_{13}(1895)`$ resonance falls off dramatically at small $`k^2`$. It is the behavior of this form factor that leads to the structure of the transverse and longitudinal cross sections at $`k^2`$ = 0.2 - 0.3 GeV<sup>2</sup>; at higher $`k^2`$ both response functions are dominated by $`F_2(k^2)`$. The experimental exploration of the small-$`k^2`$ regime could therefore provide stringent constraints on the extracted form factors. ## 5 Conclusion We have investigated a structure around $`W=1900`$ MeV in the new SAPHIR total cross section data in the framework of an isobar model and found that the data can be well reproduced by including a new $`D_{13}`$ resonance with a mass, width and coupling parameters in good agreement with the values predicted by the recent quark model calculation of Ref. . To further elucidate the role and nature of this state we suggest measurements of the polarized photon asymmetry around $`W=1900`$ MeV for the $`p(\gamma ,K^+)\mathrm{\Lambda }`$ reaction. Furthermore, we extended our isobar description to kaon electroproduction by allowing only the electromagnetic resonance transition form factors to vary.
Employing the new JLab Hall C $`p(e,e^{\prime }K^+)\mathrm{\Lambda }`$ data at $`W=1.83`$ GeV, we find that a description of these data is only possible when the new $`D_{13}`$ state is included in the model. The dominance of this state at these energies allowed us to extract its transition form factors, one of which was found to be dramatically different from other resonance form factors. ## Acknowledgments We thank Gregor Penner for providing his parameterization of the $`\mathrm{\Delta }(1232)`$ form factor. This work was supported by the US DOE grant DE-FG02-95ER-40907 (CB and HH) and the University Research for Graduate Education (URGE) grant (TM).
no-problem/9909/cond-mat9909066.html
# Random matrix model for quantum dots with interactions and the conductance peak spacing distribution ## Abstract We introduce a random interaction matrix model (RIMM) for finite-size strongly interacting fermionic systems whose single-particle dynamics is chaotic. The model is applied to Coulomb blockade quantum dots with irregular shape to describe the crossover of the peak spacing distribution from a Wigner-Dyson to a Gaussian-like distribution. The crossover is universal within the random matrix model and is shown to depend on a single parameter: a scaled fluctuation width of the interaction matrix elements. The crossover observed in the RIMM is compared with the results of an Anderson model with Coulomb interactions. The transport properties of semiconductor quantum dots can be measured by connecting the dots to leads via point contacts . When these point contacts are pinched off, effective barriers are formed between the dot and the leads, and the charge on the dot is quantized. Adding an electron to the dot requires a charging energy $`E_C`$ to overcome the Coulomb repulsion with electrons already in the dot. This repulsion can be compensated by modifying the gate voltage $`V_g`$ on the dot. For temperatures below $`E_C`$, a series of Coulomb blockade oscillations is observed in the linear conductance as a function of $`V_g`$. For temperatures much smaller than the mean level spacing $`\mathrm{\Delta }`$, the conductance is dominated by resonant tunneling and the Coulomb blockade oscillations become a series of sharp peaks. In dots with irregular shapes, the classical single-electron dynamics is mostly chaotic. Quantum mechanically, chaotic systems are expected to exhibit universal fluctuations that are described by random matrix theory (RMT). The distributions and parametric correlations of the Coulomb blockade peak heights in quantum dots have been derived using RMT, and these predictions have been confirmed experimentally . 
Another quantity of recent experimental and theoretical interest is the peak spacing statistics. The peak spacing $`\mathrm{\Delta }_2`$ can be expressed as a second-order difference of the ground-state energy $`\mathcal{E}_{\mathrm{g}.\mathrm{s}.}^{(n)}`$ of the $`n`$-electron dot as a function of the number of electrons: $`\mathrm{\Delta }_2=\mathcal{E}_{\mathrm{g}.\mathrm{s}.}^{(n+1)}+\mathcal{E}_{\mathrm{g}.\mathrm{s}.}^{(n-1)}-2\mathcal{E}_{\mathrm{g}.\mathrm{s}.}^{(n)}.`$ (1) Using the constant interaction model (which ignores interactions except for a classical Coulomb energy of $`n^2E_C/2`$), and assuming a single-particle spectrum that is independent of $`n`$, $`\mathrm{\Delta }_2=E_{n+1}-E_n+E_C`$, where $`E_n`$ is the $`n`$-th single-particle energy. Within this model, RMT suggests a Wigner-Dyson distribution of the peak spacings with a width of $`\mathrm{\Delta }/2`$. However, recent experiments find a distribution that is Gaussian-like and has a larger width . This observation underlines the limitations of the constant interaction model and the importance of electron-electron interactions beyond an average Coulomb energy. Some observed features of the peak spacing distribution have been reproduced using exact numerical diagonalization of small disordered dots ($`n\lesssim 10`$) with Coulomb interactions . The width of the distribution is found to increase monotonically with the gas parameter $`r_s`$. Analytic RPA estimates in a disordered dot for small values of $`r_s`$ give peak spacing fluctuations that are larger than those of RMT but still of the order of $`\mathrm{\Delta }`$ . Recent Hartree-Fock calculations of larger disordered and chaotic dots with interactions (up to $`n\sim 50`$ electrons) also reproduce Gaussian-like peak spacing distributions. The above studies were carried out for particular models of quantum dots using Coulomb as well as nearest-neighbor interactions, and it is not clear how generic the conclusions are.
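The Wigner-Dyson width of $`\mathrm{\Delta }/2`$ quoted above (more precisely, a standard deviation of about 0.52 times the mean spacing) can be checked with a quick Monte Carlo draw: the level spacing of a $`2\times 2`$ GOE matrix follows the Wigner surmise exactly. The sample size and seed below are arbitrary choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x1, x2 = rng.normal(size=(2, n))              # diagonal entries, variance 1
x3 = rng.normal(scale=np.sqrt(0.5), size=n)   # off-diagonal entry, variance 1/2
# Eigenvalue spacing of [[x1, x3], [x3, x2]] is sqrt((x1-x2)^2 + 4 x3^2),
# which is Wigner-surmise distributed: P(s) ~ s exp(-c s^2).
s = np.sqrt((x1 - x2) ** 2 + 4 * x3**2)
print(s.std() / s.mean())                     # ~0.52: the width in units of the mean spacing
```

Analytically the ratio is $`\sqrt{(4-\pi )/\pi }\approx 0.523`$, which is the GOE value of 0.52 referred to later in the text.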
It is also not obvious which bare electron-electron interaction should be taken to represent the experiments, because of screening generated by external charges. It is therefore important to find out whether the observed statistics of the peak spacings is generic, and in particular whether it can be reproduced by a modified random matrix model. Standard RMT makes no explicit reference to interactions or to the number of particles. To study generic interaction effects on the statistics, we need a random matrix model in which the two-body interactions are distinguished from the one-body part of the Hamiltonian. Recently a two-body random interaction model (TBRIM) introduced in nuclear physics was used together with a diagonal random one-body Hamiltonian to study thermalization in finite-size systems and the crossover from Poisson to Wigner-Dyson statistics in many-body systems . The model explains statistical features observed in atomic and nuclear shell model calculations. However, the Poisson statistics that was used as the non-interacting limit of the model is not suitable for the study of dots whose single-electron dynamics is chaotic. In this paper we introduce a random interaction matrix model (RIMM) for strongly interacting Fermi systems whose single-particle dynamics is chaotic. With this model we can study generic and universal effects associated with the interplay of one-body chaos and two-body interactions. In particular, we apply the model to study the peak spacing statistics and find a crossover from a Wigner-Dyson distribution to a Gaussian-like distribution as a function of a parameter that measures the fluctuations of the interaction matrix elements. The crossover depends on both the number of particles and the number of single-particle orbits but becomes universal upon an appropriate scaling of the interaction strength.
The crossover is demonstrated in a model of a small disordered dot with Coulomb interactions, and we show that the results can be scaled approximately onto those of the RIMM. A general Hamiltonian for spinless interacting fermions has the form $`H={\displaystyle \sum _{ij}}h_{ij}a_i^{\dagger }a_j+{\displaystyle \frac{1}{4}}{\displaystyle \sum _{ijkl}}\overline{u}_{ij;kl}a_i^{\dagger }a_j^{\dagger }a_la_k,`$ (2) where $`h_{ij}`$ are the matrix elements of the one-body Hamiltonian and $`\overline{u}_{ij;kl}=\langle ij|u|kl\rangle -\langle ij|u|lk\rangle `$ are the antisymmetrized matrix elements of the two-body interaction. The states $`|i\rangle =a_i^{\dagger }|0\rangle `$ describe a fixed basis of $`m`$ single-particle states. We define an ensemble of random matrices $`H`$ of the form (2), where the one-body $`m\times m`$ matrix $`h_{ij}`$ belongs to the Gaussian ensemble of symmetry class $`\beta `$, and the matrix elements of the two-body interaction are real independent Gaussian variables with zero average and variance $`U^2`$ ($`\frac{1}{2}U^2`$) for the diagonal (off-diagonal) elements: $`P(h)\propto e^{-\frac{\beta }{2a^2}\mathrm{Tr}h^2};P(\overline{u})\propto e^{-\mathrm{Tr}\overline{u}^2/2U^2}.`$ (3) Eqs. (2) and (3) define the RIMM. The parameter $`a`$ determines the single-particle level spacing $`\mathrm{\Delta }`$. In the non-interacting limit $`U=0`$, the random ensemble describes the universal statistical properties of a system whose single-particle dynamics is chaotic. For conserved time-reversal symmetry $`h`$ is a GOE matrix, while for broken time-reversal symmetry, e.g., in the presence of an external magnetic field, $`h`$ becomes a GUE matrix. The random ensemble for the two-body part of the Hamiltonian (2) is invariant under orthogonal transformations of the single-particle basis. An average interaction that is invariant under such transformations can be included in the model, but it leads to a constant charging-energy shift in $`\mathrm{\Delta }_2`$ and does not affect the peak spacing statistics.
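A single realization of the ensemble defined by Eqs. (2) and (3), together with the non-interacting limit of the peak spacing (1), can be sketched in a few lines. This is our own bookkeeping (pair-index ordering, seed, sizes), not the authors' code; the two-body matrix is kept in its symmetric pair-space form.

```python
import numpy as np

def sample_rimm(m, a, U, rng):
    """One realization of the RIMM, Eqs. (2)-(3): a GOE one-body matrix h
    (symmetry class beta = 1) and a real symmetric matrix of antisymmetrized
    two-body elements ubar, indexed by ordered orbital pairs (i < j)."""
    h = rng.normal(scale=a, size=(m, m))
    h = (h + h.T) / 2                 # diagonal variance a^2, off-diagonal a^2/2: GOE
    pairs = [(i, j) for i in range(m) for j in range(i + 1, m)]
    P = len(pairs)
    ubar = np.empty((P, P))
    for p in range(P):
        for q in range(p, P):
            # Eq. (3): variance U^2 on the diagonal, U^2/2 off the diagonal
            sig = U if p == q else U / np.sqrt(2)
            ubar[p, q] = ubar[q, p] = rng.normal(scale=sig)
    return h, ubar, pairs

def delta2_free(h, n):
    """Peak spacing (1) in the U = 0 limit: the n-fermion ground state fills
    the lowest n orbitals of h, so Delta_2 reduces to eps_{n+1} - eps_n."""
    eps = np.sort(np.linalg.eigvalsh(h))
    gs = lambda k: eps[:k].sum()      # non-interacting ground-state energy of k fermions
    return gs(n + 1) + gs(n - 1) - 2 * gs(n)

rng = np.random.default_rng(0)
h, ubar, pairs = sample_rimm(12, 1.0, 0.5, rng)
print(delta2_free(h, 4))              # one non-interacting peak spacing
```

Collecting `delta2_free` over many realizations reproduces the Wigner-Dyson limit; at finite $`U`$ the ground-state energies must instead come from diagonalizing the full many-body Hamiltonian built from `h` and `ubar`.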
The randomness of the two-body interaction matrix elements is motivated by the strong fluctuations of the Coulomb interaction matrix elements in the basis of eigenstates of the chaotic single-particle Hamiltonian. The RIMM differs from the TBRIM in its one-body part, which is less relevant at the high excitation energies considered in the earlier studies but is of crucial importance near the ground state. The TBRIM has Poissonian statistics in the non-interacting limit, in contrast to the Wigner-Dyson statistics characterizing the non-interacting limit of (2). Our random interaction matrix model is suitable for describing the generic statistical fluctuations in quantum dots with chaotic single-particle dynamics and in the presence of electron-electron interactions. The model depends on three parameters: $`U/\mathrm{\Delta }`$, the number of single-particle orbits $`m`$, and the number of particles $`n`$. Next we apply the RIMM (2) to study the peak spacing statistics in Coulomb blockade quantum dots. Peak spacings are computed using (1); i.e., the ground-state energy is calculated for three consecutive numbers of particles $`n-1`$, $`n`$ and $`n+1`$, and statistics are collected by generating realizations of the ensemble $`H`$. Typical distributions of $`\stackrel{~}{\mathrm{\Delta }}_2\equiv (\mathrm{\Delta }_2-\langle \mathrm{\Delta }_2\rangle )/\mathrm{\Delta }`$ for the case of conserved time-reversal symmetry ($`h`$ is a GOE matrix) are shown in Fig. 1 for several values of $`U/\mathrm{\Delta }`$. For the non-interacting case we obtain the Wigner-Dyson distribution (dashed line), but as $`U/\mathrm{\Delta }`$ increases a crossover is observed to a Gaussian-like distribution. The distributions for $`U/\mathrm{\Delta }=1.1`$ and 1.8 are compared with a Gaussian of the same width (solid lines). The model (2) does not include spin and therefore cannot reproduce the expected bimodal structure of the peak spacing distribution at weak interactions.
However, numerical simulations in small disordered dots indicate that this bimodal structure disappears already for weak interactions . The standard deviation of the spacing fluctuations $`\sigma (\stackrel{~}{\mathrm{\Delta }}_2)`$ (in units of $`\mathrm{\Delta }`$) is shown in the top panel of Fig. 2 vs. $`U/\mathrm{\Delta }`$ for $`n=4`$ and several values of $`m`$. The statistical errors (due to the finite number of samples) are also estimated but are smaller than the size of the symbols. At zero interaction we are close to the GOE value of $`0.52`$. $`\sigma (\stackrel{~}{\mathrm{\Delta }}_2)`$ increases slowly at first and then more rapidly above $`U/\mathrm{\Delta }\approx 0.5`$. At strong interactions it is approximately linear in $`U/\mathrm{\Delta }`$. The top inset of Fig. 2 shows similar curves of the spacing fluctuations but for a constant number of single-particle states $`m=14`$ and for several values of $`n`$. The standard deviation curve versus $`U/\mathrm{\Delta }`$ depends on both $`m`$ and $`n`$. However, upon the linear scaling $`U_{\mathrm{eff}}=f(m,n)U/\mathrm{\Delta }`$ ($`f(m,n)`$ is a scaling constant) all curves coalesce onto a single universal curve. To demonstrate the scaling we first choose a ‘reference’ curve, e.g., $`m=12`$ and $`n=4`$, which we determine accurately using 10,000 realizations at each value of $`U/\mathrm{\Delta }`$. For other values of $`(m,n)`$ we use typically 1000-5000 realizations and find the scaling factors $`f(m,n)`$ by a least-squares fit. The bottom panel of Fig. 2 shows the same curves as the top panel (solid symbols) in comparison with the reference curve (solid line), but as a function of the scaled parameter $`U_{\mathrm{eff}}`$. The curves scale almost perfectly within the statistical errors. Also shown are scaled GUE curves (open symbols) for $`n=4`$ in comparison with the GUE reference curve (dashed line). The scaling factor $`f(m,n)`$ is shown in the left inset of the bottom panel of Fig.
2 as a function of $`n`$ for different values of $`m`$ and for both the GOE (solid symbols with error bars) and the GUE (large open symbols) cases. Within the statistical errors $`f(m,n)`$ is independent of the symmetry class, supporting the universality of our model. The width of the spacing distribution is larger for the GOE case than for the GUE case at any value of $`U_{\mathrm{eff}}`$. The bottom right inset of Fig. 2 shows the ratio $`\sigma _{\mathrm{GOE}}(\stackrel{~}{\mathrm{\Delta }}_2)/\sigma _{\mathrm{GUE}}(\stackrel{~}{\mathrm{\Delta }}_2)`$ versus $`U_{\mathrm{eff}}`$, calculated from the reference curves. The ratio is $`1.24\pm 0.02`$ for the non-interacting case (in close agreement with the RMT value), and depends only weakly on the interaction in the crossover regime $`U_{\mathrm{eff}}\lesssim 1`$. This is consistent with recent measurements which find a ratio of 1.2-1.3 for semiconductor quantum dots with a gas parameter of $`r_s\approx 1`$-$`2`$. At stronger interactions the ratio decreases. At large values of $`U`$, the two-body Hamiltonian dominates and one can ignore the one-body part. Since it is only the latter that distinguishes between the conserved and broken time-reversal symmetry cases, the ratio of the widths approaches $`1`$ at strong interactions. Next we compare the crossover observed in the RIMM to the results for a tight-binding Anderson model with cylindrical geometry, hopping parameter $`V=1`$, and on-site disorder with a box distribution of width $`W`$. Electrons at different sites interact via a Coulomb interaction whose strength over one lattice constant $`a`$ is $`U_c=e^2/a`$. The standard deviation of the peak spacing $`\sigma (\mathrm{\Delta }_2)`$ is shown in the left panel of Fig. 3 versus $`U_c/V`$ for a $`4\times 5`$ lattice, $`n=4`$ electrons and several values of $`W`$. The values of $`W`$ are chosen so that the RMT statistics is approximately satisfied in the non-interacting case. In the absence of a magnetic field we choose $`W=3,5,7`$.
However, in the presence of a magnetic flux, which we apply inside the cylinder and incorporate in the hopping matrix elements in the perpendicular direction ($`\mathrm{\Phi }=0.15\mathrm{\Phi }_0`$), only the $`W=5`$ case satisfies the spectral RMT statistics. We find that $`\sigma _{\mathrm{B}=0}(\mathrm{\Delta }_2)`$ increases monotonically with $`W`$. After rescaling $`\sigma `$ by the mean level spacing $`\mathrm{\Delta }`$ at the Fermi energy, the residual $`W`$-dependence of $`\sigma (\mathrm{\Delta }_2)/\mathrm{\Delta }`$ is rather weak (top right panel of Fig. 3). The standard deviation curves for the Coulomb model can be mapped approximately onto the random matrix model curve (bottom right panel of Fig. 3) by defining $`U_{\mathrm{eff}}=c_0U_c/V`$ for some constant $`c_0`$ that depends on the disorder strength and lattice size; $`c_0`$ depends only weakly on $`W`$. The universal aspects of the crossover can also be investigated by studying the peak spacing distributions themselves. In Fig. 4 we show the peak spacing distributions for three different values of $`(m,n)`$ but at constant $`U_{\mathrm{eff}}=0.33`$ (corresponding to three different values of $`U/\mathrm{\Delta }`$). We find that all three distributions coincide, indicating that finite-size effects in the random matrix model are negligible. A corresponding distribution for the Coulomb model is also shown for $`U_c/V=0.75`$, $`W=5`$ and $`n=4`$ and is rather close to the random matrix model distributions. The deviations seen in the Coulomb case may be partly due to finite-size effects that are non-universal; even at $`U_c=0`$ we observe deviations from the expected Wigner-Dyson distribution.
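The single-particle part of the disordered dot used in this comparison can be set up as follows. This is a sketch under our own conventions (the paper does not spell out its gauge choice): the flux enters as a uniform Peierls phase on every hop around the periodic direction, so one full loop accumulates the phase $`2\pi \mathrm{\Phi }/\mathrm{\Phi }_0`$.

```python
import numpy as np

def anderson_cylinder(Lx, Ly, W, phi=0.0, V=1.0, seed=None):
    """Tight-binding Anderson Hamiltonian on an Lx x Ly cylinder (periodic
    along y, open along x) with box-distributed on-site disorder of width W
    and a flux phi (in units of Phi_0) threaded through the cylinder."""
    rng = np.random.default_rng(seed)
    N = Lx * Ly
    idx = lambda x, y: x * Ly + (y % Ly)
    H = np.zeros((N, N), dtype=complex)
    H[np.arange(N), np.arange(N)] = rng.uniform(-W / 2, W / 2, N)  # disorder
    t_y = -V * np.exp(2j * np.pi * phi / Ly)   # Peierls phase per hop along y
    for x in range(Lx):
        for y in range(Ly):
            i = idx(x, y)
            if x + 1 < Lx:                      # open boundary along x
                H[i, idx(x + 1, y)] = H[idx(x + 1, y), i] = -V
            j = idx(x, y + 1)                   # periodic along y
            H[i, j] += t_y
            H[j, i] += np.conj(t_y)
    return H

H = anderson_cylinder(4, 5, W=5.0, phi=0.15, seed=0)
eps = np.linalg.eigvalsh(H)                     # real spectrum of the 20-site dot
print(eps[:4])
```

Adding the Coulomb term $`U_c`$ between occupied sites and diagonalizing in the few-electron Fock space then gives the ground-state energies entering Eq. (1); at $`\varphi =0`$ the matrix is real symmetric (GOE-like), while a finite flux makes it genuinely complex Hermitian (GUE-like).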
In conclusion, we showed that a random interaction matrix model that includes a one-body part belonging to one of the standard Gaussian ensembles and a two-body random interaction is suitable for studying generic interaction effects on the statistics of finite Fermi systems whose single-particle dynamics is chaotic. We applied the model to chaotic dots in the Coulomb blockade regime, where it describes a crossover of the peak spacing statistics from a Wigner-Dyson to a Gaussian-like distribution. This work was supported in part by the U.S. Department of Energy grant No. DE-FG-0291-ER-40608 and the Swiss National Science Foundation. A.W. acknowledges financial support from the Studienstiftung des deutschen Volkes.
no-problem/9909/physics9909056.html
# Light-induced conversion of nuclear spin isomers of molecules ## Abstract We report on a new phenomenon: the conversion of nuclear spin isomers of molecules in the field of resonant laser radiation. It is based on isomer-selective optical excitation and the difference in conversion rates between excited and unexcited molecules. We expect a considerable magnitude of the effect for all molecules which absorb the radiation of existing lasers. PACS: 34.30+h In a recent work a phenomenon of enrichment of nuclear spin isomers of molecules was predicted. The enrichment appears in the course of selective laser excitation of definite spin isomers. It was expected that the proposed enrichment method would be most applicable to molecules in which the nuclear spin conversion is induced by an intramolecular spin-state mixing interaction. As was shown in , the enrichment could be very high under an appropriate frequency of the exciting radiation. On the other hand, strict limitations were placed on the laser frequency. In particular, the capabilities of the $`CO_2`$ laser and other sources appear to be insufficient for the $`CH_3F`$ molecule, which is otherwise a very convenient object for investigation of the enrichment phenomenon. The efficiently absorbed lines of the $`CO_2`$ laser in $`CH_3F`$ do not satisfy the conditions from . Thus the realization of the idea is difficult because of the problem of finding a compatible object and radiation source. As it turns out, the frequency constraints from may be removed without any considerable loss of enrichment. Only one condition on the frequency remains: the radiation should be effectively absorbed on a vibrational or electronic transition from the molecular ground state. This relaxation radically expands the set of relevant objects (those demonstrating light-induced conversion with available radiation sources).
The physical nature of the proposed phenomenon is quite similar to that of light-induced drift (LID). (In particular, the separation of nuclear spin isomers of molecules was first realized on the basis of LID and other light-induced gas-kinetic effects; see, e.g., .) Their common essence is the following: if radiation excites particles selectively with respect to some physical parameter, then the difference in the relaxation rates of this parameter for excited and unexcited particles shifts the mean stationary value of the parameter away from equilibrium. Here it is the particular type of nuclear spin isomer that plays the role of the mentioned parameter; the concentration of the isomer gives its numerical mean value. Radiation excites molecules selectively with respect to their nuclear spin modification. The conversion rates are generally different for excited and unexcited molecules in any conversion scenario. The difference should be most dramatic when the conversion is caused by intramolecular magnetic interaction. This conversion mechanism, which was unambiguously proved for $`CH_3F`$ \[5-12\] and which seemingly works in other heavy and complicated molecules, has a resonance nature: the conversion rate increases drastically when the quantum states of the mixing spin isomers are nearly degenerate. In the set of all populated rotational levels of a given spin isomer the resonance condition may distinguish very few levels, or even a single one. Through this “most resonant” level the conversion takes place. The realization of identical resonance conditions for rotational levels of different vibrational (electronic) states is unfeasible, which leads to different conversion rates in these states. In what follows we will keep in mind the mechanism of conversion induced by intramolecular mixing. For simplicity we assume that there are only two nuclear spin modifications: ortho- and para-molecules (see Fig. 1).
Let the laser radiation interact with ortho-isomers only. The population distribution over rotational levels is assumed to be at equilibrium in every vibrational state. The latter assumption is justified when the vibrational relaxation is sufficiently slow in comparison with the rotational one, which can be ensured by admixing a suitable buffer gas. Under these conditions we have $`{\displaystyle \frac{dN_o}{dt}}=\gamma ^{*}(N_p^{*}-N_o^{*})+\gamma [(N_p-N_p^{*})-(N_o-N_o^{*})]`$ $`=(\gamma ^{*}-\gamma )(N_p^{*}-N_o^{*})+\gamma (N_p-N_o);`$ (1) $$\gamma ^{*}\equiv \underset{J_o^{*}}{\sum }\gamma _{op}^{*}W(J_o^{*}),\gamma \equiv \underset{J_o}{\sum }\gamma _{op}W(J_o).$$ Here $`N_o`$, $`N_p`$ are, respectively, the concentrations of ortho- and para-isomers; $`\gamma `$, $`\gamma ^{*}`$ are the conversion rates. The asterisk indicates quantities for the excited vibrational state. $`\gamma _{op}W(J_o)`$ gives the partial conversion rate through the rotational $`J_o`$-level of ortho-isomers; $`W(J_o)`$ is the Boltzmann factor. It follows from (1) that in the absence of radiation ($`N_p^{*}=N_o^{*}=0`$) and under stationary conditions, one has $`N_p=N_o`$. This is the thermodynamic equilibrium. The equilibrium will not be disturbed by radiation if $`\gamma ^{*}=\gamma `$. In the other case ($`\gamma ^{*}\ne \gamma `$) a new stationary situation takes place: $$N_p-N_o=\left(1-\frac{\gamma ^{*}}{\gamma }\right)(N_p^{*}-N_o^{*}).$$ (2) Although only ortho-isomers are excited by the laser beam, excited para-isomers will appear ($`N_p^{*}\ne 0`$) due to resonant excitation transfer. The conversion is the slowest relaxation process in our system. Hence the level populations instantly follow the current concentrations of ortho- and para-isomers.
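Explicitly, the stationary relation (2) follows from setting $`dN_o/dt=0`$ in (1):

```latex
0=(\gamma^{*}-\gamma)(N_p^{*}-N_o^{*})+\gamma\,(N_p-N_o)
\;\Longrightarrow\;
N_p-N_o=-\frac{\gamma^{*}-\gamma}{\gamma}\,(N_p^{*}-N_o^{*})
        =\Bigl(1-\frac{\gamma^{*}}{\gamma}\Bigr)(N_p^{*}-N_o^{*}).
```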
In particular, for excited para-isomers the following balance equation is valid: $$\alpha N_o^{*}(N_p-N_p^{*})-\alpha N_p^{*}(N_o-N_o^{*})=\mathrm{\Gamma }_vN_p^{*}.$$ (3) Here the first term on the lhs describes the creation of excited para-isomers due to excitation exchange with ortho-isomers; the second term corresponds to the reverse process. The coefficient $`\alpha `$ gives the excitation exchange rate. The term on the rhs of (3) is due to the decay of the excited vibrational state. As follows from (3), $$N_p^{*}=\frac{\alpha N_p}{\mathrm{\Gamma }_v+\alpha N_o}N_o^{*}.$$ (4) By a proper choice of buffer gas and its pressure, one can guarantee $$\mathrm{\Gamma }_v\gg \alpha (N_o+N_p).$$ (5) Then $`N_p^{*}\ll N_o^{*}`$, and the quantity $`N_p^{*}`$ in (2) may be neglected. In this case the deviation of the ortho- and para-concentrations from equilibrium is maximal and most simply related to the fraction of excited ortho-isomers. It is known that under specific conditions (sufficiently high radiation intensity, proper relations between relaxation rates) this fraction can reach 1/2. Hence it follows from (2) that $$\frac{N_p}{N_o}=\frac{1}{2}\left(1+\frac{\gamma ^{*}}{\gamma }\right).$$ (6) The two limiting cases of this expression are clearly interpretable. If $`\gamma ^{*}\ll \gamma `$, then $`N_o=2N_p`$, because the conversion hardly proceeds through the excited vibrational state, whereas the conversion through the ground state equalizes the ortho- and para-concentrations in this state. Remember that the concentration of ortho-isomers in the ground state is half of their total concentration. In the opposite limiting case ($`\gamma ^{*}\gg \gamma `$) we have $`N_o\ll N_p`$; that is, almost all molecules are converted into the para-modification due to conversion through the excited state. Note that the reverse process is suppressed because para-isomers are for the most part in the ground state.
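The two limits of Eq. (6) can be checked with a few lines (a trivial numerical sketch; the rate values are arbitrary):

```python
def para_fraction_ratio(gamma_star, gamma):
    """Steady-state N_p / N_o from Eq. (6), assuming a saturated excited-state
    fraction N_o* = N_o / 2 and negligible N_p* (condition (5))."""
    return 0.5 * (1.0 + gamma_star / gamma)

print(para_fraction_ratio(1e-6, 1.0))  # gamma* << gamma: ratio near 1/2, i.e. N_o = 2 N_p
print(para_fraction_ratio(1e3, 1.0))   # gamma* >> gamma: ratio >> 1, nearly all para
```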
Let us discuss the conditions under which the rotational distributions in the vibrational states remain close to Boltzmann distributions. If the laser radiation is absorbed on a rovibrational transition, the relative excess of the population of the upper level $`J_L`$ over the amplitude of the Boltzmann ”background” is determined by the parameter (see, e.g., ) $$a=\frac{\mathrm{\Gamma }_v}{\mathrm{\Gamma }W(J_L)},$$ (7) where $`\mathrm{\Gamma }`$ is the impact line halfwidth, which is mainly determined by the rotational relaxation (the broadening is assumed homogeneous), and $`W(J_L)`$ is the Boltzmann factor of the level $`J_L`$. The parameter $`a`$ can be small even for a pure gas of converting molecules, and it can be made still smaller by admixing a buffer gas whose cross-section for vibrational deexcitation is small in comparison with its broadening cross-section. When $`a\ll 1`$ the distribution of the population over the rotational levels may safely be approximated by the Boltzmann distribution. The total population of the excited vibrational state of the ortho-molecules is given in this case by the relation $$N_o^{*}=N_o\frac{\kappa /2}{1+\kappa +\mathrm{\Omega }^2/\mathrm{\Gamma }^2};\kappa =\frac{4|G|^2}{\mathrm{\Gamma }_v\mathrm{\Gamma }}W(J_L);G=\frac{Ed}{2\mathrm{\hslash }}.$$ (8) Here $`\mathrm{\Omega }`$ is the frequency detuning; $`E`$ is the amplitude of the radiation field; $`d`$ is the dipole moment of the resonant transition; $`\kappa `$ is the so-called saturation parameter. It is evident from (8) that for a high saturation parameter one has $`N_o^{*}=N_o/2`$, which was used in (6). Let us estimate the possible size of the effect for $`{}_{}{}^{13}CH_3F`$ molecules, which are in good resonance ($`\mathrm{\Omega }\approx 26`$ MHz, with a Doppler width $`\approx 40`$ MHz) with the $`CO_2`$ laser (the line $`P(32)`$ of the 9.6 $`\mu `$m band) on the transition $`R(4,3)`$. In this case only the ortho-isomers absorb the radiation. We assume an absorption cell of length $`30`$ cm. 
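The saturation behaviour of Eq. (8) is easy to tabulate. A small sketch (the function name and parameter values are illustrative only):

```python
# Excited-state fraction N_o*/N_o from Eq. (8).
def excited_fraction(kappa, detuning=0.0, gamma=1.0):
    """(kappa/2) / (1 + kappa + Omega^2/Gamma^2); detuning plays Omega."""
    return 0.5 * kappa / (1.0 + kappa + (detuning / gamma) ** 2)

print(excited_fraction(50.0))  # ~0.49, already close to the saturated value 1/2
```

For a large saturation parameter the fraction approaches 1/2, the value used in deriving (6); a nonzero detuning only lowers it.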
Taking into account that the absorption coefficient is $`\approx 0.3`$ cm<sup>-1</sup>/Torr , a probe laser beam is efficiently absorbed over this length when the pressure of $`{}_{}{}^{13}CH_3F`$ is about 0.1 Torr. The following estimates are made for this value of the $`{}_{}{}^{13}CH_3F`$ pressure. The condition (5) is not met without special care. To increase $`\mathrm{\Gamma }_v`$ we assume the addition of a buffer gas of molecules with a suitable vibrational quantum, so that the vibrational excitation can be efficiently transferred from $`{}_{}{}^{13}CH_3F`$. To make this process irreversible, it is advantageous to choose a buffer gas with a slightly smaller vibrational quantum. Assuming the rate of excitation transfer from $`{}_{}{}^{13}CH_3F`$ to the buffer molecules to be comparable with the excitation exchange rate between two $`{}_{}{}^{13}CH_3F`$ molecules, and taking the buffer gas pressure to be 0.5 Torr, we get $`\mathrm{\Gamma }_v=5\alpha (N_o+N_p)`$. Thus, the condition (5) is fulfilled. Let us now consider the parameter (7). According to the data on the rate of excitation exchange between $`{}_{}{}^{13}CH_3F`$ and $`{}_{}{}^{12}CH_3F`$ ($`5\times 10^5`$ s<sup>-1</sup>/Torr ), we get $`\mathrm{\Gamma }_v\approx 40`$ kHz. For the total gas pressure and the broadening caused by the resonant buffer gas ($`\approx 20`$ MHz/Torr ), we have $`\mathrm{\Gamma }\approx 12`$ MHz. Taking into account that $`W(J_L)\approx 10^{-2}`$, the parameter is $`a\approx 1/3`$. This value is not sufficiently small. If, however, we add 10 Torr of $`He`$ as a second (nonresonant) buffer gas, then its broadening effect (3 MHz/Torr ) gives $`\mathrm{\Gamma }\approx 40`$ MHz. On the contrary, the value of $`\mathrm{\Gamma }_v`$ stays almost unchanged, because the deexcitation effect of the helium ($`10^2`$ Hz/Torr ) is negligible. Under these conditions we have $`a\approx 1/10`$, i.e., the requirement $`a\ll 1`$ is fulfilled. Note that the obtained value of $`\mathrm{\Gamma }`$ is approximately equal to the Doppler halfwidth. 
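The two estimates of the parameter of Eq. (7) can be reproduced directly from the numbers quoted above (the helper name is ours):

```python
# Eq. (7): a = Gamma_v / (Gamma * W(J_L)); all rates in Hz.
def boltzmann_excess(gamma_v, gamma, w_jl):
    return gamma_v / (gamma * w_jl)

a_before = boltzmann_excess(40e3, 12e6, 1e-2)  # resonant buffer only, ~1/3
a_after = boltzmann_excess(40e3, 40e6, 1e-2)   # with 10 Torr of He added, ~1/10
print(a_before, a_after)
```

Adding the nonresonant helium lowers the parameter purely through the broadening of the line, since the vibrational decay rate in the numerator is essentially unchanged.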
So the estimates, which are based on the assumption of homogeneous broadening, are justified. Finally, let us estimate the attainable saturation parameter from (8). For a radiation power density of $`10^3`$ W/cm<sup>2</sup> (a laser beam of 1 mm diameter and 10 W power) we have $`\kappa \approx 50`$, in accordance with . This guarantees saturation of the vibrational transition. Hence the experimental conditions suggested above are close to optimal for the manifestation of the light-induced conversion effect, and the simple expression (6) is quite suitable for estimates. It is worth noting that the fulfillment of (5) is not indispensable, at least in the case $`\gamma ^{*}\gg \gamma `$, where one should expect the brightest manifestation of the phenomenon. More detailed estimates show that even for $`\mathrm{\Gamma }_v\lesssim \alpha (N_o+N_p)`$ an almost complete conversion from ortho- to para-isomers takes place, provided $`N_o^{*}`$ is not too small in comparison with $`N_o`$. There is one further circumstance: an external electric field can radically affect the conversion rate and, as a consequence, the stationary state. This is due to the Stark shift of the levels through which the conversion goes (both in the ground and excited states). In this way the levels can be tuned into (or out of) resonance. The influence of an external electric field on the conversion of nuclear spin isomers of $`CH_3F`$ was rigorously demonstrated and investigated in . These works lead to the conclusion that the electric-field method of identifying the converting levels is promising. Summarizing, we have shown that in a system of nuclear spin isomers of molecules where only one isomer absorbs the resonant radiation, simple experimental conditions can shift the stationary concentrations of the isomers away from their equilibrium values; an almost total depletion of one of the isomers is possible. The work was supported in part by the program RFBR (grants 98-02-17924 and 98-03-33124a).
# Distinguishing Higgs Models at Photon Colliders<sup>1</sup> (<sup>1</sup>Contribution to the Proceedings of the International Workshop on Linear Colliders, April 28–May 5, 1999, Sitges, Spain) ## 1 Introduction Many recent studies for the TEVATRON, LHC and HERA assume, in fact, that Nature is so favorable that new particles are sufficiently light to be seen at these facilities. Here we should like to discuss the opposite scenario: no new particles are discovered at these facilities, except the Higgs scalar(s). In this scenario, additional particles could very well exist besides the Higgs boson, but they would have to be heavier than 1–2 TeV. Which new physics is realized could then be revealed at an $`e^+e^{-}`$ Linear Collider, where the direct couplings of the Higgs boson to matter will be measured with high accuracy. Other couplings, which only occur at the one-loop level, like $`h\gamma \gamma `$ and $`hZ\gamma `$, can only be studied with higher statistics, and then only at lower accuracy. The importance of these couplings is related to the fact that in the SM and in its extensions all fundamental charged particles run in the loop, so the structure of the theory influences the corresponding Higgs boson decays. The $`\gamma \gamma `$ and $`\gamma e`$ modes of a Linear Collider (Photon Colliders) provide an opportunity to measure these vertices with high accuracy, up to the 2% level or better for the $`\gamma \gamma `$ mode and up to a few percent for the $`Z\gamma `$ mode (reaction $`e\gamma \to eH`$). Therefore, the study of Higgs boson production at Photon Colliders could distinguish different models of new physics prior to the discovery of other related new particles. Frequently considered models going beyond the SM are the Two Higgs Doublet Model (2HDM) and the Minimal Supersymmetric Standard Model (MSSM). 
In the present paper we compare these loop-induced effective couplings $`h\gamma \gamma `$ and $`hZ\gamma `$ in the SM ($`h=H_{\mathrm{SM}}`$), in the 2HDM (Model II), where only the Higgs sector is enlarged compared to the SM, and in the MSSM, where the Higgs sector formally has the structure of Model II, but where the parameters are more constrained and, in addition, new supersymmetric particles appear. We study the properties of the couplings of the Higgs bosons to photon(s) for the case when the mass of the lightest Higgs particle $`h`$ in the 2HDM and the MSSM is in the region 100–130 GeV, which is still open for all three models. These effective couplings are to a large extent determined by the couplings of the Higgs particle to the $`W`$, to the $`b`$ and $`t`$ quarks, and to the charged Higgs boson. In terms of the parameters $`\beta `$ (which parameterizes the ratio of the two vacuum expectation values) and $`\alpha `$ (which parameterizes the mixing among the two neutral, $`CP`$-even Higgs particles), these couplings are given by $`g_{hWW}`$ $`\propto `$ $`\mathrm{sin}(\beta -\alpha )`$ $`g_{hbb}`$ $`\propto `$ $`-{\displaystyle \frac{\mathrm{sin}\alpha }{\mathrm{cos}\beta }}=\mathrm{sin}(\beta -\alpha )-\mathrm{tan}\beta \mathrm{cos}(\beta -\alpha )`$ $`g_{htt}`$ $`\propto `$ $`{\displaystyle \frac{\mathrm{cos}\alpha }{\mathrm{sin}\beta }}=\mathrm{sin}(\beta -\alpha )+{\displaystyle \frac{\mathrm{cos}(\beta -\alpha )}{\mathrm{tan}\beta }},`$ (1) as compared with the SM couplings. Within the scenario that no new physics, except the Higgs particle(s), is discovered at hadron or $`e^+e^{-}`$ colliders, we can imagine the two cases considered in the subsequent sections. ## 2 A light SM-like Higgs boson The first possibility is the following: all direct couplings are the same as in the SM with one Higgs doublet. How do we then determine whether the SM, the 2HDM or the MSSM is realized? 
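The trigonometric decompositions behind Eq. (1), namely that the down-type coupling factor equals sin(β−α) − tanβ·cos(β−α) and the up-type factor equals sin(β−α) + cos(β−α)/tanβ, can be verified numerically. A sketch (our own helper, with the couplings normalized to their SM values):

```python
import math

# Relative couplings of Eq. (1): (g_hWW, g_hbb, g_htt) versus the SM.
def couplings(beta, alpha):
    g_ww = math.sin(beta - alpha)
    g_bb = -math.sin(alpha) / math.cos(beta)  # = sin(b-a) - tan(b)*cos(b-a)
    g_tt = math.cos(alpha) / math.sin(beta)   # = sin(b-a) + cos(b-a)/tan(b)
    return g_ww, g_bb, g_tt

beta = math.atan(3.0)  # tan(beta) = 3, an arbitrary illustrative choice
print(couplings(beta, beta - math.pi / 2))  # SM-like limit: all ratios -> 1
```

In particular, for β − α = π/2 all three factors reduce to unity, which is the SM-like situation discussed in the next section.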
The answer can be obtained from a precise study of the two-photon Higgs width and the $`hZ\gamma `$ coupling at Photon Colliders, $`\gamma \gamma `$ and $`e\gamma `$, where the current estimate of the accuracy in the measurement of the first width is 2%. Indeed, in the SM these vertices are determined by contributions from $`W`$ loops and matter loops, entering with opposite signs. Because of this partial cancellation, the addition of new contributions can change these vertices significantly. This situation can be realized in the 2HDM with $`\beta -\alpha =\pi /2`$, leading to couplings to gauge bosons and fermions as in the SM (see above), and to some extent also in the MSSM. We have calculated the widths in these models for this case, as functions of $`\mathrm{tan}\beta `$, keeping $`\mathrm{sin}(\beta -\alpha )=1`$ for the 2HDM. However, the MSSM with given values of $`M_h`$ and $`\beta -\alpha `$ can be realized only at some definite value of $`\beta `$ (if the masses of the heavy squarks are roughly fixed), see Fig. 1. For the 2HDM, the difference with respect to the SM is in this case determined by the charged-Higgs-boson contribution only. The relevant quantity becomes $`b=1+(M_h^2-\lambda _5v^2)/(2M_{H^\pm }^2)`$. In the calculation for the general 2HDM(II) we assume $`\lambda _5v^2\ll M_h^2`$. The effect of the scalar loop is enhanced here due to the partial compensation of the $`W`$ and $`t`$-quark contributions. The result is evidently independent of the mixing angle $`\beta `$. We find, for a very heavy $`H^\pm `$ and $`M_h`$=100 GeV: $$\mathrm{\Gamma }_{h\gamma \gamma }^{2\mathrm{H}\mathrm{D}\mathrm{M}}/\mathrm{\Gamma }_{h\gamma \gamma }^{\mathrm{SM}}=0.89,\mathrm{\Gamma }_{hZ\gamma }^{2\mathrm{H}\mathrm{D}\mathrm{M}}/\mathrm{\Gamma }_{hZ\gamma }^{\mathrm{SM}}=0.96.$$ (2) The effect is of the order of 10%, a difference large enough to be observed. With growing $`\lambda _5`$ this effect decreases roughly as $`(1-\lambda _5v^2/2M_{H^\pm }^2)`$. 
In the MSSM we encounter two differences. First, with a fixed mass of the lightest Higgs particle, only a finite range of $`\mathrm{tan}\beta `$ is physical. Throughout this range, the coupling of the lightest Higgs particle to the $`W`$ varies as $`\mathrm{sin}(\beta -\alpha )`$, which is uniquely determined by $`\mathrm{tan}\beta `$ and the Higgs mass. Since this loop contribution dominates the effective couplings under consideration, there is a corresponding strong variation with $`\mathrm{tan}\beta `$, as illustrated in Fig. 1. Second, the contributions of the many superpartners depend on their masses. If all these masses are sufficiently heavy, the effects become small. ## 3 Non-SM-like Higgs boson(s) The second possibility is that the couplings to matter differ from those of the SM. In this case it is very likely that this fact is known from earlier measurements, and our goal will be to find a way to distinguish the 2HDM (II) from the MSSM. In this respect we note that the measurements at a Linear Collider will give us the couplings of the lightest Higgs boson to ordinary matter and, perhaps, the masses of some of the heavier Higgs particles. For a fixed mass $`M_h`$, the $`h\gamma \gamma `$ and $`hZ\gamma `$ couplings are much smaller in the 2HDM than in the MSSM, for a wide range of $`\mathrm{tan}\beta `$ values. ## 4 Results We present results for the $`h\gamma \gamma `$ and $`hZ\gamma `$ widths for a fixed Higgs boson mass $`M_h=100`$ GeV. The most recent values were taken for the fermion and gauge boson masses and the other parameters. In Fig. 1, we show the decay-rate ratios $`\mathrm{\Gamma }(h\to \gamma \gamma )/\mathrm{\Gamma }(H^{\mathrm{SM}}\to \gamma \gamma )`$ and $`\mathrm{\Gamma }(h\to Z\gamma )/\mathrm{\Gamma }(H^{\mathrm{SM}}\to Z\gamma )`$, for the 2HDM and the MSSM. 
For the 2HDM, we take $`\mathrm{sin}(\beta -\alpha )=0`$ and $`1`$, and consider a range of values of the charged Higgs boson mass from 165 GeV to infinity. Let us first discuss the case $`\mathrm{sin}(\beta -\alpha )=0`$. It is important to note that in this case the lightest Higgs particle decouples from the $`W`$ loops (see Eq. (1)). Thus, the decay rates are dominantly due to $`b`$- and $`t`$-quark loops (plus a small contribution from the charged Higgs particle). Dips appear for $`\mathrm{tan}\beta \approx `$ 4–5, where the $`b`$-quark contribution starts to dominate over the $`t`$-quark one. The horizontal lines correspond to the 2HDM results for the case $`\mathrm{sin}(\beta -\alpha )=1`$ (see the numbers above). For the MSSM, we have used the results of the program HDECAY. The solid (dashed) curves correspond to supersymmetric particles contributing (not contributing) to the loops. Interestingly, for the $`hZ\gamma `$ decay these two options are indistinguishable. The sharp dips in these ratios at $`\mathrm{tan}\beta \approx `$ 12–14 correspond to the vanishing of $`|\mathrm{sin}(\beta -\alpha )|`$ (shown separately as a dotted curve), which determines the $`hWW`$ coupling. In contrast to the 2HDM, this coupling is not a free parameter in the MSSM, since the Higgs mass is kept fixed here. As mentioned above, low values of $`\mathrm{tan}\beta `$ are not physical in the MSSM. Where the curves terminate at low $`\mathrm{tan}\beta `$, the charged Higgs mass is of the order of $`10^5`$ GeV. The results discussed above were obtained for $`M_h=100`$ GeV. We have checked that a similar picture holds for $`M_h=120`$ GeV. In summary, we have shown that the Higgs couplings involving one or two photons, which can be explored in detail at Photon Colliders ($`\gamma \gamma `$ and $`e\gamma `$), could resolve the models SM, 2HDM and MSSM with similar neutral Higgs boson masses in the range $`M_h\approx `$ 100–120 GeV and similar couplings to matter. 
In the case of different couplings to matter, a clear distinction can be made between the 2HDM and the MSSM. This research has been supported by RFBR grants 99-02-17211 and 96-15-96030, by the Polish Committee for Scientific Research, grant No. 2P03B01414, and by the Research Council of Norway. ## References
# Fluctuations of the String Tension and Transverse Mass Distribution ## Abstract It is shown that Gaussian fluctuations of the string tension can account for the ”thermal” distribution of the transverse mass of particles created in the decay of a colour string. 1. Recent precise data on the production rates of hadrons created in $`e^+e^{-}`$ annihilation<sup>1</sup> (<sup>1</sup>a full list of data is given in ) have allowed a detailed analysis of the observed regularities. The main general conclusion from these analyses is that the production of hadrons can be very well explained by the (rather unexpected) idea that they emerge from a thermodynamically equilibrated system. This point was first recognized and emphasized some time ago by Becattini, who found that the particle spectra are consistent with a model of two thermally equilibrated fireballs. The temperature $`T`$ determined from the fit was found to be about 160 MeV. Recently Chliapnikov analysed the experimental particle distributions again and found that they too are consistent with the thermal model<sup>2</sup> (<sup>2</sup>I would like to thank B. Webber for calling my attention to this paper). In his picture hadrons emerge from a thermally equilibrated system of (constituent) quarks and antiquarks. The fit to the data gives $`T\approx 140`$ MeV. It should be emphasized that all this is achieved with at most $`3`$ parameters, in contrast to the ”standard” Monte Carlo codes like HERWIG and JETSET, which are much less effective in this respect<sup>3</sup> (<sup>3</sup>a detailed discussion of this point can be found in ). These findings are difficult to reconcile with the generally accepted ideas about hadron production in $`e^+e^{-}`$ collisions. The main difficulty is that, since the process in question is expected to be rather fast, there is hardly enough time for any equilibrium to set in. There is, however, a simple way to understand the findings of refs. 
: The spectrum of the primarily produced partons (from which the final hadrons are formed) may already be so close to the thermal one that there is no further need for secondary collisions between partons to obtain the thermally equilibrated distribution (both the life-time of the system and the parton density are irrelevant in this case). In the present note I shall argue that this possibility may naturally occur in the string model. 2. In the string picture of hadron production in $`e^+e^{-}`$ annihilation, the transverse mass spectrum of the produced quarks (or diquarks) is taken from the Schwinger formula, which predicts $$\frac{dn_\kappa }{d^2p_{\perp }}\propto e^{-\pi m_{\perp }^2/\kappa ^2}$$ (1) where $`\kappa ^2`$ is the string tension and $`m_{\perp }`$ is the transverse mass $$m_{\perp }=\sqrt{p_{\perp }^2+m^2}.$$ (2) On the other hand, the ”thermal” distribution is exponential in $`m_{\perp }`$, $$\frac{dn}{d^2p_{\perp }}\propto e^{-m_{\perp }/T}$$ (3) rather than a Gaussian, as in (1). The main point of the present note is the observation that Eq. (3) can be reconciled with the Schwinger formula (1) if the string tension undergoes fluctuations with a probability distribution of the Gaussian form $$P(\kappa )d\kappa =\sqrt{\frac{2}{\pi <\kappa ^2>}}\mathrm{exp}\left(-\frac{\kappa ^2}{2<\kappa ^2>}\right)d\kappa $$ (4) where $`<\kappa ^2>`$ is the average string tension, i.e. $$<\kappa ^2>=\int _0^{\infty }P(\kappa )\kappa ^2𝑑\kappa .$$ (5) Using (1) and (4) we thus have $$\frac{dn}{d^2p_{\perp }}\propto \int _0^{\infty }𝑑\kappa P(\kappa )e^{-\pi m_{\perp }^2/\kappa ^2}=\frac{\sqrt{2}}{\sqrt{\pi <\kappa ^2>}}\int _0^{\infty }𝑑\kappa e^{-\frac{\kappa ^2}{2<\kappa ^2>}}e^{-\pi m_{\perp }^2/\kappa ^2}$$ (6) This integral can be evaluated using the identity $$\int _0^{\infty }𝑑te^{-st}\frac{u}{2\sqrt{\pi t^3}}e^{-\frac{u^2}{4t}}=e^{-u\sqrt{s}}.$$ (7) The result is $$\frac{dn}{d^2p_{\perp }}\propto \mathrm{exp}\left(-m_{\perp }\sqrt{\frac{2\pi }{<\kappa ^2>}}\right)$$ (8) i.e. 
the ”thermal” formula (3), with $$T=\sqrt{\frac{<\kappa ^2>}{2\pi }}.$$ (9) Using the standard value of the string tension, $`<\kappa ^2>=0.9`$ GeV/fm, we obtain $`T=170`$ MeV for the ”temperature” of the primary partons, a value somewhat larger than those obtained by Becattini and Chliapnikov . This is natural, as we expect that the primary parton system may undergo some expansion (and thus cooling) before the final hadrons start to form. We thus conclude that the possibility of a fluctuating string tension may help to solve the apparent difficulty in the description of the mass and transverse momentum spectra in the string model. The nature of the fluctuations remains, however, an open question. 3. In search of a possible origin of such fluctuations, it is tempting to relate them to the stochastic picture of the QCD vacuum studied recently by the Heidelberg group<sup>4</sup> (<sup>4</sup>it was shown that this picture helps to explain many features of high-energy cross-sections ). In this approach the (average) string tension is given by the formula $$<\kappa ^2>=\frac{32\pi k}{81}G_2a^2$$ (10) where $`k`$ is a constant ($`k\approx 0.75`$), $`G_2`$ is the gluon condensate and $`a`$ is the correlation length of the colour field in the vacuum (lattice calculations give $`a=0.35`$ fm ). This result has a natural physical interpretation: it expresses the string tension (i.e. the energy per unit length of the string) as the product of the vacuum energy density (proportional to the gluon condensate) and the transverse area of the string (proportional to $`a^2`$). In the stochastic vacuum model both quantities entering the r.h.s. of (10) are expected to fluctuate. Indeed, the gluon condensate $`G_2`$ is proportional to the square of the field strength (its average value can be estimated from studies of the charmonium spectrum ). Since the average value of the field strength in the vacuum must vanish, the field cannot be constant but changes randomly from point to point. 
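Returning to Eqs. (4)–(9): the Gaussian-averaging step is easy to check by direct numerical integration. The sketch below (plain rectangle-rule quadrature; variable names are ours) compares the averaged Schwinger factor with the exponential form (8), and evaluates (9) for the quoted tension of 0.9 GeV/fm, using ℏc ≈ 0.1973 GeV·fm to convert the units:

```python
import math

def averaged_over_analytic(m_t, kappa2=1.0, dk=1e-4, kmax=12.0):
    """Ratio of the numerically averaged exp(-pi m_T^2 / kappa^2), weighted
    with the half-Gaussian P(kappa) of Eq. (4), to the analytic result
    exp(-m_T sqrt(2 pi / <kappa^2>)) of Eq. (8); should be ~1 for any m_T."""
    norm = math.sqrt(2.0 / (math.pi * kappa2))
    total, k = 0.0, dk
    while k < kmax * math.sqrt(kappa2):
        total += dk * math.exp(-k * k / (2.0 * kappa2)
                               - math.pi * m_t * m_t / (k * k))
        k += dk
    return norm * total / math.exp(-m_t * math.sqrt(2.0 * math.pi / kappa2))

hbar_c = 0.1973                 # GeV fm
kappa2 = 0.9 * hbar_c           # 0.9 GeV/fm expressed in GeV^2
T_mev = 1000.0 * math.sqrt(kappa2 / (2.0 * math.pi))  # Eq. (9), ~168 MeV
```

The numerical average reproduces the exponential law (8) for all transverse masses, and the resulting temperature comes out close to the 170 MeV quoted above.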
Similarly, $`a^2`$ represents only the average value of the fluctuating transverse size of the string. Once this is accepted, it is natural to assume that such fluctuations are described by a Gaussian distribution. This implies fluctuations of the string tension of the form given by (4). An interesting consequence follows from this point of view. The field fluctuations in the vacuum are expected to be independent at two space-time points whose distance exceeds the correlation length $`a`$. This suggests that they may also be local along the string<sup>5</sup> (<sup>5</sup>this was pointed out to me by W. Ochs), although the corresponding correlation length $`a_s`$ may be modified by the presence of the $`q\overline{q}`$ pair which creates the string (thus $`a_s`$ may differ from $`a`$). This may have measurable effects. Indeed, the observation of a heavy $`q\overline{q}`$ or baryon-antibaryon pair at a point along the string indicates that the string tension at this point took a value well above the average. Thus, by triggering on heavy particles one may search for other effects of a large string tension, e.g., an increased multiplicity in the neighbourhood. Since any effect of this kind is limited to the region determined by the correlation length $`a_s`$, it may be possible to determine $`a_s`$ experimentally and to verify to what extent it differs from the vacuum correlation length $`a`$ found in lattice calculations . Needless to say, it would be very interesting to confirm or dismiss this picture using lattice QCD. This, however, does not seem to be an easy task<sup>6</sup> (<sup>6</sup>I would like to thank J. Wosiek for discussions about this point). Clearly, acceptance of a fluctuating string tension changes many other features of the string model. It is, however, beyond the scope of the present note to discuss them here. 4. In conclusion, we propose a modification of the original string picture by introducing a fluctuating string tension. 
We have shown that this assumption may help to explain the ”thermal” nature of the spectra of particles produced in $`e^+e^{-}`$ annihilation. We have also argued that it appears justified in the stochastic picture of the QCD vacuum. It remains an open and interesting question how this modification affects the successful phenomenology of the string model. Acknowledgements I am grateful to Wieslaw Czyz, Maciek Nowak, Wolfgang Ochs, Bryan Webber and Jacek Wosiek for discussions which greatly helped to clarify the idea presented in this note. This investigation was supported in part by KBN Grant No. 2 P03B 086 14.
# Lattice QCD on a Beowulf Cluster ## Abstract Using commodity-component personal computers based on the Alpha processor, commodity network devices and a switch, we built an 8-node parallel computer. GNU/Linux was chosen as the operating system, and message-passing libraries such as PVM, LAM, and MPICH have been tested as the parallel programming environment. We discuss our lattice QCD project for a heavy quark system on this computer. Even a modest lattice QCD project demands a large amount of computing resources. In this regard, it has always been an attractive idea to build a cheap high-performance computing platform out of commodity PC's and commodity networking devices. However, the availability of cheap hardware components solved only part of the problem of building a parallel computer in the past. There was a large hidden cost in constructing a do-it-yourself parallel computer, and only groups which could dedicate a significant amount of resources were able to take advantage of this idea. The chief stumbling block has been providing the parallel programming environment (both hardware and software) from scratch and maintaining one-of-a-kind hardware. Following the recent trend in do-it-yourself clustering technology , we built a cluster which uses only readily available hardware and software components and can be easily maintained. Here we discuss our experience. In terms of hardware, the node-level configuration of our cluster does not differ from that of an ordinary PC, other than being monitor-less. Each node consists of a single 600 MHz Alpha 21164 processor and SDRAM SIMM main memory. The amount of memory on individual nodes varies from 128 Mbytes (5 nodes) to 256 Mbytes (2 nodes). The SCSI hard disk on each node has either 2 Gbytes (4 nodes) or 4 Gbytes (4 nodes). Additionally, each node has a CD-ROM drive and a 3 1/2 inch floppy drive. The power requirement of each node is 300 Watt. As a network component, each node has a 100 Mbps Ethernet card (3Com 3C905). 
Node 0, which serves as the front-end, has one more 100 Mbps Ethernet card for the outside connection. For inter-processor communication, we use a 100 Mbps switched hub (24-port Intel 510T). Unlike the bus structure of a plain hub, this inexpensive device allows simultaneous communications among the nodes and offers flexibility in the communication topology. Fig. 1 shows the network configuration of our cluster. Since we use a switch, the communication distance between any two nodes is the same, unless the number of nodes becomes larger than the number of available ports in the switch. To the outside world, only node 0 exists. All the nodes are assigned local subnet addresses (192.168.1.1–192.168.1.8), where the 192.168.x.x addresses are reserved specifically for private subnets, and node 0 acts as a gateway for the rest. In this way, we can increase the number of nodes in the cluster without worrying about available IP addresses. As the node operating system, we use Alzza Linux version 5.2a for the Alpha processor, which is a Korean customized version of Red Hat Linux 5.2 with kernel version 2.2.1. Three different parallel programming environments, LAM (Local Area Multicomputer) version 6.1 , MPICH (MPI-Chameleon) version 1.2.2 , and PVM (Parallel Virtual Machine) version 3.4 , have been tested on our platform. These are all based on the message-passing paradigm of parallel computing and use the TCP/IP mechanism for the actual communication. Linux comes with FORTRAN and C compilers, and the parallel programming environments offer wrappers for these languages. Since these parallel programming environments use the remote shell (rsh) for parallel job execution, users need to have accounts on each node. The NIS system is used for password validation. Hard disk space on each node is divided into three partitions: one for the local operating system, one for the NFS-mounted '/home' directory, and a third as scratch space for large I/O operations. 
The installation procedure consists of two parts: one for the Linux operating system setup and the other for the parallel programming library setup. Once the Internet setup for each node is properly done, the subnet network can be established by simply connecting the Ethernet ports. The overall cost of building our 8-node configuration is shown in Table 1. The cost of console devices such as a monitor, mouse and keyboard is not included, since we use second-hand ones (this table should be taken only as a rough indication of the cost of our cluster, since component prices change quite rapidly). Since the performance of a cluster is determined by (single-node performance − overhead due to inter-node communication) × the number of nodes, the sustained speed of a single CPU and the efficiency of the network components play an important role. Under the GNU/Linux compiler, various tests showed that the sustained speed of a single Alpha processor exceeds that of an Intel processor only by roughly the ratio of the CPU clock speeds. This is because the Alpha 21164 processor does not support out-of-order execution (under the same conditions, the Alpha 21264, which supports out-of-order execution, does better than the Alpha 21164 by about a factor of two). The serial version of our quenched code for an $`8^3\times 32`$ lattice, which is coded with the $`SU(3)`$ index as the innermost loop and uses the multi-hit and over-relaxation algorithms, achieved 50 MFLOPS. Under the Compaq FORTRAN compiler for Linux (beta version), the same code achieved 91 MFLOPS (a code with a long innermost loop may do better under the Compaq FORTRAN compiler, by a factor of 4 or more ). In contrast, the same code on a 200 MHz Intel Pentium II MMX achieved 18 MFLOPS under the GNU/Linux compiler. This single-node benchmark suggests that, with the same hardware, we can take advantage of future compiler developments without further tuning of the codes as the GNU/Linux compiler improves. As for the network performance, we tested the network setup using two different methods. 
One uses the “ping” test and the other uses “round-robin” communication. “ping” uses the ICMP layer on top of the IP layer, and the “round-robin” test uses the TCP layer on top of the IP layer. Fig. 2 shows the “ping” test bandwidth and Fig. 3 shows the “round-robin” test bandwidth for the LAM parallel programming environment. We found that LAM does better than MPICH for short messages, while MPICH does better than LAM for large messages. Although we have a dedicated network for our cluster system, the three parallel programming environments we have tested all assume a normal LAN environment and use the TCP/IP layers on top of the link layer, in order to avoid the various problems that arise from sharing a communication network. Further improvement in communication speed could be achieved if a UDP layer with error handling were used instead of TCP. Under GNU/Linux, the MPI parallel version of our quenched code for a $`16^3\times 32`$ lattice achieved 346 MFLOPS with LAM and 378 MFLOPS with MPICH. Thus, the communication overhead is about 21% for LAM and 14% for MPICH. The parallel code has not yet been tested under the Compaq FORTRAN compiler. Currently, we are generating full QCD configurations on an $`8^3\times 32`$ lattice at $`\beta =5.4`$ with $`m_qa=0.01`$ for heavy quark physics, and we find that a relatively cheap high-performance computing platform can easily be constructed and maintained using commodity software and hardware.
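The quoted overheads follow from comparing the measured parallel rate with ideal linear scaling of the single-node rate. A sketch of that bookkeeping (the ~55 MFLOPS single-node figure for the 16³×32 code is our own assumption, inferred from the quoted overheads; the measured serial figure of 50 MFLOPS above refers to the smaller 8³×32 lattice):

```python
def comm_overhead(parallel_mflops, n_nodes, single_node_mflops):
    """Fraction of the ideal n-node speed lost to inter-node communication:
    1 - measured / (n_nodes * single_node)."""
    return 1.0 - parallel_mflops / (n_nodes * single_node_mflops)

SINGLE_NODE = 55.0  # MFLOPS; hypothetical single-node rate for this code
lam = comm_overhead(346.0, 8, SINGLE_NODE)    # ~0.21 (LAM)
mpich = comm_overhead(378.0, 8, SINGLE_NODE)  # ~0.14 (MPICH)
print(lam, mpich)
```

With this assumed single-node rate the sketch reproduces the ~21% (LAM) and ~14% (MPICH) overheads quoted in the text.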
# The two-dimensional bond-diluted quantum Heisenberg model at the classical percolation threshold \[ ## Abstract The two-dimensional antiferromagnetic $`S=1/2`$ Heisenberg model with random bond dilution is studied using quantum Monte Carlo simulations at the percolation threshold ($`50\%`$ of the bonds removed). Finite-size scaling of the staggered structure factor, averaged over the largest connected clusters of sites on $`L\times L`$ lattices, shows that long-range order exists within the percolating fractal clusters in the thermodynamic limit. This implies that the order-disorder transition driven by bond dilution occurs exactly at the percolation threshold and that the exponents are classical. This result should apply also to the site-diluted system. \] During the past decade, questions related to the destruction upon doping of the antiferromagnetic order in the high-$`T_c`$ cuprate materials have motivated extensive studies of quantum critical phenomena in two-dimensional (2D) antiferromagnets . The 2D Heisenberg model on a square lattice can be driven through an order-disorder transition by, e.g., introducing frustrating interactions or by dimerizing the lattice . It has also been believed that a non-trivial phase transition could be achieved by diluting the system, i.e., by randomly removing either sites or bonds (nearest-neighbor interactions) . The site-dilution problem is of direct relevance to the cuprates doped with nonmagnetic impurities, such as Zn or Mg substituted for Cu . Early numerical work indicated that the long-range order vanishes in the Heisenberg model with nearest-neighbor interactions when a fraction $`p^{*}\approx 0.35`$ of the sites is removed . Various analytical treatments have given results for $`p^{*}`$ ranging from $`0.07`$ to $`0.30`$ . These hole concentrations are below the classical percolation threshold $`p_{\mathrm{cl}}\approx 0.407`$ , and hence the phase transition would be caused by quantum fluctuations. 
However, in a recent paper Kato et al. reported quantum Monte Carlo simulations of larger lattices at lower temperatures than in previous work and concluded that the critical site-dilution is exactly the percolation density, $`p^{}=p_{\mathrm{cl}}`$. Based on their simulations, they also argued that the critical behavior is nevertheless not classical, but that the fractal clusters at $`p_{\mathrm{cl}}`$ are quantum critical with algebraically decaying correlation functions. This leads to non-classical critical exponents. Most surprisingly, the simulations indicated that the exponents depend on the magnitude of the spin. In this Letter we use a quantum Monte Carlo method to study the related bond-diluted system exactly at the percolation threshold, i.e., with $`50`$% of the bonds randomly removed. In order to determine the nature of the ground state at this point — quantum critical, classically critical, or quantum disordered — we study the magnetic properties of the largest clusters of connected sites on $`L\times L`$ lattices with $`L`$ up to $`18`$. We find clear evidence that these clusters, which in the thermodynamic limit are fractal with fractal dimension $`d=91/48`$, are antiferromagnetically ordered. This implies that the order-disorder transition driven by bond-dilution occurs exactly at the percolation threshold and that the critical exponents are classical. We have chosen to study the bond dilution problem because it is numerically more tractable than site dilution — since the bond percolation threshold is $`p_{\mathrm{cl}}=1/2`$, it can be realized exactly on the finite lattices we work with. However, the fractal dimension and the critical exponents are the same for classical bond and site percolation, and our conclusions should therefore hold true also for the site-diluted system. We argue that the reason for the disagreement with the previous results by Kato et al.
is that they did not use sufficiently low temperatures in their simulations — extremely low temperatures are required for reaching the ground state even for the relatively small system sizes we use here. We consider the $`S=1/2`$ Heisenberg Hamiltonian on a square lattice with $`N=L\times L`$ sites; $$H=\sum _{b=1}^{2N}J(b)\,𝐒_{i(b)}\cdot 𝐒_{j(b)}.$$ (1) The bonds $`b`$ connect nearest neighbor sites $`i(b),j(b)`$ with interaction strength $`J(b)=J`$ for $`N`$ randomly selected bonds and $`J(b)=0`$ for the remaining $`N`$ bonds. We use an efficient finite-temperature quantum Monte Carlo method based on the “stochastic series expansion” approach to study systems with $`L=4,6,\mathrm{\dots },18`$. In order to reach the ground state, we successively increase the inverse temperature $`\beta =J/T`$ until all quantities of interest have converged. We find that $`\beta `$ as high as $`10^4`$ is required for the largest lattice we have considered. Another important concern when studying random systems is the equilibration of the simulations. One would like to average over as many random configurations as possible within given computer resources. Ideally, one would then perform only short simulations for each realization (typically, even a quite short simulation of a given configuration results in a statistical error smaller than the fluctuations between different configurations). However, there is a minimum length of a simulation set by the time needed to equilibrate it, and when carrying out short simulations it is important to have some way to verify that the correct equilibrium distribution indeed has been reached. We use the following scheme to check for both equilibration and temperature effects: For each bond configuration we carry out simulations at inverse temperatures $`\beta =2^nL`$, $`n=0,1,\mathrm{\dots },n_{\mathrm{max}}`$.
Starting with $`n=0`$, we perform two runs for every $`\beta `$, each with $`N_\mathrm{e}`$ updating steps for equilibration and $`N_\mathrm{m}=2N_\mathrm{e}`$ steps for measuring physical quantities (for the definition of a “step”, see Ref. ). The second run is a direct continuation of the first one, so that the effective number of equilibration steps is four times that for the first run. An agreement between the results of these two runs is then a good indication that the simulation has equilibrated. At subsequently lower temperatures (increasing $`n`$) we always start from the last Monte Carlo state generated at the previous temperature. The convergence of the simulations using the $`\beta `$-doubling procedure will be illustrated with some results below. In the thermodynamic limit, the system at $`p_{\mathrm{cl}}`$ will be spanned by infinite clusters of fractal dimension $`d=91/48`$. The existence of a nontrivial (quantum) critical point is determined by the magnetic properties of these clusters. If they are long-range ordered, the critical point of the order-disorder transition driven by bond dilution will be exactly at $`p_{\mathrm{cl}}=1/2`$ as in the classical case, and the critical exponents will be the classical ones. If the fractal clusters are critical, i.e., their spin-spin correlations decay as a power law, as suggested by Kato et al., the critical point is still at $`p_{\mathrm{cl}}`$ but the exponents are different and non-trivial. A quantum critical point with $`p^{}<p_{\mathrm{cl}}`$ would imply an exponential decay of the spin-spin correlations within the fractal clusters at $`p_{\mathrm{cl}}`$. In order to determine which of these scenarios applies, we have calculated the magnetization squared of the largest connected cluster of sites for a large number of random bond configurations on lattices of different linear size $`L`$. As $`L\rightarrow \mathrm{\infty }`$ this procedure gives the ordered moment of the fractal clusters of interest.
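The $`\beta `$-doubling bookkeeping described above can be expressed generically. The following is only an illustrative sketch: `run_mc` is a stand-in for the actual stochastic series expansion driver, and all names are ours, not from the original code.

```python
def beta_doubling_schedule(L, n_max, run_mc, N_e=500):
    """Sketch of the beta-doubling equilibration check.

    For each inverse temperature beta = 2^n * L, two runs are made, the
    second a direct continuation of the first, so that equal-beta results
    can be compared as an equilibration test.  run_mc(state, beta,
    n_equil, n_meas) stands in for the real Monte Carlo driver and must
    return (new_state, measurement).
    """
    N_m = 2 * N_e                  # measurement steps, twice the equilibration
    state = None                   # carried from one temperature to the next
    results = []                   # records of (beta, run index, measurement)
    for n in range(n_max + 1):
        beta = 2 ** n * L
        for run in (1, 2):         # run 2 continues run 1
            state, meas = run_mc(state, beta, N_e, N_m)
            results.append((beta, run, meas))
    return results

# trivial stand-in driver, just to show the schedule it produces
demo = beta_doubling_schedule(18, 2, lambda s, b, ne, nm: ((s or 0) + 1, b))
print([(b, r) for b, r, _ in demo])  # beta = 18, 18, 36, 36, 72, 72
```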
Denoting the largest cluster by $`C`$, the $`z`$-component of the staggered magnetization squared is given by $$m_C^2=\left\langle \left(\frac{1}{N_C}\sum _{i\in C}(-1)^{x_i+y_i}S_i^z\right)^2\right\rangle ,$$ (2) where $`N_C`$ is the number of spins in the cluster and the brackets indicate both the quantum mechanical expectation value and an average over bond configurations. We also consider the full staggered structure factor $$S_\pi =\frac{1}{N}\left\langle \left(\sum _{i=1}^{N}(-1)^{x_i+y_i}S_i^z\right)^2\right\rangle ,$$ (3) which involves all the spins of the lattice and was used by Kato et al. to study the critical behavior of the site-diluted system. The number of random bond configurations used in the averages presented here ranges from $`10^3`$ for $`L=18`$ to $`10^4`$ for $`L=4`$. In Fig. 1 we show results illustrating the equilibration scheme. The disorder-averaged $`m_C^2`$ is graphed versus the index $`n`$ (specifying the inverse temperature $`\beta =2^nL`$ as described above) for $`L=18`$. In order to reduce effects of fluctuations among bond realizations and more clearly show the relative effects of equilibration times and temperature, we have normalized the data to the result of the second run at the highest $`\beta `$ used ($`n_{\mathrm{max}}=9`$, corresponding to $`\beta =9216`$) and estimated the statistical errors using the bootstrap method. The number of equilibration and measurement steps were $`N_\mathrm{e}=500`$ and $`N_\mathrm{m}=1000`$. Within error bars, there are no differences between any of the equal-$`\beta `$ runs, and we therefore conclude that the simulations are well equilibrated. As an additional check, for $`L=16`$ and smaller we have also carried out simulations using $`N_\mathrm{e}=250`$ and $`N_\mathrm{e}=1000`$ (keeping $`N_\mathrm{m}=2N_\mathrm{e}`$).
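The purely geometric ingredients of these estimators — bond dilution at $`p_{\mathrm{cl}}=1/2`$ and identification of the largest connected cluster — are classical and easy to sketch. The code below is our own illustration (union-find for the clusters), not the quantum Monte Carlo; as a sanity check, evaluating the Eq. (2) estimator on a perfectly Néel-ordered $`S=1/2`$ configuration gives exactly $`1/4`$.

```python
import random

def diluted_lattice_largest_cluster(L, rng):
    """Keep half of the 2*L*L nearest-neighbor bonds (periodic boundaries)
    and return the largest connected cluster of sites, via union-find."""
    N = L * L
    bonds = []
    for y in range(L):
        for x in range(L):
            i = x + L * y
            bonds.append((i, (x + 1) % L + L * y))    # bond to the right
            bonds.append((i, x + L * ((y + 1) % L)))  # bond upward
    kept = rng.sample(bonds, len(bonds) // 2)         # the J(b) = J bonds

    parent = list(range(N))                           # union-find forest
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]             # path halving
            i = parent[i]
        return i
    for i, j in kept:
        parent[find(i)] = find(j)

    clusters = {}
    for i in range(N):
        clusters.setdefault(find(i), []).append(i)
    return max(clusters.values(), key=len)

def staggered_m2(cluster, Sz, L):
    """Eq. (2)-style estimator for a single spin configuration Sz[i]."""
    s = sum((-1) ** (i % L + i // L) * Sz[i] for i in cluster)
    return (s / len(cluster)) ** 2

L = 8
C = diluted_lattice_largest_cluster(L, random.Random(1))
neel = [0.5 * (-1) ** (i % L + i // L) for i in range(L * L)]
print(len(C), staggered_m2(C, neel, L))  # Neel configuration gives 1/4
```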
For $`N_\mathrm{e}=250`$ we do see small but statistically significant differences between the equal-$`\beta `$ runs, but the results of the second run are always consistent with data obtained using the longer equilibrations. Hence, we believe that our results are free of detectable effects of insufficient equilibration. All the results to be presented below were averaged using the second of the equal-$`\beta `$ runs only. Fig. 1 shows that very low temperatures are needed to converge to the ground state even for a lattice of relatively modest size. Since there are no statistically significant differences between the $`L=18`$ results at $`\beta =4608`$ and $`9216`$, and the asymptotic approach to the ground state should be exponential, $`\beta =9216`$ should give the ground state to within error bars. In Fig. 2 we show normalized results for $`S_\pi `$ as a function of $`\beta `$ for all the lattices we have studied. It is clear that using a fixed $`\beta =500`$ for all system sizes, as was done in Ref. , leads to a significant systematic error (assuming that site-diluted systems are similarly affected by temperature, which can be expected up to some factor of order $`1`$). Fig. 2 shows that the relative deviation from the ground state grows rapidly with $`L`$ at fixed $`\beta `$ and is $`\approx 3`$% for $`L=18`$ at $`\beta =500`$. The largest lattice considered in Ref. was $`L=48`$, for which our results suggest that $`\beta =500`$ could lead to an error of more than $`10`$%. The results to be discussed below are all for $`\beta =2^9L`$. The reason for the very high $`\beta `$-values needed to converge to the ground state is most likely that localized moments can form in the irregular clusters of connected spins. These moments interact with each other with a strength which decreases rapidly with increasing separation, thus leading to closely spaced energy levels.
The typical level spacing should decrease faster with increasing $`L`$ than the $`1/L`$ behavior suggested in Ref. . The temperature effects should be the largest exactly at the percolation threshold and for all hole concentrations they lead to an underestimation of the ordered moment. Hence, the result that there is a substantial order in the system close to the percolation threshold should remain valid despite such effects. In order to definitively determine whether indeed $`p^{}=p_{\mathrm{cl}}`$ and whether or not the exponents are the classical ones, we here study the lattice-size dependence of the cluster order parameter squared; Eq. (2). As a finite-size scaling ansatz, we use a simple generalization of the known scaling law for the sublattice magnetization $`m`$ for the pure 2D Heisenberg model. In that case the leading size-correction to $`m^2`$ is $`1/N^{1/2}`$, which can be seen clearly in numerical data. Since the average number of spins in the fractal clusters of the diluted system depends asymptotically on $`L=N^{1/2}`$ according to $`N_C\sim L^d`$, with $`d`$ the fractal dimension quoted above, we here assume a leading size correction $`1/L^{d/2}`$. Fig. 3 shows our data for $`m_C^2`$ graphed versus this variable. The results appear to be consistent with the ansatz, although in order to fit all the data points we have to use a polynomial cubic in $`1/L^{d/2}`$ (a cubic polynomial is needed also to fit data for the pure 2D Heisenberg model, but the corrections to the linear behavior are much smaller). This fit has a $`\chi ^2`$ per degree of freedom of $`0.6`$ and gives the full staggered magnetization $`M_d=\sqrt{3m_C^2}\approx 0.09`$ for the infinite fractal clusters (the factor $`3`$ accounts for rotational averaging). Hence, the order on the $`d`$-dimensional fractal lattice is as high as $`30`$% of the 2D staggered moment. Finally, we discuss results for the staggered structure factor of the full lattice; Eq. (3). In Fig.
4 we graph $`\mathrm{ln}(S_\pi )`$ versus $`\mathrm{ln}(L)`$, which should show an asymptotic linear scaling behavior. There is a clear upward curvature as the lattice size increases, and it is clear that we are still far from the scaling regime. A curvature is also seen in the data presented by Kato et al. for small systems, but for larger sizes a linear behavior was discerned. As we have discussed above, the structure factor calculated for large lattices in Ref. was likely substantially underestimated due to temperature effects, and the apparent non-classical scaling behavior is then an artifact. Since the results in Fig. 3 show that there is long-range order in the fractal clusters, the growth of $`S_\pi `$ must asymptotically be given by classical percolation theory, i.e., $`S_\pi \sim L^{2d-2}`$, where $`2d-2=43/24`$. The very large corrections to scaling evident in Fig. 4 can be understood as resulting from a significant reduction with increasing $`L`$ of the staggered structure factor per site of the fractal clusters ($`m_C^2`$), as seen in Fig. 3. Classically, $`m_C^2`$ is independent of system size and the scaling of $`S_\pi `$ is therefore determined solely by the increase in size of the clusters as $`L`$ increases. In the quantum case, the size dependence of $`S_\pi `$ is dominated by this geometric effect only for system sizes sufficiently large for the relative size-correction to $`m_C^2`$ to be small. For our largest lattice, the relative size correction is still more than a factor of four. Considering the extremely low temperatures required to study the ground state of large lattices, it will be very difficult to numerically observe the asymptotic classical scaling regime for $`S=1/2`$. In summary, we have presented numerical results showing that the fractal $`S=1/2`$ Heisenberg clusters at the classical bond-percolation density have long-range antiferromagnetic order.
This implies that the order-disorder transition driven by bond-dilution occurs exactly at the percolation density and that the critical exponents are classical. This should hold true for any spin $`S`$, and since classical bond and site percolation are equivalent in terms of the fractal dimension and the critical exponents , our conclusions should apply also to the site-diluted model. The quantum mechanical corrections to the asymptotic scaling behavior are very large for lattice sizes that can be studied with today’s computers, making direct observation of the critical behavior difficult. We have also discussed temperature effects and shown that extremely low temperatures are needed to study the ground state, likely due to the presence of weakly interacting localized moments. It would be interesting to study a diluted system including frustration. If the clean system is already close to a quantum critical point, non-magnetic impurities may be able to drive it into a quantum disordered phase. The fact that Zn or Mn doping of the cuprates destroys the long-range order well before the classical percolation threshold clearly indicates that these materials cannot be described by a randomly diluted Heisenberg model with nearest-neighbor interactions only. This work was supported by the NSF under grant No. DMR-97-12765. Some of the calculations were carried out using the Origin2000 system at the NCSA.
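As an illustration of the finite-size extrapolation used above, the following sketch fits a cubic in $`x=1/L^{d/2}`$ (with $`d=91/48`$) and reads off the intercept. The data are synthetic numbers of our own, generated from a known cubic so that the recovery can be checked; they are not the published Monte Carlo values.

```python
import numpy as np

d = 91.0 / 48.0                      # fractal dimension of the clusters

Ls = np.array([4, 6, 8, 10, 12, 14, 16, 18], dtype=float)
x = 1.0 / Ls ** (d / 2.0)            # assumed leading-correction variable

# synthetic m_C^2 data from a known cubic; the intercept is chosen so that
# M_d = sqrt(3 * 0.0027) = 0.09, mimicking the value quoted in the text
true_coeffs = [0.40, -0.35, 0.30, 0.0027]
m2 = np.polyval(true_coeffs, x)

coeffs = np.polyfit(x, m2, 3)        # cubic fit in 1/L^{d/2}
m2_inf = coeffs[-1]                  # intercept: x -> 0, i.e. L -> infinity
M_d = np.sqrt(3.0 * m2_inf)          # factor 3 from rotational averaging
print(m2_inf, M_d)                   # recovers 0.0027 and 0.09
```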
no-problem/9909/quant-ph9909083.html
ar5iv
text
# High-efficiency quantum interrogation measurements via the quantum Zeno effect ## Abstract The phenomenon of quantum interrogation allows one to optically detect the presence of an absorbing object, without the measuring light interacting with it. In an application of the quantum Zeno effect, the object inhibits the otherwise coherent evolution of the light, such that the probability that an interrogating photon is absorbed can in principle be arbitrarily small. We have implemented this technique, demonstrating efficiencies exceeding the 50% theoretical maximum of the original “interaction-free” measurement proposal. We have also predicted and experimentally verified a previously unsuspected dependence on loss; efficiencies of up to 73% were observed and the feasibility of efficiencies up to 85% was demonstrated. “Negative result” measurements were discussed by Renninger and later by Dicke, who analyzed the change in an atom’s wavefunction by the nonscattering of a photon from it. In 1993 Elitzur and Vaidman (EV) showed that the wave-particle duality of light could allow “interaction-free” quantum interrogation of classical objects, in which the presence of a non-transmitting object is ascertained seemingly without interacting with it, i.e., with no photon absorbed or scattered by the object. In the basic EV technique, an interferometer is aligned to give complete destructive interference in one output port – the “dark” output – in the absence of an object. The presence of an opaque object in one arm of the interferometer eliminates the possibility of interference so that a photon may now be detected in this output. If the object is completely non-transmitting, any photon detected in the dark output port must have come from the path not containing the object. Hence, the measurements were deemed “interaction-free”, though we stress that this term is sensible only for objects that completely block the beam.
For measurements on partially-transmitting (and quantum) objects, we suggest the more general terminology “quantum interrogation”. In any event there is necessarily a coupling between light and object (formally describable by some interaction Hamiltonian) – somewhat paradoxically, in the high-efficiency schemes discussed below, it is crucial that the possibility of an interaction exist, in order to reduce the probability that such an interaction actually occurs. The EV gedanken experiment has been realized using true single-photon states and with a classical light beam attenuated to the single-photon level , as well as in neutron interferometry . This methodology has even been employed to investigate the possibility of performing “absorption-free” imaging . The EV technique suffers two serious drawbacks, however. First, the measurement result is ambiguous at least half of the time – a photon may be detected in the non-dark output port whether or not there is an object. Second, at most half of the measurements are interaction-free . Following Elitzur and Vaidman , we define a figure of merit $`\eta =\mathrm{P}(\mathrm{QI})/[\mathrm{P}(\mathrm{QI})+\mathrm{P}(\mathrm{abs})]`$ to characterize the “efficiency” of a given scheme, where P(QI) is the probability that the photon is detected in the otherwise dark port, and P(abs) is the probability that the object absorbs or scatters the photon. Physically, $`\eta `$ is the fraction of measurements that are “interaction-free”. The maximum achievable efficiency, obtained by adjusting the reflectivities of the EV interferometer beamsplitters, is $`\eta =50\%`$ . It was proposed that one could circumvent these limitations by using a hybrid scheme , combining the interferometric ideas of EV and incorporating an optical version of the quantum Zeno effect , in which a weak, repeated measurement inhibits the otherwise coherent evolution of the interrogating photon. 
Our specific embodiment of the Zeno effect is based on an inhibited polarization rotation, although the only generic requirement is a weakly-coupled multi-level system. A photon with horizontal (H) polarization is directed through a series of $`N`$ polarization rotators (e.g., optically active elements), each of which rotates the polarization by $`\mathrm{\Delta }\theta =\pi /2N`$. The net effect of the entire stepwise quantum evolution is to rotate the photon’s polarization to vertical (V). We may inhibit this evolution if at each stage we make a measurement of the polarization in the H/V basis, e.g., by inserting a horizontal polarizer after each rotator. Since the probability of being transmitted through each polarizer is just $`\mathrm{cos}^2\mathrm{\Delta }\theta `$, the probability P(QI) of being transmitted through all $`N`$ of them is simply $`\mathrm{cos}^{2N}(\mathrm{\Delta }\theta )\approx 1-\pi ^2/4N`$, and the complementary probability of absorption is P(abs) $`\approx \pi ^2/4N`$. Thus, increasing the number of cycles leads to an arbitrarily small probability that the photon is ever absorbed. Obviously the Zeno phenomenon as described is of limited use, because it requires polarizing objects. Figure 1 shows the basic concept to allow quantum interrogation of any non-transmitting object. A single photon is made to circulate $`N`$ times through the setup, before it is somehow removed and its polarization analyzed. As in the example above, the photon, initially H-polarized, is rotated by $`\mathrm{\Delta }\theta =\pi /2N`$ on each cycle, so that after $`N`$ cycles the photon is found to have V polarization. This rotation is unaffected by the polarization-interferometer (consisting of two polarizing beam splitters, which ideally transmit all H-polarized and reflect all V-polarized light; and two identical-length arms), which simply separates the light into its H and V components and adds them back with the same relative phase.
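The $`N`$-dependence of these probabilities is easy to tabulate. A minimal sketch, assuming the idealized lossless polarizer chain described above:

```python
import math

def zeno_probs(N):
    """Photon through N rotators of step pi/2N, each followed by an
    H-projection: P(QI) = cos^(2N)(pi/2N), P(abs) = 1 - P(QI)."""
    p_qi = math.cos(math.pi / (2 * N)) ** (2 * N)
    return p_qi, 1.0 - p_qi

for N in (5, 10, 50, 500):
    p_qi, p_abs = zeno_probs(N)
    # compare P(abs) with the small-angle estimate pi^2/4N
    print(N, round(p_qi, 4), round(p_abs, 4), round(math.pi ** 2 / (4 * N), 4))
```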
If there is an object in the vertical arm of the interferometer, however, only the H component of the light is passed; i.e., each non-absorption by the object \[with probability $`\mathrm{cos}^2\mathrm{\Delta }\theta `$\] projects the wavefunction back into its initial state. Hence, after $`N`$ cycles, either the photon will still have H polarization \[with probability P(QI)\], unambiguously indicating the presence of the object, or the object will have absorbed the photon \[probability P(abs)\]. By going to higher $`N`$, P(abs) can in principle be made arbitrarily small. In the absence of any losses or other non-idealities, $`\eta =`$P(QI), so that $`\eta \rightarrow 1`$ as $`N\rightarrow \mathrm{\infty }`$. Demonstrating this phenomenon in an actual experiment required several modifications (see Fig. 2). A horizontally-polarized laser pulse was coupled into the system by a highly reflective mirror. The light was attenuated so that the average photon number per pulse after the mirror was between 0.1 and 0.3. The photon then bounced between this recycling mirror and one of the mirrors making up a polarization Michelson interferometer. At each cycle a waveplate rotated the polarization by $`\mathrm{\Delta }\theta `$. After the desired number of cycles $`N`$, the photon was switched out of the system by applying a high-voltage pulse to a Pockels cell in each interferometer arm, thereby rotating the polarization of the photon by $`90^{\circ }`$, so that it exited via the other port of the polarizing beam splitter. The exiting photon was then analyzed by an adjustable polarizer and single-photon detector. With no object, the polarization was found to be essentially horizontal, indicating that the stepwise rotation of polarization had taken place (remember, the final polarization is $`90^{\circ }`$ rotated by the Pockels cell).
With the object in the vertical-polarization arm of the interferometer, this evolution was inhibited, and a photon exiting the system was vertically-polarized, an interaction-free measurement of the presence of the object . A number of intermediate configurations were investigated before arriving at the arrangement described above . With these the feasibility of quantum interrogation with $`\eta `$ up to 85% was inferred (for a hypothetically lossless system) – there was no way to directly measure the amount of light absorbed by the object. In the present experiment, we made a direct measurement of the probability that a photon took the object path, by applying a constant voltage to the Pockels cell in that path, thereby directing these photons to the single-photon detector at each cycle. With the DC voltage applied, photons exiting with H polarization correspond to P(abs), while those with V polarization (which exit only after $`N`$ cycles) correspond to P(QI). (We verified that the rates corresponding to P(QI) were similar whether using the DC-biased Pockels cell as the object, or physically blocking that arm of the interferometer.) Rather unexpectedly, the efficiencies determined in this fashion were significantly lower than both the theoretical predictions and the previous inferred measurements, which agreed well with each other. The reason is that the effects of loss in the system were normalized out in the previous measurements . That loss should reduce the actual efficiencies was somewhat surprising, since losing a photon from the system seems equivalent to never sending it in initially. This line of reasoning is faulty: A photon contributing to P(QI) must necessarily remain in the system for all $`N`$ cycles, thus sampling any loss $`N`$ times; in contrast, a photon contributing to P(abs) could be absorbed on any cycle, hence remains in the system on average less than $`N`$ cycles, and sees less loss than a photon contributing to P(QI). 
The net effect is that, whereas $`\eta \rightarrow 1`$ for a large number of lossless cycles, in the presence of loss $`\eta `$ reaches a maximum value less than one before falling again toward zero. This places a strong constraint on the achievable efficiencies in any real system. Figure 3 shows the experimental verification of this phenomenon, as well as the modified theoretical predictions, which are in good agreement. Despite the efficiency reduction due to loss, we were able to observe $`\eta `$’s of up to $`74.4\pm 1.2\%`$. Also shown in Fig. 3 are several representative measurements of the “noise” of our quantum interrogation system, from events in which an object was indicated (i.e., photons were detected with vertical polarization) even though none was actually present. The main causes were imperfections of the optical elements, and interferometer instability, despite active stabilization. Because the same photon detector was used to determine both P(QI) and P(abs) in our measurements, the detector efficiency factors out of the calculation for $`\eta `$. When our highest-observed value of $`\eta `$ is corrected for our finite detection efficiencies, we arrive at an adjusted $`\eta `$ of $`53.1\pm 1.6\%`$, where we have included the effects of both the detector efficiency (65%) and the 10-nm filter (60% transmission) used to reduce background. Because this value of $`\eta `$ is only marginally above the 50% threshold of the original EV scheme, we also took one set of data in which the 10-nm filter was removed. Our measured $`\eta `$ was $`72.3\pm 1.1\%`$, implying a raw efficiency of $`62.9\pm 1.3\%`$; that is, in measurements with the opaque object, $`2/3`$ of the photons performed an “interaction-free” measurement, and $`1/3`$ did not; in other words, the object’s presence can be unambiguously determined while absorbing only “1/3 of a photon”. This is, to our knowledge, one of the first practical utilizations of the quantum Zeno effect.
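The loss argument can be made quantitative with a toy model of our own (a deliberate simplification of the experiment's loss budget): lump all per-cycle losses into a single transmission $`T`$, so that on each cycle the photon first survives the optics with probability $`T`$ and is then either absorbed by the object ($`\mathrm{sin}^2\mathrm{\Delta }\theta `$) or projected back to H ($`\mathrm{cos}^2\mathrm{\Delta }\theta `$). The resulting $`\eta `$ indeed peaks below one at a finite $`N`$:

```python
import math

def eta_with_loss(N, T):
    """eta = P(QI) / (P(QI) + P(abs)) for N cycles, with a lumped
    per-cycle transmission T; T = 1 recovers the lossless case."""
    c2 = math.cos(math.pi / (2 * N)) ** 2
    s2 = 1.0 - c2
    p_qi = (T * c2) ** N
    # absorbed on cycle k, after surviving k-1 complete cycles
    p_abs = sum((T * c2) ** (k - 1) * T * s2 for k in range(1, N + 1))
    return p_qi / (p_qi + p_abs)

etas = [eta_with_loss(N, 0.98) for N in range(1, 301)]
best = max(etas)
print(best, etas.index(best) + 1)    # maximum efficiency and the N where it occurs
```

With 2% loss per cycle in this model, $`\eta `$ rises with $`N`$, tops out below one, and then decays again, qualitatively reproducing the behavior described in the text.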
A wholly different method of quantum interrogation, relying on disrupting the resonance condition of a high-finesse cavity (and hence called “resonance interaction-free measurement”), has been proposed and recently demonstrated , with efficiencies similar to those reported here. If the cavity mirrors both have reflectivity $`R`$, a narrow bandwidth photon incident on the empty cavity can have a near-unity probability of transmission, i.e., a detector observing the reflection from the entrance mirror to the cavity will never fire, in principle. An object in the cavity will naturally prevent the resonance condition (this can be thought of as an impedance mismatch), so the detector in the reflected mode will detect the photon with probability P(QI) = $`R`$, while the object will have a probability 1-$`R`$ of absorbing the photon. The efficiency of this scheme ($`R`$, in the ideal case) can thus exceed the EV 50% threshold, like the method based on the Zeno effect. However, the two techniques have very different characteristics. For example, while the Zeno technique employs broadband photon wavepackets, the resonance methods necessarily require a very narrow frequency spectrum for the interrogating photons. Because of the pulsed nature of the Zeno effect, the duration of the experiment is precisely fixed (to $`N`$ cycles); the duration of the measurement with the cavity is less well defined, determined by the ring-down time of the cavity. Conversely, it is relatively easy to allow photons to leak out of a cavity, whereas actively switching them as in our scheme is experimentally more challenging. Finally, as presented here, both techniques require interferometric stability; however, this is not strictly necessary for the Zeno method if one has a polarizing object, e.g., a polarization-selective atomic transition. Achieving higher efficiencies with these techniques will require increasing the working number of cycles $`N`$. 
However, the performance of the system becomes increasingly more sensitive to optical imperfections and to interferometer instability. The effect of loss is also multiplied. We believe that with sufficient engineering these problems could be reduced, allowing operation at up to O(100) cycles or higher, giving efficiencies $`>93\%`$. Finally, crosstalk in the polarizing beamsplitter (i.e., not all horizontal polarized light is transmitted, and not all vertical polarized light is reflected, about $`1\%`$ in our present system) must be kept to a minimum. In particular, we observed spurious interference effects when the reflection probability \[$`\mathrm{sin}^2(\pi /2N)`$\] becomes comparable to the crosstalk. Use of birefringent material polarizers, whose crosstalk figures are $`\sim 10^{-5}`$, may mitigate this problem. If the efficiencies can be improved as discussed above, one can envision using the methods to examine quantum mechanical objects, such as an atom or ion, one of whose states couples to the interrogating light (“object”), and another of whose states does not couple (“no object”). In the simplest situation we can determine which state the system is in with a greatly reduced probability of exciting it out of that state. Such a process might be called “absorption-free spectroscopy”, and could be useful for studying photosensitive systems. More interestingly, when the quantum system is in a superposition of the two states, the light becomes entangled with the quantum system. Such an effect may have use as a quantum “wire”, e.g., as an interface for connecting together two quantum computers. Finally, in the limit as $`\eta \rightarrow 1`$, these techniques of quantum interrogation will function even if there are several photons (or an average of several photons, as in a weak coherent state).
It should then be possible to produce Schrödinger-cat like states $`\alpha |VVV\mathrm{\cdots }\rangle +\beta |HHH\mathrm{\cdots }\rangle `$, where $`|VVV\mathrm{\cdots }\rangle `$ ($`|HHH\mathrm{\cdots }\rangle `$) represents several photons with vertical (horizontal) polarization. Such states would have great interest for studying the classical-quantum boundary, and the phenomenon of decoherence. We would like to acknowledge I. Appelbaum, N. Kurnit, G. Peterson, V. Sandberg, and C. Simmons for help with our feedback system and Pockels cells; ON, GW, HW, and AZ acknowledge support by the Austrian Science Foundation FWF project number S6502. kwiat@lanl.gov. Current address: Institute for Experimental Physics, University of Vienna, Wien 1090, Austria. Current address: Sektion Physik, Ludwig-Maximilians-Universität, München Schellingstr. 4/III D-80799 München.
no-problem/9909/hep-ph9909522.html
ar5iv
text
# Zero Zeros After All These (20) Years (Talk given by RB at the Workshop “Formfactors from Low to High Q²” (Brodskyfest), University of Georgia, Athens, Georgia, Sept. 17, 1999, to appear in the Proceedings) ## Original Zero The first radiation zeros were discovered as a consequence of a general investigation of the production of electroweak pairs in hadronic collisions $$p\overline{p},pp\rightarrow WW,ZZ,WZ,W\gamma +X$$ (1) and neutrino reactions $$\nu e\rightarrow WZ,W\gamma $$ (2) addressed to very-high-energy cosmic-ray physics. The investigation probed the sensitivity of these reactions to the trilinear gauge boson couplings and, at the present time, useful limits on their deviations from gauge theory predictions have been obtained in $`e^+e^{-}`$ and hadron collider experiments. Pronounced dips were found in the angular distributions for the production of $`W\gamma `$ and $`WZ`$ in the two-body parton and lepton reactions. Subsequent work by Mikaelian, Samuel, and Sahdev proved that the $`W\gamma `$ dips were in fact exact zeros at particular angles, which would be ruined by non-gauge couplings. The Wisconsin brain trust followed with a general demonstration of the implied amplitude factorization. ## Simple Zero Since it is so easy to do so, we show how a radiation zero arises in lowest-order radiative charged-scalar fusion $$\mathrm{scalar}\mathrm{\hspace{0.33em}1}+\mathrm{scalar}\mathrm{\hspace{0.33em}2}\rightarrow \mathrm{scalar}\mathrm{\hspace{0.33em}3}+\mathrm{photon}$$ (3) whose Feynman diagrams lead to the Born amplitude $$M_\gamma ^{sc}=\frac{Q_3}{p_3\cdot q}\,p_3\cdot ϵ-\frac{Q_1}{p_1\cdot q}\,p_1\cdot ϵ-\frac{Q_2}{p_2\cdot q}\,p_2\cdot ϵ$$ (4) with charges $`Q_i`$, four-momenta $`p_i`$, and photon momentum $`q`$ and polarization $`ϵ`$. (The trilinear scalar coupling is taken to be unity.) Using momentum conservation, we observe that $`M_\gamma ^{sc}=0`$ if all $`Q_i/p_i\cdot q`$ are the same. This is exactly the same condition found for the $`W\gamma `$ amplitude, independent of the spins.
It leads to a zero when $$\mathrm{cos}\theta ^{*}=\frac{Q_1-Q_2}{Q_1+Q_2}$$ (5) for the center-of-mass angle $`\theta ^{*}`$ of particle 3 relative to the direction of particle 1 (or between the photon and particle 2). This reduces to $`\mathrm{cos}\theta ^{*}=1/3`$ ($`-1/3`$) for $`u\overline{d}\rightarrow W^+\gamma `$ ($`d\overline{u}\rightarrow W^{-}\gamma `$). ## Zeros Everywhere: Theorems Faced with the vanishing of the above amplitudes, Brodsky asserted that the way to look at these zeros was as the complete destructive interference of classical radiation patterns. Following this lead, and making a long story short (see the longer story with better referencing in ), an arbitrary number $`n`$ of external charged particles was considered and a general set of radiation interference theorems were found . Again considering the emission of a photon with momentum $`q`$, the tree amplitude approximation vanishes, independent of any particle’s spin $`\le 1`$, for common charge-to-light-cone-energy ratios (the radiation null zone), $`M_\gamma (tree)`$ $`=`$ $`0`$ (6) $`\mathrm{if}{\displaystyle \frac{Q_i}{p_i\cdot q}}`$ $`=`$ $`\mathrm{same},\mathrm{all}i`$ (7) where the $`i^{th}`$ particle has electric charge $`Q_i`$ and four-momentum $`p_i`$. All couplings must be prescribed by local gauge theory. We see why it took so long to discover radiation zeros since the first null zone requirement is that all charges must have the same sign. Fractionally charged quarks and weak bosons are needed in order to get three things: Same-sign charges, a process well-approximated by a Born amplitude, and a four-particle reaction so the null zone was simple. While there are zeros associated with any gauge group when the corresponding massless gauge bosons are emitted, in QCD, color charges are averaged or summed over in hadronic reactions. In thinking of the weak bosons themselves, electroweak symmetry is broken and nonzero weak-boson masses ruin radiation interference. What about other photonic zeros?
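Since the null-zone condition is purely kinematic, it can be checked by direct evaluation. The sketch below (not from the talk; the beam energies, photon energy, and in-plane polarization vector are illustrative choices) builds the Born amplitude of Eq. (4) for massless scalars 1 and 2 colliding along the beam axis, and confirms that it vanishes exactly at the angle given by Eq. (5):

```python
import math

def dot(a, b):
    """Minkowski dot product, signature (+,-,-,-)."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

def amplitude(Q1, Q2, cos_th, E=100.0, w=30.0):
    """Born amplitude of Eq. (4) for scalar1 scalar2 -> scalar3 + photon.
    cos_th is the photon angle relative to particle 1; the polarization is
    the transverse, in-plane choice (an illustrative assumption)."""
    s = math.sqrt(1.0 - cos_th**2)
    p1 = (E, 0.0, 0.0, E)
    p2 = (E, 0.0, 0.0, -E)
    q = (w, w*s, 0.0, w*cos_th)
    p3 = tuple(p1[i] + p2[i] - q[i] for i in range(4))  # momentum conservation
    eps = (0.0, cos_th, 0.0, -s)                        # transverse to q
    Q3 = Q1 + Q2                                        # charge conservation
    return (Q3*dot(p3, eps)/dot(p3, q)
            - Q1*dot(p1, eps)/dot(p1, q)
            - Q2*dot(p2, eps)/dot(p2, q))

Q_u, Q_dbar = 2.0/3.0, 1.0/3.0       # u dbar -> W+ gamma charges
cos_gamma_zero = (Q_dbar - Q_u)/(Q_u + Q_dbar)
print(cos_gamma_zero)                # -1/3 for the photon, i.e. cos(theta*) = +1/3
print(abs(amplitude(Q_u, Q_dbar, cos_gamma_zero)))   # ~0 at the radiation zero
print(abs(amplitude(Q_u, Q_dbar, 0.5)))              # nonzero away from it
```

Note that the photon angle relative to particle 1 is the supplement of the $`\theta ^{*}`$ of Eq. (5), so the zero at photon angle $`-1/3`$ is the same statement as $`\mathrm{cos}\theta ^{*}=+1/3`$ for the $`W`$.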
The zero in electron-electron bremsstrahlung is less interesting. Zeros in electron-quark and quark-antiquark bremsstrahlung require jet identification along with the more complicated phase space . While we shall say more about other tests, we find ourselves returning again and again to the original $`W\gamma `$ zero. ## Zero $`\ne `$ Zero There are various corrections that turn the $`W\gamma `$ zero into a dip. Theoretically, higher-order (closed-loop) corrections will not vanish in the null zone, since the internal loop momenta cannot be fixed. Structure function effects, higher order QCD corrections, finite $`W`$ width effects, and photon radiation from the final state lepton line all tend to fill in the dip. The main complication in the extraction of the $`\mathrm{cos}\theta ^{*}`$ distribution in $`W\gamma `$ production, however, originates from the finite resolution of the detector and ambiguities in reconstructing the parton center of mass frame. The ambiguities are associated with the nonobservation of the neutrino arising from $`W`$ decay. Identifying the missing transverse momentum, $`p\text{/}_T`$, with the transverse momentum of the neutrino of a given $`W\gamma `$ event, the unobservable longitudinal neutrino momentum, $`p_L(\nu )`$, and thus the parton center of mass frame, can be reconstructed by imposing the constraint that the neutrino and charged lepton four momenta combine to form the $`W`$ rest mass. The resulting quadratic equation, in general, has two solutions. In the approximation of a zero $`W`$-decay width, one of the two solutions coincides with the true $`p_L(\nu )`$. On an event by event basis, however, it is impossible to tell which of the two solutions is the correct one. This ambiguity considerably smears out the dip caused by the amplitude zero. Problems associated with the reconstruction of the parton center of mass frame could be avoided by considering hadronic $`W`$ decays.
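The two-fold ambiguity described above can be made concrete. A minimal sketch (assuming a massless charged lepton and zero $`W`$ width; the event numbers are invented for illustration) solves the mass-constraint quadratic and shows that the true longitudinal neutrino momentum is one of the two roots:

```python
import math

def neutrino_pL(lep, met, mW=80.4):
    """Solve (p_lep + p_nu)^2 = mW^2 for the longitudinal neutrino momentum.
    lep = (px, py, pz) of a massless charged lepton; met = (px, py),
    identified with p/T. Returns the two roots of the quadratic."""
    El = math.sqrt(lep[0]**2 + lep[1]**2 + lep[2]**2)
    pT2_lep = lep[0]**2 + lep[1]**2
    pT2_nu = met[0]**2 + met[1]**2
    a = 0.5*mW**2 + lep[0]*met[0] + lep[1]*met[1]
    disc = a*a - pT2_lep*pT2_nu
    if disc < 0.0:       # off-shell fluctuation; a common recipe sets disc = 0
        disc = 0.0
    r = El*math.sqrt(disc)
    return ((a*lep[2] + r)/pT2_lep, (a*lep[2] - r)/pT2_lep)

# Toy event: pick a "true" neutrino, form the lepton-neutrino invariant
# mass, then reconstruct and recover the true pL(nu) as one of two roots.
lep = (30.0, 0.0, 10.0)
nu_true = (0.0, 25.0, 40.0)
E_l = math.sqrt(sum(c*c for c in lep))
E_n = math.sqrt(sum(c*c for c in nu_true))
mW_event = math.sqrt((E_l + E_n)**2 - sum((a + b)**2 for a, b in zip(lep, nu_true)))
sols = neutrino_pL(lep, nu_true[:2], mW_event)
print(sols)   # one root equals the true pL(nu) = 40; the other is spurious
```

On real events only the spurious second root, detector smearing, and the finite $`W`$ width distinguish this from an exact reconstruction, which is why the dip gets smeared.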
The horrendous QCD background, however, renders this channel useless. ## Zero Progress At present there is only a preliminary study by the CDF collaboration of the $`\mathrm{cos}\theta ^{*}`$ distribution from a partial data sample of the 1992-95 Tevatron run. The event rate is still insufficient to make a statistically significant statement about the existence of the radiation zero. One can at best say that “there is a hint of gauge zero” in Fig. 1. ## Zero Help Instead of trying to reconstruct the parton center of mass frame and measure the $`\mathrm{cos}\theta ^{*}`$ or the equivalent rapidity distribution in the center of mass frame, one can study rapidity correlations between the observable final state particles in the laboratory frame. Knowledge of the neutrino longitudinal momentum is not required in determining this distribution. Event mis-reconstruction problems originating from the two possible solutions for $`p_L(\nu )`$ are thus automatically avoided. In $`2\rightarrow 2`$ reactions differences of rapidities are invariant under boosts. One therefore expects that the double differential distribution of the rapidities, $`d^2\sigma /dy(\gamma )dy(W)`$, where $`y(W)`$ and $`y(\gamma )`$ are the $`W`$ and photon rapidity, respectively, in the laboratory frame, exhibits a ‘valley,’ signaling the SM amplitude zero . In $`W^\pm \gamma `$ production, the dominant $`W`$ helicity is $`\lambda _W=\pm 1`$ , implying that the charged lepton, $`\ell =e,\mu `$, from $`W\rightarrow \ell \nu `$ tends to be emitted in the direction of the parent $`W`$, and thus reflects most of its kinematic properties. As a result, the valley signaling the SM radiation zero should manifest itself also in the $`d^2\sigma /dy(\gamma )dy(\ell )`$ distribution of the photon and lepton rapidities. The theoretical prediction of the $`d^2\sigma /dy(\gamma )dy(\ell )`$ distribution in the Born approximation for $`p\overline{p}`$ collisions at 1.8 TeV is shown in Fig.
2 and indeed exhibits a pronounced valley for rapidities satisfying $`\mathrm{\Delta }y(\gamma ,\ell )=y(\gamma )-y(\ell )\approx -0.3`$. The location of the valley can be easily understood from the value of $`\mathrm{cos}\theta ^{*}`$ for which the zero occurs and the average difference between the $`W`$ rapidity and the rapidity of the $`W`$ decay lepton . To simulate detector response, transverse momentum cuts of $`p_T(\gamma )>5`$ GeV, $`p_T(\ell )>20`$ GeV and $`p\text{/}_T>20`$ GeV, rapidity cuts of $`|y(\gamma )|<3`$ and $`|y(\ell )|<3.5`$, a cluster transverse mass cut of $`m_T(\ell \gamma ;p\text{/}_T)>90`$ GeV and a lepton-photon separation cut of $`\mathrm{\Delta }R(\gamma ,\ell )>0.7`$ have been imposed in the Figure. Here, $`\mathrm{\Delta }R(\gamma ,\ell )`$ is the separation between the lepton and the photon in the azimuthal angle-pseudorapidity plane, $$\mathrm{\Delta }R(\gamma ,\ell )=\sqrt{\mathrm{\Delta }\mathrm{\Phi }(\gamma ,\ell )^2+\mathrm{\Delta }\eta (\gamma ,\ell )^2}.$$ (8) The cluster transverse mass cut suppresses final state photon radiation which tends to obscure the dip caused by the radiation zero. For 10 fb<sup>-1</sup>, a sufficient number of events should be available to map out $`d^2\sigma /dy(\gamma )dy(\ell )`$ in future Tevatron experiments. For smaller data sets, the rapidity difference distribution, $`d\sigma /d\mathrm{\Delta }y(\gamma ,\ell )`$, is a more useful variable. In the photon lepton rapidity difference distribution, the SM radiation zero leads to a strong dip located at $`\mathrm{\Delta }y(\gamma ,\ell )\approx -0.3`$ . The LO and NLO predictions of the SM $`\mathrm{\Delta }y(\gamma ,\ell )`$ differential cross section for $`p\overline{p}\rightarrow \ell ^+p\text{/}_T\gamma `$ at the Tevatron are shown in Fig. 3a. Next-to-leading QCD corrections leave a reasonably visible dip.
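The separation variable of Eq. (8) is straightforward to compute; one detail the formula leaves implicit is that the azimuthal difference must be wrapped into a single period. A minimal sketch (the numerical angles are illustrative only):

```python
import math

def delta_r(phi1, eta1, phi2, eta2):
    """Separation of Eq. (8) in the azimuthal angle-pseudorapidity plane.
    The azimuthal difference is wrapped into (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0*math.pi) - math.pi
    return math.hypot(dphi, eta1 - eta2)

# Two objects nearly back-to-front across the phi = 0 boundary:
r = delta_r(0.1, 0.0, 2.0*math.pi - 0.1, 0.0)
print(r)          # ~0.2, despite a naive |dphi| of ~6.08
print(r > 0.7)    # fails the Delta R(gamma, l) > 0.7 cut used in the text
```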
In $`pp`$ collisions, the dip signaling the amplitude zero is shifted to $`\mathrm{\Delta }y(\gamma ,\ell )=0`$. Because of the increased $`qg`$ luminosity, the inclusive QCD corrections are very large for $`W\gamma `$ production at multi-TeV hadron colliders . At the LHC, they enhance the cross section by a factor 2 – 3. The rapidity difference distribution for $`W^+\gamma `$ production in the SM for $`pp`$ collisions at $`\sqrt{s}=14`$ TeV is shown in Fig. 3b. Here we have imposed the following lepton and photon detection cuts: $`p_T(\gamma )>100\mathrm{GeV}/\mathrm{c},`$ $`|\eta (\gamma )|<2.5,`$ (9) $`p_T(\ell )>25\mathrm{GeV}/\mathrm{c},`$ $`|\eta (\ell )|<3,`$ (10) $`p\text{/}_T>50\mathrm{GeV}/\mathrm{c},`$ $`\mathrm{\Delta }R(\gamma ,\ell )>0.7.`$ (11) The inclusive NLO QCD corrections are seen to considerably obscure the amplitude zero. The bulk of the corrections at LHC energies originates from quark-gluon fusion and the kinematical region where the photon is produced at large $`p_T`$ and recoils against a quark, which radiates a soft $`W`$ boson almost collinear to the quark. Events which originate from this phase space region usually contain a high $`p_T`$ jet. A jet veto therefore helps to reduce the QCD corrections. Nevertheless, the remaining QCD corrections still substantially blur the visibility of the radiation zero in $`W\gamma `$ production at the LHC . Given a sufficiently large integrated luminosity, experiments at the Tevatron studying lepton-photon rapidity correlations therefore offer a unique chance to observe the SM radiation zero in $`W\gamma `$ production. Nonstandard $`WW\gamma `$ couplings tend to fill in the dip in the $`\mathrm{\Delta }y(\gamma ,\ell )`$ distribution caused by the radiation zero. Indirectly, the radiation zero can also be observed in the $`Z\gamma `$ to $`W\gamma `$ cross section ratio . Many theoretical and experimental uncertainties at least partially cancel in the cross section ratio.
On the other hand, in searching for the effects of the SM radiation zero in the $`Z\gamma `$ to $`W\gamma `$ cross section ratio, one has to assume that the SM is valid for $`Z\gamma `$ production. Since the radiation zero occurs at a large scattering angle, the photon $`E_T`$ distribution in $`W\gamma `$ production falls much more rapidly than that of photons in $`Z\gamma `$ production. As a result, the SM $`W\gamma `$ to $`Z\gamma `$ event ratio as a function of the photon transverse energy, $`E_T^\gamma `$, drops rapidly. ## Multizeros Adding more external photons to a reaction with a Born-amplitude radiation zero will still leave us with a null zone which demands, however, that all photons be collinear . In view of the fact that the quadrilinear coupling $`WW\gamma \gamma `$ contributes, it is of interest to consider the radiation zero in $`W\gamma \gamma `$ production. The $`\mathrm{\Delta }y(\gamma \gamma ,W)=y_{\gamma \gamma }-y_W`$ distribution is expected to display a clear dip for photons with a small opening angle, $`\theta _{\gamma \gamma }`$, in the laboratory frame, i.e. at $`\mathrm{cos}\theta _{\gamma \gamma }\approx 1`$. Calculations show that requiring $`\mathrm{cos}\theta _{\gamma \gamma }>0`$ is already sufficient. Figure 4a displays a pronounced dip in $`d\sigma /d\mathrm{\Delta }y(\gamma \gamma ,W)`$ for $`\mathrm{cos}\theta _{\gamma \gamma }>0`$, located at $`\mathrm{\Delta }y(\gamma \gamma ,W)\approx 0.7`$ (solid line), for $`e^{-}\overline{\nu }\gamma \gamma `$ production at the Tevatron. In contrast, for $`\mathrm{cos}\theta _{\gamma \gamma }<0`$, the $`\mathrm{\Delta }y(\gamma \gamma ,W)`$ distribution does not exhibit a dip (dashed line). In the dip region, the differential cross section for $`\mathrm{cos}\theta _{\gamma \gamma }<0`$ is about one order of magnitude larger than for $`\mathrm{cos}\theta _{\gamma \gamma }>0`$.
In addition, the $`\mathrm{\Delta }y(\gamma \gamma ,W)`$ distribution extends to significantly higher $`y_{\gamma \gamma }-y_W`$ values if one requires $`\mathrm{cos}\theta _{\gamma \gamma }>0`$. This reflects the narrower rapidity distribution of the two-photon system for $`\mathrm{cos}\theta _{\gamma \gamma }<0`$, due to the larger invariant mass of the system when the two photons are well separated. Exactly as in the $`W\gamma `$ case, the dominant helicity of the $`W`$ boson in $`W^\pm \gamma \gamma `$ production is $`\lambda _W=\pm 1`$. One therefore expects that the distribution of the rapidity difference of the $`\gamma \gamma `$ system and the charged lepton is very similar to the $`y_{\gamma \gamma }-y_W`$ distribution and would show a clear signal of the radiation zero for positive values of $`\mathrm{cos}\theta _{\gamma \gamma }`$. The $`y_{\gamma \gamma }-y_e`$ distribution, shown in Fig. 4b, indeed clearly displays these features. Due to the finite difference between the electron and the $`W`$ rapidities, the location of the minimum is slightly shifted. To simulate the finite acceptance of detectors, we have imposed the following cuts in Fig.
4: $`p_T(\gamma )`$ $`>`$ $`10\mathrm{GeV},|y_\gamma |<2.5,\mathrm{\Delta }R(\gamma \gamma )>0.3\mathrm{for}\mathrm{photons},`$ (12) $`p_T(e)`$ $`>`$ $`15\mathrm{GeV},|y_e|<2.5,\mathrm{\Delta }R(e\gamma )>0.7\mathrm{for}\mathrm{charged}\mathrm{leptons},`$ (13) and $$p\text{/}_T>15\mathrm{GeV}.$$ (14) In addition, to suppress the contributions from final photon radiation, we have required that $$M_T(e^{-}\nu )>70\mathrm{GeV}.$$ (15) The characteristic differences between the $`\mathrm{\Delta }y(\gamma \gamma ,e)=y_{\gamma \gamma }-y_e`$ distribution for $`\mathrm{cos}\theta _{\gamma \gamma }>0`$ and $`\mathrm{cos}\theta _{\gamma \gamma }<0`$ are also reflected in the cross section ratio $$\mathcal{R}=\frac{\int _{\mathrm{\Delta }y(\gamma \gamma ,e)>1}d\sigma }{\int _{\mathrm{\Delta }y(\gamma \gamma ,e)<1}d\sigma },$$ (16) which may be useful for small event samples. Many experimental uncertainties cancel in $`\mathcal{R}`$. For $`\mathrm{cos}\theta _{\gamma \gamma }>0`$ one finds $`\mathcal{R}\approx 0.25`$, whereas for $`\mathrm{cos}\theta _{\gamma \gamma }<0`$ $`\mathcal{R}\approx 1.06`$. Although we have restricted the discussion above to $`e\nu \gamma \gamma `$ production, our results also apply to $`p\overline{p}\rightarrow \mu \nu \gamma \gamma `$. NLO QCD corrections are not expected to obscure the dip signaling the radiation zero at the Tevatron, but may significantly reduce its observability at the LHC. Given a sufficiently large integrated luminosity, experiments at the Tevatron studying correlations between the rapidity of the photon pair and the charged lepton therefore offer an excellent opportunity to search for the SM radiation zero in hadronic $`W\gamma \gamma `$ production. Unfortunately, for the cuts listed above, the $`W\gamma \gamma `$ production cross section at the Tevatron is only about 2 fb. Thus, in order to observe the radiation zero in $`W\gamma \gamma `$ production, an integrated luminosity of at least 20 – 30 fb<sup>-1</sup> is needed.
## Approximately Zero At energies much larger than the $`Z`$ boson mass, one naively expects that the $`Z`$ boson in the process $`q_1\overline{q}_2\rightarrow W^\pm Z`$ (17) behaves essentially like a photon with unusual couplings to the fermions. One therefore might suspect that an approximate radiation zero is present in $`WZ`$ production. In Ref. it was demonstrated that the process $`q_1\overline{q}_2\rightarrow W^\pm Z`$ indeed exhibits an approximate zero located at $$\mathrm{cos}\mathrm{\Theta }^{*}\approx \pm \frac{1}{3}\mathrm{tan}^2\theta _W\approx \pm 0.1,$$ (18) where $`\mathrm{\Theta }^{*}`$ is the scattering angle of the $`Z`$ boson relative to the quark direction in the $`WZ`$ center of mass frame. The approximate zero is the combined result of an exact zero in the dominant helicity amplitudes $`(\pm ,\mp )`$, and strong gauge cancellations in the remaining amplitudes. At high energies, only the $`(\pm ,\mp )`$ and $`(0,0)`$ amplitudes remain nonzero: $`(\pm ,\mp )`$ $`\approx `$ $`{\displaystyle \frac{F}{\mathrm{sin}\theta ^{*}}}(\lambda _\mathrm{w}-\mathrm{cos}\theta ^{*})\left[(g_{-}^{q_1}-g_{-}^{q_2})\mathrm{cos}\theta ^{*}-(g_{-}^{q_1}+g_{-}^{q_2})\right],`$ (19) $`(0,0)`$ $`\approx `$ $`{\displaystyle \frac{F}{2}}\mathrm{sin}\theta ^{*}{\displaystyle \frac{M_Z}{M_W}}(g_{-}^{q_2}-g_{-}^{q_1}).`$ (20) Here, $`\lambda _\mathrm{w}`$ denotes the $`W`$ boson polarization ($`\lambda =\pm 1,0`$ for transverse and longitudinal polarizations, respectively), and $`F=C{\displaystyle \frac{e^2}{\sqrt{2}\mathrm{sin}\theta _\mathrm{w}}},`$ (21) where $`C=\delta _{i_1i_2}V_{q_1q_2}`$ and $`\theta ^{*}=\pi -\mathrm{\Theta }^{*}`$. $`i_1`$ ($`i_2`$) is the color index of the incoming quark (antiquark) and $`V_{q_1q_2}`$ is the quark mixing matrix element. $$g_{-}^f=\frac{T_3^f-Q_f\mathrm{sin}^2\theta _\mathrm{w}}{\mathrm{sin}\theta _\mathrm{w}\mathrm{cos}\theta _\mathrm{w}}$$ (22) is the coupling of the $`Z`$-boson to left-handed fermions with $`T_3^f=\pm \frac{1}{2}`$ representing the third component of the weak isospin.
$`Q_f`$ is the electric charge of the fermion $`f`$. The existence of the zero in $`(\pm ,\mp )`$ at $`\mathrm{cos}\mathrm{\Theta }^{*}\approx \pm 0.1`$ is a direct consequence of the contributing $`u`$\- and $`t`$-channel fermion exchange diagrams and the left-handed coupling of the $`W`$ boson to fermions. Unlike the $`W^\pm \gamma `$ case with its massless photon kinematics, the zero has an energy dependence which is, however, rather weak for energies sufficiently above the $`WZ`$ mass threshold. Analogously to the radiation zero in $`q_1\overline{q}_2\rightarrow W\gamma `$, one can search for the approximate zero in $`WZ`$ production in the rapidity difference distribution $`d\sigma /d\mathrm{\Delta }y(Z,\ell _1)`$ , where $$\mathrm{\Delta }y(Z,\ell _1)=y(Z)-y(\ell _1)$$ (23) is the difference between the rapidity of the $`Z`$ boson, $`y(Z)`$, and the rapidity of the lepton, $`\ell _1`$, originating from the decay of the $`W`$ boson, $`W\rightarrow \ell _1\nu `$. The $`y(Z)-y(\ell _1)`$ distribution for $`W^+Z`$ production in the Born approximation is shown in Fig. 5. The approximate zero in the $`WZ`$ amplitude leads to a dip in the $`y(Z)-y(W)`$ distribution, which is located at $`y(Z)-y(W)\approx \pm 0.12`$ ($`=0`$) for $`W^\pm Z`$ production in $`p\overline{p}`$ ($`pp`$) collisions. However, in contrast to $`W\gamma `$ production, none of the $`W`$ helicities dominates in $`WZ`$ production. The charged lepton, $`\ell _1`$, thus only partly reflects the kinematical properties of the parent $`W`$ boson. As a result, a significant part of the correlation present in the $`y(Z)-y(W)`$ spectrum is lost, and only a slight dip survives in the SM $`y(Z)-y(\ell _1)`$ distribution. This, and the much smaller number of $`WZ\rightarrow \ell _1\nu \ell _2^+\ell _2^{-}`$ events, make the approximate radiation zero in $`WZ`$ production much more difficult to find at the Tevatron or LHC than the radiation amplitude zero in $`W\gamma `$ production.
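The location quoted in Eq. (18) follows directly from the couplings of Eq. (22): the bracket in Eq. (19) vanishes where $`(g_{-}^{q_1}-g_{-}^{q_2})\mathrm{cos}\theta ^{*}=g_{-}^{q_1}+g_{-}^{q_2}`$. A small numerical check (sin²θ<sub>W</sub> = 0.23 is an assumed illustrative input, not a value from the text):

```python
import math

# Check that the zero of the bracket in Eq. (19) sits at the location
# quoted in Eq. (18), cos(Theta*) ~ +(1/3) tan^2(theta_W) for q1 = u, q2 = d.
s2 = 0.23                                  # assumed sin^2(theta_W)
sw, cw = math.sqrt(s2), math.sqrt(1.0 - s2)

def g_minus(T3, Q):
    """Left-handed Z coupling of Eq. (22)."""
    return (T3 - Q*s2)/(sw*cw)

g_u = g_minus(+0.5, +2.0/3.0)
g_d = g_minus(-0.5, -1.0/3.0)

# Bracket zero: cos(theta*) = (g_u + g_d)/(g_u - g_d), and
# cos(Theta*) = -cos(theta*) since theta* = pi - Theta*.
cos_Theta_zero = -(g_u + g_d)/(g_u - g_d)
print(cos_Theta_zero)                      # ~ +0.1
print(s2/(3.0*(1.0 - s2)))                 # (1/3) tan^2(theta_W): the same number
```

Algebraically the agreement is exact: $`g_{-}^u+g_{-}^d`$ is proportional to the summed quark charge $`1/3`$, while $`g_{-}^u-g_{-}^d`$ is proportional to $`1-\mathrm{sin}^2\theta _\mathrm{w}`$, which is how the $`\mathrm{tan}^2\theta _W/3`$ of Eq. (18) arises.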
Due to the nonzero average rapidity difference between the lepton $`\ell _1`$ and the parent $`W`$ boson, the location of the minimum of the $`y(Z)-y(\ell _1)`$ distribution in $`p\overline{p}`$ collisions is slightly shifted to $`y(Z)-y(\ell _1)\approx 0.5`$. In Fig. 5a a rapidity cut of $`|\eta (\ell )|<2.5`$ has been imposed, instead of the cut used in Fig. 3a. All other rapidity and transverse momentum cuts are as described before. Furthermore, $`\mathrm{\Delta }R(\ell ,\ell )>0.4`$ is required for leptons of equal electric charge in Fig. 5. The significance of the dip in the $`y(Z)-y(\ell _1)`$ distribution depends to some extent on the cut imposed on $`p_T(\ell _1)`$ and the missing transverse momentum. Increasing (decreasing) the cut on $`p_T(\ell _1)`$ ($`p\text{/}_T`$) tends to increase the probability that $`\ell _1`$ is emitted in the flight direction of the $`W`$ boson, and thus enhances the significance of the dip. If the $`p\text{/}_T>50`$ GeV cut at the LHC could be reduced to 20 GeV, the dip signaling the approximate zero in the $`WZ`$ production amplitude would be strengthened considerably. In contrast to the situation encountered in $`W\gamma `$ production, nonstandard $`WWZ`$ couplings do not always tend to fill in the dip caused by the approximate radiation zero. This is due to the relatively strong interference between standard and anomalous contributions to the helicity amplitudes for certain anomalous couplings. As a result, the dip may even become more pronounced in some cases. Before we turn to the prospects of observing radiation zeros in the near future, we would like to mention a new development in the general question of radiation zeros. Different kinds of null zones have been found (“Type II” radiation zeros ) in the important process $`q\overline{q}\rightarrow W^+W^{-}\gamma `$, for which there are no regular (also called Type I) zeros.
The Type II zeros require soft photons and certain coplanarity, but dips survive for harder photons that are sensitive to the trilinear and quadrilinear gauge boson couplings. It will be interesting to see how visible they are in an analysis incorporating acceptance cuts, detector resolution effects, finite $`W`$ width effects, and decay-lepton radiation. ## Nonzero Zeros in Zero Zero? How long will we wait for a real dip to appear? A sufficient rapidity coverage is essential to observe the radiation zero in $`d^2\sigma /dy(\gamma )dy(\ell )`$ and/or the $`\mathrm{\Delta }y(\gamma ,\ell )`$ distribution . This is demonstrated in Fig. 6, which displays simulations of the rapidity difference distribution for 1 fb<sup>-1</sup> in the electron channel at the Tevatron. If both central ($`|y|<1.1`$) and endcap ($`1.5<|y|<2.5`$) electrons and photons can be used (Fig. 6a), the simulations indicate that with integrated luminosities $`\gtrsim 1`$ fb<sup>-1</sup> it will be possible to conclusively establish the dip in the photon lepton rapidity difference distribution which signals the presence of the radiation zero in $`W\gamma `$ production. On the other hand, for central electrons and photons only, the dip is statistically not significant for 1 fb<sup>-1</sup>. With the detector upgrades which are currently being implemented for the next Tevatron run, both CDF and DØ experiments should have the capability to analyze the $`\mathrm{\Delta }y(\gamma ,\ell )`$ distribution over the full rapidity range of $`|y|<2.5`$. While the data analysis may take rather longer, we may hazard the guess that the year Y2K will have more than three zeros in it. ## Zeroing in on Brodsky The radiation zeros are the generalization of the well-known vanishing of classical non-relativistic electric and magnetic dipole radiation occurring for equal charge/mass ratios (indeed, the low-energy limit of the null zone conditions) and equal gyro-magnetic g-factors.
The null zone is exactly the same as that for the completely destructive interference of radiation by charge lines (a classical convection current calculation) and is preserved by the fully relativistic quantum Born approximation for gauge theories. Stan Brodsky has long emphasized the magic of the gyro-magnetic ratio value $`g=2`$ predicted by gauge theory for spinor and vector particles. Only for this value will Born amplitudes have the same null zone as the classical radiation patterns for soft photons. And only for this value will Born amplitudes have good high-energy behavior. In this way we have a connection between the large and small distance scales, with the value $`g=2`$ as a bridge. ## Acknowledgment It is great to have this chance to acknowledge Stan Brodsky’s unique leadership and collaboration in these radiation matters. UB would like to thank S. Errede, T. Han, N. Kauer, G. Landsberg, J. Ohnemus, R. Sobey and D. Zeppenfeld for pleasant collaboration and many fruitful discussions. RB is grateful to K. Kowalski, Sh. Shvartsman and C. Taylor for advice and collaboration through the years. This research was supported in part by NSF grant PHY-9600770.
no-problem/9909/quant-ph9909035.html
# Quantum Logic Using Sympathetically Cooled Ions ## I Introduction One of the most attractive physical systems for generating large entangled states and realizing a quantum computer is a collection of cold trapped atomic ions . The ion trap quantum computer stores one or more quantum bits (qubits) in the internal states of each trapped ion, and quantum logic gates (implemented by interactions with externally applied laser beams) can couple qubits through a collective quantized mode of motion of the ion Coulomb crystal. Loss of coherence of the internal states of trapped ions is negligible under proper conditions but heating of the motion of the ion crystal may ultimately limit the fidelity of logic gates of this type. In fact, such heating is currently a limiting factor in the NIST ion-trap quantum logic experiments . Electric fields from the environment readily couple to the motion of the ions, heating the ion crystal . If the ion trap is much larger than the ion crystal size, we expect these electric fields to be nearly uniform across the crystal. Uniform fields will heat only modes that involve center-of-mass motion (COM motion), in which the crystal moves as a rigid body. Motional modes orthogonal to the COM motion, for instance the collective breathing mode, require field gradients to excite their motion. The heating of these modes is therefore suppressed . However, even if quantum logic operations use such a “cold” mode, the heating of the COM motion can still indirectly limit the fidelity of logic operations. Since the laser coupling of an internal qubit and a motional mode depends on the total wavepacket spread of the ion containing the qubit, the thermal COM motion can reduce the logic fidelity . In this paper, we examine sympathetic cooling in a particular scheme for which we can continuously laser-cool the COM motion while leaving undisturbed the coherences of both the internal qubits and the mode used for quantum logic. 
In this method, one applies continuous laser cooling to only the center ion of a Coulomb-coupled string of an odd number of ions. One can address the center ion alone if the center ion is of a different ion species than that composing the rest of the string . Alternatively, one can simply focus the cooling beams so that they affect only the center ion. In either case, the cooling affects only the internal states of the center ion, leaving all other internal coherences intact. If the logic operations use a mode in which the center ion remains at rest, the motional coherences in that mode are also unaffected by the cooling. On the other hand, the sympathetic cooling keeps the COM motion cold, reducing the thermal wavepacket spread of the ions. In the following, we will discuss the dynamics of an ion string in which all ions are identical except the center ion, assuming heating by a uniform electric field. Our results give guidelines for implementing the sympathetic cooling scheme. Similar results would apply to two- and three-dimensional ion crystals . ## II Axial Modes of Motion We consider a crystal of $`N`$ ions, all of charge $`𝗊`$, in a linear RF trap . The linear RF trap is essentially an RF quadrupole mass filter with a static confining potential along the filter axis $`\widehat{z}`$. If the radial confinement is sufficiently strong compared to the axial confinement, the ions will line up along the $`z`$-axis in a string configuration . There is no RF electric field along $`\widehat{z}`$, so we can write the axial confining potential as $`\varphi (z)=a_0z^2/2`$ for $`a_0`$ a constant. The potential energy of the string is then given by $$V(z_1,\dots ,z_N)=\frac{1}{2}𝗊a_0\sum _{i=1}^{N}z_i^2+\frac{𝗊^2}{8\pi ϵ_0}\sum _{\stackrel{i,j=1}{i\ne j}}^{N}\frac{1}{|z_i-z_j|}$$ (1) for $`z_i`$ the position of the $`i`$th ion in the string (counting from the end of the string).
The first term in the potential energy expresses the influence of the static confining potential along the $`z`$-axis, while the second arises from the mutual Coulomb repulsion of the ions. For a single ion of mass $`m`$, the trap frequency along $`z`$ is just $`\omega _z=\sqrt{𝗊a_0/m}`$. We can compute the equilibrium positions of the ions in the string by minimizing the potential energy of Eq. 1. Defining a length scale $`\ell `$ by $`\ell ^3=𝗊/(4\pi ϵ_0a_0)`$ and normalizing the ion positions by $`u_i=z_i/\ell `$ gives a set of equations for the $`u_i`$ as $$u_i-\sum _{j=1}^{i-1}\frac{1}{(u_i-u_j)^2}+\sum _{j=i+1}^{N}\frac{1}{(u_i-u_j)^2}=0,i=1,\dots ,N$$ (2) which has analytic solutions only up to $`N=3`$. Steane and James have computed the equilibrium positions of ions in strings with $`N`$ up to 10. The potential energy is independent of the mass, so the equilibrium positions of ions in a string are independent of the elemental composition of the string if all the ions have the same charge. In a real ion trap the ions will have some nonzero temperature and will move about their equilibrium positions. If the ions are sufficiently cold, we can write their positions as a function of time as $`z_i(t)=\ell u_i+q_i(t)`$, where $`q_i(t)`$ is small enough to allow linearizing all forces. We specialize to the case of an odd number of ions $`N`$, where all ions have mass $`m`$, except for the one at the center of the string which has mass $`M`$. The ions are numbered $`1,\dots ,N`$, with the center ion labeled by $`n_c=(N+1)/2`$.
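Eq. (2) has no closed form beyond $`N=3`$, but it is the stationarity condition of the scaled potential, so a simple numerical descent recovers the equilibrium positions. A minimal sketch (the step size and iteration count are arbitrary illustrative choices; for $`N=3`$ Eq. (2) gives the outer ions at $`u=\mp (5/4)^{1/3}`$):

```python
# Solve Eq. (2) -- the stationarity condition of the scaled potential --
# by gradient descent on V(u) = sum(u_i^2)/2 + sum_{i<j} 1/|u_i - u_j|.
def equilibrium_positions(N, steps=20000, lr=0.01):
    u = [float(i) - (N - 1)/2.0 for i in range(N)]   # evenly spaced start
    for _ in range(steps):
        grad = []
        for i in range(N):
            g = u[i]                                  # trap term of Eq. (2)
            for j in range(N):
                if j != i:
                    d = u[i] - u[j]                   # Coulomb term of Eq. (2)
                    g -= (1.0 if d > 0 else -1.0)/d**2
            grad.append(g)
        u = [ui - lr*gi for ui, gi in zip(u, grad)]
    return u

print(equilibrium_positions(3))   # ~ [-1.0772, 0.0, 1.0772] = -/+ (5/4)**(1/3)
print(equilibrium_positions(5))   # mass-independent, as noted in the text
```

Because the Coulomb barriers diverge at contact, the ions cannot cross during the descent, so the iteration is stable for modest step sizes.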
Following James , the Lagrangian for the resulting small oscillations is $`L`$ $`=`$ $`{\displaystyle \frac{m}{2}}{\displaystyle \sum _{\stackrel{i=1}{i\ne n_c}}^{N}}\dot{q}_i^2+{\displaystyle \frac{M}{2}}\dot{q}_{n_c}^2-{\displaystyle \frac{1}{2}}{\displaystyle \sum _{i,j=1}^{N}}{\displaystyle \frac{\partial ^2V}{\partial z_i\partial z_j}}|_{\{q_i\}=0}q_iq_j`$ (3) $`=`$ $`{\displaystyle \frac{m}{2}}{\displaystyle \sum _{\stackrel{i=1}{i\ne n_c}}^{N}}\dot{q}_i^2+{\displaystyle \frac{M}{2}}\dot{q}_{n_c}^2-{\displaystyle \frac{1}{2}}𝗊a_0{\displaystyle \sum _{i,j=1}^{N}}A_{ij}q_iq_j`$ (4) where $$A_{ij}=\{\begin{array}{cc}1+2\sum _{\stackrel{k=1}{k\ne i}}^{N}\frac{1}{|u_i-u_k|^3}\hfill & i=j\hfill \\ -2\frac{1}{|u_i-u_j|^3}\hfill & i\ne j\hfill \end{array}$$ (5) We define a normalized time as $`T=\omega _zt`$. In treating the case of two ion species, we write $`\mu =M/m`$ for the mass ratio of the two species and normalize the amplitude of the ion vibrations $`q_i(t)`$ as $`Q_i=q_i\sqrt{𝗊a_0}`$, $`i\ne n_c`$, $`Q_{n_c}=q_{n_c}\sqrt{𝗊a_0\mu }`$. The Lagrangian becomes $$L=\frac{1}{2}\sum _{i=1}^{N}\left(\frac{dQ_i}{dT}\right)^2-\frac{1}{2}\sum _{i,j=1}^{N}A_{ij}^{\prime }Q_iQ_j$$ (6) where $$A_{ij}^{\prime }=\{\begin{array}{cc}A_{ij}\hfill & i,j\ne n_c\hfill \\ A_{ij}/\sqrt{\mu }\hfill & i\mathrm{or}j=n_c,i\ne j\hfill \\ A_{ij}/\mu \hfill & i=j=n_c\hfill \end{array}$$ (7) generalizing the result of James . The Lagrangian is now cast in the canonical form for small oscillations in the coordinates $`Q_i(t)`$. To find the normal modes, we solve the eigenvalue equation $$𝐀^{\prime }\vec{v}^{(k)}=\zeta _k^2\vec{v}^{(k)},k=1,\dots ,N$$ (8) for the frequencies $`\zeta _k`$ and (orthonormal) eigenvectors $`\vec{v}^{(k)}`$ of the $`N`$ normal modes. Because of our normalization of the Lagrangian (6), the $`\zeta _k`$ are normalized to $`\omega _z`$ and the $`\vec{v}^{(k)}`$ are expressed in terms of the normalized coordinates $`Q_i(t)`$.
In terms of the physical time $`t`$, the frequency of the $`k`$th mode is $`\zeta _k\omega _z`$. If the $`k`$th mode is excited with an amplitude $`C`$, we have $`q_i(t)`$ $`=`$ $`\mathrm{Re}[Cv_i^{(k)}e^{i(\zeta _k\omega _zt+\varphi _k)}],i\ne n_c`$ (9) $`q_{n_c}(t)`$ $`=`$ $`\mathrm{Re}[C{\displaystyle \frac{1}{\sqrt{\mu }}}v_{n_c}^{(k)}e^{i(\zeta _k\omega _zt+\varphi _k)}]`$ (10) in terms of the physical coordinates $`q_i(t)`$. We can solve for the normal modes analytically for $`N=3`$. Exact expressions for the normal-mode frequencies are $`\zeta _1`$ $`=`$ $`\left[{\displaystyle \frac{13}{10}}+{\displaystyle \frac{1}{10\mu }}(21-\sqrt{441-34\mu +169\mu ^2})\right]^{\frac{1}{2}}`$ (11) $`\zeta _2`$ $`=`$ $`\sqrt{3}`$ (12) $`\zeta _3`$ $`=`$ $`\left[{\displaystyle \frac{13}{10}}+{\displaystyle \frac{1}{10\mu }}(21+\sqrt{441-34\mu +169\mu ^2})\right]^{\frac{1}{2}}`$ (13) normalized to $`\omega _z`$. The mode eigenvectors are $`\vec{v}^{(1)}`$ $`=`$ $`N_1(1,{\displaystyle \frac{\sqrt{\mu }}{8}}(13-5\zeta _1^2),1)`$ (14) $`\vec{v}^{(2)}`$ $`=`$ $`N_2(1,0,-1)`$ (15) $`\vec{v}^{(3)}`$ $`=`$ $`N_3(1,{\displaystyle \frac{\sqrt{\mu }}{8}}(13-5\zeta _3^2),1)`$ (16) in terms of $`Q_i(t)`$. Here $`N_1,N_2,`$ and $`N_3`$ are normalization factors. In the case of three identical ions ($`\mu =1`$), we can express the mode eigenvectors in terms of the $`Q_i(t)`$ as $`\vec{v}^{(1)}=(1,1,1)/\sqrt{3}`$, $`\vec{v}^{(2)}=(1,0,-1)/\sqrt{2}`$, and $`\vec{v}^{(3)}=(1,-2,1)/\sqrt{6}`$. The mode eigenvectors, in this special case, also give the ion oscillation amplitudes in terms of the physical coordinates $`q_i(t)`$. For three identical ions, then, pure axial COM motion constitutes a normal mode. (This result holds for an arbitrary number of identical ions.) We also note that the center ion does not move in mode #2; hence the frequency and eigenvector of mode #2 are independent of $`\mu `$.
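The $`N=3`$ eigenvalue problem of Eq. (8) is easy to verify numerically. A sketch (using the equilibrium spacing $`|u|^3=5/4`$ implied by Eq. (2)) that builds $`𝐀^{\prime }`$ and compares its eigenvalues with the closed-form frequencies:

```python
import numpy as np

# Build A' of Eq. (7) for N = 3 and check its eigenvalues against the
# closed-form zeta_k. The outer ions sit at u = +/- (5/4)**(1/3).
def mode_frequencies(mu):
    d3 = 5.0/4.0                      # |u_i - u_j|^3 for neighboring ions
    A = np.array([[1 + 9/(4*d3), -2/d3,     -1/(4*d3)],
                  [-2/d3,         1 + 4/d3, -2/d3    ],
                  [-1/(4*d3),    -2/d3,      1 + 9/(4*d3)]])
    w = np.diag([1.0, 1.0/np.sqrt(mu), 1.0])   # mass weighting of Eq. (7)
    return np.sqrt(np.linalg.eigvalsh(w @ A @ w))   # zeta_k in units of omega_z

for mu in (0.5, 1.0, 4.0):
    s = np.sqrt(441 - 34*mu + 169*mu**2)
    exact = [np.sqrt(13/10 + (21 - s)/(10*mu)), np.sqrt(3.0),
             np.sqrt(13/10 + (21 + s)/(10*mu))]
    print(mu, np.allclose(mode_frequencies(mu), exact))   # True for each mu
```

For $`\mu =1`$ this reproduces the familiar identical-ion frequencies $`1`$, $`\sqrt{3}`$, and $`\sqrt{29/5}`$ in units of $`\omega _z`$.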
For any odd number $`N`$ of ions there are $`(N-1)/2`$ modes for which the center ion does not move. These modes will likewise have frequencies and eigenvectors independent of $`\mu `$. Moreover, they have $`v_{n_c-m}^{(k)}=-v_{n_c+m}^{(k)}`$ and so they are orthogonal to the COM motion and do not couple to uniform electric fields. The center ion moves in the other $`(N+1)/2`$ modes, and unless $`\mu =1`$, each of these $`(N+1)/2`$ modes has a component of axial COM motion and therefore couples to uniform electric fields. For $`N=5`$ and higher, the normal mode frequencies depend on $`\mu `$ in a complicated way. However, it is easy to find the frequencies numerically. Fig. 1 shows the mode frequencies for $`N=3`$, 5, 7, and 9 as a function of $`\mu `$ for $`0.01<\mu <100`$. The modes are numbered in order of increasing frequency (at $`\mu =1`$), and are normalized to $`\omega _z`$. In each case, the lowest-lying mode has all ions moving in the same direction and consists of pure COM motion for $`\mu =1`$. The even-numbered modes correspond to the $`(N-1)/2`$ modes for which the center ion does not move. Their frequencies are therefore independent of $`\mu `$. For both very large and very small $`\mu `$ the modes pair up, as shown in Fig. 1. For each pair there is some value $`\mu >1`$ for which the modes become degenerate. The relative spacing between modes in a pair is also smaller in the large-$`\mu `$ limit than in the small-$`\mu `$ limit. In selecting a normal mode of motion for logic operations, we want to ensure that the mode is well resolved from all other normal modes. However, when modes are nearly degenerate, as for $`\mu \ll 1`$ and $`\mu \gg 1`$, transfer of energy can occur between the modes in the presence of an appropriate coupling, for instance if the static confining potential contains small terms of order $`z^3`$. This coupling can lead to a loss of coherence of the logic mode.
Also, the need to resolve the logic mode from a nearby spectator mode can force a reduction in gate speed. These effects limit the usefulness of the sympathetic cooling scheme for $`\mu `$ very large. Evidently it is best to use a cooling ion that is of the same mass or lighter than the logic ions. In this case mode #2 is well-separated from all other modes, as shown in Fig. 1.

## III Transverse Modes of Motion

We now consider the motion of the ions transverse to the $`z`$-axis. The ions experience an RF potential $`\chi \mathrm{cos}(\mathrm{\Omega }t)(x^2-y^2)/2`$ for a suitable choice of axes $`x`$ and $`y`$ perpendicular to $`z`$, where $`\mathrm{\Omega }`$ is the frequency of the RF field and $`\chi `$ is a constant. The static confining potential can be written $`(𝗊a_0/2)(z^2-\alpha x^2-(1-\alpha )y^2)`$ at the position of the ions (with $`\alpha `$ a constant), so there is also a transverse static electric field. To analyze the ion motion, we work in the pseudopotential approximation, in which one time-averages the motion over a period of the RF drive to find the ponderomotive force on the ion. If the static potential is negligible, the RF drive gives rise to an effective transverse confining potential of $`\frac{1}{2}m\omega _{r0}^2(x^2+y^2)`$, where $`\omega _{r0}=𝗊\chi /(\sqrt{2}\mathrm{\Omega }m)`$ for an ion of mass $`m`$. If we include the effects of the static field, the transverse potential becomes $`\frac{1}{2}m(\omega _x^2x^2+\omega _y^2y^2)`$, where $`\omega _x=\omega _{r0}\sqrt{1-\alpha \omega _z^2/\omega _{r0}^2}`$, $`\omega _y=\omega _{r0}\sqrt{1-(1-\alpha )\omega _z^2/\omega _{r0}^2}`$. Below we will assume $`\alpha =1/2`$, so that $`\omega _y=\omega _x`$. In any case, the transverse potential is that of a simple harmonic oscillator, as we saw also for the axial potential. However, the transverse potential depends directly on the ion’s mass, so the center ion of a string feels a different trap potential than the others for $`\mu \ne 1`$.
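As a quick consistency check (with illustrative numbers, and assuming the pseudopotential square roots carry minus signs, $`\omega _x=\omega _{r0}\sqrt{1-\alpha \omega _z^2/\omega _{r0}^2}`$ and likewise for $`\omega _y`$): at $`\alpha =1/2`$ both transverse frequencies reduce to $`\omega _z\sqrt{ϵ^2-1/2}`$ with $`ϵ=\omega _{r0}/\omega _z`$.

```python
import numpy as np

wz, wr0, alpha = 1.0, 2.0, 0.5     # illustrative values, in units of omega_z
wx = wr0*np.sqrt(1 - alpha*(wz/wr0)**2)          # transverse frequency along x
wy = wr0*np.sqrt(1 - (1 - alpha)*(wz/wr0)**2)    # transverse frequency along y
eps = wr0/wz                                      # trap anisotropy parameter
```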
We define $`ϵ=\omega _{r0}/\omega _z`$, so that $`\omega _x=\omega _z\sqrt{ϵ^2-1/2}`$. Then the normalized Lagrangian for the motion along $`x`$ is $$L=\frac{1}{2}\underset{i=1}{\overset{N}{\sum }}\left(\frac{dX_i}{dT}\right)^2-\frac{1}{2}\underset{i,j=1}{\overset{N}{\sum }}B_{ij}^{}X_iX_j$$ (17) where $`X_i=x_i\sqrt{𝗊a_0}`$ for $`i\ne n_c`$ and $`X_{n_c}=x_{n_c}\sqrt{𝗊a_0\mu }`$ are normalized ion vibration amplitudes along $`x`$. Here $$B_{ij}^{}=\{\begin{array}{cc}B_{ij}\hfill & i,j\ne n_c\hfill \\ B_{ij}/\sqrt{\mu }\hfill & i\mathrm{or}j=n_c,i\ne j\hfill \\ B_{ij}/\mu \hfill & i=j=n_c\hfill \end{array}$$ (18) and $$B_{ij}=\{\begin{array}{cc}ϵ^2-\frac{1}{2}-\underset{\stackrel{k=1}{k\ne i}}{\overset{N}{\sum }}\frac{1}{\left|u_i-u_k\right|^3}\hfill & i=j,j\ne n_c\hfill \\ \frac{ϵ^2}{\mu }-\frac{1}{2}-\underset{\stackrel{k=1}{k\ne i}}{\overset{N}{\sum }}\frac{1}{\left|u_i-u_k\right|^3}\hfill & i=j=n_c\hfill \\ \frac{1}{\left|u_i-u_j\right|^3}\hfill & i\ne j\hfill \end{array}$$ (19) We can describe the normal mode frequencies and oscillation amplitudes in terms of the eigenvectors and eigenvalues of $`B_{ij}^{}`$, just as for the axial case above. The normalizations of the time and position coordinates remain the same as in the axial case. In the previous section, we assumed that the radial confinement of the ions was strong enough that the configuration of ions in a string along the $`z`$-axis was always stable. However, for sufficiently small $`ϵ`$, the string configuration becomes unstable. The stable configurations for different values of $`ϵ`$ can be calculated, and several of these configurations have been observed for small numbers of ions. Rather than review the theory of these configurations, we will simply find the range of validity of our small-oscillation Lagrangian for the string configuration. The string will remain stable for all $`ϵ`$ greater than some $`ϵ_s=ϵ_s(\mu )`$; $`ϵ_s`$ also varies with $`N`$. On the boundary between stable and unstable regions, the frequency of some mode goes to zero.
Recalling that the determinant of a matrix is equal to the product of its eigenvalues, we see that $`ϵ_s(\mu )`$ is the maximum value of $`ϵ`$ satisfying $`\mathrm{det}B^{}(ϵ,\mu )=0`$ for $`\mu `$ fixed. Fig. 2 shows $`ϵ_s(\mu )`$ as a function of $`\mu `$ for 3, 5, 7, and 9 ions. In each case, there is a cusp in $`ϵ_s(\mu )`$ corresponding to the crossing of the two largest solutions to $`\mathrm{det}B^{}(ϵ,\mu )=0`$. The position of the cusp varies with the number of ions, but lies between $`\mu =0.1`$ and $`\mu =1`$ for $`N\le 9`$. Only the cusp for $`N=3`$ is clearly visible in Fig. 2, but numerical study indicates the presence of a cusp for all four values of $`N`$. For $`\mu `$ greater than the value at the cusp, $`ϵ<ϵ_s(\mu )`$ corresponds to instability of the zigzag mode, so that the string breaks into a configuration in which each ion is displaced in the opposite direction to its neighbors. For $`\mu `$ smaller than the value at the cusp, $`ϵ_s`$ is independent of $`\mu `$. In this regime, $`ϵ<ϵ_s`$ creates an instability in a mode similar to the zigzag mode, except that the center ion remains fixed. We can proceed to calculate the frequencies of the transverse modes for values $`ϵ>ϵ_s(\mu )`$. Again, these frequencies are normalized to the axial frequency of a single ion of mass $`m`$. Fig. 3 shows the transverse mode frequencies for 3, 5, 7, and 9 ions as a function of $`\mu `$, where $`ϵ`$ is taken equal to $`1.1ϵ_s(\mu )`$. The modes are numbered in order of increasing frequency at $`\mu =1`$ (all ions identical). In this numbering scheme, the central ion moves in odd-numbered modes but not in even-numbered modes. The frequencies of the even-numbered modes appear to depend on $`\mu `$ because they are calculated at a multiple of $`ϵ_s(\mu )`$; for constant $`ϵ`$ these frequencies are independent of $`\mu `$. The cusps in the mode frequencies in Fig. 3 arise from the cusps of $`ϵ_s(\mu )`$ at the crossover points between the two relevant solutions of $`\mathrm{det}B^{}=0`$.
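A sketch of locating $`ϵ_s(\mu )`$ numerically, assuming a transverse matrix with diagonal $`ϵ^21/2\mathrm{\Sigma }_{k\ne i}1/|u_iu_k|^3`$ (with $`ϵ^2/\mu `$ replacing $`ϵ^2`$ for the center ion) and off-diagonal $`+1/|u_iu_j|^3`$, which is consistent with $`\omega _x=\omega _z\sqrt{ϵ^21/2}`$. Since the primed matrix has the form $`ϵ^2D+C^{}`$ with $`D`$ diagonal, the roots of $`\mathrm{det}B^{}=0`$ are eigenvalues of a symmetric pencil:

```python
import numpy as np

def eps_s(mu, u, nc):
    """Largest eps with det B'(eps, mu) = 0, i.e. the string-stability boundary."""
    N = len(u)
    C = np.zeros((N, N))                   # B = eps^2 * (diagonal weights) + C
    for i in range(N):
        C[i, i] = -0.5 - sum(1/abs(u[i] - u[k])**3 for k in range(N) if k != i)
        for j in range(N):
            if j != i:
                C[i, j] = 1/abs(u[i] - u[j])**3
    w = np.ones(N)
    w[nc] = np.sqrt(mu)
    Cp = C / np.outer(w, w)                # primed normalization, as in eq. (18)
    D = np.ones(N)
    D[nc] = 1/mu**2                        # coefficient of eps^2 on the diagonal of B'
    M = -Cp / np.sqrt(np.outer(D, D))      # symmetric matrix with the spectrum of -D^{-1} C'
    return np.sqrt(np.max(np.linalg.eigvalsh(M)))

d = (5/4)**(1/3)                           # three-ion equilibrium positions (-d, 0, d)
e_heavy = eps_s(1.0, np.array([-d, 0.0, d]), nc=1)    # zigzag boundary, eps_s^2 = 2.9
e_light = eps_s(0.05, np.array([-d, 0.0, d]), nc=1)   # light center: center-fixed zigzag
```

For $`\mu =1`$ this gives $`ϵ_s=\sqrt{2.9}`$ (i.e. zigzag instability at $`\omega _x^2=\frac{12}{5}\omega _z^2`$), while for small $`\mu `$ it saturates at the $`\mu `$-independent value $`\sqrt{1.5}`$, as described above.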
Mode frequencies plotted for a constant value of $`ϵ`$ do not exhibit these cusps. As in the case of axial motion, the mode frequencies form pairs of one even- and one odd-numbered mode for small $`\mu `$. However, for large $`\mu `$ all but one of the transverse modes become degenerate. The only nondegenerate transverse mode in this case is the zigzag mode. In general, the modes are most easily resolved from their neighbors for $`\mu =1`$, as in the case of axial motion. Increasing $`ϵ`$ reduces the frequency spacing between nearly degenerate modes. At $`ϵ=1.1ϵ_s(\mu )`$ and $`\mu =1`$, for instance, the fractional spacing between the cold transverse mode of 3 ions and its nearest neighbor is 0.20, but for $`ϵ=1.5ϵ_s(\mu )`$ the same spacing is 0.09. The near-degeneracy of the modes for large or small $`\mu `$ and for $`ϵ/ϵ_s`$ significantly greater than 1 limits the usefulness of these modes because of possible mode cross-coupling, just as for the axial modes. Resolving a particular transverse mode requires operating the trap near the point at which the string configuration becomes unstable, i.e., $`ϵ`$ near $`ϵ_s(\mu )`$. In this regime, the collective motion of the ions is quite sensitive to uncontrolled perturbations, which may pose significant technical problems for using a transverse mode in quantum logic operations.

## IV Mode Heating

Stochastic electric fields present on the ion trap electrodes, for instance from fluctuating surface potentials, can heat the various normal modes of motion incoherently. For ion trap characteristic dimension $`d_{trap}`$ much larger than the size of the ion crystal $`d_{ions}`$, these fields are approximately uniform across the ion crystal, so they couple only to the COM motion. The $`(N-1)/2`$ even-numbered modes are orthogonal to the COM motion, so they are only heated by fluctuating electric field gradients.
The heating rates of these modes are reduced by a factor of at least $`(d_{ions}/d_{trap})^2\ll 1`$ as compared to the heating of the other modes. In the following, therefore, we will neglect the effects of fluctuating field gradients, so that the even-numbered modes do not heat at all. The analysis of sections 2 and 3 shows that the motion of a crystal of $`N`$ ions is separable into the $`3N`$ normal modes, each of which is equivalent to a simple harmonic oscillator. Hence we can quantize the crystal motion by quantizing the normal modes. The $`k`$th normal mode gives rise to a ladder of energy levels spaced by $`\mathrm{\hslash }\zeta _k\omega _z`$, with $`3N`$ such ladders in all. If we now write the uniform electric field power spectral density as $`S_E(\omega )`$, we can generalize the result of to give $$\dot{\overline{n}}_k=\frac{𝗊^2S_E(\zeta _k\omega _z)}{4m\mathrm{\hslash }\zeta _k\omega _z}\left(\frac{v_{n_c}^{(k)}}{\sqrt{\mu }}+\underset{\stackrel{j=1}{j\ne n_c}}{\overset{N}{\sum }}v_j^{(k)}\right)^2$$ (20) for the heating rate of the $`k`$th mode, expressed in terms of the average number of quanta gained per second. Recall that $`v_i^{(k)}`$ is the oscillation amplitude of the $`i`$th ion in the $`k`$th normal mode, expressed in the normalized coordinates. It is useful to normalize the heating rate in equation (20) to the heating rate of the lowest-lying axial mode of a string of identical ions. This normal mode consists entirely of COM motion and we write $`v_j^{COM}=1/\sqrt{N}`$ for all ions. The normalized heating rate of the $`k`$th mode is then $$\frac{\dot{\overline{n}}_k}{\dot{\overline{n}}_{COM}}=\frac{1}{N\zeta _k}\left(\frac{v_{n_c}^{(k)}}{\sqrt{\mu }}+\underset{\stackrel{j=1}{j\ne n_c}}{\overset{N}{\sum }}v_j^{(k)}\right)^2$$ (21) where we have assumed that the spectral density $`S_E(\omega )`$ is constant over the frequency range of the normal modes, i.e., $`S_E(\omega _z)=S_E(\zeta _k\omega _z)`$. Fig.
4 shows plots of the normalized heating rates of the axial modes for $`N=3`$, 5, 7, and 9 as a function of $`\mu `$. Fig. 5 is the same, but for the transverse modes, with $`ϵ=1.1ϵ_s`$. The numbering of modes on the plots of heating rate matches the numbering on the corresponding plots of mode frequency (Figs. 1 and 3). In both axial-mode and transverse-mode plots, the even-numbered modes have the center ion at rest, while the center ion moves for all odd-numbered modes. We see from Figs. 4 and 5 that the modes for which the center ion is fixed can never heat, while all the other modes always heat to some extent for $`\mu \ne 1`$. We will refer to these modes as “cold” and “hot” modes, respectively. If the ions are identical, only the modes with all ions moving with the same amplitude (COM modes) can heat. There are three such modes, one along $`\widehat{x}`$, one along $`\widehat{y}`$, and one along $`\widehat{z}`$. In interpreting Figs. 4 and 5, it is important to recall that the normalized heating rate defined in Eq. (21) is inversely proportional to the mode frequency. For instance, the $`\mu `$-dependence of the heating rate of the highest-frequency transverse mode can be largely ascribed to variations in the mode frequency, rather than to changes in the coupling of the mode to the electric field.

## V Prospects for Sympathetic Cooling

Heating reduces logic gate fidelity in two ways. The logic mode itself can be heated, but by choosing a cold mode, we can render this effect negligible. On the other hand, the Rabi frequency of the transition between logic-mode motional states depends on the total wavepacket spread of the ion involved in the transition. Heating on modes other than the logic mode can thus lead to unknown, uncontrolled changes in this Rabi frequency, resulting in overdriving or underdriving of the transition. The purpose of sympathetic cooling is to remove this effect by cooling the center ion and thus all hot modes.
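The hot/cold distinction can be checked directly from the normalized heating rate of eq. (21). A sketch for the axial modes of three ions, assuming the axial matrix with diagonal $`1+2\mathrm{\Sigma }_{k\ne i}1/|u_iu_k|^3`$ and off-diagonal $`2/|u_iu_j|^3`$ (the breathing mode stays cold for any $`\mu `$, and at $`\mu =1`$ only the COM mode heats):

```python
import numpy as np

def heating_rates(mu, u, nc):
    """Normalized heating rates (eq. 21) of the axial modes, ascending in frequency."""
    N = len(u)
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] = 1 + 2*sum(1/abs(u[i] - u[k])**3 for k in range(N) if k != i)
        for j in range(N):
            if j != i:
                A[i, j] = -2/abs(u[i] - u[j])**3
    w = np.ones(N)
    w[nc] = np.sqrt(mu)
    zeta2, v = np.linalg.eigh(A / np.outer(w, w))
    # coupling of each mode to a uniform field: v_nc/sqrt(mu) + sum over the other ions
    coup = v[nc, :]/np.sqrt(mu) + (v.sum(axis=0) - v[nc, :])
    return coup**2 / (N*np.sqrt(zeta2))

d = (5/4)**(1/3)
rates = heating_rates(1.0, np.array([-d, 0.0, d]), nc=1)   # identical ions: (1, 0, 0)
```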
For sympathetic cooling to be useful, we must find a cold mode suitable for use in quantum logic. The cold mode must be spectrally well separated from any other modes in order to prevent unwanted mode cross-coupling. We can use the lowest-lying cold axial mode as the logic mode for $`\mu \lesssim 3`$. In this mode, called the breathing mode, the center ion remains fixed and the spacings between ions expand and contract in unison. Unless the trap is operated very close to the instability point of the string configuration, the breathing mode is better separated from its neighbors than are any of the cold transverse modes. For $`\mu \gtrsim 3`$ any cold mode, either axial or transverse, is nearly degenerate with a hot mode. In this regime one must make a specific calculation of mode frequencies in order to find the best-resolved cold mode. Even so, the cold axial modes are again better separated from their neighbors than are the cold transverse modes, except for $`ϵ`$ very close to $`ϵ_s(\mu )`$. It seems best to select a cold axial mode as the logic mode in most cases. By selecting our laser-beam geometry appropriately, we can ensure that the Rabi frequency of the motional transition on the axial mode used for logic depends chiefly on the spread of the ion wavepacket along $`z`$. In this case, heating of the axial modes will affect logic-gate fidelity, but heating of the transverse modes will have little effect. If the mass of the central ion is nearly the same as that of the others ($`\mu \approx 1`$), only the lowest axial mode will heat significantly, and we can continuously cool this mode by cooling only the central ion, ensuring that all ions remain in the Lamb-Dicke limit. If $`\mu `$ is not near 1, we must cool all $`(N+1)/2`$ hot modes (again by addressing the central ion) to keep all ions in the Lamb-Dicke limit. The analysis above indicates that, all other things being equal, we are best off if our substituted ion is identical to, or is an isotope of, the logic ions.
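For unequal masses, the spacing between the breathing mode and its neighbors follows from the closed-form three-ion axial frequencies. In the sketch below the discriminant $`441-34\mu +169\mu ^2`$ is an assumed reconstruction, and frequencies are in units of $`\omega _z`$ of a single logic ion:

```python
import numpy as np

def axial_zetas(mu):
    # closed-form three-ion axial frequencies (assumed discriminant 441 - 34*mu + 169*mu^2)
    s = np.sqrt(441 - 34*mu + 169*mu**2)
    return np.array([np.sqrt(1.3 + (21 - s)/(10*mu)),
                     np.sqrt(3.0),                       # breathing mode, mu-independent
                     np.sqrt(1.3 + (21 + s)/(10*mu))])

fz = 10.0                                           # omega_z/(2 pi) of one logic ion, MHz
z = axial_zetas(24/9)                               # heavy (mass-24) center, mass-9 logic ions
gap_heavy_center = fz*min(abs(z[0] - z[1]), abs(z[2] - z[1]))
z = axial_zetas(9/24)                               # roles reversed
gap_light_center = fz*min(abs(z[0] - z[1]), abs(z[2] - z[1]))
```

With a 10 MHz axial frequency, the breathing-mode gaps come out near 1.6 MHz (heavy center ion) and 6.2 MHz (light center ion), consistent with the <sup>9</sup>Be<sup>+</sup>/<sup>24</sup>Mg<sup>+</sup> numbers quoted below.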
However, sympathetic cooling can still be useful if the two ion species have different masses. For example, we can consider sympathetic cooling using the species <sup>9</sup>Be<sup>+</sup> and <sup>24</sup>Mg<sup>+</sup>. Linear traps constructed at NIST have demonstrated axial secular frequencies of over 10 MHz for single trapped <sup>9</sup>Be<sup>+</sup> ions. For three ions with <sup>24</sup>Mg<sup>+</sup> as the central ion, $`\omega _z(\mathrm{Be}^+)=2\pi \times 10`$ MHz yields a spacing of 1.6 MHz between the cold axial breathing mode and its nearest neighbor. If we reverse the roles of the ions ($`\omega _z(\mathrm{Mg}^+)=2\pi \times 10`$ MHz), the spacing increases to 6.2 MHz. The transverse modes are much harder to resolve from each other. For three ions with <sup>24</sup>Mg<sup>+</sup> in the center, we require $`\omega _{r0}(\mathrm{Be}^+)=2\pi \times 27.6`$ MHz to obtain $`ϵ=1.1ϵ_s`$, and the spacing between the cold transverse zigzag mode and its nearest neighbor is only 560 kHz. Reversing the roles of the ions, we find $`ϵ=1.1ϵ_s`$ at $`\omega _{r0}(\mathrm{Mg}^+)=2\pi \times 14.7`$ MHz with a spacing of 1.1 MHz. For this combination of ion species, the cold axial breathing mode seems most appropriate for logic. For a string of 3 or 5 ions, sympathetic cooling would require driving transitions on 2 or 3 axial-mode sidebands, respectively. From this example we see that sympathetic cooling can be useful even for ion mass ratios of nearly 3 to 1.

## VI Conclusion

We have investigated a particular sympathetic cooling scheme for the case of an ion string confined in a linear RF trap. We have numerically calculated the mode frequencies of the axial and transverse modes as functions of the mass ratio $`\mu `$ and trap anisotropy $`ϵ`$ for 3, 5, 7, and 9 ions. We have also calculated the heating rates of these modes relative to the heating rate of a single ion, assuming that the heating is driven by a uniform stochastic electric field.
The results indicate that the scheme is feasible for many choices of ion species if we use a cold axial mode as the logic mode. The optimal implementation of the scheme employs two ion species of nearly equal mass. However, a demonstration of sympathetic cooling using <sup>9</sup>Be<sup>+</sup> and <sup>24</sup>Mg<sup>+</sup> appears well within the reach of current experimental technique.

## ACKNOWLEDGMENTS

This research was supported by NSA, ONR, and ARO. This publication is the work of the U.S. Government and is not subject to U.S. copyright.
# Analysis of the optimality principles responsible for vascular network architectonics

## 1 Introduction

A great number of natural systems possess highly branched networks. An evident example of such systems is living tissue, where blood supplies the cellular tissue with oxygen, nutritious products, etc. through a branching vascular network and at the same time withdraws the products resulting from the living activity of the cellular tissue. A similar situation takes place in respiratory systems, where oxygen reaches small vessels (capillaries) going through the hierarchical system of bronchial tubes. There is a question of what physical principles govern the network organization of the living systems under consideration. In this paper we focus our attention on the analysis of different known optimality principles of network formation for the microcirculatory bed and on developing a new approach to this problem.

## 2 Analysis of the optimality principles

A microcirculatory bed can be reasonably regarded as a space-filling fractal, a natural structure for ensuring that all cells are serviced by capillaries. The vessel network must branch so that every small group of cells, referred to below as an “elementary tissue domain”, is supplied by at least one capillary. Since a typical length of capillaries is about 0.3 to 0.7 mm, a vessel network generated by an artery of length of order 1 to 5 cm should contain a sufficiently large number of hierarchy levels. At the zeroth level we meet the host artery and the host vein, the mother and daughter vessels belong to the $`n`$-th and $`(n+1)`$-th levels, respectively, and the last level $`N`$ comprises capillaries. So at each level $`n`$ of the vascular network the tissue domain supplied by a given microcirculatory bed as a whole can be approximated by the union of tissue subdomains whose mean size is about the typical length $`l_n`$ of the $`n`$-th level vessels.
Thus, the individual volume of these subdomains is estimated as $`V_n\sim l_n^3`$ and their total number (as well as the total number of $`n`$-th level vessels) is about $`M_n\sim V_0/l_n^3`$, where $`V_0`$ is the total volume of the microcirculatory bed. The higher the level, the more accurate becomes the independence of such estimates from the particular details of vessel arrangements. For internal organs they approximately hold also for large vessels of regional circulation. To justify the latter statement we present Table 1 relating the vessel lengths and radii to the radii of the corresponding tissue cylinders, i.e. the cylindrical neighborhood falling on one vessel of a fixed level. The condition that the vascular network be volume-preserving from one generation to the next immediately gives us the local relation between the characteristic lengths of the vessels: $`l_n^3\approx gl_{n+1}^3`$ (here $`g=2`$ is the order of the vessel branching node). Whence it follows that $`l_n\approx l_0g^{-n/3}`$, where $`l_0`$ is the characteristic size of the microcirculatory bed region or, what is practically the same, the length of the host artery. The following analysis, however, will require more detailed information about the vascular network architectonics. Namely, we need to know how the vessel radii change at the nodes and the relative arrangement of mother and daughter vessels. Actually here we meet the problem as to what fundamental regularities govern the vessel branching. These regularities manifest themselves in the relation between the radii $`a_0`$, $`a_1`$, $`a_2`$ of the mother and daughter arteries, respectively, and the angles $`\theta _1`$, $`\theta _2`$, $`\theta _{12}=\theta _1+\theta _2`$ which the daughter branches make with the direction of the mother artery and with each other (Fig. 1). A detailed theoretical attempt at understanding this regularity was first made by Cecil D. Murray in 1926. He proposed a model relating the artery radii at branching nodes (Fig.
1) by the expression $$a_0^x=a_1^x+a_2^x\text{with}x=3$$ (1) thereafter referred to as Murray’s law ($`x`$ is also called the bifurcation exponent). Murray’s approach was then developed further in a large number of works, see, for example, a series of works by Zamir et al. and by Woldenberg et al. (for a historical review see also ). The idea of Murray’s model is reduced to the assumption that a physiological vascular network, subject through evolution to natural selection, must have achieved an optimum arrangement corresponding to the least possible biological work needed for maintaining the blood flow through it at the required level. This biological work $`𝒫`$ involves two terms: (i) the cost of overcoming viscous drag during blood motion through the vessels obeying Poiseuille’s law, and (ii) the energy metabolically required to maintain the volume of blood and the vessel tissue. Dealing with an individual artery of length $`l`$ and radius $`a`$ with a blood flow rate $`J`$ in it we get: $$𝒫=\frac{8\eta lJ^2}{\pi a^4}+m\pi a^2l,$$ (2) where $`\eta `$ is the blood viscosity and $`m`$ is a metabolic coefficient. Minimizing function (2) with respect to $`a`$ we find the relation between the blood flow rate $`J`$ and the artery lumen radius $`a`$ corresponding to the given optimality principle: $$J=ka^3,$$ (3) where the coefficient $`k=\sqrt{m\pi ^2/(16\eta )}`$ is a constant for the tissue under consideration. Due to the blood conservation at branching nodes we can write $`J_0=J_1+J_2`$ (Fig. 1) whence Murray’s law (1) immediately follows. However, care must be taken in comparing measurements with prediction, particularly if averages over many successive levels are used. Already Mall himself noted that his data were approximate.
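The minimization leading to (3) is easy to verify numerically. In the sketch below all parameter values are illustrative placeholders, not data from the paper; the optimum of $`𝒫(a)`$ is located by a grid search and compared with $`J=ka^3`$.

```python
import numpy as np

eta, m, l, J = 4e-3, 1.0, 1.0, 1e-9      # illustrative viscosity, metabolic cost, length, flow
a = np.geomspace(1e-5, 1e-2, 400001)     # candidate lumen radii
P = 8*eta*l*J**2/(np.pi*a**4) + m*np.pi*a**2*l   # biological work, eq. (2)
a_opt = a[np.argmin(P)]                  # radius minimizing the biological work
k = np.sqrt(m*np.pi**2/(16*eta))         # Murray coefficient from eq. (3)
```

At the numerical optimum, $`ka_{opt}^3`$ reproduces the imposed flow rate $`J`$ to within the grid resolution.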
In particular, for large arteries of systemic circulation where blood flow can be turbulent, the bifurcation exponent $`x`$ should be equal to $`7/3\approx 2.33`$, as follows from the optimality principle of minimum pumping power and lumen volume. There is also another optimality principle leading to Murray’s law, the principle of minimum drag and lumen surface. The drag against the blood motion through vessels is caused by the blood viscosity and can be described in terms of the shear stress on the walls of vessels, $`2\pi al\tau _t`$, where $$\tau _t=\eta \partial _nv=\frac{a}{2l}\delta P=\frac{4\eta }{\pi }\frac{J}{a^3}$$ (4) for laminar flow and $`\delta P`$ is the pressure drop along a vessel of length $`l`$ and radius $`a`$. Then the given optimality principle is reduced to the minimum condition for the function $$𝒫^{}=\frac{8\eta lJ}{a^2}+m^{}2\pi al,$$ (5) where $`m^{}`$ is a certain weighting coefficient. Minimizing (5) with respect to $`a`$ we get a relationship between $`J`$ and $`a`$ of the same form as (3), leading to Murray’s law again. There were a number of works (see, e.g., ) aimed at finding out which specific optimality principle governs the artery branching by studying the angles of daughter vessels, $`\theta _1`$, $`\theta _2`$, $`\theta _{12}`$, in relation to the asymmetry of the branching node, $`a_2/a_1`$ (Fig. 1). However, on one hand, all the optimality principles give numerically close relationships between the vessel angles and radii for the bifurcation exponent $`x\approx 3`$. On the other hand, it turned out that experimentally determined branching angles generally exhibit considerable scatter around the theoretical optimum. The matter is that small variations of the total “cost” of an artery bifurcation, of about several percent, cause the actual vessel angles to deviate significantly from the predicted optimum. This feature is illustrated in Fig.
2 showing the variations in the vessel angles governed by the minimality of functional (5) with imposed 10% perturbations. Namely, varying the coordinates of the branching node (Fig. 1) we get that the minimum of the function $`𝒫_0^{}+𝒫_1^{}+𝒫_2^{}`$ is attained when $`a_1\mathrm{cos}\theta _1+a_2\mathrm{cos}\theta _2`$ $`=`$ $`(1+ϵ)a_0,`$ (6) $`a_1\mathrm{sin}\theta _1-a_2\mathrm{sin}\theta _2`$ $`=`$ $`ϵ^{}a_0,`$ (7) where the additional terms $`ϵa_0`$ and $`ϵ^{}a_0`$ with $`\left|ϵ\right|,\left|ϵ^{}\right|<0.1`$ describe possible deviations from the optimality condition. The resulting values of $`\theta _1`$ and $`\theta _2`$ are depicted in Fig. 2. It should be noted that expressions (6), (7) actually correspond to the mechanical equilibrium of the node under the action of the vessel walls strained by the blood motion, and the additional terms describe a possible effect of the cellular tissue. Nevertheless, the optimality principle based on functional (2) seems to govern the artery bifurcations. Besides, this principle also gives adequate estimates of the integral characteristics of microcirculatory beds. The bifurcation exponent $`x`$, on the contrary, is well approximated by the Murray value, $`x\approx 3`$, at least starting from arteries of intermediate size. This value also meets the space-filling requirement for the vascular network, fractal in geometry, to fill precisely the space of a fixed relative volume at each hierarchy level. Indeed, assuming the volume of the tissue cylinders matching an artery of length $`l`$ and lumen radius $`a`$ to be about $`l^3`$ we get that the corresponding relative volume of blood is $`(a/l)^2`$. So it is fixed if $`a=\mathrm{constant}\times l`$ and, thus, $`a_0^3=a_1^3+a_2^3`$ provided the tissue cylinder matching the mother artery is composed of the tissue cylinders of the daughter arteries. In order to specify the microcirculatory bed structure we also need to classify vessels according to the symmetry of their branching (Fig. 3).
The matter is that arteries with predominantly asymmetric bifurcations give off comparatively little flow into their side branches along the way and are, therefore, able to carry the mainstream flow across larger distances. Conversely, a more symmetric bifurcation pattern splits flow into numerous small branches, thereby delivering blood to the surrounding tissue. Such arteries have been attributed “conveying” and “delivering” types of function, respectively. Since blood must be conveyed towards the sites at which it is to be delivered, both types of vessels occur in real arterial trees. Moreover, a larger conveying vessel may switch into a bunch of small delivering branches. Obviously, real arterial trees should contain a great variety of intermediate stages in between these extremes, and as our field of view moves from the large systemic arteries to small arteries of regional circulation the vessel bifurcations should become more and more symmetrical. This has also been justified by numerically modelling the structure of arterial trees governed by the minimality condition of blood volume. According to the experimental data (see, e.g., the work and Fig. 4 based on it) even sufficiently large regional arteries of diameter and length about 300 $`\mu `$m and 1 cm, respectively, (Table 1) branch symmetrically, at least to a first approximation. Therefore microcirculatory beds as they have been specified above can be regarded as vessel networks with approximately symmetrical bifurcations. In other words, we may think of the systemic arteries as vessels of the conveying type where the mean blood pressure is practically constant. Conversely, the arteries of microcirculatory beds should belong to the delivering type and mainly determine the total resistance of the vascular network to blood flow, with the blood pressure drop being uniformly distributed over many arteries of different length.
## 3 Physiological mechanisms governing the vessel arrangements

The universality of Murray’s law for distributed transport systems in many different live organisms raises questions as to: What cues are available to organisms to use in generating such systems? What physiological mechanisms enable them to adapt to altering conditions? Do live organisms in fact follow certain global optimality principles? For Murray’s law (3) the shear stress $`\tau _t`$ is constant (see formula (4)) throughout a given artery system. Rodbard proposed that the shear stress detected by the vessel endothelium leads to the vessel growth or contraction, and Zamir suggested that this leads to the development of Murray’s system as vessels maintain a constant value of shear stress. Concerning the particular mechanism by which organisms can implement the shear stress sensitivity we can say the following. It is now established that the adaptation of conduit arteries as well as resistance arteries to acute changes in flow is mediated by the potent endogenous nitrovasodilator endothelium-derived relaxing factor, whose release from endothelial cells is enhanced by flow through the physical stimulation of shear stress (see, e.g., and references therein). The adaptation of arterial diameters to long-term changes in the flow rate also occurs through a mechanism which appears to involve the sensitivity to shear stress and the participation of endothelial cells, but remains not well understood. It should be noted that the equality of the shear stress throughout a vascular network does not lead directly to a certain optimality principle. Different principles, for instance, (2) and (5), can give the same condition imposed on the shear stress. Moreover, it is quite possible that the cause of this equality is of another nature.
In particular, for large conduit arteries in the human pulmonary tree the bifurcation exponent $`x`$ is reported to be in the range 1–2, whereas Murray’s law holds well beginning from intermediate conveying arteries. The matter is that in large systemic arteries the blood pressure exhibits substantial oscillations because of the heart beating, giving rise to damped waves travelling through the systemic arteries. The value of the bifurcation exponent $`x=2`$ matching the area-preserving law at the branching nodes ensures that the energy-carrying waves are not reflected back up the vessels at the nodes. However, this requirement can also be derived from a certain optimality principle. Summarizing the aforesaid, we will model the microcirculatory bed in terms of a delivering vascular network with symmetrical bifurcation nodes embedded uniformly into the cellular tissue. Besides, Murray’s law will be assumed to hold. The latter is also essential from the standpoint of the tissue self-regulation, which will be discussed in detail in the next section. Here, nevertheless, we make several remarks concerning the given aspect too, because it could be treated as an alternative origin of Murray’s law (3). Let us consider a symmetrical dichotomous vessel tree shown, e.g., in Fig. 3b. In order to govern the blood flow redistribution over the microcirculatory bed finely enough, so as to supply, for example, an increased amount of blood only to those regions where it is necessary and not to disturb other regions, the blood pressure should be uniformly distributed over the microcirculatory bed, at least approximately. The blood pressure drop $`\delta P_n`$ along an artery of level $`n`$ ($`n=0,1,2,\mathrm{\dots }`$, Fig.
3) for laminar blood flow is $$\delta P_n=\frac{8\eta l_nJ_n}{\pi a_n^4}.$$ (8) For the space-filling vascular network this artery supplies blood to a tissue region of volume about $`l_n^3`$, so under normal conditions the blood flow rate in it should be $`J_n\approx jl_n^3`$, where $`j`$ is the blood perfusion rate (the volume of blood flowing through a tissue domain of unit volume per unit time), assumed to be the same at all points of the given microcirculatory bed. Then formula (8) gives us the estimate $$\delta P_n=\frac{8\eta j}{\pi }\left(\frac{l_n}{a_n}\right)^4$$ whence it follows that $`\delta P_n`$ will be approximately the same for all levels, i.e. the blood pressure will be distributed uniformly over the arterial bed, if the ratio $`l_n/a_n`$ takes a certain fixed value, $`l_n=\mathrm{constant}\times a_n`$ and, thus, $`J_n=\mathrm{constant}^{}\times a_n^3`$. Due to blood conservation at the branching nodes we can write $$J_0=J_1+J_2$$ (9) (see Fig. 1). The latter immediately gives Murray’s law (1). In other words, Murray’s law can also be regarded as a direct consequence of the organism’s capacity for fine control of the blood flow redistribution over the microcirculatory beds. It should be noted that in previous papers we considered in detail the mathematical model for the vascular network response to variations in the tissue temperature for the given network architecture. We found that the distribution of the blood temperature over the venous bed, aggregating the information about the state of the cellular tissue, allows the living tissue to function properly. We showed that this property is one of the general basic characteristics of various natural hierarchical systems; these systems differ from one another only in the specific realization of such a synergetic mechanism.
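The level-independence of the pressure drop claimed above can be verified numerically. The sketch below (all parameter values are assumed for illustration only) evaluates Eq. (8) with $`l_n=ca_n`$ and $`J_n=jl_n^3`$ and checks that $`\delta P_n`$ does not depend on the level:

```python
from math import pi

# Illustrative sketch (parameter values are assumptions, not data from
# the paper): with l_n = c * a_n and J_n = j * l_n**3, the laminar
# pressure drop of Eq. (8), dP_n = 8*eta*l_n*J_n / (pi*a_n**4), is the
# same at every level of the arterial tree.

eta = 4e-3  # blood viscosity, Pa*s (assumed value)
j = 1e-3    # perfusion rate, 1/s (assumed value)
c = 30.0    # fixed ratio l_n / a_n (assumed value)

def pressure_drop(a_n):
    """Pressure drop along one artery of radius a_n, via Eq. (8)."""
    l_n = c * a_n
    J_n = j * l_n**3
    return 8.0 * eta * l_n * J_n / (pi * a_n**4)

# Radii shrinking down the tree: the drop per level stays constant
drops = [pressure_drop(a) for a in (1e-3, 5e-4, 2.5e-4, 1.25e-4)]
assert max(drops) - min(drops) < 1e-9 * drops[0]
# and matches the closed form (8*eta*j/pi) * (l_n/a_n)**4.
assert abs(drops[0] - 8.0 * eta * j / pi * c**4) < 1e-9 * drops[0]
```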
# GRB 970228 and GRB 980329 and the Nature of Their Host Galaxies ## 1 Introduction The GRB 970228 and GRB 980329 afterglows are among the best observed of the afterglows detected so far. We use observations of these afterglows to derive some of the properties of these bursts and to constrain the nature of their host galaxies. ## 2 GRB 970228 and the Nature of Its Host Galaxy We have determined the local galactic extinction toward the GRB970228 field by comparing galaxy counts in two bands in this field to those in the HDF, and by comparing the observed broad band colors of stars in the GRB970228 field to the colors of library spectra of the same spectral type. We also estimate the extinction using the neutral hydrogen column density and the amount of infrared dust emission toward this field. Combining the results of these methods, we find a best-fit galactic extinction in the optical of $`A_V=1.09_{-0.20}^{+0.10}`$, which implies a substantial dimming and change of the spectral slope of the intrinsic GRB970228 afterglow. Further details can be found in Castander & Lamb (1998, 1999a); see also González et al. (1999). Re-analyzing the HST images, we measure a color $`(V_{606}-I_{814})_{ST}=0.18_{-0.61}^{+0.51}`$ for the extended source. We constrain the nature of the likely host galaxy of GRB 970228 by comparing this color to those obtained from synthetic galaxy spectra computed with PEGASE (Fioc & Rocca-Volmerange (1997)), taking into account the measured extinction. The top panel of Figure 1 shows the expected colors of an Sa, an Sc and an Irregular galaxy with and without evolution included. Galaxies are only consistent with the observed color if they are at high redshift ($`z\sim 1.0`$–$`1.5`$), or have active star formation (like the evolving Irr shown). The bottom panel of Figure 1 better illustrates this point. 
It shows that on-going bursts of star formation of duration shorter than 1 Gyr produce acceptable $`V_{606}-I_{814}`$ colors; longer duration bursts are disfavored for redshifts $`z\lesssim 0.8`$. If we include the $`H`$ and $`K`$ magnitudes of Fruchter et al. (1998) in our analysis, our conclusions are strengthened (see Castander & Lamb 1999b for further details). We conclude that the host galaxy must be undergoing star formation, in agreement with our earlier result (Castander & Lamb 1998; see also Castander & Lamb 1999b). If any extinction due to the host galaxy is present, this conclusion is strengthened further. If the extended source is a galaxy with ongoing star formation, strong emission lines are expected. Tonry et al. (1997) and Kulkarni et al. (1997) have tried to obtain the spectrum of the GRB970228 afterglow and its associated nebulosity. Neither observation revealed any obvious emission lines. The lack of observed \[OII\] and Ly$`\alpha `$ emission lines suggests that the redshift of the galaxy may lie in the range $`1.5\lesssim z\lesssim 2.6`$, considering the spectral coverage of the observations. ## 3 GRB 980329 and the Nature of Its Host Galaxy We model the observed radio through X-ray spectrum of the GRB 980329 afterglow, and its evolution through time, as follows. We take the intrinsic spectrum to be a thrice-broken power law, motivated by the relativistic fireball model, in which spectral breaks may occur due to synchrotron self-absorption, the synchrotron peak, and electron cooling (see, e.g., Sari, Piran, & Narayan 1998). The spectrum that we fit is a generalization of the spectrum expected in this model, in the sense that we do not constrain, a priori, the slopes of the four spectral segments, nor the (power-law) rates at which these segments fade. We allow the intrinsic spectrum to be modified in the following ways. 
First, we allow this spectrum to be extincted by dust and absorbed by the Lyman limit at a single redshift, assumed to be the redshift of the burst and its host galaxy. We adopt the six parameter ultraviolet extinction curve of Fitzpatrick & Massa (1988) and the one parameter optical and near-infrared extinction curve of Cardelli, Clayton, & Mathis (1989). Finally, we redshift the modified spectrum to the observer’s frame of reference, and model the Lyman-$`\alpha `$ forest due to absorption by gas clouds along the line-of-sight between the burst and the observer. The afterglow of GRB 980329 is unique among afterglows observed to date in that enough measurements of it have been taken to determine all the parameters of our model. From the results of our fits, we draw six conclusions: (1) The inferred intrinsic spectrum of the afterglow is consistent with the predictions of the simplest relativistic fireball model, in which an isotropic fireball expands into a homogeneous external medium. (2) The intrinsic spectrum of the afterglow is extincted by dust (source frame $`A_V\sim 1`$ mag). (3) The linear component of the extinction curve is flat, which is typical of young star-forming regions like the Orion Nebula but is not typical of older star-forming or starburst regions. (4) The $`\sim `$2 mag drop between the $`R`$ and the $`I`$ bands can be explained by the Ly-$`\alpha `$ forest if the burst redshift is $`z\sim 5`$ (Fruchter 1999), by the far-ultraviolet non-linear component of the extinction curve if $`3\lesssim z\lesssim 5`$, and by the 2175 Å bump if $`z\sim 2`$; other redshifts are not consistent with these data, given this general model. Djorgovski et al. (1999) report that $`z<3.9`$ based upon the non-detection of the Ly-$`\alpha `$ forest in a Keck II spectrum of the host galaxy. 
(5) Assuming a redshift of $`z=3.5`$ for illustrative purposes, using the observed breaks in the intrinsic spectrum, and solving for the physical properties of the fireball (see, e.g., Wijers & Galama 1998), we find that the energy of the fireball per unit solid angle is $`\sim 10^{52}/4\pi `$ erg sr<sup>-1</sup> if $`\mathrm{\Omega }_m=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$. (6) Similarly, we find that the density of the external medium into which this fireball expands is $`n\sim 10^3`$ cm<sup>-3</sup> if $`\mathrm{\Omega }_m=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$. This density suggests that GRB 980329 occurred in a molecular cloud, which is consistent with the fact that the observed extinction features are characteristic of star-forming regions.
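For concreteness, a thrice-broken power law of the kind used as the intrinsic afterglow spectrum can be sketched as a piecewise function that is continuous at the three break frequencies. The break positions and slopes below are generic placeholders in the spirit of the synchrotron spectrum, not the fitted parameters of GRB 980329:

```python
def f_nu(nu, F_m=1.0, nu_a=1e10, nu_m=1e12, nu_c=1e15,
         s=(2.0, 1.0 / 3.0, -0.6, -1.1)):
    """Thrice-broken power law F_nu, continuous at each break.

    nu_a, nu_m, nu_c: self-absorption, synchrotron-peak and cooling
    break frequencies; s: the four segment slopes. All values are
    illustrative placeholders, not fitted parameters.
    """
    F_a = F_m * (nu_a / nu_m) ** s[1]  # flux at the self-absorption break
    F_c = F_m * (nu_c / nu_m) ** s[2]  # flux at the cooling break
    if nu < nu_a:
        return F_a * (nu / nu_a) ** s[0]
    if nu < nu_m:
        return F_m * (nu / nu_m) ** s[1]
    if nu < nu_c:
        return F_m * (nu / nu_m) ** s[2]
    return F_c * (nu / nu_c) ** s[3]

# The spectrum joins continuously across each break frequency.
for b in (1e10, 1e12, 1e15):
    assert abs(f_nu(b * (1 - 1e-9)) / f_nu(b) - 1.0) < 1e-6
```

A fit of the kind described above would additionally let each break frequency and normalization evolve as a power law in time.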
# Quasi-nuclear and quark model baryonium: historical survey Dedicated to the memory of C.B. Dover and I.S. Shapiro Invited talk at the Euroconference QCD99, to appear in the Proceedings, ed. S. Narison Preprint ISN 99-95, nucl-th/9909030 ## 1 Introduction In the 70’s, there have been many indications of new mesons coupled to the nucleon–antinucleon ($`\text{N}\overline{\text{N}}`$) system. States below the $`\text{N}\overline{\text{N}}`$ threshold were claimed, e.g., in radiative transitions $`\text{N}\overline{\text{N}}\to X+\gamma `$. Above the threshold, states were tentatively seen as bumps in cross sections. Clarifying the experimental situation was one of the main motivations for building the low-energy part of the antiproton beam facility at CERN. Most candidates for baryonium have not been confirmed by careful scans. It remains the case, however, that: * Some of the multimeson states seen below threshold in annihilation experiments might have to do with baryonium. This is the case in particular for the state called “AX” (now more prosaically $`f_2(1565)`$ ), as pointed out by Dover . * Evidence for broad baryonium states was based on elastic scattering and annihilation into two mesons. See, e.g., and references therein. The PS172 collaboration at LEAR has measured the differential cross section and analysing power of $`\text{N}\overline{\text{N}}`$ annihilation into two pseudoscalar mesons at various energies. Analysis, again, confirms a rich resonance structure . * There is an intriguing activity in total cross sections, especially for isospin $`I=1`$ $`\text{N}\overline{\text{N}}`$ and for the strangeness-exchange reaction $`\text{N}\overline{\text{N}}\to \overline{\mathrm{\Lambda }}\mathrm{\Lambda }`$ . A closer look reveals a P-wave enhancement which might be of a resonant nature. Unfortunately, the analysis is not yet published and some of the early claims have not been confirmed in more recent runs. 
* In the scalar channel ($`{}_{}{}^{3}\text{P}_{0}^{}`$ according to the conventional spectroscopic notation), the shift of protonium is slightly larger than expected . This unavoidably reminds us that a bound state close to threshold in the nuclear spectrum strongly distorts the pattern of atomic levels. In short, the intense activity in the hadron spectrum around $`2`$ GeV makes it difficult to conclude that baryonium is completely dead. Of course, the fashion has evolved: a state that would have been easily described years ago as a “baryonium candidate” would now preferentially be compared to predictions for glueballs or hybrids. It remains useful to recall and update the theoretical speculations inspired by the baryonium candidates in the late 70’s and early 80’s. ## 2 Quasi-nuclear baryonium ### 2.1 Brief history The question of possible nucleon–antinucleon ($`\text{N}\overline{\text{N}}`$) bound states was raised many years ago, in particular by Fermi and Yang , who remarked on the strong attraction at large and intermediate distances between N and $`\overline{\text{N}}`$. In the sixties, explicit attempts were made to describe the spectrum of ordinary mesons ($`\pi `$, $`\rho `$, etc.) as $`\text{N}\overline{\text{N}}`$ states, an approximate realisation of the “bootstrap” ideas. It was noticed , however, that the $`\text{N}\overline{\text{N}}`$ picture hardly reproduces the observed patterns of the meson spectrum, in particular the property of “exchange degeneracy”: for most quantum numbers, the meson with isospin $`I=0`$ is nearly degenerate with its $`I=1`$ partner, one striking example being provided by the $`\omega `$ and $`\rho `$ vector mesons. In the 70’s, a new approach was pioneered by Shapiro , Dover and others: in their view, $`\text{N}\overline{\text{N}}`$ states were no longer associated with “ordinary” light mesons, but instead with new types of mesons with a mass near the $`\text{N}\overline{\text{N}}`$ threshold and specific decay properties. 
This new approach was encouraged by evidence from many intriguing experimental investigations in the 70’s, which also stimulated a very interesting activity in the quark model: exotic multiquark configurations were studied, followed by glueballs and hybrid states, more generally all “non-$`\text{q}\overline{\text{q}}`$” mesons which will be extensively discussed at this conference. Closer to the idea of quasi-nuclear baryonium are the meson–meson molecules. Those were studied mostly by particle physicists, while $`\text{N}\overline{\text{N}}`$ states remained more the domain of interest of nuclear physicists, due to the link with nuclear forces. ### 2.2 The $`𝐆`$-parity rule In QED, it is well known that the amplitude of $`\mu ^+e^+`$ scattering, for instance, is deduced from the $`\mu ^+e^-`$ one by the rule of $`C`$ conjugation: the contribution from one-photon exchange ($`C=-1`$) flips the sign, that of two photons ($`C=+1`$) is unchanged, etc. In short, if the amplitude is split into two parts according to the $`C`$ content of the $`t`$-channel reaction $`\mu ^+\mu ^-\to e^+e^-`$, then $`\mathcal{M}(\mu ^+e^+)=\mathcal{M}_++\mathcal{M}_-,`$ $`\mathcal{M}(\mu ^+e^-)=\mathcal{M}_+-\mathcal{M}_-.`$ (1) The same rule can be formulated for strong interactions and applied to relate $`\overline{\text{p}}\text{p}`$ to pp, as well as $`\overline{\text{n}}\text{p}`$ to np. However, as strong interactions are invariant under isospin rotations, it is more convenient to work with isospin eigenstates, and the rule becomes the following. If the NN amplitude of $`s`$-channel isospin $`I`$ is split into $`t`$-channel exchanges of $`G`$-parity $`G=+1`$ and exchanges with $`G=-1`$, the former contribute exactly the same to the $`\text{N}\overline{\text{N}}`$ amplitude of the same isospin $`I`$, while the latter change sign. This rule is often expressed in terms of one-pion exchange or $`\omega `$-exchange having an opposite sign in $`\text{N}\overline{\text{N}}`$ with respect to NN, while $`\rho `$ or $`ϵ`$ exchange contribute with the same sign. 
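As a toy illustration of the bookkeeping implied by the rule, the sketch below flips the sign of $`G=-1`$ exchanges ($`\pi `$, $`\omega `$) while keeping $`G=+1`$ exchanges ($`\rho `$, $`ϵ`$) unchanged; the numerical strengths are invented placeholders, not a fitted potential:

```python
# Toy bookkeeping of the G-parity rule (coupling strengths below are
# invented placeholders, not a fitted one-boson-exchange potential).
MESON_G_PARITY = {"pi": -1, "rho": +1, "omega": -1, "epsilon": +1}

def nnbar_contributions(nn_contributions):
    """Flip the sign of G = -1 exchanges when going from NN to N-Nbar."""
    return {m: MESON_G_PARITY[m] * v for m, v in nn_contributions.items()}

# In NN, attractive epsilon exchange and repulsive omega exchange
# partially cancel; after the G-parity flip they add coherently, so the
# N-Nbar potential is, on average, deeper than the NN one.
nn = {"epsilon": -30.0, "omega": +25.0}   # illustrative strengths
nnbar = nnbar_contributions(nn)
assert sum(nn.values()) == -5.0      # partial cancellation in NN
assert sum(nnbar.values()) == -55.0  # coherent attraction in N-Nbar
```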
It should be underlined, however, that the rule is valid much beyond the one-boson-exchange approximation. For instance, a crossed diagram with two pions being exchanged contributes with the same sign to NN and $`\text{N}\overline{\text{N}}`$. ### 2.3 Properties of the NN potential Already in the early 70’s, a fairly decent understanding of long- and medium-range nuclear forces was achieved. First, the tail is dominated by the celebrated Yukawa term, one-pion exchange, which is necessary to reproduce the peripheral phase-shifts at low energy as well as the quadrupole deformation of the deuteron . At intermediate distances, pion exchange, even when supplemented by its own iteration, does not provide enough attraction. It is necessary to introduce a spin-isospin blind attraction; otherwise one hardly achieves binding of the known nuclei. This was called $`\sigma `$-exchange or $`ϵ`$-exchange, sometimes split into two fictitious narrow mesons to mimic the large width of this meson, which results in a variety of ranges. The true nature of this meson has been extensively discussed in the session chaired by Lucien Montanet at this Workshop. Refined models of nuclear forces describe this attraction as due to two-pion exchanges, including the possibility of strong $`\pi \pi `$ correlation, as well as the excitation of nucleon resonances in the intermediate states. The main conceptual difficulty is to avoid double counting when superposing $`s`$-channel type of resonances and $`t`$-channel type of exchanges, a problem known as “duality”. To describe the medium-range nuclear forces accurately, one also needs some spin-dependent contributions. For instance, the P-wave phase-shifts with quantum numbers $`{}_{}{}^{2S+1}\text{L}_{J}^{}={}_{}{}^{3}\text{P}_{0}^{}`$, $`{}_{}{}^{3}\text{P}_{1}^{}`$ and $`{}_{}{}^{3}\text{P}_{2}^{}`$, dominated at very low energy by pion exchange, exhibit different patterns as energy increases. 
Their behaviour is typical of the spin-orbit forces mediated by vector mesons. This is why $`\rho `$-exchange and, to a lesser extent, $`\omega `$-exchange cannot be avoided. Another role of $`\omega `$-exchange is to moderate somewhat the attraction due to two-pion exchange. By no means, however, can it account for the whole repulsion which is observed at short distances, and which is responsible for the saturation properties in heavy nuclei and nuclear matter. In the 70’s, the short-range NN repulsion was treated empirically by cutting off or regularising the Yukawa-type terms due to meson-exchanges and adding some ad-hoc parametrization of the core, adjusted to reproduce the S-wave phase-shifts and the binding energy of the deuteron. Needless to say, dramatic progress in the description of nuclear forces has been achieved in recent years. On the theory side, we understand, at least qualitatively, that the short-range repulsion is due to the quark content of each nucleon. This is similar to the repulsion between two Helium atoms: due to the Pauli principle, the electrons of the first atom tend to expel the electrons of the second atom. On the phenomenological side, accurate models such as the Argonne potential are now used for sophisticated nuclear-structure calculations. ### 2.4 Properties of the $`\text{N}\overline{\text{N}}`$ potential What occurs if one takes one of the NN potentials available in the 70’s, such as the Paris potential or one of the many variants of the one-boson-exchange models , and applies to it a $`G`$-parity transformation? The resulting $`\text{N}\overline{\text{N}}`$ potential exhibits the following properties: 1) $`ϵ`$ (or equivalent) and $`\omega `$ exchanges, which partially cancel each other in the NN case, now add up coherently. This means that the $`\text{N}\overline{\text{N}}`$ potential is, on the average, deeper than the NN one. 
As the latter is attractive enough to bind the deuteron, a rich spectrum can be anticipated for $`\text{N}\overline{\text{N}}`$. 2) The channel dependence of NN forces is dominated by a strong spin-orbit potential, especially for $`I=1`$, i.e., proton–proton. This is seen in the P-wave phase-shifts, as mentioned above, and also in nucleon–nucleus scattering or in detailed spectroscopic studies. The origin lies in coherent contributions from vector exchanges ($`\rho `$, $`\omega `$) and scalar exchanges (mainly $`ϵ`$) to the $`I=1`$ spin-orbit potential. Once the $`G`$-parity rule has changed some of the signs, the spin-orbit potential becomes moderate, in both $`I=0`$ and $`I=1`$ cases, but one observes a very strong $`I=0`$ tensor potential, due to coherent contributions of pseudoscalar and vector exchanges . This property is independent of any particular tuning of the coupling constants and thus is shared by all models based on meson exchanges. ### 2.5 Uncertainties on the $`\text{N}\overline{\text{N}}`$ potential Before discussing the bound states and resonances in the $`\text{N}\overline{\text{N}}`$ potential, it is worth recalling some limits of the approach. 1) There are cancellations in the NN potential. If a component of the potential is sensitive to a combination $`g_1^2-g_2^2`$ of the couplings, then a model with $`g_1`$ and $`g_2`$ both large can be roughly equivalent to another where they are both small. But these models can substantially differ for the $`\text{N}\overline{\text{N}}`$ analogue, if it probes the combination $`g_1^2+g_2^2`$. 2) In the same spirit, the $`G`$-parity content of the $`t`$-channel is not completely guaranteed, except for the pion tail. In particular, the effective $`\omega `$ exchange presumably incorporates many contributions besides some resonating three-pion exchange. 
3) The concept of NN potential implicitly assumes the 6-quark wave function is factorised into two nucleon-clusters $`\mathrm{\Psi }`$ and a relative wave-function $`\phi `$, say $$\mathrm{\Psi }(\stackrel{}{r}_1,\stackrel{}{r}_2,\stackrel{}{r}_3)\mathrm{\Psi }(\stackrel{}{r}_4,\stackrel{}{r}_5,\stackrel{}{r}_6)\phi (\stackrel{}{r}).$$ (2) Perhaps the potential $`V`$ governing $`\phi (\stackrel{}{r})`$ mimics the delicate dynamics that should properly be expressed in a multichannel framework. One might then be afraid that in the $`\text{N}\overline{\text{N}}`$ case, the distortion of the incoming bags $`\mathrm{\Psi }`$ could be more pronounced. In this case, the $`G`$-parity rule should be applied for each channel and for each transition potential separately, not at the level of the effective one-channel potential $`V`$. 4) It would be very desirable to probe our theoretical ideas on the long- and intermediate-distance $`\text{N}\overline{\text{N}}`$ potential by detailed scattering experiments, with refined spin measurements to filter out the short-range contributions. Unfortunately, only a fraction of the possible scattering experiments have been carried out at LEAR , and uncertainties remain. The available results are, however, compatible with meson-exchange models supplemented by annihilation. The same conclusion holds for the detailed spectroscopy of the antiproton–proton atom . ### 2.6 $`\text{N}\overline{\text{N}}`$ spectra The first spectral calculations based on explicit $`\text{N}\overline{\text{N}}`$ potentials were rather crude. Annihilation was first omitted to get a starting point, and then its effects were discussed qualitatively. This means the real part of the potential was taken as given by the $`G`$-parity rule, and regularised at short distances by an empirical cut-off. Once this procedure is accepted, the calculation is rather straightforward. 
One must simply take care to handle properly the strong mixing of $`L=J-1`$ and $`L=J+1`$ components in natural parity states, due to tensor forces, especially for isospin $`I=0`$ . The resulting spectra have been discussed at length in Refs. . Of course, the number of bound states and their binding energies increase when the cut-off leaves more attraction in the core, so no detailed quantitative prediction was possible. Nevertheless, a few qualitative properties remain when the cut-off varies: the spectrum is rich, in particular in the sector with isospin $`I=0`$ and natural parity corresponding to the partial waves $`{}_{}{}^{3}\text{P}_{0}^{}`$, $`{}_{}{}^{3}\text{S}_{1}^{}{}_{}{}^{3}\text{D}_{1}^{}`$, $`{}_{}{}^{3}\text{P}_{2}^{}{}_{}{}^{3}\text{F}_{2}^{}`$, corresponding to $`J^{PC}I^G=0^{++}0^+`$, $`1^{--}0^-`$, $`2^{++}0^+`$, respectively. The abundant candidates for “baryonium” in the data available at this time made this quasi-nuclear baryonium approach plausible . As already mentioned, annihilation was first neglected. Shapiro and his collaborators insisted on the short-range character of annihilation and therefore claimed that it should not distort the spectrum much. Other authors acknowledged that annihilation should be rather strong, to account for the observed cross-sections, but should affect mostly the S-wave states, whose binding relies on the short-range part of the potential, and not too much the $`I=0`$, natural parity states, which experience long-range tensor forces. This was perhaps too optimistic a viewpoint. For instance, an explicit calculation using a complex optical potential fitting the observed cross-section showed that no $`\text{N}\overline{\text{N}}`$ bound state or resonance survives annihilation. In Ref. , Myhrer and Thomas used a brute-force annihilation. 
It was then argued that maybe annihilation is weaker, or at least has more moderate effects on the spectrum, if one accounts for 1) its energy dependence: it might be weaker below threshold, since the phase-space for pairs of meson resonances is more restricted. It was even argued that part of the observed annihilation (the most peripheral part) in scattering experiments comes from transitions from $`\text{N}\overline{\text{N}}`$ scattering states to a $`\pi `$ meson plus a $`\text{N}\overline{\text{N}}`$ baryonium, which in turn decays. Of course, this mechanism does not apply to the lowest baryonium. 2) its channel dependence: annihilation is perhaps less strong in a few partial waves. This, however, should be checked by fitting scattering and annihilation data. 3) its intricate nature. Probably a crude optical model approach is sufficient to account for the strong suppression of the incoming antinucleon wave function in scattering experiments, but too crude for describing baryonium. Coupled-channel models have thus been developed (see, e.g., Ref. and references therein). It turns out that in coupled-channel calculations, it is less difficult to accommodate simultaneously large annihilation cross sections and relatively narrow baryonia. ## 3 Multiquark baryonium At the time when several candidates for baryonium were proposed, the quasi-nuclear approach, inspired by the deuteron described as a NN bound state, was seriously challenged by a direct quark picture. Among the first contributions, there is the interesting remark by Jaffe that $`\text{q}^2\overline{\text{q}}^2`$ S-wave states are not that high in the spectrum, and might even challenge P-wave $`\text{q}\overline{\text{q}}`$ to describe scalar or tensor mesons. From the discussions at this Conference, it is clear that the debate is still open. 
It was then pointed out that orbital excitations of these states, of the type $`(\text{q}^2)`$–$`(\overline{\text{q}}^2)`$, have preferential coupling to $`\text{N}\overline{\text{N}}`$. Indeed, simple rearrangement into two $`\text{q}\overline{\text{q}}`$ is suppressed by the orbital barrier, while the string can break into an additional $`\text{q}\overline{\text{q}}`$ pair, leading to $`\text{q}^3`$ and $`\overline{\text{q}}^3`$. Chan and collaborators went a little further and speculated about possible internal excitations of the colour degree of freedom. When the diquark is in a colour $`\overline{3}`$ state, they obtained a so-called “true” baryonium, basically similar to the orbital resonances of Jaffe. However, if the diquark carries a colour 6 state (and the antidiquark a colour $`\overline{6}`$), then the “mock-baryonium”, which still hardly decays into mesons, is also reluctant to decay into N and $`\overline{\text{N}}`$, and thus is likely to be very narrow (a few MeV, perhaps). This “colour chemistry” was rather fascinating. A problem, however, is that the clustering into diquarks is postulated instead of being established by a dynamical calculation. (An analogous situation existed for orbital excitations of baryons: the equality of Regge slopes for meson and baryon trajectories is natural once one accepts that excited baryons consist of a quark and a diquark, the latter behaving as a colour $`\overline{3}`$ antiquark. The dynamical clustering of two of the three quarks in excited baryons was shown only in 1985 .) There has been a lot of activity on exotic hadrons meanwhile, though the fashion focused more on glueballs and hybrids. The pioneering bag model estimate of Jaffe and the cluster model of Chan et al. has been revisited within several frameworks and extended to new configurations such as “dibaryons” (six quarks), or pentaquarks (one antiquark, four quarks). 
The flavour degree of freedom plays a crucial role in building configurations with maximal attraction and possibly more binding than in the competing threshold. For instance, Jaffe pointed out that (uuddss) might be more stable than two separated (uds) , and that such a stable dibaryon is more likely in this strangeness $`S=2`$ sector than in the $`S=1`$ or $`S=0`$ ones. In the four-quark sector, configurations like $`(\text{Q}\text{Q}\overline{\text{q}}\overline{\text{q}})`$ with a large mass ratio $`m(\text{Q})/m(\overline{\text{q}})`$ are expected to resist spontaneous dissociation into two separated $`(\text{Q}\overline{\text{q}})`$ mesons (see, e.g., and references therein). For the Pentaquark, the favoured configurations $`(\text{Q}\overline{\text{q}}^4)`$ consist of a very heavy quark associated with light or strange antiquarks . ## 4 Multiquark states vs. $`\text{N}\overline{\text{N}}`$ states An obvious question is whether the picture of two hadrons interacting by exchanging mesons is more or less realistic than a direct approach using quark dynamics. One cannot give a general answer, as it depends on the type of binding one eventually gets for the state. In the limit of strong binding, a multiquark system can be viewed as a single bag where quarks and antiquarks interact directly by exchanging gluons. For a multiquark close to its dissociation threshold, we have more often two hadrons experiencing their long-range interaction. Such a state is called an “hadronic molecule”. There have been many discussions on such molecules , $`\text{K}\overline{\text{K}}`$, $`\text{D}\overline{\text{D}}`$ or $`\text{B}\text{B}^{}`$. In particular, pion-exchange, due to its long range, plays a crucial role in achieving the binding of some configurations. In this respect, it is clear that the baryonium idea has been very inspiring. ## Acknowledgments I would like to thank S. Narison for the very stimulating atmosphere of this Conference, and S.U. Chung and L. 
Montanet for enjoyable discussions on exotic hadrons.
# Precision Timing of Two Anomalous X-Ray Pulsars ## 1 Introduction The nature of anomalous X-ray pulsars (AXPs) has been a mystery since the discovery of the first example (1E 2259+586) nearly 20 years ago (Fahlman & Gregory (1981)). The properties of AXPs can be summarized as follows (see also Mereghetti & Stella (1995); van Paradijs, Taam, & van den Heuvel (1995); Gotthelf & Vasisht (1998)): they exhibit X-ray pulsations in the range $`\sim `$5–12 s; they have pulsed X-ray luminosities in the range $`10^{34}`$–$`10^{35}`$ erg s<sup>-1</sup>; they spin down regularly within the limited timing observations available, with some exceptions; their X-ray luminosities are much greater than the rate of loss of rotational kinetic energy inferred from the observed spin-down; they have spectra that are characterized by thermal emission with $`kT\sim 0.4`$ keV, with evidence for a hard tail in some sources; they are found in the plane of the Galaxy; and two of the six certain members of the class appear to be located at the geometric centers of apparent supernova remnants (Fahlman & Gregory (1981); Vasisht & Gotthelf (1997)). Soft gamma repeaters also exhibit AXP-like pulsations in quiescence (e.g. Kouveliotou et al. (1998)). Mereghetti & Stella (1995) suggested that AXPs are accreting from a low mass companion. However, this model has become increasingly difficult to reconcile with observations. The absence of Doppler shifts even on short time scales (e.g. Mereghetti, Israel, & Stella (1998)), the absence of a detectable optical/IR companion or accretion disk (see Mereghetti & Stella (1995)), the apparent associations with supernova remnants, the apparent steady spin down within the limits of current observations (e.g. Gotthelf, Vasisht, & Dotani (1999)), and AXP spectra that are very different from those of known accreting sources, all argue against an accretion origin for the X-rays. 
Recently, it has been argued that the AXPs are young, isolated, highly magnetized neutron stars or “magnetars” (Thompson & Duncan (1996); Heyl & Hernquist (1997)). Evidence for this is primarily the inferred strength of the surface dipolar magnetic field required to slow the pulsar down in vacuo: $`10^{14}`$–$`10^{15}`$ G. The spin-down ages in this model, inferred assuming small birth spin periods, are in the range of $`\sim `$8–200 kyr. This suggested youth is supported by the two apparent supernova remnant associations. Additional circumstantial supporting evidence comes from the AXPs’ location close to the Galactic plane, consistent with their being isolated neutron stars near their birth place, as well as from interpreting apparent deviations from spin-down as glitches (Heyl & Hernquist (1999)) similar to those seen in radio pulsars (e.g. Kaspi et al. (1992)). Recently, deviations from simple spin-down have been suggested, under the magnetar hypothesis, to be due to radiative precession, originating in the asphericity of the neutron star produced by the strong magnetic field (Melatos (1999)). One way to test both the magnetar and accretion models is through timing observations. The spin down of some AXPs has been monitored by considering the measured frequency at individual epochs (e.g. Baykal et al. (1998); Gotthelf, Vasisht, & Dotani (1999)). However, those measurements have been sparse and are only marginally sensitive to spin irregularities on time scales of weeks to months, relevant to glitches or accretion torque fluctuations. Further, high-precision determination of the spin evolution over a long baseline is necessary to look for “timing noise” as is seen in many young radio pulsars (e.g. Arzoumanian et al. (1994); Kaspi et al. (1994); Lyne (1996)), to obtain a reliable measurement of a braking index, and to search for precession (Melatos (1999)). Whether such high precision is possible to achieve with AXP timing has not, until now, been established. 
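The field strengths and spin-down ages quoted above follow from the standard vacuum magnetic-dipole estimates $`B\approx 3.2\times 10^{19}\sqrt{P\dot{P}}`$ G and $`\tau =P/(2\dot{P})`$. The sketch below evaluates them for illustrative AXP-like parameters, not the measured values of the two sources in this paper:

```python
SEC_PER_YEAR = 3.156e7

def dipole_field_gauss(P, Pdot):
    """Surface dipole field from vacuum magnetic-dipole spin-down,
    B ~ 3.2e19 * sqrt(P * Pdot) gauss, with P in s and Pdot in s/s."""
    return 3.2e19 * (P * Pdot) ** 0.5

def characteristic_age_yr(P, Pdot):
    """Spin-down age tau = P / (2 Pdot), assuming a small birth period."""
    return P / (2.0 * Pdot) / SEC_PER_YEAR

# Illustrative AXP-like values (assumptions for the sketch only):
B = dipole_field_gauss(10.0, 1e-11)      # P = 10 s, Pdot = 1e-11 s/s
tau = characteristic_age_yr(10.0, 1e-11)
assert 1e14 <= B <= 1e15   # magnetar-strength field
assert 8e3 <= tau <= 2e5   # within the quoted ~8-200 kyr range
```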
Here we report on X-ray monitoring observations made with the Rossi X-ray Timing Explorer (RXTE) in which, for the first time, we determine high-precision spin parameters using long-term phase-coherent timing of two AXPs. The sources, 1RXS J170849.0$`-`$400910 (Sugizaki et al. (1997); Israel et al. (1999)) and 1E 2259+586 (Baykal & Swank (1996); Parmar et al. (1998); Baykal et al. (1998)) have periods of 11 s and 7 s, respectively. The RXTE AXP monitoring project is part of a larger effort to time coherently several AXPs. Results for other sources will be presented elsewhere.

## 2 Observations and Results

Our observations were made using the RXTE Proportional Counter Array (PCA) (Jahoda et al. (1996)). The detector consists of five identical multi-anode proportional counter units (PCUs), each containing a front propane anticoincidence layer followed by several xenon/methane layers. The detector operates in the 2–60 keV range, with a total effective area of $`\sim `$6500 cm<sup>2</sup> and a 1° field of view. In addition to the standard data modes, data were collected in the GoodXenonwithPropane mode, which records the arrival time (1 $`\mu `$s resolution) and energy (256-channel resolution) of every unrejected xenon event as well as all of the propane layer events. To maximize the sensitivity to the targets, which have soft spectra, we restricted the analysis to unrejected events in the top xenon layer of each PCU and chose an optimal energy range for each source: absolute channels 6–14 (2.5–5.4 keV) for 1RXS J170849.0$`-`$400910 and absolute channels 6–24 (2.5–9.1 keV) for 1E 2259+586. (These channel-to-energy conversions are for RXTE gain epoch 3, from 1996 April 15 to 1999 March 22, averaged over the five PCUs.) The observations were reduced using MIT-developed software for handling raw spacecraft telemetry packet data.
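As an illustration of this reduction step, binning a list of already-barycentered event times into a uniform time series can be sketched as follows. The event list and durations are synthetic, not RXTE data; only the 31.25 ms bin width is taken from the text.

```python
import numpy as np

def bin_events(event_times, t_start, t_stop, dt):
    """Bin event arrival times (s) into counts per bin of width dt (s)."""
    n_bins = int(np.ceil((t_stop - t_start) / dt))
    edges = t_start + dt * np.arange(n_bins + 1)
    counts, _ = np.histogram(event_times, bins=edges)
    return counts

# synthetic events over 10 s, binned at 31.25 ms as for 1E 2259+586
rng = np.random.default_rng(0)
events = np.sort(rng.uniform(0.0, 10.0, size=1000))
series = bin_events(events, 0.0, 10.0, 0.03125)
assert series.sum() == 1000   # no events lost in the binning
assert len(series) == 320     # 10 s / 31.25 ms bins
```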
Data from the different PCUs were merged and binned at 62.5 ms and 31.25 ms resolution for 1RXS J170849.0$`-`$400910 and 1E 2259+586, respectively. The data were then reduced to Barycentric Dynamical Time (TDB) at the solar system barycenter using the source positions in Table 1 and the JPL DE200 solar system ephemeris (Standish et al. (1992)) and stored on disk as one time series per observing epoch. Our strategy for attempting phase-coherent timing of these sources made use of standard radio pulsar techniques. Pulse phase at any time $`t`$ can be expressed as $$\varphi (t)=\varphi (t_0)+\nu (t-t_0)+\frac{1}{2}\dot{\nu }(t-t_0)^2+\cdots ,$$ (1) where $`t_0`$ is a reference epoch, and $`\nu `$ and $`\dot{\nu }`$ are the spin frequency and its time derivative. Phase-coherent timing amounts to counting all pulses over the entire observing span. To achieve this, uncertainties in the first-guess spin parameters $`\nu `$ and $`\dot{\nu }`$ must be sufficiently small that the observed and predicted arrival times differ by only a fraction of the period. To achieve this goal, we observed each source at two closely spaced epochs (i.e. within one day), then at a third epoch several days later. This spacing was chosen to determine an initial $`\nu `$, by absolute pulse numbering, of sufficient precision to predict $`\varphi `$ for the next observation roughly one month later. Subsequent monitoring was done at roughly one month intervals. Once phase connection was achieved with the $`\sim `$6 months of monitoring data, we also included public RXTE archival data. For 1RXS J170849.0$`-`$400910, the total data set consists of 19 arrival times obtained between 1998 January 13 and 1999 May 26. For 1E 2259+586, we have 33 arrival times obtained between 1996 September 29 and 1999 May 12. Our procedure included the following steps.
First, the barycentered, binned time series for the closely spaced set of three observations were folded at the best-estimate period determined via Fourier transform, using a unique reference epoch. The folded pulse profiles were cross-correlated with a high-signal-to-noise template in the Fourier domain and the phase offsets were recorded. We suppressed high-order harmonics in the pulse profile using a frequency-domain filter to avoid contamination by bin-to-bin Poisson fluctuations. A precise $`\nu `$ was then determined by demanding that an integer number of pulses occur between each observation. Profiles were then re-folded, still with respect to a fixed epoch, using the improved $`\nu `$. After each new observation, this process was repeated, also including the effect of $`\dot{\nu }`$. Phase residuals were examined to verify that there were no missed pulses, then fit with a quadratic function to determine the optimal $`\nu `$ and $`\dot{\nu }`$. Uncertainties on measured pulse phases were determined using Monte Carlo simulations. We verified this procedure and its results by extracting absolute average pulse arrival times in TDB at the solar system barycenter from the optimally folded profiles, and using the TEMPO pulsar timing software package (http://pulsar.princeton.edu/tempo), in common use in radio pulsar timing. Best fit $`\nu `$ and $`\dot{\nu }`$ for each source are given in Table 1. These values were measured with $`\ddot{\nu }`$ fixed at zero. Corresponding arrival time residuals are shown in Figures 1 and 2. In both cases, the RMS residual is $`\sim `$0.01$`P`$, where $`P=1/\nu `$. We also tried fitting for $`\ddot{\nu }`$; the results are given in Table 1. For 1E 2259+586, the fitted $`\ddot{\nu }`$ is consistent with zero; we provide a $`3\sigma `$ upper limit.
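The absolute pulse-numbering step in this procedure can be sketched numerically. The frequency and epoch spacings below are invented round numbers, not the measured parameters of either source; the point is only that a coarse first-guess $`\nu `$ is refined by demanding an integer number of cycles between observations.

```python
import numpy as np

nu_true = 0.0909090                      # Hz, an ~11 s pulsar (made up)
t = np.array([0.0, 0.4, 5.0]) * 86400.0  # two close epochs, one days later (s)
phi_obs = (nu_true * t) % 1.0            # 'measured' fractional pulse phases

nu_guess = 0.0909                        # coarse estimate, e.g. from an FFT

# integer cycle count between the two closely spaced epochs
n1 = round(nu_guess * (t[1] - t[0]) - (phi_obs[1] - phi_obs[0]))
nu_1 = (n1 + phi_obs[1] - phi_obs[0]) / (t[1] - t[0])

# the refined nu is now good enough to number pulses out to the third epoch
n2 = round(nu_1 * (t[2] - t[0]) - (phi_obs[2] - phi_obs[0]))
nu_2 = (n2 + phi_obs[2] - phi_obs[0]) / (t[2] - t[0])
assert abs(nu_2 - nu_true) < 1e-9        # phase connection achieved
```

The bookkeeping only works when the frequency error times the epoch spacing stays well below half a cycle, which is why the observing strategy starts from closely spaced epochs.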
For 1RXS J170849.0$`-`$400910, the fitted $`\ddot{\nu }`$ is marginally significant at the 4$`\sigma `$ level; however, a fit omitting only the first point reduces the significance to 2.6$`\sigma `$. We therefore quote the current best-fit value in parentheses only; further timing observations will decide if the observed $`\ddot{\nu }`$ is truly significant.

## 3 Discussion

Using a very simple spin-down model, we have maintained phase coherence for 1RXS J170849.0$`-`$400910 and 1E 2259+586 with phase residuals of only $`\sim `$1%, comparable to or smaller than those measured for most radio pulsars (e.g. Arzoumanian et al. (1994)). This demonstrates that these AXPs are extremely stable rotators. This stability is consistent with the magnetar hypothesis because isolated rotating neutron stars are expected to spin down with great regularity, as is seen in the radio pulsar population. We can compare our pulse ephemerides with past period measurements to see whether there has been a deviation from a simple spin-down law. For 1RXS J170849.0$`-`$400910, only two previous period measurements have been reported (Sugizaki et al. (1997); Israel et al. (1999)). The pulse parameters listed in Table 1, extrapolated to the epochs of the previous observations, agree with the published values within uncertainties. Thus, the spin-down has been regular for at least 1.4 yr prior to the commencement of our observations. For 1E 2259+586, spin frequencies have been measured occasionally since 1978 (see Baykal & Swank 1996 and references therein). Figure 3 shows the differences between previously measured spin frequencies and those predicted by our timing ephemeris (Table 1). The error bars represent the published one standard deviation measurement uncertainties. Our measured $`\dot{\nu }`$ is not consistent with the long-term $`\dot{\nu }`$: all observed frequencies were significantly larger than predicted by the extrapolation of the current $`\nu `$ and $`\dot{\nu }`$.
A least squares fit to the data shown in Figure 3 gives $`\mathrm{\Delta }\dot{\nu }=1.328(9)\times 10^{-15}`$ Hz s<sup>-1</sup>, though the linear fit is poor because of apparent short-time-scale fluctuations. Thus, the current value of $`\dot{\nu }`$, measured over the past 2.6 yr, is smaller than that of the long-term trend by $`\sim 10`$%. Melatos (1999) suggests that the large magnetic field inferred in the magnetar model should result in significant deviations from sphericity of the neutron star, with the principal axis misaligned with the spin axis. Under such circumstances, the star undergoes radiative precession with period $`\sim `$10 yr. Given the epoch and duration of our observations of 1E 2259+586, the manifestation of such precession is completely covariant with $`\dot{\nu }`$. However, the implied deviation of $`\dot{\nu }`$ from the long-term trend is consistent with the Melatos (1999) prediction, though smaller by a factor $`\sim `$3.5. An unambiguous test of this model can be provided by periodic timing residuals on a time scale of a decade. Note that the change in $`\dot{\nu }`$ that we observe for 1E 2259+586 relative to the long-term trend is not consistent with those observed after glitches in radio pulsars, in which the absolute magnitude of the post-glitch $`\dot{\nu }`$ is larger than the pre-glitch value (Shemar & Lyne (1996)). Radio pulsars, especially young ones, exhibit deviations from simple spin-down laws which appear to be a result of random processes (Cordes & Helfand (1980)). These deviations are not physically understood and are commonly referred to as “timing noise” (see Lyne (1996) for a review). The measured $`\ddot{\nu }`$’s for 1RXS J170849.0$`-`$400910 and 1E 2259+586 (Table 1) can be compared with those of radio pulsars. Noise level has been quantified by Arzoumanian et al. (1994) by the statistic $`\mathrm{\Delta }_8\equiv \mathrm{log}\left(|\ddot{\nu }|t^3/6\nu \right),`$ for $`t=10^8`$ s.
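As a quick numerical check of this statistic, the expression can be evaluated directly. The spin parameters below are representative round numbers for an ~11 s rotator, not the measured values from Table 1.

```python
import math

def delta8(nu, nuddot, t=1.0e8):
    """Arzoumanian et al. statistic: Delta_8 = log10(|nuddot| t^3 / (6 nu))."""
    return math.log10(abs(nuddot) * t**3 / (6.0 * nu))

# an ~11 s pulsar with |nuddot| ~ 1e-26 Hz s^-2 (illustrative values):
d8 = delta8(nu=0.09, nuddot=1.0e-26)
assert -1.8 < d8 < -1.7   # a rather quiet rotator on this scale
```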
We find $`\mathrm{\Delta }_8\simeq -1.7`$ and $`<-0.5`$ for 1RXS J170849.0$`-`$400910 and 1E 2259+586, respectively. These values place these objects clearly among the radio pulsar population on the $`\mathrm{\Delta }_8`$–$`\dot{P}`$ plot of Arzoumanian et al. (1994), as was suggested for 1E 2259+586 and 1E 1048.1$`-`$5937 by Heyl & Hernquist (1997) on the basis of sparser, incoherent data. Torque noise in the accreting pulsar population would generally predict much larger values of $`\ddot{\nu }`$ (Bildsten et al. (1997), although see also Chakrabarty et al. (1997)). One caveat is that the 8 s accreting X-ray pulsar 4U 1626$`-`$67 is known to be in a close binary orbit with a low-mass companion, with orbital period 42 min (Middleditch et al. (1981); Chakrabarty (1998)). The X-ray pulsations show no Doppler shifts (Levine et al. (1988); Chakrabarty et al. (1997)), implying that we are viewing the orbit nearly face-on. Timing observations of 4U 1626$`-`$67 over $`\sim `$8 yr permit phase-coherent timing, except near one epoch where the spin-down rate abruptly changed sign (Chakrabarty et al. (1997)). The apparent change in spin-down rate we have detected for 1E 2259+586 is plausible as a change in accretion torque. However, if the X-rays observed for the two AXPs considered here had an accretion origin, the orbits must both be viewed face-on as well, which is unlikely. Furthermore, the much harder X-ray spectrum of 4U 1626$`-`$67 (Owens, Oosterbroek, & Parmar (1997)), which is consistent with other accreting sources, is very different from those of any of the known AXPs, including 1RXS J170849.0$`-`$400910 and 1E 2259+586. An alternative model for AXPs, that they are massive white dwarfs, also predicts regular spin down (Paczynski (1990)). In this model, the larger moment of inertia, and hence larger spin-down luminosity, accounts for the observed $`L_x`$. However, if the observed X-rays have a thermal origin, e.g.
emission from hot gas heated by positrons near the polar cap (Usov (1993)), the implied hot spot is implausibly small (Thompson & Duncan (1996)). Also, data obtained with the EGRET instrument aboard the Compton Gamma-Ray Observatory show (D. Thompson, personal communication) that the high-energy $`\gamma `$-ray flux from 1E 2259+586 is smaller than predicted in the white dwarf model (Usov (1993)). Furthermore, an observable supernova remnant, as is seen surrounding 1E 2259+586 (CTB 109), is not expected for a white dwarf. Note that the absence of a possibly associated supernova remnant is not evidence against an AXP being a young neutron star, given the limited observability of some remnants associated with very young pulsars (e.g. Braun, Goss, & Lyne (1989); Pivovaroff, Kaspi & Gotthelf (1999)). With the stability of the rotation of 1RXS J170849.0$`-`$400910 and 1E 2259+586 established, the door is now open for unambiguously testing the magnetar model. In particular, although neither source has glitched during our observations, which implies glitch activity lower than in some comparably young radio pulsars (e.g. Kaspi et al. (1992); Shemar & Lyne (1996)), future glitches will be easily identified. Also, for 1RXS J170849.0$`-`$400910, a braking index of 3, expected if the source is spinning down due to magnetic dipole radiation, should be measurable in another year, although its measurement could be complicated by timing noise, precessional effects and/or glitches. The periodic precession predicted by Melatos (1999) could be clearly confirmed or ruled out in the next few years of RXTE monitoring, particularly if earlier observations made with other observatories can be incorporated. We thank E. Morgan, J. Lackey and G. Weinberg for help with software, A. Melatos for useful conversations, and D. Thompson for the EGRET analysis. VMK acknowledges use of NASA LTSA grant number NAG5-8063. This work was also supported in part by NASA grant NAG5-4114 to DC.
# Two Skyrmion Dynamics with 𝜔 Mesons

## I Introduction

In the large $`N_C`$ or classical limit of QCD, nucleons may be identified as classical solitons of the pseudoscalar-isovector $`SU(N_f)`$ pion field. The simplest theory that manifests these solitons is the non-linear sigma model. However, the solitons of this theory are not stable against collapse. The first attempt to provide a stable theory was by Skyrme (long before QCD), who introduced a fourth order term. This term does indeed lead to stabilized solitons that are called skyrmions, and there is a vast body of work on their properties and on how to quantize them. Unfortunately the fourth order term introduces numerical instabilities that make complex dynamical calculations nearly impossible. It is possible to stabilize the non-linear sigma model without the fourth order, or Skyrme term as it is called, by coupling the baryon current to the $`\omega `$ meson field. This not only provides stability, but does so in the context of more reasonable physics. It is also possible to stabilize the solitons by introducing a $`\rho `$ meson field, with a gauge coupling, or with both the $`\omega `$ and $`\rho `$, but the $`\rho `$ adds a great deal to the numerical complications. Hence in this first, exploratory work, we stabilize with the $`\omega `$ only. We continue to refer to the solitons as skyrmions. The Skyrme approach, either in its fourth order or $`\omega `$ stabilized form, has much to recommend it as a model of low energy strong interaction physics. This low energy or long wave length domain is notoriously difficult to describe in the context of standard QCD. The Skyrme approach includes chiral symmetry, baryons (and therefore also antibaryons), pions, the one pion exchange potential, and, with quantization, nucleons and deltas.
The idea of having nucleons arise naturally from an effective theory of pions and vector mesons is especially attractive as a path from QCD to such effective Lagrangians is better understood. It has been shown that the Skyrme model can give a good account, with very few parameters, of the low energy properties of nucleons . In the baryon number two system, the Skyrme approach can describe the principal features of the nucleon-nucleon static potential . Most of these problems have also been successfully studied in more traditional nuclear physics forms. One problem that has not yielded easily to traditional physics approaches and is out of the reach of standard lattice QCD is low energy annihilation. This is a problem ideally suited to the Skyrme approach. We have already shown that a general picture of post-annihilation dynamics including branching ratios can be obtained from the Skyrme approach . The nucleon-antinucleon potential can be as well . What remains is a full modeling of annihilation from start to finish. The only attempts we know of to do that in the Skyrme model had numerical problems associated with the Skyrme term . We propose to study skyrmion dynamics using $`\omega `$ stabilization, thus avoiding the usual numerical problems. As a prelude to studying annihilation, we have studied scattering in the baryon number two system. It is that work we report here. In the Skyrme model the three components of the pion field, $`\pi _1,\pi _2,\pi _3`$, can be aligned with the spatial directions, $`x,y,z`$, providing a correspondence between the two spaces. This simple alignment is called a hedgehog. A rotation of the pion field with respect to space is called a grooming. The energy of a single free skyrmion is independent of grooming, but the interaction between two skyrmions depends critically, as we shall see, on their relative grooming. 
In this paper we present the results of calculations of skyrmion-skyrmion scattering (with $`\omega `$ meson stabilization) at medium energy for a variety of impact parameters and groomings. We find rich structure. Some of the channels have simple scattering, but some display radiation, scattering out of the scattering plane, orbiting and ultimately capture to a bound $`B=2`$ state. These calculations are numerically complex and require considerable computational resources and computing time, but they are numerically stable. We know of no other calculations of $`B=2`$ skyrmion scattering that show the phenomena we find. Furthermore our success here bodes well for extending the method to annihilation. In the next section we briefly review the model we use, presenting both the Lagrangian for the $`\omega `$ stabilized non-linear sigma model, and our equations of motion. In Section 3 we discuss our numerical strategies and methods. Section 4 presents our results, mostly in graphical form, and Section 5 deals with conclusions and outlook. The reader interested only in results can go directly to Sections 4 and 5.

## II Formalism

### A Model

Our starting point is the non-linear $`\sigma `$ model Lagrangian, $$\mathcal{L}_\sigma =\frac{1}{4}f_\pi ^2\mathrm{tr}\left(\partial _\mu 𝒰\partial ^\mu 𝒰^{\dagger }\right)+\frac{1}{2}m_\pi ^2f_\pi ^2\mathrm{tr}\left(𝒰-1\right),$$ (1) where the $`SU(2)`$ field $`𝒰`$ is parameterized by the three real pion fields $`\{\pi _k\}_{k=1,3}=\stackrel{}{\pi }`$, $$𝒰=\mathrm{exp}\left(i\stackrel{}{\tau }\cdot \stackrel{}{\pi }\right)=\mathrm{cos}\pi +i\mathrm{sin}\pi \left(\stackrel{}{\tau }\cdot \widehat{\pi }\right).$$ (2) (To avoid confusion, throughout this paper we reserve the plain letter $`\pi `$ for the pion field, and use the symbol $`\underset{¯}{\pi }=3.1415\dots `$ for the mathematical constant.) Here, $`\{\tau _k\}_{k=1,3}=\stackrel{}{\tau }`$ are the Pauli matrices (in flavor space).
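As a numerical sanity check of this parameterization (not part of the original derivation), one can verify that Eq. (2) always yields an SU(2) matrix, for any length and direction of the pion field:

```python
import numpy as np

# Pauli matrices
tau = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def U(pi_len, pihat):
    """U = cos(pi) I + i sin(pi) (tau . pihat), as in Eq. (2)."""
    tau_dot_n = sum(n * t for n, t in zip(pihat, tau))
    return np.cos(pi_len) * np.eye(2) + 1j * np.sin(pi_len) * tau_dot_n

u = U(1.2, np.array([0.6, 0.0, 0.8]))          # arbitrary pion field values
assert np.allclose(u @ u.conj().T, np.eye(2))  # unitarity (chiral condition)
assert np.isclose(np.linalg.det(u), 1.0)       # det U = 1, so U is in SU(2)
```

Since $`(\stackrel{}{\tau }\cdot \widehat{\pi })^2=1`$ for any unit $`\widehat{\pi }`$, unitarity holds automatically, which is what makes this choice of variables attractive for a long time evolution.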
We identify the baryon current with $$B^\mu =\frac{1}{24\underset{¯}{\pi }^2}ϵ^{\mu \nu \alpha \beta }\mathrm{tr}\left[\left(𝒰^{\dagger }\partial _\nu 𝒰\right)\left(𝒰^{\dagger }\partial _\alpha 𝒰\right)\left(𝒰^{\dagger }\partial _\beta 𝒰\right)\right].$$ (3) The full Lagrangian also contains the free $`\omega `$ and the interaction part, $$\mathcal{L}=\mathcal{L}_\sigma +\frac{3}{2}g\omega _\mu B^\mu -\frac{1}{2}\partial _\mu \omega _\nu \left(\partial ^\mu \omega ^\nu -\partial ^\nu \omega ^\mu \right)+\frac{1}{2}m^2\omega _\mu \omega ^\mu .$$ (4) We take $`f_\pi =62\mathrm{MeV}`$ following earlier work, and $`g=\frac{m}{\sqrt{2}f_\pi }`$ from VMD, where $`m=770\mathrm{MeV}`$ is the vector meson mass. This gives a skyrmion mass of $`791\mathrm{MeV}`$.

### B Choice of Dynamical Variables

The traditional approach to numerical simulations involving skyrmions uses the cartesian decomposition of the unitary matrix $`𝒰`$, $`𝒰=\mathrm{\Phi }𝐈_2+i\stackrel{}{\mathrm{\Psi }}\cdot \stackrel{}{\tau };\mathrm{\Phi }=\mathrm{cos}\pi ,\stackrel{}{\mathrm{\Psi }}=\widehat{\pi }\mathrm{sin}\pi `$ (5) The quadruplet of real numbers $`(\mathrm{\Phi },\stackrel{}{\mathrm{\Psi }})`$ is constrained by the unitarity condition $`𝒰𝒰^{\dagger }=𝐈_2`$, i.e., $`\mathrm{\Phi }^2+\stackrel{}{\mathrm{\Psi }}\cdot \stackrel{}{\mathrm{\Psi }}=1`$, also known as the chiral condition. The Lagrangian is usually written in terms of $`(\mathrm{\Phi },\stackrel{}{\mathrm{\Psi }})`$ and the chiral condition is imposed using a Lagrange multiplier field. The four coordinates of $`𝒰`$ are similar to the cartesian coordinates of a point in $`𝐑^4`$ confined to the surface of a hypersphere of unit radius. An attractive idea is to use an approach which ensures naturally the unitarity of $`𝒰`$ at all times. Such a method is used successfully in lattice QCD in the context of the hybrid Monte-Carlo algorithm. There, the dynamical variables are $`𝒰`$ itself and the Hermitian matrix $`\dot{𝒰}𝒰^{\dagger }`$. In the present work, we employ the parameterization of $`𝒰`$ that follows from (2).
We will use $`\pi `$ and $`\widehat{\pi }`$ and their time derivatives $`\dot{\pi },\dot{\widehat{\pi }}`$ as dynamical variables. This implements only in part the principle mentioned above, since the unit vector $`\widehat{\pi }`$ is subject to conditions similar to the four-dimensional vector $`(\mathrm{\Phi },\stackrel{}{\mathrm{\Psi }})`$. However, one has better geometrical intuition for the former. The connection to $`(\mathrm{\Phi },\stackrel{}{\mathrm{\Psi }})`$ is straightforward via (5). The Lagrangian in terms of $`\pi ,\widehat{\pi }`$ is $`\mathcal{L}`$ $`=`$ $`\mathcal{L}_\sigma +\mathcal{L}_{int}+\mathcal{L}_\omega `$ (6) $`\mathcal{L}_\sigma `$ $`=`$ $`{\displaystyle \frac{1}{2}}f_\pi ^2\left(\partial _\mu \pi \partial ^\mu \pi +\mathrm{sin}^2\pi \partial _\mu \widehat{\pi }\cdot \partial ^\mu \widehat{\pi }\right)+m_\pi ^2f_\pi ^2(\mathrm{cos}\pi -1)`$ (7) $`\mathcal{L}_{int}`$ $`=`$ $`{\displaystyle \frac{3g}{8\underset{¯}{\pi }^2}}ϵ^{\mu \nu \alpha \beta }\mathrm{sin}^2\pi \omega _\mu \partial _\nu \pi \left[\widehat{\pi }\cdot \left(\partial _\alpha \widehat{\pi }\times \partial _\beta \widehat{\pi }\right)\right]`$ (8) $`\mathcal{L}_\omega `$ $`=`$ $`-{\displaystyle \frac{1}{2}}\partial _\mu \omega _\nu \left(\partial ^\mu \omega ^\nu -\partial ^\nu \omega ^\mu \right)+{\displaystyle \frac{1}{2}}m_{vec}^2\omega _\mu \omega ^\mu .`$ (9)

### C Equations of Motion

We wish to obtain the Euler-Lagrange equations for $`\pi ,\widehat{\pi }`$. The problem is that $`\widehat{\pi }`$ is a unit vector, $`\widehat{\pi }\cdot \widehat{\pi }=1`$, so its components cannot be treated as true coordinates. One may still write down the Euler-Lagrange equations, by considering the action $`S={\displaystyle \int d^3x𝑑t\mathcal{L}(\pi (x,t),\widehat{\pi }(x,t),\omega _\alpha (x,t);\partial _\mu \pi (x,t),\partial _\mu \widehat{\pi }(x,t),\partial _\mu \omega _\alpha (x,t))}.`$ (10) The Lagrange equation for $`\pi `$ is obtained by requiring that $`S`$ be stationary with respect to a small variation $`\delta \pi (x,t)`$ and the corresponding variations $`\delta (\partial _\mu \pi )=\partial _\mu (\delta \pi )`$.
The variation of $`S`$, which has to vanish for any $`\delta \pi `$, is $`\delta S={\displaystyle \int d^3x𝑑t\left\{\frac{\partial \mathcal{L}}{\partial \pi }\delta \pi +\frac{\partial \mathcal{L}}{\partial (\partial _\mu \pi )}\delta (\partial _\mu \pi )\right\}}={\displaystyle \int d^3x𝑑t\left\{\frac{\partial \mathcal{L}}{\partial \pi }-\partial _\mu \left[\frac{\partial \mathcal{L}}{\partial (\partial _\mu \pi )}\right]\right\}\delta \pi }.`$ (11) Therefore the quantity in curly brackets in the last equality of (11) has to be identically zero, since $`\delta \pi `$ is completely arbitrary. Of course this leads exactly to the usual form of the Euler-Lagrange equations. We may repeat formally the same steps for $`\widehat{\pi }`$, leading to $`\delta S={\displaystyle \int d^3x𝑑t\left\{\frac{\partial \mathcal{L}}{\partial \widehat{\pi }}-\partial _\mu \left[\frac{\partial \mathcal{L}}{\partial (\partial _\mu \widehat{\pi })}\right]\right\}\cdot \delta \widehat{\pi }}.`$ (12) Here, partial differentiation with respect to the vector $`\widehat{\pi }`$ or its derivatives gives the vector obtained by differentiating with respect to the corresponding components of $`\widehat{\pi }`$ or its derivatives. If we want $`\delta S`$ to vanish for any $`\delta \widehat{\pi }`$, we do not need to require that the quantity in curly brackets vanish identically. This is because $`\delta \widehat{\pi }`$ is not completely arbitrary. Both $`\widehat{\pi }`$ and $`\widehat{\pi }^{\prime }=\widehat{\pi }+\delta \widehat{\pi }`$ have to be unit vectors. The variation of $`\widehat{\pi }\cdot \widehat{\pi }`$ must vanish, $`0=\delta \left(\widehat{\pi }\cdot \widehat{\pi }\right)=2\delta \widehat{\pi }\cdot \widehat{\pi }.`$ (13) In other words, $`\delta \widehat{\pi }(x,t)\perp \widehat{\pi }(x,t)`$ for any $`(x,t)`$.
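This perpendicularity is easy to check numerically; it is also the same tangent-space projection that the discretized scheme later uses to remove spurious components of the updated fields. A minimal sketch, with arbitrary test values:

```python
import numpy as np

def project_tangent(v, nhat):
    """Remove the component of v along the unit vector nhat."""
    return v - np.dot(v, nhat) * nhat

nhat = np.array([0.0, 0.6, 0.8])     # a unit 'pihat' direction
v = np.array([0.3, -0.1, 0.5])       # an unconstrained variation
dv = project_tangent(v, nhat)
assert abs(np.dot(dv, nhat)) < 1e-12         # delta-pihat is perpendicular
# the unit-length constraint is preserved to first order in the variation:
eps = 1e-4
assert abs(np.linalg.norm(nhat + eps * dv) - 1.0) < 1e-7
```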
Therefore, the necessary condition for the stationarity of $`S`$ is simply $`\left\{{\displaystyle \frac{\partial \mathcal{L}}{\partial \widehat{\pi }}}-\partial _\mu \left[{\displaystyle \frac{\partial \mathcal{L}}{\partial (\partial _\mu \widehat{\pi })}}\right]\right\}\parallel \widehat{\pi }.`$ (14) The above statement is easily turned into an equation by subtracting the projection of the left hand side onto $`\widehat{\pi }`$, $`\left\{{\displaystyle \frac{\partial \mathcal{L}}{\partial \widehat{\pi }}}-\partial _\mu \left[{\displaystyle \frac{\partial \mathcal{L}}{\partial (\partial _\mu \widehat{\pi })}}\right]\right\}-\widehat{\pi }\left(\widehat{\pi }\cdot \left\{{\displaystyle \frac{\partial \mathcal{L}}{\partial \widehat{\pi }}}-\partial _\mu \left[{\displaystyle \frac{\partial \mathcal{L}}{\partial (\partial _\mu \widehat{\pi })}}\right]\right\}\right)=0.`$ (15) The Euler-Lagrange equations we obtain finally are $`\partial _\nu \partial ^\nu \pi `$ $`=`$ $`{\displaystyle \frac{1}{2}}\mathrm{sin}2\pi \partial _\mu \widehat{\pi }\cdot \partial ^\mu \widehat{\pi }-m_\pi ^2\mathrm{sin}\pi +{\displaystyle \frac{3g}{8\underset{¯}{\pi }^2f_\pi ^2}}ϵ^{\mu \nu \alpha \beta }\mathrm{sin}^2\pi \partial _\mu \omega _\nu \left[\widehat{\pi }\cdot \left(\partial _\alpha \widehat{\pi }\times \partial _\beta \widehat{\pi }\right)\right]`$ (16) $`\partial _\mu \partial ^\mu \widehat{\pi }`$ $`=`$ $`\widehat{\pi }\left(\widehat{\pi }\cdot \partial _\mu \partial ^\mu \widehat{\pi }\right)-{\displaystyle \frac{2\mathrm{cos}\pi }{\mathrm{sin}\pi }}\partial _\mu \pi \partial ^\mu \widehat{\pi }-{\displaystyle \frac{3g}{4\underset{¯}{\pi }^2f_\pi ^2}}ϵ^{\mu \nu \alpha \beta }\partial _\mu \omega _\nu \partial _\alpha \pi \left(\partial _\beta \widehat{\pi }\times \widehat{\pi }\right)`$ (17) $`\partial _\nu \partial ^\nu \omega ^\mu `$ $`=`$ $`-m_{vec}^2\omega ^\mu -{\displaystyle \frac{3g}{8\underset{¯}{\pi }^2}}ϵ^{\mu \nu \alpha \beta }\mathrm{sin}^2\pi \partial _\nu \pi \left[\widehat{\pi }\cdot \left(\partial _\alpha \widehat{\pi }\times \partial _\beta \widehat{\pi }\right)\right].`$ (18)

## III Computation

Our calculation is based on the equations of motion (16)–(18), using $`(\pi ,\widehat{\pi },\omega _\mu ;\dot{\pi },\dot{\widehat{\pi }},\dot{\omega }_\mu )`$ as variables. This choice leads to two problems which have to be tackled by our discretization scheme. First, we have a coordinate singularity at $`\pi =0`$ and $`\pi =\underset{¯}{\pi }`$.
Here, $`\widehat{\pi }`$ is not defined. The situation is similar to that of angles in polar coordinates when the radius vanishes. While the equations of motion are correct for any non-zero $`\pi `$, in the vicinity of the coordinate singularity small changes of $`\stackrel{}{\pi }`$ are translated into very large variations of $`\widehat{\pi }`$. A scheme based on the equations of motion (16)–(18) breaks down around the ‘poles’ of the hypersphere due to the large discretization errors involved. One way out is to rotate the coordinate system in $`SU(2)`$ so as to avoid the problem regions. We introduce a new field $`𝒱`$ obtained by rotation with a fixed $`U_0\in SU(2)`$, $`𝒰=U_0𝒱;𝒱=\mathrm{exp}\left(i\stackrel{}{\tau }\cdot \stackrel{}{\sigma }\right);U_0=\mathrm{exp}\left(i\stackrel{}{\tau }\cdot \stackrel{}{\mathrm{\Theta }}\right).`$ (19) Substituting (19) into our Lagrangian (1)–(4), we find that only the pion mass term is modified since everywhere else $`𝒰`$ is combined with $`𝒰^{\dagger }`$ or its derivatives so $`U_0`$ drops out. The equations of motion can then be derived in an identical fashion to (16)–(18).
We cite them here for the sake of completeness, $`\partial _\nu \partial ^\nu \sigma `$ $`=`$ $`{\displaystyle \frac{1}{2}}\mathrm{sin}2\sigma \partial _\mu \widehat{\sigma }\cdot \partial ^\mu \widehat{\sigma }-m_\pi ^2\left(\mathrm{cos}\mathrm{\Theta }\mathrm{sin}\sigma +(\widehat{\mathrm{\Theta }}\cdot \widehat{\sigma })\mathrm{sin}\mathrm{\Theta }\mathrm{cos}\sigma \right)+{\displaystyle \frac{3g}{8\underset{¯}{\pi }^2f_\pi ^2}}ϵ^{\mu \nu \alpha \beta }\mathrm{sin}^2\sigma \partial _\mu \omega _\nu \left[\widehat{\sigma }\cdot \left(\partial _\alpha \widehat{\sigma }\times \partial _\beta \widehat{\sigma }\right)\right]`$ (20) $`\partial _\mu \partial ^\mu \widehat{\sigma }`$ $`=`$ $`\widehat{\sigma }\left(\widehat{\sigma }\cdot \partial _\mu \partial ^\mu \widehat{\sigma }\right)-{\displaystyle \frac{2\mathrm{cos}\sigma }{\mathrm{sin}\sigma }}\partial _\mu \sigma \partial ^\mu \widehat{\sigma }-{\displaystyle \frac{3g}{4\underset{¯}{\pi }^2f_\pi ^2}}ϵ^{\mu \nu \alpha \beta }\partial _\mu \omega _\nu \partial _\alpha \sigma \left(\partial _\beta \widehat{\sigma }\times \widehat{\sigma }\right)-m_\pi ^2\left(\widehat{\mathrm{\Theta }}-\widehat{\sigma }(\widehat{\mathrm{\Theta }}\cdot \widehat{\sigma })\right)\mathrm{sin}\mathrm{\Theta }\mathrm{sin}\sigma `$ (21) $`\partial _\nu \partial ^\nu \omega ^\mu `$ $`=`$ $`-m_{vec}^2\omega ^\mu -{\displaystyle \frac{3g}{8\underset{¯}{\pi }^2}}ϵ^{\mu \nu \alpha \beta }\mathrm{sin}^2\sigma \partial _\nu \sigma \left[\widehat{\sigma }\cdot \left(\partial _\alpha \widehat{\sigma }\times \partial _\beta \widehat{\sigma }\right)\right].`$ (22) In our calculation, we first rotate the field to be updated, together with the surrounding fields, so that the corresponding $`(\sigma ,\widehat{\sigma })`$ is comfortably away from the coordinate singularities. Then we apply the discretized equations of motion (see below) derived from (20)–(22) and finally we rotate back the updated fields. We stress that switching to rotated fields and (20)–(22) amounts to a mere changing of the coordinate system. The content of the equations is identical irrespective of the coordinate system.
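A minimal numerical illustration of this rotation, for a field pointing along $`\tau _1`$ and with made-up angles: a configuration near the pole $`\pi \approx 0`$ is rewritten as $`𝒰=U_0𝒱`$ with $`𝒱`$ on the equator, while $`𝒰`$ itself is unchanged.

```python
import numpy as np

tau1 = np.array([[0, 1], [1, 0]], dtype=complex)

def su2(angle):
    """exp(i angle tau1) = cos(angle) I + i sin(angle) tau1."""
    return np.cos(angle) * np.eye(2) + 1j * np.sin(angle) * tau1

Umat = su2(0.01)                  # pion field dangerously close to pi = 0
U0 = su2(0.01 - np.pi / 2)        # fixed rotation chosen to land on the equator
V = U0.conj().T @ Umat            # V = U0^dagger U
sigma = np.arccos(np.clip(V[0, 0].real, -1.0, 1.0))  # radial angle of V
assert abs(sigma - np.pi / 2) < 1e-12   # V sits at sigma = pi/2
assert np.allclose(U0 @ V, Umat)        # the physics, U = U0 V, is unchanged
```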
The difference is in the discretization, which in the vicinity of the ‘poles’ leads to large errors which are avoided in the rotated frame. The choice of $`\stackrel{}{\mathrm{\Theta }}`$ is largely arbitrary. For simplicity we choose it so that $`\stackrel{}{\sigma }`$ is always on the ‘equator’ of the $`SU(2)`$ hypersphere. The second important numerical issue follows from the unit vector nature of $`\widehat{\pi }`$, $`\widehat{\pi }\cdot \widehat{\pi }=1`$. (We will continue to use $`\stackrel{}{\pi }`$ when referring to the pion field, even though we perform our updates using the rotated pion fields $`\stackrel{}{\sigma }`$.) This and the corresponding constraints on $`\dot{\widehat{\pi }}`$ and $`\ddot{\widehat{\pi }}`$, namely $`\widehat{\pi }\cdot \dot{\widehat{\pi }}=0`$, $`\ddot{\widehat{\pi }}\cdot \widehat{\pi }+\dot{\widehat{\pi }}\cdot \dot{\widehat{\pi }}=0`$, are consistent with the equations of motion but are violated by the discretized equations. Therefore they need to be imposed by projecting out the spurious components of $`\dot{\widehat{\pi }}`$ and $`\ddot{\widehat{\pi }}`$ at every step in the update. There are similar conditions for the spatial derivatives, which also have to be taken into account. This is of course a rather technical point, but it is a reminder of the fact that our approach does not completely conform to the idea of using a minimal set of coordinates. That would be achieved for instance by trading $`\widehat{\pi }`$, which describes a direction in three dimensions, for the two angles that define that direction. That would have introduced another set of coordinate singularities to be avoided. The opposite strategy would be to apply the same considerations we followed for $`\widehat{\pi }`$ in deriving the Euler-Lagrange equations and imposing the unit length constraint, to the four-dimensional unit vector $`(\mathrm{\Phi },\stackrel{}{\mathrm{\Psi }})`$.
That also remains an open possibility, along with using unitary and Hermitian matrices as dynamical variables and, of course, the traditional path employing Lagrange multipliers. We have not found it necessary to employ any of these notions at this time. Besides the fact that it leads to a reasonably stable calculation, our choice of variables has the advantage of fairly simple equations of motion. The decomposition of $`\stackrel{}{\pi }`$ into its length and unit vector follows quite naturally, and leads to great simplification in the much more complicated case of including a vector-isovector $`\rho `$ meson field. The main idea of our numerical scheme is the following. The discretized time evolution for the fields themselves is quite straightforward, given the knowledge of their time-derivatives or ‘velocities’. The evolution of the velocities is the core of our procedure. The velocities are assigned to half-integer timesteps. Thus, one timeslice contains the fields at a given time and the velocities half a timestep earlier. The time evolution of the velocities follows from solving the equations of motion for the second time derivatives. The latter are written in terms of the retarded velocity (at time $`(t-\mathrm{\Delta }t/2)`$) and the one at $`(t+\mathrm{\Delta }t/2)`$. The rest of the equations of motion contain the local fields and their spatial derivatives (all defined at time $`t`$), but they also contain the velocities, as is evident from (20)–(22). We approximate the velocities at $`t`$ by the average of their values at $`(t\pm \mathrm{\Delta }t/2)`$. This leads to an implicit equation for the updated velocities. We solve the implicit equation iteratively to cubic order in $`\mathrm{\Delta }t`$, which exceeds the order of accuracy in which it was derived. Finally, the new velocities are used to update the fields. Use of a flux-conservative form for the equations of motion might have led to improved accuracy.
However, this way we can use the known conservation laws, in particular energy and baryon number conservation, as checks on the accuracy of our simulations. The long-term stability of the calculation is greatly improved by the feedback mechanism built into our implicit scheme. Calculations performed using an explicit scheme involving a second timeslice shifted by half a timestep give virtually identical results as far as the time evolution of the fields and energy and baryon number conservation are concerned. Some of the calculations presented below – those not involving complicated physical situations – were in fact performed with this earlier version of our code. When pushed beyond $`25\mathrm{fm}/\mathrm{c}`$ in time (a few thousand timesteps), those calculations typically develop instabilities which quickly build up to destructive levels. By contrast, the feedback calculations ran without exception up to 5000 timesteps or more without becoming unstable. This is achieved without adding an explicit numerical viscosity term to our equations. The algorithm described above was implemented on a three-dimensional grid of points. The physical size of the box used for the calculations shown in this paper was $`18\times 10\times 10`$ fermi. We take advantage of the four-fold spatial symmetry of the problem; therefore our mesh covers only one quadrant of the physical box. We used a fixed lattice spacing of $`0.1\mathrm{fm}`$; therefore our lattice has $`91\times 101\times 51`$ regular points. In addition to these, we have a layer of unequally spaced points on each outside wall of the box simulating an absorbing boundary. Thus our full mesh has $`101\times 121\times 61`$ points. 
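Monitoring a conserved quantity such as the total energy amounts to integrating the corresponding density over the mesh; restricting the sum to a sphere around the origin (as is done below to separate radiated energy) is a simple variant. A schematic sketch with hypothetical array names, assuming a regular mesh and ignoring the absorbing boundary layer:

```python
import numpy as np

def integrated_energy(eps, spacing, center=None, radius=None):
    """Integrate an energy-density array eps (MeV/fm^3) on a regular
    mesh with the given spacing (fm).  If center and radius are given,
    only grid points inside that sphere contribute; otherwise the
    whole box is summed.  Returns the energy in MeV."""
    if radius is None:
        return eps.sum() * spacing**3
    nx, ny, nz = eps.shape
    x = (np.arange(nx) * spacing - center[0])[:, None, None]
    y = (np.arange(ny) * spacing - center[1])[None, :, None]
    z = (np.arange(nz) * spacing - center[2])[None, None, :]
    inside = x**2 + y**2 + z**2 <= radius**2
    return eps[inside].sum() * spacing**3
```

Tracking the drift of this number over thousands of timesteps is then a direct check of energy conservation.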
An indication of the better intrinsic stability of the model Lagrangian employed here is the fact that our calculations are fairly stable out to $`50\mathrm{fm}/\mathrm{c}`$ in time with timesteps of $`0.1`$ to $`0.4\mathrm{fm}/\mathrm{c}`$, corresponding to a CFL ratio of $`0.1`$ to $`0.4`$. This is larger than in the early works of Verbaarschot et al. ($`0.05`$) and of the Caltech group ($`0.013`$ to $`0.075`$), and comparable to more recent calculations, which employ fourth-order spatial differences, in contrast with our second-order spatial difference scheme. In the absence of radiation, the total energy is conserved to within $`3\%`$ for typically $`20\mathrm{fm}/\mathrm{c}`$. Due to the emission of radiation, which eventually leaves the box, it is harder to assess the degree of energy conservation for those processes which involve (quasi)bound states and have to be followed for a longer time. We can estimate the amount of energy being radiated by calculating the energy contained in a sphere of given radius ($`2\mathrm{fm}`$) around the origin, large enough to contain most of the field. The radiated energy first leaves this sphere and then the box. Excluding loss through radiation identified in this manner, the total energy is conserved to within $`4\%`$ in all the calculations presented below, including the long runs ($`40`$ to $`60\mathrm{fm}/\mathrm{c}`$) involving bound states. A check of consistency is to reverse the arrow of time at some point in the calculation, which should lead back close to the initial state. We performed this check successfully on one case with nontrivial dynamics but little radiation. We construct the initial state by first numerically solving the field equations in the spherically symmetric ansatz. This gives us the radial functions for the spherically symmetric static skyrmion (hedgehog). We then place a boosted hedgehog configuration in the simulation box. 
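For reference, the spherically symmetric hedgehog ansatz that defines these radial functions is the standard one (a reminder of the textbook form, not specific to this model's parameter values): writing the chiral field in terms of a radial profile $`F(r)`$,

```latex
U(\vec r) \;=\; \exp\!\bigl(i\,\vec\tau\cdot\hat r\,F(r)\bigr)
        \;=\; \cos F(r) \;+\; i\,\vec\tau\cdot\hat r\,\sin F(r),
\qquad F(0)=\pi,\quad F(\infty)=0,
```

with the boundary conditions on $`F`$ ensuring unit baryon number; boosting then amounts to Lorentz-contracting this configuration along the direction of motion.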
The skyrmion is boosted towards its mirror image, which is implemented via the boundary conditions at the symmetry walls of the box; recall that the mesh covers only one quadrant of the physical region being simulated. One can select all the relative groomings discussed below, as well as the corresponding skyrmion-antiskyrmion configurations, simply by choosing the appropriate signs in the boundary conditions for the various field components. The problem of the overlap of the tails of the two skyrmions in the initial state is not easy to solve exactly. Instead, we chose an initial configuration with a large separation ($`9\mathrm{fm}`$) between the two centers, so that the overlap becomes truly negligible. Our calculations have been performed on clusters of $`6`$ to $`12`$ IBM SP-2 computers. Depending on the number of processors and the timestep, one $`20\mathrm{fm}/\mathrm{c}`$ calculation takes from half a day to two days to complete. ## IV Results For skyrmion-skyrmion scattering with finite impact parameter, there are four distinct relative groomings. The first is no grooming at all; this is the hedgehog-hedgehog channel. The second grooming we consider is a relative rotation by $`\pi `$ about an axis perpendicular to the scattering plane (the plane formed by the incident direction and the impact parameter). The third is a relative rotation by $`\pi `$ about the direction of motion; for zero impact parameter this channel is known to be repulsive. The fourth grooming consists of a relative rotation by $`\pi `$ about the direction of the impact parameter. In the limit of zero impact parameter the second and fourth groomings become equivalent; they then correspond to a rotation by $`\pi `$ about an axis normal to the incident direction, and this channel is known to be attractive. 
All of our scattering studies are done at a relative velocity of $`\beta =v/c=0.5`$, which corresponds to a center-of-mass kinetic energy of 230 MeV. In order to keep our study relatively small, we have not examined the effect of varying the incident energy. As the main means of presentation we chose energy-density contour plots; the baryon-density plots are very similar to the energy-density plots. ### A Head-on collisions Let us begin, for simplicity, with the case of zero impact parameter. For the hedgehog-hedgehog channel (HH) and the repulsive channel (rotation by $`\pi `$ about the incident direction), symmetry dictates that the scattering can only be exactly forward (the skyrmions passing through one another) or exactly backward (the skyrmions bouncing back off each other). We find that the scattering is in fact backward in both the HH and the repulsive channels. We illustrate this type of scattering in Figure 1 with contour plots of the energy density in the $`xy`$ plane for the repulsive channel. (Throughout the paper we adopt the following convention: the direction of the initial motion is $`x`$, the impact parameter – if nonzero – points in the $`y`$ direction, and $`z`$ is the direction perpendicular to the initial plane of motion, $`xy`$. For zero impact parameter the choice of $`y`$ and $`z`$ is of course arbitrary.) Unless otherwise specified we keep the same choice of energy contour levels for all similar plots that follow, namely, the first contour at $`5\mathrm{M}\mathrm{e}\mathrm{V}/\mathrm{fm}^3`$ and the others equally spaced at $`100\mathrm{M}\mathrm{e}\mathrm{V}/\mathrm{fm}^3`$. The head-on scattering in the repulsive channel holds no surprises. The process is reminiscent of two tennis balls bouncing off each other. The skyrmions start compressing as soon as they touch. They slow down, compress and stop, then expand and move off in the direction opposite to the one they came in along. 
The collision is practically elastic. We were unable to detect any energy loss through radiation, and the velocities of the topological centers before and after the collision are practically the same. For the hedgehog-hedgehog channel we again find backward scattering, with an evolution very similar to that shown in Figure 1; hence we show no separate figure. In the attractive channel (rotation by $`\pi `$ about an axis normal to the incident direction) the skyrmions scatter at $`90^o`$ along an axis perpendicular to the plane formed by the incident direction and the grooming axis. This right-angle scattering is well known. It proceeds through the axially symmetric $`B=2`$ configuration. The skyrmions lose their individual identity in this process. In Figure 2 we show the energy contours for a head-on collision in the attractive channel. At the midpoint of this process, shown in the fourth frame of Figure 2, one can clearly see the torus-shaped configuration. It is situated in the plane defined by the incoming and outgoing directions, perpendicular to the grooming axis. The bulk of the energy density avoids the center of the doughnut as it shifts from the incoming direction to the one perpendicular to it. To illustrate this point better we plot the middle frames of Figure 2 in perspective in Figure 3. (We remind the reader that we plot level contours of the total energy density in the median plane of our three-dimensional system, as opposed to three-dimensional surfaces of constant energy.) In Figure 4 we plot the coordinates of the center of one skyrmion (defined as the point where the pion-field amplitude is exactly $`\pi `$) versus time. Initially, the only nonzero coordinate is $`x`$; after the right-angle scattering, $`y`$ is the only nonzero coordinate. Straight lines indicate uniform motion, which is the case both before and after the collision. 
The slope of the $`y`$ line is noticeably smaller than that of the $`x`$ line before the collision; in other words, the outgoing velocity is smaller. This is due to a genuine physical process, radiation, rather than to a numerical artifact, since the total energy of the system shows no decrease. In some of the processes we discuss below, there are stronger examples of slowing down, accompanied by detectable radiation. As we have seen above, the scattering direction is determined by the incident direction and the grooming direction. When the grooming direction is normal to the incident direction, a torus is formed in the plane normal to the grooming direction. In the presence of a nonzero impact parameter, it matters whether the grooming direction is parallel or normal to that impact parameter. In both cases there is a tendency toward the formation of a torus normal to the grooming direction, but the ultimate evolution of the scattering is different in the two cases. We will refer to the case where the grooming direction is normal to the impact parameter as attractive (1), and to the case where they are parallel as attractive (2). Let us now look at scattering in each grooming as a function of impact parameter. In each of the four cases we study impact parameters of $`0.4\mathrm{fm}`$, $`0.8\mathrm{fm}`$, $`1.6\mathrm{fm}`$, and $`2.8\mathrm{fm}`$. In all cases we find that the scattering at $`2.8\mathrm{fm}`$ is “routine”, and hence we do not go to larger impact parameters. ### B Simple scattering: the hedgehog-hedgehog and attractive (1) channels We begin with the two groomings that lead to simple dynamics. Consider first the hedgehog-hedgehog channel. In Figure 5 we plot the trajectories of the topological center of one of the skyrmions for zero impact parameter and for each of the four non-zero impact parameters. Here and in the other similar plots, we define the center as the point where the norm of the pion field reaches the value $`|\stackrel{}{\pi }|=\pi `$. 
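The center-finding definition just given can be sketched as a simple grid search; a schematic with hypothetical array names (the real two-skyrmion field has two such points, so in practice the search would be restricted to a neighborhood of each skyrmion):

```python
import numpy as np

def topological_center(pi_norm, spacing):
    """Locate the grid point where the pion-field amplitude is closest
    to pi (the skyrmion's topological center) and return its physical
    coordinates in fm on a regular mesh with the given spacing."""
    idx = np.unravel_index(np.argmin(np.abs(pi_norm - np.pi)),
                           pi_norm.shape)
    return np.array(idx) * spacing
```

Interpolating between neighboring grid points would give a smoother trajectory than this nearest-point version.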
This is a good indicator of the global movement of a skyrmion, especially when the two colliding objects are somewhat separated. (The topological center does not necessarily coincide with the center of mass: while the latter is insensitive to internal oscillations of the skyrmion, the topological center oscillates.) We cut off the trajectories in Figure 5 after $`t=17\mathrm{fm}/\mathrm{c}`$. We see normal scattering trajectories corresponding to a repulsive interaction; this channel is indeed known to be mildly repulsive. For the smallest impact parameter, $`b=0.4\mathrm{fm}`$, the scattering is in the backward direction (recall that for $`b=0`$ the scattering angle is $`180^o`$) and gradually turns forward with increasing impact parameter. This is exactly what one might expect, since this channel is the most similar to point-particle scattering. The interaction between the skyrmions is central in the HH channel, because it is independent of the direction of the relative position vector of the topological centers. The large-angle scattering at small impact parameters probes the interaction of the soft cores of the skyrmions. We illustrate this in Figure 6 with energy contour plots from the $`b=0.4\mathrm{fm}`$ case. Just as in the head-on case, the skyrmions compress as they touch. They slow down, then accelerate and proceed in the outgoing direction. Internal oscillations of the skyrmions can be observed after the collision; therefore this process is not entirely elastic. The other channel that exhibits only simple scattering is the attractive (1) grooming, where the grooming direction is perpendicular to the plane of motion. We have studied the same five impact parameters here. In Figure 7 we show the trajectories of the skyrmion centers for each of those impact parameters. For $`b=0`$ and this grooming, the scattering angle is $`90^o`$; for large impact parameters the scattering is nearly forward. 
For intermediate impact parameters we should accordingly see something in between, which is precisely what the figure shows. Note that in this channel, in contrast to the HH channel, the trajectories begin their scattering by curving toward the scattering center, as one would expect for an attractive channel. The attraction is seen to act practically unperturbed in the scattering with $`b=2.8\mathrm{fm}`$. Again we illustrate one scattering process in more detail. In Figure 8 we plot energy contours for the $`b=0.8\mathrm{fm}`$ scattering. The most important feature is the existence, in the third and fourth frames, of a configuration reminiscent of the doughnut-shaped intermediate state identified in the $`b=0`$ case for this grooming (Figures 2 and 3). We are further away from axial symmetry than in Figure 2, but the relatively large void in the middle is clearly identifiable. It is important to point out that even at this mutual distance of $`0.8\mathrm{fm}`$, the attractive interaction is strong enough to start forming the $`B=2`$ configuration. In the present case it is ripped apart by the momentum of the motion. As we shall see, that is not the case in all groomings. Another feature to mention is the presence of internal oscillations; they can be clearly seen in the last few frames of Figure 8, deforming the outgoing skyrmions. In the case of the $`b=1.6\mathrm{fm}`$ trajectory, where the attraction is too weak to lead to a $`B=2`$ configuration, the topological centers are attracted toward each other but then bounce back and oscillate transversally. ### C Orbiting, capture and radiation: the repulsive channel For the remaining two groomings, where the grooming direction lies in the scattering plane, the dynamics is more complex. Let us first consider grooming about the incident direction, referred to as the repulsive channel. For the smaller impact parameters, $`b=0.4\mathrm{fm}`$ and $`b=0.8\mathrm{fm}`$, we find a remarkable type of trajectory. 
The scattering begins repulsively, as we see in Figure 9, but as the skyrmions pass one another, they find themselves groomed by a rotation of $`\pi `$ about an axis normal to the line joining them. This is the most attractive configuration. Hence, as they pass one another they begin to attract. The two skyrmions, feeling this attraction, couple and begin to orbit. In order to make the motion clearer in Figure 9, we have decreased the spatial scale of our plot and increased the length of time shown on the orbiting trajectories, which are cut off at $`28.0\mathrm{fm}/\mathrm{c}`$. In Figure 10 we show energy-density contour plots for $`b=0.8\mathrm{fm}`$ to illustrate this point. Notice that the plots cover a longer time period than in the previous cases. As the skyrmions orbit, they radiate. This radiation carries off energy and the skyrmions couple more strongly. The radiation and coupling last for a long time. The first energy level we plot is $`5\mathrm{MeV}/\mathrm{fm}^3`$, comparable to the amplitude of the radiation. The first burst of radiation can be seen in the second and third frames of Figure 10; it appears to consist of the spinning off of a region of high local energy density. A later burst is seen in the last frame. These bursts accompany the orbiting movement of the now-bound skyrmions. The radiation takes away some of the angular momentum and a significant part of the energy. The total energy radiated is greater than the incident kinetic energy; hence the two skyrmions are now in a bound state. Thus for this choice of parameters ($`b=0.8\mathrm{fm}`$, $`v=0.5c`$, repulsive grooming) we have the phenomena of orbiting, radiation and capture. We expect that this $`B=2`$ system will eventually find its way to the axially symmetric, torus-shaped bound state at rest. To reach that state it will have to radiate more energy and shed the remaining angular momentum. We have followed the orbiting for a time of more than $`60`$ fm/c. 
We observed continued orbiting with slowly decreasing amplitude, as shown in Figure 11, and continuous energy loss through bursts of radiation. We are not able, numerically, to follow the state to the very end. We now have a bound $`B=2`$ configuration, and therefore should expect the appearance of a torus. The grooming in this case is about the $`x`$ axis; the torus should therefore appear in the $`yz`$ plane, and should rotate about the $`z`$ axis to carry the initial angular momentum. Energy contours in the $`yz`$ plane corresponding to three frames from Figure 10 are shown in Figure 12. We recognize the familiar pattern of the doughnut-shaped $`B=2`$ bound state. These snapshots correspond to moments when the doughnut is aligned with the $`yz`$ plane. The doughnut is strongly deformed, with the two skyrmions clearly distinguishable. This deformation is only slightly alleviated during the $`20\mathrm{fm}/\mathrm{c}`$ (and a full $`360^\mathrm{o}`$ rotation) between the first and the last frame of Figure 12. The appearance of the $`B=2`$ bound state is even clearer in the $`b=0.4\mathrm{fm}`$ case. The $`xy`$-plane energy contours are very similar to those of the $`b=0.8\mathrm{fm}`$ process. There is again considerable radiation early on, as illustrated in Figure 13. As can be seen from the trajectories in Figure 9, the attraction is strong in this case (once the skyrmions are past the initial repulsion), and the topological centers come very close. The $`yz`$-plane contour plot in Figure 14, showing the doughnut just after its formation, exhibits more axial symmetry than the corresponding plots in the $`b=0.8\mathrm{fm}`$ case. We conclude that we have an example of two skyrmions merging into a $`B=2`$ axially symmetric configuration. The angular momentum remaining after the initial radiation burst is carried by the rotation of the doughnut around the $`z`$ axis. 
The individual skyrmions lose their identity early on, and the movement of the topological centers always stays very close to the symmetry center. Periodically, they move out of the $`xy`$ plane in the $`z`$ direction; we interpret this as oscillations of the torus. We expect that eventually the kinetic energy and the angular momentum will be radiated away. In the $`b=1.6\mathrm{fm}`$ case the skyrmions first feel some repulsion, which turns to attraction as they pass. This results in small persistent transverse oscillations. For $`b=2.8\mathrm{fm}`$ there is only a very small attractive interaction. ### D Scattering out of the plane of motion: the attractive (2) channel The scattering in the last remaining channel, with grooming around the direction of the impact parameter, shows the most remarkable behavior of all. The corresponding trajectories are shown in Figure 15. Recall that for zero impact parameter, this grooming leads to right-angle scattering in a direction normal to the plane formed by the incident direction and the grooming axis. That direction is now normal to the scattering plane, which is the one that contains both the impact parameter and the incident momenta. One would usually say that, by angular momentum conservation, there cannot be scattering out of the scattering plane for finite impact parameter. However, we have meson radiation (mostly pion but also some $`\omega `$) that can carry off angular momentum, albeit inefficiently. For the impact parameter of $`0.4\mathrm{fm}`$ this is just what happens. This remarkable trajectory can be better understood by studying the Cartesian components of the topological center’s position as a function of time. We take the $`x`$ direction as the incident one, the impact parameter in the $`y`$ direction, and the normal to the scattering plane in the $`z`$ direction. 
The behavior of $`x`$, $`y`$, and $`z`$ as functions of time for the topological center of each of the skyrmions in the case of $`b=0.4\mathrm{fm}`$ is shown in Figure 16. The skyrmions meet, interact, and then go off normal to the scattering plane in the $`z`$ direction. They therefore disappear from the plot of trajectories in the $`xy`$ plane, Figure 15, at the point $`x=y=0`$, with their final-state trajectory along the $`z`$ axis having no impact parameter, as required by angular momentum conservation. Note that the slope of the $`z`$ trajectory in Figure 16 is less than that of the $`x`$ trajectory, reflecting the energy carried off by the radiation. The energy contour plots in Figure 17 show the skyrmions coming together, attempting to form a doughnut about a skewed axis in the $`xy`$ plane, and then flying apart in the $`z`$ direction. Figure 18 shows the last two configurations at higher resolution in energy; this reveals the meson-field radiation left behind in the scattering plane, which carries off the initial angular momentum. When we go to an impact parameter of $`0.8\mathrm{fm}`$, things change. The Cartesian coordinates for the $`b=0.8\mathrm{fm}`$ case are shown in Figure 19. As the skyrmions meet there is some curvature and then an attempt at uniform motion along $`z`$ at $`x=y=0`$. Now there is too much angular momentum for the field to carry away. The skyrmions try to go off normal to the scattering plane, but they have only a brief excursion in that direction while the meson field is radiating. The skyrmions then return to the scattering plane, but by now the field has carried off so much energy that they are bound, and they begin to orbit in the $`xy`$ plane, alternating with excursions in the $`z`$ direction of slowly decreasing amplitude. 
In Figure 15 we have truncated the trajectory of the $`b=0.8`$ fm case at the point where the skyrmions first leave the $`xy`$ plane. Presumably their final state would once again be the static torus. Between $`0.4\mathrm{fm}`$ and $`0.8\mathrm{fm}`$ there must be a critical impact parameter dividing the cases of skyrmions that escape normal to the scattering plane from those that are trapped in orbit in that plane. We are investigating this critical impact parameter and the nature of the solutions in its vicinity. Energy contour plots for the $`b=0.8\mathrm{fm}`$ impact parameter are shown in Figure 20. The first few frames are very similar to those in Figure 17, showing the skyrmions approaching, forming a doughnut practically in the $`yz`$ plane, and attempting to escape in the $`z`$ direction. As we see in Figure 21, there is considerable radiation at this time as the field tries to carry off the angular momentum. It cannot carry enough, and the skyrmions come back, first along $`z`$ and then through the doughnut into the $`xy`$ plane, avoiding the center. Note that both the $`z`$ turnaround and the subsequent $`x`$ turnaround come at no more than $`3.0`$ fm separation, as they must: at larger separations, the skyrmions would be out of effective force range and unable to come back. When the skyrmions return to the $`xy`$ plane, they have lost so much energy that they are bound. They begin to orbit, and then make another unsuccessful attempt to escape along $`z`$. As can be seen from the time-evolution plot, this sequence of orbiting of the topological centers in the $`xy`$ plane, alternating with excursions out of the plane in the $`z`$ direction, continues for a long time. We interpret this as follows. The two skyrmions have essentially merged into a $`B=2`$ configuration. The residual angular momentum forces the torus to rotate around the $`z`$ axis. 
The remainder of the momentum with which the two skyrmions came into this configuration (more precisely, its component pointing toward the center), while not enough to push the skyrmions out in the perpendicular direction, results in oscillations which deform the doughnut, as illustrated in Figure 22. It is this central component of the momentum that allows the skyrmions to escape in the $`b=0.4\mathrm{fm}`$ and head-on cases. We are not able to follow this cycle to completion, but believe that eventually the system will radiate enough energy and angular momentum to settle into the static, axially symmetric $`B=2`$ torus, as for the small impact parameters with repulsive grooming discussed in the previous section. For impact parameters of $`1.6\mathrm{fm}`$ and $`2.8\mathrm{fm}`$ in this channel, the scattering is unremarkable by comparison, as is seen in Figure 15. For the largest impact parameter we notice a medium-range repulsive interaction, just the opposite of the corresponding case in the repulsive grooming discussed above; the two groomings are interchanged at the point where the centers cross the $`x=0`$ plane. ## V Conclusion and Outlook The phenomenology of two-skyrmion scattering for various groomings, as it emerges from our qualitative study, may be summarized as follows. In the absence of grooming, the scattering process is almost elastic at the energy we considered. The kinematics is similar to that of point particles interacting via a central repulsive interaction. This is to be expected, since the interaction between two hedgehog configurations is central. We know that at higher velocities one expects to excite vibrational modes of the individual skyrmions; we do see modest indications of that. The head-on collision in the repulsive channel is quasi-elastic, similar to the hedgehog-hedgehog processes. 
The remaining collisions involving grooming of $`180^o`$ can be divided into two categories, depending on whether or not the impact parameter, $`b`$, is small enough for the formation of the $`B=2`$ (torus-shaped) bound state. If the impact parameter is large, the collisions are quasi-elastic, with a weak attractive or repulsive character depending on the grooming. If the impact parameter is small enough, the collision proceeds through a (sometimes deformed) doughnut configuration in the plane normal to the grooming axis, even if the final state is not bound. The cleanest example is the attractive (1) channel, with the grooming axis normal to the plane of motion. For zero impact parameter it is well known that a torus is formed in the $`xy`$ plane, the skyrmions lose their identity, and they fly out at right angles with respect to the incident direction and the grooming axis. For a nonzero impact parameter, a deformed torus still appears in the $`xy`$ plane, and the scattering angle decreases continuously from $`90^o`$ as $`b`$ increases. In the repulsive case (grooming around the direction of motion, $`x`$), the torus is formed close to the $`yz`$ plane. As the two incident skyrmions approach the $`y`$ axis (that of the impact parameter), they find themselves in an attractive configuration, since they are now groomed around an axis ($`x`$) perpendicular to the one ($`y`$) connecting them. They form the doughnut, initially in the plane perpendicular to the grooming axis. The configuration carries some of the initial angular momentum by rotating around the $`z`$ axis. The skyrmions do not tend to exit in the perpendicular direction, because they came in with very little momentum in the plane of the torus. In the attractive (1) and (2) cases, the initial momenta have a large component pointing toward the center of the torus. This momentum is channeled into the perpendicular direction, always leading to scattering in the attractive (1) case. 
In the attractive (2) case, with grooming around the impact-parameter direction $`y`$, the doughnut is formed close to the $`xz`$ plane. The $`90^o`$ scattering would therefore happen in the $`z`$ direction, but this is strongly limited by the necessity of shedding the angular momentum around $`z`$. Radiation provides a mechanism for this, and for a small enough impact parameter the skyrmions can escape in the $`z`$ direction. Otherwise they go into a rotating doughnut configuration. In this case, however, there is radiation and significant momentum in the plane of the torus, which also exhibits quadrupole oscillations. While our investigation is by no means complete, we did identify a significant number of distinct patterns for the outcome of skyrmion-skyrmion collisions. One may define a number of critical configurations which separate the different outcomes, even at fixed velocity. The picture might become even richer upon sweeping a range of incident velocities. The calculations of skyrmion-skyrmion scattering reported in the section above demonstrate two things. First, they show that the $`\omega `$ meson stabilizes the Skyrme model and makes it numerically tractable out to long times and through complex dynamical situations. Second, they show a rich mix of phenomena, including capture and orbiting, radiation, and excursions out of the scattering plane. Although each of these arises naturally in the model, they have not been seen or demonstrated before, nor have most of them even been suggested. They therefore add to the rich and often surprising mix of results in the Skyrme model. Since this is a model of low-energy QCD at large $`N_C`$, the new results give insight into aspects of QCD in the non-perturbative, long-wavelength or low-energy domain. This is a region in which our best hope for insight comes from effective theories like the Skyrme model. 
The success of these calculations also gives us confidence that the method can be carried over, with only simple changes, to the annihilation problem. In particular, the stability of the calculations, thanks both to $`\omega `$-meson stabilization and to a number of numerical strategies, suggests that the annihilation calculation will also be stable. For annihilation, meson-field (pion and omega) radiation in the final state is all there is, so the fact that we clearly see radiation in the skyrmion-skyrmion case is reassuring. Our work here suggests a number of further avenues. We have already discussed annihilation. It would also be interesting to explore the landscape of skyrmion-skyrmion scattering as a function of energy as well as of grooming and impact parameter, and to examine more closely the behavior of the model in the neighborhood of critical parameters where the scattering behavior changes abruptly. Also interesting would be to try to extract information about nucleon-nucleon scattering from the results for skyrmion-skyrmion scattering. We are investigating these questions. ## VI Acknowledgements We are grateful to Jac Verbaarschot and Yang Lu for many helpful discussions on both physics and numerical issues. We are indebted to Folkert Tangerman and Monica Holbooke for a number of suggestions regarding the calculation. Jac Verbaarschot is also acknowledged for a critical reading of the manuscript. This work was supported in part by the National Science Foundation. We are very grateful to Prof. R. Hollebeek for making available to us the considerable computing resources of the National Scalable Cluster Project Center at the University of Pennsylvania, which is also supported by the National Science Foundation.
# Kullback-Leibler and Renormalized Entropy: Applications to EEGs of Epilepsy Patients ## 1 Introduction Since Shannon’s classical works, information-theoretic concepts have found many applications in practically all fields of science. In particular, tools derived from information theory have been used to characterize the degree of randomness of time sequences, and to quantify the difference between two probability distributions. Indeed, there are a number of constructs which qualify as distances between two distributions. Although the Kullback-Leibler (K-L) relative entropy is not a distance in the mathematical sense (it is not symmetric), it plays a central role, as it has numerous applications and numerous physical interpretations. Another, seemingly independent, observable measuring the dissimilarity between two distributions was recently introduced in . This “renormalized entropy” was subsequently applied to various physiological time sequences, including heart beats and electroencephalograms (EEGs) recorded in patients with epilepsy. The relation between K-L and renormalized entropy, and their application to EEGs recorded in patients with epilepsy, is the subject of the present communication. Ever since the first recordings in the late ’20s, the EEG has been one of the most powerful tools in neurophysiology . An important application of EEGs in clinical practice is the diagnosis of epilepsy. Characteristic abnormal patterns help to classify epilepsies, to localize the epileptogenic focus, and eventually to predict seizures . About $`20\%`$ of patients suffering from focal epilepsies do not improve with antiepileptic medication and are therefore considered candidates for a surgical resection of the seizure-generating area. Successful surgical treatment of focal epilepsies requires exact localization of the seizure-generating area and its delineation from functionally relevant areas. 
Recording the patient’s spontaneous habitual seizures by means of long-term (several days), and in some cases intracranial, EEGs (i.e., with electrodes implanted within the skull) is currently considered the most reliable approach. Although EEG recordings have been in clinical use for more than half a century, conventional EEG analysis relies mostly on visual inspection or on linear methods such as the Fourier transform (see e.g. for a comprehensive description of Fourier analysis of EEGs). Particularly for the diagnosis of epilepsy, quantitative methods of analysis are needed to provide additional information (for a review of quantitative methods in EEG analysis, see e.g. ). It is precisely in this context that the authors of found renormalized entropy to be much more significant than any of the other methods they looked at. In the following we argue that renormalized entropy is very closely related to K-L entropy. Indeed, it is precisely a K-L entropy, although not between the two distributions one started out to compare. Nevertheless, we can relate renormalized entropy to the K-L entropy between these two distributions. Moreover, when extracting these measures from EEGs, we find both to be very similar. It seems indeed from these analyses that the standard K-L entropy is more useful than renormalized entropy. In the next section we recall Shannon and K-L entropies, and show how renormalized entropy is related to K-L entropy. In section 3 we present applications to seizure EEG data. In this section we also address several technical points concerning the implementation in the case of EEG data, and we discuss the importance of the results from a neurophysiological point of view. Finally, in section 4 we draw our conclusions. ## 2 Entropy measures We consider a discrete random variable having $`n`$ possible outcomes $`x_k(k=1,\mathrm{},n)`$ with respective probabilities $`p_k`$, satisfying $`p_k\ge 0`$ and $`\sum _{k=1}^np_k=1`$. 
The Shannon entropy of $`p`$ is defined as $$H[p]=-\underset{k}{\sum }p_k\mathrm{log}p_k.$$ (1) In the following we shall take $`k`$ as a frequency index and $`p_k`$ as a normalized spectral density, $$p_k=\frac{S(\omega _k)}{\sum _kS(\omega _k)}.$$ (2) Moreover, the spectrum will be estimated from gliding windows over a scalar (‘univariate’) time sequence $`x_n`$, $$S(\omega _k)=S_t(\omega _k)=\left[|X_t(\omega _k)|^2\right]_{smooth},$$ (3) where $`X_t(\omega _k)`$ is the discrete Fourier transform of $`x_n`$ taken over a window of length $`T`$ centered at time $`t`$ (see Sec. 3 for details), and the bracket $`[\cdot ]_{smooth}`$ indicates a local averaging over nearby frequencies. We should stress, however, that all results of the present section apply to any probability distribution. Shannon entropy is equal to $`0`$ for delta distributions, and positive otherwise. It can be interpreted as the average code length (measured in bits, if the logarithm in eq.(1) is taken with base 2) needed to encode a randomly chosen value of $`k`$ (randomly with respect to $`p`$). The essential point here is that the minimal (average) code length is obtained by codes which are optimal for a specific probability distribution – see e.g. the Morse code, which uses shorter codes for the more frequent letters. Let us now suppose we have two different probability distributions $`p=\{p_k\}`$ and $`q=\{q_k\}`$. We can then define the K-L (relative) entropy as $$K(p|q)=\underset{k}{\sum }p_k\mathrm{log}\frac{p_k}{q_k}.$$ (4) It is also positive and vanishes only if $`p_k\equiv q_k`$, thus measuring the degree of similarity between the two probability distributions. Notice, however, that it is in general not symmetric, $`K(p|q)\ne K(q|p)`$; therefore it is not a distance in the usual mathematical sense. 
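As a quick numerical illustration (our own sketch, not part of the original analysis; all function names are ours), eqs. (1), (2) and (4) amount to a few lines of Python:

```python
import numpy as np

def normalized_spectrum(power):
    """Turn a power spectrum S(w_k) into a probability distribution p_k (eq. 2)."""
    power = np.asarray(power, dtype=float)
    return power / power.sum()

def shannon_entropy(p):
    """Shannon entropy H[p] = -sum_k p_k log2 p_k (eq. 1), in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # convention: 0 log 0 = 0
    return -np.sum(p * np.log2(p))

def kl_entropy(p, q):
    """Kullback-Leibler entropy K(p|q) = sum_k p_k log2(p_k/q_k) (eq. 4), in bits."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = p > 0
    return np.sum(p[m] * np.log2(p[m] / q[m]))

flat = normalized_spectrum(np.ones(8))       # flat spectrum over 8 bins
peaked = normalized_spectrum(np.eye(8)[0])   # all power in a single bin
print(shannon_entropy(flat))                 # 3.0 bits (log2 of 8)
print(shannon_entropy(peaked))               # 0.0 (delta distribution)
print(kl_entropy(peaked, flat))              # 3.0; note K(p|q) != K(q|p) in general
```

The examples confirm the limiting cases stated in the text: a delta distribution has zero entropy, and a flat distribution over $`n`$ bins has the maximal entropy $`\mathrm{log}_2n`$.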
Its most important interpretation is the following: Assume that $`p`$ is the correct distribution, but the encoding is made using a code which would have been optimal (i.e., would have produced the shortest average code length) if the distribution were $`q`$ instead. Then, $`K(p|q)`$ measures the average excess of the code length (again measured in bits, if the logarithm is base 2) over the shortest code (which would have been based on $`p`$). But there are also several different interpretations in different contexts. For instance, mutual information can be considered as a K-L entropy with $`p`$ the true joint distribution and $`q`$ the product of the marginal distributions. Also, Boltzmann’s H theorem is most easily derived using K-L entropies . A supposedly different and independent distance measure between two distributions was introduced in . These authors called $`q`$ the ‘reference distribution’. They defined a ‘renormalized’ reference distribution $`\stackrel{~}{q}`$ as $$\stackrel{~}{q}_k=C[q_k]^\beta $$ (5) where $`C`$ and $`\beta `$ are uniquely fixed by demanding $$\underset{k}{\sum }\stackrel{~}{q}_k\mathrm{log}q_k=\underset{k}{\sum }p_k\mathrm{log}q_k$$ (6) and $$\underset{k}{\sum }\stackrel{~}{q}_k=1.$$ (7) Then they define ‘renormalized entropy’ as $$\mathrm{\Delta }H=H[p]-H[\stackrel{~}{q}]$$ (8) and show that it is negative definite, except when $`p\equiv q`$. When applying it to time-resolved spectra of several physiological time series, it is claimed in that $`\mathrm{\Delta }H`$ gives more significant results (e.g., shows more clearly the onset of an epileptic seizure ) than any other observable studied by these authors. 
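To make the renormalization concrete, here is a small numerical sketch (our own construction, not from the original): it finds $`\beta `$ by bisection, using the fact that the $`q^\beta `$-weighted mean of $`\mathrm{log}q`$ increases monotonically with $`\beta `$ (its derivative is a variance), and then checks the defining conditions (6)–(7) and the negative definiteness of $`\mathrm{\Delta }H`$ on an arbitrary example.

```python
import numpy as np

def H(p):
    """Shannon entropy of a distribution, in bits."""
    return -np.sum(p * np.log2(p))

def K(p, q):
    """Kullback-Leibler entropy K(p|q), in bits."""
    return np.sum(p * np.log2(p / q))

def renormalize(p, q):
    """Return q~_k = C q_k**beta satisfying eqs. (6) and (7), via bisection.

    The q**beta-weighted mean of log q is monotonically increasing in beta,
    so the root in beta is unique.
    """
    target = np.sum(p * np.log(q))                 # right-hand side of eq. (6)
    def mean_logq(beta):
        w = q ** beta
        return np.sum((w / w.sum()) * np.log(q))
    lo, hi = -1.0, 1.0
    while mean_logq(lo) > target:                  # expand the bracket if needed
        lo *= 2.0
    while mean_logq(hi) < target:
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mean_logq(mid) < target else (lo, mid)
    w = q ** (0.5 * (lo + hi))
    return w / w.sum()                             # eq. (7)

p = np.array([0.6, 0.3, 0.1])
q = np.array([0.2, 0.5, 0.3])
qt = renormalize(p, q)
dH = H(p) - H(qt)                                  # eq. (8)
assert dH < 0                                      # negative definite for p != q
assert abs(np.sum(qt * np.log2(q)) - np.sum(p * np.log2(q))) < 1e-9  # eq. (6)
assert abs(dH + K(p, qt)) < 1e-9                   # numerically, dH = -K(p|q~)
```

Since eq. (6) is linear in the logarithm, the base used in the root search (natural log here) is immaterial.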
We want to show now that: (i) the renormalized entropy is just the negative of the K-L entropy between $`p`$ and $`\stackrel{~}{q}`$, $$\mathrm{\Delta }H=-K(p|\stackrel{~}{q}).$$ (9) (ii) the absolute value $`|\mathrm{\Delta }H|`$ is less than the K-L entropy between $`p`$ and $`q`$, since the difference between both is also a K-L entropy, $$|\mathrm{\Delta }H|=K(p|q)-K(\stackrel{~}{q}|q)\le K(p|q).$$ (10) This strongly suggests that renormalized entropy cannot be more useful than the standard K-L relative entropy between the unrenormalized distributions. To prove our claims, we notice that we can rewrite eq.(6), using eqs.(5) and (7), as $$\underset{k}{\sum }\stackrel{~}{q}_k\mathrm{log}\stackrel{~}{q}_k=\underset{k}{\sum }p_k\mathrm{log}\stackrel{~}{q}_k.$$ (11) Therefore, $`\mathrm{\Delta }H`$ $`=`$ $`{\displaystyle \underset{k}{\sum }}\stackrel{~}{q}_k\mathrm{log}\stackrel{~}{q}_k-{\displaystyle \underset{k}{\sum }}p_k\mathrm{log}p_k`$ (12) $`=`$ $`{\displaystyle \underset{k}{\sum }}p_k\mathrm{log}\stackrel{~}{q}_k-{\displaystyle \underset{k}{\sum }}p_k\mathrm{log}p_k=-{\displaystyle \underset{k}{\sum }}p_k\mathrm{log}{\displaystyle \frac{p_k}{\stackrel{~}{q}_k}},`$ which proves our first claim. Furthermore, we can write $`\mathrm{\Delta }H+K(p|q)`$ $`=`$ $`{\displaystyle \underset{k}{\sum }}p_k\mathrm{log}\stackrel{~}{q}_k-{\displaystyle \underset{k}{\sum }}p_k\mathrm{log}q_k`$ (13) $`=`$ $`{\displaystyle \underset{k}{\sum }}\stackrel{~}{q}_k\mathrm{log}\stackrel{~}{q}_k-{\displaystyle \underset{k}{\sum }}\stackrel{~}{q}_k\mathrm{log}q_k={\displaystyle \underset{k}{\sum }}\stackrel{~}{q}_k\mathrm{log}{\displaystyle \frac{\stackrel{~}{q}_k}{q_k}}=K(\stackrel{~}{q}|q),`$ which proves the second claim. ## 3 Application to EEG data ### 3.1 Details of the data We will illustrate the result of the previous section by re-analyzing some of the same data used in . The data correspond to an intracranial multichannel EEG recording of a patient with mesial temporal lobe epilepsy; it was sampled at 173 Hz and band-pass filtered in the range 0.5-85 Hz. 
In Fig. 1 we show EEG time sequences (500000 data points, approx. 48 min. of continuous recording) from three different recording sites prior to, during, and after an epileptic seizure. The seizure starts at about point 270000 (minute $`\sim 26`$) and lasts for 2 minutes. The recording sites are located nearest to the epileptogenic focus (upper trace; channel abbreviation: TBAR), adjacent to the focus (middle trace; channel abbreviation: TR), and on the non-affected brain hemisphere (lower trace; channel abbreviation: TBAL). To better visualize the dynamics, insets drawn on top of each signal show typical EEG sequences of 10 sec duration during the pre-seizure (left), seizure (middle), and the post-seizure stage (right). ### 3.2 Power spectrum For a finite data set $`x_n`$ sampled at discrete times $`t_n=n\mathrm{\Delta }t,n=1,\mathrm{},N,T=N\mathrm{\Delta }t`$, we denote by $`X(\omega _k)`$ its discrete Fourier transform at $`\omega _k=2\pi k/T`$, with $`k=1,\mathrm{},N`$. We estimate the power spectrum as $$S(\omega _k)=C\underset{n=-b}{\overset{b}{\sum }}w(n)|X(\omega _{k+n})|^2$$ (14) where $`w(n)`$ is a smoothing function of window size $`B=2b+1`$, and $`C`$ is a normalization factor. As in ref. , a Bartlett-Priestley smoothing function was used $$w(n)\propto \{\begin{array}{cc}[1-(n/b)^2]\hfill & |n|\le b\hfill \\ 0\hfill & |n|>b.\hfill \end{array}$$ (15) As in and for comparison purposes, we subdivide the data into (half-overlapping) epochs of $`T\approx 24`$ s ($`N=4096`$ data points), and choose the window size of the Bartlett-Priestley function as $`B=33`$. This window length corresponds to a frequency resolution of 0.042 Hz. In the following we consider the spectrum in the region $`\omega <30`$ Hz since no interesting activity occurs outside this band . Moreover, since we are not interested in the absolute power, the normalization factor $`C`$ is adjusted such that the sum over all frequencies below 30 Hz gives unity. ### 3.3 Shannon entropy Parts (a) - (c) of Figs. 
2–4 show the EEG signals recorded at the three sites, contour plots of the corresponding normalized power spectra, and time-dependent estimates of the Shannon entropy $`H`$. Prior to the seizure, power spectra exhibit a fairly stable but broad frequency composition, which is reflected in high values of $`H`$. When the seizure starts, the spectra in Figs. 2 and 3 are dominated by a single frequency component ($`\approx 7`$ Hz). This is reflected in Fig. 2 by an abrupt decrease of $`H`$ by about 20%. Actually, the decrease is even more pronounced for smaller time windows, since the period of strong coherence is much shorter than 24 sec. As the seizure evolves, the dominant frequency decreases rapidly. These dynamics are characteristic of seizures originating from the mesial temporal lobe (see e.g. ) but are not the only possible ones . The rise of $`H`$ in both Figs. 2 and 3 immediately before the final drop can partially be attributed to this fast change of dynamics. The estimated entropy is high during this phase because of several successively appearing frequencies in the same window. The following concentration of activity at lower frequencies finally leads to a decrease of $`H`$. To a lesser degree this is also seen in Fig. 4. Within or close to the seizure-generating area, $`H`$ remains small throughout the entire recorded post-seizure stage. Finally, it slowly increases towards values that compare to those obtained during the pre-seizure stage. Using a Shannon entropy defined from the wavelet transform, similar results were obtained in ref. from an analysis of a scalp-recorded seizure. ### 3.4 Kullback-Leibler entropy The time courses of the K-L entropy $`K(p|q)`$ are shown in parts (d) of Figs. 2–4. As reference segments we used the signals from the pre-seizure stage consisting of 4096 data points and starting at $`n=20480`$. The sensitivity (i.e. 
increase of $`K(p|q)`$ during the seizure relative to the background level) is notably improved when compared to that of the Shannon entropy. Background fluctuations during the pre-seizure stage only slightly affected $`K(p|q)`$, since pre-seizure power spectra from different windows are very similar. Also, $`K(p|q)`$ proved nearly independent of the choice of the reference segment, as long as it was chosen from the pre-seizure stage. As with the Shannon entropy, we see in Figs. 2 and 3 a marked change at seizure onset due to a concentration of spectral power at frequencies near $`7`$ Hz. $`K(p|q)`$ clearly detects this difference. It also detects the spectral difference when lower frequencies dominate in the post-seizure stage. But again the rapid frequency change after seizure onset is hard to distinguish from a broad-band spectrum due to our somewhat large window size $`T`$. The last two parts of Figs. 2–4 show time courses of the K-L entropy and the renormalized entropy calculated using a reference segment with the lowest Shannon entropy, as was done by the authors of . For Figs. 2 and 3 this was after the seizure (4096 data points starting at n=335872 and n=315392, resp.), while it was during the seizure for data shown in Fig. 4 (4096 data points starting at n=284672). Here K-L and renormalized entropies give similar results. This illustrates the similarity between renormalized and K-L entropies as already pointed out in section 2. Differences with results in can be attributed partly to differences in the exact choice of the reference segment. We see that peak values of $`K(p|q)`$ are larger than those based on calculations using a pre-seizure reference window. However, the relative increases over pre-seizure values are much less pronounced. Therefore, we consider post-seizure reference segments as not very useful for seizure detection. Moreover, post-seizure reference segments obviously cannot be used in real-time applications. 
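The sliding-window analysis above is easy to emulate on synthetic data. The sketch below is our own (it uses unsmoothed periodograms instead of the Bartlett-Priestley estimate of eq. (14), and all names are hypothetical); it tracks $`K(p|q)`$ over half-overlapping windows against a fixed reference segment, mimicking the pre-seizure-reference procedure:

```python
import numpy as np

def spectrum(seg):
    """Normalized power spectrum of one segment (eq. (2); smoothing omitted)."""
    S = np.abs(np.fft.rfft(seg - seg.mean())) ** 2
    return S / S.sum()

def kl(p, q, eps=1e-12):
    """K(p|q) in bits; eps regularizes near-empty bins of the estimates."""
    p, q = p + eps, q + eps
    return np.sum(p * np.log2(p / q))

def kl_track(x, ref_start, N=4096):
    """K-L entropy of half-overlapping length-N windows vs. a fixed reference."""
    q = spectrum(x[ref_start:ref_start + N])
    starts = range(0, len(x) - N + 1, N // 2)
    return np.array([kl(spectrum(x[s:s + N]), q) for s in starts])

# Synthetic test: broadband 'background' followed by a dominant single rhythm.
rng = np.random.default_rng(0)
n = 8 * 4096
x = np.concatenate([rng.standard_normal(n),
                    np.sin(2 * np.pi * 0.02 * np.arange(n))
                    + 0.1 * rng.standard_normal(n)])
track = kl_track(x, ref_start=0)
# track stays small while the spectrum resembles the reference, and rises
# sharply once the concentrated 'seizure-like' rhythm takes over.
```

As the text notes, the same machinery with a post-change reference simply swaps the roles of the two regimes; the choice of reference segment is what carries the physiological interpretation.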
In addition, a post-seizure reference segment is not very reasonable physiologically. Immediately after a seizure, the state of the patient and, accordingly, the EEG are highly abnormal. Typically the post-seizure EEG exhibits slow fluctuations of high amplitude, sometimes superposed with high-frequency activity (see Fig. 1). This is obviously not a typical background EEG. Moreover, the post-seizure stage is often contaminated by artifacts, some of which are not as easily recognizable as those shown in Fig. 1. We therefore disagree with the procedure proposed in ref. of automatically choosing as reference the segment with the lowest entropy for each recording channel. Instead, we propose to choose a reference segment recorded during a state as “normal” as possible, i.e. far from a seizure (we should note, however, that there is still a lot of controversy in neurophysiology about what is considered to be “far”), free of artifacts and, if possible, free of abnormal alterations (admittedly, this is not always possible). Moreover, the reference segment should be exactly the same time interval for all channels. Otherwise comparisons between different recording sites are not reliable. Also, one might consider taking shorter time segments. This would of course enhance statistical fluctuations, but would allow better time resolution. Even then it would be difficult to detect the recording site showing the very first sign of the seizure, which is necessary for an exact focus localization. We verified this for windows down to 1.5 seconds (data not shown). This is in agreement with clinical experience, which shows that the time scales relevant for this detection can be less than 1 sec. Because of these problems, the suggestions of concerning clinical applications like seizure detection or localization of epileptic foci seem too optimistic. ## 4 Conclusion The aim of the present paper was twofold. 
Firstly, we showed that “renormalized entropy”, a novel entropy measure for differences in probability distributions, is closely related to Kullback-Leibler entropy. We also argued that it is very unlikely that more information is obtained from the former than from the latter. Secondly, we checked recent claims that renormalized entropy (and thus also K-L entropy) is very useful in applications to intracranial EEGs from epilepsy patients. We found some of these claims to be unjustified. Nevertheless, the fact remains that K-L entropy applied to spectral distributions is a very promising tool which has not yet been studied much in this context. In fact, “abnormal” frequency patterns corresponding to epileptic seizures were better identified with the K-L than with the Shannon entropy. While the present study was performed on a limited amount of data, we suggest that K-L entropy is an interesting tool for a more systematic study. Finally, we point out that the K-L entropy can also be defined from time-frequency distributions other than the windowed Fourier transform. In particular, we consider wavelets as good candidates, since they have optimal resolution in both the time and frequency domains (see for theoretical background and for application to EEGs). Acknowledgments: K.L. acknowledges support from the Deutsche Forschungsgemeinschaft.
# Spectral variation in the X-ray pulsar GX 1+4 during a low-flux episode ## 1 Introduction The study of X-ray pulsars has been an area of active research for almost 30 years. In spite of this there remain some significant shortfalls in the understanding of these objects. An example is the persistent pulsar GX 1+4. At the time of its discovery \[Lewin, Ricker and McClintock 1971\] it was one of the brightest objects in the X-ray sky. The companion to the neutron star is an M6 giant \[Davidsen, Malina and Bowyer 1977\]. GX 1+4 is the only known X-ray pulsar in a symbiotic system. Measurements of the average spin-up rate during the 1970s gave the largest value recorded for any pulsar (or in fact any astronomical object) at $`\sim 2`$ per cent per year. Inexplicably, the average spin-up trend reversed around 1983, switching to spin-down at approximately the same rate. Since that reversal a number of changes in the sign of the torque (as inferred from the rate of change of the pulsar spin period) have been observed \[Chakrabarty et al. 1997\]. Several estimates (Beurle et al. 1984, Dotani et al. 1989, Greenhill et al. 1993, Cui 1997) indicate a neutron star surface magnetic field strength of $`23\times 10^{13}`$ G. The X-ray flux from the source is extremely variable on time-scales of seconds to decades. Two principal flux states have been observed, a ‘high’ state which persisted during the spin-up period of the 1970s, and a ‘low’ state since. Although the mean flux has been increasing steadily during the current ‘low’ state it has not yet returned to the level of the 1970s. Superimposed on these long-term variations are smooth changes in the flux on time-scales of order hours to days. On the shortest time-scales the periodic variation at the neutron star’s rotation period of around 2 min is observed. Compared to other accretion-powered X-ray pulsars, GX 1+4 has an atypically hard spectrum extending out well past 100 keV \[Frontera and Dal Fiume 1989\]. 
Historically the spectrum has been fitted with thermal bremsstrahlung or power law models; more recent observations with improved spectral resolution generally favour a power law model with exponential cutoff. Typical values for the cutoff power law model parameters are photon index $`\alpha =1.1-2.5`$; cutoff energy 5-18 keV; $`e`$-folding energy 11-26 keV. For any spectral model covering the range 1-10 keV, it is also necessary to include a gaussian component representing iron line emission at $`\approx 6.4`$ keV, and a term to account for the effects of photoelectric absorption by cold gas along the line of sight with hydrogen column density in the range $`n_H=(4-140)\times 10^{22}\mathrm{cm}^{-2}`$. The source spectrum and in particular the column density $`n_H`$ have previously exhibited significant variability on time-scales as short as a day \[Becker et al. 1976\]. Measurements of spectral variation with phase are few; one example of pulse-phase spectroscopy was undertaken with data from the Ginga satellite from 1987 and 1988 \[Dotani et al. 1989\]. Only the column density and the iron line centre energy were allowed to vary with phase in the spectral fits, and no significant variation was observed. The Ghosh and Lamb \[Ghosh and Lamb 1979\] model predicts a correlation between torque and mass transfer rate (and hence luminosity) for accretion-driven X-ray sources. For most sources it is difficult to test such a relationship since the range of luminosities at which they are observed is limited. However the correlation between torque and luminosity has been confirmed, at least approximately, for three transient sources using data from the Burst and Transient Source Experiment (BATSE) aboard the Compton Gamma Ray Observatory (CGRO) and EXOSAT (Reynolds et al. 1996, Bildsten et al. 1997). The situation for persistent pulsars is, however, less straightforward. 
The BATSE data have demonstrated that in general the torque is in fact uncorrelated with luminosity in these sources. The spin-up or spin-down rate can remain almost constant over intervals (referred to in this paper as a ‘constant torque state’ or just ‘torque state’) which are long compared to other characteristic time-scales of the system, even when the luminosity varies by several orders of magnitude over that time. Transitions between these states of constant torque can be abrupt, with time-scales of $`<1`$ d when the two torque values have the same sign; alternatively when switching from spin-up to spin-down (or vice-versa) the switch generally occurs smoothly over a period of 10-50 d. It is possible that there remains some connection between the torque and luminosity, since at times the torque measured for GX 1+4 has been anticorrelated with luminosity \[Chakrabarty et al. 1997\]. This behaviour has not been observed in other pulsars. One important caveat regarding the BATSE measurements is that the instrument can only measure pulsed flux. Systematic variations in pulse profiles or pulse fraction could introduce significant aliasing to the flux data, hence masking the true relationship between bolometric flux and torque. Given that pulse profile shape and torque state have shown evidence for correlation in GX 1+4 \[Greenhill, Galloway and Storey 1998\] this could potentially be an important effect. In this paper we present results from spectral analysis of data obtained from GX 1+4 during 1996 using the Rossi X-ray Timing Explorer satellite (RXTE; Giles et al. 1995). A companion paper \[Giles et al. 1999\] contains detailed analysis of pulse arrival times and pulse profile changes. ## 2 Observations The source was observed with RXTE between 1996 July 19 16:47 UT and 1996 July 21 02:39 UT. Several interruptions were made during that time as a consequence of previously scheduled monitoring of other sources. 
After screening the data to avoid periods contaminated by Earth occultations, the passage of the satellite through the South Atlantic Anomaly (SAA), and periods of unstable pointing, the total on-source duration was $`\sim 51`$ ks. RXTE consists of three instruments, the proportional counter array (PCA) covering the energy range 2-60 keV, the high-energy X-ray timing experiment (HEXTE) covering 16-250 keV, and the all-sky monitor (ASM) which spans 2-10 keV. Pointed observations are performed using the PCA and HEXTE instruments, while the ASM regularly scans the entire visible sky. The background-subtracted total PCA count rate for 3 of the five proportional counter units (PCUs) comprising the PCA is shown in Fig. 1a. The other two PCUs were only active briefly at the beginning of the observation so those data are not included in the analysis. The phase-averaged PCA count rate was initially low at $`\sim 80\mathrm{count}\mathrm{s}^{-1}`$. This corresponds to a flux of $`\sim 6\times 10^{36}\mathrm{erg}\mathrm{s}^{-1}`$ in the 2-60 keV energy range, using the spectral model discussed in section 3 and assuming a source distance of 10 kpc. Throughout this paper we shall use this value as the source distance unless otherwise specified; the actual distance is thought to be in the range 3-15 kpc \[Chakrabarty and Roche 1997\]. During the course of the observation the count rate decreased to a minimum of $`\sim 5\mathrm{count}\mathrm{s}^{-1}`$, corresponding to a flux of $`\sim 4\times 10^{35}\mathrm{erg}\mathrm{s}^{-1}`$, before partially recovering towards the end. The count rates are unusually low for this source, with other observations giving significantly higher rates; for example $`\sim 320\mathrm{count}\mathrm{s}^{-1}`$ and $`\sim 230\mathrm{count}\mathrm{s}^{-1}`$ (equivalent rates for 3 PCUs) in February 1996 and January 1997 respectively. At times the background-subtracted count rate during interval 2 drops significantly below zero. 
This is a consequence of the low source-to-background signal ratio (around 1:10) during this interval coupled with statistical variations in the binned count rate values. The observation is divided into three intervals on the basis of the mean flux (Fig. 1a). Interval 1 covers the start of the observation to just before the flux minimum. Interval 2 spans the period of minimum flux, during which time the flux was $`\lesssim 30\mathrm{count}\mathrm{s}^{-1}`$ apart from $`\sim 10`$ s during a flare (see section 5). Interval 3 covers the remaining portion of the observation, during which the mean flux increased steadily. Accompanying the changes in flux were significant variations in pulse profile and spectral shape. Historically GX 1+4 has shown evidence of a correlation between torque state and pulse profile shape \[Greenhill, Galloway and Storey 1998\]. Throughout the period of spin-up during the 1970s pulse profiles were typically brighter at the trailing edge with respect to the primary minimum; e.g. Doty, Hoffman and Lewin \[Doty et al. 1981\]. Since then, measured pulse profiles have instead usually been leading-edge bright, with less pronounced asymmetry; e.g. Greenhill et al. \[Greenhill et al. 1993\]. During interval 1 the pulse profile was observed to be leading-edge bright, similar to other observations since the 1980s. Pulsations all but ceased during interval 2, and in interval 3 the shape of the profile had changed dramatically and resembled the trailing-edge bright profiles typically observed during the 1970s \[Giles et al. 1999\]. The count rate spectra taken during each interval are shown in Fig. 1b. The overall spectral shape changed significantly over the course of the observation, with the spectrum becoming harder in intervals 2 and 3 compared to interval 1. The iron fluorescence line at around 6.4 keV appears more prominent in the second and third intervals. Iron line enhancement during intervals 2 and 3 is also apparent in the spectral ratios, Fig. 1c. 
These ratios were calculated by subtracting the background spectrum (including a component to account for the emission from the galactic plane; see section 3) from the source spectrum for each interval and dividing the resulting spectra for intervals 2 and 3 by that of interval 1. Because the countrate drops off steeply above 10 keV the spectral bins must be made correspondingly larger to achieve a reliable ratio. The datapoint in the highest energy band for each curve was obtained from HEXTE data, while the lower energy ratios are calculated from PCA data. The decrease in flux observed from interval 1 to 2 and 3 becomes more pronounced at energies below 6 keV. Above 15 keV the spectral ratios are almost constant with energy. ## 3 The spectral model fits Instrumental background from cosmic ray interactions and as a result of passages close to the SAA are estimated using the pcabackest software, provided by the RXTE Guest Observer Facility (GOF). Due to the proximity of the source to the galactic plane, an additional component which takes into account the so-called ‘galactic ridge’ emission must be included in any spectral model. The model for this component used in our fits is identical to that fitted to survey data from this region \[Valinia and Marshall 1998\] with normalisations and abundance fitted to spectra taken during slews to and from the source during this observation. A secondary instrumental effect which must be taken into account to obtain the lowest possible residuals in the model fit is a consequence of the Xenon L-edge absorption feature. This feature is modelled in our spectrum by a multiplicative edge model component with energy fixed at 4.83 keV. Candidate spectral models were tested by fitting to the count rate spectrum to minimise $`\chi ^2`$ using the xspec spectral fitting package version 10 \[Arnaud 1996\]. 
In general each model takes the form of one (or more) continuum components with a gaussian component necessary to simulate the iron line emission, and a multiplicative component to account for absorption by cold matter along the line of sight. With a lower than normal count rate for the source during this observation, the primary source of error is the Poisson statistics within each energy bin rather than any instrumental uncertainty. Hence the systematic error in xspec was set to zero for the model fits. Thermal bremsstrahlung and power law continuum components both resulted in formally unacceptable values of $`\chi ^2`$ for the interval 1 mean spectrum. Some improvement was found by fitting with various forms of power law, including a power law with exponential cutoff and a broken power law. Nevertheless, each of these models gave unacceptable values of reduced-$`\chi ^2`$: 2.93 and 1.94 respectively. An acceptable fit was obtained using an analytic model based on Comptonisation of a thermal spectrum by hot plasma close to the source (‘compTT’ in xspec; Titarchuk 1994), with reduced-$`\chi ^2`$ for the interval 1 spectra of 1.10. The Comptonisation model has thus been chosen for all spectral fitting reported in this paper. Model parameters for spectral fits from intervals 1 and 3 are shown in Table 1. Fitting this model to the interval 2 mean spectra resulted in an acceptable $`\chi _\nu ^2`$ fit statistic of 0.7943, but with very wide confidence limits for the fit parameters. No improvement in the confidence intervals is obtained by freezing selected parameters to the mean values for the entire observation (e.g. $`T_0`$). The fit parameters are effectively unconstrained and as such cannot be relied upon as a measure of the source conditions. Additionally, the interval 2 spectra alone do not permit an unambiguous choice of spectral model. 
We cannot distinguish between cutoff power law, broken power law, and Comptonisation spectral models during this interval on the basis of $`\chi _\nu ^2`$. A comparable fit can even be obtained using a model consisting of two blackbody emission components, with fitted temperatures $`1.4_{1.2}^{1.6}`$ keV and $`6.0_{4.5}^{7.6}`$ keV ($`\chi _\nu ^2=0.81`$; see section 5). Consequently we restrict the discussion of the mean spectral fitting results to those from intervals 1 and 3. The Comptonisation model implementation in xspec offers a geometry switch which affects the fitted value of the optical depth $`\tau _P`$. The switch can be set to model either disc or spherical geometries. As we will argue in section 6, neither of these is strictly appropriate for the present situation. Consequently we performed all the fitting using the disc geometry, but note that fitted values of $`\tau _P`$ with the spherical geometry will be approximately twice as large. The fitted values of $`\tau _P`$ should provide an adequate comparative measure of the degree of Comptonisation between different spectra. The increase in line-of-sight absorption following interval 1, suggested initially by the spectral ratios (Fig. 1c), is further supported by the model fits. The fitted column density $`n_H`$ more than doubles between intervals 1 and 3. Spectral fits to each uninterrupted ‘burst’ of data (see Fig. 1a) indicate that the increase took place smoothly over approximately 10 hours, although significant variations in the fitted $`n_H`$ values are observed on timescales as short as 2 h. BeppoSAX satellite observations indicate that $`n_H`$ may have persisted at the level measured at the end of interval 3 at least until August 19 \[Israel et al. 1998\]. The input spectral temperature $`kT_0`$ is consistent with a constant value of $`\sim 1`$ keV during the entire observation. 
The decrease in flux following interval 1 is associated with a marginally significant decrease in the fitted values of the scattering optical depth $`\tau _P`$ and the Comptonised component normalisation parameter $`A_\mathrm{C}`$. The model parameters associated with the Gaussian component, representing fluorescence from iron in the circumstellar matter, are consistent with constant values over the course of the observation. The iron line centre energy is consistent with emission from cool matter, with no significant change in the centre energy found between intervals. The iron line equivalent width (EW) increases with marginal significance following interval 1.

## 4 Pulse-phase spectroscopy

The data from interval 1 were divided into 10 equal phase bins, and a spectrum obtained for each phase range. The ephemeris is that of Giles et al. \[Giles et al. 1999\], with best-fit constant barycentre-corrected period $`P=124.36568\pm 0.00020`$ s. The primary minimum is defined as phase zero. Data from interval 1 alone were used, for two reasons. Firstly, the count rate was at its highest during that time, making the signal-to-noise ratio optimal compared to the other intervals. It was not possible to fit models reliably to pulse-phase spectra from interval 2 (due to the low count rate) or interval 3 (due to its short duration). Secondly, since the evidence of the pulse profiles suggests that conditions in the source may be rather different between intervals 1, 2 and 3, this seems a better choice than simply combining all the data. Each of the 10 spectra was then fitted with the model described in section 3. Initially, all fit parameters (barring those of the galactic ridge component) were left free to vary. Fitted values of the column density $`n_H`$, input spectrum temperature $`kT_0`$, and the iron line component parameters were all found to be consistent with those for the mean interval 1 spectrum.
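The phase binning described here can be sketched as follows. The epoch of primary minimum `T0` is a placeholder, not a value from the paper; only the period is taken from the text.

```python
P_SPIN = 124.36568     # s, best-fit barycentre-corrected period (Giles et al. 1999)
NBINS = 10
T0 = 0.0               # epoch of primary minimum (phase zero) -- placeholder value

def phase_bin(t, t0=T0, period=P_SPIN, nbins=NBINS):
    """Assign an event time to one of `nbins` equal pulse-phase bins."""
    phase = ((t - t0) / period) % 1.0
    return int(phase * nbins)

# Events one period apart share a bin; half a period apart they are 5 bins away.
assert phase_bin(200.0) == phase_bin(200.0 + P_SPIN)
assert (phase_bin(200.0 + P_SPIN / 2) - phase_bin(200.0)) % NBINS == 5
```

Each event's counts are then accumulated into the spectrum for its bin before fitting.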
Confidence limits for the scattering plasma temperature $`kT`$ were very large within some phase ranges, while significant variations with phase were observed only in the normalisation parameter $`A_\mathrm{C}`$ and scattering optical depth $`\tau _P`$. To improve the confidence intervals for the latter three parameters, a second fit was performed with all other parameters frozen at the fitted values for the mean interval 1 spectrum. The resulting fit values are shown in Fig. 2. Around the phase of primary minimum ($`\varphi =0.0`$–$`0.1`$) the fitted value of $`\tau _P`$ is significantly higher than the mean value, while $`kT`$ is lower. The normalisation $`A_\mathrm{C}`$ is also significantly lower than the mean value at $`\varphi =0.0`$, but in the phase bin immediately following it is above the confidence limits for the mean. There is little evidence for strong spectral variation from the mean at other phases; however, we note an almost monotonic decrease in the normalisation $`A_\mathrm{C}`$ from $`\varphi =0.1`$ throughout the pulse cycle.

## 5 Flare spectra

During the period of lowest flux (interval 2) a strong flare was observed, with the peak flux rising to almost 20 times the mean level during this interval (Fig. 3a). The flare was preceded by a modest brightening of the source which began $`\sim 150`$ s before the flare itself and lasted $`\sim 60`$ s; a second pre-flare brightening began $`\sim 50`$ s before the main flare, lasting $`\sim 30`$ s. Both the flare and the pre-flare activity occurred within the extent of two pulse periods. From the ephemeris determined for the full data set \[Giles et al. 1999\], a primary minimum would have occurred between the first and second pre-flares had the source been pulsing as was observed during intervals 1 and 3. The instantaneous flux during the flare peaked at $`\sim 105\mathrm{count}\mathrm{s}^{-1}`$, compared to the mean rate during interval 2 of $`\sim 5\mathrm{count}\mathrm{s}^{-1}`$.
No comparable events occurred at other times during interval 2. During intervals 1 and 3, the significant variations between successive pulse profiles make it difficult to rule out flares with peaks having similar heights above the mean level. Certainly no flares with the same proportional increase in flux compared to the mean occurred over the course of the observation. The count rate during this interval was too low to obtain useful full-resolution spectra. Instead, low-resolution spectra at various times were extracted from the uninterrupted portion of data within which the flare was observed. The PHA ratios obtained by dividing the various spectra are shown in Fig. 3b. The top panel shows the ratio of the mean spectrum following the flare to that preceding it (excluding the flare itself). Mean flux decreased by around 50 per cent following the flare, with no strong evidence of spectral variation. The second and third plots show the ratios of the pre-flare (interval A on Fig. 3a) and flare (intervals B and C) spectra vs. the mean (excluding the flare). In each case there is no evidence of spectral variation; each ratio is consistent with a constant PHA ratio in the range 3–20 keV. The pre-flare exhibits only a modest increase in flux of around 50 per cent, while during the 58 s window encompassing the flare itself the mean flux increased by 4–5 times. The bottom panel shows the PHA ratio between the falling and rising parts of the main flare (intervals C and B respectively). The ratio is constant barring a broad dip between 6 and 12 keV. Examination of the spectra indicates that this dip is not due to any global change in the spectral shape, but rather to a localised decrease in flux within that energy range as the flare developed.
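A minimal sketch of how such PHA ratios and their errors can be formed from two binned count spectra. The counts are toy numbers, not the GX 1+4 data, and the error propagation assumes independent Poisson counts in each bin.

```python
import math

def pha_ratio(counts_a, counts_b):
    """Bin-wise ratio of two count spectra with Poisson error propagation."""
    ratios, errors = [], []
    for a, b in zip(counts_a, counts_b):
        r = a / b
        # fractional errors add in quadrature for a ratio of independent counts
        err = r * math.sqrt(1.0 / a + 1.0 / b)
        ratios.append(r)
        errors.append(err)
    return ratios, errors

# A flat ratio within errors indicates a pure flux change with no spectral change.
flare = [400, 320, 250, 190, 140]   # toy counts in 3-20 keV bins
mean = [100, 80, 62, 48, 35]
r, e = pha_ratio(flare, mean)
```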
Modelling the spectra from the rising and falling parts using the two-temperature model described in section 3, we find that the variation can be fitted best by a decrease in the temperature of the cooler component ($`\chi _\nu ^2=1.6`$), although the change in temperature is not statistically significant.

## 6 Discussion

Re-analysed BATSE data confirm that GX 1+4 underwent a torque reversal from spin-down to spin-up around 1996 August 2, approximately 10 days after the RXTE observation \[Giles et al. 1999\]. Since contributions to the net torque on the neutron star may come from both accreted material and magnetic stresses within the disc, it seems reasonable to suggest that changes in the magnetosphere or disc structure which cause the torque reversal may occur some time before a measurable effect is seen on the star itself. We therefore suggest that the spectral and pulse profile changes measured during our observation are related to the (presently unknown) phenomenon which causes torque reversals. Additional support for this connection is provided by the observation of dramatic pulse profile shape changes during the RXTE observation, coupled with the previously noted correlation between pulse profile shape and torque state \[Greenhill, Galloway and Storey 1998\]. Until an observation can be made which encompasses the precise time at which a torque reversal occurs, it may be impossible to determine more about the process. Comptonisation models have been used to fit spectra for this source from past observations, e.g. Staubert et al. \[Staubert et al. 1995\]; however, it has not previously been possible to eliminate all other candidate models on the basis of the $`\chi ^2`$ fit parameter.
The particular model used for the spectral fitting simulates Comptonisation in an unmagnetised plasma \[Titarchuk 1994\], and since the available evidence points towards a strong magnetic field in GX 1+4 (although this awaits confirmation by more direct measurements such as a cyclotron resonance line) the model fit parameters may not be an accurate measure of the source conditions. It is likely that the principal effect of the magnetic field will be to make the spectral parameters dependent on the emission angle. Hence the model fit parameters obtained from the mean spectra are expected to be a reasonable approximation of the actual values (L. Titarchuk, pers. comm.). Assuming that the majority of the X-ray emission originates from a blackbody at most the size of the neutron star ($`R_{*}\approx 10`$ km), we expect a temperature $`kT_0\gtrsim 0.5`$ keV. The temperature of the input spectrum $`kT_0\approx 1`$ keV obtained from the model fits is consistent with this calculation. Rough estimates of the accretion column density can be made based on the mass transfer rate derived from the luminosity, and assuming a simple column geometry. The accretion luminosity $`L_{acc}\simeq GM_{*}\dot{M}/R_{*}`$ and hence during interval 1 $`\dot{M}\approx 2\times 10^{16}\mathrm{g}\mathrm{s}^{-1}`$. Assuming that the accretion column radius $`R_c`$ is some fraction $`f`$ of the neutron star radius $`R_{*}`$, and the column plasma is moving at approximately the free-fall velocity $`\sim 0.5c`$, the estimated optical depth for Thomson scattering is $`\approx 0.17/f`$. In general $`f`$ is subject to considerable uncertainties, particularly given the over-simplistic geometry adopted here, but we estimate $`f\approx 4\times 10^{-2}`$ (e.g. Frank, King and Raine 1992) and thus the optical depth $`\tau \sim 5`$, close to the model fit values. The pulse-phase spectroscopy results also show that $`\tau _P`$, $`kT`$ and $`A_\mathrm{C}`$ are significantly modulated at the pulsar rotation period.
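These order-of-magnitude estimates can be reproduced in a few lines. The neutron star mass and the interval 1 luminosity below are assumptions chosen to be consistent with the quoted $`\dot{M}`$, not values taken from the paper.

```python
import math

# CGS constants
G = 6.674e-8            # cm^3 g^-1 s^-2
c = 3.0e10              # cm s^-1
m_p = 1.673e-24         # g
sigT = 6.652e-25        # cm^2, Thomson cross-section
Msun = 1.989e33         # g

M_ns = 1.4 * Msun       # assumed neutron star mass
R_ns = 1.0e6            # cm (10 km)
L = 3.7e36              # erg s^-1, assumed interval-1 luminosity

# Accretion rate from L_acc ~ G M Mdot / R  ->  ~2e16 g/s
Mdot = L * R_ns / (G * M_ns)

def tau(f, v=0.5 * c):
    """Thomson optical depth across a column of radius f*R_ns whose plasma
    falls at ~0.5c: tau = n_e sigma_T R_c with n_e = Mdot/(pi R_c^2 v m_p)."""
    R_c = f * R_ns
    n_e = Mdot / (math.pi * R_c**2 * v * m_p)
    return n_e * sigT * R_c     # = Mdot sigT / (pi f R_ns v m_p) ~ 0.17/f

print(Mdot, tau(4e-2))          # tau of a few for f ~ 0.04
```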
Consequently we propose that the Comptonisation model provides a realistic picture of spectral formation in this source, with scattering taking place in the accretion column. Thus the $`kT`$ parameter can be interpreted as the mean temperature of the accretion column plasma over the region in the column where scattering takes place. The model normalisation parameter $`A_\mathrm{C}`$ is somewhat more difficult to relate to a physically measurable quantity, since both the $`kT`$ and $`\tau _P`$ parameters can also affect the total flux from the model component. The spectral ratios (Fig. 1c) and the spectral fit parameters strongly suggest that the variations in the mean spectra during the course of the observation are due to two factors. The decrease in flux which is essentially independent of energy is presumably a result of a decreased rate of mass transfer to the neutron star, $`\dot{M}`$. This is accompanied by a strong increase in absorption by cold material causing the flux decrease below 6 keV. Variations in the column density $`n_H`$ on time-scales of $`\sim 2`$ h have not previously been observed in this source. The iron line energy and the relationship between equivalent width and $`n_H`$ are consistent with the spherical distribution of matter suggested by Kotani et al. \[Kotani et al. 1999\]; however, the variation is much too rapid to be attributable to the negative feedback effect which those authors suggest regulates mass transfer to the neutron star in the long term. The rapid variation may be an indication of significant inhomogeneities in the circumstellar matter, or alternatively that the giant wind velocity is much faster than 10 $`\mathrm{km}\mathrm{s}^{-1}`$ as suggested by infrared observations of the companion \[Chakrabarty et al. 1998\]. Variation in the spectral fit parameters with pulse phase may provide clues to the distribution of matter in the accretion column.
The sharp dip in the pulse profiles is associated with a significant increase in the scattering optical depth $`\tau _P`$ and decrease in the Comptonisation component normalisation parameter $`A_\mathrm{C}`$ (Fig. 2). Such an effect may be observed if the accretion column is viewed almost directly along the magnetic axis, resulting in a much greater path length for photons propagating through the relatively dense matter of the column; essentially an ‘eclipse’ of the neutron star pole by the accretion column. Preliminary Monte Carlo modelling based on Comptonisation as the source of high-energy photons supports this as a possible mechanism (Galloway, 1999, work in progress). Accretion column eclipses have previously been postulated to explain dips in pulse profiles from A 0535+262 \[Cemeljic and Bulik 1998\] and RX J0812.4-3114 \[Reig and Roche 1999\]. That the plasma temperature $`kT`$ is also low around the phase of primary minimum may be related to the bulk motion of the column plasma, since the relative velocity of the plasma in the observer’s frame will depend on orientation (and hence pulse phase). The velocity of bulk motion is likely to be many orders of magnitude above the thermal velocity (in the plasma rest frame) and so may result in observable variation of this fit parameter with pulse phase. The asymmetry of the normalisation $`A_\mathrm{C}`$ with respect to the primary minimum furthermore points to significant asymmetry of the emission on the ‘leading’ and ‘trailing’ sides of the pole. Such asymmetry may originate from a nonzero relative velocity between the disc and column plasma where the disc plasma becomes bound to the magnetic field lines and enters the magnetosphere \[Wang and Welter 1981\]. The additional observation that the width of the dip decreases with increasing energy may point towards a role for resonant absorption \[Giles et al. 1999\].
The observation of a short-duration flare during the minimum flux period provides a further example of previously unseen behaviour in this source. With the peak flux during the flare rising to almost 20 times the mean level, and with no other comparable events observed during interval 2, it is likely that the flare was due to a short-lived episode of enhanced accretion. The mean accretion rate during interval 2 can be estimated to be $`\sim 2.5\times 10^{15}\mathrm{g}\mathrm{s}^{-1}`$ \[Frank, King and Raine 1992\]; the increased luminosity observed during the flare thus implies additional accretion of at least $`5\times 10^{17}`$ g. In order to measure the instantaneous $`\dot{M}`$ throughout the flare it would be necessary to correct for the effects of anisotropic emission from the neutron star surface as well as changing observation angle with the star’s rotation. Since the geometry is essentially unknown, and beam patterns rather model-dependent, this is not yet possible. We do however note that the delay between the start of the pre-flare increase (‘A’ in Fig. 3) and the flare itself is $`\sim 150`$ s. The relative angular velocities of the disc plasma and the neutron star magnetic field lines at the inner disc radius imply a periodicity significantly different from that resulting from the neutron star’s rotation. From the mean interval 2 luminosity and the estimated surface magnetic field strength for GX 1+4 of $`\sim 3\times 10^{13}`$ G we estimate the inner disc radius as $`\approx 2.7\times 10^7`$ m. A locally dense patch of plasma rotating with Keplerian velocity in the disc would pass close to the region where plasma enters the accretion column originating from each pole every 150 s or so. Thus it is conceivable that these two events represent successive passages of the same patch through the column uptake zone in the disc. After the second passage, the patch is presumably completely transmitted to the star and so no further flaring behaviour is seen.
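The timing argument can be sketched as follows: the Keplerian period at the quoted inner disc radius, beaten against the 124 s spin period, gives a synodic period of the right order. The neutron star mass is an assumption; the inner disc radius is the value estimated in the text.

```python
import math

G = 6.674e-11               # SI units
Msun = 1.989e30
M_ns = 1.4 * Msun           # assumed neutron star mass
r_in = 2.7e7                # m, inner disc radius estimated above
P_spin = 124.36568          # s

# Keplerian orbital period at the inner disc radius
P_kep = 2 * math.pi * math.sqrt(r_in**3 / (G * M_ns))

# Synodic (beat) period: time for a disc blob to lap the corotating
# column footpoint once
P_beat = 1.0 / (1.0 / P_kep - 1.0 / P_spin)
print(P_kep, P_beat)        # roughly 65 s and 135 s
```

The resulting beat period is of order the observed $`\sim 150`$ s pre-flare/flare delay, within the considerable uncertainties of the mass and radius estimates.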
If $`\dot{M}`$ variations are a significant factor in the evolution of the flare, we might see other indications in the flare shape. If the polar region cools much more slowly than the flare timescale an asymmetric flare might be observed. Spectral model fits might also indicate cooling of the emission component originating from the pole. However, the flare appears almost completely symmetric, and spectral fits to the rising and falling parts of the flare do not exhibit cooling at any statistically significant level.

## Acknowledgments

We would like to thank Dr. K. Wu for many helpful discussions and suggestions during the preparation of this paper. The RXTE GOF provided timely and vital help and information, as well as the archival observations from 1996 and 1997. We would also like to thank the BATSE pulsar group for providing the timing data.
## 1 Introduction

Understanding the initial conditions of a relativistic heavy ion collision from first principles is perhaps the single most challenging problem facing the heavy ion community. Proposed signatures of a possible Quark-Gluon Plasma (QGP) formed in heavy ion collisions will crucially depend on the initial conditions. At the very early stages of the collision, one would need to take the full quantum mechanical nature of the nuclei into account, which is a prohibitively difficult task since it would require full knowledge of the nuclear wave functions. The McLerran-Venugopalan model offers a new and promising tool to investigate these early times by representing the initial nuclei by classical fields. At later times, when the highly off-shell modes of the field are freed by hard scattering and go on-shell, one can identify these modes with partons and consider the initial distributions of these partons in the nuclei. When calculating a typical nuclear cross section, one needs to know the distribution of a given parton kind in the nucleus, $`f_{g/A}`$. Using the QCD factorization theorems, one can then write the nuclear cross section as a convolution of the nuclear parton densities with the hard parton-parton cross section, i.e. $`\sigma _{AB}\sim f_{g/A}(x,Q)\otimes \sigma _{gg}\otimes f_{g/B}(x,Q)`$ (1) In order to obtain the nuclear parton distributions, one can take the corresponding parton distributions in a nucleon and scale them by the atomic weight $`A`$. This sounds plausible, especially at high values of $`Q^2`$, since one does not expect nuclear effects to be important at large values of $`Q^2`$. However, this expectation was proved to be too naive, and it was experimentally found that the distribution of partons in free nucleons is strikingly different from that in bound nucleons.
This difference is more pronounced in large nuclei at small values of $`x_{bj}`$ where a significant depletion in the number of partons is observed so that a simple $`A`$ scaling does not hold. Alternatively, one could measure the nuclear parton distribution in a particular experiment. Since these distributions are universal, one could then use them to predict nuclear cross sections in other experiments. The most recent experiments measuring nuclear parton distributions have been performed by the NMC collaboration at CERN SPS and by the E665 collaboration at Fermilab . However, both of these are fixed target experiments and are limited in the kinematic range in $`x`$ and $`Q^2`$ they can cover. Also, the amount of data in the kinematic region where perturbative QCD would apply is limited. A lepton-nucleus collider would go a long way towards expanding our knowledge of the nuclear parton distributions and is urgently needed. Once the nuclear parton distributions are known at a given value $`x_0`$ and $`Q_0`$, one can use the perturbative QCD evolution equations to predict the distributions at different $`x`$ and $`Q`$. However, the standard evolution equations are expected to break down at very small values of $`x`$ due to parton recombination effects. A new evolution equation (JKLW) which takes these effects into account was derived in and is the non-linear all twist generalization of the standard perturbative QCD evolution equations such as DLA DGLAP and GLR/MQ (see also for a similar equation). These non-linear effects were investigated numerically in and were found to be important in the kinematic region to be explored by the upcoming experiments at RHIC and LHC. One can also use this new evolution equation to predict the $`x`$, $`Q`$, $`b_t`$ and $`A`$ dependence of the gluon shadowing ratio defined as $`S={\displaystyle \frac{xG_A}{AxG_N}}`$ (2) where $`xG_A`$ and $`xG_N`$ are the nuclear and nucleon gluon distribution functions respectively. 
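A toy numerical version of the factorized cross section (1) illustrates how a shadowing ratio $`S<1`$ of Eq. (2), taken constant here for simplicity, suppresses a nuclear cross section quadratically. The parton density shape, the partonic cross section, and the cutoff below are invented purely for illustration.

```python
import math

def xg(x):
    """Toy nucleon gluon density x*g(x) -- an illustrative shape only."""
    return 1.9 * x**-0.3 * (1 - x)**5 if 0 < x < 1 else 0.0

def sigma_hat(shat):
    """Toy partonic gg cross section, falling with the gg invariant mass^2."""
    return 1.0 / shat

def sigma_AB(A, B, s, S_shadow=1.0, n=200, xmin=1e-3):
    """Leading-order convolution of Eq. (1) with nuclear densities taken as
    A * S * f_N; S_shadow = 1 reproduces naive A-scaling."""
    total = 0.0
    lmin, dl = math.log(xmin), -math.log(xmin) / n
    for i in range(n):
        x1 = math.exp(lmin + (i + 0.5) * dl)
        for j in range(n):
            x2 = math.exp(lmin + (j + 0.5) * dl)
            shat = x1 * x2 * s
            if shat < 4.0:          # toy hard-scale cutoff
                continue
            # f(x) dx = g(x) dx = xg(x) dlnx, so the 1/x cancels the Jacobian
            total += (A * S_shadow * xg(x1)) * (B * S_shadow * xg(x2)) \
                     * sigma_hat(shat) * dl * dl
    return total

full = sigma_AB(197, 197, 200.0**2)
shad = sigma_AB(197, 197, 200.0**2, S_shadow=0.7)
# a constant S multiplies the cross section by exactly S^2 = 0.49
```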
## 2 Shadowing of gluons

To calculate the shadowing ratio $`S`$ for gluons, we start with the following evolution equation, which was derived from the effective action for QCD at small $`x`$ in . $`{\displaystyle \frac{\partial ^2}{\partial y\partial \xi }}xG(x,Q,b_t)={\displaystyle \frac{N_c(N_c-1)}{2}}Q^2\left[1-{\displaystyle \frac{1}{\kappa }}\mathrm{exp}({\displaystyle \frac{1}{\kappa }})\mathrm{E}_1({\displaystyle \frac{1}{\kappa }})\right]`$ (3) where $`\kappa ={\displaystyle \frac{2\alpha _s}{\pi (N_c-1)Q^2}}xG(x,Q,b_t)`$ (4) and $`\mathrm{E}_1(x)`$ is the exponential integral function defined as $`\mathrm{E}_1(x)={\displaystyle \int _0^{\mathrm{\infty }}}dt{\displaystyle \frac{e^{-(1+t)x}}{1+t}},x>0`$ (5) This equation was shown to reduce to DLA DGLAP and GLR/MQ in the low gluon density limit. In we showed in detail how to solve this equation numerically. Here, we briefly review our main approximations and assumptions and refer the interested reader to for more details. In order to solve equation (3), we need to know the initial gluon distribution at some reference point $`x_0`$ and $`Q_0`$ as well as its derivative (this is due to making the semi-classical approximation). We then use a fourth-order Runge-Kutta code to calculate the gluon distribution at any other point $`x`$ and $`Q`$. In , we took $`x_0=0.05`$ and $`Q_0=0.7`$ GeV, but the effective $`Q_0`$ for most points calculated was about $`1`$ GeV. The reasons for our choices were twofold: first, it is known experimentally that the shadowing ratio is about $`1`$ in the range $`x=0.05`$–$`0.07`$. Also, in order to maximize the effects of perturbative shadowing, we needed to start from as low a value of $`Q_0`$ as possible while keeping it high enough that perturbative QCD is still valid. With these approximations, we showed in that for large nuclei, the non-linearities of the evolution equation are very important. Our results for the gluon shadowing ratio as defined in (2) are shown in Figure $`1`$.
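As a quick numerical cross-check of the structure of Eq. (3) (an illustrative sketch, not the Runge-Kutta solver used for the figures): evaluating the bracket with $`\mathrm{E}_1`$ computed from its integral definition (5) shows that it reduces to $`\kappa `$, up to $`O(\kappa ^2)`$ corrections, at low density, which is how the linear DGLAP-like limit emerges; at larger $`\kappa `$ the growth is suppressed.

```python
import math

def E1(x, n=200000, umax=50.0):
    """E1(x) = int_0^inf dt e^{-(1+t)x}/(1+t), x > 0, by midpoint quadrature
    (adequate for the x = 1/kappa ~ O(10) values used here)."""
    du = umax / n
    s = 0.0
    for i in range(n):
        t = (i + 0.5) * du
        s += math.exp(-(1 + t) * x) / (1 + t) * du
    return s

def bracket(kappa):
    """The factor [1 - (1/k) exp(1/k) E1(1/k)] on the RHS of Eq. (3)."""
    x = 1.0 / kappa
    return 1.0 - x * math.exp(x) * E1(x)

b = bracket(0.05)
print(b)   # close to kappa = 0.05, slightly below (nonlinear suppression)
```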
It is clear that gluon shadowing at RHIC and LHC will be important. Shadowing of gluons in nuclei will have significant effects on the measured observables in the upcoming experiments at RHIC and LHC. For example, initial minijet and total transverse energy production will be greatly reduced. Also, heavy quark production will be significantly affected, since its cross section is proportional to the square of the gluon density. Basically, any production cross section which involves the distribution of gluons in nuclei will be modified. Therefore, it is extremely important to understand shadowing more thoroughly. For example, the observed shadowing ratio (of $`F_2`$) does not seem to have a significant $`Q`$ dependence, while parton recombination models tend to predict a strong $`Q`$ dependence of the shadowing ratio. This could in principle be due to assuming no shadowing at the initial point, i.e. the point where the perturbative evolution starts from. To investigate this, one should include initial non-perturbative shadowing at the reference point and then evolve the distributions with both the leading-twist DGLAP and all-twist JKLW evolution equations. The difference between the two would be a clear indication of the importance of higher twist effects in understanding gluon shadowing.

Acknowledgments

I would like to thank all of my collaborators and especially Xin-Nian Wang, with whom much of the numerical work presented here has been done. This work was supported by the Director, Office of Energy Research, Office of High Energy and Nuclear Physics Division of the Department of Energy, under contract No. DE-AC03-76SF00098 and DE-FG02-87ER40328.
# Centre vortices and their friends

## 1 INTRODUCTION

One approach to QCD confinement is that of centre vortices, where the key degrees of freedom are those in the centre of the gauge group, Z($`N`$) in the case of SU($`N`$). Numerical results for the case of SU(2) (also treated here) show that this is fruitful. The main technique is that of centre projection: fixing to a gauge where the links are as close as possible to a centre element, then projecting to that element, leaving a lattice of Z(2) links; negative plaquettes are called P-vortices and are interpreted as the source of confinement. Here we examine two related issues.

## 2 RANDOM VORTICES

A random gas of infinitely long vortices will cause linear confinement. This is too simplistic, but may teach us something: indeed it gives about the right string tension from measured vortex densities. Viewed in four dimensions the vortices are defined by closed surfaces; confinement survives only so long as this surface percolates throughout the space-time manifold, and hence deconfinement may be due to loss of percolation. This has all been argued from the point of view of taking SU(2) and reducing the degrees of freedom to the bare essentials. Here we shall attempt the opposite: to construct an (approximately) random vortex picture. Truly random vortices are difficult because of the strong coupling of adjacent plaquettes via the links, even with no gauge coupling present. Our lattice and observables are as in the projected Z(2) theory. We use the following procedure:

* Create a random set of links, either $`\pm 1`$ with 50% probability (‘random start’) or set to unity (‘frozen start’).
* Let $`v=\text{density of negative plaquettes}`$ (corresponding to vortices); initially $`v\approx 0.5`$ or $`v\approx 0`$. Pick a target $`v=v_T`$ chosen to correspond to the mean density of P-vortices in SU(2). At $`\beta =2.3`$, $`v_T\approx 0.0945`$; at $`\beta =2.4`$, $`v_T\approx 0.0602`$.
* Pick a link at random in the lattice.
Flip the sign of this link either (i) if it does not alter $`v`$ or (ii) if it alters $`v`$ towards $`v_T`$.
* Continue until $`v_T`$ is achieved. Because of condition (i) it is useful to attempt to flip links already considered. In the case of the frozen start, we have tried further to make the vortices independent by making sets of flips which do not affect the overall vortex density.
* Generate many configurations of this sort and analyse them as a Monte Carlo sample.

Note that here there is no Markov process, and hence no fluctuating action; in a sense our ensemble is microcanonical. There is a bias in this procedure because we flip links attached to sets of plaquettes predominantly of one sign; hence our vortices are not truly random. We could instead have chosen the target $`v`$ to correspond to the SU(2) string tension on the assumption of truly random vortices. Our actual choice reflects a desire to look at the cluster properties of vortices.

### 2.1 Results

Fig. 1 shows results on bulk lattices, $`12^4`$ for $`\beta =2.3`$ and $`16^4`$ for $`\beta =2.4`$. The string tension is shown both for the two ideal cases (from a large-scale run in full SU(2) and for fully random vortices) and as measured from vortices. In the quasi-random case with the random start, Creutz ratios show a string tension which for small loops lies near the expected value ($`2v`$) but which increases for larger loops. The results shown are from a full potential calculation where this increase tends to level out, although with some curvature, giving a rather larger string tension; the functional form fitted to is necessarily somewhat ad hoc and here we have included a quadratic part. Furthermore, in the frozen start the vortices lack confinement and hence show in effect a repulsion. These are sizeable effects; a more truly random method will be needed for a more realistic comparison. An effective action would also presumably help. Nonetheless, we examine cluster properties by methods similar to ref.
, dividing vortices into two clusters where the surfaces touch only along an edge. This difference between touching and joining is a lattice effect which makes a noticeable impact: almost tripling the number of vortices not in the largest (percolating) cluster for the case of SU(2) with $`\beta =2.3`$ with the random start, and increasing the largest cluster size dramatically for the frozen start. Of course we would prefer to detect vortices directly with their physical size.

### 2.2 The deconfining transition

We have also examined a lattice in the deconfined phase, using Polyakov loops $`L`$ as the order parameter, although it is perhaps unlikely that homogeneous random vortices alone can be sufficient to explain deconfinement. The lattice results show that $`\left|L\right|`$ goes to 1 for small vortex density, but this is expected simply due to the fact that neighbouring loops are effectively Wilson loops with an area equal to the finite-temperature extent of the lattice, and hence correlated by the vanishing string tension. There is no sign of a phase transition, nor finite-size scaling behaviour. It may well be important to have the vortex surface orientated predominantly parallel to, and hence not piercing, temporal Wilson loops; it is not clear such an effect can come from just the Z(2) degrees of freedom.

## 3 PROBING WILSON LOOPS

The plaquette-sized P-vortices are expected to have a topological effect on Wilson loops, depending only on whether a vortex pierces the loop. We investigate this by looking at the correlations between P-vortices and Wilson loops. Our method is the following (fig. 1). We take a plaquette $`P`$ on the centre-projected lattice within a Wilson loop $`W`$, a certain distance from the centre of the loop. For present purposes we shall simply take the distance $`r`$ to be the number of plaquettes diagonally from the centre of the loop, as in the diagram.
If $`P=+1`$, we ignore $`W`$ and pass on to the next one; if $`P=-1`$ we examine the value of $`W`$. After sampling over many configurations, we can form an average $`W_{P=-1}(r)`$. Note that in examining $`P`$ we take no account whatsoever of other centre plaquettes inside (or outside) $`W`$; the effect is purely the correlation between the Wilson loop and a centre vortex at the given position, whether or not the loop is pierced by other vortices. To achieve sufficiently large correlations we are restricted to loops of sizes that have $`𝒪(1)`$ vortices inside. Clearly, if there is no correlation, $`W_{P=-1}(r)=\left\langle W\right\rangle `$. As a control, we have performed the same experiment replacing $`P`$ with the sign of a gauge plaquette $`G`$ located in the same place. The results (fig. 2) show that $`W_{P=-1}(r)`$ is rather flat inside the loop, but with a significant correlation. In contrast, the values of $`W_{G=-1}(r)`$ vary much more widely over the inside of the loop. This is a sign that the dominant effect of the vortex is given by whether or not it pierces the loop, regardless of where it does so, an effect not expected and not shown by the sign of the full gauge plaquette. Both probes become uncorrelated very quickly when outside the loops. For gauge plaquettes this can be understood from strong coupling; such plaquettes only appear in quite high order. For P-plaquettes the natural interpretation is that vortices not piercing the Wilson loop have no effect on it. However, if the vortices really correspond to extended physical objects, it is not clear why the change from inside to outside should be so sharp; this raises questions about the size of the vortex core.
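Two expectations invoked above can be checked in a toy model in which each plaquette independently carries a vortex with probability $`v`$ (an idealisation, not the measured ensembles): every Creutz ratio then equals $`\mathrm{ln}(12v)2v`$ — i.e. −ln(1−2v) ≈ 2v — and conditioning on a vortex piercing the loop flips the loop expectation to $`(12v)^{A1}`$, i.e. −(1−2v)^(A−1), independently of where the probe plaquette sits.

```python
import math
import random

random.seed(2)
v = 0.0945                       # P-vortex density at beta = 2.3
NSAMP = 100000

def wilson_mean(area):
    """MC estimate of <W> for a loop of `area` plaquettes, each independently
    flipping the loop sign with probability v."""
    tot = 0
    for _ in range(NSAMP):
        flips = sum(random.random() < v for _ in range(area))
        tot += -1 if flips % 2 else 1
    return tot / NSAMP

def creutz_ratio(R):
    return -math.log(wilson_mean(R * R) * wilson_mean((R - 1) ** 2)
                     / wilson_mean(R * (R - 1)) ** 2)

def w_given_vortex(area, probe=0):
    """<W> conditioned on a vortex at one fixed probe plaquette inside the
    loop; by symmetry the result does not depend on which plaquette is probed."""
    tot = n = 0
    for _ in range(NSAMP):
        pierced = [random.random() < v for _ in range(area)]
        if not pierced[probe]:
            continue
        tot += -1 if sum(pierced) % 2 else 1
        n += 1
    return tot / n

sigma = -math.log(1 - 2 * v)     # exact Creutz ratio, ~2v = 0.189
chi = creutz_ratio(2)
w_cond = w_given_vortex(9)       # 3x3 loop: expect -(1-2v)^8
```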
# The energy spectrum of complex periodic potentials of the Kronig-Penney type

## Abstract

We consider a complex periodic PT-symmetric potential of the Kronig-Penney type, in order to elucidate the peculiar properties found by Bender et al. for potentials of the form $`V=i(\mathrm{sin}x)^{2N+1}`$, and in particular the absence of antiperiodic solutions. In this model we show explicitly why these solutions disappear as soon as $`V^{*}(x)\ne V(x)`$, and spell out the consequences for the form of the dispersion relation.

In a recent paper Bender et al. showed that periodic potentials which were complex but obeyed PT symmetry possessed real band spectra, with, however, one striking difference from the case of real periodic potentials, namely that there were no antiperiodic solutions, i.e. Bloch waves with lattice wave vector $`k=(2n+1)\pi /a`$. This result was obtained from detailed numerical studies of potentials of the form $`V(x)=i\mathrm{sin}^{2N+1}(x)`$, and it was found necessary to work to extremely high accuracy to detect the absence of such solutions. In the present note we supplement this work by an analytical solution to a complex, PT-symmetric version of the Kronig-Penney model, which illustrates the phenomenon very clearly. It therefore seems a generic property of non-Hermitian, but PT-symmetric potentials, although an analytic proof is still not available.
In the standard Kronig-Penney model the potential consists of a periodic string of delta functions of the form $$V(x)=\alpha \underset{n}{}\delta (x-na).$$ (1) The simplest way to find the energy eigenvalues is the Floquet procedure, described in , based on two solutions which in the region $`0\le x<a`$ are $$u_1(x)=\mathrm{cos}\kappa x,u_2=(1/\kappa )\mathrm{sin}\kappa x.$$ (2) The lattice wave vector $`k`$, which occurs in the phase $`\mathrm{e}^{ika}`$ of the Bloch wave function at $`x=a_+`$, is given by $$\mathrm{cos}ka=\frac{1}{2}(u_1(a)+u_2^{\prime }(a)).$$ (3) Crossing the delta function at $`x=a`$, $`u_2^{\prime }`$ has the discontinuity $`\alpha u_2(a_{-})`$, giving the well-known condition $$\mathrm{cos}ka=\mathrm{cos}\kappa a+\frac{\alpha }{2\kappa }\mathrm{sin}\kappa a.$$ (4) We can construct a non-Hermitian PT-symmetric version of this model by taking the coefficients of the delta functions to be pure imaginary and alternating in sign, and arranging them symmetrically about the origin: $$V(x)=i\alpha \underset{n}{}(-1)^n\delta (x-\frac{1}{2}(n+\frac{1}{2})a).$$ (5) The periodicity of the potential is still $`a`$. To perform the Floquet analysis it is easier to shift the origin to $`\frac{1}{4}a`$, so that the two wave functions initially take the same form as in Eq. (2) and then track their discontinuities through the delta functions at $`x=\frac{1}{2}a`$ and $`x=a`$. The net result is that the Kronig-Penney expression for $`\mathrm{cos}ka`$ is replaced by $$\mathrm{cos}ka=\mathrm{cos}\kappa a+\frac{\alpha ^2}{2\kappa ^2}\mathrm{sin}^2\frac{1}{2}\kappa a,$$ (6) from which it is immediately apparent that $`\mathrm{cos}ka`$ is strictly greater than -1, and in particular that there are no antiperiodic solutions with $`k=(2n+1)\pi /a`$. A generalization of Eq. (5) is the potential $$V(x)=\alpha \underset{n}{}e^{(-1)^ni\theta }\delta (x-\frac{1}{2}(n+\frac{1}{2})a),$$ (7) which reduces to Eq. 
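As a quick numerical illustration of the dispersion relation (4), one can scan over $`\kappa `$ and mark the values for which the right-hand side lies in $`[-1,1]`$, i.e. the allowed bands (a small sketch; the units $`a=1`$ and the strength $`\alpha =5`$ are arbitrary choices made here for illustration):

```python
import numpy as np

a = 1.0      # lattice spacing (arbitrary units)
alpha = 5.0  # delta-function strength, chosen for illustration

kappa = np.linspace(1e-6, 6 * np.pi / a, 20000)
# Right-hand side of Eq. (4): cos(ka) as a function of kappa
f = np.cos(kappa * a) + (alpha / (2 * kappa)) * np.sin(kappa * a)

# Allowed bands are where a real k exists, i.e. |cos(ka)| <= 1
allowed = np.abs(f) <= 1.0
print(f"fraction of kappa grid inside allowed bands: {allowed.mean():.2f}")
```

As $`\kappa \to 0`$ the expression tends to $`1+\alpha a/2>1`$, so for $`\alpha >0`$ the spectrum begins with a forbidden region, and bands and gaps then alternate in the familiar Kronig-Penney pattern.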
(5) for $`\theta =\pi /2`$ and to the standard Kronig-Penney model, with spacing $`\frac{1}{2}a`$, for $`\theta =0`$. It is then a reasonable question to ask at what value of $`\theta `$ the antiperiodic solutions disappear: the answer is perhaps surprising. Beginning, as before, with the two solutions of Eq. (2), and tracking the discontinuities of their derivatives through the (shifted) delta functions at $`x=\frac{1}{2}a`$ and $`x=a`$, we arrive at the equation $$\mathrm{cos}ka=\mathrm{cos}\kappa a+\frac{\alpha }{\kappa }\mathrm{sin}\kappa a\mathrm{cos}\theta +\frac{\alpha ^2}{2\kappa ^2}\mathrm{sin}^2\frac{1}{2}\kappa a,$$ (8) which reduces to (6) for $`\theta =\pi /2`$. It implies the following relation for $`\mathrm{cos}ka+1`$: $$\mathrm{cos}ka+1=2\left|\mathrm{cos}\frac{1}{2}\kappa a+\frac{\alpha }{2\kappa }e^{i\theta }\mathrm{sin}\frac{1}{2}\kappa a\right|^2,$$ (9) which is non-zero as soon as $`\theta \neq 0,m\pi `$. Thus the antiperiodic solution disappears immediately, rather than at some finite critical angle between 0 and $`\pi /2`$. As noted above, whereas for $`\theta \neq 0`$ the repeat distance is $`a`$, it reduces to $`\frac{1}{2}a`$ at $`\theta =0`$. Correspondingly (8) can be rewritten as $$\mathrm{cos}\frac{1}{2}ka=\mathrm{cos}\frac{1}{2}\kappa a+\frac{\alpha }{2\kappa }\mathrm{sin}\frac{1}{2}\kappa a,$$ (10) which starts off positive, passes through $`\mathrm{cos}\frac{1}{2}ka=0`$ and then becomes negative. However, the moment $`\theta `$ becomes non-zero this solution disappears, and by continuity $`\mathrm{cos}\frac{1}{2}ka`$ must always remain positive, i.e. $$\mathrm{cos}\frac{1}{2}ka=\left|\mathrm{cos}\frac{1}{2}\kappa a+\frac{\alpha }{2\kappa }e^{i\theta }\mathrm{sin}\frac{1}{2}\kappa a\right|.$$ (11) The situation is illustrated in Fig. 1, where we have taken $`\theta =0.1`$ rad. In Fig. 2 we show the resulting band structure for the same value of $`\theta `$. The Brillouin zone boundary is at $`k=\pi /a`$, in the middle of the Brillouin zone for $`\theta =0`$. 
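The identity between Eqs. (8) and (9) is easy to confirm numerically (a sketch with arbitrary parameter choices $`\alpha =2`$, $`a=1`$; random samples of $`\kappa `$ and $`\theta `$):

```python
import numpy as np

a, alpha = 1.0, 2.0
rng = np.random.default_rng(1)
kappa = rng.uniform(0.1, 20.0, size=1000)
theta = rng.uniform(0.0, np.pi, size=1000)

# Eq. (8): cos(ka) in terms of kappa and theta
cos_ka = (np.cos(kappa * a)
          + (alpha / kappa) * np.sin(kappa * a) * np.cos(theta)
          + (alpha**2 / (2 * kappa**2)) * np.sin(kappa * a / 2) ** 2)

# Eq. (9): the same quantity + 1, written as a manifestly non-negative modulus
rhs = 2 * np.abs(np.cos(kappa * a / 2)
                 + (alpha / (2 * kappa)) * np.exp(1j * theta)
                 * np.sin(kappa * a / 2)) ** 2

err = np.max(np.abs(cos_ka + 1 - rhs))
print(f"max |(8) + 1 - (9)|: {err:.2e}")  # agreement to rounding error
```

Since the right-hand side of (9) vanishes only when both the real and imaginary parts of the bracket vanish, which for $`\theta \neq 0,m\pi `$ is impossible at finite $`\alpha `$, $`\mathrm{cos}ka=-1`$ is indeed never reached.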
In contrast to the usual situation, as exemplified by the dispersion relation for phonons in a diatomic lattice, where a gap appears at the boundary, here the energy levels merge together before reaching the boundary, thus forming one continuous, double-valued band. At the point where the levels merge the effective mass is zero. Consideration of this toy model seems to show that there is something generic about the dispersion relations for periodic potentials with $`V^{*}(x)=V(-x)\neq V(x)`$. In this case we have an analytic demonstration that as soon as $`\theta \neq 0`$ there is no antiperiodic solution and the energy levels do not reach the Brillouin zone boundaries. A general proof of this property, not depending on the specific form of $`V`$, would be very welcome. Acknowledgement I should like to thank Prof. C. M. Bender for his hospitality at Washington University, where this work was begun.
# An optically driven quantum dot quantum computer ## Abstract We propose a quantum computer structure based on coupled asymmetric single-electron quantum dots. Adjacent dots are strongly coupled by means of electric dipole-dipole interactions enabling rapid computation rates. Further, the asymmetric structures can be tailored for a long coherence time. The result maximizes the number of computation cycles prior to loss of coherence. The possibility that a computer could be built employing the laws of quantum physics has stimulated considerable interest in searching for useful algorithms and a realizable physical implementation. Two useful algorithms, exhaustive search and factorization , have been discovered; others have been shown possible. Various approaches have been explored for possible physical implementations, including trapped ions , cavity quantum electrodynamics , ensemble nuclear magnetic resonance , small Josephson junctions , optical devices incorporating beam splitters and phase shifters , and a number of solid state systems based on quantum dots . There are many advantages to quantum computing; however, the requirements for such computers are very stringent, perhaps especially so for solid state systems. Nevertheless, solid state quantum computers are very appealing relative to other proposed physical implementations. For example, semiconductor-manufacturing technology is immediately applicable to the production of this type of quantum computer, which is readily scalable owing to its artificially fabricated nature. In this paper, we propose a manufactured solid state implementation based on advanced nanotechnology that appears physically realizable. It consists of an ensemble of ”identical” semiconductor pillars, each consisting of a vertical stack of coupled asymmetric GaAs/AlGaAs single-electron quantum dots of differing sizes and material compositions so that each dot possesses a distinct energy structure. 
Qubit registers are based on the ground and first excited states of a single electron within each quantum dot. The asymmetric dots produce large built-in electrostatic dipole moments between the ground and excited states, and electrons in adjacent dots are coupled through an electric dipole-dipole interaction. The dipole-dipole coupling between electrons in nonadjacent dots is roughly ten times weaker than the coupling between adjacent dots. Parameters of the structure can be chosen to produce a well-resolved spectrum of distinguishable qubits with adjacent qubits strongly coupled. The resulting ensemble of quantum computers may also be tuned electrically through metal interconnect to produce ”identical” pillars. In addition, the asymmetric potential can be designed so that dephasing due to electron-phonon scattering and spontaneous emission is minimized. The combination of strong dipole-dipole coupling and long dephasing times make it possible to perform many computational steps. Quantum computations may be carried out in complete analogy with the operation of a NMR quantum computer, including the application of refocusing pulses to decouple qubits not involved with a current step in the computational process . Final readout of the amplitude and phase of the qubit states can be achieved through quantum state holography. Amplitude and phase information are extracted through mixing the final state with a reference state generated in the same system by an additional delayed laser pulse and detecting the total time- and frequency- integrated fluorescence as a function of the delay . Means of characterizing the required laser pulses are described in Ref. . Our quantum register is similar to the n-type single-electron transistor structure recently reported by Tarucha et al. . In Tarucha’s structure, source and drain are at the top and bottom of a free standing pillar with a quantum well in the middle and a cylindrical gate wrapped around the center of the pillar. 
In our design, a stacked series of asymmetric GaAs/AlGaAs quantum wells are arrayed along the pillar axis by first epitaxially growing planar quantum wells in a manner similar to that employed to produce surface emitting lasers . By applying a negative gate bias that depletes carriers near the surface, a parabolic electrostatic potential is formed which provides confinement in the radial direction. In the strong depletion regime, the curvature of the parabolic radial potential is a function of the doping concentration. To facilitate coupling to the laser field, the gate is made transparent using a reverse damascene process. The simultaneous insertion of a single electron in each dot is accomplished by lining up the quantum dot ground state levels so they lie close to the Fermi level; a single electron is confined in each dot over a finite range of the gate voltage due to shell filling effects . Strong electrostatic confinement in the radial direction serves to keep the quantum dot electrons from interacting with the gate electrode, phonon surface modes, localized surface impurities, and interface roughness fluctuations. The electrostatic potential near the pillar axis is smooth in the presence of small fluctuations in the pillar radius. By tuning the gate voltage, it is anticipated that size fluctuations between different pillars can be compensated for. In order to derive the structure parameters and estimate the dependence of the functional performance of this device, we assume that the quantum dot electron potential, V(r), can be expressed in cylindrical coordinates as $`V(\stackrel{}{r})=V(z)+V(\rho )`$, where $`V(\rho )`$ is a radial potential and $`V(z)`$ is the potential along the growth direction. This separable potential assumption is a good approximation in the strong depletion regime where only a single electron resides in each dot. 
The assumption of a separable potential is commonly used in the study of quantum dot structures and enables us to consider the $`z`$ and $`\rho `$ motions separately . The z-directional potential $`V(z)`$, shown schematically in the inset of Fig. 1, is a step potential formed by a layer of $`Al_xGa_{1-x}As`$ of thickness $`B`$ ($`0<z<B`$) and a layer of $`GaAs`$ of thickness $`L-B`$ ($`B<z<L`$). The resulting asymmetric quantum dot/well is confined by $`Al_yGa_{1-y}As`$ barriers with $`y>x`$. The asymmetry of this structure is parameterized by the ratio $`B/L`$ where $`0<B/L<1`$. In the effective mass approximation, the qubit wavefunctions are $`|i\rangle =R(\rho )\psi _i(z)u_s(\stackrel{}{r})`$ ($`i=0,1`$). Here $`R(\rho )`$ is the ground state of the radial envelope function, $`\psi _i(z)`$ is the envelope function along $`z`$, and $`u_s(\stackrel{}{r})`$ is the $`s`$-like zone center Bloch function including electron spin. For simplicity, we assume complete confinement by the $`Al_yGa_{1-y}As`$ barriers along the z direction. Then, the envelope function $`\psi _i(z)`$ is obtained by solving the time-independent Schrödinger equation subject to the boundary conditions $`\psi _i(0)=\psi _i(L)=0`$. The energies of the qubit wavefunctions are given by $`E=E_\rho +E_i`$ where $`E_\rho `$ is the energy associated with $`R(\rho )`$ and $`E_i`$ is the energy associated with $`\psi _i(z)`$. Since the present study primarily concerns coupling along the growth direction, analyses are conducted only in this direction. Figure 1 shows the probability density, $`|\psi _i(z)|^2`$, as a function of position, $`z`$, for the two qubit states $`|0\rangle `$ and $`|1\rangle `$ in a $`20nm`$ $`GaAs/Al_{0.3}Ga_{0.7}As`$ asymmetric quantum dot. The barrier thickness $`B=15nm`$ and the overall length of the dot is $`L=20nm`$. 
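The qubit energies and envelope functions for such a hard-wall step potential can be estimated with a simple finite-difference diagonalization (a sketch of our own, not the authors' code; the conduction-band offset of about 0.23 eV for $`x=0.3`$ and the effective mass $`0.067m_0`$ are assumed textbook GaAs values):

```python
import numpy as np

# Asymmetric GaAs/Al(0.3)Ga(0.7)As dot: AlGaAs step for 0 < z < B,
# GaAs for B < z < L, hard walls at z = 0 and z = L (complete
# confinement, as assumed in the text).
L = 20e-9              # m
B = 15e-9              # m (B/L = 0.75)
V0 = 0.23 * 1.602e-19  # conduction-band offset for x = 0.3 (assumed), J
meff = 0.067 * 9.109e-31
hbar = 1.055e-34

n = 800
z = np.linspace(0, L, n + 2)[1:-1]  # interior grid points
dz = z[1] - z[0]
V = np.where(z < B, V0, 0.0)

# Tridiagonal Hamiltonian: -(hbar^2 / 2m) d^2/dz^2 + V(z)
t = hbar**2 / (2 * meff * dz**2)
H = (np.diag(2 * t + V)
     - np.diag(t * np.ones(n - 1), 1)
     - np.diag(t * np.ones(n - 1), -1))
E, psi = np.linalg.eigh(H)

dE_meV = (E[1] - E[0]) / 1.602e-19 * 1e3
p0_gaas = np.sum(psi[:, 0][z >= B] ** 2)  # ground-state weight in GaAs
print(f"transition energy dE ~ {dE_meV:.0f} meV")
print(f"ground-state probability in GaAs region: {p0_gaas:.2f}")
```

With these assumed parameters the ground state indeed sits mostly in the GaAs pocket and the transition energy comes out on the ~100 meV scale quoted later in the text, though the precise numbers depend on the band-offset value used.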
By choosing $`B/L=0.75`$ and $`x=0.3`$, it is found that the ground state wavefunction $`|0\rangle `$ is strongly localized in the $`GaAs`$ region while the $`|1\rangle `$ wavefunction is strongly localized in the $`Al_{0.3}Ga_{0.7}As`$ barrier. By appropriately choosing the asymmetric quantum dot parameters, the qubit wavefunctions can be spatially separated and a large difference in the electrostatic dipole moments can be achieved. The transition energy $`\mathrm{\Delta }E=E_1-E_0`$ between $`|1\rangle `$ and $`|0\rangle `$ is shown in Fig. 2 as a function of $`B/L`$ in a $`20nm`$ $`GaAs/Al_xGa_{1-x}As`$ asymmetric quantum dot ($`L=20nm`$). Several values of Al concentration $`x`$ are considered. It is clear from this figure that the transition energy can be tailored substantially by varying the asymmetry parameter. With three parameters available for adjustment ($`B`$, $`L`$, and $`x`$), we can make $`\mathrm{\Delta }E`$ unique for each dot in the register. In this way, we can address a given dot by using laser light with the correct photon energy. The electric field from an electron in one dot shifts the energy levels of electrons in adjacent dots through electrostatic dipole-dipole coupling. By appropriate choice of coordinate systems, the dipole moments associated with $`|0\rangle `$ and $`|1\rangle `$ can be written equal in magnitude but oppositely directed. The dipole-dipole coupling energy is then defined as $$V_{dd}=2\frac{|d_1||d_2|}{ϵ_rR_{12}^3},$$ (1) where $`d_1`$ and $`d_2`$ are the ground state dipole moments in the two dots, $`ϵ_r=12.9`$ is the dielectric constant for $`GaAs`$, and $`R_{12}`$ is the distance between the dots. Figure 3 shows the dipole-dipole coupling energy, $`V_{dd}`$, between two asymmetric $`GaAs/Al_xGa_{1-x}As`$ quantum dots of widths $`L_1=19nm`$ and $`L_2=21nm`$ separated by a $`10nm`$ $`Al_yGa_{1-y}As`$ barrier. The coupling energy is plotted as a function of $`B/L`$ for several values of $`x`$ where $`B/L`$ and $`x`$ are taken to be the same in both dots. 
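With Eq. (1) one can get a feel for the magnitudes involved (a rough sketch; the dipole lengths of about 4 nm and the 30 nm centre-to-centre spacing below are illustrative assumptions of ours, not values taken from the paper's figures):

```python
# Dipole-dipole coupling of Eq. (1), evaluated in "practical" units where
# e^2 / (4*pi*eps0) = 1.44 eV nm and dipole moments are written as e * z.
e2_eVnm = 1.44     # eV nm
eps_r = 12.9       # GaAs dielectric constant
z1 = z2 = 4.0      # assumed dipole lengths in nm (hypothetical values)
R12 = 30.0         # assumed centre-to-centre dot separation in nm

V_dd = 2 * e2_eVnm * z1 * z2 / (eps_r * R12**3)  # eV
print(f"V_dd ~ {V_dd * 1e3:.2f} meV")
```

Dipole lengths of a few nanometres at a ~30 nm spacing land in the 0.1 meV range, i.e. the same order as the coupling energies discussed below; the steep $`1/R_{12}^3`$ falloff is also what makes nonadjacent-dot coupling much weaker.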
The dipole-dipole coupling energies are a strongly peaked function of the asymmetry parameter, $`B/L`$. From the figure, we see that values of $`V_{dd}\simeq 0.15meV`$ can be achieved. By way of comparison, the maximum dipole-dipole coupling energy that can be achieved with DC biased symmetric quantum dots ($`B/L=0`$) is $`V_{dd}=0.038meV`$ at a DC bias field of $`F=112kV/cm`$. Quantum dot electrons can interact with the environment through the phonon field, particularly the longitudinal-optical (LO) and acoustic (LA) phonons. The LO phonon energy, $`\hbar \omega _{LO}`$, lies in a narrow band around $`36.2meV`$. As long as the quantum dot energy level spacings lie outside this band, LO phonon scattering is strongly suppressed by the phonon bottleneck effect. Acoustic phonon energies are much smaller than the energy difference, $`\mathrm{\Delta }E`$, between qubit states. Thus acoustic phonon scattering requires multiple emission processes which are also very slow. Theoretical studies on phonon bottleneck effects in GaAs quantum dots indicate that LO and LA phonon scattering rates including multiple phonon processes are slower than the spontaneous emission rate provided that the quantum dot energy level spacing is greater than $`1`$ meV and, at the same time, avoids a narrow window (of a few meV) around the LO phonon energy . While dephasing via interactions with the phonon field can be strongly suppressed by proper designing of the structure, quantum dot electrons are still coupled to the environment through spontaneous emission and this is the dominant dephasing mechanism. Decoherence resulting from spontaneous emission ultimately limits the total time available for a quantum computation . Thus, it is important that the spontaneous emission lifetime be large. 
The excited state lifetime, $`T_d`$, against spontaneous emission is $$T_d=\frac{3\hbar (\hbar c)^3}{4e^2D^2\mathrm{\Delta }E^3},$$ (2) where $`D=\langle 0|z|1\rangle `$ is the dipole matrix element between $`|0\rangle `$ and $`|1\rangle `$. Figure 4 shows the spontaneous emission lifetime of an electron in qubit state $`|1\rangle `$ for a $`20nm`$ $`GaAs/Al_xGa_{1-x}As`$ quantum dot as a function of asymmetry parameter, $`B/L`$, for several values of Aluminum concentration, $`x`$. It is immediately obvious from Fig. 4 that the lifetime depends strongly on $`B/L`$. Depending on the value of $`x`$ chosen, the computed lifetime can achieve a maximum of between 4000 $`ns`$ and 6000 $`ns`$. In general, the maximum lifetime increases with $`x`$. In Eq. (2), the lifetime is inversely proportional to $`\mathrm{\Delta }E^3`$ and $`D^2`$, but the sharp peak seen in Fig. 4 is due *primarily* to a pronounced minimum in $`D`$. In contrast, the spontaneous emission lifetime in a $`20nm`$ symmetric quantum dot under a DC bias of $`F=112kV/cm`$ is only $`1073ns`$. Based on these results, we can estimate parameters for a solid state quantum register containing a stack of several asymmetric $`GaAs/Al_{0.3}Ga_{0.7}As`$ quantum dots in the $`L\simeq 20nm`$ range separated by $`10nm`$ $`Al_yGa_{1-y}As`$ barriers ($`y>0.4`$). An important design goal is obtaining a large spontaneous emission lifetime and a large dipole-dipole coupling energy. From Figs. 3 and 4, we see that both can be achieved by selecting an asymmetry parameter, $`B/L=0.8`$. This gives us a spontaneous emission lifetime $`T_d=3100ns`$ and a dipole-dipole coupling energy $`V_{dd}=0.14meV`$. The transition energy between the qubit states is on the order of $`100meV`$ ($`\lambda =12.4\mu m`$). In a quantum computation, the quantum register is optically driven by a laser as described in Ref. . In our example, we require a tunable IR laser in the $`12\mu m`$ range so we can individually address various transitions between coupled qubit states. 
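Eq. (2) is straightforward to evaluate (a sketch in Gaussian-style units with $`e^2=\alpha \hbar c`$; the dipole matrix element $`D=1.5`$ nm used below is a hypothetical value of the right order, not one read off the paper's figures):

```python
hbar_eVs = 6.582e-16    # hbar in eV s
hbarc = 197.3           # hbar*c in eV nm
alpha_fs = 1 / 137.036  # fine-structure constant, e^2 = alpha * hbar * c

def lifetime_ns(D_nm: float, dE_eV: float) -> float:
    """Spontaneous-emission lifetime T_d = 3*hbar*(hbar*c)^3 / (4*e^2*D^2*dE^3)."""
    Td_s = 3 * hbar_eVs * hbarc**3 / (4 * alpha_fs * hbarc * D_nm**2 * dE_eV**3)
    return Td_s * 1e9

Td = lifetime_ns(1.5, 0.100)  # D = 1.5 nm (assumed), dE = 100 meV
print(f"T_d ~ {Td:.0f} ns")   # microsecond scale, as in the text

# The 1/dE^3 dependence: doubling the transition energy cuts T_d by 8.
ratio = lifetime_ns(1.5, 0.100) / lifetime_ns(1.5, 0.200)
print(f"T_d(dE) / T_d(2dE) = {ratio:.1f}")
```

A nanometre-scale $`D`$ at $`\mathrm{\Delta }E\simeq 100`$ meV gives lifetimes around a microsecond, consistent with the 1073-6000 ns range quoted above; the sharp maximum in Fig. 4 then corresponds to the asymmetry at which $`D`$ passes through its minimum.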
Following initial state preparation, which can be achieved by cooling the structure to low temperature, the computation is driven by applying a series of coherent optical pulses at appropriate intervals. The $`\pi `$-pulse duration, $`T_p`$, must be less than the dephasing time, $`T_d`$, so that many computation steps can be performed before decoherence sets in. At the same time, the pulse linewidth must be narrow enough so that we can selectively excite transitions separated by the dipole-dipole coupling energy, $`V_{dd}`$. For transform limited ultrashort pulses, the linewidth-pulsewidth product is given by the Heisenberg uncertainty principle. Combining these two restraints, $`T_p`$ must satisfy $$\frac{\hbar }{2V_{dd}}\le T_p\le T_d.$$ (3) Using $`V_{dd}`$ and $`T_d`$ for our structure, we obtain $`2.4ps\le T_p\le 3.1\times 10^6ps`$. For highly biased symmetric quantum dots, it is $`8.7ps\le T_p\le 1.1\times 10^6ps`$ using values of $`V_{dd}=0.038meV`$ and $`T_d=1073ns`$. Hence, the number of computational steps that can be executed before decoherence sets in (i.e., ratio of the upper and lower limits in the inequality) is an order of magnitude larger for the proposed asymmetric structure. In summary, we have proposed a solid state quantum register based on a vertically coupled asymmetric single-electron quantum dot structure that overcomes the problems of weak dipole-dipole coupling and short decoherence times encountered in earlier quantum dot computing schemes based on biased symmetric dots. This structure may provide a realistic candidate for quantum computing in solid state systems. This work was supported, in part, by the Defense Advanced Research Project Agency and the Office of Naval Research.
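The timing budget of Eq. (3) can be checked directly with the numbers quoted above ($`\hbar =6.582\times 10^{-13}`$ meV s):

```python
hbar_meVs = 6.582e-13  # hbar in meV s

def step_budget(V_dd_meV: float, T_d_ns: float):
    """Lower/upper bounds on the pi-pulse width from Eq. (3), in ps,
    and the resulting number of computational steps (their ratio)."""
    Tp_min_ps = hbar_meVs / (2 * V_dd_meV) * 1e12
    Tp_max_ps = T_d_ns * 1e3
    return Tp_min_ps, Tp_max_ps, Tp_max_ps / Tp_min_ps

asym = step_budget(0.14, 3100)   # asymmetric dots
symm = step_budget(0.038, 1073)  # DC-biased symmetric dots

print(f"asymmetric: {asym[0]:.1f} ps <= T_p <= {asym[1]:.1e} ps,"
      f" ~{asym[2]:.1e} steps")
print(f"symmetric : {symm[0]:.1f} ps <= T_p <= {symm[1]:.1e} ps,"
      f" ~{symm[2]:.1e} steps")
print(f"improvement factor ~ {asym[2] / symm[2]:.0f}")
```

The computed bounds reproduce the 2.4 ps and 8.7 ps lower limits quoted in the text, and the step-count improvement comes out close to a factor of ten.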
# The extra high energy cosmic rays spectrum in view of the decay of proton into Planck neutrinos D.L. Khokhlov Sumy State University, R.-Korsakov St. 2 Sumy 244007 Ukraine e-mail: khokhlov@cafe.sumy.ua ## Abstract It is assumed that proton decays into Planck neutrinos. The energy of the Planck neutrino is equal to the Planck mass in the preferred rest frame. In the frame moving relative to the preferred rest frame the energy of the Planck neutrino is reduced by the corresponding Lorentz factor. The lifetime of proton depends on the kinetic energy of proton relative to the preferred rest frame. The time required for proton travel from the source to the earth defines the limiting energy. Protons with the energies more than the limiting energy decay and do not contribute to the EHECRs spectrum. It is shown that EHECRs with the energies $`E>3\times 10^{18}\mathrm{eV}`$ can be identified with the Planck neutrinos. The energy spectrum of extra high energy cosmic rays (EHECRs) above $`10^{10}\mathrm{eV}`$ can be divided into three regions: two ”knees” and one ”ankle” . The first ”knee” appears around $`3\times 10^{15}\mathrm{eV}`$ where the spectral power law index changes from $`2.7`$ to $`3.0`$. The second ”knee” is somewhere between $`10^{17}\mathrm{eV}`$ and $`10^{18}\mathrm{eV}`$ where the spectral slope steepens from $`3.0`$ to around $`3.3`$. The ”ankle” is seen in the region of $`3\times 10^{18}\mathrm{eV}`$ above which the spectral slope flattens out to about $`2.7`$. It was proposed that, at the Planck scale $`m_{Pl}=1.2\times 10^{19}\mathrm{GeV}`$, the decay of electron into neutrino (Planck neutrino) occurs $$e\rightarrow \nu _{Pl}.$$ (1) Within the framework of electrodynamics, hadrons can be described by the structure of 5 electrons . The structure of proton is given by $$p=e^+e^{-}e^+e^{-}e^+.$$ (2) From this, at the Planck scale, proton decays into 5 Planck neutrinos. 
The lifetime of proton relative to the decay into Planck neutrinos is given by $$t_p=t_{Pl}\left(\frac{m_{Pl}}{2m_p}\right)^5$$ (3) where the factor 2 takes into account the transition from the massive particle to the massless one. The energy of the Planck neutrino depends on the reference frame. Let, in the preferred rest frame, the energy of the Planck neutrino be equal to the Planck mass $$E_\nu =m_{Pl}.$$ (4) In the frame moving relative to the preferred rest frame with the Lorentz factor $$\gamma =\left(1-\frac{v^2}{c^2}\right)^{1/2}$$ (5) the energy of the Planck neutrino is reduced by the Lorentz factor $$E_\nu ^{\prime }=\gamma m_{Pl}.$$ (6) From this the lifetime of proton in the frame moving relative to the preferred rest frame is given by $$t_p=t_{Pl}\left(\frac{\gamma m_{Pl}}{2m_p}\right)^5.$$ (7) In view of eq. (7), the lifetime of proton decreases with the increase of the energy of proton relative to the preferred rest frame given by $$E_p=\frac{m_p}{\gamma }.$$ (8) Let the earth possess unity Lorentz factor relative to the preferred rest frame. For protons arriving at the earth, the travel time meets the condition $$t\le t_p.$$ (9) From this the time required for proton travel from the source to the earth defines the limiting energy of proton $$E_{lim}=\frac{m_{Pl}}{2}\left(\frac{t_{Pl}}{t}\right)^{1/5}.$$ (10) Within the time $`t`$, protons with the energies $`E>E_{lim}`$ decay and do not contribute to the EHECRs spectrum. We now determine the range of the limiting energies of proton depending on the range of distances to the EHECRs sources. We take the maximum and minimum distances to the source as the size of the universe and the thickness of our galactic disc respectively. For the lifetime of the universe $`t_U=1.06\times 10^{18}\mathrm{s}`$ , the limiting energy is equal to $`E_U=3.3\times 10^{15}\mathrm{eV}`$. This corresponds to the first ”knee” in the EHECRs spectrum. 
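Eq. (10) is easy to evaluate (a quick numerical check using $`t_{Pl}=5.39\times 10^{-44}`$ s and $`m_{Pl}=1.2\times 10^{19}`$ GeV as above):

```python
t_Pl = 5.39e-44   # Planck time, s
m_Pl_eV = 1.2e28  # Planck mass, eV

def E_lim_eV(t_s: float) -> float:
    """Limiting proton energy of Eq. (10) for travel time t."""
    return 0.5 * m_Pl_eV * (t_Pl / t_s) ** 0.2

t_universe = 1.06e18  # s, lifetime of the universe as quoted in the text
print(f"E_lim(t_U) = {E_lim_eV(t_universe):.1e} eV")  # ~3.3e15 eV
```

For the lifetime of the universe the limiting energy indeed comes out at about $`3.3\times 10^{15}`$ eV, the position of the first "knee"; shorter travel times from nearby sources push the limit to higher energies through the very weak $`t^{-1/5}`$ dependence.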
For the thickness of our galactic disc $`300\mathrm{pc}`$, the limiting energy is equal to $`E_G=5.5\times 10^{17}\mathrm{eV}`$. This corresponds to the second ”knee” in the EHECRs spectrum. Thus the range of the limiting energies of proton due to the decay of proton into Planck neutrinos lies between the first ”knee” $`E\simeq 3\times 10^{15}\mathrm{eV}`$ and the second ”knee” $`E\simeq 10^{17}-10^{18}\mathrm{eV}`$. From the above consideration it follows that the decrease of the spectral power law index from $`2.7`$ to $`3.0`$ at the first ”knee” $`E\simeq 3\times 10^{15}\mathrm{eV}`$ and from $`3.0`$ to around $`3.3`$ at the second ”knee” $`E\simeq 10^{17}-10^{18}\mathrm{eV}`$ can be explained as a result of the decay of proton into Planck neutrinos. From this it seems natural that, below the ”ankle” $`E<3\times 10^{18}\mathrm{eV}`$, the EHECRs events are mainly caused by the protons. Above the ”ankle” $`E>3\times 10^{18}\mathrm{eV}`$, the EHECRs events are caused by the particles other than protons. If Planck neutrinos take part in the strong interactions, they must contribute to the EHECRs events. Since proton decays into 5 Planck neutrinos, the energy of the Planck neutrino is $`1/5`$ of the energy of the decayed proton. For the spectral power law index equal to $`2.7`$, the ratio of the proton flux to the Planck neutrino flux is given by $`J_p/J_\nu =5^{1.7}=15`$. From the above consideration it is natural to identify EHE particles with the energies $`E>3\times 10^{18}\mathrm{eV}`$ with the Planck neutrinos. Continuing the curve with the spectral power law index $`2.7`$ from the ”ankle” $`E\simeq 3\times 10^{18}\mathrm{eV}`$ down to the first ”knee” $`E\simeq 3\times 10^{15}\mathrm{eV}`$ and comparing the continued curve with the observational curve gives the ratio of the proton flux to the Planck neutrino flux $`J_p/J_\nu =15`$.
# All Optical Flip-Flop Based on Coupled Laser Diodes ## I. Introduction Optical flip flops based on laser diodes (LD) have been extensively investigated as they have many potential applications in optical computing and telecommunications. The most important types of optical bistable laser diode devices can be classified into three broad types: 1) Absorptive bistability, 2) Two mode or polarization bistability by non-linear gain saturation, 3) Dispersive bistability. A review and explanation of these three types of bistable LDs can be found in . The optical bistable system considered here is not based upon any of the above mentioned effects and doesn’t rely on second order laser effects. Rather it is based on the fact that lasing at the natural lasing wavelength in a laser can be quenched when sufficient external light is injected into the laser cavity. The external light is not coherent with the lasing light. The external light is amplified by the laser gain medium. Lasing is quenched because the amplified external light causes the gain inside the laser to drop below the lasing threshold (for the laser’s natural lasing wavelength). The concept of a bistable laser system based on gain quenching was first envisioned in . However two decades passed before the concept was experimentally demonstrated in pulsed operation with dye lasers . A theoretical study of the system was presented in and suggestions for implementation in laser diodes given. A bistable device loosely based on the ideas presented in was demonstrated in . However this device was not based on coupled separate lasing cavities and required saturable absorbers to change the lasing thresholds for the two lasing modes in the system. In this paper we present for the first time (to our knowledge) experimental results from a bistable system based on the concept given in operating continuously and employing laser diodes. Furthermore we demonstrate all optical set-reset switching of the system. 
To introduce the rest of the paper, the concept presented in and is now elaborated in the context of the experimental system described later. Two lasers can be coupled together as shown in Figure 1. Laser A’s lasing wavelength is $`\lambda _1`$ and only $`\lambda _1`$ light from laser A is injected into laser B. Laser B’s lasing wavelength is $`\lambda _2`$ and only $`\lambda _2`$ light from laser B is injected into laser A. One laser acting as master can suppress lasing action in the other slave laser. With a symmetric configuration of the two lasers the role of master and slave can be interchanged. Thus the system can be in one of two states, depending on which laser is lasing. The flip flop state can be determined by noting the wavelength of the light output. The flip flop is in state 1 if light at wavelength $`\lambda _1`$ is output, and state 2 if wavelength $`\lambda _2`$ is output. To switch between states light from outside the flip flop can be injected into the laser that is currently master. The master laser stops lasing at its natural wavelength due to the injected light. The absence of light from the master laser allows the slave laser to start lasing and become master. When the external light is removed the flip flop remains in the new state. The flip flop described above is modeled and implemented here by using semiconductor optical amplifiers (SOA) with wavelength dependent mirrors to form the LDs. This approach was taken because light injected into the LD which is not at the lasing wavelength only passes once through the LD. Strict requirements such as the wavelength of light injected into the LDs being at one of the LD resonant frequencies are thus avoided. However, implementations based on LDs constructed in other ways are possible. ## II. Rate Equations The flip flop can be mathematically modeled using two coupled sets of rate equations (1) to (4). Each set describes one of the LDs. 
In particular, the number ($`P_A`$,$`P_B`$) of photons in the laser cavity at the lasing wavelength are described by one equation \[(1) for LD A, (3) for LD B\]. While the carrier number ($`N_A`$,$`N_B`$) in the laser cavity is described by another equation \[(2) for LD A, (4) for LD B\]. The effect of injected photons into the laser cavity is modeled by adding a carrier depletion term to the carrier number rate equation , the $`S_{2av}`$ terms in (2) and (4). The $`S_{2av}`$ terms are taken from the SOA model presented in . In modeling the effect of injected photons we have assumed the effects of amplified spontaneous emission and residual facet reflectivities are insignificant . The rate equations are different from those presented in because we base the rate equations on the SOA model of . Rate equations for LD A: $$\frac{dP_A}{dt}=(\nu _gG_A-\frac{1}{\tau _p})P_A+\beta \frac{N_A}{\tau _e}$$ (1) $$\frac{dN_A}{dt}=\frac{I_A}{q}-\frac{N_A}{\tau _e}-\nu _gG_A(P_A+S_{2avA}(\eta P_B+P_{Aext}))$$ (2) Rate equations for LD B: $$\frac{dP_B}{dt}=(\nu _gG_B-\frac{1}{\tau _p})P_B+\beta \frac{N_B}{\tau _e}$$ (3) $$\frac{dN_B}{dt}=\frac{I_B}{q}-\frac{N_B}{\tau _e}-\nu _gG_B(P_B+S_{2avB}(\eta P_A+P_{Bext}))$$ (4) Where $$S_{2avA}=\frac{e^{(G_A-\alpha _{int})L}-1}{2L(G_A-\alpha _{int})}$$ (5) $$G_A=\frac{\mathrm{\Gamma }a}{V}(N_A-N_0)$$ (6) $`S_{2avB}`$ (from ) and $`G_B`$ are similarly defined for LD B, but are dependent on $`N_B`$, rather than $`N_A`$. The photon lifetime $`\tau _p`$ is given by $$\frac{1}{\tau _p}=\nu _g(\alpha _{int}+\frac{1}{2L}\mathrm{ln}(\frac{1}{R_1R_2}))$$ (7) $`R_1`$ , $`R_2`$ are the reflectivities of the wavelength dependent mirrors associated with each LD. In (2) and (4), $`P_{Aext}`$ and $`P_{Bext}`$ represent the number of externally injected photons per LD cavity round trip time ($`2L/\nu _g`$ seconds), and are used to change the flip flop state. 
$`\eta `$ is a coupling factor indicating the fraction of the photons from one LD that are coupled into the other LD. Furthermore, from the right most terms of equations (2) and (4) it can be seen that only $`\lambda _1`$ wavelength photons ($`P_A`$) from LD A are injected into LD B, and only $`\lambda _2`$ wavelength photons ($`P_B`$) from LD B are injected into LD A. $`\tau _e`$ is the carrier lifetime, and the other symbols have their usual meaning. We consider the steady state behaviour of the flip flop. $`N_A`$, $`N_B`$, $`P_A`$ and $`P_B`$ can be considered state variables of the flip flop, as the set of four variables describe a unique operating point of the flip flop. The state variable steady state values were found by solving the rate equations numerically using a fourth order Runge-Kutta method. The state variables were determined for various values of injected external light $`P_{Bext}`$ starting at $`P_{Bext}=0`$. $`P_{Aext}`$ was set to zero. For each value of $`P_{Bext}`$ the state variables were found with the flip flop initially in state 1 and also initially in state 2. The simulation parameters were: $`R_1=R_2=0.02`$, $`\eta =0.32`$, $`I_A=I_B=158`$ mA, $`\tau _e=1`$ ns, $`q=1.6\times 10^{-19}`$ C, $`\beta =5\times 10^{-5}`$, $`\nu _g=8\times 10^9cms^{-1}`$, $`\mathrm{\Gamma }=0.33`$ , $`a=2.9\times 10^{-16}cm^2`$, $`V=2.5\times 10^{-10}cm^3`$, $`N_0=2.2\times 10^8`$, $`\alpha _{int}=27cm^{-1}`$, $`L=500`$ microns. The SOA parameters were for a 1550 nm SOA . The flip flop action can be clearly seen when the state variables $`P_A`$ and $`P_B`$ are plotted against $`P_{Bext}`$ , Figure 2. The wavelength of the $`P_{Bext}`$ photons is not $`\lambda _2`$. If the flip flop is initially in state 2, then it remains in state 2 with LD B lasing until $`P_{Bext}`$ reaches the level $`P_{thr}`$. At this point the flip flop abruptly changes to state 1 with LD A lasing. The flip flop remains in state 1 even if $`P_{Bext}`$ returns to zero. 
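A minimal time-domain integration of Eqs. (1)-(4) with the parameters above can be sketched as follows (our own illustrative reimplementation, not the authors' code; the initial carrier head start in LD A is an arbitrary seed that selects state 1, and external injection is left at zero):

```python
import math

# Parameters quoted in the text (lengths in cm, times in s)
q = 1.6e-19; I = 0.158          # pump current, A (both LDs pumped equally)
tau_e = 1e-9; beta = 5e-5
vg = 8e9; Gamma = 0.33; a = 2.9e-16
V = 2.5e-10; N0 = 2.2e8
alpha_int = 27.0; L = 500e-4    # 500 microns
R1 = R2 = 0.02; eta = 0.32
inv_tau_p = vg * (alpha_int + math.log(1 / (R1 * R2)) / (2 * L))  # Eq. (7)

def gain(N):                    # Eq. (6)
    return Gamma * a / V * (N - N0)

def s2av(G):                    # Eq. (5), with the x -> 0 limit handled
    x = (G - alpha_int) * L
    return (math.exp(x) - 1) / (2 * L * (G - alpha_int)) if abs(x) > 1e-12 else 0.5 / L

def deriv(state, P_Aext=0.0, P_Bext=0.0):   # Eqs. (1)-(4)
    PA, NA, PB, NB = state
    GA, GB = gain(NA), gain(NB)
    dPA = (vg * GA - inv_tau_p) * PA + beta * NA / tau_e
    dNA = I / q - NA / tau_e - vg * GA * (PA + s2av(GA) * (eta * PB + P_Aext))
    dPB = (vg * GB - inv_tau_p) * PB + beta * NB / tau_e
    dNB = I / q - NB / tau_e - vg * GB * (PB + s2av(GB) * (eta * PA + P_Bext))
    return (dPA, dNA, dPB, dNB)

def rk4_step(state, dt):        # classical fourth-order Runge-Kutta
    k1 = deriv(state)
    k2 = deriv([s + dt / 2 * k for s, k in zip(state, k1)])
    k3 = deriv([s + dt / 2 * k for s, k in zip(state, k2)])
    k4 = deriv([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6 * (w1 + 2 * w2 + 2 * w3 + w4)
            for s, w1, w2, w3, w4 in zip(state, k1, k2, k3, k4)]

# Give LD A a photon seed and a carrier head start so it becomes master.
state = [1e3, 4.5e8, 0.0, 0.0]  # [P_A, N_A, P_B, N_B]
dt = 5e-14
for _ in range(120_000):        # integrate 6 ns
    state = rk4_step(state, dt)
PA, NA, PB, NB = state
print(f"P_A = {PA:.3g}, P_B = {PB:.3g}  (state 1: LD A lasing, LD B quenched)")
```

The cross-depletion term $`S_{2av}\eta P`$ is much larger than the self-saturation term, which is what makes the symmetric both-lasing solution unstable and the winner-take-all flip flop behaviour possible.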
If the flip flop is initially in state 1 then it remains in state 1 for all values of $`P_{Bext}`$. The behaviour of the flip flop is similar to that shown in Figure 2 when $`P_{Bext}`$ is set to zero and $`P_{Aext}`$ is varied. It can be seen from the simulation results that the flip flop has some useful properties, including a high contrast ratio and little change in output at the lasing wavelength, before the threshold is reached, for the LD that is not being injected with external light. ## III. Experiments To demonstrate the operation of the flip flop a prototype was constructed in free space optics. LDs (Uniphase CQL806) were used which had an antireflection coating with residual reflectivity of $`5\times 10^{-4}`$ deposited on the front facet. The antireflection coated LDs function as SOAs. To form LDs as described in Sections I and II, diffraction gratings were used as wavelength dependent mirrors for the antireflection coated LDs. The experimental setup is shown in Figure 3. Gratings G1 and G2 form frequency selective external cavities (that is, wavelength dependent mirrors) for the two LDs, forcing LD A to lase at $`\lambda _1=684`$ nm and LD B to lase at $`\lambda _2=678.3`$ nm. The zeroth order diffracted beams from G1 and G2 serve as output beams for LDs A and B. The output beams pass through optical isolators and then gratings G3 and G4. This arrangement ensures that only $`\lambda _1`$ light is injected into LD B from LD A, and only $`\lambda _2`$ light is injected into LD A from LD B. The gratings G3 and G4 direct the appropriate wavelength of light to the photo-diodes. PD 1 detects optical power at wavelength $`\lambda _1`$ and PD 2 at wavelength $`\lambda _2`$. Beam splitters are used to allow injection of light from one LD to the other LD and also from an external source. $`\lambda /2`$ plates are used to adjust the light polarization throughout the setup. 
To demonstrate the flip flop operation, the flip flop state was regularly toggled by injecting light pulses into the LD which was master in the current state. Two hundred microsecond wide pulses of light at wavelength 676.3 nm were injected into the master LD for the current state approximately every 10 milliseconds. The optical powers at wavelengths $`\lambda _1`$ and $`\lambda _2`$ were observed on an oscilloscope (via photo-diodes PD 1 and PD 2). The oscilloscope traces are shown in Figure 4. The switching between states every 10 milliseconds can be clearly seen. Furthermore, the flip flop state is stable in the time between the state changes. ## IV. Conclusion An optical flip flop was proposed based on two simple laser diodes which act as a master-slave pair. The two lasers are coupled so that only light at the lasing wavelength of one laser is injected into the other laser. The flip flop state at any given time is determined by which laser is master and which is slave. Rate equations were used to model the flip flop. The steady state characteristics of the flip flop were obtained from the numerical solution of the rate equations. Flip flop operation is not dependent on second order laser effects such as resonant frequency shifts or gain saturation. Hence the flip flop should be able to be implemented in a wide variety of technologies. Furthermore, the novel flip flop structure is straightforward to implement. The flip flop was experimentally demonstrated using laser diodes with antireflection coatings. ### Acknowledgments The kind assistance of Philips Research Laboratories, Prof. Holstlaan 4, 5656 AA Eindhoven, The Netherlands, in providing laser diodes and other equipment is gratefully acknowledged. This research was supported by the Netherlands Organization for Scientific Research (N.W.O.) through the “NRC Photonics” grant. ### Figure Captions Figure 1: Master-slave arrangement of two identical lasing cavities, showing the two possible states. 
Figure 2: LD A and B photon numbers $`P_A`$, $`P_B`$ versus external light injected into LD B, $`P_{Bext}`$. Figure 3: Setup for the optical flip flop. LD: laser diode with antireflection coated facet, BS: beam splitter, G: diffraction grating, ISO: isolator, PD: photo-diode. Figure 4: Optical power at the two lasing wavelengths, as measured by photo-diodes 1 and 2 in the experimental setup. The changing between the two states every 10 milliseconds can be clearly seen.
# Finite-temperature phase transitions in 𝜈=2 bilayer quantum Hall systems ## Abstract In this paper, the influence of an in-plane magnetic field $`B_{\parallel }`$ on the finite-temperature phase transitions in $`\nu =2`$ bilayer quantum Hall systems is examined. It is found that there can exist two types of finite-temperature phase transitions. The first is the Kosterlitz-Thouless (KT) transition, which can have an unusual non-monotonic dependence on $`B_{\parallel }`$; the second type originates from the crossing of energy levels and always increases with $`B_{\parallel }`$. Based on these results, we point out that the threshold temperature observed in the inelastic light scattering experiments cannot be the KT transition temperature, because the latter shows a totally different $`B_{\parallel }`$-dependence as compared with the experimental observation. Instead, it should be the level-crossing temperature, which we found agrees with the $`B_{\parallel }`$-dependence observed. Moreover, combining the knowledge of these two transition temperatures, a complete finite-temperature phase diagram is presented. Recent theoretical works predict that, besides the fully spin polarized ferromagnetic phase (F) and the paramagnetic symmetric or spin singlet (S) phase, a novel canted antiferromagnetic (C) phase can exist in the filling factor $`\nu =2`$ bilayer quantum Hall (QH) systems. In such a C phase, the electron spins in each layer are tilted away from the external magnetic field direction due to the competition between ferromagnetic ordering and singlet ordering. Encouraging experimental evidence in support of the C phase has recently emerged through inelastic light scattering spectroscopy, transport measurements, and capacitance spectroscopy. In particular, it is observed for certain samples in the inelastic light scattering experiments that there is a threshold temperature $`T_{SDE}`$ below which the spin conserved spin-density excitation (SDE) mode ($`\omega _0`$ mode) seems to lose all spectral weight. 
Because the $`U(1)`$ planar spin rotational symmetry is spontaneously broken in the C phase, there should be a finite-temperature Kosterlitz-Thouless (KT) transition with a characteristic energy scale which is about the vortex-antivortex binding energy. It is claimed that the observed threshold temperature is the predicted KT transition temperature $`T_{KT}`$ in the C phase. While the predicted value of the KT transition temperature ($`T_{KT}\simeq 1.8`$ K in the Hartree-Fock theory) is reasonably close to that of the threshold temperature ($`T_{SDE}\simeq 0.5`$–$`2`$ K) in the inelastic light scattering experiment under normal magnetic fields, which seems to support the identification between these two temperature scales, we point out that such an interpretation meets trouble in the tilted magnetic field experiment for two reasons. First, it is found in Ref.\[\] that the threshold temperature rises as the parallel magnetic field $`B_{\parallel }`$ increases. Nevertheless, we notice that (i) the sample used in the experiment is located near the F-C phase boundary in the quantum phase diagram (see the inset in Fig. 1); and (ii) an in-plane magnetic field effectively moves a sample even closer to the F-C phase boundary. Hence the symmetry-breaking order parameter and therefore $`T_{KT}`$ should be reduced (cf. Figs. 8 and 11 of Ref.\[\]), rather than enhanced, when $`B_{\parallel }`$ increases. Thus the physical content of these two characteristic temperatures should not be the same. Second, it is questionable to regard the observed disappearance of the $`\omega _0`$ mode at the threshold temperature as the transition to the C phase, if one is reminded that the spectral weight of the $`\omega _0`$ mode (which does not involve any spin flip) is also greatly suppressed in the F phase, where almost all spin-up (down) states are occupied (empty). 
When temperature $`T\gtrsim T_{KT}`$ for the systems near the F-C quantum phase boundary, it is expected that, although the expectation value of the in-plane spin component vanishes and the $`U(1)`$ planar spin rotational symmetry is restored, the spin component $`S_z`$ along the direction of the external magnetic field may still be nonzero. That is, these systems at $`T\gtrsim T_{KT}`$ should behave somewhat like the F phase at finite temperatures. (See Fig. 2 for our finite-temperature phase diagram.) Thus higher temperatures are needed for these systems to lose all their spin polarization, such that the $`\omega _0`$ mode can be observed. In this paper, the finite-temperature phase transitions in $`\nu =2`$ bilayer quantum Hall systems are investigated. As discussed above, a correct theoretical understanding of the reported threshold temperature had not yet been reached. Hence we focus our attention on solving this issue. Since the aforementioned arguments are quite general, the same qualitative results should be obtained irrespective of which kind of approximation method is employed. For simplicity, we use the Hartree-Fock approximation in the following. We show that the KT transition temperature in the C phase can have an unusual non-monotonic dependence on the tilted angle of the applied magnetic field. That is, $`T_{KT}`$ can either rise or fall as $`B_{\parallel }`$ is turned on, depending on whether the samples are initially located near the C-S or the F-C phase boundary in the quantum phase diagram. By using the sample parameters in the tilted field experiment, we show that $`T_{KT}`$ decreases as the tilted angle increases. Thus the KT transition scenario does fail to explain the $`B_{\parallel }`$-dependence of the threshold temperature in Ref.\[\]. 
Instead, in order to link with the observed threshold temperature, we propose another characteristic temperature $`T_X`$ caused by the crossing of energy levels, since its variation with respect to $`B_{\parallel }`$ agrees qualitatively with the reported threshold temperature. Based on the dependence of $`T_{KT}`$ and $`T_X`$ on the tunneling-induced symmetric-antisymmetric energy gap, a complete finite-temperature phase diagram is shown. As shown in Ref. \[\], the KT transition temperature is estimated to be $`T_{KT}\simeq 0.9\rho _s/k_B`$, with the spin stiffness $`\rho _s=c_A\rho _s^A+c_E\rho _s^E`$, where $$\rho _s^{A/E}=\frac{l^2}{4\pi }\underset{p}{\sum }p^2V_{A/E}(p,0),$$ (1) and the analytical forms of the constants $`c_A`$ and $`c_E`$ can be written down explicitly by minimizing the Hartree-Fock variational energy functional directly: $`c_A`$ $`=`$ $`{\displaystyle \frac{\mathrm{\Delta }_{\mathrm{SAS}}^2[(\mathrm{\Delta }_{\mathrm{SAS}}^2-\mathrm{\Delta }_z^2)^2-(2U_{-}\mathrm{\Delta }_z)^2][(2U_{-}\mathrm{\Delta }_{\mathrm{SAS}})^2-(\mathrm{\Delta }_{\mathrm{SAS}}^2-\mathrm{\Delta }_z^2)^2]}{(2U_{-})^2(\mathrm{\Delta }_{\mathrm{SAS}}^2-\mathrm{\Delta }_z^2)^4}},`$ (2) $`c_E`$ $`=`$ $`{\displaystyle \frac{[(2U_{-}\mathrm{\Delta }_z)^2-(\mathrm{\Delta }_{\mathrm{SAS}}^2-\mathrm{\Delta }_z^2)^2][\mathrm{\Delta }_z^2(\mathrm{\Delta }_{\mathrm{SAS}}^2-\mathrm{\Delta }_z^2)^2-\mathrm{\Delta }_{\mathrm{SAS}}^2(2U_{-}\mathrm{\Delta }_z)^2]}{(2U_{-})^2(\mathrm{\Delta }_{\mathrm{SAS}}^2-\mathrm{\Delta }_z^2)^4}}.`$ (3) Here $`\mathrm{\Delta }_{\mathrm{SAS}}`$ is the tunneling-induced symmetric-antisymmetric energy separation, $`\mathrm{\Delta }_z`$ is the Zeeman energy, and $`U_{-}=(U_A-U_E)/2`$ with $`U_{A/E}=\underset{p}{\sum }V_{A/E}(p,0)`$ being the exchange energy of the intralayer/interlayer Coulomb interaction. 
The matrix elements $`V_{A/E}(p_1,p_2)`$ of the intralayer/interlayer Coulomb interaction are $$V_{A/E}(p_1,p_2)=\frac{1}{\mathrm{\Omega }}\underset{𝐪}{\sum }v_{A/E}(q)\delta _{p_1,q_y}e^{-q^2l^2/2}e^{iq_xp_2l^2},$$ (4) where $`\mathrm{\Omega }`$ is the area of the system. $`v_A(q)=(2\pi e^2/ϵq)F_A(q,b)`$ and $`v_E(q)=v_A(q)F_E(q,b)e^{-qd}`$ are the Fourier transforms of the intralayer and the interlayer Coulomb interaction potentials. $`ϵ`$ is the dielectric constant of the system, and $`d`$ is the interlayer separation. We have also included the finite-well-thickness correction by introducing the form factor $`F_{A/E}(q,b)`$ in the intralayer/interlayer Coulomb potential, where $`F_A(q,b)=2/bq-2(1-e^{-qb})/b^2q^2`$, $`F_E(q,b)=4\mathrm{sinh}^2(qb/2)/b^2q^2`$, and $`b`$ is the width of a quantum well. Since we know $`\rho _s`$ exactly within the microscopic Hartree-Fock approximation, the KT transition temperature can be easily determined. As shown in Refs.\[\], the KT transition temperature along with the symmetry-breaking order parameter drops continuously to zero as the phase boundaries are approached from within the C phase. Now we consider the tilted magnetic field case, where a parallel magnetic field $`B_{\parallel }`$ and a perpendicular field $`B_{\perp }`$ both appear with the tilted angle $`\mathrm{\Theta }=\mathrm{tan}^{-1}(B_{\parallel }/B_{\perp })`$. The effect of the parallel magnetic field on $`T_{KT}`$ can be incorporated by the following replacements: $`\mathrm{\Delta }_{\mathrm{SAS}}`$ $`\to `$ $`\overline{\mathrm{\Delta }}_{\mathrm{SAS}}=\mathrm{\Delta }_{\mathrm{SAS}}e^{-Q^2l^2/4},`$ (5) $`\mathrm{\Delta }_z`$ $`\to `$ $`\overline{\mathrm{\Delta }}_z=\mathrm{\Delta }_z\sqrt{1+(B_{\parallel }/B_{\perp })^2},`$ (6) $`V_E(p_1,p_2)`$ $`\to `$ $`\overline{V}_E(p_1,p_2)=V_E(p_1,p_2)e^{\pm iQp_1l^2},`$ (7) with $`Q=B_{\parallel }d/B_{\perp }l^2`$ and the magnetic length $`l=\sqrt{\mathrm{\hbar }c/eB_{\perp }}`$. In Fig. 
1 we show the transition temperature $`T_{KT}`$ as a function of the tilted angle $`\mathrm{\Theta }`$ for some typical sample parameters. Since it is relatively easy to tune $`\mathrm{\Delta }_{\mathrm{SAS}}`$ in fabrication, we vary its value with the other system parameters being fixed. Three possible situations are depicted in Fig. 1: (i) if the system begins in the C phase and near the C-S phase boundary (triangle), then $`T_{KT}`$ (and $`\rho _s`$) grows as $`B_{\parallel }`$ is turned on; (ii) if the system is in the C phase and near the F-C phase boundary (cross), then $`T_{KT}`$ (and $`\rho _s`$) is reduced as $`B_{\parallel }`$ is turned on; (iii) when the system lies between the two phase boundaries (circle), $`T_{KT}`$ (and $`\rho _s`$) can have an unusual non-monotonic dependence on $`B_{\parallel }`$, that is, it can increase and then decrease as $`B_{\parallel }`$ increases. We find that the enhancement (suppression) of $`T_{KT}`$ as $`\mathrm{\Theta }`$ increases can be large for a system near the C-S (F-C) phase boundary. However, when the system lies between the two phase boundaries, the magnitude of $`T_{KT}`$ may keep roughly the same value. Since the crossed line is the predicted $`T_{KT}`$ for the sample studied in Ref.\[\], which has a decreasing dependence on $`\mathrm{\Theta }`$, the experimentally observed enhancement of $`T_{SDE}`$ cannot be explained by the result of $`T_{KT}`$. Motivated by the above results, we look for another characteristic temperature above which the spectral weight of the $`\omega _0`$ mode indeed becomes significant. As mentioned before, for the systems near the F-C phase boundary in the quantum phase diagram, a non-vanishing spin polarization is possible when $`T\gtrsim T_{KT}`$. 
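The tilted-field replacements of Eqs. (5) and (6) are easy to evaluate numerically; the short sketch below (taking $`\overline{\mathrm{\Delta }}_{\mathrm{SAS}}=\mathrm{\Delta }_{\mathrm{SAS}}e^{-Q^2l^2/4}`$, and treating $`d/l`$ as a free input) shows why a tilt pushes a sample toward the F-C boundary: the effective tunneling gap shrinks while the effective Zeeman energy grows.

```python
import math

def effective_gaps(D_sas, D_z, theta_deg, d_over_l):
    """Eqs. (5)-(6): a tilt angle Theta suppresses the tunneling gap and
    enhances the Zeeman energy; Q*l = (d/l) * tan(Theta)."""
    t = math.tan(math.radians(theta_deg))
    Ql = d_over_l * t
    D_sas_eff = D_sas * math.exp(-Ql ** 2 / 4.0)      # Eq. (5)
    D_z_eff = D_z * math.sqrt(1.0 + t ** 2)           # Eq. (6), = D_z / cos(Theta)
    return D_sas_eff, D_z_eff
```

For instance, at $`\mathrm{\Theta }=30^{\circ }`$ with $`d=l`$ the tunneling gap drops by about 8% while the Zeeman energy rises by about 15%, which is the sense of the shift discussed in the text.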
Consequently, the mean-field Hamiltonian of the self-consistent Hartree-Fock theory at finite temperatures, which takes this fact into account, reduces to $$H^{HF}=-\frac{\mathrm{\Delta }_{\mathrm{SAS}}+\delta _{\mathrm{SAS}}}{2}\underset{\tau ,k,\sigma }{\sum }\tau c_{\tau ,k,\sigma }^{\dagger }c_{\tau ,k,\sigma }-\frac{\mathrm{\Delta }_z+\delta _z}{2}\underset{\tau ,k,\sigma }{\sum }\sigma c_{\tau ,k,\sigma }^{\dagger }c_{\tau ,k,\sigma },$$ (9) where $`\delta _{\mathrm{SAS}}=(U_E/2)\underset{\tau ,\sigma }{\sum }\tau f(E_{\tau ,\sigma })`$ and $`\delta _z=(U_A/2)\underset{\tau ,\sigma }{\sum }\sigma f(E_{\tau ,\sigma })`$. Here the Landau gauge is assumed, and $`c_{\tau ,k,\sigma }^{\dagger }`$ creates an electron at the lowest Landau level orbital $`k`$ in the symmetric ($`\tau =1`$) or the antisymmetric ($`\tau =-1`$) subband with spin $`\sigma /2`$ ($`\sigma =\pm 1`$). The thermal averages $`\langle c_{\tau ,\sigma }^{\dagger }c_{\tau ,\sigma }\rangle =f(E_{\tau ,\sigma })`$, where $`f(E)`$ is the Fermi-Dirac distribution function and the energy eigenvalues of this mean-field Hamiltonian are $$E_{\tau ,\sigma }=-\frac{\tau }{2}(\mathrm{\Delta }_{\mathrm{SAS}}+\delta _{\mathrm{SAS}})-\frac{\sigma }{2}(\mathrm{\Delta }_z+\delta _z).$$ (10) We assume that the translational symmetry is not broken, thus these expectation values have no intra-Landau level dependence. Since we consider only the case of $`T>T_{KT}`$, the order parameters for the C phase are dropped. Moreover, because of the symmetry in the energy levels, the chemical potential is fixed at zero for all temperatures. We see that the four energy levels for the non-interacting electrons are shifted by the self-consistent mean fields, $`\delta _{\mathrm{SAS}}`$ and $`\delta _z`$, both of which have temperature dependence. 
For $`T>T_{KT}`$, if we assume that the effective Zeeman energy, $`\mathrm{\Delta }_z+\delta _z`$, is initially larger than the effective symmetric-antisymmetric energy gap, $`\mathrm{\Delta }_{\mathrm{SAS}}+\delta _{\mathrm{SAS}}`$, the crossing of energy levels can occur at a higher temperature $`T=T_X>T_{KT}`$, because the mean field $`\delta _z`$ is a monotonically decreasing function of $`T`$. Thus at $`T=T_X`$, where level crossing occurs, one has $$\mathrm{\Delta }_{\mathrm{SAS}}+\delta _{\mathrm{SAS}}=\mathrm{\Delta }_z+\delta _z,$$ (11) $$\frac{1}{2}\underset{\tau ,\sigma }{\sum }\sigma f(E_{\tau ,\sigma })=\frac{1}{2}\underset{\tau ,\sigma }{\sum }\tau f(E_{\tau ,\sigma })=f(E_{+1,+1})-\frac{1}{2}.$$ (12) By solving Eqs. (11) and (12) with Eq. (10), the level-crossing temperature $`T_X`$ can be determined. Combining this with the knowledge of the KT transition temperature, a complete phase diagram at finite temperatures can be obtained, as shown in Fig. 2, where the phase boundaries for the tilted angle $`\mathrm{\Theta }=30^{\circ }`$ are also presented. In the F phase, the planar spins are thermally randomized but $`S_z`$ remains nonzero. This finite-temperature phase diagram indeed confirms our previous arguments. Note that for $`\mathrm{\Delta }_{\mathrm{SAS}}`$ slightly larger than $`0.23e^2/ϵl`$, the C phase transits directly to the S phase at finite temperatures, in which $`S_z=0`$. It can be seen that an in-plane magnetic field moves the finite-temperature phase boundaries to the right, due to the effective modification of the sample parameters given in Eq. (6). With a fixed $`\mathrm{\Delta }_{\mathrm{SAS}}`$, the change of the transition temperatures for a tilted magnetic field with $`\mathrm{\Theta }=30^{\circ }`$ can be read out directly from Fig. 2. This finite-temperature phase diagram indicates that $`T_X`$ is an increasing function of $`\mathrm{\Theta }`$. By using Eq. (6), the tilted-field dependence of $`T_X`$ is explicitly shown in Fig. 3. 
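To make the procedure concrete, the sketch below solves the self-consistency conditions numerically and locates $`T_X`$ by bisection, for an illustrative, made-up parameter set (energies in arbitrary units with $`k_B=1`$; the numbers are not taken from any sample). It also cross-checks the result against the closed form that follows at the crossing, where $`x=f(E_{+1,+1})-1/2=(\mathrm{\Delta }_{\mathrm{SAS}}-\mathrm{\Delta }_z)/(U_A-U_E)`$:

```python
import numpy as np

# Illustrative (made-up) parameters, energies in arbitrary units with k_B = 1:
U_A, U_E = 1.0, 0.4     # intralayer / interlayer exchange energies
D_SAS, D_z = 0.3, 0.1   # bare tunneling and Zeeman gaps

def fermi(E, T):
    return 1.0 / (np.exp(E / T) + 1.0)

def effective_gaps_at_T(T, n_iter=1500, mix=0.5):
    """Damped fixed-point iteration of the mean fields delta_SAS = (U_E/2) sum tau f
    and delta_z = (U_A/2) sum sigma f, with the levels of Eq. (10)."""
    d_sas, d_z = 0.0, U_A            # start from the T = 0 ferromagnetic values
    for _ in range(n_iter):
        A, Z = D_SAS + d_sas, D_z + d_z
        s_tau = s_sigma = 0.0
        for tau in (+1, -1):
            for sigma in (+1, -1):
                f = fermi(-0.5 * tau * A - 0.5 * sigma * Z, T)
                s_tau += tau * f
                s_sigma += sigma * f
        d_sas = (1 - mix) * d_sas + mix * 0.5 * U_E * s_tau
        d_z = (1 - mix) * d_z + mix * 0.5 * U_A * s_sigma
    return D_SAS + d_sas, D_z + d_z

def level_crossing_temperature(T_lo=0.05, T_hi=1.0):
    """Bisect on the sign of the effective gap difference, Eq. (11)."""
    for _ in range(40):
        T_mid = 0.5 * (T_lo + T_hi)
        A, Z = effective_gaps_at_T(T_mid)
        if Z - A > 0:                # still F-like: the crossing lies at higher T
            T_lo = T_mid
        else:
            T_hi = T_mid
    return 0.5 * (T_lo + T_hi)

T_X = level_crossing_temperature()

# Closed form at the crossing: x = (D_SAS - D_z)/(U_A - U_E), common gap
# G = D_SAS + U_E * x, and x = (1/2) tanh(G / (2 T_X))
x = (D_SAS - D_z) / (U_A - U_E)
T_X_analytic = (D_SAS + U_E * x) / (2.0 * np.arctanh(2.0 * x))
```

With these numbers the bisection and the closed form agree at $`T_X\simeq 0.27`$; the actual value for a given sample depends, of course, on the real exchange energies and bare gaps.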
The values of $`\mathrm{\Delta }_{\mathrm{SAS}}`$ are chosen such that the corresponding samples can undergo both C$`\to `$F and F$`\to `$S phase transitions with rising temperature (see Fig. 2), even though only the F$`\to `$S transition temperatures $`T_X`$ are plotted. In general, it takes a higher $`T_X`$ for a sample with a smaller $`\mathrm{\Delta }_{\mathrm{SAS}}`$ to transit to the S phase, which is reasonable since the F phase becomes more stable for a larger ratio of $`\mathrm{\Delta }_z/\mathrm{\Delta }_{\mathrm{SAS}}`$. The in-plane magnetic field always elevates $`T_X`$, in contrast to its effect on $`T_{KT}`$, which is more complicated, as shown in Fig. 1. The result shows that for systems near the $`T=0`$ F-C phase boundary (say, for the cross symbol in the inset of Fig. 1), the level-crossing temperature $`T_X`$ indeed increases with $`B_{\parallel }`$ and should be identified with the experimentally observed $`T_{SDE}`$. Before closing this paper, some remarks are in order. First, we would like to comment that the threshold temperature in the experiment is $`T_{SDE}\simeq 0.5`$ K, which is considerably lower than the calculated $`T_X\simeq 17`$ K using the actual experimental sample parameters (see Fig. 2 for $`\mathrm{\Delta }_{\mathrm{SAS}}=0.1e^2/ϵl`$). Therefore, the present theory is not quantitatively satisfactory. The quantum fluctuations neglected in the mean-field theory should lower the calculated level-crossing temperature $`T_X`$ and reduce this discrepancy. Although the above analysis is crude, it provides a starting point for interpreting the enhancement of the threshold temperature in Ref.\[\]. Second, we notice that, for the sample in the transport experiment (say, the sample with the total density $`n_t=0.7\times 10^{11}\mathrm{cm}^{-2}`$ at the balanced point), which is initially located in the C phase and near the $`T=0`$ F-C phase boundary, its activation energy decreases as the tilted angle increases from zero. We suggest that the energy scale set by $`T_{KT}`$ (i.e. 
the vortex-antivortex binding energy) may be related to this activation energy, since they have a similar $`B_{\parallel }`$-dependence. In conclusion, we have investigated the dependence of the phase transition temperatures, $`T_{KT}`$ and $`T_X`$, on the in-plane magnetic field and demonstrated that it is $`T_X`$, rather than $`T_{KT}`$, that agrees qualitatively with the experimentally reported threshold temperature. We have also obtained a finite-temperature phase diagram of the bilayer systems based on the Hartree-Fock approximation. A verification of these two different phase transitions awaits experimental measurements that probe the C phase more directly at lower temperatures. For example, heat capacity measurements should show a power law temperature dependence in the C phase (see Fig. 12 of Ref. \[\]) because of the existence of the gapless Goldstone mode due to spontaneous symmetry breaking. Once that is achieved, it would be quite interesting for future experiments to confirm the predicted non-monotonic $`B_{\parallel }`$-dependence of $`T_{KT}`$. ###### Acknowledgements. M.F.Y. acknowledges financial support by the National Science Council of Taiwan under contract No. NSC 89-2112-M-029-001. M.C.C. is supported by the National Science Council of Taiwan under contract No. NSC 89-2112-M-003-006.
# The Behind The Plane Survey ## 1 Introduction The 84% sky coverage of the PSCz survey is effectively limited by the need to get, for every galaxy, an optical identification from sky survey plates. The fractional incompleteness in sky coverage translates directly into uncertainty in predicting the gravity dipole on the Local Group. The IRAS PSC data itself is reliable to much lower latitudes, although genuine galaxies are outnumbered by Galactic sources with similar IRAS properties. Previous attempts to go further into the Plane have either been restricted to the Arecibo declination range, or have relied on optical identifications from Sky Survey plates. Because the extinction may be several magnitudes or more, they have inevitably suffered from progressive and unquantifiable incompleteness as a function of latitude. In 1994 we embarked on a program, parallel with the PSCz survey, to systematically identify low latitude IRAS galaxies wherever the PSC data allowed, using new near-infrared observations where necessary. ## 2 Sky coverage and selection criteria The mask consists of (a) the IRAS coverage gaps (3% of the sky), (b) areas flagged as High Source Density at $`60\mu \mathrm{m}`$ (3%), where the PSC processing was changed to ensure reliability at the expense of completeness, (c) areas flagged as High Source Density at $`12`$ or $`25\mu \mathrm{m}`$ on the basis that identifications would be impossible, and (d) areas with $`I_{100}>100\mathrm{MJy}\mathrm{ster}^{-1}`$ (as in Rowan-Robinson et al. 1991), because of the impossible IRAS confusion and contamination by local sources. Our final sky coverage is 93%. The IRAS selection criteria were tightened from those used for the PSCz, in order to minimise the contamination by Galactic sources while still keeping most of the galaxies. 
The revised criteria were $`S_{60}/S_{25}>2`$, $`S_{60}/S_{12}>4`$, $`S_{100}/S_{60}>1`$, $`S_{100}/S_{60}<5`$, and $`CC_{60}=A,B,C`$. Upper limits were only used where they guaranteed inclusion or exclusion. At high latitudes, these criteria encompass the vast majority of galaxies, with incompleteness increasing from 3% for nearby galaxies, to 6% at $`15,000\mathrm{km}\mathrm{s}^{-1}`$. At low latitudes, the incompleteness is higher because of corrupted fluxes. However, we know of just 18 galaxies at low latitude excluded by our selection criteria, compared with 1225 included. Unfortunately, the criteria efficiently exclude very nearby galaxies such as Dwingeloo I. From the source counts alone, it is clear that only one third of the sources are galaxies. ## 3 Identifications Many sources are immediately identifiable as galaxies from sky survey plates. Others are clearly Galactic. To identify the rest, we used Sky survey plates in all available bands, NVSS data for $`\delta >-40^{\circ }`$, IRAS addscan profiles, Simbad and other literature data. For almost all sources still remaining unclassified, and also almost all sources with a faint ($`B_j>19.5^m`$) galaxy counterpart, we obtained $`K^{\prime }`$ ‘snapshots’ using the UH $`88^{\prime \prime }`$, UNAM 2.1m, ESO 2.2m, CTIO 1.5m and Las Campanas 1m telescopes, between 1994 and 1999. In general, the $`K^{\prime }`$ images allowed unambiguous identification as a galaxy or other (usually cirrus) source. Occasionally, there remained ambiguity between galaxies and buried YSO’s, and more frequently a very faint and compact galaxy ID was overlooked, but revealed by subsequent NVSS data. While hundreds of galaxies were found which are completely invisible on sky survey plates, in general some sort of optical counterpart is visible. Our identification program gave us a total of 1225 galaxies. They are plotted along with the PSCz survey in figure 1. 
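The flux-ratio selection above translates directly into a predicate; the sketch below is illustrative only, taking all four fluxes as measured and omitting the upper-limit handling described in the text:

```python
def passes_btp_criteria(S12, S25, S60, S100, cc60):
    """Revised BTP selection: flux-ratio cuts plus the 60-micron
    correlation-coefficient flag. Fluxes in Jy."""
    return (S60 / S25 > 2.0 and
            S60 / S12 > 4.0 and
            1.0 < S100 / S60 < 5.0 and
            cc60 in ("A", "B", "C"))
```

For example, a source with $`S_{60}=1.0`$ Jy, $`S_{25}=0.3`$ Jy, $`S_{12}=0.2`$ Jy and $`S_{100}=2.0`$ Jy passes, while one with $`S_{100}/S_{60}=6`$ is rejected as likely cirrus.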
Note the striking excess of galaxies in the Great Attractor region, $`(l,b)=(325,-5)`$, despite the low surface density of identified galaxies in the southern Galactic Plane. Note also that the northern sky is now essentially mapped right through the Plane, showing a bridge across linking Pisces and Perseus. We also see the Puppis and Ophiuchus superclusters. ## 4 Completeness of the BTP survey The nominal flux limit for the BTP survey is $`0.6\mathrm{Jy}`$, as for the PSCz. However, the source counts shown in figure 2 clearly show increasing incompleteness below $`1\mathrm{Jy}`$ as a function of $`I_{100}`$. The primary causes of incompleteness in the PSC at low latitudes are confusion and noise shadowing. Rather than trying to model these effects, we have simply defined a limit to which our source count completeness ($`80\%`$) is acceptable, as a function of $`I_{100}`$. We estimate the flux limit for reasonable completeness to be $`S_{60lim}=0.5+I_{100}/(200\mathrm{MJy}\mathrm{ster}^{-1})\mathrm{Jy}`$. ## 5 2D surface density of galaxies Combining the BTP survey with the PSCz allows us to map the surface distribution of IRAS galaxies across almost the whole sky. We have used the Fourier interpolation method described in Saunders et al. 1999 to interpolate across the residual mask. We have corrected for the effects of the incompleteness found above. The results are shown in figure 3; the power of the Great Attractor is now obvious, as easily the dominant feature in the map. There appears to be incompleteness larger than our above estimate within $`10^{\circ }`$ of the Galactic Centre, and we are investigating possible causes. ## 6 Redshift acquisition Complete spectroscopy for all sources is impracticable. 
However, in the PSCz, there is little contribution to the cumulative IRAS dipole beyond $`15,000\mathrm{km}\mathrm{s}^{-1}`$, and this is an achievable completeness target; within this distance, and given the extinctions $`A_R<4^m`$, virtually all galaxies are accessible to optical or HI spectroscopy. About 40% of the required redshifts were already known from previous surveys. The southern spectroscopy is now very nearly complete, using Parkes and the CTIO 1.5m and 4m telescopes. In the north, we have about 300 redshifts still to obtain. ## 7 The effect on the IRAS dipole The $`20^{\circ }`$ deviation between e.g. the PSCz and CMB dipoles is much larger than the expected deviation for current, low $`\mathrm{\Omega }`$ models; Strauss et al. (1992) found an rms misalignment of just $`10^{\circ }`$. The inclusion of the BTP sample, and in particular the Great Attractor, will shift the IRAS dipole in both direction and magnitude. Given the relative positions of the GA, and the CMB and IRAS dipoles, the sense of this correction seems certain to improve the alignment of the IRAS and CMB dipoles, and to reduce the estimate of $`\beta `$.
# High-Redshift Quasars Found in Sloan Digital Sky Survey Commissioning Data II: The Spring Equatorial Stripe ## 1 Introduction This paper is the second in a series presenting high-redshift quasars selected from the commissioning data of the Sloan Digital Sky Survey (SDSS; York et al. 1999; see http://www.astro.princeton.edu/PBOOK/welcome.htm). In Paper I (Fan et al. 1999a), we presented the discovery of 15 quasars at $`z>3.6`$ selected from two SDSS photometric runs covering 140 $`\mathrm{deg}^2`$ in the Southern Galactic cap along the Celestial Equator observed in Fall 1998. In this paper, we describe observations of quasar candidates selected in a similar manner from 250 $`\mathrm{deg}^2`$ of SDSS imaging data in the Northern Galactic cap, again along the Celestial Equator, observed in Spring 1999. The scientific objectives, photometric data reduction, selection criteria and spectroscopic observation procedures are described in Paper I, and will be outlined only briefly here. We have not yet observed all the quasar candidates spectroscopically, so the objects described in these two papers do not form a complete sample. We will present the complete sample of high-redshift quasars found in the Equatorial stripe, and derive the quasar luminosity function at high redshift, in subsequent papers. We describe the photometric observations and selection of quasar candidates briefly in §2. The spectra of 22 new high-redshift quasars are presented in §3. ## 2 Photometric Observations and Quasar Selection The SDSS telescope (Siegmund et al. 1999; see also http://www.astro.princeton.edu/PBOOK/telescop/telescop.htm), imaging camera (Gunn et al. 1998), and photometric data reduction (Lupton et al. 1999b; see also http://www.astro.princeton.edu/PBOOK/datasys/datasys.htm) are described in Paper I. 
Briefly, the telescope, located at Apache Point Observatory in southeastern New Mexico, has a 2.5m primary mirror and a wide, essentially distortion-free field. The imaging camera contains thirty $`2048\times 2048`$ photometric CCDs, which simultaneously observe 6 parallel $`13^{\prime }`$ wide swaths, or scanlines, of the sky, in 5 broad filters ($`u^{\prime }`$, $`g^{\prime }`$, $`r^{\prime }`$, $`i^{\prime }`$, and $`z^{\prime }`$) covering the entire optical band from the atmospheric cutoff in the blue to the silicon sensitivity cutoff in the red (Fukugita et al. 1996). The photometric data are taken in time-delay and integrate (TDI, or “drift-scan”) mode at sidereal rate; a given point on the sky passes through each of the five filters in succession. The total integration time per filter is 54.1 seconds. The data were calibrated photometrically by observing secondary standards in the survey area using a (now decommissioned) 60cm telescope at Apache Point Observatory and the US Naval Observatory’s 1m telescope. The photometric calibration used in this paper is only accurate to 5–10%, due to systematics in the shape of the point spread function across individual CCDs, and the fact that the primary standard star network had not yet been finalized at the time of these observations. This situation will be improved to the survey requirement of 2% in future papers in this series. Thus, as in Paper I, we will denote the preliminary SDSS magnitudes presented here as $`u^{\ast }`$, $`g^{\ast }`$, $`r^{\ast }`$, $`i^{\ast }`$ and $`z^{\ast }`$, rather than the notation $`u^{\prime }`$, $`g^{\prime }`$, $`r^{\prime }`$, $`i^{\prime }`$ and $`z^{\prime }`$ that will be used for the final SDSS photometry. In this paper, we select quasar candidates from four SDSS imaging runs in the Northern Galactic Cap. The data were acquired with the telescope parked at the Celestial Equator. Details of the photometric runs are summarized in Table 1. 
Two interleaved SDSS scans, the Northern and Southern strips, form a filled stripe 2.5 degrees wide in declination, centered on the Celestial Equator. Runs 77 and 745 cover the Northern strip of the Equatorial Stripe, while Runs 85 and 752 cover the Southern strip. Runs 77 and 745, and Runs 85 and 752, have considerable overlap, but candidates were selected only from the catalogs based on individual runs. The total stripe is roughly 7 hours long and covers a total area of about 250 deg<sup>2</sup> at Galactic latitude in the range $`25^{\circ }<b<63^{\circ }`$. All four nights were photometric, with seeing conditions varying from $`1.3^{\prime \prime }`$ to worse than $`2^{\prime \prime }`$. The data are processed by a series of automated pipelines to carry out astrometric and photometric measurements (cf. Paper I, and references therein). The final object catalog includes roughly 5 million objects in total. The limiting magnitudes are similar to those of Paper I: roughly 22.3, 22.6, 22.7, 22.4 and 20.5 in $`u^{*}`$, $`g^{*}`$, $`r^{*}`$, $`i^{*}`$ and $`z^{*}`$, respectively.

Figure 1 presents the color-color diagrams from Run 752 for all stellar sources at $`i^{*}<20.2`$. The inner parts of the diagrams are shown as contours, linearly spaced in density of stars per unit area in color-color space. As in Paper I, a source is plotted only if it is detected in all three of the relevant bands at more than 5$`\sigma `$. In addition, objects that are flagged as saturated, lying on the bleed trail of a saturated star, overlapping the edge of the image boundary, or showing other indications of possible problems in the photometric measurement, are rejected. The median tracks of quasar colors as a function of redshift, as well as the locations of low-redshift ($`z<2.5`$) quasars, hot white dwarfs and A stars, all from the simulation of Fan (1999), are also plotted in Figure 1. High-redshift quasar candidates were selected using color cuts similar to those presented in Paper I.
Because of the uncertainties in the photometric zeropoints, we found that the stellar locus shifted by of order 0.05 mag in the color-color diagrams between the Fall (Paper I) and Spring observations. We adjust the color cuts presented in Paper I according to these shifts. Final color cuts of the complete sample will be presented in a later paper with the final photometric calibrations. The color selection criteria used in this paper are as follows:

1. $`gri`$ candidates, selected principally from the $`g^{*}-r^{*}`$, $`r^{*}-i^{*}`$ diagram: $$\begin{array}{c}(a)\;i^{*}<20\hfill \\ (b)\;u^{*}-g^{*}>2.0\text{ or }u^{*}>22.3\hfill \\ (c)\;g^{*}-r^{*}>1.0\hfill \\ (d)\;r^{*}-i^{*}<0.08+0.42(g^{*}-r^{*}-0.96)\text{ or }g^{*}-r^{*}>2.26\hfill \\ (e)\;i^{*}-z^{*}<0.25\hfill \end{array}$$ (1)

2. $`riz`$ candidates, selected principally from the $`r^{*}-i^{*}`$, $`i^{*}-z^{*}`$ diagram: $$\begin{array}{c}(a)\;i^{*}<20.2\hfill \\ (b)\;u^{*}>22.3\hfill \\ (c)\;g^{*}>22.6\hfill \\ (d)\;r^{*}-i^{*}>0.8\hfill \\ (e)\;i^{*}-z^{*}<0.47(r^{*}-i^{*}-0.68)\hfill \end{array}$$ (2)

The intersections of these color cuts with the $`g^{*}-r^{*}`$, $`r^{*}-i^{*}`$ and $`r^{*}-i^{*}`$, $`i^{*}-z^{*}`$ diagrams are illustrated in Figure 1. A total of 80 $`gri`$ and $`riz`$ candidates that have colors consistent with quasars at $`z>3.6`$ and $`i^{*}<20.0`$ were selected from the catalog. Several other $`riz`$ candidates ($`z>4.6`$) at $`i^{*}<20.2`$ were also selected and observed. Two of the candidates, SDSSp J105320.43–001649.3 and SDSSp J111246.30+004957.5 (see Table 2; for naming convention, see §3), are the previously known quasars BRI1050–0000 ($`z=4.29`$, Storrie-Lombardi et al. (1996)) and BRI1110+0106 ($`z=3.92`$, Smith, Thompson & Djorgovski (1994)). These two objects are the only quasars with $`z>3.6`$ in the area covered that appear in the NED database (the NASA/IPAC Extragalactic Database, operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration).
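The cuts in Equations 1 and 2 are simple boolean conditions on the five SDSS magnitudes, so candidate selection reduces to a per-object filter. The sketch below is our own illustration of that logic (the function names and the dictionary interface are ours, not part of the SDSS target-selection pipeline):

```python
def is_gri_candidate(m):
    """Eq. 1: gri candidates (m is a dict of magnitudes keyed by filter)."""
    u, g, r, i, z = (m[b] for b in "ugriz")
    return (i < 20.0
            and (u - g > 2.0 or u > 22.3)   # (b): very red or undetected in u
            and g - r > 1.0                 # (c)
            and (r - i < 0.08 + 0.42 * (g - r - 0.96) or g - r > 2.26)  # (d)
            and i - z < 0.25)               # (e)

def is_riz_candidate(m):
    """Eq. 2: riz candidates."""
    u, g, r, i, z = (m[b] for b in "ugriz")
    return (i < 20.2
            and u > 22.3 and g > 22.6       # undetected blueward of the break
            and r - i > 0.8
            and i - z < 0.47 * (r - i - 0.68))

# Hypothetical z ~ 4 quasar-like colors: red in g-r, comparatively blue in r-i
obj = dict(u=23.5, g=21.0, r=19.5, i=19.3, z=19.1)
```

An object like `obj` passes the gri cut but not the riz cut, while an ordinary blue star fails both.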
## 3 Spectroscopic Observations Spectra of 32 high-redshift quasar candidates from the Equatorial Stripe were obtained with the ARC 3.5m telescope of the Apache Point Observatory, using the Double Imaging Spectrograph (DIS), during a number of nights from March to May 1999. Exposure times of these candidates range from 600 seconds for the brightest ($`i^{}17`$) candidate to 5400 seconds for the faintest candidates. The instrument and spectral data reduction procedures were described in Paper I. The final spectra extend from 4000 Å to 10000 Å, with a spectral resolution of 12 Å in the blue and 14 Å in the red. They have been flux calibrated and corrected for telluric absorption with observations of F subdwarfs (Oke & Gunn (1983), Oke (1990)). Twenty-one of the candidates are identified to be high-redshift quasars at $`z>3.6`$. In particular, five of the candidates are quasars at $`z>4.6`$, with redshifts of 4.62, 4.69, 4.70, 4.92 and 5.03, respectively. Two of these objects have very unusual spectra. SDSSp J153259.96–003944.1 is identified as a quasar without detectable emission lines at $`z=4.62`$. Its optical, radio, and polarization properties are reported in a separate paper (Fan et al. 1999b ). SDSSp J160501.21–011220.0 is a Broad Absorption Line (BAL) quasar at a redshift of 4.92 with two BAL systems, and is described further below. One additional object, SDSSp J130348.94+002010.4, was also observed. It has redder $`g^{}r^{}`$ and $`r^{}i^{}`$ colors than required by our color selection criteria (eq. 1), but is identified as a BAL quasar at $`z=3.64`$. We present its spectrum below but will not include it in the future statistical analyses. Table 2 gives the position and SDSS photometry, the redshift of each confirmed SDSS quasar and the photometric run from which it was selected. For the objects in the overlap region between runs, only the results from the runs from which they are selected are listed. 
None of these quasars show magnitude differences of more than 0.2 mag in the high signal-to-noise passbands between runs. We also include the SDSS measurements of the two previously known $`z>3.6`$ quasars in Table 2. The naming convention for the SDSS sources is SDSSp J HHMMSS.SS$`\pm `$DDMMSS.S, where “p” stands for the preliminary SDSS astrometry, and the positions are expressed in J2000.0 coordinates. The preliminary SDSS astrometry is accurate to better than $`0.2^{\prime \prime }`$ in each axis. The photometry is expressed in asinh magnitudes (Lupton, Gunn & Szalay (1999), see also Paper I); this magnitude system approaches normal logarithmic magnitudes at high signal-to-noise ratio, but becomes a linear flux scale for low signal-to-noise ratio, even for slightly negative fluxes. The photometric errors given are statistical in nature and do not include the systematic errors due to PSF variation across the field and uncertainties in the photometric zeropoint. Positions of the 24 confirmed SDSS quasars on the color-color diagrams are plotted on Figure 1. Finding charts of all objects in Table 2 are given in Figure 2. They are $`200^{\prime \prime }\times 200^{\prime \prime }`$ $`i^{}`$ band SDSS images with an effective exposure time of 54.1 seconds. We matched the positions of quasars in Table 2 against radio surveys. Twenty of them are in the region covered by the FIRST survey (Becker, White & Helfand (1995)). Three of them have FIRST counterparts at 20 cm at the 1 mJy level (with positional matches better than $`1^{\prime \prime }`$). Two new SDSS quasars, SDSSp J123503.04–000331.8 ($`z=4.69`$) and SDSSp J141205.78–010152.6 ($`z=3.73`$), correspond to FIRST sources of 18.4 and 4.3 mJy at 20 cm, respectively. One of the previously known quasars, SDSSp J105320.43–001649.3 (BRI1050-0000, $`z=4.29`$), is also a FIRST source of 13.8 mJy. The three quasars with RA $`>16^h`$ are not covered by the FIRST survey. We matched them against the NVSS survey (Condon et al. 
(1998)); none of them has an NVSS counterpart at 20cm at the 2.5 mJy level. We have similarly cross-correlated the list against the ROSAT full-sky pixel images (Voges et al. (1999)); none of these objects were detected, implying a typical 3-$`\sigma `$ upper limit of $`3\times 10^{13}\mathrm{erg}\mathrm{cm}^2\mathrm{s}^1`$ in the 0.1 – 2.4 keV band. This result is not unexpected; only a few $`z>4`$ quasars have an observed X-ray flux above this value (e.g. Fabian et al. (1997), Moran & Helfand (1997)), and the typical X-ray flux from known optically selected $`z>4`$ quasars is a factor of four or more lower than our limit (Brandt (1999)). We expect positive ROSAT X-ray detections of a substantial fraction of the somewhat brighter, lower-redshift quasars in the SDSS. Among the 34 observed candidates which satisfy Equations 1 and 2 (including the two previously known quasars), 23 are confirmed as quasars at $`z>3.6`$, a success rate of 68%, similar to the success rate we reported in Paper I. Ten of the eleven non-quasars are faint late type stars, which are typical contaminants of high-redshift quasar searches. One of the candidates, however, has spectral features that we have not yet been able to identify; we will present its spectrum and discuss possible explanations in a separate paper. Figure 3 presents our spectra of the 22 new SDSS quasars. In Figure 3, we place the spectra on an absolute flux scale (to compensate for the uncertainties due to non-photometric conditions and variable seeing during the night) by forcing the synthetic $`i^{}`$ magnitudes from the spectra to be the same as the SDSS photometric measurements. The synthetic and photometric measurements agree within $`0.1`$ mag for the objects observed in photometric nights. This scatter is due to both the uncertainties in the SDSS photometric zeropoints (5 – 10%, see §2) and the spectroscopic observations. Therefore, the absolute flux scale in Figure 3 is only accurate to $`0.1`$ mag. 
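The asinh magnitudes of Lupton, Gunn & Szalay used for the photometry in Table 2 replace the logarithm with an inverse hyperbolic sine, $`m=-(2.5/\mathrm{ln}10)[\mathrm{asinh}(x/2b)+\mathrm{ln}b]`$, where $`x`$ is the flux in units of the zero-point flux and $`b`$ is a softening parameter. At high signal-to-noise ratio this reduces to the usual $`-2.5\mathrm{log}_{10}x`$, while remaining finite at zero or slightly negative flux. A quick numeric illustration (the value of $`b`$ below is arbitrary, not an actual SDSS per-band softening value):

```python
import math

def asinh_mag(x, b=1e-10):
    """Asinh magnitude: x is flux in zero-point units, b the softening parameter."""
    return -2.5 / math.log(10) * (math.asinh(x / (2 * b)) + math.log(b))

def log_mag(x):
    """Conventional logarithmic (Pogson) magnitude."""
    return -2.5 * math.log10(x)

# At high S/N the two scales agree; at zero or negative flux asinh_mag stays finite.
```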
The emission line properties of the quasars are listed in Table 3. Central wavelengths and rest frame equivalent widths of five major emission lines are measured following the procedures of Paper I. Table 4 gives the continuum properties of the quasars. As in Paper I, redshifts are determined from all emission lines redward of Ly$`\alpha `$; Ly$`\alpha `$ itself is not used, due to absorption from the Ly$`\alpha `$ forest on its blue side. The AB magnitude of the quasar continuum at 1450 Å (rest frame), $`AB_{1450}`$, is determined by the average flux in the continuum window from 1425 Å to 1475 Å (rest frame) from the spectra in Figure 3. The absolute magnitude $`M_B`$ is determined by assuming a cosmology of $`q_0=0.5`$, $`h=0.5`$, and a power law index of –0.5 following Schneider, Schmidt & Gunn (1991). The $`\mathrm{AB}_{1450}`$ and $`M_B`$ magnitudes are corrected for Galactic extinction using the reddening map of Schlegel, Finkbeiner & Davis (1998); these values are accurate to $`0.1`$ mag, due to the uncertainties in the absolute flux scale (see above). The emission and absorption properties of most of these quasars are very similar to those of other quasars at similar redshift (c.f. Schneider, Schmidt & Gunn (1989), 1991, 1997, Warren et al. (1991), Kennefick, Djorgovski & de Carvalho (1995), Storrie-Lombardi et al. (1996), Paper I), with some interesting exceptions, which we now discuss. ### 3.1 Notes on Individual Objects SDSSp J112253.51+005329.8 ($`z=4.57`$). A number of absorption systems are present in the spectrum. In particular, the peak of the Ly$`\alpha `$ emission line is self-absorbed by several lines. SDSSp J120441.73–002149.6 ($`z=5.03`$). This is the second quasar found at $`z5`$ in our survey. As was the case for SDSSp J033829.31–002156.3 ($`z=5.00`$, Paper I), the determination of an accurate redshift is not straightforward. A straight Gaussian fit to the Ly$`\alpha `$ emission line yields a redshift of 5.14. 
The Si IV and C IV emissions are affected by atmospheric absorption; they give a consistent redshift of about 5.03. We therefore adopt a redshift of 5.03 $`\pm `$ 0.05 for this quasar. For these two $`z5`$ quasars, the differences between the redshifts of the Ly$`\alpha `$ lines and that of the Si IV and C IV lines are about 0.1, much larger than other quasars at $`z>4`$ (Schneider, Schmidt & Gunn (1991)). It is not clear whether the Lyman $`\alpha `$ absorption at the blue wing of the Lyman $`\alpha `$ emission line can fully account for this large redshift discrepancy. The Ly$`\alpha `$ line profile is also more symmetric than that of other high-redshift quasars. This object is relatively bright at $`i^{}=19.3`$, suitable for high signal-to-noise ratio studies of its emission line profiles. SDSSp J130348.94+002010.4 ($`z=3.64`$). This BAL quasar shows absorption troughs shortward of $`z3.64`$ (as measured by the emission peaks) in virtually all sampled strong emission lines, extending blueward by up to $`13,000`$ km s<sup>-1</sup> (e.g., for C IV). Although this object is a marked outlier in SDSS colors, it is redder in $`g^{}r^{}`$ and $`i^{}z^{}`$ (perhaps because of the BALs) than the great majority of other known high-redshift quasars, and does not satisfy our formal selection criteria (eq. 1). SDSSp J141205.78–010152.6 ($`z=3.73`$). This object possesses an interesting absorption system at $`z=3.62`$ ($`v=7000`$ km s<sup>-1</sup>) with absorption lines of C IV (7158Å), N V (5729Å), Ly$`\alpha `$ (5619Å) and Ly$`\beta `$+O VI (4763Å). The C IV absorption has a FWHM of $`1300\mathrm{km}\mathrm{s}^1`$. This system is probably also responsible for the peculiar shape of the Lyman Limit System. 
High resolution observations are needed to determine the nature of this absorption system, especially whether it is a so-called “mini-BAL” system (defined as the velocity span of the absorption profile being narrower than 2000 $`\mathrm{km}\mathrm{s}^1`$; Barlow, Hamann, & Sargent (1997), Churchill et al. (1999)). SDSSp J141315.36+000032.1 ($`z=4.08`$). This object was observed under poor weather conditions, and the spectrum has a very low signal-to-noise ratio. However, the Ly$`\alpha `$ and C IV emission lines are clearly detected, and give consistent redshifts. SDSSp J141332.35–004909.7 ($`z=4.14`$). This is another mini-BAL quasar candidate. The C IV trough has a FWHM of $`1400\mathrm{km}\mathrm{s}^1`$. An accurate center and equivalent width of the Ly$`\alpha `$ line cannot be measured due to the presence of the BAL trough of N V, which appears blueward of the Ly$`\alpha `$ emission. SDSSp J151618.44–000544.3 ($`z=3.70`$). The spectrum of this object shows a strong damped Ly$`\alpha `$ system candidate at $`z=3.03`$ with rest-frame equivalent width of $`38`$ Å. In comparison, the strongest known damped Ly$`\alpha `$ system has a rest-frame equivalent width of 41 Å (Wolfe et al. (1986)). The spectrum at $`\lambda >7500`$Å is affected by a CCD defect and is not plotted. SDSSp J153259.96–003944.1 ($`z=4.62`$). This is a unique object. Its spectrum features two breaks at 6800 Å and 5100 Å, respectively. We interpret the two breaks as the onset of the Lyman $`\alpha `$ forest and a Lyman Limit System respectively, giving a consistent redshift of $`z=4.62`$. The redshifts of onset of the Lyman $`\alpha `$ forest and Lyman Limit System are typically close to the emission line redshift for high-redshift quasars (e.g., Schneider, Schmidt & Gunn (1991), Storrie-Lombardi et al. (1996)); we therefore adopt a redshift of 4.62 $`\pm `$ 0.04 for this object. However, this object has no detected emission lines. 
We discuss high signal-to-noise ratio Keck spectroscopy, VLA radio observations and optical broad-band polarimetry of this object in a separate paper (Fan et al. 1999b). SDSSp J160501.21–011220.0 ($`z=4.92`$). This object is the highest redshift BAL quasar yet discovered. It has two BAL systems in its spectrum, one at $`z=4.86`$, the other at $`z=4.69`$. The absorption due to the BAL systems is detected for the Ly$`\alpha `$, N V, Si IV and C IV lines. Because of the BALs, an emission line redshift cannot be measured accurately. We adopt a redshift of $`4.92\pm 0.05`$, as measured from the peaks of the N V, O I and Si IV lines. In Figure 4, we show the absorption systems for each line, aligned in the rest-frame velocity of the quasar. The two BALs are at relative velocities of 3000 km $`\mathrm{s}^{-1}`$ and 11,700 km $`\mathrm{s}^{-1}`$, respectively. This is an ideal object for further spectroscopic observations to study the BAL phenomenon at very high redshift. The presence of the BALs affects the broad-band colors, resulting in $`r^{*}-i^{*}\sim 3`$, compared to $`r^{*}-i^{*}<2`$ for other $`z\sim 5`$ quasars (Figure 1c). SDSSp J162116.91–004251.1 ($`z=3.70`$). This is a very bright high-redshift quasar ($`i^{*}=17.23`$, $`\mathrm{M}_\mathrm{B}=-28.81`$), the most luminous we have yet found. The signal-to-noise ratio of this spectrum is high, allowing identification of several absorption lines. It is particularly suitable for high-resolution studies of its absorption systems. SDSSp J165527.61–000619.2 ($`z=3.99`$). This object has a very strong Ly$`\alpha `$ emission line; the rest-frame equivalent width is 172 Å, which is more than twice the value of most quasars in this redshift range. The presence of this strong emission line affects the broad-band colors. It has $`g^{*}-r^{*}\sim 3`$, compared to $`g^{*}-r^{*}\sim 1.6`$ for ordinary $`z\sim 4`$ quasars (Figure 1b).
The Sloan Digital Sky Survey (SDSS) is a joint project of the University of Chicago, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, the Max-Planck-Institute for Astronomy, Princeton University, the United States Naval Observatory, and the University of Washington. Apache Point Observatory, site of the SDSS, is operated by the Astrophysical Research Consortium. Funding for the project has been provided by the Alfred P. Sloan Foundation, the SDSS member institutions, the National Aeronautics and Space Administration, the National Science Foundation, the U.S. Department of Energy, and the Ministry of Education of Japan. The SDSS Web site is http://www.sdss.org/. XF and MAS acknowledge additional support from Research Corporation, NSF grants AST96-16901, the Princeton University Research Board, and an Advisory Council Scholarship. DPS acknowledges the support of NSF grant AST95-09919 and AST99-00703. We thank Niel Brandt for useful discussions. We thank Karen Gloria and Russet McMillan for their usual expert help at the 3.5m telescope.
# Quantum Stochastic Absorption

## I Introduction

Wave propagation in an active random medium has attracted much attention during the past decade. Recently, many experiments have reported lasing action of light in optically active strongly scattering media. These systems exhibit interesting physical properties due to the combined effects of static disorder-induced multiple scattering and of coherent amplification/absorption. In the extensively studied case of electron motion in a random medium, it is well established that quantum interference effects arising from serial disorder in one-dimensional systems lead to Anderson localization. An essential difference between electron and light propagation is the absence of a conservation law for photons. Light can be absorbed or amplified while retaining phase coherence. In most of the theoretical studies, amplification or absorption is modeled phenomenologically by introducing an imaginary potential (optical potential) in the Hamiltonian. In the case of light (electro-magnetic waves) this corresponds to a medium with a complex dielectric constant. Several interesting effects have been predicted, which include the statistics of super-reflection and transmission and the dual symmetry between absorption and amplification. Media thus modeled are referred to as coherently absorbing or amplifying. In the case of electron transport, inelastic scattering (due to phonons) leads to loss of phase memory of the wave function. Thus the motion of electrons becomes phase incoherent and sample-to-sample fluctuations become self-averaging in the high temperature limit, leading to classical behavior. There has been much interest in the effect of inelastic scattering on coherent tunneling through potential barriers. To allow for the possibility of inelastic decay in otherwise coherent tunneling through potential barriers, several studies invoke absorption.
To study the above phenomenon, one resorts to optical potential models (coherent absorption models). In the optical potential model the potential is made complex, $`V(x)=V_r(x)-iV_i`$. The Hamiltonian becomes non-Hermitian, resulting in absorption or amplification of probability current depending on the sign of $`V_i`$. The presence of an imaginary potential (absorption/amplification) leads to many counter-intuitive features. In the scattering case, in the vicinity of the absorber, the particle experiences a mismatch in the potential (the potential being complex) and therefore it tries to avoid this region by enhanced back reflection. The imaginary potential plays the dual role of an absorber and a reflector. In other words, in such models absorption without reflection is not possible. Naively one expects the absorption to increase monotonically as a function of $`V_i`$. However, the observed behavior is non-monotonic. At first the absorption increases and, after exhibiting a maximum, decreases to zero as $`V_i\to \mathrm{\infty }`$. In this limit the absorber acts as a perfect reflector. During each scattering event an electron picks up an additional scattering phase shift due to $`V_i`$, which along with multiple interference leads to additional coherence or resonances in the system. Thus, due to the presence of imaginary potentials, we have additional reflection and resonances in the system.

In the presence of coherent absorption and quenched disorder, the stationary distribution for the reflection coefficient has been calculated. This has been done within the random phase approximation (RPA) using the invariant imbedding method. The stationary distribution is given by: $$P_s(r)=\{\begin{array}{cc}\frac{|D|\mathrm{exp}(|D|)\mathrm{exp}\left(-|D|/(1-r)\right)}{(1-r)^2}\hfill & \text{for }r\le 1\hfill \\ 0\hfill & \text{for }r>1.\hfill \end{array}$$ (1) Here $`D`$ is proportional to $`V_i/W`$, $`W`$ being the strength of disorder. Notice that the distribution has a single peak, which shifts towards $`r=0`$ with increasing absorption strength $`V_i`$.
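As a consistency check on Eq.1, the substitution $`u=|D|/(1-r)`$ collapses the integral of $`P_s(r)`$ over $`0\le r\le 1`$ to $`\mathrm{exp}(|D|){\int }_{|D|}^{\mathrm{\infty }}\mathrm{exp}(-u)𝑑u=1`$, so the distribution is normalized for any $`|D|`$. A short numeric verification of this with plain midpoint quadrature (nothing here is model-specific):

```python
import math

def P_s(r, D):
    """Stationary reflection-coefficient distribution of Eq. 1 (for r < 1)."""
    return abs(D) * math.exp(abs(D)) * math.exp(-abs(D) / (1.0 - r)) / (1.0 - r) ** 2

def integrate(f, a, b, n=100000):
    """Midpoint-rule quadrature."""
    h = (b - a) / n
    return h * sum(f(a + (j + 0.5) * h) for j in range(n))

# The total probability on [0, 1] should be 1 for weak and strong absorption alike.
```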
However, the exact distribution obtained numerically for strong disorder and strong absorption shows significant qualitative departure from this analytical distribution. For sufficiently strong absorption, the numerically obtained stationary distribution shows a double peak structure. In the limit $`V_i\to \mathrm{\infty }`$ the distribution becomes a delta function at $`r=1`$. This corresponds to the limit where the absorber acts as a perfect reflector. To this end we would like to develop a model where absorption does not lead to concomitant reflection and additional resonances as discussed above. Recently, such a stochastic absorption model was developed by Pradhan, based on the work of Büttiker. In his treatment several absorptive side-channels are added to the purely elastic channels of interest. A particle that once enters an absorbing side-channel never returns and is physically lost. He obtained the Langevin equation for the reflection amplitude $`R(L)`$ for a random medium of length $`L`$ by enlarging the $`S`$-matrix to include the side-channels. In the continuum limit the equation for $`R(L)`$ is: $$\frac{dR}{dL}=-\alpha R(L)+2ikR(L)+ikV(L)[1+R(L)]^2,$$ (3) where $`\alpha `$ is the absorption parameter and $`V(L)`$ is the random potential representing the static disorder. Interestingly, within the random phase approximation (RPA), the stationary probability distribution for the reflection coefficient $`P_s(r)`$ (for $`L\to \mathrm{\infty }`$) is again given by Eq.1.

In our present work we develop another simple model for absorption, which can readily be used to study the case of an amplifying medium as well. The medium comprises delta-function scatterers of random strengths placed at regular spatial intervals $`a`$. To model absorption (leaking out) of electrons, an attenuation constant per unit length $`\alpha `$ is introduced.
Every time the electron traverses the free region between the delta scatterers, we insert a factor $`\mathrm{exp}(-\alpha a)`$ in the free propagator following Ref. . We find that this method of modeling absorption does not lead to additional reflection and resonances, in contrast to the optical potential models. We obtain the localization length and study the statistics of the reflection and transmission coefficients. The stationary distribution of the reflection coefficient agrees with Eq.1 in a larger parameter space. Following the earlier method, the continuum limit of our model leads to the same Langevin equation (Eq. 3) for $`R(L)`$, with $`\alpha `$ replaced by $`2\alpha `$. Naturally, agreement of our result with Eq.1 follows. In Sec.II we give the details of our model and the numerical procedure. The section after that is devoted to results and discussion.

## II The Model

We carry out calculations on wave propagation in an absorbing medium characterized by an attenuation constant $`\alpha `$ and interspersed with a chain of uniformly spaced, independent delta-function scatterers of random strengths. The $`i^{th}`$ delta-function scattering center is described by a transfer matrix $$𝖬_𝗂=\left(\begin{array}{cc}1-iq_i/2k& -iq_i/2k\\ iq_i/2k& 1+iq_i/2k\end{array}\right)$$ where $`q_i`$ is the strength of the $`i^{th}`$ delta-function. The $`q_i`$’s are uniformly distributed over the range $`-W/2\le q_i\le W/2`$, i.e., $`P(q_i)=1/W`$. Here $`W`$ is the disorder strength. We set the units of $`\mathrm{\hbar }`$ and $`2m`$ to be unity. The energy of the incident wave is $`E=k^2`$. For further analysis, $`W`$ and $`\alpha `$ are scaled with respect to $`a`$ and made dimensionless.
Propagation of the wave between two consecutive delta-function scatterers separated by a unit spacing ($`a=1`$) is described by the matrix $$𝖷=\left(\begin{array}{cc}e^{ik-\alpha }& 0\\ 0& e^{-ik+\alpha }\end{array}\right).$$ The total transfer matrix for the $`L`$-site system is constructed by repeated application of $`𝖬_𝗂`$ and $`𝖷`$: $$M=𝖬_𝖫𝖷\mathrm{}\mathrm{𝖷𝖬}_\mathrm{𝟤}\mathrm{𝖷𝖬}_\mathrm{𝟣}.$$ From $`M`$ the reflection and transmission amplitudes are calculated using $$R=-\frac{M(2,1)}{M(2,2)}$$ and $$T=\frac{\mathrm{𝖽𝖾𝗍}M}{M(2,2)}.$$ The reflection and transmission coefficients are $`r=|R|^2`$ and $`t=|T|^2`$ respectively, and the absorption is given by $`\sigma =1-r-t`$. Thus, due to absorption, the total flux is not conserved and we have $`r+t\le 1`$.

## III Results and Discussion

In our studies we consider at least 10,000 realizations for calculating the various distributions and averages. In the case of stationary distributions, the lengths of the samples considered were about 5 to 10 times the localization length. We also verified that the corresponding distributions or averages do not evolve any further with increasing sample length $`L`$. All results are shown for incident energy $`E=k^2=1.0`$ unless specified otherwise.

We first consider the behavior of $`\langle \mathrm{ln}t\rangle `$. The angular bracket denotes the ensemble average. In Fig.1 we plot $`\langle \mathrm{ln}t\rangle `$ as a function of length $`L`$ for an ordered absorptive medium ($`W=0.0`$, $`\alpha =0.05`$), an ensemble-averaged disordered non-absorptive medium ($`W=1.0`$, $`\alpha =0.0`$) and a disordered absorptive medium ($`W=1.0`$, $`\alpha =0.05,0.1,0.15`$). In all cases the transmission decays exponentially with the length. The absorption-induced length scale $`\xi `$ in the random medium associated with the decay of the transmission coefficient is always less than both $`\xi _a`$ and $`\xi _w`$.
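The construction above is compact enough to state in code: build each $`𝖬_𝗂`$ and the attenuating propagator $`𝖷`$ as 2×2 complex matrices, multiply through the chain, and read off $`r`$, $`t`$ and $`\sigma `$. The sketch below is our own minimal version (with $`k=1`$, $`a=1`$, and the reflection amplitude taken as $`-M(2,1)/M(2,2)`$, the standard transfer-matrix sign convention):

```python
import cmath
import random

def scatterer(q, k=1.0):
    """Transfer matrix of a delta scatterer of strength q (units hbar = 2m = 1)."""
    e = 1j * q / (2.0 * k)
    return [[1.0 - e, -e], [e, 1.0 + e]]

def propagator(k=1.0, alpha=0.0):
    """Free propagation over unit spacing with attenuation exp(-alpha)."""
    return [[cmath.exp(1j * k - alpha), 0.0], [0.0, cmath.exp(-1j * k + alpha)]]

def matmul(A, B):
    return [[sum(A[i][m] * B[m][j] for m in range(2)) for j in range(2)]
            for i in range(2)]

def r_t_sigma(qs, k=1.0, alpha=0.0):
    """Reflection, transmission and absorption for a chain of delta scatterers."""
    M = scatterer(qs[0], k)
    for q in qs[1:]:
        M = matmul(scatterer(q, k), matmul(propagator(k, alpha), M))
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    R = -M[1][0] / M[1][1]   # reflection amplitude (standard sign convention)
    T = det / M[1][1]        # transmission amplitude
    r, t = abs(R) ** 2, abs(T) ** 2
    return r, t, 1.0 - r - t

random.seed(1)
qs = [random.uniform(-0.5, 0.5) for _ in range(100)]   # disorder strength W = 1
```

For $`\alpha =0`$ the product conserves flux ($`r+t=1`$ to machine precision), while any $`\alpha >0`$ yields $`\sigma >0`$ and a reduced transmission, with no absorption-induced extra reflection built into the matrices themselves.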
The localization length for the disordered non-absorptive medium scales as $`\xi _w=96k^2/W^2`$ and for the ordered absorptive medium as $`\xi _a=1/\alpha `$. In Fig.2 we show the plot of $`1/\xi `$ versus $`1/\xi _w+1/\xi _a`$ obtained by changing $`\alpha `$ for various values of disorder strength $`W`$. We have numerically calculated $`1/\xi `$ for the different cases. All the points fall on a straight line with unit slope, indicating the relation $`1/\xi =1/\xi _w+1/\xi _a`$. Such a relation exists for the case of coherently absorbing and amplifying media.

To study the nature of fluctuations in the transmission coefficient, in Fig.3 we plot, on a log scale, the average $`\langle t\rangle `$, the root-mean-squared variance $`t_v=\sqrt{\langle t^2\rangle -\langle t\rangle ^2}`$ and the root-mean-squared relative variance $`t_{rv}=t_v/\langle t\rangle `$ as functions of length $`L`$ for $`W=1.0`$ and $`\alpha =0.01`$. We see from the figure that $`t_v`$ is less than $`\langle t\rangle `$ and $`t_{rv}`$ is less than unity up to $`L/\xi \sim 3`$. But beyond that $`t_v`$ becomes greater than $`\langle t\rangle `$ and $`t_{rv}`$ crosses unity, making the transmission coefficient a non-self-averaging quantity. This implies that the fluctuation of $`t`$ over the ensemble of macroscopically identical samples dominates the ensemble average. The transmission coefficient becomes very sensitive to the spatial realization of the impurity configuration.

In Fig.4 we show the distribution $`P(t)`$ at different sample lengths for $`\alpha =0.01`$ and $`W=1.0`$. For small lengths $`L`$, resonant transmission dominates and $`P(t)`$ peaks at a large value of $`t`$. In fact, for $`L\to 0`$, $`P(t)\to \delta (t-1)`$. As the length becomes comparable to the localization length, $`L\sim \xi `$, multiple reflections start dominating. Consequently, the time spent inside the medium increases, leading to more absorption. Thus, the peak of the distribution shifts to smaller values of $`t`$ and the distribution broadens due to randomization by disorder.
In the long length limit $`L\gg \xi `$, the distribution develops long tails and its peak shifts towards $`t=0`$. The transmittance shows large sample-to-sample fluctuations and becomes a non-self-averaging quantity. It is the tail of the distribution that determines the behavior of the fluctuations. Finally, as expected, for $`L\to \mathrm{\infty }`$, $`P(t)\to \delta (t)`$.

From the previous discussion it is clear that the transmittance becomes non-self-averaging due to the appearance of tails in the $`t`$-distribution. These tails owe their existence to the presence of resonant realizations in the ensemble. Hence, it is worthwhile to investigate the nature of the resonances and the effect of absorption on them. Specifically, we would like to understand whether the presence of absorption gives rise to any new resonances. It is well known from studies of passive disordered media that the ensemble fluctuations and the fluctuations for a given sample as a function of chemical potential or energy are expected to be related by some sort of ergodicity, i.e., the fluctuations measured as a function of the control parameter are identical to the fluctuations observed by changing the impurity configuration. In Fig.5(a) we show the plot of $`t`$ versus $`k`$ for $`W=1.0`$ and $`\alpha =0`$ at $`L=100`$ for a given realization of the random potential. Figure 5(b) shows a plot of $`t`$ versus $`k`$ for the same realization but with $`\alpha =0.01`$. By mere visual inspection one can see that the only effect of absorption, apart from reducing the value of the transmission, is to increase the width of the resonance peaks of the passive case. Thus the presence of absorption does not introduce any new resonances. This can be seen from Fig.5(c) and (d), which magnify Fig.5(a) and (b) respectively in a narrow region between $`k=1.5`$ and $`k=2.0`$. We do not see any new peaks in the transmission spectrum for the absorptive case. A similar effect is observed for the reflection as well.
We now turn our attention to the statistics of the reflection coefficient. In Fig.6 we plot $`\langle \mathrm{ln}r\rangle `$ as a function of length $`L`$ for a fixed value of disorder strength $`W=1.0`$ and different values of absorption strength $`\alpha `$ as indicated in the figure. It increases with $`L`$ initially and, for $`L\gtrsim \xi `$, it saturates. At any $`L`$, $`\langle \mathrm{ln}r\rangle |_{W,\alpha }<\langle \mathrm{ln}r\rangle |_{W,0}`$. This is in contrast to the behavior observed for the case of coherent absorption. As we know, in the case of coherent absorption, the reflection coefficient tends to unity as the absorption strength becomes very large. In this regime, the predominantly reflecting nature of the optical potential makes the reflection larger than that in the corresponding passive case.

In Fig.7 we show $`P_s(r)`$ for various values of $`\alpha `$. In the small-$`\alpha `$ range, i.e., for $`\alpha =0.001`$, the distribution has a peak at large $`r`$. As we increase $`\alpha `$ the peak shifts to smaller values of $`r`$. The thick line shows the fit obtained using the analytical expression given in Eq.1. In the limit of large $`\alpha `$, the distribution tends to a delta function at $`r=0`$. This is in sharp contrast to the behavior observed for coherent absorption. The distribution is always single-peaked. For all non-zero values of $`\alpha `$ the medium acts as an absorber only, and there is no additional reflection due to absorption. Figure 8 shows a monotonic decrease of the saturated value of the average reflection coefficient $`r_s`$ as a function of $`\alpha `$. The average absorption, defined as $`\langle \sigma \rangle =1-\langle r\rangle -\langle t\rangle `$, increases monotonically with increasing $`\alpha `$ and in the limit $`\alpha \to \mathrm{\infty }`$ saturates to unity, in contrast to the optical model, wherein it tends to zero. In the case of the optical model, the absorption coefficient is a non-monotonic function of the absorption strength $`V_i`$ and, for values of $`V_i`$ near the peak, the stationary distribution of the reflection coefficient displays a double peak.
In fact our model exhibits properties in agreement with the physical expectations for an absorbing medium, i.e., the stronger the absorption, the smaller the reflection and transmission across the medium. Finally, we discuss the phase distribution. Figure 9 shows the stationary distribution of the phase of the reflected wave for a fixed disorder strength $`W=1.0`$ and various values of $`\alpha `$. For small values of disorder one generally expects the phase distribution to be uniform if the system size is around the localization length. This is seen in Fig. 9(a) for the case of weak absorption. As we increase $`\alpha `$ the phase distribution develops two distinct peaks – a feature observed for coherent absorption also. This is related to the fact that the localization length decreases with $`\alpha `$. We would like to point out that the stationary distribution $`P_s(r)`$ is the same within the RPA for coherently as well as stochastically absorbing media. Unlike in the case of a coherently absorbing medium, Eq. 1 seems to be valid in a larger parameter space for the stochastically absorbing medium, where the RPA may not be valid. The parameters for the validity of the RPA are determined by the observation of a uniform phase distribution. However, beyond the RPA, $`P_s(r)`$ for stochastic and coherent absorbing media are qualitatively distinct from each other. In conclusion, we have studied a new model of quantum stochastic absorption. The behavior observed for the transmission and reflection coefficients is in accordance with the physical expectations for an absorbing medium. This model can be extended to the case of a stochastically amplifying medium. It exhibits a duality between absorption and amplification which has received much attention recently. Results for this will be reported elsewhere. ## ACKNOWLEDGMENTS One of us (DS) would like to thank Prof. S. N. Behera for extending hospitality at the Institute of Physics, Bhubaneswar.
# ITEP-TH-29/99, KANAZAWA-99/08, UL-NTZ 19/1999 1 September, 1999 Vortex profiles and vortex interactions at the electroweak crossover (presented by the first author at Lattice’99, Pisa, Italy) ## 1 Introduction Although the standard model does not possess topologically stable monopole– and vortex–like defects, one can define so-called embedded topological defects : Nambu monopoles and $`Z`$–vortex strings . Last year, we started to investigate how the electroweak transition and the continuous crossover can be understood in terms of the behavior of these excitations. This has been done in the framework of dimensional reduction, which is reliable for Higgs boson masses between $`30`$ and $`240`$ GeV . Due to the similarity of the phase transitions in the $`SU(2)`$ Higgs model and in the $`SU(2)\times U(1)`$ electroweak theory , we restricted ourselves to the $`3D`$ $`SU(2)`$ Higgs model. In our lattice studies we observed the vortices to undergo a so–called percolation transition, which coincides with the first order phase transition at small Higgs masses. The percolation transition continues to exist at realistic (large) Higgs mass, where the electroweak theory has a smooth crossover rather than a true thermal phase transition . In the present study we take a closer look at the vortex properties within the electroweak crossover regime \[for a Higgs boson mass of $`103`$ GeV ($`94`$ GeV) in a $`4D`$ $`SU(2)`$ Higgs model with (without) the top quark\]. ## 2 Lattice model and defect operators To construct the vortices on the lattice we use their correspondence to the Abrikosov-Nielsen-Olesen (ANO) strings embedded into an Abelian subgroup of the $`SU(2)`$ gauge group. We define a composite adjoint unit vector field $`n_x=n_x^a\sigma ^a`$, $`n_x^a=(\varphi _x^+\sigma ^a\varphi _x)/(\varphi _x^+\varphi _x)`$, where the 2–component complex isospinor $`\varphi _x`$ is the Higgs field.
The field $`n`$ allows us to define the gauge invariant flux $`\overline{\theta }_p`$ through the plaquette $`p=\{x,\mu \nu \}`$ as $`\overline{\theta }_p=\mathrm{arg}(\mathrm{Tr}[(1\mathrm{l}+n_x)V_{x,\mu }V_{x+\widehat{\mu },\nu }V_{x+\widehat{\nu },\mu }^+V_{x,\nu }^+])`$, where $`V_{x,\mu }(U,n)=U_{x,\mu }+n_xU_{x,\mu }n_{x+\widehat{\mu }}`$ is the projection of a link. Abelian link angles $`\chi _{x,\mu }=\mathrm{arg}\left(\varphi _x^+V_{x,\mu }\varphi _{x+\widehat{\mu }}\right)`$ are used to construct a plaquette angle $`\chi _p=\chi _{x,\mu }+\chi _{x+\widehat{\mu },\nu }-\chi _{x+\widehat{\nu },\mu }-\chi _{x,\nu }`$. The vorticity $`\sigma _p`$ on the plaquette $`p`$ is: $`\sigma _p=(\chi _p-\overline{\theta }_p)/(2\pi ).`$ (1) The vortex trajectories are formed by the links $`{}_{}{}^{}l=\{x,\rho \}`$ of the dual lattice ($`{}_{}{}^{}l`$ dual to $`p`$) which carry a non–zero vorticity $`{}_{}{}^{}\sigma _{x,\rho }^{}=\epsilon _{\rho \mu \nu }\sigma _{x,\mu \nu }/2`$. These trajectories are either closed or begin/end on Nambu (anti-) monopoles. ## 3 Vortex profiles The topologically unstable vortices populate the “vacuum” of the model due to thermal fluctuations. Looking at realistic equilibrium configurations, we would not expect to find vortices with classical (ANO type, Refs. ) profiles. Our lattice vortex defect operator (1) is constructed to detect a line-like object (in $`3D`$ space–time) with non-zero vorticity (the “soul of the physical vortex”). Within a given gauge–Higgs configuration, the vortex profile around that soul would be screened by quantum fluctuations, compared to its classical shape. However, the average over all vacuum vortices may reveal a structure resembling a classical vortex to some extent. This requires studying the correlators of the vortex operators with various field operators. Profiles obtained in this way are called “quantum vortex profiles”, although thermal fluctuations contribute to the correlators, too.
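As an illustration of the defect operator (1), the following numpy sketch evaluates $`\overline{\theta }_p`$, $`\chi _p`$ and $`\sigma _p`$ on a single plaquette populated with random $`SU(2)`$ links and random Higgs doublets at the corners; the data layout and all names are our own. Since $`\chi _p`$ and $`\overline{\theta }_p`$ agree modulo $`2\pi `$, the vorticity comes out as an integer (the gauge-invariant winding), which is easy to check.

```python
import numpy as np

PAULI = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

def random_su2(rng):
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return a[0] * np.eye(2) + 1j * sum(a[i + 1] * PAULI[i] for i in range(3))

def n_mat(phi):
    """n_x = n^a sigma^a with n^a = (phi^+ sigma^a phi)/(phi^+ phi)."""
    norm = np.real(phi.conj() @ phi)
    return sum(np.real(phi.conj() @ s @ phi) / norm * s for s in PAULI)

def vorticity(phis, Us):
    """sigma_p = (chi_p - theta_p)/(2 pi) on one plaquette.
    phis: Higgs doublets at x, x+mu, x+mu+nu, x+nu; Us: the four SU(2) links."""
    p0, p1, p2, p3 = phis
    ends = [(p0, p1), (p1, p2), (p3, p2), (p0, p3)]   # link endpoints
    Vs = [U + n_mat(a) @ U @ n_mat(b) for U, (a, b) in zip(Us, ends)]
    theta = np.angle(np.trace((np.eye(2) + n_mat(p0))
                              @ Vs[0] @ Vs[1] @ Vs[2].conj().T @ Vs[3].conj().T))
    chis = [np.angle(a.conj() @ V @ b) for V, (a, b) in zip(Vs, ends)]
    chi_p = chis[0] + chis[1] - chis[2] - chis[3]
    return (chi_p - theta) / (2 * np.pi)

rng = np.random.default_rng(0)
phis = [w / np.linalg.norm(w)
        for w in (rng.normal(size=2) + 1j * rng.normal(size=2) for _ in range(4))]
Us = [random_su2(rng) for _ in range(4)]
s = vorticity(phis, Us)
print(s)   # an integer up to rounding error
```

In a full simulation one would loop this over all plaquettes of the lattice and dualize non-zero $`\sigma _p`$ into vortex links.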
Preliminary indications of some vortex-like structure have been given in Ref. . While in the center of a classical continuum vortex the Higgs field modulus is zero and the energy density reaches its maximum , in thermal equilibrium on the lattice the (squared) modulus of the Higgs field, $`\rho ^2=(\varphi _x^+\varphi _x)`$ (the gauge field energy density, $`E^g=1-\frac{1}{2}\mathrm{Tr}U_p`$) was found to be lower (higher) on the vortex trajectories than the corresponding bulk average<sup>1</sup><sup>1</sup>1A similar method was used to study the physical features of Abelian monopoles in $`SU(2)`$ gluodynamics, Ref. .. We define the following vortex–field correlators for plaquettes $`P_0`$ and $`P_R`$ located in the same plane (perpendicular to a local segment of the vortex trajectory), $`C_\rho (R)=<\sigma _{P_0}^2\rho _{P_R}^2>,C_E(R)=<\sigma _{P_0}^2E_{P_R}^g>,`$ (2) where $`R`$ is the distance between the plaquettes. The Higgs modulus is $`\rho _P^2=(1/4)\sum _{x\in P}\rho _x^2`$ (averaged over the corners of $`P`$). The quantum vortex profiles for the Higgs field, $`C_\rho (R)`$, and the gauge field energy, $`C_E(R)`$, are shown in Figures 1(a) and 1(b), respectively, for two values of the hopping parameter: $`\beta _H=.3440`$ (symmetric side) and $`\beta _H=.3536`$ (on top of the crossover), on a $`16^3`$ lattice at gauge coupling<sup>2</sup><sup>2</sup>2Results for larger $`\beta _G`$ and bigger lattices will be presented elsewhere. $`\beta _G=8`$. On the Higgs side the profiles are qualitatively similar. One can see from these figures that the vortices have, on average, a thickness of several lattice spacings, while the center of the vortex is located within the plaquette $`P_0`$ \[identified by the vortex current (1)\].
To parametrize the vortex shape we fit the correlator data (2) by the following functions: $`C_\rho ^{\mathrm{fit}}(R)=C_\rho +B_\rho \sum _{x\in P_0,y\in P_R}G(|𝐱-𝐲|;m_\rho ),`$ $`C_E^{\mathrm{fit}}(R)=C_E+B_EG(R;m_E)`$ (3) with asymptotic values $`C_{\rho ,E}`$ as well as amplitudes $`B_{\rho ,E}`$ and inverse coherence lengths (effective masses) $`m_E`$ and $`m_\rho `$. The lattice function $`G(R;m)`$ was proposed to fit point–point correlation functions in Ref. . The function $`G`$ is proportional to the scalar propagator with mass $`2\mathrm{sinh}(m/2)`$ in $`3D`$ space. Best fits are shown as solid lines in Figure 1. As an example, the effective masses as a function of the hopping parameter (decreasing temperature) are shown in Figure 2 for the symmetric side of the crossover. The masses become small near the crossover (where the fit ceases to be good). Deeper on the symmetric side, the quantum vortex profiles are squeezed compared to the classical ones due to Debye screening, leading to a smaller coherence length. Approaching the crossover from the symmetric side, the density of the vortices becomes smaller, reducing this effect. ## 4 Type of vortex vacuum A vortex medium can be characterized in superconductor terms: if two static vortices with the same vorticity attract (repel) each other, the medium corresponds to a type–I (type–II) superconductor. In order to determine the type of interaction for the case of electroweak matter we have measured two–point functions of the vortex currents: $`C_+(R)=<|\sigma _{P_0}||\sigma _{P_R}|>=2(g_{++}+g_{+-}),`$ $`C_{-}(R)=<\sigma _{P_0}\sigma _{P_R}>=2(g_{++}-g_{+-})`$ (4) where $`g_{+\pm }=g_{+\pm }(R)`$ is shorthand for the contributions to the correlation functions $`C_\pm `$ of parallel or anti–parallel vortices at positions $`P_0`$ and $`P_R`$. Obviously, $`g_{++}=g_{--}`$ and $`g_{+-}=g_{-+}`$.
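Returning to the profile fits of Eq. (3), a fit of this type is straightforward to set up. The sketch below uses synthetic profile data and the continuum $`3D`$ form $`e^{-mR}/R`$ as a stand-in for the lattice propagator $`G(R;m)`$; the stand-in form and all numbers here are our assumptions, for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def profile(R, C, B, m):
    # asymptotic value C plus a massive-propagator term, cf. Eq. (3);
    # exp(-m R)/R is a continuum stand-in for the lattice G(R; m)
    return C + B * np.exp(-m * R) / R

rng = np.random.default_rng(2)
R = np.arange(1.0, 9.0)                    # distances in lattice units
data = profile(R, 0.70, -0.30, 0.8)        # invented "Higgs profile" numbers
data += 0.002 * rng.normal(size=R.size)    # small statistical noise
(C_fit, B_fit, m_fit), _ = curve_fit(profile, R, data, p0=(0.5, -0.1, 0.5))
print(f"effective mass m = {m_fit:.2f}")   # close to the input m = 0.8
```

With real correlator data the fitted `m_fit` plays the role of the inverse coherence length plotted in Figure 2.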
The correlators $`g_{++}(R)=(C_++C_{-})/4`$ ($`g_{+-}(R)=(C_+-C_{-})/4`$) can be interpreted as the average density of vortices (anti-vortices), relative to the bulk density (normalized to unity at $`R\to \infty `$), in the plane orthogonal to a vortex current at distance $`R`$. If the vortices attract/repel each other (type–I/type–II), the long range tail of the function $`g_{++}(R)`$ should exponentially approach unity from above/below, while the behavior of $`g_{+-}(R)`$ is attractive independently of the type of superconductivity. In Figure 3 one can see that the same-vorticity pair distribution $`g_{++}`$<sup>3</sup><sup>3</sup>3Shortest distances and data points of the order of the noise have been omitted. decreases exponentially, with a slope that becomes minimal on the crossover. Apart from $`R<2`$, the opposite-vorticity distribution $`g_{+-}`$ behaves similarly. Thus we conclude that electroweak matter at the crossover belongs to the type–I vortex vacuum class. The attractive character becomes even stronger on the Higgs (lower temperature) side. ## Acknowledgments M.N. Ch. was partially supported by grants INTAS-RFBR-95-0681, RFBR-99-01230a and INTAS 96-370.
# Multiscaling in Infinite Dimensional Collision Processes ## Abstract We study relaxation properties of two-body collisions in infinite spatial dimension. We show that this process exhibits multiscaling asymptotic behavior, as the underlying distribution is characterized by an infinite set of nontrivial exponents. These nonequilibrium relaxation characteristics are found to be closely related to the steady state properties of the system. PACS numbers: 05.40.+j, 05.20.Dd, 02.50.Ey Our understanding of the statistical mechanics of nonequilibrium systems remains incomplete, in sharp contrast with their equilibrium counterpart. The rich phenomenology associated with the dynamics of far-from-equilibrium interacting particle systems exposes the lack of a unifying theoretical framework. Simple tractable microscopic models can therefore help us gain insight and improve the description of nonequilibrium dynamics. In this study, we focus on the nonequilibrium relaxation of an infinite particle system interacting via two-body collisions. We find that a hierarchy of scales underlies the relaxation. In particular, we devise an extremely simple system which exhibits multiscaling in infinite dimension, while in finite dimensions simple scaling behavior is restored. Furthermore, we show that this behavior extends to a broader class of collision processes. We are interested in modeling collision processes in a structureless, infinite dimensional space. Therefore, we place an infinite number of identical particles on the nodes of a completely connected graph. Particles are characterized by a single parameter, their velocity $`v`$. Two-body collisions are realized by choosing two particles at random and changing their velocities according to $`(u_1,u_2)\to (v_1,v_2)`$ with $`\left(\begin{array}{c}v_1\\ v_2\end{array}\right)=\left(\begin{array}{cc}\gamma & 1-\gamma \\ 1-\gamma & \gamma \end{array}\right)\left(\begin{array}{c}u_1\\ u_2\end{array}\right)`$ (1) and $`0\le \gamma \le 1`$.
In other words, the post-collision velocities are given by a linear combination of the pre-collision velocities. Both the total momentum ($`u_1+u_2=v_1+v_2`$) and the total number of particles are conserved by this process. In fact, the collision rule (1) is the most general linear combination which obeys momentum conservation and Galilean invariance, i.e., invariance under the velocity translation $`v\to v-v_0`$. Our motivation for studying this problem is inelastic collisions in one-dimensional granular gases . While the two problems involve different collision rates, they share the same trivial final state where all velocities vanish, $`P(v,t)\to \delta (v)`$ when $`t\to \infty `$ (without loss of generality, the average velocity was set to zero by invoking the transformation $`v\to v-\langle v\rangle `$). We chose to describe this work in slightly more general terms since closely related dynamics were used in different contexts, including voting systems , asset exchange processes , combinatorial processes , traffic flows , and force fluctuations in bead packs . We will show that multiscaling characterizes fluctuations in some of these problems as well. Velocity fluctuations may be obtained via the probability distribution function $`P(v,t)`$, which evolves according to the following master equation $`{\displaystyle \frac{\partial P(v,t)}{\partial t}}={\displaystyle \int _{-\infty }^{\infty }\int _{-\infty }^{\infty }}du_1du_2P(u_1,t)P(u_2,t)\times \left[\delta (v-\gamma u_1-(1-\gamma )u_2)-\delta (v-u_2)\right].`$ (2) The $`\delta `$-functions on the right-hand side reflect the collision rule (1) and guarantee conservation of the number of particles, $`\int dvP(v,t)=1`$, and of the total momentum, $`\int dv\,vP(v,t)=0`$. Eq.
(2) can be simplified by eliminating one of the integrations $$\frac{\partial P(v,t)}{\partial t}+P(v,t)=\frac{1}{1-\gamma }\int _{-\infty }^{\infty }duP(u,t)P\left(\frac{v-\gamma u}{1-\gamma },t\right).$$ (4) Further simplification may be achieved via the Fourier transform $`\widehat{P}(k,t)=\int dve^{ikv}P(v,t)`$, which obeys $$\frac{\partial }{\partial t}\widehat{P}(k,t)+\widehat{P}(k,t)=\widehat{P}[\gamma k,t]\widehat{P}[(1-\gamma )k,t].$$ (5) Although the integration is eliminated, this compact equation is still challenging, as the nonlinear term becomes nonlocal. Velocity fluctuations can be quantified using the moments of the velocity distribution, $`M_n(t)=\int dv\,v^nP(v,t)`$. The moments obey a closed and recursive set of ordinary differential equations. The corresponding equations can be derived by inserting the expansion $`\widehat{P}(k,t)=\sum _n\frac{(ik)^n}{n!}M_n(t)`$ into Eq. (5) or directly from Eq. (2). The first few moments evolve according to $`\dot{M}_0=\dot{M}_1=0`$, and $`\dot{M}_2=-a_2M_2,`$ (6) $`\dot{M}_3=-a_3M_3,`$ (7) $`\dot{M}_4=-a_4M_4+a_{24}M_2^2,`$ (8) with the coefficients $$a_n\equiv a_n(\gamma )=1-(1-\gamma )^n-\gamma ^n,$$ (9) and $`a_{24}=6\gamma ^2(1-\gamma )^2`$. Integrating these rate equations yields $`M_0=1`$, $`M_1=0`$ and $`M_2(t)=M_2(0)e^{-a_2t}`$ (10) $`M_3(t)=M_3(0)e^{-a_3t}`$ (11) $`M_4(t)=\left[M_4(0)+3M_2^2(0)\right]e^{-a_4t}-3M_2^2(t).`$ (12) The asymptotic behavior of the first few moments suggests that knowledge of the RMS fluctuation $`v^{*}\equiv M_2^{1/2}`$ is not sufficient to characterize higher order moments, since $`M_3^{1/3}/v^{*},M_4^{1/4}/v^{*}\to \infty `$ as $`t\to \infty `$. This observation extends to higher order moments as well. In general, the moments evolve according to $$\dot{M}_n+a_nM_n=\sum _{m=2}^{n-2}\binom{n}{m}\gamma ^m(1-\gamma )^{n-m}M_mM_{n-m}.$$ (13) Note that for $`0<\gamma <1`$, the coefficients $`a_n`$ satisfy $`a_n<a_m+a_{n-m}`$ when $`1<m<n-1`$.
This inequality can be shown by introducing $`G(\gamma )=a_m(\gamma )+a_{n-m}(\gamma )-a_n(\gamma )`$, which satisfies $`G(0)=0`$ and $`G(\gamma )=G(1-\gamma )`$. Therefore, one needs to show that $`G^{\prime }(\gamma )=m[b_m-b_n]+(n-m)[b_{n-m}-b_n]>0`$ for $`0<\gamma <1/2`$, with $`b_n\equiv b_n(\gamma )=(1-\gamma )^{n-1}-\gamma ^{n-1}`$. One can verify that the $`b_n`$’s decrease monotonically with increasing $`n`$, $`b_n\ge b_{n+1}`$ for $`n\ge 2`$, therefore proving the desired inequality. Since the moments decay exponentially, this inequality shows that the right hand side in the above equation is negligible asymptotically. Thus, the leading asymptotic behavior for all $`n>0`$ is $`M_n\sim \mathrm{exp}(-a_nt)`$. Since the $`a_n`$’s increase monotonically, $`a_n<a_{n+1}`$, the moments decrease monotonically in the long time limit, $`M_n>M_{n+1}`$. Furthermore, in terms of the second moment one has $$M_n\sim M_2^{\alpha _n},\alpha _n=\frac{1-(1-\gamma )^n-\gamma ^n}{1-(1-\gamma )^2-\gamma ^2}.$$ (14) While the prefactors depend on the details of the initial distribution, the scaling exponents are universal. Therefore, the velocity distribution does not follow a naive scaling form $`P(v,t)=\frac{1}{v^{*}}P(\frac{v}{v^{*}})`$. Such a distribution would imply the linear exponents $`\alpha _n=\alpha _n^{*}=n/2`$. Instead, the actual behavior is given by Eq. (14), with the exponents $`\alpha _n`$ reflecting a multiscaling asymptotic behavior with a nontrivial (non-linear) dependence on the index $`n`$. For instance, the high order exponents saturate, $`\alpha _n\to a_2^{-1}`$ for $`n\to \infty `$, instead of diverging. One may quantify the deviation from ordinary scaling via a properly normalized set of indices $`\beta _n=\alpha _n/\alpha _n^{*}`$ defined from $`M_n^{1/n}\sim (v^{*})^{\beta _n}`$. By evaluating the $`\gamma =1/2`$ case, where multiscaling is most pronounced, a bound can be obtained for these indices: $`7/8,31/48\le \beta _n\le 1`$ for $`n=4,6`$ respectively.
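The $`\gamma =1/2`$ bounds quoted above follow directly from Eqs. (9) and (14); a short numerical check (plain Python, our naming):

```python
def a(n, g):
    # a_n(gamma) = 1 - (1-gamma)^n - gamma^n, Eq. (9)
    return 1 - (1 - g)**n - g**n

g = 0.5
alpha = lambda n: a(n, g) / a(2, g)        # multiscaling exponents, Eq. (14)
beta = lambda n: alpha(n) / (n / 2)        # normalized indices beta_n
print(beta(4), beta(6))                    # 0.875 = 7/8 and 0.6458... = 31/48
```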
Furthermore, $`\beta _n\simeq 1-\frac{(n-3)\gamma }{2}`$ when $`\gamma \to 0`$, indicating that the deviation from ordinary scaling vanishes for weakly inelastic collisions. Thus, the multiscaling behavior can be quite subtle . The above shows that a hierarchy of scales underlies fluctuations in the velocity. In parallel, a hierarchy of diverging time scales characterizes velocity fluctuations $$M_n^{1/n}\sim \mathrm{exp}(-t/\tau _n),\tau _n=\frac{n}{a_n}.$$ (15) These time scales diverge for large $`n`$ according to $`\tau _n\simeq n`$. Large moments reflect the large velocity tail of the distribution. Indeed, the distribution of extremely large velocities is dominated by persistent particles which have experienced no collisions up to time $`t`$. The probability for such events decays exponentially with time, $`P(v,t)\simeq P(v,0)\mathrm{exp}(-t)`$ for $`v\gg 1`$ (alternatively, this behavior emerges from Eq. (4), since the gain term is negligible for the tail and hence $`\dot{P}+P=0`$). This decay is consistent with the large order moment decay $`M_n\sim \mathrm{exp}(-t)`$ when $`n\to \infty `$. Although the leading asymptotic behavior of the moments was established, understanding the entire distribution $`P(v,t)`$ remains a challenge. Simulations of the $`\gamma =1/2`$ process reveal an interesting structure for compact distributions. Starting from a uniform velocity distribution, $`P_0(v)=1/2`$ for $`-1<v<1`$, the distribution loses analyticity at $`v=\pm 1/2`$. Our analysis of Eq. (5) shows that such a singularity should indeed develop at $`v=\pm 1/2`$, and it additionally implies the appearance of (progressively weaker and weaker) singularities at $`v=\pm 1/4`$, etc. More generally, for an arbitrary compact initial distribution and an arbitrary $`\gamma `$, the distribution $`P(v,t)`$ loses analyticity for $`t>0`$ and develops an infinite (countable) set of singularities whose locations depend on the arithmetic nature of $`\gamma `$ (e.g., it is very different for rational and irrational $`\gamma `$’s).
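The exponential decay of the moments is easy to confirm by a direct Monte Carlo realization of rule (1) on a large but finite population, a finite-$`N`$ stand-in for the complete graph; the time normalization below (each particle collides at unit rate, so a pair update advances time by $`2/N`$) is our convention.

```python
import numpy as np

def simulate(N, gamma, t_max, rng):
    """Monte Carlo of rule (1): random pairs collide; dt = 2/N per update."""
    v = rng.normal(size=N)
    v -= v.mean()                          # zero total momentum
    ts, M2 = [0.0], [np.mean(v**2)]
    t, dt = 0.0, 2.0 / N
    while t < t_max:
        i, j = rng.choice(N, size=2, replace=False)
        v[i], v[j] = gamma*v[i] + (1-gamma)*v[j], (1-gamma)*v[i] + gamma*v[j]
        t += dt
        ts.append(t)
        M2.append(np.mean(v**2))
    return np.array(ts), np.array(M2)

gamma = 0.25
a2 = 1 - (1 - gamma)**2 - gamma**2         # = 0.375
rng = np.random.default_rng(3)
ts, M2 = simulate(N=2000, gamma=gamma, t_max=3.0, rng=rng)
rate = -np.polyfit(ts, np.log(M2), 1)[0]
print(rate, a2)                            # fitted decay rate vs. a_2
```

The fitted slope of $`\mathrm{ln}M_2`$ reproduces $`a_2`$ up to finite-$`N`$ fluctuations; higher moments can be monitored in the same run to observe the multiscaling ratios of Eq. (14).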
On the other hand, unbounded distributions do not develop such singularities, and therefore the loss of analyticity is not necessarily responsible for the multiscaling behavior. Asymptotically, our system reaches a trivial steady state, $`P(v,t=\infty )=\delta (v)`$. To examine the relation between dynamics and statics, a non-trivial steady state can be generated by considering the driven version of our model . External forcing balances the dissipation due to collisions and therefore results in a nontrivial nonequilibrium steady state. Specifically, we assume that in addition to changes due to collisions, velocities may also change due to an external forcing: $`\frac{dv_j}{dt}|_{\mathrm{heat}}=\xi _j`$. We assume standard uncorrelated white noise, $`\langle \xi _i(t)\xi _j(t^{\prime })\rangle =2D\delta _{ij}\delta (t-t^{\prime })`$, with zero average, $`\langle \xi _j\rangle =0`$. The left hand side of the master equation (4) should therefore be modified by a diffusion term, $`{\displaystyle \frac{\partial P(v,t)}{\partial t}}\to {\displaystyle \frac{\partial P(v,t)}{\partial t}}-D{\displaystyle \frac{\partial ^2P(v,t)}{\partial v^2}}.`$ (16) Of course, the addition of the diffusive term does not alter conservation of the total particle number and the total momentum, and one can safely work in a reference frame moving with the center of mass velocity. We restrict our attention to the steady state, obtained by setting the time derivative to zero.
The corresponding Fourier transform $`\widehat{P}_{\mathrm{\infty }}(k)\equiv \widehat{P}(k,t=\infty )`$ satisfies $$(1+Dk^2)\widehat{P}_{\mathrm{\infty }}(k)=\widehat{P}_{\mathrm{\infty }}[\gamma k]\widehat{P}_{\mathrm{\infty }}[(1-\gamma )k].$$ (17) The solution to this functional equation which obeys the conservation laws $`\widehat{P}_{\mathrm{\infty }}(0)=1`$ and $`\langle v\rangle =-i\widehat{P}_{\mathrm{\infty }}^{\prime }(0)=0`$ is found recursively $$\widehat{P}_{\mathrm{\infty }}(k)=\prod _{i=0}^{\infty }\prod _{j=0}^{i}\left[1+\gamma ^{2j}(1-\gamma )^{2(i-j)}Dk^2\right]^{-\binom{i}{j}}.$$ (18) To simplify this double product we take the logarithm and transform it as follows: $`\mathrm{ln}\widehat{P}_{\mathrm{\infty }}(k)=-\sum _{i=0}^{\infty }\sum _{j=0}^{i}\binom{i}{j}\mathrm{ln}\left[1+\gamma ^{2j}(1-\gamma )^{2(i-j)}Dk^2\right]`$ (19) $`=\sum _{i=0}^{\infty }\sum _{j=0}^{i}\binom{i}{j}\sum _{n=1}^{\infty }\frac{(-Dk^2)^n\gamma ^{2jn}(1-\gamma )^{2(i-j)n}}{n}`$ (20) $`=\sum _{n=1}^{\infty }\frac{(-Dk^2)^n}{n}\sum _{i=0}^{\infty }\sum _{j=0}^{i}\binom{i}{j}\gamma ^{2nj}(1-\gamma )^{2n(i-j)}`$ (21) $`=\sum _{n=1}^{\infty }\frac{(-Dk^2)^n}{n}\sum _{i=0}^{\infty }\left[\gamma ^{2n}+(1-\gamma )^{2n}\right]^i.`$ (22) The second identity follows from the series expansion $`\mathrm{ln}(1+q)=-\sum _{n\ge 1}n^{-1}(-q)^n`$, and the fourth from the binomial identity $`\sum _{j=0}^{i}\binom{i}{j}p^jq^{i-j}=(p+q)^i`$.
Finally, using the geometric series $`(1-x)^{-1}=\sum _{n\ge 0}x^n`$, the Fourier transform at the steady state is found: $$\widehat{P}_{\mathrm{\infty }}(k)=\mathrm{exp}\left\{\sum _{n=1}^{\infty }\frac{(-Dk^2)^n}{na_{2n}(\gamma )}\right\},$$ (23) with $`a_n(\gamma )`$ given by Eq. (9). The $`n`$th cumulant of the steady state distribution, $`\kappa _n`$, can be readily found from $`\mathrm{ln}\widehat{P}_{\mathrm{\infty }}(k)=\sum _m\frac{(ik)^m}{m!}\kappa _m`$. Therefore, the odd cumulants vanish, while the even cumulants are simply proportional to the time scales characterizing the exponential relaxation of the corresponding moments: $$\kappa _{2n}=\frac{(2n-1)!}{n}D^n\tau _{2n}.$$ (24) Of course, the moments can be constructed from these cumulants. Interestingly, a direct correspondence between the steady state characteristics and the nonequilibrium relaxation time scales is established via the cumulants of the probability distribution. None of the (even) cumulants vanish, thereby reflecting significant deviations from a Gaussian distribution. Nevertheless, for sufficiently large velocities, one may concentrate on the small wave number behavior. Using the inverse Fourier transform of (23) one finds the tail of the distribution $$P_{\mathrm{\infty }}(v)\simeq \sqrt{\frac{a_2}{4\pi D}}\mathrm{exp}\left\{-\frac{a_2v^2}{4D}\right\},v\gg \sqrt{D/a_2}.$$ (25) This in particular implies the large moment behavior $`M_{2n}\simeq (2n-1)!!(4D/a_2)^n`$ as $`n\to \infty `$. To examine how general the above behavior is, we briefly discuss a few generalizations and extensions of the basic model.
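Before turning to the generalizations, note that the resummed form (23) can be verified numerically: truncating the series for $`\mathrm{ln}\widehat{P}_{\mathrm{\infty }}`$ at a large order and inserting it into the functional equation (17) gives agreement to machine precision. The parameter values below are arbitrary choices inside the radius of convergence $`Dk^2<1`$.

```python
import numpy as np

def a(n, g):
    return 1 - (1 - g)**n - g**n

def lnP(k, g, D, nmax=60):
    # truncated series ln P(k) = sum_{n>=1} (-D k^2)^n / (n a_{2n}), cf. Eq. (23)
    return sum((-D * k * k)**n / (n * a(2 * n, g)) for n in range(1, nmax + 1))

g, D, k = 0.3, 0.1, 0.5
lhs = np.log1p(D * k * k) + lnP(k, g, D)           # log[(1 + Dk^2) P(k)]
rhs = lnP(g * k, g, D) + lnP((1 - g) * k, g, D)    # log[P(gk) P((1-g)k)]
print(lhs, rhs)
```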
Relaxing Galilean invariance, the most general momentum conserving collision rule is $`\left(\begin{array}{c}v_1\\ v_2\end{array}\right)=\left(\begin{array}{cc}\gamma _1& 1-\gamma _2\\ 1-\gamma _1& \gamma _2\end{array}\right)\left(\begin{array}{c}u_1\\ u_2\end{array}\right).`$ (26) Following the same steps that led to (14) shows that when $`\gamma _1,\gamma _2\ne 0,1`$ and when $`M_1=0`$, this process also exhibits multiscaling with the exponents $`\alpha _n=a_n/a_2`$, where $`a_n(\gamma _1,\gamma _2)=\frac{1}{2}[a_n(\gamma _1)+a_n(\gamma _2)]`$. When $`\gamma _1=1-\gamma _2=\gamma `$ one recovers the model introduced by Melzak , and when $`\gamma _1=\gamma _2=\gamma `$ one recovers inelastic collisions. Since $`a_n(\gamma )=a_n(1-\gamma )`$, both models have identical multiscaling exponents. Furthermore, a multiscaling behavior with the very same exponents $`\alpha _n(\gamma )`$ is also found for the process $`(u_1,u_2)\to (u_1-\gamma u_1,u_2+\gamma u_1)`$ investigated in the context of asset distributions and headway distributions in traffic flows . One can also consider stochastic rather than deterministic collision processes by assuming that the collision (26) occurs with probability density $`\sigma (\gamma _1,\gamma _2)`$. Our findings extend to this model as well, and the multiscaling exponents are given by the same general expression $`\alpha _n=a_n/a_2`$ with $`a_n=\int d\gamma _1d\gamma _2\sigma (\gamma _1,\gamma _2)a_n(\gamma _1,\gamma _2)`$. In particular, for completely random inelastic collisions, i.e., $`\sigma \equiv 1`$ and $`\gamma _1=\gamma _2=\gamma `$, one finds $`a_n=\frac{n-1}{n+1}`$ and hence $`\alpha _n=3\frac{n-1}{n+1}`$. So far, we discussed only two-body interactions. We therefore consider $`N`$-body interactions, where a collision is symbolized by $`(u_1,\mathrm{\dots },u_N)\to (v_1,\mathrm{\dots },v_N)`$. We consider a generalization of the $`\gamma =\frac{1}{2}`$ two-body case where the post-collision velocities are all equal.
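The random-$`\gamma `$ average quoted above is a one-line integral; a quick check (scipy, our naming):

```python
from scipy.integrate import quad

def a_bar(n):
    # average of a_n(gamma) = 1 - (1-gamma)^n - gamma^n over uniform gamma in [0,1]
    val, _ = quad(lambda g: 1 - (1 - g)**n - g**n, 0.0, 1.0)
    return val

for n in (2, 3, 4, 6):
    # a_bar(n) equals (n-1)/(n+1), hence alpha_n = 3(n-1)/(n+1)
    print(n, a_bar(n), (n - 1) / (n + 1))
```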
Momentum conservation implies $`v_i=\overline{u}=N^{-1}\sum _ju_j`$. The master equation is a straightforward generalization of the two-body case, and we merely quote the moment equations $$\dot{M}_n+a_nM_n=N^{-n}\sum _{n_i<n}\binom{n}{n_1\mathrm{\cdots }n_N}M_{n_1}\mathrm{\cdots }M_{n_N}$$ (27) with $`a_n=1-N^{1-n}`$; the sum runs over $`n_1+\mathrm{\cdots }+n_N=n`$ with all $`n_i<n`$. Using the inequality $`a_n<a_m+a_{n-m}`$ for all $`1<m<n-1`$, and its kin like $`a_n<a_{m_1}+a_{m_2}+a_{n-m_1-m_2}`$ for all $`1<m_1,m_2<n-1`$, etc., we find that the right-hand side of the above equation remains asymptotically negligible. Therefore, $`M_n\sim e^{-a_nt}`$ and $$M_n\sim M_2^{\alpha _n},\alpha _n=\frac{1-N^{1-n}}{1-N^{-1}}.$$ (28) Thus, this $`N`$-body “averaging” process exhibits multiscaling asymptotic behavior as well. Thus far, we considered the behavior on a mean field level, i.e., in an infinite dimensional space. It is natural to consider the finite-dimensional counterpart. Specifically, we assume that particles reside on a $`d`$-dimensional lattice and that only nearest neighbors interact. Here, the above dynamics is essentially equivalent to a diffusion process . As a result, the underlying correlation length is diffusive, $`L(t)\sim t^{1/2}`$. Within this correlation length the velocities are “well mixed”, and momentum conservation therefore implies that $`v^{*}\sim L^{-d/2}\sim t^{-d/4}`$. Indeed, the infinite dimension limit is consistent with the above exponential decay. Furthermore, an exact solution for moments of arbitrary order is possible . We do not detail it here and simply quote that ordinary scaling is restored, $`M_n\sim t^{-nd/4}`$, i.e. $`\alpha _n=\alpha _n^{*}=n/2`$. Thus, spatial correlations counter the mechanism responsible for multiscaling. In summary, we have investigated inelastic collision processes in infinite dimension. We have shown that such systems are characterized by multiscaling, or equivalently by an infinite hierarchy of diverging time scales.
Multiscaling holds for several generalizations of the basic model, including stochastic collision models and even processes which do not obey Galilean invariance. In the latter case, however, multiscaling is restricted to situations with zero total momentum. This perhaps explains why the multiscaling asymptotic behavior was overlooked in previous studies . Another explanation is that this behavior may be difficult to detect in numerical simulations. Indeed, in other problems such as multidimensional fragmentation , and in fluid turbulence, low order moments deviate only slightly from the normal scaling expectation. There are a number of extensions of this work which are worth pursuing. We have started with a simplified model of a 1D granular gas with a velocity independent collision rate. One possibility is to approximate the collision rate by the RMS velocity fluctuation. This leads to the algebraic decay $`M_n\sim t^{-2\alpha _n}`$ with $`\alpha _n`$ given by Eq. (14); in particular, Haff’s cooling law $`T=M_2\sim t^{-2}`$ is recovered . Our numerical studies indicate that when velocity dependent collision rates are implemented, ordinary scaling behavior is restored. One may also use this model as an approximation for inelastic collisions in higher dimensions as well, following the Maxwell approximation in kinetic theory . This research was supported by the DOE (W-7405-ENG-36), NSF (DMR9632059), and ARO (DAAH04-96-1-0114).
# Gold Nanowires and their Chemical Modifications ## Abstract Equilibrium structure, local densities of states, and electronic transport in a gold nanowire made of a four-atom chain supported by two gold electrodes, which has been imaged recently by high-resolution electron microscopy, and chemical modification of the wire via the adsorption of a methylthiol molecule, are investigated with ab-initio local density functional simulations. In the bare wire at the imaged geometry the middle two atoms dimerize, and the structure is strongly modified by the adsorption of the molecule with an accompanying increase of the ballistic conductance through the wire. PACS: 73.40.Jn, 73.61.-r, 85.30.Vw Generation of wires of atomic scale dimensions in the process of formation and/or elongation of interfacial contacts has been predicted through early molecular dynamics simulations using many-body potentials , and in the face of fundamental interest and technological considerations driven by the relentless miniaturization of electronic and mechanical devices such wires have been the subject of intensive experimental and theoretical research endeavors . Indeed, nanometer-scale wires (nanowires, NWs) have been created and their structural, mechanical, and transport characteristics were studied using a variety of techniques , including most recently combined scanning tunneling microscopy (STM) and direct imaging with the use of high-resolution transmission electron microscopy (HRTEM) . We report here on ab-initio local-density functional (LDA) investigations of the atomic structure, electronic spectrum and conductance of a gold NW consisting of a four-atom chain connected to gold electrodes, which is the smallest NW imaged by HRTEM . Our study reveals dimerization of the gold atoms in the middle of the chain akin to a Peierls transition in (extended) one-dimensional systems. 
Furthermore, we explore structural and electronic spectral modifications resulting from adsorption of a molecule (methylthiol, SCH<sub>3</sub>) to the wire , demonstrating the sensitivity of these properties to such chemical interactions, as well as their effect on the electronic conductance of the wire which we find to increase upon adsorption. These results provide a new interpretation of the measured HRTEM image of the atomic gold wire and suggest a new strategy for formation of organo-metallic NWs, as well as the use of NWs as monitoring and chemical sensing devices. In light of the aforementioned STM/HRTEM observations we start our simulation from a 4-atom Au wire consisting of two tip-atoms (t) located at the apexes of two opposing tip-electrodes distanced initially by $`d_{tt}^{(0)}`$=8.9 Å, and of two internal Au atoms (i) located in the gap between the two electrodes with initial uniform distances $`d_{ti}^{(0)}`$=$`d_{ii}^{(0)}`$=2.967 Å between neighboring atoms of the wire; the tip-electrodes consist each of 29 gold atoms arranged in a pyramidal shape made of face-centered-cubic stacked (110) layers and exposing (100) and (111) facets, with the tip atoms (as well as the internal wire atoms) and the atoms of the underlying layers supporting them treated dynamically, while the rest of the electrode gold atoms are held at their crystalline lattice positions. This initial structure relaxes spontaneously in the course of a total energy minimization with a 0.16 eV gain in the total energy, to that shown in Fig. 1 (AuNW, left) where the two inner wire atoms dimerize, with $`d_{ii}`$=2.68 Å (compared to $`d`$(Au-Au)=2.48 Å in a free Au<sub>2</sub> molecule), $`d_{ti}`$=3.07 Å, and the total length of the wire shortens to $`d_{tt}`$=8.82 Å (see ref. ). From the local density of states (LDOS) of the dimerized wire (Fig. 2), calculated in the regions delineated in Fig. 
1 (AuNW, left), we observe that the electronic states in the interior region of the wire (region E, see Fig. 1, left) near the Fermi energy ($`E_F`$, marked by a dashed line in Fig. 2) are not found in the free Au<sub>2</sub> dimer (whose states are marked by dots on the upper energy axis of Fig. 2). Rather, these states originate from hybridization between the atomic states of the interior Au atoms and the gold-electrode states, with the highest-occupied molecular orbital (HOMO) exhibiting a dominant d-character on the interior wire atoms. The dimerization is driven by lowering of the energies of states in the interval $`-10\le E\le -6.5`$ eV calculated at the interior region of the wire (compare the upper and lower LDOS curves in the inset to Fig. 2, corresponding to the dimerized and initial equal-distance configurations, respectively). Additionally, the dimerization of the wire is accompanied by a small increase of the energy gap near $`E_F`$ from 0.194 eV in the equidistant wire to 0.216 eV in the dimerized one; in a certain sense the observed dimerization may be regarded as a “predecessor” of a Peierls transition, though such description should be treated with caution due to the rather limited extent of the wire considered here. The calculated electronic conductance of the dimerized AuNW is $`G`$=0.58 $`g_0`$ ($`g_0`$=$`2e^2/h`$, where $`e`$ is the electron charge and $`h`$ is the Planck constant) corresponding to a resistance of 22.17 k$`\mathrm{\Omega }`$, and ballistic transport occurs through a single conductance channel . Two binding configurations of a methylthiol molecule to the dimerized equilibrium configuration of the AuNW were considered: (i) the SCH<sub>3</sub> molecule bonded to the two middle interior Au atoms of the wire (see AuNW/m-SCH<sub>3</sub> in Fig. 1), and (ii) binding of the molecule to a terminal tip atom (t) of the wire and to the neighboring interior wire atom (see AuNW/t-SCH<sub>3</sub> in Fig. 1).
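The quoted resistance can be cross-checked from the conductance quantum. This is a back-of-the-envelope sketch, not part of the paper's ab-initio machinery; the transmission 0.58 is the value quoted above.

```python
# Landauer picture: G = T * g0 with g0 = 2e^2/h; resistance R = 1/G.
e = 1.602176634e-19    # elementary charge, C
h = 6.62607015e-34     # Planck constant, J*s
g0 = 2 * e**2 / h      # conductance quantum, S
R_q = 1 / g0           # resistance of one fully open channel, ~12.9 kOhm
R_wire = 1 / (0.58 * g0)
print(round(R_q / 1e3, 2), round(R_wire / 1e3, 2))  # kOhm
```

The result, about 22.25 kOhm, agrees with the quoted 22.17 k$`\mathrm{\Omega }`$ to within the rounding of the transmission coefficient.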
The binding energies of the molecule in the two adsorption configurations are essentially the same (4.01 eV), and in both cases binding of the molecule is accompanied by significant structural changes of the wire. In the m-SCH<sub>3</sub> configuration (Fig. 1, middle) the dimerization of the interior wire gold atoms is removed, and the sulfur atom is bonded to the two interior gold atoms with $`d`$(S-Au)=2.31 Å, the angle $`\mathrm{\angle }`$(Au-S-Au)=117.4<sup>o</sup>, and $`d`$(S-C)=1.84 Å. The configuration is symmetric about a plane of reflection passing through the sulfur atom normal to the plane of the figure, and the length of the wire increases by 0.1 Å (i.e., $`d_{tt}`$=9.02 Å), with $`d_{ti}`$=2.58 Å and $`d_{ii}`$=3.94 Å; the wire atoms are shifted slightly in the direction of the adsorbed molecule with the terminal wire atoms displaced laterally by 0.016 Å from the four-fold hollow site of the underlying gold electrode layer. Dimerization of the interior gold wire atoms is removed also in the t-SCH<sub>3</sub> equilibrium bonding configuration where symmetry is broken, the bond lengths of the sulfur to the two gold atoms are unequal \[$`d`$(S-Au(t))=2.38 Å and $`d`$(S-Au(i))=2.31 Å\], the angle $`\mathrm{\angle }`$(Au(t)-S-Au(i))=109.3<sup>o</sup>, $`d_{ti}`$=3.83 Å, $`d`$(S-C)=1.85 Å, and the length of the wire is $`d_{tt}`$=8.98 Å; additionally, the Au(t) atom bonded to the sulfur is displaced laterally by 0.057 Å from the four-fold hollow site, while the displacement of the other Au(t) atom (not bonded directly to the molecule) is 0.012 Å.
We remark that in both adsorption configurations the intra-thiol $`d`$(S-C) distance, as well as the S-Au bond length are similar to those calculated for the equilibrium structure of a free Au<sub>2</sub>SCH<sub>3</sub> molecule where $`d`$(S-C)=1.84 Å and $`d`$(S-Au)=2.41 Å, while the Au-S-Au angles in the thiolated wires are much larger than in the free molecule (where $`\mathrm{\angle }`$(Au-S-Au)=67.1<sup>o</sup>) due to the binding of the sulfur-bonded gold atoms of the wire to the rest of the nanostructure. Examination of the LDOS for the chemically modified wires displayed in Fig. 3 reveals that changes from the spectrum of the bare dimerized wire (Fig. 2) are localized to regions in the immediate vicinity of the molecular binding site (compare region E in Fig. 2 with region T for AuNW/m-SCH<sub>3</sub> and regions T and E’ for AuNW/t-SCH<sub>3</sub> in Fig. 3). In both cases $`E_F`$ lies below the HOMO level of the free SCH<sub>3</sub> molecule correlating with energy gain due to hybridization of the molecular states with the states of the wire. In both of the chemically modified wires the HOMO level is partially occupied (containing about one hole) while it is doubly occupied in the bare AuNW, and the gap between that level and the lowest unoccupied one is 0.21 eV and 0.316 eV in the m-SCH<sub>3</sub> and t-SCH<sub>3</sub>, respectively. Chemical modifications of the AuNW are signaled, and may be detected, by changes in the electronic conductance, which increases upon adsorption, i.e. $`G`$(AuNW/m-SCH<sub>3</sub>)=0.82 $`g_0`$ and $`G`$(AuNW/t-SCH<sub>3</sub>)=0.88 $`g_0`$, involving a single conductance channel. While at first sight an increase of the conductance in the presence of an adsorbate may seem surprising, it can be explained via examination of the potential landscapes governing the propagation of the electron through the wires (see Fig. 4). Comparison between the potential shown in Fig.
4a for the bare dimerized nanowire with those corresponding to the chemically modified ones (Fig. 4b and 4c), reveals that the potential barriers (bottle-necks), associated with the unequal spacings between the gold atoms in the dimerized equilibrium structure and the reduced overlap between the electronic states of the inner wire and the tip atoms, which decrease the transmission of the incident electron through the bare AuNW (Fig. 4a), are reduced in the chemically modified wires and the conductance path is broadened, resulting in enhancement of the ballistic transmission through these wires . These findings, obtained through ab-initio simulations, pertaining to the dimerized structure of an atomic gold nanowire and the sensitivity of structural, electronic, and conductance properties of such wires to chemical modifications via molecular adsorption, provide a new interpretation of recent HRTEM measurements , demonstrate methods for probing the nature of chemical interactions with such nanostructures, and suggest a strategy for preparation of chemically modified nanowires. Acknowledgement. We thank A.G. Scherbakov for his assistance in conductance calculations. This research is supported by the U.S. DOE, AFOSR, and the Academy of Finland. Calculations were performed on an IBM SP2 parallel computer at the Georgia Tech Center for Computational Materials Science, and on a Cray T3E at the National Energy Research Scientific Computing Center (NERSC) at Berkeley, CA.
no-problem/9909/hep-ph9909379.html
ar5iv
text
# Is there a unique thermal dilepton source in the reactions Pb(158 A$`\cdot `$GeV) + Au, Pb? K. Gallmeister<sup>a</sup>, B. Kämpfer<sup>a</sup>, O.P. Pavlenko<sup>a,b</sup> <sup>a</sup>Forschungszentrum Rossendorf, PF 510119, 01314 Dresden, Germany <sup>b</sup>Institute for Theoretical Physics, 252143 Kiev - 143, Ukraine Abstract An analysis of the dilepton measurements of the reactions Pb(158 A$`\cdot `$GeV) + Au, Pb by the CERES and NA50 collaborations points to a unique thermal source contributing to the invariant mass and transverse momentum spectra. PACS: 25.75.Dw, 12.38.Mh, 24.10.Lx Key words: relativistic heavy-ion collisions, dileptons, thermal source Introduction: Dileptons are penetrating probes which carry nearly undisturbed information about early, hot and dense matter stages in relativistic heavy-ion collisions. Some effort, however, is needed for disentangling the various sources contributing to the observed yields and for identifying the messengers from primordial states of strongly interacting matter. The dielectron spectra for the reaction Pb(158 A$`\cdot `$GeV) + Au measured by the CERES collaboration cannot be described by a superposition of $`e^+e^{-}`$ decay channels of final hadrons, i.e. the hadronic cocktail. A significant additional source of dielectrons must be there. Since the data cover mainly the invariant mass range $`M<`$ 1.5 GeV the downward extrapolation of the Drell-Yan process is not an appropriate explanation. Also correlated semileptonic decays of open charm mesons have been excluded . As a widely accepted explanation, a thermal source is found to account for the data (cf. and further references therein, in particular with respect to in-medium effects and chiral symmetry restoration). Very similarly, the NA50 collaboration has found, for the reaction Pb(158 A$`\cdot `$GeV) + Pb, that the superposition of Drell-Yan dimuons and open charm decays does not explain the data in the invariant mass range 1.5 GeV $`<M<`$ 2.5 GeV .
Final state interactions or abnormal charm enhancement have been proposed as possible explanations. Here we try to explain the NA50 measurements by another idea , namely a thermal source , which was already found to account for the data in the intermediate invariant mass range in the reaction S (200 A$`\cdot `$GeV) + W . We present a schematic model describing at the same time the CERES and NA50 data. The model: Since we include the corresponding detector acceptances a good starting point for Monte Carlo simulations is the differential dilepton spectrum $$\frac{dN}{p_{\perp 1}dp_{\perp 1}p_{\perp 2}dp_{\perp 2}dy_1dy_2d\varphi _1d\varphi _2}=\int d^4Q\int d^4x\frac{dR}{d^4Qd^4x}\delta ^{(4)}(Q-p_1-p_2),$$ (1) where $`Q=p_1+p_2`$ is the pair four-momentum, $`p_{1,2}`$ are the individual lepton four-momenta composed of transverse momenta $`p_{\perp 1,2}`$ and rapidities $`y_{1,2}`$ and azimuthal angles $`\varphi _{1,2}`$. Here we extensively employ the quark - hadron duality and base the rate $`R`$ on the lowest-order quark - antiquark ($`q\overline{q}`$) annihilation rate (cf. ) $$\frac{dR}{d^4Qd^4x}=\frac{5\alpha ^2}{36\pi ^4}\mathrm{exp}\left\{-\frac{u\cdot Q}{T}\right\},$$ (2) where $`u(x)`$ is the four-velocity of the medium depending on the space-time as also the temperature $`T(x)`$ does. Note that, due to Lorentz invariance, $`u`$ necessarily enters this expression. The above rate is in Boltzmann approximation, and a term related to the chemical potential is suppressed. As shown in , the $`q\overline{q}`$ rate deviates from the hadronic one at $`M<`$ 300 MeV, but in this range the Dalitz decays dominate anyhow; in addition, in this range the thermal yield is strongly suppressed by the CERES acceptance. In the kinematical regions we consider below, the lepton masses can be neglected. Performing the space-time and momentum integrations in eqs.
(1, 2) one gets $$\frac{dN}{dp_{\perp 1}dp_{\perp 2}dy_1dy_2d\varphi _1d\varphi _2}=\frac{5\alpha ^2}{72\pi ^5}p_{\perp 1}p_{\perp 2}\int _{t_i}^{t_f}dtV(t)E,$$ (3) $`E`$ $`=`$ $`\{\begin{array}{cc}\mathrm{exp}\left\{-\frac{M_{\perp }\mathrm{cosh}Y\mathrm{cosh}\rho (r,t)}{T(r,t)}\right\}\frac{\mathrm{sinh}\xi }{\xi }\hfill & \text{for}3D,\hfill \\ K_0\left(\frac{M_{\perp }\mathrm{cosh}\rho (r,t)}{T(r,t)}\right)I_0\left(\frac{Q_{\perp }\mathrm{sinh}\rho (r,t)}{T(r,t)}\right)\hfill & \text{for}2D,\hfill \end{array}`$ (6) $`V(t)`$ $`=`$ $`\{\begin{array}{cc}4\pi \int drr^2\hfill & \text{for}3D,\hfill \\ t\int drr\hfill & \text{for}2D,\hfill \end{array}`$ (9) where $`V(t)`$ acts on $`E`$, and $`3D`$ means spherical symmetric expansion, while $`2D`$ denotes the case of longitudinal boost-invariant and cylinder-symmetrical transverse expansion; the quantity $`\xi `$ is defined as $`\xi =T^{-1}\mathrm{sinh}\rho \sqrt{M_{\perp }^2\mathrm{cosh}^2Y-M^2}`$, and $`\rho (r,t)`$ is the radial or transverse expansion rapidity; $`K_0`$ and $`I_0`$ are Bessel functions .
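For orientation, the 2D emissivity of eq. (6) can be evaluated with nothing more than the standard integral representations of the modified Bessel functions, avoiding any external library. The quadrature step counts below are illustrative choices, not from the paper.

```python
import math

def I0(x, n=2000):
    """I0(x) = (1/pi) * integral_0^pi exp(x cos t) dt, trapezoidal rule."""
    h = math.pi / n
    s = 0.5 * (math.exp(x) + math.exp(-x))
    s += sum(math.exp(x * math.cos(k * h)) for k in range(1, n))
    return s * h / math.pi

def K0(x, tmax=30.0, n=4000):
    """K0(x) = integral_0^inf exp(-x cosh t) dt, truncated at tmax."""
    h = tmax / n
    s = 0.5 * (math.exp(-x) + math.exp(-x * math.cosh(tmax)))
    s += sum(math.exp(-x * math.cosh(k * h)) for k in range(1, n))
    return s * h

def emissivity_2D(M_perp, Q_perp, rho, T):
    """E = K0(M_perp cosh(rho)/T) * I0(Q_perp sinh(rho)/T), eq. (6), 2D case."""
    return K0(M_perp * math.cosh(rho) / T) * I0(Q_perp * math.sinh(rho) / T)

# Without transverse flow (rho = 0) this reduces to K0(M_perp/T), since I0(0) = 1.
print(emissivity_2D(1.0, 0.5, 0.0, 0.15))
```

Increasing `rho` at fixed `T` enhances the large-$`Q_{\perp }`$ part of the spectrum through the $`I_0`$ factor, which is the flow sensitivity discussed in the text.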
The components of the lepton pair four-momentum $`Q=(M_{\perp }\mathrm{cosh}Y,M_{\perp }\mathrm{sinh}Y,\stackrel{}{Q}_{\perp })`$ are related to the individual lepton momenta via $`M_{\perp }^2`$ $`=`$ $`p_{\perp 1}^2+p_{\perp 2}^2+2p_{\perp 1}p_{\perp 2}\mathrm{cosh}(y_1-y_2),`$ (10) $`\stackrel{}{Q}_{\perp }`$ $`=`$ $`\stackrel{}{p}_{\perp 1}+\stackrel{}{p}_{\perp 2},`$ (11) $`M^2`$ $`=`$ $`M_{\perp }^2-Q_{\perp }^2,`$ (12) $`\mathrm{tanh}Y`$ $`=`$ $`{\displaystyle \frac{p_{\perp 1}\mathrm{sinh}y_1+p_{\perp 2}\mathrm{sinh}y_2}{p_{\perp 1}\mathrm{cosh}y_1+p_{\perp 2}\mathrm{cosh}y_2}}.`$ (13) It turns out that the shape of the invariant mass spectrum $`dN/(dMdY|_{|Y|<0.5}dtdV(t))`$, which is determined only by the emissivity function $`E`$, does not depend on the flow rapidity $`\rho `$ in the 2D case , and in the 3D case for $`T`$ = 120 … 220 MeV and $`\rho <0.6`$ there is also no effect of the flow. The analysis of transverse momentum spectra of various hadron species points to an average transverse expansion velocity $`\overline{v}_{\perp }\approx 0.43`$ at kinetic freeze-out, while a combined analysis of hadron spectra including HBT data yields $`\overline{v}_{\perp }\approx 0.55`$ . Therefore, $`\rho <0.6`$ is the relevant range for the considered reactions. We note further that the invariant mass spectra $`dN/(dMdY|_{|Y|<0.5}dtdV(t))`$ for the 3D and 2D cases differ only marginally. Relying on these findings one can approximate the emissivity function $`E`$ by that of a “static” source at midrapidity, as appropriate only for symmetric systems, $$E=\mathrm{exp}\left\{-\frac{M_{\perp }\mathrm{cosh}Y}{T(t)}\right\},$$ (14) thus getting rid of the peculiarities of the flow pattern. We would like to emphasize the approximate character of eq. (14) because once cooling and dilution of the matter are included, they are necessarily accompanied by expansion and flow. One has therefore to check to which degree the flow affects the dilepton observables.
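The pair kinematics of eqs. (10)-(13) for massless leptons can be sketched in a few lines; the numerical inputs below are arbitrary test values, not data.

```python
import math

def pair_kinematics(pt1, y1, phi1, pt2, y2, phi2):
    """Pair variables for two massless leptons (pt, rapidity, azimuth):
    M_perp^2 = pt1^2 + pt2^2 + 2 pt1 pt2 cosh(y1 - y2)          (eq. 10)
    Q_perp   = |vec pt1 + vec pt2|                               (eq. 11)
    M^2      = M_perp^2 - Q_perp^2                               (eq. 12)
    tanh Y   = (pt1 sinh y1 + pt2 sinh y2)/(pt1 cosh y1 + pt2 cosh y2)  (eq. 13)."""
    mperp2 = pt1**2 + pt2**2 + 2 * pt1 * pt2 * math.cosh(y1 - y2)
    qx = pt1 * math.cos(phi1) + pt2 * math.cos(phi2)
    qy = pt1 * math.sin(phi1) + pt2 * math.sin(phi2)
    qperp2 = qx * qx + qy * qy
    m2 = mperp2 - qperp2
    Y = math.atanh((pt1 * math.sinh(y1) + pt2 * math.sinh(y2)) /
                   (pt1 * math.cosh(y1) + pt2 * math.cosh(y2)))
    return math.sqrt(mperp2), math.sqrt(qperp2), math.sqrt(m2), Y

# Back-to-back leptons at equal rapidity: Q_perp vanishes,
# M = M_perp = pt1 + pt2, and Y equals the common rapidity.
print(pair_kinematics(1.0, 0.3, 0.0, 1.0, 0.3, math.pi))
```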
In contrast to the invariant mass spectra, the transverse momentum or transverse mass spectra are sensitive to the flow pattern , in general. A value of $`\rho =`$ 0.4, for example, causes already a sizeable change of the shape of the spectra $`dN/(dQ_{\perp }dY|_{|y|<0.5}dtdV(t))`$ compared to $`\rho =0`$, in particular in the large-$`Q_{\perp }`$ region. The differences between the 2D and 3D cases are not larger than a factor of 2 and, in a restricted $`Q_{\perp }`$ interval, can be absorbed in a renormalization. The most striking difference of the 2D and 3D scenarios is seen in the rapidity spectrum: for 2D it is flat, while in the 3D case it is localized at midrapidity (values of $`\rho <`$ 0.6 also do not change the latter rapidity distribution). Below we shall discuss which space is left to extract from the dilepton data in restricted acceptance regions hints for the flow pattern when the other dilepton sources are also taken into account. Comparison with data: In line with the above arguments we base our rate calculations on eqs. (3, 14) and use the parameterizations $`T`$ $`=`$ $`(T_i-T_{\mathrm{\infty }})\mathrm{exp}\left\{-{\displaystyle \frac{t}{t_2}}\right\}+T_{\mathrm{\infty }},`$ (15) $`V`$ $`=`$ $`N\mathrm{exp}\left\{{\displaystyle \frac{t}{t_1}}\right\}.`$ (16) with $`T_i=`$ 210 MeV, $`T_{\mathrm{\infty }}=`$ 110 MeV, $`t_1=`$ 5 fm/c, $`t_2=`$ 8 fm/c, $`N=\frac{A+B}{2.5n_0}`$ with $`A,B`$ as mass numbers of the colliding systems and $`n_0=`$ 0.17 fm<sup>-3</sup>. We stop the time evolution at $`T_f=`$ 130 MeV. In fig. 1 we show the comparison with the preliminary CERES data applying the appropriate acceptance . One observes a satisfactory overall agreement of the sum of the hadronic cocktail and the thermal contribution with the data. It is the thermal contribution which fills the hole of the cocktail around $`M=0.5`$ GeV in the invariant mass distribution in fig. 1a. In the mass bin $`M=`$ 0.25 … 0.68 GeV the thermal yield is seen (fig.
1b) to dominate at small values of the transverse momentum $`Q_{\perp }`$. In this region of $`Q_{\perp }`$ transverse flow effects are not important. The large-$`Q_{\perp }`$ spectrum is dominated by the cocktail. For higher-mass bins the thermal yield in the region of the first two data points is nearly as strong as the cocktail and rapidly falls then at larger values of $`Q_{\perp }`$ below the cocktail. Therefore, the flow effects turn out to be of minor importance for the present analysis, since within our framework the transverse flow shows up at larger values of $`Q_{\perp }`$. The question now is whether the same thermal source model accounts also for the NA50 data . In the mass range $`M>1`$ GeV, the Drell-Yan dileptons and dileptons from correlated semileptonic decays of open charm mesons must be included. To get the corresponding yields for Pb + Pb collisions from PYTHIA the overlap function $`T_{AA}=`$ 31 mb<sup>-1</sup> is used. We have carefully checked that our PYTHIA calculations with K factors $`K_{\mathrm{DY}}=1.23`$ and $`K_{\mathrm{charm}}=4`$, adjusted to Drell-Yan data and identified open charm data (cf. for a data compilation), and intrinsic transverse parton momentum $`k_{\perp }=0.9`$ GeV coincide with results of the NA50 collaboration. In particular, we reproduce with $`\chi _{\mathrm{d}.\mathrm{o}.\mathrm{f}.}^2=0.24`$ the anticipated NA50 result that the data of the most central collisions (except the $`J/\psi `$ region) are described by the Drell-Yan yield $`+`$ 2.8 $`\times `$ the yield from correlated semileptonic decays of open charm. In this way we get some confidence in our acceptance routine which essentially consists of geometrical cuts and a suitable minimum single-muon energy of $`𝒪`$(11.5 GeV) . The resulting invariant mass spectra, including the thermal source contribution, are displayed in fig. 2a. The thermal source, with strength adjusted by the above comparison with CERES data, is needed to achieve the overall agreement with data.
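With the parameter values quoted above, eq. (15) can be inverted for the time at which the freeze-out temperature $`T_f=`$ 130 MeV is reached. This is a short numerical sketch; the normalization $`N`$ is set to 1 here, which only rescales the yield.

```python
import math

Ti, Tinf, t1, t2, Tf = 210.0, 110.0, 5.0, 8.0, 130.0  # MeV and fm/c

def T(t):
    """Eq. (15): exponential cooling from Ti toward Tinf on time scale t2."""
    return (Ti - Tinf) * math.exp(-t / t2) + Tinf

def V(t, N=1.0):
    """Eq. (16): exponentially growing fireball volume on time scale t1."""
    return N * math.exp(t / t1)

t_f = t2 * math.log((Ti - Tinf) / (Tf - Tinf))  # solve T(t_f) = Tf
print(round(t_f, 2), round(T(t_f), 1))  # ~12.88 fm/c, 130.0 MeV
```

So the thermal source in this parameterization radiates for roughly 13 fm/c before the evolution is stopped.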
For $`M<2`$ GeV the thermal contribution dominates over the Drell-Yan and charm contributions. The value of $`\chi _{\mathrm{d}.\mathrm{o}.\mathrm{f}.}^2=1.38`$ quantifies that the very details of the data are not perfectly described. This may be attributed to the too schematic source model eq. (14) and our approximate description of the more involved NA50 acceptance. Nevertheless, the transverse momentum spectrum for the mass bin $`M=`$ 1.5 … 2.5 GeV is nicely reproduced by the sum of Drell-Yan, open charm and thermal contributions with $`\chi _{\mathrm{d}.\mathrm{o}.\mathrm{f}.}^2=2`$. The thermal yield is strongest at not too large values of $`Q_{\perp }`$ where transverse flow effects can be neglected. Therefore, it seems that from present dilepton measurements the transverse flow can hardly be inferred. However, a reduction of the above uncomfortably large value of the phenomenological parameter $`k_{\perp }`$ could be partially compensated by transverse flow. In a previous version of this note we have confronted our model with the findings of the NA50 collaboration according to the recipe “reconstructed data” = the Drell-Yan yield $`+`$ 3 $`\times `$ the dilepton yield from decays of $`D`$ mesons and found a much better agreement. This fact points to the need to employ the full acceptance matrix in attempting a decision on the best data interpretation. Summary and discussion: In summary we have shown that a very simplified, schematic model for thermal dilepton emission, with parameters adjusted to the CERES data, also accounts for the measurements of the NA50 collaboration. Our study points to a common thermal source seen in different phase space regions in the dielectron and dimuon channels.
This unifying interpretation of different measurements has to be contrasted with other proposals of explaining the dimuon excess in the invariant mass region 1.5 … 2.5 GeV either by final state hadronic interactions or an abnormally large open charm production. The latter should be checked experimentally by a direct measurement of $`D`$ mesons as envisaged in the proposal , thus providing a firm understanding of the various dilepton sources. Due to the convolution of the local matter emissivity with the space-time history of the whole matter, and the general dependence on the flow pattern, it is difficult to decide which type of matter (deconfined or hadron matter) really emits the dileptons. Our model is not aimed at answering this question. Instead, with respect to chiral symmetry restoration, we apply the quark - hadron duality as a convenient way to roughly describe the dilepton emissivity of matter by a $`q\overline{q}`$ rate, being aware that higher-order QCD processes change this rate (this might partially be included in a changed normalization $`N`$). In further investigations a more microscopically founded rate together with a more detailed space-time evolution must be attempted, and chemical potentials controlling the baryon and pion densities must be included. This has been accomplished to a large extent in , with conclusions similar to ours. Acknowledgments: Stimulating discussions with P. Braun-Munzinger, W. Cassing, O. Drapier, Z. Lin, U. Mosel, E. Scomparin, J. Rafelski, J. Stachel, and G. Zinovjev are gratefully acknowledged. We thank R. Rapp for extensive conversations on his NA50 acceptance routine, enabling the comparison with data, and for informing us on prior to publication. O.P.P. thanks for the warm hospitality of the nuclear theory group in the Research Center Rossendorf. The work is supported by BMBF 06DR829/1 and WTZ UKR-008-98.
no-problem/9909/hep-ph9909508.html
ar5iv
text
# Gravitational Particle Production and the Moduli Problem ## I Introduction Recently there has been a renewal of interest in gravitational production of particles in an expanding universe. This was a subject of intensive study many years ago, see e.g. . However, with the invention of inflationary theory the issue of the production of particles due to gravitational effects became less urgent. Indeed, gravitational effects are especially important near the cosmological singularity, at the Planck time. But the density of the particles produced at that epoch becomes exponentially small due to inflation. New particles are produced only after the end of inflation when the energy density is much smaller than the Planck density. Production of particles due to gravitational effects at that stage is typically very inefficient. There are a few exceptions to this rule that have motivated the recent interest in gravitational particle production. First of all, there are some models where the main mechanism of reheating during inflation is due to gravitational production. Even though this mechanism is very inefficient, in the absence of other mechanisms of reheating it may do the job. For example, one may consider the class of theories where the inflaton potential $`V(\varphi )`$ gradually decreases at large $`\varphi `$ and does not have any minima. In such theories the inflaton field $`\varphi `$ does not oscillate after inflation, so the standard mechanism of reheating does not work . To emphasize this unusual feature of such theories we call them non-oscillatory models, or simply NO models . Usually gravitational particle production in such models leads to dangerous cosmological consequences, such as large isocurvature fluctuations and overproduction of gravitinos . In order to overcome these problems, it was necessary to modify the NO models and to use the non-gravitational mechanism of instant preheating for the description of particle production .
There are some other cases where even very small but unavoidable gravitational particle production may lead either to useful or to catastrophic consequences . For example, it has recently been found that the production of gravitinos by the oscillating inflaton field is not suppressed by the small gravitational coupling. As a result, gravitinos can be copiously produced in the early universe even if the reheating temperature always remains smaller than $`10^8`$ GeV . Another important example is related to moduli production. 15 years ago Coughlan et al realized that string theory and supergravity give rise to a cosmological moduli problem associated with the existence of a large homogeneous classical moduli field in the early universe . Soon afterwards Goncharov, Linde and Vysotsky showed that quantum fluctuations of moduli fields produced at the last stages of inflation lead to the moduli problem even if initially there were no classical moduli fields . Thus the cosmological moduli problem may appear either because of the existence of a large long-living homogeneous classical moduli field or because of quantum production of excitations (particles) of the moduli fields. In it was pointed out that the problem of moduli production is especially difficult in the context of NO models, where moduli are produced as abundantly as usual particles. Recently the problem of moduli production in the early universe was studied by numerical methods in , with conclusions similar to those of Ref. . As we are going to demonstrate, the main source of gravitational production of light moduli in inflationary cosmology is very simple, and one can study the theory of moduli production not only numerically but also analytically by the methods developed in . This will allow us to generalize and considerably strengthen the results of Refs. . 
In particular, we will see that in the leading approximation the problem of overproduction of light moduli particles is equivalent to the problem of large homogeneous classical moduli fields . We will show that the ratio of the number density of light moduli produced during inflation to the entropy of the universe after reheating satisfies the inequality $$\frac{n_\chi }{s}\gtrsim \frac{T_RH_0^2}{3mM_p^2}.$$ (1) Here $`m`$ is the moduli mass, $`M_p\simeq 2.4\times 10^{18}`$ GeV is the reduced Planck mass, and $`H_0`$ is the Hubble constant at the moment corresponding to the beginning of the last 60 e-foldings before the end of inflation. In the simplest versions of inflationary theory with potentials $`M^2\varphi ^2/2`$ or $`\lambda \varphi ^4/4`$ one has $`H_0\simeq 10^{14}`$ GeV. In such models our result implies that in order to satisfy the cosmological constraint $`\frac{n_\chi }{s}\lesssim 10^{-12}`$ one needs to have an abnormally small reheating temperature $`T_R\lesssim 1`$ GeV. Alternatively one may consider inflationary models where the Hubble constant at the end of inflation is very small. But we will argue that even this may not help, so one may need either to invoke thermal inflation or to use some other mechanisms which can make the moduli problem less severe, see e.g. . In the next section we outline the classical and quantum versions of the moduli problem and explain how each of them can arise in inflationary theory. In section III we describe the results of our numerical simulations of gravitational production of light scalar fields during and after preheating. In particular we verify our prediction that the dominant contribution to particle production comes from long-wavelength modes which are indistinguishable from homogeneous classical moduli fields. Finally in section IV we analytically compute the production of these long wavelength modes and derive Eq. (1). This section also contains our concluding discussion.
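As an illustrative consistency check (using the representative values quoted in the text, $`H_0\simeq 10^{14}`$ GeV and $`m\simeq 10^2`$ GeV), Eq. (1) can be inverted for the maximal reheating temperature compatible with the cosmological bound on $`n_\chi /s`$.

```python
# Invert Eq. (1): requiring n_chi/s ~ T_R H0^2 / (3 m Mp^2) <= 1e-12
# gives the maximal allowed reheating temperature (all energies in GeV).
H0 = 1e14          # Hubble scale at the start of the last 60 e-foldings
m = 100.0          # moduli mass
Mp = 2.4e18        # reduced Planck mass
bound = 1e-12      # conservative n_chi/s constraint
T_R_max = bound * 3 * m * Mp**2 / H0**2
print(T_R_max)     # ~0.17 GeV, i.e. T_R of order 1 GeV or below
```

The result, a few tenths of a GeV, is consistent with the statement in the text that $`T_R\lesssim 1`$ GeV is required in the simplest models.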
## II Moduli problem String moduli couple to standard model fields only through Planck scale suppressed interactions. Their effective potential is exactly flat in perturbation theory in the supersymmetric limit, but it may become curved due to nonperturbative effects or because of supersymmetry breaking. If these fields originally are far from the minimum of their effective potential, the energy of their oscillations will decrease in an expanding universe in the same way as the energy density of nonrelativistic matter, $`\rho _m\propto a^{-3}(t)`$. Meanwhile the energy density of relativistic plasma decreases as $`a^{-4}`$. Therefore the relative contribution of moduli to the energy density of the universe may quickly become very significant. They are expected to decay after the stage of nucleosynthesis, violating the standard nucleosynthesis predictions unless the initial amplitude of the moduli oscillations $`\chi _0`$ is sufficiently small. The constraints on the energy density of the moduli field $`\rho _\chi `$ and the number of moduli particles $`n_\chi `$ depend on details of the theory. The most stringent constraint appears because of the photodissociation and photoproduction of light elements by the decay products of the moduli fields. For $`m\sim 10^2`$–$`10^3`$ GeV one has $$\frac{n_\chi }{s}\lesssim 10^{-12}\mathrm{\dots }10^{-15}.$$ (2) see and references therein. In this paper we will use a conservative version of this constraint, $`\frac{n_\chi }{s}\lesssim 10^{-12}`$. If the field $`\chi `$ is a classical homogeneous oscillating scalar field, then this constraint applies to it as well if one defines the corresponding number density of nonrelativistic particles $`\chi `$ by the following obvious relation: $$n_\chi =\frac{\rho _\chi }{m}=\frac{m\chi ^2}{2}.$$ (3) Let us first consider moduli $`\chi `$ with a constant mass $`m\sim 10^2`$–$`10^3`$ GeV and assume that reheating of the universe occurs after the beginning of oscillations of the moduli.
This is indeed the case if one takes into account that in order to avoid the gravitino problem one should have $`T_R<10^8`$ GeV. We will also assume for definiteness that the minimum of the effective potential for the field $`\chi `$ is at $`\chi =0`$; one can always achieve this by an obvious redefinition of the field $`\chi `$. Independent of the choice of inflationary theory, at the end of inflation the main fraction of the energy density of the universe is concentrated in the energy of an oscillating scalar field $`\varphi `$. Typically this is the same field which drives inflation, but in some models such as hybrid inflation this may not be the case. We will consider here the simplest (and most general) model where the effective potential of the field $`\varphi `$ after inflation is quadratic, $$V(\varphi )=\frac{M^2}{2}\varphi ^2.$$ (4) After inflation the field $`\varphi `$ oscillates. If one keeps the notation $`\varphi `$ for the amplitude of oscillations of this field, then one can say that the energy density of this field is given by $`\rho (\varphi )=\frac{M^2}{2}\varphi ^2`$. To simplify our notation, we will take the scale factor at the end of inflation to be $`a_0=1`$. The amplitude of the oscillating field in the theory with the potential (4) changes as $$\varphi (t)=\varphi _0a^{-3/2}(t).$$ (5) The field $`\chi `$ does not oscillate and almost does not change its magnitude until the moment $`t_1`$ when $`H^2(t)=\frac{\rho _\varphi }{3M_p^2}`$ becomes smaller than $`m^2/3`$. At that time one has $$\frac{\rho _\chi }{\rho _\varphi }\simeq \frac{m^2\chi _0^2}{6H^2(t)M_p^2}\simeq \frac{\chi _0^2}{2M_p^2}$$ (6) This ratio, which can also be obtained by a numerical investigation of oscillations of the moduli fields, does not change until the time $`t_R`$ when reheating occurs because $`\rho _\chi `$ and $`\rho _\varphi `$ decrease in the same way: they are proportional to $`a^{-3}`$.
At the moment of reheating one has $`\rho _\varphi (t_R)=\pi ^2N(T)T_R^4/30`$, and the entropy of produced particles $`s=2\pi ^2N(T)T_R^3/45`$, where $`N(T)`$ is the number of light degrees of freedom. This yields $$\frac{n_\chi }{s}\sim \frac{\rho _\chi }{ms}\sim \frac{\chi _0^2T_R}{3mM_p^2}.$$ (7) Usually one expects $`T_R\gtrsim m\sim 10^2`$ GeV. Then in order to have $`\frac{n_\chi }{s}<10^{-12}`$ one would need $`\chi _0\lesssim 10^{-6}M_p`$. However, it is hard to imagine why the value of the moduli field at the end of inflation should be so small. If one takes $`\chi _0\sim M_p`$, which looks natural, then one violates the bound $`\frac{n_\chi }{s}<10^{-12}`$ by more than 12 orders of magnitude. This is the essence of the cosmological moduli problem . In general, the situation is more complex. During the expansion of the universe the effective potential of the moduli acquires some corrections. In particular, quite often the effective mass of the moduli (the second derivative of the effective potential) can be represented as $$m_\chi ^2=m^2+c^2H^2,$$ (8) where $`c`$ is some constant and $`H`$ is the Hubble parameter . Higher derivatives of the effective potential may acquire corrections as well. This leads to a different version of the moduli problem discussed in , see also . The position of the minimum of the effective potential of the moduli field in the early universe may occur at a large distance from the position of the minimum at present. This may fix the initial position of the field $`\chi `$ and lead to its subsequent oscillations. A simple toy model illustrating this possibility was given in : $$V=\frac{1}{2}m_\chi ^2\chi ^2+\frac{c^2}{2}H^2\left(\chi -\chi _0\right)^2.$$ (9) At large $`H`$ the minimum appears at $`\chi =\chi _0`$; at small $`H`$ the minimum is at $`\chi =0`$. Thus one would expect that initially the field should stay at $`\chi _0`$, and later, when $`H`$ decreases, it should oscillate about $`\chi =0`$ with an initial amplitude approximately equal to $`\chi _0`$. 
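As an order-of-magnitude sketch (not part of the original analysis; Planck units with $`M_p=1`$ and the representative inputs $`T_R\sim m\sim 10^2`$ GeV are assumptions made here for illustration), the estimate (7) can be checked numerically:

```python
# Order-of-magnitude check of the moduli abundance estimate of Eq. (7),
# n_chi/s ~ chi0^2 * T_R / (3 m M_p^2), in Planck units M_p = 1.

def moduli_abundance(chi0, T_R, m, M_p=1.0):
    """Estimate n_chi/s from the amplitude chi0 of the moduli oscillations."""
    return chi0**2 * T_R / (3.0 * m * M_p**2)

bound = 1e-12          # conservative nucleosynthesis bound on n_chi/s
T_R = m = 1.0e-17      # T_R ~ m ~ 10^2 GeV in units of M_p ~ 10^19 GeV

# A "natural" initial amplitude chi0 ~ M_p overshoots the bound badly:
ratio = moduli_abundance(chi0=1.0, T_R=T_R, m=m)
print(ratio / bound)   # ~ 3e11: more than eleven orders of magnitude

# The bound is saturated only for chi0 of order 1e-6 M_p:
chi0_max = (3.0 * bound * m / T_R) ** 0.5
print(chi0_max)        # ~ 1.7e-6
```

With these inputs the "natural" amplitude overshoots the nucleosynthesis bound by roughly eleven to twelve orders of magnitude, which is the statement made in the text.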
The only natural value for $`\chi _0`$ in supergravity is $`\chi _0\sim M_p`$. This may lead to a strong violation of the bound (2). A more detailed investigation of this situation has shown that one should distinguish between three different possibilities: $`c\gg 1`$, $`c\sim 1`$ and $`c\ll 1`$. If $`c>O(10)`$, the field $`\chi `$ is trapped in the (moving) minimum of the effective potential, its oscillations have very small amplitudes, and the moduli problem does not appear at all . This is the simplest resolution of the problem, but it is not simple to find realistic models where the condition $`c>O(10)`$ is satisfied. The most natural case is $`c\sim 1`$. It requires a complete study of the behavior of the effective potential in an expanding universe. There may exist some cases where the minimum of the effective potential does not move in this regime, but in general the effects of quantum creation of moduli in this scenario are subdominant with respect to the classical moduli problem discussed above , so we will not discuss this regime in our paper. Here we will study the case $`c\ll 1`$. In this case the effective mass of the moduli at $`H\gg m`$ is always much smaller than $`H`$, so the field does not move towards its minimum, regardless of its position. Thus if there is any classical field $`\chi _0`$ it simply stays at its initial position until $`H`$ becomes smaller than $`m`$, just as in the case considered above, and the resulting ratio $`\frac{n_\chi }{s}`$ is given by Eq. (7). The moduli problem in this scenario has two aspects. First of all, in order to avoid the classical moduli problem one needs to explain why $`\chi _0\lesssim 10^{-6}\sqrt{\frac{m}{T_R}}M_p`$, which is necessary (but not sufficient) to have $`n_\chi /s<10^{-12}`$. Then one should study quantum creation of moduli in an expanding universe and check whether their contribution to $`n_\chi `$ violates the bound $`n_\chi /s<10^{-12}`$. This last aspect of the moduli problem was studied in . 
In inflationary cosmology these two contributions (the contributions to $`n_\chi `$ from the classical field $`\chi `$ and from its quantum fluctuations) are almost indistinguishable. Indeed, the dominant contribution to the number of moduli produced in an expanding universe is given by the fluctuations of the moduli field produced during inflation. These fluctuations have exponentially large wavelengths and for all practical purposes they have the same consequences as a homogeneous classical field of amplitude $`\chi _0=\sqrt{\langle \chi ^2\rangle }`$. To be more accurate, these fluctuations behave in the same way as a homogeneous classical field $`\chi `$ only if their wavelength is greater than $`H^{-1}`$. During inflation this condition is satisfied for all inflationary fluctuations, but after inflation the size of the horizon grows and eventually becomes larger than the wavelength of some of the modes. Then these modes begin to oscillate and their amplitude begins to decrease even if at that stage $`m<H`$. To take this effect into account one may simply exclude from consideration those modes whose wavelengths become smaller than $`H^{-1}`$ prior to the moment $`t\sim m^{-1}`$ when $`H`$ drops down to $`m`$. It can be shown that in the context of our problem this is a relatively minor correction, so we can use the simple estimate $`\chi _0=\sqrt{\langle \chi ^2\rangle }`$. In order to evaluate this quantity we will assume that $`c\ll 1`$ and $`m\ll H`$ during and after inflation. This reduces the problem to the investigation of the production of massless (or nearly massless) particles during and after inflation. In the next section we study this issue and show that in the most interesting cases where inflationary long-wavelength fluctuations of a scalar field can be generated during inflation, they give the dominant contribution to particle production. This allows us to reduce a complicated problem of gravitational particle production to a simple problem which can be easily solved analytically. 
## III Generation of light particles during and after inflation In this section we will present the results of a numerical study of the gravitational creation of light scalar particles in the context of inflation. Consider a scalar field $`\chi `$ with the potential $$V(\chi )=\frac{1}{2}\left(m^2-\xi R\right)\chi ^2,$$ (10) where $`R`$ is the Ricci scalar. In a Friedmann universe $`R=\frac{6}{a^2}(\ddot{a}a+\dot{a}^2)`$. The scalar field operator can be represented in the form $$\chi (x,t)=\frac{1}{(2\pi )^{3/2}}\int d^3k\left[\widehat{a}_k\chi _k(t)e^{ikx}+\widehat{a}_k^{\dagger }\chi _k^{\ast }(t)e^{-ikx}\right],$$ (11) where the eigenmode functions $`\chi _k`$ satisfy $$\ddot{\chi _k}+3\frac{\dot{a}}{a}\dot{\chi _k}+\left[\left(\frac{k}{a}\right)^2+m^2-\xi R\right]\chi _k=0.$$ (12) By introducing conformal time and field variables defined as $`\eta \equiv \int \frac{dt}{a},f_k\equiv a\chi _k`$, eq. (12) can be simplified to $$f_k^{\prime \prime }+\omega _k^2f_k=0,$$ (13) where primes denote differentiation with respect to $`\eta `$ and $$\omega _k^2=k^2+a^2m^2+\left(\frac{1}{6}-\xi \right)\frac{a^{\prime \prime }}{a}.$$ (14) The growth of the scale factor is determined by the evolution of the inflaton field $`\varphi `$ with potential $`V(\varphi )`$. In conformal time $$a^{\prime \prime }=\frac{a^{\prime 2}}{a}-\frac{8\pi a}{3}\left(\varphi ^{\prime 2}-a^2V(\varphi )\right),$$ (15) $$\varphi ^{\prime \prime }+2\frac{a^{\prime }}{a}\varphi ^{\prime }+a^2\frac{\partial V(\varphi )}{\partial \varphi }=0.$$ (16) For initial conditions for the modes $`f_k`$, in the first approximation one can use positive frequency vacuum fluctuations $`f_k=\frac{1}{\sqrt{2k}}e^{-ikt}`$, see e.g. . However, when describing fluctuations produced at the last stages of a long inflationary period, one should begin with fluctuations which have been generated during the previous stage of inflation. 
For example, for massless scalar fields minimally coupled to gravity instead of $`f_k=\frac{1}{\sqrt{2k}}e^{-ikt}`$ one should use Hankel functions : $`f_k(t)={\displaystyle \frac{ia(t)\mathrm{H}}{k\sqrt{2k}}}\left(1+{\displaystyle \frac{k}{i\mathrm{H}}}e^{-\mathrm{H}t}\right)\mathrm{exp}\left({\displaystyle \frac{ik}{\mathrm{H}}}e^{-\mathrm{H}t}\right),`$ (17) where $`H`$ is the Hubble constant at the beginning of calculations. To make the calculations even more accurate, one should take into account that long-wavelength perturbations were produced at early stages of inflation when $`H`$ was greater than at the beginning of the calculations. If the stage of inflation is very long, then the final results do not change much if instead of the Hankel functions (17) one uses $`f_k=\frac{1}{\sqrt{2k}}e^{-ikt}`$. However, if the inflationary stage is short, then using the functions $`f_k=\frac{1}{\sqrt{2k}}e^{-ikt}`$ considerably underestimates the resulting value of $`\langle \chi ^2\rangle `$. At late times the solutions to Eq. (13) can be represented in terms of WKB solutions as $$f_k(\eta )=\frac{\alpha _k(\eta )}{\sqrt{2\omega _k}}e^{-i\int ^\eta \omega _kd\eta }+\frac{\beta _k(\eta )}{\sqrt{2\omega _k}}e^{+i\int ^\eta \omega _kd\eta },$$ (18) where $`\alpha _k(\eta )`$ and $`\beta _k(\eta )`$ play the role of coefficients of a Bogolyubov transformation. This form is often used to discuss particle production because the number density of particles in a given mode is given by $`n_k=|\beta _k(\eta )|^2`$ and their energy density is $`\omega _kn_k`$. As we will see, though, the main contribution to the number density of $`\chi `$ particles at late times comes from long-wavelength modes which are far outside the horizon during reheating. As long as they remain outside the horizon these modes do not manifest particle-like behaviour, i.e. the mode functions do not oscillate. In this situation the coefficients $`\alpha `$ and $`\beta `$ have no clear physical meaning. 
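To illustrate the kind of mode-by-mode computation described here, the following is a minimal numerical sketch (not the authors' code). It integrates the massless, minimally coupled mode equation in pure de Sitter space, written in the standard conformal-time form $`f_k^{\prime \prime }+(k^2-a^{\prime \prime }/a)f_k=0`$ with $`a=-1/(H\eta )`$ and $`H=1`$, and checks it against the exact Bunch–Davies solution (an equivalent form of Eq. (17) up to an overall phase), whose amplitude freezes at the value (20) outside the horizon:

```python
import cmath, math

def f_exact(k, eta):
    """Bunch-Davies mode f_k = a*chi_k in de Sitter space (H = 1)."""
    return (1 - 1j/(k*eta)) * cmath.exp(-1j*k*eta) / math.sqrt(2*k)

def df_exact(k, eta):
    """Conformal-time derivative of f_exact."""
    return cmath.exp(-1j*k*eta)/math.sqrt(2*k) * (1j/(k*eta**2) - 1j*k - 1/eta)

def evolve_mode(k, eta_i, eta_f, n=50000):
    """RK4 integration of f'' = -(k^2 - 2/eta^2) f for complex f."""
    f, g = f_exact(k, eta_i), df_exact(k, eta_i)
    h = (eta_f - eta_i) / n
    acc = lambda eta, f: -(k*k - 2.0/eta**2) * f
    eta = eta_i
    for _ in range(n):
        k1f, k1g = g, acc(eta, f)
        k2f, k2g = g + h/2*k1g, acc(eta + h/2, f + h/2*k1f)
        k3f, k3g = g + h/2*k2g, acc(eta + h/2, f + h/2*k2f)
        k4f, k4g = g + h*k3g, acc(eta + h, f + h*k3f)
        f += h/6*(k1f + 2*k2f + 2*k3f + k4f)
        g += h/6*(k1g + 2*k2g + 2*k3g + k4g)
        eta += h
    return f

k, eta_i, eta_f = 1.0, -50.0, -0.1     # from inside to far outside the horizon
f_num = evolve_mode(k, eta_i, eta_f)
ratio = abs(f_num)**2 / abs(f_exact(k, eta_f))**2
print(ratio)   # close to 1: |f_k|^2 freezes at a^2 H^2 / (2 k^3), cf. Eq. (20)
```

The hand-rolled RK4 integrator is used only to keep the sketch self-contained; in the realistic problem one would evolve the background equations (15)-(16) for $`a`$ and $`\varphi `$ simultaneously with each mode.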
We therefore present our results in terms of the mode amplitudes $`|f_k(\eta )|^2`$, which as we will show contain all the information relevant to number density and energy density at late times. At late times, when $`H<m`$ (so that the $`a^{\prime \prime }/a`$ term is negligible), the long wavelength modes of $`\chi `$ will be nonrelativistic and their number density will simply be given by Eq. (3). Moreover the very long wavelength modes which are still outside the horizon at late times (e.g. at nucleosynthesis) will act like a classical homogeneous field whose amplitude is given by $$\langle \chi ^2\rangle =\frac{1}{2\pi ^2a^2}\int dkk^2|f_k|^2.$$ (19) It is these very long wavelength modes which will dominate and therefore the quantity of interest for us is the amplitude of these fluctuations. In our calculations we assumed that $`m^2=c^2H^2`$ with $`c\ll 1`$; the results shown are for $`c=0`$ but we also did the calculations with $`c=0.01`$ and found that the results were independent of $`c`$ in this range. Figure 1 shows the results of solving Eq. (13) for a model with the inflaton potential $`V(\varphi )=\frac{1}{4}\lambda \varphi ^4`$. These data were taken after ten oscillations of the inflaton field. The vertical axis shows $`k^2|f_k|^2`$ as a function of the momentum $`k`$. The momentum is shown in units of the Hubble constant at the end of inflation. The different plots represent runs with different starts of inflation, i.e. with different initial values of $`\varphi `$. They all coincide in the ultraviolet part of the spectrum, but the runs which started towards the end of inflation show a significant suppression in the infrared. This shows that fluctuations produced during inflation are the primary source of the infrared modes, which in turn dominate the number density. The curve on top shows the Hankel function solutions (17), which give $$|f_k|^2=\frac{a^2H^2}{2k^3}$$ (20) for de Sitter space, i.e. for a constant $`H`$. 
In the figure, we have corrected this expression by using for each mode the value of the Hubble constant at the moment when that mode crossed the horizon. For the $`\lambda \varphi ^4`$ model shown here the appropriate Hubble parameter for each mode can be approximated as $$H_k=\sqrt{\frac{2\pi \lambda }{3}}\left(\varphi _e^2-\frac{1}{\pi }\mathrm{ln}\left(\frac{k}{H_e}\right)\right),$$ (21) where $`\varphi _e`$ and $`H_e`$ are the values of the inflaton and the Hubble parameter respectively at the end of inflation. Note that if the Hankel function solutions (17) are used as initial conditions for a numerical run then they do not change as the modes cross the horizon, so the upper curve of the plot can also be obtained from such a run. Relative to this upper curve it is easy to see how the numerical runs show suppression in the infrared due to starting inflation at late times and choosing the initial conditions in the form $`f_k=\frac{1}{\sqrt{2k}}e^{-ikt}`$, and suppression in the ultraviolet due to the end of inflation. The latter suppression is physically realistic. The infrared suppression should occur at a wavelength corresponding to the Hubble radius at the beginning of inflation. The different regions of the graph illustrate effects which occurred at different times. During inflation long wavelength modes crossed the horizon at early times. Thus the far left portion of the plot shows the modes which crossed earliest. They have the highest amplitudes both because they were frozen in earliest and because the Hubble constant was higher at earlier times when they were produced. The lower plots do not show these high amplitudes because inflation began too late for these modes to cross outside the horizon and be amplified. Farther to the right the curve shows modes which were only slightly if at all amplified during inflation. The far right modes were produced during the fast rolling and oscillatory stages. 
These modes are not frozen and can be described meaningfully in terms of $`\alpha `$ and $`\beta `$ coefficients. The regularized expression $$|f_k|^2=\frac{1}{\omega _k}\left[|\beta _k|^2+Re\left(\alpha _k\beta _k^{\ast }e^{-2i{\scriptscriptstyle \int \omega _kd\eta }}\right)\right]$$ (22) shows why the amplitudes of these modes oscillate as a function of $`k`$. In short, inflationary fluctuations are primarily responsible for producing infrared modes and post-inflationary effects account for ultraviolet modes, but it is the infrared modes that were outside the horizon at the end of inflation which dominate the number density at late times. The earlier inflation began the farther this distribution will extend into the infrared, and the long wavelength end of this spectrum will always give the greatest contribution to the number density of $`\chi `$ particles. Our numerical calculations are similar to those of Kuzmin and Tkachev . However, they took a rather small initial value of the classical scalar field $`\varphi `$, which resulted in less than 60 e-folds of inflation. As initial conditions for the fluctuations they used $`f_k=\frac{1}{\sqrt{2k}}e^{-ikt}`$. They pointed out that the results of such calculations can give only a lower bound on the number of $`\chi `$ particles produced during inflation. Consequently, similar calculations performed in could give only a lower bound on the number of moduli fields produced in the early universe. Our goal is to find not a lower bound but the complete number of particles produced at the last stages of inflation in realistic inflationary models, where the total duration of inflation typically is much greater than 60 e-foldings. One result revealed by our calculations is that the effects of an arbitrarily long stage of inflation can be mimicked by the correct choice of “initial” conditions chosen for the modes $`\chi _k`$ after inflation. 
Instead of using the Minkowski space fluctuations $`f_k=\frac{1}{\sqrt{2k}}e^{-ikt}`$ used in as well as in our numerical calculations, one should use the de Sitter space solutions (17), with $`H`$ corrected to the value it had for each mode at horizon crossing. Using these modes at the end of inflation is equivalent to running a simulation with a long stage of inflation. Our numerical calculations confirmed the result that we are going to derive analytically in the next section (see also ): The number of $`\chi `$ particles ($`n_\chi \sim m\langle \chi ^2\rangle `$) produced during the stage of inflation beginning at $`\varphi =\varphi _0`$ in the simplest model $`M^2\varphi ^2/2`$ is proportional to $`\varphi _0^4`$, whereas in the model $`\lambda \varphi ^4/4`$ it is proportional to $`\varphi _0^6`$. Thus the total number of particles produced during inflation is extremely sensitive to the choice of initial conditions. If one considers $`\varphi _0`$ corresponding to the beginning of the last 60 e-folds of inflation, the total number of particles produced at that stage appears to be much greater than the lower bound obtained in . As we will see, this will allow us to put a much stronger constraint on the moduli theories than the constraint obtained in . ## IV Light moduli from inflation The numerical results obtained in the previous section confirm our expectation that in the most interesting cases where long-wavelength inflationary fluctuations of light scalar fields can be generated during inflation, they give the dominant contribution to particle production. In particular, in the case of $`c\ll 1`$, $`m\ll H`$ most moduli field fluctuations are generated during inflation rather than during the post-inflationary stage. 
These fluctuations grow at the stage of inflation in the same way as if the moduli field $`\chi `$ were massless : $$\frac{d\langle \chi ^2\rangle }{dt}=\frac{H^3}{4\pi ^2}.$$ (23) If the Hubble constant does not change during inflation, one obtains the well-known relation $$\langle \chi ^2\rangle =\frac{H^3t}{4\pi ^2}.$$ (24) However, in realistic inflationary models the Hubble constant changes in time, and fluctuations of the light fields $`\chi `$ with $`m\ll H`$ behave in a more complicated way. As an example, let us consider the case studied in the last section. Here inflation is driven by a field $`\varphi `$ with an effective potential $`V(\varphi )=\frac{\lambda }{4}\varphi ^4`$ at $`\varphi >0`$. This potential could be oscillatory or flat for $`\varphi <0`$. We consider a light scalar field $`\chi `$ which is not coupled to the inflaton field $`\varphi `$, and which is minimally coupled to gravity. The field $`\varphi `$ during inflation obeys the following equation: $$3H\dot{\varphi }=-\lambda \varphi ^3.$$ (25) Here $$H=\frac{1}{2}\sqrt{\frac{\lambda }{3}}\frac{\varphi ^2}{M_p}.$$ (26) These two equations yield the solution $$\varphi =\varphi _0\mathrm{exp}\left(-2\sqrt{\frac{\lambda }{3}}M_pt\right),$$ (27) where $`\varphi _0`$ is the initial value of the inflaton field $`\varphi `$. In this case Eq. (23) reads: $$\frac{d\langle \chi ^2\rangle }{dt}=\frac{\lambda \sqrt{\lambda }}{96\sqrt{3}\pi ^2}\frac{\varphi _0^6}{M_p^3}\mathrm{exp}\left(-12\sqrt{\frac{\lambda }{3}}M_pt\right).$$ (28) The result of integration at large $`t`$ converges to $$\langle \chi ^2\rangle =\frac{\lambda }{2}\left(\frac{\varphi _0^3}{24\pi M_p^2}\right)^2.$$ (29) This result agrees with the results of our numerical investigation described in the previous section. 
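The integration leading to Eq. (29) can be cross-checked by simple quadrature (a sketch with made-up test values $`\lambda =0.1`$, $`\varphi _0=3M_p`$, which are not values used in the paper): integrate $`H^3/4\pi ^2`$ over the slow-roll background (26)-(27) and compare with the closed-form result.

```python
# Numerical cross-check of Eq. (29): integrate d<chi^2>/dt = H^3/(4 pi^2)
# over the slow-roll background of the lambda*phi^4/4 model, Eqs. (26)-(27).
# Units M_p = 1; lam and phi0 are arbitrary test values, not paper inputs.
import math

lam, phi0, Mp = 0.1, 3.0, 1.0

def H(t):
    """Hubble parameter along the slow-roll solution, Eqs. (26)-(27)."""
    phi = phi0 * math.exp(-2.0 * math.sqrt(lam/3.0) * Mp * t)
    return 0.5 * math.sqrt(lam/3.0) * phi**2 / Mp

# trapezoidal integration until the integrand has decayed away
T, n = 10.0, 200000
dt = T / n
chi2 = sum(H(i*dt)**3 for i in range(n+1)) * dt / (4*math.pi**2)
chi2 -= 0.5*dt*(H(0.0)**3 + H(T)**3) / (4*math.pi**2)   # trapezoid end correction

chi2_analytic = 0.5 * lam * (phi0**3 / (24*math.pi*Mp**2))**2   # Eq. (29)
print(chi2 / chi2_analytic)    # -> 1.00...
```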
From the point of view of the moduli problem, these fluctuations lead to the same consequences as a classical scalar field $`\chi `$ which is homogeneous on the scale $`H^{-1}`$ and which has a typical amplitude $$\chi _0=\sqrt{\langle \chi ^2\rangle }=\sqrt{\frac{\lambda }{2}}\frac{\varphi _0^3}{24\pi M_p^2}.$$ (30) A similar result can be obtained in the model $`V(\varphi )=\frac{M^2}{2}\varphi ^2`$. In this case one has $$\varphi (t)=\varphi _0-\sqrt{\frac{2}{3}}M_pMt.$$ (31) The time-dependent Hubble parameter is given by $$H=\frac{M}{\sqrt{6}M_p}\varphi (t),$$ (32) which yields $$\chi _0=\sqrt{\langle \chi ^2\rangle }=\frac{M\varphi _0^2}{8\pi \sqrt{3}M_p^2}.$$ (33) As we see, the value of $`\chi _0`$ depends on the initial value of the field $`\varphi `$. This result has the following interpretation. One may consider an inflationary domain of initial size $`H^{-1}(\varphi _0)`$. This domain after inflation becomes exponentially large. For example, its size in the model with $`V(\varphi )=\frac{M^2}{2}\varphi ^2`$ becomes $$l\sim H^{-1}(\varphi _0)\mathrm{exp}\left(\frac{\varphi _0^2}{4M_p^2}\right).$$ (34) In order to achieve 60 e-folds of inflation in this model one needs to take $`\varphi _0\simeq 15M_p`$. This implies that a typical value of the (nearly) homogeneous scalar field $`\chi `$ in a universe which experienced 60 e-folds of inflation in this model is given by $$\chi _0=\sqrt{\langle \chi ^2\rangle }\simeq 5M.$$ (35) In realistic versions of this model one has $`M\approx 5\times 10^{-6}M_p\sim 10^{13}`$ GeV . Substitution of this result into Eq. (7) gives $$\frac{n_\chi }{s}\sim 2\times 10^{-10}\frac{T_R}{m}.$$ (36) This implies that the condition $`n_\chi /s\lesssim 10^{-12}`$ requires that the reheating temperature in this model should be at least two orders of magnitude smaller than $`m`$. For example, for $`m\sim 10^2`$ GeV one should have $`T_R\lesssim 1`$ GeV, which looks rather problematic. This result confirms the basic conclusion of Ref. that the usual models of inflation do not solve the moduli problem. 
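A quick numerical check of the numbers quoted in Eqs. (33)-(36) (the inputs $`\varphi _0=15M_p`$ and $`M=5\times 10^{-6}M_p`$ are the ones given in the text; Planck units $`M_p=1`$ are assumed here):

```python
# Check of the numbers in Eqs. (33)-(36) for V = M^2 phi^2 / 2.
import math

Mp = 1.0
M = 5e-6 * Mp              # inflaton mass ~ 1e13 GeV, as quoted in the text
phi0 = 15.0 * Mp           # gives ~60 e-folds: N = phi0^2/(4 Mp^2) ~ 56

chi0 = M * phi0**2 / (8 * math.pi * math.sqrt(3) * Mp**2)   # Eq. (33)
print(chi0 / M)            # -> ~5, as in Eq. (35)

# Prefactor of T_R/m in the abundance Eq. (7), n/s = chi0^2 T_R/(3 m Mp^2):
coeff = chi0**2 / (3 * Mp**2)
print(coeff)               # -> ~2e-10, the prefactor in Eq. (36)
```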
Our result is similar to the result obtained in by numerical methods, but it is approximately two orders of magnitude stronger. The reason for this difference is that the authors of Ref. used a much smaller value of $`\varphi _0`$ in their numerical calculations. Consequently, they took into account only the particles produced at the very end of inflation, whereas the leading effect occurs at earlier stages of inflation, i.e. at larger $`\varphi `$. In general one can get a simple estimate of $`\chi _0=\sqrt{\langle \chi ^2\rangle }`$ by assuming that the universe expanded with a constant Hubble parameter $`H_0`$ during the last 60 e-folds of inflation. To make this estimate more accurate one should take the value of the Hubble constant not at the end of inflation but approximately 60 e-foldings before it, at the time when the fluctuations on the scale of the present horizon were produced. The reason is that the largest contribution to the fluctuations is given by the time when the Hubble constant took its greatest values. Also, at that stage the rate of change of $`H`$ was relatively slow, so the approximation $`H=H_0=const`$ is reasonable. Thus one can write $$\chi _0=\sqrt{\langle \chi ^2\rangle }\simeq \frac{H_0}{2\pi }\sqrt{H_0t}\simeq \frac{H_0}{2\pi }\sqrt{60}\sim H_0.$$ (37) This gives $$\frac{n_\chi }{s}\sim \frac{T_RH_0^2}{3mM_p^2}.$$ (38) In the simplest versions of chaotic inflation with potentials $`M^2\varphi ^2/2`$ or $`\lambda \varphi ^4/4`$ one has $`H_0\sim 10^{14}`$ GeV, which leads to the requirement $`T_R\lesssim 1`$ GeV. But this equation shows that there is another way to relax the problem of the gravitational moduli production: one may consider models of inflation with a very small value of $`H_0`$ . For example, one may have $`n_\chi /s\lesssim 10^{-12}`$ for $`T_R\sim H_0\sim 10^7`$ GeV. However, this condition is not sufficient to resolve the moduli problem; the situation is more complicated. First of all, it is very difficult to find inflationary models where inflation occurs only during 60 e-foldings. 
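The arithmetic behind the estimate (37)-(38) can be checked directly (the inputs $`H_0\sim 10^{14}`$ GeV, $`m\sim 10^2`$ GeV and a Planck mass of order $`10^{19}`$ GeV are the representative values used above, not new data):

```python
# Order-of-magnitude check of Eqs. (37)-(38): the bound on T_R in models
# with H_0 ~ 1e14 GeV. All masses in GeV; Mp is the Planck mass ~ 1e19 GeV.
import math

H0, m, Mp, bound = 1e14, 1e2, 1e19, 1e-12

chi0 = H0 * math.sqrt(60) / (2 * math.pi)        # Eq. (37): chi0 is of order H0
T_R_max = bound * 3 * m * Mp**2 / chi0**2        # invert the abundance Eq. (7)
print(chi0 / H0)     # -> ~1.2
print(T_R_max)       # -> ~2 GeV: T_R must be of order 1 GeV or below
```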
Typically it lasts much longer, and the fluctuations of the light moduli fields will be much greater. This is especially obvious in the theory of eternal inflation where the amplitude of fluctuations of the light moduli fields can become indefinitely large . In particular, if the condition $`m_\chi ^2\simeq m^2+c^2H^2`$ with $`c\ll 1`$ remains valid for $`\chi \gtrsim M_p`$, then one may expect the generation of moduli fields $`\chi >M_p`$. This should initiate inflation driven by the light moduli . Then the situation would become even more problematic: we would need to find out how one could produce baryon asymmetry of the universe after the light moduli decay and how one could obtain density perturbations $`\delta \rho /\rho \sim 10^{-4}`$ in such a low-scale inflationary model. One may expect that the region of values of $`\chi `$ where its effective potential has small curvature $`m^2\ll H^2`$ may be limited, and may even depend on $`H`$. Then the existence of a long stage of inflation would push the fluctuations of the field $`\chi `$ up to the region where its effective potential becomes curved, and instead of our results for $`\chi _0`$ one should substitute the largest value of $`\chi `$ for which $`m_\chi ^2<H^2`$. In such a situation one would have a mixture of problems which occur at $`c\sim 1`$ and at $`c\ll 1`$. Finally, we should emphasize that all our worries about quantum creation of moduli appear only after one makes the assumption that for whatever reason the initial value of the classical field $`\chi `$ in the universe vanishes, i.e. that the classical version of the moduli problem has been resolved. We do not see any justification for this assumption in theories where the mass of the moduli field in the early universe is much smaller than $`H`$. Indeed, in such theories the classical field $`\chi `$ initially can take any value, and this value is not going to change until the moment $`t\sim H^{-1}\sim m^{-1}`$. 
The main purpose of this paper was to demonstrate that even if one finds a solution to the light moduli problem at the classical level, the same problem will appear again because of quantum effects. This does not mean that the moduli problem is unsolvable. One of the most interesting solutions is provided by thermal inflation . The Hubble constant during inflation in this scenario is very small, and the effects of moduli production there are rather insignificant. Another possibility is that moduli are very heavy in the early universe, $`m_\chi ^2=m^2+c^2H^2`$, with $`c>O(10)`$, in which case the moduli problem does not appear . The main question is whether we really need to make the theory so complicated in order to avoid the cosmological problems associated with moduli. Is it possible to find a simpler solution? One of the main results of our investigation is to confirm the conclusion of Ref. that the simplest versions of inflationary theory do not help us to solve the moduli problem but rather aggravate it. In conclusion we would like to note that the methods developed in this paper apply not only to the theory of moduli production but to other problems as well. For example, one may study the theory of gravitational production of superheavy scalar particles after inflation . If these particles are minimally coupled to gravity and have mass $`m\ll H`$ during inflation, then one can use our Eqs. (30), (33) to calculate the number of produced particles. These equations imply that the final result will strongly depend on $`\varphi _0`$, i.e. on the duration of inflation. If inflation occurs for more than 60 Hubble times, the production of particles with $`m\ll H`$ is much more efficient than was previously anticipated. As we just mentioned, if $`\varphi _0`$ is large enough then the production of fluctuations of the field $`\chi `$ may even lead to a new stage of inflation driven by the field $`\chi `$ . 
On the other hand, if $`m`$ is greater than the value of the Hubble constant at the very end of inflation, then quantum fluctuations are produced only at the early stages of inflation (when $`H>m`$). These fluctuations oscillate and decrease exponentially during the last stages of inflation. In such cases the final number of produced particles will not depend on the duration of inflation and can be unambiguously calculated. We hope to return to this question in a separate publication. The authors are grateful to I. Tkachev for useful discussions. This work was supported by NSERC and CIAR and by NSF grant AST95-29-225. The work of G.F. and A.L. was also supported by NSF grant PHY-9870115, and the work of L.K. and A.L. by NATO Linkage Grant 975389.
# BI-TP 99/33 Numerical evidence for Goldstone-mode effects in the three-dimensional $`O(4)`$-model (talk presented by Tereza Mendes) ## Abstract We investigate the three-dimensional $`O(4)`$-model on $`24^3`$–$`96^3`$ lattices as a function of the magnetic field $`H`$. Below the critical point, in the broken phase, we confirm explicitly the $`H^{1/2}`$ dependence of the magnetization and the corresponding $`H^{-1/2}`$ divergence of the longitudinal susceptibility, which are predicted from the existence of Goldstone modes. At the critical point the magnetization follows the expected finite-size-scaling behavior, with critical exponent $`\delta =4.87(1)`$. The continuous symmetry present in the $`O(N)`$ spin models $$\beta ℋ=-J\underset{<i,j>}{\sum }𝐒_i\cdot 𝐒_j-𝐇\cdot \underset{i}{\sum }𝐒_i$$ — where $`𝐒_i`$ are unit vectors in an $`N`$-dimensional sphere — gives rise to the so-called spin waves: slowly varying (long-wavelength) spin configurations, whose energy may be arbitrarily close to the ground-state energy. In two dimensions these modes are responsible for the absence of spontaneous magnetization, whereas in $`d>2`$ they are the Goldstone modes associated with the spontaneous breaking of the rotational symmetry for temperatures below the critical value $`T_c`$. Due to the presence of Goldstone modes, a diverging susceptibility is induced in the limit of small external magnetic field $`H`$ for all $`T<T_c`$, i.e. everywhere on the coexistence curve (see for example ). More precisely: not only the transverse susceptibility, which is directly related to the fluctuation of the transverse (Goldstone) modes, diverges when $`H\to 0`$ $$\chi _T(T<T_c,H)=\frac{M(T,H)}{H}=𝒪(H^{-1})$$ but also the longitudinal susceptibility, given in terms of the magnetization $`M`$ by $`\chi _L\equiv \partial M/\partial H`$, is predicted to diverge for $`2<d\le 4`$. 
In $`d=3`$ the predicted divergence is $`H^{-1/2}`$, or equivalently, the behavior for the magnetization will include a “singular” term of order $`H^{1/2}`$ $$M(T<T_c,H)=M(T,0)+cH^{1/2}.$$ Indication of this behavior was seen in early numerical simulations of the three-dimensional $`O(3)`$ model at low temperatures . Here we verify explicitly the predicted singularities — or Goldstone-mode effects — for the three-dimensional $`O(4)`$ case, by simulating the model in the presence of an external magnetic field and close to the critical temperature $`T_c`$. The low-temperature singularities are still present at $`T\lesssim T_c`$. We are able to compute magnetic critical exponents directly, by varying $`H`$ at $`T_c`$, and we also use the observed Goldstone-effect behavior to extrapolate our data to $`H\to 0`$ and obtain the zero-field critical exponents in very good agreement with the existing values. Using these exponents, we verify finite-size scaling at $`T_c`$. Our motivation for considering the three-dimensional $`O(4)`$ model is that it is expected to be in the same universality class as QCD at finite temperature for two degenerate quark flavors , the magnetic field in the spin model corresponding to the quark mass in QCD. Our simulations are done using the cluster algorithm. We compute the magnetization along the direction of the magnetic field as $$M=\frac{1}{V}<\underset{i}{\sum }𝐒_i\cdot \widehat{𝐇}>.$$ We note that, due to the presence of a nonzero field, the magnetization defined above is nonzero on finite lattices, contrary to what happens for simulations at $`H=0`$, where one is led to consider $`\frac{1}{V}<|\underset{i}{\sum }𝐒_i|>`$, which overestimates the true magnetization. 
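The point about the zero-field estimator can be illustrated with a toy calculation (a sketch, not the paper's simulation: completely random $`O(4)`$ spins stand in for configurations in the disordered phase, where the true magnetization vanishes):

```python
# Illustration: at H = 0 on a finite lattice, the estimator <|sum_i S_i|>/V
# stays nonzero even for completely disordered spins, while the projection
# onto a fixed direction averages to zero. Random O(4) unit vectors stand in
# for high-temperature spin configurations.
import math, random

random.seed(1)

def random_spin():
    """Uniform unit vector in 4 dimensions (normalized Gaussian trick)."""
    v = [random.gauss(0, 1) for _ in range(4)]
    n = math.sqrt(sum(x*x for x in v))
    return [x/n for x in v]

V, n_conf = 512, 200
m_proj, m_abs = 0.0, 0.0
for _ in range(n_conf):
    tot = [0.0]*4
    for _ in range(V):
        s = random_spin()
        for a4 in range(4):
            tot[a4] += s[a4]
    m_proj += tot[0] / V                              # projection estimator
    m_abs += math.sqrt(sum(x*x for x in tot)) / V     # |sum| estimator
m_proj /= n_conf
m_abs /= n_conf
print(abs(m_proj), m_abs)   # projection ~ 0; |sum| estimator ~ O(1/sqrt(V)) > 0
```

The projection estimator averages to zero as it should, while the modulus estimator stays of order $`1/\sqrt{V}`$, illustrating the overestimate mentioned above.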
The susceptibilities are obtained from the spin correlation functions $`G_L(x)`$ $`\equiv `$ $`<S_0^{\parallel }S_x^{\parallel }>-<S_0^{\parallel }><S_x^{\parallel }>`$, $`G_T(x)`$ $`\equiv `$ $`\frac{1}{3}<𝐒_0^{\perp }\cdot 𝐒_x^{\perp }>`$ by $`\chi _L=\partial M/\partial H=\stackrel{~}{G}_L(0);\chi _T=\stackrel{~}{G}_T(0).`$ We use for $`T_c`$ the value $$1/T_c\equiv J_c=0.93590,$$ obtained in simulations of the zero-field model . In Fig. 1 we show our data for the magnetization for temperatures $`T\le T_c`$ plotted versus $`H^{1/2}`$. Inverse temperatures are given by $`J=J_c`$, 0.95, 0.98, 1.0, 1.2, starting from the lowest curve. We have simulated at increasingly larger values of $`L`$ at fixed values of $`J`$ and $`H`$ in order to eliminate finite-size effects. Except for the curve at $`J=1.2`$ only these largest-lattice values are shown. Symbols are as explained in Fig. 2. The finite-size effects for small $`H`$ do not disappear as one moves away from $`T_c`$, but rather increase. Solid lines connect the $`y`$-axis to the last point (a filled square) included in our fits of the data for small $`H`$. It can clearly be seen that the predicted behavior (linear in $`H^{1/2}`$) holds close to $`H=0`$ for all temperatures $`T<T_c`$ considered. We thus see evidence that the Goldstone-mode effects are observable even very close to $`T_c`$. The straight-line segments become shorter as $`T_c`$ is approached from below, and at $`T_c`$ the magnetization vanishes as a power of $`H`$, as expected. We have fitted the data from the largest lattice sizes at $`T_c`$ to the scaling behavior $$M(T_c,H)\sim H^{1/\delta },$$ obtaining the exponent $`\delta =4.87(1)`$, in agreement with . The straight-line fits shown in Fig. 1 are used to extrapolate our data to the zero-field limit, and the corresponding zero-field scaling law $$M(T\lesssim T_c,H=0)\sim (T_c-T)^\beta $$ yields $`\beta =0.38(1)`$, also in agreement with . 
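The zero-field extrapolation described above amounts to an ordinary straight-line fit in the variable $`H^{1/2}`$. A minimal sketch with synthetic data (the values of $`M(T,0)`$ and $`c`$ below are invented for illustration, not the paper's numbers):

```python
# Sketch of the zero-field extrapolation: fit M(H) = M(0) + c*sqrt(H) as a
# straight line in x = sqrt(H) and read off the intercept.
import math

M0_true, c_true = 0.35, 0.8                  # made-up "true" parameters
H_vals = [0.0001, 0.0004, 0.0009, 0.0016, 0.0025]
x = [math.sqrt(H) for H in H_vals]
y = [M0_true + c_true * xi for xi in x]      # synthetic, noise-free data

# ordinary least squares for y = a + b*x
n = len(x)
xbar, ybar = sum(x)/n, sum(y)/n
b = sum((xi-xbar)*(yi-ybar) for xi, yi in zip(x, y)) / sum((xi-xbar)**2 for xi in x)
a = ybar - b*xbar
print(a, b)   # -> recovers M(0) = 0.35 and c = 0.8
```

In the actual analysis one would of course weight the points by their statistical errors, but the structure of the extrapolation is the same.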
The finite-size effects at $`T_c`$ are very well described by the finite-size-scaling ansatz $$M(T_c,H;L)=L^{-\beta /\nu }Q_M(HL^{\beta \delta /\nu }),$$ as shown in Fig. 2. We see no corrections to finite-size scaling. In Fig. 3 we show a typical behavior for $`\chi _T`$ and $`\chi _L`$ at low temperature, for $`J=0.98`$. Symbols are as in Fig. 2. The lines $`M/H`$ (a factor 3 is included for clarity) and $`\partial M/\partial H`$ (from the fit in Fig. 1) are shown, for comparison respectively with $`\chi _T`$ and $`\chi _L`$. Both lines are obtained from our “infinite-volume” data. Notice the large finite-size effects in $`\chi _L`$. Finally, we verify numerically the claim in Ref. , that the Goldstone-mode singularity at low temperatures is consistent with the magnetic equation of state $$(1/\beta )M^{\delta -1}\chi _L=c_1+c_2y^{-1/2},$$ where $`y\equiv H/M^\delta `$. In Figs. 4 and 5 we show respectively the behavior of $`\chi _L`$ and the combination above for $`J=0.95`$. We see that the two perturbative predictions are well satisfied at $`T\lesssim T_c`$. Note that the line in Fig. 5 is not obtained using the perturbative coefficients given in , but from a fit of our data. This idea of finding nonperturbative coefficients from fits to perturbative forms is very useful in obtaining an expression for the scaling function for this model . This work was supported by the DFG under Grant No. Ka 1198/4-1 and by the TMR-Network ERBFMRX-CT-970122.
# The 3-dimensional $`q`$-deformed harmonic oscillator and magic numbers of alkali metal clusters Dennis Bonatsos<sup>#</sup>, N. Karoussos<sup>#</sup>, P. P. Raychev, R. P. Roussev, P. A. Terziev <sup>#</sup> Institute of Nuclear Physics, N.C.S.R. “Demokritos” GR-15310 Aghia Paraskevi, Attiki, Greece Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences 72 Tzarigrad Road, BG-1784 Sofia, Bulgaria Abstract Magic numbers predicted by a 3-dimensional $`q`$-deformed harmonic oscillator with u<sub>q</sub>(3) $`\supset `$ so<sub>q</sub>(3) symmetry are compared to experimental data for alkali metal clusters, as well as to theoretical predictions of jellium models, Woods–Saxon and wine bottle potentials, and to the classification scheme using the $`3n+l`$ pseudo quantum number. The 3-dimensional $`q`$-deformed harmonic oscillator correctly predicts all experimentally observed magic numbers up to 1500 (which is the expected limit of validity for theories based on the filling of electronic shells), thus indicating that u<sub>q</sub>(3), which is a nonlinear extension of the u(3) symmetry of the spherical (3-dimensional isotropic) harmonic oscillator, is a good candidate for being the symmetry of systems of alkali metal clusters. Metal clusters have recently been the subject of many investigations (see for relevant reviews). One of the first fascinating findings in their study was the appearance of magic numbers , analogous to but different from the magic numbers appearing in the shell structure of atomic nuclei . This analogy led to the early description of metal clusters in terms of the Nilsson–Clemenger model , which is a simplified version of the Nilsson model of atomic nuclei, in which no spin-orbit interaction is included. 
Further theoretical investigations in terms of the jellium model demonstrated that the mean field potential in the case of simple metal clusters bears great similarities to the Woods–Saxon potential of atomic nuclei, with a slight modification of the “wine bottle” type . The Woods–Saxon potential itself looks like a harmonic oscillator truncated at a certain energy value and flattened at the bottom. It should also be recalled that an early schematic explanation of the magic numbers of metallic clusters has been given in terms of a scheme intermediate between the level scheme of the 3-dimensional harmonic oscillator and the square well . Again in this case the intermediate potential resembles a harmonic oscillator flattened at the bottom. On the other hand, modified versions of harmonic oscillators have been recently investigated in the novel mathematical framework of quantum algebras , which are nonlinear generalizations of the usual Lie algebras. The spectra of $`q`$-deformed oscillators increase either less rapidly (for $`q`$ being a phase factor, i.e. $`q=e^{i\tau }`$ with $`\tau `$ being real) or more rapidly (for $`q`$ being real, i.e. $`q=e^\tau `$ with $`\tau `$ being real) in comparison to the equidistant spectrum of the usual harmonic oscillator , while the corresponding (WKB-equivalent) potentials resemble the harmonic oscillator potential, truncated at a certain energy (for $`q`$ being a phase factor) or not (for $`q`$ being real), the deformation inflicting an overall widening or narrowing of the potential, depending on the value of the deformation parameter $`q`$. Very recently, a $`q`$-deformed version of the 3-dimensional harmonic oscillator has been constructed , taking advantage of the u<sub>q</sub>(3) $``$ so<sub>q</sub>(3) symmetry . 
The spectrum of this 3-dimensional $`q`$-deformed harmonic oscillator has been found to reproduce very well the spectrum of the modified harmonic oscillator introduced by Nilsson , without the spin-orbit interaction term. Since the Nilsson model without the spin-orbit term is essentially the Nilsson–Clemenger model used for the description of metallic clusters , it is worth examining whether the 3-dimensional $`q`$-deformed harmonic oscillator can reproduce the magic numbers of simple metallic clusters. This is the subject of the present investigation. The space of the 3-dimensional $`q`$-deformed harmonic oscillator consists of the completely symmetric irreducible representations of the quantum algebra u<sub>q</sub>(3). In this space a deformed angular momentum algebra, so<sub>q</sub>(3), can be defined . The Hamiltonian of the 3-dimensional $`q`$-deformed harmonic oscillator is defined so that it satisfies the following requirements: a) It is an so<sub>q</sub>(3) scalar, i.e. the energy is simultaneously measurable with the $`q`$-deformed angular momentum related to the algebra so<sub>q</sub>(3) and its $`z`$-projection. b) It conserves the number of bosons, in terms of which the quantum algebras u<sub>q</sub>(3) and so<sub>q</sub>(3) are realized. c) In the limit $`q\to 1`$ it is in agreement with the Hamiltonian of the usual 3-dimensional harmonic oscillator. It has been proved that the Hamiltonian of the 3-dimensional $`q`$-deformed harmonic oscillator satisfying the above requirements takes the form $$H_q=\hbar \omega _0\left\{[N]q^{N+1}-\frac{q(q-q^{-1})}{[2]}C_q^{(2)}\right\},$$ (1) where $`N`$ is the number operator and $`C_q^{(2)}`$ is the second order Casimir operator of the algebra so<sub>q</sub>(3), while $$[x]=\frac{q^x-q^{-x}}{q-q^{-1}}$$ (2) is the definition of $`q`$-numbers and $`q`$-operators. 
The energy eigenvalues of the 3-dimensional $`q`$-deformed harmonic oscillator are then $$E_q(n,l)=\hbar \omega _0\left\{[n]q^{n+1}-\frac{q(q-q^{-1})}{[2]}[l][l+1]\right\},$$ (3) where $`n`$ is the number of vibrational quanta and $`l`$ is the eigenvalue of the angular momentum, obtaining the values $`l=n,n-2,\mathrm{\dots },0`$ or 1. In the limit of $`q\to 1`$ one obtains $`\mathrm{lim}_{q\to 1}E_q(n,l)=\hbar \omega _0n`$, which coincides with the classical result. For small values of the deformation parameter $`\tau `$ (where $`q=e^\tau `$) one can expand eq. (3) in powers of $`\tau `$ obtaining $$E_q(n,l)=\hbar \omega _0n-\hbar \omega _0\tau (l(l+1)-n(n+1))$$ $$-\hbar \omega _0\tau ^2\left(l(l+1)-\frac{1}{3}n(n+1)(2n+1)\right)+𝒪(\tau ^3).$$ (4) The last expression to leading order bears great similarity to the modified harmonic oscillator suggested by Nilsson (with the spin-orbit term omitted) $$V=\frac{1}{2}\hbar \omega \rho ^2-\hbar \omega \kappa ^{\prime }(𝐋^2-<𝐋^2>_N),\rho =r\sqrt{\frac{M\omega }{\hbar }},$$ (5) where $$<𝐋^2>_N=\frac{N(N+3)}{2}.$$ (6) The energy eigenvalues of Nilsson’s modified harmonic oscillator are $$E_{nl}=\hbar \omega n-\hbar \omega \mu ^{\prime }\left(l(l+1)-\frac{1}{2}n(n+3)\right).$$ (7) It has been proved that the spectrum of the 3-dimensional $`q`$-deformed harmonic oscillator closely reproduces the spectrum of the modified harmonic oscillator of Nilsson. In both cases the effect of the $`l(l+1)`$ term is to flatten the bottom of the harmonic oscillator potential, thus making it resemble the Woods–Saxon potential. The level scheme of the 3-dimensional $`q`$-deformed harmonic oscillator (for $`\hbar \omega _0=1`$ and $`\tau =0.038`$) is given in Table 1, up to a certain energy. Each level is characterized by the quantum numbers $`n`$ (number of vibrational quanta) and $`l`$ (angular momentum). 
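Equations (3) and (4) can be checked against each other numerically; the sketch below (with $`\mathrm{}\omega _0=1`$) compares the exact eigenvalue with its small-$`\tau `$ expansion:

```python
import numpy as np

def e_exact(n, l, tau):
    """Exact eigenvalue E_q(n, l) of eq. (3), with hbar*omega0 = 1 and q = exp(tau)."""
    q = np.exp(tau)
    qnum = lambda x: np.sinh(tau * x) / np.sinh(tau)   # q-number [x], eq. (2)
    return qnum(n) * q**(n + 1) - q * (q - 1/q) / qnum(2) * qnum(l) * qnum(l + 1)

def e_expanded(n, l, tau):
    """Small-tau expansion of eq. (4), accurate to O(tau^3)."""
    return (n - tau * (l*(l+1) - n*(n+1))
              - tau**2 * (l*(l+1) - n*(n+1)*(2*n+1) / 3.0))

# The two expressions should agree up to terms of order tau^3
diff = abs(e_exact(4, 2, 0.001) - e_expanded(4, 2, 0.001))
```

As a consistency check, $`E_q(1,1)=1`$ holds exactly for any $`\tau `$, since the $`[l][l+1]`$ term cancels the deformation of $`[1]q^2`$ there.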
Next to each level its energy, the number of particles it can accommodate (which is equal to $`2(2l+1)`$) and the total number of particles up to and including this level are given. If the energy difference between two successive levels is larger than 0.39, it is considered as a gap separating two successive shells, and the energy difference is reported between the two levels. In this way magic numbers can be easily read off the table: they are the numbers appearing above the gaps, written in boldface characters. The magic numbers provided by the 3-dimensional $`q`$-deformed harmonic oscillator in Table 1 are compared to available experimental data for Na clusters in Table 2 (columns 2–6). The following comments apply: i) Only magic numbers up to 1500 are reported, since it is known that filling of electronic shells is expected to occur only up to this limit . For large clusters beyond this point it is known that magic numbers can be explained by the completion of icosahedral or cuboctahedral shells of atoms . ii) Up to 600 particles there is consistency among the various experiments and between the experimental results on the one hand and our findings on the other. iii) Beyond 600 particles the predictions of the three experiments which report magic numbers in this region are quite different. However, the predictions of all three experiments are well accommodated by the present model. Magic numbers 694, 832, 1012 are supported by the findings of both Martin et al. and Bréchignac et al. , magic numbers 1206, 1410 are in agreement with the experimental findings of Martin et al. , magic numbers 912, 1284 are supported by the findings of Bréchignac et al., while magic numbers 676, 1100, 1314, 1502 are in agreement with the experimental findings of Pedersen et al. . 
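The shell-filling bookkeeping behind Table 1 can be reproduced in a few lines: compute $`E_q(n,l)`$, sort the levels, fill each with $`2(2l+1)`$ particles, and record a magic number wherever the gap to the next level exceeds 0.39. The sketch below (with $`\mathrm{}\omega _0=1`$) is our own reconstruction of that procedure, not the authors' code:

```python
import numpy as np

def energy(n, l, tau):
    """E_q(n, l) of eq. (3) with hbar*omega0 = 1 and q = exp(tau)."""
    q = np.exp(tau)
    qnum = lambda x: np.sinh(tau * x) / np.sinh(tau)  # q-number [x]
    return qnum(n) * q**(n + 1) - q * (q - 1/q) / qnum(2) * qnum(l) * qnum(l + 1)

def magic_numbers(tau=0.038, nmax=12, min_gap=0.39):
    """Cumulative particle numbers followed by an energy gap > min_gap.

    Levels (n, l) with l = n, n-2, ..., 0 or 1 are sorted by energy and
    filled with 2(2l+1) particles each; nmax bounds the levels considered,
    so only magic numbers well below the top of the table are reliable.
    """
    levels = sorted((energy(n, l, tau), l)
                    for n in range(nmax + 1) for l in range(n, -1, -2))
    magic, total = [], 0
    for (e, l), (e_next, _) in zip(levels, levels[1:]):
        total += 2 * (2 * l + 1)
        if e_next - e > min_gap:
            magic.append(total)
    return magic
```

With $`\tau =0.038`$ the first shell closures come out at 2, 8 and 20, as in Table 1.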
In Table 2 the predictions of three simple theoretical models (the non-deformed 3-dimensional harmonic oscillator (column 9), the square well potential (column 8), and the rounded square well potential (intermediate between the previous two, column 7) ) are also reported for comparison. It is clear that the predictions of the non-deformed 3-dimensional harmonic oscillator are in agreement with the experimental data only up to magic number 40, while the other two models correctly give a few more magic numbers (58, 92, 138), although they already fail by predicting magic numbers at 68, 70, 106, 112, 156, which are not observed. It should be noticed at this point that the first few magic numbers of alkali clusters (up to 92) can be correctly reproduced by the assumption of the formation of shells of atoms instead of shells of delocalized electrons , this assumption being applicable under conditions not favoring delocalization of the valence electrons of alkali atoms. Comparisons among the present results, the experimental data (by Martin et al. (column 2), Pedersen et al. (column 3) and Bréchignac et al. (column 4) ) and theoretical predictions more sophisticated than those reported in Table 2 can be made in Table 3, where magic numbers predicted by various jellium model calculations (columns 5–8, ), Woods–Saxon and wine bottle potentials (column 9, ), as well as by a classification scheme using the $`3n+l`$ pseudo quantum number (column 10, ) are reported. The following observations can be made: i) All magic numbers predicted by the 3-dimensional $`q`$-deformed harmonic oscillator are supported by at least one experiment, with no exception. ii) Some of the jellium models, as well as the $`3n+l`$ classification scheme, predict magic numbers at 186, 540/542, which are not supported by experiment. Some jellium models also predict a magic number at 748 or 758, again without support from experiment. The Woods–Saxon and wine bottle potentials of Ref. 
predict a magic number at 68, for which no experimental support exists. The present scheme avoids problems at these numbers. It should be noticed, however, that in the cases of 186 and 542 the energy gap following them in the present scheme is 0.329 and 0.325 respectively (see Table 1), i.e. quite close to the threshold of 0.39 which we have considered as the minimum energy gap separating different shells. One could therefore qualitatively remark that 186 and 542 are “built in” the present scheme as “secondary” (not very pronounced) magic numbers. The following general remarks can also be made: i) It is quite remarkable that the 3-dimensional $`q`$-deformed harmonic oscillator reproduces the magic numbers at least as accurately as other, more sophisticated, models by using only one free parameter ($`q=e^\tau `$). Once the parameter is fixed, the whole spectrum is fixed and no further manipulations can be made. This can be considered as evidence that the 3-dimensional $`q`$-deformed harmonic oscillator possesses a symmetry (the u<sub>q</sub>(3) $`\supset `$ so<sub>q</sub>(3) symmetry) appropriate for the description of the physical systems under study. ii) It has been remarked that if $`n`$ is the number of nodes in the solution of the radial Schrödinger equation and $`l`$ is the angular momentum quantum number, then the degeneracy of energy levels of the hydrogen atom characterized by the same $`n+l`$ is due to the so(4) symmetry of this system, while the degeneracy of energy levels of the spherical harmonic oscillator (i.e. of the 3-dimensional isotropic harmonic oscillator) characterized by the same $`2n+l`$ is due to the su(3) symmetry of this system. $`3n+l`$ has been used to approximate the magic numbers of alkali metal clusters with some success, but no relevant Lie symmetry could be determined (see also ). 
In view of the present findings the lack of a Lie symmetry related to $`3n+l`$ is quite clear: the symmetry of the system appears to be a quantum algebraic symmetry (u<sub>q</sub>(3)), which is a nonlinear extension of the Lie symmetry u(3). iii) An interesting problem is to determine a WKB-equivalent potential giving (within this approximation) the same spectrum as the 3-dimensional $`q`$-deformed harmonic oscillator, using methods similar to those of Ref. . The similarity between the results of the present model and those provided by the Woods–Saxon potential (column 9 in Table 3) suggests that the answer should be a harmonic oscillator potential flattened at the bottom, similar to the Woods–Saxon potential. Whether such a WKB-equivalent potential shows any similarity to a wine bottle shape, as several potentials used for the description of metal clusters do , remains to be seen. In summary, we have shown that the 3-dimensional $`q`$-deformed harmonic oscillator with u<sub>q</sub>(3) $`\supset `$ so<sub>q</sub>(3) symmetry correctly predicts all experimentally observed magic numbers of alkali metal clusters up to 1500, which is the expected limit of validity for theories based on the filling of electronic shells. This indicates that u<sub>q</sub>(3), which is a nonlinear deformation of the u(3) symmetry of the spherical (3-dimensional isotropic) harmonic oscillator, is a good candidate for being the symmetry of systems of alkali metal clusters. One of the authors (PPR) acknowledges support from the Bulgarian Ministry of Science and Education under contracts $`\mathrm{\Phi }`$-415 and $`\mathrm{\Phi }`$-547. Table 1: Energy spectrum, $`E_q(n,l)`$, of the 3-dimensional $`q`$-deformed harmonic oscillator (eq. (3)), for $`\hbar \omega _0=1`$ and $`q=e^\tau `$ with $`\tau =0.038`$. Each level is characterized by $`n`$ (the number of vibrational quanta) and $`l`$ (the angular momentum). 
$`2(2l+1)`$ represents the number of particles each level can accommodate, while under “total” the total number of particles up to and including this level is given. Magic numbers, reported in boldface, correspond to energy gaps larger than 0.39, reported between the relevant pairs of energy levels.
## 1 Introduction In 1969 S. Kauffman proposed a particular class of cellular automata (CA) for the investigation of biological processes . It was found that a subclass of CA called Boolean networks reveals a number of properties successfully interpreted in terms of the gene networks of living organisms. After this, the tool of Boolean networks became fairly useful for investigations of a large range of phenomena: in spin glass theory , gene and neural networks , the problem of controlling chaos , etc. Thus the study of the properties of such systems is of undoubted interest. Many characteristic properties and regularities of Boolean networks have been found and investigated, both numerically and analytically . For random Boolean networks it was revealed that there are three different phases of system dynamics: chaotic, ordered and marginal (which is supposed to be critical) . It was pointed out that the behavior of such systems depends on many factors. Let us focus on one characteristic of Boolean networks: their structure. Investigations of the relation between structure and dynamics in real, for instance biological, systems are extremely difficult. At the least we want to understand the relation qualitatively, for example by representing gene regulation as a network of units acting on each other, each with only two possible states. In this case the tool of Boolean networks is very convenient. So we consider several networks of different forms, with distinctions in geometric shape as well as in the organization of interactions. Considering the evolution of such systems under different rules of dynamics, chosen according to the relations obtained for random Boolean networks that place the system dynamics into the ordered, chaotic or marginal class, we try to understand how structure influences system behavior in the different phases. 
The investigation of system behavior has been carried out by analyzing the convergence of trajectories in the phase space of the system’s states, in other words by measuring the Hamming distance ($`Hd`$) . We do not try to perform any rule classification for cellular automata with a definite structure, as has been done in . Let us stress that we investigate not only different lattice structures but also different geometric shapes of the Boolean networks. Section 2 of this paper defines Boolean networks and describes a possible approach to their investigation. In section 3 the studied structures and rules of dynamics are presented. Section 4 describes the obtained results. Finally, section 5 gives some conclusions. ## 2 Definitions and approaches Boolean networks can be considered as an example of cellular automata . The theory of cellular automata is quite successfully applied to the study of complex systems. A Boolean network can represent a large set of very different structures, from the random Boolean network models introduced by Kauffman , with their totally random structure, to cellular automata with a regular lattice and local interactions. A Boolean network is a system of $`N`$ interconnected elements with only two possible states, 0 and 1 (on/off). Any element in the system can have connections with $`K`$ elements, where $`K`$ may vary between units. By a connection it is meant that $`K`$ other elements influence the central element. According to a logical, or Boolean, rule every element is moved to its next state. A state of the system is defined as the pattern of states (on/off or 0/1) of all the elements forming it. All elements are updated synchronously, moving the system into its next state, and each state has only a single successor. The total space of the system’s states is defined as all possible $`2^N`$ combinations of the elements’ values in the system. 
Since the number of all possible states is limited and the transition rules are fixed and do not depend on time, the system reaches a limit cycle or a fixed point, called an attractor. Attractors may be envisaged as the ”target area” of an organism, i.e. the cell type at the end of development. Limit cycles can be considered in certain aspects as biological rhythms . As was pointed out in studies of models based on Boolean networks , the behavior of such systems can vary over a wide range from order to chaos and in a number of cases has a quite nontrivial character. The choice of topology, or of the structure of interactions, with the other parameters fixed, can thus be considered as one means of controlling chaos . To characterize the behavior of the system completely, in other words to say to which trajectory each of the $`2^N`$ points of phase space belongs, one needs the phase portrait of the space of states of the system, namely the number of attractors, the basins of the attractors, the sizes of the limit cycles and, if a more detailed analysis is desired, the number (or percentage) of Garden-of-Eden states, the number of trees, the transient lengths, etc. If the system size is big enough this is extremely difficult to do; therefore, for general statements about system behavior, it is reasonable to restrict oneself to rougher characteristics such as, for example, the Hamming distance. We take it as the main characteristic of system behavior (in terms of chaos/order). The Hamming distance is a reasonable and often useful measure of distance in the configuration space of states of a binary system. It is defined as the number of bits which differ between the binary sequences $`S_1`$ and $`S_2`$. 
Usually the normalized Hamming distance ($`Hd`$) is considered, $$Hd=\frac{1}{N}\underset{i=1}{\overset{N}{\sum }}(\sigma _i(t)-\sigma _i^{}(t))^2$$ where $`N`$ is the number of elements in the system, $`\sigma _i(t)`$ is the state of the i-th element at the moment t, and $`S_1=\{\sigma _i\}_1^N,S_2=\{\sigma _i^{}\}_1^N`$. Any configuration corresponds to a point in the space of all possible configurations. According to the rules of dynamics, in other words the system’s evolution, each initial configuration traces out a trajectory in time. As has been revealed, if the process is chaotic the trajectories of nearby configurations diverge in time (in a number of cases exponentially or, for instance on lattices with discrete variables and nearest-neighbor interactions, with the Hamming distance increasing as a power law); if the system’s dynamics is ordered then close trajectories converge; in the critical case they neither converge nor diverge, and the distance between them remains almost the same in time. Let us consider pairs of configurations which differ only in a single site, i.e. the initial $`Hd`$ is equal to $`1/N`$. If we considered just two trajectories and on this basis drew a conclusion about the system behavior as a whole, it would not be quite correct. Thus, to get a more adequate statistical picture, we consider the statistically averaged Hamming distance, defined as follows. First we take a state with a unit in a single random position; the second state is chosen so that it differs from the former only in the first position, and we calculate the convergence of these trajectories. Then we calculate the convergence of the former trajectory and one which differs from it only in the 3rd position, and so on for all odd positions. Thus we investigate N/2 pairs of trajectories. Next, a state with a couple of units in two random positions is envisaged and the previous step is repeated, and so on for states with $`3,4,\mathrm{\dots },N-1`$ units. 
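For 0/1 states the summand $`(\sigma _i-\sigma _i^{})^2`$ is just the indicator that the two bits differ, so the normalized $`Hd`$ is the fraction of differing sites. A minimal sketch:

```python
import numpy as np

def hamming(s1, s2):
    """Normalized Hamming distance between two equal-length 0/1 configurations."""
    s1, s2 = np.asarray(s1), np.asarray(s2)
    return float(np.mean((s1 - s2) ** 2))  # fraction of differing bits
```

For example, `hamming([0, 1, 1, 0], [0, 1, 0, 0])` gives 0.25, the initial $`Hd=1/N`$ for a single flipped site with $`N=4`$.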
In this way we have made a uniform selection from the total space of states and investigated $`\frac{N(N-2)}{2}`$ pairs of trajectories. In total we have the average $`Hd(t)`$ defined as $$Hd(t)=\frac{2}{N^2(N-2)}\underset{\sigma ,\sigma ^{}\in \mathrm{\Omega }}{\sum }\underset{i=1}{\overset{N}{\sum }}(\sigma _i(t)-\sigma _i^{}(t))^2$$ where $`\mathrm{\Omega }`$ is the set of trajectories chosen by the means suggested above. ## 3 Structures and rules of dynamics We have considered seven two-dimensional structures with connectivity $`K=3`$. We tried to cover structures as different as possible, both in the sense of spatial shape (a ribbon closed into a circle, a torus, a sphere, a cone) and in the sense of the organization of the links (regular lattices, loops, cascades or hierarchical structures, feedback and autoregulation) . Striving to answer the question of how the type of lattice, the spatial organization, boundaries, autoregulation and hierarchy influence the behavior of a system, and how the structure and the rules of dynamics interact, we used two sorts of rules, namely homogeneous (the same for all elements) and heterogeneous (one for odd and another for even elements). The choice of the rules in the model is motivated by the fact, revealed earlier for random Boolean networks, that the average connectivity of a network and the rules governing its behavior are related by the formula $`K_c=\frac{1}{2p(1-p)}`$. As was obtained, if $`K<\frac{1}{2p(1-p)}`$ (where $`K`$ is the average connectivity and $`p`$ the bias of the rule of dynamics) the dynamics should be ordered, if $`K>\frac{1}{2p(1-p)}`$ we should get chaos, and if $`K=\frac{1}{2p(1-p)}`$ the behavior is critical . We use the same rules for all investigated structures. * The first subject of our investigation is a ribbon closed into a circle. The elements are arranged on the edges of the ribbon. Each of them has links with its left and right neighbors and with the element opposite to it on the other edge of the ribbon. 
* The second system has a structure represented by a directed graph, namely a binary tree where the neighbors of an element are its two successor nodes plus the element itself (autoregulation). The last layer is closed into a circle. So we have a hierarchical model on a cone with autoregulation. * In the third case we investigated the same but undirected graph. The neighbors of an element are its two successor nodes plus its ancestor; in other words, here undirectedness excludes autoregulation. Thus we have a cone without autoregulation. * The fourth investigated structure is represented by the regular honeycomb lattice closed into a torus. * The fifth case contains the same lattice as in the previous case, but here the boundary elements are self-regulated. * In the sixth case we have the same lattice forming a sphere. * The seventh case is represented by the regular square lattice closed into a torus, where the neighbors of a unit are those to its left and right and the one above it. So we have a closed cascade of hierarchical regulation. Our rules are: a. the homogeneous rule – if all three of an element’s neighbors are switched on then the element will be switched on, and it will be switched off in any other case. In the number representation it is 01111111 ($`p=0.125`$). (Here and below 0 in the rule means on and 1 means off. Each digit corresponds to one configuration of the neighbors’ states, ordered from all 0 to all 1.) b. here we have the same rule as in point a for even elements and the following for odd ones – if all three of an element’s neighbors are switched off then the element will be switched on, and it will be switched off in any other case. In the number representation it is the same as above plus 10000000 ($`p=0.125`$). c. an element will be switched on only in the case that both its 1st and 2nd neighbors, or its 3rd, have such a state. In the number representation it is 01000010 ($`p=0.25`$). d. 
here we have the same rule as in point c for even elements and, for odd ones – if the 1st, or both the 2nd and 3rd, of an element’s neighbors are switched on then the element will be switched on, and it will be switched off in any other case. In the number representation it is the same as above plus 00011000 ($`p=0.25`$). e. an element will be switched on only in the case that some pair of its neighbors has such a state. In the number representation it is 00010110 ($`p=0.375`$). g. here we have the same rule as in point e for even elements and, for odd ones – if exactly one of an element’s neighbors is switched on then the element will be switched on, and it will be switched off in any other case. In the number representation it is the same as above plus 01101000 ($`p=0.375`$). ## 4 Results We pay attention to the following characteristics: the percentage of converging pairs of trajectories in phase space, pc; the maximum pattern difference between them, mp; and the behavior of $`Hd(t)`$ (characterized by the period of the function). The data on the convergence percentage are presented in Table 1; the maximum pattern difference and the character of the behavior of $`Hd(t)`$ are given in Tables 2 and 3, respectively. Let us make some explanatory notes on Table 3. After a finite number of steps the system comes to a stationary regime, which can be either a fixed point or a limit cycle. So, when we follow the evolution of a pair of trajectories, one can observe that after a number of steps, if the trajectories have not converged, the Hamming distance is constant or some periodic function of time; moreover the period can be very large, which points out that at least one trajectory belongs to a large limit cycle, which is usually supposed to be a feature of chaos. This is true for each pair of trajectories; therefore the bigger the period of $`Hd(t)`$ is, the more chaotic the system behavior is supposed to be. The number of iteration steps before the trajectories converge can be considered as the maximum transient length. 
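A synchronous update under rules of this kind can be sketched as a table lookup: with the paper’s convention 0 = on, 1 = off, the three neighbor states form a 3-bit index (from all 0 to all 1) into the 8-digit rule string. The ring wiring below is purely illustrative and is not one of the seven structures studied here:

```python
def step(states, neighbors, rule):
    """One synchronous update of a K = 3 Boolean network.

    states[i] is 0 (on) or 1 (off); neighbors[i] lists the three inputs of
    element i; rule is an 8-character string indexed by the inputs' 3-bit
    pattern, e.g. rule a = "01111111" (on only if all three inputs are on).
    """
    return [int(rule[4 * states[a] + 2 * states[b] + states[c]])
            for a, b, c in neighbors]

# Illustrative wiring: a ring where each element sees (left, itself, right)
N = 6
nbrs = [((i - 1) % N, i, (i + 1) % N) for i in range(N)]
after = step([0] * N, nbrs, "01111111")  # the all-on state is fixed under rule a
```

Heterogeneous cases such as b or g would simply select one of two rule strings depending on the parity of the element index.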
As is easy to see, on the whole the results vary between the different structures. But in spite of this there is a general tendency in the systems’ behavior, with several fluctuations (1b, 3c, 1d, 7d, 6g). For all a and e cases the percentage of convergence is always very high ($`100\pm 0.0004`$ for the a cases and $`99.93\pm 0.37`$ for the e ones). Except for the second structure, the maximum pattern difference is always more than 19% for the a cases and always less than 1.5% for the e ones. These cases correspond to ordered behavior. Qualitatively, the g cases always demonstrate chaotic behavior (too large a period of $`Hd(t)`$, quite a low convergence percentage). Except for the second structure, the maximum pattern difference is always more than 48. At the same time the character of the behavior of $`Hd(t)`$ is very variable. Case b has a very low maximum pattern difference (less than 1%) for all structures, and the results are close to each other. The majority of structures in this case demonstrate rather chaotic behavior. The system behavior is analogous for all c cases except the third structure; one can put it into the category of chaos. In the d cases, apart from the second and 7th structures, we have ordered behavior. Overall, in our case the obtained results do not confirm those for random Boolean networks. These results allow us to conclude that the heterogeneity/homogeneity of the rules influences the system’s behavior to a greater extent than the $`K,p`$ relation. In the boundary cases (a, b and e, g) heterogeneity results in chaotic dynamics and homogeneity in ordered dynamics, and vice versa for the middle cases (c, d). Let us stress that the ordered dynamics in the a and e cases shows no special situations. Let us dwell on some interesting results and distinctions. The case 1g has the minimum percentage of convergence. Evidently in this case there is a big number of attractors, with a large pattern difference between them. 
We stress that in spite of the high convergence percentage, the 1d case reveals obvious chaotic features: if trajectories do not converge, the difference between them becomes quite big. The character of $`Hd(t)`$ is similar to the 1g case (Fig. 1), (Fig. 2). As for the second structure, here we observed some critical features in the behavior of the system, especially in the b and d cases. For the g case we obtain that $`Hd(t)`$ has a period equal to 8, and the percentage of convergence is quite high for chaotic behavior (Fig. 3). Overall this structure has no large maximum pattern differences; moreover it has the smallest value for both the a and g cases and the smallest average. In the case of the third structure the result for rule c differs strongly from all the other rules of the same category. This case is characterized by a convergence percentage that is too large for rule c, while at the same time $`Hd(t)`$ has quite a large period (equal to 12) (Fig. 4). Let us note that the 3c case has the maximum pattern divergence, equal to the system’s size. (It is the maximum among all other cases.) The minimum convergence percentage is observed not in the g but in the b case. The g and a cases have the maximum pattern difference and the maximum average of this value (for their categories). In Fig. 5 one can see the $`Hd(t)`$ distribution for the g case on this structure. Comparing the 4th and 5th structures, one can conclude that the influence of boundaries is negligibly small, as is that of autoregulation. The system’s behavior is determined rather by the construction of the lattice. Here the behavior is either chaotic or ordered (without any critical features), with very strong differences between them. Except for the b and g cases, the behavior on these structures is close to that on the 1st one. In Fig. 6 one can see the $`Hd(t)`$ distribution for the g cases on these structures. The behavior on structure 7 is quite close to that on the 4th and 5th ones, except for the d case. This confirms the influence of the regular structure and the shape. 
Case 7d differs strongly from the others in its category by a low convergence percentage. In (Fig. 7) one can see the $`Hd(t)`$ distribution for the g case on this structure. The dynamics governed by the rules b and g depends to the greatest extent on the structure of interactions, as seen especially on the second, third and 6th structures; this allows us to conclude that the spatial shape undoubtedly influences the dynamics of the system, but that this influence is essential only for particular rules. Case 6g has a convergence percentage that is too large for this rule. In (Fig. 8) one can see the $`Hd(t)`$ distribution for the g case of the 6th structure. The minimum convergence percentage is observed in the c case, as for the second structure. We also studied systems of different sizes. We found that increasing the system’s size does not change the dynamics substantially; it only slightly shifts the percentage of convergence in the direction of the existing tendency, i.e. for the ordered cases it increases and for the chaotic ones it decreases. When the system’s size is increased, small ($`<20`$) maximum pattern differences remain the same, while large ones grow. The a and g rules have a greater divergence, which results in a more branched transient structure. We also considered other rules with the same bias for all structures. It was revealed that the a and e cases with the same bias give results that are independent of the structure but strongly dependent on the rules; the behavior can change from chaos to order. The maximum pattern difference is more stable than the convergence percentage, and the cases with a low convergence percentage are more stable against rule changes. As for rule g, it gives the most robust result at the qualitative level. The same rule changes result in different changes of the system’s dynamics for different structures. ## 5 Conclusion This paper presents an investigation of the influence of a Boolean network’s structure on the behavior of the system. 
We have studied several different structures, which differ in geometric shape, lattice organization and means of influence, and we have used different rules of dynamics as well. The investigation was carried out by measuring the averaged Hamming distance. In this work we used the $`K,p`$ relation obtained for random Boolean networks and compared our results with those of random Boolean network models, in which the system’s behavior divides into ordered, chaotic or marginal phases according to the $`K,p`$ relation. The influence of the spatial shape, as well as of the organization of interactions on a lattice, on the system’s behavior is well confirmed by the results obtained. As we saw, both the second and the third structures form a cone, but the system’s dynamics in these two cases shows strong distinctions: the second structure has the minimum average value of the maximum pattern difference, while the third structure is characterized by the maximum value of the corresponding parameter. The influence of the special shape of the network was observed to the greatest extent for the b and g rules. For the second, third and sixth structures the percentage of convergence is essentially greater than for the other structures, which allows us to conclude that the behavior of Boolean networks on a sphere is closer to that on a cone than to that on a torus or a closed ribbon. The closed lattices themselves do not influence the system’s behavior, but in combination with hierarchy they decrease the convergence percentage, as one can see from the example of structures 4, 5 and 7. The influence of boundaries and edges is negligibly small. Hierarchy coupled with irregularity gives critical behavior (2b, 2d) or shifts the character of the system’s dynamics in this direction (2g). The investigation of the second and the 7th structures showed that there is no essential influence of autoregulation. 
The $`K,p`$ relation obtained for random Boolean networks has not been confirmed by the results of our investigation of these concrete structures. In the larger number of cases, small changes of the interaction structure do not produce large changes of the dynamics, but in some cases they do. Acknowledgment. This work was partially supported by the State Committee of the Russian Federation for Higher Education, grant No. 97-14.3-58.
# Lattice Gross-Neveu model with domain-wall fermions (presented by K.-i. Nagai) ## 1 Introduction Defining chiral fermions on the lattice has been one of the long-standing problems in lattice field theories. Several years ago, the domain-wall fermion was proposed as a new formulation of lattice chiral fermions. This formulation considers the Wilson fermion in $`D+1`$ dimensions with a free boundary condition in an extra dimension of size $`N_s`$, or equivalently, $`N_s`$-flavored Wilson fermions with flavor mixing. In the limit $`N_s\to \mathrm{\infty }`$, the spectrum of the free domain-wall fermion (DWF) contains massless modes at the edges of the extra dimension. The massless modes are stable under perturbations from weak gauge fields. While there have been numerical simulations of lattice QCD with DWF (DWQCD), some non-perturbative issues, in particular the existence of the parity-broken phase (Aoki phase) and the necessity of fine tuning of couplings to restore chiral symmetry at finite $`N_s`$, have not been clarified. In order to answer these questions, we investigate the two-dimensional lattice Gross–Neveu model using the DWF formalism (DWGN) as a test of DWQCD. ## 2 Action and Effective potential We propose the following action for DWGN: $`S=S_{free}+a^2{\displaystyle \sum _n}\overline{q}(n)\left\{\sigma (n)+i\gamma _5\mathrm{\Pi }(n)\right\}q(n)`$ $`+a^2{\displaystyle \sum _n}\left[{\displaystyle \frac{N}{2g_\sigma ^2}}\left\{\sigma (n)-m_f\right\}^2+{\displaystyle \frac{N}{2g_\pi ^2}}\mathrm{\Pi }(n)^2\right].`$ (1) The auxiliary fields are related to fermion condensates as $`\sigma (n)=m_f-\frac{g_\sigma ^2}{N}\overline{q}(n)q(n)`$ and $`\mathrm{\Pi }(n)=-\frac{g_\pi ^2}{N}\overline{q}(n)i\gamma _5q(n)`$. The important point of this action is that the interaction terms are constructed from the edge state: $`q(n)=P_R\psi (n,s=1)+P_L\psi (n,s=N_s)`$. We attempt to solve DWGN analytically in the large $`N`$ limit. 
The effective potential can be calculated by the technique of the propagator matrix, which yields $`V_{eff}={\displaystyle \frac{1}{2g_\sigma ^2}}\left(\sigma -m_f\right)^2+{\displaystyle \frac{1}{2g_\pi ^2}}\mathrm{\Pi }^2-I,`$ (2) $`I={\displaystyle \int _p}\mathrm{ln}\left[Fa^2(\sigma ^2+\mathrm{\Pi }^2)+Ga\sigma +H\right],`$ (3) where $`\int _p=\int _{-\pi /a}^{\pi /a}\frac{d^2p}{(2\pi )^2}`$. The explicit forms of the functions $`F,G`$ and $`H`$ are given in Ref.. Let us note that the term $`Ga\sigma `$ explicitly breaks chiral symmetry. ## 3 Existence of Aoki phase The phase diagram of the Wilson fermion action has a region of spontaneously broken parity-flavor symmetry (the Aoki phase), which plays an important role in controlling the restoration of chiral symmetry in the continuum limit. Since DWF is an extension of the Wilson fermion formalism, we wish to examine whether the Aoki phase exists for DWGN. In this section, we set the coupling constants as $`g_\sigma ^2=g_\pi ^2=g^2`$. Let us first consider the case $`N_s=\mathrm{\infty }`$. Since $`F`$ and $`H`$ dominate over the term $`G`$ in this limit, the effective potential becomes $$I(\sigma ,\mathrm{\Pi })=\int _p\mathrm{ln}\left[Fa^2(\sigma ^2+\mathrm{\Pi }^2)+H\right].$$ (4) The $`O(a)`$ term, which breaks chiral symmetry, is absent, and hence the model has exact chiral symmetry. Thus the pion becomes massless without fine tuning, even at finite lattice spacing and arbitrarily strong coupling. Next we consider the case of finite $`N_s`$. The Aoki phase is expected to exist in this case, since the $`N_s=1`$ DWGN is equivalent to the GN model with Wilson fermions. The solution of the gap equation marking the phase boundary is illustrated for several values of $`M`$ at a fixed size $`N_s=2`$ in Fig.1. 
As is summarized in a schematic diagram in Fig.2, we find that (i) the Aoki phase exists inside the boundary, and this boundary forms cusps toward $`g^2\to 0`$; (ii) the Aoki phase always appears in the $`m_f>0`$ region for even $`N_s`$ (in the conventional choice of sign in the domain-wall literature, this corresponds to $`m_f<0`$); (iii) the pion becomes massless on the boundary. The last point means that the pion mass does not vanish at $`m_f=0`$ at finite lattice spacing. Hence a fine tuning is needed to obtain a massless pion. In Fig.3 we show the $`N_s`$ dependence of the phase boundary for $`N_s=2,4`$ and $`6`$ at $`M=0.9`$. We find that (i) the Aoki phase shrinks exponentially with $`N_s`$; (ii) the first “finger” on the left approaches $`m_f=0`$, while the other “fingers” move toward $`m_f=\mathrm{\infty }`$; (iii) the critical value $`g_c`$ (see Fig.2), below which the cusp structure (“fingers”) appears, increases exponentially with $`N_s`$, in contrast to the case of the Wilson fermion. These features mean that the area of the normal phase widens with increasing $`N_s`$. As seen from the above, DWF at finite $`N_s`$ represents an improved Wilson fermion. ## 4 $`O(a)`$ scaling violation ($`a\to 0`$) We study the mechanism of chiral symmetry restoration in the continuum limit. In particular, we wish to understand the scaling violation at finite and infinite $`N_s`$. The effective potential in the continuum limit is obtained after some calculation: $$V_{eff}=m\sigma _R+\frac{1}{4\pi }\left(\sigma _R^2+\mathrm{\Pi }_R^2\right)\mathrm{ln}\frac{\sigma _R^2+\mathrm{\Pi }_R^2}{e\mathrm{\Lambda }^2}.$$ (5) Here $`\sigma _R=f_M\{\sigma -\frac{\left(1-M\right)^{N_s}}{a}\}`$ and $`\mathrm{\Pi }_R=f_M\mathrm{\Pi }`$ with $`f_M=\frac{M\left(2-M\right)}{1-\left(1-M\right)^{2N_s}}`$, which is the normalization factor of the edge state “$`q(n)`$” at finite $`N_s`$. 
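In the chiral limit ($`m=0`$, $`\mathrm{\Pi }_R=0`$), the stationarity condition $`dV_{eff}/d\sigma _R=m+(\sigma _R/2\pi )\mathrm{ln}(\sigma _R^2/\mathrm{\Lambda }^2)=0`$ obtained from Eq. (5) places the condensate at $`\sigma _R=\mathrm{\Lambda }`$. A minimal numerical sketch of this minimization; the choice $`\mathrm{\Lambda }=1`$ and the ternary-search minimizer are illustrative assumptions, not part of the paper:

```python
import math

LAMBDA = 1.0  # dynamically generated scale, in units of the cutoff (assumed)

def v_eff(sigma, m=0.0, lam=LAMBDA):
    """Continuum effective potential of Eq. (5) evaluated at Pi_R = 0."""
    s2 = sigma * sigma
    return m * sigma + s2 / (4 * math.pi) * math.log(s2 / (math.e * lam * lam))

def minimize(f, lo, hi, iters=200):
    """Ternary search for the minimum of a unimodal function on [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

# In the chiral limit the minimum sits at sigma_R = Lambda.
sigma_min = minimize(v_eff, 1e-3, 3.0)
```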
In order to obtain (5), which agrees with the continuum result, we need to impose the following scaling relations: $`{\displaystyle \frac{1}{2g_\sigma ^2}}\widehat{C_0}+C_2={\displaystyle \frac{f_M^2}{4\pi }}\mathrm{ln}{\displaystyle \frac{1}{a^2\mathrm{\Lambda }^2}},`$ (6) $`{\displaystyle \frac{1}{2g_\pi ^2}}\widehat{C_0}={\displaystyle \frac{f_M^2}{4\pi }}\mathrm{ln}{\displaystyle \frac{1}{a^2\mathrm{\Lambda }^2}},`$ (7) $`{\displaystyle \frac{1}{f_M}}\left({\displaystyle \frac{m_f}{g_\sigma ^2}}-{\displaystyle \frac{(1-M)^{N_s}}{g_\sigma ^2a}}+{\displaystyle \frac{C_1}{a}}\right)=m.`$ (8) These relations are the same as those found for the Wilson fermion case . Therefore a fine tuning is needed to restore chiral symmetry. Figure 4 shows $`\sigma /\mathrm{\Lambda }`$ as a function of $`a\mathrm{\Lambda }`$ for $`M=0.9`$, using the Wilson-like scaling relations (6)–(8). We find that the $`O(a)`$ scaling violation is large at $`N_s=2,3`$. However, the magnitude of the $`O(a)`$ scaling violation diminishes exponentially as $`N_s`$ increases; in fact, the scaling curve almost exactly follows the $`O(a^2)`$ behavior for $`N_s\gtrsim 8`$. ## 5 Conclusions We have investigated the two-dimensional DWGN model in detail as a test of DWQCD. When the size of the extra dimension is $`N_s=\mathrm{\infty }`$, the model has chiral symmetry at finite lattice spacing, and the Aoki phase does not exist. On the other hand, at finite $`N_s`$, the Aoki phase does exist and a fine tuning is needed to restore chiral symmetry in the continuum limit. However, the $`O(a)`$ scaling violation that gives rise to this behavior vanishes exponentially fast as $`N_s`$ is increased, so that it is negligible in practice for $`N_s=O(10)`$. While the GN model does not have gauge fields and quantum fluctuations are absent in the large $`N`$ limit, we expect that the results obtained in this work provide instructive and systematic information for DWQCD simulations. The authors are JSPS Research Fellows.
# The Extension of Rod-Coil Multiblock Copolymers and the Effect of the Helix-Coil Transition \[ ## Abstract The extension elasticity of rod-coil multiblock copolymers is analyzed for two experimentally accessible situations. In the quenched case, when the architecture is fixed by the synthesis, the force law is distinguished by a sharp change in slope. In the annealed case, where interconversion between rod and coil states is possible, the resulting force law is sigmoid with a pronounced plateau. This last case is realized, for example, when homopolypeptides capable of undergoing a helix-coil transition are extended from a coil state. Both scenarios are relevant to the analysis and design of experiments involving single-molecule mechanical measurements of biopolymers and synthetic macromolecules. PACS numbers: 61.25.Hq, 61.41.+e, 87.15.He \] With the advent of single-molecule mechanical measurements it became possible to study the force laws characterizing the extension of individual macromolecules . In turn, these provide a probe of internal degrees of freedom associated with intrachain self-assembly or with monomers that can assume different conformational states. A molecular interpretation of the force laws thus obtained requires appropriate theoretical models allowing for the distinctive “internal states” of each system. The formulation of such models is a challenging task in view of the complexity and diversity of the systems investigated. These include DNA , the muscle protein titin and the extracellular matrix protein tenascin , the polysaccharides dextran and xanthan, as well as the synthetic polymer poly(ethylene-glycol) . In this letter we consider two unexplored yet accessible systems where a detailed confrontation between theory and experiment is possible. In particular, we present a theory for the extension force law of multiblock copolymers consisting of alternating rod and coil blocks. 
Two scenarios are considered, both focusing on the equilibrium force law of long chains undergoing quasistatic extension. In the first, the architecture is “quenched”; that is, the block structure is set by the chemistry and no interconversion is possible. Such is the case, for example, for segmented polyurethanes . In the second scenario the monomers can interconvert between coil and rod states. The “annealed” architecture is realized, for instance, in homopolypeptides capable of undergoing a cooperative helix-coil transition . In this system, the highly rigid helical domains play the part of the rod blocks. As we shall see, the two scenarios lead to distinctive force laws (Figure). The force law of the quenched case is characterized by an abrupt change of slope. This arises because the rod blocks are more susceptible to orientation by the applied tension. In the annealed scenario, the extension of a chain that is initially in a coil state leads to a sigmoid force profile exhibiting a pronounced plateau. The plateau is traceable to a one-dimensional coexistence of helical and coil domains, in which the chain extension favors the helical state because of its low configurational entropy. These results suggest that force measurements can be used to probe the block structure of polymers with quenched architecture. In the annealed case the force law provides a direct measure of the thermodynamic parameters involved. From the viewpoint of the growing field of single-molecule mechanical measurements, these results are helpful in exploring the diagnostic potential of these techniques. The discussion is of interest from the polymer physics perspective because the theory of polymer elasticity focuses on the case of flexible homopolymers modeled as random walks, or self-avoiding random walks, with a constant step length and no internal degrees of freedom . 
In this context, rod-coil multiblock copolymers with quenched architecture may be viewed as a special case of heteropolymers incorporating monomers of different sizes. The analysis of the annealed case supplements the isolated discussions of the effect of internal degrees of freedom on the extension elasticity . We first consider the elastic free energy $`F_{el}`$ and the tension $`f=\partial F_{el}/\partial R`$ of a quenched rod-coil multiblock copolymer with an imposed end-to-end distance, $`R`$. The chain incorporates $`N`$ monomers of identical length $`a`$ that form $`yN`$ rod blocks and $`yN`$ coil blocks, such that the number of monomers in the rods is $`\theta N`$. For simplicity we assume that the coil and rod blocks are monodispersed, and that the number of monomers in a rod block is thus $`\theta /y`$. We focus on the case of long rod blocks, $`\theta /y\gg 1`$. The distinctive feature of rod-coil multiblock copolymers is that the rodlike blocks, the longer “monomers”, are oriented by the applied tension before the alignment of the shorter monomers becomes significant. The freely jointed chain model, the macromolecular analog of Langevin’s theory of paramagnetism, enables a quantitative description of this situation . The length of the monomer is the counterpart of the magnetic moment, the applied tension is the analog of the magnetic field, while the end-to-end distance of the chain corresponds to the magnetization. A monomer of length $`l`$ is assigned an orientational energy of $`-fl\mathrm{cos}\varphi `$ where $`\varphi `$ is the angle made by $`𝐥`$ with respect to $`𝐟`$. For a flexible homopolymer the end-to-end distance is $`R=NaL(x)`$ where $`L(x)=\mathrm{coth}x-x^{-1}`$ is the Langevin function and $`x=fa/kT`$. In our situation the multiblock copolymer is viewed as a collection of two types of non-interacting “dipoles”: $`(1-\theta )N`$ monomers of length $`a`$ and $`yN`$ “monomers” of length $`\theta a/y`$. 
The reduced end-to-end distance, $`r=R/Na`$, is thus $$r=(1-\theta )L(x)+\theta L(\theta x/y).$$ (1) Three regimes are involved. When $`x\ll 1`$ and $`\theta x/y\ll 1`$, both the rods and the $`a`$ monomers are weakly oriented. Since $`L(z)\approx z/3`$ for $`z\ll 1`$, we find $`fa/kT\approx 3r/(1-\theta +\theta ^2/y)`$ and $`F_{el}/NkT\approx 3r^2/2(1-\theta +\theta ^2/y)`$. For stronger extensions, $`x\ll 1`$ while $`\theta x/y\gg 1`$; in other words, the $`a`$ monomers are only weakly aligned but the rods are almost fully oriented with $`f`$. Focusing on the case of long rods, $`y\ll 1`$, when $`L(\theta x/y)\approx 1`$, we obtain $`fa/kT\approx 3(r-\theta )/(1-\theta )`$ and $`F_{el}/NkT\approx 3(r-\theta )^2/2(1-\theta )+3y/2`$. The first term in $`F_{el}`$ reflects the Gaussian stretching penalty of the $`(1-\theta )N`$ “short” monomers. It allows for the full alignment of the rods and the resulting modification of the imposed end-to-end distance experienced by the flexible blocks. The second term reflects the Gaussian penalty associated with the full alignment of the $`yN`$ rod blocks. The crossover between the two regimes occurs at $`r_{co}\simeq \theta +y(1-\theta )/\theta `$. Eventually, upon further extension, the $`a`$ monomers also approach saturation, $`x\gg 1`$ and $`\theta x/y\gg 1`$. In this last regime $`fa/kT\approx (1-\theta +y)/(1-r)`$ and $`F_{el}/NkT\approx (1-\theta +y)\mathrm{ln}[2(1-\theta )/3(1-r)]+3y/2+(1-\theta )/6`$. Since the approximate expressions for $`f`$ do not match at the boundary between the two last regimes, the crossover between them, $`\widehat{r_{co}}`$, may be determined by the requirement $`fa/kT\approx 3(r-\theta )/(1-\theta )\approx 1`$, leading to $`\widehat{r_{co}}\simeq (1+2\theta )/3`$. Altogether, the plot of $`fa/kT`$ vs. $`r`$ for a rod-coil multiblock copolymer is distinguished by an abrupt change of slope at $`r_{co}`$. In turn, this feature provides a useful diagnostic for the architecture of the polymer. In the annealed scenario, monomers may interconvert between the rod and coil states. 
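Equation (1) is monotonic in $`x`$ and therefore easy to evaluate and invert numerically. The sketch below computes $`r(x)`$ and recovers the small-extension slope $`fa/kT\approx 3r/(1-\theta +\theta ^2/y)`$; the values $`\theta =0.5`$, $`y=0.05`$ (rods of $`\theta /y=10`$ monomers) are illustrative assumptions:

```python
import math

def langevin(z):
    """L(z) = coth(z) - 1/z, with the small-z series to avoid cancellation."""
    if abs(z) < 1e-4:
        return z / 3 - z ** 3 / 45
    return 1 / math.tanh(z) - 1 / z

def r_of_x(x, theta=0.5, y=0.05):
    """Reduced extension r = R/Na of the quenched rod-coil chain, Eq. (1)."""
    return (1 - theta) * langevin(x) + theta * langevin(theta * x / y)

def x_of_r(r, theta=0.5, y=0.05, lo=1e-9, hi=1e4):
    """Invert Eq. (1) for the reduced tension x = fa/kT by bisection
    (r is monotonically increasing in x)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if r_of_x(mid, theta, y) < r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For these parameters the small-extension denominator is $`1-\theta +\theta ^2/y=5.5`$, so the linear-response prediction is $`fa/kT\approx 3r/5.5`$.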
Accordingly, $`\theta `$ and $`y`$ are no longer constants set by the chemistry. Rather, their values vary with the temperature $`T`$ and with $`R`$. $`\theta `$ and $`y`$ are determined by minimizing the free energy per chain, $`F_{chain}`$, with respect to $`y`$ and $`\theta `$ for a given $`R`$. $`F_{chain}`$ allows, in addition to $`F_{el}`$, for the mixing free energy $`F_{mix}`$ of the one-dimensional mixture, along the chain trajectory, of the rod and coil blocks. Our discussion focuses on polymers undergoing a helix-coil transition. It is directed at the coupling of the helix-coil transition with the extension for the simplest possible situation, that is, a homopolypeptide capable of forming an $`\alpha `$-helix . This form of intrachain self-assembly is due to intrachain hydrogen bonds. The $`i`$-th monomer, or residue, forms H-bonds with the $`(i+3)`$ and $`(i-3)`$ monomers. Altogether, a helical domain consisting of $`n`$ monomers involves $`n-2`$ H-bonds. The persistence length of an $`\alpha `$-helix is of order $`200nm`$, and a helical domain may thus be considered a rod block. It is convenient to discuss this system in terms of “bonds” between two adjacent monomers. For simplicity we retain the approximation setting the monomer size in the rod and coil states equal to $`a`$. Using the coil state as a reference, each helical bond is assigned a free energy $`\mathrm{\Delta }f`$. It reflects the contribution of the intrachain H-bonds, the change in solvation due to the formation of intrachain H-bonds, and the associated loss of configurational entropy. The helix-coil transition occurs at the transition temperature $`T_{*}`$ when $`\mathrm{\Delta }f=0`$. Above $`T_{*}`$, $`\mathrm{\Delta }f>0`$, while below $`T_{*}`$, $`\mathrm{\Delta }f<0`$. Terminal helical bonds, at the boundary of a helical domain, incur an additional free energy penalty $`\mathrm{\Delta }f_t>0`$, since the terminal monomers lose their configurational entropy but do not contribute H-bonds. 
$`\mathrm{\Delta }f_t`$ plays the role of the interfacial free energy associated with the helix-coil domain boundary. Traditionally, the theory of the helix-coil transition is formulated in terms of the Zimm-Bragg parameters $`s=\mathrm{exp}(-\mathrm{\Delta }f/kT)`$ and $`\sigma =\mathrm{exp}(-2\mathrm{\Delta }f_t/kT)`$, where $`\sigma \sim 10^{-3}-10^{-4}`$ varies with the identity of the residues but is independent of $`T`$. The helix-coil transition gives rise to a sigmoid $`\theta `$ vs. $`T`$ plot with a characteristic width $`T_{*}\sigma ^{1/2}`$. The usual discussion of the helix-coil transition is based on transfer matrix methods. For our purposes it is convenient to recast it in terms of the free energy of the unperturbed chain, $`F_{chain}^0`$. For a long chain, when the $`N\to \mathrm{\infty }`$ limit is applicable, $`F_{chain}^0`$ reflects three contributions. Each of the $`\theta N`$ helical bonds contributes $`\mathrm{\Delta }f`$, while every one of the $`2yN`$ terminal bonds contributes $`\mathrm{\Delta }f_t`$. These two terms are supplemented by the mixing entropy, $`S_{mix}`$, associated with the different possible combinations of $`\theta N`$ helical bonds and $`2yN`$ terminal bonds. The number of possible configurations, $`W`$, is $`W_{th}W_{tc}`$ where $`W_{th}=\left(\genfrac{}{}{0pt}{}{\theta N}{yN}\right)`$ is the number of ways of placing $`yN`$ terminal bonds among $`\theta N`$ helical bonds and $`W_{tc}=\left(\genfrac{}{}{0pt}{}{(1-\theta )N}{yN}\right)`$ is the statistical weight associated with the placement of $`yN`$ terminal bonds among $`(1-\theta )N`$ coil bonds. $`S_{mix}=k\mathrm{ln}W`$ can be expressed as $`S_{mix}=\theta S_{mix}(y/\theta )+(1-\theta )S_{mix}(y/(1-\theta ))`$ where $`\frac{1}{N}S_{mix}(X)=-X\mathrm{ln}X-(1-X)\mathrm{ln}(1-X)`$. Here $`\frac{1}{N}S_{mix}(y/\theta )`$ is the mixing entropy of the terminal monomers among the rod monomers, while $`\frac{1}{N}S_{mix}(y/(1-\theta ))`$ is the mixing entropy of the terminal monomers among the coil monomers. 
Altogether $`{\displaystyle \frac{F_{chain}^0}{NkT}}`$ $`=`$ $`\theta {\displaystyle \frac{\mathrm{\Delta }f}{kT}}+2y{\displaystyle \frac{\mathrm{\Delta }f_t}{kT}}+(\theta -y)\mathrm{ln}{\displaystyle \frac{\theta -y}{\theta }}+y\mathrm{ln}{\displaystyle \frac{y}{\theta }}`$ (3) $`+(1-\theta -y)\mathrm{ln}{\displaystyle \frac{1-\theta -y}{1-\theta }}+y\mathrm{ln}{\displaystyle \frac{y}{1-\theta }}.`$ The equilibrium conditions $`\partial F_{chain}^0/\partial y=0`$ and $`\partial F_{chain}^0/\partial \theta =0`$ lead respectively to $`y^2=(\theta -y)(1-\theta -y)\sigma `$ and to $`(1-\theta )(\theta -y)=\theta (1-\theta -y)s`$. In turn, these yield the familiar results for the unperturbed chain and, in particular, $`\theta =\frac{1}{2}+\frac{1}{2}\sqrt{\frac{(s-1)^2}{4\sigma s+(s-1)^2}}`$. The full analysis of the coupling of the helix-coil transition with the extension of the chain involves an augmented free energy per chain, allowing for the extension penalty, $`F_{chain}=F_{chain}^0+F_{el}`$ . An analytic solution yielding the qualitative features of the force law, as well as the important length and force scales, is possible by setting $`S_{mix}=0`$ . Within the $`S_{mix}=0`$ approximation the interfacial penalty imposes coalescence of the helical domains, leading to the formation of a helix-coil diblock copolymer ($`y=0`$). As a result, the helix-coil transition takes place as a first-order phase transition. The corresponding free energy is $$\frac{F_{chain}}{NkT}\approx \frac{3(r-\theta )^2}{2(1-\theta )}+\theta \frac{\mathrm{\Delta }f}{kT}.$$ (4) The equilibrium condition $`\partial F_{chain}/\partial \theta =0`$ for a given $`r`$ yields $`\theta =1-(1-r)/\sqrt{1-2\mathrm{\Delta }f/3kT}`$. In turn, this defines a critical extension, $`r_L=1-\sqrt{1-2\mathrm{\Delta }f/3kT}`$, such that for $`r<r_L`$, $`\theta =0`$, and for $`r\ge r_L`$, $`\theta `$ is specified by $`\theta =(r-r_L)/(1-r_L)`$. The equilibrium free energy of the chain for $`r<r_L`$ is $`F_{chain}/NkT\approx 3r^2/2`$, while for $`r\ge r_L`$ it is $`F_{chain}/NkT\approx 3r_L(r-1)+\mathrm{\Delta }f/kT`$. 
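The closed-form helicity quoted above can be checked directly against a brute-force minimization of Eq. (3), using $`\mathrm{\Delta }f/kT=-\mathrm{ln}s`$ and $`2\mathrm{\Delta }f_t/kT=-\mathrm{ln}\sigma `$. The grid minimizer and the test values $`s=1.2`$, $`\sigma =10^{-3}`$ are illustrative assumptions:

```python
import math

def f0(theta, y, s, sigma):
    """F_chain^0 / NkT of Eq. (3), with df/kT = -ln s and 2*df_t/kT = -ln sigma."""
    if not (0 < y < theta < 1 and theta + y < 1):
        return float("inf")  # outside the physical domain
    return (-theta * math.log(s) - y * math.log(sigma)
            + (theta - y) * math.log((theta - y) / theta)
            + y * math.log(y / theta)
            + (1 - theta - y) * math.log((1 - theta - y) / (1 - theta))
            + y * math.log(y / (1 - theta)))

def theta_closed(s, sigma):
    """Zimm-Bragg helicity of the unperturbed chain (s >= 1 branch)."""
    return 0.5 + 0.5 * math.sqrt((s - 1) ** 2 / (4 * sigma * s + (s - 1) ** 2))

def theta_numeric(s, sigma, grid=200, rounds=4):
    """Crude grid minimization of f0 over (theta, y), refined a few times."""
    t_lo, t_hi, y_lo, y_hi = 1e-6, 1 - 1e-6, 1e-9, 0.5
    best = (float("inf"), 0.5, 0.01)
    for _ in range(rounds):
        ts = [t_lo + (t_hi - t_lo) * i / grid for i in range(1, grid)]
        ys = [y_lo + (y_hi - y_lo) * i / grid for i in range(1, grid)]
        for t in ts:
            for yv in ys:
                val = f0(t, yv, s, sigma)
                if val < best[0]:
                    best = (val, t, yv)
        _, t, yv = best
        dt, dy = (t_hi - t_lo) / grid, (y_hi - y_lo) / grid
        t_lo, t_hi = max(1e-6, t - 2 * dt), min(1 - 1e-6, t + 2 * dt)
        y_lo, y_hi = max(1e-9, yv - 2 * dy), min(0.5, yv + 2 * dy)
    return best[1]
```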
This regime lasts until $`\theta =1`$ is attained at $`r_U=1`$. Within this simple-minded picture, further extension is impossible, and for $`r>r_U`$, $`F_{chain}=\mathrm{\infty }`$. The corresponding force law involves three regimes: (i) A linear response regime, where the Gaussian elasticity is operative and $`fa/kT\approx 3r`$, occurs while $`r<r_L`$ and $`\theta =0`$. (ii) A plateau with $`f_{co}a/kT\approx 3r_L`$ occurs in the range $`r_L<r<r_U`$, where the coexistence between the helix and coil states follows the lever rule $`\theta /(1-\theta )=(r-r_L)/(r_U-r)`$. The midpoint of the plateau, corresponding to $`\theta =1/2`$, occurs at $`r_{1/2}=(r_U+r_L)/2`$. (iii) A steep increase in force occurs at $`r=r_U`$, when the assumed nonextensibility of the fully helical chain comes into play. Note that in reality the helical domains are not perfect rods: high applied tension may cause extension by breaking the H-bonds. Within the $`S_{mix}=0`$ approximation the plateau corresponds to a first-order phase transition involving the coexistence of a helical phase and a coil phase. Such a phase transition is prohibited in a one-dimensional system experiencing short-ranged interactions . When one allows for $`S_{mix}>0`$, the plateau in the force law exhibits a weak increase with $`r`$, instead of the $`f\sim r^0`$ dependence predicted by the $`S_{mix}=0`$ approximation. Yet, as we shall see, the center of the plateau occurs at $`r_{1/2}`$ and its height at this point is $`f_{co}a/kT\approx 3r_L`$. Furthermore, the qualitative $`\mathrm{\Delta }f`$ dependence of both the height and the width of the plateau is retained. In particular, the width of the plateau increases while its height decreases as $`\mathrm{\Delta }f\to 0`$, or equivalently, as $`s\to 1`$. Since in the vicinity of the transition temperature $`s-1\simeq (T_{*}-T)/T_{*}`$, these scenarios may be explored by changing $`T`$. 
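Within the $`S_{mix}=0`$ approximation the three regimes above assemble into a simple piecewise force law. A sketch; the value $`\mathrm{\Delta }f/kT=0.3`$ used in the check below is an illustrative assumption:

```python
import math

def r_lower(df_over_kT):
    """Onset of helix-coil coexistence, r_L = 1 - sqrt(1 - 2*df/3kT)."""
    return 1.0 - math.sqrt(1.0 - 2.0 * df_over_kT / 3.0)

def force(r, df_over_kT):
    """fa/kT within the S_mix = 0 approximation (r_U = 1; f diverges beyond)."""
    rl = r_lower(df_over_kT)
    if r < rl:
        return 3.0 * r              # (i) Gaussian coil response
    if r <= 1.0:
        return 3.0 * rl             # (ii) helix-coil coexistence plateau
    return float("inf")             # (iii) fully helical chain taken as inextensible

def helicity(r, df_over_kT):
    """Lever-rule helicity theta = (r - r_L)/(1 - r_L) on the plateau."""
    rl = r_lower(df_over_kT)
    return 0.0 if r < rl else min(1.0, (r - rl) / (1.0 - rl))
```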
The complete analysis of the problem in terms of $`F_{chain}=F_{chain}^0+F_{el}`$ involves different regimes, distinguished by the applicable form of $`F_{el}`$, as discussed in the first part of this letter. For weak extensions $`F_{el}/NkT\approx 3r^2/2(1-\theta +\theta ^2/y)`$ and the equilibrium conditions $`\partial F_{chain}/\partial \theta =0`$ and $`\partial F_{chain}/\partial y=0`$ lead to $`\partial F_{chain}^0/\partial \theta =-\partial F_{el}/\partial \theta `$ and $`\partial F_{chain}^0/\partial y=-\partial F_{el}/\partial y`$. In turn, this leads to $`y^2=(\theta -y)(1-\theta -y)\sigma K_\sigma `$ and to $`(1-\theta )(\theta -y)=\theta (1-\theta -y)sK_s`$ where $`K_\sigma =\mathrm{exp}[-(\partial F_{el}/\partial y)/NkT]`$ and $`K_s=\mathrm{exp}[-(\partial F_{el}/\partial \theta )/NkT]`$. $`(\partial F_{el}/\partial \theta )/NkT`$ is the increment in the elastic free energy per monomer associated with adding one residue to a helical block at constant $`y`$, and $`(\partial F_{el}/\partial y)/NkT`$ is the price of creating an extra helical sequence while maintaining a constant $`\theta `$. For weak extensions, when the elastic penalty is a weak perturbation, $`K_\sigma `$ and $`K_s`$ may be approximated by $`K_\sigma =\mathrm{exp}[-r^2\theta _0^2/y_0^2(1-\theta _0+\theta _0^2/y_0)^2]`$ and $`K_s=\mathrm{exp}[r^2(2\theta _0/y_0-1)/(1-\theta _0+\theta _0^2/y_0)^2]`$ where $`\theta _0`$ and $`y_0`$ correspond to the unperturbed chain. $`K_\sigma `$ is a decreasing function of $`r`$ while $`K_s`$ is an increasing function of $`r`$. Thus, the extension favors the transition by increasing $`s`$ and enhances the cooperativity by decreasing $`\sigma `$. This analysis demonstrates that the $`S_{mix}=0`$ approximation overestimates $`F_{el}`$ and thus $`f`$; $`fa/kT\approx 3r/(1-\theta _0+\theta _0^2/y_0)`$ provides a better estimate for $`f`$. However, in accordance with the Le Chatelier principle , the actual force law is even weaker, since $`\theta `$ increases with $`r`$ while $`y/\theta `$ decreases. For stronger deformations, when $`r>r_{co}`$, the appropriate elastic free energy is $`F_{el}/NkT\approx 3(r-\theta )^2/2(1-\theta )+3y/2`$. 
In this range it is possible to obtain the force law in the vicinity of $`\theta =1/2`$. The equilibrium condition $`\partial F_{chain}/\partial y=0`$ leads to $`y^2=(\theta -y)(1-\theta -y)\sigma /e^{3/2}`$. Since $`\sigma \ll 1`$, this reduces, in the vicinity of $`\theta =1/2`$, to $`y\approx \sqrt{\theta (1-\theta )\sigma /e^{3/2}}`$. Utilizing this relationship, and following an expansion of the logarithmic terms, we obtain $`F_{chain}/NkT\approx 3(r-\theta )^2/2(1-\theta )+\theta \mathrm{\Delta }f/kT-2\sqrt{\theta (1-\theta )\sigma /e^{3/2}}`$. In turn, $`\partial F_{chain}/\partial \theta =0`$ recovers the expression for $`\theta `$ as obtained within the $`S_{mix}=0`$ approximation, $`\theta \approx (r-r_L)/(1-r_L)`$. This allows us to express $`F_{chain}`$ as a function of $`r`$ and to obtain $`f`$ in the vicinity of $`\theta =1/2`$: $$\frac{fa}{kT}\approx \frac{f_{co}a}{kT}+2\frac{\sigma ^{1/2}}{e^{3/4}}\frac{(r-r_{1/2})(1-r_L)^{-1}}{\sqrt{(r-r_L)(r_U-r)}}.$$ (5) The force at $`r_{1/2}`$ is $`f_{co}`$, as suggested by the $`S_{mix}=0`$ approximation. However, when the full $`F_{chain}`$ is allowed for, the $`f\sim r^0`$ behavior of a perfect plateau is lost. The second term in (5) ensures that $`f`$ increases with $`r`$. On the other hand, since $`\sigma \sim 10^{-4}`$, the $`\sigma ^{1/2}`$ prefactor in this term imposes a slow variation of $`f`$ in the vicinity of $`r_{1/2}`$. Our discussion of the elasticity of rod-coil multiblock copolymers reveals two scenarios. In the quenched case, the force law exhibits a sharp change of slope, while in the annealed case the force law is sigmoid. It is important to note that our analysis focused on the thermodynamic force law as obtained for long chains and quasistatic extension. Effects due to the finite length of the chains and the finite rate of deformation may modify the experimentally observed force curves . The analysis of the coupling between the helix-coil transition and the chain extension focused on the case of homopolypeptides capable of forming a single-strand $`\alpha `$-helix. 
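The finite-$`\sigma `$ correction in Eq. (5) can be evaluated directly: it vanishes at $`r_{1/2}`$ and, because of the $`\sigma ^{1/2}`$ prefactor, stays small over most of the plateau. A sketch; the values $`r_L=0.1`$ and $`\sigma =10^{-4}`$ are illustrative assumptions:

```python
import math

def plateau_force(r, r_l=0.1, sigma=1e-4):
    """fa/kT near theta = 1/2 from Eq. (5): the plateau value 3*r_L plus the
    O(sigma^(1/2)) tilt, with r_U = 1."""
    r_u = 1.0
    r_half = 0.5 * (r_u + r_l)
    tilt = (2.0 * math.sqrt(sigma) / math.e ** 0.75
            * (r - r_half) / (1.0 - r_l)
            / math.sqrt((r - r_l) * (r_u - r)))
    return 3.0 * r_l + tilt
```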
The results can, however, be applied, with some modifications, to a much wider class of systems. They are relevant, for example, to the interpretation of the sigmoid force curve observed for poly(ethylene-glycol), which was attributed to the formation of a water-mediated single-stranded helix . Furthermore, the analysis can be extended to the case of polymers forming multistranded helices, such as collagen. In this context it is of interest to note that sigmoid force curves were indeed observed in early experiments involving fibers formed by fibrous proteins (collagen, keratin, etc.) that form helical structures . Finally, one should note the recent development of a facile synthesis route for homopolypeptides and block copolypeptides of well-defined architecture . This provides a convenient method for synthesizing rod-coil multiblock copolymers with either quenched or annealed architecture.
# Gamma-Ray Bursts as a Probe of the Very High Redshift Universe

## 1 Introduction

The relatively accurate (3′) gamma-ray burst (GRB) positions found using BeppoSAX and disseminated within a day or so led to the remarkable discoveries that GRBs have X-ray (Costa et al. 1997), optical (Galama et al. 1997) and radio (Frail & Kulkarni 1997) afterglows. The redshift distances of eight GRBs are currently known, either directly from absorption lines in the spectra of the afterglow, or indirectly, from emission lines in the spectra of a galaxy that is coincident with the position of the X-ray and optical afterglow. These redshifts span the range $`z=0.43`$–$`3.42`$, and imply that GRBs are perhaps the most luminous and energetic events in the universe (see Table 1). The most widely discussed models of the central engine of GRBs involve a black hole and an accretion disk, formed either through the core collapse of a massive star (Woosley 1993, 1996; Paczyński 1998; MacFadyen & Woosley 1999; Wheeler et al. 1999; MacFadyen, Woosley & Heger 1999) or the coalescence of a neutron star – neutron star or neutron star – black hole binary (Paczyński 1986; Narayan, Paczyński & Piran 1992; Mészáros & Rees 1993). The former are expected to occur near or in the star-forming regions of their host galaxies, while most of the latter are expected to occur primarily outside of the galaxies in which they were born. Castander and Lamb (1998) showed that the light from the host galaxy of GRB 970228, the first burst for which an afterglow was detected, is very blue. This implies that the host galaxy is undergoing copious star formation and suggests an association between GRB sources and star-forming galaxies. Subsequent analyses of the color of this galaxy (Castander & Lamb 1999; Fruchter et al. 1999a) and other host galaxies (Kulkarni et al.
1998; Fruchter 1999) have strengthened this conclusion, as has the detection of \[OII\] and Ly$`\alpha `$ emission lines from several host galaxies (see, e.g., Metzger et al. 1997a; Kulkarni et al. 1998; Bloom et al. 1998). The positional coincidences between burst afterglows and the bright blue regions of the host galaxies (Sahu et al. 1997, Kulkarni et al. 1998, Fruchter 1999, Kulkarni et al. 1999, Fruchter et al. 1999a), and the evidence for extinction by dust of some burst afterglows (see, e.g., Reichart 1998; Kulkarni et al. 1998; Lamb, Castander & Reichart 1999), lend further support to the idea that GRBs are associated with star formation, as is expected if GRBs are due to the collapse of massive stars. However, this evidence is indirect. Recent tantalizing evidence that the light curves and spectra of the afterglows of GRB 980326 (Bloom et al. 1999) and GRB 970228 (Reichart 1999a, Galama et al. 1999b) contain a supernova (SN) component, in addition to a relativistic shock wave component, provides a more direct clue that at least the long, softer, smoother bursts (Lamb, Graziani & Smith 1993; Kouveliotou et al. 1993) detected by BeppoSAX are a result of the collapse of massive stars. In this paper, we show that, if many GRBs are indeed produced by the collapse of massive stars, GRBs and their afterglows provide a powerful probe of the very high redshift ($`z\gtrsim 5`$) universe. In §2, we establish that both GRBs and their afterglows are detectable out to very high redshifts. In §3, we then show that one expects GRBs to occur out to $`z\sim 10`$ and possibly $`z\sim 15`$–$`20`$, redshifts that are far larger than those expected for the most distant quasars. This implies that there are large numbers of GRBs with peak photon number fluxes below the detection thresholds of BATSE and HETE-2, and even below the detection threshold of Swift. The mere detection of very high redshift GRBs would give us our first information about the earliest generations of stars.
In §4, we show that GRBs and their afterglows can be used as beacons to locate core collapse supernovae at redshifts $`z\gtrsim 1`$, and to study the properties of these supernovae. In §5, we describe the expected properties of the absorption-line systems and the Ly$`\alpha `$ forest in the spectra of GRB afterglows, and discuss various strategies for determining the redshifts of very high redshift GRBs. We then show in §6 how the absorption-line systems and the Ly$`\alpha `$ forest visible in the spectra of GRB afterglows can be used to trace the evolution of metallicity in the universe, and in §7 how they can be used to probe the large-scale structure of the universe at very high redshifts. Finally, in §8 we show how measurement of the Ly$`\alpha `$ break in the spectra of GRB afterglows can be used to constrain, or possibly measure, the epoch at which re-ionization of the universe occurred, using the Gunn-Peterson test. We summarize our conclusions in §9.

## 2 Detectability of GRBs and Their Afterglows at Very High Redshifts

It is now clear that GRBs are detectable out to very high redshifts (VHRs). In order to establish this, we calculate the limiting redshifts detectable by BATSE and HETE-2, and by Swift, for the seven GRBs with well-established redshifts and published peak photon number fluxes. The peak photon number luminosity is $$L_P=\int _{\nu _l}^{\nu _u}\frac{dL_P}{d\nu }d\nu ,$$ (1) where $`\nu _l<\nu <\nu _u`$ is the band of observation. Typically, for the Burst and Transient Source Experiment (BATSE) on the Compton Gamma-Ray Observatory, $`\nu _l=50`$ keV and $`\nu _u=300`$ keV.
The corresponding peak photon number flux $`P`$ is $$P=\int _{\nu _l}^{\nu _u}\frac{dP}{d\nu }d\nu .$$ (2) Assuming that GRBs have a photon number spectrum of the form $`dL_P/d\nu \propto \nu ^{-\alpha }`$ and that $`L_P`$ is independent of $`z`$, the observed peak photon number flux $`P`$ for a burst occurring at a redshift $`z`$ is given by $$P=\frac{L_P}{4\pi D^2(z)(1+z)^\alpha },$$ (3) where $$D(z)=c\int _0^z(1+z^{})\left|\frac{dt(z^{})}{dz^{}}\right|dz^{}$$ (4) is the comoving distance to the GRB, and $$\frac{dt(z)}{dz}=\frac{1}{H_0}\frac{1}{(1+z)\sqrt{\mathrm{\Omega }_m(1+z)^3+\mathrm{\Omega }_\mathrm{\Lambda }+(1-\mathrm{\Omega }_m-\mathrm{\Omega }_\mathrm{\Lambda })(1+z)^2}}.$$ (5) Throughout this paper we take $`\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }=1`$. Then $$D(z)=\frac{c}{H_0}\int _0^z\frac{dz^{}}{\sqrt{\mathrm{\Omega }_m(1+z^{})^3+\mathrm{\Omega }_\mathrm{\Lambda }}}.$$ (6) Taking $`\alpha =1`$, which is typical of GRBs (Mallozzi, Pendleton & Paciesas 1996), $$P=\frac{L_P}{4\pi D^2(z)(1+z)},$$ (7) which is coincidentally identical to the form one gets when $`P`$ and $`L_P`$ are bolometric quantities. Using these expressions, we have calculated the limiting redshifts detectable by BATSE and HETE-2, and by Swift, for the seven GRBs with well-established redshifts and published peak photon number fluxes. In doing so, we have used the peak photon number fluxes given in Table 1, taken a detection threshold of 0.2 ph cm<sup>-2</sup> s<sup>-1</sup> for BATSE (Meegan et al. 1993) and HETE-2 (Ricker 1998) and 0.04 ph cm<sup>-2</sup> s<sup>-1</sup> for Swift (Gehrels 1999), and set $`H_0=65`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, $`\mathrm{\Omega }_m=0.3`$, and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ (other cosmologies give similar results). Figure 1 displays the results. This figure shows that BATSE and HETE-2 would be able to detect four of these GRBs (GRBs 970228, 970508, 980613, and 980703) out to redshifts $`2<z<4`$, and three (GRBs 971214, 990123, and 990510) out to redshifts of $`20<z<30`$.
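The limiting-redshift calculation just described is straightforward to reproduce numerically. The sketch below is our own illustration, not the authors' code: the trapezoid integrator, bisection tolerance, and the fiducial isotropic peak photon luminosity $`L_P=10^{58}`$ ph s<sup>-1</sup> (a value used later in the text) are assumptions. It evaluates Eqs. (6)–(7) and finds the redshift at which a standard-candle burst fades to a detector's threshold:

```python
# Sketch: limiting detectable redshift for a standard-candle GRB, Eqs. (6)-(7).
# Assumed inputs: H0 = 65 km/s/Mpc, Omega_m = 0.3, Omega_Lambda = 0.7, and a
# fiducial isotropic peak photon luminosity L_P = 1e58 ph/s.
import math

C_KM_S = 2.998e5            # speed of light [km/s]
H0 = 65.0                   # Hubble constant [km/s/Mpc]
OMEGA_M, OMEGA_L = 0.3, 0.7
MPC_CM = 3.086e24           # 1 Mpc in cm

def comoving_distance(z, n=2000):
    """Eq. (6): D(z) = (c/H0) * integral_0^z dz' / sqrt(Om (1+z')^3 + OL), in cm."""
    dz = z / n
    s = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0   # trapezoid-rule weights
        s += w / math.sqrt(OMEGA_M * (1 + i * dz) ** 3 + OMEGA_L)
    return (C_KM_S / H0) * s * dz * MPC_CM

def peak_flux(z, L_p):
    """Eq. (7): P = L_P / (4 pi D^2 (1+z)) for a nu^-1 photon spectrum."""
    D = comoving_distance(z)
    return L_p / (4 * math.pi * D ** 2 * (1 + z))

def limiting_redshift(L_p, threshold, z_max=100.0):
    """Largest z at which P(z) still exceeds the detector threshold (bisection)."""
    lo, hi = 1e-3, z_max
    if peak_flux(hi, L_p) > threshold:
        return hi                         # detectable beyond the search range
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if peak_flux(mid, L_p) > threshold:
            lo = mid
        else:
            hi = mid
    return lo

L_P = 1e58  # ph/s
print(limiting_redshift(L_P, 0.2))   # BATSE / HETE-2 threshold [ph/cm^2/s]
print(limiting_redshift(L_P, 0.04))  # Swift threshold [ph/cm^2/s]
```

Because $`D(z)`$ saturates at high redshift, a modest increase in $`L_P`$ pushes the limiting redshift out dramatically; this is why the brightest bursts in Table 1 reach $`20<z<30`$.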
Swift would be able to detect the former four out to redshifts of $`5<z<15`$, and the latter three out to redshifts in excess of $`z\approx 70`$, although it is unlikely that GRBs occur at such extreme redshifts (see §3 below). Consequently, if GRBs occur at VHRs, BATSE has probably already detected them, and future missions should detect them as well. The soft X-ray, optical and infrared afterglows of GRBs are also detectable out to VHRs. The effects of distance and redshift tend to reduce the spectral flux in GRB afterglows in a given frequency band, but time dilation tends to increase it at a fixed time of observation after the GRB, since afterglow intensities tend to decrease with time. These effects combine to produce little or no decrease in the spectral energy flux $`F_\nu `$ of GRB afterglows in a given frequency band and at a fixed time of observation after the GRB with increasing redshift: $$F_\nu (\nu ,t)=\frac{L_\nu (\nu ,t)}{4\pi D^2(z)(1+z)^{1-a+b}},$$ (8) where $`L_\nu \propto \nu ^at^b`$ is the intrinsic spectral luminosity of the GRB afterglow, which we assume applies even at early times, and $`D(z)`$ is again the comoving distance to the burst. Many afterglows fade like $`b\approx -4/3`$, which implies that $`F_\nu (\nu ,t)\propto D(z)^{-2}(1+z)^{-5/9}`$ in the simplest afterglow model, where $`a=2b/3`$ (see, e.g., Wijers, Rees, & Mészáros 1997). In addition, $`D(z)`$ increases very slowly with redshift at redshifts greater than a few. Consequently, there is little or no decrease in the spectral flux of GRB afterglows with increasing redshift beyond $`z\approx 3`$. For example, Halpern et al. (1999) find in the case of GRB 980519 that $`a=-1.05\pm 0.10`$ and $`b=-2.05\pm 0.04`$, so that $`1-a+b=0.00\pm 0.11`$, which implies no decrease in the spectral flux with increasing redshift, except for the effect of $`D(z)`$. In the simplest afterglow model, where $`a=2b/3`$, if the afterglow declines more rapidly than $`b\approx -1.7`$, the spectral flux actually increases as one moves the burst to higher redshifts!
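This near-cancellation can be checked directly from Eq. (8). The sketch below is our own illustration; it adopts the convention $`L_\nu \propto \nu ^at^b`$ with $`a`$ and $`b`$ negative for a fading afterglow, so the GRB 980519 measurements correspond to $`a=-1.05`$, $`b=-2.05`$. It tabulates the fixed-band, fixed-observed-time flux relative to its value at $`z=3`$:

```python
# Sketch: redshift scaling of afterglow spectral flux from Eq. (8),
#   F_nu proportional to D(z)^-2 (1+z)^-(1-a+b),  with L_nu ~ nu^a t^b.
# Cosmology (H0 = 65, Om = 0.3, OL = 0.7) follows the text; the simple
# trapezoid integrator is our own choice.
import math

C_KM_S, H0 = 2.998e5, 65.0
OM, OL = 0.3, 0.7

def D(z, n=1000):
    """Comoving distance, Eq. (6), in Mpc (trapezoid rule)."""
    dz = z / n
    s = sum((0.5 if i in (0, n) else 1.0) /
            math.sqrt(OM * (1 + i * dz) ** 3 + OL) for i in range(n + 1))
    return (C_KM_S / H0) * s * dz

def relative_flux(z, a, b, z_ref=3.0):
    """F_nu(z) / F_nu(z_ref) at fixed observed band and time after the burst."""
    def f(zz):
        return D(zz) ** -2 * (1 + zz) ** -(1.0 - a + b)
    return f(z) / f(z_ref)

# GRB 980519-like values: 1 - a + b = 0, so only the slow growth of D(z) remains.
for z in (3.0, 5.0, 10.0):
    print(z, round(relative_flux(z, a=-1.05, b=-2.05), 2))
```

Even from $`z=3`$ to $`z=10`$ the flux falls by less than a factor of about two, which is the point being made in the text.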
As another example, we calculate the best-fit spectral flux distribution of the early afterglow of GRB 970228 from Reichart (1999a), as observed one day after the burst, transformed to various redshifts. The transformation involves (1) dimming the afterglow (again, we have set $`\mathrm{\Omega }_m=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$; other cosmologies yield similar results), (2) redshifting its spectrum, (3) time dilating its light curve, and (4) extincting the spectrum using a model of the Ly$`\alpha `$ forest. For the model of the Ly$`\alpha `$ forest, we have adopted the best-fit flux deficit distribution to Sample 4 of Zuo & Lu (1993) from Reichart (1999b). At redshifts in excess of $`z=4.4`$, this model is an extrapolation, but it is consistent with the results of theoretical calculations of the redshift evolution of Ly$`\alpha `$ absorbers (see, e.g., Valageas, Schaeffer & Silk 1999). Finally, we have convolved the transformed spectra with a top-hat smearing function of width $`\mathrm{\Delta }\nu =0.2\nu `$. This models these spectra as they would be sampled photometrically, as opposed to spectroscopically; i.e., this transforms the model spectra into model spectral flux distributions. Figure 2 shows the resulting K-band light curves. For a fixed band and time of observation, steps (1) and (2) above dim the afterglow and step (3) brightens it, as discussed above. Figure 2 shows that in the case of the early afterglow of GRB 970228, as in the case of GRB 980519, at redshifts greater than a few the three effects nearly cancel one another out. Thus the afterglow of a GRB occurring at a redshift slightly in excess of $`z=10`$ would be detectable at K $`\approx 16.2`$ mag one hour after the burst, and at K $`\approx 21.6`$ mag one day after the burst, if its afterglow were similar to that of GRB 970228 (a relatively faint afterglow). Figure 3 shows the resulting spectral flux distribution.
The spectral flux distribution of the afterglow is cut off by the Ly$`\alpha `$ forest at progressively lower frequencies as one moves out in redshift. Thus high redshift ($`1<z<5`$) afterglows are characterized by an optical “dropout” (Fruchter 1999), and very high redshift ($`z\gtrsim 5`$) afterglows by an infrared “dropout.” We also show in Figure 3 the effect of a moderate ($`A_V=1/3`$), fixed amount of extinction at the redshift of the GRB. However, the amount of extinction is likely to be very small at large redshifts because of the rapid decrease in metallicity beyond $`z=3`$ (see §6 below). So far, optical observations have been favored over near-infrared (NIR) and IR observations in afterglow searches. This is understandable, given the greater availability of optical cameras and the modest (typically 2′$`\times `$ 2′) fields-of-view of current NIR cameras. Usually, deep NIR observations have been carried out only once an optical transient has been identified in a GRB error circle, thereby assuring that the afterglow can be captured within the field-of-view of the NIR camera. The K-band afterglow search of Gorosabel et al. (1998), which detected the afterglow of GRB 971214 only 3.2 hours after the burst, is a notable exception. Unfortunately, the current search strategy of waiting for the identification of an afterglow candidate at optical wavelengths before carrying out NIR observations biases against the identification of VHR GRBs, since the afterglows of these bursts cannot be detected at optical wavelengths. The results of our calculations show that the identification of VHR GRBs will require afterglow searches that incorporate on a consistent basis (1) sufficiently deep NIR observations, carried out within hours to days after the burst, and (2) near-simultaneous optical observations that go sufficiently deep to meaningfully constrain the redshift of the burst, in the event that its afterglow is only detected at NIR wavelengths.
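The redshifts at which these “dropouts” set in follow from where the Ly$`\alpha `$ break, at a rest wavelength of 1216 Å, lands in the observer's frame. A minimal sketch (the band wavelengths are approximate conventional central values, not taken from the paper):

```python
# Sketch: observed position of the Ly-alpha break and the photometric bands
# that survive it. Band wavelengths (microns) are approximate conventional
# values, assumed for illustration.
LYA_REST_UM = 0.1216  # Ly-alpha, 1216 Angstroms, in microns

BANDS = {"U": 0.36, "B": 0.44, "V": 0.55, "R": 0.70, "I": 0.90,
         "J": 1.25, "H": 1.65, "K": 2.2}

def lya_break_um(z):
    """Observed wavelength of the Ly-alpha break for a burst at redshift z."""
    return LYA_REST_UM * (1 + z)

def surviving_bands(z):
    """Bands redward of the break, where the afterglow remains visible."""
    edge = lya_break_um(z)
    return [name for name, lam in BANDS.items() if lam > edge]

print(lya_break_um(4.4), surviving_bands(4.4))    # optical-dropout regime
print(lya_break_um(10.0), surviving_bands(10.0))  # infrared-dropout regime
```

At $`z\approx 10`$ the break sits near 1.3 $`\mu `$m, so only H and K survive, consistent with the K-band detectability quoted above.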
Fortunately, early NIR observations will be facilitated by the HETE-2 (Ricker 1998) and Swift missions (Gehrels 1999), which will provide positions accurate to better than several arcminutes for many GRBs in near-real time. For more than half of the nearly two dozen GRBs for which X-ray afterglows have been detected, a corresponding optical afterglow has not been detected. A possible explanation of this “missing optical afterglow” problem is that, because of larger positional error circles or other reasons, optical afterglow searches do not always go deep enough, soon enough, to detect the fading afterglows. This may explain many of the missing optical afterglows, but it probably does not account for all of them. Another possible explanation is that some of these afterglows are significantly extincted by dust, either in our galaxy, in the host galaxies, or in the environments immediate to the bursts themselves (see, e.g., Reichart 1998, 1999b). Finally, it is possible that some of the GRBs for which no optical afterglow has been detected occurred at VHRs, and therefore their afterglow spectra were absorbed in the optical, as described above. In reality, a combination of these three effects may be at work. Early HST and NIR afterglow searches, facilitated by the more accurate near-real time positions that the HETE-2 and Swift missions will provide, could help to distinguish between these various explanations. In conclusion, if GRBs occur at very high redshifts, both they and their afterglows would be detectable.

## 3 GRB Afterglows as a Probe of Star Formation

The positional coincidences between burst afterglows and the bright blue regions of the host galaxies (Sahu et al. 1997, Kulkarni et al. 1998, Fruchter 1999, Kulkarni et al. 1999, Fruchter et al. 1999a), and the evidence for extinction by dust of some burst afterglows (see, e.g., Reichart 1998; Kulkarni et al.
1998; Lamb, Castander & Reichart 1999), lend support to the idea that GRBs are associated with star formation. The discovery of what appear to be supernova components in the afterglows of GRBs 970228 (Reichart 1999a; Galama et al. 1999b) and 980326 (Bloom et al. 1999) strongly suggests that at least some GRBs are related to the deaths of massive stars, as predicted by the widely-discussed collapsar model of GRBs (see, e.g., Woosley 1993, 1996; Paczyński 1998; MacFadyen & Woosley 1999; Wheeler et al. 1999; MacFadyen, Woosley & Heger 1999). The presence of an unusual radio supernova, SN 1998bw, in the error circle of GRB 980425 (Galama et al. 1998; Kulkarni et al. 1998) also lends support to this hypothesis, although the identification of SN 1998bw with GRB 980425 is not secure (see, e.g., Graziani, Lamb, & Marion 1999). If GRBs are related to the collapse of massive stars, one expects the GRB rate to be approximately proportional to the star formation rate (SFR). Observational estimates (Gallego et al. 1995; Lilly et al. 1996; Connolly et al. 1997; Madau, Pozzetti & Dickinson 1998) indicate that the SFR in the universe was about 15 times larger at a redshift $`z\sim 1`$ than it is today. The data at higher redshifts from the Hubble Deep Field (HDF) in the north suggest a peak in the SFR at $`z\sim 1`$–$`2`$ (Madau, Pozzetti & Dickinson 1998), but the actual situation is highly uncertain. Assuming that GRBs are standard candles and adopting the estimate of the SFR derived by Madau, Pozzetti & Dickinson (1998; however, see Pei, Fall & Hauser 1999), several authors (Totani 1997, 1999; Wijers et al. 1998) have investigated whether or not the observed GRB brightness distribution is consistent with such a SFR, which rises rapidly from the present epoch and peaks at $`z\sim 1`$–$`2`$. Totani (1997, 1999) finds that it is not, and one can infer from the results of Wijers et al. (1998) that it is not.
However, there is now overwhelming evidence that GRBs are not standard candles: The peak photon number fluxes $`P`$ of the seven GRBs with secure redshifts and published peak photon fluxes span nearly two orders of magnitude (see Table 1). The range of peak photon number fluxes may actually be much greater (Loredo & Wasserman 1998). Furthermore, theoretical calculations show that the birth rate of Pop III stars produces a peak in the star-formation rate in the universe at redshifts $`16<z<20`$, while the birth rate of Pop II stars produces a much larger and broader peak at redshifts $`2<z<10`$ (Ostriker & Gnedin 1996; Gnedin & Ostriker 1997; Valageas & Silk 1999). Consequently, if GRBs are produced by the collapse of massive stars in binaries, one expects them to occur out to at least $`z\sim 10`$ and possibly $`z\sim 15`$–$`20`$, redshifts that are far larger than those expected for the most distant quasars. Therefore, if GRBs – or at least a well-defined subset of the observed GRBs, such as the long bursts – are due to the deaths of massive stars, as theory and observations now suggest, then GRBs are a powerful probe of the star-formation history of the universe, and particularly of the SFR at VHRs. In Figure 4, we have plotted the SFR versus redshift from a phenomenological fit (Rowan-Robinson 1999) to the star formation rate derived from submillimeter, infrared, and UV data at redshifts $`z<5`$, and from a numerical simulation by Gnedin & Ostriker (1997) (Figure 2b of their paper) at redshifts $`z\gtrsim 5`$. The simulations done by Gnedin & Ostriker (1997) indicate that the SFR increases with increasing redshift until $`z\sim 10`$, at which point it levels off. The smaller peak in the SFR at $`z\sim 18`$ corresponds to the formation of Population III stars, brought on by cooling by molecular hydrogen. In their other simulations, the strength of this peak was found to be greater than in the example used here (Ostriker & Gnedin 1996; Gnedin & Ostriker 1997).
Since GRBs are detectable at these VHRs (see §2) and their redshifts may be measurable from the absorption-line systems and the Ly$`\alpha `$ break in the afterglows (see §5 below), if the GRB rate is proportional to the star-formation rate, then GRBs could provide unique information about the star-formation history of the VHR universe. More easily but less informatively, one can examine the GRB peak photon flux distribution $`N_{GRB}(P)`$. To illustrate this, we have calculated the expected GRB peak flux distribution assuming (1) that the GRB rate is proportional to the star-formation rate (this may underestimate the GRB rate at VHRs, since it is generally thought that the initial mass function will be tilted toward a greater fraction of massive stars at VHRs because of less efficient cooling due to the lower metallicity of the universe at these early times), (2) that the star-formation rate is that given in Figure 4, and (3) that the peak photon luminosity distribution $`f(L_P)`$ of the bursts is independent of $`z`$. There is a mis-match of about a factor of three between the $`z<5`$ and $`z\gtrsim 5`$ regimes. However, estimates of the star formation rate are uncertain by at least this amount in both regimes. We have therefore chosen to match the two regimes smoothly to one another, in order to avoid creating a discontinuity in the GRB peak flux distribution that would be entirely an artifact of this mis-match. We calculate the observed GRB peak photon flux distribution $`N_{GRB}(P)`$ as follows.
Assuming that GRBs are standard candles of peak photon luminosity $`L_P`$, the peak photon flux distribution is $$N_{GRB}(P|L_P)=\mathrm{\Delta }T_{obs}\frac{R_{SF}(z)}{1+z}\frac{dV(z)}{dz}\left|\frac{dz(P|L_P)}{dP}\right|,$$ (9) where $`\mathrm{\Delta }T_{obs}`$ is the length of time of observation, $`R_{SF}(z)`$ is the local co-moving star-formation rate at $`z`$, $$\frac{dV(z)}{dz}=4\pi \frac{d_L^2(z)}{1+z}c\left|\frac{dt(z)}{dz}\right|$$ (10) is the differential comoving volume, $$d_L(z)=\{\begin{array}{cc}\frac{c}{H_0\sqrt{1-\mathrm{\Omega }_m-\mathrm{\Omega }_\mathrm{\Lambda }}}(1+z)\mathrm{sinh}\left[\frac{H_0\sqrt{1-\mathrm{\Omega }_m-\mathrm{\Omega }_\mathrm{\Lambda }}}{c}D(z)\right]\hfill & \text{(}\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }<1\text{)}\hfill \\ (1+z)D(z)\hfill & \text{(}\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }=1\text{)}\hfill \\ \frac{c}{H_0\sqrt{\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }-1}}(1+z)\mathrm{sin}\left[\frac{H_0\sqrt{\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }-1}}{c}D(z)\right]\hfill & \text{(}\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }>1\text{)}\hfill \end{array}$$ (11) is the luminosity distance, and $$\frac{dz(P|L_P)}{dP}=\left[\frac{dP(z|L_P)}{dz}\right]^{-1}.$$ (12) For $`\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }=1`$, $$\frac{dV(z)}{dz}=4\pi D^2(z)\frac{dD(z)}{dz},$$ (13) where the comoving distance $$D(z)=\frac{c}{H_0}\int _0^z\frac{dz^{}}{[\mathrm{\Omega }_m(1+z^{})^3+\mathrm{\Omega }_\mathrm{\Lambda }]^{1/2}},$$ (14) and $$\frac{dD(z)}{dz}=\frac{c}{H_0}\frac{1}{[\mathrm{\Omega }_m(1+z)^3+\mathrm{\Omega }_\mathrm{\Lambda }]^{1/2}}.$$ (15) For $`dL_P/d\nu \propto \nu ^{-\alpha }`$, $$P(z|L_P)=\frac{L_P}{4\pi D^2(z)(1+z)^\alpha }.$$ (16) Again taking $`\alpha =1`$, which is typical of GRBs (Mallozzi, Pendleton & Paciesas 1996), $$P(z|L_P)=\frac{L_P}{4\pi D^2(z)(1+z)}.$$ (17) Then $$\left|\frac{dP(z|L_P)}{dz}\right|=\frac{L_P}{4\pi 
}\left[\frac{2}{D^3(z)(1+z)}\frac{dD(z)}{dz}+\frac{1}{D^2(z)(1+z)^2}\right].$$ (18) For a luminosity function $`f(L_P)`$ and for $`dL_P/d\nu \propto \nu ^{-\alpha }`$, $`N_{GRB}(P)`$ is given by the following convolution integral: $$N_{GRB}(P)=\mathrm{\Delta }T_{obs}\int _0^{\mathrm{}}R_{GRB}(P|L_P)f(L_P)dL_P,$$ (19) where the redshift in the integrand is fixed by $`L_P=4\pi D^2(z)(1+z)^\alpha P`$. The upper panel of Figure 5 shows the number $`N_{\ast }(z)`$ of stars expected as a function of redshift $`z`$ (i.e., the star-formation rate, weighted by the co-moving volume, and time-dilated) for an assumed cosmology $`\mathrm{\Omega }_m=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ (other cosmologies give similar results). The solid curve corresponds to the star-formation rate in Figure 4. The dashed curve corresponds to the star-formation rate derived by Madau et al. (1998). This figure shows that $`N_{\ast }(z)`$ peaks sharply at $`z\sim 2`$ and then drops off fairly rapidly at higher $`z`$, with a tail that extends out to $`z\sim 12`$. The rapid rise in $`N_{\ast }(z)`$ out to $`z\sim 2`$ is due to the rapidly increasing volume of space. The rapid decline beyond $`z\sim 2`$ is due almost completely to the “edge” in the spatial distribution produced by the cosmology. In essence, the sharp peak in $`N_{\ast }(z)`$ at $`z\sim 2`$ reflects the fact that the star-formation rate we have taken is fairly broad in $`z`$, and consequently, the behavior of $`N_{\ast }(z)`$ is dominated by the behavior of the co-moving volume $`dV(z)/dz`$; i.e., the shape of $`N_{\ast }(z)`$ is due almost entirely to cosmology. The lower panel in Figure 5 shows the cumulative distribution $`N_{\ast }(>z)`$ of the number of stars expected as a function of redshift $`z`$. The solid and dashed curves have the same meaning as in the upper panel. This figure shows that $`\sim 40\%`$ of all stars have redshifts $`z>5`$. The upper panels of Figures 6, 7, and 8 show the predicted peak photon flux distribution $`N_{GRB}(P)`$.
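The machinery of Eqs. (9)–(19) can be mimicked with a small Monte Carlo: draw redshifts with probability proportional to $`R_{SF}(z)dV/dz/(1+z)`$, draw luminosities from $`f(L_P)`$, and assign each burst the peak flux of Eq. (17). In the sketch below (our own illustration, not the paper's calculation), the SFR is a crude two-power-law stand-in for the Figure 4 curve, and $`f(L_P)\propto L_P^{-2}`$ over two decades; neither choice is from the paper:

```python
# Sketch (assumed toy inputs): Monte-Carlo analogue of Eqs. (9)-(19).
# The SFR below is a crude stand-in for Fig. 4 (rises to z ~ 2, flat beyond),
# and the luminosity function is f(L_P) ~ L_P^-2 over 1e57-1e59 ph/s.
import math, random

C_H0_MPC = 2.998e5 / 65.0   # Hubble distance c/H0 [Mpc]
OM, OL = 0.3, 0.7
MPC_CM = 3.086e24

def D(z, n=400):
    """Comoving distance, Eq. (14), in cm (trapezoid rule)."""
    dz = z / n
    s = sum((0.5 if i in (0, n) else 1.0) /
            math.sqrt(OM * (1 + i * dz) ** 3 + OL) for i in range(n + 1))
    return C_H0_MPC * s * dz * MPC_CM

def dD_dz(z):
    """Eq. (15), in cm."""
    return C_H0_MPC * MPC_CM / math.sqrt(OM * (1 + z) ** 3 + OL)

def sfr(z):
    """Toy star-formation rate (arbitrary units): stand-in for Fig. 4."""
    return (1 + z) ** 3 if z < 2 else 27.0

def burst_rate_density(z):
    """Integrand of Eq. (9): R_SF(z) (dV/dz) / (1+z), up to a constant."""
    return sfr(z) * 4 * math.pi * D(z) ** 2 * dD_dz(z) / (1 + z)

def sample_bursts(n, z_max=12.0, lmin=1e57, lmax=1e59):
    zs = [i * z_max / 300 for i in range(1, 301)]
    w = [burst_rate_density(z) for z in zs]
    fluxes = []
    for _ in range(n):
        z = random.choices(zs, weights=w)[0]
        u = random.random()
        # inverse-CDF draw from f(L) ~ L^-2 on [lmin, lmax]:
        L = 1.0 / (1.0 / lmin + u * (1.0 / lmax - 1.0 / lmin))
        fluxes.append(L / (4 * math.pi * D(z) ** 2 * (1 + z)))  # Eq. (17)
    return fluxes

random.seed(1)
P = sample_bursts(5000)
frac_below_batse = sum(p < 0.2 for p in P) / len(P)
print(round(frac_below_batse, 2))  # a large fraction lies below 0.2 ph/cm^2/s
```

Even with these crude inputs, a large fraction of the simulated bursts falls below the BATSE/HETE-2 threshold, in line with the conclusion of this section.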
The solid curve assumes that all bursts have a peak (isotropic) photon luminosity $`L_P=10^{58}`$ ph s<sup>-1</sup>. However, as remarked above, there is now overwhelming evidence that GRBs are not “standard candles.” Consequently, we also show in Figures 6 – 8, as illustrative examples, the convolution of this same star formation rate and a photon luminosity function, $$f(L_P)\propto \{\begin{array}{cc}L_P^{-\beta }\hfill & \text{(}L_{\mathrm{min}}<L_P<L_{\mathrm{max}}\text{)}\hfill \\ 0\hfill & \text{(otherwise)}\hfill \end{array}$$ (20) where ($`\mathrm{log}L_{\mathrm{min}},\mathrm{log}L_{\mathrm{max}}`$) = (57.5,58.5), (57,59), and (56.5,59.5); i.e., $`f(L_P)`$ is centered on $`L_P=10^{58}`$ ph s<sup>-1</sup>, and has widths $`\mathrm{\Delta }L_P/L_P=10`$, 100 and 1000 (the seven bursts with well-determined redshifts and published peak \[isotropic\] photon luminosities have a mean peak photon luminosity and sample variance $`\langle \mathrm{log}L_P\rangle =58.1\pm 0.7`$; the actual luminosity function of GRBs could well be even wider; Loredo & Wasserman 1998b, Lamb 1999). The general shape of the peak photon flux distributions $`N_{GRB}(P)`$ can be understood as follows. The shape of $`N_{GRB}(P)`$ above the rollover reflects the competition between the increasing number of GRBs expected at larger $`z`$ (shown in Figure 5) and their decreasing peak photon number flux $`P(z)`$ due to their increasing distance, while the shape of $`N_{GRB}(P)`$ below the rollover reflects the intrinsic luminosity function $`f(L_P)`$ of the bursts (Loredo & Wasserman 1998). The latter is particularly the case because cosmology causes the expected number of GRBs \[$`N_{\ast }(z)`$\] to have an “edge,” and therefore to be sharply peaked in $`z`$ (Wasserman 1992).
Thus, most of the GRBs below the rollover in the peak photon flux distribution $`N_{GRB}(P)`$ lie at the same distance ($`z\sim 2`$) but have a range of intrinsic peak flux luminosities $`L_P`$, reflecting the intrinsic luminosity function $`f(L_P)`$. This is particularly clear in Figure 6, where $`N_{GRB}(P)`$ is flat, and the plateau extends over an increasingly broad range of peak photon fluxes for increasingly broad intrinsic luminosity functions $`f(L_P)`$. It is also evident in Figures 7 and 8, where $`N_{GRB}(P)`$ below the rollover has slopes of $`-1`$ and $`-2`$. Thus, information can be extracted from $`N_{GRB}(P)`$ about both the GRB rate as a function of redshift and the intrinsic luminosity function $`f(L_P)`$ of the bursts. Figures 6, 7, and 8 show that the limiting sensitivities of BATSE and HETE-2, and of Swift all lie well below the observed rollover at $`P\approx 6`$ ph cm<sup>-2</sup> s<sup>-1</sup>. Therefore, BATSE has detected, and HETE-2 will detect, many GRBs out to $`z\sim 10`$, if this picture is correct. Swift will detect many GRBs out to $`z\sim 14`$, and will also detect for the first time many intrinsically fainter GRBs. The middle panels of Figures 6 – 8 show the predicted cumulative peak photon flux distribution $`N_{GRB}(>P)`$ for the same set of luminosity functions. For the star formation rate that we have assumed, we find that, if GRBs are assumed to be “standard candles,” the predicted peak photon flux distribution falls steeply throughout the BATSE and HETE-2 regime, and therefore fails to match the observed distribution, in agreement with earlier work. In fact, we find that a photon luminosity function spanning at least a factor of 100 is required in order to obtain semi-quantitative agreement with the principal features of the observed distribution; i.e., a roll-over at a peak photon flux of $`P\approx 6`$ ph cm<sup>-2</sup> s<sup>-1</sup> and a slope above this of about $`-3/2`$.
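The $`-3/2`$ value quoted here is the classical Euclidean benchmark: standard candles distributed uniformly in flat, static space give $`N(>P)\propto P^{-3/2}`$, and cosmology bends the observed distribution away from this at faint fluxes. A short check (our own illustration, not from the paper):

```python
# Sketch: the Euclidean origin of the -3/2 slope. For uniformly distributed
# standard candles, every source with flux > P lies within r_max = sqrt(L/4 pi P),
# so N(>P) ~ r_max^3 ~ P^(-3/2).
import math

def n_brighter(P, L=1.0, density=1.0):
    """Number of sources with observed flux greater than P (arbitrary units)."""
    r_max = math.sqrt(L / (4 * math.pi * P))
    return density * (4.0 / 3.0) * math.pi * r_max ** 3

# log-log slope of N(>P) between two fluxes:
P1, P2 = 1.0, 10.0
slope = (math.log10(n_brighter(P2)) - math.log10(n_brighter(P1))) / \
        (math.log10(P2) - math.log10(P1))
print(slope)  # -1.5
```

Departures from this slope at faint $`P`$ are therefore a signature of the cosmological “edge” and of the luminosity function, which is the diagnostic exploited in Figures 6–8.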
This implies that there are large numbers of GRBs with peak photon number fluxes below the detection threshold of BATSE and HETE-2, and even of Swift. The lower panels of Figures 6 – 8 show the predicted fraction of bursts with peak photon number flux $`P`$ that have redshifts of $`z>5`$, for the same set of luminosity functions. From these figures, we see that near the detection threshold of Swift, a significant number of bursts will have redshifts of $`z>5`$. Depending on the slope and the width of the luminosity function, more than half of such bursts may have redshifts of $`z>5`$.

## 4 GRBs as a Means of Finding Supernovae at Very High Redshifts

GRBs can be used as beacons, revealing the locations of SNe at high redshifts ($`z>1`$). To illustrate this, we take the best-fit V-band light curve of the early afterglow of GRB 970228 from Reichart (1999a) and add to it the V-band (or peak spectral flux) light curve of SN 1998bw (Galama et al. 1998; McKenzie & Schaefer 1999), using the light curve of SN 1998bw as a template, as we did in Reichart (1999a). The light curves we use are corrected for Galactic extinction, as explained in Reichart (1999a). We then transform the two components to redshifts of $`z=1.2`$, 3.0, 7.7, and 20, as described in §2. Figure 9 shows the resulting light curves. Figure 9 shows that, if a SN 1998bw-like event occurred at a redshift of $`z\approx 3`$, then it would peak in the K band about 70 days after the event, and the peak magnitude would be K $`\approx 24.4`$. Consequently, the detection of high redshift SNe — localized on the sky and in redshift by earlier GRB afterglow observations — is within the limits of existing ground-based instruments, and well within the limits of HST/NICMOS observations, out to a redshift of at least $`z\approx 3`$.
At higher redshifts, SNe could be detected with NIR observations at frequencies above the peak frequency of the SN in the observer’s frame; but because this portion of the SN spectrum is very red, the flux at NIR frequencies in the observer’s frame decreases rapidly with increasing redshifts. Consequently, SNe at redshifts higher than $`z\approx 4`$ or 5 probably cannot be detected in the NIR using existing instruments. At still higher redshifts, one must appeal to L- and M-band observations, but existing instruments do not yet have the necessary sensitivity. In Table 2, we expand upon Figure 9 by listing the band and the number of days after the GRB that observations would have the best chance of detecting a SN 1998bw-like event at peak flux density for a variety of GRB redshifts. We also list at what magnitudes SN 1998bw would have been detected in these bands at these times, if transformed to these redshifts. Of course, the chances of detecting a SN component depend on (1) how bright the afterglow is in the band of observation at the time of observation, (2) how bright the host galaxy, if detected, is in the band of observation, and (3) how much Galactic extinction there is in the direction of the GRB in the band of observation. Already one burst, GRB 971214, has been found to have a redshift, $`z=3.418`$ (Kulkarni et al. 1998), that lies at the high end of the redshift range for which current instruments can detect a SN 1998bw-like SN. In the case of this burst, K-band observations were taken 54 and 58 days after the burst (Ramaprakash et al. 1998), but these observations did not go deep enough to detect a SN component similar to SN 1998bw, were one present in the light curve. As a further example, we consider the case of GRB 990510, a recent burst whose afterglow faded as $`t^{-2.4}`$ (Stanek et al. 1999) or $`t^{-2.2}`$ (Harrison et al.
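The band and epoch entries of Table 2 follow from simple redshifting of the SN template. The sketch below is our own illustration; the rest-frame numbers (a V-band peak near 0.55 $`\mu `$m roughly 17 days after the burst) are approximate values chosen to be consistent with the $`z\approx 3`$ case quoted above, not fits to SN 1998bw:

```python
# Sketch: observer-frame band and peak time for a SN 1998bw-like supernova
# at redshift z. The rest-frame peak wavelength and epoch are assumptions
# (approximate values, consistent with the text's z ~ 3 example).
REST_PEAK_UM = 0.55     # the SN peaks near the V band in its rest frame
REST_PEAK_DAY = 17.0    # approximate rest-frame epoch of V-band maximum

BANDS = [("V", 0.55), ("R", 0.70), ("I", 0.90), ("J", 1.25),
         ("H", 1.65), ("K", 2.2), ("L", 3.5), ("M", 4.8)]

def observed_peak(z):
    """(closest observing band, observed wavelength [um], days after the GRB)."""
    lam = REST_PEAK_UM * (1 + z)
    t = REST_PEAK_DAY * (1 + z)
    band = min(BANDS, key=lambda b: abs(b[1] - lam))[0]
    return band, lam, t

print(observed_peak(3.0))    # K band, roughly 70 days: matches the text
print(observed_peak(1.619))  # the redshift of GRB 990510
```

At $`z\gtrsim 5`$ the peak moves beyond the K band into the L and M bands, which is why existing NIR instruments run out of sensitivity there.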
1999) at late times; the difference in these values can be traced to slight differences in these groups’ respective parameterizations of the light curve of this afterglow (Harrison et al. 1999). As a consequence of this rapid fading, a SN 1998bw-like component to the light curve, if present, would dominate the afterglow after about a month at red and NIR wavelengths. Again using SN 1998bw as a template, we transform this template to the redshift of the burst, $`z=1.619`$ (Vreeswijk et al. 1999), and correct this template for the difference in Galactic extinction ($`A_V=0.233`$ mag versus $`A_V=0.673`$ mag) along the SN 1998bw and GRB 990510 lines of sight, using the dust maps of Schlegel, Finkbeiner, & Davis (1998) (software and data are available at http://astro.berkeley.edu/davis/dust/index.html), and the Galactic extinction curve of Cardelli, Clayton, & Mathis (1989) for $`R_V=3.1`$. We plot the resulting observer-frame I-, J-, H-, and K-band predictions for a SN 1998bw-like component of the afterglow of GRB 990510 at 49, 101, and 144 days after the burst in Figure 10. If there is a SN component in the afterglow, it could have easily been detected with the NICMOS instrument on HST, had it not run out of cryogen half a year earlier. In the absence of NICMOS, Fruchter et al. (1999b) performed HST/STIS observations of the afterglow on 8.1 and 17.9 June 1999. They detected the afterglow at V $`=27.0\pm 0.2`$ mag and V $`=27.8\pm 0.3`$ mag on these two dates, which is consistent with the extrapolated light curve of the afterglow, and therefore is not consistent with an additional SN 1998bw-like component in the afterglow (Fruchter et al. 1999b). However, caution is in order, since this conclusion is subject to a number of uncertainties.
These include assumptions about (1) the spectral form of the afterglow at the times of the observations, since the observations spanned a wavelength range of 300 nm – 900 nm; (2) how to extrapolate the light curve of the early afterglow of GRB 990510 to the times of the observations; (3) the luminosity of the supernova component relative to the luminosity of SN 1998bw, since Type Ib-Ic supernovae are known not to be standard candles; (4) the brightness of SN 1998bw at ultraviolet wavelengths; (5) the ignorability of any difference in host galaxy extinction along the SN 1998bw and GRB 990510 lines of sight; and (6) the underlying cosmological model.

## 5 Measuring the Redshifts of Very High Redshift GRBs

Of the eight GRBs with secure redshifts, four have redshifts between $`0.4<z<1`$, two have redshifts between $`1<z<2`$, and one (GRB 971214) has a redshift of $`z=3.42`$ (see Table 1). These redshifts have been found in two ways: (1) by taking a spectrum of the afterglow at early times, when the afterglow was still sufficiently bright (GRBs 970508, 980703, 990123, and 990510), and (2) by taking a spectrum of the host galaxy, if detected, at sufficiently late times, once the afterglow had faded (GRBs 970228, 970508, 971214, 980613, and 980703). Both methods have uncertainties. In the former case, one technically measures only a lower limit for the redshift of the burst, corresponding to the redshift of the first absorber along the line of sight from the burst. However, as most, and possibly all, bursts with optical afterglows are associated with host galaxies, this first absorber is likely to be the host galaxy itself. Consequently, GRB redshifts measured in this way are fairly secure. In the latter case, one must establish that the positional coincidence between the afterglow and the potential host galaxy is not accidental. 
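The requirement that the positional coincidence "is not accidental" is usually quantified with a Poisson chance-alignment estimate. A sketch; the function name and the specific surface density used in the example below are mine, chosen only so that galaxies cover roughly 10% of the sky, as the text states for ground-based seeing:

```python
import math

def chance_coincidence(density_per_arcsec2, radius_arcsec):
    """Probability that at least one unrelated galaxy of the given surface
    density falls within the error radius, assuming Poisson-distributed
    galaxy positions on the sky."""
    expected = density_per_arcsec2 * math.pi * radius_arcsec ** 2
    return 1.0 - math.exp(-expected)
```

With a density that covers ~10% of the sky at a ~1 arcsec (seeing-limited) radius, the chance-alignment probability is ~0.1; at HST resolution (~0.1 arcsec) the same density gives a probability roughly two orders of magnitude smaller, which is why the text recommends HST images for host identification.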
For ground-based observations, $`\sim `$ 10% of the sky is covered by galaxies brighter than R $`\approx 25.5`$, a typical magnitude for GRB host galaxies (see, e.g., Hogg & Fruchter 1999), because of the blurring effect of seeing (Lamb 1999). Consequently, identification of the host galaxy is best established using HST images. In the cases of GRB 970508 (Metzger et al. 1997a, 1997b), GRB 980703 (Djorgovski et al. 1998), and GRB 990712 (Galama et al. 1999a), both absorption and emission lines were measured. (It is notable that the redshifts for the two GRBs at $`z=1.6`$ were found by taking a spectrum of the GRB afterglow at early times. Currently, redshifts of host galaxies in the range $`1<z<2.5`$ are difficult to measure because the H$`\alpha `$ and \[O II\] emission lines both lie outside the optical band for this range of redshifts.) At VHRs, e.g., $`z>5`$, both methods will be challenging. Consider first the detection of absorption lines in afterglow spectra. In Figure 11, we plot the observed wavelengths of prominent absorption lines as a function of redshift. At VHRs, the prominent Balmer lines are redshifted out of the NIR, and therefore out of the wavelength range of instruments such as NIRSPEC (0.9 $`\mu `$m - 5.1 $`\mu `$m; McLean et al. 1998). Prominent metal lines such as Mg II and Fe II are not redshifted out of the NIR. However, both observations (see, e.g., York 1999) and theoretical calculations (see, e.g., Ostriker & Gnedin 1996; Gnedin & Ostriker 1997; Valageas, Schaeffer, & Silk 1999; Valageas & Silk 1999) suggest that the metallicity of the universe decreases with increasing redshift, especially beyond $`z>3`$. Therefore, the equivalent widths of the prominent metal lines are expected to decrease with increasing redshift, making them challenging to detect at very high redshifts. 
However, prominent metal lines associated with the host galaxy may still be present if many GRBs are due to the collapse of massive stars, and the bursts occur near or in star-forming regions, since substantial production of metals would be expected in the disk – and certainly the star-forming regions – of the host galaxy. This is illustrated by Figure 11 of Valageas & Silk (1999), which we have reprinted as Figure 12 in the present paper. The second method, which requires detecting the potential host galaxy, confirming its identification as the host galaxy through positional coincidence with the GRB afterglow, and detecting Ly$`\alpha `$, \[O II\], or other emission lines from the host galaxy, will also be challenging. This will be the case because (1) galaxies more massive than $`10^9`$ $`M_{\odot }`$ are not expected to have formed by these times (see, e.g., Ostriker & Gnedin 1996; Gnedin & Ostriker 1997), and (2) the surface brightness of these galaxies decreases as $`(1+z)^{-4}`$. However, at redshifts $`z>5`$ (Fruchter 1999), and certainly at VHRs, the Ly$`\alpha `$ forest will be a very prominent feature of the spectral flux distribution of the GRB afterglow. This is evident in Figure 2, which shows the expected flux distribution of GRB afterglows at various redshifts. It is even more evident in Figure 13, which focuses on NIR through optical frequencies. As an illustration of this, consider a burst that occurs at a redshift slightly in excess of $`z=10`$. If its afterglow is similar to that of GRB 970228, which had a relatively faint afterglow, its afterglow would be detectable at K $`\approx 16.2`$ mag one hour after the burst, and at K $`\approx 21.6`$ mag one day after the burst. However, the afterglow would not be detectable in the J band to any attainable limiting magnitude. 
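The J-band dropout behaviour in this example follows directly from the observed wavelength of the Ly$`\alpha `$ break, 121.6 nm $`\times `$ (1 + z). A minimal sketch of a photometric redshift estimate built on it; the band centers are nominal values assumed here for illustration, not taken from the text:

```python
LYA_NM = 121.567  # rest-frame Lyman-alpha wavelength [nm]

# Nominal photometric band centers [nm]; illustrative values.
BANDS = {"V": 550.0, "R": 660.0, "I": 800.0, "J": 1250.0, "H": 1650.0, "K": 2200.0}

def lya_break_nm(z):
    """Observed wavelength of the Ly-alpha break for a source at redshift z."""
    return LYA_NM * (1.0 + z)

def detectable_bands(z):
    """Bands lying redward of the break, where the afterglow remains visible."""
    edge = lya_break_nm(z)
    return sorted(b for b, wl in BANDS.items() if wl > edge)

def dropout_redshift(band):
    """Redshift above which the given band lies blueward of the break."""
    return BANDS[band] / LYA_NM - 1.0
```

For a burst slightly above $`z=10`$, the break lies beyond 1.34 $`\mu `$m, so only the H and K bands survive, matching the example in the text; comparing the bands in which the afterglow appears and disappears brackets the redshift photometrically.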
Consequently, not only could VHR GRBs be detected and identified relatively easily using existing ground-based instruments, but given the extreme nature of this effect, accurate redshifts could be determined from photometry alone. One possible concern is that dust along the line of sight through the star-forming region or the disk of the host galaxy could produce extinction (see, e.g., Reichart 1998, 1999b) that can mimic the signature of the Ly$`\alpha `$ forest in the spectral flux distribution of the afterglow (Lamb, Castander, & Reichart 1999). However, at VHRs ($`z>5`$), this possibility is less likely due to (1) the lower abundance of dust in the universe at these early times, and (2) the increasing flux deficit of the Ly$`\alpha `$ forest with redshift. We illustrate this latter effect in the lower panel of Figure 13, in which we re-plot the upper panel of Figure 13, except that the solid curves are extincted versions of the corresponding curves in the upper panel. In the lower panel of Figure 13, we have taken $`A_V=1/3`$ mag at the redshift of the burst, an extinction magnitude that may be typical of the disks of host galaxies (see, e.g., Reichart 1998), and we have adopted an extinction curve that is typical of the interstellar medium of our galaxy, using the extinction curve parameterization of Reichart (1999b). Figure 13 shows that, at redshifts higher than $`z\sim 5`$, the signature of the Ly$`\alpha `$ forest clearly dominates the signature of the extinction curve.

## 6 Tracing the Metallicity of the Universe Using GRB Afterglows

Recent studies of QSO absorption lines associated with damped Ly$`\alpha `$ systems (Lu et al. 1996, Prochaska & Wolfe 1997, Pettini et al. 1997a,b) provide strong evidence that the metallicity of the universe decreases with increasing redshift, and decreases dramatically beyond $`z\sim 3`$. Recent observations (Cowie et al. 
1995) have confirmed earlier evidence for a forest of C IV and Si IV doublets associated with the forest of Ly$`\alpha `$ lines (Meyer & York 1987). Observations of these systems extend to $`z=4.5`$, higher than the redshifts of the damped Ly$`\alpha `$ systems that have been observed to date. The detection of a forest of C IV and Si IV doublets, when combined with models of the ionization field from QSO radiation (see, e.g., Meiksin & Madau 1993, Cowie et al. 1995), suggests the existence of a floor under the abundances of heavy elements at roughly $`10^{-2}`$ of solar, extending out to the highest redshifts so far observed (Songaila 1997). These various abundance determinations indicate that heavy elements exist in QSO absorption-line systems as early as $`z=5`$, although at low levels, with a marked increase in the metallicity of these systems evident at $`z\sim 3`$. This metallicity history is consistent with an early universal contamination of primordial gas by massive stars, followed by a delay in forming additional heavy elements until $`z\sim 3`$ (Timmes, Lauroesch & Truran 1995), and finally a rise to 0.1 of solar abundances at $`z=2`$. The abundances of, e.g., Ca and Fe inferred from QSO absorption-line systems do not show a further increase at $`z<1`$ to fully solar values (Meyer & York 1992). This may be due (1) to the fact that the disks of galaxies comparable to the Milky Way, in which solar abundances exist, provide such a small cross section for absorption against background quasars, compared to dwarf galaxies (York 1999); (2) to the depletion of some heavy elements by warm and cold clouds in low $`z`$ galaxies (Pettini et al. 1997a); and (3) possibly to the fact that solar metal abundances may be anomalously large by about a factor of three (Mushotsky 1999, private communication). It may be that all three of these factors play a role. 
However, as we have seen, theoretical calculations of star formation in the universe predict that the earliest generation of stars occurs at redshifts $`z\sim 15`$–20, and that the star formation rate increases thereafter, peaking at $`z\sim 2`$–10 (Ostriker & Gnedin 1996, Gnedin & Ostriker 1997, Valageas & Silk 1999). One therefore expects substantial metal production at $`z>3`$. The discrepancy between this expectation and the abundances deduced from observations of QSO absorption-line systems may reflect differences between the metallicity of galactic disks and star-forming regions, and the metallicities of the hydrogen clouds in the halos of galaxies and/or in the IGM that are responsible for QSO absorption lines. This possibility is supported by Figure 12, taken from Valageas & Silk (1999), which shows the redshift evolution of the metallicities of (1) star-forming regions, (2) stars, (3) gas in galactic halos, and (4) the overall average metallicity of matter given by their model calculations. Also shown are the data points from Pettini et al. (1997b) for the metallicity of 34 damped Ly$`\alpha `$ systems, as inferred from zinc absorption lines. The curve corresponding to the overall average metallicity of matter represents an upper bound for the mean metallicity of the IGM (corresponding to very efficient mixing). If galaxies do not eject metals very extensively into the IGM, the metallicity of the IGM could be much smaller. Figure 12 shows that the mean metallicity expected for star-forming regions is substantially more than that expected for clouds in the IGM and for Ly$`\alpha `$ clouds associated with galactic halos (Lyman limit or damped Ly$`\alpha `$ systems). 
Consequently, it is possible that the equivalent widths (EWs) of the absorption lines associated with the host galaxy of the GRB (if it occurs in a galaxy) will remain large at very high redshifts, even as the EWs of the absorption lines due either to gas clouds in the IGM or associated with the halos of galaxies weaken greatly beyond $`z\sim 3`$. The situation at still higher redshifts, where star formation may occur in globular cluster-sized entities but not galaxies – which have not yet had time to form – is unclear. It is clear from this discussion that studies of absorption-line systems in GRB afterglow spectra can contribute greatly to our understanding of the metallicity history of the universe, and can allow a comparison between the metallicity history of hydrogen clouds along the line of sight to the burst and the metallicity history of the star-forming regions and/or disks of the burst host galaxies (and the globular cluster-sized objects in which GRBs may occur at still higher redshifts). Core collapse SNe, such as the Type Ib-Ic SNe with which GRBs may be associated, produce different relative abundances of various metals than do Type Ia SNe, which are thought to be due to the thermonuclear disruption of white dwarfs (see, e.g., Woosley & Weaver 1986). For example, the spectrum of SN 1998bw, a peculiar Type Ic SN, exhibited absorption lines reflecting the production of substantial amounts of O and Cr, as well as Mg II, Fe, Ca, Si, and some S (Iwamoto et al. 1999, Mazzali et al. 2000). This is typical of core collapse SNe (i.e., Type Ib-Ic and Type II SNe). Thus the relative abundances of various heavy elements can also give clues about the origin of these elements; i.e., whether or not the metallicity is due primarily to Type Ib-Ic and Type II SNe (core collapse SNe), and when and at what rate Type Ia SNe begin to contribute to the increase in metallicity. 
Finally, studies of the absorption lines in GRB afterglow spectra can help to determine whether the ratio \[Fe/H\] is a good chronometer at high redshift, as has usually been assumed, or may not be, as recent studies have suggested (Truran 1999, private communication). Such studies can also help to determine the extent and importance of mixing between the metals and the hydrogen gas in galaxies and in the IGM.

## 7 GRBs and Their Afterglows as Probes of Large-Scale Structure

GRBs can be used to probe the large-scale structure of luminous matter in the universe (Lamb and Quashnock 1993; Quashnock 1996). The use of GRBs for this purpose has the advantage that they occur and are detectable out to VHRs, if GRBs are related to the collapse of massive stars. Thus observations of GRBs can be used to probe the properties of large-scale structure at much higher redshifts (and much earlier) than those that are currently probed by observations of galaxies and QSOs. GRBs also have the advantage that, while galaxy and QSO surveys on the largest angular scales face difficulties due to the absorption of light by dust and gas in the Galaxy, the Galaxy is completely transparent to gamma rays. Thus one can obtain a homogeneous sample of GRBs covering the entire sky, unlike existing and future galaxy and QSO surveys. On the other hand, the use of GRBs to probe large-scale structure suffers from a very serious disadvantage: small-number statistics. HETE-2 is expected to lead to the determination of the redshifts of a hundred or so bursts, while the Swift mission is likely to lead to the determination of the redshifts of $`\sim 1000`$ bursts. These numbers are an order of magnitude smaller than the number of QSOs whose redshifts are currently known, which is itself far smaller than the number of QSO redshifts expected from the Sloan Digital Sky Survey. Thus the use of GRBs themselves as a tracer of large-scale structure in the universe may not be particularly powerful. 
However, it should be possible to use the metal absorption lines and the Ly$`\alpha `$ forest seen in the optical and infrared spectra of GRB afterglows to probe the clustering of matter on the largest scales, as has been done using these same lines in the optical spectra of QSOs (Quashnock, Vanden Berk & York 1996; Quashnock & Vanden Berk 1998; Quashnock & Stein 1999). Deep images of fields around QSOs with absorbers in their spectra have revealed galaxies in the vicinities of absorbers and at the same redshifts (see, e.g., Steidel, Dickenson & Persson 1994; Steidel et al. 1997). A similar association has also been inferred for a substantial fraction of the damped Ly$`\alpha `$ absorption lines (Lanzetta et al. 1995; Le Brun, Bergeron & Boissé 1996). Consequently, it is thought that metal absorption lines are associated with galaxy halos, and possibly galactic disks in some cases. As discussed in §2 and §3, we expect both GRBs and their afterglows to occur and to be detectable out to very high redshifts ($`z>5`$), redshifts that are far larger than the redshifts expected for the most distant quasars. Consequently, the observation of absorption-line systems and damped Ly$`\alpha `$ systems in the optical and infrared spectra of GRB afterglows affords an opportunity to probe the properties of these systems and their clustering at VHRs. At VHRs, one expects to be in the linear regime, so that the mass spectrum of the Ly$`\alpha `$ forest systems, which are thought to lie in the IGM, and possibly that of the damped Ly$`\alpha `$ systems, follows the Harrison-Zeldovich spectrum of density fluctuations in the early universe. Observations of NIR and infrared absorption lines in the afterglow spectra of GRBs may allow one to test this expectation, provided that these lines are detectable. 
The decrease in metallicity with increasing redshift may make it difficult to detect absorption-line systems at redshifts $`z>5`$, but this may not be true for damped Ly$`\alpha `$ systems and the Ly$`\alpha `$ forest.

## 8 GRB Afterglows as a Probe of the Epoch of Re-Ionization

The epoch of re-ionization is one of the most important unknown quantities relevant to large-scale structure and cosmology. The lack of Gunn-Peterson absorption implies that this epoch lies at $`z>5`$, while the lack of distortion of the microwave background by Compton scattering off of free electrons implies that it lies at $`z<50`$. It is plausible to have re-ionization occur anywhere in between these redshifts in most cosmological models. Observations of the afterglows of VHR GRBs can be used to constrain the epoch of re-ionization by the presence or absence of flux shortward of the Lyman limit in the rest frame of the GRBs (see Figure 13). The absence of Gunn-Peterson troughs (Gunn & Peterson 1965) in the spectra of high-redshift quasars (Schneider, Schmidt, & Gunn 1991) and galaxies (Franx et al. 1997) indicates that the IGM was re-ionized at a redshift in excess of $`z\sim 5`$. Whether re-ionization was caused by the first generation of stars, or by quasars, is not yet known. Assuming re-ionization was caused by the first generation of stars, Gnedin & Ostriker (1997) predict that re-ionization occurred at $`z\sim 7`$; assuming re-ionization was caused by quasars, Valageas & Silk (1999) predict that re-ionization occurred at $`z\sim 6`$. However, Haiman & Loeb (1998), assuming that re-ionization was caused by stars and/or quasars, predict that re-ionization occurred at a redshift in excess of $`z\sim 11.5`$. Observations of VHR GRB afterglows may make it possible to distinguish between these possibilities. If re-ionization occurred at a redshift of $`z<6`$, as Valageas & Silk (1999) predict, then the redshift of re-ionization could be measured directly from VHR GRB afterglow photometry. 
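The Gunn-Peterson signature that such photometry would look for can be idealized as saturated absorption at every observed wavelength whose Ly$`\alpha `$ resonance redshift falls in the neutral era, between the re-ionization redshift and the burst redshift (this sketch ignores the damping wing and the residual forest blanketing in the ionized era):

```python
LYA_NM = 121.567  # rest-frame Lyman-alpha wavelength [nm]

def gp_absorbed(wl_obs_nm, z_burst, z_reion):
    """True if the observed wavelength falls inside the saturated
    Gunn-Peterson trough: gas at the Ly-alpha resonance redshift for this
    wavelength was still neutral when the light passed through it."""
    z_res = wl_obs_nm / LYA_NM - 1.0
    return z_reion <= z_res <= z_burst
```

For a burst at $`z=8`$ with re-ionization at $`z=6`$, flux at 900 nm (resonance redshift $`\approx `$ 6.4) is erased, while flux at 1200 nm lies redward of the burst's own Ly$`\alpha `$ line and survives; locating the trough edges in broadband photometry is what would pin down the two redshifts.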
We show this in Figure 13, in which we re-plot the spectral flux distributions shown in Figure 3, assuming that the intergalactic medium was not re-ionized until a redshift of $`4<z<5`$ (dashed curves). If re-ionization occurred at a redshift in excess of $`z\sim 6`$, as Gnedin & Ostriker (1997) and Haiman & Loeb (1998) predict, then observations of the afterglows of VHR GRBs could be used to probe the redshift evolution of the Ly$`\alpha `$ forest out to this redshift using existing ground-based instruments. However, to reach $`z\sim 6`$, observations would have to commence within a few hours of, instead of $`\sim 1`$ day after, a burst. The HETE-2 and Swift missions, which will provide burst positions accurate to a few arcseconds in near-real time, should make such observations possible. Future advances in instrumentation should allow redshifts in excess of $`z\sim 6`$ to be probed.

## 9 Conclusions

The work of Bloom et al. (1999) in the case of GRB 980326, and the subsequent work of Reichart (1999a) and Galama et al. (1999b) in the case of GRB 970228, strongly suggest that at least some GRBs are related to the supernovae of massive stars. If many GRBs are related to the collapse of massive stars, we have shown that the bursts and their afterglows can be used as a powerful probe of many aspects of the very high redshift ($`z\gtrsim 5`$) universe. We have established that both GRBs and their afterglows are detectable out to very high redshifts. HETE-2 should detect GRBs out to $`z\sim 30`$, while the Swift mission would be capable of detecting GRBs out to $`z>70`$, although it is unlikely that bursts occur at such extremely high redshifts. We have shown that, on the basis of theoretical calculations of star formation in the universe, one expects GRBs to occur out to at least $`z\sim 10`$ and possibly $`z\sim 15`$–20, redshifts that are far larger than those expected for the most distant quasars. 
This implies that there are large numbers of GRBs with peak photon number fluxes below the detection thresholds of BATSE and HETE-2, and even below the detection threshold of Swift. It also implies that HETE-2 will detect many GRBs out to $`z\sim 8`$. Similarly, the Swift mission would detect many GRBs out to $`z\sim 14`$, and would detect for the first time many intrinsically fainter GRBs. The mere detection of VHR GRBs would give us our first information about the earliest generations of stars. We have shown how GRBs and their afterglows can be used as beacons to locate core collapse supernovae at redshifts $`z\lesssim 1`$, and to study the properties of these supernovae. We have described the expected properties of the absorption-line systems and the Ly$`\alpha `$ forest in the spectra of GRB afterglows. We have described and compared various strategies for determining the redshifts of very high redshift GRBs. We have shown how the absorption-line systems and the Ly$`\alpha `$ forest visible in the spectra of GRB afterglows can be used to trace the evolution of metallicity in the universe, and to probe the large-scale structure of the universe at very high redshifts. Finally, we have shown how measurement of the Ly$`\alpha `$ break in the spectra of GRB afterglows can be used to constrain, or possibly measure, the epoch at which re-ionization of the universe occurred, using the Gunn-Peterson test.

This research was supported in part by NASA grant NAG5-2868 and NASA contract NASW-4690. We thank Carlo Graziani for discussions about the proper way to calculate GRB peak photon flux and peak photon energy distributions. We thank Don York for discussions about QSO metal absorption-line systems and the metallicity history of the universe, and Jean Quashnock for discussions about using QSO absorption-line systems to probe the large-scale structure of the universe. 
We thank Jim Truran for discussions about whether \[Fe/H\] is a good chronometer at early times in the evolution of the Galaxy and at high redshift. Finally, we thank Dave Cole for providing information about the limiting magnitudes that are detectable using current NIR instruments.
# Intermittent Granular Flow and Clogging with Internal Avalanches

## Abstract

The dynamics of intermittent granular flow through an orifice in a granular bin, and the associated clogging due to the formation of arches blocking the outlet, are studied numerically in two dimensions. Our numerical results indicate that for small hole sizes the system self-organizes to a steady state, and the distribution of grain displacements decays as a power law. For large holes, the outflow distributions are also observed to follow power laws. PACS number(s): 05.70.Jk, 64.60.Lx, 74.80.Bj, 46.10.+z

How much granular mass is expected to flow out through a hole at the bottom of a granular bin? If the hole is sufficiently small, the outflow of grains soon stops due to the formation of a stable arch of grains clogging the hole. If the arch is broken, the grains in the arch become unstable and a cascade of grain displacements, called an ‘internal avalanche’, propagates within the system. As a result, a fresh flow starts through the hole, which also eventually stops due to the formation of another arch. It is known that the amount of outflowing granular mass between successive clogging events varies over a wide range; on average, however, it depends sensitively on the hole diameter as compared to the grain size. In this paper, we study this distribution of clogged outflows from a granular bin in two dimensions using computer simulations. We observe a power law dependence of the distribution. This phenomenon is important in the context of recent experimental and theoretical studies of the avalanche size distributions from the surfaces of a sandpile on an open base . Research in this direction is motivated by the expectation that a sandpile is a simple example of a Self-Organized Critical (SOC) system . 
According to the ideas of SOC, long range spatio-temporal correlations may spontaneously emerge in slowly driven dynamical systems without fine tuning of any control parameter . Laboratory experiments, however, have not always found evidence of criticality in sandpiles. The fluctuating outflow of granular mass from a slowly rotating horizontal semicircular drum did not show a power law decay of its power spectrum, as expected from the SOC theory . In a second experiment, the avalanche size was directly measured by the outflow amount during the dropping of sand on a sandpile situated on a horizontal base. It was observed that the distribution of the avalanche size obeys a scaling behaviour only for small piles, but not for large ones . However, it was claimed that a stretched exponential distribution for the avalanches fits the whole range of system sizes . The reasons for not observing scaling are the existence of two angles of repose, as observed in , and the effect of inertia of the sand mass as it moves down during avalanches. This effect was suppressed in an experiment with anisotropic rice grains, and scaling was successfully observed . In contrast, we believe internal avalanches are better candidates for observing SOC. Due to the high compactness of the grains in the granular material in a bin, a grain never gets sufficient time to accelerate much, so the effect of inertia should be small. Internal avalanches were first studied in a block model of a granular system by removing grains situated at the bottom of the bin one after another . While this model and also other cellular automata and ‘Tetris-like’ models did exhibit scaling behaviour for the internal avalanches, the disc models in continuous space did not show sufficient evidence of SOC. We study a two-dimensional granular system with grains kept in a vertical rectangular bin. Initially the system has no hole at the bottom and we fill the bin up to a certain height. 
A hole of half-width $`R`$ is then made at the centre of the bottom. Some grains drop out through the hole. This flow stops when an ‘arch’ is formed, clogging the flow (Fig. 1). In two dimensions, an arch is a sequence of grains, mutually depending on one another, such that their weights are balanced by mutual reaction forces. These arches are formed due to the competition among the grains to occupy the same vacant region of space. In the steady state, such arches are observed everywhere. The arch covering the hole is broken by randomly deleting one of the two grains at the bottom of the arch. An avalanche of grain movements then starts within the system. Those grains which reach the bottom line with their centres within the hole are considered to flow out of the system and are deleted. A deleted grain is immediately replaced at a random position on the top surface. We measure the number of grains $`\rho `$ that flow out of the system during an avalanche between two successive clogging events, and study its distribution for various values of the hole size $`R`$. The granular system is represented by $`N`$ hard monodisperse discs of radii $`r`$. No two grains are allowed to overlap, but they can touch each other and one can roll over another if possible. A rectangular area of size $`L_x\times L_y`$ on the $`xy`$ plane represents our two-dimensional bin. Periodic boundary conditions are used along the $`x`$ direction and gravity acts along the $`y`$ direction. A ‘pseudo-dynamics’ method is used to study the avalanches. In this method we do not solve the classical equations of motion for the grain system, as would be done in a molecular dynamics simulation. Here, the direction of gravity and the local geometrical constraints due to the presence of other grains govern the movement of a grain. To justify the use of the pseudo-dynamics, we argue that due to the high compactness of the system a grain never gets sufficient time to accelerate to large velocities. 
During the avalanches, the grains move in parallel. This is mimicked in the simulation by discretizing the time. In one time unit, all grains are given one chance each to move, by selecting them one after another in a random sequence. A grain can move in only two ways: either it can $`fall`$ a certain distance or $`roll`$ a certain angle over another grain. We use a single parameter $`\delta `$ to characterize both cases: $`\delta `$ is the maximum possible free fall a grain can have in a unit time. While rolling, the centre of a grain moves along the arc of a circle of radius $`2r`$ centred on another grain in contact, and the maximum length of the arc is also limited to $`\delta `$. Therefore, the maximum angle a grain can roll freely over another grain in contact is $`\theta =\delta /2r`$. We believe this dynamics models an assembly of very light grains in $`0+`$ gravity. The presence of other grains in the neighbourhood of a particular grain imposes severe restrictions on its possible movements. This is taken care of in the simulation using a search algorithm which digitizes the bin into a square grid and associates the serial numbers of grains with the primitive cells of the grid. A cell contains at most one grain when $`r\ge 1/\sqrt{2}`$ is chosen. Then one needs to search only the 24 nearest neighbouring cells for possible contact grains. A grain $`n`$ in a stable position is supported by two other grains with serial numbers $`n_L`$ and $`n_R`$ on the left $`(L)`$ and right $`(R)`$. The position of the grain $`n`$ is then updated according to the following prescription:

* If $`n_L=n_R=0`$, it falls,
* if $`n_L\ne 0`$ but $`n_R=0`$, it rolls over $`n_L`$,
* if $`n_L=0`$ but $`n_R\ne 0`$, it rolls over $`n_R`$,
* if $`n_L\ne 0`$ and $`n_R\ne 0`$, it is stable.

When the grain $`n`$ with its centre at $`(x_n,y_n)`$ is allowed to fall, it may come in contact with a grain $`m`$ at $`(x_m,y_m)`$ within the distance $`\delta `$. 
Therefore, during the fall, the coordinates are updated as: $`x_n^{\prime }=x_n`$ and $`y_n^{\prime }=y_m+\sqrt{4r^2-(x_n-x_m)^2}`$ if $`y_n-y_n^{\prime }<\delta `$; otherwise, $`y_n^{\prime }=y_n-\delta `$. Similarly, the grain $`n`$ may roll an angle $`\theta ^{\prime }`$ over a grain $`t`$ and come in contact with another grain $`m`$ at $`(x_m,y_m)`$. The coordinates of the grain $`n`$ in the new position, where it touches both $`t`$ and $`m`$, are: $`x_n^{\prime }=\frac{1}{2}(x_m+x_t)\pm (y_m-y_t)\sqrt{\frac{4r^2}{d_{mt}^2}-\frac{1}{4}}`$ and $`y_n^{\prime }=\frac{1}{2}(y_m+y_t)\mp (x_m-x_t)\sqrt{\frac{4r^2}{d_{mt}^2}-\frac{1}{4}}`$. Here $`d_{mt}`$ is the distance between the centres of the grains $`m`$ and $`t`$, and the upper and lower signs correspond to the left and right rolls. This position is accepted only if $`\theta ^{\prime }\le \theta `$; otherwise the grain $`n`$ rolls down the maximum angle $`\theta `$ over the grain $`t`$. The initial configuration of grains within the bin is generated by releasing grains sequentially one after another along vertical trajectories with randomly chosen horizontal positions. They fall till they touch the growing heap, and are then allowed to roll down the surface to their local stable positions. This is called the ‘ballistic deposition with re-structuring’ method (BDRM) . The initial pattern has no arches, since a grain rolling down along the surface does not need to compete with any other grain. However, arches are observed when the system is disturbed by opening the outlet and allowing the grains to flow out. After a large number of outflows, the system reaches a steady state. One can characterize this state in many ways. We characterize the steady state by the distribution of the angles of the contact vectors. When two grains are in contact, we draw two vectors from their centres to the contact points. 
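The fall and roll updates above translate directly into code. A minimal sketch, with function names of my choosing; the roll branch places grain $`n`$ in simultaneous contact with both supports:

```python
import math

def fall_update(xn, yn, xm, ym, r, delta):
    """Drop grain n vertically: land on grain m if it is reachable within
    delta, otherwise free-fall a distance delta."""
    y_touch = ym + math.sqrt(4.0 * r * r - (xn - xm) ** 2)
    if yn - y_touch < delta:
        return xn, y_touch
    return xn, yn - delta

def roll_update(xm, ym, xt, yt, r, left):
    """New centre of grain n where it touches both supports m and t
    (all discs of radius r); `left` selects the upper-sign branch."""
    d2 = (xm - xt) ** 2 + (ym - yt) ** 2  # squared centre separation d_mt^2
    s = math.sqrt(4.0 * r * r / d2 - 0.25)
    sign = 1.0 if left else -1.0
    xn = 0.5 * (xm + xt) + sign * (ym - yt) * s
    yn = 0.5 * (ym + yt) - sign * (xm - xt) * s
    return xn, yn
```

A quick geometric check: whichever branch is taken, the returned centre lies at distance exactly $`2r`$ from both supporting grains, as the contact condition requires.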
The angle $`\varphi `$ is the angle between the contact vector and the $`+x`$ direction. We measured the probability distribution $`P(\varphi )`$ in the range $`0^o`$ to $`360^o`$ for the initial state as well as in the steady state. To arrive at the steady state we discard $`2N`$ outflows, though the transition takes place even earlier than that. The contact angle distributions are shown in Fig. 2. The plot of $`P(\varphi )`$ vs. $`\varphi `$ for the initial grain distribution shows four sharp peaks at $`60^o,120^o,240^o,300^o`$. We explain them in the following way. If the grains were placed at the bottom in a completely ordered way, mutually touching one another without any gap, the structure generated by the BDRM method would be the hexagonal close packing (HCP) structure, where every grain has six other grains in contact at intervals of $`60^o`$. However, in our case, though we dropped grains randomly at the bottom level, the deterministic piling process retains some effect of the HCP structure. On the other hand, it is also known that the probability of a grain having a number of contact neighbours different from four decreases with the height as a power law , which is certainly consistent with the four peaks we obtained. There are also two small peaks at $`0^o`$ and at $`180^o`$ due to the horizontal contacts of grains near the base level. In the steady state, however, this distribution changes drastically: the peaks become lower and their widths broaden. The randomness introduced into the system by randomly selecting grains at the bottom increases the weight of other contact angles as well, but is not capable of totally randomizing the distribution into a flat one. We checked that after a large number of avalanches, this steady state distribution remains unchanged. Next we study the statistics of the displacement distributions in the granular heap in the limit $`R<r`$. In this limit, no grain can drop out through the hole.
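The contact-angle statistics described above are easy to reproduce from a list of grain centres. A small sketch (our own illustration, not the authors' code; contact is detected within a tolerance of the centre distance $`2r`$), which for a perfect HCP-like cluster yields peaks only at multiples of $`60^o`$:

```python
import math
from collections import Counter

def contact_angles(centres, r, eps=1e-6):
    """Angles (in degrees, folded into [0, 360)) of all directed contact
    vectors.  Two grains of radius r are taken to be in contact when
    their centre distance is within a tolerance eps of 2r."""
    angles = []
    for i, (xi, yi) in enumerate(centres):
        for j, (xj, yj) in enumerate(centres):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            if abs(math.hypot(dx, dy) - 2 * r) < eps:
                angles.append(math.degrees(math.atan2(dy, dx)) % 360.0)
    return angles

# A perfect HCP-like cluster: a central grain with six neighbours at
# 60-degree intervals; every contact angle is then a multiple of 60.
r = 0.5
centres = [(0.0, 0.0)] + [(math.cos(math.radians(a)), math.sin(math.radians(a)))
                          for a in range(0, 360, 60)]
hist = Counter(round(a) % 360 for a in contact_angles(centres, r))
```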
The arch is broken by deleting only one grain. The avalanche is allowed to propagate within the system. The displacement $`\mathrm{\Delta }s`$ is the absolute magnitude of the displacement of a grain before and after an avalanche. We observe that there is a huge variation in the displacements $`\mathrm{\Delta }s`$ of the grains. Most grains are displaced very little, whereas a small number have much bigger displacements. The displacement distribution $`P(\mathrm{\Delta }s)`$ is measured in the steady state over a large number of avalanches. The lower cut-off of the distribution turned out to depend strongly on the tolerance factor $`ϵ`$ used in the simulation: if the centres of two grains are separated by a distance between $`r-ϵ`$ and $`r+ϵ`$, they are considered to be in contact. The upper cut-off of the distribution is of the order of unity, since when a grain is deleted at the bottom, its neighbouring grains drop a distance of the order of the grain diameter. Systems of four different sizes have been simulated, with $`N`$ = 300, 625, 2500 and 10000 in bins with base sizes $`L_x`$ = 20, 40, 80 and 160 units respectively. The first $`5N`$ avalanches are discarded to allow the system to reach the steady state. A list is maintained of the coordinates of all grains; this list is compared before and after an avalanche to measure $`\mathrm{\Delta }s`$. On a double logarithmic scale, the plot of $`P(\mathrm{\Delta }s)`$ vs. $`\mathrm{\Delta }s`$ is a straight line over many decades, with a little curvature at the right edge (Fig. 3). However, the slope of this curve is found to increase with the system size. Different extrapolations were tried; the best is obtained with $`L^{-1/2}`$, which leads to a slope of 0.97 $`\pm `$ 0.05 in the $`N\to \mathrm{\infty }`$ limit. We find similar behaviour for the distributions of the $`x`$ and $`y`$ components of the displacement vectors. Finally we study the outflow distribution and clogging.
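Extracting the slope of $`P(\mathrm{\Delta }s)`$ from a double logarithmic plot can be done with a log-binned histogram and a least-squares fit. A self-contained sketch (our own helpers, applied to synthetic log-uniform displacements, for which the exact slope is $`-1`$):

```python
import math
import random

def log_binned_density(samples, smin, smax, nbins):
    """(log10 bin centre, log10 probability density) on logarithmic bins."""
    edges = [smin * (smax / smin)**(k / nbins) for k in range(nbins + 1)]
    counts = [0] * nbins
    span = math.log(smax / smin)
    for s in samples:
        k = min(int(nbins * math.log(s / smin) / span), nbins - 1)
        counts[k] += 1
    pts = []
    for k in range(nbins):
        if counts[k]:
            width = edges[k + 1] - edges[k]
            centre = math.sqrt(edges[k] * edges[k + 1])
            pts.append((math.log10(centre),
                        math.log10(counts[k] / (len(samples) * width))))
    return pts

def fit_slope(pts):
    """Least-squares slope of y against x."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx)**2 for x, _ in pts)
    return num / den

# Synthetic displacements with P(s) ~ 1/s between a lower cut-off (set by
# the tolerance in the real simulation) and an upper cut-off of order one.
random.seed(1)
smin, smax = 1e-4, 1.0
samples = [smin * (smax / smin)**random.random() for _ in range(200000)]
slope = fit_slope(log_binned_density(samples, smin, smax, 20))
```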
We use a bin of dimensions $`L_x=50`$ and $`L_y=200`$ units containing 4000 grains of radius $`r=1/\sqrt{2}`$, with a hole of half-width $`R`$ at the centre of the bottom of the bin. In a stable state an arch covers the hole and prevents the other grains from flowing out. The two lowermost grains of the arch, on the two sides of the hole, are considered, and one of them is selected randomly. By deleting this grain, the arch is made unstable, which creates an internal avalanche of grain movements within the bin. Grains which come down and touch the bottom line of the bin with their centres within the hole are removed. While such a grain is removed, the avalanche dynamics is frozen, and the grain is placed back on the top surface by the BDRM method. Two conical shapes are formed. Some grains, located on the two sides of the hole, are never disturbed by any avalanche; these undisturbed grains form one cone. The second cone is formed on the upper surface, even though the grains were dropped onto this surface at randomly chosen horizontal coordinates with uniform probability. For a fixed size of the base, the angle of the cone is found to increase with the average height of the granular column. The number of grains $`\rho `$ that leave the system in an internal avalanche is counted between successive clogging events. Holes of five different sizes, i.e., $`R=0.75,1,1.25,1.5,2`$, are used. We generated around $`5\times 10^5`$ such outflows for each hole size and collected the distribution data for $`P(\rho )`$. We show these curves in Fig. 4 on a double logarithmic scale. The grain which is deleted to break the arch is also counted in $`\rho `$. With increasing $`R`$ it becomes increasingly improbable that the outflow consists of just a single grain; consequently $`P(1)`$ decreases monotonically with increasing $`R`$. For large values of $`\rho `$, the curves fall off. All curves of $`P(\rho )`$ vs.
$`\rho `$ plots show straight lines for large $`\rho `$ values, signifying power law distributions of the avalanche sizes. The slopes of these curves, $`\sigma (R)`$, are found to depend strongly on the size of the hole as well as on the grain size: $`P(\rho )\propto \rho ^{-\sigma (R,r)}`$. We observe that $`\sigma (R,r)`$ is in fact a linear function of $`R/r`$: $`\sigma (R,r)=k(\frac{R}{r})`$ with k=1.6. The average outflow $`<\rho (R/r)>`$ diverges exponentially as $`exp(\alpha (\frac{R}{r}-1))`$ with $`\alpha =0.7`$. To summarize, we have studied a numerical model of intermittent granular flow and clogging, with associated internal avalanches, from a two-dimensional bin using a pseudo dynamics method. The flow of grains through a hole at the bottom of the bin is repeatedly clogged due to the formation of stable arches of grains, which we break to start fresh flows. Our numerical simulation results indicate that the system self-organizes to a state different from the initial state. In the limit $`R<r`$, the distribution of grain displacements follows a power law. On the other hand, when $`R>r`$, the outflow size distribution is also observed to follow a power law. Funding by the Indo-French Centre for the Promotion of Advanced Research (IFCPAR) through a project is gratefully acknowledged. Electronic Address: manna@boson.bose.res.in Electronic Address: hans@ica1.uni-stuttgart.de
# Granular Flow, Collisional Cooling and Charged Grains

## Abstract

The non-Newtonian character of granular flow in a vertical pipe is analyzed. The time evolution of the flow velocity and the velocity fluctuations (or granular temperature) is derived. The steady state velocity has a power law dependence on the pipe width with an exponent between $`3/4`$ and $`3/2`$. The flow becomes faster the more efficient the collisional cooling is, provided the density remains low enough. The dependence of collisional cooling on the solid fraction, the restitution coefficient and a possible electric charge of all grains is discussed in detail.

## 1 Introduction

Granular materials like dry sand are classical many-particle systems which differ significantly from solids and liquids in their dynamical behavior. They seem to have some properties in common with solids, like the ability of densely packed grains to sustain shear. Other properties, like the ability to flow through a hopper, are reminiscent of a fluid. Upon looking more closely, however, it turns out that such similarities are only superficial: force localization or arching in granular packings and their plastic yield behaviour are distinctly different from solids; cluster instabilities and nonlinear internal friction make granular flow very different from that of ordinary, Newtonian fluids. These differences often manifest themselves in extraordinarily strong fluctuations, which may cause accidents when ignored in technological applications. This is why understanding the statistical physics of granular materials is important and has been very fruitful (see e.g. ). In this paper a few selected examples will be presented, which on the one hand document the progress in understanding brought about by applying concepts from statistical physics, and on the other hand point out some areas where important and difficult questions invite future research.
One of the simplest geometries displaying the non-Newtonian character of granular flow is an evacuated vertical tube through which the grains fall. The experimental investigation is difficult, however, because the flow depends sensitively on electric charging and humidity . Nevertheless the ideal uncharged dry granular medium falling in vacuum through a vertical pipe is an important reference case and a natural starting point for computer simulations. Using this idealized example we shall show that the properties of granular flow can be explained if two essential physical ingredients are understood: the interaction of the granular flow with the container walls, and the phenomenon of collisional cooling. This technical term draws an analogy between the disordered relative motion of the agitated grains and the thermal motion of gas molecules. In a granular “gas”, unlike in a molecular gas, the relative motion of the grains is reduced in every collision due to the irreversible loss of kinetic energy to the internal degrees of freedom of the grains. This is called collisional cooling. Agitated dry grains are usually electrically charged due to contact electrification. Its effect on the dynamical behavior of granular materials has hardly been studied so far. The reason is certainly not lack of interest, as intentional charging is the basis of several modern applications of granular materials in industrial processes. Rather, the reason is that well-controlled experiments with electrically charged grains are difficult, as is the theory, because of the long-range nature of the Coulomb interaction. One such application is the electrostatic separation of scrap plastics into the raw materials for recycling. A similar technique is used to separate potassium salts, a raw material for fertilizers, from rock salt. In a “conditioning process” chemically different grains get charged oppositely.
Then they fall through a condenser tower, where they are deflected in opposite directions and hence separated. Such a dry separation has the advantage of avoiding the environmental damage that the old-fashioned chemical separation method would cause. Another application is powder varnishing. In order to avoid the harmful fumes of ordinary paints, the dry pigment powder is charged monopolarly and attracted to the grounded piece of metal to be varnished. Once covered with the powder, the metal is heated so that the powder melts and forms a continuous film. Recently a rather complete understanding has been reached of how monopolar charging affects collisional cooling . However, little is known about the influence of the charges on the grain-wall interaction statistics. The monopolar case is much simpler than the bipolar one: if all grains repel each other, collisional cooling cannot lead to the clustering instability observed for neutral grains . The case where the grains carry charges of either sign is much more difficult, because the clustering instability might even be enhanced. It has not been investigated yet.

## 2 Why the laws of Hagen, Poiseuille and Bagnold fail for granular pipe flow

Since the flow through a vertical pipe is such a basic example, it has been addressed many times, including some classical work from the last century. Here we remind the reader of some of the most elementary ideas and results concerning pipe flow, and at the same time show where they fail. The general situation is much more complex, as we are going to point out in the subsequent sections. Force balance requires that the divergence of the stress tensor $`\sigma _{ij}`$ compensates the weight per unit volume in the steady state of the flowing material: $$\partial _x\sigma _{zx}+\partial _z\sigma _{zz}=mgn,$$ (1) where $`m`$ denotes the molecular or grain mass, $`n`$ the number density of molecules or grains and $`g`$ the gravitational acceleration.
The partial derivative in the vertical (the $`z`$-) direction vanishes because of translational invariance along the pipe. $`\partial _x`$ denotes the partial derivative in the transversal direction. (For the sake of transparency the equations are given for the two-dimensional case in this section.) In Newtonian or simple liquids the stress tensor is assumed to be proportional to the shear rate, $$\sigma _{zx}=\eta \partial _xv_z.$$ (2) The proportionality constant $`\eta `$ is the viscosity. Inserting this into (1) immediately gives the parabolic velocity profile of Hagen-Poiseuille flow, $`v_z=v_{\mathrm{max}}-(mgn/2\eta )x^2`$. No-slip boundary conditions then imply that the flow velocity averaged over the cross section of the pipe scales like $`\overline{v}\propto W^2`$. According to kinetic theory the viscosity $`\eta `$ is proportional to the thermal velocity. In lowest order the thermal motion of liquid molecules is independent of the average flow velocity; it is given by the coupling of the liquid to a heat bath. For a granular gas, the thermal velocity must be replaced by a typical relative velocity $`\delta v`$ of the grains. Due to collisional cooling $`\delta v`$ would drop to zero if there were no flow. This is the most important difference between liquid and granular flow. It shows that for a granular gas the collision rate between the grains, and hence the viscosity, cannot be regarded as independent of the average flow velocity in lowest order. Bagnold argued that the typical relative motion should be proportional to the absolute value of the shear rate, $`\eta \propto \delta v\propto |\partial _xv_z|`$, so that $$\sigma _{zx}\propto |\partial _xv_z|\partial _xv_z.$$ (3) Inserting this into (1) leads to the result that the average flow velocity must scale with the pipe diameter as $`\overline{v}\propto W^{3/2}`$. However, Bagnold’s argument ignores that there is a second characteristic velocity in the system, which is $`\sqrt{gd}`$, where $`d`$ is the diameter of the grains.
It enters due to the nonlinear coupling between the flow velocity and the irregular grain motion, as we are going to point out in the next section. Hence, for granular flow through a vertical pipe, the viscosity is a function of both the average flow velocity and $`\sqrt{gd}`$. This changes the scaling of $`\overline{v}`$ with the diameter of the pipe, of course. Very little is known about the flow of dry granular materials at high solid fractions, where the picture of gas-like dynamics, which we employed so far, no longer applies. Hagen studied the discharge from a silo and postulated that the flow rate is not limited by plastic deformations inside the packing but by arching at the outlet. He assumed that the only relevant dimensionful parameters for outlets much larger than the grain size are $`g`$ and the width $`W`$ of the outlet, for which we use the same notation as for the pipe diameter. Therefore, he concluded that up to dimensionless prefactors $$\overline{v}\propto \sqrt{gW}.$$ (4) He confirmed this experimentally for the silo geometry, where the outlet is smaller than the diameter of the container. It is tempting to expect that this holds also for pipe flow at high solid fractions. However, in our computer simulations we never observed such a behaviour, although we studied volume fractions which were so high that the addition of a single particle would block the pipe completely. Without investigating dense granular flow any further in this paper, we just want to point out that Hagen’s dimensional argument seems less plausible for a pipe than for a silo, because important arching now occurs simultaneously everywhere along the pipe, and the dynamics of decompaction waves and plastic deformations far from the lower end of the pipe may well depend on the dimensionless ratio $`W/d`$, for instance. This spoils the argument leading to (4), of course.
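The Newtonian and Bagnold predictions of this section can be recovered numerically by integrating the stress balance $`\partial _x\sigma _{zx}=mgn`$ across the pipe with no-slip walls, once with the linear closure (2) and once with Bagnold's quadratic closure (3). A sketch (our own illustration with arbitrary coefficients; Hagen's relation (4) is a dimensional postulate and is not integrated here):

```python
import math

def mean_velocity(W, closure, n_grid=2000, mgn=1.0, coeff=1.0):
    """Mean velocity for the stress profile sigma_zx(x) = mgn * x
    (so that d sigma / dx = mgn) with no-slip walls at x = +/- W/2:
      'newton'  : sigma = coeff * v'         ->  vbar ~ W**2
      'bagnold' : sigma = coeff * |v'| * v'  ->  vbar ~ W**1.5
    The speed is integrated inward from the wall, where v = 0."""
    h = (W / 2) / n_grid
    v, vsum = 0.0, 0.0
    for k in range(n_grid):
        x = W / 2 - (k + 0.5) * h             # from the wall towards the centre
        sigma = mgn * x
        dv = sigma / coeff if closure == "newton" else math.sqrt(sigma / coeff)
        v += dv * h                           # accumulate |v'| into v(x)
        vsum += v * h                         # integral of v over the half pipe
    return 2.0 * vsum / W                     # symmetric -> equals full average

def scaling_exponent(closure, W1=10.0, W2=40.0):
    return (math.log(mean_velocity(W2, closure) / mean_velocity(W1, closure))
            / math.log(W2 / W1))
```

The measured exponents are 2 for the Newtonian closure and 3/2 for Bagnold's, independent of the discretization.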
## 3 General equations of momentum and energy balance

A vertical pipe can be viewed essentially as a one-dimensional system, if one averages all dynamical quantities over the cross section. In the following we derive the time evolution of such cross-sectional averages of the velocity and the velocity fluctuations, assuming they are constant along the pipe. This assumption ignores the spontaneous formation of density waves, which is legitimate if the pipe is sufficiently short. Then a homogeneous state can be maintained. It need not be stationary, though, and the equations we shall derive describe its temporal evolution. The physical significance of this study is based on the assumption that short sections of a long pipe are locally homogeneous and close to the corresponding steady state. The translational invariance along the pipe implies that the average velocity only has an axial component. Its time evolution is given by the competition of a gain term, which is the gravitational acceleration $`g`$, and a loss term due to the momentum transfer to the pipe wall. Here we focus on the behavior at low enough densities, where the dynamics are dominated by collisions rather than frictional contacts. Then the momentum transfer to the pipe wall is proportional to the number of grain-wall collisions, $`\dot{N}_\mathrm{w}`$. In each such collision the axial velocity of the colliding particle changes by an average value $`\mathrm{\Delta }\overline{v}`$. All grains are assumed to be equal for simplicity. Hence the average axial velocity changes by $`\mathrm{\Delta }\overline{v}/N`$ in a wall collision. The momentum balance then reads: $$\dot{\overline{v}}=g-\dot{N}_\mathrm{w}\mathrm{\Delta }\overline{v}/N.$$ (5) More subtle is the energy balance, which gives rise to an equation for the root mean square fluctuation of the velocity, $`\delta v=\sqrt{\langle \vec{v}^2\rangle -\langle \vec{v}\rangle ^2}`$. This can be regarded as the typical absolute value of relative velocities.
The rates of energy dissipation $`\dot{E}_{\mathrm{diss}}`$ and of change of kinetic and potential energy, $`\dot{E}_{\mathrm{kin}}`$ and $`\dot{E}_{\mathrm{pot}}`$, must add up to zero due to energy conservation, $$0=\dot{E}_{\mathrm{diss}}+\dot{E}_{\mathrm{kin}}+\dot{E}_{\mathrm{pot}}.$$ (6) The change in kinetic energy per unit time is $$\dot{E}_{\mathrm{kin}}=Nm(\overline{v}\dot{\overline{v}}+\delta v\dot{\delta v}),$$ (7) where $`N`$ is the total number of particles in the pipe and $`m`$ their mass. The potential energy (in the absence of Coulomb interactions between the grains) changes at a rate $$\dot{E}_{\mathrm{pot}}=-Nmg\overline{v}.$$ (8) If only the irreversible nature of binary grain collisions is taken into account, the energy dissipation rate is proportional to the number of binary collisions per unit time, $`\dot{N}_\mathrm{g}`$, times the loss of kinetic energy in the relative motion of the collision partners, $$\dot{E}_{\mathrm{diss}}=\dot{N}_\mathrm{g}\mathrm{\Delta }E,$$ (9) with $$\mathrm{\Delta }E=\mathrm{\Delta }(m\delta v^2/2)=m\delta v\mathrm{\Delta }(\delta v).$$ (10) Solving (6) for $`\dot{\delta v}`$ and replacing $`\dot{\overline{v}}-g`$ using (5) gives $$\dot{\delta v}=(\overline{v}/\delta v)\dot{N}_\mathrm{w}\mathrm{\Delta }\overline{v}/N-\dot{N}_\mathrm{g}\mathrm{\Delta }(\delta v)/N.$$ (11) As for the average velocity, (5), the typical relative velocity has a gain and a loss term. The gain term has a remarkable symmetry to the loss term in (5), which is completely general. Only the second term in (11) may be different, if additional modes of energy dissipation like collisions with the walls or friction are included. The gain term in (11) also subsumes the production of granular temperature in the interior of the pipe due to the finite shear rate, which is remarkable, as it expresses everything in terms of physics at the wall.
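That (11) is exactly the content of energy conservation can be checked numerically: for arbitrary positive values of all the quantities entering (5)-(10) (with $`\overline{v}`$ counted positive in the direction of the fall, so that $`\dot{E}_{\mathrm{pot}}=-Nmg\overline{v}`$), the three energy rates add up to zero identically. A minimal sketch, with made-up numbers:

```python
# Arbitrary positive values for every quantity entering (5)-(11):
N, m, g = 100.0, 1.0, 9.81        # particle number, mass, gravity
vbar, dv = 3.0, 0.8               # mean velocity, velocity fluctuation
Nw_dot, dvbar = 50.0, 0.2         # wall collision rate, mean axial velocity change
Ng_dot, ddv = 400.0, 0.05         # grain collision rate, relative-velocity loss

# Momentum balance (5) and fluctuation balance (11):
vbar_dot = g - Nw_dot * dvbar / N
dv_dot = (vbar / dv) * Nw_dot * dvbar / N - Ng_dot * ddv / N

# Energy bookkeeping, eqs. (7)-(10):
E_kin_dot = N * m * (vbar * vbar_dot + dv * dv_dot)
E_pot_dot = -N * m * g * vbar
E_diss_dot = Ng_dot * m * dv * ddv

# Energy conservation (6) then holds identically:
residual = E_diss_dot + E_kin_dot + E_pot_dot
```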
Once the loss terms of the balance equations (5) and (11) are known, the time evolution of the average velocity and the velocity fluctuations can be calculated, because the gain terms are given. In this sense, it is sufficient to have a statistical description of collisional cooling (which gives the loss term in (11)) and of the momentum transfer of the granular gas to a wall (which gives the loss term in (5)) in order to describe granular flow in a vertical pipe. It turns out that collisional cooling is easier, because it cannot depend on the average velocity due to the Galilei invariance of the grain-grain interactions, whereas the momentum transfer to the walls depends on both $`\overline{v}`$ and $`\delta v`$.

## 4 Collisional cooling

We shall now specify $`\dot{N}_\mathrm{g}`$ and $`\mathrm{\Delta }(\delta v)`$. The time between two subsequent collisions of a particle can be estimated by the mean free path, $`\lambda `$, divided by a typical relative velocity, $`\delta v`$. Hence the number of binary collisions per unit time is proportional to $$\dot{N}_\mathrm{g}\propto N\delta v/\lambda .$$ (12) Here we assumed that the flow is sufficiently homogeneous that the local variations of $`\lambda `$ and $`\delta v`$ are unimportant. In each collision the relative normal velocity gets reduced by a factor, the restitution coefficient $`e_\mathrm{n}<1`$. For simplicity we assume that the restitution coefficient is a constant. Correspondingly a fraction of the kinetic energy of relative motion is dissipated in each collision, $$\mathrm{\Delta }E=(1-e_\mathrm{n}^2)\frac{m}{2}\delta v^2$$ (13) with the grain mass $`m`$. According to (10), $`\mathrm{\Delta }(\delta v)=(1-e_\mathrm{n}^2)\delta v/2`$.
Putting this together, the dissipation rate (9) is $$\dot{E}_{\mathrm{diss}}=k_\mathrm{g}N\frac{m}{d}\delta v^3.$$ (14) The dimensionless proportionality constant $`k_\mathrm{g}`$ contains the dependence on the solid fraction $`\nu \propto d/\lambda `$ and the restitution coefficient $`e_\mathrm{n}`$ and can be calculated analytically, if one assumes that the probability distribution of the particles is Gaussian . From these considerations one can draw a very general conclusion for the steady state values of $`\overline{v}`$ and $`\delta v`$. In the steady state the kinetic energy is constant, so that (6) together with (8) and (14) implies $$\frac{\overline{v}_\mathrm{s}}{\delta v_\mathrm{s}^3}=\frac{k_\mathrm{g}}{gd}.$$ (15) Whenever the dissipation is dominated by irreversible binary collisions and the flow is sufficiently homogeneous, the steady flow velocity in a vertical pipe should be proportional to the third power of the velocity fluctuation, i.e. to the granular temperature to the power $`3/2`$. The proportionality constant does not depend on the width of the pipe. We tested this relation by two-dimensional event-driven molecular dynamics simulations . The agreement is surprisingly good, given the simple arguments above, even quantitatively. However, it turns out that the proportionality constant in (15) has a weak dependence on the width of the pipe, which can be traced back to deviations of the velocity distribution from an isotropic Gaussian: the vertical velocity component has a skewed distribution with an enhanced tail towards zero velocity .

## 5 Interaction of the granular flow with the wall

The collision rate $`\dot{N}_\mathrm{w}`$ with the walls of the vertical pipe can be determined by noting that the number of particles colliding with a unit area of the wall per unit time for low density $`n`$ is given by $`|v_{\perp }|n`$.
As the typical velocity perpendicular to the pipe wall, $`|v_{\perp }|`$, is proportional to $`\delta v`$, one obtains $$\dot{N}_\mathrm{w}\propto N\delta v/W.$$ (16) This is the place where the pipe width $`W`$ enters the flow dynamics. To specify by how much the vertical velocity of a grain changes on average when it collides with the wall is much more difficult, as it depends on the local geometry of the wall. In our simulations the wall consisted of a dense array of circular particles of equal size. When a grain is reflected from such a wall particle, a fraction of the vertical component of its velocity is reversed. Instead of averaging this over all collision geometries, we give some general arguments narrowing down the possible functional form of $`\mathrm{\Delta }\overline{v}`$. If we assume that the velocity distribution is Gaussian, all moments of any velocity component must be functions of $`\overline{v}`$ and $`\delta v`$. This must be true for $`\mathrm{\Delta }\overline{v}`$ as well. For dimensional reasons it must be of the form $$\mathrm{\Delta }\overline{v}=\overline{v}f\left(\frac{\delta v}{\overline{v}}\right)$$ (17) with a dimensionless function $`f`$. The physical interpretation of this is the following: the loss term in the momentum balance can be viewed as an effective wall friction. As long as the granular flow in the vertical pipe approaches a steady state, the friction force must depend on the velocity $`\overline{v}`$. The ratio $`\delta v/\overline{v}`$ can be viewed as a characteristic impact angle, so that the function $`f`$ contains the information about the average local collision geometry at the wall. In principle all dimensionless parameters characterizing the system may enter the function $`f`$, that is, apart from $`\nu `$ also the restitution coefficient $`e_\mathrm{n}`$ and the ratios $`W/d`$ and $`gd/\overline{v}^2`$.
However, it is hard to imagine that the width $`W`$ of the pipe or the gravitational acceleration $`g`$ influences the local collision geometry. Therefore we shall assume that $`f`$ does not depend on $`W/d`$ or $`gd/\overline{v}^2`$. On the other hand, it is plausible that the restitution coefficient enters $`f`$. It will influence the spatial distribution of particles and also accounts for the correlation of the velocities, if some particle is scattered back and forth between the wall and neighboring particles inside the pipe, and hence hits the wall twice or more times without a real randomization of its velocity. Due to positional correlations among the particles, $`f`$ should also depend on the solid fraction $`\nu `$: one can easily imagine that the average collision geometry is different in dense and in dilute systems. Lacking a more precise understanding of the function $`f`$, we make a simple power law ansatz for it and write the loss term of (5) as $`{\displaystyle \frac{\dot{N}_\mathrm{w}}{N}}\mathrm{\Delta }\overline{v}`$ $`=`$ $`{\displaystyle \frac{1}{W}}\delta v\overline{v}k_\mathrm{w}\left({\displaystyle \frac{\delta v}{\overline{v}}}\right)^\beta `$ (18) $`=`$ $`k_\mathrm{w}W^{-1}\delta v^{1+\beta }\overline{v}^{1-\beta }.`$ The dimensionless parameters $`k_\mathrm{w}`$ and $`\beta `$ will be functions of $`\nu `$ and $`e_\mathrm{n}`$.
## 6 Time evolution and steady state

With these assumptions, the equations of motion (5) and (11) for granular flow through a vertical pipe become $`\dot{\overline{v}}`$ $`=`$ $`g-k_\mathrm{w}W^{-1}\delta v^{1+\beta }\overline{v}^{1-\beta },`$ (19) $`\dot{\delta v}`$ $`=`$ $`k_\mathrm{w}W^{-1}\delta v^\beta \overline{v}^{2-\beta }-k_\mathrm{g}d^{-1}\delta v^2.`$ (20) As the time evolution should not be singular for $`\overline{v}=0`$ or $`\delta v=0`$, the values of $`\beta `$ are restricted to the interval $$0\leq \beta \leq 1.$$ (21) The meaning of the exponent $`\beta `$ becomes clear, if we calculate the steady state velocity from (19) and (15). The result is $$\overline{v}_\mathrm{s}=\sqrt{gd}k_\mathrm{g}^{\gamma -1/2}k_\mathrm{w}^{-\gamma }\left(\frac{W}{d}\right)^\gamma .$$ (22) The exponent $`\gamma `$, which determines the dependence of the average flow velocity on the pipe diameter, is related to $`\beta `$ by $$\gamma =\frac{3}{2(2-\beta )}.$$ (23) Due to (21) we predict that in granular pipe flow $$3/4\leq \gamma \leq 3/2,$$ (24) as long as the flow is sufficiently homogeneous and the main dissipation mechanism is binary collisions. Note that the exponent is always smaller than 2, which would be its value for Hagen-Poiseuille flow of a Newtonian fluid. $`\gamma =3/2`$ is the prediction of Bagnold’s theory, but in our simulations we also found values as small as 1, depending on the values of the solid fraction and the restitution coefficient . The stationary value $`\delta v_\mathrm{s}`$ follows directly from (22) and (15); one obtains the same formula as (22) with $`\gamma `$ replaced by $`\gamma /3`$.

## 7 Collisional cooling for monopolar charged grains

In this section we summarize our recent results on how the dissipation rate (14) is changed if all grains carry the same electrical charge $`q`$ (besides having the same mass $`m`$, radius $`r`$ and restitution coefficient $`e_\mathrm{n}`$). For simplicity we assume that the charges are located in the middle of the insulating particles.
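Equations (19) and (20) relax to the steady state (22), at which relation (15) holds, for any admissible choice of the constants. A minimal forward-Euler sketch (the parameter values are ours and purely illustrative):

```python
import math

# Illustrative, dimensionless parameter choices (not fitted to anything):
g, d, W = 1.0, 1.0, 20.0
k_w, k_g, beta = 1.0, 0.5, 0.5

# Forward-Euler integration of (19)-(20) from an agitated initial state.
v, dv = 1.0, 1.0
dt = 0.01
for _ in range(100000):                     # total time 1000 >> relaxation time
    v_dot = g - k_w / W * dv**(1 + beta) * v**(1 - beta)              # eq. (19)
    dv_dot = k_w / W * dv**beta * v**(2 - beta) - k_g / d * dv**2     # eq. (20)
    v, dv = v + v_dot * dt, dv + dv_dot * dt

# Steady state predicted by (22), with gamma from (23); the stationary
# velocity fluctuation is the same formula with gamma replaced by gamma/3.
gamma = 3.0 / (2.0 * (2.0 - beta))
v_s = math.sqrt(g * d) * k_g**(gamma - 0.5) * k_w**(-gamma) * (W / d)**gamma
dv_s = math.sqrt(g * d) * k_g**(gamma / 3 - 0.5) * k_w**(-gamma / 3) * (W / d)**(gamma / 3)
```

At the end of the run the integrated values agree with $`\overline{v}_\mathrm{s}`$ and $`\delta v_\mathrm{s}`$, and the ratio $`\overline{v}/\delta v^3`$ reproduces $`k_\mathrm{g}/gd`$ as required by (15).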
The results are valid for grains in three-dimensional space, $`D=3`$. Whereas the hard sphere gas has no characteristic energy scale, the Coulomb repulsion introduces such a scale, $$E_\mathrm{q}=q^2/d.$$ (25) It is the energy barrier that two collision partners have to overcome when approaching each other from infinity. It has to be compared to the typical kinetic energy stored in the relative motion of the particles, which by analogy with molecular gases is usually expressed in terms of the “granular temperature” $$T=\delta v^2/D.$$ (26) If $`E_\mathrm{q}\ll mT`$ one expects that the charges have a negligible effect on the dissipation rate. Using (26) and the expression $$\nu =\frac{\pi }{6}nd^3\quad \mathrm{with}\quad n=N/V$$ (27) for the three-dimensional solid fraction, the dissipation rate (14) can be written in the form $$\dot{E}_{\mathrm{diss}}/V=kn^2d^2mT^{3/2}$$ (28) with the dimensionless prefactor $$k=k_\mathrm{g}\pi \sqrt{3}/(2\nu ).$$ (29) The advantage of writing it this way is that the leading $`n`$- or $`\nu `$-dependence is explicitly given: in the dilute limit $`\nu \to 0`$ the dissipation rate should be proportional to $`n^2`$, i.e. to the probability that two particles meet in an ideal gas. Since the remaining factors in (28) are uniquely determined by the dimension of the dissipation rate, this equation must hold for charged particles as well. However, in this case the prefactor $`k`$ will not only depend on $`e_\mathrm{n}`$ and $`\nu `$, but also on the dimensionless energy ratio $`E_\mathrm{q}/mT`$. We found that the following factorization holds: $$k=k_0(e_\mathrm{n})g_{\mathrm{chs}}(\nu ,E_\mathrm{q}/mT),$$ (30) where $$k_0=2\sqrt{\pi }(1-e_\mathrm{n}^2)$$ (31) is the value of $`k`$ for $`\nu =E_\mathrm{q}/mT=0`$. $`g_{\mathrm{chs}}`$ denotes the radial distribution function for charged hard spheres (chs) at contact, normalized by the one for the uncharged ideal gas.
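Combining (28)-(31) with the fit for $`g_{\mathrm{chs}}`$ quantified in (32)-(35) below, the prefactor $`k`$ can be evaluated explicitly. A sketch (our own helper, only meaningful in the fitted range $`\nu <0.2`$, $`E_\mathrm{q}/mT<8`$):

```python
import math

def k_prefactor(e_n, nu, eq_over_mT):
    """Dimensionless prefactor k of the dissipation rate (28):
    k = k0(e_n) * g_chs(nu, E_q/mT), with the Enskog contact value
    g_hs and the Boltzmann-like Coulomb suppression factor."""
    k0 = 2.0 * math.sqrt(math.pi) * (1.0 - e_n**2)          # eq. (31)
    g_hs = (2.0 - nu) / (2.0 * (1.0 - nu)**3)               # eq. (33)
    c0, c1 = 2.40, 1.44                                     # eq. (35)
    f = 1.0 - c0 * nu**(1.0 / 3.0) + c1 * nu**(2.0 / 3.0)   # eq. (34)
    return k0 * g_hs * math.exp(-eq_over_mT * f)            # eqs. (30), (32)
```

For uncharged grains in the dilute limit, k reduces to k0; for charged grains at low granular temperature the dissipation is exponentially suppressed, and the suppression is weaker at higher solid fraction because f decreases with increasing solid fraction.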
For $`\nu <0.2`$ and $`E_\mathrm{q}/mT<8`$ our computer simulations show that $$g_{\mathrm{chs}}(\nu ,\frac{E_\mathrm{q}}{mT})\approx g_{\mathrm{hs}}(\nu )\mathrm{exp}\left(-\frac{E_\mathrm{q}}{mT}f(\nu )\right)$$ (32) is a very good approximation. Here, $$g_{\mathrm{hs}}=\frac{2-\nu }{2(1-\nu )^3}\geq 1$$ (33) is the well-known Enskog correction for the radial distribution function of (uncharged) hard spheres (hs) . This factor describes the enhancement of the collision probability of two particles due to the excluded volume of all the remaining particles. The second, Boltzmann-like factor describes the suppression of collisions by the Coulomb repulsion. The effective energy barrier $`E_\mathrm{q}f(\nu )`$ decreases with increasing solid fraction, because two particles which are about to collide not only repel each other but are also pushed together by being repelled from all the other charged particles in the system. A two-parameter fit gives $$f(\nu )\approx 1-c_0\nu ^{1/3}+c_1\nu ^{2/3}$$ (34) with $$c_0=2.40\pm 0.15,\quad \mathrm{and}\quad c_1=1.44\pm 0.15.$$ (35) Very general arguments lead to the prediction that $`c_1=(c_0/2)^2`$, which is confirmed by (35). We expect deviations from (32) for larger $`\nu `$ and $`E_\mathrm{q}/mT`$, because the uncharged hard sphere system has a fluid-solid transition close to $`\nu \approx 0.5`$, and the charged system may become a Wigner crystal for any solid fraction, provided the temperature gets low enough.

## 8 Conclusion

We presented four main results: The steady state velocity of granular flow in a vertical pipe should have a power law dependence on the diameter $`W`$ of the pipe, with an exponent $`\gamma `$ ranging between 3/4 and 3/2 depending on the solid fraction and the restitution coefficient of the grains. This result was derived ignoring possible electric charges of the grains and assuming that the flow is sufficiently homogeneous and that binary collisions are the main dissipation mechanism. This illustrates the genuinely non-Newtonian character of granular flow.
Second, the dependence of the steady-state velocity on the solid fraction $`\nu `$, the restitution coefficient $`e_\mathrm{n}`$ and – in the case of monopolar charging – the ratio between Coulomb barrier and kinetic energy, $`E_\mathrm{q}/mT`$, is contained in the factor $`k_\mathrm{g}^{\gamma -1/2}k_\mathrm{w}^{-\gamma }`$ in (22). In the dilute limit $`\nu \to 0`$ as well as in the limit of nearly elastic particles $`e_\mathrm{n}\to 1`$ the coefficient $`k_\mathrm{w}`$, which describes how sensitively the momentum transfer to the wall depends on the local collision geometry, should remain finite, whereas $`k_\mathrm{g}`$ vanishes like $`\nu (1-e_\mathrm{n}^2)`$ according to (29), (31). As $`\gamma -1/2>0`$, this implies that the flow through a vertical pipe becomes faster the higher the solid fraction and the less elastic the collisions between the grains are (in the limit of low density and nearly elastic collisions). The physical reason for this is that in denser and more dissipative systems collisional cooling is more efficient, reducing the collisions with the walls and hence their braking effect. This remarkable behaviour has been confirmed in computer simulations . Third, monopolar charging leads to a Boltzmann-like factor in $`k_\mathrm{g}`$ and hence in the dissipation rate, which means that for low granular temperature the dissipation rate becomes exponentially weak. The higher the density, the less pronounced this effect is, because the effective Coulomb barrier $`E_\mathrm{q}f(\nu )`$ hindering the collisions becomes weaker. Finally, we derived the evolution equations for the flow velocity and the velocity fluctuation for granular flow through a vertical pipe, (19) and (20). These equations apply to the situation of homogeneous flow, which can only be realized in computer simulations of a sufficiently short pipe with periodic boundary conditions.
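The density and elasticity dependence stated above can be made concrete with a minimal sketch; the exponent placement $`k_\mathrm{g}^{\gamma -1/2}k_\mathrm{w}^{-\gamma }`$ and the scaling $`k_\mathrm{g}\propto \nu (1-e_\mathrm{n}^2)`$ are taken from the text, while all prefactors and the choice $`\gamma =1`$ are arbitrary illustrative assumptions:

```python
def k_g(nu, e_n):
    # collisional dissipation coefficient; vanishes like nu * (1 - e_n^2)
    # in the dilute, nearly elastic limit (prefactor set to 1)
    return nu * (1.0 - e_n ** 2)

def velocity_factor(nu, e_n, gamma=1.0, k_w=1.0):
    # density/elasticity dependence of the steady-state velocity:
    # v ~ k_g^(gamma - 1/2) * k_w^(-gamma), with 3/4 <= gamma <= 3/2
    return k_g(nu, e_n) ** (gamma - 0.5) * k_w ** (-gamma)

# denser and more dissipative systems flow faster, since gamma - 1/2 > 0
assert velocity_factor(0.2, 0.8) > velocity_factor(0.1, 0.8)
assert velocity_factor(0.1, 0.7) > velocity_factor(0.1, 0.9)
```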
In order to generalize these equations to flow that is inhomogeneous along the pipe, one should replace the time derivatives by $`\partial _t+\overline{v}(z,t)\partial _z`$. In addition a third equation, the continuity equation, is needed to describe the time evolution of the density of grains along the pipe. Such equations have been proposed previously in order to study the kinetic waves spontaneously forming in granular pipe flow. Our equations are different. ## Acknowledgements We thank the Deutsche Forschungsgemeinschaft for supporting this research through grant no. Wo 577/1-3. The computer simulations supporting the theory presented in this paper were partly done at the John von Neumann-Institut für Computing (NIC) in Jülich.
no-problem/9909/physics9909020.html
# A quantum point contact for neutral atoms ## Abstract We show that the conductance of atoms through a tightly confining waveguide constriction is quantized in units of $`\lambda _{\mathrm{dB}}^2/\pi `$, where $`\lambda _{dB}`$ is the de Broglie wavelength of the incident atoms. Such a constriction forms the atom analogue of an electron quantum point contact and is an example of quantum transport of neutral atoms in an aperiodic system. We present a practical constriction geometry that can be realized using a microfabricated magnetic waveguide, and discuss how a pair of such constrictions can be used to study the quantum statistics of weakly interacting gases in small traps. Quantum transport, in which the center-of-mass motion of particles is dominated by quantum mechanical effects, has been observed in both electron and neutral-atom systems. Pioneering experiments demonstrated quantum transport in periodic structures. For example, Bloch oscillations and Wannier-Stark ladders were observed in the conduction of electrons through superlattices with an applied electric field, as well as in the transport of neutral atoms through accelerating optical lattices . Further work with neutral atoms in optical lattices has utilized their slower time scales (kHz instead of THz) and longer coherence lengths to observe a clear signature of dynamical Bloch band suppression , an effect originally predicted for but not yet observed in electron transport . Quantum transport also occurs in aperiodic systems. For example, a quantum point contact (QPC) is a single constriction through which the conductance is always an integer multiple of some base conductance. The quantization of electron conductance in multiples of $`2e^2/h`$, where $`e`$ is the charge of the electron and $`h`$ is Planck’s constant, is observed through channels whose width is comparable to the Fermi wavelength $`\lambda _F`$. 
Experimental realizations of a QPC include a sharp metallic tip contacting a surface and an electrostatic constriction in a two-dimensional electron gas . Electron QPC’s have length-to-width ratios less than 10 because phase-coherent transport requires that channels be shorter than the mean free path between scattering events, $`\ell _{\mathrm{mfp}}`$. Geometric constraints are the limiting factor in the accuracy of quantization in an electron QPC . In this Letter, we present an experimentally realizable system that forms a QPC for neutral atoms — a constriction whose ground state width $`b_o`$ is comparable to $`\lambda _{\mathrm{dB}}/2\pi `$, where $`\lambda _{\mathrm{dB}}`$ is the de Broglie wavelength of the atoms. The “conductance”, as defined below, through a QPC for atoms is quantized in integer multiples of $`\lambda _{\mathrm{dB}}^2/\pi `$. The absence of frozen-in disorder, the low rate of inter-atomic scattering ($`\ell _{\mathrm{mfp}}\sim 1`$ m), and the availability of nearly monochromatic matter waves with de Broglie wavelengths $`\lambda _{\mathrm{dB}}\sim 50`$ nm offer the possibility of conductance quantization through a cylindrical constriction with a length-to-width ratio of $`10^5`$. This new regime is interesting because deleterious effects such as reflection and inter-mode nonadiabatic transitions are minimized , allowing for an accuracy of conductance quantization limited only by finite-temperature effects. Furthermore, the observation of conductance quantization at new energy and length scales is of inherent interest. If a QPC for neutral atoms were realized, it would provide excellent opportunities for exploring the physics of small ensembles of weakly interacting gases. For instance, the transmission through a series of two QPC’s would depend on the energetics of atoms confined in the trap between the two constrictions.
The physics of such a “quantum dot” for atoms is fundamentally different from that of electrons, since the Coulombic charging energy that dominates the energetics of an electron quantum dot is absent for neutral particles. The quantum statistics of neutral atoms energetically restricted to sub-dimensional spaces has already aroused theoretical interest in novel effects such as Fermionization and the formation of a Luttinger liquid . Recently, several waveguides have been proposed whose confinement may be strong enough to meet the constraint $`b_o\lambda _{dB}/2\pi `$ for longitudinally free atoms. In this work, we will focus on the example of a surface-mounted four-wire electromagnet waveguide for atoms (see Fig. 1) which exploits recent advances in microfabricated atom optics . A neutral atom with a magnetic quantum number $`m`$ experiences a linear Zeeman potential $`U(𝒓)=\mu _Bgm\left|𝑩(𝒓)\right|`$, where $`\mu _B`$ is the Bohr magneton, $`g`$ is the Landé g factor, and $`𝑩(𝒓)`$ is the magnetic field at $`𝒓`$. Atoms with $`m>0`$ are transversely confined near the minimum in field magnitude shown in Fig. 1; however, they are free to move in the $`𝒛`$ direction, parallel to the wires. Non-adiabatic changes in $`m`$ near the field minimum can be exponentially suppressed with a holding field $`B_h`$ applied in the axial direction $`𝒛`$ . Near the guide center, the potential forms a cylindrically symmetric two-dimensional simple harmonic oscillator with classical oscillation frequency $`\omega =\left[\mu _Bgm(2\mu _0I/\pi S^2)^2/MB_h\right]^{1/2}`$, where $`\mu _0`$ is the permeability of free space, $`I`$ is the inner wire current, $`2I`$ is the outer wire current, $`S`$ is the center-to-center wire spacing, and $`M`$ is the mass of the atoms. 
Sodium (<sup>23</sup>Na) in the $`|F=1,m_F=+1>`$ state would have a classical oscillation frequency of $`\omega =2\pi \times 3.3`$ MHz and a root mean squared (RMS) ground state width $`b=\sqrt{\hbar /2M\omega }=8.1`$ nm in a waveguide with $`S=1`$ $`\mu `$m and $`I=0.1`$ A. The fabrication of electromagnet waveguides of this size scale and current capacity has been demonstrated . A constriction in the waveguide potential can be created by contracting the spacing between the wires of the waveguide. The constriction strength can be tuned dynamically by changing the current in the wires. Fig. 2a shows a top-down view of a constriction whose wire spacing $`S(z)`$ is smoothly varied as $$S(z)=S_o\mathrm{exp}\left(\frac{z^2}{2\ell ^2}\right),$$ (1) where $`S_o`$ is the spacing at $`z=0`$, and $`\ell `$ is the characteristic channel length. Assuming the wires are nearly parallel, the guide width, depth, oscillation frequency, and curvature scale as $`S(z)`$, $`S(z)^{-1}`$, $`S(z)^{-2}`$, and $`S(z)^{-4}`$, respectively. For $`\ell =100S_o`$, field calculations above this curved-wire geometry show that the parallel-wire approximation is valid for $`|z|\lesssim 3\ell `$, allowing for a well-defined waveguide potential over a factor of more than $`10^3`$ in level spacing (see Fig. 2b). Our particular choice of $`S(z)`$ is somewhat arbitrary but prescribes one way in which wires can form a smooth, constricting waveguide as well as run to contact pads (necessary to connect the wires to a power supply) far enough from the channel ($`\gg \ell `$) that their geometry is unimportant. The total “footprint” of this device (not including contact pads) is approximately $`10\ell \times 10\ell `$, or about 1 mm<sup>2</sup>, for $`S_o=1`$ $`\mu `$m and $`\ell =100S_o`$. Atoms approach the constriction from the $`-𝒛`$ direction, as shown in Fig. 2a.
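The numbers quoted above can be checked from the expression for $`\omega `$ given earlier. The holding field $`B_h`$ is not specified in the text; the value below is an assumption chosen to reproduce the quoted frequency, while everything else is standard constants:

```python
import math

mu_B = 9.274e-24            # Bohr magneton (J/T)
mu_0 = 4 * math.pi * 1e-7   # vacuum permeability
hbar = 1.0546e-34
M    = 23 * 1.6605e-27      # mass of 23Na (kg)
gm   = 0.5                  # |g * m| for the |F=1, m_F=+1> state

S, I = 1e-6, 0.1            # wire spacing (m), inner-wire current (A)
B_h  = 1.8e-3               # holding field (T); assumed, not from the text

grad  = 2 * mu_0 * I / (math.pi * S**2)              # field gradient (T/m)
omega = math.sqrt(mu_B * gm * grad**2 / (M * B_h))   # trap frequency (rad/s)
b     = math.sqrt(hbar / (2 * M * omega))            # RMS ground-state width

print(f"omega/2pi = {omega / 2 / math.pi / 1e6:.2f} MHz, b = {b * 1e9:.2f} nm")
```

With this choice of $`B_h`$ the sketch reproduces the quoted 3.3 MHz oscillation frequency and the $`8`$ nm scale of the ground-state width.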
We calculate the propagation of the atom waves through the constriction by solving the time-dependent Schrödinger equation in three spatial dimensions. It is important to note that the nature of quantum transport requires fully quantum-mechanical calculations, even for the longitudinal degree of freedom within the waveguide. The Hamiltonian for an atom near the axis of the four-wire waveguide described by Eq. (1) is $$\widehat{H}_{QPC}=\frac{\widehat{𝒑}^2}{2M}+\frac{1}{2}M\omega _o^2e^{-2\widehat{z}^2/\ell ^2}(\widehat{x}^2+\widehat{y}^2),$$ (2) where $`\widehat{}`$ denotes an operator, $`\omega _o`$ is the transverse oscillation frequency at $`z=0`$, and we have assumed the parallel-wire scaling of the field curvature, $`S(z)^{-4}`$. Since a direct numerical integration approach is computationally prohibitive, we developed a model that neglects non-adiabatic propagation at the entrance and exit of the channel. The waveguide potential is truncated at $`z=\pm z_T`$, the planes between which atoms can propagate adiabatically in the waveguide, and the wavefunction amplitude $`\psi `$ and its normal derivative $`\partial \psi /\partial z`$ are matched between plane-wave states ($`|z|>z_T`$) and the modes of the waveguide ($`|z|<z_T`$). We found that, for $`\ell \gtrsim 10b_o`$, a two-dimensional version of the model could reproduce the transmissions and spatial output distributions of a two-dimensional split-operator FFT integration of $`\widehat{H}_{QPC}(\widehat{x},\widehat{z})`$ with the full waveguide potential. This agreement gave us confidence in our three-dimensional model of atom propagation through the constriction. The cross-section for an incident atomic plane wave to be transmitted through a constriction depends on the plane-wave energy $`E_I`$ and the incident angle.
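The split-operator FFT method referred to above alternates potential propagation in position space with kinetic propagation in momentum space. A minimal one-dimensional sketch, with grid, wave packet, and potential all being arbitrary illustrative choices and $`\hbar =M=1`$:

```python
import numpy as np

def split_step(psi, V, dx, dt):
    """One split-operator step for H = p^2/2 + V(x), with hbar = M = 1."""
    k = 2 * np.pi * np.fft.fftfreq(psi.size, d=dx)   # momentum grid
    half_kick = np.exp(-0.5j * dt * V)               # half potential step
    psi = half_kick * psi
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))
    return half_kick * psi

# Gaussian packet moving toward a soft barrier (parameters are illustrative)
x = np.linspace(-50, 50, 1024, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(-(x + 20)**2 / 4 + 1.0j * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
V = 0.3 * np.exp(-x**2 / 8)                          # smooth bump at x = 0

for _ in range(400):
    psi = split_step(psi, V, dx, dt=0.05)

norm = np.sum(np.abs(psi)**2) * dx                   # conserved by unitarity
```

Each factor is a pure phase, so the map is unitary and the norm is conserved to machine precision; the transmitted fraction can be read off from the probability at $`x>0`$.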
However, if the RMS angular spread of the incident plane waves, $`\sigma `$, is much greater than the RMS acceptance angle $`\alpha \approx \left[\mathrm{ln}(\ell /b_o)b_o^2/\ell ^2\right]^{1/4}`$, we can integrate over all solid angles and define a “conductance” $`\mathrm{\Phi }`$ dependent only on parameters of the constriction and the kinetic energy $`E_I`$ of the incident atoms: $$\mathrm{\Phi }(E_I)=\frac{F}{J_of(0,0)},$$ (3) where $`F`$ is the total flux of atoms (in s<sup>-1</sup>) transmitted through the constriction and $`J_of(0,0)`$ is the incident on-axis brightness (in cm<sup>-2</sup>s<sup>-1</sup>). The transverse momentum distribution $`f(k_x,k_y)`$ is defined as follows: in the plane wave basis $`\{|𝒌\}`$, we consider a density distribution of atoms on the energy shell $`a(𝒌)d𝒌=(C/k_z^o)\delta \left[k_z-k_z^o\right]f(k_x,k_y)d𝒌`$, where $`C=\hbar J_o/2\pi E_I`$, $`\hbar k_z^o=\left[2ME_I-\hbar ^2(k_x^2+k_y^2)\right]^{1/2}`$, and $`f(k_x,k_y)`$ is normalized such that the incident flux density $`J_o=\int d𝒌\,a(𝒌)\hbar k_z/M`$. When applied to the diffusion of an isotropic gas ($`f=1`$) through a hole in a thin wall, $`\mathrm{\Phi }`$ is equal to the area of the hole; for a channel with a small acceptance angle, $`\alpha \ll \sigma `$, $`\mathrm{\Phi }`$ is the effective area at the narrowest cross-section of the channel. We consider a distribution of incident energies $`g(E_I)`$ with an RMS spread $`\mathrm{\Delta }E`$, centered about $`\overline{E_I}`$. As an example, the <sup>23</sup>Na source described in Ref. REFERENCES has a monochromaticity $`\overline{E_I}/\mathrm{\Delta }E\approx 50`$ for atoms traveling at $`30`$ cm$`/`$s, or $`\lambda _{\mathrm{dB}}=50`$ nm. To meet the constraint $`\sigma \gg \alpha `$ , such a source can be reflected off of a diffuser , such as the de-magnetized magnetic tape described in Ref. REFERENCES.
Assuming the spatial density of atoms is preserved during propagation , such a source can have a flux density of $`J_o\approx 2\times 10^{10}`$ cm<sup>-2</sup>s<sup>-1</sup>. The quantized conductance for atoms is shown in Fig. 3 and is the central result of this Letter. Conductance $`\mathrm{\Phi }/(\lambda _{dB}^2/\pi )`$ is shown as a function of mean energy $`\overline{E_I}/\hbar \omega _o`$ and energy spread $`\mathrm{\Delta }E/\hbar \omega _o`$. In the limits $`\hbar \omega _o\gg \mathrm{\Delta }E`$ and $`\ell \gg b_o`$, one can show analytically that the conductance is $`\mathrm{\Phi }=(\lambda _{dB}^2/\pi )N`$, where $`N`$ is the number of modes above cutoff at $`z=0`$. The “staircase” of $`\mathrm{\Phi }`$ versus $`\overline{E_I}/\hbar \omega _o`$ is a vivid example of quantum transport, as it demonstrates the quantum mechanical nature of the center-of-mass motion. For all of Fig. 3 we have assumed $`\ell =10^3S_o\approx 10^5b_o`$; in the particular case of the Na source discussed above, and assuming $`\sigma =25`$ mrad, the first step ($`\mathrm{\Phi }=\lambda _{dB}^2/\pi `$) corresponds to a transmitted flux of $`\sim `$ 500 atoms s<sup>-1</sup>, which is a sufficient flux to measure via photoionization. We can understand several features shown in Fig. 3 by considering the adiabatic motion of atoms within the waveguide. As atom waves propagate through the constricting waveguide, modes with transverse oscillator states $`(n_x,n_y)`$ such that $`\hbar \omega _o(n_x+n_y+1)-\overline{E_I}\gg \hbar ^2/2M\ell ^2`$ will contribute negligible evanescent transmission and adiabatically reflect before $`z=0`$. Steps occur when the number of allowed propagating modes changes: the $`m^{th}`$ step appears at $`\hbar \omega _o=\overline{E_I}/m`$. Note that this condition can also be written $`b_o=\sqrt{m}\lambda _{dB}/2\pi `$, demonstrating that transverse confinement on the order of $`\lambda _{dB}/2\pi `$ is essential to seeing conductance steps in a QPC.
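In the adiabatic limit the staircase follows from simple mode counting: the transverse level $`n=n_x+n_y`$ is $`(n+1)`$-fold degenerate and propagates when $`\hbar \omega _o(n+1)`$ lies below the incident energy. A small sketch of this counting, with the sharp-cutoff criterion as an idealization of the adiabatic argument in the text:

```python
def modes_above_cutoff(E_over_hw, nmax=100):
    # count transverse SHO modes (nx, ny) that propagate at z = 0,
    # i.e. hbar*omega_o*(nx + ny + 1) < E_I (sharp-cutoff idealization)
    return sum(1 for nx in range(nmax) for ny in range(nmax)
               if nx + ny + 1 < E_over_hw)

# conductance in units of lambda_dB^2/pi equals the mode count N:
# the m-th step appears at E_I = m * hbar*omega_o and is m modes high
for m in range(1, 6):
    assert modes_above_cutoff(m + 0.5) - modes_above_cutoff(m - 0.5) == m
```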
Since low-lying modes occupy a circularly symmetric part of the potential, the $`m^{th}`$ step involves $`m`$ degenerate modes and is $`m`$ times as high as the first step. The large aspect ratio of the atom QPC allows for a sufficiently gentle constriction to suppress partial reflection at the entrance to the guide, such that the sharpness of the steps and the flatness between them is limited only by the spread in incident atom energies. It is interesting to compare the electron and atom QPC systems. If contact is made between two Fermi seas whose chemical potentials differ by $`e\mathrm{\Delta }V<k_BT\ll E_F`$, where $`\mathrm{\Delta }V`$ is the applied voltage, $`T`$ is the temperature of the electron gas, and $`E_F`$ is the Fermi energy, then the current that flows between them will be carried by electrons with an energy spread $`\sim k_BT`$ and a mean energy $`\sim E_F`$. For a cold atom beam, the particle flow is driven by kinetics instead of energetics. The incident kinetic energy $`\overline{E_I}`$ corresponds to $`E_F`$, and the energy spread $`\mathrm{\Delta }E\ll \overline{E_I}`$ corresponds to $`k_BT`$. The quantum of conductance for both systems can be formulated in terms of the particle wavelength: the classical conductance of a point contact of area $`A`$ connecting two three-dimensional gases of electrons is $`G=(e^2k_F^2A)/(4\pi ^2\hbar )`$ , such that if $`G=Ne^2/\pi \hbar `$, the effective area is $`A=N\lambda _F^2/\pi `$. In order to determine the accuracy of conductance quantization, three measurements ($`F`$, $`J_of(0,0)`$, and $`\lambda _{dB}`$) are necessary for the atom QPC instead of two measurements (current and $`\mathrm{\Delta }V`$) for the electron QPC. The reduced number of degrees of freedom for electrons results from their Fermi degeneracy: the net current is carried by electrons whose incident flux density $`J_o`$ and wavelength $`\lambda _F`$ are functions of $`E_F`$ and $`\mathrm{\Delta }V`$.
As a thought experiment, the simplicity of an externally tuned $`J_o`$ and $`\lambda _{\mathrm{dB}}`$ could also be extended to neutral atoms, if two degenerate ensembles of Fermionic atoms were connected by a QPC and given a potential difference $`\mathrm{\Delta }U`$, such as could be induced by a uniform magnetic field applied to one reservoir. We can redefine the neutral atom conductance as $`\mathrm{\Gamma }=F/\mathrm{\Delta }U`$, where $`F`$ is the transmitted atom flux, just as the electron conductance $`G`$ is the ratio of electron flux (current) to potential difference (voltage). One can show that $$\mathrm{\Gamma }=\frac{N}{h},$$ (4) assuming $`\mathrm{\Delta }U<k_BT\ll E_F`$, where $`T`$ is the temperature of the Fermi ensembles and $`N`$ is the number of modes above cutoff. Two QPC’s can form a trap between them, just as a pair of electron QPC’s form a quantum dot . For $`\hbar \omega _o>\overline{E_I}`$, all modes of the QPC are below cutoff and evanescent transmission is dominated by tunneling of atoms occupying the $`(0,0)`$ mode. While the quantum dot between them is energetically isolated, atoms can still tunnel into and out of the dot. For cold Fermionic atoms, the Pauli exclusion principle would enable a single atom to block transmission through the trap, just as the charging energy of a single electron can block transmission in electron quantum dots; such a blockade might be used to make a single-atom transistor. In such a single-atom blockade regime, quantum dots can also show a suppression of shot noise below the Poissonian level . Note that spectroscopic measurement of neutral atom traps with resolvable energy levels has been suggested previously in analogy to spectroscopic measurement of electron quantum dots.
We emphasize that the loading and observation of such a small trap with two or more QPC “leads” is a powerful configuration for atom optics, because loading a small, isolated trap is problematic, and because spectroscopy near the substrate is complicated by light scattering and inaccessibility. In conclusion, we show how an electromagnet waveguide could be used to create a quantum point contact for cold neutral atoms. This device is an example of a new physical regime, quantum transport within microfabricated atom optics. The authors thank A. Barnett, N. Dekker, M. Drndić, E. W. Hagley, K. S. Johnson, M. Olshanii, W. D. Phillips, M. G. Raizen, and G. Zabow for useful discussions. This work was supported in part by NSF Grant Nos. PHY-9732449 and PHY-9876929, and by MRSEC Grant No. DMR-9809363. J. T. acknowledges support from the Fannie and John Hertz Foundation.
no-problem/9909/cond-mat9909322.html
# Deterministic Equations of Motion and Dynamic Critical Phenomena ## Abstract Taking the two-dimensional $`\varphi ^4`$ theory as an example, we numerically solve the deterministic equations of motion with random initial states. Short-time behavior of the solutions is systematically investigated. Assuming that the solutions generate a microcanonical ensemble of the system, we demonstrate that the second order phase transition point can be determined already from the short-time dynamic behavior. Initial increase of the magnetization and critical slowing down are observed. The dynamic critical exponent $`z`$, the new exponent $`\theta `$ and the static exponents $`\beta `$ and $`\nu `$ are estimated. Interestingly, the deterministic dynamics with random initial states is in the same dynamic universality class as Monte Carlo dynamics. It is believed that statistical mechanics originates from the fundamental equations of motion for many-body systems or field theories, even though no general proof exists up to now. For classical systems, equations of motion are deterministic. Statistical ensembles are expected to be an effective description of the systems. For quantum systems the situation is similar. In any case, it remains open in practice whether solutions of the fundamental equations of motion really yield the same results as statistical mechanics, e.g., see Refs. . With the recent development of computers, solving equations of motion numerically has gradually become possible. This has attracted much attention from scientists in different areas. For example, recently the $`O(N)`$ vector model and the $`XY`$ model have been numerically solved . Noting that energy is conserved during the time evolution, solutions of the equations of motion actually generate a microcanonical ensemble.
Taking time averages of the observables and introducing standard techniques developed in statistical mechanics for the canonical ensemble, phase transition points and critical exponents have been estimated. The results are in agreement with those obtained from a canonical ensemble in statistical mechanics. In principle, the deterministic equations of motion should describe not only equilibrium properties but also dynamic properties of the systems. In statistical mechanics, dynamics is approximately given by effective stochastic equations of motion, typically at the mesoscopic level, e.g. Langevin equations or Monte Carlo algorithms. For stochastic dynamics, critical slowing down and dynamic scaling around the critical point are characteristic properties. It has long been an open challenge whether stochastic dynamics correctly describes the real physical world. It is important and interesting to study the dynamic properties of the fundamental deterministic equations of motion, and whether deterministic dynamics and stochastic dynamics are in the same universality class. On the other hand, in recent years much progress has been achieved in stochastic dynamics. It has long been known that around the critical point there exists universal scaling behavior in the long-time regime of the dynamic evolution. This is more or less due to the divergent correlation time induced by the divergent spatial correlation length. Short-time behavior is believed to depend on microscopic details. However, in recent years it has been discovered that universal scaling behavior emerges already in the macroscopic short-time regime, after a time scale $`t_{mic}`$ which is sufficiently large in the microscopic sense (for a review, see Ref. ). Importantly, one must treat the macroscopic initial conditions carefully.
For example, for dynamic magnetic systems with an initial state of very high temperature and small magnetization, the short-time dynamic scaling for the $`k`$th moment of the magnetization is written as $$M^{(k)}(t,\tau ,L,m_0)=b^{-k\beta /\nu }M^{(k)}(b^{-z}t,b^{1/\nu }\tau ,b^{-1}L,b^{x_0}m_0).$$ (1) Here $`\tau =(T-T_c)/T_c`$ is the reduced temperature, $`\beta `$, $`\nu `$ are the well known static critical exponents and $`z`$ is the dynamic exponent, while the new independent exponent $`x_0`$ is the scaling dimension of the initial magnetization $`m_0`$. The parameter $`b`$ is an arbitrary rescaling factor and $`L`$ is the lattice size. For the dynamics of model A, a prominent property of the short-time dynamic scaling is that the magnetization undergoes a critical initial increase at early times, $$M(t)\sim m_0t^\theta ,$$ (2) where the exponent $`\theta `$ is related to $`x_0`$ by $`\theta =(x_0-\beta /\nu )/z`$. More interesting and important is that the static exponents $`\beta `$, $`\nu `$ and the dynamic exponent $`z`$ in the short-time scaling form (1) take the same values as defined in equilibrium or in the long-time regime. This fact makes it possible to determine all these exponents already in the short-time regime of the dynamic evolution . Since the measurements are now carried out at early times, the method is free of critical slowing down. The short-time dynamic scaling is not only conceptually interesting but also practically important. To clarify whether the short-time dynamic scaling is a fundamental phenomenon of the real physical world becomes urgent. The purpose of this letter is to study whether there exists short-time universal scaling behavior in the dynamics described by the deterministic equations of motion, and meanwhile to determine the critical point and all the static and dynamic exponents. The model we choose is the two-dimensional $`\varphi ^4`$ theory.
The reason is that the statics of this model is known to be in the same universality class as the two-dimensional Ising model, and there already exist some numerical results . The Hamiltonian of the model on a square lattice is $$H=\sum _i\left[\frac{1}{2}\pi _i^2+\frac{1}{2}\sum _\mu (\varphi _{i+\mu }-\varphi _i)^2-\frac{1}{2}m^2\varphi _i^2+\frac{1}{4!}\lambda \varphi _i^4\right]$$ (3) with $`\pi _i=\dot{\varphi }_i`$, and it leads to the equations of motion $$\ddot{\varphi }_i=\sum _\mu (\varphi _{i+\mu }+\varphi _{i-\mu }-2\varphi _i)+m^2\varphi _i-\frac{1}{3!}\lambda \varphi _i^3.$$ (4) In the dynamic evolution governed by equations (4), energy is conserved. The solutions are assumed to generate a microcanonical ensemble. The temperature cannot be introduced externally as in a canonical ensemble, but can only be defined internally as the averaged kinetic energy. In our short-time dynamic approach, the total energy is actually an even more convenient controlling parameter of the system, since it is conserved and can be input from the initial state. Therefore, from now on, $`\tau `$ in the scaling form (1) will be understood as a reduced energy density $`(ϵ-ϵ_c)/ϵ_c`$. Here $`ϵ_c`$ is the critical energy density corresponding to a second order phase transition. The order parameter of the $`\varphi ^4`$ theory is the magnetization. The time-dependent magnetization $`M\equiv M^{(1)}(t)`$ and its second moment $`M^{(2)}`$ are defined as $$M^{(k)}(t)=\frac{1}{L^{2k}}\left<\left[\sum _i\varphi _i(t)\right]^k\right>,k=1,2.$$ (5) The average for observables is only over initial configurations. In the short-time regime of the dynamic evolution, the spatial correlation length is small, and one can easily realize that $`M^{(2)}\sim 1/L^d`$.
From the scaling form (1), we obtain at the critical point for $`m_0=0`$ $$M^{(2)}(t)\sim t^{(d-2\beta /\nu )/z}.$$ (6) Another interesting observable is the auto-correlation function $$A(t)=\frac{1}{L^2}\sum _i<\varphi _i(0)\varphi _i(t)>.$$ (7) From calculations based on Langevin equations, at the critical point the auto-correlation $`A(t)`$ with $`m_0=0`$ presents a power law behavior $$A(t)\sim t^{-d/z+\theta }.$$ (8) The power law behavior indicates critical slowing down. The interesting point is that the new exponent $`\theta `$ describing the initial increase of the magnetization also appears here. Our strategy is the following: from the initial increase of the magnetization, Eq. (2), we measure the exponent $`\theta `$. Taking $`\theta `$ as an input, from the power law decay of the auto-correlation, Eq. (8), we extract the dynamic exponent $`z`$. Then from the power law behavior of the second moment, Eq. (6), we obtain the static exponent $`\beta /\nu `$. Finally, the critical point and the remaining exponent $`\nu `$ are determined as follows. Assuming $`L`$ is sufficiently large and $`t^{x_0/z}m_0`$ is small enough, one can deduce from the scaling form (1) $$M(t)=m_0t^\theta F(t^{1/\nu z}\tau ).$$ (9) When $`\tau =0`$, Eq. (2) is recovered. When $`\tau \ne 0`$, the power law behavior is modified. Therefore, searching for the energy density which gives the best power law behavior of the magnetization, one determines the critical point . The exponent $`1/\nu z`$ can be extracted from the logarithmic derivative $$\partial _\tau \mathrm{ln}M(t,\tau ,m_0)|_{\tau =0}\sim t^{1/\nu z}.$$ (10) For stochastic dynamics in a canonical ensemble, to obtain the scaling forms (1) and (8) the initial state is taken to be a very high temperature state. Therefore, for deterministic dynamics we consider a random initial state with zero or small magnetization. For simplicity, we also set the initial kinetic energy to zero, i.e. $`\dot{\varphi }_i(0)=0`$. Practically we proceed as follows.
We first fix the magnitude of the initial field to a constant $`c`$, $`|\varphi _i(0)|=c`$, then randomly assign the sign of each $`\varphi _i(0)`$ with the restriction of a fixed magnetization in units of $`c`$, and finally the constant $`c`$ is determined by the given energy. Following Ref. , we take the parameters $`m^2=2.`$ and $`\lambda =0.6`$. To solve these equations of motion numerically, we simply discretize $`\ddot{\varphi }_i`$ by $`(\varphi _i(t+\mathrm{\Delta }t)+\varphi _i(t-\mathrm{\Delta }t)-2\varphi _i(t))/(\mathrm{\Delta }t)^2`$. After an initial configuration is prepared, we update the equations of motion until $`t=500`$. Then we repeat the procedure with another initial configuration. In our calculations, we use a fairly large lattice, $`L=256`$, and the number of initial configurations used for averaging ranges from $`\mathrm{4\; 500}`$ to $`\mathrm{10\; 000}`$, depending on the initial magnetization $`m_0`$. Errors are estimated simply by dividing the total samples into two or three groups. Extra calculations with $`L=128`$ show that the finite-size effect for $`L=256`$ is already negligibly small. Compared with numerical solutions in the long-time regime, the finite-$`\mathrm{\Delta }t`$ effect in the short-time dynamic approach is less severe, since our updating time is limited. Our results are presented with $`\mathrm{\Delta }t=0.05`$. We have also performed some simulations with $`\mathrm{\Delta }t=0.01`$ and confirmed that the finite-$`\mathrm{\Delta }t`$ effect is rather small. In any case, however, because of this small $`\mathrm{\Delta }t`$ the calculations are much more lengthy than standard Monte Carlo simulations. Therefore, the purpose of this paper is to explore new physics rather than to pursue high accuracy of measurements. In order to determine the critical point, we perform simulations with non-zero initial magnetization $`m_0`$ for a couple of energy densities in the critical regime. In Fig.
1, the time evolution of the magnetization with $`m_0=0.015`$ for the parameters $`m^2=2.`$ and $`\lambda =0.6`$ is plotted with solid lines for three energy densities $`ϵ=20.7`$, $`21.1`$ and $`21.5`$ in log-log scale. To show the universal behavior clearly, we have cut off the data for $`t<t_{mic}\approx 50`$. Even by eye, one can see that the magnetization for $`ϵ=21.1`$ has rather good power law behavior. Actually, careful analysis of the data between $`t=50`$ and $`500`$ leads to the critical energy density $`ϵ_c=21.11(3)`$. This agrees well with $`ϵ_c=21.1`$ given by the Binder cumulant in equilibrium in Ref. . At $`ϵ_c`$, one measures the exponent $`\theta =0.146(3)`$. Strictly speaking, however, the exponent $`\theta `$ is defined in the limit $`m_0\to 0`$. In general, for finite $`m_0`$ the exponent $`\theta `$ may show some $`m_0`$-dependence . Therefore, we have performed another simulation with $`m_0=0.009`$ at $`ϵ_c`$. The magnetization is also displayed in Fig. 1 with a dashed line. The corresponding exponent is $`\theta =0.158(2)`$. If we linearly extrapolate the results to $`m_0=0`$, we obtain the final value $`\theta =0.176(7)`$. With the critical energy in hand, we set $`m_0=0`$ and proceed to measure the auto-correlation $`A(t)`$ and the second moment $`M^{(2)}(t)`$. In Fig. 2, $`A(t)`$ and $`M^{(2)}(t)`$ are displayed in log-log scale with a solid and a dashed line, respectively. In order to show some typical microscopic behavior within $`t_{mic}`$, here we present the data from a relatively early time, $`t=10`$. From the figure we see clearly that at very early times the curves do not show power law behavior, typically oscillating a bit. However, after $`t>t_{mic}\approx 50`$, both curves nicely stabilize to power law behavior. From the data for $`t>50`$, we measure the exponents $`d/z-\theta =0.755(5)`$ and $`(d-2\beta /\nu )/z=0.819(12)`$.
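The discretization described above is the position-Verlet (leapfrog) scheme. A minimal sketch of the integration, written in the algebraically equivalent velocity-Verlet form so that energy conservation can be monitored; the small lattice and the initial amplitude $`c`$ are illustrative choices (the paper uses $`L=256`$ with $`c`$ fixed by the chosen energy density):

```python
import numpy as np

rng = np.random.default_rng(1)
L, m2, lam, dt = 32, 2.0, 0.6, 0.05   # small lattice for illustration

def force(phi):
    # right-hand side of Eq. (4): lattice Laplacian + m^2*phi - (lam/3!)*phi^3
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
           + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)
    return lap + m2 * phi - lam * phi**3 / 6.0

def energy(phi, phi_dot):
    # total energy of Eq. (3); conserved by the dynamics
    grad2 = (np.roll(phi, -1, 0) - phi)**2 + (np.roll(phi, -1, 1) - phi)**2
    return float(np.sum(0.5 * phi_dot**2 + 0.5 * grad2
                        - 0.5 * m2 * phi**2 + lam * phi**4 / 24.0))

# random initial state: |phi_i| = c with random signs, zero kinetic energy
c = 2.0                                   # illustrative amplitude
phi = c * rng.choice([-1.0, 1.0], size=(L, L))
phi_dot = np.zeros((L, L))

E0 = energy(phi, phi_dot)
for _ in range(2000):                     # evolve to t = 100
    phi_dot += 0.5 * dt * force(phi)      # velocity-Verlet: kick-drift-kick
    phi += dt * phi_dot
    phi_dot += 0.5 * dt * force(phi)
E1 = energy(phi, phi_dot)
```

The symplectic update keeps the relative energy drift small even over long runs, which is the property that makes the microcanonical interpretation of the trajectories meaningful.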
Finally, to complete the estimate of the exponents, we calculate approximately the derivative of the magnetization with respect to the energy density, using the data of Fig. 1. In Fig. 3, the logarithmic derivative of the magnetization at the critical point is plotted on a log-log scale. As usual, the fluctuations here are larger , and the power-law behavior is not as clean as that of the previous observables: the slope of the curve tends to become smaller as the time $`t`$ evolves. To improve this situation, we would probably need to perform simulations at energy densities closer to $`ϵ_c`$ and with higher statistics, and/or to take corrections to scaling into account; this would require considerable computer time. In any case, from the slope between $`t=200`$ and $`500`$, we estimate $`1/\nu z=0.492(26)`$. In Table I, we summarize all the values of the critical exponents discussed above. From these values, we can estimate the critical exponents $`z`$, $`2\beta /\nu `$ and $`\nu `$; the results are also listed in Table I. For comparison, the exact values of $`2\beta /\nu `$ and $`\nu `$ for the Ising model, and the available results for the other exponents measured from a similar dynamic process with standard Monte Carlo dynamics, are also given in Table I. Remarkably, not only the static exponents but also the dynamic exponents of the $`\varphi ^4`$ theory are very close to those of the Ising model with standard Monte Carlo dynamics. The exponent $`\theta `$ shows a difference of a few percent; however, taking into account that measurements of $`\theta `$ are not so easy, and especially that the extrapolation to the $`m_0\rightarrow 0`$ limit may induce some systematic errors, we consider the exponent $`\theta `$ to be the same for both models. Therefore, we conclude that the $`\varphi ^4`$ theory described by the deterministic equations of motion is in the same static as well as dynamic universality class as the Ising model with standard Monte Carlo dynamics.
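The simple algebra behind the numbers quoted above can be checked in a few lines: the linear extrapolation of $`\theta `$ to $`m_0=0`$, and the individual exponents implied by the measured combinations (with $`d=2`$), to be compared with the exact Ising values $`2\beta /\nu =1/4`$ and $`\nu =1`$:

```python
import numpy as np

# (i) extrapolate theta, measured at two finite m0 values, to m0 -> 0
m0 = np.array([0.015, 0.009])
theta_m0 = np.array([0.146, 0.158])
slope, theta = np.polyfit(m0, theta_m0, 1)
print(round(theta, 3))            # -> 0.176, the value quoted in the text

# (ii) solve the measured combinations for the individual exponents (d = 2)
d = 2.0
c1 = 0.755                        # measured d/z - theta
c2 = 0.819                        # measured (d - 2*beta/nu)/z
c3 = 0.492                        # measured 1/(nu*z)
z = d / (c1 + theta)
two_beta_over_nu = d - c2 * z
nu = 1.0 / (c3 * z)
print(round(z, 2), round(two_beta_over_nu, 2), round(nu, 2))  # -> 2.15 0.24 0.95
```

The derived values sit close to the exactly known Ising exponents, which is the consistency the table is meant to display.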
Why is the deterministic dynamics in the same universality class as the stochastic dynamics? The origin may be traced back to the random initial state and the kinetic-energy term in the Hamiltonian. The random initial configuration of $`\{\varphi _i\}`$ induces a stochastically evolving kinetic energy, which serves as a kind of noise, or heat bath, for the potential energy. This is similar to stochastic dynamics with a canonical ensemble, where the Hamiltonian consists of only the potential-energy term. In conclusion, we have numerically solved the deterministic equations of motion for the two-dimensional $`\varphi ^4`$ theory and found universal short-time behavior. Based on the short-time dynamic scaling form, the critical point and all the static and dynamic critical exponents are determined. The values of the static exponents agree with those of the Ising model. More interestingly, both dynamic exponents $`z`$ and $`\theta `$ also coincide with those of the kinetic Ising model generated by the standard Monte Carlo algorithms (heat-bath and Metropolis). In particular, the dynamic exponent $`z=2.15(2)`$ is very far from $`z=1`$, which would be the expectation from a naive deterministic viewpoint. These results show that, on the one hand, the deterministic equations of motion can indeed describe the statistical properties of the system both in statics and in dynamics; on the other hand, Monte Carlo dynamics can be a good effective description of the real physical dynamics, at least in some cases. Since the measurements are carried out in the short-time regime of the dynamic evolution, our dynamic approach does not suffer from critical slowing down. Furthermore, the errors induced by a finite $`\mathrm{\Delta }t`$ in our measurements are also limited. It remains challenging to derive the short-time dynamic scaling analytically from deterministic equations of motion. Extension of the present work to quantum systems would be very interesting.
Acknowledgements: Work supported in part by the Deutsche Forschungsgemeinschaft; Schu 95/9-1 and SFB 418.
# Secondary stars in CVs – the observational picture ## 1 Are the secondary stars in CVs main sequence stars? The secondary stars in CVs have been recognised to follow approximately the mass–radius relation of main-sequence field stars \[Echeverría, 1983, Patterson, 1984, Warner 1995, Smith & Dhillon, 1998\]. A detailed comparison is complicated by the lack of information on both the masses of the secondary stars in CVs and the masses and radii of main-sequence field stars. It is advantageous, therefore, to restrict the comparison to quantities readily observable in CVs, such as the orbital period $`P`$ and the spectral type $`Sp`$ of the secondary star \[Beuermann et al., 1998, henceforth BBKW98, Smith & Dhillon, 1998\]. This approach requires constructing an equivalent diagram for field stars, predicting the orbital period of a CV at which a field star of given spectral type would fill the Roche lobe of the secondary. Note that this prediction is almost independent of the mass of the white dwarf primary. Figure 1 shows the spectral types of the secondaries in CVs with known orbital period. The data points are based on published optical/near-IR spectra quoted in the compilation of Ritter & Kolb , with some recent spectral type determinations added \[e.g., Schwarz et al., 1999, Smith et al., 1999, Thomas et al., 1999, Thorstensen et al., 1999\]. Purely photometric spectral-type assignments have not been included, nor have spectral types derived from IR spectra only. Constructing a spectral type–period diagram for field stars requires knowledge of their masses and radii. Unfortunately, neither quantity is generally well known. In this situation it is important to note that recent progress in the construction of stellar models \[Baraffe et al., 1998, henceforth BCAH98\], combined with the NextGen model atmospheres of Hauschildt et al. , has led to a substantially improved definition of the lower main sequence \[Leggett et al., 1996, henceforth L96\].
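The 'orbital period' assigned to a Roche-lobe-filling field star follows from Kepler's third law combined with a Roche-lobe approximation. The sketch below is an illustration, not the authors' exact prescription; it uses Paczyński's formula $`R_L\simeq 0.462\,a\,(M_2/(M_1+M_2))^{1/3}`$, in which the primary mass $`M_1`$ cancels exactly, which is why the predicted period is almost independent of the white dwarf mass:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
R_SUN = 6.957e8      # m

def roche_period_hours(m2_solar, r2_solar):
    """Orbital period (hours) at which a star of mass m2 and radius r2
    exactly fills its Roche lobe.  Uses Paczynski's approximation
    R_L = 0.462 * a * (M2/(M1+M2))**(1/3); combining this with Kepler's
    third law, a**3 = G*(M1+M2)*P**2/(4*pi**2), the primary mass M1
    drops out entirely."""
    m2 = m2_solar * M_SUN
    r2 = r2_solar * R_SUN
    return 2.0 * math.pi * math.sqrt(r2 ** 3 / (0.462 ** 3 * G * m2)) / 3600.0

# a Roche-lobe-filling solar twin corresponds to a period of about 8.9 h
print(round(roche_period_hours(1.0, 1.0), 1))
```

Lower-main-sequence stars with roughly $`M/M_{}\approx R/R_{}`$ then map onto the few-hour periods typical of CVs.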
In fact, as noted by BBKW98, the theoretical and observational radii of M-stars \[L96\] agree at the few-percent level for stars of given absolute $`K`$-band magnitude $`M_\mathrm{K}`$. Using the observed radii, we have established a radius scale which allows us to estimate the radii of field stars of given $`M_\mathrm{K}`$ \[Beuermann et al., 1999, henceforth BBH99\]. For the present purpose, we adopt the same stellar models, which accurately reproduce the radii, to estimate the masses as well. Specifically, we use the theoretical relation between mass and absolute $`K`$-band magnitude, $`M(M_\mathrm{K})`$ \[BCAH98\], to calculate the above-defined ‘orbital period’ for a field star of known $`M_\mathrm{K}`$ and $`Sp`$. We do not follow the approach of Clemens et al. , who adopted the observational relation between mass and absolute magnitude of Henry & McCarthy , because this relation represents a mean for stars of different age and metallicity, and we want to study how differences in metallicity affect the location of stars in the $`Sp`$–$`P`$ diagram. Figure 2 shows the resulting $`Sp`$–$`P`$ diagram for field stars of spectral types K/M, supplemented by the Sun. The main sequence of stars with near-solar metallicity is delineated by eight young disk (YD) field stars from L96 ($``$ ), the YD binary YY Gem ($`\mathrm{}`$$``$ ), the Sun ($``$), and a further 56 stars ($`,,`$) with radii from \[BBH99\]. The effects of decreasing metallicity are indicated by the binary CM Dra ($`\mathrm{}`$) and by four old disk stars ($`\mathrm{}`$) and four halo stars ($``$) from L96. Also shown in Fig. 2 are the model curves from the BCAH98 stellar models, namely the ZAMS model for solar metallicity (solid line), and models for stars aged $`10^{10}`$ yrs with 1/3 solar metallicity (long dashes) and 1/30 solar metallicity (short dashes). For the model stars, the spectral type is assigned by converting the theoretical colour $`I-K`$ to $`Sp`$ \[BBKW98\].
The solid curve is expected to agree with the locus of the late-type YD stars; the slight displacement is probably due to remaining errors in the transformations used. Comparison with Fig. 1 indicates that the (zero-age) main sequence of low-mass field stars with near-solar metallicity coincides with the locus of the CV secondaries of earliest spectral type at any given value of $`P`$. The upper left of Fig. 1 is devoid of CVs, showing that none of the systems included in the figure contains a secondary with metallicity substantially lower than solar (metallicities larger than solar are not excluded but remain unproved at present). The low space density of Pop II CVs is in agreement with population studies by Stehle et al. . The secondaries in many CVs with $`P>3`$ h have a later spectral type than expected for ZAMS secondary stars of near-solar metallicity. These secondaries are expanded relative to, and have lower masses than, Roche-lobe-filling ZAMS stars. The two principal causes for this expansion are the loss of thermal equilibrium due to the ongoing mass transfer and nuclear evolution prior to the onset of Roche-lobe overflow. Kolb & Baraffe have computed corresponding evolutionary sequences which nicely explain the observed behaviour and of which first results were presented in BBKW98. The paths included in Fig. 2 refer to secondaries starting as ZAMS stars of 1 $`M_{}`$ and evolving under mass loss rates of $`1.5\times 10^{-9}`$ $`M_{}`$ yr<sup>-1</sup> (solid curve) and $`5\times 10^{-9}`$ $`M_{}`$ yr<sup>-1</sup> (dashed curve). The point on these paths where the secondary becomes fully convective is indicated by the dotted curve \[Kolb & Baraffe, 1999\]. Late spectral types in CVs with orbital periods between 3 and 5 h can be explained by this scenario. The evolutionary models suggest that whenever the secondary becomes convective it has reached a mass of about $`0.2`$ $`M_{}`$ and a spectral type of about M4.5.
If angular momentum loss drops rapidly at this point, the secondary enters the period gap and re-appears below the gap with the same mass and a spectral type of about M4.5. The comparatively late spectral types in CVs with orbital periods larger than 5 h cannot be explained by the loss of thermal equilibrium due to mass loss, but are consistent with nuclear evolution of the secondary star prior to the onset of mass transfer. Two evolutionary paths are included in Fig. 2 for secondaries starting mass transfer at $`M=1`$ $`M_{}`$ with a rate of $`1.5\times 10^{-9}`$ $`M_{}`$ yr<sup>-1</sup> and a central hydrogen fraction which is reduced to 0.16 (dot-dashed curve) or practically exhausted (dot-dot-dashed curve) \[BBKW98, Kolb & Baraffe, 1999\]. Early evolutionary calculations of a similar type were performed by Whyte & Eggleton . The model calculations of Kolb & Baraffe indicate that stars driven out of thermal equilibrium stay at approximately the same effective temperature and spectral type as an undisturbed star of the same mass. Their results allow us to estimate the masses of the secondary stars, and thereby their Roche radii, from the observed spectral types. This is a prerequisite for distance determinations of CVs using the surface brightness method \[Bailey 1981\]. ## 2 The brightness distribution of CV secondaries The Roche-lobe-filling secondary stars experience gravity darkening and will not be of uniform surface brightness. More serious, and more difficult to model, are the variations in surface brightness caused by irradiation. Marsh showed that the strengths of the TiO bands and the NaI $`\lambda 8183,8195`$ absorption-line doublet decrease on the hemisphere of the M5 secondary star in HT Cas which faces the primary. Naively, one might expect heating of the secondary star to produce an earlier spectral type and a corresponding increase in the TiO band strength.
Observation indicates the opposite, a decreased flux in the TiO features on the illuminated hemisphere, which suggests that a major change in the atmospheric structure takes place as a result of heating. Schwope et al. present a nice tomographic picture of the non-uniform appearance of the secondary star of QQ Vul in the NaI lines. Similar results have been obtained, e.g., for AM Her \[Southwell et al., 1995\], illustrating the complications caused by the non-uniform appearance of the secondary star for dynamical mass determinations. This non-uniformity must be taken into account when applying the surface brightness method for distance measurements. ## 3 Barnes-Evans relation for field M-dwarfs Barnes & Evans found that the visual surface brightness of giants and supergiants (the only stars for which directly measured radii are available, apart from the few eclipsing binaries) is a function of colour only, independent of luminosity (gravity). BBH99 derived the surface brightness for the M-dwarfs studied by L96 and demonstrated that it deviates from that of giants. The implied small gravity dependence is in perfect agreement with recent model calculations for dwarfs and giants \[BCAH98, Hauschildt et al., 1999\]. The dwarf/giant difference reaches a maximum near $`Sp`$ M0 ($`V-I_c=1.7`$, $`V-K=3.5`$). BBH99 also showed that the surface brightness of dwarfs in the $`K`$-band depends on metallicity, in agreement with the predictions of BCAH98. Since the well-observed CVs have near-solar metallicities (Figs. 1 and 2), consideration of the gravity and metallicity dependencies allows improved surface brightness–colour relations to be established which are valid, e.g., for CVs of the galactic disk population. Fig. 3 shows the resulting surface brightness $`S_\mathrm{K}`$ in the $`K`$-band as a function of the colour $`V-K`$, which can be fit by the linear relationship given in the figure.
This relation differs from the original relation of Bailey , which was widely used in CV research. Bailey’s calibration of $`S_\mathrm{K}`$ depended on the Barnes-Evans relation for giants and is, therefore, expected to be low by some 0.3-0.4 mag near spectral type M0, or $`V-K=3.5`$. The systematic difference between the two relations is primarily due to the gravity dependence of $`S_\mathrm{K}`$. The often-cited near constancy of $`S_\mathrm{K}`$ for M-dwarfs does not exist. The scatter in our data is substantially reduced over that in Bailey’s diagram, and in the similar one by Ramseyer , because we considered only single dwarfs with near-solar metallicity and avoided colour transformations between different photometric systems. Low-metallicity M-dwarfs have $`S_\mathrm{K}`$ values lower by about $`0.5`$ mag. There is a systematic uncertainty of about 0.1 mag in the derived surface brightness, carried over from the remaining uncertainty in the radius scale \[BBH99\]. The secure identification of CV secondary stars requires spectroscopic observations in the optical/near-IR spectral regions. M-stars are best detected by their pronounced TiO bands in the red part of the optical spectrum, which display a variation in band-strength ratios with spectral type \[Wade & Horne, 1988, Marsh, 1990\]. We have calibrated the absolute strength of the flux depression at 7165 Å relative to the quasi-continuum at 7500 Å vs. spectral type for a selection of field stars with near-solar metallicities, and show the result in Fig. 4. The quantity $`F_{\mathrm{TiO}}=(F_{\lambda 7500}-F_{\lambda 7165})\times d^2/R^2`$ is the TiO flux depression expressed as a surface brightness, with $`d`$ the distance and $`R`$ the stellar radius. The spectral fluxes $`F_{\lambda 7500}`$ and $`F_{\lambda 7165}`$ represent averages over $`\pm 50`$ and $`\pm 25`$ Å, respectively. Unlike the surface brightness $`S_\mathrm{K}`$ above, $`F_{\mathrm{TiO}}`$ is given in physical units.
A second-order polynomial fit to the logarithm of $`F_{\mathrm{TiO}}`$ is $$F_{\mathrm{TiO}}=10^{5+\alpha }\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{\AA }^{-1}\;\mathrm{with}\;\alpha =-1.477+0.5332(10-S)-0.03516(10-S)^2$$ (1) where $`S`$ is the spectral subtype for M-dwarfs, i.e. $`S=5.5`$ for a dM5.5 secondary, and $`S=-1`$ for K7, preceding M0. Eq. (1) is applicable to all CVs which have secondaries with near-solar abundances. For metal-poor M-dwarfs of the old-disk and halo populations, the surface brightness $`F_{\mathrm{TiO}}`$ is reduced relative to Eq. (1). In extreme subdwarfs the $`\lambda 7165`$ flux depression disappears, and our definition of $`F_{\mathrm{TiO}}`$ becomes meaningless. Hence, Eq. (1) will need revision should spectral types become available for the secondaries in Pop II CVs, e.g., in globular clusters. ## 4 Distance measurements of CVs The surface brightness method \[Bailey, 1981\] allows the measurement of distances of CVs if the $`K`$-band flux of the secondary star or its spectral flux in the TiO $`\lambda 7500,7165`$ band can be measured. To be sure, the secondaries in CVs display a non-uniform surface brightness, and care should be taken if only a single spectrum or flux measurement is available. Ideally, the full orbital modulation is needed in order to select the most appropriate view of the secondary star for comparison with the appropriate surface brightness of field stars. In general, the $`K`$-band fluxes are not seriously affected by irradiation, while spectral features such as the TiO bands and the NaI absorption lines are greatly diminished in strength on the heated face of the secondary star \[e.g. Marsh, 1990, Schwope et al., 1999\]. Both methods have, therefore, advantages and disadvantages. The advantage of using the TiO band strength lies in its easy identification in observed spectra.
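The definition of $`F_{\mathrm{TiO}}`$ inverts directly to a distance, $`d=R\,[F_{\mathrm{TiO}}/(F_{\lambda 7500}-F_{\lambda 7165})_{\mathrm{obs}}]^{1/2}`$. A small sketch implementing Eq. (1) is given below; note that it assumes the leading polynomial coefficient carries a minus sign ($`\alpha =-1.477+\mathrm{}`$), which is required for $`F_{\mathrm{TiO}}`$ to stay below the total photospheric surface flux of an M dwarf, and the function names are illustrative:

```python
import math

R_SUN_PC = 2.2546e-8     # one solar radius in parsecs (assumed conversion)

def f_tio(S):
    """Calibrated TiO 7165 A surface brightness of Eq. (1), in
    erg cm^-2 s^-1 A^-1, for spectral subtype S (S = 5.5 for dM5.5,
    S = -1 for K7).  The signs assume
    alpha = -1.477 + 0.5332*(10-S) - 0.03516*(10-S)**2."""
    x = 10.0 - S
    alpha = -1.477 + 0.5332 * x - 0.03516 * x ** 2
    return 10.0 ** (5.0 + alpha)

def tio_distance_pc(S, r2_solar, depression_obs):
    """Distance in pc from the observed depression (F_7500 - F_7165),
    in erg cm^-2 s^-1 A^-1, via F_TiO = depression * d**2 / R**2."""
    return r2_solar * R_SUN_PC * math.sqrt(f_tio(S) / depression_obs)
```

For a given spectral type, the calibrated surface brightness and an observed flux depression thus give the distance directly, up to the uncertainty in the adopted Roche radius.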
We have used both the $`K`$-band brightness of the secondary and its TiO band strength to derive the distances of selected CVs, and show some results in Table 1 (for a more detailed discussion see Beuermann & Weichhold, 1999; note that the error in $`M_2`$ has not been considered and the distances scale as $`M_2^{1/3}`$). Ideally, both methods should yield the same distances, which should agree within errors with the trigonometric parallax. This is, in fact, the case for AM Her and U Gem (parallaxes by C. Dahn, private communication, Harrison et al., 1999). We have confidence also in the distances derived for V834 Cen and Z Cha, which have no trigonometric parallaxes yet. There are further, hopefully clear, cases which represent CVs with well-defined TiO features; obtaining independent distance information for these systems, e.g., from the $`K`$-band fluxes of the secondary stars, is clearly desirable. There are also controversial cases, however, which indicate some of the pitfalls of the methods. For HT Cas, the distance derived from infrared photometry (Berriman et al., 1987) is probably wrong, as noted already by Ramseyer . Another important case is that of SS Cyg, which has $`K=9.4`$ in quiescence. If that IR flux is entirely due to the K4 secondary, as assumed by Bailey , our calibration yields $`d=105`$ pc. On the other hand, the secondary contributes only about $`50`$% of the visual flux, which implies that it has $`K\simeq 10.2`$ and raises the distance to $`d\simeq 152`$ pc, consistent with the recent HST FGS parallax of 166 pc \[Harrison et al., 1999\]. Note that assuming a K5 secondary would reduce $`d`$ again, to 133 pc. Finally, the 8-h AM Herculis binary V1309 Ori has an (evolved) M0-M1 secondary and a TiO distance of 745 pc. The optical flux and spectral type imply that the secondary accounts for most of the $`K`$-band flux, which is inconsistent with the statement of Harrop-Allin et al.
of a contribution as low as $`20`$%, based on $`K`$-band spectroscopy of the illuminated face of the secondary. The discrepancy is likely due to problems associated with the interpretation of illuminated stellar atmospheres. We conclude that reliable distances of CVs can be obtained from both the $`K`$-magnitude and the TiO band strength of the secondary if its flux contribution can be unequivocally determined, illumination effects are taken properly into account, and the viewing direction on to the secondary is known. Measuring accurate distances to CVs allows one to derive basic quantities such as their absolute magnitudes and mass transfer rates \[Warner, 1987\]. Much of our understanding of the physics of CVs \[Warner, 1995\] is based on the tedious derivation of such quantities. ## 5 Acknowledgements I thank Isabelle Baraffe and Ulrich Kolb for many discussions and for allowing me to show their evolutionary paths in Fig. 1 prior to publication, Hans Ritter for pointing out some errors in my earlier compilation of spectral types of secondary stars, and Marc Weichhold for the initial work on this subject.
# Can Flavor-Independent Supersymmetric Soft Phases Be the Source of All CP Violation? ## Abstract Recently it has been demonstrated that large phases in softly broken supersymmetric theories are consistent with electric dipole moment constraints, and are motivated in some (Type I) string models. Here we consider whether large flavor-independent soft phases may be the dominant (or only) source of all CP violation. In this framework $`ϵ`$ and $`ϵ^{}/ϵ`$ can be accommodated, and the SUSY contribution to the B system mixing can be large and dominant. An unconventional flavor structure of the squark mass matrices (with enhanced super-CKM mixing) is required for consistency with B and K system observables. preprint: VPI-IPPAP-99-08 hep-ph/9909480 Although the first experimental evidence of CP violation was discovered over thirty years ago in the K system , the origin of CP violation remains an open question. In the Standard Model (SM), all CP violation arises due to a single phase in the Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix . While the SM framework of CP violation provides a natural explanation for the small value of $`ϵ`$ in the K system and is supported by the recent CDF measurement of $`\mathrm{sin}2\beta `$ through the decay $`B\rightarrow \psi K_S`$ , it is not clear whether the SM prediction is in agreement with the observed value of $`ϵ^{}/ϵ`$ recently measured by (confirming the earlier results of ) due to theoretical uncertainties . However, the SM cannot account for the baryon asymmetry , and hence new physics is necessarily required to describe all observed CP violation. In this paper, we investigate the possibility of a unified picture of CP violation by adopting the hypothesis that all observed CP violation can be attributed to the phases which arise in the low-energy minimal supersymmetric standard model (MSSM), as first suggested by Frère et al. . The issue of CP violation in supersymmetric theories is not a new question .
However, much of our analysis is motivated by embedding the MSSM into a particular string-motivated D-brane model at high energies , which departs significantly from the standard results for CP violation in SUSY models (which we summarize for the sake of comparison). The CP-violating phases of the MSSM can be classified into two categories: (i) the flavor-independent phases (in the gaugino masses, $`\mu `$, etc.), and (ii) the flavor-dependent phases (in the off-diagonal elements of the scalar mass-squares and trilinear couplings). We focus here on the flavor-independent phases; these phases have traditionally been assumed to be small ($`\stackrel{<}{_{}}10^{-2}`$) unless the sparticle masses are $`𝒪`$(TeV), as the phases are individually highly constrained by the experimental upper bounds on the electric dipole moments (EDMs) of the electron and neutron . However, a reinvestigation of this issue has demonstrated that cancellations between different contributions to the EDMs can allow for viable regions of parameter space with phases of $`𝒪(1)`$ and light sparticle masses. In recent work , we found a (Type I) string-motivated model of the soft breaking terms, based on embedding the SM on five-branes, in which large flavor-independent phases can be accommodated. The large relative phases between the gaugino mass parameters in this model play a crucial role in providing the cancellations in the EDM’s, yielding regions of parameter space in which the electron and neutron EDM bounds are satisfied simultaneously. In this model, the CP-violating phases in the soft breaking terms are due to the (assumed) presence of complex F-component VEV’s of moduli fields. Complex scalar moduli VEV’s can in principle also lead to phases in the superpotential Yukawa couplings; however, for simplicity we assume here that the phase of the CKM matrix is numerically close to zero .
The crucial feature of our scenario compared to previous work is that all flavor-independent phases in the soft SUSY breaking sector can be large, with the EDM constraints satisfied by cancellations motivated by the underlying theory. We will show that SUSY can account for all observed CP violation with large flavor-independent phases (including the relative phases of the gaugino masses, which are zero in many SUSY models) and a particular flavor structure of the squark mass matrices. We focus on the low $`\mathrm{tan}\beta `$ regime, distinguishing our results from other recent work . The baryon asymmetry can be explained in SUSY ; see for a study of baryogenesis within this approach. The CP-violating and FCNC processes that we consider are presented in Table I (we do not list the electron EDM, but consider only parameter sets which satisfy the electron and neutron EDM constraints). First note that, generically, the matrices which diagonalize the quark mass matrices and those which diagonalize the squark mass matrices are not equivalent, due to SUSY breaking effects. The sfermion mass matrices are expressed in the super-CKM basis, in which the squarks and quarks are rotated simultaneously. In this basis the sfermion mass matrices are non-diagonal, and the amplitudes depend on the matrices $`\{\mathrm{\Gamma }_{U,D_{L,R}}^{\mathrm{SKM}}\}`$ which rotate the squarks from the SCKM basis into the mass eigenstates. As shown schematically in Table I, particular processes are sensitive to certain elements of the quark and squark diagonalization matrices.
We find that an unconventional flavor structure of the $`\mathrm{\Gamma }^{\mathrm{SKM}}`$ matrices in the up-squark sector at the electroweak scale is required to reproduce the observed CP violation in the K and B systems: $`\mathrm{\Gamma }_{U_L}^{\mathrm{SKM}}=\left(\begin{array}{cccccc}1& \lambda ^{}+\lambda & \lambda ^{}c_\theta & 0& 0& \lambda ^{}s_\theta e^{i\phi _{\stackrel{~}{t}}}\\ -(\lambda ^{}+\lambda )& 1& \lambda ^{}c_\theta & 0& 0& \lambda ^{}s_\theta e^{i\phi _{\stackrel{~}{t}}}\\ \lambda ^{}& \lambda ^{}& c_\theta & 0& 0& s_\theta e^{i\phi _{\stackrel{~}{t}}}\end{array}\right);`$ (4) $`\mathrm{\Gamma }_{U_R}^{\mathrm{SKM}}=\left(\begin{array}{cccccc}0& 0& 0& 1& 0& 0\\ 0& 0& 0& 0& 1& 0\\ 0& 0& s_\theta e^{i\phi _{\stackrel{~}{t}}}& 0& 0& c_\theta \end{array}\right),`$ (8) where $`\lambda ^{}\stackrel{<}{_{}}\lambda \equiv \mathrm{sin}\theta _c`$, and $`\theta `$, $`\phi _{\stackrel{~}{t}}`$ denote the stop mixing parameter and its phase; entries of $`𝒪(\lambda ^2)`$ are neglected. Note that the mixing in the LL sector is enhanced compared to that of the SM, while in the RR sector it is negligible (this is easily seen by setting $`\theta =0`$). We now estimate the SUSY contributions to the observables in Table I. We will be working in a framework similar to the one laid out in , except that we also assume significant flavor mixing in the trilinear soft terms already at the GUT scale. In particular, we assume the $`A`$-terms to be of the form $`e^{i\varphi _A}BY^{u,d}B^{}`$, where $`B,B^{}`$ are $`real`$ matrices with considerable off-diagonal elements. Further, we assume that the squarks (except for the lightest stop) are degenerate in mass, retain only the lightest stop except in the case of $`ϵ`$ and $`ϵ^{}`$ (for which the first two generations give the leading contribution), and neglect all but the top quark mass unless other fermion masses give the leading contributions.
For the purpose of presentation, we separate the stop left-right mixing from the family mixing. The family mixing matrices $`\stackrel{~}{K}^{L,R}`$ are defined as $`\stackrel{~}{K}_{ij}^L=(\mathrm{\Gamma }_{U_L}^{SKM})_{ij}|_{\theta =0}`$, $`\stackrel{~}{K}_{ij}^R=(\mathrm{\Gamma }_{U_R}^{SKM})_{i,j+3}|_{\theta =0}`$ with $`i,j=\mathrm{1..3}`$. In accordance with the chosen form of the $`\mathrm{\Gamma }^{}`$s, we assume $`\stackrel{~}{K}_{ij}^L\simeq \lambda /3`$ and $`\stackrel{~}{K}_{ij}^R\simeq 0`$ for $`i\ne j`$. These matrices are $`real`$, as the only source of CP-violating phases in the $`\mathrm{\Gamma }^{}`$s is the stop mixing. We assume maximal chargino and stop mixings, and the following parameter values: $`m_{\stackrel{~}{t}}\simeq 140`$ GeV, $`m_{\stackrel{~}{\chi }}\simeq 100`$ GeV, $`m_{\stackrel{~}{q}}\simeq m_{\stackrel{~}{g}}\simeq 350`$ GeV, and $`A\simeq 250`$ GeV. Our estimates agree to within better than an order of magnitude with the numerical results to be presented in . Let us first turn to the discussion of $`ϵ`$ and $`ϵ^{}`$. Here we utilize the mass insertion approximation and the associated $`(\delta _{ij})_{AB}`$ parameters (see e.g. ). Since we study the impact of flavor-independent phases at high energies, the LL and RR insertions are essentially real (their phases are effectively generated only at the two-loop level; see the RGE’s in ). The LR insertion always occurs in combination with the gluino phase $`\phi _3`$ due to reparameterization invariance; the physical combination of phases is $`(\delta _{12})_{LR}e^{i\varphi _3}`$ (the gluino phase has generally been neglected in earlier work). Our numerical studies show that the observed values of $`ϵ`$ and $`ϵ^{}`$ can be reproduced for $`|(\delta _{12}^d)_{LR}|\simeq 3\times 10^{-3}`$ and $`Arg((\delta _{12})_{LR}^de^{i\varphi _3})\simeq 10^{-2}`$, in agreement with . This value of $`|(\delta _{12}^d)_{LR}|`$ can be obtained in models with large flavor violation in the $`A`$-terms.
Note also that this value of $`(\delta _{12}^d)_{LR}`$ leads to a significant gluino contribution to $`\mathrm{\Delta }m_K`$. The leading chargino contribution to $`(M_K)_{12}`$ is CP-conserving, as can be seen from $`(M_K)_{12}^{\stackrel{~}{t}\stackrel{~}{\chi }}\simeq {\displaystyle \frac{g^4}{384\pi ^2}}{\displaystyle \frac{m_Kf_K^2}{m_{\stackrel{~}{t}}^2}}\left(\stackrel{~}{K}_{td}^L\stackrel{~}{K}_{ts}^L\right)^2|V_{11}T_{11}|^4,`$ (9) (recall that $`\stackrel{~}{K}^{L,R}`$ are real). $`V`$ and $`T`$ denote the chargino and stop mixing matrices; to simplify this expression we employed the approximation $`m_{\stackrel{~}{t}}^2\gg m_{\stackrel{~}{\chi }}^2`$. This contribution gives $`\mathrm{\Delta }m_K\sim 10^{-16}`$ GeV, well below the experimental value. Therefore $`\mathrm{\Delta }m_K`$ is dominated by the Standard Model and gluino contributions (as in ). In our approach the SM tree diagrams for B decays are real, and there is negligible interference with the superpenguin diagrams. Therefore the B system is essentially superweak, with all CP violation due to mixing. In contrast to the case of $`K\overline{K}`$ mixing, $`B\overline{B}`$ mixing is dominated by the chargino contribution: $`(M_B)_{12}^{\stackrel{~}{t}\stackrel{~}{\chi }}\simeq {\displaystyle \frac{g^4}{384\pi ^2}}{\displaystyle \frac{m_Bf_B^2}{m_{\stackrel{~}{t}}^2}}\left(\stackrel{~}{K}_{td}^L\stackrel{~}{K}_{tb}^L\right)^2|V_{11}T_{11}|^4\left(1-{\displaystyle \frac{h_t}{g}}{\displaystyle \frac{V_{12}^{}T_{12}^{}\stackrel{~}{K}_{tb}^R}{V_{11}^{}T_{11}^{}\stackrel{~}{K}_{tb}^L}}\right)^2.`$ (10) The corresponding $`\mathrm{\Delta }m_B`$ is of order $`10^{-13}`$ GeV, which is roughly the observed value. The SM contribution to $`\mathrm{\Delta }m_B`$ is significantly smaller, since the CKM orthogonality condition forces $`V_{td}`$ to take its smallest allowed value. The CP-violating gluino contribution requires two LR mass insertions and, as a result, is suppressed by $`(m_b/\stackrel{~}{m})^2`$.
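As a cross-check, the quoted orders of magnitude follow from a direct numerical evaluation of Eqs. (9) and (10) with the parameter set given above. Here maximal mixing is taken to mean $`|V_{11}|=|T_{11}|=1/\sqrt{2}`$, the bracket in Eq. (10) is set to unity, and the meson masses and decay constants are illustrative inputs not specified in the text:

```python
import math

g = 0.65                      # SU(2) gauge coupling (illustrative)
lam_c = 0.22                  # Cabibbo lambda = sin(theta_c)
m_stop = 140.0                # lightest stop mass (GeV)
mK, fK = 0.498, 0.16          # kaon mass and decay constant (GeV)
mB, fB = 5.28, 0.19           # B meson mass and decay constant (GeV)

K_td = K_ts = lam_c / 3.0     # real super-CKM family mixings ~ lambda/3
K_tb = 1.0
mix4 = 0.5 ** 4               # |V11*T11|**4 for maximal mixings

pref = g ** 4 / (384.0 * math.pi ** 2)
M12_K = pref * mK * fK ** 2 / m_stop ** 2 * (K_td * K_ts) ** 2 * mix4
M12_B = pref * mB * fB ** 2 / m_stop ** 2 * (K_td * K_tb) ** 2 * mix4

dmK = 2.0 * M12_K             # ~ 1e-16 GeV, well below the measured Delta m_K
dmB = 2.0 * M12_B             # ~ 3e-13 GeV, the observed order of Delta m_B
```

The chargino contribution to the kaon mass difference is thus harmless, while the same diagram saturates the observed $`\mathrm{\Delta }m_B`$, as stated in the text.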
Similar considerations hold for $`B_s\overline{B}_s`$ mixing, although the mixing phase is generally smaller than that in $`B_d\overline{B}_d`$ due to a significant CP-conserving SM contribution. Although the CP asymmetries and the CKM entries are not related, $`\mathrm{sin}2\beta `$ and $`\mathrm{sin}2\alpha `$ can be defined in terms of the above asymmetries ($`\mathrm{sin}2\gamma `$ can be defined via the CP asymmetry in $`B_s\rightarrow \rho K_s`$). The angles of the “unitarity triangle” defined in this way need not sum to $`180^{\circ }`$ as in the SM. Our results demonstrate that the chargino contribution alone is sufficient to account for the observed value of $`\mathrm{sin}2\beta `$ reported in the CDF preliminary results . This can be seen from (10), since the mixing phase can be as large as $`\pi /2`$ if $`O(1)`$ phases are present in $`V`$ and $`T`$. In Fig. 1 we show contour plots of both $`\mathrm{sin}2\beta `$ and $`\mathrm{\Delta }m_B`$ in the $`\phi _{\stackrel{~}{t}}`$-$`\phi _\mu `$ plane. The CP asymmetries in $`B\rightarrow \psi K_s`$ and $`B\rightarrow \pi ^+\pi ^{-}`$ are related: $`\mathrm{sin}2\beta =\mathrm{sin}2\alpha `$. This relation is characteristic of superweak models with a real CKM matrix , and is not consistent with the SM, as seen using the “sin” relation: $`\mathrm{sin}\beta /\mathrm{sin}\alpha =|V_{ub}|/|V_{cb}\mathrm{sin}\theta _c|`$. The LHS implies $`|V_{ub}|/|V_{cb}\mathrm{sin}\theta _c|=1`$, while the experimental upper bound on the RHS is 0.45, verifying the non-closure of the unitarity triangle. We now turn to the $`b\rightarrow s\gamma `$ CP asymmetry $`𝒜_{CP}(b\rightarrow s\gamma )`$.
The dominant contribution is due to mixing between the magnetic penguin operator Wilson coefficients $`C_7`$ and $`C_8`$: $`𝒜_{CP}(b\to s\gamma )\simeq {\displaystyle \frac{4\alpha _s(m_b)}{9|C_7|^2}}\mathrm{Im}(C_7C_8^{*}).`$ (11) Both $`C_7`$ and $`C_8`$ receive real SM contributions, and hence the SUSY contribution from the chargino-stop loop has to be competitive while at the same time respecting the experimental limits on $`\mathrm{BR}(b\to s\gamma )`$. As a result, larger values of $`𝒜_{CP}(b\to s\gamma )`$ usually imply branching ratios further away from the experimental central value. Typical results still predict asymmetries larger than in the SM (of order several percent). We checked that the enhanced super-CKM mixing does not lead to an overproduction of $`D\overline{D}`$ mixing. Since the chargino contribution is subject to strong GIM cancellations, the leading contribution is given by the gluino-stop loop: $`(M_D)_{12}^{\stackrel{~}{t}\stackrel{~}{g}}\simeq {\displaystyle \frac{\alpha _s^2}{27}}{\displaystyle \frac{m_Df_D^2}{m_{\stackrel{~}{g}}^2}}\mathrm{ln}{\displaystyle \frac{m_{\stackrel{~}{g}}^2}{m_{\stackrel{~}{t}}^2}}\left(\stackrel{~}{L}_{tu}^L\stackrel{~}{L}_{tc}^L\right)^2|T_{11}|^4,`$ (12) where $`\stackrel{~}{L}`$ is a real matrix which has roughly the same form as $`\stackrel{~}{K}`$. $`\mathrm{\Delta }m_D`$ is of order $`10^{-14}`$ GeV, which corresponds to $`x=(\mathrm{\Delta }m/\mathrm{\Gamma })_{D^0}`$ between $`10^{-3}`$ and $`10^{-2}`$; this is in the range of the SM prediction and is consistent with recent CLEO measurements . Next consider the CP violating decay $`K_L\to \pi ^0\nu \overline{\nu }`$, which in the SM provides an alternate way to determine $`\mathrm{sin}\beta `$.
It proceeds through a CP-violating $`Zds`$ effective vertex, for which the dominant SUSY contribution is the chargino-stop loop: $`Z_{ds}^{\stackrel{~}{t}}\simeq {\displaystyle \frac{1}{4}}{\displaystyle \frac{m_{\stackrel{~}{\chi }}^2}{m_{\stackrel{~}{t}}^2}}\mathrm{ln}{\displaystyle \frac{m_{\stackrel{~}{\chi }}^2}{m_{\stackrel{~}{t}}^2}}|V_{11}T_{11}|^2\stackrel{~}{K}_{td}^L\stackrel{~}{K}_{ts}^L.`$ (13) This contribution conserves CP, and thus we expect the ratio of branching ratios $`K_L\to \pi ^0\nu \overline{\nu }/K^+\to \pi ^+\nu \overline{\nu }`$ to be $`𝒪(ϵ)`$. This clearly violates the SM relation between the CP asymmetry in $`B\to \psi K_s`$ and the branching ratio of $`K_L\to \pi ^0\nu \overline{\nu }`$. However, the CP-conserving (charged) mode of this decay is dominated by the SM and chargino contributions. Typically we expect $`Z_{ds}^{\stackrel{~}{t}}`$ to be of order $`10^{-4}`$, which translates into a branching ratio for $`K^+\to \pi ^+\nu \overline{\nu }`$ of the order of $`10^{-10}`$. In certain regions of the parameter space, this branching ratio can be significantly enhanced (up to an order of magnitude) over the SM prediction. To summarize: our approach provides a unified view of all CP violation (including the baryon asymmetry ), which is testable at future colliders and at the B factories, tying its origin to fundamental CP-violating parameters within a (Type I) string-motivated context. CP violation in the K system is mainly due to the gluino-squark diagrams, with phases from the gluino mass $`M_3`$ and the trilinear coupling $`A`$. As the CKM matrix is by assumption (approximately) real, the B system is superweak: CP violation occurs mainly due to mixing. Therefore the unitarity triangle does not close, and we expect $`\mathrm{sin}2\beta /\mathrm{sin}2\alpha =1`$. $`\mathrm{\Delta }m_K`$ is dominated by the SM and gluino contributions, while $`\mathrm{\Delta }m_B`$ is dominated by the chargino-stop contribution.
$`K^+\to \pi ^+\nu \overline{\nu }`$ can be enhanced while $`K_L\to \pi \nu \overline{\nu }`$ is suppressed compared to the SM predictions. $`D\overline{D}`$ mixing is expected to occur at a level somewhat below the current limit. The CP asymmetry in $`b\to s\gamma `$ can be considerably enhanced over its SM value. The electric dipole moments of the electron and neutron are suppressed by cancellations and should have values near the current limits. Our approach dictates an unconventional and interesting flavor structure for the squark mass matrices at low energies, which is required for consistency with the preliminary experimental value of $`\mathrm{sin}2\beta `$. An investigation of the connection of these matrices to the flavor structure of a basic theory at high energies is underway .

###### Acknowledgements.

We would like to thank G. Good for helpful discussions and numerical work, J. Hewett for helpful suggestions, and S. Khalil for correspondence. This work is supported in part by the U.S. Department of Energy.
| Observable | Dominant Contribution | Flavor Content |
| --- | --- | --- |
| nEDM | $`\stackrel{~}{g}`$, $`\stackrel{~}{\chi }^+`$, $`\stackrel{~}{\chi }^0`$ | $`(\delta _{dd})_{LR}`$, $`\stackrel{~}{K}_{ud}\stackrel{~}{K}_{ud}^{*}`$ |
| $`ϵ`$ | $`\stackrel{~}{g}`$ | $`(\delta _{ds})_{LR}`$ |
| $`ϵ^{\prime }`$ | $`\stackrel{~}{g}`$ | $`(\delta _{ds})_{LR}`$ |
| $`\mathrm{\Delta }m_K`$ | SM, $`\stackrel{~}{g}`$ | SM, $`(\delta _{ds})_{LR}`$ |
| $`K_L\to \pi \nu \overline{\nu }`$ | $`\stackrel{~}{g}`$ | $`(\delta _{ds})_{LR}`$ |
| $`\mathrm{\Delta }m_{B_d}`$ | $`\stackrel{~}{\chi }^+`$ | $`|\stackrel{~}{K}_{tb}\stackrel{~}{K}_{td}^{*}|`$ |
| $`\mathrm{\Delta }m_{B_s}`$ | SM, $`\stackrel{~}{\chi }^+`$ | $`|\stackrel{~}{K}_{tb}\stackrel{~}{K}_{ts}^{*}|`$ |
| $`\mathrm{sin}2\beta `$ | $`\stackrel{~}{\chi }^+`$ | $`\stackrel{~}{K}_{tb}\stackrel{~}{K}_{td}^{*}`$ |
| $`\mathrm{sin}2\alpha `$ | $`\stackrel{~}{\chi }^+`$ | $`\stackrel{~}{K}_{tb}\stackrel{~}{K}_{td}^{*}`$ |
| $`\mathrm{sin}2\gamma `$ | $`\stackrel{~}{\chi }^+`$ | $`\stackrel{~}{K}_{tb}\stackrel{~}{K}_{ts}^{*}`$ |
| $`𝒜_{CP}(b\to s\gamma )`$ | $`\stackrel{~}{\chi }^+`$ | $`\stackrel{~}{K}_{tb}\stackrel{~}{K}_{ts}^{*}`$ |
| $`\mathrm{\Delta }m_D`$ | $`\stackrel{~}{g}`$ | $`|\stackrel{~}{K}_{tc}\stackrel{~}{K}_{tu}^{*}|`$ |
| $`n_B/n_\gamma `$ | $`\stackrel{~}{\chi }^+`$, $`\stackrel{~}{\chi }^0`$, $`\stackrel{~}{t}_R`$ | |

Table I: We list the CP-violating observables and our dominant one-loop contributions (we work within the decoupling limit and hence neglect the charged Higgs). The third column schematically shows the flavor physics. Basically the $`\delta `$’s are elements of the squark mass matrices normalized to some common squark mass, and the $`\stackrel{~}{K}`$’s are related to the $`\mathrm{\Gamma }^U`$ matrices defined in the text (with the stop mixing factored out, so they represent the family mixing only). Subscripts label flavor or chirality.
The table is designed to demonstrate symbolically which observables are related (or not) to others. More technically, in the down-squark sector, we utilize the $`(\delta _{ij})_{AB}`$ parameters of the mass insertion approximation. $`\stackrel{~}{K}_{ij}`$ labels the flavor factors which enter in diagrams involving up-type squarks. The flavor factors which enter the $`b\to s\gamma `$ and the nEDM amplitudes are different from the $`\stackrel{~}{K}`$ matrices, but the flavor structure is similar (analogous statements apply for $`D\overline{D}`$ mixing).
# The Effect of Environment on the X-Ray Emission from Early-Type Galaxies

## 1 Introduction

There are two primary sources for the X-ray luminosity of early-type galaxies: emission from stellar sources and optically thin radiation from a hot, dilute interstellar medium (e.g., Fabbiano 1989). For the more luminous X-ray-emitting ellipticals, there is little doubt that emission from hot interstellar gas is the dominant mechanism. To explain the behavior of the X-ray emission from hot gas, cooling flow models were developed, and they had a number of successes. In the standard model, gas that is shed by stars is converted into hot gas with a temperature corresponding to the velocity dispersion of the stars. Radiative losses cause the gas to fall inward into the galaxy, releasing additional gravitational energy. This model is able to account for the observed magnitude of the X-ray luminosity, it can reproduce the X-ray surface brightness distribution, and it predicts a correlation between $`L_X`$ and $`L_B`$. However, there are a few important discrepancies between theory and observation. The predicted slope of the $`L_X`$–$`L_B`$ relationship (approximately $`L_X\propto L_B^m`$, where $`m=1.6`$–1.8) is not as steep as the observational relationship obtained using Einstein Observatory data, where $`m=1.7`$–2.4. More recent work by Brown & Bregman (1998) and Irwin & Sarazin (1998) finds the slope to be about 2.7 from ROSAT data, in clear disagreement with the steady-state cooling flow model. Another important discrepancy is the extremely large dispersion about the relationship, about two orders of magnitude in $`L_X`$ at fixed $`L_B`$, first discovered in Einstein Observatory data (Canizares, Fabbiano, & Trinchieri 1987) and confirmed with the ROSAT sample (White & Davis 1997; Brown & Bregman 1998).
There have been a few explanations for the cause of the large dispersion in $`L_X`$, which has been attributed either to the internal properties of the galaxy or to the external influences of the environment. D’Ercole et al. (1989) suggested that slight differences in the structure of a galaxy could lead to widely variant X-ray luminosities at fixed optical luminosity. However, this result holds only for a particular form of the supernova rate as a function of cosmic time, which may not be likely (Loewenstein & Mathews 1991). Other models that include the effects of supernova heating and galactic winds do not predict a large dispersion in $`L_X`$ at the current epoch (David, Forman, & Jones 1991; Loewenstein & Mathews 1987; Vedder, Trester, & Canizares 1988). Environmental effects are becoming more widely understood to have a significant, if not central, influence on the X-ray properties of a galaxy. One effect is stripping of gas from a galaxy as it passes through the ambient cluster medium (Takeda, Nulsen, & Fabian 1984; Gaetz, Salpeter, & Shaviv 1987). This is likely to be important in the richer clusters, but most current samples do not contain very rich clusters (the Virgo Cluster being the richest). Although there is evidence of stripping for NGC 4406 in Virgo, early-type galaxies in Virgo are not X-ray underluminous in general, so stripping is probably the exception rather than the rule. Another environmental effect is accretion of material onto galaxies, which can raise the X-ray luminosity substantially (Brighenti & Mathews 1998) and probably is most important in gas-rich group and cluster environments. A third environmental effect is the ability of an ambient medium to stifle a galactic wind, causing those galaxies to retain their gas and become more X-ray luminous than field galaxies (Brown & Bregman 1998). In Brown & Bregman (1998), we presented the X-ray luminosities of a complete optically-selected sample of 34 early-type galaxies in the 0.5–2.0 keV ROSAT band.
We noticed that the galaxies with the largest values of $`L_X`$ (for a given $`L_B`$) were in the more populated regions (the Virgo or Fornax clusters, or the centers of groups) of the sample, suggesting that being in moderately rich environments enhances the X-ray emission of a system. This trend may differ from that reported by White & Sarazin (1991), who found that lower-X-ray-luminous galaxies had 50% more bright neighboring galaxies than higher-X-ray-luminous galaxies. To further assess the importance of environment upon X-ray luminosities, we advance our previous work by analyzing the X-ray properties of our sample in the context of a quantitative measure of the galactic richness provided by Tully (1988). In addition, we present information on data processing, sample bias, temperature distribution, and several other issues not discussed in our previous Letter.

## 2 Galaxy Sample and Data Processing

The criteria for including a galaxy in this work are established in Brown & Bregman (1998). The targets (see Table 1) are detected at the ≥97% confidence level, and include twelve galaxies lying within the poor clusters of Fornax and Virgo, with the remainder either isolated or lying in loose groups. For the purposes of this survey, we used PSPC data in the processing rather than HRI data when available, since HRI data contain no spectral information (Table 2). Data analysis was performed using the PROS system under NOAO’s IRAF (software written specifically for X-ray data) with the objective of obtaining a well-defined $`L_X`$ for each galaxy in the sample (§2.1), and $`T_X`$ (X-ray gas temperature) when possible. Spatial analyses (§2.2) were performed on images using a blocking factor of eight for the normalized PSPC data (neglecting the softest energy channels, PI $`<`$ 20), creating 4 arcsec pixels. Blocking factors of two or four were used for HRI data, resulting in 1–2 arcsec pixels.
Spectral analyses (§2.3) were performed on normalized QPOE files using the same regions chosen for the spatial analyses, and binned according to the quality of the data.

### 2.1 Source and Background

The extent of the region within which the X-ray flux is determined (the source), and the image regions chosen for background subtraction, can have a significant effect on the determination of $`L_X`$ because the galaxies in our sample may sit in clusters or loose groups. Brown & Bregman (1998) briefly state the source and background radial limits, and justify the use of de Vaucouleurs’ half-light radius, $`r_e`$, as the basis for those limits. We chose to use a 4$`r_e`$ radius for the advantages it offers. A circle of 4$`r_e`$ is large enough to be resolved by the ROSAT PSPC for each galaxy, yet small enough to avoid possible cluster emission problems. For our five weakest targets (NGC 1344, NGC 3377, NGC 5061, NGC 5102, and NGC 7507), background emission dominates the signal when going out to 4$`r_e`$. We therefore obtained the flux within 1$`r_e`$, to get its best measure while maximizing the S/N. That value is then multiplied by a factor of 1.58 to extrapolate the flux out to 4$`r_e`$. The correction factor is obtained by taking the ratio of a beta model ($`\beta `$=0.5, $`r_{core}=r_e/11`$) integrated from 0 to 4$`r_e`$ to one integrated from 0 to 1$`r_e`$. The background is chosen based on our desire to examine $`L_X`$ only within 4$`r_e`$. If the background is defined to be a region far from the source center, as is usually done, there may still exist cluster emission past 4$`r_e`$ surrounding the source. To establish a flux within 4$`r_e`$, the excess emission along our line of sight must be removed. The best way to accomplish this is to subtract an annulus within which the mean surface brightness equals the mean surface brightness of the unwanted emission in front of and behind the source.
We mathematically determined the radial boundaries of this annulus, using a beta model for extended emission in elliptical galaxies of the form $$I_X=I_o[1+(r/r_{core,X})^2]^{-3\beta +0.5}$$ (1) where $`I_o`$ is the central surface brightness, and $`r_{core,X}`$ is the core radius of the X-ray emission. Although an infinite range of inner and outer radii is possible, we find that, to maximize the signal-to-noise, a unique solution of $`r_1=4r_e`$ and $`r_2=6.3r_e`$ exists for $`\beta =0.5`$.

#### 2.1.1 NGC 4494

One galaxy in the sample, NGC 4494, lies far off-axis (≈45′ off). At this radius, the PSF becomes large and distorted, so we needed to correct for the difference between the on-center position and its off-axis position. We could not simply scale the amount within 1$`r_e`$ to 4$`r_e`$ because 1$`r_e`$ is significantly smaller than the PSF of the instrument, so the instrument scatters most of the photons out of 1$`r_e`$. This is the only object in our sample for which that is true. Instead, we chose a circle of radius 240″ as the source, and similar-sized circles on either side as the background. As a comparison, we calculated the flux using an annular background 4–6.3 $`r_e`$ from the center of the source, which resulted in only a 10% difference, smaller than the uncertainty due to photon statistics.

### 2.2 Spatial Analysis

Spatial analysis was performed on the data in order to compare our well-defined choice of background with backgrounds taken at large radii. We first determined the total photon count (proportional to the flux) in a $`4r_e`$ circle using the 4–6.3 $`r_e`$ background (see Table 2). This flux was then compared to one obtained by subtracting a background region beyond the extent of the X-ray emission. This background region was also taken to be an annulus, and was typically located at $`>7r_e`$ from the galaxy center.
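The 1.58 aperture correction of §2.1 follows directly from the beta model of Eq. (1): for $`\beta =0.5`$ the surface-brightness exponent is −1, and the enclosed flux has a closed form, so the quoted factor can be checked in a few lines:

```python
import math

# Check of the aperture correction quoted in Sec. 2.1.  For the beta model
# of Eq. (1) with beta = 0.5, the flux enclosed within radius R is
#   F(R) = pi * I_o * r_core^2 * ln(1 + (R/r_core)^2);
# the central surface brightness I_o cancels in the ratio.
def beta_flux(R, r_core):
    return math.pi * r_core**2 * math.log(1.0 + (R / r_core)**2)

r_e = 1.0
r_core = r_e / 11.0
ratio = beta_flux(4.0 * r_e, r_core) / beta_flux(1.0 * r_e, r_core)
print(f"flux(4 r_e) / flux(1 r_e) = {ratio:.2f}")  # -> 1.58
```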
The comparisons show that only for 36% of the galaxies is there a difference greater than 10% between taking a background as detailed above and a more traditional background (neglecting the five weak detections and NGC 4494). We find that differences greater than 10% exist for only four of the galaxies (NGC 1395, NGC 1399, NGC 4406, and NGC 4472) when photon count errors are taken into account, and 68% of the galaxies show differences of less than 5% (Figure 1).

### 2.3 Spectral Analysis

We used a Raymond-Smith plasma model for spectral fits, succinctly discussed in Brown & Bregman (1998). Abundances were held at 0.5 solar since the reliability with which they can be determined from the ROSAT PSPC has been questioned (Bauer & Bregman 1996). A recent paper by Loewenstein & Mushotzky (1997) indicates that the metallicity for the galaxies that they observed with ASCA is about 0.5 solar, with scatter, making our choice a reasonable one. Single-temperature models were preferred over two-temperature models when acceptable $`\chi _\nu ^2`$ values could be obtained ($`\chi _\nu ^2<1.46`$ for 30 degrees of freedom). For NGC 4472, an acceptable fit could not be obtained with either a single-temperature or two-temperature model with 50% abundances, so a single-temperature model with 0.8 solar abundance was used. For NGC 1399 and NGC 1404, we also allowed the Galactic $`N_H`$ column density to be a free parameter within a limited range (20.1 ≤ log $`N_H`$ ≤ 20.5 and 20.1 ≤ log $`N_H`$ ≤ 20.3, respectively, with $`N_H`$ in $`\mathrm{cm}^{-2}`$) to obtain an acceptable fit. Temperatures cannot be accurately fitted for low-count (i.e., $`<300`$ counts) PSPC objects, or for HRI galaxies (no spectral information). However, a fit must be performed to the data to obtain a flux using PROS software. Therefore, the data for these few galaxies were rebinned into single bins and fitted for the normalization only.
We assumed a fixed $`T_X`$ = 3/2$`T_\sigma `$ (typical of the findings of Davis & White 1996) except for NGC 5102 and NGC 4494, where we used temperatures of 0.5 keV and 0.3 keV respectively, since the lower calculated temperatures led to clearly unacceptable fits. The stellar velocity dispersion temperature, $`T_\sigma `$, is calculated according to $$kT=\mu m_p\sigma ^2,$$ (2) where $`\mu `$ is the mean molecular weight, and $`\sigma `$ is the one-dimensional stellar velocity dispersion. After fitting the data, fluxes were obtained for two energy ranges (0.5–2.0 keV and 0.1–2.0 keV) using the intrinsic spectrum found for each source, and fitted temperatures when available. Luminosities were then calculated using distances derived from Faber et al. (1989) and $`H_0`$ = 50 km/s/Mpc, with the exception of NGC 5102, for which no distance was given in the Faber et al. (1989) catalog (the McMillan, Ciardullo, & Jacoby 1994 distance is used for NGC 5102). In the past, using these distances has significantly reduced the dispersion about the $`L_X`$–$`L_B`$ line by increasing the internal consistency of distances (Donnelly, Faber, & O’Connell 1990). Errors in the luminosity, due to uncertainties in the Galactic $`N_H`$ column density (typically 5–10%) and uncertainties in photon counts, were examined and calculated. In all cases, errors from the photon statistics were found to be significantly greater than errors introduced by the uncertainty in the Galactic $`N_H`$ column.

## 3 Analysis of the Observational Results

We have reported briefly our findings of $`L_X`$ and $`T_X`$ for this study in Brown & Bregman (1998). We found a slope for the $`L_X`$–$`L_B`$ relationship steeper than previous investigations with Einstein Observatory data, with broad dispersion about the fit line. The observed $`L_X`$ of the brightest galaxies was found to be comparable to either the energy released through supernovae or through gravitational infall.
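The conversion in Eq. (2) is easy to sketch numerically; the mean molecular weight $`\mu `$ = 0.62 and the representative dispersion $`\sigma `$ = 300 km/s below are assumed illustrative values, not quoted in the text:

```python
# Evaluation of Eq. (2), kT = mu * m_p * sigma^2, converting a stellar
# velocity dispersion to an equivalent temperature.
M_P = 1.6726e-27   # proton mass [kg]
KEV = 1.6022e-16   # 1 keV in joules
MU = 0.62          # assumed mean molecular weight

def kT_sigma_keV(sigma_km_s, mu=MU):
    sigma = sigma_km_s * 1.0e3       # km/s -> m/s
    return mu * M_P * sigma**2 / KEV

kT = kT_sigma_keV(300.0)
print(f"kT_sigma = {kT:.2f} keV, adopted T_X = 3/2 T_sigma = {1.5 * kT:.2f} keV")
# -> kT_sigma = 0.58 keV, adopted T_X = 0.87 keV
```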
A correlation was confirmed between $`T_X`$ and $`T_\sigma `$, using the 19 high-photon-count PSPC galaxies, with a slope steeper than that reported by Davis & White (1996). We discussed a possible connection between the X-ray-luminous galaxies and their gas temperatures with respect to $`T_\sigma `$, and suggested a possible connection between the observed X-ray luminosity and the environment in which each galaxy lies. Here, we present a more detailed analysis of the $`L_X`$–$`L_B`$ correlation (§3.1), the relationship between $`T_X`$ and $`T_\sigma `$ (§3.2), and the role environment may play in the observed luminosities (§3.3).

### 3.1 The $`L_X`$–$`L_B`$ Plane

There can be a large correction to the flux from 0.1–0.5 keV due to Galactic absorption, so we will limit our discussion to the luminosities determined for the 0.5–2.0 keV band (Table 3), which is fairly insensitive to Galactic absorption corrections. A logarithmic plot of $`L_B`$ (derived from Faber et al. 1989 magnitudes) against $`L_X`$ (Figure 2) for the sample is reproduced from Brown & Bregman (1998) with the addition of three dwarf galaxies: NGC 147, NGC 205, and NGC 221 (data processed according to the methods described in §2). The galaxy NGC 221 (M 32) is a detection of a single point source, with a calculated X-ray luminosity (log $`L_X=37.39`$ $`\mathrm{ergs}\mathrm{s}^{-1}`$) in good agreement with Burstein et al. (1997), while NGC 147 and NGC 205 are upper limits. The X-ray-fainter galaxies appear to follow a linear relation (Figure 2) that can be compared to a stellar contribution derived from hard and soft X-ray components of Centaurus A (log $`L_X/L_B=28.96`$ $`\mathrm{ergs}\mathrm{s}^{-1}/L_{\odot }`$; Brown & Bregman 1998, and references therein). A stellar component may also be scaled from the M31 bulge, since it might be expected that the bulges of spiral galaxies exhibit the same spectral signatures as early-type galaxies dominated by stellar emission.
The M31 contribution (log $`L_X/L_B=29.21`$ $`\mathrm{ergs}\mathrm{s}^{-1}/L_{\odot }`$ and log $`L_X=38.93`$ $`\mathrm{ergs}\mathrm{s}^{-1}`$; Irwin & Sarazin 1998) lies 0.3 dex higher than the Cen A stellar line, from which we infer that the stellar X-ray-to-optical luminosity ratio may not be a constant for all early-type systems. The X-ray luminosities of the brightest galaxies in the sample can be compared to the maximum amount of energy produced by stellar motions and gravitational infall (log $`L_{grav}`$ = 23.57 + (5/3) log $`L_B`$; Brown & Bregman 1998). This energy depends partly upon the shape of the potential well, which is measured by the stellar velocity dispersion. The assumption is made that the thermalization of stellar mass loss is 100% efficient. Figure 2 also indicates that if the supernova rates of van den Bergh & Tammann (1991, upper line) are correct, supernovae can also provide energy sufficient to produce the high X-ray luminosities observed. The lower (by a factor of 4.7) supernova energy line (log $`L_{SN,2}`$ = log $`L_B`$ + 30.22) was determined using a supernova rate given by Turatto, Capellaro, & Benetti (1994) and assumes a SNe energy of $`10^{51}`$ ergs. To determine the correlation between $`L_X`$ and $`L_B`$, we implemented the ordinary least-squares (OLS) linear regression bisector method of Feigelson & Babu (1992), who discuss the applicability and effectiveness of several unweighted least-squares linear regression models for astronomical problems. The usual method employed by astronomers is the least-squares Y-on-X (Y/X) fit, which minimizes residuals in Y. The OLS(Y/X) method is clearly preferred if it is known that one variable physically depends on another, or if the goal is to predict Y ($`L_X`$) given X ($`L_B`$). The goal of this study is not predicting $`L_X`$ given $`L_B`$, but understanding the fundamental relationship between $`L_X`$ and $`L_B`$.
In this case, the assignment of a dependent or independent variable is uncertain, and so a symmetrical linear regression method is most appropriate, as it is invariant to a change of variables (Isobe et al. 1990). Of the four OLS lines reviewed of this sort, the OLS bisector (the line bisecting the OLS(Y/X) and (X/Y) lines) is recommended. The OLS bisector method yields a slope of 2.72$`\pm `$0.27 for our data (Table 4), which is slightly steeper than previously reported due to the application of a resampling procedure recommended for small samples (Feigelson & Babu 1992). In Table 4, we note that OLS(Y/X) fitting yields a flatter slope consistent with $`m`$ ≈ 2.0–2.3 (Eskridge et al. 1995; White & Davis 1997). There is an increase from 2.7 to 2.9 in the $`L_X`$–$`L_B`$ slope if the background is chosen far from the galaxy center (see Table 4). The galaxy NGC 5102 is excluded from all fits, primarily because of its very low $`L_X/L_B`$ ratio, which places it among the dwarf galaxies on the $`L_X`$–$`L_B`$ plot (Figure 2). NGC 5102 has a very blue integrated color of $`(B-V)_T^0`$ = 0.58 (de Vaucouleurs et al. 1976), and a low metallicity, which suggests that this galaxy recently had a starburst episode, and its X-ray emission is consistent with that of stars (van Woerden et al. 1993). An estimate of the hot gas contribution to the X-ray emission can be derived by subtracting a linear stellar component ($`L_{X,*}`$) from the data. The removal of the Cen A estimate of $`L_{X,*}`$ yields a slope of 3.32$`\pm `$0.46 (bisector; Table 4) for the $`L_X`$–$`L_B`$ distribution. The bisector slope in this case was calculated using the methods of Isobe et al. (1990), since the distributed software of Feigelson & Babu (1992) does not accommodate upper limits.
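The OLS bisector slope of Isobe et al. (1990) can be sketched in a few lines; this minimal version omits the uncertainty estimates, the small-sample resampling correction of Feigelson & Babu (1992), and any handling of upper limits:

```python
import math

# Minimal sketch of the OLS bisector slope (Isobe et al. 1990): the line
# bisecting the OLS(Y|X) slope b1 and the inverse of the OLS(X|Y) slope b2.
def ols_bisector_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b1 = sxy / sxx   # OLS(Y|X) slope
    b2 = syy / sxy   # inverse of the OLS(X|Y) slope, as a Y-vs-X slope
    return (b1 * b2 - 1.0 + math.sqrt((1.0 + b1**2) * (1.0 + b2**2))) / (b1 + b2)

# Sanity check: on noise-free linear data the bisector recovers the slope.
x = [10.0, 10.4, 10.8, 11.2, 11.6]
y = [30.0 + 2.72 * (xi - 10.0) for xi in x]
print(ols_bisector_slope(x, y))  # ~ 2.72
```

Because the bisector treats the two variables symmetrically, it is invariant under interchange of x and y, which is the property motivating its use here.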
A gas component was not derived from M31, as we determined that the $`L_X`$ magnitude of the M31 stellar component was greater than the lowest-luminosity galaxies (NGC 1344, NGC 3115, and NGC 5102) at the 3$`\sigma `$ level. Also, CO is abundant in the bulge of M31, which distinguishes it from early-type galaxies (Loinard, Allen, & Lequeux 1995), and it has been suggested that the M31 bulge represents a post-starburst stage (Rieke, Lebofsky, & Walker 1988).

### 3.2 $`T_X`$–$`T_\sigma `$ Correlation

Brown & Bregman (1998) confirm a correlation between the fitted X-ray gas temperature ($`T_X`$) and the stellar velocity dispersion temperature ($`T_\sigma `$), in addition to the $`L_X`$–$`L_B`$ relationship. The slope of the log $`T_X`$–log $`T_\sigma `$ relationship is found to be 1.43$`\pm `$0.34 (see Table 4), which is slightly steeper than the slope of unity expected in cooling flow models. Also not theoretically expected is the large dispersion in $`T_X`$ about the best-fit line. The slope of the temperature data differs from that reported by Davis & White (1996), who published temperatures for 30 galaxies. The causes of this difference are discussed briefly in Brown & Bregman (1998). One cause lies in the methods the two groups use in fitting the data. Our temperature data are reproduced in Figure 3a with the addition of a Y-on-X fitted line, and compared to the data of Davis & White (1996, Figure 3b) in illustration of the difference in statistical methods. For the Davis & White (1996) sample, we plot $`T_\sigma `$ instead of $`\sigma `$, where $`T_\sigma `$ is derived from Faber et al. (1989) and Dressler et al. (1991) velocity dispersions. As stated in §3.1, the regression method utilized is a function of the problem being addressed. Davis & White (1996) find gas temperatures everywhere hotter than the expected velocity dispersion temperature, which is a striking difference between their data and ours.
Below $`T_\sigma \approx 0.45`$ keV we find a more or less symmetrical distribution of gas temperatures about the $`T_X`$ = $`T_\sigma `$ line. In cases where we fit $`T_X>0.5`$ keV, we find that our temperatures agree well with Davis & White (1996) for the same galaxies. Below $`T_X=0.5`$ keV, our temperatures range from ≈25–60% below the values of Davis & White (1996). This difference cannot simply be the result of the number of temperature components allowed (see Brown & Bregman 1998), as three of our “low” $`T_X`$ galaxies were fit with single temperatures. Davis & White (1996) report extremely low solar abundances (Z $`\lesssim `$ 0.06 solar) for the three galaxies by allowing Z to be a fit parameter, and at low abundances, derived temperatures become higher. A discussion of our data as compared to ASCA-derived temperatures is given in Brown & Bregman (1998). Our temperatures are presented in Table 5, along with those of Davis & White (1996) and the Buote & Fabian (1998) ASCA temperatures for comparison.

### 3.3 The galaxy environment

We suggested in our previous paper that the observed X-ray luminosity of a galaxy is strongly influenced by its environment, because the most X-ray-luminous galaxies were in clusters or in the centers of groups. Here, we examine this by quantifying the environment richness through the use of the local galaxy density, $`\rho `$. We determine which objects are X-ray luminous for their $`L_B`$ by comparing $`L_X/L_B`$ to the individual $`\rho `$ of each galaxy in our sample. In Figure 4, the ratio of the X-ray-to-optical luminosity is plotted against the Tully (1988) local density of galaxies ($`\rho `$) brighter than −16 mag in the vicinity of each sample galaxy. The galaxy density is calculated such that an isolated galaxy will have a local density of $`\rho `$ = 0.06 galaxies Mpc<sup>-3</sup> by virtue of its own presence.
The local density in a richer environment, for example the center of the Virgo cluster, would be approximately 5 galaxies Mpc<sup>-3</sup>. For our sample, no high-luminosity systems are found in the most isolated environments ($`\rho <`$ 0.2 galaxies Mpc<sup>-3</sup>), where the median log $`L_X/L_B`$ is plotted on Figure 4 at $`29.1`$ $`\mathrm{ergs}\mathrm{s}^{-1}/L_{\odot }`$. The upper and lower 25% quartile values for log $`L_X/L_B`$ in this region are 29.2 and 28.8 respectively (both in $`\mathrm{ergs}\mathrm{s}^{-1}/L_{\odot }`$). The highest-luminosity galaxies are only found in the densest environments ($`\rho >`$ 0.79 galaxies Mpc<sup>-3</sup>); however, lower-luminosity ellipticals are also found in this region, contributing to a collective median luminosity ratio of $`30.2`$ $`\mathrm{ergs}\mathrm{s}^{-1}/L_{\odot }`$, with log $`L_X/L_B=30.6`$ $`\mathrm{ergs}\mathrm{s}^{-1}/L_{\odot }`$ at the upper 25% quartile and 29.7 $`\mathrm{ergs}\mathrm{s}^{-1}/L_{\odot }`$ at the lower 25% quartile. For 0.2 $`<\rho <`$ 0.8 galaxies Mpc<sup>-3</sup>, the median luminosity ratio has a moderate value of log($`L_X/L_B`$) $`\approx 29.6`$ $`\mathrm{ergs}\mathrm{s}^{-1}/L_{\odot }`$ and upper and lower 25% quartile log $`L_X/L_B`$ values of 29.3 and 30.0 respectively (in $`\mathrm{ergs}\mathrm{s}^{-1}/L_{\odot }`$). The data appear to follow a correlation, albeit with broad dispersion, and there is a lower limit to the luminosity ratio for $`-1.2<`$ log $`\rho <-0.6`$, with no galaxies found below log $`L_X/L_B=28.8`$ $`\mathrm{ergs}\mathrm{s}^{-1}/L_{\odot }`$. We applied statistical tests to the data to quantify the apparent correlation. Three non-parametric tests for bivariate data (Kendall’s Tau, Spearman’s Rho, and the Cox Proportional Hazard model) were performed, each of which determined a correlation in the data at better than the 99.7% confidence level. We additionally examined the $`L_X`$ residuals for the OLS(Y/X) fit, excluding NGC 5102, as they might relate to the galaxy environment (see Figure 5).
The residuals are defined to be the difference between $`L_X`$ and the OLS bisector fit to $`L_X`$ and $`L_B`$. The Kendall’s Tau test indicates that a correlation exists at $`>`$ 99% confidence. The correlation implies that the galaxies brightest for their optical luminosity (i.e., NGC 1399, NGC 4636, NGC 4552) are found in the spatially densest regions (relative to the rest of the sample). Occasionally, a galaxy with a low $`L_X`$, given $`L_B`$, is found in a dense environment, indicating that complex mechanisms affect observed X-ray luminosities. The fitted X-ray temperatures, as a function of spatial density, exhibit a weak correlation at the 92–95% confidence level, depending upon whether one uses the Kendall’s Tau test, the Spearman’s Rho test, or the Cox Proportional Hazard model (Figure 6a). The hottest galaxies (k$`T_X\gtrsim 0.8`$ keV) are found in a range of environments, as are galaxies with k$`T_X\lesssim 0.45`$ keV. The ratio of $`T_X`$ to $`T_\sigma `$ as a function of density exhibits an even weaker correlation, as shown in Figure 6b.

### 3.4 Malmquist bias

The issue of bias is an important one for our sample, so we have addressed the statistical and geometrical effects that are often discussed as Malmquist bias. One effect is of concern whenever measured distances contain uncertainty. Along a line of sight, as one goes farther in distance, the volume of space associated with a given solid angle increases. This has the effect of giving greater weight to larger distances. This is the “classical” Malmquist bias, and is corrected for in the distance measurements used in our analyses (Faber et al. 1989). The other type of Malmquist bias occurs in magnitude-limited samples and reflects the fact that the intrinsic luminosity function is not being equally or completely sampled at all distances. For the most distant galaxies, only the high-luminosity part of the luminosity function is sampled, whereas for the nearest galaxies, much more of the luminosity function is sampled.
Although this is an important issue for some investigations, such as those using standard candles as distance indicators (e.g., Teerikorpi 1997), it is not important for this study, which does not require that the Schechter luminosity function be fully sampled. Our sample merely requires that the galaxies be representative of those that comprise the Schechter luminosity function, and that they were chosen in a fashion that does not bias the X-ray luminosity. These galaxies were chosen independently of their X-ray properties, each was detected in X-rays, and they lie far above the magnitude limit of the Faber et al. (1989) sample, so this should comprise an unbiased sample. ## 4 Discussion and Conclusions ### 4.1 The Role of Environment One of the results of our work is the demonstration that environment has a central influence on the X-ray luminosity, as seen in the positive correlation of the $`L_X/L_B`$ ratio with galaxy density. In the lowest density environments, only X-ray faint galaxies are found (low $`L_X/L_B`$ ratios), and the galaxies with the highest $`L_X/L_B`$ ratios are found in fairly dense environments. However, galaxies in dense environments exhibit a wide range in $`L_X/L_B`$, suggesting that environment does not have the same effect on all galaxies, and we suggest a natural explanation for this. The distribution of $`L_X/L_B`$ in Figure 4 reveals that there is a positive correlation between local environmental richness and X-ray brightness, but with significant dispersion at moderate and high galaxy density. In an effort to understand this distribution, it is important to recognize that environment can have both positive and negative effects on the X-ray brightness of a galaxy (e.g., Takeda, Nulsen, & Fabian 1984). Groups and clusters with a significant ambient medium can lead to stripping of the interstellar gas from a galaxy, leading to a substantial reduction in $`L_X/L_B`$. 
However, a galaxy moving slowly through a relatively cool cluster or group may be able to accrete this external material, increasing $`L_X/L_B`$ (Brighenti & Mathews 1998). Also, an ambient group or cluster medium could stifle galactic winds, should they exist, causing the galaxy to retain its hot gas locally, which also will increase $`L_X/L_B`$. Therefore, galaxies in clusters or groups would have a range of $`L_X/L_B`$, where the low values are determined by galaxies where stripping or winds occur, and the high values represent systems that are accreting the ambient material or having their winds stifled. In very rich clusters, like Coma, stripping is probably the dominant process, but none of the galaxies in our sample lie in such a rich system. The richest cluster in this sample is Virgo, where stripping is expected to occur in some of the galaxies, but not most of them (Gaetz, Salpeter, & Shaviv 1987). The role of environment has been previously examined by White & Sarazin (1991) using Einstein Observatory data to determine whether a correlation exists between local galaxy density and $`L_X`$. White & Sarazin (1991) find the number of bright, neighboring galaxies within various projected distances from each X-ray galaxy, and perform linear regressions of log $`L_X`$ against log $`L_B`$ and the local density of galaxies. They conclude that the large dispersion in $`L_X`$ is not significantly reduced by taking into account the individual local galaxy density, which may be due to uncertainties in the numbers of neighbors found. White & Sarazin (1991) then binned the X-ray sample in order to reduce statistical error. The X-ray sample is divided into four subsets: high-$`L_X`$ (log $`L_X/L_B\ge 30\mathrm{ergs}\mathrm{s}^{-1}/L_{\odot }`$) detections, high-$`L_X`$ upper limits, low-$`L_X`$ detections, and low-$`L_X`$ upper limits. 
The total number of bright galaxies (again, within various projected distances) around all galaxies in each subset is calculated, yielding an averaged number of bright galaxies per X-ray galaxy for each subgroup. They find that low-$`L_X`$ galaxies have $`50`$% more neighbors than high-$`L_X`$ galaxies, which the authors argue is expected if ram-pressure stripping is a major factor in the $`L_X`$ dispersion. The method of White & Sarazin (1991) is significantly different from ours. Whereas they bin an incomplete sample of Einstein Observatory data and look at bright neighbors within various projected distances, we use the tabulated local galaxy density data of Tully (1988) for each galaxy in our complete sample. The Tully local density is calculated using a three-dimensional grid spaced at 0.5 Mpc. Also, our data include X-ray fainter galaxies, which allows us to extend the range of X-ray-to-optical luminosity ratios. Our data extend down to log $`L_X/L_B=28.8\mathrm{ergs}\mathrm{s}^{-1}/L_{\odot }`$ versus the Canizares, Fabbiano, & Trinchieri (1987; used by White & Sarazin 1991) data, which extend down to log $`L_X/L_B=29.2\mathrm{ergs}\mathrm{s}^{-1}/L_{\odot }`$. Since none of the galaxies in these samples lie in rich environments (i.e., $`>`$ 4 galaxies per Mpc<sup>3</sup>), we cannot determine whether ram-pressure stripping effects become significant in the observed X-ray luminosity in very dense environments. ### 4.2 The Importance of Galactic Winds The extremely low values of $`L_X/L_B`$ seen in isolated galaxies are probably the strongest evidence in favor of galactic winds. The X-ray emission from these systems is so low that one is probably detecting the X-ray contribution from stars rather than gas, indicating that these galaxies are not gas-rich. For these galaxies, an ambient group or cluster medium is not detected and they do not lie in a high velocity dispersion group, so stripping is unlikely to occur. 
Furthermore, the rate of supernova energy input is probably adequate to drive a galactic wind, as discussed below. Therefore, the most viable mechanism for rendering the galaxies X-ray weak is through galactic winds (see Pellegrini & Ciotti 1998, and references therein). Galactic winds will occur if the supernova heating rate is sufficient to raise the temperature of the mass loss from stars above the escape temperature (from the galaxy center). A steady-state galactic wind will obey Bernoulli’s law, which for a flow that has a small flow velocity when it reaches large radius is $`\frac{5P}{2\rho }+\mathrm{\Phi }=0`$, where $`\mathrm{\Phi }`$ is the gravitational potential and $`\frac{5P}{2\rho }`$ is the enthalpy. For a steady-state wind from the center of a galaxy where $`\mathrm{\Phi }(0)=-8\sigma ^2`$, $$T_{wind}=\frac{16}{5}\sigma ^2\mu m_p/k$$ (3) The variable $`\mu `$ is the mean molecular weight, $`m_p`$ is the proton mass, and $`k`$ is the Boltzmann constant. We use the supernova rate of Cappellaro et al. (1997), an energy per supernova of $`10^{51}`$ ergs, and the stellar mass loss rate of Faber & Gallagher (1976) to determine the temperature of the gas entering the system: $$T_{gas}=(\alpha _{SN}T_{SN}+\alpha _{\ast }T_{\ast })/(\alpha _{SN}+\alpha _{\ast })$$ (4) where $`\alpha _{\ast }`$ and $`\alpha _{SN}`$ are the mass loss rates from stars and supernovae (see, e.g., Mathews & Brighenti 1997). We then find that the most optically luminous galaxy that can sustain a total wind occurs at $`L_B=5\times 10^{10}L_{\odot }`$. Less luminous galaxies can drive a wind, provided that high ambient pressure from the surrounding environment does not prevent it. For galaxies more luminous than about $`1.5\times 10^{11}L_{\odot }`$, the radius beyond which a wind can exist is far enough out that most of the stellar mass loss is retained by the galaxy (for more complex galaxy models, partial winds may be more common; see Pellegrini & Ciotti 1998). 
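Equation (3) is easy to evaluate numerically. The sketch below works in cgs units with an assumed mean molecular weight of 0.62 and an illustrative velocity dispersion of 200 km/s (both assumptions, not values taken from the paper); the resulting kT lands near the ~0.8 keV temperatures quoted for the hottest galaxies:

```python
import math

# physical constants (cgs)
K_B = 1.3807e-16      # Boltzmann constant [erg/K]
M_P = 1.6726e-24      # proton mass [g]
MU = 0.62             # assumed mean molecular weight for ionized gas

def wind_temperature(sigma_kms):
    """Eq. (3): T_wind = (16/5) sigma^2 mu m_p / k for a steady-state wind."""
    sigma = sigma_kms * 1.0e5            # km/s -> cm/s
    return (16.0 / 5.0) * sigma**2 * MU * M_P / K_B

def kT_keV(T):
    """Convert a temperature to kT in keV (1 keV = 1.602e-9 erg)."""
    return K_B * T / 1.602e-9

T = wind_temperature(200.0)              # an illustrative sigma ~ 200 km/s elliptical
```

Since $`T_{wind}\sigma ^2`$ scales quadratically with the velocity dispersion, deeper potential wells require proportionally more supernova heating per unit gas mass to sustain a wind.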
The largest uncertainty in estimating these values for $`L_B`$ is the energy released per supernova, which may be uncertain by about a factor of two. Previously, we offered the suggestion that the galaxies with high $`L_X/L_B`$ values were X-ray bright because galactic winds were stifled (contained) by a high-pressure ambient medium. This is energetically possible if the supernova rate is correctly given by van den Bergh & Tammann (1991), whose rate differs by about a factor of three from Cappellaro et al. (1997; see Figure 2). The primary discrepancy between the two rate calculations is the correction for the inability to detect supernovae against the ambient light of the galaxy. A correction by a factor of three was suggested by van den Bergh & Tammann (1991), while Cappellaro et al. (1997) argue that the correction factor is only about 40%. We calculated, for a typical elliptical galaxy in these surveys, the distance from the galaxy center within which a point source near the survey magnitude limit would fall below the $`5\sigma `$ detection threshold. We find that only 5-30% of the supernovae would have been missed due to this effect, similar to the more accurate calculation of Cappellaro et al. (1997). The supernova heating rate of Turatto, Cappellaro, & Benetti (1994; line $`L_{SN,2}`$ in Figure 2) is insufficient to account for the X-ray emission from the highest $`L_X/L_B`$ systems, which are up to a factor of three above this line (an energy of $`10^{51}`$ erg per supernova is used). Unless the energy per supernova has been underestimated by a factor of three, we consider it unlikely that stifled winds can explain the highest $`L_X/L_B`$ systems, contrary to our original suggestion. The source of energy in these high luminosity systems is most likely gravitational, supplied by accretion onto the system, as discussed by Mathews & Brighenti (1998) and Brighenti & Mathews (1999). 
They show that the temperature and density distributions of hot gas in X-ray luminous galaxies, such as NGC 4472, are consistent with their calculations. ### 4.3 An Overall Explanation of the $`L_X`$–$`L_B`$ Distribution Previously, there was hope that the cooling flow model could provide a satisfactory explanation for the observed log$`L_X`$–log$`L_B`$ distribution. An $`L_X`$–$`L_B`$ correlation with a slope of about 1.7 was predicted, which was similar to observations, and the dispersion about this correlation was treated as perturbations due (in part) to differences in galaxy evolution. The picture that we present is fundamentally different. We suggest that the steep observed correlation is due to the transition from galaxies with total winds, to those with partial winds, to those where the gas is retained and can accrete additional material. As calculated above, we expect this transition from winds to retained gas to occur in the range $`10.7<\mathrm{log}L_B(L_{\odot })<11.2`$. Below log$`L_B=10.7(L_{\odot })`$, one only finds galaxies with small values of $`L_X/L_B`$, as would be expected for systems with total winds. Also, all galaxies with log$`L_B>11.2`$ have $`L_X/L_B`$ values well above that for purely stellar emission, consistent with retaining their gas. Of equal importance is the influence of environment, which can remove galactic gas through stripping, but can also help galaxies retain their gas by stifling winds, and can add to the galactic gas content through accretion. A critical difference between our picture and most others is that heating by supernovae can be comparable to, if not greater than, the heating by the thermalization of stellar ejecta. This suggestion is not in complete agreement with other models or with some observations, the disagreement centering on the importance of winds and the metal enrichment introduced by the supernovae. 
Brighenti & Mathews (1999) study massive galaxies and use a supernova rate that is less than half the value determined by Cappellaro et al. (1997). With these supernova rates and galactic potential wells, winds are not important. They show that a range of nearly an order of magnitude in $`L_X/L_B`$ can be introduced by truncation of the galaxy as it interacts with neighbors (Brighenti & Mathews 1999). However, the observed range in $`L_X/L_B`$ is two orders of magnitude, and when the stellar contribution is removed, the range in $`L_X/L_B`$ due to the gas alone substantially exceeds two orders of magnitude. It is unlikely that truncation alone will produce the wide range in $`L_X/L_B`$. Also, in their model, one might expect that truncation would be least important in regions of low galaxy density, leading to high values of $`L_X/L_B`$ in such regions. In conflict with this expectation, we find that galaxies in the lowest density environments only have low values of $`L_X/L_B`$. The great range of $`L_X/L_B`$ and the finding that fairly isolated galaxies have low values of $`L_X/L_B`$ are consistent with expectations of a model that incorporates galactic winds. A discussion of supernova rates also introduces an apparent conflict between the expected and observed metallicity of the X-ray emitting gas (see Brown & Bregman 1998). For the standard cooling flow picture and the rates given by Cappellaro et al. (1997), the metallicity should be several times solar instead of near-solar (Loewenstein & Mushotzky 1997; Buote 1999). However, the metallicities are measured only for the most luminous galaxies, and these are precisely the ones for which accretion of circumgalactic gas dominates the gas content. The metallicity prediction must be revised downward due to the diluting effects of the accreted gas, which may reduce the metallicity into the observed range. 
### 4.4 Predictions of the Models The various suggestions made by us and others have several immediate predictions that will be tested by upcoming observations. First, the amount of the stellar contribution to the X-ray emission should be spatially resolved by Chandra, leading to a clear determination of this value, and probably a determination of the spectrum, separated from the spectrum of the hot gas. Another important test will be the measurement of the rate of cooling gas, since it should be low if supernovae cause galactic winds and approximately the stellar mass loss rate if cooling flows are uniformly present. We predict that the low $`L_X/L_B`$ galaxies will show little evidence of cooling gas, which will be measured through the O VIII and O VII X-ray lines with Chandra, and through the O VI line with FUSE. The high $`L_X/L_B`$ galaxies should have cooling rates similar to the stellar mass loss rate, if not greater, although most models agree on this prediction. Cooling flow models should show a significant metallicity gradient from the outer to the inner parts, reflecting the stellar metallicity gradient, and this effect should be even more pronounced for galaxies that are accreting material from the surrounding group or cluster. Finally, the ongoing optical supernova searches, being carried out with modern CCD techniques, should yield an improved determination of the supernova rate. We would like to thank a variety of people for valuable discussions: J. Irwin, J. Mohr, P. Hanlan, R. White, M. Loewenstein, G. Worthey, J. Parriott, M. Roberts, D. Hogg, and R. Mushotzky. Special thanks is due to the members of the ROSAT team and to the archiving efforts associated with the mission. Also, we wish to acknowledge the use of the NASA Extragalactic Database (NED), operated by IPAC under contract with NASA; SLOPES, which implements the methods presented in Isobe et al. 
(1990), Babu & Feigelson (1992), and Feigelson & Babu (1992); ASURV (Isobe & Feigelson 1990), which implements the methods presented in Isobe, Feigelson, & Nelson (1986). NASA has provided support for this work through grants NAGW-2135, NAG5-1955, and NAG5-3247; BAB would like to acknowledge support through a NASA Graduate Student Researchers Program grant NGT-51408, and the support of the National Academy of Sciences postdoctoral associate program.
# Coherent states and uncertainty relations Published in ”Quantum Field Theory, Quantum Mechanics and Quantum Optics, Pt.I: Symmetries and Algebraic Structure in Physics”, Edited by V. V. Dodonov and V. I. Man’ko, 1991, Nova Science Publishers, Inc., Singapore, pp. 247-249. ## Abstract A sharp estimation of the $`L^p`$-norms of some matrix coefficients of the square integrable representations is conjectured. The conjecture can be proved for integer values of $`p`$ using a result of J. Burbea. In an unpublished paper we have obtained the following sharp estimation of the $`L^p`$-norms of the matrix coefficients of the square integrable representations $`U_k(x,y,t)`$, with $`k\in 𝐑`$, of the Heisenberg group (which is defined as the manifold $`𝐑^3`$ with the group multiplication given by the rule: $`(x,y,t)(x^{^{}},y^{^{}},t^{^{}})=(x+x^{^{}},y+y^{^{}},t+t^{^{}}+\frac{1}{2}(x^{^{}}y-xy^{^{}}))`$): $$\left(|k|(2\pi )^{-1}\int _{𝐑^2}|(h,U_k(x,y,t)f)|^p𝑑x𝑑y\right)^{\frac{1}{p}}\le \left(\frac{2}{p}\right)^{\frac{1}{p}}\|h\|\|f\|$$ (1) for all $`h,f\in L^2(𝐑)`$ and where the equality is attained iff $`h`$ and $`f`$ are Glauber coherent states. This result was quoted in in connection with the uncertainty relations. In paper we have considered, also, the following immediate question raised by the above result: do there exist such sharp estimations for the square integrable representations of other locally compact unimodular Lie groups? Let us denote by $`G`$ such a group and let $`U`$ denote a unitary representation of $`G`$ in a Hilbert space $`H`$. We suppose that this representation is a representation with coherent states, i.e. there is a subgroup $`S`$ of $`G`$, a character $`e:S\to 𝐓`$ of $`S`$, and a vector $`f_0`$ of $`H`$ such that: $$U(s)f_0=e(s)f_0,s\in S,$$ (2) and $$\int _{G/S}|(h,U(g)f_0)|^2𝑑\dot{g}<\mathrm{\infty },h\in H.$$ (3) Here $`\dot{g}`$ is the Haar measure on $`G/S`$. 
Then, it is well known that there exists a real nonvanishing number $`dim(U)`$ so that: $$dim(U)\int _{G/S}|(h,U(g)f_0)|^2𝑑\dot{g}=\|h\|^2\|f_0\|^2,h\in H.$$ (4) For such representations a straightforward generalization of the result (1) is given by the following conjecture: CONJECTURE. For any real $`p\ge 2`$ and for any $`h\in H`$ there exists a constant $`C(p)\le 1`$ with $`C(2)=1`$ and such that: $$\left(dim(U)\int _{G/S}|(h,U(g)f_0)|^p𝑑\dot{g}\right)^{\frac{1}{p}}\le C(p)\|h\|\|f_0\|,h\in H.$$ (5) and where the equality is attained iff $`h`$ is a coherent state, i.e. iff $`h=cU(g)f_0`$, $`g\in G`$, $`c\in 𝐂`$. In the following we shall discuss the relevance of this conjecture for the uncertainty relations and we shall prove it when $`p`$ is any even natural number and $`G`$ is one of the following groups: the Heisenberg group, the group $`SU(2)`$ and the group $`SU(1,1)`$. From (4) it follows that the Hilbert space $`H`$ is embedded in the Hilbert space $`L^2(G/S,dim(U)d\dot{g})`$ as a subspace with reproducing kernel. Hence for any vector $`h\in H`$ with $`\|h\|=1`$ we can define the wave function $`(h,U(g)f_0)`$ on $`G/S`$ and a probability distribution $`P(\dot{g})=|(h,U(g)f_0)|^2`$ on $`G/S`$. Then it is evident that the left hand side of (5) can be considered as a measure of the extent to which the above defined wave function and the corresponding probability distribution are peaked on $`G/S`$. From (5) it follows that the probability distribution $`P(\dot{g})`$ on $`G/S`$ cannot be arbitrarily peaked on the phase space $`G/S`$. Hence (5) is an uncertainty relation for the wave function defined on the phase space and the most peaked wave functions are those associated with the coherent states. When $`G`$ is one of the three particular cases, enumerated above, the phase space $`G/S`$ will be denoted by $`\mathrm{\Delta }`$ and is, respectively, the complex plane $`𝐂`$, the unit disc and the Riemann sphere. 
The irreducible representations with coherent states on $`\mathrm{\Delta }`$ of the first and last group can be parametrized by a positive real number $`\beta `$ and the corresponding generalized dimension $`dim(\beta )`$ is defined by: $`dim(\beta )=\beta `$ for the Heisenberg group and $`dim(\beta )=\beta -1`$ for the group $`SU(1,1)`$. For the group $`SU(2)`$ the parameter $`\beta `$ takes only integer and half-integer values and the generalized dimension $`dim(\beta )`$ is equal to the ordinary one, which is given by the well known formula: $`dim(\beta )=2\beta +1`$. The invariant measure $`d\dot{g}`$ on $`\mathrm{\Delta }`$ is given by $`d\dot{g}=\pi ^{-1}dxdy`$ for the Heisenberg group and by $`d\dot{g}=\pi ^{-1}(1\pm |z|^2)^{-2}dxdy`$, $`(z=x+iy)`$, for the groups $`SU(2)`$ and $`SU(1,1)`$ respectively. In all three cases the Hilbert space $`H_\beta `$ is unitarily equivalent with the space of holomorphic functions on $`\mathrm{\Delta }`$ with the norm defined by: $$\|f\|_\beta ^2=dim(\beta )\int _\mathrm{\Delta }|f(z)|^2k_\beta (|z|^2)^{-1}𝑑\dot{g},$$ (6) where $`k_\beta (z)=\mathrm{exp}(\beta z)`$ for the Heisenberg group, $`k_\beta (z)=(1-z)^{-\beta }`$ for the group $`SU(1,1)`$, and $`k_\beta (z)=(1+z)^{2\beta }`$ for the group $`SU(2)`$. The reproducing kernel is given by $`k_\beta (\overline{w}z)`$. In order to prove the conjecture we shall use the following theorem proved in : THEOREM. If $`f\in H_\beta `$ and $`h\in H_{\beta ^{^{}}}`$, then $`fh\in H_{\beta +\beta ^{^{}}}`$ and $$\|fh\|_{\beta +\beta ^{^{}}}\le \|f\|_\beta \|h\|_{\beta ^{^{}}}.$$ (7) The equality is attained iff either $`fh=0`$ or $`f`$ and $`h`$ are of the form $`f=c_1k_\beta (\overline{w}z)`$, $`h=c_2k_{\beta ^{^{}}}(\overline{w}z)`$ for some $`w\in \mathrm{\Delta }`$ and some nonzero complex constants $`c_1`$ and $`c_2`$. As a corollary one obtains for any natural number $`n`$: $$\|f^n\|_{n\beta }\le \|f\|_\beta ^n$$ (8) with the equality attained either for $`f=0`$ or when $`f=ck_\beta (\overline{w}z)`$ for some nonzero complex constant $`c`$. 
From (8) it follows for the probability distribution on $`\mathrm{\Delta }`$ given by $`P_\beta (z)=|f(z)|^2k_\beta (|z|^2)^{-1}`$ that: $$\int _\mathrm{\Delta }P_\beta (z)^n𝑑\dot{g}\le \frac{dim(\beta )}{dim(n\beta )}$$ (9) with the equality for $`P_\beta (z)=|k_\beta (\overline{w}z)|^2k_\beta (|z|^2)^{-1}k_\beta (|w|^2)^{-1}`$. Hence, the most concentrated wavefunction defined on the phase space $`\mathrm{\Delta }`$ is the coherent state $`k_\beta (\overline{w}z)k_\beta (|z|^2)^{-\frac{1}{2}}k_\beta (|w|^2)^{-\frac{1}{2}}`$.
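For the Heisenberg case the extremal property can also be checked numerically. Assuming the standard normalization in which the Glauber ground-state overlap has modulus $`\mathrm{exp}(-k(x^2+y^2)/4)`$ (with this convention the $`p=2`$ case reproduces the orthogonality relation), the left-hand side of the estimation (1) evaluates exactly to $`(2/p)^{1/p}`$ for coherent states; grid size and cutoff below are numerical choices:

```python
import numpy as np

k, p = 1.0, 4.0
x = np.arange(-10.0, 10.0, 0.01)          # 1-D grid; the Gaussian is negligible beyond |x| = 10

# |(f0, U_k(x,y,t) f0)| = exp(-k (x^2 + y^2)/4) for the ground state f0,
# so the p-th power factorizes into a product of two 1-D Gaussian integrals
g = np.exp(-p * k * x**2 / 4.0)
integral_2d = (np.sum(g) * 0.01) ** 2     # rectangle rule, squared by separability

lhs = (abs(k) / (2.0 * np.pi) * integral_2d) ** (1.0 / p)
rhs = (2.0 / p) ** (1.0 / p)              # right-hand side of (1) with ||h|| = ||f|| = 1
```

The two sides agree to numerical precision, confirming that the bound is saturated on coherent states for this even value of $`p`$.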
# Scale Determination Using the Static Potential with Two Dynamical Quark Flavors ## 1 INTRODUCTION To get an idea of the systematic errors introduced in determining the physical scale in lattice QCD simulations using $`m_\rho `$, it is important to use an alternative quantity to set the physical scale. One quantity, $`r_0`$, proposed by Sommer, has recently been studied in several papers . Here, we examine the static potential for a range of couplings $`5.3\le 6/g^2\le 5.6`$ with two flavors of dynamical staggered quarks. We compute the Sommer scale for each dynamical quark mass and extrapolate to the chiral limit at each coupling. Finally, using the scale determined by $`r_0`$, we take the continuum limit and replot the Edinburgh curve, finding very little difference as compared with setting the scale from $`m_\rho `$. The configurations were generated by the MILC collaboration using the staggered quark action with two dynamical flavors and the Wilson gauge action. They were stored every 10 units of molecular dynamics time. These configurations have been used elsewhere for light hadron spectrum calculations . For each coupling there are at least four quark masses. (For details, see Ref. .) ## 2 STATIC POTENTIAL To calculate the static potential, Wilson loops $`W(R,T)`$ were measured for 5 and 10 smearing steps using the APE smearing method as explained in . The effective potentials were then calculated using $$V_T(R)=\mathrm{ln}\left[\frac{W(R,T)}{W(R,T+1)}\right]$$ (1) After blocking the results into 30 time unit blocks, errors are obtained using single elimination jackknife. $`V(R)`$ is expected to flatten out at large $`R`$ for dynamical quarks due to string breaking. At our strongest coupling, 5.3, and smallest quark masses we have searched for evidence of string breaking. For the two lightest masses, we did not find a signal at large enough values of $`R`$, but we do find a hint of string breaking for $`am_q`$ = 0.075. In Fig. 
1, the potential seems to flatten for $`R`$ greater than five to seven lattice spacings. This corresponds to a distance of 1.3–1.8 fm. Ref. reports finding signs of string breaking at 0.8–1.1 fm; however, their $`m_\pi /m_\rho `$ is lower. This is consistent: the string is expected to break at smaller distances for lighter quarks. ## 3 FIT PARAMETERS We used the standard ansatz for the potential, $$V(R)=V_0+\sigma R-e/R-f(G_L-1/R).$$ (2) This is a simple combination of a Coulomb term at short distances and a linear increase at large distances. The last term accounts for lattice artifacts. For more details on this ansatz and its limitations see Refs. and . Using the fit parameters $`\sigma `$ and $`e`$, the Sommer scale $`r_0`$ is calculated in the usual way, $$r_0=\sqrt{\frac{c-e}{\sigma }}$$ (3) with c = 1.65. This corresponds to a physical length of $`r_0\approx 0.5`$ fm. We fit the potential to the ansatz (2) for different ranges of $`R`$ and different values of $`T`$. The ”good” fits yielded consistent values, except for $`\beta `$ = 5.35, $`am_q`$=0.05. We have excluded this point from our fits for the chiral extrapolation. The criteria for best fits were the confidence of the fit and the degrees of freedom. Additionally, the range of $`R`$ had to include the value of $`r_0`$, which was an important constraint for the coarse lattices. In Fig. 2, we show the best estimates of $`r_0`$ along with the exponential fit explained below. ## 4 SCALE DETERMINATION For dynamical quarks, a different value of $`r_0`$ and $`\sigma `$ is obtained for each quark mass. The consideration that the physical scale should approach the quenched result as the quark mass tends to infinity and the graphs of $`r_0`$ vs $`am_q`$ suggest an exponential fit $$r_0=A+B\left(e^{-Cam_q}-1\right)$$ (4) with three parameters $`A`$, $`B`$ and $`C`$. 
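The scale extraction of (3) and the chiral-extrapolation form (4) amount to a few lines of arithmetic. The sketch below uses illustrative, not fitted, parameter values, and writes the exponent of (4) with the sign chosen so that the heavy-quark limit $`AB`$ stays finite:

```python
import math

C_SOMMER = 1.65                       # Sommer condition r0^2 F(r0) = c

def sommer_r0(sigma, e):
    """r0/a from the fitted string tension sigma and Coulomb coefficient e."""
    return math.sqrt((C_SOMMER - e) / sigma)

def lattice_spacing_fm(sigma, e, r0_phys=0.5):
    """Lattice spacing in fm, assuming r0 corresponds to 0.5 fm physically."""
    return r0_phys / sommer_r0(sigma, e)

def r0_chiral_form(am_q, A, B, C):
    """Exponential chiral ansatz: r0 -> A as am_q -> 0, r0 -> A - B as am_q -> inf."""
    return A + B * (math.exp(-C * am_q) - 1.0)

# illustrative (hypothetical) fit parameters in lattice units
r0 = sommer_r0(sigma=0.06, e=0.30)
a_fm = lattice_spacing_fm(sigma=0.06, e=0.30)
```

With these example numbers one obtains $`r_0/a4.7`$ and a lattice spacing of roughly 0.1 fm, in the general range of the couplings studied here.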
We obtained good fits to this form for all $`\beta `$ values except 5.415, where we excluded the heaviest two points from the fit to get an acceptable confidence level. These fits were used for the chiral extrapolation to determine $`r_0`$ for each $`m_q`$. ## 5 EDINBURGH PLOT A most surprising result of Lattice ’98 was that the Edinburgh plot for dynamical quarks was shifted up compared to the quenched simulations, leading to a larger deviation from the real world value in the continuum limit . We were curious to know if the Edinburgh curve changed when setting the scale from $`r_0`$. Fig. 3 shows $`m_N/m_\rho `$ as a function of $`a/r_0`$ for the real world $`m_\pi /m_\rho `$ value.<sup>1</sup><sup>1</sup>1These fits were good up to $`m_\pi /m_\rho `$ = 0.45 but for higher ratios the $`\beta `$ = 5.415 point departed from the curve. There seems to be a problem with that coupling for higher masses that we do not completely understand yet. Hence we have excluded that point from the fits for continuum extrapolation for $`m_\pi /m_\rho >`$ 0.45. This exclusion yielded better fits without much shift in the continuum value. The Edinburgh curve is replotted using the scale determined here in Fig. 4. ## 6 CONCLUSIONS We have calculated the heavy quark potential for a range of couplings for dynamical staggered quarks. Some flattening of the static potential for large $`R`$ at strong coupling is seen which hints at string breaking, but does not suffice to make a conclusive case. We determine $`r_0`$, extrapolate it to the chiral limit using an exponential form and examine the hadron mass data using the new scale. We do not find significant deviations from the previous continuum extrapolations using the $`\rho `$ mass to set the scale. We would like to thank the rest of the MILC collaboration for the use of the lattices and U.M. Heller for some fitting routines. We are grateful to the US Department of Energy which supported this work under grant FG 02-91ER-40661. 
The computations were performed on the T3E at PSC, Origin-2000s at NCSA and an Origin-2000, a Paragon and the CANDYCANE Linux cluster at Indiana University.
# 1. Introduction The hermitean Wilson-Dirac operator $`H`$ is a fundamental quantity in the overlap formalism . It has been shown there that the difference of the numbers of its positive and negative eigenvalues is related to the index of the massless Dirac operator. This becomes particularly explicit with the Neuberger operator . The consideration of eigenvalue flows of $`H`$ introduced in has initiated a number of numerical works on such flows, including studies of the index theorem , of topological susceptibility , of instanton effects , and of the spectrum gap as a function of the mass parameter . In these works the derivative of the flows has been used , considering the respective relation as a result of first-order perturbation theory . In theoretical considerations of these flows, as in Section 8 of Ref. , details of the behavior at crossing have been of interest. Further, questions concerning the eigenvalue flows have also been raised in investigations of the vicinities of the parameter values $`m/r=0,2,4,6,8`$. Generally one should be aware of the fact that considerations of eigenvalue flows of $`H`$ actually require certain smoothness conditions to hold, which goes beyond solving the eigenvalue equation at individual points. Locally this requires adequate properties of derivatives, and globally that integration provides appropriate solutions. In the case of the specific hermitean operator considered here, there are fortunately theorems in unitary space which we can use to settle the first point. To clarify the second point we have to develop an appropriate procedure of integration. In the present paper we derive a differential equation for the eigenvalues of the hermitean Wilson-Dirac operator $`H`$ and give a complete specification of its admissible solutions. We are able to do this in a mathematically well defined way. Our developments appear important for a number of problems. 
In particular, there is the ambiguity in the choice of the mass parameter $`m`$ which affects the counting of crossings of flows on a finite lattice. So far, from an upper bound on the gauge field, a bounding function has been derived which, e.g., allows one to disentangle physical and doubler regions. One can hope that the differential-equation properties found here allow one to extract more detailed information in this respect. In a study of the locality of the Neuberger operator a lower bound for $`H^2`$, also relying on the above gauge-field bound, has been important. In such investigations the differential equation may again help to obtain sharper results. In present Monte-Carlo simulations with massless quarks (with overlap as well as with domain wall fermions) there are severe problems due to the occurrence of very small values of $`H^2`$ . The proposals to deal with this include projecting out some subspace of small eigenvalues of $`H`$ , constructing forms of $`H`$ which are better behaved around zero , and looking for more suitable gauge-field actions . There are now hopes that the differential equation, allowing a more detailed insight, may point to a way out of these difficulties. Studying flows numerically one generally has to interpolate between a finite number of points. It appears possible to develop more efficient methods for this which use the general properties obtained for the differential-equation solutions. In the following we first derive the differential equation for the eigenvalue flows of $`H`$ (Section 2). We then discuss the mathematical properties involved and restrictions due to the eigenequation (Section 3). Next we integrate the differential equation and give a complete overview of its admissible solutions (Section 4). Finally we collect some conclusions (Section 5). ## 2. 
Derivation of differential equation The Wilson-Dirac operator $`X/a`$ is given by $$X=\frac{r}{2}\sum _\mu \nabla _\mu ^{\dagger }\nabla _\mu +m+\frac{1}{2}\sum _\mu \gamma _\mu (\nabla _\mu -\nabla _\mu ^{\dagger })$$ (2.1) where $`(\nabla _\mu )_{n^{\prime }n}=\delta _{n^{\prime }n}-U_{\mu n}\delta _{n^{\prime },n+\widehat{\mu }}`$ and $`0<r\le 1`$. Its property $`X^{\dagger }=\gamma _5X\gamma _5`$ implies that $$H=\gamma _5X$$ (2.2) is hermitean. The operator $`H`$ has the eigenequation $$H\varphi _l=\alpha _l\varphi _l$$ (2.3) where $`\alpha _l`$ is real and the $`\varphi _l`$ form a complete orthonormal set in unitary space, as one has on a finite lattice. Multiplying (2.3) by $`\varphi _l^{\dagger }\gamma _5`$ one gets $`\varphi _l^{\dagger }\gamma _5H\varphi _l=\alpha _l\varphi _l^{\dagger }\gamma _5\varphi _l`$ and summing this and its hermitian conjugate one has $`\varphi _l^{\dagger }\{\gamma _5,H\}\varphi _l=2\alpha _l\varphi _l^{\dagger }\gamma _5\varphi _l`$. From this by inserting (2.2) with (2.1) one obtains $$\alpha _l\varphi _l^{\dagger }\gamma _5\varphi _l=m+g_l(m)$$ (2.4) where $$g_l(m)=\frac{r}{2}\sum _\mu \|\nabla _\mu \varphi _l\|^2.$$ (2.5) For $`g_l(m)`$, using $`\|\nabla _\mu \varphi _l\|\le \|(\nabla _\mu -\text{1l})\varphi _l\|+\|\varphi _l\|=2`$, one gets $$0\le g_l(m)\le 8r.$$ (2.6) Further, abbreviating $`(\text{d}\alpha _l)/(\text{d}m)`$ by $`\dot{\alpha }_l`$, we obtain $$\frac{\text{d}(\varphi _l^{\dagger }H\varphi _l)}{\text{d}m}=\varphi _l^{\dagger }\dot{H}\varphi _l+\dot{\varphi }_l^{\dagger }H\varphi _l+\varphi _l^{\dagger }H\dot{\varphi }_l=\varphi _l^{\dagger }\gamma _5\varphi _l+\alpha _l\frac{\text{d}(\varphi _l^{\dagger }\varphi _l)}{\text{d}m}$$ (2.7) which means that we have $$\dot{\alpha }_l=\varphi _l^{\dagger }\gamma _5\varphi _l.$$ (2.8) Combining (2.4) and (2.8) we get the differential equation $$\dot{\alpha }_l(m)\alpha _l(m)=m+g_l(m)$$ (2.9) for the eigenvalue flows of the hermitean Wilson-Dirac operator $`H`$. ## 3. Requirements for solutions In Section 2 actually only continuity of $`\varphi _l(m)`$ would have been needed, which can be seen by repeating the calculations of (2.7) in terms of finite differences. 
In Section 4, analyzing properties of solutions, we shall need $`\dot{g}_l(m)`$ (at least at certain points), which by (2.5) implies also the existence of $`\dot{\varphi }_l(m)`$. All of this is, however, no problem because in the case considered we have derivatives of $`\varphi _l(m)`$ up to any order. This follows because for our hermitean operator of the form $`H(m)=H(0)+m\gamma _5`$ in unitary space theorems apply by which $`\varphi _l(m)`$ is holomorphic on the real axis. Of the (continuously) infinite number of solutions of (2.9), specified by integration constants, only a discrete finite subset occurs for a given $`H`$, the selection depending on $`H`$. The number of solutions in this subset, being the number of eigenvectors of $`H`$, is simply the dimension of the unitary space. Because of (2.8), the continuity of $`\varphi _l(m)`$ requires that also $`\dot{\alpha }_l(m)`$ be continuous for all $`m`$ in order that the respective solution $`\alpha _l(m)`$ of (2.9) belong to the admissible subset. Since the eigenvectors $`\varphi _l(m)`$ have derivatives up to any order, this must also hold for the admissible $`\alpha _l(m)`$. Because of the continuity required for admissible $`\alpha _l(m)`$, only those solutions of the differential equation (2.9) are admitted which exist for all $`m`$. ## 4. Solutions of differential equation Instead of (2.9) we first consider the differential equation $$\dot{\beta }_l(m)=2(m+g_l(m))$$ (4.1) which by inserting $`\beta _l(m)=\alpha _l^2(m)`$ becomes (2.9). Integration of (4.1) readily gives $$\beta _l(m)=\beta _l(m_b)+2\int _{m_b}^m\text{d}m^{\prime }(m^{\prime }+g_l(m^{\prime }))$$ (4.2) in which particular solutions are determined by the choices of $`m_b`$ and $`\beta _l(m_b)`$. These choices are restricted here by the fact that one actually wants real solutions of (2.9) which meet the requirements discussed in Section 3.
To get an overview of the properties of (4.2) we note that $$\int _{\widehat{m}}^m\text{d}m^{\prime }(m^{\prime }+g_l(m^{\prime })),$$ (4.3) where $`\widehat{m}`$ is an arbitrarily fixed value, has a minimum at $`m=m_y`$ if $$m_y+g_l(m_y)=0\text{ and }\dot{g}_l(m_y)>-1.$$ (4.4) Because of $$H\to m\gamma _5\text{ for }|m|\to \infty $$ (4.5) one has $`\varphi _l(m)\to \chi _\pm `$ with $`\gamma _5\chi _\pm =\pm \chi _\pm `$ in this limit. Then in (2.4) one gets $`\alpha _l\varphi _l^{\dagger }\gamma _5\varphi _l\to (\pm m)(\pm 1)=m`$ so that one obtains $`g_l(m)\to 0`$ for $`|m|\to \infty `$. From this and $`g_l(m)\ge 0`$ it follows that there is at least one solution $`m_y\le 0`$ of (4.4). In general several ones with $`m_{y_s}<\mathrm{\cdots }<m_{y_1}<m_{y_0}\le 0`$ may occur. If there is only one solution of (4.4) we choose $`m_b=m_y`$. If there are several ones we put $`m_b`$ equal to the $`m_{y_\nu }`$ related to the lowest minimum of (4.3) (in case of several degenerate ones picking arbitrarily one of them). In this way we achieve that $$\int _{m_b}^m\text{d}m^{\prime }(m^{\prime }+g_l(m^{\prime }))\ge 0\text{ for all }m.$$ (4.6) In order to get a real solution $`\alpha _l(m)`$ of (2.9) which exists for all $`m`$, according to (4.6) (which takes the value 0 for $`m=m_b`$) one has to choose $`\beta _l(m_b)\ge 0`$ in (4.2). At the points $`m_{y_\nu }`$ with minima of (4.2) where $`\beta _l(m_{y_\nu })>0`$ one then immediately sees that for the solutions of (2.9) one has a minimum of $`\alpha _l(m)=+\sqrt{\beta _l(m)}`$ and a maximum of $`\alpha _l(m)=-\sqrt{\beta _l(m)}`$. The points where $`\beta _l(m_{y_\nu })=0`$, i.e. the ones with the lowest minima of (4.2), however, need special consideration. There, to clarify the details at crossing, one has (1) to check whether the derivative $`\dot{\alpha }_l(m)`$ is finite, as is necessary in view of (2.8), and (2) to disentangle the solutions related to the different signs of the square root properly.
To check under which conditions the derivative $`\dot{\alpha }_l(m)`$ remains finite we note that from (2.9) one gets $$\dot{\alpha }_l^2=\frac{(m+g_l(m))^2}{\beta _l(m)}$$ (4.7) showing that in case of $`\beta _l(\stackrel{~}{m})=0`$ for some $`\stackrel{~}{m}`$ one must also have $`\stackrel{~}{m}+g_l(\stackrel{~}{m})=0`$ in order that the derivative remains finite. With these relations holding at $`\stackrel{~}{m}`$ one obtains $$\dot{\alpha }_l^2(m)\to 1+\dot{g}_l(\stackrel{~}{m})\text{ for }m\to \stackrel{~}{m}.$$ (4.8) Because (4.4) is satisfied at the points with $`\beta _l(m_{y_\nu })=0`$ envisaged above, by (4.8) the existence of the derivative $`\dot{\alpha }_l^2(m_{y_\nu })`$ at these points with $`\alpha _l(m_{y_\nu })=0`$ is guaranteed. Further, since in (4.4) one has $`\dot{g}_l(m_{y_\nu })>-1`$, it follows that $`\dot{\alpha }_l^2(m_{y_\nu })>0`$ there. Thus, because $`\dot{\alpha }_l(m)`$ should be continuous (as pointed out in Section 3), there must be a crossing point of two solutions $`\alpha _l(m)`$, i.e. the solutions $`\pm \sqrt{\beta _l(m)}`$ from below must continue as $`\mp \sqrt{\beta _l(m)}`$ above the zero and have the derivatives $$\dot{\alpha }_l(m_{y_\nu })=\pm \sqrt{1+\dot{g}_l(m_{y_\nu })}\ne 0$$ (4.9) at the crossing point. The asymptotic behavior $`\beta _l(m)\to m^2`$ for $`|m|\to \infty `$, which implies $`\alpha _l(m)\to \pm m`$, should be obvious from (4.5). If desired it can readily be worked out in more detail by estimating the integral in the form $$\beta _l(m)=\beta _l(m_b)+m^2-m_b^2+2\int _{m_b}^m\text{d}m^{\prime }g_l(m^{\prime })$$ (4.10) of (4.2), which already works using the bound (2.6). ## 5. Conclusions Establishing an exact relation for the derivatives of the eigenvalues of the hermitean Wilson-Dirac operator $`H`$, we have derived a differential equation for the eigenvalue flows of $`H`$. By referring to appropriate theorems, the mathematical aspects have also been fully clarified. Unambiguous prescriptions for the selection of admissible solutions have been given.
By integrating the differential equation and analyzing the features of its solutions, we have obtained a complete overview. Our results appear advantageous for future theoretical developments as well as for applications in numerical work. ## Acknowledgement I wish to thank Michael Müller-Preussker and his group for their warm hospitality.
# Chirality tubes along monopole trajectories Supported in part by FWF under Contract No. P11456 ## Abstract We classify the lattice by elementary 3-cubes which are associated to dual links occupied by, or free of, monopoles. We then compute the quark condensate, the quark charge and the chiral density on those cubes. By looking at distributions we demonstrate that monopole trajectories carry considerably more chirality than the free vacuum. During the last years one has gained some insight into the mutual interrelations of two distinct excitations of the QCD vacuum: monopoles and instantons. Both of these objects have been used to explain a wide variety of basic QCD properties, such as quark confinement, chiral symmetry breaking and the $`U_A(1)`$ problem. The first property is usually associated with monopoles, the latter ones with instantons. Instantons have integer topological charge $`Q`$ which is related to the chiral zero eigenvalues of the fermionic matrix with a gauge field configuration via the Atiyah-Singer index theorem. Since instantons carry chirality, and on the other hand it has been demonstrated that instantons are predominantly localized at regions where monopoles exist, the question arises whether monopoles carry chirality themselves. For calorons it has been proven that they consist of monopoles, which might be a sign that monopoles are indeed carriers of chirality. In this contribution we discuss this issue by directly looking at the chirality located on monopole loops, and comparing it to the background. We do this by measuring conditional probability distributions of fermionic observables of the form $`\overline{\psi }\mathrm{\Gamma }\psi `$ with $`\mathrm{\Gamma }=1,\gamma _4,\gamma _5`$ in a standard staggered fermion setting. These quantities are usually referred to as the quark condensate, the quark charge density, and the chiral density.
Mathematically and numerically, the local quark condensate $`\overline{\psi }\psi (x)`$ is a diagonal element of the inverse of the fermionic matrix of the QCD action. The other fermionic operators are obtained by inserting the Euclidean $`\gamma _4`$ and $`\gamma _5`$ matrices. By a conditional probability distribution we understand the probability of encountering a certain value of a fermionic observable $`\overline{\psi }\mathrm{\Gamma }\psi `$, under the condition that the local position is close to (or away from) a monopole trajectory, $$P_{s/t}^{(\mathrm{no})\mathrm{mon}}(\overline{\psi }\mathrm{\Gamma }\psi ,x)|_{x\in (\notin )\text{ monopole tube}},$$ (1) where $`x`$ indicates the local position, and $`s/t`$ space- or time-like monopole trajectories. The core of the monopole tube is the singular monopole trajectory, living on dual links, as obtained by the standard definition of monopoles in SU(3). We did not distinguish between the two independent colors of monopoles. For each dual link occupied by a monopole trajectory there exists an elementary 3-cube. The 8 sites of such a cube constitute the section of the monopole tube corresponding to that dual link. Our simulations were performed for full SU(3) QCD on an $`8^3\times 4`$ lattice with periodic boundary conditions. Dynamical quarks in Kogut-Susskind discretization with $`n_f=3`$ flavors of degenerate mass $`m=0.1`$ were taken into account using the pseudofermionic method. We performed runs in the confinement phase at $`\beta =5.2`$. Measurements were taken on 2000 configurations separated by 50 sweeps. We computed correlation functions between two observables $`𝒪_1(x)`$ and $`𝒪_2(y)`$, $$g(y-x)=\langle 𝒪_1(x)𝒪_2(y)\rangle -\langle 𝒪_1\rangle \langle 𝒪_2\rangle .$$ (2) In Fig. 1 we display results for $`𝒪_1`$ a local fermionic observable (except in (d)) and $`𝒪_2`$ the monopole charge density $`\rho `$. All correlations exhibit an extension of several lattice spacings and show an exponential falloff over the whole range.
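A screening mass is extracted from such an exponential falloff as the slope of the correlator in log space. A minimal sketch with synthetic data in place of the measured correlations (the amplitude and the input mass `M_true` are invented illustration values in lattice units):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic correlator g(r) = A * exp(-M r) with 5% multiplicative noise,
# standing in for a measured monopole-fermion correlation function
A_true, M_true = 0.8, 1.1
r = np.arange(1, 9)                      # separations in lattice units
g = A_true * np.exp(-M_true * r) * (1 + 0.05 * rng.standard_normal(r.size))

# exponential falloff => straight line in log space; slope = -M
slope, intercept = np.polyfit(r, np.log(g), 1)
M_fit = -slope
print(f"fitted screening mass: {M_fit:.2f} lattice units (input {M_true})")
assert abs(M_fit - M_true) < 0.1
```

Converting such a fitted mass to GeV requires the lattice spacing at the given coupling; the sketch stops at lattice units.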
The corresponding screening masses are given in Table 1 in GeV for 3 levels of cooling. They are a coarse measure of the chirality profile of the monopole tube. It is apparent that cooling does not change the screening masses drastically. For reasons of comparison we included in the table the screening masses for the correlation with the topological charge density (squared). We find that the correlations of the color charge density $`\overline{\psi }\psi (x)`$ and $`|\psi ^{\dagger }\psi (x)|`$ with the topological charge density are very similar, both in the slopes and in the absolute values. This becomes clear because the quark condensate can be interpreted as the absolute value of the quark density. However, cooling (or some other kind of smoothing) is inevitable to obtain nontrivial correlations between the chiral density, $`𝒪_1=\overline{\psi }\gamma _5\psi (x)`$, and the topological charge density. This can be expected since both quantities are connected via the anomaly. The topological charge of a gauge field is related to the chiral density of the associated fermion field by $`Q=\int q(x)\text{d}^4x=m\int \overline{\psi }\gamma _5\psi (x)\text{d}^4x`$. We have checked that this relation also holds approximately for the corresponding lattice observables on individual configurations. The autocorrelation function of the density of the topological charge $`<q(0)q(r)>`$ should be compared to $`<\overline{\psi }\gamma _5\psi (0)q(r)>`$. If the classical ’t Hooft instanton with size $`\rho _I`$ is considered, the topological charge density is $$q(x)\propto \frac{\rho _I^4}{(x^2+\rho _I^2)^4}.$$ (3) On the other hand the corresponding density of the fermionic quantities $$\overline{\psi }\psi (x)\sim \overline{\psi }\gamma _5\psi (x)\propto \frac{\rho _I^2}{(x^2+\rho _I^2)^3}$$ (4) is broader. This behavior is reflected in Table 1 and means that the local relation $`q(x)=m\overline{\psi }\gamma _5\psi (x)`$ does not hold.
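The two classical profiles can be compared numerically. The sketch below assumes the standard normalization $`q(x)=(6/\pi ^2)\rho _I^4/(x^2+\rho _I^2)^4`$ of the ’t Hooft instanton, for which the total charge is $`Q=1`$ and the 4-ball enclosing half the charge has radius exactly $`\rho _I`$, while the fermionic profile of Eq. (4) encloses half its weight only at a larger radius:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

rho = 0.3  # instanton size; the radii below scale linearly with it

# 't Hooft profile q(r) = (6/pi^2) rho^4/(r^2+rho^2)^4; with the 4D radial
# measure d^4x = 2 pi^2 r^3 dr it integrates to Q = 1
q = lambda r: (6 / np.pi ** 2) * rho ** 4 / (r ** 2 + rho ** 2) ** 4
Q, _ = quad(lambda r: 2 * np.pi ** 2 * r ** 3 * q(r), 0, np.inf)
assert abs(Q - 1) < 1e-8

# fermionic profile ~ rho^2/(r^2+rho^2)^3, Eq. (4), normalized the same way
f = lambda r: rho ** 2 / (r ** 2 + rho ** 2) ** 3
norm_f, _ = quad(lambda r: 2 * np.pi ** 2 * r ** 3 * f(r), 0, np.inf)

def half_radius(profile, norm):
    """radius of the 4-ball containing half of the integrated density"""
    frac = lambda R: quad(lambda r: 2 * np.pi ** 2 * r ** 3 * profile(r),
                          0, R)[0] / norm - 0.5
    return brentq(frac, 1e-6 * rho, 100 * rho)

r_q, r_f = half_radius(q, 1.0), half_radius(f, norm_f)
print(f"half-charge radius: {r_q:.3f} (topological), {r_f:.3f} (fermionic)")
assert abs(r_q - rho) < 1e-4      # exactly rho for the q profile
assert r_f > r_q                  # the fermionic density is indeed broader
```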
Figure 2 shows results for the conditional probability distributions of $`\overline{\psi }\psi `$ (in multiples of the quark mass) in the case of monopole presence (m=1) or absence (m=0). The m=0 case exhibits a relatively narrow distribution of the fermionic quantity around $`0.41`$, with a variance of 0.179 or 0.171 for space- and time-like monopole trajectories, respectively. The m=1 case is clearly different and shows a much broader distribution, with both the mean and the variance being about a factor of two larger. The time-like trajectories yield distributions which are still peaked on the left, as in the m=0 case, whereas this is not observed for space-like trajectories. The plot suggests that in the close neighborhood of a monopole it becomes more likely to encounter large values of $`\overline{\psi }\psi (x)`$. The form of the distributions points towards a picture in which monopole tubes carry a space-time dependent density of the fermionic observables. In the confinement phase, space-like tubes bear more chirality. The same situation is found for the other observables $`\mathrm{Re}(\psi ^{\dagger }\psi )`$ and $`|\overline{\psi }\gamma _5\psi |`$ (see Fig. 3). The figures depict the situation after 15 cooling steps. We checked that these observations can also be made after 5 cooling steps. In summary, the computation of correlation functions between the monopole charge density and the fermionic observables yields an exponential decrease. The screening masses correspond to those of correlators between the topological charge density and the same fermionic observables. Our calculations of conditional distribution functions of fermionic observables point to a significantly enhanced probability of finding large chirality in the neighborhood of monopole trajectories. The same distributions also indicate that the monopoles are not covered by a uniform chirality tube.
# Operation of Quantum Cellular Automaton cells with more than two electrons ## I Introduction The concept of logic circuits based on Quantum Cellular Automata (QCA), first proposed by Lent et al., has received much attention in the last few years, due to the prospects of extremely low power operation and to the drastic reduction of interconnections it would allow. The basic QCA building block is a bistable cell made up of four quantum dots or metallic islands at the vertices of a square, containing two electrons that can align along the two different diagonals, thus encoding the two logical states. For an isolated cell, alignment along either diagonal is equally likely, but, in the presence of an external electric field such as that due to a nearby cell (driver cell in the following), in which polarization along one of the diagonals is externally enforced, the electrons in the driven cell will also align along the same diagonal, thereby minimizing the total electrostatic energy. It is therefore possible to propagate the polarization state along a chain of cells, and it has been shown that all combinatorial logic functions can be performed by properly designed two-dimensional arrays of such cells. Various implementations of QCA cells have been proposed so far, based on metal islands, on quantum dots obtained in semiconductor heterostructures or on nanostructured silicon islands. All of these implementations share the same problem: an extreme sensitivity to fabrication tolerances and the associated need for careful adjustment of each single cell. Such sensitivity is the direct consequence of the smallness of the electrostatic interaction between nearby cells and therefore of the energy splitting between the configurations corresponding to the two logic states.
While for the purpose of large-scale integration new approaches are needed, such as, possibly, resort to implementations on the molecular scale, experiments for the assessment of the basic principle of operation are being performed by carefully tuning the voltages applied to adjustment electrodes built into each cell. The understanding that could so far be gathered from the existing literature was that strongly bistable and effective QCA operation is possible only in two regimes: either for cells containing just two electrons (and this would be the case for semiconductor quantum dots) or for cells containing two excess electrons on top of a very large total number of electrons, and operating in the classical Coulomb blockade limit. On the basis of the generally used expression for cell polarization $`P`$ given in Ref. $$P=\frac{\rho _1+\rho _3-\rho _2-\rho _4}{\rho _1+\rho _2+\rho _3+\rho _4},$$ (1) operation was disrupted as soon as the number of electrons $`n`$ per cell differed from two ($`\rho _i`$ is the charge in dot $`i`$, and dots are numbered clockwise). For $`n>2`$ the maximum polarization reached by the driven cell decreases, due to the fact that, while the denominator of Eq.(1) is $`nq`$, where $`q`$ is the electron charge, the numerator at most reaches a value of 2$`q`$. Indeed, a configuration with an excess of more than two electrons along one of the diagonals is not energetically favored for any reasonable arrangement of neighboring cells. We observe that each cell is globally neutral, because the electron charges are compensated for by ionized donors and by the positive charge induced on the electrodes defining the quantum dots. Such neutralization takes place over a certain region of space, with a finite extension.
Therefore, even though the global monopole component of the electric field is zero, some effects proportional to the total number of electrons contained in the cell exist, but they are much weaker than those of the uncompensated “dipole” component associated with the asymmetry between the two diagonals, at least for most configurations of practical interest. This has led us to propose a somewhat different expression for cell polarization, in which the denominator is always 2$`q`$, independent of the total cell occupancy: $$P=\frac{\rho _1+\rho _3-\rho _2-\rho _4}{2q}.$$ (2) We argue that Eq.(2) provides a more realistic representation of the action of a cell on its neighbors than Eq.(1). If the positive neutralizing charges were in the very same plane as that of the cell, and localized in each dot in an amount corresponding to $`nq/4`$, as in Ref. , our statement that only the difference between the numbers of electrons along the two diagonals matters would be rigorous, because the net charge in each dot would be the same as in the case of a 2-electron cell. The situation changes somewhat if the neutralizing charge is not located in the same plane as that of the cell electrons and/or is not equally distributed among the dots. In order to show that, in practical operating conditions, Eq.(2) still provides the best description of the polarizing action of a cell, we have studied two specific limiting cases: (a) neutralization by means of four $`nq/4`$ charges located in correspondence with the dots, but on a plane placed at an arbitrary distance $`d`$ from the cell; (b) neutralization by means of image charges located on a plane at an arbitrary distance $`h`$ from that of the cell (such as in the case of Dirichlet surface boundary conditions at a distance $`h/2`$).
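The difference between the two normalizations is easy to make concrete. In the sketch below, the occupations (2,1,2,1) and (3,2,3,2) are the maximally polarized 6- and 10-electron configurations (two excess electrons along the 1-3 diagonal); electrons are simply counted positive:

```python
# Polarizations of a 4-dot cell from its dot occupations; dots numbered
# clockwise, diagonal 1-3 versus diagonal 2-4.
def P_eq1(n):        # Eq. (1): normalized by the total charge n*q
    return (n[0] + n[2] - n[1] - n[3]) / sum(n)

def P_eq2(n):        # Eq. (2): normalized by 2q, independent of occupancy
    return (n[0] + n[2] - n[1] - n[3]) / 2

for occ in [(1, 0, 1, 0), (2, 1, 2, 1), (3, 2, 3, 2)]:   # 2, 6, 10 electrons
    print(occ, P_eq1(occ), P_eq2(occ))

# Eq. (1) saturates below unity as soon as n > 2 ...
assert P_eq1((2, 1, 2, 1)) == 1 / 3
# ... while Eq. (2) assigns full polarization to each of these configurations
assert P_eq2((1, 0, 1, 0)) == P_eq2((2, 1, 2, 1)) == P_eq2((3, 2, 3, 2)) == 1
```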
We have first considered a driver cell with a variable number of electrons, coupled to a driven cell with just two electrons, and investigated the polarization of the latter cell as a function of the polarization of the former (defined according to Eq.(2)). For simplicity, we have assumed classical point-like charges in the driver cell, while a full quantum mechanical solution has been performed for the driven cell, by means of the Configuration-Interaction (CI) method. The CI technique is based on expanding the many-electron wave function into a linear combination of Slater determinants built from a single-electron basis. The coefficients of this linear combination are the unknowns of the problem and can be determined by solving an algebraic eigenvalue problem, whose dimension corresponds to the number of Slater determinants taken into consideration. We assumed, for the driven cell, a confinement potential generated in a GaAs/AlGaAs heterostructure at a depth of 70 nm by a metal gate with four 90 nm holes whose centers are located at the vertices of a 110 nm square, considering an applied voltage of $`-0.5`$ V. The distance $`D`$ between cell centers is 300 nm. Let us first examine case (a): in Fig. 1 we report the polarization of the driven cell, in response to a 0.7 polarization of the driver cell, as a function of $`d`$ for 2, 26 and 50 electrons in the driver cell. If there is a total of just two electrons, the driven cell is always fully polarized, independently of the distance at which the neutralizing charges are located.
When the number of electrons becomes larger, the polarization of the driven cell is unaffected as long as the neutralizing charges are within a reasonable distance from the driver cell; above a certain threshold value of $`d`$ (depending on $`n`$), the locally uncompensated repulsive action of the electrons in the driver cell prevails and forces the electrons of the driven cell into the two rightmost dots, thus yielding zero polarization. As far as case (b) is concerned, in Fig. 2 results are shown for 2, 26 and 50 electrons, for the previously described operating conditions. For large enough values of $`h`$, the polarization of the driven cell drops to zero, because of the repulsive action of the locally uncompensated charge. In addition, the polarization decreases (in the same fashion, regardless of the number of electrons) for decreasing $`h`$: this is easily understood considering that the image charges do screen the action of the driver cell, and such screening becomes more effective as the image plane approaches the cell plane. In Fig. 3 we report the complete cell-to-cell response function, i.e. the polarization of the driven cell versus that of the driver cell, for neutralization with image charges at a distance of 70 nm, and for 2, 26 and 50 electrons. Full polarization is reached both for 2 and for 26 electrons, with some problems appearing for 50 electrons, which could be overcome by adjusting the geometrical parameters. Realistic situations are somewhere in between the two cases we have just discussed, since neutralization is performed by means of charges located both at the surface (metal gates or surface traps) and in the layers of the heterostructure. These results confirm the ability of our Eq.(2) to properly describe the polarizing action of a many-electron driver cell, and we can move on to the discussion of the response of a many-electron driven cell.
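Before turning to many electrons, the essence of such a cell-to-cell response calculation can be sketched in a drastically simplified setting (not the continuum CI used here): two opposite-spin electrons, treated as distinguishable to avoid fermionic sign bookkeeping, in a basis of four localized dot orbitals, with illustrative hopping and Coulomb parameters. Even this toy Hamiltonian produces the bistable, saturating response:

```python
import numpy as np

# Toy cell-to-cell response: 16 product states of two opposite-spin
# electrons on 4 dots.  D (cell distance), t (hopping) and U_on (on-site
# cost) are illustrative values in units of the dot spacing / q^2/(4 pi eps a).
D, t, U_on = 3.0, 0.02, 2.0
dots = np.array([(0, 1), (1, 1), (1, 0), (0, 0)], float)  # clockwise from top-left
drv = dots + np.array([-D, 0.0])                          # driver cell to the left
adj = {(0, 1), (1, 2), (2, 3), (0, 3)}                    # tunnel-coupled dot pairs

def onsite(p):
    # driver at polarization p: net charge -p/2 on its dots 1,3 and +p/2 on
    # its dots 2,4 (locally neutralized); driven electron charge is -1
    qd = np.array([-p / 2, p / 2, -p / 2, p / 2])
    return np.array([sum(-q / np.linalg.norm(d - r) for q, r in zip(qd, drv))
                     for d in dots])

def response(p):
    eps = onsite(p)
    H = np.zeros((16, 16))
    for i in range(4):
        for j in range(4):
            s = 4 * i + j
            H[s, s] = eps[i] + eps[j] + (U_on if i == j else
                                         1.0 / np.linalg.norm(dots[i] - dots[j]))
            for k in range(4):
                if tuple(sorted((i, k))) in adj:
                    H[s, 4 * k + j] = -t     # electron 1 hops i -> k
                if tuple(sorted((j, k))) in adj:
                    H[s, 4 * i + k] = -t     # electron 2 hops j -> k
    v = np.linalg.eigh(H)[1][:, 0]           # ground state
    occ = np.zeros(4)
    for i in range(4):
        for j in range(4):
            occ[i] += v[4 * i + j] ** 2
            occ[j] += v[4 * i + j] ** 2
    return (occ[0] + occ[2] - occ[1] - occ[3]) / 2   # polarization, Eq. (2)

print([round(response(p), 3) for p in (-1.0, -0.5, 0.0, 0.5, 1.0)])
```

The response is exactly antisymmetric in the driver polarization (the geometry is mirror-symmetric top to bottom), vanishes for an unpolarized driver, and saturates close to full polarization at the endpoints.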
For this purpose, we can initially use an intuitive electrostatic model, in order to gain an immediate understanding of the problem, which will then be validated with a detailed quantum mechanical calculation. We consider electrons as classical particles, interacting via Coulomb repulsion, but with the possibility of tunneling between dots belonging to the same cell. The driver cell is assumed to have just two electrons and we examine the response of a many-electron driven cell. The configurations corresponding to the minimum electrostatic energy for cells with 3, 4, 5, 6 electrons are shown to the right of Fig. 4. It is apparent that, while for 3 and 5 electrons the maximum polarization is only one half, and for 4 electrons it is zero, for 6 electrons we obtain full polarization and a behavior that is substantially equivalent to that of a 2-electron cell. In order to validate this result, we have performed a quantum mechanical calculation on cells containing up to 6 electrons, by means of the CI technique. While for up to 4 electrons a basis of just 4 single-electron wave functions is adequate for obtaining very accurate results, for 5 or more electrons a larger basis is in general needed, because the presence of two electrons in the same dot leads to a significant deviation from the single-electron wave functions. An acceptable approximation for the cases of interest can still be obtained with a total of 8 spin orbitals; significant improvements in the accuracy require a large increase in the number of determinants and are beyond the scope of the present work. In Fig. 4 we report the CI results for the cell-to-cell response function of driven cells with 2, 3, 4, 5, 6 electrons, with barriers separating the dots that are higher than in the previous cases (this time the voltage applied to the gate is -0.7 V): the limiting polarization values achieved are in exact agreement with the predictions of the previously presented simple electrostatic model.
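A classical model of this kind can be sketched by brute-force enumeration of the dot occupations. The double-occupancy cost U, the cell distance and the neutralized point-charge driver below are illustrative assumptions (energies in units of $`q^2/4\pi \epsilon a`$, with $`a`$ the dot spacing), chosen only to reproduce the qualitative competition between intra-cell repulsion and the driver field:

```python
import numpy as np
from itertools import product

# Classical point-charge model of a driven cell facing a fully polarized,
# locally neutralized driver; U and D are illustrative choices.
U, D = 2.0, 3.0
dots = np.array([(0, 1), (1, 1), (1, 0), (0, 0)], float)  # clockwise from top-left
drv = dots + np.array([-D, 0.0])
drv_q = np.array([-0.5, 0.5, -0.5, 0.5])   # net driver charges, P = +1 on 1-3

def energy(occ):
    e = U * sum(n == 2 for n in occ)                        # on-site pairs
    for i in range(4):
        for j in range(i + 1, 4):                           # intra-cell repulsion
            e += occ[i] * occ[j] / np.linalg.norm(dots[i] - dots[j])
        for q, r in zip(drv_q, drv):                        # driver field (charge -1)
            e += occ[i] * (-q) / np.linalg.norm(dots[i] - r)
    return e

def ground_P(n):
    configs = [c for c in product(range(3), repeat=4) if sum(c) == n]
    best = min(configs, key=energy)                         # minimum-energy occupation
    return (best[0] + best[2] - best[1] - best[3]) / 2      # polarization, Eq. (2)

print({n: ground_P(n) for n in (2, 3, 4, 5, 6)})
assert [ground_P(n) for n in (2, 3, 4, 5, 6)] == [1.0, 0.5, 0.0, 0.5, 1.0]
```

With these assumed parameters the enumeration reproduces the pattern quoted above: maximum polarization one half for 3 and 5 electrons, zero for 4, and full polarization for 2 and 6.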
In addition, we notice that around the origin the curve for a 2-electron cell is steeper than that for the 6-electron cell: this is due to the fact that the two “excess” (with respect to 4) electrons in the 6-electron cell can be thought of as “seeing” a more shallow confinement potential, resulting from that of the 2-electron cell plus the electrostatic action of the first four electrons. Such an effect can be compensated for by raising the potential barriers separating the dots of each cell. From the intuitive electrostatic model and from the other results just described, we can conclude that QCA cell operation is substantially associated with the electrons in excess of a multiple of 4: a 6-electron cell yields full polarization just as a 2-electron cell does; the same occurs for a 10-electron cell and, in general, whenever the total number of electrons per cell equals $`4N+2`$, with $`N`$ an integer. This conclusion completes our understanding of the behavior of QCA cells, filling the gap between operation with just two electrons and that in the metallic limit, with a very large number of electrons and two excess charges. Cells with $`4N+2`$ electrons are thus suitable for QCA operation, thereby lowering the technological fabrication requirements; however, symmetry constraints are in no way reduced as a result of the present findings, and remain the main obstacle preventing the implementation of practicable QCA logic. This work has been supported by the ESPRIT project 23362 QUADRANT (QUAntum Devices foR Advanced Nano-electronic Technology).
# Critical temperature for quenching of pair correlations ## Abstract The level density at low spin in the <sup>161,162</sup>Dy and <sup>171,172</sup>Yb nuclei has been extracted from primary $`\gamma `$ rays. The nuclear heat capacity is deduced within the framework of the canonical ensemble. The heat capacity exhibits an S-shaped form as a function of temperature, which is interpreted as a fingerprint of the phase transition from a strongly correlated to an uncorrelated phase. The critical temperature for the quenching of pair correlations is found at $`T_c=0.50(4)`$ MeV. PACS number(s): 21.10.Ma, 24.10.Pa, 25.55.Hp, 27.70.+q The thermodynamical properties of nuclei deviate from those of infinite systems. While the quenching of pairing in superconductors is well described as a function of temperature, the nucleus represents a finite many-body system characterized by large fluctuations in the thermodynamic observables. A long-standing problem in experimental nuclear physics has been to observe the transition from strongly paired states, at around $`T=0`$, to unpaired states at higher temperatures. In nuclear theory, the pairing gap parameter $`\mathrm{\Delta }`$ can be studied as a function of temperature using the BCS gap equations. From this simple model the gap decreases monotonically to zero at a critical temperature of $`T_c\approx 0.5\mathrm{\Delta }`$. However, if particle number is projected out, the decrease is significantly delayed. The predicted decrease of pair correlations takes place over several MeV of excitation energy. Recently, we reported structures in the level densities in the 1–7 MeV region, which are probably due to the breaking of nucleon pairs and a gradual decrease of pair correlations. Experimental data on the quenching of pair correlations are important as a test for nuclear theories. Within finite temperature BCS and RPA models, level density and specific heat have been calculated for e.g.
<sup>58</sup>Ni; within the shell model Monte Carlo method (SMMC) one is now able to estimate level densities in heavy nuclei up to high excitation energies. The subject of this Letter is to report on the observation of the gradual transition from strongly paired states to unpaired states in rare earth nuclei at low spin. The canonical heat capacity is used as a thermometer. Since only particles at the Fermi surface contribute to this quantity, it is very sensitive to phase transitions. It has been demonstrated from SMMC calculations in the Fe region that breaking of only one nucleon pair increases the heat capacity significantly. Recently, we presented a method for extracting the level density and the $`\gamma `$ strength function from measured $`\gamma `$ ray spectra. Since the $`\gamma `$ decay half-lives are long, typically $`10^{-12}`$–$`10^{-19}`$ s, the method should essentially give observables from a thermalized system. The spin window is typically 2–8 $`\mathrm{\hbar }`$ and the excitation energy resolution is 0.3 MeV. The experiments were carried out with 45 MeV <sup>3</sup>He projectiles from the MC-35 cyclotron at the University of Oslo. The experimental data were recorded with the CACTUS multidetector array using the (<sup>3</sup>He,$`\alpha \gamma `$) reaction on <sup>162,163</sup>Dy and <sup>172,173</sup>Yb self-supporting targets. The beam time was two weeks for each target. The charged ejectiles were detected with eight particle telescopes placed at an angle of 45° relative to the beam direction. Each telescope comprises one Si $`\mathrm{\Delta }E`$ front detector and one Si(Li) $`E`$ end detector with thicknesses of 140 and 3000 $`\mu `$m, respectively. An array of 28 5”$`\times `$5” NaI(Tl) $`\gamma `$ detectors with a total efficiency of about 15% surrounded the target and the particle detectors. From the reaction kinematics the measured $`\alpha `$ particle energy can be transformed to excitation energy $`E`$.
Thus, each coincident $`\gamma `$ ray can be assigned to a $`\gamma `$ cascade originating from a specific excitation energy. The data are sorted into a matrix of $`(E,E_\gamma )`$ energy pairs. At each excitation energy $`E`$ the NaI $`\gamma `$ ray spectra are unfolded, and this matrix is used to extract the primary $`\gamma `$ ray matrix with the well established subtraction technique of Ref. . The resulting matrix $`P(E,E_\gamma )`$, which describes the primary $`\gamma `$ spectra obtained at an initial excitation energy $`E`$, is factorized according to the Brink-Axel hypothesis by $`P(E,E_\gamma )\propto \rho (E-E_\gamma )\sigma (E_\gamma )`$. The level density $`\rho (E)`$ and the $`\gamma `$ strength function $`\sigma (E_\gamma )`$ are determined by a least-$`\chi ^2`$ fit to $`P`$. Since the fit yields an infinitely large number of equally good solutions, which can be obtained by transforming one arbitrary solution by $`\stackrel{~}{\rho }(E-E_\gamma )`$ $`=`$ $`A\mathrm{exp}[\alpha (E-E_\gamma )]\rho (E-E_\gamma ),`$ (1) $`\stackrel{~}{\sigma }(E_\gamma )`$ $`=`$ $`B\mathrm{exp}(\alpha E_\gamma )\sigma (E_\gamma ),`$ (2) we have to determine the parameters $`A`$, $`B`$ and $`\alpha `$ by comparing the $`\rho `$ and $`\sigma `$ functions to known data. The $`A`$ and $`\alpha `$ parameters are fitted to reproduce the number of known levels in the vicinity of the ground state and the neutron resonance spacing at the neutron binding energy. Figure 1 shows the extracted level densities and $`\gamma `$ strength functions for the <sup>161,162</sup>Dy and <sup>171,172</sup>Yb nuclei. The data for the even nuclei have been published recently, and are included for comparison with the odd nuclei. The $`\gamma `$ strength functions are also given for all four nuclei. Since other experimental data on the $`\gamma `$ strength function in the energy interval 1.5–6 MeV are sparse, we could not fix the parameter $`B`$, and we therefore give the functions in arbitrary units.
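That the transformation (1)-(2) indeed leaves the fit untouched can be seen in a toy check: the product of the transformed functions acquires an extra factor $`AB\,e^{\alpha E}`$ that depends only on the initial excitation energy $`E`$, not on $`E_\gamma `$, and is removed when each primary spectrum is normalized. The grids and functional shapes below are invented illustration values:

```python
import numpy as np

E = np.linspace(1.0, 6.0, 26)          # excitation-energy grid (MeV)
rho = np.exp(2 * np.sqrt(1.8 * E))     # some level density (toy shape)
sig = E ** 4.3                         # some gamma strength shape (toy)

def P_matrix(rho, sig):
    """normalized P(E, E_gamma) ~ rho(E - E_gamma) * sigma(E_gamma)"""
    P = np.array([[rho[i - j] * sig[j] if j <= i else 0.0
                   for j in range(len(E))] for i in range(len(E))])
    s = P.sum(axis=1, keepdims=True)
    return np.divide(P, s, out=np.zeros_like(P), where=s > 0)

A, B, alpha = 1.7, 0.3, 0.9            # arbitrary transformation parameters
rho2 = A * np.exp(alpha * E) * rho     # transformed rho, Eq. (1)
sig2 = B * np.exp(alpha * E) * sig     # transformed sigma, Eq. (2)
assert np.allclose(P_matrix(rho, sig), P_matrix(rho2, sig2))
print("transformed (rho, sigma) reproduce the same normalized P(E, E_gamma)")
```

This is why external input (known discrete levels and the neutron resonance spacing) is required to pin down $`A`$, $`B`$ and $`\alpha `$.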
However, when fitting the functions with the expression $`CE_\gamma ^n`$ we obtain $`n\simeq 3.9,\mathrm{\hspace{0.17em}4.5}`$ for the Dy and Yb isotopes, respectively, which one would expect from the tail of the giant dipole resonance . In the following, only the level density will be discussed. The partition function in the canonical ensemble $$Z(T)=\sum _{n=0}^{\mathrm{\infty }}\rho (E_n)e^{-E_n/T}$$ (3) is determined by the measured level density of accessible states $`\rho (E_n)`$ in the present nuclear reaction. Strictly, the sum should run from zero to infinity. In this work we calculate $`Z`$ for temperatures up to $`T=1`$ MeV. Assuming a Bethe-like level density expression , the average excitation energy in the canonical ensemble $$\langle E(T)\rangle =Z^{-1}\sum _{n=0}^{\mathrm{\infty }}E_n\rho (E_n)e^{-E_n/T}$$ (4) gives roughly $`\langle E\rangle \simeq aT^2`$ with a standard deviation of $`\sigma _E\simeq T\sqrt{2aT}`$, where $`a`$ is the level density parameter. Using Eq. (4) requires that the level density be known up to $`\langle E\rangle +3\sigma _E`$, typically 40 MeV. However, the experimental level densities of Fig. 1 only cover the excitation region up to close to the neutron binding energy of about 6 and 8 MeV for odd and even mass nuclei, respectively. For higher energies it is reasonable to assume Fermi gas properties, since single particles are excited into the continuum region with high level density. Therefore, due to the lack of experimental data, the level density is extrapolated to higher energies by the shifted Fermi gas model expression $$\rho _{\mathrm{FG}}(U)=f\frac{\mathrm{exp}(2\sqrt{aU})}{12\sqrt{0.1776}a^{1/2}U^{3/2}A^{1/3}},$$ (5) where $`U`$ is the shifted energy and $`A`$ is the mass number. For the shift and level density parameter $`a`$, we use von Egidy’s parameterization of this expression . The expression had to be normalized by a factor $`f`$ in order to match the neutron resonance spacing data. 
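As a cross-check of Eqs. (3) and (4), the canonical average can be computed numerically for a Bethe-like level density; with $`a=20`$ MeV<sup>-1</sup> the mean energy should track $`\langle E\rangle \simeq aT^2`$. A rough sketch (grid and cutoffs are illustrative):

```python
import numpy as np

# Canonical partition function and mean energy, Eqs. (3)-(4), evaluated
# for a Bethe-like level density rho(E) ~ E^(-3/2) exp(2*sqrt(a*E)).
a = 20.0                                  # level density parameter (MeV^-1)
E = np.linspace(1e-3, 120.0, 240000)      # covers <E> + 3 sigma_E for T <= 1 MeV
rho = np.exp(2.0 * np.sqrt(a * E)) / E ** 1.5

def mean_energy(T):
    w = rho * np.exp(-E / T)              # Boltzmann weights, Eq. (3)
    return np.sum(E * w) / np.sum(w)      # Eq. (4), sum approximates integral

T = 1.0                                   # MeV
print(mean_energy(T), a * T ** 2)         # both close to 20 MeV
```

The small offset between the two printed numbers comes from the $`E^{-3/2}`$ prefactor, which the simple $`\langle E\rangle \simeq aT^2`$ estimate neglects.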
The factors are 1.2, 0.94, 0.4 and 0.6 for <sup>161,162</sup>Dy and <sup>171,172</sup>Yb, respectively. The solid lines in Fig. 1 show how the expression extrapolates our experimental level density curves. The extraction of the microcanonical heat capacity $`C_V(E)`$ gives large fluctuations which are difficult to interpret . Therefore, the heat capacity $`C_V(T)`$ is calculated within the canonical ensemble, where $`T`$ is a fixed input value in the theory and a more appropriate parameter. The heat capacity is then given by $$C_V(T)=\frac{\partial \langle E\rangle }{\partial T},$$ (6) and the averaging made in Eq. (4) gives a smooth temperature dependence of $`C_V(T)`$. A corresponding $`C_V(E)`$ may also be derived in the canonical ensemble. The deduced heat capacities for the <sup>161,162</sup>Dy and <sup>171,172</sup>Yb nuclei are shown in Fig. 2. All four nuclei exhibit similar S-shaped $`C_V(T)`$ curves with a local maximum relative to the Fermi gas estimate at $`T_c\simeq 0.5`$ MeV. The S-shaped curve is interpreted as a fingerprint of a phase transition in a finite system from a phase with strong pairing correlations to a phase without such correlations. Due to the strong smoothing introduced by the transformation to the canonical ensemble, we do not expect to see discrete transitions between the various quasiparticle regimes, but only the transition where all pairing correlations are quenched as a whole. In the right panels of Fig. 2, we see that $`C_V(E)`$ has an excess in the heat capacity distributed over a broad region of excitation energy, which does not give a clear signal for the quenching of pairing correlations at a certain energy . In order to extract a critical temperature for the quenching of pairing correlations from our data, we have to be careful not to depend too much on the extrapolation of $`\rho `$. An inspection of Fig. 
1 shows that the level density is roughly composed of two components, as proposed by Gilbert and Cameron : (i) a low-energy part, approximately a straight line in the log plot, and (ii) a high-energy part, including the theoretical Fermi gas extrapolation, which is a slower growing function. For illustration, we construct a simple level density formula composed of a constant temperature level density part with $`\tau `$ as temperature parameter, and a Fermi gas expression $$\rho (E)\propto \{\begin{array}{cc}\eta \mathrm{exp}(E/\tau )& \mathrm{for}E\le \epsilon \\ E^{-3/2}\mathrm{exp}(2\sqrt{aE})& \mathrm{for}E>\epsilon \end{array},$$ (7) where $`\eta =\epsilon ^{-3/2}\mathrm{exp}(2\sqrt{a\epsilon }-\epsilon /\tau )`$ accounts for continuity at the energy $`E=\epsilon `$. If we also require the slopes to be equal at $`\epsilon `$, the level density parameter $`a`$ is restricted to $$a=\left(\frac{\sqrt{\epsilon }}{\tau }+\frac{3}{2\sqrt{\epsilon }}\right)^2.$$ (8) Figure 3 shows the heat capacity evaluated in the canonical ensemble with the level density function of Eq. (7) and $`\tau ^{-1}=1.7`$ MeV<sup>-1</sup>. The left-hand part simulates a pure Fermi gas description, i.e. the case $`\epsilon =0`$, assuming a level density parameter $`a=20`$ MeV<sup>-1</sup>. One can see that a pure Fermi gas does not give rise to the characteristic S-shape of the heat capacity as in Fig. 2. The right-hand part simulates the experiments, where $`\epsilon =5`$ MeV and $`a`$ fulfills Eq. (8), i.e. again $`a=20`$ MeV<sup>-1</sup>. The characteristic S-shape emerges. Therefore, our method to find $`T_c`$ relies on the assumption that the lower energy part of the level density can be approximately described by a constant temperature level density. Calculating $`\langle E(T)\rangle `$ and $`C_V(T)`$ within the canonical ensemble for an exponential level density gives $`T^{-1}=\langle E(T)\rangle ^{-1}+\tau ^{-1}`$ and $$C_V(T)=(1-T/\tau )^{-2}.$$ (9) Thus, plotting $`T^{-1}`$ as a function of $`\langle E(T)\rangle ^{-1}`$, one can determine $`\tau `$ from Fig. 4. 
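A numerical sketch of this construction (only the parameter values quoted in the text are taken from the analysis; everything else is illustrative): building the composite level density of Eq. (7), computing $`C_V(T)`$ through Eqs. (3)–(6), and checking the low-temperature behavior against the analytic result of Eq. (9).

```python
import numpy as np

# Composite level density of Eq. (7) with tau^-1 = 1.7 MeV^-1, eps = 5 MeV,
# and a fixed by Eq. (8) (which gives a ~ 20 MeV^-1, as in the text).
tau = 1.0 / 1.7                        # constant-temperature parameter (MeV)
eps = 5.0                              # matching energy (MeV)
a = (np.sqrt(eps) / tau + 1.5 / np.sqrt(eps)) ** 2    # Eq. (8)

E = np.linspace(1e-3, 150.0, 300000)
eta = eps ** (-1.5) * np.exp(2 * np.sqrt(a * eps) - eps / tau)  # continuity
rho = np.where(E <= eps,
               eta * np.exp(E / tau),                 # pairing regime
               np.exp(2 * np.sqrt(a * E)) / E ** 1.5) # Fermi-gas regime

def heat_capacity(T, dT=1e-3):
    def Emean(t):                      # Eqs. (3)-(4)
        w = rho * np.exp(-E / t)
        return np.sum(E * w) / np.sum(w)
    return (Emean(T + dT) - Emean(T - dT)) / (2 * dT)  # Eq. (6)

for T in (0.2, 0.5, 1.0):
    print(T, heat_capacity(T), (1 - T / tau) ** -2)    # numeric vs. Eq. (9)
```

Well below $`\epsilon `$ the exponential component dominates and the numerical $`C_V(T)`$ follows Eq. (9); near $`T_c`$ the Fermi-gas component takes over and the curve rises above it, producing the S-shape.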
The quantity $`\tau `$ is then identified with the critical temperature $`T_c`$, since $`C_V(T)`$ according to Eq. (9) exhibits a pole at $`\tau `$, and the analogy with the definition of $`T_\lambda `$ in the theory of superfluids becomes evident. The $`C_V(T)`$ curve of Eq. (9) with $`T_c=\tau `$, using the extracted critical temperatures for the four nuclei, is shown as dashed-dotted lines in Fig. 2. This simple analytical expression with only one parameter $`T_c`$ fits the experimental data up to temperatures of $`\sim `$0.4 MeV. The critical temperature itself is marked by the vertical lines. The extracted $`T_c`$ is rather close to the estimate of $`T_c`$ represented by the arrows. The extracted values are $`T_c=`$ 0.52, 0.49, 0.50 and 0.49 MeV for the <sup>161,162</sup>Dy and <sup>171,172</sup>Yb nuclei, respectively, which are somewhat higher than those of a degenerate BCS model with $`T_c=0.5\mathrm{\Delta }`$, yielding $`T_c\simeq `$ 0.48, 0.46, 0.41 and 0.38 MeV for the respective nuclei, where $`\mathrm{\Delta }`$ is calculated from neutron separation energies . We will now discuss how sensitive the extracted critical temperature is with respect to the extrapolation, and we will give an estimate of the uncertainty of the extracted critical temperatures. In the fit in Fig. 4, we use only energies from $`\langle E\rangle \simeq `$ 0.5–2 MeV. This corresponds to energies in the level density curves up to $`E_n\simeq `$6 MeV according to Eq. (4). Also in Fig. 2, the interval where Eq. (9) fits the experimental data is $`T\simeq `$0–0.4 MeV. This corresponds to energies in the level density curves up to $`E_n\simeq `$8 MeV. Thus, the extracted critical temperature depends only weakly on the actual extrapolation of the $`\rho `$ curve. Also the S-shape of the $`C_V(T)`$ curve depends only on the fact that the nuclear level density behaves roughly like a Fermi gas expression at energies somewhere above the neutron binding energy. 
However, the actual values of the $`C_V(T)`$ curve above $`T\simeq `$0.5 MeV do depend on the specific extrapolation chosen. With respect to $`T_c`$, the extrapolation is only important for determining the parameters $`A`$ and especially $`\alpha `$ of Eq. (2). We can therefore estimate the error of $`T_c`$ by $$\left(\frac{\mathrm{\Delta }E\mathrm{\Delta }T_c}{T_c^2}\right)^2=\left(\frac{\mathrm{\Delta }a\mathrm{\Delta }U}{2\sqrt{aU_n}}\right)^2+\left(\frac{\mathrm{\Delta }D}{D}\right)^2+2(0.05)^2,$$ (10) where $`\mathrm{\Delta }E`$ is the energy difference between the upper and lower energy where $`A`$ and $`\alpha `$ are determined, $`\mathrm{\Delta }a`$ is the uncertainty of the level density parameter, $`\mathrm{\Delta }U`$ is the energy difference between the neutron binding energy and the upper point where $`A`$ and $`\alpha `$ are determined, and $`U_n`$ is the (shifted) neutron binding energy. $`D`$ and $`\mathrm{\Delta }D`$ are the neutron resonance spacing and its error. The errors of 5% are added in order to account for the two fitting procedures, one fitting $`A`$ and $`\alpha `$, the other fitting $`T_c`$, both with an uncertainty of some 5%. Using a very conservative estimate of $`a=17.5(6.0)`$ MeV<sup>-1</sup>, $`U_n<8`$ MeV, $`\mathrm{\Delta }U<3`$ MeV, $`\mathrm{\Delta }D/D<0.2`$ and $`\mathrm{\Delta }E>4.5`$ MeV, we obtain $`\mathrm{\Delta }T_c<0.04`$ MeV. This yields a maximum error of $`T_c`$ of some 8%. It is also important to notice that due to the strong smoothing in Eq. (4), the errors of the experimental level density curves are negligible in our calculation. In conclusion, we have seen a fingerprint of a phase transition in a finite system for the quenching of pairing correlations as a whole, given by the S-shape of the canonical heat capacity curves in rare earth nuclei. For the first time the critical temperature $`T_c`$ at which pair correlations in rare earth nuclei are quenched has been extracted from experimental data. 
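For completeness, the error budget of Eq. (10) above can be evaluated directly with the conservative numbers quoted there (a sketch; the quoted inequalities are treated as equalities, so the result is an upper estimate):

```python
import math

# Evaluation of Eq. (10) with the conservative inputs quoted in the text.
a, da = 17.5, 6.0        # level density parameter and its uncertainty (MeV^-1)
U_n, dU = 8.0, 3.0       # shifted neutron binding energy and interval (MeV)
dD_over_D = 0.2          # relative error of the neutron resonance spacing
dE = 4.5                 # fitting interval for A and alpha (MeV)
T_c = 0.5                # typical critical temperature (MeV)

rhs = (da * dU / (2 * math.sqrt(a * U_n))) ** 2 + dD_over_D ** 2 + 2 * 0.05 ** 2
dT_c = T_c ** 2 * math.sqrt(rhs) / dE    # solve Eq. (10) for Delta T_c
print(dT_c)                              # about 0.04 MeV, i.e. some 8% of T_c
```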
The signature of the quenching process is distributed over a broad excitation energy region of some 6 MeV, which is difficult to observe and interpret in the microcanonical ensemble. Simple arguments show that the peak in the heat capacity arises from two components in the level density: a constant-temperature-like part and a Fermi-gas-like part. It would be very interesting to compare our results with SMMC calculations performed for a narrow spin window. The authors are grateful to E.A. Olsen and J. Wikne for providing the excellent experimental conditions. We thank Y. Alhassid for several interesting discussions. We wish to acknowledge the support from the Norwegian Research Council (NFR).
# Neutrino afterglow from Gamma-Ray Bursts: ∼10¹⁸ eV ## 1 Introduction The widely accepted interpretation of the phenomenology of $`\gamma `$-ray bursts (GRB’s) is that the observable effects are due to the dissipation of the kinetic energy of a relativistically expanding fireball whose primal cause is not yet known \[see Mészáros (1995) and Piran (1996) for reviews\]. The physical conditions in the dissipation region imply that protons can be Fermi accelerated to energies $`>10^{20}`$ eV (Waxman 1995a; Vietri 1995; see Waxman 1999 for a recent review). Adopting the conventional fireball picture, we showed previously that the prediction of an accompanying burst of $`\sim 10^{14}\mathrm{eV}`$ neutrinos is a natural consequence (Waxman & Bahcall 1997). The neutrinos are produced by $`\pi ^+`$ created in interactions between fireball $`\gamma `$-rays and accelerated protons. The key relation is between the observed photon energy, $`ϵ_\gamma `$, and the accelerated proton’s energy, $`ϵ_p`$, at the photo-meson threshold of the $`\mathrm{\Delta }`$-resonance. In the observer’s frame, $$ϵ_\gamma ϵ_p=0.2\mathrm{GeV}^2\mathrm{\Gamma }^2,$$ (1) where phenomenologically the Lorentz factors of the expanding fireball are $`\mathrm{\Gamma }>10^2`$. Inserting a typical observed $`\gamma `$-ray energy of $`\sim 1`$ MeV, we see that characteristic proton energies $`\sim 2\times 10^6`$ GeV are required to produce neutrinos from pion decay. Typically, the neutrinos receive $`\sim 5`$% of the proton energy, leading to neutrinos of $`\sim 10^{14}`$ eV as stated. (The claim that the spectrum of neutrinos produced in interaction with burst, rather than afterglow, photons extends to $`\sim 10^{19}`$ eV (Vietri (1998)) is due to calculational errors.) In the standard picture, these neutrinos result from internal shocks within the fireball. In the last two years, afterglows of GRB’s have been discovered in X-ray, optical, and radio (Costa et al. 1997; van Paradijs et al. 1997; Frail et al. 1997). 
These observations confirm (Waxman 1997a; Wijers, Rees, & Mészáros 1997) standard model predictions (Paczyński & Rhoads 1993; Katz 1994; Mészáros & Rees 1997; Vietri 1997a) of afterglows that result from the collision of the expanding fireball with the surrounding medium. Inserting in Eq. (1) a typical afterglow photon energy $`\sim 10^2`$ eV, we see that characteristic neutrino energies of order $`10^9`$ GeV may be expected. (Afterglow photons are produced over a wide range of energies, from radio to X-rays, leading to a broad neutrino spectrum. As we show in §4, however, the flux is dominated by neutrinos in the energy range $`\sim 10^{17}`$–$`10^{19}`$ eV \[cf. Eq. (14)\], which are produced in proton interactions with $`\sim 10`$ eV–1 keV photons.) $`\gamma `$-rays of similar energies are produced by $`\pi ^0`$ decay, but because the fireball is optically thick at these energies the $`\gamma `$’s probably leak out only at much lower energies, $`\sim 10`$ GeV. The ultra-high energy neutrinos, $`\sim 10^{18}`$ eV, are produced in the initial stage of the interaction of the fireball with its surrounding gas, which occurs over a time, $`T`$, comparable to the duration of the GRB itself. Optical–UV photons are radiated by electrons accelerated in shocks propagating backward into the ejecta. Protons are accelerated to high energy in these “reverse” shocks. The combination of low energy photons and high energy protons produces ultra-high energy neutrinos via photo-meson interactions, as indicated by Eq. (1). Afterglows have been detected in several cases; reverse shock emission has only been identified for GRB 990123 (Akerlof 1999). Both the detections and the non-detections are consistent with shocks occurring with typical model parameters (Sari & Piran 1999; Mészáros & Rees 1999), suggesting that reverse shock emission may be common. 
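The energy scales quoted above follow directly from the threshold relation, Eq. (1); a quick numerical check (the Lorentz factors and photon energies are the fiducial values used in the text):

```python
# Photo-meson threshold, Eq. (1): eps_gamma * eps_p = 0.2 GeV^2 * Gamma^2.
def proton_energy_GeV(eps_gamma_GeV, gamma_factor):
    return 0.2 * gamma_factor ** 2 / eps_gamma_GeV

# Burst phase: ~1 MeV photons, Gamma ~ 100 -> ~2e6 GeV protons,
# whose decay neutrinos carry ~5% of the proton energy (~1e14 eV).
ep_burst = proton_energy_GeV(1e-3, 100)
print(ep_burst, 0.05 * ep_burst * 1e9, "eV")

# Afterglow (reverse shock) phase: ~100 eV photons, Gamma ~ 250.
ep_after = proton_energy_GeV(100e-9, 250)
print(0.05 * ep_after * 1e9, "eV")   # neutrino energy of order 1e18-1e19 eV
```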
The predicted neutrino and gamma-ray emission depends upon parameters of the surrounding medium that can only be estimated once more observations of the prompt optical afterglow emission are available. We discuss in §2 likely plasma conditions in the collisions between the fireball and the surrounding medium, and in §3 the physics of how the ultra-high energy neutrinos are produced. We first discuss in §3.1 ultra-high energy cosmic ray (UHECR) production in GRB’s in the light of recent afterglow observations. We show that afterglow observations provide further support for the model of UHECR production in GRB’s, and address some criticism of the model recently made in the literature (Gallant & Achterberg (1999)). Neutrino production is then discussed in §3.2. The expected neutrino flux and spectrum are derived in §4, and the implications of our results for future experiments are discussed in §5. ## 2 Plasma conditions at the reverse shocks We concentrate in this section on the epoch between the time the expanding fireball first strikes the surrounding medium and the time when the reverse shocks have erased the memory of the initial conditions. After this period, the expansion approaches the Blandford & McKee (1976) self-similar solutions. The purpose of this section is to derive the plasma parameters and the UV luminosity and spectrum expected for typical GRB parameters. We use the results derived here to estimate in the following sections the energy to which protons can be accelerated and to calculate the production of ultra-high energy neutrinos and multiple GeV photons. During self-similar expansion, the Lorentz factor of plasma at the shock front is $`\mathrm{\Gamma }_{BM}=(17E/16\pi nm_pc^2)^{1/2}r^{-3/2}`$, where $`E`$ is the fireball energy and $`n`$ is the surrounding gas number density. The characteristic time at which radiation emitted by shocked plasma at radius $`r`$ is observed by a distant observer is $`t\approx r/4\mathrm{\Gamma }_{BM}^2c`$ (Waxman 1997b). 
The transition to self-similar expansion occurs on a time scale $`T`$ (measured in the observer frame) comparable to the longer of the two time scales set by the initial conditions: the (observer) GRB duration $`t_{\mathrm{GRB}}`$ and the (observer) time $`T_\mathrm{\Gamma }`$ at which the self-similar Lorentz factor equals the original ejecta Lorentz factor $`\mathrm{\Gamma }_i`$, $`\mathrm{\Gamma }_{BM}(t=T_\mathrm{\Gamma })=\mathrm{\Gamma }_i`$. Since $`t=r/4\mathrm{\Gamma }_{BM}^2c`$, $$T=\mathrm{max}[t_{\mathrm{GRB}},6\left(\frac{E_{53}}{n_0}\right)^{1/3}\left(\frac{\mathrm{\Gamma }_i}{300}\right)^{-8/3}\mathrm{s}].$$ (2) During the transition, plasma shocked by the reverse shocks expands with a Lorentz factor close to that given by the self-similar solution, $`\mathrm{\Gamma }\simeq \mathrm{\Gamma }_{BM}(t=T)`$, i.e. $$\mathrm{\Gamma }\simeq 245\left(\frac{E_{53}}{n_0}\right)^{1/8}T_1^{-3/8},$$ (3) while the unshocked fireball ejecta propagate with the original expansion Lorentz factor, $`\mathrm{\Gamma }_i>\mathrm{\Gamma }`$. We write Eq. (3) in terms of dimensionless parameters that are characteristically of order unity in models that successfully describe observed GRB phenomena. Thus $`E=10^{53}E_{53}`$ erg, $`T=10T_1`$ s, $`n=1n_0\mathrm{cm}^{-3}`$, and typically $`\mathrm{\Gamma }_i\simeq 300`$. Lorentz factors of the reverse shocks in the frames of the unshocked plasma are mildly relativistic, $`\mathrm{\Gamma }_R-1\simeq \mathrm{\Gamma }_i/\mathrm{\Gamma }_{\mathrm{BM}}`$. If the initial Lorentz factor is extremely large, $`\mathrm{\Gamma }_i\gg 300`$, the transition Lorentz factor computed from Eq. (2) and Eq. (3) remains unchanged, $`\mathrm{\Gamma }\simeq 250`$, while the reverse shocks become highly relativistic, $`\mathrm{\Gamma }_R\simeq \mathrm{\Gamma }_i/\mathrm{\Gamma }\gg 1`$. The observed photon radiation is produced in the fireball model by synchrotron emission of shock-accelerated electrons. 
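Eqs. (2) and (3) can be evaluated for fiducial parameters; the helpers below are a sketch (the function names are ours; the defaults are the typical values quoted in the text):

```python
# Transition time and Lorentz factor of the reverse-shock phase,
# Eqs. (2)-(3), for fiducial GRB parameters (E_53 = n_0 = 1, Gamma_i = 300).
def T_transition(t_grb, E53=1.0, n0=1.0, gamma_i=300.0):
    """Observer-frame transition time, Eq. (2), in seconds."""
    t_gamma = 6.0 * (E53 / n0) ** (1.0 / 3.0) * (gamma_i / 300.0) ** (-8.0 / 3.0)
    return max(t_grb, t_gamma)

def gamma_transition(T, E53=1.0, n0=1.0):
    """Shocked-plasma Lorentz factor at the transition, Eq. (3)."""
    T1 = T / 10.0
    return 245.0 * (E53 / n0) ** 0.125 * T1 ** (-0.375)

T = T_transition(t_grb=10.0)      # a 10 s burst
print(T, gamma_transition(T))     # T = 10 s, Gamma close to 245
```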
We now summarize the characteristics of the synchrotron spectrum that leads to ultra-high energy neutrinos and GeV photons. Let $`\xi _e`$ and $`\xi _B`$ be the fractions of the thermal energy density $`U`$ (in the plasma rest frame) that are carried, respectively, by electrons and magnetic fields. The characteristic electron Lorentz factor (in the plasma rest frame) is $`\gamma _m\simeq \xi _e(\mathrm{\Gamma }_R-1)m_p/m_e\simeq \xi _e(\mathrm{\Gamma }_i/\mathrm{\Gamma })m_p/m_e`$, where the thermal energy per proton in the shocked ejecta is $`(\mathrm{\Gamma }_R-1)m_pc^2`$. The energy density $`U`$ is given by $`E\simeq 4\pi r^2cT\mathrm{\Gamma }^2U`$, and the number of radiating electrons is $`N_e\simeq E/\mathrm{\Gamma }_im_pc^2`$. The characteristic (or peak) energy of synchrotron photons (in the observer frame) is $$ϵ_{\gamma m}^{\mathrm{ob}.}\simeq \hbar \mathrm{\Gamma }\gamma _m^2\frac{eB}{m_ec}=0.6\xi _{e,-1}^2\xi _{B,-2}^{1/2}n_0^{1/2}\left(\frac{\mathrm{\Gamma }_i}{300}\right)^2\mathrm{eV},$$ (4) and the specific luminosity, $`L_ϵ=dL/dϵ_\gamma ^{\mathrm{ob}.}`$, at $`ϵ_{\gamma m}^{\mathrm{ob}.}`$ is $$L_m\simeq (2\pi \hbar )^{-1}\mathrm{\Gamma }\frac{e^3B}{m_ec^2}N_e\simeq 6\times 10^{60}\xi _{B,-2}^{1/2}E_{53}^{5/4}T_1^{-3/4}\left(\frac{\mathrm{\Gamma }_i}{300}\right)^{-1}n_0^{1/4}\mathrm{s}^{-1},$$ (5) where $`\xi _e=0.1\xi _{e,-1}`$, and $`\xi _B=0.01\xi _{B,-2}`$. Hereafter, we denote particle energy in the observer frame with the super-script “ob.”, and particle energy measured in the plasma frame with no super-script (e.g., $`ϵ_{\gamma m}^{\mathrm{ob}.}=\mathrm{\Gamma }ϵ_{\gamma m}`$). Since the reverse shocks are typically mildly relativistic, electrons are expected to be accelerated in these shocks to a power law energy distribution, $`dN_e/d\gamma _e\propto \gamma _e^{-p}`$ for $`\gamma _e>\gamma _m`$, with $`p\simeq 2`$ (Axford, Leer, & Skadron 1977; Bell 1978; Blandford & Ostriker 1978). 
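The fiducial normalizations of Eqs. (4) and (5) can be packaged as follows (a sketch; the functions simply encode the quoted scalings, with $`\xi _{e,-1}=\xi _{B,-2}=1`$ by default):

```python
# Reverse-shock synchrotron peak energy and specific luminosity,
# Eqs. (4)-(5), in the dimensionless-parameter notation of the text.
def eps_gamma_m_eV(xe1=1.0, xb2=1.0, n0=1.0, gamma_i=300.0):
    """Observer-frame synchrotron peak, Eq. (4), in eV."""
    return 0.6 * xe1 ** 2 * xb2 ** 0.5 * n0 ** 0.5 * (gamma_i / 300.0) ** 2

def L_m(xb2=1.0, E53=1.0, T1=1.0, gamma_i=300.0, n0=1.0):
    """Specific luminosity at the peak, Eq. (5), in s^-1."""
    return (6e60 * xb2 ** 0.5 * E53 ** 1.25 * T1 ** (-0.75)
            * (gamma_i / 300.0) ** (-1) * n0 ** 0.25)

print(eps_gamma_m_eV(), L_m())   # fiducial values: 0.6 eV and 6e60 s^-1
```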
The specific luminosity extends in this case to energy $`ϵ_\gamma >ϵ_{\gamma m}`$ as $`L_ϵ=L_m(ϵ_\gamma /ϵ_{\gamma m})^{-1/2}`$, up to photon energy $`ϵ_{\gamma c}`$. Here $`ϵ_{\gamma c}`$ is the characteristic synchrotron frequency of electrons for which the synchrotron cooling time, $`6\pi m_ec/\sigma _T\gamma _eB^2`$, is comparable to the ejecta (rest frame) expansion time, $`r/\mathrm{\Gamma }c`$. At energy $`ϵ_\gamma >ϵ_{\gamma c}`$, $$ϵ_{\gamma c}^{\mathrm{ob}.}\simeq 0.3\xi _{B,-2}^{-3/2}n_0^{-1}E_{53}^{-1/2}T_1^{-1/2}\mathrm{keV},$$ (6) the spectrum steepens to $`L_ϵ\propto ϵ_\gamma ^{-1}`$. ## 3 UHECR and neutrino production ### 3.1 UHECR production Protons are expected to be accelerated to high energies in mildly relativistic shocks within an expanding ultra-relativistic GRB wind (Waxman 1995a, Vietri 1995). Energies as high as $`ϵ_p^{\mathrm{ob}.}=10^{20}ϵ_{p,20}^{\mathrm{ob}.}`$ eV may be achieved provided the fraction of thermal energy density carried by magnetic fields, $`\xi _B`$, is large enough, and provided shocks occur at large enough radii, so that proton energy loss by synchrotron emission does not affect acceleration (Waxman 1995a, 1999). The condition that needs to be satisfied by $`\xi _B`$, $$\frac{\xi _B}{\xi _e}>10^{-2}(ϵ_{p,20}^{\mathrm{ob}.}\mathrm{\Gamma }/250)^2L_{\gamma ,52}^{-1},$$ (7) where $`\mathrm{\Gamma }`$ is the wind expansion Lorentz factor and $`L_\gamma =10^{52}L_{\gamma ,52}\mathrm{erg}\mathrm{s}^{-1}`$ its $`\gamma `$-ray luminosity, is consistent with constraints imposed by afterglow observations. Afterglow observations imply $`\xi _e\simeq 0.1`$ and $`\xi _B\simeq 0.01`$ \[e.g. Eq. (4)\]. 
The observed distribution of GRB redshifts, which suggests that most detected GRB’s occur in the redshift range of 1–3 (Krumholz, Thorsett & Harrison (1998); Mao & Mo (1998); Hogg & Fruchter (1999)), implies that the characteristic GRB $`\gamma `$-ray luminosity is $`L_\gamma \simeq 10^{52}\mathrm{erg}\mathrm{s}^{-1}`$: For a characteristic GRB $`\gamma `$-ray flux, $`F_\gamma \simeq 10^{-6}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ in the BATSE 20keV–2MeV range, and adopting the cosmological parameters $`\mathrm{\Omega }=0.2`$, $`\mathrm{\Lambda }=0`$ and $`H_0=75\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, the luminosity for a $`z=1.5`$ burst is $`L_\gamma \simeq 10^{52}\mathrm{erg}\mathrm{s}^{-1}`$. This result is consistent with the more detailed analysis of Mao & Mo (1998), who obtain a median GRB luminosity in the 50keV–300keV range (which accounts for $`\sim 1/3`$ of the BATSE range luminosity) of $`\simeq 10^{51}\mathrm{erg}\mathrm{s}^{-1}`$ for $`\mathrm{\Omega }=1`$, $`\mathrm{\Lambda }=0`$ and $`H_0=100\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$. The condition that needs to be satisfied to avoid proton synchrotron energy loss (Waxman 1995a ), $$r>r_{\mathrm{syn}}=10^{12}(\mathrm{\Gamma }/250)^{-2}(ϵ_{p,20}^{\mathrm{ob}.})^3\mathrm{cm},$$ (8) is clearly satisfied in the present context, as reverse shocks arise at $`r\approx 4\mathrm{\Gamma }^2cT\simeq 10^{17}\mathrm{cm}\gg 10^{12}`$ cm. Thus, synchrotron losses of protons accelerated to high energy at the radii where reverse shocks are expected to arise are negligible. We note that it has recently been claimed (Gallant & Achterberg 1999) that acceleration of protons to $`10^{20}`$ eV in the highly-relativistic external shock driven by the fireball into its surrounding medium is impossible. Regardless of whether this claim is correct or not, it is not relevant to the model proposed in Waxman (1995a) and discussed here, in which protons are accelerated in the mildly-relativistic internal (reverse) shocks. 
Finally, improved constraints from afterglow observations on the energy generation rate of GRB’s provide further support to the GRB model of UHECR production. For an open universe, $`\mathrm{\Omega }=0.2`$, $`\mathrm{\Lambda }=0`$ and $`H_0=75\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, the GRB rate per unit volume required to account for the observed BATSE rate is $`R_{\mathrm{GRB}}\simeq 10^{-8}\mathrm{Mpc}^{-3}\mathrm{yr}^{-1}`$, assuming a constant comoving GRB rate. Present data do not allow one to distinguish between models in which the GRB rate evolves with redshift, e.g. following the star formation rate, and models in which it does not, since in both cases most detected GRB’s occur in the redshift range of 1–3 (Hogg & Fruchter (1999)). Thus, $`R_{\mathrm{GRB}}`$ provides a robust estimate of the rate at $`z\simeq 1`$, while the present, $`z=0`$, rate may be lower by a factor of $`\simeq 8`$ if strong redshift evolution is assumed. This implies that the present rate of $`\gamma `$-ray energy generation by GRB’s is in the range of $`10^{44}\mathrm{erg}\mathrm{Mpc}^{-3}\mathrm{yr}^{-1}`$ to $`10^{45}\mathrm{erg}\mathrm{Mpc}^{-3}\mathrm{yr}^{-1}`$, remarkably similar to the energy generation rate required to account for the observed UHECR flux above $`10^{19}`$ eV, $`\simeq 10^{44}\mathrm{erg}\mathrm{Mpc}^{-3}\mathrm{yr}^{-1}`$ (Waxman 1995b ; Waxman & Bahcall (1999)). ### 3.2 Neutrino production The photon distribution in the wind rest frame is isotropic. 
Denoting by $`n_\gamma (ϵ_\gamma )dϵ_\gamma `$ the number density (in the wind rest frame) of photons in the energy range $`ϵ_\gamma `$ to $`ϵ_\gamma +dϵ_\gamma `$, the fractional energy loss rate of a proton with energy $`ϵ_p`$ due to pion production is $$t_\pi ^{-1}(ϵ_p)\equiv -\frac{1}{ϵ_p}\frac{dϵ_p}{dt}=\frac{c}{2\gamma _p^2}\int _{ϵ_0}^{\mathrm{\infty }}dϵ\,\sigma _\pi (ϵ)\xi (ϵ)ϵ\int _{ϵ/2\gamma _p}^{\mathrm{\infty }}dx\,x^{-2}n(x),$$ (9) where $`\gamma _p=ϵ_p/m_pc^2`$, $`\sigma _\pi (ϵ)`$ is the cross section for pion production for a photon with energy $`ϵ`$ in the proton rest frame, $`\xi (ϵ)`$ is the average fraction of energy lost to the pion, and $`ϵ_0=0.15\mathrm{GeV}`$ is the threshold energy. The photon density is related to the observed luminosity by $`n(x)=L_ϵ(\mathrm{\Gamma }x)/(4\pi r^2c\mathrm{\Gamma }x)`$. For proton Lorentz factors $`ϵ_0/2ϵ_{\gamma c}\le \gamma _p<ϵ_0/2ϵ_{\gamma m}`$, photo-meson production is dominated by interaction with photons in the energy range $`ϵ_{\gamma m}<ϵ_\gamma \le ϵ_{\gamma c}`$, where $`L_ϵ\propto ϵ_\gamma ^{-1/2}`$. For this photon spectrum, the contribution to the first integral of Eq. (9) from photons at the $`\mathrm{\Delta }`$ resonance is comparable to that of photons of higher energy, and we obtain $$t_\pi ^{-1}(ϵ_p)\simeq \frac{2^{5/2}}{2.5}\frac{L_m}{4\pi r^2\mathrm{\Gamma }}\left(\frac{ϵ_{\mathrm{peak}}}{\gamma _pϵ_{\gamma m}}\right)^{-1/2}\frac{\sigma _{\mathrm{peak}}\xi _{\mathrm{peak}}\mathrm{\Delta }ϵ}{ϵ_{\mathrm{peak}}}.$$ (10) Here, $`\sigma _{\mathrm{peak}}\simeq 5\times 10^{-28}\mathrm{cm}^2`$ and $`\xi _{\mathrm{peak}}\simeq 0.2`$ at the resonance $`ϵ=ϵ_{\mathrm{peak}}=0.3\mathrm{GeV}`$, and $`\mathrm{\Delta }ϵ\simeq 0.2\mathrm{GeV}`$ is the peak width. The time available for proton energy loss by pion production is comparable to the expansion time as measured in the wind rest frame, $`r/\mathrm{\Gamma }c`$. 
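Multiplying the loss rate of Eq. (10) by the available time $`r/\mathrm{\Gamma }c`$ gives the fraction of the proton energy converted to pions; the resulting scaling, which appears as Eq. (11) in the text, can be sketched numerically (the fiducial normalizations $`L_m=6\times 10^{60}\mathrm{s}^{-1}`$, $`\mathrm{\Gamma }=250`$, $`T_1=1`$ follow the text):

```python
# Pion-production efficiency of Eq. (11): fraction of proton energy lost
# to pions, for eps_p = ep20 * 1e20 eV and the synchrotron peak
# eps_gamma_m measured in eV in the observer frame.
def f_pi(ep20, eps_gamma_m_eV=1.0, Lm=6e60, gamma=250.0, T1=1.0):
    return (0.05 * (Lm / 6e60) * (gamma / 250.0) ** (-5) / T1
            * (eps_gamma_m_eV * ep20) ** 0.5)

print(f_pi(2.0))   # f_pi at eps_p = 2e20 eV, roughly 0.07
```

The square-root growth with $`ϵ_p`$ reflects the rising number of target photons above the interaction threshold for harder protons.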
Thus, the fraction of energy lost by protons to pions is $`f_\pi (ϵ_p^{\mathrm{ob}.})`$ $`\simeq 0.05\left({\displaystyle \frac{L_m}{6\times 10^{60}\mathrm{s}^{-1}}}\right)\left({\displaystyle \frac{\mathrm{\Gamma }}{250}}\right)^{-5}T_1^{-1}`$ (11) $`\times (ϵ_{\gamma m,\mathrm{eV}}^{\mathrm{ob}.}ϵ_{p,20}^{\mathrm{ob}.})^{1/2}.`$ Eq. (11) is valid for protons in the energy range $`4\times 10^{18}\left({\displaystyle \frac{\mathrm{\Gamma }}{250}}\right)^2(ϵ_{\gamma c,\mathrm{keV}}^{\mathrm{ob}.})^{-1}\mathrm{eV}<ϵ_p^{\mathrm{ob}.}<`$ $`4\times 10^{21}\left({\displaystyle \frac{\mathrm{\Gamma }}{250}}\right)^2(ϵ_{\gamma m,\mathrm{eV}}^{\mathrm{ob}.})^{-1}\mathrm{eV}.`$ (12) Such protons interact with photons in the energy range $`ϵ_{\gamma m}`$ to $`ϵ_{\gamma c}`$, where the photon spectrum $`L_ϵ\propto ϵ_\gamma ^{-1/2}`$ and the number of photons above the interaction threshold is $`\propto ϵ_p^{1/2}`$. At lower energy, protons interact with photons of energy $`ϵ_\gamma >ϵ_{\gamma c}`$, where $`L_ϵ\propto ϵ^{-1}`$ rather than $`L_ϵ\propto ϵ^{-1/2}`$. At these energies, therefore, $`f_\pi \propto ϵ_p^{\mathrm{ob}.}`$. Since the flow is ultra-relativistic, the results given above are independent of whether the wind is spherically symmetric or jet-like, provided the jet opening angle is $`>1/\mathrm{\Gamma }`$ . For a jet-like wind, $`L_m`$ is the luminosity that would have been produced by the wind if it were spherically symmetric. ## 4 Neutrino spectrum and flux Approximately half of the energy lost by protons goes into $`\pi ^0`$ ’s and the other half into $`\pi ^+`$ ’s. Neutrinos are produced by the decay of $`\pi ^+`$’s, $`\pi ^+\to \mu ^++\nu _\mu \to e^++\nu _e+\overline{\nu }_\mu +\nu _\mu `$. The mean pion energy is $`\simeq 20\%`$ of the proton energy. This energy is roughly evenly distributed between the $`\pi ^+`$ decay products. Thus, approximately half the energy lost by protons of energy $`ϵ_p`$ is converted to neutrinos with energy $`\simeq 0.05ϵ_p`$. Eq. 
(12) implies that the spectrum of neutrinos below $`ϵ_{\nu b}^{\mathrm{ob}.}\simeq 10^{17}(\mathrm{\Gamma }/250)^2(ϵ_{\gamma c,\mathrm{keV}}^{\mathrm{ob}.})^{-1}\mathrm{eV}`$ is harder by one power of the energy than the proton spectrum, and by half a power of the energy at higher energy. For a power law differential spectrum of accelerated protons $`n(ϵ_p)\propto ϵ_p^{-2}`$, as expected for Fermi acceleration and which could produce the observed spectrum of ultra-high energy cosmic rays (Waxman 1995b), the differential neutrino spectrum is $`n(ϵ_\nu )\propto ϵ_\nu ^{-\alpha }`$ with $`\alpha =1`$ below the break and $`\alpha =3/2`$ above the break. The energy production rate required to produce the observed flux of ultra-high energy cosmic-rays, assuming that the sources are cosmologically distributed, is (Waxman 1995b) $$E_{CR}^2d\dot{N}_{CR}/dE_{CR}\simeq 10^{44}\mathrm{erg}\mathrm{Mpc}^{-3}\mathrm{yr}^{-1}.$$ (13) If GRB’s are indeed the sources of ultra-high energy cosmic rays, then Eqs. (11,12) imply that the expected neutrino intensity is $$ϵ_\nu ^2\mathrm{\Phi }_\nu \simeq 10^{-10}\frac{f_\pi ^{[19]}}{0.1}\left(\frac{ϵ_\nu ^{\mathrm{ob}.}}{10^{17}\mathrm{eV}}\right)^\beta \mathrm{GeV}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{sr}^{-1},$$ (14) where $`f_\pi ^{[19]}\equiv f_\pi (ϵ_{p,20}^{\mathrm{ob}.}=2)`$ and $`\beta =1/2`$ for $`ϵ_\nu ^{\mathrm{ob}.}>10^{17}\mathrm{eV}`$ and $`\beta =1`$ for $`ϵ_\nu ^{\mathrm{ob}.}<10^{17}\mathrm{eV}`$. The fluxes of all neutrino flavors are similar and given by Eq. (14), $`\mathrm{\Phi }_{\nu _\mu }\simeq \mathrm{\Phi }_{\overline{\nu }_\mu }\simeq \mathrm{\Phi }_{\nu _e}\simeq \mathrm{\Phi }_\nu `$. Eq. (14) is obtained by integrating the neutrino generation rate implied by Eqs. (13) and (11) over cosmological time, under the assumption that the generation rate is independent of cosmic time (Waxman & Bahcall 1999). 
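The broken power law of Eq. (14) is easy to tabulate (a sketch with the fiducial normalization $`f_\pi ^{[19]}=0.1`$):

```python
# Predicted neutrino intensity, Eq. (14): eps^2 * Phi in GeV cm^-2 s^-1 sr^-1,
# with beta = 1 below the 1e17 eV break and beta = 1/2 above it.
def eps2_phi(eps_eV, f_pi19=0.1):
    beta = 1.0 if eps_eV < 1e17 else 0.5
    return 1e-10 * (f_pi19 / 0.1) * (eps_eV / 1e17) ** beta

for e in (1e16, 1e17, 1e18, 1e19):
    print(e, eps2_phi(e))
```

The two slopes join continuously at the break, and the $`ϵ^{1/2}`$ rise above it mirrors the $`f_\pi \propto ϵ_p^{1/2}`$ scaling of Eq. (11).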
If the GRB energy generation rate increases with redshift in a manner similar to the evolution of the QSO luminosity density, which exhibits the fastest known redshift evolution, the expected neutrino flux would be $`\simeq 3`$ times that given in Eq. (14) (Waxman & Bahcall 1999). The neutrino flux is expected to be strongly suppressed at energy $`ϵ_\nu ^{\mathrm{ob}.}>10^{19}`$ eV, since protons are not expected to be accelerated to energy $`ϵ_p^{\mathrm{ob}.}\gg 10^{20}`$ eV. If protons are accelerated to much higher energy, the $`\nu _\mu `$ flux may extend to $`\sim 10^{21}`$ eV. At higher energy the ejecta expansion time $`\mathrm{\Gamma }T`$ is shorter than the pion decay time, leading to strong suppression of the $`\nu _\mu `$ flux due to adiabatic energy loss at $`ϵ_\nu ^{\mathrm{ob}.}>10^{21}T_1(\mathrm{\Gamma }/250)^2`$ eV. Adiabatic energy loss of muons will suppress the $`\overline{\nu }_\mu `$ and $`\nu _e`$ flux at $`ϵ_\nu ^{\mathrm{ob}.}>10^{19}T_1(\mathrm{\Gamma }/250)^2`$ eV. ## 5 Discussion When the expanding fireball of a GRB collides with the surrounding medium, reverse shocks are created that give rise to the observed afterglow by synchrotron radiation. The specific luminosity and energy spectrum describing these processes are given in Eqs. (4)–(6). For typical values of the plasma parameters and fireball Lorentz factor, these equations are consistent with the afterglow observations. The burst GRB 990123 was especially luminous, and also for this burst the relations Eqs. (4)–(6) are consistent with the observation of a reverse shock and the other afterglow phenomena. If protons are accelerated in GRB’s up to energies $`\sim 10^{20}`$ eV, then the expected flux of high energy neutrinos is given by Eqs. (14) and (11). Muon energy loss suppresses the $`\overline{\nu }_\mu `$ and $`\nu _e`$ flux above $`\sim 10^{19}`$ eV, while pion energy loss suppresses the $`\nu _\mu `$ flux only above $`\sim 10^{21}`$ eV. 
Since protons are not expected to be accelerated to $`\gg 10^{20}`$ eV (Waxman 1995a), the energy beyond which the $`\nu _\mu `$ flux is suppressed will likely be determined by the maximum energy of accelerated protons. Measuring the maximum neutrino energy will set a lower limit to the maximum proton energy. The predicted flux is sensitive to the value of the Lorentz factor of the reverse shock (see Eq. 11), but this value is given robustly by Eq. (3) as $`\mathrm{\Gamma }\approx 250`$. Will the ultra-high energy neutrinos predicted in this letter be detectable? The sensitivities of high-energy neutrino detectors have not been determined for ultra-high energy neutrinos whose time of occurrence is known to within $`\sim 10`$ s and whose direction on the sky is known accurately. Special techniques may enhance the detection of GRB neutrinos (see below). Planned $`1\mathrm{km}^3`$ detectors of high energy neutrinos include ICECUBE, ANTARES, NESTOR (Halzen 1999) and NuBE (Roy, Crawford, & Trattner 1999). Neutrinos are detected by observing optical Cherenkov light emitted by neutrino-induced muons. The probability $`P_{\nu \mu }`$ that a neutrino would produce a high energy muon with the currently required long path within the detector is $`P_{\nu \mu }\approx 3\times 10^{-3}(ϵ_\nu /10^{17}\mathrm{eV})^{1/2}`$ (Gaisser, Halzen & Stanev 1995; Gandhi, Quigg, Reno, & Sarcevic 1998). Using (14), the expected detection rate of muon neutrinos is $`\sim 0.06/\mathrm{km}^2\mathrm{yr}`$ (over $`2\pi `$ sr), or $`\sim 3`$ times larger if GRB’s evolve like quasars. GRB neutrinos may be detectable in these experiments because the knowledge of neutrino direction and arrival time may relax the requirement for long muon path within the detector. Air-showers could be used to detect ultra-high energy neutrinos. The neutrino acceptance of the planned Auger detector, $`\sim 10^4\mathrm{km}^3\mathrm{sr}`$ (Parente & Zas 1996), seems too low. 
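As a rough consistency check on the quoted event rate, one can fold the intensity of Eq. (14) into the muon production probability and integrate over a logarithmic energy grid. This is a sketch of ours, not the authors' calculation: the integration limits and grid are our assumptions; only the two normalizations are the ones quoted in the text.

```python
import math

def p_nu_mu(eps_eV):
    # Muon production probability, P ~ 3e-3 (eps/1e17 eV)^{1/2} (text value).
    return 3e-3 * (eps_eV / 1e17) ** 0.5

def eps2_phi(eps_eV):
    # Broken power law of Eq. (14) in GeV cm^-2 s^-1 sr^-1, for f_pi = 0.1.
    beta = 1.0 if eps_eV < 1e17 else 0.5
    return 1e-10 * (eps_eV / 1e17) ** beta

def muon_rate_per_km2_yr(eps_lo=1e16, eps_hi=1e19, n=400):
    """Order-of-magnitude event rate over 2*pi sr: integrate the number flux
    per log energy, (eps^2 * Phi)/eps, weighted by P_nu_mu. Limits assumed."""
    km2_yr = 1e10 * 3.15e7          # cm^2 * s in one km^2 * yr
    total = 0.0
    dlne = math.log(eps_hi / eps_lo) / n
    for i in range(n):
        eps = eps_lo * math.exp((i + 0.5) * dlne)   # midpoint of the log bin
        flux_per_ln = eps2_phi(eps) / (eps / 1e9)   # eV -> GeV for the division
        total += flux_per_ln * p_nu_mu(eps) * dlne
    return total * 2 * math.pi * km2_yr

print(muon_rate_per_km2_yr())
```

With these inputs the integral lands within a factor of a few of the quoted $`\sim 0.06/\mathrm{km}^2\mathrm{yr}`$, as it should for an order-of-magnitude estimate.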
The effective area of proposed space detectors (Linsley 1985; Takahashi 1995) may exceed $`10^6\mathrm{km}^2`$ at $`ϵ_\nu >2\times 10^{19}`$ eV, detecting several tens of GRB correlated events per year, provided that the neutrino flux extends to $`ϵ_\nu >2\times 10^{19}`$ eV. Since, however, the GRB neutrino flux is not expected to extend well above $`ϵ_\nu \sim 10^{19}`$ eV, and since the acceptance of space detectors decreases rapidly below $`10^{19}`$ eV, the detection rate of space detectors would depend sensitively on their low energy threshold. As explained in Waxman & Bahcall (1997), $`\nu _\tau `$’s are not expected to be produced in the GRB. However, the strong mixing between $`\nu _\mu `$ and $`\nu _\tau `$ favored by Super-Kamiokande observations of atmospheric neutrinos indicates that the flux of $`\nu _\mu `$ and $`\nu _\tau `$ should be equal at Earth. This conclusion would not hold if the less favored alternative of $`\nu _\mu `$ to $`\nu _{\mathrm{sterile}}`$ mixing occurs. The decay of $`\pi ^0`$’s produced in photo-meson interactions would lead to the production of $`\sim 10^{19}`$ eV photons. For the photon luminosity and spectrum given in Eqs. (4–6), the fireball optical depth for pair production is greater than unity for $`ϵ_\gamma >10`$ GeV. Thus, the ultra-high energy photons would be degraded and will escape the fireball as multi-GeV photons. Since in order for GRB’s to be the sources of ultra-high energy protons similar energy should be produced in $`\sim 1`$ MeV photons and $`\sim 10^{20}`$ eV protons (Waxman 1995b), the expected multi-GeV integrated luminosity is $`\sim 10\%`$ of the $`1`$ MeV integrated luminosity, i.e. $`\sim 10^{-6}\mathrm{erg}/\mathrm{cm}^2`$. Such multi-GeV emission has been detected in several GRB’s on time scales of $`>10`$ s following the GRB, and may be common (Dingus 1995). This is not, however, conclusive evidence for proton acceleration to ultra-high energy. 
For the parameters adopted in this paper, inverse-Compton scattering of synchrotron photons may also produce the observed multi-GeV photons. Wider spectral coverage, from optical to $`>10`$ GeV photons, is required to determine whether the observed multi-GeV emission on $`\sim 10`$ s time scales is due to inverse-Compton scattering or $`\pi ^0`$ decay. We note here that multi-GeV photon production by synchrotron emission of $`10^{20}`$ eV protons accelerated at the highly-relativistic external shock driven by the fireball into its surrounding medium has been discussed in Vietri (1997b) and Böttcher & Dermer (1998). However, protons are not likely to be accelerated to such energy at the external shock (Gallant & Achterberg 1999), and, moreover, even if acceleration is possible, the fraction of proton energy lost by synchrotron emission at the radii where external shocks occur is $`\ll 1`$ \[see discussion following Eq. (8)\], and hence the expected flux is much smaller than the inverse-Compton or $`\pi ^0`$ decay flux. JNB was supported in part by NSF PHY95-13835. EW was supported in part by BSF Grant 9800343, AEC Grant 38/99 and MINERVA Grant.
# 1 Introduction The $`Q`$-state Potts model in two dimensions is very fertile ground for the investigation of phase transitions and critical phenomena. For $`Q=2`$ and 3 there is a second order phase transition between $`Q`$ ferromagnetic ordered states and a disordered state. For $`Q=4`$ the transition is also second order, but the usual critical behavior is modified by strong logarithmic corrections. For $`Q>4`$ the transition is first order, with $`Q=5`$ exhibiting weak first order behavior and a very large correlation length at the critical point. The Hamiltonian for the Potts model is $$H=J\underset{<i,j>}{\sum }(1-\delta (q_i,q_j)),$$ $`(1)`$ where $`J`$ is the coupling constant and $`0\le q_i\le Q-1`$ are the Potts spin variables. The energies (in units of $`J`$) are evenly spaced and take on integer values in the range $`0\le E\le N_b`$ where $`N_b`$ is the number of bonds on the lattice. Here we will consider simple square lattices with periodic, cylindrical, and self-dual boundary conditions. In addition to the energy there are $`Q`$ order parameters $$M_q=\underset{k}{\sum }\delta (q_k,q),$$ $`(2)`$ which for the Ising model is simply related to the magnetization. The possible values of the order parameter are also integers, $`0\le M\le N_s`$, where $`N_s`$ is the number of sites on the lattice. If we denote the number of states with energy $`E`$ by $`\mathrm{\Omega }(E)`$, then the canonical partition function for the $`Q`$-state Potts model is $$Z_Q(y)=\underset{E}{\sum }\mathrm{\Omega }_Q(E)y^E,$$ $`(3)`$ where $`y=e^{-\beta J}`$. From (3) it is clear that $`Z`$ is simply a polynomial in $`y`$, and the analytic structure of $`Z`$ is completely determined by the zeros of this polynomial, as first discussed by Lee and Yang. If we wish to study the partition function in an external field which couples to the order parameter, (2), then one needs to enumerate the number of states with fixed energy $`E`$ and fixed order parameter $`M`$, $`\mathrm{\Omega }(E,M)`$. 
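For very small lattices, $`\mathrm{\Omega }(E)`$ can be tabulated by brute-force enumeration, which makes the polynomial structure of Eq. (3) explicit. The sketch below is ours (the $`3\times 3`$ periodic lattice and all identifiers are chosen purely for illustration; a production calculation would use the transfer-matrix method discussed later):

```python
from itertools import product

def potts_density_of_states(Q, L):
    """Brute-force Omega(E) for the Q-state Potts model of Eq. (1) on an L x L
    periodic square lattice; E counts unsatisfied bonds (in units of J)."""
    N = L * L
    # Each site contributes a bond to its right and upper periodic neighbor.
    bonds = [(i, (i % L + 1) % L + (i // L) * L) for i in range(N)] + \
            [(i, (i + L) % N) for i in range(N)]
    omega = {}
    for q in product(range(Q), repeat=N):
        E = sum(1 for i, j in bonds if q[i] != q[j])
        omega[E] = omega.get(E, 0) + 1
    return omega

omega = potts_density_of_states(Q=3, L=3)

def Z(y):
    # Eq. (3): the partition function as a polynomial in y = exp(-beta*J).
    return sum(w * y**E for E, w in omega.items())
```

By construction $`\mathrm{\Omega }(0)=Q`$ (the $`Q`$ uniform ground states) and the weights sum to $`Q^{N_s}`$.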
The partition function is again a polynomial given by $$Z_Q(y,x)=\underset{E}{\sum }\underset{M}{\sum }\mathrm{\Omega }_Q(E,M)x^My^E,$$ $`(4)`$ where $`x=e^{\beta h}`$, and $`h`$ is the external field. As discussed in the second of Lee and Yang’s two famous papers, the zeros of the partition function for the Ising model in the complex-$`x`$ plane all lie on the unit circle. For finite systems the analyticity of $`Z`$ in both $`x`$ and $`y`$ ensures that no zeros lie on the real axis. However, according to Lee and Yang, in the thermodynamic limit the zeros of the partition function in either the complex-$`x`$ or -$`y`$ planes approach arbitrarily close to the real axis at the critical point, leading to nonanalytic behavior in the partition function. If the zeros lie on a one-dimensional locus in the thermodynamic limit one can define the density of zeros (per site) $`g(\theta )`$ in terms of which the free energy per site is $$f(y)=\int g(\theta )\mathrm{log}[y-y_0(\theta )]𝑑\theta .$$ $`(5)`$ In the critical region the singular part of the free energy is a homogeneous function of the reduced temperature, $`y-y_c`$, from which it follows that $`g(\theta )`$ must also be a homogeneous function for small $`\theta `$ of the form $$g(\theta )=b^{-d+y_T}g(\theta b^{y_T}).$$ $`(6)`$ This in turn implies that $`g(\theta )`$ vanishes as $`\theta ^\kappa `$ as $`\theta `$ goes to zero, where $`\kappa =(d-y_T)/y_T`$. On the other hand, if the system has a first order transition, $`\kappa =0`$ and the discontinuity in the first derivative of $`f`$, the latent heat, is given by $`L=2\pi g(0)`$. Exactly parallel arguments hold in the complex-$`x`$ plane if one replaces the temperature exponent $`y_T`$ by the magnetic exponent $`y_h`$. In a recent paper Creswick has shown how the numerical transfer matrix of Binder can be generalized to allow the evaluation of the density of states $`\mathrm{\Omega }(E)`$ and the restricted density of states, $`\mathrm{\Omega }(E,M)`$, for the $`Q`$-state Potts model. 
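The circle theorem is easy to verify numerically for a small system: build the coefficients $`c_M=\sum _E\mathrm{\Omega }(E,M)y^E`$ of Eq. (4) by enumeration and find the roots of the resulting polynomial in $`x`$. This sketch is ours (a $`3\times 3`$ periodic Ising lattice with an arbitrarily chosen $`\beta J=0.5`$):

```python
from itertools import product
import numpy as np

def ising_omega_EM(L):
    """Omega(E, M) for the Q=2 Potts (Ising) model on an L x L periodic lattice:
    E = number of unsatisfied bonds, M = number of sites in state q = 0."""
    N = L * L
    bonds = [(i, (i % L + 1) % L + (i // L) * L) for i in range(N)] + \
            [(i, (i + L) % N) for i in range(N)]
    omega = {}
    for q in product(range(2), repeat=N):
        key = (sum(1 for i, j in bonds if q[i] != q[j]), q.count(0))
        omega[key] = omega.get(key, 0) + 1
    return omega, N

omega, N = ising_omega_EM(3)
y = np.exp(-0.5)                       # y = e^{-beta J} with beta J = 0.5 > 0
coeff = np.zeros(N + 1)                # Z as a polynomial in x, Eq. (4)
for (E, M), w in omega.items():
    coeff[M] += w * y**E
zeros = np.roots(coeff[::-1])          # Lee-Yang zeros in the complex-x plane
print(np.abs(zeros))                   # all moduli equal 1 (circle theorem)
```

The moduli come out equal to one to machine precision for any ferromagnetic temperature ($`0<y<1`$), as the theorem guarantees.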
Similar calculations of $`\mathrm{\Omega }(E)`$ have been carried out by Bhanot in both two and three dimensions for the $`Q=2`$ and $`Q=3`$ Potts models, and Pearson for the $`4^3`$ Ising model. Beale has used the exact solution for the partition function of the Ising model on finite square lattices to calculate $`\mathrm{\Omega }(E)`$. Bhanot’s method is far more complex than the $`\mu TM`$ and requires essentially the same computer resources. Pearson’s method is only applicable to lattices with very few spins (e.g. 64) and Beale’s approach makes essential use of the exact solution for the Ising model and so can not be used for other values of $`Q`$ or in three dimensions. The $`\mu TM`$ is quite general and the algorithm itself requires less than 100 lines of code. In addition, it is straightforward to generalize the $`\mu TM`$ to count states with fixed energy and magnetization, or any other function of the Potts variables. ## 2 Results The partition function for the Potts model maps into itself under the dual transformation $$u\to \frac{1}{u},$$ $`(7)`$ where $$u=\frac{y^{-1}-1}{\sqrt{Q}}.$$ $`(8)`$ In the complex $`u`$-plane a subset of the zeros of the partition function tend to lie on a unit circle (which maps into itself under (7)); however, cylindrical and periodic boundary conditions are not self-dual, and this causes the zeros to move slightly off the unit circle. For this reason we have modified the $`\mu TM`$ for the self-dual lattice introduced by Wu et al., so that the zeros do indeed lie on the unit circle and therefore are simply parameterized by the phase $`\theta `$. In the complex $`x`$-plane the zeros of the partition function for the Ising model are guaranteed to lie on the unit circle by the circle theorem of Lee and Yang, irrespective of the boundary conditions. 
Given that the zeros are well parameterized by a single variable, we can define the density of zeros for finite lattices as $$g(\frac{1}{2}(\theta _{k+1}+\theta _k))=\frac{1}{N}\frac{1}{\theta _{k+1}-\theta _k}.$$ $`(9)`$ In Fig.1 we show the density of zeros in the complex-$`x`$ plane for the Ising model at $`y=y_c`$ and $`y=0.5y_c`$. Note that at the critical temperature $`g`$ tends to zero as one approaches the real axis, but below the critical temperature it approaches a constant $$2\pi g(0,y)=m_0(y),$$ $`(10)`$ where $`m_0(y)`$ is the spontaneous magnetization. We have applied finite-size scaling to the density of zeros calculated in this way and find excellent agreement with the exact solution for the magnetization except close to the critical point where crossover complicates the FSS analysis. There is reason to hope that a more sophisticated FSS analysis will improve these results substantially. Finally, in Fig.2 we show the density of zeros in the complex $`y`$-plane for the 3-state Potts model, which is known to have a second-order transition at the critical point. ## 3 Conclusions The $`\mu TM`$ and its extensions offer a new way of obtaining exact information about finite two dimensional lattices. While the method is easily extended to three dimensions, memory requirements limit its use to the 2-state model on $`4^2\times L`$ lattices. However, Monte Carlo techniques have been developed which have no such limitations and show great promise in extending many of these results to larger lattices. Preliminary studies indicate that while most of the Yang-Lee zeros are very sensitive to the exact value of $`\mathrm{\Omega }(E)`$, the edge singularity and the next two nearest the critical point are not. In addition, it is possible to generalize the $`\mu TM`$ to calculate the microcanonical distribution of any function of the Potts variables, and in particular the order parameter and correlation function.
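The finite-lattice estimator of Eq. (9) is just the reciprocal of the local phase spacing. A minimal sketch (ours; the equally spaced test phases are artificial and serve only to exercise the estimator):

```python
import numpy as np

def density_of_zeros(thetas, n_sites):
    """Eq. (9): g evaluated at the midpoint of consecutive zero phases is
    1/(N * spacing), with N the number of lattice sites."""
    thetas = np.sort(np.asarray(thetas, dtype=float))
    mids = 0.5 * (thetas[1:] + thetas[:-1])
    g = 1.0 / (n_sites * np.diff(thetas))
    return mids, g

# Sanity check: equally spaced phases give a constant estimated density.
phases = np.linspace(0.1, np.pi, 10)
mids, g = density_of_zeros(phases, n_sites=9)
print(g)
```

Feeding in the phases obtained from the $`\mu TM`$ zeros (rather than this synthetic set) would reproduce curves like those in Figs. 1 and 2.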
# Equivalent of a Thouless energy in lattice QCD Dirac spectra ## Abstract Random matrix theory (RMT) is a powerful statistical tool to model spectral fluctuations. In addition, RMT provides efficient means to separate different scales in spectra. Recently RMT has found application in quantum chromodynamics (QCD). In mesoscopic physics, the Thouless energy sets the universal scale for which RMT applies. We try to identify the equivalent of a Thouless energy in complete spectra of the QCD Dirac operator with staggered fermions and $`SU_c(2)`$ lattice gauge fields. Comparing lattice data with RMT predictions we find deviations which allow us to give an estimate for this scale. In recent years, RMT has been successfully introduced into the study of certain aspects of quantum chromodynamics (QCD). The interest focuses on the spectral properties of the Euclidean Dirac operator. For the massless Dirac operator $`\overline{)}D[U]`$ with staggered fermions and gauge fields $`U\in SU_c(2)`$ we solve numerically for each configuration the eigenvalue equation $$i\overline{)}D[U]\psi _k=\lambda _k[U]\psi _k.$$ (1) The distribution of the gauge fields is given by the Euclidean partition function. Examples of the spectra are shown in Fig. 1, where the average level densities for $`16^4`$ and $`10^4`$ lattices are shown. It should be pointed out that we have $`V/2=32768`$ and $`V/2=5000`$ distinct positive eigenvalues per configuration, respectively, so that there are millions of eigenvalues at our disposal. As the gauge fields vary over the ensemble of configurations, the eigenvalues fluctuate about their mean values. Chiral random matrix theory models the fluctuations of the eigenvalues in the microscopic limit, i.e. near $`\lambda =0`$, as well as in the bulk of the spectrum . Our main question is to what scales RMT applies in QCD. In disordered systems the Thouless energy $`E_c`$ determines the scale on which fluctuations are predicted by RMT. Beyond this scale deviations occur. 
In QCD an equivalent of the Thouless energy is $`\lambda _{\mathrm{RMT}}`$ . In the microscopic region it scales as $$\lambda _{\mathrm{RMT}}/D\propto \sqrt{V},$$ (2) $`V`$ is the lattice volume, $`D`$ is the mean level spacing. As argued in a corresponding effect should also be seen in the bulk of the spectrum. The staircase function $`N(\lambda )`$ gives the number of levels with energy $`\le \lambda `$. In many cases it can be separated into $$N(\lambda )=N_{\mathrm{ave}}(\lambda )+N_{\mathrm{fluc}}(\lambda ).$$ (3) $`N_{\mathrm{ave}}(\lambda )`$ is determined by gross features of the system. $`N_{\mathrm{fluc}}(\lambda )`$ contains the correlations to be analyzed. RMT makes predictions for the fluctuations on the scale of the mean level spacing. The influence of the overall level density must be removed by numerically unfolding the spectra through the mapping $`\lambda _i\to x_i=N_{\mathrm{ave}}(\lambda _i)`$. For the new sequence we then have $`\widehat{N}_{\mathrm{ave}}(x)=x`$, i.e. the mean level spacing is unity everywhere, $`1/\rho _{\mathrm{ave}}(x)=1`$, where $`\rho _{\mathrm{ave}}(x)=d\widehat{N}_{\mathrm{ave}}(x)/dx`$. The extraction of $`N_{\mathrm{ave}}(\lambda )`$ is highly non-trivial, because little is known analytically about the level density of QCD spectra. However, there are several phenomenological unfolding procedures, e.g. ensemble unfolding, where one divides the energy range in $`m`$ bins of width $`\mathrm{\Delta }\lambda `$ and averages the density $`\rho (\lambda ,\lambda +\mathrm{\Delta }\lambda )`$ for each bin over all configurations. Then the staircase function $`N_{\mathrm{ave}}(\lambda )=\sum _{i=1}^m\rho (\lambda _i,\lambda _i+\mathrm{\Delta }\lambda )\mathrm{\Delta }\lambda `$, with $`\lambda _m=\lambda `$, is calculated. Furthermore there is configuration unfolding, where $`N(\lambda )`$ is fitted for each configuration to a polynomial of degree $`n`$. 
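Ensemble unfolding can be sketched in a few lines: pool the eigenvalues of all configurations, use the pooled staircase as $`N_{\mathrm{ave}}`$, and map each spectrum through it. The toy ensemble below is ours (random symmetric matrices stand in for Dirac spectra); it only checks that the unfolded spectra have unit mean level spacing:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_spectrum(n):
    # Toy configuration: eigenvalues of a GOE-like random symmetric matrix.
    a = rng.normal(size=(n, n))
    return np.linalg.eigvalsh((a + a.T) / 2.0)

def ensemble_unfold(spectra):
    """Ensemble unfolding sketch: x_i = N_ave(lambda_i), where N_ave is the
    staircase averaged over configurations (estimated from pooled eigenvalues)."""
    pooled = np.sort(np.concatenate(spectra))
    n_cfg = len(spectra)
    return [np.searchsorted(pooled, s) / n_cfg for s in spectra]

spectra = [toy_spectrum(100) for _ in range(50)]
unfolded = ensemble_unfold(spectra)
spacings = np.concatenate([np.diff(x) for x in unfolded])
print(spacings.mean())   # close to 1: unit mean level spacing after unfolding
```

Configuration unfolding would instead fit each staircase to a smooth polynomial; the two procedures differ precisely in how they split Eq. (3) into average and fluctuating parts.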
Strong coupling expansions for $`SU_c(2)`$ with staggered fermions and $`1/N_c`$ expansion of the QCD level density motivate this ansatz. For technical details and further unfolding procedures see . Whatever approach one uses, the mean number of rescaled levels in an interval of length $`L`$ in units of the mean level spacing should equal $`L`$. This assures that the unfolded spectrum has mean level density unity. We compared RMT predictions for two-point correlators with lattice data for two quantities. First, the level number variance, which measures the deviation of the number of eigenvalues $`n_\alpha (L)`$ in an interval $`[\alpha ,\alpha +L]`$ from the expected mean number $`L`$ $$\mathrm{\Sigma }^2(L)=\overline{<(L-n_\alpha (L))^2>}.$$ (4) $`<\mathrm{}>`$ is the spectral average, $`\overline{(\mathrm{})}`$ the ensemble average. Thus, an interval of length $`L`$ contains on average $`L\pm \sqrt{\mathrm{\Sigma }^2(L)}`$ levels. For uncorrelated Poisson spectra $`\mathrm{\Sigma }^2(L)=L`$. RMT predicts stronger correlations: $`\mathrm{\Sigma }^2(L)\propto \mathrm{log}L`$. The second two-point correlator we considered is the spectral rigidity, defined as the least square deviation of $`N(\lambda )`$ from the straight line $$\mathrm{\Delta }_3(L)=\frac{1}{L}\mathrm{min}_{A,B}\int _\alpha ^{\alpha +L}𝑑\xi (N(\xi )-A\xi -B)^2.$$ (5) For this quantity RMT predicts $`\mathrm{\Delta }_3(L)\propto \mathrm{log}L`$. In Fig. 2 the RMT results for these statistical measures are compared with lattice data. The wealth of data allows us to analyze higher order correlations. Again we see good agreement (see Figs. 11 and 12 in ). With ensemble unfolding of the data we obtain for $`\mathrm{\Sigma }^2(L)`$ the curves plotted in Fig. 3. 
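The Poisson baseline $`\mathrm{\Sigma }^2(L)=L`$ can be checked directly on a synthetic unfolded spectrum. This sketch is ours (window placement and sample sizes are arbitrary choices, not part of the lattice analysis):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigma2(levels, L, n_windows=20000):
    """Level number variance, Eq. (4): variance of the level count in windows
    of length L placed along an unfolded (unit mean spacing) spectrum."""
    starts = rng.uniform(levels[0], levels[-1] - L, n_windows)
    counts = np.searchsorted(levels, starts + L) - np.searchsorted(levels, starts)
    return counts.var()

# Uncorrelated (Poisson) spectrum: unit-mean exponential spacings, so
# Sigma^2(L) = L, in contrast to the slow ~log L growth predicted by RMT.
poisson = np.cumsum(rng.exponential(1.0, size=200000))
print(sigma2(poisson, L=2.0))   # ~2
```

Running the same estimator on unfolded lattice spectra, rather than this Poisson toy, is what produces curves like those in Fig. 3.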
Independently of the spectral region considered and of $`\beta `$ we find that the point where the deviation sets in scales as $$\lambda _{\mathrm{RMT}}/D\approx 0.3\sqrt{V}.$$ (6) This should be compared with the result obtained in for the microscopic region $$\lambda _{\mathrm{RMT}}/D\approx 0.3\mathrm{\dots }0.7\sqrt{V}.$$ (7) With polynomial unfolding the scaling law (6) vanishes. The deviation point appears to be the same for different lattice sizes (Fig. 4). In order to find out if these deviations are due to a Thouless energy, we performed a Fourier analysis of the oscillations of the staircase function . From it we concluded that the deviations of the data obtained with polynomial unfolding are due to a non-polynomial-like part in the average level density and not to an equivalent of the Thouless energy. In conclusion, analyzing some of the statistical properties of complete eigenvalue spectra of the Dirac operator for staggered fermions and $`SU_c(2)`$ gauge fields for various couplings and lattice volumes, we find the scaling behavior of the equivalent of the Thouless energy. Using ensemble unfolding, we have $`\lambda _{\mathrm{RMT}}/D=C\sqrt{V}`$. The constant is approximately $`C\approx 0.3`$, which is compatible with the result obtained in for the microscopic region of the spectrum, where the scaling (7) was found. By unfolding each configuration separately, we do not see any scaling of this type. Hence the Thouless energy is due to fluctuations in the ensemble.
# Bipolar molecular outflows driven by hydromagnetic protostellar winds ## 1. Introduction As they form, young stars emit powerful winds whose mechanical luminosity amounts to a fair fraction of the stars’ binding energy. These winds are often observed as jets, and it is generally believed that a toroidal magnetic field is responsible for their collimation (Benford 1978, Blandford & Payne 1982). When a protostellar wind strikes the ambient medium, a bipolar molecular outflow is produced. These outflows have been credited with the support (Norman & Silk (1980), McKee (1989), Bertoldi & McKee (1996)) as well as the disruption (e.g., Bally et al. (1999)) of molecular clouds and the dense clumps within them that produce star clusters. We seek a model of protostellar outflows that is sufficiently detailed to permit a quantitative study of both their supportive and destructive roles in the lives of their parent clouds. Bipolar molecular outflows display a number of common features that must be reproduced in any viable model. As summarized by Lada & Fich (1996), these include a nearly linear position-velocity relation (a “Hubble law”) and a mass-velocity relation $`dM/dv\propto v^{-\mathrm{\Gamma }}`$ with $`\mathrm{\Gamma }\approx 1.8`$. It is debated whether these features result primarily from turbulent mixing in the region affected by the jet (e.g., Raga et al. (1993), Stahler (1994)), or from the dynamics of a shocked shell bounding the wind cocoon (e.g., Shu et al. (1991), Masson & Chernin 1992, 1993). In this paper we investigate shells of ambient material set into motion by hydromagnetic protostellar winds, and demonstrate that these naturally produce the primary characteristics of observed outflows. (We do not address traces of high-velocity molecular gas sometimes found within these shells.) Our analysis of the wind force distribution (§2) follows Shu et al. (1995) and the suggestion made by Ostriker (1997); our investigation of shell motion (§3) follows Shu et al. 
(1991), Masson & Chernin (1993), and Li & Shu (1996). We generalize these results and show (§4), contrary to the conclusions of Masson & Chernin, that a combined model can reproduce the observed mass-velocity relationship as well as the position-velocity law. ## 2. Force distributions from disk winds and X-winds We seek the distribution of wind momentum flux on scales of the dense clumps in molecular clouds, far larger than any scale associated with an accretion disk whose wind produces the outflow. The angular distribution of the wind momentum injection rate $`\dot{p}_w`$ at time $`t`$ can be written as $$\frac{d\dot{p}_w(t)}{d\mathrm{\Omega }}=r^2\rho _wv_w^2\equiv \frac{\dot{p}_w(t)}{4\pi }P(\mu ),$$ (1) where $`\mu =\mathrm{cos}\theta `$ labels directions from the outflow axis. We wish to determine $`P(\mu )`$, the normalized force distribution, which we assume is constant in time. Shu et al (1995) have shown that $`\rho _w\propto 1/(r\mathrm{sin}\theta )^2`$ is a good approximation for X-winds. Since the wind velocity $`v_w`$ is approximately the same on different streamlines in this model, it follows that $`\rho _wv_w^2\propto 1/(r\mathrm{sin}\theta )^2`$. In fact, this should be a reasonably good approximation for any radial hydromagnetic wind that has expanded to a large distance, as the following heuristic argument indicates. Such winds expand more rapidly than the fast magnetosonic velocity $`c_f\equiv B/(4\pi \rho _w)^{1/2}`$, where we have assumed that thermal pressure is negligible. At large distances, the field wraps into a spiral with $`B\approx B_\varphi `$. First, consider the flow along streamlines. Flux conservation gives $`2\pi r\mathrm{\Delta }rB_\varphi =`$ const in radial flow. Since $`v_w`$ is about constant at large distances, it follows that $`\mathrm{\Delta }r\approx `$ const, so that $`B_\varphi \propto 1/r`$. Since $`\rho _w\propto 1/r^2`$ in a constant velocity, radial wind, it follows that $`v_w/c_f`$ is approximately constant along a streamline at large distances. 
The flow approximates an isothermal wind, for which $`(v_w/c_f)^2`$ increases logarithmically with distance from unity near the source. This implies that at large distances $`(v_w/c_f)^2`$ is approximately a constant across streamlines as well, so that $`\rho _wv_w^2\propto B_\varphi ^2`$. But note that the field must become approximately force free at large distances: balancing the tension $`B_\varphi ^2/(4\pi \varpi )`$, where $`\varpi \equiv r\mathrm{sin}\theta `$ is the cylindrical radius, against the pressure gradient $`(1/8\pi )\partial B_\varphi ^2/\partial \varpi `$ gives $`B_\varphi \propto 1/\varpi =1/(r\mathrm{sin}\theta )`$. We conclude that $`\rho _wv_w^2\propto B_\varphi ^2\propto 1/(r\mathrm{sin}\theta )^2`$ is a general characteristic of radial hydromagnetic winds. This argument can be made more precise using a result due to Ostriker (1997). Suppose the wind arises from a Keplerian disk, where the wind density varies with initial radius $`\varpi _0`$ as $`\rho _0\propto \varpi _0^{-q}`$ and the Alfvén velocity varies with orbital velocity at the disk. The value $`q=3/2`$ corresponds to the solution of Blandford & Payne (1982), whereas values $`0.5<q<1`$ were considered by Ostriker (1997). Assume that each streamline expands to $`\varpi \gg \varpi _0`$, so that the wind is significantly super-Alfvénic. The conservation of specific energy, angular frequency, and mass flux along streamlines, along with the “isorotation” relation between $`𝐁`$ and $`𝐯`$, give $`\rho _wv_w^2=C(r)\varpi _0^{(1-q)/2}\varpi ^{-2}`$, where $`C(r)`$ is a slowly varying function of $`r`$ (Ostriker 1997). We see that the heuristic argument above is valid provided that the disk radius $`\varpi _0`$, which varies from $`\varpi _{0,\mathrm{in}}`$ to $`\varpi _{0,\mathrm{out}}`$, say, has a smaller range of variation than $`\varpi `$, which varies from an innermost radius $`r\theta _{\mathrm{core}}`$ to $`r`$. 
More precisely, if there is a power law relation between $`\varpi _0`$ and $`\mathrm{sin}\theta `$, then $`P(\mu )\propto r^2\rho _wv_w^2`$ gives $$P(\mu )\propto (\mathrm{sin}\theta )^{-2(1+ϵ)};ϵ\equiv \left(\frac{q-1}{4}\right)\frac{\mathrm{ln}(\varpi _{0,\mathrm{in}}/\varpi _{0,\mathrm{out}})}{\mathrm{ln}(\theta _{\mathrm{core}})},$$ (2) so we recover our earlier result if $`\theta _{\mathrm{core}}\lesssim \varpi _{0,\mathrm{in}}/\varpi _{0,\mathrm{out}}`$, which allows the flow to be quasi-radial. For an X-wind, $`\varpi _{0,\mathrm{in}}=\varpi _{0,\mathrm{out}}`$ and the heuristic result should be quite accurate. Conditions at the axis set $`\theta _{\mathrm{core}}`$. Although the inner boundary is generally not considered in disk wind models, the Shu et al. (1995) theory posits a core of open field lines from the pole of the accreting star. The balance of magnetic pressure and tension in the fiducial X-wind model gives $`\varpi _{\mathrm{core}}=2.5(1+0.18\mathrm{log}_{10}r_{\mathrm{pc}})\mathrm{AU}`$, or $`\theta _{\mathrm{core}}=1.2\times 10^{-5}r_{\mathrm{pc}}^{-1}(1+0.18\mathrm{log}_{10}r_{\mathrm{pc}})`$, at a distance of $`r_{\mathrm{pc}}`$ parsecs. We have evaluated the accuracy of equation (2) by calculating the wind force distribution analytically using the method outlined by Shu et al. (1995), as Ostriker (1997) suggested. We present this calculation in Matzner (1999), where we find that equation (2) is a good approximation at all angles more than fifteen degrees from the equator, and best matches the actual solution between one and ten degrees from the outflow axis. The fiducial X-wind model (Shu et al. 1995) has $`P(\mu )\mathrm{sin}^2\theta `$ greater by $`40\%`$ toward the equator, for reasons not considered in equation (2); this corresponds to $`ϵ\simeq 1/50`$ within $`10^{\circ }`$ of the axis at a distance of $`0.1\mathrm{pc}`$. 
Although the approximation $`P(\mu )\propto (\mathrm{sin}\theta )^{-2(1+ϵ)}`$ is valid for ideal axisymmetric winds, we expect the force distribution to flatten within some angle $`\theta _0>\theta _{\mathrm{core}}`$ in reality. This flattening could be produced by any of the mechanisms thought to create Herbig-Haro objects, e.g., jet precession, internal shocks from a fluctuating wind velocity, or the magnetic kink instability. Assuming that $`ϵ`$ is negligible and that $`\theta _0\ll 1`$, we can therefore approximate the force distribution of a magnetized protostellar wind as $$P(\mu )\simeq \frac{1}{\mathrm{ln}(2/\theta _0)\left(1+\theta _0^2-\mu ^2\right)},$$ (3) where the prefactor assures $`\int _0^1P(\mu )𝑑\mu =1`$. Outflows may require a larger value of $`\theta _0`$ than appropriate for winds themselves, if mixing between sectors (neglected here) dilutes the momentum on axis; the current theory will still apply, with this larger $`\theta _0`$. However, so long as $`\theta _0\ll 1`$, the wind force is tightly concentrated along the axis: The formation of jets is an inevitable consequence of a hydromagnetic wind. ## 3. Shells driven by protostellar winds We shall now explore the structure and motion of a shell of ambient material struck by a wind with the force distribution given by equation (3). Following Shu et al. (1991), Masson & Chernin (1992), and Li & Shu (1996), we idealize the swept-up shell as thin and momentum-conserving. This is justified because both shocks are radiative for protostellar wind velocities (Koo & McKee 1992a; however, see §5 for a consideration of magnetic pressure). We will also adopt the assumption that the flow is entirely radial, so that mass and momentum are conserved in each angular sector and there is no relative motion of the shocked fluids. 
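The normalization of Eq. (3) can be checked in closed form: $`\int _0^1𝑑\mu /(1+\theta _0^2-\mu ^2)`$ equals $`\mathrm{artanh}(1/a)/a`$ with $`a=\sqrt{1+\theta _0^2}`$, which tends to $`\mathrm{ln}(2/\theta _0)`$ for $`\theta _0\ll 1`$. A minimal sketch (ours; function names and the sample $`\theta _0`$ are illustrative):

```python
import math

def wind_force_distribution(mu, theta0):
    # P(mu) of Eq. (3): flattened within ~theta0 of the axis, ~sin^-2 outside.
    return 1.0 / (math.log(2.0 / theta0) * (1.0 + theta0**2 - mu**2))

def norm_integral(theta0):
    """Closed form of int_0^1 dmu / (1 + theta0^2 - mu^2) = artanh(1/a)/a,
    a = sqrt(1 + theta0^2); tends to ln(2/theta0) for theta0 << 1."""
    a = math.sqrt(1.0 + theta0**2)
    return math.atanh(1.0 / a) / a

theta0 = 0.01
print(norm_integral(theta0) / math.log(2.0 / theta0))   # ~1, so int P dmu ~ 1
```

The force near the axis exceeds the equatorial value by $`\sim \theta _0^{-2}`$, which is the quantitative sense in which the wind is jet-like.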
A shell is driven by a “heavy” wind if less ambient material than wind material has been swept up; such shells travel at nearly the wind velocity, and the crossing time of the wind is therefore comparable to the outflow age. Alternatively, a shell driven by a “light” wind is one that has swept up more ambient gas, and has decelerated significantly. In the limit of a very light wind, both the wind’s mass and its flight time can be neglected; this limit is approached rapidly once a comparable mass has been swept up (Koo & McKee 1992b). Molecular outflows expand five to twenty times more slowly than their driving winds, and are comparably more massive. We may therefore neglect the wind’s mass and flight time, and integrate the equation of momentum conservation in each direction $`\mu `$, $`dp_w/d\mathrm{\Omega }=v_s\partial M_a(R_s,\mu )/\partial \mathrm{\Omega }`$, where $`M_a`$ is the ambient mass inside the shell radius $`R_s`$. We assume that the ambient gas has a density $`\rho _a=\rho _{01}r^{-k_\rho }Q(\mu )`$, where $`\rho _{01}`$ is a constant and the angular factor $`Q(\mu )`$ is normalized so that $`\int _0^1Q(\mu )𝑑\mu =1`$. We find that the shell radius is $$R_s^{4-k_\rho }=\frac{(4-k_\rho )(3-k_\rho )P(\mu )}{4\pi \rho _{01}Q(\mu )}\int _0^tp_w(t^{\prime })𝑑t^{\prime }.$$ (4) If the wind momentum is a power law in time, $`p_w\propto t^{\eta _{\mathrm{in}}}`$, then the shell velocity is given by $`v(\mu )=\eta R_s/t`$, where $$\eta \equiv \partial \mathrm{ln}R_s(\mu ,t)/\partial \mathrm{ln}t=(\eta _{\mathrm{in}}+1)/(4-k_\rho ).$$ (5) Equation (4) shows that the shell expands self-similarly, regardless of the wind history: its radial and velocity structures are fixed, while its scale expands as $`\int p_w𝑑t`$ increases. Self-similarity is expected, because we have chosen a scale-free medium; other radial scales, such as the scale of the light-heavy wind transition, the wind collimation scale, and the cooling length, are all small compared to a typical outflow. Shu et al. 
(1991), Masson & Chernin (1992) and Li & Shu (1996) have all considered a steady wind ($`\eta _{\mathrm{in}}=1`$) and an ambient distribution appropriate for pre-stellar cores at the point of collapse: $`k_\rho =2`$, and $`Q(\mu )`$ larger toward the equator, because of magnetic or rotational flattening. However, this is appropriate only in the region whose gravity is dominated by the pre-stellar core, or where it prescribes a density higher than the ambient density: $`r\lesssim 0.07(\sigma _{\mathrm{th}}/0.2\mathrm{km}\mathrm{s}^{-1})(n_\mathrm{H}/10^4\mathrm{cm}^{-3})^{-1/2}\mathrm{pc}`$, using the theory of Shu (1977). <sup>1</sup><sup>1</sup>1 Note that if the wind, star and core masses are each within about a factor of ten of the last, then the outflow must have emerged from its core if the wind is to be light near its axis. This follows from the overwhelming factor ($`\sim 10^4`$) by which the axial force is enhanced; it is only exacerbated by any flattening of the core. The core mass may continue to affect the low-velocity, equatorial flow, however. Outside this radius, anisotropies in the ambient gas are unlikely to correlate with the outflow direction, so we may assume $`Q(\mu )\approx 1`$. We also then expect $`k_\rho \approx 0`$, $`1`$ or $`2`$, if the lobe in question is smaller than, comparable to, or emerging from its parent star-forming “clump”. Assuming the ambient medium is isotropic and the outflow has not escaped its clump, and that $`|ϵ|\ll 1`$ so that its effect can be neglected in the outflow shape, our model reduces to: $$\frac{R_s}{R_{\mathrm{head}}}=\left[1+(1-\mu ^2)\theta _0^{-2}\right]^{-1/(4-k_\rho )},$$ (6) where the radius of the lobe head, $`R_{\mathrm{head}}`$, expands according to equations (3) and (4) with $`\mu =1`$. ## 4. Comparison with observations The shell described above is a Hubble flow in the sense that $`𝐯_s(\mu ,t)=\eta 𝐑_s(\mu ,t)/t`$. 
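Equation (6) gives the lobe shape directly; in particular the head-to-equator axis ratio is $`(1+\theta _0^{-2})^{1/(4-k_\rho )}\approx \theta _0^{-2/(4-k_\rho )}`$. A minimal sketch (ours; the parameter values are illustrative):

```python
def shell_radius(mu, theta0=0.01, k_rho=0.0):
    """R_s / R_head from Eq. (6): the self-similar shell shape for an isotropic
    ambient medium with density ~ r^-k_rho and broadening angle theta0."""
    return (1.0 + (1.0 - mu**2) / theta0**2) ** (-1.0 / (4.0 - k_rho))

# Collimation: head-to-equator axis ratio ~ theta0^{-2/(4-k_rho)}.
ratio = shell_radius(1.0) / shell_radius(0.0)
print(ratio)   # ~10 for theta0 = 0.01, k_rho = 0
```

Because the flow is a Hubble flow, the velocity field has exactly the same shape, $`v(\mu )=\eta R_s(\mu )/t`$, so the same function describes both the sky image and the PV diagram discussed below.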
Since the relative line-of-sight velocity is related to the line-of-sight distance by $`v_{\mathrm{obs}}=\eta z_{\mathrm{los}}/t`$, the position-velocity (PV) diagram along the extent of an optically thin outflow is the same as its image to an observer situated in the plane of the sky along the short axis of the outflow. Whereas inclination causes a foreshortening of the outflow in a sky map, it causes the PV diagram to rotate. An elongated outflow ($`\theta _0\ll 1`$) will display a nearly linear PV diagram from the greatest to the least values of velocity and position. The agreement of self-similar outflows with the observed Hubble law was pointed out by Shu et al. (1991) for the particular case $`k_\rho =2`$ and $`\eta _{\mathrm{in}}=1`$; here we see that it is a quite general property. The mass-velocity relationship, $`dM/dv_{\mathrm{obs}}`$, is a projection of the PV diagram onto the velocity axis. Typical outflows show $`dM(v_{\mathrm{obs}})/dv_{\mathrm{obs}}\propto v_{\mathrm{obs}}^{-\mathrm{\Gamma }}`$, with $`\mathrm{\Gamma }\simeq 1.8`$. In an elongated outflow of inclination $`i`$, $`v_{\mathrm{obs}}\simeq v\mathrm{cos}i`$, where $`v=|𝐯|`$, because all but the lowest velocities are achieved at small angles from the outflow axis. Therefore, $`\mathrm{\Gamma }=-d\mathrm{ln}(dM/dv)/d\mathrm{ln}(v)`$ except at low velocities. Because $`dM/dv=(\partial M/\partial \mu )/(\partial v/\partial \mu )\propto v^{3-k_\rho }/(\mu v^{(5+ϵ-k_\rho )/(1+ϵ)})`$ from equations (3) and (4) with $`Q(\mu )=1`$, we find $$\mathrm{\Gamma }=2-ϵ\frac{4-k_\rho }{1+ϵ},$$ (7) for $`v_{\mathrm{obs}}\gtrsim 2v_{\mathrm{min}}`$ where $`v_{\mathrm{min}}\equiv \theta _0^{2/(4-k_\rho )}v_{\mathrm{head}}`$ is the minimum space velocity (see eq. 6). Essentially all of the (non-equatorial) flow shows a value of $`\mathrm{\Gamma }`$ very close to $`2`$ (if $`ϵ\ll 1`$), in excellent agreement with a typical observed value of $`1.8`$.
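The near-$`v^{-2}`$ mass-velocity relation can also be checked directly from the shell model, without appealing to the analytic slope formula. The sketch below assumes $`Q(\mu )=1`$, $`ϵ\rightarrow 0`$, $`k_\rho =0`$ and $`v\propto R_s`$ with the shape $`R_s\propto [1+(1-\mu ^2)/\theta _0^2]^{-1/(4-k_\rho )}`$; the window of "intermediate" velocities is an illustrative choice.

```python
import math

def mass_velocity_slope(theta0=1e-2, k_rho=0.0, n=200001, vlo=0.25, vhi=0.7):
    """Fit Gamma in dM/dv ~ v^(-Gamma) for a momentum-driven shell (Q=1, eps->0)."""
    logv, logf = [], []
    for i in range(1, n):
        mu = i / n
        u = 1.0 + (1.0 - mu * mu) / theta0 ** 2
        v = u ** (-1.0 / (4.0 - k_rho))          # v proportional to R_s (eq. 6)
        if not (vlo < v < vhi):
            continue
        # dv/dmu and dM/dmu ~ R_s^(3 - k_rho), both analytic here
        dvdmu = (2.0 * mu / ((4.0 - k_rho) * theta0 ** 2)) \
            * u ** (-1.0 / (4.0 - k_rho) - 1.0)
        dMdmu = v ** (3.0 - k_rho)
        logv.append(math.log(v))
        logf.append(math.log(dMdmu / dvdmu))
    m = len(logv)
    mx, my = sum(logv) / m, sum(logf) / m
    slope = sum((x - mx) * (y - my) for x, y in zip(logv, logf)) / \
        sum((x - mx) ** 2 for x in logv)
    return -slope

print(f"Gamma = {mass_velocity_slope():.3f}")    # close to 2 at intermediate v
```

The fitted slope sits very close to 2 over the intermediate velocity range, steepening only slightly toward $`v_{\mathrm{min}}`$ where the $`1/\mu `$ factor matters.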
The slope is shallower for $`v_{\mathrm{obs}}\lesssim v_{\mathrm{min}}`$: because $`dM/dv_{\mathrm{obs}}`$ is a symmetric function of $`v_{\mathrm{obs}}`$, $`\mathrm{\Gamma }=0`$ when $`v_{\mathrm{obs}}=0`$. The velocity at which $`dM/dv\propto v^{-2}`$ fails is greater for more inclined outflows, or for larger $`\theta _0`$; this could in principle constrain the inclination. The ability of our model to fit observational mass-velocity curves is demonstrated in Figure 1 for L1551, NGC2071, and NGC2264G. The qualitative agreement of X-winds in isothermal toroids with observed mass-velocity curves has previously been shown by Li & Shu (1996). Again, we see that this is a general feature of momentum–conserving shells driven by hydromagnetic winds, and is not linked to particular models for the source of the wind nor the ambient medium, so long as this medium has a power law density distribution. The broadening angle $`\theta _0`$ can be constrained using the range of velocities for which $`\mathrm{\Gamma }\simeq 2`$: this law holds for at least a factor of 4 in the outflows shown in Figure 1. This must be less than about $`v_{\mathrm{head}}/(2v_{\mathrm{min}})=\theta _0^{-2/(4-k_\rho )}/2`$, so $`\theta _0\lesssim 10^{-1.8(1-k_\rho /4)}`$. However, the velocity factor is also reduced by inclination at a given $`\theta _0`$; in Figure 1 we show that $`\theta _0=10^{-2}`$ allows an inclination of $`45^{\circ }`$ to fit the data; $`\theta _0=10^{-1.5}`$ gives too small a velocity range unless $`i=0`$. If $`p_{\mathrm{obs},\mathrm{lobe}}`$ is the net momentum of an outflow lobe along the line of sight, another constraint on $`\theta _0`$ comes from the ratio, $`{\displaystyle \frac{p_{\mathrm{obs},\mathrm{lobe}}}{(v_{\mathrm{obs}}^2dM/dv_{\mathrm{obs}})_{\mathrm{head}}}}={\displaystyle \frac{2\mathrm{ln}(\theta _0^{-1})}{4-k_\rho }},`$ (8) valid if $`ϵ,\theta _0\ll 1`$.
The result was obtained by integrating the momentum along the line of sight for an uninclined ($`i=0`$) outflow; we did not assume that $`\mu \simeq 1`$, but instead used the exact expression $`v_{\mathrm{obs}}=\mu v`$ appropriate for each shell. It is interesting to note that this agrees exactly with the expression $`\mathrm{ln}(v_{\mathrm{head}}/v_{\mathrm{min}})`$ one would estimate from $`dM/dv\propto v^{-2}`$ with $`\mu =1`$. Although the ratio was derived for zero inclination, it is actually independent of $`i`$, because the numerator and denominator scale together. Observations place a lower limit on the ratio (an upper limit on $`\theta _0`$), since the net momentum may be underestimated. We find that $`\theta _0<10^{-1.5}`$ for L1551 (Moriarty-Schieven & Snell 1988) and $`\theta _0<10^{-1.3}`$ for NGC2071 (Moriarty-Schieven, Hughes & Snell 1989), assuming $`k_\rho =0`$. Again, note that $`\theta _0\simeq 10^{-2}`$ is consistent with the data. The current model reproduces the typical extents and velocities of observed outflows. For instance, equations (3) and (4) imply that a wind of momentum $`20M_{\odot }\mathrm{km}\mathrm{s}^{-1}`$ with $`\theta _0=10^{-2}`$, blowing steadily into a uniform density $`10^4\mathrm{cm}^{-3}`$ for $`10^5`$ years, drives lobes whose heads decelerate to $`7.4\mathrm{km}\mathrm{s}^{-1}`$ and expand to $`1.5\mathrm{pc}`$ (each) in this time. ## 5. Conclusions Perhaps the most remarkable properties of bipolar molecular outflows, apart from their intensity and frequency in regions of active star formation, are their high degrees of collimation and the commonality of the relation $`dM/dv\propto v^{-1.8}`$. We have shown that hydromagnetic winds are naturally collimated so that the force distribution, $`\rho _wv_w^2\propto 1/\mathrm{sin}^2\theta `$, approximately leads to $`dM/dv\propto v^{-2}`$ in any power-law ambient medium, provided the interaction is momentum conserving.
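The quoted numbers can be reproduced by evaluating equations (4)-(5) directly. The angular force distribution enters through its polar value; since the defining equation for $`P(\mu )`$ is not part of this excerpt, the sketch below assumes $`P(\mu )\approx [(1+\theta _0^2-\mu ^2)\mathrm{ln}(2/\theta _0)]^{-1}`$, roughly normalized to unity on $`0\le \mu \le 1`$, together with a mean mass of $`1.4m_\mathrm{H}`$ per hydrogen nucleus; treat both as assumptions.

```python
import math

MSUN, KMS, YR, PC = 1.989e33, 1.0e5, 3.156e7, 3.086e18   # cgs conversions

def lobe_head(p_w, t, n_H, theta0, k_rho=0.0, eta_in=1.0):
    """Head radius/velocity from eqs. (4)-(5): steady wind, uniform medium."""
    rho01 = 1.4 * 1.67e-24 * n_H                 # g/cm^3 (1.4 m_H per H nucleus)
    P1 = 1.0 / (theta0 ** 2 * math.log(2.0 / theta0))  # assumed P(mu = 1)
    mom = 0.5 * p_w * t                          # int_0^t p_w(t') dt' for p_w ~ t
    R4 = (4.0 - k_rho) * (3.0 - k_rho) * P1 * mom / (4.0 * math.pi * rho01)
    R = R4 ** (1.0 / (4.0 - k_rho))
    eta = (eta_in + 1.0) / (4.0 - k_rho)
    return R, eta * R / t

R, v = lobe_head(p_w=20.0 * MSUN * KMS, t=1.0e5 * YR, n_H=1.0e4, theta0=1e-2)
print(f"R_head = {R / PC:.2f} pc,  v_head = {v / KMS:.1f} km/s")
```

With these assumptions the head expands to about 1.5 pc and decelerates to about 7 km s<sup>-1</sup>, consistent with the estimate in the text.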
Our results show that the conclusions reached by Shu et al. (1991, 1995) and Li & Shu (1996) for steady X-winds in media with $`1/r^2`$ density distributions are far more general, and in addition are in good agreement with observations of protostellar outflows. The fact that outflows are often observed to have $`dM/dv`$ slightly shallower than $`v^{-2}`$ indicates that the theory is only approximate. A number of effects we have not considered could lead to a deviation from the $`-2`$ slope: CO self-absorption at lower velocities, mixing of radial momentum between angles, or the generation of lateral momentum when the wind impacts the shell (Masson and Chernin 1993). In the current model, $`\mathrm{\Gamma }`$ can differ from $`2`$ either because the wind force does not exactly follow $`\rho _wv_w^2\propto (\mathrm{sin}\theta )^{-2}`$ ($`ϵ\ne 0`$), or because the ambient medium is not a single power law ($`k_\rho `$ varies). The model also predicts a flattening of $`dM/dv`$ at low velocities, which could raise the estimate of $`\mathrm{\Gamma }`$. Let us consider the possibility that $`ϵ`$ is to blame for $`\mathrm{\Gamma }\ne 2`$. From equation (7), $`\mathrm{\Gamma }\simeq 1.8`$ requires $`ϵ^{-1}\simeq 5(4-k_\rho )`$: the wind is slightly more concentrated toward the axis. Equation (2) implies that the number of decades of disk radius required to give this value of $`ϵ`$ is approximately $`\mathrm{log}_{10}(1/\theta _{\mathrm{core}})/[5(q-1)(1-k_\rho /4)]`$. The Blandford & Payne (1982) model has $`q=3/2`$; it would therefore require about 2-4 decades of disk radius to give $`\mathrm{\Gamma }=1.8`$ for $`\theta _{\mathrm{core}}\simeq 10^{-5}`$ and $`2\ge k_\rho \ge 0`$. A disk with $`q<1`$ (initial wind density increasing outward, e.g., Ostriker’s models), has its wind force weighted toward the equator relative to $`\rho _wv_w^2\propto (\mathrm{sin}\theta )^{-2}`$, and produces outflows steeper than $`dM/dv\propto v^{-2}`$. X-wind models share this behavior, as they predict $`ϵ\simeq -1/50`$.
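The "2-4 decades" estimate is a quick arithmetic check. The snippet below assumes the reading of the expression above as $`N_{\mathrm{dec}}\approx \mathrm{log}_{10}(1/\theta _{\mathrm{core}})/[5(q-1)(1-k_\rho /4)]`$ (the form with the subtractions restored), which reproduces the quoted range for the Blandford & Payne parameters.

```python
import math

def decades_of_disk(theta_core, q, k_rho):
    # assumed form: N_dec ~ log10(1/theta_core) / [5 (q - 1) (1 - k_rho/4)]
    return math.log10(1.0 / theta_core) / (5.0 * (q - 1.0) * (1.0 - k_rho / 4.0))

# Blandford & Payne (1982): q = 3/2, with theta_core ~ 1e-5
for k_rho in (0.0, 2.0):
    print(f"k_rho = {k_rho}: {decades_of_disk(1e-5, 1.5, k_rho):.1f} decades")
```

For $`\theta _{\mathrm{core}}=10^{-5}`$ this gives 2 decades at $`k_\rho =0`$ and 4 decades at $`k_\rho =2`$, i.e. the "about 2-4 decades" quoted in the text.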
Our model assumes that the shocked wind and ambient gas form a thin shell. Although this is appropriate for unmagnetized gas (Koo & McKee 1992a), the fact that winds are collimated magnetically raises the possibility that outflows might become inflated with a cocoon of magnetically-supported shocked wind, before or after the wind shuts off. This depends on the wind’s terminal Alfvén Mach number (inversely proportional to Poynting flux), and also on whether the field remains ordered or becomes tangled. Because the Poynting flux decreases as the wind collimates ($`\theta _{\mathrm{core}}`$ decreases), and also as the kink instability develops (Choudhuri & Königl 1986), it is reasonable to ignore the magnetic pressure of the shocked wind. ###### Acknowledgements. E. Ostriker informs us of similar, unpublished results she obtained independently. We are grateful to F. Shu for thoughtful suggestions, and to R. Plambeck and M. Hogerheijde for sharing and discussing their data. CDM appreciates comments from J. Monnier. The research of both CDM and CFM is supported in part by the National Science Foundation through NSF grant AST 95-30480, in part by a NASA grant to the Center for Star Formation Studies, and, for CFM, in part by a Guggenheim Fellowship. CFM gratefully acknowledges the hospitality of John Bahcall of the Institute for Advanced Study; his visit there was supported in part by a grant from the Alfred P. Sloan Foundation.
# Folding and Design in Coarse-Grained Protein Models ## 1 Introduction Proteins are heterogeneous chain molecules composed of sequences of amino acids. The protein folding problem amounts to predicting the protein 3D structure from a given sequence of amino acids. There are 20 different amino acids. In the Bioinformatics approach one aims at extracting rules in a “black-box” manner by relating sequence with structure from databases. Here we pursue the physics approach, where, given interaction energies, the 3D structures and their thermodynamical properties are probed. In principle, this can be pursued at different levels of resolution. Ab initio quantum chemistry calculations cannot handle the huge number of degrees of freedom, but are of course useful for estimating interatomic potentials. All-atom representations, where the atoms are the building blocks, also require very large computing resources for the full folding problem including thermodynamics, but are profitable for computing partial problems, binding energies etc. Here we pursue a coarse-grained representation, where the entities are the amino acids. This is motivated by the fact that the hydrophobic properties of the amino acids play a most important role in the folding process – the amino acids that are hydrophobic (H) tend to form a core, whereas the hydrophilic or polar ones (P) are attracted to the surrounding H<sub>2</sub>O solution. In such representations, the interactions between the amino acids and the solvent are reformulated into an effective interaction between the amino acids. ## 2 Coarse-Grained Models Both lattice and off-lattice models have been studied here. A well studied lattice model is the HP model $$E(r,\sigma )=-\sum _{i<j}\sigma _i\sigma _j\mathrm{\Delta }(r_i-r_j)$$ (1) where $`\mathrm{\Delta }(r_i-r_j)=1`$ if monomers $`i`$ and $`j`$ are non-bonded nearest neighbors and $`0`$ otherwise. For hydrophobic and polar monomers, one has $`\sigma _i=1`$ and 0, respectively.
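For very short chains the HP model of equation (1) can be evaluated by brute force: enumerate all self-avoiding conformations and score each H-H contact. The sketch below is illustrative (the 6-mer sequence is an arbitrary choice, and the attractive sign convention $`E=-\sum \sigma _i\sigma _j\mathrm{\Delta }`$ is assumed).

```python
def hp_energy(walk, seq):
    """HP energy of eq. (1): -1 per non-bonded H-H nearest-neighbor contact."""
    pos = {c: i for i, c in enumerate(walk)}
    E = 0
    for (x, y), i in pos.items():
        for nb in ((x + 1, y), (x, y + 1)):      # count each adjacent pair once
            j = pos.get(nb)
            if j is not None and abs(i - j) > 1:
                E -= seq[i] * seq[j]             # sigma_i = 1 (H) or 0 (P)
    return E

def saws(n):
    """All self-avoiding walks of n monomers on Z^2 (first step fixed)."""
    walks = [[(0, 0), (1, 0)]]
    for _ in range(n - 2):
        walks = [w + [(w[-1][0] + dx, w[-1][1] + dy)]
                 for w in walks
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if (w[-1][0] + dx, w[-1][1] + dy) not in w]
    return walks

seq = (1, 0, 1, 1, 0, 1)                         # HPHHPH, an arbitrary test case
energies = [hp_energy(w, seq) for w in saws(len(seq))]
print(len(energies), "conformations; ground-state energy:", min(energies))
```

For this 6-mer there are 71 conformations (with the first step fixed to remove rotations) and the ground-state energy is $`-2`$; on the bipartite square lattice only H pairs with odd index separation can ever touch, which is why the third potential contact cannot be realized simultaneously.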
Being discrete, this model has the advantage that for sizes up to $`N=18`$ in 2D it can be solved exactly by exhaustive enumeration. Similarly, off-lattice models have been developed, where adjacent residues are linked by rigid bonds of unit length to form linear chains. The energy function is given by $$E(r,\sigma )=\sum _iF_i+\sum _{i<j}ϵ(\sigma _i,\sigma _j)[r_{ij}^{-12}-r_{ij}^{-6}]$$ (2) where $`F_i`$ is a local sequence-independent interaction chosen to mimic the observed local correlations among real proteins and the second term corresponds to amino-acid interactions, the strengths/signs of which are governed by $`ϵ(\sigma _i,\sigma _j)`$. ## 3 Folding Investigating thermodynamical properties of chains given by Eqs. (1,2) is extremely tedious with standard MC methods: Metropolis, the hybrid method, etc. Hence novel approaches are called for. Dynamical-parameter approaches have turned out to be very powerful here: the tempering and multisequence methods. In the tempering method one simulates $$P(r,k)=\frac{1}{Z}\mathrm{exp}(g_k-E(r,\sigma )/T_k)$$ (3) with ordinary $`r`$ and $`k`$ updates for $`T_1<\mathrm{}<T_K`$, regularly quenching the system to the ground state. The weights $`g_k`$ are chosen such that the probability of visiting the different $`T_k`$ is roughly constant. Similarly, in the multisequence method the degrees of freedom are enlarged to include different sequences according to $$P(r,\sigma )=\frac{1}{Z}\mathrm{exp}(g_\sigma -E(r,\sigma )/T)$$ (4) where again $`g_\sigma `$ is a set of tunable parameters, which are subject to moves jointly with $`r`$. When estimating thermodynamical quantities, these dynamical-parameter methods yield speedup factors of several orders of magnitude. A key issue when studying properties of protein models is to what extent different sequences yield structures with good folding properties from a thermodynamic standpoint.
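The joint distribution of equation (3) can be illustrated on a toy problem. The sketch below is a minimal simulated-tempering loop on a one-dimensional double well; the temperature ladder, the weights $`g_k`$ (set to zero purely for illustration rather than tuned), and the energy function are all assumptions, not the protein models of the text.

```python
import math, random

random.seed(1)

def tempering(E, x0, temps, g, steps=20000, step_size=0.5):
    """Minimal simulated-tempering sketch (eq. 3): joint updates of the
    configuration x and the temperature index k."""
    x, k = x0, 0
    visits = [0] * len(temps)
    for _ in range(steps):
        # ordinary Metropolis update of the configuration at temperature T_k
        xp = x + random.uniform(-step_size, step_size)
        if random.random() < math.exp(min(0.0, -(E(xp) - E(x)) / temps[k])):
            x = xp
        # Metropolis update of the temperature index k
        kp = k + random.choice((-1, 1))
        if 0 <= kp < len(temps):
            dlogw = (g[kp] - E(x) / temps[kp]) - (g[k] - E(x) / temps[k])
            if random.random() < math.exp(min(0.0, dlogw)):
                k = kp
        visits[k] += 1
    return visits

E = lambda x: (x * x - 1.0) ** 2        # double well with minima at x = +/-1
temps = [0.05, 0.2, 0.8]
g = [0.0, 0.0, 0.0]
print(tempering(E, 1.0, temps, g))      # time spent at each temperature
```

The random walk in $`k`$ lets the system decorrelate at high $`T`$ and quench at low $`T`$; in a production run the $`g_k`$ would be tuned so the temperatures are visited roughly uniformly, as stated in the text.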
Defining good folding properties is straightforward in the lattice model case – non-degenerate ground states. For off-lattice models a suitable measure can be defined in terms of the mean-square distance $`\delta _{ab}^2`$ between two arbitrary configurations $`a`$ and $`b`$. An informative measure of stability is the mean $`\delta ^2`$. With a suitable cut on $`\delta ^2`$, good folders are singled out. For both lattice and off-lattice models, only a few % of the sequences have good folding properties <sup>1</sup><sup>1</sup>1Similar fractions are obtained within the replica approach for lattice models.. When analyzing the sequence properties of good folders, one finds that similar signatures occur among real proteins when using a binary coding for the hydrophobicities. One might speculate that only those sequences with good folding properties survived the evolution. ## 4 Design The “inverse” of protein folding, sequence optimization, is of utmost relevance in the context of drug design. Here, one aims at finding optimal amino acid sequences given a target structure such that the solution represents a good folder. This corresponds to maximizing the conditional probability $`P(r_0|\sigma )={\displaystyle \frac{1}{Z(\sigma )}}\mathrm{exp}(-E(r_0,\sigma )/T)`$ (5) $`Z(\sigma )={\displaystyle \sum _r}\mathrm{exp}(-E(r,\sigma )/T)`$ (6) Note that here $`Z(\sigma )`$ is not a constant quantity. A straightforward approach would therefore require a nested MC – for each step in $`\sigma `$ a complete MC has to be performed in $`r`$. Needless to say, this is extremely time consuming. Various approximations for $`Z`$ have been suggested: chemical potentials fixing the net hydrophobicity and low-$`T`$ expansions. Neither of these produces good folders in a reliable way. Here we devise a different strategy based upon the multisequence method. The starting point is the joint probability distribution (Eq.
(4)). The corresponding marginal distribution is given by $`P(\sigma )`$ $`=`$ $`{\displaystyle \sum _r}P(r,\sigma )={\displaystyle \frac{1}{Z}}\mathrm{exp}(g_\sigma )Z(\sigma )`$ $`Z`$ $`=`$ $`{\displaystyle \sum _\sigma }\mathrm{exp}(g_\sigma )Z(\sigma )`$ (7) With the choice $$g_\sigma =E(r_0,\sigma )/T$$ (8) one obtains $$P(r_0|\sigma )=\frac{P(r_0,\sigma )}{P(\sigma )}=\frac{1}{ZP(\sigma )}$$ (9) In other words, maximizing $`P(r_0|\sigma )`$ is in this case equivalent to minimizing $`P(\sigma )`$. This implies that bad sequences are visited more frequently than good ones in the simulation. This property may seem strange at first glance. However, it can be used to eliminate bad sequences. The situation is illustrated in Fig. 1. Basically, one runs an MC in both $`r`$ and $`\sigma `$ using all (or a subset of) the sequences. Regularly, one estimates $`P(\sigma )`$. Sequences where $`P(\sigma )`$ exceeds a certain threshold are then eliminated, thereby purifying the sample towards designing sequences according to Eq. (9). For lattice models one can use an alternative to eliminating high-$`P(\sigma )`$ sequences, by removing sequences for which a structure with $`E(r,\sigma )\le E(r_0,\sigma )`$ is encountered. Testing any design algorithm requires that one has access to designable structures, i.e. structures for which there exist good folding sequences. Furthermore, after the design process, it must be verified that the designed sequence indeed has the structure as a stable minimum (good folder). For $`N\le 18`$ 2D lattice models this is of course feasible, since these models can be enumerated exactly. For larger lattice models and off-lattice models this is not the case and testing the design approach is more laborious. Extensive tests have been performed for $`N`$=16, 18, 32 and 50 lattice chains and $`N`$=16 and 20 off-lattice chains. For systems exceeding $`N`$=20 one cannot go through all possible sequences.
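For chains short enough to enumerate, the design criterion of equations (5)-(9) can be evaluated exactly, which makes the logic transparent. The sketch below re-derives the enumeration utilities so it is self-contained; the HP-type energy, the 6-mer target conformation, and the temperature are illustrative assumptions.

```python
import math
from itertools import product

def saws(n):
    """All self-avoiding walks of n monomers on Z^2 (first step fixed)."""
    walks = [[(0, 0), (1, 0)]]
    for _ in range(n - 2):
        walks = [w + [(w[-1][0] + dx, w[-1][1] + dy)]
                 for w in walks
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if (w[-1][0] + dx, w[-1][1] + dy) not in w]
    return walks

def energy(walk, seq):
    """HP energy: -1 per non-bonded H-H contact (sigma = 1 for H, 0 for P)."""
    pos = {c: i for i, c in enumerate(walk)}
    E = 0
    for (x, y), i in pos.items():
        for nb in ((x + 1, y), (x, y + 1)):
            j = pos.get(nb)
            if j is not None and abs(i - j) > 1:
                E -= seq[i] * seq[j]
    return E

def cond_prob(target, seq, walks, T=0.3):
    """P(r0 | sigma) = exp(-E(r0,sigma)/T) / Z(sigma), by full enumeration."""
    Z = sum(math.exp(-energy(w, seq) / T) for w in walks)
    return math.exp(-energy(target, seq) / T) / Z

walks = saws(6)
target = [(0, 0), (1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0)]  # a compact U-shape
best = max(product((0, 1), repeat=6), key=lambda s: cond_prob(target, s, walks))
print("best sequence for this target:", best)
```

The winning sequence places H exactly at the three positions (1st, 4th and 6th monomers) that realize the target's two possible H-H contacts while keeping $`Z(\sigma )`$ small; placing extra H's elsewhere only inflates $`Z(\sigma )`$, lowering $`P(r_0|\sigma )`$, which is the elimination logic of the multisequence design scheme in miniature.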
Hence a bootstrap procedure has been devised, where a set of preliminary runs with subsets of sequences is first performed. Positions along the chain with clear assignments of H or P are then clamped, and the remaining degrees of freedom are run with all sequences visited. With no exceptions, the design algorithm efficiently singles out sequences that fold well into the (designable) target structures. Acknowledgment: The results reported here were obtained together with A. Irbäck, F. Potthast, E. Sandelin and O. Sommelius.
# Aperture Synthesis Images of Dense Molecular Gas in Nearby Galaxies with the Nobeyama Millimeter Array ## 1. Dense Molecular Gas in Galaxies In order to study the distribution of dense molecular gas and its relation to the central activities (starburst and AGN) in galaxies, we have conducted an imaging survey of HCN(1–0) and HCO<sup>+</sup>(1–0) emissions from nearby spiral galaxies with the Nobeyama Millimeter Array (NMA) (Kohno et al. 1996, 1998, 1999a, 1999b, 1999c; Shibatsuka et al. 1999). Figure 1 shows preliminary images of HCN and HCO<sup>+</sup> in galaxies. In starburst galaxies, we find there is good spatial coincidence between dense molecular gas and star-forming regions. The ratios of HCN to CO integrated intensities on the brightness temperature scale, $`R_{\mathrm{HCN}/\mathrm{CO}}`$, are as high as 0.1 to 0.2 in the starburst regions, and quickly decrease outside of these regions. In contrast, we find a remarkable decrease of the HCN emission in the post-starburst nuclei, despite the strong CO concentrations there. The $`R_{\mathrm{HCN}/\mathrm{CO}}`$ values in the central few 100 pc regions of these quiescent galaxies are very low, 0.02 to 0.04. A rough correlation between $`R_{\mathrm{HCN}/\mathrm{CO}}`$ and H$`\alpha `$/CO ratios, which is an indicator of star-formation efficiency, is found on scales of a few 100 pc. The fraction of dense molecular gas in the total molecular gas, measured from $`R_{\mathrm{HCN}/\mathrm{CO}}`$, may be an important parameter that controls star formation. In some Seyfert galaxies we find extremely high $`R_{\mathrm{HCN}/\mathrm{CO}}`$ exceeding 0.3. These very high ratios are never observed even in strong starburst regions, implying a physical link between extremely high $`R_{\mathrm{HCN}/\mathrm{CO}}`$ and Seyfert activity. ## References Kohno, K., Kawabe, R., Tosaki, T., & Okumura, S. K. 1996, ApJ, 461, L29 Kohno, K. et al. 1998, in The Central Regions of the Galaxy and Galaxies, ed. Y.
Sofue (Dordrecht: Kluwer), 239 Kohno, K., Kawabe, R., & Vila-Vilaró, B. 1999a, ApJ, 511, 157 Kohno, K. et al. 1999b, Adv. Space Res., 23, 1011 Kohno, K. et al. 1999c, in The Physics and Chemistry of the Interstellar Medium, in press (astro-ph/9902251) Shibatsuka, T. et al. 1999, in Proceedings of Star Formation 1999, in press (astro-ph/9909313)
# Cluster variation method and disorder varieties of two–dimensional Ising–like models ## I Introduction The cluster variation method (CVM) is a powerful hierarchy of approximations for lattice models of equilibrium statistical mechanics which has been invented by Kikuchi and more recently rewritten by An and Morita. It is particularly well suited to analyse complex phase diagrams of discrete classical models, but in some simple cases it is also known to give exact results. Since the approximations involved amount to neglecting correlations except for a finite range, exact results are obtained whenever correlations have a particularly simple structure, as in tree-like lattices or one-dimensional strips. The purpose of the present paper is to study the behaviour of the CVM in another situation in which correlations are particularly simple, namely in the case of disorder varieties of two-dimensional Ising-like models with competitive interactions. Disorder varieties are known since the papers by Stephenson and have subsequently been studied by many authors. On a disorder variety (which is a suitable subspace in the whole parameter space of a model) the correlation functions factorize in a simple way, which leads to an effective dimensional reduction of the model, so one could expect that the CVM might be particularly accurate or even exact in such a case. This is indeed the case and I shall show, giving both general arguments and a detailed analysis of a particular model, that the CVM is exact on disorder varieties. The plan of the paper is as follows: in Sec. II I shall introduce disorder varieties and briefly recall some of the results which have been obtained in the past years; Sec. III will be devoted to the definition and explanation of the CVM; in Sec. IV the exactness of the CVM on disorder varieties will be shown and finally, conclusions will be drawn in Sec. V.
## II Disorder varieties A disorder variety is a subspace of the parameter space of a model with competitive interactions, lying in the disordered phase, where the correlations have a particularly simple form and the model can then be integrated exactly. The first example of such a variety has been found by Stephenson in the anisotropic antiferromagnetic Ising model on the triangular lattice. The hamiltonian of the model can be written in the form $$H=-\sum _{\langle ij\rangle }J_{ij}\sigma _i\sigma _j,$$ (1) where $`\sigma _i=\pm 1`$ is the spin variable at site $`i`$, the sum is over all nearest-neighbour (NN) pairs and $`J_{ij}`$ depends only on the direction of the link between sites $`i`$ and $`j`$. The values of $`J_{ij}`$ along the three lattice directions will be denoted by $`J_1`$, $`J_2`$ and $`J_3`$. In the antiferromagnetic model we have $`J_l<0`$ for $`l=1,2,3`$. Stephenson showed that when the condition $$\mathrm{tanh}K_3+\mathrm{tanh}K_1\mathrm{tanh}K_2=0,\qquad K_l=J_l/k_\mathrm{B}T$$ (2) (or one which is obtained from it by a cyclic permutation of the indices) holds, then the pair correlation along a lattice direction has a simple exponential form, as for the one-dimensional model. If $`\sigma _i`$ and $`\sigma _j`$ are two spin variables separated by a distance $`k`$ on a linear chain of the lattice in the $`l`$th direction, their correlation $`\langle \sigma _i\sigma _j\rangle `$ is the $`k`$th power of the NN correlation along the same direction. In particular, assuming $`J_1<J_2<J_3<0`$, one has $`\langle \sigma _i\sigma _j\rangle =[\mathrm{tanh}(K_1)]^k`$ in direction 1, $`\langle \sigma _i\sigma _j\rangle =[\mathrm{tanh}(K_2)]^k`$ in direction 2 and $`\langle \sigma _i\sigma _j\rangle =[\mathrm{tanh}(K_3)]^k`$ in direction 3. Stephenson also showed that the disorder variety separates a portion of the disordered phase in which the pair correlation has an oscillating behavior from one in which it decreases monotonically.
Similar results have been obtained by the same author for the union jack lattice and for certain one-dimensional lattices. Later, Enting showed that the interaction round a face (IRF) model on the square lattice (and in particular the Ising model with NN, next-nearest-neighbour (NNN) and plaquette interactions) has a disorder variety which can be mapped onto an exactly solvable crystal growth model. Peschel and Emery rederived Stephenson’s results for the correlations on the disorder variety of the triangular Ising model by means of a one-dimensional kinetic model and applied this technique also to the ANNNI model. Peschel and Rys solved the eight vertex model on one of its disorder varieties. Baxter analysed the disorder varieties of the IRF model on the square lattice. He showed that the eigenvector of the (diagonal to diagonal) transfer matrix corresponding to the largest eigenvalue can be written in a simple form as the product of a sequence of two-site (NN) factors. Rujàn studied the relations between different techniques and considered several models (vertex models, staggered IRF model, $`q`$-state Potts models, random bond models). Jaekel and Maillard found a local criterion which characterizes disorder varieties for any dimensionality and explains the effective dimensional reduction occurring in the model: the Boltzmann weight of an elementary cell of the lattice, summed over some (suitably chosen) spins (or whatever degrees of freedom), is independent of the remaining spins. Georges and coworkers used this local criterion to calculate correlation functions on the disorder varieties of three-dimensional Ising models. To conclude this (certainly not exhaustive) brief survey of the existing literature, we mention that recently, Meyer and coworkers studied the disorder varieties of the eight vertex model in the framework of a random matrix theory approach to the transfer matrix. 
## III The cluster variation method The cluster variation method (CVM) is a hierarchy of approximation techniques for discrete classical lattice models, which has been invented by Kikuchi. In its modern formulation the CVM is based on the truncation of the cumulant expansion of the variational principle of equilibrium statistical mechanics, which says that the free energy $`\mathcal{F}`$ of a model defined on the lattice $`\mathrm{\Lambda }`$ is given by $$\mathcal{F}=\underset{\rho _\mathrm{\Lambda }}{\mathrm{min}}\mathcal{F}[\rho _\mathrm{\Lambda }]=\underset{\rho _\mathrm{\Lambda }}{\mathrm{min}}\mathrm{Tr}(\rho _\mathrm{\Lambda }H+\rho _\mathrm{\Lambda }\mathrm{ln}\rho _\mathrm{\Lambda }),$$ (3) where $`H`$ is the hamiltonian of the model, $`k_\mathrm{B}T=1`$ for simplicity, and the minimization must be performed with respect to a density matrix obeying the normalization constraint $`\mathrm{Tr}(\rho _\mathrm{\Lambda })=1`$. If the model under consideration has only short range interactions and the maximal clusters are sufficiently large, the hamiltonian can be decomposed into a sum of cluster contributions $`H_\alpha `$ and the approximate variational free energy takes the form $$F[\{\rho _\alpha ,\alpha \in M\}]=\sum _{\alpha \in M}\left[\mathrm{Tr}(\rho _\alpha H_\alpha )-a_\alpha S_\alpha \right],$$ (4) where $`\alpha `$ is a cluster of sites, $`M`$ denotes the set of clusters retained in the approximation, $`\rho _\alpha =\mathrm{Tr}_{\mathrm{\Lambda }\setminus \alpha }\rho _\mathrm{\Lambda }`$ is the cluster density matrix ($`\mathrm{Tr}_{\mathrm{\Lambda }\setminus \alpha }`$ denotes a summation over all degrees of freedom except those belonging to the cluster $`\alpha `$), $`S_\alpha =-\mathrm{Tr}(\rho _\alpha \mathrm{ln}\rho _\alpha )`$ is the cluster entropy and the coefficients $`a_\alpha `$ can be easily obtained from the set of linear equations $$\sum _{\beta \subseteq \alpha \in M}a_\alpha =1,\qquad \beta \in M.$$ (5) The cluster density matrices must satisfy the following normalization and compatibility conditions $$\mathrm{Tr}\rho _\alpha =1,\quad \alpha \in M\qquad \mathrm{and}\qquad \rho _\alpha =\mathrm{Tr}_{\beta \setminus \alpha }\rho _\beta ,\quad \alpha \subset \beta \in M.$$ (6) Notice that (4) would
still be exact if the density matrix $`\rho _\mathrm{\Lambda }`$ of the whole lattice could be written exactly as a product of cluster density matrices in the form $$\rho _\mathrm{\Lambda }=\prod _{\alpha \in M}(\rho _\alpha )^{a_\alpha }.$$ (7) ## IV Exactness of the cluster variation method on disorder varieties There are two properties of the disorder varieties which suggest, at least for two-dimensional models, that the CVM might be exact on them. One is the one-dimensional-like character of the pair correlations. In fact, it is known that for a one-dimensional model with NN interactions, the pair approximation of the CVM (that is, the approximation in which the maximal clusters are the NN pairs), which is equivalent to the Bethe-Peierls approximation, is exact. The other property, still valid for two-dimensional models, is related to a result by Baxter. He showed that the eigenvector (corresponding to the largest eigenvalue) of the diagonal to diagonal transfer matrix is simply the product of a sequence of two-site (NN) factors. Since the density matrix of a diagonal cluster is the square of this eigenvector, also the density matrix has a product structure. As we have seen in the previous section, when the density matrix has a suitable product structure the CVM becomes exact. Therefore one can hope to find a CVM approximation which is exact on the disorder variety of a given two-dimensional model. In the square lattice case a good candidate is the plaquette approximation, which is equivalent to the Kramers-Wannier approximation, which in turn has long been known to correspond to a variational approximation in which the largest eigenvalue of the transfer matrix is sought within a restricted space of factorized vectors.
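The counting constraints (5) can be solved mechanically: starting from the maximal clusters and moving downward, $`a_\alpha =1-\sum _{\beta \supset \alpha }a_\beta `$. The following Python sketch (a $`4\times 4`$ torus is an arbitrary illustrative choice) recovers the coefficients $`a_{\mathrm{plaq}}=1`$, $`a_{\mathrm{pair}}=-1`$, $`a_{\mathrm{site}}=1`$ of the plaquette approximation, i.e. the exponents appearing in equation (8) below.

```python
from itertools import combinations

def cvm_coefficients(L=4):
    """Moebius coefficients a_alpha (eq. 5) for the plaquette approximation:
    M = the plaquettes of an L x L torus, closed under intersections."""
    plaqs = [frozenset({(x, y), ((x + 1) % L, y),
                        (x, (y + 1) % L), ((x + 1) % L, (y + 1) % L)})
             for x in range(L) for y in range(L)]
    M = set(plaqs)
    while True:                                   # close M under intersections
        extra = {a & b for a, b in combinations(M, 2)
                 if a & b and a & b not in M}
        if not extra:
            break
        M |= extra
    a = {}
    for alpha in sorted(M, key=len, reverse=True):
        # a_alpha = 1 - sum of a_beta over beta strictly containing alpha
        a[alpha] = 1 - sum(a[beta] for beta in M if alpha < beta)
    return a

a = cvm_coefficients()
by_size = {}
for alpha, coef in a.items():
    by_size.setdefault(len(alpha), set()).add(coef)
print(by_size)   # coefficient per cluster size: plaquettes, pairs, sites
```

Each edge lies in two plaquettes, so $`a_{\mathrm{pair}}=1-2=-1`$; each site lies in four plaquettes and four pairs, so $`a_{\mathrm{site}}=1-(4-4)=1`$, and the sum rule (5) holds for every subcluster.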
In the case of the plaquette approximation for a model defined on the square lattice, the condition (7), which implies the exactness of the approximation, becomes $$\rho _\mathrm{\Lambda }=\frac{\prod _{\mathrm{plaq}}\rho _{\mathrm{plaq}}\prod _{\mathrm{site}}\rho _{\mathrm{site}}}{\prod _{\mathrm{pair}}\rho _{\mathrm{pair}}},$$ (8) where $`\rho _\mathrm{\Lambda }`$ denotes the density matrix of the whole lattice and the products are to be taken over all plaquettes, pairs and sites of the lattice. The above equation should however be taken with some care, since it is known that not all local thermodynamic states (i.e. density matrices) can be extended to the whole lattice. Consider as an example a model of Ising spins $`\sigma _i=\pm 1`$ in its disordered phase, which will be studied in detail below. One can easily check on small lattices that, using a generic plaquette density matrix and the pair and site matrices derived from it by partial traces, (8) leads to a $`\rho _\mathrm{\Lambda }`$ which is not correctly normalized. In the case of open boundary conditions (with this choice the sites and the pairs lying at the boundary do not enter the products in (8)) the correct normalization is achieved only if $`d=c^2`$, where $`c=\langle \sigma _i\sigma _j\rangle _{\mathrm{NN}}`$ and $`d=\langle \sigma _i\sigma _j\rangle _{\mathrm{NNN}}`$ are the NN and NNN correlations, respectively. When the condition $`d=c^2`$ holds, the procedure of extending local density matrices to larger clusters is well defined, in the sense that by partial traces one can reobtain the local density matrices which were used to build the larger ones. In addition, one can verify that the density matrix of any cluster admits a decomposition into a product of plaquette, pair and site density matrices, with exponents given by the CVM rules. For instance, with reference to Fig.
1, in the case of the $`3\times 3`$ square we have $$\rho _9(\tau _1,\mathrm{\dots },\tau _9)=\frac{\rho _{\mathrm{plaq}}(\tau _1,\tau _2,\tau _5,\tau _4)\rho _{\mathrm{plaq}}(\tau _2,\tau _3,\tau _6,\tau _5)\rho _{\mathrm{plaq}}(\tau _4,\tau _5,\tau _8,\tau _7)\rho _{\mathrm{plaq}}(\tau _5,\tau _6,\tau _9,\tau _8)\rho _{\mathrm{site}}(\tau _5)}{\rho _{\mathrm{pair}}(\tau _2,\tau _5)\rho _{\mathrm{pair}}(\tau _5,\tau _8)\rho _{\mathrm{pair}}(\tau _4,\tau _5)\rho _{\mathrm{pair}}(\tau _5,\tau _6)},$$ (9) while for the zig-zag chain $$\rho _{\mathrm{chain}}(\sigma _1,\sigma _2,\mathrm{\dots },\sigma _L)=\frac{\rho _{\mathrm{pair}}(\sigma _1,\sigma _2)\rho _{\mathrm{pair}}(\sigma _2,\sigma _3)\mathrm{\cdots }\rho _{\mathrm{pair}}(\sigma _{L-1},\sigma _L)}{\rho _{\mathrm{site}}(\sigma _2)\rho _{\mathrm{site}}(\sigma _3)\mathrm{\cdots }\rho _{\mathrm{site}}(\sigma _{L-1})}.$$ (10) As a consequence, also the pair correlation function has a very simple product form, that is (labeling the spin variables by the site coordinates) $$g(x,y)=\langle \sigma (x_0,y_0)\sigma (x_0+x,y_0+y)\rangle =c^{|x|+|y|}.$$ (11) The result (10) is equivalent to the result by Baxter that, on disorder varieties, the eigenvector of the diagonal to diagonal transfer matrix, corresponding to the largest eigenvalue, can be written as a product of NN pair terms. $`\rho _{\mathrm{chain}}`$ is just the square of this eigenvector, and the site factors which appear in the denominator can be easily associated, in a symmetric way, to the adjacent pairs. This shows (although this is not a rigorous proof) that when the plaquette approximation is exact for a model of Ising spins on the square lattice, then the model is at a point of the disorder variety in its parameter space. Let us finally study in detail the square lattice Ising model with NN, NNN and plaquette interactions.
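The product structure (10) and the exponential correlations (11) are easy to verify numerically for a short open chain. In the sketch below the pair density matrices are parameterized by the NN correlation $`c`$ through the standard form $`\rho _{\mathrm{pair}}(s,s^{\prime })=(1+css^{\prime })/4`$ (an illustrative parameterization, with $`\rho _{\mathrm{site}}=1/2`$ obtained by partial trace).

```python
def chain_probs(c, L):
    """Configuration probabilities of an open Ising chain built as in eq. (10)
    from rho_pair(s,s') = (1 + c*s*s')/4 and rho_site(s) = 1/2."""
    probs = {}
    for idx in range(2 ** L):
        cfg = tuple(1 if (idx >> k) & 1 else -1 for k in range(L))
        p = 1.0
        for k in range(L - 1):
            p *= (1.0 + c * cfg[k] * cfg[k + 1]) / 4.0
        p /= 0.5 ** (L - 2)          # divide out the L-2 interior site factors
        probs[cfg] = p
    return probs

c, L = 0.4, 8
probs = chain_probs(c, L)
assert abs(sum(probs.values()) - 1.0) < 1e-12    # rho_chain is normalized
for k in (1, 3, 7):
    corr = sum(p * cfg[0] * cfg[k] for cfg, p in probs.items())
    print(k, corr, c ** k)                        # correlations decay as c^k
```

The chain built from pair factors is automatically normalized and its two-point function is exactly $`c^k`$ at separation $`k`$, the one-dimensional behaviour that equation (11) extends to the whole plane on a disorder variety.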
The Hamiltonian of the model can be written in the form $$H=-J_1\sum _{\langle ij\rangle }\sigma _i\sigma _j-J_2\sum _{\langle \langle ij\rangle \rangle }\sigma _i\sigma _j-J_4\sum _{[ijkl]}\sigma _i\sigma _j\sigma _k\sigma _l,$$ (12) where the three sums run over NN pairs, NNN pairs and elementary plaquettes, and $`J_1`$, $`J_2`$ and $`J_4`$ are the NN, NNN and plaquette couplings, respectively. This is a special case of the models studied in . We shall first use the plaquette approximation of the CVM. Notice that this approximation has already been applied to the same model in . In particular, Sanchez reported closed-form expressions for the equilibrium density matrices and the momentum-space pair correlation function in the disordered phase. Morán-López and coworkers observed qualitatively the existence of a disorder locus in the phase diagram. Cirillo and coworkers calculated again the momentum-space pair correlation function, and on this basis they determined the location of the disorder line. Their pair correlation function coincides with that by Sanchez except for a misprint , but because of an additional approximation they obtained a disorder line which is only very close to the exact one, rather than coincident with it, as it should be on the basis of the results of the present paper. As a first step one can, at least at the numerical level, verify that the approximation is exact on the disorder variety using only the CVM. A simple way is to consider a hierarchy of approximations like the so-called C-series , in which the maximal cluster is a rectangle made of $`2\times L`$ sites, with $`L\ge 2`$ (the plaquette approximation is the first element of the C-series). It is found, with extremely high precision, that a sequence of approximations in this series gives identical results on the disorder variety of the model. Inspection of the pair correlations shows that (11) is also satisfied.
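Since on the disorder variety the pair correlations factorize as $`g(x,y)=c^{|x|}c^{|y|}`$ by (11), the corresponding structure factor is a product of two one-dimensional transforms, each of which is a simple geometric series. A quick numerical sketch of this identity (the value of $`c`$ is an arbitrary illustration):

```python
import numpy as np

c = 0.3                  # illustrative NN correlation, 0 < c < 1
xi = -1.0 / np.log(c)    # correlation length, so that c = exp(-1/xi)

p = np.linspace(-np.pi, np.pi, 201)

# Direct (truncated) lattice sum of sum_x c^|x| e^{ipx};
# the truncation error is ~c^200, i.e. utterly negligible here.
xs = np.arange(-200, 201)
S1_sum = (c ** np.abs(xs)[:, None] * np.exp(1j * xs[:, None] * p)).sum(axis=0).real

# Closed forms: the geometric-series result and its sinh/cosh rewriting.
S1_geo = (1 - c**2) / (1 - 2 * c * np.cos(p) + c**2)
S1_hyp = np.sinh(1 / xi) / (np.cosh(1 / xi) - np.cos(p))

assert np.allclose(S1_sum, S1_geo, atol=1e-10)
assert np.allclose(S1_geo, S1_hyp, atol=1e-12)
```

The sinh/cosh form is just the geometric series re-expressed through $`c=e^{-1/\xi }`$; the two-dimensional structure factor is then the product of two such one-dimensional factors.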
On the other hand, using published results , only a long but straightforward calculation is required to check that on the known disorder variety of the model one obtains the exact free energy. Looking at the pair correlations one also sees that the condition $`d=c^2`$ (see (11)) is satisfied on the variety of equation $$\mathrm{cosh}(2J_1)=\frac{\mathrm{exp}(2J_4)\mathrm{cosh}(4J_2)+\mathrm{exp}(2J_2)}{\mathrm{exp}(2J_2)+\mathrm{exp}(2J_4)},$$ (13) which is precisely the disorder variety of the model . The free energy per site can be written as $$f=\mathrm{ln}\left[\mathrm{exp}(J_4)+\mathrm{exp}(-J_4-2J_2)\right],$$ (14) and again coincides with the exact one, while the NN correlation is $$c=\frac{\mathrm{exp}(4J_2)-\mathrm{cosh}(2J_1)}{\mathrm{sinh}(2J_1)},$$ (15) the NNN correlation is $`d=c^2`$, and the plaquette correlation is $$q=\langle \sigma _i\sigma _j\sigma _k\sigma _l\rangle =\frac{\mathrm{exp}(4J_4)\left[1-\mathrm{exp}(8J_2)\right]+4\mathrm{exp}(2J_2)\left[\mathrm{exp}(2J_4)-\mathrm{exp}(2J_2)\right]}{\mathrm{exp}(4J_4)\left[1-\mathrm{exp}(8J_2)\right]+4\mathrm{exp}(2J_2)\left[\mathrm{exp}(2J_4)+\mathrm{exp}(2J_2)\right]}.$$ (16) Finally, since all the pair correlations are given simply by (11), we can easily calculate the momentum-space correlation function, or structure factor. We first rewrite (11) as $`g(x,y)=\mathrm{exp}\left(-{\displaystyle \frac{|x|+|y|}{\xi }}\right)`$, where $`\xi =-(\mathrm{ln}c)^{-1}`$. After a Fourier transform one finds $`S(p_x,p_y)=S_1(p_x)S_1(p_y)`$, where $$S_1(p)=\frac{\mathrm{sinh}(1/\xi )}{\mathrm{cosh}(1/\xi )-\mathrm{cos}p}.$$ (17) It can be verified that the structure factors calculated by Sanchez and (except for the misprint) by Cirillo and coworkers reduce to the above expression on the disorder line.

## V Conclusions

I have shown that the CVM gives exact results on the disorder varieties of two-dimensional Ising-like models.
In particular, I have considered the Ising model with NN, NNN and plaquette interactions on the square lattice, in the plaquette approximation of the CVM. In the disordered phase of the model, the CVM plaquette approximation becomes exact when the simple condition that the NNN pair correlation equals the square of the NN pair correlation is imposed, and this condition is shown to hold on the disorder variety, where the model can be solved in closed form. It is important to notice that, using the CVM, one can obtain any correlation function, since the procedure of extending the local thermodynamic state is well defined precisely on the disorder variety. Similar results can be obtained on the triangular lattice, as well as on other two-dimensional lattices.
# A new investigation on the Antlia Dwarf Galaxy

Based on observations collected with the VLT-UT1 telescope of ESO in Paranal, during the Science Verification Program.

## 1 Introduction

The global properties of Local Group (LG) galaxies play a key role for understanding both galaxy formation and evolution. Among them the dwarf galaxies (DGs) are particularly interesting, since they are a unique laboratory to address several open questions on low-luminosity galaxies. In particular, the global properties of nearby DGs are crucial to understand the relationship, if any, between different morphological types (Minniti & Zijlstra 1996), as well as to estimate how fundamental parameters such as the dark matter content, the chemical composition, and the star-formation history depend on the global luminosity, and in turn on the total mass of the DGs (Grebel 1998; Mateo 1998; van den Bergh 1999a). Moreover, since the LG contains both fairly isolated galaxies and dwarfs in subgroups, it allows us to investigate the environmental effects on galactic evolution. As a consequence, a detailed analysis of the evolutionary properties of their stellar component(s) is a fundamental step to shed new light not only on the formation and the interaction of the Galaxy and of M31 with their satellites, but also for understanding distant, unresolvable stellar systems such as the very low surface brightness galaxies (Whiting, Irwin, & Hau 1997, hereinafter WIH; Grebel 1998). The observational scenario on nearby DGs was further enriched by the evidence that the Antlia-Sextans clustering may be the nearest group of galaxies not bound to the LG (van den Bergh 1999b, hereinafter VDB). Ground-based data provided several deep and accurate CMDs for not-too-distant DGs (Smecker-Hane et al. 1994; Marconi et al. 1998), while data collected by the Hubble Space Telescope (HST) were crucial for assessing the stellar content of distant LG galaxies (see e.g., Mighell & Rich 1996; Buonanno et al.
1999; Caputo et al. 1999; Gallart et al. 1999, and references therein). Even though HST data provided high-quality CMDs for a large sample of DGs in the LG, in the near future the use of 8-10m telescopes can substantially improve our knowledge of their global properties. In fact, the large collecting area and the evidence that DGs are only marginally affected by crowding problems even in the innermost regions make this class of instruments particularly useful for investigating the stellar content of LG galaxies. In this paper, we present the results of an investigation of the Antlia dwarf galaxy based on B, V, I data collected with FORS I on the VLT during the SV program. These photometric data are plain evidence of the VLT capability to investigate DGs in the LG (but see also Tolstoy 1999). The layout of this paper is the following: in the next section we briefly present the observations and describe the procedures adopted for the reduction and the calibration of the data. In §3 we discuss the main features of the CMDs, together with the distance and metallicity estimates, while the Antlia-Sextans grouping is addressed in §4. Finally, a brief summary and the conclusions are outlined in §5.

## 2 Observations and data reduction

The data for Antlia have been requested and retrieved electronically from the ESO archive in Garching. The galaxy was observed through the standard Bessell B, V, I filters during the VLT-UT1 SV Program in January 1999 using the FORS I camera, which covers a $`6.8\times 6.8`$ arcmin field of view at 0.2 arcsec per pixel resolution. The seeing was excellent (0.45 - 0.75 arcsec). We used the standard reduction procedure reported in the Data Reduction Notes listed in the VLT web pages and the Daophot II package (Stetson, Davis, & Crabtree 1990, and references therein). To improve the detection limit we coadded all the frames taken with the same filter and then we selected the I coadded frame (5400 sec) to create a master catalogue of stellar objects.
The stars identified in this search were used as a template to fit both the B and the V coadded frames. By adopting a SHARP parameter of $`\pm 0.5`$, we detected in the coadded frames 3711 (B), 2958 (V), and 4583 (I) stars, down to $`B\simeq 27.0`$ mag, $`V\simeq 25.7`$ mag, and $`I\simeq 25.5`$ mag respectively. The calibration was derived by using a standard field observed during the same night and by adopting the average extinction coefficients. Completeness tests were performed by randomly adding to the coadded frames, in each magnitude bin (0.2 mag), 15-20% of the original number of stars. Only stars that were detected in the same position and within a magnitude bin of $`\pm 0.1`$ mag were considered as recovered. The simulations we performed suggest that a completeness of the order of 50% was reached at $`I\simeq 24.0`$ mag and $`B\simeq 25.1`$ mag respectively.

## 3 The Color-Magnitude diagram

The Antlia dwarf galaxy was originally noted by Corwin, de Vaucouleurs & de Vaucouleurs (1985) and by both Feitzinger & Galinski (1985) and Arp & Madore (1987), who also suggested that this stellar system could be a nearby galaxy. This finding was subsequently confirmed by Fouqué et al. (1990) who found, in a detailed $`HI`$ survey of southern late-type galaxies, that Antlia has a small radial velocity ($`V_r=361\pm 2\mathrm{km}\mathrm{s}^{-1}`$). However, a firm identification of Antlia was only recently provided by WIH, in a systematic search for VLSB galaxies in 894 ESO-SRC IIIaJ plates covering the entire southern sky, who also suggested that this galaxy is probably gravitationally bound to the dwarf irregular galaxy NGC3109. After its rediscovery, the global properties of this galaxy were investigated by Aparicio et al. (1997, hereinafter ADGMD) and by Sarajedini, Claver & Ostheimer (1997, hereinafter SCO).
These investigations brought out the following characteristics: a) low-mass, metal-poor stars are the main stellar component of the galaxy; b) there is evidence of an age gradient within the galaxy, with the young stellar component located close to the center; c) there is no evidence of an ongoing star formation process. A peculiar feature of Antlia is the amount of gas still present in this galaxy: indeed, WIH have inferred a total H I mass of $`8\times 10^5M_{\odot }`$. Following the classification suggested by Da Costa (1997), which is based on the ratio between the total mass of gas and the integrated B luminosity, Antlia should be classified as a (dusty) dwarf irregular (dIrr) rather than as a dwarf spheroidal (dSph) galaxy, since according to SCO it should also contain interstellar dust. Oddly enough, it also shows a smooth elliptical morphology and a very low stellar concentration in the innermost regions. These features are quite similar to those of other isolated dSph galaxies in the LG, such as the Tucana dwarf. However, Tucana does not show a significant amount of gas, and therefore Mateo (1998) classified Antlia as a transitional galaxy (dIrr/dSph), together with LGS3, Phoenix, DDO210, and Pegasus. Panels a) and b) of Figure 1 show the CMD of Antlia in the $`(I,V-I)`$ and in the $`(I,B-I)`$ plane, respectively. Only stars ($`\simeq 1700`$) located within 2.5 arcmin of the center of the galaxy were plotted in these diagrams. We selected the stars whose centroids in the B, V, and I frames were matched within one pixel (0.2 arcsec) and which satisfy tight constraints on photometric accuracy, namely $`\sigma _I\le 0.2`$, $`\sigma _{B-I}\le 0.3`$, $`\sigma _{V-I}\le 0.3`$, and SHARP parameter within $`\pm 0.5`$. Figure 1 discloses several interesting features. Noteworthy are the well-populated RGB, extending from $`I\simeq 24`$ to $`I\simeq 21.5`$ mag, and the sizable number of stars brighter than the TRGB. SCO identified the latter objects as a non-negligible population of Asymptotic Giant Branch stars, the progeny of an intermediate-age population.
On the basis of stellar counts in nearby fields (ADGMD), and of the evidence that these bright stars are not particularly concentrated toward the center of the galaxy (they appear at distances larger than 1 arcmin), the possibility that they are actually foreground objects cannot be ruled out. By accounting for the RGB intrinsic width, SCO suggested that Antlia could contain a sizable amount of interstellar dust. This suggestion was mainly based on the slope of the upper portion of the RGB, which in their $`(I,V-I)`$ CMD mimics the slope of objects affected by increasing reddening. This feature is not confirmed by the present photometry. The $`(I,B-I)`$ CMD is particularly compelling, since in this plane the slope of the reddening vector is steeper than in the $`(I,V-I)`$ CMD (see also ADGMD). Even though the origin of such a discrepancy cannot be firmly established, we suggest that the photometric error (the FWHM of the SCO data is roughly a factor of two larger than in our data) and the contamination by foreground objects might have introduced a spurious trend (see §4.2 in SCO). The sample of stars located in the blue region of the CMDs ($`(V-I)<0.7`$, $`(B-I)<1`$) plotted in Figure 1 is quite interesting. This feature, already found by SCO and by ADGMD, suggests the presence of a young stellar population. Moreover, their radial distribution clearly shows that this sample is strongly concentrated in the innermost regions of the galaxy. Therefore we confirm the difference in the radial distribution between the young and the old stellar components found by SCO and by ADGMD. The presence of this young stellar component, together with the evidence of a sizable amount of gas, is the clearest indication that Antlia should be classified as a dIrr rather than as a dSph galaxy.
In order to supply an estimate of the age of the blue stars, we plotted in the CMDs of Figure 1 evolutionary prescriptions (Cassisi 1999) for the H- and He-burning phases at two different stellar ages, namely $`t=80`$ Myr and $`t=150`$ Myr. At the same time, we also plotted the location of the RGB for two old stellar populations, at $`t\simeq 14`$ Gyr and $`t\simeq 0.8`$ Gyr respectively. Theoretical predictions were transformed into the observational plane by adopting the bolometric corrections and the color-temperature relations provided by Green (1988). The adopted distance modulus, metallicity and reddening are discussed in the next section. The comparison between theory and observations clearly shows that the position of the bright blue stars in Antlia is nicely reproduced by an isochrone with an age ranging from 100 to 150 Myr. This age range implies that the TO masses of the blue stars range from 3.5 to 4.5 $`M_{\odot }`$, and therefore that classical Cepheids with periods of the order of a few days should be present in this galaxy.

### 3.1 Distance and metallicity.

When dealing with composite stellar population systems for which only the bright end of the CMD is well sampled, as in the case of Antlia, the TRGB method turns out to be a valuable distance indicator. In fact, this standard candle works for all morphological types of galaxies, as long as an old stellar population is present. Moreover, the absolute I-Cousins magnitude of the TRGB presents a negligible dependence on metal content over a wide metallicity range, at least for $`[M/H]<-0.5`$. After the first semi-empirical calibration by Lee, Freedman & Madore (1993, hereinafter LFM), Salaris & Cassisi (1997, 1998, hereinafter SC97 and SC98) provided a new theoretical calibration of the TRGB method, characterized by a shift of $`\simeq 0.15`$ mag (toward brighter magnitudes) in the absolute I magnitude of the tip.
Note that the Antlia distance estimates available in the literature are based on the TRGB calibration provided by LFM, while here we adopt the SC98 calibration. To estimate the apparent magnitude of the TRGB we use the differential luminosity function (LF) of the RGB stars. We have not performed any correction since, as clearly shown by ADGMD, the stars located close to the tip are only marginally affected by foreground contamination. The top panel of Figure 2 shows the differential LF evaluated by adopting a magnitude bin of 0.20 mag. The error bars for each bin were estimated taking into account both the statistical fluctuations and the corrections for completeness. In order to obtain a robust determination of the RGB tip discontinuity, the LF was convolved with an edge-detecting Sobel filter $`[-1,0,+1]`$. This function shows a sharp peak at $`I_{TRGB}=21.7\pm 0.10`$ mag, which marks the appearance of the RGB tip. The error was estimated on the basis of the adopted magnitude bin. This value is, within current observational uncertainties, in good agreement with the values obtained by SCO ($`I_{TRGB}=21.63\pm 0.05`$) and ADGMD ($`I_{TRGB}=21.64\pm 0.04`$), thus supporting a posteriori the negligible effect of foreground contamination on the LF of the RGB stars. Our determination is $`\simeq 0.2`$ mag fainter than the estimate provided by WIH, i.e. $`I_{TRGB}=21.4\pm 0.1`$. This discrepancy was already noted by ADGMD, who suggested that it could be due to crowding problems at the limiting magnitude of the WIH photometry. By interpolating the maps of Burstein & Heiles (1982) we estimated $`E(B-V)=0.03\pm 0.02`$, which implies a foreground reddening of $`E(V-I)=0.04\pm 0.03`$. By using the extinction relation provided by Cardelli et al. (1989), this reddening implies an extinction $`A_I=0.04\pm 0.03`$ mag.
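The tip-detection step just described (convolving the binned LF with a $`[-1,0,+1]`$ edge filter) can be illustrated on a toy luminosity function; the bin counts below are invented for the illustration and are not the Antlia data:

```python
import numpy as np

# Toy luminosity function: 0.2 mag bins with a jump in the star counts at an
# assumed tip magnitude (counts invented for illustration only).
mag = np.arange(20.5, 23.5, 0.2)          # I-band bin centers
tip = 21.7                                 # toy discontinuity
counts = np.where(mag < tip, 2.0, 40.0)    # few stars above the tip, many below
counts += np.array([1, -1] * (len(mag) // 2 + 1))[:len(mag)]  # small "noise"

# np.convolve flips the kernel, so passing [+1, 0, -1] implements the
# [-1, 0, +1] difference counts[n+1] - counts[n-1].
response = np.convolve(counts, [+1, 0, -1], mode="same")

# The filter response peaks at the bin where the counts jump.
detected = mag[np.argmax(response)]
assert abs(detected - tip) < 0.21   # recovered to within one 0.2 mag bin
```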
Given this small extinction, the uncertainty on the I magnitude of the RGB tip is dominated by photometric and completeness errors, since the reddening correction in this band is significantly smaller ($`I_{TRGB,0}=21.66\pm 0.10`$ mag). This notwithstanding, our estimate of $`I_{TRGB,0}`$ is in good agreement with the values suggested by SCO ($`I_{TRGB,0}=21.57`$ mag) and by ADGMD ($`I_{TRGB,0}=21.57\pm 0.05`$ mag). The TRGB method (see LFM, SC97 and SC98) is an iterative procedure which simultaneously gives both the distance and the mean metallicity of the old stellar population in the galaxy. The metallicity evaluations are based on the calibration, as a function of the metal content, of the dereddened color index $`(V-I)`$ of the RGB half a magnitude below the tip, $`(V-I)_{-3.5,0}`$. By adopting the SC98 calibrations we find a distance modulus of $`(m-M)_0=25.89\pm 0.10`$ mag, i.e. $`D=1.51\pm 0.07`$ Mpc, and $`(V-I)_{-3.5,0}`$=1.36 mag, which in turn translates into a mean metallicity of $`[M/H]\simeq -1.3\pm 0.15`$. This distance modulus is roughly 13% larger than the values found both by SCO ($`(m-M)_0=25.62\pm 0.12`$ mag) and by ADGMD ($`(m-M)_0=25.6\pm 0.1`$ mag). As far as the mean metallicity is concerned, our estimate is $`\simeq 0.3`$ dex higher than the ADGMD evaluation ($`[M/H]\simeq -1.6\pm 0.1`$) and $`\simeq 0.6`$ dex higher than the SCO determination ($`[M/H]\simeq -1.9\pm 0.13`$). As expected, the disagreement is mainly due to the differences between the calibrations of the TRGB method and of the RGB color index vs. metallicity relation provided by LFM and by SC98. A thorough discussion of the differences between these calibrations was already provided by SC98. However, the discrepancy with the metallicity evaluation provided by SCO is mainly due to the different approach adopted by these authors for estimating the metallicity along the RGB.
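The numbers quoted above are tied together by the distance-modulus relation $`D=10^{[(m-M)_0-25]/5}`$ Mpc; the absolute tip magnitude $`M_I\simeq -4.2`$ written below is simply the value these numbers imply, not a quotation of the SC98 calibration itself. A sketch of the arithmetic:

```python
# Distance-modulus bookkeeping for the TRGB estimate quoted in the text.
I_trgb0 = 21.66        # dereddened tip magnitude (mag), from the text
mu = 25.89             # adopted distance modulus (m - M)_0 (mag), from the text

M_I = I_trgb0 - mu                    # implied absolute I magnitude of the tip
D_mpc = 10 ** ((mu - 25.0) / 5.0)     # distance modulus -> distance in Mpc

assert abs(M_I - (-4.23)) < 0.01
assert abs(D_mpc - 1.51) < 0.01       # matches the quoted 1.51 Mpc
```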
In passing, we note that our metallicity estimate is in fair agreement with the metallicity obtained by comparing the RGB loci of Antlia directly with the RGB loci of the galactic globular clusters provided by Da Costa & Armandroff (1990).

### 3.2 Some hints on RGB stars.

The observed color width of the Antlia RGB at $`I\simeq 22.1`$ mag (i.e. $`\simeq 0.5`$ mag below the RGB tip) is $`\mathrm{\Delta }(V-I)\simeq 0.25`$ mag. However, since at $`I\simeq 22.1`$ mag our mean photometric error is $`\sigma _{V-I}=0.05`$ mag, it turns out that the intrinsic color width of the RGB is roughly equal to 0.2 mag. If we assume that this color dispersion is due to a spread in metallicity, the metal content of Antlia should be in the range $`-1.8<[M/H]<-1.0`$. However, stellar models supply important information to address this problem. In fact, at fixed metal content an increase in age moves the RGB loci toward redder colors, while a decrease in age causes a decrease in the TRGB luminosity (see e.g. SC98; Caputo et al. 1999). Figure 1 shows the theoretical prescriptions at fixed metallicity ($`[M/H]=-1.3`$) for the RGB loci of an old (14 Gyr, solid line) and a young (0.8 Gyr, dashed line) stellar population. As a result, the RGB color dispersion in Antlia could be due to a mix of an old and a young/intermediate-age stellar population. The LF shows a further interesting feature: for magnitudes dimmer than $`I\simeq 22.3`$ mag one can notice a significant increase in the number of RGB stars. This evidence is supported by the substantial change in the slope of the cumulative LF plotted in the bottom panel of Figure 2, and can also be identified in the CMDs plotted in Figure 1. To assess the nature of this feature, Figure 2 shows the comparison between the observed differential LF and the theoretical LF for an age of 14 Gyr. We find that for magnitudes fainter than $`I\simeq 22.4`$ mag, the observed stellar counts are, at the $`2\sigma `$ level, larger than the theoretical counts expected for an old stellar population.
This finding could be interpreted as evidence of a secondary sample of RGB stars connected with a stellar population younger than the main RGB stellar component. In fact, for stellar ages lower than $`\simeq 1`$ Gyr, the TRGB luminosity becomes quite sensitive to the age. To constrain the age of such a population we take into account the I magnitude difference between the RGB tip associated with the oldest component, located at $`I_{TRGB}=21.65`$ mag, and the LF discontinuity located at fainter magnitudes, i.e. $`I\simeq 22.4`$ mag. By adopting this approach, and by assuming for the younger stellar component the same metallicity as the oldest one, we estimate that its age is $`\simeq 0.7`$ Gyr.

## 4 Some hints on the Antlia-Sextans grouping.

One of the main reasons why the global properties of Antlia are so interesting is that it should be located beyond the zero-velocity surface of the LG, and therefore its motion can be used to provide independent estimates of both the LG dark matter halo and the age of the Universe (Lynden-Bell 1981). However, the location of this galaxy within the LG is still controversial. In fact, SCO suggested, on the basis of its position in the heliocentric radial velocity versus apex angle diagram, that it is located near the outer edge of the LG. At the same time, it was pointed out by ADGMD, on the basis of the relative velocity between Antlia and NGC3109, that it is unlikely that this pair of galaxies is gravitationally bound. On the other hand, Yahil, Tammann, & Sandage (1977), Lynden-Bell & Lin (1977) and, more recently, VDB in a detailed analysis of the nearest groups of galaxies brought out that Antlia, together with Sextans A/B and NGC3109, forms a small cluster of galaxies which is not bound to the LG, and therefore that it is expanding with the Hubble flow. He also suggested that Antlia is probably a satellite of NGC3109, and that, to be gravitationally stable, this pair should contain a sizable amount of dark matter.
By adopting the distance moduli, with the relative errors, estimated by SC98 (see their Table 2, column 9 and Table 3, column 2) by means of the TRGB method for the other three members of this group, we find that the corresponding distances are: D(Sextans A)=$`1.51\pm 0.1`$ Mpc, D(Sextans B)=$`1.45\pm 0.09`$ Mpc, and D(NGC3109)=$`1.37\pm 0.09`$ Mpc. Taken at face value, these distances, together with the Antlia distance, suggest that within the errors these four galaxies are probably located at the same distance, thus supporting the finding by VDB that they form a small nearby clustering. Note that, with the exception of NGC3109, these distance estimates are systematically larger than those adopted by VDB. The discrepancy ranges from 5% for Sextans A up to 13% for Antlia. The reason for the disagreement is partially due to the difference in the TRGB calibration and partially to the different standard candles adopted by VDB (classical Cepheids and the TRGB method). The main advantage of our distance determinations is that they are based on the same standard candle and on the same TRGB calibration. However, distance determinations based both on the LFM and on the SC98 calibration agree, within their errors, with the distance scale based on the Cepheid period-luminosity (PL) relation. This problem might be resolved by a detailed comparison of distance determinations based on Cepheid PL relations which account for the metallicity dependence and on the TRGB method (Bono, Marconi, & Stellingwerf 1999). In order to disentangle this thorny problem, we estimated the distance of NGC3109 by adopting the sample of classical Cepheids observed in this galaxy by Musella, Piotto, & Capaccioli (1998) and the theoretical $`PL_I`$ and $`PL_V`$ relations for Z=0.004 provided by Bono et al. (1999). Interestingly enough, we find that the reddening-corrected distance moduli in these two bands are $`25.8\pm 0.1`$ mag and $`25.82\pm 0.08`$ mag, respectively.
Within the errors, which account only for the intrinsic dispersion, these distance determinations seem to favor the SC98 over the LFM calibration of the TRGB method. In fact, SC98 derived a distance modulus for NGC3109 of $`25.69\pm 0.14`$ mag, which is in good agreement with the Cepheid distance, while Lee (1993), by adopting the LFM calibration, found a distance modulus of $`25.45\pm 0.15`$ mag, and the corresponding distance is 15% smaller than the Cepheid distance. On the basis of this finding, it goes without saying that DGs which host both young and old stellar populations can play a key role in settling the dependence of the Cepheid distance scale on metallicity, since these stellar systems are characterized by a much smaller metallicity gradient when compared with large spiral galaxies (Mateo 1998). By taking into account the TRGB distances of NGC3109 and Antlia and their separation on the sky ($`1^{\circ }.18`$), we estimated that the projected distance between these galaxies is $`\simeq 140`$ kpc. The projected distance decreases to $`\simeq 70`$ kpc if we use the new NGC3109 distance based on the Cepheids. By adopting these two different projected distances and equation (4) of VDB, we find that this system should have a total mass larger than $`7.8\times 10^{10}M_{\odot }`$ (TRGB distances) or $`4.0\times 10^{10}M_{\odot }`$ (Cepheid distance to NGC3109) to be bound. These values translate, by assuming $`m_{V,0}(Antlia)=15.58`$ (ADGMD) and $`m_{V,0}(NGC3109)=9.63`$ (Carignan 1985; Minniti et al. 1999), into total mass-to-light ratios of $`(M/L_V)_0\simeq 350`$ and 170 in solar units. Taken at face value, these ratios imply that this system should contain an amount of dark matter which is at least a factor of 2-4 larger than in any other dwarf in the LG (see Table 4 in Mateo 1998). Therefore, it seems unlikely that these two galaxies are gravitationally bound.
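The separations quoted above follow from simple geometry: at these distances an angular separation of $`1^{\circ }.18`$ corresponds to a transverse offset of only $`30`$ kpc, so the separation is dominated by the line-of-sight distance difference. A sketch of the arithmetic (distances as quoted in the text; the small-angle combination is this example's own bookkeeping):

```python
import math

theta = math.radians(1.18)   # angular separation of Antlia and NGC3109 on the sky

def separation_kpc(d_antlia_mpc, d_ngc3109_mpc):
    """3D separation from the two distances and the angular separation
    (small-angle approximation for the transverse offset)."""
    transverse = 0.5 * (d_antlia_mpc + d_ngc3109_mpc) * 1000.0 * theta
    line_of_sight = abs(d_antlia_mpc - d_ngc3109_mpc) * 1000.0
    return math.hypot(transverse, line_of_sight)

# TRGB distances: D(Antlia) = 1.51 Mpc, D(NGC3109) = 1.37 Mpc
s_trgb = separation_kpc(1.51, 1.37)
# Cepheid distance to NGC3109: D = 1.445 Mpc (mu = 25.8)
s_cep = separation_kpc(1.51, 1.445)

assert 135 < s_trgb < 150    # ~140 kpc, as quoted
assert 65 < s_cep < 80       # ~70 kpc, as quoted
```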
Finally, we mention that the increase in the mean distances supplies straightforward support to the evidence, brought out by VDB, that the Ant-Sex clustering is located beyond the zero-velocity surface of the LG.

## 5 Summary and conclusions.

Two CMDs ($`(I,V-I)`$ and $`(I,B-I)`$), together with the LF in the I band, based on photometric data collected with FORS I during the SV program of the VLT, were used to constrain the global properties of Antlia. The new data confirm, as originally suggested by SCO and ADGMD, that low-mass, metal-poor stars with an age of the order of 10 Gyr are the main stellar component of this galaxy, and that young blue stars are located close to the center. The presence of interstellar dust suggested by SCO is not confirmed by the current photometric data. The comparison between theory and observations suggests that the young stellar component is characterized by an age ranging from 100 to 150 Myr, and in turn that classical Cepheids with periods of the order of a few days should be present in this galaxy. By adopting the calibrations of the TRGB method and of the color index $`V-I`$ vs. metallicity suggested by SC98, we estimated that the distance modulus of Antlia is $`(m-M)_0=25.89\pm 0.10`$ mag (i.e. $`D=1.51\pm 0.07`$ Mpc), while its mean metallicity is $`[M/H]=-1.3\pm 0.15`$. This distance estimate is 13% larger than the distance determinations provided by SCO and ADGMD, while the mean metallicity is 0.3 dex higher than the value suggested by ADGMD. The disagreement with previous estimates available in the literature is mainly due to systematic differences between the calibrations of the TRGB method and of the color index vs. metallicity relation provided by LFM and SC98. Interestingly enough, we find that the differential LF shows a secondary peak at $`I\simeq 22.5`$ mag which exceeds the theoretical predictions at the $`2\sigma `$ level. We suggest that this feature could be due to a secondary young/intermediate-age stellar component.
By assuming that this sample of RGB stars has the same metallicity as the old component, we estimated that its age should be $`\simeq 0.7`$ Gyr. This evidence, together with the appearance of the blue stars, suggests that after the initial burst, which took place approximately 10 Gyr ago, this galaxy experienced two further star formation episodes, $`\simeq 0.7`$ and $`\simeq 0.1`$ Gyr ago. Finally, by using the TRGB method we derived in a homogeneous context the distances of Sextans A/B and NGC3109, and we find that these three galaxies, together with Antlia, are located within the errors at the same distance, thus supporting the finding by VDB that these galaxies form a small nearby clustering. The new distances also support the evidence that this grouping could be located beyond the zero-velocity surface of the LG (VDB). Obviously, new observations aimed at detecting both horizontal branch stars and RR Lyrae stars, as well as at detecting and measuring classical Cepheids, can supply fundamental constraints on the intrinsic distance and on the global properties of this intriguing neighborhood. * It is a pleasure to thank V. Castellani and M. Marconi for many interesting discussions on an early draft of this paper. We also warmly acknowledge M. Salaris for many stimulating suggestions on the content of the paper. Detailed and pertinent comments from the referee, Sydney van den Bergh, have contributed to improving the content of this paper.
# Footprints of the Newly-Discovered Vela Supernova in Antarctic Ice Cores?

C.P. Burgess<sup>a</sup> and K. Zuber<sup>b</sup>

<sup>a</sup> Physics Department, McGill University, 3600 University St., Montréal, Québec, CANADA, H3A 2T8.

<sup>b</sup> Lehrstuhl für Exp. Physik IV, Universität Dortmund, 44221 Dortmund, GERMANY.

Abstract

The recently-discovered, nearby young supernova remnant in the southeast corner of the older Vela supernova remnant may have been seen in measurements of nitrate abundances in Antarctic ice cores. Such an interpretation of this twenty-year-old ice-core data would provide a more accurate dating of this supernova than is possible purely using astrophysical techniques. It permits an inference of the supernova's <sup>44</sup>Ti yield purely on an observational basis, without reference to supernova modelling. The resulting estimates of the supernova distance and light-arrival time are 200 pc and 700 years ago, implying an expansion speed of 5,000 km/s for the supernova remnant. Such an expansion speed has been argued elsewhere to imply the explosion to have been a 15 $`M_{\odot }`$ Type II supernova. This interpretation also adds new evidence to the debate as to whether nearby supernovae can measurably affect nitrate abundances in polar ice cores.

Only a handful of supernovae have exploded over the last thousand years within several kpc of the Earth. To this select group – which is summarized<sup>1</sup> in Table 1 – there has recently been a new addition, due to the discovery of a young supernova remnant in ROSAT X-ray data, RX J0852.0-4622, quite nearby .

<sup>1</sup> The more recent supernova Cassiopeia A of around 1680 appears not to have been widely seen, if it was seen at all .
This remnant has RA $`8^h\mathrm{\hspace{0.17em}52}^m`$ and Declination $`-46^o22^{\prime }`$ (2000 epoch), and in the likely event that RX J0852.0 - 4622 is identical to the COMPTEL Gamma Ray source GRO J0852-4642 it should be around 200 pc away, with its light potentially first arriving at Earth as early as 700 years ago. Although there is no visual record of this supernova, its proximity to the Earth suggests it might have left other calling cards which might yet be found. To pursue this we have searched the literature on geophysical supernova signatures. It is the purpose of this letter to point out that supernova RX J0852.0 - 4622 indeed appears to have left its mark, through its influence on the nitrate abundances in twenty-year-old ice cores which were drilled at the South Pole station.

| Date | Name | RA (1950) | Dec (1950) | Visual Magnitude | Distance (kpc) |
| --- | --- | --- | --- | --- | --- |
| 1006, 3 April | | 15 10 | $`-40`$ | $`-9.5`$ | 1.3 |
| 1054, 4 July | Crab | 05 40 | $`+20`$ | $`-4`$ | 2.2 |
| 1181, 6 August | | 01 30 | $`+65`$ | | 2.6 |
| 1572, 8 November | Tycho | 00 20 | $`+65`$ | $`-4`$ | 2.7 |
| 1604, 8 October | Kepler | 17 30 | $`-20`$ | $`-3`$ | 4.2 |

Table (1): Supernova observations within the last millennium. In their original publication , the drillers of this ice core identified within it three distinctive spikes in the nitrate abundance, whose dates of deposition correspond to the dates of the three latest supernovae listed in Table 1. (Their core sample was not sufficiently deep to contain those of 1054 or 1006.) These spikes are easily seen in Fig. 1, which is reproduced from Ref. . Also seen in Fig. 1 is a fourth clear spike in the nitrate abundance, which could not be attributed to any supernova known at the time.
It is remarkable that this fourth spike corresponds precisely with the time when light – including X- and gamma rays – from the recently-discovered Vela supernova would have been arriving at the Earth! We have found no other geophysical signals for this supernova, and our search for these unearthed an interesting controversy, regarding which the recent Vela supernova may shed new light. The controversy concerns whether or not nearby supernovae can be detected by studying the concentration of nitrate deposition as a function of depth in polar ice cores. Supernovae have been argued to have produced observable changes in geophysical isotope abundances, and there is little question that supernovae can produce $`NO_3^{-}`$ when the ionizing radiation they generate impinges on the atmosphere. What is not clear is whether this source of atmospheric nitrates is detectable over other sources in ice removed from polar core samples. The evidence given in ref. , that polar ice cores can register $`NO_3^{-}`$ fluctuations of a cosmogenic origin, was supported by observations of the nitrate abundance in Antarctic ice cores taken near the Vostok station (78<sup>o</sup> 28' S, 106<sup>o</sup> 48' E). The authors of ref. claim to find evidence for a correlation between the nitrate abundances and the cyclic variations in the solar activity. Because the overall nitrate deposition rate was found to be smaller in Vostok cores than in those from the South Pole, it was not possible to confirm at Vostok that nitrate abundances correlate with supernovae, although (by eye) some increase in the nitrate levels is roughly coincident with the times of the various observed supernovae. The difference seen in the overall annual rate of nitrate fallout, which is lower at Vostok than at the South Pole, might itself be some evidence in favour of its being of cosmogenic origin.
As was observed in , such a difference could arise if the nitrate production were associated with aurorae in the Antarctic atmosphere. Since aurorae occur when charged particles impinge on the atmosphere, the geomagnetic field places them in a torus centred on the magnetic pole. (Ionizing bremsstrahlung $`X`$-rays from these particles are also directed downwards and so ionize the atmosphere preferentially beneath the aurorae.) In the southern hemisphere this makes aurorae more abundant over the South Pole than over the Vostok station. If the nitrates precipitate rapidly, the nitrate abundance deposited on the surface could also be higher at the South Pole than at Vostok. On the other hand, searches using Greenland ice cores in the early 1980's show no evidence for correlations between nitrate levels and supernovae. We have ourselves examined data for chemical depositions in ice cores taken in the early 1990's from the Antarctic Taylor Dome (77<sup>o</sup> 48' S, 158<sup>o</sup> 43' E — reasonably close to Vostok), and no spectacular nitrate peaking appears at depths corresponding to known supernovae (although some suggestive spikes do appear in the abundances of other ions, such as $`Cl^{-}`$). The controversy emerges because these observations permit two different conclusions: 1. Cosmogenic influences on ion abundances in polar ice cores are swamped by terrestrial influences; or 2. Cosmogenic sources can detectably influence glacial ion abundances, but their fallout to the surface is uneven over the Earth's surface. To the supporters of option 1, the spikes of ref. must be due to some kind of experimental error, or to some other kind of terrestrial source. Their correlation with observed supernovae would be coincidental. For the supporters of option 2, the difficulty is understanding how cores taken at some places can carry cosmogenic signals, while those taken at others do not.
Here we take the point of view that the agreement between the newly-discovered supernova, RX J0852.0 - 4622, and the fourth spike in the data of ref. , makes coincidence a less convincing explanation for the remarkable correlation between nitrate spikes and visible supernovae. We therefore adopt the point of view of option 2, in order to see what can be learnt about cosmogenic nitrate deposition on the Earth, as well as about the properties of the supernova itself. We find that several inferences may be drawn. 1. The Smoking Gun: First and foremost, the most obvious test of option 2 consists of further examination of ice cores taken at the South Pole. Taking the spikes in the data of ref. at face value means that some mechanism makes the nitrate fallout due to supernovae uneven around the globe, but has not removed the most recent signals at the South Pole itself. Although it is logically possible that the same mechanism might prevent deeper South Pole cores from carrying the evidence of the earlier supernovae of 1054 and 1006, this possibility seems unlikely given the presence of the four earlier spikes. Clearly, a comparison of the nitrate levels in more ice cores – especially those taken at the South Pole which are deep enough to include these last-mentioned supernovae – would be very useful to clarify the experimental situation. Furthermore, one might envisage searching for signals in the deposition rates of other chemical compounds such as $`Cl^{-}`$ and $`NH_4^{+}`$. 2.
Dating the New SN Remnant: The age ($`t`$) and distance ($`d`$) of RX J0852.0 - 4622 may be inferred from the observed intensity ($`f`$) of the <sup>44</sup>Ti decay gamma-ray line, as well as the angular size ($`\theta `$) of the supernova remnant, using $$f=\frac{1}{4\pi d^2}\left(\frac{Y_{44}}{m_{44}\tau _{44}}\right)e^{-t/\tau _{44}},\theta =\frac{v_mt}{d},$$ $`\left(1\right)\text{ }`$ if the <sup>44</sup>Ti yield ($`Y_{44}`$) of the supernova and the mean expansion velocity of its remnant ($`v_m`$) are known . (Here $`\tau _{44}\approx 90`$ yr is the <sup>44</sup>Ti lifetime, and $`m_{44}`$ is its atomic mass.) $`v_m`$ may be inferred from the present velocity of the shock wave, which is in turn found from the X-ray brightness which it produces as it slams into the surrounding medium, and $`Y_{44}`$ is taken from SN models. Not surprisingly, the values for $`t`$ and $`d`$ obtained in this way are subject to considerable uncertainty, with age estimates being potentially inaccurate by hundreds of years. This chain of inference may be reversed if the nitrate spikes in the South Pole ice core are due to supernova RX J0852.0 - 4622, because then more information is directly available from observations. For instance, a <sup>44</sup>Ti yield of $`5\times 10^{-5}M_{\odot}`$ may now be directly inferred from observations given the date of the ice core spike (using the inferred mean remnant expansion speed of 5,000 km/s), instead of being taken from numerical studies. Alternatively, profit may be made from the much better accuracy with which the ages of the nitrate spikes in the South Pole ice core are known. In order to estimate the error in determining the age for each depth of their South Pole ice sample, Rood et al. provide three possible chronologies for the same core. These indicate the date of the previously-unidentified nitrate spike to be within the range $`1320\pm 20`$ AD.
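As a numerical illustration (our own sketch, not from the letter), the dating relation in Eq. (1) can be inverted for the distance once the flux, age, yield, and lifetime are supplied. The input values below (the COMPTEL <sup>44</sup>Ti flux, the ice-core age, a model yield, and the 5,000 km/s shock speed) are all taken from the surrounding text; the function names are ours.

```python
import math

# Constants (cgs)
PC_CM = 3.086e18      # parsec in cm
YR_S = 3.156e7        # year in seconds
M_SUN_G = 1.989e33    # solar mass in grams
AMU_G = 1.661e-24     # atomic mass unit in grams

def distance_from_ti44(f44, t_yr, y44_msun, tau_yr=90.0, mass_number=44):
    """Invert Eq. (1), f = Y44 exp(-t/tau) / (4 pi d^2 m44 tau), for d in pc."""
    n_atoms = y44_msun * M_SUN_G / (mass_number * AMU_G)
    rate_now = n_atoms / (tau_yr * YR_S) * math.exp(-t_yr / tau_yr)  # decays/s today
    d_cm = math.sqrt(rate_now / (4.0 * math.pi * f44))
    return d_cm / PC_CM

def angular_radius_deg(v_km_s, t_yr, d_pc):
    """Second relation in Eq. (1): theta = v_m t / d, in degrees."""
    r_cm = v_km_s * 1e5 * t_yr * YR_S
    return math.degrees(r_cm / (d_pc * PC_CM))

f44 = 3.8e-5                                   # observed 44Ti flux, gamma/cm^2/s
t = 700.0                                      # yr since the light arrived (ice-core date)
d = distance_from_ti44(f44, t, y44_msun=5e-5)
theta = angular_radius_deg(5000.0, t, d)
```

With these inputs the inferred distance comes out near 200 pc and the angular radius near one degree, consistent with the numbers quoted above.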
If we take the <sup>44</sup>Ti yield from numerical models, then we may more precisely learn the expansion velocity of the supernova ejecta. The agreement of 1320 AD with the age determined from $`X`$-ray and $`\gamma `$-ray observations of the supernova remnant then indicates that the ejecta expansion velocity is close to the central value of 5,000 km/s assumed in ref. . Since the ratio between the intensity of two different gamma-ray lines is independent of the distance to the SN remnant, more may be learnt by comparing the intensity of the <sup>44</sup>Ti line with the <sup>26</sup>Al line, which has also been observed. Using the lifetimes $`\tau _{44}\approx 90`$ yr and $`\tau _{26}\approx 1.07\times 10^6`$ yr, one finds in this way $$\frac{f_{44}}{f_{26}}=\left(\frac{\tau _{26}m_{26}Y_{44}}{\tau _{44}m_{44}Y_{26}}\right)e^{-t/\tau _{44}}.$$ $`\left(2\right)\text{ }`$ A complication arises in this case because although the short $`90`$ yr lifetime of <sup>44</sup>Ti ensures the observed <sup>44</sup>Ti gamma flux comes from RX J0852.0 - 4622, the $`1.07\times 10^6`$ yr lifetime of <sup>26</sup>Al makes it impossible to be sure that these gamma rays are not coming from the older Vela remnant rather than just from RX J0852.0 - 4622. Two things may be learned here by assuming the ice-core date for RX J0852.0 - 4622. First, if one uses the results of numerical models to infer an upper limit, $`Y_{44}/Y_{26}<100`$ (or $`<10`$), together with the observed <sup>44</sup>Ti flux, $`f_{44}=(3.8\pm 0.7)\times 10^{-5}`$/cm<sup>2</sup>/s, then one finds a lower limit to the <sup>26</sup>Al flux from RX J0852.0 - 4622: $`f_{26}>1\times 10^{-7}`$/cm<sup>2</sup>/s (or $`>1\times 10^{-6}`$/cm<sup>2</sup>/s). Alternatively, if the observed point-source flux, $`f_{\mathrm{pt}}=(2.2\pm 0.5)\times 10^{-5}`$/cm<sup>2</sup>/s, of <sup>26</sup>Al gamma rays is assumed to be coming from RX J0852.0 - 4622, then we learn $`Y_{44}/Y_{26}\approx 0.5`$ (which is in agreement with ref. provided $`v_m=5,000`$ km/s). 3.
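The line-ratio argument of Eq. (2) can likewise be checked numerically. This is a hedged sketch of ours, with the fluxes and age taken from the text; <sup>26</sup>Al decay over 700 yr is neglected since $`\tau _{26}`$ vastly exceeds $`t`$.

```python
import math

def yield_ratio(f44, f26, t_yr, tau44_yr=90.0, tau26_yr=1.07e6, a44=44, a26=26):
    """Invert Eq. (2) for Y44/Y26; exp(-t/tau26) ~ 1 is dropped because tau26 >> t."""
    return (f44 / f26) * (tau44_yr * a44) / (tau26_yr * a26) * math.exp(t_yr / tau44_yr)

# Assumed inputs from the text: observed 44Ti flux, 26Al point-source flux, ice-core age.
ratio = yield_ratio(3.8e-5, 2.2e-5, 700.0)  # of order 0.5, as quoted in the text
```

Note that only the mass numbers enter the ratio, since each atomic mass is the mass number times the atomic mass unit.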
The Nature of the Supernova Explosion: As is argued in Ref. , an expansion velocity this large for the SNR argues that this was a 15 $`M_{\odot}`$ Type II supernova. Moreover, an age of 700 years gives further information, as can be seen from Fig. 2. A distance to the new Vela supernova of less than 250 pc gives a <sup>44</sup>Ti yield of less than $`10^{-4}M_{\odot}`$. This disfavours a Type Ia supernova explosion within a dense region as a possible progenitor of the new Vela supernova. Since it was so nearby, it would be worthwhile to look for other cosmogenic signals for this supernova, such as have been proposed for the very nearby Geminga event several hundred thousand years ago, through enhancements in the abundances of radionuclides in sediments. One might imagine even searching for signals due to the neutrino flux, since this should be as large as $`10^{16}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$. 4. The Distance to SN1006: As was already noticed in , this date for the arrival time of light from the supernova implies the supernova distance must be 200 pc, which is on the near side of the range which is allowed by the $`X`$-ray measurements. This makes it the closest supernova of the last millennium. Since this range was determined by comparing the brightness of remnant RX J0852.0 - 4622 with the remnant of the 1006 supernova, the 1006 remnant must be about 800 pc away, which is also at the near end of its allowed range. 5. The Distance-Dependence of the Nitrate Signal: It is tempting to observe that a distance of 200 pc to RX J0852.0 - 4622 makes this supernova 10 times closer than the next nearest SN remnant listed in Table 1. This raises the question as to why the flux of ionizing radiation was not therefore 100 times as large for this supernova as for all of the others, with a correspondingly large nitrate peak. Such a large variation in amplitude is clearly not visible for the peaks in Fig. (1).
We have three reasons not to be disturbed by this naive factor of 100 in radiation intensity. First, as mentioned in the previous item, the distance estimates to the supernovae of Table (1) carry relatively large uncertainties, with the remnant of SN1006 being possibly only 4 times as distant as RX J0852.0 - 4622. Second, all supernovae are not alike, and two supernovae can differ widely in their brightness even if they are equidistant. (This point is perhaps most dramatically illustrated by the non-observation of the supernova associated with the Cas A remnant.) Third, given that some poorly-understood mechanism is required to ensure that the rate of nitrate fallout is not uniform around the globe — as must be assumed if we are to interpret the Rood et al. spikes as being cosmogenic in origin — we should expect no simple connection between the size of a nitrate spike and the amount of ionizing radiation received at the Earth. 6. The Distribution of Nitrate Fallout: Finally, if the four nitrate spikes of the Rood et al. core are really associated with supernovae, then it still must be understood why nitrate levels of supernova origin are unevenly deposited around the globe, and why they are larger at the South Pole than they are in Greenland and elsewhere in Antarctica. One possibility is suggested if the ionization mechanism due to the supernova were associated with aurorae. Besides potentially explaining different nitrate deposition rates at different Antarctic sites if the settling rate is sufficiently fast, aurorae might also account for differences between the northern and southern hemispheres. For auroral production produced by protons directed to the Earth by solar flares, the conversion to $`NO_3^{-}`$ proceeds mainly at night, and so at high latitudes nitrate production proceeds most abundantly during the winter.
Since the five supernovae listed in Table (1) all occurred between April and early October, nitrate deposition in the northern hemisphere could be less efficient if the connection between supernovae and aurorae were also to cause more effective nitrate production during the southern winter. Of course there are also several problems with this kind of mechanism, which would have to be understood. First, association with aurorae usually means the ionization is accomplished by charged particles which preferentially hit the atmosphere near the magnetic poles because they move along the magnetic field lines of the Earth. But charged particles are not likely to have reached us yet from RX J0852.0 - 4622, since cosmic rays diffuse through the interstellar medium and would take tens of thousands of years to travel the intervening 200 pc. In addition, any such aurora-based scenario must also explain the absence of a solar-cycle dependence in the deposition rate in cores taken near the north magnetic pole. It is our hope that the remarkable correspondence between the arrival time of light from RX J0852.0 - 4622 and the date of ref. 's fourth spike will stimulate further progress in understanding the nature of terrestrial signals for nearby violent astrophysical events. Acknowledgments This research was partially funded by N.S.E.R.C. of Canada and les Fonds F.C.A.R. du Québec. We thank John Beacom for updating us on the supernova distances listed in Table 1. Figure captions Fig. 1: Original data on nitrate abundance as obtained by Rood et al. for a South Pole ice core. Clearly visible are the spikes which can be associated with supernova explosions. Fig. 2: Distance versus <sup>44</sup>Ti yield for two assumed lifetimes of <sup>44</sup>Ti for a given supernova 700 years ago.
The distance $`d`$ is determined by the gamma flux $`f_{44}`$ and the <sup>44</sup>Ti lifetime using the quadratic distance dependence of the ejected <sup>44</sup>Ti mass, $`Y_{44}=4\pi e^{t/\tau }m_{44}\tau f_{44}d^2`$. The lifetimes used are 87.5 years (dashed line) and 90.4 years (solid line) . As can be seen, for reasonable distances (below 250 pc) to the new Vela supernova remnant the <sup>44</sup>Ti yield is always below $`10^{-4}M_{\odot}`$. This disfavours SN Ia explosions in dense regions as a possible progenitor of the new Vela supernova. References Bernd Aschenbach, Discovery of a young nearby supernova remnant, Nature, v. 396, p. 141, 1998. A.F. Iyudin et al., Emission from <sup>44</sup>Ti associated with a previously unknown galactic supernova, Nature, v. 396, p. 142, 1998. W.B. Ashworth, A probable Flamsteed observation of the Cassiopeia supernova, Jour. Hist. Astron., v. 11, p. 1, 1980. Robert T. Rood, Craig L. Sarazin, Edward J. Zeller and Bruce C. Parker, $`X`$- or $`\gamma `$-rays from supernovae in glacial ice, Nature, v. 282, p. 701, 1979. T. Risbo, H.B. Clausen and K.L. Rasmussen, Supernovae and nitrate in the Greenland ice sheet, Nature, v. 294, p. 637, 1981. Bruce C. Parker, Edward J. Zeller and Anthony J. Gow, Nitrate fluctuations in Antarctic snow and firn: potential sources and mechanisms of formation, Annals of Glaciology, v. 3, p. 243, 1982. Michael Herron, Impurity sources of $`F^{-}`$, $`Cl^{-}`$, $`NO_3^{-}`$ and $`SO_4^{2-}`$ in Greenland and Antarctic precipitation, Journal of Geophysical Research, v. 87, no. C4, p. 3052, 1982. P.E. Damon, D. Kaimei, G.E. Kocharov, I.B. Mikheeva and A.N. Peristykh, Radiocarbon production by the gamma-ray component of supernova explosions, Radiocarbon 37 (1995) 599-604; P.E. Damon, G.E. Kocharov, A.N. Peristykh, I.B. Mikheeva and K.M. Dai, High energy gamma rays from SN1006AD, Proc. 24th Int. Cosmic Ray Conf., Rome 1995, v. 2, p. 311-314. M.A. Ruderman, Science, v. 184, p. 1079, 1974. R.C. Whitten, J. Cuzzi, W.J.
Borucki and J.H. Wolfe, Effect of nearby supernova explosions on atmospheric ozone, Nature, v. 263, p. 398, 1976. D.H. Clark, W.H. McCrea and F.R. Stephenson, Frequency of nearby supernovae and climatic and biological catastrophes, Nature, v. 265, p. 318, 1977. J.C. Stager, P.A. Mayewski, Abrupt early to mid-Holocene climate transition registered at the equator and the poles, Science, v. 276, p. 1834-1836; P.A. Mayewski et al., Climate Change During the Last Deglaciation in Antarctica, Science, v. 272, p. 1636-1638 (1996); E.J. Steig et al., Wisconsinan and Holocene climate history from an ice core at Taylor Dome, western Ross Embayment, Antarctica, Geografiska Annaler (in review). B. Aschenbach, A.F. Iyudin and V. Schönfelder, Constraints of age, distance and progenitor of the supernova remnant RX J0852.0 - 4622 / GRO J0852-4642, Astronomy and Astrophysics (to appear), (astro-ph/9909415). W. Chen and N. Gehrels, The progenitor of the new COMPTEL/ROSAT supernova remnant in Vela, Astrophysical Journal Letters, v. 514, p. L103 (1999), (astro-ph/9812154 v2). J.P. Halpern and S.S. Holt, Discovery of soft $`X`$-ray pulsations from the $`\gamma `$-ray source Geminga, Nature, v. 357, p. 222, 1992. G.F. Bignami, P.A. Caraveo and S. Mereghetti, The proper motion of Geminga's optical counterpart, Nature, v. 361, p. 704, 1993. Daniel Wang, Zhi-Yun Li and Mitchell C. Begelman, The $`X`$-ray-emitting trail of the nearby pulsar PSR1929+10, Nature, v. 364, p. 127, 1993. G. Cini Castagnoli and G. Bonino, Thermoluminescence in sediments and historical supernovae explosions, Il Nuovo Cimento, v. 5C, n. 4, p. 488, 1982. John Ellis, Brian D. Fields and David N. Schramm, Geological isotope anomalies as signatures of nearby supernovae, (astro-ph/9605128). Brian D. Fields and John Ellis, On deep ocean <sup>60</sup>Fe as a fossil of a near-earth supernova, preprint CERN-TH/98-373 (astro-ph/9811457). P.J.
Crutzen, Ozone production rates in an oxygen-hydrogen-nitrogen oxide atmosphere, Journal of Geophysical Research, v 30, p. 7311-7327 (1976). F.E. Wietfeldt et al., Long-term measurement of the half-life of <sup>44</sup>Ti, Phys. Rev. C, v 59, p. 528-530 (1999). E.B. Norman et al., Half-life of <sup>44</sup>Ti, Phys. Rev. C, v 57, p. 2010-2016 (1998).
no-problem/9909/astro-ph9909281.html
ar5iv
text
# A Polarization Pursuers’ Guide ## I Introduction It has long been known that the cosmic microwave background (CMB) must be polarized if it has a cosmological origin. Detection, and ultimately mapping, of the polarization will help isolate the peculiar velocity at the surface of last scatter, constrain the ionization history of the Universe, determine the nature of primordial perturbations, detect an inflationary gravitational-wave background, primordial magnetic fields, and cosmological parity violation, and maybe more (see, e.g., Ref. for a recent review). However, the precise amplitude and angular spectrum of the polarization depends on a structure-formation model and the values of numerous undetermined parameters. Moreover, it has so far eluded detection. A variety of experiments are now poised to detect the polarization for the first time. But what is the ideal experiment? What angular resolution, instrumental sensitivity, and fraction of the sky should be targeted? Can it be picked out more easily by cross-correlating with the CMB temperature? The purpose of this paper is to answer these questions in a fairly model-independent way. We first address the detectability of the gradient component of the polarization from density perturbations. A priori, one might expect the detectability of this signal to depend sensitively on details of the structure-formation model, ionization history, and on a variety of undetermined cosmological parameters. However, we find that if we fix the baryon-to-photon ratio to its big-bang-nucleosynthesis (BBN) value and demand that the degree-scale anisotropy agree with recent measurements, then the detectability of the polarization is roughly model-independent. We provide some analytic arguments in support of this result. We can thus specify an experiment that would be more-or-less guaranteed to detect the CMB polarization.
Non-detection in such experiments would thus only be explained if the baryon density considerably exceeded the BBN value. We then consider the curl component of the polarization from an inflationary gravitational-wave background. This extends slightly the work of Refs. .<sup>§</sup><sup>§</sup>§There is also related work in Refs. in which it is determined how accurately various cosmological and inflationary parameters can be determined in the case of a positive detection. The new twist here is that we consider maps with partial sky coverage (rather than only full-sky maps) and find that in a noise-limited fixed-time experiment, the sensitivity to gravitational waves may be improved considerably by surveying more deeply a smaller patch of sky. Similar arguments were investigated for temperature maps in Refs. . Since the polarization should be detected shortly, our main results on its detectability should, strictly speaking, become obsolete fairly quickly. Even so, our results should be of some lasting value, as they provide figures of merit for comparing the relative value, in terms of signal-to-noise, of various future CMB polarization experiments. It should be kept in mind, however, that ours is a hypothetical experiment in which foregrounds have been subtracted and instrumental artifacts understood, and any comparison with realistic experiments must take these effects into account. Section II briefly reviews the CMB polarization signals. Section III introduces the formalism for determining the detectability of polarization for a given experiment. Section IV considers polarization from scalar modes for a putative structure-formation model and Section V evaluates the detectability of the polarization from gravitational waves (using only the curl component of the polarization). Section VI presents the results of the prior two Sections in a slightly different way.
Section VII shows that the results for scalar modes in Section IV would be essentially the same in virtually any other structure-formation model with a BBN baryon density and degree-scale temperature anisotropy that matches recent measurements. We make some concluding remarks in Section VIII. ## II Brief Review of CMB Polarization Ultimately, the primary goal of CMB polarization experiments will be to reconstruct the polarization power spectra and the temperature-polarization power spectrum. Just as a temperature (T) map can be expanded in terms of spherical harmonics, a polarization map can be expanded in terms of a set of tensor spherical harmonics for the gradient (G) component of the polarization and another set of harmonics for the curl (C) component . Thus, the two-point statistics of the T/P map are specified completely by the six power spectra $`C_{\ell }^{\mathrm{XX}^{}}`$ for $`\mathrm{X},\mathrm{X}^{}=\{\mathrm{T},\mathrm{G},\mathrm{C}\}`$. Parity invariance demands that $`C_{\ell }^{\mathrm{TC}}=C_{\ell }^{\mathrm{GC}}=0`$ (unless the physics that gives rise to CMB fluctuations is parity breaking ). Therefore the statistics of the CMB temperature-polarization map are completely specified by the four sets of moments, $`C_{\ell }^{\mathrm{TT}}`$, $`C_{\ell }^{\mathrm{TG}}`$, $`C_{\ell }^{\mathrm{GG}}`$, and $`C_{\ell }^{\mathrm{CC}}`$. See, e.g., Fig. 1 in Ref. for sample spectra from adiabatic perturbations and from gravitational waves. There are essentially two things we would like to do with the CMB polarization: (1) map the G component to study primordial density perturbations, and (2) search for the C component due to inflationary gravitational waves . The G signal from density perturbations is more-or-less guaranteed to be there at some level (to be quantified further below), and will undoubtedly provide a wealth of information on the origin of structure and the values of cosmological parameters.
The amplitude of the C component from inflationary gravitational waves is proportional to the square of the (to-be-determined) energy scale of inflation. It is not guaranteed to be large enough to be detectable even if inflation did occur. On the other hand, if inflation had something to do with grand unification or Planck-scale physics, as many theorists surmise, then the polarization is conceivably detectable, as argued in Refs. and further below. If detected, it would provide a unique and direct window to the Universe as it was $`10^{-36}`$ seconds after the big bang! ## III Formalism We first address the general question of the detectability of a particular polarization component. We assume that the amplitude of the various polarization signals will each be picked out by a maximum-likelihood analysis . The shape of the likelihood function will then give limits on the parameters, in this case the various power spectra $`C_{\ell }^{\mathrm{XX}}`$ that describe the CMB. In particular, the curvature of the likelihood gives traditional error bars, defined as for a Gaussian distribution (but see Ref. for a discussion of the more complicated true non-Gaussian distribution). Here we will concentrate on the error bar for the overall amplitude of the power spectra. We can then ask, what is the smallest amplitude that could be distinguished from the null hypothesis of no polarization component by an experiment that maps the polarization over some fraction of the sky with a given angular resolution and instrumental noise? This question was addressed (for the curl component) in Ref. for a full-sky map. If an experiment concentrates on a smaller region of sky, then several things happen that affect the sensitivity: (1) information from modes with $`\ell \lesssim 180/\theta `$ (where $`\theta ^2`$ is the area on the sky mapped) is lost; This is not strictly true.
In principle, as usual in Fourier analysis, less sky coverage merely limits the independent modes one can measure to have a spacing of $`\delta \ell \approx 180/\theta `$. In practice, instrumental effects (detector drifts; “1/f” noise) will render the smallest of these bins unobservable. (2) the sample variance is increased; (3) the noise per pixel is decreased since more time can be spent integrating on this smaller patch of the sky. For definiteness, suppose we hypothesize that there is a C component of the polarization with a power spectrum that has the $`\ell `$ dependence expected from inflation (as shown in Fig. 1 in Ref. ), but an unknown amplitude $`𝒯`$.<sup>\**</sup><sup>\**</sup>\**We define $`𝒯\equiv 6C_2^{\mathrm{TT},\mathrm{tens}}`$ where $`C_2^{\mathrm{TT},\mathrm{tens}}`$ is the tensor contribution to the temperature quadrupole moment expected for a scale-invariant spectrum. We can predict the size of the error that we will obtain from the ensemble average of the curvature of the likelihood function (also known as the Fisher matrix) . For example, consider the tensor signal. In such a likelihood analysis, the expected error will be $`\sigma _𝒯`$, where $$\frac{1}{\sigma _𝒯^2}=\underset{\ell }{\sum }\left(\frac{C_{\ell }^{\mathrm{CC}}}{𝒯}\right)^2\frac{1}{(\sigma _{\ell }^{\mathrm{CC}})^2},$$ (1) with similar equations for the other $`C_{\ell }^{\mathrm{XX}}`$. Here, the $`\sigma _{\ell }^{\mathrm{XX}^{}}`$ are the expected errors at individual $`\ell `$ for each of the $`\mathrm{XX}^{}`$ power spectra. These are given by (cf., Ref.
) $`\sigma _{\ell }^{\mathrm{CC}}`$ $`=`$ $`\sqrt{{\displaystyle \frac{2}{f_{\mathrm{sky}}(2\ell +1)}}}\left(C_{\ell }^{\mathrm{CC}}+f_{\mathrm{sky}}w^{-1}B_{\ell }^{-2}\right),`$ (2) $`\sigma _{\ell }^{\mathrm{GG}}`$ $`=`$ $`\sqrt{{\displaystyle \frac{2}{f_{\mathrm{sky}}(2\ell +1)}}}\left(C_{\ell }^{\mathrm{GG}}+f_{\mathrm{sky}}w^{-1}B_{\ell }^{-2}\right),`$ (3) $`\sigma _{\ell }^{\mathrm{TG}}`$ $`=`$ $`\sqrt{{\displaystyle \frac{1}{f_{\mathrm{sky}}(2\ell +1)}}}\left[\left(C_{\ell }^{\mathrm{TG}}\right)^2+\left(C_{\ell }^{\mathrm{TT}}+f_{\mathrm{sky}}w^{-1}B_{\ell }^{-2}\right)\left(C_{\ell }^{\mathrm{GG}}+f_{\mathrm{sky}}w^{-1}B_{\ell }^{-2}\right)\right]^{1/2},`$ (4) where $`w=(t_{\mathrm{pix}}N_{\mathrm{pix}}T_0^2)/(4\pi s^2)`$ is the weight (inverse variance) on the sky spread over $`4\pi `$ steradians, $`f_{\mathrm{sky}}`$ is the fraction of the sky observed, and $`t_{\mathrm{pix}}`$ is the time spent observing each of the $`N_{\mathrm{pix}}`$ pixels. The detector sensitivity is $`s`$ and the average sky temperature is $`T_0=2.73\mathrm{K}`$ (and hence all $`C_{\ell }^{\mathrm{XX}^{}}`$ are measured in dimensionless $`\mathrm{\Delta }T/T`$ units). The inverse weight for a full-sky observation is $`w^{-1}=2.14\times 10^{-15}t_{\mathrm{yr}}^{-1}(s/200\mu \mathrm{K}\sqrt{\mathrm{sec}})^2`$ with $`t_{\mathrm{yr}}`$ the total observing time in years. Finally, $`B_{\ell }`$ is the experimental beam, which for a Gaussian is $`B_{\ell }=e^{-\ell ^2\sigma _b^2/2}`$. We assume all detectors are polarized. As mentioned, all other $`C_{\ell }^{\mathrm{XX}^{}}`$ cross terms are zero (in the usual cases, at least). The CC and GG errors each have two terms, one proportional to $`C_{\ell }^{\mathrm{XX}}`$ (the sample variance), and another proportional to $`w^{-1}`$ (the noise variance). The TG error is more complicated since it involves the product of two different fields (T and G) on the sky.
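As a cross-check, the error bars of Eqs. (2)–(4) are simple enough to code directly. The following sketch is ours, not the authors' code; the symbols follow the definitions above, and any spectra passed in are placeholders.

```python
import math

def noise(ell, fsky, winv, sigma_b):
    """Noise term f_sky * w^-1 * B_ell^-2, with Gaussian beam B_ell = exp(-ell^2 sigma_b^2 / 2)."""
    return fsky * winv * math.exp(ell ** 2 * sigma_b ** 2)

def sigma_cc(ell, cl_cc, fsky, winv, sigma_b):
    """Eq. (2); Eq. (3) for GG is identical in form."""
    return math.sqrt(2.0 / (fsky * (2 * ell + 1))) * (cl_cc + noise(ell, fsky, winv, sigma_b))

def sigma_tg(ell, cl_tg, cl_tt, cl_gg, fsky, winv, sigma_b):
    """Eq. (4): error on the TG cross-spectrum."""
    n = noise(ell, fsky, winv, sigma_b)
    return math.sqrt(1.0 / (fsky * (2 * ell + 1))) * math.sqrt(cl_tg ** 2 + (cl_tt + n) * (cl_gg + n))
```

In the noiseless limit (winv = 0) sigma_cc reduces to the pure sample-variance result, and with the CC spectrum set to zero it reduces to the null-hypothesis threshold quoted below as Eq. (5).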
There are several complications to note when considering these formulae: 1) we never have access to the actual $`C_{\mathrm{}}^{\mathrm{XX}^{}}`$, but only to some estimate of the spectra; 2) the expressions only deal approximately with the effect of partial sky coverage; and 3) the actual likelihood function can be considerably non-Gaussian, so the expressions above do not really refer to “1 sigma confidence limits.” Here, we are interested in the detectability of a polarization component; that is, what is the smallest polarization amplitude that we could confidently differentiate from zero? The answer depends in detail on the full shape of the likelihood function: the “number of sigma” that the likelihood maximum lies away from zero is related to the fraction of integrated likelihood between zero and the maximum. This gives an indication of how well the observation can be distinguished from zero power in the polarization. Toy problems and experience give us an approximate rule of thumb: the signal is detectable when it can be differentiated from the “null hypothesis” of $`C_{\mathrm{}}^{\mathrm{XX}}=0`$. Stated another (more Bayesian) way, for a fixed noise variance, as we increase the observed signal the fraction of probability below the peak increases rapidly when the sample variance (i.e., the estimated power) approaches the noise. Thus, one needs to observe enough sky to sufficiently decrease the sample variance, and to have small enough noise that the sample variance dominates. Hence, the $`\mathrm{}`$ component of the tensor signal (for example) is detectable if its amplitude is greater than $$\sigma _{\mathrm{}}^{\mathrm{CC}}=\sqrt{2/(2\mathrm{}+1)}f_{\mathrm{sky}}^{1/2}w^{-1}e^{\mathrm{}^2\sigma _b^2}.$$ (5) We then estimate the smallest tensor amplitude $`𝒯`$ that can be distinguished from zero (at “1 sigma”) by using Eq. (1) with the null hypothesis $`C_{\mathrm{}}^{\mathrm{CC}}=0`$. 
Putting it all together, the smallest detectable tensor amplitude (scaled by the largest consistent with COBE) is $$\frac{\sigma _𝒯}{𝒯}\approx 1.47\times 10^{-17}t_{\mathrm{yr}}^{-1}\left(\frac{s}{200\mu \mathrm{K}\sqrt{\mathrm{sec}}}\right)^2\left(\frac{\theta }{\mathrm{deg}}\right)\mathrm{\Sigma }_\theta ^{-1/2},$$ (6) where $$\mathrm{\Sigma }_\theta =\sum _{\mathrm{}\gtrsim (180/\theta )}(2\mathrm{}+1)\left(C_{\mathrm{}}^{\mathrm{CC}}\right)^2e^{-2\mathrm{}^2\sigma _b^2}.$$ (7) The expression for the GG signal from density perturbations is obtained by replacing $`C_{\mathrm{}}^{\mathrm{CC}}`$ by $`C_{\mathrm{}}^{\mathrm{GG}}`$ \[and $`\sigma _{\mathrm{}}^{\mathrm{GG}}`$ is the same as $`\sigma _{\mathrm{}}^{\mathrm{CC}}`$ given in Eq. (5)\]. For the TG cross-correlation, things are more complicated. First of all, the expression for $`\sigma _{\mathrm{}}^{\mathrm{TG}}`$ in Eq. (4) has terms involving the temperature power spectra and observing characteristics. Second, we know that there is a temperature component on the sky, so we must pick a “null hypothesis” with the observed $`C_{\mathrm{}}^{\mathrm{TT}}`$. The TG moments also have covariances with the TT and GG moments (see Eqs. (3.28)–(3.30) in Ref. ), but these are zero for the null hypothesis of no polarization. Hence, with the null hypothesis of no polarization, the variance with which each $`C_{\mathrm{}}^{\mathrm{TG}}`$ can be measured is $$\sigma _{\mathrm{}}^{\mathrm{TG}}=\sqrt{1/[f_{\mathrm{sky}}(2\mathrm{}+1)]}\left[f_{\mathrm{sky}}w^{-1}e^{\mathrm{}^2\sigma _b^2}\left(C_{\mathrm{}}^{\mathrm{TT}}+f_{\mathrm{sky}}w^{-1}e^{\mathrm{}^2\sigma _b^2}\right)\right]^{1/2}.$$ (8) Thus, the dependence on $`s`$ is more complicated than in Eq. (5), so the end result for the polarization sensitivity achievable by cross-correlating with the temperature does not scale simply with $`s`$ or $`t_{\mathrm{yr}}`$ as it does for GG and CC. 
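The Fisher sum of Eq. (1) with the noise-only error bars of Eq. (5) reduces to a short routine. The sketch below is our own; it uses a made-up stand-in spectrum (not a real $`C_{\mathrm{}}^{\mathrm{CC}}`$) purely to verify the advertised scalings $`\sigma _𝒯f_{\mathrm{sky}}^{1/2}w^{-1}`$ at fixed weight.

```python
import numpy as np

def smallest_detectable(ells, dC_dT, f_sky, w_inv, sigma_b):
    """sigma_T from the Fisher sum, Eq. (1), with the null-hypothesis
    error bars of Eq. (5); dC_dT is the CC spectrum per unit amplitude T."""
    sig = np.sqrt(2.0 / (2 * ells + 1) * f_sky) * w_inv * np.exp(ells ** 2 * sigma_b ** 2)
    return 1.0 / np.sqrt(np.sum((dC_dT / sig) ** 2))

# A made-up stand-in spectrum, used only to exercise the scalings:
ells = np.arange(2, 300)
toy = 1e-12 / (ells * (ells + 1.0))
a = smallest_detectable(ells, toy, 1.0, 1e-15, 0.0)
b = smallest_detectable(ells, toy, 1.0, 2e-15, 0.0)   # doubled noise weight
```

Doubling $`w^{-1}`$ doubles the detectable amplitude, and quartering $`f_{\mathrm{sky}}`$ at fixed weight halves it, as expected from Eqs. (5)–(7).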
## IV Detectability of Density-Perturbation Signal Since the polarization has yet to be detected, the obvious first goal of a current experiment should be to detect the polarization unambiguously. In the standard theory with adiabatic perturbations somehow laid down prior to last scattering, the G polarization is inevitable. Density perturbations will thus produce a nonzero GG power spectrum and a TG power spectrum. We discuss the detectability of polarization from the GG signal or the TG signal individually, and then from the combination of both signals. ### A The GG signal We have calculated the detectability of the polarization from density perturbations (using only the GG power spectrum), and the results are shown in Fig. 1. Here, we ask the following: suppose there is a polarization signal with an $`\mathrm{}`$ dependence characteristic of density perturbations but of unknown amplitude. What is the smallest polarization amplitude that could be distinguished from the null hypothesis of no polarization? The short-dashed curves (which coincide with the solid curves for small survey widths) in Fig. 1 show the smallest polarization amplitude $`𝒮`$ (scaled by that expected for a COBE-normalized CDM model) detectable (at $`3\sigma `$) by an experiment with detector sensitivity $`s`$ that maps the polarization only on a square region of the sky with a given width. The curves are (from top to bottom) for fwhm beamwidths of 1, 0.5, 0.3, 0.2, 0.1, 0.05, and 0.01 degrees. The results scale with the square of the detector sensitivity and inversely with the duration of the experiment. Any experiment that has a $`(\sigma _𝒮/𝒮)`$ smaller than unity should be able to detect the polarization expected in a CDM model at $`>3\sigma `$. Fig. 1 shows that an experiment with comparable $`s`$ can achieve in a few months the same signal-to-noise with the GG power spectrum as MAP. 
### B The TG signal We have also done the same analysis for the temperature/polarization cross-correlation (the TG power spectrum), and the results are indicated by the long-dashed curves in Fig. 1. Here we ask, what is the smallest polarization signal that could be distinguished from the null hypothesis of no polarization by looking for the expected temperature-polarization cross-correlation? First of all, the analog of Eq. (5) for GG is given for TG by Eq. (8). Thus, cosmic variance in the temperature map comes into play even if we investigate the null hypothesis of no polarization. As a result, the detectability of the polarization from temperature-polarization cross-correlation does not scale simply with the instrumental sensitivity $`s`$ (and this is why we present results for detectability in four panels for four different values of $`s`$ in Fig. 1 rather than on one panel as we will for the curl component in Fig. 2). However, comparing the long-dashed curves in all four panels, we see that for $`s\lesssim 200\mu \mathrm{K}\sqrt{\mathrm{sec}}`$ (reasonable values for just about any future experiment), the detector-noise term in Eq. (8) is less important than the $`C_{\mathrm{}}^{\mathrm{TT}}`$ term, so the result for $`\sigma _𝒮/𝒮`$ scales with $`s`$. ### C Combining the TG and GG Signals Comparing the long- and short-dashed curves in Fig. 1, we see that the polarization sensitivity obtained by looking for a temperature-polarization cross-correlation improves on that obtained from the polarization auto-correlation (for fixed angular resolution and detector sensitivity) only for nearly full-sky surveys with $`s\gtrsim 50\mu \mathrm{K}\sqrt{\mathrm{sec}}`$. Thus, the sensitivity of MAP (full-sky and $`s\approx 150\mu \mathrm{K}\sqrt{\mathrm{sec}}`$) to polarization will come primarily from cross-correlating with the temperature map, while the signal-to-noise for polarization auto-correlation and temperature-polarization cross-correlation should be roughly comparable for Planck. 
The Figure also indicates that in an experiment with $`s\lesssim 100\mu \mathrm{K}\sqrt{\mathrm{sec}}`$ that maps only a small fraction of the sky (widths $`\lesssim 10^{\circ }`$), the polarization is more easily detected via polarization auto-correlations; cross-correlating with the temperature should not significantly improve the prospects for detecting the polarization in such experiments. The total sensitivity achievable using both TG and GG together is obtained by adding the sensitivities from each in quadrature.<sup>††</sup><sup>††</sup>††In principle, there are cross terms between TG and GG in the correlation matrix. However, for the null hypothesis of no polarization, these are zero; cf., Eq. (3.29) in Ref. . The solid curves in Fig. 1 show the polarization sensitivities achievable by combining the GG and TG data. For $`s\lesssim 10\mu \mathrm{K}\sqrt{\mathrm{sec}}`$, the sensitivity comes entirely from the polarization auto-correlation and scales as $`s^2`$, as shown in the top left-hand panel. We see that the sensitivity to the polarization can improve as the angular resolution is improved all the way down to 0.01 degrees, and the ideal survey size varies from 2–3 degrees (for an angular resolution of 1 degree) to a fraction of a degree for better angular resolution. ## V Detectability of the Curl Component Consider next the C component, which can tell us about the amplitude of gravitational waves produced, for example, by inflation. We have carried out the same exercise as for the scalar signal. As above, we hypothesize that there is a C component of the polarization with an unknown amplitude $`𝒯`$. Results are shown in Fig. 2. Plotted there is the smallest gravitational-wave (i.e., tensor) amplitude $`𝒯`$ detectable at $`3\sigma `$ by an experiment with a detector sensitivity $`s=10\mu \mathrm{K}\sqrt{\mathrm{sec}}`$ that maps a square region of the sky over a year with a given beamwidth. The horizontal line shows the upper limit to the tensor amplitude from COBE. 
The curves are (from top to bottom) for fwhm beamwidths of 1, 0.5, 0.3, 0.2, 0.1, and 0.05 degrees. The results scale with the square of the detector sensitivity and inversely with the duration of the experiment. The sensitivity to the tensor signal is a little better with an 0.5-degree beam than with a 1-degree beam, but even smaller angular resolution does not improve the sensitivity much. And with a resolution of 0.5 degrees or better, the best survey size for detecting this tensor signal is about 3 to 5 degrees. If such a fraction of the sky is surveyed, the sensitivity to a tensor signal (rms) will be about 30 times better than with a full-sky survey with the same detector sensitivity and duration (and thus 30 times better than indicated in Refs. ). Thus, a balloon experiment with the same detector sensitivity as MAP could in principle detect in a few weeks the same tensor amplitude that MAP would in a year. (A width of 200 degrees corresponds to full-sky coverage.) The tensor amplitude is related to the energy scale of inflation<sup>‡‡</sup><sup>‡‡</sup>‡‡The energy scale of inflation is defined here to be the fourth root of the inflaton-potential height. by $`𝒯=(E_{\mathrm{infl}}/7\times 10^{18}\mathrm{GeV})^4`$, and COBE currently constrains $`E_{\mathrm{infl}}\lesssim 2\times 10^{16}`$ GeV . Thus, with Fig. 2, one can determine the inflationary energy scale accessible with any given experiment. ## VI SOME DETAILS AND INSIGHT Fig. 3 is intended to provide some additional insight into the results shown in Figs. 1 and 2. Fig. 3 plots the summands (with arbitrary normalization) from Eq. (7) for $`\mathrm{\Sigma }_\theta `$ for the CC and GG signals for a full-sky map with perfect angular resolution. It also shows the analogous summand for TG. The detectability of each signal (CC, GG, and TG) is inversely proportional to the square root of the area under each curve. 
A finite beamwidth (and/or instrumental noise) would reduce the contribution from higher $`\mathrm{}`$’s, and a survey area less than full-sky would reduce the contribution from lower $`\mathrm{}`$’s. The Figure illustrates that the CC signal is best detected with $`\mathrm{}\lesssim 200`$ and the GG signal is best detected with $`\mathrm{}\approx 200`$–$`1200`$, as may have been surmised from Figs. 1 and 2. The TG signal is spread over a larger range of $`\mathrm{}`$’s. In particular, note that very little of $`\mathrm{\Sigma }_\theta `$ comes from $`\mathrm{}\lesssim 10`$ in any case, so the loss of the $`\mathrm{}\lesssim 10`$ modes that comes with survey regions smaller than $`10\times 10`$ deg<sup>2</sup> does not significantly affect the detectability of the polarization signals. And now for some historical perspective. Although not shown, the $`\mathrm{\Sigma }_\theta `$ for the temperature power spectrum TT peaks very sharply at low $`\mathrm{}`$ (it essentially falls off as $`\mathrm{}^{-3}`$ for a nearly scale-invariant spectrum \[$`\mathrm{}(\mathrm{}+1)C_{\mathrm{}}\approx \mathrm{const}`$\] such as that observed). Thus, in retrospect, the COBE full-sky scan was indeed the best strategy for detecting the temperature anisotropy. An equal-time survey of a smaller region of the sky would have made detection far less likely. ## VII Model Independence of the Results In Section IV we used a standard-CDM model for our calculations, and it is natural to inquire whether and/or how our results depend on this assumption. The purpose of this Section is to illustrate that the results shown above are, to a large extent, independent of the gross features and details of the structure-formation model as long as we (1) use the BBN baryon density and (2) demand that the model reproduce the degree-scale anisotropy observed by several recent experiments. To make this case, we have considered a number of models in which the CMB power spectrum passes through recent data points near $`\mathrm{}\approx 200`$, as shown in Fig. 4. 
The models are listed in Table I. The $`C_{\mathrm{}}^{\mathrm{GG}}`$ polarization power spectrum for each model is shown in Fig. 5. The detectability of the polarization (à la Fig. 1) in each of these models is shown in Fig. 6. All of the models (except the high-baryon-density models) have the BBN baryon-to-photon ratio, $`\mathrm{\Omega }_bh^2=0.02`$. Fig. 5 shows that all models with the BBN baryon density produce roughly the same amount of polarization, and Fig. 6 shows more precisely that the detectabilities are all similar. The only models in which the polarization signal is significantly smaller (and accordingly harder to detect) are those with a baryon-to-photon ratio that considerably exceeds the BBN value. So why is this? Heuristically, we expect the polarization amplitude to be proportional to the temperature-anisotropy amplitude, and we have fixed this. This explanation is close, but still only partially correct. More accurately, the polarization comes from peculiar velocities (the “dipole”) at the surface of last scatter . The peculiar-velocity amplitude is indeed proportional to the density-perturbation amplitude that produces the peak in the temperature power spectrum, but the constant of proportionality depends on the baryon density ; the peculiar velocity (and thus the polarization) is larger for smaller $`\mathrm{\Omega }_bh^2`$ (and the dependence is actually considerably weaker than linear). We also know that the troughs in the temperature power spectrum are filled in by the peculiar velocities. Therefore, the polarization amplitude should actually be proportional to the amplitude of the (yet-undetermined) trough in the power spectrum, rather than the peak that has been measured. Having fixed the peak height, the amplitude of the trough (and thus the peculiar velocity) in turn depends only on the baryon density, $`\mathrm{\Omega }_bh^2`$. 
In this way, the polarization amplitude depends primarily on the baryon-to-photon ratio, itself proportional to $`\mathrm{\Omega }_bh^2`$. These arguments further suggest that if the baryon density is significantly higher than that allowed by BBN, then the polarization amplitude will be smaller, and accordingly harder to detect. There are many good reasons to believe that the BBN prediction for the baryon density is robust. However, it has also been pointed out that some problems (e.g., the baryon fraction in clusters and a reported excess in power on $`100h^{-1}`$ Mpc scales) can be solved if we disregard the BBN constraint and consider a much larger baryon density (see, e.g., Ref. ). To illustrate, we also include in Figs. 4, 5, and 6 results from a high-baryon-density (i.e., $`\mathrm{\Omega }_bh^2=0.144`$) model, and as expected, the polarization amplitude is decreased relative to the temperature-fluctuation amplitude. We conclude that as long as the baryon density is not much larger than that allowed by BBN, the results shown in Fig. 1 should be model-independent, but that the polarization may be significantly smaller if the baryon-to-photon ratio is significantly higher. ## VIII CONCLUSIONS We have carried out calculations that will help assess the prospects for detection of polarization with various experiments. Our results can be used to forecast the signal-to-noise for the polarization signals expected from density perturbations and from gravitational waves in an experiment of given sky coverage, angular resolution, and instrumental noise. Even after the polarization has been detected, our results will provide a useful (although not unique) set of figures of merit for subsequent polarization experiments. Of course, the “theoretical” factors considered here must be weighed in tandem with those that involve foreground subtraction and experimental logistics in the design or evaluation of any particular experiment. 
(As with temperature-anisotropy experiments, these usually encourage increasing the signal-to-noise to better distinguish systematic effects.) In contrast to temperature anisotropies which show power on all scales \[i.e., $`\mathrm{}(\mathrm{}+1)C_{\mathrm{}}\mathrm{const}`$\], the polarization power peaks strongly at higher $`\mathrm{}`$. Hence the signal-to-noise in a polarization experiment of fixed flight time and instrumental sensitivity may be improved by surveying a smaller region of sky, unlike the case for temperature-anisotropy experiments. The ideal survey for detecting the curl component from gravitational waves is of order 2–5 degrees, and the sensitivity is not improved much for angular resolutions smaller than 0.2 degrees.<sup>\**</sup><sup>\**</sup>\**Of course, better angular resolution and a larger survey area may be required to distinguish the gravitational-wave signal from a possible curl component from some foregrounds, nonlinear late-time effects , and/or instrumental artifacts. The polarization signal from density perturbations is peaked at still smaller angular scales, and may be better accessed by mapping an even smaller region of sky (again, keeping in mind the caveats mentioned above). Our numerical experiments and some physical arguments indicate that the measured degree-scale temperature anisotropy fixes the polarization amplitude in a model-independent way as long as we use a fixed baryon density. Thus, if the baryon density is known from BBN, then any experiment for which the curves in Fig. 1 fall below unity is guaranteed a $`3\sigma `$ detection of the CMB polarization. A non-detection would indicate unambiguously a baryon density significantly higher than that predicted by BBN. ###### Acknowledgements. We thank L. Knox for providing the updated CMB data, and S. Hanany, A.T. Lee and the whole MAXIMA team for inspiring some of the work described herein. 
This work was supported at Columbia by a DoE Outstanding Junior Investigator Award, DEFG02-92-ER 40699, NASA NAG5-3091, and the Alfred P. Sloan Foundation, and at Berkeley by NAG5-6552 and NSF KDI grant 9872979.
# Bulk Motions in Large-Scale Void Models ## 1 Introduction The dipole moment in the cosmic background radiation (CMB) is thought to come mainly from the Doppler shift due to the motion of the Local Group (LG) relative to the cosmic homogeneous expansion. The Great Attractor (GA), the main gravitational source responsible for this velocity of the LG, was found by Lynden-Bell et al. (1988) and Dressler et al. (1987); it is located at a redshift of 4300 km sec<sup>-1</sup>. On the other hand, the motion of the LG relative to the inertial frame consisting of many clusters on larger scales was studied observationally by several groups: a bulk flow of $`\sim 700`$ km sec<sup>-1</sup> was found by Lauer and Postman (1994, 1995) and Colless (1995) as the motion of the Abell cluster inertial frame relative to the LG in the region with redshift $`<15000`$ km sec<sup>-1</sup>, but different results were derived in other approaches by Giovanelli et al. (1998), Dale et al. (1999), and Riess et al. (1997) in regions with similar redshifts. Lauer and Postman’s work is based on the assumptions that the brightest cluster galaxies can be used as standard candles and that the Hoessel relation holds, but at present these assumptions have been regarded as questionable or unreliable. Independently of these works, the motion of cluster frames relative to the CMB was measured by Hudson et al. (1999) and Willick (1999) through the global Hubble formula, using the Tully-Fisher distances of clusters and their redshifts with respect to the CMB, and the flow velocity vector was derived in the region within about 150$`h^{-1}`$ Mpc ($`H_0=100h`$ km sec<sup>-1</sup> Mpc<sup>-1</sup>). The remarkable and puzzling property of these flows is that the flow velocity reaches a large value of $`\sim 700`$ km/sec on a large scale, while the dipole velocity (not due to the GA) corresponding to the CMB dipole anisotropy seems to be much smaller than this flow velocity. 
In the present note we first consider inhomogeneous models on sub-horizon scales, corresponding to matter flows on scales $`\sim 150h^{-1}`$ Mpc. They are assumed to be spherically symmetric inhomogeneous models which consist of inner and outer homogeneous regions connected by a shell that is a singular layer, and the behavior of large-scale motions caused in the inner region is considered. Next we consider light rays which are emitted at the last scattering surface and reach an observer situated at a point O (in the inner region) deviated from the center C, and the CMB dipole anisotropy for this observer is shown. On the basis of these results we show the consistency with various observations of cosmic flows. Moreover, the \[$`m,z`$\] relation is discussed in connection with SNIa data, and finally concluding remarks are presented. ## 2 Cosmological models and the bulk motions In previous papers (Tomita 1995, 1996) we treated spherically symmetric inhomogeneous models which consist of inner and outer homogeneous regions connected with an intermediate self-similar region and have the boundary on a super-horizon scale. Here we consider a similar spherically symmetric inhomogeneous model which consists of inner and outer homogeneous regions, but is connected by a shell that is a singular layer on a sub-horizon scale $`\sim 150h^{-1}`$ Mpc. This shell may be associated with large-scale structures or excess powers observed by Broadhurst et al. (1990), Landy et al. (1996), and Einasto et al. (1997). The physical state in each region is specified by the Hubble constant and the density parameter. It is assumed that the present Hubble parameter in the inner region ($`H_0^{\mathrm{in}}`$) is larger than that in the outer region ($`H_0^{\mathrm{out}}`$), and the present inner density parameter ($`\mathrm{\Omega }_0^{\mathrm{in}}`$) is smaller than the present outer density parameter ($`\mathrm{\Omega }_0^{\mathrm{out}}`$). 
The evolution of the physical states in each region and of the boundary has been studied in the form of void models (e.g., Sakai et al. 1993). The average motion of the CMB is comoving with matter in the outer region, while it is not comoving with matter in the inner region; that is, matter in the inner region moves relative to the CMB, because their Hubble constants are different. The bulk motion appears as the result of this relative motion with respect to the CMB. The relative velocity ($`\mathrm{\Delta }v`$) is $`(H_0^{\mathrm{in}}-H_0^{\mathrm{out}})r`$ in the radial direction, where $`r`$ is the radial distance from the center C to an arbitrary point (a cluster’s position) in the inner region. When an observer O sees this velocity vector $`\mathrm{\Delta }v`$, it can be divided into two parts: the component in the observer’s line of sight ($`\mathrm{\Delta }v_{\mathrm{ls}}`$) and the bulk-velocity component in the direction C $`\rightarrow `$ O ($`v_p`$). The latter component $`v_p`$ is constant, irrespective of the cluster’s position. In the case when the present radius of the boundary and the observer’s position are $`200h^{-1}`$ and $`40h^{-1}`$ Mpc, respectively, we have $`v_p\approx 700`$ km sec<sup>-1</sup>. ## 3 Dipole anisotropy and the consistency with various observations of cosmic flows If the observer were at the center C, he would never see any CMB anisotropy, as long as the two regions are homogeneous. For the non-central observer O we have a nonzero dipole anisotropy $`D`$, which is derived by calculating the curved paths from the last scattering surface to O and the directional variation of the temperature $`T_r`$. The velocity $`v_d`$ corresponding to $`D`$ is defined by $`v_d\equiv c[(3/4\pi )^{1/2}D]`$ and derived. As a result it was found that $`v_d`$ is small compared with $`v_p`$ if O is near to C. In our above example, $`r(\mathrm{OC})/r(\mathrm{boundary})\approx 1/5`$, we obtain $`v_d\approx 0.1v_p`$. As described in §2, the bulk velocities at any two points are equal, and so their difference is zero. 
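This constancy of the bulk component is elementary to check numerically. In the sketch below (our own illustration), the Hubble contrast of 17.5 km sec<sup>-1</sup> per $`h^{-1}`$ Mpc is back-solved from the quoted $`v_p\approx 700`$ km sec<sup>-1</sup> at $`40h^{-1}`$ Mpc and is purely illustrative, not a value from the paper.

```python
import numpy as np

d_H = 17.5                      # (H0_in - H0_out), km/s per h^-1 Mpc; illustrative
r_obs = np.array([40.0, 0.0])   # observer O, 40 h^-1 Mpc from the center C

def delta_v(r):
    """Flow of inner-region matter relative to the CMB: radial, (H_in - H_out) * r."""
    return d_H * np.asarray(r, dtype=float)

# The flow seen at any cluster position r splits into a cluster-dependent
# line-of-sight part, d_H * (r - r_obs), plus the constant bulk part
# v_p = d_H * r_obs directed along C -> O:
v_p = delta_v(r_obs)                       # magnitude 17.5 * 40 = 700 km/s
residual = delta_v([100.0, 50.0]) - v_p    # equals d_H * (r - r_obs)
```

The residual depends on the cluster position, but `v_p` does not, which is the statement above that the bulk velocities at any two points agree.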
Accordingly, the relative velocity of the Local Group (LG) with respect to the frame of clusters ($`v_{LG}`$) is only the peculiar velocity ($`v_{GA}`$) caused by the small-scale nonspherical gravitational field of the Great Attractor. The above result gives the dipole velocity of the LG, $`v_d(\mathrm{LG})=v_{GA}+v_d`$, so that $`v_{GA}`$ and $`v_d(\mathrm{LG})`$ are comparable and their difference is $`v_d`$ ($`\approx 0.1v_p`$). This situation in the present models is consistent with the observations (Giovanelli et al. (1998), Dale et al. (1999) and Riess et al. (1997)) for relative velocities of the LG with respect to the cluster frame, and with the observations (Hudson et al. (1999) and Willick (1999)) for the bulk flows of clusters, since the observed values of $`v_{LG}`$, $`v_d(\mathrm{LG})`$ and $`v_p`$ are about $`565`$ km sec<sup>-1</sup>, $`627`$ km sec<sup>-1</sup> and $`700`$ km sec<sup>-1</sup>, respectively, in similar directions. The observed difference of the first two velocities is about $`0.1\times v_p`$. The detailed derivation of the contents of §2 and §3 is given in Tomita (1999a). ## 4 \[$`m,z`$\] relation and SNIa data Here the behavior of distances in the present models is studied. First we treat the distances from a virtual observer who is at the center C of the inner void-like region in models with a single shell, and derive the \[magnitude $`m`$ \- redshift $`z`$\] relation. This relation is compared with its counterpart in the homogeneous models. The relation in the present models is then found to deviate from that in the homogeneous models with $`\mathrm{\Lambda }=0`$ at the stage $`z<1.5`$. It is partially similar to that in the nonzero-$`\mathrm{\Lambda }`$ homogeneous models, but a remarkable difference appears at the high-redshift stage $`z>1.0`$. Moreover, we consider a realistic observer who is at the position O deviated from the center, and calculate the distances from him. 
The distances depend on the direction of the incident light, and the area angular diameter distance is different from the linear angular diameter distances. As a result, it is shown that the \[$`m,z`$\] relation is anisotropic, but the relation averaged with respect to the angle is very near to the relation for the virtual observer. When we compare these theoretical relations with the SNIa data (Riess et al. (1998), Garnavich et al. (1998), and Schmidt et al. (1998)), we can determine whether the present models or the nonzero-$`\mathrm{\Lambda }`$ homogeneous models are better, and the best-fitting model parameters. At present, however, there are few data at $`z\gtrsim 1.0`$, so that this model selection cannot yet be performed. The detailed description of the content of this section is given in Tomita (1999b). ## 5 Concluding remarks The density perturbations in the inner region and their influence on the CMB anisotropy are another important factor in the selection of model parameters, which should be studied next.
# Non Perturbative Destruction of Localization in the Quantum Kicked Particle Problem ## Abstract The angle coordinate of the Quantum Kicked Rotator problem is treated as if it were an extended coordinate. A new mechanism for destruction of coherence by noise is analyzed using both heuristic and formal approaches. Its effectiveness constitutes a manifestation of long-range non-trivial dynamical correlations. Perturbation theory fails to quantify certain aspects of this effect. In the perturbative case, for sufficiently weak noise, the diffusion coefficient $`𝒟`$ is just proportional to the noise intensity $`\nu `$. It is predicted that in some generic cases one may have a non-perturbative dependence $`𝒟\propto \nu ^\alpha `$ with $`0.35<\alpha <0.38`$ for arbitrarily weak noise. This work has been found relevant to the recently studied ionization of H-atoms by a microwave electric field in the presence of noise . Note added (a): Borgonovi and Shepelyansky (Physica D 109, 24 (1997)) have adopted this idea of non-perturbative transport, and have demonstrated that the same effect manifests itself in the tight-binding Anderson model with the same exponent $`\alpha `$. Note added (b): The recent interest in the work reported here comes from the experimental work by the Austin group (Klappauf, Oskay, Steck and Raizen, PRL 81, 1203 (1998)), and by the Auckland group (Ammann, Gray, Shvarchuck and Christensen, PRL 80, 4111 (1998)). In these experiments the QKP model is realized literally. However, the novel effect of non-perturbative transport, reported in this Letter, has not been tested yet. The most striking manifestation of quantum mechanical effects on classical chaos is dynamical localization, which leads to suppression of chaos. Consider for example a particle that is confined to move in a one-dimensional space whose length is $`L`$ and that is subject to a kicking potential with period $`T`$. 
Classically, the motion of the particle becomes ergodic in space but diffusive in momentum<sup>1</sup>. Thus, the kinetic energy of the particle grows like $`\langle p^2\rangle \approx D_0t`$, where the diffusion coefficient $`D_0`$ depends on the strength of the kicking potential. Quantum mechanically it is found that diffusion in momentum is suppressed<sup>2</sup>. This is due to localization of the Floquet eigenstates in momentum<sup>3</sup>. A standard argumentation<sup>4</sup> leads to the following expression for the localization length $`\ell ={\displaystyle \frac{2\pi \hbar }{L}}\xi ={\displaystyle \frac{TL}{2\pi \hbar }}D_0`$ (1) where $`\ell `$ is measured in units of $`p`$ while $`\xi `$ is the dimensionless localization length. The prototype problem for the investigation of dynamical localization is the Quantum Kicked Rotator (QKR) Problem<sup>2,3</sup>. In this problem the particle is kicked by a cos$`x`$ potential and periodic boundary conditions are imposed over $`[0,2\pi ]`$. However, we may consider $`x`$ to be an extended variable and impose periodic boundary conditions over $`[0,2\pi M]`$, where $`M`$ is an integer and the limit $`M\rightarrow \infty `$ is taken. We obtain then a new problem, to be entitled ’The Quantum Kicked Particle (QKP) Problem’. It is not correct to use (1) with $`L=2\pi M`$ to obtain $`\ell =T\frac{D_0}{\hbar }M`$, since due to the translational symmetry of cos$`x`$ the localization length $`\ell `$ is the same as in the QKR problem ($`M=1`$), irrespective of $`M`$. However, it is evident that any dislocation in the periodic structure of the kicking potential will result in $`\ell \rightarrow \infty `$. Therefore we expect localization in the QKP problem to be extremely sensitive to any generic perturbation. We shall discuss in this letter the effect of noise on localization in the QKP problem. 
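Eq. (1) and the naive $`L=2\pi M`$ scaling discussed above can be checked in a few lines. The helper below is our own sketch, in units with $`\hbar =1`$ and with illustrative inputs.

```python
import math

def loc_length(T, L, D0, hbar=1.0):
    """Localization length in momentum from Eq. (1): l = T * L * D0 / (2*pi*hbar)."""
    return T * L * D0 / (2.0 * math.pi * hbar)

# Naive use of Eq. (1) with L = 2*pi*M gives l = T * D0 * M / hbar,
# growing linearly with M -- precisely the step that fails for the QKP:
l1 = loc_length(1.0, 2.0 * math.pi * 1, 1.0)   # M = 1
l2 = loc_length(1.0, 2.0 * math.pi * 2, 1.0)   # M = 2, twice as large
```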
In conclusion we shall explain why this problem should be considered a prototype example<sup>5</sup> for the recently studied noise-induced diffusive ionization of a highly excited H-atom that is subject to a monochromatic microwave electric field<sup>6,7</sup>. We consider in this letter the quantized version of the classical standard map with noise, namely $`x_t`$ $`=`$ $`x_{t-1}+p_{t-1}`$ (2) $`p_t`$ $`=`$ $`p_{t-1}+K\mathrm{sin}x_t+f_t`$ (3) It is implicit that the dynamical behavior should be averaged over realizations of the sequence $`f_t`$ such that $`\langle f_t\rangle =0`$ and $`\langle f_tf_t^{}\rangle =\nu \delta _{t,t^{}}`$ (4) Following Ott, Antonsen and Hanson<sup>8</sup> we assume that the one-step propagator that generates this map is $`\widehat{U}=\mathrm{exp}\left[-{\displaystyle \frac{i}{\mathrm{}}}(K\mathrm{cos}\widehat{x}+\widehat{V}_{int})\right]\mathrm{exp}\left[-{\displaystyle \frac{i}{\mathrm{}}}{\displaystyle \frac{1}{2}}\widehat{p}^2\right]`$ (5) where $`V_{int}`$ is the interaction term with the noise source. Consider first the standard QKR case in which $`x`$ is an angle variable. The interaction term must then respect the $`2\pi `$ spatial periodicity of $`x`$. Possible choices that correspond to the classical map (2) are $`\widehat{V}_{int}=\sqrt{2\nu }\mathrm{sin}(\widehat{x}+\phi (t))`$ where $`\phi (t)`$ is a random phase<sup>8</sup>, and<sup>9</sup> $`\widehat{V}_{int}=\int 𝑑\phi f_\phi (t)\sqrt{2}\mathrm{sin}(\widehat{x}+\phi )`$ where $`f_\phi (t)`$ satisfies $`\langle f_\phi (t)f_\phi ^{}(t^{})\rangle =\nu \frac{1}{2\pi }\delta (\phi -\phi ^{})\delta _{t,t^{}}`$. It may be shown that this QKR model is not sensitive to the detailed form of the interaction term<sup>10</sup>. In the QKP problem the map (2) describes the time evolution of a particle. A generic interaction term with the external noise source is not expected to respect the $`2\pi `$ spatial periodicity of the kicking potential. We may then assume e.g.
a linear coupling scheme $`V_{int}=f_t\widehat{x}`$ where $`f_t`$ satisfies (3). We shall see that in this QKP model the dynamical behavior is significantly different from that of the QKR model, though both models correspond to the same map (2). From now on it is assumed that $`1\ll K`$, which is the usual condition for being in the classically-chaotic regime of the standard map. In the presence of strong noise, diffusion in momentum is classical-like<sup>8</sup> with coefficient $`\frac{1}{2}K^2+\nu `$. If the noise is weak then classical-like diffusion lasts a characteristic time $`t^{}\approx 2\xi `$ and then a crossover to slower diffusive behavior is observed. The asymptotic diffusion coefficient is defined as follows: $`𝒟=\underset{t\rightarrow \mathrm{\infty }}{lim}{\displaystyle \frac{\langle (p(t)-p(0))^2\rangle }{t}}`$ (6) where $`\langle \cdots \rangle `$ denotes quantum statistical average over initial conditions and noise realizations (see Ref. 10 for further details). In the absence of noise $`𝒟=0`$ due to the localization effect. We shall now use a heuristic picture in order to determine $`𝒟`$ in the presence of weak noise. Next we shall introduce a formal treatment, and the limitations of both approaches will be pointed out. A good way to gain insight into the effect of noise on coherence is to use Wigner’s picture of the dynamics<sup>11</sup>. Wigner’s function $`\rho (x,p)`$ is defined on $`[0,2\pi ]\times \frac{\mathrm{}}{2}𝒵`$ where $`𝒵`$ are the integer numbers. Assuming that the particle is prepared in a $`\widehat{U}`$-eigenstate, Wigner’s function has details on spatial scale $`\frac{1}{\xi }`$, indicating a superposition of $`\xi `$ momentum eigenstates. The effect of noise is to smear fine details of Wigner’s function and thus to turn the superposition into a mixture<sup>11</sup>.
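The noiseless one-step propagator of eqn (5) can be realized numerically with a standard split-operator scheme: a kick applied in the position basis and the free evolution in the momentum basis, switching between the two with FFTs. The sketch below is an illustration only (the values of `K`, `hbar` and the grid size are arbitrary choices, not parameters from the Letter):

```python
import numpy as np

def qkr_evolve(K, hbar, n_steps, N=2**11):
    """One-period Floquet evolution U = exp(-i K cos x / hbar) exp(-i p^2 / (2 hbar))
    applied n_steps times; momentum basis p = hbar*m on the 2*pi ring, T = 1."""
    m = np.arange(-N // 2, N // 2)                 # momentum quantum numbers
    x = 2 * np.pi * np.arange(N) / N               # position grid on [0, 2*pi)
    psi_p = np.zeros(N, complex)
    psi_p[N // 2] = 1.0                            # start in the p = 0 eigenstate
    kick = np.exp(-1j * K * np.cos(x) / hbar)
    free = np.exp(-1j * (hbar * m) ** 2 / (2 * hbar))
    energies = []
    for _ in range(n_steps):
        psi_x = np.fft.ifft(np.fft.ifftshift(psi_p))         # to position basis
        psi_p = np.fft.fftshift(np.fft.fft(psi_x * kick))    # kick, back to momentum
        psi_p *= free                                        # free propagation
        energies.append(np.sum(np.abs(psi_p) ** 2 * (hbar * m) ** 2))
    return psi_p, np.array(energies)

psi_p, E = qkr_evolve(K=5.0, hbar=1.0, n_steps=50)
norm = np.sum(np.abs(psi_p) ** 2)
```

After one kick the momentum distribution is $J_m(K/\hbar )^2$, so the first energy entry equals $K^2/2$ exactly; the norm is conserved to machine precision because every factor is unitary.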
The coherence time $`t_c`$ in the QKR problem is simply the time it takes for the noise to ’mix’ neighboring momenta<sup>8</sup> on momentum scale $`\mathrm{}`$, namely $`t_c^{QKR}=\mathrm{}^2\frac{1}{\nu }`$, while in the QKP problem a shorter time scale exists, namely $`t_c^{QKP}=(\frac{1}{\xi ^2}\frac{1}{\nu })^{\frac{1}{3}}`$, which is the time it takes to spread over spatial scale $`\frac{1}{\xi }`$. This spreading is absent in the case of a rotator since it is associated with the noise-induced diffusion in the non-discrete momentum space. This diffusion is $`\delta p\propto t^{\frac{1}{2}}`$ while the associated spreading is $`\delta x\propto t^{\frac{3}{2}}`$. It is important to note that implicit in this heuristic picture is the underlying assumption that the kicks do not significantly affect the coherence time. This assumption has been shown to be incorrect in the case of the QKR problem if the noise possesses long-range correlations<sup>11</sup>. Actually, we shall see that in the QKP problem the situation is similar, though the heuristic picture gives the right qualitative behavior. We proceed now to estimate $`𝒟`$. One may try to use the heuristic diffusion picture that is implicit in the work by Ott, Antonsen and Hanson<sup>8</sup>. It is argued that for weak noise ($`t^{}\ll t_c`$) the diffusion process in momentum space is similar to a random walk on a grid with spacing $`\mathrm{}\xi `$ and hopping probability $`\frac{1}{t_c}`$. The diffusion coefficient in the presence of weak noise is therefore of the order $`(\mathrm{}\xi )^2\frac{1}{t_c}`$, which upon using (1) leads to $`𝒟\sim {\displaystyle \frac{t^{}}{t_c}}D_0.`$ (7) It follows that $`𝒟\sim \nu ^\alpha `$ with $`\alpha =1`$ for the QKR and $`\alpha =\frac{1}{3}`$ for the QKP. In Figure 1 the results of a numerical experiment are presented<sup>12</sup>. The observed behavior is indeed $`\alpha =1`$ for the QKR but $`0.35<\alpha <0.38`$ for the QKP, which deviates slightly from the heuristic value $`\alpha =\frac{1}{3}`$.
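The two heuristic predictions can be made explicit by inserting the coherence times quoted above into $`𝒟\sim (t^{}/t_c)D_0`$ and reading off the power-law exponent in the noise intensity. The snippet below does exactly that (the numerical values of $\hbar$, $\xi$, $t^{}$ and $D_0$ are arbitrary placeholders; only the exponents matter):

```python
import numpy as np

# Heuristic coherence times, as in the text:
#   QKR: t_c = hbar^2 / nu            ->  D ~ (t*/t_c) D0  ~  nu^1
#   QKP: t_c = (1/(xi^2 * nu))^(1/3)  ->  D ~ nu^(1/3)
hbar, xi, t_star, D0 = 1.0, 30.0, 60.0, 50.0
nu = np.logspace(-8, -4, 5)

D_qkr = (t_star / (hbar ** 2 / nu)) * D0
D_qkp = (t_star / (1.0 / (xi ** 2 * nu)) ** (1.0 / 3.0)) * D0

# Log-log slopes recover the heuristic exponents alpha = 1 and alpha = 1/3
alpha_qkr = np.polyfit(np.log(nu), np.log(D_qkr), 1)[0]
alpha_qkp = np.polyfit(np.log(nu), np.log(D_qkp), 1)[0]
```

The measured exponent $0.35<\alpha <0.38$ quoted in the text then quantifies how far the real QKP dynamics deviates from the $\alpha =1/3$ heuristic.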
We turn now to a formal analysis of the problem, to overcome the natural limitations of the above heuristic picture. We shall try to explain the origin of the deviation from the heuristic result in the case of the QKP, but we shall see that leading order perturbation theory cannot be trusted if we want to quantify this deviation. In the absence of noise one may define the dispersion (energy) function $`E(t)\langle (p(t)-p(0))^2\rangle `$. This function is related to the momentum autocorrelation function<sup>10</sup> via $`E(t)=2(C_p(0)-C_p(t))`$. Its time derivative will be denoted by $`D(t)`$, and has the asymptotic value $`𝒟=0`$ due to the localization effect. It has been found<sup>10</sup> that dynamical correlations in the QKR problem decay exponentially on time scale $`t^{}`$ while on longer time scales a slower power-law decay is observed. Consequently $`D(t)=\{\begin{array}{cc}D_0e^{-\frac{t}{t^{}}}& \text{for}t<𝒪(t^{})\hfill \\ cD_0(\frac{t^{}}{t})^{1+\beta }& \text{for}𝒪(t^{})<t\hfill \end{array}`$ (10) with $`D_0\approx \frac{1}{2}K^2`$, $`t^{}\approx 2\xi `$, $`\beta \approx 0.75`$ and $`c\approx 0.5`$. More details, including analytical considerations, may be found in Ref. 10. In the presence of noise coherence is destroyed. The decay probability $`P(t)`$ of a quasienergy eigenstate as a function of time may be calculated using a leading order perturbative calculation<sup>9,10</sup>. For the QKR the decay rate is constant, namely $`\dot{P}(t)={\displaystyle \frac{1}{\mathrm{}^2}}\nu .`$ (11) For the QKP one obtains $`\dot{P}(t)={\displaystyle \frac{1}{\mathrm{}^2}}\nu {\displaystyle \underset{\tau =-t}{\overset{t}{\sum }}}C_p(\tau )(t-|\tau |).`$ (12) For $`t^{}\ll t`$ the behaviour is roughly $`P(t)\sim \nu \xi ^{2+\beta }t^{3-\beta }`$ in the latter case. These results hold as long as $`P(t)\ll 1`$.
However, if we assume that $`P(t)`$ is a function of a single scaled variable $`\frac{t}{t_c}`$, then the perturbative result suggests that for the QKR problem $`t_c^{QKR}=\mathrm{}^2\frac{1}{\nu }`$, which is the inverse of the decay rate and agrees with our heuristic expectation. For the QKP one obtains $`t_c^{QKP}\sim (\frac{1}{\xi ^{2+\beta }}\frac{1}{\nu })^{\frac{1}{3-\beta }}`$, which coincides with the heuristic result only if we assume very strong dynamical correlations ($`\beta \approx 0`$), which is not correct since $`\beta \approx 0.75`$. Note that the latter results are as exact as leading order perturbation theory permits. We consider now diffusion in the presence of weak noise ($`t^{}\ll t_c`$) using a formal approach. A derivation<sup>10</sup> which is based on leading order perturbation theory leads to the result $`𝒟\approx {\displaystyle \underset{t=0}{\overset{\mathrm{\infty }}{\sum }}}\dot{P}(t)D(t).`$ (13) This expression may be trusted only if it is dominated by the short-time terms (those with $`t\ll t_c`$). This would always be the case if dynamical correlations possessed a short-range nature. Specifically, if $`D(t)`$ decayed exponentially on the relatively short time scale $`t^{}`$, then the sum in (10) would be dominated by the terms at its head, whose number is of the order $`t^{}`$. Evidently $`𝒟`$ would then be proportional to the intensity of the noise. A non-trivial dependence of the form $`𝒟\sim \nu ^\alpha `$ with $`\alpha \ne 1`$ is therefore a manifestation of long-range dynamical correlations. The sum in (10) is then necessarily dominated by the long-time terms and consequently the perturbative estimate for $`𝒟`$ cannot be trusted any more. In the case of the QKR model one easily finds that, in spite of the power-law decay of the long-time terms, the sum (10) is dominated by the short-time terms. Substitution of (7) and (8) into (10) then leads to the heuristic formula (6).
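The short-time versus long-time competition in the perturbative sum can be checked numerically with the model forms quoted in the text: the piecewise $`D(t)`$ with $\beta \approx 0.75$ and, for the QKP, $`\dot{P}(t)\propto t^{2-\beta }`$. The sketch below (all prefactors and the ratio $t_c/t^{}$ are arbitrary illustrative choices) evaluates the fraction of the sum contributed by the tail:

```python
import numpy as np

beta, c, D0, t_star = 0.75, 0.5, 50.0, 60.0
t_c = 50 * t_star                      # weak noise regime: t* << t_c
t = np.arange(1, int(t_c) + 1, dtype=float)

# D(t): exponential decay up to ~t*, power-law tail c*D0*(t*/t)^(1+beta) beyond
D_t = np.where(t < t_star,
               D0 * np.exp(-t / t_star),
               c * D0 * (t_star / t) ** (1 + beta))

# Pdot(t) for the QKP: P(t) ~ t^(3-beta) gives Pdot ~ (3-beta) t^(2-beta);
# normalised here so that P(t_c) = 1, i.e. full decay by t = t_c.
Pdot = (3 - beta) * t ** (2 - beta) / t_c ** (3 - beta)

terms = Pdot * D_t
frac_long = terms[t > t_star].sum() / terms.sum()
```

With these forms the individual terms fall off only like $t^{1-2\beta }=t^{-1/2}$, so `frac_long` is close to one: the sum is dominated by the long-time terms, which is exactly why the leading-order estimate cannot be trusted for the QKP.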
Figure 1 illustrates a comparison of the numerical results (filled squares) with the analytical estimate (smooth curve, no fitting parameters). We turn now to discuss the QKP case. Here the behaviour of the terms in the sum (10) that satisfy $`t^{}\ll t\ll t_c`$ is $`\dot{P}(t)D(t)=(3-\beta )cD_0{\displaystyle \frac{(t^{})^{1+\beta }}{(t_c)^{3-\beta }}}t^{1-2\beta },`$ (14) where we have used (7) and (9). This behaviour (provided $`\beta <1`$) indicates that most of the contribution to $`𝒟`$ in (10) comes from the long-time terms, with $`t`$ of the order $`t_c`$. This observation is supported by the comparison of the numerical<sup>12</sup> results (Figure 1, filled circles) with an analytical estimate that takes into account only the short-time contribution (dashed curve, no fitting parameters). One may try the following extrapolation scheme in order to estimate the dominant contribution of the long-time terms to the diffusion coefficient: (a) to assume that (10) nevertheless holds, (b) to assume that (9) holds for $`t<t_c`$ while $`\dot{P}(t)=0`$ for $`t_c<t`$. One then obtains $`𝒟=c^{}(\frac{t^{}}{t_c})^{1+\beta }D_0`$ instead of (6), where $`c^{}`$ is a prefactor of order unity. Consequently $`𝒟\sim \nu ^\alpha `$ with $`\alpha =\frac{1+\beta }{3-\beta }`$. For $`0<\beta `$ this result differs from the heuristic one. It predicts that $`\alpha `$ is larger than $`\frac{1}{3}`$, namely $`\alpha \approx 0.78`$. Unfortunately, the latter value does not agree with the numerical results, which leads to the conclusion that leading order perturbation theory is not sufficient to obtain a quantitative description of the effect. The QKP problem is a prototype example that illustrates destruction of coherence via a spreading mechanism. This mechanism is operative in systems where the noise does not respect a symmetry that is responsible for the localization.
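The quoted value $\alpha \approx 0.78$ is just the arithmetic of the extrapolation formula with the measured $\beta $:

```python
beta = 0.75                              # decay exponent quoted in the text
alpha_heuristic = 1.0 / 3.0              # from the random-walk picture
alpha_extrapolated = (1 + beta) / (3 - beta)   # = 7/9 ~ 0.78
```

Both predictions bracket neither the measured $0.35<\alpha <0.38$ nor each other tightly, which is the text's point: the true exponent sits close to the heuristic value, while the perturbative extrapolation overshoots badly.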
In the QKP problem, due to the translational symmetry of the kicking potential, only states that have finite momentum separation are coupled by the kicks. This feature is shared by the recently studied highly excited H-atom that is subject to a monochromatic microwave electric field<sup>6</sup>. The high energy levels of the undriven H-atom are very dense, but only photon-distant states are coupled by the interaction with the field. It follows that this problem reduces locally to a generalized QKP problem with finite $``$ rather than a QKR problem. Generic noise will induce diffusion to neighbouring levels that play no significant role in the dynamics if the noise is absent. If the time scale that is required for this diffusion is much less than $`t_c^{QKP}`$ then we may expect the spreading mechanism for destruction of coherence to become effective. Our results therefore suggest that if the H-atom is prepared in a very highly excited state, then a new behaviour, different from the one that has been reported in Ref. 7, may be found. Namely, the ionization time will not in general be inversely proportional to the variance of the noise. This subject obviously deserves a systematic study. Indeed Fishman and Shepelyansky<sup>13</sup> have considered the effect of noise on ionization and pointed out that there are indications for the manifestation of the non-perturbative mechanism in experiments on Rubidium atoms that have been carried out recently by the Munich group<sup>14</sup>. Note added (c): Further discussion of the perturbative and the non-perturbative mechanisms for destruction of coherence, in a more general context, may be found in the paper ”Quantal Brownian Motion - Dephasing and Dissipation” (D. Cohen, J. Phys. A 31, 8199 (1998)). It should be emphasized that the essential ingredient for the manifestation of the non-perturbative mechanism is the possibility of exchange of relatively small quanta of momentum between the particle and the environment.
It should be possible to realize such a type of noise in e.g. Raizen’s experiments by introducing a noisy field with small $`q`$ components ($`q=`$ wavenumber in the relevant direction). The emphasis on ‘symmetry breaking’ in the above 1991 version of the Letter is somewhat misleading. I thank S. Fishman for useful criticism, D.L. Shepelyansky for fruitful discussions, and T. Dittrich, F.M. Izrailev and E. Shimshony for interesting conversations. I also thank R. Graham and F. Haake for their hospitality at Universitat-GHS-Essen. This work was supported in part by the U.S.-Israel Binational Science Foundation (BSF), and by the European Science Foundation (ESF). 1. A.J. Lichtenberg and M.A. Lieberman, Regular and Stochastic Motion, (Springer, Berlin, 1983). 2. G. Casati, B.V. Chirikov, F.M. Izrailev and J. Ford, in Stochastic Behaviour in Classical and Quantum Hamiltonian Systems, Vol. 93 of Lecture Notes in Physics, edited by G. Casati and J. Ford (Springer, N.Y. 1979), p. 334. 3. S. Fishman, D.R. Grempel and R.E. Prange, Phys. Rev. Lett. 49, 509 (1982). D.R. Grempel, R.E. Prange and S. Fishman, Phys. Rev. A 29, 1639 (1984). S. Fishman, R.E. Prange, M. Griniasty, Phys. Rev. A 39, 1628 (1989). S. Fishman, D.R. Grempel and R.E. Prange, Phys. Rev. A 36, 289 (1987). 4. B.V. Chirikov, F.M. Izrailev and D.L. Shepelyansky, Sov. Sci. Rev. 2C, 209 (1981). D.L. Shepelyansky, Physica 28D, 103 (1987). 5. I thank D.L. Shepelyansky for suggesting this connection. 6. J.E. Bayfield and P.M. Koch, Phys. Rev. Lett. 48, 711 (1982). J.G. Leopold and I.C. Percival, J. Phys. B 12, 709 (1979). R. Blumel and U. Smilansky, Z. Phys. D 6, 83 (1987). G. Casati, I. Guarneri and D.L. Shepelyansky, IEEE J. Quant. Elect. 24, 1240 (1988). 7. R. Blumel, R. Graham, L. Sirko, U. Smilansky, H. Walther and K. Yamada, Phys. Rev. Lett. 62, 341 (1989). 8. E. Ott, T.M. Antonsen Jr. and J.D. Hanson, Phys. Rev. Lett. 53, 2187 (1984). 9. D. Cohen, preprint (1991). Note: published in J. Phys. A 27, 4805 (1994). 10.
D. Cohen, Phys. Rev. A 44, 2292 (1991). 11. D. Cohen, Phys. Rev. Lett. 67, 1945 (1991); Phys. Rev. A 43, 639 (1991). 12. I thank D.L. Shepelyansky for a useful idea concerning these simulations. 13. S. Fishman and D.L. Shepelyansky, Europhys. Lett. 16(7), 643-648 (1991). 14. A. Buchleitner, R. Mantegna and H. Walther, presented at the Marseille Conference on Semiclassical Methods in Quantum Chaos and Solid State Physics, and to be published.
# Topology in QCD ## 1 Introduction Topology is important in QCD largely due to the way it influences the propagation of quarks. Here it is the zero-modes of $`i\overline{)}D`$ that are special; not the exact zero modes, whose number per unit volume $$\frac{\overline{|Q|}}{V}\sim \frac{1}{\sqrt{V}}\stackrel{V\rightarrow \mathrm{\infty }}{\longrightarrow }0,$$ (1) but rather the (mixed) would-be zero modes that would have been exact zero-modes if their parent topological charges had been isolated. The total number of these is certainly $`\propto V`$, if only from the small instantons whose density is calculable analytically. Lattice calculations find, in both SU(2) and SU(3), a substantial value for $$\frac{\langle Q^2\rangle }{V}\simeq (200MeV)^4\simeq \frac{1}{fm^4}$$ (2) and this suggests that a lot of the topological charge density tends to cluster in lumps that are uncorrelated at larger distances – just like a ‘gas’ of instantons. So, for convenience, this is the language I shall use in this talk. I will start with the aspect that is no doubt on the firmest theoretical ground: the large mass of the $`\eta ^{}`$ and the value of the topological susceptibility $`\chi _t\equiv \langle Q^2\rangle /V`$. I dwell on the quenched case because it provides one area in which the lattice calculations are in very good shape, even if there is not much new this year (probably because they are in such good shape …). I will then move on to the possible role of instantons in driving the spontaneous breaking of chiral symmetry. Here the exciting news is that we have finally got lattice fermions that are good enough to address this question realistically. More speculative is the influence of topology on hadrons. This provides a motivation for trying to extract the properties of the ‘gas’ of instantons in the vacuum. There has been some interesting new work on the latter during the past year. Finally I turn to the role of instantons in confinement. The usual view is that there is no such role. I will discuss a recent calculation that claims the opposite.
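The equivalence $(200\,MeV)^4\simeq 1/fm^4$ in eqn (2) is just natural-units arithmetic with the conversion constant $\hbar c\approx 197.3\,MeV\cdot fm$; a one-line check:

```python
hbar_c = 197.327                       # MeV * fm (standard conversion constant)
chi_t_fm4 = (200.0 / hbar_c) ** 4      # (200 MeV)^4 expressed in fm^-4
# comes out ~ 1.06 fm^-4, i.e. about one unit of topological charge per fm^4
```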
This will be the one area where fermions don’t appear at all. ## 2 $`Q`$, $`\chi _t`$ and $`m_\eta ^{}`$ Instantons provide a resolution of the $`\eta ^{}`$ puzzle. The mass of the $`\eta ^{}`$ can be related to the strength of topological fluctuations: $$\chi _t\equiv \frac{\langle Q^2\rangle }{V}\simeq \frac{m_{\eta ^{}}^2f_\pi ^2}{2N_f}\simeq (180MeV)^4.$$ (3) This is to leading order in $`N_c`$ and so the value of $`\chi _t`$ is the quenched value. The practical application (i.e. $`MeV`$ units) assumes that $`N_c=3`$ is close to $`N_c=\mathrm{\infty }`$ and this is also something that one needs to confirm. Let me start with some reassuring comments about calculating the topological charge $`Q`$ of lattice gauge fields. We suppose the lattice spacing $`a`$ is very small. Consider topological charges which are localised within a core of radius $`\rho `$. For $`a\ll \rho \ll 0.5fm`$ these charges have an analytically calculable density, which turns out to be $`\propto \rho ^6`$ for SU(3). Large scales, say $`\rho \gtrsim 0.5fm`$, are a non-perturbative problem. At the ultraviolet scale, $`\rho \sim a`$, lattice artifacts may dominate. Now, the point to note is that during the Monte Carlo each step changes only one link matrix, i.e. the fields within a volume $`\delta v\sim a^4`$. So the only way we can change the value of $`Q`$ is for the core of the topological charge to shrink, over many Monte Carlo steps, from say $`\rho \sim 0.2fm`$ down to $`\rho \sim a`$ and then within a hypercube and so out of the lattice. Now because the density is $`\propto \rho ^6`$, the region $`a\ll \rho \ll 0.1fm`$ (say) will normally (in our finite lattice volume of a couple of fermi) have no instantons at all. So as an instanton traverses this topological desert it will become very visible – particularly once $`\rho \sim \mathrm{few}\times a`$. At this point the core will stick out above the $`O(1/\beta )`$ UV fluctuations in both the action and the topological charge densities.
Now, if we know when $`Q`$ changes then that is all we need to calculate $`\langle Q^2\rangle `$ (in a long run); and we can calculate $`Q`$ e.g. by averaging over long sequences of configurations. Clearly the same argument applies to cooling, and because cooling rapidly dampens the UV fluctuations, the above scenario can work for modest $`a`$ with only a couple of cooling sweeps being performed on each Monte Carlo field in the sequence. So as $`a\rightarrow 0`$ we can certainly calculate $`Q`$ and $`\langle Q^2\rangle `$ for lattice gauge fields. At finite $`a`$ there will typically be $`O(a^2)`$ corrections whose size will depend on the method used. Note that things will be worse in SU(2) than in SU(3). In SU(2) the density of small instantons decreases only as $`\rho ^{7/3}`$: that is to say, the decoupling of the UV scale occurs much more slowly. Although one often starts with SU(2) calculations because they are much faster, I think that, for the above reason, this is a false economy. I will now summarise a number of different calculations of $`\chi _t^{1/4}/\sqrt{\sigma }`$. If we use the usual form of the lattice topological charge $$Q_L=\frac{1}{32\pi ^2}\underset{x}{\sum }\epsilon _{\mu \nu \rho \sigma }Tr\{U_{\mu \nu }(x)U_{\rho \sigma }(x)\}$$ (4) (then $`\pm \mu `$ etc. antisymmetrised) and if we average $`Q_L^2`$ over fields containing a topological charge $`Q`$ we obtain $$\overline{Q_L^2}=Z^2(\beta )Q^2+\zeta (\beta ).$$ (5) These additive and multiplicative renormalisations can be either calculated analytically in powers of $`1/\beta `$, or numerically. (Note that although $`\zeta (\beta )`$ diverges as $`1/a^4`$, in practical calculations it is not much larger than $`Q^2`$ and so the ‘subtraction’ is under much better control than in the corresponding gluon condensate calculations.) Alternatively one can cool the fields, and then $`Z\simeq 1`$, up to $`O(a^2/\rho ^2)`$ lattice corrections, and $`\zeta \simeq 0`$, after a couple of cools.
I now take a number of quite different lattice calculations (using only those with at least 3 $`\beta `$ values) and compare their continuum extrapolations. I extrapolate the dimensionless ratio $`\chi _t^{1/4}/\sqrt{\sigma }`$ ($`\sigma `$ = string tension) $$\frac{\chi _t^{1/4}(a)}{\sqrt{\sigma }(a)}=\frac{\chi _t^{1/4}(0)}{\sqrt{\sigma }(0)}+ca^2\sigma $$ (6) using a common set of values for $`\sigma `$, so that any differences are differences in the calculation of topology, not of the scale. I now list the calculations I use, and the results of the corresponding continuum extrapolations. • SU(3): Pisa, smeared version of $`Q_L(x)`$, calculated directly from Monte Carlo field averages using eqn 5 ($`5.9\le \beta \le 6.1`$): $`\chi _t^{1/4}/\sqrt{\sigma }=0.464(23)`$. Boulder, algebraic lattice $`Q`$ on RG smoothed fields ($`5.85\le \beta \le 6.1`$): $`\chi _t^{1/4}/\sqrt{\sigma }=0.456(23)`$. Oxford, $`Q_L`$ on cooled fields ($`5.7\le \beta \le 6.2`$): $`\chi _t^{1/4}/\sqrt{\sigma }=0.449(17)`$. UKQCD, $`Q_L`$ on cooled fields ($`6.0\le \beta \le 6.4`$): $`\chi _t^{1/4}/\sqrt{\sigma }=0.448(50)`$. • SU(2): Pisa, smeared version of $`Q_L(x)`$, calculated directly from Monte Carlo field averages using eqn 5 ($`2.44\le \beta \le 2.57`$): $`\chi _t^{1/4}/\sqrt{\sigma }=0.480(23)`$. Boulder, lattice $`Q`$ on RG mapped fields ($`2.4\le \beta \le 2.6`$): $`\chi _t^{1/4}/\sqrt{\sigma }=0.528(21)`$. Oxford-Liverpool, $`Q_L`$ on cooled fields ($`2.2\le \beta \le 2.6`$): $`\chi _t^{1/4}/\sqrt{\sigma }=0.480(12)`$. Oxford, blocked geometric $`Q`$ directly on Monte Carlo fields ($`2.3\le \beta \le 2.6`$): $`\chi _t^{1/4}/\sqrt{\sigma }=0.480(18)`$. Zurich, version of $`Q_L`$ on improved-cooled fields ($`2.4\le \beta \le 2.6`$): $`\chi _t^{1/4}/\sqrt{\sigma }=0.501(45)`$. All these continuum limits are consistent with each other despite the wide variety of methods being used.
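The extrapolation of eqn (6) is an ordinary linear fit in the variable $a^2\sigma $. The sketch below illustrates the procedure with invented data points (the slope and the $a^2\sigma $ values are arbitrary; only the continuum value 0.455 is taken from the text):

```python
import numpy as np

# Synthetic illustration of eqn (6): chi^(1/4)/sqrt(sigma) = r + c * a^2*sigma
a2_sigma = np.array([0.04, 0.06, 0.09, 0.12])     # hypothetical a^2*sigma values
ratio = 0.455 - 0.3 * a2_sigma                    # hypothetical lattice data

c_fit, r_continuum = np.polyfit(a2_sigma, ratio, 1)
# r_continuum is the a -> 0 intercept, i.e. the continuum chi_t^(1/4)/sqrt(sigma)
```

In a real analysis each point carries its own statistical error and the fit would be error-weighted; the noise-free example above just shows how the intercept recovers the continuum value.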
They provide the following conservative estimates: $$\frac{\chi _t^{1/4}}{\sqrt{\sigma }}=\{\begin{array}{cc}0.455\pm 0.015\hfill & SU(3)\hfill \\ 0.487\pm 0.012\hfill & SU(2)\text{.}\hfill \end{array}$$ (7) We observe that the two values are close to each other, suggesting that (for the pure gauge theory) SU(3) is indeed close to SU($`\mathrm{\infty }`$). If we plug in the value $`\sqrt{\sigma }\simeq 440\pm 38MeV`$, then we find $`\chi _t^{1/4}=200\pm 18MeV`$ for SU(3). As we have seen, this is roughly the density of topological fluctuations that is needed to provide the $`\eta ^{}`$ with its observed mass. One can of course also calculate $`\chi _t`$ for ‘full QCD’. We expect that $`\langle Q^2\rangle \rightarrow 0`$ as $`m_q\rightarrow 0`$, since the zero modes ensure that $`det`$$`i\overline{)}D`$$`=0`$ for $`Q\ne 0`$. In fact we know more: the anomalous Ward identities tell us that $$\chi _t=\frac{m_\pi ^2f_\pi ^2}{n_f^2}+O(m_q^2)\propto m_q$$ (8) if we are in the phase in which chiral symmetry is spontaneously broken. By contrast, in a chirally symmetric phase we expect $`\chi _t\propto m_q^{n_f}`$, and, of course, in the quenched case $`\chi _t\propto m_q^0`$. So we can test the lattice calculations against this relation. One has to be careful because the lattice spacing $`a`$ is a function of both $`\beta `$ and $`m_q`$. (For a large quark mass the running of the coupling will include the quark only for scales below $`O(1/m_q)`$.) So in testing eqn 8 we should express all the quantities in terms of some physical quantity that is not expected to vary strongly with $`m_q`$, e.g. $`r_0`$ or $`\sqrt{\sigma }`$. The ($`n_f=2`$) UKQCD calculations reported at this meeting go one better by tuning $`\beta `$ with $`m_q`$ so that $`r_0/a`$ is independent of $`m_q`$. Such a calculation separates the $`m_q`$ dependence from any $`a`$ dependence. If one plots $`r_0^4\chi _t`$ versus $`r_0^2m_\pi ^2\propto m_q`$ one finds that the (three) points are consistent with eqn 8, but only if one decreases $`f_\pi `$ by about 20% from its physical value.
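The quoted SU(3) estimate can be checked against a naive error-weighted average of the four individual results listed above (this combination scheme is my own illustration, not the procedure used in the talk, which accounts for correlations and systematics):

```python
import numpy as np

# The four SU(3) continuum values listed in the text: Pisa, Boulder, Oxford, UKQCD
vals = np.array([0.464, 0.456, 0.449, 0.448])
errs = np.array([0.023, 0.023, 0.017, 0.050])

w = 1.0 / errs ** 2                      # inverse-variance weights
mean = np.sum(w * vals) / np.sum(w)
err = 1.0 / np.sqrt(np.sum(w))
# mean ~ 0.454, err ~ 0.011: consistent with the conservative 0.455 +/- 0.015
```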
That, I think, is pretty good. Less pretty are the large statistical errors. Despite the latter, it is clear that the value of $`r_0^4\chi _t`$ has decreased from its quenched value and that a $`m_q^{n_f=2}`$ behaviour is excluded. This contrasts with the $`n_f=2`$ CP-PACS calculation of $`\chi _t/\sigma ^2`$, which strangely finds no $`m_q`$ dependence at all. On the other hand, the $`n_f=2`$ calculation of the Pisa group does show some sign of a decrease as $`m_q\rightarrow 0`$. The meson that (perhaps) most directly reflects topological fluctuations is the $`\eta ^{}`$. CP-PACS has produced a very nice calculation of $`m_\eta ^{}`$. This is a tough calculation because such flavour-singlet hadrons simultaneously need the statistics of glueball calculations and expensive quark propagators. CP-PACS do a direct calculation that finds $`m_\eta ^{}\nrightarrow 0`$ as $`m_q\rightarrow 0`$, in contrast to the $`\sqrt{m_q}`$ Goldstone behaviour one would expect if topological fluctuations were negligible. The value they obtain in the continuum chiral limit is $`m_\eta ^{}=863(86)MeV`$. This is for $`n_f=2`$ and $`n_c=3`$, so it is amusing to note that if we plug into eqn 3 the values $`n_f=2`$, $`f_\pi =93MeV`$ (since this should be insensitive to the strange quark) and $`\chi _t=(200MeV)^4`$ (the lattice value) then we obtain $`m_\eta ^{}\simeq 860MeV`$ as our expectation. This is promising and I hope CP-PACS will pursue this calculation; perhaps in the direction of explicitly showing that it is dominated by the lowest modes of $`i\overline{)}D`$. A related calculation has been performed by UKQCD; and the Pisa group has tried calculating the mass using correlators of the ($`\stackrel{}{p}=0`$) topological charge. ## 3 Chiral symmetry breaking Let $`\rho (\lambda )`$ be the normalised spectral density of $`i\overline{)}D[A]`$ averaged over gauge fields.
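The "amusing" numerical check quoted above follows directly from inverting eqn (3), $m_{\eta ^{}}=\sqrt{2N_f\chi _t/f_\pi ^2}$:

```python
import math

N_f = 2                      # two light flavours, as in the CP-PACS comparison
f_pi = 93.0                  # MeV
chi_t = 200.0 ** 4           # the lattice value (200 MeV)^4, in MeV^4

m_eta_prime = math.sqrt(2 * N_f * chi_t / f_pi ** 2)
# ~ 860 MeV, to be compared with the CP-PACS result 863(86) MeV
```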
Then we can express the chiral condensate as $`\langle \overline{\psi }\psi \rangle `$ $`=`$ $`\underset{m\rightarrow 0}{lim}\underset{V\rightarrow \mathrm{\infty }}{lim}\langle \overline{\psi }\psi \rangle _{m,V}`$ (9) $`=`$ $`\underset{m\rightarrow 0}{lim}\underset{V\rightarrow \mathrm{\infty }}{lim}{\displaystyle \int _0^{\mathrm{\infty }}}{\displaystyle \frac{2m\rho (\lambda ,m)}{\lambda ^2+m^2}}𝑑\lambda `$ $`=`$ $`\pi \rho (0)`$ So chiral symmetry breaking requires a non-zero density of modes at $`\lambda =0`$. Now if we remove all interactions, then each (anti)instanton has a zero-mode and so $`\rho (\lambda )\propto \delta (\lambda )`$. Interactions will spread these modes away from zero; however, instantons are clearly a good first guess if what you want is $`\rho (0)\ne 0`$. Contrast this with the non-interacting limit of the perturbative vacuum – the free theory – where $`\rho (\lambda )\propto \lambda ^3`$. This idea, that instantons might drive chiral symmetry breaking, is an old one. There have been lattice calculations to test this idea. The calculations with staggered fermions in the pure gauge SU(2) vacuum found that the chiral symmetry breaking disappeared if one removed the topological eigenmodes of $`i\overline{)}D[A]`$. While conclusive about the lattice theory, there was a question mark over the continuum theory: despite the lattice spacing being ‘small’ (e.g. $`\beta =2.6`$), the $`|Q|`$ zero-modes were no closer to zero than the other small modes of $`i\overline{)}D[A]`$. This raises doubts about how continuum-like the spectrum of the $`O(V)`$ mixed would-be zero modes is; one really needs the lattice shift in the exact zero-modes to leave them small compared to the other small modes in a reasonably sized box. (The exact $`|Q|\sim \sqrt{V}`$ zero modes become irrelevant in the thermodynamic limit.) This raises doubts about any lattice calculations of the influence of instantons on quarks, e.g. hadron masses as well as chiral symmetry breaking. So the exciting news here is, of course, ‘Ginsparg-Wilson’.
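The Banks-Casher limit in eqn (9) can be verified numerically for any model spectral density with $\rho (0)\ne 0$: as $m\rightarrow 0$ the Lorentzian kernel concentrates at $\lambda =0$ and the integral tends to $\pi \rho (0)$. A minimal sketch, with an invented exponential spectral density:

```python
import numpy as np

rho0, m = 1.0, 1e-3
lam = np.linspace(0.0, 50.0, 1_000_001)      # grid fine enough to resolve width m
rho = rho0 * np.exp(-lam)                    # model density with rho(0) = rho0

f = 2 * m * rho / (lam ** 2 + m ** 2)
condensate = np.sum(0.5 * (f[1:] + f[:-1]) * (lam[1] - lam[0]))  # trapezoidal rule
# -> pi * rho(0) as m -> 0, up to O(m log(1/m)) corrections
```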
In particular the Columbia group has pursued the domain wall variant and has produced some very pretty calculations showing that, even with a modest 5’th dimension, one obtains ‘exact’ lattice zero-modes that are very much smaller than the other small modes of the Dirac spectrum in a reasonably sized box. (In practice they observe the $`1/m`$ behaviour in $`\langle \overline{\psi }\psi \rangle _{m,V}`$ that one obtains from such modes.) This provides an explicit demonstration that controlled calculations of the influence of topology on continuum hadronic physics can now be done. There has also been related work with overlap fermions, and there has been a great deal of comparison with Random Matrix Theory, as well as other model calculations, which I hope will be reviewed elsewhere. ## 4 The instanton content of the vacuum There has been a new calculation of the topological structure of the SU(2) vacuum. The main novelty here is a modified cooling algorithm that is designed to cool out to a specified cooling radius $`r_c`$. One finds that while some features, such as the average instanton size, vary weakly with $`r_c`$, other quantities, such as the instanton density and the average nearest instanton–anti-instanton distance, vary rapidly with $`r_c`$. (Not unlike the conclusions from using ordinary cooling.) This is largely bad news if what you want is to obtain the detailed properties of the instanton ‘gas’ in the original vacuum, so that you can provide an input into phenomenological instanton calculations. However, one thing that is not understood is how much of this apparent variation with cooling is real and how much of it is actually the fault of the ‘pattern recognition’ algorithms that turn the topological charge density into an instanton ensemble. Here a gleam of hope comes from a reanalysis. Clearly in $`n_c`$ usual cooling sweeps one expects roughly $`\overline{r_c}\sim a\sqrt{n_c}`$.
Plotting the data for all $`\beta `$ and $`n_c`$ against the corresponding scaling variable, one finds a very nice scaling. More importantly, they find a range of $`r_c`$ where the instanton properties become independent of $`r_c`$. This range is narrow, so it needs more work. But it provides some hope … The instanton gases found on the lattice are usually denser, and with larger average instanton sizes, than the instanton liquid models would like. So it is interesting that a recent calculation of the quark physics (in a ‘toy model’) from the lattice instanton ensembles finds that the important part of the low-$`\lambda `$ spectrum looks like that of a dilutish gas of narrower instantons. There is a simple reason for this: a large instanton has a large zero-mode that has a correspondingly small density. It will therefore have a small value when integrated over the small volume where the zero-mode of the small instanton resides. This leads to a small mixing between large and small instantons: they approximately decouple. This mechanism provides a possibility for reconciling the lattice and the instanton liquid. Note also related work along these lines. ## 5 Instantons and confinement An isolated instanton affects a large Wilson loop weakly; it merely renormalises the Coulomb potential. So one expects that an instanton ‘gas’ (with no long range order) will not disorder a Wilson loop strongly enough to produce an area decay. The recent claim that random ensembles of instantons do produce linear confinement is therefore surprising. However, we note that our analytic intuition really holds for Wilson loops that avoid instanton cores. The density distributions used there are $`D(\rho )\propto 1/\rho ^5`$ or $`1/\rho ^3`$. This corresponds to the ‘packing fraction’ $`\int 𝑑\rho D(\rho )\rho ^4`$ diverging! So these are very dense gases, and the Wilson loop will, throughout its length, pass through the middle of densely overlapping instanton cores.
It may be that this will disorder the Wilson loop sufficiently to confine. This is relevant since lattice calculations suggest that instantons in the real vacuum are dense and highly overlapping. It is of course important to check that the field configurations being used, generated by the approximation of adding the individual $`A_I(x)`$ (in singular gauge), do indeed accurately represent a ‘gas’ of topological charges, and that the confining disorder is not being produced by the breakdown of this standard linear addition ansatz. These and related directions will be interesting to pursue further.
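The divergence of the packing fraction quoted above is elementary to check symbolically. The sketch below uses sympy, with unit normalisation of $`D(\rho )`$ and an arbitrary lower cutoff at $`\rho =1`$ (both are assumptions for illustration; the divergence comes from the large-$`\rho `$ end):

```python
import sympy as sp

rho = sp.symbols("rho", positive=True)

# Packing fraction integrand D(rho)*rho^4 for the two size
# distributions D(rho) ~ 1/rho^5 and ~ 1/rho^3 quoted above.
for D in (rho**-5, rho**-3):
    pf = sp.integrate(D * rho**4, (rho, 1, sp.oo))
    print(D, "->", pf)   # both integrals diverge (sympy returns oo)
```

For $`1/\rho ^5`$ the integrand is $`1/\rho `$ (logarithmic divergence), for $`1/\rho ^3`$ it is $`\rho `$ (power divergence), so both ensembles are indeed infinitely dense in this sense.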
no-problem/9909/hep-ph9909439.html
ar5iv
text
# References A total neutrino conversion in the Earth without a resonance. M. V. Chizhov<sup>1</sup><sup>1</sup>1Permanent address: Centre for Space Research and Technologies, Faculty of Physics, University of Sofia, 1164 Sofia, Bulgaria The Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, 34014 Trieste, Italy ## Abstract The neutrino oscillation enhancement in the Earth-type medium mantle – core – mantle is discussed. It is noted that the total conversion is possible both for a resonant matter density and a nonresonant one. A useful parameterization, for the representation of the transition probability for neutrinos and antineutrinos in a single plot, is proposed. The matter effect of the Earth on neutrino oscillations, in the interesting region of oscillation parameters for the solar and atmospheric neutrinos, is widely discussed now . Due to the specific multilayer structure of the Earth, a new effect of an enhancement of the oscillations of massive neutrinos is possible. In contrast to the MSW resonance effect, this new effect occurs due to a maximal constructive interference among the transition amplitudes which contribute to the total amplitude in the multilayer case. A good approximation for the Earth interior is a two-layer model with two basic structures: the mantle and the core. These structures have slowly increasing densities from the surface of the Earth to its center, with a sharp jump at their border. Therefore, we can consider the mantle and the core densities on the neutrino trajectories as different constants. This assumption leads to simple analytical formulae for the neutrino transition probability (see, for example, ). In the case of oscillations between two ultra-relativistic neutrino species in vacuum, there are two parameters: the vacuum mixing angle $`\vartheta _0`$ and the ratio between the neutrino squared mass difference and the neutrino energy $`\mathrm{\Delta }m^2/E`$.
We assume for definiteness that $`\mathrm{\Delta }m^2>0`$ and $$0<\vartheta _0\le \pi /4,$$ (1) i.e. $`\mathrm{cos}(2\vartheta _0)>0`$. If the vacuum mixing angle $`\vartheta _0`$ is small, the neutrino transition probability is suppressed. Therefore, in this case the experimental search for oscillations is extremely difficult. On the other hand, a medium can affect oscillations and, in particular, enhance them. For oscillations in a medium, a difference $`V_{\alpha \beta }`$ ($`\alpha ,\beta =e,\mu ,\tau ,s`$) between the effective potentials of different neutrino species $`\nu _\alpha `$ and $`\nu _\beta `$ can arise. For neutrinos it can be either positive $$V_{e\mu }=\sqrt{2}G_FN_e>0,$$ (2) or negative $$V_{\mu s}=-\sqrt{2}G_FN_n/2<0,$$ (3) where $`N_e`$ and $`N_n`$ are the electron and neutron number densities of the medium. For antineutrinos $`V_{\alpha \beta }`$ is replaced by $$V_{\overline{\alpha }\overline{\beta }}=-V_{\alpha \beta }.$$ (4) The matter mixing angle $`\vartheta `$ is given by the well-known expression $$\mathrm{cos}(2\vartheta )=\frac{1}{\mathrm{\Delta }E}\left(\frac{\mathrm{\Delta }m^2}{2E}\mathrm{cos}(2\vartheta _0)-V_{\alpha \beta }\right),$$ (5) where $$\mathrm{\Delta }E=\frac{\mathrm{\Delta }m^2}{2E}\sqrt{\left(\mathrm{cos}(2\vartheta _0)-\frac{2EV_{\alpha \beta }}{\mathrm{\Delta }m^2}\right)^2+\mathrm{sin}^2(2\vartheta _0)}$$ (6) is the difference between the energies of the two neutrino energy-eigenstates in the medium. In order to present a comparison of the probabilities of neutrinos and antineutrinos in a single figure, we extend formally the range of the vacuum mixing angles (1) to the region $$0<\vartheta _0<\pi /2$$ (7) in such a way that $`\mathrm{cos}(2\vartheta _0)<0`$ corresponds to the antineutrino case, keeping the same $`V_{\alpha \beta }`$ for neutrinos and antineutrinos.
This is possible, because in the two-species case we can obtain the antineutrino evolution equation from the neutrino one $$i\frac{\mathrm{d}}{\mathrm{d}t}\left(\begin{array}{c}\nu _\alpha \\ \nu _\beta \end{array}\right)=\frac{\mathrm{\Delta }E}{2}\left(\begin{array}{cc}-\mathrm{cos}(2\vartheta )& \mathrm{sin}(2\vartheta )\\ \mathrm{sin}(2\vartheta )& \mathrm{cos}(2\vartheta )\end{array}\right)\left(\begin{array}{c}\nu _\alpha \\ \nu _\beta \end{array}\right)$$ (8) by the formal substitutions: $`\mathrm{cos}(2\vartheta _0)\to -\mathrm{cos}(2\vartheta _0)`$ and $`\nu _\alpha \to \overline{\nu }_\beta `$, $`\nu _\beta \to \overline{\nu }_\alpha `$. Since $`P_{\alpha \beta (\overline{\alpha }\overline{\beta })}=P_{\beta \alpha (\overline{\beta }\overline{\alpha })}`$, we can plot the continuous total transition probability $`P_{\alpha \beta }`$ for neutrinos and antineutrinos in a single figure, using the extended region (7) for the vacuum mixing angles. When the MSW resonance condition $$\frac{\mathrm{\Delta }m^2}{2E}\mathrm{cos}(2\vartheta _0)=V_{\alpha \beta }$$ (9) is fulfilled, the matter mixing angle can be maximal, $`\vartheta =\pi /4`$, even in the case of a small vacuum mixing angle $`\vartheta _0`$. In this case the neutrino transition probability can reach its maximal value $`P_{\alpha \beta }=1`$. It can be realized either for neutrinos or for antineutrinos. In a constant density homogeneous medium the maxima of the neutrino transition probability lie on the curve (9) in the $`(\mathrm{cos}(2\vartheta _0),\mathrm{\Delta }m^2/E)`$ plane. The positions of the maxima depend on the distance $`X`$, travelled by the neutrinos or antineutrinos, and are defined by the phase condition $$\varphi =\mathrm{\Delta }EX=(2k+1)\pi ,k=0,1,2,\mathrm{\dots }.$$ (10) When the (anti)neutrinos arrive at the detector at nadir angles greater than $`33^{\circ }`$, they pass only through the Earth mantle, which is assumed to have a constant density $`\rho _m\approx 4.5`$ g/cm<sup>3</sup>.
Therefore, we can consider it as a simple case of neutrino propagation in a constant density homogeneous medium. The contours of the analytically calculated transition probability, at the nadir angle $`h=70^{\circ }`$ for different oscillation parameters in the cases of $`\underset{\mu }{\overset{(-)}{\nu }}\to \underset{\tau }{\overset{(-)}{\nu }}`$ and $`\underset{\mu }{\overset{(-)}{\nu }}\to \underset{s}{\overset{(-)}{\nu }}`$ oscillations, are shown in Fig. 1. In the case of $`\underset{\mu }{\overset{(-)}{\nu }}\to \underset{\tau }{\overset{(-)}{\nu }}`$ oscillations, $`V_{\mu \tau }=0`$. This corresponds to the vacuum case, in which a total conversion $`P_{\mu \tau }=1`$ takes place only at the maximal vacuum mixing angle $`\vartheta _0=\pi /4`$, i.e. $`\mathrm{cos}(2\vartheta _0)=0`$ (Fig. 1a). The transition probability has a symmetrical form and there is no difference between the neutrino and antineutrino cases. The latter case of $`\underset{\mu }{\overset{(-)}{\nu }}\to \underset{s}{\overset{(-)}{\nu }}`$ oscillations allows a total resonance conversion for antineutrino oscillations $`\overline{\nu }_\mu \to \overline{\nu }_s`$, and a suppression of the transition probability for the $`\nu _\mu \to \nu _s`$ case (Fig. 1b). This feature enables us to distinguish between these cases for the atmospheric neutrinos. In the following we consider just the latter case, which is affected by the matter distribution. In the case of smaller nadir angles $`h\lesssim 33^{\circ }`$, (anti)neutrinos pass also through the Earth core, whose density we assume to be constant, $`\rho _c\approx 11.5`$ g/cm<sup>3</sup>. This leads to simple analytical expressions for the neutrino transition probability, which have been analyzed in . Due to the strong interference between the amplitudes in the mantle and the core, a total conversion even for neutrinos, $`P_{\mu s}=1`$, can occur. At nadir angles just below $`33^{\circ }`$ the absolute maxima move away from the curve (9) and their interpretation in terms of MSW resonances becomes meaningless.
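For the single-layer (mantle-only) propagation just described, the transition probability takes the standard two-flavour constant-density form $`P=\mathrm{sin}^2(2\vartheta )\mathrm{sin}^2(\mathrm{\Delta }EX/2)`$, which the text refers to but does not write out. A minimal numerical sketch (natural units; all argument values are illustrative, not Earth parameters):

```python
import math

def transition_probability(dm2_over_E, cos2t0, V, X):
    """P = sin^2(2 theta) * sin^2(Delta E * X / 2) for one layer of
    constant potential difference V and path length X (natural units)."""
    x = dm2_over_E / 2.0                        # Delta m^2 / (2E)
    sin2t0 = math.sqrt(1.0 - cos2t0**2)
    dE = math.hypot(x * cos2t0 - V, x * sin2t0)     # eq. (6)
    sin2theta = x * sin2t0 / dE                     # sin(2 theta) in matter
    return sin2theta**2 * math.sin(dE * X / 2.0) ** 2
```

At the resonance of eq. (9), $`V=(\mathrm{\Delta }m^2/2E)\mathrm{cos}(2\vartheta _0)`$, the matter mixing becomes maximal; tuning $`X`$ to the phase condition (10) then gives $`P=1`$, reproducing the maxima on the resonance curve.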
As was shown in , in the three-layer case of the Earth profile the total (anti)neutrino conversion is possible in the infinite two-dimensional region of the oscillation parameters $$region𝒜:\{\begin{array}{c}\mathrm{cos}(2\vartheta _c)\ge 0\hfill \\ \mathrm{cos}(2\vartheta _c-4\vartheta _m)\le 0,\hfill \end{array}$$ (11) where $`\vartheta _m`$ and $`\vartheta _c`$ are the mixing angles in the mantle and the core, correspondingly. For fixed $`\mathrm{\Delta }m^2`$ and vacuum mixing angle $`\vartheta _0`$, the conditions (11) give the allowed values of the neutrino energy $`E`$ at which a total conversion in the Earth is possible. In contrast to the MSW resonance condition (9), where only a single value of $`E`$ is possible, there exists a continuum of different solutions for $`E`$. Moreover, the region $`𝒜`$ is wider than the analogous region in the two-layer case considered in . For $`\rho _c>2\rho _m`$, which is the case for the Earth, the region $`𝒜`$ overlaps the parameter space $`\mathrm{cos}(2\vartheta _0)>0`$, where the MSW resonance condition cannot be satisfied, due to the different signs on the left- and right-hand sides of eq. (9). However, a total neutrino conversion is still possible. This is contrary to the common opinion that in this case the matter suppresses the oscillations and a total neutrino conversion cannot occur. The positions of the absolute maxima in the region $`𝒜`$ are defined by the two conditions on the phases in the mantle $`\varphi _m`$ and in the core $`\varphi _c`$ $$\{\begin{array}{c}\mathrm{tan}\frac{\varphi _m}{2}=\pm \sqrt{-\frac{\mathrm{cos}\left(2\vartheta _c\right)}{\mathrm{cos}\left(2\vartheta _c-4\vartheta _m\right)}},\hfill \\ \mathrm{tan}\frac{\varphi _c}{2}=\pm \frac{\mathrm{cos}\left(2\vartheta _m\right)}{\sqrt{-\mathrm{cos}\left(2\vartheta _c\right)\mathrm{cos}\left(2\vartheta _c-4\vartheta _m\right)}}.\hfill \end{array}$$ (12) In Fig.
2, for example, we show the contours of the transition probability at nadir angle $`h=32.4^{\circ }`$. The rightmost maximum corresponds to a total neutrino conversion in the nonresonance region. The total conversion can take place for $`\nu _\mu \to \nu _s`$ oscillations near the maximal vacuum mixing angle, $`\mathrm{sin}^2(2\vartheta _0)>0.993`$, and in a wide range of the vacuum mixing angles for the resonant case of $`\overline{\nu }_\mu \to \overline{\nu }_s`$ oscillations. It can lead to a specific feature in the nadir angle distribution of the atmospheric neutrinos. In it was noted that at the maximal mixing angle $`\vartheta _0=\pi /4`$, $`\mathrm{\Delta }m^2/E\approx 2\times 10^{-4}`$ eV<sup>2</sup>/GeV and nadir angles near $`30^{\circ }`$, when the special conditions $$\varphi _m=\varphi _c=\pi $$ (13) are approximately satisfied, the enhancement of $`\nu _\mu \to \nu _s`$ oscillations takes place. However, the equalities (13) are not the right conditions for the maximum of the transition probability, in contrast to the opinion of the authors of ref. . According to our approach these conditions correspond to the limiting case, when the absolute maxima lie on the boundary $$\mathrm{cos}(2\vartheta _c-4\vartheta _m)=0$$ (14) of region $`𝒜`$. This curve defines how far from the resonance curve (9) the total neutrino conversion for the Earth-type profile can occur. The enhancement found in is due to the lowest absolute maximum, which is a solution of eqs. (12), near the maximal mixing angle (see Fig. 2). For nonresonant matter oscillations the region, where the total neutrino conversion occurs, becomes maximal in the case of the vacuum – matter – vacuum profile (Fig. 3). The minimal possible vacuum mixing angle, at which the total neutrino conversion takes place, is equal to $`\pi /8`$, i.e. $`\mathrm{sin}^2(2\vartheta _0)\ge 1/2`$.
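The phase conditions can be verified directly by composing the three constant-density evolution operators of the mantle – core – mantle profile. The sketch below assumes the sign conventions in which $`\mathrm{tan}^2(\varphi _m/2)=-\mathrm{cos}(2\vartheta _c)/\mathrm{cos}(2\vartheta _c-4\vartheta _m)`$ (eq. (12)); the mixing angles are illustrative numbers inside region $`𝒜`$, not Earth values:

```python
import math

def layer(theta, phi):
    """SU(2) evolution operator of one constant-density layer:
    U = cos(phi/2)*I - i*sin(phi/2)*(sin(2t)*sx - cos(2t)*sz)."""
    c, s = math.cos(phi / 2), math.sin(phi / 2)
    s2, c2 = math.sin(2 * theta), math.cos(2 * theta)
    return [[c + 1j * s * c2, -1j * s * s2],
            [-1j * s * s2, c - 1j * s * c2]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def conversion_phases(tm, tc):
    """Layer phases solving eq. (12); real only inside region A."""
    a, b = math.cos(2 * tc), math.cos(2 * tc - 4 * tm)
    t_m = math.sqrt(-a / b)                                # tan(phi_m/2)
    t_c = (1 - t_m**2) / (2 * t_m * math.cos(2 * tc - 2 * tm))
    phi_m = 2 * math.atan(t_m)
    phi_c = (2 * math.atan(t_c)) % (2 * math.pi)
    return phi_m, phi_c

theta_m, theta_c = 0.6, 0.3        # illustrative mixing angles in region A
phi_m, phi_c = conversion_phases(theta_m, theta_c)
U = mul(layer(theta_m, phi_m), mul(layer(theta_c, phi_c), layer(theta_m, phi_m)))
P = abs(U[0][1]) ** 2              # total transition probability: equals 1
```

Note that neither layer is at an MSW resonance here; the probability reaches 1 purely through the constructive interference of the three amplitudes, which is the point of the text.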
It has a clear physical meaning: the role of the inner layer is to prevent the rapid decrease of the transition probability after it reaches its maximal value in the first layer (Fig. 4). Therefore, two outer layers are enough for the realization of a total neutrino conversion. The limiting case of small mixing angles and ‘optimal’ conditions on the phases (10,13) is analogous to the parametric enhancement of oscillations considered in , where a ‘drift’ of the transition probability to its maximal value takes place. In these cases, when oscillation amplitudes are small, many periods in a medium with periodic number density are required to reach the absolute maximum of the transition probability $`P=1`$. In the nonresonance region the area where the total neutrino conversion occurs then extends further towards small mixing angles with each period. However, the authors of refs. have missed all solutions for absolute maxima of the transition probability inside this area. For the Earth profile, for instance, they correspond to conditions on the phases which depend on the matter angles $`\vartheta _m`$ and $`\vartheta _c`$ (see eq. (12)). It was shown that, without the assumption of small vacuum mixing angles or a small matter effect, the transition probability can reach its absolute maximum $`P=1`$ for the three-layer Earth-type medium and even for the two-layer case . So, the total conversion in a multilayer medium can occur both in the resonance and the nonresonance regions of oscillation parameters. This simple fact must be kept in mind when the neutrino propagation in a multilayer medium with different densities, like the Earth, is analyzed. I express my gratitude to SISSA and ICTP for the warm hospitality and financial support. I am also obliged to S. T. Petcov for introducing me to this theme, fruitful collaboration and help. I thank Q. Y. Liu for useful discussions. Figure captions Figure 1a.
The contours for the different values of the transition probability $`P_{\mu \tau }=`$ 0.2, 0.4, 0.6, 0.8 at nadir angle $`h=70^{\circ }`$ are shown. The dark spots inside them correspond to the absolute maxima $`P_{\mu \tau }=1`$, which are realized at the maximal vacuum mixing angle $`\vartheta _0=\pi /4`$. Figure 1b. The contours for the different values of the transition probability $`P_{\mu s}=`$ 0.2, 0.4, 0.6, 0.8 at nadir angle $`h=70^{\circ }`$ are shown. The dark spots inside them correspond to the absolute maxima $`P_{\mu s}=1`$. The resonance curve for the mantle, where the total conversion can occur, is also drawn. Figure 2. The contours for the different values of the transition probability $`P_{\mu s}=`$ 0.2, 0.4, 0.6, 0.8 at nadir angle $`h=32.4^{\circ }`$ are shown. The dark spots inside them correspond to the absolute maxima $`P_{\mu s}=1`$. The region $`𝒜`$, where the total conversion can occur, is also drawn. For comparison the resonance curve for the mantle (dotted curve) is presented. Figure 3. The region $`𝒜`$ for the vacuum – matter – vacuum profile, where the total conversion $`P_{\mu s}=1`$ can occur, is plotted. Figure 4. The evolution of the transition probability $`P_{\mu s}`$ in the medium vacuum – matter – vacuum at $`\mathrm{sin}^2(2\vartheta _0)=1/2`$ and $`\mathrm{\Delta }m^2/E\to 0`$ is presented.
no-problem/9909/cond-mat9909139.html
ar5iv
text
# References Work presented at the Sixth International Conference on the Optics of Excitons in Confined Systems, Ascona, Switzerland, Aug. 30-Sept. 2, 1999. Decoherence effects on the generation of exciton entangled states in coupled quantum dots F.J.Rodríguez<sup>1</sup> (a), L.Quiroga<sup>2</sup> (a) and N.F.Johnson<sup>3</sup> (b) (a) Depto. de Física, Universidad de los Andes, A.A. 4976, Santafé de Bogotá, Colombia (b) Physics Department, Clarendon Laboratory, Oxford University, Oxford OX1 3PU, U.K. ## Abstract We report on exciton-acoustic-phonon coupling effects on the generation of exciton maximally entangled states in $`N=2`$ and $`3`$ quantum dot systems. In particular, we address the question of the combined effect of laser pulses, appropriate for generating Bell and Greenberger-Horne-Zeilinger entangled states, together with decoherence mechanisms as provided by a phonon reservoir. By solving numerically the master equation for the optically driven exciton-phonon kinetics, we show that the generation of maximally entangled exciton states is preserved over a reasonable parameter window. PACS: 71.10.Li, 71.35.-y, 73.20.Dx <sup>1</sup> frodrigu@uniandes.edu.co <sup>2</sup> lquiroga@uniandes.edu.co <sup>3</sup> n.johnson@physics.oxford.ac.uk Confined excitons together with ultrafast optical spectroscopy have been shown to be important elements for achieving coherent wavefunction control on the nanometer and femtosecond scales in semiconductors . Maximally entangled states (MES), of Bell-type for excitons in two coupled quantum dots (QDs) and Greenberger-Horne-Zeilinger (GHZ) type for three coupled QDs, have been reported as excellent candidates for achieving quantum entanglement in solid-state based devices . However, the question arises as to how reliable the MES preparation scheme of Ref. will be when decoherence mechanisms are taken into account during the generation step.
Exciton decoherence in semiconductor QDs is dominated by acoustic phonon scattering at low temperatures . In this work we present results on the kinetics of the generation of exciton MES in QDs, taking into account an acoustic phonon dephasing mechanism. The Hamiltonian describing a system formed by $`N`$ QDs in the rotating wave approximation is $`H(t)=\mathrm{\Delta }\omega J_z-V(J^2-J_z^2)-A(J^++J^{-})+{\displaystyle \underset{\vec{k}}{\sum }}\omega _{\vec{k}}a_{\vec{k}}^{\dagger }a_{\vec{k}}+{\displaystyle \underset{\vec{k}}{\sum }}g_{\vec{k}}J_z(a_{\vec{k}}^{\dagger }+a_{\vec{k}})`$ (1) where $`J_+=\sum _{n=1}^Ne_n^{\dagger }h_n^{\dagger }`$, $`J_{-}=\sum _{n=1}^Nh_ne_n`$ and $`J_z=\frac{1}{2}\sum _{n=1}^N(e_n^{\dagger }e_n-h_nh_n^{\dagger })`$ with $`e_n^{\dagger }`$ ($`h_n^{\dagger }`$) describing the electron (hole) creation operator in the $`n`$’th QD. The collective operators describing the QD excitons, $`J`$-operators, satisfy the usual angular momentum commutation relationships: $`[J_+,J_{-}]=2J_z`$, $`[J_\pm ,J_z]=\mp J_\pm `$. $`\mathrm{\Delta }\omega =ϵ-\omega `$ is the resonance detuning, $`ϵ`$ denotes the semiconductor energy gap, $`\omega `$ is the laser central frequency, $`V`$ the Forster term representing the Coulomb interdot interaction, $`A`$ the laser pulse amplitude and $`a_{\vec{k}}^{\dagger }`$ ($`a_{\vec{k}}`$) the creation (annihilation) operator of the acoustic phonon with wavevector $`\vec{k}`$. We put $`\hbar =1`$ throughout this paper. We work within the corresponding optically active exciton states, $`i.e.`$ $`J=1`$ and $`J=3/2`$ for two and three coupled quantum dots, respectively. Mixing with dark exciton states can be induced by exciting selectively a single QD or by a different coupling with the local environment of each QD. These latter effects will not be considered here. The time evolution from any initial state under the action of $`H`$ in Eq.(1) is easily performed by means of the pseudo spin-1/2 operator formalism .
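The angular momentum algebra quoted above can be checked with explicit matrices in the optically active sectors. A small sketch (numpy assumed; basis |J, m> ordered m = J, J-1, ..., -J):

```python
import numpy as np

def spin_ops(j):
    """J+, J- and Jz in the |j, m> basis, ordered m = j, j-1, ..., -j."""
    m = np.arange(j, -j - 1, -1)
    jz = np.diag(m).astype(float)
    # <j, m+1| J+ |j, m> = sqrt(j(j+1) - m(m+1)), on the superdiagonal
    jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), 1)
    return jp, jp.T, jz

jp, jm, jz = spin_ops(1)        # J = 1: optically active sector for N = 2
print(np.allclose(jp @ jm - jm @ jp, 2 * jz))    # [J+, J-] = 2 Jz -> True
print(np.allclose(jp @ jz - jz @ jp, -jp))       # [J+, Jz] = -J+ -> True
```

The same function with `spin_ops(1.5)` gives the $`J=3/2`$ sector used for the three-dot GHZ case.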
The exact kinetic equations for this system can be obtained by applying the method of operator-equation hierarchy developed for Dicke systems in . As a test, we verified that in the limit of zero laser intensity and no Forster term our results coincide with those in where two-state systems coupled to a dephasing environment were considered. Until now the experimental identification and quantification of exciton decoherence mechanisms in low dimensional semiconductor heterostructures has been rather scarce. As a consequence we adopt here a simplified model. In a standard way, by assuming a very short correlation time for exciton operators, the exact hierarchy of equations transforms into a Markovian master equation. The initial condition is represented by the density matrix $`\rho (0)=|0\rangle \langle 0|\otimes \rho _{Ph}(T)`$, i.e. the exciton vacuum and the equilibrium phonon reservoir at temperature $`T`$. At resonance, $`i.e.`$ $`\mathrm{\Delta }\omega =0`$, the dynamical equation for the expectation value of exciton operators is then given by $`{\displaystyle \frac{\partial \langle J_\alpha ^{rs}\rangle }{\partial t}}=-iV\langle [J_\alpha ^{rs},J_z^2]\rangle -iA\langle [J_\alpha ^{rs},J^++J^{-}]\rangle `$ (2) $`-\mathrm{\Gamma }(2\langle [J_\alpha ^{rs},J_z]J_z\rangle -\langle [J_\alpha ^{rs},J_z^2]\rangle )`$ where the decoherence rate is $`\mathrm{\Gamma }=\int d\omega ^{\prime }\omega ^{\prime n}e^{-\omega ^{\prime }/\omega _c}(1+2N(\omega ^{\prime },T))`$ with $`n`$ depending on the dimensionality of the phonon field, $`\omega _c`$ is a cut-off frequency (typically the Debye frequency) and $`N(\omega ^{\prime },T)`$ is the phonon Bose-Einstein occupation factor. We do not attempt here to perform a microscopic calculation of $`\mathrm{\Gamma }`$ but instead we take it as a variable parameter. We consider pure decoherence effects that do not involve energy relaxation of excitons, as indicated by the last term in Eq. (1). It is a well known fact that very narrow photoluminescence linewidths of single QDs exist, due to the elimination of inhomogeneous broadening effects.
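A Markovian master equation of the pure-dephasing type described above can also be written in Lindblad form with $`J_z`$ as the noise operator and integrated directly for the density matrix. The sketch below is a generic illustration consistent with the $`J_z`$-phonon coupling, not the authors' exact operator hierarchy; note that the dephasing term preserves the trace and the populations, i.e. there is no energy relaxation:

```python
import numpy as np

def dephasing_rhs(rho, H, jz, gamma):
    """drho/dt = -i[H, rho] - gamma*(Jz^2 rho + rho Jz^2 - 2 Jz rho Jz)."""
    jz2 = jz @ jz
    return (-1j * (H @ rho - rho @ H)
            - gamma * (jz2 @ rho + rho @ jz2 - 2.0 * jz @ rho @ jz))

def evolve(rho, H, jz, gamma, dt, steps):
    """Plain Euler integration; adequate for a short illustrative run."""
    for _ in range(steps):
        rho = rho + dt * dephasing_rhs(rho, H, jz, gamma)
    return rho
```

The commutator part generates the coherent $`V`$- and $`A`$-driven dynamics, while the Lindblad part damps the off-diagonal elements in the $`J_z`$ eigenbasis at a rate set by $`\mathrm{\Gamma }`$, which is the mechanism that degrades the MES overlaps discussed below.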
Consequently, the decoherence rate $`\mathrm{\Gamma }`$ in our calculations should be associated with just homogeneous broadening effects. At low temperature the main decoherence mechanism is indeed acoustic phonon scattering. The decoherence parameter $`\mathrm{\Gamma }`$ is temperature dependent and it amounts to 20-50 $`\mu `$eV for typical III-V semiconductor QDs in a temperature range from 10 K to 30 K . We solve numerically the coupled linear differential equations for the time dependent pseudo-spin expectation values (8 for Bell states and 15 for GHZ states). For $`\mathrm{\Gamma }`$ we take typical values which can represent real situations for QDs at low temperatures. Other common parameters for the results shown below are: resonance condition $`ϵ=\omega =1`$ and Forster term $`V=ϵ/10`$. Laser strengths and decoherence rates are to be expressed in units of $`V`$. As a quantitative measure of the successful generation of exciton MES we present our results in terms of the time dependent overlaps $`O_B(t)=Tr\{\rho _{Bell}\rho (t)\}`$ and $`O_G(t)=Tr\{\rho _{GHZ}\rho (t)\}`$ where $`\rho _{Bell}=(1+J_z^{01}-J_z^{12})/3-J_y^{02}`$ and $`\rho _{GHZ}=1/4+(J_z^{01}-J_z^{23})/2-J_y^{03}`$ (we use the same notation as in ). $`|0\rangle `$ represents the exciton vacuum, $`|1\rangle `$ denotes a single-exciton state, $`|2\rangle `$ represents the biexciton state and $`|3\rangle `$ is the triexciton state. In order to appreciate the importance of the non-linear Forster term to generate exciton MES we present in Fig. 1 the evolution of the overlaps $`O_B(t)`$ and $`O_G(t)`$ in the limit of very weak light excitation and zero decoherence . It is worth noting that no exciton MES generation is possible if the Forster interaction is turned off. This implies that efficient exciton MES generation should be helped by compact QD systems where the Forster term can take a significant value. Next, we discuss the $`N=2`$ case and Bell-state generation in the presence of noise. In Fig.
2a results are shown for a decoherence rate $`\mathrm{\Gamma }=0.001`$ and different laser intensities ($`A=0.1`$ and $`A=0.4`$). Bell-state generation time is significantly shortened by applying stronger laser pulses. Therefore, decoherence effects can be minimized by using higher excitation levels. However, a higher laser intensity also implies a sharper evolution which therefore requires a very precise pulse length. In Fig. 3a Bell-state generation is shown for different values of the decoherence parameter ($`\mathrm{\Gamma }=0.001,0.01`$ and $`0.1`$). It is evident that at high temperature $`\mathrm{\Gamma }=0.1`$ no MES generation is possible. However, we estimate that $`\mathrm{\Gamma }`$ values between $`0.0010.01`$ are typical in the temperature range from $`10`$ K to $`50`$ K. We conclude that a parameter window exists where successful generation of Bell MES can be produced. Now we address the GHZ MES generation in a $`N=3`$ QD system. As for the Bell case, using higher laser excitation levels it is possible to obtain in shorter times a total overlap with the GHZ density matrix as depicted in Fig. 2b ($`\mathrm{\Gamma }=0.001`$). Temperature effects through the variation of $`\mathrm{\Gamma }`$ are depicted in Fig. 3b ($`A=0.4`$). It is evident that similar decoherence rates yield a more dramatic reduction of the MES coherence in the GHZ case than in the Bell case. However, as for Bell generation, a parameter window does exist where the generation of such entangled states can be feasible. It is worth noting the different scaling behaviour of the generation frequency of these MES at very low temperature, $`i.e.`$ vanishing $`\mathrm{\Gamma }`$ and very low laser excitation. While selective $`\pi /2`$ laser pulse length for the Bell case scales like $`V/A^2`$, selective $`\pi /2`$ pulse length for the GHZ case scales like $`V^2/A^3`$. 
This property of $`\pi /2`$ pulses to generate exciton MES was demonstrated in an analytical way in and can be verified in our numerical results by looking at Fig. 2a and Fig. 2b. In summary, we have shown that decoherence effects can be minimized in the generation of maximally entangled states by applying stronger laser pulses and working at low temperatures where acoustic phonon scattering is the main decoherence mechanism. This work has been partially supported by COLCIENCIAS. Figure Captions Figure 1: Exciton MES generation in the zero decoherence limit. Thick lines represents the Bell-state overlap with $`A=0.1`$: solid, Forster term included; dotted, Forster term not included. Thin lines represent the GHZ-state overlap with $`A=0.2`$ and similar meaning for solid and dotted lines. Figure 2: Exciton MES generation in the presence of decoherence (a) $`O_B(t)`$ for $`A=0.1`$, dotted line and $`A=0.4`$, solid line. (b) $`O_G(t)`$ for $`A=0.2`$, dotted line and $`A=0.4`$, solid line. $`\mathrm{\Gamma }=0.001`$. Figure 3: Exciton MES generation in the presence of decoherence (a) $`O_B(t)`$ and (b) $`O_G(t)`$. $`A=0.4`$, $`\mathrm{\Gamma }=0.001`$, dotted line, $`\mathrm{\Gamma }=0.01`$, solid line and $`\mathrm{\Gamma }=0.1`$, dashed line for both (a) and (b).
no-problem/9909/hep-th9909160.html
ar5iv
text
# Static Axially Symmetric Einstein-Yang-Mills-dilaton solutions: Addendum Asymptotic solutions ## Abstract We discuss the asymptotic form of the static axially symmetric, globally regular and black hole solutions, obtained recently in Einstein-Yang-Mills and Einstein-Yang-Mills-dilaton theory. Preprint gr-qc/9909160 Recently, we have constructed static axially symmetric regular and black hole solutions in SU(2) Einstein-Yang-Mills (EYM) and Einstein-Yang-Mills-dilaton (EYMD) theory . Representing generalizations of the spherically symmetric regular and black hole solutions , these solutions are characterized by two integers, their winding number $`n`$ and the node number $`k`$ of their gauge field functions. The spherically symmetric solutions have winding number $`n=1`$. These non-abelian EYM and EYMD solutions are asymptotically flat. They have non-trivial magnetic gauge field configurations, but carry no global charge. To every globally regular solution there exists a corresponding family of black hole solutions with regular event horizon $`x_\mathrm{H}>0`$. These non-abelian black hole solutions demonstrate that neither the “no-hair” theorem nor Israel’s theorem hold in EYM and EYMD theory. Here we give a detailed account of the asymptotic form of these static axially symmetric solutions. In particular we find, that the expansion of the gauge field functions in powers of $`1/x`$ must be supplemented with non-analytic terms for $`n=2`$ and 4. 
Let us briefly recall the SU(2) Einstein-Yang-Mills-dilaton action $$S=\int \left(\frac{R}{16\pi G}+L_M\right)\sqrt{-g}d^4x$$ (1) with matter Lagrangian $$L_M=-\frac{1}{2}\partial _\mu \mathrm{\Phi }\partial ^\mu \mathrm{\Phi }-e^{2\kappa \mathrm{\Phi }}\frac{1}{2}\mathrm{Tr}(F_{\mu \nu }F^{\mu \nu }),$$ (2) field strength tensor $`F_{\mu \nu }=\partial _\mu A_\nu -\partial _\nu A_\mu +ie[A_\mu ,A_\nu ]`$, gauge field $`A_\mu =\frac{1}{2}\tau ^aA_\mu ^a`$, dilaton field $`\mathrm{\Phi }`$, and Yang-Mills and dilaton coupling constants $`e`$ and $`\kappa `$, respectively. In terms of the polar coordinates $`r`$, $`\theta `$ and $`\varphi `$ the isotropic metric reads $$ds^2=-fdt^2+\frac{m}{f}dr^2+\frac{mr^2}{f}d\theta ^2+\frac{lr^2\mathrm{sin}^2\theta }{f}d\varphi ^2,$$ (3) where $`f`$, $`m`$ and $`l`$ are only functions of $`r`$ and $`\theta `$, and regularity on the $`z`$-axis requires $`m|_{\theta =0}=l|_{\theta =0}`$ . We parameterize the static axially symmetric gauge field as $$A_\mu dx^\mu =\frac{1}{2er}\left[\tau _\varphi ^n\left(H_1dr+\left(1-H_2\right)rd\theta \right)-n\left(\tau _r^nH_3+\tau _\theta ^n\left(1-H_4\right)\right)r\mathrm{sin}\theta d\varphi \right],$$ (4) where $`n`$ denotes the winding number, the $`su(2)`$ matrices $`\tau _\phi ^n,\tau _r^n,\tau _\theta ^n`$ are defined in terms of Pauli matrices $`\tau _1,\tau _2,\tau _3`$ by $$\tau _\phi ^n=-\mathrm{sin}(n\phi )\tau _1+\mathrm{cos}(n\phi )\tau _2,\tau _r^n=\mathrm{sin}\theta \tau _\rho ^n+\mathrm{cos}\theta \tau _3,\tau _\theta ^n=\mathrm{cos}\theta \tau _\rho ^n-\mathrm{sin}\theta \tau _3,$$ (5) with $`\tau _\rho ^n=\mathrm{cos}(n\phi )\tau _1+\mathrm{sin}(n\phi )\tau _2`$. The $`H_i`$ are only functions of $`r`$ and $`\theta `$, and regularity on the $`z`$-axis requires $`H_2|_{\theta =0}=H_4|_{\theta =0}`$.
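With the conventional cylindrical-coordinate signs ($`\tau _\phi ^n`$ built from $`-\mathrm{sin}(n\phi )\tau _1+\mathrm{cos}(n\phi )\tau _2`$), the three matrices of eq. (5) square to the identity and pairwise anticommute, i.e. they form an orthonormal $`su(2)`$ basis at every point. A quick numerical check (numpy assumed):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)       # Pauli tau_1
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)    # Pauli tau_2
s3 = np.array([[1, 0], [0, -1]], dtype=complex)      # Pauli tau_3

def tau_basis(n, theta, phi):
    """tau_phi^n, tau_r^n, tau_theta^n of eq. (5) for winding number n."""
    t_rho = np.cos(n * phi) * s1 + np.sin(n * phi) * s2
    t_phi = -np.sin(n * phi) * s1 + np.cos(n * phi) * s2
    t_r = np.sin(theta) * t_rho + np.cos(theta) * s3
    t_th = np.cos(theta) * t_rho - np.sin(theta) * s3
    return t_phi, t_r, t_th
```

Each returned matrix squares to the 2x2 identity and any two of them anticommute, for every choice of $`n`$, $`\theta `$ and $`\phi `$, which is what makes the parameterization (4) a genuine orthogonal decomposition of the gauge field.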
Under abelian gauge transformations $$U=\mathrm{exp}\left\{i\mathrm{\Gamma }\tau _\phi ^n/2\right\},$$ (6) where $`\mathrm{\Gamma }`$ is a function of $`r`$ and $`\theta `$, the gauge potential (4) is form invariant and the functions $`H_i`$ transform like $`H_1`$ $`\to `$ $`\widehat{H}_1=H_1-r\partial _r\mathrm{\Gamma },`$ (7) $`H_2`$ $`\to `$ $`\widehat{H}_2=H_2+\partial _\theta \mathrm{\Gamma },`$ (8) $`H_3`$ $`\to `$ $`\widehat{H}_3=\mathrm{cos}\mathrm{\Gamma }(H_3+\mathrm{cot}\theta )-\mathrm{sin}\mathrm{\Gamma }H_4-\mathrm{cot}\theta ,`$ (9) $`H_4`$ $`\to `$ $`\widehat{H}_4=\mathrm{sin}\mathrm{\Gamma }(H_3+\mathrm{cot}\theta )+\mathrm{cos}\mathrm{\Gamma }H_4.`$ (10) We fix the gauge by choosing $$r\partial _rH_1-\partial _\theta H_2=0.$$ (12) For convenience we introduce dimensionless quantities $$x=\frac{e}{\sqrt{4\pi G}}r,\phi =\sqrt{4\pi G}\mathrm{\Phi },\gamma =\frac{1}{\sqrt{4\pi G}}\kappa .$$ (13) To obtain globally regular solutions or black hole solutions with a regular horizon with the proper symmetries, we must impose appropriate boundary conditions . For asymptotically flat, magnetically neutral solutions the boundary conditions at infinity are $$f|_{x=\infty }=m|_{x=\infty }=l|_{x=\infty }=1,$$ (14) $$H_2|_{x=\infty }=H_4|_{x=\infty }=\pm 1,H_1|_{x=\infty }=H_3|_{x=\infty }=0,$$ (15) and we fix the scale invariance of the field equations by the condition $`\phi |_{x=\infty }=0`$. We now consider the asymptotic form of the functions at infinity. In Appendix C of we presented the expansion of the functions at infinity in powers of $`1/x`$, not considering possible non-analytic terms.
This expansion reads $`H_1`$ $`=`$ $`{\displaystyle \frac{1}{x^2}}\overline{H}_{12}\mathrm{sin}\theta \mathrm{cos}\theta +O\left({\displaystyle \frac{1}{x^3}}\right),`$ (16) $`H_2`$ $`=`$ $`\pm 1+{\displaystyle \frac{1}{2x^2}}\overline{H}_{12}\left(\mathrm{cos}^2\theta \mathrm{sin}^2\theta \right)+O\left({\displaystyle \frac{1}{x^3}}\right),`$ (17) $`H_3`$ $`=`$ $`{\displaystyle \frac{1}{x}}\mathrm{sin}\theta \mathrm{cos}\theta \overline{H}_{31}{\displaystyle \frac{1}{4x^2}}\mathrm{sin}\theta \mathrm{cos}\theta \left(\pm 2\overline{H}_{12}+\overline{H}_{31}\left(\overline{f}_1+2\gamma \overline{\phi }_1\right)\right)+O\left({\displaystyle \frac{1}{x^3}}\right),`$ (18) $`H_4`$ $`=`$ $`\pm \left(1+{\displaystyle \frac{1}{x}}\overline{H}_{31}\mathrm{sin}^2\theta \pm {\displaystyle \frac{1}{2x^2}}\overline{H}_{12}{\displaystyle \frac{1}{4x^2}}\mathrm{sin}^2\theta \left(\pm 2\overline{H}_{12}+\overline{H}_{31}\left(\overline{f}_1+2\gamma \overline{\phi }_1\right)\right)\right)+O\left({\displaystyle \frac{1}{x^3}}\right),`$ (19) $`\phi `$ $`=`$ $`{\displaystyle \frac{1}{x}}\overline{\phi }_1+O\left({\displaystyle \frac{1}{x^3}}\right),`$ (20) $`f`$ $`=`$ $`1+{\displaystyle \frac{1}{x}}\overline{f}_1+{\displaystyle \frac{1}{2x^2}}\overline{f}_1^2+O\left({\displaystyle \frac{1}{x^3}}\right),`$ (21) $`m`$ $`=`$ $`1+{\displaystyle \frac{1}{x^2}}\overline{l}_2+{\displaystyle \frac{1}{x^2}}\mathrm{sin}^2\theta \overline{m}_2+O\left({\displaystyle \frac{1}{x^3}}\right),`$ (22) $`l`$ $`=`$ $`1+{\displaystyle \frac{1}{x^2}}\overline{l}_2+O\left({\displaystyle \frac{1}{x^3}}\right),`$ (23) with constants $`\overline{H}_{12}`$, $`\overline{H}_{31}`$, $`\overline{f}_1`$, $`\overline{\phi }_1`$, $`\overline{m}_2`$ and $`\overline{l}_2`$. The above expansion in powers of $`1/x`$ is, however, not necessarily complete, since non-analytic terms may be present. 
Inspection of the numerical solutions reveals that $`\mathrm{ln}x`$ terms must be included in the expansion of the gauge field functions for $`n=2`$ and 4, to obtain the proper asymptotic form of the solutions. In particular, for an even number of nodes and for winding number $`n=2`$ we obtain $`H_1`$ $`=`$ $`{\displaystyle \frac{D_2\mathrm{ln}x\mathrm{sin}2\theta }{2x^2}}+{\displaystyle \frac{1}{x^2}}\left\{{\displaystyle \frac{D_2}{4}}\left[2({\displaystyle \frac{\pi }{2}}-\theta )\mathrm{cos}2\theta +\mathrm{cos}\theta (\mathrm{sin}\theta -\pi )\right]-D_0\mathrm{sin}2\theta \right\}`$ (25) $`+\mathrm{higher}\mathrm{order}\mathrm{terms},`$ $`H_2`$ $`=`$ $`1+{\displaystyle \frac{D_2\mathrm{ln}x\mathrm{cos}2\theta }{2x^2}}+{\displaystyle \frac{1}{x^2}}\left\{{\displaystyle \frac{D_2}{8}}\left[\mathrm{cos}2\theta +4\pi \mathrm{sin}\theta -4({\displaystyle \frac{\pi }{2}}-\theta )\mathrm{sin}2\theta \right]-D_0\mathrm{cos}2\theta \right\}`$ (28) $`+\mathrm{higher}\mathrm{order}\mathrm{terms},`$ $`F_3`$ $`=`$ $`{\displaystyle \frac{D_2\mathrm{ln}x\mathrm{cos}\theta }{2x^2}}-{\displaystyle \frac{1}{x^2}}\left\{{\displaystyle \frac{D_2}{16\mathrm{sin}\theta }}\left[4({\displaystyle \frac{\pi }{2}}-\theta )\mathrm{cos}2\theta +3\mathrm{sin}2\theta +\pi \mathrm{cos}\theta (3\mathrm{sin}^2\theta -2)\right]-D_0\mathrm{cos}\theta \right\}`$ (31) $`+\mathrm{higher}\mathrm{order}\mathrm{terms},`$ $`F_4`$ $`=`$ $`{\displaystyle \frac{D_1\mathrm{sin}\theta }{x}}-{\displaystyle \frac{D_1(\overline{f}_1+2\gamma \overline{\phi }_1)\mathrm{sin}\theta }{4x^2}}+\mathrm{higher}\mathrm{order}\mathrm{terms},`$ (33) for $`n=3`$, $`H_1`$ $`=`$ $`{\displaystyle \frac{D_2\mathrm{sin}2\theta }{2x^2}}-{\displaystyle \frac{D_3\mathrm{sin}2\theta }{2x^3}}+\mathrm{higher}\mathrm{order}\mathrm{terms},`$ (34) $`H_2`$ $`=`$ $`1+{\displaystyle \frac{D_2\mathrm{cos}2\theta }{2x^2}}+{\displaystyle \frac{D_3(9\mathrm{sin}^2\theta -2)}{6x^3}}+\mathrm{higher}\mathrm{order}\mathrm{terms},`$ (35) $`F_3`$ $`=`$ 
$`{\displaystyle \frac{D_2\mathrm{cos}\theta }{2x^2}}+{\displaystyle \frac{\mathrm{cos}\theta }{18x^3}}\left[(9D_1D_2-5D_3)\mathrm{sin}^2\theta +6D_3\right]+\mathrm{higher}\mathrm{order}\mathrm{terms},`$ (36) $`F_4`$ $`=`$ $`{\displaystyle \frac{D_1\mathrm{sin}\theta }{x}}-{\displaystyle \frac{D_1(\overline{f}_1+2\gamma \overline{\phi }_1)\mathrm{sin}\theta }{4x^2}}+{\displaystyle \frac{\mathrm{sin}\theta }{12x^3}}\left[\left(5D_3+3D_1D^{}+15D_4\right)\mathrm{sin}^2\theta -4D_3-12D_4\right]`$ (38) $`+\mathrm{higher}\mathrm{order}\mathrm{terms},`$ and for $`n=4`$, $`H_1`$ $`=`$ $`{\displaystyle \frac{D_2\mathrm{sin}2\theta }{2x^2}}+{\displaystyle \frac{D_3\mathrm{ln}x\mathrm{sin}4\theta }{4x^4}}+{\displaystyle \frac{D_3}{16x^4}}\left[4({\displaystyle \frac{\pi }{2}}-\theta )\mathrm{cos}4\theta -\mathrm{sin}4\theta -\pi \mathrm{cos}\theta (2-15\mathrm{sin}^2\theta )\right]+{\displaystyle \frac{D_4\mathrm{sin}4\theta }{8x^4}}`$ (40) $`+\mathrm{higher}\mathrm{order}\mathrm{terms},`$ $`H_2`$ $`=`$ $`1+{\displaystyle \frac{D_2\mathrm{cos}2\theta }{2x^2}}+{\displaystyle \frac{D_3\mathrm{ln}x\mathrm{cos}4\theta }{4x^4}}+{\displaystyle \frac{D_3}{16x^4}}\left[4\pi \mathrm{sin}\theta (2-5\mathrm{sin}^2\theta )-\mathrm{cos}4\theta -4({\displaystyle \frac{\pi }{2}}-\theta )\mathrm{sin}4\theta \right]+{\displaystyle \frac{D_4\mathrm{cos}4\theta }{8x^4}}`$ (42) $`+\mathrm{higher}\mathrm{order}\mathrm{terms},`$ $`F_3`$ $`=`$ $`{\displaystyle \frac{D_2\mathrm{cos}\theta }{2x^2}}-{\displaystyle \frac{D_1D_2\mathrm{cos}\theta \mathrm{sin}^2\theta }{2x^3}}-{\displaystyle \frac{D_3\mathrm{ln}x\mathrm{cos}\theta \mathrm{cos}2\theta }{4x^4}}+\mathrm{higher}\mathrm{order}\mathrm{terms},`$ (43) $`F_4`$ $`=`$ $`{\displaystyle \frac{D_1\mathrm{sin}\theta }{x}}-{\displaystyle \frac{D_1(\overline{f}_1+2\gamma \overline{\phi }_1)\mathrm{sin}\theta }{4x^2}}+{\displaystyle \frac{\mathrm{sin}\theta }{4x^3}}\left[\left(D_1D^{}-5D_5\right)\mathrm{sin}^2\theta 
+4D_5\right]+\mathrm{higher}\mathrm{order}\mathrm{terms},`$ (44) where $`D_0`$ through $`D_5`$, $`\overline{\phi }_1`$ and $`\overline{f}_1`$ are constants and $`D^{}=[(\overline{f}_1+2\gamma \overline{\phi }_1)^2+2(\overline{f}_1^2+\overline{l}_2-2\overline{m}_2)]/4`$ . The gauge field functions $`H_3`$ and $`H_4`$ are related to the functions $`F_3`$ and $`F_4`$ by $$H_3=\mathrm{sin}\theta F_3+\mathrm{cos}\theta F_4,1-H_4=\mathrm{cos}\theta F_3-\mathrm{sin}\theta F_4,$$ (45) and the asymptotic solutions for an odd number of nodes are obtained from the asymptotic solutions for an even number of nodes by multiplying the functions $`H_1`$, $`H_2`$ and $`H_4`$ by minus one. Note the explicit occurrence of $`\theta `$ in the expansions for $`n=2`$ and 4. For all $`n`$, the leading terms of the dilaton function are $$\phi =\frac{\overline{\phi }_1}{x}+\frac{\overline{\phi }_3}{2x^3}(3\mathrm{cos}^2\theta -1)+\mathrm{higher}\mathrm{order}\mathrm{terms},$$ (46) and for the metric functions the equations (21)-(23) hold . We now discuss the properties of these asymptotic solutions, assuming $`x\gg 1`$. First of all we note that along the $`z`$- and $`\rho `$-axis all functions fulfill the proper boundary conditions . Furthermore, under the transformation $`\theta \to \pi -\theta `$ the gauge field functions $`H_1`$ and $`F_3`$ are odd while $`H_2`$ and $`F_4`$ are even; consequently $`H_3`$ is odd while $`H_4`$ is even. Concerning regularity of the asymptotic solutions along the symmetry axis, we found, using the analysis of ref. , that local gauge transformations $`\mathrm{\Gamma }_{(z)}^{(n)}`$ exist, which lead to locally regular gauge potentials along the $`z`$-axis. Thus, up to the order of the expansion the asymptotic solutions are regular. For $`n=2`$ we checked that the contributions from the next orders ($`\mathrm{ln}x/x^3`$ and $`1/x^3`$) do not change the regularity and symmetry properties of the asymptotic solutions. 
We conjecture that this is also true for the corresponding higher order terms of the $`n=3`$ and $`n=4`$ asymptotic solutions. Since $`\mathrm{ln}x/x^2`$ and $`\mathrm{ln}x/x^4`$ cannot be expanded as a power series in $`1/x`$, a naive construction of the asymptotic solutions as power series in $`1/x`$ does not lead to the proper solutions for $`n=2`$ and $`n=4`$, respectively. Finally we note that comparison of the asymptotic solution Eqs. (25)-(33) with the numerical solutions, e.g. for $`n=2`$ and $`k=1`$, yields excellent agreement.
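Equations (9)-(10) state that the pair $`(H_3+\mathrm{cot}\theta ,H_4)`$ is rotated by the angle $`\mathrm{\Gamma }`$ under an abelian gauge transformation, so two successive transformations should compose by adding their angles. A minimal numerical sketch of this consistency check (the relative signs in `gauge_rotate` follow the rotation structure of Eqs. (9)-(10) and are stated here as an assumption):

```python
import math

def gauge_rotate(H3, H4, Gamma, theta):
    """Abelian gauge transformation acting on (H3, H4): the pair
    (H3 + cot(theta), H4) is rotated by the angle Gamma."""
    cot = 1.0 / math.tan(theta)
    K = H3 + cot
    K_hat = math.cos(Gamma) * K - math.sin(Gamma) * H4
    H4_hat = math.sin(Gamma) * K + math.cos(Gamma) * H4
    return K_hat - cot, H4_hat

theta, H3, H4 = 0.7, 0.3, -1.2
# Two successive transformations compose by adding their angles.
composed = gauge_rotate(*gauge_rotate(H3, H4, 0.4, theta), 0.9, theta)
direct = gauge_rotate(H3, H4, 1.3, theta)
# Gamma = 0 must act as the identity.
identity = gauge_rotate(H3, H4, 0.0, theta)
```

The composition property also makes it clear why the gauge freedom can be fixed by a single condition such as Eq. (12).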
# Parity Violation in Elastic Electron-Proton Scattering and the Proton’s Strange Magnetic Form Factor ## Abstract We report a new measurement of the parity-violating asymmetry in elastic electron scattering from the proton at backward scattering angles. This asymmetry is sensitive to the strange magnetic form factor of the proton as well as electroweak axial radiative corrections. The new measurement of $`A=-4.92\pm 0.61\pm 0.73`$ ppm provides a significant constraint on these quantities. The implications for the strange magnetic form factor are discussed in the context of theoretical estimates for the axial corrections. (SAMPLE Collaboration) The anomalous magnetic moments of the neutron and proton are important clues to their internal quark structure. Since the first measurement of the proton’s magnetic moment in 1933, our empirical knowledge of the electromagnetic structure of the nucleon has been greatly improved through detailed measurements of the electric and magnetic form factors and their momentum transfer ($`Q^2`$) dependence. Nevertheless, we still lack a quantitative theoretical understanding of these properties (including the magnetic moments) and additional experimental information is crucial in our effort to understand the internal structure of the nucleons. Whereas the normal magnetic moment corresponds to the magnetic coupling to the photon, the weak magnetic moment represents the analogous coupling to the $`Z`$ boson, equally fundamental and just as important as the electromagnetic moment. The weak magnetic form factor provides unambiguous new information about the quark flavor structure of the nucleon and enables a complete decomposition of the proton’s magnetic structure into the contributions from different quark flavors (up, down, and strange). 
To lowest order, the neutral weak magnetic form factor of the proton, $`G_M^Z`$, is related to nucleon electromagnetic form factors and a contribution from strange quarks:This definition of $`G_M^Z`$ differs by a factor of 4 from that used in ref. in order to conform with more standard notation in the literature. The definition of $`G_M^s`$ is the same as in ref. . $`G_M^Z=(G_M^p-G_M^n)-4\mathrm{sin}^2\theta _WG_M^p-G_M^s`$ (1) where $`G_M^p`$ and $`G_M^n`$ are the (electromagnetic) nucleon magnetic form factors, and $`\theta _W`$ is the weak mixing angle. (Electroweak radiative corrections to this expression have been computed in ref. .) Thus the measurement of $`G_M^Z`$ provides unique access to the strange quark-antiquark “sea” and its role in the basic electromagnetic structure of the nucleons at low energies. $`G_M^Z`$ can be determined via parity-violating effects in elastic electron-proton scattering. In this Letter, we report a new measurement of the parity-violating asymmetry with sufficient precision to provide the first meaningful information on the strange magnetic form factor, $`G_M^s`$. In comparison to our previous results , this measurement has involved both improved monitoring and control of systematic errors as well as improved statistical precision. As previously discussed, the parity-violating asymmetry for elastic scattering of right- vs. left-handed electrons from nucleons at backward scattering angles is quite sensitive to $`G_M^Z`$. The SAMPLE experiment measured the parity-violating asymmetry in the elastic scattering of 200 MeV polarized electrons at backward angles with an average $`Q^2\simeq 0.1`$ (GeV/c)<sup>2</sup>. For $`G_M^s=0`$, the expected asymmetry in the SAMPLE experiment is about $`-7\times 10^{-6}`$ or -7 ppm, and the asymmetry depends linearly on $`G_M^s`$. The neutral weak axial form factor $`G_A^Z`$ contributes about 20% to the asymmetry in our experiment. 
In parity-violating electron scattering $`G_A^Z`$ is modified by a substantial electroweak radiative correction. The corrections were estimated in , but there is considerable uncertainty in the calculations. The uncertainty in these radiative corrections substantially limits our ability to determine $`G_M^s`$, as will be discussed below. The SAMPLE experiment was performed at the MIT/Bates Linear Accelerator Center using a 200 MeV polarized electron beam incident on a liquid hydrogen target. The scattered electrons were detected in a large solid angle ($`1.5`$ sr) air Čerenkov detector. The detector consists of 10 large mirrors, each with ellipsoidal curvature to focus the Čerenkov light onto one of ten shielded photomultiplier tubes. A remotely controlled light shutter can cover each photomultiplier tube for background measurements. Typically one fourth of the data was taken with shutters closed to monitor this background. As described in ref. , the Čerenkov detector signals were studied at low beam currents to determine the composition of the signal and the fraction of light due to elastic scattering (factors that scaled the individual mirror asymmetries by typically 1.8, depending upon the mirror, and that were determined to a precision of 4%.) The parity-violating asymmetry was measured using higher beam currents, for which it was necessary to integrate the detector signals over the beam pulse. The incident electron beam was pulsed at 600 Hz; the signals from the detector, beam toroid monitors, and various other beam monitors were integrated and digitized for every 25 $`\mu `$sec long beam pulse. The parity-violating asymmetry $`A`$ was determined from the asymmetries in ratios of integrated detector signal to beam intensity for left- and right-handed beam pulses. The polarized electron beam was generated via photoemission from unstrained GaAs by polarized laser light. 
The laser beam helicity for each pulse was determined by a $`\lambda /4`$ Pockels cell and was randomly chosen for each of 10 consecutive beam pulses; the complement helicities were then used for the next 10 pulses. The asymmetry in the normalized detector yields was computed for “pulse pairs” separated by 1/60 of a second to minimize systematic errors. The electron beam helicity relative to all electronic signals can be manually reversed by inserting a $`\lambda `$/2 plate in the laser beam. (We denote this configuration as $`\lambda `$/2 = “IN” as opposed to $`\lambda `$/2 = “OUT”.) During the 1998 running period, the IN/OUT configuration was reversed every few days to minimize false asymmetries and test for systematic errors. The electron polarization was measured using a Møller system on the beamline and averaged 36.3$`\pm `$1.4 % during the experiment. The effect of small transverse components of electron polarization on the observed parity violation signal was studied and determined to be negligible. Helicity correlations of various parameters of the electron beam were monitored continuously during the experiment. These parameters include the beam intensity, position and angle at the target in both transverse dimensions ($`x`$ and $`y`$), the beam energy, and the beam “halo”. Two forward angle lucite Čerenkov counters were also implemented at $`12^{}`$ to monitor luminosity and test for helicity dependence. These monitors detected low $`Q^2`$ elastic scattering at forward angles and other soft electromagnetic radiation and should show negligible parity violating asymmetry. As in the past, we reduced the beam intensity asymmetry through an active feedback system. In 1998, this feedback was implemented with an additional Pockels cell located between linear polarizers to separate this function from the Pockels cell that controlled the helicity (HPC). The HPC was also repositioned to be downstream of all laser transport elements. 
These resulted in improved stability of the laser beam position under helicity reversal. In addition, we implemented a feedback system to reduce the remaining helicity-correlated beam position asymmetry. This was accomplished using a tilted glass plate in the laser beam path and a piezoelectric transducer. By adjusting the tilt of this glass plate with helicity reversal, the first order beam position asymmetry is reduced, resulting in improved quality of the data. For example, the helicity correlated vertical beam shift at the target was reduced from $`200`$ nm to typically $`<20`$ nm. Of the 110 C of beam delivered to the experiment in 1998, the first 24 C were taken before the position feedback system was fully implemented, and significant position asymmetries were present. This is evident in Figure 1a, where the luminosity monitor asymmetries are shown for this time frame (“piezo off”) in comparison with the later runs (“piezo on”). We also use a linear regression technique to remove such effects from the data . The results of this analysis are shown in Figure 1b, where the corrected asymmetries are displayed. This procedure (involving six beam parameters: $`x`$, $`y`$, $`\theta _x`$, $`\theta _y`$, energy, and intensity) is very effective at removing the effects of beam helicity correlations, resulting in a final corrected luminosity monitor asymmetry result of $`0.17\pm 0.11`$ ppm. In Figure 2 are shown the analogous plots for the asymmetry measured in the SAMPLE detector (all 10 mirrors combined and corrected for background dilution, radiative effects, and beam polarization). In contrast to the luminosity monitors, the detector asymmetry is quite robust with respect to beam helicity correlations, the corrections affecting the final asymmetry by only 5% or 0.2 ppm. This correction is about equal to the estimated systematic error in the procedure as determined from the luminosity monitor analysis. 
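As described above, the parity-violating asymmetry is formed from the ratios of integrated detector signal to beam intensity for the two helicity states of a pulse pair. A minimal sketch of that arithmetic (the yields and currents below are invented, illustrative numbers):

```python
def pulse_pair_asymmetry(yield_r, current_r, yield_l, current_l):
    """Helicity asymmetry of the beam-normalized detector yield Y/I
    for one right-/left-helicity pulse pair."""
    nr = yield_r / current_r
    nl = yield_l / current_l
    return (nr - nl) / (nr + nl)

# Illustrative numbers: a 5 ppm yield difference at equal beam current.
a = pulse_pair_asymmetry(1.0000050, 1.0, 0.9999950, 1.0)
# Normalizing by the beam intensity cancels a common intensity fluctuation.
b = pulse_pair_asymmetry(2 * 1.0000050, 2.0, 0.9999950, 1.0)
```

Forming the asymmetry from normalized yields is what makes the measurement first-order insensitive to helicity-correlated intensity changes; the remaining beam-parameter correlations are handled by the feedback and regression procedures described in the text.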
The elastic scattering asymmetry was determined from the 10 individual mirror asymmetries after correction for all effects, including background dilution. The measured shutter closed asymmetry for all 10 mirrors combined (appropriately scaled to compare directly to the elastic asymmetry), is $`0.57\pm 0.64`$ ppm, consistent with zero as expected assuming the shutter closed yield is dominated by low-$`Q^2`$ processes. However, the mirror-by-mirror distribution of shutter closed asymmetries is statistically improbable, indicating the presence of some non-statistical component to the shutter closed yield. We therefore assume the combined shutter closed asymmetry to be zero in our analysis, and assign a systematic error due to the uncertainty in the shutter closed asymmetry of 0.64 ppm. The resulting elastic asymmetry is $$A=-4.92\pm 0.61\pm 0.73\mathrm{ppm}$$ (2) where the first uncertainty is statistical and the second is the estimated systematic error as summarized in Table 1. This value is in good agreement with our previous reported measurement . At the mean kinematics of the experiment ($`Q^2`$ = 0.1 (GeV/c)<sup>2</sup> and $`\theta `$ = 146.1°), the theoretical asymmetry is $$A=-5.61+3.49G_M^s+1.55G_A^Z$$ (3) where $$G_A^Z=(1+R_A^1)G_A+R_A^0+G_A^s.$$ (4) $`G_A`$ is the charged current nucleon form factor: we use $`G_A=G_A(0)/(1+\frac{Q^2}{M_A^2})^2`$, with $`G_A(0)=-(g_A/g_V)=-1.267\pm 0.035`$ and $`M_A=1.061\pm 0.026`$ (GeV/c) . $`G_A^s(Q^2=0)=\mathrm{\Delta }s=-0.12\pm 0.03`$ , and $`R_A^{0,1}`$ are the isoscalar and isovector axial radiative corrections. The radiative corrections were estimated by ref. to be $`R_A^1=-0.34`$ and $`R_A^0=-0.12`$, but with nearly 100% uncertainty.The notation used here is $`R_A^0=(1/2)(3F-D)R_A^{T=0}`$, where $`\sqrt{3}R_A^{T=0}=-0.62`$ in ref. 2b The strange magnetic form factor derived from the asymmetry in Eq. 
2 is $`G_M^s(Q^2=0.1(\mathrm{GeV}/\mathrm{c})^2)=+0.197`$ $`\pm `$ $`0.17\pm 0.21`$ (5) $`-`$ $`0.445G_A^Z\mathrm{n}.\mathrm{m}.`$ (6) This result is graphically displayed in Fig. 3 along with $`G_A^Z`$ (dashed line) computed from taking the above calculated values of $`R_A^0`$ and $`R_A^1`$. Combining this value of $`G_A^Z`$ with our measurement implies a substantially positive value of $`G_M^s(Q^2=0.1(\mathrm{GeV}/\mathrm{c})^2)=+0.61\pm 0.17\pm 0.21`$. As noted in recent papers most model calculations tend to produce negative values of $`\mu _s=G_M^s(Q^2=0)`$, typically about $`-0.3`$. A recent calculation using lattice QCD techniques (in the quenched approximation) reports a result $`\mu _s=-0.36\pm 0.20`$ . As shown in Fig. 3, our new measurement implies that the computed negative value of $`G_A^Z`$ is inconsistent with $`G_M^s<0`$. Another recent study using a constrained Skyrme-model Hamiltonian that fits the baryon magnetic moments yields a positive value of $`\mu _s=+0.37`$ , which is in better agreement with our measurement and with the calculated value of $`G_A^Z`$. Since the dominant uncertainty in $`G_A^Z`$ comes from $`R_A^1`$, eliminating the uncertainty in $`R_A^1`$ is essential for deriving a firm conclusion about $`G_M^s`$. Toward this end, we are presently running the SAMPLE experiment with a deuterium target to measure the quasielastic asymmetry from deuterium . This asymmetry is quite insensitive to strange quark effects and will therefore independently determine the isovector axial radiative correction. When the deuterium data are available, we will be able to provide definitive experimental information on the proton’s strange magnetic form factor. The skillful efforts of the staff of the MIT/Bates facility to provide high quality beam and improve the experiment are gratefully acknowledged. 
This work was supported by NSF grants PHY-9420470 (Caltech), PHY-9420787 (Illinois), PHY-9457906/PHY-9971819 (Maryland), PHY-9733772 (VPI) and DOE cooperative agreement DE-FC02-94ER40818 (MIT-Bates) and contract W-31-109-ENG-38 (ANL).
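The extraction of $`G_M^s`$ amounts to inverting Eq. (3) with $`G_A^Z`$ built from Eq. (4). The sketch below reproduces the quoted central value, $`G_M^s\simeq +0.61`$, from the numbers given in the text; the dipole $`Q^2`$ fall-off assumed for $`G_A^s`$ is our assumption, and the signs follow the published analysis:

```python
# Invert A = -5.61 + 3.49*G_M^s + 1.55*G_A^Z at Q^2 = 0.1 (GeV/c)^2,
# with G_A^Z = (1 + R_A^1)*G_A + R_A^0 + G_A^s (values quoted in the text).
Q2, M_A = 0.1, 1.061
dipole = (1.0 + Q2 / M_A**2) ** 2
G_A = -1.267 / dipole                  # charged-current axial form factor
G_A_s = -0.12 / dipole                 # Delta s; dipole Q^2 fall-off assumed
R_A_1, R_A_0 = -0.34, -0.12            # calculated axial radiative corrections
G_A_Z = (1.0 + R_A_1) * G_A + R_A_0 + G_A_s

A_measured = -4.92                     # measured elastic asymmetry, ppm
G_M_s = (A_measured + 5.61 - 1.55 * G_A_Z) / 3.49
```

Setting $`G_M^s=0`$ in the same relation gives an expected asymmetry of about -7 ppm, consistent with the estimate quoted in the text.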
# Kinetic energy of a trapped Fermi gas interacting with a Bose-Einstein condensate ## I Introduction The achievement of Bose-Einstein condensation in trapped gases has opened new opportunities for investigating the low temperature behaviour of dilute quantum systems. Recent experimental studies have also been addressed to the search of degeneracy effects in Fermi gases and in mixtures of bosons and fermions . First experimental evidence of these effects has been recently reported in . Due to the Pauli exclusion principle the effects of the interactions in a Fermi gas are much weaker than for a Bose condensate . Therefore, the kinetic energy dominates the behaviour of the Fermi gas and is a clear indicator of its quantum degeneracy. Interest in boson-fermion mixtures is stimulated by the fact that, whereas in a one-component spin-polarized Fermi gas the absence of interactions leads to long thermalization times which hamper the process of evaporative cooling, the collisions between the two species in a mixture can ensure fast thermalization (the so-called sympathetic cooling ). The kinetic energy of the Fermi component in such a mixture could be measured by time-of-flight techniques in an ideal experiment in which the confining potential is suddenly switched off after a fast expulsion (*i.e.* on a time scale shorter than the boson-fermion collision time) of the bosons from the trap. In this way one can avoid the effects of the interactions during the expansion of the gas. Alternatively, the kinetic energy could be obtained from inelastic photon scattering at high momentum transfer as recently shown in the case of a trapped Bose gas . In this paper we analyze the behaviour of the kinetic energy of the fermionic component in a boson-fermion mixture. We find that the kinetic energy can be significantly affected by the interactions of the Fermi gas with the Bose-Einstein condensed cloud for reasonable choices of the system parameters. 
An important consequence of this purely quantum effect is that one can measure the sign and the strength of the boson-fermion scattering length by using the Bose component as a tunable device to change the effective potential felt by the fermions . We assume a positive boson-boson scattering length, with a view to applications to mixtures of Rb-K and Na-K. ## II Interacting Fermi-Bose mixtures For the description of the mixture at finite temperature we adopt the semiclassical three-fluid model already developed in ref. . We consider a system of $`N_f`$ fermions of mass $`m_f`$ and $`N_b`$ bosons of mass $`m_b`$ confined by external potentials $`V_{ext}^{f,b}(r)=\frac{1}{2}m_{f,b}\omega _{f,b}^2r^2`$ with frequencies $`\omega _f`$ and $`\omega _b`$. The external potentials are assumed to be spherically symmetric, the asymmetric case requiring simply a change of variables in the framework of the semiclassical approximation that we adopt. We include the interaction between bosons through the scattering length $`a_{bb}`$ and the interaction between bosons and fermions through the scattering length $`a_{bf}`$. By assuming that a single spin state is trapped for each component of the mixture, we can safely neglect the fermion-fermion interaction which is inhibited by the Pauli exclusion principle. Extensions to multi-spin configurations can be naturally made within the present formalism. In the following we will neglect the possibility of a superfluid phase for the fermionic component (for a discussion of the BCS transition in trapped Fermi gases see ) as well as the possibility of the expulsion of the bosons from the center of the cloud (this phase separation is expected to occur only for values of the coupling strengths such that the mean field contribution of the fermions is comparable with the mean field contribution of the bosons. This requires values of the parameters such that $`n_fa_{bb}^3\sim 1`$. The resulting system is no longer a dilute one, see ). 
The spatial densities of the condensed bosons ($`n_c`$), of the bosonic thermal component ($`n_{nc}`$) and of the fermions ($`n_f`$) are determined by the self-consistent solution of the following equations: $`n_c(r)`$ $`={\displaystyle \frac{1}{g_{bb}}}\left(\mu _b-V_{ext}^b(r)-2g_{bb}n_{nc}(r)-g_{bf}n_f(r)\right),`$ (1) $`n_{nc}(r)`$ $`={\displaystyle \int \frac{d^3p}{(2\pi \hbar )^3}\left(\mathrm{exp}[\beta (\frac{p^2}{2m_b}+V_{eff}^b(r)-\mu _b)]-1\right)^{-1}}`$ (2) and $`n_f(r)`$ $`={\displaystyle \int \frac{d^3p}{(2\pi \hbar )^3}\left(\mathrm{exp}[\beta (\frac{p^2}{2m_f}+V_{eff}^f(r)-\mu _f)]+1\right)^{-1}}.`$ (3) Here, the effective potentials acting on the thermal boson cloud and on the fermions are given by $`V_{eff}^b(r)`$ $`=V_{ext}^b(r)+2g_{bb}n_c(r)+2g_{bb}n_{nc}(r)+g_{bf}n_f(r)`$ (4) and $`V_{eff}^f(r)`$ $`=V_{ext}^f(r)+g_{bf}n_c(r)+g_{bf}n_{nc}(r),`$ (5) where we have introduced the notations $`\beta =(K_BT)^{-1}`$, $`g_{bb}=4\pi \hbar ^2a_{bb}/m_b`$ and $`g_{bf}=2\pi \hbar ^2a_{bf}/m_r`$ with $`m_r^{-1}=m_b^{-1}+m_f^{-1}`$. The chemical potentials $`\mu _b`$ and $`\mu _f`$ are determined by the normalization conditions $`N_b=\int \left(n_c(r)+n_{nc}(r)\right)d^3r`$ and $`N_f=\int n_f(r)d^3r`$, which ensure the self-consistent closure of the model. Equations (1-5) have been derived from a grand-canonical Hamiltonian in which the interactions are included in a mean-field Hartree-Fock approximation , by employing the semiclassical approximation for the bosonic thermal cloud and for the fermions and by taking the strong coupling limit $`N_ba_{bb}/a_{ho}\gg 1`$ for the wave-function of the condensate, with $`a_{ho}=(\hbar /m_b\omega _b)^{1/2}`$ the bosonic harmonic oscillator length. Upon averaging the Hamiltonian on the equilibrium state of the system at finite temperature we obtain the energy as the sum of various contributions: the kinetic and the external confinement energy for each of the species, as well as the boson-boson and boson-fermion interaction terms. 
One has $$\begin{array}{c}E=E_{kin}^f+E_{ext}^f+E_{kin}^b+E_{ext}^b+E_{int}^{bb}+E_{int}^{bf}\hfill \\ \hfill =\frac{3}{2}\left(\frac{m_f}{2\pi \hbar ^2}\right)^{3/2}\beta ^{-5/2}\int f_{5/2}(z_f)d^3r+\frac{3}{2}\left(\frac{m_b}{2\pi \hbar ^2}\right)^{3/2}\beta ^{-5/2}\int g_{5/2}(z_b)d^3r+\\ \hfill \int V_{ext}^b(r)\left(n_c(r)+n_{nc}(r)\right)d^3r+\int V_{ext}^f(r)n_f(r)d^3r+\\ \hfill \frac{g_{bb}}{2}\int \left(n_c^2(r)+2n_{nc}^2(r)+4n_c(r)n_{nc}(r)\right)d^3r+g_{bf}\int \left(n_c(r)+n_{nc}(r)\right)n_f(r)d^3r\end{array}$$ (6) where $`f_p(z)=\mathrm{\Gamma }(p)^{-1}\int _0^{\infty }y^{p-1}𝑑y/(z^{-1}e^y+1)`$, $`g_p(z)=\mathrm{\Gamma }(p)^{-1}\int _0^{\infty }y^{p-1}𝑑y/(z^{-1}e^y-1)`$ are the usual Fermi and Bose functions and $`z_{f,b}=\mathrm{exp}(\beta (\mu _{f,b}-V_{eff}^{f,b}(r)))`$ . The release energy is obtained by setting the confinement potentials $`V_{ext}^{b,f}`$ in eq. (6) to zero. This quantity can be measured via time-of-flight experiments. At low temperature the thermal component $`n_{nc}`$ can be safely neglected in the right hand side of equations (1),(4) and (5). Similarly, when the fermionic density $`n_f`$ is much smaller than the density $`n_c`$ of the Bose condensate, its contribution in the right hand side of the same equations can be dropped. This is valid, for example, if the interaction strengths $`g_{bf}`$ and $`g_{bb}`$ have comparable size and if the trapping potential $`V_{ext}^f`$ is not too stiff with respect to $`V_{ext}^b`$. In this case the density profile of the condensate is not affected by the interaction with the fermions and the effective potential felt by the fermions takes the simplified form: $$V_{eff}^f(r)=\{\begin{array}{cc}\frac{1}{2}m_f\omega _f^2(1-\gamma )r^2+\frac{g_{bf}}{g_{bb}}\mu _b\hfill & \text{for }r<R_b\hfill \\ \frac{1}{2}m_f\omega _f^2r^2\hfill & \text{for }r\ge R_b\hfill \end{array}$$ (7) where $$\gamma =\frac{g_{bf}}{g_{bb}}\frac{m_b\omega _b^2}{m_f\omega _f^2}$$ (8) and $`R_b=(2\mu _b/m_b\omega _b^2)^{1/2}`$ is the radius of the condensate cloud. 
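The double-parabola potential of Eq. (7) is straightforward to code, and a useful sanity check is that it is continuous at $`r=R_b`$: from the definitions of $`\gamma `$ in Eq. (8) and of $`R_b`$, one has $`\gamma \frac{1}{2}m_f\omega _f^2R_b^2=(g_{bf}/g_{bb})\mu _b`$. A sketch in arbitrary units (all parameter values below are illustrative, not taken from a real mixture):

```python
def V_eff_f(r, m_f, w_f, gamma, g_ratio, mu_b, R_b):
    """Double-parabola effective potential of Eq. (7) felt by the fermions.
    g_ratio stands for g_bf / g_bb."""
    if r < R_b:
        return 0.5 * m_f * w_f**2 * (1.0 - gamma) * r**2 + g_ratio * mu_b
    return 0.5 * m_f * w_f**2 * r**2

# Illustrative parameters (hbar = m_b = w_b = 1 units).
m_f, w_f, m_b, w_b = 0.5, 1.3, 1.0, 1.0
g_ratio = 0.4
gamma = g_ratio * (m_b * w_b**2) / (m_f * w_f**2)   # Eq. (8)
mu_b = 2.0
R_b = (2.0 * mu_b / (m_b * w_b**2)) ** 0.5          # condensate radius

eps = 1e-9
inside = V_eff_f(R_b - eps, m_f, w_f, gamma, g_ratio, mu_b, R_b)
outside = V_eff_f(R_b + eps, m_f, w_f, gamma, g_ratio, mu_b, R_b)
```

For $`\gamma <1`$ the inner parabola is simply a softened harmonic well shifted by the mean-field offset $`(g_{bf}/g_{bb})\mu _b`$, which is why the fermions inside the condensate behave as if trapped at a renormalized frequency.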
The potential (7) depends on temperature through the boson chemical potential $`\mu _b`$, which determines the radius $`R_b`$. Full numerical calculations, including the contribution of the thermal Bose and Fermi components, show that this simplified model (hereafter called the double-parabola model) describes very well the main features of the system below the critical temperature for Bose-Einstein condensation. In the double-parabola model, if the parameter $`\gamma `$ in eq. (8) is smaller than unity, the potential felt by the fermions has its minimum in the center of the trap. In this situation two limiting cases can be envisaged by comparing $`R_b`$ with the radius of the Fermi cloud. This is approximately given by the Fermi radius $`R_F=(2E_F/m_f\omega _f^2)^{1/2}`$ calculated in the absence of interactions, where $`E_F=(6N_f)^{1/3}\hbar \omega _f`$ is the Fermi energy. In the limit $`R_b\ll R_F`$ the number of bosons is much smaller than the number of fermions and thus the interactions play a minor role. Instead, in the limit $`R_b\gg R_F`$ the fermionic cloud feels a harmonic trapping potential with a renormalized frequency $`\stackrel{~}{\omega }_f=\omega _f(1-\gamma )^{1/2}`$. Finally in the case $`\gamma >1`$ the repulsive interaction with the bosons is stronger than the external potential. The effective potential (7) then exhibits a local maximum at the center of the trap. ## III Scaling and role of the interactions at $`T=0`$ Let us begin our discussion in the framework of the double-parabola model introduced in the previous section. In the case $`\gamma <1`$ we can give a simple approximate solution of the model by a variational minimization of the energy functional in eq. (6). 
At $`T=0`$ this reads $$\begin{array}{c}E(T=0)=\frac{3}{5}\frac{\hbar ^2}{2m_f}(6\pi ^2)^{2/3}\int n_f^{5/3}(r)d^3r+\int V_{ext}^b(r)n_c(r)d^3r+\hfill \\ \hfill \int V_{ext}^f(r)n_f(r)d^3r+\frac{g_{bb}}{2}\int n_c^2(r)d^3r+g_{bf}\int n_c(r)n_f(r)d^3r.\end{array}$$ (9) The variational approach (see the details in the Appendix) explicitly shows that the relevant properties of the system depend on the various parameters of the model through two dimensionless combinations, which are the parameter $`\gamma `$ in eq. (8) and a parameter $`x`$ given by $$x=\sqrt{\frac{R_b}{R_F}}=\sqrt{\frac{m_f\omega _f}{2m_b\omega _b}}\left(\frac{15a_{bb}}{a_{ho}}\frac{N_b}{(6N_f)^{5/6}}\right)^{1/5}.$$ (10) At given $`\gamma `$, the ratio of the sizes of the two clouds in the absence of interactions determines the deviation of the kinetic energy of the Fermi component from its ideal-gas value. We have checked numerically that the scaling in these two variables is satisfied with good accuracy also by the full numerical solution of eqs. (1-5) at zero temperature for any value of $`\gamma `$. The description of the system with only two scaling parameters instead of the eight original ones entering eqs. (1-3) represents a major simplification of the problem. In view of this property, in the following we shall present our discussion in terms of the scaling parameters $`x`$ and $`\gamma `$. In fig. 1.a we show a plot of the kinetic energy as a function of $`x`$ at zero temperature for different values of $`\gamma <1`$. As $`x`$ increases, the kinetic energy of the fermions goes from its non-interacting value $`E_{kin}^0=3N_fE_F/8`$ to the strong-coupling limit $`\stackrel{~}{E}_{kin}=E_{kin}^0(1-\gamma )^{1/2}`$. As a first result of our analysis we see from fig. 1.a that there is a clear correspondence, for a fixed value of $`x`$, between the value of the kinetic energy and the value of $`\gamma `$. 
Therefore the sign and the strength of the ratio between the boson-fermion and boson-boson coupling constants could be inferred from a measurement of the fermion kinetic energy. In the case $`\gamma >1`$ the fermions are expelled from the center of the trap and form a shell around the bosons as $`N_b`$ increases with respect to $`N_f`$. In this case the kinetic energy of the fermions (fig. 1.b) tends to zero when $`x\mathrm{}`$. For completeness we have also analyzed the behaviour of the mean square radius of the fermionic cloud as a function of $`x`$. For $`\gamma <1`$ the asymptotic value at large $`x`$ is larger(smaller) than in the ideal case for $`\gamma >0(<0)`$ (see fig. 2.a). For $`\gamma >1`$ the mean square radius increases indefinitely with increasing $`x`$ (fig. 2.b). These behaviours are immediately understood in terms of the behaviour of the kinetic energy shown in fig. 1. ## IV The role of temperature Let us finally examine the temperature dependence of the kinetic energy of the Fermi gas. To this purpose we have solved self-consistently the full set of eqs. (1-5). Of course, in the classical regime the kinetic energy is insensitive to interactions. As quantum degeneracy sets in at $`T<T_F`$, where $`T_F=E_F/K_B`$ is the Fermi temperature, deviations from the classical value $`3N_fK_BT/2`$ become apparent. We show in fig. 3 the predicted behaviour for a given choice of the parameters of the mixture. The role of the interactions decreases as temperature increases. This can also be seen in fig. 4 where we plot the kinetic energy as a function of $`x`$ for a choice of different temperatures. The scaling behaviour described in sec. III is less accurate at finite temperature. This is easily understood from the fact that the approximations leading to eq. (7) become less justified as temperature increases. 
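To make the scaling analysis of Sec. III concrete, the sketch below evaluates the parameter $`x`$ of eq. (10) and the two limiting values of the fermionic kinetic energy, $`E_{kin}^0=3N_fE_F/8`$ and $`E_{kin}^0(1-\gamma )^{1/2}`$. All numerical parameter values are hypothetical and chosen only to exhibit the $`N_b^{1/5}`$ scaling of $`x`$:

```python
def scaling_x(m_f, w_f, m_b, w_b, a_bb, a_ho, N_b, N_f):
    """Scaling parameter x = sqrt(R_b/R_F) of Eq. (10)."""
    return ((m_f * w_f / (2.0 * m_b * w_b)) ** 0.5
            * (15.0 * a_bb / a_ho * N_b / (6.0 * N_f) ** (5.0 / 6.0)) ** 0.2)

def kinetic_limits(N_f, hbar_w_f, gamma):
    """Ideal-gas kinetic energy and its strong-coupling limit (gamma < 1)."""
    E_F = (6.0 * N_f) ** (1.0 / 3.0) * hbar_w_f      # Fermi energy
    E0 = 3.0 / 8.0 * N_f * E_F                       # non-interacting value
    return E0, E0 * (1.0 - gamma) ** 0.5

# Hypothetical parameters (bosonic oscillator units).
x1 = scaling_x(m_f=0.5, w_f=1.5, m_b=1.0, w_b=1.0,
               a_bb=5e-3, a_ho=1.0, N_b=1e6, N_f=1e4)
x2 = scaling_x(m_f=0.5, w_f=1.5, m_b=1.0, w_b=1.0,
               a_bb=5e-3, a_ho=1.0, N_b=32e6, N_f=1e4)
E0, E_strong = kinetic_limits(N_f=1e4, hbar_w_f=1.0, gamma=0.4)
```

Since $`x\propto N_b^{1/5}`$, increasing the boson number by a factor 32 doubles $`x`$, and for repulsive boson-fermion interactions ($`0<\gamma <1`$) the large-$`x`$ kinetic energy is reduced below its ideal-gas value.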
## V Conclusions We have presented a broad study of the kinetic energy of the fermionic component in a mixture of bosons and fermions in the so-called Thomas-Fermi regime ($`N_ba_{bb}/a_{ho}\gg 1`$). We have shown that at zero temperature the kinetic energy, as well as the mean square radius of the Fermi component, exhibit an important scaling behaviour in the relevant parameters $`\gamma `$ and $`x`$ defined in eqs. (8,10). This has allowed us to give a systematic investigation of these physical properties over a wide range of system parameters and to understand the role of the interactions between the Fermi gas and the Bose condensate. In particular, we have found that the shift of the fermionic kinetic energy due to the interactions becomes sizeable at appreciable values of the parameter $`x`$ measuring the relative radii of the two clouds. This effect could be used to infer the sign of the boson-fermion scattering length from measurements of the fermion kinetic energy. Finally, the role of temperature has been investigated within the self-consistent numerical solution of the full set of equations for the coupled boson-fermion mixture and the interplay between thermal and interaction effects on the kinetic energy has been demonstrated. This work is supported by the Istituto Nazionale per la Fisica della Materia through the Advanced Research Project on BEC. One of us (L.V.) acknowledges the hospitality of the Scuola Normale Superiore di Pisa during part of this work. ## A Variational model The scaling behaviour discussed in section III can be explicitly predicted by a variational approach which turns out to be very accurate in reproducing the numerical results at $`T=0`$ in the case $`\gamma <1`$. The variational method is based on the minimization of the energy functional (9) within a restricted class of functions.
For $`\gamma <1`$ it is convenient to describe the fermionic cloud as if it were embedded in an effective potential $`V_{var}(r)=\frac{1}{2}m_f\omega _{var}^2r^2`$, where the frequency $`\omega _{var}`$ is a variational parameter. The corresponding fermionic density profile is $`n_f(r)=1/(6\pi ^2)(2m_f/\hbar ^2)^{3/2}(E_F^{var}-m_f\omega _{var}^2r^2/2)^{3/2}`$ and the expression for the variational energy functional takes the form $$\begin{array}{c}E_{var}=E_{kin}+E_{ho}+E_{int}=\frac{3}{8}N_fE_F\frac{\omega _{var}}{\omega _f}+\frac{3}{8}N_fE_F\frac{\omega _f}{\omega _{var}}+\hfill \\ \hfill \frac{g_{bf}}{g_{bb}}\int _0^{\mathrm{min}(R_F^{var},R_b)}\frac{1}{6\pi ^2}\left(\frac{2m_f}{\hbar ^2}\right)^{3/2}\left(E_F^{var}-\frac{1}{2}m_f\omega _{var}^2r^2\right)^{3/2}\left(\mu _b-\frac{1}{2}m_b\omega _b^2r^2\right)d^3r.\end{array}$$ (A1) Here $`E_F^{var}=(6N_f)^{1/3}\hbar \omega _{var}`$ and $`R_F^{var}=(2E_F^{var}/m_f\omega _{var}^2)^{1/2}`$ are the Fermi energy and the Fermi radius calculated with the frequency $`\omega _{var}`$. The bosons are described by the Thomas-Fermi inverted parabola $`n_b(r)=g_{bb}^{-1}(\mu _b-m_b\omega _b^2r^2/2)`$ with $`\mu _b=\hbar \omega _b(15N_ba_{bb}/a_{ho})^{2/5}/2`$. The integral in eq.
(A1) can be carried out analytically, with the result $$E_{var}(x,\gamma ,\alpha )=\frac{3}{8}N_fE_F\times \{\begin{array}{cc}\frac{\alpha ^2}{x^2}+\frac{x^2}{\alpha ^2}+\gamma \frac{x^2}{\alpha ^2}P(\alpha )\hfill & \text{for }\alpha <1\hfill \\ \frac{\alpha ^2}{x^2}+\frac{x^2}{\alpha ^2}+\gamma \frac{x^2}{\alpha ^2}\left(-1+\frac{8}{3}\alpha ^2\right)\hfill & \text{for }\alpha \geq 1\hfill \end{array}$$ (A2) where $`\alpha ^2=x^2\omega _{var}/\omega _f`$ and $$P(\alpha )=\frac{2}{9\pi }\left[\alpha \sqrt{1-\alpha ^2}(9-18\alpha ^2+40\alpha ^4-16\alpha ^6)+3(-3+8\alpha ^2)\mathrm{arcsin}(\alpha )\right],$$ (A3) $$x=\sqrt{\frac{m_f\omega _f}{2m_b\omega _b}}\left(15\frac{a_{bb}}{a_{ho}}\frac{N_b}{(6N_f)^{5/6}}\right)^{1/5}.$$ (A4) The condition $`\partial E_{var}/\partial \alpha =0`$ determines the value of $`\alpha `$ and hence of $`\omega _{var}`$. This equation has to be solved numerically, except for $`\alpha \geq 1`$ where the model gives the result $`\omega _{var}=\stackrel{~}{\omega }_f`$. The expression (A2) allows an explicit identification of the scaling variables introduced in sec. III. In fact, the quantity $`E_{var}/N_fE_F`$ at its minimum depends only on $`x`$ and $`\gamma `$. Of course the variational estimate gives an upper bound for the total energy. This bound is very close to the value obtained by solving the Schrödinger equation with the potential (7). Typical deviations are less than 1 % of the energy.
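The one-dimensional minimization behind the variational model is easy to reproduce numerically. The sketch below is ours; the signs inside $`P(\alpha )`$ and the second branch follow our reading of the garbled source, fixed by continuity of the two branches at $`\alpha =1`$, and the grid minimizer and test parameters are illustrative choices. For $`\gamma =0`$ the result is independent of those sign conventions and must return the noninteracting minimum $`\alpha =x`$, i.e. $`\omega _{var}=\omega _f`$.

```python
import math

def P(alpha):
    # Eq. (A3); only needed on the branch alpha < 1
    return (2.0 / (9.0 * math.pi)) * (
        alpha * math.sqrt(1.0 - alpha ** 2)
        * (9.0 - 18.0 * alpha ** 2 + 40.0 * alpha ** 4 - 16.0 * alpha ** 6)
        + 3.0 * (-3.0 + 8.0 * alpha ** 2) * math.asin(alpha)
    )

def E_var(alpha, x, gamma):
    """Eq. (A2) in units of (3/8) N_f E_F."""
    base = alpha ** 2 / x ** 2 + x ** 2 / alpha ** 2
    if alpha < 1.0:
        return base + gamma * (x ** 2 / alpha ** 2) * P(alpha)
    return base + gamma * (x ** 2 / alpha ** 2) * (-1.0 + (8.0 / 3.0) * alpha ** 2)

def minimize_alpha(x, gamma, lo=1e-3, hi=3.0, n=20000):
    # brute-force grid search; adequate for this smooth one-dimensional problem
    return min((lo + (hi - lo) * i / n for i in range(1, n)),
               key=lambda a: E_var(a, x, gamma))
```

At the minimum, the scaled kinetic energy is simply $`\alpha ^2/x^2=\omega _{var}/\omega _f`$, which is how the curves of fig. 1 can be generated as functions of $`x`$ and $`\gamma `$.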
# Flat histogram Monte Carlo method ## 1 Introduction The basic problem in equilibrium statistical mechanics is to compute the canonical average $$\langle A\rangle _T=\frac{\sum _{\{\sigma \}}A(\sigma )\mathrm{exp}\left(-H(\sigma )/kT\right)}{\sum _{\{\sigma \}}\mathrm{exp}\left(-H(\sigma )/kT\right)}.$$ (1) In addition, the free energy $$F=-kT\mathrm{ln}\sum _{\{\sigma \}}\mathrm{exp}\left(-H(\sigma )/kT\right)$$ (2) and the related entropy are also very important. Standard Monte Carlo methods, e.g., the Metropolis importance sampling algorithm, are simple and general. However, the computation of the free energy is difficult with such methods. Over the last decade, there have been a number of methods addressing this problem . A common theme in these approaches is to evaluate the density of states $`n(E)`$ directly. If this can be done with sufficient accuracy, then the summation over all configuration states can be rewritten as a sum over energy only, e.g., $$F=-kT\mathrm{ln}\sum _En(E)\mathrm{exp}(-E/kT).$$ (3) Can we evaluate $`n(E)`$ with a uniform relative accuracy for all $`E`$? Our experience with the flat histogram method suggests that the answer is yes. ## 2 Broad histogram equation Oliveira et al. showed the validity of the following equation relating the density of states to the microcanonical average number of moves in a single-spin-flip dynamics: $$n(E)\langle N(\sigma ,\mathrm{\Delta }E)\rangle _E=n(E+\mathrm{\Delta }E)\langle N(\sigma ^{\prime },-\mathrm{\Delta }E)\rangle _{E+\mathrm{\Delta }E}.$$ (4) This equation is equivalent to a detailed balance condition in an infinite-temperature transition matrix Monte Carlo simulation . The quantity $`N(\sigma ,\mathrm{\Delta }E)`$ is the number of ways that the system goes to a state with energy $`E+\mathrm{\Delta }E`$ by a single spin flip, given that the current state is $`\sigma `$ with energy $`E`$. The average $`\langle \cdots \rangle _E`$ is performed over all the states with a fixed initial energy $`E`$ (i.e., a microcanonical average).
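The broad histogram relation (4) can be checked directly on a system small enough for exhaustive enumeration. The sketch below, for a one-dimensional Ising ring with $`J=1`$ (our choice of test system), counts all single-spin-flip moves between energy levels. Since $`\langle N(\sigma ,\mathrm{\Delta }E)\rangle _E`$ is the summed move count at energy $`E`$ divided by $`n(E)`$, eq. (4) reduces to the statement that the total number of moves from $`E`$ to $`E+\mathrm{\Delta }E`$ equals the total number of reverse moves.

```python
from itertools import product
from collections import defaultdict

L = 8  # ring of 8 Ising spins, J = 1

def energy(s):
    return -sum(s[i] * s[(i + 1) % L] for i in range(L))

n = defaultdict(int)                           # density of states n(E)
moves = defaultdict(lambda: defaultdict(int))  # sum over states of N(sigma, dE)

for conf in product((-1, 1), repeat=L):
    E = energy(conf)
    n[E] += 1
    for i in range(L):
        flipped = conf[:i] + (-conf[i],) + conf[i + 1:]
        moves[E][energy(flipped) - E] += 1

# Eq. (4): n(E) <N(s, dE)>_E = n(E + dE) <N(s', -dE)>_{E+dE}.
# With <N>_E = moves[E][dE] / n(E), both sides equal the total move count:
for E in list(moves):
    for dE, count in moves[E].items():
        assert count == moves[E + dE][-dE]
```

The check passes identically because flipping the same site maps each move onto its reverse, which is exactly the content of the broad histogram equation.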
## 3 A flat histogram dynamics Consider the following Monte Carlo dynamics: 1. Pick a site at random. 2. Flip the spin with probability $`r(E^{\prime }|E)`$. 3. Sample $`N(\sigma ,\mathrm{\Delta }E)`$, i.e., accumulate the statistics for $`N(\sigma ,\mathrm{\Delta }E)`$. 4. Go to 1. The flip probability $`r`$ is given by $$r(E^{\prime }|E)=\mathrm{min}\left(1,\frac{\langle N(\sigma ^{\prime },-\mathrm{\Delta }E)\rangle _{E^{\prime }}}{\langle N(\sigma ,\mathrm{\Delta }E)\rangle _E}\right),$$ (5) where the current state $`\sigma `$ has energy $`E`$, and the new state $`\sigma ^{\prime }`$ with one spin flipped has energy $`E^{\prime }=E+\mathrm{\Delta }E`$. With the above choice of flip rate, we can show that detailed balance is satisfied, $$r(E^{\prime }|E)P(\sigma )=r(E|E^{\prime })P(\sigma ^{\prime }),$$ (6) if $`P(\sigma )=\mathrm{const}/n(E(\sigma ))`$, since this equation is equivalent to the broad histogram equation, Eq. (4). The histogram in energy is then $`H(E)\propto n(E)P(\sigma )=\mathrm{constant}`$. It turns out that the choice of flip rate is not unique; many other formulas are possible . Due to Eq. (4), the flip rate is also equal to $`\mathrm{min}[1,n(E)/n(E^{\prime })]`$. This is exactly Lee’s method of entropy sampling (which is equivalent to the multicanonical method ). However, since neither $`n(E)`$ nor $`\langle N(\sigma ,\mathrm{\Delta }E)\rangle _E`$ is known before the simulation, the way by which the simulation gets bootstrapped is quite different. Our method is very efficient in this respect. Another important difference is that we take $`\langle N(\sigma ,\mathrm{\Delta }E)\rangle _E`$ as our primary statistics, from which we derive the density of states $`n(E)`$. Apart from the number of iterations needed, the transition matrix results are in general more accurate than those obtained using the energy histogram .
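A minimal realization of this dynamics for a one-dimensional Ising ring is sketched below. It is our own illustration, not the authors' code: the microcanonical averages in the flip rate (5) are replaced by running sample averages (the approximation introduced in the next section), a proposed move into an energy with no statistics yet is always accepted, and the system size and step count are arbitrary choices.

```python
import random
from collections import defaultdict

random.seed(1)
L, J = 16, 1

def flip_dE(s, i):
    # energy change if spin i of the ring is flipped
    return 2 * J * s[i] * (s[(i - 1) % L] + s[(i + 1) % L])

sums = defaultdict(lambda: defaultdict(float))  # running sums of N(sigma, dE)
visits = defaultdict(int)                       # samples accumulated at each E

s = [random.choice((-1, 1)) for _ in range(L)]
E = -J * sum(s[i] * s[(i + 1) % L] for i in range(L))

for step in range(200000):
    # sample N(sigma, dE) for the current state (step 3 of the dynamics)
    visits[E] += 1
    for i in range(L):
        sums[E][flip_dE(s, i)] += 1
    # propose a single spin flip (steps 1-2)
    i = random.randrange(L)
    dE = flip_dE(s, i)
    nvis = visits.get(E + dE, 0)
    if nvis and sums[E + dE].get(-dE, 0.0):
        num = sums[E + dE][-dE] / nvis
        den = sums[E][dE] / visits[E]
        r = min(1.0, num / den)
    else:
        r = 1.0  # no data yet at E + dE: bias toward unvisited energies
    if random.random() < r:
        s[i] = -s[i]
        E += dE
```

After the run, the visit histogram is roughly flat over the allowed energies $`E=-16,-12,\mathrm{},16`$, and the accumulated sums are exactly the transition-matrix statistics from which $`n(E)`$ can later be extracted.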
## 4 Simulation procedures The first approximation scheme we use is to replace the true microcanonical average by a cumulative sample average $$\langle N(\sigma ,\mathrm{\Delta }E)\rangle _E\approx \frac{1}{M}\sum _{i=1}^MN(\sigma ^i,\mathrm{\Delta }E),$$ (7) where the samples $`\sigma ^i`$ are configurations generated during the simulation with energy $`E`$. Each state in the simulation contributes to $`\langle N(\sigma ,\mathrm{\Delta }E)\rangle _E`$ for some $`E`$. For those $`E^{\prime }`$ where data are not available, we set $`r=1`$. This biases towards unvisited energy states. This dynamics does not satisfy detailed balance exactly, since the transition rate is fluctuating. However, tests on small systems show that the sample averages do converge to the exact values, with errors inversely proportional to the square root of the number of Monte Carlo steps. A two-stage simulation will guarantee detailed balance. Stage one is the same as described above. In stage two, we adjust the approximate transition matrix obtained in stage one such that detailed balance is satisfied exactly. In this stage, the flip rate is fixed and does not fluctuate with the simulation. The second-stage simulation is dynamically equivalent to Berg’s multicanonical or Lee’s entropy sampling dynamics. Stage two can be iterated so that the simulated ensemble approaches the ideal multicanonical ensemble, but we found that a two-stage or even a single-stage simulation already gives excellent results. The simulation can also be combined with the N-fold way with little overhead in computer time, since the quantity $`N(\sigma ,\mathrm{\Delta }E)`$ needed in the N-fold way is already computed. In addition, not only can we generate an equal-histogram (multicanonical) ensemble, we can also generate an “equal-hit” ensemble, in which each energy is visited by distinct states equally often . ## 5 Density of states from transition matrix The density of states is related to the transition matrix $`T_{E,\mathrm{\Delta }E}=\langle N(\sigma ,\mathrm{\Delta }E)\rangle _E/N`$ by Eq.
(4), where $`N`$ is the total number of possible moves. Since there are more equations than unknowns $`n(E)`$, we use a least-squares method to obtain an “optimal” solution. Letting $`S(E)=\mathrm{ln}n(E)`$, we consider $$\mathrm{minimize}\underset{E,E^{\prime }}{\sum }\frac{1}{\sigma _{E,E^{\prime }}^2}\left(S(E^{\prime })-S(E)-\mathrm{ln}\frac{T_{E,\mathrm{\Delta }E}}{T_{E^{\prime },-\mathrm{\Delta }E}}\right)^2$$ (8) subject to all the conditions known. For example, for the Ising model, we have $`n(E_{min})=n(E_{max})=2`$, and $`\sum _En(E)=2^N`$. The variance $`\sigma ^2`$ is the variance of the quantity $`\mathrm{ln}T_{E,\mathrm{\Delta }E}/T_{E^{\prime },-\mathrm{\Delta }E}`$ obtained from sets of Monte Carlo data. It is also possible to work with the matrix $`T`$ directly with the conditions that $`T`$ must satisfy. ## 6 Conclusion We proposed an algorithm which samples energy $`E`$ uniformly. Compared to a multicanonical simulation, the method offers a very easy way of starting the simulation. The dynamic characteristics are similar to well-converged multicanonical Monte Carlo dynamics. For example, the tunneling time for the 10-state Potts model in two dimensions is about $`\tau \propto L^{2.6}`$ for an $`L\times L`$ system. It is very easy to combine statistics from several simulations, including parallel simulations. It is an efficient method for computing the density of states and all the related thermodynamic quantities. ## Acknowledgements The work presented here is in collaboration with R.H. Swendsen, T.K. Tay, L.W. Lee, and Z.F. Zhen.
## 1 Introduction Supersymmetry has been studied for a long time as the possible framework for elementary particle theories beyond the standard model . It provides a natural solution to the hierarchy problem, allowing a small value, in fundamental terms, for the weak interaction scale. It also allows the measured values of the standard model coupling constants to be consistent with grand unification. Still, if Nature is supersymmetric, some new interaction must spontaneously break supersymmetry and transmit this information to the supersymmetric partners of the standard model particles. Two different approaches have been followed to model supersymmetry breaking. The first is the idea that supersymmetry breaking is transmitted by gravity and supergravity interactions . In these scenarios, the supersymmetry breaking scale $`\sqrt{F}`$ is of the order of $`10^{11}`$ GeV. This large value implies that gravitino interactions are extremely weak, and that the gravitino has a mass of the same size as the other supersymmetric partners. In this class of models, the lightest supersymmetric particle (LSP), which is the endpoint of all superpartner decays, is most often taken to be the superpartner of the photon, or, more generally, a neutralino. The second approach uses the gauge interactions to transmit the information of supersymmetry breaking to the standard model partners . In these gauge-mediated scenarios, the supersymmetry-breaking scale $`\sqrt{F}`$ is typically much smaller than in the gravity-mediated case, so that the gravitino $`\stackrel{~}{G}`$ is almost always the LSP. All other superpartners are unstable with respect to decay to the gravitino, though sometimes with a lifetime long on the time scale relevant to collider physics. In gauge-mediated scenarios, direct decay to the gravitino is hindered by a factor $`1/F^2`$ in the rate. Thus, attention shifts to those particles which have no allowed decays except through this hindered mode.
Such a particle is called a next-to-lightest supersymmetric particle (NLSP). Any of the typically light superpartners can play the role of the NLSP, and the collider phenomenology of a given model depends on which is chosen. For example, if the gaugino-like lightest neutralino $`\stackrel{~}{\chi }^0`$ is the NLSP and decays inside the collider, supersymmetry reactions will end with the decay $`\stackrel{~}{\chi }^0\to \gamma \stackrel{~}{G}`$, producing a direct photon plus missing energy. Other common choices for the NLSP are the lepton partners and the Higgs boson. More involved scenarios are also possible . In this paper, we consider the possibility that the lightest scalar top quark (stop, or $`\stackrel{~}{t}_1`$) is the NLSP of a gauge-mediation scenario . It is typical in supersymmetric models that the stop receives negative radiative corrections to its mass through its coupling to the Higgs sector. In addition, the mixing between the partners of the $`t_L`$ and $`t_R`$ is typically sizable, and this drives down the lower mass eigenvalue. It is not uncommon in models that the lighter stop is lighter than the top quark, and it is possible to arrange that it is also lighter than the sleptons and charginos . The existence of this possibility, though, poses a troubling question for experimenters. In this scenario, the dominant decay of the lighter stop would be the three-body decay $`\stackrel{~}{t}\to bW^+\stackrel{~}{G}`$. The $`\stackrel{~}{G}`$ is not observable, and the rest of the reaction is extremely similar to the standard top decay $`t\to W^+b`$. The cross section for stop pair production is smaller than that for top pair production at the same mass. Thus, it is possible that the top quark events discovered at the Tevatron collider contain stop events as well. How could we ever know? In this paper, we address that question. Our strategy will be to systematically analyze the three-body stop decay.
This decay process is rather complex, since the $`\stackrel{~}{G}`$ can be radiated from the partners of $`t`$, $`b`$, or $`W`$, and since both the top and the $`W`$ partners can be a mixture of weak eigenstates. For the application to the Tevatron, one must take into account that the center-of-mass energy of the production is unknown, and that the detectors can measure only a subset of the possible observables. Nevertheless, we will show that two observables available at the Tevatron can cleanly distinguish between top and stop events. The first of these is the mass distribution of the observed $`b`$ jet plus lepton system which results from a leptonic $`W`$ decay. The second is the $`W`$ longitudinal polarization. We will show that the first of these observables gives a reasonably model-independent signature of stop production, while the second is wildly model-dependent and can be used to gain insight into the underlying supersymmetry parameters. This paper is organized as follows: In Section 2, we set up our basic formalism and state our assumptions. In Section 3, we analyze the stop decay rate and the $`bW`$ and $`b\mathrm{}`$ mass distributions. In Section 4, we present the $`W`$ longitudinal polarization in various models. Section 5 gives our conclusions. ## 2 Formalism and assumptions In this section, we define our notation and set out the assumptions we will use in analyzing the stop decay process. Our calculation will be done within the framework of the minimal supersymmetric standard model (MSSM) with R-parity conservation. We will not consider any exotic particle other than those required in the MSSM. Our central assumption will be that the lighter stop mass eigenstate $`\stackrel{~}{t}_1`$ is lighter than the top quark and also lighter than the charginos and the $`b`$ superpartners, while the gravitino is very light, as in gauge-mediation scenarios. 
Under these assumptions, what would otherwise be the dominant decay $`\stackrel{~}{t}_1\to t\stackrel{~}{G}`$ is forbidden kinematically, so that the dominant stop decay must proceed either by $`\stackrel{~}{t}_1\to bW^+\stackrel{~}{G}`$ or by $`\stackrel{~}{t}_1\to c\stackrel{~}{G}`$. In the MSSM without additional flavor violation, quark mixing angles suppress the decay to $`c`$ by a factor $`10^{-6}`$. That suppression makes this decay unimportant except near the boundary of phase space where $`m\approx m_W`$. For this reason, we will ignore that decay in the rest of the paper. If the mass of the $`\stackrel{~}{t}_1`$ were larger than the mass of the top quark, the $`\stackrel{~}{t}_1`$ would decay entirely through $`\stackrel{~}{t}_1\to t\stackrel{~}{G}`$. All observable characteristics of this decay would be exactly those of top quark pair production, except that the two emitted gravitinos would lead to a small additional transverse boost. For such a heavy stop, the production cross section is less than 10% of that for top quark pair production. Nevertheless, this process might be recognized from the fact that the top quark and antiquark would be given a small preferential polarization, for example, in the $`t_R\overline{t}_L`$ helicity states if the $`\stackrel{~}{t}_1`$ is dominantly the partner of $`t_R`$. The methodology of the top polarization measurement has been discussed in detail in the literature , so we will not analyze this case further here. To analyze the case in which $`\stackrel{~}{t}_1`$ is lighter than the top quark, we begin by considering the form of the scalar top quark mass matrix.
Including the effects of soft breaking masses, Yukawa couplings, trilinear scalar couplings, and D terms, this matrix can be written in the $`\stackrel{~}{t}_R`$, $`\stackrel{~}{t}_L`$ basis as $$M_{\stackrel{~}{t}}^2=\left(\begin{array}{cc}m_{\stackrel{~}{t}_R}^2& m_t(A_t+\mu \mathrm{cot}\beta )\\ m_t(A_t+\mu \mathrm{cot}\beta )& m_{\stackrel{~}{t}_L}^2\end{array}\right),$$ (1) where $`A_t`$, $`\mu `$, $`m_t`$, and $`\mathrm{tan}\beta `$ denote, respectively, the trilinear coupling of Higgs scalars and sfermions, the supersymmetric Higgs mass term, the top quark mass, and the ratio of the two Higgs vacuum expectation values. The masses $`m_{\stackrel{~}{t}_R}^2`$ and $`m_{\stackrel{~}{t}_L}^2`$ arise from the soft breaking, the D term contribution, and the top Yukawa coupling as follows: $`m_{\stackrel{~}{t}_R}^2`$ $`=`$ $`m_{\stackrel{~}{U}_3}^2+m_t^2+{\displaystyle \frac{2}{3}}\mathrm{sin}^2\theta _wm_Z^2\mathrm{cos}2\beta `$ $`m_{\stackrel{~}{t}_L}^2`$ $`=`$ $`m_{\stackrel{~}{Q}_3}^2+m_t^2+({\displaystyle \frac{1}{2}}-{\displaystyle \frac{2}{3}}\mathrm{sin}^2\theta _w)m_Z^2\mathrm{cos}2\beta ,`$ (2) where $`\theta _w`$ denotes the weak mixing angle and $`m_Z`$ is the $`Z^0`$ boson mass. The soft breaking masses $`m_{\stackrel{~}{U}_3}^2`$ and $`m_{\stackrel{~}{Q}_3}^2`$ are more model-dependent. In many models, these masses are derived from flavor-blind mass contributions by adding the effects of radiative corrections due to the top-Higgs Yukawa coupling $`\lambda _t`$. These corrections have the form $$m_{\stackrel{~}{U}_3}^2\approx m_{\stackrel{~}{U}}^2-2\lambda _t^2\stackrel{~}{I},m_{\stackrel{~}{Q}_3}^2\approx m_{\stackrel{~}{Q}}^2-\lambda _t^2\stackrel{~}{I},$$ (3) where the function $`\stackrel{~}{I}`$ denotes a one-loop integral. The extra factor 2 in the expression for the $`m_{\stackrel{~}{U}_3}^2`$ is due to the fact that the loop diagram contains the $`Q`$ and Higgs isodoublets. From this effect, we expect that $`m_{\stackrel{~}{U}_3}^2<m_{\stackrel{~}{Q}_3}^2`$.
One should note that there is a flavor-universal positive mass correction due to diagrams with a gluino which combats the negative correction in (3). The lightest stop mass eigenstate $`\stackrel{~}{t}_1`$ and its mass $`\stackrel{~}{m}^2`$ are easily obtained by diagonalizing the stop mass matrix (1). One finds $`\stackrel{~}{t}_1`$ $`=`$ $`\mathrm{cos}\theta _t\stackrel{~}{t}_L+\mathrm{sin}\theta _t\stackrel{~}{t}_R`$ $`\stackrel{~}{t}_2`$ $`=`$ $`\mathrm{sin}\theta _t\stackrel{~}{t}_L-\mathrm{cos}\theta _t\stackrel{~}{t}_R`$ $`\stackrel{~}{m}^2`$ $`=`$ $`{\displaystyle \frac{1}{2}}\{m_{\stackrel{~}{t}_R}^2+m_{\stackrel{~}{t}_L}^2-\sqrt{(m_{\stackrel{~}{t}_L}^2-m_{\stackrel{~}{t}_R}^2)^2+4m_t^2(A_t+\mu \mathrm{cot}\beta )^2}\}`$ $`\mathrm{tan}\theta _t`$ $`=`$ $`{\displaystyle \frac{m_t(A_t+\mu \mathrm{cot}\beta )}{(m_{\stackrel{~}{t}_R}^2-\stackrel{~}{m}^2)}}.`$ (4) In these formulae, $`\theta _t`$ denotes the stop mixing angle and is chosen to be in the range $`-\pi /2\leq \theta _t\leq \pi /2`$. The relations (4) demonstrate the two mechanisms mentioned in the introduction for obtaining a small value of $`m_{\stackrel{~}{t}_1}`$: First, the radiative correction (3) could be large due to the large value of $`\lambda _t`$; second, the left-right mixing could be large due to a large value of $`A_t`$. From here on, however, we will take $`\stackrel{~}{m}`$ and $`\theta _t`$ to be phenomenological parameters to be determined by experiment. Since the final state of the three-body $`\stackrel{~}{t}_1`$ decay includes the $`W^+`$, our analysis must include the supersymmetric partners of $`W^+`$ and $`H^+`$, the charginos. In the MSSM, these states are mixtures of the winos $`\stackrel{~}{w}^\pm `$ and the Higgsinos $`h^\pm `$.
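For numerical work it is convenient to diagonalize eq. (1) directly and cross-check the closed forms of eq. (4) against the trace and determinant of the matrix. The helper below is our own sketch, with all inputs in GeV (GeV² for the soft masses); the numbers fed to it are illustrative, not values from the paper.

```python
import math

def stop_eigenvalues(mtR2, mtL2, m_t, A_t, mu, tan_beta):
    """Eigenvalues of the 2x2 stop mass-squared matrix, eq. (1)."""
    X = m_t * (A_t + mu / tan_beta)       # off-diagonal m_t (A_t + mu cot beta)
    disc = math.sqrt((mtL2 - mtR2) ** 2 + 4.0 * X * X)
    m1sq = 0.5 * (mtR2 + mtL2 - disc)     # lighter stop, eq. (4)
    m2sq = 0.5 * (mtR2 + mtL2 + disc)
    return m1sq, m2sq

# illustrative inputs (not from the paper): sizable mixing pulls m1 down
m1sq, m2sq = stop_eigenvalues(60000.0, 90000.0, 175.0, 100.0, 100.0, 2.0)
```

With these inputs the lighter eigenvalue comes out near (212 GeV)², well below the diagonal entries, illustrating how left-right mixing lowers the lighter stop mass.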
In two-component fermion notation, the left-handed chargino fields are written $$\stackrel{~}{C}_i^+=(\stackrel{~}{w}^+,ih^+),\stackrel{~}{C}_i^-=(\stackrel{~}{w}^-,ih^-).$$ (5) In this basis, the chargino mass matrix is $$M_+=\left(\begin{array}{cc}m_2& \sqrt{2}m_W\mathrm{sin}\beta \\ \sqrt{2}m_W\mathrm{cos}\beta & \mu \end{array}\right),$$ (6) where $`m_2`$ is the soft breaking mass of the $`SU(2)`$ gaugino, and $`\mu `$ is the supersymmetric Higgs mass. The matrix $`M_+`$ is diagonalized by writing $`M_+=(V_-)^TDV_+`$, where $`V_+`$, $`V_-`$ are unitary; then the mass eigenstates are given by $$\stackrel{~}{\chi }_i^+=V_{+ij}\stackrel{~}{C}_j^+,\stackrel{~}{\chi }_i^-=V_{-ij}\stackrel{~}{C}_j^-.$$ (7) To be consistent with the assumption that the $`\stackrel{~}{t}_1`$ is the NLSP, we will consider only sets of parameters for which the mass of the $`\stackrel{~}{t}_1`$ is lower than either of the eigenvalues of $`M_+`$. We analyze the couplings of superparticles to the gravitino by using the supersymmetry analogue of Goldstone boson equivalence. The gravitino obtains mass through the Higgs mechanism, by combining with the Goldstone fermion (Goldstino) associated with spontaneous supersymmetry breaking. When the gravitino is emitted with an energy high compared to its mass, the helicity $`h=\pm \frac{3}{2}`$ states come dominantly from the gravity multiplet and are produced with gravitational strength, while the $`h=\pm \frac{1}{2}`$ states come dominantly from the Goldstino. In the scenario that we are studying, the mass of the gravitino is on the scale of keV, while the energy with which the gravitino is emitted is on the scale of GeV. Thus, it is a very good approximation to ignore the gravitational component and consider the gravitino purely as a spin $`\frac{1}{2}`$ Goldstino. From here on, we will use the symbol $`\stackrel{~}{G}`$ to denote the Goldstino.
The coupling of one Goldstino to matter is given by the coupling to the supercurrent $$\delta \mathcal{L}=\frac{1}{\sqrt{2}F}\partial _\mu \stackrel{~}{G}cJ^\mu +\frac{1}{\sqrt{2}F}J^{\mu \dagger }c\partial _\mu \stackrel{~}{G}^{\ast },$$ (8) where $`\sqrt{F}`$ is the scale of supersymmetry breaking and $`c=i\sigma ^2`$. The supercurrent takes the form $`J^\mu `$ $`=`$ $`\sqrt{2}\sigma ^\nu \overline{\sigma }^\mu D_\nu \varphi ^{\ast }\psi -\sqrt{2}i\left({\displaystyle \frac{\partial W}{\partial \varphi }}\right)^{\ast }\sigma ^\mu c\psi ^{\ast }`$ (9) $`-g\sigma ^\mu c\varphi ^{\ast }\lambda ^{\ast }\varphi -i\sigma ^{\lambda \sigma }F_{\lambda \sigma }\sigma ^\mu c\lambda ^{\ast },`$ summed over all chiral supermultiplets $`(\varphi ,\psi )`$ and all gauge supermultiplets $`(A_\mu ,\lambda )`$. In this equation, $`W`$ is the superpotential and $`g`$ is the gauge coupling. All of the various terms in this equation actually enter the amplitude for the three-body stop decay. It is a formidable task to present the complete dependence of the properties of the three-body stop decay on the various supersymmetry parameters. We will present results in this paper for the following four scenarios, which illustrate the range of possibilities for the wino-Higgsino mixing problem: 1. a scenario in which the lightest chargino is light and wino-like: $`m_2=200`$ GeV, $`\mu =1000`$ GeV, 2. a scenario in which the lightest chargino is light and Higgsino-like: $`m_2=1000`$ GeV, $`\mu =200`$ GeV, 3. a scenario in which the lightest chargino is light and mixed: $`m_2=\mu =260`$ GeV, 4. a scenario in which the lightest chargino is heavy: $`m_2=\mu =500`$ GeV. Within each scenario, we will vary other parameters such as $`m`$, $`\mathrm{sin}\theta _t`$, and $`\mathrm{tan}\beta `$ in order to gain a more complete picture of the $`\stackrel{~}{t}_1`$ decay. ## 3 Characteristics of the stop decay Using the Goldstino interactions from (8) and the gauge and Yukawa interactions of the MSSM, we can construct the Feynman diagrams for $`\stackrel{~}{t}_1\to bW^+\stackrel{~}{G}`$ shown in Figure 1.
These diagrams include processes with intermediate $`t`$, $`\stackrel{~}{\chi }_i^+`$, and $`\stackrel{~}{b}`$ particles, plus a contact interaction present in (8). It is useful to think about building up the complete amplitude for the stop decay by successively considering a number of limiting cases. In Figure 1, we have drawn the diagrams using a basis of weak interaction eigenstates. The first property to be derived from these amplitudes is the stop decay rate. It is always an issue when an NLSP decays to the gravitino whether the decay is prompt on the time scales of particle physics, or whether the NLSP travels a measurable distance from the production vertex before decaying. Taking into account the 3-body phase space and the fact that the amplitude is proportional to $`1/F`$, we might roughly estimate the decay rate as $$\mathrm{\Gamma }(\stackrel{~}{t}_1)\approx \frac{\alpha _w(m-m_W)^7}{1028\pi ^2m_W^2F^2},$$ (10) where $`\alpha _w=g_2^2/4\pi `$ is the weak-interaction coupling constant. By this estimate, a value of $`\sqrt{F}`$ smaller than 100 TeV would give a prompt decay, with $`c\tau <1`$ cm. In Figure 2 we show the result of a complete calculation of the decay rate in the four scenarios listed at the end of Section 2. In all four cases, we have chosen the parameter values $`\sqrt{F}=30`$ TeV, $`\mathrm{tan}\beta =1.0`$, $`m_{\stackrel{~}{b}_L}=300`$ GeV, and $`\mathrm{sin}\theta _t=0.8`$. The complete calculation reproduces the steep dependence on the stop mass $`m`$ which is present in (10), and shows that the normalization is roughly correct. Since $`\mathrm{\Gamma }`$ varies as the inverse fourth power of $`\sqrt{F}`$, one can arrange for a short decay length by making $`\sqrt{F}`$ sufficiently low. For $`m\approx 160`$ GeV, the choice $`\sqrt{F}=30`$ TeV leads to a decay length $`c\tau `$ of about 1 cm. We have found that the decay length is quite insensitive to all of the other relevant parameters.
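The rough estimate (10) is easy to evaluate. The sketch below, our own, converts the width in natural units to a decay length in cm via $`\hbar c`$, with $`\alpha _w`$ and $`m_W`$ set to representative values of our choosing.

```python
import math

HBARC_CM = 1.973e-14   # hbar * c in GeV * cm
ALPHA_W = 0.034        # g_2^2 / (4 pi), representative value
M_W = 80.4             # GeV

def width_estimate(m_stop, sqrt_F):
    """Rough three-body width of eq. (10); masses and sqrt_F in GeV."""
    F = sqrt_F ** 2
    return ALPHA_W * (m_stop - M_W) ** 7 / (1028.0 * math.pi ** 2 * M_W ** 2 * F ** 2)

def ctau_cm(m_stop, sqrt_F):
    return HBARC_CM / width_estimate(m_stop, sqrt_F)
```

For $`m=160`$ GeV and $`\sqrt{F}=30`$ TeV this gives $`c\tau `$ of order 1 cm, in line with the estimate quoted above, and $`c\tau `$ scales as the fourth power of $`\sqrt{F}`$.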
The variation between scenarios or within a given scenario is less than a factor of 2. From here on, we will analyze the $`\stackrel{~}{t}_1`$ decay as if it were prompt. But it is clear from the figure that, if $`\sqrt{F}`$ is as low as 30 TeV, stop decays will be identifiable by their displaced vertices in addition to the kinematic signatures discussed in this paper. The final state of the three-body stop decay is essentially the same as ordinary top decay, since the stop produces a $`b`$ jet, a $`W`$ boson, and an unobservable $`\stackrel{~}{G}`$. How, then, can we distinguish the $`t\overline{t}`$ and $`\stackrel{~}{t}_1\stackrel{~}{\overline{t}}_1`$ production processes? The most straightforward way to approach this problem is to analyze the observable mass distributions of $`t`$ and $`\stackrel{~}{t}_1`$ decay products. If we could completely reconstruct the $`W`$ boson, the invariant mass of the $`bW`$ system would peak sharply at $`m_t`$ in the case of $`t`$ decay, and would have a more extended distribution below the stop mass $`m`$ in the case of $`\stackrel{~}{t}_1`$ decay. However, in the observation of top events at the Tevatron, the analysis cannot be so clean. Events from $`t\overline{t}`$ production are typically observed in the final state in which one $`W`$ decays hadronically and the second decays to $`\ell \nu `$. Then the final state contains an unobserved neutrino. If there is only this one missing particle, the event can be reconstructed. But the events with $`\stackrel{~}{t}_1`$ contain two more missing particles, the $`\stackrel{~}{G}`$’s, which potentially confuse the analysis. Fortunately, it is possible to discriminate $`t`$ from $`\stackrel{~}{t}_1`$ events by studying the invariant mass distribution of the directly observable $`b`$ and lepton decay products.
For top decays, the distribution in the $`b`$-lepton invariant mass $`m(eb)`$ (quoted, for simplicity, for $`m_b=0`$) takes the form $$\frac{1}{\mathrm{\Gamma }}\frac{d\mathrm{\Gamma }}{dm(eb)}=\frac{12m(eb)}{2(1-m_W^2/m_t^2)(2+m_W^2/m_t^2)}(1-y)\left(1-y+\frac{m_t^2}{2m_W^2}y\right),$$ (11) where $`y=m^2(eb)/(m_t^2-m_W^2)`$. As is shown in Figure 3, this distribution extends from $`m(eb)=m_b`$ to a kinematic endpoint at $`m(eb)=155`$ GeV, and peaks toward its high end, at about $`m(eb)=120`$ GeV. On the other hand, in $`\stackrel{~}{t}_1`$ decay, not only does the $`m(eb)`$ distribution have a lower endpoint value, reflecting the value of $`m<m_t`$, but it also peaks toward the low end of its range. Figure 3 shows two typical distributions of $`m(eb)`$, corresponding to stop masses of 130 and 170 GeV. The corresponding distributions of the $`b`$-$`W`$ invariant mass $`m(bW)`$ are also shown for comparison. A remarkable feature of Figure 3 is that the $`m(eb)`$ distributions from top and stop decay remain distinctly different even in the limit in which the stop mass $`m`$ approaches $`m_t`$. Naively, one might imagine that the stop decay diagrams with top quark poles, (a) and (b) in Figure 1, would dominate in this limit and cause the stop decay to resemble top decay. Instead, we find that the top pole diagrams have no special importance in this limit. If $`E_G`$ is the $`\stackrel{~}{G}`$ energy, the top quark pole gives an energy denominator $`1/E_G`$, but this is cancelled by a $`\stackrel{~}{G}`$ emission vertex proportional to $`(E_G)^{3/2}`$. In Figure 4, we show the variation of the distribution of $`m(eb)`$ and $`m(bW)`$ according to the choice of the supersymmetry parameters. The five curves in each group correspond to specific parameter choices in the four scenarios listed at the end of Section 2, plus an additional choice in scenario (3) corresponding to the case of a pure $`\stackrel{~}{t}_L`$ ($`\theta _t=0`$).
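The stated endpoint and peak of the top-decay $`m(eb)`$ spectrum are easy to reproduce numerically. The sketch below is ours: rather than transcribing eq. (11), whose normalization conventions we do not reproduce exactly, it builds the standard leading-order spectrum from the longitudinal and left-handed $`W`$ helicity fractions, with representative mass values of our own choosing.

```python
import math

M_T, M_W = 175.0, 80.4        # representative masses in GeV (our choice)
A = M_T ** 2 - M_W ** 2       # endpoint of m^2(eb)
W = M_W ** 2 / M_T ** 2

def spectrum(m):
    """Leading-order m(eb) spectrum in top decay, built from the
    longitudinal and left-handed W helicity fractions (our derivation)."""
    y = m * m / A
    if not 0.0 <= y <= 1.0:
        return 0.0
    return (2.0 * m / A) * 6.0 * (1.0 - y) * (y + W * (1.0 - y)) / (1.0 + 2.0 * W)

endpoint = math.sqrt(A)                           # kinematic endpoint, ~155 GeV
grid = [endpoint * i / 5000 for i in range(5001)]
peak = max(grid, key=spectrum)                    # location of the maximum
norm = sum(spectrum(m) for m in grid) * endpoint / 5000
```

The spectrum integrates to unity, ends at about 155 GeV, and peaks in the 110-120 GeV range, consistent with the behavior described in the text for top decay.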
The distributions for a given value of $`m`$ are remarkably similar. Presumably, the shape of these distributions is determined more by general kinematic constraints than by the details of the decay amplitudes. The only exception to this rule that we have found comes in the case where the $`\stackrel{~}{t}_1`$ is dominantly $`\stackrel{~}{t}_L`$ and the $`\stackrel{~}{w}`$ exchange process is especially important. From these results, we believe that the $`\stackrel{~}{t}_1`$ production process can be identified by measuring the distribution of $`m(eb)`$ in events that pass the top quark selection criteria. The mass of the $`\stackrel{~}{t}_1`$ can be estimated from this distribution to about 5 GeV without further knowledge of the other supersymmetry parameters. ## 4 Longitudinal $`W`$ polarization One of the characteristic predictions of the standard model for top decay is that the final-state $`W`$ bosons should be highly longitudinally polarized. Define the degree of longitudinal polarization by $$r=\frac{\mathrm{\Gamma }(W_0)}{\mathrm{\Gamma }(\text{all})}.$$ (12) Then the leading-order prediction for this polarization in top decay is $$r_t=\frac{1}{1+2m_W^2/m_t^2}\approx 0.71.$$ (13) We have seen already that the configuration of the final $`bW^+`$ system in stop decay is quite different from that in top decay. Thus, it would seem likely that the longitudinal $`W`$ polarization would also deviate from the characteristic values for top. We will show that the value of $`r`$ in stop decay typically differs significantly from (13), in a manner that gives information about the underlying supersymmetry model. The measurement of the polarization $`r`$ at the Tevatron has been studied using the technique of reconstructing the $`W`$ decay angle in single-lepton events from the lepton and neutrino four-vectors . An accuracy of $`\pm 0.03`$ should be achieved in the upcoming Run II. 
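Equation (13) is simple enough to evaluate directly; the masses below are representative assumptions, not values fixed by the text:

```python
m_t, m_W = 175.0, 80.0   # representative masses in GeV (assumed values)

# Leading-order longitudinal W fraction in top decay, Eq. (13)
r_t = 1.0 / (1.0 + 2.0 * m_W**2 / m_t**2)
print(round(r_t, 2))     # 0.71
```

The quoted Run II precision of $`\pm 0.03`$ comfortably covers the rounding involved here.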
This technique, however, cannot be used for stop events, since the missing momentum includes the $`\stackrel{~}{G}`$’s as well as the neutrino. However, one can also measure the longitudinal $`W`$ polarization from the $`W`$ decay angle determined by using the four-vectors of the two jets assigned to the hadronic $`W`$ in the event reconstruction. It is not necessary to distinguish the quark from the antiquark to determine the degree of longitudinal polarization. What value of $`r`$ should be found for light stop pair production? In Figures 5 and 6, we plot the value of $`r`$ in the four scenarios listed at the end of Section 2, for representative values of the parameters, as a function of the stop mass. We see that the value of $`r`$ is typically lower than the top quark value (13), that it has a slow dependence on the value of the stop mass $`m`$, and that it can depend significantly on the stop mixing angle $`\theta _t`$. The variation of $`r`$ arises from the competition between the diagrams in Figure 1 in which the Goldstino is radiated from the $`t`$ and $`b`$ legs and those in which the Goldstino is radiated from the $`W`$. To understand this, it is useful to think about the limiting cases in which each intermediate propagator goes on shell. In the case in which the top quark goes on shell in diagrams a,b of Figure 1, the $`W`$ polarization has the same value (13) as that for top decay. In the case in which the $`\stackrel{~}{b}`$ goes on shell, we have the process $`\stackrel{~}{t}\to \stackrel{~}{b}W^+`$, for which also $`r=1/(1+2m_W^2/m_t^2)`$. However, the third case in which the $`\stackrel{~}{\chi }^+`$ goes on shell can give a very different result. In the limit in which the $`\stackrel{~}{\chi }^+`$ is pure gaugino, we have the subprocess $`\stackrel{~}{w}^+\to \stackrel{~}{G}W^+`$, which leads to purely transversely polarized $`W`$ bosons. 
More generally, for the process $`\stackrel{~}{\chi }_1^+\to \stackrel{~}{G}W^+`$ on shell, we have $$r=\frac{|V_{+12}|^2+|V_{-12}|^2}{2(|V_{+11}|^2+|V_{-11}|^2)+|V_{+12}|^2+|V_{-12}|^2},$$ (14) where $`V_+`$, $`V_{-}`$ are the matrices defined in (7). These individual components vary in importance as the masses on the intermediate lines are varied. The role of the chargino diagrams in producing a low value of $`r`$ is shown clearly in Figure 7. Here we plot the value of $`r`$ as a function of the supersymmetry-breaking $`SU(2)`$ gaugino mass $`m_2`$ and observe that $`r`$ moves to a higher asymptotic value as the gaugino is decoupled. Beyond this observation, though, the dependence of $`r`$ on the underlying parameters is not simple. As we have seen in the previous section, it is never true that one particular subprocess comes almost onto mass shell and dominates the stop decay. This feature of the stop decay, which was an advantage in the previous section, here provides a barrier to finding a quantitative relation between a measured value of $`r`$ and the underlying parameter set. On the other hand, it is interesting that almost every scenario predicts a value of $`r`$ substantially different from the Standard Model value for top decay. ## 5 Conclusions In this paper, we have discussed the phenomenology of light stop decay through the process $`\stackrel{~}{t}\to W^+b\stackrel{~}{G}`$. We have shown that this process can be distinguished from $`t`$ decay through the characteristic shape of the $`b\ell `$ mass distribution. We have shown also that the fraction of longitudinal polarization of the $`W^+`$ in $`\stackrel{~}{t}`$ decays can vary significantly from the prediction (13) for $`t`$. Since these two observables are available at the Tevatron collider, it should be possible there to exclude or confirm this unusual scenario for the realization of supersymmetry. 
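The limiting behavior of (14) can be made concrete with a small helper function. The matrix entries below are hypothetical inputs; we only assume, following the gaugino limit discussed above, that index 1 labels the gaugino component and index 2 the higgsino component of $`\stackrel{~}{\chi }_1^+`$:

```python
def r_longitudinal(Vp_row, Vm_row):
    """Longitudinal W fraction of Eq. (14) for on-shell chargino -> gravitino + W^+.
    Vp_row, Vm_row are the first rows (V_{+1j}, V_{-1j}) of the chargino mixing
    matrices; component 0 = gaugino, 1 = higgsino (an assumed convention)."""
    num = abs(Vp_row[1])**2 + abs(Vm_row[1])**2
    den = 2.0 * (abs(Vp_row[0])**2 + abs(Vm_row[0])**2) + num
    return num / den

# Pure-gaugino chargino: Eq. (14) gives r = 0, i.e. purely transverse W's,
# matching the statement in the text.
print(r_longitudinal((1.0, 0.0), (1.0, 0.0)))   # 0.0
# Pure-higgsino chargino: the formula gives r = 1 in this opposite limit.
print(r_longitudinal((0.0, 1.0), (0.0, 1.0)))   # 1.0
```

Intermediate mixings interpolate between these two extremes, which is the competition behind the $`m_2`$ dependence shown in Figure 7.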
ACKNOWLEDGEMENTS We are grateful to Scott Thomas for suggesting this problem, to Regina Demina, for encouragement and discussions on stop experimentation, and to JoAnne Hewett, for helpful advice. This work was supported by the Department of Energy under contract DE–AC03–76SF00515.
no-problem/9909/hep-th9909094.html
ar5iv
text
# References ULB-TH/99-15, VUB/TENA/99/7, hep-th/9909094 DEFORMATIONS OF CHIRAL TWO-FORMS IN SIX DIMENSIONS XAVIER BEKAERT<sup>1</sup>, MARC HENNEAUX<sup>1</sup> and ALEXANDER SEVRIN<sup>2</sup> <sup>1</sup>Physique Théorique et Mathématique, Université Libre de Bruxelles, Campus Plaine C.P. 231, B-1050 Bruxelles, Belgium <sup>2</sup>Theoretische Natuurkunde, Vrije Universiteit Brussel Pleinlaan 2, B-1050 Brussel, Belgium ABSTRACT > Motivated by a system consisting of a number of parallel M5-branes, we study possible local deformations of chiral two-forms in six dimensions. Working to first order in the coupling constant, this reduces to the study of the local BRST cohomological group at ghost number zero. We obtain an exhaustive list of all possible deformations. None of them allows for a satisfactory formulation of the M5-branes system leading to the conclusion that no local field theory can describe such a system. The M5-brane is perhaps the most elusive object in M-theory . In the limit where bulk gravity decouples, it is described by a six-dimensional field theory. In order to match the eight propagating fermionic degrees of freedom, its bosonic sector has to include, besides the five scalar fields which describe the position of the brane in transverse space, a chiral two-form transforming as the (3,1) of the little group $`SU(2)\times SU(2)`$. The latter reflects that M2-branes may end on M5-branes. The resulting theory is an $`N=(2,0)`$ superconformal field theory in six dimensions . A single M5 brane with strong classical fields is well understood; its Lagrangian is described in . However, when several, say $`n`$, M5-branes coincide, little is known. Compactifying one dimension of M-theory on a circle yields type IIA string theory. If the M5-branes are taken transversal to the compact direction, they become $`n`$ coinciding NS5-branes in type IIA, again a poorly understood system. 
However, if the branes are longitudinal to the compact direction, the M5-branes appear as a set of coinciding D4-branes which are quite well understood. Their dynamics is governed by a five-dimensional $`U(n)`$ Born-Infeld theory which, ignoring higher derivative terms, is an ordinary $`U(n)`$ non-abelian gauge theory. Turning back to the eleven dimensional picture, this suggests that a non-abelian extension of the chiral two-form should exist. However, there are several indications that this is a highly unusual system. Both entropy considerations and the calculation of the conformal anomaly of the partition function show that the theory should have $`n^3`$ instead of $`n^2`$ degrees of freedom. In it was argued on geometric grounds that for $`p>1`$, non-chiral $`p`$-forms do not allow for non-abelian extensions. In , geometric prejudices were dropped, and general deformations of non-chiral $`p`$-forms were classified to first order in the coupling constant. Though both known and novel deformations were discovered, none of them had the required property that the $`p`$-form gauge algebra becomes genuinely non-abelian. In the present letter we will specifically focus on deformations of chiral two-forms in six dimensions. By construction, these deformations are continuously connected to the free theory. We will ignore the fermions and the scalar fields as we believe that they will not modify our conclusions. In fact this can easily be proven for the scalar fields because they are inert under the two-form gauge symmetry. Our starting point is the action of , $$S[A_{ij}^A]=\underset{A}{\sum }\int d^6x(B^{Aij}\dot{A}_{ij}^A-B^{Aij}B_{ij}^A)$$ (1) for a collection of $`N`$ free chiral $`2`$-forms $`A_{ij}^A`$, ($`i,j,\mathrm{\dots }=1,\mathrm{\dots },5`$), ($`A=1,\mathrm{\dots },N`$), where $`N`$ is arbitrary and could e.g. be equal to $`n^3`$. 
The magnetic fields $`B^{Aij}`$ in (1) are defined through $$B_{ij}^A=\frac{1}{3!}ϵ_{ijklm}F^{Aklm},F_{ijk}^A=\partial _iA_{jk}^A+\partial _jA_{ki}^A+\partial _kA_{ij}^A.$$ (2) If one varies the action with respect to the $`2`$-forms $`A_{ij}^A`$, one gets as equations of motion $$ϵ^{ijklm}\partial _0\partial _kA_{lm}^A-2\partial _kF^{Akij}=0\mathrm{\Leftrightarrow }ϵ^{ijklm}\partial _k(\partial _0A_{lm}^A-B_{lm}^A)=0$$ (3) which imply, assuming the second Betti number of the spatial sections to vanish, $$\partial _0A_{lm}^A-B_{lm}^A=\partial _lu_m^A-\partial _mu_l^A$$ (4) for some arbitrary spatial $`1`$-forms $`u_m^A`$. If one identifies $`u_m^A`$ with $`A_{0m}^A`$ (which is pure gauge), one may rewrite the equation (4) as $$E_{ij}^A=B_{ij}^A$$ (5) where the $`E`$’s are the electric fields, $`E_{ij}^A=F_{0ij}^A`$. Covariantly, this is equivalent to the self-duality condition $`F_{\lambda \mu \nu }^A={}^{\ast }F_{\lambda \mu \nu }^A`$. By gauge-fixing the gauge freedom of the theory, $$\delta _\mathrm{\Lambda }A_{ij}^A=\partial _i\mathrm{\Lambda }_j^A-\partial _j\mathrm{\Lambda }_i^A$$ (6) one may set $`u_m^A=0`$. One gets then the equations in the “temporal gauge”, $`\partial _0A_{lm}^A-B_{lm}^A=0`$. The action (1) may be covariantized by adding appropriate auxiliary and gauge fields. One gets in this manner the free action of . Conversely, one may fall back on (1) by partly gauge-fixing the PST Lagrangian. Thus a consistent deformation of the PST Lagrangian defines a consistent deformation of (1). Though the action (1) is significantly simpler to handle than the PST Lagrangian, we pay the price that our analysis is not manifestly Lorentz invariant. Without enforcing Lorentz invariance, we will already obtain strong constraints on the allowed deformations. We shall come back to Lorentz invariance at the end of this letter. 
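The gauge invariance of the field strength in (2) under the transformation (6) can be verified numerically. The sketch below replaces each spatial derivative by a forward difference on a small periodic grid (the grid size and random fields are arbitrary illustrative choices); since finite differences commute, $`F`$ built from a pure-gauge $`A`$ vanishes to machine precision:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
N, D = 4, 5                      # tiny periodic grid in the five spatial dimensions
shape = (N,) * D

def d(f, i):
    """Forward finite difference along axis i (periodic), standing in for d_i."""
    return np.roll(f, -1, axis=i) - f

# Pure-gauge potential A_{jk} = d_j Lambda_k - d_k Lambda_j, cf. Eq. (6)
Lam = [rng.standard_normal(shape) for _ in range(D)]
A = [[d(Lam[k], j) - d(Lam[j], k) for k in range(D)] for j in range(D)]

def F(i, j, k):
    """F_{ijk} = d_i A_{jk} + d_j A_{ki} + d_k A_{ij}, cf. Eq. (2)."""
    return d(A[j][k], i) + d(A[k][i], j) + d(A[i][j], k)

residual = max(np.abs(F(i, j, k)).max() for i, j, k in product(range(D), repeat=3))
print(residual)   # effectively zero: F of a pure-gauge configuration vanishes
```

The same setup also makes the antisymmetry of $`A_{jk}`$ manifest, since $`A[j][k]`$ is built as an explicit commutator of differences.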
Our strategy for studying the possible local deformations of the action (1) is based on the observation that these are in bijective correspondence with the local BRST cohomological group $`H^{0,6}(s|d)`$ , where $`s`$ is the BRST differential acting on the fields, the ghosts, and their conjugate antifields, $`d`$ is the ordinary space-time exterior derivative and the upper indices refer to ghost number and form degree respectively. In the present case, $`s`$ is given by $$s=\delta +\gamma $$ (7) with $`\delta A_{ij}^A`$ $`=`$ $`\delta C_i^A=\delta \eta ^A=0,`$ (8) $`\delta A^{*Aij}`$ $`=`$ $`2\partial _kF^{Akij}-ϵ^{ijklm}\partial _k\dot{A}_{lm}^A,`$ (9) $`\delta C^{*Ai}`$ $`=`$ $`\partial _jA^{*Aij},`$ (10) $`\delta \eta ^{*A}`$ $`=`$ $`\partial _iC^{*Ai}`$ (11) and $`\gamma A_{ij}^A`$ $`=`$ $`\partial _iC_j^A-\partial _jC_i^A,`$ (12) $`\gamma C_i^A`$ $`=`$ $`\partial _i\eta ^A,\gamma \eta ^A=0,`$ (13) $`\gamma A^{*Aij}`$ $`=`$ $`\gamma C^{*Ai}=\gamma \eta ^{*A}=0.`$ (14) The $`C_i^A`$ are the ghosts, the $`\eta ^A`$ are the ghosts of ghosts, while the $`A^{*Aij}`$, $`C^{*Ai}`$ and $`\eta ^{*A}`$ are the antifields. One verifies that $`\delta ^2=\gamma ^2=\delta \gamma +\gamma \delta =0`$. The cocycle condition defining elements of $`H^{0,6}(s|d)`$ is the “Wess-Zumino condition” at ghost number zero, $$sa+db=0,gh(a)=0$$ (15) Any solution of (15) defines a consistent deformation of the action (1) through $`S[A_{ij}^A]\to S[A_{ij}^A]+g\int d^6xa_0`$, where $`a_0`$ is the antifield-independent component of $`a`$. The deformation is consistent to first order in $`g`$, in the sense that one can simultaneously deform the original gauge symmetry (6) in such a way that the deformed action is invariant under the deformed gauge symmetry up to terms of order $`g`$ (included). The antifield-dependent components of $`a`$ contain information about the deformation of the gauge symmetry. Trivial solutions of (15) are of the form $`a=\gamma c+de`$ and correspond to $`a_0`$’s that can be redefined away through field redefinitions. 
Of course, there are also consistency conditions on the deformations arising from higher-order terms ($`g^2`$ and higher), but it turns out that in the case at hand, consistency to first order already restricts dramatically the possibilities. There are three possible types of consistent deformations of the action. First, one may deform the action without modifying the gauge symmetry. In that case, $`a`$ does not depend on the antifields, $`a=a_0`$. These deformations contain only strictly gauge-invariant terms, i.e., polynomials in the abelian curvatures and their derivatives (Born-Infeld terms are in this category) as well as Chern-Simons terms, which are (off-shell) gauge-invariant under the abelian gauge symmetry up to a total derivative. An example of a Chern-Simons term is given by the kinetic term of (1), which can be rewritten as $`F\partial _0A`$ (in writing Chern-Simons terms, the spatial $`2`$-forms $`A^A`$ and their successive time derivatives, which are also spatial $`2`$-forms, are effectively independent). Second, one may deform the action and the gauge transformations while keeping their algebra invariant. In BRST terms, the corresponding cocycles involve (non-trivially) the antifields $`A^{*Aij}`$ but not $`C^{*Ai}`$ or $`\eta ^{*A}`$. Finally, one may deform everything, including the gauge algebra; the corresponding cocycles involve all the antifields. Reformulating the problem of deforming the free action (1) in terms of BRST cohomology enables one to use the powerful tools of homological algebra. Following the approach of , we have completely worked out the BRST cohomological classes at ghost number zero. In particular, we have established that one can always get rid of the antifields by adding trivial solutions. In other words, the only consistent interactions for a system of chiral $`2`$-forms in six dimensions are (up to redefinitions) deformations that do not modify the gauge symmetries (6) of the free theory. 
These involve the abelian curvatures or Chern-Simons terms. There are no other consistent, local, deformations. We shall give the detailed proof of this assertion in a separate publication . We shall just outline here the general skeleton of the proof, which parallels the analysis of rather closely, emphasizing only the new features. To find the general solution of (15), one expands $`a`$ according to the antifields, i.e., more precisely, according to the antighost number, $$a=a_0+a_1+\mathrm{\cdots }+a_k,antigh(a_i)=i.$$ (16) The only variables with non-vanishing antighost number are the antifields, with $`antigh(A^{*Aij})=1`$, $`antigh(C^{*Ai})=2`$ and $`antigh(\eta ^{*A})=3`$. A similar expansion holds for $`b`$. The fact that $`k`$ remains finite follows from demanding locality in the sense that the number of derivatives in both the deformations of the action and in the deformations of the gauge transformations remains finite . What we must show is that one can eliminate all the terms in (16) but the antifield-independent component $`a_0`$. So, let us assume $`k>0`$ and finite. The last term in the expansion (16) must fulfill $`\gamma a_k+db_k=0`$ from (15). As in the non-chiral case, one may assume $`b_k=0`$ through redefinitions ($`k>0`$). Thus, $`\gamma a_k=0`$ and one must determine the general cocycle of the $`\gamma `$-differential. It is here that there is a difference with the non-chiral case. Indeed, the time derivatives of the ghosts of ghosts $`\eta ^A`$ are now in the $`\gamma `$-cohomology, while they are trivial in the non-chiral case, where one has $`\gamma C_0^A=\partial _0\eta ^A`$. In the chiral case, however, there is no ghost $`C_0^A`$, so $`\partial _0\eta ^A`$ is a non-trivial $`\gamma `$-cocycle (at ghost number two). A similar property holds for the higher-order time derivatives of $`\eta ^A`$. One easily verifies that these are the only generators of the $`\gamma `$-cohomology at positive ghost number. 
The other generators of the cohomology are the curvatures, the antifields and their spacetime derivatives. Thus, up to trivial terms that can be absorbed, one may write the last term $`a_k`$ in (16) as $$a_k=\underset{I}{\sum }P^I\omega ^I$$ (17) where (i) the $`P^I`$ are $`6`$-forms constructed out of the antifields, the curvatures, their spacetime derivatives and the $`dx^\mu `$’s; and (ii) the $`\omega ^I`$ are a basis of the vector space of polynomials in the ghosts of ghosts $`\eta ^A`$ and their successive time-derivatives. Furthermore, $`antigh(P^I)=antigh(a_k)=k`$ (by assumption) and $`gh(\omega ^I)=k`$ so that $`gh(a_k)=gh(\omega ^I)-antigh(P^I)=0`$. This shows that $`k`$ must be even since the $`\eta ^A`$’s and their successive time-derivatives have even ghost number. Thus, if $`k`$ is odd, $`a_k`$ is trivial and can be entirely removed. Turn now to the next equation following from (15), $$\gamma a_{k-1}+\delta a_k+db_{k-1}=0$$ (18) By following the same line of thought as for non-chiral systems , and using in addition an argument based on counting time-derivatives of the ghosts of ghosts, one easily proves that $`P^I`$ must take the form $`P^I=Q^Idx^0`$ (up to trivial terms), where $`Q^I`$ is a spatial $`5`$-form (polynomial of degree $`5`$ in the spatial $`dx^k`$’s) solution of $`\delta Q^I+\stackrel{~}{d}R^I=0`$. Here, $`\stackrel{~}{d}`$ is the spatial exterior derivative, $`\stackrel{~}{d}=dx^k\partial _k`$. Furthermore, in order for (17) to be non-trivial, $`Q^I`$ must be a non-trivial solution, i.e., not of the form $`\delta M^I+\stackrel{~}{d}N^I`$. The analysis of implies that there are non-trivial solutions of $`\delta Q^I+\stackrel{~}{d}R^I=0`$ only for $`antigh(Q^I)=1`$ or $`3`$. In particular, all solutions of $`\delta Q^I+\stackrel{~}{d}R^I=0`$ are trivial in even antighost number, which is the relevant case for us since $`antigh(Q^I)=antigh(P^I)`$. 
There is therefore no way to match the odd antighost number of non-trivial solutions of $`\delta Q^I+\stackrel{~}{d}R^I=0`$ with the even ghost number of $`\omega ^I`$ in order to make a non-trivial $`a_k`$. Thus, $`a_k`$ is trivial and can be removed. The same argument applies then to the successive $`a_{k-1}`$, $`a_{k-2}`$ … and we can conclude that indeed, up to trivial terms, $`a`$ can be taken not to depend on the antifields, $`a=a_0`$. The only consistent interactions in six dimensions do not deform the gauge symmetry and are either strictly gauge-invariant ($`\gamma a_0=0`$), or gauge-invariant up to a total derivative ($`\gamma a_0+db_0=0`$)<sup>1</sup><sup>1</sup>1A similar result holds for non-chiral $`2`$-forms in six dimensions, for which the only symmetry-deforming consistent interactions are the Freedman-Townsend interactions , but these are available in four dimensions only.. Because they do not deform the gauge symmetry, the off-shell gauge-invariant (up to a possible total derivative) interactions are clearly consistent to all orders, so there is no further constraint following from gauge invariance. If one imposes in addition Lorentz invariance, then one gets of course additional restrictions on the gauge-invariant interactions. In the case where the Lagrangian is required to involve only first-order derivatives of the fields, these restrictions are most easily analysed by using the Dirac-Schwinger criterion, which easily leads to the Perry-Schwarz condition on the Hamiltonian in the case of a single field . We have, however, not done it explicitly for a system with many chiral $`2`$-forms<sup>2</sup><sup>2</sup>2The Dirac-Schwinger criterion is also useful for the related problem of manifestly duality-invariant formulations of electromagnetism in $`4`$ dimensions and reproduces there the condition of .. The Dirac-Schwinger criterion also implies consistent gravitational coupling of the chiral $`2`$-forms . 
The present analysis clearly leads to the conclusion that all continuous, local deformations yield abelian algebras. In other words, no local field theory of chiral two-forms continuously connected with the free theory can describe a system of $`n`$ coinciding $`M5`$-branes. This leaves of course the non-local deformations of the abelian theory. Proposals in this direction exist where the two-form is used to construct a connection on the principal bundle based on the space of loops with a common point. However, this approach requires the introduction of a one-form potential which is used to parallel transport the two-form from the common point to some point on the loop. Such a one-form potential doesn’t seem to appear in the M5-system. A way out would be to constrain the potentials to be flat, but even then one finds that also here the algebra remains an abelian one. Finally, three-form field-strengths and their two-form potentials find a natural geometrical setting in the context of gerbes . However, there as well, a non-abelian extension of the two-form gauge-symmetry is still lacking. Acknowledgments: We thank Kostas Skenderis and Jan Troost for discussions. X.B. and M.H. are supported in part by the “Actions de Recherche Concertées” of the “Direction de la Recherche Scientifique - Communauté Française de Belgique”, by IISN - Belgium (convention 4.4505.86) and by Proyectos FONDECYT 1970151 and 7960001 (Chile). A.S. is supported in part by the FWO and by the European Commission TMR programme ERBFMRX-CT96-0045 in which he is associated to K. U. Leuven.
no-problem/9909/cond-mat9909287.html
ar5iv
text
# Temporally disordered granular flow: A model of landslides ## I Introduction Understanding flow in realistic granular materials appears to be an important problem from both practical and theoretical point of view , . Renewed theoretical interest in this field has concentrated on the origin of scaling that characterizes phenomena in slowly driven granular materials: Distributions of avalanches in realistic granular piles , stratification, compactification , etc. The central question is: Do granular piles self-organize into critical steady states and if so, under what conditions? Another interesting phenomenon related to dynamics of granular materials in nature is the landscape evolution due to overland and channel flow, which results in fractal topography. The underlying mechanisms of erosion with spatially and temporally varying erosion rates are the subject of intensive discussion in the literature . It has been understood that realistic flow in slowly driven granular piles depends on many parameters, such as shapes and sizes (and masses) of individual beans, roughness of contact surfaces, their wetting properties, etc. Random (or controlled) variations in some of these parameters lead to fluctuations of contact angles and force distribution , nonlinear friction, stochastic character of diffusion, velocity and convection directions, and fluctuations in angle of repose. Unidirectional flow—reflecting dependence on gravity—is common in all granular materials, as well as occurrence of secondary avalanches following the initial instability. Molecular dynamic (MD) simulations and various cellular automata models with stochastic relaxation rules have been useful in describing certain aspects of realistic granular flow. However, comparison with measured avalanche properties has been only qualitative. 
In experiments the most often measured quantity is the outflow current $`J`$, which is defined as the number of grains that leave the system when an avalanche hits its lower boundary. The probability distribution of outflow current $`P(J)`$ in the steady state obeys the scaling form $`P(J,L)=L^\beta 𝒢(JL^\nu )`$ with $`\beta =2\nu `$ when the linear size $`L`$ of the pile support is varied, as found in Ref. for sandpiles of relatively small sizes. Using silicon dioxide sand Rosendahl et al. concluded that small and large avalanches behave differently and the distribution $`P(J)`$ shows no simple finite-size scaling. Moreover, avalanche statistics was found to vary with the size of grains used. Measuring the internal avalanches Bretz et al. have also observed that two types of statistics are governing small and large avalanches. The measured distribution of avalanche size exhibits a power-law behavior $`D(s)\sim s^{-\tau _s}`$ with $`\tau _s\approx 2.14`$, which probably applies for avalanches of small sizes. The two time (and size) scales were more clearly demonstrated recently by MD simulations , leading to two exponents $`\tau _s=2`$ for short, and $`\tau _s=1.5`$ for long time scale. A sophisticated measurement of the internal avalanches was done with a one-dimensional ricepile , in which elongated rice grains were used to suppress inertial effects. Scaling properties of the distribution of dissipated energy were determined, indicating that details of the dissipation are responsible for the occurrence of the critical state. In another experiment the transport of individual grains was monitored, and the distribution of transit time was also found to exhibit robust scaling behavior . The collected data for the landslides in nature, triggered by various mechanisms, also exhibit a power-law behavior . 
The exponents for the area of slides have been estimated in the range $`\tau _s=1.16`$–$`2.25`$ , depending on the dominating triggering mechanism and region where the data were collected. The distribution of the mass collected from Himalayan sandslides is characterized by the exponent $`\tau _m=0.19`$–$`0.23`$ . In the present work we introduce a new stochastic model of directional flow on the two-dimensional square lattice in which numerous after-avalanches are generated within a certain correlation time due to temporal disorder in the diffusion term. The dynamic rules are a combination of stochastic diffusion and deterministic branching processes. The diffusion probabilities change randomly in time, but are space-independent. Fluctuations in the diffusion probability $`1-\mu (t)`$ around the threshold value $`1-\mu _0`$, which depends on external conditions and thus appears as a control parameter, are motivated by fluctuations in wetting and drying conditions after an avalanche commenced (see Sec. II). Notice that the lifetime of an avalanche can range from seconds in the laboratory granular piles to geological times in the landscape evolution. Therefore, the change of local stability conditions during the avalanche lifetime is a natural choice in the case of long relaxation times. A similar type of disorder in directed percolation processes was recently considered by Jensen . We perform extensive numerical simulations for various values of the parameter $`\mu _0`$ and lattice sizes $`L`$, and quantify the behavior by the landslide distributions of: (i) duration $`t`$—time that an instability lasts measured on the internal time scale; (ii) size $`s`$—area affected by an instability; (iii) mass $`n`$—number of grains that exhibit slides during one avalanche; and (iv) outflow current $`J`$—number of grains that fall off the open boundaries of the pile. 
Self-organized critical states are found for a range of values of the control parameter $`\mu _0\geq \mu _0^{*}\approx 0.4`$, which are characterized with multifractal scaling properties and $`\mu _0`$-dependent critical exponents. For $`\mu _0<\mu _0^{*}`$ large discharging events occur occasionally, representing large-scale erosional reorganization of the system rather than fluctuations around a well defined critical state. The organization of the paper is as follows: In Sec. II we introduce the model and show two representative examples of landslides. The probability distributions of slides and their scaling properties are determined in Sec. III and IV for various values of the linear system size $`L`$ and the parameter $`\mu _0`$ in the scaling region. Sec. V contains a short summary and the discussion of the results. ## II Model and landslides We consider a square lattice oriented downward, with a dynamic variable, height $`h(i,j)`$, associated to each site. The relaxation rules are a combination of (i) stochastic diffusion by two particles when $`h(i,j)\geq h_c`$ with probability $`\mu (t)`$, which varies in time (see below); and (ii) deterministic convection, when the local slope $`\sigma (i,j)\equiv h(i,j)-h(i+1,j_\pm )`$ exceeds some critical value, $`\sigma (i,j)\geq \sigma _c`$. At each site the rule (ii) is applied by toppling one particle along an unstable slope repeatedly until both local slopes drop below $`\sigma _c`$. The system is updated in parallel, which leads to a well defined internal time scale of the relaxation process. The updating is stopped when all affected sites become temporarily stable. Here $`(i+1,j_\pm )`$ are the positions of the two downward neighbors of the site $`(i,j)`$. Mass flow is always downward, however, the instability can propagate backwards both due to the nonlocal slope condition and due to the time-dependent diffusion probability. We assume that the diffusion probability fluctuates stochastically in time, but is space independent. 
Implementation of this rule is done as follows: We preset the threshold value $`\mu _0`$, which is the same for all sites in the system. Then, at each site affected by an avalanche, a new value $`\mu (t)`$ is drawn at each time step, until the avalanche stops, from a set of random numbers evenly distributed on the interval (0,1), and toppling is accepted if $`\mu (t)\leq \mu _0`$, and rejected otherwise . Therefore, for $`\mu _0=1`$ all sites topple (the rule becomes deterministic), whereas for $`\mu _0<1`$ an unstable site might not topple at a given time $`t`$ because of an instantly low diffusion probability $`p(t)\equiv 1-\mu (t)<1-\mu _0`$; however, it may topple at a later time step $`t^{\prime }>t`$ if $`p(t^{\prime })`$ exceeds the threshold diffusion probability $`1-\mu _0`$. This temporally varying disorder mimics changes in sticking properties with time, which then locally influence the angle of repose. This phenomenon can be of interest for the flow of granular materials with large effective friction, such as ricepiles, in which the effects of granular boundaries may depend on the local dynamic variable $`h(i,j)`$ and its derivatives. Therefore the difference $`\mu (t)-\mu _0`$ is a measure of the dynamic friction. Recently proposed models with stochastic critical slope rules in one dimension proved very successful in describing the observed transport properties of ricepiles . For avalanche distributions, however, these models predict universal scaling exponents, in contrast to the experimental observations . Another interesting example is represented by landscape evolution, which can also be considered as a granular flow in which local wetting properties fluctuate in time. By wetting, $`p(t)`$ drops below the threshold diffusion probability $`1-\mu _0`$, the grains stick together, and the system builds up large local slopes. 
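The relaxation rules of Sec. II can be sketched as a small cellular automaton. The Python implementation below is a simplified illustration, not the authors' code: it assumes the downward neighbors are $`(i+1,j)`$ and $`(i+1,j+1)`$, uses open boundaries (grains leaving the lattice are counted as outflow), moves one grain per supercritical slope per parallel sweep, and draws $`\mu (t)`$ per active site as described above.

```python
import random

H_C, SIGMA_C, MU0 = 2, 8, 0.7           # thresholds and control parameter (illustrative)
L = 16                                  # linear lattice size (illustrative)

def neighbors(i, j):
    """Two downward neighbors (i+1, j_+-); an assumed choice of lattice geometry."""
    return [(i + 1, j), (i + 1, j + 1)]

def relax(h):
    """Parallel relaxation after a perturbation; returns (duration, outflow)."""
    duration = outflow = 0
    while True:
        moves = []                      # (source, target, grains), collected in parallel
        for i in range(L):
            for j in range(L):
                # (i) stochastic diffusion: 2 grains leave if h >= h_c and mu(t) <= mu_0
                if h[i][j] >= H_C and random.random() <= MU0:
                    for n in neighbors(i, j):
                        moves.append(((i, j), n, 1))
                # (ii) deterministic convection along supercritical slopes
                for ni, nj in neighbors(i, j):
                    hn = h[ni][nj] if ni < L and nj < L else 0
                    if h[i][j] - hn >= SIGMA_C:
                        moves.append(((i, j), (ni, nj), 1))
        if not moves:                   # all affected sites are temporarily stable
            return duration, outflow
        for (i, j), (ni, nj), g in moves:
            h[i][j] -= g
            if ni < L and nj < L:
                h[ni][nj] += g
            else:
                outflow += g            # grain leaves through an open boundary
        duration += 1

random.seed(1)
h = [[0] * L for _ in range(L)]
outflows = []
for _ in range(500):                    # slow drive: one grain at a random top-row site
    h[0][random.randrange(L)] += 1
    duration, out = relax(h)
    outflows.append(out)

# at the end of each relaxation no slope exceeds the critical one
max_slope = max(h[i][j] - (h[i + 1][j] if i + 1 < L else 0)
                for i in range(L) for j in range(L))
print(max_slope < SIGMA_C)              # True
```

The recorded `outflows` play the role of the outflow current $`J`$; histogramming them over many drives is how the distributions of Sec. III would be accumulated in this sketch.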
At a later time $`t^{}`$ these slopes may become unstable, either when, due to drying, $`p(t^{})`$ exceeds the threshold, or when the slopes become larger than critical. Two different classes of triggering mechanisms of landslides have been discussed in the literature : rainfall and water level, which control soil moisture, on one side, and ground motion, which leads to slope variations, on the other. The values of the measured exponents of landslide distributions are directly related to the locally prevailing triggering mechanism . In principle, the threshold shear stress may depend on the slope angle and on soil properties, which are influenced by soil moisture. We assume that these two mechanisms are related dynamically. In the present model both mechanisms are effective: the soil moisture, which affects the local height, varies stochastically in time at each site, whereas we assume that the shear stress threshold depends only on the local angle and thus remains deterministic. Moreover, by tuning the critical height mechanism via the parameter $`\mu _0`$, we find nonuniversal critical properties and a transition to noncritical dynamic states, in qualitative agreement with experimental observations. A different model of landslides is obtained by "averaging out" the critical height mechanism and assuming stochastic variations of the critical slope, which can be viewed as one of a few possible generalizations of stochastic critical slope models to two dimensions. So far, results for two-dimensional stochastic critical slope models are not available in the literature . The system is perturbed by adding grains one at a time at a random site on the first row, thus increasing the local height and slopes. Therefore, an instability (avalanche) can in principle start only from the top; however, secondary avalanches can commence from any affected site in the system, triggered either by a high instantaneous value of $`\mu (t)`$ or by a supercritical slope.
In order to have "clean" statistics, we start each avalanche from the top row and consider only those secondary avalanches which are spatially connected within a certain correlation time $`t_c`$. Here $`t_c`$ is not a prefixed parameter, but is determined by the relaxation process itself. Typically $`t_c`$ is set by the lifetime of the instability, thus $`t_c\gg 1`$ for large relaxation events. There are two interesting limits of our model. In the limit $`\mu _0=1`$ it reduces to the deterministic directed model , whereas for $`\mu _0<1`$, in the limit when the correlation time is strictly equal to one, it reduces to the model considered in Ref. . The temporally varying diffusion probability is a new ingredient of our model, which was not considered so far in cellular automaton models of granular flow. It appears to be responsible both for the new scaling properties and for the transition into the state dominated by large erosional avalanches. Fig. 1 shows two examples of simulated landslides: one with multiple topplings due to secondary avalanches up to fourth degree in the scaling region (top), and a large erosional event (bottom). ## III Probability distributions of slides and their scaling properties In this section we present results of numerical simulations of avalanche statistics. As discussed in Sec. I, a landslide consists of many interpenetrating avalanches of different degree, which are spatially connected to one another within the lifetime of the instability. For concreteness, the probability distributions are determined for the whole relaxation event, which we refer to interchangeably as an avalanche or a landslide. We apply open boundary conditions in the perpendicular direction (see also an example below where periodic boundaries have been used). In most simulations we used $`h_c=2`$ and $`\sigma _c=8`$.
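A relaxation event is driven sweep by sweep until all sites are temporarily stable; a generic driver recording the duration (on the internal time scale) and the size (total number of topplings) of one avalanche could look as follows. The `step` callable is a hypothetical interface standing in for whatever parallel sweep routine the model uses.

```python
def measure_avalanche(step, max_sweeps=10**6):
    """Drive one relaxation event to completion.

    `step` is any callable performing one parallel sweep and returning
    the number of topplings in that sweep (0 once all sites are stable).
    Returns (duration, size): duration counts sweeps on the internal
    time scale, size counts topplings; max_sweeps is a safety cap.
    """
    duration = 0
    size = 0
    for _ in range(max_sweeps):
        topplings = step()
        if topplings == 0:
            break
        duration += 1
        size += topplings
    return duration, size
```

Accumulating `(duration, size)` pairs over many added grains yields the samples behind the distributions $`P(t)`$ and $`D(s)`$ discussed below.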
By varying the external parameter $`\mu _0`$ between 0 and 1 and the lattice size $`L`$ between 12 and 192, we determine the distributions of size, mass and duration of avalanches (slides). In Figs. 2 and 3 the distributions of avalanche duration longer than $`t`$, $`P(t)`$, size larger than $`s`$, $`D(s)`$, and mass larger than $`n`$, $`D(n)`$, are shown for $`L=128`$ and various values of the parameter $`\mu _0`$. (Notice that in the deterministic limit $`\mu _0=1`$ the distributions $`D(s)`$ and $`D(n)`$ become identical; however, the unbounded number of topplings at each site for $`\mu _0<1`$ leads to two distinct distributions.) For $`\mu _0<1`$ a characteristic behavior with two scales appears: a steep section corresponding to small avalanches, and a flat section corresponding to large avalanches. The crossover length between small and large relaxation events varies with $`\mu _0`$; however, it remains small (cf. Figs. 2 and 3), so that the distributions of avalanches smaller than the crossover length extend over only one decade. Here we concentrate on the behavior of large avalanches (i.e., avalanches larger than the crossover length). On lowering $`\mu _0`$ a large number of secondary instabilities develop, leading to a flattening of the distributions. Nevertheless, we find power-law behavior, $`P(t)\sim t^{1-\tau _t}`$, $`D(s)\sim s^{1-\tau _s}`$, and $`D(n)\sim n^{1-\tau _n}`$, as long as $`\mu _0\ge 0.4`$. The exponents $`\tau _t`$, $`\tau _s`$ and $`\tau _n`$ appear to vary continuously with the control parameter $`\mu _0`$, as shown in the insets to Figs. 2 and 3. The character of the dynamics changes below $`\mu _0^{}\approx 0.4`$, where only occasionally very large avalanches occur. We study the relaxation clusters at $`\mu _0=0.4`$ in some more detail. Numerical values of the exponents are $`\tau _t=1.253`$, $`\tau _s=1.202`$, and $`\tau _n=1.190`$ for the distributions of duration, size, and mass of avalanches, respectively.
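The cumulative distributions and their exponents can be estimated from a sample of avalanches in a few lines. The least-squares log-log slope below is only a rough illustration; a careful analysis would restrict the fit to the region above the crossover length.

```python
import math

def cumulative_distribution(samples):
    """Empirical probability P(X > x), evaluated at each distinct x."""
    xs = sorted(set(samples))
    n = len(samples)
    return [(x, sum(1 for v in samples if v > x) / n) for x in xs]

def powerlaw_slope(points):
    """Least-squares slope of log P versus log x over points with P > 0.

    For a cumulative distribution P(t) ~ t^(1 - tau_t), the fitted
    slope estimates 1 - tau_t."""
    data = [(math.log(x), math.log(p)) for x, p in points if x > 0 and p > 0]
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    num = sum((x - mx) * (y - my) for x, y in data)
    den = sum((x - mx) ** 2 for x, _ in data)
    return num / den
```

With duration samples, `1 - powerlaw_slope(...)` then gives an estimate of $`\tau _t`$, and likewise for size and mass.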
In addition, we have measured the distribution of the linear elongation of avalanches in the direction of transport, $`P(\ell )\sim \ell ^{-\tau _\ell }`$, the mass-to-scale ratio with respect to the parallel length, $`s_\ell \sim \ell ^{D_{\parallel }}`$, and the average transverse extent, $`\ell _{\perp }\sim \ell ^\zeta `$. We find $`\tau _\ell =1.578`$, $`D_{\parallel }=1.572`$, and $`\zeta =D_{\parallel }-1=0.572`$ (estimated error bars $`\pm 0.03`$). These values are close to the numerical values of the exponents in the parity-conserving universality class of branching processes. On the other hand, the exponents governing small events increase with decreasing $`\mu _0`$ (cf. Fig. 2), reaching the values $`\tau _t^s=1.92`$, $`\tau _s^s=1.67`$, and $`\tau _n^s=1.45`$ for the duration, size, and mass of small avalanches, respectively, at $`\mu _0=\mu _0^{}`$. Notice that although the scale of the distributions is small, being bounded by the crossover length, these values of the exponents indicate closeness to the mean-field universality class. ### A Multifractal scaling properties of landslide distributions By varying the lattice size $`L`$ with $`\mu _0`$ fixed in the scaling region, we study finite-size effects on the distributions of avalanches. In contrast to most two-dimensional sandpile automata models in the literature, the present distributions do not obey simple finite-size scaling. Instead, we find that different regions of a large avalanche have different fractal properties and consequently their own exponents. The following multifractal scaling form $$P(X,L)\sim (L/L_0)^{\varphi _X(\alpha _X)},$$ (1) with $$\alpha _X\equiv \left(\mathrm{log}\frac{X}{X_0}\right)/\left(\mathrm{log}\frac{L}{L_0}\right),$$ (2) fits our data well, with $`L_0=1/4`$ and $`X_0=1/4`$. (Here $`X\in \{t,s,J\}`$.) In Figs. 4 and 5 we show the probability distributions of duration and size, respectively, for five different lattice sizes $`L`$ and for fixed $`\mu _0=0.7`$ .
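Given measured distributions at several lattice sizes, the scaling form of Eqs. (1) and (2) amounts to mapping every point $`(X,P(X,L))`$ onto the pair $`(\alpha _X,\varphi _X)`$; if the form holds, points from all $`L`$ collapse onto a single spectral curve. A minimal sketch of this data collapse, assuming cumulative distributions as input:

```python
import math

def spectral_points(distributions, L0=0.25, X0=0.25):
    """Map distributions P(X, L) for several sizes L onto the spectrum
    of Eqs. (1)-(2):

        alpha_X = log(X / X0) / log(L / L0),
        phi_X   = log P      / log(L / L0).

    `distributions` maps L -> list of (X, P) pairs; returns (alpha, phi)
    pairs, which should collapse onto one curve if the scaling holds."""
    pts = []
    for L, dist in distributions.items():
        logL = math.log(L / L0)
        for X, P in dist:
            if X > 0 and P > 0:
                pts.append((math.log(X / X0) / logL, math.log(P) / logL))
    return pts
```

Plotting `phi` against `alpha` for all sizes at once reproduces the spectral-function insets described below.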
The corresponding spectral functions $`\varphi _t(\alpha _t)`$ vs. $`\alpha _t`$ and $`\varphi _s(\alpha _s)`$ vs. $`\alpha _s`$ are shown in the insets to Figs. 4 and 5. ## IV Outflow current The outflow current results only from those avalanches that reach an open boundary of the system. The size of such events and their frequency is a relative measure of the transport processes occurring in the interior of the pile. The outflow current is easy to measure both in laboratory experiments and in natural landslides. For instance, the width of the sedimented layers of granular material that occur below steep sections in mountains is directly related to the size of the outflow avalanches from those sections. The sensitivity of the outflow current distribution $`P(J)`$ to variations in the control parameter is monitored in our model for $`L=48`$ with periodic boundary conditions in the perpendicular direction. In Fig. 6 we show the distribution $`P(J)`$ vs. $`J`$ for $`\mu _0=`$ 1, 0.8, 0.6, 0.4 and 0.2 . Once again, the change in the character of the dynamics below $`\mu _0^{}`$ is also seen in the outflow current, which becomes centered around a certain mean value (depending on the lattice size). Above $`\mu _0^{}`$, we find that the outflow current distribution exhibits multifractal scaling properties according to Eqs. (1) and (2). The results for $`\mu _0=0.7`$ and $`L`$ varying from 12 to 192, obtained with open boundary conditions in the perpendicular direction, are shown in Fig. 7. Additional information about transport processes in the interior of the system is obtained by measuring the outflow current as a function of time, and the time intervals between successive outflow events. In the inset to Fig. 6 we show the average time interval between outflow events as a function of the control parameter $`\mu _0`$. The time intervals grow exponentially on lowering $`\mu _0`$. In Fig.
8 the outflow current is shown as a function of time (measured on the external time scale, i.e., by the number of added particles), averaged over 1000 time steps, for $`L=54`$ and with periodic perpendicular boundary conditions. For $`\mu _0\ge 0.4`$ (cf. the lower three panels), the outflow current fluctuates around the mean value $`J_0=1`$, thus balancing the input current and maintaining the steady states of the system (a steady state is characterized by a balance between input and output currents). The amplitude of the outflow events increases with decreasing $`\mu _0`$, while the frequency of events decreases. This behavior is consistent with the histogram shown in Fig. 6. The character of the dynamics changes for $`\mu _0<\mu _0^{}`$ (see the top panel in Fig. 8), with dominating output events of large size and long time intervals between the events. At $`\mu _0=\mu _0^{}`$ a dynamic phase transition occurs between critical steady states above $`\mu _0^{}`$ and states without long-range correlations below $`\mu _0^{}`$. (Similar phase transitions are also found in Refs. and , however in different universality classes.) Although for $`\mu _0<\mu _0^{}`$ the system is likely to build up a finite slope (unlimited piling is prevented by the deterministic critical slope rule), preliminary results show that a substantial growth of the average slope occurs only for $`\mu _0<0.2`$, reaching the value $`\sigma _c`$ as $`\mu _0\to 0`$. Further work is necessary in order to investigate the universality class of this phase transition. ## V Discussion and Conclusions In the present model, combined relaxation rules with temporal disorder are responsible for numerous after-avalanches, which lead to large relaxation events resembling sandslides in realistic granular materials. Numerical simulations show that such large relaxation events exhibit scaling behavior for a range of values of the control parameter $`\mu _0\ge \mu _0^{}\approx 0.4`$.
The avalanche distributions are characterized by continuously varying scaling exponents, in qualitative agreement with data collected from natural landslides. Moreover, the comparison of the exponent of the avalanche mass distribution $`\tau _n`$ for $`0.4<\mu _0<0.5`$ with the one that characterizes the Himalayan sandslides reported in Ref. is satisfactory. For various lattice sizes the distributions are characterized by multifractal rather than finite-size scaling properties. The deterministic part of the relaxation rules leads to branching processes with, on average, an even number of offspring. For this reason the scaling exponents of the distributions reach numerical values characteristic of modulo-two conserving processes (also known as parity-conserving processes) before scaling behavior disappears at $`\mu _0=\mu _0^{}`$. Below $`\mu _0^{}`$ the critical steady state is lost. The dynamics is dominated by large erosional avalanches in a region close to $`\mu _0^{}`$, and a net average slope appears for smaller values of $`\mu _0`$. ## Acknowledgments This work was supported by the Ministry of Science and Technology of the Republic of Slovenia. I would like to thank R. Pastor-Satorras, A. Corral, and D.L. Turcotte for helpful discussions.
# UTCCP-P-75 September 1999 Equation of state in finite-temperature QCD with improved Wilson quarks presented by S. Ejiri ## 1 Introduction The transition temperature and the equation of state (EOS) of QCD at finite temperature belong to the most basic information for understanding the early Universe and heavy ion collisions. Full QCD studies of these quantities have been made mainly with Kogut-Susskind quarks, particularly for the EOS . In this paper we present the first result for the EOS from Wilson-type quarks. We study two-flavor QCD on $`N_t=4`$ lattices. In order to suppress lattice artifacts, which are known to be severe for the combination of the plaquette gauge and Wilson quark actions, we adopt a renormalization-group (RG) improved gauge action combined with a meanfield-improved clover quark action. See Ref. for details of our action. In Fig. 1 we show the phase diagram with our action at $`N_t=4`$. The line of the finite-temperature transition is determined by the Polyakov loop and its susceptibility. The parity-broken phase is not yet identified. Dashed lines are used in a test discussed in Sec. 2. ## 2 Equation of state We compute the pressure $`p`$ by the integral method , which is based on the formula, valid for large homogeneous systems, $$\frac{p}{T^4}=\frac{N_t^3}{N_s^3}\int ^{(\beta ,K)}d\xi \left\{\left\langle \frac{\partial S}{\partial \xi }\right\rangle -\left\langle \frac{\partial S}{\partial \xi }\right\rangle _{T=0}\right\}.$$ (1) The integration path in the parameter space $`(\beta ,K)`$ should start from a point in the low-temperature phase where the integrand approximately vanishes. We evaluate the quark contributions to the derivatives $`\frac{\partial S}{\partial \beta }`$ and $`\frac{\partial S}{\partial K}`$ by the method of noisy sources, using U(1) noise vectors. The value of the pressure computed from (1) should be independent of the choice of the integration path. To check this point, we make a series of test runs on $`8^3\times 4`$ and $`8^4`$ lattices along the three paths shown in Fig. 1, generating 500 HMC trajectories at each point.
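Numerically, Eq. (1) is a one-dimensional integral of the difference between the derivatives measured on the finite-temperature and zero-temperature lattices along the chosen path. A minimal sketch of the $`K`$-direction integration with the trapezoidal rule; the function name and the $`N_t`$, $`N_s`$ defaults are illustrative, not taken from the paper's code.

```python
def pressure_over_T4(kappas, dSdK_finiteT, dSdK_zeroT, Nt=4, Ns=16):
    """Integral-method estimate of p/T^4 along a path in the K direction.

    `kappas` are the simulation points (ascending); the two lists hold
    the measured <dS/dK> on the finite-T and T=0 lattices, the latter
    subtracting the vacuum contribution as in Eq. (1).  Trapezoidal
    rule; the path must start where the integrand vanishes."""
    integrand = [a - b for a, b in zip(dSdK_finiteT, dSdK_zeroT)]
    integral = 0.0
    for i in range(len(kappas) - 1):
        dk = kappas[i + 1] - kappas[i]
        integral += 0.5 * (integrand[i] + integrand[i + 1]) * dk
    return (Nt ** 3 / Ns ** 3) * integral
```

Running the same routine along different paths ending at the same $`(\beta ,K)`$ is exactly the path-independence test described above.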
The results for $`p/T^4`$ obtained from these paths are plotted in Fig. 2. We find that $`p/T^4`$ at $`(\beta ,K)=(2.1,0.13)`$ and $`(2.2,0.13)`$ in the two figures are in good agreement, confirming the path independence of the integral. Encouraged by this result, we perform production runs on $`16^3\times 4`$ and $`16^4`$ lattices. At each of the dots plotted in Fig. 1, we generate 500–2000 trajectories on the $`16^3\times 4`$ lattice and 200–300 trajectories on the $`16^4`$ lattice. Measurement of the derivatives is made at every trajectory. Hadron propagators are calculated at every fifth trajectory to compute pseudoscalar ($`m_{\mathrm{PS}}`$) and vector ($`m_\mathrm{V}`$) meson masses. As seen in Fig. 2, paths along the $`K`$-direction give much smaller errors in $`p/T^4`$ than paths in the $`\beta `$-direction. Therefore, we carry out the integral in the $`K`$-direction. We then obtain the pressure plotted in Fig. 3 as a function of the mass ratio $`m_{\mathrm{PS}}/m_\mathrm{V}`$ at zero temperature. Interpolating these data, we find $`p/T^4`$ for each value of $`m_{\mathrm{PS}}/m_\mathrm{V}`$. Figure 4 shows the pressure as a function of $`T/T_{pc}`$ at fixed $`m_{\mathrm{PS}}/m_\mathrm{V}`$. Here $`T_{pc}`$ is the pseudo-critical temperature at the same value of $`m_{\mathrm{PS}}/m_\mathrm{V}`$. The temperature scale is set by $`m_\mathrm{V}`$ through $`T/T_{pc}=m_\mathrm{V}(\beta _{pc})/m_\mathrm{V}(\beta )`$, with $`\beta _{pc}`$ the pseudo-critical coupling. We find that the pressure at fixed $`T/T_{pc}`$ depends only weakly on the quark mass, even for relatively heavy quarks in the range $`m_{\mathrm{PS}}/m_\mathrm{V}=0.7`$–0.8. For heavier quark masses, the pressure decreases toward the pure gauge value (dashed line), as expected. We also note that the magnitude of the pressure is much larger than that for the pure gauge system at $`N_t=4`$, and that it overshoots the continuum Stefan-Boltzmann value at high temperatures.
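The temperature scale used in Fig. 4 follows directly from the fact that, at fixed $`N_t`$, $`T=1/(N_ta)`$ and the lattice spacing $`a`$ is set by the vector meson mass. A tiny sketch; the callable `mv_of_beta` is an assumed interface standing in for an interpolation of the measured zero-temperature masses.

```python
def t_over_tpc(beta, beta_pc, mv_of_beta):
    """Temperature in units of the pseudo-critical temperature:
    T/T_pc = m_V(beta_pc) / m_V(beta), since T = 1/(N_t a) at fixed N_t
    and a is set by the vector meson mass in lattice units."""
    return mv_of_beta(beta_pc) / mv_of_beta(beta)
```

Since $`m_\mathrm{V}a`$ decreases as $`\beta `$ grows (the lattice gets finer), couplings above $`\beta _{pc}`$ map to $`T/T_{pc}>1`$, as in Fig. 4.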
These features are probably the result of large discretization errors from the clover quark action used here . Indeed, the large Stefan-Boltzmann value on an $`N_t=4`$ lattice, shown at the top right in Fig. 4, is dominated by the quark contribution. ## 3 O(4) Scaling The chiral phase transition of two-flavor QCD is expected to belong to the universality class of the O(4) spin system in three dimensions. In particular, identifying the magnetization, external magnetic field, and reduced temperature of the spin model with $`M=\langle \overline{\mathrm{\Psi }}\mathrm{\Psi }\rangle `$, $`h=2m_qa`$, and $`t=\beta -\beta _{ct}`$, where $`\beta _{ct}`$ is the chiral transition point, we expect the scaling relation $`M/h^{1/\delta }=f(t/h^{1/\beta \delta })`$ (2) to hold, with $`f(x)`$ the O(4) scaling function and $`\beta `$ and $`\delta `$ the O(4) critical exponents . A previous study, using the RG-improved gauge action and the unimproved Wilson quark action, found this relation to be well satisfied for the quark mass and the chiral condensate defined by axial Ward identities . Figure 5 shows the result of a similar analysis from the present work. Data for $`\langle \overline{\mathrm{\Psi }}\mathrm{\Psi }\rangle _{\mathrm{sub}}=2m_qa(2K)^2\sum _x\langle \pi (x)\pi (0)\rangle `$ are fitted to the scaling relation, adjusting $`\beta _{ct}`$ and the scales for $`t`$ and $`h`$. The scaling ansatz works well, yielding $`\beta _{ct}=1.47(7)`$ for the best fit to data with $`2m_qa<0.9`$ and $`\beta \ge 1.95`$, with $`\chi ^2/\mathrm{df}=1.1`$. This work is in part supported by the Grants-in-Aid of the Ministry of Education, Science and Culture (Nos. 09304029, 10640246, 10640248, 10740107, 11640250, 11640294, 11740162). SE, KN and M. Okamoto are JSPS Research Fellows. AAK and TM are supported by the Research for the Future Program of JSPS.
# Explaining the Forward Interest Rate Term Structure ## 1 Introduction The search for more adequate statistical models of the forward interest rate curve is essential both for risk control purposes and for a better pricing and hedging of interest rate derivative products . A large number of models have been proposed, but it is the Heath-Jarrow-Morton (hjm) model that has become widely accepted as the most appropriate framework for addressing these issues. This model has been the basis for a large amount of research in relation to the pricing and hedging of derivative products. However, comparatively little of it has addressed how well this model describes empirical properties of the forward rate curve (frc). In a previous paper , a series of observations concerning the U.S. frc in the period 1991-96 were reported, which were in disagreement with the predictions of the standard models. These observations motivated a new interpretation of frc dynamics. * First, the average shape of the frc is well fitted by a square-root law as a function of maturity, with a prefactor very close to the spot rate volatility. This strongly suggests that the forward rate curve is calculated by the money lenders using a Value-at-Risk (VaR) like procedure, and not, as assumed in standard models, through an averaging procedure. More precisely, since the forward rate $`f(t,\theta )`$ is the agreed value at time $`t`$ of what will be the value of the spot rate at time $`t+\theta `$, a VaR-pricing amounts to writing: $$_{f(t,\theta )}^{\mathrm{}}𝑑r^{}P_M(r^{},t+\theta |r,t)=p,$$ (1.1) where $`r(t)`$ is the value of the spot rate at time $`t`$ and $`P_M`$ is the market implied probability of the future spot rate at time $`t+\theta `$. The value of $`p`$ is a constant describing the risk-averseness of money lenders. The risk is that the spot rate at time $`t+\theta `$, $`r(t+\theta )`$, turns out to be larger than the agreed rate $`f(t,\theta )`$.
This probability is equal to $`p`$ within the above VaR pricing procedure. If $`r(t)`$ performs a simple unbiased random walk, then Eq. (1.1) indeed leads to $`f(t,\theta )=r(t)+A(p)\sigma _r\sqrt{\theta }`$, where $`\sigma _r`$ is the spot rate volatility and $`A(p)`$ is some function of $`p`$. * Second, the volatility of the forward rate is found to be 'humped' around $`\theta =1`$ year. This can be interpreted within the above VaR pricing procedure as resulting from a time-dependent anticipated trend. Within a VaR-like pricing, the frc is the envelope of the future anticipated evolution of the spot rate. On average, this evolution is unbiased, and the average frc is a simple square-root. However, at any particular time $`t`$, the market actually anticipates a future trend. It was argued in that this trend is determined by the past historical trend of the spot rate itself over a certain time horizon. In other words, the market looks at the past and extrapolates the observed trend into the future. This means that the probability distribution of the spot, $`P_M(r^{},t+\theta |r,t)`$, is not centered around $`r`$ but includes a maturity-dependent bias whose magnitude depends on the historical spot trend. However, the market also knows that its estimate of the trend will not persist in the long run. The magnitude of this bias effect is expected to peak at a certain maturity, and this can explain the volatility hump. The aim of this paper is twofold. First we wish to empirically test the new interpretation of the frc dynamics outlined above. Specifically, we report measurements, over several different data-sets, of the shape of the average frc and of the correlation between the instantaneous frc and the past spot trend over a certain time horizon. We have investigated the empirical behaviour of the frc of four different currencies (usd, dem, gbp and aud), in the period 1987-1999 for the usd and 1994-1999 for the other currencies.
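Under the simplest concrete assumption — a Gaussian market-implied distribution $`P_M`$ centred on the current spot with width $`\sigma _r\sqrt{\theta }`$, i.e. an unbiased random walk — Eq. (1.1) can be solved in closed form, with $`A(p)`$ the inverse standard normal cdf. A sketch (the Gaussian choice is this sketch's assumption, not the paper's general claim):

```python
import math
from statistics import NormalDist

def var_forward_rate(r, theta, sigma_r, p):
    """Forward rate from the VaR condition Eq. (1.1) for a Gaussian P_M.

    The probability that r(t + theta) exceeds f equals p, so
    f = r + A(p) * sigma_r * sqrt(theta),  A(p) = Phi^{-1}(1 - p),
    where Phi is the standard normal cdf."""
    A = NormalDist().inv_cdf(1.0 - p)
    return r + A * sigma_r * math.sqrt(theta)
```

For small risk tolerance $`p`$, $`A(p)`$ is large and positive, so the agreed forward rate sits well above the current spot, growing as $`\sqrt{\theta }`$ along the curve.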
Full report of the results can be found in . Here we only present detailed results for the usd 94-99, but we also discuss relevant results obtained with the other data-sets. Second, for usd 94-99, we wish to compare these empirical results with the predictions of the one-factor Gaussian hjm model fitted to the empirical volatility. ## 2 Empirical Results Our study is based on data sets of daily prices of futures contracts on 3-month forward interest rates. In the usd case the contract was the Eurodollar CME-IMM contract. In practice, the futures markets price three-month forward rates for fixed expiration dates, separated by three-month intervals. Identifying three-month futures rates with instantaneous forward rates (the difference is not important here), we have available time series on forward rates $`f(t,T_i-t)`$, where $`T_i`$ are fixed dates (March, June, September and December of each year), which we have converted into fixed-maturity (multiple of three months) forward rates by a simple linear interpolation between the two nearest points such that $`T_i-t\le \theta \le T_{i+1}-t`$. In our notation we will identify $`f(t,\theta )`$ as the forward rate with fixed maturity $`\theta `$. This corresponds to the Musiela parameterization. The shortest available maturity is $`\theta _{\mathrm{min}}=3`$ months, and we identify $`f(t,\theta _{\mathrm{min}})`$ with the spot rate $`r(t)`$. For the usd 94-99 data-set discussed here, we had 38 maturities, with the maximum maturity being 9.5 years. We will define the 'partial' spread $`s(t,\theta )`$ as the difference between the forward rate of maturity $`\theta `$ and the spot rate: $`s(t,\theta )=f(t,\theta )-r(t)`$. The theoretical time average of $`O(t)`$ will be denoted as $`\langle O(t)\rangle `$. We will refer to empirical averages (over a finite data set) as $`\langle O(t)\rangle _e`$. For infinite data sets the two averages coincide.
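The conversion from fixed-expiry quotes $`f(t,T_i-t)`$ to a fixed-maturity rate $`f(t,\theta )`$ is plain linear interpolation between the two nearest quoted points; a sketch:

```python
def fixed_maturity_rate(t, theta, expiries, rates):
    """Linear interpolation of fixed-expiry forward rates to a fixed
    maturity theta: find the bracket T_i - t <= theta <= T_{i+1} - t
    and interpolate between the two quotes.

    `expiries` are the T_i in ascending order, `rates` the quotes
    f(t, T_i - t); t, theta and expiries share one time unit."""
    taus = [T - t for T in expiries]
    for i in range(len(taus) - 1):
        if taus[i] <= theta <= taus[i + 1]:
            w = (theta - taus[i]) / (taus[i + 1] - taus[i])
            return (1 - w) * rates[i] + w * rates[i + 1]
    raise ValueError("maturity outside the quoted range")
```

Repeating this for every trading day $`t`$ at a grid of maturities $`\theta `$ (multiples of three months) builds the fixed-maturity series $`f(t,\theta )`$ used throughout.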
First we consider the average frc, which can be obtained from empirical data by averaging the partial spread $`s(t,\theta )`$: $$\langle s(t,\theta )\rangle _e=\langle f(t,\theta )-r(t)\rangle _e.$$ (2.1) In Figure 1 we show the average frc $`\langle s(t,\theta )\rangle _e`$, along with the following best fit: $$\langle s(t,\theta )\rangle _e=a\left(\sqrt{\theta }-\sqrt{\theta _{\mathrm{min}}}\right).$$ (2.2) As first noticed in , the average curve can be quite satisfactorily fitted by a simple square-root law. The corresponding value of $`a`$ (in $`\%`$ per $`\sqrt{\text{day}}`$) is 0.049, which is very close to the daily spot volatility 0.047 (which we shall denote by $`\sigma _r`$). We have found precisely the same qualitative behaviour for our 12-year usd data-set and also for the gbp and aud. The only exception was the steep dem average frc, which can be explained by its low average spot level . We have therefore greatly strengthened – with much more empirical data – the proposal of ref. that the frc is on average fixed by a VaR-like procedure, specified by Eq. (1.1) above. In figure 2 we show the empirical volatility for the usd, defined as: $$\sigma (\theta )=\sqrt{\langle \mathrm{\Delta }f^2(t,\theta )\rangle _e},\sigma (\theta _{\mathrm{min}})\equiv \sigma _r,$$ (2.3) where $`\mathrm{\Delta }f(t,\theta )`$ denotes the daily increment in the forward rates. We see a strong peak in the volatility at 1 year . For all the data-sets we have studied, the volatility shows a steep initial rise between the spot rate and 6-9 months forward . We also show the fit of the function: $$\sigma (\theta )=0.061-0.014\mathrm{exp}\left(-1.55(\theta -\theta _{\mathrm{min}})\right)+0.074(\theta -\theta _{\mathrm{min}})\mathrm{exp}\left(-1.55(\theta -\theta _{\mathrm{min}})\right).$$ (2.4) It is not a priori clear why the frc volatility should universally be strongly increasing for the first few maturities. This is actually in stark contrast to the Vasicek model, where the volatility decays exponentially with maturity.
We will see that this universal feature is naturally explained by the anticipated trend proposal. We have studied the frc 'deformation', determined empirically by: $$y(t,\theta )=f(t,\theta )-r(t)-\langle s(t,\theta )\rangle _e.$$ (2.5) By construction the deformation process vanishes at $`\theta _{\mathrm{min}}`$ and has zero mean. For the first few maturities we have observed that this quantity is strongly correlated with the past trend of the spot. Therefore, in accordance with the anticipated trend proposal, we consider the following simple one-factor model: $$f(t,\theta )=r(t)+\langle s(t,\theta )\rangle +\mathcal{A}(\theta )b(t).$$ (2.6) The function $`b(t)`$ is the 'anticipated trend', which by construction has zero mean. One of the main proposals of was that the anticipated trend reflects the past trend of the spot rate. In other words, the market extrapolates the observed past behaviour of the spot to the nearby future. Here we consider a trend of the form: $$b(t)=\int _{-\mathrm{\infty }}^te^{-\lambda _b(t-t^{})}𝑑r(t^{}),$$ (2.7) which corresponds to an exponential cut-off in the past and is equivalent to an Ornstein-Uhlenbeck process for $`b(t)`$. We have also considered a simple flat window cut-off in . We choose here to calibrate $`\mathcal{A}(\theta )`$ to the volatility. Neglecting the contribution of all drifts, we find from Eqs. (2.6) and (2.7) that the two are related simply by: $$\sigma (\theta )=\sigma _r\left[1+\mathcal{A}(\theta )\right].$$ (2.8) In accordance with the observed short-end behaviour of the frc volatility, we require $`\mathcal{A}(\theta )`$ to be positive and strongly increasing for the first few maturities. In our interpretation of the short end of the frc, as described quantitatively by Eqs. (2.6)-(2.8), this universal feature is a consequence of the market's extrapolation of the spot trend into the future. To determine the parameter $`\lambda _b`$ in Eq.
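On daily data, the exponential kernel of Eq. (2.7) discretises to an exponentially weighted running sum of past spot increments; a sketch ($`\lambda _b`$ in units of inverse trading days):

```python
import math

def anticipated_trend(spot_increments, lam):
    """Discrete version of Eq. (2.7): the anticipated trend is an
    exponentially weighted sum of past daily spot increments dr,

        b_t = exp(-lam) * b_{t-1} + dr_t,

    so an increment a time dt in the past enters with weight
    exp(-lam * dt).  Returns the whole series b_t."""
    b = 0.0
    out = []
    for dr in spot_increments:
        b = math.exp(-lam) * b + dr
        out.append(b)
    return out
```

This is the standard recursive form of an exponential moving sum: one multiply and one add per day, with memory time scale $`1/\lambda _b`$ (of order 100 trading days, as found below).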
(2.7), we propose to measure the following average error: $$E=\sqrt{\langle \left(y(t,\theta )-\mathcal{A}(\theta )b(t)\right)^2\rangle }.$$ (2.9) To measure $`E`$, we must first extract the empirical deformation $`y(t,\theta )`$ using Eq. (2.5). We then determine $`b(t)`$ using the empirical spot time series and Eq. (2.7).<sup>1</sup><sup>1</sup>1In this empirical determination of $`b(t)`$ we actually use detrended spot increments, defined as $`d\widehat{r}(t)=dr(t)-\langle dr\rangle _e`$. The error $`E`$ will have a minimum for some $`\lambda _b`$. This is the time scale where the deformation and the anticipated trend match up best, thereby fixing the value of $`\lambda _b`$. <sup>2</sup><sup>2</sup>2Note that $`E`$ is also simply the average error between the empirical forward rates and the model forward rates as given by Eq. (2.6). In Figure 3 we plot the error $`E`$ against the parameter $`\lambda _b^{-1}`$ used in the simulation of $`b(t)`$. We consider $`\theta =`$ 6 months, which is the first maturity beyond the spot rate. We see a clear minimum, demonstrating a strong correlation between the deformation and the anticipated trend. For a flat window model the minimum is even more pronounced . These results indicate the clear presence of a dynamical time scale of around $`100`$ trading days. We have observed that the time scale obtained is independent of the maturity used . In Figure 4 we plot the empirical deformation against $`\mathcal{A}(\theta )b(t)`$, where we have set $`\lambda _b^{-1}=100`$ trading days. Indeed, we visually confirm a very close correlation. Here we have restricted ourselves to a one-factor model for ease of presentation. In we consider two- and three-factor versions of our model, where the definition of the deformation now includes the subtraction of a long spread component. In this case we observe improved and very striking correlations that persist even up to 2 years forward of the spot!
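The calibration of $`\lambda _b`$ then amounts to scanning a grid of values and locating the minimum of $`E`$. A self-contained sketch at one maturity; the maturity-dependent prefactor multiplying $`b(t)`$ enters only as an overall scale here and is passed as the illustrative argument `A_theta`.

```python
import math

def trend_error(deformation, spot_increments, lam, A_theta=1.0):
    """Average error E of Eq. (2.9) at a single maturity: the RMS
    difference between the empirical deformation y(t) and the scaled
    anticipated trend, with b(t) built from the spot increments as the
    exponential moving sum of Eq. (2.7)."""
    b = 0.0
    sq = 0.0
    for y, dr in zip(deformation, spot_increments):
        b = math.exp(-lam) * b + dr
        sq += (y - A_theta * b) ** 2
    return math.sqrt(sq / len(deformation))

def best_memory(deformation, spot_increments, lams):
    """Return the lam in `lams` minimising E; 1/lam is the market's
    memory time scale (about 100 trading days empirically)."""
    return min(lams, key=lambda lam: trend_error(deformation, spot_increments, lam))
```

Plotting `trend_error` against $`1/\lambda `$ reproduces the shape of Figure 3: a clear minimum at the dynamical time scale.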
For the other data-sets the correlation is not as strong; however, the same qualitative features are clearly present. ## 3 Comparison with HJM It is important to understand whether the popular hjm framework can capture the empirical properties discussed here. The stationary one-factor Gaussian hjm model is described by: $$f(t,\theta )=f(t_i,t-t_i+\theta )+\int _{t_i}^tds\,\nu (t+\theta -s)+\int _{t_i}^t\sigma (t+\theta -s)𝑑W(s),$$ (3.1) where: $$\nu (\theta )=\sigma (\theta )\int _0^\theta d\theta ^{}\sigma (\theta ^{})-\lambda \sigma (\theta ),$$ (3.2) $`\lambda `$ is the market price of risk and $`dW`$ is a Brownian motion. The average frc is given by: $$\langle s(t,\theta )\rangle _\tau =f(t_i,\tau +\theta )-f(t_i,\tau )+\int _\tau ^{\tau +\theta }du\,\nu (u)-\int _0^\theta du\,\nu (u),\tau =t-t_i,$$ (3.3) which corresponds to an average over a finite time period $`\tau `$. For comparison with our empirical average frc, we can take $`\tau `$ to be 5 years, which was the approximate length of our data-set. There are three separate contributions to the average frc. The first is the contribution of the initial frc. In this case the initial frc was somewhat steeper than the average frc, yet its contribution to the average frc is still roughly a factor of 3 smaller than the observed average. We can expect the magnitude of this contribution to decrease with increasing $`\tau `$. The second contribution comes from the $`\sigma ^2`$ factor in Eq. (3.2). The magnitude of this contribution grows linearly with $`\tau `$, yet even for $`\tau =10`$ years we find that the size of this term is very small, at least a factor of 10 smaller than the observed average frc for the early maturities. This term can therefore be neglected. More interesting is the contribution of the market price of risk term. We can show that this contribution is always negative for some initial region of the frc if $`\sigma (\theta )>\sigma _r`$ for all $`\theta `$. We found that this condition holds for all the data-sets we studied .
This negative contribution has a maximum at $`\sigma (\theta )=\sigma (\tau +\theta )\simeq \sigma (\theta _{\mathrm{max}})`$. Assuming the volatility is constant for large maturities, we find the market price of risk contribution takes the $`\tau `$ independent form: $$s(t,\theta )_\lambda \simeq \lambda \left[\int _0^\theta 𝑑u\sigma (u)-\theta \sigma (\theta _{\mathrm{max}})\right].$$ (3.4) In Figure 1 we show a plot of Eq. (3.4), where we use the empirical volatility Eq. (2.4) and choose $`\lambda =4.4`$ (per $`\sqrt{\mathrm{year}}`$), which gives a best fit to the average frc; it is clear that this fit is very bad, in particular compared to the simple square-root fit described above. In the usd case the market price of risk contribution is only negative for the first maturity, since the usd has a very strong volatility peak. However for the other data-sets it occurs for much longer maturities, or may remain negative for the entire maturity spectrum. Clearly the hjm model completely fails to account for our empirical results regarding the average frc. The next question to address is whether the hjm model can explain the striking correlation observed between the deformation and the anticipated trend. We do this by calculating Eq. (2.9), where all averages are calculated with respect to the hjm model Eq. (3.1) calibrated to the empirical volatility. As before we have also calibrated $`(\theta )`$ to the empirical volatility via Eq. (2.8). An immediate problem arises because, as we have seen, the hjm average frc cannot be calibrated to the empirical average frc. As a result the average deformation will no longer have the required zero mean. We will ignore this problem by defining the deformation as in Eq. (2.5), but with the empirical average frc now replaced by the hjm average frc. In this case we find the finite $`\tau `$ contributions to Eq. (2.9) are negligible and tend to zero for large $`\tau `$. The result is plotted in Figure 3, where we again consider $`\theta =6`$ months. 
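A quick numerical evaluation of Eq. (3.4) illustrates the sign claim. The monotonically rising volatility below is our own toy choice (the paper uses the empirical sigma of Eq. (2.4)); for any such curve with $`\sigma (\theta )>\sigma _r`$, the market-price-of-risk contribution comes out negative:

```python
import numpy as np

theta = np.linspace(0.0, 10.0, 2001)
dth = theta[1] - theta[0]
sigma = 0.005 + 0.010 * (1.0 - np.exp(-theta))   # toy vol: rises from sigma_r to a plateau
lam = 4.4                                        # the quoted best-fit value

# cumulative trapezoid rule: int_0^theta sigma(u) du
integral = (np.cumsum(sigma) - 0.5 * (sigma + sigma[0])) * dth
# Eq. (3.4), with sigma(theta_max) taken as the large-maturity plateau value
contrib = lam * (integral - theta * sigma[-1])
```

Here `contrib` is zero at the origin and strictly negative for all later maturities, matching the statement in the text.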
We see that the hjm model fails to adequately account for the strong anticipated trend effect observed here and more strikingly in . This is even after we have, in effect, assumed that the hjm model does describe the correct average frc. On the other hand, our model is very close in spirit to the strong correlation limit of the ‘two-factor’ spot rate model of Hull-White , which was introduced in an ad hoc way to reproduce the volatility hump. Although phrased differently, this model assumes in effect the existence of an anticipated trend following an Ornstein-Uhlenbeck process driven by the spot rate . It would be interesting to understand better the precise relation, if any, between this model and the hjm framework . Our main conclusions are as follows. We confirm with much more data that the average frc indeed follows a simple square-root law, with a prefactor closely related to the spot volatility. This strengthens the idea of a VaR-like pricing of the frc proposed in . We also confirm the striking correlation between the instantaneous frc and the past spot trend over a certain time horizon. This provides a clear empirical confirmation of the anticipated trend mechanism first proposed in . This mechanism provides a natural explanation for the universal qualitative shape of the frc volatility at the short end of the frc. This point is particularly important since the short end of the curve is the most liquid part of the curve, corresponding to the largest volume of trading (in particular on derivative markets). Interest rate models have evolved towards including more and more factors to account for the dynamics of the frc. Yet our study suggests that after the spot, it is the spot trend which is the most important model component. Finally, we saw that the one-factor Gaussian hjm model calibrated to the empirical volatility fails to adequately describe the qualitative features discussed here. 
We presented a simple one-factor version of a more complete model described in , which is consistent with the above interpretation. A natural extension of our work is to adapt the general method for option pricing in a non-Gaussian world detailed in to interest rate derivatives. Work in this direction is in progress. Acknowledgments: We thank J. P. Aguilar, P. Cizeau, R. Cont, O. Kwon, L. Laloux, M. Meyer, A. Tordjman and in particular M. Potters for many interesting discussions.
# VLBI Polarisation Images of the Gravitational Lens B0218+357 ## 1. Summary We have made VLBI polarisation observations of the gravitational lens system B0218+357 (Patnaik et al, 1993). Observations at 8.4 GHz were made on 9 May 1995, using the NRAO VLBA together with the 100m Effelsberg telescope, and at 22 and 43 GHz on 29 May 1996 using the VLBA alone. Reduction of the data was carried out using standard procedures in the AIPS software package. Preliminary maps are presented in Figure 1. The “core” and “knot” components, seen by Patnaik et al (1995) at 15 GHz, appear at all 3 frequencies in both the A and B images. The core (right) is highly polarised - consistent with the high degree of polarisation variability shown by this source (Biggs et al, 1999). Both the A and B image paths are known to suffer high Faraday rotation, with a differential RM of 980$`\pm `$10 rad m<sup>-2</sup> between the images (Patnaik et al, these proceedings). Our observations are not suitable for a direct determination of RMs as they are not simultaneous, and the polarisation angle can vary on the same timescale as the image relative delay (Biggs et al, 1999). However, the effect of differential rotation between the A and B image paths is apparent in the increase with wavelength of the difference between the core PAs of A and B. Indeed, the parallel PAs of the A and B cores at 43 GHz (where Faraday rotation is negligible) nicely demonstrate a basic property of gravitational lensing - that the PA of polarisation is unchanged by the action of the lens, even though source structural position angles may be changed in the images. ### Acknowledgments. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. ## References Biggs, A.D. et al, 1999, MNRAS, 304, 349 Patnaik, A.R. et al, 1993, MNRAS, 261, 435 Patnaik, A.R. et al, 1995, MNRAS, 274, L5
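For orientation, the polarisation-angle rotation implied by the quoted differential rotation measure follows $`\chi =\mathrm{RM}\lambda ^2`$. The short sketch below (our own, using only the RM quoted above) shows why Faraday rotation is large at 8.4 GHz but negligible at 43 GHz:

```python
import numpy as np

c = 2.99792458e8      # speed of light, m/s
RM = 980.0            # differential A-B rotation measure, rad/m^2

for nu_GHz in (8.4, 22.0, 43.0):
    lam = c / (nu_GHz * 1e9)                # observing wavelength, m
    dchi = np.degrees(RM * lam ** 2)        # differential PA rotation, deg
    print(f"{nu_GHz:5.1f} GHz: differential PA rotation = {dchi:6.2f} deg")
```

The rotation is roughly 70 degrees at 8.4 GHz, about 10 degrees at 22 GHz, and under 3 degrees at 43 GHz, consistent with the parallel core PAs seen at the highest frequency.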
## 1 Introduction It is well known that the supersymmetry algebra admits central charges that give BPS bounds on the energy. These charges can be carried by solitons and when the bound is saturated the states preserve some fraction of the supersymmetry. In addition there are tensorial ‘central’ charges carried by various $`p`$-branes in string/M-theory, for example, that lead to BPS bounds on the energy densities of the branes, and the BPS $`p`$-branes preserve 1/2 of the supersymmetry . The $`p`$-branes can intersect with or end on other branes while still preserving some supersymmetry, and intersecting brane configurations have been found that preserve fractions $`n/32`$ of the supersymmetry for $`n=0,1,2,3,4,5,6,8,16`$, so that in each of these cases no more than half the supersymmetry is preserved; see, for example, . By examining the supersymmetry algebra it is simple to see that there must exist charges that would correspond to preservation of any fraction $`n/M`$ of supersymmetry (where $`M`$ is the number of supersymmetries of the system, so that $`M=32`$ for M-theory). The general anticommutator of $`N`$ supercharges $`Q_{\alpha I}`$ (with $`\alpha `$ a spinor index and $`I=1,\ldots ,N`$) can be written as $$\{Q_A,Q_B\}=M_{AB}$$ (1) where $`A=1,\ldots ,M`$ is a composite index $`A=\{\alpha I\}`$ and $`M_{AB}`$ is a symmetric matrix of bosonic charges, which in most physical systems will take the form $$M_{AB}=H\delta _{AB}-Z_{AB}$$ (2) with $`H`$ the hamiltonian and $`Z_{AB}`$ a traceless symmetric matrix of ‘central’ charges which can be decomposed into a set of $`p`$-form charges $`Z_{\mu _1\ldots \mu _p}^{IJ}`$ contracted with gamma matrices. Let the eigenvalues of $`Z_{AB}`$ be $`\lambda _1,\ldots ,\lambda _M`$, with $`\sum _A\lambda _A=0`$. 
Then the supersymmetry algebra implies that $`M_{AB}`$ must be positive semi-definite, so that the energy $`E`$ is bounded below by the largest eigenvalue, $`E\ge \lambda `$ where $`\lambda =max\{\lambda _1,\ldots ,\lambda _M\}`$, as is easily seen in a basis in which $`Z_{AB}`$ is diagonal. If the largest eigenvalue is $`n`$-fold degenerate, $`\lambda _1=\lambda _2=\ldots =\lambda _n\equiv \lambda `$ say, and if there is a state that saturates the bound with $`E=\lambda `$, then for this state $`M_{AB}`$ will have $`n`$ zero eigenvalues and by definition the state will preserve $`n`$ of the supersymmetries, namely $`Q_1,Q_2,\ldots ,Q_n`$, and should fit into a supermultiplet generated by the action of the remaining $`M-n`$ supercharges. Thus a system will have a state preserving a given fraction $`n/M`$ of supersymmetry provided (i) there is a configuration of charges such that the maximal eigenvalue of $`Z_{AB}`$ is $`n`$-fold degenerate and (ii) there is a state that saturates the BPS bound for these charges. In many physical systems, $`Z_{AB}`$ is an arbitrary symmetric traceless matrix, since a configuration of charges can be found that gives any desired $`Z_{AB}`$. For example, in M-theory the matrix $`M_{AB}`$ has $`32\times 33/2=528`$ independent entries and all 528 arise from the 11-momentum, a 2-form charge and a 5-form charge, as $`11`$+$`55`$+$`462`$=$`528`$ . Moreover, each of the 527 charges $`Z_{AB}`$ is believed to actually arise in M-theory, and there is a 1/2-supersymmetric BPS state for each of the 527 charges . Most have been constructed explicitly, while evidence for the occurrence of the M9-brane is given in . Then in M-theory there is a configuration of charges corresponding to each fraction $`n/32`$ of supersymmetry for $`0\le n\le 32`$, and most can be realised without recourse to M9-branes. 
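Both the charge counting and the degeneracy rule can be checked numerically. The sketch below is our own illustration, with a small diagonal 4x4 matrix standing in for the 32x32 case:

```python
import numpy as np
from math import comb

# M-theory charge count: the symmetric matrix {Q_A, Q_B} has 32*33/2 = 528
# components, matching the 11-momentum plus 2-form plus 5-form charges.
assert 32 * 33 // 2 == 11 + comb(11, 2) + comb(11, 5) == 528

# The degeneracy of the top eigenvalue of Z fixes the preserved fraction:
# choose a diagonal traceless Z whose largest eigenvalue is 3-fold degenerate,
# saturate the bound E = lambda, and count zero modes of M = E*1 - Z.
Z = np.diag([1.0, 1.0, 1.0, -3.0])     # traceless, top eigenvalue 3-fold degenerate
E = Z.diagonal().max()
M = E * np.eye(4) - Z
zero_modes = int(np.sum(np.isclose(np.linalg.eigvalsh(M), 0.0)))
print(zero_modes, "of 4 supersymmetries preserved")   # 3 of 4 supersymmetries preserved
```

The same computation with a non-degenerate top eigenvalue gives a single zero mode, i.e. the generic 1/4 case in a 4-dimensional toy example.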
If M-theory is dimensionally reduced to one dimension by compactifying all the spatial dimensions, the resulting theory is a quantum mechanical theory with 32 supersymmetries and algebra (1),(2), where $`A=1,\ldots ,32`$ is now an internal index transforming under an $`Sp(32)`$ internal symmetry, and $`Z_{AB}`$ represents 527 scalar central charges, transforming irreducibly under the $`Sp(32)`$ automorphism group of the superalgebra, which is a contraction of $`OSp(32|1)`$. All central charges are then clearly on the same footing, and there seems no reason why an arbitrary central charge matrix $`Z_{AB}`$, and hence an arbitrary fraction of supersymmetry $`n/32`$, should not be realisable. If there is a set of charges in a supersymmetric theory for which the maximal eigenvalue of $`Z_{AB}`$ is $`n`$-fold degenerate, and if there is a state which saturates the BPS bound, it would preserve $`n/M`$ of the supersymmetries. In most cases that have been studied and for which the state of lowest energy has been found, it turns out to be a supersymmetric one saturating the bound. The fact that most allowed supersymmetric states actually occur suggests that it would be of interest to investigate further the configurations that could preserve exotic fractions of supersymmetry. Our purpose here will be to give some simple examples in which there is a BPS bound for which a state saturating it would preserve 3/4 supersymmetry, and to give some preliminary discussion as to whether such states actually occur. The possibility of 3/4 supersymmetry has also been recently discussed in ,. We will first consider the supersymmetry algebra in four dimensions. It is straightforward to provide charges that lead to preservation of 3/4 of the supersymmetry. This algebraic structure can be embedded in higher dimensions and we will focus on D=11. 
We will show that the charges preserving 3/4 of the supersymmetry can be realised by considering a very simple configuration of a membrane intersecting two fivebranes according to the array $$\begin{array}{ccccccccccc}M5:& 1& 2& 3& 4& 5& & & & & \\ M5:& 1& & & & & 6& 7& 8& 9& \\ M2:& 1& & & & & & & & & \natural \end{array}$$ (3) where the symbol $`\natural `$ is read as ‘10’, with the amount of supersymmetry preserved depending on the energy and the charges of the three branes. The case that has been discussed previously is that in which the product of all three brane charges is positive (in our conventions), leading to $`1/4`$ supersymmetry being preserved, whereas we will find new possibilities when one of the branes has negative charge, and so is an anti-brane (or all three are anti-branes). We will analyse this case in some detail and determine under which conditions the fractions 1/4, 1/2 and 3/4 of the supersymmetry could be preserved. One interesting feature is that for three or more intersecting branes, switching all the branes to anti-branes can lead to inequivalent results, whereas for configurations with just two branes, equivalent results would be obtained by the switch. Many other configurations of branes with exotic supersymmetry in M-theory or string theory can be generated from this example by dualities. ## 2 Exotic Supersymmetry in D=4 The general $`N`$ extended superalgebra in four dimensions is $$\{Q^I,\overline{Q}^J\}=(P^\mu \delta ^{IJ}\mathrm{\Gamma }_\mu +V_\mu ^{IJ}\mathrm{\Gamma }^\mu +iY_\mu ^{IJ}\mathrm{\Gamma }^5\mathrm{\Gamma }^\mu +X_{\mu \nu }^{IJ}\mathrm{\Gamma }^{\mu \nu }+iZ^{IJ}+i\stackrel{~}{Z}^{IJ}\mathrm{\Gamma }^5)$$ (4) where $`Q^I`$, $`I=1,\ldots ,N`$ is a Majorana spinor, the charges $`Y_\mu ^{IJ},Z^{IJ},\stackrel{~}{Z}^{IJ}`$ are antisymmetric in the $`IJ`$ indices while $`V_\mu ^{IJ},X_{\mu \nu }^{IJ}`$ are symmetric and $`V_\mu ^{IJ}`$ is traceless. 
In supersymmetric theories, $`P_\mu `$ is the 4-momentum, $`Z`$ and $`\stackrel{~}{Z}`$ are electric and magnetic $`0`$-brane charges, $`X_{\mu \nu }`$ are domain wall charges , $`V_i,Y_i`$ are string charges $`(i=1,2,3)`$ and $`V_0,Y_0`$ are charges for space-filling 3-branes . Moreover, in some cases $`P_i`$ could be a linear combination of the momentum and a string charge, while $`P^0`$ could be a linear combination of the energy and a 3-brane charge. The number of charges on the right-hand-side of (4) is $`10\times N(N+1)/2+6\times N(N-1)/2`$, which agrees with the number of components, $`2N(4N+1)`$, of the left-hand-side. This suggests that by choosing the charges on the right hand side, it should be possible to find a system for which the BPS bound would lead to any fraction $`n/4N`$ supersymmetry being preserved, provided that there existed a state saturating the BPS bound. In particular, there are some very simple systems that could allow $`3/4`$ supersymmetry. For example, consider $`N=2`$ supersymmetry with only the charges $`P_\mu `$ and $`Y_\mu ^{IJ}=Y_\mu ϵ^{IJ}`$ non-zero. A convenient choice of gamma matrices is $$\mathrm{\Gamma }^0=\left(\begin{array}{cc}0& i\\ i& 0\end{array}\right),\mathrm{\Gamma }^i=\left(\begin{array}{cc}0& i\sigma ^i\\ -i\sigma ^i& 0\end{array}\right),\mathrm{\Gamma }^5=\left(\begin{array}{cc}1& 0\\ 0& -1\end{array}\right)$$ (5) Then configurations with $`P^0=E`$, $`P^3=p`$, $`Y_0=u`$ and $`Y_3=v`$ and all other charges zero have the superalgebra $$\{Q,Q^{}\}=diag(E-\lambda _1,E-\lambda _2,E-\lambda _3,E-\lambda _4)$$ (6) where $`Q=(Q^1+iQ^2)/\sqrt{2}`$ and the eigenvalues $`\lambda _i`$ are given by $`\lambda _1`$ $`=`$ $`p+u+v`$ $`\lambda _2`$ $`=`$ $`u-p-v`$ $`\lambda _3`$ $`=`$ $`v-u-p`$ $`\lambda _4`$ $`=`$ $`p-u-v`$ (7) Note that there is a symmetry in the way the three charges occur. Positivity implies that the energy $`E`$ satisfies $`E\ge \lambda _i`$ for each $`i`$. 
If only one of the charges is non-zero, $`u`$ say, then $`E\ge u`$ and $`E\ge -u`$ so that we obtain the standard bound $`E\ge |u|`$. With two charges, $`u`$ and $`v`$ say, we obtain $`E\ge |u+v|`$ and $`E\ge |u-v|`$ and when one of these is saturated we have a configuration preserving 1/4 supersymmetry. With all three charges, there are four bounds corresponding to the four eigenvalues and in general when one is saturated there will be 1/4 supersymmetry preserved. However, for special values of the charges there can be degenerate eigenvalues. Consider for example the case in which all charges are equal, $`u=v=p=-\lambda `$ so that $$\{Q,Q\}=diag(H+3\lambda ,H-\lambda ,H-\lambda ,H-\lambda )$$ (8) If $`\lambda `$ is positive, a state with $`E=\lambda `$ would preserve the 3/4 supersymmetry corresponding to supersymmetry parameters of the form $`ϵ=(0,ϵ_2,ϵ_3,ϵ_4)`$. For negative $`\lambda `$, a BPS state with $`E=-3\lambda `$ would preserve 1/4 supersymmetry. In , it will be shown that a similar example occurs in the Wess-Zumino model with $`N=1`$ supersymmetry. In that case, there is again a simple configuration, corresponding to intersecting domain walls with momentum along the intersection, for which a state saturating the bounds would have 1/4, 1/2 or 3/4 supersymmetry, depending on the values of the charges. It will also be shown in that the Wess-Zumino model does not admit any classical configurations with 3/4 supersymmetry. ## 3 Exotic Supersymmetry in String Theory and M-Theory ### 3.1 M-Theory The general form of the eleven dimensional supersymmetry algebra has $$\{Q,Q\}=C(\mathrm{\Gamma }^MP_M+\frac{1}{2!}\mathrm{\Gamma }^{M_1M_2}Z_{M_1M_2}+\frac{1}{5!}\mathrm{\Gamma }^{M_1\ldots M_5}Z_{M_1\ldots M_5}),$$ (9) where $`C`$ is the charge conjugation matrix, $`P_M`$ is the energy-momentum 11-vector and $`Z_{M_1M_2}`$ and $`Z_{M_1\ldots M_5}`$ are 2-form and 5-form charges. 
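The degenerate case can be verified directly from the eigenvalue formulae (7); the following check (ours, with an illustrative value of the charge) confirms tracelessness and the 3/4 counting:

```python
import numpy as np

def eigenvalues(p, u, v):
    # Eq. (7)
    return np.array([p + u + v, u - p - v, v - u - p, p - u - v])

# tracelessness: the four eigenvalues always sum to zero
assert np.allclose(eigenvalues(0.3, -1.2, 2.5).sum(), 0.0)

# equal charges u = v = p = -lam with lam > 0:
# {Q, Q} = diag(E + 3*lam, E - lam, E - lam, E - lam), cf. Eq. (8)
lam = 1.0
lams = eigenvalues(-lam, -lam, -lam)          # eigenvalues (-3*lam, lam, lam, lam)
E = lams.max()                                # BPS-saturating energy, E = lam
n_preserved = int(np.sum(np.isclose(E - lams, 0.0)))
print(n_preserved)                            # 3 -> 3/4 supersymmetry
```

For negative lam the roles are exchanged and only the single eigenvalue −3*lam is saturated, giving the 1/4 case quoted in the text.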
The fraction of supersymmetry that is preserved by a configuration possessing a given set of charges is given by the number of zero eigenvalues of the matrix $`\{Q,Q\}`$ divided by 32. As argued in the introduction, both sides have equal numbers of components (528), and all 528 charges on the right hand side actually arise in M-theory, provided we include M9-branes carrying the charge $`Z_{0i}`$ , so that there must be configurations of charges that could give rise to all fractions $`n/32`$ of preserved supersymmetry, provided BPS states arise in that charge sector. For 3/4 supersymmetry, there is a very simple set of charges corresponding to two fivebranes and a membrane that allows 3/4 of the supersymmetry, obtained by embedding the example of the last section in 11 dimensions and using dualities. In addition we will show that there are some novel combinations of three charges leading to the preservation of 1/4 and 1/2 supersymmetry. It is known that it is possible to have two fivebranes and a membrane intersecting according to (3) and preserving 1/4 of the supersymmetry, provided the product of all three brane charges is positive. Changing the signs of one or three of the charges and tuning their values allows 3/4 supersymmetry instead, as we shall see. We begin by assuming that the only non-zero charges in (9) are $`q_5`$ $`=`$ $`Z_{12345}`$ $`q_5^{}`$ $`=`$ $`Z_{16789\natural }`$ $`q_2`$ $`=`$ $`Z_{1\natural }`$ (10) and positive charges will correspond to branes and negative charges to anti-branes. We use real gamma matrices with $`C=\mathrm{\Gamma }^0`$ and $`\mathrm{\Gamma }^{0123456789\natural }=1`$. It will be convenient to take a basis such that $`\mathrm{\Gamma }^{012345}`$ $`=`$ $`diag(1,1,-1,-1)\otimes 1\mathrm{l}_8`$ $`\mathrm{\Gamma }^{016789}`$ $`=`$ $`diag(1,-1,1,-1)\otimes 1\mathrm{l}_8`$ $`\mathrm{\Gamma }^{01\natural }`$ $`=`$ $`diag(1,-1,-1,1)\otimes 1\mathrm{l}_8`$ where $`1\mathrm{l}_8`$ is the $`8\times 8`$ identity matrix. 
Setting $`P^0=E`$ we can then rewrite (9) as $$\{Q,Q\}=diag(E-\lambda _1,E-\lambda _2,E-\lambda _3,E-\lambda _4)\otimes 1\mathrm{l}_8$$ (12) where $`\lambda _1`$ $`=`$ $`q_2+q_5+q_5^{}`$ $`\lambda _2`$ $`=`$ $`-q_2+q_5-q_5^{}`$ $`\lambda _3`$ $`=`$ $`-q_2-q_5+q_5^{}`$ $`\lambda _4`$ $`=`$ $`q_2-q_5-q_5^{}`$ (13) Since $`\{Q,Q\}`$ is a positive matrix we have the BPS bound $`E\ge \lambda _i`$ for $`i=1,2,3,4`$. If there is only one non-zero charge, $`q_2`$ say, then the BPS bound is simply $`E\ge |q_2|`$ and when it is saturated 1/2 of the supersymmetry is preserved. For example, for BPS membranes (with $`q_2`$ positive and $`E=q_2`$) we have $$\{Q,Q\}=diag(0,2q_2,2q_2,0)\otimes 1\mathrm{l}_8$$ (14) The preserved supersymmetry parameters satisfy $`\mathrm{\Gamma }^{01\natural }ϵ=ϵ`$. With an additional non-zero charge $`q_5`$, the BPS bounds are the two conditions that $`E\ge |q_2+q_5|`$ and $`E\ge |q_2-q_5|`$. When either of the bounds is saturated, 1/4 of the supersymmetry is preserved. For example, for a membrane and a fivebrane, $$\{Q,Q\}=diag(0,2q_2,2(q_2+q_5),2q_5)\otimes 1\mathrm{l}_8$$ (15) with 8 zero eigenvalues. The supersymmetry preserved is the intersection of that preserved by each of the membranes and fivebranes; in this case $$\mathrm{\Gamma }^{01\natural }ϵ=\mathrm{\Gamma }^{012345}ϵ=ϵ$$ (16) Adding the fivebrane to the membrane further halved the membrane’s supersymmetries to leave 1/4 supersymmetry. However, a second fivebrane can now be added in the 16789 directions without breaking any more supersymmetry, as the corresponding projection $`\mathrm{\Gamma }^{016789}ϵ=ϵ`$ on the supersymmetry parameter is already implied by the conditions (16). We can indeed add a third positive charge $`q_5^{}`$ and preserve all 8 supersymmetries if the energy saturates the BPS bound, $`E=q_2+q_5+q_5^{}`$. We will refer to this as the usual BPS intersection of the $`(2,5,5^{})`$ system as it has been extensively studied in the literature. 
An identical analysis goes through for $`(2,\overline{5},\overline{5}^{})`$ if we take $`E=q_2-q_5-q_5^{}`$, for $`(\overline{2},5,\overline{5}^{})`$ if we take $`E=-q_2+q_5-q_5^{}`$ and for $`(\overline{2},\overline{5},5^{})`$ if we take $`E=-q_2-q_5+q_5^{}`$; in each case, we can start with any two of the branes intersecting and preserving 1/4 supersymmetry, and then add the third for free without any further breaking. Returning to the $`(2,5)`$ system preserving 8 supersymmetries, adding an anti-fivebrane with $`\mathrm{\Gamma }^{016789}ϵ=-ϵ`$ instead of a fivebrane to give the $`(2,5,\overline{5}^{})`$ configuration would appear to break the original 8 supersymmetries of the membrane-fivebrane system, but, as we shall show, the BPS bound leads to 8 supersymmetries if the energy saturates the bound. (These will be a different 8-dimensional subset of the 32 for some values of the charges and will be the same 8 for other values.) The situation is the same for the $`(\overline{2},\overline{5},\overline{5}^{})`$, $`(\overline{2},5,5^{})`$ and $`(2,\overline{5},5^{})`$ systems; in each case any two of the three branes preserve 8 supersymmetries, while the third brane appears to break these 8 supersymmetries, but nonetheless 8 supersymmetries would be preserved if the bound is saturated. Moreover, if such a 1/4 supersymmetric BPS state exists, tuning the charges to particular values enhances the number of supersymmetries to 16 or to the exotic value of 24. For general charges it is useful to contrast the analysis for configurations related by switching branes with anti-branes and we will focus on the $`(2,5,5^{})`$ and $`(\overline{2},\overline{5},\overline{5}^{})`$ systems. With this in mind we return to (12) and (13) and first consider the $`(2,5,5^{})`$ case in which all the charges are positive. 
Clearly $`\lambda _1=q_2+q_5+q_5^{}`$ is the largest eigenvalue and hence the BPS bound is $`E\ge q_2+q_5+q_5^{}`$ and when it is saturated we preserve 1/4 of the supersymmetry; this is the usual case considered above. To analyse the $`(\overline{2},\overline{5},\overline{5}^{})`$ case in which all charges are negative, it is useful to rewrite (13) as $`\lambda _1`$ $`=`$ $`q_2+q_5+q_5^{}`$ $`\lambda _2`$ $`=`$ $`-(q_2+q_5+q_5^{})+2q_5`$ $`\lambda _3`$ $`=`$ $`-(q_2+q_5+q_5^{})+2q_5^{}`$ $`\lambda _4`$ $`=`$ $`-(q_2+q_5+q_5^{})+2q_2`$ (17) One of $`\lambda _2,\lambda _3,\lambda _4`$ is now the biggest eigenvalue and is positive, since $`\lambda _1`$ is negative and the sum of the eigenvalues is zero. For example, when $$0\ge q_5\ge q_5^{},0\ge q_5\ge q_2$$ (18) it is $`\lambda _2`$ that is the biggest and the BPS bound is $`E\ge -(q_2+q_5+q_5^{})+2q_5`$. If this bound is saturated $$\{Q,Q\}=diag(-2(q_2+q_5^{}),0,2(q_5-q_5^{}),2(q_5-q_2))\otimes 1\mathrm{l}_8$$ (19) and 1/4 of the supersymmetry would be preserved. To obtain exotic preservation of 1/2 or 3/4 supersymmetry we only need to tune the charges: 1/2 supersymmetry is preserved when either $`q_5=q_5^{}`$ or $`q_5=q_2`$ and 3/4 supersymmetry is preserved when $`q_5=q_5^{}=q_2`$. Thus, for the $`(\overline{2},\overline{5},\overline{5}^{})`$ system with charges satisfying (18), the lowest energy allowed by supersymmetry is $`E=-(q_2+q_5+q_5^{})+2q_5`$. If the ground state of this system indeed has this energy, then it would preserve 1/4 supersymmetry for generic values of the charges, but when two of the charges are equal, 1/2 supersymmetry would be preserved and if all three charges are equal, 3/4 supersymmetry would be preserved. Similar results follow for the cases in which it is $`\lambda _3`$ or $`\lambda _4`$ that is the biggest. To obtain further insight, we continue with the case with charges satisfying (18), so that $`\lambda _2`$ is the largest eigenvalue. 
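The pattern of preserved fractions follows directly from the eigenvalue formulae (13); each eigenvalue is 8-fold degenerate because of the $`1\mathrm{l}_8`$ factor. A short check (ours, with illustrative charge values):

```python
import numpy as np

def lams(q2, q5, q5p):
    # Eq. (13)
    return np.array([q2 + q5 + q5p, -q2 + q5 - q5p, -q2 - q5 + q5p, q2 - q5 - q5p])

def preserved_fraction(q2, q5, q5p):
    lam = lams(q2, q5, q5p)
    E = lam.max()                             # BPS-saturating energy
    return float(np.sum(np.isclose(E - lam, 0.0))) / 4.0

assert preserved_fraction( 1.0,  2.0,  3.0) == 0.25   # (2,5,5'): the usual 1/4
assert preserved_fraction(-1.0, -1.0, -2.0) == 0.50   # two equal anti-brane charges
assert preserved_fraction(-1.0, -1.0, -1.0) == 0.75   # all equal: exotic 3/4
print("ok")
```

Generic negative charges give 1/4, two equal magnitudes enhance this to 1/2 (16 supersymmetries), and three equal magnitudes give the exotic 3/4 (24 supersymmetries), as described in the text.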
The specific 8 supersymmetries that would be preserved are the same as those preserved by the two intersecting branes $`(\overline{2},\overline{5}^{})`$. It is interesting that it is these two branes that are contributing to the energy positively while the other fivebrane is contributing negatively. Recall that if we add a fivebrane, with $`q_5`$ positive, to $`(\overline{2},\overline{5}^{})`$ to obtain the $`(\overline{2},5,\overline{5}^{})`$ configuration, the usual case of preservation of 1/4 supersymmetry is obtained if we have $`E=-(q_2+q_5^{})+q_5`$. The new point here is that we can add instead an anti-fivebrane, with $`q_5`$ negative, to get the $`(\overline{2},\overline{5},\overline{5}^{})`$ system and still preserve the same supersymmetry if again $`E=-(q_2+q_5^{})+q_5`$, as long as $`q_5\ge q_5^{}`$ and $`q_5\ge q_2`$. In either case, the naive energy would just be the sum of the energies of the three branes, i.e. $`E_n=|q_2|+|q_5|+|q_5^{}|`$. This is the correct result for the usual 1/4 supersymmetric $`(\overline{2},5,\overline{5}^{})`$ case, but for the exotic $`(\overline{2},\overline{5},\overline{5}^{})`$ case the energy of a state saturating the bound would be $`E=E_n-V`$ where $`V=2|q_5|`$, suggesting that $`V`$ might be interpreted as some kind of binding energy or as some tachyonic contribution. Finally, note that if the conditions (18) are not both satisfied, then either $`\lambda _3`$ or $`\lambda _4`$ will be the largest eigenvalue and adding the anti-fivebrane to $`(\overline{2},\overline{5}^{})`$ will break the original 8 supersymmetries and lead to a different 8 supersymmetries being preserved. ### 3.2 Tachyon Condensation It is perhaps worth comparing the above with the case of coincident brane/anti-brane pairs. 
It has been argued that $`m`$ D-branes and $`m`$ anti-D-branes will completely annihilate to leave the vacuum with energy $`E=mT+mT-V=0`$ where $`T`$ is the energy of a single brane and the contribution $`V=2mT`$ arises from the negative potential energy released by tachyon condensation . Duality then implies that this should also apply to any $`m`$ brane/anti-brane pairs in M-theory or string theory, which should again completely annihilate. The tachyon condensation reduces the energy to the minimum allowed by the BPS bound, which in this case is zero as the brane/anti-brane pair carries no net charge. Adding a further $`n`$ branes to obtain $`n+m`$ branes and $`m`$ anti-branes, the $`m`$ anti-branes should completely annihilate $`m`$ of the branes to leave $`n`$ branes with energy $`E=nT`$, which can be written as $`E=E_n-V`$ where $`E_n=(2m+n)T`$ is the naive energy given by the sums of the energies of the individual branes and anti-branes and $`V=2mT`$. Then for two coincident $`p`$-branes of charges $`q`$, $`\stackrel{~}{q}`$, the naive energy of the system would be the sum of the energies of the respective branes, $`E_n=|q|+|\stackrel{~}{q}|`$. This is the correct energy if $`q,\stackrel{~}{q}`$ have the same sign, so that they are either both branes or both anti-branes. However, if the charges have opposite sign so that one is a brane and the other an anti-brane (e.g. $`q=(n+m)T`$ and $`\stackrel{~}{q}=-mT`$ for the case above), the resulting configuration has $`E=|q+\stackrel{~}{q}|=E_n-V`$ with $`V=2min(|q|,|\stackrel{~}{q}|)`$. This is suggestively similar to the case considered above when $`\lambda _2`$ is the largest eigenvalue. The $`(\overline{2},\overline{5}^{})`$ system enters the energy formulae in exactly the same way as a 5-brane of charge $`\stackrel{~}{q}_5=-(q_2+q_5^{})`$ would. 
Adding a fivebrane with positive charge $`q_5`$ gives a system with $`E=q_5+\stackrel{~}{q}_5`$, while adding an anti-fivebrane with negative charge $`q_5=-q`$ gives a system with $`E=\stackrel{~}{q}_5-q`$ and $`V=2q`$. More generally, when the product of the brane charges is positive, the naive energy is the sum of the energies of the branes $`E_n`$=$`|q_2|+|q_5|+|q_5^{}|`$ and such configurations preserve 1/4 of the supersymmetry. When the product of the charges is negative, exotic preservation of supersymmetry is possible only if the naive energy is modified to $`E=E_n-2Min(|q_2|,|q_5|,|q_5^{}|)`$. This suggests that tachyon condensation could play a role here also, reducing the energy below the sum of the brane energies. It seems plausible that this could indeed be the case and that it reduces the energy to the minimum allowed by supersymmetry. ### 3.3 String Theory The M-theory example with an M2-brane and two M5-branes is related by duality to many other configurations of three branes in string theory or M-theory. For example, it is related to the type II configuration of a Dp-brane, a D(8-p) brane and a fundamental string intersecting in a point, with the Dp-brane in the directions $`1,2,\ldots ,p`$, the D(8-p) brane in the directions $`p+1,p+2,\ldots ,8`$ and the fundamental string in the 9th direction, or to the configuration of a D5-brane in the 12345 directions, a NS5-brane in the 12678 directions and a D3-brane in the 129 directions studied by Hanany and Witten . In each case there is the usual 1/4 supersymmetric configuration in which one of the three branes is added ‘for free’, and an exotic configuration obtained from this by reversing the orientation of one of the branes, or of all three branes, in which a BPS state would preserve 1/4 supersymmetry for generic charges and 1/2 or 3/4 supersymmetry when two or three of the charges are of equal magnitude. 
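The energy deficit in the brane/anti-brane discussion above is just an arithmetic identity: for opposite-sign charges, $`|q+\stackrel{~}{q}|=|q|+|\stackrel{~}{q}|-2min(|q|,|\stackrel{~}{q}|)`$. A small check (ours, with arbitrary sample charges):

```python
def bound_energy(q, qt):
    # energy of the BPS-saturating state of two coincident p-branes
    return abs(q + qt)

def naive_minus_V(q, qt):
    # naive sum of brane energies, reduced by V for a brane/anti-brane pair
    En = abs(q) + abs(qt)
    V = 2 * min(abs(q), abs(qt)) if q * qt < 0 else 0
    return En - V

for q, qt in [(5, 3), (5, -3), (-7, 2), (-4, -4)]:
    assert bound_energy(q, qt) == naive_minus_V(q, qt)
print("ok")
```

The three-charge formula quoted above, with the deficit 2*min over the three magnitudes, is the direct analogue of this identity for the exotic sign configurations.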
There are no such configurations with only D-branes, so that the methods of cannot directly be used to test the possibility of tachyon condensation leading to exotic BPS states. ## 4 Conclusion We have shown that the supersymmetry algebra allows configurations preserving exotic amounts of supersymmetry and we have identified simple configurations of charges in M-theory and in field theories such that any state with the lowest energy allowed by supersymmetry would preserve 3/4 supersymmetry, but we have not been able to establish whether or not such states actually occur. We have been unable to find any D=11 supergravity solutions with 24 Killing spinors, corresponding to 3/4 supersymmetry. The known supersymmetric supergravity solutions with two (anti)-fivebranes and an (anti)-membrane either preserve 1/4 or none of the supersymmetry . We take the mass parameters associated with the harmonic or generalised harmonic functions of each of the individual branes to be positive. Then if the product of the three charges is positive (for example, the $`(2,5,5^{})`$ or $`(2,\overline{5},\overline{5}^{})`$ configurations), then the solutions have 8 Killing spinors corresponding to 1/4 supersymmetry. When the product of the charges is negative, (for example, the $`(\overline{2},\overline{5},\overline{5}^{})`$ configuration), the known supergravity solutions actually break all of the supersymmetry. If we vary the signs of the mass parameters we do not obtain solutions with more Killing spinors<sup>1</sup><sup>1</sup>1This point was also discussed in .. It is also possible to prove that no 3/4 supersymmetric classical solutions of the Wess-Zumino model exist, even though supersymmetry would have allowed them . In each of our examples, there are no spatial dimensions that are transverse to all the branes and in such situations a number of subtleties can arise, but nonetheless our analysis does recover the known cases of supersymmetric intersections. 
The configurations we have identified are parameterised by three charges. For generic values of these charges, a BPS state would preserve 1/4 supersymmetry, but for special values when two or three of the charges are equal, a BPS state would preserve 1/2 or 3/4 supersymmetry, respectively. We have conjectured that in string theory and M-theory, tachyon condensation could play a role in reducing the energy to the minimum allowed by the BPS bound, just as it does for the brane/anti-brane pair. It would be very interesting to either establish that such exotic states do occur in certain theories, or, if they don’t exist, to understand the reason for this, given that supersymmetry appears to allow them. We hope to return to these issues in the future. Acknowledgements: We thank Gary Gibbons, Jeff Harvey, Michael Douglas, David Tong and Paul Townsend for helpful discussions.
## 1. Introduction Connes and Kreimer have recently shown that dimensional regularization of Feynman diagrams is underwritten by the uniqueness of the solution of Hilbert’s 21st problem . Here, we celebrate this welcome legitimization of current practice in perturbative quantum field theory, by finding a rational function of the spacetime dimension $`d`$ that yields, at $`d=4`$, the 4-loop term in the beta function of quenched QED. For the first time, suppression of zeta values is seen diagram by diagram. We prove the suppression of $`\pi ^4`$ in the $`\overline{\mathrm{MS}}`$-renormalized 3-loop single-scale Green functions of any massless gauge theory. There are 6 methods of calculating the 3-loop term $`\beta _3=-2`$ in the beta function $$\beta (a):=\frac{d\mathrm{log}a}{d\mathrm{log}\mu ^2}=\sum _{n>0}\beta _na^n=\frac{4}{3}a+4a^2-2a^3-46a^4+O(a^5)$$ (1) of quenched QED, with a coupling $`a:=\alpha /4\pi `$. As described in , they are as follows. M1: Dyson-Schwinger skeleton expansion . M2: Integration by parts of massive bubble diagrams . M3: Integration by parts of massless two-point diagrams . M4: Infrared rearrangement of massless bubble diagrams . M5: Propagation in a background field . M6: Crewther connection to deep-inelastic processes . For the 4-loop term, one does not know how to use M3 directly: there is as yet no algorithm for 4-loop 2-point functions. The historical progression through other methods was as follows. Method M4 first gave $`\beta _4=-46`$ in . Then it was noted in that the result was consistent with deep-inelastic results by virtue of the exactness of the Crewther connection M6 in the quenched abelian case. Progress with 4-loop massive bubbles in M2 led to the 4-loop beta function of a general gauge theory, confirming the particular case of QED. Recently we used the Dyson-Schwinger method M1, which was shown to be very efficient.
However, in none of these 4 analyses does one gain an understanding of how the rationality of $`\beta _4=-46`$ comes about; all 4 involve intricate cancellations of zeta values between diagrams with quite different momentum flows. The attentive reader will have noticed that one stone lay unturned: the background-field method M5. Very recently we attempted it, and found it to be wonderfully user-friendly: 8 lines of code suffice. This is because it gives $`\beta _4`$ directly in terms of 8 three-loop diagrams, whereas in a reduction to 3 loops was achieved indirectly, via nullification of four-loop diagrams, which are much more numerous. Moreover, we find that in a background field the cancellation of zeta values is much less obscure. Our method follows immediately from the telling observation that $`d(\beta (a)/a)/da=\sum _{n>1}(n-1)\beta _na^{n-2}`$ is given by the radiative corrections to the photon self-energy of massless quenched QED, in a background field. Consider the momentum-space correlator $$i\int 𝑑xe^{ikx}\langle 0|T(J_\mu (x)J_\nu (0))|0\rangle =(k_\mu k_\nu -k^2g_{\mu \nu })\left\{\mathrm{\Pi }_0(k^2)+\mathrm{\Pi }_2(k^2)F^2+O(F^4)\right\}$$ (2) of the electromagnetic current $`J_\mu :=\overline{\psi }\gamma _\mu \psi `$ in a background electromagnetic field $`F_{\mu \nu }`$, with $`F^2:=F_{\mu \nu }F^{\mu \nu }`$. After struggling to decode the rationality of $`\beta _4`$ via the 4-loop contribution to $`\mathrm{\Pi }_0`$, I judged it more prudent to tackle the finite 3-loop contribution in $$\mathrm{\Pi }_2(k^2)=\frac{\beta _2a}{6k^4}\left\{1+\left(\frac{2\beta _3}{\beta _2}\right)a+\left(\frac{3\beta _4}{\beta _2}\right)a^2+O(a^3)\right\}$$ (3) ## 2. Three-loop beta function First we dimensionally continue Rosner’s result, $`2\beta _3/\beta _2=-1`$.
Consider the $`O(F^2)`$ term of (2) in $`d:=4-2\epsilon `$ dimensions, with a dimensionless coupling $`\overline{a}:=(4\pi )^\epsilon g(k^2)a`$, where $$g(k^2):=\frac{\mathrm{\Gamma }(1+\epsilon )\mathrm{\Gamma }^2(1-\epsilon )}{k^{2\epsilon }\mathrm{\Gamma }(1-2\epsilon )}$$ (4) absorbs the $`\mathrm{\Gamma }`$ functions and momentum dependence of one-loop integration. Let the $`d`$-dimensional form of (3) be written as $$\overline{\mathrm{\Pi }}_2(k^2)=\frac{\overline{\beta }_2\overline{a}}{2(d-1)k^4}\left\{1+\left(\frac{2\overline{\beta }_3}{\overline{\beta }_2}\right)\overline{a}+\left(\frac{3\overline{\beta }_4}{\overline{\beta }_2}\right)\overline{a}^2+O(\overline{a}^3)\right\}$$ (5) with bars denoting analytic continuation in $`d`$. Then the one-loop self-energy yields $$\overline{\beta }_2=\frac{(6-d)(d-2)}{d}\mathrm{Tr}(1)$$ (6) where Tr$`(1)`$ is whatever one chooses to take for the trace of the unit matrix in the Clifford algebra of $`d`$ dimensions. At $`d=4`$, where certainly Tr$`(1)=4`$, we get $`\beta _2=4`$. Now consider the $`d`$-dimensional analysis for $`\overline{\beta }_3`$. Integration by parts gives the first radiative correction in (5) in terms of $`\mathrm{\Gamma }`$ functions: $$\frac{2\overline{\beta }_3}{\overline{\beta }_2}=R_{2,0}+R_{2,3}\overline{\zeta }_3$$ (7) where $`R_{2,0}`$ and $`R_{2,3}`$ are rational functions of $`d`$ and $$\overline{\zeta }_3:=\frac{(d-3)k^4}{6(g(k^2)\pi ^{d/2})^2}\int \frac{dP_1dP_2dP_3\delta (k-P_1)}{P_1^2P_2^2P_3^2(P_1-P_2)^2(P_2-P_3)^2(P_3-P_1)^2}$$ (8) is a slice, via a $`\delta `$ function, of the wheel with three spokes: the tetrahedron. It is a template for our later construction of a 3-loop basis.
At 2 loops, one easily evaluates $$\overline{\zeta }_3=\frac{1}{6\epsilon ^3}\left(1-\frac{\mathrm{sec}(\pi \epsilon )\mathrm{\Gamma }(1-2\epsilon )}{\mathrm{\Gamma }(1+\epsilon )\mathrm{\Gamma }(1-3\epsilon )}\right)=\zeta (3)+\frac{3}{2}\zeta (4)\epsilon +7\zeta (5)\epsilon ^2+O(\epsilon ^3)$$ (9) which is the sole origin of $`\zeta (3):=\sum _{n>0}1/n^3`$ in 2-loop 2-point functions at $`d=4`$. Exact $`d`$-dimensional analysis gives the rational functions $`R_{2,0}`$ $`=`$ $`-4{\displaystyle \frac{d^6-37d^5+554d^4-4280d^3+17826d^2-37728d+31608}{(d-1)(d-3)^2(d-6)^3(d-8)}}`$ (10) $`R_{2,3}`$ $`=`$ $`6{\displaystyle \frac{(d-2)(d-4)(d-5)(d^4-25d^3+230d^2-920d+1376)}{(d-1)(d-6)^3(d-8)}}`$ (11) This massless abelian analysis is far simpler than our massive non-abelian work . The rationality of $`\beta _3=-2`$ is seen in the vanishing of (11) at $`d=4`$; the precise value comes from setting $`d=4`$ in (10), which gives $`2\beta _3/\beta _2=-1`$. Before proceeding to $`\beta _4`$, we prove a lemma, which lies at the heart of the method and has wider import. ## 3. Suppression of $`\pi ^4`$ in massless 3-loop gauge theory It is well understood why $`\zeta (3)`$ is the first irrational number to appear in single-scale processes, renormalized in the $`\overline{\mathrm{MS}}`$ scheme. The suppression of a single power of $`\pi ^2`$ is a property of any massless Lorentz-covariant theory: every bare 2-loop 2-point diagram, with spacelike momentum $`k`$, gives a result of the form $`(R_{2,0}+R_{2,3}\overline{\zeta }_3)\overline{a}^2`$, where $`R_{2,0}`$ may have $`1/\epsilon ^2`$ and $`1/\epsilon `$ singularities, while $`R_{2,3}`$ is regular at $`d=4`$. In heavy-quark effective theory, where the worldline of the heavy quark breaks Lorentz symmetry, one finds 2-loop combinations of $`\mathrm{\Gamma }`$ functions that lack the $`\pi ^2`$ suppression of (9).
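As a numerical sanity check (my own Python/mpmath sketch, not part of the original calculation), one can verify the $`\epsilon `$-expansion of $`\overline{\zeta }_3`$ in (9) and evaluate the rational functions (10), (11) at $`d=4`$; the overall sign of $`R_{2,0}`$ used below is the one required by $`2\beta _3/\beta _2=-1`$, and $`R_{2,3}`$ vanishes at $`d=4`$ through its $`(d-4)`$ factor.

```python
from mpmath import mp, mpf, gamma, sec, pi, zeta

mp.dps = 30  # work with 30 significant digits

def zeta3_bar(eps):
    # Eq. (9): the sliced tetrahedron, as an exact function of eps = (4 - d)/2
    return (1 - sec(pi*eps)*gamma(1 - 2*eps)/(gamma(1 + eps)*gamma(1 - 3*eps))) / (6*eps**3)

def R20(d):
    # Eq. (10), with overall sign fixed by 2*beta_3/beta_2 = -1 at d = 4
    num = d**6 - 37*d**5 + 554*d**4 - 4280*d**3 + 17826*d**2 - 37728*d + 31608
    return -4*num/((d - 1)*(d - 3)**2*(d - 6)**3*(d - 8))

def R23(d):
    # Eq. (11); the factor (d - 4) makes this vanish in four dimensions
    num = d**4 - 25*d**3 + 230*d**2 - 920*d + 1376
    return 6*(d - 2)*(d - 4)*(d - 5)*num/((d - 1)*(d - 6)**3*(d - 8))
```

At $`\epsilon =10^3`$... rather, at eps = 0.001, zeta3_bar agrees with $`\zeta (3)+\frac{3}{2}\zeta (4)\epsilon +7\zeta (5)\epsilon ^2`$ to the size of the neglected $`\epsilon ^3`$ term, while R20(4) = −1 and R23(4) = 0 reproduce $`2\beta _3/\beta _2=-1`$.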
It is the experience of several workers that if – and usually not until – one combines the 3-loop diagrams specified by a massless gauge theory, $`\pi ^4`$ vanishes as well . Multi-loop colleagues have asked me why this happens. The answer lies in the output format of the program slicer , which gives the results of subsequent sections. Following the example of (8), we define 4 three-loop integrals that depend only on $`d`$: $$\overline{\zeta }_{5,S}:=\frac{(d-3)k^4}{20(g(k^2)\pi ^{d/2})^3}\int \frac{dP_1dP_2dP_3dP_4\delta (k-P_S)}{P_1^2P_2^2P_3^2P_4^2(P_1-P_2)^2(P_2-P_3)^2(P_3-P_4)^2(P_4-P_1)^2}$$ (12) with 4 distinct slices of the 4-spoke wheel, encoded by $`S=L,M,N,Q`$, corresponding to $`P_L:=P_1-P_3`$, $`P_M:=P_1`$, $`P_N:=P_1-P_2+P_3-P_4`$, $`P_Q:=P_1-P_2`$, for the external momentum, $`k`$. Then each integral evaluates to $`\zeta (5)`$ at $`d=4`$, since it comes from slicing the wheel with 4 spokes and we have proved that the wheel with $`n+1`$ spokes gives $`\left(\genfrac{}{}{0pt}{}{2n}{n}\right)\zeta (2n-1)`$, for $`n>1`$. The coding was suggested by letters in : the L (ladder) slice is at the 4-point hub, in the $`s`$-channel; M (Mercedes-Benz) is at the rim; N (non-planar) is at the hub, in the $`u`$-channel; Q (?) is at a spoke. Remarkably, this completes a basis. > Lemma: A bare 3-loop 2-point diagram with external momentum $`k`$ gives > > $$\left\{R_{3,0}+R_{3,3}\overline{\zeta }_3+\underset{S=L,M,N,Q}{\sum }R_{3,S}\overline{\zeta }_{5,S}\right\}\overline{a}^3$$ > (13) > with coefficients $`\{R_{3,S}:S=0,3,L,M,N,Q\}`$ that are rational functions of $`d`$. Proof: This is an extension of the fine analysis of Chetyrkin and Tkachov . Consider the 5 one-particle-irreducible 4-loop bubble diagrams of Fig. 1. Generate all tadpole-free trivalent 3-loop 2-point diagrams by slicing these, to obtain the 12 distinct cases of Fig 2, where each slice is marked.
Whatever the numerators, and whatever the integer powers of the propagators, integration by parts allows one to eliminate all and only those diagrams that contain at least one unsliced triangle. Thus we may eliminate 5 of the 12 cases, namely C2, C3, C4, D1, D2. The final observation is that B3, the sole case with a sliced chord in Fig. 2, is rationally related to B2. The basic one-loop integral in $`d`$ dimensions is $$G(a,b):=\frac{\mathrm{\Gamma }(a+b-d/2)}{\mathrm{\Gamma }(a)\mathrm{\Gamma }(b)}\frac{\mathrm{\Gamma }(d/2-a)\mathrm{\Gamma }(d/2-b)}{\mathrm{\Gamma }(d-a-b)}$$ (14) with propagators raised to powers $`a`$ and $`b`$. Let $`\mathrm{\Phi }(\mathrm{X})`$ be the representative of class X in $`d`$-dimensional $`\varphi ^3`$ theory. Using only $`\mathrm{\Gamma }(z+1)=z\mathrm{\Gamma }(z)`$, we obtain $$\frac{\mathrm{\Phi }(\mathrm{B3})}{\mathrm{\Phi }(\mathrm{B2})}=\frac{G(1,1)G(1+\epsilon ,1+\epsilon )}{G(1,1+\epsilon )G(1,1+2\epsilon )}=\frac{3d-10}{d-3}$$ (15) Then (13) is a basis for the 6 remaining cases, since (8,12) are independent. $`\mathrm{\square }`$. Two comments are in order. First, the factor $`d-3`$ in (12) ensures that the $`\epsilon `$ expansions of $`\overline{\zeta }_{5,S}`$ are all pure: at order $`\epsilon ^n`$ one encounters multiple zeta values, exclusively of weight $`n+5`$. All mixing derives from the rational coefficients, of which only $`R_{3,0}`$ and $`R_{3,3}`$ may be singular at $`d=4`$. Secondly, there are two combinations of $`\{\overline{\zeta }_{5,S}:S=L,M,Q\}`$ that may be reduced to $`\mathrm{\Gamma }`$ functions. No purpose is served by making such reductions: one would need 5th order Taylor series for those $`\mathrm{\Gamma }`$ functions to arrive back at a result for $`d=4`$ that is immediately available from (13). Moreover, such reduction would obscure our physical conclusion. > Corollary: In massless gauge theory, $`\overline{\mathrm{MS}}`$-renormalized 3-loop single-scale Green functions do not involve $`\pi ^4`$.
Proof: The lemma shows that $`\zeta (4)=\pi ^4/90`$ may arise only from (9). This would occur only if $`R_{3,3}`$ is singular. But such a singularity, in the sum of diagrams, would require a counterterm $`\zeta (3)/\epsilon `$ in the coupling. It is proven that the 3-loop $`\beta `$ function of any gauge theory is rational , hence no such counterterm is available. $`\mathrm{\square }`$ Now one sees why cancellations of $`\pi ^4`$ were regarded as surprising in computations such as . The bare $`\zeta (4)`$ terms generated by mincer appeared to have several sources: Taylor expansions of $`\mathrm{\Gamma }`$ functions from diagrams in classes A1, A2, B1, B2, B3. By contrast, slicer outputs the 6 exact $`d`$-dimensional rational functions of the lemma. In our approach, one is bound to find that $`R_{3,3}`$ is regular in the sum of diagrams of a massless gauge theory; slicer encodes the suppression of $`\pi ^4`$, ab initio. Note, however, that the 3-loop anomalous mass dimension involves $`\zeta (3)`$. Thus if one expands in the fermion mass, $`\pi ^4`$ may emerge. Indeed, one finds $`\zeta (4)`$ in non-abelian terms of the 4-loop anomalous mass dimension . Its absence from the 4-loop beta function is guaranteed by the lemma and serves as a check of the full result . ## 4. Four-loop beta function For $`\beta _4`$, we need the finite 3-loop term in (3). Instead of the $`7\times 5\times 3=105`$ four-loop diagrams of , there are only $`5\times 3=15`$ three-loop diagrams. These are reduced to 8, by symmetries. Each contains a single fermion loop: a hamiltonian circuit with 6 vertices. Consider a bare fermion propagator, with momentum $`p`$. Without a background field, it would simply be $`S(p)=i/\not{p}`$.
To take account of the abelian background field, $`F_{\mu \nu }`$, one replaces this by $$S(p+iD)=S(p)-S(p)\not{D}S(p)+S(p)\not{D}S(p)\not{D}S(p)+O(D^3)$$ (16) where $`D_\mu `$ is an operator in a Fock-Feynman-Schwinger formalism that is oblivious to the momentum integrations, feeling only the order of $`\gamma `$ matrices. All one needs to know is that $`[D_\mu ,D_\nu ]=ieF_{\mu \nu }`$, since $`D_\mu `$ acts as a gauge-covariant derivative in the external configuration space. For a modern review of the non-abelian case, see . Here, we need only obtain the coefficient of $`F^2:=F_{\mu \nu }F^{\mu \nu }`$, using $$D_\alpha D_\beta D_\mu D_\nu =\left(g_{\alpha \nu }g_{\beta \mu }-g_{\alpha \beta }g_{\mu \nu }\right)\frac{e^2F^2}{2d(d-1)}$$ (17) for the expectation value of 4 external derivations. This is contracted with all ways of making 4 ordered insertions of gamma matrices in the 6 fermion propagators on a hamiltonian circuit that begins and ends at an external vertex of the 3-loop Feynman diagram. The tensor in (17) implies that the second-order expansion (16) suffices for any single propagator: insertion of more than 2 gamma matrices in any one propagator leads to a vanishing contraction. Thus no infrared divergence is produced. Simple counting shows that each diagram produces 180 traces, containing 20 gamma matrices. This astoundingly user-friendly method is CPU-intensive. In the Dyson-Schwinger method , it was possible to take traces before double differentiation w.r.t. an external photon momentum, thus limiting the traces to 16 gamma matrices. Moreover, for the most demanding integrals one needed only traces taken at $`d=4`$. Now, the exact handling of two non-commutative algebras – Dirac’s gamma matrices and Schwinger’s covariant derivatives – requires more machine time, while the physicist relaxes.
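The counting just quoted can be reproduced with a short script. This is my own illustrative bookkeeping, not spelled out in the text: the factor 2 is taken to come from the two metric contractions in (17), and the 20 gamma matrices are split as one per vertex, one per slashed propagator factor, and one per $`\not{D}`$ insertion.

```python
def insertion_patterns(n_insertions, n_propagators, max_per_prop=2):
    # Number of ways to distribute the ordered D-slash insertions over the
    # fermion propagators with at most two per propagator (more than two
    # vanish by eq. (17)): the coefficient of x^n in (1 + x + x^2)^m,
    # built up by repeated polynomial convolution.
    counts = [1] + [0]*n_insertions
    for _ in range(n_propagators):
        new = [0]*(n_insertions + 1)
        for total, ways in enumerate(counts):
            for k in range(max_per_prop + 1):
                if total + k <= n_insertions:
                    new[total + k] += ways
        counts = new
    return counts[n_insertions]

placements = insertion_patterns(4, 6)   # 90 ways to place 4 insertions
n_traces = 2 * placements               # x2 from the two terms in eq. (17)
# 6 vertices + (6 + 4) slashed propagator factors + 4 D-slashes = 20 gammas
n_gammas = 6 + (6 + 4) + 4
```

With these assumptions one recovers the quoted 180 traces of 20 gamma matrices per diagram.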
Within hours of reading , I had typed 8 lines into slicer, one for each of the 8 diagrams, added a short procedure to generate the $`8\times 180=1440`$ traces, and farmed the problem out, using a cluster of DecAlpha machines. Since slicer, using reduce, is wildly uncompetitive with Jos Vermaseren’s lightning-fast implementation of mincer in form, it was 2 days later when the answer $`-46`$ appeared on screen. However, the object was not speed. Rather it was a better understanding of cancellation of zeta values, which was mysterious in the faster Dyson-Schwinger method . ## 5. Fock-Feynman-Schwinger anatomy The Dyson-Schwinger anatomy of $`\beta _3`$ exhibits cancellation of $`\zeta (3)`$ between diagrams in Landau gauge , in Feynman gauge , and indeed in any gauge . In the background-field method, $`\beta _3`$ comes from two 2-loop diagrams, each of which is free of $`\zeta (3)`$, as $`d\to 4`$, in any combination of internal and external gauges. We used a gauge $`(q^2g_{\mu \nu }+(\xi _{\mathrm{int}}-1)q_\mu q_\nu )/q^4`$, for the internal photon propagator, and contracted (2) with $`k^2g_{\mu \nu }+(\xi _{\mathrm{ext}}-1)k_\mu k_\nu `$, where $`k`$ is the external photon momentum. As $`\epsilon \to 0`$, we found $`C_3(\mathrm{PE})`$ $`=`$ $`-\frac{1}{2}+\xi _{\mathrm{int}}\left({\displaystyle \frac{2}{\epsilon }}+1\right)-\xi _{\mathrm{ext}}\left({\displaystyle \frac{1}{\epsilon }}+2\right)-\xi _{\mathrm{int}}\xi _{\mathrm{ext}}`$ (18) $`C_3(\mathrm{DF})`$ $`=`$ $`-\frac{1}{2}-\xi _{\mathrm{int}}\left({\displaystyle \frac{2}{\epsilon }}+1\right)+\xi _{\mathrm{ext}}\left({\displaystyle \frac{1}{\epsilon }}+2\right)+\xi _{\mathrm{int}}\xi _{\mathrm{ext}}`$ (19) $`2\beta _3/\beta _2`$ $`=`$ $`-1`$ (20) where PE and DF identify the photon-exchange and dressed-fermion contributions. There is no sign of $`\zeta (3)`$ in either diagram. This was no accident, as witnessed by the 4-loop result, obtained in Feynman gauge, for the sake of economy.
As $`\epsilon \to 0`$, the contributions and total for $`3\beta _4/\beta _2`$ are $`C_4(\mathrm{A1})`$ $`=`$ $`{\displaystyle \frac{2}{3\epsilon ^2}}-{\displaystyle \frac{7}{3\epsilon }}-{\displaystyle \frac{58}{3}}`$ (21) $`C_4(\mathrm{B2})`$ $`=`$ $`{\displaystyle \frac{2}{3\epsilon ^2}}-{\displaystyle \frac{4}{3\epsilon }}+{\displaystyle \frac{5}{3}}`$ (22) $`C_4(\mathrm{B3})`$ $`=`$ $`-{\displaystyle \frac{3}{2\epsilon }}-{\displaystyle \frac{21}{4}}`$ (23) $`C_4(\mathrm{C2})`$ $`=`$ $`64\zeta (3)-{\displaystyle \frac{4}{3\epsilon ^2}}+{\displaystyle \frac{26}{3\epsilon }}+79`$ (24) $`C_4(\mathrm{C4})`$ $`=`$ $`-{\displaystyle \frac{32}{3}}\zeta (3)-{\displaystyle \frac{4}{3\epsilon ^2}}+{\displaystyle \frac{6}{\epsilon }}+{\displaystyle \frac{209}{9}}`$ (25) $`C_4(\mathrm{D1})`$ $`=`$ $`{\displaystyle \frac{320}{3}}\zeta (3)+{\displaystyle \frac{4}{3\epsilon ^2}}-{\displaystyle \frac{9}{\epsilon }}-{\displaystyle \frac{2695}{18}}`$ (26) $`C_4(\mathrm{D2})`$ $`=`$ $`-64\zeta (3)-{\displaystyle \frac{9}{2\epsilon }}-{\displaystyle \frac{1177}{12}}`$ (27) $`C_4(\mathrm{E1})`$ $`=`$ $`-96\zeta (3)+{\displaystyle \frac{4}{\epsilon }}+134`$ (28) $`3\beta _4/\beta _2`$ $`=`$ $`-{\displaystyle \frac{69}{2}}`$ (29) with diagrams identified by Fig. 2. There is no sign of $`\zeta (5)`$ in any diagram. This, again, is radically different from the Dyson-Schwinger method. Even more impressive is the distribution of $`\zeta (5)`$ between the 4 distinct slices in (12). By construction, slicer is oblivious to the contingency that these integrals happen to be equal at $`d=4`$. Thus it records that each is cancelled separately, in each diagram, by the Dirac-Schwinger traces that produce 4 exact rational functions of $`d`$, each vanishing at $`d=4`$. Thus, in the background-field method, the absence of $`\zeta (5)`$ relies neither on conspiracies between different diagrams nor on any happenstance of 4-dimensional analysis.
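The advertised cancellations can be checked with exact rational arithmetic. The table below is my own transcription of the per-diagram coefficients of (21)–(28), with the signs of the individual $`\zeta (3)`$ terms chosen so that the cancellations described in the text hold and the constants sum to the quoted total $`-69/2`$; it is a consistency check of the transcription, not an independent derivation.

```python
from fractions import Fraction as F

# (zeta(3) coefficient, 1/eps^2, 1/eps, constant) for each background-field diagram
contributions = {
    'A1': (F(0),      F(2, 3),  F(-7, 3), F(-58, 3)),
    'B2': (F(0),      F(2, 3),  F(-4, 3), F(5, 3)),
    'B3': (F(0),      F(0),     F(-3, 2), F(-21, 4)),
    'C2': (F(64),     F(-4, 3), F(26, 3), F(79)),
    'C4': (F(-32, 3), F(-4, 3), F(6),     F(209, 9)),
    'D1': (F(320, 3), F(4, 3),  F(-9),    F(-2695, 18)),
    'D2': (F(-64),    F(0),     F(-9, 2), F(-1177, 12)),
    'E1': (F(-96),    F(0),     F(4),     F(134)),
}

# zeta(3) and both pole columns sum to zero; the constants give 3*beta_4/beta_2
totals = [sum(row[i] for row in contributions.values()) for i in range(4)]
```

The first three totals vanish and the last equals $`-69/2`$, i.e. $`\beta _4=-46`$ for $`\beta _2=4`$.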
This is a reasonable type of rationality, that one may realistically hope to understand better, by essentially combinatoric methods. It will be noted that the subleading zeta value, $`\zeta (3)`$, is seen in results (24–28), for individual diagrams that are slices of the C, D, E topologies of Fig. 1. However, it should also be noted that this subleading irrational occurs only at subleading order in $`\epsilon `$. Topologies C and D produce $`\zeta (3)/\epsilon `$ singularities, at the level of individual Dirac-Schwinger traces. These conspicuously cancel, diagram by diagram. Thus background-field trace algebra accounts for 6 of the 7 features of rationality up to 4 loops: the cancellation of $`\zeta (3)`$ at 3 loops; of all 4 species of $`\zeta (5)`$, separately, at 4-loops; of $`\zeta (3)/\epsilon `$ at 4 loops. Only the final cancellation – subleading irrational, subleading in $`\epsilon `$ – entails conspiracy between Feynman-gauge diagrams. A mincer analysis for all $`\xi _{\mathrm{int}}`$ and $`\xi _{\mathrm{ext}}`$ would be fascinating. ## 6. Exercise and conclusion We have completed an instructive exercise in dimensional continuation of a finite quantity. 1. Evaluate the exact $`d`$-dimensional 3-loop radiative correction in (5). By the lemma of section 3, this amounts to finding 6 precisely defined rational functions of $`d`$ in $$\frac{3\overline{\beta }_4}{\overline{\beta }_2}=R_{3,0}+R_{3,3}\overline{\zeta }_3+\underset{S=L,M,N,Q}{\sum }R_{3,S}\overline{\zeta }_{5,S}$$ (30) with wheel-slice basis (8,12). Forget $`\pi ^4`$; in massless gauge theory it never happens. 2. Verify that $`\{R_{3,S}:S\ne 0\}`$ contain the factor $`d-4`$. Dissect this, diagram by diagram. Conclusion: rationality appears far more reasonable than in any other method. 3. Find the numerator and denominator of $`R_{3,0}=N(d)/D(d)`$.
Solution: $`N(d)`$ $`=`$ $`1215d^{11}-53433d^{10}+1072059d^9-12995191d^8+105924166d^7`$ (31) $`-609433848d^6+2520429944d^5-7469717936d^4+15495188128d^3`$ $`-21364053504d^2+17580978560d-6532684800`$ $`D(d)`$ $`=`$ $`2(d-1)(d-3)^3(d-5)^2(d-6)^3(d-8)(3d-8)(3d-10)^2`$ (32) Hence, in 4-loop quenched QED, $`\beta _4=N(4)/(2^8\times 3^2)`$. Check that the answer is $`-46`$. I conclude that the key to Jonathan Rosner’s fine puzzle was given by Marshall Baker and Kenneth Johnson, in Eq (3.3) of . Noting the profound work of Alain Connes and Dirk Kreimer , one arrives at the nub of the rationality of quenched QED: dimensional continuation of the derivative of the scheme-independent single-fermion-loop Gell-Mann–Low function, via the Fock–Feynman–Schwinger formalism (16,17). It remains to be seen whether this can tell us what comes after $`\beta _4=-46`$. Hope has risen. Acknowledgements: Encouragement from Marshall Baker and Jonathan Rosner kept me believing that there was a key to be found. Recollection of enjoyable work with Sotos Generalis and Andrey Grozin reminded me of (17). Jos Vermaseren encouraged inclusion of my explanation of the suppression of $`\pi ^4`$. Alain Connes and Dirk Kreimer provided the vital impetus for an exact $`d`$-dimensional analysis, by showing me an early version of . Fig 1: The 5 trivalent four-loop bubble diagrams, presented as chord diagrams Fig 2: The 12 trivalent 3-loop 2-point diagrams, from slices of Fig. 1
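The quoted solution can be checked directly with integer arithmetic (a quick script of my own, using the alternating-sign coefficients of (31) and the factorized denominator (32)):

```python
from fractions import Fraction

# Coefficients of d^11, d^10, ..., d^0 in N(d), eq. (31)
N_COEFFS = [1215, -53433, 1072059, -12995191, 105924166, -609433848,
            2520429944, -7469717936, 15495188128, -21364053504,
            17580978560, -6532684800]

def N(d):
    return sum(c * d**(11 - i) for i, c in enumerate(N_COEFFS))

def D(d):
    # Eq. (32)
    return 2*(d - 1)*(d - 3)**3*(d - 5)**2*(d - 6)**3*(d - 8)*(3*d - 8)*(3*d - 10)**2

beta4 = N(4) // (2**8 * 3**2)    # beta_4 = N(4)/(2^8 * 3^2) = -46
ratio = Fraction(N(4), D(4))     # R_{3,0}(4) = 3*beta_4/beta_2 = -69/2
```

Both routes agree: N(4) = −105984, so beta4 = −46 and the rational function (10)'s 4-loop analogue evaluates to −69/2 at d = 4.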
# Fabry-Perot Absorption-Line Spectroscopy of NGC 7079 ## 1. Introduction Bar pattern speeds, $`\mathrm{\Omega }_p`$, constrain the dark matter content of barred galaxies (Debattista & Sellwood 1997). The measurement of $`\mathrm{\Omega }_p`$ in a statistically significant number of galaxies is, therefore, very desirable. Tremaine & Weinberg (1984) derived a model-independent equation for $`\mathrm{\Omega }_p`$ of systems satisfying the continuity equation. Using this method with slit spectra, Merrifield & Kuijken (1995) and Gerssen et al. (1998) found fast bars (i.e. corotation radius/bar semi-major axis $`1.2\pm 0.2`$) in NGC 936 and NGC 4596 respectively. ## 2. Observations NGC 7079 was selected for observation because it has a suitably placed disk and bar, has several bright stars within $`80^{\prime \prime }`$ of the galaxy center (for normalizations), lacks spirals, dust and large companions, is large and bright, has a known recession velocity and the CaII 8542.14 Å line is redshifted to wavelengths outside the bright sky-emission forests. This galaxy, classified as (L)SB(r)0$`^0`$, is the brightest member of a group of seven galaxies (Garcia 1993). The slit spectra of Bettoni & Galletta (1996) through the center of NGC 7079 showed a small amount of counter-rotating gas in the inner $`15^{\prime \prime }`$. We used the CTIO 0.9m telescope to obtain multiple $`U`$, $`B`$, $`V`$, $`R`$ and $`I`$ exposures of 300 $`s`$ each, with seeing $`1.5^{\prime \prime }`$. After processing in the standard way, the exposures in each filter were combined. Ellipse fits at large radii ($`R51.5^{\prime \prime }`$) gave a disk inclination, $`i`$, of $`50.3_{-0.4}^{+0.3}`$, which compares well with the results of Bettoni & Galletta ($`i=51^{\circ }`$). The $`B`$-$`I`$ map shows that the disk and the bulge each have a constant color, indicating uniform (or zero) internal extinction. Thus the Tremaine-Weinberg method can be used for this galaxy.
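The Tremaine-Weinberg estimator itself is compact. The following is my own toy sketch with invented numbers, not the NGC 7079 data: along a strip parallel to the disk major axis, the luminosity-weighted integrals of line-of-sight velocity and position give $`\mathrm{\Omega }_p\mathrm{sin}i`$, and a mock pattern rotating rigidly is recovered exactly (on a uniform grid, plain sums stand in for the integrals, since the common quadrature weights cancel in the ratio).

```python
import numpy as np

def omega_p_sin_i(x, sigma, v_los):
    # Tremaine & Weinberg (1984):
    #   Omega_p * sin(i) = Int(Sigma * V_los dx) / Int(Sigma * x dx)
    # along a strip parallel to the disk major axis (uniform sampling assumed).
    return np.sum(sigma * v_los) / np.sum(sigma * x)

# Mock strip: an off-centre brightness profile and a pattern rotating
# rigidly at Omega_p * sin(i) = 0.5 km/s/arcsec (assumed numbers).
x = np.linspace(-40.0, 40.0, 801)           # arcsec along the strip
sigma = np.exp(-0.5*((x - 5.0)/10.0)**2)    # toy surface brightness
v_los = 0.5 * x                             # km/s
```

Applied to real data, one strip per offset from the major axis yields an independent estimate, which is the freedom exploited below.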
We observed NGC 7079 with the Rutgers Fabry-Perot imaging interferometer on the CTIO 4m telescope, with variable seeing of $`1.4^{\prime \prime }`$–$`2.4^{\prime \prime }`$. We used the CaII 8542.14 Å absorption line, redshifted to 8618 Å, scanning the spectrum from $`8608`$ Å to $`8631`$ Å, in steps of $`1`$ Å, for a total of 25 exposures of 900 seconds each. After flattening, zero subtraction and cosmic ray removal in the usual way, sky and sky-emission rings were subtracted by radial sampling in the half of the frame which excluded the galaxy. The frames were normalized using bright field stars and then Voigt profiles were fitted to the spectrum at each pixel, giving maps of the velocity and dispersion. In Figure 1 we compare our velocity and dispersion data with those of Bettoni & Galletta along their major-axis slit; our data compare well inside $`r=20^{\prime \prime }`$ from the galaxy center. Figure 1 shows peaks in the velocity dispersion away from the galaxy center. We are still trying to understand their cause; perhaps they are related to the infalling material found by Bettoni & Galletta. ## 3. Discussion With the velocity and photometric data, we can now calculate $`\mathrm{\Omega }_p`$ for NGC 7079. Since we have 2D data, we will take full advantage of the freedom in the Tremaine-Weinberg method to reduce the uncertainty in the measurement of $`\mathrm{\Omega }_p`$. ## References Bettoni, D. & Galletta, G. 1997, A&A, 124, 61 Debattista, V.P. & Sellwood, J.A. 1997, ApJ, 493, L5 Garcia, A.M. 1993, A&AS, 100, 47 Gerssen, J., Kuijken, K. & Merrifield, M.R. 1999, MNRAS, 306, 926 Merrifield, M.R. & Kuijken, K. 1995, MNRAS, 274, 933 Tremaine, S. & Weinberg, M.D. 1984, MNRAS, 209, 729
## 1 Introduction The inner regions of many planetary nebulae (PNs) and proto-PNs show much larger deviation from sphericity than the outer regions (for catalogs of PNs and further references see, e.g., Acker et al. 1992; Schwarz, Corradi, & Melnick 1992; Manchado et al. 1996; Sahai & Trauger 1998; Hua, Dopita, & Martinis 1998). By “inner regions” we refer here to the shell that was formed from the superwind – the intense mass loss episode at the termination of the AGB, and not to the rim that was formed by the interaction with the fast wind blown by the central star during the PN phase (Frank, Balick, & Riley 1990). This type of structure suggests that there exists a correlation, albeit not perfect, between the onset of the superwind and the onset of a more asymmetrical wind. In extreme cases the inner region is elliptical while the outer region (outer shell or halo) is spherical (e.g., NGC 6826, Balick 1987). Another indication of this correlation comes from spherical PNs. Of the 18 spherical PNs listed by Soker (1997, table 2), $`75\%`$ do not have superwind but just an extended spherical halo. I consider two types of mechanisms that can in principle cause this correlation. In the first type ($`\mathrm{\S }2`$) a primary process or event causes both the increase in the mass loss rate and its deviation from spherical geometry. A primary mechanism or event may be external or internal to the star. An external event is a late interaction with a stellar or substellar companion (Soker 1995; 1997), while an internal mechanism can be the rapid changes in some of the envelope properties on the upper AGB due to a high mass loss rate and the rapid decrease of the extended envelope mass. Such changes can lead to mode-switch to nonradial oscillations (Soker & Harpaz 1992), or an increase in magnetic activity when the density profile below the photosphere becomes much shallower, and the entropy profile much steeper (Soker & Harpaz 1999).
In the second type ($`\mathrm{\S }3`$), the increase in the mass loss rate on the upper AGB (the so-called superwind) makes possible a mechanism which is very inefficient at low mass loss rates (Soker 2000). In the present paper I review four recent works on this topic, all of which deal with enhanced dust formation, hence enhanced mass loss rate, above magnetic cool spots on the surface of AGB stars (Soker 1998; Soker & Clayton 1999; Soker & Harpaz 1999; Soker 2000). The mechanisms proposed here do not invoke any new mass loss mechanism, but use the generally accepted model for the high mass loss rate on the upper AGB, which includes strong stellar pulsations coupled with large quantities of dust formation at a few stellar radii around the stellar surface (e.g., Wood 1979; Jura 1986; Knapp 1986; Bedijn 1988; Bowen & Willson 1991; Fleischer, Gauger, & Sedlmayr 1992; Woitke, Goeres, & Sedlmayr 1996; Habing 1996; Höfner & Dorfi 1997; Andersen, Loidl, & Höfner 1999). Again, the proposed mechanism(s) applies (apply) only to elliptical PNs, and not to bipolar PNs. The latter require stellar companions. ## 2 Magnetic Cool Spots In the first paper (Soker 1998), I presented the basic ingredient of the model. The main assumption is that dynamo magnetic activity results in the formation of cool spots, above which dust forms much easily. The dynamo dictates a general stronger activity toward the equator, but with significant sporadic behavior. The sporadic behavior leads to the formation of filaments, arcs, and clumps in the descendant PN. The enhanced magnetic activity toward the equator results in a higher dust formation rate there, hence higher mass loss rate. In that paper I assumed that the dynamo activity increases as the star ascends the AGB. Independently, mass loss rate increases as well, due to the increase of the density scale height (Bedijn 1988; Bowen & Willson 1991).
In this model, the increase of dynamo magnetic activity is attributed to the decreasing density of the envelope, due to mass loss and expansion, which makes the density profile below the photosphere much shallower and the entropy profile much steeper (Soker & Harpaz 1999). The main points of the analysis of Soker (1998) and Soker & Harpaz (1999) are as follows. (1) In order for the dynamo to stay effective to the upper AGB, the AGB star should be spun-up by a companion, and/or the dynamo must be effective even for rotation velocity of $`\omega \sim 10^{-5}\omega _{\mathrm{Kep}}`$, where $`\omega _{\mathrm{Kep}}`$ is the Kepler velocity of a test particle on the equator. For the envelope spin-up, if it occurs on the upper AGB, a planet companion of mass $`\gtrsim 0.1M_{\mathrm{Jupiter}}`$ is sufficient. Born-again AGB stars may hint that the mechanism is efficient even for $`\omega \sim 0.3\times 10^{-4}\omega _{\mathrm{Kep}}`$. (2) The angular velocity decreases rapidly as the envelope mass decreases toward the termination of the AGB. In the model this decrease is more than compensated by the increase of the vulnerability of dust formation and photospheric conditions to the magnetic activity, due to the shallower density profile and steeper entropy profile. (3) The required magnetic activity $`\dot{E}_B`$ does not depend on the mass loss rate. (4) Because the magnetic energy released through the photosphere is much below both the kinetic and thermal energy carried by the wind, the magnetic activity will not heat the region above the photosphere, except perhaps in localized regions where the magnetic energy becomes extremely strong. (5) The solar magnetic activity has a cycle of an 11-year period. If such a cycle exists in upper AGB stars, it will, according to the proposed model, cause oscillations in the mass loss rate. Can it explain the almost periodic shells (or arcs) found in several PNs (e.g., CRL 2688 \[Egg Nebula\], Sahai et al.
1998; IRAS 17150-3224, Kwok, Su, & Hrivnak 1998), and the AGB star IRC+10216 (Mauron & Huggins 1999)? ## 3 Radiation Shielding by Dust In a recent paper (Soker 2000) I consider the second type of mechanism, where the increase in the departure from spherical mass loss results from the increase of the mass loss rate. The mass loss rate increases due to the increase of the density scale height (Bedijn 1988; Bowen & Willson 1991). The large quantities of dust formed above a cool spot during the high mass loss rate phase shield the region above it from the stellar radiation. This leads to further dust formation in the shaded region and, because of the lower temperature and pressure there, to the convergence of the stream toward the shaded region, forming a flow of higher density than its surroundings. This density contrast can be as high as $`4`$. A concentration of magnetic cool spots toward the equator will lead to a density contrast of up to $`5`$ between the equatorial and polar directions. The shielding does not occur for low mass loss rates, hence the positive correlation between mass loss rate and the degree of the departure from sphericity. An interesting result of the dust shielding is the required spot size. Without shielding, the temperature above a cool spot does not fall with radial distance from the surface as steeply as the temperature of the environment (Frank 1995). For the region to stay cool enough to form dust, the spot must be large: its radius should be $`b_s\gtrsim 0.5R_{\ast }`$ (Frank 1995). This is a large spot, which is not easy to form by concentration of small magnetic flux tubes (Soker & Clayton 1999). However, with the dust forming very close to the surface, as is suggested for cool magnetic spots (Soker & Clayton 1999), the shielded region forms dust, which in turn shields a region farther away.
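Point (4) of the list in $`\mathrm{\S }2`$ compares the magnetic energy flux with the kinetic and thermal power carried by the wind. A rough sketch of the latter two quantities, using assumed upper-AGB wind parameters (the mass loss rate, wind speed, and wind temperature below are illustrative values, not numbers from the text):

```python
M_SUN = 1.989e30      # kg
YEAR = 3.156e7        # s
L_SUN = 3.846e26      # W
K_B = 1.381e-23       # J/K
M_H = 1.673e-27       # kg, hydrogen atom mass

# Assumed upper-AGB superwind parameters (illustrative only)
mdot = 1e-5 * M_SUN / YEAR   # mass loss rate, kg/s
v_wind = 10e3                # wind speed, m/s
T_wind = 1000.0              # wind temperature, K

# Kinetic and thermal power carried by the wind
L_kin = 0.5 * mdot * v_wind**2
L_th = 2.5 * (K_B * T_wind / M_H) * mdot   # ~(5/2) kT per hydrogen atom

print(f"kinetic power ~ {L_kin / L_SUN:.2f} L_sun")
print(f"thermal power ~ {L_th / L_SUN:.2f} L_sun")
```

With these assumed values the wind carries only a few per cent of a solar luminosity in kinetic and thermal form; this is the scale below which the magnetic energy flux must lie for the argument of point (4) to hold.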
Therefore, when dust forms very close to the surface of a small cool spot with high optical depth, dust will form in the entire shaded region even when the spot is much smaller than is required without dust shielding. Not only does the proposed flow allow a higher mass loss rate from small cool spots, it is also limited to small spots. Above a large spot the relative amount of mass entering from the surroundings is small, and since the radiation from the spot is weaker, the material in the shadow will not be accelerated much. ACKNOWLEDGMENTS: This research was supported in part by a grant from the Israel Science Foundation.
no-problem/9909/astro-ph9909350.html
ar5iv
text
# Revised Baade-Wesselink Analysis of RR Lyrae Stars ## 1. Introduction The distance scale problem is not yet solved; the dichotomy between the long and short distance scales is becoming even more clear-cut. In favour of the short distance scale (i.e. faint $`M_V(RR)`$, $`(m-M)_{LMC}\simeq 18.3`$) are the studies on RR Lyrae statistical parallaxes (Gould & Popowski 1998), local RR Lyrae kinematics (Martin & Morrison 1998), eclipsing binaries in the LMC (Udalski et al. 1998, but see also Guinan et al. 1998 for a contrasting result), red clump stars (Cole 1998), and Hipparcos parallaxes of field HB stars (Gratton 1998). On the other hand, in favour of the long distance scale (i.e. bright $`M_V(RR)`$, $`(m-M)_{LMC}\simeq 18.5`$) are the studies on subdwarf main-sequence fitting in globular clusters (Gratton et al. 1997; Reid 1997; Chaboyer et al. 1998; Pont et al. 1998; Grundahl, VandenBerg, & Andersen 1998; Carretta et al. 1999), RGB bump stars in globular clusters (Ferraro et al. 1999), the tip of the RGB (Lee, Freedman & Madore 1993), the RR Lyrae period-shift effect (Sandage 1993), RR Lyrae double-mode pulsators (Kovacs & Walker 1998), Hipparcos parallaxes of Cepheids (Feast & Catchpole 1997), and SN1987A in the LMC (Panagia 1998). The Baade-Wesselink (B-W) analysis of RR Lyrae stars can help solve this problem. For a review and general description of this method and its assumptions and approximations see for example Gautschy (1987). Previous B-W analyses were performed on 29 field RR Lyraes by a few independent groups, e.g. Liu & Janes (1990), Jones et al. (1992), Cacciari, Clementini & Fernley (1992), and Skillen et al. (1993). These references are only indicative and are by no means exhaustive or complete. The results of these and several more B-W studies have been reanalysed and summarised by Fernley (1994) and Fernley et al. (1998), who find a relation $`M_V(RR)=(0.20\pm 0.04)[Fe/H]+(0.98\pm 0.05)`$ between the absolute visual magnitude and the metallicity of the RR Lyrae stars.
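As a quick check, evaluating the Fernley et al. (1998) relation quoted above (central coefficient values only) at representative metallicities reproduces the single-star determinations cited later in this paper:

```python
def m_v_rr(feh):
    """Fernley et al. (1998): M_V(RR) = 0.20*[Fe/H] + 0.98 (central values)."""
    return 0.20 * feh + 0.98

print(f"{m_v_rr(0.0):.2f}")    # 0.98 -- solar metallicity, close to SW And's 0.94
print(f"{m_v_rr(-1.5):.2f}")   # 0.68 -- reproduces the Fernley (1994) RR Cet value
print(f"{m_v_rr(-1.9):.2f}")   # 0.60
```

The [Fe/H] = 0.0 and −1.5 inputs correspond to the SW And and RR Cet metallicities adopted in Section 3.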
This result is intermediate but closer to the short distance scale, corresponding to $`(m-M)_{LMC}\simeq 18.34`$. The previous B-W analyses were based on: i) Kurucz (1979) model atmospheres, with turbulent velocity $`V_{turb}`$ = 2 km/s; ii) a semi-empirical temperature calibration; iii) mostly IR (i.e. K), but also visual V, R and I photometry; iv) the use of an average value for gravity (usually log$`g`$=2.75); v) a restricted phase range for fitting (usually 0.35 to 0.80); vi) the barycentric (also called $`\gamma `$-) velocity derived as a simple integration over the entire pulsation cycle; vii) a constant factor (1.38) to transform radial into pulsational velocities. Several model atmospheres are now available, with improved opacities, different treatments of convection, different values of the turbulent velocity, and more accurate temperature and BC calibrations. The aim of the present work is to test the effect of these new models and calibrations, as well as of other assumptions, on the B-W results for RR Lyrae stars. ## 2. Our re-analysis: assumptions and approximations ### 2.1. New elements $``$ Model atmospheres. We have used the following sets of model atmospheres: i) Kurucz (1995) with the MLT+overshooting treatment of convection, $`V_{turb}`$ = 2 and 4 km/s, \[m/H\]=0.0; ii) Castelli (1999b) with the MLT treatment of convection without overshooting, \[m/H\]=0.0 and $`V_{turb}`$ = 2 km/s, and \[m/H\]=–1.5 and $`V_{turb}`$ = 2 and 4 km/s. The models at \[m/H\]=–1.5 are enhanced in $`\alpha `$-elements by \[$`\alpha `$/$`\alpha _{\odot }`$\]=+0.4. iii) Castelli (1999a) experimental models with no convection. These models do not have physical meaning, but are only intended to mimic the effects of recent treatments of convection, e.g.
MLT with l/H=0.5 instead of 1.25 (Fuhrmann, Axer, & Gehren 1993), or the Canuto & Mazzitelli (1992) approximation, which predicts very low or zero convection for stars with $`T_{eff}\gtrsim `$ 7000 K and has been suggested to provide a better match to the data (see Gardiner, Kupka, & Smalley 1999 for a recent re-discussion of this issue). The no-convection models are available for \[m/H\]=0.0 and –1.5 and $`V_{turb}`$ = 2 km/s. $``$ Gravities. The values of log$`g`$ have been calculated at each phase-step from the radius percentage variation (assuming $`\mathrm{\Delta }R/R\simeq 15\%`$) plus the acceleration component derived from the radial velocity curve. The zero-point was set from theoretical ZAHB models, i.e. log$`g`$=2.86 at the phase corresponding to the average radius. Note that all ZAHB models give an average log$`g`$=2.86 $`\pm `$ 0.01 (Dorman, Rood, & O’Connell 1993; Sweigart 1997 with no He-mixing; Chieffi, Limongi & Straniero 1998), with the only exception of the Sweigart (1997) models with He-mix=0.10, which give log$`g`$=2.75 but may not be applicable to our field stars. $``$ Semi-empirical Teff and BC calibration. We have used the Montegriffo et al. (1998) semi-empirical $`BC_V`$ and $`BC_K`$ calibration and temperature scale for Pop II giants, which is based on RGB and HB stars in 10 globular clusters. $``$ Gamma-velocity. The default value of the $`\gamma `$-velocity was estimated from integration of the observed radial velocity curve over the entire pulsation cycle, as was done in all previous B-W studies. However, Oke, Giver, & Searle (1962) had suggested for SU Dra that velocity gradients may exist in the atmosphere, and proposed to take them into account by correcting the observed radial velocities by a positive quantity that would vary about linearly between phase 0.95 and 0.40. On the other hand, Jones et al.
(1987) found no observational evidence of velocity gradients among weak metal lines in X Ari, at least within their observational errors ($`\pm `$ 2 km/s). Chadid & Gillet (1998), however, did seem to find some evidence of differential velocities among weak metal lines in RR Lyr: the radial velocity curve from the FeII($`\lambda `$4923.9) line shows a slightly larger amplitude than that from the FeI($`\lambda `$4920.5) line, which forms a little deeper in the atmosphere. A similar effect was found between BaII and TiII lines, and is related to the presence of strong shocks. Therefore, in addition to the default $`\gamma `$-velocity calculation from the observed RV curve, we have simulated two other cases: $`\gamma `$-1, where the RV curve has been corrected as suggested by Oke et al. (1962) albeit by a much smaller amount (+2.0 km/s at most), and $`\gamma `$-2, where the amplitude of the RV curve has been stretched by $`\pm `$ 5 km/s at the phases of maximum and minimum RV. These simulations are only numerical experiments and are not intended to provide realistic answers to the problem of radial velocity gradients in the atmosphere: the radial velocities for the RR Lyrae stars are derived from a large number of weak metal lines, and it is still totally unclear which correction (if any) should be applied to the average values in the presence of velocity gradients among some of these lines. ### 2.2. Old Assumptions $``$ The $`p`$-factor. The factor used to transform radial into pulsational velocity has been assumed to be 1.38, as in Fernley (1994). It might be a few per cent smaller; this conservative assumption gives the brightest possible luminosity as a function of $`p`$. $``$ Fitting phase interval.
As with all previous applications of the B-W method, the fitting has not been performed on the entire pulsation cycle, but on a restricted phase interval appropriate for each star: 0.30-0.80 for SW And, for best stability of the results; and 0.25-0.70 for RR Cet, to avoid shock-perturbed phases. ## 3. Results on SW And and RR Cet, and Discussion ### 3.1. SW And We have used all possible and compatible data from the literature, i.e. BVRIK and uvby photometry, and radial velocities. The adopted input parameters for this star are \[Fe/H\]=0.0, E(B–V)=0.09, $`\gamma `$-velocity=–19.94 km/s and $`<V_0>`$=9.44. The default model calibration on Vega led to systematically hotter temperatures (by $`\sim `$ 180 K) from the (B–V) colors with respect to all other colors. This difference has no physical justification, and is only due to a behaviour of the models that was already known and commented on by Castelli (1999b). It is worth mentioning that this hotter temperature by $`\sim `$ 180 K leads to a brighter $`M_V`$ magnitude by $`\sim `$ 0.12 mag using the combination V and B–V in the B-W analysis, and by only $`\sim `$ 0.02 mag using K and B–V. It is important to note that the use of the K magnitude with any color, in particular V–K, is the least affected by temperature uncertainties, and provides the most stable results. In Table 1 we summarize the values of $`M_V`$ that result from the use of the K magnitude and the various colors and models. The models are labelled as: K2 and K4 for Kurucz (1995) with $`V_{turb}`$= 2 and 4 km/s respectively; K2-nover and K2-noconv for Castelli (1999a,b) with $`V_{turb}`$= 2 km/s and the no-overshooting and no-convection approximations respectively; Montegriffo for the Montegriffo et al. (1998) temperature scale and $`BC_K`$ calibration. We note that: $``$ The use of the b–y colors yields unreliable results because the angular diameter curve is distorted and the fitting to the linear radius curve is poor.
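The mechanics of the B-W fit described above can be sketched as follows: integrate the radial-velocity curve into a linear radius curve (scaled by the $`p`$-factor) and match it to the angular-diameter curve by a linear fit whose slope gives the distance. All numbers below (period, velocity amplitude, mean radius, distance) are invented for illustration; a real analysis derives the angular diameters from the dereddened photometry and model-atmosphere fluxes:

```python
import numpy as np

P_DAYS = 0.45          # assumed pulsation period (illustrative)
P_FACTOR = 1.38        # radial -> pulsational velocity conversion

phase = np.linspace(0.0, 1.0, 400, endpoint=False)
# Toy radial-velocity curve with the gamma-velocity already subtracted (km/s).
v_rad = 30.0 * np.sin(2.0 * np.pi * phase)
# Pulsational velocity; positive v_rad (redshift) = contracting photosphere.
v_puls = -P_FACTOR * v_rad

# Integrate to get the radius variation Delta R(phase) in km
# (cumulative trapezoid, written out explicitly).
dt = (phase[1] - phase[0]) * P_DAYS * 86400.0   # seconds per phase step
dR = np.concatenate(([0.0], np.cumsum(0.5 * (v_puls[1:] + v_puls[:-1]) * dt)))

# Synthetic "observed" angular diameters theta = 2 (R0 + dR) / d
R0_TRUE = 4.0e6        # km, mean photospheric radius (~6 R_sun, illustrative)
D_TRUE = 1.5e15        # km (~50 pc), illustrative distance
theta_obs = 2.0 * (R0_TRUE + dR) / D_TRUE

# B-W fit: theta is linear in dR, with slope 2/d and intercept 2 R0 / d
slope, intercept = np.polyfit(dR, theta_obs, 1)
d_fit = 2.0 / slope
R0_fit = intercept / slope
print(f"distance recovered to {abs(d_fit / D_TRUE - 1):.1e} relative error")
```

The noiseless toy data are recovered exactly; with real photometry the scatter of the angular-diameter curve around the linear-radius curve is what drives the stability differences among colors discussed in the text.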
$``$ The no-convection models yield much brighter magnitudes, the exact amount of “brightening” depending on the details of the convection treatment. $``$ All models, except the no-convection ones, yield results consistent with the empirical relation by Montegriffo et al. (1998). $``$ The present results, except the no-convection ones, are consistent with the previous determination of $`M_V`$ for SW And, i.e. 0.94 (Fernley 1994). ### 3.2. RR Cet We have used all compatible BVRIK photometry and radial velocities from the literature. The adopted input parameters for this star are \[Fe/H\]=–1.5, E(B–V)=0.05, $`\gamma `$-velocity=–74.46 km/s and $`<V_0>`$=9.59. The default model calibration on Vega for the \[Fe/H\]=–1.5 models led to consistent temperatures, within $`\sim `$ 100 K, from all colors. In Table 2 we summarize the values of $`M_V`$ for RR Cet that result from the use of the K magnitude and the various colors and models. In addition to the models described in Table 1, we have here a few other cases: K4-nover for Castelli (1999b) with $`V_{turb}`$= 4 km/s and the no-overshooting approximation; K2/K4-nover for $`V_{turb}`$ = 4 km/s only at phases 0.60-1.00 and $`V_{turb}`$ = 2 km/s elsewhere; $`\gamma `$-1 for an RV curve corrected by +2 km/s at the phase of minimum RV; $`\gamma `$-2 for an RV curve whose amplitude has been stretched by $`\pm `$ 5 km/s. We note that: $``$ The no-convection models seem to have no significant effect on the derived magnitudes. $``$ The use of $`V_{turb}`$ = 4 km/s over the whole fitting phase interval yields insignificantly brighter magnitudes (by $`\sim `$ 0.03 mag) with respect to the case at $`V_{turb}`$ = 2 km/s. On the other hand, setting $`V_{turb}`$ = 4 km/s only at phases 0.60-1.00 and $`V_{turb}`$ = 2 km/s elsewhere leads to somewhat fainter magnitudes, the effect being stronger in the bluer colors. $``$ Correcting the RV curve as suggested by Oke et al.
(1962) by at most +2 km/s at the phase of minimum RV (case $`\gamma `$-1) leads to slightly fainter magnitudes. $``$ Correcting the RV curve by stretching its amplitude in analogy to the behaviour of FeII and BaII lines (case $`\gamma `$-2) leads to brighter magnitudes. The amount of brightening we have estimated is however too large, since the correction we applied ($`\pm `$ 5 km/s) was deliberately exaggerated for the sake of the computation. $``$ All models yield results consistent with the empirical relation by Montegriffo et al. (1998). $``$ The present results are slightly brighter than the previous determination of $`M_V`$ for RR Cet, i.e. 0.68 (Fernley 1994). ### 3.3. Conclusions $``$ Our simulations with the no-convection models are only meant to point out that, if the effect of convection were indeed overestimated by the present MLT, a more correct treatment with a reduced impact of convection would lead to brighter magnitudes for solar-metallicity stars, but would have smaller consequences for metal-poor stars. However, Gardiner et al. (1999) reach no definitive conclusion on this subject, and suggest that the classical treatment of convection with l/H=1.25 may still be the best approach in the temperature range 6000-7000 K, which is relevant for RR Lyrae stars around minimum light. $``$ The corrections to the radial velocity curves simulated with RR Cet are quite arbitrary and not sufficiently supported by observational evidence. They are only intended as numerical experiments to test their effect on the derived magnitudes. $``$ The present results are quite compatible with the results from previous analyses, and do not support a significantly brighter zero-point for the RR Lyrae luminosity scale. ## References Cacciari, C., Clementini, G., & Fernley, J. 1992, ApJ, 396, 219 Canuto, V., & Mazzitelli, I. 1992, ApJ, 389, 724 Carretta, E., Gratton, R., Clementini, G., & Fusi Pecci, F. 1999, ApJ, submitted (astro-ph/9902086) Castelli, F.
1999a, private communication Castelli, F. 1999b, A&A, 346, 564 Chaboyer, B., Demarque, P., Kernan, P.J., & Krauss, L.M. 1998, ApJ, 494, 96 Chadid, M., & Gillet, D. 1998, A&A, 335, 255 Chieffi, A., Limongi, M., & Straniero, O. 1998, private communication Cole, A.A. 1998, ApJ, 500, L137 Dorman, B., Rood, R.T., & O’Connell, R. 1993, ApJ, 419, 596 Feast, M.W., & Catchpole, R.M. 1997, MNRAS, 286, L1 Fernley, J. 1994, A&A, 284, L16 Fernley, J., Carney, B.W., Skillen, I., Cacciari, C., & Janes, K. 1998, MNRAS, 293, L61 Ferraro, F.R., et al. 1999, AJ, in press (astro-ph/9906248) Fuhrmann, K., Axer, M., & Gehren, T. 1993, A&A, 271, 451 Gardiner, R.B., Kupka, F., & Smalley, B. 1999, A&A, 347, 876 Gautschy, A. 1987, Vistas in Astron., 30, 197 Gould, A., & Popowski, P. 1998, ApJ, 508, 844 Gratton, R. 1998, MNRAS, 296, 739 Gratton, R., et al. 1997, ApJ, 491, 749 Grundahl, F., VandenBerg, D.A., & Andersen, M.I. 1998, ApJ, 500, L179 Guinan, E.F., et al. 1998, ApJ, 509, L21 Jones, R.V., Carney, B.W., Latham, D.W., & Kurucz, R.L. 1987, ApJ, 312, 254 Jones, R.V., Carney, B.W., Storm, J., & Latham, D.W. 1992, ApJ, 386, 646 Kovacs, G., & Walker, A.R. 1998, ApJ, 512, 271 Kurucz, R.L. 1979, ApJS, 40, 1 Kurucz, R.L. 1995, available at http://cfaku5.harvard.edu Lee, M.G., Freedman, W.L., & Madore, B.F. 1993, ApJ, 417, 553 Liu, T., & Janes, K.A. 1990, ApJ, 354, 273 Martin, J.C., & Morrison, H.L. 1998, AJ, 116, 1724 Montegriffo, P., Ferraro, F.R., Origlia, L., & Fusi Pecci, F. 1998, MNRAS, 297, 872 Oke, J.B., Giver, L.P., & Searle, L. 1962, ApJ, 136, 393 Panagia, N. 1998, Mem. S.A. It., 69, 225 Pont, E., Mayor, M., Turon, C., & VandenBerg, D.A. 1998, A&A, 329, 87 Reid, I.N. 1997, AJ, 114, 161 Sandage, A. 1993, AJ, 106, 703 Skillen, I., Fernley, J., Stobie, R.S., & Jameson, R.F. 1993, MNRAS, 265, 301 Sweigart, A.W. 1997, ApJ, 474, L23 Udalski, A., et al. 1998, ApJ, 509, L25
no-problem/9909/astro-ph9909349.html
ar5iv
text
# Galaxy-galaxy lensing in clusters: new results ## 1 Introduction Recent work on galaxy-galaxy lensing in the cores of rich clusters suggests that the average mass-to-light ratios and spatial extents of the dark matter halos associated with morphologically classified early-type cluster members are significantly different from those of comparable-luminosity field galaxies (Natarajan et al. 1998 \[N98\]). For field galaxies, galaxy-galaxy lensing has been used to place more uncertain constraints on halo masses and sizes, with the claim that the halos of field galaxies extend beyond 100 kpc (Brainerd, Blandford & Smail 1996; Ebbels et al. 2000; Hudson et al. 1998). The detailed mass distribution within clusters – the fraction of the total cluster mass that is associated with individual galaxies – has important consequences for the frequency of galaxy interactions. The global tidal field of the cluster potential well is strong enough to truncate the dark matter halo of a galaxy whose orbit penetrates the cluster core. Compact dark halos indicate a high probability of galaxy–galaxy collisions over a Hubble time within a rich cluster. However, since the internal velocity dispersions of cluster galaxies ($`120`$–200 km s$`^{-1}`$) are significantly lower than their orbital velocities, these interactions are in general unlikely to lead to mergers, suggesting that encounters of the kind simulated in the galaxy harassment picture by Moore et al. (1996) are frequent and lead to morphological transformation. In high resolution N-body simulations of galaxy halos within a rich cluster, Ghigna et al. (1998) report that halos that traverse the inner $`200`$ kpc of the cluster centre suffer significant tidal truncation.
From the analysis of local weak distortions in the cluster AC114, at a redshift of 0.31, N98 found that the total mass of a fiducial $`L^{\ast }`$ cluster spheroidal galaxy was contained within a halo of $`\sim `$ 15 kpc radius ($`\sim `$ 8–10 $`R_e`$), with a mass-to-light ratio $`M/L_V\simeq 23_{-6}^{+15}`$ (90% c.l., $`h=0.5`$) in solar units within this radius. This limit on the truncation radius points to cluster galaxies having significantly more compact and less massive halos than equivalently luminous field galaxies. However, we point out that several complex biases need to be taken into account to extend the analysis to such a non-uniform sample – these HST cluster-lenses span a large range in mass, richness, and X-ray luminosity; fortunately, they form a subset of the well-studied MORPHS clusters (Couch et al. 1998; Smail et al. 1997). No significant evolution in luminosity was found in the spheroidal populations of these clusters (Barger et al. 1998); therefore, any biases arising purely from differences in star formation activity due to the different morphological mixes at various redshifts are expected to be small. ## 2 Analysis techniques We model the cluster potential as a composite of a large-scale smooth component and the sum of smaller-scale perturbers which are associated with bright, early-type galaxies in the cluster. Details of the procedure can be found elsewhere (Natarajan & Kneib 1997; N98). Using both the observed strong lensing features and the shear field as constraints, the relative fractions of mass that can be attributed to the smooth component and to the perturbers are computed using a maximum-likelihood method.
The likelihood prescription provides bounds on both the fiducial central velocity dispersion (in km/s) and the outer extent (in kpc) for an ensemble of galaxies, using a parameterised description of the scaling of the velocity dispersion and truncation radius with luminosity (the constraints are not sensitive to the details of this parametric form). ## 3 Results and Conclusions Here we present the results of the application of our technique to the WFPC2 images of five HST cluster-lenses (Couch et al. 1998; Smail et al. 1997). The clusters span a wide redshift range: A 2218 at $`z=0.18`$; AC 114 at $`z=0.31`$; Cl 0412$`-`$65 at $`z=0.51`$; Cl 0016+16 at $`z=0.55`$ and Cl 0054$`-`$27 at $`z=0.58`$. We find that the mass-to-light ratio in the $`V`$-band of a typical $`L^{\ast }`$ galaxy increases as a function of redshift, with a mean value that ranges from roughly 10 to 24. The average value of the fiducial truncation radius varies from about 15 kpc at $`z=0.18`$ to 60 kpc at $`z=0.58`$. For the galaxy models that we have used in our analysis, the typical total mass of an $`L^{\ast }`$ galaxy varies with redshift from $`2.8\times 10^{11}M_{\odot }`$ to $`7.7\times 10^{11}M_{\odot }`$. The mass-to-light ratios quoted here take the passive evolution of elliptical galaxies into account, as given by the stellar population synthesis models of Bruzual & Charlot. The mass obtained for a typical bright cluster galaxy by Tyson et al. (1998) from the strong lensing analysis of the cluster Cl 0024+16, at $`z=0.41`$, is consistent with our results. Note that the preliminary numbers quoted here are average values; the estimates and discussion of the error bars are presented in Natarajan, Kneib & Smail (2000). These observed trends are in good agreement with theoretical expectations from cluster formation and evolution models (Ghigna et al. 1998).
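For a rough feel for the quoted halo masses, a truncated isothermal sphere of velocity dispersion $`\sigma `$ and truncation radius $`r_t`$ has total mass $`M\simeq \pi \sigma ^2r_t/G`$. This is only an order-of-magnitude sketch: the papers use a more specific (PIEMD-type) profile, and the $`\sigma =150`$ km/s below is an assumed fiducial value, not a number from the text:

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
KPC = 3.086e19         # m

def truncated_halo_mass(sigma_kms, rt_kpc):
    """Total mass (in M_sun) of a truncated isothermal halo, M ~ pi sigma^2 r_t / G."""
    sigma = sigma_kms * 1e3          # km/s -> m/s
    rt = rt_kpc * KPC                # kpc -> m
    return math.pi * sigma**2 * rt / G / M_SUN

# Assumed fiducial cluster galaxy: sigma = 150 km/s, with the 15 kpc (low-z)
# and 60 kpc (high-z) truncation radii quoted in the text
print(f"{truncated_halo_mass(150.0, 15.0):.1e} M_sun")
print(f"{truncated_halo_mass(150.0, 60.0):.1e} M_sun")
```

With these inputs the masses come out at a few $`\times 10^{11}M_{\odot }`$, the same ballpark as the $`2.8`$–$`7.7\times 10^{11}M_{\odot }`$ range quoted above, as expected for a one-parameter-family profile sketch.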
Qualitatively, in the context of this paradigm, fewer cluster members in a dynamically younger cluster of galaxies are likely to have suffered passages through the inner regions and are hence expected to be less tidally stripped. Detailed analysis and interpretation of these results are presented in a forthcoming paper.
no-problem/9909/astro-ph9909510.html
ar5iv
text
# Searching for extragalactic planets ## Abstract Are there other planetary systems in our Universe? Indirect evidence has been found for planets orbiting other stars in our galaxy: the gravity of orbiting planets makes the star wobble, and the resulting periodic Doppler shifts have been detected for about a dozen stars . But are there planets in other galaxies, millions of light years away? Here we suggest a method to search for extragalactic planetary systems: gravitational microlensing of unresolved stars. This technique may allow us to discover planets in elliptical galaxies, known to be far older than our own Milky Way, with broad implications for life in the Universe. Department of Physics, University of California, Berkeley, CA 94720-3411, USA Email: eabaltz@astron.berkeley.edu and Max-Planck-Institut für Physik, Föhringer Ring 6, D-80805 München, Germany Email: gondolo@mppmu.mpg.de The phenomenon of gravitational microlensing is as follows. A massive object, be it a black hole, planet, star, etc., passes very near to the line of sight to a star being monitored. The gravity of the massive object bends the starlight (gravitational lensing), producing multiple images of, and magnifying, the monitored star for a short time. The multiple images cannot be resolved (hence the prefix micro), but the amplification of the light intensity can be detected. The amplification has a well-defined characteristic temporal behaviour, allowing such events to be distinguished from other possible intensity changes such as those due to variable stars . In the case where the lens itself is a binary object, the microlensing lightcurve can be strongly affected, exhibiting short periods of very large magnification, coming in pairs . Gravitational lensing events of this type involving binary stars have been observed by the MACHO and EROS teams in programmes monitoring several million stars . The short duration, large magnification events are referred to as caustic crossings.
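For the simplest case of a single point-like lens, the "well-defined characteristic temporal behaviour" mentioned above is the standard Paczyński light curve. The binary caustic curves of Figure 1 require a more elaborate calculation, but the single-lens formulae below (everything in units of the Einstein angle) illustrate the basic ingredients:

```python
import numpy as np

def magnification(u):
    """Point-lens magnification for source-lens separation u (in Einstein radii)."""
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def separation(t, t0, tE, u0):
    """Separation for uniform relative motion: closest approach u0 at time t0,
    with Einstein-radius crossing time tE."""
    return np.sqrt(u0**2 + ((t - t0) / tE)**2)

# A source passing within 0.1 Einstein radii is magnified more than tenfold,
# while at one Einstein radius the magnification is the classic 3/sqrt(5):
print(f"{magnification(0.1):.2f}")   # 10.04
print(f"{magnification(1.0):.2f}")   # 1.34
```

Plugging `separation(t, ...)` into `magnification` gives the symmetric single-lens light curve; the caustic-crossing spikes of a binary lens are departures from this baseline shape.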
The magnification along these caustic curves is formally infinite, and in practice quite large, in excess of ten. Caustics are well known in optics, and can be seen, for example, as the oscillating patterns of bright lines on the bottom of a swimming pool. The utility of such events in searching for planetary systems is clear: a solar system can be described to first approximation as a binary object, as is the case for our own solar system, consisting primarily (in mass) of the Sun and the planet Jupiter. In Figure 1, we show two example lightcurves of microlensing events, together with the trajectories of the source stars relative to the caustic curves. In both cases the star has a companion one one-thousandth as massive, like the Sun–Jupiter system. Previous work has shown that planets might be detected in microlensing events in the bulge of the Milky Way galaxy, or in the Small and Large Magellanic Clouds, which are small galaxies in orbit around the Milky Way . Stars in the bulge and in the Magellanic Clouds can be resolved easily, and surveys routinely monitor of order ten million stars for microlensing events. Evidence for a planet orbiting a binary star system in the Milky Way bulge has recently been presented in a joint publication of the MPS and GMAN collaborations . In order to observe planetary systems in more distant galaxies, we must resort to a technique known as pixel microlensing . Individual stars in distant galaxies cannot be resolved, but this does not invalidate the method. Each pixel of a telescope camera collects light from a number of stars in the distant galaxy. If a single star is magnified due to gravitational microlensing, the pixel will collect more light. Of course, the light from the other stars on the pixel makes a magnification more difficult to observe, but the technique works in practice . The pixel microlensing surveys that have been undertaken observe the Andromeda Galaxy (M31), the nearest large galaxy to the Milky Way.
These surveys have used ground-based telescopes. The bulge of M31 is quite dense, which allows a high probability of microlensing events. We have calculated the rate of planetary events for observations of M31 with a telescope like the Canada-France-Hawaii Telescope (CFHT) on Mauna Kea . We assume that every star has a companion that is one one-thousandth as massive, just like the Sun and Jupiter. Furthermore, we assume, as is true for known binary stars, that the distribution of orbital periods is such that ten percent of such systems lie in each decade of period, from a third of a day to ten million years. This gives a 10% probability that a star has a companion between one and five AU (Astronomical Units; one AU is the distance between the Earth and the Sun), in rough agreement with the observational findings of Marcy et al. . With a long-term monitoring program observing every night for the five months that M31 is visible in Hawaii, over a period of eight years, we expect to observe about one planetary event. To increase the chances of detecting planetary systems in distant galaxies, we require a space telescope such as the proposed Next Generation Space Telescope, to be launched around 2007. This will be a large (about 8 metres in diameter) infrared telescope at a Lagrange point of the Earth–Moon system, and it will be more than ten times as sensitive as the Hubble Space Telescope. Figure 1: Microlensing events due to a planetary system. We illustrate the trajectory of a source star over the caustic curves of a lens star and its planet. We then illustrate the observed magnification of the source star along its trajectory. The cross indicates the star’s position, whereas the planet lies off the plots at $`0.5`$ and $`0.6`$, respectively. The plots are in units of the Einstein angle $`\theta _E`$, which is the characteristic angular scale of a microlensing event. Also shown is the star–planet separation $`d`$ in units of $`\theta _E`$.
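The Einstein angle $`\theta _E`$ that sets the scale in the figure caption is $`\theta _E=\sqrt{(4GM/c^2)D_{LS}/(D_LD_S)}`$, where $`D_L`$, $`D_S`$ and $`D_{LS}`$ are the observer-lens, observer-source and lens-source distances. A sketch for a solar-mass lens in M31, taking an assumed 780 kpc distance and a lens placed 1 kpc in front of the source (an assumed self-lensing geometry, chosen only to show the micro-arcsecond scale):

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # m/s
M_SUN = 1.989e30       # kg
KPC = 3.086e19         # m
RAD_TO_MUAS = (180.0 / math.pi) * 3600.0 * 1e6   # radians -> micro-arcseconds

def einstein_angle_muas(m_msun, d_l_kpc, d_s_kpc):
    """Einstein angle in micro-arcseconds for lens mass m at D_L, source at D_S."""
    d_ls = (d_s_kpc - d_l_kpc) * KPC
    theta = math.sqrt(4.0 * G * m_msun * M_SUN / C**2
                      * d_ls / (d_l_kpc * KPC * d_s_kpc * KPC))
    return theta * RAD_TO_MUAS

# Solar-mass lens 1 kpc in front of a source star in the M31 bulge:
print(f"{einstein_angle_muas(1.0, 779.0, 780.0):.1f} micro-arcsec")  # ~3.7
```

Angles of a few micro-arcseconds are far below any telescope's resolution, which is why only the combined magnification of the unresolved images is observable in pixel microlensing.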
We have calculated the rate of events we might detect with the NGST observing the giant elliptical galaxy M87 in the Virgo cluster, at a distance of fifty million light-years. With the same assumptions as in the M31 calculation, we find that an NGST survey of two months’ duration, taking one image each day, should be able to detect of order three planetary systems. We find that such a survey is most sensitive to events where the separation between caustic crossings is about five days. An alert system for microlensing events would allow more frequent measurements of the light curve during the caustic crossings, with the possibility of determining the orbital parameters of the planetary system. We now make some comments about the dependence of the observed rate of microlensing events on the various physical parameters. As long as the size of the observed galaxy is small compared to the distance to it, the fundamental rate of events is constant. However, a more distant galaxy will appear smaller, with more stars per pixel, which decreases the rates. On the other hand, if the galaxy is too close, many images must be taken in order to monitor enough stars, requiring more telescope resources. For the pixel scale of the NGST, M87 is at a fairly optimal distance. We have shown that pixel microlensing may be used to detect extragalactic planetary systems. This is the most promising available technique for discovering planets outside our own galaxy. This is especially interesting in that we might find planets in elliptical galaxies such as M87, known to contain considerably fewer heavy elements than our own spiral galaxy. The discovery of an extragalactic planet in a galaxy very different from ours would have broad implications for the origin of life in the Universe.
no-problem/9909/hep-lat9909101.html
ar5iv
text
# RIKEN BNL Research Center preprint Quark masses using domain wall fermions Talk given at Lattice ’99, Pisa, Italy; work done in collaboration with T. Blum, P. Chen, N. Christ, M. Creutz, C. Dawson, G. Fleming, R. Mawhinney, S. Ohta, S. Sasaki, G. Siegert, A. Soni, P. Vranas, L. Wu, and Y. Zhestkov (RIKEN/BNL/CU Collaboration) ## 1 INTRODUCTION Last year an exploratory calculation of the strange quark mass was completed using the domain wall fermion discretization within the quenched approximation . At the 15–20% level the results were encouraging: the pion mass squared extrapolated to zero as the input mass $`m\rightarrow 0`$, simulations at three different lattice spacings gave similar values for $`m_s^{\overline{\mathrm{MS}}}`$(2 GeV), and the values were in agreement with other lattice results. This work reports on the progress of the RIKEN/BNL/CU (RBC) Collaboration toward a precise calculation of the strange quark mass. ## 2 SPECTRUM Using domain wall quarks we have computed light hadron masses on a $`16^3\times 32`$ quenched lattice with the following parameters: $`\beta =6.0`$ (plaquette action), domain wall height $`M=1.8`$, seven values of the valence quark mass ($`am`$), and number of sites in the extra dimension $`N_s=16`$. Results from 85 configurations for the pseudoscalar meson, vector meson, and nucleon masses are shown in Figures 1 and 2 (see also Ref. ). With the light masses and improved statistics of this study compared to Ref. , it has become clear that the linear extrapolation of $`(aM_\pi )^2`$ to $`am=0`$ gives a positive intercept, 0.018(2) (statistical error only), for these simulation parameters (see Fig. 1). Several effects could be responsible for this feature. First, the finite (and relatively small) volume of our lattice could increase the mass of the pion at a given $`am`$, and these effects would be especially visible at the lighter masses; as shown below, the smallest $`am`$, 0.01, is roughly $`1/4`$ the strange quark mass.
Second, there is an intrinsic breaking of chiral symmetry due to the finite $`N_s`$: the right– and left–handed surface states have some overlap within the fifth dimension, which would result in a residual quark mass $`m_{\mathrm{res}}`$. Both finite $`V`$ and $`N_s`$ effects should be mostly $`am`$–independent. On the other hand, studies by Columbia at $`\beta =5.7`$ suggest that in the large $`N_s`$ limit one cannot account for the entire $`M_\pi ^2`$ intercept by increasing the spatial volume by a factor $`2^3`$. Quenching effects predicted by quenched chiral perturbation theory or due to artificial instantons could play a role. The issue of residual mass is addressed in Ref. . We cannot presently distinguish $`am`$–dependent finite $`V`$ and $`N_s`$ effects from any nonleading or logarithmic behavior of the pseudoscalar meson mass. For this analysis we assume that the above effects are all $`am`$–independent, so the chiral limit is at $`am=-0.0038`$. We discuss systematic uncertainties regarding this assumption below. The $`am`$ corresponding to the light quark mass is determined by extrapolating $`(M_\pi /M_\rho )^2`$ to its physical ratio. At that point $`M_N/M_\rho =1.37(5)`$, compared to the physical ratio 1.22. The lattice spacing is then set using the $`\rho `$ mass, and the strange quark mass is set using either the physical $`K`$ or $`\varphi `$ mass. The bare quark masses $`(m_q=am+0.0038)`$ are thus (with statistical errors only):

| bare | $`1/a=1.91(0.04)`$ GeV | |
| --- | --- | --- |
| mass | lattice units | MeV |
| $`m_l`$ | 0.00166(0.00005) | 3.17(0.11) |
| $`m_s(K)`$ | 0.042(0.003) | 80(6) |
| $`m_s(\varphi )`$ | 0.053(0.004) | 101(7) |

The large difference in $`m_s`$ depending on whether the $`K`$ or $`\varphi `$ meson is used to set the scale is a common feature of quenched lattice simulations. This is succinctly expressed by the parameter $`J\equiv M_{K^*}(dM_V/dM_{PS}^2)`$ . 
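As a quick consistency check on the table above (our own arithmetic, not part of the paper's analysis), the conversion from lattice units to MeV is just multiplication by the quoted inverse lattice spacing; only central values are used here.

```python
# Convert the bare quark masses of the table from lattice units to MeV
# using the quoted inverse lattice spacing 1/a = 1.91(4) GeV (central value only).
a_inv_mev = 1.91e3

masses = {"m_l": 0.00166, "m_s(K)": 0.042, "m_s(phi)": 0.053}
for name, am in masses.items():
    # reproduces the table's 3.17, 80, and 101 MeV within rounding
    print(f"{name}: {am * a_inv_mev:.1f} MeV")
```

The 6–7 MeV errors on the strange mass then follow from propagating the 7% statistical error on $`am_s`$ together with the 2% error on $`1/a`$.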
The value obtained using physical meson masses is $`0.48\pm 0.02`$ (the spread reflects the fact that $`(M_\varphi -M_{K^*})/(M_{K^*}-M_\rho )`$ is not exactly 1); our result is $`J=0.34(7)`$, comparable to calculations with Wilson fermions. In light of this difference, we do not average $`m_s(K)`$ and $`m_s(\varphi )`$ but quote results for each input separately. The calculation of the $`\varphi `$ mass in quenched QCD is subject to systematic error since disconnected graphs are neglected, so in the rest of this work we consider only $`m_s(K)`$. So far we have not completed an exhaustive study of the systematic errors at this lattice spacing. Therefore, for the present time we give rough estimates. Since $`M_N/M_\rho `$ is 10% higher than the physical value, our determination of the lattice spacing has an inherent uncertainty of at least 10%. In this work we have assumed that finite $`V`$ and $`N_s`$ effects give a positive, $`m`$–independent contribution to the pion mass. If we instead assume such effects account for only half of the nonzero intercept, then the quark masses above would all decrease by 4 MeV. This is only a 5% change to the strange quark mass, but clearly the extrapolation to the average up and down quark mass is untrustworthy. A better understanding of the systematic uncertainties is necessary before a reliable calculation of $`m_l`$ can be completed. The bare strange mass from this work is $`m_s(\beta =6.0)=80\pm 6`$ (stat.) $`\pm 10`$ (sys.) MeV.

## 3 NONPERTURBATIVE RENORMALIZATION

The mass renormalization constant has been computed perturbatively to one–loop for domain wall fermions . In this work we compute the renormalization constant nonperturbatively using the regularization independent momentum subtraction (RI/MOM) scheme as advocated by the Rome–Southampton group . 
To determine the renormalization constant for an operator $`O`$, we impose the following renormalization condition: $$\frac{Z_O(\mu a)}{Z_q(\mu a)}\mathrm{Tr}\left[P_O\mathrm{\Lambda }_O(pa)\right]|_{p^2=\mu ^2}=1,$$ (1) where $`P_O`$ projects out the tree–level spin structure of $`O`$, $`\mathrm{\Lambda }_O`$ is the amputated Green’s function of $`O`$ in momentum space, and $`Z_q`$ is the quark wavefunction renormalization. For fermion bilinear operators $`O_\mathrm{\Gamma }(x)=\overline{q}(x)\mathrm{\Gamma }q(x)`$ only a single momentum space propagator is needed to compute the amputated Green’s function $$\mathrm{\Lambda }_\mathrm{\Gamma }(p)=S^{-1}(p)\left\langle S(p)\mathrm{\Gamma }\gamma _5S^{\dagger }(p)\gamma _5\right\rangle S^{-1}(p)$$ (2) where $`S(p)=\sum _y\mathrm{exp}(ipy)\langle q(y)\overline{q}(0)\rangle `$. In this work we compute momentum space propagators on 52 configurations . Eqns. (1) and (2) together give $`Z_\mathrm{\Gamma }/Z_q`$ at any momentum $`p`$. In order for the RI/MOM method to work reliably the momentum should satisfy $`\mathrm{\Lambda }_{\mathrm{QCD}}\ll p\ll 1/a`$ so that nonperturbative and discretization effects, respectively, are small; with presently accessible lattices, however, this range is narrow. The approach which we follow is to fit the leading $`(ap)^2`$ errors . First, we compute $`Z_q`$ through $$Z_q^{\prime }\equiv \frac{i}{12}\frac{\mathrm{Tr}\sum _\mu \gamma _\mu p_\mu S^{-1}}{4\sum _\mu p_\mu ^2}$$ (3) which differs from $`Z_q`$ at $`O(g^4)`$ in Landau gauge , then multiply $`Z_\mathrm{\Gamma }/Z_q`$ by $`Z_q^{\prime }`$ to give $`Z_\mathrm{\Gamma }^{\prime }`$ and divide by the two–loop running factor $`c_\mathrm{\Gamma }^{\prime }`$, resulting in $$Z_\mathrm{\Gamma }^{\mathrm{RGI}}(a)=\frac{Z_\mathrm{\Gamma }^{\prime }(\mu a)}{c_\mathrm{\Gamma }^{\prime }(\mu )}=\frac{Z_\mathrm{\Gamma }^{\overline{\mathrm{MS}}}(\mu a)}{c_\mathrm{\Gamma }^{\overline{\mathrm{MS}}}(\mu )}$$ (4) which is scale independent through two–loop order. The result for the scalar density is shown in Fig. 3. 
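The extrapolation applied to data like that in Fig. 3 (a linear fit in $`(ap)^2`$, keeping the intercept) can be sketched as follows; the momenta and $`Z`$ values below are invented for illustration, not the measured ones.

```python
import numpy as np

# Sketch of the continuum-limit step: fit Z^RGI measured at several lattice
# momenta linearly in (ap)^2 and keep the intercept at (ap)^2 = 0.
# Hypothetical data, chosen to lie exactly on z = 0.66 - 0.10*(ap)^2.
ap2 = np.array([0.4, 0.6, 0.8, 1.0, 1.2])
z = np.array([0.62, 0.60, 0.58, 0.56, 0.54])

slope, intercept = np.polyfit(ap2, z, 1)   # leading O(a^2 p^2) error model
print(f"Z^RGI at (ap)^2 -> 0: {intercept:.3f}")
```

In practice only momenta inside the window $`\mathrm{\Lambda }_{\mathrm{QCD}}\ll p\ll 1/a`$ would be included in such a fit.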
The solid line is a linear fit to $`O(a^2p^2)`$ discretization errors, and we take the extrapolated value at $`(ap)^2=0`$ as our $`Z_S^{\mathrm{RGI}}`$. $`Z_S^{\overline{\mathrm{MS}}}`$ at 2 GeV is obtained by multiplying by the two–loop running factor $`c_S^{\overline{\mathrm{MS}}}`$(2 GeV). The result for $`Z_m^{\overline{\mathrm{MS}}}\equiv 1/Z_S^{\overline{\mathrm{MS}}}`$ is 1.63(7)(9) (stat.)(sys.), which should be compared to 1.32, the one–loop perturbative $`Z_m`$ for these simulation parameters. Taking the lattice $`m_s(K)`$ from the previous section we find at $`\beta =6.0`$ $$m_s^{\overline{\mathrm{MS}}}(2\mathrm{GeV})=130\pm 11\pm 18\mathrm{MeV}$$ (5) where the first error is the statistical uncertainty and the second is the systematic uncertainty.

## 4 CONCLUSIONS

We have completed a calculation of the light and strange quark masses at one lattice spacing and volume within the quenched approximation. The increased precision has exposed finite $`N_s`$ and $`V`$ effects, and these lead to errors of roughly 5% in the strange quark mass. We will soon have a calculation of $`m_s`$ on a coarser $`(\beta =5.85)`$ lattice, and of course further work at larger volumes, larger $`N_s`$, and smaller lattice spacing will reduce systematic uncertainties and yield a precise determination of the strange quark mass.

## ACKNOWLEDGMENTS

The RBC Collaboration is grateful to RIKEN, Brookhaven National Laboratory, and the U.S. Department of Energy for providing the facilities essential for the completion of this work.
no-problem/9909/math-ph9909001.html
ar5iv
text
# Universality of the Distribution Functions of Random Matrix Theory

## 1 Random Matrix Models

In probability theory and statistics a common first approximation to many random processes is a sequence $`X_1,X_2,X_3,\dots `$ of independent and identically distributed (iid) random variables. Let $`F`$ denote their common distribution. To motivate the material below, we take these random variables and construct a particularly simple $`N\times N`$ random matrix, $$\text{diag}(X_1(\omega ),X_2(\omega ),\dots ,X_N(\omega )).$$ The order statistics are the eigenvalues ordered $$\lambda _1\le \lambda _2\le \mathrm{\cdots }\le \lambda _N,$$ and the distribution of the largest eigenvalue, $`\lambda _{\text{max}}(N)=\lambda _N`$, is $`\text{Prob}\left(\lambda _{\text{max}}(N)\le x\right)`$ $`=`$ $`\text{Prob}\left(X_1\le x,\dots ,X_N\le x\right)`$ $`=`$ $`F(x)^N.`$ Since the distribution $`F`$ is arbitrary, so too is the distribution of the largest eigenvalue of an $`N\times N`$ random matrix. However, one is really interested in limiting laws as $`N\to \infty `$. That is, we ask if there exist constants $`a_N`$ and $`b_N`$ such that $$\frac{\lambda _{\text{max}}(N)-a_N}{b_N}$$ (1.1) converges in distribution to a nontrivial limiting distribution function $`G`$. In the present situation a complete answer is provided by

Theorem: If (1.1) converges in distribution to some nontrivial distribution function $`G`$, then $`G`$ belongs to one of the following forms:

1. $`e^{-e^{-x}}`$ with support R.
2. $`e^{-x^{-\alpha }}`$ with support $`[0,\infty )`$ and $`\alpha >0`$.
3. $`e^{-(-x)^\alpha }`$ with support $`(-\infty ,0]`$ and $`\alpha >0`$.

This theorem is a model for the type of results we want for nondiagonal random matrices. A random matrix model is a probability space $`(\mathrm{\Omega },𝒫)`$ where $`\mathrm{\Omega }`$ is a set of matrices. Here are some examples:

* Circular Unitary Ensemble (CUE, $`\beta =2`$)
  + $`\mathrm{\Omega }=𝒰(N)=N\times N`$ unitary matrices.
  + $`𝒫`$ = Haar measure.
* Gaussian Orthogonal Ensemble (GOE, $`\beta =1`$)
  + $`\mathrm{\Omega }=N\times N`$ real symmetric matrices.
  + $`𝒫`$ = unique<sup>1</sup><sup>1</sup>1Uniqueness is up to centering and a normalization of the variance. measure that is invariant under orthogonal transformations and for which the matrix elements (say on and above the diagonal) are iid random variables.
* Gaussian Unitary Ensemble (GUE, $`\beta =2`$)
  + $`\mathrm{\Omega }=N\times N`$ hermitian matrices.
  + $`𝒫`$ = unique measure that is invariant under unitary transformations and for which the real and imaginary matrix elements (say on and above the diagonal) are iid random variables.
* Gaussian Symplectic Ensemble (GSE, $`\beta =4`$)
  + $`\mathrm{\Omega }=2N\times 2N`$ Hermitian self-dual matrices.<sup>2</sup><sup>2</sup>2Identify the $`2N\times 2N`$ matrix with the $`N\times N`$ matrix whose entries are quaternions. If the quaternion matrix elements satisfy $`\overline{M}_{ji}=M_{ij}`$ where the bar is quaternion conjugation, then the $`2N\times 2N`$ matrix is called Hermitian self-dual. Each eigenvalue of a Hermitian self-dual matrix has multiplicity two.
  + $`𝒫`$ = unique measure that is invariant under symplectic transformations and for which the real and imaginary matrix elements (say on and above the diagonal) are iid random variables.
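The diagonal example that motivated these definitions is easy to check numerically; the sketch below (our own, taking $`F`$ to be the Uniform(0,1) distribution) verifies $`\text{Prob}(\lambda _{\text{max}}\le x)=F(x)^N`$ by Monte Carlo.

```python
import numpy as np

# Monte Carlo check of Prob(lambda_max <= x) = F(x)^N for the diagonal ensemble,
# with F = Uniform(0,1) so that F(x)^N = x^N.
rng = np.random.default_rng(0)
N, trials, x = 10, 200_000, 0.8

samples = rng.random((trials, N))              # trials independent "diagonals"
empirical = np.mean(samples.max(axis=1) <= x)  # fraction with lambda_max <= x
exact = x ** N                                 # 0.8**10, about 0.107

print(abs(empirical - exact) < 0.01)
```

For the nondiagonal ensembles above no such closed form exists, which is what makes the limiting laws of the following sections interesting.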
Expected values of random variables $`f:\mathrm{\Omega }\to \text{C}`$ are computed from the usual formula $$E_\mathrm{\Omega }(f)=\int _\mathrm{\Omega }f(M)𝑑𝒫(M).$$ If $`f(M)`$ depends only on the eigenvalues of $`M\in \mathrm{\Omega }`$, then one can be more explicit:

* CUE (Weyl’s Formula) $$E_{𝒰(N)}(f)=\frac{1}{N!(2\pi )^N}\int _{-\pi }^\pi \mathrm{\cdots }\int _{-\pi }^\pi f(\theta _1,\dots ,\theta _N)\left|\mathrm{\Delta }(e^{i\theta _1},\dots ,e^{i\theta _N})\right|^2d\theta _1\mathrm{\cdots }d\theta _N,$$
* Gaussian Ensembles ($`\beta =1,2,4`$): $$E_{N\beta }(f)=c_{N\beta }\int _{-\infty }^{\infty }\mathrm{\cdots }\int _{-\infty }^{\infty }f(x_1,\dots ,x_N)\left|\mathrm{\Delta }(x_1,\dots ,x_N)\right|^\beta e^{-\frac{\beta }{2}\sum _jx_j^2}𝑑x_1\mathrm{\cdots }𝑑x_N,$$ where $`c_{N\beta }`$ is chosen so that $`E_{N\beta }(1)=1`$ and $`\mathrm{\Delta }(x_1,\dots ,x_N)=\prod _{1\le i<j\le N}(x_i-x_j)`$.

The factor $`e^{-\frac{\beta }{2}\sum _jx_j^2}`$ explains the choice of the word “gaussian” in the names of these ensembles. A commonly studied generalization of these gaussian measures is to replace the sum of quadratic terms appearing in the exponential with $`\sum _iV(x_i)`$ where $`V`$ is, say, a polynomial (with the obvious restrictions to make the measure well-defined). Choosing $`f=\prod _i\left(1-\chi _J(x_i)\right)`$, $`\chi _J`$ the characteristic function of a set $`J\subset \text{R}`$, we get the important quantity<sup>3</sup><sup>3</sup>3This quantity has an obvious extension to other random matrix models. $$E_{N\beta }(f)=E_{N\beta }(0;J):=\text{probability no eigenvalues lie in }J,$$ and in the particular case $`J=(t,\infty )`$ $$F_{N\beta }(t):=\text{Prob}\left(\lambda _{\text{max}}\le t\right)=E_{N\beta }(0;J).$$ The level spacing distribution<sup>4</sup><sup>4</sup>4Let the eigenvalues be ordered. The conditional probability that given an eigenvalue at $`a`$, the next one lies between $`s`$ and $`s+ds`$ is called the level spacing density. 
is expressible in terms of the mixed second partial derivative of $`E_{N\beta }(0;(a,b))`$ with respect to the endpoints $`a`$ and $`b`$.

## 2 Fredholm Determinant Representations

Though the $`E_{N\beta }(0;J)`$ are explicit $`N`$-dimensional integrals, these expressions are not so useful in establishing limiting laws as $`N\to \infty `$. What turned out to be very useful are Fredholm determinant representations for $`E_{N\beta }(0;J)`$. In 1961 M. Gaudin proved for $`\beta =2`$ (using the newly developed orthogonal polynomial method of M. L. Mehta) that $`E_{N2}(0;J)=det(I-K_{N2})`$ where $`K_{N2}`$ is an integral operator acting on $`J`$ whose kernel is of the form $$\frac{\phi (x)\psi (y)-\psi (x)\phi (y)}{x-y},$$ (2.1) with $`\phi (x)=c_Ne^{-x^2/2}H_N(x)`$, $`\psi (x)=c_Ne^{-x^2/2}H_{N-1}(x)`$, and $`H_j(x)`$ the Hermite polynomials.<sup>5</sup><sup>5</sup>5For the random matrix models corresponding to a general potential $`V`$, $`\phi (x)=c_Ne^{-V(x)/2}p_N(x)`$ and $`\psi (x)=c_Ne^{-V(x)/2}p_{N-1}(x)`$ where $`p_j(x)`$ are the orthogonal polynomials associated with the weight function $`w(x)=e^{-V(x)}`$. It is in this generalization that we see the close relation between the general theory of orthogonal polynomials and random matrix theory. For $`\beta =1`$ or $`4`$, generalizing F. J. Dyson’s 1970 analysis of the $`n`$-point correlations for the circular ensembles, it follows from work by Mehta the following year that the square of $`E_{N\beta }(0;J)`$ again equals a Fredholm determinant, $`det(I-K_{N\beta })`$, but now the kernel of $`K_{N\beta }`$ is a $`2\times 2`$ matrix.<sup>6</sup><sup>6</sup>6 See for elementary proofs of these facts.

## 3 Scaling Limits (Limiting Laws)

### 3.1 Bulk Scaling Limit

Let $`\rho _N(x)`$ denote the density of eigenvalues at $`x`$ and pick a point $`x_0`$, independent of $`N`$, with $`\rho _N(x_0)>0`$. 
We scale distances so that the resulting density is one at $`x_0`$, $`\xi :=\rho _N(x_0)\left(x-x_0\right)`$, and we call the limit $$N\to \infty ,x\to x_0,\text{ such that }\xi \text{ is fixed},$$ the bulk scaling limit. For $`\beta =2`$, $$E_{N2}(0;J)\to E_2(0;J)=det\left(I-K_2\right)$$ where the integral operator $`K_2`$ (acting on $`L^2(J)`$) has as its kernel (the sine kernel) $$\frac{1}{\pi }\frac{\mathrm{sin}\pi (\xi -\xi ^{\prime })}{\xi -\xi ^{\prime }}.$$ (We use the same symbol $`J`$ to denote the scaled set $`J`$.) Furthermore, $$p_2(s)=\frac{d^2}{ds^2}E_2(0;(0,s))$$ is the (limiting) level-spacing density for GUE, known as the Gaudin distribution.<sup>7</sup><sup>7</sup>7For the analogous $`\beta =1,4`$ results, see, e.g., or . We observe that the limiting kernel is translationally invariant and independent of $`x_0`$.

### 3.2 Edge Scaling Limit

In the gaussian ensembles, the density decays exponentially fast around $`2\sigma \sqrt{N}`$; perhaps surprisingly, it is also the case that $$\underset{N\to \infty }{lim}\frac{\lambda _{\text{max}}(N)}{\sqrt{N}}=2\sigma ,a.s.$$ (3.1) where $`\sigma `$ is the standard deviation of the off-diagonal matrix elements. (In the normalization we’ve adopted, $`\sigma =1/\sqrt{2}`$.) If we introduce the scaled random variable $`\widehat{\lambda }`$ through $$\lambda _{\text{max}}=2\sigma \sqrt{N}+\frac{\sigma \widehat{\lambda }}{N^{1/6}},$$ then $$\text{Prob}\left(\lambda _{\text{max}}\le t\right)=\text{Prob}\left(\widehat{\lambda }\le s\right)\to F_\beta (s)\text{ as }N\to \infty ,$$ where $`t=2\sigma \sqrt{N}+\sigma s/N^{1/6}`$. For $`\beta =2`$, $$F_2(s)=det(I-K_{\text{Airy}}),$$ where $`K_{\text{Airy}}`$ has kernel of the form (2.1) with $`\phi (x)=\text{Ai}(x)`$, $`\psi (x)=\text{Ai}^{\prime }(x)`$ and $`J=(s,\infty )`$. (See, e.g., for the $`\beta =1,4`$ results.)

## 4 Connections with Integrable Systems

### 4.1 Bulk Scaling Limit

In 1980 M. Jimbo, T. Miwa, Y. Môri, and M. 
Sato expressed the Fredholm determinant of the sine kernel in terms of a solution to a certain system of integrable differential equations.<sup>8</sup><sup>8</sup>8A simplified proof of their results can be found in . In the simplest case of a single interval, $`J=(0,s)`$, the differential equation is a particular case of Painlevé V ($`P_V`$)<sup>9</sup><sup>9</sup>9The differential equation below is the sigma representation of $`P_V`$. and the Fredholm determinant is given by $$det\left(I-\lambda K_2\right)=\mathrm{exp}\left(\int _0^{\pi s}\frac{\sigma (x;\lambda )}{x}𝑑x\right),$$ $$\left(x\sigma ^{\prime \prime }\right)^2+4\left(x\sigma ^{\prime }-\sigma \right)\left(x\sigma ^{\prime }-\sigma +(\sigma ^{\prime })^2\right)=0,$$ $$\sigma (x;\lambda )\sim -\frac{\lambda }{\pi }x,\text{ as }x\to 0.$$ For $`\beta =1,4`$ and $`J=(0,s)`$, $`E_\beta (0;(0,s))`$ can also be expressed in terms of the same function $`\sigma (x;1)`$. A down-to-earth application of these Painlevé representations (together with the known asymptotics of $`\sigma (x;1)`$) is that one can easily produce graphs of the level spacing densities $`p_\beta (s)`$.<sup>10</sup><sup>10</sup>10Without the Painlevé representations, the numerical evaluation of the Fredholm determinants is quite involved. 
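Footnote 10 notwithstanding, for a single interval the Fredholm determinant can also be approximated directly by a quadrature (Nyström-type) discretization; the sketch below is our own illustration, not the Painlevé route of the text: replace the operator by its kernel matrix on Gauss–Legendre nodes and take a finite determinant.

```python
import numpy as np

# Approximate E_2(0;(0,s)) = det(I - K_2) on J = (0, s), with K_2 the sine
# kernel, by a Nystrom discretization on m Gauss-Legendre nodes.
def e2_gap(s, m=40):
    x, w = np.polynomial.legendre.leggauss(m)
    x = 0.5 * s * (x + 1.0)                 # map nodes from (-1, 1) to (0, s)
    w = 0.5 * s * w
    xi, xj = np.meshgrid(x, x, indexing="ij")
    with np.errstate(invalid="ignore"):
        k = np.sin(np.pi * (xi - xj)) / (np.pi * (xi - xj))
    k[np.isnan(k)] = 1.0                    # diagonal: limiting value of the kernel
    sw = np.sqrt(w)                         # symmetric weighting preserves the det
    return np.linalg.det(np.eye(m) - sw[:, None] * k * sw[None, :])

# The gap probability is 1 for an empty interval and decreases as s grows.
print(e2_gap(0.0), e2_gap(1.0))
```

The Painlevé representation remains far more convenient for asymptotics, but this direct evaluation is a useful cross-check for moderate $`s`$.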
### 4.2 Edge Scaling Limit

The limiting distributions (edge scaling) of the largest eigenvalue, $`F_\beta (s)`$, can also be expressed in terms of Painlevé functions—this time $`P_{II}`$ : $`F_1(s)^2`$ $`=`$ $`\mathrm{exp}\left(-{\displaystyle \int _s^{\infty }}q(x)𝑑x\right)F_2(s),`$ (4.1) $`F_2(s)`$ $`=`$ $`\mathrm{exp}\left(-{\displaystyle \int _s^{\infty }}(x-s)q(x)^2𝑑x\right),`$ (4.2) $`F_4(s/\sqrt{2})^2`$ $`=`$ $`\mathrm{cosh}^2\left({\displaystyle \frac{1}{2}}{\displaystyle \int _s^{\infty }}q(x)𝑑x\right)F_2(s),`$ (4.3) where $`q`$ satisfies the Painlevé II equation $$q^{\prime \prime }=xq+2q^3$$ with boundary condition $`q(x)\sim \text{Ai}(x)`$ as $`x\to \infty `$.<sup>11</sup><sup>11</sup>11That a solution $`q`$ exists and is unique follows from the representation of the Fredholm determinant in terms of it. Independent proofs of this, as well as the asymptotics as $`x\to -\infty `$, were given by S. Hastings and J. McLeod, P. Clarkson and McLeod and by P. Deift and X. Zhou. The graphs of the densities $`f_\beta (s)=dF_\beta (s)/ds`$ are in Figure 1.

### 4.3 Generalizations

Both the sine kernel and the Airy kernel are of the form (2.1). Kernels of this form arise in many problems in integrable systems; indeed, so much so that A. Its, A. Izergin, V. Korepin and V. Slavnov in 1990 initiated a general analysis of these kernels. The following theorem , which applies to a wide class of $`\beta =2`$ random matrix ensembles, gives the general situation: Theorem: Let $`J=\bigcup _{j=1}^m(a_{2j-1},a_{2j})`$ be a union of open intervals. 
Define $`\tau (a)=det\left(I-K\right)`$ where $`K`$ is an integral operator acting on $`L^2(J)`$ whose kernel is of the form (2.1), where $`\phi `$ and $`\psi `$ are assumed to satisfy $$\frac{d}{dx}\left(\begin{array}{c}\phi \\ \psi \end{array}\right)=\mathrm{\Omega }(x)\left(\begin{array}{c}\phi \\ \psi \end{array}\right)$$ with $`\mathrm{\Omega }(x)`$ a $`2\times 2`$ matrix with zero trace and entries rational in $`x`$. Then the $`\frac{\partial }{\partial a_j}\mathrm{log}det(I-K)`$ are expressible polynomially in terms of solutions to a total system of partial differential equations (the $`a_j`$ are the independent variables). The differential equations are given explicitly in terms of the coefficients of the rational functions appearing in $`\mathrm{\Omega }(x)`$.

### 4.4 Historical Comments

The first connection between Toeplitz/Fredholm determinants and Painlevé functions was established in 1973–77 in work of T. T. Wu, B. M. McCoy, E. Barouch and the first author concerning the scaling limit of the 2-point functions of the 2D Ising model of statistical mechanics. The Painlevé function that arose was $`P_{III}`$. This work was subsequently generalized by Sato, Miwa and Jimbo to $`n`$-point functions and, more generally, holonomic quantum fields. The Kyoto School then took up the problem of the density matrix of the impenetrable Bose gas, and it was in this context that they discovered that the Fredholm determinant of the sine kernel is related to $`P_V`$. A crucial simplification of the Kyoto School work, as it applies to random matrix theory, was made by Mehta in 1992 . This last work inspired the commutator methods introduced by the present authors in the period 1993–96. Since then both the Riemann-Hilbert methods of Deift, Its, Zhou and others (see, e.g. ), and the Virasoro methods of M. Adler, T. Shiota, P. van Moerbeke, and others (see, e.g. ), have played an increasingly important role in the development of random matrix theory. 
The connection of these methods with the isomonodromy method has been clarified by J. Palmer and J. Harnad . Space does not permit us to discuss the interesting connections between random matrices and Szegö type limit theorems. See E. Basor for connections with linear statistics and the review papers for some related historical comments.

## 5 Universality

### 5.1 Universality of Gaussian Ensembles in Random Matrix Models

#### 5.1.1 Invariant Measures, $`\beta =2`$

As briefly mentioned above, a widely studied class of random matrix models is defined by the replacement of the gaussian potential, $`x^2`$, by a general potential $`V(x)`$. For the weight functions most studied, the parameter $`N`$ is put into the exponent so that the weight function becomes $`e^{-NV(x)}`$. For different $`V`$’s, the limiting density $`\rho _V(x)`$ can be quite different. It may be supported on many distinct intervals, and it may vanish at interior points of its support. In the gaussian case, the limiting density is the Wigner semicircle law: $`\rho _W(x)=\frac{2}{\pi }\sqrt{1-x^2}`$. Heuristic arguments suggest that the behavior exhibited by the Wigner law—that $`\rho `$ is positive on the interior of its support and vanishes like a square root at the endpoints—is the typical behavior for $`\rho _V`$. The bulk scaling limit and edge scaling limit are defined in ways analogous to the gaussian cases. To establish universality of these scaling limits, one must show (for $`\beta =2`$ ensembles) that the scaled kernels approach the sine kernel and the Airy kernel, respectively. The potential $`V(x)=\frac{t}{2}x^2+\frac{g}{4}x^4`$ ($`g>0`$, $`t<0`$) is an example of a “two interval” potential. Indeed, for this important example P. Bleher and A. Its proved precisely this statement of universality. (See their paper for related work in the orthogonal polynomial literature as well as the physics literature.) Recently, building on work of , A. Kuijlaars and K. 
McLaughlin have shown this behavior is generic for real analytic $`V`$ satisfying $`lim_{|x|\to \infty }V(x)/\mathrm{log}|x|=+\infty `$. In the physics literature, M. Bowick, E. Brézin and others have argued (for $`\beta =2`$ ensembles) that if $`\rho _V`$ vanishes faster than a square root, then the corresponding edge scaling limit will result in non-Airy universality classes. The resulting new kernels will have the form (2.1) and the theory developed in will apply, but there remains much to be understood in these cases. For $`\beta =1,4`$, the situation is more complicated due to the structure of $`K_V`$ , and the “universality” theorems are not so general.

### 5.2 Noninvariant Measures: Wigner Ensemble

The Wigner ensembles are defined by requiring that the matrix elements on and above the diagonal in either the real symmetric case or the complex hermitian case are independent and identically distributed random variables. Only when the distribution is gaussian is the measure invariant. One usually assumes, as we do here, that all moments of the common distribution function exist. It was Wigner himself who showed that the limiting density of states is the Wigner semicircle. Subsequently several authors—culminating in a theorem by Z. Bai and Y. Yin clarifying which moments need exist—showed that (3.1) continues to hold for the Wigner ensembles. It should be noted that because the measure is noninvariant, the nongaussian Wigner ensembles do not, as far as we understand, have a Fredholm determinant representation for their distribution functions. This means, for one, that the methods of integrable systems are not directly applicable to Wigner ensembles. It is therefore particularly important, as A. Soshnikov recently proved, that in the edge scaling limit the Wigner ensembles are in the same universality class as the gaussian models. 
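A quick numerical illustration of this edge universality (our own sketch, not Soshnikov's argument): a real symmetric matrix with iid $`\pm 1`$ entries, for which $`\sigma =1`$, already places its largest eigenvalue near the gaussian-ensemble edge location $`2\sigma \sqrt{N}`$.

```python
import numpy as np

# Largest eigenvalue of a +/-1 Wigner matrix: by (3.1) and edge universality it
# sits near 2*sigma*sqrt(N), with sigma = 1 here, just as in the gaussian case.
rng = np.random.default_rng(1)

def wigner_pm1(n):
    a = rng.choice([-1.0, 1.0], size=(n, n))
    return np.triu(a) + np.triu(a, 1).T    # symmetrize the upper triangle

N, trials = 200, 20
lam = np.mean([np.linalg.eigvalsh(wigner_pm1(N))[-1] for _ in range(trials)])
print(lam / np.sqrt(N))   # close to 2; the deficit reflects the O(N^{-2/3}) edge correction
```

Resolving the $`N^{-1/6}`$-scale fluctuations around this location, i.e. the $`F_1`$ law itself, would of course require far more statistics.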
In particular, the limiting distribution of the scaled largest eigenvalue is given by $`F_1(s)`$ for real symmetric Wigner matrices and by $`F_2(s)`$ for complex hermitian Wigner matrices.

### 5.3 Examples from Physics

A second type of universality, and the one first envisioned by Wigner in the context of nuclear physics, asserts in Wigner’s words

> Let me say only one more word. It is very likely that the curve in Figure I \[an approximate graph of $`p_1(s)`$\] is a universal function. In other words, it doesn’t depend on the details of the model with which you are working.

The modern version of this asserts that for a classical, “fully” chaotic Hamiltonian the corresponding quantum system has a level spacing distribution equal to $`p_\beta (s)`$ in the bulk. (The symmetry class determines which ensemble.) This quantum chaos conjecture, due to O. Bohigas, M. Giannoni and C. Schmit , has been a guiding principle for much subsequent work, though it is the authors’ understanding that it remains a conjecture. A particularly nice numerical example supporting this conjecture is M. Robnik’s work on chaotic billiards. The reader is referred to the recent review article for further numerical examples that support this conjecture. It should be noted that there are examples from number theory where the conjecture fails. Thus, as it has been said, the conjecture is undoubtedly true except where it is demonstrably false.

#### 5.3.1 Aperiodic Tiling Adjacency Matrix

The discovery of quasicrystals has made the study of statistical mechanical models whose underlying lattice is quasiperiodic of considerable interest to physicists. In particular, in order to understand transport properties, tight binding models have been defined on various quasiperiodic lattices. One such study by Zhong et al. defined a simplified tight binding model for the octagonal tiling of Ammann and Beenker. 
This quasiperiodic tiling consists of squares and rhombi with all edges of equal length (see Figure 2) and has a $`D_8`$ symmetry around the central vertex. On this tiling the authors take as their Hamiltonian the adjacency matrix for the graph with free boundary conditions. The largest lattice they consider has 157,369 vertices. The matrix splits into ten blocks according to the irreducible representations of the dihedral group $`D_8`$. For each of these ten independent subspectra, they compare the empirical distribution of the normalized spacings of the consecutive eigenvalues with the GOE level spacing density $`p_1(s)`$. In Figure 2 we have reproduced a portion of their data for one such subspectrum together with $`p_1`$.

### 5.4 Spacings of the Consecutive Zeros of Zeta Functions

Perhaps the most surprising appearance of the distributions of random matrix theory is in number theory. Analytical work by H. Montgomery and extensive numerical calculations by A. Odlyzko on the zeros of the Riemann zeta function have given convincing evidence that the normalized consecutive spacings follow the Gaudin distribution, see Figure 3. Recent results of Z. Rudnick and P. Sarnak are also compatible with the belief that the distribution of the spacings between zeros, not only of the Riemann zeta function, but also of quite general automorphic $`L`$-functions over Q, are all given by this Montgomery-Odlyzko Law. In their landmark book , N. Katz and P. Sarnak establish the Montgomery-Odlyzko Law for wide classes of zeta and $`L`$-functions over finite fields.

### 5.5 Random Matrix Theory and Combinatorics

The last decade has seen a flurry of activity centering around connections between combinatorial probability of the Robinson-Schensted-Knuth (RSK) type on the one hand and random matrices and integrable systems on the other. 
From the point of view of probability theory, the quite surprising feature of these developments is that the methods came from Toeplitz determinants, integrable differential equations of the Painlevé type and the closely related Riemann-Hilbert techniques as they were applied and refined in random matrix theory. Using these techniques new, and apparently quite universal, limiting laws have been discovered. The earliest signs of these connections can be found in the work of A. Regev and I. Gessel . Here, however, we introduce this subject by examining a certain card game of D. Aldous and P. Diaconis , called patience sorting.

#### 5.5.1 Patience Sorting and Random Permutations

Our deck of cards is labeled $`\{1,2,\dots ,N\}`$ and we order the cards with their natural ordering. Shuffle the deck of cards and

* Turn over the first card.
* Turn over the second card. If it is of higher rank, start a new pile to the right of the first card. Otherwise place the second card on top of the first card.
* Turn over the third card. If it is of higher rank than both the first and the second card, start a new pile to the right of the second card. Otherwise place the third card on top of the card of higher rank. If both the first and second are of higher rank, place the third card on the smaller ranked card. (That is, play cards as far as possible to the left.)
* Continue playing the game, playing cards as far left as possible, until all the cards are turned over.

The object of the game is to end with a small number of piles. Let $`\ell _N(\sigma )`$ equal the number of piles at the end of the game where we started with deck $`\sigma =\{i_1,i_2,\dots ,i_N\}`$. Clearly, $`1\le \ell _N(\sigma )\le N`$, but what are some typical values for a shuffled deck? Starting each time with a newly shuffled deck of $`N=52`$ cards, the computer played patience sorting 100,000 times. Here are the statistics for $`\ell _{52}`$:

* Mean=11.56 (11.00). 
* Standard Deviation=1.37 (1.74)
* Skewness=0.33 (0.22)
* Kurtosis Excess=0.16 (0.09)
* Sample Range = 7 to 19 (Probability 0.993)

where the numbers in parentheses are the asymptotic predictions (as the number of cards tends to infinity) of the theory of J. Baik, P. Deift and K. Johansson to be described below. A shuffled deck of cards, $`\sigma =\{i_1,i_2,\dots ,i_N\}`$, is a permutation of $`\{1,2,\dots ,N\}`$, and so we think of the shuffled deck as a random permutation. A moment’s reflection will convince the reader that $`\ell _N(\sigma )`$ is equal to the length of the longest increasing subsequence in the permutation $`\sigma `$. As a problem in random permutations, determining the asymptotics of $`E(\ell _N)`$ as $`N\to \infty `$ is called Ulam’s Problem. In the 1970’s A. Vershik and S. Kerov and independently B. Logan and L. Shepp showed $`E(\ell _N)\sim 2\sqrt{N}`$, with important earlier work by J. Hammersley. Hammersley’s analysis introduced a certain interacting particle system interpretation. This was developed by Aldous and Diaconis who in 1995 gave a “soft” proof of this result using hydrodynamic scaling arguments from interacting particle theory. Introducing the exponential generating function $$\underset{N\ge 0}{\sum }\text{Prob}(\ell _N\le n)\frac{t^N}{N!},$$ Gessel showed that it is equal to $`D_n(t)`$, the determinant of the $`n\times n`$ Toeplitz matrix with symbol $`e^{\sqrt{t}(z+z^{-1})}`$. (Recall that the $`i,j`$ entry of a Toeplitz matrix equals the $`i-j`$ Fourier coefficient of its symbol.) It is in this work of Gessel and subsequent work of Odlyzko et al. and E. 
Rains , that the methods of random matrix theory first appear in RSK type problems.<sup>12</sup> (<sup>12</sup>Gessel does not mention random matrices, but in light of well-known formulas in random matrix theory relating Toeplitz determinants to expectations over the unitary group, we believe it is fair to say that the connection with random matrix theory begins with this discovery. See, however, Regev .) Starting with this Toeplitz determinant representation, Baik, Deift and Johansson , using the steepest descent method for Riemann-Hilbert problems , derived a delicate asymptotic formula for $`D_n(t)`$ which we now describe. Introduce another parameter $`s`$ and suppose that $`n`$ and $`t`$ are related by $`n=[2t^{1/2}+st^{1/6}]`$. Then as $`t\to\infty`$ with $`s`$ fixed one has $$\lim_{t\to\infty}e^{-t}D_n(t)=F_2(s)$$ where $`F_2(s)`$ is the distribution function (4.2). Using a de-Poissonization lemma due to Johansson , these asymptotics led Baik, Deift and Johansson to the limiting law $$\lim_{N\to\infty}\text{Prob}\left(\frac{\ell_N-2\sqrt{N}}{N^{1/6}}<s\right)=F_2(s).$$ Since the work of Baik, Deift and Johansson, several groups have extended this connection between RSK type combinatorics and the distribution functions of random matrix theory. The aforementioned result is equivalent to the determination of the limiting distribution of the number of boxes in the first row in the RSK correspondence $`\sigma\to(P,Q)`$. In the same paper the authors show that the limiting distribution of the number of boxes in the second row is (when centered and normalized) distributed as the second largest scaled eigenvalue in GUE . They then conjectured that this correspondence extends to all rows. This conjecture was recently proved by A. Okounkov using topological methods and by A. Borodin, A. Okounkov and G. Olshanski and Johansson using analytical methods.
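The game itself is easy to simulate, and the identity between $`\ell_N(\sigma)`$ and the length of the longest increasing subsequence of $`\sigma`$ can be checked directly. A minimal sketch (Python; the deck size and seed are arbitrary, and the subsequence length is recomputed independently by an $`O(N^2)`$ dynamic program):

```python
import random

def patience_piles(perm):
    """Play patience sorting, placing cards as far left as possible;
    return the number of piles."""
    tops = []  # top card of each pile, left to right (always increasing)
    for card in perm:
        placed = False
        for i, t in enumerate(tops):
            if card < t:        # leftmost pile whose top outranks the card
                tops[i] = card
                placed = True
                break
        if not placed:
            tops.append(card)   # start a new pile to the right
    return len(tops)

def lis_length(perm):
    """O(N^2) longest-increasing-subsequence length, as an independent check."""
    if not perm:
        return 0
    best = [1] * len(perm)
    for i in range(len(perm)):
        for j in range(i):
            if perm[j] < perm[i]:
                best[i] = max(best[i], best[j] + 1)
    return max(best)

random.seed(1)
deck = list(range(1, 53))
random.shuffle(deck)
assert patience_piles(deck) == lis_length(deck)
```

The greedy "as far left as possible" rule keeps the pile tops increasing from left to right, which is exactly why the number of piles equals the longest increasing subsequence length.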
Placing restrictions on the permutations $`\sigma`$ (that they be fixed point free and involutions), Baik and Rains have shown that the limiting laws for the length of the longest increasing/decreasing subsequence are now the limiting distributions $`F_1`$ and $`F_4`$ for the scaled largest eigenvalue in GOE and GSE, see (4.1) and (4.3). Generalizing to signed permutations and colored permutations, the present authors and Borodin showed that the distribution functions of the length of the longest increasing subsequence involve the same $`F_2`$. Johansson showed that the shape fluctuations of a certain random growth model, again appropriately scaled, converge in distribution to $`F_2`$. (This random growth model is intimately related to certain randomly growing Young diagrams.) In subsequent work, Johansson showed that the fluctuations in certain random tiling problems (related to the Arctic Circle Theorem) are again described by $`F_2`$. Finally, Johansson and the present authors have considered analogous problems for random words and have discovered various random matrix theory connections.

Acknowledgments

The authors have benefited from conversations with A. Its and it is a pleasure to acknowledge this. The first author thanks J. Harnad and P. Winternitz for their invitation to speak at the workshop Integrable Systems: From Classical to Quantum. This work was supported, in part, by the National Science Foundation through grants DMS–9802122 and DMS–9732687.
# Deriving exact results for Ising-like models from the cluster variation method

## Abstract

The cluster variation method (CVM) is an approximation technique which generalizes the mean field approximation and has been widely applied in the last decades, mainly for finding accurate phase diagrams of Ising-like lattice models. Here we discuss in which cases the CVM can yield exact results, considering: (i) one-dimensional systems and strips (in which case the method reduces to the transfer matrix method), (ii) tree-like lattices and (iii) the so-called disorder points of euclidean lattice models with competitive interactions in more than one dimension.

The cluster variation method (CVM) is a hierarchy of approximation techniques for discrete (Ising-like) classical lattice models, which was invented by Kikuchi . In its modern formulation the CVM is based on the variational principle of equilibrium statistical mechanics, which says that the free energy $`F`$ of a model defined on the lattice $`\mathrm{\Lambda}`$ is given by $$F=\min F[\rho_{\mathrm{\Lambda}}]=\min\mathrm{Tr}(\rho_{\mathrm{\Lambda}}H+\rho_{\mathrm{\Lambda}}\ln\rho_{\mathrm{\Lambda}}),$$ (1) where $`H`$ is the hamiltonian of the model, $`\beta=1`$ for simplicity, and the density matrix $`\rho_{\mathrm{\Lambda}}`$ must be properly normalized: $`\mathrm{Tr}(\rho_{\mathrm{\Lambda}})=1`$. As a first step one usually introduces the cluster density matrices and the cluster entropies $$\rho_{\alpha}=\mathrm{Tr}_{\mathrm{\Lambda}\setminus\alpha}\,\rho_{\mathrm{\Lambda}},\qquad S_{\alpha}=-\mathrm{Tr}(\rho_{\alpha}\ln\rho_{\alpha}),$$ (2) where $`\alpha`$ is a cluster of $`n_{\alpha}`$ sites and $`\mathrm{Tr}_{\mathrm{\Lambda}\setminus\alpha}`$ denotes a summation over all degrees of freedom except those belonging to the cluster $`\alpha`$.
One then introduces the cumulant expansion of the cluster entropies $$S_{\alpha}=\sum_{\beta\subseteq\alpha}\widetilde{S}_{\beta},\qquad \widetilde{S}_{\beta}=\sum_{\alpha\subseteq\beta}(-1)^{n_{\beta}-n_{\alpha}}S_{\alpha},$$ (3) in terms of which the variational free energy can be rewritten as $$F[\rho_{\mathrm{\Lambda}}]=\mathrm{Tr}(\rho_{\mathrm{\Lambda}}H)-\sum_{\beta\subseteq\mathrm{\Lambda}}\widetilde{S}_{\beta}.$$ (4) The above steps are all exact and the approximation defining the CVM comes in when one truncates the cumulant expansion of the entropy. The sum of the cumulants of the cluster entropies is restricted to a given set $`M`$ of clusters, which in most cases can be thought of as a set of maximal clusters and all their subclusters. If the model under consideration has only short range interactions and the maximal clusters are sufficiently large the hamiltonian can be decomposed into a sum of cluster contributions and the approximate variational free energy takes the form $$F[\{\rho_{\alpha},\alpha\in M\}]\simeq\sum_{\alpha\in M}\left[\mathrm{Tr}(\rho_{\alpha}H_{\alpha})-\widetilde{S}_{\alpha}\right]=\sum_{\alpha\in M}\left[\mathrm{Tr}(\rho_{\alpha}H_{\alpha})-a_{\alpha}S_{\alpha}\right],$$ (5) where the coefficients $`a_{\alpha}`$ can be easily obtained from the set of linear equations $$\sum_{\beta\subseteq\alpha\in M}a_{\alpha}=1,\qquad\forall\beta\in M$$ (6) and the cluster density matrices must satisfy the following conditions which express normalization $$\mathrm{Tr}\,\rho_{\alpha}=1,\qquad\forall\alpha\in M$$ (7) and compatibility $$\rho_{\alpha}=\mathrm{Tr}_{\beta\setminus\alpha}\,\rho_{\beta},\qquad\forall\alpha\subseteq\beta\in M.$$ (8) Having introduced an approximation it is worth asking whether there are special cases in which it turns out to be exact.
The simplest example is that of a system defined on a lattice $`\mathrm{\Lambda}`$ which can be regarded as the union of two clusters $`\mathrm{\Lambda}=A\cup B`$, such that, denoting by $`K=A\cap B`$ their intersection, there is no interaction between $`A^{\prime}=A\setminus K`$ and $`B^{\prime}=B\setminus K`$. In this case the hamiltonian has the general form $`H=H_A(\underline{\sigma}_A)+H_B(\underline{\sigma}_B)`$ and it is easy to check that the density matrix can be written as $`\rho_{\mathrm{\Lambda}}=\frac{\rho_A\rho_B}{\rho_K}`$, which in turn implies the decomposition $`S_{\mathrm{\Lambda}}=S_A+S_B-S_K`$ for the entropy. The CVM approximation which one obtains with the set of clusters $`M=\{A,B,K\}`$ leads to the same decomposition of the entropy (eq. 6 yields $`a_A=a_B=1`$, $`a_K=-1`$) and is therefore exact. It can be verified that this argument can be easily generalized (to several clusters sharing a common intersection) and/or iterated (to a chain of clusters $`A,B,C,\ldots`$). The above argument could be used to explain the well-known fact that CVM approximations are exact for Bethe and cactus lattices (that is, interiors of Cayley and Husimi trees, respectively), which are made of links (respectively plaquettes) sharing common sites, with no loops (respectively no loops larger than the elementary plaquette). However we shall leave apart tree-like lattices and turn our attention to euclidean ones, considering first one-dimensional systems (strips) and then the disorder points of higher dimensional models. Since it is known that the Bethe-Peierls approximation (which is the lowest level CVM approximation for a model with nearest-neighbour interactions only, obtained by taking $`M=\{\mathrm{links},\mathrm{sites}\}`$) is exact for a one-dimensional chain, one might wonder whether there is a CVM approximation which is exact for a strip of finite width. The answer is affirmative and it is interesting to note that one recovers the transfer matrix formalism.
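The linear system (6) can be solved by a simple downward recursion from the maximal clusters, $`a_{\beta}=1-\sum_{\alpha\supsetneq\beta,\,\alpha\in M}a_{\alpha}`$. A small sketch (Python; the clusters are the $`A`$, $`B`$, $`K`$ of the example above, with arbitrary site labels):

```python
def cvm_coefficients(clusters):
    """Solve  sum over {alpha in M : alpha contains beta} of a_alpha = 1
    for every beta in M, proceeding from the largest clusters downward."""
    M = [frozenset(c) for c in clusters]
    a = {}
    for beta in sorted(M, key=len, reverse=True):
        # strict supersets of beta are larger, hence already computed
        a[beta] = 1 - sum(a[alpha] for alpha in M if beta < alpha)
    return a

# Two overlapping clusters and their intersection, as in the text:
A, B = frozenset({1, 2, 3}), frozenset({3, 4, 5})
K = A & B
a = cvm_coefficients([A, B, K])
assert (a[A], a[B], a[K]) == (1, 1, -1)
```

Iterating the construction over a chain of clusters gives $`a=1`$ for every maximal cluster and $`a=-1`$ for every internal intersection, reproducing the entropy decomposition used above.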
Consider a strip of width $`N`$ and (finite, for the moment) length $`L`$ and let the hamiltonian contain only translation-invariant NN interactions, with open boundary conditions. In the above scheme, this is an example of a chain of intersecting clusters and we can guess that a CVM approximation with the $`N\times 2`$ ladders as maximal clusters should be exact. Denoting by II such clusters and by I their $`N\times 1`$ intersections we set $`M=\{\mathrm{II},\mathrm{I}\}`$ (no other subclusters enter the cumulant expansion) and in the thermodynamic limit $`L\to\infty`$, assuming that translational invariance is recovered, we get the variational principle $$f=\lim_{L\to\infty}\frac{F}{L}=\min\mathrm{Tr}\left(\rho_{\mathrm{II}}H_{\mathrm{II}}+\rho_{\mathrm{II}}\ln\rho_{\mathrm{II}}-\rho_{\mathrm{I}}\ln\rho_{\mathrm{I}}\right).$$ (9) Denoting by $`\underline{\sigma}`$ and $`\underline{\sigma}^{\prime}`$ the two sets of degrees of freedom of the two I subclusters of a II cluster we can solve for $`\rho_{\mathrm{II}}`$ and recover the transfer matrix formalism in the form $$f=-\ln\max\sum_{\underline{\sigma},\underline{\sigma}^{\prime}}\rho_{\mathrm{I}}^{1/2}(\underline{\sigma})\,\mathrm{e}^{-H_{\mathrm{II}}(\underline{\sigma},\underline{\sigma}^{\prime})}\,\rho_{\mathrm{I}}^{1/2}(\underline{\sigma}^{\prime})$$ (10) with the normalization constraint $`\sum_{\underline{\sigma}}\rho_{\mathrm{I}}(\underline{\sigma})=1`$. It is interesting to note that the CVM comes with a natural fixed point algorithm for finding the local minima of the free energy, which in this case reduces to the power method for finding the largest eigenvalue of the transfer matrix. The last (and perhaps the most interesting) case we want to consider is that of disorder points.
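Before moving on, the power-method remark is easy to check on the narrowest strip, $`N=1`$, i.e. the plain Ising chain. A toy sketch (Python; assumptions: zero field, $`\beta=1`$, and $`f=-\ln\lambda_{\max}`$, so the exact answer is $`f=-\ln(2\cosh K)`$):

```python
import math

def free_energy_power_method(K, iters=200):
    """Largest transfer-matrix eigenvalue by power iteration for the
    zero-field 1D Ising chain (beta = 1); returns f = -ln(lambda_max)."""
    T = [[math.exp(K), math.exp(-K)],
         [math.exp(-K), math.exp(K)]]
    v = [1.0, 1.0]
    lam = 1.0
    for _ in range(iters):
        w = [T[0][0] * v[0] + T[0][1] * v[1],
             T[1][0] * v[0] + T[1][1] * v[1]]
        lam = max(abs(x) for x in w)   # normalization factor -> eigenvalue
        v = [x / lam for x in w]
    return -math.log(lam)

K = 0.7
exact = -math.log(2.0 * math.cosh(K))  # standard 1D Ising result
assert abs(free_energy_power_method(K) - exact) < 1e-10
```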
As an example we consider the square lattice Ising model with competitive interactions, with hamiltonian $$H=-K_1\sum_{\langle ij\rangle}\sigma_i\sigma_j-K_2\sum_{\langle\langle ij\rangle\rangle}\sigma_i\sigma_j-K_4\sum_{[ijkl]}\sigma_i\sigma_j\sigma_k\sigma_l,$$ (11) where $`K_1>0`$ is the NN coupling, $`K_2<0`$ the next nearest neighbour coupling and $`K_4`$ the plaquette coupling. It is known that in the disordered phase of this model there is an integrable subspace given by $$\mathrm{cosh}(2K_1)=\frac{\mathrm{e}^{2K_4}\mathrm{cosh}(4K_2)+\mathrm{e}^{2K_2}}{\mathrm{e}^{2K_2}+\mathrm{e}^{2K_4}},$$ (12) where the free energy density is given by $$f=-\ln\left[\mathrm{exp}(K_4)+\mathrm{exp}(K_4-2K_2)\right].$$ (13) In this subspace the $`R`$ matrix has an eigenvector which is a pure tensorial product and the eigenvector of the transfer matrix corresponding to the largest eigenvalue is also a pure tensorial product , and hence the density matrix and the two-site correlations are factorized. Because of this factorization (and the corresponding decomposition of the entropy) one can expect that the model can be solved exactly by the CVM and indeed this is the case. It is enough to choose $`M`$ = {plaquettes and their subclusters} (of course larger maximal clusters work as well) to verify eq.
13 and also to calculate the two-site correlation functions $`\mathrm{\Gamma}(x,y)=\langle\sigma(x_0,y_0)\,\sigma(x_0+x,y_0+y)\rangle`$: $$\mathrm{\Gamma}(x,y)=g^{|x|+|y|},\qquad g=\frac{\mathrm{exp}(4K_2)-\mathrm{cosh}(2K_1)}{\mathrm{sinh}(2K_1)}$$ (14) as well as many-site correlation functions like the plaquette correlation function $`q=\langle\sigma_i\sigma_j\sigma_k\sigma_l\rangle`$ $$q=\frac{\mathrm{e}^{4K_4}\left(1-\mathrm{e}^{8K_2}\right)+4\mathrm{e}^{2K_2}\left(\mathrm{e}^{2K_4}-\mathrm{e}^{2K_2}\right)}{\mathrm{e}^{4K_4}\left(1-\mathrm{e}^{8K_2}\right)+4\mathrm{e}^{2K_2}\left(\mathrm{e}^{2K_4}+\mathrm{e}^{2K_2}\right)}.$$ (15) Details, and generalizations to other models, will be reported elsewhere .
# On the Origin of the Violation of Hara’s Theorem for Conserved Current

## 1 Introduction

In 1964 Hara proved a theorem , according to which the parity-violating amplitude of the $`\mathrm{\Sigma}^+\to p\gamma`$ decay should vanish in the limit of exact SU(3) symmetry. The assumptions used in the proof were fundamental. Over the years, however, there appeared several theoretical, phenomenological and experimental indications that, despite the proof, Hara’s theorem may be violated. Quark model calculations of Kamal and Riazuddin , the VDM prescription and experiment provide such hints. In particular, only those models that violate Hara’s theorem provide a reasonably good description of the overall body of experimental data on weak radiative hyperon decays , as it stands now. Obviously, if Hara’s theorem is violated in Nature it follows that at least one of its fundamental assumptions is not true. This in turn means that some unorthodox and totally new physics must manifest itself in weak radiative hyperon decays (WRHD). Although in general any non-orthodox physics should be avoided as long as possible, the problem with WRHD is that we are on the verge of being forced to accept it. Namely, there exists a clean experimental way of distinguishing between the orthodox and nonorthodox physics. The decisive measurable parameter is the asymmetry of the $`\mathrm{\Xi}^0\to\mathrm{\Lambda}\gamma`$ decay. Its absolute value is expected to be large (of order $`0.7`$) independently of the type of physics involved. One may show that the sign of the asymmetry should be negative (positive) if physics is orthodox (unorthodox). The present experimental number is $`+0.43\pm 0.44`$, almost $`3\sigma`$ away from the orthodox prediction. Of course the relevant experiment may have been performed or analysed incorrectly. However, this is just one (though the most crucial) of the hints against Hara’s theorem.
Other hints, more theoretical in nature, are provided by the calculations in the naive quark model and by the VDM approach in which VDM was combined with our present knowledge of the parity violating weak coupling of vector mesons to nucleons . There is a growing agreement that the calculation originally presented by Kamal and Riazuddin (KR) is technically completely correct . (However, there is no consensus as to the meaning of the KR result .) The VDM approach is based on two pillars: VDM itself and the DDH paper on nuclear parity violation , in which parity violating weak couplings of mesons to nucleons are discussed. The DDH paper forms the foundation of our present understanding of the whole subject of nuclear parity violation, with the basis of the paper hardly to be questioned . Similarly, VDM has an extraordinary success record in low energy physics. If Hara’s theorem is correct, at least one of the above two pillars of the VDM approach to WRHD must be incorrect. This would be an important discovery in itself. Given this situation, I think it is a timely problem to pinpoint precisely what it is that might lead to the violation of Hara’s theorem. Some conjectures in this connection were presented in (and even earlier, see references cited therein). These conjectures pointed at the assumption of locality. In fact, in a recent Comment it was shown that one can obtain violation of Hara’s theorem for conserved current provided the current is not sufficiently well localized. As proved in , the Hara’s-theorem-violating contribution comes from $`r=\infty`$. However, as the example of the Reply to my Comment shows, the content and implications of the Comment are not always understood. Therefore, in this paper I will try to shed some additional light on the problem.
Before I discuss the question of the implications of current (non)locality for Hara’s theorem I will show that the argument raised in against the technical correctness of the KR calculation is logically incorrect. After disposing of the argument against the technical correctness of the KR calculation I will present a simple example in which current conservation alone does not ensure that Hara’s theorem holds, unless an additional physical assumption is made. Then, I will proceed to discuss the main relevant point made in ref.. In fact, ref. agrees with my standpoint that any violation of Hara’s theorem must result from a new phenomenon. However, the identification of the origin of this phenomenon proposed therein is mathematically incorrect. This shall be proved below in several ways. In the final remarks I will stress once again that the resolution of the whole issue (in favour of Hara’s theorem or against it) can be settled once and for all by experiment, that is by a measurement of the asymmetry of the $`\mathrm{\Xi}^0\to\mathrm{\Lambda}\gamma`$ decay.

## 2 Conservation of the nonrelativistic current

In ref. Kamal and Riazuddin obtain a gauge-invariant, current-conserving covariant amplitude. Ref. accepts the correctness of their calculation up to this point. The claim of ref. is that the authors of incorrectly perform the nonrelativistic reduction, thereby violating current conservation. According to ref. this may be seen from Eq.(13) of ref. , which is of the form $`H_{PV}\propto\epsilon\cdot(\sigma_1\times\sigma_2)`$. In this equation the current seems to be of the form $$\mathbf{J}=\sigma_1\times\sigma_2$$ (1) and is not transverse as it should have been for a conserved current. This claim is logically incorrect. Eq.(13) of ref. is obtained after both performing the nonrelativistic reduction and choosing the Coulomb gauge $`\epsilon\cdot\widehat{\mathbf{q}}=0`$ ($`\widehat{\mathbf{q}}=\mathbf{q}/|\mathbf{q}|`$).
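The point is pure three-vector algebra and can be checked numerically: for any polarization $`\epsilon`$ with $`\epsilon\cdot\widehat{\mathbf{q}}=0`$, the transverse combination $`\epsilon\cdot(\mathbf{J}-\widehat{\mathbf{q}}(\mathbf{J}\cdot\widehat{\mathbf{q}}))`$ and the naive $`\epsilon\cdot\mathbf{J}`$ coincide. A throwaway sketch (Python; the random vectors are stand-ins for $`\sigma_1\times\sigma_2`$ and $`\mathbf{q}`$):

```python
import math
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(0)
for _ in range(100):
    J = [random.uniform(-1, 1) for _ in range(3)]  # stand-in for sigma1 x sigma2
    q = [random.uniform(-1, 1) for _ in range(3)]
    qn = math.sqrt(dot(q, q))
    qhat = [a / qn for a in q]
    # build a polarization vector transverse to qhat (Coulomb gauge)
    eps = [random.uniform(-1, 1) for _ in range(3)]
    c = dot(eps, qhat)
    eps = [e - c * h for e, h in zip(eps, qhat)]
    # J - qhat (J . qhat): the transverse projection of J
    proj = [Ji - dot(J, qhat) * h for Ji, h in zip(J, qhat)]
    assert abs(dot(eps, proj) - dot(eps, J)) < 1e-12
```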
The origin of the lack of transversity of the ”current” $`\mathbf{J}`$ in Eq.(1) is not the nonrelativistic reduction but the choice of the Coulomb gauge $`\epsilon\cdot\widehat{\mathbf{q}}=0`$, i.e. the restriction to transverse degrees of freedom only. By choosing the Coulomb gauge we restrict the allowed $`\epsilon`$ to be transverse only. It is then incorrect to replace $`\epsilon`$ by (longitudinal) $`\widehat{\mathbf{q}}`$. In other words, the correct form of the current-photon interaction insisted upon in ref., i.e. $$\epsilon\cdot\left(\sigma_1\times\sigma_2-\widehat{\mathbf{q}}\,[(\sigma_1\times\sigma_2)\cdot\widehat{\mathbf{q}}]\right)$$ (2) after choosing the Coulomb gauge $`\epsilon\cdot\widehat{\mathbf{q}}=0`$ reduces to Eq.(13) of ref.. Hence, from the form $`\epsilon\cdot(\sigma_1\times\sigma_2)`$ obtained in ref. after choosing the Coulomb gauge one cannot conclude that the current is $`\mathbf{J}=\sigma_1\times\sigma_2`$ and therefore that the nonrelativistic reduction was performed incorrectly. Having proved that the argument against the KR calculation presented in ref. is logically incorrect, we proceed to the issue of current (non)locality.

## 3 A simple example

Let us consider the well-known concept of partially conserved axial current (PCAC). According to this idea the axial current is approximately conserved, with its divergence proportional to the pion mass squared. The weak axial current becomes divergenceless when the pion mass goes to zero, a situation obtained in the quark model with massless quarks. Thus, one may have a nonvanishing coupling of a vector boson to an axial conserved current and a nonvanishing transverse electric dipole moment, i.e. violation of Hara’s theorem. The price one has to pay to achieve this in the above example is the introduction of massless pions. A massless pion corresponds to an interaction of infinite range - the pion may propagate to spatial infinity.
Thus, vice versa, if one obtains a nonvanishing transverse electric dipole moment in a gauge-invariant calculation (the KR case) this suggests that the relevant current contains a piece that does not vanish at infinity sufficiently fast but resembles the pion contribution in the example above. In other words one expects that something happens at spatial infinity. Of course, for Hara’s theorem to be violated, the mechanism providing the necessary nonlocality must be different from the particular one discussed above. After all, no massless hadrons exist. Consequently, current nonlocality would have to constitute an intrinsic feature of baryons. It might result from baryon compositeness: it is known that composite quantum states may exhibit nonlocal features. In this paper we will not pursue this line of thought any further since here we are primarily interested in proving beyond any doubt that nonlocality is crucial, but not in discussing its deeper justification and implications. Such a discussion will appear timely and desirable if new experiments confirm the positive sign of the $`\mathrm{\Xi}^0\to\mathrm{\Lambda}\gamma`$ asymmetry. Ref. accepts that the current specified in ref. is conserved and that nonetheless it yields a nonzero value of the electric dipole moment in question. However, it is alleged that this nonzero result originates from $`r=0`$ (and not from spatial infinity). In view of the example given above this claim should be suspected as incorrect. In fact its mathematical incorrectness can be proved. Let us therefore see where the arguments of ref. break down.

## 4 The origin of the nonzero contribution to the transverse electric dipole moment

In ref.
it is shown that for the current of the form $$\mathbf{J}_5^{\epsilon}(\mathbf{r})=[\sigma-(\sigma\cdot\widehat{\mathbf{r}})\widehat{\mathbf{r}}]\,\delta_{\epsilon}^3(\mathbf{r})+\frac{1}{2\pi r^2}[\sigma-3(\sigma\cdot\widehat{\mathbf{r}})\widehat{\mathbf{r}}]\,\delta_{\epsilon}(r)-\frac{1}{4\pi r^3}[\sigma-3(\sigma\cdot\widehat{\mathbf{r}})\widehat{\mathbf{r}}]\,\mathrm{erf}\left(\frac{r}{2\sqrt{\epsilon}}\right)$$ (3) where $`\mathrm{erf}(x)=\frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\,dt`$ is the error function, $`\widehat{\mathbf{r}}=\mathbf{r}/r`$, $`r=|\mathbf{r}|`$ and $`\epsilon\to 0`$, the transverse electric dipole moment is given by $$T_{1M}^{el}=\lim_{\epsilon\to 0}\frac{iq}{2\pi\sqrt{2}}\int_0^{\infty}dr\,\mathrm{erf}\left(\frac{r}{2\sqrt{\epsilon}}\right)j_1(qr)\int d\mathrm{\Omega}_{\widehat{\mathbf{r}}}\,(\sigma\cdot\widehat{\mathbf{r}})\,Y_{1M}(\widehat{\mathbf{r}})$$ (4) and is nonzero. The question is where this nonzero result comes from. Ref. (ref.) claim that the whole contribution is from $`r=\infty`$ (respectively $`r=0`$). We shall show that the claim of ref. is mathematically incorrect. The Reply is based on the (true) equality (Eqs.(3,4) therein) $$\alpha=\lim_{\epsilon\to 0}q\int_0^{\infty}dr\,j_1(qr)\,\mathrm{erf}\left(\frac{r}{2\sqrt{\epsilon}}\right)=\left(\frac{2}{q}\right)\int_0^{\infty}dz\,j_0(z)\,\delta\!\left(\frac{z}{q}\right)$$ (5) in which the left-hand side (l.h.s.) is the original integral appearing in the expression for the electric dipole moment, from which it was concluded in ref. that violation of Hara’s theorem originates from $`r=\infty`$. The Reply further claims that as one has to perform the integral first, and only then take the limit $`\epsilon\to 0`$, it can be seen from the right-hand side of Eq.(5) that in the limit $`\epsilon\to 0`$ the integral on the left-hand side receives all its contribution from the point $`r=0`$. That this claim is mathematically incorrect can be seen in many ways. We shall deal with the integral on the left-hand side directly since equality of definite integrals does not mean that the integrands are identical.
In particular integration by parts used to arrive at the r.h.s. of Eq.(5) may change the region from which the value of the integral comes, as should be obvious from the following example: $$\int_0^{\infty}dx\,\mathrm{exp}(-x)\,\theta(x-\epsilon)=-\mathrm{exp}(-x)\,\theta(x-\epsilon)\Big|_0^{\infty}+\int_0^{\infty}dx\,\mathrm{exp}(-x)\,\delta(x-\epsilon)=\int_0^{\infty}dx\,\mathrm{exp}(-x)\,\delta(x-\epsilon)$$ (6) Clearly, the integral on the l.h.s. of Eq.(6) does not receive all its contribution from the point $`x=\epsilon`$ while the r.h.s. does. Let us therefore concentrate on the l.h.s. of Eq.(5) since it is the integrand on the l.h.s. which has a physical meaning.

a) Mathematical proof

For any finite $`\epsilon`$ the integrand on the l.h.s. of Eq.(5) vanishes for $`r=0`$ since $`j_1(0)=\mathrm{erf}(0/(2\sqrt{\epsilon}))=0`$. Consequently, already the most naive argument seems to show that the point $`r=0`$ does not contribute in the limit $`\epsilon\to 0`$ at all. Should one be concerned with the neighbourhood of the point $`r=0`$, we notice that both functions $`j_1(qr)`$ and $`\mathrm{erf}(r/(2\sqrt{\epsilon}))`$ are bounded for any $`q`$, $`r`$, $`\epsilon`$ of interest. Consequently, the integrand on the left-hand side of Eq.(5) is bounded by $`\max_{0\le z<\infty}j_1(z)\equiv M<\infty`$. Hence, the contribution from any interval $`[0,\mathrm{\Delta}]`$ ($`0\le\mathrm{\Delta}\le 1`$) is bounded by $`q\int_0^{\mathrm{\Delta}}dr\,M=q\mathrm{\Delta}M`$ and vanishes when $`q\mathrm{\Delta}\to 0`$. From the mathematical point of view the incorrectness of ref. is thus proved. For further clarification, however, the following two points may be consulted. Point b) below provides a simple and intuitive visual demonstration of what happens on the l.h.s. of Eq.(5) in the limit $`\epsilon\to 0`$. In point c) the integral is actually performed before taking the limit $`\epsilon\to 0`$, the procedure considered in ref. to be correct.
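The toy example (6) is also easy to probe numerically: the θ-form of the integral equals $`e^{-\epsilon}`$ but accumulates its value over the whole range $`x\gtrsim\epsilon`$, not at the single point $`x=\epsilon`$. A sketch (Python, simple midpoint rule; the cutoffs are arbitrary):

```python
import math

def integral_theta(eps, upper=50.0, n=200000):
    """Midpoint rule for the integral of exp(-x)*theta(x - eps) over [0, upper]."""
    h = upper / n
    s = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        if x > eps:            # theta(x - eps)
            s += math.exp(-x) * h
    return s

eps = 1e-3
total = integral_theta(eps)                         # exact value: exp(-eps)
near = integral_theta(eps, upper=10 * eps, n=2000)  # piece coming from [0, 10*eps]
assert abs(total - math.exp(-eps)) < 1e-3
assert total > 0.99 and near < 0.01   # almost nothing comes from x near eps
```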
b) Intuitive ”proof”

The integral on the left of Eq.(5) can be evaluated for any $`\epsilon`$ (formula 2.12.49.6 in ref.) and one obtains $$q\int_0^{\infty}dr\,j_1(qr)\,\mathrm{erf}\left(\frac{r}{2\sqrt{\epsilon}}\right)=\frac{\sqrt{\pi}}{2q}\frac{1}{\sqrt{\epsilon}}\,\mathrm{erf}(q\sqrt{\epsilon})$$ (7) which for small $`q\sqrt{\epsilon}`$ is equal to $$1-\frac{q^2\epsilon}{3}+O((q^2\epsilon)^2)$$ (8) This approach to $`1`$ from below (when $`q^2\epsilon\to 0`$) can be seen from a series of plots shown in Fig.1. In Fig.1 one can see that for small $`q\sqrt{\epsilon}`$ the integrand in Eq.(5) differs significantly from $`j_1(qr)`$ only for very small $`qr<q\mathrm{\Delta}`$, where the integrand is smaller than $`j_1(qr)`$. It is also seen that in the limit $`q\sqrt{\epsilon}\to 0`$ the contribution from the region of small $`qr`$ grows (thus the whole integral grows in agreement with Eq.(8)) but never exceeds the integral $`\int_0^{\mathrm{\Delta}}q\,dr\,j_1(qr)`$. It is intuitively obvious that the latter integral is smaller than $`j_1(q\mathrm{\Delta})q\mathrm{\Delta}`$ and cannot yield the value $`1`$ in Eq.(8) for $`\mathrm{\Delta}\to 0`$! For more details consult point (c2) below.

c) Doing integrals first

Should one be not satisfied for any reasons with the above two arguments, and insist that one has to perform the integral first, an appropriate rigorous proof of mathematical incorrectness of ref. follows. In this proof the integral is performed before taking the limit $`\epsilon\to 0`$, as argued in ref. to be the only correct procedure. Let us divide the integral on the left-hand side of Eq.(5) into two contributions: $$\lim_{\epsilon\to 0}\left[\int_0^{\mathrm{\Delta}}dr\,q\,j_1(qr)\,\mathrm{erf}\left(\frac{r}{2\sqrt{\epsilon}}\right)+\int_{\mathrm{\Delta}}^{\infty}dr\,q\,j_1(qr)\,\mathrm{erf}\left(\frac{r}{2\sqrt{\epsilon}}\right)\right]$$ (9) where $`\mathrm{\Delta}`$ is finite, but otherwise arbitrary: $`0<\mathrm{\Delta}<\infty`$.
According to ref., the whole contribution to the integral on the left-hand side of Eq.(5) comes from the point $`r=0`$ when the limit $`\epsilon\to 0`$ is taken after evaluating the integral. Hence, the whole contribution to the left-hand side of Eq.(5) should come from the first term in Eq.(9), i.e. from $$f_{[0,\mathrm{\Delta}]}(q,\epsilon)\equiv\int_0^{\mathrm{\Delta}}dr\,q\,j_1(qr)\,\mathrm{erf}\left(\frac{r}{2\sqrt{\epsilon}}\right)$$ (10) when the limit $`\epsilon\to 0`$ is taken after evaluating the integral.

c1) Let us therefore estimate the integral $`f_{[0,\mathrm{\Delta}]}(q,\epsilon)`$. Integrating by parts we obtain $$f_{[0,\mathrm{\Delta}]}(q,\epsilon)=-\frac{1}{q}j_0(q\mathrm{\Delta})\frac{2}{\sqrt{\pi}}\int_0^{\mathrm{\Delta}/(2\sqrt{\epsilon})}\mathrm{exp}(-t^2)\,dt+\frac{1}{q}j_0(q\cdot 0)\frac{2}{\sqrt{\pi}}\int_0^{0/(2\sqrt{\epsilon})}\mathrm{exp}(-t^2)\,dt+\frac{2}{\sqrt{\pi}}\frac{1}{2\sqrt{\epsilon}}\frac{1}{q}\int_0^{\mathrm{\Delta}}dr\,j_0(qr)\,\mathrm{exp}(-r^2/(4\epsilon))$$ (11) Since we take the limit $`\epsilon\to 0`$ only after evaluating the integral, the second term above vanishes. Thus $$f_{[0,\mathrm{\Delta}]}(q,\epsilon)=\frac{1}{q}\frac{2}{\sqrt{\pi}}\int_0^{\mathrm{\Delta}/(2\sqrt{\epsilon})}dt\,\mathrm{exp}(-t^2)\,\left(j_0(2q\sqrt{\epsilon}\,t)-j_0(q\mathrm{\Delta})\right)$$ (12) Consequently $$|f_{[0,\mathrm{\Delta}]}(q,\epsilon)|\le\frac{1}{q}\frac{2}{\sqrt{\pi}}\int_0^{\mathrm{\Delta}/(2\sqrt{\epsilon})}dt\,\mathrm{exp}(-t^2)\,|j_0(2q\sqrt{\epsilon}\,t)-j_0(q\mathrm{\Delta})|$$ (13) We are ultimately interested in the limit $`q\to 0`$. Hence, let us take $`q\mathrm{\Delta}\ll 1`$. This may be assumed for any finite $`\mathrm{\Delta}`$.
Since $`0\le 2\sqrt{\epsilon}\,t\le\mathrm{\Delta}`$, and the function $`j_0(z)`$ is monotonically decreasing for $`z\le 1`$ it follows that $$|j_0(2q\sqrt{\epsilon}\,t)-j_0(q\mathrm{\Delta})|\le|j_0(0)-j_0(q\mathrm{\Delta})|$$ (14) Hence, for $`q\ll 1/\mathrm{\Delta}`$ we have $$|f_{[0,\mathrm{\Delta}]}(q,\epsilon)|\le\frac{1}{q}\frac{2}{\sqrt{\pi}}\int_0^{\mathrm{\Delta}/(2\sqrt{\epsilon})}dt\,\mathrm{exp}(-t^2)\,|j_0(0)-j_0(q\mathrm{\Delta})|\le\frac{1}{q}\frac{2}{\sqrt{\pi}}\int_0^{\infty}dt\,\mathrm{exp}(-t^2)\,|j_0(0)-j_0(q\mathrm{\Delta})|=\mathrm{\Delta}\left|\frac{j_0(q\mathrm{\Delta})-j_0(0)}{q\mathrm{\Delta}}\right|$$ (15) For finite $`\mathrm{\Delta}`$, in the limit $`q\to 0`$, the factor under the sign of modulus is the definition of the derivative of $`j_0`$ at $`0`$, i.e. $$\lim_{q\to 0}|f_{[0,\mathrm{\Delta}]}(q,\epsilon)|\le\mathrm{\Delta}\,|j_1(0)|$$ (16) Since $`j_1(0)=0`$ we conclude that for any finite $`\mathrm{\Delta}`$ one has $`\lim_{q\to 0}|f_{[0,\mathrm{\Delta}]}(q,\epsilon)|=0`$, and that this occurs for any finite $`\epsilon`$. We now take the limit $`\epsilon\to 0`$ and obviously obtain $`\lim_{\epsilon\to 0}(\lim_{q\to 0}|f_{[0,\mathrm{\Delta}]}(q,\epsilon)|)=0`$. This directly contradicts the claim of ref.. It is also seen that only for $`\mathrm{\Delta}=\infty`$ the above proof does not go through because then $`q\mathrm{\Delta}`$ is $`\infty`$ for any finite $`q`$, and $`|j_0(0)-j_0(q\mathrm{\Delta})|=|j_0(0)-j_0(\infty)|=|j_0(0)|=1`$. Thus, since for any finite $`\mathrm{\Delta}`$ the contribution to the first term in Eq.(9) is $`0`$ in the limit of $`q\to 0`$, the whole contribution must come from the second term in Eq.(9). Since $`\mathrm{\Delta}`$ is arbitrary, the contribution comes from $`r=\infty`$. This can be checked by a direct evaluation of the second term in Eq.(9) for any finite $`\mathrm{\Delta}`$.
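The conclusion can also be checked numerically: at small $`\epsilon`$ the contribution of $`[0,\mathrm{\Delta}]`$ approaches $`1-j_0(q\mathrm{\Delta})`$, which vanishes as $`q\mathrm{\Delta}\to 0`$. A sketch (Python; spherical Bessel functions written out explicitly, midpoint quadrature, parameter values arbitrary):

```python
import math

def j0(z):
    """Spherical Bessel function j0(z) = sin(z)/z."""
    return 1.0 if z == 0.0 else math.sin(z) / z

def j1(z):
    """Spherical Bessel function j1(z) = sin(z)/z^2 - cos(z)/z."""
    return 0.0 if z == 0.0 else math.sin(z) / z**2 - math.cos(z) / z

def f_interval(q, delta, eps, n=20000):
    """Midpoint rule for the integral of q*j1(q*r)*erf(r/(2*sqrt(eps)))
    over [0, delta], i.e. the first term in Eq.(9)."""
    h = delta / n
    s = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        s += q * j1(q * r) * math.erf(r / (2.0 * math.sqrt(eps))) * h
    return s

q, delta, eps = 0.05, 1.0, 1e-8   # tiny eps mimics the eps -> 0 limit
val = f_interval(q, delta, eps)
assert abs(val - (1.0 - j0(q * delta))) < 1e-5
assert val < 1e-3                 # the [0, delta] piece is tiny for small q*delta
```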
c2) Should someone be not convinced by the procedure of bounding the integrand in Eq.(13), one can perform the integral in Eq.(10) directly. Denoting $`\delta=q\mathrm{\Delta}`$, $`\epsilon^{\prime}=q\sqrt{\epsilon}`$ we have $$f_{[0,\mathrm{\Delta}]}(q,\epsilon)=\int_0^{\delta}dz\,j_1(z)\,\mathrm{erf}\left(\frac{z}{2\epsilon^{\prime}}\right)=-j_0(\delta)\,\mathrm{erf}\left(\frac{\delta}{2\epsilon^{\prime}}\right)+\int_0^{\delta}dz\,j_0(z)\,\frac{2}{\sqrt{\pi}}\,\mathrm{exp}\left(-\frac{z^2}{4\epsilon^{\prime 2}}\right)\frac{1}{2\epsilon^{\prime}}$$ (17) For small $`\epsilon^{\prime}`$ the second term on the r.h.s. above receives contributions from small $`z`$ only. Therefore we may expand $`j_0(z)`$ around $`z=0`$: $$j_0(z)\simeq 1-\frac{1}{6}z^2+\ldots$$ (18) and perform the integrations. We obtain $$\frac{2}{\sqrt{\pi}}\frac{1}{2\epsilon^{\prime}}\int_0^{\delta}dz\,(1-z^2/6)\,\mathrm{exp}(-z^2/(4\epsilon^{\prime 2}))=\frac{2}{\sqrt{\pi}}\int_0^{\delta/(2\epsilon^{\prime})}dt\,\mathrm{exp}(-t^2)-\frac{1}{6}\frac{2}{\sqrt{\pi}}(2\epsilon^{\prime})^2\int_0^{\delta/(2\epsilon^{\prime})}dt\,t^2\,\mathrm{exp}(-t^2)$$ (19) The integral in the second term in Eq.(19) may be evaluated as $$\frac{2}{\sqrt{\pi}}\left[-\frac{d}{d\lambda}\int_0^{\delta/(2\epsilon^{\prime})}dt\,\mathrm{exp}(-\lambda t^2)\right]_{\lambda=1}=-\frac{d}{d\lambda}\left[\lambda^{-1/2}\,\mathrm{erf}\left(\frac{\delta\lambda^{1/2}}{2\epsilon^{\prime}}\right)\right]_{\lambda=1}=\frac{1}{2}\,\mathrm{erf}\left(\frac{\delta}{2\epsilon^{\prime}}\right)-\frac{2}{\sqrt{\pi}}\frac{\delta}{4\epsilon^{\prime}}\,\mathrm{exp}(-\delta^2/(4\epsilon^{\prime 2}))$$ (20) Putting together Eqs.(17)–(20) one obtains $$f_{[0,\mathrm{\Delta}]}(q,\epsilon)=\left(1-j_0(\delta)-\frac{\epsilon^{\prime 2}}{3}\right)\mathrm{erf}(\delta/(2\epsilon^{\prime}))+\frac{\epsilon^{\prime 2}}{3}\frac{2}{\sqrt{\pi}}\frac{\delta}{2\epsilon^{\prime}}\,\mathrm{exp}(-\delta^2/(4\epsilon^{\prime 2}))$$ (21) We now recall that $`\delta/\epsilon^{\prime}=\mathrm{\Delta}/\sqrt{\epsilon}`$ and that we are interested in the limit $`\epsilon\to 0`$ for any finite $`\mathrm{\Delta}`$. For very large (but finite) $`\delta`$ and small $`\epsilon^{\prime}`$ we have $`j_0(\delta)\simeq 0`$, $`\mathrm{erf}(\delta/(2\epsilon^{\prime}))\simeq 1`$, and $$\frac{\delta}{2\epsilon^{\prime}}\,\mathrm{exp}(-\delta^2/(4\epsilon^{\prime 2}))\simeq 0.$$ (22) Eq.(21) reduces then to $$f_{[0,\mathrm{\Delta}]}(q,\epsilon)\simeq 1-\epsilon^{\prime 2}/3$$ (23) approaching 1 from below in agreement with Eq.(8) and Fig. 1. For $`\epsilon\to 0`$ and fixed $`\mathrm{\Delta}`$ one obtains from Eq.(21) $$\lim_{\epsilon\to 0}f_{[0,\mathrm{\Delta}]}(q,\epsilon)=1-j_0(q\mathrm{\Delta})$$ (24) Clearly, the contribution to the integral in Eq.(5) coming from the interval $`[0,\mathrm{\Delta}]`$ is small and goes to zero when $`q\mathrm{\Delta}\to 0`$. Thus, for any finite $`\mathrm{\Delta}`$, in the limit $`q\to 0`$ the contribution to the integral in Eq.(5) comes entirely from the second term in Eq.(9). Since $`\mathrm{\Delta}`$ is arbitrary, the contribution comes from $`r=\infty`$.

## 5 Final remarks

In summary, violation of Hara’s theorem may occur for conserved current as shown in ref.. One has to pay a price, though: the price is the lack of sufficient localizability of the current. This connection to the physical issue of locality has been already suggested in . Thus, violation of Hara’s theorem would require a highly non-orthodox resolution. Whether this is a physically reasonable option constitutes a completely separate question. However, one should remember that what is ”physically reasonable” is determined by experiment and not by our preconceived ideas about what the world looks like. After all, all our fundamental ideas are abstracted from experiment. They do not live their own independent life and must be modified if experiment proves their deficiencies. In general, we should try to avoid non-orthodox physics as long as we can.
The problem is, however, that there are various theoretical, phenomenological, experimental and even philosophical hints that, despite expectations based on standard views, Hara’s theorem may be violated. It is therefore important to ask whether one can provide a single, clear-cut test whose results would unambiguously resolve the issue. In fact, as already mentioned in the introduction, such a test has been pointed out in (see also ). It was shown there that the issue can be experimentally settled by measuring the asymmetry of the $`\mathrm{\Xi }^0\to \mathrm{\Lambda }\gamma `$ decay. The sign of this asymmetry is strongly correlated with the answer to the question of the violation of Hara’s theorem in $`\mathrm{\Sigma }^+\to p\gamma `$. In Hara’s-theorem-satisfying models this asymmetry is negative, around $`-0.7`$. On the contrary, in Hara’s-theorem-violating models it is positive and of the same absolute size (i.e., around $`+0.7`$). The present experimental value is $`+0.43\pm 0.44`$. The KTeV experiment at Fermilab has 1000 events of $`\mathrm{\Xi }^0\to \mathrm{\Lambda }\gamma `$ . These data are being analysed. Thus, the question of the violation of Hara’s theorem should be settled experimentally soon. If the results of the KTeV experiment (and those of an even higher statistics experiment being performed by the NA48 collaboration at CERN ) confirm a large positive asymmetry for the $`\mathrm{\Xi }^0\to \mathrm{\Lambda }\gamma `$ decay, one should start to discuss the possible deeper physical meaning of the violation of Hara’s theorem. I have tried to refrain from such a discussion so far. On the other hand, if the asymmetry in the $`\mathrm{\Xi }^0\to \mathrm{\Lambda }\gamma `$ decay is negative, one must conclude that Hara’s theorem holds in Nature. In this case, however, it follows that either vector meson dominance is inapplicable to weak radiative hyperon decays or our present understanding of nuclear parity violation (ref.) is incorrect.
In conclusion, whatever sign of the asymmetry is measured in the $`\mathrm{\Xi }^0\to \mathrm{\Lambda }\gamma `$ decay, something well accepted will have to be discarded.

## 6 Acknowledgements

I would like to thank A. Horzela for providing reference , and J. Lach and A. Horzela for discussions regarding the presentation of the argument. Comments on the presentation of the material of this paper, received from V. Dmitrasinovic prior to the paper’s dissemination, are also gratefully acknowledged.
# Untitled Document

© 2000 The American Institute of Physics. To be published in AIP Conference Proceedings of the Space Technology and Applications International Forum (STAIF-2000), Conference on Enabling Technology and Required Scientific Developments for Interstellar Missions, January 30–February 3, 2000, Albuquerque, NM.

TOWARD AN INTERSTELLAR MISSION: ZEROING IN ON THE ZERO-POINT-FIELD INERTIA RESONANCE

Bernhard Haisch<sup>1</sup> and Alfonso Rueda<sup>2</sup>

<sup>1</sup>Solar & Astrophysics Laboratory, Lockheed Martin, L9-41, B252, 3251 Hanover St., Palo Alto, CA 94304
<sup>2</sup>Dept. of Electrical Eng. and Dept. of Physics & Astronomy, California State Univ., Long Beach, CA 90840
<sup>1</sup>haisch@starspot.com <sup>2</sup>arueda@csulb.edu

Abstract. While still an admittedly remote possibility, the concept of an interstellar mission has become a legitimate topic for scientific discussion as evidenced by several recent NASA activities and programs. One approach is to extrapolate present-day technologies by orders of magnitude; the other is to find new regimes in physics and to search for possible new laws of physics. Recent work on the zero-point field (ZPF), or electromagnetic quantum vacuum, is promising in regard to the latter, especially concerning the possibility that the inertia of matter may, at least in part, be attributed to interaction between the quarks and electrons in matter and the ZPF. A NASA-funded study (independent of the BPP program) of this concept has been underway since 1996 at the Lockheed Martin Advanced Technology Center in Palo Alto and the California State University at Long Beach.
We report on a new development resulting from this effort: that for the specific case of the electron, a resonance for the inertia-generating process at the Compton frequency would simultaneously explain both the inertial mass of the electron and the de Broglie wavelength of a moving electron as first measured by Davisson and Germer in 1927. This line of investigation is leading to very suggestive connections between electrodynamics, inertia, gravitation and the wave nature of matter.

BACKGROUND

Although at this time there are no known or plausible technologies which would make interstellar travel possible, the concept of an interstellar mission has recently started to become a legitimate topic for scientific discussion. In July 1996 NASA formally established a Breakthrough Propulsion Physics (BPP) Program. The initiation of a program operating under the auspices of NASA provides a forum for ideas which are required to be both visionary and credible. This approach was designed by program manager Marc Millis to provide a valuable and necessary filter for ideas and their proponents. The first BPP activity was a workshop held in August 1997 to carry out an initial survey of concepts; this is now available as a NASA conference proceedings (NASA/CP-1999-208694). Following additional funding, the BPP program issued a NASA Research Announcement (NRA) in November 1998 soliciting proposals. Out of 60 submissions, six studies were selected for funding. Three other events relevant to interstellar exploration but not directly connected with the BPP program have also taken place. In February 1998, the Marshall Space Flight Center (MSFC) hosted a four-day workshop on “Physics for the Third Millennium” directed by Ron Koczor of the MSFC Space Science Laboratory. In July 1998 a four-day workshop on “Robotic Interstellar Exploration in the Next Century,” sponsored by the Advanced Concepts Office at JPL and the NASA Office of Space Science, was held at Caltech.
In September 1998 an Independent Review Panel was convened to assess the NASA Space Transportation Research Program (STRP). The STRP supports such areas of investigation as fission- and fusion-based advanced propulsion and even a serious attempt to replicate a claimed “Delta-g Gravity Modification Experiment.” A report was issued in January 1999 entitled “Ad Astra per Aspera: Reaching for the Stars.” The report had the following to say about the Delta-g Gravity Modification Experiment: “An experimental demonstration of gravity modification, regardless of how minute, would be of extraordinary significance. The Delta-g experiments now being carried out at MSFC are an attempt to duplicate a claimed anomalous weight loss of up to two percent in objects of various compositions suspended above a rotating 12-inch diameter Type II ceramic superconductor. This apparent phenomenon was discovered by accident during superconducting experiments by a Russian scientist, Dr. Eugene Podkletnov, then working at Tampere University in Finland. Initial results of the MSFC replication have been published in Physica C, 281, 260, 1997. As of November 1998 the group, led by David Noever and Ron Koczor, had made a 12-inch YBCO disk that survived pressing and heat treating in one piece. This is now being characterized and cut up to do mechanical testing. The next step is to make a 2-layer disk of the sort used by Podkletnov. The review committee was impressed with the high quality of the researchers and the careful and methodical approach. We believe that this research is a prime candidate for continued STR support and urge that funding is adequate to permit a definitive replication to be carried to completion.” In all of these activities (in which the first author participated) it became clear that there are two approaches to conceptualizing an interstellar mission.
The first is to extrapolate certain relevant present-day technologies by orders of magnitude and then see what possibilities might emerge. One example — discussed at the Caltech workshop — would be a craft propelled by a laser-pushed lightsail…but this would require a 1000 km-diameter sail and a 10 km-diameter lens having an open-loop pointing capability of $`10^{-5}`$ arcsec (given a feedback time of years): quite a formidable challenge! Another “known technology pushed to the limit” example is based on production and storage of huge amounts of anti-matter. It makes a good example of the overwhelming technical difficulties in pushing present-day technology. Based on energy arguments alone (i.e. ignoring the issue of specific impulse) one can achieve a speed of 0.1$`c`$ by annihilating 0.5 percent of the mass of a starship; this is simply calculated by equating the final kinetic energy of the starship, $`m_sv^2/2`$, where $`v=0.1c`$, to the rest energy of the propellant, $`m_pc^2`$, a good approximation in this regime. It would take an equal amount of energy to stop, and similarly for the return to Earth. Under perfectly ideal conditions, then, one percent of the mass of the starship in antimatter and one percent in ordinary matter would, in principle, suffice as propellant. One hundred percent efficiency for conversion of rest mass energy into kinetic energy is out of the question. Let us take an optimistic 10 percent. This means that for a starship as modest as the space shuttle in size — about 100 tons, hardly adequate for a century-long out-and-back mission to Alpha Centauri — one would require 10 tons of antimatter. The manufacture of antiprotons is extremely inefficient. Techniques for creating antiprotons at CERN require approximately two and one-half million protons, each accelerated to an energy of 26 GeV, to create a single antiproton. This amounts to an energy efficiency of $`3\times 10^{-8}`$.
This is further reduced by a factor of 20 or so for the efficiency of the proton accelerator, leaving a net efficiency of perhaps $`1.5\times 10^{-9}`$, i.e. about one part in a billion! At a cost of 5 cents per kilowatt-hour of electricity the cost of 10 tons of antiprotons would be $`1.4\times 10^{21}`$ dollars. A good way to imagine this is to say that it represents the total current U. S. federal budget (approximately $1.2 trillion per year) spent every year for the next 1.2 billion years (cf. M. R. LaPointe, NASA SBIR Phase I Final Report for contract NAS8-98109). Other technological extrapolations for propulsion discussed in the various workshops and reviews suffer from similarly tremendous order-of-magnitude problems. Building a starship based on extrapolation of known technology might be likened to insisting on building some kind of sailing ship capable of crossing the Atlantic in 6 hours, when one really has to discover flight to do that. The second approach is to try to find new regimes in physics or perhaps even new laws of physics. When Alcubierre (1994) published his article, “The Warp Drive: Hyper-fast Travel within General Relativity,” this aroused considerable enthusiasm since it demonstrated that, in principle, general relativity allowed for local metric modifications resulting in expansion of space faster than the speed of light. Indeed, as is well known in cosmology, there is no speed limit to the expansion of space itself; in conventional inflationary big bang theory there must be regions of the Universe beyond our event horizon whose Hubble speed (relative to us) is greater than $`c`$, and this in no way conflicts with special relativity. Relativity merely forbids motion through space faster than $`c`$.
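Returning briefly to the antimatter example above, its orders of magnitude can be reproduced in a few lines. A hedged sketch (names are ours; we read the annihilation payoff per antiproton as including the matter proton it destroys, and take the text's 10 percent conversion efficiency and CERN production figures at face value):

```python
C = 2.998e8  # speed of light, m/s (only used conceptually here)

# Kinetic energy fraction per boost: (1/2) m v^2 / (m c^2) at v = 0.1 c
ke_fraction = 0.5 * 0.1**2                        # 0.005, i.e. 0.5% of rest mass
mission_energy_fraction = 4 * ke_fraction         # accelerate + stop, out and back
antimatter_fraction = mission_energy_fraction / 2 # half the annihilating mass is antimatter

ship_mass_tons = 100.0
conversion_efficiency = 0.10                      # optimistic 10% rest energy -> kinetic energy
antimatter_tons = ship_mass_tons * antimatter_fraction / conversion_efficiency
print(antimatter_tons)                            # ~10 tons, as in the text

# Antiproton production: 2.5e6 protons at 26 GeV per antiproton;
# annihilation releases 2 x 0.938 GeV (antiproton plus a matter proton)
production_efficiency = (2 * 0.938) / (2.5e6 * 26.0)
print(production_efficiency)                      # ~3e-8
net_efficiency = production_efficiency / 20.0     # accelerator efficiency factor of 20
print(net_efficiency)                             # ~1.5e-9
```

The dollar cost depends on further assumptions (electricity price, what exactly must be paid for), so it is not reproduced here; the point is that the efficiency chain alone already yields the quoted one-part-in-a-billion figure.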
Alcubierre demonstrated that the mathematics of general relativity allowed for the creation of what might be termed a “bubble” of ordinary flat (Minkowski) space — in which a starship could be situated — that could surf, so to speak, at arbitrarily large speeds on a spacetime distortion, a faster than light stretching of spacetime; this would indeed be a warp drive. The Alcubierre bubble was soon burst, though, by Pfenning and Ford (1997) who showed that it would require more energy than that of the entire Universe to create the extremely warped space necessary. However recently Van den Broeck (1999) has shown that this energy requirement can be reduced by 32 orders of magnitude via a slight change in the Alcubierre geometry. While this still leaves us a long way from a feasible interstellar technology, warping space or creating wormholes are physics possibilities meriting further theoretical exploration. Another regime of “new physics” is in actuality almost a century old. The concept of an electromagnetic zero-point field was developed by, among others, Planck, Einstein and Nernst. If an energetic sea of electromagnetic fluctuations comprising the electromagnetic quantum vacuum fills the Universe — for reasons discussed below — then this suggests the possibility of generating propulsive forces or extracting energy anywhere in space. Even more intriguing possibilities are opened by the linked proposed concepts that gravitation and inertia may originate in the zero-point field. If both gravitation and inertia are manifestations of the vacuum medium and in particular of its electromagnetic component, the ZPF, they can be treated by the techniques of electrodynamics, and perhaps they can be manipulated. The concept of gravity manipulation has been a staple of science fiction, but in fact inertia manipulation would be even more far reaching. 
As exciting as it would be to reduce the (gravitational) weight of a launch vehicle to zero, this would merely set it free from the gravitational potential well of the Earth and of the Sun. The problem of adding kinetic energy to reach high interstellar velocities would remain…unless one can modify inertia. Modification of inertia would (a) reduce energy requirements to attain a given velocity and (b) possibly allow greatly enhanced accelerations. The latter would open many possibilities since it would be far more efficient to have a perhaps enormously large acceleration device that never has to leave the ground and needs to act over only a short distance to rapidly impart a huge impulse, slingshotting a starship on its way. (We assume that the inertia of everything inside the starship would be modified as well. For the time being we overlook the problem of deceleration at the end of the journey, it being prudent to tackle only one apparent impossibility at a time.) The concept of inertia modification may forever remain a practical impossibility. However, at the moment it has become a legitimate theoretical possibility. In the following sections we summarize a recently developed theoretical connection between the ZPF and inertia and report on the discovery that a specific resonance frequency is likely to be involved. It is shown that such a resonance would simultaneously offer an explanation for both the inertia of a particle and the de Broglie wavelength of that particle in motion, as first measured for electrons by Davisson and Germer (1927).

THE ELECTROMAGNETIC ZERO-POINT FIELD

The necessary existence of an electromagnetic zero-point field can be shown from consideration of elementary quantum mechanics.
The Heisenberg uncertainty relation tells us that a harmonic oscillator must have a minimum energy of $`\hbar \omega /2`$, where $`\hbar `$ is the Planck constant divided by $`2\pi `$ and $`\omega `$ is the oscillation frequency in units of radians per second. (Expressed in cycles per second, Hz, this minimum energy is $`h\nu /2`$.) This is the zero-point energy of a harmonic oscillator, the derivation of which is a standard example in many introductory quantum textbooks. The electromagnetic field is subject to a similar quantization: this is done by “the association of a quantum-mechanical harmonic oscillator with each mode of the radiation field” (cf. Loudon 1983). The same $`h\nu /2`$ zero-point energy is found in each mode of the field, where a mode of the field can be thought of as a plane wave specified by its frequency ($`\nu `$), directional propagation vector ($`\widehat{𝐤}`$), and one of two polarization states ($`\sigma `$). Summing up over all plane waves one arrives at the zero-point energy density for the electromagnetic field, $$\rho _{ZP}=\int _0^{\nu _c}\frac{4\pi h\nu ^3}{c^3}\,d\nu ,$$ $`(1)`$ where $`\nu _c`$ is a presumed high-frequency cutoff, often taken to be the Planck frequency, $`\nu _P=(c^5/G\hbar )^{1/2}=1.9\times 10^{43}`$ Hz. (See the appendix for a brief discussion of the Planck frequency.) With this assumed cutoff, the energy density becomes the same (within a factor of $`2\pi ^2`$) as the maximum energy density that spacetime can sustain: with $`\nu _c=\nu _P`$, the ZPF energy density is $`\rho _{ZP}=2\pi ^2c^7/G^2\hbar `$. This is on the order of $`10^{116}`$ ergs cm<sup>-3</sup>. The term “ZPE” is often used to refer to this electromagnetic energy of the zero-point fluctuations of the quantum vacuum. Note that the strong and weak interactions also have associated zero-point energies. These should also contribute to inertia.
Their exact contributions remain to be determined since we have yet to consider these in the present context. For now we restrict ourselves to the electromagnetic contribution. Can one take seriously the concept that the entire Universe is filled with a background sea of electromagnetic zero-point energy that is nearly 100 orders of magnitude beyond the energy equivalent of ordinary matter density? The concept is inherently no more unreasonable in modern physics than that of the vast Dirac sea of negative-energy anti-particles. Moreover, the derivation of the zero-point energy from the Heisenberg uncertainty relation and the counting of modes is so elementary that it becomes convoluted to try to simply argue away the ZPF. The objection that most immediately arises is a cosmological one: that the enormous energy density of the ZPF should, by general relativity, curve the entire Universe into a ball orders of magnitude smaller than the nucleus of an atom. Our contention is that the ZPF plays a key role in giving rise to the inertia of matter. If that proves to be the case, the principle of equivalence will require that the ZPF be involved in giving rise to gravitation. This at least puts the spacetime-curvature objection in abeyance: in a self-consistent ZPF-based theory of inertia and gravitation one can no longer naively attribute a spacetime-curving property to the energy density of the ZPF itself (Sakharov, 1968; Misner, Thorne and Wheeler, 1973; Puthoff, 1989; Haisch and Rueda, 1997; Puthoff, 1999). One might try taking the position that the zero-point energy must be merely a mathematical artifact of theory. It is sometimes argued, for example, that the zero-point energy is merely equivalent to an arbitrary additive potential-energy constant. Indeed, the potential energy at the surface of the earth can take on any arbitrary value, but the falling of an object clearly demonstrates the reality of a potential energy field, the gradient of which is equal to a force.
No one would argue that there is no such thing as potential energy simply because it has no well-defined absolute value. Similarly, gradients of the zero-point energy manifest themselves as measurable Casimir forces, which indicates the reality of this sea of energy as well. Unlike the potential energy, however, the zero-point energy is not a floating value with no intrinsically defined reference level. On the contrary, the summation of modes tells us precisely how much energy each mode must contribute to this field, and that energy density must be present unless something else in nature conspires to cancel it. Another argument for the physical reality of zero-point fluctuations emerges from experiments in cavity quantum electrodynamics involving suppression of spontaneous emission. As Haroche and Raimond (1993) explain: “These experiments indicate a counterintuitive phenomenon that might be called ‘no-photon interference.’ In short, the cavity prevents an atom from emitting a photon because that photon would have interfered destructively with itself had it ever existed. But this begs a philosophical question: How can the photon ‘know,’ even before being emitted, whether the cavity is the right or wrong size?” The answer is that spontaneous emission can be interpreted as stimulated emission by the ZPF, and that, as in the Casimir force experiments, ZPF modes can be suppressed, resulting in no vacuum-stimulated emission, and hence no “spontaneous” emission (McCrea, 1986). The Casimir force attributable to the ZPF has now been well measured, the agreement between theory and experiment being approximately five percent over the measured range (Lamoreaux, 1997). Perhaps some variation on the Casimir cavity configuration of matter could one day be devised that will yield a different force on one side than on the other of some device, thus providing in effect a ZPF-sail for propulsion through interstellar space.
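To get a feel for the magnitudes involved, the Casimir attraction just mentioned can be estimated from the standard ideal-conductor parallel-plate result, $`P=\pi ^2\hbar c/(240d^4)`$ (this formula is textbook Casimir theory, not taken from the text itself); a short sketch:

```python
import math

HBAR = 1.0546e-34   # reduced Planck constant, J s
C = 2.998e8         # speed of light, m/s

def casimir_pressure(d):
    """Ideal-conductor Casimir pressure (Pa) between parallel plates at separation d (m)."""
    return math.pi**2 * HBAR * C / (240.0 * d**4)

# At the micron separations of the Lamoreaux-type measurements the pressure
# is of order a millipascal:
print(casimir_pressure(1e-6))
```

The steep $`d^{-4}`$ scaling is why the force is only measurable at sub-micron to micron separations.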
Independent of whether a propulsive force may be generated from the ZPF is the question of whether energy can be extracted from the ZPF. This was first considered — and found to be a possibility in theory — in a thought experiment published by Forward (1984). Subsequently Cole and Puthoff (1993) analyzed the thermodynamics of zero-point energy extraction in some detail and concluded that, in principle, no laws of thermodynamics are violated by this. There is the possibility that nature is already tapping zero-point energy in the form of very high energy cosmic rays and perhaps in the formation of the sheet and void structure of clusters of galaxies (Rueda, Haisch and Cole, 1995). Another very useful overview is the USAF report by Forward (1996); see also the discussion of force generation and energy extraction in Haisch and Rueda (1999).

THE INERTIA RESONANCE AND THE DE BROGLIE WAVELENGTH

Can the inertia of matter be modified? In 1994 we, together with H. Puthoff, published an analysis using the techniques of stochastic electrodynamics in which Newton’s equation of motion, $`𝐅=m𝐚`$, was derived from the electrodynamics of the ZPF (Haisch, Rueda and Puthoff [HRP], 1994). In this view the inertia of matter is reinterpreted as an electromagnetic reaction force. A NASA-funded research effort at Lockheed Martin in Palo Alto and California State University in Long Beach recently led to a new analysis that succeeded in deriving both the Newtonian equation of motion, $`𝐅=m𝐚`$, and the relativistic form of the equation of motion, $`\mathcal{F}=d𝒫/d\tau `$, from Maxwell’s equations as applied to the ZPF (Rueda & Haisch, 1998a; 1998b). This extension from classical to relativistic mechanics increases confidence in the validity of the hypothesis that the inertia of matter is indeed an effect originating in the ZPF of the quantum vacuum.
Overviews of these concepts may be found in the previous STAIF proceedings and other conference proceedings (Haisch and Rueda, 1999; Haisch, Rueda and Puthoff, 1998; Haisch and Rueda, 1998; additional articles are posted or linked at http://www.jse.com/haisch/zpf.html). In the HRP analysis of 1994 it appeared that the crucial interaction between the ZPF and the quarks and electrons constituting matter must be concentrated near the Planck frequency. As discussed in the previous section (and in the appendix), the Planck frequency is the highest possible frequency in nature and is the presumed cutoff of the ZPF spectrum: $`\nu _P=(c^5/G\hbar )^{1/2}\approx 1.9\times 10^{43}`$ Hz. In contrast, the new approach involves the assumption that the crucial interaction between the quarks and electrons constituting matter and the ZPF takes place not at the ZPF cutoff, but at a resonance frequency. We have now found evidence that, for the electron, this resonance must be at its Compton frequency: $`\nu _C=m_ec^2/h=1.236\times 10^{20}`$ Hz. This is 23 orders of magnitude lower (hence possibly within reach of electromagnetic technology) than the Planck frequency. In Rueda and Haisch (1998a, 1998b) we show that, from the force associated with the non-zero ZPF momentum flux (obtained by calculating the Poynting vector) in transit through an accelerating object, the apparent inertial mass derives from the energy density of the ZPF as follows: $$m_i=\frac{V_0}{c^2}\int \eta (\nu )\rho _{ZP}(\nu )\,d\nu ,$$ $`(2)`$ where $`\eta (\nu )`$ is a scattering parameter (see below).
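The 23-orders-of-magnitude gap between the Compton and Planck frequencies quoted above is easy to verify numerically. A quick sketch with CODATA-style constants (variable names are ours):

```python
import math

H = 6.626e-34        # Planck constant, J s
HBAR = H / (2 * math.pi)
C = 2.998e8          # speed of light, m/s
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_E = 9.109e-31      # electron mass, kg

nu_compton = M_E * C**2 / H                 # electron Compton frequency, ~1.2e20 Hz
nu_planck = math.sqrt(C**5 / (G * HBAR))    # Planck frequency, ~1.9e43 Hz
print(nu_compton, nu_planck)
print(math.log10(nu_planck / nu_compton))   # ~23 orders of magnitude
```

The contrast matters practically: 10^20 Hz is hard but conceivably addressable electromagnetically, while 10^43 Hz is not.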
It was proposed by de Broglie that an elementary particle is associated with a localized wave whose frequency is the Compton frequency, yielding the Einstein-de Broglie equation: $$h\nu _C=m_0c^2.$$ $`(3)`$ As summarized by Hunter (1997): “…what we regard as the (inertial) mass of the particle is, according to de Broglie’s proposal, simply the vibrational energy (divided by $`c^2`$) of a localized oscillating field (most likely the electromagnetic field). From this standpoint inertial mass is not an elementary property of a particle, but rather a property derived from the localized oscillation of the (electromagnetic) field. De Broglie described this equivalence between mass and the energy of oscillational motion…as ‘une grande loi de la Nature’ (a great law of nature).” The rest mass $`m_0`$ is simply $`m_i`$ in its rest frame. What de Broglie was proposing is that the left-hand side of eqn. (3) corresponds to physical reality; the right-hand side is in a sense bookkeeping, defining the concept of rest mass. De Broglie assumed that his wave at the Compton frequency originates in the particle itself. An alternative interpretation is that a particle “is tuned to a wave originating in the high-frequency modes of the zero-point background field” (de la Peña and Cetto, 1996; Kracklauer, 1992). The de Broglie oscillation would thus be due to a resonant interaction with the ZPF, presumably the same resonance that is responsible for creating inertial mass as in eqn. (2). In other words, the ZPF would be driving this $`\nu _C`$ oscillation. We therefore suggest that an elementary charge driven to oscillate at the Compton frequency by the ZPF may be the physical basis of the $`\eta (\nu )`$ scattering parameter in eqn. (2). For the case of the electron, this would imply that $`\eta (\nu )`$ is a sharply-peaked resonance at the frequency, expressed in terms of energy, $`h\nu _C=511`$ keV.
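The Doppler-beat statement made below can be checked directly: superposing the two Doppler-shifted Compton waves $`\nu _\pm =\gamma \nu _C(1\pm \beta )`$ seen in the laboratory produces a modulation with spatial frequency $`\gamma \nu _C\beta /c`$, i.e. wavelength $`c/(\gamma \nu _C\beta )=h/(\gamma m_ev)=\lambda _B`$. A numerical sketch (constants and names are ours; the two expressions are algebraically identical, so the agreement is exact):

```python
import math

H = 6.626e-34       # Planck constant, J s
C = 2.998e8         # speed of light, m/s
M_E = 9.109e-31     # electron mass, kg

v = 0.2 * C
beta = v / C
gamma = 1.0 / math.sqrt(1.0 - beta**2)

# de Broglie wavelength the usual way: lambda_B = h / p
lam_de_broglie = H / (gamma * M_E * v)

# Beat wavelength of the Doppler-shifted Compton oscillation
nu_compton = M_E * C**2 / H
lam_beat = C / (gamma * nu_compton * beta)

print(lam_de_broglie, lam_beat)
```

Both come out at about 0.012 nm for an electron at $`0.2c`$.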
The inertial mass of the electron would physically be the reaction force due to resonance scattering of the ZPF at that frequency. This leads to a surprising corollary. It can be shown (Haisch and Rueda, 1999; de la Peña and Cetto, 1996; Kracklauer, 1992) that, as viewed from a laboratory frame, the standing wave at the Compton frequency in the electron frame transforms into a traveling wave having the de Broglie wavelength, $`\lambda _B=h/p`$, for a moving electron. The wave nature of the moving electron appears to be basically due to Doppler shifts associated with its Einstein-de Broglie resonance frequency. The identification of the resonance frequency with the Compton frequency would solve a fundamental mystery of quantum mechanics: Why does a moving particle exhibit a de Broglie wavelength of $`\lambda =h/p`$? It can be shown that if the electron acquires its mass because it is driven to oscillate at its Compton frequency by the ZPF, then, when viewed from a moving frame, a beat frequency arises whose wavelength is precisely the de Broglie wavelength (Haisch & Rueda, 1999; de la Peña & Cetto, 1996). Thus within the context of the zero-point field inertia hypothesis we can simultaneously and suggestively explain both the origin of mass and the wave nature of matter as ZPF phenomena. Furthermore, the relative accessibility of the Compton frequency of the electron encourages us to think that an experiment to demonstrate mass modification of the electron by techniques of cavity quantum electrodynamics may soon be feasible.

ACKNOWLEDGMENTS

We acknowledge support of NASA contract NASW-5050 for this work.

REFERENCES

Alcubierre, M., “The Warp Drive: Hyper-fast Travel Within General Relativity,” Class. Quantum Grav., 11, L73 (1994).
Cole, D.C. and Puthoff, H.E., “Extracting Energy and Heat from the Vacuum,” Phys. Rev. E, 48, 1562 (1993).
Davisson, C.J. and Germer, L.H., “Diffraction of Electrons by a Crystal of Nickel,” Phys. Rev., 30, 705 (1927).
de la Peña, L.
and Cetto, A.M., The Quantum Dice: An Introduction to Stochastic Electrodynamics, (Kluwer Acad. Publ.) (1996).
Forward, R., “Extracting Electrical Energy from the Vacuum by Cohesion of Charged Foliated Conductors,” Phys. Rev. B, 30, 1700 (1984).
Forward, R., “Mass Modification Experiment Definition Study,” J. of Scientific Exploration, 10, 325 (1996).
Haisch, B. and Rueda, A., “Reply to Michel’s ‘Comment on Zero-Point Fluctuations and the Cosmological Constant’,” Astrophys. J., 488, 563 (1997).
Haisch, B. and Rueda, A., “The Zero-Point Field and Inertia,” in Causality and Locality in Modern Physics, (G. Hunter, S. Jeffers and J.-P. Vigier, eds.), (Kluwer Acad. Publ.), 171 (1998). xxx.lanl.gov/abs/gr-qc/9908057
Haisch, B. and Rueda, A., “Progress in Establishing a Connection Between the Electromagnetic Zero-Point Field and Inertia,” AIP Conference Proceedings No. 458, p. 988 (1999). xxx.lanl.gov/abs/gr-qc/9906069
Haisch, B. and Rueda, A., “On the Relation Between Zero-point-field-induced Inertial Mass and the Einstein-de Broglie Formula,” Phys. Lett. A, in press (2000). xxx.lanl.gov/abs/gr-qc/9906084
Haisch, B., Rueda, A. and Puthoff, H.E. (HRP), “Inertia as a Zero-point-field Lorentz Force,” Phys. Rev. A, 49, 678 (1994).
Haisch, B., Rueda, A. and Puthoff, H.E., “Advances in the Proposed Electromagnetic Zero-point-field Theory of Inertia,” AIAA 98-3143 (1998). xxx.lanl.gov/abs/physics/9807023
Haroche, S. and Raimond, J.M., “Cavity Quantum Electrodynamics,” Scientific American, 268, No. 4, 54 (1993).
Hunter, G., “Electrons and Photons as Soliton Waves,” in The Present Status of the Quantum Theory of Light, (S. Jeffers et al., eds.), (Kluwer Acad. Publ.), pp. 37–44 (1997).
Kracklauer, A.F., “An Intuitive Paradigm for Quantum Mechanics,” Physics Essays, 5, 226 (1992).
Lamoreaux, S.K., “Demonstration of the Casimir Force in the 0.6 to 6 $`\mu `$m Range,” Phys. Rev. Lett., 78, 5 (1997).
Loudon, R., The Quantum Theory of Light, chap. 1, (Oxford: Clarendon Press) (1983).
McCrea, W., “Time, Vacuum and Cosmos,” Q. J. Royal Astr. Soc., 27, 137 (1986). Misner, C.W., Thorne, K.S. and Wheeler, J.A., Gravitation, (Freeman and Co.), pp. 426–428 (1973). Pfenning, M.J. and Ford, L.H., “The Unphysical Nature of ‘Warp Drive’,” Class. Quantum Grav., 14, 1743–1751 (1997). Puthoff, H.E., “Gravity as a Zero-point-fluctuation Force,” Phys. Rev. A, 39, 2333 (1989). Puthoff, H.E., “Polarizable-Vacuum (PV) Representation of General Relativity,” preprint, (1999). xxx.lanl.gov/abs/gr-qc/9909037 Rueda, A. and Haisch, B., “Inertia as Reaction of the Vacuum to Accelerated Motion,” Physics Lett. A, 240, 115 (1998a). xxx.lanl.gov/abs/physics/9802031 Rueda, A. and Haisch, B., “Contribution to Inertial Mass by Reaction of the Vacuum to Accelerated Motion,” Foundations of Physics, 28, 1057 (1998b). xxx.lanl.gov/abs/physics/9802030 Rueda, A., Haisch, B. and Cole, D. C., “Vacuum Zero-Point Field Pressure Instability in Astrophysical Plasmas and the Formation of Cosmic Voids,” Astrophys. J., 445, 7 (1995). Sakharov, A.D., “Vacuum Quantum Fluctuations in Curved Space and the Theory of Gravitation,” Sov. Phys.–Doklady, 12, No. 11, 1040 (1968). Van den Broeck, C., “A ‘warp drive’ with Reasonable Total Energy Requirements,” preprint, (1999). xxx.lanl.gov/abs/gr-qc/9905084 APPENDIX: THE PLANCK FREQUENCY The Planck frequency is assumed to be the highest frequency that spacetime itself can sustain. This can be understood from simple, semi-classical arguments by combining the constraints of relativity with those of quantum mechanics. In a circular orbit, the acceleration is $`v^2/r`$, which is obtained from a gravitational force per unit mass of $`Gm/r^2`$. Letting $`v\to c`$, one obtains a maximum acceleration of $`c^2/r=Gm/r^2`$, from which one derives the Schwarzschild radius for a given mass: $`r_S=Gm/c^2`$. 
The Heisenberg uncertainty relation specifies that $`\mathrm{\Delta }x\mathrm{\Delta }p\ge \hbar `$, and letting $`\mathrm{\Delta }p\approx mc`$ one arrives at the Compton radius: $`r_C=\hbar /mc`$, which specifies the minimum quantum size for an object of mass $`m`$. Equating the minimum quantum size for an object of mass $`m`$ with the Schwarzschild radius for that object one arrives at a mass: $`m_P=(c\hbar /G)^{1/2}`$ which is the Planck mass, i.e. $`2.2\times 10^{-5}`$ g. The Compton radius of the Planck mass is called the Planck length: $`l_P=(G\hbar /c^3)^{1/2}`$, i.e. $`1.6\times 10^{-33}`$ cm. One can think of this as follows: Due to the uncertainty relation, a Planck mass cannot be compressed into a volume smaller than the cube of the Planck length. A Planck mass, $`m_P`$, in a Planck volume, $`l_P^3`$, is the maximum density of matter that can exist without being unstable to collapsing spacetime fluctuations: $`\rho _{P,m}=c^5/G^2\hbar `$ or, as an energy density, $`\rho _{P,e}=c^7/G^2\hbar `$. The speed-of-light limit constrains the fastest oscillation that spacetime can sustain to be $`\nu _P=c/l_P=(c^5/G\hbar )^{1/2}`$, i.e. $`1.9\times 10^{43}`$ Hz.
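The numbers quoted in this appendix are easy to verify. The sketch below (illustrative only; the variable names are mine) computes the Planck mass, length and frequency from CODATA constants and checks the key step of the argument, namely that the Compton radius of the Planck mass equals its Schwarzschild radius $`Gm/c^2`$:

```python
import math

G = 6.67430e-11         # gravitational constant (m^3 kg^-1 s^-2)
hbar = 1.054571817e-34  # reduced Planck constant (J s)
c = 2.99792458e8        # speed of light (m/s)

m_P = math.sqrt(c * hbar / G)          # Planck mass (kg)
l_P = math.sqrt(G * hbar / c ** 3)     # Planck length (m)
nu_P = math.sqrt(c ** 5 / (G * hbar))  # Planck frequency (Hz), equals c / l_P

r_C = hbar / (m_P * c)  # Compton radius of the Planck mass
r_S = G * m_P / c ** 2  # "Schwarzschild radius" Gm/c^2 used in the appendix

print(m_P * 1e3)  # ~2.2e-5 g
print(l_P * 1e2)  # ~1.6e-33 cm
print(nu_P)       # ~1.9e43 Hz
```

The two radii agree identically by construction of $`m_P`$, which is exactly the "equating" step in the text.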
no-problem/9909/hep-ex9909037.html
ar5iv
text
# Hadronic final state interactions at ALEPH and OPAL ## 1 Introduction Bose–Einstein (BE) correlations between identical bosons and Fermi–Dirac (FD) correlations between identical fermions lead to an enhancement or a suppression, respectively, of the number of particle pairs produced close to each other in phase space. The effect is sensitive to the distribution of particle sources in space and time. The strength of the correlations can be expressed by the two-particle correlation function $`C(p_1,p_2)=P(p_1,p_2)/P_0(p_1,p_2)`$, where $`p_1`$ and $`p_2`$ are the four-momenta of the particles, $`P(p_1,p_2)`$ is the measured differential cross section for the pairs and $`P_0(p_1,p_2)`$ is that of a reference sample, identical to the data sample in all aspects except the presence of FD or BE correlations. Usually $`C(Q)`$ is measured, where $`Q^2=-(p_1-p_2)^2`$. For $`\mathrm{W}`$-pairs from $`\mathrm{e}^+\mathrm{e}^{-}\to \mathrm{W}^+\mathrm{W}^{-}`$ at energies in the LEP2 range, the distance between $`\mathrm{W}^+`$ and $`\mathrm{W}^{-}`$ vertices is less than 0.1 fm, i.e. less than the typical hadronic distance scale of 1 fm. Therefore the fragmentation of $`\mathrm{W}^+`$ and $`\mathrm{W}^{-}`$ may not be independent. Two phenomena may appear: pions from different $`\mathrm{W}`$’s may exhibit BE correlations and pairs of quarks and antiquarks $`\mathrm{q}_1\overline{\mathrm{q}}_4`$ and $`\mathrm{q}_3\overline{\mathrm{q}}_2`$ from the decay of different $`\mathrm{W}`$’s can form colour strings (colour reconnection). Colour reconnection (CR) and BE correlations have opposite effects. They may influence the accuracy of the $`\mathrm{W}`$ mass measurement at LEP. In this talk, three ALEPH analyses are presented: FD correlations of $`\mathrm{\Lambda }`$$`\mathrm{\Lambda }`$ and $`\overline{\mathrm{\Lambda }}\overline{\mathrm{\Lambda }}`$ pairs in hadronic $`\mathrm{Z}`$ decays at LEP1 , colour reconnection and BE correlations in $`\mathrm{W}`$-pair decays at LEP2. 
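For equal-mass particles the variable $`Q`$ defined above coincides with the invariant mass of the pair above threshold, $`Q^2=M_{\mathrm{pair}}^2-4m^2`$. A minimal sketch of the computation (function names and the test momenta are mine, not from the talk):

```python
import math

M_PI = 0.13957  # charged pion mass (GeV)

def four_momentum(px, py, pz, m=M_PI):
    """On-shell four-vector (E, px, py, pz) in GeV for a particle of mass m."""
    e = math.sqrt(m * m + px * px + py * py + pz * pz)
    return (e, px, py, pz)

def q_of_pair(p1, p2):
    """Q = sqrt(-(p1 - p2)^2), with metric signature (+,-,-,-)."""
    d = [a - b for a, b in zip(p1, p2)]
    q2 = -(d[0] ** 2 - d[1] ** 2 - d[2] ** 2 - d[3] ** 2)
    return math.sqrt(q2)

p1 = four_momentum(1.0, 0.2, 0.0)
p2 = four_momentum(0.9, 0.0, 0.1)
q = q_of_pair(p1, p2)  # Q in GeV for this pion pair
```

For two on-shell particles of equal mass the argument of the square root is always non-negative, so $`Q`$ is well defined for every pair in the sample.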
The OPAL analysis of BE correlations in $`\mathrm{W}`$-pair decays is also discussed here, due to the unavailability of the OPAL speaker. ## 2 Fermi-Dirac correlations in $`\mathbf{(}𝚲𝚲\mathbf{,}\overline{𝚲}\overline{𝚲}\mathbf{)}`$ system The FD correlations in the $`(\mathrm{\Lambda }\mathrm{\Lambda },\overline{\mathrm{\Lambda }}\overline{\mathrm{\Lambda }})`$ system were studied using 3.9 million hadronic $`\mathrm{Z}`$ decays recorded by the ALEPH detector on and around the $`\mathrm{Z}`$ peak. A sample of 2133 pairs with $`Q<10`$ GeV was obtained. In the analysis, three reference samples were used: A) simulated pairs from JETSET MC without FD correlations, where $`C(Q)=P(Q)_{\mathrm{data}}/P(Q)_{\mathrm{MC}}`$; B) pairs obtained by event mixing, where $`C(Q)=[P(Q)_{\mathrm{data}}/P(Q)_{\mathrm{data}}^{\mathrm{mix}}]/[P(Q)_{\mathrm{MC}}/P(Q)_{\mathrm{MC}}^{\mathrm{mix}}]`$; C) reweighted sample of mixed pairs, where $`C(Q)=P(Q)_{\mathrm{data}}/P(Q)_{\mathrm{data},\mathrm{mix}}^{\mathrm{reweighted}}`$. The measured correlation functions are shown in Fig. 1, parametrised with $$C(Q)=𝒩[1+\beta \mathrm{exp}(-R^2Q^2)]$$ (1) Consistent results are obtained for the three reference samples. The correlation function $`C(Q)`$ decreases for $`Q<2`$ GeV; as $`Q`$ tends to zero, it approaches the value of 0.5, as expected for a statistical spin mixture ensemble. If this is interpreted as a FD effect and the parametrisation of Eq. 
(1) is used, the resulting values for the source size $`R`$ and for the suppression parameter $`\beta `$ are $`R`$ $`=0.11\pm 0.02_{\mathrm{stat}}\pm 0.01_{\mathrm{sys}}\mathrm{fm}`$ $`\beta `$ $`=-0.59\pm 0.09_{\mathrm{stat}}\pm 0.04_{\mathrm{sys}}`$ An alternative method to study the $`(\mathrm{\Lambda }\mathrm{\Lambda },\overline{\mathrm{\Lambda }}\overline{\mathrm{\Lambda }})`$ system is to measure the spin composition of the system using the angular distribution $`\mathrm{d}N/\mathrm{d}y^{*}`$, with $`y^{*}`$ the cosine of the angle between the two protons (antiprotons) in the di-hyperon centre-of-mass system. The measured $`\mathrm{d}N/\mathrm{d}y^{*}`$ distribution has contributions from both $`S=1`$ and $`S=0`$ states: $$\frac{\mathrm{d}N}{\mathrm{d}y^{*}}=(1-\epsilon )\frac{\mathrm{d}N}{\mathrm{d}y^{*}}|_{S=0}+\epsilon \frac{\mathrm{d}N}{\mathrm{d}y^{*}}|_{S=1}$$ where $`\epsilon `$ is the fraction of the $`S=1`$ contribution. By fitting the $`\mathrm{d}N/\mathrm{d}y^{*}`$ distribution in different $`Q`$ ranges, the dependence $`\epsilon (Q)`$ is obtained. But $`\epsilon (Q)`$ can also be defined as $`\epsilon (Q)=C(Q)_{S=1}/[C(Q)_{S=0}+C(Q)_{S=1}]`$, with $`C(Q)_{S=1}`$ and $`C(Q)_{S=0}`$ the contributions of the $`S=1`$ and $`S=0`$ states to the correlation function $`C(Q)`$. Using the parametrisation given in Eq. (1), one obtains for a statistical spin mixture ensemble $$\epsilon (Q)=0.75\frac{1-\gamma \mathrm{exp}(-R^2Q^2)}{1-0.5\gamma \mathrm{exp}(-R^2Q^2)}$$ (2) with $`\gamma =-2\beta `$. The distribution $`\epsilon (Q)`$ is shown in Fig. 2, fitted with the parametrisation given in Eq. (2) with $`\gamma `$ fixed to one. The state $`S=1`$ dominates for $`Q>2`$ GeV, but it is suppressed for $`Q<2`$ GeV. The value of the source size estimated from $`\epsilon (Q)`$ is $`R=0.14\pm 0.09_{\mathrm{stat}}\pm 0.03_{\mathrm{sys}}`$ fm, in agreement with the value obtained from the correlation function. 
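The statistical-spin-mixture form of ε(Q) can be rederived in a few lines: for identical fermions the S=0 (space-symmetric) pairs are enhanced as 1 + exp(−R²Q²), the S=1 (space-antisymmetric) pairs suppressed as 1 − exp(−R²Q²), and weighting them 1:3 gives ε(Q) = 0.75(1 − γe^(−R²Q²))/(1 − 0.5γe^(−R²Q²)) with γ = 1. A quick numerical check of this identity (an illustrative sketch; the units of R and Q are irrelevant here):

```python
import math

R = 0.11  # source size; only the product R*Q matters for this check

def eps_from_states(Q):
    """epsilon = S=1 fraction for a 1:3 statistical spin mixture."""
    e = math.exp(-R * R * Q * Q)
    s0 = 0.25 * (1.0 + e)  # singlet: symmetric spatial wave function, enhanced
    s1 = 0.75 * (1.0 - e)  # triplet: antisymmetric spatial wave function, suppressed
    return s1 / (s0 + s1)

def eps_parametrised(Q, gamma=1.0):
    """The closed form quoted in the text, with gamma fixed to one."""
    e = math.exp(-R * R * Q * Q)
    return 0.75 * (1.0 - gamma * e) / (1.0 - 0.5 * gamma * e)

# eps(0) = 0 (pure S=0 at threshold) and eps -> 0.75 at large Q
```

This makes the fitted behaviour transparent: the S=1 state must vanish as Q → 0 and saturate at the statistical weight 3/4 at large Q.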
From comparison with results from $`\pi ^\pm `$$`\pi ^\pm `$ and $`\mathrm{K}^\pm `$$`\mathrm{K}^\pm `$ correlation measurements, one observes that the source dimension decreases with increasing mass of the emitted particle. ## 3 Colour reconnection in $`\mathrm{W}`$-pair decays The colour reconnection in $`\mathrm{e}^+\mathrm{e}^{-}\to \mathrm{W}^+\mathrm{W}^{-}`$ was studied in a data sample of 174.2 pb<sup>-1</sup> at a centre-of-mass energy of $`\sqrt{s}=189`$ GeV. Hadronic and semileptonic $`\mathrm{W}`$-pair decays were selected and the experimental distribution $`\mathrm{ln}x_p`$ of the charged particles was compared for each event type to the MC models KORALW and EXCALIBUR without CR and to EXCALIBUR with CR. Here $`x_p=p/(\sqrt{s}/2)`$ is the scaled momentum of a particle. KORALW and EXCALIBUR are used to generate $`\mathrm{W}`$-pairs; both are coupled to JETSET for the hadronization part. Three CR models , denoted as $`I`$, $`II`$ and $`II^{\prime }`$ and implemented in JETSET, were compared to data. The distributions obtained for the data and the MC models are shown in Fig. 3. The ratio of the multiplicity in fully-hadronic events to twice the multiplicity in semileptonic events is also shown. The multiplicities within the experimental acceptance for the semileptonic channel are $`N_{ch}^{l\nu \mathrm{q}\overline{\mathrm{q}}}=17.53\pm 0.19\pm 0.24`$ for data and $`N_{ch}^{l\nu \mathrm{q}\overline{\mathrm{q}}}=17.41\pm 0.04\pm 0.29`$ for MC (no CR), giving a difference between data and MC of $`0.12\pm 0.42`$. For the fully hadronic channel, the multiplicity is $`N_{ch}^{\mathrm{q}\overline{\mathrm{q}}\mathrm{q}\overline{\mathrm{q}}}=35.52\pm 0.22\pm 0.43`$ for data and $`N_{ch}^{\mathrm{q}\overline{\mathrm{q}}\mathrm{q}\overline{\mathrm{q}}}=34.77\pm 0.04\pm 0.58`$ for MC (no CR), which gives a difference of $`0.75\pm 0.76`$. 
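The quoted data−MC differences follow from the multiplicities above by quadrature, assuming the statistical and systematic uncertainties are uncorrelated between data and MC (an illustrative sketch; function names are mine):

```python
import math

def compare(data, mc):
    """Data - MC difference; all four quoted errors combined in quadrature."""
    d_val, d_stat, d_sys = data
    m_val, m_stat, m_sys = mc
    err = math.hypot(math.hypot(d_stat, d_sys), math.hypot(m_stat, m_sys))
    return d_val - m_val, err

semilep = compare((17.53, 0.19, 0.24), (17.41, 0.04, 0.29))
hadronic = compare((35.52, 0.22, 0.43), (34.77, 0.04, 0.58))
print(semilep)   # ≈ (0.12, 0.42)
print(hadronic)  # ≈ (0.75, 0.76)
```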
The difference $`N_{ch}^{\mathrm{q}\overline{\mathrm{q}}\mathrm{q}\overline{\mathrm{q}}}-2N_{ch}^{l\nu \mathrm{q}\overline{\mathrm{q}}}`$ is $`0.47\pm 0.44\pm 0.26`$ for data, $`-0.05\pm 0.09`$ for MC (no CR) and $`0.52\pm 0.52`$ for the difference between data and MC. No colour reconnection was observed in the data, but at the current level of statistical precision both models with and without CR are compatible with the experimental results. ## 4 Bose–Einstein correlations in $`\mathrm{W}`$-pair decays The Bose–Einstein correlations in $`\mathrm{W}`$-pair decays were studied using data recorded by the ALEPH detector at centre-of-mass energies of 172, 183 and 189 GeV. For the tuning of the MC models of BE correlations, $`\mathrm{Z}`$ data recorded at 91.2 GeV with the same detector configuration as for $`\mathrm{W}^+\mathrm{W}^{-}`$ events were used. Pairs of unlike-sign pions were chosen as reference sample and the correlation function was defined as $$R^{*}(Q)=\frac{\left({\displaystyle \frac{N_\pi ^{++,--}(Q)}{N_\pi ^{+-}(Q)}}\right)^{\mathrm{data}}}{\left({\displaystyle \frac{N_\pi ^{++,--}(Q)}{N_\pi ^{+-}(Q)}}\right)_{\mathrm{no}\mathrm{BE}}^{\mathrm{MC}}}.$$ (3) The correlation function was parametrised with $$R^{*}(Q)=\kappa (1+ϵQ)(1+\lambda \mathrm{e}^{-\sigma ^2Q^2})$$ (4) where the term $`1+\lambda \mathrm{e}^{-\sigma ^2Q^2}`$ describes the BE effect. The two parameters $`\lambda `$ and $`\sigma `$ characterise the effective strength of the correlations and the source size, respectively. The term $`1+ϵQ`$ takes into account some long range correlations due to the charge and the energy-momentum conservation, while $`\kappa `$ is a normalisation factor. The BE correlations were first measured in $`\mathrm{Z}`$ decays. A MC simulation of the BE effect with the JETSET BE<sub>3</sub> model was then tuned on these data. The tuned parameters were $`\lambda _{\mathrm{input}}=2.3`$ and $`R_{\mathrm{input}}=0.26`$ GeV. 
As $`\mathrm{W}`$ bosons do not decay into b quarks, the BE effect was determined separately in an udsc sample and in a b sample. Two b samples of different purities were tagged and the parameters $`\lambda _\mathrm{b}`$ and $`\lambda _{\mathrm{udsc}}`$ were determined. Using these parameters, the b$`\overline{\mathrm{b}}`$ component was replaced by an udsc component. The agreement between the MC model and the udsc data is good; residual discrepancies between them are corrected bin by bin. The prediction of the MC model tuned and corrected on $`\mathrm{Z}`$ data was then checked on semileptonic $`\mathrm{W}`$-pair decays. They were found to be in very good agreement. For the hadronic $`\mathrm{W}`$ decays, two cases were considered in the MC model: pions from different $`\mathrm{W}`$’s may exhibit BE correlations (denoted as BEB) and only pions from the same $`\mathrm{W}`$ exhibit BE correlations (denoted BEI). The comparison between these two MC models and data is shown in Fig. 4. The result of the fit with the parametrisation given in Eq. (4) is also shown. All four parameters $`\kappa `$, $`ϵ`$, $`\lambda `$ and $`\sigma `$ were free in this fit. The values of $`\lambda `$ and $`\sigma `$ were used to compute an integral of the BE signal $`I_{BE}=\int _0^{\infty }\lambda \mathrm{e}^{-\sigma ^2Q^2}\mathrm{d}Q\propto \lambda /\sigma `$ for data and MC. A second fit with $`\kappa `$, $`ϵ`$ and $`\sigma `$ fixed and $`\lambda `$ free was also made to the first four bins only, where the effect is expected to be maximum. 
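The proportionality I_BE ∝ λ/σ is just the Gaussian integral ∫₀^∞ λ exp(−σ²Q²) dQ = λ√π/(2σ). A quick numerical confirmation (illustrative; the λ and σ values are arbitrary, not fit results):

```python
import math

def i_be(lam, sigma, q_max=40.0, n=100000):
    """Trapezoidal estimate of the integral of lam * exp(-(sigma*Q)^2)."""
    dq = q_max / n
    total = 0.0
    for i in range(n + 1):
        q = i * dq
        w = 0.5 if i in (0, n) else 1.0
        total += w * lam * math.exp(-(sigma * q) ** 2)
    return total * dq

lam, sigma = 0.30, 1.2
closed_form = lam * math.sqrt(math.pi) / (2.0 * sigma)
print(i_be(lam, sigma), closed_form)  # both ≈ 0.2216
```

Doubling λ doubles the integral and doubling σ halves it, which is why the single number λ/σ summarises the strength of the BE signal.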
The differences between the data and the MC models for $`\lambda `$ from the one-parameter fit are $`\lambda ^{\mathrm{data}}-\lambda ^{\mathrm{MC}\mathrm{BEB}}=`$ $`-0.088\pm 0.026\pm 0.020`$ $`\lambda ^{\mathrm{data}}-\lambda ^{\mathrm{MC}\mathrm{BEI}}=`$ $`-0.019\pm 0.026\pm 0.016`$ while for the $`I_{BE}`$ quantity they are $`I_{BE}^{\mathrm{data}}-I_{BE}^{\mathrm{MC}\mathrm{BEB}}=`$ $`-0.0217\pm 0.0062\pm 0.0048`$ $`I_{BE}^{\mathrm{data}}-I_{BE}^{\mathrm{MC}\mathrm{BEI}}=`$ $`-0.0040\pm 0.0062\pm 0.0036`$ The first error is the statistical error, the second is the systematic one. A better agreement is obtained for the JETSET model with BE correlations present only for pions coming from the same $`\mathrm{W}`$ boson. The JETSET model which allows for BE correlations between pions from different $`\mathrm{W}`$’s is disfavoured by both the $`\lambda `$ and $`I_{BE}`$ variables at the $`2.7\sigma `$ level. ## 5 Bose–Einstein correlations in $`\mathrm{W}`$-pair decays at OPAL The OPAL collaboration has analysed data recorded at centre-of-mass energies of 172, 183 and 189 GeV. Three mutually exclusive event samples were selected: the fully hadronic event sample $`\mathrm{W}^+\mathrm{W}^{-}\to \mathrm{q}\overline{\mathrm{q}}\mathrm{q}\overline{\mathrm{q}}`$, the semileptonic event sample $`\mathrm{W}^+\mathrm{W}^{-}\to l\nu \mathrm{q}\overline{\mathrm{q}}`$ and the non-radiative $`(\mathrm{Z}^0/\gamma )^{*}`$ event sample $`(\mathrm{Z}^0/\gamma )^{*}\to \mathrm{q}\overline{\mathrm{q}}`$. The correlation function $`C(Q)`$ was defined according to Eq. (3). For each sample, the correlation function was written as a combination of contributions from the various pure pion classes, including the background. 
For the hadronic event sample one has $$\begin{array}{cc}\hfill C^{\mathrm{had}}(Q)=& P_{\mathrm{had}}^{\mathrm{WW}}(Q)C^{\mathrm{q}\overline{\mathrm{q}}\mathrm{q}\overline{\mathrm{q}}}(Q)+\hfill \\ & [1-P_{\mathrm{had}}^{\mathrm{WW}}(Q)]C_{bg}^{\mathrm{Z}^{*}}(Q).\hfill \end{array}$$ Similar expressions were written for the $`C^{\mathrm{semi}}(Q)`$ and $`C^{\mathrm{non}\mathrm{rad}}(Q)`$ correlation functions. The probabilities $`P_{\mathrm{had}}^{\mathrm{WW}}(Q)`$, etc. were taken from MC simulations without the BE effect. Each correlation function $`C^{\mathrm{q}\overline{\mathrm{q}}\mathrm{q}\overline{\mathrm{q}}}(Q)`$, $`C^{\mathrm{q}\overline{\mathrm{q}}}(Q)`$ and $`C^{\mathrm{Z}^{*}}(Q)`$ was parametrised by $$C(Q)=N[1+f_\pi (Q)\lambda \mathrm{exp}(-Q^2R^2)]$$ (5) where $`f_\pi (Q)`$ is the probability of the pair to be a pair of pions. A simultaneous fit was made to the experimental data, with the parameters $`N`$, $`\lambda `$ and $`R`$ free for each event class (nine free parameters). All three classes exhibit BE correlations with consistent $`R`$ and $`\lambda `$ parameters. BE correlations were then investigated separately for pions coming from the same $`\mathrm{W}`$ and from different $`\mathrm{W}`$’s. The correlation function for the hadronic event sample was written as $`C^{\mathrm{had}}(Q)=P_{\mathrm{had}}^{\mathrm{same}}(Q)C^{\mathrm{same}}(Q)+P_{\mathrm{had}}^{\mathrm{Z}^{*}}(Q)C_{bg}^{\mathrm{Z}^{*}}(Q)`$ $`+[1-P_{\mathrm{had}}^{\mathrm{same}}(Q)-P_{\mathrm{had}}^{\mathrm{Z}^{*}}(Q)]C^{\mathrm{diff}}(Q)`$ where $`C^{\mathrm{same}}(Q)`$, $`C^{\mathrm{diff}}(Q)`$ and $`C_{bg}^{\mathrm{Z}^{*}}(Q)`$ are the correlation functions for pions from the same $`\mathrm{W}`$, from different $`\mathrm{W}`$’s and from $`(\mathrm{Z}^0/\gamma )^{*}\to \mathrm{q}\overline{\mathrm{q}}`$ events. Similar expressions were written for $`C^{\mathrm{semi}}(Q)`$ and $`C^{\mathrm{non}\mathrm{rad}}(Q)`$. 
The correlation functions $`C^{\mathrm{same}}(Q)`$, $`C^{\mathrm{diff}}(Q)`$ and $`C^{\mathrm{Z}^{*}}(Q)`$ were unfolded from the data and are shown in Fig. 5. They were then parametrised by Eq. (5) and simultaneous fits to the experimental distributions were performed. Three different cases were considered: 1) the same source size $`R`$ for all event classes; 2) different $`R`$ parameters for each class, and 3) $`R^{\mathrm{diff}}`$ is related to $`R^{\mathrm{same}}`$ using the theoretical prediction $`(R^{\mathrm{diff}})^2=(R^{\mathrm{same}})^2+4\beta ^2\gamma ^2c^2\tau ^2`$. The results obtained for the parameter $`\lambda `$ for the third case are $`\lambda ^{\mathrm{same}}`$ $`=0.69\pm 0.12\pm 0.06`$ $`\lambda ^{\mathrm{diff}}`$ $`=0.05\pm 0.67\pm 0.35`$ $`\lambda ^{\mathrm{Z}^{*}}`$ $`=0.43\pm 0.06\pm 0.08`$ At the current level of statistical precision it is not possible to determine whether correlations between pions from different $`\mathrm{W}`$’s exist or not. ## 6 Conclusion The ALEPH analyses of Fermi–Dirac correlations of $`(\mathrm{\Lambda }\mathrm{\Lambda },\overline{\mathrm{\Lambda }}\overline{\mathrm{\Lambda }})`$ pairs in hadronic $`\mathrm{Z}`$ decays and of colour reconnection and Bose–Einstein correlations in $`\mathrm{W}`$-pair decays have been presented. A depletion of events is observed in the region $`Q<2`$ GeV for the $`(\mathrm{\Lambda }\mathrm{\Lambda },\overline{\mathrm{\Lambda }}\overline{\mathrm{\Lambda }})`$ system. In the analysis of $`\mathrm{W}`$-pair decays, no colour reconnection effects are observed, but models which predict such effects cannot be excluded. The Bose–Einstein correlations measured in $`\mathrm{W}`$-pair decays are reproduced by a Monte Carlo model with independent fragmentation of the two $`\mathrm{W}`$’s, while a variant of the same model with Bose–Einstein correlations between decay products of different $`\mathrm{W}`$’s is disfavoured at $`2.7\sigma `$. 
The OPAL analysis of Bose–Einstein correlations in $`\mathrm{W}`$-pair decays has also been presented. At the current level of statistical precision it was not possible to determine whether correlations between pions from different $`\mathrm{W}`$’s exist or not. A. Simon, University of Freiburg (Germany): The experiments obviously use different methods to study BE correlations. Is it technically possible to agree on one dataset and check the various methods for their systematics? Answer: Each experiment has optimised its analysis on detector characteristics and on the aspects of BE correlations which were considered important to be studied with the available statistics. The methods are therefore different; this is a normal situation, and there is no need to have coordinated analyses until each experiment obtains a set of “final results”. A premature coordination and convergence of the methods can reduce the quality of the analyses. Once the experiments obtain “final results”, an intercomparison is meaningful, followed by cross checks of the methods used by the other experiments (within the limits of statistics and… manpower). In fact, a LEP group was formed recently to understand the differences between the results of the four LEP experiments.
no-problem/9909/astro-ph9909430.html
ar5iv
text
# Efficiency of cosmic ray reflections from an ultrarelativistic shock wave ## 1 Introduction In an attempt to substitute a single question mark for the previous two, some authors try to identify the process accelerating particles to ultra high energies (‘UHE’, energy $`E>10^{18}`$ eV) with ultrarelativistic shock waves considered to be sources of $`\gamma `$-ray bursts (cf. Waxman 1995a,b, Vietri 1995, Milgrom & Usov 1995, Gallant & Achterberg 1999, hereafter GA99). In the proposed models, reaching cosmic ray energies in excess of $`10^{20}`$ eV requires that particles are reflected from the shock wave characterized with a large Lorentz gamma factor, $`\mathrm{\Gamma }`$, to enable a relative energy gain – in a single reflection – comparable to $`\mathrm{\Gamma }^2`$ (cf. GA99). On the other hand, based on our experience with numerical modeling, we suggested (Bednarz & Ostrowski 1998, Ostrowski 1999) that such processes cannot actually work due to the low efficiency of particle reflections. In the present note we elaborate this problem in detail with the use of numerical modeling of particle interactions with the shock. We show that ‘$`\mathrm{\Gamma }^2`$’ reflections can occur with a non-negligible rate only if a substantial amount of turbulence is present downstream of the shock. However, even in such conditions the number of accelerated particles is a small fraction of all cosmic rays hitting the shock. ## 2 Simulations As discussed by Bednarz & Ostrowski (1998), and in detail by GA99, particles accelerated in multiple interactions with an ultrarelativistic shock wave gain on average in a single ‘loop’ – upstream-downstream-upstream – an amount of energy comparable to the original energy, $`<\mathrm{\Delta }E>\sim E`$. It is due to the extreme particle anisotropy occurring in large $`\mathrm{\Gamma }`$ shocks. Particles hitting the shock wave for the first time, with their isotropic upstream distribution, can receive higher energy gains. 
In this case an individual reflection from the shock may increase particle energies by a factor of $`\mathrm{\Gamma }^2`$. However, the effectiveness of such acceleration depends on how many of the particles from the original upstream population can be reflected. To verify this we use Monte Carlo simulations similar to those applied earlier to derive the accelerated particle spectra (Bednarz & Ostrowski 1998, for details see also Bednarz & Ostrowski 1996). The code reproduces perturbations of particle trajectories due to MHD turbulence by applying discrete scatterings of the particle direction within a narrow cone along its momentum vector. The procedure uses a hybrid approach involving very small scattering angles close to the shock and larger angles further away from it. Between successive scatterings particle trajectories are derived in the uniform background magnetic field. The respective scaling of the time between the successive scattering acts close to and far from the shock mimics the same turbulence amplitude everywhere. The scattering amplitude was selected in a way to reproduce a pitch angle diffusion process for the particle momentum. This requires the angular scattering amplitude of the particle momentum vector to be much smaller than the particle anisotropy. All computations were done in the respective local plasma rest frame and the Lorentz transformation was applied at every particle crossing of the shock. Particles with initial momenta $`p_0`$ taken as a momentum unit, $`p_0=1`$, were injected at the distance of $`2r_g`$ ($`r_g`$ – particle gyroradius) upstream of the shock front. For all such particles we derived their trajectories until they crossed the shock upstream, or were caught in the downstream plasma, reaching a distance of $`4r_g`$ downstream of the shock. For each single particle interaction with the shock the particle momentum vector was recorded, so we were able to consider angular and energy distributions of such particles. 
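A single small-angle scattering act of the kind described above can be sketched as follows (a minimal illustration with invented parameter values; this is not the authors' code): the unit momentum vector is deflected by a random polar angle inside a narrow cone, with the azimuth uniform around the current direction.

```python
import math, random

def scatter_direction(n, delta_max, rng=random):
    """Deflect the unit vector n by a polar angle <= delta_max (radians),
    with uniform azimuth around n -- one small-angle scattering act."""
    nx, ny, nz = n
    # build an orthonormal basis (e1, e2) in the plane perpendicular to n
    if abs(nz) < 0.9:
        ex, ey, ez = -ny, nx, 0.0   # z-hat x n
    else:
        ex, ey, ez = 0.0, -nz, ny   # x-hat x n, safer near the pole
    norm = math.sqrt(ex * ex + ey * ey + ez * ez)
    e1 = (ex / norm, ey / norm, ez / norm)
    e2 = (ny * e1[2] - nz * e1[1],
          nz * e1[0] - nx * e1[2],
          nx * e1[1] - ny * e1[0])  # e2 = n x e1
    delta = delta_max * rng.random()      # small polar deflection
    phi = 2.0 * math.pi * rng.random()    # uniform azimuth
    c, s = math.cos(delta), math.sin(delta)
    return tuple(c * n_i + s * (math.cos(phi) * u + math.sin(phi) * v)
                 for n_i, u, v in zip(n, e1, e2))

rng = random.Random(42)
n = (0.0, 0.0, 1.0)
for _ in range(1000):  # many small scatterings approximate pitch-angle diffusion
    n = scatter_direction(n, 0.05, rng)
```

Repeating such steps, with the time between them rescaled near the shock as in the text, yields pitch-angle diffusion while the direction vector stays exactly on the unit sphere.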
We considered shocks with Lorentz factors $`\mathrm{\Gamma }=10`$, $`160`$ and $`320`$. For each shock we discussed the acceleration processes in conditions with the magnetic field inclination $`\psi =0^{\circ }`$, $`10^{\circ }`$, $`70^{\circ }`$ and with 16 values for the turbulence amplitude measured by the ratio $`\tau `$ of the cross-field diffusion coefficient, $`\kappa _{\perp }`$, to the parallel diffusion coefficient, $`\kappa _{\parallel }`$. The applied values of $`\tau `$ were taken from the range ($`3.2\times 10^{-6}`$, $`0.95`$), approximately uniformly distributed in $`\mathrm{log}\tau `$ . In each simulation run we derived trajectories of $`5\times 10^4`$ particles with initial momenta isotropically distributed in the upstream rest frame. ## 3 Efficiency of ‘$`\mathrm{\Gamma }^2`$’ reflections In the downstream plasma rest frame the ultrarelativistic shock moves with velocity $`c/3`$. This velocity is comparable to the particle velocity $`c`$. Therefore, from all particles crossing the shock downstream only the ones with particular momentum orientations will interact with the shock again; the remaining particles will be caught in the downstream plasma flow and advected far from the shock front. In the simulations we considered this process quantitatively. However, let us first present a simple illustration. Large compression ratios occurring in ultrarelativistic shocks, as measured between the upstream and downstream plasma rest frames, lead for nearly all oblique upstream magnetic field configurations to quasi-perpendicular configurations downstream of the shock. Thus, let us consider for this illustrative example a shock with a non-perturbed perpendicular downstream magnetic field distribution. 
A particle crossing the shock downstream with inclination $`\theta `$ to the magnetic field and phase $`\varphi `$ – both measured in the downstream plasma rest frame, with $`\varphi =\pi /2`$ for particles normal to the shock and directed downstream – will be able to cross the shock upstream only if the equation $$\frac{c}{3}t=r_\mathrm{g}\left[\mathrm{cos}(\varphi +\omega _\mathrm{g}t)-\mathrm{cos}\varphi \right]$$ $`(3.1)`$ has a solution at positive time $`t`$. Here $`r_\mathrm{g}=\frac{pc}{eB}\mathrm{sin}\theta `$ is the particle gyroradius, $`\omega _\mathrm{g}=\frac{eB}{p}`$ is the gyration frequency, and other symbols have the usual meaning. An angular range in the space ($`\theta `$, $`\varphi `$) enabling particles crossing the shock downstream to reach the shock again can be characterized for illustration by three values of $`\theta `$. Particles with $`\mathrm{sin}\theta =1`$ are able to reach the shock again if $`\varphi \in (1.96,3.48)`$, with $`\mathrm{sin}\theta =0.5`$ if $`\varphi \in (2.96,3.87)`$ and with $`\mathrm{sin}\theta =1/3`$ only for $`\varphi =4.71`$. That means that all particles with $`\varphi `$ smaller than 1.96 (Fig. 1) are not able to reach the shock again if fluctuations of the magnetic field downstream of the shock are not present. For perturbed magnetic fields some downstream trajectories starting in the ($`\theta `$, $`\varphi `$) plane outside the reflection range can be scattered toward the shock to cross it upstream. We prove this by the simulations presented in Fig. 2. One may observe that increasing the perturbation amplitude leads to an increased number of reflected particles, reaching $`13`$% in the limit of $`\tau =1`$. For large magnetic field fluctuations the mean relative energy gains of reflected particles are close to $`1.2\mathrm{\Gamma }^2`$ for the shock Lorentz factors considered. One may note that for small $`\psi `$ and $`\mathrm{\Gamma }`$ the energy gain increases with growing $`\tau `$. 
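The quoted φ windows follow from two conditions: the particle must actually enter the downstream region (its initial velocity component along the shock normal, −c sinθ sinφ, must not exceed the shock speed c/3, i.e. sinθ sinφ > −1/3), and Eq. (3.1) must have a positive-time solution. The scan below is an illustrative reconstruction under this assumed geometry (function names are mine); it reproduces the window for sinθ = 1 and the absence of reflections for sinθ = 1/3.

```python
import math

def can_catch_shock(sin_theta, phi, s_max=12.0, n=2400):
    """True if sin_theta*[cos(phi+s) - cos(phi)] >= s/3 for some s > 0,
    i.e. Eq. (3.1) has a positive-time solution (s = omega_g * t)."""
    for i in range(1, n + 1):
        s = s_max * i / n
        if sin_theta * (math.cos(phi + s) - math.cos(phi)) >= s / 3.0:
            return True
    return False

def reflects(sin_theta, phi):
    enters_downstream = sin_theta * math.sin(phi) > -1.0 / 3.0
    return enters_downstream and can_catch_shock(sin_theta, phi)

# scan phi for sin(theta) = 1: the reflecting band should be ~(1.96, 3.48)
phis = [2.0 * math.pi * i / 1500 for i in range(1500)]
band = [phi for phi in phis if reflects(1.0, phi)]
print(min(band), max(band))  # ≈ 1.97 and ≈ 3.48
```

The upper edge, π + arcsin(1/3) ≈ 3.48, comes entirely from the entry condition, while the lower edge ≈ 1.96 comes from the tangency of the gyration orbit with the receding shock.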
The points resulting from the simulations for the smallest values of $`\tau `$ were not included in Fig. 3 because of the small number of reflected particles (cf. Fig. 2). ## 4 Summary We have shown that the efficiency of ‘$`\mathrm{\Gamma }^2`$’ reflections in ultrarelativistic shock waves strongly depends on fluctuations of the magnetic field downstream of the shock. In the most favorable conditions, with high amplitude turbulence downstream of the shock, the reflection efficiency is a factor of 10 or more smaller than the values assumed by other authors. Moreover, due to the magnetic field compression at the shock, we do not expect the required large values of $`\kappa _{\perp }/\kappa _{\parallel }`$ to occur behind the shock (cf. a different approach of Medvedev & Loeb 1999). Therefore, with the actual efficiency of 1 - 10 % there is an additional difficulty for models postulating UHE particle acceleration at GRB shocks (cf. GA99). Let us note, however, that the mean downstream trajectory of a reflected particle involves only a fraction of its gyroperiod. Thus the presence of compressive long waves in this region, leading to non-random trajectory perturbations, could modify our estimates. ## Acknowledgements The presented computations were partly done on the HP Exemplar S2000 in ACK ‘CYFRONET’ in Kraków. We acknowledge support from the Komitet Badań Naukowych through the grant PB 179/P03/96/11 .
no-problem/9909/quant-ph9909001.html
ar5iv
text
# References QUANTUM ALGEBRAIC SYMMETRIES IN NUCLEI, MOLECULES, AND ATOMIC CLUSTERS Dennis Bonatsos<sup>1</sup> and C. Daskaloyannis<sup>2</sup> <sup>1</sup> Institute of Nuclear Physics, N.C.S.R. “Demokritos” GR-15310 Aghia Paraskevi, Attiki, Greece <sup>2</sup> Department of Physics, Aristotle University of Thessaloniki GR-54006 Thessaloniki, Greece Abstract Various applications of quantum algebraic techniques in nuclear structure physics and in molecular physics are briefly reviewed and a recent application of these techniques to the structure of atomic clusters is discussed in more detail. 1. Introduction Quantum algebras (also called quantum groups) are deformed versions of the usual Lie algebras, to which they reduce when the deformation parameter $`q`$ is set equal to unity. From the mathematical point of view they are Hopf algebras. Their use in physics became popular with the introduction of the $`q`$-deformed harmonic oscillator as a tool for providing a boson realization of the quantum algebra su<sub>q</sub>(2), although similar mathematical structures had already been known . Initially used for solving the quantum Yang–Baxter equation, quantum algebras have subsequently found applications in several branches of physics, as, for example, in the description of spin chains, squeezed states , hydrogen atom and hydrogen-like spectra , rotational and vibrational nuclear and molecular spectra and in conformal field theories. By now much work has been done on the $`q`$-deformed oscillator and its relativistic extensions , and several kinds of generalized deformed oscillators and generalized deformed su(2) algebras have been introduced. Here we shall confine ourselves to applications of quantum algebras in nuclear structure physics and in molecular physics. The purpose of this short review is to provide the reader with references for further reading. 
In addition a recent application of quantum algebraic techniques to the structure of atomic clusters will be discussed in more detail. 2. The su<sub>q</sub>(2) rotator model The first application of quantum algebras in nuclear physics was the use of the deformed algebra su<sub>q</sub>(2) for the description of the rotational spectra of deformed and superdeformed nuclei. The Hamiltonian of the $`q`$-deformed rotator is proportional to the second order Casimir operator of the su<sub>q</sub>(2) algebra. Its Taylor expansion contains powers of $`J(J+1)`$ (where $`J`$ is the angular momentum), being similar to the expansion provided by the Variable Moment of Inertia (VMI) model. Furthermore, the deformation parameter $`\tau `$ (with $`q=e^{i\tau }`$) has been found to correspond to the softness parameter of the VMI model. Through a comparison of the su<sub>q</sub>(2) model to the hybrid model the deformation parameter $`\tau `$ has also been connected to the number of valence nucleon pairs and to the nuclear deformation $`\beta `$ . Since $`\tau `$ is an indicator of deviation from the pure su(2) symmetry, it is not surprising that $`\tau `$ decreases with increasing $`\beta `$ . The su<sub>q</sub>(2) model has been recently extended to excited (beta and gamma) bands . B(E2) transition probabilities have also been described in this framework . In this case the $`q`$-deformed Clebsch–Gordan coefficients are used instead of the normal ones. (It should be noticed that the $`q`$-deformed angular momentum theory has already been much developed .) The model predicts an increase of the B(E2) values with angular momentum, while the rigid rotator model predicts saturation. Some experimental results supporting this prediction already exist . 
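The su<sub>q</sub>(2) rotator energies are built from q-numbers: in a common convention, for $`q=e^{i\tau }`$ one has \[x\] = sin(τx)/sin(τ), and the rotational energy is proportional to the Casimir eigenvalue \[J\]\[J+1\], which reduces to J(J+1) as τ → 0. A small numerical sketch (illustrative; the overall scale ℏ²/2I is set to one, and the τ value is arbitrary):

```python
import math

def q_number(x, tau):
    """[x] = sin(tau*x)/sin(tau) for q = exp(i*tau); tends to x as tau -> 0."""
    return math.sin(tau * x) / math.sin(tau)

def e_rotator(J, tau):
    """su_q(2) rotator energy ~ [J][J+1], with the scale hbar^2/2I set to 1."""
    return q_number(J, tau) * q_number(J + 1, tau)

tau = 0.03  # small deformation, of the order of a VMI softness parameter
levels = [e_rotator(J, tau) for J in (2, 4, 6, 8)]
# the deformed levels lie below the rigid-rotator values J(J+1),
# i.e. the band is compressed at high spin, as in the VMI model
```

The compression of high-spin levels relative to J(J+1) is exactly the stretching effect that makes the deformed rotator resemble the Variable Moment of Inertia model.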
Similarly increasing B(E2) values are predicted by a modified version of the su(3) limit of the Interacting Boson Model (IBM), by the su(3) limit of the sdg Interacting Boson Model , by the Fermion Dynamical Symmetry Model (FDSM) , as well as by the recent systematics of Zamfir and Casten . 3. Extensions of the su<sub>q</sub>(2) model The su<sub>q</sub>(2) model has been successful in describing rotational nuclear spectra. For the description of vibrational and transitional nuclear spectra it has been found that $`J(J+1)`$ has to be replaced by $`J(J+c)`$. The additional parameter $`c`$ allows for the description of nuclear anharmonicities in a way similar to that of the Interacting Boson Model (IBM) and the Generalized Variable Moment of Inertia (GVMI) model . The use of $`J(J+c)`$ instead of $`J(J+1)`$ for vibrational and transitional nuclei is also supported by recent systematics . Another generalization is based on the use of the deformed algebra su<sub>Φ</sub>(2) , which is characterized by a structure function $`\mathrm{\Phi }`$. The usual su(2) and su<sub>q</sub>(2) algebras are obtained for specific choices of the structure function $`\mathrm{\Phi }`$. The su<sub>Φ</sub>(2) algebra has been constructed so that its representation theory resembles as much as possible the representation theory of the usual su(2) algebra. Using this technique one can construct, for example, a rotator having the same spectrum as the one given by the Holmberg–Lipas formula . A two-parameter generalization of the su<sub>q</sub>(2) model, labelled as su<sub>qp</sub>(2), has also been successfully used for the description of superdeformed nuclear bands . 4. Pairing correlations It has been found that correlated fermion pairs coupled to zero angular momentum in a single-$`j`$ shell behave approximately as suitably defined $`q`$-deformed bosons. 
After performing the same boson mapping to a simple pairing Hamiltonian, one sees that the pairing energies are also correctly reproduced up to the same order. The deformation parameter used ($`\tau =\mathrm{ln}q`$) is found to be inversely proportional to the size of the shell, thus serving as a small parameter. The above mentioned system of correlated fermion pairs can be described exactly by suitably defined generalized deformed bosons . Then both the commutation relations are satisfied exactly and the pairing energies are reproduced exactly. The spectrum of the appropriate generalized deformed oscillator corresponds, up to first order perturbation theory, to a harmonic oscillator with an $`x^4`$ perturbation. If one considers, in addition to the pairs coupled to zero angular momentum, pairs coupled to non-zero angular momenta, one finds that an approximate description in terms of two suitably defined $`q`$-oscillators (one describing the $`J=0`$ pairs and the other corresponding to the $`J0`$ pairs) occurs . The additional terms introduced by the deformation have been found to improve the description of the neutron pair separation energies of the Sn isotopes, with no extra parameter introduced. $`q`$-deformed versions of the pairing theory have also been given in . 5. $`q`$-deformed versions of nuclear models A $`q`$-deformed version of a two dimensional toy Interacting Boson Model (IBM) with su<sub>q</sub>(3) overall symmetry has been developed , mainly for testing the ways in which spectra and transition probabilities are influenced by the $`q`$-deformation. The question of possible complete breaking of the symmetry through $`q`$-deformation, i.e. the transition from the su<sub>q</sub>(2) limiting symmetry to the so<sub>q</sub>(3) one has been examined . It has been found that such a transition is possible for complex values of the parameter $`q`$ . (For problems arising when using complex $`q`$ values see ). 
Complete breaking of the symmetry has also been considered in the framework of an su<sub>q</sub>(2) model . It has also been found that $`q`$-deformation leads (for a specific range of values of the deformation parameter $`\tau `$, with $`q=e^{i\tau }`$) to a recovery of the u(3) symmetry in the framework of a simple Nilsson model including a spin-orbit term. Finally, the o<sub>q</sub>(3) limit of the toy IBM model has been used for the description of <sup>16</sup>O + $`\alpha `$ cluster states in <sup>20</sup>Ne, with positive results . $`q`$-deformed versions of the o(6) and u(5) limits of the full IBM have been discussed in . The $`q`$-deformation of the su(3) limit of IBM is a formidable problem, since the su<sub>q</sub>(3) $`\supset `$ so<sub>q</sub>(3) decomposition has for the moment been achieved only for completely symmetric su<sub>q</sub>(3) irreducible representations . Furthermore a $`q`$-deformed version of the Moszkowski model has been developed and RPA modes have been studied in it. A $`q`$-deformed Moszkowski model with cranking has also been studied in the mean-field approximation. It has been seen that the residual interaction simulated by the $`q`$-deformation is felt more strongly by states with large $`J_z`$. The possibility of using $`q`$-deformation in assimilating temperature effects is receiving attention, since it has also been found that this approach can be used in describing thermal effects in the framework of a $`q`$-deformed Thouless model for superconductivity. In addition, $`q`$-deformed versions of the Lipkin-Meshkov-Glick (LMG) model have been developed, both for the 2-level version of the model in terms of an su<sub>q</sub>(2) algebra , and for the 3-level version of the model in terms of an su<sub>q</sub>(3) algebra .

6.
Anisotropic quantum harmonic oscillator with rational ratios of frequencies The symmetries of the 3-dimensional anisotropic quantum harmonic oscillator with rational ratios of frequencies (RHO) are of high current interest in nuclear physics, since they are the basic symmetries underlying the structure of superdeformed and hyperdeformed nuclei . The 2-dimensional RHO is also of interest, in connection with “pancake” nuclei , i.e. very oblate nuclei. Cluster configurations in light nuclei can also be described in terms of RHO symmetries, which underlie the geometrical structure of the Bloch–Brink $`\alpha `$-cluster model . The 3-dim RHO is also of interest for the interpretation of the observed shell structure in atomic clusters , especially after the realization that large deformations can occur in such systems . (See section 9 for further discussion of atomic clusters.) The two-dimensional and three-dimensional anisotropic harmonic oscillators have been the subject of several investigations, both at the classical and the quantum mechanical level (see for references). These oscillators are examples of superintegrable systems. The special cases with frequency ratios 1:2 and 1:3 have also been considered . While at the classical level it is clear that the su(N) or sp(2N,R) algebras can be used for the description of the N-dimensional anisotropic oscillator, the situation at the quantum level, even in the two-dimensional case, is not as simple. It has been proved that a generalized deformed u(2) algebra is the symmetry algebra of the two-dimensional anisotropic quantum harmonic oscillator , which is the oscillator describing the single-particle level spectrum of “pancake” nuclei, i.e. of triaxially deformed nuclei with $`\omega _x>>\omega _y`$, $`\omega _z`$. Furthermore, a generalized deformed u(3) algebra turns out to be the symmetry algebra of the three-dimensional RHO . 7. 
Three-dimensional $`q`$-deformed (isotropic) harmonic oscillator

Recently the 3-dimensional $`q`$-deformed (isotropic) harmonic oscillator has been studied in detail , following the mathematical developments of . It turns out that in this framework, one can reproduce level schemes similar to the ones occurring in the modified harmonic oscillator model, first suggested by Nilsson . An appropriate $`q`$-deformed spin–orbit interaction term has also been developed . Including this term in the 3-dimensional $`q`$-deformed (isotropic) harmonic oscillator scheme one can reproduce level schemes similar to those provided by the modified harmonic oscillator with spin–orbit interaction. It is expected that this scheme, without the spin–orbit interaction term, will be appropriate for describing the magic numbers occurring in the various kinds of atomic clusters , since a description of magic numbers of atomic clusters in terms of a Nilsson model without a spin–orbit interaction has already been attempted . This subject will be discussed in some detail in Section 9. A recent review of the applications of quantum algebraic techniques to nuclear structure problems can be found in .

8. The use of quantum algebras in molecular structure

Similar techniques can be applied in describing properties of diatomic and polyatomic molecules. A brief list will be given here. 1) Rotational spectra of diatomic molecules have been described in terms of the su<sub>q</sub>(2) model . As in the case of nuclei, $`q`$ is a phase factor ($`q=e^{i\tau }`$). In molecules $`\tau `$ is of the order of 0.01. The use of the su<sub>q</sub>(2) symmetry leads to a partial summation of the Dunham expansion describing the rotational–vibrational spectra of diatomic molecules . Molecular backbending (bandcrossing) has also been described in this framework . Rotational spectra of symmetric top molecules have also been considered in the framework of the su<sub>q</sub>(2) symmetry.
Furthermore, two $`q`$-deformed rotators with slightly different parameter values have been used for the description of $`\mathrm{\Delta }I=1`$ staggering effects in rotational bands of diatomic molecules. (For a discussion of $`\mathrm{\Delta }I=2`$ staggering effects in diatomic molecules see ). 2) Vibrational spectra of diatomic molecules have been described in terms of $`q`$-deformed anharmonic oscillators having the su<sub>q</sub>(1,1) or the u<sub>q</sub>(2) $`\supset `$ o<sub>q</sub>(2) symmetry, as well as in terms of generalized deformed oscillators similar to the ones described in sec. 3 . These results, combined with 1), lead to the full summation of the Dunham expansion . A two-parameter deformed anharmonic oscillator with u<sub>qp</sub>(2) $`\supset `$ o<sub>qp</sub>(2) symmetry has also been considered . 3) The physical content of the anharmonic oscillators mentioned in 2) has been clarified by constructing WKB equivalent potentials (WKB-EPs) and classical equivalent potentials providing approximately the same spectrum. The results have been corroborated by the study of the relation between su<sub>q</sub>(1,1) and the anharmonic oscillator with $`x^4`$ anharmonicities . Furthermore the WKB-EP corresponding to the su<sub>q</sub>(1,1) anharmonic oscillator has been connected to a class of Quasi-Exactly Soluble Potentials (QESPs) . 4) Generalized deformed oscillators giving the same spectrum as the Morse potential and the modified Pöschl–Teller potential , as well as a deformed oscillator containing them as special cases have also been constructed. In addition, $`q`$-deformed versions of the Morse potential have been given, either by using the so<sub>q</sub>(2,1) symmetry or by solving a $`q`$-deformed Schrödinger equation for the usual Morse potential . For the sake of completeness it should be mentioned that a deformed oscillator giving the same spectrum as the Coulomb potential has also been constructed .
5) A $`q`$-deformed version of the vibron model for diatomic molecules has been constructed , in a way similar to that described in sec. 5. 6) For vibrational spectra of polyatomic molecules a model of $`n`$ coupled generalized deformed oscillators has been built , containing the approach of Iachello and Oss as a special case. In addition a model of two $`Q`$-deformed oscillators coupled so that the total Hamiltonian has the su<sub>Q</sub>(2) symmetry has been proved to be equivalent, to lowest order approximation, to a system of two identical Morse oscillators coupled by the cross-anharmonicity usually used empirically in describing vibrational spectra of diatomic molecules. 7) Quasi-molecular resonances in the systems <sup>12</sup>C+<sup>12</sup>C and <sup>12</sup>C+<sup>16</sup>O have been described in terms of a $`q`$-deformed oscillator plus a rigid rotator . A review of several of the above topics, concerning the applications of quantum algebraic techniques to molecular structure, accompanied by a detailed and self-contained introduction to quantum algebras, has been given by Raychev . 9. The 3-dimensional $`q`$-deformed harmonic oscillator and magic numbers of alkali metal clusters Metal clusters have been recently the subject of many investigations (see for relevant reviews). One of the first fascinating findings in their study was the appearance of magic numbers , analogous to but different from the magic numbers appearing in the shell structure of atomic nuclei . This analogy led to the early description of metal clusters in terms of the Nilsson–Clemenger model , which is a simplified version of the Nilsson model of atomic nuclei, in which no spin-orbit interaction is included. Further theoretical investigations in terms of the jellium model demonstrated that the mean field potential in the case of simple metal clusters bears great similarities to the Woods–Saxon potential of atomic nuclei, with a slight modification of the “wine bottle” type . 
The Woods–Saxon potential itself looks like a harmonic oscillator truncated at a certain energy value and flattened at the bottom. It should also be recalled that an early schematic explanation of the magic numbers of metallic clusters has been given in terms of a scheme intermediate between the level scheme of the 3-dimensional harmonic oscillator and the square well . Again in this case the intermediate potential resembles a harmonic oscillator flattened at the bottom. On the other hand, modified versions of harmonic oscillators have been recently investigated, as it has already been mentioned. The spectra of $`q`$-deformed oscillators increase either less rapidly (for $`q`$ being a phase factor, i.e. $`q=e^{i\tau }`$ with $`\tau `$ being real) or more rapidly (for $`q`$ being real, i.e. $`q=e^\tau `$ with $`\tau `$ being real) in comparison to the equidistant spectrum of the usual harmonic oscillator , while the corresponding (WKB-equivalent) potentials resemble the harmonic oscillator potential, truncated at a certain energy (for $`q`$ being a phase factor) or not (for $`q`$ being real), the deformation inflicting an overall widening or narrowing of the potential, depending on the value of the deformation parameter $`q`$. Very recently, a $`q`$-deformed version of the 3-dimensional harmonic oscillator has been constructed , taking advantage of the u<sub>q</sub>(3) $``$ so<sub>q</sub>(3) symmetry . As it has already been mentioned in Section 7, the spectrum of this 3-dimensional $`q`$-deformed harmonic oscillator has been found to reproduce very well the spectrum of the modified harmonic oscillator introduced by Nilsson , without the spin-orbit interaction term. Since the Nilsson model without the spin orbit term is essentially the Nilsson–Clemenger model used for the description of metallic clusters , it is worth examining if the 3-dimensional $`q`$-deformed harmonic oscillator can reproduce the magic numbers of simple metallic clusters. 
This is the subject of the present section. The space of the 3-dimensional $`q`$-deformed harmonic oscillator consists of the completely symmetric irreducible representations of the quantum algebra u<sub>q</sub>(3). In this space a deformed angular momentum algebra, so<sub>q</sub>(3), can be defined . The Hamiltonian of the 3-dimensional $`q`$-deformed harmonic oscillator is defined so that it satisfies the following requirements: a) It is an so<sub>q</sub>(3) scalar, i.e. the energy is simultaneously measurable with the $`q`$-deformed angular momentum related to the algebra so<sub>q</sub>(3) and its $`z`$-projection. b) It conserves the number of bosons, in terms of which the quantum algebras u<sub>q</sub>(3) and so<sub>q</sub>(3) are realized. c) In the limit $`q\rightarrow 1`$ it is in agreement with the Hamiltonian of the usual 3-dimensional harmonic oscillator. It has been proved that the Hamiltonian of the 3-dimensional $`q`$-deformed harmonic oscillator satisfying the above requirements takes the form $$H_q=\hbar \omega _0\left\{[N]q^{N+1}-\frac{q(q-q^{-1})}{[2]}C_q^{(2)}\right\},$$ (1) where $`N`$ is the number operator and $`C_q^{(2)}`$ is the second order Casimir operator of the algebra so<sub>q</sub>(3), while $$[x]=\frac{q^x-q^{-x}}{q-q^{-1}}$$ (2) is the definition of $`q`$-numbers and $`q`$-operators. The energy eigenvalues of the 3-dimensional $`q`$-deformed harmonic oscillator are then $$E_q(n,l)=\hbar \omega _0\left\{[n]q^{n+1}-\frac{q(q-q^{-1})}{[2]}[l][l+1]\right\},$$ (3) where $`n`$ is the number of vibrational quanta and $`l`$ is the eigenvalue of the angular momentum, obtaining the values $`l=n,n-2,\mathrm{\dots },0`$ or 1. In the limit of $`q\rightarrow 1`$ one obtains $`\mathrm{lim}_{q\rightarrow 1}E_q(n,l)=\hbar \omega _0n`$, which coincides with the classical result. For small values of the deformation parameter $`\tau `$ (where $`q=e^\tau `$) one can expand eq.
(3) in powers of $`\tau `$, obtaining $$E_q(n,l)=\hbar \omega _0n-\hbar \omega _0\tau (l(l+1)-n(n+1))$$ $$-\hbar \omega _0\tau ^2\left(l(l+1)-\frac{1}{3}n(n+1)(2n+1)\right)+𝒪(\tau ^3).$$ (4) The last expression to leading order bears great similarity to the modified harmonic oscillator suggested by Nilsson (with the spin-orbit term omitted) $$V=\frac{1}{2}\hbar \omega \rho ^2-\hbar \omega \kappa ^{\prime }(𝐋^2-<𝐋^2>_N),\rho =r\sqrt{\frac{M\omega }{\hbar }},$$ (5) where $$<𝐋^2>_N=\frac{N(N+3)}{2}.$$ (6) It has been proved that the spectrum of the 3-dimensional $`q`$-deformed harmonic oscillator closely reproduces the spectrum of the modified harmonic oscillator of Nilsson. In both cases the effect of the $`l(l+1)`$ term is to flatten the bottom of the harmonic oscillator potential, thus making it resemble the Woods–Saxon potential. The level scheme of the 3-dimensional $`q`$-deformed harmonic oscillator (for $`\mathrm{}\omega _0=1`$ and $`\tau =0.038`$) is given in Table 1 , up to a certain energy. Each level is characterized by the quantum numbers $`n`$ (number of vibrational quanta) and $`l`$ (angular momentum). Next to each level its energy, the number of particles it can accommodate (which is equal to $`2(2l+1)`$) and the total number of particles up to and including this level are given. If the energy difference between two successive levels is larger than 0.39, it is considered as a gap separating two successive shells and the energy difference is reported between the two levels. In this way magic numbers can be easily read in the table: they are the numbers appearing above the gaps, written in boldface characters. The magic numbers provided by the 3-dimensional $`q`$-deformed harmonic oscillator in Table 1 are compared to available experimental data for Na clusters in Table 2 (columns 2–6) .
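The spectrum and the shell-gap rule described above are straightforward to implement. The sketch below is our own illustration: the parameter values $`\hbar \omega _0=1`$, $`\tau =0.038`$, the gap threshold 0.39, the level capacity $`2(2l+1)`$ and the allowed $`l`$ values are taken from the text, while the function names, the cutoff `n_max` and the sorting-by-energy step are our assumptions.

```python
import math

def q_num(x, tau):
    """q-number [x] = (q^x - q^-x)/(q - q^-1) for real q = exp(tau)."""
    return math.sinh(tau * x) / math.sinh(tau)

def energy(n, l, tau, hw=1.0):
    """Eq. (3): E_q(n,l) = hw * ( [n] q^(n+1) - q(q - 1/q)/[2] * [l][l+1] )."""
    q = math.exp(tau)
    casimir = q_num(l, tau) * q_num(l + 1, tau)
    return hw * (q_num(n, tau) * q**(n + 1)
                 - q * (q - 1.0 / q) / q_num(2, tau) * casimir)

def magic_numbers(tau=0.038, n_max=10, gap=0.39):
    # Allowed angular momenta: l = n, n-2, ..., 1 or 0;
    # each (n, l) level holds 2(2l+1) particles.
    levels = sorted((energy(n, l, tau), l)
                    for n in range(n_max + 1)
                    for l in range(n, -1, -2))
    magics, total = [], 0
    for (e, l), nxt in zip(levels, levels[1:]):
        total += 2 * (2 * l + 1)
        if nxt[0] - e > gap:      # energy gap -> shell closure
            magics.append(total)
    return magics

print(magic_numbers()[:8])
```

The lowest closures (2, 8, 20) emerge immediately from the gap rule; the higher ones depend on the near-threshold gaps discussed below in connection with Tables 1–3.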
The following comments apply: i) Only magic numbers up to 1500 are reported, since it is known that filling of electronic shells is expected to occur only up to this limit . For large clusters beyond this point it is known that magic numbers can be explained by the completion of icosahedral or cuboctahedral shells of atoms . ii) Up to 600 particles there is consistency among the various experiments and between the experimental results on the one hand and our findings on the other. iii) Beyond 600 particles the predictions of the three experiments, which report magic numbers in this region, are quite different. However, the predictions of all three experiments are well accommodated by the present model. Magic numbers 694, 832, 1012 are supported by the findings of both Martin et al. and Bréchignac et al. , magic numbers 1206, 1410 are in agreement with the experimental findings of Martin et al. , magic numbers 912, 1284 are supported by the findings of Bréchignac et al., while magic numbers 676, 1100, 1314, 1502 are in agreement with the experimental findings of Pedersen et al. . In Table 2 the predictions of three simple theoretical models (non-deformed 3-dimensional harmonic oscillator (column 9), square well potential (column 8), rounded square well potential (intermediate between the previous two, column 7)) are also reported for comparison. It is clear that the predictions of the non-deformed 3-dimensional harmonic oscillator are in agreement with the experimental data only up to magic number 40, while the other two models give correctly a few more magic numbers (58, 92, 138), although they already fail by predicting magic numbers at 68, 70, 106, 112, 156, which are not observed.
It should be noticed at this point that the first few magic numbers of alkali clusters (up to 92) can be correctly reproduced by the assumption of the formation of shells of atoms instead of shells of delocalized electrons , this assumption being applicable under conditions not favoring delocalization of the valence electrons of alkali atoms. Comparisons among the present results, experimental data (by Martin et al. (column 2), Pedersen et al. (column 3) and Bréchignac et al. (column 4)) and theoretical predictions more sophisticated than those reported in Table 2 can be made in Table 3 , where magic numbers predicted by various jellium model calculations (columns 5–8, ), Woods–Saxon and wine bottle potentials (column 9, ), as well as by a classification scheme using the $`3n+l`$ pseudo quantum number (column 10, ) are reported. The following observations can be made: i) All magic numbers predicted by the 3-dimensional $`q`$-deformed harmonic oscillator are supported by at least one experiment, with no exception. ii) Some of the jellium models, as well as the $`3n+l`$ classification scheme, predict magic numbers at 186, 540/542, which are not supported by experiment. Some jellium models also predict a magic number at 748 or 758, again without support from experiment. The Woods–Saxon and wine bottle potentials of Ref. predict a magic number at 68, for which no experimental support exists. The present scheme avoids problems at these numbers. It should be noticed, however, that in the cases of 186 and 542 the energy gap following them in the present scheme is 0.329 and 0.325 respectively (see Table 1), i.e. quite close to the threshold of 0.39 which we have considered as the minimum energy gap separating different shells. One could therefore qualitatively remark that 186 and 542 are “built in” the present scheme as “secondary” (not very pronounced) magic numbers.
The following general remarks can also be made: i) It is quite remarkable that the 3-dimensional $`q`$-deformed harmonic oscillator reproduces the magic numbers at least as accurately as other, more sophisticated, models by using only one free parameter ($`q=e^\tau `$). Once the parameter is fixed, the whole spectrum is fixed and no further manipulations can be made. This can be considered as evidence that the 3-dimensional $`q`$-deformed harmonic oscillator possesses a symmetry (the u<sub>q</sub>(3) $`\supset `$ so<sub>q</sub>(3) symmetry) appropriate for the description of the physical systems under study. ii) It has been remarked that if $`n`$ is the number of nodes in the solution of the radial Schrödinger equation and $`l`$ is the angular momentum quantum number, then the degeneracy of energy levels of the hydrogen atom characterized by the same $`n+l`$ is due to the so(4) symmetry of this system, while the degeneracy of energy levels of the spherical harmonic oscillator (i.e. of the 3-dimensional isotropic harmonic oscillator) characterized by the same $`2n+l`$ is due to the su(3) symmetry of this system. $`3n+l`$ has been used to approximate the magic numbers of alkali metal clusters with some success, but no relevant Lie symmetry could be determined. In view of the present findings the lack of a Lie symmetry related to $`3n+l`$ is quite clear: the symmetry of the system appears to be a quantum algebraic symmetry (u<sub>q</sub>(3)), which is a nonlinear extension of the Lie symmetry u(3). iii) An interesting problem is to determine a WKB-equivalent potential giving (within this approximation) the same spectrum as the 3-dimensional $`q`$-deformed harmonic oscillator, using methods similar to those of Ref. . The similarity between the results of the present model and those provided by the Woods–Saxon potential (column 9 in Table 3) suggests that the answer should be a harmonic oscillator potential flattened at the bottom, similar to the Woods–Saxon potential.
Whether such a WKB-equivalent potential shows any similarity to a wine bottle shape, as several potentials used for the description of metal clusters do , remains to be seen. In summary, we have shown in this section that the 3-dimensional $`q`$-deformed harmonic oscillator with u<sub>q</sub>(3) $`\supset `$ so<sub>q</sub>(3) symmetry correctly predicts all experimentally observed magic numbers of alkali metal clusters up to 1500, which is the expected limit of validity for theories based on the filling of electronic shells. This indicates that u<sub>q</sub>(3), which is a nonlinear deformation of the u(3) symmetry of the spherical (3-dimensional isotropic) harmonic oscillator, is a good candidate for being the symmetry of systems of alkali metal clusters. This work has been supported by the Greek Secretariat of Research and Technology under contract PENED 95/1981.
# The MHD Kelvin-Helmholtz Instability III: The Role of Sheared Magnetic Field in Planar Flows

(Accepted by the Astrophysical Journal)

## 1. Introduction

The Kelvin-Helmholtz (KH) instability is commonly expected in boundary layers separating two fluids and should occur frequently in both astrophysical and geophysical environments. The instability taps the free energy of the relative motion between two regions separated by a shear layer, or “vortex sheet,” and is often cited as a means to convert that directed flow energy into turbulent energy (e.g., Maslowe 1985). Astrophysically, the instability is likely, for example, along jets generated in some astrophysical sources, such as active galactic nuclei and young stellar objects (e.g., Ferrari et al. 1981). There are also strongly sheared flows in the solar corona (e.g., Kopp 1992). Geophysically, it is expected on the earth’s magnetopause separating the magnetosphere from the solar wind (e.g., Miura 1984). The KH instability leads to momentum and energy transport, as well as fluid mixing. If the fluid is magnetized, then the instability can locally amplify the magnetic field temporarily by stretching it, enhance dissipation by reconnection, and lead to self organization between the magnetic and flow fields. The linear evolution of the KH instability was thoroughly studied a long time ago. The analysis of the basic magnetohydrodynamic (MHD) KH linear stability was summarized by Chandrasekhar (1961) and Miura & Pritchett (1982) and has been applied to numerous astrophysical situations (e.g., Ferrari et al. 1981; Bodo et al. 1998). Chandrasekhar showed, for example, that for incompressible flows, a uniform magnetic field aligned with the flow field will stabilize the shear layer if the Alfvénic Mach number of the transition is small enough; in particular, for a plasma of uniform density if $`M_A=U_0/c_A<2`$, where $`U_0`$ is the velocity change across the shear layer.
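Chandrasekhar's criterion can be checked with a few lines of arithmetic. The sketch below is our own illustration, not part of the paper: it assumes the standard Gaussian-unit Alfvén speed $`c_A=B/\sqrt{4\pi \rho }`$, and the sample field strengths and the unit density and velocity jump are arbitrary choices for display.

```python
import math

def alfven_speed(B, rho):
    """Alfven speed c_A = B / sqrt(4*pi*rho) in Gaussian units."""
    return B / math.sqrt(4.0 * math.pi * rho)

def kh_linearly_stable(U0, B, rho):
    """Incompressible, uniform-density, flow-aligned-field criterion:
    the shear layer is linearly stable if M_A = U0/c_A < 2."""
    M_A = U0 / alfven_speed(B, rho)
    return M_A, M_A < 2.0

# Illustrative numbers only: unit density, unit velocity jump.
for B in (2.0, 1.0, 0.1):
    M_A, stable = kh_linearly_stable(U0=1.0, B=B, rho=1.0)
    print(f"B = {B}: M_A = {M_A:.2f}, linearly stable: {stable}")
```

Because $`c_A=1/\sqrt{4\pi }\approx 0.28`$ at unit field and density, only a fairly strong field brings $`M_A`$ below 2; weaker fields leave the layer linearly unstable, which is the regime the simulations below explore.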
There is now a considerable literature on the nonlinear evolution of the hydrodynamical (HD) KH instability (e.g., Corcos & Sherman 1984; Maslowe 1985) thanks to the development of both robust algorithms to solve fluid equations and fast computers to execute numerical simulations. However, the literature on the nonlinear behavior of the MHD KH instability is much more limited. That is because MHD physics is more complicated and because accurate, robust numerical algorithms, especially for compressible flows, are relatively new. Since a great many astrophysical applications surely involve magnetized fluids, it is important to come to a full understanding of the MHD version of the problem. In the past several years initial progress has been made, especially in two-dimensional MHD flows involving initially uniform magnetic fields having a component aligned with the flow field (e.g., Miura 1984; Wu 1986; Malagoli et al. 1996; Frank et al. 1996 \[Paper I\]; Jones et al. 1997 \[Paper II\]). The KH problem is, of course, a study of boundary evolution. An interesting and physically very relevant variation on this problem adds a current sheet to the vortex sheet. Then, the direction and/or the magnitude of the magnetic field will change across the shear layer. Dahlburg et al. (1997) recently presented a nice discussion of the linear properties of this problem for non-ideal, incompressible MHD when $`M_A1`$, and also examined numerically some aspects of the early nonlinear evolution of current-vortex sheets. They pointed out for the (strong field) parameters considered that the character of the instability changes depending on the relative widths of the current and vortex sheets. Let those be measured by $`a_b`$ and $`a_v`$, respectively. When the vortex sheet was thicker than the current sheet (actually when $`(a_b/a_v)M_A^{(2/3)}<1`$), they found the instability to be magnetically dominated and that its character resembled resistive, tearing modes; i.e., “sausage modes”. 
In the opposite regime, on the other hand, the instability evolved in ways that were qualitatively similar to the KH instability, but with dynamically driven magnetic reconnection on “ideal flow timescales”, independent of the resistivity. Recently, Keppens et al. (1999) explored numerically the situation in which the magnetic field reverses direction discontinuously within the shear layer. They found that this magnetic field configuration actually enhances the linear KH instability rate, because it increases the role of tearing mode reconnection, which accelerates plasma and helps drive circulation. Miura (1987) and Keller & Lysak (1999) reported some results from numerical simulations with sheared magnetic fields inside a velocity shear layer; that is, with magnetic fields whose direction rotates relative to the flow plane. Those papers considered specific cases designed to probe issues associated with convection in the earth’s magnetosphere. Both found significant influences from the geometry of the magnetic field. Galinsky and Sonnerup (1994) described three-dimensional simulations also with sheared magnetic fields inside a velocity shear layer. Their simulations followed the early nonlinear evolution of three-dimensional current-vortex tubes, but were limited by low resolution. The work reported here provides an extension through compressible MHD simulations of the results given by Dahlburg et al. (1997). More explicitly it extends our earlier studies of Papers I and II in this direction. To summarize the latter two studies: Paper I examined the two-dimensional nonlinear evolution of the MHD KH instability on periodic sections of unstable sheared flows of a uniform-density, unit Mach number plasma with a uniform magnetic field parallel to the flow direction. That paper considered two different magnetic field strengths corresponding to $`M_A=2.5`$, which is only slightly weaker than required for linear stability, and a weaker field case, $`M_A=5`$.
It emphasized that the stronger field case became nonlinearly stable after only a modest growth, resulting in a stable, laminar flow. The weaker field case, however, developed the classical “Kelvin’s Cat’s Eye” vortex expected in the HD case. That vortex, which is a stable structure in two-dimensional HD flows, was soon disrupted in the $`M_A=5`$ case by magnetic tension during reconnection, so that this flow also became nearly laminar and effectively stable, because the shear layer was much broadened by its evolution. Subsequently, Paper II considered in $`2\frac{1}{2}`$-dimensions a wider range of magnetic field strengths for the same plasma flows, and allowed the still-uniform initial field to project out of the computational plane, but in a direction still parallel to the shear plane, over a full range of angles, $`\theta =\mathrm{arccos}(\stackrel{}{B}\widehat{x}/|\stackrel{}{B}|)`$. With planar symmetry in $`2\frac{1}{2}`$-dimensions, the $`B_z`$ component out of the plane interacts only through its contribution to the total pressure. So, Paper II emphasized for the initial configurations studied there that the $`2\frac{1}{2}`$-dimensional nonlinear evolution of the MHD KH instability was entirely determined by the Alfvénic Mach number associated with the field projected onto the computational plane; namely, $`M_{Ax}=(U_0/c_A)\mathrm{cos}\theta `$. Those simulations also showed how the magnetic field wrapped into the Cat’s Eye vortex would disrupt that structure if the magnetic tension generated by field stretching became comparable to the centripetal force within the vortex. That result is equivalent to expecting disruption when the Alfvénic Mach number on the perimeter of the vortex becomes reduced to values near unity. This makes sense, because then the Maxwell stresses in the vortex are comparable to the Reynolds stresses. 
Since magnetic fields are “expelled” from a vortex by reconnection on the timescale of one turnover time (Weiss 1966), and in two dimensions the perimeter field is amplified roughly by an order of magnitude during a turnover, Paper II concluded that vortex disruption should occur roughly when the initial $`M_{Ax}\lesssim 20`$. For our sonic Mach 1 flows, this corresponds to plasma $`\beta _0\lesssim 480`$, a field strength often regarded as too weak to be dynamically important. After the Cat’s Eye was disrupted in those cases, the flow settled into an almost laminar form containing a broadened, but hotter, shear layer that was stable to perturbations smaller than the length of the computational box. For weaker initial fields the role of the magnetic field became primarily one of enhanced dissipation through magnetic reconnection, although the transition between these last two behaviors is not sharp. Thus, for uniform initial magnetic fields one can define four distinct regimes describing the role of the magnetic field in the evolution of the $`2\frac{1}{2}`$-dimensional KH instability. In descending field-strength order they are: 1) “linearly stabilized”, 2) “nonlinearly stabilized”, 3) “disruptive” of the vortical structures formed by the HD instability, and 4) “dissipative”, in the sense that the field enhances dissipation over the HD rate. To extend our understanding to a broader range of field configurations, we now study $`2\frac{1}{2}`$-dimensional cases in which the magnetic field rotates from the $`z`$ direction to the $`x`$ direction within the shear layer. Such field rotations are commonly called “magnetic field shear” (e.g., Biskamp 1994). Magnetic shear is likely to occur in a number of KH unstable astrophysical environments. One example is the propagation of astrophysical jets. 
The current consensus holds that jets in both Young Stellar Objects and Active Galactic Nuclei are generated via some form of magneto-centrifugal mechanism (Ouyed & Pudritz 1997), which produces a jet beam with a helical topology. The environment, however, will likely have fields that differ substantially from the tight helix in the beam. Thus the KH unstable shear layer at the beam-environment interface will also be a region where the field rotates. Consistent with the behaviors identified by Dahlburg et al. (1997) for related field structures, we find that this modification strongly alters the simple patterns we found for uniform fields and adds new insights into the roles played by the magnetic fields. In our new simulations the magnetic shear layer is fully contained within the velocity shear layer; that is, the vortex sheet and the current sheet have the same width. Our current sheet is oblique to the flow field, however, so that on one side of the velocity shear magnetic tension is absent in $`2\frac{1}{2}`$-dimensions. In this situation the shear layer always remains KH unstable. This is true even for field strengths that would have stabilized the flow had the field been uniform. At the beginning of each simulation the interactions between the flow and the magnetic field resemble those described for uniform fields in the half of the space where the magnetic field lies within the flow plane. In the other half plane, however, the initial magnetic field interacts only through its pressure, and so has a negligible influence. This situation breaks the symmetry inherent in our earlier simulations. The resulting nonlinear evolution of the KH instability can be much more complex, and this has considerable influence on the dissipation rate and on the mixing that takes place between fluids initially on opposite sides of the shear layer. Those issues will be the focus of this paper. The plan of the paper is as follows. 
In section II we summarize the numerical methods and initial conditions used in the simulations. In section III we compare and analyze the evolutionary results. Section IV contains a summary and conclusions. ## 2. Numerical methods and Initial Conditions Motion of a conducting, compressible fluid carrying a magnetic field must satisfy the MHD equations, consisting of Maxwell’s equations and the equations of gas dynamics, extended to include the influences of the Maxwell stresses. In the MHD limit the displacement current and the separation between ions and electrons are neglected. The ideal, compressible MHD equations, where the effects of viscosity, electrical resistivity and thermal conductivity are neglected, can be written in conservative form as $$\frac{\partial \rho }{\partial t}+\nabla \cdot \left(\rho \stackrel{}{v}\right)=0,$$ (1) $$\frac{\partial (\rho \stackrel{}{v})}{\partial t}+\partial _j\left(\rho \stackrel{}{v}v_j-\stackrel{}{B}B_j\right)+\nabla \left(p+\frac{1}{2}B^2\right)=0,$$ (2) $$\frac{\partial E}{\partial t}+\nabla \cdot \left[\left(E+p+\frac{1}{2}B^2\right)\stackrel{}{v}-\left(\stackrel{}{v}\cdot \stackrel{}{B}\right)\stackrel{}{B}\right]=0,$$ (3) $$\frac{\partial \stackrel{}{B}}{\partial t}+\partial _j\left(\stackrel{}{B}v_j-\stackrel{}{v}B_j\right)=0,$$ (4) where the gas pressure is given by $$p=\left(\gamma -1\right)\left(E-\frac{1}{2}\rho v^2-\frac{1}{2}B^2\right).$$ (5) The units are such that the magnetic pressure is $`p_B=\frac{1}{2}B^2`$ and the Alfvén speed is simply $`c_A=B/\sqrt{\rho }`$. In our study these equations were solved with a multi-dimensional MHD TVD code using Strang-type directional splitting (Ryu & Jones 1995; Ryu et al. 1995). It is based on an explicit, second-order Eulerian finite-difference scheme called the Total Variation Diminishing (TVD) scheme, which is a second-order-accurate extension of the Roe-type upwind scheme. This version of the code contains an FFT-based routine that uses “flux cleaning” to maintain the $`\nabla \cdot \stackrel{}{B}=0`$ condition at each time step within machine accuracy. 
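The pressure relation of Eq. (5) is also what one needs to recover primitive quantities from the conserved variables in post-processing. A minimal sketch (Python is our illustration language here, not the simulation code; the function name and array shapes are our own assumptions):

```python
import numpy as np

def primitives(rho, mom, B, E, gamma=5.0/3.0):
    """Recover gas pressure, sound speed and Alfven speed from the
    conserved variables (rho, rho*v, B, E) of Eqs. (1)-(5), in units
    where p_B = B^2/2 and c_A = B/sqrt(rho)."""
    v2 = np.sum(mom**2, axis=0) / rho**2   # |v|^2 from the momentum density
    B2 = np.sum(B**2, axis=0)              # |B|^2
    p = (gamma - 1.0) * (E - 0.5 * rho * v2 - 0.5 * B2)   # Eq. (5)
    c_s = np.sqrt(gamma * p / rho)         # sound speed
    c_A = np.sqrt(B2 / rho)                # Alfven speed
    return p, c_s, c_A
```

For the initial state adopted below ($`\rho =1`$, $`p=0.6`$, $`\gamma =5/3`$) this returns a unit sound speed, consistent with the normalization quoted in the text.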
We simulated the physical variables ($`\rho `$, $`\rho v_x`$, $`\rho v_y`$, $`\rho v_z`$, $`B_x`$, $`B_y`$, $`B_z`$, $`E`$) in the $`xy`$ plane on a computational domain $`x=[0,L_x]`$ and $`y=[0,L_y]`$ with $`L_x=L_y=L=1`$. We used periodic conditions on the $`x`$ boundaries and reflecting conditions on the $`y`$ boundaries. The mass, kinetic energy and Poynting fluxes in the $`y`$ direction all vanish at $`y=0`$ and $`y=L`$. The total magnetic flux through the box remains constant throughout the simulations. Such boundary conditions were used initially by Miura (1984) and subsequently in Papers I and II. As in the two previous papers, we simulated an initial background flow of uniform density, $`\rho =1`$, gas pressure, $`p=0.6`$, and adiabatic index, $`\gamma =5/3`$, so that the initial sound speed is $`c_s=\sqrt{\gamma p/\rho }=1`$. We considered a hyperbolic tangent initial velocity profile that establishes nearly uniform flow except for a thin transition layer in the mid-plane of the simulations. Explicitly, we set $$v_0(y)\widehat{x}=-\frac{U_0}{2}\mathrm{tanh}\left(\frac{y-L/2}{a}\right)\widehat{x}.$$ (6) Here $`U_0`$ is the velocity difference between the two layers and was set to unity, so that the sonic Mach number of the transition is also unity; i.e., $`M_s=U_0/c_s=1`$. This profile describes fluid flowing to the right in the lower part ($`0\le y<L/2`$) of the two layers and to the left in the upper part ($`L/2<y\le L`$). To reduce unwanted interactions with the $`y`$ boundaries, and to assure an initially smooth transition on a discrete grid, $`a`$ should be chosen to satisfy $`h<a<<L`$, where $`h`$ is the size of the computational grid cells. Here we considered $`a=L/25`$ and $`h\simeq (1/10)a`$. These choices were evaluated in Paper I. 
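The shear profile of Eq. (6) and the grid constraint $`h<a<<L`$ can be set up as follows (a Python sketch with our own variable names; the sign convention gives rightward flow in the lower half, as described in the text):

```python
import numpy as np

L, a, U0 = 1.0, 1.0 / 25.0, 1.0   # box size, shear half-width, velocity jump
N = 256                           # cells per side, as in the simulations
h = L / N                         # grid spacing; satisfies h < a << L
y = (np.arange(N) + 0.5) * h      # cell-center y coordinates

# Eq. (6): hyperbolic-tangent shear layer centered on the mid-plane,
# rightward flow below y = L/2 and leftward flow above it.
vx = -0.5 * U0 * np.tanh((y - L / 2) / a)
```

With these parameters the transition layer spans roughly ten cells ($`a/h\simeq 10`$), the resolution choice evaluated in Paper I.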
We began with a magnetic field of uniform strength, but one that rotates (i.e., is sheared) smoothly within the transition layer ($`L/2-a<y<L/2+a`$) from the $`z`$ direction on the bottom of the grid to the $`x`$ direction on the top; namely, $$B_x=B_0,B_z=0\quad \mathrm{for}\quad \frac{L}{2}+a<y<L,$$ (7) $$B_x=B_0\mathrm{sin}\left(\frac{\pi }{2}\frac{y-L/2+a}{2a}\right)\quad \mathrm{for}\quad \frac{L}{2}-a\le y\le \frac{L}{2}+a,$$ (8) $$B_z=B_0\mathrm{cos}\left(\frac{\pi }{2}\frac{y-L/2+a}{2a}\right)\quad \mathrm{for}\quad \frac{L}{2}-a\le y\le \frac{L}{2}+a,$$ (9) $$B_z=B_0,B_x=0\quad \mathrm{for}\quad 0<y<\frac{L}{2}-a.$$ (10) This construction keeps $`|B|`$ constant and gives nominally equal widths to the shear layer and the current sheet. The magnetic field shear is, however, sharply bounded by $`L/2-a\le y\le L/2+a`$, so that the vortex sheet is effectively slightly broader. As mentioned in the introduction, the analysis of Dahlburg et al. (1997) leads to the expectation that perturbations of this initial set-up will be unstable to KH instability modes, even when $`M_A<1`$. The other relevant MHD parameters are defined as $`\beta _0=p/p_B`$, which measures the relative gas and magnetic pressures, and the Alfvénic Mach number of the shear transition, $`M_A=U_0/c_A`$. These are related through $`M_A^2=(\gamma /2)M_s^2\beta _0`$. We have considered a wide range of these parameters, as listed in Table 1. A random perturbation of small amplitude was added to the velocity to initiate the instability. All the simulations reported here were carried out on a grid with $`256\times 256`$ cells. Papers I and II included convergence studies for similar flows using the same code. There we found that, with the resolution used in the present study, all major flow properties formed in equivalent, higher-resolution simulations were recovered, and that such global measures as energy dissipation and magnetic energy evolution were at least qualitatively similar. 
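The rotated field of Eqs. (7)-(10) and the parameter relation $`M_A^2=(\gamma /2)M_s^2\beta _0`$ can be verified with a short sketch (Python, with helper names of our own choosing; a sanity check, not the production setup):

```python
import numpy as np

def sheared_field(y, B0, L=1.0, a=1.0 / 25.0):
    """Eqs. (7)-(10): a uniform-strength field rotating from z-hat below
    the layer to x-hat above it, across L/2 - a <= y <= L/2 + a."""
    phase = np.clip((y - L / 2 + a) / (2.0 * a), 0.0, 1.0) * (np.pi / 2.0)
    return B0 * np.sin(phase), B0 * np.cos(phase)   # (B_x, B_z)

def beta0_from_MA(M_A, M_s=1.0, gamma=5.0 / 3.0):
    """Invert M_A^2 = (gamma/2) M_s^2 beta_0 for beta_0."""
    return 2.0 * M_A**2 / (gamma * M_s**2)
```

Note that `beta0_from_MA(20.0)` returns 480, reproducing the correspondence between $`M_{Ax}\lesssim 20`$ and $`\beta _0\lesssim 480`$ quoted in the introduction.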
The key was having sufficient resolution that there was an essentially non-dissipative range of flow scales inside the Cat’s Eye vortex. That is, it was necessary that the effective kinetic and magnetic Reynolds numbers be large for the Cat’s Eye. The effective Reynolds numbers scale as the square of the number of cells for this code (Ryu et al. 1995), so this is readily achieved in the current simulations. An additional comment on the numerical methods is appropriate. The code used for these studies nominally treats the flows as “ideal”, or non-dissipative. Dissipation does take place, of course, through numerical truncation and diffusion at the grid-cell level; i.e., primarily within the smallest resolved structures. The consistency of this approach with non-ideal HD flows of high Reynolds number has been convincingly demonstrated using turbulence simulations for conservative methods analogous to those employed here (e.g., Porter & Woodward 1994; Sytine et al. 1999). While that comparison has not yet been accomplished for MHD flows, there are a number of results that support consistency for ideal MHD codes when the dissipation scales are small, as well. These include the apparent “convergence” in general flow and magnetic field patterns and global energy evolution seen in our own simulations mentioned above, as well as in MHD turbulence studies (e.g., Mac Low et al. 1998; Stone et al. 1998). In addition, others among our quasi-ideal MHD simulations develop structures that behave like those generally ascribed to resistive reconnection, such as unstable current-sheet tearing modes (e.g., Miniati et al. 1999) when the effective Lundquist number is large, and stable Parker-Sweet current sheets when that parameter is not large enough to be tearing-mode unstable (Gregori et al. 1999). ## 3. Results ### 3.1. 
Structure Evolution Table 1 includes a descriptor in each case to indicate, for reference, the evolution expected for equivalent simulations using a uniform magnetic field orientation instead of the sheared field. For example, the strongest field case, with $`M_A=2`$, would be marginally stable if the field were uniform. Based on Papers I and II, the uniform-field $`M_A=2.5`$ and $`3.3`$ cases would be “nonlinearly stable”, the $`M_A=5`$, $`10`$ and $`14.3`$ cases would be “disruptive”, while the two weakest field cases ($`M_A=50`$, $`142.9`$) would be “dissipative”. Recall that the disruptive cases were those whose initially weak fields were amplified through stretching around the vortex to the point that the vortex was destroyed. As previously stated, the MHD KH evolution beginning from a sheared field in the velocity shear layer can be very different from the comparable uniform magnetic field case. Because the magnetic field cannot stabilize flows in those regions where it is perpendicular to the flow plane, the linear KH instability can always develop there, independent of the strength of the field. For the flows investigated here the HD linear growth time is $`\tau _g\simeq 1.5\lambda `$ in simulation units, where $`\lambda `$ is the wavelength of the perturbation. On a slightly longer timescale one or more “large” vortices will generally form. But if magnetic tension on the aligned-field side of the shear layer is dynamically significant, the vortex or vortices will not be symmetric across the shear layer, and the subsequent nonlinear evolution of the full flow field can become complex through interactions between the two regions. In this section we briefly outline the behavior patterns we observe in our sheared magnetic field simulations as a function of field strength, beginning with the very weak field cases. 
In our simulations, cases with extremely weak magnetic fields ($`M_A=142.9`$ and $`M_A=50`$) produce a relatively symmetric, stable Cat’s Eye vortex essentially the same as in HD flows. As we found in Paper II, it spins indefinitely on the timescales considered here. Similarly to the uniform field cases considered in Paper II, the magnetic flux initially in the region where the field is aligned with the flow ($`L/2+a<y<L`$) is stretched around the vortex, increasing the magnetic energy. However, in these cases the magnetic field amplification by stretching during one vortex turnover is not adequate to reduce the local Alfvénic Mach number on the perimeter of the vortex to values near unity. Recalling that the vortex turnover time is also the timescale to generate magnetic topologies unstable to the tearing-mode instability and driven reconnection (see, e.g., Papers I and II), the magnetic field in these cases is not able to disrupt the quasi-HD nature of the flow. As for the analogous uniform-field cases we studied before, the inclusion of a very weak magnetic field mostly enhances kinetic energy dissipation through the irreversible losses that accompany magnetic reconnection. Little new understanding is added by those cases. In cases with field values in the “disruptive” regime for uniform fields ($`M_A=14.3`$, $`M_A=10`$ and $`M_A=5`$) the magnetic field is still too weak to inhibit initial formation of vortices in either half of the flow field, but sufficiently strong to destroy those vortices after the external field pulled into the vortex perimeter becomes stretched by vortex rotation. In the sheared magnetic field flows the Maxwell stresses on the Cat’s Eye are asymmetric, since there is no flux pulled into the vortex from one side. As one considers stronger fields this dynamical asymmetry becomes more obvious. 
In fact, in the sheared field $`M_A=5`$ case the Cat’s Eye never forms fully, whereas it did in the uniform field simulation for the same field strength (Paper I). Evolution of the $`M_A=14.3`$ case is illustrative of the weaker sheared field evolution and is shown in Fig. 1. A single Cat’s Eye develops by the first time shown in the figure ($`t=7`$), but is clearly in the process of disruption by the third time ($`t=10`$). This early evolution closely parallels the results for the $`M_A=5`$ uniform field case discussed in Paper I. But already by $`t=10`$ the symmetry of the flow in Fig. 1 is obviously broken. The subsequent evolution qualitatively resembles the later “weak field” evolution shown in Paper I, but with a very interesting twist. The analogous flow in Paper I developed a “secondary” vortex in the mid-plane of the flow. This structure was created by the sudden release of magnetic tension during reconnection associated with disruption of the Cat’s Eye. The magnetic flux in the interior of the secondary vortex was isolated from that of the exterior flow, creating a “flux island”. However, the secondary vortex was “spun down” by magnetic tension around its perimeter, since the vortex was embedded in magnetic flux crossing the computational box. Thus, there was a net torque provided by the magnetic tension around the vortex. Magnetic flux in the interior of the secondary vortex was annihilated through reconnection as the secondary vortex was dissipated. In the uniform field simulation of Paper I this stage was part of a relaxation of the entire flow to a broadened, nearly laminar shear layer, in which the magnetic and flow fields were nearly perfectly aligned; that is, they “self-organized”. Paper I referred to this as a “quasi-steady relaxed” state. In the present cases there are multiple “secondary vortices” generated at various locations within the grid as the primary Cat’s Eye vortex is disrupted, as one might expect from a random initial perturbation. 
Each secondary vortex also holds a magnetic island. Secondary vortices that become entrained within the part of the flow where the shear is greatest contain the greatest enstrophy, so they rotate very obviously. Here, the enstrophy is defined as $`\frac{1}{2}\omega ^2`$, where $`\omega =\nabla \times \stackrel{}{v}`$ is the vorticity. The secondary vortices tend to merge, as one expects in two-dimensional flow, since they all have the same sign of vorticity. But in the $`M_A=14.3`$ case shown in Fig. 1 a vortex pair still remains at the end of the simulation. Once again, these structures hold closed magnetic field loops, or flux islands. During vortex merging the magnetic flux islands also merge, through reconnection. However, unlike the uniform field case discussed in Paper I, these structures and the flux within them then tend to be stable to the end of the simulation. The reason for this difference is that in the present case the secondary structures that survive are all in the lower part of the flow, where there is almost no magnetic flux crossing the box. So their external interactions are almost entirely those of quasi-ideal HD. By contrast, the upper portions of these flows, where $`\stackrel{}{B}`$ was initially in the flow plane, fairly quickly become smooth and laminar. Almost all of the open field lines within the $`xy`$ plane are then concentrated into this region and well aligned with the velocity field. Thus, this portion of the flow resembles the quasi-steady relaxed state of Paper I. Also, as for the flow studied in Paper I, the aligned magnetic flux becomes concentrated near the velocity shear layer. In the present case, however, the shear layer is displaced upwards to some degree; that is, towards the region where the magnetic field was initially aligned with the computational plane. 
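The enstrophy diagnostic used to identify the rotating secondary vortices can be evaluated from a velocity snapshot with centered differences. A rough sketch (the grid handling and function name are our assumptions, with periodic $`x`$ and wall-bounded $`y`$ as in Section 2):

```python
import numpy as np

def enstrophy_density(vx, vy, h):
    """Out-of-plane vorticity w = dvy/dx - dvx/dy and enstrophy density
    (1/2) w^2 on a uniform grid with spacing h. Arrays are indexed
    [iy, ix], periodic in x; one-sided differences at the y walls."""
    dvy_dx = (np.roll(vy, -1, axis=1) - np.roll(vy, 1, axis=1)) / (2.0 * h)
    dvx_dy = np.gradient(vx, h, axis=0)
    w = dvy_dx - dvx_dy
    return 0.5 * w**2
```

For a uniform shear $`v_x=Sy`$ this returns the constant value $`S^2/2`$; in a simulation snapshot its maxima trace the vortex cores.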
There is just a little compression in these flows, so the upward displacement of the shear layer represents a net exchange of momentum between the two layers as a consequence of vortex disruption, rather than a squeezing of the fluid in one half of the plane. On the whole we find that the disruptive behavior of a weak field acts sooner in the sheared magnetic field cases than it did with a uniform field of the same strength. This point was already mentioned with regard to comparing flows with $`M_A=5`$, and seems generally to be a direct consequence of the symmetry breaking in the sheared field cases. Symmetry breaking causes deformations in the Cat’s Eye, which, in turn, enhance the degree of field-line stretching around its perimeter. In the $`M_A=5`$ sheared field case, for example, a secondary vortex rolls up at the end of the Cat’s Eye during its formation, pulling magnetic field with it. That field quickly becomes disruptive, the main Cat’s Eye breaks up, and the central shear layer relaxes into a quasi-steady flow. For this particular case, early disruption of the vortical flow means a significant reduction in the kinetic energy dissipation within the flow compared to the equivalent uniform field case. On the other hand, the asymmetry should be responsible for the net momentum exchange mentioned in the previous paragraph. The consequences of beginning with a sheared, strong field are somewhat different, however. By strong we mean that a uniform field in the flow plane would lead either to linear stability or to nonlinear stability of the shear layer, as described earlier. In the sheared field cases ($`M_A=3.3`$, $`M_A=2.5`$ and $`M_A=2`$), that is still the situation on the side of the flow where the magnetic field initially lies within the flow plane. On the other side, however, the magnetic field initially has a negligible influence, only adding an isotropic pressure. Here, just as for an HD flow, corrugations of the shear layer grow because of the Bernoulli effect. 
Meanwhile, aligned magnetic field across the shear layer can be stretched out by the growing corrugation, stiffening and, if it is strong, maintaining a smooth flow where the field is in the plane. Shear becomes concentrated along the resulting inflection in the flow boundary, causing a vortex to be shed into the quasi-HD portion of the flow. This secondary vortex formation is the same process that was mentioned previously for the somewhat weaker field case with $`M_A=5`$. It is illustrated clearly for three strong field cases in Fig. 2. Once the secondary vortex is shed in these cases, the shear layer stabilizes, and the vortex becomes embedded in the quasi-HD portion of the flow. The vortex contains a magnetic island similar to those discussed earlier. Once again, this structure also seems to be stable and long lived in our simulations. ### 3.2. Energy Evolution: Dissipation In Papers I and II we pointed out the role played by a magnetic field in determining the amount of kinetic energy dissipated during evolution of a KH unstable shear layer and also examined the evolution of magnetic energy during flow development. Here we revisit those issues briefly in order to see how the differing dynamical influence of the magnetic shear affects energy evolution within the flow. Recall that some amount of kinetic energy must be dissipated during the formation of Kelvin’s Cat’s Eye, since the kinetic energy of the latter flow pattern is less than in the initial flow. For the initial conditions used in our simulations the kinetic energy reduction expected in a two-dimensional HD simulation is about 7%, and since the initial kinetic energy is about 10% of the initial thermal energy, the analogous increase in thermal energy is roughly 0.7% of the total. This transition is irreversible, of course. Generally, we find the dissipation to be enhanced when a magnetic field is added. 
The greatest energy dissipation comes from disruption of the Cat’s Eye, since portions of the flow become chaotic, and those chaotic motions are quickly dissipated. When the magnetic field is strong enough to prevent initial vortex roll-up (the nonlinearly stable cases), viscous dissipation accompanies the flow, and it generally exceeds the dissipation associated with the HD Cat’s Eye flow. Even a very weak magnetic field that has no obvious direct dynamical consequences enhances energy dissipation, since driven magnetic reconnection and magnetic energy annihilation in the perimeter of the Cat’s Eye are also irreversible. Since the dynamical influence of the sheared magnetic field is different from that of an equivalent uniform field aligned in the flow plane, we should also expect at least some quantitative differences from the results reported in Papers I and II. Figs. 3, 4 and 5 allow us to explore this. Figs. 3 and 4 show the time evolution of the thermal energy ($`E_t`$), kinetic energy ($`E_k`$), and magnetic energy ($`E_b`$) for the sheared field simulations. Each quantity is normalized by its initial value. The figures also show the minimum value of the plasma $`\beta `$ parameter in the computation as a function of time. The simulations are grouped into those with fields weak enough to allow fully formed Cat’s Eye structures (Fig. 3) and those with fields strong enough to prevent that formation, as discussed above (Fig. 4). Energy dissipation associated with Cat’s Eye formation is clearly evident between $`t\simeq 6`$ and $`t\simeq 8`$ in Fig. 3 from the evolution of the thermal and kinetic energies. The changes in those two quantities are about the values just cited for Cat’s Eye generation. The subsequent energy evolution varies dramatically with $`M_A`$, however. For the very weak field cases, with $`M_A=142.9`$ and $`50`$, the subsequent dissipation is small, but larger than for a purely HD flow. 
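The energy histories just described are box-integrated quantities of the kind sketched below (Python; the function name and uniform-grid assumption are ours). Each curve would then be normalized by its initial value, as in Figs. 3 and 4:

```python
import numpy as np

def energy_budget(rho, v, B, E, h):
    """Box-integrated thermal, kinetic and magnetic energies (E_t, E_k,
    E_b) from the conserved fields; v and B have shape (3, Ny, Nx)."""
    dV = h * h                                    # uniform cell area in 2D
    Ek = 0.5 * np.sum(rho * np.sum(v**2, axis=0)) * dV
    Eb = 0.5 * np.sum(np.sum(B**2, axis=0)) * dV
    Et = np.sum(E) * dV - Ek - Eb                 # thermal = total - Ek - Eb
    return Et, Ek, Eb
```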
The dissipation rate is greater for the $`M_A=50`$ case than for the $`M_A=142.9`$ case, since there is more magnetic energy being generated and dissipated, as discussed in Paper II. For the cases with $`M_A=14.3`$ and $`M_A=10`$, disruption of the Cat’s Eye between $`t\simeq 10`$ and $`t\simeq 20`$ leads to much greater subsequent kinetic energy dissipation. In those two cases the kinetic energy reduction is also significantly greater than in the cases with comparable field strength in our uniform field simulations. For $`M_A=14.3`$, for example, the kinetic energy at the end of the analogous uniform field simulation in Paper II was reduced by about 50%, whereas here the final kinetic energy is less than 25% of the initial value. That added dissipation comes from the more extensive and complex flow structures entraining magnetic flux formed in the sheared magnetic field evolution. Fig. 4 shows that the energy dissipation pattern is quite different for the flows with stronger magnetic fields. There is a period of moderately large dissipation as the initial corrugations of the shear layer grow to saturation and secondary vortices are shed and dissipated. Afterwards, however, the dissipation rate is only modestly greater than for quasi-HD flows such as the $`M_A=142.9`$ case, since the flow pattern in these cases is laminar in the aligned field region and uniform, except for isolated vortices, in the transverse field region (see Fig. 2). In the $`M_A=3.3`$ case the interface between these two regions is less regular than in the other three cases, so that flow remains a little more dissipative to the end. The dissipation dependence on $`M_A`$ in all our sheared field simulations is summarized in Fig. 5, which shows the amount of kinetic energy left at the end of the simulations ($`t=40`$). Figs. 3 and 4 also show the evolution of magnetic energy during the simulations. There are evident differences between the two groupings. 
In particular, there is a relatively small fractional increase in magnetic energy for the stronger field cases. That makes sense, of course, because in those cases the magnetic field is never stretched around a vortex. The peak enhancement of the magnetic energy in those cases ranged from less than 6% for the $`M_A=2`$ case to $`\sim 25\%`$ for the $`M_A=3.3`$ and $`M_A=5`$ cases. Those peaks correspond to periods of maximum stretching by growth of the initial corrugation at early times, $`t\simeq 7`$–$`10`$. In the $`M_A=5`$ case, the peak corresponds to an extended interaction between the magnetic field in the shear layer and the secondary vortex, where the magnetic field is wrapped up. As the flows relax in these cases, the magnetic energy returns to values close to, but slightly below, the initial values. Since the magnetic flux through the grid is conserved, this lower final magnetic energy must come from the fact that the flux in $`B_z`$, initially entirely in the lower half of the box, has now encroached into the upper half (see Figs. 7, 9), while the flux in $`B_x`$ has expanded from the top half of the box into the lower half. By contrast, the peak magnetic energy enhancement in the weaker field simulations ranges from factors of $`\sim 3.5`$ for the $`M_A=10`$ case to $`\sim 7.8`$ for the $`M_A=14.3`$ case. The peak values here correspond to formation of the Cat’s Eye near $`t=8`$ and subsequent episodes of magnetic field stretching. (See Fig. 1 for the $`M_A=14.3`$ case.) In all of these cases the final magnetic energy is at least as large as the initial value, in contrast to the situation described for the stronger field cases. The reason for the excess residual magnetic energy is the formation of coherent “flux tube”-like structures similar to those described in Paper I. In the $`M_A=14.3`$ case the final magnetic energy is about twice its initial value. For the others in this group the differences are much smaller, however. ### 3.3. 
Mixing Across the Boundary Layer We have emphasized how the addition of magnetic shear across the unstable flow boundary adds complexity to the nonlinear flow that follows from the instability. So far we have seen how that increases the dissipation of kinetic energy from the flow over the HD solution and over the equivalent MHD solution with a uniform magnetic field. This added complexity may also influence mixing between fluids initially in the top and bottom layers. Figs. 6, 7 and 8 give us some insight into that process. They illustrate how magnetic flux perpendicular to the flow plane has been transported from the lower part of the computational domain into the top part. Recall that initially $`B_z=0`$ for $`y>L/2+a=0.54`$ (Eqs. 7-10), so that $`\int _{top}B_z𝑑x𝑑y/\int _{bottom}B_z𝑑x𝑑y=0.03`$ in Fig. 6. Note, also, that the total $`B_z`$ flux through the plane is constant. Since this component is very well frozen into the fluid, and the fluid is almost incompressible during these simulations, the evolution of this ratio in Fig. 6 offers a simple way to examine approximately how fluid from the lower part of the flow region penetrates into the upper flow. In the strong field cases, generally less than $`15\%`$ of the perpendicular magnetic flux penetrates the upper flow. That penetration is mostly due to the spreading of the shear layer, which is illustrated for these cases in Fig. 9 by the form of $`\int v_x(x,y)𝑑x/L`$ at the end of the simulations. Fig. 7 also helps clarify that point by showing the maximum upward penetration of the $`B_z`$ component during each simulation. There is a clear separation in this quantity between the strong field cases and the weak field cases. In the strong field cases the maximum penetration of $`B_z`$ is generally around $`y=0.7`$, which corresponds approximately to the location of the top of the shear layer at the end of these simulations, as shown in Fig. 9. By contrast, Fig. 
6 shows that the weaker field cases transport between $`1/4`$ and $`1/3`$ of the $`B_z`$ flux into the top part of the computational domain. In these cases the transport happens primarily during the period when the instability is in transition from linear to nonlinear evolution. That point is illustrated in more detail for the weak field, disruptive case with $`M_A=14.3`$ in Fig. 8. There we see at various times $`\int B_z(x,y)𝑑x/L`$. By $`t=20`$, $`B_z`$ has achieved almost its maximum penetration in $`y`$. Cat’s Eye disruption for this case takes place between roughly $`t=10`$ and $`t=20`$ (see Fig. 1). By the end of this simulation the perpendicular magnetic flux is roughly uniformly distributed below $`y\simeq 0.7`$. As Fig. 1 illustrates, that value of $`y`$ also corresponds to the lower bound for most of the $`B_x`$ flux in the relaxed flow. In fact, most of the $`B_x`$ flux is concentrated into a “flux tube” just above $`y=0.7`$, accounting for the fact that the final magnetic energy for this case is significantly greater than the initial magnetic energy. On the other hand, the center of the velocity shear is still close to the center of the grid, where the two secondary vortices are located (visible in Fig. 1). Thus, since the $`B_z`$ flux is effectively frozen into its initial fluid, we can see that the disruption of the Cat’s Eye has resulted in considerable exchange and mixing of fluids between the top and bottom portions of the computational domain. Through the Maxwell stresses generated in the magnetic field lying within the flow plane, that exchange has also included an exchange of momentum, since at the end the velocity distribution is still not far from symmetric. ## 4. Summary and Conclusion We have performed numerical simulations of the nonlinear evolution of the MHD KH instability in $`2\frac{1}{2}`$-dimensions. A compressible MHD code has been used. 
However, with an initially trans-sonic shear layer ($`M_s=1`$), the flows stay only slightly compressible ($`\delta \rho /\rho <<1`$), and compressibility plays only minor roles (see §3.1). We have considered initial configurations in which the magnetic field rotates across the velocity shear layer from being aligned with the flow on one side to being perpendicular to the flow plane on the other; that is, we have considered “sheared” magnetic fields. The velocity and magnetic shears have similar widths. In the top portion of the flow, where the magnetic field is aligned, magnetic tension can inhibit the instability or at least modify its evolution, as we and others described in earlier papers. In the bottom portion of the flow the magnetic field has negligible initial influence, at least in $`2\frac{1}{2}`$-dimensions. We have simulated flows with a wide range of magnetic field strengths. Our objective was to understand how this symmetry break would modify our earlier findings and hence to gain additional insights into the general properties of the MHD KH instability. When the magnetic field is extremely weak, the flows are virtually hydrodynamic in character except for added dissipation associated with magnetic reconnection on small scales, so the field geometry does not appear to be very important. We should emphasize once again, however, that one should be careful in choosing to label magnetic fields as “very weak”, because our current findings support our earlier conclusion that an initially weak field embedded in a complex flow can have crucial dynamical consequences. In our simulations of trans-sonic shear layers, we find that magnetic fields (whether sheared or not) can disrupt the HD nature of the flow even when the initial $`\beta _0=p/p_b`$ is several hundred. Actually, the more meaningful parameter is the Alfvénic Mach number, $`M_A`$, since that provides a comparison between the Maxwell and Reynolds stresses in a complex flow.
In the KH instability, particularly, the free energy derives directly from directed flow energy, not isotropic pressure. We find for all of our $`2\frac{1}{2}`$-dimensional simulations that flows beginning with roughly $`M_A\lesssim 20`$ will be strongly modified by the magnetic fields during the nonlinear flow evolution. These characteristic values are commonly seen as outside the ranges where MHD is the necessary language, but that is evidently not the case. For MHD flows initiated with $`M_A\lesssim 20`$, we find some interesting differences between the flows beginning with uniform and sheared magnetic fields. Magnetic tension will in those cases become significant through the stretching of field lines in the portion of the initial flow containing the aligned field. It may even inhibit development of the instability in that space. But, for sheared fields, flow on the side with the perpendicular field begins to evolve almost hydrodynamically, so the instability always evolves into a nonlinear regime, whereas for uniform fields, cases with $`M_A<2`$ are stable for the flow properties we are considering. Further, for sheared fields the resulting nonlinear structures are highly asymmetric and tend to become chaotic. That, in turn, tends to increase kinetic energy dissipation and mixing of plasma when compared to flows starting with uniform fields. In previously simulated flows beginning with uniform magnetic fields that became dynamically important, the end result of prolonged evolution was always a quasi-laminar flow with a broadened velocity shear layer. In the sheared magnetic field simulations presented here, however, that description is not complete. It does adequately characterize the eventual flows on the side of the shear layer containing the aligned magnetic field. Also, on the other side, the flows do become smooth to some degree and appear stable. However, there are typically embedded vortices within the flow region that does not have significant aligned magnetic flux.
Those vortices contain magnetic flux islands with long lifetimes. We expect they will eventually dissipate, but apparently only on timescales much longer than our simulations consider. In conclusion, we find that applying sheared magnetic fields across a KH unstable boundary of comparable width still leads to the same qualitative weak-field behaviors (dissipative and disruptive) that we identified in Paper II. For stronger fields the instability is not inhibited by the presence of the field, however. The sheared fields also enhance the complexity within the flow and generally increase the amount of kinetic energy dissipation and fluid mixing that comes from the instability. Boundary layers that include sheared flows and sheared magnetic fields are probably astrophysically common, so it is of some consequence to note how the vortex and current sheets behave together. The best-known examples are the earth’s magnetopause (e.g., Russell 1990) and coronal helmet streamers (e.g., Dahlburg & Karpen 1995), both of which may appear to be magnetically dominated. Other relevant situations might include “interstellar MHD bullets” (e.g., Jones et al. 1996; Miniati et al. 1999; Gregori et al. 1999), where initially weak magnetic fields become stretched around the cloud to form a magnetic shield. If there is a tangential discontinuity on the cloud surface, then it could resemble the situation we discuss. The magnetosheaths of comets may also include current and vortex sheets (e.g., Yi et al. 1996). Our present work supports and augments the findings of Keppens et al. (1999) that the instabilities developing on apparently magnetically dominated boundaries containing velocity and magnetic shear may still retain some of their hydrodynamical character. On the other hand, this work also reinforces our previous results showing that even a relatively weak field embedded in an unstable vortex sheet tends to smooth the nonlinear flow.
Detailed comparisons with astrophysical objects should be made with caution, however, since these numerical studies are rather idealized and not fully 3D. The extension to 3D will be discussed elsewhere (Jones et al. 1999; Ryu et al. 1999). The work by HJ and DR was supported in part by KOSEF through grant 981-0203-011-2. The work by TWJ was supported in part by the NSF through grants AST93-18959, INT95-11654, AST96-19438 and AST96-16964, by NASA grant NAG5-5055 and by the University of Minnesota Supercomputing Institute. AF was supported in part by NSF Grant AST-0978765 and the University of Rochester’s Laboratory for Laser Energetics. We thank the anonymous referee for clarifying comments.
# Exact results for the zeros of the partition function of the Potts model on finite lattices

## I introduction

The $`Q`$-state Potts model in two and three dimensions exhibits a rich variety of critical behavior and is very fertile ground for the analytical and numerical investigation of first- and second-order phase transitions. With the exception of the two-dimensional $`Q=2`$ Potts (Ising) model in the absence of an external magnetic field, exact solutions for arbitrary $`Q`$ are not known. However, some exact results have been established for the two-dimensional $`Q`$-state Potts model. For $`Q\le 4`$ there is a second-order phase transition, while for $`Q>4`$ the transition is first order. From the duality relation the critical temperature is known to be $`T_c=J/[k_B\mathrm{ln}(1+\sqrt{Q})]`$. For $`Q\le 4`$ the critical exponents are known, while for $`Q>4`$ the latent heat, spontaneous magnetization, and correlation length at $`T_c`$ are also known. The $`Q`$-state Potts model on a lattice $`G`$ in an external magnetic field $`H_q`$ is defined by the Hamiltonian $$\mathcal{H}_Q=J\underset{\langle i,j\rangle }{\sum }[1-\delta (\sigma _i,\sigma _j)]-H_q\underset{k}{\sum }\delta (\sigma _k,q),$$ (1) where $`J`$ is the coupling constant, $`\langle i,j\rangle `$ indicates a sum over nearest-neighbor pairs, $`\sigma _i=1,\mathrm{\ldots },Q`$, and $`q`$ is a fixed integer between 1 and $`Q`$. The partition function of the model can be written as $$Z_Q(x,y)=\underset{M=0}{\overset{N_s}{\sum }}\underset{E=0}{\overset{N_b}{\sum }}\mathrm{\Omega }_Q(M,E)x^My^E,$$ (2) where $`x=e^{\beta H_q}`$, $`y=a^{-1}=e^{-\beta J}`$, $`E`$ and $`M`$ are integers with $`0\le E\le N_b`$ and $`0\le M\le N_s`$, respectively, $`N_b`$ and $`N_s`$ are the number of bonds and the number of sites on the lattice $`G`$, and $`\mathrm{\Omega }_Q(M,E)`$ is the number of states with fixed $`E`$ and fixed $`M`$. From Eq. (2) it is clear that $`Z_Q(x,y)`$ is simply a polynomial in $`x`$ and $`y`$.
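For very small lattices, Eq. (2) can be checked directly by exhaustive enumeration. The sketch below (Python; an illustration only — the results in this paper are obtained with transfer-matrix methods, not by enumeration) builds $`Z_Q(x,y)`$ for an arbitrary bond list:

```python
from itertools import product

def potts_Z(Q, n_sites, bonds, q, x, y):
    """Brute-force Z_Q(x, y) of Eq. (2): sum x^M y^E over all Q^N_s spin
    configurations, where E counts unsatisfied bonds and M counts spins
    in the reference state q."""
    Z = 0.0
    for sigma in product(range(1, Q + 1), repeat=n_sites):
        E = sum(1 for i, j in bonds if sigma[i] != sigma[j])
        M = sum(1 for s in sigma if s == q)
        Z += x**M * y**E
    return Z

# 2x2 square lattice with free boundaries: N_s = 4 sites, N_b = 4 bonds
bonds_2x2 = [(0, 1), (2, 3), (0, 2), (1, 3)]
```

At $`y=1`$ every bond weight is unity, so the sum factorizes site by site and $`Z_Q(x,1)=(Q-1+x)^{N_s}`$ on any lattice — a convenient sanity check for the enumeration.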
By introducing the concept of the zeros of the partition function in the complex magnetic-field plane (Yang-Lee zeros), Yang and Lee proposed a mechanism for the occurrence of phase transitions in the thermodynamic limit and yielded a new insight into the unsolved problem of the two-dimensional Ising model in an arbitrary nonzero external magnetic field. It has been shown that the distribution of the zeros of a model determines its critical behavior. Lee and Yang also formulated the celebrated circle theorem, which states that the Yang-Lee zeros of the Ising ferromagnet lie on the unit circle. While we lack the circle theorem of Lee and Yang to tell us the location of the zeros for $`Q\ne 2`$, something can be said about their general behavior as a function of temperature. At zero temperature ($`y=0`$) from Eq. (2) the partition function is $`Z_Q(x,0)`$ $`=`$ $`{\displaystyle \underset{M}{\sum }}\mathrm{\Omega }_Q(M,0)x^M`$ (3) $`=`$ $`(Q-1)+x^{N_s}.`$ (4) Therefore, the Yang-Lee zeros at $`T=0`$ are given by $$x_k=(Q-1)^{1/N_s}\mathrm{exp}[i(2k-1)\pi /N_s],$$ (5) where $`k=1,\mathrm{\ldots },N_s`$. The zeros at $`T=0`$ are uniformly distributed on the circle with radius $`(Q-1)^{1/N_s}`$ which approaches unity in the thermodynamic limit, independent of $`Q`$. At infinite temperature ($`y=1`$) Eq. (2) becomes $$Z_Q(x,1)=\underset{M=0}{\overset{N_s}{\sum }}\underset{E=0}{\overset{N_b}{\sum }}\mathrm{\Omega }_Q(M,E)x^M.$$ (6) Because $`\underset{E}{\sum }\mathrm{\Omega }_Q(M,E)`$ is simply $`\left(\genfrac{}{}{0pt}{}{N_s}{M}\right)(Q-1)^{N_s-M}`$, at $`T=\infty `$, the partition function is given by $$Z_Q(x,1)=(Q-1+x)^{N_s},$$ (7) and its zeros are $`N_s`$-degenerate at $`x=1-Q`$, independent of lattice size.

## II Yang-Lee zeros in one dimension

For the one-dimensional Potts model in an external field the eigenvalues of the transfer matrix were found by Glumac and Uzelac.
The two dominant eigenvalues are $`\lambda _\pm =(A\pm iB)/(2a)`$, where $`A=a(1+x)+Q-2`$, $`B=i\sqrt{[a(1-x)+Q-2]^2+4(Q-1)x}`$, and $`\lambda _0=(a-1)/a`$ is $`(Q-2)`$-fold degenerate. We will assume that $`|\lambda _\pm |>\lambda _0`$ and verify this assumption a posteriori. The partition function is $$Z_N=\lambda _+^N+\lambda _{-}^N+(Q-2)\lambda _0^N,$$ (8) but, by the above assumption, for large $`N`$ we have $$Z_N\approx \lambda _+^N+\lambda _{-}^N.$$ (9) If we define $`A=2C\mathrm{cos}\psi `$ and $`B=2C\mathrm{sin}\psi `$, where $`C=\sqrt{(a-1)(Q+a-1)x}`$, then $`\lambda _\pm =(C/a)\mathrm{exp}(\pm i\psi )`$, and the partition function is $$Z_N=2\left(\frac{C}{a}\right)^N\mathrm{cos}N\psi .$$ (10) The zeros of the partition function are then given by $$\psi =\psi _k=\frac{2k+1}{2N}\pi ,k=0,1,2,\mathrm{\ldots },N-1.$$ (11) The location of these zeros in the complex $`x`$ plane is determined by the solution of $$A=2C\mathrm{cos}\psi _k.$$ (12) This equation is quadratic in $`z=\sqrt{x}`$ with roots $$z_k=\frac{1}{a}\left[\sqrt{(a-1)(Q+a-1)}\mathrm{cos}\psi _k\pm i\sqrt{(a-1)(Q+a-1)\mathrm{sin}^2\psi _k+Q-1}\right].$$ (13) It is easily verified that $$|z_k|^2=|x_k|=\frac{Q+a-2}{a},$$ (14) and we see that all the zeros lie on a circle in the complex $`x`$ plane. The argument of $`x_k`$ is given by $$\mathrm{cos}\frac{\theta _k}{2}=\sqrt{\frac{(a-1)(Q+a-1)}{a(Q+a-2)}}\mathrm{cos}\psi _k.$$ (15) Before we analyze these results we must verify that in fact $`|\lambda _\pm |>\lambda _0`$ in the region of these zeros. We find $$\frac{|\lambda _\pm |}{\lambda _0}=\sqrt{\frac{(Q+a-1)(Q+a-2)}{a(a-1)}}$$ (16) which is indeed greater than unity for $`Q>1`$. If we return to Eq. (13) we can discern a remarkably systematic behavior of the Yang-Lee zeros with $`Q`$. For $`1<Q<2`$, $`|x_k|<1`$ and the zeros lie inside the unit circle. As $`a\to \infty `$, the zeros approach the unit circle from within, as we observed in Eq.
(5), and for $`a=1`$, $`\mathrm{cos}(\theta _k/2)=0`$, so $`\theta _k=\pi `$ and all the zeros lie at $`1-Q`$, as predicted in Eq. (7). For $`Q>2`$, $`|x_k|>1`$ and the zeros lie outside the unit circle and approach the unit circle as $`a\to \infty `$ from outside. In the special case $`Q=2`$ we of course find that $`|x_k|=1`$, as proved by Lee and Yang. The edge singularity in the thermodynamic limit is given by $$\theta _0=2\mathrm{cos}^{-1}\sqrt{\frac{(a-1)(Q+a-1)}{a(Q+a-2)}}>0,$$ (17) from which we conclude that no transition occurs for any $`T>0`$. Finally, we can use these exact results to study finite size effects on the distribution of zeros. For finite $`N`$ we must find the zeros of the full partition function as given in Eq. (8). We find that finite size effects are very small in one dimension, but the largest deviation from the limiting locus occurs for $`\mathrm{arg}(x)=\pi `$.

## III Yang-Lee zeros in two and three dimensions

We have calculated exact integer values for $`\mathrm{\Omega }_Q(M,E)`$ on $`L\times L`$ square lattices for $`3\le L\le 8`$ and the $`3^3`$ simple-cubic lattice ($`Q=3`$) using the restricted microcanonical transfer matrix. For square lattices up to $`L=12`$ ($`Q=3`$) the zeros are calculated using a semi-canonical variation of the transfer matrix. Fig. 1a shows the Yang-Lee zeros for the two-dimensional three-state Potts model in the complex $`x`$ plane at the critical temperature $`y_c=1/(1+\sqrt{3})=0.366\mathrm{\ldots }`$ for $`L=4`$ and $`L=10`$ with cylindrical boundary conditions. Note that just as in the one-dimensional model, the zeros of the three-state Potts model lie close to, but not on, the unit circle. The zero farthest from the unit circle is in the neighborhood of $`\mathrm{arg}(x)=\pi `$, while the zero closest to the positive real axis lies closest to the unit circle.
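As a numerical cross-check of the one-dimensional locus derived in §II, the following sketch (Python/NumPy; an illustration, not code from the paper) evaluates the zeros of Eq. (13) and confirms the constant modulus of Eq. (14):

```python
import numpy as np

def yang_lee_zeros_1d(Q, a, N):
    """Large-N Yang-Lee zeros x_k = z_k^2 of the 1D Q-state Potts model,
    from Eq. (13) with psi_k = (2k+1)*pi/(2N)."""
    k = np.arange(N)
    psi = (2 * k + 1) * np.pi / (2 * N)
    re = np.sqrt((a - 1) * (Q + a - 1)) * np.cos(psi)
    im = np.sqrt((a - 1) * (Q + a - 1) * np.sin(psi) ** 2 + Q - 1)
    return ((re + 1j * im) / a) ** 2

Q, a = 3.0, 2.0
x = yang_lee_zeros_1d(Q, a, 50)
# Eq. (14): every zero has modulus (Q + a - 2)/a = 1.5 here,
# outside the unit circle since Q > 2
```

Repeating this with $`1<Q<2`$ places the zeros inside the unit circle, and $`Q=2`$ reproduces the Lee-Yang unit circle exactly.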
The behavior of the zeros with the size of the lattice also follows that of the one-dimensional model; the zeros for $`L=10`$ lie on a locus interior to that for $`L=4`$, and we find similar behavior for larger values of $`Q`$. In the thermodynamic limit the locus of zeros cuts the real $`x`$ axis at the point $`x=1`$, corresponding to $`H_q=0`$, as described by Yang and Lee. Fig. 1b shows the zeros for the two-dimensional three-state Potts model at several temperatures with cylindrical boundary conditions. At $`y=0.5y_c`$ the zeros are nearly uniformly distributed in argument and close to the unit circle. As the temperature is increased the edge singularity moves away from the real axis and the zeros detach from the unit circle. Finally, as $`y`$ approaches unity, the zeros converge on the point $`x=-2`$. As predicted by Eqs. (5) and (7), for periodic boundary conditions and self-dual boundary conditions we observe the same behaviors as those in Fig. 1 for cylindrical boundary conditions. The exact nature of the locus of zeros for the $`Q`$-state Potts model in two dimensions is unknown, with the exception of $`Q=2`$. It is clear that the Yang-Lee zeros of the two-dimensional $`Q`$-state Potts model do not lie on the unit circle for $`Q>2`$ for any value of $`y`$ and any finite value of $`L`$. Since the zero in the neighborhood of $`\mathrm{arg}(x)=\pi `$ is always the farthest from the unit circle, if this zero can be shown to approach $`|x(\pi )|=1`$ in the limit $`L\to \infty `$, all the zeros should lie on the unit circle in this limit. In Fig. 2 we show values for $`|x(\pi )|`$ extrapolated to infinite size using the Bulirsch-Stoer (BST) algorithm for $`3\le Q\le 8`$ at $`y=0.5y_c`$ and $`y=y_c`$. The error bars are twice the difference between the $`(n-1,1)`$ and $`(n-1,2)`$ approximants. From these results it is clear that while the locus of zeros lies close to the unit circle at $`y=y_c`$, it does not coincide with it except at the critical point $`x=1`$.
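The BST extrapolation itself is somewhat involved, but the underlying idea — accelerating a finite-size sequence toward its $`L\to \infty `$ limit — can be illustrated with the much simpler Aitken $`\mathrm{\Delta }^2`$ method. The sketch below uses synthetic data and is a stand-in, not the algorithm used for Fig. 2:

```python
import numpy as np

def aitken(seq):
    """Aitken delta-squared acceleration of a convergent sequence:
    x_inf ~ x_n - (x_{n+1} - x_n)^2 / (x_{n+2} - 2 x_{n+1} + x_n)."""
    x = np.asarray(seq, dtype=float)
    num = (x[1:-1] - x[:-2]) ** 2
    den = x[2:] - 2 * x[1:-1] + x[:-2]
    return x[:-2] - num / den

# synthetic check: x_n = 1 + 0.5 * 0.7^n converges to 1;
# Aitken recovers the limit exactly for geometric convergence
n = np.arange(8)
acc = aitken(1.0 + 0.5 * 0.7 ** n)
```

For power-law finite-size corrections the acceleration is no longer exact, which is why more elaborate schemes such as BST, with their built-in error estimates, are preferred in practice.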
While the numerical evidence presented here suggests that the locus of zeros for the three-state model is not the unit circle, and the exact result in one dimension provides an example where the locus varies continuously with $`Q`$, the nature of the locus remains an open and fascinating question. Fig. 3 shows the BST estimate of the modulus of the locus of zeros as a function of angle for the two-dimensional three-state Potts model at $`y=0.5y_c`$, $`y=y_c`$ and $`y=1.2y_c`$. To calculate the extrapolated values for each angle, $`\theta `$, we selected the zeros whose arguments were closest to $`\theta `$ for lattices of size $`3\le L\le 12`$, for $`\theta =0.0,0.5,\mathrm{\ldots },2.5`$ and $`\pi `$. The BST algorithm was then used to extrapolate these values for finite lattices to infinite size. The large variation in the size of the error bars is due to the fact that for a given $`\theta `$ there may be no zero close to $`\theta `$ for the smaller lattices. In Fig. 3 at $`y=y_c`$ the first four angles are shifted slightly from the original values ($`\theta =0.0`$, 0.5, 1.0 and 1.5) to be distinguished from the results at $`y=0.5y_c`$. For $`y=0.5y_c`$ and $`y=y_c`$ the first zeros definitely lie on the point $`r(\theta =0)=1`$ in the thermodynamic limit. However, for $`y=1.2y_c`$ the BST estimates of the modulus and angle of the first zero are 1.054(2) and 0.09(6). Therefore, at $`y=1.2y_c`$ the locus of zeros does not cut the positive real axis in the thermodynamic limit, consistent with the absence of a physical singularity for $`y>y_c`$. Fig. 4 shows the Yang-Lee zeros for the three-dimensional three-state Potts model at several temperatures on the $`3^3`$ simple-cubic lattice with periodic boundary conditions in the $`x`$ and $`y`$ directions and free boundary conditions in the $`z`$ direction. The behaviors in Fig. 4 for the three-dimensional model are the same as those in Fig. 1b for the two-dimensional model.
## IV Fisher zeros in an external field

Fisher emphasized that the partition function zeros in the complex temperature plane (Fisher zeros) are also very useful in understanding phase transitions. In particular, using the Fisher zeros both the ferromagnetic phase and the antiferromagnetic phase can be considered at the same time. From the exact solutions of the square lattice Ising model it has been shown that in the absence of an external magnetic field the Fisher zeros lie on two circles in the thermodynamic limit. Recently the locus of the Fisher zeros of the $`Q`$-state Potts model in the absence of an external magnetic field has been studied extensively. It has been shown that for self-dual boundary conditions near the ferromagnetic critical point $`y_c=1/(1+\sqrt{Q})`$ the Fisher zeros of the Potts model on a finite square lattice lie on the circle with center $`-1/(Q-1)`$ and radius $`\sqrt{Q}/(Q-1)`$ in the complex $`y`$ plane, while the antiferromagnetic circle of the Ising model completely disappears for $`Q>2`$. However, the properties of the Fisher zeros for $`Q>2`$ in an external field are not known. In the limit $`H_q\to -\infty `$ ($`x\to 0`$) the partition function of the $`Q`$-state Potts model becomes $$Z_Q=\underset{E=0}{\overset{N_b}{\sum }}\mathrm{\Omega }_Q(M=0,E)y^E,$$ (18) where $`\mathrm{\Omega }_Q(M=0,E)`$ is the same as the number of states $`\mathrm{\Omega }_{Q-1}(E)`$ of the ($`Q-1`$)-state Potts model in the absence of an external magnetic field. As $`x`$ decreases from 1 to 0, the $`Q`$-state Potts model is transformed into the ($`Q-1`$)-state Potts model in zero external field. For an external field $`H_q<0`$, one of the Potts states is suppressed relative to the others. The symmetry of the Hamiltonian is that of the $`(Q-1)`$-state Potts model in zero external field, so that we expect to see cross-over from the $`Q`$-state critical point to the $`(Q-1)`$-state critical point as $`|H_q|`$ is increased.
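The identity behind Eq. (18), that $`\mathrm{\Omega }_Q(M=0,E)`$ equals $`\mathrm{\Omega }_{Q-1}(E)`$ of the field-free model, can be verified by brute force on a tiny graph. A sketch (Python; the helper below is an illustration constructed for this check, not the paper's transfer-matrix code):

```python
from itertools import product
from collections import Counter

def omega_by_E(Q, n_sites, bonds, forbid_q=None):
    """Count spin configurations by their number of unsatisfied bonds E.
    If forbid_q is set, only configurations with M = 0 (no spin in the
    state forbid_q) are counted."""
    counts = Counter()
    for sigma in product(range(1, Q + 1), repeat=n_sites):
        if forbid_q is not None and forbid_q in sigma:
            continue
        E = sum(1 for i, j in bonds if sigma[i] != sigma[j])
        counts[E] += 1
    return counts

plaquette = [(0, 1), (1, 2), (2, 3), (3, 0)]  # 4-site cycle
# Omega_3(M=0, E) with state q=3 forbidden equals Omega_2(E):
```

The equality is immediate here because forbidding one of the $`Q`$ states is a bijection onto the configurations of the $`(Q-1)`$-state model, with the bond energies unchanged.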
We have studied the field dependence of the critical point for $`0\le x\le 1`$ through the Fisher zero closest to the real axis, $`y_1(x,L)`$, for the two-dimensional three-state Potts model. For a given applied field, $`y_1`$ approaches the critical point, $`y_1(L)\to y_c(x)`$, in the limit $`L\to \infty `$, and the thermal exponent $`y_t(L)`$ defined as $$y_t(L)=-\frac{\mathrm{ln}\{\mathrm{Im}[y_1(L+1)]/\mathrm{Im}[y_1(L)]\}}{\mathrm{ln}[(L+1)/L]}$$ (19) will approach the critical exponent $`y_t(x)`$. Table I shows values for $`y_c(x)`$ extrapolated from calculations of $`y_1(x,L)`$ on $`L\times L`$ lattices for $`3\le L\le 8`$ using the BST algorithm. The critical points for $`x=1`$ (three-state) and $`x=0`$ (two-state) Potts models are known exactly and are included in Table I for comparison. Note that the imaginary parts of $`y_c`$(BST) are all consistent with zero. We have also calculated the thermal exponent, $`y_t`$, applying the BST algorithm to the values given by Eq. (19), and these results are also presented in Table I. For $`x=1`$ we find $`y_t`$ very close to the known value $`y_t=6/5`$ for the three-state model, but for $`x`$ as large as 0.5 we obtain $`y_t=1`$, the value of the thermal exponent for the two-state (Ising) model. Fig. 5 shows the critical line of the two-dimensional three-state Potts ferromagnet for $`H_q<0`$. In Fig. 5 the upper line is the critical temperature of the two-state model, $`T_c(Q=2)=1/\mathrm{ln}(1+\sqrt{2})`$, and the lower line is the critical temperature for the three-state model, $`T_c(Q=3)=1/\mathrm{ln}(1+\sqrt{3})`$. The critical line for small $`H_q`$ is given by $`T-T_c(Q=3)\sim (-H_q)^{y_t/y_h},`$ where $`y_t=6/5`$ and $`y_h=28/15`$ for the three-state Potts model.
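For a pure power law $`\mathrm{Im}[y_1(L)]\sim L^{-y_t}`$, the finite-size estimator recovers $`y_t`$ exactly at every $`L`$. The sketch below (Python/NumPy) demonstrates this with synthetic data — the amplitude 0.37 is arbitrary, not a value from the paper — and its sign convention is chosen so that $`y_t`$ comes out positive for an imaginary part that decreases with $`L`$:

```python
import numpy as np

def yt_estimates(L, im_y1):
    """Finite-size estimates of the thermal exponent from the imaginary
    part of the first Fisher zero:
    y_t(L) = -ln[Im y1(L+1) / Im y1(L)] / ln[(L+1)/L]."""
    L = np.asarray(L, dtype=float)
    im = np.asarray(im_y1, dtype=float)
    return -np.log(im[1:] / im[:-1]) / np.log(L[1:] / L[:-1])

# synthetic check: a pure power law Im y1 = 0.37 * L^(-6/5)
L = np.arange(3, 9)
yt = yt_estimates(L, 0.37 * L ** (-1.2))
# every estimate equals 6/5, the exact three-state value
```

With real lattice data the estimates drift with $`L`$ because of corrections to scaling, which is why a sequence extrapolation such as BST is applied before quoting $`y_t`$.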
## V conclusion

Following the work of Glumac and Uzelac, we have studied the locus of Yang-Lee zeros in the complex $`x`$ plane for the $`Q`$-state Potts model in one dimension and find that for $`Q>1`$ the zeros lie on a circle whose radius varies continuously with temperature and $`Q`$. For $`1<Q<2`$ and any finite temperature, the zeros lie inside the unit circle, while for $`Q>2`$ they lie outside the unit circle. In the special case $`Q=2`$ (Ising model) the zeros lie exactly on the unit circle, as first proved by Lee and Yang. In two and three dimensions we have used the microcanonical transfer matrix to find the Yang-Lee zeros for finite lattices. The general trends observed in one dimension are repeated in two and three dimensions. In two dimensions with $`3\le Q\le 8`$ the zeros lie outside the unit circle, and finite-size scaling suggests that the locus of zeros in the thermodynamic limit touches the unit circle only at the critical point, $`x=1`$. In three dimensions with $`Q=3`$ we have calculated the zeros on a single $`3\times 3\times 3`$ lattice, and there again the zeros lie outside the unit circle. While the exact form of the locus of zeros remains an open and important question, our results suggest a universal behavior which incorporates the Lee-Yang circle theorem. For $`1<Q<2`$ the locus of zeros lies within the unit circle, while for $`Q>2`$ it lies outside the unit circle, and in each case it touches the unit circle at the critical point $`x=1`$. We have also studied the Fisher (complex temperature) zeros of the two-dimensional three-state Potts model for lattices of size $`3\le L\le 8`$ in the presence of an external field. The field reduces the symmetry of the Hamiltonian to that of the ($`Q-1`$)-state model, and by studying the edge singularity we are able to determine the critical point and temperature exponent as a function of the field. Cross-over from the 3-state to the 2-state universality class is apparent.
A more detailed analysis using larger lattices and allowing for confluent singularities is in progress.