# Charmonium-Hadron Cross Section in a Nonperturbative QCD Approach ## Abstract We calculate the nonperturbative $`J/\mathrm{\Psi }N`$ and $`\mathrm{\Psi }^{}N`$ cross sections with the model of the stochastic vacuum, which has been successfully applied to many high energy reactions. We also give a quantitative discussion of pre-resonance formation and medium effects. PACS: 13.85.Hd 13.85.Ni 13.87.Fh 14.65.Dw 14.40.Lb During the next year the first data on heavy ion collisions at high energy ($`\sqrt{s}=200\text{GeV}`$ per nucleon pair) will be available at RHIC. As is well known, one of the main goals of this machine is to find and study the plasma of quarks and gluons (QGP) . The search for this state of matter started in the early eighties and has since been the subject of intense debate . Recent results on Pb-Pb collisions, taken at lower energies at CERN-SPS, have attracted great attention and increased the hope that a new phase of nuclear matter is “just around the corner”. The signature of QGP formation has been and still remains a theoretical and experimental challenge. Indeed there is so far no “crucial test” able to disentangle the possible new phase from the dense hadronic background. Among the proposed signatures the most interesting is the suppression of $`J/\mathrm{\Psi }`$ . This suppression was observed experimentally by the NA38 collaboration in 1987 in collisions with light ions, and also, more dramatically, by the NA50 collaboration in 1995-1996 in Pb-Pb collisions . Whereas the old data could be reasonably well explained by a “conventional” approach, the new Pb-Pb results of the NA50 collaboration created a big controversy . The entire set of the $`J/\mathrm{\Psi }`$ data from pA and AB collisions available before the advent of the Pb beam at CERN SPS has been found to be consistent with the nuclear absorption model. 
However, in the case of the new Pb-Pb data, the density of secondaries, which (together with primary nucleons flying around) are presumably responsible for the charmonium absorption, is so high that the hadronic system in question is hardly in a hadronic phase. Nevertheless, a conventional treatment of the problem is not yet discarded . Reliable values for the charmonium-nucleon cross sections are of crucial importance in the present context. One needs to know the cross section $`\sigma _{J/\psi }`$ in order to predict the nuclear suppression of $`J/\mathrm{\Psi }`$ without assuming a so-called “deconfining regime”. Estimates using perturbative QCD give values which are too small to explain the observed absorption conventionally, but they are certainly not reliable for this genuinely nonperturbative problem. A nonperturbative estimate may be attempted by applying vector dominance to $`J/\mathrm{\Psi }`$ and $`\mathrm{\Psi }^{}`$ photoproduction. In this way a cross section of $`\sigma _{J/\psi }\approx 1.3`$ mb for $`\sqrt{s}\approx 10`$ GeV and $`\sigma _\psi ^{}/\sigma _{J/\psi }\approx 0.8`$ has been obtained . In ref. strong arguments were put forward against vector dominance with only a few intermediate vector mesons, even in the case of the production of light vector mesons, and these arguments apply a fortiori to the production of heavy vector mesons. The instability of the vector dominance model can be seen from a more refined multichannel analysis, where a value $`\sigma _{J/\psi }\approx 3`$–4 mb has been obtained. Even this value is too small to explain the absorption in p-A collisions, which is of the order $`\sigma _\psi ^{abs}\approx 7.3`$ mb . These hadron-hadron cross sections involve nonperturbative aspects of QCD dynamics and therefore require a nonperturbative model to be calculated. 
In a recent letter the nonperturbative QCD contribution to the charmonium-nucleon cross section was evaluated by using an interpolation formula for the dependence of the cross section on the transverse size of a quark-gluon configuration. In this work we calculate the $`J/\mathrm{\Psi }`$- and $`\mathrm{\Psi }^{}`$-nucleon cross sections in a specific nonperturbative model of QCD: the model of the stochastic vacuum (MSV) . It has been applied to a large number of hadronic and photoproduction processes (including photoproduction of $`J/\mathrm{\Psi }`$) with remarkably good success. Its application to $`J/\mathrm{\Psi }`$- and $`\mathrm{\Psi }^{}`$-nucleon scattering is straightforward. We also investigate the influence of nuclear matter and arrive at rather stringent limits for the cross sections in an environment different from the vacuum, where the properties of the medium are reflected in a shift of the $`J/\mathrm{\Psi }`$ mass . The basis of the MSV is the calculation of the scattering amplitude of two colourless dipoles, based on a semiclassical treatment developed by Nachtmann . For details we refer to the literature and show here only some intermediate steps necessary for the understanding of the text. The dipole-dipole scattering amplitude is expressed as the expectation value of two Wegner-Wilson loops with lightlike sides and transversal extensions $`\stackrel{}{r}_{t1}`$ and $`\stackrel{}{r}_{t2}`$ respectively. 
This leads to a profile function $`J(\stackrel{}{b},\stackrel{}{r}_{t1},\stackrel{}{r}_{t2})`$ from which hadron-hadron scattering amplitudes are obtained by integrating over different dipole sizes with the transversal densities of the hadrons as weight functions according to $$\sigma _{J/\mathrm{\Psi }}^{tot}=\int d^2b\int d^2r_{t1}\int d^2r_{t2}\,\rho _{J/\mathrm{\Psi }}(\stackrel{}{r}_{t1})\rho _N(\stackrel{}{r}_{t2})J(\stackrel{}{b},\stackrel{}{r}_{t1},\stackrel{}{r}_{t2}).$$ (1) Here $`\rho _{J/\mathrm{\Psi }}(\stackrel{}{r}_{t1})`$ and $`\rho _N(\stackrel{}{r}_{t2})`$ are the transverse densities of the $`J/\mathrm{\Psi }`$ and nucleon respectively. The basic ingredient of the model is the gauge invariant correlator of two gluon field strength tensors. The latter is characterized by two constants: the value at zero distance, the gluon condensate $`<g^2FF>`$, and the correlation length $`a`$. We take these values from previous applications of the model (and literature quoted there): $$<g^2FF>=2.49\ \mathrm{GeV}^4,\qquad a=0.346\ \mathrm{fm}.$$ (2) The wave functions of the proton have been determined from proton-proton and proton-antiproton scattering respectively. It turns out that the best description for the nucleon transverse density is given by that of a quark-diquark system with transversal distance $`\stackrel{}{r}_t`$ and density: $$\rho _N(\stackrel{}{r_t})=|\mathrm{\Psi }_p(\stackrel{}{r}_t)|^2=\frac{1}{2\pi }\frac{1}{S_p^2}e^{-\frac{|\stackrel{}{r}_t|^2}{2S_p^2}}.$$ (3) The value of the extension parameter, $`S_p=0.739\text{fm}`$, obtained from proton-proton scattering agrees very well with that obtained from the electromagnetic form factor in a similar treatment . 
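The structure of Eq. (1) can be illustrated numerically. In the sketch below the transverse densities are the Gaussians of Eq. (3); the profile function `toy_J` is only a placeholder with qualitatively similar behaviour (growing with the dipole sizes, falling off with impact parameter), not the actual MSV profile, and its normalization `kappa` as well as the $`J/\mathrm{\Psi }`$ extension parameter `S_psi` are illustrative assumptions.

```python
import numpy as np

# Toy evaluation of the structure of Eq. (1):
# sigma = int d^2b d^2r1 d^2r2 rho_psi(r1) rho_N(r2) J(b, r1, r2).
a = 0.346        # correlation length [fm], Eq. (2)
S_p = 0.739      # proton extension parameter [fm]
S_psi = 0.33     # ASSUMED J/psi extension parameter [fm]
kappa = 0.036    # hand-tuned normalization of the toy profile

def rho(r, S):   # 2D Gaussian transverse density, cf. Eq. (3)
    return np.exp(-r**2 / (2 * S**2)) / (2 * np.pi * S**2)

def toy_J(b, r1, r2):   # PLACEHOLDER profile function, not the MSV one
    return kappa * (r1 * r2 / a**2)**2 * np.exp(-b**2 / (2 * a**2))

trapezoid = getattr(np, "trapezoid", None) or getattr(np, "trapz")

r = np.linspace(0.0, 3.5, 120)   # dipole-size grid [fm]
b = np.linspace(0.0, 2.0, 120)   # impact-parameter grid [fm]
R1, R2, B = np.meshgrid(r, r, b, indexing="ij")

# circular symmetry: d^2x -> 2*pi*x dx for each of the three integrals
integrand = (rho(R1, S_psi) * rho(R2, S_p) * toy_J(B, R1, R2)
             * (2 * np.pi)**3 * R1 * R2 * B)
sigma_fm2 = trapezoid(trapezoid(trapezoid(integrand, b), r), r)
print(f"toy total cross section: {10 * sigma_fm2:.2f} mb")  # 1 fm^2 = 10 mb
```

Because the toy profile factorizes, the result can be checked against the closed form $`8\pi \kappa S_\psi ^2S_p^2/a^2`$; the normalization was chosen by hand so the toy value lands in the few-mb range of the real calculation.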
For the wave function of the $`J/\mathrm{\Psi }`$ we used two approaches: 1) A numerical solution of the Schroedinger equation with the standard Cornell potential : $$V=-\frac{4}{3}\frac{\alpha _s}{r}+\sigma r.$$ (4) 2) A Gaussian wave function determined by the electromagnetic decay width of the $`J/\mathrm{\Psi }`$, which has been used in a previous investigation of $`J/\mathrm{\Psi }`$ photoproduction . For the $`\mathrm{\Psi }^{}`$ no analysis of photoproduction in the model has been made, so we use only the solution of the Schroedinger equation. The linear potential can be calculated in the model of the stochastic vacuum, which yields the string tension: $$\sigma =\frac{8\kappa }{81\pi }<g^2FF>a^2=0.179\text{GeV}^2,$$ (5) where the parameter $`\kappa `$ has been determined in lattice calculations to be $`\kappa =0.8`$ . The other parameters, the charm (constituent) quark mass and the (frozen) strong coupling, can be adjusted to give the correct $`J/\mathrm{\Psi }`$ and $`\mathrm{\Psi }^{}`$ mass difference and the $`J/\psi `$ decay width $$m_c=1.7\ \mathrm{GeV},\qquad \alpha _s=0.39.$$ (6) We also use the standard Cornell model parameters : $$\alpha _s=0.39,\qquad \sigma =0.183\text{GeV}^2\qquad \text{and}\qquad m_c=1.84\text{GeV}.$$ (7) From the numerical solution $`\psi (|\stackrel{}{r}|)`$ of the Schroedinger equation the transversal density is projected: $$\rho _{J/\mathrm{\Psi }}(\stackrel{}{r}_t)=\int \left|\psi (\sqrt{\stackrel{}{r}_t^2+r_3^2})\right|^2dr_3,$$ (8) where $`\stackrel{}{r}_t`$ is the $`J/\mathrm{\Psi }`$ transversal radius. Given the values of $`\alpha _s`$, $`\sigma `$ and $`m_c`$ we solve the non-relativistic Schroedinger equation numerically, obtain the wave function, compute the transverse wave function and plug it into the MSV calculation . The results are shown in Table I. 
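The s-wave Schroedinger problem for the Cornell potential of Eq. (4) can be solved by direct diagonalization of a finite-difference Hamiltonian for $`u(r)=rR(r)`$. The sketch below (not the authors' code; the grid parameters are illustrative assumptions) uses the MSV string tension of Eq. (5) and the parameters of Eq. (6); since Eq. (4) carries no additive constant, only the size scale, not the absolute mass, is compared with Table I.

```python
import numpy as np

hbarc = 0.1973                           # GeV fm
m_c, alpha_s, sigma = 1.7, 0.39, 0.179   # GeV, dimensionless, GeV^2
mu = m_c / 2.0                           # reduced mass of c-cbar [GeV]

N, r_max = 1200, 6.0          # grid points and box size [fm] (assumptions)
h = r_max / (N + 1)
r = h * np.arange(1, N + 1)   # interior points; u(0) = u(r_max) = 0

# Cornell potential, Eq. (4), in GeV with r in fm
V = -(4.0 / 3.0) * alpha_s * hbarc / r + (sigma / hbarc) * r

# tridiagonal finite-difference Hamiltonian: H u = E u
kin = hbarc**2 / (2.0 * mu * h**2)
H = np.diag(2.0 * kin + V) - kin * (np.eye(N, k=1) + np.eye(N, k=-1))
E, U = np.linalg.eigh(H)

u0 = U[:, 0]                                           # ground state u(r)
rms = np.sqrt(np.sum(r**2 * u0**2) / np.sum(u0**2))    # sqrt(<r^2>) [fm]
print(f"sqrt(<r^2>) = {rms:.3f} fm")
```

The resulting root mean square radius should come out close to the 0.393 fm quoted for wave function A in Table I.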
In this table $`\sqrt{<r^2>}`$ is the root of the mean square distance of quark and antiquark and $`\sqrt{<r_t^2>}`$ is the root of the mean square transversal distance of quark and antiquark. Wave function A) is the one obtained with the parameters given by Eqs. (5) and (6). Wave function B) corresponds to the standard Cornell model parameters, Eq. (7).

| Wave function | $`\sqrt{<r^2>}`$ \[fm\] | $`\sqrt{<r_t^2>}`$ \[fm\] | $`\sigma _{tot}`$ \[mb\] |
| --- | --- | --- | --- |
| $`J/\mathrm{\Psi }(1S)`$ | | | |
| A | 0.393 | 0.321 | 4.48 |
| B | 0.375 | 0.306 | 4.06 |
| C | | | 4.69 |
| $`\mathrm{\Psi }(2S)`$ | | | |
| A | 0.788 | 0.640 | 17.9 |

TABLE I $`J/\mathrm{\Psi }N`$ and $`\mathrm{\Psi }^{}N`$ cross sections. A and B: numerical solution of the Schroedinger equation with parameters in Eqs. (6) and (7) respectively. C: cross section obtained by the weighted average of the longitudinally and transversely polarized $`J/\mathrm{\Psi }`$ wave functions of ref. . In ref. a Gaussian ansatz was made to construct vector meson wave functions that describe well the electromagnetic decay of the vector meson and photo- and electroproduction cross sections. In Table I, wave function C) gives the result for the $`J/\mathrm{\Psi }N`$ cross section obtained with the weighted average of the longitudinally and transversely polarized $`J/\mathrm{\Psi }`$ wave functions from ref. , with transversal sizes $`\sqrt{<r_t^2>}=0.327\text{fm}`$ and 0.466 fm. Averaging over our results for different wave functions, our final result for the $`J/\mathrm{\Psi }N`$ cross section is $$\sigma _{J/\psi }=4.4\pm 0.6\text{mb}.$$ (9) The error is an estimate of uncertainties coming from the wave function and the model. The only other nonperturbative calculation of the $`J/\mathrm{\Psi }N`$ cross section that we are aware of was done in ref. and the obtained cross section was $`\sigma _{J/\psi }=3.6`$ mb, in fair agreement with our result and with a recent analysis of $`J/\mathrm{\Psi }`$ photoproduction data . 
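The central value of Eq. (9) is simply the average of the three $`J/\mathrm{\Psi }`$ entries of Table I; the quoted error additionally folds in model uncertainties.

```python
# Average of the three J/psi wave-function results of Table I.
table1 = {"A": 4.48, "B": 4.06, "C": 4.69}   # sigma_tot [mb]
mean = sum(table1.values()) / len(table1)
spread = max(table1.values()) - min(table1.values())
print(f"mean sigma = {mean:.2f} mb, spread = {spread:.2f} mb")  # mean 4.41 mb
```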
For $`\mathrm{\Psi }^{}`$ our cross section is also of the same order as the value obtained in : $`\sigma _\psi ^{}=20.0`$ mb. Since one of the possible explanations of the observed $`J/\mathrm{\Psi }`$ suppression is based on the pre-resonance absorption model we present numerical calculations of the nucleon - pre-resonant charmonium state cross section, $`\sigma _\psi `$. In the pre-resonance absorption model, the pre-resonant charmonium state is either interpreted as a color-octet, $`(c\overline{c})_8`$, and a gluon in the hybrid $`(c\overline{c})_8g`$ state, or as a coherent $`J/\mathrm{\Psi }\mathrm{\Psi }^{}`$ mixture. We use a gaussian transverse wave function, as in Eq. (3), to represent a state with transversal radius $`\sqrt{<r_t^2>}0.82\sqrt{<r^2>}=\sqrt{2}S_\psi `$ ($`S_\psi `$ is the pre-resonance extension parameter analogous to $`S_p`$). With the knowledge of the wave functions and transformation properties of the constituents we can compute the total cross section given by the MSV. The resulting nucleon - pre-resonant charmonium state cross section will be different if the pre-resonant charmonium state consists of entities in the adjoint representation (as $`(c\overline{c})_8g`$) or in the fundamental representation (as a $`J/\mathrm{\Psi }\mathrm{\Psi }^{}`$ mixture), the relation being $`\sigma _{\mathrm{adjoint}}=\frac{2N_C^2}{N_C^21}\sigma _{\mathrm{fundamental}}`$, with $`N_C=3`$. In Table II we show the results for these two possibilities and different values of the transverse radius. 
| $`\sqrt{<r_T^2>}`$ (fm) | $`\sigma _{c\overline{c}}`$ (mb) | $`\sigma _{(c\overline{c})_8g}`$ (mb) |
| --- | --- | --- |
| 0.20 | 1.79 | 4.02 |
| 0.25 | 2.76 | 6.21 |
| 0.30 | 3.96 | 8.91 |
| 0.35 | 5.30 | 11.92 |
| 0.40 | 6.81 | 15.32 |
| 0.45 | 8.50 | 19.12 |
| 0.50 | 10.28 | 23.13 |

TABLE II The charmonium-nucleon cross section for Gaussian wave functions and different values of the transverse radius ($`\sqrt{<r_t^2>}`$) of the $`c\overline{c}`$ in a singlet state (second column) or in a hybrid $`(c\overline{c})_8g`$ state (third column). From our results we can see that a cross section $`\sigma _\psi ^{abs}\approx 6`$–7 mb, needed to explain the $`J/\mathrm{\Psi }`$ and $`\mathrm{\Psi }^{}`$ suppression in p-A collisions in the pre-resonance absorption model , is consistent with a pre-resonant charmonium state of size 0.50–0.55 fm if it is a $`J/\mathrm{\Psi }`$-$`\mathrm{\Psi }^{}`$ mixture or 0.30–0.35 fm for a $`(c\overline{c})_8g`$ state. So far the calculations were done with the vacuum values of the correlation length and gluon condensate generally used in the MSV . However, since the interaction between the charmonium and the nucleon occurs in a hadronic medium, these values may change. Indeed, lattice calculations show that both the correlation length and the gluon condensate tend to decrease in a dense (or hot) medium. The reduction of the string tension, $`\sigma `$, leads to two competing effects, which can be quantitatively compared in the MSV. On one hand the cross section tends to decrease strongly when the gluon condensate or the correlation length decrease. On the other hand, when the string tension is reduced the $`c\overline{c}`$ state becomes less confined and will have a larger radius, which, in turn, would lead to a larger cross section for interactions with the nucleons in the medium. It is of major interest to determine which of these effects is dominant. The dependence of the total cross section, Eq. 
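A rough cross-check of the quoted size ranges can be made by linearly interpolating Table II at the midpoint of the 6–7 mb absorption window and converting the transverse radius to a 3D size with $`\sqrt{<r_t^2>}\approx 0.82\sqrt{<r^2>}`$; the target value of 6.5 mb and the linear interpolation are simplifying assumptions, so the mixture case lands only near the lower edge of the quoted 0.50–0.55 fm range.

```python
import numpy as np

# Table II, read off at sigma_abs ~ 6.5 mb (assumed midpoint of 6-7 mb)
r_t     = np.array([0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50])   # fm
singlet = np.array([1.79, 2.76, 3.96, 5.30, 6.81, 8.50, 10.28])  # mb
octet   = np.array([4.02, 6.21, 8.91, 11.92, 15.32, 19.12, 23.13])

target = 6.5   # mb
r_singlet = np.interp(target, singlet, r_t) / 0.82   # J/psi-psi' mixture
r_octet   = np.interp(target, octet,   r_t) / 0.82   # (c cbar)_8 g hybrid
print(f"mixture: r ~ {r_singlet:.2f} fm, hybrid: r ~ {r_octet:.2f} fm")
```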
(1), on the extension parameters $`S_p`$ and $`S_\psi `$ is quite well parametrized as: $$\sigma _{J/\psi }\propto <g^2FF>^2a^{10}\left(\frac{S_p}{a}\right)^{1.5}\left(\frac{S_\psi }{a}\right)^2.$$ (10) In the MSV the string tension, $`\sigma `$, is related to the gluon condensate and to the correlation length through Eq. (5). Therefore, the dependence of the cross section on the string tension and correlation length is approximately given by: $$\sigma _\psi \propto \sigma ^2a^6\left(\frac{S_p}{a}\right)^{1.5}\left(\frac{S_\psi }{a}\right)^2.$$ (11) In a rough approximation the hadron radii can be estimated, using the Ritz variational principle, to be: $$S\propto \left(\frac{1}{\sigma }\right)^{1/3},$$ (12) and thus we finally obtain the following three equivalent ways to express the cross section as a function of the string tension, $`\sigma `$, the correlation length, $`a`$, and the gluon condensate, $`<g^2FF>`$: $$\sigma _{\psi N}\propto \{\begin{array}{c}\sigma ^{5/6}a^{5/2}\\ \sigma ^{25/12}<g^2FF>^{-5/4}\\ <g^2FF>^{5/6}a^{25/6}\end{array}$$ (13) From the equations above we see that the net effect of the medium is a reduction of the cross section, and that a 10% variation in the parameters leads to large variations in the cross sections. Using values of the correlation length and the gluon condensate reduced by 10%, $`a=0.31`$ fm and $`<g^2FF>=2.25`$ GeV<sup>4</sup>, we obtain a 40% reduction in the cross sections. Taking this reduction into account, the cross sections obtained in this work are smaller than the ones needed (both in Refs. and ) to explain the experimental data. However, since in our model the nucleon - pre-resonant charmonium state cross section is much bigger than the $`J/\mathrm{\Psi }N`$ cross section if the pre-resonant charmonium state is a hybrid $`(c\overline{c})_8g`$ state, the reduction of the cross sections due to medium effects favors the pre-resonance model for the hadronic explanation of the observed $`J/\mathrm{\Psi }`$ suppression. 
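The 40% figure follows directly from the last scaling relation in Eq. (13): reducing both the gluon condensate and the correlation length by 10% rescales the cross section by $`0.9^{5/6+25/6}=0.9^5`$.

```python
# Medium-effect estimate from sigma_{psi N} ~ <g^2 FF>^{5/6} a^{25/6}
factor = 0.9 ** (5 / 6 + 25 / 6)        # both parameters reduced by 10%
reduction = 1.0 - factor
print(f"remaining fraction {factor:.3f}, i.e. ~{100 * reduction:.0f}% reduction")
```

The exact number is 41%, consistent with the roughly 40% quoted in the text.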
In order to get more precise results we have varied only one of the parameters $`a`$ and $`<g_s^2FF>`$, keeping the other fixed. This was done in such a way as to decrease the string tension according to equation (5) to the values given in the first column of Table III. The numerically evaluated values for the mass of the $`J/\mathrm{\Psi }`$ (serving as a physical measure of the change) and the cross sections are given in columns 2 to 4, using Gaussian wave functions determined by the variational principle.

| string tension | $`\mathrm{\Delta }E`$ \[MeV\] | $`\sigma _{tot}`$ \[mb\], $`a`$ const. | $`\sigma _{tot}`$ \[mb\], $`<g^2FF>`$ const. |
| --- | --- | --- | --- |
| $`\sigma _0`$ | 0 | 4.48 | 4.48 |
| $`0.9\sigma _0`$ | -33 | 3.86 | 3.38 |
| $`0.75\sigma _0`$ | -83 | 3.27 | 2.2 |
| $`0.50\sigma _0`$ | -174 | 2.0 | 0.83 |
| $`0.25\sigma _0`$ | -275 | 0.91 | 0.13 |

TABLE III $`J/\mathrm{\Psi }`$-nucleon total cross sections as a function of the string tension, with either the correlation length or the gluon condensate kept constant. $`\mathrm{\Delta }E`$ is the mass decrease of the $`J/\mathrm{\Psi }`$ due to the change of string tension. To summarize, we calculated the nonperturbative $`J/\mathrm{\Psi }N`$ and $`\mathrm{\Psi }^{}N`$ cross sections with the MSV. The basic ingredient of the model is the gauge invariant correlator of two gluon field strength tensors, which is characterized by two constants: the gluon condensate and the gluon field correlation length. Using for these quantities values fixed in previous applications and using well accepted charmonium wave functions, we obtain $`\sigma _{J/\psi N}\approx 4`$ mb and $`\sigma _{\psi ^{}N}\approx 18`$ mb. An interesting prediction of the MSV is the strong dependence of these cross sections on the parameters of the QCD vacuum, which will most likely lead to a drastic reduction of the cross sections at higher temperatures and perhaps also at higher densities. 
Acknowledgements: This work has been supported by FAPESP (under project #: 98/2249-4) and CNPq. We would like to warmly thank E. Ferreira for instructive discussions in the early stage of this work. F.S.N. and M.N. would like to thank the Institut für Theoretische Physik at the University of Heidelberg for its hospitality during their stay in Heidelberg.
# Early X-ray/UV Line Signatures of GRB Progenitors and Hypernovae ## 1 Introduction The nature of the progenitors of gamma-ray bursts (GRB) is an unsettled issue of extreme interest, e.g. Fryer, Woosley & Hartmann (1999); Paczyński (1998); Mészáros (1998). It is becoming increasingly apparent that, whatever the progenitor, a black hole plus debris torus may result which powers the GRB, but the burning question is what gives rise to this system. Compact binary (NS-NS or BH-NS) mergers, other mergers (WD-BH, He-BH), or the collapse of a massive, fast rotating star (referred to as hypernovae or collapsars) could all lead to such a BH plus debris torus energy source, and much current work centers on discriminating between the various progenitors. Evidence concerning the progenitor comes both from the accumulating statistics on the offsets between GRB afterglow optical transients and their host galaxies (Bloom et al. (1998)) and from light curve fits and continuum spectral information providing evidence for either low (Wijers & Galama (1999)) or high (Owen et al. (1998)) density in front of the afterglow. However, the most direct diagnostics for the environment are probably X-ray and UV spectral lines (Bisnovatyi-Kogan & Timokhin (1997); Perna & Loeb (1998); Mészáros & Rees 1998b ; Böttcher et al. (1998)), and an interesting possible diagnostic for hypernovae or collapsars is the presence of Fe K-$`\alpha `$ emission lines, produced by fluorescent scattering, off the outer parts of the stellar progenitor, of the continuum X-ray photons originating in the afterglow of the GRB (Mészáros & Rees 1998b ; Ghisellini et al. (1999); Lazzati et al. (1999)). Quantitative calculations of spectral diagnostics of GRB progenitors are hindered by the lack of detailed calculations or data on the evolution and mass loss history in the period of months to years before the outburst. However, it is possible to guess what some of the generic features of such pre-burst environments may be. 
The purpose of this paper is to consider some very simplified but physically plausible progenitor configurations, and to explore in quantitative detail the range of possible X-ray/UV spectral signatures that can be expected from them on the time scale of hours to days after the outburst. ## 2 Pre-burst Environments and Computational Method The task of finding useful progenitor diagnostics is simplified if the pre-burst evolution of the latter leads to a significantly enhanced gas density in the immediate neighborhood of the burst. In the case of a massive progenitor scenario, such as a hypernova or collapsar, it is known that red supergiants and supernova progenitors in general are prone to have strong winds. One would expect such a strong mass loss phase to produce a pre-burst environment which could have the form of a shell, e.g. as inferred in SN 1987A. For instance, a star evolving from a red giant to a blue giant phase might first emit a slower wind, which is later swept up into a shell by a faster wind. In some collapse or compact binary merger scenarios, e.g. of a BH or NS with a White Dwarf or He core left over from a massive companion star, a supernova producing a metal-enriched supernova remnant (SNR) shell might precede the burst. In general the shell would be expected to have dispersed before the burst occurs, but there could be rare cases where this is not so. Another possible scenario which has been discussed is the delayed collapse of a rotationally stabilized neutron star, which could lead to a burst with a SNR shell around it (Vietri & Stella (1998)). Another geometry characterizing massive progenitor or collapsar models could arise if the giant progenitor is fast-rotating, e.g. due to spin-up from merging with a compact companion. Then the stellar envelope and the wind would be expected to be least dense inside a funnel-like cavity extending along the orbital-spin axis, with the GRB at the tip of the funnel. 
However, detailed models for either a funnel-like environment or for a shell resulting from a GRB progenitor are lacking so far. Therefore, our choice of parameters below for these scenarios is purely phenomenological, and guided more by the reported observations than by theoretical considerations. For the computations, we need to treat in detail the photoionization and recombination of the various ions in the environment material, and to obtain spectra which can be compared to observations we need to consider the time-dependence. The latter is due to the fact that the recombination and ionization have a natural timescale depending on the ambient density, the chemical abundance and the flux received; that the ionizing spectrum from the GRB afterglow varies in time; and that the spectrum observed at a given observer time is made up of light arriving from different regions of the remnant, for which the source time is different. The problem can be simplified if the first timescale is shorter than the latter two, since in this case one may use a steady-state photoionization code. The recombination time is $`t_{rec}\approx 10^3Z^{-2}T_7^{1/2}n_{10}^{-1}`$ s for ions of charge $`Z`$ at the typical temperatures and densities in the reprocessing gas, which is short compared to the timescales $`10^5\mathrm{s}\approx 1`$ day considered, so the ionization equilibrium approximation is justified in the examples calculated below. In this paper we exploit this approximation, and make use of the XSTAR code (Kallman & McCray (1982); Kallman & Krolik (1998)) to calculate the spectrum. This is a steady state code which, for a given input spectrum, calculates the photoionization of a plasma in a shell at a given distance from the source, as a function of the density and chemical abundances. 
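The recombination-time scaling quoted above, $`t_{rec}\propto Z^{-2}T^{1/2}n^{-1}`$, can be evaluated for the most relevant case, hydrogen-like Fe, to confirm that equilibrium holds on the timescales of interest:

```python
# t_rec ~ 1e3 * Z^-2 * T_7^(1/2) * n_10^-1 seconds, with T_7 = T/1e7 K
# and n_10 = n/1e10 cm^-3, evaluated for H-like Fe (Z = 26).
def t_rec(Z, T7=1.0, n10=1.0):
    return 1.0e3 / Z**2 * T7**0.5 / n10   # seconds

t_fe = t_rec(26)
print(f"t_rec(Fe XXVI) ~ {t_fe:.1f} s")   # ~1.5 s, far below 1e5 s ~ 1 day
```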
These position dependent spectra in the source frame, which arise in response to a time-variable input spectrum, are integrated over the remnant to obtain the observer-time dependent spectra that would actually be measured. A restriction on the use of this code is that the effects of comptonization can be included only in a rough manner. This is not a problem if the remnant is Thomson thin (column density $`\mathrm{\Sigma }\stackrel{<}{}2.5\times 10^{24}\text{cm}^{-2}`$), or if the incident continuum is absorbed over a column density smaller than this. Very little, if anything, is known about the nature and geometry of the remnants, and in what follows we assume situations where the above restriction is either satisfied, or the effects of its violation can be estimated by means of a different Monte Carlo code (Matt et al. (1996)) which is not subject to this restriction. The input spectrum that we assume is typical of simple afterglow models (Mészáros & Rees 1997a ; Waxman (1997)), with phenomenological parameters chosen to approximate those of the observed afterglow of GRB 970508. We take these to be a break luminosity $`L_{ϵ_m}\approx 3.2\times 10^{46}`$ erg/s/keV with a break energy $`ϵ_m=1.96`$ keV at $`t=10^3`$ s, a Band et al. (1993) spectrum with energy indices $`\alpha =0.33`$, $`\beta =0.75`$, and a standard time decay exponent $`\gamma =(3/2)\beta `$ for the peak frequency. ## 3 Shell Models One type of environment model considered is a shell of gas at some distance from the burst, considered to be essentially stationary over the period of interest for its response to the above time-dependent input spectrum. The shells could be metal-enriched, especially if arising from a supernova explosion before the burst, and possibly also if involving a pre-ejected wind from a massive stellar progenitor (although in the latter case, solar abundances are probably likelier). 
These shells could have a large coverage fraction, and would have a mean density much larger than typical ISM values, especially when blobs and condensations form via instabilities. Guided by the report of an Fe line detection peaking after about one day in GRB 970508 and in GRB 970828 (Piro et al. (1999); Yoshida et al. (1998)), one is led to consider shell radii $`R\stackrel{>}{}1`$ light day. A physical requirement is that the distance $`ct\mathrm{\Gamma }(t)^2`$ reached after one day by the afterglow shock producing the continuum be less than $`R`$, with $`\mathrm{\Gamma }\stackrel{>}{}1`$. As an example, we assume that the shock is observed to reach the shell at an inner-radius distance $`R\approx 1.5\times 10^{16}`$ cm at $`t_s=1`$ day along the L.O.S. Within the context of simple (adiabatic, impulsive, homogeneous external density) standard afterglow models, this could occur for a deceleration radius $`\stackrel{<}{}R`$, requiring a density between the burst and the shell higher than usually considered. A pre-shell density $`n\stackrel{<}{}10^6\text{cm}^{-3}`$ could do this, involving a total mass $`10^{-2}M_{\odot}`$, much less than in the assumed shell, and a Thomson optical depth $`\tau _T\stackrel{<}{}1`$. However, in scenarios leading to a shell the conditions might differ substantially from those implied in snapshot fits to simple standard models, e.g. Wijers & Galama 1999, and the error bars in such fits are hard to estimate. A complete model of the physics for both the input continuum and the reprocessing gas would be uncertain, especially in view of the preliminary nature of current X-ray line observations. For this reason, we prefer to consider a phenomenological input spectrum as a quantity given by observations, and treat the environment simply as a test particle gas, choosing its physical parameters in such a manner as to reproduce the current observations. 
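The quoted pre-shell mass and optical depth follow from a simple order-of-magnitude estimate, assuming a uniform hydrogen medium filling the volume inside the shell:

```python
import math

# Uniform pre-shell medium: n = 1e6 cm^-3 out to R = 1.5e16 cm.
R, n = 1.5e16, 1.0e6                 # cm, cm^-3
m_p, sigma_T = 1.67e-24, 6.65e-25    # proton mass [g], Thomson c.s. [cm^2]
M_sun = 1.99e33                      # g

mass = (4.0 / 3.0) * math.pi * R**3 * n * m_p / M_sun   # solar masses
tau_T = n * sigma_T * R                                  # Thomson depth
print(f"M ~ {mass:.3f} M_sun, tau_T ~ {tau_T:.3f}")
```

This gives a mass of order $`10^{-2}M_{\odot}`$, well below the roughly solar-mass shell considered below, and a Thomson depth well below unity.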
For computational reasons, the calculations are carried out for thin homogeneous spherical shells of different densities, which can be used to represent thicker inhomogeneous shells with the same mass per unit area and the same density in the filaments or blobs as in a homogeneous thin shell. The shell was assumed to have an Fe abundance of either 10 times solar or $`10^2`$ times solar (and solar for the other elements), and a hydrogen column density $`N_H=5\times 10^{23}\text{cm}^{-2}`$, with a total mass $`M_s\approx 1M_{\odot}`$. The X-ray/UV spectrum as a function of observer time is shown in Figure 1, for several values of the particle density in the shell. The line spectrum becomes more prominent as the gas cools and recombines. Due to the very high luminosities and hard initial $`\gamma `$-ray spectrum, initially all the Fe is fully ionized. As the gas cools, the strongest features are the Fe K-$`\alpha `$ and K-edge, which appear first in absorption and later as recombination emission features; for these luminosities the K-edge feature peaks at later times than the K-$`\alpha `$, and the K-$`\alpha `$ line is the more prominent and the easier to detect. (For a lower input continuum luminosity with a steeper spectrum, however, the time sequence can reverse and the K-edge can be more prominent than the K-$`\alpha `$ feature.) As the continuum continues to decrease, the Fe K-$`\alpha `$ line becomes more important, shifting its energy gradually from 6.7 to 6.4 keV as the lower ions become in turn more predominant. At later times the Fe K-edge recombination feature begins to emerge in emission as well, whereas at early times it is largely an absorption feature. After the Fe features have become important, with some delay depending on the density and abundance, other features in the 2-3 keV range due to Si and S also become prominent, as well as O recombination and K-$`\alpha `$ features at 0.86 and 0.65 keV. 
The corresponding X-ray light curves in the 2-10 keV range are shown in Figure 2, as well as the equivalent widths of the Fe K-$`\alpha `$ feature. The Fe K-$`\alpha `$ luminosity reaches values $`\stackrel{<}{}10^{43}`$ erg s<sup>-1</sup> and the EW reaches values of 0.2 to 3.5 keV in emission. The Fe K-edge feature in emission reaches values $`\stackrel{<}{}0.1`$ keV in Figure 2d (not shown), but at early times the K-edge absorption is substantial, as seen in Figure 1. In this example of a full shell the equivalent width (EW) of the Fe K-$`\alpha `$ in the 6.4-6.7 keV range and the Fe K-edge at $`9.28`$ keV continue to grow as the bulk of the diffuse K-edge recombination and fluorescent K-$`\alpha `$ photons reach the observer from the rim and the back portions of the shell, in response to the GRB time-dependent continuum. This growth continues until a time $`t\approx R/c\approx 5`$ days, when the diffuse radiation from the rim of the shell becomes visible. However, by this time the total X-ray flux (continuum plus lines) has decreased significantly (Figure 1) and the S/N is less favorable for detection. Notice that in this calculation the continuum source, i.e. the shock, crosses the shell at 1 day. At this point the observed continuum X-ray luminosity temporarily increases, as the radiation along the L.O.S. is no longer absorbed by the shell, but then it continues to decrease according to the standard afterglow decay law. This temporary brightening would be enhanced by, and might be dominated by, the heating of the shell as the shock goes across it; a consistent analysis of the shock heating would require a number of additional assumptions and detailed gas dynamical calculations which are beyond the scope of this paper (see, e.g., Vietri et al. (1999) for an analytical estimate). A temporary brightening of the continuum at one day is in fact seen in the observations of GRB 970508 (Piro et al. (1999)). 
The unabsorbed continuum reaching the observer after one day from beyond the shell is also responsible for the gradual re-filling of the absorption troughs seen in Figure 1 at late times. The effect of a jet-like fireball illuminating a spherical shell is also of interest. An example of the spectral evolution is shown in Figure 3, for a fireball whose continuum radiation is collimated in a jet of opening half-angle $`\theta _j=37\mathrm{deg}`$ (and other properties the same as for the spherical fireball of Figure 1). In this example the shell was assumed to be spherical, with the same dimensions and properties as in Figure 1. The effect of a jet is that the ring-shaped area of illuminated shell which is visible to the observer increases only up to a time $`t_j=(R/c)(1-\mathrm{cos}\theta _j)\sim `$ 1 day. After that time, the shell regions at angles larger than $`\theta _j`$ which become visible do not contribute any diffuse radiation, since they are not (and were never) illuminated by the continuum source. This choice of $`\theta _j`$ therefore results in K-edge and K-$`\alpha `$ equivalent widths which grow until $`t_j\sim 1`$ day, and decay thereafter (see Figure 4). ## 4 Scattering Funnel Hypernova Models A different configuration which may characterize hypernovae involves a funnel geometry. Accurate hypernova line diagnostics are hampered by the absence of quantitative models, extending from minutes to days after the burst, of the gaseous environment in the outer layers and/or winds in such objects. We can, however, get an estimate of what may be expected by using a physically plausible toy model. We take a parabolic funnel as an idealized representation of the centrifugally evacuated funnel along the rotation axis of the collapsing stellar configuration, with the GRB at its tip. In order to produce line features which peak at about one day from such a model, one requires the X-ray continuum to be inside the outer rim of the funnel for at least this long. 
A simple configuration with these properties is, for example, a wind with a scattering optical depth $`\stackrel{>}{}1`$ extending out to $`R=1.5\times 10^{16}`$ cm, in which there are two empty (or at any rate much lower density) funnels, inside which the fireball expands. The fireball is assumed to have the same luminosity per solid angle and spectral characteristics as used in the previous two shell models, and the funnel opening half-angle was taken to be $`15\mathrm{deg}`$. For the funnel walls we take a uniform density $`n=10^{10}\text{cm}^{-3}`$, and an Fe abundance $`x_{Fe}=10`$ or $`x_{Fe}=10^2`$; we assume the effective column density within which reprocessing is most effective to be $`\mathrm{\Sigma }=10^{24}\text{cm}^{-2}`$, the effective amount of reprocessing mass involved being $`\sim 0.2M_{\odot }`$. An accurate calculation of the spectrum escaping from a funnel is not straightforward, since a rigorous prescription for treating multiple scatterings and a non-spherical geometry is difficult to implement in a code such as XSTAR. However, it is possible to obtain useful lower and upper limits for the actual equivalent widths, by calculating the widths expected in two limits. A lower limit for the EW is computed by counting only the once-reflected line photons which are directed inside the opening angle of the funnel, and comparing them to the continuum photons (either direct or reflected) which are similarly directed inside the opening angle. The upper limit for the EW is calculated using all the once-reflected line photons (whether directed at the opening or not) and comparing them to the directly escaping plus all the reflected continuum. A spectrum as a function of time for the second limit (all) and $`x_{Fe}=10^2`$ is shown in Figure 5. 
The funnel model was taken to have the same input luminosity per solid angle as the shell models, but the incidence angle is shallower in funnels, and hence the effective heating per unit area is smaller than in shells at the same distance; this favors Fe K-$`\alpha `$ recombination, and the Fe K-$`\alpha `$ luminosity is correspondingly larger. The upper and lower limits for this hypernova example with $`x_{Fe}=10^2`$ (see bottom of Figure 5) show that at one day the Fe K-$`\alpha `$ luminosity is bounded between $`2\times 10^{44}`$ erg s<sup>-1</sup> and $`6\times 10^{42}`$ erg s<sup>-1</sup>, and the Fe K-$`\alpha `$ line EW is bounded between 1.2 and $`0.1`$ keV, while the Fe K-edge EW, which is more prominent, is bounded between 2.7 and 0.2 keV. For $`x_{Fe}=10`$, these values are lower by a factor $`\sim 3`$ (Figure 6). ## 5 Discussion We have considered a series of models where the environment of the burst can be represented as a shell of enhanced density at some radial distance from the burst. These shells could be the result of a pre-burst wind phase of a massive progenitor, a hypernova, collapsar or a merger involving a massive companion or its core. Alternatively, they might be supernova remnant (SNR) shells of a rare kind, which originated sufficiently recently that they have not yet dispersed before the burst occurs. Such shells can produce significant Fe K-$`\alpha `$ and K-edge luminosities and equivalent widths of order $`\sim `$ keV, provided the density (possibly in the form of blobs) in the shell is large ($`10^{10}-10^{12}\text{cm}^{-3}`$), and the coverage fraction is a substantial fraction of $`4\pi `$. For a mass of Fe in the shell of $`2.5\times 10^{-4}M_{\odot }`$ or $`2.5\times 10^{-3}M_{\odot }`$ (a total shell mass $`\sim 1M_{\odot }`$) the Fe K-$`\alpha `$ equivalent widths after one day can be $`E_W\stackrel{<}{}`$ keV, comparable to the values reported by Piro et al. (1999) and Yoshida et al. (1998). 
However, the Fe luminosity is $`\stackrel{<}{}10^{43}`$ erg s<sup>-1</sup>, which is low by a factor of 5-10. The higher density or more Fe-rich shells also show a drop in the continuum after $`t\sim 10^4`$ s due to Fe absorption and re-emission, e.g. panels b, c and d of Figure 2, a feature which qualitatively resembles an observed dip in the light curve of GRB 970508 at $`\sim 5\times 10^4`$ s (Piro et al. 1999). We note that winds of $`\dot{M}\sim 10^{-4}M_{\odot }`$/yr with velocities $`v_w\sim 100`$ km s<sup>-1</sup> varying on timescales $`\stackrel{<}{}100`$ years (characteristic of massive stars) would yield shell enhancements starting at $`R\sim 10^{16}`$ cm with mean density $`n\stackrel{>}{}10^5\text{cm}^{-3}`$, in which condensations of $`n\sim 10^9-10^{11}\text{cm}^{-3}`$ could form via instabilities. Dense shells may also form as a result of a fast wind following a slower one. The results would be similar whether the shells are homogeneous, or consist of blobs with the same density and a comparable total coverage fraction as a homogeneous shell. (Note that in this model the shell or blobs have a different origin, are further out, and are much bulkier and slower than, e.g., the metal-enriched blobs possibly accelerated in the relativistically moving burst ejecta itself, e.g. Hailey et al. (1999); Mészáros & Rees 1998a). In this particular model, the Fe K-$`\alpha `$ EWs reach values $`E_W\sim 0.3-3`$ keV at 1 day, and continue to be significant up to a time $`\sim R/c\sim 5`$ days when the diffuse radiation from the rim of the shell reaches the observer (Figures 1 and 2). However the continued decay of the continuum after 1 day would reduce the S/N ratio, which would make it harder to detect an Fe feature at later times. We have also explored a different shell scenario, where the Fe features would cut off abruptly after reaching a peak. This occurs if the continuum is beamed, e.g. if it is produced by a collimated fireball jet. 
In this case (Figures 3 and 4), the diffuse radiation, including the Fe and other spectral features, cuts off after a time $`t\sim (R/c)(1-\mathrm{cos}\theta _j)\sim 1`$ day, and for a $`\sim 1M_{\odot }`$ shell at $`R\sim 10^{16}`$ cm with Fe abundance $`10-10^2`$ times solar the models produce Fe K-$`\alpha `$ equivalent widths $`\sim 0.3-3`$ keV (depending on the density) peaking at one day. Another series of models that we considered addresses the consequences of a funnel geometry in a spinning massive progenitor (hypernova or collapsar). If the stellar envelope or its wind can be assumed to extend out to radii of light-days with an appreciable density of order $`n\sim 10^9-10^{11}\text{cm}^{-3}`$ (particularly in winds where the density drops more slowly than $`r^{-2}`$, e.g. as in hour-glass shaped winds such as that of SN 1987A), a wider range of luminosities and equivalent widths is possible after $`\sim 1`$ day. In this case, for $`x_{Fe}=10^2`$ we obtain an Fe K-$`\alpha `$ luminosity $`L_{Fe}\sim 2\times 10^{44}`$ erg s<sup>-1</sup> at $`t\sim 1`$ day (or a factor 3 lower, for $`x_{Fe}=10`$), comparable to the observational results. The equivalent width grows until the continuum source (or afterglow shock) moves beyond the radius where there is a substantial amount of stellar wind material to reprocess it (see Figure 5). After that time, only the decaying continuum of the shock, now beyond the wind region, is detected, together with a fast-decaying component from the funnel wall as it cools. The calculations presented here indicate that a qualitative difference between shell and funnel models is that, whereas shells produce Fe K-$`\alpha `$, K-edge and features from other metals predominantly in absorption, and later also partly in emission, the funnel models are dominated by emission features throughout. This is due to the presence of material along the line of sight in the shell models, which is absent in the funnel case. 
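The jet cutoff time quoted above can be checked numerically. A sketch, again assuming a shell radius $`R\approx 1.3\times 10^{16}`$ cm (our assumption, chosen so that $`R/c\approx 5`$ days as in the spherical case):

```python
# t_j = (R/c)(1 - cos theta_j): the observer delay after which shell
# regions outside the jet cone, never illuminated by the collimated
# continuum, come into view and contribute no diffuse radiation.
import math

C, DAY = 3.0e10, 86400.0   # speed of light [cm/s], seconds per day


def jet_cutoff_days(radius_cm, theta_j_deg):
    """Delay at which the visible illuminated ring reaches angle theta_j."""
    return radius_cm * (1.0 - math.cos(math.radians(theta_j_deg))) / C / DAY


print(jet_cutoff_days(1.3e16, 37.0))   # ~1 day for theta_j = 37 deg
print(jet_cutoff_days(1.3e16, 90.0))   # recovers the spherical value, R/c ~ 5 days
```

The theta_j = 90 deg case reproduces the full-shell rim delay, which is why the spherical models stay bright in the lines out to several days while the jet models cut off near one day.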
It is worth noting that in GRB 970508 the energy of the X-ray spectral feature discussed by Piro et al. (1999) agrees with that of a 6.7 keV Fe K-$`\alpha `$ line at the previously known redshift $`z=0.835`$ (Metzger et al. 1997), while in GRB 970828 the energy of the X-ray spectral feature reported by Yoshida et al. (1998) is compatible with an Fe K-edge feature at 9.28 keV in the rest frame, at the recently reported redshift $`z=0.958`$ (Djorgovski et al. 2000). A general point is that in the case of low-mass binary mergers, such as NS-NS or BH-NS, it is harder to see how shells or funnels would have formed and still be present within distances $`\stackrel{>}{}10^{15}-10^{16}`$ cm at the time of the burst. Hence the detection of Fe K-edge and K-$`\alpha `$ features peaking at $`\sim `$ 1 day at the strengths discussed here (and as reported by Piro et al. (1999); Yoshida et al. (1998)) would appear to be a significant diagnostic for a massive progenitor. Shells and funnels with dimensions of about a light-day are rough examples of extreme geometries which might characterize massive progenitor remnants. However, a clear distinction between various types of massive progenitors (or mergers involving a massive progenitor) would require extensive quantitative calculations in the spirit of, e.g., Fryer, Woosley & Hartmann (1999) and Ruffert & Janka (1999), but considering more specifically the different pre-burst evolution and near-burst environments. What our present calculations are able to indicate is that Fe K-$`\alpha `$ equivalent widths of $`\sim `$ keV can be produced in a variety of plausible progenitor scenarios, but the absolute value of the Fe K-$`\alpha `$ line flux provides constraints on the combined values of the density, chemical abundance and distance from the burst, as well as the geometry. 
Our present calculations, which include a number of simplifying assumptions, indicate that Fe-enriched funnel models agree better than shell models with the currently reported Fe line values. More detailed modeling, as well as more sensitive X-ray spectral line detections, should be able to provide valuable constraints on specific progenitors. We are grateful to NASA NAG5-2857, NSF PHY94-07194, the Division of Physics, Math & Astronomy, the Astronomy Visitor Program and the Merle Kingsley fund at Caltech, the DAAD and the Royal Society for support. We thank M. Böttcher for pointing out a significant discrepancy, and N. Brandt, G. Chartas, G. Djorgovski, A. Fabian, S. Kulkarni, A. Panaitescu, S. Sigurdsson and A. Young for discussions.
no-problem/9908/astro-ph9908362.html
ar5iv
text
# The Convergence Depth of the Local Peculiar Velocity Field ## 1 The SFI, SCI and SCII Surveys The results discussed in this report derive from peculiar velocity measurements obtained for three samples of spiral galaxies, based on the Tully-Fisher (TF) technique, which combines $`I`$ band photometry and either 21 cm or H$`\alpha `$ long-slit spectroscopy. The three samples are: (a) SFI, which includes 1631 field galaxies out to $`cz\sim 6500`$ km s<sup>-1</sup> (Haynes et al. 1998a,b); (b) SCI, which includes 782 galaxies in the fields of 24 clusters within $`cz\sim 9000`$ km s<sup>-1</sup> (Giovanelli et al. 1997a,b); and (c) SCII, which includes 522 galaxies in the fields of 52 clusters between $`cz\sim 5,000`$ and $`20,000`$ km s<sup>-1</sup> (Dale et al. 1997, 1998, 1999b,c). For the merged cluster sample SCI+SCII, there is an overlap of: 49 clusters with the Lauer & Postman (1994) cluster sample; 27 clusters with the SMAC sample (Hudson et al. 1999); and 22 clusters with the EFAR sample (Wegner et al. 1996). Detailed criteria for the construction of the samples are given elsewhere: for SCI see Giovanelli et al. (1997a); sample characteristics of SFI are given in Giovanelli et al. (1994); for SCII see Dale et al. (1999c). Driving concerns for the selection of each sample were: dense sampling of the local velocity field (SFI), a high quality determination of the TF template relation (SCI), and a reliable recovery of the dipole reflex motion of the Local Group (SCI+SCII). The latter sample was constructed with the aid of Monte Carlo realizations of the cluster distribution and velocity field, utilizing mock samples from $`N`$-body simulations obtained under a variety of cosmological scenarios, with the collaboration of Stefano Borgani. 
## 2 The $`I`$ Band Tully-Fisher Template Relation Tully-Fisher distances, $`\mathrm{c}z_{\mathrm{tf}}`$, and therefore peculiar velocities $$V_{\mathrm{pec}}=\mathrm{c}z-\mathrm{c}z_{\mathrm{tf}},$$ (1) are computed by reference to a fiducial template relation which must be observationally derived. The template relation defines the rest reference frame against which peculiar velocities are to be measured. The importance of accurately calibrating such a tool cannot be overemphasized: the discrepancies between different claims of bulk motions may be partly related to insufficiently well determined template relations. For an assumed linear TF relation we need to determine two main parameters: a slope and a magnitude zero point. The slope of the TF relation is best determined by a sample that maximizes the dynamic range in absolute magnitude $`M_I`$ and disk rotational velocity width $`W`$, i.e. one that preferentially includes nearby objects. The proximity of the SCI sample provides the dynamic range in galactic properties necessary to accurately determine the TF slope. On the other hand, the magnitude zero point of the relation is best obtained from a sample of distant objects, for which a peculiar motion of given amplitude translates into a small magnitude shift. The SCII sample provides an ideal data set from which to calibrate the TF zero point. ### 2.1 The Accuracy of the Tully-Fisher Zero-Point As discussed in Giovanelli et al. (1997b) and Giovanelli et al. (1999), given a number $`N`$ of clusters the uncertainty on the TF zero point of the resulting template cannot be depressed indefinitely by increasing the average number $`\overline{n}`$ of galaxies observed per cluster, and taking advantage of the $`\overline{n}^{-1/2}`$ statistical reduction of noise on the mean. 
That is because a “kinematical” or “thermal” component of the uncertainty depends on the number $`N`$, the distribution in the sky and the peculiar velocity distribution function of the clusters used. In SCI, for example, the statistical uncertainty deriving from the total number of galaxies observed ($`\overline{n}\times N`$) is exceeded by the kinematic uncertainty, which is quantified as follows. For a sample of $`N`$ objects with an rms velocity of $`\langle V^2\rangle ^{1/2}`$ at a mean redshift of $`\mathrm{c}z`$, the expected accuracy of the zero point is limited by systematic concerns to $$|\mathrm{\Delta }m|\simeq \frac{2.17\langle V^2\rangle ^{1/2}}{\mathrm{c}z\sqrt{N}}\mathrm{mag}.$$ (2) This quantity is about 0.04 mag for SCI, while it is only 0.01 mag for SCII, due to the larger mean distance and number of clusters of the latter; the SCII sample was selected, in part, to improve upon the accuracy of the kinematical component of the TF zero point. Since the total number of galaxies involved in the two samples is comparable, the zero point of the SCII template is thus more accurate than that of SCI. The total uncertainty of the SCII zero point, including the contribution from limitations on its statistical, or internal, accuracy (arising from measurement errors, uncertainties in the corrections applied to observed parameters, the TF scatter, etc.) is of order 0.02 magnitudes. The TF template relation is determined internally for a cluster sample. In the case of SCI, it was obtained by assuming that the subset of clusters farther than 40$`h^{-1}`$ Mpc has a globally null monopole (Giovanelli et al. 1997b). The SCII-based template is obtained by assuming that the overall set of clusters (which extends between 50$`h^{-1}`$ and 200$`h^{-1}`$ Mpc) has a globally null monopole, and adopting the same TF slope as for the SCI sample (Dale et al. 1999c). This approach does not, however, affect the value of the dipole or higher moments, and thus still allows an effective measure of possible bulk flows. 
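As an illustration of eq. (2), the kinematic floor on the zero-point accuracy can be evaluated with representative round numbers (the rms velocities and mean redshifts used below are our assumed illustrative values, not figures quoted in the text):

```python
import math


def zero_point_floor(v_rms, cz_mean, n_clusters):
    """Kinematic ('thermal') limit on the TF zero-point accuracy, eq. (2):
    |dm| ~ 2.17 <V^2>^(1/2) / (cz sqrt(N)), in magnitudes.
    v_rms and cz_mean in the same units (e.g. km/s)."""
    return 2.17 * v_rms / (cz_mean * math.sqrt(n_clusters))


# SCI-like sample: N = 24 clusters, assumed <cz> ~ 4000 km/s, rms ~ 300 km/s
print(round(zero_point_floor(300.0, 4000.0, 24), 3))   # a few hundredths of a mag

# SCII-like sample: N = 52 clusters, assumed <cz> ~ 12500 km/s, rms ~ 340 km/s
print(round(zero_point_floor(340.0, 12500.0, 52), 3))  # ~0.01 mag
```

The larger mean distance and cluster count of SCII suppress this floor by roughly a factor of four relative to SCI, consistent with the ~0.04 vs. ~0.01 mag figures quoted above.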
The SCII template relation is $$M_I-5\mathrm{log}h=-7.68(\mathrm{log}W-2.5)-20.91\mathrm{mag}.$$ (3) The zero points of the SCI and SCII templates were found to agree to within the estimated uncertainty of 0.02 mag (the SCII zero point is fainter by 0.015 mag). ### 2.2 The Tully-Fisher Scatter and its Intrinsic Component Any relation involving observed parameters has a limited accuracy, described by the amplitude of the relation’s scatter. Claims of the scatter in the TF relation vary from as low as 0.10 mag (Bernstein et al. 1994) to as high as 0.7 mag (Sandage et al. 1994a,b; 1995; Marinoni et al. 1998). The amplitude of the scatter does depend on wavelength, and studies in the $`I`$ band typically yield the tightest relations. The efforts with the largest samples yield 1$`\sigma `$ dispersion values of $`\sim `$0.3–0.4 mag (Mathewson, Ford & Buckhorn 1994; Willick et al. 1995; Giovanelli et al. 1997b; Dale et al. 1999c). Uncertainties in observational measurements are not the only factors that lead to the overall spread in the data. The corrections typically applied to the observed fluxes and disk rotational velocities are not exactly known, nor are the variety of methods used to account for inherent sample biases such as cluster incompleteness. Moreover, there is an intrinsic component to the TF dispersion, since individual galaxies have diverse formation histories. In fact, Eisenstein & Loeb (1996) advocate an intrinsic scatter of 0.3 magnitudes, a number greater than most estimates from observational work. In light of this fact, they make the interesting claim that either spirals formed quite early or there must be a type of feedback loop that promotes galactic assimilation. The results of Eisenstein & Loeb are not corroborated by $`N`$-body simulations (e.g. Baugh et al. 1997, Mo et al. 1998 and Steinmetz & Navarro 1999), in which reasonably low values for the TF scatter are recovered. As already established in Giovanelli et al. (1997b), Dale et al. 
(1999c), and Willick (1999), Figure 1 reinforces the notion of low intrinsic scatter. The left panel refers to the SCI data whereas the righthand panel shows the results from SCII. The two dotted lines in each panel indicate the velocity width and magnitude uncertainties $`ϵ_x`$ and $`ϵ_y`$, with $`ϵ_x`$ multiplied by the TF slope $`b`$ to put it on a magnitude scale; the thin solid line labeled $`ϵ_m=\sqrt{(bϵ_x)^2+ϵ_y^2}`$ represents the average uncertainty resulting from measurement errors and that deriving from extinction, geometric and statistical corrections. The data displayed in Figure 1 are generated using equal numbers of galaxies per data point (the difference in the redshift distributions of the SCI and SCII samples results in slightly different observed velocity width distributions). The circles plotted represent the average standard deviations of the residuals from the fiducial TF relation. We see that the velocity width errors dominate those from the $`I`$ band fluxes, which are approximately independent of velocity width. Furthermore, the logarithmic velocity widths become increasingly uncertain for slower rotators (cf. Giovanelli et al. 1997b; Willick et al. 1997). We approximate the total observed scatter for the SCI and SCII samples with simple linear relations that depend on the velocity width, with $`x=\mathrm{log}W-2.5`$ as in eq. (3): $`\sigma _{\mathrm{tot},\mathrm{SCI}}`$ $`=`$ $`-0.33x+0.32\mathrm{mag}.`$ (4) $`\sigma _{\mathrm{tot},\mathrm{SCII}}`$ $`=`$ $`-0.40x+0.38\mathrm{mag}.`$ (5) The total scatter for SCII is in general larger than that for SCI. This is unsurprising given that SCII velocity widths primarily stem from optical rotation curves rather than from 21 cm profiles, the latter being a comparatively easier source from which to estimate velocity widths, and since the nearer SCI galaxies generally have better determined disk inclinations. 
The gap between the observed scatter and the measured errors is attributed to an intrinsic scatter contribution: the thick top line is a sum in quadrature of our observed measurement errors, $`ϵ_m`$, and an intrinsic scatter term (Giovanelli et al. 1997b): $$\sigma _{\mathrm{int}}=-0.28x+0.26\mathrm{mag},$$ (6) the same in SCI and SCII. This result has been recently confirmed by Willick and his collaborators, as reported at this meeting. ## 3 Peculiar Velocity Distributions We interpret departures from the canonical template relation as an indication of peculiar motion, with larger departures from the template implying larger amplitude peculiar velocities. Quantitatively, for an object at a redshift $`z`$ with an average magnitude departure from the template of $`\mathrm{\Delta }m`$, we write the peculiar velocity as $$V_{\mathrm{pec}}=\mathrm{c}z(1-10^{0.2\mathrm{\Delta }m}).$$ (7) Tabulations of the SCI and SCII peculiar motions are given in Giovanelli et al. (1999b) and Dale et al. (1999c), respectively. Figure 2 shows the distribution of SCI and SCII (CMB frame) peculiar velocities in an Aitoff projection of Galactic coordinates. The symbols plotted in the figure reflect both the radial directions of the peculiar velocities and the strengths of the measurements – in the CMB reference frame, solid/dotted-lined squares represent approaching/receding clusters and the square size is inversely proportional to the accuracy of the measurement. The largest cluster peculiar velocities, e.g. those for A3266 and A3667, are also the most uncertain, as the clusters are poorly sampled. ### 3.1 The RMS One Dimensional Peculiar Velocity Dispersion It is useful to estimate the line-of-sight distribution of peculiar velocities. The amplitude of that distribution has been known to be a very sensitive discriminator of cosmological models (see, for example, Bahcall & Oh 1996). 
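Before turning to the dispersions, the distance pipeline implied by the template relation of eq. (3) (reading the TF slope and zero point with their conventional negative signs, -7.68 and -20.91) and the velocity definition of eq. (7) can be sketched for a hypothetical galaxy; the magnitude, width and redshift below are invented for illustration only:

```python
import math


def template_abs_mag(log_w):
    """SCII template, eq. (3): M_I - 5 log h as a function of log width."""
    return -7.68 * (log_w - 2.5) - 20.91


def tf_distance_hmpc(m_app, log_w):
    """TF distance in h^-1 Mpc from the distance modulus."""
    mu = m_app - template_abs_mag(log_w)
    return 10.0 ** (0.2 * mu - 5.0)


def v_pec(cz, m_app, log_w, h100=100.0):
    """Peculiar velocity via eq. (1): V_pec = cz - cz_tf."""
    return cz - h100 * tf_distance_hmpc(m_app, log_w)


def v_pec_from_offset(cz, delta_m):
    """Equivalent form, eq. (7): V_pec = cz (1 - 10^{0.2 dm})."""
    return cz * (1.0 - 10.0 ** (0.2 * delta_m))


# Hypothetical galaxy: log W = 2.6, corrected I magnitude 11.9, cz = 5000 km/s
cz, m, lw = 5000.0, 11.9, 2.6
v1 = v_pec(cz, m, lw)
# TF offset relative to the template prediction at the redshift distance:
m_template = template_abs_mag(lw) + 5.0 * math.log10(cz / 100.0) + 25.0
v2 = v_pec_from_offset(cz, m - m_template)
print(round(v1), round(v2))   # the two forms agree; both ~ -195 km/s here
```

A galaxy fainter than the template prediction at its redshift (positive offset) is inferred to lie beyond its redshift distance, and so carries a negative (approaching) peculiar velocity, as in this example.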
The SCII cluster sample is relatively distant and its cluster membership counts are relatively anemic when compared to SCI. Consequently, SCII peculiar velocities are much less certain and the overall distribution shown in Figure 3 is significantly broadened by measurement errors. The peculiar velocities are represented by equal-area Gaussians centered at the peculiar velocity of each cluster, with dispersions equal to the estimated peculiar velocity errors. The thick dashed line superimposed on each plot is the sum of the individual Gaussians (its amplitude has been rescaled for plotting purposes). The 1$`\sigma `$ dispersion in the observed distribution of peculiar velocities is found from a Gaussian fit to the dashed line: $`\sigma _{1\mathrm{d},\mathrm{obs},\mathrm{SCI}}=325`$ km s<sup>-1</sup> and $`\sigma _{1\mathrm{d},\mathrm{obs},\mathrm{SCII}}=796`$ km s<sup>-1</sup>. These values, however, are biased high by measurement errors. An estimate of the true values can easily be obtained via Monte Carlo simulations, yielding $`\sigma _{1\mathrm{d},\mathrm{SCI}}=270\pm 54`$ km s<sup>-1</sup> and $`\sigma _{1\mathrm{d},\mathrm{SCII}}=341\pm 93`$ km s<sup>-1</sup>, where the error estimates derive from the scatter in the dispersions of the simulated samples. These values of $`\sigma _{1\mathrm{d}}`$ are consistent with a relatively low density Universe (Giovanelli et al. 1998b; Bahcall & Oh 1996; Borgani et al. 1997; Watkins 1999; Bahcall, Gramann & Cen 1994a; Croft & Efstathiou 1994). ## 4 Dipoles of the Field Spiral Sample (SFI) On a grid of equatorial coordinates in an Aitoff projection centered at $`\alpha =6^h`$ and $`\delta =0^{\circ }`$, Figure 4 displays galaxies from the SFI sample. Filled and unfilled symbols refer respectively to positive and negative peculiar velocities, measured with respect to the Local Group reference frame. Lines of Galactic latitude $`0^{\circ }`$, $`+20^{\circ }`$ and $`-20^{\circ }`$ are also shown, outlining the Zone of Avoidance. 
This sample, which extends to 6500 km s<sup>-1</sup>, clearly displays the dipole moment associated with the motion of the Local Group with respect to the CMB, in the form of a prevalence of positive peculiar velocities in the Southern Galactic Hemisphere and of negative ones in the Northern half (the apex of the Local Group motion is near $`\alpha =11.2^h`$, $`\delta =-28^{\circ }`$). In other words, a large fraction of the volume sampled by the SFI galaxies does not share the Local Group motion that gives rise to the CMB dipole. Figure 5 is a display similar to that in Figure 4, except that the peculiar velocities are referred to the CMB reference frame. The dipole signature clear in Figure 4 is gone, indicating that most of the SFI sample has a small bulk flow with respect to the CMB reference frame. This issue is tackled in a more quantitative way in what follows. Figure 6 displays the dipoles of the reflex motion of the Local Group with respect to field galaxies in shells 2000 km s<sup>-1</sup> thick. The dashed line in panel 6a corresponds to the amplitude of the CMB dipole. The three sets of symbols identify different ways of computing the peculiar velocities, using different subsets of the data or adopting a direct (stars) or inverse Tully-Fisher relation. The SFI sample indicates convergence to the CMB dipole, in the motion of the Local Group with respect to galaxies in shells, when a radius of a few thousand km s<sup>-1</sup> is reached. Convergence is achieved both in amplitude and in apex direction. Note that the reflex motion of the LG is estimated with respect to shells centered on each redshift, and not with respect to all galaxies within that redshift; the convergence rate in the latter case would be slightly slower than shown in Figure 6. The dipole of Lauer & Postman (1994) is excluded with a high level of confidence. 
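Dipole fits of this kind amount to a linear least-squares problem: each galaxy constrains only the radial projection of the bulk motion, V_rad = V_bulk · r_hat, plus noise. A schematic version with synthetic data (the real analysis weights by measurement errors and accounts for the sample's sky geometry; the bulk-flow vector and noise level below are arbitrary choices):

```python
import math
import random

random.seed(1)

# Synthetic sample: random unit direction vectors on the sky, plus noisy
# radial projections of a known bulk flow.
v_true = (150.0, -250.0, 100.0)   # bulk flow [km/s], Cartesian
n, sigma = 400, 300.0             # number of galaxies, per-object noise [km/s]
dirs, v_rad = [], []
for _ in range(n):
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    r_hat = (s * math.cos(phi), s * math.sin(phi), z)
    dirs.append(r_hat)
    v_rad.append(sum(a * b for a, b in zip(v_true, r_hat))
                 + random.gauss(0.0, sigma))

# Normal equations for V_bulk: (sum r r^T) V = sum r v_rad
A = [[sum(d[i] * d[j] for d in dirs) for j in range(3)] for i in range(3)]
b = [sum(d[i] * v for d, v in zip(dirs, v_rad)) for i in range(3)]


def solve3(A, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]


v_fit = solve3(A, b)
print([round(v) for v in v_fit])  # recovers v_true to within the noise
```

With only the radial component observable, the per-component uncertainty scales roughly as sigma * sqrt(3/n), which is why a few hundred well-distributed objects per shell suffice to test the CMB dipole.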
Alternatively, the bulk flow with respect to the CMB reference frame of a sphere of 6500 km s<sup>-1</sup> radius is 200$`\pm `$65 km s<sup>-1</sup>, directed toward $`(l,b)=(295^{\circ },+25^{\circ })\pm 20^{\circ }`$. This is in general agreement with the direction of bulk flows reported in other studies (da Costa et al. 1996; Courteau et al. 1993; Dekel 1994; Hudson et al. 1999; Willick 1999), but it is smaller than other determinations, which range between 270 and 700 km s<sup>-1</sup>. See Giovanelli et al. (1998a) for further details. Compare these results with the convergence expectations from the PSCz redshift survey, as presented by W. Saunders at this meeting. ## 5 Dipoles of the Cluster Spiral Samples (SCI and SCII) The dipole moments of the reflex motion of the Local Group with respect to (i) the subset of SCI clusters farther than 3000 km s<sup>-1</sup> and (ii) the SCII sample plus the subset of SCI clusters farther than 4500 km s<sup>-1</sup> both coincide, within the errors, with that of the CMB. The coincidence holds both in amplitude and in apex direction. Figure 7 displays the error clouds for the coordinates of the dipole derived from the distant SCI+SCII sample, plotted against supergalactic Cartesian coordinates. One- and two-sigma confidence ellipses are plotted, as well as the locations of the CMB and the Lauer & Postman (1994) reflex dipoles. The latter can be excluded by our data with a high level of confidence (at the 3.5$`\sigma `$ level). The bulk motion for the 64 clusters comprising the distant SCI+SCII sample ($`\mathrm{c}z\sim 100h^{-1}`$ Mpc) is estimated to be no greater than 200 km s<sup>-1</sup>; corrected for error biasing, the amplitude is consistent with a null bulk flow. In other words, the inertial frame defined by the SCI+SCII clusters is consistent with being at rest with respect to the CMB fiducial rest frame. 
More locally, the bulk flow within a sphere of 6000 km s<sup>-1</sup> radius, as derived from SCI, is between 140 and 320 km s<sup>-1</sup>, in the CMB reference frame. See Dale et al. (1999a) and Giovanelli et al. (1998b) for more details. ## 6 No Hubble Bubble in the Local Universe Recently, it has been suggested by Zehavi et al. (1998) that the volume within c$`z\sim 7000`$ km s<sup>-1</sup> is subject to an acceleration, in the sense that the local Hubble constant is $`6.6\pm 2.2`$% larger than the fiducial value. The inference is that we reside at the center of a local underdensity of amplitude 20%, surrounded by an overdense shell. The result is based on the distances to 44 type Ia SNe. In Figure 8, we test that result by using the combined sample of all 76 SCI+SCII clusters. The TF distances to each individual cluster have a typical accuracy similar to or better than that quoted for the SN measurements (which is 5–8% due to internal errors alone), and the cluster sample is well distributed over the sky (see Figure 2). Figure 8 displays $`\delta H/H=V_{\mathrm{pec}}/\mathrm{c}z_{\mathrm{tf}}`$ against $`hd=\mathrm{c}z_{\mathrm{tf}}/100`$, where $`V_{\mathrm{pec}}`$ is the peculiar velocity in the CMB reference frame and $`\mathrm{c}z_{\mathrm{tf}}`$ is the Tully-Fisher distance in Mpc, in the same frame. Unfilled symbols represent clusters with poor distance determinations, based on fewer than 5 galaxies with Tully-Fisher measurements per cluster. Figure 8 illustrates how, at small distances, the deviations from Hubble flow are dominated by the motions of nearby groups, comparable in amplitude to those of the Local Group (611 km s<sup>-1</sup>). At small distances, even modest peculiar velocities constitute a sizable fraction of $`cz`$, thus amplifying, and, if the sampling is sparse, even distorting the distribution of values of $`\delta H/H`$. The deviation from Hubble flow that they imply is of little interest. 
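The internal consistency of the Zehavi et al. interpretation can be checked with linear perturbation theory: a region of mean density contrast delta expands faster than the global flow by δH/H ≈ -(f/3) δ, with f ≈ Ω<sub>m</sub><sup>0.6</sup> the linear growth rate. This is a standard textbook relation, sketched here by us, not a calculation taken from the text:

```python
def hubble_bubble(delta, omega_m=1.0):
    """Linear-theory local Hubble offset inside a region of mean density
    contrast delta: dH/H ~ -(f/3) * delta, with f = Omega_m**0.6."""
    f = omega_m ** 0.6
    return -f * delta / 3.0


# A 20% underdensity (delta = -0.2) in an Omega_m = 1 universe:
print(hubble_bubble(-0.2))                 # ~ +0.067, i.e. ~6.7% faster local flow
# A lower-density universe gives a weaker effect for the same delta:
print(hubble_bubble(-0.2, omega_m=0.3))
```

For Ω<sub>m</sub> ≈ 1 this reproduces the quoted 6.6% excess from a ~20% underdensity; conversely, a percent-level limit on any step in δH/H bounds the mean density contrast of such a local void at the ~few percent level.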
At distances larger than 35$`h^{-1}`$ Mpc, the monopole of the cluster peculiar velocity field exhibits no significant change of value. Our sample is most sensitive to the possible presence of a “step” (as would be produced by the Hubble bubble proposed by Zehavi et al.) between 50$`h^{-1}`$ and 130$`h^{-1}`$ Mpc. The amplitude and significance of a step at 70$`h^{-1}`$ Mpc are $$\frac{\delta H}{H}=0.010\pm 0.022$$ (8) Our data yield no evidence for a step deviation in the Hubble flow of amplitude larger than 2–3%, up to 130$`h^{-1}`$ Mpc. The Hubble bubble suggestion of Zehavi et al. (1998) is thus not corroborated by this larger and more accurate data set (see Giovanelli et al. 1999 for further details). ## 7 Conclusions We have obtained $`I`$ band Tully-Fisher data for 3000 spiral galaxies in the field and in 76 clusters, evenly spread across the sky and distributed between $`\sim `$10 and 200$`h^{-1}`$ Mpc. Rotational velocity widths derive from either 21 cm or long-slit optical spectroscopy. The SCI and SCII cluster data are used to construct an accurate Tully-Fisher template relation. The relation has an average scatter of approximately 0.35 magnitudes, and a zero point with an accuracy of 0.02 mag. Peculiar velocities are obtained for the SFI galaxies and each of the SCI and SCII clusters, by reference to the TF template relation. The typical distance uncertainty is 16% for individual galaxies and 3-9% for clusters, the latter primarily depending on the number of Tully-Fisher measurements within each cluster. The rms line-of-sight component of the cluster peculiar velocity, debroadened for measurement errors, is $`270\pm 54`$ km s<sup>-1</sup> for SCI and $`341\pm 93`$ km s<sup>-1</sup> for SCII. Using the data for the field SFI and cluster SCI samples of spiral galaxies, we have shown that the reflex motion of the Local Group converges to the CMB dipole amplitude and direction within 6000 km s<sup>-1</sup>. 
Results from the relatively distant SCII cluster sample confirm that convergence is maintained beyond the limits of the SCI and SFI samples. Finally, no evidence is found for the local Hubble bubble advocated by Zehavi and coworkers, or for any radially averaged deviation from Hubble flow with amplitude larger than 2–3%, between 35$`h^{-1}`$ and 130$`h^{-1}`$ Mpc. Averaged over distances in excess of a few tens of Mpc, the Hubble flow appears to be remarkably smooth. ###### Acknowledgements. The results presented here were obtained by the combined efforts of M. Haynes, E. Hardy, L. Campusano, M. Scodeggio, J. Salzer, G. Wegner, L. da Costa, W. Freudling and the authors. They are based on observations that were carried out at several observatories, including Palomar, Arecibo, NOAO, NRAO, Nançay, MPI and MDM. NRAO, NOAO and NAIC are operated under management agreements with the National Science Foundation respectively by AUI, AURA and Cornell University. The Palomar Observatory is operated by Caltech under a management agreement with Cornell University and JPL. This research was supported by NSF grants AST94-20505, AST95-28960, and AST96-17069.
# Marginal Fermi Liquid with a Two-Dimensional Patched Fermi Surface ## Abstract We consider a model composed of Landau quasiparticle states with patched Fermi surfaces (FS) sandwiched by states with flat FS to simulate the “cold” spot regions in cuprates. We calculate the one-particle irreducible function and the self-energy up to two-loop order. Using renormalization group arguments we show that in the forward scattering channel the renormalized coupling constant is never infrared stable, due to the flat FS sectors. Furthermore we show that the self-energy scales with energy as $`\mathrm{Re}\mathrm{\Sigma }\propto \omega \mathrm{ln}\omega `$ as $`\omega \to 0`$, and thus the Fermi liquid state within each FS patch is turned into a marginal Fermi liquid. The normal phase of the high-$`T_c`$ superconductors is by now notable for a number of physical properties that do not fit within a Landau-Fermi liquid framework. These anomalies have in common the fact that they show a crossover behavior near optimal doping, and they are in one way or another deeply related to the peculiar nature of the electronic excitations in the underdoped regime. In particular, the presence of an anisotropic pseudogap in the underdoped regime is central to our understanding of the FS topology in these materials. Based on the photoemission spectroscopy (ARPES) data as well as other experimental techniques, we now have a better view of the FS in the cuprates. The experiments, performed especially on underdoped Bi2212 and its family of compounds, demonstrate the existence of a spectrum characterized by a pseudogap in the regions around $`(\pm \pi ,0)`$ and $`(0,\pm \pi )`$ in the Brillouin zone. In addition, the experimental data show a single-particle peak structure in the ARPES spectra near $`(\pm \frac{\pi }{2},\pm \frac{\pi }{2})`$.
From these results emerges the phenomenological picture of a two-dimensional patched FS disconnected by pseudogap regions, with quasiparticle-like sectors located in between, the so-called hot spots and cold spots respectively. The hot spots are in essence associated with non-Fermi liquid states presumably responsible for all the anomalies observed in the normal phase of the underdoped cuprates. In contrast, the cold spots are the patches in which Landau Fermi liquid theory presumably holds. Our purpose in this work is to test the validity of this scenario. We consider two-dimensional ($`2d`$) quasiparticles with a truncated FS composed of four patches. The boundaries of each patch are taken to be flat, with a linear particle dispersion law. This latter description is known to produce non-Fermi liquid behavior as the result of FS nesting. The central regions of these patches are occupied by conventional Fermi liquid states. We take into account interactions between particles of opposite spins in both the Cooper $`\left(C\right)`$ and forward $`\left(F\right)`$ channels. As expected, we find logarithmically divergent non-interacting susceptibilities in these two channels. We calculate the resulting effective quasiparticle interaction up to two-loop order. The Cooper channel is associated with the superconducting instability and does not drive the physical system outside the Landau-Fermi liquid regime for repulsive bare interactions. This is opposite to what happens in the $`F`$ channel; the behavior of the effective interaction is therefore momentum dependent. Whenever the superconducting instability is dominant the effective coupling constant approaches the Fermi liquid fixed point. If the coupling constant is instead regulated by the $`F`$ channel it becomes infrared divergent at the Fermi surface. We also show that the flat sectors of the FS produce new features in the self-energy $`\mathrm{\Sigma }`$.
The resulting $`\mathrm{\Sigma }`$ scales as $`\omega \mathrm{ln}\omega `$ when $`\omega \to 0`$, as in a marginal Fermi liquid, producing $`\partial \mathrm{Re}\mathrm{\Sigma }/\partial \omega \to -\mathrm{\infty }`$ and a vanishing quasiparticle renormalization constant $`Z=0`$. As a result we conclude that there is no truly stable Landau Fermi liquid on such a patched Fermi surface if the effects of its flat sectors are appropriately taken into account. We consider initially a $`2d`$ Fermi surface which consists of four disconnected patches centered around $`(\pm k_F,0)`$ and $`(0,\pm k_F)`$ respectively, as shown in Fig. 1 (a). Let us assume that they are all Landau Fermi liquid like. The disconnected arcs separate occupied single-particle states from unoccupied ones. However, as we approach any patch boundary along the arc there cannot be such a sharp separation of single-particle states. Phrased in terms of Fermi liquid theory, there are large incoherence effects near each patch border which destroy the sharpness of the FS in these regions. That is, the border regions within any patch cannot truly be Landau like. As a result each of the disconnected patches has at least finite segments around its borders in which there should exist non-Fermi liquid states. We choose to represent these non-Fermi liquid sectors by finite flat FS pieces combining $`2d`$ and $`1d`$ features. Our resulting FS model, as shown in Fig. 1b, thus consists of four symmetrical disconnected patches as before, now with flat border sectors. The $`2d`$ Fermi liquid states defined around the patch center are such that their dispersion law becomes one-dimensional as we approach the borders, since the FS curvature is identically zero in these regions.
In order to make contact with conventional Fermi liquid theory let us consider the Lagrangian density $`\mathcal{L}`$ $$\mathcal{L}=\sum _\sigma \psi _\sigma ^{\dagger }\left(x\right)\left(i\partial _t+\frac{\nabla ^2}{2m^{\ast }}+\mu \right)\psi _\sigma \left(x\right)-\sum _{\sigma ,\sigma ^{^{}},\alpha ,\alpha ^{^{}}}\int _yU_{\sigma ,\sigma ^{^{}};\alpha ,\alpha ^{^{}}}\left(x-y\right)\psi _\alpha ^{\dagger }\left(x\right)\psi _{\alpha ^{^{}}}^{\dagger }\left(y\right)\psi _{\sigma ^{^{}}}\left(y\right)\psi _\sigma \left(x\right)$$ (1) where $`\sigma ,\sigma ^{^{}},\alpha ,\alpha ^{^{}}`$ are spin indices and $`\int _y=\int 𝑑td^2y`$. Here $`\mu =k_F^2/2m^{\ast }`$ with the fermionic excitations represented by the fermion fields $`\psi _\sigma `$ and $`\psi _\sigma ^{\dagger }`$ well defined only in the region defined by the four patches centered around the FS points $`(\pm k_F,0)`$ and $`(0,\pm k_F)`$ ( see Fig. 1 (b) ). Outside these disconnected patches the fermion fields are taken to be identically zero. Following this argument, the fermionic single-particle energy $`\epsilon \left(𝐩\right)`$ should be defined in accordance with the patch under consideration. For the up and down patches defined around the y-axis the Fermi liquid single-particle energy dispersion is given by $$\epsilon \left(𝐩\right)\simeq \frac{k_F^2}{2m^{\ast }}+\left(\pm p_y-k_F\right)v_F+\frac{p_x^2}{2m^{\ast }}$$ (2) with $`k_F-\lambda \le \pm p_y\le k_F+\lambda `$ and $`-\mathrm{\Delta }\le p_x\le \mathrm{\Delta }`$, where $`v_F`$ is the Fermi velocity and $`\lambda `$ and $`\mathrm{\Delta }`$ are wave vector cutoffs. In the corresponding flat sectors the energy dispersion is $`1d`$. That is, $$\epsilon \left(𝐩\right)\simeq \frac{k_F^2}{2m^{\ast }}+\left(\pm p_y-k_F\right)v_F$$ (3) for $`k_F-\lambda \le \pm p_y\le k_F+\lambda `$ and $`-\lambda \le p_x\le -\mathrm{\Delta }`$ $`or`$ $`\mathrm{\Delta }\le p_x\le \lambda `$. We have similar results for the patches defined around $`(\pm k_F,0)`$ exchanging $`p_x`$ and $`p_y`$.
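A minimal numerical sketch of the two dispersion laws, Eqs. (2) and (3), for the upper patch follows. The cutoff and mass values are illustrative, and the placement of the flat sectors at $`\mathrm{\Delta }<|p_x|\le \lambda `$ is an assumption consistent with the cutoffs quoted above.

```python
# Patch dispersion for the patch centered at (0, k_F): quadratic in p_x in the
# curved Fermi-liquid sector (|p_x| <= Delta), one-dimensional in the flat
# sectors (Delta < |p_x| <= lam). Units with hbar = 1; parameter values are
# illustrative, not taken from the text.
K_F, M_STAR, LAM, DELTA = 1.0, 1.0, 0.3, 0.2

def epsilon_upper(px, py, kF=K_F, m=M_STAR, lam=LAM, delta=DELTA):
    if abs(py - kF) > lam or abs(px) > lam:
        raise ValueError("momentum lies outside the patch")
    vF = kF / m
    e = kF**2 / (2 * m) + (py - kF) * vF   # common 1d piece, Eq. (3)
    if abs(px) <= delta:                   # curved Fermi-liquid sector, Eq. (2)
        e += px**2 / (2 * m)
    return e

# In a flat sector the energy on the arc p_y = k_F sticks at the chemical
# potential mu = k_F^2 / 2m*, independent of p_x:
print(epsilon_upper(0.25, K_F), epsilon_upper(0.28, K_F))
```

The vanishing curvature of the flat sectors is precisely what produces the perfect nesting exploited later in the text.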
Within this framework we can use perturbation theory to calculate the single-particle Green’s function $`G_\sigma `$, the particle-particle and particle-hole susceptibilities $`\mathrm{\Pi }_{\uparrow \downarrow }^{\left(0\right)}`$ and $`\chi _{\uparrow \downarrow }^{\left(0\right)}`$ respectively, and the one-particle irreducible functions $`\mathrm{\Gamma }_{\sigma ,\sigma ;\sigma ,\sigma }`$ and $`\mathrm{\Gamma }_{\sigma ,-\sigma ;-\sigma ,\sigma }`$. In all our perturbation theory calculations we assume for simplicity that the interaction function $`U\left(x\right)`$ reduces to $`U\delta \left(x\right)`$. Let us therefore proceed with our perturbation theory calculation of the one-particle irreducible function $`\mathrm{\Gamma }_{\sigma ,-\sigma ;-\sigma ,\sigma }(p_1,p_2;p_3,p_4)`$ up to two-loop order. The corresponding diagrams for $`\mathrm{\Gamma }_{\uparrow ,\downarrow ;\downarrow ,\uparrow }`$ are drawn in Fig. 2. The diagrams are given in terms of the particle-hole and particle-particle susceptibilities $`i\chi _{\uparrow \downarrow }^{\left(0\right)}\left(P\right)`$ $`=`$ $`{\displaystyle \int _q}G_{\uparrow }^{\left(0\right)}\left(q\right)G_{\downarrow }^{\left(0\right)}\left(q+P\right),`$ (4) $`i\mathrm{\Pi }_{\uparrow \downarrow }^{\left(0\right)}\left(P\right)`$ $`=`$ $`{\displaystyle \int _q}G_{\uparrow }^{\left(0\right)}\left(q\right)G_{\downarrow }^{\left(0\right)}\left(-q+P\right)`$ (5) with the free single-particle Green’s function $`G^{\left(0\right)}\left(q\right)=`$ $`\frac{1}{q_0-(\epsilon \left(𝐪\right)-\mu )\pm i\delta }`$ where $`\epsilon \left(𝐪\right)-\mu `$ is calculated around the central point of one of the four available FS patches, being either 1d or 2d depending on the relative position of the $`𝐪`$ vector. Let us initially consider these scattering processes in the Cooper channel, which is characterized by the choice $`p_1+p_2=(p_0,\mathrm{𝟎})`$ with $`p_0\ne 0`$.
If we calculate the integrals for $`i\mathrm{\Pi }_{\uparrow \downarrow }^{\left(0\right)}\left(P\right)`$ above we find in this case $$\mathrm{\Pi }_{\uparrow \downarrow }^{\left(0\right)}\left(p_0\right)\simeq \frac{\lambda /\pi ^2}{k_F/m^{\ast }}\left[\mathrm{ln}\left(\frac{2k_F\lambda /m^{\ast }-p_0-i\delta }{p_0+i\delta }\right)+\mathrm{ln}\left(\frac{2k_F\lambda /m^{\ast }+p_0-i\delta }{p_0-i\delta }\right)\right]$$ (6) In this channel the ln singularity in $`\mathrm{\Pi }_{\uparrow \downarrow }^{\left(0\right)}`$ is produced by both the flat and the curved FS sectors. Let us next calculate $`\chi _{\uparrow \downarrow }^{\left(0\right)}\left(P\right)`$ in the forward channel with $`𝐩_1=𝐩_3;𝐩_2=𝐩_4`$ for $`P=p_4-p_1=(p_0,2k_F𝐲)`$. Using the corresponding definition for $`\chi ^{\left(0\right)}`$ we find that $$\chi _{\uparrow \downarrow }^{\left(0\right)}\left(P\right)\simeq \frac{\left(\lambda -\mathrm{\Delta }\right)m^{\ast }}{4\pi ^2k_F}[\mathrm{ln}\left(\frac{2k_F\lambda /m^{\ast }+p_0-i\delta }{p_0-i\delta }\right)+\mathrm{ln}\left(\frac{2k_F\lambda /m^{\ast }-p_0-i\delta }{p_0+i\delta }\right)]$$ (7) As opposed to what happens with $`\mathrm{\Pi }_{\uparrow \downarrow }^{\left(0\right)}\left(p_0\right)`$, the singularity in $`\chi ^{\left(0\right)}`$ is entirely produced by the flat sectors of the Fermi surface. The appearance of singularities in $`\mathrm{\Gamma }`$ is momentum dependent since it is regulated by our choice of scattering channel. For $`𝐩_1+𝐩_2=\mathrm{𝟎}`$, $`\mathrm{\Pi }_{\uparrow \downarrow }^{\left(0\right)}`$ is singular and $`\chi _{\uparrow \downarrow }^{\left(0\right)}`$ is finite. For $`𝐩_4-𝐩_1=2k_F𝐲`$ only $`\chi _{\uparrow \downarrow }^{\left(0\right)}`$ is singular. Thus in the Cooper channel for $`𝐩_1+𝐩_2=\mathrm{𝟎}`$ it follows that our perturbation theory expansion for $`\mathrm{\Gamma }_{\uparrow ,\downarrow ;\downarrow ,\uparrow }`$ is dominated by the $`\mathrm{\Pi }_{\uparrow \downarrow }^{\left(0\right)}`$ terms in the diagram series.
We obtain in this case $$\mathrm{\Gamma }_{\uparrow ,\downarrow ;\downarrow ,\uparrow }\left(𝐩_1=-𝐩_2;𝐩_3=-𝐩_4;p_0\right)\simeq iU+iU^2\mathrm{\Pi }_{\uparrow \downarrow }^{\left(0\right)}\left(p_0\right)+iU^3\left(\mathrm{\Pi }_{\uparrow \downarrow }^{\left(0\right)}\left(p_0\right)\right)^2+\mathrm{}$$ (8) Following conventional renormalization theory procedure we can replace our original coupling function by its bare analogue so that all divergences are cancelled exactly in all orders of perturbation theory. For this let us define the bare coupling function $`U_{0;\uparrow \downarrow }(p_1,p_2;p_3,p_4)`$ such that if $`𝐩_1+𝐩_2=\mathrm{𝟎}`$ $$U_{0;\uparrow \downarrow }\left(𝐩_1=-𝐩_2;𝐩_3=-𝐩_4\right)=U+U^2\frac{2\lambda /\pi ^2}{k_F/m^{\ast }}\mathrm{ln}\left(\frac{2k_F\lambda /m^{\ast }}{\omega }\right)+\mathrm{}$$ (9) with $`\omega `$ being an energy scale parameter which will eventually approach zero. Using the condition $`\omega \partial U_0/\partial \omega =0`$ we obtain from this result the renormalization group $`\left(\text{RG}\right)`$ equation appropriate for a Fermi liquid, $`\beta \left(U\right)=\omega \frac{\partial U}{\partial \omega }=bU^2`$, where $`b=\frac{2\lambda /\pi ^2}{k_F/m^{\ast }}`$. This equation can easily be integrated to give $$U\left(\omega \right)=\frac{U\left(\mathrm{\Omega }\right)}{1+bU\left(\mathrm{\Omega }\right)\mathrm{ln}\left(\frac{\mathrm{\Omega }}{\omega }\right)},$$ (10) where $`\mathrm{\Omega }`$ is some fixed upper energy limit. Even if $`U\left(\mathrm{\Omega }\right)`$ is not small, $`U\left(\omega \right)`$ grows weaker for low values of $`\omega `$, signalling the existence of a trivial Landau fixed point for the renormalized $`U_{\uparrow \downarrow }(p_1,p_2;p_3,p_4)`$. Consider next the case in which $`𝐏=𝐩_4-𝐩_1=2k_F𝐲`$ and $`𝐩_1+𝐩_2=2\mathrm{\Delta }𝐱`$ with $`𝐩_1=𝐩_3;𝐩_2=𝐩_4`$ and $`p_0=\omega \ne 0`$.
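The Cooper-channel flow of Eq. (10) can be verified numerically: integrating the one-loop equation downward in energy reproduces the closed form. The values of $`b`$ and $`U(\mathrm{\Omega })`$ below are illustrative.

```python
# Numerical check of Eq. (10): Euler integration of the one-loop equation
# omega dU/domega = b U^2 downward in omega reproduces
# U(omega) = U(Omega) / (1 + b U(Omega) ln(Omega/omega)).
# B and U0 are illustrative values, not taken from the text.
import math

B, U0, OMEGA = 0.5, 1.0, 1.0

def flow(omega, steps=200000):
    # integrate in t = ln(Omega/omega), where dU/dt = -B U^2
    t_end = math.log(OMEGA / omega)
    dt = t_end / steps
    U = U0
    for _ in range(steps):
        U -= B * U * U * dt
    return U

def closed_form(omega):
    return U0 / (1.0 + B * U0 * math.log(OMEGA / omega))

omega = 1e-3
print(flow(omega), closed_form(omega))  # the two agree; U decays logarithmically
```

The logarithmic decay of the coupling is the hallmark of a marginally irrelevant interaction, which is why the Cooper channel leaves the Landau fixed point intact for repulsive $`U`$.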
Using our perturbation theory expansion we find $$\mathrm{\Gamma }_{\uparrow ,\downarrow ;\downarrow ,\uparrow }\left(𝐩_1=𝐩_3;𝐩_2=𝐩_4;\omega \right)\simeq iU-iU^2\chi _{\uparrow \downarrow }^{\left(0\right)}\left(\omega \right)+iU^3\left(\chi _{\uparrow \downarrow }^{\left(0\right)}\left(\omega \right)\right)^2+\mathrm{}$$ (11) since for $`2\mathrm{\Delta }>\lambda `$ the $`\chi _{\uparrow \downarrow }^{\left(0\right)}\left(\omega \right)`$ contributions dominate our perturbative expansion above. Following the same route as before we can define the bare interaction function $`U_{0;\uparrow \downarrow }(p_1,p_2;p_3,p_4)`$ for $`𝐩_4-𝐩_1=𝐏`$ as $$U_{0;\uparrow \downarrow }\left(𝐩_1=𝐩_3;𝐩_2=𝐩_4;𝐩_4-𝐩_1=𝐏;\omega \right)=U-U^2\frac{m^{\ast }\left(\lambda -\mathrm{\Delta }\right)}{2\pi ^2k_F}\mathrm{ln}\left(\frac{2k_F\lambda /m^{\ast }}{\omega }\right)+\mathrm{}$$ (12) As a result, in this channel, if we neglect the two-loop order terms the renormalized coupling $`U`$ satisfies the RG equation $`\beta \left(U\right)=\omega \frac{\partial U}{\partial \omega }=-cU^2`$, which when integrated gives $$U\left(\omega \right)=\frac{U\left(\mathrm{\Lambda }\right)}{1-cU\left(\mathrm{\Lambda }\right)\mathrm{ln}\left(\frac{\mathrm{\Lambda }}{\omega }\right)}$$ (13) for $`c=\frac{m^{\ast }\left(\lambda -\mathrm{\Delta }\right)}{2\pi ^2k_F}`$ and $`\mathrm{\Lambda }`$ being an upper energy limit. Clearly the physical system is driven outside the domain of validity of perturbation theory as $`cU\left(\mathrm{\Lambda }\right)\mathrm{ln}\left(\frac{\mathrm{\Lambda }}{\omega }\right)\to 1`$. The Fermi liquid infrared fixed point is physically unattainable in this limit. In fact if we consider the two-loop order terms the $`\beta `$ function series becomes $`\beta \left(U\right)=-cU^2-aU^3+\mathrm{}`$, where $`a=2c^2\mathrm{ln}\left(\frac{2k_F\lambda /m^{\ast }}{\mathrm{\Omega }}\right)`$. Thus there is a non-trivial fixed point $`U^{\ast }=-1/2c\mathrm{ln}\left(\frac{2k_F\lambda /m^{\ast }}{\mathrm{\Omega }}\right).`$ However since $`\left(\partial \beta \left(U\right)/\partial U\right)_{U^{\ast }}<0`$ it is also infrared unstable.
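In contrast with the Cooper channel, the one-loop forward-channel flow of Eq. (13) has a pole at a finite energy scale, below which the perturbative expression is meaningless. A short sketch with illustrative values of $`c`$, $`U(\mathrm{\Lambda })`$, and $`\mathrm{\Lambda }`$:

```python
# The denominator of Eq. (13), 1 - c U(Lambda) ln(Lambda/omega), vanishes at
# the finite scale omega* = Lambda exp(-1 / (c U(Lambda))): the perturbative
# breakdown scale. C, U_LAM, and LAM are illustrative values.
import math

C, U_LAM, LAM = 0.5, 1.0, 1.0

def U_forward(omega):
    return U_LAM / (1.0 - C * U_LAM * math.log(LAM / omega))

omega_star = LAM * math.exp(-1.0 / (C * U_LAM))
print(omega_star)                   # the breakdown scale
print(U_forward(2.0 * omega_star))  # the coupling is already strongly enhanced here
```

The divergence at $`\omega ^{\ast }`$ is the precise sense in which the forward channel is never infrared stable at one loop.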
Consider next the calculation of the self-energy $`\mathrm{\Sigma }\left(p\right)`$ up to two-loop order ( see Fig. 3 ). The one-loop diagram is trivial, producing only a constant shift in the Fermi energy. The same goes for the two-loop diagram of Fig. 3 (b). However the remaining one gives a nontrivial contribution: $$i\mathrm{\Sigma }_{\uparrow }\left(p\right)=U^2\int _qG_{\downarrow }^{\left(0\right)}\left(q\right)\chi _{\uparrow \downarrow }^{\left(0\right)}\left(q-p\right)$$ (14) From our previous calculation we noticed that the singularity in $`\chi ^{\left(0\right)}`$ is produced by the flat sectors of the FS when there is perfect nesting between the patches related by the scattering process. Since $`p`$ is an external variable we choose for convenience $`𝐩=(\mathrm{\Delta },k_F)`$. In this way the relevant patch for the $`𝐪`$ integration is $`(0,-k_F)`$. Taking into account that $`𝐪`$ is located in the patch $`(0,-k_F)`$ we can evaluate $`\chi _{\uparrow \downarrow }^{\left(0\right)}\left(q-p\right)`$ directly. If we replace the result in Eq. (14) the self-energy reduces simply to $$\mathrm{\Sigma }\left(p_0\right)=-\frac{U^2\left(m^{\ast }\left(\lambda -\mathrm{\Delta }\right)\right)^2}{16\pi ^4\left(2k_F\right)^2}p_0\left[\mathrm{ln}\left(\frac{2k_F\lambda /m^{\ast }-p_0-i\delta }{p_0+i\delta }\right)+\mathrm{ln}\left(\frac{2k_F\lambda /m^{\ast }+p_0-i\delta }{p_0-i\delta }\right)\right]$$ (15) Therefore it follows from this result that in the limit $`p_0\to 0`$ the self-energy $`\mathrm{\Sigma }`$ reproduces the marginal Fermi liquid result $`\mathrm{Re}\mathrm{\Sigma }\left(p_0\right)\propto p_0\mathrm{ln}p_0`$, producing in this way a quasiparticle renormalization constant $`Z`$ identically zero: $`Z=\left(1-\frac{\partial }{\partial p_0}\mathrm{Re}\mathrm{\Sigma }\left(p_0=0\right)\right)^{-1}=0`$. The Fermi liquid states are in this way turned into a marginal Fermi liquid by the interaction effects produced by the flat sectors of the patched Fermi surface.
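The vanishing of $`Z`$ can be illustrated numerically from the structure of Eq. (15). Below, the real part of the self-energy is modeled as $`-Ap_0[\mathrm{ln}((W-p_0)/p_0)+\mathrm{ln}((W+p_0)/p_0)]`$ with illustrative constants $`A`$ and $`W`$ standing for the prefactor and $`2k_F\lambda /m^{}`$; the sign convention is an assumption, chosen so that the slope diverges and the quasiparticle weight vanishes logarithmically.

```python
# Marginal-Fermi-liquid weight: Re Sigma ~ -A p0 ln(W/p0) implies that the
# slope dRe Sigma/dp0 -> -infinity as p0 -> 0, so that
# Z = (1 - dRe Sigma/dp0)^(-1) -> 0. A and W are illustrative constants.
import math

A, W = 0.1, 1.0

def re_sigma(p0):
    return -A * p0 * (math.log((W - p0) / p0) + math.log((W + p0) / p0))

def z_weight(p0, h=1e-9):
    slope = (re_sigma(p0 + h) - re_sigma(p0)) / h   # forward finite difference
    return 1.0 / (1.0 - slope)

for p0 in (1e-2, 1e-4, 1e-6):
    print(p0, z_weight(p0))  # the quasiparticle weight decreases toward zero
```

Because the decrease is only logarithmic in $`p_0`$, the weight remains finite at any nonzero energy, which is precisely the marginal character of the state.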
In conclusion, in this work we test the validity of the phenomenological picture of a patched FS composed of confined quasiparticle states (cold spots) separated by non-Fermi liquid (hot spot) sectors which are presumably responsible for the anomalous metallic properties observed in the normal phase of underdoped high-temperature superconductors. We consider a $`2d`$ disconnected Fermi surface composed of four different patches. In each of these patches there are conventional Fermi liquid states defined around the center as well as non-Fermi liquid states located in the border sectors. These latter states are modeled by single-particle states with flat Fermi surfaces. This representation is known to produce non-Fermi liquid behavior due to the presence of perfect nesting of the Fermi surface patches. For simplicity we assume that all particles interact locally, but we take proper account of the spin degrees of freedom. Using a simplified model Lagrangian we calculate the one-particle irreducible function $`\mathrm{\Gamma }_{\uparrow ,\downarrow ;\downarrow ,\uparrow }`$ up to two-loop order. The perturbation series is momentum dependent. In the Cooper channel the expansion is dominated by the particle-particle susceptibility $`\mathrm{\Pi }_{\uparrow \downarrow }^{\left(0\right)}`$, which is logarithmically divergent even if flat sectors are not present in our system. Making a renormalization group analysis we show that this channel is infrared stable, with the renormalized repulsive coupling constant approaching the trivial Landau fixed point as we approach the Fermi surface. In contrast, in the forward scattering channel the perturbation series is dominated by the logarithmic divergence of $`\chi _{\uparrow \downarrow }^{\left(0\right)}`$, which is now produced by the flat sectors of the FS. This time the first derivative of $`\beta \left(U\right)`$ with respect to $`U`$ is negative definite. The system is now infrared unstable and only reaches a negative non-trivial fixed point in the ultraviolet limit.
Finally we calculate the quasiparticle self-energy up to two-loop order. We show that one of the second-order diagrams produces a non-trivial contribution to $`\mathrm{\Sigma }`$. When we evaluate this term we find that it reproduces the marginal Fermi liquid result $`\mathrm{Re}\mathrm{\Sigma }\propto \omega \mathrm{ln}\omega `$ when $`\omega \to 0`$. Therefore we can conclude that the confined Fermi liquid states, the so-called cold spots, are drastically altered and turned into a marginal Fermi liquid by the quasiparticles located in the flat sectors of the Fermi surface. ACKNOWLEDGMENTS Two of us ( AF and TS ) would like to acknowledge financial support from the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and Financiadora de Estudos e Projetos (FINEP).
# Diffuse Galactic Soft Gamma-Ray Emission ## 1 Introduction Galactic diffuse gamma-ray emission provides information on the origin, propagation, and interaction of cosmic-ray electrons within the Galaxy, complementary to the knowledge gained from direct observation of the cosmic-ray electrons in the Solar vicinity, and from nonthermal radio emission produced by interactions with Galactic magnetic fields. The cosmic-ray electron spectrum has been directly measured above a few GeV (Golden et al. (1984, 1994); Taira et al. (1993)), and extended down to 300 MeV by radio synchrotron observations (Weber (1983)). Below 300 MeV, the electron spectrum must be constrained by the gamma-ray observations themselves. Observations and mapping of this diffuse continuum also provide a background which, as soft gamma-ray observations become more sensitive, will be important for the detailed study of faint sources. A number of satellite and balloon instruments have measured the Galactic diffuse continuum in the hard X-ray/soft gamma-ray bands, e.g., OSO 7 \[7-40 keV\] (Wheaton (1976)), HEAO 1 \[2-50 keV\] (Worrall et al. (1982)), SMM \[0.3-8.5 MeV\] (Harris et al. (1990)), GRIS \[20 keV - 10 MeV\] (Gehrels & Tueller (1993)), COMPTEL/CGRO \[1-30 MeV\] (Strong et al. (1996)), Ginga \[2-16 keV\] (Yamasaki et al. (1996)), OSSE/CGRO \[70 keV - 4 MeV\] (Purcell et al. (1995)), and RXTE \[3-35 keV\] (Valinia & Marshall (1998)). These observations have revealed nonthermal emission distributed along a Galactic ridge that extends $`\pm 60^{\circ }`$ in longitude. The hard X-ray/soft gamma-ray continuum is believed to be dominated by three components (Strong et al. (1995); Skibo & Ramaty (1993)): nonthermal bremsstrahlung from cosmic-ray interactions with the interstellar gas, inverse Compton scattering of the interstellar radiation (optical, infrared, and cosmic microwave background) off of the cosmic-ray electrons, and positronium continuum from the annihilation of Galactic positrons below 511 keV.
At lower energies ($`<10keV`$) thermal emission from the hot ISM dominates, while at higher energies ($`>100MeV`$) $`\pi ^0`$ decay emission dominates. Models of the nonthermal bremsstrahlung and inverse Compton components (Strong et al. (1995); Skibo & Ramaty (1993)) predict that bremsstrahlung dominates in the MeV range, though inverse Compton still provides a significant fraction of the total flux; at lower energies, however, bremsstrahlung is highly inefficient compared to ionization and Coulomb collision losses (e.g., Strong et al. (1995); Skibo, Ramaty & Purcell (1996)), and the bremsstrahlung spectrum drops rapidly; therefore, inverse Compton is expected to dominate below 1 MeV. The combination of the bremsstrahlung and inverse Compton, however, should produce a very smooth continuum from 10 keV to 30 MeV (Strong et al. (1995); Skibo & Ramaty (1993)), though the addition of the positronium continuum will produce a well-defined spectral jump at 511 keV. This smooth continuum makes it difficult to distinguish the relative contributions of the bremsstrahlung and inverse Compton emission from the spectrum alone. The spatial distributions of the two components, however, should be significantly different. Due to the diffuse cosmic background, and the large scale heights of the interstellar optical and IR emission and the cosmic-ray electrons compared to the interstellar gas, the scale height of the inverse Compton component should be significantly larger than the scale height of the bremsstrahlung emission. The presence of both broad and narrow scale height components has been confirmed in the 1-30 MeV energy band by COMPTEL/CGRO (Strong et al. (1996)) and in the 3-35 keV band by RXTE (Valinia & Marshall (1998)), adding strong support to the two-component model. These observations also suggest a smooth continuum from 10 keV to 30 MeV, with measured photon indices of 1.8 (RXTE) and $``$2.0 (COMPTEL/CGRO). 
This smooth continuum is consistent with models of the combined bremsstrahlung and inverse Compton emission (e.g., Strong et al. (1995); Skibo & Ramaty (1993)). Observations in the 50 keV - 4 MeV energy band with OSSE/CGRO, the energy range between COMPTEL/CGRO and RXTE, have brought into question this simple two component (plus positronium) model of the Galactic diffuse continuum (Purcell et al. (1995)). These observations suggest a strong steepening of the diffuse spectrum below 200 keV which is difficult to explain in terms of this model. The inverse Compton emission at these energies is produced predominantly by higher energy ($`>GeV`$) electrons, the spectrum of which has been well constrained by Galactic synchrotron radio emission (Weber (1983)), so that an inverse Compton origin for a hard X-ray excess is highly unlikely (Skibo (1997)). Furthermore, due to the inefficiency of bremsstrahlung emission below 1 MeV, the energetics required to produce such hard X-ray emission exceeds the total power expected from Galactic supernovae (Skibo (1997)). Measurements of the diffuse Galactic hard X-ray and soft gamma-ray emission are inherently upper limits due to the large number of hard X-ray compact sources in the Galactic Center region and the inevitable source confusion. Difficulty arises because non-imaging instruments with large fields-of-view (FOVs) are most sensitive to the diffuse continuum but are highly susceptible to compact source confusion, whereas imaging instruments or instruments with narrow FOVs are able to measure the compact sources individually, but are not as sensitive to the diffuse emission. Scans by wide and moderate FOV instruments (e.g., HEAO 1, SMM), and coordinated observations between imaging instruments and wider FOV instruments (e.g., OSSE/CGRO & SIGMA) have been performed to subtract out the compact source contributions, but still show hard X-ray spectra that are easiest to explain by compact source confusion. 
The HIREGS spectrometer observed the Galactic Center region in January 1995 during a long duration balloon flight from Antarctica (Boggs et al. (1998)). HIREGS is a wide FOV instrument ($`21^{\circ }`$ FWHM), but three sets of narrow FOV passive hard X-ray collimators ($`3.7^{\circ }\times 21^{\circ }`$ FWHM) were added over half of the twelve detectors for this flight. HIREGS thus simultaneously performed wide and narrow FOV observations of the same region, allowing the direct measurement and separation of compact source spectra and the Galactic diffuse emission. In this paper we present the analysis and results of the HIREGS measurements of the diffuse Galactic emission in the 30-800 keV band from the Galactic Center region. Our observations are consistent with a single power-law (plus positronium) over the entire energy band, with a spectral index that agrees well with RXTE observations at lower energies, and COMPTEL/CGRO observations at higher energies. These results are inconsistent with previous observations of spectral steepening below 200 keV (e.g., OSSE/CGRO), which are most likely due to compact source contamination. Upper limits on the longitudinal gradients along the Galactic ridge are presented. We discuss implications of the soft gamma-ray diffuse Galactic emission for the cosmic-ray electron spectrum. ## 2 Instrument and Observations HIREGS is a high-resolution spectrometer (2.0 keV FWHM at 592 keV) covering three orders of magnitude in energy, from 20 keV to 17 MeV. The instrument is an array of twelve high-purity 6.7-cm diameter by 6.1-cm height coaxial germanium detectors designed for long duration balloon flights. The array is enclosed on the sides and bottom by a 5-cm thick BGO shield, and collimated to a $`21^{\circ }`$ FWHM FOV by a 10-cm thick CsI collimator. The instrument (Pelling et al. (1992)) and the flight (Boggs et al. (1998)) are described in greater detail elsewhere.
Passive molybdenum (0.15 mm) collimators were designed for this flight to limit the hard X-ray ($`<200keV`$) FOV for half of the detectors to $`3.7^{\circ }\times 21^{\circ }`$ FWHM (Figure 1). Three sets of these collimators were inserted in the main collimator and slanted in parallel directions, providing three parallel but separate FOVs. Therefore, the instrument had four different hard X-ray FOVs for any given pointing: one wide $`21^{\circ }`$ FOV, and three narrow $`3.7^{\circ }\times 21^{\circ }`$ FOVs. Spectra were collected for 30 minute periods, alternating between source observations and background observations at the same horizon elevation. The backgrounds were taken at least one FOV width off of the Galactic Plane to avoid source contamination, and were alternated to the east and west of the source pointings to avoid any systematic errors. Despite these precautions, there is a soft source ($`<40keV`$) contaminating the westerly background pointings. This source affected the preliminary analysis of these data (Boggs et al. (1997)), leading to misidentification of the compact sources and forcing one spectrum to show significant spectral flattening below 40 keV. For this analysis, the westerly background spectra and corresponding source spectra are not used for energy bins below 40 keV. The data reported here are for three different pointing directions (Figure 1): 48 hrs centered on the Galactic Center (l=$`0^{\circ }`$, b=$`0^{\circ }`$), 4 hrs on GRO J1655-40 (l=$`-14.6^{\circ }`$, b=$`1.8^{\circ }`$), and 23 hrs on the Galactic Plane (l=$`-25^{\circ }`$, b=$`0^{\circ }`$). ## 3 Conversion to Photon Spectra The source and background counts were gain and livetime corrected, and then binned into 42 channels from 20 keV to 17 MeV for each separate FOV. A narrow interval around the neutron-induced instrumental line at 198 keV was excluded due to its variability, and the 511 keV line subtraction was performed in a separate analysis (Boggs (1998)).
The rest of the lines were either weak enough or varied slowly enough that they required no special treatment. The corresponding background is subtracted from each source observation and the residual count spectra are corrected for atmospheric attenuation and detector response by a procedure equivalent to directly inverting the coupled atmospheric/detector response matrix. Matrices were modeled for a range of atmospheric column densities using a Monte Carlo simulation of a thin atmosphere above the HIREGS instrument, then normalized to calibration measurements performed before the flight (Boggs (1998)). For the exact column density of any source observation, the response matrix is interpolated from the models. All of the photon spectra for a given pointing direction are then weighted and averaged into a single spectrum for each FOV. These spectra are weighted by the product of livetime and atmospheric transmission, which is an energy-dependent factor that is proportional to the expected signal-to-noise. Spectra are weighted by this factor instead of the measured signal-to-noise so as not to bias the average towards those spectra where statistical fluctuations have resulted in a higher signal. ## 4 Determination of Compact Sources The hard X-ray (30-200 keV) photon flux for each of the 12 FOVs is shown in Figure 2. The flux has been corrected for the detector response and the atmospheric transmission, but not for the angular response of the main collimator or the narrow collimators. Therefore, these rates are the sum of the diffuse and compact sources in the Galactic Center region convolved through the angular response of the instrument for each separate FOV. The wide ($`21^{\circ }`$) FOVs are large enough that this flux will be a combination of possibly several compact sources plus a strong contribution from the diffuse Galactic continuum.
The narrow ($`3.7^{\circ }\times 21^{\circ }`$) FOVs \[1-3, 5-7, 9-11\] are oriented relative to the Galactic Plane such that a compact source (or multiple sources) within one of these FOVs will dominate over the diffuse flux, and thus permit identification of the compact sources and their intensities. The problem is to determine the number of compact sources, their locations, and their fluxes — as well as the diffuse component distribution and flux. The approach taken here is to assume that the possible compact sources observed must be among the previously known hard X-ray sources in the Galactic Center region. Figure 1 shows the 10 known sources used in this analysis. The hard X-ray sources, however, are highly variable — any combination of the sources could be contributing to the observations; therefore, nothing is assumed about the source intensities except that they have to be positive. These ten compact source locations were convolved through the angular response for each FOV to determine a response array for each source (Figure 3). The contribution of any source to the overall photon rates (Figure 2) is given by the response array for that source multiplied by the source flux. (The response arrays are actually energy dependent, but vary little below 200 keV where the main and narrow collimators are nearly opaque.) Each source has a unique response array, which allows deconvolution of the source contribution to the measured fluxes. The assumed model for the Galactic diffuse continuum is that the flux comes entirely from the Galactic Plane, the scale height of this distribution is much less than the $`21^{\circ }`$ FOV, and that the longitudinal distribution is flat for the range of observations here ($`-35^{\circ }<l<10^{\circ }`$). The resulting distribution is a line of constant flux along the Galactic Plane.
The assumption of a constant longitudinal distribution is based on previous observations of a Galactic ridge extending $`\pm 60^{\circ }`$ in longitude in the hard X-ray and soft gamma-ray bands (e.g., Wheaton (1976); Worrall et al. (1982); Koyama et al. (1989); Gehrels & Tueller (1993); Strong et al. (1996); Purcell et al. (1995); Valinia & Marshall (1998)), and will be shown to be consistent with this observation as well. The narrow Galactic Plane distribution was convolved through the angular response for each FOV to determine a response array for the diffuse component. This response array differs in units from the compact source arrays since it is expressed as the effective angular extent of the Galactic Plane through each FOV (Figure 3). The contribution of the diffuse flux to the overall photon rates (Figure 2) is given by this response array multiplied by the Galactic Plane flux, which is measured in \[$`ph/cm^2/s/rad`$\]. Given the set of 10 compact-source response arrays and the Galactic diffuse response array, it remains only to find the combination of compact source (CS) and Galactic Plane diffuse (GP) intensities which, when multiplied by the appropriate response arrays, best fit the observed rates in Figure 2. Statistically, with 12 data points and 11 intensities to fit, there is only one remaining degree of freedom to judge the fit. Instead of trusting such an unconstrained fit, especially with the errors on these observations, a more systematic and physical approach is applied. The hard X-ray sources are highly variable, and often below the level of detection. Given this variability, the approach we have taken to determine the compact sources is to try an increasingly complex systematic combination of source models $`GP+nCS`$ ($`n=0,1,\dots ,10`$), where we increase the number $`n`$ of compact sources, trying all combinations at each $`n`$, until a good fit is determined. The first acceptable fits, which are also very good fits, occur for the combination $`GP+3CS`$ model.
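The `GP+nCS` search just described can be sketched as a loop over source combinations of increasing size (an illustration with our own names; for brevity it uses an unconstrained least-squares fit rather than the positivity-constrained fit of the actual analysis):

```python
import itertools
import numpy as np

def best_source_combination(resp_cs, resp_gp, rates, errors, chi2_max):
    """Try GP + nCS models of increasing complexity n = 0, 1, ...

    resp_cs: (n_fov, n_src) compact-source response arrays
    resp_gp: (n_fov,) Galactic Plane diffuse response array
    Returns (chi2, combo, intensities) for the first n at which some
    combination of n sources reaches chi2 <= chi2_max.
    """
    n_fov, n_src = resp_cs.shape
    best = None
    for n in range(n_src + 1):
        best = None
        for combo in itertools.combinations(range(n_src), n):
            # design matrix: GP column plus the chosen source columns
            A = np.column_stack([resp_gp] + [resp_cs[:, s] for s in combo])
            Aw, bw = A / errors[:, None], rates / errors
            x, *_ = np.linalg.lstsq(Aw, bw, rcond=None)
            chi2 = float(((Aw @ x - bw) ** 2).sum())
            if best is None or chi2 < best[0]:
                best = (chi2, combo, x)
        if best[0] <= chi2_max:
            return best
    return best
```

Stopping at the first acceptable n mirrors the preference for the simplest model that fits the 12 rates.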
There are actually four combinations of three compact sources which give acceptable fits. All of these combinations have the sources GRO J1719-24 and 1E1740-2942 in common, but four choices of the third source give acceptable fits: GRO J1655-40 \[$`\chi _\nu ^2=5.40`$, $`\nu =8`$\], OAO 1657-415 \[5.55,8\], GX 340+0 \[6.29,8\], 4U1700-377 \[9.95,8\]. These four sources are located within $`10^{\circ }`$ of each other, and all lie along a line roughly parallel to the narrow collimator orientation (Figure 1), which explains why they could be confused in this analysis. While these measurements cannot absolutely determine which of the compact sources is the actual third source, there are two strong reasons to believe that this source is the marginally best-fit source, GRO J1655-40. The first reason is that GRO J1655-40 was reported as undergoing a small flare during the period of this balloon flight, as determined by BATSE using occultation techniques (Harmon et al. (1995)). The second reason is due to the spectral results below. This third compact source has a very hard spectrum, with a power-law ($`\alpha =2.3`$) showing no sign of a spectral break below 330 keV. This hard spectrum is characteristic of black hole candidates. Of the four candidate sources, OAO 1657-415, GX 340+0, and 4U1700-377 are all neutron star binaries, while GRO J1655-40 is one of the strongest black hole candidates (Bailyn et al. (1995)). The spectrum supports the conclusion that GRO J1655-40 is the third compact source in these observations. The best-fit combination of the response arrays for this model is shown in Figure 2. The overall fit is excellent \[$`\chi _\nu ^2=5.40`$, $`\nu =8`$\], with the individual flux rates given in Table 1. The uncertainties on these flux rates are determined by varying the individual source intensities in the model fit independently until $`\delta \chi ^2=1`$.
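The quoted uncertainties follow the usual delta-chi-square prescription; a sketch of the one-parameter scan (a simplification of ours: the other parameters are held fixed rather than re-minimized at each step):

```python
import numpy as np

def delta_chi2_bounds(chi2_func, best_params, index, step=1e-3, dchi2=1.0):
    """Vary parameter `index` until chi^2 rises by dchi2 above the minimum.

    chi2_func:   callable mapping a parameter vector to chi^2
    best_params: best-fit parameter vector
    Returns (lower, upper) bounds for that parameter.
    """
    chi2_min = chi2_func(best_params)
    bounds = []
    for direction in (-1.0, 1.0):
        p = np.array(best_params, dtype=float)
        # step outward until the chi^2 excess reaches dchi2
        while chi2_func(p) < chi2_min + dchi2:
            p[index] += direction * step
        bounds.append(p[index])
    return tuple(bounds)
```

For a single parameter and Gaussian errors, dchi2 = 1 corresponds to a one-sigma interval.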
## 5 Spectral Decomposition

To determine the individual spectra of these sources, one would like to perform the same analysis as above, only using smaller energy bands, extending the analysis from 30 keV to 800 keV (the upper limit of good statistics), and assuming the four sources ($`GP+3CS`$) instead of deriving them. This analysis, however, tends to produce large fluctuations, and hence spectral features, that are clearly not real: an excess in one source spectrum has a corresponding drop in another. The measured spectra in the 12 FOVs are fairly smooth (Figure 4), exhibiting no obvious spectral features. Therefore, it is natural to assume that the source spectra themselves are smooth, i.e. that spectral features in the sources are not combining in a way that produces overall smooth measured spectra. In order to keep the source spectra smooth, we assume spectral models for the four sources, and then determine the best-fit parameters of these models. For the three compact sources we assume power-laws with exponential breaks: $$f(E)=f(E_o)(E/E_o)^{-\alpha }e^{-(E-E_o)/E_{break}},(E_{low}<E<E_{high}),$$ (1) where the spectral index $`\alpha `$, exponential break $`E_{break}`$, and normalization $`f(E_o)`$ are free parameters of the fit. This is a good general function because it can fit either a pure power-law ($`E_{break}>E_{high}`$) or a thermal bremsstrahlung ($`\alpha \approx 1.3`$–$`1.5`$). For the diffuse continuum, we assume a single power-law plus positronium continuum, where the normalization of the two components is allowed to vary independently. There are two issues concerning this fit which should be addressed immediately. The first issue arises because of the assumed single power-law: several observations have shown a spectral break or steepening in the hard X-ray range (Gehrels & Tueller (1993); Purcell et al. (1995)), which a forced single power-law fit will not reproduce. The ultimate test of this fit is the $`\chi ^2`$.
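Equation (1) is straightforward to evaluate numerically; as a sketch (the function name and default range are ours, and we write the standard falling power law with explicit minus signs):

```python
import math

def broken_power_law(E, f0, E0, alpha, E_break, E_low=30.0, E_high=800.0):
    """f(E) = f(E0) * (E/E0)**(-alpha) * exp(-(E - E0)/E_break).

    With E_break >> E_high the exponential is ~1 and the model is
    effectively a pure power law; with alpha ~ 1.3-1.5 it mimics
    thermal bremsstrahlung.
    """
    if not (E_low < E < E_high):
        raise ValueError("E outside the fitted energy range")
    return f0 * (E / E0) ** (-alpha) * math.exp(-(E - E0) / E_break)
```

By construction f(E0) = f0, so the normalization parameter is the flux at the pivot energy E0.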
The fitting method described below turns out to be very robust for this data, in the sense that if a "wrong" fit is forced, the $`\chi ^2`$ grows very quickly. We also confront this problem directly, however, by performing the spectral deconvolution twice, once over the full energy range (30-800 keV) and once over just the hard X-ray range (30-200 keV), and confirming the consistency of the fit over the entire soft gamma-ray range. The second issue concerns our use of a single incident positronium flux for all the FOVs of the instrument. While there has been evidence for a flat Galactic ridge for the power-law component, the same is not true for the positronium continuum. Our deconvolution, therefore, fits the average positronium continuum, not accounting for a gradient. This fit is justified, as will be shown, because our measurements indicate that the gradient is less than our uncertainty for this component. Using reasonable initial estimates of the spectral shapes and intensities, we apply a relaxation procedure to determine the best estimates of these spectra consistent with every FOV. An improved estimate of each source (compact or diffuse) is obtained by subtracting all of the other estimated source fluxes convolved through the collimator response from each FOV, then finding the best-fit spectrum to the residuals consistent with the angular efficiency of the source in each FOV. This spectrum is used as the new best estimate for the source, and the procedure is then repeated on the next source. This procedure is iterated until the source fits have stabilized and the $`\chi ^2`$'s are minimized. The technique is stable with respect to the initial estimates and the order in which the sources are updated. It is also sensitive to spectral models: with 397 degrees of freedom, spectral models that are forced (i.e. wrong shapes or parameters) quickly give unacceptable $`\chi ^2`$'s.
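The relaxation procedure just described is essentially a Gauss-Seidel sweep over the sources. A stripped-down sketch (our own illustration: the per-source fit below is a plain weighted least-squares spectrum rather than the parametric spectral-model fit of the text):

```python
import numpy as np

def _lsq_spectrum(residual, eff):
    """Best-fit spectrum b minimizing sum_f |residual_f - eff_f * b|^2."""
    return (eff[:, None] * residual).sum(axis=0) / (eff ** 2).sum()

def relaxation_fit(resp, spectra, fit_one=_lsq_spectrum, n_iter=30):
    """Iteratively re-estimate each source spectrum from the residuals.

    resp:    (n_fov, n_src) angular efficiency of each source in each FOV
    spectra: (n_fov, n_bins) measured photon spectra
    Returns (n_src, n_bins) estimated source spectra.
    """
    n_fov, n_src = resp.shape
    models = np.zeros((n_src, spectra.shape[1]))
    for _ in range(n_iter):
        for s in range(n_src):
            # subtract every *other* estimated source convolved
            # through the collimator response, then refit source s
            others = resp @ models - np.outer(resp[:, s], models[s])
            models[s] = fit_one(spectra - others, resp[:, s])
    return models
```

Because the update cycles through the sources in place, the procedure converges for well-conditioned response arrays regardless of the starting estimates, matching the stability noted in the text.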
The overall goodness of the fits is determined by convolving the best-fit model spectra through the angular response of the instrument for each FOV, and finding the $`\chi ^2`$ fit to the 12 measured spectra simultaneously (Figure 4). Uncertainties were determined by varying the parameters independently until $`\delta \chi ^2=1`$. The spectral models fit the observations very well, with an overall \[$`\chi _\nu ^2=392.1`$, $`\nu =397`$\], which gives a 56% probability that the four model spectra agree with the observations. The best spectral fits to the compact sources are given in Table 2. 1E1740-2942 is the weakest of the sources, and shows a spectral break near or just below the lower energy range (30 keV) of our observations, which makes the spectral index highly uncertain; therefore, it is assumed that the index fits a thermal bremsstrahlung ($`\alpha =1.4`$). The spectra for these compact sources are all consistent with black hole candidates, and support the validity of the spectral decomposition process. The Galactic diffuse spectrum is well fit by a power-law with photon spectral index $`\alpha =(1.80\pm 0.25)`$, $`f(70keV)=(1.95\pm 0.28)\times 10^{-4}`$ $`ph/cm^2/s/rad/keV`$, and a positronium continuum flux $`(1.24\pm 0.30)\times 10^{-2}`$ $`ph/cm^2/s/rad`$ (Figure 5). The measured 511 keV flux from the Galactic Center is $`(1.25\pm 0.52)\times 10^{-3}`$ $`ph/cm^2/s`$, the analysis of which is presented elsewhere (Boggs (1998)). This spectral index is consistent with CGRO/COMPTEL measurements at higher energies (Strong et al. (1996)), and RXTE measurements at lower energies (Valinia & Marshall (1998)).

## 6 Self-Consistency Analysis

In order to determine whether there is any evidence for a hard X-ray spectral break or steepening in the diffuse continuum, the above analysis was repeated for just the hard X-ray spectra (30-200 keV).
If the Galactic diffuse emission does steepen in the hard X-ray range, then the power-law index for this fit should be significantly softer than the index for the entire energy range fit. The best-fit Galactic diffuse hard X-ray spectrum (30-200 keV) has index $`\alpha =(2.03\pm 0.32)`$ with $`f(70keV)=(1.89\pm 0.28)\times 10^{-4}`$ $`ph/cm^2/s/rad/keV`$. This fit is consistent with the previous results over the entire range (30-800 keV), and therefore these observations show no evidence for spectral steepening in the hard X-ray range. Furthermore, this measurement strongly rules out the best-fit spectrum determined by OSSE/CGRO observations in conjunction with SIGMA (Purcell et al. (1995)). Figure 6 shows the best-fit parameters and uncertainty curves for the (30-800 keV) fit. Shown for comparison are the parameters for the OSSE/CGRO hard X-ray results. As can be seen, this measurement is inconsistent with the OSSE/CGRO results at $`\delta \chi ^2>100`$ (only taking into account the uncertainties on this measurement), corresponding to a probability of consistency of $`<10^{-6}`$. As a check on the spectral deconvolution, the model fits are integrated over the range (30-200 keV) and the fluxes are compared to the source flux rates derived in Section 4. The results of these integrations are given in Table 1. The model spectra are consistent with the directly deconvolved hard X-ray fluxes for all four sources. Given the best-fit spectral models, we can subtract the compact source contribution from each FOV and determine whether the assumption of a flat Galactic ridge distribution, both for the power-law and the positronium continuum, is justified. The compact source spectra were convolved through the instrument response and subtracted from the flux in each FOV. The remaining diffuse flux is then determined for each FOV separately, and the flux rates in the FOVs can be compared and gradients determined.
Using this technique, the diffuse hard X-ray (30-200 keV) flux shows a gradient of $`(1/f)(df/dl)=(0.30\pm 0.55)`$ $`rad^{-1}`$ which is consistent with a flat distribution. For the positronium flux, the gradient is $`(1/f)(df/dl)=(0.52\pm 1.19)`$ $`rad^{-1}`$ which is also consistent with a flat distribution. Therefore, our assumption of a flat Galactic diffuse distribution is self-consistent with the deconvolved source distribution and fluxes. These measurements place upper limits on the gradient of the components of the diffuse continuum in the central radian of the Galaxy.

## 7 Discussion

From the measurements presented in this paper, we have concluded that the diffuse emission from the Galactic ridge can be well fit by a single power-law ($`\alpha =1.8`$) plus the positronium continuum in the energy range 30-800 keV. The energy flux per logarithmic energy decade, $`E^2f`$, for our Galactic diffuse measurements is plotted in Figure 7 along with the COMPTEL/CGRO (Strong et al. (1996)) and the RXTE (Valinia & Marshall (1998)) observations. From this plot we can see a smooth continuum (plus positronium) extending from 10 keV up to 30 MeV. However, our normalization is slightly lower than the COMPTEL/CGRO and RXTE results, which is most likely due to our assumption that the latitude distribution of the diffuse emission is very narrow. For a broad latitude distribution, our overall normalization could be low by factors as large as $`\sim `$2; however, the spectral shape is unaffected by this overall normalization factor. Also included is the bremsstrahlung and inverse Compton model of Strong et al. (1996), which we have smoothly extrapolated from 10 keV down to 3 keV, and to which we have added our measured positronium continuum, and the thermal Raymond-Smith plasma component ($`kT=2.9keV`$) measured in the X-ray range by RXTE (Valinia & Marshall (1998)). This model fits the diffuse emission measurements well from $`\sim `$20 keV to 30 MeV.
With the addition of the $`\pi ^0`$ emission (Stecker (1988); Bertsch et al. (1994)), this model fits the diffuse emission well up to $`\sim `$1 GeV (Strong et al. (1996)). This model predicts that the HIREGS observations are dominated by the inverse Compton (plus positronium) emission, which is consistent with the large scale-height components measured by COMPTEL/CGRO and RXTE. This model, however, underestimates the diffuse continuum below 20 keV as measured by RXTE. While it is beyond the scope of this paper to determine the exact nature of this discrepancy, we present two possible explanations. One possibility is that thermal emission from a superhot ISM component could be contributing above 10 keV. Such a component ($`kT\sim 7keV`$) was invoked to explain the hard component measured in ASCA observations of the Galactic ridge in the energy range 0.7-10 keV (Kaneda et al. (1998)). Given the Galactic disk gravitational potential of $`\sim 0.5keV`$ (Townes (1989)), however, it is not clear how such a superhot component could be confined to the Galactic disk (e.g., Kaneda et al. (1998); Valinia & Marshall (1998)). A more probable explanation is that the measured excess reflects the uncertainties in the cosmic-ray electron spectrum below 100 MeV. The solar modulation of the cosmic-ray electrons allows direct observation down to only a few GeV (Golden et al. (1984, 1994); Taira et al. (1993)), and synchrotron radio emission further constrains the electrons down to 300 MeV (Weber (1983)). At lower energies, the cosmic-ray electron spectrum is not constrained by observations. Theoretical calculations have been performed to extend the spectrum to lower energies (e.g., Strong et al. (1995); Skibo & Ramaty (1993)), which have in turn been used to estimate the diffuse gamma-ray spectrum.
Inverse Compton scattering emission below 20 keV would be predominantly produced by electrons below 100 MeV, so we can turn the argument around to say that the measured hard X-ray spectrum is constraining the cosmic-ray electron spectrum below 300 MeV. Therefore, we interpret the most likely cause of the discrepancy below 20 keV as due to theoretical underestimates of the true cosmic-ray electron spectrum at these previously unconstrained energies. Our measurements are in strong disagreement with the coordinated OSSE/CGRO and SIGMA observations (Purcell et al. (1995)), which show a steepening of the diffuse spectrum in the hard X-ray range ($`\alpha =2.7`$). The OSSE/CGRO spectrum would require an anomalously large power ($`10^{43}erg/s`$) to maintain the electron equilibrium against losses (Skibo, Ramaty & Purcell (1996)). One possibility is that the OSSE/SIGMA observations could still be contaminated by significant point source contributions. Integrating the power-law component of the Galactic diffuse emission and scaling for a rough model of the Galactic distribution (Skibo & Ramaty (1993)) gives a total Galactic power output in the 30-800 keV band of $`6\times 10^{37}erg/s`$. Given an efficient emission process, i.e. inverse Compton scattering of interstellar radiation off of cosmic-ray electrons accelerated in supernovae, this power is well within the total power injected into the Galaxy via supernovae, $`10^{42}erg/s`$ (assuming $`10^{51}erg`$ every 30 yr).

## 8 Conclusions

HIREGS observations of the Galactic Center region, by combining wide and narrow FOVs simultaneously, allow separation of the Galactic diffuse continuum and the compact source contributions over the longitude range ($`-35^{\circ }<l<10^{\circ }`$). The resulting diffuse soft gamma-ray spectrum is found to be hard ($`\alpha =1.8`$) down to 30 keV.
The Galactic Plane distribution is found to be flat in this longitude range (Galactic ridge) with no sign of an edge, and upper limits are set on the gradients of the diffuse components in this region. Comparison of the diffuse spectrum with the model of Strong et al. (1996) suggests that we are predominantly measuring the inverse Compton emission, in addition to the positronium continuum. The inverse Compton model is consistent with the large scale height components measured by COMPTEL/CGRO and RXTE in the neighboring energy bands, and with the smooth continuum measured from $`\sim `$20 keV to 30 MeV. By minimizing the need to invoke inefficient nonthermal bremsstrahlung emission in the hard X-ray range, these measurements require only a small fraction of the power provided by Galactic supernovae. Discrepancies between the model and the measurements below 20 keV are most likely due to theoretical underestimates of the cosmic-ray electron spectrum below 100 MeV; therefore, the diffuse hard X-ray/soft gamma-ray spectrum is providing the first direct constraints on the cosmic-ray electron spectrum below 300 MeV. Given these conclusions, further measurements of the diffuse hard X-ray/soft gamma-ray emission, both of its spatial and spectral distributions, are crucial to separate the various components of the emission and better constrain the cosmic-ray electron spectrum below 300 MeV. Furthermore, these conclusions suggest that nonthermal components extending from the hard X-ray into the soft X-ray bands should be analyzed in terms of constraining the cosmic-ray electron spectrum to even lower energies, where inverse Compton scattering emission dominates. We are grateful to R. Ramaty for useful discussions, and A. Valinia for providing the RXTE data. This research was supported in part by NASA grant NAGW-3816.
# Ground State Theory of 𝛿–Pu

## Abstract

Correlation effects are important for making predictions in the $`\delta `$ phase of Pu. Using a realistic treatment of the intra–atomic Coulomb correlations we address the long-standing problem of computing ground state properties. The equilibrium volume is obtained in good agreement with experiment when taking into account a Hubbard $`U`$ of order 4 eV. For this $`U,`$ the calculation predicts a $`5f^5`$ atomic–like configuration with L=5, S=5/2, and J=5/2 and shows a nearly complete compensation between spin and orbital magnetic moments.

Metallic plutonium is a key material in the energy industry and understanding its physical properties is of fundamental and technological interest. Despite intensive investigations, its extremely rich phase diagram with six crystal structures as well as its unique magnetic properties are not well understood. It is therefore of great interest to study the ground state of Pu by modern theoretical methods using first principles electronic structure calculations, which take into account the possible strong correlation among the f electrons. Density functional theory in its local density or generalized gradient approximations (LDA or GGA) is a well-established tool for dealing with such problems. This theory does an excellent job of predicting ground-state properties of an enormous class of materials. However, when applied to Pu, it runs into serious problems. Calculations of the high-temperature fcc $`\delta `$ phase have given an equilibrium atomic volume up to 35% lower than experiment. This is the largest discrepancy ever known in density functional based calculations and points to a fundamental failure of existing approximations to the exchange-correlation energy functional.
Many physical properties of this phase are puzzling: large values of the linear term in the specific heat coefficient and of the electrical resistivity are reminiscent of the physical properties of strongly-correlated heavy-fermion systems. On the other hand, the magnetic susceptibility is small and weakly temperature dependent. Moreover, early LDA calculations predicted $`\delta `$–Pu to be magnetic with a total moment of 2.1 Bohr magnetons, in disagreement with experiments. The reason for these difficulties has been understood for a long time: Pu is located on the border between the light actinides with itinerant 5f–electrons and the heavy actinides with localized 5f electrons. Near this localization-delocalization boundary, the large intra-atomic Coulomb interaction as well as the itineracy of the f electrons have to be considered on the same footing, and it is expected that correlations must be responsible for the anomalous properties. The parameter governing the importance of correlations in electronic structure calculations is the ratio between the effective Hubbard interaction $`U`$ and the bandwidth $`W.`$ When the distance between atoms is small, the correlation effects may be unimportant, since the hybridization, and consequently the bandwidth, becomes large. The low-temperature $`\alpha `$ phase of Pu has an atomic volume which is 25% smaller than the volume of the $`\delta `$ phase. To the extent that the complicated monoclinic structure of the $`\alpha `$ phase can be modelled by the simplified fcc lattice, it becomes clear that the LDA or GGA calculations which ignore the large effective $`U`$ converge to the low-volume $`\alpha `$ phase (for which $`U/W<1`$). When the volume is increased, this ratio is reversed, and the LDA loses its predictive power. This results in the long-standing problem of accurately predicting the volume of $`\delta `$–Pu.
In the present work it will be shown that a proper treatment of Coulomb correlations allows us to compute the equilibrium atomic volume of $`\delta `$–Pu in good agreement with experiment. Moreover, our calculations suggest that there is a nearly complete compensation between the spin and the orbital contributions to the total magnetic moment, which is consistent with experiment. Thus the strong correlation effects in $`\delta `$–Pu are not manifest in the static magnetic properties. To incorporate the effects of correlations we use the LDA + U approach of Anisimov and coworkers. This approach recognizes that the failure of the LDA is related to the fact that it omits the Hubbard-like interaction among electrons in the same shell, irrespective of their spin orientation. A new orbital–dependent correction to the LDA functional was introduced to describe this effect. In its most recent, rotationally invariant representation, the correction to the LDA functional has the following form: $$\mathrm{\Delta }E[n]=\frac{1}{2}\underset{\{\gamma \}}{\sum }(U_{\gamma _1\gamma _2\gamma _3\gamma _4}-U_{\gamma _1\gamma _2\gamma _4\gamma _3})n_{\gamma _1\gamma _2}^cn_{\gamma _3\gamma _4}^c-E_{dc}$$ (1) where $`n_{\gamma _1\gamma _2}^c`$ is the occupancy matrix for the correlated orbital (d or f), and $`\gamma `$ stands for the combined spin ($`s`$) and azimuthal quantum number ($`m`$) indexes. The electron–electron correlation matrix $`U_{\gamma _1\gamma _2\gamma _3\gamma _4}=\left\langle m_1m_3\left|v_C\right|m_2m_4\right\rangle \delta _{s_1s_2}\delta _{s_3s_4}`$ can be expressed via Slater integrals $`F^{(i)}`$, $`i=0,2,4,6`$ in the standard manner. The term $`E_{dc}`$ accounts for the double-counting effects. This scheme, known as the "LDA+U method", gives substantial improvements over the LDA in many cases. The value of the $`U`$ matrix is an input which can be obtained from constrained LDA calculations, or just taken from experiment.
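The contraction in Eq. (1) can be evaluated directly from the occupancy matrix and the U tensor; a small numerical sketch (our own naming and array layout, not from the papers' codes):

```python
import numpy as np

def lda_plus_u_correction(n, U, E_dc):
    """Rotationally invariant LDA+U correction, Eq. (1):

    dE = 1/2 * sum_{g1 g2 g3 g4} (U[g1,g2,g3,g4] - U[g1,g2,g4,g3])
         * n[g1,g2] * n[g3,g4]  -  E_dc

    n: (N, N) occupancy matrix of the correlated shell, with gamma a
       combined spin/azimuthal index (N = 14 for an f shell)
    U: (N, N, N, N) electron-electron correlation matrix
    """
    direct = np.einsum('abcd,ab,cd->', U, n, n)
    # exchange term: last two indices of U swapped
    exchange = np.einsum('abdc,ab,cd->', U, n, n)
    return 0.5 * (direct - exchange) - E_dc
```

In a real calculation U would be built from the Slater integrals F^(0), F^(2), F^(4), F^(6) and the spin deltas of the text, and n would come from the self-consistent cycle.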
The philosophy of this approach is that the delocalized s, p, and d electrons are well described by the LDA, while the energetics of the more localized f electrons require the explicit introduction of the Hubbard U. In the spirit of this method, in this work we treat the s, p, d electrons by the generalized gradient approximation (GGA), which is believed to be more accurate than the LDA. Our implementation of the GGA+U functional is based on the localized–orbital representation provided by the linear–muffin–tin–orbital (LMTO) method for electronic structure calculations. It is important to include spin–orbit coupling effects, which are not negligible for the 5f electrons of Pu. Our calculations include non-spherical terms of the charge density and potential both within the atomic spheres and in the interstitial region. All low-lying semi-core states are treated together with the valence states in a common Hamiltonian matrix in order to avoid unnecessary uncertainties. These calculations are spin polarized and assume the existence of long–range magnetic order. For simplicity, the magnetic order is taken to be ferromagnetic. We now report our results on the calculated equilibrium volume. To analyze the importance of the correlation effects, our calculations have been performed for several different values of $`U`$ varying from 0 to 4 eV. For $`U`$=4 eV we use the standard choice of Slater integrals: $`F^{(2)}`$=10 eV, $`F^{(4)}`$=7 eV, and $`F^{(6)}`$=5 eV. For other $`U`$’s we have scaled these values proportionally. For each set of $`F`$’s a full self–consistent cycle minimizing the LDA/GGA+U functionals has been performed for a number of atomic volumes. We calculated the total energy $`E`$ as a function of both $`V`$ and $`U`$. For fixed $`U,`$ the theoretical equilibrium, $`V_{calc},`$ is given by the minimum of $`E(V)`$. Fig.
1 shows the dependence of the calculated–to–experimental equilibrium volume ratio $`V_{calc}/V_{exp}`$ as a function of the input $`U.`$ It is clearly seen that the $`U`$=0 result (LDA) predicts an equilibrium volume which is 38% off the experimental result, and the use of the GGA gives only a slightly improved result ($`V_{calc}/V_{exp}`$=0.66). On the other hand, switching on a very large repulsion between 5f electrons obviously leads to an overestimate of the inter-atomic distances. An optimal $`U`$ deduced from this analysis is found to be close to 4 eV when using the GGA expressions for the exchange and correlation. This estimate of the intra-atomic correlation energy is in excellent agreement with the published conventional data: the value of $`U`$ deduced from total energy differences was found to be 4.5 eV. Atomic spectral data give a similar value close to 4 eV. Thus, it is demonstrated how significant it is to properly treat Coulomb correlations in predicting the equilibrium properties of this actinide. We now discuss the calculated GGA+U electronic structure of $`\delta `$–Pu for the optimal value of $`U`$=4 eV. Fig. 2 shows the energy bands in the vicinity of the Fermi level. They originate from the extremely wide 6s–band strongly mixed with the 5d–orbitals which are strongly hybridized among themselves. The resulting band complex has a bandwidth of the order of 20 eV. On top of this structure there exists a weakly hybridized set of levels originating from the 5f–orbitals. In order to understand the physics behind the formation of the spin and orbital moments in the f–shell, it is instructive to visualize the orbital characters as "fat bands". The one–electron wave function has the expansion $`\psi _{𝐤j}(𝐫)=\sum _{lms}A_{lms}^{𝐤j}\varphi _{lms}(𝐫)`$ where $`\varphi _{lms}(𝐫)`$ are the solutions of the radial Schrödinger equation normalized to unity within the atomic sphere.
The information about the partial $`lms`$ characters of the state with given $`𝐤j`$ is contained in the coefficients $`|A_{lms}^{𝐤j}|^2`$. The sum over all $`lms`$ in the latter quantity gives unity (we neglect a small contribution from the interstitial region) since one band carries one electron per cell. At the same time, the sum over all $`j`$ in $`|A_{lms}^{𝐤j}|^2`$ is also equal to one since each $`lms`$ describes one state. Fixing a particular $`lms`$, we can visualize this partial character on top of the band structure by widening each band $`E_{𝐤j}`$ proportionally to $`|A_{lms}^{𝐤j}|^2`$. A maximum width $`\mathrm{\Delta }`$ which corresponds to $`\sum _j|A_{lms}^{𝐤j}|^2=1`$ should be appropriately chosen. Now, in the absence of hybridization, each band originates from a particular $`lms`$ state, and therefore there exists only one "fat band" for a given $`lms`$, which has the maximum width $`\mathrm{\Delta }`$. When hybridization is switched on, there can be many bands which have the particular $`lms`$ character; they will all be widened as $`\mathrm{\Delta }|A_{lms}^{𝐤j}|^2`$, while the sum of the individual widths over all bands is still equal to $`\mathrm{\Delta }`$. The width of a band is then proportional to its $`lms`$ character. This technique gives us important information on the distribution of atomic levels as well as their hybridization in a solid. For the f–electrons of Pu, it is convenient to work in the spherical harmonics representation, in which the f–f block of the Hamiltonian is found to be nearly diagonal. The result of such a "fat bands" analysis for the 5f–orbitals is shown in Fig. 2. In order to distinguish the states with different $`m`$’s and spins we have used different colors (-3 $`\to `$ red, -2 $`\to `$ green, -1 $`\to `$ blue, 0 $`\to `$ magenta, +1 $`\to `$ cyan, +2 $`\to `$ yellow, +3 $`\to `$ gray).
Two consequences are seen from this coloured spaghetti: First, the spin–up and spin–down bands are all split by amounts governed by the effective $`U`$ and the occupancies of the levels. These are just the well-known lower and upper Hubbard subbands. Second, only the spin–up states with $`m`$=-3,-2,-1,0, and +1 are occupied, while all other states are empty. This simply implies a $`5f^5`$–like atomic configuration for $`\delta `$–Pu which is filled according to Hund's rule. Note that spin–orbit coupling is crucial for the existence of such an occupation scheme. In the absence of spin–orbit coupling the occupancies of the levels with $`\pm m`$ are the same, which automatically produces zero orbital moment. Besides providing the experimentally observed volume of $`\delta `$–Pu, our calculation suggests a simple picture of the electronic structure of this material and sheds new light on its puzzling physical properties discussed in the introduction. The "fat bands" shown in Fig. 2 suggest a physical picture in which the f electrons are in atomic states forming a multiplet of the $`5f^5`$ configuration with L=5, S=5/2, spin–orbit coupled to J=5/2. Crystal fields can split this multiplet into a doubly degenerate state transforming according to the $`\mathrm{\Gamma }_7`$ representation of the cubic group and a quartet transforming according to the $`\mathrm{\Gamma }_8`$ representation, but cannot remove the orbital degeneracy completely. In a dynamic picture, the f electrons will fluctuate between the degenerate configurations, until this degeneracy is removed by the Kondo effect with the delocalized electrons in the s-p-d band. Therefore the experimentally observed characteristic heavy-fermion behavior in this system, namely, the large high–temperature resistivity and the large linear-T coefficient of the specific heat, arises naturally in this picture. This heavy-fermion behavior however should not appear in the magnetic susceptibility.
The GGA + U calculation suggests that the magnetic moment of the low lying configurations of the f electrons is much smaller than the $`5\mu _B`$ one would obtain by ignoring the orbital angular momentum and assuming that the spin is fully polarized. The combination of strong Coulomb interactions and spin–orbit coupling reduces the crystal–field effects and gives rise to a large orbital moment which nearly cancels the spin moment. In an atomic picture, the $`5f^5`$ configuration with L=5, S=5/2 and J=5/2 has a total moment given by $`M_{tot}`$ $`=\mu _BgJ=0.7\mu _B`$, with a Landé g-factor of 0.28. This simple relation breaks down in the presence of crystal fields, but in both the $`\mathrm{\Gamma }_7`$ and the $`\mathrm{\Gamma }_8`$ representations the g factor is further reduced from the atomic estimate. The GGA + U calculation gives a spin moment $`M_S`$ of about 5.1 Bohr magnetons, slightly increased relative to the 5 Bohr magnetons expected in a pure f<sup>5</sup> atomic configuration due to the polarization of the band electrons outside the muffin–tin shell. Evaluation of the orbital and total moments is in general a more difficult problem. We have estimated the average of $`𝐤j\left|l_z\right|𝐤j`$ summed over all occupied states $`|𝐤j.`$ This leads to a value $`M_L`$=-3.9 $`\mu _B`$ for the orbital moment. The total calculated moment $`M_{tot}=M_S+M_L`$ is thus reduced to 1.2 $`\mu _B`$. It is worth noting that an atomic analogue of this estimate, $`M_{tot}`$ $`=\mu _B\left|L-2S\right|`$, gives exactly zero for our 5f<sup>5</sup> ground state. A remarkable outcome of the calculation is clearly seen: a nearly complete compensation of spin and orbital contributions occurs for metallic $`\delta `$–Pu. 
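The atomic estimates quoted above follow from the standard LS-coupling formulas; a quick illustrative check (the exact value 2/7 ≈ 0.286 is consistent with the g ≈ 0.28 quoted in the text):

```python
# Lande g-factor and atomic moment estimates for the 5f^5 multiplet
# (standard LS-coupling formulas; L = 5, S = 5/2, J = 5/2).
L, S, J = 5.0, 2.5, 2.5
g = 1.0 + (J * (J + 1) + S * (S + 1) - L * (L + 1)) / (2.0 * J * (J + 1))
M_tot = g * J            # g*J mu_B, approximately 0.7 mu_B
M_LS = abs(L - 2.0 * S)  # the |L - 2S| estimate: exactly zero for 5f^5
print(round(g, 3), round(M_tot, 3), M_LS)
```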
In this picture the weak, temperature–independent susceptibility observed in $`\delta `$–Pu is the result of a very large Van Vleck contribution and of a very small magnetic moment resulting from the near cancellation of two large orbital and spin moments. In a recent paper Eriksson and coworkers introduced a different approach to the anomalous properties of $`\delta `$–plutonium. In their calculation a fraction of the f-electrons is treated as core electrons while the rest are treated as delocalized. Using a combination of the constrained LDA calculation with atomic multiplet data, they obtain the correct equilibrium volume when four f-electrons are part of the core while one f electron is itinerant. The basic difference between the methods is the treatment of the f electrons. In this paper all the f electrons are treated on equal footing, and their itinerancy is reduced by the Hubbard U relative to the predictions of LDA or GGA calculations. Since our approach and that of Eriksson et al. lead to different ground state configurations of the localized f electrons ($`f^5`$ vs $`f^4`$), further experimental spectroscopic studies of $`\delta `$–Pu would be of interest. In conclusion, using a realistic value of the Hubbard $`U`$=4 eV incorporated into the density functional GGA calculation, we have been able to describe the ground state properties of $`\delta `$–Pu in good agreement with experimental data. This theory correctly predicts the equilibrium volume of the $`\delta `$ phase and suggests that a nearly complete cancellation occurs between spin and orbital moments. The main shortcoming of the present calculation is the assumed long range spin and orbital order. This is an essential limitation of the LDA+U approach (or of any static mean field theory): in order to capture the effects of correlations, it has to impose some form of long–range order. 
Static mean field theories are unable to capture subtle many–body effects such as the formation of local moments and their subsequent quenching via the Kondo effect. These deficiencies will be removed by ab initio dynamical mean field calculations, for which codes are currently being developed. We believe, however, that our main conclusions, i.e. that correlations lead to the correct lattice constant and to a reduction of the moment relative to the LDA results, are robust consequences of the strong correlations present in this material, and will be reproduced by more accurate treatments of the electron correlations. The authors are indebted to E. Abrahams, O. K. Andersen, O. Gunnarsson, A. I. Lichtenstein, and J. R. Schrieffer for many helpful discussions. The work was supported by the DOE division of basic energy sciences, grant No. DE-FG02-99ER45761. FIGURE CAPTIONS Fig.1. Calculated theoretical volume (normalized to the experiment) of $`\delta `$–Pu as a function of the Hubbard $`U`$ within the LDA+U and GGA+U approaches. Fig.2. Calculated energy bands of $`\delta `$–Pu using the GGA+U method with $`U`$=4 eV. Spin and orbital characters of the f-bands are shown with color ($`m`$=-3 $``$ red, -2 $``$ green, -1 $``$ blue, 0 $``$ magenta, +1 $``$ cyan, +2 $``$ yellow, +3 $``$ gray). Boxes on the left and on the right show the approximate positions of the f levels.
# 3C 48: Stellar Populations and the Kinematics of Stars and Gas in the Host Galaxy<sup>1</sup> <sup>1</sup>Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. ## 1 Introduction Strong interactions and mergers have been implicated in the triggering of both ultra-luminous infrared galaxies (ULIGs; Sanders et al. 1988; Sanders & Mirabel 1996 and references therein) and QSOs (e.g., Stockton 1999 and references therein). For ULIGs with the highest FIR luminosities ($`\mathrm{log}L_{FIR}>12.0`$ in solar units), essentially all appear to be mergers. The evidence linking interactions to QSOs, while strong for a number of specific cases, is generally more circumstantial and less compelling than that for ULIGs. This does not necessarily imply lower interaction rates for QSO host galaxies: it is much more difficult to observe diagnostic features in QSO hosts because of the bright nucleus. Sanders et al. (1988) have suggested that there may be an evolutionary path from ULIGs to QSOs. If so, then signs of the interaction also would be expected to be less obvious in QSOs because of the fading and dissipation of tidal features such as tails. The relation of mergers to intense starbursts (resulting in ULIGs) and to the feeding of supermassive black holes (resulting in QSO activity) is undoubtedly complex. Some mergers of disk galaxies result in only comparatively moderate star formation, and most mergers apparently do not lead to strong nuclear activity. 
It is not yet clear whether the extra ingredients (besides the bare fact of a merger) needed to produce a sizable starburst are necessarily identical to those required to trigger strong nuclear activity, but the fact that a fair fraction of ULIGs show signs of nuclear activity indicates that there is at least a good deal of overlap. In order to explore this connection in more detail, we have initiated a program to study stellar populations in the host galaxies of QSOs having FIR colors tending towards those of ULIGs, and therefore indicating a substantial contribution from a starburst or recent post-starburst component. We have previously fitted population-synthesis models to high-S/N spectroscopy of the close, interacting companion to the QSO PG 1700+518 (Canalizo & Stockton 1997; Stockton, Canalizo, & Close 1998). Here, we use similar techniques to analyze the host galaxy of 3C 48. 3C 48 was the first quasar to be discovered (Matthews et al. 1961; Matthews & Sandage 1963). As often seems to be the case for the first example of a new class of astronomical object, it is, in retrospect, far from being typical. The original identification of 3C 48 depended in part on the small size of its radio source, and it remains the only example of a compact steep-spectrum (CSS) radio source among powerful quasars at redshifts $`<0.5`$. The host galaxy of 3C 48 is unusually large and bright in comparison with those of other low-redshift quasars (Kristian 1973). 3C 48 was the first QSO for which an extended distribution of ionized gas was observed (Wampler et al. 1975), and it remains one of the most luminous examples of extended emission among low-redshift QSOs (Stockton & MacKenty 1987). Spectroscopy of the host galaxy of 3C 48 by Boroson & Oke (1982, 1984) showed strong Balmer absorption lines, demonstrating clearly not only that the extended continuum radiation was dominated by stars, but that these stars were fairly young ($`<1`$ Gyr old). 
A similar story is told by the far-infrared (FIR) colors, which lie between those of most QSOs and those of ULIGs, indicating that a significant fraction of the FIR radiation is likely due to dust in regions of current or recent star formation (Neugebauer et al. 1986; Stockton & Ridgway 1991; Stockton 1999). Another connection with ULIGs is seen in the apparent tidal tail extending to the northwest from the 3C 48 host galaxy and the possibility that the luminosity peak 1″ northeast of the quasar nucleus might be the nucleus of a merging companion (Stockton & Ridgway 1991). In this paper, we describe the results of deep spectroscopy of the host galaxy of 3C 48, using multiple slit positions to cover most regions of the galaxy. After analyzing the stellar velocity field, fitting population-synthesis models to the spectra at many discrete points in the host galaxy, and determining the distribution and velocity structure of the extended emission-line gas, we discuss the implications of our results in terms of a merger scenario. We assume $`H_0=75`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0=0.5`$ throughout this paper. ## 2 Observations and Data Reduction Spectroscopic observations of the host galaxy of 3C 48 were carried out on 1996 October 13–14 UT and 1996 November 3–4 UT, with the Low-Resolution Imaging Spectrometer (LRIS; Oke et al. 1995) on the Keck II telescope. In Table 1 we show a complete journal of observations, with specification of the slit positions. We used a 600 groove mm<sup>-1</sup> grating blazed at 5000 Å for slit positions A, B, C, and G, yielding a dispersion of 1.28 Å pixel<sup>-1</sup>. We used a 900 groove mm<sup>-1</sup> grating blazed at 5500 Å for the remaining slit positions in order to obtain a higher dispersion (0.85 Å pixel<sup>-1</sup>) to observe the structure of the emission lines in the spectra. 
Total integration times were 5 minutes for slit position D, 40 minutes for slit position F, and 60 minutes for all the other slit positions. The spectra were reduced with IRAF, using standard reduction procedures. After dark, bias and flat field correction, the images were background subtracted to remove sky lines. Wavelength calibrations were made using the least-mean-squares fit of cubic spline segments to identified lines in a comparison spectrum. The spectra were flux calibrated using spectrophotometric standards from Massey et al. (1988) observed with the slit at the parallactic angle. The distortions in the spatial coordinate were removed with the APEXTRACT routines. The spectra were not corrected for discrete atmospheric absorption features; thus they show the atmospheric B-band redwards of the \[O III\] $`\lambda `$ 5007 emission feature. Spectra from slit position A suffered from strong light scattering from the quasar. In addition, one of the three 20 minute exposures taken at this position was unusable because of problems with the Keck II mirror control system. Spectra from slit positions A, B, C and G were subdivided into eight regions each (see Fig. 2$`a`$) for the analysis of stellar populations and kinematics of the host galaxy. The size of each of these regions was chosen so that each spectrum would have sufficient signal-to-noise ratio for modeling while still providing good spatial resolution. The spectra of regions close to the quasar nucleus were contaminated by scattered quasar light. The scattered light was removed by subtracting from each region a version of the quasar nuclear spectrum, scaled to match the broad-line flux. The percentage in flux (at 4500 Å, rest wavelength) subtracted in each case is listed in column (6) of Table 2. This quantity depends both on the intensity of the quasar profile wings and the surface brightness of the host galaxy in each particular region. 
WFPC2 images of 3C 48 were obtained from the Hubble Space Telescope (HST) data archive. The images used in this analysis included two 1400 s exposures in the F555W filter and two 1700 s exposures in the F814W filter. In each case, 3C 48 was centered on the PC1 detector. We also used two 1100 s exposures and one 1300 s exposure in the linear ramp filter FR680N, centered on redshifted \[O III\] $`\lambda `$ 5007. 3C 48 fell near the center of the WFC2 chip. All HST images were reduced as described in Canalizo, Stockton & Roth 1998. We subtracted a scaled stellar profile at the position of the quasar, using Tiny Tim (Krist 1993) models of the HST/WFPC2 PSF. We also used three ground-based images of 3C 48 obtained with the University of Hawaii 2.2 m telescope. The first of these is a sum of three 2700 s exposures, taken on 1991 September 16 with a Tektronix 1024$`\times `$1024 CCD through a 30 Å bandpass filter centered on redshifted \[O III\] $`\lambda `$ 5007, with an image scale of 0$`\stackrel{}{\mathrm{.}}`$22 pixel<sup>-1</sup>. The second is a sum of thirty 300 s exposures, taken on UT 1993 September 18 with a Tektronix $`2048\times 2048`$ CCD through a filter centered at 6120 Å and having a 960 Å FWHM, covering the relatively line-free region from 4120 to 4820 Å in the rest frame of 3C 48, also at a scale of 0$`\stackrel{}{\mathrm{.}}`$22 pixel<sup>-1</sup>. We shall refer to this filter as “$`R^{}`$.” Finally, we obtained short wavelength images of 3C 48 on UT 1998 September 18 through a $`U^{}`$ filter (centered at 3410 Å with a 320 Å FWHM). Nine 1200 s exposures were taken using an Orbit CCD, with an image scale of 0$`\stackrel{}{\mathrm{.}}`$138 pixel<sup>-1</sup>. All ground-based images were reduced with IRAF using standard image reduction procedures. The $`U^{}`$ and $`R^{}`$ images have been processed with CPLUCY (Hook 1998), an improved version of the PLUCY task (Hook et al. 1994) available in the STSDAS CONTRIB package. 
Both tasks carry out a two-channel deconvolution of a field, one channel comprising designated point sources, which are deconvolved using the standard Richardson-Lucy algorithm, and the second channel including everything else, for which the Richardson-Lucy deconvolution is constrained by an entropy term to ensure smoothness. One of the virtues of this procedure is that, with a proper choice of parameters, it is possible to remove the stellar profile from a non-zero background without the ringing problem inherent in the application of the standard Richardson-Lucy procedure. Both the ground-based images and the HST images are shown in Fig. 2, to which we will refer as needed during our discussion. ## 3 Results ### 3.1 Kinematics of the Stellar Component in the Host Galaxy Redshifts determined from stellar absorption features give us information about the kinematics of the stars in the host galaxy. We have obtained measurements from 32 regions covered by 4 slit positions (A, B, C and G; see Table 1). Some of these regions overlap, providing a consistency check. Column (2) of Table 2 lists the relative velocity of each region with respect to $`z=0.3700`$, which is close to the average of the stellar redshifts measured in the inner regions of the host. Measuring redshifts from stellar absorption features presents two main problems: the absorption features may be contaminated by emission lines coming from the extended ionized gas, and some absorption features may be interstellar in nature. Contamination of absorption features by emission can yield a higher or lower redshift depending on whether the ionized gas has a lower or higher redshift than the stars. In order to check for contamination, we examined the Balmer lines. Because the Balmer decrement is steeper in emission than in absorption, those lines that are higher in the series (e.g., H8, H9, H10) show less contamination by emission. 
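For readers unfamiliar with the underlying algorithm, a minimal one-dimensional Richardson–Lucy iteration is sketched below on synthetic data; the entropy-regularized two-channel extension used by PLUCY/CPLUCY is omitted, so this is only an illustration of the basic deconvolution step.

```python
import numpy as np

def richardson_lucy(data, psf, n_iter=50):
    """Standard Richardson-Lucy iteration in 1-D (toy version)."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    estimate = np.full_like(data, data.mean())  # flat first guess
    for _ in range(n_iter):
        model = np.convolve(estimate, psf, mode="same")
        ratio = data / np.maximum(model, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flip, mode="same")
    return estimate

# Toy scene: a point source on a smooth background, blurred by a Gaussian PSF.
x = np.arange(64)
scene = 0.1 + np.where(x == 32, 5.0, 0.0)
psf = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
blurred = np.convolve(scene, psf / psf.sum(), mode="same")

restored = richardson_lucy(blurred, psf)
print(int(np.argmax(restored)))  # peak recovered at the point-source position
```

The ringing around point sources on a non-zero background mentioned in the text is a known artifact of exactly this unconstrained iteration, which is what motivates the two-channel variant.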
As a result, in the regions where there is extended emission, the redshift measured from the lower lines in the Balmer series was different from that measured from the higher-series lines. This effect is quite significant in some cases, and it correlates with the strength of the emission observed in our \[O III\] image (Fig. 2$`f`$). Where there was evidence for strong emission contamination, we measured the redshift of the stellar component only from the higher Balmer lines (usually H8 and above) and the Ca II K line. We did not measure redshifts from Ca II H when a young stellar component was present, as this line was blended with H $`ϵ`$. Redshifts measured from Ca II lines, however, must be treated with caution, as these lines often occur in the interstellar medium. From our modeling of stellar populations (see § 3.2) we notice that Ca II is sometimes stronger in the observed spectrum than in an otherwise good-fitting model (e.g., at B4 in Fig. 3), and this is indicative of intervening interstellar gas, which may have a different velocity from that of the stellar component. However, for those spectra with a young stellar component, the redshift we measured from Ca II K was always consistent with that measured from the higher members of the Balmer series within measurement errors. In the cases where there was no young stellar component, we checked for consistency with other old population features, such as the Mg I$`b`$ feature. Figure 2$`b`$ shows a map of the radial velocities in the host galaxy. The north-west tidal tail-like extension of the host galaxy is blueshifted with respect to the main body of the host galaxy by as much as 300 km s<sup>-1</sup>. The west end of the tail has smaller approaching velocities ($`>-200`$ km s<sup>-1</sup>), which increase to a maximum of $`\sim -300`$ km s<sup>-1</sup>, and then decrease again ($`\sim -100`$ km s<sup>-1</sup>) as we approach the main body of the host. 
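The line-by-line redshift cross-check described above can be sketched as follows; the observed wavelengths here are hypothetical, chosen only to mimic the emission filling of the lower Balmer lines, and the rest wavelengths are the standard laboratory values.

```python
# Rest wavelengths (Angstrom) of the absorption lines discussed in the text.
rest = {"Hbeta": 4861.3, "Hgamma": 4340.5, "H8": 3889.1,
        "H9": 3835.4, "CaII_K": 3933.7}

# Hypothetical measured centroids: the lower Balmer lines are skewed by
# emission filling, the higher-series lines and Ca II K are not.
obs = {"Hbeta": 6661.0, "Hgamma": 5947.8, "H8": 5328.1,
       "H9": 5254.5, "CaII_K": 5389.2}

z = {line: obs[line] / rest[line] - 1.0 for line in rest}

# Adopt the stellar redshift from the higher Balmer lines plus Ca II K only.
z_star = sum(z[l] for l in ("H8", "H9", "CaII_K")) / 3.0
print(round(z_star, 4))  # close to the systemic z = 0.3700 in this toy setup
```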
We find relatively small variations in velocity in the central region of the host galaxy. However, the south-east part of the galaxy appears to have receding velocities as high as $`+`$180 km s<sup>-1</sup>. This may indicate that the system is rotating around an axis oriented roughly northeast—southwest. ### 3.2 Stellar Populations #### 3.2.1 Modeling the Spectra The high signal-to-noise ratio of the Keck LRIS spectra allows us to attempt modeling different regions of the host galaxy, with sizes ranging down to approximately one square arcsecond in the higher-surface-brightness regions. We have modeled 32 individual regions (see Fig. 2$`b`$) of the host galaxy using the Bruzual & Charlot 1996 synthesis models. Typical spectra of the host galaxy are shown in Fig. 3. The spectra generally show Balmer absorption lines from a relatively young stellar population as well as evidence for an older population, such as the Mg Ib feature and the Ca II K line. The simultaneous presence of both kinds of features indicates that we can do some level of decomposition of the two components. As we shall discuss in more detail in § 4.1, the morphology of the host galaxy indicates that some form of strong interaction has occurred. We assume that this strong tidal interaction has induced one or more major starburst episodes in the host galaxy. Thus, to first order, the observed spectrum consists of an underlying old stellar population (stars in galaxy before interaction) plus a recent starburst (stars formed as a result of interaction). Our early experiments in modeling the old population in similar situations showed that, as expected, it is not strongly constrained by the observations. On the other hand, we have found that the precise choice of the underlying old stellar population makes very little difference in the modeling of the superposed starburst (Stockton, Canalizo & Close 1998). 
Accordingly, we have made what we take to be physically reasonable assumptions regarding the old population and have used this model for all of our analyses. In our early modeling of the central regions of the host galaxy the old stellar component was well fit by a 10 Gyr old population with an exponentially decreasing star formation rate with an e-folding time of 5 Gyr. Later, we were able to extract a purely old population (not contaminated by younger stars) in the tail, far from the nucleus (see C8 in Figs. 2$`a`$ and 3). The old population model we had previously chosen gives a very reasonable fit, confirming that we had chosen a reasonable model for the old stellar population. We assume that the same old stellar population is present everywhere in the host galaxy. To this population we add isochrone synthesis models (Bruzual & Charlot 1996) of different ages. The assumption of a population with no age dispersion can be justified if the period during which the star formation rate was greatly enhanced is short compared to the age of the population. Indeed, observational evidence indicates that the duration of starbursts in individual star forming clouds can be very short, often $`<1`$ Myr (e.g., Heckman et al. 1997; Leitherer et al. 1996). We perform a $`\chi ^2`$ fit to the data to determine the contribution of each component and the age of the most recent starburst. Since the size of each region we are modeling is at least $`1\times 1\mathrm{}`$ ($`4\times 4`$ kpc), the observed spectrum is likely to be the integrated spectrum of several starbursts of different ages. The starburst age we derive for the observed spectrum will be weighted towards the younger starbursts because such starbursts will have a greater flux contribution. On the other hand, this weighted-average age will likely be at least somewhat greater than the age of the youngest starburst in the observed region. 
Thus our ages can be taken as upper limits to the age of the most recent major episodes of star formation along the line of sight. While the starburst ages that we list in Table 2 represent the best fit to the observed spectrum, there is a range of ages in each case that produces a reasonable fit. Figure 4 shows the spectrum from region B8 with three different solutions superposed: the youngest that still looks reasonable (top), the best fit (middle), and the oldest that still looks reasonable (bottom). In each case, the relative contribution of the old population was increased or decreased to give the best fit. The difference in the $`\chi ^2`$ statistic between the middle and top or middle and bottom fits is $`\sim `$15%. We take these limits to be an estimate of the error in the determination of age so that, for this particular case, the starburst age would be $`114(+67,-42)`$ Myr. In general, a reasonable estimate of the error for the starburst age determined in each case is $`\pm 50\%`$. As mentioned in § 3.1, absorption lines in many regions are contaminated by emission lines coming from the extended ionized gas. In modeling these regions, we excluded from the $`\chi ^2`$ fits those Balmer lines that were obviously contaminated by emission. Notice that in some of the spectra (e.g., G2 in Fig. 3), the observed data are slightly depressed with respect to the models in the region between 4600 Å and 4800 Å. We previously found this problem in the spectrum of the companion to PG 1700+518 (Canalizo & Stockton 1997) and attributed it there to an artifact of the subtraction of the QSO scattered light. However, some of the spectra of 3C 48 show this problem even when no QSO light was subtracted. We find no evidence for a correlation of this effect with the age of the starburst. Fritze-von Alvensleben & Gerhard (1994) seem to have run into the same problem when they use their own models to fit a spectrum of NGC 7252 (see their Fig. 1). 
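The two-component $`\chi ^2`$ decomposition can be illustrated with a toy model. The templates below are simple power laws standing in for the Bruzual & Charlot models, and the "observed" spectrum is synthetic; only the fitting logic (a fixed old component plus a grid of trial starburst ages, with amplitudes from linear least squares) mirrors the procedure described in the text.

```python
import numpy as np

wave = np.linspace(3500.0, 5500.0, 200)   # rest wavelength grid (Angstrom)

def young_template(age_myr):
    """Hypothetical starburst template: a power law that gets bluer with youth."""
    return (wave / 4500.0) ** (-2.0 * 100.0 / (age_myr + 100.0))

old = (wave / 4500.0) ** 1.5              # stand-in for the old population

# Fake "observed" spectrum: 60/40 mix of old light and a 120 Myr burst, plus noise.
sigma = 0.003
rng = np.random.default_rng(1)
obs = 0.6 * old + 0.4 * young_template(120.0) + rng.normal(0.0, sigma, wave.size)

best = None
for age in np.arange(10.0, 300.0, 10.0):  # grid of trial starburst ages (Myr)
    # For a fixed trial age the two amplitudes follow from linear least squares.
    A = np.column_stack([old, young_template(age)])
    coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
    chi2 = np.sum(((obs - A @ coef) / sigma) ** 2)
    if best is None or chi2 < best[0]:
        best = (chi2, age, coef)

chi2_min, best_age, (a_old, a_young) = best
print(best_age, round(a_young / (a_old + a_young), 2))
```

In this toy setup the recovered age lands within the $`\pm 50\%`$ uncertainty quoted above, which is the point of the illustration: nearby ages with adjusted amplitudes fit nearly as well.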
This problem is also evident in the spectrum of G 515 in Liu & Green 1996. Neither of these groups discusses the discrepancy. #### 3.2.2 Mapping Stellar Populations Figure 2$`c`$ shows a map of the host galaxy with the starburst ages we determine from spectra. Figure 3 shows a characteristic spectrum of each of the regions described below. In column (3) of Table 2 we list the ages of the young stellar component for each region analyzed. In columns (4) and (5) we give, respectively, the mass and light (at 4500 Å, rest wavelength) contribution of the starburst to the observed spectrum. We start with the long extension to the north-west of the quasar, which is very likely a tidal tail, as we shall see. As we mentioned in the previous section, the stellar populations in most of this tail appear to have no younger component. Panel C8 of Fig. 3 shows one such spectrum with a $`\chi ^2`$ fit to a 10 Gyr-old population with an exponentially decreasing star formation rate with an e-folding time of 5 Gyr. Even though most of the extension is made up of old populations, there is one small starburst region covered by our slit positions G and A. This starburst has an age of $`\sim 33`$ Myr and is about 4″ west and 8″ north of the quasar nucleus, where A7 and G8 intersect (see Fig. 2$`a`$,$`c`$). A faint clump at the same position is clearly seen on our $`U^{}`$ ground-based image (Fig. 2$`d`$). The $`U^{}`$ image also shows two additional larger bright regions north of the quasar nucleus. The eastern region corresponds to a strong emission region clearly seen in the \[O III\] image shown in Fig. 2$`f`$. The western region (covered by C6 and adjacent regions in Fig. 2), on the other hand, is much fainter in both the narrow-band and broad-band images than in the $`U^{}`$ image. This region contains starbursts of ages $`\lesssim 9`$ Myr, and the F555W HST image suggests that it is formed of several smaller clumps. 
We also find starburst ages $`\lesssim 9`$ Myr in the regions directly north (C3, C4) and southwest (G3) of the quasar. These are seen in the $`U^{}`$ image as relatively high surface brightness areas. The area corresponding to G3 has a knotty structure, which is not obvious in the longer wavelength images. We have tried taking the ratio of our $`U^{}`$ image to our $`R^{}`$ image (Fig. 2$`e`$). Both of these images should be strongly dominated by continuum radiation; however, the physical interpretation of this ratio is not straightforward for two reasons: (1) while the $`U^{}`$ image should relatively emphasize regions that have young stars, it may also be enhanced by strong nebular thermal emission from emission regions, and (2) as our modeling of the stellar populations shows, the ratio of young and old stellar populations in the $`R^{}`$ image is highly variable. Nevertheless, a striking feature of the ratio image is the apparent ridge of strong UV radiation along the leading edge of the NW tail. This is apparently a complex of emission regions and young stars, including both the brightest emission region seen in the \[O III\] images and the A7/G8 star-forming region mentioned above. On the west side of the nucleus (along Slit G) we find that there is no recent star formation in the northern half of the slit other than the star-forming clumps mentioned above. The southern part of the host galaxy seems to be dominated by relatively older starbursts (generally $`\sim 100`$ Myr) with a range of ages between 16 and 114 Myr. Finally, the youngest populations we observe in the host galaxy are found in the regions coincident with the two brightest features in the $`U^{}`$ image (see inset in Fig. 2$`d`$): (1) a region $`2\mathrm{}`$ SE of the quasar, covered by A3 and B5, and (2) a region $`1\mathrm{}`$ NE of the quasar, surrounded by B3 and C2, which will be discussed in more detail in § 4.2. We find starburst ages as young as 4 Myr in these regions. 
Such young ages are comparable to starburst timescales, and it is virtually impossible to distinguish between continuous star formation and instantaneous bursts, so the young ages we derive simply indicate that there is ongoing star formation. ### 3.3 Emission #### 3.3.1 Emission in the Quasar Nucleus Thuan, Oke, & Bergeron (1979) noticed that the broad permitted emission lines in the spectrum of 3C 48 had a systematically higher redshift, by about 600 km s<sup>-1</sup>, than did the forbidden lines. Boroson & Oke (1982) confirmed this qualitative result but found a lower velocity difference of $`330\pm 78`$ km s<sup>-1</sup>. Gelderman & Whittle (1994) mentioned that the redshift of the forbidden lines is difficult to measure because they “have approximately flat tops.” Our spectrum of the quasar nucleus in the region around H$`\beta `$ and \[O III\] $`\lambda \lambda 4959`$,5007 is shown in Fig. 5. The \[O III\] lines clearly have double peaks. By deconvolving the lines into best-fit Gaussian components, we find that the narrower, higher-velocity component has a mean redshift $`z=0.36942`$, with a Gaussian width $`\sigma =400`$ km s<sup>-1</sup>, while the broader, lower-velocity component has a mean redshift $`z=0.36681`$, with $`\sigma =700`$ km s<sup>-1</sup>. These redshifts correspond to a velocity difference of $`563\pm 40`$ km s<sup>-1</sup>. The redshift of the broad H$`\beta `$ line is $`z=0.36935`$, in good agreement with the higher-redshift \[O III\] line. Thus the anomalous feature is the broad, lower-velocity \[O III\] line. What is the nature of this blue-shifted component? First, it is highly luminous: the luminosity in the \[O III\] $`\lambda 5007`$ line alone is $`5\times 10^{43}`$ erg s<sup>-1</sup>, over 3 times the luminosity of the same line in the “standard” narrow-line region. Second, it does show some spatially-resolved velocity structure, though only over a region of about 0$`\stackrel{}{\mathrm{.}}`$5. 
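Converting the two quoted \[O III\] redshifts into a velocity difference is straightforward; with the rounded redshifts given above one obtains roughly 570 km s<sup>-1</sup>, consistent with the quoted $`563\pm 40`$ km s<sup>-1</sup> once the rounding of the redshifts is allowed for.

```python
# Velocity difference between the two [O III] components,
# Delta v = c * (z1 - z2) / (1 + z_mean), using the redshifts quoted above.
C_KMS = 299792.458
z_hi, z_lo = 0.36942, 0.36681
dv = C_KMS * (z_hi - z_lo) / (1.0 + 0.5 * (z_hi + z_lo))
print(round(dv, 1))  # km/s
```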
We first noticed this velocity gradient in our reduced long-slit spectrum, but it is very difficult to display because of the strong variation of the underlying quasar continuum in the direction perpendicular to the slit. Figure 6 shows the result of a PLUCY (Hook et al. 1994) deconvolution of the 2-dimensional spectrum, followed by a reweighting of the image perpendicular to the slit to even out the dynamic range in the resolved emission. The steep velocity gradient is clearly visible in the blueshifted component of both of the \[O III\] lines. As we were completing this paper, an important new study by Chatzichristou, Vanderriest, & Jaffe (1999) appeared, in which the spatial and velocity structure of the emission lines in the inner region of 3C 48 were mapped with a fiber integral-field spectroscopic unit. They also observed the double-peaked narrow-line emission in the quasar spectrum, and their decomposition of the profile gave a radial velocity difference of $`586\pm 15`$ km s<sup>-1</sup>, with which our value of $`563\pm 40`$ km s<sup>-1</sup> is in good agreement. Our results are in general agreement on most other points for which the two datasets overlap, except that Chatzichristou et al. find the luminous broad blueshifted component to have no resolved velocity structure, whereas we find it to have a strong velocity gradient over the central 0$`\stackrel{}{\mathrm{.}}`$5 region. #### 3.3.2 Extended Emission We have a total of 6 slit positions useful for studying the extended emission, which are shown in Fig. 7, superposed on our narrow-band \[O III\] image. Images of the two-dimensional spectra are shown in Fig. 8. We have fitted Gaussian profiles to the \[O III\] $`\lambda 5007`$ lines (and to the \[O II\] $`\lambda \lambda 3726`$,3729 lines in our higher-dispersion spectra from slits E and F) in order to measure the velocity field of the extended emission. 
At some positions, we had to use up to 3 components in order to get a satisfactory fit to the observed profile. The results are shown in Fig. 9, where the velocities, fluxes, and widths of the lines are indicated at each position. We also plot the velocities of the stellar component for slit positions A, B, C, and G. While there is generally at least rough agreement between the velocities of the stars and at least one component of the emission-line gas, there are significant differences in detail. Little, if any, of the emission can be due to an undisturbed in situ interstellar medium. The total velocity range of the emission is over 1200 km s<sup>-1</sup>, and the FWHM of the emission reaches up to 2000 km s<sup>-1</sup>. The morphology of the extended emission, as seen in our deep ground-based \[O III\] $`\lambda 5007`$ image (Fig. 2$`f`$) and the HST WFPC2 linear ramp filter image in the same emission line (Fig. 2$`i`$), consists of several discrete emission regions having characteristic sizes ranging from a few hundred pc to $`1`$ kpc, connected by a web of lower-surface-brightness emission. Some of the emission regions lie outside the host galaxy, as defined by the deep continuum images. The HST image shows an apparent bright object about 0$`\stackrel{}{\mathrm{.}}`$8 S of the quasar, which has no counterpart on the HST continuum images. Kirhakos et al. (1999) have referred to this object as an H II region. Its absence on the HST continuum images would indicate an \[O III\] $`\lambda 5007`$ equivalent width $`4000`$ Å. An equivalent width this large is not impossible: if the intensity ratio of \[O III\] $`\lambda 5007`$ to H$`\beta `$ is $`10`$, as is common in extended emission regions around QSOs, the equivalent width of H$`\beta `$ would be about a factor of 3 below that which would be expected from the nebular thermal continuum alone for a temperature of 15000 K. 
However, we estimate that the object’s absence on the HST PC F555W image requires that the flux of \[O II\] $`\lambda 3727`$ be $`<4`$% that of \[O III\] $`\lambda 5007`$, rather than the more typical $`20`$%. Finally, we have carried out a careful deconvolution of our ground-based \[O III\] image, which has an original FWHM of 0$`\stackrel{}{\mathrm{.}}`$74 for stellar profiles. On our deconvolved image, we could easily have seen any object as bright as the emission region $`2\stackrel{}{\mathrm{.}}5`$ east of the quasar, which appears almost 10 times fainter than the apparent object 0$`\stackrel{}{\mathrm{.}}`$8 south of the quasar on the HST \[O III\] image, but there is no feature present at this location on our deconvolved image. While the WFPC2 Instrument Handbook does not mention the presence of nearly in-focus ghost images, these doubts about the reality of the feature prompted us to check with Matthew McMaster, a data analyst at STScI specializing in WFPC2 instrument anomalies and reduction of linear ramp filter data. He examined a calibration image of a star taken with the same filter and centered on nearly the same wavelength, and he found a similar feature at exactly the same position in relation to the star, confirming that the apparent object seen near 3C 48 is an artifact. The resolution of the \[O II\] $`\lambda \lambda 3726`$,3729 doublet in the brighter regions of the extended emission is of special interest, since electron densities can be determined from the intensity ratio (e.g., Osterbrock 1989). For the bright pair of regions centered 4″ N of the quasar (see Fig. 2$`i`$), we find a ratio $`I(3729)/I(3726)=1.25\pm 0.05`$, corresponding to an electron density of 150 $`e^{-}`$ cm<sup>-3</sup>, assuming an electron temperature $`T_e=10^4`$ K. This density is sufficiently high that it is unlikely that the emitting gas, at its location some 20 kpc projected distance from the quasar, is in hydrostatic pressure equilibrium with the surrounding medium.
Instead, it is probably either confined gravitationally (if, for example, the gas is in a dark-matter dominated dwarf galaxy) or compressed by shocks due to collisions of gas clouds during the interaction. This latter process seems to be the more likely one, at least for the string of emission and star-forming regions along the leading edge of the NW tail of 3C 48. ## 4 Discussion ### 4.1 Evidence for a Merger Our modeling of the stellar populations in the faint tail-like structure extending to the NW of the quasar indicates that this extension is made up mostly of old stars. The predominance of old stellar populations in a feature that clearly has a dynamical timescale much shorter than the age of the stars that comprise it points strongly to a tidal origin. (The possibility that the apparent old population is the result of a truncated IMF rather than actual old stars can be discounted since we observe unambiguously high mass stars in some clumps within the tail and elsewhere in the main body of the host.) There can be little doubt that the feature is indeed a tidal tail. If this tail has an inclination angle $`i`$ between 30°and 60°with respect to the plane of the sky, the dynamical age of the feature is between 100 and 300 Myr. Since the oldest starburst ages we find in the host galaxy are 114 Myr, the dynamical age of the tail indicates that the starbursts were induced after the initial encounter of the interacting galaxies. If the host galaxy of 3C 48 has undergone strong gravitational interaction with a second object, and both objects originally possessed cold stellar disks, one might expect to observe also counter-tidal features such as a second tidal tail. Boyce, Disney & Bleaken (1999) published the archival HST F555W image of 3C 48 and identified a second tidal tail “stretching 8$`\stackrel{}{\mathrm{.}}`$5 to the south-east of the quasar”. 
We see a hint of a “tail” south of the nucleus and extending towards the south-east in both the HST images and our R′ image, but it is only about 6″ long. The object that Boyce et al. seem to consider the SE end of the “tail” is a background galaxy at redshift $`z=0.8112`$ (object 4 in Table 3). When this object is removed, the case for this tidal tail becomes considerably less compelling. Another possible second tidal tail is the feature starting SE of the nucleus, but arching from the SE towards the SW. The broad-band optical ground-based images show a relatively bright feature extending 3″ SE of the nucleus, and the HST images show peculiar structure around the same area. This could be the tidal tail of the merging companion, with clumps of star formation (see below). Typically, double tails in merging systems appear to curve in the sense that they are roughly rotationally symmetric. If this SE extension is a tidal tail, the two tails would, instead, present more of a “gull-wing” appearance. While this is not the usual case, a combination of disks inclined to the plane of the mutual orbit and projection effects can lead to precisely such a configuration. The best-known example is “The Antennae” (NGC 4038/9; Whitmore & Schweizer 1995 and references therein), whose configuration has been reproduced in numerical simulations (Barnes 1988; Toomre & Toomre 1972). Our kinematic results clearly indicate that the NW tidal tail has approaching velocities with respect to the main body of the galaxy. On the other hand, we find some evidence that the southern extension(s) has receding velocities. Whether the 3″ SE extension is a tidal tail and the system has an Antennae-like configuration, or the longer extension mentioned by Boyce et al. (1999) is the tidal tail forming a more typical merging configuration, the observed velocities are qualitatively in agreement with what we would expect from theoretical models (Toomre & Toomre 1972).
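The dynamical-age argument for the NW tail (§4.1) can be made concrete with a small calculation. This is only a sketch: the projected tail length and line-of-sight velocity below are our own illustrative guesses (the text does not quote them), and the geometry assumes a straight tail inclined at angle $`i`$ out of the sky plane, so that the travel time is $`t=(L_{proj}/v_{los})\mathrm{tan}i`$.

```python
import math

# Hypothetical inputs, not values quoted in the paper:
L_PROJ_KPC = 50.0   # assumed projected length of the NW tidal tail
V_LOS_KMS = 300.0   # assumed line-of-sight velocity offset of the tail

KPC_IN_KM = 3.086e16
MYR_IN_S = 3.156e13

def dynamical_age_myr(incl_deg):
    # Deprojected length L_proj / cos(i) over deprojected speed v_los / sin(i)
    t_s = (L_PROJ_KPC * KPC_IN_KM / V_LOS_KMS) * math.tan(math.radians(incl_deg))
    return t_s / MYR_IN_S

t30 = dynamical_age_myr(30.0)  # lower end of the inclination range
t60 = dynamical_age_myr(60.0)  # upper end; tan(60)/tan(30) = 3 gives the factor-3 spread
```

With these assumed numbers the two limits come out near 90 and 280 Myr, roughly reproducing the quoted 100-300 Myr range; more robustly, the factor-of-three width of that range depends only on the inclination limits, not on the assumed length and velocity.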
However, there is much uncertainty in the interpretation of these southern features, and at this stage we cannot conclusively decide which, if any, of these features is a real tidal tail. Our spectroscopy confirms that the clumps in the NW tidal tail observed on HST images (Kirhakos et al. 1999) are regions of star formation, with ages $`33`$ Myr. Similar clumps have been observed in a number of strongly interacting or merging systems such as NGC 3628 (Chromey et al. 1998), IRAS 19254$``$7245 (Mirabel, Lutz & Maza 1991), Arp 105 (Duc & Mirabel 1994) and NGC 7252 (Schweizer & Seitzer 1998). Small star forming clumps in the tidal tails of merging systems are also predicted by numerical simulations (e.g., Elmegreen, Kaufman & Thomasson 1993; Mihos & Hernquist 1996). The observational evidence seems to indicate that 3C 48 is a major merger according to the definition of Mihos & Hernquist (1994,1996), that is, a merger of two gas-rich galaxies of roughly comparable size. In addition, at least one of the galaxies probably had a significant bulge component. In the absence of a bulge, merging galaxies are predicted to undergo early dissipation, producing central starbursts in the disks shortly after the initial encounter, when the galaxies are still widely separated, and leaving little gas to form stars at the time of the merger (Mihos & Hernquist 1996). In the case of 3C 48, however, we observe that there are strong starbursts still going on while the galaxies are close to the final stages of merging. The detection of large amounts of molecular gas in 3C 48 (Scoville et al. 1993; Wink et al. 1997), along with the very young ages for starbursts that we observe ($`4`$ Myr), indicates that high rates of star formation are still present in the inner regions. 
Merging disk/bulge/halo galaxies have much weaker initial encounter starbursts, so that most of the gas is conserved until the final merger and the rapid collapse of this gas drives a tremendous starburst in the center of the merging system. The importance of the presence of a massive bulge in preserving gas for star formation at the time of the final merger depends on rather uncertain assumptions regarding the star-formation law. However, there is little disagreement that an interaction involving a disk that is marginally stable against bar formation will lead to enhanced star formation at the time of the initial passage. The fact that all of the starbursts we observe were produced after the tidal tail was first launched strongly suggests that the presence of a bulge was important in the case of 3C 48. ### 4.2 The Effect of the Radio Jet As mentioned in §1, 3C 48 is a compact steep-spectrum radio source. The bright radio structure extends $`0\stackrel{}{\mathrm{.}}5`$ and takes the form of a one-sided jet extending approximately to the N, but with considerable distortion and local irregularities (Simon et al. 1990; Wilkinson et al. 1991). Lower-surface-brightness emission extends in a fan to the E side of this jet and continues to the NE out to a distance of 1″ from the nucleus. The structure of this radio emission indicates interaction with a dense gaseous medium (Wilkinson et al. 1991). Is there any evidence in the optical observations for such an interaction? We see spatially resolved velocity structure in the strong blueshifted emission component near the nucleus, and we find weaker broad, high-velocity emission at greater distances. These high velocities, ranging from $`500`$ km s<sup>-1</sup> with respect to the systemic velocity to over twice that, cannot plausibly be due to gravitational dynamics. Instead, they very likely reflect the interaction of the radio jet with the ambient medium.
As mentioned in §3.2, some of the youngest stellar populations are found just NE of the quasar. It was in this region that Stockton & Ridgway (1991) discovered what looked like a secondary nucleus 1″ from the quasar nucleus, which they called 3C 48A. Figure 2$`g`$ and $`h`$ show WFPC2 PC images of 3C 48 from the HST archive, which confirm the object found in the ground-based imaging and show much intriguing structural detail. While it is still possible that this may be, in fact, the distorted nuclear regions of the companion galaxy in the final stages of merger, it now seems more likely that it, too, is related to the interaction of the radio jet with the dense surrounding medium, as suggested by Chatzichristou et al. (1999). The WFPC2 PC images (Fig. 2$`g`$,$`h`$) show an almost circular edge brightening effect around 3C 48A, which Kirhakos et al. 1999 suggest is an image ghost artifact. After careful inspection, we believe it to be a real feature, especially since it perfectly surrounds the peak seen in both the ground-based and HST imaging. The VLBI map of 3C 48 (Wilkinson et al. 1991) shows diffuse radio emission extending into this region. Together with the spectroscopic evidence, the radio morphology suggests that this edge-brightening effect may be the remains of a bubble, where the radio jet had recently, but temporarily, broken through the dense gas in the inner region. We believe that this is not an active bubble at present because (1) the present jet appears to be deflected along a more northerly direction, and (2) the shell-like structure is not apparent in the emission-line image, indicating that it probably comprises stars, which may have been formed as shocks compressed the gas at the boundaries of the bubble. From the weakness of the core component in 3C 48, Wilkinson et al. (1991) argue that the radio jet is truly one sided, i.e., we are not missing an oppositely directed jet simply because of Doppler boosting. 
Our spectroscopy tends to confirm this view: we see widespread high-velocity gas with blueshifts relative to the systemic velocity, but not with redshifts. ## 5 Summary and Conclusions In summary, * We have measured redshifts from stellar absorption features in 32 regions of the host galaxy. The average redshift in the central region of the host is close to $`z=0.3700`$, and there are relatively small variations ($`50`$ km s<sup>-1</sup>) in the main body of the galaxy. The large faint feature extending NW of the quasar is clearly blueshifted with respect to the main body of the host galaxy by as much as 300 km s<sup>-1</sup>. Some regions in the SE of the galaxy are redshifted, suggesting that the system is rotating about an axis oriented roughly NE—SW. * We have successfully modeled the stellar populations in the same 32 regions of the host galaxy of 3C 48. Spectra from most regions can be modeled by an old stellar component plus an instantaneous burst population, presumably the stellar component present in the galaxies prior to interaction plus a starburst produced as a result of the interaction. Spectra from the remaining regions are well fit by the old component alone. * The feature extending NW of the quasar is a tidal tail composed mostly of an old stellar component, with clumps of recent star formation, much like those observed in other merging systems. The estimated dynamical age of this feature (between 0.1 and 0.3 Gyr) is much younger than the age of the dominating stellar population ($``$ 10 Gyr) and roughly equal to or older than the starburst ages we find anywhere on the host ($``$ 0.11 Gyr). Thus the starbursts occurred after the tidal tail was initially launched. This delay of most of the star formation until close to the time of final merger indicates that at least one of the merging galaxies probably had a massive bulge capable of stabilizing the gas in the inner disk. 
* We find very young stellar populations in the central regions of the host galaxy, with the youngest populations ($`4`$ Myr) in a region 1″ NE of the quasar and in a region 2″ SE of the quasar. The youngest ages we find ($`<`$ 10 Myr) indicate ongoing star formation. * The large amounts of observed CO (Scoville et al. 1993; Wink et al. 1997) along with the very large fractions of gas already used up to form stars in the starburst regions we observe indicate that 3C 48 may be near the peak of its starburst activity. * The extended emission is distributed mostly in several discrete regions. Most of this gas falls in one of two distinct velocity regimes: either (1) within $`200`$ km s<sup>-1</sup> of the systemic velocity, or (2) blueshifted with respect to the systemic velocity by $`500`$ km s<sup>-1</sup>. Densities in at least the brighter clumps of extended emission are $`150`$ $`e^{-}`$ cm<sup>-3</sup>, indicating probable compression by shocks resulting from the interaction. The presence of at least one region of recent star formation in the tail indicates that some of these clumps reach sufficient densities to become self-gravitating. * Extremely luminous high-velocity emission close to the quasar nucleus and extended emission with high velocities and large velocity widths, together with the convoluted shape of the VLBI radio jet, indicate a strong interaction between the radio jet and the ambient gas. The luminosity peak 1″ NE of the quasar and the edge-brightening around this region seen in the HST images may be a relic of a previous, but still quite recent, interaction between the radio plasma and the ambient material. From these results, we can outline a tentative evolutionary history for the host galaxy of 3C 48. Two gas-rich galaxies of comparable mass, at least one of which has a massive bulge, interact strongly enough for tidal friction to have a significant influence on their mutual orbit.
The orbit is prograde with respect to the stellar disk of at least one of the galaxies, and, at their first close passage, a tidal tail is produced from this disk. Little star formation occurs at this point, but the tail does include some gas which will later form small star-forming regions, particularly along the leading edge of the tail. Most of the star formation is delayed by a few hundred Myr until the final stages of the merger, when gas-flows into the center also trigger the quasar. A one-sided radio jet is produced, which interacts strongly with the dense gas that has accumulated near the center, producing a bubble of shocked gas to the NE of the quasar. The shocked gas at the boundary of this bubble forms stars; shortly afterwards, the jet is deflected along a more northerly direction. Information on stellar populations in host galaxies and close companions to QSOs can potentially give us a substantial lever in attempting to sort out the nature and time scales of various phenomena associated with the initiation and evolution of QSO activity. Numerical simulations suggest that the peak of starburst activity for mergers occurs at roughly the same time as a major gas inflow towards the center of the galaxy, so the time elapsed since the peak of the central starburst may be approximately coincident with the QSO age. Thus, by determining starburst ages in these objects we may be able to place them on an age sequence, and this in turn may help clarify the relationship between ULIGs and QSOs. In particular, our results are suggestive of a connection between 3C 48 and the ULIG population. We have shown that 3C 48 is likely to be near the peak of the starburst activity, which would place it near the beginning of the age sequence mentioned above. We have also presented evidence that 3C 48 is in the final stages of a merger; ULIGs are found preferentially in the final merging phase (Surace et al. 1998; Mihos 1999).
Further, 3C 48 occupies a place in the FIR diagram close to that of ULIGs (Neugebauer et al. 1986; Stockton & Ridgway 1991), and Haas et al. 1998 find that the FIR emission of 3C 48 is unambiguously dominated by thermal emission. Indeed the mass and dominant temperature of dust in 3C 48 appears to be very similar to that of ULIGs (Klaas et al. 1997; Haas et al. 1998). In short, if the evolutionary sequence proposed by Sanders et al. (1988) is correct, 3C 48 seems to be viewed right after the optical QSO becomes visible. While this scenario may seem plausible, there are clearly a number of worries. To what extent is it legitimate to use a star-formation age measured several kpc from the nucleus as a proxy for that in the nucleus itself, which is the age that we might expect to be most closely correlated with that of the QSO activity? Given that we are not sure what angular-momentum transfer mechanisms operate to bring the gas from the 100-pc-scale of the central starburst to the sub-pc scale of the QSO accretion disk, is our assumption that the starburst peak and the triggering of the QSO activity are more-or-less contemporaneous valid? The picture also becomes more complex when one tries to make sense of recent related observations. Tran et al. 1999 concluded that QSOs are likely to be found only in host galaxies with a dominant stellar population older than 300 Myr. Apparently, their reasoning is based on the observation that ULIGs have starbursts with mean ages of $`300`$ Myr, and the belief that the active nucleus only becomes visible at a later stage. 3C 48 is a clear counterexample to this view. There clearly must be a dispersion, and quite likely a rather broad one, in the properties of ULIGs that govern the time it takes for dust to be cleared from the inner regions. Even more important may be the viewing angle. For 3C 48, there is almost certainly a range of lines of sight for which the quasar is hidden and the object would be classified solely as an ULIG. 
In a study of off-nuclear optical spectroscopy of 19 QSO host galaxies, Kukula et al. 1997 found much older dominant stellar populations in the hosts, with ages ranging from $`2`$ Gyr to $`11`$ Gyr. While there certainly do seem to be QSOs for which the host-galaxy spectrum is dominated by old stars, our detailed study of 3C 48 does inject a note of caution regarding such determinations, particularly those based on a single slit position in the outskirts of the host galaxy. If we were to confine our interpretation of the stellar population of 3C 48 to an integrated spectrum from points $`6\mathrm{}`$ from the quasar, we would have found only an old population. However, it is clear that our selection of QSOs with FIR colors similar to those of ULIGs has strongly biased our results; in fact, none of the host galaxies or strongly interacting companions in this sample that we have yet observed (e.g., PG 1700+518: Canalizo & Stockton 1997; Stockton, Canalizo & Close 1998; Mrk 231, IR 0759+651, Mrk 1014: Canalizo & Stockton 2000) show dominant populations as old as the youngest of those observed by Kukula et al. 1997. On a more speculative note, there may be a correlation between the morphology of the merging galaxies, the star-formation properties, and the QSO properties. If the models of Mihos & Hernquist (1996) are at least qualitatively correct, mergers involving disk galaxies of nearly equal mass, but with insignificant bulges, will have a rather broad peak of star formation in which they use up most of their available gas near the time of first passage, leaving little for further star formation at the time of final merger. Similar mergers involving pairs with substantial, centrally concentrated bulges have only a small amount of star formation prior to their final merger, when most of the star formation occurs in quite a sharp peak. Thus one expects the most luminous starbursts to be in mergers involving gas-rich galaxies with massive bulges. 
Comparison of Monte-Carlo simulations of mergers with images of a sample of ULIGs does indicate that most of the latter are in the late stages of merger, consistent with stability of their inner disks over the early phases of the interaction (Mihos 1999). While bulges may be important in determining the timing and the rate of star formation during an interaction, they may also have some bearing on the nature and strength of the nuclear activity. The recent demonstration of an apparent correlation between bulge mass and black-hole mass (Magorrian et al. 1998) indicates that those mergers for which gas-flows into the center are delayed by the presence of stabilizing bulges and that have the highest intrinsic flow rates may also involve the most massive black holes. If QSO luminosities are statistically correlated with the Eddington limits on their black-hole accretion rates, then these sorts of mergers are likely to produce the most luminous QSOs. While there are many qualifications to this scenario (e.g., current N-body simulations fall several orders of magnitude short of being able to follow the gas to the scale of the accretion disk; feedback from stellar winds and supernova outflows on the infalling gas is not well treated), it does at least supply a motivation to be alert to possible differences between QSOs for which the star formation and nuclear activity occur while the objects are still distinctly separated, such as PG 1700+518 (Canalizo & Stockton 1997; Stockton et al. 1998), and those, like 3C 48, where the activity appears to peak close to the time of final merger. We thank Bill Vacca, John Tonry, Dave Sanders, and Josh Barnes for helpful discussions, and Susan Ridgway for assisting in some of the observations. We also thank Richard Hook for supplying us with his CPLUCY routine and Matt McMaster for his detective work in confirming the spurious nature of the apparent object near 3C 48 in the HST \[O III\] image.
We are grateful to the referee, Linda Dressel, for useful comments and suggestions. This research was partially supported by NSF under grant AST95-29078. ## Appendix A Serendipitous Objects Around 3C 48 A number of objects in the field of 3C 48 fell on our various slit positions. We measured redshifts for those objects which could also be identified in the HST images. Table 3 lists these objects with their coordinates (J2000.0) as measured from the HST images, redshift, whether the redshift was determined from emission or absorption features, and the slit position ID (see §2). Gehren et al. (1984) suggest that the object 12″ NW of the quasar nucleus may be a companion galaxy to 3C 48. This object falls on the edge of our slit position E. We see evidence for a faint emission line at the 2.5 sigma level, wide enough at this resolution to be \[O II\] $`\lambda 3727`$. If this were the case, the redshift of the object would be $`z=0.378`$ ($`\mathrm{\Delta }`$v = $`1800`$ km s<sup>-1</sup> with respect to our zero velocity point in the host galaxy). This object is not listed in Table 3. Only one object in Table 3 has a redshift relatively close to that of 3C 48. This may indicate that 3C 48 is in a low density cluster (Yee & Green 1987), if in a cluster at all. The background galaxy 6″ SE of 3C 48 (object 4) stresses the importance of obtaining spectroscopic redshifts to discriminate between objects associated with QSOs and close projections.
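The velocity offsets quoted in this appendix follow from the standard low-$`\mathrm{\Delta }z`$ conversion $`\mathrm{\Delta }v=c(z_{obj}z_{sys})/(1+z_{sys})`$; a minimal sketch (the function name is ours):

```python
C_KMS = 299792.458  # speed of light in km/s

def velocity_offset_kms(z_obj, z_sys):
    """Line-of-sight velocity of an object relative to the systemic redshift."""
    return C_KMS * (z_obj - z_sys) / (1.0 + z_sys)

# The candidate companion 12" NW: z = 0.378 against the systemic z = 0.3700
dv = velocity_offset_kms(0.378, 0.3700)
```

This gives about 1750 km s<sup>-1</sup>, consistent with the quoted 1800 km s<sup>-1</sup> at the precision of the measured redshift.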
no-problem/9908/hep-lat9908033.html
ar5iv
text
# Percolation and Deconfinement in SU(2) Gauge Theory The work has been supported by the TMR network ERBFMRX-CT-970122 and the DFG under grant Ka 1198/4-1. ## 1 INTRODUCTION The critical behaviour of the Ising model can today be reformulated in terms of percolation theory: magnetization sets in when suitably defined clusters of parallel spins reach the dimensions of the system. In particular, the critical exponents for percolation then become equal to the Ising exponents. We extend this description of critical behaviour in terms of percolation to the deconfinement transition in SU(2) gauge theory for two space dimensions. We show that the percolation of Polyakov loop clusters (taken to be suitably defined areas of Polyakov loops $`L`$ of the same sign) leads to the correct deconfinement temperature and to the correct critical exponents for the deconfinement. In contrast to the conventional studies of $`L(T)`$, the use of the percolation strength $`P(T)`$ as a deconfinement order parameter remains possible also in the presence of dynamical quarks and could thus constitute a genuine deconfinement order parameter for full QCD. ## 2 PERCOLATION AND CRITICAL PHENOMENA The percolation problem is easy to formulate: just place pawns randomly on a chessboard. Regions of adjacent pawns form clusters. Percolation theory deals with the properties of these clusters when the chessboard is infinitely large. If one of the clusters spans the chessboard from one side to the opposite one, we say that the cluster percolates. Quantitatively, one counts how many pawns belong to each cluster and calculates two quantities: $``$ The average cluster size S, defined as: $`S={\displaystyle \frac{\sum _sn_ss^2}{\sum _sn_ss}}.`$ (1) Here $`n_s`$ is the number of clusters of size $`s`$ and the sums exclude the percolating cluster; this number indicates how big, on average, the non-percolating clusters are.
$``$ The percolation strength P, defined as: $`P={\displaystyle \frac{\text{size of the percolating cluster}}{\text{no. of lattice sites}}}.`$ (2) By varying the density of our pawns, a kind of phase transition occurs. We pass from a phase of non-percolation to a phase in which one of the clusters percolates. The percolation strength P is the order parameter of this transition: it is zero in the non-percolation phase and is different from zero in the percolation phase. It is particularly interesting to study what happens near the concentration $`p_c`$ where for the first time a percolating cluster is formed. It turns out that P and S as functions of the density behave respectively like the magnetisation M of the Ising model and its magnetic susceptibility $`\chi `$ as functions of the temperature T (Table 1). ## 3 DROPLET MODEL The analogy of percolation with second order thermal phase transitions leads to a natural question: Is it possible to describe these transitions in terms of percolation ? The answer to that is given by the droplet model, which reproduces precisely the results of the Ising model. The droplet model establishes the following correspondence: $$\begin{array}{ccc}\hfill \text{spins up (down)}& & \text{occupied sites}\hfill \\ \hfill \text{spins down (up)}& & \text{empty sites}\hfill \\ \hfill \begin{array}{c}\hfill \text{spontaneous}\\ \hfill \text{magnetization M}\end{array}& & \begin{array}{c}\text{strength of the}\hfill \\ \text{perc. cluster P}\hfill \end{array}\hfill \\ \hfill \text{susceptibility }\chi & & \text{av. cluster size S}\hfill \end{array}$$ The clusters are basically magnetic domains (either with positive or with negative magnetization). When there is an infinite cluster (percolation), magnetization becomes a global property of the system. One has then to find at which temperature percolation occurs and determine the corresponding critical exponents. But this requires one further conceptual feature. 
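Before turning to that feature, note that the two cluster observables defined in Eqs. (1) and (2) are simple sums once a configuration has been decomposed into labelled clusters. A minimal sketch (the labelling itself, e.g. by union-find, is assumed already done; the names are ours):

```python
from collections import Counter

def cluster_observables(cluster_sizes, spanning_id, n_sites):
    """Percolation strength P (Eq. 2) and average cluster size S (Eq. 1).

    cluster_sizes: dict mapping cluster id -> number of sites in the cluster.
    spanning_id: id of the percolating cluster, or None if none percolates.
    """
    P = (cluster_sizes[spanning_id] / n_sites) if spanning_id is not None else 0.0
    # n_s = number of non-percolating clusters of size s
    n_s = Counter(size for cid, size in cluster_sizes.items() if cid != spanning_id)
    num = sum(n * s * s for s, n in n_s.items())
    den = sum(n * s for s, n in n_s.items())
    S = num / den if den else 0.0
    return P, S

# Toy example: a spanning cluster of 40 sites plus finite clusters of sizes 3, 3, 2, 1
P, S = cluster_observables({0: 40, 1: 3, 2: 3, 3: 2, 4: 1}, spanning_id=0, n_sites=100)
```

Here P = 0.4 and S = 23/9, the sums excluding the spanning cluster as in Eq. (1).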
If the droplets are defined as clusters of nearest-neighbour spins of the same type (pure site percolation), then they do not lead to the correct Ising results, neither in two dimensions, where the critical points coincide but not the exponents, nor in three dimensions, where even the thresholds are different. In other words, correlations in pure site percolation differ from thermal Ising correlations. To solve the problem, Coniglio and Klein introduced a bond probability $`CK=1-e^{-2J\beta }`$ (J is the spin-spin coupling of the Ising model, $`\beta `$=1/kT, where k is the Boltzmann constant). The term $`2J`$ is just the energy difference $`\mathrm{\Delta }E`$ between a pair of parallel next-neighbour spins and one in which one of the spins is flipped. To define the new “droplets” one has to connect the parallel nearest-neighbour spins in a cluster with the bond probability CK. Since CK is less than one, not all the parallel spins in a cluster become part of the same droplet. With the Coniglio-Klein definition, the droplet model reproduces the results of the Ising model, both in two and in three dimensions. ## 4 EXTENDING PERCOLATION TO SU(2) GAUGE THEORY As is well known, considerations of symmetry led to the conjecture that SU(2) gauge theory and the Ising model belong to the same universality class, that is, they have the same critical exponents. This close relation inspired our work. If we take a typical SU(2) configuration at a certain temperature, there will be areas where L takes negative values, and areas where L takes positive values. Both the positive and the negative ’islands’ can be seen as local regions of deconfinement. But as long as there are finite islands of both signs, deconfinement remains a local phenomenon and the whole system is in the confined phase. When one of these islands percolates, that is, it becomes infinite, we can talk of deconfinement as a global phase of the system.
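The Coniglio-Klein construction described above can be sketched for a 2-D Ising configuration: parallel nearest neighbours are bonded with probability $`p=1-e^{-2J\beta }`$, and the resulting droplets are labelled with union-find. A toy sketch with free boundaries (the names and the seeding scheme are ours):

```python
import math
import random

def ck_droplets(spins, beta, J=1.0, seed=0):
    """Label Coniglio-Klein droplets on a 2-D array of +1/-1 spins.

    Parallel nearest neighbours (free boundaries) are bonded with
    probability p = 1 - exp(-2*J*beta); droplets are the bond clusters.
    """
    rng = random.Random(seed)
    p = 1.0 - math.exp(-2.0 * J * beta)
    ny, nx = len(spins), len(spins[0])
    parent = list(range(nx * ny))

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for y in range(ny):
        for x in range(nx):
            for dy, dx in ((0, 1), (1, 0)):
                yy, xx = y + dy, x + dx
                if yy < ny and xx < nx and spins[yy][xx] == spins[y][x]:
                    if rng.random() < p:
                        ra, rb = find(y * nx + x), find(yy * nx + xx)
                        if ra != rb:
                            parent[ra] = rb
    return [find(i) for i in range(nx * ny)], p

# At large beta, p -> 1 and an all-up configuration forms a single droplet
labels, p_bond = ck_droplets([[1] * 4 for _ in range(4)], beta=50.0)
n_droplets = len(set(labels))
# A checkerboard has no parallel neighbours, so every site is its own droplet
labels_cb, _ = ck_droplets([[(-1) ** (x + y) for x in range(4)] for y in range(4)], beta=50.0)
n_cb = len(set(labels_cb))
```

On a real configuration one would tune $`\beta `$, check for a spanning droplet, and feed the droplet sizes into the P and S estimators of §2.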
The main point of our work is to look for a suitable definition of droplets for SU(2). We expect our definition to be similar to the Coniglio-Klein one. The problem is then to find the right bond probability. In contrast to the Ising model, in which the spins can take only two values, L now takes a continuous range of values. Therefore, if we try to build a bond probability like $`CK=1-e^{-\beta \mathrm{\Delta }E}`$, $`\mathrm{\Delta }E`$ is not the same at every lattice site. We therefore used the local factor $$1-e^{-2\beta _{eff}L_iL_j}.$$ (3) Here $`\beta _{eff}`$ is an effective coupling which was shown in certain cases to approximate SU(2) pure gauge theory as a system of nearest-neighbour interacting spins. ## 5 RESULTS We performed simulations of SU(2) gauge theory in 2+1 dimensions (2 space dimensions and 1 time dimension) using four different lattice sizes, $`64^2`$, $`96^2`$, $`128^2`$, $`160^2`$. We focused on the case of $`N_\tau =2`$ lattice spacings in the time direction. Figure 1 shows the average cluster size S for our four lattices at different $`\beta `$ values. The curves clearly peak close to the physical transition, shifting slightly to the right as the lattice size increases. The second step was to perform further simulations in the very narrow range of $`\beta `$ values where the transition seems to occur. We performed a finite-size scaling analysis of our data in this smaller range. Figure 2 shows the results of our analysis. The two curves represent the best-fit values for the ratios $`\beta /\nu `$ and $`\gamma /\nu `$ calculated for different $`\beta `$ values in the range between $`\beta =3.410`$ and $`\beta =3.457`$. We found the best $`\chi ^2`$ at $`\beta =3.4407`$, where $`\beta /\nu `$ and $`\gamma /\nu `$ take the Ising values $`1/8`$ and $`7/4`$, respectively. ## 6 CONCLUSIONS We have shown that the deconfinement transition for SU(2) gauge theory in 2+1 dimensions can be described in terms of Polyakov loop percolation.
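The finite-size scaling fit behind Figure 2 can be illustrated with synthetic data: at the critical point $`P\propto L^{-\beta /\nu }`$ and $`S\propto L^{\gamma /\nu }`$, so the exponent ratios follow from log-log slopes across the four lattice sizes. A sketch with fabricated peak values (the real analysis minimizes $`\chi ^2`$ over candidate critical couplings):

```python
import math

# Synthetic S values obeying S ~ L^(gamma/nu) with the 2-D Ising gamma/nu = 7/4;
# in the real analysis these would be the measured values on each lattice.
sizes = [64, 96, 128, 160]
s_values = [L ** (7.0 / 4.0) for L in sizes]

# Least-squares slope in log-log coordinates recovers gamma/nu
xs = [math.log(L) for L in sizes]
ys = [math.log(s) for s in s_values]
n = len(xs)
xm, ym = sum(xs) / n, sum(ys) / n
gamma_over_nu = sum((x - xm) * (y - ym) for x, y in zip(xs, ys)) / sum((x - xm) ** 2 for x in xs)
```

Away from the true critical coupling the points no longer fall on a single power law, which is what the $`\chi ^2`$ criterion quoted above detects.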
# Two phase transitions in (d_{x^2-y^2}+is)-wave superconductors ## Abstract We study numerically the temperature dependencies of the specific heat, susceptibility, penetration depth, and thermal conductivity of a coupled $`(d_{x^2-y^2}+is)`$-wave Bardeen-Cooper-Schrieffer superconductor in the presence of a weak $`s`$-wave component (1) on a square lattice and (2) on a lattice with orthorhombic distortion. As the temperature is lowered past the critical temperature $`T_c`$, a less ordered superconducting phase is created in the $`d_{x^2-y^2}`$ wave, which changes to a more ordered phase in the $`(d_{x^2-y^2}+is)`$ wave at $`T_{c1}`$. This manifests itself in two second-order phase transitions. The two phase transitions are identified by two jumps in the specific heat at $`T_c`$ and $`T_{c1}`$. The temperature dependencies of the superconducting observables exhibit a change from power-law to exponential behavior as the temperature is lowered below $`T_{c1}`$ and confirm the new phase transition. PACS number(s): 74.20.Fg, 74.62.-c, 74.25.Bt Keywords: $`d_{x^2-y^2}+is`$-wave superconductor, specific heat, susceptibility, thermal conductivity. The unconventional superconductors with a high critical temperature $`T_c`$ have a complicated lattice structure with extended and/or mixed symmetry for the order parameter. It is generally accepted that for many of these high-$`T_c`$ materials, the order parameter exhibits anisotropic behavior. However, it is difficult to establish the detailed nature of the anisotropy, which could be of an extended $`s`$-wave, a pure $`d`$-wave, or a mixed $`(s+\mathrm{exp}(i\theta )d)`$-wave type. Some of the high-$`T_c`$ materials have singlet $`d`$-wave Cooper pairs and the order parameter has $`d_{x^2-y^2}`$ symmetry in two dimensions. Recent measurements of the penetration depth $`\lambda (T)`$ and the superconducting specific heat at different temperatures $`T`$ and related theoretical analysis also support this point of view.
In some cases there is the signature of an extended $`s`$\- or $`d`$-wave symmetry. The possibility of a mixed $`(s-d)`$-wave symmetry was suggested some time ago by Ruckenstein et al. and Kotliar. Several different types of measurements which are sensitive to the phase of the order parameter indicate a significant mixing of an $`s`$-wave component with a predominant $`d_{x^2-y^2}`$ state at lower temperatures below $`T_{c1}`$. For temperatures between $`T_{c1}`$ and the critical temperature $`T_c`$ only the $`d_{x^2-y^2}`$ state survives. There is experimental evidence, based on the Josephson supercurrent for tunneling between a conventional $`s`$-wave superconductor (Pb) and single crystals of YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7</sub> (YBCO), that YBCO has mixed $`d\pm s`$ or $`d\pm is`$ symmetry at lower temperatures. A similar conclusion may also be drawn from the results of the angle-resolved photoemission spectroscopy experiment by Ma et al., in which a temperature dependent gap anisotropy in Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+x</sub> was detected. The measured gaps along both high-symmetry directions are non-zero at low temperatures and their ratio is strongly temperature dependent. This observation is also difficult to reconcile with a pure $`s`$\- or $`d`$-wave order parameter and suggests that a mixed \[$`d_{x^2-y^2}+\mathrm{exp}(i\theta )s`$\] symmetry is applicable at low temperatures. However, at higher temperatures one could have a pure $`d_{x^2-y^2}`$ symmetry of the order parameter. Recently, this idea has been explored to explain the NMR data in the superconductor YBCO and the Josephson critical current observed in YBCO-SNS and YBCO-Pb junctions. More recently, a new class of c-axis Josephson tunneling experiments has been reported by Kouznetsov et al., in which a conventional superconductor (Pb) was deposited across a single twin boundary of a YBCO crystal.
In that case measurements of the critical current as a function of the magnitude and angle of a magnetic field applied in the plane of the junction provide direct evidence of an order parameter of mixed \[$`d_{x^2-y^2}+\mathrm{exp}(i\theta )s`$\] symmetry in YBCO. The microwave complex conductivity measurement in the superconducting state of high quality YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> single crystals, measured at 10 GHz using a high-Q Nb cavity, also strongly suggests a multicomponent superconducting order parameter in YBCO. More recently, Krishana et al. reported a phase transition in the high-$`T_c`$ superconductor Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub> induced by a magnetic field from a study of the thermal conductivity as a function of temperature and applied field. A possible interpretation of this measurement is the induction of a minor $`s`$ or $`d_{xy}`$ component alongside the $`d_{x^2-y^2}`$ symmetry upon the application of a weak field. There have also been recent theoretical studies using mixed $`s`$\- and $`d`$-wave symmetries, and it was noted that it is more likely to realize a stable mixed $`d+is`$ state than a $`d+s`$ state considering different couplings and lattice symmetries. As noted by Liu et al., a stable $`d+s`$ solution cannot be realized on a square lattice. However, in the presence of orthorhombic distortion, such a solution can be obtained. The special interest in these investigations was in the temperature dependence of the order parameter of the mixed $`d+is`$ state within the Bardeen-Cooper-Schrieffer (BCS) model. The study by Liu et al. of the order parameter in the mixed $`d+is`$ state also explored the effect of a van Hove singularity in the density of states on the solutions of the BCS equation. Laughlin has provided a theoretical explanation of the observation by Krishana et al.
that at low temperatures and for a weak magnetic field a time-reversal symmetry breaking state of mixed symmetry is induced in Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub>. From a study of a vortex in a $`d`$-wave superconductor using a self-consistent Bogoliubov-de Gennes formalism, Franz and Tešanović also predicted the possibility of the creation of a superconducting state of mixed symmetry. Although the creation of the mixed state in this case is speculative, they conclude that a dramatic change should be observed in the superconducting observables when a pure superconducting $`d_{x^2-y^2}`$ state undergoes a phase transition to a mixed-symmetry state. In many cases there is evidence of an $`s`$-wave admixture with a $`d_{x^2-y^2}`$ state. In view of this, in this study we present an investigation of the effect of this phase transition on the specific heat, spin susceptibility, penetration depth, and thermal conductivity. There is no suitable microscopic theory for high-$`T_c`$ superconductors. Although it is accepted that Cooper pairs are formed in such materials, there is controversy about a proper description of the normal state and the pairing mechanism for such materials. In the absence of a suitable microscopic theory, a phenomenological tight-binding model in two dimensions which incorporates the proper lattice symmetry within the BCS formalism has been suggested. This model has been very successful in describing many properties of high-$`T_c`$ materials and is often used in the study of high-$`T_c`$ compounds. We shall use this model in the present investigation. In all previous theoretical studies on mixed-symmetry superconductors a general behavior of the temperature dependence of the order parameters emerged, independent of the lattice symmetry employed. The tight-binding BCS model for a mixed $`d+is`$ state becomes a coupled set of equations in the two partial waves.
The ratio of the strengths of the $`s`$\- and $`d`$-wave interactions should lie in a narrow region in order to have coexisting $`s`$\- and $`d`$-wave phases in the case of $`d+is`$ symmetry. As the $`s`$-wave ($`d`$-wave) interaction becomes stronger, the $`d`$-wave ($`s`$-wave) component of the order parameter is quickly reduced and disappears, and a pure $`s`$-wave ($`d`$-wave) state emerges. In this work we study the temperature dependencies of different observables, such as specific heat, susceptibility, penetration depth, and thermal conductivity, of a $`(d_{x^2-y^2}+is)`$-wave BCS superconductor with a weak $`s`$-wave admixture, both on a square lattice and on a lattice with orthorhombic distortion, and find that it exhibits interesting properties. A pure $`s`$-wave ($`d`$-wave) superconducting observable exhibits an exponential (power-law) dependence on temperature. In the case of mixed $`(d_{x^2-y^2}+is)`$-wave symmetry, we find that it is possible to have a crossover from power-law to exponential dependence on temperature below the superconducting critical temperature, together with a second second-order phase transition. For a weaker $`s`$-wave admixture, in the present study we establish in the two-dimensional tight-binding model (1) on a square lattice and (2) on a lattice with orthorhombic distortion another second-order phase transition at $`T=T_{c1}<T_c`$, where the superconducting phase changes from a pure $`d`$-wave state for $`T>T_{c1}`$ to a mixed $`(d+is)`$-wave state for $`T<T_{c1}`$. The specific heat exhibits two jumps at the transition points $`T=T_{c1}`$ and $`T=T_c`$. The temperature dependencies of the superconducting specific heat, susceptibility, penetration depth and thermal conductivity change drastically at $`T=T_{c1}`$ from power-law behavior (typical of a $`d`$ state with node(s) in the order parameter on the Fermi surface) for $`T>T_{c1}`$ to exponential behavior (typical of an $`s`$ state with no nodes) for $`T<T_{c1}`$.
The order parameter for the present ($`d+is`$) wave does not have a node on the Fermi surface for $`T<T_{c1}`$ and it behaves like a modified $`s`$-wave one. The observables for the normal state are closer to those for the superconducting $`l=2`$ state than to those for the superconducting $`l=0`$ state. Consequently, superconductivity in the $`s`$ wave is more pronounced than in the $`d`$ wave. Hence, as the temperature decreases, the system passes from the normal state to a “less” superconducting $`d`$-wave state at $`T=T_c`$ and then to a “more” superconducting state with dominating $`s`$-wave behavior at $`T=T_{c1}`$, signaling a second phase transition. The pronounced change in the nature of the superconducting state at $`T=T_{c1}`$ becomes very apparent from a study of the entropy. At a particular temperature the entropy for the normal state is larger than that for all superconducting states, signaling an increase in order in the superconducting state. In the case of the present $`(d_{x^2-y^2}+is)`$ state we find that as the temperature is lowered past $`T_{c1}`$, the entropy of the superconducting $`(d_{x^2-y^2}+is)`$ state decreases very rapidly (not shown explicitly in this work), indicating the appearance of a more ordered superconducting phase and a second phase transition. We base the present study on the two-dimensional tight-binding model which we describe below. This model is sufficiently general for considering mixed angular momentum states, with or without orthorhombic distortion, employing nearest and second-nearest-neighbour hopping integrals. The effective interaction is taken to possess an on-site repulsion $`(v_r)`$ and a nearest-neighbour attraction $`(v_a)`$ and can be represented as $$V_{\mathrm{𝐤𝐪}}=v_r-v_a[\mathrm{cos}(k_x-q_x)+\beta ^2\mathrm{cos}(k_y-q_y)],$$ (1) where $`\beta =1`$ corresponds to a square lattice, and $`\beta \ne 1`$ represents orthorhombic distortion.
On expansion, and keeping only the $`s`$\- and $`d_{x^2-y^2}`$-wave components of this interaction, we have $$V_{\mathrm{𝐤𝐪}}=-V_0-V_2(\mathrm{cos}k_x-\beta \mathrm{cos}k_y)(\mathrm{cos}q_x-\beta \mathrm{cos}q_y).$$ (2) Here $`V_0=-v_r+(1+\beta ^2)v_a/2`$ and $`V_2=v_a/2`$ are the couplings of the effective $`s`$\- and $`d`$-wave interactions, respectively. As we shall consider Cooper pairing and subsequent BCS condensation in both $`s`$ and $`d`$ waves, the constants $`V_0`$ and $`V_2`$ will be taken to be positive, corresponding to attractive interactions. In this case the quasiparticle dispersion relation is given by $$ϵ_𝐤=-2t[\mathrm{cos}k_x+\beta \mathrm{cos}k_y-\gamma \mathrm{cos}k_x\mathrm{cos}k_y],$$ (3) where $`t`$ and $`\beta t`$ are the nearest-neighbour hopping integrals along the in-plane $`a`$ and $`b`$ axes, respectively, and $`\gamma t/2`$ is the second-nearest-neighbour hopping integral. The energy $`ϵ_𝐤`$ is measured with respect to the surface of the Fermi sea. We consider the weak-coupling BCS model in two dimensions with $`(d_{x^2-y^2}+is)`$ symmetry. At a finite $`T`$, one has the following BCS equation $`\mathrm{\Delta }_𝐤`$ $`=`$ $`-{\displaystyle \underset{𝐪}{\sum }}V_{\mathrm{𝐤𝐪}}{\displaystyle \frac{\mathrm{\Delta }_𝐪}{2E_𝐪}}\mathrm{tanh}{\displaystyle \frac{E_𝐪}{2T}}`$ (4) with $`E_𝐪=[ϵ_𝐪^2+|\mathrm{\Delta }_𝐪|^2]^{1/2}`$. We use units $`k_B=1`$, where $`k_B`$ is the Boltzmann constant. The order parameter $`\mathrm{\Delta }_𝐪`$ has the following anisotropic form: $`\mathrm{\Delta }_𝐪\equiv \mathrm{\Delta }_0+i\mathrm{\Delta }_2(\mathrm{cos}q_x-\beta \mathrm{cos}q_y)`$. Using the above form of $`\mathrm{\Delta }_𝐪`$ and potential (2), Eq.
(4) becomes the following coupled set of BCS equations $`\mathrm{\Delta }_0=V_0{\displaystyle \underset{𝐪}{\sum }}{\displaystyle \frac{\mathrm{\Delta }_0}{2E_𝐪}}\mathrm{tanh}{\displaystyle \frac{E_𝐪}{2T}}`$ (5) $`\mathrm{\Delta }_2=V_2{\displaystyle \underset{𝐪}{\sum }}{\displaystyle \frac{\mathrm{\Delta }_2(\mathrm{cos}q_x-\beta \mathrm{cos}q_y)^2}{2E_𝐪}}\mathrm{tanh}{\displaystyle \frac{E_𝐪}{2T}}`$ (6) where the coupling is introduced through $`E_𝐪`$. In Eqs. (5) and (6) both the interactions $`V_0`$ and $`V_2`$ are assumed to be energy-independent constants for $`|ϵ_𝐪|<T_D`$ and zero for $`|ϵ_𝐪|>T_D`$, where $`T_D`$ is a mathematical cutoff to ensure convergence of the integrals. It should be compared with the usual Debye cutoff for conventional superconductors. The specific heat per particle is given by $$C(T)=\frac{2}{NT^2}\underset{𝐪}{\sum }f_𝐪(1-f_𝐪)\left(E_𝐪^2-\frac{1}{2}T\frac{d|\mathrm{\Delta }_𝐪|^2}{dT}\right)$$ (7) where $`f_𝐪=1/(1+\mathrm{exp}(E_𝐪/T))`$. The spin susceptibility $`\chi `$ is defined by $$\chi (T)=\frac{2\mu _N^2}{T}\underset{𝐪}{\sum }f_𝐪(1-f_𝐪)$$ (8) where $`\mu _N`$ is the nuclear magneton. The penetration depth $`\lambda `$ is defined by $$\lambda ^{-2}(T)=\lambda ^{-2}(0)\left[1-\frac{2}{NT}\underset{𝐪}{\sum }f_𝐪(1-f_𝐪)\right].$$ (9) The superconducting to normal thermal conductivity ratio $`K_s(T)/K_n(T)`$ is defined by $$\frac{K_s(T)}{K_n(T)}=\frac{\underset{𝐪}{\sum }(ϵ_𝐪-1)f_𝐪(1-f_𝐪)E_𝐪}{\underset{𝐪}{\sum }(ϵ_𝐪-1)^2f_𝐪(1-f_𝐪)}.$$ (10) Fig. 1. The $`s`$\- and $`d`$-wave parameters $`\mathrm{\Delta }_0`$, $`\mathrm{\Delta }_2`$ in Kelvin at different temperatures for the $`(d_{x^2-y^2}+is)`$-wave models 1(a) (square lattice: full line) and 2(b) (orthorhombic distortion: dashed line) described in the text with different mixtures of $`s`$ and $`d`$ waves. Fig. 2. Specific heat ratio $`C(T)/C_n(T_c)`$ versus $`T/T_c`$ for models 1(a) and 1(b) on a square lattice: 1(a) (full line) and 1(b) (dashed line). The dashed-dotted line represents the result for the normal state for comparison.
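The self-consistent solution of Eqs. (5) and (6) can be sketched as a fixed-point iteration on a momentum grid. The snippet below is a toy version in units of t, with illustrative couplings and no Debye cutoff; it is not the parameter set of the models studied in the text:

```python
import numpy as np

def solve_gaps(T, V0, V2, t=1.0, beta=1.0, gamma=0.0, n=48, n_iter=200):
    """Iterate Eqs. (5)-(6) for Delta_0 and Delta_2 on an n x n grid.

    The Brillouin-zone sum is replaced by a grid average (the 1/N
    normalization is absorbed into the couplings V0 and V2).
    """
    k = 2.0 * np.pi * np.arange(n) / n
    kx, ky = np.meshgrid(k, k)
    # dispersion of Eq. (3) and the d-wave form factor
    eps = -2.0 * t * (np.cos(kx) + beta * np.cos(ky)
                      - gamma * np.cos(kx) * np.cos(ky))
    d = np.cos(kx) - beta * np.cos(ky)
    d0, d2 = 0.5, 0.5  # initial guesses for Delta_0 and Delta_2
    for _ in range(n_iter):
        # |Delta_q|^2 = Delta_0^2 + Delta_2^2 d_q^2 for the d+is state
        E = np.sqrt(eps**2 + d0**2 + (d2 * d) ** 2 + 1e-30)
        kernel = np.tanh(E / (2.0 * T)) / (2.0 * E)
        d0 = V0 * np.mean(d0 * kernel)           # Eq. (5)
        d2 = V2 * np.mean(d2 * d**2 * kernel)    # Eq. (6)
    return d0, d2

# A gap develops at low temperature and vanishes at high temperature:
gaps_cold = solve_gaps(T=0.02, V0=3.0, V2=1.5)
gaps_hot = solve_gaps(T=2.0, V0=3.0, V2=1.5)
```

A plain fixed-point update is stable for this sketch; a production calculation would add the cutoff $`T_D`$, the physical couplings, and a convergence criterion.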
We solved the coupled set of equations (5) and (6) numerically and calculated the gaps $`\mathrm{\Delta }_0`$ and $`\mathrm{\Delta }_2`$ at various temperatures for $`T<T_c`$. We have performed calculations (1) on a perfect square lattice and (2) in the presence of an orthorhombic distortion, with Debye cutoff $`k_BT_D=0.02586`$ eV ($`T_D=300`$ K) in both cases. The parameters for these two cases are the following: (1) Square lattice: (a) $`t=0.2586`$ eV, $`\beta =1`$, $`\gamma =0`$, $`V_0=1.8t`$, and $`V_2=0.73t`$, $`T_c=71`$ K, $`T_{c1}`$ = 25 K; (b) $`t=0.2586`$ eV, $`\beta =1`$, $`\gamma =0`$, $`V_0=1.92t`$, and $`V_2=0.73t`$, $`T_c=71`$ K, $`T_{c1}`$ = 51 K; (2) Orthorhombic distortion: (a) $`t=0.2586`$ eV, $`\beta =0.95`$, and $`\gamma =0`$, $`V_0=2.06t`$, and $`V_2=0.97t`$, $`T_c`$ = 70 K, $`T_{c1}`$ = 25 K; (b) $`t=0.2586`$ eV, $`\beta =0.95`$, and $`\gamma =0`$, $`V_0=2.2t`$, and $`V_2=0.97t`$, $`T_c`$ = 70 K, $`T_{c1}`$ = 50 K. For a very weak $`s`$-wave ($`d`$-wave) coupling the only possible solution corresponds to $`\mathrm{\Delta }_0=0`$ ($`\mathrm{\Delta }_2=0`$). We have studied the solution only when a coupling is allowed between Eqs. (5) and (6). Fig. 3. Specific heat ratio $`C(T)/C_n(T_c)`$ versus $`T/T_c`$ for models 2(a) and 2(b) for the lattice with orthorhombic distortion: for notations see Fig. 2. Fig. 4. Specific heat jump for different $`T_c`$ for pure $`s`$ and $`d`$ waves: $`s`$ wave (solid line, square lattice), $`s`$ wave (dashed line, orthorhombic distortion), $`d`$ wave (dashed-dotted line, square lattice), $`d`$ wave (dashed-double-dotted line, orthorhombic distortion). The temperature dependencies of the different $`\mathrm{\Delta }`$’s have been studied in detail previously. Here, for completeness, in Fig. 1 we plot the temperature dependencies of the different $`\mathrm{\Delta }`$’s for the following two sets of $`s`$-$`d`$ mixing corresponding to models 1(a) (full line) and 2(b) (dashed line), respectively.
In the coupled $`(d_{x^2-y^2}+is)`$ wave, as the temperature is lowered past $`T_c`$, the parameter $`\mathrm{\Delta }_2`$ increases until $`T=T_{c1}`$ is reached. With a further reduction of temperature the parameter $`\mathrm{\Delta }_0`$ appears, and the $`d`$-wave parameter $`\mathrm{\Delta }_2`$ is suppressed in the presence of a non-zero $`\mathrm{\Delta }_0`$. We also studied the effect of orthorhombicity in this problem. For fixed $`V_0`$ and $`V_2`$, if a small orthorhombicity is introduced in the model, $`T_c`$ ($`T_{c1}`$) decreases (increases). For example, if, in model 1(a) above, we vary the parameter $`\beta `$ from 1 to 0.95, there is no coupled solution involving $`s`$ and $`d`$ waves for $`\beta <0.96`$, where only the pure $`s`$-wave solution survives. The coupled solution is observed for $`\beta \ge 0.97`$ in this case. For $`\beta =0.97`$ (0.99), with the parameters of model 1(a), $`T_c`$ = 58 K (69 K), and $`T_{c1}`$ = 42 K (28 K). So the effect of introducing orthorhombicity in a model is to increase $`T_{c1}`$ and decrease $`T_c`$. As a consequence the ratio $`T_{c1}/T_c`$ increases. In order to substantiate the claim of a second phase transition at $`T=T_{c1}`$, we study the temperature dependence of the specific heat in some detail. The different superconducting and normal specific heats are plotted in Figs. 2 and 3 for the square lattice \[models 1(a) and 1(b)\] and the lattice with orthorhombic distortion \[models 2(a) and 2(b)\], respectively. The superconducting specific heat exhibits an unexpected, peculiar behavior. In both cases the specific heat exhibits two jumps: one at $`T_c`$ and another at $`T_{c1}`$. From Eq. (7) and Fig. 1 we see that the temperature derivative of $`|\mathrm{\Delta }_𝐪|^2`$ has discontinuities at $`T_c`$ and $`T_{c1}`$ due to the vanishing of $`\mathrm{\Delta }_2`$ and $`\mathrm{\Delta }_0`$, respectively, which are responsible for the two jumps in the specific heat.
For the pure $`d`$ wave we find that the specific heat exhibits a power-law dependence on temperature. However, the exponent of this dependence varies with temperature. For small $`T`$ the exponent is approximately 2.5, and for large $`T`$ ($`T\sim T_c`$) it is nearly 2. In the $`(d+is)`$-wave models, for $`T_c>T>T_{c1}`$ the specific heat exhibits $`d`$-wave power-law behavior. For the $`d`$-wave models $`C_s(T_c)/C_n(T_c)`$ is a function of $`T_c`$ and $`\beta `$. In Figs. 2 and 3 this ratio for the $`d`$-wave case, for $`T_c`$ = 70 K, is approximately 3 (2.5) for $`\beta =`$ 1 (0.95). In a continuum calculation this ratio was 2 in the absence of a van Hove singularity. For $`T<T_{c1}`$, we find an exponential behavior in both cases. In Fig. 4 we study the jump $`\mathrm{\Delta }C`$ in the specific heat at $`T_c`$ for pure $`s`$\- and $`d`$-wave superconductors as a function of $`T_c`$, where we plot the ratio $`\mathrm{\Delta }C/C_n(T_c)`$ versus $`T_c`$. For a BCS superconductor in the continuum $`\mathrm{\Delta }C/C_n(T_c)`$ = 1.43 (1.0) for an $`s`$-wave ($`d`$-wave) superconductor, independent of $`T_c`$. Because of the presence of the van Hove singularity in the present model this ratio increases with $`T_c`$, as can be seen in Fig. 4. For a fixed $`T_c`$, the ratio $`\mathrm{\Delta }C/C_n(T_c)`$ is greater for the square lattice ($`\beta =1`$) than for a lattice with orthorhombic distortion ($`\beta =0.95`$) for both $`s`$ and $`d`$ waves. At $`T_c`$ = 100 K, in the $`s`$-wave ($`d`$-wave) square lattice case this ratio could be as high as 3.63 (2.92). Many high-$`T_c`$ materials have produced a large value for this ratio. Fig. 5. Susceptibility ratio $`\chi (T)/\chi (T_c)`$ versus $`T/T_c`$ for pure $`d`$ wave (solid line, square lattice), $`d`$ wave (dashed line, orthorhombic distortion), pure $`s`$ wave (dashed-dotted line, square lattice), $`(d_{x^2-y^2}+is)`$ model 1(a) (dotted line), $`(d_{x^2-y^2}+is)`$ model 2(b) (dashed-double-dotted line).
In all cases $`T_c\simeq 70`$ K. Next we study the temperature dependencies of the spin susceptibility, penetration depth, and thermal conductivity, which we exhibit in Figs. 5-7, where we also plot the results for pure $`s`$ and $`d`$ waves for comparison. In Figs. 5-7, we show the results for the pure $`d`$-wave cases on a square lattice and with orthorhombic distortion, the $`(d+is)`$ models 1(a) and 2(b) mentioned above, and the pure $`s`$-wave case on a square lattice. In all cases reported in these figures $`T_c\simeq 70`$ K. For the pure $`d`$-wave case we obtained power-law dependencies on temperature. The exponent for this power-law scaling was independent of the critical temperature $`T_c`$ but varied from a square lattice to that with an orthorhombic distortion. In the case of thermal conductivity, the exponent for the square lattice (orthorhombic distortion, $`\beta =0.95`$) is 2.2 (1.4). For the spin susceptibility, the exponent for the square lattice (orthorhombic distortion, $`\beta `$ = 0.95) is 2.6 (2.4). For the mixed $`(d+is)`$-wave case, $`d`$-wave-type power-law behavior is obtained for $`T_c>T>T_{c1}`$ with the same exponent as in the pure $`d`$-wave case. For $`T<T_{c1}`$, there is no node in the present order parameter on the Fermi surface and one has a typical $`s`$-wave behavior. A passage from a $`d`$\- to an $`s`$-type state at $`T_{c1}`$ represents an increase in order and hence an increase in superconductivity. As the temperature decreases, the system passes from the normal state to a $`d`$-wave state at $`T=T_c`$ and then to an $`s`$-wave-type state at $`T=T_{c1}`$, signaling a second phase transition. Until now no high-$`T_c`$ material has clearly shown two jumps in the specific heat as noted in the present study. If this transition to a mixed state occurs at a low temperature, this jump is expected to be small and very high-precision experimental data will be needed for its confirmation.
The same will be needed for confirming this phase transition from a measurement of the Knight shift, which should show a change from a power-law to an exponential dependence on temperature. However, if a transition from a pure $`d`$-wave state to a mixed-symmetry state takes place in any compound, it could be identified by a study of the temperature dependencies of the observables considered in this work. Fig. 6. Penetration depth ratio $`\mathrm{\Delta }\lambda (T)\equiv [\lambda (T)-\lambda (0)]/\lambda (0)`$ versus $`T/T_c`$ for different models. For notation see Fig. 5. Fig. 7. Thermal conductivity ratio $`K_s(T)/K_n(T)`$ versus $`T/T_c`$ for different models. For notation see Fig. 5. In conclusion, we have studied $`(d_{x^2-y^2}+is)`$-wave superconductivity employing a two-dimensional tight-binding BCS model on a square lattice and also on a lattice with orthorhombic distortion, and confirmed a second second-order phase transition at $`T=T_{c1}`$ in the presence of a weaker $`s`$ wave. We have kept the $`s`$\- and $`d`$-wave couplings in such a domain that a coupled $`(d_{x^2-y^2}+is)`$-wave solution is allowed. As the temperature is lowered past the first critical temperature $`T_c`$, a weaker (less ordered) superconducting phase is created in the $`d_{x^2-y^2}`$ wave, which changes to a stronger (more ordered) superconducting phase in the $`(d_{x^2-y^2}+is)`$ wave at $`T_{c1}`$. The $`(d_{x^2-y^2}+is)`$-wave state is similar to an $`s`$-wave-type state with no node in the order parameter. The phase transition at $`T_{c1}`$ is also marked by power-law (exponential) temperature dependencies of $`C(T)`$, $`\chi (T)`$, $`\mathrm{\Delta }\lambda (T)`$ and $`K(T)`$ for $`T>T_{c1}`$ ($`<T_{c1}`$). Furthermore, the effect of orthorhombic distortion is shown to increase the transition temperature between these two superconducting phases, thus stabilizing the mixed state.
We thank Conselho Nacional de Desenvolvimento Científico e Tecnológico and Fundação de Amparo à Pesquisa do Estado de São Paulo for financial support.
# OGLE observations of four X-ray binary pulsars in the SMC ## 1 Introduction The Magellanic Clouds present a unique opportunity to study stellar populations in galaxies other than our own. Their structure and chemical composition differs from that of the Galaxy, yet they are close enough to allow study with modest-sized ground-based telescopes. The study of any stellar population in an external galaxy is of great interest because any differences from the same population in our own Galaxy will have implications for the evolutionary differences of the stars within the galaxies. Be/X-ray binaries and supergiant X-ray binaries represent two subclasses of High Mass X-ray Binaries (HMXBs). In these Be/X-ray binaries the primary is an early type emission line star, typically 10 to 20 solar masses, and the secondary is a neutron star. The Be stars are early type luminosity class III-V stars which display Balmer emission (by definition) and a significant excess flux at long (IR - radio) wavelengths (dubbed the infrared excess). These have been successfully modelled as recombination emission (Balmer) and free-free and free-bound emission (infrared excess) from a cool dense wind. However, observations in the ultraviolet regime indicate a highly ionised, far less dense wind. These apparent inconsistencies are explained in current models by assuming a non-spherically symmetric wind structure, with a hot, low density wind emerging from the poles of the star, and a cool, dense wind from the equatorial regions (the circumstellar disk). The X-ray emission is caused by accretion of circumstellar material onto the compact companion. As a consequence, many of the Be/X-ray binaries are known only as transient X-ray sources, with emission occurring when accretion is enhanced during periastron passage or when the envelope expands and reaches the neutron star. Quiescent-level emission, though low, has been detected from some of these systems.
Observations of the HMXBs in the Magellanic Clouds appear to show marked differences in the populations. The X-ray luminosity distribution of the Magellanic Clouds sources appears to be shifted to higher luminosities relative to the Galactic population. There also seems to be a higher incidence of sources suspected to contain black holes (Clark et al. 1978; Pakull 1989; Schmidtke et al. 1994). Clark et al. (1978) attribute the higher luminosities to the lower metal abundance of the Magellanic Clouds, whilst Pakull (1989) refers to the evolutionary scenarios of van den Heuvel & Habets (1984) and de Kool et al. (1987), which appear to favour black hole formation in low metal abundance environments, i.e. the Magellanic Clouds. In order to study the differences between the HMXB populations of the Magellanic Clouds and the Galaxy, it is desirable to determine the physical parameters of as many systems as possible. We can then investigate whether the distributions of mass, orbital period, or spectral type are significantly different. To address these questions we need to identify the optical counterparts to the X-ray sources which remain unidentified, and so increase the sample size to a statistically significant number. As part of this programme we have searched the OGLE photometric database of 2.2 million stars (Udalski et al. 1998) to look for counterparts to X-ray pulsars thought to be in HMXBs. Four such systems lie within the OGLE fields and the results from binary period searches are presented here. ## 2 The OGLE data The Optical Gravitational Lensing Experiment (OGLE) is a long-term observational program whose main goal is to search for dark, unseen matter using the microlensing phenomenon (Udalski et al. 1992). The Magellanic Clouds and the Galactic Bulge are the most natural locations to conduct such a search due to the large number of background stars that are potential targets for microlensing.
As a result daily photometric measurements are made of $`\sim `$2.2 million stars in the SMC, and, as such, the survey provides an extremely valuable resource for determining the light curves of objects included in it. Figure 1 shows the distribution of the 11 known X-ray pulsars superimposed on an outline image of the Small Magellanic Cloud. The OGLE scan regions are described in Udalski et al. (1998), from which it can be seen that 7 out of the 11 pulsars lie outside the region covered by that experiment. In fact the overall distribution of these X-ray pulsars is far from consistent with the general visible mass distribution of the SMC - a fact undoubtedly related to the details of star formation in this object. In general the OGLE data cover the period June 1997 to February 1998 and primarily consist of I band observations, though some observations were also taken in B and V. Most of the work presented in this paper consists of identifying the possible optical counterparts to X-ray pulsars within the OGLE database, determining their colours and searching for regular time variability indicative of a binary period. ### 2.1 1WGA J0054.9-7226 This source was catalogued using ROSAT by Kahabka and Pietsch (1996) and identified as an X-ray pulsar with a period of 59s by RXTE (Marshall & Lochner 1998). An optical study of the X-ray error boxes clearly identified one particular object with strong H$`\alpha `$ emission as the counterpart to the X-ray source (Stevens, Coe & Buckley 1998). This object was identified within the OGLE database as object number 70829 in Field 7 with the following magnitudes: V=15.28, B=15.24 and I=15.13 (all magnitudes have errors of $`\pm `$ 0.01). Its position, accurate to 0.1 arcsec, is RA=00 54 56.17, dec=-72 26 47.6 (2000). Its position on a colour-colour diagram is shown in Figure 2 along with the nearest 70-80 other stars. The position of the candidate is consistent with that expected for a B0-B1 object.
More precisely, taking V=15.28 and using a distance modulus to the SMC of (m-M)=18.9, together with a reddening of E(B-V) in the range 0.06-0.28 (Hill, Madore & Freedman 1994), we obtain an absolute V magnitude in the range -3.8 to -4.5. A B0V star has an absolute V magnitude of -4.0 and a B1III star one of -4.4. In addition, it is worth noting that the observed (B-V)= -0.04 is almost the same as that found for another SMC X-ray counterpart, RX J0117.6-7330 (Coe et al. 1998), which was identified with a star in the range B1-B2 (luminosity class III-V). Thus one can be fairly confident of a similar spectral class for 1WGA J0054.9-7226. Figure 3 shows the power spectrum of the source obtained using CLEAN on the 96 daily I band observation points. A strong peak occurs at a period of 14.26d. However, when the data are folded at either that period or twice the period, no strong light curve emerges. There is some evidence of a small sinusoidal modulation with a semi-amplitude of 0.008$`\pm `$0.001 magnitudes, which presumably is the source of the peak in the power spectrum seen in Figure 3. ### 2.2 RX J0050.7-7316 This 323s X-ray pulsar was initially catalogued by ROSAT and subsequently discovered to be a pulsar by Yokogawa & Koyama (1998) using the ASCA satellite. Subsequently Cook (1998) identified one of the objects in the 1 arcmin X-ray error circle as having a period of 0.708d using data from the MACHO collaboration. Schmidtke & Cowley (1998) independently confirmed this identification. In the OGLE database this source was identified as Object number 180026 in Scan Field 5. Its position is given as RA = 00 50 44.7, dec = -73 16 05 (2000) with uncertainties of $`\pm `$0.1 arcsec. Its colours are V=15.44, B=15.41 and I=15.27 with uncertainties of $`\pm `$0.01 magnitudes. A colour-colour plot of objects in the immediate vicinity of the source is shown in Figure 4, from which the blue nature of the counterpart is very clear.
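The spectral-type estimate follows from M_V = V - (m-M) - A_V with A_V = R_V E(B-V); taking R_V = 3.1 (the standard Galactic value, our assumption here since the text quotes only the resulting range) reproduces the quoted magnitudes:

```python
def absolute_v(v, dist_mod=18.9, ebv=0.0, r_v=3.1):
    """Absolute V magnitude: M_V = V - (m - M) - A_V, with A_V = R_V * E(B-V)."""
    return v - dist_mod - r_v * ebv

# 1WGA J0054.9-7226: V = 15.28, E(B-V) = 0.06 to 0.28
m_v_low_reddening = absolute_v(15.28, ebv=0.06)   # about -3.8
m_v_high_reddening = absolute_v(15.28, ebv=0.28)  # about -4.5
```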
A CLEAN periodicity search of the 134 I band data points clearly revealed a periodicity in the data of 0.708d - see Figure 5. This is exactly the same as reported by Cook (1998). A full discussion of the light curve of this source is presented in Section 3 below. ### 2.3 RX J0049.1-7250 This X-ray source was identified as an X-ray pulsar with a period of 75s by RXTE (Corbet et al. 1998) after its discovery by Kahabka and Pietsch (1996) from ROSAT data. Stevens, Coe & Buckley (1998) carried out a photometric study of the objects in the X-ray error box and concluded that Star 1 (a previously known Be star) was the most likely counterpart. However, a second Be star (denoted Star 2 in their paper) lies just on the edge of the ROSAT X-ray positional uncertainty and cannot be ruled out. Both of these objects were identified in the OGLE database in Scan Field 5. The data on these two objects are presented in Table 1. A colour-colour plot for the whole of the region surrounding these two sources is shown in Figure 6. From this figure it can be seen that the two candidates both lie at the blue edge of the distribution, consistent with their Be star nature. Apart from this there appears to be nothing else remarkable about their colours. Both of their I band lightcurves were searched using the CLEAN algorithm for possible periodicities similar to those found in the previous two systems, but nothing was seen in the period range 1-50d. A modulation upper limit of $`\sim \pm `$0.01 magnitudes may be set based upon the signals seen from similar data runs of other sources in this work. ### 2.4 1SAX J0103.2-7209 This object was identified in 1998 by the SAX satellite (Israel & Stella 1998). Its X-ray signal is modulated at a period of 345s and it has been identified with the brightest object in the X-ray error circle, a V=14.8 magnitude Be star. This star has previously been proposed as the counterpart to an earlier X-ray source RX J0103-722 by Hughes & Smith (1994). 
The proposed optical counterpart - a bright Be star - was identified in the OGLE database as Object number 173121 in Scan Field 9. Its magnitudes are given as V=14.8, B-V=-0.089 and V-I=0.132 (all magnitudes have errors of $`\pm `$0.01). Its position is given as RA=01 03 13.9, declination = -72 09 14.0 (2000), with a positional uncertainty of $`\pm `$0.1 arcsec. A colour-colour plot of objects in the region around the source is shown in Figure 7. The position of the proposed candidate is marked, and it is obviously in a very similar location on this diagram to the other sources discussed in this paper - confirming the Be star nature of this object. There are many more objects of similar magnitude within the SAX X-ray uncertainty circle (40 arcsec), but Hughes & Smith (1994) obtained a much smaller X-ray error circle ($`\sim `$10 arcsec) using ROSAT which includes this Be star. It is therefore very likely that this is the correct counterpart to the X-ray pulsar. Timing analysis of 104 daily I band photometric measurements from OGLE revealed no evidence for any coherent period in the range 1 to 50 days with a modulation upper limit of $`\sim \pm `$0.02 magnitudes. ## 3 Modelling the light curve of RX J0050.7-7316 Using (m-M)=18.9 and E(B-V)=0.06 to 0.28, we find an absolute V magnitude of -3.65 to -4.33 for the mass donor star of RX J0050.7-7316. This corresponds to a spectral type of B0 to B1, luminosity class III-V. According to Gray (1992), a B0V star has a mass of $`13.2M_{\odot }`$ and a radius of $`6.64R_{\odot }`$, and a B2V star has a mass of $`8.7M_{\odot }`$ and a radius of $`4.33R_{\odot }`$. 
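The absolute-magnitude range quoted above for the mass donor can be reproduced directly. The extinction law A<sub>V</sub> = 3.1 E(B-V) is our assumption (the standard Galactic ratio), since the text does not state it explicitly:

```python
# Distance-modulus estimate for the RX J0050.7-7316 counterpart, using the
# values quoted in the text; A_V = 3.1 E(B-V) is an assumed extinction law.
V, dist_mod = 15.44, 18.9
M_V = [round(V - dist_mod - 3.1*ebv, 2) for ebv in (0.06, 0.28)]
print(M_V)   # -> [-3.65, -4.33], the range quoted in Section 3
```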
In order to get some idea of the scale of this binary, we can use Eggleton’s (1983) formula to compute the relative radii of the Roche lobes $$\frac{R_{\mathrm{L1}}}{a}=\frac{0.49q^{2/3}}{0.6q^{2/3}+\mathrm{ln}(1+q^{1/3})},$$ (1) (where $`q=M_\mathrm{B}/M_\mathrm{X}`$ is the mass ratio), and Kepler’s Third Law to compute the separation between the two components (assuming a circular orbit) $$a=4.2P_{\mathrm{day}}^{2/3}(M_{\mathrm{total}}/M_{\odot })^{1/3}R_{\odot },$$ (2) (where $`P_{\mathrm{day}}`$ is the orbital period in days). We find $`R_{\mathrm{L1}}=3.89R_{\odot }`$ using $`P_{\mathrm{day}}=0.708`$, $`M_\mathrm{X}=1.4M_{\odot }`$, and $`M_\mathrm{B}=8.7M_{\odot }`$. If the 0.708 day periodicity were the orbital period, the B star clearly would overfill its Roche lobe by a large margin. If we assume the orbital period is 1.416 days, then we find $`R_{\mathrm{L1}}=6.17R_{\odot }`$ using $`M_\mathrm{X}=1.4M_{\odot }`$, and $`M_\mathrm{B}=8.7M_{\odot }`$. We conclude the orbital period of RX J0050.7-7316 must be 1.416 days in order for the B star to fit within the Roche lobe. Even then, the B star still fills a large fraction of its Roche lobe. Folding the I and V band data for RX J0050.7-7316 at a period of 1.416 days produces the light curves shown in Figure 8. Note that the gaps exist in the V band data because there were far fewer observations carried out in this band (32 compared to 134 I band measurements). The light curves are double-wave with two maxima and two minima per cycle. Since the B star probably comes close to filling its Roche lobe and is hence tidally distorted, we may presume that the optical modulation is caused by the well-known ellipsoidal variations (e.g. Avni & Bahcall 1975; Avni 1978). The amplitude of the ellipsoidal light curve depends on three basic parameters: the inclination of the orbital plane, the mass ratio, and the Roche lobe filling fraction of the distorted star. Thus one can in principle model ellipsoidal light curves and obtain constraints on the system geometry. 
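Equations (1) and (2) are easy to evaluate numerically. The sketch below reproduces the two Roche lobe radii quoted above; the closing eclipse estimate uses a simplified spherical-star criterion, which is our assumption for illustration and not part of the full light-curve model:

```python
import math

def roche_lobe_fraction(q):
    """Eggleton (1983): R_L/a for the star whose mass ratio to its
    companion is q; here q = M_B/M_X for the B star's lobe."""
    q23 = q**(2.0/3.0)
    return 0.49*q23 / (0.6*q23 + math.log(1.0 + q**(1.0/3.0)))

def separation(p_day, m_total_solar):
    """Kepler's Third Law, circular orbit, result in solar radii."""
    return 4.2 * p_day**(2.0/3.0) * m_total_solar**(1.0/3.0)

m_x, m_b = 1.4, 8.7                            # masses used in the text
frac = roche_lobe_fraction(m_b/m_x)
rl_short = frac*separation(0.708, m_x + m_b)   # ~3.89 R_sun
rl_long  = frac*separation(1.416, m_x + m_b)   # ~6.17 R_sun
print(round(rl_short, 2), round(rl_long, 2))

# Crude eclipse estimate: the neutron star is hidden when cos i < R_B/a,
# taking the B star at its Roche radius (simplified spherical criterion).
i_min = math.degrees(math.acos(rl_long/separation(1.416, m_x + m_b)))
print(round(i_min))   # ~57 deg
```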
However, the observed light curve can be altered by the addition of extra sources of light and by eclipses. The amount of extra light can depend on the orbital phase, as in light due to X-ray heating of the secondary star, or can be independent of phase, such as light from an uneclipsed “steady” accretion disk. One must account for these extra sources of light if one is to obtain reliable constraints from ellipsoidal modelling. We will focus here only on the I light curve of RX J0050.7-7316, since the V light curve of this object is much less complete. We used the modified version of the Avni (1978) code described in Orosz & Bailyn (1997) to model the light curve. This code uses Roche geometry to describe the shape of the secondary star and accounts for light from a flat, circular accretion disc and for extra light due to X-ray heating. The model parameters are those which determine the geometry of the system: the mass ratio $`Q=M_x/M_B`$, the orbital inclination $`i`$, the Roche lobe filling factor $`f`$, and the rotational velocity of the secondary star; the parameters which determine the light from the secondary star: its polar temperature $`T_{\mathrm{pole}}`$, the linearized limb darkening coefficients $`u(\lambda )`$, and the gravity darkening exponent $`\beta `$; the parameters which determine the contribution of light from the accretion disc: the disc radius $`r_d`$, flaring angle of the rim $`\beta _{\mathrm{rim}}`$, the temperature at the outer edge $`T_d`$, and the exponent on the power-law radial distribution of temperature $`\xi `$, where $`T(r)=T_d(r/r_d)^{-\xi }`$; and parameters related to the X-ray heating: the X-ray luminosity of the compact object $`L_x`$, the orbital separation $`a`$ and the X-ray albedo $`W`$. In this case, we can make some reasonable assumptions and greatly reduce the number of free parameters. We fixed the polar temperature of the B star at 27,000 K (Gray 1992). 
The B star has a radiative envelope, so the value of the gravity darkening exponent $`\beta `$ was set to 0.25 (von Zeipel 1924). The limb darkening coefficients were taken from Wade & Rucinski (1985). We can neglect the extra light due to X-ray heating since the optical luminosity of most HMXBs is dominated by the mass donor star (van Paradijs & McClintock 1995). We carried out some numerical experiments and found that the light curve shapes did not depend on the values of $`L_x`$, $`W`$, and $`a`$. For definiteness we used $`L_x=2\times 10^{36}`$ erg s<sup>-1</sup>, $`W`$=0.5, and $`a=11R_{\odot }`$. In that same vein, we found that the optical flux from the accretion disk was only a small fraction ($`0.5\%`$) of the optical flux from the B star for a wide range of reasonable values of $`r_d`$, $`\beta _{\mathrm{rim}}`$, $`T_d`$, and $`\xi `$. Again for definiteness we adopt $`\beta _{\mathrm{rim}}=4^{\circ }`$, $`T_d=6000`$ K, and $`\xi =0.75`$ (the value expected for a steady-state accretion disk, Pringle 1981). Since the accretion disk may partially eclipse the B star, we will keep the radius of the disk $`r_d`$ as a free parameter. Small changes in the disk radius can lead to relatively large changes in the eclipse profile, whereas the eclipse profile is much less sensitive to changes in the disk opening angle $`\beta _{\mathrm{rim}}`$. Finally, we will assume the B star is tidally locked so that its rotational period is the same as the orbital period. In this case the system geometry is specified by the mass ratio, inclination, and the Roche lobe filling factor of the B star. Hence we have four free parameters in the model: $`i`$, $`Q`$, $`f`$, and $`r_d`$. We do not know the absolute orbital phase of RX J0050.7-7316, so we simply adjusted the phase so that the deeper minimum is at phase 0.5. We computed a grid of models in the inclination-mass ratio plane, where the inclination ranged from 40 to 70 degrees and the mass ratio from 0.05 to 0.35. 
At each point in the plane, the values of $`i`$ and $`Q`$ were fixed at the values corresponding to the point in the grid and the values of $`f`$ and $`r_d`$ were adjusted to minimize the $`\chi ^2`$ of the fit, where we fit only the points near the two maxima and two minima. Figure 9 shows the contour plot of $`\chi ^2`$ values. Formally, the best-fitting model has a mass ratio of 0.34 and an inclination of $`50^{\circ }`$. However, the mass of the B star probably is in the range of 7-12 $`M_{\odot }`$ (Gray 1992), and the mass of the neutron star probably is not too different from $`1.4M_{\odot }`$. Hence the mass ratio is likely to be in the range of 0.12 to 0.20. If we restrict our attention to fits with mass ratios less than 0.2, we see that the corresponding best-fitting values of the inclination are $`55^{\circ }`$ or higher. We also show the inclination at which the neutron star would be eclipsed by the B star (thick solid line). The model fits to the I light curve indicate that RX J0050.7-7316 has a fairly high inclination and might show X-ray eclipses. It turns out that the optical light curve is altered very little by the grazing eclipse of the star by the rim of the disk, and by the nearly total eclipse of the disk by the star. Figure 10 shows a representative model light curve with $`i=60^{\circ }`$, $`Q=0.20`$, $`f=0.99`$, and $`r_d=0.33`$. The amplitude of the observed I light curve is matched reasonably well, and the relative depths of the two minima are also matched reasonably well. However, there are relatively large deviations from the model, especially between phases 0.4 and 0.6. It is difficult to draw quantitative conclusions from the modelling, given the fact that we do not yet have dynamical data. We have shown that if we take the I light curve at face value, the inclination probably is large enough for the system to show X-ray eclipses if we assume reasonable values of the mass ratio. There may be systematic errors in our analysis. 
For example, we assumed the OGLE light curve represents the true ellipsoidal component from the secondary star. Cook et al. (1998) report a long-term trend in the baseline brightness in data from the MACHO collaboration. Such a trend could alter the amplitude of the folded light curve. It would be worthwhile to obtain more complete photometry of this source over the course of a several-night run, rather than a few observations per night over an entire season as in the OGLE and MACHO observations. The light curves obtained over a short run would be much less prone to errors introduced by the long-term baseline brightness changes. Furthermore, better sampling near the minima is useful if there are grazing eclipses since there are subtle changes in the shape of the light curve near the minima caused by eclipses. Finally, it would be useful to have a radial velocity curve of the B star and a complete X-ray light curve so that dynamical mass estimates can be derived. ## 4 Conclusions This paper presents analysis and interpretation of OGLE photometric data of the SMC X-ray pulsars 1WGA J0054.9-7226, RX J0050.7-7316, RX J0049.1-7250, and 1SAX J0103.2-7209. In each case, the probable optical counterpart is identified on the basis of its colours. In the case of RX J0050.7-7316, the regular modulation of its optical light appears to reveal binary motion with a period of 1.416 days. We show that the amplitude and relative depths of the minima of the I-band light curve of RX J0050.7-7316 can be matched with an ellipsoidal model where the B star nearly fills its Roche lobe. For mass ratios in the range of 0.12 to 0.20, the corresponding inclinations are about 55 degrees or larger. Thus the neutron star may be eclipsed by the B star in this system. The present ellipsoidal model is not perfect, and additional observations of RX J0050.7-7316 should be obtained. 
In particular, additional photometry in several colours, and most importantly, radial velocity data for the B star will be needed before we can draw more quantitative conclusions about the component masses. ## Acknowledgments We are very grateful to Andrzej Udalski and the OGLE team for their excellent support in providing access to their data base.
# Do Zero-Energy Solutions of Maxwell Equations Have the Physical Origin Suggested by A. E. Chubykalo? [Comment on the paper in Mod. Phys. Lett. A13 (1998) 2139-2146] Submitted to “Modern Physics Letters A” ## ACKNOWLEDGMENTS I greatly appreciate discussions with Profs. A. Chubykalo (1995-1999), patient answers to my questions by Profs. Y. S. Kim, M. Moshinsky and Yu. F. Smirnov, and the useful information from Profs. A. F. Pashkov (1982-99) and E. Recami (1995-99). Frank discussions with Prof. D. Ahluwalia (1993-98) are acknowledged, even if I do not always agree with him. Zacatecas University, México, is thanked for awarding the full professorship. This work has been partly supported by the Mexican Sistema Nacional de Investigadores and the Programa de Apoyo a la Carrera Docente.
# SU(3) Chiral approach to meson and baryon dynamics ## 1 CHIRAL UNITARY APPROACH Chiral perturbation theory ($`\chi PT`$) has proved to be a very suitable instrument to implement the basic dynamics and symmetries of the meson meson and meson baryon interaction at low energies. The essence of the perturbative technique, however, precludes the possibility of tackling problems where resonances appear, hence limiting tremendously the realm of applicability. The method we present leads naturally to low lying resonances and allows one to address many problems so far intractable within $`\chi PT`$. The method incorporates new elements: 1) Unitarity is implemented exactly; 2) It can deal with the allowed coupled channels formed by pairs of particles from the octets of stable pseudoscalar mesons and ($`\frac{1}{2}^+`$) baryons; 3) A chiral expansion in powers of the external four-momentum of the lightest pseudoscalars is done for Re $`T^{-1}`$, instead of the $`T`$ matrix itself which is the case in standard $`\chi PT`$. Within this scheme, and expanding $`T_2\mathrm{Re}T^{-1}T_2`$ up to order $`O(p^4)`$, where $`T_2`$ is the $`O(p^2)`$ amplitude, one obtains the matrix relation in coupled channels $$T=T_2[T_2-T_4]^{-1}T_2,$$ (1) where $`T_4`$ is the usual $`O(p^4)`$ amplitude in $`\chi PT`$. Once this point is reached one has several options to proceed in decreasing order of complexity: a) A full calculation of $`T_4`$ within the same renormalization scheme as in $`\chi PT`$ can be done. The eight $`L_i`$ coefficients from $`L^{(4)}`$ are then fitted to the existing meson meson data on phase shifts and inelasticities up to 1.2 GeV, where 4 meson states are still unimportant. This procedure has been carried out in the cases where the complete $`O(p^4)`$ amplitude has been calculated. The resulting $`L_i`$ parameters are compatible with those used in $`\chi PT`$. At low energies the $`O(p^4)`$ expansion for $`T`$ of eq. (1) is identical to that in $`\chi PT`$. 
However, at higher energies the nonperturbative structure of eq. (1), which implements unitarity exactly, allows one to extend the information contained in the chiral Lagrangians to much higher energy than in ordinary $`\chi `$ PT, which is up to about $`\sqrt{s}\simeq 400`$ MeV. Indeed it reproduces the resonances present in the L = 0, 1 partial waves. b) A technically simpler and equally successful additional approximation is generated by ignoring the crossed channel loops and tadpoles and reabsorbing them in the $`L_i`$ coefficients given the weak structure of these terms in the physical region. The fit to the data with the new $`\widehat{L}_i`$ coefficients reproduces the whole meson meson sector, with the position, widths and partial decay widths of the $`f_0(980)`$, $`a_0(980)`$, $`\kappa (900)`$, $`\rho (770)`$, $`K^{*}(900)`$ resonances in good agreement with experiment . A cut off regularization is used in for the loops in the s-channel. c) For the L = 0 sector (also in L = 0, S = $`-1`$ in the meson baryon interaction) a further technical simplification is possible. In these cases it is possible to choose the cut off such that $`\text{Re}T_4=T_2\text{Re}GT_2`$, where G is the loop function of two meson propagators. This is possible in those cases because of the predominant role played by the unitarization of the lowest order $`\chi PT`$ amplitude, which by itself leads to the low lying resonances, and because other genuine QCD resonances appear at higher energies. In such a case and given the fact that $`\text{Im}T_4=T_2\text{Im}GT_2`$, eq. (1) becomes $`T=T_2[T_2-T_2GT_2]^{-1}T_2=[1-T_2G]^{-1}T_2`$, i.e., $`T=T_2+T_2GT`$, which is a Bethe-Salpeter equation with $`T_2`$ and $`T`$ factorized on shell outside the loop integral, with $`T_2`$ playing the role of the potential. This option has proved to be successful in the L = 0 meson meson sector in and in the L = 0, S = $`-1`$ meson baryon sector in . 
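The on-shell factorization above can be checked with a scalar toy model (the numbers below are arbitrary illustrative values, not physical chiral amplitudes): the geometric series generated by the Bethe-Salpeter equation resums to $`T=[1-T_2G]^{-1}T_2`$, which develops the imaginary part required by unitarity.

```python
# Scalar toy version of the resummation discussed above; t2 and g are
# arbitrary illustrative numbers, not physical amplitudes.
t2 = 0.8                # lowest order O(p^2) "amplitude"
g = 0.4 + 0.25j         # two-meson loop function, complex above threshold
T = t2 / (1 - t2 * g)   # resummed amplitude, [1 - T2 G]^{-1} T2

# Partial sums of T2 + T2 G T2 + T2 G T2 G T2 + ... approach T,
# since |t2 g| < 1 here.
partial = sum(t2 * (t2 * g)**n for n in range(40))

residual = abs(T - (t2 + t2 * g * T))   # the Bethe-Salpeter equation holds
print(residual)          # ~0 up to roundoff; note Im T != 0 (unitarity)
```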
In the meson baryon sector with S = 0, given the disparity of the masses in the coupled channels $`\pi N`$, $`\eta N`$, $`K\mathrm{\Sigma }`$, $`K\mathrm{\Lambda }`$, the simple “one cut off approach” is not possible. In higher order Lagrangians are introduced while in different subtraction constants in G are incorporated in each of the former channels leading in both cases to acceptable solutions when compared with the data. An alternative, related procedure to eq. (1), is developed in using the $`N/D`$ method and allowing the contribution of preexisting mesons which remain in the limit of large $`N_c`$. This procedure allows one to separate the physical mesons into preexisting ones, mostly $`q\overline{q}`$ pairs, and the others which come as resonances of the meson meson scattering due to unitarization. ## 2 APPLICATION TO THE $`K^{-}p\to \mathrm{\Lambda }(1405)\gamma `$ REACTION Using option c), a good description of the $`K^{-}p`$ coupled channels interaction is obtained in terms of the lowest order Lagrangian and the Bethe Salpeter equation with a single cut off. One of the interesting features of the approach is the dynamical generation of the $`\mathrm{\Lambda }(1405)`$ resonance just below the $`K^{-}p`$ threshold. The threshold behavior of the $`K^{-}p`$ amplitude is thus very much tied to the properties of this resonance. Modifications of these properties in a nuclear medium can substantially alter the $`K^{-}p`$ and $`K^{-}`$ nucleus interaction and experiments looking for these properties are most welcome. In a recent paper we propose the $`K^{-}p\to \mathrm{\Lambda }(1405)\gamma `$ reaction as a means to study the resonance, together with the $`K^{-}A\to \mathrm{\Lambda }(1405)\gamma A^{\prime }`$ reaction to see the modification of its properties in nuclei. 
The $`\mathrm{\Lambda }(1405)`$ is seen in its decay products in the $`\pi \mathrm{\Sigma }`$ channel, but, as shown in , the sum of the cross sections for $`\pi ^0\mathrm{\Sigma }^0`$, $`\pi ^+\mathrm{\Sigma }^{-}`$, $`\pi ^{-}\mathrm{\Sigma }^+`$ production has the shape of the resonance $`\mathrm{\Lambda }(1405)`$ in the I = 0 channel. Hence, the detection of the $`\gamma `$ in the elementary reaction, looking at $`d\sigma /dM_I`$ ($`M_I`$ being the invariant mass of the meson baryon system which can be obtained from the $`\gamma `$ momentum), is sufficient to get a clear $`\mathrm{\Lambda }(1405)`$ signal. In fig. 1 we show the cross sections predicted for the $`K^{-}p\to \mathrm{\Lambda }(1405)\gamma `$ reaction looking at $`\gamma \pi ^0\mathrm{\Sigma }^0`$, $`\gamma `$ $`all`$ and $`\gamma \mathrm{\Lambda }(1405)`$ (alone). All of them have approximately the same shape and strength given the fact that the I = 1 contribution is rather small. The momentum chosen for the $`K^{-}`$ is 500 MeV/c, which makes it suitable for experimentation at KEK and other facilities. ## 3 $`N^{*}(1535)\to N^{*}(1535)\pi ,\eta `$ COUPLINGS Since the $`N^{*}`$(1535) resonance is also obtained via the Bethe-Salpeter equation (see fig. 2) in the meson baryon $`S=0`$ sector with the channels $`\pi N`$, $`\eta N`$, $`K\mathrm{\Sigma }`$, $`K\mathrm{\Lambda }`$, one can automatically generate the series implicit in fig. 3 which provides the coupling of a $`\pi `$ or an $`\eta `$ to the resonance, the latter being generated at both sides of the external mesonic vertex. All vertices needed for the calculation can be obtained from standard chiral Lagrangians. The results which we obtain are $`\frac{g_{\pi ^0N^{*}N^{*}}}{g_{\pi ^0NN}}=1.3;\frac{g_{\eta N^{*}N^{*}}}{g_{\eta NN}}=2.2`$. The result for the $`\pi `$ coupling rules out the mirror assignment in chiral models where the nucleon and the $`N^{*}(1535)`$ form a parity doublet in analogy with the linear $`\sigma `$ model. 
## 4 SUMMARY We have reported on the unitary approach to meson meson and meson baryon interactions using chiral Lagrangians, which has proved to be an efficient method to extend the information on chiral symmetry breaking to higher energies where $`\chi PT`$ cannot be used. This new approach has opened the door to the investigation of many new problems so far intractable with $`\chi PT`$ and a few examples have been reported here. We have applied these techniques to the $`K^{-}p\to \mathrm{\Lambda }(1405)\gamma `$ reaction and the evaluation of the $`N^{*}N^{*}\pi ,\eta `$ couplings. The experimental implementation of the former reaction and others on photoproduction of scalar mesons and of the $`\mathrm{\Lambda }(1405)`$, reported elsewhere , will provide new tests of these emerging pictures implementing chiral symmetry and unitarity. Similarly, the techniques used to evaluate the $`N^{*}N^{*}\pi ,\eta `$ couplings can be easily extended to evaluate electromagnetic properties of low lying resonances which are generated within the unitary scheme.
# Interference of a Bose-Einstein condensate in a hard-wall trap: Formation of vorticity ## Abstract We theoretically study the coherent expansion of a Bose-Einstein condensate in the presence of a confining impenetrable hard-wall potential. The nonlinear dynamics of the macroscopically coherent matter field results in rich and complex spatio-temporal interference patterns demonstrating the formation of vorticity and solitonlike structures, and the fragmentation of the condensate into coherently coupled pieces. 03.75.Fi,05.30.Jp A landmark experiment demonstrated in a striking way the interference of two freely expanding Bose-Einstein condensates (BECs). In this paper we theoretically study the evolution of a BEC in a coherently reflecting hard-wall trap. The present situation is closely related to the recent experiments on an expanding BEC in an optically-induced ‘box’ by Ertmer and coworkers . Due to the macroscopic quantum coherence the BEC exhibits rich and complex self-interference patterns. We identify the formation of vorticity and solitonlike structures, and the dramatic fragmentation of an initially uniform parabolic BEC into coherently coupled pieces. Atomic BECs exhibit a macroscopic quantum coherence in an analogy to the optical coherence of lasers. In the conventional reasoning the coherence of a BEC is introduced in the spontaneous symmetry breaking. Nevertheless, even two BECs with no phase information, could show relative phase correlations as a result of the back-action of quantum measurement . Moreover, the density-dependent self-interaction of a BEC demonstrates the analogy between nonlinear laser optics and nonlinear atom optics with BECs. BECs are predicted to exhibit dramatic coherence properties: The macroscopic coherent quantum tunneling and the formation of fundamental structures, e.g., vortices and solitons . Some basic properties of grey solitons have been recently addressed for harmonically trapped 1D BECs in Ref. . 
Also optical solitons have been actively studied in the 1D homogeneous space . In this paper we study the dynamics of a BEC confined to a hard-wall trap with potential $`V(𝐫)`$. Such walls can be realized, e.g., with a blue-detuned far-off-resonant light sheet. Throughout the paper we focus on repulsive interactions. When the BEC is released from a magneto-optical trap (MOT) inside the confining potential by suddenly turning off the MOT, the repulsive mean-field energy of the condensate transforms into kinetic energy and the BEC rapidly expands towards the walls. The reflections of the matter wave from the binding potential result in rich and complex spatio-temporal interference patterns referred to in 1D as quantum carpets . They have been recently proposed as a thermometer for measuring the temperature of the BEC . In this paper we show that the nonlinear dynamics of a BEC displays dramatic local variations of the condensate phase including the formation of vorticity and solitonlike structures. The solitary waves could possibly be used as an experimental realization of the macroscopic coherent tunneling analogous to the Josephson effect. The dynamics of a BEC follows from the Gross-Pitaevskii equation (GPE) $$i\hbar \frac{\partial }{\partial t}\psi (𝐫;t)=\left[-\frac{\hbar ^2}{2M}\mathbf{\nabla }^2+V(𝐫)+\kappa |\psi (𝐫;t)|^2\right]\psi (𝐫;t).$$ Here $`M`$ and $`\kappa \equiv 4\pi \hbar ^2aN/M`$ denote the atomic mass and the coefficient of the nonlinearity, respectively, with the scattering length $`a`$ and the number $`N`$ of BEC atoms. Our initial distribution $`\psi (𝐫;t=0)`$ is the stationary solution of the GPE with the potential $`V(𝐫)`$ replaced by the potential of the MOT. We integrate GPE in one and two spatial dimensions. The projections from 3D into 1D or 2D require that the mean field $`\psi `$ does not vary significantly as a function of time in the corresponding orthogonal directions. 
This condition can be satisfied, e.g., in the presence of a strong spatial confinement to these dimensions. Then we can approximate the position dependence in these directions by constants $`𝒜`$ and $`\mathrm{\ell }`$ resulting in $`\psi (𝐫)\simeq \psi _1(x)/𝒜^{1/2}`$ in 1D and $`\psi (𝐫)\simeq \psi _2(x,y)/\mathrm{\ell }^{1/2}`$ in 2D. This yields the strengths $`\kappa _1=\kappa /𝒜`$ and $`\kappa _2=\kappa /\mathrm{\ell }`$ of the nonlinearity in GPE for the 1D and 2D mean fields $`\psi _1`$ and $`\psi _2`$. We emphasize that especially the 2D calculations may already contain the essential features of the full 3D coupling between the different spatial dimensions by the nonlinearity. One-dimensional case – The linear Schrödinger equation for the 1D box of length $`L`$ exhibits regular spatio-temporal patterns. These patterns consist of straight lines, so-called traces, of different steepness corresponding to harmonics of a fundamental velocity, $`v_0`$. The traces arise from the interferences between degenerate eigenmodes of the system. The eigenmodes of frequency $`\omega _n\equiv n^2\omega _1\equiv n^2\hbar \pi ^2/(2ML^2)`$ are a superposition of right and left propagating plane waves with wavenumbers $`k_n\equiv n\pi /L\equiv nk_1`$. The probability density consists of the interferences between different eigenmodes. Therefore the lines of constant phase $`\pm (k_m\pm k_n)x+(\omega _m-\omega _n)t=\mathrm{const}`$ correspond to straight lines in space-time with velocities $`v_{mn}\equiv \pm (\omega _m-\omega _n)/(k_m\pm k_n)=(m\mp n)v_0`$. Here we introduced the fundamental trace velocity $`v_0\equiv \hbar \pi /(2ML)`$. We now turn to the quantum carpet of Fig. 1a representing a 1D BEC trapped between two impenetrable steep Gaussian potentials approximating infinitely high walls at $`x=\pm L/2`$. At time $`t=0`$ the BEC is released from a harmonic trap of frequency $`\mathrm{\Omega }`$. 
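Before turning to the nonlinear carpet, the intermode trace velocities introduced above can be checked numerically; the units $`\hbar =M=L=1`$ below are an arbitrary illustrative choice:

```python
import math

# Quick check of the linear trace velocities: the (k_m +- k_n) interference
# terms move at (m -+ n) v0. Units hbar = M = L = 1 (illustrative).
hbar = M = L = 1.0
omega = lambda n: n*n * hbar * math.pi**2 / (2*M*L*L)   # box eigenfrequencies
k = lambda n: n * math.pi / L                           # box wavenumbers
v0 = hbar * math.pi / (2*M*L)                           # fundamental velocity

m, n = 5, 2
v_sum  = (omega(m) - omega(n)) / (k(m) + k(n))   # -> (m - n) v0
v_diff = (omega(m) - omega(n)) / (k(m) - k(n))   # -> (m + n) v0
print(round(v_sum/v0, 9), round(v_diff/v0, 9))   # -> 3.0 7.0
```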
In the limit of strong confinement the initial state is well approximated by the Thomas-Fermi solution $`\psi _1(x;t=0)=\theta (R_1-|x|)[3(R_1^2-x^2)/(2R_1^3)]^{1/2}`$. Here $`R_1\equiv [3\kappa _1/(M\mathrm{\Omega }^2)]^{1/3}`$ describes the 1D radius of the BEC wave function. After the turn-off of the MOT the kinetic energy term in GPE becomes dominant and the matter wave expands towards, and eventually reflects from, the box boundaries. Due to the macroscopic quantum coherence of a BEC different spatial regions of the matter field generate a complex self-interference pattern that exhibits canals analogously to the quantum carpet structures of the linear Schrödinger equation . From Fig. 1 we note that only destructive interference fringes, canals, appear in the nonlinear carpet. Constructive interference fringes, ridges, do not emerge. For the single particle Schrödinger equation the resulting quantum carpet demonstrates the fundamental wave nature of the particle. Therefore, it is perhaps surprising that in the case of GPE, which represents the coherent matter field of the many-particle system, the intermode traces acquire properties that demonstrate dramatic particle nature. In particular, in the case of repulsive interparticle interactions the traces correspond to the evolution of solitonlike structures with an associated phase kink in Fig. 1b,c. Grey solitons correspond to the propagation of ‘density holes’ in the matter field and the depth of the hole characterizes the greyness of the soliton. In homogeneous space the soliton is called dark when it exhibits a vanishing density at the center of the dip and a sharp phase slip of $`\pi `$. For grey solitons the value of the phase slip $`\phi `$ is reduced, thereby giving the soliton a nonvanishing speed according to $`v=c\mathrm{cos}(\phi /2)`$, where the maximum velocity $`c\equiv (4\pi \hbar ^2a\rho /M^2)^{1/2}`$ is the Bogoliubov speed of sound at a constant atom density $`\rho `$. 
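A minimal numerical sketch of the 1D hard-wall dynamics described above can be built with a Crank-Nicolson scheme; the units ($`\hbar =M=1`$), grid, time step, nonlinearity and the Gaussian initial state below are illustrative assumptions, not the parameters or the integration scheme used in the paper:

```python
import math

# Crank-Nicolson sketch of the 1D GPE in a hard-wall box [-L/2, L/2];
# Dirichlet boundaries model the impenetrable walls. All parameters are
# illustrative (hbar = M = 1), not those of the paper.
L, N, dt, kappa = 1.0, 200, 1e-4, 50.0
dx = L / (N + 1)                        # psi = 0 at x = -L/2 and x = +L/2
x = [-L/2 + (j + 1)*dx for j in range(N)]

sigma = 0.05                            # narrow cloud released at the center
psi = [complex(math.exp(-xi*xi/(2*sigma*sigma))) for xi in x]
norm = math.sqrt(sum(abs(p)**2 for p in psi) * dx)
psi = [p / norm for p in psi]

def step(psi):
    """(1 + i dt H/2) psi_new = (1 - i dt H/2) psi, with kappa |psi|^2
    frozen during the step; H is then Hermitian, so the step is exactly
    norm conserving."""
    off = -1.0 / (2*dx*dx)                              # kinetic off-diagonal
    diag = [1.0/(dx*dx) + kappa*abs(p)**2 for p in psi] # kinetic + nonlinear
    z = 0.5j*dt
    b = [(1 - z*diag[j])*psi[j]
         - z*off*(psi[j-1] if j > 0 else 0)
         - z*off*(psi[j+1] if j < N-1 else 0) for j in range(N)]
    # Thomas algorithm for the complex tridiagonal system
    c, d = [0j]*N, [0j]*N
    c[0] = z*off/(1 + z*diag[0]); d[0] = b[0]/(1 + z*diag[0])
    for j in range(1, N):
        m = (1 + z*diag[j]) - z*off*c[j-1]
        c[j] = z*off/m
        d[j] = (b[j] - z*off*d[j-1])/m
    out = [0j]*N
    out[N-1] = d[N-1]
    for j in range(N-2, -1, -1):
        out[j] = d[j] - c[j]*out[j+1]
    return out

for _ in range(50):                     # let the cloud expand toward the walls
    psi = step(psi)
norm_after = sum(abs(p)**2 for p in psi) * dx
print(round(norm_after, 6))             # -> 1.0 (Crank-Nicolson is unitary)
```

Evolving such a scheme for longer times and plotting $`|\psi _1(x;t)|^2`$ is one way to reproduce carpet-like self-interference patterns qualitatively.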
As a result of the formation of the solitonlike structures the initially uniform parabolic BEC is dramatically fragmented into spatially separated pieces and we can interpret these density holes as the boundaries of the fragmented BEC. Analogously to the Josephson junction the phase of the BEC is approximately constant outside the narrow region of the solitary wave. The coherent tunneling of atoms across the boundary depends on the relative phase between the two contiguous pieces and results in the motion of the ‘solitary wave junction’. For a phase slip of $`\pi `$ the velocity of the soliton vanishes analogously to the vanishing Josephson oscillations of the number of atoms. The interatomic interactions shift the eigenenergies compared to the linear case. As a result the fringe velocities in Fig. 1 are a few tens of percent larger than the linear trace velocities $`4nv_0`$ for the canals with a symmetric initial state . The nonlinear fringe velocities are still determined by the degenerate intermode traces and are approximately integer multiples of the minimum speed. The largest velocity in Fig. 1 is approximately given by the Bogoliubov speed of sound in terms of the average density of atoms in the box. The fringe evolution shown in Fig. 1 displays a remarkable inherent particle-like robustness and solitonlike behaviour: The fringes survive complex dynamics. Their paths exhibit dramatic avoided crossings demonstrating repulsive interactions; the colliding wave packet holes therefore become degenerate in momentum space . Two-dimensional case – We now turn to the evolution of a BEC in a circular box. Again we approximate the infinitely high walls by a steep Gaussian potential aligned along a circle of radius $`r=L/2`$. The initial wave function is the Thomas-Fermi solution of the symmetric MOT $`\psi _2(x,y;t=0)=[2(R_2^2-x^2-y^2)/(\pi R_2^4)]^{1/2}`$, and $`R_2\equiv [4\kappa _2/(\pi M\mathrm{\Omega }^2)]^{1/4}`$ denotes the 2D ‘classical’ radius. 
Hence, the initial wave function is located at the center of the circular box, and we can show that in this case the state remains symmetric also at later times. Figure 2 shows the 2D density $`|\psi _2(x,y;t)|^2`$ and the phase profile $`|\phi |`$ of a BEC obtained from the GPE at a later time. We note the formation of regular interference patterns similar to solitary waves. The fragmentation of the BEC is even more dramatic than in the 1D case: the fringes exhibit a vanishing density at the center of the dip. The BEC forms coherently coupled loops and the resulting structures are similar to optical ring solitons . When we now slightly displace the initial state of the BEC from the center of the circular box, the rotational symmetry is broken. In Figs. 3 and 4 we show the resulting 2D density (left column) and the phase profiles (right column) at three characteristic times. The reflections of the BEC from the hard-wall potential create solitonlike structures. At later times the stripes bend and eventually break up, forming dark spots that correspond to vortices with associated phase windings around closed paths . We note that we can use the present situation of an expanding BEC in a circular box to create vortices at a particular spatial location by simply introducing a static potential dip and letting the expanding BEC flow across it. This is a simplified version of the suggestion by Ref. that a moving potential barrier through a BEC can create vorticity in the vicinity of the potential. As a final example, and to demonstrate the effect of the symmetry of the hard-wall trap, we consider the evolution of a BEC in the 2D square box. We generate the boundary by steep Gaussians approximating infinitely high walls at $`x=\pm L/2`$ and $`y=\pm L/2`$. The square has the symmetry of rotations of $`\pi /2`$. Hence, for a symmetric nonrotating initial state the minimum number of vortices conserving the total angular momentum is eight. In Fig.
5 we show the density profile at two characteristic times. The reflections of the BEC from the boundary generate dark solitonlike structures with an amazingly regular square shape that start bending, break up, and form vorticity. We now address the experimental feasibility of the proposed self-interference measurements. A higher-order Laguerre-Gaussian beam can generate a hollow optical beam and thus a cylindrical atom-optical hard-wall potential around the magnetically trapped BEC. Moreover, BECs with purely 2D confinement may also be investigated in certain magnetic trap configurations . In recent experiments Bongs et al. studied the coherent reflections of a BEC by atom-optical mirrors and the evolution of a BEC in an atom-optical waveguide, closely related to the present theoretical study. The density profiles of the BECs could be directly measured via absorption imaging, if the necessary spatial resolution could be obtained, e.g., via ballistic expansion of the atomic cloud. Vortices may also be detected by interfering a BEC with and without vorticity. Then the phase slip in the interference fringes would be the signature of the vorticity . The phase slip between two pieces of the BEC has a dramatic effect on the dynamical structure factor of the two-component system, which may be observed, e.g., via Bragg scattering . In conclusion, we studied the generation of vorticity and solitonlike structures of a BEC in a hard-wall trap. The nonlinear evolution of the GPE dramatically divides the initially uniform parabolic BEC into coherently coupled pieces. We showed that the density profile of the BEC can be a direct manifestation of macroscopic quantum coherence. Obviously it could also be a sensitive measure for the decoherence rate of the BEC . Unlike the typical coherence measurement that detects the relative macroscopic phase between two well-distinguishable BECs , the present set-up probes the self-interference of an initially uniform matter field.
We acknowledge financial support from DFG and from the EC through the TMR Network ERBFMRXCT96-0066.
# Ab Initio Study of Screw Dislocations in Mo and Ta: A new picture of plasticity in bcc transition metals ## Abstract We report the first ab initio density-functional study of $`\langle 111\rangle `$ screw dislocation cores in the bcc transition metals Mo and Ta. Our results suggest a new picture of bcc plasticity with symmetric and compact dislocation cores, contrary to the presently accepted picture based on continuum and interatomic potentials. Core energy scales in this new picture are in much better agreement with the Peierls energy barriers to dislocation motion suggested by experiments. The microscopic origins of plasticity are far more complex and less well understood in bcc metals than in their fcc and hcp counterparts. For example, slip planes in fcc and hcp metals are almost invariably close-packed, whereas in bcc materials many slip systems can be active. Moreover, bcc metals violate the Schmid law that the resistance to plastic flow is constant and independent of slip system and applied stress. Detailed, microscopic observations have established that in bcc metals at low temperatures, long, low-mobility $`\langle 111\rangle `$ screw dislocations control the plasticity. Over the last four decades, the dominant microscopic picture of bcc plasticity has involved a complex core structure for these dislocations. The key ingredient of this intricate picture is an extended, non-planar sessile core which must contract before it moves. The first such proposed structure respected the symmetry of the underlying lattice and extended over many lattice constants. More recent and currently accepted theories, based on interatomic potentials, predict extension over several lattice constants and spontaneously broken lattice symmetry.
While these models can explain the overall non-Schmid behavior, their predicted magnitude for the critical stress required to move dislocations (Peierls stress) is uniformly too large by a factor of about three when compared to experimental yield stresses extrapolated to zero temperature. We take the first ab initio look at dislocation core structure in bcc transition metals. Although we study two metals with quite different mechanical behavior, molybdenum and tantalum, a consistent pattern emerges from our results which, should it withstand the test of time, will require rethinking the presently accepted picture. Specifically, we find screw dislocation cores with compact structures, without broken symmetry, and with energy scales which appear to be in much better accord with experimental Peierls barriers. Ab initio methodology – Our ab initio calculations for Mo and Ta are carried out within the total-energy plane-wave density functional pseudopotential approach, using the Perdew-Zunger parameterization of the Ceperley-Alder exchange-correlation energy. Non-local pseudopotentials of the Kleinman-Bylander form are used with $`s`$, $`p`$, and $`d`$ channels. The Mo potential is optimized according to and the Ta potential is from. We use plane wave basis sets with energy cutoffs of 45 Ryd for Mo and 40 Ryd for Ta to expand the wave functions of the valence (outermost $`s`$ and $`d`$) electrons. Calculations in bulk show these cutoffs to give total system energies to within 0.01 eV/atom. We carry out electronic minimizations using the analytically continued approach within the DFT++ formalism. To gauge the reliability of the pseudopotentials, Table 1 displays our ab initio results for the materials’ lattice constants and those elastic moduli most relevant for the study of $`\langle 111\rangle `$ screw dislocations.
The tabulated moduli describe the long-range elastic fields of the dislocations ($`K`$), the coupling of displacement gradients along the dislocation axis $`z`$ to core-size changes in the orthogonal $`x,y`$ plane ($`c_{xx,zz}=(c_{11}+5c_{12}-2c_{44})/6`$), and the coupling of core-size changes to themselves in the plane ($`c_{xx,xx}=(c_{11}+c_{12}+2c_{44})/2`$ and $`c_{xx,yy}=(c_{11}+2c_{12}-2c_{44})/3`$). These results indicate that our predicted core energy differences should be reliable to within better than $`30`$%, which suffices for the purposes of our study. Preparation of dislocation cells – The cell we use for dislocation studies has lattice vectors $`\stackrel{}{a}_1=5a[1,\overline{1},0]`$, $`\stackrel{}{a}_2=3a[1,1,\overline{2}]`$, and $`\stackrel{}{a}_3=a[1,1,1]/2`$, where $`a`$ is the lattice constant. We call this ninety-atom cell the “5$`\times `$3” cell in reference to the lengths of $`\stackrel{}{a}_1`$ and $`\stackrel{}{a}_2`$, and the Burgers vectors of all of the dislocations in our work are along $`\stackrel{}{a}_3`$. Eight $`k`$-points $`k_1=k_2=\frac{1}{4},k_3\in \pm \{\frac{1}{16},\frac{3}{16},\frac{5}{16},\frac{7}{16}\}`$ sample the Brillouin zone in conjunction with a non-zero electronic temperature of $`k_BT=0.1`$ eV, which facilitates the sampling of the Fermi surface. These choices give total energies to within 0.01 eV/atom. Given the relatively small cell size, we wish to minimize the overall strain and the effects of periodic images. We therefore follow and employ a quadrupolar arrangement of dislocations (a rectangular checkerboard pattern in the $`\stackrel{}{a}_1,\stackrel{}{a}_2`$ plane). This ensures that dislocation interactions enter only at the quadrupolar level and that the net force on each core is zero by symmetry, thereby minimizing perturbations of core structure due to the images. As was found in and as we explore in detail below, we find very limited impact of finite-size effects on the cores when following this approach.
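The modulus combinations quoted above are simple linear functions of the cubic elastic constants; a small helper (ours, with illustrative inputs rather than the values of Table 1) makes the bookkeeping explicit:

```python
def core_coupling_moduli(c11, c12, c44):
    """Combinations of the cubic constants entering the <111> screw-core couplings
    quoted in the text; output is in the same units as the inputs."""
    return {
        "c_xx,zz": (c11 + 5.0 * c12 - 2.0 * c44) / 6.0,  # axial gradient <-> core size
        "c_xx,xx": (c11 + c12 + 2.0 * c44) / 2.0,        # core size <-> core size
        "c_xx,yy": (c11 + 2.0 * c12 - 2.0 * c44) / 3.0,  # in-plane cross coupling
    }

# Illustrative (not from the paper) stiffnesses in GPa:
example = core_coupling_moduli(450.0, 170.0, 110.0)
```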
In bcc structures, screw dislocations are known to have two inequivalent core configurations, termed “easy” and “hard”. These cores can be obtained from one another by reversing the Burgers vector of a dislocation line while holding the line at a fixed position. We produce cells with either only easy or only hard cores in this way. To create atomic structures for the cores, we proceed in three stages. First, we begin with atomic positions determined from isotropic elasticity theory for our periodic array of dislocations. Next, we relax this structure to the closest local energy minimum within the interatomic MGPT model for Mo. Since we do not have an interatomic potential for Ta and expect similar structures in Ta and Mo, we create suitable Ta cells by scaling the optimized MGPT Mo structures by the ratio of the materials’ lattice constants. Finally, we perform standard ab initio atomic relaxations on the resulting MGPT structures until all ionic forces in all axial directions are less than 0.06 eV/Å. Extraction of core energies – The energy of a long, straight dislocation line with Burgers vector $`\stackrel{}{b}`$ is $`E=E_c(r_c)+Kb^3\mathrm{ln}(L/r_c)`$ per $`b`$ along the line, where $`L`$ is a large-length cutoff, and $`K`$ is an elastic modulus (see Table 1) computable within anisotropic elasticity theory. The core radius $`r_c`$ is a short-length cutoff inside of which the continuum description fails and the discrete lattice and electronic structure of the core become important. $`E_c(r_c)`$ measures the associated “core energy”, which, due to severe distortions in the core, is most reliably calculated by ab initio methods. The energy of our periodic cell contains both the energy of four dislocation cores and the energy stored outside the core radii in the long-range elastic fields. 
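The stage-one starting geometry uses the textbook isotropic-elasticity field of a screw dislocation, $`u_z=b\theta /(2\pi )`$; the sketch below (our own simplification, neglecting periodic images and anisotropy) superposes the cores of a quadrupolar cell:

```python
import numpy as np

def screw_uz(x, y, b, sign=1):
    """Isotropic displacement u_z = sign*b*theta/(2*pi) for a screw dislocation
    along z through the origin; a full circuit accumulates one Burgers vector."""
    return sign * b * np.arctan2(y, x) / (2.0 * np.pi)

def quadrupole_uz(x, y, b, cores):
    """Superpose screw cores given as (xi, yi, sign); image sums are neglected here."""
    return sum(screw_uz(x - xi, y - yi, b, s) for xi, yi, s in cores)
```

Crossing the branch cut once changes $`u_z`$ by exactly one Burgers vector, which is the defining property of the dislocation.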
To separate these contributions, we start with the fact that two straight dislocations at a distance $`d`$ with equal and opposite Burgers vectors have an anisotropic elastic energy per $`b`$ given by $`E=2E_c(r_c)+2Kb^3\mathrm{ln}(d/r_c)`$. Next, by regularizing the infinite sum of this logarithmically divergent pair interaction, we find that the energy per dislocation per $`b`$ in our cell is given by $$E=E_c(r_c)+Kb^3\left[\mathrm{ln}\left(\frac{|\stackrel{}{a}_1|/2}{r_c}\right)+A\left(\frac{|\stackrel{}{a}_1|}{|\stackrel{}{a}_2|}\right)\right].$$ (1) The function $`A(x)`$ contains all the effects of the infinite Ewald-like sums of dislocation interactions and has the value $`A=\mathrm{0.598\hspace{0.17em}846\hspace{0.17em}386}`$ for our cell. Subtracting the long-range elastic contribution (the second term of (1)) from the total energy, we arrive at the core energy $`E_c`$. To test the feasibility of this approach, we compare $`E_c(r_c)`$ for the MGPT potential as extracted with the above procedure from cells of two different sizes: the 5$`\times `$3 cell and the corresponding 9$`\times `$5 cell. (The MGPT is fit to reproduce experimental elastic moduli, so $`K`$ is given in Table 1.) With the choice $`r_c=2b`$, Table 2 shows that our results, even for the 5$`\times `$3 cell, compare quite favorably with those of, especially given that our 5$`\times `$3 and 9$`\times `$5 cells contain only ninety and 270 atoms respectively, whereas the cited works used cylindrical cells with a single dislocation and two thousand atoms or more. Given the suitability of the 5$`\times `$3 cell, all ab initio results reported below are carried out in this cell. Ab initio core energies – Except for the Mo hard core, all the core structures relax quite readily from their MGPT configurations to their equilibrium ab initio structures. 
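Inverting Eq. (1) for the core energy is a one-line subtraction; the sketch below (ours; the numerical inputs are placeholders, and only $`A`$ is the constant quoted in the text) includes a round-trip consistency check:

```python
import math

A_CELL = 0.598846386  # lattice-sum constant A(|a1|/|a2|) quoted for the 5x3 cell

def core_energy(E_total, K, b, a1_len, r_c, A=A_CELL):
    """E_c(r_c) = E - K*b^3*[ln((|a1|/2)/r_c) + A], i.e. Eq. (1) solved for E_c."""
    return E_total - K * b**3 * (math.log(0.5 * a1_len / r_c) + A)
```

With $`r_c=2b`$, as used for the tabulated values, this reproduces the subtraction described in the text.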
The Mo hard-core configuration, however, spontaneously relaxes into easy cores, strongly indicating that the hard core, while meta-stable within MGPT by only 0.02 eV/$`b`$, is not stable in density functional theory. We do not believe that this instability is due to finite-size effects, which appear to be quite small for the reasons outlined previously. Table 3 compares our ab initio results to available MGPT results for core energies in Mo and Ta. To make comparison with the MGPT, for the unstable Mo hard core we evaluate the ab initio core energy at the optimal MGPT atomic configuration (column AI in Table 3). Note that, in computing hard–easy core energy differences, the long-range elastic contributions cancel so that these differences are much better converged than the absolute core energies. Table 3 shows that the MGPT hard–easy core energy differences are much larger than the corresponding ab initio values by approximately a factor of three. The accuracy of the elastic moduli of Table 1, combined with the high transferability of the local-density pseudopotential approach, indicates that this factor of three is not an artifact of our approximations. We believe that the reason for this discrepancy is that the MGPT is less transferable. Having been forced to fit bulk elastic moduli and thus long-range distortions, the MGPT may not describe the short wavelength distortions in the cores with high accuracy. An examination of Mo phonons along $`[100]`$ provides poignant evidence: the MGPT frequencies away from the zone center are too large when compared to experimental and band-theoretic values and translate into spring constants that are up to approximately three times too large. The magnitude of the core energy difference has important implications for the magnitude of the Peierls energy barrier and Peierls stress for the motion of screw dislocations in Mo and Ta. 
In a recent Mo MGPT study, the most likely path for dislocation motion was identified to be the $`\langle 112\rangle `$ direction: the moving dislocation core changes from easy to hard and back to easy as it shifts along $`\langle 112\rangle `$. The energy barrier was found to be $`0.26`$ eV/$`b`$, very close to the MGPT hard–easy energy difference itself. The fact that the ab initio hard–easy energy differences in Mo and Ta are smaller by about a factor of three than the respective interatomic values suggests that the ab initio energy landscape for the process has a correspondingly smaller scale. If so, the Peierls stress in Mo and Ta should also be correspondingly smaller and in much better agreement with the values suggested by experiments. Dislocation core structures – Figure 1 shows differential displacement (DD) maps of the core structures we find in our ninety-atom supercell when working with the interatomic MGPT potential for Mo. Our DD maps show the atomic structure projected onto the (111) plane. The vector between a pair of atomic columns is proportional to the change in the $`[111]`$ separation of the columns due to the presence of the dislocations. The maps show that both easy and hard cores have approximate 3-fold rotational ($`C_3`$) point-group symmetry about the out-of-page $`[111]`$ axis through the center of each map. The small deviations from this symmetry reflect the weakness of finite-size effects in our quadrupolar cell. The hard core has three additional 2-fold rotational ($`C_2`$) symmetries about the three $`\langle 110\rangle `$ axes marked in the maps, increasing its point-group symmetry to the dihedral group $`D_3`$ which is shared by the underlying crystal. The easy core, however, shows a strong spontaneous breaking of this symmetry: its core spreads along only three out of the six possible $`\langle 112\rangle `$ directions. Our results reproduce those of earlier work that employed much larger cylindrical cells with open boundaries, underscoring the suitability of our cell for determining core structure.
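A DD map of the kind just described can be assembled from the projected column positions and their [111] displacements; the sketch below uses our own schematic conventions (relative displacements wrapped into $`(-b/2,b/2]`$, arrows drawn along the pair axis), which are not necessarily those of the published plots:

```python
import numpy as np

def dd_arrows(pos2d, uz, b, cutoff):
    """For each near-neighbor pair of (111)-projected columns, return the arrow
    midpoint and a vector along the pair axis scaled by the relative [111]
    displacement wrapped into (-b/2, b/2]."""
    arrows = []
    for i in range(len(pos2d)):
        for j in range(i + 1, len(pos2d)):
            d = pos2d[j] - pos2d[i]
            if np.hypot(d[0], d[1]) < cutoff:
                duz = (uz[j] - uz[i] + 0.5 * b) % b - 0.5 * b  # wrap modulo b
                arrows.append((0.5 * (pos2d[i] + pos2d[j]), (duz / b) * d))
    return arrows
```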
This symmetry-breaking core extension is the one that has been theorized to explain the relative immobility of screw dislocations and the violation of the Schmid law in bcc metals. Figure 2 displays DD maps of our ab initio core structures. Contrary to the atomistic results, we find that the low-energy easy cores in Mo and Ta have full $`D_3`$ symmetry and do not spread along the $`\langle 112\rangle `$ directions. Combining this with the above results concerning core energetics, we have two examples for which our pseudopotentials are sufficiently accurate to disprove the conventional wisdom that generic bcc metallic systems require broken symmetry in the core to explain the observed immobility of screw dislocations. Turning to the hard core structures, the ab initio results for Ta show a significant distortion when compared to the atomistic core (contrast Figure 1b and Figure 2b). As the ab initio Mo hard core was unstable, we believe that this distortion of the Ta hard core suggests that this core is much less stable within density functional theory than in the atomistic potentials. To complete the specification of the three-dimensional ab initio structure of easy cores in Mo and Ta, Figure 3 presents maps of the atomic displacement in the (111) plane. The small atomic shifts, which are due entirely to anisotropic effects, are shown as in-plane vectors centered on the bulk atomic positions and magnified by a factor of fifty. To reduce noise in the figure, before plotting we perform $`C_3`$ symmetrization of the atomic positions about the $`[111]`$ axis passing through the center of the figure. As all the dislocation cores in our study have a minimum of $`C_3`$ symmetry, this procedure does not hinder the identification of possible spontaneous breaking of the larger $`D_3`$ symmetry group. Our maps indicate that the easy cores in both Mo and Ta have full $`D_3`$ symmetry.
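The $`C_3`$ symmetrization step can be sketched as follows. This is our simplified reading of the procedure: we average in-plane displacement vectors over sites that form exact 120° orbits about the $`[111]`$ axis, with hypothetical orbit bookkeeping supplied by the caller:

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def c3_symmetrize(disp, orbits):
    """Average in-plane displacements over each C3 orbit (i0, i1, i2) of sites
    related by 120-degree rotations, then rotate the average back to each site."""
    out = disp.copy()
    R = rot(2.0 * np.pi / 3.0)
    for i0, i1, i2 in orbits:
        avg = (disp[i0] + R.T @ disp[i1] + R.T @ R.T @ disp[i2]) / 3.0
        out[i0], out[i1], out[i2] = avg, R @ avg, R @ R @ avg
    return out
```

By construction the output is exactly $`C_3`$-symmetric, and applying the operation twice changes nothing.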
Recent high-resolution electron microscopy explorations of the symmetry of dislocations in Mo have focused on the small shifts in the (111) plane of columns of atoms along $`[111]`$. This pioneering work reports in-plane displacements extending over a range much greater than the corresponding MGPT results and also much greater than what we find ab initio. In this is attributed to possible stresses from thickness variations and foil bending. We believe this makes study of the internal structure of the core difficult, and that cleaner experimental results are required to resolve the nature of the symmetry of the core and its extension. In conclusion, our first principles results show no preferential spreading or symmetry breaking of the dislocation cores and exhibit an energy landscape with the proper scales to explain the observed immobility of dislocations. Atomistic models which demonstrate core spreading and symmetry breaking, both of which tend to reduce the mobility of the dislocations, are well-known to over-predict the Peierls stress. The combination of these two sets of observations argues strongly in favor of much more compact and symmetric bcc screw dislocation cores than presently believed. This work was supported by an ASCI ASAP Level 2 grant (contract #B338297 and #B347887). Calculations were run primarily at the Pittsburgh Supercomputing Center with support of the ASCI program and also on the MIT Xolas prototype SMP cluster. We thank members of the H-division at Lawrence Livermore National Laboratories for providing the Ta pseudopotential, the Mo MGPT code, and many useful discussions.
# Computational Geometry Column 37 ## Abstract Open problems from the 15th Annual ACM Symposium on Computational Geometry. Following is a list of the problems presented on June 15, 1999 at the open-problem session of the 15th Symposium on Computational Geometry. Are there $`n`$ points in $`\mathbb{R}^3`$ that have more than $`n!`$ different triangulated spheres? An upper bound of $`O(10^nn!)`$ derives from Tutte’s bounds on unlabeled planar triangulations times the ways to embed vertices, ignoring intersections. A lower bound of $`\mathrm{\Omega }((n/3)!)`$ is established by a construction with $`n/3`$ thin needles. Frequently in computer graphics, triangulated spheres are represented through two separate components: geometry (vertex coordinates) and topology (vertex connectivity). A negative answer to the posed question permits the compression of the topological information by storing it implicitly within the geometric information. Say that a nonconvex polyhedron in $`\mathbb{R}^3`$ is *smooth* if (a) every facet is a triangle with constant aspect ratio (i.e., the triangles are fat), and (b) the minimum dihedral angle (either internal or external) is a constant. Can any smooth polyhedron be triangulated (perhaps employing Steiner points) using only $`O(n)`$ tetrahedra? Equivalently, is it always possible to decompose a smooth polyhedron into $`O(n)`$ convex pieces? Both conditions (a) and (b) are necessary. Chazelle’s polyhedron \[C84\], which cannot be decomposed into fewer than $`\mathrm{\Omega }(n^2)`$ convex pieces, has large dihedral angles, but its facets have aspect ratio $`\mathrm{\Omega }(n)`$. A variant of Chazelle’s polyhedron can be constructed with fat triangular facets, but with minimum dihedral angle $`O(1/n^2)`$. Smooth polyhedra are not necessarily “fat” (under any reasonable definition of fat, e.g., a bounded volume ratio of the smallest enclosing sphere to the largest enclosed sphere)
—for example, a $`1\times n\times n`$ rectangular “pizza box” can be approximated arbitrarily closely by a smooth polyhedron with $`\mathrm{\Theta }(n)`$ facets. The Schönhardt polyhedron (a nontriangulatable twisted triangular prism \[E97\]) is smooth, so smooth polyhedra cannot necessarily be triangulated without Steiner points. In fact, gluing $`O(n)`$ Schönhardt polyhedra onto a sphere yields a smooth polyhedron that satisfies both conditions and requires $`\mathrm{\Omega }(n)`$ Steiner points. Is this the worst possible? B. Chazelle. Convex partitions of polyhedra: A lower bound and worst-case optimal algorithm. *SIAM J. Comput.* 13:488–507, 1984. D. Eppstein. Three untetrahedralizable objects. Part of *The Geometry Junkyard*. May 1997. http://www.ics.uci.edu/~eppstein/junkyard/untetra/ What is the complexity of the Maximum TSP for Euclidean distances in the plane? Barvinok et al. \[BJWW98\] have shown that the *Maximum TSP* (i.e., the problem of finding a traveling salesman tour of maximum length) can be solved in polynomial time, provided that distances are computed according to a polyhedral norm in $`\mathbb{R}^d`$, for some fixed $`d`$. The most natural instance of this class of problems arises for rectilinear distances in the plane $`\mathbb{R}^2`$, where the unit ball is a square. With the help of some additional improvements by Tamir, the method by Barvinok et al. yields an $`O(n^2\mathrm{log}n)`$ algorithm for this case. This was improved in \[F99\] to an $`O(n)`$ algorithm that finds an optimal solution. For the case of Euclidean distances in $`\mathbb{R}^d`$ for any fixed $`d\ge 3`$, it has been shown \[F99\] that the Maximum TSP is NP-hard. The case of $`d=2`$ remains open, but the problem poser conjectures it to be NP-hard. A. I. Barvinok, D. S. Johnson, G. J. Woeginger, and R. Woodroofe. The maximum traveling salesman problem under polyhedral norms. Proc. 6th Internat. Integer Program. Combin. Optim. Conf., Springer-Verlag, Lecture Notes in Comput. Sci. 1412, 1998, 195–201. S. P.
Fekete. Simplicity and hardness of the Maximum Traveling Salesman Problem under geometric distances. Proc. 10th Annu. ACM-SIAM Sympos. Discrete Algorithms, 1999, 337–345. The “wireless communications location problem” is to determine the point $`p`$ on the $`z=0`$ plane at which a user is located from signals detected by a collection of receivers. The user sends out a radio signal (a sphere expanding at uniform velocity), which perhaps diffracts around or reflects off of buildings before being received. The given data is as follows: * A set $`B`$ of orthogonal 3D boxes (the buildings), oriented isothetically, each with its base on the $`z=0`$ plane, each taller than the highest receiver; * A set $`R`$ of receiver locations (points in $`\mathbb{R}^3`$); and * A set $`S`$ of received signals, each described by the receiver and the distance traveled along the signal’s polygonal path. The signals $`S`$ come in three varieties: 1. Line-of-sight (LOS) signals, direct from $`p`$ to $`rR`$. 2. $`1`$-diffracted signals, diffracted at most once around an edge or vertex of some building $`bB`$. In effect the expanding sphere of radio waves is retransmitted at building corners. 3. $`k`$-reflected signals, reflected at most $`k`$ times off of building walls (which act as radio-wave mirrors). It may be assumed that $`k\le 3`$. LOS signals can be distinguished from diffracted and reflected signals. It may be assumed that no signal is both diffracted and reflected. Various pragmatic assumptions may be made about the maximum propagation distance of a signal, and the size of the buildings with respect to this distance; see \[PS99\] and \[F96\] for the technical details. An approximate location for $`p`$ with bounded error would suffice. A natural question is determining necessary conditions for accurate location of the user. Preprocessing can be as extensive as needed. G. Perkins and D. Souvaine. Efficient radio wave front propagation for wireless radio signal prediction. DIMACS Tech. Rep.
99-27, May 1999. http://dimacs.rutgers.edu/TechnicalReports/1999.html S. Fortune. A beam-tracing algorithm for prediction of indoor radio propagation. Proc. 1st ACM Workshop on Appl. Comput. Geom. Eds. M. Lin and D. Manocha. Springer-Verlag, Lecture Notes in Comput. Sci. 1148, 1996, 76–81. Define a prism $`𝒫`$ as a polyhedron formed by extruding a polygon $`P`$ in the $`z=0`$ plane to the $`z=1`$ plane. Call the bottom and top faces $`P_0`$ and $`P_1`$ respectively. Given two different triangulations of $`P_0`$ and $`P_1`$, each perhaps using Steiner points, determine whether $`𝒫`$ can be tetrahedralized respecting the triangulations of $`P_0`$ and $`P_1`$, perhaps with Steiner points, but using only $`O(n)`$ tetrahedra. If so, provide an algorithm to find such a tetrahedralization. Is there a topological cube with orthogonal opposite facets? More precisely, does there exist a bounded convex polytope in $`\mathbb{R}^3`$, whose graph ($`1`$-skeleton) is the same as that of a unit cube (6 quadrangular facets with the incidence relationships of a cube), such that for each of the three pairs of opposite facets, the planes containing them form a dihedral angle of 90 degrees? During the open-problem session, John Conway observed that there is a nonconvex topological cube with this property. His construction is as follows. Take three quadrangles arranged in the plane as shown to the right, and add three vertical rectangular faces at the bold edges, meeting at a vertex at infinity. Then “distort” this polyhedron slightly to make a topological cube with orthogonal opposite faces. The center vertex dents inwards, so the polyhedron is nonconvex. Solve either of the following problems in $`o(n^4)`$ time: 1. Let $`S`$ be a set of $`n`$ segments in the plane. Determine whether there is a point $`p`$ seeing all segments completely. 2. Let $`P`$ be a set of $`n`$ points in the plane.
Find a point $`q`$ in the plane that maximizes the minimum difference between any pair of distances $`d(p,q)`$ and $`d(r,q)`$ over all pairs $`p,r\in P`$. That is, find a point $`q`$ that maximizes $`\mathrm{min}_{p,r\in P}|d(p,q)-d(r,q)|`$. The goal of the second problem is to reduce the dimension of $`P`$ from two to one by replacing every point by its distance to the point $`q`$. For this to be effective, $`q`$ should be chosen so that it discriminates between any pair of points as much as possible. Let $`P`$ be a set of $`n`$ points in the plane. Call a unit circle *good* if it touches at least three points from $`P`$. What is the maximum number of good unit circles that can be placed, maximized over all possible sets of $`n`$ points? All that is known is a trivial upper bound of $`O(n^2)`$, and a lower bound of $`2n`$ (on a lattice). This problem was mentioned in \[B83\]; nothing more recent is known. It has an application in estimating the complexity of certain pattern-matching problems. J. Beck. On the lattice property of the plane and some problems of Dirac, Motzkin and Erdős in combinatorial geometry. Combinatorica 3(3–4): 281–297, 1983. Several classic problems were reposed: Consider the following game played on an infinite chessboard. In each move, the *angel* can fly to any uneaten square within 1,000 king’s moves; and the *devil* can eat any one square (while the angel is aloft). The angel wins if it has a strategy so that it can always move; otherwise the devil wins. Prove who wins. Reward: $1,000 if the devil wins; $100 if the angel wins. E. R. Berlekamp, J. H. Conway, R. K. Guy. Winning Ways for Your Mathematical Plays. Volume 2: Games in Particular. Academic Press, 1982, p. 609. *Thrackles* are made of “spots” (points in $`\mathbb{R}^2`$) and “paths” (smooth closed curves, ending at spots), with the condition that any two paths intersect at exactly one point, and have distinct tangents at that point. Can there be more paths than spots? Reward: $1,000. J. O’Rourke.
Computational geometry column 26. Internat. J. Comput. Geom. Appl. 5:339–341, 1995. Also in SIGACT News 26(2):15–17, 1995. Is there a polyhedron with a hole in every face (i.e., every face of which is multiply connected)? Reward: $10,000 / number of faces in the discovered polyhedron. The reason for the divisor in the reward is that a solution is known for an astronomical number of faces. Given an infinite set $`S`$ of points in $`\mathbb{R}^2`$, with the property that there are at most $`k`$ points of $`S`$ in each disk of radius $`1`$, must there be arbitrarily large convex “holes,” i.e., regions containing no points of $`S`$? H. P. Croft, K. J. Falconer, and R. K. Guy. Unsolved Problems in Geometry. Springer-Verlag, 1991. Problem E14.
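For small point sets, the good-unit-circle count from the problem above can be obtained by brute force in $`O(n^3)`$: every noncollinear triple determines a circumcircle, and one keeps the distinct circles of radius 1. The sketch below is ours; deduplication by rounding assumes no two distinct centers are within $`10^6`$ of each other.

```python
import itertools, math

def good_unit_circles(points, eps=1e-9):
    """Count distinct unit circles passing through >= 3 of the given points."""
    centers = set()
    for (ax, ay), (bx, by), (cx, cy) in itertools.combinations(points, 3):
        d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
        if abs(d) < eps:        # collinear triple: no circumcircle
            continue
        # standard circumcenter formula
        ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
              + (cx**2 + cy**2) * (ay - by)) / d
        uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
              + (cx**2 + cy**2) * (bx - ax)) / d
        if abs(math.hypot(ux - ax, uy - ay) - 1.0) < eps:  # circumradius = 1?
            centers.add((round(ux, 6), round(uy, 6)))
    return len(centers)
```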
# Gamma-Ray Bursts Detected From BATSE DISCLA Data ## 1 Introduction The Burst and Transient Source Experiment (BATSE) on board the Compton Gamma Ray Observatory has been very effective in recording gamma-ray bursts (GRB). Soon after the observatory’s launch on April 19, 1991, the results obtained with BATSE showed convincingly that the sky distribution of GRBs was isotropic (Meegan et al. 1992a) and that the line-of-sight distribution was incompatible with a homogeneous distribution in Euclidean space (Meegan et al. 1992b). The proposition based on these findings that the distance scale of GRBs is cosmological (Paczynski 1992) was eventually confirmed by the observation of a large redshift (Metzger 1997). The GRB data from BATSE have been published primarily in a succession of catalogs (cf. Meegan et al. 1997). These data are based on an on-board trigger, which acts when certain conditions are fulfilled. Usually these require that the counts in the energy range 50–300 keV exceed the background by $`5.5\sigma `$ on a time scale of 64, 256 or 1024 msec in at least two of the eight detectors. Variations of these criteria, in the channels and the signal-to-noise limits used, have been in effect at various times (cf. Meegan et al. 1997). BATSE produces data in various modes in archival form. The DISCLA data provide a continuous record of the counts in channels 1, 2, 3, and 4 for the eight detectors on a time scale of 1024 msec. The DISCLA data allow the detection of GRBs a posteriori in a way similar to the on-board trigger (except for the availability of only one time scale). There are distinct advantages to searching for GRBs in archival data: the detection parameters can be set independently of those that are hard-wired in the on-board trigger mechanism; the search can be repeated with a different detection algorithm; each search can be carried out on the full data set, etc.
A search for GRBs not detected by the on-board BATSE trigger has been conducted by Kommers et al. (1997). We have used the DISCLA data to search for GRBs in the time period TJD $`8365-10528`$. In the following sections we describe the search, the classification of the triggers, and the derivation of $`<V/V_{max}>`$ of the resulting sample. ## 2 The search We have tried to automate the search procedure as much as possible, for the sake of both speed and consistency. This will allow simulations to be carried out to investigate various scientific problems. Our search has some similarities to that carried out by the on-board BATSE trigger: using the counts in channels $`2+3`$ covering the energy range $`50-300`$ keV; averaging background counts over 17.408 s; and requiring that two modules see a minimum excess over background. We differ, however, in the evaluation of the background as explained below. In the search conducted by the on-board BATSE trigger on the 1024 msec time scale, the background is derived over a given stretch of 17.408 sec and the trigger test is carried out for the 1024 msec bin following that stretch. The same background stretch is used for the next sixteen 1024 msec bins, so that there is a separation between the end of the background stretch and the test bin of $`0-16.384`$ sec. We use a fixed separation between the background stretch and the test bin, and also introduce a second background stretch allowing a linear interpolation of the background for the duration of the GRB, cf. Fig. 1. For a small or zero separation between background stretch and test bin, slowly rising GRBs may escape detection (Higdon and Lingenfelter 1996). If we increase the separation, more slowly rising GRBs can be detected, but then we find that the number of false triggers caused by higher-order variations in the background increases substantially. 
In the current search, we used a separation of 20.48 sec, and placed the beginning of the second background stretch 230.4 sec after the test bin, as shown in Fig. 1. Following a burst, we disabled the trigger mechanism for 230.4 sec. Artifacts or defects in the data will lead to false triggers. We have systematically searched for gaps in the data, and for constant output numbers (usually zeroes), and, from quality data provided by the BATSE project, for the occurrence of checksum errors. In each case, we set up a time window of exclusion around the defect, so that the automatic application of our search algorithm will not lead to a trigger. Considerable time is lost: the total number of our exclusion windows is over $`151,000`$, of which some overlap. We adopted a limiting $`S/N`$ ratio for detection in two detectors of $`5.0`$. Our search of the time period TJD $`8365-10528`$ with the criteria described above yielded 7536 triggers. The geographic coordinates of the observatory at the time of trigger are plotted in Fig. 2. Besides some areas near the South Atlantic Anomaly, there are clear concentrations in the south over W. Australia and in the north over Mexico and Texas. The southern concentration has been discussed by Horack et al. (1992). Since these concentrations have nothing to do with cosmic GRBs, we outlined geographic exclusion regions to avoid most of these triggers. With these exclusions in place, we are left with 4485 triggers, which form the basis for classification and discussion. ## 3 Classification of triggers For each of the 4485 triggers, we derived sky positions on the assumption that they corresponded to the emission from a point source. We used the response function for the eight detectors (cf. Pendleton et al. 1995), and assumed a Band et al. (1993) type spectrum with $`\alpha =1`$, $`\beta =2`$, and break energy $`E_0=150`$ keV. 
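The trigger logic of Sec. 2 — a two-detector $`S/N\geq 5.0`$ requirement against a background interpolated linearly between two 17.408-sec stretches — can be sketched in a few lines. This is a toy reconstruction, not the actual pipeline: stretch and gap lengths are rounded to whole 1024-msec bins (17, 20, 225), and simple Poisson errors are assumed.

```python
import numpy as np

SNR_LIMIT = 5.0   # per-detector detection threshold (Sec. 2)
BG_BINS   = 17    # ~17.408 s of 1024-msec bins per background stretch
SEP_BINS  = 20    # ~20.48 s between first stretch and the test bin
GAP_BINS  = 225   # second stretch begins ~230.4 s after the test bin

def background(counts, t):
    """Background at bin t, linearly interpolated between the mean of a
    stretch ending SEP_BINS bins before t and one starting GAP_BINS after."""
    s1 = counts[t - SEP_BINS - BG_BINS : t - SEP_BINS]
    s2 = counts[t + GAP_BINS : t + GAP_BINS + BG_BINS]
    t1 = t - SEP_BINS - BG_BINS / 2.0   # centre of first stretch
    t2 = t + GAP_BINS + BG_BINS / 2.0   # centre of second stretch
    return s1.mean() + (s2.mean() - s1.mean()) * (t - t1) / (t2 - t1)

def triggers(det_counts):
    """Bins where at least two detectors exceed SNR_LIMIT.
    det_counts: array of shape (n_detectors, n_bins)."""
    n_det, n_bins = det_counts.shape
    hits = []
    for t in range(SEP_BINS + BG_BINS, n_bins - GAP_BINS - BG_BINS):
        n_hit = 0
        for c in det_counts:
            bg = background(c, t)
            if (c[t] - bg) / np.sqrt(bg) >= SNR_LIMIT:
                n_hit += 1
        if n_hit >= 2:
            hits.append(t)
    return hits
```

A synthetic example: with a flat background of 100 counts per bin and a one-bin spike of 100 extra counts in three detectors, only the spike bin triggers (excess/noise = 100/10 = 10).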
The positions were derived by testing around $`10,000`$ positions on the sky and minimizing $`\chi ^2`$ of the counts for all eight detectors. We ignored the effects of Compton scattering off the Earth’s atmosphere or the spacecraft. Fig. 3 shows the celestial coordinates of all 4485 triggers. It is clear from an inspection of Fig. 3 that at least four types of triggers are present. Both Cyg X-1 and Nova Persei 1992 are prominent. The sun is clearly shown by solar flares along the ecliptic. There is a fourth component which appears more or less isotropic. Plots of the number of triggers versus time close to the sky positions of Cyg X-1 and of Nova Persei 1992 show that both were detected during well-defined periods of activity. We used our positions of Nova Persei 1992, which totally dominated activity in its part of the sky for around 60 days, to evaluate the distribution of position errors. Based on these results, we eliminated from consideration all triggers within 23 deg of the sun, of Cyg X-1, and of Nova Persei 1992 during the periods that these sources were active, see Table 1. We accept as GRBs all triggers whose onset was within 230.4 sec of those listed for GRBs in the BATSE catalog (cf. Meegan et al. 1997 and the World Wide Web version maintained by the BATSE Burst team). For the remaining triggers, we inspected each of their time profiles from $`-300`$ to $`+400`$ sec relative to the trigger time. In our judgment of the nature of these events we were guided by descriptions by the BATSE team of magnetospheric events (cf. Fishman et al. 1992, Horack et al. 1992), by the value of $`\chi ^2`$ of the solution, by the dispersion of the positions obtained during each second that the burst was brighter than the limiting flux, and by a derivation of the angular motion of the source (which for a GRB should be consistent with zero). 
In the process, we noticed that 44 of the remaining triggers had spectra softer than any listed in the BATSE catalog, and that they were all within 35 deg of the sun. On the basis of this evidence, we rejected these as GRBs. The results of this exercise are shown in Table 1. We accepted 404 GRBs that are not listed in the BATSE catalog. There are 130 GRBs listed in the BATSE catalog that we have not detected, while as stated we find 404 GRBs that are not in the catalog. The difference in content between the BATSE catalog and our sample is partly a consequence of differences in the de-activation of the trigger following a burst or around a defect in the data, and partly caused by statistical differences in the different backgrounds used. Based on the entire search procedure, including the time windows of exclusion set up around bad data, the exclusion of particular geographic areas and of some sky areas in the classification procedure, and the part of the sky occulted by the Earth, we estimate that the sample of 1422 GRBs represents effectively 2.0 years of isotropic exposure by BATSE, for an annual detection rate of 710 GRBs per year. ## 4 Derivation of $`<V/V_{max}>`$ The most important property of the GRBs in the sample is the euclidean value of $`<V/V_{max}>`$. We will use its value to derive a distance scale for GRBs in a subsequent communication. In most evaluations of $`V/V_{max}`$ for individual GRBs, it has been assumed that it equals $`(C_{max}/C_{min})^{-3/2}`$, where $`C_{min}`$ is the minimum detectable count rate and $`C_{max}`$ is the count rate at the maximum amplitude of the burst. This may not be correct. If we remove the source to the largest distance at which it is just still detectable, it is likely that the detection of the burst will occur later, and therefore that $`C_{min}`$ will include some burst signal. Also, in some cases the part of the burst containing $`C_{max}`$ will not cause a trigger when we remove the burst. 
Due to these inescapable effects, $`V/V_{max}`$ will be larger than follows from the values of $`C_{max}`$ and $`C_{min}`$ at detection. This situation was encountered by Higdon and Schmidt (1990) in their discussion of GRBs from the Venera 11 and 12 KONUS experiments. We have derived the euclidean value of $`V/V_{max}`$ for each GRB as follows. Upon detection, we derive the burst time profile by subtracting the background (interpolated between the two background stretches, see Fig. 1) from the counts. We multiply this original time profile by a factor of X and add it to the background. Next we apply the search algorithm to see whether we detect a burst. If so, we repeat the process with a smaller value of X, until we do not detect a burst anymore. This represents the situation (as best we can) when the source has been removed in distance by a factor $`X^{-1/2}`$ in euclidean space. Therefore $`V/V_{max}`$ for the source is $`X^{3/2}`$. Application of this process to all bursts in the sample produces $`<V/V_{max}>=0.334\pm 0.008`$. ###### Acknowledgements. During the many years in which this work came to fruition, I have had much appreciated help from or discussions with D. Chakrabarty, M. Finger, G. Fishman, J. Gunn, J. Higdon, J. Horack, C. Meegan, T. Prince, and B. Vaughan.
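The scaling procedure just described can likewise be sketched. This is a toy version: `detect` stands for whatever trigger function is in use (any boolean-valued function of a counts array), and a coarse downward grid scan in X replaces the stepping used in the actual analysis; the burst is assumed to be detected at X = 1.

```python
import numpy as np

def v_over_vmax(profile, bg, detect, x_grid=None):
    """Scale the background-subtracted burst `profile` by X, add it back to
    the background `bg`, and find the smallest X at which `detect` still
    fires; the euclidean V/Vmax is then X**1.5."""
    if x_grid is None:
        x_grid = np.linspace(1.0, 0.01, 100)   # descending scan in X
    x_min = 1.0
    for x in x_grid:
        if detect(bg + x * profile):
            x_min = x
        else:
            break
    return x_min ** 1.5
```

For a single-bin burst of amplitude 100 on a background of 100 (noise 10), a 5-sigma trigger keeps firing down to X = 0.5, giving V/Vmax close to 0.5**1.5 = 0.354.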
no-problem/9908/hep-ph9908432.html
ar5iv
text
# $`|V_{cb}|`$ and $`|V_{ub}|`$ from $`B`$ Decays: Recent Progress and Limitations Invited talk at the Chicago Conference on Kaon Physics (Kaon’99), June 21–26, 1999, Chicago, IL. [FERMILAB-Conf-99/213-T] ## 1 Introduction The purpose of $`K`$ and $`B`$ physics in the near future is testing the Cabibbo–Kobayashi–Maskawa (CKM) picture of quark mixing and $`CP`$ violation. The goal is to overconstrain the unitarity triangle by directly measuring the sides and (some) angles in several decay modes. If the value of $`\mathrm{sin}2\beta `$, the $`CP`$ asymmetry in $`B\to J/\psi K_S`$, is near the CDF central value , then searching for new physics will require a combination of precision measurements. This talk concentrates on $`|V_{cb}|`$ and $`|V_{ub}|`$; the latter is particularly important since it largely controls the experimentally allowed range for $`\mathrm{sin}2\beta `$ in the standard model. ## 2 Exclusive decays In mesons composed of a heavy quark and a light antiquark (plus gluons and $`q\overline{q}`$ pairs), the energy scale of strong processes is small compared to the heavy quark mass. The heavy quark acts as a static point-like color source with fixed four-velocity, since the soft gluons responsible for confinement cannot resolve structures much smaller than $`\mathrm{\Lambda }_{\mathrm{QCD}}`$, such as the heavy quark’s Compton wavelength. Thus the configuration of the light degrees of freedom becomes insensitive to the spin and flavor (mass) of the heavy quark, resulting in an $`SU(2n)`$ spin-flavor symmetry ($`n`$ is the number of heavy quark flavors). Heavy quark symmetry (HQS) helps in understanding the spectroscopy and decays of heavy hadrons from first principles. The predictions of HQS are particularly restrictive for $`\overline{B}\to D^{(*)}\ell \overline{\nu }`$ decays. In the infinite mass limit all form factors are proportional to a universal Isgur-Wise function, $`\xi (v\cdot v^{})`$, satisfying $`\xi (1)=1`$ . 
The symmetry breaking corrections can be organized in a simultaneous expansion in $`\alpha _s`$ and $`\mathrm{\Lambda }_{\mathrm{QCD}}/m_Q`$ ($`Q=c,b`$). The $`\overline{B}\to D^{(*)}\ell \overline{\nu }`$ decay rates are given by $`{\displaystyle \frac{d\mathrm{\Gamma }(\overline{B}\to D^{*}\ell \overline{\nu })}{dw}}`$ $`=`$ $`{\displaystyle \frac{G_F^2m_B^5}{48\pi ^3}}r_{*}^3(1-r_{*})^2\sqrt{w^2-1}(w+1)^2`$ $`\times \left[1+{\displaystyle \frac{4w}{1+w}}{\displaystyle \frac{1-2wr_{*}+r_{*}^2}{(1-r_{*})^2}}\right]|V_{cb}|^2\mathcal{F}_{D^{*}}^2(w),`$ $`{\displaystyle \frac{d\mathrm{\Gamma }(\overline{B}\to D\ell \overline{\nu })}{dw}}`$ $`=`$ $`{\displaystyle \frac{G_F^2m_B^5}{48\pi ^3}}r^3(1+r)^2(w^2-1)^{3/2}|V_{cb}|^2\mathcal{F}_D^2(w),`$ (1) where $`w=v\cdot v^{}`$ and $`r_{(*)}=m_{D^{(*)}}/m_B`$. $`\mathcal{F}_{D^{(*)}}(w)`$ is equal to the Isgur-Wise function in the $`m_Q\to \infty `$ limit, and in particular $`\mathcal{F}_{D^{(*)}}(1)=1`$, allowing for a model independent determination of $`|V_{cb}|`$. Including symmetry breaking corrections one finds $`\mathcal{F}_{D^{*}}(1)`$ $`=`$ $`1+c_A(\alpha _s)+{\displaystyle \frac{0}{m_Q}}+{\displaystyle \frac{(\cdots )}{m_Q^2}}+\cdots ,`$ $`\mathcal{F}_D(1)`$ $`=`$ $`1+c_V(\alpha _s)+{\displaystyle \frac{(\cdots )}{m_Q}}+{\displaystyle \frac{(\cdots )}{m_Q^2}}+\cdots .`$ (2) The perturbative corrections, $`c_A=-0.04`$ and $`c_V=+0.02`$, have been computed to order $`\alpha _s^2`$ , and the unknown higher order corrections should affect $`|V_{cb}|`$ at below the 1% level. The vanishing of the order $`1/m_Q`$ corrections to $`\mathcal{F}_{D^{*}}(1)`$ is known as Luke’s theorem . The terms indicated by $`(\cdots )`$ are only known using phenomenological models at present. Thus the determination of $`|V_{cb}|`$ from $`\overline{B}\to D^{*}\ell \overline{\nu }`$ is theoretically more reliable than that from $`\overline{B}\to D\ell \overline{\nu }`$ (unless using lattice QCD for $`\mathcal{F}_{D^{(*)}}(1)`$ — see below), although for example QCD sum rules predict that the order $`1/m_Q`$ correction to $`\mathcal{F}_D(1)`$ is small . 
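For orientation, the $`\overline{B}\to D^{*}\ell \overline{\nu }`$ rate in Eq. (1) is straightforward to evaluate numerically. The sketch below assumes a linear form factor $`\mathcal{F}(w)=\mathcal{F}(1)[1-\rho ^2(w-1)]`$ with an illustrative slope $`\rho ^2=1.3`$ and $`|V_{cb}|=0.038`$; these inputs (apart from $`\mathcal{F}(1)=0.91`$) are assumptions for the example, not values from the text.

```python
import numpy as np

G_F  = 1.16637e-5          # Fermi constant, GeV^-2
HBAR = 6.582e-25           # GeV * s
M_B, M_DS = 5.2792, 2.0103 # B and D* masses, GeV
TAU_B = 1.6e-12            # B lifetime, s

def dgamma_dw(w, Vcb=0.038, F1=0.91, rho2=1.3):
    """dGamma/dw for B -> D* l nu, Eq. (1), with an assumed linear F(w)."""
    r = M_DS / M_B
    F = F1 * (1.0 - rho2 * (w - 1.0))
    return (G_F**2 * M_B**5 / (48 * np.pi**3)
            * r**3 * (1 - r)**2 * np.sqrt(w**2 - 1) * (w + 1)**2
            * (1 + 4 * w / (1 + w) * (1 - 2 * w * r + r**2) / (1 - r)**2)
            * Vcb**2 * F**2)

# integrate over the physical range 1 < w < (1 + r^2)/(2r) by trapezoid
r = M_DS / M_B
w = np.linspace(1.0, (1 + r**2) / (2 * r), 4001)
y = dgamma_dw(w)
gamma = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(w)))  # width in GeV
branching = gamma * TAU_B / HBAR   # few-percent level, as expected
```

The rate vanishes at zero recoil ($`w=1`$) because of the $`\sqrt{w^2-1}`$ phase-space factor, which is why the extrapolation to $`w=1`$ discussed below is an experimental issue.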
Due to the extra $`w^2-1`$ suppression near zero recoil, $`\overline{B}\to D\ell \overline{\nu }`$ is also harder experimentally. The main uncertainty in this determination of $`|V_{cb}|`$ comes from the estimate of nonperturbative corrections at zero recoil. In the case of $`\overline{B}\to D^{*}\ell \overline{\nu }`$, model calculations and sum rule estimates suggest about $`-5\%`$. Assigning a 100% uncertainty to this estimate, I will use $$\mathcal{F}_{D^{*}}(1)=0.91\pm 0.05,\mathcal{F}_D(1)=1.02\pm 0.08.$$ (3) The most promising way to reduce these uncertainties may be calculating directly the deviation of the form factor from unity, $`\mathcal{F}_{D^{(*)}}(1)-1`$, in lattice QCD from certain double ratios of correlation functions . Recent quenched calculations give $`\mathcal{F}_D(1)=1.06\pm 0.02`$ and $`\mathcal{F}_{D^{*}}(1)=0.935\pm 0.03`$ , in agreement with Eq. (3) but with smaller errors. Another uncertainty comes from extrapolating the experimentally measured quantity, $`|V_{cb}|\mathcal{F}_{D^{(*)}}(w)`$, to zero recoil. Recent theoretical developments largely reduce this uncertainty by establishing a model independent relationship between the slope and curvature of $`\mathcal{F}_{D^{(*)}}(w)`$ . This may also become less of an experimental problem at asymmetric $`B`$ factories, where the efficiency may fall less rapidly near zero recoil. Eq. (3) and the experimental average, $`|V_{cb}|\mathcal{F}_{D^{*}}(1)=0.0347\pm 0.0015`$ , obtained using the constraints on the shape of $`\mathcal{F}_{D^{*}}(w)`$ yield $$|V_{cb}|=(38.1\pm 1.7_{\mathrm{exp}}\pm 2.0_{\mathrm{th}})\times 10^{-3}.$$ (4) The value obtained from $`\overline{B}\to D\ell \overline{\nu }`$ is consistent with this, but the experimental uncertainties are significantly larger. For the determination of $`|V_{ub}|`$ from exclusive heavy to light decays, heavy quark symmetry is less predictive. It neither reduces the number of form factors parameterizing these decays nor determines the value of any form factor. 
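The arithmetic behind Eq. (4) is just a division with error propagation; the sketch below reproduces the central value from Eq. (3) and the experimental average (rounding of the quoted errors may differ in the last digit).

```python
# inputs: F_{D*}(1) from Eq. (3) and the average |Vcb| F_{D*}(1)
F1, dF1   = 0.91, 0.05
prod, dpr = 0.0347, 0.0015

Vcb     = prod / F1            # ~0.0381, the central value of Eq. (4)
err_exp = dpr / F1             # experimental error, ~0.0016-0.0017
err_th  = prod * dF1 / F1**2   # theory error from F_{D*}(1), ~0.0021
```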
Still, there are model independent relations between $`B`$ and $`D`$ decay form factors, e.g., the form factors which occur in $`D\to K^{*}\overline{\ell}\nu `$ can be related to those in $`\overline{B}\to \rho \ell \overline{\nu }`$ using heavy quark and chiral symmetry . These relations apply for the same value of $`v\cdot v^{}`$ in the two processes, i.e., from the measured $`D\to K^{*}\overline{\ell}\nu `$ form factors one can predict the $`\overline{B}\to \rho \ell \overline{\nu }`$ rate in the large $`q^2`$ region . Such a prediction has first order heavy quark and chiral symmetry breaking corrections, each of which can be $`15-20\%`$. Lattice QCD also works best for large $`q^2`$, but the existing calculations are still all quenched. Light cone sum rules are claimed to yield predictions for the form factors with small model dependence in the small $`q^2`$ region. Recently CLEO made the first attempt at concentrating on the large $`q^2`$ region to reduce the model dependence, and obtained $$|V_{ub}|=(3.25\pm 0.14_{-0.29}^{+0.21}\pm 0.55)\times 10^{-3}.$$ (5) A determination of $`|V_{ub}|`$ from $`\overline{B}\to \pi \ell \overline{\nu }`$ is more complicated because very near zero recoil “pole contributions” spoil the simple scaling of the form factors with the heavy quark mass. Still, in the future some combination of the soft pion limit, model independent bounds based on dispersion relations and analyticity , and lattice results may provide a determination of $`|V_{ub}|`$ from this decay with small errors. If experimental data on the $`D\to \rho \overline{\ell}\nu `$ and $`\overline{B}\to K^{*}\ell \overline{\ell}`$ form factors become available in the future, then $`|V_{ub}|`$ can be extracted with $`\sim 10\%`$ theoretical uncertainty using a “Grinstein-type double ratio” , which only deviates from unity due to corrections which violate both heavy quark and chiral symmetries. 
Such a determination is possible even if only the $`q^2`$ spectrum in $`D\to \rho \overline{\ell}\nu `$ and the integrated $`\overline{B}\to K^{*}\ell \overline{\ell}`$ rate in the large $`q^2`$ region are measured . ## 3 Inclusive decays Inclusive $`B`$ decay rates can be computed model independently in a series in $`\mathrm{\Lambda }_{\mathrm{QCD}}/m_b`$ and $`\alpha _s(m_b)`$, using an operator product expansion (OPE) . The $`m_b\to \infty `$ limit is given by $`b`$ quark decay, and for most quantities of interest it is known including the dominant part of the order $`\alpha _s^2`$ corrections. Observables which do not depend on the four-momentum of the hadronic final state (e.g., total decay rate and lepton spectra) receive no correction at order $`\mathrm{\Lambda }_{\mathrm{QCD}}/m_b`$ when written in terms of $`m_b`$, whereas differential rates with respect to hadronic variables (e.g., hadronic energy and invariant mass spectra) also depend on $`\overline{\mathrm{\Lambda }}/m_b`$, where $`\overline{\mathrm{\Lambda }}`$ is the $`m_B-m_b`$ mass difference in the $`m_b\to \infty `$ limit. At order $`\mathrm{\Lambda }_{\mathrm{QCD}}^2/m_b^2`$, the corrections are parameterized by two hadronic matrix elements, usually denoted by $`\lambda _1`$ and $`\lambda _2`$. The value $`\lambda _2\simeq 0.12\,\mathrm{GeV}^2`$ is known from the $`B^{*}-B`$ mass splitting. Corrections to the $`m_b\to \infty `$ limit are expected to be under control in parts of the $`b\to q`$ phase space where several hadronic final states are allowed (but not required) to contribute with invariant masses satisfying $`m_{X_q}^2\gtrsim m_q^2+(\mathrm{few}\mathrm{times})\,\mathrm{\Lambda }_{\mathrm{QCD}}m_b`$. The major uncertainty in the predictions for such “sufficiently inclusive” observables is from the values of the quark masses and $`\lambda _1`$, or equivalently, the values of $`\overline{\mathrm{\Lambda }}`$ and $`\lambda _1`$. 
These quantities can be extracted, for example, from heavy meson decay spectra. A theoretical subtlety is related to the fact that $`\overline{\mathrm{\Lambda }}`$ (or the heavy quark pole mass) cannot be defined unambiguously beyond perturbation theory , and its value extracted from data using theoretical expressions valid to different orders in $`\alpha _s`$ may vary by order $`\mathrm{\Lambda }_{\mathrm{QCD}}`$. These ambiguities cancel when one relates consistently physical observables to one another. One way to make this cancellation manifest is by using short-distance quark mass definitions, but recent determinations of such $`b`$ quark masses still have about $`50-100`$ MeV uncertainties . The shape of the lepton energy or hadronic invariant mass spectra in $`\overline{B}\to X_c\ell \overline{\nu }`$ decay can be used to determine $`\overline{\mathrm{\Lambda }}`$ and $`\lambda _1`$. Last year the CLEO Collaboration measured the first two moments of the hadronic invariant mass-squared distribution. Each of these measurements gives an allowed band in the $`\overline{\mathrm{\Lambda }}`$–$`\lambda _1`$ plane, and their intersection gives $$\overline{\mathrm{\Lambda }}=(0.33\pm 0.08)\mathrm{GeV},\lambda _1=-(0.13\pm 0.06)\mathrm{GeV}^2.$$ (6) This result agrees well with the one obtained from an analysis of the lepton energy spectrum in Ref. . CLEO also considered moments of the lepton spectrum, however, without any restriction on the lepton energy, yielding unlikely central values of $`\overline{\mathrm{\Lambda }}`$ and $`\lambda _1`$. Since this analysis uses a model dependent extrapolation to $`E_{\ell}<0.6`$ GeV, I consider the result in Eq. (6) more reliable . The unknown order $`\mathrm{\Lambda }_{\mathrm{QCD}}^3/m_b^3`$ terms not included in Eq. (6) introduce a sizable uncertainty , which could be significantly reduced when more precise data on the photon energy spectrum in $`\overline{B}\to X_s\gamma `$ becomes available . The significance of Eq. 
(6) is that, taken at face value, it gives $`|V_{cb}|=0.0415`$ from the $`\overline{B}\to X_c\ell \overline{\nu }`$ width with only 3% uncertainty. The theoretical uncertainty hardest to quantify in the inclusive determination of $`|V_{cb}|`$ is the size of quark-hadron duality violation . Studying the shapes of these $`\overline{B}\to X_c\ell \overline{\nu }`$ decay distributions may be the best way to constrain this experimentally, since it is unlikely that duality violation would not show up in a comparison of moments of different spectra. Thus, testing our understanding of these spectra is important to assess the reliability of the inclusive determination of $`|V_{cb}|`$, and especially that of $`|V_{ub}|`$ (see below). A new approach to replace the $`b`$ quark mass in theoretical predictions with the $`\mathrm{\Upsilon }(1S)`$ mass was proposed recently . The crucial point of this “upsilon expansion” is that for theoretical consistency one must combine different orders in the $`\alpha _s`$ perturbation series in the expression for $`B`$ decay rates and $`m_\mathrm{\Upsilon }`$ in terms of $`m_b`$. As the simplest example, consider schematically the $`\overline{B}\to X_u\ell \overline{\nu }`$ rate, neglecting nonperturbative corrections, $$\mathrm{\Gamma }(\overline{B}\to X_u\ell \overline{\nu })=\frac{G_F^2|V_{ub}|^2}{192\pi ^3}m_b^5\left[1-(\cdots )\frac{\alpha _s}{\pi }ϵ-(\cdots )\frac{\alpha _s^2}{\pi ^2}ϵ^2-\cdots \right].$$ (7) The coefficients denoted by $`(\cdots )`$ are known, and the parameter $`ϵ\equiv 1`$ denotes the order in the upsilon expansion. In comparison, the expansion of the $`\mathrm{\Upsilon }(1S)`$ mass in terms of $`m_b`$ has a different structure, $$m_\mathrm{\Upsilon }=2m_b\left[1-(\cdots )\frac{\alpha _s^2}{\pi ^2}ϵ-(\cdots )\frac{\alpha _s^3}{\pi ^3}ϵ^2-\cdots \right],$$ (8) In this expansion one must assign to each term one less power of $`ϵ`$ than the power of $`\alpha _s`$ . 
At the scale $`\mu =m_b`$ both of these series appear badly behaved, but substituting Eq. (8) into Eq. (7) and collecting terms of a given order in $`ϵ`$ gives $$\mathrm{\Gamma }(\overline{B}\to X_u\ell \overline{\nu })=\frac{G_F^2|V_{ub}|^2}{192\pi ^3}\left(\frac{m_\mathrm{\Upsilon }}{2}\right)^5\left[1-0.115ϵ-0.035ϵ^2-\cdots \right].$$ (9) The perturbation series, $`1-0.115ϵ-0.035ϵ^2`$, is far better behaved than the series in Eq. (7) in terms of the $`b`$ quark pole mass, $`1-0.17ϵ-0.13ϵ^2`$, or the series expressed in terms of the $`\overline{\mathrm{MS}}`$ mass, $`1+0.30ϵ+0.19ϵ^2`$. The uncertainty in the decay rate using Eq. (9) is much smaller than that in Eq. (7), both because the perturbation series is better behaved, and because $`m_\mathrm{\Upsilon }`$ is better known (and better defined) than $`m_b`$. The relation between $`|V_{ub}|`$ and the $`\overline{B}\to X_u\ell \overline{\nu }`$ rate is $$|V_{ub}|=(3.06\pm 0.08\pm 0.08)\times 10^{-3}\left(\frac{\mathcal{B}(\overline{B}\to X_u\ell \overline{\nu })}{0.001}\frac{1.6\mathrm{ps}}{\tau _B}\right)^{1/2}.$$ (10) The upsilon expansion also improves the behavior of the perturbation series for the $`\overline{B}\to X_c\ell \overline{\nu }`$ rate, and yields $$|V_{cb}|=(41.9\pm 0.8\pm 0.5\pm 0.7)\times 10^{-3}\left(\frac{\mathcal{B}(\overline{B}\to X_c\ell \overline{\nu })}{0.105}\frac{1.6\mathrm{ps}}{\tau _B}\right)^{1/2}.$$ (11) These results agree with other estimates within the uncertainties. The first error in Eqs. (10) and (11) comes from assigning an uncertainty equal to the size of the $`ϵ^2`$ term, the second is from assuming a $`100`$ MeV uncertainty in Eq. (8), and the third error in Eq. (11) is from a $`0.25\,\mathrm{GeV}^2`$ error in $`\lambda _1`$. The most important uncertainty is the size of nonperturbative contributions to $`m_\mathrm{\Upsilon }`$ other than those which can be absorbed into $`m_b`$, for which we used $`100`$ MeV. 
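Eqs. (9)–(11) are easy to check numerically; the sketch below encodes the square-root scaling of $`|V_{ub}|`$ and $`|V_{cb}|`$ with the branching fraction and lifetime, and compares the three perturbation series quoted above.

```python
from math import sqrt

def vub_incl(br_u=0.001, tau_b_ps=1.6):
    """|Vub| from Eq. (10): central value times the sqrt scaling."""
    return 3.06e-3 * sqrt((br_u / 0.001) * (1.6 / tau_b_ps))

def vcb_incl(br_c=0.105, tau_b_ps=1.6):
    """|Vcb| from Eq. (11)."""
    return 41.9e-3 * sqrt((br_c / 0.105) * (1.6 / tau_b_ps))

# behaviour of the three series quoted in the text (epsilon = 1)
upsilon_series = 1 - 0.115 - 0.035   # Eq. (9), upsilon expansion: 0.85
pole_series    = 1 - 0.17  - 0.13    # pole-mass version of Eq. (7): 0.70
msbar_series   = 1 + 0.30  + 0.19    # MS-bar mass version: 1.49
```

The point of the comparison is visible directly: the upsilon-expansion corrections shrink from order to order, while the pole-mass and MS-bar series do not.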
By dimensional analysis it is of order $`\mathrm{\Lambda }_{\mathrm{QCD}}^4/(m_b\alpha _s)^3`$; however, quantitative estimates vary in a large range. It is preferable to constrain such effects from data . For the determination of $`|V_{ub}|`$, Eq. (10) is of little use by itself, since $`\mathcal{B}(\overline{B}\to X_u\ell \overline{\nu })`$ cannot be measured without significant cuts on the phase space. The traditional method for extracting $`|V_{ub}|`$ involves a study of the electron energy spectrum in the endpoint region $`m_B/2>E_{\ell}>(m_B^2-m_D^2)/2m_B`$ (in the $`B`$ rest frame), which must arise from $`b\to u`$ transition. Since the width of this region is only $`\sim 300`$ MeV (of order $`\mathrm{\Lambda }_{\mathrm{QCD}}`$), an infinite set of terms in the OPE may be important, and at the present time it is not known how to make a model independent prediction for the spectrum in this region. Another possibility for extracting $`|V_{ub}|`$ is based on reconstructing the neutrino momentum. The idea is to infer the invariant mass-squared of the hadronic final state, $`s_H=(p_B-p_{\ell}-p_{\overline{\nu }})^2`$. Semileptonic $`B`$ decays satisfying $`s_H<m_D^2`$ must come from $`b\to u`$ transition . Both the invariant mass region $`s_H<m_D^2`$ and the electron endpoint region $`E_{\ell}>(m_B^2-m_D^2)/2m_B`$ receive contributions from hadronic final states with invariant masses between $`m_\pi `$ and $`m_D`$. However, for the electron endpoint region the contribution of states with masses nearer to $`m_D`$ is strongly suppressed kinematically. This region may be dominated by the $`\pi `$ and the $`\rho `$, and includes only of order 10% of the total $`\overline{B}\to X_u\ell \overline{\nu }`$ rate. The situation is very different for the low invariant mass region, $`s_H<m_D^2`$, where all such states contribute without any preferential weighting towards the lowest mass ones. 
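The invariant-mass tag itself is a one-line four-vector computation. The momenta in the example below are hypothetical, chosen only to illustrate the $`s_H<m_D^2`$ cut in the $`B`$ rest frame.

```python
import numpy as np

M_B, M_D = 5.2792, 1.8646   # B and D masses, GeV

def mink_sq(p):
    """Minkowski square of p = (E, px, py, pz), metric (+,-,-,-)."""
    return p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2

def s_hadronic(p_lep, p_nu):
    """s_H = (p_B - p_l - p_nu)^2 with the B at rest."""
    p_B = np.array([M_B, 0.0, 0.0, 0.0])
    return mink_sq(p_B - np.asarray(p_lep) - np.asarray(p_nu))

def tags_b_to_u(p_lep, p_nu):
    """True when the event falls in the s_H < m_D^2 region."""
    return s_hadronic(p_lep, p_nu) < M_D**2
```

With energetic back-to-back lepton and neutrino (2 GeV each) the recoiling system is light and the event is tagged; with soft leptons (1 GeV each) the hadronic mass exceeds $`m_D`$ and it is not.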
In this case the $`\pi `$ and the $`\rho `$ exclusive modes comprise a smaller fraction, and only of order 10% of the $`\overline{B}\to X_u\ell \overline{\nu }`$ rate is excluded from the $`s_H<m_D^2`$ region. Consequently, it is much more likely that the first few terms in the OPE provide an accurate description of the decay rate in the region $`s_H<m_D^2`$ than in the region $`E_{\ell}>(m_B^2-m_D^2)/2m_B`$. Since $`m_D^2`$ is not much larger than $`\mathrm{\Lambda }_{\mathrm{QCD}}m_b`$, one needs to model the nonperturbative effects in both cases. However, assigning a 100% uncertainty to these estimates affects the extracted value of $`|V_{ub}|`$ much less from the $`s_H<m_D^2`$ than from the $`E_{\ell}>(m_B^2-m_D^2)/2m_B`$ region. Such estimates suggest that the theoretical uncertainty in $`|V_{ub}|`$ determined from the hadronic invariant mass spectrum in the region $`s_H<m_D^2`$ is about 10%. If experimental constraints force one to consider a significantly smaller region, then the uncertainties increase rapidly. The first analyses of LEP data utilizing this idea were performed recently , but it is not transparent how they weight the Dalitz plot, which affects crucially the theoretical uncertainties. The inclusive nonleptonic decay rate to “wrong sign” charm ($`\overline{B}\to X_{u\overline{c}s}`$) may also give a determination of $`|V_{ub}|`$ with modest theoretical uncertainties , if such a measurement is experimentally feasible. ## 4 Conclusions The present status of $`|V_{cb}|`$ and $`|V_{ub}|`$ is approximately $$|V_{cb}|=0.040\pm 0.002,|V_{ub}/V_{cb}|\simeq 0.090\pm 0.025.$$ (12) The central value and error of $`|V_{cb}|`$ comes from first principles, and the uncertainty in both its exclusive and inclusive determination is of order $`1/m_Q^2`$. On the other hand, the above error on $`|V_{ub}|`$ is somewhat ad hoc, since it is still estimated relying on phenomenological models. 
Within the next 3–5 years, in my opinion, an optimistic scenario is roughly as follows. The theoretical error of $`|V_{cb}|`$ might be reduced to 2–3%. This requires better agreement between the inclusive and exclusive determinations, since in the exclusive determination the nonperturbative corrections to $`\mathcal{F}_{D^{(*)}}(1)`$ are at the 5% level and model dependent, while in the inclusive determination it is hard to constrain model independently the size of quark-hadron duality violation. It will give confidence in lattice calculations of $`\mathcal{F}_{D^{*}}(1)`$ and $`\mathcal{F}_D(1)`$ if they give the same value of $`|V_{cb}|`$, and the deviations of the form factor ratios conventionally denoted by $`R_{1,2}(w)`$ from unity can also be predicted precisely. Quark-hadron duality violation in the inclusive determination of $`|V_{cb}|`$ can be constrained by comparing the measured shapes of $`\overline{B}\to X_c\ell \overline{\nu }`$ decay spectra in different variables (e.g., lepton energy, hadronic invariant mass, etc.). At the same time, the theoretical error of $`|V_{ub}|`$ might be reduced to about 10%. Again, a better agreement between the inclusive and exclusive determinations is needed. At this level only unquenched lattice calculations will be trusted, and they ought to give consistent values of $`|V_{ub}|`$ from $`\overline{B}\to \pi \ell \overline{\nu }`$ and $`\overline{B}\to \rho \ell \overline{\nu }`$. From exclusive decays a double ratio method discussed in Sec. 2 may give $`|V_{ub}|`$ with $`\sim 10\%`$ error. In inclusive $`\overline{B}\to X_u\ell \overline{\nu }`$ decay, the hadron invariant mass spectrum should be measured up to a cut as close to $`m_D`$ as possible. It would be reassuring as a check if varying this cut in some range leaves $`|V_{ub}|`$ unaffected. I would like to thank Jon Rosner and Bruce Winstein for inviting me and for organizing a very interesting and stimulating workshop. I also thank Adam Falk and Andreas Kronfeld for comments on the manuscript. 
Fermilab is operated by Universities Research Association, Inc., under DOE contract DE-AC02-76CH03000.
no-problem/9908/hep-ph9908412.html
ar5iv
text
# References TPR-99-15 Sum rules for the T-odd fragmentation functions A. Schäfer<sup>1</sup>, O.V. Teryaev<sup>1,2</sup> <sup>1</sup>Institut für Theoretische Physik Universität Regensburg D-93040 Regensburg, Germany <sup>2</sup>Bogoliubov Laboratory of Theoretical Physics Joint Institute for Nuclear Research, Dubna 141980 Russia ## Abstract The conservation of the intrinsic transverse momentum during parton fragmentation imposes non-trivial constraints on T-odd fragmentation functions. These significantly enhance the differences between the favoured and unfavoured fragmentation functions, which could be relevant for understanding the azimuthal asymmetries of charged pion production observed recently by the HERMES collaboration. The T-odd effects in partonic fragmentation provide a unique tool for the study of parton polarization. The key point is that they generate a self-analyzing pattern, which connects the parton polarizations in the hard process to angular asymmetries in the produced hadrons. The fragmentation functions, being the cornerstone of such processes, should be very different from the standard ones. Their characteristic property is that they require the interference of two probability amplitudes with a relative phase, which excludes the usual probabilistic interpretation. This makes the non-perturbative modelling of such effects rather difficult. One possibility is to use an effective quark propagator incorporating the imaginary part , another is model calculations in which the phase shift is produced by the Breit-Wigner propagator for wide hadronic resonances . The available experimental data are also not numerous at the moment. One should note the measurement of longitudinal handedness and the correlation of the Collins fragmentation function in $`e^+e^{-}`$-annihilation by the DELPHI collaboration. At the same time, the recent measurements of azimuthal asymmetries in DIS by SMC and HERMES show the need for an extended theoretical treatment. 
In a complete analysis the effects are represented as convolutions of various distribution and fragmentation functions (and possibly T-odd fracture functions), for which one needs some phenomenological input. In this situation model-independent constraints for T-odd fragmentation functions are most welcome. In the present article we present a method to derive such constraints, based on the conservation of the intrinsic transverse momentum of the hadrons produced in parton fragmentation. The resulting zero sum rule for the fragmentation function $`H_1^{(1)}`$ allows one to understand the flavour dependence of the azimuthal asymmetry observed at HERMES. To be more specific, let us start with the definition of the fragmentation functions $`{\displaystyle \int \frac{dx^{-}d^2x_T}{12(2\pi )^3}\mathrm{exp}\left(iP^+\frac{x^{-}}{z}+iP_T\frac{x_T}{z}\right)Tr_{Dirac}\gamma ^\mu \underset{P,X}{\sum }<0|\psi (0)|P,X><P,X|\overline{\psi }(x)|0>}`$ $`=D(z,k_T,\mu ^2)P^\mu `$ $`{\displaystyle \int \frac{dx^{-}d^2x_T}{12(2\pi )^3}\mathrm{exp}\left(iP^+\frac{x^{-}}{z}+iP_T\frac{x_T}{z}\right)Tr_{Dirac}i\sigma ^{\mu \nu }\underset{P,X}{\sum }<0|\psi (0)|P,X><P,X|\overline{\psi }(x)|0>}`$ $`=H_1^{\perp }(z,k_T,\mu ^2){\displaystyle \frac{P^\mu k_T^\nu }{M}}`$ Here $`k_T`$ and $`M`$ are the intrinsic transverse momentum of the parton and a mass parameter of the order of the jet mass. We have also given, for comparison, the expression for the standard spin-averaged fragmentation function $`D`$. The hadron and parton momenta are related by $`P^+=zk^+,P_T=zk_T`$ Let us immediately come to the physical interpretation. For $`D`$ it is most straightforward. We adopt the normalization conditions of (differing from that of by a factor of $`z`$) so that $`P(z,k_T)=D(z,k_T)/z`$ is the probability density to find a specific hadron with the specified momentum. 
The transverse momentum integrated probability density is therefore $$P(z)=\int d^2P_T\frac{D(z,\frac{P_T}{z})}{z}=z\int d^2k_TD(z,k_T)\equiv zD(z),$$ (2) so that longitudinal momentum conservation takes the form $$\underset{h}{\sum }\int _0^1dzz^2D(z)=1.$$ (3) Let us pass on to the T-odd fragmentation function. Although there is no direct probabilistic interpretation (the relative phase shift between the matrix elements $$<0|\sigma ^{\mu \nu }\psi (0)|P,X>,<P,X|\overline{\psi }(x)|0>$$ (4) is actually crucial), it may still be considered as a quark spin dependent term in the differential cross-section, so that the corresponding probability density is proportional to $$\frac{H_1^{\perp }(z,k_T)}{z}k_T\mathrm{cos}\varphi $$ (5) Here $`\varphi `$ is the azimuthal angle with respect to the plane containing the given component of the transverse momentum. One can now immediately write the transverse momentum conservation as $$\underset{h}{\sum }\int dz\int d^2P_TP_Tk_T\mathrm{cos}^2\varphi \frac{H_1^{\perp }(z,\frac{P_T}{z})}{z}\propto \underset{h}{\sum }\int dzz^2\int d^2k_Tk_T^2H_1^{\perp }(z,k_T)=0.$$ (6) Since the choice of the angle $`\varphi `$ is arbitrary, this equality expresses the conservation of each of the two components of the transverse momentum. The integration over $`\varphi `$ is factored out. The integrated quantity is then proportional to the transverse momentum averaged function $$H_1^{(1)}(z)=\int d^2k_T\frac{k_T^2}{2M^2}H_1^{\perp }(z,k_T).$$ (7) It then follows that $$\underset{h}{\sum }\int dzz^2H_1^{(1)}(z)=0,$$ (8) which is our main result. Several additional comments might help to better understand this formula. First, note that the $`k_T`$ oddness required by the T-odd nature was absolutely crucial in obtaining the result. While the conservation of the transverse momentum components applies to all $`k_T`$-dependent functions, it gives non-trivial constraints only for odd powers of $`k_T`$. The analogous contribution to $`D(z,k_T)`$ is zero simply because it is an even function of $`k_T`$. Therefore, similar sum rules hold for all $`k_T`$-odd fragmentation functions. 
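As a sanity check on this bookkeeping, the following toy Monte Carlo (not part of the original analysis; the Dirichlet multiplicity model and the Gaussian transverse width are illustrative assumptions) verifies that exact event-by-event momentum conservation translates into a unit longitudinal moment, the content of Eq. (3), and into a vanishing transverse moment, which is the mechanism behind the zero sum rule:

```python
import numpy as np

rng = np.random.default_rng(0)

def fragment_event(n_hadrons=5):
    """Toy fragmentation event: longitudinal fractions z_i sum to 1
    (Dirichlet sampling) and intrinsic transverse momenta P_T,i sum
    to zero (Gaussian draws with the recoil subtracted)."""
    z = rng.dirichlet(np.ones(n_hadrons))
    pt = rng.normal(0.0, 0.3, size=(n_hadrons, 2))
    pt -= pt.mean(axis=0)          # enforce sum_i P_T,i = 0 per event
    return z, pt

# Event-averaged inclusive moments: <sum_i z_i> plays the role of
# sum_h int dz z P(z) in Eq. (3); <sum_i P_T,i> is the transverse
# analogue whose vanishing underlies the zero sum rule.
n_events = 20000
sum_z, sum_pt = 0.0, np.zeros(2)
for _ in range(n_events):
    z, pt = fragment_event()
    sum_z += z.sum()
    sum_pt += pt.sum(axis=0)
sum_z /= n_events
sum_pt /= n_events
print(sum_z, sum_pt)   # -> 1.0 and (0, 0) up to rounding
```

The longitudinal moment comes out exactly 1 and the transverse moment exactly 0, independently of the (arbitrary) toy event generator, because both follow from per-event conservation alone.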
Moreover, similar sum rules can also be derived for functions which do not require intrinsic $`k_T`$ at all, like the twist-3 T-odd fragmentation function $`c_V`$ , which describes the fragmentation of unpolarized quarks into polarized baryons. Second, the chirality of the function is actually inessential in the presented derivation, as the latter does not make direct reference to the energy-momentum operator, which is chiral even. For the actual function $`H_1^{(1)}`$ one should think about momentum conservation for processes initiated by the tensor quark currents, rather than about matrix elements of the momentum operators. The immediate consequence of our sum rule is a larger difference between favoured (for $`z\to 1`$) and unfavoured fragmentation. If the favoured function is positive at large $`z`$, there must be a compensating negative contribution at lower $`z`$. For the unfavoured fragmentation function, which generally probes smaller $`z`$ values, the resulting cancellations should then be more pronounced. Therefore we expect that for all T-odd fragmentation functions the contributions from non-leading parton fragmentation will be severely suppressed compared to the normal fragmentation function $`D(z)`$. The resulting asymmetries should be much smaller for non-leading hadrons than for leading ones. This behaviour actually shows up in a comparison of the double semi-inclusive $`\pi ^+`$ and $`\pi ^{-}`$ asymmetries as measured by HERMES with the $`\pi ^+`$ and $`\pi ^{-}`$ azimuthal single spin asymmetries measured by the same collaboration . Acknowledgements: We acknowledge the useful discussions with H. Avakian and A.V. Efremov. A.S. acknowledges the support from DFG and BMBF. O.V.T. was supported by the Erlangen-Regensburg Graduiertenkolleg and by DFG in the framework of the Heisenberg-Landau Program of the JINR-Germany Collaboration.
# Formation of Gaseous Shells

## 1. The Problem

Shells are sharp-edged features, formed during interactions and mergers through phase-wrapping of debris (Quinn 1984, Dupraz & Combes 1986, Hernquist & Quinn 1989). Recent HI observations have revealed the existence of associated gaseous shells, slightly displaced from the stellar ones, questioning the validity of the phase-wrapping mechanism (Centaurus A: Schiminovich et al 1994; NGC 2865: Schiminovich et al 1995; NGC 1210: Petric et al 1997). An intriguing result is that the HI shells follow the curvature of the stellar shells, but are shifted about 1 kpc outside. There are two ways shells can be formed: -(1)- in minor mergers, shells correspond to phase-wrapping of the stars liberated from the small companion (e.g. Quinn 1984); -(2)- in major mergers, shells correspond to phase-wrapping of the debris falling back into the merged-object potential (Hernquist & Spergel 1992, Hibbard & Mihos 1995). But what is the fate of the gas? Due to dissipation, it falls towards the center, as in the simulations of Weil & Hernquist (1993), and gaseous shells should then not exist.

## 2. Solution Proposed

There exists a population of small and dense gas clouds that have very low dissipation. This gas has a behaviour intermediate between stars and diffuse gas, and remains available to form shells. Kojima & Noguchi (1997) have already simulated the sinking of a disk satellite into an elliptical, with a sticky-particle code instead of SPH, and found no segregation between gas and stars. We have also simulated the phenomenon, with a cloud-cloud collision code, in order to control the dissipation rate. With strong dissipation, the gas component, after a few oscillations back and forth in the primary’s potential, settles in the center, as previously. But with small dissipation, only a small fraction of the gas falls into the potential well; most of it forms shells (cf figure 1). 
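The phase-wrapping mechanism invoked above can be illustrated with a minimal one-dimensional toy integration (this is not the authors' cloud-cloud collision code; the potential, particle number, and time step are illustrative choices): collisionless debris released at rest over a range of radii oscillates with energy-dependent periods, drifts out of phase, and piles up near the turning points that mark the sharp shell edges.

```python
import numpy as np

# Motion through the center of a softened point-mass potential
# phi(x) = -GM / sqrt(x^2 + b^2); GM and b are illustrative values.
GM, b = 1.0, 0.1

def accel(x):
    return -GM * x / (x**2 + b**2)**1.5

def potential(x):
    return -GM / np.sqrt(x**2 + b**2)

# Debris dropped from rest over a range of radii: the radial period
# grows with apocenter, so particles drift out of phase ("wrap").
x = np.linspace(0.5, 1.5, 500)
v = np.zeros_like(x)
e0 = 0.5 * v**2 + potential(x)      # initial energies

dt, nsteps = 2e-4, 150000
for _ in range(nsteps):             # leapfrog (kick-drift-kick)
    v += 0.5 * dt * accel(x)
    x += dt * v
    v += 0.5 * dt * accel(x)

e1 = 0.5 * v**2 + potential(x)
drift = np.max(np.abs(e1 - e0))     # energy conservation check
radii = np.abs(x)                   # a histogram of these shows the
print(drift, radii.max())           # pile-up at the turning points
```

Because the orbits are collisionless and energy is conserved, no particle can move outside its initial apocenter; the density maxima simply sharpen there, which is the stellar-shell behaviour that low-dissipation clouds are argued to share.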
Now it is necessary to explain the spatial displacement between the gaseous and stellar shells. Two possibilities could be tested: -(1)- the gas in the companion is not as bound, and does not occupy the same region initially, being in the outer parts of the galaxy. We simulated this, but it results in only a very slight and negligible shift. -(2)- during the merging, the gas is liberated early from the companion by the tidal forces, since it is not very bound, while the stars are liberated afterwards. Through dynamical friction, the stars therefore have time to lose a lot of energy, contrary to the gas. Dynamical friction also explains the dynamical range of the shell radii (Dupraz & Combes 1987). This second possibility accounts very well for the shift observed between HI and stellar shells, according to simulations.

## 3. Observations

To check the model, millimeter observations have been carried out to detect molecular gas in shells, since the dense clumps able to form shells should be in H<sub>2</sub> form. This led to the surprising detection of CO with the SEST 15m-telescope in Centaurus-A shells (Charmandaris, Combes, van der Hulst 1999). There are comparable amounts of H<sub>2</sub> and HI gas in the shells, far away from the galaxy center (15 kpc), which is completely unusual for a normal galaxy. This is compatible with the view that the dense clumps have been dragged into the shells by the phase-wrapping mechanism, and that the diffuse HI gas has been photo-dissociated there.

## References

Charmandaris V., Combes F., van der Hulst J-M.: 1999, in prep
Dupraz C., Combes F.: 1986, A&A 166, 53; 1987, A&A 185, L1
Hernquist L., Quinn P.: 1989, ApJ 342, 1
Hernquist L., Spergel D.N.: 1992, ApJ 399, L117
Hibbard J., Mihos C.: 1995, AJ 110, 140
Kojima M., Noguchi M.: 1997, ApJ 481, 132
Petric A., Schiminovich D., van Gorkom J.H. et al.: 1997, AAS 191, 8212
Quinn P.J.: 1984, ApJ 279, 596
Schiminovich D., et al.: 1994, ApJ 423, L101; 1995, ApJ 444, L77
Weil M.L., Hernquist L.: 1993, ApJ 405, 142
# Comment on the broadening of the 7Be neutrino line in vacuum oscillation solutions of the solar neutrino problem

## Abstract

We consider the effect of thermal broadening of the <sup>7</sup>Be neutrino line on vacuum oscillation solutions of the solar neutrino problem and analyze the conditions under which the electron-neutrino survival probability must be averaged over neutrino energy. For “just-so” solutions with $`\mathrm{\Delta }m^2`$ of order $`5\times 10^{-11}\mathrm{eV}^2`$ averaging is not necessary, but for much larger values, it is. We analyze the effective broadening due to the extended production region in the Sun and find similar results. We also comment on the possibility of seasonal variations of the <sup>7</sup>Be neutrino signal.

PACS numbers: 26.65.+t, 14.60.Pq, 96.40.Tv

In “just-so” and other vacuum oscillation solutions of the solar neutrino problem, it is customary to replace the distance $`(L)`$ dependent factor $`P(L,E_\nu )=\mathrm{sin}^2\left(1.27\mathrm{\Delta }m^2L/E_\nu \right)`$ by its average value of $`1/2`$ when it oscillates rapidly over the spectra of pp or <sup>8</sup>B neutrinos. The <sup>7</sup>Be neutrinos, however, arise from an electron-capture process and are therefore mono-energetic. If the thermal broadening of the <sup>7</sup>Be line is large compared with the energy width of a single oscillation, then the averaging procedure is still valid; but if the broadening is small, then averaging is not valid and it becomes necessary to “fine tune” the oscillation parameters to be consistent with experimental data. Here we analyze the conditions under which averaging is or is not valid, and we show that it is not valid for “just-so” solutions which appear to be consistent with the observed anomaly at the high-energy end of the recoil electron spectrum. We also analyze the effective broadening induced by the finite size of the solar region in which <sup>7</sup>Be neutrinos are produced. 
Finally we study the conditions for seasonal variations in the <sup>7</sup>Be neutrino signal. To perform this analysis, we re-express $`P(L,E_\nu )`$ in terms of the energy at which its phase is $`\pi /2`$: $$1.27\mathrm{\Delta }m^2\frac{d_{\odot }}{E_{\pi /2}}=\frac{\pi }{2};\mathrm{\Delta }m^2=8.27\times 10^{-12}E_{\pi /2}.$$ (1) For a fixed solar distance $`d_{\odot }`$, $`1.5\times 10^8`$ km, this is entirely equivalent to the usual $`\mathrm{\Delta }m^2`$ characterization, with energies in MeV and $`\mathrm{\Delta }m^2`$ in eV<sup>2</sup>. The survival probability of solar electron-type neutrinos arriving at Earth from the Sun is then given by: $$P(\nu _e\to \nu _e;E_\nu )=1-\mathrm{sin}^22\theta \mathrm{sin}^2\left(\frac{\pi }{2}\frac{E_{\pi /2}}{E_\nu }\right).$$ (2) It reaches its minimum value of $`(1-\mathrm{sin}^22\theta )`$ at an energy of $`E_{\pi /2}`$, and then increases monotonically to 1 as $`E_\nu `$ increases. Suppose now that the neutrino energy changes by a small amount: $$E_\nu \to E_\nu \pm \delta .$$ (3) Then $`P(L,E_\nu )`$ becomes: $$\mathrm{sin}^2\left(\frac{\pi }{2}\frac{E_{\pi /2}}{E_\nu }\right)\to \mathrm{sin}^2\left(\frac{\pi }{2}\frac{E_{\pi /2}}{E_\nu }\left[1\mp \frac{\delta }{E_\nu }\right]\right).$$ (4) For a significant change in the value of the function, the additional phase induced by $`\pm \delta `$ must be of order $`\frac{\pi }{2}`$. This will happen when $$\frac{\delta }{E_\nu }\sim \frac{E_\nu }{E_{\pi /2}}.$$ (5) In the case of <sup>7</sup>Be neutrinos, the energy and width are: $$E_\nu =860\mathrm{keV};\delta \simeq 1\mathrm{keV}.$$ (6) Fitting the shape of the recoil spectrum requires: $$E_{\pi /2}\simeq 6-9\mathrm{MeV}.$$ (7) Thus $`\delta /E_\nu `$ is of order $`10^{-3}`$, while $`E_\nu /E_{\pi /2}`$ is much larger, namely of order $`10^{-1}`$, and so the condition above is not satisfied. This means that the width of the <sup>7</sup>Be line does not cause more than a 1-2$`\%`$ change in $`P(L,E_\nu )`$. 
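A quick numerical check of the conversion factor in Eq. (1) and of the order-of-magnitude comparison above can be written down (a sketch; the 7.5 MeV value is simply the middle of the quoted 6-9 MeV range):

```python
import math

# Eq. (1): with the oscillation phase written as
# 1.27 * dm2[eV^2] * L[km] / E[GeV], demanding that the phase equal
# pi/2 at the Earth-Sun distance fixes dm2 in terms of E_{pi/2}.
d_sun_km = 1.5e8

def dm2_of_E(E_MeV):
    return (math.pi / 2) * (E_MeV * 1e-3) / (1.27 * d_sun_km)

coef = dm2_of_E(1.0)            # eV^2 per MeV of E_{pi/2}
print(coef)                     # close to the quoted 8.27e-12

# Orders of magnitude in the 7Be broadening argument:
E_nu, delta = 0.860, 0.001      # MeV
E_half = 7.5                    # MeV, middle of the 6-9 MeV range
lhs = delta / E_nu              # ~1e-3
rhs = E_nu / E_half             # ~1e-1
print(lhs, rhs)                 # the averaging condition lhs ~ rhs fails badly
```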
We can turn this argument around and ask that, given the energy and width of <sup>7</sup>Be neutrinos, the parameters $`E_{\pi /2}`$ and its equivalent $`\mathrm{\Delta }m^2`$ be chosen so as to give a large change in $`P(L,E_\nu )`$. From the above analysis, we find that $`E_{\pi /2}`$ must be about one thousand times larger than the <sup>7</sup>Be neutrino energy: $$E_{\pi /2}\simeq 1000\mathrm{MeV};$$ (8) and the corresponding $`\mathrm{\Delta }m^2`$ increases by two orders of magnitude above the initial case, $$\mathrm{\Delta }m^2\simeq 8\times 10^{-9}\mathrm{eV}^2.$$ (9) Besides the thermal broadening of the <sup>7</sup>Be line, another feature that can cause changes in the phase of $`P(L,E_\nu )`$ is the finite extent of the production region in the Sun for <sup>7</sup>Be neutrinos. This region is about $`10\%`$ of the solar radius, or $`10^5`$ kilometers, and is small compared with the solar distance $`d_{\odot }`$, $`10^8`$ km. We can think of it as causing a change in the value of $`E_{\pi /2}`$ by one part in a thousand. Such a small fractional change will not have a significant effect on the distance-dependent factor for $`E_{\pi /2}`$ in the range of 6-9 MeV; but for the much larger value of $`E_{\pi /2}`$ given in the last equation it will cause a large variation and it will be necessary to average $`P(L,E_\nu )`$ over the production region. The variation in the Earth-Sun distance due to the eccentricity of the orbit of the Earth can also be regarded as a small variation in $`E_{\pi /2}`$. As we have shown before, this does not cause a significant seasonal variation in the <sup>7</sup>Be signal for $`E_{\pi /2}=6\mathrm{MeV}`$, but it does for $`E_{\pi /2}=9\mathrm{MeV}`$. As $`E_{\pi /2}`$ grows to about $`50\mathrm{MeV}`$, the oscillations in the signal become quite pronounced, and for much larger values they eventually become so rapid that again an average may need to be performed. 
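The growth of the eccentricity-induced seasonal variation with $`E_{\pi /2}`$ can be illustrated numerically (a sketch; the value of $`\mathrm{sin}^22\theta `$ and the sampling grid are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Survival probability over one year, with the Earth-Sun distance
# modulated by the orbital eccentricity.
ecc, sin2_2theta = 0.0167, 0.75
t = np.linspace(0.0, 1.0, 2000)                # time in years
d_ratio = 1.0 - ecc * np.cos(2 * np.pi * t)    # d(t) / d_mean

def survival(E_half_MeV, E_nu_MeV=0.860):
    # Eq. (2) with the phase rescaled by the instantaneous distance
    phase = (np.pi / 2) * (E_half_MeV / E_nu_MeV) * d_ratio
    return 1.0 - sin2_2theta * np.sin(phase) ** 2

ptp_6 = np.ptp(survival(6.0))     # peak-to-peak seasonal variation
ptp_50 = np.ptp(survival(50.0))
print(ptp_6, ptp_50)              # much larger swing for E_{pi/2} = 50 MeV
```

For $`E_{\pi /2}=50`$ MeV the phase swings by more than a radian over the year, so the survival probability sweeps through essentially its full range, while at 6 MeV the modulation stays at the few-percent level.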
The corresponding values of $`\mathrm{\Delta }m^2`$ are roughly $`8\times 10^{-11}\mathrm{eV}^2`$, $`4\times 10^{-10}\mathrm{eV}^2`$, and $`4\times 10^{-9}\mathrm{eV}^2`$ respectively. The need for an average in the eccentricity case is determined by the rate at which observations of the <sup>7</sup>Be signal are made. In the SAGE and GALLEX experiments, for example, the observations are made over approximately 22 days, and so an average will be required if eccentricity-induced oscillations have a much smaller period. In experiments like BOREXINO, the neutrinos will be detected in real time, and one might be able to trace out oscillations whose period is long compared to the average interval between successive events.
# CERN-TH/99-193, NUC-MINN-99/9-T, July 1999 (Revised January 2000) DYNAMICAL EVOLUTION OF THE SCALAR CONDENSATE IN HEAVY ION COLLISIONS ## 1 Introduction Scalar condensates often appear in quantum field theories when a symmetry is spontaneously broken. Prominent examples include the Higgs condensate and the chiral condensate. The equilibrium behavior of these condensates as a function of temperature and density has been extensively studied in the context of cosmology and heavy ion collisions. However, much remains to be learned about how these condensates really evolve in out-of-equilibrium systems. In this paper, we study the dynamical evolution of the chiral condensate in the $`O(N)`$ linear sigma model and apply it to the expanding matter in a high energy heavy ion collision. We recall that the sigma field $`\sigma `$ represents the quark condensate $`\overline{q}q`$ in the sense that they both have the same transformation properties for $`N=4`$, corresponding to two flavors of massless quarks. At high temperatures, quarks and gluons exist in a deconfined, chirally symmetric phase. At some critical temperature of order 150 MeV a transition to a hadronic phase occurs. In this confined and symmetry-broken phase the quark condensate is nonzero. In a collision between large nuclei the beam energy must be high enough so that the matter reaches at least approximate thermal equilibrium at a temperature greater than the critical temperature. We derive and solve an effective coarse-grained equation of motion for the chiral condensate, or mean sigma field, starting at the critical temperature and continuing down to the temperature where the system loses its ability to maintain local equilibrium and freezes out. The equation of motion approach we espouse here was used by Linde in his pioneering work on phase transitions in relativistic quantum field theory, although he did not use this technique to analyze dissipative processes or fields out of equilibrium. 
Our derivation of a coarse-grained field equation is based on linear response theory, where the connection between finite temperature averages and the evolution of an observable in real physical time is clear and straightforward. There is less clarity, in our opinion, with the Influence Functional and closely related Closed Time Path methods as used by many others in this context. Full nonlinear response theory is certainly equivalent to both of those methods, as indeed it must be since they all describe the same physics. However, in most cases practical calculations can only be performed when the deviation from thermal equilibrium is small. This is true of the Influence Functional and Closed Time Path methods as well as response theory. Still we are able to make several improvements to the most recent treatment of the linear sigma model. First, the complications associated with doubling of the field variables do not arise, nor those arising from paths in the complex time plane. Second, we make a direct connection with physical response functions. It is clear that these response functions should be evaluated in the fully interacting equilibrium ensemble. Here we shall estimate the relevant response functions by including the physical processes of sigma formation and decay via two pions, and the scattering of thermal pions and sigma mesons from sigma mesons in the condensate. To our knowledge, the latter processes have not been studied before in this or any related context. The amplitudes for these processes are computed in tree approximation. In section 2 we derive the fundamental equation. In section 3 we calculate decay and scattering contributions to the dissipative term in the coarse-grained field equation for the condensate. In section 4 we solve the equation of motion in the hadronic phase of a high energy heavy ion collision. Conclusions, and extensions of this paper, are discussed in section 5. 
## 2 The Fundamental Equation The Lagrangian of the linear $`O(N)`$ sigma model involves the sigma field and a vector of $`(N-1)`$ pion fields, $`𝝅`$: $$\mathrm{}=\frac{1}{2}\left(\partial _\mu \sigma \right)^2+\frac{1}{2}\left(\partial _\mu 𝝅\right)^2-\frac{1}{4}\lambda \left(\sigma ^2+𝝅^2-f_\pi ^2\right)^2,$$ (1) where $`\lambda `$ is a positive coupling constant and $`f_\pi `$ is the pion decay constant. In the vacuum the symmetry is spontaneously broken: the sigma field acquires a vacuum expectation value of $`\langle 0|\sigma |0\rangle =f_\pi `$, excitations of the sigma field have mass $`m_\sigma ^2=2\lambda f_\pi ^2`$, and the pion is a Goldstone boson since we neglect explicit chiral symmetry breaking here. The symmetry is restored by a second order phase transition at the critical temperature $`T_c^2=12f_\pi ^2/(N+2)`$. When one is interested in what happens at a fixed temperature at and below $`T_c`$ the sigma field is usually expressed as $$\sigma (𝐱,t)=v+\sigma ^{\prime }(𝐱,t),$$ (2) where the thermal average of the sigma field at temperature $`T`$ is $`\langle \sigma \rangle _{eq}=v`$ so that $`\langle \sigma ^{\prime }\rangle _{eq}=0`$. The equation of motion for the fluctuating component $`\sigma ^{\prime }`$ is $$\ddot{\sigma }^{\prime }-\nabla ^2\sigma ^{\prime }=\lambda f_\pi ^2\left(v+\sigma ^{\prime }\right)-\lambda \left(v+\sigma ^{\prime }\right)^3-\lambda \left(v+\sigma ^{\prime }\right)𝝅^2.$$ (3) We now allow the sigma field to be slightly out of equilibrium, and write instead: $$\sigma (𝐱,t)=v+\sigma _s(𝐱,t)+\sigma _f(𝐱,t).$$ (4) Here $`\langle \sigma \rangle =v+\sigma _s`$, where $`v`$ denotes the equilibrium value as before, but the deviation has been split into two pieces: a slow part $`\sigma _s`$, whose average is nonzero, and a fast part $`\sigma _f`$, whose average is zero. (Primes have been dropped for clarity of presentation.) The notation $`\langle \mathrm{}\rangle `$ denotes averaging over the space-time volume chosen for coarse-graining, and thus the precise division between the fast and slow parts of the field depends on the choice that is made. 
For example, one may include in the slow part only those Fourier components with wavenumber below some cutoff value , or one may average the field fluctuations over some coarse-graining time . We shall not delve into any details here, but only remark that if the results depend strongly on the coarse-graining technique then the procedure is not very useful in the given context. In any case, the slow part represents occupation of the low momentum modes by a large number of particles and so may be treated as a slowly varying classical field. The fast part is a fully quantum field. The equilibrium ensemble average of an operator $`O`$ is denoted by $`\langle O\rangle _{eq}`$ and is characterized by the temperature $`T`$ and the full Hamiltonian determined from the linear sigma model Lagrangian in the absence of $`\sigma _s`$. When this equilibrium ensemble is perturbed by the presence of the field $`\sigma _s`$ the Hamiltonian is modified and the resulting (nonequilibrium) ensemble average is denoted by $`\langle O\rangle `$. It should be noted at this point that we have not allowed for a nonzero ensemble average of the pion field. Such a nonzero average is usually referred to as a disoriented chiral condensate, or DCC. One could certainly allow for a DCC and carry through the following computations in a straightforward way. However in thermal equilibrium a DCC never develops (although it can arise from thermal fluctuations in a small system, but even then the probability is small). This is in contrast to the scalar condensate, whose value is zero above the critical temperature and becomes nonzero below it. (A very small pion mass, too small even to affect the equation of state or correlation functions at the temperatures of interest, will still tilt the system towards a unique vacuum.) Thus we choose to focus on the behavior of the $`\sigma `$ field. 
Let us average the sigma field equation over time and length scales large compared to the scales characterizing the quantum fluctuations of the fields $`\sigma _f`$ and $`𝝅`$ but short compared to the scales typifying $`\sigma _s`$. It is an assumption that such a separation exists, but in any given situation it can be verified or refuted a posteriori. Since $`\langle \sigma _f\rangle =0`$ we obtain $$\ddot{\sigma }_s-\nabla ^2\sigma _s=\lambda \left(v+\sigma _s\right)\left[f_\pi ^2-\left(v+\sigma _s\right)^2-3\langle \sigma _f^2\rangle -\langle 𝝅^2\rangle \right]-\lambda \langle \sigma _f^3\rangle .$$ (5) The full ensemble averages are $`\langle \sigma _f^2\rangle `$ $`=`$ $`\langle \sigma _f^2\rangle _{eq}+\delta \langle \sigma _f^2\rangle `$ (6) $`\langle 𝝅^2\rangle `$ $`=`$ $`\langle 𝝅^2\rangle _{eq}+\delta \langle 𝝅^2\rangle `$ (7) $`\langle \sigma _f^3\rangle `$ $`=`$ $`\langle \sigma _f^3\rangle _{eq}+\delta \langle \sigma _f^3\rangle ,`$ (8) where the deviations are caused by $`\sigma _s`$ and are generally proportional to $`\sigma _s`$ to some positive power. Equation (5) must be satisfied even when $`\sigma _s`$ vanishes. That determines the equilibrium value of $`v`$ to be $$v=0\quad \mathrm{if}\quad T>T_c$$ (9) or $$v^2=f_\pi ^2-3\langle \sigma _f^2\rangle _{eq}-\langle 𝝅^2\rangle _{eq}-\langle \sigma _f^3\rangle _{eq}/v\quad \mathrm{if}\quad T<T_c.$$ (10) We are interested in the second of these because it represents the low temperature symmetry-broken phase. To first approximation in either (i) a perturbative expansion in $`\lambda `$ or (ii) an expansion in $`1/N`$ , which are the usual approximations for the sigma model, the field fluctuations are $$\langle \sigma _f^2\rangle _{eq}=\int \frac{d^3p}{(2\pi )^3}\frac{1}{E_\sigma }n_B(E_\sigma /T),$$ (11) where $`E_\sigma =\sqrt{m_\sigma ^2+p^2}`$ and $`n_B(E/T)=1/\left(\mathrm{exp}(E/T)-1\right)`$ is the Bose-Einstein distribution. Note that the sigma mass is still to be determined. The pion fluctuations are $$\langle 𝝅^2\rangle _{eq}=(N-1)\frac{T^2}{12}.$$ (12) The latter follows from the fact that there are $`N-1`$ Goldstone bosons. Corrections to these formulae come from interactions not included in the effective mass. The sigma mass is very large at zero temperature and vanishes at $`T_c`$. 
The term $`\langle \sigma _f^3\rangle _{eq}/v`$ is not zero on account of the cubic self-coupling of the $`\sigma `$ with coupling coefficient $`\lambda v`$. It does have a finite limit as $`v\to 0`$. However, once the approximations (11–12) have been made it is not legitimate to keep this term because it is one higher power in $`\lambda `$ and/or $`1/N`$, and keeping it would violate the $`O(N)`$ symmetry. Therefore the limits are: $`T\to 0:`$ $`v^2=f_\pi ^2-(N-1)T^2/12`$ $`T\to T_c:`$ $`v^2=f_\pi ^2-(N+2)T^2/12`$ (13) where the formula for $`T_c`$ was given earlier. It is, of course, consistent with the above expression. Using Eqs. (6–8) and (10) in (5), the coarse-grained field equation for $`\sigma _s`$ can now be written as $$\ddot{\sigma }_s-\nabla ^2\sigma _s+2\lambda v^2\sigma _s=-\lambda \left[3v\sigma _s^2+\sigma _s^3+\delta \langle \sigma _f^3\rangle +(v+\sigma _s)(3\delta \langle \sigma _f^2\rangle +\delta \langle 𝝅^2\rangle )\right].$$ (14) Note that here the system is assumed to be at a fixed temperature so that $`v(T)`$ is a constant. If the temperature is allowed to vary then time derivatives of $`v`$ must be included; see the next section. Equation (14) shows that the sigma mass and the equilibrium value of the scalar condensate are related by $`m_\sigma ^2=2\lambda v^2`$. They are temperature dependent and determined self-consistently from the formula $$m_\sigma ^2=2\lambda \left[f_\pi ^2-(N-1)\frac{T^2}{12}-3\int \frac{d^3p}{(2\pi )^3}\frac{1}{E_\sigma }n_B(E_\sigma /T)\right].$$ (15) The limits $`T\to 0`$ and $`T\to T_c`$ are readily obtained from those of $`v`$. An interpolating formula which connects these two limits is: $$\frac{m_\sigma ^2}{2\lambda f_\pi ^2}=\frac{v^2}{f_\pi ^2}=\frac{1-T^2/T_c^2}{1-[3/(N+2)](T^2/T_c^2)(1-T^2/T_c^2)}.$$ (16) This is useful when studying solutions of the equation of motion numerically. The deviations in the fluctuations due to the presence of $`\sigma _s`$ give rise to a renormalization of the parameters in the equation of motion of $`\sigma _s`$ but, more importantly, they lead to dissipation. 
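The self-consistent mass equation (15) can be solved by a simple fixed-point iteration. The sketch below (the values of $`\lambda `$ and $`f_\pi `$, the momentum cutoff, and $`N=4`$ are illustrative assumptions, not parameters quoted in the paper) recovers the vacuum mass $`\sqrt{2\lambda }f_\pi `$ as $`T\to 0`$ and a mass that decreases toward $`T_c`$:

```python
import numpy as np

lam, fpi, N = 20.0, 0.0924, 4   # illustrative values, units of GeV

def bose_integral(m, T, npts=4000, pmax=3.0):
    """Thermal fluctuation integral int d^3p/(2 pi)^3 n_B(E/T)/E."""
    p = np.linspace(1e-6, pmax, npts)
    E = np.sqrt(m * m + p * p)
    x = np.minimum(E / T, 700.0)          # avoid overflow at low T
    f = p * p / E / np.expm1(x)
    # trapezoidal rule
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(p)) / (2 * np.pi**2)

def m_sigma(T, iters=400):
    """Fixed-point iteration of m^2 = 2 lam [f^2 - (N-1)T^2/12 - 3 I(m,T)]."""
    m = np.sqrt(2 * lam) * fpi            # start from the vacuum mass
    for _ in range(iters):
        m2 = 2 * lam * (fpi**2 - (N - 1) * T**2 / 12 - 3 * bose_integral(m, T))
        m = np.sqrt(max(m2, 0.0))
    return m

m0 = np.sqrt(2 * lam) * fpi               # vacuum value, about 0.58 GeV
print(m0, m_sigma(0.001), m_sigma(0.05), m_sigma(0.10))
```

The monotonic decrease with temperature is what the interpolating formula (16) is designed to mimic without the numerical integral.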
Energy can be transferred between the field $`\sigma _s`$ and the fields $`\sigma _f`$ and $`𝝅`$. These deviations in the fluctuations may be computed using linear response theory as long as $`\sigma _s`$ is small. Here small means in comparison to either $`v`$ (most relevant at low temperature) or to $`\sqrt{\langle \sigma _f^2\rangle }`$ (most relevant at high temperature). Technically, linear response theory is an expansion in powers of the Hamiltonian coupling the out-of-equilibrium field $`\sigma _s`$ to the other modes of the system, and guarantees that symmetries of the theory are respected. This piece of the Hamiltonian consists of positive powers of $`\sigma _s`$ and so becomes smaller and smaller with decreasing departures from equilibrium. The couplings between the slow mode and the fast modes are determined straightforwardly from the potential to be $$H_{s\pi }=\lambda \left(v\sigma _s+\frac{1}{2}\sigma _s^2\right)𝝅^2$$ (17) and $$H_{sf}=\lambda \left[\sigma _s^2+3v\sigma _s+3v^2-f_\pi ^2+\frac{3}{2}(\sigma _s+2v)\sigma _f+\sigma _f^2\right]\sigma _s\sigma _f.$$ (18) In order to apply linear response analysis we need some initial conditions. In a nuclear collision, or in the early universe for that matter, it is assumed that the system reaches a state of thermal equilibrium at some negative time, that the system expands and cools, and that at time $`t=0`$ the system is at the critical temperature. This implies the initial condition $`\sigma _s(𝐱,t=0)=0`$. (In a finite volume one may wish to consider an ensemble of initial values chosen from a canonical distribution .) 
From the standard theory of linear response one immediately deduces that $`\delta \langle \sigma _f^n(x)\rangle `$ $`=`$ $`i\lambda {\displaystyle \int _0^t}dt^{\prime }{\displaystyle \int d^3x^{\prime }\left(\left(3v^2-f_\pi ^2\right)\sigma _s(x^{\prime })+3v\sigma _s^2(x^{\prime })+\sigma _s^3(x^{\prime })\right)\langle [\sigma _f(x^{\prime }),\sigma _f^n(x)]\rangle _{eq}}`$ (19) $`+`$ $`3i\lambda {\displaystyle \int _0^t}dt^{\prime }{\displaystyle \int d^3x^{\prime }\left(v\sigma _s(x^{\prime })+\frac{1}{2}\sigma _s^2(x^{\prime })\right)\langle [\sigma _f^2(x^{\prime }),\sigma _f^n(x)]\rangle _{eq}}`$ $`+`$ $`i\lambda {\displaystyle \int _0^t}dt^{\prime }{\displaystyle \int d^3x^{\prime }\sigma _s(x^{\prime })\langle [\sigma _f^3(x^{\prime }),\sigma _f^n(x)]\rangle _{eq}},`$ and $$\delta \langle 𝝅^2(x)\rangle =i\lambda \int _0^tdt^{\prime }\int d^3x^{\prime }\left(v\sigma _s(x^{\prime })+\frac{1}{2}\sigma _s^2(x^{\prime })\right)\langle [𝝅^2(x^{\prime }),𝝅^2(x)]\rangle _{eq}.$$ (20) The response functions are just the commutators of powers of the field operators evaluated at two different space-time points in the unperturbed (equilibrium) ensemble. We should emphasize that the unperturbed ensemble does include all interactions among the fast modes and makes no approximation regarding the strength of these interactions. Insertion of these deviations into Eq. (14) represents the fundamental equation of this paper. For very small departures from equilibrium it is sufficient to keep only those terms which are linear in $`\sigma _s`$. The high temperature, symmetric phase is obtained by setting $`v=0`$. The time-delayed response of the fast modes to the slow one evident in Eqs. (19–20) has two effects: the sigma mass and self-interactions are modified, and dissipation occurs. These effects may be seen by expanding the slow field $`\sigma _s(𝐱^{\prime },t^{\prime })`$ in a Taylor series about the point $`(𝐱,t)`$ in Eqs. (19–20) (although such an expansion is not required). Terms with no derivative or an even number of derivatives either renormalize existing terms in the equation of motion or add new nondissipative ones, such as $`\sigma _s^2\nabla ^2\sigma _s`$ and $`\nabla ^2\nabla ^2\sigma _s`$. Terms with an odd number of derivatives are explicitly dissipative. 
Examples are $`\dot{\sigma }_s`$, $`\sigma _s\dot{\sigma }_s`$, and $`\nabla ^2\dot{\sigma }_s`$. Perhaps the closest analysis to ours is due to Rischke . The differences may be summarized thusly: First, Rischke used the influence functional method , which is closely related to the closed-time-path method , for deriving the equations of motion of the classical field. We use linear response theory. Since the former techniques ultimately rely on a perturbative expansion in terms of $`\sigma _s`$ anyway, one might as well employ linear response theory to begin with. Linear response theory is quicker, easier to use, and more intuitive. Second, we write down the field equation for $`\sigma _s`$, which is the deviation of the scalar condensate from its equilibrium value $`v`$. Rischke writes down the equation of motion for $`\overline{\sigma }=v+\sigma _s`$. Of course these approaches are equivalent. Third, Rischke’s response functions are valid for free fields only. For example, a response function from Eqs. (19–20) has the form $`\langle [\varphi ^2(𝐱^{\prime },t^{\prime }),\varphi ^2(𝐱,t)]\rangle _{eq}`$. For free fields this may be written $`2[D_>^2(x^{\prime }-x)-D_<^2(x^{\prime }-x)]`$, where the $`D`$ are Wightmann functions/propagators and the subscript indicates whether $`t^{\prime }`$ is greater than or less than $`t`$. Using our approach it is clear that these response functions should be evaluated in the fully interacting ensemble (but unperturbed by $`\sigma _s`$). Taken together with the second difference this constitutes a major improvement over Rischke’s analysis. This will be discussed in more detail in the next section. Fourth, Rischke employed a particular coarse-graining technique by separating soft and hard modes according to the magnitude of the momentum. We have left the coarse-graining technique open. Finally, with a view towards the formation of disoriented chiral condensates, or DCC, Rischke allowed for slow classical components of the pion field. 
We have not done so here, mainly to keep the analysis as simple and direct as possible, although it could easily be worked out in the same way as the sigma field. ## 3 Estimating the Response Functions One’s first inclination might be to evaluate the response functions using free fields. Suppose, for example, that $`\sigma _s`$ is so slowly varying in space and time that to first approximation it can be taken outside the integration over $`𝐲`$ and $`t^{}`$. Then it is a simple exercise to show that for a free field $`\varphi `$ with mass $`m`$ and energy $`E`$ one gets $$\int _0^tdt^{}\int d^3x^{}[\varphi ^2(𝐱^{},t^{}),\varphi ^2(𝐱,t)]_{eq}=i\int _0^tdt^{}\int \frac{d^3p}{(2\pi )^3}\frac{1}{E^2}\left(2n_B(E/T)+1\right)\mathrm{sin}[2E(t-t^{})].$$ (21) The temperature-independent piece is a vacuum contribution and may be dropped for the present discussion. In the case that $`m=0`$ the momentum integral is done trivially with the following result: $$\frac{i}{4\pi ^2}\int _0^tds\left[\frac{2\pi T}{\mathrm{tanh}(2\pi Ts)}-\frac{1}{s}\right].$$ In the limit that $`t`$ becomes large compared to $`1/2\pi T`$ this approaches the asymptotic value $`iTt/2\pi `$. This corresponds to the first term of a Taylor expansion of $`\sigma _s`$ and further terms in the series bring in powers of $`(t-t^{})`$ yielding dissipative coefficients that grow as $`t^2`$, $`t^3`$ and so on. This is clearly unacceptable. The origin of this problem can be traced to the inadequacy of evaluating the response functions in the free field limit. Indeed, these response functions are closely related to the shear and bulk viscosities via the Kubo formulas which express those quantities in terms of ensemble averages of commutators of the energy-momentum tensor density operator at two different space-time points. It is known that determination of the viscosities requires summation of all ladder diagrams at a minimum .
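As a quick numerical illustration (not part of the original derivation; an arbitrary unit choice with $`T=1`$), the $`m=0`$ integral above can be evaluated with the trapezoidal rule, confirming that it grows linearly in $`t`$ with slope $`T/2\pi `$ once $`t\gg 1/2\pi T`$:

```python
import math

def response_integral(t, T, n=100000):
    """Trapezoidal estimate of (1/(4 pi^2)) Int_0^t ds [2 pi T coth(2 pi T s) - 1/s].
    The integrand is finite at s -> 0, since coth(x) ~ 1/x + x/3 there."""
    def f(s):
        if s == 0.0:
            return 0.0  # limiting value of the integrand at s = 0
        return 2.0 * math.pi * T / math.tanh(2.0 * math.pi * T * s) - 1.0 / s
    h = t / n
    total = 0.5 * (f(0.0) + f(t)) + sum(f(i * h) for i in range(1, n))
    return total * h / (4.0 * math.pi ** 2)

T = 1.0
v25, v50 = response_integral(25.0, T), response_integral(50.0, T)
slope = (v50 - v25) / 25.0
print(slope, T / (2.0 * math.pi))   # linear growth with slope ~ T/(2 pi), as stated
```

The coefficient extracted this way grows without bound in $`t`$, which is exactly the problem identified in the text.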
An alternative is to make a harmonic approximation instead of a Taylor series expansion which, in this situation, can be formulated as: $$\sigma _s(t-s)\approx \sigma _s(t)\mathrm{cos}(m_\sigma s)-\dot{\sigma }_s(t)\mathrm{sin}(m_\sigma s)/m_\sigma .$$ For the applications we have in mind in this paper the deviation of the scalar condensate from its equilibrium value is not expected to oscillate significantly; rather, it is expected to decay exponentially to zero. Therefore the harmonic approximation does not solve the problem. Rather than attempt a sophisticated evaluation of the response functions in this paper we shall proceed to estimate them based on direct physical reasoning. There are two obvious mechanisms for adding or removing sigma mesons from the condensate: (1) decay into two pions or the reverse process of formation via two pion annihilation, and (2) a meson from the thermal bath elastically scattering off a sigma meson and knocking it out of the condensate. The first of these processes is included in all analyses of the linear sigma model; the second of these processes has not been studied in the literature to our knowledge. We shall evaluate them at the tree level. This means that the parameters in the Lagrangian are to be fitted to experimental data at the tree level for consistency. Let us examine each of these in turn. The decay rate of a sigma meson into two pions in the sigma’s rest frame is $$\mathrm{\Gamma }_{\sigma \pi \pi }=\frac{(N-1)}{8\pi }\frac{\lambda ^2v^2}{m_\sigma }=\frac{(N-1)}{16\pi }\lambda m_\sigma ,$$ (22) when account is taken of the relation $`m_\sigma ^2=2\lambda v^2`$. The decay rate for a sigma meson at rest in the finite temperature system is Bose-enhanced by a factor of $`[1+n_B(m_\sigma /2T)]^2`$, where $`n_B`$ is the Bose-Einstein occupation number with the indicated argument of its exponential.
The rate for two pions to form a sigma meson at rest is obtained from detailed balance by multiplying $`\mathrm{\Gamma }_{\sigma \pi \pi }`$ by $`n_B^2(m_\sigma /2T)`$. Therefore the net rate is $`\mathrm{\Gamma }_{\sigma \pi \pi }`$ $`=`$ $`{\displaystyle \frac{(N-1)}{16\pi }}\lambda m_\sigma \left[(1+n_B)^2-n_B^2\right]={\displaystyle \frac{(N-1)}{16\pi }}\lambda m_\sigma \left[1+2n_B\right]`$ (23) $`=`$ $`{\displaystyle \frac{(N-1)}{16\pi }}\lambda m_\sigma \mathrm{coth}(m_\sigma /4T).`$ (The argument of $`n_B`$ is the same as above.) This leads to the dissipative term $`\mathrm{\Gamma }_{\sigma \pi \pi }\dot{\sigma }_s`$ in the equation of motion . It agrees with eq. (80) of Rischke . Scattering of a thermal boson $`b`$ off a sigma meson with negligibly small momentum and which is considered to be a component of the background field $`\sigma _s`$ can be studied by evaluating the self-energy . $`\mathrm{\Pi }_{\sigma b}`$ $`=`$ $`4\pi {\displaystyle \int \frac{d^3p}{(2\pi )^3}n_B(E/T)\frac{\sqrt{s}}{E}f_{\sigma b}^{(\mathrm{cm})}(s)}`$ (24) $`=`$ $`{\displaystyle \frac{2}{\pi }}{\displaystyle \int _{m_b}^{\mathrm{\infty }}}dEpn_B(E/T)\sqrt{s}f_{\sigma b}^{(\mathrm{cm})}(s).`$ Here $`E`$ and $`p`$ are the energy and momentum of the boson $`b`$, $`s=m_\sigma ^2+m_b^2+2m_\sigma E`$, and $`f_{\sigma b}`$ is the forward scattering amplitude. The normalization of the amplitude corresponds to the standard form of the optical theorem $$\sigma =\frac{4\pi }{q_{\mathrm{cm}}}\mathrm{Im}f^{(\mathrm{cm})}(s),$$ (25) where $`q_{\mathrm{cm}}`$ is the momentum in the cm frame.
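The detailed-balance algebra behind Eq. (23) is easy to verify numerically: with $`n_B`$ evaluated at $`m_\sigma /2T`$ one has $`(1+n_B)^2-n_B^2=1+2n_B=\mathrm{coth}(m_\sigma /4T)`$. A minimal sketch (all parameter values are illustrative; $`N=4`$ corresponds to the $`O(4)`$ model):

```python
import math

def n_B(x):
    """Bose-Einstein occupation number, with the argument x = (energy)/T already formed."""
    return 1.0 / (math.exp(x) - 1.0)

def gamma_sigma_pipi(lam, m_sigma, T, N=4):
    """Net sigma <-> pi pi rate of Eq. (23): (N-1)/(16 pi) * lam * m * coth(m/4T)."""
    return (N - 1) / (16.0 * math.pi) * lam * m_sigma / math.tanh(m_sigma / (4.0 * T))

# detailed balance behind Eq. (23): (1+n)^2 - n^2 = 1 + 2n = coth(m/4T), n = n_B(m/2T)
m, T = 5.0, 2.0
n = n_B(m / (2.0 * T))
assert abs((1 + n) ** 2 - n ** 2 - (1 + 2 * n)) < 1e-12
assert abs(1 + 2 * n - 1.0 / math.tanh(m / (4.0 * T))) < 1e-12
print(gamma_sigma_pipi(18.0, 6.0, 1.0))   # net rate for lam = 18, m_sigma = 6, T = 1
```

The rate grows monotonically with temperature through the factor $`\mathrm{coth}(m_\sigma /4T)`$, reflecting the Bose enhancement of both decay and formation.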
Because the cross section is invariant under longitudinal boosts the scattering amplitude transforms as follows: $$m_\sigma f_{\sigma b}^{(\sigma \mathrm{rest}\mathrm{frame})}=\sqrt{s}f_{\sigma b}^{(\mathrm{cm})}.$$ (26) The imaginary part of the self-energy $$\mathrm{Im}\mathrm{\Pi }_{\sigma b}=\frac{m_\sigma }{2\pi ^2}\int _{m_b}^{\mathrm{\infty }}dEp^2n_B(E/T)\sigma _{\sigma b}(s)$$ (27) determines the rate at which the field decays : $$\mathrm{\Gamma }_{\sigma b}=\mathrm{Im}\mathrm{\Pi }_{\sigma b}/m_\sigma .$$ (28) The applicability of this expression is limited to those cases where interference between sequential scatterings is negligible. First consider pion scattering. We will calculate to tree level only and suppose that the parameters of the theory are adjusted to reproduce low energy experimental data at this same level. The invariant amplitude is $$\mathcal{M}=-2\lambda \left[1+m_\sigma ^2\left(\frac{1}{s}+\frac{1}{u}+\frac{3}{t-m_\sigma ^2}\right)\right].$$ (29) The $`s`$, $`t`$ and $`u`$ are standard Mandelstam variables satisfying $`s+t+u=2m_\sigma ^2`$. Note that the forward scattering amplitude evaluated at threshold, $`(s=m_\sigma ^2,t=0)`$, vanishes in accordance with Adler’s Theorem . The differential cross section in the center-of-momentum frame is $$\frac{d\sigma }{d\mathrm{\Omega }_{\mathrm{cm}}}=\frac{|\mathcal{M}|^2}{64\pi ^2s}.$$ (30) The total cross section is given by $`{\displaystyle \frac{4\pi s}{\lambda ^2}}\sigma `$ $`=`$ $`{\displaystyle \frac{(s^{}+1)^2}{s^2}}+{\displaystyle \frac{s^{}}{2-s^{}}}+{\displaystyle \frac{9s^{}}{s^2-s^{}+1}}`$ (31) $`-`$ $`{\displaystyle \frac{2(s^2-3s^{}-1)}{(s^{}-1)^3}}\mathrm{ln}\left[s^{}(2-s^{})\right]-{\displaystyle \frac{6(s^2-s^{}-1)}{(s^{}-1)^3}}\mathrm{ln}\left[{\displaystyle \frac{s^2-s^{}+1}{s^{}}}\right],`$ where $`s`$ has been scaled by the sigma mass: $`s^{}=s/m_\sigma ^2`$. The total cross section has a branch point and a pole at $`s=2m_\sigma ^2`$ due to the $`u`$ channel exchange of a sigma meson.
Going beyond the tree level is necessary to incorporate the finite width of the sigma meson and regulate the singularity. Indeed, the tree approximation is not reliable at high energy. For example, it is well known that unitarity is violated in $`\pi \pi `$ scattering at energies of order of one to two times $`m_\sigma `$ . Therefore we are only allowed to use the low energy limit, which is quite acceptable when $`T\ll m_\sigma `$. In this limit $$\sigma =\frac{112}{3\pi }\lambda ^2\frac{p^4}{m_\sigma ^6},$$ (32) where $`p`$ is the pion momentum in the sigma rest frame. Notice the workings of Adler’s Theorem here: According to that theorem the forward scattering amplitude $`f`$ evaluated in the rest frame of any target particle must vanish like $`p^2`$ as $`p\rightarrow 0`$. Notice also that the total cross section cannot be obtained from the imaginary part of $`f`$ because we are essentially using a Born approximation. In the limit that $`T\rightarrow T_c`$ we have the opposite situation where $`m_\sigma \ll T`$. Then the three-point vertices do not contribute ($`\mathcal{M}=-2\lambda `$ in place of (29)) and so use of $$\sigma =\frac{\lambda ^2}{4\pi s}$$ (33) may be considered more appropriate. The contribution to the imaginary part of the self-energy is readily determined in these two limits. At low energy/temperature $$\mathrm{Im}\mathrm{\Pi }_{\sigma \pi }=\frac{13,440}{\pi ^3}\zeta (7)\lambda ^2\frac{T^7}{m_\sigma ^5},$$ (34) and at high energy/temperature $$\mathrm{Im}\mathrm{\Pi }_{\sigma \pi }=\frac{\lambda ^2T^2}{96\pi }.$$ (35) These are the contributions from a single pion and must be multiplied by $`N-1`$ to factor in all pions. For the accuracy required in this paper we can construct a Padé approximant to represent these results. $$\mathrm{Im}\mathrm{\Pi }_{\sigma \pi }\approx (N-1)\frac{\lambda ^2T^2}{96\pi }\frac{T^5}{[T^5+(m_\sigma /10.568)^5]}.$$ (36) Notice that this contribution to $`\mathrm{\Gamma }`$ diverges as $`T\rightarrow T_c`$ on account of division by $`m_\sigma `$.
This ought to come as no surprise since one is approaching a critical point where certain fluctuations diverge. The above analysis may be repeated for a sigma meson knocking another out of the condensate. The invariant amplitude for $`\sigma \sigma \rightarrow \sigma \sigma `$ is $$\mathcal{M}=-6\lambda \left[1+3m_\sigma ^2\left(\frac{1}{s-m_\sigma ^2}+\frac{1}{u-m_\sigma ^2}+\frac{1}{t-m_\sigma ^2}\right)\right],$$ (37) reflecting the symmetry in the $`s,t,u`$ channels. The total cross section is $$\frac{8\pi s}{9\lambda ^2}\sigma =\left(\frac{s^{}+2}{s^{}-1}\right)^2+\frac{18}{s^{}-3}-\frac{12(s^2-3s^{}-1)}{(s^{}-4)(s^{}-2)(s^{}-1)}\mathrm{ln}(s^{}-3).$$ (38) This expression has no singularities because $`s^{}\geq 4`$. The low energy limit is $$\sigma =\frac{9}{2\pi }\frac{\lambda ^2}{m_\sigma ^2},$$ (39) which gives rise to the low temperature limit $$\mathrm{Im}\mathrm{\Pi }_{\sigma \sigma }=\frac{9}{2\pi ^3}\lambda ^2T^2\mathrm{e}^{-m_\sigma /T}.$$ (40) In the high energy limit only the four-point vertex contributes, giving $$\sigma =\frac{9}{8\pi }\frac{\lambda ^2}{s},$$ (41) which gives rise to the high temperature limit $$\mathrm{Im}\mathrm{\Pi }_{\sigma \sigma }=\frac{3}{64\pi }\lambda ^2T^2.$$ (42) This expression is identical to that calculated by Jeon and Rischke in a symmetric $`\varphi ^4`$ model to which it can be compared. The two limits can be combined in a Padé approximant as $$\mathrm{Im}\mathrm{\Pi }_{\sigma \sigma }\approx \frac{9}{2\pi }\frac{\lambda ^2T^2}{[96+\pi ^2\left(\mathrm{e}^{m_\sigma /T}-1\right)]},$$ (43) which is useful for numerical computations.
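These limiting forms can be cross-checked numerically: the constant 10.568 in Eq. (36) must satisfy $`c^5/96\pi =13440\zeta (7)/\pi ^3`$; Eq. (38) reduces to Eq. (39) at threshold (the dimensionless combination $`8\pi s\sigma /9\lambda ^2`$ tends to 16 at $`s^{}=4`$, the apparent pole there being cancelled by $`\mathrm{ln}(s^{}-3)\rightarrow 0`$) and to Eq. (41) at high energy; and Eq. (43) interpolates Eqs. (40) and (42). A sketch of these checks (our transcription of the formulas; all parameter values illustrative):

```python
import math

zeta7 = sum(k ** -7 for k in range(1, 200))   # Riemann zeta(7) ~ 1.00835

# the constant in Eq. (36) must satisfy c^5/(96 pi) = 13440 zeta(7)/pi^3
c = (13440.0 * zeta7 / math.pi ** 3 * 96.0 * math.pi) ** 0.2
print(c)   # ~ 10.568, as quoted in Eq. (36)

def im_pi_sigma_pi(lam, m, T, N=4):
    """Pade interpolation of Eqs. (34) and (35), i.e. Eq. (36)."""
    return (N - 1) * lam ** 2 * T ** 2 / (96.0 * math.pi) * T ** 5 / (T ** 5 + (m / c) ** 5)

def G(sp):
    """Dimensionless combination 8*pi*s*sigma/(9*lam^2) of Eq. (38); sp = s' >= 4."""
    return ((sp + 2) / (sp - 1)) ** 2 + 18.0 / (sp - 3) \
        - 12.0 * (sp ** 2 - 3 * sp - 1) / ((sp - 4) * (sp - 2) * (sp - 1)) * math.log(sp - 3)

def im_pi_sigma_sigma(lam, T, m):
    """Pade interpolation of Eqs. (40) and (42), i.e. Eq. (43)."""
    return 9.0 / (2.0 * math.pi) * lam ** 2 * T ** 2 / (96.0 + math.pi ** 2 * math.expm1(m / T))

lam, m, N = 18.0, 6.0, 4          # illustrative values (lam = 18 is the choice made below)
# Eq. (36) reproduces Eq. (34) for T << m and Eq. (35) for T >> m:
T = 0.1
print(im_pi_sigma_pi(lam, m, T), (N - 1) * 13440.0 / math.pi ** 3 * zeta7 * lam ** 2 * T ** 7 / m ** 5)
T = 600.0
print(im_pi_sigma_pi(lam, m, T), (N - 1) * lam ** 2 * T ** 2 / (96.0 * math.pi))
# Eq. (38) reduces to Eq. (39) at threshold and to Eq. (41) at high energy:
print(G(4.0 + 1e-7), G(1.0e8))    # ~ 16 and ~ 1
# Eq. (43) reproduces Eq. (42) for T >> m and Eq. (40) for T << m:
T = 6000.0
print(im_pi_sigma_sigma(lam, T, m), 3.0 * lam ** 2 * T ** 2 / (64.0 * math.pi))
T = 0.3
print(im_pi_sigma_sigma(lam, T, m), 9.0 / (2.0 * math.pi ** 3) * lam ** 2 * T ** 2 * math.exp(-m / T))
```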
The total rate is obtained by addition of all components, namely: $$\mathrm{\Gamma }=\mathrm{\Gamma }_{\sigma \pi \pi }+\mathrm{\Gamma }_{\sigma \pi }+\mathrm{\Gamma }_{\sigma \sigma },$$ (44) where $$\mathrm{\Gamma }_{\sigma \pi }=\mathrm{Im}\mathrm{\Pi }_{\sigma \pi }/m_\sigma ,$$ (45) and $$\mathrm{\Gamma }_{\sigma \sigma }=\mathrm{Im}\mathrm{\Pi }_{\sigma \sigma }/m_\sigma .$$ (46) Other works, such as , do not include the latter two scattering contributions, arguing that they are of order $`\lambda ^2`$, while $`\mathrm{\Gamma }_{\sigma \pi \pi }`$ is of order $`\lambda `$. While this is correct at low $`T`$ when $`m_\sigma `$ is large, it becomes questionable near the critical temperature where $`m_\sigma `$ is small (see the discussion of Fig. 1 below). It should be noted that the scattering contributions diverge at the critical temperature on account of division by the vanishing sigma mass. This may be a signal of the breakdown of the use of tree-level scattering amplitudes and requires further investigation. ## 4 Solution in an Expanding Fireball In this section we shall study solutions to the coarse-grained field equation in several limits. First, suppose that the volume and temperature are fixed in time but that the system is slightly out of equilibrium in the sense that $`\sigma _s\ne 0`$. The field equation is then $$\ddot{\sigma }_s+\mathrm{\Gamma }(T)\dot{\sigma }_s+m_\sigma ^2(T)\sigma _s=-\lambda \left(3v(T)\sigma _s^2+\sigma _s^3\right),$$ (47) where $`\mathrm{\Gamma }`$ is the sum of decay and scattering terms as given in the previous section. When $`|\sigma _s|\ll v`$ this equation can be linearized. It is equivalent to a simple damped harmonic oscillator. The system is underdamped if $`m_\sigma >\mathrm{\Gamma }/2`$ and overdamped if $`m_\sigma <\mathrm{\Gamma }/2`$. We choose $`\lambda =18`$ so that the sigma mass in vacuum is $`6f_\pi `$, corresponding to the s-wave resonance observed in the $`\pi \pi `$ channel.
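The damping criterion is easy to see by integrating the linearized form of Eq. (47), $`\ddot{\sigma }_s+\mathrm{\Gamma }\dot{\sigma }_s+m_\sigma ^2\sigma _s=0`$, at fixed temperature. The sketch below (with illustrative values of $`m_\sigma `$ and $`\mathrm{\Gamma }`$, not the fitted ones) shows that the solution oscillates through zero only when $`m_\sigma >\mathrm{\Gamma }/2`$:

```python
def evolve(m, gamma, sigma0=1.0, dt=1e-3, t_max=20.0):
    """RK4 integration of sigma'' + gamma*sigma' + m^2*sigma = 0 (linearized Eq. 47)."""
    def deriv(s, v):
        return v, -gamma * v - m * m * s
    s, v = sigma0, 0.0
    out = [(0.0, s)]
    t = 0.0
    while t < t_max:
        k1s, k1v = deriv(s, v)
        k2s, k2v = deriv(s + 0.5 * dt * k1s, v + 0.5 * dt * k1v)
        k3s, k3v = deriv(s + 0.5 * dt * k2s, v + 0.5 * dt * k2v)
        k4s, k4v = deriv(s + dt * k3s, v + dt * k3v)
        s += dt * (k1s + 2 * k2s + 2 * k3s + k4s) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        t += dt
        out.append((t, s))
    return out

under = evolve(m=2.0, gamma=1.0)   # m > Gamma/2: damped oscillation (sign changes)
over = evolve(m=0.2, gamma=1.0)    # m < Gamma/2: monotonic decay, no sign change
print(any(s < 0 for _, s in under), any(s < 0 for _, s in over))  # True False
```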
In figure 1 we plot $`m_\sigma `$ and the individual contributions to $`\mathrm{\Gamma }`$ as functions of temperature $`T`$. The field is overdamped when $`T>0.8T_c`$. Next consider the expansion of the system created in a high energy nuclear collision. If the beam energy is high enough it will form a quark-gluon plasma with temperature greater than $`T_c`$. This “fireball” will expand and cool, eventually reaching $`T_c`$. At this moment, say at time $`t=t_c`$, the initial conditions for the coarse-grained field must be specified. We will assume, for sake of illustration, that the system is locally uniform so that spatial gradients are unimportant. Depending on whether the system is expanding spherically or only longitudinally along the beam axis ($`D`$ = 3 or 1, respectively) we obtain the modified equation of motion $$\ddot{\sigma }_s+\ddot{v}+\frac{D}{t}(\dot{\sigma }_s+\dot{v})+\mathrm{\Gamma }(T)\dot{\sigma }_s+m_\sigma ^2(T)\sigma _s=-\lambda \left(3v(T)\sigma _s^2+\sigma _s^3\right).$$ (48) In this situation $`t`$ is really the local, or proper, time and the term proportional to $`D/t`$ may be thought to arise from either the d’Alembertian or from a volume dilution term $`[\dot{V}(t)/V(t)]\dot{\sigma }_s`$ analogous to the Hubble expansion . Perhaps the easiest way to obtain this equation is to derive the equation of motion for the total condensate $`\overline{\sigma }=v+\sigma _s`$ and then make the substitution. Most authors actually solve the equation of motion for $`\overline{\sigma }`$, but this is a matter of taste. Note that $`v(T(t))`$ is the instantaneous value of the equilibrium condensate and that is why no potential for it appears above. In addition to the coarse-grained field equation one needs to know how the local temperature evolves with time. It must be assumed that it changes slowly enough so that local equilibrium can be maintained by the rapidly fluctuating pion and sigma fields.
In principle one ought to solve the equation $$de/dt=-Dw/t$$ (49) where $`e`$ is the energy density, $`w=e+P`$ is the enthalpy and $`P`$ is the pressure. Rather than working out a detailed description of the equation of state, which is really dominated by the rapidly fluctuating thermal fields, we simply assume a free massless gas of pions where the pressure is proportional to $`T^4`$. The sigma meson has a mass small compared to $`T`$ only very near the critical temperature, and its effect is therefore generally unimportant. As a consequence, the temperature falls with time according to the law $$T(t)=T_c\left(\frac{t_c}{t}\right)^{D/3}.$$ (50) A reasonable numerical value for $`t_c`$ is 3 fm/c . The back reaction of $`\sigma _s`$ on the time evolution of the temperature is thereby neglected. This is a quite reasonable approximation because very little of the total energy resides in the field $`\sigma _s`$. The equation of motion may be solved by numerically evolving an analytic solution in the neighborhood of $`t_c`$. As $`t\rightarrow t_c`$ the equilibrium condensate $`v\rightarrow 0`$; however, Eqs. (16) and (50) indicate that $`\ddot{v}`$ and $`\dot{v}`$ are singular, as are the scattering contributions to the width $`\mathrm{\Gamma }`$. The analytic behavior of $`\sigma _s`$ as $`t\rightarrow t_c`$ is uniquely determined by requiring that the derivatives of $`\sigma _s`$ exactly cancel these singularities, while $`\sigma _s\rightarrow 0`$. The result is displayed in figure 2 for $`D=1`$ and in figure 3 for $`D=3`$. The deviation from equilibrium $`\sigma _s`$ is maximal less than 1/2 fm/c after the critical temperature is passed, and it dies away with a time scale of about 2 fm/c. There is a hint of oscillatory motion in the solutions, but basically they are overdamped. ## 5 Conclusion In this paper we have studied the dynamical evolution of the scalar condensate in the $`O(N)`$ linear sigma model in out-of-equilibrium situations. Our method is based on the equation of motion.
Dissipation arises because of the response of the correlation functions of the fast modes to the slow modes of the fields. This is treated with standard linear response theory. These response functions should be computed exactly and used in the resulting dissipative, coarse-grained equation of motion. However, such explicit computations are generally not possible to do. Therefore, we identified the physical mechanisms responsible for the dissipation and estimated the corresponding response functions based on them. These mechanisms include the decay of sigma mesons in the condensate, and the knockout of sigma mesons in the condensate due to scattering with thermal sigma mesons and pions. To our knowledge, the latter physical mechanisms have not been studied before. We then studied the dynamical evolution of the condensate in heavy ion collisions, after the phase transition from quark-gluon plasma to hadrons, and allowing for either one or three dimensional expansion of the hot matter. These showed that thermal equilibrium was reestablished rather rapidly, with a time constant of order 2 fm/c. Clearly, much more could be studied along these same lines, including the formation and fate of disoriented chiral condensates (DCC). The method we used in this paper is very general, and may be applied to other theories, including nuclear matter, QCD, and electroweak theory. Such work is underway. ## Acknowledgements J.K. thanks the Institute of Technology at the University of Minnesota for granting a single quarter leave in the spring of 1999 and the Theory Division at CERN for hospitality and support during that time. S.J. was supported by the Director, Office of Energy Research, Office of High Energy and Nuclear Physics, Division of Nuclear Physics, and by the Office of Basic Energy Sciences, Division of Nuclear Sciences, of the U.S. Department of Energy under Contract No. DE-AC03-76SF00098. 
This work was also supported by the Department of Energy under grant DE-FG02-87ER40328, by the NSF under travel grant INT-9602108 and by the Norwegian Research Council. ## Figures
# Novel rapidity dependence of directed flow in high energy heavy ion collisions ## Abstract For high energy nucleus-nucleus collisions, we show that a combination of space-momentum correlations characteristic of radial expansion together with the correlation between the position of a nucleon in the nucleus and its stopping, results in a very specific rapidity dependence of directed flow: a reversal of sign in the mid-rapidity region. We support our argument by RQMD model calculations for Au+Au collisions at $`\sqrt{s}`$ = 200 $`A`$GeV. The study of anisotropic flow in high energy nuclear collisions has attracted increasing attention from both experimentalists and theorists . The rich physics of directed and elliptic flow is due to their sensitivity to the system evolution at early time. Anisotropic flow in general is also sensitive to the equation of state which governs the evolution of the system created in the nuclear collision. The collective expansion of the system created during a heavy-ion collision implies space-momentum correlation in particle distributions at freeze-out. Simplified, this means that particles created on the left side of the system move in the left direction and particles created on the right side move in the right direction (on average). We will show that the rapidity dependence of directed flow of nucleons and pions can address this space-momentum correlation experimentally. A sketch of a mid-central symmetric heavy-ion collision is shown in Fig. 1, from before the collision (a,b) to the resulting distributions of $`x`$ and $`p_x`$ shown in (d). In Fig. 1a the projectile and target are shown before the collision in coordinate space. In Fig. 1b the overlap region is magnified and the “spectators” are not shown. It shows in a schematic way the number of nucleons and their position in the x–z plane (where $`\widehat{𝐱}`$ is the impact parameter direction).
Projectile nucleons (light color) at negative $`x`$ suffer more rapidity loss than those at positive $`x`$, ending up closer to mid-rapidity. Assuming a positive space-momentum correlation (as indicated in Fig. 1c), these nucleons have negative $`p_x`$, while those at positive $`x`$ have positive $`p_x`$. This results in a wiggle structure in the rapidity dependencies of $`x`$ and $`p_x`$ which is shown schematically in Fig. 1d. The shape of the wiggle, both the magnitude of $`p_x`$ and the rapidity range, depends on the strength of the space-momentum correlation, the initial beam-target rapidity gap and the amount of stopping. Therefore, the dependence of the wiggle on the collision centrality, system size and center of mass energy may reveal important information on the relation between radial flow and baryon stopping. In addition, it has been shown that the magnitude of $`p_x`$ depends on the nuclear matter equation of state . The above picture changes for collisions at lower energies, where there is no clear rapidity separation between projectile and target nucleons at freeze-out, because nucleons cross over mid-rapidity. Moreover, when the time for the nuclei to pass each other becomes long relative to the characteristic time scale for particle production, the interactions between particles and spectators (so-called shadowing) become important. This has been pointed out by many people and recently in . This is consistent with the results of heavy ion collisions, in the 2 to 158 $`A`$GeV energy range, where the experimentally observed slope around mid-rapidity in directed flow shows a trend from a positive value at 2 $`A`$GeV to almost zero at 158 $`A`$GeV . Note that the change of sign of directed flow at mid-rapidity has been discussed for Ca+Ca collisions at 350 $`A`$MeV and for Au+Au collisions at 11 $`A`$GeV . However, the physical origins on which these predictions are based are different from what we discuss in this Letter.
In 350 $`A`$MeV Ca+Ca collisions the wiggle originates from a combination of repulsive nucleon-nucleon collisions and an attractive mean field. The repulsive nucleon-nucleon collisions dominate at the mid-rapidity region and lead to a positive slope for $`p_x`$ versus rapidity. The attractive mean field dominates at beam and target rapidities and leads to a negative slope. In 11 $`A`$GeV Au+Au collisions the wiggle is caused by the longitudinal hydrodynamic expansion of a tilted source . It has been noticed that in hydro calculations the wiggle only appears if a QGP equation of state is used. The QGP equation of state is a prerequisite to reach the stopping needed to create this tilted source. The predicted wiggle in this Letter does not assume a QGP equation of state. The other main difference is that in our prediction we specifically assume incomplete stopping. The arguments used in this Letter which lead to a change of sign of directed flow at mid-rapidity are valid on general grounds. However, to test the picture quantitatively we study Au+Au collisions at $`\sqrt{s}`$ = 200 $`A`$GeV in an impact parameter range $`b`$=5–10 fm, using the Relativistic Quantum Molecular Dynamics (RQMD v2.4) model in cascade mode . To characterize directed flow, we use the first Fourier coefficient , $`v_1`$, of the particle azimuthal distribution. At a given rapidity and transverse momentum the coefficient is determined by $`v_1=\langle \mathrm{cos}\varphi \rangle `$, where $`\varphi `$ is the azimuthal angle of a particle relative to the reaction plane angle ($`\widehat{𝐱}`$ direction in Fig. 1). Similarly a Fourier coefficient can be determined in coordinate space, $`s_1=\langle \mathrm{cos}\varphi _s\rangle `$ , where $`\varphi _s`$ is the azimuthal angle of a particle, determined from the freeze-out coordinates $`x`$ and $`y`$, relative to the reaction plane angle. Figure 2 shows the RQMD calculations of $`v_1`$ and $`s_1`$ for nucleons and pions in Au+Au collisions at RHIC energy.
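Operationally, $`v_1`$ is the event-averaged cosine of the azimuthal angle with respect to the reaction plane. The toy sketch below (not an RQMD calculation; it assumes a perfectly known reaction plane and an illustrative input value of $`v_1`$) samples angles from $`dN/d\varphi \propto 1+2v_1\mathrm{cos}\varphi `$ and recovers the input coefficient:

```python
import math
import random

def sample_phi(v1, n, rng):
    """Rejection-sample phi in [-pi, pi) from dN/dphi ~ 1 + 2*v1*cos(phi), |v1| < 1/2."""
    out = []
    fmax = 1.0 + 2.0 * abs(v1)
    while len(out) < n:
        phi = rng.uniform(-math.pi, math.pi)
        if rng.uniform(0.0, fmax) < 1.0 + 2.0 * v1 * math.cos(phi):
            out.append(phi)
    return out

rng = random.Random(42)
v1_true = 0.08                       # illustrative magnitude only
phis = sample_phi(v1_true, 200000, rng)
v1_est = sum(math.cos(p) for p in phis) / len(phis)
print(v1_true, round(v1_est, 3))     # the estimate reproduces the input within statistics
```

In a real analysis the reaction plane itself must be estimated event by event, which dilutes the observed coefficient by the event-plane resolution.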
Indeed, the shape at mid-rapidity for nucleons is consistent with the picture described above: both $`v_1`$ and $`s_1`$ show a negative slope at mid-rapidity. For larger rapidities the $`s_1`$ values leave the ordinate scale. Pions are produced particles and their space-rapidity correlation is different from that of nucleons shown in Fig. 1d. Due to the asymmetry in the numbers of colliding target and projectile nucleons, the pions produced at positive $`x`$ shift toward positive rapidity. The pions produced at negative $`x`$ shift toward negative rapidity. This results in a positive space-rapidity correlation without a wiggle, see Fig. 2b (open circles). Due to the space-momentum correlation, the momentum distribution tends to follow the space distribution. This leads to the positive slope of $`v_1`$ at mid-rapidity for the pions, see Fig. 2b (filled circles). However, shadowing by nucleons is also important in the formation of pion directed flow. The contribution is relatively small in the mid-rapidity region, where in high energy nucleus-nucleus collisions the nucleon to pion ratio is small. At beam/target rapidity the shadowing becomes dominant; this explains why $`s_1`$ has the opposite sign from $`v_1`$ for pions close to beam/target rapidity. In this Letter we have shown that the combination of space-momentum correlations characteristic of radial expansion together with the correlation between the position of a nucleon in the nucleus and its stopping, results in a wiggle in the rapidity dependence of directed flow in high energy nucleus-nucleus collisions. Moreover, the amount of stopping and the space-momentum correlation depend on the equation of state and this affects the strength of the wiggle around mid-rapidity . Finally, because the wiggle appears at mid-rapidity, it is accessible by the current SPS experiment NA49 and the near future RHIC experiments.
The study of its dependence on collision centrality, system size and the center of mass energy may reveal important information on the relation between collective radial flow and baryon stopping. We are grateful to G.E. Cooper, Y. Pang, S. Panitkin, A.M. Poskanzer, G. Rai, H.G. Ritter and H. Ströbele for useful discussions. This work was supported by the Director, Office of Energy Research, Office of High Energy and Nuclear Physics, Division of Nuclear Physics of the U.S. Department of Energy under Contract DE-AC03-76SF00098.
## 1 Introduction The SUSY standard model (SM) has a number of naturality problems. One of the most pressing ones is the problem of proton stability. Indeed, the most general superpotential consistent with $`SU(3)\times SU(2)\times U(1)`$ gauge invariance and supersymmetry has dimension-three and -four operators which violate lepton and/or baryon number. In particular it has the general form: $`W`$ $`=`$ $`h_uQ_Lu_L^c\overline{H}+h_dQ_Ld_L^cH+h_lL_Le_L^cH`$ (1.1) $`+`$ $`h_Bu_L^cd_L^cd_L^c+h_LQ_Ld_L^cL+h_L^{}L_LL_Le_L^c`$ $`+`$ $`\mu _LL\overline{H}+\mu H\overline{H}`$ in a self-explanatory notation. The first line contains the usual Yukawa couplings which are needed for the standard quark and lepton masses whereas the second line shows B or L-violating couplings and the third shows the $`\mu `$-terms. One needs to invoke a symmetry of some sort to forbid at least a subset of the dangerous couplings in order to obtain consistency with proton stability. Another problematic point of the SUSY SM is the smallness of the $`\mu `$-parameter, the SUSY mass of the Higgs multiplets. In principle that mass parameter would be expected to be as high as the cut-off of the theory, unless there is some symmetry reason which protects the Higgs mass from becoming large. In field theories we are free to impose either a discrete symmetry such as $`R`$-parity or global continuous symmetries to forbid the dangerous couplings. In string theory $`R`$-parity does not in general appear as a natural symmetry and furthermore, global symmetries are believed not to be present. In fact, in perturbative string theory it can be shown that there are no global symmetries (besides Peccei-Quinn symmetries of axion fields or accidental symmetries of the low-energy effective action).
In nonperturbative string theory, even though there is no general proof, it is also believed that global symmetries are absent, the reason being that any theory that includes gravity will not preserve global symmetries since they are broken naturally by black holes and other similar objects. Therefore, perhaps the simplest possibility for solving both problems in string theory would be to gauge some continuous $`U(1)`$ which forbids the dangerous couplings . This is in general problematic because, if we stick to the particle content of the MSSM such symmetries are bound to be anomalous. One might think of using the Green-Schwarz mechanism found in perturbative heterotic vacua in order to cancel those anomalies and indeed this possibility has been explored in the past. However there are two main obstructions: 1) The mixed anomalies of the $`U(1)`$’s with the SM interactions are not in the appropriate ratios to be cancelled<sup>1</sup> (This can be avoided by going to generation-dependent $`U(1)`$ symmetries as in ref. ; we will concentrate in this article on flavour-independent $`U(1)`$ symmetries.) 2) Due to the presence of a Fayet-Iliopoulos (FI) term, the $`U(1)`$ symmetry is in general broken slightly below the string scale and does not survive as an exact global symmetry. Thus in general one has to rely on the particularities of the model and holomorphicity in order to sufficiently suppress parameters like the $`\mu `$-term . In the present letter we point out that these two problems are not present in the alternative generalized Green-Schwarz anomaly cancellation mechanism recently found in Type I and Type IIB $`D=4`$, $`N=1`$ string vacua. Indeed in this novel mechanism the mixed anomalies of a $`U(1)`$ with the different group factors can be different. In addition, unlike what happens in the perturbative heterotic case, the FI term may be put to zero. In that case, as first pointed out in ref.
, the $`U(1)`$ survives as an effective global symmetry which is exact in perturbation theory, evading in this way the general argument against global symmetries in string theory. Both these aspects are welcome in order to suppress the dangerous couplings with a gauged Abelian symmetry. We will discuss two general classes of flavour-independent anomalous $`U(1)`$’s. The first class allows all Yukawa interactions and it is discussed in section 3 whereas another class of anomalous $`U(1)`$’s with anomalies proportional to the beta function coefficients of the corresponding gauge groups is discussed in section 4. We will start in section 2 discussing the general aspects of anomalous $`U(1)`$’s in heterotic and type I models respectively. ## 2 Anomalous $`U(1)`$’s: Heterotic vs Type I Models In $`D=4`$, $`N=1`$ perturbative heterotic vacua one anomalous $`U(1)`$ is often present and anomalies are cancelled by a Green-Schwarz mechanism. An important role is played by the imaginary part of the complex heterotic dilaton $`S`$, which is dual to the unique antisymmetric tensor $`B_{\mu \nu }`$ of perturbative heterotic strings. Under an anomalous $`U(1)`$ gauge transformation $`A_\mu \rightarrow A_\mu +\delta _{GS}\partial _\mu \theta (x)`$, Im$`S`$ gets shifted by $`\mathrm{Im}S\rightarrow \mathrm{Im}S-\delta _{GS}\theta (x)`$, where $`\delta _{GS}`$ is a constant anomaly-cancelling coefficient. Since the gauge kinetic function for the gauge group $`G_a`$ is at tree-level $`f_a=k_aS,`$ the Lagrangian contains the couplings $`\mathrm{Im}S\sum _ak_aF_a\tilde{F}_a`$, where the sum runs over all gauge groups in the model and the coefficients $`k_a`$ are known as the Kac-Moody levels. Then, a shift in $`\mathrm{Im}S`$ can in principle cancel mixed $`U(1)`$-gauge anomalies. However, for this to be possible the mixed anomalies $`C_a`$ have to be in the same ratios as the coefficients $`k_a`$ (Kac-Moody levels) of the gauge factors: $$\frac{C_a}{C_b}=\frac{k_a}{k_b}$$ (2.1) with $`\delta _{GS}=C_a/k_a`$.
In the SM, assuming the standard hypercharge normalization, we have $`k_3:k_2:k_1=1:1:5/3`$. The anomalous $`U(1)`$ induces a Fayet-Iliopoulos (FI) term $`\xi =g^2M_P^2\delta _{GS}/16\pi ^2`$ at one-loop . The gauge coupling $`g`$ here is given by $`8\pi /g^2=\mathrm{Re}S`$. The $`D`$-term in the Lagrangian then takes the form: $$ℒ_𝒟=\frac{g^2}{2}\left(\sum _iq_iX_iK_i+\xi \right)^2$$ (2.2) where $`K_i`$ is the derivative of the Kähler potential $`K`$ with respect to the matter fields $`X_i`$ which have charge $`q_i`$ under the anomalous $`U(1)`$. This term triggers gauge symmetry breaking. In order to preserve supersymmetry, the $`D`$-term has to vanish. The Fayet-Iliopoulos term $`\xi `$ cannot vanish because the $`U(1)`$ being anomalous implies $`\delta _{GS}\ne 0`$ and, in a nontrivial vacuum, $`g\ne 0`$. Therefore a combination of the charged fields $`X_i`$ is forced to get a nonvanishing vev to compensate the FI term, breaking the anomalous $`U(1)`$ and often some other non-anomalous groups. Let us see how the anomalous $`U(1)`$ gauge field gets a mass. First, the anomaly cancelling term in the lagrangian $`\delta _{GS}B\wedge F_{U(1)}`$ gives rise, upon dualization, to a term proportional to $`(\partial _\mu \mathrm{Im}S+\delta _{GS}A_\mu )^2`$ which allows the gauge field $`A_\mu `$ to eat the axion $`\mathrm{Im}S`$ and get a mass, as in the standard Higgs effect. A linear combination of the real part of $`S`$ and the charged scalars $`X_i`$ also gets the same mass from the Fayet-Iliopoulos term, after expanding the $`D`$-term around the nontrivial vacuum. Therefore the analogy with the supersymmetric Higgs effect is complete: the original vector superfield eats the chiral superfield of the dilaton giving rise to a massive vector superfield. The anomalous $`U(1)`$ symmetry gets broken at a scale determined by $`\xi `$, which can be $`1`$-$`2`$ orders of magnitude below the Planck mass depending on the value of $`\delta _{GS}`$.
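The statement that the breaking scale sits one to two orders of magnitude below the Planck mass can be checked numerically. The following sketch (with illustrative values of $`g^2`$ and $`\delta _{GS}`$, a reference Planck mass, and a unit-charge field; all numerical choices are assumptions made here for illustration) evaluates the vev $`\sqrt{\xi }`$ that $`D`$-flatness forces on a charged scalar:

```python
from math import pi, sqrt

M_P = 1.2e19  # Planck mass in GeV (assumed reference scale)

def fi_vev(g2, delta_GS):
    """Vev forced by D-flatness, <X> ~ sqrt(xi) for a unit-charge field."""
    xi = g2 * M_P**2 * delta_GS / (16 * pi**2)
    return sqrt(xi)

# illustrative perturbative values of g^2 and delta_GS
for delta_GS in (0.1, 1.0, 10.0):
    v = fi_vev(0.5, delta_GS)
    print(f"delta_GS = {delta_GS:4}:  <X> ~ {v:.1e} GeV  (~ M_P/{M_P / v:.0f})")
```

For these inputs the vev indeed comes out at roughly $`M_P/10`$ to $`M_P/100`$, in line with the estimate above.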
The massless combination of the dilaton with the charged fields $`X_i`$ plays the role of the string coupling. Since chiral fields $`X_i`$ charged under the $`U(1)`$ are forced to get vevs, non-renormalizable couplings of the form $`\psi ^nX_i^m`$, where the $`\psi `$ denote SM superfields, will induce effective operators which will in general violate the anomalous $`U(1)`$ symmetry. Thus typically this symmetry does not survive at low energies as a global symmetry. Let us now see how things change in Type I strings. Recently it has been realized that the cancellation of $`U(1)`$ anomalies in certain Type I and Type IIB $`D=4`$, $`N=1`$ models proceeds in a manner quite different from the one in perturbative heterotic vacua. These are models which may be constructed as Type IIB orbifolds or orientifolds and contain different D-brane configurations in the vacuum. For example, the vacuum may contain a certain number of D3-branes and D7-branes with their transverse coordinates located at different positions in the extra six compact dimensions. Chiral $`N=1`$ theories in $`D=4`$ are only obtained when, e.g., the D3-branes are located on top of orbifold singularities. It has been found that in this class of theories: a) There may be more than one anomalous $`U(1)`$; b) The mixed anomaly of the $`U(1)`$ with other groups need not be universal ; c) There is a generalized Green-Schwarz mechanism at work in which the cancelling role is played not by the complex dilaton $`S`$ but by twisted closed string massless modes $`M_k`$. These are fields which live on the fixed points of the orbifold.
In particular, their real parts (which are NS-NS type of fields) parametrize the smoothing out of the orbifold singularities whereas their imaginary parts (which are Ramond-Ramond fields) are the ones actually participating in the $`U(1)`$ anomaly cancellation <sup>2</sup><sup>2</sup>2 More precisely, from string theory, the blowing-up modes together with the antisymmetric tensors coming from the RR sector belong to linear multiplets. The scalar components of these multiplets, $`m_k`$, are the ones that vanish at the orbifold point and their values determine the blowing-up procedure . Upon dualization, the linear multiplets get switched to the chiral multiplets $`M_k`$; the relation between $`m_k`$ and the real part of $`M_k`$ depends on the structure of the Lagrangian but close to the singularity it is linear and $`m_k=\mathrm{Re}M_k-F(T_i,T_i^{*})`$ where $`F(T,T^{*})`$ is a model dependent function which depends on the untwisted moduli fields $`T_i`$ which determine the size of the compact space.. More specifically, cancellation of $`U(1)`$ anomalies results from the presence in the $`D=4`$, $`N=1`$ effective action of the term $$\sum _k\delta _k^lB_k\wedge F_{U(1)_l}$$ (2.3) where $`k`$ runs over the different twisted sectors of the underlying orbifold (see ref. for details) and $`B_k`$ are the two-forms which are dual to the imaginary parts of the twisted fields $`M_k`$. Here $`l`$ labels the different anomalous $`U(1)`$’s and $`\delta _k^l`$ are model-dependent constant coefficients. In addition the gauge kinetic functions have also a (tree-level) $`M_k`$-dependent piece <sup>3</sup><sup>3</sup>3This is for gauge groups coupling either to Type I 9-branes or 3-branes. In the case of 5-branes or 7-branes the complex field $`S`$ is to be replaced by the appropriate $`T_i`$ field. The different choices for Dp-branes are in fact T-dual to each other and, hence, equivalent. See e.g. ref. for details.
$$f_\alpha =S+\sum _ks_\alpha ^kM_k$$ (2.4) where the $`s_\alpha ^k`$ are model dependent coefficients. Under a $`U(1)_l`$ transformation the $`M_k`$ fields transform non-linearly $$\mathrm{Im}M_k\to \mathrm{Im}M_k+\delta _k^l\mathrm{\Lambda }_l(x)$$ (2.5) This non-linear transformation combined with eq.(2.4) results in the cancellation of the $`U(1)_l`$ anomalies as long as the coefficients $`C_\alpha ^l`$ of the mixed $`U(1)_l`$-$`G_\alpha ^2`$ anomalies are given by $$C_\alpha ^l=\sum _ks_\alpha ^k\delta _k^l$$ (2.6) Unlike the perturbative heterotic case, eq.(2.6) does not in general require universal mixed anomalies as in eq.(2.1). In $`D=4`$ models like these there can also be mixed $`U(1)_X`$-gravitational anomalies. In the perturbative heterotic case, in order for the Green-Schwarz mechanism to work, the coefficient $`C_{grav}^l`$ of those anomalies must be related to those of mixed $`U(1)`$-gauge anomalies by $$C_{grav}^l=\frac{24}{k_\alpha }C_\alpha ^l.$$ (2.7) Such a relationship disappears in the case of Type IIB $`D=4`$, $`N=1`$ orientifolds. One can, though, find certain sum rules relating the gravitational to the gauge anomalies in certain classes of models. In particular, for the toroidal orientifolds of the general class studied in refs. one can obtain the constraint : $$C_{grav}^l=\frac{3}{2}\sum _\alpha n_\alpha C_\alpha ^l$$ (2.8) where $`n_\alpha `$ is the rank of the $`U(n)`$ or $`SO(m)`$ groups which are present in this class of orbifolds. This constraint has to be satisfied for the anomalies to be cancelled by the generalized Green-Schwarz mechanism present in these models <sup>4</sup><sup>4</sup>4 It is amusing that eq.(2.8), valid for certain classes of Type IIB orientifolds, turns out to be consistent with what is found in perturbative heterotic $`SO(32)`$ Abelian orbifolds. Indeed, in that case all mixed $`U(1)`$-gauge anomalies are equal and one has $`\sum _\alpha n_\alpha C_\alpha ^l=\mathrm{rank}(SO(32))C^l`$ $`=16C^l`$.
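Eq.(2.6) is just a linear relation, so the ability of the generalized mechanism to accommodate non-universal anomalies can be made concrete: given the couplings $`s_\alpha ^k`$ of the twisted fields to the gauge kinetic functions, the anomaly-cancelling shifts $`\delta _k`$ follow from a plain linear solve. A minimal sketch for one anomalous $`U(1)`$ and three twisted fields (the matrix $`s`$ and anomalies $`C`$ below are hypothetical, chosen only to show that ratios away from $`1:1:5/3`$ are allowed):

```python
from fractions import Fraction as F

def solve3(A, b):
    """Gaussian elimination over the rationals for a 3x3 system A.delta = b."""
    A = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        piv = next(r for r in range(i, 3) if A[r][i] != 0)
        A[i], A[piv] = A[piv], A[i]
        A[i] = [x / A[i][i] for x in A[i]]
        for r in range(3):
            if r != i and A[r][i] != 0:
                A[r] = [x - A[r][i] * y for x, y in zip(A[r], A[i])]
    return [A[r][3] for r in range(3)]

# hypothetical model data: s[alpha][k] couplings of three twisted fields M_k,
# and non-universal mixed anomalies C_alpha (not in the heterotic ratios)
s = [[F(1), F(0), F(1)],
     [F(0), F(1), F(1)],
     [F(1), F(1), F(0)]]
C = [F(3), F(-1), F(2)]

delta = solve3(s, C)   # delta_k such that C_alpha = sum_k s_alpha^k delta_k
print(delta)           # [Fraction(3, 1), Fraction(-1, 1), Fraction(0, 1)]
```

The point is structural: as long as the $`s_\alpha ^k`$ matrix is invertible, any pattern of mixed anomalies can be cancelled, with no universality condition like eq.(2.1).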
Plugging this back into eq.(2.8) we recover the perturbative heterotic result eq.(2.7). . The FI term for this $`U(1)_X`$ is given by $`\xi =\delta _XK_M`$; since the Kähler potential (to first order in $`M`$) is given by $`K=[\mathrm{Re}M-F(T_i,T_i^{*})]^2+\mathrm{}`$ we can easily see that the FI term vanishes at the orbifold singularity (see also ). This is impossible in the heterotic case, because in that case it is the field $`S`$ that cancels the anomaly and the FI-term $`\xi `$ is then proportional to $`K_S\propto g^2`$ which cannot be set to zero in a nontrivial vacuum. The anomalous gauge field gets a mass exactly in the same way as in the heterotic case; nevertheless, there is no need for a charged field to get a nonvanishing vev in order to cancel the $`D`$-term, and therefore the corresponding global symmetry is not broken. Thus, the gauge field gets a mass of the order of the string scale (since the mass depends on $`K_{MM^{*}}`$ and not on $`K_M`$) but the global symmetry remains perturbatively exact as long as we are at the orbifold singularity $`\xi =0`$. If there is a vacuum for which the $`D`$-term vanishes outside the singularity, the scale of breaking of the global symmetry could in principle be as small as we want. We can see then that in the class of Type IIB orientifolds in which this anomaly cancellation mechanism has been studied, the anomalous $`U(1)`$’s have generically a mass of order the string scale. Unlike what happens in the heterotic case, this FI-term is arbitrary at the perturbative level and hence may in principle vanish (orbifold limit). In this case the $`U(1)_X`$ symmetry remains as an effective global $`U(1)`$ symmetry which is perturbatively exact. ## 3 Anomalous U(1)’s and Yukawa Couplings We will now study the new possibilities offered by this generalized Green-Schwarz mechanism when applied to MSSM physics. We will consider the simplest possibility in which we extend the SM gauge group by adding a single anomalous $`U(1)_X`$.
There are just three $`U(1)`$ charge assignments (beyond hypercharge) for the MSSM chiral fields which 1) allow for the presence of the usual Yukawa couplings and 2) are flavour-independent. These were named $`R`$, $`A`$ and $`L`$ in ref. , and the corresponding assignments are displayed in table 1, where we also include the hypercharge assignments $`Y`$. Notice that $`L`$ is just lepton number and $`R`$ corresponds to the 3rd component of right-handed isospin in left-right symmetric models (baryon number is given by the combination $`B=6Y+3R+3L`$). The other symmetry, $`A`$, is a Peccei-Quinn type of symmetry. Thus the most general such $`U(1)`$ symmetry will be a linear combination: $$Q_X=mR+nA+pL$$ (3.1) where $`m,n,p`$ are real constants. We will denote the corresponding $`U(1)_X`$’s by giving the three numbers $`Q_X=(m,n,p)`$. The fifth line in the table shows the general $`Q_X`$ charge of the particles of the MSSM. Notice that, depending on the values of $`m,n,p`$, some of the terms in the superpotential (1.1) may be forbidden. Thus, for example, $`Q_X=R=(1,0,0)`$ forbids all the terms in the second line and would forbid proton decay at this level by itself. On the other hand it does not provide an explanation for the smallness of the $`\mu `$-term. In particular, the bilinear $`H\overline{H}`$ has $`Q_X`$ charge equal to $`n`$, and hence it can only be forbidden if our $`U(1)`$ has $`n\ne 0`$. For that purpose we can see that the $`U(1)`$ symmetry is necessarily anomalous. Thus let us study the anomalies of the above $`Q_X`$ symmetry.
The mixed anomalies $`C_i`$ of $`Q_X`$ with the SM gauge interactions are given by<sup>5</sup><sup>5</sup>5We will not consider constraints coming from cancellation of mixed $`U(1)`$-gravitational anomalies in our analysis in this section and the next, since one can always cancel those by the addition of appropriate SM singlets carrying $`U(1)_X`$ charges.: $`C_3`$ $`=`$ $`n{\displaystyle \frac{N_g}{2}}`$ (3.2) $`C_2`$ $`=`$ $`n{\displaystyle \frac{N_g}{2}}+n{\displaystyle \frac{N_D}{2}}-p{\displaystyle \frac{N_g}{2}}`$ $`C_1`$ $`=`$ $`n{\displaystyle \frac{5N_g}{6}}+n{\displaystyle \frac{N_D}{2}}+p{\displaystyle \frac{N_g}{2}}`$ where we have preferred to leave the number of generations $`N_g`$ and doublets $`N_D`$ free to trace the origin of the numerical factors (one has $`N_g=3`$, $`N_D=1`$ in the MSSM). There is an additional constraint from the cancellation of $`U(1)_Y\times U(1)_X^2`$ anomalies which yields: $$p(m-n)=\frac{n}{2N_g}\left(N_D(n-2m)+2N_gm\right).$$ (3.3) Thus there are only two independent parameters out of the three $`m,n,p`$ if we impose this last constraint. It is now easy to see that (3.2) cannot be satisfied in the heterotic case since there is no solution to these equations with $`C_3:C_2:C_1=1:1:5/3`$. However these equations can, in principle, be easily satisfied in the type I case. To see this, let us suppose for simplicity that one twisted field $`M`$ is relevant in the cancellation mechanism <sup>6</sup><sup>6</sup>6Notice that $`M`$ may also denote a linear combination of several twisted fields living at the singularity.. The gauge kinetic functions for the SM interactions will have the form: $$f_\alpha =S+s_\alpha M,\alpha =3,2,1.$$ (3.4) Now, from eq.(2.6) we see that the mixed anomalies will cancel if the anomaly coefficients in (3.2) are related to the parameters $`\delta _X`$ and $`s_\alpha `$ by $`C_\alpha =\delta _X\times s_\alpha `$, which is possible to satisfy for appropriate $`\delta _X`$ and $`s_\alpha `$.
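The absence of a heterotic solution can be verified mechanically. The sketch below encodes a reading of the coefficients of eq.(3.2) (with $`C_2=nN_g/2+nN_D/2-pN_g/2`$ and $`C_1=5nN_g/6+nN_D/2+pN_g/2`$; $`m`$ drops out of the mixed anomalies) and scans rational values of $`p/n`$: the condition $`C_2=C_3`$ pulls $`p/n`$ to $`1/3`$ while $`C_1/C_3=5/3`$ pulls it to $`-1/3`$, so no anomalous $`U(1)`$ of this type fits the heterotic ratios.

```python
from fractions import Fraction as F

def anomalies(n, p, Ng=3, Nd=1):
    """Mixed anomalies of Q_X = mR + nA + pL from eq.(3.2); m drops out."""
    C3 = n * F(Ng, 2)
    C2 = n * F(Ng, 2) + n * F(Nd, 2) - p * F(Ng, 2)
    C1 = n * F(5 * Ng, 6) + n * F(Nd, 2) + p * F(Ng, 2)
    return C3, C2, C1

# heterotic Green-Schwarz cancellation would need C3 : C2 : C1 = 1 : 1 : 5/3
hits = []
for num in range(-12, 13):
    for den in range(1, 7):
        p = F(num, den)
        C3, C2, C1 = anomalies(1, p)   # overall scale n is irrelevant; set n = 1
        if C2 == C3 and 3 * C1 == 5 * C3:
            hits.append(p)
print(hits)                            # prints []: no heterotic solution exists
```

This is only a check of the ratio argument; in the type I case the same anomalies are cancelled by choosing $`\delta _X`$ and the $`s_\alpha `$ accordingly.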
Notice, however, that in the present case the coefficients $`s_\alpha `$ are related. Indeed, for the physical case $`N_g=3,N_D=1`$ anomalies can be cancelled as long as the parameters $`s_\alpha `$ satisfy: $$2s_3=s_1+s_2$$ (3.5) This imposes a constraint on which type I models can have an anomalous $`U(1)`$ that allows all standard Yukawa couplings. Let us now be a bit more specific and study the possibilities for anomalous $`Q_X=(m,n,p)`$ symmetries. As we said, we need $`n\ne 0`$ in order to forbid the $`\mu `$-term. 1. The simplest case is obtained for $`p=0`$, i.e., no gauging of lepton number. In this case condition (3.3) requires $`n=2m(N_D-N_g)/N_D=-4m`$ and we are left with a unique possibility $`Q_\mu `$ consistent with anomaly cancellation: $$Q_\mu =R-4A$$ (3.6) It is easy to check (see Table 2) that this symmetry forbids all dimension 3 and 4 terms violating baryon or lepton number in eq.(1.1). Dangerous $`F`$-term operators of dimension smaller than $`9`$ are also forbidden in this case (see for instance reference , for a recent discussion of these operators). The dimension 6 operators $`[QQu^{*}e^{*}]_D`$ and $`[Qu^{*}d^{*}L]_D`$ are however allowed (see Table 3). 2. A related symmetry is the one introduced by Weinberg in 1982 in order to eliminate dangerous $`B,L`$ violating operators in the supersymmetric SM . In his model all quarks and leptons carry unit charge whereas the Higgses have charge $`-2`$. This symmetry corresponds to $`Q_W=5R-4A-6Y`$. In order to cancel $`U(1)`$ anomalies he was forced to add extra chiral fields transforming like $`(8,1,0,2)`$+$`(1,3,0,2)`$+$`2(1,1,1,2)`$+$`2(1,1,-1,2)`$ under $`SU(3)\times SU(2)\times U(1)_Y\times U(1)_X`$. We now see that in the present context the addition of all those extra fields is not required and one can stick to the particle content of the MSSM as long as the anomaly cancellation mechanism here considered is at work.
The $`U(1)`$ clearly satisfies equations (3.2) with $`p=0,n=-4`$ but the value $`m=5`$ does not satisfy the quadratic constraint (3.3). However this is exactly cancelled by the contribution from the hypercharge. Concerning which $`B`$, $`L`$-violating operators are allowed, the same as in the previous example applies. 3. For the generic case with $`m,n,p\ne 0`$ tables 2 and 3 show that all $`B`$, $`L`$ violating operators up to dimension 6 are forbidden. A similar analysis may be done for higher dimensional operators which may be also dangerous for models with a relatively small string scale . 4. For particular choices of $`m,n,p`$ one can allow some $`R`$-parity violating dim=4 operator. For example, the $`U(1)`$ given by: $$Q_{udd}=4R+2A+3L$$ (3.7) forbids all dim=3,4,5,6 lepton number violating couplings but allows the coupling of type $`udd`$ in the superpotential. One can also find choices which allow for lepton number violating couplings but not for baryon number violating ones. 5. There is a particularly simple $`U(1)`$ for which all the anomalies are the same: $`C_1=C_2=C_3`$. This will require a string model with identical $`s_\alpha `$ coefficients in the gauge kinetic function. This is a solution as long as $`N_g=3N_D`$, which is satisfied in the physical case, with $`p=n/3;m=-3n/2`$. Since the three $`s_\alpha `$ are identical the gauge couplings are unified for any value of $`<\mathrm{Re}M>`$ . Notice however that the $`U(1)_Y`$ normalization is not the canonical one. This $`U(1)`$ also forbids all dangerous couplings of dimension $`3,4,5`$ and $`6`$ as well as all dangerous $`F`$-term operators of dimension less than $`9`$. Thus one concludes that ensuring a small $`\mu `$-parameter implies in general that $`B`$ and $`L`$-violating dim=3,4,5,6 operators are generically forbidden in the presence of anomalous $`U(1)`$’s of this type, except for very particular cases.
As for neutrino masses, the operator $`LL\overline{H}\overline{H}`$ is forbidden as long as $`m\ne n+p`$, so one must include right-handed neutrinos, with charges $`n+p-m`$, to allow for neutrino masses. Now, once the $`U(1)_X`$ symmetry is gauged, the $`\mu `$ parameter is forced to be zero at the perturbative level and we understand why the Higgs fields have small masses compared to the cut-off. A small but non-vanishing $`\mu `$-parameter is however needed in order to obtain appropriate $`SU(2)\times U(1)`$ symmetry breaking. Notice however that once SUSY is broken, a non-vanishing $`D_X`$-term of order $`M_W^2`$ will in general appear. Thus the $`U(1)_X`$ symmetry will get small breakings of order $`M_W`$ and an effective $`\mu `$-term of order $`M_W`$ could be generated. $`U(1)_X`$ symmetry breaking effects could also be generated from non-perturbative effects and could also give rise to a $`\mu `$ term. ## 4 Anomalous $`U(1)`$’s and $`\beta `$-functions The class of $`U(1)`$ symmetries considered in the previous section is phenomenologically very interesting and in principle it may be realized in type I strings. However, at the moment we do not yet know an explicit example giving rise to such symmetries since we are still lacking sufficiently realistic models. We will now change our approach in the following way. Instead of imposing the phenomenological requirements of allowing all quark and lepton masses, we will impose some constraints on the anomalies inspired by some known orientifold models. Recently a special class of anomalous $`U(1)`$’s has been found in orientifold models . These models are such that the mixed anomalies of these $`U(1)`$’s with the other gauge groups are in the ratio of the beta-function coefficients of the corresponding gauge groups. This could also be of interest in trying to accommodate a string scale well below the unification scale $`M_X=2\times 10^{16}`$ GeV .
For these $`U(1)`$’s, instead of (2.1), which is valid for heterotic models, we would have: $$\frac{C_\alpha }{C_\gamma }=\frac{\beta _\alpha }{\beta _\gamma }$$ (4.1) If an extension of the MSSM exists with such an extra $`U(1)_X`$, since we know the beta-function coefficients for the supersymmetric standard model, $`\beta _3:\beta _2:\beta _1=-3:1:11`$, we can then look for the most general family independent $`U(1)`$ that satisfies these constraints. Imposing the three constraints for the mixed anomalies with the standard model groups we find four independent solutions as shown in the table. The most general anomalous $`U(1)`$ is $$Q_X^{\prime }=mQ_1+nQ_2+pQ_3+qQ_4$$ (4.2) which has mixed anomalies with the standard model groups given by: $$C_\alpha =\frac{\beta _\alpha }{2}(2m+2n+p+q)$$ (4.3) We therefore will assume $`2m+2n+p+q\ne 0`$ so that the $`U(1)`$ is indeed anomalous. These anomalies are cancelled if the gauge kinetic function has the form $`f_\alpha =S+\frac{\beta _\alpha }{2}M`$ (i.e., $`s_\alpha =\beta _\alpha /2`$) with $`\delta _X=(2m+2n+p+q)`$. We thus have $`C_\alpha =\frac{\beta _\alpha }{2}\delta _X`$. There are two more conditions that have to be imposed. First, that the mixed $`U(1)_X`$-$`U(1)_X`$-$`Y`$ anomaly vanishes identically, imposing a quadratic constraint among the $`U(1)_X`$ charges. Second, that the $`U(1)_X^3`$ anomaly is also cancelled by a Green-Schwarz mechanism, therefore a constraint $`C_X=\beta _X\delta _X/2`$ has also to be satisfied. The quadratic constraint can be automatically satisfied by use of the following argument: adding a term proportional to the hypercharge to each of the $`U(1)_X`$ charges will not change any of the linear constraints coming from the mixed $`U(1)_XGG`$ anomalies because we know that hypercharge is anomaly free. Therefore this will change only the quadratic constraint; we then assume that the proportionality constant has been fixed appropriately and do not impose the quadratic constraint separately.
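For reference, the quoted MSSM one-loop coefficients follow from the chiral spectrum via $`b_a=-3C_2(G)+\sum T(R)`$ with $`T(\mathrm{fund})=1/2`$, and $`b_1=\sum Y^2`$ in the SM hypercharge normalization. A self-contained check (standard MSSM field content assumed):

```python
from fractions import Fraction as F

Ng = 3                                   # generations

# SU(3): 3 gen x (Q counted twice for the SU(2) doublet + u^c + d^c) fundamentals
quarks_su3 = Ng * (2 + 1 + 1)
# SU(2): 3 gen x (3 colours of Q + one L) doublets, plus the two Higgs doublets
doublets_su2 = Ng * (3 + 1) + 2
# U(1)_Y: sum of Y^2 over all chiral multiplets (Q, u^c, d^c, L, e^c, H, Hbar)
Y2 = Ng * (6 * F(1, 6)**2 + 3 * F(2, 3)**2 + 3 * F(1, 3)**2
           + 2 * F(1, 2)**2 + 1) + 4 * F(1, 2)**2

b3 = -3 * 3 + F(1, 2) * quarks_su3       # C2(SU(3)) = 3
b2 = -3 * 2 + F(1, 2) * doublets_su2     # C2(SU(2)) = 2
b1 = Y2                                  # no -3 C2(G) piece for an Abelian factor
print(b3, b2, b1)                        # -3 1 11
```

This reproduces the ratio $`-3:1:11`$ used above, with $`\beta _3`$ negative (asymptotic freedom of QCD).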
This will tell us though that the four coefficients $`m,n,p,q`$ are not independent and we are free to impose one constraint among them (the Weinberg $`U(1)`$ of the previous section was obtained in this way). As for the cubic constraint, since it involves only the anomalous $`U(1)_X`$ we will allow the possibility of extra matter fields charged only under the anomalous $`U(1)_X`$ but not the Standard Model groups, which is very common in string models, so that their charges satisfy the cubic anomaly condition. Let us then try to extract the possible implications of the $`U(1)_X`$ symmetry. It turns out that one can draw some general conclusions without needing to go into the details of each $`U(1)_X`$. We show in table 2 the $`U(1)_X`$ charges of the dim=3,4 operators in the MSSM. Here one defines $`Y=m-n-p`$ and $`Z=p+2q`$ <sup>7</sup><sup>7</sup>7Note that $`m,n,p,q`$ are defined in eq.(4.2) and have nothing to do with those defined in eq.(3.1).. For any of those couplings to be allowed the corresponding entry has to vanish. Examining the table one reaches the following conclusions: 1. The $`\mu `$-term is prohibited as long as the $`U(1)`$ is anomalous ($`\delta _X\ne 0`$). This result happens to be identical to the case in the previous section. Thus this is a very generic fact: $`U(1)`$’s forbidding the $`\mu `$-term are necessarily anomalous. 2. If a mass term for the up quarks is allowed ($`Y=0`$), then automatically a mass term for the down quarks is also allowed. However, at the same time, lepton masses are forbidden. Therefore with this $`U(1)_X`$ lepton or quark mass terms may be present but not both simultaneously. This implies that the class of $`U(1)`$’s studied in the previous section does not fall into the present category. In the following we will consider the case in which quark masses are permitted ($`Y=0`$); similar conclusions can be obtained if only the lepton masses are present, or none of them. We will comment below how leptons could get a mass. 3.
If the baryon number violating operator $`udd`$ is allowed ($`Z=0`$) then the lepton number violating operator $`QdL`$ is automatically forbidden implying, at this level, proton stability. 4. From table 3 one also observes that in the generic case all $`B`$ and $`L`$-violating terms are forbidden at least up to dimension 6, except for the dim=5 operator $`[QQQL]_F`$ which is always necessarily allowed for a $`U(1)`$ of this type. One can also check that the charges of the dimension 5 operators $`LL\overline{H}\overline{H}`$ are given by $`2Z-2Y-4\delta _X`$. Thus for choices with $`2Z=2Y+4\delta _X`$ neutrino masses can be naturally generated as in the standard see-saw mechanism. Alternatively, right-handed neutrinos may be added. We can then see how restrictive a single anomalous symmetry can be regarding the physically interesting couplings in the superpotential. Even though the lepton masses are forbidden, there is a very economical way to generate them. If there is a standard model singlet $`N`$ with charge $`3\delta _X`$, the coupling $`NLeH`$ is invariant under the anomalous $`U(1)`$ and, if $`N`$ has a nonvanishing expectation value, it gives rise to lepton masses. This looks dangerous since, as we discussed in the introduction, once we give large vevs to fields charged under the anomalous $`U(1)_X`$, this symmetry will be broken in the effective Lagrangian, as generically happened in the heterotic models. Interestingly enough there is an unexpected unbroken discrete $`Z_3`$ gauge group which saves the day. Indeed, a vev for the field $`N`$ turns out to break $`U(1)_X`$ to a discrete $`Z_3`$ subgroup. This is due to the fact that the $`N`$ field has to have charge $`3\delta _X`$ whereas the forbidden terms would require vevs of fields with charges $`\pm 2\delta _X`$ or $`\pm \delta _X`$ to be allowed. This is enough to forbid the dangerous couplings, such as the $`\mu `$-term and the $`B,L`$ violating operators.
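The residual $`Z_3`$ selection rule is simple modular arithmetic. Measuring charges in units of $`\delta _X`$ (a normalization chosen here purely for illustration), the $`N`$ vev can only induce operators whose charge vanishes mod 3:

```python
# Charges in units of delta_X; the singlet N carries charge 3, so its vev
# preserves the Z_3 subgroup of U(1)_X acting as charge mod 3.
def z3_allowed(charge):
    """An operator can be generated after <N> != 0 iff its charge is 0 (mod 3)."""
    return charge % 3 == 0

assert z3_allowed(3) and z3_allowed(0)   # N itself, and charge-0 terms like QQQL
for q in (1, -1, 2, -2):                 # the charges of the dangerous couplings
    print(q, "->", "allowed" if z3_allowed(q) else "still forbidden by Z_3")
```

Since $`\pm \delta _X`$ and $`\pm 2\delta _X`$ are never multiples of $`3\delta _X`$, the dangerous couplings stay forbidden even after the $`N`$ vev, exactly as stated above.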
The ratio $`<N>/M_s`$ may be at the origin of a hierarchy of fermion masses. The only dangerous coupling that is not forbidden by this $`U(1)_X`$ (nor its residual $`Z_3`$ once leptons get masses) is the dimension 5 operator $`QQQL`$. We may hope that an extra symmetry, possibly a flavour-dependent $`U(1)`$ or even a sigma-model symmetry as those discussed in , may be at work to forbid this operator and keep the proton stable. Notice that this coupling is dangerous only for the first families of quarks and leptons. A detailed analysis of flavour-dependent anomalous $`U(1)`$’s may also be interesting, in order to study the possible structure of fermion masses. We hope to report on this in a future publication. ## 5 Final comments We have studied the possible use of anomalous $`U(1)`$ gauge symmetries of the class found in Type IIB $`D=4`$, $`N=1`$ orientifolds to restrict Yukawa couplings and operators in simple extensions of the MSSM in which a single such $`U(1)`$ is added. We have studied in detail two general classes of flavour-independent anomalous $`U(1)`$’s that may come from type I strings. The general properties of anomaly cancellation and induced FI terms are very different compared to those previously considered in the context of perturbative heterotic models. Besides its intrinsic interest, this study may lead us to extract general properties of these models. We have seen that in most cases dangerous couplings are forbidden; in particular the $`\mu `$ term and $`B,L`$ violating operators are naturally constrained by these anomalous symmetries. We have checked that generic symmetries of this class easily forbid $`B,L`$ violating couplings up to large operator dimensions. This could be welcome for string models with the string scale well below the Planck mass as in refs. . It would be interesting to extend the present analysis to the case of flavour-dependent $`U(1)`$ symmetries which could restrict the patterns of fermion mass textures.
Notice that in the analysis in section 3 we have tacitly assumed that the residual global $`U(1)_X`$ symmetry is broken (by vevs of charged scalars) only close to the weak scale, so that proton decay suppression is sufficiently efficient and a large $`\mu `$-term is not generated. This is a possibility which is not present in heterotic models, in which the FI-term generically forces charged scalars to get vevs close to the string scale. However in the Type I context other flavour-dependent $`U(1)`$’s can be assumed to be broken by vevs of singlet scalars slightly below the string scale so that schemes analogous to those considered in are also possible. Notice also that more than one anomalous $`U(1)`$ may be present now. If there is more than one $`M`$ field at the singularity, since only one gets swallowed by the $`U(1)_X`$ to become massive, the others can play the role of invisible axions and solve the strong CP-problem as proposed in ref. . Indeed, these other fields would be massless to high accuracy and have the adequate couplings to do the job. It is unlikely that they get substantial masses after SUSY-breaking if the latter originates in a hidden sector. However it is not clear if these fields couple to $`F\stackrel{~}{F}`$. For example, in $`Z_3`$ with 9-branes, the sum of the 27 fields is massive, the other 26 are not. But they have zero coupling with $`F\stackrel{~}{F}`$. We expect that in the generic case, for an anomalous $`U(1)_X`$, the linear combination $`\sum _ks_X^kM_k`$ cancels the anomaly and gets eaten by the anomalous gauge field, whereas the combination $`\sum _ks_3^kM_k`$ is the one that couples to QCD and plays the role of the QCD axion. Once $`SU(2)\times U(1)`$ is broken and the Higgs fields get vevs, those vevs will also break the $`U(1)_X`$ symmetry. In the $`D_X`$ term this can be compensated by a tiny (compared to the string scale) FI-term $`\xi `$. Thus it seems electroweak symmetry breaking will trigger a FI-term of order $`M_W`$. This means that $`<M>\sim M_W`$ in these models.
Thus, interestingly enough, the electroweak scale would be a measure of the blowing up of the singularity. The process of $`SU(2)\times U(1)`$ breaking would look like a transition of some branes to the bulk. The distance of the branes to the original singularity (given by the vevs of the Higgs) is equal to the induced FI term. Of course, all this depends on the supersymmetry breaking mechanism and how it affects the structure of the $`D`$-terms. In the general case we can say that for an arbitrary anomalous $`U(1)`$ the vacuum is either at the singularity $`\xi =0`$ or not. If it is not, then the blowing-up mode can substantially affect the unification scale as argued in refs. . Otherwise the anomalous $`U(1)`$ symmetry remains as a perturbatively exact global symmetry that can help forbid dangerous couplings allowed by supersymmetry. This is the first concrete proposal to evade the general claim against the existence of global symmetries in string models. The SM Higgs can break this symmetry, triggering a nonvanishing value of the FI term and then moving the vacuum away from the singularity. This may provide a ‘brany’ interpretation of the scale of electroweak symmetry breaking. In any case these new anomalous symmetries can certainly play a very interesting role in low-energy physics. Acknowledgements This work has been partially supported by CICYT (Spain), the European Commission (grant ERBFMRX-CT96-0045), the John Simon Guggenheim foundation and PPARC.
# Large Mixing and $`CP`$ Violation in Neutrino Oscillations ## Abstract I introduce a simple phenomenological model of lepton flavor mixing and $`CP`$ violation based on the flavor democracy of charged leptons and the mass degeneracy of neutrinos. The nearly bi-maximal mixing pattern, which can interpret current data on atmospheric and solar neutrino oscillations, emerges naturally from this model. The rephasing-invariant strength of $`CP`$ or $`T`$ violation amounts to about one percent and could be measured in the long-baseline neutrino experiments. The similarity and difference between lepton and quark flavor mixing phenomena are also discussed. 1 In the standard electroweak model neutrinos are assumed to be massless Weyl particles. This assumption, which does not conflict with any direct-mass-search experiments , is not guaranteed by any fundamental principle of particle physics. Indeed most extensions of the standard model, such as the grand unified theories of quarks and leptons, allow the existence of massive neutrinos. If the masses of three active neutrinos ($`\nu _e`$, $`\nu _\mu `$ and $`\nu _\tau `$) are nonvanishing, why are they so small in comparison with the masses of charged leptons or quarks? For the time being this question remains open, although a lot of theoretical speculations towards a definite answer have been made. The smallness of neutrino masses is perhaps attributed to the fact that neutrinos are electrically neutral fermions, or more exactly, to the Majorana feature of neutrino fields. Recent observation of the atmospheric and solar neutrino anomalies, particularly that in the Super-Kamiokande experiment, has provided strong evidence that neutrinos are massive and lepton flavors are mixed.
Analyses of the atmospheric neutrino deficit in the framework of either two or three lepton flavors favor $`\nu _\mu \to \nu _\tau `$ as the dominant oscillation mode and yield the following mass-squared difference and mixing factor at the $`90\%`$ confidence level : $$\mathrm{\Delta }m_{\mathrm{atm}}^2\sim 10^{-3}\mathrm{eV}^2,\mathrm{sin}^22\theta _{\mathrm{atm}}>0.9.$$ (1) As for the solar neutrino anomaly, the hypothesis that solar $`\nu _e`$ neutrinos change to $`\nu _\mu `$ neutrinos during their travel to the earth through the long-wavelength vacuum oscillation with the parameters $$\mathrm{\Delta }m_{\mathrm{sun}}^2\sim 10^{-10}\mathrm{eV}^2,\mathrm{sin}^22\theta _{\mathrm{sun}}\sim 1,$$ (2) can provide a consistent explanation of all existing solar neutrino data . Alternatively the large-angle MSW (Mikheyev, Smirnov, and Wolfenstein) solution, i.e., the matter-enhanced $`\nu _e\to \nu _\mu `$ oscillation with the parameters $$\mathrm{\Delta }m_{\mathrm{sun}}^2\sim 10^{-5}\mathrm{eV}^2,\mathrm{sin}^22\theta _{\mathrm{sun}}\sim 1,$$ (3) also seems favored by current data . To distinguish between the MSW and vacuum solutions to the solar neutrino problem is a challenging task of the next-round solar neutrino experiments. Current data indicate that solar and atmospheric neutrino oscillations are approximately decoupled from each other. Each of them is dominated by a single mass scale, i.e., $`\mathrm{\Delta }m_{\mathrm{sun}}^2`$ $`=`$ $`|\mathrm{\Delta }m_{21}^2|=\left|m_2^2-m_1^2\right|,`$ $`\mathrm{\Delta }m_{\mathrm{atm}}^2`$ $`=`$ $`|\mathrm{\Delta }m_{32}^2|=\left|m_3^2-m_2^2\right|,`$ (4) and $`\mathrm{\Delta }m_{31}^2\approx \mathrm{\Delta }m_{32}^2`$ in the scheme of three lepton flavors. In addition, the $`\nu _3`$-component in $`\nu _e`$ is rather small; i.e., the $`V_{e3}`$ element of the lepton flavor mixing matrix $`V`$, which links the neutrino mass eigenstates $`(\nu _1,\nu _2,\nu _3)`$ to the neutrino flavor eigenstates $`(\nu _e,\nu _\mu ,\nu _\tau )`$, is suppressed in magnitude.
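For orientation, the atmospheric numbers in eq.(1) translate into a two-flavour vacuum transition probability $`P=\mathrm{sin}^22\theta \mathrm{sin}^2(1.27\mathrm{\Delta }m^2L/E)`$, with $`L`$ in km and $`E`$ in GeV. A minimal sketch (the energy and baselines below are illustrative choices, not values from the text):

```python
from math import sin

def p_mutau(L_km, E_GeV, dm2_eV2=1e-3, sin2_2theta=1.0):
    """Two-flavour vacuum nu_mu -> nu_tau probability; L in km, E in GeV."""
    return sin2_2theta * sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# E ~ 1 GeV atmospheric neutrinos: down-going (~15 km) vs. up-going (~13000 km)
for L in (15, 500, 13000):
    print(f"L = {L:6d} km:  P(nu_mu -> nu_tau) = {p_mutau(L, 1.0):.3f}")
```

The probability is negligible for down-going neutrinos and of order one for up-going ones, which is the qualitative origin of the observed zenith-angle dependence of the deficit.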
Note, however, that the hierarchy of $`\mathrm{\Delta }m_{21}^2`$ and $`\mathrm{\Delta }m_{32}^2`$ (or $`\mathrm{\Delta }m_{31}^2`$) cannot give clear information about the absolute values or the relative magnitude of the three neutrino masses. For example, either the strongly hierarchical neutrino mass spectrum ($`m_1m_2m_3`$) or the nearly degenerate one ($`m_1m_2m_3`$) is allowed to reproduce the “observed” mass gap between $`\mathrm{\Delta }m_{\mathrm{sun}}^2`$ and $`\mathrm{\Delta }m_{\mathrm{atm}}^2`$. In the presence of flavor mixing among three different fermion families, $`CP`$ violation is generally expected to appear. This is the case for quarks, and there is no reason why the same phenomenon should not manifest itself in the lepton sector . The strength of $`CP`$ violation in neutrino oscillations, no matter whether neutrinos are of the Dirac or Majorana type, depends only upon a universal (rephasing-invariant) parameter $`𝒥`$, which is defined through $$\mathrm{Im}\left(V_{il}V_{jm}V_{im}^{}V_{jl}^{}\right)=𝒥\underset{k,n}{}ϵ_{ijk}^{}ϵ_{lmn}^{}.$$ (5) The asymmetry between the probabilities of two $`CP`$-conjugate neutrino transitions, due to the $`CPT`$ invariance and the unitarity of $`V`$, is uniquely given as $`𝒜_{CP}`$ $`=`$ $`P(\nu _\alpha \nu _\beta )P(\overline{\nu }_\alpha \overline{\nu }_\beta )`$ (6) $`=`$ $`16𝒥\mathrm{sin}F_{12}\mathrm{sin}F_{23}\mathrm{sin}F_{31}`$ with $`(\alpha ,\beta )=(e,\mu )`$, $`(\mu ,\tau )`$ or $`(\tau ,e)`$, $`F_{ij}=1.27\mathrm{\Delta }m_{ij}^2L/E`$ and $`\mathrm{\Delta }m_{ij}^2=m_i^2m_j^2`$, in which $`L`$ is the distance between the neutrino source and the detector (in units of km) and $`E`$ denotes the neutrino beam energy (in units of GeV).
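As a rough numerical sketch of Eq. (6) (the baseline and beam energy below are illustrative choices, not values taken from the text), the asymmetry can be evaluated directly; the identity $`F_{12}+F_{23}+F_{31}=0`$ is built in, since the three $`\mathrm{\Delta }m_{ij}^2`$ sum to zero by definition.

```python
import math

def cp_asymmetry(J, dm21_sq, dm32_sq, L, E):
    """A_CP = -16 J sin(F12) sin(F23) sin(F31), Eq. (6),
    with F_ij = 1.27 * Dm_ij^2 * L / E  (Dm^2 in eV^2, L in km, E in GeV)."""
    F21 = 1.27 * dm21_sq * L / E
    F32 = 1.27 * dm32_sq * L / E
    F12, F23 = -F21, -F32
    F31 = -(F12 + F23)   # Dm12^2 + Dm23^2 + Dm31^2 = 0 by definition
    return -16.0 * J * math.sin(F12) * math.sin(F23) * math.sin(F31)

# Illustrative parameters in the spirit of Eqs. (1) and (3); L and E are hypothetical:
A = cp_asymmetry(J=0.014, dm21_sq=1e-5, dm32_sq=1e-3, L=730.0, E=1.0)
```

With $`𝒥=0`$ the asymmetry vanishes identically, and its magnitude can never exceed $`16𝒥`$.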
The $`T`$-violating asymmetries can be obtained in a similar way : $`𝒜_T`$ $`=`$ $`P(\nu _\alpha \nu _\beta )P(\nu _\beta \nu _\alpha )`$ $`=`$ $`16𝒥\mathrm{sin}F_{12}\mathrm{sin}F_{23}\mathrm{sin}F_{31},`$ $`𝒜_T^{}`$ $`=`$ $`P(\overline{\nu }_\alpha \overline{\nu }_\beta )P(\overline{\nu }_\beta \overline{\nu }_\alpha )`$ (7) $`=`$ $`+16𝒥\mathrm{sin}F_{12}\mathrm{sin}F_{23}\mathrm{sin}F_{31},`$ where $`(\alpha ,\beta )=(e,\mu )`$, $`(\mu ,\tau )`$ or $`(\tau ,e)`$. These formulas show clearly that $`CP`$ or $`T`$ violation is a behavior of all three lepton families. In addition, the relationship $`𝒜_T^{}=𝒜_T`$ indicates that the two $`T`$-violating measurables are odd functions of time . A necessary condition for obtaining large (observable) $`CP`$ or $`T`$ violation is that the magnitude of $`𝒥`$ should be large enough. In view of the smallness of $`|V_{e3}|`$, one may conclude that the largeness of $`|𝒥|`$ requires the largeness of both $`|V_{e2}|`$ and $`|V_{\mu 3}|`$. Therefore a mixing scheme which can accommodate the small-angle MSW solution to the solar neutrino problem (due to the smallness of $`|V_{e2}|`$) would not be able to give rise to large $`CP`$ or $`T`$ violation. In the following I present a phenomenological model for lepton mass generation and $`CP`$ violation within the framework of three lepton species (i.e., the LSND evidence for neutrino oscillations, which was not confirmed by the KARMEN experiment , is not taken into account). The basic idea, first pointed out by Fritzsch and me in 1996 to get nearly bi-maximal lepton flavor mixing, is associated with the flavor democracy of charged leptons and the mass degeneracy of active neutrinos. We introduce a simple flavor symmetry breaking scheme for charged lepton and neutrino mass matrices, so as to generate two nearly bi-maximal flavor mixing angles and to interpret the approximate decoupling of solar and atmospheric neutrino oscillations. 
Large $`CP`$ or $`T`$ violation of order $`|𝒥|1\%`$ can naturally emerge in this scenario. Consequences on the upcoming long-baseline neutrino experiments, as well as the similarity and difference between lepton and quark mixing phenomena, will also be discussed. 2 Let me start with the symmetry limits of the charged lepton and neutrino mass matrices. In a specific basis of flavor space, in which charged leptons have the exact flavor democracy and neutrino masses are fully degenerate, the mass matrices can be written as $`M_l^{(0)}`$ $`=`$ $`{\displaystyle \frac{c_l^{}}{3}}\left(\begin{array}{ccc}1& 1& 1\\ 1& 1& 1\\ 1& 1& 1\end{array}\right),`$ $`M_\nu ^{(0)}`$ $`=`$ $`c_\nu \left(\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right),`$ (8) where $`c_l^{}=m_\tau `$ and $`c_\nu =m_0`$ measure the corresponding mass scales. If the three neutrinos are of the Majorana type, $`M_\nu ^{(0)}`$ could take a more general form $`M_\nu ^{(0)}P_\nu `$ with $`P_\nu =\mathrm{Diag}\{1,e^{i\varphi _1},e^{i\varphi _2}\}`$. As the Majorana phase matrix $`P_\nu `$ has no effect on the flavor mixing and $`CP`$-violating observables in neutrino oscillations, it will be neglected in the subsequent discussions. Clearly $`M_\nu ^{(0)}`$ exhibits an S(3) symmetry, while $`M_l^{(0)}`$ an $`S(3)_\mathrm{L}\times S(3)_\mathrm{R}`$ symmetry. In these limits $`m_e=m_\mu =0`$, $`m_1=m_2=m_3=m_0`$, and no flavor mixing is present. A simple real diagonal breaking of the flavor democracy for $`M_l^{(0)}`$ and the mass degeneracy for $`M_\nu ^{(0)}`$ may lead to instructive results for flavor mixing in neutrino oscillations . To accommodate $`CP`$ violation, however, complex perturbative terms are required . Let me proceed with two different symmetry-breaking steps. 
(a) Small real perturbations to the (3,3) elements of $`M_l^{(0)}`$ and $`M_\nu ^{(0)}`$ are respectively introduced : $`\mathrm{\Delta }M_l^{(1)}`$ $`=`$ $`{\displaystyle \frac{c_l^{}}{3}}\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& \epsilon _l^{}\end{array}\right),`$ $`\mathrm{\Delta }M_\nu ^{(1)}`$ $`=`$ $`c_\nu \left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& \epsilon _\nu \end{array}\right).`$ (9) In this case the charged lepton mass matrix $`M_l^{(1)}=M_l^{(0)}+\mathrm{\Delta }M_l^{(1)}`$ remains symmetric under an $`S(2)_\mathrm{L}\times S(2)_\mathrm{R}`$ transformation, and the neutrino mass matrix $`M_\nu ^{(1)}=M_\nu ^{(0)}+\mathrm{\Delta }M_\nu ^{(1)}`$ has an $`S(2)`$ symmetry. The muon becomes massive ($`m_\mu 2|\epsilon _l^{}|m_\tau /9`$), and the mass eigenvalue $`m_3`$ is no longer degenerate with $`m_1`$ and $`m_2`$ (i.e., $`|m_3m_0|=m_0|\epsilon _\nu |`$). After the diagonalization of $`M_l^{(1)}`$ and $`M_\nu ^{(1)}`$, one finds that the 2nd and 3rd lepton families have a definite flavor mixing angle $`\theta `$. We obtain $`\mathrm{tan}\theta \sqrt{2}`$ if the small correction of $`O(m_\mu /m_\tau )`$ is neglected. Then neutrino oscillations at the atmospheric scale may arise from $`\nu _\mu \nu _\tau `$ transitions with $`\mathrm{\Delta }m_{32}^2=\mathrm{\Delta }m_{31}^22m_0|\epsilon _\nu |`$. The corresponding mixing factor $`\mathrm{sin}^22\theta 8/9`$ is in good agreement with current data. (b) Small imaginary perturbations, which have identical magnitude but opposite signs, are introduced to the (2,2) and (1,1) elements of $`M_l^{(1)}`$.
For $`M_\nu ^{(1)}`$ the corresponding real perturbations are taken into account : $`\mathrm{\Delta }M_l^{(2)}`$ $`=`$ $`{\displaystyle \frac{c_l^{}}{3}}\left(\begin{array}{ccc}-i\delta _l& 0& 0\\ 0& i\delta _l& 0\\ 0& 0& 0\end{array}\right),`$ $`\mathrm{\Delta }M_\nu ^{(2)}`$ $`=`$ $`c_\nu \left(\begin{array}{ccc}-\delta _\nu & 0& 0\\ 0& \delta _\nu & 0\\ 0& 0& 0\end{array}\right).`$ (10) We obtain $`m_e|\delta _l|^2m_\tau ^2/(27m_\mu )`$ and $`m_1=m_0(1-\delta _\nu )`$, $`m_2=m_0(1+\delta _\nu )`$. The diagonalization of $`M_l^{(2)}=M_l^{(1)}+\mathrm{\Delta }M_l^{(2)}`$ and $`M_\nu ^{(2)}=M_\nu ^{(1)}+\mathrm{\Delta }M_\nu ^{(2)}`$ leads to a full $`3\times 3`$ flavor mixing matrix, which links neutrino mass eigenstates $`(\nu _1,\nu _2,\nu _3)`$ to neutrino flavor eigenstates $`(\nu _e,\nu _\mu ,\nu _\tau )`$ in the following manner: $$V=\left(\begin{array}{ccc}\frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}}& 0\\ \frac{1}{\sqrt{6}}& \frac{1}{\sqrt{6}}& \frac{2}{\sqrt{6}}\\ \frac{1}{\sqrt{3}}& \frac{1}{\sqrt{3}}& \frac{1}{\sqrt{3}}\end{array}\right)+\mathrm{\Delta }V$$ (11) with $$\mathrm{\Delta }V=i\xi _V^{}\sqrt{\frac{m_e}{m_\mu }}+\zeta _V^{}\frac{m_\mu }{m_\tau },$$ (12) where $`\xi _V^{}`$ $`=`$ $`\left(\begin{array}{ccc}\frac{1}{\sqrt{6}}& \frac{1}{\sqrt{6}}& \frac{2}{\sqrt{6}}\\ \frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}}& 0\\ 0& 0& 0\end{array}\right),`$ $`\zeta _V^{}`$ $`=`$ $`\left(\begin{array}{ccc}0& 0& 0\\ \frac{1}{\sqrt{6}}& \frac{1}{\sqrt{6}}& \frac{1}{\sqrt{6}}\\ \frac{1}{\sqrt{12}}& \frac{1}{\sqrt{12}}& \frac{1}{\sqrt{3}}\end{array}\right).`$ (13) Some consequences of this mixing scenario can be drawn as follows: (1) The mixing pattern in Eq. (11), after neglecting small corrections from the charged lepton masses, is quite similar to that of the pseudoscalar mesons $`\pi ^0`$, $`\eta `$ and $`\eta ^{}`$ in QCD .
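The pattern of Eqs. (11)–(13) can be checked numerically. A caveat: the minus signs inside the extracted matrices did not survive, so the sign assignments below follow the standard bi-maximal (democratic-basis) convention and are an assumption; the moduli tested here are insensitive to those sign choices.

```python
import math

# Standard charged lepton masses in GeV (an input assumed here, not stated in the text)
me, mmu, mtau = 0.000511, 0.10566, 1.77705
eps1 = math.sqrt(me / mmu)   # expansion parameter sqrt(m_e/m_mu)
eps2 = mmu / mtau            # expansion parameter m_mu/m_tau

s2, s3, s6, s12 = map(math.sqrt, (2, 3, 6, 12))
# Leading bi-maximal pattern, Eq. (11); signs assumed per the usual democratic-basis form
V0 = [[1/s2, -1/s2, 0], [1/s6, 1/s6, -2/s6], [1/s3, 1/s3, 1/s3]]
# Correction matrices, Eq. (13); signs likewise assumed
xi = [[-1/s6, -1/s6, -2/s6], [1/s2, -1/s2, 0], [0, 0, 0]]
zeta = [[0, 0, 0], [-1/s6, -1/s6, -1/s6], [1/s12, 1/s12, -1/s3]]

# V = V0 + i*eps1*xi + eps2*zeta, Eq. (12)
V = [[V0[i][j] + 1j * eps1 * xi[i][j] + eps2 * zeta[i][j] for j in range(3)]
     for i in range(3)]

Ve3 = abs(V[0][2])   # should equal (2/sqrt(6)) * sqrt(m_e/m_mu), Eq. (14)
```

Whatever the sign convention, $`|V_{e3}|`$ comes out suppressed and the (1,1) modulus stays close to $`1/\sqrt{2}`$, as the bi-maximal pattern requires.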
One may speculate whether such an analogy could be taken as a hint towards an underlying symmetry and its breaking, which are responsible for lepton mass generation and flavor mixing . (2) The $`V_{e3}`$ element, of magnitude $$|V_{e3}|=\frac{2}{\sqrt{6}}\sqrt{\frac{m_e}{m_\mu }},$$ (14) is naturally suppressed in the symmetry breaking scheme outlined above. A similar feature appears in the $`3\times 3`$ quark flavor mixing matrix, i.e., $`|V_{ub}|`$ is the smallest among the nine quark mixing elements . Indeed the smallness of $`V_{e3}`$ provides a necessary condition for the decoupling of solar and atmospheric neutrino oscillations, even though neutrino masses are nearly degenerate. The effect of small but nonvanishing $`V_{e3}`$ can manifest itself in the long-baseline $`\nu _\mu \nu _e`$ and $`\nu _e\nu _\tau `$ transitions, as shown in Ref. . (3) The flavor mixing between the 1st and 2nd lepton families and that between the 2nd and 3rd lepton families are nearly maximal . This property, together with the natural smallness of $`V_{e3}`$, allows a satisfactory interpretation of the observed large mixing in atmospheric and solar neutrino oscillations. We obtain $`\mathrm{sin}^22\theta _{\mathrm{sun}}`$ $`=`$ $`1{\displaystyle \frac{4}{3}}{\displaystyle \frac{m_e}{m_\mu }},`$ $`\mathrm{sin}^22\theta _{\mathrm{atm}}`$ $`=`$ $`{\displaystyle \frac{8}{9}}+{\displaystyle \frac{8}{9}}{\displaystyle \frac{m_\mu }{m_\tau }},`$ (15) to a quite high degree of accuracy. Explicitly $`\mathrm{sin}^22\theta _{\mathrm{sun}}0.99`$ and $`\mathrm{sin}^22\theta _{\mathrm{atm}}0.94`$, favored by current data . It is obvious that the model is fully consistent with the vacuum oscillation solution to the solar neutrino problem and might also be able to incorporate the large-angle MSW solution . 
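The numerical values quoted after Eq. (15) follow directly from the charged lepton mass ratios (standard mass values are assumed here as input; they are not stated in the text):

```python
# Standard charged lepton masses in GeV (assumed input)
me, mmu, mtau = 0.000511, 0.10566, 1.77705

# Mixing factors of Eq. (15)
sin2_2theta_sun = 1 - (4.0 / 3.0) * (me / mmu)
sin2_2theta_atm = (8.0 / 9.0) * (1 + mmu / mtau)
```

The corrections are tiny: the solar factor sits just below unity and the atmospheric one just above the flavor-democracy value $`8/9`$, reproducing the quoted 0.99 and 0.94.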
(4) The nearly degenerate neutrino masses and nearly bi-maximal mixing angles in the present scenario make it possible to accommodate the hot dark matter of the universe without conflicting with the constraint from the neutrinoless double beta decay (denoted by $`(\beta \beta )_{0\nu }`$). The former requirement can easily be fulfilled, if one takes $`m_i12`$ eV (for $`i=1,2,3`$). The effective mass term of the $`(\beta \beta )_{0\nu }`$ decay, in the presence of $`CP`$ violation, is written as $$m=\underset{i=1}{\overset{3}{}}\left(m_iU_{ei}^2\right),$$ (16) where $`U=VP_\nu `$, and $`P_\nu =\mathrm{Diag}\{1,e^{i\varphi _1},e^{i\varphi _2}\}`$ is a diagonal phase matrix of the Majorana nature. Taking $`\varphi _1=\varphi _2=\pi /2`$ for example, we arrive at $$|m|=\frac{2}{\sqrt{3}}\sqrt{\frac{m_e}{m_\mu }}m_i,$$ (17) i.e., $`|m|0.08m_i`$, which stays below $`0.2`$ eV, the latest experimental bound from the $`(\beta \beta )_{0\nu }`$ decay . (5) The strength of $`CP`$ violation in this scheme is given by $$𝒥\frac{1}{3\sqrt{3}}\sqrt{\frac{m_e}{m_\mu }}\left(1+\frac{1}{2}\frac{m_\mu }{m_\tau }\right).$$ (18) Explicitly we have $`𝒥0.014`$. The large magnitude of $`𝒥`$ for lepton mixing is remarkable, as the same quantity for quark mixing is only of order $`10^5`$ . In view of the approximate decoupling of solar and atmospheric neutrino oscillations, the $`CP`$\- and $`T`$-violating asymmetries presented in Eqs. (6) and (7) can be simplified to the following form in a long-baseline neutrino experiment: $$𝒜_{CP}=𝒜_T16𝒥\mathrm{sin}F_{12}\mathrm{sin}^2F_{23}.$$ (19) We see that the maximal magnitude of $`𝒜_{CP}`$ or $`𝒜_T`$ can reach $`16𝒥0.2`$, significant enough to be measured from the asymmetry between $`P(\nu _\mu \nu _e)`$ and $`P(\overline{\nu }_\mu \overline{\nu }_e)`$ or that between $`P(\nu _\mu \nu _e)`$ and $`P(\nu _e\nu _\mu )`$ in the long-baseline neutrino experiments with the condition $`E/L|\mathrm{\Delta }m_{12}^2|`$.
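Both $`𝒥0.014`$ and the $`(\beta \beta )_{0\nu }`$ estimate can be reproduced from Eqs. (17) and (18); the choice $`m_i=2`$ eV below is one illustrative point of the 1–2 eV range mentioned above, not a value fixed by the model.

```python
import math

# Standard charged lepton masses in GeV (assumed input)
me, mmu, mtau = 0.000511, 0.10566, 1.77705

# Rephasing-invariant CP-violation strength, Eq. (18)
J = (1.0 / (3.0 * math.sqrt(3.0))) * math.sqrt(me / mmu) * (1 + 0.5 * mmu / mtau)

# Effective mass of the (beta beta)_{0nu} decay for phi1 = phi2 = pi/2, Eq. (17)
m_i = 2.0                                              # eV, illustrative degenerate mass
m_eff = (2.0 / math.sqrt(3.0)) * math.sqrt(me / mmu) * m_i
```

Even at the top of the 1–2 eV range the effective mass stays below the quoted 0.2 eV bound.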
Of course the aforementioned requirement for the length of the baseline singles out the large-angle MSW solution, whose oscillation parameters are given in Eq. (3), among three possible solutions to the solar neutrino problem. If upcoming neutrino oscillation data turn out to rule out the consistency between our model and the large-angle MSW scenario, then it would be quite difficult to test the model itself from its prediction for large $`CP`$ and $`T`$ asymmetries in the realistic long-baseline experiments. It is at this point worth mentioning that the diagonal non-hermitian perturbation to $`M_l^{(0)}`$ is not the only way to generate large $`CP`$ violation in our model. Instead one may consider the off-diagonal non-hermitian perturbations $`\mathrm{\Delta }\stackrel{~}{M}_l^{(2)}`$ $`=`$ $`{\displaystyle \frac{c_l^{}}{3}}\left(\begin{array}{ccc}0& 0& i\delta _l\\ 0& 0& i\delta _l\\ i\delta _l& i\delta _l& 0\end{array}\right),`$ $`\mathrm{\Delta }\widehat{M}_l^{(2)}`$ $`=`$ $`{\displaystyle \frac{c_l^{}}{3}}\left(\begin{array}{ccc}i\delta _l& 0& i\delta _l\\ 0& i\delta _l& i\delta _l\\ i\delta _l& i\delta _l& 0\end{array}\right);`$ (20) or the off-diagonal hermitian perturbations $`\mathrm{\Delta }𝐌_l^{(2)}`$ $`=`$ $`{\displaystyle \frac{c_l^{}}{3}}\left(\begin{array}{ccc}0& i\delta _l& 0\\ i\delta _l& 0& 0\\ 0& 0& 0\end{array}\right),`$ $`\mathrm{\Delta }\stackrel{~}{𝐌}_l^{(2)}`$ $`=`$ $`{\displaystyle \frac{c_l^{}}{3}}\left(\begin{array}{ccc}0& 0& i\delta _l\\ 0& 0& i\delta _l\\ i\delta _l& i\delta _l& 0\end{array}\right),`$ $`\mathrm{\Delta }\widehat{𝐌}_l^{(2)}`$ $`=`$ $`{\displaystyle \frac{c_l^{}}{3}}\left(\begin{array}{ccc}0& i\delta _l& i\delta _l\\ i\delta _l& 0& i\delta _l\\ i\delta _l& i\delta _l& 0\end{array}\right),`$ (21) for the same purpose.
Note that all six perturbative mass matrices $`\mathrm{\Delta }M_l^{(2)}`$, $`\mathrm{\Delta }\stackrel{~}{M}_l^{(2)}`$, $`\mathrm{\Delta }\widehat{M}_l^{(2)}`$ and $`\mathrm{\Delta }𝐌_l^{(2)}`$, $`\mathrm{\Delta }\stackrel{~}{𝐌}_l^{(2)}`$, $`\mathrm{\Delta }\widehat{𝐌}_l^{(2)}`$ have a common feature: the (1,1) elements of their counterparts in the hierarchical basis all vanish . This feature ensures that the $`CP`$-violating effects resulting from the above complex perturbations are approximately independent of other details of the flavor symmetry breaking and have identical strength to a high degree of accuracy. Indeed it is easy to check that the relevant charged lepton mass matrices, together with the neutrino mass matrix $`M_\nu ^{(2)}=M_\nu ^{(0)}+\mathrm{\Delta }M_\nu ^{(1)}+\mathrm{\Delta }M_\nu ^{(2)}`$, lead to the same flavor mixing pattern $`V`$ as given in Eq. (11) . Hence it is in practice difficult to distinguish one scenario from another. From our point of view, the simplicity of $`M_l^{(2)}`$ and its parallelism with $`M_\nu ^{(2)}`$ might make it technically more natural to derive them from a yet unknown fundamental theory of lepton mixing and $`CP`$ violation. 3 In the scheme of three lepton species I have introduced a simple phenomenological model for lepton mass generation, flavor mixing and $`CP`$ violation. The model starts from the flavor democracy of charged leptons and the mass degeneracy of neutrinos. After the symmetry limits of both mass matrices are explicitly broken, we find that this scenario can naturally give rise to large flavor mixing and large $`CP`$ or $`T`$ violation. The condition for the approximate decoupling of atmospheric and solar neutrino oscillations (i.e., $`|V_{e3}|1`$) is also fulfilled. The lepton mixing pattern, which arises from $`m_em_\mu m_\tau `$ and $`m_1m_2m_3`$, seems to be “anomalously” different from the quark mixing pattern obtained from the fact that $`m_um_cm_t`$ and $`m_dm_sm_b`$.
However, their similarities do exist. To see this point clearly, let me parametrize the quark (Q) and lepton (L) flavor mixing matrices in an instructive way : $`V_\mathrm{Q}`$ $`=`$ $`R_{12}(\theta _\mathrm{u},0)R_{23}(\theta _\mathrm{Q},\varphi _\mathrm{Q})R_{12}^\mathrm{T}(\theta _\mathrm{d},0),`$ $`V_\mathrm{L}`$ $`=`$ $`R_{12}(\theta _l,0)R_{23}(\theta _\mathrm{L},\varphi _\mathrm{L})R_{12}^\mathrm{T}(\theta _\nu ,0),`$ (22) where the complex rotation matrices $`R_{12}`$ and $`R_{23}`$ are defined as $`R_{12}(\theta ,\varphi )`$ $`=`$ $`\left(\begin{array}{ccc}c& s& 0\\ s& c& 0\\ 0& 0& e^{i\varphi }\end{array}\right),`$ $`R_{23}(\theta ,\varphi )`$ $`=`$ $`\left(\begin{array}{ccc}e^{i\varphi }& 0& 0\\ 0& c& s\\ 0& s& c\end{array}\right)`$ (23) with $`c\mathrm{cos}\theta `$ and $`s\mathrm{sin}\theta `$. Note that the rotation sequence of $`V_\mathrm{Q}`$ or $`V_\mathrm{L}`$ is essentially the original Euler sequence with an additional $`CP`$-violating phase. The rotation angle $`\theta _l`$ (or $`\theta _\nu `$) mainly describes the mixing between $`e`$ and $`\mu `$ leptons (or between $`\nu _e`$ and $`\nu _\mu `$ neutrinos), and the rotation angle $`\theta _\mathrm{u}`$ (or $`\theta _\mathrm{d}`$) primarily describes the mixing between $`u`$ and $`c`$ quarks (or between $`d`$ and $`s`$ quarks). The rotation angle $`\theta _\mathrm{Q}`$ (or $`\theta _\mathrm{L}`$) is a combined effect arising from the mixing between the 2nd and 3rd families for quarks (or leptons). The phase parameters $`\varphi _\mathrm{Q}`$ and $`\varphi _\mathrm{L}`$ signal $`CP`$ violation in flavor mixing (for neutrinos of the Majorana type, two additional $`CP`$-violating phases may enter but they are irrelevant for neutrino oscillations). Comparing Eqs. 
(11)–(13) with (22) we immediately arrive at $$\mathrm{tan}\theta _l=\sqrt{\frac{m_e}{m_\mu }},\mathrm{tan}\theta _\nu =\sqrt{\frac{m_1}{m_2}}.$$ (24) In contrast, a variety of quark mass matrices have predicted $$\mathrm{tan}\theta _\mathrm{u}=\sqrt{\frac{m_u}{m_c}},\mathrm{tan}\theta _\mathrm{d}=\sqrt{\frac{m_d}{m_s}}.$$ (25) As one can see, the large mixing angle $`\theta _\nu `$ (i.e., $`\mathrm{tan}\theta _\nu 1`$ for almost degenerate $`m_1`$ and $`m_2`$) is attributed to the near degeneracy of neutrino masses in our flavor symmetry breaking scheme. Explicitly, the three lepton mixing angles take the values $$\theta _l4^{},\theta _\nu 45^{},\theta _\mathrm{L}52^{};$$ (26) and those of quarks take the values $$\theta _\mathrm{u}5^{},\theta _\mathrm{d}11^{},\theta _\mathrm{Q}2^{}.$$ (27) Furthermore, the $`CP`$-violating phases $`\varphi _\mathrm{L}`$ and $`\varphi _\mathrm{Q}`$ are both close to a special value: $$\varphi _\mathrm{Q}\varphi _\mathrm{L}90^{}.$$ (28) Let me emphasize that the possibility $`\varphi _\mathrm{Q}90^{}`$ is favored by a variety of realistic models of quark mass matrices . The result $`\varphi _\mathrm{L}90^{}`$ is a distinctive feature of our lepton mixing scenario, but to verify it in a model-independent way would be extremely difficult, if not impossible, in the future neutrino experiments. Indeed the questions of how feasible and how costly it will be to measure $`CP`$ or $`T`$ violation in neutrino oscillations , remain open. We expect that more data from the Super-Kamiokande and other neutrino experiments could provide stringent tests of the model discussed here. Acknowledgments This talk is based on work done in collaboration with H. Fritzsch. I am indebted to him for his constant encouragement. I would like to thank F.L. Navarria and the organizing committee of San Miniato 1999 for partial financial support. I am grateful to Q.Y. Liu, W.G. Scott and Y.L.
Wu for intensive and interesting discussions during the workshop.
# Four-fold basal plane anisotropy of the nonlocal magnetization of YNi2B2C ## Abstract Studies of single crystal YNi<sub>2</sub>B<sub>2</sub>C have revealed a four-fold anisotropy of the equilibrium magnetization in the square crystallographic basal plane. This $`\pi /2`$ periodicity occurs deep in the superconductive mixed state. In this crystal symmetry, an ordinary superconductive mass anisotropy (as in usual London theory) allows only a constant, isotropic response. In contrast, the experimental results are well described by generalized London theory incorporating non-local electrodynamics, as needed for this clean, intermediate-$`\kappa `$ superconductor. Borocarbide superconductors have received considerable recent attention, due in part to the interaction between magnetism and superconductivity. A rich superconducting phase diagram, including transitions between hexagonal, rhombohedral, and square vortex lattices, has been observed. The existence of vortex lattices with non-hexagonal symmetry has been attributed to nonlocality effects on the superconducting electrodynamics, which arise from the large electronic mean free path of these clean superconductors. Geometrically, a vortex directed along the tetragonal $`\mathrm{c}`$-axis has current contours that are square-like . It has been shown in the non-magnetic borocarbide YNi<sub>2</sub>B<sub>2</sub>C that the deviations from the standard (local) London magnetic field dependence of the equilibrium magnetization, $`M_{eq}`$ ∝ ln$`(H)`$, can be quantitatively accounted for by introducing non-local electrodynamics into the London description . Traditionally, it was widely thought that nonlocality effects should be significant only in materials with a Ginzburg-Landau parameter $`\kappa `$ ≈ 1; however, YNi<sub>2</sub>B<sub>2</sub>C has $`\kappa `$ ≈ 10 – 15. In the local London model of superconducting vortices, the material anisotropy is introduced via a second-rank mass tensor $`m_{ij}`$.
In tetragonal materials such as YNi<sub>2</sub>B<sub>2</sub>C or LuNi<sub>2</sub>B<sub>2</sub>C, the masses in both principal directions in the square basal plane are the same, $`m_a=m_b`$, thus the superconducting properties are isotropic in the a-b plane. In contrast, non-local corrections are expected to introduce a four-fold anisotropy as a function of the magnetic field orientation within the a-b plane. A temperature dependent in-plane anisotropy of the upper critical field $`H_{c2}`$ has been observed in the non-magnetic borocarbide LuNi<sub>2</sub>B<sub>2</sub>C and described within a Ginzburg-Landau framework incorporating non-local effects. However, a direct observation of the in-plane anisotropy deep into the superconducting phase, where the non-local London model applies and unusual vortex lattices are observed, has not been reported until now. In this work we show that, in the superconducting mixed state of YNi<sub>2</sub>B<sub>2</sub>C, the reversible magnetization oscillates with a $`\pi /2`$ periodicity when the applied field is rotated within the a-b plane. The amplitude of the angular oscillation decreases with field, passes through zero and then reverses sign at a field well below $`H_{c2}`$. The results are in good quantitative agreement with the non-local London description introduced by Kogan et al. The 17 mg single crystal of YNi<sub>2</sub>B<sub>2</sub>C investigated in this study is the same as that previously used by Song et al. to explore the magnetic response when the applied field H is parallel to the c-axis. The critical temperature is $`T_c=14.5K`$, defined as the point at which the linearly varying magnetization $`M(T)`$ extrapolates to zero; this ignores a slight ”tail” extending to $`15.6K`$. The crystal is a slab of thickness $`t0.5mm`$, whose shape and size in the basal plane are sketched in Fig. 1. It will be useful to approximate such shape by an ellipse of axis $`L_x`$ and $`L_y`$. 
X-ray diffraction shows that the crystallographic c-axis is normal to the slab, and that the two equivalent (110) axes of the tetragonal structure very approximately coincide with the axes of the ellipse. Measurements were performed in a Quantum Design SQUID magnetometer with a 50 kOe magnet. Two sets of detection coils allow us to measure both the longitudinal (parallel to H) and transverse (perpendicular to H) components of the magnetization vector M, but only the longitudinal component (denoted hereafter as simply the magnetization $`M`$) will be discussed in this work. The crystal was mounted into a previously described, home-made rotating sample holder with rotation axis perpendicular to H. The c-axis was aligned with the rotation axis, so that H could be rotated within the basal plane. Figure 1 shows $`M`$ as a function of the angle $`\phi `$ between H and the a-axis (see sketch in Fig. 1), at $`T=7K`$ and two values of $`H`$. The crystal was initially cooled in zero field, $`H`$ was then applied and the sample was subsequently rotated in steps of $`\mathrm{\Delta }\phi 3.1^{}`$. Each data point was taken at fixed $`\phi `$. We first discuss the low field curve of Fig. 1b. As $`H=30Oe`$ is well below the lower critical field $`H_{c1}`$ for all $`\phi `$, this curve represents the total flux exclusion of the Meissner state. The oscillatory behavior with periodicity $`\pi `$ (two-fold symmetry) originates from geometrical effects. Indeed, a field applied at any orientation within the basal plane can be decomposed into $`H_x=Hcos(\phi +45^{})`$ and $`H_y=Hsin(\phi +45^{})`$. If we approximate the crystal shape by the ellipse, the Meissner response associated with each component is $`4\pi M_i=-H_i/(1-\nu _i)`$, where $`i=x;y`$ and $`\nu _i=t/L_i`$ are the demagnetizing factors, thus $`4\pi M=-H[cos^2(\phi +45^{})/(1-\nu _x)+sin^2(\phi +45^{})/(1-\nu _y)]`$. The best fit to this expression gives $`\nu _x1/4`$ and $`\nu _y1/5`$.
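A minimal sketch of this shape analysis, assuming the standard diamagnetic sign convention $`4\pi M_i=-H_i/(1-\nu _i)`$ and the quoted dimensions $`t=0.5`$ mm, $`L_x=2.0`$ mm, $`L_y=2.5`$ mm:

```python
import math

t, Lx, Ly = 0.5, 2.0, 2.5     # slab thickness and ellipse axes in mm (from the text)
nu_x, nu_y = t / Lx, t / Ly   # demagnetizing factors: 1/4 and 1/5

def meissner_M(H, phi_deg):
    """Low-field Meissner response of the elliptical slab for H in the basal plane,
    using the standard diamagnetic convention 4*pi*M_i = -H_i/(1 - nu_i)."""
    phi = math.radians(phi_deg + 45.0)
    return -H * (math.cos(phi)**2 / (1 - nu_x)
                 + math.sin(phi)**2 / (1 - nu_y)) / (4 * math.pi)

H = 30.0                      # Oe, as in Fig. 1b
M_a = meissner_M(H, -45.0)    # field along the short (x) ellipse axis
M_b = meissner_M(H, 45.0)     # field along the long (y) ellipse axis
```

The response has period $`\pi `$ in $`\phi `$, with the strongest flux exclusion when the field lies along the short ellipse axis, which has the larger demagnetizing factor.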
This corresponds to the ellipse of axes $`L_x2.0mm`$ and $`L_y2.5mm`$ shown in the sketch of fig. 1. We now turn to the high field data of Fig. 1a. The applied field, $`H=45kOe`$, is well above $`H_{c2}`$ 35 kOe at this temperature (which is only weakly $`\phi `$ dependent, see below), thus in this case $`M(\phi )`$ = $`M^{ns}(\phi )`$ is the normal state paramagnetic response. We again observe an oscillatory behavior, but in this case the periodicity is $`\pi /2`$. By combining the information provided by the X-rays with the geometrical effects on the Meissner response, we conclude that the maximum normal state magnetization occurs at the crystallographic orientations $`110`$ and $`\overline{1}10`$, while the minimum corresponds to $`100`$ and $`010`$. No hint of the geometry-originated two-fold symmetry is observed. This is to be expected, as demagnetizing effects vanish in the limit $`|M/H|1`$. Further analysis of $`M^{ns}`$ suggests that it arises from a low concentration, $`0.001`$ molar fraction, of rare earth ions, most likely from impurities in the yttrium starting metal. As shown below, $`M^{ns}`$ is much smaller than the superconducting contribution except close to $`H_{c2}`$. The above procedure cannot be used in the superconducting mixed state, due to the appearance of magnetic hysteresis arising from vortex pinning. The critical current density $`J_c`$ is very small in this crystal. As a result, the magnetic hysteresis $`(M^{}M^{})J_c`$, where $`M^{}`$ and $`M^{}`$ are respectively the magnetizations measured in the field-decreasing and field-increasing branches of an isothermal $`M(H)`$ loop, is small as compared to the equilibrium or reversible magnetization, $`M_{eq}(M^{}+M^{})/2`$. 
In spite of this, the residual hysteresis strongly affects the response obtained by rotating the crystal at fixed $`T`$ and $`H`$, by superimposing a periodicity $`\pi `$ (related to shape effects on the critical state magnetization) that almost completely hides the intrinsic $`\pi /2`$ periodicity of fundamental interest. To solve this difficulty, we performed magnetization loops at $`T=7K`$ at a set of fixed angles and then calculated $`M_{eq}(H)`$ for each $`\phi `$. In all cases we extended the loops up to $`H=50kOe`$, thus we could repeat the measurement of $`M^{ns}`$ in the normal state and compare the data with those obtained by rotating at fixed $`H`$. Due to the absence of hysteresis, both determinations of $`M^{ns}(H,\phi )`$ should coincide. This is indeed the case as seen in Fig. 1a, where the open circles represent the data at $`H=45kOe`$ obtained from the $`M(H)`$ loops. Figure 2 shows $`M_{eq}`$ (obtained from averaging $`M^{}`$ and $`M^{}`$) as a function of $`\phi `$ for several $`H`$. All the data have the same scale, but the curves at different $`H`$ have been vertically shifted to accommodate the whole field range within the plot. For $`H<1.5kOe`$, the irreversibility becomes large enough to introduce a significant uncertainty in the determination of $`M_{eq}`$, consequently those data have been disregarded. It is apparent that a four-fold symmetry exists in the whole field range of the measurements. To quantify the amplitude of the oscillations, we fitted the curves by $`M_{eq}(H,\phi )`$ = $`M_{eq}`$ \+ $`\delta M_{eq}(H)cos(4\phi )`$. The values of $`\delta M_{eq}(H)`$ so obtained are plotted in Figure 3, while the values of $`M_{eq}(H)`$ are shown in the inset. A remarkable fact, clearly visible in figs. 2 and 3, is that $`\delta M_{eq}(H)`$ crosses zero and it reverses sign at some intermediate field ($``$ 12 kOe) well within the superconducting mixed state. 
Another interesting observation is that the amplitude of the oscillations at $`H`$ 1.5 - 2 kOe is as large as that at $`H50kOe`$. The above results show that a $`\pi /2`$ basal plane anisotropy exists both in the normal and superconducting states. It is also clear from Fig. 3 that a change in the behavior of $`\delta M_{eq}(H)`$ takes place at the superconducting transition at $`H_{c2}35kOe`$. This observation, together with the sign reversal and the large amplitudes at low fields, point to the existence of a second source of in-plane anisotropy, in addition to the normal state one, that turns on in the superconducting phase. Prior to analyzing the superconducting basal-plane anisotropy it is necessary to subtract the normal state contribution, which persists within the superconducting phase. To that end we performed rotations at fixed $`H`$, as those shown in Fig. 1a, at several $`T`$ and $`H`$ above $`H_{c2}(T)`$. We found a paramagnetic response that exhibits a four-fold symmetry, with the minimum at $`\phi =0`$ in all cases, i.e., $`M^{ns}(\phi =45^{})>M^{ns}(\phi =0^{})>0`$ for all $`T`$ and $`H`$. We thus have a well defined set of data $`\delta M^{ns}(H,T)`$ which exhibits no sign reversal. The extrapolation is not obvious, however, as $`\delta M^{ns}`$ is not linear in $`H`$. Figure 4 shows all the $`\delta M^{ns}`$ data collected at various temperatures $`5KT16K`$ and $`H50kOe`$, as a function of $`H/T`$. We found that, when plotted in this way, all the data points collapse on a single curve. The dashed line in Fig. 4 is a fit to the $`\delta M^{ns}(H/T)`$ data. The same fit, for the case $`T=7K`$, is also shown as a dashed line in Fig. 3. We can now subtract that curve from the total $`\delta M_{eq}`$ shown in Fig. 3, to isolate the superconducting contribution $`\delta M_{eq}^{sc}`$. 
Note that, as $`\delta M^{ns}`$ is always positive and increases monotonically with $`H`$, both the sign reversal and the non-monotonic behavior immediately below $`H_{c2}`$ exhibited by $`\delta M_{eq}`$ must arise from the $`\delta M_{eq}^{sc}`$ contribution. We now show that the four-fold symmetry of $`M_{eq}^{sc}`$, as well as the field dependence of $`\delta M_{eq}^{sc}`$, can be well described using the non-local modifications to the London electrodynamics introduced by Kogan et al. . According to that model, for $`H_{c1}HH_{c2}`$ $$M_{eq}^{sc}=-M_0\left[ln\left(\frac{H_0}{H}+1\right)-\frac{H_0}{H_0+H}+\zeta \right]$$ (1) Here $`M_0=\mathrm{\Phi }_0/32\pi ^2\lambda ^2`$, the new characteristic field $`H_0=\mathrm{\Phi }_0/4\pi ^2\rho ^2`$ is related to the nonlocality radius $`\rho `$, and $`\zeta =\eta _1ln(H_0/\eta _2H_{c2}+1)`$, where $`\eta _1`$ and $`\eta _2`$ are constants of order unity. Song et al. have shown that the magnetization of this same crystal is very well described by Kogan’s model, when $`H`$ ∥ $`c`$-axis. A fingerprint of the nonlocality effects is the deviation from the $`M_{eq}`$ ∝ ln$`(H)`$ behavior predicted by the local London model. The curvature clearly visible in the inset of Fig. 3 is thus a strong indication that nonlocality also plays a major role when $`H`$ ⊥ $`c`$-axis. It is worth mentioning that, although a quantitative analysis of the curve in the inset of Fig. 3 in terms of Eq. 1 would require removal of the normal state magnetization, its contribution is small and would not significantly modify the curvature seen in the $`M_{eq}`$ vs. ln($`H`$) data. We now apply Eq. 1 to the analysis of our data. In principle, the basal plane anisotropy could be ascribed to the material parameters $`M_0`$, $`H_0`$ and $`H_{c2}`$. However, $`M_0`$ ∝ $`1/\lambda ^2`$ is isotropic within the a-b plane of a tetragonal structure.
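The curvature in $`M_{eq}`$ vs. ln$`(H)`$ can be illustrated from the field-dependent bracket of Eq. (1) alone (the conventional relative minus sign between its two terms is assumed here, since binary operators were lost in the extracted formula). For a pure logarithm the second difference in ln$`(H)`$ vanishes; the nonlocal term bends the curve once $`H`$ approaches $`H_0`$:

```python
import math

H0 = 56.0   # kOe, the value quoted in the text for T = 7 K

def f(H):
    # Field-dependent bracket of Eq. (1), relative minus sign assumed
    return math.log(H0 / H + 1.0) - H0 / (H0 + H)

def curvature_in_logH(H):
    """Second difference of f on a geometric grid in H, i.e. with respect to ln(H);
    this is zero for a pure-logarithm (local London) field dependence."""
    return f(2 * H) + f(H / 2) - 2 * f(H)

low = curvature_in_logH(0.56)    # H << H0: local London regime, nearly a pure log
high = curvature_in_logH(56.0)   # H ~ H0: nonlocality visibly bends M(ln H)
```

The curvature at $`HH_0`$ exceeds the low-field value by an order of magnitude, mirroring the bending seen in the inset of Fig. 3.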
On the other hand, four-fold variations of $`H_{c2}`$ within the basal plane have been observed in LuNi<sub>2</sub>B<sub>2</sub>C and attributed to nonlocality. We then assume that both $`H_0`$ and $`H_{c2}`$ have $`\pi /2`$ periodicity, $`H_0(\phi )=H_0+\delta H_0cos(4\phi )`$ and $`H_{c2}(\phi )=H_{c2}+\delta H_{c2}cos(4\phi )`$. To first order in $`\delta H_0`$ and $`\delta H_{c2}`$ we obtain $$\delta M_{eq}^{sc}=M_0\left[\left(\frac{1}{\left(1+\frac{H}{H_0}\right)^2}-\alpha \right)ϵ_1+\alpha ϵ_2\right]$$ (2) where $`\alpha =\frac{1}{1+\eta _2\frac{H_{c2}}{H_0}};ϵ_1=\frac{\delta H_0}{H_0};ϵ_2=\frac{\delta H_{c2}}{H_{c2}}`$ Experimentally we have determined $`H_{c2}`$ = 35 kOe and $`\delta H_{c2}\approx 0.4`$ kOe (at $`T=7K`$), so we can fix $`ϵ_2`$ = 0.01. We could also attempt to determine $`M_0`$ and $`H_0`$ by fitting our $`M_{eq}`$ data with Eq. 1. However, this is a difficult task that requires the measurement of a large set of temperatures to check the consistency of the results. Instead, we decided to use the results of Song et al. (for $`H\parallel c`$-axis) as good estimates. For $`T`$ = 7 K, we take $`H_0`$ = 56 kOe and $`M_0`$ = 5.2 G. (Here we scaled down $`M_0\propto 1/\lambda _a\lambda _c`$ by the experimental mass anisotropy, $`\gamma \approx 1.15`$, between the c-axis and the ab-plane, which is close to the value $`\gamma \approx 1.1`$ obtained from band structure calculations.) In any case, small variations in any of these parameters will not significantly affect the rest of the analysis. If we also assume $`\eta _2\approx 1`$, we obtain $`\alpha \approx 2/3`$. With these fixed parameters, we fit our $`\delta M_{eq}^{sc}(H)`$ data with Eq. 2, with the single free parameter $`ϵ_1`$. We obtain $`ϵ_1`$ = 0.14. The fitted curve (for $`\delta M_{eq}^{sc}+\delta M^{ns}`$) is shown as a solid line in Fig. 3. The agreement between our data and the model is remarkably good. 
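A quick numerical check of Eq. (2) with the parameter values quoted above ($`M_0`$ = 5.2 G, $`H_0`$ = 56 kOe, $`\alpha \approx 2/3`$, $`ϵ_1`$ = 0.14, $`ϵ_2`$ = 0.01) makes the sign reversal explicit; this is a sketch, not the fitting code used for Fig. 3:

```python
# Evaluate Eq. (2) with the parameters quoted in the text. The bracket
# interpolates between (1 - alpha)*eps1 + alpha*eps2 > 0 at low field and
# -alpha*eps1 + alpha*eps2 < 0 at high field, which is what produces the
# sign reversal of delta M_eq^sc at an intermediate field.

def delta_M_sc(H, M0=5.2, H0=56.0, alpha=2.0 / 3.0, eps1=0.14, eps2=0.01):
    """Four-fold anisotropy amplitude (in G) at field H (in kOe)."""
    return M0 * ((1.0 / (1.0 + H / H0) ** 2 - alpha) * eps1 + alpha * eps2)

print(delta_M_sc(2.0) > 0, delta_M_sc(30.0) < 0)  # low field +, high field -
```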
With a single fitting parameter $`ϵ_1`$, which is field independent, we have been able to account for the nontrivial $`H`$ dependence of $`\delta M_{eq}^{sc}`$, including the sign reversal at intermediate fields. Of course, the fit deviates from the data close to $`H_{c2}`$, where the London model fails. These experimental results show that nonlocality has a profound effect on these clean, intermediate-$`\kappa `$ superconductors, and they underscore the remarkable utility of the generalized London theory. In summary, we have demonstrated a four-fold anisotropy in the square basal plane of clean single crystal YNi<sub>2</sub>B<sub>2</sub>C. This superconducting response is inconsistent with conventional local London theory, but it is well explained by a generalized London model incorporating non-local electrodynamics, with parameters based largely on complementary experiments. These observed effects of nonlocality persist deep into the superconducting state, where complex, evolving vortex lattices occur. \[Note added: while preparing this manuscript, we learned that P. C. Canfield et al. at the Ames Lab. have observed similar oscillations of the basal plane magnetization of LuNi<sub>2</sub>B<sub>2</sub>C.\] We are pleased to acknowledge useful discussions with V.G. Kogan. A.S. is a member of CONICET. Collaboration between UTK and CAB was supported in part by a UTK Faculty Research Fund. Research at the ORNL is supported by the Div. of Materials Sciences, U.S. Dept. of Energy under contract number DE-AC05-96OR22464 with Lockheed Martin Energy Research Corp.
# Enhanced stability of the square lattice of a classical bilayer Wigner crystal ## I Introduction Phase transitions, and in particular the melting transition, are among the most fundamental problems in condensed matter and statistical physics. The most intensively studied model system consists of charged particles with a purely spherically symmetric repulsive inter–particle interaction (see e.g. ). Examples are the electron Wigner solid, colloidal spheres and dusty plasmas. After the discovery of the Wigner crystallisation of electrons above the surface of liquid helium by Grimes and Adams, interest in the melting of low dimensional systems has increased. Recently, layered close-packed crystalline structures and their structural transitions were observed in experiments with dusty rf discharges and with layered ion crystals in ion traps . Motivated by the theoretical work of Nelson, Halperin , and Young , who developed a theory for a two-stage continuous melting of a two dimensional (2D) crystal based on ideas of Berezinskii , Kosterlitz and Thouless , several experimental and theoretical studies were devoted to the melting transition of mainly single layer crystals. In this case the hexagonal lattice is the most energetically favored structure for potentials of the form $`1/r^n`$ . The effect of the dimensionality of the system on the melting transition is another important issue which has been investigated recently. An important subsystem in this field is the bilayer system, consisting of two parallel layers of charged particles with a purely repulsive inter-particle interaction. The interlayer distance is an additional variable with which the effective interparticle interaction can be altered. While the classical single layer can crystallise only in a hexagonal lattice, it was found that the bilayer system exhibits a much richer structural phase diagram. 
Previously, the different types of lattices and structural transitions in a multilayer crystal at zero temperature with parabolic lateral confinement were analysed in . Different classes of lattices of the double-layer crystal were specified in as a function of the interlayer spacing and electron density. A systematic study of the stability and the phonon spectra of these crystal phases was presented in Ref.. The collective modes of the corresponding bilayer liquid state were recently studied in . The extension of the classical bilayer model to the finite vertically coupled classical dot system was presented in Ref. where a rich collection of structural first and second order phase transitions was found. The quantum bilayer system was investigated by several groups . Here we will concentrate on the melting of the classical bilayer crystal and study the influence of the different crystal structures on the melting temperature. The equilibrium states of a bilayer crystal at different temperatures ($`T`$) will be investigated using the Monte Carlo (MC) simulation technique. Preliminary results of this study were published in . The melting of the classical bilayer crystal was studied in Ref. using the Lindemann criterion, where the mean square displacement was calculated within the harmonic approximation. It is well-known that the harmonic approximation can only give a rough estimate of the melting temperature. For example, for the single electron layer, using the Kosterlitz–Thouless–Halperin–Nelson–Young (KTHNY) theory and inserting the Lamé coefficients of the $`T=0`$ lattice one obtains $`\mathrm{\Gamma }=78`$, while inclusion of non-linear effects, using Monte-Carlo simulations , results in $`\mathrm{\Gamma }=125`$, which compares well with the experimental result $`\mathrm{\Gamma }=137\pm 15`$ (see e.g.). This is the motivation to go beyond the harmonic approximation and to investigate the importance of non-linear effects on the melting temperature. 
Using the Monte Carlo simulation technique has the added advantage that information can be obtained about the topology of the defects which are responsible for melting. The present paper is organised as follows. The model system and the numerical approach are given in Sec. II. In Sec. III we construct the solid–liquid phase diagram for Coulomb bilayer crystals. In Sec. IV the influence of screening on melting is studied and we formulate a new criterion for the melting temperature. In Sec. V we show that the new melting criterion, combined with the harmonic approximation, gives accurate results for the melting temperature. The existence of hysteresis during melting of the 2D single layer crystal is considered in Sec. VI. The topology of defects in the hexagonal type crystal is considered in Sec. VII. The gallery of defects for the square bilayer crystal is presented in Sec. VIII. Our results are summarised in Sec. IX. ## II The model system and numerical approach In the present study we limit ourselves to infinitely thin bilayers with equal density, $`n/2`$, of charged particles in the two layers. In the crystal phase the particles are arranged in two parallel layers in the $`(x,y)`$–plane which are a distance $`d`$ apart in the $`z`$–direction. The lattice structure in the top and bottom layers is the same, but the particle sites in opposite layers are staggered (i.e. a close-packed arrangement). A single layer crystal is a limiting case of a bilayer crystal with $`d=0`$ and particle density $`n`$. 
We assume that the particles interact through an isotropic Coulomb or screened repulsive potential $$V(\stackrel{}{r}_i,\stackrel{}{r}_j)=\frac{q^2}{ϵ|\stackrel{}{r}_i-\stackrel{}{r}_j|}\mathrm{exp}(-\kappa |\stackrel{}{r}_i-\stackrel{}{r}_j|),$$ (1) where $`q`$ is the particle charge, $`ϵ`$ is the dielectric constant of the medium the particles are moving in, $`\stackrel{}{r}=(x,y,z)`$ the position of the particle with $`r=|\stackrel{}{r}|`$, and $`1/\kappa `$ is the screening length. The type of lattice symmetry at $`T=0`$ depends on the dimensionless parameter $`\nu =d/a_0`$, where $`a_0=1/\sqrt{\pi n}`$ is the mean interparticle distance. For the classical Coulomb system ($`\kappa =0`$) at $`T\ne 0`$ there are two dimensionless parameters, $`\nu `$ and $`\mathrm{\Gamma }=q^2/ϵa_0k_BT`$, which determine the state of the system. The classical Yukawa system ($`\kappa >0`$) at $`T\ne 0`$ is characterised by three independent dimensionless parameters: $`\nu `$, $`\mathrm{\Gamma }`$ and $`\lambda =\kappa a_0`$. Below we measure the temperature in units of $`T_0=q^2/ϵa_0k_B`$ and the energy in $`E_0=k_BT_0`$. In addition, we will also consider the melting of a single layer crystal with a Lennard-Jones ($`1/r^{12}-1/r^6`$) inter-particle potential and with a $`1/r^{12}`$ repulsive potential as well. In our simulations the particle density $`n`$ is kept fixed while the interlayer distance $`\nu `$ is varied. The initial symmetry of the structure is set by the primitive vectors, the values of which are derived from a calculation of the minimal energy configuration for fixed $`\nu `$ at $`T=0`$ using the Ewald method . In our numerical calculations we took a lattice fragment (ranging from $`N=228`$ to $`N=780`$ particles) and used periodic boundary conditions. 
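The interaction of Eq. (1) can be sketched in reduced units ($`q=ϵ=a_0=1`$); the function names and sample coordinates below are illustrative choices, not taken from the authors' code:

```python
import math

def pair_potential(ri, rj, kappa=0.0):
    """Screened Coulomb (Yukawa) pair interaction, Eq. (1), reduced units."""
    r = math.dist(ri, rj)
    return math.exp(-kappa * r) / r

# Two particles on staggered sites of opposite layers at nu = d/a0 = 0.4:
v_coulomb = pair_potential((0, 0, 0), (1, 0, 0.4))            # kappa = 0
v_yukawa = pair_potential((0, 0, 0), (1, 0, 0.4), kappa=1.0)  # lambda = 1
print(v_yukawa < v_coulomb)  # screening always weakens the interaction
```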
The structure and potential energy of the system at $`T\ne 0`$ are found by the standard Metropolis algorithm, in which at some temperature the next simulation state of the system is obtained by a random displacement of one of the particles. If the new configuration has a smaller energy it is accepted; if the new energy is larger, the configuration is accepted only if $`\delta >r`$, where the probability $`\delta =\mathrm{exp}(-\mathrm{\Delta }E/k_BT)`$ with $`\mathrm{\Delta }E`$ the increment in the energy and $`r`$ a random number between $`0`$ and $`1`$. We allow the system to approach its equilibrium state at some temperature $`T`$ after executing $`(10^4÷5\times 10^5)`$ ‘MC steps’. The potential energy at $`T\ne 0`$ is found by summation over all particles and their periodic images using the Ewald method . ## III Solid–liquid phase diagram In it was found that the bilayer Coulomb crystal at $`T=0`$ exhibits five different lattices as a function of the interlayer distance: $`\nu <0.006`$–hexagonal, $`0.006<\nu <0.262`$–rectangular, $`0.262<\nu <0.621`$–square, $`0.621<\nu <0.732`$–rhombic, and $`\nu >0.732`$–hexagonal. It should be noted that the behaviour of the system near the melting point is essentially non-linear and therefore the melting temperature obtained in Ref. has only qualitative predictive power. Therefore, we revise the melting phase diagram and present a more accurate calculation of it by performing MC simulations of the melting transition. We use different criteria to find the critical melting temperature. The parameter $`L_{}=<u_{}^2>/a_0^2`$, where $`<u_{}^2>`$ is the mean square displacement of the particles from their equilibrium positions, was introduced in to characterise the system order. It is well–known that $`L_{}`$ diverges for a 2D system. We will use a modified parameter $`L`$, as introduced in Ref., which enables us to minimize the system size effect which gives significant contributions to the usual parameter $`L_{}`$. 
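The Metropolis update described above reduces to a one-line acceptance rule; the sketch below (in reduced units, with $`k_BT`$ passed directly) is an illustration, not the authors' implementation:

```python
import math
import random

def metropolis_accept(delta_E, kT, rng=random.random):
    """Metropolis rule for a trial particle displacement: accept always
    if it lowers the energy, else with probability exp(-dE/kT)."""
    if delta_E <= 0.0:
        return True
    return math.exp(-delta_E / kT) > rng()
```

At each MC step one randomly chosen particle is given a random trial displacement, the energy change is evaluated, and this rule decides whether the move is kept.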
For the modified parameter $`L=<u^2>/a_0^2`$, $`<u^2>`$ is defined by the difference in the mean square displacement of neighboring particles from their initial sites $`\stackrel{}{r}_0`$ and can be written as $$<u^2>=\frac{1}{N}\underset{i=1}{\overset{N}{}}\frac{1}{N_{nb}}\underset{j=1}{\overset{N_{nb}}{}}((\stackrel{}{r}_i-\stackrel{}{r}_{i0})-(\stackrel{}{r}_j-\stackrel{}{r}_{j0}))^2,$$ (2) where $`<>`$ means averaging over MC steps, and the index $`j`$ denotes the $`N_{nb}`$ nearest neighbours of particle $`i`$ in the upper or in the bottom layer. For the hexagonal lattice $`N_{nb}=6`$, while for the rectangular, square and rhombic lattices $`N_{nb}=4`$. For sufficiently low temperatures the particles exhibit harmonic oscillations around their $`T=0`$ equilibrium positions, and the mean square oscillation amplitude increases linearly with temperature, which leads to a linear increase of $`L`$ with $`T`$. Figs. 1–3(a) show the modified parameter $`L`$ as a function of the reduced temperature $`T/T_0`$: i) for the hexagonal single layer crystal ($`\nu =0`$) (see Fig. 1(a)), ii) for the bilayer crystal with a square lattice ($`\nu =0.4`$) (see Fig. 2(a)), and iii) for a hexagonal type bilayer crystal ($`\nu =0.8`$) (see Fig. 3(a)). At higher temperatures non-linear effects are important and $`L`$ becomes a nonlinear function of $`T`$. The particle oscillation amplitude increases faster than linear with $`T`$, but the system is not melted, since the particles still display an ordered structure. The rapid increase of $`L`$ with $`T`$ is a manifestation of the anharmonicity of the motion of the particles. Melting occurs when $`L`$ increases very sharply with $`T`$. In our previous study we assumed that melting occurs when $`L\approx 0.1`$, as was obtained from numerical simulations of melting in a single layer crystal . 
From the present study we find that $`L\approx 0.1`$ is not a good criterion for melting, except for the case $`\nu =0`$, while for $`\nu =0.4`$ and $`\nu =0.8`$ the system is still ordered when $`L\approx 0.1`$. Note that we defined $`L=<u^2>/a_0^2`$, where $`a_0`$ is the mean inter-particle distance for a single layer crystal with density $`n`$, for all interlayer distances $`\nu `$. In the case when $`\nu \to \mathrm{\infty }`$ and the inter–layer particle interaction is negligibly small, we obtain $`L\approx 0.2`$ at melting, because now the particle density in each layer is two times smaller (the inter-particle distance is $`\sqrt{2}`$ times larger). As a second independent parameter from which we determine the melting temperature, we calculated the height of the first peak in the pair correlation function as a function of temperature. The pair correlation function, calculated for the particles in one of the layers, is shown in Fig. 4 for an interlayer separation $`\nu =0.4`$ and three different temperatures. The height of the first peak in $`g(r)`$ is plotted as a function of temperature in Figs. 1-3(b) for $`\nu =0,0.4,0.8`$, respectively. The value of the first peak of the pair correlation function decreases with increasing temperature, and exhibits a quick decrease at $`T=T_{mel}`$. This is most pronounced for the case of a single layer crystal (Fig. 1(b)). Even in the liquid phase the pair correlation function still has a peak structure due to the local inter-particle correlation. In Fig. 4 the state of the system just before melting is denoted by the dotted curve and the dashed curve refers to the state after melting. In the crystalline state the potential energy of the system increases almost linearly with temperature and only a weak non-linear $`T`$-dependence is observed. At larger temperature a quick rise in the energy is clearly seen in Figs. 1-3(c), denoting the beginning of melting, and is related to the unbinding of dislocation pairs, which we will discuss below. At melting the potential energy has a very steep rise. 
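The pair correlation function used above can be sketched as a radial pair histogram; this minimal version neglects the periodic images and boundary corrections of the actual calculation:

```python
import math

# Sketch of the in-layer pair correlation function g(r): a pair-distance
# histogram normalised by the ideal-gas ring area (2D). The height of the
# first peak is the quantity tracked in Figs. 1-3(b). Illustrative only.

def pair_correlation(pos, box, dr=0.05, rmax=3.0):
    n = len(pos)
    nbins = int(rmax / dr)
    hist = [0] * nbins
    for i in range(n):
        for j in range(i + 1, n):
            k = int(math.dist(pos[i], pos[j]) / dr)
            if k < nbins:
                hist[k] += 2                     # each pair counted twice
    rho = n / (box * box)                        # areal density
    ring = lambda k: 2 * math.pi * (k + 0.5) * dr * dr
    return [h / (n * rho * ring(k)) for k, h in enumerate(hist)]
```

For a square lattice with unit spacing the first (and highest) peak sits at the nearest-neighbour distance, as expected.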
The heat capacity of a 2D structure is determined by $`C=(d\epsilon /dT)_{V=const}`$, where the total energy $`\epsilon =k_BT+E`$ includes the kinetic and potential energy. The heat capacities of the crystal and the liquid phases near melting are very close to each other in two dimensions, and the first derivative of the energy with respect to $`T`$ taken in the crystal phase and the liquid phase are practically the same. The potential energy decreases with increasing $`\nu `$, which is a consequence of the decreasing inter-layer Coulomb interaction. Note that the potential energy difference between the liquid and solid phases near melting for the square bilayer crystal at $`\nu =0.4`$ is of size $`\delta _e=0.71\times 10^{-2}k_BT_0`$, which is about a factor 2 larger than for a hexagonal lattice, i.e. at $`\nu =0`$, $`\delta _e=0.39\times 10^{-2}k_BT_0`$, and at $`\nu =0.8`$, $`\delta _e=0.31\times 10^{-2}k_BT_0`$. Moreover, as follows from the temperature behaviour of the parameter $`L`$, the peak of the pair correlation function and the potential energy, the square lattice turns out to exhibit a substantially higher melting temperature, and consequently is more stable against thermal fluctuations, than the hexagonal one. The enhanced stability of the square lattice can be understood from an analysis of the defects which are formed during melting and which will be discussed in Secs. VII and VIII. In order to determine the type of melting transition and the melting point, the bond–orientational and translational correlation functions are calculated, which sometimes can also be obtained experimentally . But our finite fragment of 780 particles (with periodic boundary conditions) is too small for a reliable analysis of the asymptotic decay of the density correlation functions. Therefore, we calculate the bond–angular order factor $`G_\theta `$ which originally was introduced by Halperin and Nelson . 
For each layer ($`k`$) we define the bond-angular order factor $$G_\theta ^k=\frac{2}{N}\left|\underset{j=1}{\overset{N/2}{}}\frac{1}{N_{nb}}\underset{n=1}{\overset{N_{nb}}{}}\mathrm{exp}(iN_{nb}\theta _{j,n})\right|,$$ (3) and the translational order factor $$G_{tr}^k=\frac{2}{N}\left|\underset{j=1}{\overset{N/2}{}}\mathrm{exp}(i\stackrel{}{G}(\stackrel{}{r}_i-\stackrel{}{r}_j))\right|,$$ (4) where $`\theta _{j,n}`$ is the angle between some fixed axis and the vector which connects the $`j`$th particle and its nearest $`n`$th neighbour, and $`\stackrel{}{G}`$ is a reciprocal–lattice vector. The latter we took equal to the smallest reciprocal lattice vector. The total bond-angular order factor of the bilayer crystal is defined as $`G_\theta =(G_\theta ^1+G_\theta ^2)/2`$ and similarly for $`G_{tr}`$. These order factors are shown in Figs. 1-3(d) as a function of temperature. For low temperatures the factors $`G_\theta `$ (solid circles) and $`G_{tr}`$ (open circles) decrease linearly with $`T`$, and when $`G_\theta `$ reaches $`0.45`$, both order factors drop quickly to zero at the same temperature, as seen in Figs. 1-3(d). From the behaviour of the order factors we can derive the temperature at which order is lost in the system. Our numerical results show that the bond-angular order factor: 1) decreases linearly with increasing temperature, except very close to the melting temperature, where it decreases faster, and 2) drops to zero just after it reaches the value of 0.45. In the phase diagram of Fig. 5 we show the melting temperature as a function of $`\nu `$, which is the interlayer distance for fixed particle density. All criteria mentioned above gave the same temperature $`T_{mel}`$. For $`\nu =0`$ we obtained the critical value $`\mathrm{\Gamma }=132`$, resulting in $`T/T_0=0.0076`$. This critical value was first measured in Ref. and found to be $`137\pm 15`$. Later experiments and simulations gave critical $`\mathrm{\Gamma }`$ values within this range. As seen in Fig. 
5 the hexagonal (I and V), the rectangular (II) and rhombic (IV) lattices melt at almost the same temperature. This implies that the inter-layer correlation does not strongly influence the melting temperature, although it determines which lattice structure is stable and has the lowest energy. Note, however, that the melting temperatures for phases IV and V are slightly smaller than the corresponding temperatures for phases I and II in the given $`\nu `$ range. We checked that in the limit $`\nu \to \mathrm{\infty }`$, $`T_{mel}/T_0\to 0.0076/\sqrt{2}\approx 0.00537`$, which is reached for $`\nu =2÷3`$. The general shape of the phase diagram agrees with the one obtained earlier using the Lindemann criterion, where the mean square displacement was calculated within the harmonic approximation. But there are several important differences: 1) the absolute value of the melting temperature was underestimated in Ref. by about a factor of $`2`$; 2) at the second-order structural phase transition points the melting temperature became zero, which was a consequence of the softening of a phonon mode at $`T=0`$. With the present Monte-Carlo simulations we find $`T_{mel}\ne 0`$, which is due to the importance of non-linear effects; 3) in Ref. the maximum melting temperature in phase II was larger than the one in phase III (square lattice), which is opposite to what is found in the present calculation; and 4) the melting temperature in phase IV was smaller than in phase V for $`\nu <1`$, which is opposite to the present results. For the square bilayer crystal (III) the melting temperature increases with rising $`\nu `$, and only for $`\nu >0.4`$ did we find that $`T_{mel}`$ starts to decrease with increasing $`\nu `$. It is surprising that the square lattice bilayer crystal has the maximal melting temperature $`T_{mel}=0.01078T_0`$, which exceeds the critical temperature of the single layer crystal $`T_{mel}=0.0076T_0`$, where the particle density is two times larger. 
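The bond-angular order factor of Eq. (3), used for all the melting-temperature determinations above, can be sketched for one layer as follows (a flat position list with precomputed neighbour lists is an illustrative data layout, not the authors' code):

```python
import cmath
import math

def bond_order(pos, neighbors, n_nb):
    """Bond-angular order factor of Eq. (3) for the particles of one
    layer: 1 for a perfect lattice, decaying as orientational order is
    lost. neighbors[i] lists the n_nb nearest neighbours of particle i."""
    acc = 0j
    for i, nbs in enumerate(neighbors):
        s = 0j
        for j in nbs:
            theta = math.atan2(pos[j][1] - pos[i][1], pos[j][0] - pos[i][0])
            s += cmath.exp(1j * n_nb * theta)  # exp(i * N_nb * theta_jn)
        acc += s / len(nbs)
    return abs(acc) / len(neighbors)
```

For a square lattice ($`N_{nb}=4`$) the four bond angles differ by $`\pi /2`$, so every term $`\mathrm{exp}(i4\theta )`$ is identical and the factor equals 1; distorting a bond lowers it.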
The most stable square lattice bilayer occurs at $`\nu =0.4`$, which results in the critical coupling parameter $`\mathrm{\Gamma }=187`$. In the present paper we will not discuss the temperature-induced structural phase transition, but limit ourselves to the melting transition. We denote in Fig. 5 the phase boundaries between the different crystal phases by vertical dotted lines. The analysis of the stability of the classical bilayer crystal was carried out in Ref. using the harmonic approximation, and it was found that near the structural phase transition the melting temperature was strongly reduced due to the softening of a phonon mode, which may lead to a re–entrant behaviour. No such behaviour was found in the present Monte-Carlo simulation. Near the structural phase transition we found deformed lattice structures with a long wavelength deformation. At present we cannot draw any definite conclusions from it and we need to simulate larger crystal fragments during more Monte-Carlo steps. But this observation does not influence our conclusions concerning the melting behaviour. ## IV Melting of bilayer crystal with screened inter–particle interaction and formulation of a new melting criterion. The symmetry of the lattice of the bilayer crystal at $`T=0`$ might depend not only on the interlayer distance, but also on the type of the inter-particle interaction. The phase diagram corresponding to the lowest energy lattice structures for a bilayer crystal at $`T=0`$ with screened Coulomb inter-particle interaction is shown in Fig. 6 as a function of the screening parameter $`\lambda `$. Notice that the phase boundaries depend only weakly on the screening parameter. On the other hand, the melting temperature is reduced considerably by screening, as is apparent from Fig. 5 for the cases $`\lambda =1`$ (solid circles) and $`\lambda =3`$ (open triangles). The melting temperature of a single layer crystal with Coulomb (i.e. 
$`\lambda =0`$) interaction is $`T_{mel}=0.0076T_0`$; with a screened one it is $`T_{mel}=0.0066T_0`$ for $`\lambda =1`$, and for $`\lambda =3`$ it is $`T_{mel}=0.0035T_0`$. Note that the bilayer crystal with screened inter–particle interaction has the same qualitative melting curve as the Coulomb bilayer crystal (see Fig. 5). For the case of a screened Coulomb interparticle interaction ($`\lambda =1`$) the change of the potential energy with temperature, referred to the $`T=0`$ result, is shown in Fig. 7(a) for two different inter-layer distances. Notice that the potential energy initially increases linearly with temperature but then, near the melting temperature, it rises quickly within a very narrow temperature range. It is apparent that the square lattice bilayer crystal ($`\nu =0.4`$) has an enhanced melting temperature as compared to the hexagonal lattice (i.e. $`\nu =0`$). The system loses order just after the bond–angular order factor reaches the value $`0.45`$, as is apparent from Fig. 7(b). Also for the screened Coulomb interaction the square bilayer lattice exhibits an enhanced melting temperature. From the present numerical results for $`G_\theta `$ for a screened Coulomb interaction, and from the previous section for a pure Coulomb interaction for single and bilayer lattices, we formulate a new criterion for melting which we believe is universal: melting occurs when the bond-angular correlation factor becomes $`G_\theta \approx 0.45`$. We also checked the validity of this criterion for a single layer crystal with a Lennard-Jones $`V=1/r^{12}-1/r^6`$ and a repulsive $`V=1/r^{12}`$ interaction potential. The results for the translational and the bond–angular factors for single and bilayer systems and for different inter-particle interactions are shown in Fig. 8 as a function of the ratio $`T/T_{mel}`$. These results confirm the universality of the proposed criterion. 
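Applying the criterion to simulation output is then a simple threshold crossing on the nearly linear $`G_\theta (T)`$ data; the sketch below locates the crossing by linear interpolation between the bracketing temperature points (illustrative, assuming $`G_\theta `$ sampled on a temperature grid):

```python
def melting_temperature(T, G, G_c=0.45):
    """Locate T_mel where G_theta(T) first drops through the threshold
    G_c, interpolating linearly between the two bracketing samples."""
    for (t1, g1), (t2, g2) in zip(zip(T, G), zip(T[1:], G[1:])):
        if g1 >= G_c > g2:
            return t1 + (g1 - G_c) * (t2 - t1) / (g1 - g2)
    return None  # no crossing found in the sampled range
```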
It is valid for single and bilayer systems and we believe it is also independent of the functional form of the inter-particle interaction. ## V Harmonic approximation Given the fact that the bond-angular order factor decreases almost linearly with increasing temperature, and melting occurs when the bond–angular order factor equals $`0.45`$, we will calculate the melting temperature within the harmonic approximation. Therefore, we reformulate the definition of the bond-angular order factor within the harmonic approximation in terms of eigenfrequencies and eigenvectors of the $`T=0`$ phonon spectrum. Previously such a method was used in Ref. in the calculation of the modified Lindemann parameter for a finite cluster and in Ref. for the bilayer system. In the present paper we apply the same algorithm to the calculation of the bond-angular order parameter. We consider a finite crystal fragment with periodic boundary conditions and diagonalize numerically the corresponding Hessian matrix. The theoretical background can be shortly described as follows. Any thermodynamically averaged observable which is a functional of the particle coordinates $`\stackrel{}{r}_i`$ can be written as $$G=∫G(\stackrel{}{R})\mathrm{exp}(-U(\stackrel{}{R})/T)𝑑\stackrel{}{R},$$ where $`\stackrel{}{R}=(\stackrel{}{r}_1,\stackrel{}{r}_2,\mathrm{},\stackrel{}{r}_N)`$ is a multidimensional vector describing the system coordinates and $`U`$ is the potential energy. 
Within a standard linear approach we expand both $`G(\stackrel{}{R})`$ and $`U(\stackrel{}{R})`$ near the equilibrium state $`\stackrel{}{R}=\stackrel{}{R}_0+\stackrel{}{\xi }`$, where $`\partial U/\partial \stackrel{}{R}|_{\stackrel{}{R}=\stackrel{}{R}_0}=0`$, and obtain $$G=∫\frac{1}{2}\frac{\partial ^2G}{\partial \xi _i\partial \xi _j}\xi _i\xi _j\mathrm{exp}\left(-\frac{1}{2T}\frac{\partial ^2U}{\partial \xi _i\partial \xi _j}\xi _i\xi _j\right)𝑑\stackrel{}{\xi }.$$ The multidimensional integration can be performed after diagonalisation of the Hessian matrix, which has the elements $$A_{ij}=\frac{\partial ^2U}{\partial \xi _i\partial \xi _j},$$ and after introduction of the normal mode coordinates. Thus within the harmonic approach the bond–angular order factor is given by the following expression $$G_\theta =1-\frac{T}{2N^2}\underset{n=1}{\overset{N}{}}\underset{k=1}{\overset{N}{}}\underset{i=4}{\overset{2N}{}}\left((\stackrel{}{r}_n-\stackrel{}{r}_{n1})\times (\stackrel{}{A}_{n,i}-\stackrel{}{A}_{n1,i})-(\stackrel{}{r}_k-\stackrel{}{r}_{k1})\times (\stackrel{}{A}_{k,i}-\stackrel{}{A}_{k1,i})\right)^2/\omega _i^2,$$ (5) where $`\omega _i`$, $`\stackrel{}{A}_i`$ are the eigenfrequencies and the eigenvectors of the Hessian matrix formed by the second derivative of the potential energy; the indices $`n1`$, $`k1`$ refer to the nearest neighbours of the particles $`n`$, $`k`$, respectively, and the summation is performed over all particles. The first three eigenmodes have $`\omega _i=0`$ and correspond to the center-of-mass motion and rotation of the system as a whole, which we removed from the summation in Eq. (5). The melting temperature is then obtained by taking $`G_\theta `$ equal to the value $`0.45`$. The obtained results are shown by the solid curves in Fig. 9, for the square lattice (Fig. 9(a)) as a function of the interlayer distance and for the hexagonal lattice ($`\nu =0`$) (Fig. 9(b)) as a function of the screening length. 
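The machinery described here — build the Hessian of the potential energy, diagonalise it, discard the zero modes, and feed the $`\omega _i`$ and $`\stackrel{}{A}_i`$ into Eq. (5) — can be sketched on a toy system (numpy is assumed; a finite-difference Hessian of a two-mass chain stands in for the Ewald-summed crystal energy):

```python
import numpy as np

def hessian(U, x0, h=1e-5):
    """Second derivatives A_ij of the potential energy U at the
    equilibrium configuration x0, by central finite differences."""
    n = len(x0)
    A = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            xpp = x0.copy(); xpp[i] += h; xpp[j] += h
            xpm = x0.copy(); xpm[i] += h; xpm[j] -= h
            xmp = x0.copy(); xmp[i] -= h; xmp[j] += h
            xmm = x0.copy(); xmm[i] -= h; xmm[j] -= h
            A[i, j] = (U(xpp) - U(xpm) - U(xmp) + U(xmm)) / (4 * h * h)
    return A

# Toy stand-in for the crystal: two unit masses joined by a unit spring.
U = lambda x: 0.5 * (x[1] - x[0]) ** 2           # translation-invariant
w2, modes = np.linalg.eigh(hessian(U, np.zeros(2)))
# Eigenvalues are omega_i^2: one zero mode (centre-of-mass translation,
# dropped from the sum in Eq. (5)) and one internal mode with omega^2 = 2.
```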
Thus, using our new criterion together with the harmonic approximation reproduces $`T_{mel}`$ rather well and agrees with our MC calculations within $`10\%`$ for $`0.35<\nu <0.55`$ in the case of the square lattice. Near the structural transition the harmonic approximation underestimates the melting temperature due to the softening of the $`T=0`$ phonon modes (see Ref.). ## VI Is there hysteresis at melting? One of the most interesting questions in the field of phase transitions is whether or not there is a hysteresis behaviour at melting and freezing of the crystal structure. Even if such a hysteresis is found, it can be an artefact of the numerical simulation, arising from the fact that the equilibrium state of the system has not been reached. To shed light on this problem we studied more carefully the melting of the single layer crystal in the temperature range where a rapid increase of the potential energy was found, $`T_1<T<T_3`$ (see Fig. 1(c)). We took very small temperature increments of $`0.001T_{mel}`$. At the temperature $`T_1=0.007560T_0`$, which is just below the melting point, during numerical annealing of the system which lasts for $`5\times 10^5N`$ MC steps, the potential energy practically remains the same and changes by only $`3\%`$ of the total energy jump $`\delta _e`$. Note that the Monte-Carlo ‘time’ step is about a factor 10 larger than that in typical molecular dynamics calculations. At the melting temperature $`T_3=0.007565T_0`$ the energy rises rapidly during $`10^4`$ MC steps, and then the system remains in the liquid state during $`5\times 10^5`$ MC steps. We also found such a state of the system at the intermediate temperature $`T_2=0.007562T_0`$, when the system switches from solid to liquid and back as a function of the simulation time. The system exhibits a network of grain boundary defects which makes it possible for the system to get back to a defect-free structure at the same temperature (see Fig. 10). 
Such a type of equilibrium behaviour of the system at the critical point was found earlier by Morf . The potential energy of the system at this temperature and the order factors $`G_\theta `$, $`G_{tr}`$ are shown in Fig. 11 as a function of the ‘MC time’. As a function of time the system switches between the solid and liquid states, where it loses and regains translational and orientational order. The existence of this state shows that there is a temperature range (at least for a crystalline fragment of 448 particles) in which the crystalline and liquid states have equal probability to occur and, consequently, no hysteresis occurs in the Coulomb single layer crystal. It should be noted that the potential energy changes very rapidly, in an extremely narrow range of temperatures. Whether this rapid rise is continuous or discontinuous is not yet clear. ## VII Defects in the hexagonal lattice. In this section we consider the topology of the defects which are created during melting of a single layer crystal. At low temperature the probability of defect creation is small and the lattice remains perfect. With rising $`T`$ the energy increases and the system can overcome some potential barrier, and we found that during the MC simulation the system passes from one metastable state to another. The different metastable states differ by the appearance of isomers in the crystal structure, which appear with different probabilities. In order to find these isomers in the equilibrium state of our system we choose, during our MC simulation, a number of instant particle configurations. Then we quickly freeze each instant particle configuration, which enables us to exclude the ‘visual’ oscillation effect, i.e. the displacements of the particles from their lattice configurations due to thermal fluctuations. After freezing each configuration we find the topology and energy of the isomer and the bond-angular and the translational order factors of this crystal structure. 
The isomers at $`T_1=0.00756T_0`$, which is a temperature just before melting, at $`T_3=0.00765T_0`$, which is just above melting, and at the intermediate temperature $`T_2=0.007562T_0`$ (see Fig. 1(c)), were obtained. Each point in Fig. 12(a) represents one configuration containing an isomer in a single layer ($`\nu =0`$). Similar results are shown in Fig. 12(b) for the square lattice bilayer system ($`\nu =0.4`$). With rising temperature, point defects such as a ‘centered vacancy’ and a ‘centered interstitial’ are created first, at $`T\simeq T_1=0.00756T_0`$. These calculated defects, as obtained by freezing the instantaneous particle configuration to zero temperature, are shown in Fig. 13(a,b). Point defects appear in pairs in our MC calculations, which is a consequence of the periodic boundary conditions. The total energy of an unbound pair consisting of a ‘centered interstitial’ and a ‘centered vacancy’ is $`E=0.29k_BT_0`$. We checked the accuracy of this value on fragments of 512 and 780 particles which contain only one pair of ‘centered interstitial’ and ‘centered vacancy’. In both cases the minimum energy of a pair of uncorrelated point defects was the same within $`1\%`$. The quartet of bound disclinations (two sevenfold and two fivefold) also appears at this temperature (see Fig. 5(a) of Ref.) and has an energy $`E=0.285k_BT_0`$. The point defects and disclinations bound into a quartet have only a negligible effect on $`G_\theta `$ and $`G_{tr}`$ (see Fig. 12(a), group 1). The order factors of the ‘cold’ configurations (frozen to $`T=0`$) containing a ‘centered interstitial’ and a ‘centered vacancy’ equal $`G_\theta =0.8÷0.9`$ and $`G_{tr}=0.85÷0.95`$ (see Fig. 12(a)). It should be noted that in spite of prolonged annealing of the system during $`5\times 10^5`$ MC steps at the temperature $`T_1=0.007560T_0`$, which is just below melting, we did not find more complex isomers than point defects and quartets of disclinations.
At the temperature $`T_3=0.007565T_0`$, which is the melting temperature (see Fig. 1(c)), we find pairs of disclinations which are created by the dissociation of disclinations which were bound into a quartet (see Fig. 5(b) of Ref.). Pairs of disclinations can be found in an energy range of $`(0.216÷0.285)k_BT_0`$ (group 2 in Fig. 12(a)). These defects destroy the translational order of the system, but conserve the bond–angular order. The energy range of $`(0.29÷0.62)k_BT_0`$ (group 2) refers to multiple disclination pairs. Within this range of energy the bond–angular order factor remains larger than the translational one. For the energy range $`(0.62÷1)k_BT_0`$ (group 3) there are pairs of disclinations aggregating into chains, which are nothing else than grain boundaries (see Fig. 10), and which have the effect of rotating one part of the crystal with respect to another. In this case both order factors become very small. These defects occur in the system at the intermediate temperature $`T=T_2`$, when the system exhibits the remarkable property of switching between a quasi–liquid state and a crystalline state during our numerical simulation. These defects change the energy substantially, but allow the system to come back to a quasi-ordered state (see Fig. 11). At temperatures $`T>0.008T_0`$ both order factors become small and the system makes a transition to an isotropic fluid. ## VIII Topology of defects in the square bilayer crystal. In order to understand why the square lattice bilayer crystal has a considerably larger melting temperature than, e.g., the hexagonal lattice, we investigated the various isomers which are created in the temperature interval $`T_1<T<T_2`$ (see Fig. 2(c) for the location of the temperatures). For bilayer crystals the crystal structure and the topology of the defects are viewed as being composed of the top and the bottom staggered layers. Here we took as an example $`\nu =0.4`$, which has the largest melting temperature, i.e.
$`T_{mel}=0.01078T_0`$. Note that the energy of the defects which occur in the square lattice depends on the interlayer distance. The scenario of defect formation is the same as in the case of a single layer crystal. First, for $`T\simeq T_1=0.01076T_0`$, point defects are formed. These are the ‘twisted bond’, which is shown in Fig. 14(a) and has the energy $`E=0.23k_BT_0`$, and a ‘twisted triangle’ and a ‘twisted square’, which are shown in Fig. 14(b,c) and have an energy $`E=0.24k_BT_0`$ and $`E=0.26k_BT_0`$, respectively (group 1 in Fig. 12(b)). Each of these defects can appear separately. The next energy group includes the ‘vacancy’ and the ‘interstitial’ point defects which appear in pairs, see Fig. 14(d,e) (group 1). The energy of a pair composed of a ‘vacancy’ and an ‘interstitial’ is $`E=0.316k_BT_0`$. These defects are followed by the formation of pairs of extended dislocations (see Fig. 5(e) of Ref.). As seen from Fig. 12(b), these defects do not change substantially either the bond–angular or the translational order factors. At $`T=T_2=0.01078T_0`$ the picture of the defect structure changes drastically. Uncorrelated extended dislocations with non-zero Burgers vector and unbound disclination pairs (see Fig. 5(e) of Ref.) are formed, which cause a substantial decrease of the translational order. These isomers refer to the next energy range, up to $`E\simeq 1.4k_BT_0`$ (group 2). Then, with increasing density of dislocations and the appearance of single disclinations, the system loses order and makes a transition to an isotropic fluid (group 3 in Fig. 12(b)). Fig. 12 clearly illustrates that the energy of the different defects in the two lattice structures is substantially different. The defects which are able to destroy the translational and orientational order in the square lattice have a larger energy than the corresponding ones in the single layer. Even point defects in the square lattice crystal have a higher energy.
Thus, the square type bilayer crystal requires larger energies in order to create the defects which are responsible for the loss of the bond-orientational and the translational order. ## IX Conclusion Monte-Carlo simulations of the melting transition of single and bilayer crystals were performed. The solid–liquid phase diagram for the five lattice structures which are stable in the bilayer system was constructed. We found an unexpectedly large melting temperature for the square lattice. A comparison of the topology of the defects occurring in a hexagonal single layer crystal with those in the square bilayer crystal clarifies the enhanced stability of the square lattice. In the case of the square lattice, all defects have a larger energy and consequently a larger thermal energy is required to create them. An analysis of the bond-angular and translational order factors of a 2D crystal for the Coulomb, screened Coulomb, Lennard-Jones and $`1/r^{12}`$ repulsive inter–particle interaction potentials allowed us to propose a new universal criterion for the melting temperature: melting occurs when the bond-angular correlation factor attains the value $`G_\theta =0.45`$. Using this criterion we showed that the melting temperature, $`T_{mel}`$, can be obtained with sufficient accuracy within the harmonic approximation. We also showed that the potential energy of the system changes substantially within a very narrow range of temperatures around the melting temperature. We found the equilibrium state of the system near the melting point, where the system alternates between the crystalline and the liquid state. The system is able to switch from the liquid to the solid state and back during the simulation time. Such a behaviour indicates that we reached the equilibrium state of the melting transition and that hysteresis is absent for the melting of the Coulomb crystal, at least for our finite size (448 particles) system with periodic boundary conditions.
## X Acknowledgments This work is supported by the Flemish Science Foundation (FWO-Vl) and the Russian Foundation for Fundamental Investigation 96–023–9134a. One of us (FMP) is a Research Director with FWO-Vl. We acknowledge discussions with G. Goldoni in the initial stage of this work.
FIGURES
FIG. 1. The relative mean square displacement of the particles (a), the height of the first peak of the pair correlation function (b), the potential energy (c) and the bond-angular (solid circles) and the translational (open circles) order factors (d) as a function of temperature for the interlayer distance $`\nu =0`$.
FIG. 2. The same as Fig. 1, but now for the interlayer distance $`\nu =0.4`$, where the system forms a square lattice.
FIG. 3. The same as Fig. 1, but now for the interlayer distance $`\nu =0.8`$, where the system is in a staggered triangle lattice.
FIG. 4. The pair correlation function of the square lattice bilayer crystal ($`\nu =0.4`$) at $`T=0.0005T_0`$ (solid curve), $`T=0.0095T_0`$ (dotted curve) just before melting, and $`T=0.011T_0`$ (dashed curve) after melting, where the system is in the liquid state.
FIG. 5. The phase diagram of the bilayer Coulomb crystal for different screening lengths $`\lambda =0`$ (open squares), $`\lambda =1`$ (solid circles), and $`\lambda =3`$ (open triangles). The curves are guides to the eye. The crystal structures are shown in the inserts, where open (solid) symbols are for the particles in the top (bottom) layer.
FIG. 6. The structural phase diagram of the bilayer screened Coulomb crystal at $`T=0`$ as a function of the screening length $`\lambda `$.
FIG. 7. The potential energy $`(E(T)-E(T=0))/k_BT_0`$ (a) and the bond–angular order factor (b) as a function of temperature for the interlayer distances $`\nu =0`$ (solid squares) and $`\nu =0.4`$ (open triangles), for the screened Coulomb inter-particle potential with screening length $`\lambda =1`$.
FIG. 8. The translational (a) and the bond-angular order factors (b) as a function of temperature for the interlayer distances $`\nu =0`$ ($`\lambda =1`$–solid squares; $`\lambda =3`$–open circles) and $`\nu =0.4`$ ($`\lambda =1`$–solid triangles; $`\lambda =3`$–open triangles), for the Lennard-Jones potential (solid rhombi) and for the repulsive potential $`1/r^{12}`$ (open rhombi).
FIG. 9. The melting temperature in the harmonic approximation (solid curves) and from MC simulations (symbols): (a) for the square bilayer crystal as a function of the interlayer distance for screening lengths $`\lambda =0`$ (circles), $`\lambda =1`$ (squares), $`\lambda =3`$ (triangles), and (b) for the hexagonal single layer crystal as a function of the screening length.
FIG. 10. Chain of dislocations (grain boundary): (a) in the vertical direction, and (b) in the horizontal direction in a single layer hexagonal lattice.
FIG. 11. The bond-angular (squares) and the translational (open circles) order factors (a) and $`(E(T)-E(T=0))/k_BT_0`$ (b) as a function of ‘MC time’ steps.
FIG. 12. The bond-angular (squares) and the translational (circles) order factors of the different defects in (a) a single layer crystal, $`\nu =0`$, and (b) in the square lattice bilayer system for $`\nu =0.4`$.
FIG. 13. Defects in a single layer crystal: (a) ‘centered vacancy’ and (b) ‘centered interstitial’.
FIG. 14. The point defects: (a) ‘twisted bond’, (b) ‘twisted triangle’, (c) ‘twisted square’, (d) ‘vacancy’, and (e) ‘interstitial’ in the square lattice bilayer crystal.
# Power densities for two-step gamma-ray transitions from isomeric states ## Abstract We have calculated the incident photon power density $`P_2`$ for which the two-step induced emission rate from an isomeric nucleus becomes equal to the natural isomeric decay rate. We have analyzed two-step transitions for isomeric nuclei with a half-life greater than 10 min, for which there is an intermediate state of known energy, spin and half-life, for which the intermediate state is connected by a known gamma-ray transition to the isomeric state and to at least one other intermediate state, and for which the relative intensities of the transitions to lower states are known. For the isomeric nucleus <sup>166m</sup>Ho, which has a 1200 y isomeric state at 5.98 keV, we have found a value of $`P_2=6.3\times 10^7`$ W cm<sup>-2</sup>, the intermediate state being the 263.8 keV level. We have found power densities $`P_2`$ of the order of $`10^{10}`$ W cm<sup>-2</sup> for several other isomeric nuclei. PACS numbers: 23.20.Lv, 23.20.Nx, 25.20.Dc, 42.55.Vc The induced deexcitation of an isomeric nucleus is a two-step process in which the nucleus, initially in the isomeric state $`|i`$, first absorbs a photon of energy $`E_{ni}`$ to reach a higher intermediate state $`|n`$, then the nucleus makes a transition to a lower state $`|l`$ by the emission of a gamma-ray photon of energy $`E_{nl}`$ or by internal conversion. In a previous work it was required that the cascade originating on the state $`|n`$ should contain a gamma-ray transition of energy $`E_\gamma >2E_{ni}`$, the latter relation representing a condition of upconversion of the energy of the incident photons, and it was found that in favorable cases the two-step induced emission rates become equal to the natural isomeric decay rates for incident power densities of the order of $`10^{10}`$ W cm<sup>-2</sup>. The Weisskopf estimates were used in cases where tabulated half-lives or partial widths were not available.
In this report we have analyzed, regardless of the upconversion condition $`E_\gamma >2E_{ni}`$, the two-step transitions for isomeric nuclei with a half-life greater than 10 min, for which there is an intermediate state $`|n`$ of known energy, spin and half-life, for which the intermediate state $`|n`$ is connected by a known gamma-ray transition to the isomeric state and to a lower state $`|l`$, and for which the relative intensities of the transitions from the intermediate state $`|n`$ are known. We have calculated the incident power density $`P_2`$ for which the two-step induced emission rate becomes equal to the natural isomeric decay rate. We have found a value of $`P_2=6.3\times 10^7`$ W cm<sup>-2</sup> for <sup>166m</sup>Ho, which has a 1200 y isomeric state at $`E_i`$=5.98 keV, the intermediate state being the $`E_n`$=263.8 keV level, so that $`E_{ni}=257.8`$ keV. We have also found several isomers for which the power density $`P_2`$ is of the order of $`10^{10}`$ W cm<sup>-2</sup>. We assume that the nucleus is initially in an isomeric state $`|i`$ of energy $`E_i`$, spin $`J_i`$ and half-life $`t_i`$. By absorbing an incident photon of energy $`E_{ni}`$, the nucleus makes a transition to a higher intermediate state $`|n`$ of energy $`E_n`$, spin $`J_n`$ and half-life $`t_n`$. The state $`|n`$ then decays into a lower state $`|l`$ of energy $`E_l`$ and spin $`J_l`$ by the emission of a gamma-ray photon of energy $`E_{nl}`$ or by internal conversion, as shown in Fig. 1. In some cases the state $`|l`$ may be situated above the isomeric state $`|i`$, and in these cases the transition $`|n|l`$ is followed by further gamma-ray transitions to lower states.
As shown in , the rate $`w_{il}^{(2)}`$ for the two-step transition $`|i|n|l`$ is $$w_{il}^{(2)}=\sigma _{ni}\mathrm{}\mathrm{\Gamma }_{eff}N(E_{ni}),$$ (1) where $$\sigma _{ni}=\frac{2J_n+1}{2J_i+1}\frac{\pi ^2c^2\mathrm{}^2}{E_{ni}^2}$$ (2) is the induced-emission cross section for the transition $`|i|n`$. The quantity $`\mathrm{\Gamma }_{eff}`$ is the effective width of the two-step transition, $$\mathrm{\Gamma }_{eff}=F_R\mathrm{ln}2/t_n,$$ (3) where the dimensionless quantity $`F_R`$ has the expression $$F_R=\frac{(1+\alpha _{nl})R_{ni}R_{nl}}{\left[(1+\alpha _{ni})R_{ni}+(1+\alpha _{nl})R_{nl}+\sum _{l^{}}(1+\alpha _{nl^{}})R_{nl^{}}\right]^2},$$ (4) $`R_{ni},R_{nl},R_{nl^{}}`$ being the relative gamma-ray intensities and $`\alpha _{ni},\alpha _{nl},\alpha _{nl^{}}`$ the internal conversion coefficients for the transitions $`|n|i,|n|l,|n|l^{}`$, with $`l^{}\ne i,l`$. In Eq. (1), $`N(E_{ni})`$ is the spectral intensity of the incident photon flux, defined such that $`N(E)dE`$ represents the number of photons incident per unit surface and time having an energy between $`E`$ and $`E+dE`$.
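As a numerical sketch of Eqs. (2)-(4): the spins, branching ratios, conversion coefficients and intermediate-state half-life below are illustrative placeholders, not values from Table I.

```python
import math

HBARC_EV_CM = 1.9732698e-5     # hbar * c in eV cm
HBAR_EV_S = 6.582119569e-16    # hbar in eV s

def sigma_ni(E_ni_ev, J_i, J_n):
    """Induced-emission cross section of Eq. (2), in cm^2."""
    return (2 * J_n + 1) / (2 * J_i + 1) * math.pi ** 2 * (HBARC_EV_CM / E_ni_ev) ** 2

def F_R(R_ni, a_ni, R_nl, a_nl, others=()):
    """Dimensionless factor of Eq. (4); `others` holds (R_nl', alpha_nl') pairs."""
    num = (1 + a_nl) * R_ni * R_nl
    den = (1 + a_ni) * R_ni + (1 + a_nl) * R_nl + sum((1 + a) * R for R, a in others)
    return num / den ** 2

def hbar_gamma_eff(F, t_n_s):
    """hbar * Gamma_eff of Eq. (3), in eV."""
    return HBAR_EV_S * F * math.log(2) / t_n_s

# illustrative placeholder inputs (NOT from Table I)
sig = sigma_ni(257.8e3, J_i=7, J_n=6)                              # cm^2
F = F_R(R_ni=0.5, a_ni=0.2, R_nl=1.0, a_nl=0.05, others=[(0.3, 0.1)])
width = hbar_gamma_eff(F, t_n_s=1e-10)                             # eV
```

For two equal, conversion-free branches the factor reduces to $`F_R=1/4`$, the value used as a sanity check here.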
The spectral intensity $`N_2`$ for which the two-step transition rate $`w_{il}^{(2)}`$ becomes equal to the natural decay rate $`\mathrm{ln}2/t_i`$ of the isomeric nucleus is $$N_2=\frac{\mathrm{ln}2}{\sigma _{int}t_i},$$ (5) where the integrated cross section $`\sigma _{int}`$ is $$\sigma _{int}=\sigma _{ni}\mathrm{}\mathrm{\Gamma }_{eff}.$$ (6) The incident power density $`P_2`$ for which the induced emission rate becomes equal to the natural decay rate of the isomeric state can then be estimated as $$P_2=N_2(E_{ni})E_{ni}^2.$$ (7) We have analyzed two-step transitions for isomeric nuclei with a half-life greater than 10 min, for which there is an intermediate state $`|n`$ of known energy $`E_n`$, spin $`J_n`$ and half-life $`t_n`$, for which the intermediate state is connected by a known gamma-ray transition to the isomeric state $`|i`$ and to a lower state $`|l`$, and for which the relative intensities $`R_{ni},R_{nl},R_{nl^{}}`$ of the transitions to lower states are known. The internal conversion coefficients have been calculated by interpolation from refs. and . The cases for which all the previously mentioned quantities were available in ref. are listed in Table I. If the input values were available for several two-step transitions of a certain isomeric nucleus, we gave the two-step transition having the lowest value of $`P_2`$. The two-step transitions for the nuclei <sup>52</sup>Mn, <sup>99</sup>Tc, <sup>152</sup>Eu, <sup>178</sup>Hf, <sup>201</sup>Bi, <sup>204</sup>Pb, which fulfil the upconversion condition, have been studied in ref. and are not included in Table I. If the half-life $`t_n`$ of the intermediate state was given in as less than $`(<)`$ or greater than $`(>)`$ a certain value, we have calculated $`\mathrm{}\mathrm{\Gamma }_{eff},\sigma _{int}`$ and $`P_2`$ corresponding to the given limiting value of $`t_n`$. In the case of <sup>166</sup>Ho we have neglected the 3.1 keV transition, of unknown intensity, from the 263.8 keV level. 
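A hedged numerical sketch of Eqs. (5)-(7); the integrated cross section used here is an illustrative placeholder chosen roughly on the <sup>166m</sup>Ho scale, not a value derived from Table I.

```python
import math

EV_TO_J = 1.602176634e-19   # J per eV
YEAR_S = 3.156e7            # seconds per year

def N2_and_P2(sigma_int_ev_cm2, t_i_s, E_ni_ev):
    """Spectral intensity N2 of Eq. (5) and power density P2 of Eq. (7).

    Returns (N2 in photons cm^-2 s^-1 eV^-1, P2 in W cm^-2).
    """
    N2 = math.log(2) / (sigma_int_ev_cm2 * t_i_s)
    P2 = N2 * E_ni_ev ** 2 * EV_TO_J
    return N2, P2

# placeholder integrated cross section (eV cm^2), 1200 y isomer, 257.8 keV pump
N2, P2 = N2_and_P2(1.3e-26, 1200 * YEAR_S, 257.8e3)
```

For these placeholder inputs $`P_2`$ comes out at the $`10^7`$ W cm<sup>-2</sup> scale, consistent in order of magnitude with the <sup>166m</sup>Ho result quoted above; note that $`N_2`$ scales inversely with the isomeric half-life, which is why long-lived isomers give low $`P_2`$.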
In the case of <sup>97</sup>Tc the intensity of the 441.2 keV transition from the 656.9 keV level was given as an upper limit. In the case of <sup>121</sup>Sn we have neglected the weak 56.35 keV transition from the 925.6 keV level. The half-life of the 0.0768 keV isomeric level of <sup>235</sup>U and the intensity of the 637.7 keV transition from the 637.79 keV intermediate state are approximate values. In the case of <sup>34</sup>Cl the intensity of the 2433.8 keV transition from the 2580.2 keV intermediate state was given as an upper limit, as were the intensities of some other transitions from this intermediate state. In the case of <sup>123</sup>Sn we have neglected the weak 284.7 keV transition from the 1130.5 keV intermediate state. For the isomeric nucleus <sup>166m</sup>Ho, which has a 1200 y isomeric state at $`E_i`$=5.98 keV, we have found a value of $`P_2=6.3\times 10^7`$ W cm<sup>-2</sup>, the intermediate state being the $`E_n`$=263.8 keV level. The relatively low value of $`P_2`$ is due to the long half-life of this isomer. The largest effective width in Table I is for <sup>34m</sup>Cl, for which $`\mathrm{}\mathrm{\Gamma }_{eff}=2.5\times 10^{-3}`$ eV. The relatively large effective width is due to the short half-life of the intermediate state, $`t_n`$=3 fs. The energy of the pumping transition is, however, large, $`E_{ni}`$=2433.8 keV, and the multipolarity is E2/M1. The large value of $`P_2`$ for <sup>34</sup>Cl is due to the relatively short half-life of the isomeric state. There are several cases in Table I for which the power density $`P_2`$ is of the order of $`10^{10}`$ W cm<sup>-2</sup>. For <sup>97</sup>Tc and <sup>95</sup>Tc the low values of $`P_2`$ can be attributed to the short half-lives of the intermediate state, whereas for <sup>113</sup>Cd and <sup>121</sup>Sn the values of $`P_2`$ are a result of the long isomeric half-lives.
The two-step transitions for which all the input values were available represent a small fraction of the total number of possible two-step transitions in isomeric nuclei. Since the number of isomeric nuclei in a sample is limited by the total activity of that sample, a longer isomeric half-life means a larger number of isomeric nuclei in the sample. The two-step transitions can be induced by incident photons, as assumed in this work, or directly by incident electrons. The cross sections for the two-step electron excitation of isomeric states are about two orders of magnitude lower than the cross sections for two-step photon excitation, but the two-step photon excitation rates are also proportional to the efficiency with which the energy of an incident electron beam can be converted into bremsstrahlung. Fig. 1. Two-step deexcitation of an isomeric nucleus. The nucleus, initially in the isomeric state $`|i`$, makes a transition to a higher nuclear state $`|n`$ by the absorption of an incident photon of energy $`E_{ni}`$, then it emits a photon of energy $`E_{nl}`$ to reach the lower nuclear state $`|l`$. Table I. Power density $`P_2`$ for which the two-step induced emission rate becomes equal to the natural decay rate of the isomeric state of energy $`E_i`$ and half-life $`t_i`$. The intermediate state of the problem is the level of energy $`E_n`$ and half-life $`t_n`$. The multipolarities of the transitions of energy $`E_{ni},E_{nl}`$ are given in the multipole column.
# Limits on Neutrino Mass from Cosmic Structure Formation \[ ## Abstract We consider the effect of three species of neutrinos with nearly degenerate mass on cosmic structure formation in a low matter-density universe, within a hierarchical clustering scenario with a flat initial perturbation spectrum. The matching condition for fluctuation powers at the COBE scale and at the cluster scale leads to a strong upper limit on the neutrino mass. For a flat universe with matter density parameter $`\mathrm{\Omega }=0.3`$, we obtain $`m_\nu <0.6`$ eV for the Hubble constant $`H_0<80`$ km s<sup>-1</sup> Mpc<sup>-1</sup>. Allowing for the more generous parameter space limited by $`\mathrm{\Omega }<0.4`$, $`H_0<80`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and age $`t_0>11.5`$ Gyr, the limit is 0.9 eV. \] Recent experiments on atmospheric and solar neutrino fluxes suggest that neutrinos are massive. In particular, the atmospheric neutrino experiment indicates an almost maximal mixing between the two neutrinos, which is most naturally understood if the relevant species are nearly degenerate in mass. Nearly maximal mixing is also a viable possibility to explain the long-standing solar neutrino problem, with oscillation either in vacuum or in matter, although there remains the solution in which it is explained by small-angle mixing via oscillation in matter . For these reasons the idea has gained popularity that the three neutrinos are massive and almost degenerate in mass (e.g., ). Degenerate neutrinos mean a neutrino mass larger than several tenths of an eV, which in turn means that they provide the universe with a matter density comparable to or greater than that in stars, and play some role in cosmological structure formation.
There are a few authors who discussed the possibility that neutrinos have played an active role in the formation of the large-scale structure of the universe, especially in providing power at a large scale which otherwise cannot be accounted for in the standard cold dark matter scenario at the critical matter density . At the time this idea emerged, theorists took the Einstein-de Sitter (EdS) universe of critical matter density more seriously, so that typical compositions of the matter were assumed to be $`\mathrm{\Omega }_{CDM}=0.7`$–$`0.8`$ and $`\mathrm{\Omega }_\nu =0.3`$–$`0.2`$ in units of the closure density, $`10.54h^2`$ keV cm<sup>-3</sup>, where $`h`$ is the Hubble constant $`H_0`$ in units of 100 km s<sup>-1</sup> Mpc<sup>-1</sup>. This neutrino mass density corresponds to a neutrino mass of (30–20)$`h^2`$ eV. There have been many explorations of this scenario since the proposal , and the current conclusion is that a neutrino density in excess of $`\mathrm{\Omega }_\nu \simeq 0.3`$ is disfavoured in the EdS universe from the viewpoint of early cosmic structure formation. Over the last few years evidence has accumulated indicating a low-density universe. There are also observations pointing to the dominance of the vacuum energy (cosmological constant, $`\mathrm{\Lambda }`$) that makes the universe’s curvature flat, which is also preferred from the theoretical point of view for a low matter-density universe.
The list in favour of a low matter-density universe includes: (1) Hubble constant - cosmic age mismatch for the $`\mathrm{\Omega }=1`$ universe; (2) No positive indications for the presence of copious matter beyond the cluster scale: the mass to light ratio inferred from clusters and galaxies, $`M/L=(100`$–$`400)h`$, corresponds to $`\mathrm{\Omega }=0.1`$–$`0.3`$ ; (3) consistency of the cluster baryon fraction with the field value ; (4) The slow evolution of the cluster abundance from redshift z=0 to 0.8 together with the abundance normalization at z=0 ; (5) The Hubble diagram of Type Ia supernovae . The data indicate a finite cosmological constant. In view of systematic errors in various steps of the analysis, however, a zero $`\mathrm{\Lambda }`$ is probably not excluded, whereas an $`\mathrm{\Omega }=1`$ universe is too far away from the observations; (6) The perturbation spectral-shape parameter $`\mathrm{\Gamma }=\mathrm{\Omega }h=0.2`$–$`0.3`$ from large-scale structure ; (7) Matching of the power spectrum between COBE and galaxy clustering ; (8) Evolution of small scale non-linear galaxy clustering ; (9) Local velocity field versus density enhancement . The results of (1)–(9) converge between $`\mathrm{\Omega }_0=0.2`$ and 0.4. The two positive indications for the presence of a cosmological constant are the Type Ia supernova Hubble diagram mentioned above and the acoustic peak distribution in the cosmic microwave background radiation (CBR) anisotropies , which says that the universe is close to flat whether dominated by matter or vacuum energy. We remark that the results from large-scale velocity flow analyses are controversial; the resulting $`\mathrm{\Omega }`$ varies from analysis to analysis, compounded by the uncertainty in the biasing parameter regarding the extent to which galaxies trace the mass distribution .
Recent analyses of galaxy peculiar velocities combined with other observations claim that they are consistent with a low density universe with a finite $`\mathrm{\Lambda }`$ . There seems to be no extensive analysis available of the effect of neutrinos in a low matter-density universe, although a brief reference has been made in . In this paper we consider the effect of three massive, degenerate neutrinos on cosmic structure formation in the low matter-density universe. We assume hierarchical structure formation dominated by cold dark matter (CDM), the current standard model of cosmic structure formation. The match of the power spectrum at the COBE scale with that in galaxy clustering provides the most conspicuous evidence for the model. While power spectrum analyses are the most common way to demonstrate the consistency of the hierarchical clustering scenario, the amplitude estimated from galaxy clustering receives unknown biasing factors associated with galaxy formation mechanisms; therefore, this is not appropriate for a quantitative analysis as presented here. We consider the matching of two normalizations of the cosmic mass density fluctuation power: the normalizations derived from COBE at a several hundred Mpc scale and from the rich cluster abundance at z=0, which measures the power at the $`8h^{-1}`$ Mpc scale and is most conveniently represented by the rms mass fluctuation parameter $`\sigma _8`$. We do not use the $`\sigma _8`$ parameters derived from velocity fields or other observations, which are more susceptible to various uncertainties. The advantage of using the cluster abundance information is that it refers to the mass function, which is not affected by any biasing uncertainties, and the fiducial length scale $`8h^{-1}`$ Mpc is close to that of clusters before collapse. The requirement of this matching leads us to derive quantitative constraints on the neutrino contribution to cosmic structure formation.
We first consider the flat universe with low matter density, but later also discuss the case of open universes. We assume a flat (Harrison-Zeldovich) initial power spectrum, $`P(k)=Ak^n`$ with $`n=1`$, which is the most natural prediction of inflation. The fluctuation spectrum receives a modification to $`Ak^nT(k)`$ at large $`k`$ as the fluctuations evolve . The shape of the transfer function $`T(k)`$ depends on the assumed cosmological model and on the neutrino content. We use the computer code CMBFast to calculate the transfer function for many choices of parameters. The spectrum is normalized with a fitting formula around $`\mathrm{\ell }=10`$ deduced by Bunn & White from the four-year COBE-DMR data . They estimated the one standard deviation error to be 7% in the square root of the harmonic coefficient of CBR anisotropies $`C_{\mathrm{\ell }}`$. We take the baryon fraction $`\mathrm{\Omega }_B=0.015h^2`$, corresponding to $`\eta _{10}=4`$ . The result is not very sensitive to this choice. We calculate the rms mass fluctuation within a sphere of radius 8 $`h^{-1}`$ Mpc by integrating the spectrum with a top hat window: $`\sigma _8^2`$ $`=`$ $`\langle (\delta M/M)^2\rangle _{r<8h^{-1}}`$ (1) $`=`$ $`{\displaystyle \int dk\mathrm{ }4\pi k^2|\delta _k|^2\left[3\frac{\mathrm{sin}(kr)-(kr)\mathrm{cos}(kr)}{(kr)^3}\right]^2}.`$ (2) The resulting $`\sigma _8`$ for a given Hubble constant and neutrino mass density $`\mathrm{\Omega }_\nu `$ is presented in Fig. 1 for a flat universe with (a) $`\mathrm{\Omega }=0.3`$ and $`\lambda =0.7`$, and (b) $`\mathrm{\Omega }=0.4`$ and $`\lambda =0.6`$. A set of curves (increasing towards the right) gives contours of constant $`\sigma _8`$. Another set of curves shows the neutrino mass density for a given neutrino mass. The neutrino mass that concerns us is in the range $`<1`$ eV for most cases.
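Equation (2) can be sketched numerically. As an assumption, the BBKS fitting form of the CDM transfer function is used below in place of the CMBFast output of the actual calculation, and the normalization $`A`$ is left arbitrary; the sketch only illustrates how the shape parameter $`\mathrm{\Gamma }=\mathrm{\Omega }h`$ controls the small-scale power entering $`\sigma _8`$.

```python
import math

def T_bbks(k, gamma):
    """BBKS fitting form of the CDM transfer function; k in h/Mpc."""
    q = k / gamma
    if q < 1e-8:
        return 1.0
    return (math.log(1 + 2.34 * q) / (2.34 * q) *
            (1 + 3.89 * q + (16.1 * q) ** 2 +
             (5.46 * q) ** 3 + (6.71 * q) ** 4) ** (-0.25))

def W_tophat(x):
    """Top-hat window 3(sin x - x cos x)/x^3 appearing in Eq. (2)."""
    if x < 1e-4:
        return 1.0
    return 3.0 * (math.sin(x) - x * math.cos(x)) / x ** 3

def sigma8_sq(gamma, n=1.0, A=1.0, r=8.0, kmax=20.0, steps=20000):
    """Riemann-sum estimate of Eq. (2) with P(k) = A k^n T(k)^2."""
    dk = kmax / steps
    s = 0.0
    for i in range(1, steps + 1):
        k = i * dk
        s += k ** 2 * A * k ** n * T_bbks(k, gamma) ** 2 * W_tophat(k * r) ** 2 * dk
    return 4 * math.pi * s

# a larger shape parameter keeps T(k) close to 1 out to larger k,
# hence more power on the 8 h^-1 Mpc scale for the same amplitude A
s_lo = sigma8_sq(gamma=0.2)
s_hi = sigma8_sq(gamma=0.5)
```

With the COBE normalization fixing $`A`$, this is the quantity compared against the cluster-abundance value of $`\sigma _8`$ in Figs. 1 and 2.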
The core radius of neutrino clustering allowed by the phase space argument is $$R_\nu =3.2\mathrm{Mpc}(m_\nu /1\mathrm{e}\mathrm{V})^{-2}(v/1000\mathrm{k}\mathrm{m}\mathrm{s}^{-1})^{-1/2},$$ (3) which is large compared with the core radius of rich clusters, $`R_c\simeq (0.12\pm 0.02)h^{-1}`$ Mpc for a velocity dispersion $`v\simeq 10^3`$ km s<sup>-1</sup>. Together with the small neutrino mass density, this means we can ignore the neutrino component in integrating the cluster mass. We have estimated the contribution from neutrinos to the cluster mass within linear perturbation theory. Its inclusion changes the result by at most a few percent, which can safely be ignored in the present argument. The calculated $`\sigma _8`$ is compared with the value estimated from the rich cluster abundance. The estimate of $`\sigma _8`$ has been made by a number of authors \[23-25,6\]. The most ambiguous input in such analyses is the estimate of the cluster mass, but the modern results are well converged among the authors, at least for $`z\simeq 0`$ clusters. A summary is presented in Table 1. We take the values given by Eke et al. , which agree with the other estimates within the error: $`\sigma _8=0.93\pm 0.07`$ for $`\mathrm{\Omega }=0.3`$ and $`\sigma _8=0.80\pm 0.06`$ for $`\mathrm{\Omega }=0.4`$. Adoption of Viana & Liddle’s value makes the derived limit on the neutrino mass slightly tighter. If we add the normalization error of the CBR anisotropies in quadrature, the errors become 0.10 and 0.09, respectively. The allowed range is shown by the shaded bands in the figure. We see from Fig. 1 that one can obtain a limit on the neutrino mass once the Hubble constant is specified. For $`H_0=70`$ km s<sup>-1</sup> Mpc<sup>-1</sup> we obtain 0.21 eV ($`r=\mathrm{\Omega }_\nu /\mathrm{\Omega }\simeq 5`$%) for $`\mathrm{\Omega }=0.3`$ and 0.91 eV ($`r\simeq 15`$%) for $`\mathrm{\Omega }=0.4`$.
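A one-line numerical check of Eq. (3), assuming the phase-space scaling $`R_\nu \propto m_\nu ^{-2}v^{-1/2}`$:

```python
def core_radius_mpc(m_nu_ev, v_kms):
    """Phase-space core radius of Eq. (3), in Mpc, assuming the
    R ~ m^-2 v^-1/2 scaling with the 3.2 Mpc prefactor quoted above."""
    return 3.2 * m_nu_ev ** -2 * (v_kms / 1000.0) ** -0.5

# a 1 eV neutrino at v = 1000 km/s gives 3.2 Mpc, large compared with
# the ~0.1 h^-1 Mpc cluster core radius quoted in the text
R = core_radius_mpc(1.0, 1000.0)
```

The steep $`m_\nu ^{-2}`$ dependence is why sub-eV neutrinos cannot concentrate inside cluster cores.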
For $`H_0=80`$ km s<sup>-1</sup> Mpc<sup>-1</sup> our limits are 0.62 eV ($`r\simeq 10`$%) for $`\mathrm{\Omega }=0.3`$ and 1.8 eV ($`r\simeq 22`$%) for $`\mathrm{\Omega }=0.4`$. Our limit is summarized by a fitting formula: $$m_\nu <[5.20h(\mathrm{\Omega }/0.3)^{2.03}-3.20(\mathrm{\Omega }/0.3)^{1.32}]h^2\mathrm{eV}.$$ (4) Allowing for a conservative parameter space, $`\mathrm{\Omega }\le 0.4`$, $`H_0\le 80`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and $`t_0>11.5`$ Gyr, the upper limit is 0.87 eV, which corresponds to $`r\simeq 13`$% of the total mass density. A similar figure is given in Fig. 2 for zero-$`\mathrm{\Lambda }`$ universes, (a) for $`\mathrm{\Omega }=0.3`$ and (b) for $`\mathrm{\Omega }=0.4`$. The normalization from the cluster abundance is $`\sigma _8=0.76\pm 0.09`$ and 0.87$`\pm `$0.09 including the CBR normalization error. There is no consistent parameter range for $`\mathrm{\Omega }=0.3`$ for $`H_0<100`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, with or without neutrinos. A consistent parameter range appears for $`\mathrm{\Omega }=0.4`$, but only with a relatively high $`H_0`$. No-neutrino models are consistent for $`70<H_0<80`$ km s<sup>-1</sup> Mpc<sup>-1</sup>. Requiring $`H_0\le 80`$ km s<sup>-1</sup> Mpc<sup>-1</sup> leads to $`m_\nu <0.5`$ eV, which is significantly stronger than that for the flat universe. The modification of power spectra upon inclusion of massive neutrinos has been discussed by Hu et al. . The change is by about a factor of two for $`m_\nu =1`$ eV when $`H_0=70`$ km s<sup>-1</sup> Mpc<sup>-1</sup>. The scatter among the various data and our ignorance of the biasing factor make it difficult to exclude 1 eV neutrinos using the current power spectrum data. A strong constraint, such as $`m_\nu <0.4`$ eV, would be obtained only when the power spectrum is derived from a homogeneous galaxy sample with statistics as high as that expected in the Sloan Digital Sky Survey . 
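As a consistency check, the fitting formula (4), with the bracket read as the difference of its two terms and the result in eV, reproduces the individually quoted limits:

```python
def m_nu_limit_ev(omega, h):
    # Eq. (4): m_nu < [5.20 h (Omega/0.3)^2.03 - 3.20 (Omega/0.3)^1.32] h^2 eV
    return (5.20 * h * (omega / 0.3)**2.03
            - 3.20 * (omega / 0.3)**1.32) * h**2

# Quoted limits: 0.21 eV (Omega=0.3, h=0.7), 0.91 eV (0.4, 0.7),
#                0.62 eV (0.3, 0.8),         1.8 eV (0.4, 0.8)
for omega, h in [(0.3, 0.7), (0.4, 0.7), (0.3, 0.8), (0.4, 0.8)]:
    print(omega, h, round(m_nu_limit_ev(omega, h), 2))
```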
This would provide us with an alternative means to set a limit on the neutrino mass, although we must still assume the biasing factor to be scale independent. The reason we obtained a strong limit in this paper is the advantage of using the cluster mass function, which is directly related to the mass fluctuation, as well as of using information spanning a very large baseline in length scale. Let us discuss possible uncertainties or loop-holes in our argument. We have ignored the contribution from gravitational wave perturbations to the CBR spectrum. Its inclusion only makes the limit on the neutrino component more stringent. A possible loop-hole in our argument is the possibility that the index of the power spectrum is significantly larger than one. The COBE data alone do not exclude an index in the range $`0.9<n<1.5`$, but the range is reduced to $`1<n<1.2`$ if supplemented by other CBR data on small scales . With an index $`n<1`$, which can easily be realized in inflation models, the constraints become tighter. If $`n>1`$, the excess large-scale power generated by neutrino perturbations is cancelled by the intrinsically small large-scale power, and more massive neutrinos become viable. For $`n=1.2`$, the limit for $`\mathrm{\Omega }=0.3`$ and $`h=0.7`$ is loosened from 0.2 to 0.7 eV, and for $`h=0.8`$ from 0.6 to 1.4 eV. For our $`(H_0,\mathrm{\Omega },t_0)`$ range discussed above the limit is 1.8 eV, still quite strong. We remark that some tricky tuning is needed to give $`n>1`$ in inflation models . The limit we derived in this paper is quite strong. It is 5-20 times stronger than would be obtained from a straightforward mass density consideration, $`m_\nu <93.8\mathrm{\Omega }h^2`$ eV. Ellis and Lola have recently developed an argument for neutrinos with a degenerate mass as large as 5 eV, with interesting physics implications. 
Such neutrinos, however, bring a large mismatch into the fluctuation power between the very large scale and the cluster scale, which would be a disaster for currently accepted cosmic structure formation models. Let us finally compare our limits with those obtained from experiments or other cosmological considerations. A direct limit on the electron neutrino mass from tritium beta decay is about 4.4 eV (95% CL), allowing for some systematic effects that make the measured $`m_\nu ^2`$ negative. Additional limits are available if the neutrinos are of the Majorana type. The limit on the lifetime for double beta decay of <sup>76</sup>Ge has now increased to $`5.7\times 10^{25}`$ years (90% CL), which leads to a mass limit of 0.2-1.5 eV depending on the nuclear matrix element used. We also refer to a limit from the cosmological baryon excess: the condition that the baryon asymmetry survives requires the Majorana neutrino mass to be at most 1-2 eV . The limit obtained in this paper does not depend on the neutrino type. If only one species of neutrinos were massive, the mass limit would simply become weaker by about a factor of three. Acknowledgements We thank George Efstathiou and Craig Hogan for useful comments on the draft manuscript. MF and NS are supported by Grants-in-Aid of the Ministry of Education. MF thanks the Newton Institute and the Institute of Astronomy in Cambridge, and NS the Max Planck Institut für Astrophysik, for their hospitality while this work was completed.
# Low temperature magnetic hysteresis in Mn12 acetate single crystals ## 1 Introduction The high spin ($`S=10`$) molecular magnets Mn<sub>12</sub> acetate and Fe<sub>8</sub> have become prototypes for the study of the transition from classical superparamagnetism to quantum tunnelling of mesoscopic spins. Much of the recent interest in these materials has been stimulated by the observation of a remarkably regular series of steps and plateaus in the magnetic hysteresis loops of Mn<sub>12</sub> at low temperature (below a blocking temperature of $`3`$ K), first in oriented powders and shortly thereafter in single crystals . These results indicate that the relaxation rate of the magnetization toward equilibrium is greatly enhanced at well-defined intervals of magnetic field. These observations have been interpreted within a simple effective spin Hamiltonian for these molecules and a model of thermally assisted tunnelling of the magnetization, first suggested in reference . This model describes a regime intermediate between thermal activation over the anisotropy barrier (superparamagnetism) and pure quantum tunnelling ($`T=0`$) in which both thermal activation and quantum tunnelling are important to the magnetization reversal. Mn<sub>12</sub> has subsequently been studied extensively and by a variety of techniques. Notably, both EPR and inelastic neutron spectroscopy have been used to independently determine the parameters for the spin Hamiltonian of Mn<sub>12</sub>. These have shown that higher order terms in the Hamiltonian–not considered in the analysis thus far of magnetic hysteresis data–are necessary to fit the spectra. Surprisingly few experiments have been conducted at lower temperature in Mn<sub>12</sub> to study the transition to pure quantum tunnelling behaviour, in contrast to important studies of this type in Fe<sub>8</sub> . 
Some earlier experiments showed the appearance of magnetic avalanches at lower temperature, that is, rapid and uncontrolled magnetization switching to its saturation value, which precluded controlled low temperature relaxation studies . Later studies using cantilever magnetometry revealed several new higher field magnetization steps at lower temperature, consistent with the model of thermally assisted tunnelling . In this letter we present precise low temperature high field magnetic hysteresis measurements on Mn<sub>12</sub> which reveal two important new aspects of the magnetization reversal. First, on lowering the temperature below $`1.6`$ K, we find that steps in magnetization shift gradually to higher magnetic field, consistent with the presence of a second order (fourth power) uniaxial magnetic anisotropy constant determined by EPR spectroscopy and, recently, by precise inelastic neutron scattering measurements . Second, at lower temperature an intriguing abrupt shift in step position is found. We suggest that this shift may be due to an abrupt transition between thermally assisted and pure quantum tunnelling in Mn<sub>12</sub> acetate, first predicted theoretically by Chudnovsky and Garanin . We also discuss other possible origins of this observation. ## 2 Background Mn<sub>12</sub> acetate crystals have a tetragonal lattice ($`a=1.73`$ nm and $`c=1.24`$ nm) of molecules with $`12`$ interacting mixed-valent Mn ions with a net ground state spin of $`10`$, $`S=10`$ . Thus each molecule has $`2S+1=21`$ magnetic levels, labeled by quantum number $`m`$ ($`m=10,9,8,\mathrm{},-10`$). 
The molecules have a strong uniaxial magnetic anisotropy energy, and to a good approximation the spin Hamiltonian can be written: $$H=-DS_z^2-BS_z^4-g_z\mu _BH_zS_z+H^{}$$ (1) The parameters D and B have been determined first by EPR spectroscopy and now very accurately by inelastic neutron spectroscopy to be $`D=0.548(3)`$ K, $`B=1.173(4)\times 10^{-3}`$ K, and $`g_z`$ is estimated to be $`1.94(1)`$ . Spin alignment is favored to be up ($`m=10`$) or down ($`m=-10`$) along the z-axis. The energy barrier between up and down states is approximately $`67`$ K. The third term is the Zeeman energy for fields applied parallel to the easy axis. $`H^{}`$ represents small terms which break the axial symmetry and, hence, produce tunnelling. These are due to transverse fields (i.e., terms like $`H_xS_x`$), and higher order magnetic anisotropies, the lowest order form allowed by tetragonal symmetry being ($`S_+^4+S_-^4`$). By itself, this last term would lead to a tunnelling selection rule with $`\mathrm{\Delta }m=4i`$, with $`i`$ an integer. Since this is not found experimentally, it is likely that transverse fields due to hyperfine and dipolar fields ($`\simeq 0.1`$ K) and/or an external applied field (such as due to a small misalignment between the applied field and the z-axis) are most important in mixing the m-levels and producing tunnelling. The steps observed in magnetic hysteresis measurements, their temperature dependence and magnetization relaxation experiments provide strong evidence for thermally assisted tunnelling. Within this model magnetization reversal occurs by tunnelling from thermally excited magnetic sublevels (i.e., $`m=9,8,7,\mathrm{},-8,-9`$) at magnetic fields at which these levels are in resonance with levels on the opposite side of the anisotropy barrier (inset Fig. 1). 
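With the minus signs of eq. (1) restored, the zero-field level scheme is diagonal in $`m`$, and both the quoted barrier of about 67 K and the field quantum $`H_o=D/g_z\mu _B\simeq 0.42`$ T follow directly from the quoted D, B and $`g_z`$; a minimal sketch:

```python
import numpy as np

D, B, G_Z = 0.548, 1.173e-3, 1.94    # D and B in kelvin, as quoted
K_B = 1.380649e-23                   # Boltzmann constant, J/K
MU_B = 9.2740100783e-24              # Bohr magneton, J/T

m = np.arange(-10, 11)               # the 2S+1 = 21 levels of S = 10
energy = -D * m**2 - B * m**4        # zero-field energies of eq. (1), in kelvin

# barrier = energy at the top (m = 0) minus the well bottom (m = +/-10)
barrier = energy.max() - energy.min()   # ~66.5 K, i.e. "approximately 67 K"

# field quantum H_o = D / (g_z mu_B), converting D from kelvin to joules
h_o = D * K_B / (G_Z * MU_B)            # ~0.42 T
```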
From (1), levels $`m`$ and $`m^{}`$ have the same energy when: $$H=nH_o\left[1+\frac{2B}{D}\left(\left(m-\frac{n}{2}\right)^2+\frac{n^2}{4}\right)\right]$$ (2) where $`n=m+m^{}`$ is the step index, $`H_o=D/g_z\mu _B=0.42`$ T is a field “quantum” and $`m`$ is the escape level from the metastable well. At these magnetic fields, the magnetization relaxation can occur on measurement time scales and gives rise to the step-like changes in magnetization. Otherwise, the relaxation rate is slower, leading to plateaus in the magnetic hysteresis loop. The relaxation rate from any level is proportional to the product of the thermal occupation probability of that level and the probability for quantum tunnelling from the level, and thus should increase exponentially with temperature, as found in experiments above $`2`$ K . Since the tunnelling probability is intrinsically small for lower lying levels (with large $`\mathrm{\Delta }m=m-m^{}`$), larger magnetic fields are necessary at lower temperature to produce observable tunnelling relaxation, as also seen in experiments. Importantly, within this model, tunnelling relaxation occurs from a small group of quasilevels in the metastable well–the escape levels, $`m_{esc}`$. This is because the tunnelling probability increases exponentially with energy $`E`$, as the effective barrier height becomes lower, while the thermal occupation probability decreases exponentially with energy, $`\mathrm{exp}(-E/kT)`$. Note also from equation (2) that for a given step index n the fields at which steps occur depend on the escape levels. Larger fields are necessary to bring lower lying levels, i.e., larger $`m`$ levels, into resonance (as generally $`m>n/2`$), so that a shift in step position to higher fields signals tunnelling from states deeper in the metastable potential well (i.e., larger $`m_{esc}`$). 
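Evaluating eq. (2), read with the $`(m-n/2)^2`$ term, for the $`n=7`$ step shows how escape from deeper levels pushes the resonance to higher field; note these are applied-field values from the formula alone, with no correction for the internal field $`4\pi M`$:

```python
D, B = 0.548, 1.173e-3   # anisotropy constants in kelvin, as quoted
H_O = 0.42               # field quantum D/(g_z mu_B) in tesla

def step_field(n, m, d=D, b=B, h_o=H_O):
    # Eq. (2): resonance field for step n with escape level m (so m' = n - m)
    return n * h_o * (1.0 + (2.0 * b / d) * ((m - n / 2.0)**2 + n**2 / 4.0))

# For n = 7, escape from m = 8, 9, 10 (m' = -1, -2, -3): deeper escape
# levels require progressively larger fields, the trend seen in Fig. 2.
fields = [step_field(7, m) for m in (8, 9, 10)]

# With b = 0 the fourth-order correction vanishes and H = n * H_o exactly.
baseline = step_field(6, 3, b=0.0)
```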
Finally, dipolar interactions between clusters, interactions with nuclear spins and spin-phonon interactions are essential to a quantitative and microscopic understanding of the relaxation, such as the observation of non-exponential relaxation and both the linewidth and form of the relaxation peaks . ## 3 Experiment The magnetization of small single crystals of Mn<sub>12</sub>-acetate in the form of a parallelepiped ($`50\times 50\times 300\mu m^3`$) was measured using a high sensitivity micro-Hall effect magnetometer . Like a micro-superconducting quantum interference device ($`\mu `$-SQUID) , this magnetometer measures the magnetic field induced by the crystal’s magnetization. The measurements are done in an rf-shielded automated high-field <sup>3</sup>He system, in which careful attention has been paid to reducing electrical noise and to thermalizing the sample. Temperature is measured both at the cold stage of the cryostat and with a small resistance thermometer mounted within a few mm of the sample. These measurements are always within $`50`$ mK of one another. Fig. 1 shows a typical portion of a hysteresis curve measured at $`0.4`$ K, starting from a demagnetized state, $`M=0`$, at a ramp rate of $`0.1`$ T/min, with the field along the easy axis (within a few degrees). Prominent step-like changes in magnetization are observed at fields between $`3`$ and $`5`$ T. Fig. 2 shows the derivative of the magnetization curves, $`dM/dH`$, versus applied field at different temperatures. Peaks in $`dM/dH`$ correspond to maxima in the magnetization relaxation rate at that applied field, sample magnetization and measurement temperature. Examining the data from high to low temperature: first, above $`1.6`$ K the data are in good accord with previous experiments . Peaks appear at approximately equally spaced field intervals ($`0.45`$ T) and their amplitude is a strong function of the temperature. 
As the temperature is reduced, higher numbered maxima in $`dM/dH`$ appear, while lower field peaks decrease in amplitude, again consistent with the model of thermally assisted tunnelling. Second, on lowering the temperature, peaks shift continuously to higher fields. For instance, peak $`n=5`$ is at $`2.20`$ T at $`2.2`$ K and by $`1.4`$ K has shifted to $`2.33`$ T. Third, and most intriguing, at lower temperature ($`T<1.2`$ K) peaks in $`dM/dH`$ shift dramatically in position as a function of temperature. This is well illustrated by the behaviour of the $`n=7`$ peak as the temperature is reduced. This peak first appears at $`1.6`$ K at $`H=3.10`$ T, grows in amplitude and shifts to significantly higher fields on lowering the temperature and, at $`1.0`$ K, abruptly develops a high field shoulder. On slightly lowering the temperature to $`0.9`$ K, “spectral” weight is transferred into this shoulder, and at the lowest temperatures the peak remains fixed in position. This peak has shifted to $`3.53`$ T, by a full field quantum $`H_o`$, in this temperature interval. Shifts in peak position of this order are seen for all the steps observed at low temperature ($`5\le n\le 9`$). Finally, note that at $`0.6`$ K and lower temperature, the maxima remain fixed in field and approximately constant in amplitude. The dependence of the $`dM/dH`$ peak positions on temperature is summarized in Fig. 3. Here the peak positions, in internal field ($`H_{int}=H+4\pi M`$) divided by the field quantum $`H_o`$, are plotted versus temperature. Note that peaks initially shift gradually to higher magnetic fields as the temperature is lowered. Between $`0.6`$ K and $`1.2`$ K the peak positions shift abruptly, with higher step indices changing position at lower temperature. The solid vertical line demarcates the approximate temperature at which these sudden shifts in step position occur. Below this line the step positions are independent of temperature. 
## 4 Discussion These experiments are the first to show that the steps in magnetization of Mn<sub>12</sub> are not always at regular magnetic field intervals and that steps shift to higher magnetic fields as the temperature is lowered. The data are consistent with the presence of a second order (fourth power) uniaxial term in the magnetic anisotropy of Mn<sub>12</sub> (eqn. 1) and indicate that lower lying magnetic sublevels dominate the tunnelling relaxation as the temperature is reduced (eqn. 2). We can clearly distinguish two different temperature regimes of the physical behaviour. At higher temperature (above the solid line in Fig. 3) the step positions shift gradually with temperature. This is the regime of thermally assisted tunnelling, where the magnetic escape is from thermally excited magnetic levels. We associate shifts in step positions with incremental changes in these levels, such as from $`m`$ to $`m+1`$, as the temperature is reduced, and/or changes in the relative importance of a few levels, which contribute “in parallel” to the magnetization relaxation at a given temperature. Fig. 4 shows the energy levels versus field for the Mn<sub>12</sub> spin Hamiltonian (eqn. 1). The vertical solid lines indicate the level coincidences important to the magnetic relaxation in this temperature and field range. For example, for $`n=6`$, the step positions we find are consistent with transitions from $`m=8`$ to $`m^{}=-2`$ and from $`m=7`$ to $`m^{}=-1`$. The second regime is at low temperature (below the solid line in Fig. 3), in which the positions of peaks in $`dM/dH`$ are independent of temperature. This is consistent with magnetization relaxation being of a pure quantum nature, i.e. tunnelling escape occurring from the lowest level in the metastable well, $`m=10`$. 
At $`0.6`$ K and below, the amplitudes of the peaks are also temperature independent; this is additional evidence for a quantum regime in Mn<sub>12</sub>, as it indicates that the relaxation of the magnetization in our measurement time window has become temperature independent. The most striking feature of these data is the abrupt shift in step position observed at the boundary between these temperature regimes. This shift suggests that different levels become important to the tunnelling relaxation in a narrow temperature interval. The shift in peak positions of Fig. 3 is consistent with the change in levels responsible for tunnelling illustrated by the dashed arrows in Fig. 4–$`m_{esc}`$ changes by $`2`$ in an interval of $`0.1`$ to $`0.2`$ K. The abrupt nature of this transition is evident directly from the magnetic hysteresis data in Fig. 2. For example, the shoulder which develops for the n=7 peak at 1.0 K (Fig. 2) indicates that the metastable levels $`m=8`$ ($`m^{}=-1`$) and $`m=10`$ ($`m^{}=-3`$) both contribute to the magnetic relaxation at this temperature, but at different easy axis magnetic fields. We now speculate as to the origin of the abrupt shift in step position with temperature. The most interesting possibility is that the abrupt shift in peak position we observe is evidence for a first-order transition between thermally assisted and pure quantum tunnelling, as suggested in ref. . In this theory it is shown that for a small uniaxial magnetic particle the energy of the quasilevels in the metastable magnetic well which dominate the magnetic escape need not be a smooth function of temperature. Larkin and Ovchinnikov called the smooth transition from classical thermal activation to pure quantum tunnelling a second-order transition , regarding the energy of escape as analogous to an order parameter in a phase transition problem. 
For small transverse fields, Chudnovsky and Garanin find that the transition can be first-order, with certain energy levels in the metastable well being skipped entirely as the temperature is varied. They considered both a large uniaxial spin in a quasiclassical approximation and small spins ($`S`$ from 10 to 100) with a discrete level spectrum , as in Mn<sub>12</sub>. There may, of course, be other explanations for the observed shift in relaxation peaks with temperature. A tacit assumption we make is that peaks in $`dM/dH`$ correspond to maxima in the relaxation rate at a given field (including the internal field). Then the maximum shift in relaxation rate maxima due to the internal fields is about $`4\pi M\simeq 0.1`$ T, which is smaller than the changes that we observe ($`\simeq 0.4`$ T). However, it may not be possible to account for the internal fields in this average way. For instance, the distribution of internal fields throughout the crystal likely changes in a complex manner during our field sweep experiments. It is also possible that sample heating plays a role. For example, if the sample were not in thermal equilibrium with the thermometers during the measurements and actually at a higher temperature, this would explain the temperature independent behaviour observed below $`0.6`$ K. Sample heating may also play another role. Relaxation of the magnetization leads to strong dissipation, which leads to sample heating, which, in turn, leads to enhanced magnetization relaxation. This positive feedback is at the origin of the magnetic avalanches reported in ref. . Perhaps this positive feedback could produce the shoulder-like structures we observe on certain $`dM/dH`$ peaks in Fig. 2 ($`0.9`$ and $`1.0`$ K). We estimate that the maximum heat generated in these experiments is $`H(dM/dt)=H(dM/dH)(dH/dt)\simeq 1`$ nW, which is of the same order as the heat dissipated in our Hall magnetometer ($`\simeq 2`$ nW). 
Nonetheless, we have sometimes observed magnetic avalanches at higher sweep rates ($`0.4`$ T/min), so we cannot completely rule out this possibility. Finally, while the peak shifts we observe are abrupt, the transition could still be continuous but occur over a narrow temperature interval. Further detailed experimentation and modeling are likely to clarify this situation. In summary, we have presented new data which suggest that the transition between thermally assisted and pure quantum tunnelling in Mn<sub>12</sub> may be abrupt, or first order. Importantly, these results show that magnetization relaxation and magnetic hysteresis measurements may be used to perform a new type of spectroscopy of the levels important to magnetic escape in Mn<sub>12</sub>. Further experiments and modeling will undoubtedly lead to a better fundamental understanding of this transition between thermally assisted and pure quantum tunnelling. ## 5 Acknowledgements This work was supported at NYU by NSF-INT grant 9513143 and NYU, at CCNY by NSF grant DMR-9704309, and at UCSD by NSF grant DMR-9729339. Corresponding author: andy.kent@nyu.edu
## 1 Introduction John von Neumann seems to have first clearly pointed out the conceptual difficulties that arise when one attempts to formulate the physical process underlying subjective observation within quantum theory . He emphasized the latter’s incompatibility with a psycho-physical parallelism, the traditional way of reducing the act of observation to a physical process. Based on the assumption of a physical reality in space and time, one either assumes a “coupling” (causal relationship — one-way or bidirectional) of matter and mind, or disregards the whole problem by retreating to pure behaviorism. However, even this may remain problematic when one attempts to describe classical behavior in quantum mechanical terms. Neither position can be upheld without fundamental modifications in a consistent quantum mechanical description of the physical world. These problems in formulating a process of observation within quantum theory arise as a consequence of quantum nonlocality (quantum correlations or “entanglement”, characterizing generic physical states), which in turn may be derived from the superposition principle. This fundamental quantum property does not even approximately allow the physical state of a local system (such as the brain or parts thereof) to exist . Hence, no state of the mind can exist “parallel” to it (that is, correspond to it one-to-one or determine it). The question does not only concern the philosophical issue of matter and mind. It has immediate bearing on quantum physics itself, as the state vector seems to suffer the well-known reaction upon observation: its “collapse”. For this reason Schrödinger once argued that the wave function might not represent a physical object (not even in a statistical sense), but should rather have a fundamental psycho-physical meaning. This situation appears so embarrassing to most physicists that many tried hard to find a local reality behind the formalism of quantum theory. 
For some time their effort was borne by the hope that quantum correlations could be understood as statistical correlations arising from an unknown ensemble interpretation of quantum theory. (An ensemble explanation within quantum theory can be excluded .) However, Bell’s work has demonstrated quite rigorously that any local reality — regardless of whether it can be experimentally confirmed in principle or not — would necessarily be in conflict with certain predictions of quantum theory. Less rigorous though still quite convincing arguments had been known before in the form of the dynamical completeness of the Schrödinger equation for describing isolated microscopic systems, in particular those containing quantum correlations (such as many-electron atoms). Although the evidence in favor of quantum theory (and against local realism) now appears overwhelming, the continuing search for a traditional solution may be understandable in view of the otherwise arising epistemological problems. On the other hand, in the absence of any empirical hint how to revise quantum theory, it may be wise to accept the description of physical reality in terms of non-local state vectors, and to consider its severe consequences seriously. Such an approach may be useful regardless of whether it will later turn out to be of limited validity. The conventional (“Copenhagen”) pragmatic attitude of switching between classical and quantum concepts by means of ad hoc decisions does, of course, not represent a consistent description. It should be distinguished from that wave-particle duality which can be incorporated into the general concept of a state vector (namely, the occupation number representation for wave modes). Unfortunately, personal tendencies for local classical or for non-local quantum concepts to describe “true reality” seem to form the major source of misunderstandings between physicists — cf. the recent discussion between d’Espagnat and Weißkopf . 
It appears evident that conscious awareness must in some way be coupled to local physical systems: our physical environment has to interact with and thereby influence our brains in order to be perceived. There is even convincing evidence supporting the idea that all states of awareness reflect physico-chemical processes in the brain. These neural processes are usually described by means of classical (that is, local) concepts. One may speculate about the details of this coupling on purely theoretical grounds , or search for them experimentally by performing neurological and psychological work. In fact, after a few decades of exorcizing consciousness from psychobiology by retreating to pure behaviorism, the demon now seems to have been allowed to return . On closer inspection, however, the concept of consciousness as used turns out to be a purely behavioristic one: certain aspects of behavior (such as language) are rather conventionally associated with consciousness. For epistemological reasons it is indeed strictly impossible to derive the concept of subjective consciousness (awareness) from a physical world. Nonetheless, subjectivity need not form an “epistemological impasse” (Pribram’s term ), but to grasp it may require combined efforts from physics, psychology and epistemology. ## 2 The Epistemology of Consciousness By inventing his malicious demon, Descartes demonstrated the impossibility of proving the reality of the observed (physical) world. This hypothetical demon, assumed to delude our senses, may thereby be thought of as part of (another) reality — similar to an indirect proof. On the other hand, Descartes’ even more famous cogito ergo sum is based on our conviction that the existence of subjective sensations cannot be reasonably doubted. Instead of forming an epistemological impasse, subjectivity should thus be regarded as an epistemological gateway to reality. 
Descartes’ demon does not disprove a real physical world — nor does any other epistemological argument. Rather does it open up the possibility of a hypothetical realism, for example in the sense of Vaihinger’s heuristic fictions . Aside from having to be intrinsically consistent, this hypothetical reality has to agree with observations (perceptions), and describe them in the most economical manner. If, in a quantum world, the relation between (ultimately subjective) observations and postulated reality should turn out to differ from its classical form (as has often been suggested for reasons of consistency), new non-trivial insights may be obtained. While according to Descartes my own sensations are beyond doubt to me, I cannot prove other people’s consciousness even from the presumption of their physical reality. (This was the reason for eliminating it from behavioristic psychology.) However, I may better (that is, more economically) “understand” or predict others’ behavior (which I seem to observe in reality) if I assume that they experience similar sensations as I do. In this sense, consciousness (beyond solipsism) is a heuristic concept precisely as reality. There is no better epistemological reason to exorcise from science the concept of consciousness than that of physical reality. A consequence of this heuristic epistemological construction of physical and psychic reality is, of course, that language gives information about the speaker’s consciousness. This argument emphasizes the epistemologically derived (rather than dynamically emerged) nature of this concept. However, only that part of others’ consciousness can be investigated, that manifests itself as some form of behavior (such as language). For this reason it may indeed be appropriate to avoid any fundamental concept of consciousness in psychobiology. This requires that conscious behavior (behavior as though being conscious) can be completely explained as emerging — certainly a meaningful conjecture. 
It would have to include our private (subjectively experienced) consciousness if a psycho-physical parallelism could be established. Only for such a dynamically passive parallelism (or epi-phenomenalism) would the physical world form a closed system that in principle allowed complete reductionism. Before the advent of quantum theory this ivory tower position of physics could be upheld without posing problems. If, on the other hand, the nonlocal quantum concepts describe real aspects of the physical world (that is, if they are truly heuristic concepts), the parallelism has to be modified in some way. Such a modification could some day even turn out to be important for experimental psychobiology, but it is irrelevant whenever nonlocality can be neglected, as for present-day computers or most neural processes. However, the quasi-classical activities of neurons may be almost as far from consciousness as an image on the retina. The concept of “wholeness” — often emphasized as being important for complex systems such as the brain — is usually insufficiently understood: in quantum theory it is neither a mere dynamical wholeness (that is, an efficient interaction between all parts) nor is it restricted to the system itself. Dynamical arguments require a kinematical wholeness of the entire universe (when regarded as composed of spatial parts) . It may be neglected for certain (“classical”) aspects only — not for a complete microscopic description that may be relevant for subjective perceptions. ## 3 Observing in a Quantum World One possible consequence of these problems that inevitably arise in quantum theory would be to abandon the heuristic and generally applicable concept of a physical reality — explicitly or tacitly. 
This suggestion includes the usual restriction to formal rules when calculating probability distributions of presumed classical variables in situations which are intuitively understood as “measurements” (but insufficiently or even inconsistently distinguished from normal “dynamical” interactions). Clearly, no general description of physical processes underlying awareness could be given in the absence of a physical reality, even though macroscopic behavior (including the dynamics of neural systems) can be described by means of the usual pragmatic scheme. This is quite unsatisfactory, since subjective awareness has most elementary meaning without external observation (that would be required in the Copenhagen interpretation). Epistemologically, any concept of observation must ultimately be based on an observing subject. This “non-concept” of abandoning microscopic reality is not at all required, as has been pointed out before . Instead, one may regard the state vector as “actual” and representing reality, since it acts dynamically (often as a whole) on what is observed. Moreover, in view of Bell’s analysis of the consequences of quantum nonlocality, it appears questionable whether anything, and what, might be gained from inventing novel fundamental concepts (hidden variables) without any empirical support. Two different solutions of the measurement problem then appear conceivable: von Neumann’s collapse or Everett’s relative state interpretation . In both cases a (suitably modified) psycho-physical parallelism can be re-established. A dynamical collapse of the wave function would require nonlinear and nonunitary terms in the Schrödinger equation . They may be extremely small, and thus become effective only through practically irreversible amplification processes occurring during measurement-like events. The superposition principle would then be valid only in a linearized version of the theory. 
While this suggestion may in principle explain quantum measurements, it would not be able to describe definite states of consciousness unless the parallelism were restricted to quasi-classical variables in the brain. Since nonlinear terms in the Schrödinger equation lead to observable deviations from conventional quantum theory, they should at present be disregarded for similar reasons as hidden variables. Any proposed violation of the superposition principle must be viewed with great suspicion because of the latter’s great and general success. For example, even superpositions of different vacua have proven heuristic (that is, to possess predictive power) in quantum field theory. The problems thus arising when physical states representing consciousness are described within wave mechanics by means of nonlinear dynamical terms could possibly be avoided if these nonlinearities were themselves caused by consciousness. This has in fact been suggested as a way to incorporate a genuine concept of free will into the theory , but would be in conflict with the hypothesis of a closed physical description of the world. If the Schrödinger equation is instead assumed to be universal and exact, superpositions of states of the brain representing different contents of consciousness are as unavoidable as Schrödinger’s superposition of a dead and alive cat. However, because of unavoidable interaction with the environment, each component must then be quantum correlated with a different (almost orthogonal) state of the rest of the universe. This consequence, together with the way we perceive the world, obviously leads to a “many-worlds” interpretation of the wave function.<sup>2</sup><sup>2</sup>2 Everett suggested “branching” wave functions in order to discuss cosmology in strictly quantum mechanical terms (without an external observer or a collapse). 
I was later led to similar conclusions as a consequence of unavoidable quantum entanglement — initially knowing of neither Everett’s nor Bell’s work. Unfortunately this name is misleading. The quantum world (described by a wave function) would correspond to one superposition of myriads of components representing classically different worlds. They are all dynamically coupled (hence “actual”), and they may in principle (re)combine as well as branch. It is not the real world (described by a wave function) that branches in this picture, but consciousness (or rather the state of its physical carrier), and with it the observed (apparent) “world” . Once we have accepted the formal part of quantum theory, only our experience teaches us that consciousness is physically determined by (factor) wave functions in certain components of the total wave function.<sup>3</sup><sup>3</sup>3 It would always be possible to introduce additional (entirely arbitrary and unobservable) variables as a hypothetical link between the wave function and consciousness. Given their (hypothetical) dynamics, the required quantum probabilities can then be postulated by means of appropriate initial conditions. An example is provided by the classical variables in Bohm’s pilot wave theory . The existence of “other” components (with their separate conscious versions of ourselves) is a heuristic fiction, based on the assumption of a general validity of dynamical laws that have always been confirmed when tested. When applied to classical laws and concepts, an analogous assumption leads to the conventional model of reality in space and time as an extrapolation of what is observed. In the quantum model, a collapse would represent a new kind of solipsism, since it denies the existence of these otherwise arising consequences. Everett related his branching to the practically irreversible dynamical decoupling of components that occurs when microscopic properties become correlated to macroscopic ones. 
This irreversibility requires specific initial conditions for the global state vector . Such initial conditions will then, for example, also cause a sugar molecule to permanently send retarded “information” (by scattering photons and molecules) about its handedness into the universe. In this way, the relative phases between its different handedness states become nonlocal, and thus cannot affect the physical states of local conscious observers (or states of their brains) any more. The separation of these components is dynamically “robust”. There is no precise localization of the branch cut (while a genuine dynamical collapse would have to be specified as a dynamical law). Nonetheless, Everett’s branching in terms of quasi-classical properties does not appear sufficient to formulate a psycho-physical parallelism. Neither would this branching produce a definite factor state for some relevant part of the brain, nor would every decoherence process somewhere in the universe describe conscious observation. Even within a robust branch, most parts of the brain will remain strongly quantum correlated with one another and with their environment. Everett’s branchings represent objective measurements — not necessarily conscious observations. A parallelism seems to be based on a far more fine-grained branching (from a local point of view) than that describing measurements, since it should correspond one-to-one to subjective awareness. The question, then, is: does the (not necessarily robust) branching that is conceptually required for defining the parallelism readily justify Everett’s (apparently objective) branching into quasi-classical worlds? 
The branching of the global state vector $`\mathrm{\Psi }`$ with respect to two different conscious observers ($`A`$ and $`B`$, say) may be written in their Schmidt-canonical forms , $$\mathrm{\Psi }=\sum _{n_A}c_{n_A}^A\chi _{n_A}^A\varphi _{n_A}^A=\sum _{n_B}c_{n_B}^B\chi _{n_B}^B\varphi _{n_B}^B,$$ $`(1)`$ where $`\chi ^{A,B}`$ are states of the respective physical carriers of consciousness (presumably small but not necessarily local parts of the central nervous system), while $`\varphi ^{A,B}`$ are states of the respective “rests of the universe”. In order to describe the macroscopic behavior of (human) observers, one has to consider the analogous representation with respect to the states $`\stackrel{~}{\chi }`$ of their whole bodies (or relevant parts thereof), $$\mathrm{\Psi }=\sum _{k_A}\stackrel{~}{c}_{k_A}^A\stackrel{~}{\chi }_{k_A}^A\stackrel{~}{\varphi }_{k_A}^A=\sum _{k_B}\stackrel{~}{c}_{k_B}^B\stackrel{~}{\chi }_{k_B}^B\stackrel{~}{\varphi }_{k_B}^B.$$ $`(2)`$ In particular, the central nervous system may be assumed to possess (usually unconscious) “memory states” (labelled by $`m_A`$ and $`m_B`$, say) which are similarly robust under decoherence as the handedness of a sugar molecule. Time-directed quantum causality (based on the initial condition for the global wave function) will then force the Schmidt states $`\stackrel{~}{\chi }^A`$ and $`\stackrel{~}{\chi }^B`$ to approximately factorize in terms of these memory states , $$\mathrm{\Psi }\approx \sum _{m_A\mu _A}\stackrel{~}{c}_{m_A\mu _A}^A\stackrel{~}{\chi }_{m_A\mu _A}^A\stackrel{~}{\varphi }_{m_A\mu _A}^A\approx \sum _{m_B\mu _B}\stackrel{~}{c}_{m_B\mu _B}^B\stackrel{~}{\chi }_{m_B\mu _B}^B\stackrel{~}{\varphi }_{m_B\mu _B}^B,$$ $`(3)`$ where $`\mu _A`$ and $`\mu _B`$ are additional quantum numbers. The “rest of the universe” thus serves as a sink for phase relations. 
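The Schmidt form of Eq. (1) can be obtained for any pure bipartite state from a singular value decomposition of its coefficient matrix. The following sketch (my own illustration, not part of the original text; the function name and the two-qubit example are invented) shows this with NumPy:

```python
import numpy as np

def schmidt_decomposition(psi, dim_a, dim_b):
    """Schmidt-decompose a pure bipartite state vector psi in C^dim_a (x) C^dim_b.

    Returns coefficients c_n and orthonormal factor states chi_n, phi_n such that
    psi = sum_n c_n * kron(chi_n, phi_n), i.e. the form of Eq. (1).
    """
    # Reshape the state vector into a dim_a x dim_b coefficient matrix; the
    # singular values of that matrix are the Schmidt coefficients c_n.
    m = psi.reshape(dim_a, dim_b)
    u, c, vh = np.linalg.svd(m, full_matrices=False)
    return c, u.T, vh  # rows of u.T are the chi_n, rows of vh are the phi_n

# Example: the maximally entangled two-qubit state (|00> + |11>)/sqrt(2).
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
c, chi, phi = schmidt_decomposition(psi, 2, 2)
# Two equal Schmidt coefficients: no single factor state can be singled out.
```

A state has a definite factor state for subsystem $`A`$ exactly when only one Schmidt coefficient is nonzero.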
In general, the robust quantum numbers $`m_A`$ and $`m_B`$ will be partly correlated — either because of special interactions between the two observers (communication), or since they have arisen from the same cause (that is, from observations of the same event). These correlations define the concept of objectivization in quantum mechanical terms. The genuine carriers of consciousness (described by the states $`\chi `$ in (1)) must not in general be expected to represent memory states, as there do not seem to be permanent contents of consciousness. However, since they may be assumed to interact directly with the rest of the $`\stackrel{~}{\chi }`$-system only, and since phase relations between different quantum numbers $`m_A`$ or $`m_B`$ would immediately become nonlocal, memory appears “classical” to the conscious observer. Each robust branch in (2), hence also each $`m`$-value, describes essentially an independent partial sum of type (1) when observed . The empirically relevant probability interpretation in terms of quasi-classical branches (including pointer positions) may, therefore, be derived from a similar (but fundamental) one for the subjective branching (with respect to each observer) that according to this interpretation defines the novel psycho-physical parallelism. As mentioned before, macroscopic behavior (including behavior as though being conscious) could also be described by means of the pragmatic (probabilistic) rules of quantum theory. An exact Schrödinger equation does not imply deterministic behavior of conscious beings, since one has to expect that macroscopic stimuli have microscopic effects in the brain before they cause macroscopic behavior. Thereby, interaction with the environment will intervene. 
Everett’s “relative state” decomposition (1) with respect to the subjective observer state $`\chi `$ may then considerably differ from the objectivized branching (3), which would be meaningful with respect to all conceivable “external” observations. This description may even put definite meaning into Bohr’s vague concept of complementarity.

## 4 Conclusion

The multi-universe interpretation of quantum theory (which should rather be called a multi-consciousness interpretation) seems to be the only interpretation of a universal quantum theory (with an exact Schrödinger equation) that is compatible with the way the world is perceived. However, because of quantum nonlocality it requires an appropriate modification of the traditional epistemological postulate of a psycho-physical parallelism. In this interpretation, the physical world is completely described by Everett’s wave function that evolves deterministically (Laplacean). This global quantum state then defines an indeterministic (hence “branching”) succession of states for all observers. Therefore, the world itself appears indeterministic — subjective in principle, but largely objectivized through quantum correlations (entanglement). This quite general scheme to describe the empirical world is conceptually consistent (even though the parallelism remains vaguely defined), while it is based on the presently best founded physical concepts. The latter may some day turn out to be insufficient, but it is hard to see how any future theory that contains quantum theory in some approximation may avoid similar epistemological problems. These problems arise from the contrast between quantum nonlocality (confirmed by Bell’s analysis as part of reality) and the locality of consciousness “somewhere in the brain”. Quantum concepts should be better founded than classical ones for approaching these problems. 
## 5 Addendum

The above-presented paper of 1981 has here been rewritten (with minor changes, mainly regarding formulations), since the solution of the quantum mechanical measurement problem proposed therein has recently gained interest, while the Epistemological Letters (which were used as an informal discussion forum between physicists and philosophers interested in the “new quantum debate”) are now hard to access. The dynamical dislocalization of phase relations (based on ) referred to in this article in order to justify robust Everett branches has since become better known as decoherence (see ), while the “multi-consciousness interpretation” mentioned in the Conclusion has been rediscovered on several occasions. It is now usually discussed as a “many-minds interpretation” , but has also been called a “many-views” or “many-perceptions” interpretation . The conjectured quasi-classical nature of those dynamical states of neurons in the brain which may carry memory or can be investigated “from outside” has recently been confirmed by quantitative estimates of their decoherence in an important paper by Tegmark . To most of these states, however, the true physical carrier of consciousness somewhere in the brain may still represent an external observer system, with whom they have to interact in order to be perceived. Regardless of whether the ultimate observer systems are quasi-classical or possess essential quantum aspects, consciousness can only be related to factor states (of systems assumed to be localized in the brain) that appear in branches (robust components) of the global wave function — provided the Schrödinger equation is exact. 
Environmental decoherence represents entanglement (but not any “distortion” — of the brain, in this case), while ensembles of wave functions, representing various potential (unpredictable) outcomes, would require a dynamical collapse (that has never been observed).<sup>4</sup><sup>4</sup>4 A collapse would be conceivable as well in Bohm’s theory, where memory and objective thought seems to be still described by neuronal quantum states, rather than by the classical configurations which according to Bell would have to describe physical states corresponding to consciousness. An essential role of the conscious observer for the occurrence of fundamental (though objective) quantum events was apparently suggested already by Heisenberg in his early “idealistic” interpretation of a particle trajectory coming into being by our act of observing it. Bohr, in his later Copenhagen interpretation, insisted instead that classical outcomes arise in the apparatus during irreversible measurements, which he assumed not to be dynamically analyzable in terms of a microscopic reality (cf. Beller ). This first classical link in the chain of interactions that forms the observation of a quantum system can now be identified with the first occurrence of decoherence (globally described as a unitary but practically irreversible dynamical process — see ). However, Bohr’s restriction of the applicability of quantum concepts as well as Heisenberg’s uncertainty relations were meant to establish bounds to a rational description of Nature. (The popular simplistic view of quantum theory as merely describing stochastic dynamics for an otherwise classical world leads to the well known wealth of “paradoxes”, which rule out any local description but have all been derived from the superposition principle, that is, ultimately from an entangled global wave function.) 
Von Neumann’s “orthodox” interpretation, on the other hand, is somewhat obscured by his use of observables, which should have no fundamental place in a theory of interacting wave functions. His postulate of a dynamical collapse representing conscious observations was later elaborated upon by London and Bauer , while Wigner suggested an active influence of the mind on the physical state (that should better not affect objectively measurable probabilities). Stapp expressed varying views on this problem, while Penrose speculated that human thinking, in contrast to classical computers, requires genuine quantum aspects (including superpositions of neuronal states and the collapse of the wave function).<sup>5</sup><sup>5</sup>5 There seems to be a certain confusion between logical statements (that is, tautologies), which have no implicit relation to the concept of time, and algorithmic procedures, performed in time in order to prove them. (Undecidable formal statements are meaningless, and hence not applicable.) A dynamical collapse of the wave function must not be regarded as representing “quantum logic” (or “logic of time”). This misconception appears reminiscent of the popular confusion of cause and reason. The Everett interpretation leads to its “extravagant” (unfamiliar and unobservable) consequences, because it does not invent any new laws, variables or irrational elements for the sole purpose of avoiding them. Lockwood is quite correct when he points out the essential role of decoherence for the many-minds interpretation (see also ). This unavoidable “continuous measurement” of all macroscopic systems by their environments (inducing entanglement) was indeed initially discussed precisely in order to support the concept of a universal wave function, in which “branching components” must be separately experienced. Heisenberg once recalled that Einstein had told him (my translation): “Only the theory may tell us what we can observe. 
… On the whole long path (ganzer langer Weg) from the event to its registration in our consciousness you have to know how Nature works.” Einstein thus did not suggest that the theory has to postulate “observables” for this purpose (as the first part of this quotation is often understood). Formal observables are useful only since the subsequent part of this chain of interactions can for all practical purposes be described in terms of classical variables, after initial values have been stochastically created for them somewhere in the chain. However, most physicists would now agree on what to do (in principle) if quantum effects should be relevant during some or all intermediary steps (cf. ): they would have to calculate the evolution of the corresponding series of entangled quantum systems, taking into account decoherence by the environment where required. There is then no need for genuine classical variables anywhere, since Tegmark’s decohered neuronal (quantum) states form an appropriate “pointer basis” for the application of quantum probabilities. These probabilities need not characterize a stochastic dynamical process (a collapse of the wave function), but would describe an objectivizable splitting of the (state of the) mind if the Schrödinger equation were exact. In Bohm’s quantum theory, on the other hand, states of the mind would be related to “surreal” classical trajectories which are guided by — hence would be aware of — a branch wave function only . Therefore, I feel that the Heisenberg-Bohr picture of quantum mechanics can now be claimed dead. Neither classical concepts, nor any uncertainty relations, complementarity, observables, quantum logic, quantum statistics, or quantum jumps have to be introduced on a fundamental level (see also Sect. 4.6 of ). In a recent experiment , interference was observed with mesoscopic molecules, and similar experiments have been proposed for small viruses. 
The time may be ripe to discuss the consequences of similar Gedanken experiments with objects carrying some primitive form of “core consciousness” — including an elementary awareness of their path through the slits. How can “many minds” be avoided if their coherence can be restored? I wish to thank Erich Joos for various helpful comments.
# Color Evaporation Induced Rapidity Gaps

## I Introduction

We show that the appearance of rapidity gaps between jets, observed at the HERA and Tevatron colliders, can be explained by supplementing the string model with the idea of color evaporation, or soft color. The inclusion of soft color interactions between the dynamical partons, which rearranges the string structure of the interaction, leads to a parameter-free calculation of the formation rate of rapidity gaps. The idea is extremely simple. As in the string model, the dynamical partons are those producing the hard interactions, together with the left-over spectators. A rapidity gap occurs whenever final state partons form color singlet clusters separated in rapidity. As the partons propagate within the hadronic medium, they exchange soft gluons which modify the string configuration. These large-distance fluctuations are probably complex enough for the occupation of different color states to approximately respect statistical counting. The probability to form a rapidity gap is then determined by the color multiplicity of the final states formed by the dynamical partons, and nothing else. All data obtained by the ZEUS , H1 , DØ , and CDF collaborations are reproduced when this color structure of the interactions is superimposed on the usual perturbative QCD calculation for the production of the hard jets. Rapidity gaps refer to intervals in pseudo-rapidity devoid of hadronic activity. The most simple example is the region between the final state protons, or their excited states, in $`pp`$ elastic scattering and diffractive dissociation. Such processes were first observed in the late 50’s in cosmic ray experiments and have been extensively studied at accelerators . Attempts to describe the formation of rapidity gaps have concentrated on Regge theory and the pomeron , and on its possible QCD incarnation in the form of a colorless 2-gluon state . 
After the observation of rapidity gaps in deep inelastic scattering (DIS), it was suggested that events with and without rapidity gaps are identical from a partonic point of view, except for soft color interactions that, occasionally, lead to a region devoid of color between final state partons. We pointed out that this soft color mechanism is identical to the color evaporation mechanism for computing the production rates of heavy quark pairs produced in color singlet onium states, like $`J/\psi `$. Moreover, we also suggested that the soft color model could provide a description for the production of rapidity gaps in hadronic collisions . Color evaporation assumes that quarkonium formation is a two-step process: the pair of heavy quarks is formed at the perturbative level with scale $`M_Q`$, and bound into quarkonium at scale $`\mathrm{\Lambda }_{QCD}`$ (see Fig. 1a). Heavy quark pairs of any color below open flavor threshold can form a colorless asymptotic quarkonium state provided they end up in a color singlet configuration after the inevitable exchange of soft gluons with the final state spectator hadronic system. The final color state of the quark pairs is not dictated by the hard QCD process, but by the fate of their color between the time of formation and emergence as an asymptotic state. The success of the color evaporation model in explaining the data on quarkonium production is unquestionable . We show here that the straightforward application of the color evaporation approach to the string picture of QCD readily explains the formation of rapidity gaps between jets at the Tevatron and HERA colliders.

## II Color Counting Rules

In the color evaporation scheme for calculating quarkonium production, it is assumed that all color configurations of the quark pair occur with equal probability. 
This must be a reasonable guess because, before formation as an asymptotic state, the heavy quark pair can exchange an infinite number of long wavelength soft gluons with the hadronic final state system in which it is immersed. For instance, the probability that a $`Q\overline{Q}`$ pair ends up in a color singlet state is $`1/(1+8)`$ because all states in $`\mathrm{𝟑}\otimes \overline{\mathrm{𝟑}}=\mathrm{𝟖}\oplus \mathrm{𝟏}`$ are equally probable. We propose that the same color counting applies to the final state partons in high $`E_T`$ jet production. In complete analogy with quarkonium, the production of high energy jets is a two-step process where a pair of high $`E_T`$ partons is perturbatively produced at a scale $`E_T`$, and hadronizes into jets at a scale of order $`\mathrm{\Lambda }_{QCD}`$ by stretching color strings between the partons and spectators. The strings subsequently hadronize. Rapidity gaps appear when a cluster of dynamical partons, i.e. interacting partons or spectators, forms a color singlet (see Fig. 1b). As before, the probability for forming a color singlet cluster is inversely proportional to its color multiplicity. In this scenario we expect that quark-quark processes possess a higher probability to form rapidity gaps than gluon-gluon reactions, because of their smaller color multiplicity. This simple idea is at variance with the two-gluon exchange model for producing gaps, in which $`F_{QQ}<F_{GG}`$, where $`F_{QQ(GG)}`$ is the gap probability of reactions initiated by quark-quark (gluon-gluon) collisions. We already confronted these diverging predictions using the Tevatron data . We analyzed the gap fraction in $`p\overline{p}`$ collisions in terms of quark-quark, quark-gluon, and gluon-gluon subprocesses, i.e. $$F_{gap}=\sum _{ij}F_{ij}d\sigma _{ij}/d\sigma ,$$ (1) where $`i(j)`$ is a quark or a gluon and $`d\sigma =\sum _{ij}d\sigma _{ij}`$. We found that $`F_{QQ}>F_{GG}`$. 
This somewhat unexpected feature of the data is in line with the soft color idea. In order to better understand the soft color idea let us consider the formation of rapidity gaps between two jets in opposite hemispheres, which happens when the interacting parton forming the jet and the accompanying remnant system form a color singlet. This may occur for more than one subprocess $`N`$ and, therefore, the gap fraction is $$F_{gap}=\frac{1}{d\sigma }\sum _NF_Nd\sigma _N,$$ (2) where $`F_N`$ is the probability for gap formation in the $`N^{th}`$ subprocess, $`d\sigma _N`$ is the corresponding differential parton-parton cross section, and $`d\sigma `$ ($`=\sum _Nd\sigma _N`$) is the total cross section. In our model, the probabilities $`F_N`$ are determined by the color multiplicity of the state and spatial distribution of partons while $`d\sigma _N`$ is evaluated using perturbative QCD. The soft color procedure is obvious in a specific example: let us calculate the gap formation probability for the subprocess $`p\overline{p}\to Q\overline{Q}XY`$, proceeding through $`Q^V\overline{Q}^V\to Q\overline{Q}`$ scattering, where $`Q^V`$ stands for a $`u`$ or $`d`$ valence quark, and $`X`$ $`(Y)`$ is the diquark remnant of the proton (antiproton). The final state is composed of the $`X`$ ($`\mathrm{𝟑}\otimes \mathrm{𝟑}`$) color spectator system with rapidity $`\eta _X=+\infty `$, the $`Y`$ ($`\overline{\mathrm{𝟑}}\otimes \overline{\mathrm{𝟑}}`$) color spectator system with $`\eta _Y=-\infty `$, one $`\mathrm{𝟑}`$ parton $`j_1`$, and one $`\overline{\mathrm{𝟑}}`$ parton $`j_2`$. It is the basic assumption of the soft color scheme that by the time these systems hadronize, any color state is equally likely. One can form a color singlet final state between $`X`$ and $`j_1`$ since $`\mathrm{𝟑}\otimes \mathrm{𝟑}\otimes \mathrm{𝟑}=\mathrm{𝟏𝟎}\oplus \mathrm{𝟖}\oplus \mathrm{𝟖}\oplus \mathrm{𝟏}`$, with probability $`1/27`$. Because of overall color conservation, once the system $`Xj_1`$ is in a color singlet, so is the system $`Yj_2`$. 
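The statistical counts $`1/9`$ and $`1/27`$ used above can be checked mechanically: the number of singlets in a product of $`\mathrm{𝟑}`$’s and $`\overline{\mathrm{𝟑}}`$’s follows from integrating the SU(3) character over the group, which the Weyl integration formula reduces to a two-angle integral over the maximal torus. The sketch below is my own illustration (function names invented, grid integration assumed adequate for these low-degree characters), not part of the original calculation:

```python
import numpy as np

def singlet_count(n3, n3bar, grid=200):
    """Multiplicity of the singlet in 3^(x n3) (x) 3bar^(x n3bar), from the
    Weyl integration formula for SU(3): average of |Vandermonde|^2/3! times
    chi^n3 * conj(chi)^n3bar over the two torus angles."""
    t = np.linspace(0.0, 2.0 * np.pi, grid, endpoint=False)
    t1, t2 = np.meshgrid(t, t)
    e1, e2 = np.exp(1j * t1), np.exp(1j * t2)
    e3 = np.exp(-1j * (t1 + t2))        # unit determinant: angles sum to zero
    chi = e1 + e2 + e3                  # character of the fundamental 3
    vdm = (e1 - e2) * (e1 - e3) * (e2 - e3)
    haar = np.abs(vdm) ** 2 / 6.0       # Weyl measure, |Weyl group| = 3! = 6
    return (haar * chi ** n3 * np.conj(chi) ** n3bar).mean().real

def singlet_probability(n3, n3bar):
    """Statistical-counting probability that the cluster is a color singlet:
    number of singlet states over the total of 3^(n3 + n3bar) states."""
    return singlet_count(n3, n3bar) / 3.0 ** (n3 + n3bar)

# 3 (x) 3bar = 8 (+) 1 gives 1/9; the X j1 cluster, 3 (x) 3 (x) 3, gives 1/27.
```

The equally spaced grid integrates these trigonometric polynomials essentially exactly, so the counts come out as integers to machine precision.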
On the other hand, it is not possible to form a color singlet system with $`j_1`$ and $`Y`$. Moreover, to form a rapidity gap these systems ($`j_1X`$ and $`j_2Y`$) must not overlap in rapidity space. Since the experimental data consist of events where the two jets are in opposite hemispheres, the only additional requirements are $`j_1`$ to be in the same hemisphere as $`X`$, i.e. $`\eta _1>0`$, and $`j_2`$ to be in the opposite hemisphere ($`\eta _1\eta _2<0`$). In this configuration, the color strings linking the remnant and the parton in the same hemisphere will not hadronize in the region between the two jets. We have thus produced two jets separated by a rapidity gap using the color counting rules which form the basis of the color evaporation scheme for calculating quarkonium production. As is clear from the above example, the application of the soft color model for rapidity gap formation requires the analysis of the color multiplicity of the possible partonic subprocesses. In the next sections, we apply this model to the production of rapidity gaps between jets in photoproduction at HERA and hadronic collisions at the Tevatron, spelling out the relevant counting rules.

## III Rapidity Gaps at HERA

The parton diagram for dijet photoproduction is shown in Fig. 2a. It is related to the $`ep`$ cross section by $$\sigma _{ep\to j_1j_2XY}(s)=\int _{y_{min}}^{y_{max}}\int _{Q_{min}^2}^{Q_{max}^2}F_e^\gamma (y,Q^2)\sigma _{p\gamma \to j_1j_2XY}(W)𝑑y𝑑Q^2,$$ (3) where $`W`$ is the center-of-mass energy of the $`p\gamma `$ system, $`y=W^2/s`$ is the fraction of the electron momentum carried by the photon, and $`Q^2`$ is the photon virtuality. $`Q^2`$ ranges from $`Q_{min}^2=M_e^2y^2/(1-y)`$ to $`Q_{max}^2`$ which depends on the kinematic coverage of the experimental apparatus. 
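The $`Q^2`$ part of the convolution in Eq. (3) can be carried out in closed form for the equivalent-photon flux defined next. The sketch below is my own implementation (constant values and function names are assumptions, and the closed-form integral is my own elementary integration of the flux, not a formula quoted from the paper):

```python
import math

ALPHA = 1.0 / 137.036   # fine-structure constant (assumed value)
M_E = 0.000511          # electron mass in GeV (assumed value)

def q2_min(y):
    """Kinematic lower limit of the photon virtuality, Q^2_min = M_e^2 y^2/(1-y)."""
    return M_E ** 2 * y ** 2 / (1.0 - y)

def photon_flux(y, q2):
    """Equivalent-photon distribution F_e^gamma(y, Q^2) of Eq. (4), per GeV^2."""
    return ALPHA / (2.0 * math.pi * y * q2) * (
        1.0 + (1.0 - y) ** 2 - 2.0 * M_E ** 2 * y ** 2 / q2)

def flux_integrated(y, q2_max):
    """Eq. (4) integrated analytically over Q^2 from Q^2_min to q2_max."""
    lo = q2_min(y)
    return ALPHA / (2.0 * math.pi * y) * (
        (1.0 + (1.0 - y) ** 2) * math.log(q2_max / lo)
        - 2.0 * M_E ** 2 * y ** 2 * (1.0 / lo - 1.0 / q2_max))
```

For the ZEUS selection discussed below ($`Q_{max}^2=4`$ GeV<sup>2</sup>, $`0.2<y<0.85`$), this integrated flux is the weight multiplying $`\sigma _{p\gamma }`$ in Eq. (3).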
The distribution function of photons in the electron is $$F_e^\gamma (y,Q^2)=\frac{\alpha }{2\pi yQ^2}\left[1+(1-y)^2-\frac{2M_e^2y^2}{Q^2}\right],$$ (4) where $`M_e`$ is the electron mass and $`\alpha `$ is the fine-structure constant. The $`p\gamma `$ cross section is related to the parton-parton cross section by $$\sigma _{p\gamma \to j_1j_2XY}(W)=\sum _{a,b}\int \int F_p^a(x_a)F_\gamma ^b(x_b)\sigma _{ab\to p_1p_2}(\widehat{s})𝑑x_a𝑑x_b,$$ (5) where $`F_p^a(x_a)`$ ($`F_\gamma ^b(x_b)`$) is the distribution function for parton $`a`$ ($`b`$) in the proton (photon) and $`\sqrt{\widehat{s}}=\sqrt{x_ax_b}W`$ is the parton-parton center-of-mass energy. For direct $`p\gamma `$ reactions ($`b\equiv \gamma `$), $`F_\gamma ^\gamma (x_b)=\delta (1-x_b)`$. The hadronic system $`X`$ $`(Y)`$ is the proton (photon) remnant, and $`j_{1(2)}`$ is the jet which is initiated by the parton $`p_{1(2)}`$. The proton is assumed to travel in the positive rapidity direction, and the $`t`$-channel momentum squared is defined as $`t=(P_aP_1)^2`$, where $`P_a`$ is the momentum of the parton $`a`$, and $`P_1`$ is the momentum of the parton $`p_1`$. The expressions for the parton-parton invariant amplitudes can be found, for instance, in reference . We present in Table I the irreducible decomposition of active parton systems that yield color singlet states; products without a singlet, e.g. $`\mathrm{𝟑}\otimes \mathrm{𝟖}=\mathrm{𝟏𝟓}\oplus \overline{\mathrm{𝟔}}\oplus \mathrm{𝟑}`$, are omitted. Taking into account this table, it is simple to obtain the $`SU(3)_{color}`$ representations and the gap formation probability for all possible subprocesses. These are displayed in Table II. Notice that only resolved photon processes can produce rapidity gaps because there is no hadronic remnant associated with direct photons. One of the features of the color configurations shown in Table II is that, for all classes of subprocesses, when a color singlet is (not) allowed in one of the clusters, the same happens for the other one. 
Moreover, it can happen that the color multiplicities are different in the two clusters. In this case the probability for gap formation is given by the larger of the two probabilities because, once one cluster forms a color singlet, the other cluster must do so as well by overall color conservation. ### A ZEUS Results The ZEUS collaboration has measured the formation of rapidity gaps between jets produced in $`ep`$ collisions with $`0.2<y<0.85`$ and photon virtuality $`Q^2<4`$ GeV<sup>2</sup>. Jets were defined by a cone radius of 1.0 in the $`(\eta ,\varphi )`$ plane, where $`\eta `$ is the pseudorapidity and $`\varphi `$ is the azimuthal angle. In the event selection, jets were required to have $`E_T>6`$ GeV, to not overlap in rapidity ($`\mathrm{\Delta }\eta =|\eta _1-\eta _2|>2`$), to have a mean position $`|\overline{\eta }|<0.75`$, and to be in the region $`\eta <2.5`$. The cross sections were measured in $`\mathrm{\Delta }\eta `$ bins in the range $`2\le \mathrm{\Delta }\eta \le 4`$. For the above event selection, we evaluated the dijet differential cross section $`d\sigma ^{jets}/d\mathrm{\Delta }\eta `$, which is the sum of the direct ($`d\sigma _{dir}`$) and the resolved photon ($`d\sigma _{res}`$) cross sections. We used the GRV-LO distribution function for the proton, and the GRV one for the photon. We fixed the renormalization and factorization scales at $`\mu _R=\mu _F=E_T/2`$, and calculated the strong coupling constant for four active flavors with $`\mathrm{\Lambda }_{QCD}=350`$ MeV. Our results are confronted with the experimental data in Fig. 3a, showing that we describe well both the shape and the absolute normalization of the total dijet cross section. Notice that the bulk of the cross section originates from resolved events. Now we turn to dijet events showing a rapidity gap. 
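The statistical counting behind such tables can be illustrated with a small script. A hedged sketch, not the paper's code: the pairwise SU(3) decompositions below are standard group theory, and the gap probability of a two-parton cluster is taken as the fraction of its color states that are singlets.

```python
# Irreducible decompositions of the SU(3) products needed here, as lists of
# (representation, multiplicity); a trailing 'b' marks a conjugate (barred) rep.
PRODUCTS = {
    ('3', '3'):  [('6', 1), ('3b', 1)],                           # 3 x 3 = 6 + 3bar
    ('3', '3b'): [('1', 1), ('8', 1)],                            # 3 x 3bar = 1 + 8
    ('3', '8'):  [('15', 1), ('6b', 1), ('3', 1)],                # no singlet
    ('8', '8'):  [('1', 1), ('8', 2), ('10', 1), ('10b', 1), ('27', 1)],
}
DIM = {'1': 1, '3': 3, '3b': 3, '6': 6, '6b': 6, '8': 8,
       '10': 10, '10b': 10, '15': 15, '27': 27}

def singlet_probability(r1, r2):
    """Fraction of the r1 x r2 color states forming a singlet (statistical counting)."""
    key = (r1, r2) if (r1, r2) in PRODUCTS else (r2, r1)
    n_singlet = sum(mult for rep, mult in PRODUCTS[key] if rep == '1')
    n_total = DIM[r1] * DIM[r2]   # total number of color states in the cluster
    return n_singlet / n_total

# A quark-antiquark cluster is a singlet 1/9 of the time, a gluon-gluon
# cluster 1/64 of the time, while a quark-quark cluster never can be.
```

The actual entries of Tables I and II depend on which parton ends up clustered with which remnant, so the numbers above are only the generic building blocks.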
We evaluate the differential cross section $`d\sigma ^{gap}/d\mathrm{\Delta }\eta `$, which has two sources of gap events: color evaporation gaps ($`d\sigma _{cem}^{gap}`$) and background gaps ($`d\sigma _{bg}^{gap}`$). In our model, the gap cross section is the weighted sum over resolved events $$d\sigma _{cem}^{gap}=\underset{N}{\sum }F_Nd\sigma _{res}^N,$$ (6) with the gap probability $`F_N`$ for the different processes given in Table II. Background gaps are formed when the region of rapidity between the jets is devoid of hadrons because of statistical fluctuations of ordinary soft particle production. Their rate should fall exponentially as the rapidity separation $`\mathrm{\Delta }\eta `$ between the jets increases . We parametrize the background gap probability as $$F_{bg}(\mathrm{\Delta }\eta )=e^{b(2-\mathrm{\Delta }\eta )},$$ (7) where $`b`$ is a constant. The background gap cross section is then written as $$d\sigma _{bg}^{gap}=F_{bg}(\mathrm{\Delta }\eta )(d\sigma ^{jets}-d\sigma _{cem}^{gap}).$$ (8) Notice that the jet definition used by ZEUS implies that the gap cross section must be equal to the total dijet cross section at $`\mathrm{\Delta }\eta =2`$. This parametrization of the background takes this fact into account. Moreover, background gaps can be formed in both resolved and direct processes. Our results are compared with the experimental data in Fig. 3b, where we fitted $`b=2.9`$ and used the same QCD parameters as in Fig. 3a. This value of $`b`$ agrees with $`b=2.7\pm 0.3`$ found by the ZEUS collaboration when they approximated the non-background gap fraction by a constant. As we can see from this figure, the color evaporation model describes very well the gap formation between jets at HERA. It is noteworthy that for large values of $`\mathrm{\Delta }\eta `$ the contribution of the background gap is negligible. 
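Equations (7) and (8) combine into a simple rule for the observable gap cross section; a minimal sketch (function names are ours, with the fitted $`b=2.9`$ as default):

```python
import math

def background_gap_probability(delta_eta, b=2.9):
    """Eq. (7): F_bg = exp[b(2 - delta_eta)], equal to 1 at delta_eta = 2."""
    return math.exp(b * (2.0 - delta_eta))

def gap_cross_section(dsigma_jets, dsigma_cem_gap, delta_eta, b=2.9):
    """Color-evaporation gaps plus the background piece of Eq. (8):
    dsigma_gap = dsigma_cem_gap + F_bg * (dsigma_jets - dsigma_cem_gap)."""
    f_bg = background_gap_probability(delta_eta, b)
    return dsigma_cem_gap + f_bg * (dsigma_jets - dsigma_cem_gap)
```

At $`\mathrm{\Delta }\eta =2`$ the background probability is 1 and the gap cross section equals the total dijet cross section, as required by the ZEUS jet definition; at large $`\mathrm{\Delta }\eta `$ the background term dies off exponentially and only the color evaporation piece survives.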
In this region the data are correctly predicted by the color evaporation mechanism alone, with the probability of gap formation uniquely determined by the statistical counting of color states. The gap frequency $`F^{gap}(\mathrm{\Delta }\eta )=d\sigma ^{gap}/d\sigma ^{jets}`$ is shown in Fig. 4a, together with the separate contributions of the color evaporation mechanism and the background. Within the color evaporation framework we can easily predict other differential distributions for the gap events, which can be used to further test our model. As an example, we present in Fig. 4b the gap frequency predicted by the color evaporation model as a function of the jet transverse energy for large rapidity separations ($`\mathrm{\Delta }\eta >3`$), assuming that the background has been subtracted. There is currently no data on this distribution. ### B H1 Results We also performed an identical analysis for the data obtained by the H1 collaboration . They used the same cone size for the jet definition $`(\mathrm{\Delta }R=1)`$, and collected events produced in proton-photon reactions with center-of-mass energy in the range $`158<W<247`$ GeV and with photon virtuality $`Q^2<0.01`$ GeV<sup>2</sup>. They also imposed cuts on the jets: $`-2.82<\eta <2.35`$ and $`E_T>4.5`$ GeV. Our results are compared with the preliminary experimental data in Fig. 5a, where we used $`b=2.3`$ to describe the background in the H1 kinematic range. As before, color evaporation induces gap formation with a rate compatible with observation. We show in Fig. 5b our predictions for the background-subtracted gap frequency as a function of the jet transverse energy for large rapidity separations $`\mathrm{\Delta }\eta >3`$. ### C Survival Probability at HERA Our computation of gap rates using color evaporation is free of parameters and therefore predicts absolute rates, as well as their dependence on kinematic variables. 
In practice, this prediction is blurred by the necessity of introducing a gap survival probability $`S_p`$, which accounts for the fact that genuine gap events, as predicted by the theory, can escape experimental identification because additional partonic interactions in the same event produce secondaries which spoil the gap. Its value has been estimated for high energy $`p\overline{p}`$ interactions to be of the order of a few tens of percent. The fact that the color evaporation calculation correctly accommodates the absolute gap rate observed in $`p\gamma `$ collisions implies that $`S_p=1`$. There is a simple explanation for this value. The dijet cross section is dominated by resolved photons. However, for resolved processes, a secondary partonic interaction which could fill the gap is unlikely because it requires resolving the photon into two partons. Although this routinely happens at high energies for hadrons, it does not for photons. ## IV Rapidity Gaps at Tevatron The kinematics for dijet production in $`p\overline{p}`$ collisions is illustrated in Fig. 2b, where we denote by $`X`$ ($`Y`$) the proton (antiproton) remnant, and $`j_{1(2)}`$ is a parton giving rise to a jet. The proton is assumed to travel in the positive rapidity direction. The dijet production cross section is related to the parton-parton one via $$\sigma _{p\overline{p}\to j_1j_2XY}(s)=\underset{a,b}{\sum }\int F_p^a(x_a)F_{\overline{p}}^b(x_b)\sigma _{ab\to p_1p_2}(\widehat{s})𝑑x_a𝑑x_b,$$ (9) where $`s`$ ($`\widehat{s}=x_ax_bs`$) is the (subprocess) center-of-mass energy squared and $`F_{p(\overline{p})}^{a(b)}`$ is the distribution function for the parton $`a`$ $`(b)`$. We evaluated the dijet cross sections using MRS-J distribution functions with renormalization and factorization scales $`\mu _R=\mu _F=\sqrt{\widehat{s}}`$. 
The color evaporation model prediction for the gap production rates in $`p\overline{p}`$ collisions is analogous to the one in $`p\gamma `$ interactions, with the obvious replacement of the photon by the antiproton, represented as a $`\overline{\mathrm{𝟑}}\otimes \overline{\mathrm{𝟑}}\otimes \overline{\mathrm{𝟑}}`$ system. The color subprocesses and their respective gap formation probabilities are listed in Table III. Both experimental collaborations presented their data with the background subtracted. The CDF collaboration measured the appearance of rapidity gaps at two different $`p\overline{p}`$ center-of-mass energies. For the data taken at $`\sqrt{s}=1800`$ GeV, they required both jets to have $`E_T>20`$ GeV and to be produced on opposite sides ($`\eta _1\eta _2<0`$) within the region $`1.8<|\eta |<3.5`$. For the lower energy data, $`\sqrt{s}=630`$ GeV, they required both jets to have $`E_T>8`$ GeV and to be produced on opposite sides within the region $`|\eta |>1.8`$. Since the experimental distributions are normalized to unity on average, we do not need to introduce an ad-hoc gap survival probability. Therefore, our predictions do not exhibit any free parameter to be adjusted, leading to an important test of the color evaporation mechanism. In Figs. 6, 7, and 8 we compare our predictions with the experimental observations of the gap fraction as a function of the jets' transverse energy, their separation in rapidity, and the Bjorken-$`x`$ of the colliding partons, respectively. As we can see, the overall performance of the color evaporation model is good since it describes correctly the shape of almost all distributions. This is an impressive result since the model has no free parameters to be adjusted. The DØ collaboration has made similar observations at $`\sqrt{s}=1800`$ GeV. 
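The CDF selections just quoted are easy to encode as predicates; a sketch (function names are ours, cut values taken from the text):

```python
def passes_cdf_1800(et1, eta1, et2, eta2):
    """CDF sqrt(s) = 1800 GeV dijet selection quoted in the text."""
    opposite = eta1 * eta2 < 0                      # jets on opposite sides
    in_region = all(1.8 < abs(e) < 3.5 for e in (eta1, eta2))
    return et1 > 20 and et2 > 20 and opposite and in_region

def passes_cdf_630(et1, eta1, et2, eta2):
    """CDF sqrt(s) = 630 GeV dijet selection quoted in the text."""
    opposite = eta1 * eta2 < 0
    in_region = all(abs(e) > 1.8 for e in (eta1, eta2))
    return et1 > 8 and et2 > 8 and opposite and in_region
```

Such predicates are what one would apply to the partonic final state when integrating Eq. (9) over the experimental acceptance.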
They required both jets to have $`E_T>15`$ GeV, to be produced on opposite sides ($`\eta _1\eta _2<0`$) within the region $`1.9<|\eta |<4.1`$, and to be separated by $`|\mathrm{\Delta }\eta |>4.0`$. In Fig. 9 our results are compared with the experimental observations of the dependence of the gap frequency on the jet transverse energy, where we used a gap survival probability $`S_p=30\%`$ to reproduce the absolute normalization. This is consistent with qualitative theoretical estimates; see the discussion below. As we can see, the fraction of gap events increases with the transverse energy of the jets. This is expected since the dominant process for rapidity gap formation is quark-quark fusion, which becomes more important at larger $`E_T`$. Apart from the lowest transverse energy bin, data and theory are in good agreement. In Fig. 10 we compare our prediction for the dependence of the gap frequency on the separation between the jets. Agreement is satisfactory, although the absolute value of our predictions for low transverse energy is somewhat higher than the data, as shown in Fig. 9. Finally, in Fig. 11 we show our results for the mean value of the Bjorken-$`x`$ of the events, where all correlations between the jet transverse energy and rapidity have been included. Again, the agreement between theory and data is satisfactory except for the low transverse energy bins. ### A Survival Probability at Tevatron We estimated the survival probability of rapidity gaps formed in $`p\overline{p}`$ collisions by comparing our predictions with the values of the gap fraction actually observed. Assuming that the survival probability varies only with the collision center-of-mass energy, and not with the jet transverse energy, we evaluated the average survival probability $$S_p=\frac{F_{exp}^{gap}}{F_{cem}^{gap}}.$$ (10) In order to extract $`\overline{S}_p`$ we combined the DØ and CDF data available at each center-of-mass energy: 630 and 1800 GeV. 
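Combining the two energies involves a simple ratio with uncorrelated errors added in quadrature; a sketch (helper name ours) using the averages quoted in the text:

```python
import math

def ratio_with_error(a, da, b, db):
    """Ratio a/b, with relative errors added in quadrature (uncorrelated inputs)."""
    r = a / b
    dr = r * math.sqrt((da / a) ** 2 + (db / b) ** 2)
    return r, dr

# Extracted average survival probabilities quoted in the text (in percent):
sp_630, dsp_630 = 65.4, 12.0
sp_1800, dsp_1800 = 34.4, 3.3
r, dr = ratio_with_error(sp_630, dsp_630, sp_1800, dsp_1800)
```

The result reproduces the quoted ratio $`\overline{S}_p(630)/\overline{S}_p(1800)=1.9\pm 0.4`$.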
We found $`\overline{S}_p(1800)=34.4\pm 3.3`$% and $`\overline{S}_p(630)=65.4\pm 12`$%, a value compatible with the calculation of Ref. based on the Regge model, which yields $`S_p(1800)=32.6`$%. For individual contributions and further details see Table IV. Moreover, we have that $`\overline{S}_p(630)/\overline{S}_p(1800)=1.9\pm 0.4`$, which is compatible with the theoretical expectation $`2.2\pm 0.2`$ obtained in Ref. . Using the extracted values of the survival probability, we contrasted the color evaporation model predictions for the gap fraction corrected by $`\overline{S}_p`$ ($`F_{cor}^{gap}=F_{cem}^{gap}\times \overline{S}_p`$) with the experimental data in Table IV. We can also compare the ratio $`R=F_{cor}^{gap}(630)/F_{cor}^{gap}(1800)`$ with the experimental result. DØ has measured this fraction for jets with $`E_T>12`$ GeV for both energies, and they found $`R=3.4\pm 1.2`$; we predict $`R=2.5\pm 0.5`$. On the other hand, CDF measured this ratio using different values for $`E_T^{min}`$ at 630 GeV and 1800 GeV; they obtained $`R=2.0\pm 0.9`$ while we obtained $`R=2.0\pm 0.4`$ for the same kinematical arrangement. ## V Conclusion In summary, the occurrence of rapidity gaps between hard jets can be understood by simply applying the soft color, or color evaporation, scheme for calculating quarkonium production, to the conventional perturbative QCD calculation of the production of hard jets. The agreement between data and this model is impressive. ###### Acknowledgements. This research was supported in part by the University of Wisconsin Research Committee with funds granted by the Wisconsin Alumni Research Foundation, by the U.S. Department of Energy under grant DE-FG02-95ER40896, by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), and by Programa de Apoio a Núcleos de Excelência (PRONEX).
# New Features in Non-coalescence of Tangent Liquid Bulks ## 1 Introduction When two drops of one liquid come in contact, they simply coalesce and form a single drop. This minimizes the free surface of the liquid, and accordingly the surface potential energy caused by surface tension. But can we set the circumstances such that two tangent drops do not coalesce, even when they are externally pushed toward each other? The answer is yes, and this is the main point of discussion in this article. We have recently found a class of effects in which two liquid bulks of the same material touch each other but do not coalesce. In all cases there is even an additional force pushing the bulks toward each other. These phenomena can be divided into two categories. The first category is the floating of a liquid drop on the free surface of the same liquid. This free surface is sometimes the flat surface of the liquid in a container and sometimes the surface of another drop. This phenomenon had been seen before in two different cases; we have observed a third case. In the first case a traveling water droplet floats on the surface of the water, as long as it is moving faster than some critical velocity . The second case is completely different in appearance: two liquid drops of the same kind but with different temperatures are pushed toward each other but do not coalesce , because the temperature gradient causes a surface tension gradient, which leads to the formation of a film of air between the drop surfaces . The third case is the floating of a droplet of water on the surface of water (fig 1). This droplet stands still with respect to the container, but there is a wave on the surface of the water which causes the non-coalescence. The second category is mainly based on currents on the surface of a liquid. The main point is that two currents, which may have slightly or completely different directions, collide with each other. 
Under some circumstances, instead of combining, the currents repel each other just like two colliding solid balls; see fig 2. In the following, we introduce the effects in some detail and describe some observations of the phenomena. ## 2 The Phenomena ### 2.1 Floating of droplets over the oscillating surface of water This phenomenon can simply be seen in a cylindrical plastic bottle of about 30 cm in height and 10 cm in diameter. If you fill the bottle with water to about three-fourths of its volume, and then strike the bottle repeatedly with your finger a little below the surface of the water, you will see some droplets of different diameters (1-3 mm). These droplets come from the walls, go to the center and stay there still for some seconds. They float only while you continue striking, and sink immediately if the surface stops oscillating. You can improve the number and lifetime of the drops up to minutes by changing the period of striking. More success will be gained with a little more practice. These are not air bubbles. They are much brighter, and if you blow at the surface, they move much more easily and faster than bubbles. You can see that their bottom is placed below the water surface, while bubbles sit completely above the surface. This shows that they are much heavier than air. What else can they be but just water? If you make more floating drops they will come together, and sometimes make big colonies of about fifty in number (fig 3). This colonization can be explained easily if we mention that each droplet deforms the surface of the water around it like a hole. If two droplets get near each other they slide toward each other and make a single deeper hole that lets the droplets go down more. If you stop hitting the bottle, the drops coalesce into each other as time passes, and form bigger and bigger droplets, up to diameters of about 8 mm, till they all sink. In these colonies, you can see the rainbow if you watch carefully. 
### 2.2 Repulsion of two colliding water currents If we bring two narrow cylindrical currents of water (about 3 mm in diameter) slowly in contact with each other, sometimes instead of combining into one current, they just repel each other (fig 2), like a collision between two hard bodies. This state is stable for minutes, and it becomes more stable if we fix the initial conditions of the currents more precisely. We have dyed one stream of water and observed that the streams do not combine at all. Each current deforms slightly before the point of collision, so we can see some kind of wave on the surface of the currents. After the collision point, the shape of the water changes completely. We see that the cross section of the current starts changing its shape from circular to elliptical and vice versa, periodically, as we travel along the length of the flow. This is commonly expected, because the circular cross section is just the case of lowest potential, and with any change in the initial cross section we must expect oscillation of the shape. This phenomenon is not restricted to cylindrical currents. It is seen in other kinds of collision, for example a droplet that hits the surface of the water at a small angle and then jumps up from it. In another experiment we tested this for a collision between a cylindrical current and a planar one (see fig 4). To make the planar current, we used the scattering of a falling current of water from the surface of a spoon. You might have experienced this, especially if you are not an expert in washing dishes . In this case again, you can see some kind of deformation just like the case where both currents were cylindrical. ## 3 The Experiments ### 3.1 Floating Droplets The experiments with the droplets first began with the simple setup just described in the previous part. The first step was to make some machine that would do the work of a human. 
This machine made the work much easier, so that we could fix the frequency of the strikes and change it to any value we wanted. The most important result of this experiment was that there is a set of distinct frequencies at which droplets float best, both in number and lifetime, and a different set of frequencies at which droplets are created much more often than at other frequencies. These two different sets showed that the creation and the floating of the droplets are two completely distinct effects. We can simply float drops from an external source, for example a faucet. The size of the initial drops (before combining with other drops) depends mainly on the wave spreading around the plastic bottle, that is, on the sound of the strike. The result: drops made from bass strikes are bigger than drops made from treble ones. A very strange observation was that adding a little detergent to the water allows the drops to form and to float more. This behavior might be caused by the special form of detergent molecules, with two different ends. The second step was to change the shape of the container. We chose a square one, so that we could analyse the wave spreading over the surface more easily. The main change was in the set of frequencies. Floating of drops was just like before. Next, we tried to analyze the movements inside the drop. We used ink, since we could easily inject it into the drop (fig 5). In this way we can even make the drops bigger or smaller by injecting water into them, or sucking it out of them. The result was that the velocities in the drop are very small in comparison with the velocities on the free surface, because the ink diffuses much more slowly in the drop than on the surface of the water. It means that the water in the drop is somehow motionless. The ink in the drops cannot spread to the water below until the whole body of the drop sinks (coalesces). This is just like the two flows mentioned in the previous part. Other experiments that have been done are: 1. 
If we look from below the surface, we see the bright surface of the drops; this shows that there is something with a refraction coefficient different from water between the two water layers (otherwise we could not see the layer), and what else can it be but air. 2. We can separate one drop from the others by enclosing it with a ring. This ring does not allow the drop to travel freely over the surface and also prevents other drops from coming into contact with it. By this trick we can improve the lifetime of drops very much. 3. We can make the air pressure higher and see a great improvement in the number and lifetime of the drops. The experiment is done in a closed plastic cola bottle, which had been shaken before the experiment began and as a result had a high pressure inside. We can easily decrease the pressure and see the phenomenon vanish. 4. Some drops do not sink at once. These drops lose part of their mass and make a smaller floating drop. This can be repeated two or three times. An existing explanation is that this is due to the existence of impurities on the free surface of the water . If the impurities are larger than the air gap between the two layers, then, because water wets the impurities, a bridge forms between the bodies that transfers water from the droplet to the free surface. Sometimes the flow caused by this transfer shoots the impurity out and we gain a new, smaller floating droplet. The main point here is the wetting quality of water. We know that in a non-wetting liquid, for example mercury, the impurities on the surface do not let the drops coalesce. ### 3.2 Colliding Flows In the previous section we mentioned how one can see this effect. Here we present some other experiments related to the phenomenon. Experiments show that if the velocity of the current is less than, or more than, some critical values, this phenomenon can hardly be seen. 
The lower limit is, roughly, a few centimeters per second, but we could not find any fixed value, because it depended strongly on the circumstances. We can just say that there is some lower limit. There is also an upper limit for the velocities, of about a few hundred centimeters per second, though it is not a very sharp limit. In fact, the effect can be seen even at higher velocities, but it becomes very sensitive and you should fix the initial conditions much more precisely to obtain an appreciable stability lifetime. In all the related cases, we see that there is some kind of movement on the surface of the liquid bulks. So we can agree with the idea that the water drags air between the two layers, and the film of air prevents the two surfaces from reaching each other and coalescing. Though not expected, dimensional analysis shows that the order of the velocities, in all four cases, is the same. ## 4 Dimensional Analysis If we want to compare these new effects with the effects reported before, we should first estimate some parameters of those experiments. As said before, the most important parameter is the surface velocity of two neighboring layers. We call this velocity $`v_s`$. In second place we can put the force, $`f`$, pushing the two bulks toward each other. We can then estimate the thickness, $`t`$, of the air layer and the pressure that it can support before it breaks down. ### 4.1 Surface velocities For the traveling droplets described in the introduction, we can take the drop speed as the surface velocity at the contact point. As described in the article, the drops are produced from a jet that throws them at a 45 degree initial angle, and the drops hit the surface just at their maximum height. Because of this specific initial projection, the horizontal speed equals the initial vertical speed. This means $$v_s=v_{0x}=v_{0y}=(2gh)^{1/2}$$ where $`h`$ is the maximum height of the drop's trajectory. If we take $`h=.05m`$, the estimated answer is $$v_s=1m/s$$ . 
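The projectile estimate above is a one-liner; a minimal numeric check (function name ours):

```python
import math

G = 9.81  # m/s^2

def drop_surface_speed(h):
    """v_s = (2 g h)^(1/2): the horizontal speed at the top of a 45-degree
    trajectory of maximum height h, equal to the initial vertical speed."""
    return math.sqrt(2.0 * G * h)

v = drop_surface_speed(0.05)  # close to 1 m/s for h = 5 cm, as in the text
```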
The next experiment is the non-coalescence of two drops with different temperatures. For this part we should first estimate another parameter, the velocity correlation length, $`l_c`$, caused by viscosity. We estimate it by pure dimensional analysis. As we know, the most useful dimensionless number in these kinds of problems is the Reynolds number. We also know that this number, in cases such as a rigid body in a flow, can be calculated as $`\rho vl/\eta `$, where $`\rho `$, $`\eta `$ and $`v`$ are the density, viscosity and velocity of the flowing liquid, and $`l`$ is a parameter with the dimension of length, related to the size and shape of the rigid body. So we can take $`\eta /\rho v`$ as $`l_c`$. Now we go back to the calculation of $`v_s`$. We know that here the cause of the surface motion is the temperature gradient, which causes a gradient of surface tension. We can write $$\frac{d\sigma }{dx}=\frac{d\sigma }{dT}\frac{dT}{dx}$$ Here $`\sigma `$ and $`T`$ are the surface tension and temperature. If we calculate the net force on a differential surface area we find that $`\mathrm{\Delta }f=w\mathrm{\Delta }xd\sigma /dx`$, where $`w`$ is the width and $`\mathrm{\Delta }x`$ is the length of the element. Now we consider that this force should move the volume $`w\mathrm{\Delta }xt`$. Here $`t`$ is the typical thickness of the moving layer, and we put $`l_c`$ in its place. So we can write a simple Newton equation and obtain $$a_s=\frac{d\sigma }{dx}\frac{1}{\rho l_c}$$ where $`a_s`$ is the surface acceleration. 
If we take the diameter of the drop, $`d`$, as an estimate for the length over which the surface material accelerates, and also put the expression for $`l_c`$ in its place, we find that $$v_s=(\frac{d\sigma }{dx}\frac{v_sd}{\eta })^{1/2}$$ Solving the equation for $`v_s`$ and putting $`(d\sigma /dT)\mathrm{\Delta }T/d`$ in place of $`d\sigma /dx`$, we find the final equation: $$v_s=(\frac{d\sigma }{dT})\frac{\mathrm{\Delta }T}{\eta }$$ On the basis of the data given in , we take the parameters as follows: $`\eta =5\times 10^{-3}kg/ms`$, $`\mathrm{\Delta }T=10K`$ and $`d\sigma /dT=5\times 10^{-4}kg/s^2K`$. The result of the estimate is $$v_s=1m/s$$ . The result is strikingly close to the previous case. This is good support for the idea that these effects share the same physics, and it also encourages us to continue this analysis for the other phenomena. For the other floating drops reported in this article we can easily estimate $`v_s`$ if we treat the oscillation of the water as a gravity wave with small amplitude. The answer is $`v_s=(A/\lambda )(2\pi g\lambda )^{1/2}`$, where $`\lambda `$ is the wavelength and $`A`$ is the amplitude. By taking $`\lambda =2mm`$ and $`(A/\lambda )=.5`$ we obtain $$v_s=.2m/s$$ again an answer close to the two previous ones. And at last, for the colliding currents: because the motion of each current is a simple free fall, we can take $`v_x<<v_y`$, and so $`v_s=v_y=(2gH)^{1/2}`$, where $`H`$ is the vertical distance between the falling point of the water and the collision point. Again we take $`H=.05m`$ and once more we find $$v_s=1m/s$$ Quite satisfying. We should mention here that the surface velocity at which each of these phenomena is seen is not a single value and varies over some range, but here, by a rough estimate, we showed that these ranges are centered near the same point for all of them. 
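The three remaining estimates can be checked the same way; a sketch with the parameter values quoted in the text (function names ours):

```python
import math

G = 9.81  # m/s^2

def marangoni_speed(dsigma_dT, delta_T, eta):
    """v_s = (dsigma/dT) * DeltaT / eta  (drops with a temperature difference)."""
    return dsigma_dT * delta_T / eta

def wave_surface_speed(amp_over_lam, lam):
    """v_s = (A/lambda) * (2 pi g lambda)^(1/2)  (small-amplitude gravity wave)."""
    return amp_over_lam * math.sqrt(2.0 * math.pi * G * lam)

def free_fall_speed(H):
    """v_s = (2 g H)^(1/2)  (colliding currents after a fall of height H)."""
    return math.sqrt(2.0 * G * H)
```

With the quoted inputs, `marangoni_speed(5e-4, 10, 5e-3)` gives 1 m/s, `wave_surface_speed(0.5, 0.002)` about 0.18 m/s (rounded to 0.2 in the text), and `free_fall_speed(0.05)` about 1 m/s, so all four cases do cluster around the same scale.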
### 4.2 The Force Between the Tangent Bulks As the second step we compare the forces, $`f_m`$, exerted on the bodies to push the bulks toward each other, in addition to the surface tension that normally causes the bulks to coalesce. For the case of the drops the only force is gravity. If we estimate it for the heaviest floating drops, with about $`6mm`$ diameter, we find the numerical result $`f_m=300\mu N`$. For the drops with a temperature difference, the maximum force reported is about $`100\mu N`$. And for the repulsion of the currents, we can easily estimate the strike. As we know, $`f=dp/dt`$. In the present situation we should use the Newton equation in the horizontal direction, so we can write $`f=d(mv_x)/dt`$. Considering a hard collision, we can rewrite the equation as $`f=2v_xdm/dt`$. With another rough estimate we can write $`\frac{dm}{dt}=\rho Sv_y`$ where $`S`$ is the cross section of the current. This gives the final equation: $$f_m=\rho \frac{\pi }{2}d^2v_xv_y$$ where $`d`$ is the diameter and $`(v_x,v_y)`$ is the velocity of the current just at the point of collision. This time the answer for the limit is about $`400\mu N`$. I especially thank A. Shariati for his valuable help in preparing the article and P. Forughi and S. Siavoshi for their help with the observations. I also thank M.R. Ejtehadi and N. Rivier for helpful comments. I am grateful to M. Hafezi and M. Sedighi for recording the experiments and preparing the pictures with patience.
# Quasi-one-dimensional superconductors: from weak to strong magnetic field ## I INTRODUCTION According to conventional wisdom, superconductivity and high magnetic field are incompatible. A magnetic field acting on the orbital electronic motion breaks down time-reversal symmetry and ultimately restores the metallic phase. In the case of singlet pairing, the coupling of the field to the electron spins also suppresses the superconducting order (Pauli or Clogston-Chandrasekhar limit ). Recently, this conventional point of view has been challenged, both theoretically \[2-6\] and experimentally , in particular in quasi-1D organic materials. Because of their open Fermi surface, these superconductors exhibit unusual properties in the presence of a magnetic field. As first recognized by Lebed’ , the possible existence of a superconducting phase at high magnetic field results from a magnetic-field-induced dimensional crossover that freezes the orbital mechanism of destruction of superconductivity. Moreover, the Pauli pair breaking (PPB) effect can be largely compensated by the formation of a Larkin-Ovchinnikov-Fulde-Ferrell (LOFF) state \[2-5,8\]. The aim of this paper is to discuss three aspects of high-field superconductivity in quasi-1D conductors: the formation of a LOFF state (i), and the respective roles of temperature-induced (ii) and magnetic-field-induced (iii) dimensional crossovers. We consider a quasi-1D superconductor with an open Fermi surface corresponding to the dispersion law ($`\mathrm{\hbar }=k_B=1`$) $$E_𝐤=v_F(|k_x|-k_F)+t_y\mathrm{cos}(k_yb)+t_z\mathrm{cos}(k_zd)+\mu .$$ (1) $`\mu `$ is the Fermi energy, and $`v_F`$ the Fermi velocity for the motion along the chains. $`t_y`$ and $`t_z`$ ($`t_y\gg t_z`$) are the transfer integrals between chains. The magnetic field $`H`$ is applied along the $`y`$ direction and we denote by $`T_{c0}`$ the zero-field transition temperature. 
\[In Bechgaard salts, $`T_{c0}\simeq 1`$ K, and $`t_c=t_z/2`$ is in the range 2-10 K .\] ## II LOFF STATE IN QUASI-1D SUPERCONDUCTORS We first discuss the effect of a magnetic field acting on the electron spins. Larkin and Ovchinnikov, and Fulde and Ferrell, have shown that the destructive influence of Pauli paramagnetism on superconductivity can be partially compensated by pairing up and down spins with a non-zero total momentum . At low temperature (i.e. high magnetic field), when $`T\lesssim T_0\simeq 0.56T_{c0}`$, this non-uniform state becomes more stable than the uniform state corresponding to a vanishing momentum of Cooper pairs. Quasi-1D superconductors appear very particular with respect to the existence of a LOFF state . The fundamental reason is that, because of the quasi-1D structure of the Fermi surface, the partial compensation of the Pauli pair breaking effect by a spatial modulation of the order parameter is much more efficient than in isotropic systems. Indeed, a proper choice of the Cooper pair momentum allows one to keep one half of the phase space available for pairing whatever the value of the magnetic field. Thus, the critical field $`H_{c2}(T)\propto 1/T`$ diverges at low temperature . ## III TEMPERATURE-INDUCED DIMENSIONAL CROSSOVER At low temperature, strongly anisotropic superconductors may exhibit a dimensional crossover that allows the superconducting phase to persist at arbitrarily strong field (in the mean-field approximation) in the absence of the PPB effect. Within the Ginzburg-Landau theory, at high temperature ($`T\lesssim T_{c0}`$) the mixed state is an (anisotropic) vortex lattice. The critical field $`H_{c2}(T)`$ is determined by $`H_{c2}(T)=\varphi _0/2\pi \xi _x(T)\xi _z(T)`$ where $`\xi _x`$ and $`\xi _z`$ are the superconducting coherence lengths and $`\varphi _0`$ the flux quantum. When the anisotropy is large enough, i.e. when $`t_z\lesssim T_{c0}`$, $`\xi _z(T)`$ can become, at low temperature, of the order of or even smaller than the lattice spacing $`d`$. 
Vortex cores, which have an extension $`\xi _z(T)`$ in the $`z`$ direction, can then fit between planes without destroying the superconducting order in the planes. The superconducting state is a Josephson vortex lattice and is always stable at low temperature for arbitrary magnetic field (see Fig. 3 in Ref. 4). A proper description of this situation, which takes into account the discreteness of the lattice in the $`z`$ direction, is given by the Lawrence-Doniach model . It is tempting to conclude that this temperature-induced dimensional crossover, together with the formation of a LOFF state, could lead to a diverging critical field $`H_{c2}(T)`$ at low temperature. It has been shown in Ref. 4 that this is not the case: the PPB effect strongly suppresses the high-field superconducting phase (this point is further discussed in sect. 4.1 (see Fig. 1.b)). Therefore the temperature-induced dimensional crossover cannot explain the critical field $`H_{c2}(T)`$ measured in Bechgaard salts in the case of spin singlet pairing. ## IV MAGNETIC-FIELD-INDUCED DIMENSIONAL CROSSOVER The microscopic justification of the Ginzburg-Landau or Lawrence-Doniach theory of the mixed state of type II superconductors is based on a semiclassical approximation (known as the semiclassical phase integral or eikonal approximation) that completely neglects the quantum effects of the magnetic field. At low temperature (or high magnetic field) and in sufficiently clean superconductors, when $`\omega _c\gtrsim T,1/\tau `$ ($`\omega _c=eHdv_F/\hbar `$ being the frequency of the semiclassical electronic motion, and $`\tau `$ the elastic scattering time), these effects cannot be neglected and an exact description of the field is required. To be more specific, we write the Green’s function (or electron propagator) as $$G(𝐫_1,𝐫_2)=\mathrm{exp}\left\{ie\int _{𝐫_1}^{𝐫_2}d𝐥\cdot 𝐀\right\}\overline{G}(𝐫_1-𝐫_2),$$ (2) where $`𝐀`$ is the vector potential.
The Ginzburg-Landau or Lawrence-Doniach theory identifies $`\overline{G}`$ with the Green’s function $`G_0`$ in the absence of magnetic field. The latter intervenes only through the phase factor $`ie\int _{𝐫_1}^{𝐫_2}d𝐥\cdot 𝐀`$, which breaks down time-reversal symmetry and tends to suppress the superconducting order. When $`\omega _c\gtrsim T,1/\tau `$, the approximation $`\overline{G}=G_0`$ breaks down and a proper treatment of the field is required. In isotropic systems, $`\overline{G}`$ includes all the information about Landau level quantization. In strongly anisotropic conductors, it describes a magnetic-field-induced dimensional crossover , i.e. a confinement of the electrons in the planes of highest conductivity. The same conclusion can be reached by considering the semiclassical equation of motion $`\hbar d𝐤/dt=e𝐯\times 𝐇`$ with $`𝐯=\nabla _𝐤E_𝐤`$. The corresponding electronic orbits in real space are of the form (neglecting for simplicity the (free) motion along the field) $`z=z_0+d(t_z/\omega _c)\mathrm{cos}(\omega _cx/v_F)`$. The electronic motion is extended along the chains (and the magnetic field direction), but confined with respect to the $`z`$ direction with an extension $`d(t_z/\omega _c)\propto 1/H`$. In very strong field, $`\omega _c\gg t_z`$, the amplitude of the orbits becomes smaller than the lattice spacing $`d`$, showing that the electronic motion is localized in the $`(x,y)`$ planes. The latter being parallel to the magnetic field, the orbital frustration of the superconducting order parameter vanishes (there is no magnetic flux through the 2D Cooper pairs located in the $`(x,y)`$ planes). ### A Large anisotropy Fig. 1.a shows the phase diagram in the exact mean-field approximation in the case of a weak interplane transfer $`t_z/T_{c0}\simeq 1.33`$. $`Q`$ is a pseudo momentum for the Cooper pairs in the field \[3-5\]. At high temperature ($`T\lesssim T_{c0}`$), the mixed state is an Abrikosov vortex lattice, and $`T_c`$ decreases linearly with the field.
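The confinement criterion described above can be made concrete with a minimal sketch; all numbers are illustrative choices in units where $`\hbar =k_B=1`$ and are not taken from this paper:

```python
# Transverse extension d*(t_z/omega_c) of the semiclassical orbit
# z = z0 + d*(t_z/omega_c)*cos(omega_c*x/v_F). Since omega_c grows linearly
# with H, the extension falls off as 1/H; once omega_c exceeds t_z the
# amplitude drops below the lattice spacing d and the motion is confined
# to the (x,y) planes.

def orbit_amplitude(d, t_z, omega_c):
    """Transverse extension of the semiclassical orbit."""
    return d * t_z / omega_c

d, t_z = 1.0, 5.0  # lattice spacing (length unit) and interplane transfer
for omega_c in (1.0, 5.0, 25.0):        # omega_c is proportional to H
    amp = orbit_amplitude(d, t_z, omega_c)
    print(omega_c, amp, amp < d)        # True once the motion is confined
```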
$`T_c`$ does not depend on $`Q`$ in this regime, which is shown symbolically by the shaded triangle in Fig. 1.a. For $`H\gtrsim 0.3`$ T, the system undergoes a temperature-induced dimensional crossover, which leads to an upward curvature of the transition line. This dimensional crossover selects the value $`Q=0`$ of the pseudo momentum. The PPB effect becomes important for $`H\gtrsim 2`$ T and leads to the formation of a LOFF state ($`Q`$ then switches to a finite value). At higher field, $`Q\simeq 2\mu _BH/v_F`$ ($`\mu _B`$ is the Bohr magneton), and $`T_c`$ again exhibits an upward curvature. Fig. 1.b shows the phase diagram obtained in the Lawrence-Doniach model . The metallic phase is restored above a field $`H\simeq 2.7`$ T, and the LOFF state is stable only in a narrow window around $`H\simeq 2.6`$ T. Only when the magnetic-field-induced dimensional crossover is taken into account does the LOFF state remain stable at very high magnetic field. In a microscopic picture, the dimensional crossover shows up as a localization of the single-particle wave functions with a concomitant quantization of the spectrum into a Wannier-Stark ladder (i.e. a set of 1D spectra if we neglect the energy dispersion along the field). It is precisely this quantization that allows one to construct a LOFF state in a way similar to the 1D or (zero-field) quasi-1D case . Thus, when the field is treated semiclassically, the region of stability of the LOFF state in the $`H`$-$`T`$ plane becomes very narrow, as in isotropic 2D or 3D systems . ### B Smaller anisotropy For a smaller anisotropy, the coherence length $`\xi _z(T)`$ is always larger than the spacing between chains: $`\xi _z(T)\geq \xi _z(T=0)>d`$. There is no possibility of a temperature-induced dimensional crossover. Fig. 2.a shows the phase diagram without the PPB effect for $`t_z/T_{c0}\simeq 4`$. The low-field Ginzburg-Landau regime (corresponding to the shaded triangle in Fig. 2) is followed by a cascade of superconducting phases separated by first-order transitions.
These phases correspond to either $`Q=0`$ or $`Q=G\equiv \omega _c/v_F`$ . In the quantum regime, the field-induced localization of the wave functions plays a crucial role in the pairing mechanism. The transverse periodicity $`a_z`$ of the vortex lattice is not determined by the Ginzburg-Landau coherence length $`\xi _z(T)`$ but by the magnetic length $`d(t_z/\omega _c)`$. The first-order phase transitions are due to commensurability effects between the crystal lattice spacing $`d`$ and $`a_z`$: each phase corresponds to a periodicity $`a_z=Nd`$ ($`N`$ integer). $`N`$ decreases by one unit at each phase transition. The mixed state evolves from a triangular Abrikosov vortex lattice in weak field to a triangular Josephson vortex lattice in very high field (where $`N=2`$). It has been pointed out that $`a_z`$ decreases in both the Ginzburg-Landau and quantum regimes, but increases at the crossover between the two regimes, where $`\omega _c\sim T`$ . This suggests that the mixed state exhibits unusual characteristics in the quantum regime. Indeed, the amplitude of the order parameter and the current distribution show a symmetry of laminar type. In particular, each chain carries a non-zero total current (except in the last phase $`N=2`$). We expect these unusual characteristics to influence various physical measurements. Fig. 2.b shows the phase diagram when the PPB effect is taken into account. There is an interplay between the cascade of phases and the formation of a LOFF state. The latter corresponds to phases with $`Q=2\mu _BH/v_F`$ and $`Q=G-2\mu _BH/v_F`$. ## V CONCLUSION Our discussion shows that a temperature-induced dimensional crossover could explain recent experiments in the Bechgaard salts only in the case of triplet pairing, although this would require a value of the interchain coupling $`t_z`$ slightly smaller than what is commonly expected .
In the more likely case of singlet pairing, the existence of a high-field superconducting phase in quasi-1D conductors may result from a magnetic-field-induced dimensional crossover and the formation of a LOFF state. The crossover between the semiclassical Ginzburg-Landau and quantum regimes, which occurs when $`\omega _c\sim T`$, is accompanied by an increase of the transverse periodicity of the vortex lattice. This, as well as the characteristics of the vortex lattice in the quantum regime (laminar symmetry of the order parameter amplitude and the current distribution), suggests that the high-field superconducting phase in quasi-1D conductors should exhibit unique properties. ## VI REFERENCES 1. A.M. Clogston, Phys. Rev. Lett. 9, 266 (1962); B.S. Chandrasekhar, Appl. Phys. Lett. 1, 7 (1962). 2. A.G. Lebed’, JETP Lett. 44, 114 (1986); L.I. Burlachkov, L.P. Gor’kov and A.G. Lebed’, Europhys. Lett. 4, 941 (1987). 3. N. Dupuis, G. Montambaux and C.A.R. Sá de Melo, Phys. Rev. Lett. 70, 2613 (1993); N. Dupuis and G. Montambaux, Phys. Rev. B 49, 8993 (1994). 4. N. Dupuis, Phys. Rev. B 51, 9074 (1995). 5. N. Dupuis, Phys. Rev. B 50, 9607 (1994); J. Phys. I France 5, 1577 (1995). 6. Y. Hasegawa and M. Miyazaki, J. Phys. Soc. Jpn 65, 1028 (1996); M. Miyazaki and Y. Hasegawa, J. Phys. Soc. Jpn 65, 3283 (1996). 7. I.J. Lee, A.P. Hope, M.J. Leone, and M.J. Naughton, Synth. Met. 70, 747 (1995); Appl. Supercond. 2, 753 (1994); I.J. Lee, M.J. Naughton, G.M. Danner, and P.M. Chaikin, Phys. Rev. Lett. 78, 3555 (1997). 8. A.I. Larkin and Yu.N. Ovchinnikov, Sov. Phys. JETP 20, 762 (1965); P. Fulde and R.A. Ferrell, Phys. Rev. 135, A550 (1964). 9. W.E. Lawrence and S. Doniach, in Proceedings of the 12th International Conference on Low Temperature Physics LT12, Kyoto, edited by E. Kanda (Academic, New York, 1970). 10. L.W. Gruenberg and L. Gunther, Phys. Rev. 176, 606 (1968). 11. L.W. Gruenberg and L. Gunther, Phys. Rev. Lett. 16, 996 (1966).
# Early Evolution of Stellar Clusters ## 1 Introduction Although stellar clusters and associations contain but a small fraction of the stellar content of the Galaxy, it is becoming increasingly clear that most stars originate in clusters and that to understand how stars form we have to consider clusters of stars and how such environments can affect their formation. Surveys of nearby star forming regions have found that the majority of pre-main sequence stars are found in clusters (e.g. Lada et al. 1991; Lada, Strom & Myers 1993; see also Clarke, Bonnell & Hillenbrand 2000). The fraction of stars in clusters depends on the molecular cloud considered but generally varies from 50 to $`\gtrsim 90`$ per cent. This fraction appears to decrease with age. Young stellar clusters in the solar neighbourhood are found to contain anywhere from tens to thousands of stars, with typical numbers of around a hundred (Lada et al. 1991; Phelps & Lada 1997; Clarke et al. 2000). Cluster radii are generally a few tenths of a parsec, such that mean stellar densities are of the order of $`10^3`$ stars/pc<sup>3</sup> (cf. Clarke et al. 2000), with central stellar densities of the larger clusters (i.e. the ONC) being $`\gtrsim 10^4`$ stars/pc<sup>3</sup> (McCaughrean & Stauffer 1994; Hillenbrand & Hartmann 1998; Carpenter et al. 1997). These clusters are usually associated with massive clumps of molecular gas. Indeed, the mass in gas is typically much greater than that in stars. Gas can thus play an important role in the dynamics of the clusters and possibly affect the final stellar masses through accretion. Recently, it has become possible to conduct a stellar census in young stellar clusters (e.g. Hillenbrand 1997) by using theoretical pre-main sequence evolutionary tracks to estimate each star’s mass and age.
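As a quick consistency check of the numbers quoted above, a minimal sketch (with $`N=100`$ stars and $`r=0.3`$ pc chosen to match the "typical" values in the text) recovers a mean density of order $`10^3`$ stars/pc<sup>3</sup>:

```python
# Mean stellar density of a uniform spherical cluster of N stars and radius r.
import math

def mean_density(n_stars, radius_pc):
    """Mean density in stars per cubic parsec."""
    return n_stars / (4.0 / 3.0 * math.pi * radius_pc ** 3)

print(mean_density(100, 0.3))  # ~9e2 stars/pc^3, i.e. of order 10^3
```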
Unfortunately, there is a fair degree of uncertainty in these estimates due to the uncertainty in the tracks themselves (see Hillenbrand 1997 for an example) and due to the possibility of ongoing gas accretion affecting the star’s pre-main sequence evolution (Tout, Livio & Bonnell 1999). What is relatively certain in the census is that the cluster stars are generally young (ages $`\sim 10^6`$ years) and contain both low-mass and high-mass stars in the proportions expected from a field-star IMF (Hillenbrand 1997). Furthermore, there is a degree of mass segregation present in the clusters, with the most massive stars generally found in the cluster cores. In this paper, I review work that has been done on the evolution of young stellar clusters and how this work can be combined with observations of such clusters to investigate the relevant processes of star formation in clusters. ## 2 Cluster Morphology One of the major questions concerning stellar clusters is their formation mechanism. A number of different scenarios can be imagined, including a triggering where a shock induces star formation in a group, or a coagulation where a number of smaller groups merge to form one cluster (e.g. Klessen, Burkert & Bate 1998). Such processes can leave traces of their initial conditions in the cluster morphologies. For example, a triggered formation mechanism such as a shock or cloud-cloud collision leads to a flattened structure which then fragments to form a cluster (Whitworth et al. 1994, Bhattal et al. 1998). This flattened morphology should then be imprinted in the initial conditions of the stellar cluster. In order to use the morphology of present-day clusters to constrain possible initial conditions, we have to understand how the early evolution of the stellar cluster can affect its morphology. Boily, Clarke & Murray (1999; see also Goodwin 1997a) have investigated the evolution of a dynamically cold flattened stellar system.
Using N-body simulations, they showed how the cluster relaxes through both an initial violent relaxation and subsequent two-body relaxation. As these systems are generally younger than their relaxation time (see below), it is the violent relaxation which will have the greater effect on the cluster morphology. Boily et al. found that a flattened system becomes less elliptical, but that the violent relaxation does not completely remove the initial asphericity. For example, a system with an initial axis ratio of 5:1 relaxed to an axis ratio of 2:1. This suggests that clusters such as the ONC which are significantly elongated (axis ratio of 2:1, Hillenbrand & Hartmann 1998; see also Clarke et al. 2000) were initially very aspherical and thus could have been formed as the result of a triggering mechanism. Substructure, or subclustering, is another possibility in the cluster’s initial conditions that can constrain the formation mechanism. Bate, Clarke & McCaughrean (1998) used statistical tests for substructure in clusters based on the mean surface density of companions (see also Larson 1995). They found that the ONC is consistent with having no substructure, although a narrow window of subclustering is possible. In contrast, IC348 does appear, at least by eye, to have significant substructure (Lada & Lada 1995), but this has not been tested statistically. At present, most young clusters do not display significant substructure, but it is possible that the initial conditions did contain subclustering that has since been removed through the cluster's evolution (Bate et al. 1998). This can be investigated through N-body simulations (Scally, in preparation), investigating how the subclusters relax and dissolve into the surrounding larger-scale cluster. This occurs due to a combination of the tidal forces acting on the subcluster and its internal relaxation, which leads it to dissolve on a timescale proportional to the number of stars it contains.
## 3 Mass Segregation Young stellar clusters are commonly found to have their most massive stars in or near the centre (Hillenbrand 1997; Carpenter et al. 1997). This mass segregation is similar to that found in older clusters, but the young dynamical age of these systems offers the chance to test whether the mass segregation is an initial condition or due to the subsequent evolution. We know that two-body relaxation drives a stellar system towards equipartition of kinetic energy and thus towards mass segregation. In gravitational interactions, the massive stars tend to lose some of their kinetic energy to lower-mass stars and thus sink to the centre of the cluster (see Fig. 3). Numerical simulations of two-body relaxation have shown that while some degree of mass segregation can occur over the short lifetimes of these young clusters, it is not sufficient to explain the observations (Bonnell & Davies 1998). Thus the observed positions of the massive stars near the centre of clusters like the ONC reflect the initial conditions of the cluster and of massive star formation that occurs preferentially in the centre of rich clusters. Forming massive stars in the centre of clusters is not straightforward due to the high stellar density. For a star to fragment out of the general cloud requires that the Jeans radius, the minimum radius for the fragment to be gravitationally bound, $$R_J\propto T^{1/2}\rho ^{-1/2},$$ (1) be less than the stellar separation. This implies that the gas density has to be high, as expected at the centre of the cluster potential. The difficulty is that the high gas density implies that the fragment mass, being approximately the Jeans mass, $$M_J\propto T^{3/2}\rho ^{-1/2},$$ (2) is quite low. Thus, unless the temperature is unreasonably high in the centre of the cluster before fragmentation, the initial stellar mass is quite low.
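The two scalings in Eqs. (1) and (2) can be encoded in a minimal sketch; the reference values $`T_0`$, $`\rho _0`$ and the normalisations below are illustrative choices, not taken from the text:

```python
# Jeans scalings R_J ~ T^(1/2) rho^(-1/2) and M_J ~ T^(3/2) rho^(-1/2),
# normalised to a reference clump (T0, rho0) of radius R0 and mass M0.

def jeans_radius(T, rho, T0=10.0, rho0=1.0, R0=1.0):
    """Jeans radius relative to the reference clump."""
    return R0 * (T / T0) ** 0.5 * (rho / rho0) ** -0.5

def jeans_mass(T, rho, T0=10.0, rho0=1.0, M0=1.0):
    """Jeans mass relative to the reference clump."""
    return M0 * (T / T0) ** 1.5 * (rho / rho0) ** -0.5

# Raising the density 100-fold at fixed T shrinks both R_J and M_J tenfold,
# which is the tension described above: dense cluster cores favour small
# fragment masses unless the temperature is also high.
print(jeans_radius(10.0, 100.0), jeans_mass(10.0, 100.0))  # both ~0.1
```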
Equation (2) implies that the stars in the centre of the cluster should have the lowest masses, in direct contradiction with the observations. Therefore, we need a better explanation for the origin of massive stars in the centre of clusters. ## 4 Accretion and stellar masses Young stellar clusters are commonly found to be gas-rich, with typically 50 % to 90 % of their total mass in the form of gas (e.g. Lada 1991). This gas can interact with, and be accreted by, the stars as both move in the cluster. If significant accretion occurs, it can affect both the dynamics and the masses of the individual stars (e.g. Larson 1992). Simulations of accretion in clusters using a combined SPH and N-body code have found that accretion is a highly non-uniform process where a few stars accrete significantly more than the rest (Bonnell et al. 1997). An individual star’s accretion rate depends largely on its position in the cluster (see Fig. 4), with those in the centre accreting more gas than those near the outside. This process is termed “competitive accretion” (Zinnecker 1982): each star competes for the available gas reservoir, with the advantage going to those in the cluster centre that benefit from the overall cluster potential. Accretion in stellar clusters naturally leads to both a mass spectrum and mass segregation. Even from initially equal stellar masses, the competitive accretion results in a wide range of masses, with the most massive stars located in or near the centre of the cluster. Furthermore, if the initial gas mass-fraction in clusters is generally equal, then larger clusters will produce higher-mass stars and a larger range of stellar masses, as the competitive accretion process will have more gas to feed the few stars that accrete the most gas.
## 5 Formation of Massive Stars The formation of massive stars is problematic not only for their special location in the cluster centre, but also due to the fact that the radiation pressure from massive stars is sufficient to halt the infall and accretion (Yorke & Krugel 1977; Yorke 1993). This occurs for stars of mass $`\gtrsim 10M_{}`$. A secondary effect of accretion in clusters is that it can force the cluster to contract significantly. The added mass increases the binding energy of the cluster, while accretion of essentially zero-momentum matter removes kinetic energy. If the core is sufficiently small that its crossing time is relatively short compared to the accretion timescale, then the core, initially at $`n\sim 10^4`$ stars pc<sup>-3</sup>, can contract to the point where, at $`n\sim 10^8`$ stars pc<sup>-3</sup>, stellar collisions are significant (Bonnell, Bate & Zinnecker 1998). Collisions between intermediate-mass stars ($`2M_{}\lesssim m\lesssim 10M_{}`$), whose mass has been accumulated through accretion in the cluster core, can then result in the formation of massive ($`m\gtrsim 50M_{}`$) stars. This model for the formation of massive stars predicts that the massive stars have to be significantly younger than the mean stellar age, due to the time required for the core to contract (Bonnell et al. 1998). ## 6 Binaries in clusters Binary stars can play an important role in young stellar clusters as well as in older, evolved clusters. Their importance stems not from their influence on the dynamics of the cluster, but rather from how the cluster dynamics affect, and destroy, some of the binaries. The high binary frequency in nearby, non-clustered star-forming regions (e.g. Mathieu 1994; Ghez 1995; Mathieu et al. 2000) compared to the main sequence (Duquennoy & Mayor 1991) and to that in clustered star forming regions (Padgett, Strom & Ghez 1997; Petr et al. 1998; Mathieu et al.
2000) suggests that, as most stars are formed in clusters, the cluster environment plays an important role in setting the binary frequency. This can happen in one of two ways: either the cluster environment impedes binary formation or it subsequently destroys the binaries. Kroupa (1995) has shown how a 100 % binary frequency can be reduced through stellar encounters in clusters, and how the final binary frequency depends on the stellar density in the cluster. Binary systems wider than the hard-soft limit (basically where the orbital velocity equals the cluster velocity dispersion) are destroyed in encounters. Thus denser systems with higher velocity dispersions disrupt more binaries. This binary destruction in clusters can also be used as a tracer of the cluster evolution (Smith & Bonnell, in preparation). Figure 6 shows the resultant binary frequency versus separation for clusters of various stellar densities. Clusters with stellar densities of the order of $`10^3`$ stars pc<sup>-3</sup> only have a significant effect on systems wider than $`\gtrsim 10^3`$ au, whereas clusters with higher densities destroy closer systems. Thus, if a cluster does go through a very dense phase in order for collisions to occur (see above), then no binaries wider than 100 au should survive, with significant depletion extending down to 10 au (Fig. 6). ## 7 Cluster Dissolution The majority of young stellar clusters dissolve before their low-mass stars reach the main sequence. This can happen either through a sudden removal of the majority of the binding mass, the gas contained in the cluster, or through the dynamical interactions that put all of the cluster’s binding energy into a central binary. Gas removal is most important for large stellar clusters, which are more likely to contain massive stars. These stars can ionise the intracluster gas, which can then escape (unless the velocity dispersion is $`v_{\mathrm{disp}}\gtrsim 10`$ km s<sup>-1</sup>).
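The hard-soft boundary mentioned above follows from equating the orbital velocity $`\sqrt{GM/a}`$ to the cluster velocity dispersion $`\sigma `$, giving $`a=GM/\sigma ^2`$. A minimal sketch (the units and example numbers are my own illustrative choices, not from the text):

```python
# Hard-soft separation a = G*M_total/sigma^2: binaries wider than this have
# orbital velocities below the cluster dispersion and tend to be destroyed.

G_ASTRO = 4.301e-3   # G in pc (km/s)^2 per solar mass
PC_TO_AU = 206265.0  # astronomical units per parsec

def hard_soft_separation_au(m_total_msun, sigma_kms):
    """Separation (au) at which the orbital velocity equals the dispersion."""
    return G_ASTRO * m_total_msun / sigma_kms ** 2 * PC_TO_AU

# Two 0.5 Msun stars in a cluster with sigma = 2 km/s:
print(hard_soft_separation_au(1.0, 2.0))  # ~220 au
```

For these numbers the boundary falls at a few hundred au, consistent with the $`10^2`$-$`10^3`$ au scale of depletion discussed above; a denser phase with a larger $`\sigma `$ moves it inward.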
Gas removal on timescales less than the dynamical time is catastrophic for the cluster if gas is the major mass component (Lada, Margulis & Dearborn 1984; Goodwin 1997b). Gas removal over many dynamical times will leave a remnant cluster containing a fraction of the initial cluster stars. The second possibility for cluster dissolution is that two-body relaxation takes the cluster’s binding energy and puts it into one central binary, typically containing the most massive stars (Sterzik & Durisen 1998; Bonnell, in preparation). This occurs on a timescale similar to the relaxation time of the cluster as a whole (the difference being that it involves only the central binary as the energy source and that the binary shrinks during the energy exchange), $$t_{\mathrm{diss}}\sim Nt_{\mathrm{cross}}.$$ (3) Thus, small clusters will dissolve readily through two-body relaxation, whilst large clusters dissolve through the interaction of their massive stars with the gas. It should therefore only be the intermediate clusters, which have long dissolution times but do not contain very massive stars, that survive long enough to be considered as open or Galactic clusters. ## 8 Summary The early evolution of stellar clusters involves many interactions which affect the clusters’ and the individual stellar properties. Understanding these interactions, and their possible consequences, allows us to investigate probable cluster initial conditions and how they relate to observations of young stellar clusters. The dynamical interactions are of two types, pure stellar interactions and star-gas interactions. The first type are investigated through N-body simulations and include violent relaxation and two-body relaxation. Both of these decrease initial structure, including ellipticity and substructure, although some degree of structure is likely to remain long enough to be observable. The ellipticity of the ONC could thus indicate a highly aspherical initial condition.
Two-body relaxation also drives mass segregation, although this occurs over many dynamical times in large clusters, such that the position of the massive stars in the ONC reflects their initial location and thus constrains how massive stars form. Star-gas interactions include accretion of the gas onto the stars and the feedback (especially from massive stars) from the stars onto the gas. Although feedback has not yet been studied in this context, we are starting to understand the process of accretion in clusters. Gas accretion in a stellar cluster is highly competitive and uneven. Stars near the centre of the cluster accrete at significantly higher rates due to their position, where they are aided in attracting the gas by the overall cluster potential. This competitive accretion naturally results in both a spectrum of stellar masses and an initial mass segregation, even if all the stars originate with equal masses. Accretion in stellar clusters can also force the core of the cluster to contract sufficiently to allow stellar collisions to occur. Such a collisional model for the formation of massive stars evades the problem of accreting onto massive stars. Wide binary systems are destroyed by stellar encounters in clusters. The maximum separation which survives such interactions depends primarily on the cluster density and velocity dispersion. Wide systems are more likely to survive in less dense clusters than in the cores of dense clusters. Binaries can thus be used as a tracer of the cluster evolution. Finally, clusters dissolve either through gas removal (generally larger clusters) or through dynamical interactions which transfer all the cluster’s binding energy to a central binary (small-N clusters). Thus, clusters surviving to the main sequence and Galactic cluster status represent a small subset of the initial population of stellar clusters.
# 1 Introduction Recently, there has been an increase of interest in non-hermitian hamiltonians and in quantum phase transitions (typically from localized to extended wavefunctions) in systems characterized by them. There are in general two classes of problems in this context: one in which the non-hermiticity is in the nonlocal part and the other in which it is in the local part \[3-8\]. In the first category, one considers an imaginary vector potential added to the momentum operator in the Schrödinger hamiltonian. In the second category (non-hermiticity in the local term), an imaginary term is introduced in the one-body potential. It is well known from textbooks on quantum mechanics that, depending on the sign of the imaginary term, this means the presence of a sink (absorber) or a source (amplifier) in the system. It may be noted that this second category also has a counterpart in classical systems characterized by a Helmholtz (scalar) wave equation, where the practical application is in studies of the effects of classical wave (light) localization due to backscattering in the presence of an amplifying (lasing) medium that has a complex dielectric constant with spatial disorder in its real part . There is a common thread binding both problems though, namely that the spectrum of both becomes complex (the hamiltonian being non-hermitian or real non-symmetric), but can admit real eigenvalues as well. The common property is that the real eigenvalues represent localized states and the eigenvalues off the real line extended states. That it is so in the first category has been shown in the recent works starting with Hatano and Nelson and followed by others . In the rest of the paper we will be concerned with non-hermitian hamiltonians of the second category only.
For this category, with sources at each scatterer and in the absence of impurities, it seems counter-intuitive that there are localized solutions; but it has been shown in a simple way that the real eigenvalues always correspond to localized states . However, up to now the physical origin of this effect has not been explained. Since the localization is a consequence of the backscattering and the destructive interferences, we expect this effect to be related to the scaling behavior of the phases of the transmission and reflection amplitudes. This is the aim of this letter, where we examine numerically the effect of the amplification on the phases of the transmission and reflection amplitudes. To this end we use the Kronig-Penney model, which is a continuous multiband model. We first consider a periodic passive system in order to understand the behavior of the phase for localized and extended states. This allows us to explain the phase behavior in such amplifying systems. ## 2 Model description We consider a non-interacting electron of energy $`E`$ moving through a linear chain of $`\delta `$-potentials of strength $`\beta _n`$, where $`n`$ is the site position. At each site an imaginary term $`\eta `$ is included, leading to a non-hermitian hamiltonian. The Schrödinger equation then reads $$\left\{-\frac{d^2}{dx^2}+\underset{n}{\sum }(\beta _n+i\eta )\delta (x-n)\right\}\mathrm{\Psi }(x)=E\mathrm{\Psi }(x)$$ (1) Here $`\mathrm{\Psi }(x)`$ is the single-particle wavefunction at $`x`$, and $`E`$ is expressed in units of $`\hbar ^2/2m`$, with $`m`$ being the free electron effective mass. For simplicity, the lattice spacing is taken to be unity throughout this work. Since we are interested only in periodic systems, the potential strength $`\beta _n`$ is a constant $`\beta _0`$. The complex potential appearing in the local part of the hamiltonian in (1) leads either to complex eigenvalues and real wavenumbers, or to real eigenvalues and complex wavenumbers.
We consider the system ohmically connected to ideal leads, so that the second case applies since the total energy is conserved. In this case the imaginary part acts either as a sink (absorber) if $`\eta <0`$ or as a source (amplifier) if $`\eta >0`$ . From the computational point of view it is more convenient to consider the discrete version of the Schrödinger equation, called the generalized Poincaré map, which can be derived from (1) without any approximation. It reads $$\mathrm{\Psi }_{n+1}=\left[2\mathrm{cos}k+\frac{\mathrm{sin}k}{k}(\beta _0+i\eta )\right]\mathrm{\Psi }_n-\mathrm{\Psi }_{n-1}$$ (2) where $`\mathrm{\Psi }_n`$ is the value of the wavefunction at site $`n`$ and $`k=\sqrt{E}`$. This representation relates the values of the wavefunction at three successive discrete locations along the x-axis, without restriction on the potential shape at those points, and is very suitable for numerical computations. Equation (2) is solved iteratively by taking as initial conditions the following values at sites $`1`$ and $`2`$: $`\mathrm{\Psi }_1=`$ $`\mathrm{exp}(ik)`$ and $`\mathrm{\Psi }_2=`$ $`\mathrm{exp}(2ik)`$. We consider here an electron with wave number $`k_F`$ (at the Fermi energy) incident at site $`N+3`$ from the right (taking the chain length $`L=N`$, i.e. $`N+1`$ scatterers). The transmission and reflection amplitudes ($`t`$ and $`r`$) can then be expressed as $$t=\frac{2i\mathrm{exp}(ik(N+3))\mathrm{sin}k}{\mathrm{\Psi }_{N+3}\mathrm{exp}(ik)-\mathrm{\Psi }_{N+2}},$$ (3) and $$r=\frac{\mathrm{exp}(2ik(N+3))\left(\mathrm{\Psi }_{N+2}\mathrm{exp}(ik)-\mathrm{\Psi }_{N+3}\right)}{\mathrm{\Psi }_{N+3}\mathrm{exp}(ik)-\mathrm{\Psi }_{N+2}},$$ (4) where the terms $`\mathrm{exp}(ik(N+3))`$ and $`\mathrm{exp}(2ik(N+3))`$ appearing respectively in the transmission and reflection amplitudes originate from the fact that the electron is incident at site $`N+3`$ with an incident phase $`k(N+3)`$.
Therefore, these fictitious phases are to be discarded. Note here that the wave number $`k`$ appearing in the last expressions is that of the free electron moving in the leads and is different from that inside the system (which is complex). From Eqs. (3) and (4) the phases of the transmission and reflection amplitudes depend only on the values of the wavefunction at the end sites, $`\mathrm{\Psi }_{N+2}`$ and $`\mathrm{\Psi }_{N+3}`$, which are evaluated from the iterative equation (2). The phases of the transmission and reflection amplitudes ($`\mathrm{\Phi }_t`$ and $`\mathrm{\Phi }_r`$) are then the arguments of $`t`$ and $`r`$ respectively. These phases obviously vary between $`0`$ and $`2\pi `$. ## 3 Results and discussion As discussed below, the observed asymptotic localization in amplifying periodic systems should come from the phase interferences and the backscattering. Indeed, the maximum transmission length ($`L_{max}`$) in this case can be seen as the characteristic length separating the region where the amplification dominates from that where the interferences and backscattering dominate (below $`L_{max}`$). Let us first consider the effect of the amplification on the transmission and reflection phases. In order to understand the phase behavior in the case of constructive and destructive interferences, we start by examining its scaling in a passive periodic system. We fix in this case $`\beta _0=8`$ which, from Eqs. (1) and (2), leads us to the energy spectrum shown in Fig.1. In this spectrum, we choose the energies $`E=1`$, $`E=3`$ and $`E=5`$ to scan the phase scaling both in the gap and in the allowed band (Figs.2). The transmission phase in Fig.2a oscillates around $`\pi `$ with periods that decrease for energies away from the allowed band and increase inside this band. Therefore, a higher-frequency oscillation of the phase indicates localization. 
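The numerical procedure behind these curves is fully specified by Eqs. (2)-(4), and a minimal Python sketch of it may be useful (the function name and the parameter values below are our choices, not from the letter):

```python
import cmath
import math

def poincare_tr(E, beta0, eta, N):
    """Iterate the generalized Poincare map, Eq. (2), for a chain of
    N+1 delta scatterers and return the transmission and reflection
    amplitudes t, r of Eqs. (3)-(4)."""
    k = math.sqrt(E)                          # wave number in the leads
    c = 2.0 * math.cos(k) + (math.sin(k) / k) * (beta0 + 1j * eta)
    psi_prev = cmath.exp(1j * k)              # Psi_1
    psi = cmath.exp(2j * k)                   # Psi_2
    for _ in range(3, N + 4):                 # sites 3 .. N+3
        psi, psi_prev = c * psi - psi_prev, psi
    psi_n3, psi_n2 = psi, psi_prev
    denom = psi_n3 * cmath.exp(1j * k) - psi_n2
    t = 2j * cmath.exp(1j * k * (N + 3)) * math.sin(k) / denom
    r = cmath.exp(2j * k * (N + 3)) * (psi_n2 * cmath.exp(1j * k) - psi_n3) / denom
    return t, r

# Phases of t and r on [0, 2*pi); E = 5 lies in the allowed band for
# beta0 = 8, eta = 0 (passive system):
t, r = poincare_tr(E=5.0, beta0=8.0, eta=0.0, N=50)
phi_t = cmath.phase(t) % (2.0 * math.pi)
phi_r = cmath.phase(r) % (2.0 * math.pi)
```

For a passive chain (η = 0) this sketch conserves the flux, |t|² + |r|² = 1, while a finite η makes the chain absorb or amplify; scanning $`N`$ at fixed $`E`$ reproduces the length dependence of $`\mathrm{\Phi }_t`$ and $`\mathrm{\Phi }_r`$.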
In Fig.2b, the initial reflection phase seems to be always between $`\pi /2`$ and $`3\pi /2`$ for energies in the gap, which corresponds to localized states for such finite systems. Let us now examine the phase scaling for amplifying systems $`\eta >0`$ (see Figs.3). For simplicity we consider that the on-site potential is purely imaginary (i.e., $`\beta =0`$). We see in particular in these figures that both the reflection and transmission phases remain constant in the region where the transmission coefficient grows. It is important to notice that the reflection phase is greater than $`\pi /2`$, which indicates that there are destructive interferences in the region of growing transmission, but they seem not to affect it. In the region of maximum transmission (and reflection) both phases oscillate and the transport properties of the system seem to become sensitive to them. ## 4 Conclusion In this letter we used the effect of the amplification on the scaling behavior of both the transmission and reflection phases in order to interpret the recently observed behavior of the corresponding coefficients. The main result is a constant phase in the growth region, while the phase starts oscillating near the maximum transmission and reflection. However, the amplification effect has been studied here only in the allowed band of the corresponding passive periodic system (since $`\beta _0=0`$ when the amplification $`\eta `$ is applied, all the spectrum of the passive system is Bloch-like). Therefore, it is interesting to examine this effect in the gap of the corresponding passive system. In this case the transmission coefficient is exponentially decaying (the system being finite) and the Lyapunov exponent should be affected differently by the amplification. Acknowledgements NZ would like to thank the ICTP for its hospitality and the Arab Funds for their support during the progress of this work. 
Figure Captions Fig.1 Transmission coefficient (in a log scale) versus energy for $`\beta _0=8`$ and $`\eta =0`$ (passive system). Fig.2 Variation of the reflection and transmission phases with the length scale for $`\eta =0`$, $`\beta _0=8`$ and different energies $`1,3`$ and $`5`$. a) $`\mathrm{\Phi }_t`$, b) $`\mathrm{\Phi }_r`$. Fig.3 Variations of the reflection and transmission phases and the transmission coefficient with the length scale $`L`$ for $`\beta =0`$, $`\eta =0.05`$ and $`0.1`$ and the energy $`E=1`$. a) phase of the transmission, b) phase of the reflection, c) transmission coefficient.
# Compact groups and absolute extensors ## Abstract. We discuss compact Hausdorff groups from the point of view of the general theory of absolute extensors. In particular, we characterize the class of simple, connected and simply connected compact Lie groups as $`AE(2)`$-groups the third homotopy group of which is ℤ. This is the converse of the corresponding result of R. Bott. ###### Key words and phrases: Compact group, absolute extensor, inverse spectrum. The author was partially supported by an NSERC research grant. The following result \[3, Theorem 6.57 and Corollary 6.59\] plays an important role in the theory of compact groups. ###### Theorem A (Structure Theorem for Compact Groups). Let $`G`$ be a connected compact group. Then there exists a continuous homomorphism $$p:Z_0(G)\times \prod \{L_t:t\in T\}\to G$$ where $`Z_0(G)`$ stands for the identity component of the center of $`G`$ and $`L_t`$ is a simple, connected and simply connected compact Lie group, $`t\in T`$, such that $`\mathrm{ker}(p)`$ is isomorphic to a zero-dimensional central subgroup of $`G`$. If $`G`$ itself is a compact connected $`n`$-dimensional Lie group, then $`\mathrm{ker}(p)`$ is finite, the indexing set $`T`$ is also finite and $`Z_0(G)`$ is a torus of codimension $`|T|`$ (i.e. $`Z_0(G)`$ is isomorphic to the product of $`n-|T|`$ copies of the circle group 𝕋). In many instances the above statement allows us to split a reasonably stated problem about compact connected groups into the abelian and nonabelian parts (explicitly indicated in the domain of $`p`$). Obviously its efficiency depends on our ability to recognize these parts separately. This is why the second part of Theorem A is much more precise than the first. One aspect of this general point of view is reflected in the following two questions: * What is a (topological) characterization of tori, i.e. of (possibly uncountable) powers $`\text{𝕋}^\tau `$ of the circle group 𝕋. 
* What is a (topological) characterization of simple, connected and simply connected compact groups? Question (I) has recently been answered \[1, Theorem E\] within the general theory of absolute extensors (consult for a comprehensive discussion of related topics): ###### Theorem B. The following conditions are equivalent for a compact abelian group $`G`$: * $`G`$ is a torus. * $`G`$ is an $`AE(1)`$-compactum (= compact absolute extensor in dimension $`1`$). In these notes we give a complete solution of question (II). The characterization, as in the abelian case, is given within the theory of $`AE(n)`$-spaces. Lie groups, topologically being manifolds, are locally $`n`$-connected for each $`n\ge 0`$ (even locally contractible). Two basic examples of global connectivity properties are, of course, connectedness and simple connectedness. Let us examine these concepts more closely. Connected manifolds (because of their local niceness) are arcwise connected (i.e. $`0`$-connected). Note that this is not true for groups that are not locally connected. Unlike connectedness, arcwise connectedness is a concept of a homotopy theoretical nature and fits into a general extension theory. Indeed, consider the following extension problem $`E(Z,Z_0,g_0)`$ which asks whether a map $`g_0`$, defined on a closed subspace $`Z_0`$ of a compactum $`Z`$, with values in $`X`$, has an extension $`g`$ over the whole $`Z`$. Clearly a compactum $`X`$ is arcwise connected precisely when the extension problem $`E(B^1,\partial B^1,g_0)`$, where $`B^1`$ is the $`1`$-dimensional disk, is solvable for any $`g_0`$. Similarly the class of compacta for which the extension problem $`E(B^2,\partial B^2,g_0)`$, where $`B^2`$ is the $`2`$-dimensional disk, is solvable for any $`g_0`$ coincides with the class of simply connected spaces. 
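In diagram form, the extension problem $`E(Z,Z_0,g_0)`$ just described asks for a map $`g`$ making the square commute; a LaTeX rendering (our illustration, using the standard amscd package, not part of the original notes) is:

```latex
% Extension problem E(Z, Z_0, g_0): given g_0 on the closed subspace
% Z_0 of Z, find g defined on all of Z with g restricted to Z_0
% equal to g_0.
\[
\begin{CD}
Z_0 @>{g_0}>> X \\
@V{i}VV      @| \\
Z   @>>{g}>  X
\end{CD}
\]
% Here i is the inclusion of Z_0 into Z.  A compactum X is an
% AE(n)-compactum iff such a g exists whenever dim Z <= n.
```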
An important observation (see for instance \[4, Chapter 7, §53, Section IV, Theorems 1 and 1\]) here is that for a metrizable locally $`0`$-connected compactum $`X`$ the $`0`$-connectedness of $`X`$ guarantees the solvability of the extension problem $`E(Z,Z_0,g_0)`$ for any choice of at most $`1`$-dimensional compactum $`Z`$, closed subspace $`Z_0`$ of $`Z`$, and map $`g_0`$. In other words, $`X`$ in this case is an absolute extensor in dimension $`1`$ (shortly, an $`AE(1)`$-compactum). This fact sometimes is expressed by writing $`LC^0\cap C^0=AE(1)`$. These concepts can obviously be defined for any $`n`$ (recall that a compactum is an absolute extensor in dimension $`n`$, shortly an $`AE(n)`$-compactum, if all extension problems $`E(Z,Z_0,g_0)`$ with values in $`X`$ and with at most $`n`$-dimensional $`Z`$ are solvable) and, as is well known, the strictly decreasing sequences $$AE(1)\supset AE(2)\supset \cdots \supset AE(n)\supset AE(n+1)\supset \cdots $$ and $$LC^0\cap C^0\supset LC^1\cap C^1\supset \cdots \supset LC^{n-1}\cap C^{n-1}\supset LC^n\cap C^n\supset \cdots $$ are identical (i.e., $`AE(n)=LC^{n-1}\cap C^{n-1}`$) in the presence of metrizability. Nevertheless these concepts differ (i.e. $`AE(n)\ne LC^{n-1}\cap C^{n-1}`$) in general. For compact groups the difference between these concepts is very delicate. To see this it suffices to compare Theorem B with the result from : the validity of the statement "an arcwise connected (i.e. $`0`$-connected) compact abelian group is a torus" is undecidable within ZFC. Thus every connected Lie group topologically is an $`AE(1)`$-space and every connected and simply connected Lie group is an $`AE(2)`$-space. ###### Theorem C. The following conditions are equivalent for a compact group $`G`$: * $`G`$ is a simply connected $`AE(1)`$-compactum. * $`G`$ is an $`AE(2)`$-compactum. * $`G`$ is an $`AE(3)`$-compactum. * $`G`$ is a product of simple, connected and simply connected compact Lie groups. ###### Proof. Implications (c) $`\Rightarrow `$ (b) and (b) $`\Rightarrow `$ (a) are trivial. (a) $`\Rightarrow `$ (d). 
It follows from the proof of Theorem B that there exists a continuous homomorphism $`\alpha :G^{\prime }\times Z_0(G)\to G`$ where $`G^{\prime }`$ is the commutator subgroup of $`G`$, $`Z_0(G)`$ is the connected component of the center of $`G`$, and $`\mathrm{ker}(\alpha )`$ is a zero-dimensional compact group isomorphic to the (closed and central) subgroup $`G^{\prime }\cap Z_0(G)`$ of $`G`$. Further, there exists a continuous homomorphism $`\beta :\prod \{L_t:t\in T\}\to G^{\prime }`$ where each $`L_t`$, $`t\in T`$, is a simple, connected and simply connected compact Lie group, and $`\mathrm{ker}(\beta )`$ is zero-dimensional. Obviously, the kernel of the homomorphism $`p`$, defined as the composition $$p=\alpha (\beta \times \mathrm{id}_{Z_0(G)}):\prod \{L_t:t\in T\}\times Z_0(G)\stackrel{\beta \times \mathrm{id}}{\to }G^{\prime }\times Z_0(G)\stackrel{\alpha }{\to }G,$$ is zero-dimensional. Let us show that $`|\mathrm{ker}(p)|=1`$. Indeed, assuming that $`|\mathrm{ker}(p)|>1`$ and remembering that $`\mathrm{dim}\mathrm{ker}(p)=0`$, we can find a compact group $`\stackrel{~}{G}`$ and two continuous homomorphisms $$p_1:\prod \{L_t:t\in T\}\times Z_0(G)\to \stackrel{~}{G}\text{and}p_2:\stackrel{~}{G}\to G$$ such that $`p=p_2p_1`$, $`\mathrm{ker}(p_2)`$ is finite and $`|\mathrm{ker}(p_2)|>1`$. Clearly $`p_2`$ is a bundle (since $`\mathrm{ker}(p_2)`$ is a finite group) and $`G`$, as an $`AE(1)`$-compactum, is arcwise connected and locally arcwise connected. Also, by (a), $`\pi _1(G)=0`$. Consequently, by \[9, Corollary on p.66\], $`p_2`$ is a trivial bundle, i.e. $`\stackrel{~}{G}\cong G\times \mathrm{ker}(p_2)`$. Now observe that $`\stackrel{~}{G}`$, as a continuous image of $`\prod \{L_t:t\in T\}\times Z_0(G)`$, is connected. This implies that $`\mathrm{ker}(p_2)`$ is a singleton, which is impossible. This contradiction shows that $`|\mathrm{ker}(p)|=1`$ and hence $`\prod \{L_t:t\in T\}\times Z_0(G)\cong G`$. From this we conclude that $`Z_0(G)`$, as a retract of $`G`$, is an $`AE(1)`$-compactum with $`\pi _1(Z_0(G))=0`$. An abelian group which is an $`AE(1)`$-compactum is a torus (Theorem B). 
But the only simply connected torus is the singleton. Thus $`|Z_0(G)|=1`$ and $`G\cong \prod \{L_t:t\in T\}`$. (d) $`\Rightarrow `$ (c). Suppose that $`G`$ is a product $`\prod \{L_t:t\in T\}`$ of simple, connected and simply connected compact Lie groups. By a theorem of H. Cartan \[5, Theorem 3.7\], $`\pi _2(L_t)=0`$, $`t\in T`$. By the local contractibility of $`L_t`$, this means that $`L_t`$ is an $`AE(3)`$-compactum, $`t\in T`$. Then $`G`$, as a product of $`AE(3)`$-compacta, is an $`AE(3)`$-compactum. The proof is complete. ∎ ###### Remark 1. Implication (a) $`\Rightarrow `$ (b) of Theorem C extends a result of H. Cartan which states that if $`G`$ is a connected and (semi-) simple compact Lie group, then $`\pi _2(G)=0`$. We now present corollaries of Theorem C. The following answers question (II). ###### Corollary 1. The following conditions are equivalent for a non-trivial compact group $`G`$: * $`G`$ is an $`AE(2)`$-group with $`\pi _3(G)=\text{ℤ}`$. * $`G`$ is a simple, connected and simply connected Lie group. ###### Proof. (i) $`\Rightarrow `$ (ii). By Theorem C, $`G`$ is isomorphic to the product $`\prod \{L_t:t\in T\}`$ of simple, connected and simply connected compact Lie groups. Let $`T^{\prime }=\{t\in T:|L_t|>1\}`$. By (i) and a result of R. Bott \[5, Theorem 3.8\], we then have $$\text{ℤ}=\pi _3(G)=\pi _3\left(\prod \{L_t:t\in T^{\prime }\}\right)=\prod \{\pi _3(L_t):t\in T^{\prime }\}=\text{ℤ}^{T^{\prime }}.$$ This obviously is possible precisely when $`|T^{\prime }|=1`$, which implies that $`G`$ is isomorphic to some $`L_t`$ and hence is a simple, connected and simply connected compact Lie group. (ii) $`\Rightarrow `$ (i). Every connected and simply connected Lie group is an $`AE(2)`$-space. Since $`G`$ is simply connected, it is nonabelian. Since $`G`$ is simple, by the above cited result of R. Bott, $`\pi _3(G)=\text{ℤ}`$. ∎ ###### Remark 2. Implication (ii) $`\Rightarrow `$ (i) of Corollary 1 is due to R. Bott. ###### Corollary 2. There is no non-trivial compact $`AE(4)`$-group. ###### Proof. Let $`G`$ be a compact $`AE(4)`$-group. 
Since every $`AE(4)`$-compactum is an $`AE(3)`$-compactum, it follows from Theorem C that $`G`$ is a product $`\prod \{L_t:t\in T\}`$ of simple, connected and simply connected compact Lie groups. Since each retract of an $`AE(4)`$-compactum is an $`AE(4)`$-compactum, it follows that $`L_t`$ is an $`AE(4)`$-compactum, $`t\in T`$. Then $`\pi _3(L_t)=0`$ for each $`t\in T`$. This contradicts Corollary 1 and completes the proof. ∎ The concept of an $`ANE(4)`$-space (=absolute neighborhood extensor in dimension $`4`$) is obtained by requesting (in the extension problem $`E(Z,Z_0,g_0)`$ with any at most $`4`$-dimensional space $`Z`$, any closed subspace $`Z_0`$ of $`Z`$, and any continuous $`g_0`$) the existence of an extension $`g`$ (of $`g_0`$) defined not on the whole $`Z`$ (as we did while defining the concept of $`AE(4)`$) but only on a neighborhood of $`Z_0`$ in $`Z`$. Clearly every $`ANE(4)`$-space is locally $`3`$-connected but not vice versa. It is well known that a locally compact group is a Lie group if and only if it is finite-dimensional and locally connected. It is easy to produce examples of non-metrizable locally connected compact groups, whereas a finite-dimensional connected locally compact group is always metrizable. Roughly speaking, our next corollary shows what degree of local connectedness forces an infinite-dimensional compact group to be metrizable. ###### Corollary 3. Every locally compact $`ANE(4)`$-group is metrizable. ###### Proof. Let $`G`$ be a locally compact $`ANE(4)`$-group. By \[8, Theorem 16\], $`G`$ is homeomorphic to the product $`\text{}\times \left(\text{}_2\right)^\tau \times \text{}^n\times L`$ for some integer $`n`$, a cardinal number $`\tau `$ and a compact group $`L`$. Since each retract of an $`ANE(4)`$-space is an $`ANE(4)`$-space, we conclude that $`\tau `$ is finite. Therefore it suffices to show that $`L`$ is metrizable. 
By \[2, Lemma 8.2.1\], $`L`$ is isomorphic to the limit of an $`\omega `$-spectrum $`𝒮_L=\{L_\alpha ,p_\alpha ^\beta ,A\}`$ consisting of metrizable compact groups $`L_\alpha `$ and continuous homomorphisms $`p_\alpha ^\beta :L_\beta \to L_\alpha `$. Observe that $`L`$, as a retract of $`G`$, is an $`ANE(4)`$-compactum. Consequently, by \[2, Theorems 1.3.4 and 6.3.2\], there exists a cofinal (in particular, non-empty) subset $`B`$ of the indexing set $`A`$ such that the limit projection $`p_\alpha :L\to L_\alpha `$ is $`4`$-soft for each $`\alpha \in B`$. Then, by \[2, Corollary 6.1.17 and Proposition 6.1.21\], the fiber $`\mathrm{ker}(p_\alpha )`$ is a compact $`AE(4)`$-group. By Corollary 2, $`|\mathrm{ker}(p_\alpha )|=1`$. Therefore $`p_\alpha `$ is an isomorphism. Since $`L_\alpha `$ is metrizable, so is $`L`$. ∎
# 1 Introduction The large scale texture of the Universe shows a great complexity and variety of observed structures: a strange pattern of filaments, voids and sheets. Moreover, due to the increasing amount of different types of observational data and theoretical analysis in the last years, it was realized that there exists a characteristic very large scale of about $`130h^{-1}`$ Mpc in the large scale texture of the Universe. Namely, the galaxy deep pencil beam surveys (Broadhurst et al. 1988, 1990) found an intriguing periodicity in the very large scale distribution of the luminous matter. The data consisted of several hundred redshifts of galaxies, coming from four distinct surveys, in two narrow cylindrical volumes in the directions of the North and the South Galactic poles of our Galaxy, up to redshifts of more than $`z\sim 0.3`$, combined to produce a well sampled distribution of galaxies by redshift on a linear scale extending to $`2000h^{-1}`$ Mpc. The plot of the number of galaxies as a function of redshift displays a remarkably regular redshift distribution, with most galaxies lying in discrete peaks, with a periodicity over a scale of about $`130h^{-1}`$ Mpc comoving size. It was also realized that the density peaks in the regular space distribution of galaxies in the redshift survey of Broadhurst et al. (1990) correspond to the location of superclusters, as defined by rich clusters of galaxies in the given direction (Bahcall 1991). The survey of samples in other directions, located near the South Galactic pole, also gave indications for a regular distribution on slightly different scales near $`100h^{-1}`$ Mpc (Ettori et al. 1995, see also Tully et al. 1992 and Guzzo et al. 1992, Willmer et al. 1994). This discovery of a large scale pattern at the galactic poles was confirmed in a wider angle survey of 21 new pencil beams distributed over a 10 degree field at both galactic caps (Broadhurst et al. 
1995) and also by the new pencil-beam galaxy redshift data around the South Galactic pole region (Ettori et al. 1997). The analysis of other types of observations confirms the existence of this periodicity. Namely, such structure is consistent with the reported periodicity in the distribution of quasars and radio galaxies (Foltz et al. 1989, Komberg et al. 1996, Quashnock et al. 1996, Petitjean 1996, Cristiani 1998) and the Lyman-$`\alpha `$ forest (Chu & Zhu 1989); the studies of the spatial distribution of galaxies (both optical and IRAS) and clusters of galaxies (Kopylov et al. 1984, de Lapparent et al. 1986, Geller & Huchra 1989, Huchra et al. 1990, Bertschinger et al. 1990, Rowan-Robinson et al. 1990, Buryak et al. 1994, Bahcall 1992, Fetisova et al. 1993a, Einasto et al. 1994, Cohen et al. 1996, Bellanger & de Lapparent 1995) as well as peculiar velocity information (Lynden-Bell et al. 1988, Lauer & Postman 1994, Hudson et al. 1999) suggest the existence of a large scale superclusters-voids network with a characteristic scale around $`130h^{-1}`$ Mpc. An indication of the presence of this characteristic scale in the distribution of clusters has also been found from the studies of the correlation functions and power spectrum of clusters of galaxies (Kopylov et al. 1988, Bahcall 1991, Mo et al. 1992, Peacock & West 1992, Dekel et al. 1992, Saar et al. 1995, Einasto et al. 1993, Einasto & Gramann 1993, Fetisova et al. 1993b, Frisch et al. 1995, Einasto et al. 1997b, Retzlaff et al. 1998, Tadros et al. 1997). The galaxy correlation function of the Las Campanas redshift survey also showed the presence of a secondary maximum at the same scale and a strong peak in the 2-dimensional power spectrum corresponding to an excess power at about 100 Mpc (Landy et al. 1995, 1996, Shectman et al. 1996, Doroshkevich et al. 1996, Geller et al. 1997, Tucker et al. 1999). 
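As a rough quantitative illustration (ours, not taken from the surveys above): along a pencil beam, a fixed comoving period of $`130h^{-1}`$ Mpc corresponds to a nearly regular spacing of the redshift peaks, which can be read off the comoving distance integral D_C(z) = c∫dz'/H(z'); the flat-cosmology density parameters below are illustrative assumptions:

```python
import math

C_OVER_H0 = 2997.92  # Hubble distance c/H0 in h^-1 Mpc

def shell_redshifts(period=130.0, n_shells=10, om=0.3, ol=0.7, dz=1e-4):
    """Redshifts of successive shells separated by `period` (h^-1 Mpc)
    in comoving distance, for a flat Friedmann model with illustrative
    density parameters (midpoint-rule integration of c dz / H(z))."""
    zs, z, dist, target = [], 0.0, 0.0, period
    while len(zs) < n_shells:
        zmid = z + 0.5 * dz
        dist += C_OVER_H0 * dz / math.sqrt(om * (1 + zmid) ** 3 + ol)
        z += dz
        if dist >= target:
            zs.append(round(z, 4))
            target += period
    return zs

# First shell near z ~ 0.044 for these parameters:
print(shell_redshifts())
```

The spacing slowly widens with redshift because $`H(z)`$ grows, so a structure strictly periodic in comoving distance is only approximately periodic in redshift, as in the observed peak pattern.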
The supercluster distribution was also shown to be not random, but rather to be described as a weakly correlated network of superclusters and voids with a typical mean separation of $`100-150h^{-1}`$ Mpc. Many known superclusters were identified with the vertices of an octahedron superstructure network (Battaner 1998). The network was proven to resemble a cubical lattice, with a periodic distribution of the rich clusters along the main axis (coinciding with the supergalactic $`Y`$ axis) of the network, with a step of $`130h^{-1}`$ Mpc (Toomet et al. 1999). These results are consistent with the statistical analysis of the pencil beam survey data (Kurki-Suonio et al. 1990, Amendola 1994), which advocates a regular structure. A recently performed study of the whole-sky distribution of high density regions defined by very rich Abell and APM clusters of galaxies (Baugh 1996, Einasto et al. 1994, 1996, 1997a, Gaztanaga & Baugh 1997, Landy et al. 1996, Retzlaff et al. 1998, Tadros et al. 1997, Kerscher 1998) confirmed from 3-dimensional data the presence of the characteristic scale of about $`130h^{-1}`$ Mpc of the spatial inhomogeneity of the Universe, found by Broadhurst et al. (1988, 1990) from the one-dimensional study. The combined evidence from cluster and CMB data (Baker et al. 1999, Scott et al. 1996) also favours the presence of a peak at $`130h^{-1}`$ Mpc and a subsequent break in the initial power spectrum (Atrio-Barandela et al. 1997, Broadhurst $`\&`$ Jaffe 1999). For a recent review of the regularity of the Universe on large scales see Einasto (1997). Considering all these rather convincing data, indicating that different objects trace the same structure at large scales, we are forced to believe in the real existence of $`130h^{-1}`$ Mpc as a typical scale for the matter distribution in the Universe (see also Einasto et al. 1998). 
However, this periodicity points to the existence of a significantly larger scale in the structure of the Universe observed today than predicted by standard models of structure formation by gravitational instability (Davis 1990, Szalay et al. 1991, Davis et al. 1992, Luo & Vishniac 1993, Bahcall 1994, Retzlaff et al. 1998, Atrio-Barandela et al. 1997, Lesgourgues et al. 1998, Meiksin et al. 1998, Eisenstein et al. 1998, Eisenstein & Hu 1997a, 1997b) and is rather to be regarded as a new feature appearing only when very large scales ($`>100h^{-1}`$ Mpc) are probed. The problem of the generation of the spatial periodicity in the density distribution of luminous matter at large scales was discussed in numerous publications (Lumsden et al. 1989, Ostriker & Strassler 1989, Davis 1990, Coles 1990, Kurki-Suonio et al. 1990, Trimble 1990, Kofman et al. 1990, Ikeuchi & Turner 1991, van de Weygaert 1991, Buchert & Mo 1991, SubbaRao & Szalay 1992, Coleman & Pietronero 1992, Hill et al. 1989, Tully et al. 1992, Chincarini 1992, Weis & Buchert 1993, Atrio-Barandela et al. 1997, Lesgourgues et al. 1998, Eisenstein & Hu 1997a, Meiksin et al. 1998, Eisenstein et al. 1998, etc.). It was shown that a random structure could not explain the observed distribution. Statistical analysis of the deviations from periodicity showed that even for a perfectly regular structure a somewhat favoured direction and/or location within the structure may be required. The presence of the observed periodicity up to a great distance and in different directions seems rather amazing. Having in mind these results and the difficulties that perturbative models encounter in explaining the very large scale structure formation (namely the existence of the very large characteristic scale and the periodicity of the visible matter distribution), we chose another way of exploration, namely, we regard these as new features characteristic only of very large scales ($`>100h^{-1}`$ Mpc). I.e. 
we consider the possibility that the density fluctuations required to explain the present largest scale cosmological structures of the universal texture may have arisen in a way different from the standard one; they may result from a completely different mechanism, not necessarily of gravitational origin. Such a successful mechanism was already proposed (Chizhov & Dolgov 1992) and analyzed in the framework of high-temperature baryogenesis scenarios.<sup>3</sup><sup>3</sup>3By high-temperature baryogenesis scenarios we denote here those scenarios which proceed at very high energies of the order of the Grand Unification scale, especially the GUT baryogenesis scenarios. In contrast, low temperature baryogenesis scenarios, like the Affleck and Dine scenario and electroweak baryogenesis, proceed at energies several orders of magnitude lower. According to the discussed mechanism an additional complex scalar field (besides the inflaton) is assumed to be present at the inflationary epoch, and it yields the extra power at the very large scale discussed. Primordial baryonic fluctuations are produced during the inflationary period, due to the specific evolution of the space distribution of the complex scalar field carrying the baryon charge. In the present work we study the possibility of generating a periodic space distribution of primordial baryon density fluctuations at the end of the inflationary stage, applying this mechanism to the case of low temperature baryogenesis with a baryon charge condensate of Dolgov & Kirilova (1991). The preliminary analysis of this problem, provided in Chizhov & Kirilova (1994), proved its usefulness in that case. Here we provide a detailed analysis of the evolution of the baryon density perturbations from the inflationary epoch till the baryogenesis epoch and describe the evolution of the spatial distribution of the baryon density. The production of the matter-antimatter asymmetry in this scenario proceeds generally at low energies ($`10^9`$ GeV). 
This is of special importance, having in mind that the low-temperature baryogenesis scenarios are the preferred ones, since their realization in the postinflationary stage does not require the considerable reheating temperature typical for GUT high temperature baryogenesis scenarios. Hence, the discussed model (Dolgov & Kirilova 1991) has several attractive features: (a) It is compatible with the inflationary models, as it does not suffer from the problem of insufficient reheating. (b) Generally, this scenario evades the problem of washing out the previously produced baryon asymmetry at the electroweak transition. (c) And, as will be proved in the following, it may solve the problem of the large scale periodicity of the visible matter. It was already shown (Dolgov 1992, Chizhov & Dolgov 1992) that a periodic in space baryonic density distribution can be obtained provided that the following assumptions are realized: (a) There exists a complex scalar field $`\varphi `$ with a mass small in comparison with the Hubble parameter during inflation. (b) Its potential contains nonharmonic terms. (c) A condensate of $`\varphi `$ forms during the inflationary stage and it is a slowly varying function of space points. All these requirements can be naturally fulfilled in our scenario of the scalar field condensate baryogenesis (Dolgov & Kirilova 1991) and in low temperature baryogenesis scenarios based on the Affleck and Dine mechanism (Affleck & Dine 1985). In the case when the potential of $`\varphi `$ is not strictly harmonic, the oscillation period depends on the amplitude, $`P(\varphi _0(r))`$, which in its turn depends on $`r`$. Therefore, a monotonic initial space distribution will soon result in spatial oscillations of $`\varphi `$ (Chizhov & Dolgov 1992). Correspondingly, the baryon charge contained in $`\varphi `$, $`N_B=i(\varphi ^{*}\dot{\varphi }-\dot{\varphi }^{*}\varphi )`$, will have quasi-periodic behavior. 
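The dephasing mechanism just described rests on the amplitude dependence of the period, which is easy to exhibit numerically. A sketch for a purely quartic potential $`U=\lambda \varphi ^4/4`$ (our toy stand-in for the nonharmonic terms of the model; for this potential the period scales exactly as $`1/\varphi _0`$):

```python
import math

def quartic_period(phi0, lam=1.0, dt=1e-4):
    """Oscillation period in U = lam*phi^4/4, measured as four times
    the time to the first zero crossing of phi (classical RK4
    integration of phi'' = -lam*phi^3 from phi = phi0, phi' = 0)."""
    phi, v, t = phi0, 0.0, 0.0

    def acc(p):
        return -lam * p ** 3

    while phi > 0.0:
        k1p, k1v = v, acc(phi)
        k2p, k2v = v + 0.5 * dt * k1v, acc(phi + 0.5 * dt * k1p)
        k3p, k3v = v + 0.5 * dt * k2v, acc(phi + 0.5 * dt * k2p)
        k4p, k4v = v + dt * k3v, acc(phi + dt * k3p)
        phi_new = phi + dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        phi = phi_new
        t += dt
    return 4.0 * t

p1, p2 = quartic_period(1.0), quartic_period(2.0)
```

Doubling the amplitude halves the period, so two points $`r_1`$, $`r_2`$ with different initial amplitudes $`\varphi _0(r)`$ accumulate a phase difference growing linearly in time, turning an initially monotonic profile into spatial oscillations of the charge.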
During the Universe expansion the characteristic scale of the variation of $`N_B`$ will be inflated up to a cosmologically interesting size. Then, if $`\varphi `$ has not reached the equilibrium point till the baryogenesis epoch $`t_B`$, the baryogenesis would make a snapshot of the space distribution of $`\varphi (r,t_B)`$ and $`N_B(r,t_B)`$, and thus the present periodic distribution of the visible matter may date from the spatial distribution of the baryon charge contained in the $`\varphi `$ field at the advent of the $`B`$-conservation epoch. Density fluctuations with a comoving size today of $`130h^{-1}`$ Mpc reentered the horizon at late times at a redshift of about 10 000 and a mass of $`10^{18}M_o`$. After recombination the Jeans mass becomes less than the horizon and the fluctuations of this large mass begin to grow. We propose that these baryonic fluctuations, periodically spaced, lead to an enhanced formation of galaxy superclusters at the peaks of baryon overdensity. The concentration of baryons into periodic shells may also have catalysed the clustering of matter coming from the inflaton decays onto these “baryonic nuclei”. After baryogenesis proceeded, superclusters may have formed at the high peaks of the background field (the baryon charge carrying scalar field we discuss). (See the results of the statistical analysis (Plionis 1995), confirming the idea that clusters formed at the high peaks of the background field, which is analogous to our assumption.) We imply that afterwards the self-gravity mechanisms might have optimized the arrangement of this structure into the thin regularly spaced dense baryonic shells and voids in between with the characteristic size of $`130h^{-1}`$ Mpc observed today. The analysis showed that in the framework of our scenario both the generation of the baryon asymmetry and the periodic distribution of the baryon density can be explained simultaneously as due to the evolution of a complex scalar field. 
Moreover, for a certain range of parameters the model predicts that the Universe may consist of sufficiently separated baryonic and antibaryonic shells. This possibility was discussed in more detail elsewhere (Kirilova 1998). This is an interesting possibility, as the observational data on antiparticles in cosmic rays and the gamma-ray data do not rule out the possibility of the existence of superclusters of galaxies of antimatter in the Universe (Steigman 1976, Ahlen et al. 1982, 1988, Stecker 1985, 1989, Gao et al. 1990). The observations exclude the possibility of a noticeable amount of antimatter in our Galaxy; however, they are not sensitive enough to test the existence of extragalactic antimatter regions. I.e. current experiments (Salamon et al. 1990, Ahlen et al. 1994, Golden et al. 1994, 1996, Yoshimura et al. 1995, Mitchell et al. 1996, Barbiellini & Zalateu 1997, Moiseev et al. 1997, Boesio et al. 1997, Orito et al. 1999, etc.) put only a lower limit on the distance to the nearest antimatter-rich region, namely $`20`$ Mpc. Future searches for antimatter among cosmic rays are expected to increase this lower bound by an order of magnitude. Namely, the reach of the AntiMatter Spectrometer is claimed to exceed 150 Mpc (Ahlen et al. 1982) and its sensitivity is three orders of magnitude better than that of the previous experiments (Battiston 1997, Plyaskin et al. 1998). For a more detailed discussion of the problem of the existence of noticeable amounts of antimatter at considerable distances see Dolgov (1993), Cohen et al. (1998), Kinney et al. (1997). The following section describes the baryogenesis model and the last section deals with the generation of the periodicity of the baryon density and discusses the results. ## 2 Description of the model. Main characteristics. 
Our analysis was performed in the framework of the low-temperature non-GUT baryogenesis model described in (Dolgov & Kirilova 1991), based on the Affleck and Dine SUSY-GUT-motivated mechanism for the generation of the baryon asymmetry (Affleck & Dine 1985). In this section we describe the main characteristics of the baryogenesis model that are essential for the investigation of the periodicity in the next section. For more details see the original paper. ### 2.1 Generation of the baryon condensate. The essential ingredient of the model is a squark condensate $`\varphi `$ with a nonzero baryon charge. It appears naturally in supersymmetric theories as a scalar superpartner of the quarks. The condensate $`<\varphi >\ne 0`$ is formed during the inflationary period as a result of the enhancement of quantum fluctuations of the $`\varphi `$ field (Vilenkin & Ford 1982, Linde 1982, Bunch & Davies 1978, Starobinsky 1982): $`<\varphi ^2>=H^3t/4\pi ^2`$. The baryon charge of the field is not conserved at large values of the field amplitude, due to the presence of B-nonconserving self-interaction terms in the field’s potential. As a result, a condensate of baryon charge (stored in $`<\varphi >`$) develops during inflation, with a baryon charge density of the order of $`H_I^3`$, where $`H_I`$ is the Hubble parameter at the inflationary stage. ### 2.2 Generation of the baryon asymmetry. After inflation $`\varphi `$ starts to oscillate around its equilibrium point with a decreasing amplitude. This decrease is due to the Universe's expansion and to particle production by the oscillating scalar field (Dolgov & Kirilova 1990, 1991). Here we discuss the simple case of particle production in which $`\varphi `$ decays into fermions and there is no parametric resonance. We expect that the case of decays into bosons due to parametric resonance (Kofman et al. 1994, 1996, Shtanov et al. 1995, Boyanovski et al.
1995, Yoshimura 1995, Kaiser 1996), especially in the broad-resonance case, would lead to an explosive decay of the condensate, and hence to an insufficient baryon asymmetry. Therefore, we explore the more promising case of $`\varphi `$ decaying into fermions. In the expanding Universe $`\varphi `$ satisfies the equation $$\ddot{\varphi }-a^{-2}\partial _i^2\varphi +3H\dot{\varphi }+\frac{1}{4}\mathrm{\Gamma }\dot{\varphi }+U_\varphi ^{\prime }=0,$$ (1) where $`a(t)`$ is the scale factor and $`H=\dot{a}/a`$. The potential $`U(\varphi )`$ is chosen in the form $$U(\varphi )=\frac{\lambda _1}{2}|\varphi |^4+\frac{\lambda _2}{4}(\varphi ^4+\varphi ^{*4})+\frac{\lambda _3}{4}|\varphi |^2(\varphi ^2+\varphi ^{*2})$$ (2) The mass parameters of the potential are assumed small in comparison with the Hubble constant during inflation, $`m\ll H_I`$. In supersymmetric theories the constants $`\lambda _i`$ are of the order of the gauge coupling constant $`\alpha `$. A natural value of $`m`$ is $`10^2÷10^4`$ GeV. The initial values of the field variables can be derived from the natural assumption that the energy density of $`\varphi `$ at the inflationary stage is of the order of $`H_I^4`$, so that $`\varphi _o^{max}\sim H_I\lambda ^{-1/4}`$ and $`\dot{\varphi _o}=0`$. The term $`\mathrm{\Gamma }\dot{\varphi }`$ in the equation of motion explicitly accounts for the eventual damping of $`\varphi `$ as a result of particle creation processes. The explicit account of particle creation processes in the equations of motion was first provided in (Chizhov & Kirilova 1994, Kirilova & Chizhov 1996). The production rate $`\mathrm{\Gamma }`$ was calculated in (Dolgov & Kirilova 1990). For simplicity we have used here the perturbation-theory approximation for the production rate, $`\mathrm{\Gamma }=\alpha \mathrm{\Omega }`$, where $`\mathrm{\Omega }`$ is the frequency of the scalar field.<sup>4</sup><sup>4</sup>4For the toy model we discuss here, we consider this approximation instructive enough.
For $`g<\lambda ^{3/4}`$, $`\mathrm{\Gamma }`$ considerably exceeds the rate of the ordinary decay of the field, $`\mathrm{\Gamma }_m=\alpha m`$. Fast oscillations of $`\varphi `$ after inflation result in particle creation due to the coupling of the scalar field to fermions, $`g\varphi \overline{f}_1f_2`$, where $`g^2/4\pi =\alpha _{SUSY}`$. Therefore, the amplitude of $`\varphi `$ is damped as $`\varphi \to \varphi \mathrm{exp}(-\mathrm{\Gamma }t/4)`$, and the baryon charge contained in the $`\varphi `$ condensate is considerably reduced. It was discussed in detail in Dolgov & Kirilova (1991) that for a constant $`\mathrm{\Gamma }`$ this reduction is exponential and that, generally, for a natural range of the model’s parameters, the baryon asymmetry is washed away before the baryogenesis epoch as a result of the particle creation processes. Fortunately, in the case without flat directions of the potential, the production rate is a decreasing function of time, so that the damping process may be slow enough for a considerable range of acceptable values of the model parameters $`m`$, $`H`$, $`\alpha `$, and $`\lambda `$, and the baryon charge contained in $`\varphi `$ may survive until the advent of the $`B`$-conservation epoch. Generally, in cases of more effective particle creation, as in the case with flat directions in the potential, or when $`\varphi `$ decays into bosons through parametric resonance, the discussed mechanism of baryon asymmetry generation cannot be successful; hence, it cannot be useful for the generation of the matter periodicity either. ### 2.3 Baryogenesis epoch $`t_B`$. When inflation is over and $`\varphi `$ relaxes to its equilibrium state, its coherent oscillations produce an excess of quarks over antiquarks (or vice versa), depending on the initial sign of the baryon charge condensate. This charge, diluted further by some entropy-generating processes, dictates the observed baryon asymmetry.
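The difference between a constant and a decreasing production rate can be made explicit with a schematic estimate (our own illustration, assuming that for the quartic potential the oscillation frequency scales with the amplitude, $`\mathrm{\Omega }\sim \sqrt{\lambda }\varphi `$):

$$\mathrm{\Gamma }=\mathrm{const}:\dot{\varphi }\simeq -\frac{1}{4}\mathrm{\Gamma }\varphi \mathrm{\Rightarrow }\varphi (t)=\varphi _0e^{-\mathrm{\Gamma }t/4},$$

$$\mathrm{\Gamma }=\alpha \mathrm{\Omega }\sim \alpha \sqrt{\lambda }\varphi :\dot{\varphi }\simeq -\frac{\alpha \sqrt{\lambda }}{4}\varphi ^2\mathrm{\Rightarrow }\varphi (t)=\frac{\varphi _0}{1+\frac{1}{4}\alpha \sqrt{\lambda }\varphi _0t}.$$

In the second case the amplitude falls only as a power of time, which is why a residual baryon charge can survive until the $`B`$-conservation epoch.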
We call this epoch, when $`\varphi `$ decays into quarks with a non-zero average baryon charge and thus induces the baryon asymmetry, the baryogenesis epoch. The baryogenesis epoch $`t_B`$ in our model coincides with the advent of the baryon-conservation epoch, i.e. the time after which the mass terms in the equations of motion can no longer be neglected. In the original version (Affleck & Dine 1985) this epoch corresponds to energies $`10^2÷10^4`$ GeV. However, as already explained, the amplitude of $`\varphi `$ may be reduced much more quickly by the particle creation processes and, as a result, depending on the model’s parameters, the advent of this epoch may be considerably earlier. For a correct estimation of $`t_B`$ and of the value of the generated baryon asymmetry, it is essential to account for the eventual damping of the field’s amplitude due to particle production by the external time-dependent scalar field, which can lead to a strong reduction of the baryon charge contained in the condensate. ## 3 Generation of the baryon density periodicity. In order to explore the spatial distribution of the scalar field and its evolution during the Universe's expansion it is necessary to analyze eq. (1). We have made the natural assumption that initially $`\varphi `$ is a slowly varying function of the space coordinates, $`\varphi (r,t)`$. The space-derivative term can be safely neglected because of the exponential rise of the scale factor, $`a(t)\sim \mathrm{exp}(H_It)`$. Then the equations of motion for $`\varphi =x+iy`$ read $`\ddot{x}+3H\dot{x}+{\displaystyle \frac{1}{4}}\mathrm{\Gamma }_x\dot{x}+(\lambda +\lambda _3)x^3+\lambda ^{\prime }xy^2=0`$ $`\ddot{y}+3H\dot{y}+{\displaystyle \frac{1}{4}}\mathrm{\Gamma }_y\dot{y}+(\lambda -\lambda _3)y^3+\lambda ^{\prime }yx^2=0`$ (3) where $`\lambda =\lambda _1+\lambda _2`$, $`\lambda ^{\prime }=\lambda _1-3\lambda _2`$.
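The coefficients in eqs. (3) can be checked directly (our own short algebra; recall that the B-violating terms in the potential involve the complex conjugate field, $`\varphi ^{*4}`$ and $`\varphi ^{*2}`$). Substituting $`\varphi =x+iy`$ into the potential gives

$$U=\frac{\lambda _1}{2}(x^2+y^2)^2+\frac{\lambda _2}{2}(x^4-6x^2y^2+y^4)+\frac{\lambda _3}{2}(x^4-y^4),$$

and, since the equation of motion for the real part carries $`\frac{1}{2}\partial U/\partial x`$ (from $`\partial /\partial \varphi ^{*}=\frac{1}{2}(\partial /\partial x+i\partial /\partial y)`$),

$$\frac{1}{2}\frac{\partial U}{\partial x}=(\lambda _1+\lambda _2+\lambda _3)x^3+(\lambda _1-3\lambda _2)xy^2,\qquad \frac{1}{2}\frac{\partial U}{\partial y}=(\lambda _1+\lambda _2-\lambda _3)y^3+(\lambda _1-3\lambda _2)x^2y,$$

reproducing $`\lambda =\lambda _1+\lambda _2`$ and $`\lambda ^{\prime }=\lambda _1-3\lambda _2`$.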
In the case when at the end of inflation the Universe is dominated by the coherent oscillations of the inflaton field, $`\psi =m_{PL}(3\pi )^{-1/2}\mathrm{sin}(m_\psi t)`$, the Hubble parameter is $`H=2/(3t)`$. In this case it is convenient to make the substitutions $`x=H_I(t_i/t)^{2/3}u(\eta )`$, $`y=H_I(t_i/t)^{2/3}v(\eta )`$, where $`\eta =2(t/t_i)^{1/3}`$. The functions $`u(\eta )`$ and $`v(\eta )`$ satisfy the equations $$\begin{array}{c}u^{\prime \prime }+0.75\alpha \mathrm{\Omega }_u(u^{\prime }-2u\eta ^{-1})+u[(\lambda +\lambda _3)u^2+\lambda ^{\prime }v^2-2\eta ^{-2}]=0\\ v^{\prime \prime }+0.75\alpha \mathrm{\Omega }_v(v^{\prime }-2v\eta ^{-1})+v[(\lambda -\lambda _3)v^2+\lambda ^{\prime }u^2-2\eta ^{-2}]=0.\end{array}$$ (4) The baryon charge in the comoving volume $`V=V_i(t/t_i)^2`$ is $`B=N_BV=2(u^{\prime }v-v^{\prime }u)`$. The numerical calculations were performed for $`u_o,v_o\in [0,\lambda ^{-1/4}]`$, $`u_o^{\prime },v_o^{\prime }\in [0,(2/3)\lambda ^{-1/4}]`$. For simplicity we considered the case $`\lambda _1>\lambda _2,\lambda _3`$, when the anharmonic oscillators $`u`$ and $`v`$ are weakly coupled. For each set of parameter values $`\lambda _i`$ we have numerically calculated the baryon charge evolution $`B(\eta )`$ for different initial conditions of the field, corresponding to the accepted initial monotonic space distribution of the field (see Figs. 1, 2). The numerical analysis confirmed the important role of particle creation processes for baryogenesis models and large-scale-structure periodicity (Chizhov & Kirilova 1994, 1996), obtained earlier from an approximate analytical solution. In the present work we have accounted for particle creation processes explicitly. <sup>5</sup><sup>5</sup>5It was shown that the damping effect due to particle creation is proportional to the initial amplitudes of the field. Since the particle creation rate is proportional to the field’s frequency, it can be concluded that the frequency depends on the initial amplitudes.
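A minimal numerical sketch of such a computation (our own illustration, not the authors' code): the system (4) is integrated with a fixed-step RK4 scheme and the charge $`B=2(u^{\prime }v-v^{\prime }u)`$ is read off at the end. The damping frequencies $`\mathrm{\Omega }_{u,v}`$ are crudely approximated as proportional to the instantaneous amplitudes, and the parameter values are only indicative.

```python
import math

# Indicative parameters (cf. the figure captions); lam, lamp as in eqs. (3)-(4)
lam1, lam2, lam3, alpha = 5e-2, 1e-3, 1e-3, 1e-3
lam, lamp = lam1 + lam2, lam1 - 3.0 * lam2

def rhs(eta, s):
    """Right-hand side of eqs. (4) for the state s = [u, u', v, v']."""
    u, du, v, dv = s
    # Crude assumption: Omega_{u,v} ~ sqrt(3*lambda)*|amplitude|
    om_u = math.sqrt(3.0 * (lam + lam3)) * abs(u)
    om_v = math.sqrt(3.0 * (lam - lam3)) * abs(v)
    ddu = -0.75 * alpha * om_u * (du - 2.0 * u / eta) \
          - u * ((lam + lam3) * u * u + lamp * v * v - 2.0 / eta**2)
    ddv = -0.75 * alpha * om_v * (dv - 2.0 * v / eta) \
          - v * ((lam - lam3) * v * v + lamp * u * u - 2.0 / eta**2)
    return [du, ddu, dv, ddv]

def rk4_step(eta, s, h):
    """One classical Runge-Kutta step of size h."""
    k1 = rhs(eta, s)
    k2 = rhs(eta + 0.5 * h, [si + 0.5 * h * ki for si, ki in zip(s, k1)])
    k3 = rhs(eta + 0.5 * h, [si + 0.5 * h * ki for si, ki in zip(s, k2)])
    k4 = rhs(eta + h, [si + h * ki for si, ki in zip(s, k3)])
    return [si + h / 6.0 * (a + 2 * b + 2 * c + d)
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

def baryon_charge(u0, v0, du0=0.0, dv0=0.0, eta0=2.0, eta1=20.0, h=2e-3):
    """Evolve (u, v) from eta0 to eta1 and return B = 2(u'v - v'u)."""
    s, eta = [u0, du0, v0, dv0], eta0
    while eta < eta1:
        s = rk4_step(eta, s, h)
        eta += h
    u, du, v, dv = s
    return 2.0 * (du * v - dv * u)

B = baryon_charge(lam ** -0.25, 0.5 * lam ** -0.25)
```

Scanning $`u_o`$ and $`v_o`$ over their initial ranges then mimics the scan over the initial space distribution of the field.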
This result confirms our analytical estimates provided in earlier works. The space distribution of the baryon charge is calculated at the moment $`t_B`$. It is obtained from the evolution analysis of $`B(\eta )`$ for different initial values of the field, corresponding to its initial space distribution $`\varphi (t_i,r)`$ (Fig. 3). As expected, in the case of an anharmonic potential the initially monotonic space behavior is quickly replaced by space oscillations of $`\varphi `$, because the period depends on the amplitude, which in turn is a function of $`r`$. As a result, different periods are observed at different points, and the space behavior of $`\varphi `$ becomes quasiperiodic. Correspondingly, the space distribution of the baryon charge contained in $`\varphi `$ becomes quasiperiodic as well. Therefore, the space distribution of baryons at the moment of baryogenesis is found to be periodic. The space distribution of the visible matter observed today is defined by the space distribution of the baryon charge of the field $`\varphi `$ at the moment of baryogenesis $`t_B`$, $`B(t_B,r)`$, so that at present the visible part of the Universe consists of baryonic shells divided by vast underdense regions. For a wide range of parameter values the observed average distance of $`130h^{-1}`$ Mpc between matter shells in the Universe can be obtained. The parameters of the model ensuring the observed size of the matter domains belong to the range of parameters for which the generation of the observed value of the baryon asymmetry is possible in the model of scalar-field-condensate baryogenesis. This is an attractive feature of the model, because both the baryogenesis and the large-scale-structure periodicity of the Universe can be explained simply through the evolution of a single scalar field. Moreover, for some variations of the model the presence of vast antibaryonic regions in the Universe is predicted.
This is an interesting possibility, since the observational data do not rule out the existence of antimatter superclusters in the Universe. The model provides an elegant mechanism for achieving a sufficient separation between regions occupied by baryons and those occupied by antibaryons, necessary in order to inhibit contact between matter and antimatter regions of considerable density. Having in mind the positive results of this investigation, it would be interesting to study more precisely the different possibilities of particle creation and their relevance for the discussed scenario of baryogenesis and periodicity generation. In the case of narrow-band resonance decay, the final-state interactions regulate the decay rate, and parametric amplification is effectively suppressed (Allahverdi & Campbell 1997) and does not drastically enhance the decay rate. Therefore, we expect that this case will be interesting to explore. Another interesting case may be that of strong dissipative processes of the products of the parametric resonance. Since dissipation reduces the resonant decay rate (Kolb et al. 1996, Kasuya & Kawasaki 1996), it may be worthwhile to consider such a model as well. ## 4 Acknowledgments We are glad to thank A.D.Dolgov for stimulating our interest in this problem. We are thankful to ICTP, Trieste, where this work was finished, for the financial support and hospitality. We are grateful also to the referee for the useful remarks and suggestions. This work was partially financially supported by Grant-in-Aid for Scientific Research F-553 from the Bulgarian Ministry of Education, Science and Culture. References Affleck, I. & Dine, M., 1985, Nucl. Phys. B, 249, 361 Ahlen, S. et al., 1982, Ap.J., 260, 20 Ahlen, S. et al., 1988, Phys. Rev. Lett., 61, 145 Ahlen, S. et al., 1994, N.I.M. A, 350, 351 Allahverdi, R. & Campbell, B., 1997, Phys. Lett. B, 395, 169 Amendola, L., 1994, Ap.J., 430, L9 Atrio-Barandela, F.
et al., 1997, JETP Lett., 66, 397 Bahcall, N., 1991, Ap.J., 376, 43 Bahcall, N., 1992, in Clusters and Superclusters of Galaxies, Math. Phys. Sciences, 366 Bahcall, N., 1994, Princeton observatory preprints, POP-607 Baker, J. et al., 1999, astro-ph/9904415 Barbiellini, G. & Zalateu, M., 1997, INFN-AE-97-29 Battaner, E., 1998, A$`\&`$Ap, 334, 770 Battiston, R., 1997, hep-ex/9708039 Baugh, C., 1996, MNRAS, 282, 1413 Bellanger, C. & de Lapparent, V., 1995, Ap.J., 455, L103 Bertshinger, E., Deckel, A. & Faber, S., 1990, Ap.J., 364, 370 Boesio, M. et al., 1997, Ap.J., 487, 415 Boyanovski, D. et al., 1995, Phys. Rev. D, 51, 4419 Broadhurst, T.J., Ellis, R.S. & Shanks, T., 1988, MNRAS, 235, 827 Broadhurst, T.J., Ellis, R.S., Koo, D.C. & Szalay, A.S., 1990, Nature, 343, 726 Broadhurst, T.J. et al., 1995, in Wide Field Spectroscopy and the Distant Universe, eds. Maddox, S. and Aragon-Salamanca, A., 178 Broadhurst, T. & Jaffe A., 1999, astro-ph/9904348 Buchert, T. & Mo, H., 1991, A&Ap, 249, 307 Bunch, T.S. & Davies, P.C.W., 1978, Proc. R. Soc. London A, 360, 117 Buryak, O. E., Doroshkevich, A.G. & Fong, R., 1994, Ap.J., 434, 24 Chincarini, G., 1992, in Clusters and Superclusters of Galaxies, ed. Fabian, A., Series C, Math. Phys. Sci. 366, 253 Chizhov, M.V. & Dolgov, A.D., 1992, Nucl. Phys. B, 372, 521 Chizhov, M.V. & Kirilova, D.P., 1994, JINR Comm. E2-94-258, Dubna Chizhov, M.V. & Kirilova, D.P., 1996, Ap&A Tr, 10, 69 Chu, Y. & Zhu, X., 1989, A&A, 222, 1 Cohen, J. et al. 1996, Ap.J., 462, L9 Cohen, A., DeRujula, A. & Glashow, S., 1998, Ap.J., 495, 539 Coleman, P. & Pietronero, L., 1992, Phys. Rep., 213, 313 Coles, P., 1990, Nature, 346, 446 Cristiani, S., 1998, astro-ph/9811475, review at the MPA/ESO Cosmology Conference “Evolution of Large-Scale Structure: From Recombination to Garching”, Garching, 2-7 August Davis, M., 1990, Nat, 343, 699 Davis, M., Efstathiou, G., Frenk, C. & White, S., 1992, Nature, 356, 489 de Lapparent, V., Geller, M. 
& Huchra, J., 1986, Ap.J., 302, L1 Deckel, A., Blumental, G.R., Primack, J.R. & Stanhill, D., 1992, MNRAS, 257, 715 Dolgov, A.D., 1992, Phys. Rep., 222, 311 Dolgov, A.D., 1993, Hyperfine Interactions, 76, 17 Dolgov, A.D. & Kirilova, D.P., 1990, Yad. Fiz., 51, 273 Dolgov, A.D. & Kirilova, D.P., 1991, J. Moscow Phys. Soc., 1, 217 Doroshkevich, A. et al., 1996, MNRAS, 283, 1281 Einasto, J. & Gramann, M., 1993, Ap.J., 407, 443 Einasto, J., Gramann, M., Saar, E. & Tago, E., 1993, MNRAS, 260, 705 Einasto, M. et al., 1994, MNRAS, 269, 301 Einasto, M. et al., 1996, astro-ph/9610088 Einasto, J., 1997, astro-ph/9711318, astro-ph/9711321 Einasto, J. et al., 1997a, MNRAS, 289, 801; A.Ap.Suppl., 123, 119; Nature, 385, 139 Einasto, J. et al., 1997b, astro-ph/9704127, astro-ph/9704129 Einasto, J., 1998, astro-ph/9811432 Eisenstein, D. & Hu, W., 1997a, astro-ph/9710252 Eisenstein, D. & Hu, W., 1997b, astro-ph/9709112, astro-ph/9710303 Eisenstein, D. et al., 1998, Ap.J., 494, 1 Ettori, S., Guzzo, L. & Tarenghi, M., 1995, MNRAS, 276, 689 Ettori, S., Guzzo, L. & Tarenghi, M., 1997, MNRAS, 285, 218 Fetisova, T., Kuznetsov, D., Lipovetski, V., Starobinsky, A. & Olowin, P., 1993a, Pisma v Astr. Zh., 19, 508; Fetisova, T., Kuznetsov, D., Lipovetski, V., Starobinsky, A. & Olowin, P., 1993b, Astron. Lett., 19, 198 Foltz, C. et al., 1989, A.J., 98, 1959 Frisch, P. et al., 1995, A&A 269, 611 Gao, Y. et al., 1990, Ap.J., 361, L37 Gaztanaga, E. & Baugh, C., 1997, MNRAS in press, astro-ph/9704246 Geller, M. & Hunchra, J., 1989, Science, 246, 897 Geller, M. et al., 1997, A.J., 114, 2205 Golden, R. et al., 1994, Ap.J., 436, 769 Golden, R. et al., 1996, Ap.J., 457, L103 Guzzo, L. et al., 1992, Ap.J., 393, L5 Hill, C., Fry, J. & Schramm, D., 1989, Comm. Nucl. Part. Phys., 19, 25 Hudson, J. et al., 1999, astro-ph/9901001 Hunchra, J., Henry, J., Postman, M. & Geller, M., 1990, Ap.J., 365, 66 Ikeuchi, S. & Turner, E., 1991, MNRAS, 250, 519 Kaiser, D., 1996, Phys. Rev. D, 53, 1776 Kasuya, S. 
& Kawasaki, M., 1996, Phys. Lett. B, 388, 686 Kerscher, M., 1998, astro-ph/9805088, astro-ph/9710207 Kinney, W. H., Kolb, E. W. & Turner, M. S., 1997, Phys. Rev. Lett. 79, 2620 Kirilova, D. & Chizhov, M., 1996, in Proc. Scient. Session 100th Anniversary of the Sofia University Astronomical Observatory, Sofia, Naturela publ., Sofia, 114 Kirilova, D., 1998, Astron. Astrophys. Transections, 3, 211 (See also the updated version ICTP preprint IC/98/71, 1998) Kofman, L., Pogosyan, D. & Shandarin, S., 1990, MNRAS, 242, 200 Kofman, L., Linde, A. & Starobinsky, A., 1994, Phys. Rev. Lett., 73, 3195 Kofman, L., Linde, A. & Starobinsky, A., 1996, Phys. Rev. Lett., 76, 1011 Kolb, E., Linde, A. & Riotto, A., 1996, Phys. Rev. Lett., 77, 4290 Komberg, B., Kravtsov, A. & Lukash, V., 1996, MNRAS, 282, 713 Kopylov, A., Kuznetsov, D., Fetisova, T. & Shvartzman, V., 1984, Astr. Zirk. 1347, 1 Kopylov, A., Kuznetsov, D., Fetisova, T. & Shvartzman, V., 1988, in Large Scale Structure in the Universe, eds. Audouze, J., Pelletan, M.-C. & Szalay, A., 129 Kurki-Suonio, H., Mathews, G. & Fuller, G., 1990, Ap.J., 356, L5 Landy, S. et al., 1995, Bull. American Astron. Soc., 187 Landy, S. et al., 1996, Ap.J., 456, L1 Lauer, T. & Postman, M., 1994, Ap.J., 425, 418 Lesgourgues, J., Polarski, D. & Starobinsky, A., 1998, MNRAS, 297, 769 Linde, A.D., 1982, Phys. Lett. B, 116, 335 Lumsden, S., Heavens, A. & Peacock, J., 1989, MNRAS 238, 293 Luo, S. & Vishniac, E.T., 1993, Ap.J., 415, 450 Lynden-Bell, D., Faber, S., Burstein, D., Davis, R., Dressler, A., Terlevich, R. & Wegner, G., 1988, Ap.J., 326, 19 Meiksin, A., White, M. & Peacock, J.A., 1998, astro-ph/9812214 Mitchell, J. et al., 1996, Phys. Rev. Lett., 76, 3057 Mo, H. et al., 1992, A&A, 257, 1; A&A, 256, L23 Moiseev, A. et al., 1997, Ap.J., 474, 479 Orito, S. et al., 1999, astro-ph/9906426 Ostriker, J. & Strassler, M., 1989, Ap.J., 338, 579 Peacock, J. 
& West, M., 1992, MNRAS, 259, 494 Petitjeau, P., 1996, contributed talk at the ESO Workshop on “The Early Universe with the VLT” 1-4 April Plionis, M., 1995, MNRAS, 272, 869 Plyaskin, V. et al., AMS Collaboration, 1998, Surveys High Energy Phys., 13, 177 Quashnock, J. et al., 1996, Ap.J., 472, L69 Retzlaff, J. et al., 1998, astro-ph/9709044 v2 Rowan-Robinson, R. et al., 1990, MNRAS, 247, 1 Salamon, M. et al., 1990, Ap.J., 349, 78 Saar, V., Tago, E., Einasto, J., Einasto, M. & Andernach, H., 1995, astro-ph/9505053, in Proc. of XXX Moriond meeting Scott, P. et al., 1996, Ap.J., 461, L1 Shectman, S. et al., 1996, Ap.J., 470, 172 Shtanov, Y., Traschen, J. & Brandenberger, R., 1995, Phys. Rev. D, 51, 5438 Starobinsky, A.A., 1982, Phys. Lett. B, 117, 175 Stecker, F., 1985, Nucl. Phys. B, 252, 25 Stecker, F., 1989, Nucl. Phys. B (Proc. Suppl.), 10, 93 Steigman, G., 1976, Ann. Rev. Astr. Ap., 14, 339 SubbaRao, M. U. & Szalay, A.S., 1992, Ap.J., 391, 483 Szalay, A.S., Ellis, R.S., Koo, D.C. & Broadhurst, T.J., 1991, in proc. After the First Three Minutes, eds. Holt, S., Bennett, C. & Trimble, V., New York: AIP, 261 Tadros, H., Efstathiou, G. & Dalton, G., 1997, astro-ph/9708259, submitted to MNRAS Toomet, O. et al., astro-ph/9907238 Trimble, V., 1990, Nature, 345, 665 Tucker, D., Lin, H. & Shectman, S., 1999, astro-ph/9902023 Tully, R.B., Scaramella, R., Vetollani, G. & Zamorani, G., 1992, Ap.J., 388, 9 van de Weygaert, R., 1991, MNRAS, 249, 159 Vilenkin, A. & Ford, L.H., 1982, Phys. Rev. D, 26, 1231 Weis, A. & Buchert, T., 1993, A$`\&`$ Ap, 274, 1 Willmer, C. et al., 1994, Ap. J., 437, 560 Yoshimura, M., 1995, Prog. Theor. Phys., 94, 873 Yoshimura, K. et al., 1995, Phys. Rev. Let., 75, 379 Captions Figure 1: The evolution of the baryon charge $`B(\eta )`$ contained in the condensate $`<\varphi >`$ for $`\lambda _1=5\times 10^{-2}`$, $`\lambda _2=\lambda _3=\alpha =10^{-3}`$, $`H_I/m=10^7`$, $`\varphi _o=H_I\lambda ^{-1/4}`$, and $`\dot{\varphi }_o=0`$.
Figure 2: The evolution of the baryon charge $`B(\eta )`$ contained in the condensate $`<\varphi >`$ for $`\lambda _1=5\times 10^{-2}`$, $`\lambda _2=\lambda _3=\alpha =10^{-3}`$, $`H_I/m=10^7`$, $`\varphi _o=\frac{1}{50}H_I\lambda ^{-1/4}`$, and $`\dot{\varphi }_o=0`$. Figure 3: The space distribution of the baryon charge at the moment of baryogenesis for $`\lambda _1=5\times 10^{-2}`$, $`\lambda _2=\lambda _3=\alpha =10^{-3}`$, $`H_I/m=10^7`$.
Freiburg-THEP 99/09 August 1999 Large rescaling of the scalar condensate, towards a Higgs-gravity connection ? <sup>1</sup><sup>1</sup>1Presented at the EPS-HEP99 meeting, July 1999, Tampere, Finland J.J. van der Bij Fakultät für Physik, Albert–Ludwigs–Universität Freiburg, Hermann–Herder–Strasse 3, 79104 Freiburg, Germany. ## Abstract In the Standard Model the Fermi constant is associated with the vacuum expectation value of the Higgs field $`\mathrm{\Phi }`$, ‘the condensate’, usually believed to be a nearly cut-off-independent quantity. General arguments related to the ‘triviality’ of $`\lambda \mathrm{\Phi }^4`$ theory in 4 space-time dimensions suggest, however, a dramatic renormalization effect in the continuum theory. This effect is visible on the relatively large lattices (such as $`32^4`$) available today. The result is suggestive of a certain ‘Higgs-gravity connection’, as discussed some years ago: the space-time structure is determined by symmetry breaking, and the Planck scale is essentially a rescaling of the Fermi scale. The resulting picture may lead to quite substantial changes in the usual phenomenology associated with the Higgs particle. The aim of this paper is to analyze the scale dependence of the ‘Higgs condensate’ $`\langle \mathrm{\Phi }\rangle `$, an important subject for any theory aiming to embed the electroweak theory in more ambitious frameworks. At the same time, it turns out that the solution of this genuine quantum-field-theoretical problem can provide precious insights into possible connections between the Higgs sector and spontaneously broken theories of gravity. We shall restrict ourselves to the case of a pure $`\mathrm{\Phi }^4`$ theory and comment at the end on the stability of the results when other interactions (such as gauge and Yukawa) are present.
Let us define the theory in the presence of a lattice spacing $`a\sim 1/\mathrm{\Lambda }`$ and assume that the ultraviolet cutoff $`\mathrm{\Lambda }`$ is much larger than the Higgs mass $`M_h`$. Our basic problem is to relate the bare condensate $$v_B(\mathrm{\Lambda })\equiv \langle \mathrm{\Phi }\rangle _{\mathrm{latt}}$$ (1) to the low-energy physical value $`v_R`$ (which in the Standard Model is associated with the Fermi scale, $`v_R\simeq 246`$ GeV). In the usual approach, one writes $$v_B^2=Zv_R^2$$ (2) where $`Z`$ is called the ‘wave-function renormalization’. As pointed out in , in the presence of spontaneous symmetry breaking, where there is no smooth limit $`p\to 0`$, there are two basically different definitions of $`Z`$: a) $`Z\equiv Z_{\mathrm{prop}}`$, where $`Z_{\mathrm{prop}}`$ is defined from the propagator of the shifted fluctuation field $`h(x)=\mathrm{\Phi }(x)-\langle \mathrm{\Phi }\rangle `$, namely $$G(p^2)\simeq \frac{Z_{\mathrm{prop}}}{p^2+M_h^2}$$ (3) b) $`Z\equiv Z_\phi `$, where $`Z_\phi `$ is the rescaling needed in the effective potential $`V_{\mathrm{eff}}(\phi )`$ to match the quadratic shape at its absolute minima with the Higgs mass, namely $$\frac{d^2V_{\mathrm{eff}}}{d\phi _R^2}|_{\phi _R=v_R}=M_h^2$$ (4) Now, by assuming ‘triviality’ as an exact property of $`\mathrm{\Phi }^4`$ theories in four space-time dimensions , it is well known that the continuum limit leads to $`Z_{\mathrm{prop}}\to 1`$. However, is this result relevant for evaluating the scale dependence of the Higgs condensate? When dealing with the space-time constant part of the scalar field, it is $`Z\equiv Z_\phi `$, as defined from Eq.(4), that represents the relevant definition to be used in Eq.(2). Now, as pointed out in , by restricting to those approximations to the effective potential (say $`V_{\mathrm{eff}}\equiv V_{\mathrm{triv}}`$) that are consistent with ‘triviality’, since the field $`h(x)`$ is governed by a quadratic Hamiltonian and $`Z_{\mathrm{prop}}=1`$, one finds a non-trivial $`Z_\phi `$.
Indeed, $`V_{\mathrm{triv}}`$, given by the sum of a classical potential and the zero-point energy of the shifted fluctuation field, is an extremely flat function of $`\phi _B`$, implying a divergent $`Z_\phi \sim \mathrm{ln}\frac{\mathrm{\Lambda }}{M_h}`$ in the limit $`\mathrm{\Lambda }\to \mathrm{\infty }`$. Thus, when properly understood, ‘triviality’ requires at the same time $`Z_{\mathrm{prop}}\to 1`$ and $`Z_\phi \to \mathrm{\infty }`$, implying that, in a true continuum theory, $`v_B/v_R`$ can become arbitrarily large. The existence of a non-trivial $`Z_\phi `$ for the Higgs condensate, quite distinct from the $`Z_{\mathrm{prop}}`$ associated with the finite-momentum fluctuation field, is a definite prediction that can be tested with a precise lattice computation. To this end a lattice simulation of the Ising limit of $`\mathrm{\Phi }^4`$ theory was used to measure i) the zero-momentum susceptibility: $$\chi ^{-1}=\frac{d^2V_{\mathrm{eff}}}{d\phi _B^2}|_{\phi _B=v_B}\equiv \frac{M_h^2}{Z_\phi }$$ (5) and ii) the propagator of the shifted field (at Euclidean momenta $`p\ne 0`$). The latter data can be fitted to the form in Eq.(3) to obtain $`M_h`$ and $`Z_{\mathrm{prop}}`$. The lattice data show that $`Z_{\mathrm{prop}}`$ is slightly less than one and tends to unity as the continuum limit is approached, consistently with the non-interacting nature of the field $`h(x)`$. However, the $`Z_\phi `$ extracted from the susceptibility in the broken phase is clearly different: it shows a rapid increase above unity, and the trend is consistent with a divergence in the continuum limit. Absolutely no sign of such a discrepancy is found in the symmetric phase, as expected. The first indications obtained in are now confirmed by a more complete analysis performed with higher statistics on the largest lattices employed so far, such as $`32^4`$.
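Putting the two definitions together (our own one-line summary): combining $`v_B^2=Z_\phi v_R^2`$ with Eq.(5),

$$Z_\phi =M_h^2\chi \sim \mathrm{ln}\frac{\mathrm{\Lambda }}{M_h}\mathrm{\Rightarrow }\frac{v_B}{v_R}=\sqrt{Z_\phi }\sim \sqrt{\mathrm{ln}\frac{\mathrm{\Lambda }}{M_h}},$$

so the bare condensate should drift logarithmically above its physical value as the continuum limit is approached, which is precisely the trend the susceptibility data are sensitive to.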
The results accord well with the ”two-Z” picture, where one predicts a $`Z_{\mathrm{prop}}`$ slowly approaching one and a zero-momentum rescaling $`Z_\phi =M_h^2\chi `$ becoming higher and higher the closer we get to the continuum limit. The above result, giving a large difference between the rescalings of the vacuum and the fluctuation fields, has been established for the model with a single scalar field, in a lattice formulation. In order to make contact with experiment, one must extend the results to the O(4) sigma model, which describes the Higgs sector of the Standard Model. This can be done straightforwardly once one has a description of the effect directly in the continuum. To describe the different rescalings in the continuum one should consider the effective Lagrangian, which is in general a combination of many operators. We are looking for an operator that would give rise to a rescaling of the Higgs propagator in the broken phase but not in the unbroken phase. At the same time it should leave the constant fields untouched and be invariant under the symmetry of the theory. This leads us to consider the following operator: $`(\partial _\mu |\mathrm{\Phi }|^2)(\partial ^\mu |\mathrm{\Phi }|^2)`$. Because of the derivatives this operator leaves the constant fields untouched, but it leads to a wave-function renormalization of the Higgs field after spontaneous symmetry breaking. To make the connection with the Standard Model one now simply takes $`\mathrm{\Phi }`$ to be the four-component Higgs field. If one adds this term to the Standard Model as an effective interaction, with a very large coefficient, the result after symmetry breaking is very simple. In the unitary gauge one finds simply the Standard Model, but with a Higgs coupling to matter reduced by the wave-function renormalization. Ultimately, when one takes the wave-function renormalization to $`\mathrm{\infty }`$, the Higgs particle no longer couples to matter.
One therefore ends up with the Standard Model without the Higgs particle. Given the precision of present data on the weak interactions, one has to ask oneself whether this picture is in agreement with experiment. At the fundamental level there is clearly a problem in reconciling perturbative calculations of electroweak parameters like $`\delta \rho `$ with the demands of triviality in the scalar sector. To do perturbative calculations one simply needs the Higgs matter and self-couplings to get finite results. However, at present we are only sensitive to one-loop corrections, where these couplings do not play a role. At the one-loop level the Higgs particle essentially plays the role of a cut-off for the momentum integrals. Corrections to electroweak quantities behave like $`\mathrm{log}(m_H/m_W)+c`$. The logarithmic corrections are universal, in the sense that essentially any form of dynamical symmetry breaking would give rise to logarithmic divergences with the same coefficients as the $`\mathrm{log}(m_H)`$ terms. It is only the constants that are unique to the normal perturbative interpretation of the Standard Model. While present data indicate a low value of the Higgs mass within the standard perturbative calculations, they are not precise enough to determine the constant terms uniquely. Therefore a model without a Higgs, but with a cut-off around the weak scale, implying new interactions, cannot be excluded at present. The origin of the triviality, as found above from the lattice data, lies in the appearance of a very large, essentially infinite, ratio of wave-function renormalizations. This naturally raises the question whether there might be a connection with other large ratios of parameters that appear in fundamental physics. Here we suggest a connection with the ratio of the Planck mass and the Fermi scale. There are a number of reasons to suggest that a connection between gravity and the Higgs sector is possible.
There is for instance the question of the cosmological constant, which is generated by the Higgs potential. Another argument is the fact that both gravity and the Higgs particle have a universal form of coupling to matter: gravity couples universally to the energy-momentum tensor, the Higgs particle to mass, which corresponds to the trace of the energy-momentum tensor. However, the way the Planck scale appears is quite different from the way the weak scale appears. This discrepancy can be resolved by assuming that the Planck scale also arises from spontaneous symmetry breaking. We start therefore with the Lagrangian: $$\mathcal{L}=\sqrt{g}\left(\xi \mathrm{\Phi }^+\mathrm{\Phi }R-\frac{1}{2}g^{\mu \nu }(D_\mu \mathrm{\Phi })^+(D_\nu \mathrm{\Phi })-V(\mathrm{\Phi }^+\mathrm{\Phi })-\frac{1}{4}F_{\mu \nu }F^{\mu \nu }\right)$$ $`(5)`$ This is the spontaneous-symmetry-breaking theory of gravity , with the Standard Model Higgs v.e.v. as the origin of the Planck mass. The parameter $`\xi `$ is given by $`\xi =1/(16\pi G_Nv^2)`$. This parameter is of $`O(10^{34})`$ and gives rise to a large mixing between the Higgs boson and the graviton. This model was recently discussed in . The physical content of the model becomes clear after the Weyl rescaling $`g_{\mu \nu }\to \frac{\kappa ^2}{\xi |\mathrm{\Phi }|^2}g_{\mu \nu }`$, giving the Lagrangian : $$\mathcal{L}=\sqrt{g}\left(\kappa ^2R-\frac{3}{2}\frac{\xi v^2}{|\mathrm{\Phi }|^4}(\partial _\mu |\mathrm{\Phi }|^2)(\partial ^\mu |\mathrm{\Phi }|^2)-\frac{1}{2}\frac{v^2}{|\mathrm{\Phi }|^2}(D_\mu \mathrm{\Phi }^+)(D^\mu \mathrm{\Phi })-\frac{v^4}{|\mathrm{\Phi }|^4}V(|\mathrm{\Phi }|^2)\right)$$ $`(6)`$ As $`\xi `$ is very large indeed, this final Lagrangian is of the right form to be consistent with the triviality of the Higgs sector. After performing the wave-function renormalization by the factor $`(1+12\xi )^{1/2}`$, the coupling of the Higgs particle to ordinary matter becomes of gravitational strength. Also the mass of the Higgs particle becomes of the order of $`v^2/m_P`$.
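The structure of the induced kinetic term can be checked schematically (our own sketch, dropping total derivatives; signs depend on the curvature conventions). Under $`g_{\mu \nu }\to \mathrm{\Omega }^2g_{\mu \nu }`$ with $`\mathrm{\Omega }^2=\kappa ^2/(\xi |\mathrm{\Phi }|^2)`$ and $`\kappa ^2=\xi v^2`$, the curvature term transforms as

$$\sqrt{g}\xi |\mathrm{\Phi }|^2R\to \sqrt{g}\kappa ^2\left(R-6g^{\mu \nu }\partial _\mu \mathrm{ln}\mathrm{\Omega }\partial _\nu \mathrm{ln}\mathrm{\Omega }\right)+\mathrm{total}\mathrm{derivatives},$$

and with $`\partial _\mu \mathrm{ln}\mathrm{\Omega }=-\frac{1}{2}\partial _\mu \mathrm{ln}|\mathrm{\Phi }|^2`$ one finds

$$6\kappa ^2(\partial \mathrm{ln}\mathrm{\Omega })^2=\frac{3}{2}\frac{\xi v^2}{|\mathrm{\Phi }|^4}(\partial _\mu |\mathrm{\Phi }|^2)(\partial ^\mu |\mathrm{\Phi }|^2),$$

which is the large, $`\xi `$-enhanced kinetic term for $`|\mathrm{\Phi }|^2`$; the remaining terms follow from $`\sqrt{g}\to \mathrm{\Omega }^4\sqrt{g}`$ and $`g^{\mu \nu }\to \mathrm{\Omega }^{-2}g^{\mu \nu }`$.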
This would give rise to a Yukawa potential of gravitational strength, with a range that could be of the order of a millimeter. Such a force can be searched for in the next generation of fifth-force experiments. If such a close connection between gravity and the Higgs sector is indeed present in nature, quite dramatic effects should be expected at the electroweak scale, as quantum gravity would already appear at this scale. Precisely what these effects are cannot be predicted confidently at the present time. Typical suggestions are higher dimensions, string Regge trajectories, et cetera. In any case it is very suggestive that the picture of triviality in the Higgs sector can be so naturally described in the spontaneous symmetry breaking theory of gravity. Acknowledgements I wish to thank Prof. M. Consoli for many illuminating discussions on the nature of triviality. I thank the INFN, Catania for hospitality. This work was supported by the Deutsche Forschungsgemeinschaft.
no-problem/9908/hep-ph9908379.html
ar5iv
text
# 1 What Are Shadowing Corrections? ## 1 What Are Shadowing Corrections? In the region of low $`x`$ and low $`Q^2`$ we face two challenging problems which have to be resolved in QCD: 1. The matching of “hard” processes, which can be successfully described in perturbative QCD (pQCD), with “soft” processes, which should be described in non-perturbative QCD (npQCD) but for which, in practice, we have only a phenomenological approach; 2. A theoretical approach to high parton density QCD (hdQCD), which we reach in deep inelastic scattering at low $`x`$ but at sufficiently high $`Q^2`$. In this kinematic region we expect that the typical distances will be small, but the parton density will be so large that a new non-perturbative approach has to be developed to understand this system. We are going to advocate the idea that these two problems are correlated, and that the system of partons always passes through the stage of hdQCD before (at shorter distances) it goes into the black box which we call non-perturbative QCD and which, in practice, we describe with old-fashioned Reggeon phenomenology. In spite of the fact that there are many reasons to believe that such a phenomenology could even be correct, the distance between the Reggeon approach and QCD is so large that we lose any flavour of theory in doing this phenomenology. In hdQCD we still have a small parameter (the running QCD coupling $`\alpha _S`$) and we can start to approach this problem using the well-developed methods of pQCD . However, we should realize that the core of the hdQCD problem is non-perturbative, and therefore, in approaching hdQCD theoretically we are preparing a new training ground for the search for npQCD methods. First, let us recall that a DIS experiment is nothing more than a microscope, and we have two variables to describe its operation. The first one is the resolution of the microscope, namely $`\mathrm{\Delta }x\sim 1/Q`$, where $`Q^2`$ is the virtuality of the photon.
It means that our microscope can see all constituents inside a target with a size larger than $`\mathrm{\Delta }x`$. The second variable is the time of observation. It sounds strange that we have this new variable, which we do not use when working with a medical microscope. However, we are dealing here with a relativistic system which can produce hadrons (partons). So, for an everyday analogy, we should rather consider a box with flies which multiply, so that their number is certainly different at different moments of time. To estimate this time we can use the uncertainty principle $`\mathrm{\Delta }t\sim 1/\mathrm{\Delta }E`$, where $`\mathrm{\Delta }E`$ is the change of energy, namely $`\mathrm{\Delta }E=E_{initial}-E_{final}`$, and for a system of quark and antiquark $`\mathrm{\Delta }E=q_0-q_1-q_2\simeq q_0-q=\frac{(q_0-q)(q_0+q)m}{2q_0m}\simeq -\frac{Q^2m}{W^2}=-mx`$, so that $`|\mathrm{\Delta }E|=mx`$, where $`m`$ is the mass of the target and $`q_0`$ and $`q`$ are the energy and momentum of the virtual photon. Finally, $`t=1/mx`$ with $`x=\frac{Q^2}{W^2}`$, where $`W`$ is the energy of the photon–target interaction. Therefore, the question that we are asking in DIS at low $`x`$ is what happens to constituents of rather small size after a long time. It is clear that the number of these constituents should increase, since in QCD each parton can decay into two partons with probability $`P_i=\frac{N_c\alpha _S}{\pi }\frac{dE_i}{E_i}\frac{d^2k_{i,t}}{k_{i,t}^2}`$, where $`E_i`$ and $`\stackrel{}{k}_i`$ are the energy and momentum of the emitted parton $`i`$. We can describe this growth by introducing the so-called gluon structure function ($`xG(x,Q^2)`$), i.e. the number of partons that can be resolved with a microscope of given $`Q^2`$ and $`x`$. Indeed, $$\frac{\partial ^2xG(x,Q^2)}{\partial \mathrm{ln}(1/x)\partial \mathrm{ln}Q^2}=\frac{N_c\alpha _S}{\pi }xG(x,Q^2).$$ (1) This equation is the DGLAP evolution equation in the region of low $`x`$. It has an obvious solution $`xG(x,Q^2)\propto exp\left(2\sqrt{\frac{N_c\alpha _S}{\pi }\mathrm{ln}(1/x)\mathrm{ln}(Q^2/Q_0^2)}\right)`$.
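As a numerical cross-check of this solution, the sketch below integrates Eq. (1) on a grid in $`y=\mathrm{ln}(1/x)`$ and $`\rho =\mathrm{ln}(Q^2/Q_0^2)`$ with boundary condition $`xG=1`$ on the axes, and compares with the exact boundary-value solution $`I_0(2\sqrt{\omega y\rho })`$, whose leading exponential is the $`exp(2\sqrt{\mathrm{}})`$ quoted above (the choice $`\alpha _S=0.2`$ is illustrative, not taken from the talk):

```python
import math

def bessel_i0(z):
    """Modified Bessel function I_0(z) from its power series."""
    term, total, k = 1.0, 1.0, 0
    while term > 1e-16 * total:
        k += 1
        term *= (z * z / 4.0) / (k * k)
        total += term
    return total

def dglap_double_log(size, n, omega):
    """Finite-difference solution of d^2(xG)/(dy d rho) = omega * xG
    on the square [0, size]^2 with xG = 1 on both axes."""
    h = size / n
    g = [[1.0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            g[i][j] = (g[i - 1][j] + g[i][j - 1] - g[i - 1][j - 1]
                       + omega * h * h * g[i - 1][j - 1])
    return g[n][n]

omega = 3 * 0.2 / math.pi                  # N_c * alpha_S / pi
y = rho = 5.0                              # ln(1/x) = ln(Q^2/Q_0^2) = 5
numeric = dglap_double_log(5.0, 500, omega)
exact = bessel_i0(2.0 * math.sqrt(omega * y * rho))
print(numeric, exact)                      # should agree to within a few per cent
```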
Therefore, we expect an increase of the parton densities at $`x\to 0`$. In Fig. 1 we picture the parton distributions in the transverse plane. At $`x\sim 1`$ there are several partons of a small size. The distance between partons is much larger than their size, and we can neglect the interactions between them. However, at $`x\to 0`$ the number of partons becomes so large that they densely populate the whole area of the target. In this case one cannot neglect the interactions between them, which were omitted in the evolution equation (see Eq. ( 1 )). Therefore, at low $`x`$ we have the more complex problem of taking into account both the emission and the rescattering of partons. Since the most important interaction in QCD is the three-parton one, the process of rescattering is actually a process of annihilation, in which one parton is created out of two partons (gluons). Therefore, at low $`x`$ we have two processes: 1. Emission, induced by the QCD vertex $`G\to G+G`$, with probability proportional to $`\alpha _S\rho `$, where $`\rho `$ is the parton density in the transverse plane, namely $$\rho =\frac{xG(x,Q^2)}{\pi R^2},$$ (2) where $`\pi R^2`$ is the target area; 2. Annihilation, induced by the vertex $`G+G\to G`$, with probability proportional to $`\alpha _S\sigma _0\rho ^2`$, where $`\alpha _S`$ is the probability of the process $`G+G\to G`$, $`\sigma _0`$ is the cross section of two-parton interaction, and $`\sigma _0\sim \frac{\alpha _S}{Q^2}`$. $`\sigma _0\rho `$ gives the probability for two partons to meet and to interact, while $`\alpha _S\sigma _0\rho ^2`$ gives the probability of the annihilation process.
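The competition between these two processes can be previewed in a toy model: freezing the $`Q^2`$ dependence, the balance of emission ($`a\rho `$) and annihilation ($`c\rho ^2`$) is a logistic equation in $`y=\mathrm{ln}(1/x)`$, whose density saturates at the fixed point $`\rho =a/c`$ instead of growing without bound (the coefficient values below are illustrative, not taken from the talk):

```python
import math

def glr_toy(rho0, a, c, y_max, steps):
    """Euler integration of d(rho)/dy = a*rho - c*rho^2 (Q^2 evolution frozen)."""
    h = y_max / steps
    rho = rho0
    for _ in range(steps):
        rho += h * (a * rho - c * rho * rho)
    return rho

a = 3 * 0.2 / math.pi   # emission coefficient N_c*alpha_S/pi, illustrative alpha_S
c = 0.05                # illustrative annihilation coefficient
rho0, y_max = 0.1, 60.0

rho_num = glr_toy(rho0, a, c, y_max, steps=6000)
rho_star = a / c                                  # saturation density a/c
rho_exact = rho_star / (1.0 + (rho_star / rho0 - 1.0) * math.exp(-a * y_max))
print(rho_num, rho_exact, rho_star)
```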
Finally, the change of the parton density is equal to $$\frac{\partial ^2\rho (x,Q^2)}{\partial \mathrm{ln}(1/x)\partial \mathrm{ln}Q^2}=\frac{N_c\alpha _S}{\pi }\rho (x,Q^2)-\frac{\alpha _S^2\gamma }{Q^2}\rho ^2(x,Q^2)$$ (3) or, in terms of the gluon structure function, $$\frac{\partial ^2xG(x,Q^2)}{\partial \mathrm{ln}(1/x)\partial \mathrm{ln}Q^2}=\frac{N_c\alpha _S}{\pi }xG(x,Q^2)-\frac{\alpha _S^2\gamma }{\pi R^2Q^2}\left(xG(x,Q^2)\right)^2,$$ (4) where $`\gamma `$ has been calculated in pQCD . Therefore, Eq.( 4 ) is a natural generalization of the DGLAP evolution equations. The question arises why we call such a natural equation, describing the balance of partons due to two competing processes, a shadowing and/or screening correction. To understand this, let us consider the interaction of a fast hadron with a virtual photon at rest (the Bjorken frame). In the parton model, only the slowest (“wee”) partons interact with the photon. If the number $`N`$ of “wee” partons is not large, the cross section is equal to $`\sigma _0N`$. However, if we have two “wee” partons with the same energies and momenta, we overestimate the value of the total cross section using the above formula. Indeed, the total cross section counts only the number of interactions and, therefore, in the situation when one parton is situated just behind another we do not need to count the interaction of the second parton if we have taken into account the interaction of the first one. It means that the cross section is equal to $$\sigma _{tot}=\sigma _0N\{1-\frac{\sigma _0}{\pi R^2}\},$$ (5) where $`R`$ is the hadron radius. One can see that we reproduce Eq.( 4 ) by taking into account that there is a probability for a parton not to interact while being in the shadow of the second parton. ## 2 What Have We Learned about SC? During the past two decades high parton density QCD has been under close investigation by many theorists, and here we summarize the results of their activity.
* The parameter which controls the strength of the SC has been found; it is equal to $$\kappa =\frac{3\pi ^2\alpha _s}{2Q^2}\times \frac{xG(x,Q^2)}{\pi R^2}=\sigma _0\times \rho (x,Q^2).$$ (6) The meaning of this parameter is very simple: it gives the probability of interaction for two partons in the parton cascade or, better to say, a packing factor for partons in the parton cascade. * We know the correct degrees of freedom at high energies: colour dipoles . By definition, the correct degrees of freedom are the set of quantum numbers which mark the wave function that is diagonal with respect to the interaction matrix. Therefore, we know that the size and the energy of a colour dipole are not changed by the high energy QCD interaction. * A new scale $`Q_0^2(x)`$ for hdQCD has been traced in the pQCD approach; it is the solution of the equation $$\kappa =\frac{3\pi ^2\alpha _s}{2Q_0^2(x)}\times \frac{xG(x,Q_0^2(x))}{\pi R^2}=1.$$ (7) This new scale leads to the effective Lagrangian approach, which gives us a general non-perturbative method to study hdQCD. * We know that the GLR equation (see Eqs.( 3 ) and ( 4 )) describes the evolution of the dipole density in the full kinematic region . We understood that the Mueller-Glauber approach for colour dipole rescattering gives the initial condition for the GLR equation. * A new, non-perturbative approach, based on the effective Lagrangian , has been developed for hdQCD, which gives rise to the hope that hdQCD can be treated theoretically from first principles. * We are very close to an understanding of parton density saturation . In general, we think that the theoretical approach to hdQCD is in good shape now. ## 3 HERA Puzzle: Where Are SC?
The widespread opinion is that the HERA experimental data for $`Q^2\geq 1GeV^2`$ can be described quite well using only the DGLAP evolution equations, without any other ingredients such as shadowing corrections, higher twist contributions and so on (see, for example, the reviews ). On the other hand, the most important HERA discovery is the fact that the density of gluons (the gluon structure function) becomes large in the HERA kinematic region . The gluon densities extracted from the HERA data are so large that the parameter $`\kappa `$ (see Eq.( 6)) exceeds unity in a substantial part of the HERA kinematic region (see Fig.2a). Another way to see this is to plot the solution of Eq.( 7) (see Fig.2b). It means that in the large kinematic region where $`\kappa \geq 1`$ (to the left of the line $`\kappa =1`$ in Fig.2b) we expect the SC to be large and important for a description of the experimental data. At first sight such expectations are in clear contradiction with the experimental data. Certainly, this fact gave rise to suspicions, or even mistrust, that our theoretical approach to the SC is not consistent. However, the revision and re-analysis of the SC, as discussed in the previous section, have been completed, with the result that $`\kappa `$ is indeed the parameter responsible for the value of the SC. Therefore, we face a puzzling question: where are the SC? Actually, this question includes at least two questions: (i) why are SC not needed to describe the HERA data on $`F_2`$, and (ii) where are the experimental manifestations of strong SC? The answers to these two questions will be found in the next three sections but, in short, they are: the SC give only a weak change of $`F_2`$ in the HERA kinematic region, but they are strong for the gluon structure function . We hope to convince you that there are at least two indications in the HERA data supporting a large value of the SC to the gluon density: 1.
$`x_P`$ \- behaviour of the cross section of diffractive dissociation ($`\sigma ^{DD}`$) in DIS; 2. $`Q^2`$ \- behaviour of the $`F_2`$ slope ( $`\frac{\partial F_2(x,Q^2)}{\partial \mathrm{ln}Q^2}`$ ). ## 4 SC for $`𝐅_\mathrm{𝟐}`$ It is well known that the $`\gamma ^{}`$-hadron interaction goes in two stages: (i) the transition from the virtual photon to a colour dipole, and (ii) the interaction of the colour dipole with the target. To illustrate how the SC work, we consider the Glauber - Mueller formula, which describes the rescatterings of the colour dipole on the target: $`F_2(x_B,Q^2)`$ $`=`$ $`{\displaystyle \frac{N_c}{6\pi ^3}}{\displaystyle \underset{1}{\overset{N_f}{\sum }}}Z_f^2{\displaystyle \int _{Q_0^2}^{Q^2}}dQ^2{\displaystyle \int db_t^2\{1-e^{-\frac{1}{2}\frac{4}{9}\kappa (x_B,Q^2)S(b_t)}\}}`$ (8) $`+`$ $`F_2(x_B,Q_0^2),`$ where $`S(b_t)=e^{-\frac{b_t^2}{R^2}}`$ is the target profile function in the impact parameter representation and $`\frac{4}{9}\kappa (x_B,Q^2)=\sigma (x_B,r_{}^2=4/Q^2)`$ is the cross section of the dipole scattering in pQCD. One can see that Eq.( 8) leads asymptotically to $$F_2(x_B,Q^2)\to \frac{N_c}{6\pi ^3}\underset{1}{\overset{N_f}{\sum }}Z_f^2Q^2R^2.$$ (9) However, we are sure that the kinematic region of HERA is far away from the asymptotic one. The practical calculation depends on three ingredients: the value of $`R^2`$, the value of the initial virtuality $`Q_0^2`$, and the initial $`F_2`$ at $`Q_0^2`$. We fix them as follows: $`R^2=10GeV^{-2}`$, which corresponds to “soft” high energy phenomenology , $`Q_0^2=1GeV^2`$ and $`F_2^{input}(x_B,Q_0^2)=F_2^{GRV^{}94}(x_B,Q_0^2=1GeV^2)`$. Therefore, the result of the calculation should be read as “the SC for colour dipoles of size smaller than $`r_{}^2\simeq 4/Q_0^2=4GeV^{-2}`$ are equal to …”. From Fig.3 one can see that the SC are rather small for $`F_2`$ but are strong and essential for the gluon structure function.
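The size of the damping in Eq. (8) can be illustrated by comparing the eikonal profile integral $`d^2b_t\{1-e^{-aS(b_t)}\}`$, with $`a=\frac{1}{2}\frac{4}{9}\kappa `$ and the Gaussian profile above, to its single-rescattering (Born) limit $`a\pi R^2`$. For a gluon dipole the colour factor $`4/9`$ is absent, so the same curve is probed at a larger $`a`$, one way of seeing why the SC are stronger for $`xG`$ than for $`F_2`$. The values of $`a`$ below are illustrative:

```python
import math

def eikonal_integral(a, n=20000):
    """int_0^1 (1 - exp(-a*t))/t dt, which is the impact-parameter integral
    int d^2b (1 - exp(-a*S(b))) with S(b) = exp(-b^2/R^2), divided by pi*R^2."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t0, t1 = i * h, (i + 1) * h
        f0 = a if t0 == 0.0 else (1.0 - math.exp(-a * t0)) / t0
        f1 = (1.0 - math.exp(-a * t1)) / t1
        total += 0.5 * (f0 + f1) * h          # trapezoid rule
    return total

def suppression_ratio(a):
    """Eikonal result divided by the Born term (both in units of pi*R^2)."""
    return eikonal_integral(a) / a

for a in (0.1, 1.0, 4.0):
    print(a, suppression_ratio(a))            # falls below 1 as the packing grows
```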
It means that we have to look for physics observables which are more sensitive to the value of the gluon structure function than $`F_2`$ is. ## 5 $`𝐱_𝐏`$\- dependence of $`\sigma ^{\mathrm{𝐃𝐃}}`$ One such observable is the cross section of diffractive dissociation and, especially, the energy dependence of this cross section. Data: Both the H1 and ZEUS collaborations found that $$\sigma ^{DD}\propto \frac{1}{x^{2\mathrm{\Delta }_P}},$$ (10) where $`\mathrm{\Delta }_P=\alpha _P(0)-1`$ and the values of $`\alpha _P(0)`$ are: * H1 : $`\alpha _P(0)`$ = 1.2003 $`\pm `$ 0.020(stat.) $`\pm `$ 0.013(sys) ; * ZEUS : $`\alpha _P(0)`$ = 1.1270 $`\pm `$ 0.009(stat.) $`\pm `$ 0.012(sys) . It is clear that the Pomeron intercept ($`\alpha _P(0)`$) for diffractive processes in DIS is higher than the intercept of the “soft” Pomeron . Why is this surprising and interesting? To answer this question we have to recall that the cross sections for diffractive production of a quark-antiquark pair have the following form in pQCD : $`x_P{\displaystyle \frac{d\sigma _{DD}^T(\gamma ^{}\to q+\overline{q})}{dx_Pdt}}`$ $`\propto `$ $`{\displaystyle \int _{Q_0^2}^{\frac{M^2}{4}}}{\displaystyle \frac{dk_{}^2}{k_{}^2}}\times {\displaystyle \frac{\left(\alpha _Sx_PG(x_P,\frac{k_{}^2}{1-\beta })\right)^2}{k_{}^2}};`$ (11) $`x_P{\displaystyle \frac{d\sigma _{DD}^L(\gamma ^{}\to q+\overline{q})}{dx_Pdt}}`$ $`\propto `$ $`{\displaystyle \int _{Q_0^2}^{\frac{M^2}{4}}}{\displaystyle \frac{dk_{}^2}{Q^2}}\times {\displaystyle \frac{\left(\alpha _Sx_PG(x_P,\frac{k_{}^2}{1-\beta })\right)^2}{k_{}^2}}.`$ (12) From Eqs.( 11) and ( 12) one can see that the $`k_{}`$ integration looks quite different for transversely and longitudinally polarized photons: the latter has a typical logarithmic integral over $`k_{}`$, while the former has an integral which is normally dominated by small values of $`k_{}`$. We have the same property for the production of a more complex system than $`q\overline{q}`$, for example $`q\overline{q}G`$ .
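The different $`k_{}`$ weights in Eqs. (11) and (12) can be made concrete by integrating both with a constant $`xG`$ (purely to show where the support sits; a constant gluon density is of course an over-simplification): the transverse weight $`dk^2/k^4`$ piles up at the lower limit, while the longitudinal weight $`dk^2/k^2`$ spreads logarithmically over the whole range.

```python
def integral(weight, a, b, n=100000):
    """Midpoint rule for int_a^b weight(k2) dk2."""
    h = (b - a) / n
    return sum(weight(a + (i + 0.5) * h) for i in range(n)) * h

def support_fraction(weight, q0sq, kmax2, split):
    """Fraction of the k_t^2 integral accumulated below 'split'."""
    return integral(weight, q0sq, split) / integral(weight, q0sq, kmax2)

def w_transverse(k2):    # dk^2/k^2 * 1/k^2, with xG held constant
    return 1.0 / (k2 * k2)

def w_longitudinal(k2):  # dk^2/Q^2 * 1/k^2, the constant 1/Q^2 dropped
    return 1.0 / k2

q0sq, kmax2, split = 0.5, 50.0, 5.0
print(support_fraction(w_transverse, q0sq, kmax2, split))    # ~0.91: small k_t wins
print(support_fraction(w_longitudinal, q0sq, kmax2, split))  # 0.5: flat per decade
```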
Therefore, we would expect diffractive production to come from long distances, where the “soft” Pomeron contributes. However, the experiment tells a different story, namely, that this production has a considerable contamination from short distances. How is this possible? As far as we know, there is only one explanation: the SC are so strong that $`xG(x,k_{}^2)\propto k_{}^2R^2`$ (see Eq.( 9) ). Substituting this asymptotic limit in Eq.( 11), one can see that the integral becomes convergent and sits at the upper limit of integration, which is equal to $`k_{}^2=Q_0^2(x)`$. Finally, we have $$x_P\frac{d\sigma _{DD}}{dx_Pdt}\propto \left(x_PG(x_P,Q_0^2(x_P))\right)^2\times \frac{1}{Q_0^2(x_P)}$$ (13) The calculation of $`\frac{\partial xG(x,Q^2)}{\partial \mathrm{ln}(1/x)}`$ is shown in Fig.4 for the HERA kinematic region, using the Glauber-Mueller formula for the SC. Taking into account that $`Q_0^2(x)`$ in Fig.2b can be fitted as $`Q_0^2(x)=1GeV^2\left(\frac{x}{x_0}\right)^{-\lambda }`$ with $`\lambda =0.54`$ and $`x_0=10^{-2}`$, so that in the saturation limit Eq.( 13) scales as $`Q_0^2(x_P)\propto x_P^{-\lambda }`$, i.e. $`2\mathrm{\Delta }_P\simeq \lambda `$, we see from Eq.( 13) and Fig.4 that we are able to reproduce the experimental value of $`\alpha _P(0)`$ and conclude that the typical $`k_{}^2`$ dominating the integral is not small ($`k_{}^2\simeq 1`$–$`2GeV^2`$). For the Golec-Biernat and Wüsthoff approach, which we will discuss later, $`\lambda =0.288`$ and the value of the typical $`k_{}^2`$ turns out to be even higher. ## 6 The $`𝐐^\mathrm{𝟐}`$\- dependence of the $`𝐅_\mathrm{𝟐}`$ \- slope. Data: The experimental data for the $`F_2`$ slope $`dF_2(x,Q^2)/d\mathrm{ln}Q^2`$ are shown in Fig.5a (the Caldwell plot). These data gave rise to the hope that the matching between “hard” (short distance) and “soft” (long distance) processes occurs at sufficiently large $`Q^2`$, since the $`F_2`$ slope starts to deviate from the DGLAP predictions around $`Q^2\simeq 5`$–$`8GeV^2`$.
$`𝐅_\mathrm{𝟐}`$-slope and SC: Our principal idea, as mentioned at the beginning of the talk, is that the matching between “hard” and “soft” processes is provided by the hdQCD phase of the parton cascade or, in other words, by strong SC. The asymptotic behaviour $`F_2\propto Q^2R^2`$ for $`Q^2<Q_0^2(x)`$ leads to $`dF_2(x,Q^2)/d\mathrm{ln}Q^2\propto Q^2R^2`$ at $`Q^2<Q_0^2(x)`$ (see Eq.( 9) ), and this behaviour supports our point of view . However, we have two problems to solve before drawing any conclusion: (i) the experimental data are taken at different points $`(x,Q^2)`$, and the trend could therefore be interpreted as a change of the $`x`$-behaviour rather than of the $`Q^2`$-behaviour; and (ii) the value of the $`F_2`$ slope is quite different from the value of $`F_2`$, while for the asymptotic solution they should be the same. Therefore, we have to calculate the $`F_2`$ slope to understand the data. The result of the calculation using the Glauber-Mueller formula is presented in Fig.5b. One can see that (i) the experimental data show a $`Q^2`$-behaviour rather than an $`x`$-dependence, which is not qualitatively influenced by the SC; and (ii) the SC are able to describe both the value and the $`Q^2`$-behaviour of the experimental data. Fig.5b also shows that the ALLM’97 parameterization , which can be viewed as a phenomenological description of the experimental data, has the same features as our calculation, confirming the fact that the data show a $`Q^2`$-dependence and not an $`x`$-behaviour of the $`F_2`$ slope. ## 7 Golec-Biernat and Wüsthoff Approach Golec-Biernat and Wüsthoff suggested a phenomenological approach which takes into account the key idea of hdQCD, namely, the new scale of hardness in the parton cascade.
They use for the $`\gamma ^{}p`$ cross section the following formulae: $`\sigma _{tot}(\gamma ^{}p)`$ $`=`$ $`{\displaystyle \int d^2r_{}\int _0^1dz|\mathrm{\Psi }(Q^2;r_{},z)|^2\sigma _{tot}(r_{}^2,x)};`$ (14) $`\sigma (x,r_{})`$ $`=`$ $`\sigma _0\{1-e^{-\frac{r_{}^2}{R^2(x)}}\};`$ (15) $`R^2(x)`$ $`=`$ $`1/Q_0^2(x)`$, with $`Q_0^2(x)=Q_0^2\left({\displaystyle \frac{x}{x_0}}\right)^{-\lambda }.`$ (16) Extracting the parameters of their model from a fit to the experimental data, namely $`\sigma _0`$ = 23.03 mb, $`\lambda `$ = 0.288, $`Q_0^2=1GeV^2`$ and $`x_0`$ = 3.04 $`\times 10^{-4}`$ , they described quite well all the data on total and diffractive cross sections in DIS (see Fig.6 ). ## 8 Why Have We Only Indications? The answer is: because we have, or can have, an alternative explanation of each separate fact. For example, we can describe the $`F_2`$-slope behaviour by changing the initial $`x`$-distribution for the DGLAP evolution equations . Our difficulty in interpreting the experimental data can be seen in Fig.6a, where the new scale $`Q_0^2(x)`$ is plotted. One can see that $`Q_0^2(x)`$ is almost constant in the HERA kinematic region. It means that we can put the initial condition for the evolution equations at $`Q_0^2=<Q_0^2(x)>`$, where $`<Q_0^2(x)>`$ is the average new scale in the HERA region. Therefore, the SC can to a large extent be absorbed into the initial conditions, and the question that can and should be asked is how well motivated these conditions are. For example, I do not think that the initial gluon distribution in the MRST parameterization , needed to describe the $`F_2`$-slope data, can be considered a natural one.
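For concreteness, the sketch below implements the dipole cross section of Eqs. (15) and (16) with the quoted fit parameters (conventions differ slightly between papers, e.g. a factor of 4 in the exponent, so this illustrates the functional form rather than reproducing the published fit):

```python
import math

SIGMA0 = 23.03      # mb
LAMBDA = 0.288
Q0SQ = 1.0          # GeV^2
X0 = 3.04e-4

def r2_sat(x):
    """R^2(x) = 1/Q_0^2(x) in GeV^-2; shrinks as x decreases (saturation)."""
    return (x / X0) ** LAMBDA / Q0SQ

def sigma_dipole(x, r):
    """GW-style dipole cross section in mb for a dipole of size r (GeV^-1)."""
    return SIGMA0 * (1.0 - math.exp(-r * r / r2_sat(x)))

x = 1e-3
for r in (0.1, 1.0, 10.0):
    print(r, sigma_dipole(x, r))   # colour transparency -> saturation at sigma_0
```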
## 9 Summary We hope we have convinced you that: (i) hdQCD is in good theoretical shape; (ii) the hdQCD region has been reached at HERA; (iii) the HERA data do not contradict strong SC effects; (iv) there are at least two indications of SC effects in the HERA data: the $`Q^2`$-behaviour of the $`F_2`$ slope and the $`x_P`$-behaviour of the diffractive cross section in DIS; and (v) the HERA data and the hdQCD theory have given an impetus to a very successful phenomenology for matching “hard” and “soft” physics. We would like to finish this talk with a rather long quotation: “Small $`x`$ Physics is still in its infancy. Its relations to heavy ion physics, mathematical physics and soft hadron physics along with a rich variety of possible signature makes it central for QCD studies over the next decade” (A.H. Mueller, B. Müller, G. Rebbi and W.H. Smith, “Report of the DPF Long Range Planning WG on QCD”). Hopefully, we will learn more about low-$`x`$ physics by the next Ringberg Workshop.
no-problem/9908/astro-ph9908143.html
ar5iv
text
# The bimodal spiral galaxy surface brightness distribution ## 1 Introduction The distribution of spiral galaxy disc surface brightnesses is of great observational and theoretical importance in extragalactic astronomy. Observationally, knowledge about the present day disc galaxy central surface brightness distribution (SBD) allows us to better understand the importance and implications of surface brightness selection effects, which is important e.g. for understanding the local galaxy luminosity function (e.g. Dalcanton 1998; de Jong & Lacey 1998), or for comparing galaxy populations at low and high redshifts (e.g. Simard et al. 1999). Theories of galaxy formation and evolution link the SBD with the angular momentum, mass and star formation history of galaxies: indeed, the SBD has been used as a constraint on some galaxy evolution models \[Dalcanton, Spergel & Summers 1997, Mo, Mao & White 1998, Baugh et al. 1998\]. Previous studies have generally found an SBD that can be described as flat or slowly declining with surface brightness, modulated by a sharp cut-off at the high surface brightness end, normally associated with the Freeman \[Freeman 1970\] value (e.g. de Jong 1996c; McGaugh 1996). Tully and Verheijen (1997; TV hereafter) derived the SBD of a complete sample of 60 spiral discs in the Ursa Major Cluster from an imaging study in the optical and near-infrared \[Tully et al. 1996\]. Their data suggested an inclination-corrected near-infrared bimodal SBD (as opposed to the conventional unimodal distribution described above) with a peak associated with the Freeman value, and a second peak $`\sim 3`$ mag arcsec<sup>-2</sup> fainter, separated by a gap (Fig. 1, solid line in panel a). This would have far-reaching implications for our understanding of galaxy formation and evolution.
A bimodal SBD would imply that there are two discrete galaxy populations: a high surface density population and one with surface densities a factor of $`\sim 10`$ lower (assuming comparable mass to light ratios for the two populations). Correspondingly, the masses, spin distributions and star formation histories of these two families of galaxies would adopt one of two preferred states, and plausible mechanisms for generating these discrete differences in e.g. star formation history or spin distribution would have to be found. Why this bimodal distribution was not found before is an important question. TV address this problem at length: here we will summarise their main arguments. First, the bimodal SBD is based on near-infrared $`K^{}`$ data and, in fact, is only really apparent there. Near-infrared photometry is much less susceptible to the effects of dust extinction than optical photometry. Furthermore, high surface brightness galaxies are typically redder than low surface brightness galaxies (de Jong 1996b), which accentuates the gap in the near-infrared. TV demonstrate that in the optical $`B`$ band the bimodality is washed out by these effects: the bimodality in the optical is consistent with the bimodality in the near-infrared, but is less significant (accordingly, we do not consider TV’s optical SBDs here). A bimodal SBD would therefore remain undetected in optical studies (e.g. McGaugh 1996, de Jong & Lacey 1998). Second, the TV SBD has been determined from a complete sample of galaxies at a common distance, making selection effects much easier to understand. This may explain why de Jong \[de Jong 1996c\], using $`K`$ band photometry for a sample of field spiral galaxies, did not report any sign of bimodality in his SBD.
In an attempt to investigate whether the bimodal distribution could be caused by environmental differences, TV defined an isolated subsample, consisting of 36 galaxies that have a projected distance $`>80`$ kpc to their nearest neighbour (assuming a distance to the Ursa Major Cluster of 15.5 Mpc, corresponding to $`H_0=85`$ kms<sup>-1</sup> Mpc<sup>-1</sup>). They found that the gap is more pronounced in this isolated subsample, and attributed the intermediate surface brightnesses found in their total sample to interactions (Fig. 1, solid line in panel b). Unfortunately, TV did not attempt to robustly estimate the significance of the apparent bimodality. Their only estimate is based on the isolated subsample, where they compare the number of galaxies in the 1.5 mag wide gap (between $`\sim `$ 17.5 and $`\sim `$ 19 mag arcsec<sup>-2</sup>; 3 galaxies) with the number of galaxies that they expected to see in that gap (20 galaxies). This expectation value was derived from a model unimodal distribution normalised to reproduce the peak number of galaxies in the high surface brightness bins, not the total number of galaxies. Properly normalising the model distribution lowers the expectation from $`\sim `$ 20 galaxies to $`\sim 12`$ galaxies. In this paper, we investigate the significance of the bimodal near-infrared SBD, focussing on whether this bimodality is simply an artifact of small number statistics. In sections 2 and 3, we use a Kolmogorov-Smirnov (KS) style of test to address this question, investigating the significance of both the total and isolated samples. In section 4, we discuss the results, focussing on the rôle of inclination corrections. We present our main conclusions in section 5. ## 2 Constructing a null hypothesis In order to test the significance of the bimodality in the SBD, it is necessary to choose a null distribution to test the observations against.
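As a quantitative aside on the normalisation point above, a naive Poisson estimate (which ignores the fitting freedom that the Monte Carlo analysis of the following sections is designed to handle) already shows how much the apparent significance of the gap depends on the expectation value:

```python
import math

def poisson_cdf(k, lam):
    """P(N <= k) for N ~ Poisson(lam)."""
    return math.exp(-lam) * sum(lam ** i / math.factorial(i) for i in range(k + 1))

p_gap_20 = poisson_cdf(3, 20.0)   # <= 3 galaxies seen when ~20 are expected
p_gap_12 = poisson_cdf(3, 12.0)   # <= 3 galaxies seen when ~12 are expected
print(p_gap_20, p_gap_12)
```

Renormalising the expectation from ~20 to ~12 galaxies raises the chance probability by nearly three orders of magnitude, from a few times 10<sup>-6</sup> to a few times 10<sup>-3</sup>.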
Ordinarily, it would be best to choose some kind of ‘best bet’ observationally determined or theoretically motivated null hypothesis: failure of this null hypothesis would indicate that it does not accurately describe the dataset. However, in this case, we wish to quantify the significance of a particular feature in the observational distribution, namely the bimodality in the Ursa Major Cluster SBD. For this reason, it is necessary to be careful about choosing a null distribution: if a null distribution was chosen which did not adequately describe the general features of the SBD, then a high significance (i.e. a low probability of obtaining the data from the null) would not represent a large degree of bimodality, but rather would indicate that the null hypothesis was a poor match to the general shape of the SBD. For this reason, we have chosen to fit a relatively simple model distribution to the data (taking into account selection effects), allowing us to determine the significance of the bimodality by minimising the mismatches between the general shapes of the two distributions. For the unimodal model, we use a slightly modified version of the optical SBD presented in McGaugh \[McGaugh 1996\]: $`\mathrm{log}_{10}\mathrm{\Phi }(\mu _{K^{},0})`$ $`=m(\mu _{K^{},0}-\mu _{K^{},0}^{})`$ $`\mathrm{where}`$ $`\{\begin{array}{cccc}m\hfill & =\hfill & m_f\hfill & \mathrm{if}\mu _{K^{},0}\geq \mu _{K^{},0}^{}\hfill \\ m\hfill & =\hfill & 2.6\hfill & \mathrm{if}\mu _{K^{},0}<\mu _{K^{},0}^{}\hfill \end{array}`$ (3) The critical surface brightness $`\mu _{K^{},0}^{}`$ and the faint-end slope $`m_f`$ are adjustable parameters which are fit below. <sup>1</sup><sup>1</sup>1McGaugh’s original values were $`\mu _0^{}=`$ 21.5 $`B_J`$ mag arcsec<sup>-2</sup> and $`m_f=0.3`$, which are roughly consistent with the best fit results described in section 3.
For reasonable values of $`m_f`$, the number of galaxies per unit mag arcsec<sup>-2</sup> only slowly changes with surface brightness, and falls quite sharply to zero above a given ‘critical’ surface brightness $`\mu _{K^{},0}^{}`$ (see e.g. the dashed line in panel e of Fig. 1). To take into account the selection effects that affect TV’s observations, it is also necessary to adopt a scale size distribution: we assume that the scale size distribution is constant (per logarithmic interval) over the interval $`\mathrm{log}_{10}h_K^{}=0.8`$ to $`\mathrm{log}_{10}h_K^{}=1.7`$ ($`\sim `$6.3 arcsec to 50.1 arcsec, or, at an adopted distance of 15.5 Mpc, $`\sim `$0.5 to 3.8 kpc). Selection criteria are modelled by choosing only galaxies with isophotal magnitudes $`m_K^{}<12`$, and isophotal diameters $`D_K^{}>60^{\prime \prime }`$, both measured at the 22.5 $`K^{}`$ mag arcsec<sup>-2</sup> isophote. Though these criteria differ slightly from those used by TV (as their selection was made in $`B`$ band rather than $`K^{}`$ band), they are comparable to better than 0.5 mag, assuming $`B-K^{}\simeq 3`$ for faint late-type galaxies \[de Jong 1996b\], and accurately follow the surface brightness and size limits on TV’s $`\mu _{K^{},0}`$–$`\mathrm{log}_{10}h_K^{}`$ plane. We emphasize that we do not hope to somehow represent the ‘universal’ $`K^{}`$ band SBD with this function; we simply aim to construct a plausible, simple SBD that provides a sensible single-peaked fit to the observed bimodal Ursa Major SBD. In panel e in Fig. 1, we show a 10<sup>4</sup> galaxy Monte Carlo realisation of a model distribution with $`\mu _{K^{},0}^{}=16.3`$ and $`m_f=0.0`$ (also in panels a and b, and in cumulative form in panels g and h as a dashed line). We also show the same model distribution without the selection effects (with arbitrary normalisation) in panel e as a dashed line. ## 3 Results In the above section, we described the single-peaked distribution that we will fit to the observed SBD.
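The Monte Carlo construction described in this section can be sketched as follows, assuming pure exponential discs so that the isophotal diameter and magnitude at $`\mu _{lim}=22.5`$ mag arcsec<sup>-2</sup> follow from ($`\mu _{K^{},0}`$, $`h`$) in closed form; the sampler and cuts below are an illustration of the procedure, not the authors' actual code:

```python
import math
import random

MU_STAR, M_F, M_BRIGHT = 16.3, 0.0, 2.6    # model parameters used in panel e
MU_LO, MU_HI = 14.0, 22.0                  # illustrative sampling window
MU_LIM = 22.5                              # limiting isophote, mag arcsec^-2

def phi(mu):
    """Unimodal model: log10(Phi) = m*(mu - mu*), steep brightward of mu*."""
    m = M_F if mu >= MU_STAR else M_BRIGHT
    return 10.0 ** (m * (mu - MU_STAR))

def draw_galaxy(rng):
    """Rejection-sample mu_0 from phi; log10(h/arcsec) uniform in [0.8, 1.7]."""
    while True:
        mu = rng.uniform(MU_LO, MU_HI)
        if rng.random() < phi(mu):          # phi <= 1 on this window for m_f = 0
            return mu, 10.0 ** rng.uniform(0.8, 1.7)

def observables(mu0, h):
    """Isophotal diameter (arcsec) and magnitude of a pure exponential disc."""
    x = (MU_LIM - mu0) / 1.0857             # isophotal radius in scale lengths
    m_tot = mu0 - 2.5 * math.log10(2.0 * math.pi * h * h)
    frac = 1.0 - (1.0 + x) * math.exp(-x)   # flux fraction inside the isophote
    return 2.0 * x * h, m_tot - 2.5 * math.log10(frac)

rng = random.Random(42)
sample = [draw_galaxy(rng) for _ in range(2000)]
selected = []
for mu, h in sample:
    d_iso, m_iso = observables(mu, h)
    if d_iso > 60.0 and m_iso < 12.0:       # TV-like selection cuts
        selected.append((mu, h))
print(len(selected), "of", len(sample), "mock galaxies pass the cuts")
```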
As we want to take into account selection effects in the null distribution, we cannot make a straightforward fit, but have to use Monte Carlo realisations of the null distribution and associated selection effects. We generate a grid of Monte Carlo unimodal distributions, each containing 10<sup>4</sup> galaxies (later renormalised to contain the same number of galaxies as the observed SBD), and each with different values of $`\mu _{K^{},0}^{}`$ and $`m_f`$ (with grid steps of 0.05 in $`\mu _{K^{},0}^{}`$ and 0.02 in $`m_f`$). The best fit unimodal distribution is determined by minimising the Cash statistic $`C=-2\sum _{i=1}^{n_{bins}}\mathrm{log}P_i(n,\lambda )`$, where $`P_i(n,\lambda )`$ is the Poissonian probability of observing $`n`$ galaxies in a bin which the unimodal model predicts should have a mean galaxy number of $`\lambda `$ \[Cash 1979\]. The Cash statistic has the advantage that the underlying distribution does not have to be Gaussian. As the model grid used to determine the best fit is itself a Monte Carlo realisation of the true underlying model grid, the best fit parameters will depend slightly on the model grid realisation used. This ultimately affects the significances that we will derive later. We have therefore run the code 10 times, and thus constructed 10 realisations of the entire model grid, and derived a best fit for each of these realisations. The best fit unimodal distribution typically has parameters around $`\mu _{K^{},0}^{}\simeq 16.35\pm 0.05`$ mag arcsec<sup>-2</sup> and a faint end slope $`m_f\simeq 0.05\pm 0.08`$, where these uncertainties are the RMS variation in the fit parameters. To estimate the significance of the departures of the observed SBD from the best fit model distribution we use the KS statistic $`d_{\mathrm{max}}`$, defined as the maximum distance between the cumulative distributions of the model and the data (i.e. the maximum distance between the solid and dashed lines in panels g and h of Fig. 1).
The $`d_{\mathrm{max},\mathrm{obs}}`$ between this best fit model and observed SBD is then measured. However, because we have fitted a single peaked distribution to the bimodal SBD, it is not possible to convert the measured $`d_{\mathrm{max},\mathrm{obs}}`$ into a probability in the standard way (e.g. Press et al. 1986).<sup>2</sup> (<sup>2</sup>Note that the binning of the data in a histogram also, strictly speaking, invalidates the use of a standard KS test, although if the binning is over a much smaller scale than the ‘structure’ in the SBD (as it is in this case), tests have shown that the effects on the final significances are negligible.) Therefore, it is necessary to use Monte Carlo simulations to convert the measured maximum distance into a probability. Accordingly, we generated 100 random simulated datasets from the best fit model, and subjected them to the same fitting and maximum distance measuring procedure as the data. In this way, we determined the expected distribution of maximum distances from the simulations. The fraction of simulations with maximum distances larger than $`d_{\mathrm{max},\mathrm{obs}}`$ then gives us the probability $`P`$ that the observed dataset was a statistical accident. Because the model unimodal distributions are Monte Carlo realisations of the true model distributions, this procedure (i.e. generating the 100 datasets and measuring the significance) was repeated for each of the 10 model grid realisations described above, to allow a more accurate measurement of the significance. The significance is defined as $`1-P`$, with $`P`$ defined as above. Significances for the total sample of 60 galaxies, and the isolated sample of 36 galaxies are given in Table 1. Quoted significances are the mean (along with the error in those means) of the 10 runs.
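The bootstrap conversion of a maximum distance into a significance can be sketched as follows, with a toy unimodal model standing in for the fitted grid (the unit-normal model and the sample sizes here are illustrative assumptions only):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
norm_cdf = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))  # toy 'best fit' model CDF

def dmax_vs_model(sample):
    """KS maximum distance between a sample's ECDF and the model CDF."""
    s = np.sort(sample)
    n = len(s)
    cdf = np.array([norm_cdf(x) for x in s])
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    return max((ecdf_hi - cdf).max(), (cdf - ecdf_lo).max())

def significance(observed, n_sim=100, n_gal=60):
    """1 - P, with P the fraction of simulated model datasets whose
    d_max exceeds the observed one (a 'statistical accident')."""
    d_obs = dmax_vs_model(observed)
    d_sim = np.array([dmax_vs_model(rng.normal(size=n_gal)) for _ in range(n_sim)])
    return 1.0 - np.mean(d_sim >= d_obs)

# A strongly bimodal toy dataset is rejected; a unimodal one is not
bimodal = np.concatenate([rng.normal(-2, 0.3, 38), rng.normal(2, 0.3, 22)])
print(significance(bimodal), significance(rng.normal(size=60)))
```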
Note that we also carried out this analysis in two different programming languages on two different computers using different sets of random number generators: the results were found to be indistinguishable to within the quoted errors. ## 4 Discussion From Table 1, it is clear that the bimodality shown by the total sample is not statistically significant. However, the isolated subsample shows a bimodality significant at the 96 per cent level. To understand if this high significance of the histogram for the isolated subsample reflects a true, astrophysical bimodality in the Ursa Major Cluster, it is necessary to understand the assumptions used to construct the isolated galaxy SBD. An obvious check is to see whether systematic effects in the data can produce a bimodality, e.g., because of bulge contamination in the fits. We re-fit the surface brightness profiles presented in Tully et al. \[Tully et al. 1996\] using the ‘marking the disc’ technique (the same technique that was used by TV), to check that there were no obvious personal biases in the fitting of the surface brightness profiles. Comparing our $`K^{\prime }`$ band central surface brightnesses with those of Tully et al. \[Tully et al. 1996\], we found a mean offset of +0.16 mag arcsec<sup>-2</sup>, and that 68 per cent of the central surface brightnesses compared to better than $`\pm `$ 0.25 mag arcsec<sup>-2</sup> (both with and without this mean offset applied). When corrected for inclination in exactly the same way as TV (assuming an infinitely thin disc; by subtracting $`2.5\mathrm{log}_{10}(b/a)`$, where $`b/a`$ is the minor to major axis ratio), the two versions of the isolated subsample SBD are virtually indistinguishable (see Fig. 1, panel c), and the significance of the re-fitted distribution is hardly different, decreasing slightly to 86 per cent. This suggests that there are no systematic personal biases in the fits TV use that might introduce a spurious bimodality.
Note that de Jong \[de Jong 1996a\] shows that the ‘marking the disc’ technique is unbiased, with $`\sim `$ 90 per cent of the central surface brightnesses accurate to 0.5 mag, compared to more sophisticated bulge-disc decomposition techniques. However, because there was no inclination cut applied to the Ursa Major sample, some of the inclination corrections that were applied to the $`K^{\prime }`$ band surface brightnesses were rather large: in $`\sim `$ 1/3 of the cases (in either the total or isolated samples) they exceeded 1 mag. However, large inclination corrections are highly uncertain for a number of reasons, even in $`K^{\prime }`$ band. In Fig. 2, we show only two of the many uncertainties that affect inclination corrections in $`K^{\prime }`$ band. First, we computed the corrected surface brightnesses, assuming i) an infinitely thin disc (solid circles) and ii) an intrinsic disc axial ratio $`q_0=0.2`$ (Holmberg 1958; open circles). These two values are connected with a dashed line in Fig. 2. The error bars in Fig. 2 represent the random uncertainty in the inclination correction due to axial ratio errors of 0.05. Together, these show that the inclination corrections for galaxies with axial ratios $`b/a`$ below 0.4 have considerable systematic uncertainties stemming from our poor knowledge of the intrinsic axial ratios of galaxies, coupled with the random uncertainty associated with measurement error. Consideration of these two uncertainties argues that the inclination corrections applied in this paper and by TV are likely to be lower limits. However, a number of effects conspire to make inclination corrections smaller, suggesting that the inclination corrections might actually be upper limits. Firstly, the effects of dust in nearly edge-on galaxies are non-negligible, even in the $`K^{\prime }`$ band, which could lower the inclination correction by as much as roughly 0.3 mag \[Tully et al. 1998\].
Secondly, averaging of the surface brightness profiles over elliptical annuli at high inclinations (as in this case) is likely to produce systematic underestimates of the real central surface brightness \[Huizinga 1994\]. Finally, at large inclinations, the assumption of a thin, uniform slab disc breaks down: vertical and radial structure in the galaxy will affect the inclination correction in non-trivial ways. Accordingly, in Fig. 2 we have also connected the inclination corrected surface brightness with the uncorrected surface brightness with a dotted line. In Fig. 2, we can see that uncertainties in these large inclination corrections may go some way towards filling in the gap in the SBD. As an example, we consider the four galaxies with the largest inclination corrections. UGC 6667, UGC 6894, NGC 4010 and NGC 4183 have $`>2`$ mag inclination corrections and uncorrected (reasonably high) $`K^{\prime }`$ surface brightnesses between 17 and 18.5 mag arcsec<sup>-2</sup>. The large corrections move them across the gap from the high surface brightness into the low surface brightness regime. However, if these corrections are at all uncertain, they may in reality lie in the gap of the $`K^{\prime }`$ band SBD. Moving even one or two of them into the gap in the isolated galaxy SBD would markedly decrease the significance of the bimodality in the isolated subsample. Certainly a few of the galaxies mentioned above suffer from problems that may seriously affect the value of the inclination correction (not because of the uncertainty in axial ratio, but because of uncertainties in e.g. the vertical and radial structure in the galaxy): NGC 4010 is clearly lopsided, with one half of the galaxy puffed up with respect to the other half, while NGC 4183 has a clear warp.
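The two corrections compared in Fig. 2 can be sketched as follows; the thick-disc case here assumes the standard Hubble-formula inclination and a transparent disc, which is only one of several possible recipes:

```python
import math

def inclination_deg(q, q0=0.0):
    """Inclination from the apparent axis ratio q = b/a via Hubble's formula
    cos^2 i = (q^2 - q0^2) / (1 - q0^2); q0 is the intrinsic flattening."""
    cos2 = max((q * q - q0 * q0) / (1.0 - q0 * q0), 0.0)
    return math.degrees(math.acos(math.sqrt(cos2)))

def face_on_correction(q, q0=0.0):
    """Magnitudes ADDED to mu_0 to correct a transparent disc to face-on:
    -2.5 log10(cos i).  q0 = 0 recovers the thin-disc -2.5 log10(b/a)."""
    cos2 = max((q * q - q0 * q0) / (1.0 - q0 * q0), 1e-12)
    return -2.5 * math.log10(math.sqrt(cos2))

for q in (0.9, 0.6, 0.4, 0.25):
    print(f"b/a={q:4.2f}: i={inclination_deg(q):5.1f} deg, "
          f"thin {face_on_correction(q):4.2f} mag, "
          f"q0=0.2 {face_on_correction(q, 0.2):4.2f} mag")
```

At b/a = 0.4 the thin-disc correction is almost exactly 1 mag, and the difference between the two assumptions grows rapidly toward edge-on, which is the systematic uncertainty indicated by the dashed lines in Fig. 2.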
Because of these serious uncertainties in the larger inclination corrections, and because there are no clear recipes for producing more realistic inclination corrections which take into account dust and the vertical and radial structure of the disc, the only fair thing to do is to omit the high-inclination galaxies from the SBD. Accordingly, the high-inclination galaxies were removed from the isolated subsample, leaving 23 galaxies with $`b/a`$ larger than 0.4, corresponding to an inclination of less than 66° or inclination corrections smaller than 1 mag. The resulting SBD is shown in Fig. 1, panel d: the bimodality is insignificant in this SBD, due primarily to small number statistics (Table 1). To check how large the sample size needs to be to detect significant bimodality in the low inclination isolated subsample, we simulated intrinsically bimodal datasets and tested them using the above procedure. Bimodal datasets were generated using two normal distributions with $`\sigma =0.5`$ mag and means of 17.15 and 19.85 mag arcsec<sup>-2</sup>, with the high surface brightness peak containing on average 63 per cent of the galaxies (thus resembling the TV distribution). These simulations showed that in order to detect a bimodality at the 95 (99) per cent level in the low inclination sample at least 4/5 of the time, the sample size needs to be increased to around 50 (100) galaxies. ## 5 Conclusions We have re-analysed Tully & Verheijen’s (1997) bimodal SBD, using a Kolmogorov-Smirnov style of test to estimate the likelihood that a single-peaked distribution would be able to reproduce the bimodality in the Ursa Major Cluster. * The total sample of 60 galaxies is inconsistent with a single-peaked distribution at only the 58 per cent level. * However, the isolated subsample of 36 galaxies is significantly bimodal: it is inconsistent with the null distribution at the 96 per cent level.
* We re-fit the $`K^{\prime }`$ band surface brightness profiles of the Ursa Major sample, and found that this re-fit made relatively little difference, with the isolated subsample retaining a reasonable significance of 86 per cent. * However, we argue that large inclination corrections are uncertain even in the near-infrared, placing the reality of the gap between high and low surface brightness galaxies in some doubt. * When the galaxies with inclination corrections $`>1`$ mag are removed from the isolated subsample, the significance of the 23 galaxy subsample drops to only 40 per cent. * Assuming that the low inclination, isolated galaxy SBD is truly bimodal, in order to increase the significance to 95 (99) per cent, it is necessary to increase the low inclination isolated galaxy sample size by around a factor of two (four). To summarise, if inclination corrections greater than 1 mag can be trusted, then there is reasonable evidence that the Ursa Major Cluster SBD is bimodal. If, however, these large inclination corrections are in some doubt, then the SBD lacks a sufficient number of galaxies to argue convincingly against a unimodal SBD. Either way, to convincingly demonstrate at the 99 per cent level that the near-infrared spiral galaxy SBD is bimodal will require an independent dataset to be used with around a factor of four more low inclination, isolated galaxies than are included in the Ursa Major Cluster sample. ## Acknowledgements We would like to thank Marc Verheijen for providing tables of data in electronic form. We would also like to thank the referee for their comments on the manuscript. We would like to thank Greg Bothun, Roelof Bottema, Richard Bower, Roelof de Jong, Rob Kennicutt, John Lucey and Stacy McGaugh for useful discussions and comments on the manuscript. EFB would like to thank the Isle of Man Education Department for their generous support. This project made use of STARLINK facilities in Durham.
# Classical Nambu-Goldstone fields CFNUL/99-02 DFAQ/99/TH/03 hep-ph/9908211 ## Acknowledgments We would like to thank Sasha Dolgov for useful discussions. L. B. acknowledges the support of FCT under the grant PESO/P/PRO/1250/98.
# Observation of CP Violation in 𝐾_𝐿→𝜋⁺⁢𝜋⁻⁢𝑒⁺⁢𝑒⁻ Decays The KTeV E799 experiment at Fermi National Accelerator Laboratory recently reported the first observation of the four body decay mode $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$. Based on 2% of the data, a branching ratio of $`3.2\pm 0.6(\mathrm{stat})\pm 0.4(\mathrm{syst})\times 10^{-7}`$ was measured. In this paper, we report an analysis of the entire KTeV E799 data from which the $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$ signal (shown in Fig. 1) of 1811 events above background was obtained after the analysis cuts described below. We observed in these $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$ data a CP-violating asymmetry in the CP- and T-odd variable $`\mathrm{sin}\varphi \mathrm{cos}\varphi `$, $$A=\frac{N_{\mathrm{sin}\varphi \mathrm{cos}\varphi >0.0}-N_{\mathrm{sin}\varphi \mathrm{cos}\varphi <0.0}}{N_{\mathrm{sin}\varphi \mathrm{cos}\varphi >0.0}+N_{\mathrm{sin}\varphi \mathrm{cos}\varphi <0.0}}$$ (1) where $`\varphi `$ is the angle between the $`e^+e^{-}`$ and $`\pi ^+\pi ^{-}`$ planes in the $`K_L`$ cms. This asymmetry implies, with mild assumptions, time reversal symmetry violation as well. The quantity $`\mathrm{sin}\varphi \mathrm{cos}\varphi `$ is given by $`(\widehat{n}_{ee}\times \widehat{n}_{\pi \pi })\cdot \widehat{z}\,(\widehat{n}_{ee}\cdot \widehat{n}_{\pi \pi })`$, where the $`\widehat{n}`$’s are the unit normals and $`\widehat{z}`$ is the unit vector in the direction of the $`\pi \pi `$ in the $`K_L`$ cms. The observed asymmetry in $`\mathrm{sin}\varphi \mathrm{cos}\varphi `$ shown in Fig. 2 was $`23.3\pm 2.3(\mathrm{stat})`$% before corrections. Inspection of Fig. 2 shows that the asymmetry between the bins near $`\mathrm{sin}\varphi \mathrm{cos}\varphi =\pm 0.5`$ is considerably larger. As discussed below, this cannot be explained by asymmetries due to either the spectrometer acceptance or detector elements. Using the model of Ref.
to correct for regions of $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$ phase space outside the acceptance of the KTeV spectrometer (which have small asymmetry), an asymmetry integrated over the entire phase space of the $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$ mode of $`13.6\pm 2.5(\mathrm{stat})`$% was obtained, the largest such CP- (and T-) violating effect yet observed. In comparison, CPLEAR recently reported a $`0.66\pm 0.13(\mathrm{stat})`$% T-violating asymmetry between $`K^0\to \overline{K}^0`$ and $`\overline{K}^0\to K^0`$ transition rates. The $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$ data were accumulated during the ten weeks of E799 operation. A proton beam with intensity typically in the range $`3.0`$–$`3.5\times 10^{12}`$ protons per 23 second spill incident at an angle of 4.8 mr on a BeO target was employed to produce two nearly parallel $`K_L`$ beams for E799. The configuration of the KTeV E799 spectrometer consisted of a vacuum decay region, a magnetic spectrometer with four drift chambers, photon vetoes, eight transition radiation chambers, a CsI electromagnetic calorimeter, and a muon detector. A total of $`2.7\times 10^{11}`$ $`K_L`$ decays were accumulated during the E799 run. Details of the KTeV detector are given in Ref. . The KTeV four track trigger selected $`1.3\times 10^8`$ events. Candidate $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$ events were extracted from these triggers by requiring events with four tracks that passed track quality cuts and had a common vertex with a good vertex $`\chi ^2`$. To be designated as $`e^\pm `$, two of the tracks were required to have opposite charges and $`0.95\le \mathrm{E}/\mathrm{p}\le 1.05`$ where E was the energy deposited by the track in the CsI and p was the momentum obtained from magnetic deflection. To be consistent with a $`\pi ^\pm `$ pair, the other two tracks were required to have $`\mathrm{E}/\mathrm{p}\le 0.90`$ and opposite charges.
To reduce backgrounds arising from other types of $`K_L`$ decays in which decay products have been missed, the candidate $`\pi ^+\pi ^{-}e^+e^{-}`$ were required to have the squared transverse momentum $`P_t^2`$ of the four tracks relative to the direction of the $`K_L`$ less than $`0.6\times 10^{-4}\mathrm{GeV}^2/c^2`$. This cut was 91.8% efficient for retaining $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$. The major background to the $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$ mode was $`K_L\to \pi ^+\pi ^{-}\pi _D^0`$ where $`\pi _D^0`$ was a Dalitz decay, $`\pi ^0\to \gamma e^+e^{-}`$, in which the photon was not observed in the CsI calorimeter or the photon vetos. To reduce this background, all $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$ candidate events were interpreted as $`K_L\to \pi ^+\pi ^{-}\pi _D^0`$ decays. Under this assumption, the momentum squared $`P_{\pi ^0}^2`$ of the assumed $`\pi ^0`$ can be calculated in the frame in which the momentum of $`\pi ^+\pi ^{-}`$ is transverse to the $`K_L`$ direction. $`P_{\pi ^0}^2`$ was mostly greater than zero for $`K_L\to \pi ^+\pi ^{-}\pi _D^0`$ decays except for cases where finite detector resolution produces a $`P_{\pi ^0}^2\le 0`$. In contrast, most of the $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$ decays had $`P_{\pi ^0}^2\le 0`$. The requirement that all $`\pi ^+\pi ^{-}e^+e^{-}`$ had $`(P_{\pi ^0})^2\le -0.00625\mathrm{GeV}^2/c^2`$ minimized the $`K_L\to \pi ^+\pi ^{-}\pi _D^0`$ background while retaining 94.8% of the signal. Other backgrounds were relatively minor. The largest of these was due to $`K_L\to \pi ^+\pi ^{-}\gamma `$ decays in which the photon converted in the material of the spectrometer. These events, which reconstructed to the $`K_L`$ mass and survived the $`P_t^2`$ and $`P_{\pi ^0}^2`$ cuts, were eliminated by requiring $`M_{e^+e^{-}}\ge 2.0\mathrm{MeV}/c^2`$. The $`M_{e^+e^{-}}`$ cut retained 95.3% of the $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$ events.
A third background due to accidental coincidence of two $`K_L\to \pi ^\pm e^{\mp }\nu `$ decays ($`K_{e_3}`$) whose decay vertices overlap was minimized by track and vertex $`\chi ^2`$ cuts. A fourth background due to $`\mathrm{\Xi }^0\to \mathrm{\Lambda }\pi _D^0`$ where the proton from the $`\mathrm{\Lambda }`$ decay was misidentified as a $`\pi ^+`$ was made negligible by $`K^0`$ momentum and vertex $`\chi ^2`$ cuts. Finally, a fifth background due to $`K_S\to \pi ^+\pi ^{-}e^+e^{-}`$ decays was eliminated by requiring the energy of the $`\pi \pi ee`$ be $`\le `$ 200 GeV. The final requirement of the $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$ events was $`492\mathrm{MeV}/c^2\le M_{\pi \pi ee}\le 504\mathrm{MeV}/c^2`$. The magnitude of the background under the $`K_L`$ peak was determined by a fit to the $`\pi ^+\pi ^{-}e^+e^{-}`$ mass distribution outside the signal region. From this fit, a $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$ signal of $`1811\pm 43(\mathrm{stat})`$ events above a background of $`45\pm 11`$ events was obtained in the signal region. The 45 event background was composed of residual $`K_L\to \pi ^+\pi ^{-}\pi _D^0`$ (36 events), $`K_L\to \pi ^+\pi ^{-}\gamma `$ (4.0 events), overlapping $`K_{e_3}`$ (3.5 events), cascade decays (1.3 events) and $`K_S\to \pi ^+\pi ^{-}e^+e^{-}`$ (0.2 events). Possible sources of false asymmetries were considered, including those due to backgrounds and asymmetries in the detector. To check for detector asymmetries, the copious $`K_L\to \pi ^+\pi ^{-}\pi _D^0`$ decay mode, which has a similar topology to $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$ except for the presence of an extra photon in the CsI, was used. This mode is expected to have no asymmetry in the $`\varphi `$ distribution formed using the $`\pi ^+\pi ^{-}e^+e^{-}`$. In a sample of approximately five million Dalitz decays, an asymmetry of $`0.02\pm 0.05`$% was observed.
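The observable itself is simple to compute from cms 3-momenta; a sketch (the overall sign depends on axis conventions that the text does not fully specify):

```python
import numpy as np

def sin_phi_cos_phi(p_ep, p_em, p_pip, p_pim):
    """sin(phi)cos(phi) = (n_ee x n_pipi).z (n_ee . n_pipi) from K_L-cms
    3-momenta, with z along the pi+ pi- pair and n the plane normals."""
    n_ee = np.cross(p_ep, p_em)
    n_pp = np.cross(p_pip, p_pim)
    n_ee = n_ee / np.linalg.norm(n_ee)
    n_pp = n_pp / np.linalg.norm(n_pp)
    z = (p_pip + p_pim) / np.linalg.norm(p_pip + p_pim)
    return float(np.dot(np.cross(n_ee, n_pp), z) * np.dot(n_ee, n_pp))

def asymmetry(values):
    """A = (N_{>0} - N_{<0}) / (N_{>0} + N_{<0}) of equation (1)."""
    v = np.asarray(values)
    return (np.sum(v > 0) - np.sum(v < 0)) / (np.sum(v > 0) + np.sum(v < 0))
```

For planes at 45° the observable reaches its extremum of magnitude 1/2, and it vanishes for parallel or perpendicular planes; the asymmetry of equation (1) is then just the signed population imbalance between the two hemispheres.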
The small background under the $`K_L`$ was determined not to contribute significantly to the asymmetry in the $`K_L`$ mass region since the asymmetry of the sideband regions below and above the $`K_L`$ mass was measured to be $`3.1\pm 5.1`$% and $`2.3\pm 9.2`$% respectively. To perform an acceptance correction for loss of events due to spectrometer geometry, trigger, reconstruction efficiency, and analysis cuts, we modeled the $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$ decays and simulated the response of the KTeV detector elements. The $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$ decay mode is expected to proceed via both CP-violating and conserving amplitudes and exhibit both direct and indirect CP-violation. The dominant CP-violating amplitude is indirect and proceeds via an initial decay of the $`K_L`$ into $`\pi ^+\pi ^{-}`$ followed by one of the pions undergoing inner bremsstrahlung with the resulting photon internally converting to an $`e^+e^{-}`$ pair. The dominant CP-conserving amplitude is the emission of an M1 photon at the $`\pi ^+\pi ^{-}`$ decay vertex followed by internal conversion. The interference between the two amplitudes shown in Fig. 3a and b respectively generates the $`\varphi `$ asymmetry (Monte Carlo simulations with the phases of the bremsstrahlung and M1 amplitudes set so that no interference takes place exhibit no $`\varphi `$ asymmetry). Using this model, the angular distribution in $`\varphi `$ is $$\frac{d\mathrm{\Gamma }}{d\varphi }=\mathrm{\Gamma }_1\mathrm{cos}^2\varphi +\mathrm{\Gamma }_2\mathrm{sin}^2\varphi +\mathrm{\Gamma }_3\mathrm{sin}\varphi \mathrm{cos}\varphi $$ (2) where the T-odd $`\mathrm{\Gamma }_3\mathrm{sin}\varphi \mathrm{cos}\varphi `$ term contains the interference between the M1 and bremsstrahlung amplitudes. Two other processes that contribute small amounts to the $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$ decay were taken into account: the indirect CP-violating E1 photon emission (Fig. 3c) and the CP-conserving $`K^0`$ charge radius process (Fig.
3d) in which the $`K_L\to K_S`$ via emission of a photon. The Monte Carlo simulation incorporated the amplitudes shown in Fig. 3a-d. To obtain agreement with the virtual photon energy spectrum $`E_\gamma ^{*}=E_{e^+}+E_{e^{-}}`$ of the data (Fig. 4a), a form factor was required in the M1 virtual photon emission amplitude of Fig. 3b. We turn now to a detailed discussion of this form factor. Such a form factor has been required to explain the energy spectrum of the M1 photon emitted in the $`K_L\to \pi ^+\pi ^{-}\gamma `$ decay. In order to incorporate a similar form factor, we have modified the coupling $`g_{M1}`$ of the M1 amplitude, including a form factor $$F=\stackrel{~}{g}_{M1}[1+\frac{a_1/a_2}{(M_\rho ^2-M_K^2)+2M_K(E_{e^+}+E_{e^{}})}]$$ (3) similar to that used to describe $`K_L\to \pi ^+\pi ^{-}\gamma `$ where $`M_\rho `$ is the mass of the $`\rho `$ meson (770 MeV/$`c^2`$) and the photon energy has been replaced by $`E_{e^+}+E_{e^{-}}`$. The ratio $`a_1`$/$`a_2`$ and $`|\stackrel{~}{g}_{M1}|`$ were determined by fitting the $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$ data using the likelihood function $$L(a_1/a_2,\stackrel{~}{g}_{M1})=\frac{\mathrm{\Pi }_{k=1}^NP_M^k(a_1/a_2,\stackrel{~}{g}_{M1})P_a^k}{(\int _{ps}P_M(a_1/a_2,\stackrel{~}{g}_{M1})P_a(a_1/a_2,\stackrel{~}{g}_{M1}))^N}$$ (4) The probability $`P_M^k`$ of a given event is based on the $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$ matrix element and is a function of the five independent variables: $`\varphi `$, $`\theta _{e^+}`$ (the angle between the $`e^+`$ and the $`\pi ^+\pi ^{-}`$ direction in the $`e^+e^{-}`$ cms), $`\theta _{\pi ^+}`$ (the angle between the $`\pi ^+`$ and the $`e^+e^{-}`$ direction in the $`\pi ^+\pi ^{-}`$ cms), $`M_{\pi ^+\pi ^{-}}`$, and $`M_{e^+e^{-}}`$. It is calculated using the particular values of the parameters $`a_1`$/$`a_2`$ and $`|\stackrel{~}{g}_{M1}|`$ and nominal values from the PDG or Ref. for the other model parameters.
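Numerically, the fitted form factor falls steeply with the virtual photon energy. A sketch using the best-fit values reported in this paper (note that the published sign of $`a_1`$/$`a_2`$ is negative; the PDG masses are assumptions of this sketch):

```python
M_RHO, M_K = 0.770, 0.4977     # GeV/c^2, PDG rho and K^0 masses
A1_OVER_A2 = -0.720            # GeV^2/c^2, KTeV best fit (negative sign)
G_M1 = 1.35                    # best-fit |g~_M1|

def form_factor(e_star):
    """|F| of equation (3) vs E* = E_e+ + E_e- in the K_L cms (GeV)."""
    denom = (M_RHO ** 2 - M_K ** 2) + 2.0 * M_K * e_star
    return abs(G_M1 * (1.0 + A1_OVER_A2 / denom))

for e in (0.05, 0.10, 0.15, 0.20):
    print(f"E* = {e:4.2f} GeV: |F| = {form_factor(e):5.3f}")
```

In this parametrisation the quoted spectrum-averaged value of 0.84, compared below with the constant |g_M1| = 0.76, corresponds to E* near 0.1 GeV.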
The likelihood of an event is the product of $`P_M^k`$ times $`P_a^k`$, the acceptance times efficiency of the event, normalized by the product of $`P_M`$ and $`P_a`$ integrated over the entire phase space (ps). The result of the likelihood calculation is shown in Fig. 4b. The maximum of the likelihood occurs at $`a_1`$/$`a_2=-0.720\pm 0.028\mathrm{GeV}^2/c^2`$ and $`|\stackrel{~}{g}_{M1}|=1.35_{-0.17}^{+0.20}`$ where the errors represent the excursions of the likelihood function at the point where the log of the likelihood has decreased by one half unit (39% CL). The $`E_\gamma ^{*}`$ spectrum predicted by the Monte Carlo with these parameters is shown in Fig. 4a, together with the prediction for a constant $`|g_{M1}|`$. Figure 2 shows the good agreement obtained using these parameters between the observed $`\varphi `$ and $`\mathrm{sin}\varphi \mathrm{cos}\varphi `$ angular distributions and the Monte Carlo. When this form factor is included in the M1 amplitude, the constant $`|g_{M1}|=0.76`$ used in Ref. can no longer be directly compared to the new $`|\stackrel{~}{g}_{M1}|`$ obtained in the likelihood fit. Rather, the average of the form factor $`F`$ of equation (3) over the range of $`E_{e^+}+E_{e^{-}}`$ must be compared with the constant $`|g_{M1}|`$ value of $`0.76\pm 0.11`$. An average for $`F`$ of $`0.84\pm 0.10`$ was found, consistent within errors with 0.76. The branching ratio calculated using the form factor was increased by 5.7% compared with that obtained using $`|g_{M1}|`$ = 0.76. Using the acceptance obtained from the Monte Carlo generated with the maximum likelihood values of $`|\stackrel{~}{g}_{M1}|`$ and $`a_1`$/$`a_2`$, the asymmetry of the acceptance corrected $`\mathrm{sin}\varphi \mathrm{cos}\varphi `$ distribution is found to be $`13.6\pm 2.5(\mathrm{stat})`$%. The contours of acceptance corrected asymmetry shown superimposed on the likelihood contours of $`a_1`$/$`a_2`$ and $`|\stackrel{~}{g}_{M1}|`$ in Fig.
4 were determined from the $`\mathrm{sin}\varphi \mathrm{cos}\varphi `$ distribution of the data, corrected for acceptances determined using the particular $`a_1`$/$`a_2`$ and $`|\stackrel{~}{g}_{M1}|`$ values. We have considered whether the asymmetry was due to final state interactions. Because of the symmetry of the $`\pi ^+\pi ^{-}e^+e^{-}`$ state, electromagnetic or strong final state interactions, while they can modify the $`\varphi `$ distribution, cannot generate a T-odd asymmetry. The interference between the bremsstrahlung (I=0 $`\pi ^+\pi ^{-}`$) and the M1 (I=1 $`\pi ^+\pi ^{-}`$) amplitudes, which is responsible for the CP violating asymmetry, depends on the difference of the I=0 and I=1 strong interaction phase shifts but this difference can only modulate, not generate, the asymmetry. Systematic errors on $`a_1`$/$`a_2`$ and $`|\stackrel{~}{g}_{M1}|`$ due to analysis cuts, resolutions and variations of parameters of the Monte Carlo were studied. By varying each analysis cut over a reasonable range and observing the variation of $`a_1`$/$`a_2`$ and $`|\stackrel{~}{g}_{M1}|`$, $`a_1`$/$`a_2`$ and $`|\stackrel{~}{g}_{M1}|`$ systematic errors of $`\pm 0.008\mathrm{GeV}^2/c^2`$ and $`\pm 0.04`$ were obtained. To determine the systematic errors due to resolution, resolution functions in the five variables were estimated by comparing generated and reconstructed Monte Carlo events. Using these functions to smear each independent variable for each data event, one thousand passes through the 1811 $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$ signal events were made. The one thousand smeared data samples were refit, and $`a_1`$/$`a_2`$ and $`|\stackrel{~}{g}_{M1}|`$ were determined for each of the samples. The variation of $`a_1`$/$`a_2`$ and $`|\stackrel{~}{g}_{M1}|`$ for these samples resulted in errors of $`\pm 0.002\mathrm{GeV}^2/c^2`$ and $`\pm 0.01`$ for $`a_1`$/$`a_2`$ and $`|\stackrel{~}{g}_{M1}|`$.
The systematic errors in $`a_1`$/$`a_2`$ and $`|\stackrel{~}{g}_{M1}|`$ due to uncertainties in the magnitude and phase of $`\eta _{+-}`$, and the uncertainties in $`|g_{E1}|`$ and $`|g_{CR}|`$, estimated by varying the magnitude of the ratio of $`|g_{E1}|`$ to $`|\stackrel{~}{g}_{M1}|`$ from 0.0 to 0.05 (nominal 0.038) and $`|g_{CR}|`$ from 0.10 to 0.17 (nominal 0.15), resulted in systematic errors in $`a_1`$/$`a_2`$ and $`|\stackrel{~}{g}_{M1}|`$ of $`\pm 0.004\mathrm{GeV}^2/c^2`$ and $`\pm 0.01`$ respectively. All systematic errors in $`a_1`$/$`a_2`$ and $`|\stackrel{~}{g}_{M1}|`$ were added in quadrature to obtain an overall error of $`\pm 0.009\mathrm{GeV}^2/c^2`$ and $`\pm 0.04`$ in $`a_1`$/$`a_2`$ and $`|\stackrel{~}{g}_{M1}|`$ respectively. The systematic error in the $`\varphi `$ asymmetry due to variations in the corrections for acceptance arising from the systematic errors of the $`a_1`$/$`a_2`$ and $`|\stackrel{~}{g}_{M1}|`$ and one sigma uncertainties of other parameters of the MC model discussed above was determined to be $`\pm 0.7`$%. The variation in asymmetry due to analysis cuts was also estimated to be $`\pm 0.7`$%. Finally, the systematic error due to resolution effects was determined to be $`\pm 0.7`$% using generated tracks from the Monte Carlo rather than reconstructed tracks in the analysis. Adding in quadrature the systematic errors from these three sources, a total systematic error of $`\pm 1.2`$% was obtained for the acceptance corrected asymmetry of the $`\mathrm{sin}\varphi \mathrm{cos}\varphi `$ distribution. In conclusion, the KTeV experiment has observed a CP-violating asymmetry in the distribution of the T-odd angle $`\varphi `$ in $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$ decays. This effect, the largest CP violation effect yet observed and the first in an angular variable, is T-violating barring possible exotic phenomena such as direct CPT violation in the $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$ matrix element.
The magnitude of the acceptance corrected asymmetry is $`13.6\pm 2.5(\mathrm{stat})\pm 1.2(\mathrm{syst})`$%, consistent with the theoretically expected asymmetry. In addition, the M1 photon emission amplitude requires a vector form factor as given in equation (3) with $`a_1`$/$`a_2=-0.720\pm 0.028(\mathrm{stat})\pm 0.009(\mathrm{syst})\mathrm{GeV}^2/c^2`$ and $`|\stackrel{~}{g}_{M1}|=1.35_{-0.17}^{+0.20}(\mathrm{stat})\pm 0.04(\mathrm{syst})`$. The rich structure of the $`K_L\to \pi ^+\pi ^{-}e^+e^{-}`$ mode has provided a new opportunity for the study of novel CP- and T-violation effects. In the future, it may be possible to use this mode to search for direct CP violation and more exotic phenomena. We thank Fermilab, the U.S. Department of Energy, the U.S. National Science Foundation, and the Ministry of Education and Science of Japan for their support. To whom correspondence should be addressed. Electronic address: cox@uvahep.phys.virginia.edu On leave from C.P.P. Marseille/C.N.R.S., France
# Acknowledgements ## Acknowledgements I am very much indebted to Professors Christoffer Koehler and Patricio Letelier for helpful discussions on the subject of this paper. Thanks are also due to Universidade do Estado do Rio de Janeiro (UERJ) for financial support.
# Instantaneous Normal Mode Analysis of Supercooled Water ## Abstract We use the instantaneous normal mode approach to provide a description of the local curvature of the potential energy surface of a model for water. We focus on the region of the phase diagram in which the dynamics may be described by the mode-coupling theory. We find, surprisingly, that the diffusion constant depends mainly on the fraction of directions in configuration space connecting different local minima, supporting the conjecture that the dynamics are controlled by the geometric properties of configuration space. Furthermore, we find an unexpected relation between the number of basins accessed in equilibrium and the connectivity between them. The study of the dynamics in supercooled liquids has received great interest in recent years due to possibilities opened by novel experimental techniques, detailed theoretical predictions, and by the possibility of following the microscopic dynamics via computer simulation. One important theoretical approach developed recently is the ideal mode coupling theory (MCT), which quantitatively predicts the time evolution of correlation functions and the dependence on temperature $`T`$ of characteristic correlation times. Unfortunately, the temperature region in which MCT is able to make predictions for the long time dynamics is limited to weakly supercooled states. In parallel with the development of MCT, theoretical work has called attention to thermodynamic approaches to the glass transition, and to the role of configurational entropy in the slowing down of dynamics. These theories, which build on ideas put forward some time ago, stress the relevance of the topology of the potential energy surface (PES) explored in supercooled states. Detailed studies of the PES may provide insights on the slow dynamics of liquids, and new ideas for extending the present theories to the deep supercooling regime.
The instantaneous normal mode (INM) approach uses the eigenvectors (normal modes) and eigenvalues of the Hessian matrix (the second derivative of the potential energy) to provide an “instantaneous” representation of the PES in the neighborhood of an equilibrium phase space point. Examples of INM analysis have appeared for a few representative liquids . In this Letter we present INM calculations for a rigid water molecule model, the SPC/E potential , for several temperatures and densities in supercooled states. The reasons for selecting such a system are twofold: (i) The SPC/E model has been studied in detail, and it has been shown that its dynamics follows closely the predictions of MCT . Furthermore, the ($`\rho `$, $`T`$) dependence of the configurational entropy has been calculated, and shown to correlate with the dynamical behavior . (ii) The SPC/E model has a maximum of the diffusion constant $`D`$ under pressure along isotherms, as observed experimentally . The maxima in $`D`$ can be used as sensitive probes for testing proposed relations between the change in topology of configuration space and the change in $`D`$. We analyze recent simulations for six densities between $`0.95`$ and $`1.3`$ g/cm<sup>3</sup> and five temperatures for each density. For each state point, we extract 100 equally-spaced configurations, from which we calculate and diagonalize the Hessian, using the center of mass translations and the rotations about the three principal axes of inertia as molecular coordinates . For each configuration, we classify the imaginary modes into two categories: (i) shoulder modes and (ii) double-well modes, according to the shape of the PES along the corresponding eigenvector . The shoulder modes are modes for which the negative curvature appears as a result of a local anharmonicity; the double-well modes include only the directions in configuration space which connect two adjacent minima.
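As a concrete illustration of this procedure, the sketch below (Python, on a toy one-dimensional potential rather than the SPC/E force field) builds the Hessian by finite differences, diagonalizes it, and classifies each unstable mode as double-well or shoulder by counting local minima of the potential profile along its eigenvector. The toy potential, the configuration, and all function names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def hessian(U, x, h=1e-4):
    """Finite-difference Hessian of the potential U at configuration x."""
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            xpp = x.copy(); xpp[i] += h; xpp[j] += h
            xpm = x.copy(); xpm[i] += h; xpm[j] -= h
            xmp = x.copy(); xmp[i] -= h; xmp[j] += h
            xmm = x.copy(); xmm[i] -= h; xmm[j] -= h
            H[i, j] = (U(xpp) - U(xpm) - U(xmp) + U(xmm)) / (4 * h * h)
    return H

def classify_inm(U, x, s_max=3.0, n_s=241):
    """Fractions (f_u, f_dw, f_sh) of unstable, double-well and shoulder modes:
    each unstable mode is classified by scanning U along its eigenvector and
    counting interior local minima of the one-dimensional profile."""
    w, v = np.linalg.eigh(hessian(U, x))
    s = np.linspace(-s_max, s_max, n_s)
    n_dw = n_sh = 0
    for k in np.where(w < 0)[0]:
        prof = np.array([U(x + si * v[:, k]) for si in s])
        minima = np.sum((prof[1:-1] < prof[:-2]) & (prof[1:-1] < prof[2:]))
        if minima >= 2:
            n_dw += 1        # direction connects two adjacent minima
        else:
            n_sh += 1        # negative curvature from local anharmonicity only
    return (n_dw + n_sh) / x.size, n_dw / x.size, n_sh / x.size

# Toy "instantaneous" configuration: particles on a cosine substrate with springs
def U(x):
    return np.sum(np.cos(2 * np.pi * x)) + 0.5 * np.sum((np.diff(x) - 1.0) ** 2)

rng = np.random.default_rng(1)
x0 = np.arange(12.0) + 0.1 * rng.standard_normal(12)
f_u, f_dw, f_sh = classify_inm(U, x0)
print(f_u, f_dw, f_sh)
```

In a real INM calculation the Hessian would come analytically from the force field and the scan would follow the mass-weighted eigenvectors, but the classification logic is the same.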
The classification of the modes is done by studying the shape of the PES along the eigenvector corresponding to the mode, i.e. beyond the point in configuration space where such modes have been calculated. In this respect, the potential energy along the eigenvector direction may be different from the actual curvilinear direction described by the time evolution of the eigenvector under consideration. Furthermore, some modes classified as double-well may be related to intra-basin motions and therefore may not be relevant to describe diffusivity. In the present study we show — using the $`D`$ maxima line as a guide — that these spurious features are negligible. Fig. 1 shows the density dependence of the fractions of imaginary (unstable) modes $`f_u`$, double-well modes $`f_{dw}`$, and of shoulder modes $`f_{sh}`$, where $`f_u=f_{dw}+f_{sh}`$; we also show $`D`$ for the same state points. The close correlation of $`f_{dw}`$ and $`D`$ is striking. At $`\rho \approx 1.15`$ g/cm<sup>3</sup>, $`D`$ has a maximum; at this $`\rho `$, the increase of the diffusion constant caused by the progressive disruption of the hydrogen bond network with increasing density is balanced by the slowing down of the dynamics due to increased packing. At the same density, $`f_u`$ and $`f_{dw}`$ also show maxima, supporting the view that these quantities are a good indicator of the molecular mobility. There is also a weak maximum in $`f_{sh}`$, but at a $`\rho <1.15`$ g/cm<sup>3</sup>. The presence of maxima in both $`D`$ and $`f_{dw}`$ at the same density suggests that, for the SPC/E potential, $`f_{dw}`$ is directly related to $`D`$. The relation between $`D`$ and $`f_{dw}`$ is shown in Fig. 2 for all the studied isochores. We note that $`D`$ in this system is a monotonic function of $`f_{dw}`$ and that all points fall on the same master curve, notwithstanding the large range of $`D`$ values analysed.
Thus, surprisingly, in SPC/E water, the knowledge of the fraction of directions in configuration space leading to a different basin, $`f_{dw}`$, is sufficient to determine the value of $`D`$. We also note from Fig. 2 that $`D`$ vanishes, but at a small non-zero value of $`f_{dw}\approx 0.007`$ (vertical arrow) , suggesting that indeed a small number of spurious double-well modes, related to intrabasin motion , are still included in our classification. The reduction of mobility on cooling in the studied $`(\rho ,T)`$ range – where MCT provides a description of the dynamics – appears related to the geometrical properties of the PES, i.e. the system mobility is reduced because the number of directions connecting different local minima (needed to explore the configuration space freely) is decreasing . Hence, the observed reduced mobility is mainly related to the geometry of the PES, i.e. it is “entropic” in origin. MCT seems to be able to describe the entropic slowing down of the dynamics associated with the vanishing of $`f_{dw}`$. This is consistent with the general consensus that the missing decay channels for the correlations, responsible for the failure of MCT at very low $`T`$, are activated processes. To make closer contact with MCT, we compare in Fig. 3 the density dependence of the MCT critical temperature $`T_{MCT}`$ with the $`T`$ at which the fraction of double-well modes goes to zero, as well as with the $`T`$ at which $`f_{dw}`$ reaches the estimated asymptotic value of $`0.007`$ , to take into account the small overcounting intrinsic to the mode classification. We observe that the $`f_{dw}=0`$ locus tracks the $`T_{MCT}`$ line closely, and that the $`f_{dw}=0.007`$ locus nearly coincides with it. The results presented here suggest that the liquid dynamics in the MCT region of SPC/E water is controlled by the average connectivity in configuration space.
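The extrapolation behind the quoted intercept can be mimicked numerically. The sketch below (Python, on synthetic data generated from an assumed power law, not the SPC/E results) grid-searches for the offset $f_0$ at which $\log D$ becomes linear in $\log(f_{dw}-f_0)$, which is one simple way to estimate where $D$ vanishes at non-zero $f_{dw}$.

```python
import numpy as np

# Synthetic master curve D = A*(f_dw - f0)^b with a small intercept f0, mimicking
# the observed vanishing of D at f_dw ~ 0.007 (numbers illustrative, not SPC/E data)
f0_true, A, bexp = 0.007, 3.0, 1.5
f_dw = np.linspace(0.010, 0.080, 20)
D = A * (f_dw - f0_true) ** bexp

def fit_intercept(f_dw, D, f0_grid):
    """Grid search for f0: for each trial value, fit log D linearly against
    log(f_dw - f0) and keep the trial with the smallest residual."""
    best_f0, best_res = None, np.inf
    for f0 in f0_grid:
        if f0 >= f_dw.min():
            continue                      # trial must keep all arguments positive
        _, res, *_ = np.polyfit(np.log(f_dw - f0), np.log(D), 1, full=True)
        r = res[0] if res.size else 0.0
        if r < best_res:
            best_f0, best_res = f0, r
    return best_f0

print(fit_intercept(f_dw, D, np.linspace(0.0, 0.009, 91)))  # recovers ~0.007
```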
On the other hand, for the same model, the ($`\rho `$, $`T`$) dependence of the configurational entropy $`S_{\text{conf}}`$ – which, in the Stillinger-Weber formalism, can be defined as the logarithm of the number of different basins $`\mathrm{\Omega }`$ in configuration space – has also been calculated and shown to correlate with $`D`$. Since no a priori relation is expected between the connectivity of the local minima and their number, we find the observation of such a relation particularly interesting. For this reason, we show in Fig. 4 a “parametric plot” (in the parameter $`D`$) of $`f_{dw}`$ versus $`S_{\text{conf}}`$ for all studied densities. Fig. 4 shows a linear relation between $`\mathrm{ln}(f_{dw})`$ and $`S_{\text{conf}}`$, i.e. between $`f_{dw}`$ and the number of basins (since $`\mathrm{\Omega }=\mathrm{exp}(S_{\text{conf}}/k_B)`$), and highlights the existence of an unexpected relation between the number of basins accessed in equilibrium and the connectivity between them. This novel feature of the PES contributes to a better understanding of the water dynamics, and may also help in understanding the dynamics of glass-forming liquids. We thank T. Keyes for useful discussions, and the NSF for support; FS acknowledges partial support from MURST (PRIN 98).
no-problem/9908/hep-ex9908003.html
ar5iv
text
# Measurement of 𝐷^{∗±} Cross Sections and the Charm Contribution to the Structure Function of the Proton in Deep Inelastic Scattering at HERA ## 1 INTRODUCTION The first HERA measurements of the charm contribution to the proton structure function, $`F_2^{c\overline{c}}`$, were reported by the H1 and ZEUS Collaborations from an analysis of $`D^{*\pm }`$ production in their 1994 data sets . The results were consistent with Photon Gluon Fusion (PGF) being the dominant mechanism for $`D^{*\pm }`$ production in $`e^+p`$ Deep Inelastic Scattering (DIS). If this is the case, this type of measurement is sensitive to the gluon content of the proton. In addition, it can provide a test of the universality of the parton distribution functions (pdf’s), namely, whether pdf’s extracted from the inclusive measurement of the proton structure function, $`F_2`$, can be used as input for calculations of more exclusive processes such as charm production. Here we present a study of $`D^{*\pm }`$ production using the 1996 and 1997 data, corresponding to an integrated luminosity of 37 pb<sup>-1</sup>. A more than tenfold larger data sample, together with the modifications of the ZEUS detector made for the 1996 and 1997 operation, allows an extension of the kinematic range to both smaller and larger $`Q^2`$. The $`D^{*\pm }`$ is tagged via the $`D^{*+}\to D^0\pi ^+\to (K^{-}\pi ^+)\pi ^+`$ (+ c.c.) and $`D^{*+}\to D^0\pi ^+\to (K^{-}\pi ^+\pi ^+\pi ^{-})\pi ^+`$ (+ c.c.) decay channels, referred to as $`K2\pi `$ and $`K4\pi `$ respectively. ## 2 $`D^{*\pm }`$ CROSS SECTIONS The measured $`D^{*\pm }`$ cross section using the $`K2\pi (K4\pi )`$ final state, in the region $`1.5(2.5)<p_\mathrm{T}(D^{*})<15\mathrm{GeV}`$, $`|\eta (D^{*})|<1.5`$, is $`\sigma \left(e^+p\to e^+D^{*\pm }X\right)=8.31\pm 0.31\left(stat\right)_{-0.50}^{+0.30}\left(sys\right)`$ nb ($`3.65\pm 0.36(stat)_{-0.41}^{+0.20}(sys)`$ nb).
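Although the selection details are not given in this excerpt, $D^*$ candidates in analyses of this kind are conventionally isolated via the mass difference $\Delta M = M(K\pi\pi_s) - M(K\pi)$, which peaks near 145 MeV above threshold. The Python sketch below shows only the four-vector kinematics of that technique; the lab momenta are purely illustrative numbers, not reconstructed tracks.

```python
import math

M_K, M_PI = 0.493677, 0.139570   # charged kaon and pion masses in GeV

def four_vec(px, py, pz, m):
    """(E, px, py, pz) for a particle of mass m and 3-momentum (px, py, pz)."""
    return (math.sqrt(px * px + py * py + pz * pz + m * m), px, py, pz)

def inv_mass(*vecs):
    """Invariant mass of a set of four-vectors."""
    E = sum(v[0] for v in vecs)
    p = [sum(v[k] for v in vecs) for k in (1, 2, 3)]
    return math.sqrt(E * E - p[0] ** 2 - p[1] ** 2 - p[2] ** 2)

# Illustrative lab-frame momenta (GeV) for a K- pi+ pair and a slow pi+
k   = four_vec(1.10, 0.25, 3.10, M_K)
pi  = four_vec(-0.40, 0.10, 2.20, M_PI)
pis = four_vec(0.05, 0.02, 0.40, M_PI)

m_kpi = inv_mass(k, pi)                  # D0 candidate mass
delta_m = inv_mass(k, pi, pis) - m_kpi   # Delta M = M(K pi pi_s) - M(K pi)
print(round(m_kpi, 4), round(delta_m, 4))
```

Much of the combinatorial background cancels in $\Delta M$, which is why the mass difference, rather than $M(K\pi\pi_s)$ alone, is the standard tagging variable.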
Figure 1 shows the differential $`D^{*}`$ cross sections in the restricted $`Q^2`$, $`y`$, $`p_\mathrm{T}(D^{*})`$ and $`\eta (D^{*})`$ region as functions of $`log_{10}(Q^2)`$, $`log_{10}(x)`$, $`W`$, $`p_\mathrm{T}(D^{*})`$, $`\eta (D^{*})`$ and $`x(D^{*})=2p^{*}(D^{*})/W`$, where $`p^{*}(D^{*})`$ is the momentum in the $`\gamma ^{*}`$-proton CMS frame. The results using each decay channel can be directly compared in the $`p_\mathrm{T}(D^{*})`$ differential cross section. The agreement is satisfactory. ## 3 COMPARISON WITH NLO QCD We perform NLO QCD calculations with the semi-inclusive Monte Carlo generator HVQDIS for heavy quark production and subsequent fragmentation to $`D^{*\pm }`$ via a Peterson fragmentation function . This generator is based on NLO calculations in the three flavor number scheme (TFNS), in which only light quarks ($`u,d,s`$) are included in the initial state proton. Heavy quarks are produced exclusively by the convolution of the light flavours and the gluon with the massive matrix elements and coefficient functions calculated previously . We use the ZEUS94 pdf as input . The QCD renormalization and factorization scales are set to $`\sqrt{Q^2+4m_c^2}`$. $`m_c`$ is varied between 1.3 and 1.5 GeV. $`f(c\to D^{*})=0.222`$ is taken from $`e^+e^{-}`$ measurements . The error on this quantity introduces a normalization uncertainty of $`9`$%. Finally, the Peterson fragmentation parameter is set to $`ϵ=0.035`$. The NLO QCD predictions (figure 1) are in reasonable agreement with the data, except in the $`\eta (D^{*})`$ distribution, where the measurements show a shift into the positive $`\eta `$ region (proton direction) with respect to the prediction. Also, a softer charm fragmentation is favored by the data. ### 3.1 Fragmentation Effects In Monte Carlo fragmentation models like JETSET or HERWIG, a forward shift in the $`D^{*}`$ direction w.r.t.
that of the original charm quark is produced during the fragmentation, due to the interaction of the charm quark with the proton remnant via either strings or soft gluon radiation. To investigate how this affects the NLO QCD predictions we have reweighted an LO Monte Carlo for charm production, RAPGAP (it uses JETSET for the fragmentation), in such a way that at the stage of the hard interaction it reproduces exactly the HVQDIS results for the $`p_\mathrm{T}(c)`$ and $`\eta (c)`$ differential cross sections. The predictions from this NLO-reweighted RAPGAP Monte Carlo are shown in figure 1. They provide a better description of the data, especially in the $`\eta (D^{*})`$ and $`x(D^{*})`$ differential cross sections. This result suggests that the small disagreements found with HVQDIS come from the fact that the Peterson function cannot account for all the charm quark fragmentation effects present at HERA, in particular the interaction with the remnant. ## 4 $`F_2^{c\overline{c}}`$ EXTRACTION The procedure for extracting $`F_2^{c\overline{c}}`$ starts by measuring the $`D^{*\pm }`$ production cross section in the restricted $`p_\mathrm{T}(D^{*})`$, $`\eta (D^{*})`$ region in bins of $`Q^2`$ and $`y`$. The extrapolation to the full $`p_\mathrm{T}(D^{*})`$, $`\eta (D^{*})`$ region is done in the following way: $$F_2^{c\overline{c},\text{m}}(x_i,Q_i^2)=\frac{\sigma _i^{\text{m}}(e^+p\to D^{*}X)}{\sigma _i^{\text{t}}(e^+p\to D^{*}X)}F_2^{c\overline{c},\text{t}}(x_i,Q_i^2)$$ (1) where $`x_i`$, $`Q_i^2`$ is the center of gravity of bin $`i`$, and ‘m’ and ‘t’ denote ‘measured’ and ‘theoretical’ respectively. $`F_2^{c\overline{c},\text{t}}`$ is taken from the ZEUS94 fit. For $`\sigma _i^{\text{t}}`$ we use the reweighted Monte Carlo.
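The bin-by-bin prescription of Eq. (1) is simple to express in code: the measured $F_2^{c\overline{c}}$ is the theoretical one scaled by the ratio of the measured to the predicted $D^*$ cross section in the restricted region. The Python sketch below uses placeholder numbers, not the measured or predicted cross sections.

```python
def extract_f2cc(sigma_meas, sigma_theo, f2cc_theo):
    """Apply Eq. (1) in each (x_i, Q_i^2) bin: scale the theoretical F2cc by the
    ratio of measured to predicted D* cross sections in the restricted region."""
    return [sm / st * f2 for sm, st, f2 in zip(sigma_meas, sigma_theo, f2cc_theo)]

sigma_meas = [0.42, 0.31, 0.18]     # nb, measured in restricted region (illustrative)
sigma_theo = [0.40, 0.33, 0.20]     # nb, reweighted-MC prediction (illustrative)
f2cc_theo  = [0.110, 0.080, 0.050]  # F2cc from the ZEUS94 fit at bin centers (illustrative)
print(extract_f2cc(sigma_meas, sigma_theo, f2cc_theo))
```

The extrapolation outside the measured region thus enters only through the theoretical ratio, which is why the procedure relies on the assumptions listed next.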
A number of assumptions are implicitly made in this procedure: * The TFNS is valid, * $`F_L^{c\overline{c}}`$ is negligible ($`<1\%`$ of $`F_2`$ in our $`Q^2`$, $`y`$ region from calculations based on ), * the value of $`f(c\to D^{*})`$ measured in $`e^+e^{-}`$ is valid also at HERA, * the cross section outside the restricted region is well described by NLO QCD. Figure 2 shows the measured $`F_2^{c\overline{c}}`$ after combining the results from both decay channels. Compared to our previous study we have extended the kinematic range to $`Q^2`$ as low as 1.8 GeV<sup>2</sup> and up to 130 GeV<sup>2</sup>, and the errors are reduced substantially. $`F_2^{c\overline{c}}`$ exhibits a steep rise with decreasing $`x`$ at constant $`Q^2`$. From a comparison with the ZEUS94 parametrization we determine that $`F_2^{c\overline{c}}`$ accounts for $`<10`$% of $`F_2`$ at low $`Q^2`$ and $`x\approx 5\times 10^{-4}`$, and for $`\approx 30`$% of $`F_2`$ for $`Q^2`$$`>11`$ GeV<sup>2</sup> at the lowest $`x`$ measured. ## 5 SUMMARY We have presented a charm analysis in DIS using the combined 1996 and 1997 data sample. Charm was tagged with $`D^{*}`$ mesons reconstructed in two decay channels ($`K2\pi `$ and $`K4\pi `$). In the experimentally accessible region, differential $`D^{*\pm }`$ cross sections are in reasonable agreement with NLO QCD calculations of charm production in the TFNS using a pdf extracted from an inclusive measurement of $`F_2`$. This represents a successful test of the universality of the pdf’s. Small disagreements in the $`\eta (D^{*})`$ and $`x(D^{*})`$ distributions show that fragmentation à la Peterson cannot account for all the charm quark fragmentation effects present at HERA. Using these calculations to extrapolate outside the measured $`p_\mathrm{T}(D^{*})`$, $`\eta (D^{*})`$ region, $`F_2^{c\overline{c}}`$ was extracted. $`F_2^{c\overline{c}}`$ rises steeply with decreasing $`x`$ at constant $`Q^2`$. It amounts to $`\approx 25\%`$ of $`F_2`$ at low $`x`$ and $`Q^2>10`$ GeV<sup>2</sup>.
no-problem/9908/cond-mat9908042.html
ar5iv
text
# Looking at friction through “shearons” ## Abstract We study the response to shear of a one-dimensional monolayer embedded between two rigid plates, where the upper one is externally driven. The shear is shown to excite “shearons”, which are collective modes of the embedded system with well defined spatial and temporal patterns, and which dominate the frictional properties of the driven system. We demonstrate that static friction, stick-slip events, and memory effects are well described in terms of the creation and/or annihilation of a shearon. This raises the possibility of controlling friction by modifying the excited shearon, which we exemplify by introducing a defect at the bottom plate. PACS numbers: 46.55.$`+`$d, 81.40.Pq, 68.15.$`+`$e, 05.45.$``$a The field of nanotribology revolves around attempts to understand the relationship between macroscopical frictional response and microscopic properties of sheared systems . New experimental tools such as the surface force apparatus (SFA) are used to explore shear forces between atomically smooth solid surfaces separated by a nanoscale molecular film . These experiments have unraveled a broad range of phenomena and new behaviors which help shed light on some “old” concepts which have long been considered textbook material: static and kinetic friction forces, transition to sliding, thinning, and memory effects. These and other observations have motivated theoretical efforts , both numerical and analytical, but many issues have remained unresolved, in particular the relation between the macroscopic observables and the microscopic properties of the embedded system. In this Letter we introduce the concept of “shearons”, which are shear-induced collective modes excited in the embedded system and characterized by their wave vector $`q`$.
Shearons, which display well defined spatial and temporal patterns, dominate the frictional properties of the driven system and are found useful in establishing a connection between the frictional response and the motional modes of the embedded system. Within this framework, observations such as static friction, stick-slip behavior, and memory effects can be correlated with the creation and/or annihilation of a shearon. These correlations suggest the possibility of controlling friction by modifying the shearon’s wave vector and thereby tuning the embedded system, by adding, for example, a defect at one of the plates. We start from a microscopic model which has been investigated recently and has been shown to capture many of the important experimental findings . The model system consists of two rigid plates, with a monolayer of $`N`$ particles with masses $`m`$ and coordinates $`x_i`$ embedded between them. The top plate with mass $`M`$ and center of mass coordinate $`X`$ is pulled with a linear spring of spring constant $`K`$. The spring is connected to a stage which moves with velocity $`V`$. This system is described by $`N+1`$ equations of motion $$M\ddot{X}+\sum _{i=1}^{N}\eta (\dot{X}-\dot{x}_i)+K(X-Vt)+\sum _{i=1}^{N}\frac{\partial \mathrm{\Phi }(x_i-X)}{\partial X}=0$$ (1) $$m\ddot{x}_i+\eta (2\dot{x}_i-\dot{X})+\sum _{j\ne i}\frac{\partial \mathrm{\Psi }(x_i-x_j)}{\partial x_i}+\frac{\partial \mathrm{\Phi }(x_i)}{\partial x_i}+\frac{\partial \mathrm{\Phi }(x_i-X)}{\partial x_i}=0,\qquad i=1,\mathrm{\dots },N.$$ (2) The second term in Eqs. (1) and (2) describes the dissipative forces between the particles and the plates and is proportional to their relative velocities, with proportionality constant $`\eta `$, accounting for dissipation that arises from interactions with phonons and/or other excitations. The interaction between the particles and the plates is represented by the periodic potential $`\mathrm{\Phi }(x)=-\mathrm{\Phi }_0\mathrm{cos}(2\pi x/b)`$.
Concerning the inter-particle interaction, we assume nearest neighbor interactions of two types: (i) harmonic interaction $`\mathrm{\Psi }(x_i-x_{i\pm 1})=(k/2)[x_i-x_{i\pm 1}\pm a]^2`$ and (ii) Lennard-Jones interaction $`\mathrm{\Psi }(x_i-x_{i\pm 1})=(ka^2/72)\{[a/(x_i-x_{i\pm 1})]^{12}-2[a/(x_i-x_{i\pm 1})]^6\}`$ . The two plates do not interact directly. The basic frequency is chosen to be the frequency of the top plate oscillation in the potential, $`\mathrm{\Omega }\equiv (2\pi /b)\sqrt{N\mathrm{\Phi }_0/M}`$. The other frequencies in the model are the frequency of the particle oscillation in the potential, $`\omega \equiv (2\pi /b)\sqrt{\mathrm{\Phi }_0/m}`$, the characteristic frequency of the inter-particle interaction, $`\widehat{\omega }\equiv \sqrt{k/m}`$, and the frequency of the free oscillation of the top plate, $`\widehat{\mathrm{\Omega }}\equiv \sqrt{K/M}`$. To simplify the discussion we introduce unitless coordinates $`Y\equiv X/b`$ and $`y_i\equiv x_i/b`$ of the top plate and the particles, respectively, as well as a unitless time $`\tau \equiv \mathrm{\Omega }t`$. We define the following quantities: the misfit between the substrate and inter-particle potentials’ periods, $`\mathrm{\Delta }\equiv 1-a/b`$; the ratio of the masses of the particles and the top plate, $`ϵ\equiv Nm/M`$; the unitless dissipation coefficient $`\gamma \equiv N\eta /(M\mathrm{\Omega })`$; the ratio of the frequencies of the free oscillation of the top plate and of the oscillation of the top plate in the potential, $`\alpha \equiv \widehat{\mathrm{\Omega }}/\mathrm{\Omega }`$; the ratio of the frequencies related to the inter-particle and particle/plate interactions, $`\beta \equiv \widehat{\omega }/\omega `$; and the dimensionless velocity $`v\equiv V/(b\mathrm{\Omega })`$. Additionally, the friction force per particle $`f_\mathrm{k}`$ is defined as $`f_\mathrm{k}\equiv F_\mathrm{k}/(NF_\mathrm{s})`$, where $`F_\mathrm{k}`$ is the total friction force measured with the external spring and $`F_\mathrm{s}\equiv 2\pi \mathrm{\Phi }_0/b`$ is the force unit given by the plate potential.
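A minimal sketch of how Eqs. (1)-(2) can be integrated numerically is given below (Python, semi-implicit Euler, harmonic springs, with the plate potential taken as $-\Phi_0\cos(2\pi y/b)$ so that its minima sit at integer multiples of $b$). The parameter values are illustrative assumptions, not those used in the paper, and a production code would work in the unitless variables just defined.

```python
import numpy as np

# Illustrative parameter values (not the paper's), dimensionful form of Eqs. (1)-(2)
N, m, M = 5, 0.1, 10.0           # particle number, particle and plate masses
b, a = 1.0, 0.9                  # substrate period and spring rest length (misfit 0.1)
Phi0, k, K = 1.0, 4.0, 2.0       # plate-potential amplitude, spring, pulling spring
eta, V = 0.2, 0.1                # damping coefficient and stage velocity
dt, steps = 2e-3, 20000

def dPhi(y):
    """Phi'(y) for the plate potential Phi(y) = -Phi0*cos(2*pi*y/b)."""
    return Phi0 * (2 * np.pi / b) * np.sin(2 * np.pi * y / b)

def accel(X, x, Xd, xd, t):
    """Accelerations of the top plate (Eq. 1) and of the particles (Eq. 2)."""
    aX = (-eta * np.sum(Xd - xd) - K * (X - V * t) + np.sum(dPhi(x - X))) / M
    Fx = -eta * (2 * xd - Xd) - dPhi(x) - dPhi(x - X)
    Fx[:-1] -= k * (x[:-1] - x[1:] + a)   # harmonic spring to the right neighbor
    Fx[1:]  -= k * (x[1:] - x[:-1] - a)   # harmonic spring to the left neighbor
    return aX, Fx / m

X, Xd = 0.0, 0.0
x, xd = a * np.arange(N, dtype=float), np.zeros(N)
spring = np.empty(steps)
for n in range(steps):                    # semi-implicit (symplectic) Euler
    t = n * dt
    aX, ax = accel(X, x, Xd, xd, t)
    Xd += aX * dt; xd += ax * dt
    X += Xd * dt; x += xd * dt
    spring[n] = K * (V * t - X)           # instantaneous spring (friction) force
print(spring[steps // 2:].mean())         # mean spring force over the late half
```

The time series of the spring force is the quantity that exhibits smooth sliding or stick-slip depending on the parameters, as discussed below.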
Here, we keep fixed the number of particles $`N=15`$, the mass ratio $`ϵ=0.01`$, the misfit $`\mathrm{\Delta }=0.1`$, and the relative strength of the inter-particle interaction $`\beta ^2=1`$. We vary only the relative strength of the external spring $`\alpha ^2`$, the dissipation coefficient $`\gamma `$, and the stage velocity $`v`$. In order to analyze the motion of the embedded system more closely, we separate the motion of the particles into the center of mass part, $`y_{\mathrm{cms}}\equiv (1/N)\sum _{i=1}^Ny_i`$, and the fluctuations $`\delta y_i\equiv y_i-y_{\mathrm{cms}}`$. It has been observed in similar models that different modes of motion can coexist for a given set of parameters and lead to different frictional forces. Here, we concentrate on those solutions of the coupled dynamical equations (1) and (2) which correspond to smooth or to stick-slip motion of the top plate. To understand the nature of the motion of the embedded system in these regimes and its relation to the frictional response, we choose the particle density $`\varrho `$ as an observable. Instead of defining the density as $`\sum _{i=1}^N\delta (y-y_{\mathrm{cms}}-\delta y_i)`$, we represent each particle by a Gaussian of width $`\sigma =1`$, namely $$\varrho (y-y_{\mathrm{cms}},\tau )\equiv \sum _{i=1}^N\mathrm{exp}\left\{-\left[\frac{y-y_{\mathrm{cms}}-\delta y_i}{\sigma }\right]^2\right\}.$$ (3) This allows us to visualize correlated motions of the particles on length scales which exceed the nearest neighbor distance and are of the order of $`2\sigma `$ to $`3\sigma `$. Using Eq. (3) we find that the response of the embedded system to shear can be described in terms of collective modes which result in well defined spatial and temporal patterns in the density, which we call “shearons”. These shearons are the spatial/temporal manifestation of parametric resonance between the external drive and the embedded system . In Fig.
1(a)-(c) we present three stable shearons for a chain of harmonically interacting particles with free boundary conditions, where in all cases the top plate slides smoothly. All three shearons share the same set of parameters and differ only in their initial conditions. We find that the observed mean friction force per particle $`f_\mathrm{k}`$ ($`\langle \cdot \rangle `$ denotes a time average), which is directly related to the spatial/temporal fluctuations by $$f_\mathrm{k}=\pi \gamma v+\frac{2\pi \gamma }{v}\sum _{i=1}^{N}\langle \delta \dot{y}_i^2\rangle ,$$ (4) decreases with increasing shearon wave vector $`q`$. This means that the mean friction force can be reduced by increasing the shearon wave vector. In the present example, we observe a reduction from $`f_\mathrm{k}\approx 0.263`$ \[Fig.1(a)\] to $`f_\mathrm{k}\approx 0.180`$ \[Fig.1(b)\], and to $`f_\mathrm{k}\approx 0.125`$ \[Fig.1(c)\]. The lowest possible mean friction force, $`f_\mathrm{k}=\pi \gamma v`$, can be achieved in the limit $`q\to \mathrm{\infty }`$. For our choice of parameters $`\pi \gamma v\approx 0.04712`$. We find that the regions of high density fluctuations exhibit low velocity fluctuations and, according to Eq. (4), low dissipation. These regions are surrounded and stabilized by regions of low density fluctuations with highly fluctuating velocities and hence high dissipation. We note that the minima/maxima of the density reflect correlations in the motion of the neighboring and next-neighboring particles rather than actual locations. The cooperative nature of the motion is visualized here by introducing a finite width $`\sigma `$. As we have already seen, larger wave vectors correspond to a higher average density of the embedded system. This means that for a given number of particles the mean chain length $`L\equiv y_N-y_1`$ decreases with $`q`$: $`L\approx 13`$ \[Fig. 1(a)\], $`L\approx 12.5`$ \[Fig. 1(b)\], and $`L\approx 12`$ \[Fig. 1(c)\]. Note that for $`N=15`$ and $`\mathrm{\Delta }=0.1`$ the equilibrium length of the free chain is $`12.6`$.
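The two diagnostics just used — the smoothed density of Eq. (3) and the mean friction force of Eq. (4) — can be sketched as follows (Python). The particle configuration and the individual values of $\gamma$ and $v$ are assumptions; the latter are chosen only so that their product reproduces the quoted bound $\pi\gamma v \approx 0.04712$.

```python
import numpy as np

def density(y_rel, y_particles, sigma=1.0):
    """Eq. (3): Gaussian-smoothed density on a grid y_rel of positions measured
    relative to the chain's center of mass."""
    dy = y_particles - y_particles.mean()        # fluctuations delta y_i
    return np.exp(-((y_rel[:, None] - dy[None, :]) / sigma) ** 2).sum(axis=1)

def mean_friction(dydot, gamma, v):
    """Eq. (4): f_k = pi*gamma*v + (2*pi*gamma/v) * sum_i <(delta ydot_i)^2>,
    with dydot[t, i] a time series of particle velocity fluctuations."""
    fluct = (dydot ** 2).mean(axis=0).sum()      # sum_i of time-averaged squares
    return np.pi * gamma * v + 2 * np.pi * gamma / v * fluct

# Density profile of an illustrative 5-particle configuration
grid = np.linspace(-5.0, 5.0, 201)
rho = density(grid, np.array([0.0, 0.9, 2.1, 3.0, 3.8]))
print(rho.sum() * (grid[1] - grid[0]))           # integrates to N*sigma*sqrt(pi)

# A fluctuation-free (rigidly sliding) chain recovers the lower bound pi*gamma*v
gamma, v = 0.1, 0.15                             # assumed values with gamma*v = 0.015
print(mean_friction(np.zeros((1000, 15)), gamma, v))
```

Any non-zero velocity fluctuations strictly increase the second term of Eq. (4), which is why stiff, highly correlated shearons dissipate less.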
The interplay between the mean length $`L`$ and the shearon wave vector $`q`$ becomes clearer in calculations under periodic boundary conditions, where $`q`$ is a function of the box size $`\mathrm{\ell }`$, see Fig. 1(d)-(f). In Fig. 1(d) we present the spatial/temporal pattern found for harmonically interacting particles within a box of size $`\mathrm{\ell }=13`$. It is found that for similar wave vectors the mean friction force is smaller for the case of periodic boundary conditions compared to a free chain \[c.f. Fig. 1(c)\], since the free ends strongly fluctuate and cause additional dissipation. When replacing the harmonic with the Lennard-Jones interaction \[see Fig. 1(e) and (f)\], the picture remains basically unchanged. The essential differences are found for: (i) very large box sizes, when the embedded Lennard-Jones system breaks and one recovers the independent particle scenario described in ; (ii) very small box sizes, when the hard core of the Lennard-Jones potential becomes dominant. The latter case corresponds to the high pressure limit $`p\sim \mathrm{\ell }^{-1}\to \mathrm{\infty }`$ and results in a strong stiffness. This, together with incommensurability, allows one to achieve the lower bound for the mean friction force, $`f_\mathrm{k}=\pi \gamma v`$, c.f. Eq. (4). In the case of harmonically interacting particles a minimum of the mean friction force $`f_\mathrm{k}`$ as a function of box size $`\mathrm{\ell }`$ is found for a finite value of $`\mathrm{\ell }`$, which is determined by the commensurability of the shearon wave vector with the box size. These effects will be discussed elsewhere . We now explore the concept of shearons in relation to various frictional phenomena. We start from the well known stick-slip phenomenon observed in many nanoscale systems at low driving velocity, but whose nature is still not well understood.
The start of a slip event has been commonly attributed to the ‘melting’ of the embedded system, namely a transition from an ordered ‘solid’-like to a disordered ‘liquid’-like structure, that ‘refreezes’ at the end of the slip event. In Fig. 2(a) we show that during slippage the motion of the embedded system is highly ordered and highly correlated. At the start of the slip event, the moment of the highest spring force \[enlarged in Fig. 2(b)\], a shearon is created, persisting with a constant wave vector until it is annihilated at the moment of the lowest spring force \[enlarged in Fig. 2(c)\]. The shearon is annihilated since it cannot exist below a certain shear force needed to compensate the energy dissipation. As a result we find that the static force $`f_\mathrm{s}`$ which has to be overcome in order to initiate the motion is the shear force needed to create the shearon. This becomes clearer in a stop/start experiment, where for a smoothly sliding top plate the external drive is stopped for a certain time and reinitiated afterwards. We find that the static friction force needed to restart the motion manifests a stepwise behavior as a function of stopping time . It vanishes only as long as the motion is restarted within the lifetime of the shearon, giving a possible explanation of the memory effects observed in . Since in the shearon description the frictional force is determined by the wave vector, it is appealing to control friction by modifying $`q`$. It is possible to change the shearon wave vector by changing external parameters such as the stage velocity (a higher velocity results in general in a larger wave vector). A shearon with a large wave vector, created at a high velocity, can be maintained at lower velocities by deceleration, giving rise to the hysteretic behavior observed in many experimental systems . Here, we present as an example a “chemical” method to manipulate the shearon wave vector by introducing a defect.
The defect is placed at the bottom plate at an integer position $`y_0`$, leading to a modified bottom plate potential $`\mathrm{\Phi }_{y_0}^{\prime }(y)=\mathrm{\Phi }_0(1-h\{1+\mathrm{cos}(2\pi [y-y_0])\})`$ for $`\left|y-y_0\right|\le 1/2`$ and $`\mathrm{\Phi }_{y_0}^{\prime }(y)=\mathrm{\Phi }(y)`$ otherwise. Here $`h`$ defines the relative depth of the minimum at position $`y_0`$, with $`h=1`$ being the regular case. As an example, the resulting density pattern for $`h=1/2`$ is shown in Fig. 3, where it can be seen that scattering by the defect changes the shearon wave vector $`q`$ into a new wave vector $`q^{\prime }>q`$. The new shearon with wave vector $`q^{\prime }`$ is stable, leading to a decrease of the mean friction force from $`f_\mathrm{k}\approx 0.263`$ before passing the defect to $`f_\mathrm{k}^{\prime }\approx 0.178`$ afterwards. Depending on the amplitude of $`h`$, both a decrease and an increase of the friction force are possible, which provides a method to tune $`f_\mathrm{k}`$. It was already observed in similar models that disorder can significantly change the frictional behavior .
The possibility to modify shearons, and hence the frictional response, by (i) (ambient) pressure and (ii) defects at the plates should be realizable experimentally. Financial support from the Israel Science Foundation, the German Israeli Foundation, and DIP grants is gratefully acknowledged. M.P. gratefully acknowledges the Alexander-von-Humboldt foundation (Feodor-Lynen program) for financial support.
no-problem/9908/nucl-ex9908018.html
ar5iv
text
# Analysis by neutron activation of some ancient ceramics from Romanian territories ## 1 Introduction Ceramics is the most common archaeological material and is therefore widely used by historians to draw temporal and cultural characterisations. The importance of knowing the compositional scheme of pottery is well established<sup>1-5</sup>, although only rarely can important conclusions be drawn from the elemental analysis of potsherds alone<sup>6-8</sup>. In this paper we have analyzed, by the method of neutron activation analysis (NAA), samples of Neolithic ceramics from Cucuteni Scanteia - Vaslui county (History Museum from Iasi), and of Neolithic and Dacian ceramics from Magurele - near Bucharest (collection of the School from Magurele), Romania. Table 1 lists the analyzed ceramic samples. ## 2 Experimental method The samples listed in Table 1 have been analyzed by neutron activation analysis. We considered that the analysis should give an image of the bulk of the objects, and therefore the surface of the shards was removed. We also took into consideration the homogeneity of the samples and the requirement that they be representative of the whole object. Samples of 10-30 mg of potsherds have been cut, weighed, wrapped individually in plastic foil, and irradiated at the rabbit system of the VVR-S reactor of the NIPNE, Bucharest-Magurele, at a flux of 1.5 x 10<sup>12</sup> neutrons cm<sup>-2</sup>s<sup>-1</sup>, for a period of 30 minutes. Spectroscopically pure metallic copper was used as the neutron flux standard. The radioactivity of the samples has been measured after a decay time of 1…20 h, and again after a decay time of 10…14 d. The measurements have been performed with a 135 cm<sup>3</sup> Ge(Li) detector coupled to a PC through an MCA interface. The system gave a resolution of 2.4 keV at 1.33 MeV (<sup>60</sup>Co).
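The quantification behind such measurements can be sketched with the generic comparator relation of NAA: a concentration scales with the decay-corrected specific count rate of the sample relative to a standard. The sketch below (Python) shows only that logic; the helper names, all numbers, and the use of a same-element standard (the paper uses copper purely as a flux monitor) are illustrative assumptions.

```python
import math

def decay_corrected_rate(counts, live_time_h, decay_time_h, half_life_h):
    """Count rate corrected back to the end of irradiation, times in hours."""
    lam = math.log(2.0) / half_life_h
    return counts / live_time_h * math.exp(lam * decay_time_h)

def concentration_ppm(rate_sample, mass_sample_g, rate_std, mass_std_g, conc_std_ppm):
    """Relative (comparator) NAA: concentration scales with the ratio of
    decay-corrected specific count rates of sample and standard."""
    return conc_std_ppm * (rate_sample / mass_sample_g) / (rate_std / mass_std_g)

# Illustrative numbers for a Na-24 photopeak (half-life close to 15 h)
r_smp = decay_corrected_rate(counts=12000, live_time_h=0.5, decay_time_h=10.0, half_life_h=15.0)
r_std = decay_corrected_rate(counts=50000, live_time_h=0.5, decay_time_h=2.0, half_life_h=15.0)
print(round(concentration_ppm(r_smp, 0.020, r_std, 0.010, 10000.0), 1), "ppm")
```

The 1…20 h and 10…14 d counting windows mentioned above serve exactly this decay-correction step: short-lived products (e.g. Na, Mn) are measured early, long-lived ones (e.g. Fe, Sc) after the short-lived activity has died away.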
The following elements have been observed: Fe, K, La, Mn, Na, Sc and Sm. ## 3 Results and discussions Table 2 gives the results of the neutron activation analysis of the ancient potsherds from Cucuteni Scanteia and Magurele. The concentrations are given in ppm; when a concentration was larger than 10,000 ppm the result is given in percent. The statistical errors were $`<`$5% for La, Mn, Na, Sc and Sm, and $``$10% for K and Fe. The diagram in Fig. 1 characterises the analyzed potsherds in terms of all elements considered in the samples. The minimum and maximum limits of the concentrations are shown for all observed elements. One can notice that the two compositional schemes are not completely separated, and there are regions of interference between the Cucuteni group of objects and the Magurele group. Searching for a characteristic element, or ratio of elements, for a given region, we have noticed that the ratios Na/Mn, La/Sc and La/Sm could be considered representative and characteristic for a group of analyzed sherds. Fig. 2a and Fig. 2b show the diagrams of the concentration ratios Na/Mn versus La/Sc and Na/Mn versus La/Sm, respectively, and one can observe a relative grouping of the analyzed objects by provenance and culture according to these ratios. References 1. A. Aspinal, D. N. Slater, ”Neutron activation analysis of medieval ceramics”, Nature 217 (1968) 368 2. J. S. Olin and Ed. V. Sayre, ”Trace analysis of English and American pottery of the American colonial period”, The 1968 Intern. Conference on Modern Trends in Activation Analysis (1968) p. 207 3. N. Saleh, A. Hallak and C. Bennet, ”PIXE analysis of ancient Jordanian pottery”, Nuclear Instruments and Methods 181 (1981) p. 527 4. Ed. Sayre, ”Activation analysis applications in art and archaeology”, in Advances in Activation Analysis, eds. J.M.A. Lenihan, S.J. Thomson and V.P. Guinn, Academic Press, London, p. 157 5. Ch. Lahanier, F.D.
Preusser and L. Van Zelst, ”Study and conservation of museum objects: use of classical analytical techniques”, Nuclear Instruments and Methods B14 (1986) p. 2 6. Zvi Goffer, Archaeological Chemistry, Chemical Analysis, Vol. 55, eds. P.J. Elving and J.D. Winefordner, John Wiley & Sons, p. 108 7. I. Perlman and F. Asaro, ”Deduction of provenience of pottery from trace element analysis”, Scientific Methods in Medieval Archaeology, ed. R. Berger, Univ. of California Press (1970) p. 389 8. A. Millet and H. Catling, ”Composition and provenance: a challenge”, Archaeometry, Vol. 9 (1966) p. 92
no-problem/9908/physics9908028.html
ar5iv
text
# Modelling Meso-Scale Diffusion Processes in Stochastic Fluid Bio-membranes ## Abstract The space-time dynamics of rigid inhomogeneities (inclusions) free to move in a randomly fluctuating fluid bio-membrane is derived and numerically simulated as a function of the membrane shape changes. Both vertically placed (embedded) inclusions and horizontally placed (surface) inclusions are considered. The energetics of the membrane, as a two-dimensional (2D) meso-scale continuum sheet, is described by the Canham-Helfrich Hamiltonian, with the membrane height function treated as a stochastic process. The diffusion parameter of this process acts as the link coupling the membrane shape fluctuations to the kinematics of the inclusions. The latter is described via an Ito stochastic differential equation. In addition to stochastic forces, the inclusions also experience membrane-induced deterministic forces. Our aim is to simulate the diffusion-driven aggregation of inclusions and show how the external inclusions arrive at the sites of the embedded inclusions. The model has potential use in such emerging fields as designing a targeted drug delivery system. PACS 87.20- Membrane biophysics. PACS 34.20- Interatomic and intermolecular potential. PACS 87.22BT- Membrane and subcellular physics. Amphiphilic molecules, such as lipids and proteins, can self-assemble into a variety of exotic structures in an aqueous environment . Computer-based simulation of the dynamics of these structures forms an interesting research area in the computational statistical mechanics of bio-material systems. One such structure is the phospholipid bilayer, which represents the generic structure of all bio-membrane systems, both natural and artificial. These membranes can have thicknesses of only a few nano-metres but linear sizes of up to tens of micro-metres, and can therefore be regarded as highly flexible, fluid-like, 2D continuum sheets embedded in a three-dimensional space.
Thermal fluctuations can induce shape fluctuations and shape transformations in membranes. For example, in the so-called budding transition a spherical vesicle transforms into a prolate ellipsoid as the temperature increases. There is also the possibility that the spherical geometry becomes oblate, producing a shape similar to the biconcave rest shape of a red blood cell. Bio-membranes regulate the recognition and transport processes to and from the interior of the cells, as well as between the cells, forming a barrier which all external particles arriving at the cell must cross. They contain a variety of integral (embedded) inhomogeneities, such as proteins and other macromolecules , that penetrate the thickness of the membrane and act as transport channels. We shall refer to these as the M-type inclusions. These inclusions are mobile and can freely diffuse across the membrane. Their presence, however, forces the bilayer to adjust its thickness locally so as to match the thickness of the hydrophobic region of these inclusions , causing local deformations in the membrane geometry. The perturbations produced in the membrane shape due to these local deformations give rise to both short-range and long-range membrane-induced indirect forces between the inclusions. These forces act in addition to direct Van der Waals and electrostatic forces between the inclusions. The long-range forces originate from the perturbations associated with the long wavelength shape fluctuations , whereas the short-range forces are associated with the local deformations in the immediate vicinity of the inclusions . These membrane-induced forces between the inclusions play a far more significant role than the direct molecular interactions when the length scales involved are comparable to the size of the membrane. In addition to this mode of deformation, a membrane can also deform due to the tension at the amphiphilic molecules-water interfaces.
This tension results in a change in the overall surface area of the membrane. A third mode of deformation also exists, and this is associated with the bending elastic-curvature property of the membrane, which distinguishes it from a sheet of simple fluid dominated by surface tension . Accordingly, two models to study the inclusion-induced local deformations have been developed. In the first model, the membrane energy is taken to consist of a contribution from the molecular expansion/compression due to the change in the thickness at the inclusion boundary, and also a contribution from the overall change in the surface area. Using this model, it is shown that the inclusion-induced deformations cause exponential decays in the thickness of the membrane, extending from the inclusion-imposed value to the equilibrium thickness value, as shown schematically in Fig.1 for two rod-like inclusions. In the second model , the contribution of the membrane bending property is taken into account in the energy term, and it is found that this significantly affects the perturbation profile at the inclusion boundary as well as modifying the membrane-induced interactions. Evidently, an object supported by surface tension would have a different dynamics from one supported by the bending elasticity of the surface . In addition to the M-type inclusions, membranes can also carry inclusions that lie on their surfaces, as shown schematically in Fig.2. These surface inclusions can represent objects that have arrived at the membrane from the outside and are therefore referred to as external inclusions. We shall refer to these as the S-type inclusions. At the meso-scale, i.e.
when the detailed molecular architecture of the membrane can be subsumed into a background 2D sheet, the free elastic energy of a symmetric membrane is described by the Canham-Helfrich Hamiltonian $$\mathcal{H}=\int d^2\sigma \sqrt{g}\{\sigma _0+2\kappa H^2+\overline{\kappa }K\},$$ (1) where $`K=\text{det}(K_{ij})={\displaystyle \frac{1}{R_1R_2}},`$ $`H={\displaystyle \frac{1}{2}}\text{Tr}(K_{ij})={\displaystyle \frac{1}{2}}\left({\displaystyle \frac{1}{R_1}}+{\displaystyle \frac{1}{R_2}}\right).`$ (2) are respectively the Gaussian and mean curvatures of the sheet, $`R_1`$ and $`R_2`$ are the two principal radii of curvature of the sheet, $`\sigma _0`$ is the surface tension, $`\kappa `$ is the bending rigidity, $`\overline{\kappa }`$ is the Gaussian rigidity, $`g`$ is the determinant of the metric tensor and $`\sigma =(\sigma _1,\sigma _2)`$ is the 2D local coordinate on the sheet as opposed to the coordinates on the embedding space. The last term in (1) is, by the Gauss-Bonnet theorem, an invariant for closed surfaces, implying that the dynamics of a membrane is not influenced by this term if its topology remains fixed. In what follows, we concentrate on membranes with fixed topology and drop this term. We then have $$\mathcal{H}=\int d^2\sigma \sqrt{g}\{\sigma _0+2\kappa H^2\}.$$ (3) The study of a membrane whose free energy is described by (3) is facilitated by considering it to be nearly flat, i.e. its thickness to be much smaller than its linear size $`L`$. This is indeed what we mean by a meso-scale model of a membrane. We therefore take the membrane to be almost parallel to the $`(x_1,x_2)`$ plane, regarded as the reference plane. The position of a point on the membrane can then be described by a single-valued function $`h(x_1,x_2)`$ representing the height of that point. This simplification is achieved by writing the Hamiltonian (3) in the Monge representation, which gives for the mean curvature $$2H=g^{-3/2}[\partial _1^2h(1+(\partial _2h)^2)+\partial _2^2h(1+(\partial _1h)^2)-2\partial _1h\partial _2h\partial _1\partial _2h],$$ (4) where $`\partial _i\equiv \frac{\partial }{\partial x^i},i=1,2`$.
We assume that the area of the membrane can fluctuate without constraint by setting $`\sigma _0=0`$ in (3). Consequently, using (4), the Hamiltonian (3) to leading order in derivatives of $`h`$ becomes $$\mathcal{H}_0=\frac{\kappa }{2}\int d^2x(\nabla ^2h)^2.$$ (5) This is the Canham-Helfrich Hamiltonian in the Monge representation, expressed in terms of the height function of the membrane. It is the expression that we employ to describe the energetics of the membrane. Employing a statistical mechanics based on (5) only, and ignoring the contributions from the expansion/compression and interfacial energies, the potential energy function $$V_{MM}^T(R_{ij})=-k_BT\frac{12A^2}{\pi ^2R_{ij}^4},$$ (6) was constructed to describe the membrane-induced temperature-dependent long-range forces between a pair of disk-shaped M-type inclusions that can freely tilt with respect to each other. Another function was also constructed for the long-range interaction between two S-type inclusions $$V_{SS}^T(R_{ij},\theta _i,\theta _j)=-k_BT\frac{L_i^2L_j^2}{128R_{ij}^4}\mathrm{cos}^2[2(\theta _i+\theta _j)],$$ (7) where $`A=\pi r_0^2`$ is the area of an M-type inclusion of radius $`r_0`$, $`k_B`$ is the Boltzmann constant, $`R_{ij}`$ is the distance between the centres of mass of two inclusions $`i`$ and $`j`$, $`L_i`$ and $`L_j`$ are the lengths of two S-type inclusions making the angles $`\theta _i`$ and $`\theta _j`$ respectively with the line joining their centres of mass (see Fig.2) and $`T`$ is the membrane temperature. It is evident that both of these membrane-induced potentials are attractive and fall off as $`R^{-4}`$ with the distance. These expressions are derived for rod-like inclusions that are assumed to be much more rigid than the ambient membrane, so that these inclusions cannot move coherently with the membrane. The only degrees of freedom for the rods are rigid translations and rotations while they remain attached to the membrane.
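The step from the full Monge-representation curvature (4) to the leading-order Hamiltonian (5) amounts to replacing $`2H`$ by $`\nabla ^2h`$ for small gradients. This can be checked numerically; the test surface $`h=ϵ\mathrm{sin}x_1\mathrm{sin}2x_2`$ below is an illustrative choice made for this sketch, not taken from the paper.

```python
import math

def two_H(h1, h2, h11, h22, h12):
    """Full Monge-representation mean curvature, eq. (4):
    2H = [h11(1+h2^2) + h22(1+h1^2) - 2 h1 h2 h12] / g^{3/2},  g = 1 + h1^2 + h2^2."""
    g = 1.0 + h1**2 + h2**2
    return (h11 * (1 + h2**2) + h22 * (1 + h1**2) - 2 * h1 * h2 * h12) / g**1.5

def derivatives(x1, x2, eps):
    """Hand-coded partial derivatives of the test field h = eps*sin(x1)*sin(2*x2)."""
    h1 = eps * math.cos(x1) * math.sin(2 * x2)
    h2 = 2 * eps * math.sin(x1) * math.cos(2 * x2)
    h11 = -eps * math.sin(x1) * math.sin(2 * x2)
    h22 = -4 * eps * math.sin(x1) * math.sin(2 * x2)
    h12 = 2 * eps * math.cos(x1) * math.cos(2 * x2)
    return h1, h2, h11, h22, h12

h1, h2, h11, h22, h12 = derivatives(0.7, 0.3, eps=1e-3)
full = two_H(h1, h2, h11, h22, h12)   # exact 2H at the chosen point
leading = h11 + h22                   # the Laplacian kept in eq. (5)
```

The discrepancy between `full` and `leading` is of third order in the gradient amplitude, so $`(2H)^2`$ and $`(\nabla ^2h)^2`$ agree to the quadratic order retained in (5).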
So far, the modelling of bio-membrane dynamics decorated with inclusions has been mainly concerned with constructing potential energy functions such as those given in (6) and (7). An interesting problem, however, would be to use this information to simulate the space-time behaviour of inclusions in a membrane described by (5) and undergoing stochastic shape fluctuations. This is the problem that we address in this paper. This type of simulation can establish a direct link between the randomly changing membrane shape on the one hand and the inclusion dynamics on the other. In such a simulation, the thermodynamic phase behaviour of inclusions, such as their temperature-dependent aggregation, can be directly computed. This phase behaviour plays a crucial role in the functional specialisation of a membrane . Furthermore, information on the capture rate of the S-type inclusions, which could represent external drug particles, at the sites of the M-type inclusions can be obtained as a function of the changes in the environmental variables such as the ambient temperature. This type of meso-scale simulation when coupled with the Molecular Dynamics (MD) simulation of membrane patches near the inclusions at the nano-scale , can produce a seamless multi-scale model of the entire environment for many bio-molecular processes, starting with the arrival of external inclusions at the cell, their diffusion in the membrane, and finally their molecular docking at the site of the embedded inclusions. To proceed, let us consider a 2D bio-membrane described by (5) containing both the M-type and the S-type inclusions. 
To make the membrane a stochastically fluctuating medium, we treat the height function in (5) as a stochastic Wiener process with a Gaussian distribution, whose mean and variance can be written as $$\langle h(x_1,x_2;t)\rangle =0,$$ (8) $$\langle h(x_1,x_2;t)h(x_1,x_2;t)\rangle =\langle [h(x_1,x_2;t)-\langle h(x_1,x_2;t)\rangle ]^2\rangle =2Dt$$ (9) where $`D`$ is the diffusion constant associated with the height fluctuations at the local position $`(x_1,x_2)`$ and represents the measure with which random fluctuations propagate in the local geometry. Such random height changes would cause a roughening of the membrane surface on molecular scales, and this has been observed in NMR experiments . Assuming that this is the only stochastic process present in the membrane, it is then reasonable to suppose that this stochastic dynamics is communicated to the inclusions as well, and that their ensuing random motions are contingent only on these fluctuations. This implies that the mathematical point representing the centre of mass of an inclusion coinciding with the membrane point $`(x_1,x_2)`$ would also experience the same fluctuations and would diffuse with the same diffusion constant. To derive an expression for $`D`$, based on (5), we start with the static height-height correlation function obtained from (5). This is given by $$\langle h(𝐪;0)h^{}(𝐪^{};0)\rangle =\frac{k_BT}{\kappa q^4}(2\pi )^2\delta (𝐪-𝐪^{}),$$ (10) where $`\langle \mathrm{}\rangle `$ is the thermal averaging with respect to the Boltzmann weight, $`\text{exp}(-\mathcal{H}_0/k_BT)`$, and $`𝐪`$ is the wave vector of magnitude $`q`$. The corresponding dynamic correlation function can be obtained by writing $$h(𝐪;t)=h(𝐪;0)e^{-\gamma _0(𝐪)t},$$ (11) giving $$\langle h(𝐪;t)h^{}(𝐪^{};t)\rangle =\frac{k_BT}{\kappa q^4}e^{-2\gamma _0(q)t}(2\pi )^2\delta (𝐪-𝐪^{}),$$ (12) where the damping rate, $`\gamma _0(q)`$, reflecting the long-range character of the hydrodynamic damping, is defined as $$\gamma _0(q)=\kappa q^3/4\eta ,$$ (13) and $`\eta `$ denotes the coefficient of viscosity of the fluid membrane.
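Equation (13) makes the damping strongly mode-dependent. As a rough numerical sketch, take $`\kappa =10^{19}`$ J and $`\eta =10^{3}`$ J sec m<sup>-3</sup> in the physically sensible readings $`10^{-19}`$ and $`10^{-3}`$ (the order of the simulation values quoted later in the text — an assumption on our part), and evaluate the slowest mode $`q=2\pi /L`$ of a 40 $`\mu `$m membrane:

```python
import math

kappa = 1.0e-19      # bending rigidity [J]      (assumed order of magnitude)
eta   = 1.0e-3       # membrane viscosity [J s m^-3]  (assumed order of magnitude)
L     = 40.0e-6      # membrane edge length [m]

def gamma0(q):
    """Hydrodynamic damping rate of a bending mode, eq. (13)."""
    return kappa * q**3 / (4.0 * eta)

q_min = 2.0 * math.pi / L           # slowest (longest-wavelength) mode
tau_slowest = 1.0 / gamma0(q_min)   # its relaxation time, of order 10 s here
```

The cubic scaling means the slowest mode relaxes over seconds, while a mode with ten times shorter wavelength relaxes a thousand times faster.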
In real space, (12) transforms to $$\langle h(x_1,x_2;t)h(x_1,x_2;t)\rangle =\frac{k_BT}{4\pi \kappa }L^2e^{-2\gamma _0t},$$ (14) where $`L`$ is the length of the membrane. This is the equal-time correlation function for membrane fluctuations. A similar model of an active fluctuating membrane, in which the vertical displacements of the membrane satisfy a Langevin equation in the $`𝐪`$ space, has also been proposed , and it is shown that a term similar to the static version of (12) contributes to the correlation function, which also contains a contribution from non-equilibrium fluctuations. The latter is in the form of a $`q^5`$ term which dominates at long distances. Comparison of (9) and (14) yields the desired result $$D=\frac{\left(\frac{k_BT}{4\pi \kappa }\right)L^2e^{-2\gamma _0t}}{2t}.$$ (15) It should be emphasised that the association of a diffusive process with the membrane height function, and the resulting diffusion constant, is not analogous to the usual model of a diffusion process in which, for example, a particle diffuses through a medium, such as a fluid. Rather, what is suggested here is that the magnitude of a mathematical function representing the height of a mathematical point in the membrane is subject to random stochastic variations, and the diffusion constant is a measure of this variation. When the M-type inclusions are present they produce exponentially decaying local deformations in the membrane geometry (see Fig.1). Correspondingly, the correlation function can be modified by a multiplicative exponential factor to $$\langle h(x_1,x_2;t)h(x_1,x_2;t)\rangle _{NI}=e^{-r_0/R}\langle h(x_1,x_2;t)h(x_1,x_2;t)\rangle ,$$ (16) where $`r_0`$ is the radius of an M-type inclusion, $`R+r_0`$ is the radius of the circular region around the inclusion with its centre coinciding with that of the inclusion, and $`NI`$ stands for near inclusion. It is evident that outside this region the exponential decay of the profile is negligible.
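Equation (15) can be wrapped as a small helper to see how $`D`$ depends on the chosen correlation time. The default parameter values below are our assumptions, of the same order as those quoted later for the simulations, and the real-space damping rate is treated as a single fixed number:

```python
import math

def diffusion_constant(t, kBT=4.14e-21, kappa=1.0e-19, L=40.0e-6, gamma0=0.1):
    """Height-fluctuation diffusion constant of eq. (15).

    Defaults are illustrative assumptions of the order quoted in the text
    (T = 300 K, kappa ~ 1e-19 J, L = 40 um); gamma0 [1/s] is the real-space
    damping rate obtained from eq. (13).
    """
    return (kBT / (4.0 * math.pi * kappa)) * L**2 * math.exp(-2.0 * gamma0 * t) / (2.0 * t)
```

Both the $`1/2t`$ factor and the hydrodynamic damping make $`D`$ a decreasing function of the correlation time $`t`$, which is why the choice of $`t`$ has to be justified against the particle time scale, as done below in the text.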
Accordingly, within this circular region of radius $`R+r_0`$, the diffusion constant is also modified to $$D_m=De^{-r_0/R}.$$ (17) This equation implies that when the centre of mass of an S-type inclusion enters a circular region of radius $`R+r_0`$ its diffusion coefficient goes over to $`D_m`$ and progressively approaches zero as the boundary of an M-type inclusion is approached. We can assume, as a first approximation, that this is how an M-type inclusion interacts with an S-type inclusion. In our simulations, the equations of motion of both the S-type and the M-type inclusions are represented by the differential equation of the Ito stochastic calculus $$d𝐫(t)=𝐀[𝐫(t),t]dt+D^{1/2}d𝐖(t).$$ (18) This equation describes the stochastic trajectory, $`𝐫(t)`$, of the centres of mass of the inclusions in terms of a dynamical variable of the inclusions, $`𝐀[𝐫(t),t]`$, which is referred to as the drift velocity, and a term, $`d𝐖(t)`$, which is a given Gaussian Wiener process with the mean and variance given by $`\langle d𝐖(t)\rangle =0`$ (19) $`\langle d𝐖_i(t)d𝐖_j(t)\rangle =2\delta _{ij}dt.`$ Equation (18) applies to each dimension of the motion. The Ito equation predicts the increment in position, i.e. $`d𝐫(t)=𝐫(t+dt)-𝐫(t)`$, for a meso-scale time interval $`dt`$ as a combination of a deterministic drift part, represented by $`𝐀[𝐫(t),t]`$, and a stochastic diffusive part represented by $`D^{1/2}d𝐖(t)`$ and superimposed on this drift part. This equation resembles the ‘position’ Langevin equation describing the Brownian motion of a particle . The position Langevin equation corresponds to the long-time (diffusive time) configurational dynamics of a stochastic particle in which its momentum coordinates are in thermal equilibrium and hence have been removed from the equations of motion.
Since we are interested in diffusive time scales as well, we can re-write (18) as $$d𝐫(t)=\frac{D}{k_BT}𝐅(t)dt+D^{1/2}d𝐖(t),$$ (20) where $`𝐅(t)`$ is the instantaneous systematic force experienced by the $`i`$-th inclusion and is obtained from the inter-inclusion potentials, given in (6) and (7), according to $$𝐅_i=-\underset{j>i}{\sum }\nabla _{𝐑_i}V(R_{ij}).$$ (21) We implemented (20) for our 2D simulations according to the iterative scheme $`X(t+dt)=X(t)+{\displaystyle \frac{D}{k_BT}}F_X(t)dt+\sqrt{2Ddt}R_X^G`$ (22) $`Y(t+dt)=Y(t)+{\displaystyle \frac{D}{k_BT}}F_Y(t)dt+\sqrt{2Ddt}R_Y^G`$ where $`R_X^G`$ and $`R_Y^G`$ are standard random Gaussian variables chosen separately and independently for each inclusion according to the procedure given in , and $`F_X,F_Y`$ are the $`X`$ and $`Y`$ components of the force $`𝐅`$. For the S-type inclusions, we treated the angles in (7) as independent stochastic variables described by $$\theta (t+dt)=\theta (t)+\frac{D}{k_BTL^2}\tau (t)dt+\frac{1}{L}\sqrt{2Ddt}\theta ^G,$$ (23) where $`\tau `$ is the torque experienced by an S-type inclusion and is given by $$\tau _i=-\underset{j>i}{\sum }\frac{\partial V^T(R_{ij},\theta _i,\theta _j)}{\partial \theta _i},$$ (24) and $`\theta ^G`$ is the angular counterpart of $`R_X^G`$ and $`R_Y^G`$. In the numerical simulations, recently reported in their broad outlines , we use a square membrane with $`L=40\mu `$m on its side. The other parameters used were set at $`\kappa =10^{-19}`$ J and $`\eta =10^{-3}`$ J sec m<sup>-3</sup> . These values correspond to the condition in which the bending mode of the membrane is important. From these data the damping coefficient, $`\gamma _0`$, in the real space of the membrane, can be obtained from (13). The simulation temperature was set at $`T=300^{}K`$, and the correlation (delay) time, $`t`$, over which the diffusion coefficient in (15) was calculated, was set at $`t=10^{-4}`$sec.
These data gave $`D=2.6\times \mathrm{\hspace{0.17em}10}^{-9}`$ m<sup>2</sup>sec<sup>-1</sup>, which is in close agreement with the value of $`D\approx 4.4\times \mathrm{\hspace{0.17em}10}^{-9}`$m<sup>2</sup>sec<sup>-1</sup> obtained at the molecular level via an MD simulation of a fully hydrated phospholipid dipalmitoylphosphatidylcholine (DPPC) bilayer diffusing in the $`z`$-direction . To justify our choice of the correlation time, $`t=10^{-4}`$ sec, we recall that the time scale of a stochastic particle, $`t_D`$, of mass $`m`$ is usually determined from the relation $`t_D={\displaystyle \frac{mD}{k_BT}}`$ (25) Since $`t_D`$ is normally of the order of $`10^{-9}`$sec, then for the criterion of long-time dynamics, employed in our model (cf (20)), to be justified the correlation (diffusive) time scale, $`t`$, in (15) has to satisfy the condition $`t\gg t_D.`$ (26) For our calculated value of $`D`$ and our choice of the inclusion mass $`m=10^{-12}\mu `$g, corresponding to an inclusion of length $`L_i=0.1\mu `$m, we obtained a value of $`t_D=0.6\times 10^{-9}`$sec, showing that our choice of the correlation time was appropriate to satisfy the condition in (26). The radius of an M-type inclusion was set at $`r_0=0.01\mu m`$, and the inclusions were all equal in length. The stochastic trajectories of the inclusions were obtained in a set of five simulations. The simulation time step, $`dt`$, in (22) was set at $`dt=10^{-9}`$sec, and each simulation was performed for $`4\times \mathrm{\hspace{0.17em}10}^6`$ time steps, i.e. for a mesoscopic interval of $`4000\mu `$sec. The total number of inclusions considered was $`36`$, consisting of $`13`$ S-type and $`23`$ M-type. In the first simulation, we computed the random motions of the S-type inclusions in a membrane devoid of the M-type inclusions. This was done in order to observe the details of the drift-diffusion motion over mesoscopic time scales.
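The update rule (22) is a standard Euler-Maruyama step. The sketch below implements it in toy units (none of the paper's parameter values are used) for point-like inclusions interacting through an attractive $`V(R)=c/R^4`$ pair potential of the same $`R^{-4}`$ form as (6); the noise source is passed in as a callable so that the deterministic drift toward aggregation can be inspected in isolation.

```python
import numpy as np

def pair_force(r1, r2, c=1.0):
    """Force on inclusion 1 from an attractive pair potential V(R) = -c / R^4:
    F_1 = -grad_{r1} V = -(4c / R^5) * (r1 - r2) / R, pointing toward inclusion 2."""
    d = r1 - r2
    R = np.linalg.norm(d)
    return -(4.0 * c / R**5) * (d / R)

def ito_step(pos, D, kBT, dt, rng):
    """One update of the iterative scheme (22) for every inclusion.

    pos : (N, 2) array of centre-of-mass coordinates
    rng : callable taking a shape and returning standard Gaussian variates
    Toy units throughout -- these are NOT the paper's parameter values.
    """
    forces = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i != j:
                forces[i] += pair_force(pos[i], pos[j])
    drift = (D / kBT) * forces * dt          # deterministic part of (22)
    noise = np.sqrt(2.0 * D * dt) * rng(pos.shape)  # stochastic part of (22)
    return pos + drift + noise

# Two S-type inclusions a unit distance apart, evolved with the noise
# switched off to expose the deterministic drift toward aggregation.
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
no_noise = lambda shape: np.zeros(shape)
for _ in range(100):
    pos = ito_step(pos, D=1e-3, kBT=1.0, dt=1e-2, rng=no_noise)
separation = np.linalg.norm(pos[0] - pos[1])   # strictly below the initial 1.0
```

Replacing `no_noise` with, e.g., `np.random.default_rng(0).standard_normal` restores the stochastic term and produces the drift-plus-Brownian trajectories described in the text.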
Figure 3 shows the stochastic X-Y trajectories of a sample of 4 S-type inclusions plotted on a micron scale up to the end of the simulation time. In addition to the drift motions, represented by the second terms in (22), the random Brownian-type variations, emanating from the membrane shape fluctuations, are superimposed on this drift motion and are clearly visible over the mesoscopic length and time scales. Figure 4 shows the snapshots of a small patch of the membrane with both the S-type (white spheres) and the M-type (black spheres) inclusions. In this, and subsequent figures, the solid spheres refer to the centres of mass of the rod-like inclusions. In the initial state the outer M-type inclusions were regularly positioned, whereas the inner ones were randomly distributed. The S-type inclusions were all distributed completely at random. Figures 4a to 4c refer to the simulation in which the M-type inclusions were pinned to the membrane, i.e. were static, and only the S-type inclusions were mobile, and figures 4d to 4f refer to the simulation in which both the M-type and the S-type inclusions were mobile. The initial states in both simulations, figures 4a and 4d, were the same. The snapshots were obtained from dynamic simulations, akin to an MD simulation, covering the entire simulation time interval. These snapshots were recorded at $`2\times 10^{-3}`$ sec intervals, with figures 4c and 4f referring to the final states reached at the conclusion of the simulations after $`4\times \mathrm{\hspace{0.17em}10}^6`$ time steps. The animation of a complete run showed clearly the stochastic motions of the inclusions, and how the S-type inclusions approached the M-type inclusions and were captured at the sites of the M-type inclusions. An examination of figure 4 shows that for the case of dynamic M-type inclusions, a larger number of the S-type inclusions were captured at the M-type inclusion sites, i.e.
the number was some 4 times higher than in the static case at the same temperature. We adopted the method of counting an S-type inclusion as a captured inclusion when its centre of mass coincided with that of an M-type inclusion. The numerical algorithm then transformed the colour code of that S-type inclusion from white to black. Figures 4e and 4d also show some diffusion-driven local aggregation of the M-type inclusions. The capture of the S-type inclusions can be viewed as the first stage in the molecular docking process which will eventually transfer these inclusions into the interior of the cell. To examine the membrane response to temperature changes, two of the simulations were performed at different temperatures. Figure 5 shows the snapshots of these simulations at $`T=100^{}`$K (a to c) and at $`T=350^{}`$K (d to f). Figures 5d to 5f clearly show that both the number of captured inclusions and the aggregation of the M-type inclusions were affected by these temperature differences, as can be seen by comparing figures 5c and 5f. To sum up, although many dynamical aspects of membrane-like surfaces have been addressed in the past , it is only relatively recently that attention has focused on the dynamics of membranes with inclusions. To our knowledge no computer-based simulation of this dynamics has been reported so far. In this paper we constructed a meso-scale model of a generic bio-membrane based on the Canham-Helfrich curvature-energy formalism. We treated the height function of the membrane as a stochastic Wiener process whose correlation function provides the relevant diffusion constant describing the membrane fluctuations. Two types of inclusions, one mimicking the internal embedded type and the other the external floating type, are carried by this membrane. 
These inclusions experience the same stochastic fluctuations as those experienced by the membrane itself, resulting in the transformation of their deterministic (drift) space-time dynamics into a stochastic Langevin-type dynamics described by the Ito stochastic calculus. A set of dynamic simulations, resembling the standard MD simulations, was performed to investigate the phase behaviour of these inclusions. In addition to stochastic forces, these inclusions also experience deterministic interactions described by inter-inclusion potentials varying as $`1/R^4`$ with their separations. The simulation results clearly indicate that the capture and aggregation rates change with the temperature and that the embedded mobile inclusions capture a greater number of the floating inclusions. A further extension of the present work would be to include the influence of the surface tension, as well as the bending rigidity, by keeping the corresponding term in (3). This will constrain the fluctuations in the surface area of the membrane and would have a direct bearing on the inclusion dynamics. The second author (HRS) is grateful to the UK Royal Society for financial support through a visiting research fellowship and to the School of Computing and Mathematical Sciences (Greenwich University) for their hospitality. Both authors acknowledge useful discussions with Professor E. Mansfield FRS on the dynamics of objects supported by surface tension. Figure captions Figure 1: Two rod-like embedded (M-type) inclusions vertically placed in an amphiphilic fluid membrane. The inclusions impose exponentially decaying thickness-matching constraints on the bilayer at the inclusion boundary. Heavy solid lines represent amphiphilic molecules. Figure based on . Figure 2: Two rod-like surface (S-type) inclusions lying on the surface of the membrane.
The rods have lengths $`L_1`$ and $`L_2`$, widths $`ϵ_1`$ and $`ϵ_2`$, and make angles $`\theta _1`$ and $`\theta _2`$ with the line joining their centres of mass. Figure based on . Figure 3: A small patch of the membrane showing the stochastic X-Y trajectories obtained from equation (22) for a sample of four S-type inclusions without the presence of the M-type inclusions. Both the drift and diffusion motions can be clearly distinguished. Figure 4: A set of snapshots, obtained from dynamic simulations, showing the capture of rod-like S-type inclusions (white spheres) at the rod-like M-type inclusion sites (black spheres) for static (a to c) and dynamic (d to f) M-type inclusions. The aggregation of the M-type inclusions can also be observed (d to f). Only the centres of mass of the inclusions are shown. Figure 5: A set of snapshots, obtained from dynamic simulations, showing the capture of rod-like S-type inclusions at the rod-like M-type inclusion sites for mobile M-type inclusions at $`T=100^{}`$K (a to c) and $`T=350^{}`$K (d to f). Only the centres of mass of the inclusions are shown.
no-problem/9908/math9908134.html
ar5iv
text
# A method for computing quadratic Brunovsky forms ## 1 Introduction Linear control systems can be continuous in time $`t`$, $`\dot{\xi }=A\xi +b\mu ,`$ (1) or discrete $`\xi (t+1)=A\xi (t)+b\mu (t),`$ (2) where coefficients $`A\in \mathrm{}^{n\times n}`$ and $`b\in \mathrm{}^{n\times 1}`$ are generally constant, and state variable, $`\xi `$($`\mathrm{}^n`$), and control variable, $`\mu `$($`\mathrm{}`$), are continuous or discrete respectively. When controllable, both (1) and (2) admit the Brunovsky form , which is derived from the controller form, under the following linear change of coordinates and state feedback $`\xi =Tx,\mu =u+x^Tv,`$ (3) where $`x`$ and $`u`$ are new state and control variables respectively, and $`T\in \mathrm{}^{n\times n}`$ and $`v\in \mathrm{}^{n\times 1}`$ are the transformation matrices (vectors) (refer to Chapter 3 of ). In the linear Brunovsky form of (1) and (2), $`A`$ and $`b`$ have the following forms: $`A=\left[\begin{array}{ccccc}0& 1& & & \\ & 0& 1& & \\ & & 0& \mathrm{}& \\ & & & \mathrm{}& 1\\ & & & & 0\end{array}\right],b=\left[\begin{array}{c}0\\ 0\\ \mathrm{}\\ 0\\ 1\end{array}\right].`$ (14) In an attempt to extend the Brunovsky forms to non-linear systems, which generally do not even have controller forms, Kang and Krener studied continuous linearly controllable quadratic control systems with a single input, which can be written as (We use different notations from for the purpose of clearly presenting our method of computation.) $`\dot{\xi }=A\xi +b\mu +𝔉^{[2]}(\xi )+G\xi \mu +O(\xi ,\mu )^3,`$ (15) where $`𝔉^{[2]}(\xi )=(\xi ^TF_1\xi ,\mathrm{},\xi ^TF_n\xi )^T`$ is a vector of $`n`$ quadratic terms with symmetric $`n\times n`$ matrices $`F_i`$’s ($`i=1,\mathrm{},n`$), $`G\xi \mu =(G_1\xi \mu ,\mathrm{},G_n\xi \mu )^T`$ is a vector of $`n`$ bilinear terms with $`G\in \mathrm{}^{n\times n}`$, and $`O(\xi ,\mu )^3`$ includes all terms $`\xi ^a\mu ^b`$ with $`a+b\ge 3`$. Moreover, the following additional assumptions are made for this system.
First, the coefficients in (15) and (19) $`A`$, $`b`$, $`F_i`$ ($`i=1,\mathrm{},n`$), $`G`$ and $`h`$ are assumed to be time-invariant. Second, the two systems are assumed to be linearly controllable; by linearly controllable we mean that the linear parts of the two quadratic systems are controllable and, as a result, the linear parts can be transformed into the Brunovsky form with (3). Kang and Krener first defined quadratically state feedback equivalence up to second order, or quadratic equivalence for short, under the following quadratic change of coordinates and state feedback, $`\xi =x+𝔓^{[2]}(x)+O(x)^3,\nu =\mu +x^TQx+rx\mu +O(x,\mu )^3,`$ (16) in which $`𝔓^{[2]}(x)=(x^TP_1x,\mathrm{},x^TP_nx)^T`$ is a vector of $`n`$ quadratic terms, and transformation matrices include symmetric $`P_i\in \mathrm{}^{n\times n}`$ ($`i=1,\mathrm{},n`$), symmetric $`Q\in \mathrm{}^{n\times n}`$, and $`r\in \mathrm{}^{1\times n}`$. We can see that (16) is equivalent to $`\xi =x+𝔓^{[2]}(x)+O(x)^3,\mu =\nu -x^TQx-rx\nu +O(x,\nu )^3,`$ (17) and hereafter refer to these transformations as (17). Then, from all quadratically equivalent systems of a general system (15), two types of Brunovsky forms were defined in . In type I forms, the nonlinear terms are reduced to a number of quadratic terms $`x_i^2`$; i.e., there are no cross terms in $`x_i`$ and $`x_j`$ ($`i\ne j`$) or bilinear terms, $`x\nu `$. In type II forms, only bilinear terms are kept. In both types of Brunovsky forms, the number of non-zero nonlinear terms is $`n(n-1)/2`$, compared to $`n^2(n+3)/2`$ for a general quadratic system. In this paper, we propose a new method for studying the Brunovsky forms, first for (15) under transformations (17).
This method can be carried out in three steps: first, we find the relationships between the coefficients of (15), the coefficients of its quadratically equivalent systems, and the corresponding transformation matrices; second, from these relationships, we derive a mapping from the coefficients of (15) and of its equivalent systems to the transformation matrix $`P_1`$, which can be considered as a necessary condition that all equivalent systems of (15) should satisfy; third, we show how to compute, from the necessary condition, the Brunovsky forms as well as the corresponding transformations. With this method, we find that (15) admits the same two types of Brunovsky forms defined in . In contrast to , however, our approach, constructive in nature, is capable of computing the Brunovsky forms and the transformation matrices simultaneously. We further show that the quadratic transformations (17) can be simplified, by setting $`r=0`$, to $`\xi =x+𝔓^{[2]}(x)+O(x)^3,\mu =\nu -x^TQx+O(x,\nu )^3.`$ (18) Still defining quadratic equivalence in the sense of , the new transformations prevent multiple solutions of type I or type II Brunovsky forms; i.e., the Brunovsky form of each type and the corresponding transformation matrices $`P_i`$ ($`i=1,\dots ,n`$) and $`Q`$ are uniquely determined by the original system. Moreover, we apply the same method, with the simplified transformations defined in (18), to study the following discrete system $`\xi (t+1)=A\xi (t)+b\mu (t)+𝔉^{[2]}(\xi (t))+G\xi (t)\mu (t)+h\mu ^2(t)+O(\xi ,\mu )^3,`$ (19) where, similarly, $`𝔉^{[2]}(\xi (t))`$ and $`G\xi (t)\mu (t)`$ are a vector of quadratic terms and a vector of bilinear terms respectively. <sup>1</sup><sup>1</sup>1Note that the discrete system (19) contains a term quadratic in the control variable, $`h\mu ^2(t)`$, where $`h\in \mathbb{R}^{n\times 1}`$, and (19) has $`n^2(n+3)/2+n`$ non-zero nonlinear terms. 
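To fix the notation computationally before proceeding: a system of the form (15) is specified by the data $`(A,b,F_1,\dots ,F_n,G)`$. The following minimal sketch (numpy assumed; `quad_rhs` is an illustrative name, not from the paper) evaluates the quadratic right-hand side up to second order:

```python
import numpy as np

def quad_rhs(A, b, F, G, x, mu):
    """Right-hand side of a system of the form (15), truncated at order 2:
    A x + b mu + (x^T F_i x)_{i=1..n} + (G x) mu.
    F is a list of n symmetric n x n matrices; G is n x n; b, x are 1-D."""
    quad = np.array([x @ Fi @ x for Fi in F])   # the vector F^[2](x)
    return A @ x + b * mu + quad + (G @ x) * mu
```

For the discrete system (19), the same expression plus the extra term `h * mu**2` would give $`x(t+1)`$.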
We find that for (19) there exists only one type of Brunovsky form, consisting of $`n(n+1)/2`$ bilinear terms, which corresponds to the type II Brunovsky forms of continuous systems. The rest of this paper is organized as follows. After reviewing the Brunovsky form of linear systems (Section 2), we study the Brunovsky forms of continuous quadratic systems in Section 3 and of discrete quadratic systems in Section 4. In Section 5, we summarize our method into two computation algorithms: one for continuous systems, and the other for discrete systems. We conclude our paper in Section 6. ## 2 Review: computation of the Brunovsky form of a controllable linear control system We first review the computation of the Brunovsky form of a continuous controllable linear control system (1). <sup>2</sup><sup>2</sup>2For further explanation, refer to and Chapter 3 of . The procedure and the results also apply to the discrete system (2). Computation of the Brunovsky form for (1) consists of two steps: first, the linear system is transformed into the controller form with a linear change of coordinates given by the first equation in (3); second, the controller form is further reduced into the Brunovsky form with a state feedback given by the second equation in (3). The controllability matrix for (1) is defined as $`C=[A^{n-1}b,\dots ,Ab,b].`$ (20) Since (1) is controllable, rank($`C`$) is $`n`$ and, therefore, $`C`$ is invertible. 
Denote by $`d`$ the first row of $`C^{-1}`$; then we can compute the transformation matrix $`T`$ from $`T^{-1}=\left[\begin{array}{c}d\\ dA\\ \mathrm{\vdots }\\ dA^{n-2}\\ dA^{n-1}\end{array}\right],`$ (26) and with the linear change of coordinates $`\xi =Tx`$, (1) can be transformed into the following controller form, $`\dot{x}=\overline{A}x+\overline{b}\mu ,`$ (27) in which $`\overline{A}=T^{-1}AT=\left[\begin{array}{ccccc}0& 1& & & \\ 0& 0& 1& & \\ \mathrm{\vdots }& & 0& \mathrm{\ddots }& \\ \mathrm{\vdots }& & & \mathrm{\ddots }& 1\\ -v_1& -v_2& \mathrm{\cdots }& \mathrm{\cdots }& -v_n\end{array}\right],\overline{b}=T^{-1}b=\left[\begin{array}{c}0\\ 0\\ \mathrm{\vdots }\\ 0\\ 1\end{array}\right].`$ (38) Then, using the linear state feedback $`\mu =u+x^Tv`$, in which $`v=\left[\begin{array}{c}v_1\\ v_2\\ \mathrm{\vdots }\\ v_{n-1}\\ v_n\end{array}\right],`$ (44) we can transform the controller form (27) into the following Brunovsky form, $`\dot{x}=\left[\begin{array}{ccccc}0& 1& & & \\ & 0& 1& & \\ & & 0& \mathrm{\ddots }& \\ & & & \mathrm{\ddots }& 1\\ & & & & 0\end{array}\right]x+\left[\begin{array}{c}0\\ 0\\ \mathrm{\vdots }\\ 0\\ 1\end{array}\right]u.`$ (55) From the derivation above, we can see that the Brunovsky form exists and is unique, and that the transformations can be explicitly computed. In addition, this procedure can easily be integrated into applications related to the Brunovsky form. In the same spirit, we propose a method for directly computing quadratic Brunovsky forms and the corresponding transformations in the following sections. ## 3 Computation of continuous quadratic Brunovsky forms Note that, under the linear transformations in (3), the quadratic systems (15) and (19) do not change their forms; only their coefficients change. Without loss of generality, therefore, we hereafter assume that $`A`$ and $`b`$ in (15) (and also in (19)) have already been transformed into the forms defined by (14). In this section, we study the Brunovsky forms of the continuous system (15) under transformations (17), in a constructive manner. 
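The linear reduction reviewed in Section 2 can be sketched numerically as follows (numpy assumed; the function name is illustrative). The stacked-row matrix built from $`d`$ plays the role of $`T^{-1}`$, and the feedback gains cancel the last companion row:

```python
import numpy as np

def linear_brunovsky(A, b):
    """Two-step reduction of a controllable pair (A, b) to Brunovsky form:
    (i) coordinates x with x = T^{-1} xi, T^{-1} stacked from d, dA, ..., dA^{n-1},
        bring the system into controller (companion) form;
    (ii) the feedback mu = u + x^T v cancels the last companion row."""
    n = A.shape[0]
    # controllability matrix [A^{n-1} b, ..., A b, b] as in (20)
    C = np.hstack([np.linalg.matrix_power(A, n - 1 - k) @ b for k in range(n)])
    assert np.linalg.matrix_rank(C) == n, "(A, b) is not controllable"
    d = np.linalg.inv(C)[0]                        # first row of C^{-1}
    Tinv = np.vstack([d @ np.linalg.matrix_power(A, k) for k in range(n)])
    Abar = Tinv @ A @ np.linalg.inv(Tinv)          # controller form
    bbar = Tinv @ b                                # = e_n
    v = -Abar[-1].copy()                           # feedback gains
    A_brun = Abar + np.outer(bbar.ravel(), v)      # Abar + bbar v^T: shift matrix
    return Tinv, v, A_brun, bbar
```

For example, for $`A=\left[\begin{array}{cc}1& 2\\ 3& 4\end{array}\right]`$ and $`b=(1,0)^T`$, the routine returns the nilpotent shift matrix and $`\overline{b}=e_2`$ of (55).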
First, we find the relationships between the coefficients of (15), the coefficients of its quadratically equivalent systems, and the corresponding transformation matrices. Second, from these relationships, we derive a mapping from the coefficients of (15) and those of the equivalent systems to the transformation matrix $`P_1`$; this mapping is a necessary condition that all the equivalent systems should satisfy. Third, we show how to obtain the two types of Brunovsky forms as well as the corresponding transformations. ### 3.1 Quadratically equivalent system of (15) ###### Definition 1. Assuming $`A\in \mathbb{R}^{n\times n}`$ is in the linear Brunovsky form given by (14), we define a linear operator $`𝐋:\mathbb{R}^{n\times n}\to \mathbb{R}^{n\times n}`$ by $`\begin{array}{ccc}𝐋^0P\hfill & =& P,\hfill \\ 𝐋P\hfill & =& A^TP+PA,\hfill \\ 𝐋^{i+1}P\hfill & =& 𝐋𝐋^iP,i=0,1,\dots .\hfill \end{array}`$ (59) The linear operator $`𝐋`$ has the following properties: 1. $`𝐋^iP=0`$ when $`i\ge 2n-1`$. 2. The nullity of $`𝐋`$ is $`n`$; if $`P\in \mathrm{ker}(𝐋)`$, i.e., $`𝐋P=0`$, then $`P`$ can be written as $`P_{ij}=\{\begin{array}{cc}0,\hfill & i+j\le n,\hfill \\ (-1)^ip_{i+j-n},\hfill & \text{otherwise,}\hfill \end{array}`$ in which $`p_1,\dots ,p_n`$ are independent. 3. $`𝐋`$ is not invertible. 4. If $`P`$ is symmetric, $`𝐋P`$ is symmetric. ###### Theorem 2. 
The continuous quadratic system (15) is equivalent, in the sense of , under the quadratic transformations given by (17), to a quadratic system whose $`i`$th ($`i=1,\dots ,n`$) equation is $`\dot{x}_i=x_{i+1}+b_i\nu +x^T\overline{F}_ix+\overline{G}_ix\nu +O(x,\nu )^3,`$ (61) where $`x_{n+1}\equiv 0`$ is a dummy state variable, $`\overline{F}_1,\dots ,\overline{F}_n`$ are symmetric, $`\overline{G}_i`$ is the $`i`$th row of the matrix $`\overline{G}`$, and the coefficients $`\overline{F}_i`$ ($`i=1,\dots ,n`$) and $`\overline{G}`$ are given by $`\overline{F}_i=F_i+P_{i+1}-𝐋P_i-b_iQ,`$ (62) $`\overline{G}_i=G_i-2b^TP_i-b_ir,`$ (63) where $`P_{n+1}\in \mathbb{R}^{n\times n}`$ is a zero dummy transformation matrix, and $`b_i`$ is the $`i`$th element of $`b`$ defined in (14). Proof. Plugging the transformations defined in (17) into (15), we obtain $`\dot{x}+\frac{d}{dt}𝔓^{[2]}(x)=Ax+b\nu +𝔉^{[2]}(x)+Gx\nu +A𝔓^{[2]}(x)-bx^TQx-brx\nu +O(x,\nu )^3,`$ of which the $`i`$th ($`i=1,\dots ,n`$) differential equation can be written as $`\dot{x}_i+\frac{d}{dt}(x^TP_ix)=x_{i+1}+b_i\nu +x^TF_ix+G_ix\nu +x^TP_{i+1}x-b_ix^TQx-b_irx\nu +O(x,\nu )^3.`$ After plugging $`\dot{x}=Ax+b\nu +O(x,\nu )^2`$ into the term $`\frac{d}{dt}(x^TP_ix)`$ and collecting all terms of order higher than two, we obtain the equivalent system (61) with the coefficients given in (62) and (63). ∎ Equations (62) and (63) present the relationships between the coefficients of (15), those of its quadratically equivalent system (61), and the corresponding transformation matrices. We further simplify these relationships in the following subsections. ### 3.2 A necessary condition for quadratically equivalent systems ###### Lemma 3. 
The mapping from the coefficients $`G`$ and $`\overline{G}`$ and from $`r`$ to the transformation matrices $`P_i`$ ($`i=1,\dots ,n`$) is given by $`\left[\begin{array}{c}[P_1]_{(n)}\\ \\ [P_2]_{(n)}\\ \\ \mathrm{\vdots }\\ \\ [P_n]_{(n)}\end{array}\right]={\frac{1}{2}}G-{\frac{1}{2}}\overline{G}-{\frac{1}{2}}br,`$ (68) where the operator $`[\cdot ]_{(n)}`$ takes the $`n`$th row of its object. Proof. Since $`b_i=0`$ ($`i=1,\dots ,n-1`$) and $`b_n=1`$, we obtain, from (63), $`[P_i]_{(n)}=b^TP_i={\frac{1}{2}}G_i-{\frac{1}{2}}\overline{G}_i-{\frac{1}{2}}b_ir.`$ Thus we have (68). ∎ ###### Lemma 4. The mapping from the coefficients $`F_i`$ and $`\overline{F}_i`$ ($`i=1,\dots ,n`$) to the transformation matrices $`P_i`$ ($`i=1,\dots ,n`$) and $`Q`$ is given by $`P_{i+1}=𝐋^iP_1-{\sum _{j=0}^{i-1}}𝐋^jF_{i-j}+{\sum _{j=0}^{i-1}}𝐋^j\overline{F}_{i-j},i=1,\dots ,n-1,`$ (69) $`Q={\sum _{j=0}^{n-1}}𝐋^j(F_{n-j}-\overline{F}_{n-j})-𝐋^nP_1.`$ (70) Proof. When $`i=1,\dots ,n-1`$, from (62), we have $`P_{i+1}=𝐋P_i-F_i+\overline{F}_i.`$ After iterating this equation with respect to $`i`$, we obtain (69). When $`i=n`$, (62) can be written as $`Q=F_n-\overline{F}_n-𝐋P_n.`$ Since $`P_n`$ is given by (69), we then have (70). ∎ ###### Definition 5. With $`A\in \mathbb{R}^{n\times n}`$ and $`𝐋`$ defined in (14) and Definition 1 respectively, we define a series of linear operators $`\text{X}_i:\mathbb{R}^{n\times n}\to \mathbb{R}^{n\times n}`$ ($`i=0,1,\dots `$) by $`\text{X}_0P=\left[\begin{array}{c}[𝐋^0P]_{(n)}\\ \\ [𝐋^1P]_{(n)}\\ \\ \mathrm{\vdots }\\ \\ [𝐋^{n-1}P]_{(n)}\end{array}\right],\text{X}_iP=(A^T)^i\text{X}_0P.`$ (75) The linear operators $`\text{X}_i`$ ($`i=0,1,\dots `$) have the following properties: 1. 
$`\text{X}_0`$ transforms a diagonal matrix into a skew-diagonal matrix of the following structure. For a $`k`$th ($`k=-n+1,\dots ,n-1`$) diagonal matrix $`P`$, we denote the $`k`$th diagonal elements by $`p_l=P_{(|k|-k)/2+l,(|k|+k)/2+l}`$ ($`l=1,\dots ,n-|k|`$). Then $`\text{X}_0P`$ becomes a skew-diagonal matrix with the following properties: $`(\text{X}_0P)_{n-(|k|-k)/2-l+1,(|k|+k)/2+l}=p_l`$; all entries $`(\text{X}_0P)_{i,j}`$ with $`i+j=n+k+1+2m`$, where $`m\ge 1`$ and $`n+k+1+2m\le 2n`$, <sup>3</sup><sup>3</sup>3Note that, for $`n=2`$, there is no solution for $`m`$. In this case, all entries are zeros except $`(\text{X}_0P)_{ij}`$ for $`i+j=n+k+1`$. are determined by the $`p_l`$; and the other entries are zeros. 2. From the preceding property, $`\text{X}_0`$ transforms a lower-triangular matrix into a full matrix and transforms a strictly upper-triangular matrix into a lower skew-triangular matrix $`\mathrm{\Delta }`$, defined by $`\mathrm{\Delta }_{ij}=0,\text{ when }i+j\le n+1.`$ (76) 3. From Properties 1 and 2, the nullity of $`\text{X}_0`$ is 0. Therefore, $`\text{X}_0`$ is invertible. 4. From the definition of $`𝐋`$, $`\text{X}_i=0`$ when $`i\ge n`$. 5. From the definition of $`\text{X}_i`$ ($`i=1,\dots ,n-1`$), $`\text{X}_iP`$ can be obtained by shifting $`\text{X}_0P`$ down by $`i`$ rows and replacing the first $`i`$ rows by zeros. From Property 1, therefore, $`\text{X}_i`$ transforms a diagonal matrix $`P`$ whose main diagonal elements are $`p_1,\dots ,p_n`$ into a lower skew-triangular matrix of the following structure: $`(\text{X}_iP)_{n-l+i+1,l}=p_l`$ for $`l=i+1,\dots ,n`$, and all other elements, except those $`(\text{X}_iP)_{i^{},j^{}}`$ with $`i^{}+j^{}=n+i+1+2m`$ for integer $`m\ge 1`$ and $`n+i+1+2m\le 2n`$, are zeros. ###### Theorem 6. 
The mapping from $`F_i`$, $`G`$, $`\overline{F}_i`$, $`\overline{G}`$ ($`i=1,\dots ,n-1`$), and $`r`$ to $`P_1`$ is given by $`P_1=\text{X}_0^{-1}\left({\sum _{i=1}^{n-1}}\text{X}_iF_i+{\frac{1}{2}}G-{\sum _{i=1}^{n-1}}\text{X}_i\overline{F}_i-{\frac{1}{2}}\overline{G}-{\frac{1}{2}}br\right).`$ (77) Proof. From (69) we can find the $`n`$th row of $`P_{i+1}`$ ($`i=1,\dots ,n-1`$): $`[P_{i+1}]_{(n)}=[𝐋^iP_1]_{(n)}-{\sum _{j=0}^{i-1}}[𝐋^jF_{i-j}]_{(n)}+{\sum _{j=0}^{i-1}}[𝐋^j\overline{F}_{i-j}]_{(n)}.`$ Hence, we have $`\left[\begin{array}{c}[P_1]_{(n)}\\ \\ [P_2]_{(n)}\\ \\ \mathrm{\vdots }\\ \\ [P_n]_{(n)}\end{array}\right]=\text{X}_0P_1-{\sum _{i=1}^{n-1}}\text{X}_iF_i+{\sum _{i=1}^{n-1}}\text{X}_i\overline{F}_i.`$ (82) From (68) and (82), we obtain $`\text{X}_0P_1={\sum _{i=1}^{n-1}}\text{X}_iF_i+{\frac{1}{2}}G-{\sum _{i=1}^{n-1}}\text{X}_i\overline{F}_i-{\frac{1}{2}}\overline{G}-{\frac{1}{2}}br.`$ Multiplying both sides by the inverse of $`\text{X}_0`$, we then have (77). ∎ Note that $`P_1`$ is assumed to be symmetric. Therefore, the coefficients $`\overline{F}_i`$ ($`i=1,\dots ,n-1`$) and $`\overline{G}`$ of a quadratically equivalent system have to ensure the symmetry of the right hand side of (77). Thus (77) constitutes a necessary condition for all equivalent systems of (15), including the Brunovsky forms. 
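The machinery behind (77) is easy to check numerically. The sketch below (numpy assumed; helper names are illustrative) implements $`𝐋`$ and $`\text{X}_i`$ from Definitions 1 and 5 and lets one verify, for small $`n`$, the nilpotency of $`𝐋`$ and the invertibility of $`\text{X}_0`$ on which (77) relies:

```python
import numpy as np

def shift_matrix(n):
    """A in the linear Brunovsky form (14): ones on the superdiagonal."""
    return np.eye(n, k=1)

def L_op(P, A):
    """Continuous-case operator of Definition 1: L P = A^T P + P A."""
    return A.T @ P + P @ A

def X_op(i, P, A):
    """X_i P = (A^T)^i X_0 P, where X_0 P stacks the n-th rows of L^j P."""
    n = A.shape[0]
    rows, Q = [], P.copy()
    for _ in range(n):
        rows.append(Q[-1].copy())
        Q = L_op(Q, A)
    return np.linalg.matrix_power(A.T, i) @ np.vstack(rows)

def operator_matrix(fun, n):
    """Matrix of a linear operator on n x n matrices w.r.t. row-major vec."""
    M = np.zeros((n * n, n * n))
    for k in range(n * n):
        E = np.zeros(n * n)
        E[k] = 1.0
        M[:, k] = fun(E.reshape(n, n)).reshape(-1)
    return M
```

For instance, for $`n=3`$ the matrix of $`\text{X}_0`$ has full rank 9, so $`\text{X}_0^{-1}`$ in (77) is well defined, while applying `L_op` $`2n-1=5`$ times annihilates any $`P`$ (Property 1 of Definition 1).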
### 3.3 Computation of the Brunovsky forms and the transformation matrices In this subsection, given the coefficients $`F_i`$ ($`i=1,\dots ,n`$) and $`G`$ of (15) as well as $`r`$, we show how to choose the $`\overline{F}_i`$ ($`i=1,\dots ,n`$) and $`\overline{G}`$ of the Brunovsky forms, which, first, satisfy the necessary condition (77) and, second, have the smallest number of non-zero terms. Since $`\overline{F}_n`$ does not appear in the necessary condition (77), we simply let $`\overline{F}_n=0`$ in the Brunovsky forms. To determine the other coefficients, we decompose the terms on the right hand side of (77) related to the original system (15) and to $`r`$ as $`\text{X}_0^{-1}\left({\sum _{i=1}^{n-1}}\text{X}_iF_i+{\frac{1}{2}}G-{\frac{1}{2}}br\right)=L+D+U,`$ (83) where $`L`$, $`D`$, $`U`$ are strictly lower triangular, diagonal, and strictly upper triangular matrices respectively. In order for the right hand side of (77), i.e., $`L+D+U`$ minus $`\text{X}_0^{-1}({\sum _{i=1}^{n-1}}\text{X}_i\overline{F}_i+{\frac{1}{2}}\overline{G})`$, to be a symmetric matrix, we have, according to the properties of the decomposition, the following cases. 1. If $`L-U^T=0`$, $`L+D+U`$ is already symmetric, and we simply set $`\overline{F}_i`$ ($`i=1,\dots ,n-1`$) and $`\overline{G}`$ to be 0. In this case, the Brunovsky form of (15) is a linear system; i.e., (15) can be linearized. 2. When $`L-U^T\ne 0`$, we can set $`\text{X}_0^{-1}({\sum _{i=1}^{n-1}}\text{X}_i\overline{F}_i+{\frac{1}{2}}\overline{G})`$ to the strictly lower-triangular matrix $`L-U^T`$. In this case, $`P_1=U^T+D+U`$. According to the properties of $`\text{X}_0`$, $`{\sum _{i=1}^{n-1}}\text{X}_i\overline{F}_i+{\frac{1}{2}}\overline{G}`$ is then a full matrix. 3. 
When $`L-U^T\ne 0`$, we can instead set $`\text{X}_0^{-1}({\sum _{i=1}^{n-1}}\text{X}_i\overline{F}_i+{\frac{1}{2}}\overline{G})`$ to the strictly upper-triangular matrix $`U-L^T`$; then $`{\sum _{i=1}^{n-1}}\text{X}_i\overline{F}_i+{\frac{1}{2}}\overline{G}`$ is a lower skew-triangular matrix of the form (76), which has $`n(n-1)/2`$ non-zero terms, as many as $`U-L^T`$. In this case, we have $`{\sum _{i=1}^{n-1}}\text{X}_i\overline{F}_i+{\frac{1}{2}}\overline{G}=\text{X}_0(U-L^T)\triangleq \mathrm{\Delta }_1,`$ (84) and the first transformation matrix is given by $`P_1=L+D+L^T.`$ (85) 4. In addition to the aforementioned cases, $`\text{X}_0^{-1}({\sum _{i=1}^{n-1}}\text{X}_i\overline{F}_i+{\frac{1}{2}}\overline{G})`$ can also be either $`L-U^T`$ or $`U-L^T`$ plus an arbitrary symmetric matrix. Comparing Cases 2, 3, and 4, we can see that the solutions in Cases 2 and 4 have more non-linear terms than those in Case 3. Therefore, in the Brunovsky forms, we set $`\text{X}_0^{-1}({\sum _{i=1}^{n-1}}\text{X}_i\overline{F}_i+{\frac{1}{2}}\overline{G})`$ to be $`U-L^T`$, and from (84), we can obtain two types of Brunovsky forms as follows. First, by setting $`\overline{G}=0`$, we have from (84) that $`{\sum _{i=1}^{n-1}}\text{X}_i\overline{F}_i=\mathrm{\Delta }_1`$, which is a lower skew-triangular matrix. To obtain the smallest number, i.e., $`n(n-1)/2`$, of non-zero items in the Brunovsky forms, we can select each $`\overline{F}_i`$ to be a diagonal matrix with main diagonal elements $`(0,\dots ,0,\overline{f}_{i,i+1},\dots ,\overline{f}_{i,n})`$, which can be uniquely computed as follows. 
From the properties of $`\text{X}_i`$ ($`i=1,\dots ,n-1`$), we first have $`\overline{f}_{1,j}=(\text{X}_1\overline{F}_1)_{n-j+2,j}=(\mathrm{\Delta }_1)_{n-j+2,j},j=2,\dots ,n,`$ (86) and $`\mathrm{\Delta }_2=\mathrm{\Delta }_1-\text{X}_1\overline{F}_1.`$ (87) Then, we can compute $`\overline{f}_{2,j}`$ for $`j=3,\dots ,n`$ from $`\mathrm{\Delta }_2`$, and $`\mathrm{\Delta }_3=\mathrm{\Delta }_2-\text{X}_2\overline{F}_2`$, in the same fashion. By repeating this process, we can obtain all non-zero elements of $`\overline{F}_i`$ for $`i=1,\dots ,n-1`$. Since all these matrices are uniquely determined by $`\mathrm{\Delta }_1`$, the original system is uniquely equivalent to the following system, $`\dot{x}_i=x_{i+1}+b_i\nu +{\sum _{j=i+1}^{n}}\overline{f}_{ij}x_j^2+O(x,\nu )^3,i=1,\dots ,n,`$ (88) which is the type I, complete-quadratic Brunovsky form of . Second, by setting $`\overline{F}_i=0`$ ($`i=1,\dots ,n-1`$), we have $`\overline{G}=2\mathrm{\Delta }_1`$ and the corresponding equivalent system, $`\dot{x}_i=x_{i+1}+b_i\nu +{\sum _{j=n-i+2}^{n}}\overline{G}_{ij}x_j\nu +O(x,\nu )^3,i=1,\dots ,n,`$ (89) which is the type II, bilinear Brunovsky form of . We can see that $`\overline{G}`$ is also uniquely determined by the original system. Once we have the Brunovsky forms of a continuous quadratic system, we can solve for the corresponding transformation matrices $`P_i`$ ($`i=1,\dots ,n`$) and $`Q`$ from (85), (69), and (70). These transformation matrices are also unique for each Brunovsky form. ### 3.4 Discussions In the preceding subsection, we finished computing the two types of Brunovsky forms and the corresponding transformation matrices of a linearly controllable quadratic system, (15). Besides, we showed that the Brunovsky form of each type and the corresponding transformation matrices are unique. However, this uniqueness depends on $`r`$. 
That is, for different values of $`r`$, a system can have multiple solutions of type I or type II Brunovsky forms. A straightforward strategy to prevent this multiplicity is to set $`r=0`$, which gives the simplified version of the transformations defined by (18); i.e., with (18), a quadratic system has unique type I and type II Brunovsky forms. Further, one can follow the arguments of Kang and Krener with $`r=0`$ <sup>4</sup><sup>4</sup>4Due to the difference in notations, $`r=0`$ is equivalent to saying $`\beta ^{[1]}=0`$ in . and prove that (18) still defines quadratic equivalence in the same sense. ## 4 Computation of discrete quadratic Brunovsky forms In this section, we apply the method proposed in the preceding section to solve for the Brunovsky form of a discrete quadratic control system (19), with the simplified transformations defined in (18). We carry out our study in the same three-step, constructive procedure as for the continuous system. Compared to the continuous system, the discrete system has one more term, and the relationships between coefficients and transformations and the resulting Brunovsky forms are fundamentally different from those of continuous systems. ### 4.1 Quadratically equivalent systems of (19) ###### Definition 7. Assuming $`A\in \mathbb{R}^{n\times n}`$ is in the linear Brunovsky form defined in (14), we define a linear operator $`𝐋:\mathbb{R}^{n\times n}\to \mathbb{R}^{n\times n}`$ by $`\begin{array}{ccc}𝐋^0P\hfill & =& P,\hfill \\ 𝐋P\hfill & =& A^TPA,\hfill \\ 𝐋^{i+1}P\hfill & =& 𝐋𝐋^iP,i=0,1,\dots .\hfill \end{array}`$ (92) The linear operator $`𝐋`$ has the following properties: 1. $`𝐋^iP=0`$ when $`i\ge n`$. 2. The nullity of $`𝐋`$ is $`2n-1`$; if $`P\in \mathrm{ker}(𝐋)`$, then $`P_{ij}=0`$ when $`(n-i)(n-j)>0`$, and the elements in the $`n`$th row and $`n`$th column can be arbitrarily selected. 3. $`𝐋`$ is not invertible. 4. If $`P`$ is symmetric, $`𝐋P`$ is symmetric; but the converse may not be true. 
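The discrete operator of Definition 7 can be checked in the same numerical fashion as its continuous counterpart (numpy assumed; names illustrative): $`𝐋P=A^TPA`$ shifts $`P`$ one step down and to the right, so $`𝐋^n=0`$ and the kernel consists exactly of the matrices supported on the last row and last column:

```python
import numpy as np

def shift_matrix(n):
    """A in the linear Brunovsky form (14)."""
    return np.eye(n, k=1)

def L_discrete(P, A):
    """Discrete-case operator of Definition 7: L P = A^T P A."""
    return A.T @ P @ A
```

A quick experiment for $`n=3`$ confirms Properties 1 and 2: three applications annihilate any $`P`$, a matrix supported on the $`n`$th row and column lies in the kernel, and $`E_{11}`$ does not.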
Note that the linear operator $`𝐋`$ for discrete systems is different from that for continuous systems. This fundamental difference leads to the differences in the relationships between coefficients and transformations and in the Brunovsky forms. ###### Theorem 8. The discrete system (19) is quadratically equivalent, in the sense of , under the quadratic transformations in (18), to a system whose $`i`$th ($`i=1,\dots ,n`$) equation is $`x_i(t+1)=x_{i+1}(t)+b_i\nu (t)+x^T(t)\overline{F}_ix(t)+\overline{G}_ix(t)\nu (t)+O(x,\nu )^3,`$ (93) where $`x_{n+1}(t)\equiv 0`$ is a dummy state variable, $`\overline{F}_1,\dots ,\overline{F}_n`$ are symmetric, $`\overline{G}_i`$ is the $`i`$th row of the matrix $`\overline{G}`$, and the coefficients $`\overline{F}_i`$ and $`h_i`$ ($`i=1,\dots ,n`$) and $`\overline{G}`$ satisfy $`\overline{F}_i=F_i+P_{i+1}-𝐋P_i-b_iQ,`$ (94) $`\overline{G}_i=G_i-2b^TP_iA,`$ (95) $`h_i=(P_i)_{nn},`$ (96) where $`P_{n+1}`$ is a zero dummy matrix, $`b_i`$ is the $`i`$th element of $`b`$, and $`(P_i)_{nn}`$ is the $`(n,n)`$ entry of the matrix $`P_i`$. Proof. The proof is similar to that of Theorem 2 and is, therefore, omitted. ∎ Note that (95) is different from (63), and that we have an additional equation, (96). ### 4.2 A necessary condition for the equivalent systems ###### Lemma 9. The mapping from the coefficients $`G`$ and $`\overline{G}`$ to the transformation matrices $`P_i`$ ($`i=1,\dots ,n`$) is given by $`\left[\begin{array}{c}[P_1]_{(n)}\\ \\ [P_2]_{(n)}\\ \\ \mathrm{\vdots }\\ \\ [P_n]_{(n)}\end{array}\right]A={\frac{1}{2}}G-{\frac{1}{2}}\overline{G}.`$ (101) Proof. The proof is similar to that of Lemma 3 and is, therefore, omitted. ∎ ###### Lemma 10. 
The mapping from the coefficients $`F_i`$, $`\overline{F}_i`$ ($`i=1,\dots ,n`$) and $`h`$ to the transformation matrices $`P_i`$ ($`i=1,\dots ,n`$) and $`Q`$ is given by $`P_{i+1}=𝐋^iP_1-{\sum _{j=0}^{i-1}}𝐋^jF_{i-j}+{\sum _{j=0}^{i-1}}𝐋^j\overline{F}_{i-j},i=1,\dots ,n-1,`$ (102) $`Q={\sum _{j=0}^{n-1}}𝐋^j(F_{n-j}-\overline{F}_{n-j}),`$ (103) and $`\begin{array}{ccc}(P_1)_{n-i,n-i}\hfill & =& {\sum _{j=0}^{i-1}}(𝐋^jF_{i-j})_{nn}+h_{i+1},i=1,\dots ,n-1,\hfill \\ (P_1)_{nn}\hfill & =& h_1.\hfill \end{array}`$ (106) Proof. The derivation of (102) and (103) from (94) is similar to that in the proof of Lemma 4 and is, therefore, omitted. Here we only show how (106) can be derived. From (102), we have ($`i=1,\dots ,n-1`$) $`(P_{i+1})_{nn}=(𝐋^iP_1)_{nn}-{\sum _{j=0}^{i-1}}(𝐋^jF_{i-j})_{nn}=(P_1)_{n-i,n-i}-{\sum _{j=0}^{i-1}}(𝐋^jF_{i-j})_{nn}.`$ Comparing this with (96), we then obtain (106), from which we can see that the diagonal entries of $`P_1`$ are uniquely determined by the coefficients $`F_i`$ ($`i=1,\dots ,n`$) and $`h`$. ∎ ###### Definition 11. With $`A\in \mathbb{R}^{n\times n}`$ and $`𝐋`$ given by (14) and Definition 7 respectively, we define a series of linear operators $`\text{X}_i:\mathbb{R}^{n\times n}\to \mathbb{R}^{n\times n}`$ ($`i=0,1,\dots `$) by $`\begin{array}{ccc}\text{X}_0P\hfill & =& \left[\begin{array}{c}[𝐋^0P]_{(n)}\\ \\ [𝐋^1P]_{(n)}\\ \\ \mathrm{\vdots }\\ \\ [𝐋^{n-1}P]_{(n)}\end{array}\right],\hfill \\ \text{X}_iP\hfill & =& (A^T)^i\text{X}_0P.\hfill \end{array}`$ (113) The linear operators $`\text{X}_i`$ ($`i=0,1,\dots `$) have the following properties: 1. $`\text{X}_i=0`$ when $`i\ge n`$. 2. 
The rank of $`\text{X}_0`$ is $`n(n+1)/2`$; denoting the $`(i,j)`$th entry of $`P`$ by $`P_{ij}`$ ($`i,j=1,\dots ,n`$), the $`(i,j)`$th entry of $`\text{X}_0P`$ can be written as $`(\text{X}_0P)_{ij}=\{\begin{array}{cc}P_{n-i+1,j-i+1},\hfill & j\ge i;\hfill \\ 0,\hfill & j<i.\hfill \end{array}`$ (116) 3. From Property 2, $`\text{X}_0`$ is not invertible. Note that its continuous counterpart in Definition 5 is invertible. ###### Theorem 12. The mapping from the coefficients $`F_i`$, $`G`$, $`\overline{F}_i`$, and $`\overline{G}`$ ($`i=1,\dots ,n`$) to $`P_1`$ is given by $`\text{X}_0P_1A={\sum _{i=1}^{n-1}}\text{X}_iF_iA+{\frac{1}{2}}G-{\sum _{i=1}^{n-1}}\text{X}_i\overline{F}_iA-{\frac{1}{2}}\overline{G}.`$ (117) Proof. The proof is similar to that of Theorem 6 and is omitted. ∎ According to (116), we can find the $`(i,j)`$th entry of $`\text{X}_0P_1A`$: $`(\text{X}_0P_1A)_{ij}=\{\begin{array}{cc}(P_1)_{n-i+1,j-i},\hfill & j>i;\hfill \\ 0,\hfill & j\le i.\hfill \end{array}`$ (120) I.e., $`\text{X}_0P_1A`$, the left hand side of (117), is a strictly upper-triangular matrix, which contains all the non-diagonal entries of $`P_1`$. Therefore, the $`\overline{F}_i`$ ($`i=1,\dots ,n`$) and $`\overline{G}`$ of a quadratically equivalent system of (19) must be such that the right hand side of (117) is also strictly upper triangular. Thus, (117) is a necessary condition for all equivalent systems of (19), including the Brunovsky form. ### 4.3 Computation of the Brunovsky form and the transformation matrices ###### Theorem 13. In the Brunovsky form of (19), the coefficients are $`\begin{array}{ccc}\overline{F}_i\hfill & =& 0,i=1,\dots ,n,\hfill \\ \overline{G}\hfill & =& 2(L+D),\hfill \end{array}`$ (123) where $`L`$ and $`D`$ are the strictly lower triangular part and the diagonal part of $`{\sum _{i=1}^{n-1}}\text{X}_iF_iA+{\frac{1}{2}}G`$. Proof. 
The first two terms on the right hand side of (117) can be decomposed as $`{\sum _{i=1}^{n-1}}\text{X}_iF_iA+{\frac{1}{2}}G=L+D+U,`$ (124) where $`L`$, $`D`$, $`U`$ are strictly lower triangular, diagonal, and strictly upper triangular matrices respectively. To ensure that the right hand side of (117) is a strictly upper triangular matrix, we set $`{\sum _{i=1}^{n-1}}\text{X}_i\overline{F}_iA+{\frac{1}{2}}\overline{G}=L+D,`$ (125) in which there are at most $`n(n+1)/2`$ non-zero elements, and we hence have $`\text{X}_0P_1A=U.`$ (126) Due to the properties of $`\text{X}_i`$, no simple $`\overline{F}_i`$'s with $`n(n+1)/2`$ non-zero elements can satisfy (125). We thus simply set the coefficients as in (123), and obtain the only type of Brunovsky form of (19): $`x_i(t+1)=x_{i+1}(t)+b_i\nu (t)+{\sum _{j=1}^{i}}\overline{G}_{ij}x_j(t)\nu (t)+O(x,\nu )^3,i=1,\dots ,n.`$ (127) ∎ The Brunovsky form of discrete systems corresponds to the type II form of continuous systems and has $`n(n+1)/2`$ bilinear terms. After finding the Brunovsky form of a discrete quadratic system, we can solve for the corresponding transformation matrices $`P_i`$ ($`i=1,\dots ,n`$) and $`Q`$ as follows. From (120) and (126), we can find the non-diagonal elements of $`P_1`$, and the diagonal elements from (106). Then $`P_i`$ ($`i=2,\dots ,n`$) and $`Q`$ can be calculated from (102) and (103). ## 5 Algorithms and examples In this section, we summarize the algorithms for computing the Brunovsky forms and the corresponding transformations of both continuous and discrete linearly controllable quadratic systems with a single input. To prevent multiplicity in the Brunovsky forms, we use the simplified version of the transformations, (18), for both systems. Thus, $`r\equiv 0`$ in all formulas of Section 3. 
Following each algorithm, an example is given. ###### Algorithm 14. Computation of continuous quadratic Brunovsky forms. * Compute $`\text{X}_0^{-1}\left({\sum _{i=1}^{n-1}}\text{X}_iF_i+{\frac{1}{2}}G\right)`$ and decompose it into $`L+D+U`$. * Compute $`P_1=L+D+L^T`$ and $`\mathrm{\Delta }_1=\text{X}_0(U-L^T)`$. * For the type I Brunovsky form, set $`\overline{G}=0`$ and solve the equation $`{\sum _{i=1}^{n-1}}\text{X}_i\overline{F}_i=\mathrm{\Delta }_1`$ for $`\overline{F}_i`$ ($`i=1,\dots ,n-1`$) as in (86) and (87). Go to Step 5. * For the type II Brunovsky form, set $`\overline{F}_i=0`$ ($`i=1,\dots ,n`$) and $`\overline{G}=2\mathrm{\Delta }_1`$. Go to Step 5. * Compute $`P_i`$ ($`i=2,\dots ,n`$) and $`Q`$ from (69) and (70). ###### Example 1. Find the type I Brunovsky form of a continuous quadratic control system that is already in type II Brunovsky form, $`\begin{array}{ccc}\dot{\xi }_1\hfill & =& \xi _2+O(\xi ,\mu )^3,\hfill \\ \dot{\xi }_2\hfill & =& \mu +\xi _2\mu +O(\xi ,\mu )^3.\hfill \end{array}`$ (130) Solution. In this system, $`n=2`$, and the coefficients are $`F_1=F_2=0,`$ $`G=\left[\begin{array}{cc}0& 0\\ 0& 1\end{array}\right].`$ Following Algorithm 14, we find the type I Brunovsky form, $`\begin{array}{ccc}\dot{x}_1\hfill & =& x_2+\frac{1}{2}x_2^2+O(x,\nu )^3,\hfill \\ \dot{x}_2\hfill & =& \nu +O(x,\nu )^3,\hfill \end{array}`$ (134) and the corresponding transformations, $`\begin{array}{ccc}\xi _1\hfill & =& x_1,\hfill \\ \xi _2\hfill & =& x_2+\frac{1}{2}x_2^2,\hfill \\ \mu \hfill & =& \nu .\hfill \end{array}`$ (138) Conversely, given (134), we find the transformations for obtaining (130), $`x_1=\xi _1,`$ $`x_2=\xi _2-{\frac{1}{2}}\xi _2^2,`$ $`\nu =\mu ,`$ which are the inverses of (138) up to second order. ∎ ###### Algorithm 15. Computation of the discrete quadratic Brunovsky form. * Compute $`{\sum _{i=1}^{n-1}}\text{X}_iF_iA+{\frac{1}{2}}G`$ and decompose it into $`L+D+U`$. 
* Compute $`\overline{G}=2(L+D)`$. * Compute the non-diagonal elements of $`P_1`$ from (120) and (126). * Compute the diagonal elements of $`P_1`$ from (106). * Compute $`P_i`$ ($`i=2,\dots ,n`$) and $`Q`$ from (102) and (103). ###### Example 2. Find the Brunovsky form of the following discrete system: $`\xi _1(t+1)=\xi _2(t)+\xi _1^2(t)+\xi _2^2(t)+\mu ^2(t)+O(\xi ,\mu )^3,`$ $`\xi _2(t+1)=\mu (t)+\mu ^2(t)+O(\xi ,\mu )^3.`$ Solution. In this system, $`n=2`$ and the coefficients are $`F_1=\left[\begin{array}{cc}1& 0\\ 0& 1\end{array}\right],`$ $`F_2=0,`$ $`G=0,`$ $`h=\left[\begin{array}{c}1\\ 1\end{array}\right].`$ Following Algorithm 15, we find the Brunovsky form, $`x_1(t+1)=x_2(t)+O(x,\nu )^3,`$ $`x_2(t+1)=\nu (t)+O(x,\nu )^3,`$ and the corresponding transformations $`\xi _1(t)=x_1(t)+2x_1^2(t)+x_2^2(t),`$ $`\xi _2(t)=x_2(t)-x_1^2(t)+x_2^2(t),`$ $`\mu (t)=\nu (t)-x_2^2(t).`$ Thus, this system is linearized. ∎ ## 6 Conclusions In this paper, we proposed a method for computing the Brunovsky forms of both continuous and discrete quadratic control systems that are linearly controllable with a single input. Our approach is constructive in nature and computes the Brunovsky forms explicitly in a three-step manner, with the introduction of the linear operators $`𝐋`$ and $`\text{X}_i`$ ($`i=0,\dots ,n-1`$): (i) we first derived the relationships between the coefficients of the original systems, those of the quadratically equivalent systems, and the corresponding transformation matrices; (ii) after simplifying these relationships, we obtained a necessary condition for all equivalent systems; (iii) from the necessary condition, we finally computed the Brunovsky forms and the corresponding transformations. However, the linear operators $`𝐋`$ and $`\text{X}_i`$ and, therefore, the Brunovsky forms are fundamentally different for continuous and discrete systems. 
For a continuous quadratic system, which has at most $`n(n+1)^2/2`$ nonlinear terms, there are two types of Brunovsky forms with $`n(n-1)/2`$ nonlinear terms each; for a discrete quadratic system, there is only one type, and the numbers of nonlinear terms of a general system and its Brunovsky form are $`n(n+1)^2/2+n`$ and $`n(n+1)/2`$, respectively. For continuous systems, we used the same transformations as in and found that the inclusion of the transformation vector $`r`$ yields multiple solutions for a given type of Brunovsky form. Therefore, we suggested setting $`r=0`$ in order to maintain uniqueness. That is, under the transformations defined by (18), for both continuous and discrete systems the Brunovsky forms and the corresponding quadratic transformations always exist and are uniquely determined by the original systems. In this sense, our study can be viewed as a constructive proof of the existence and uniqueness of Brunovsky forms. Finally, the method proposed in this paper could be extended to study quadratic control systems that are not linearly controllable or that have multiple inputs. In addition, the method could be useful for applications involving the analysis and control of quadratic systems.
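The $`L+D+U`$ splitting that begins Algorithms 14 and 15 is elementary to implement. The sketch below (Python with NumPy; an illustration, not code from the paper) shows the split into strictly lower, diagonal, and strictly upper parts, together with the symmetrization $`P_1=L+D+L^T`$ used in Step 2:

```python
import numpy as np

def ldu_split(M):
    """Split a square matrix into strictly lower (L), diagonal (D),
    and strictly upper (U) parts, so that M = L + D + U."""
    L = np.tril(M, k=-1)
    D = np.diag(np.diag(M))
    U = np.triu(M, k=1)
    return L, D, U

# Hypothetical 2x2 example, matching the n = 2 systems above.
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
L, D, U = ldu_split(M)
assert np.allclose(L + D + U, M)

# Step 2 of Algorithm 14: P1 = L + D + L^T is symmetric by construction;
# the remainder U - L^T is what enters Delta_1.
P1 = L + D + L.T
assert np.allclose(P1, P1.T)
```

The symmetric part is absorbed into $`P_1`$, so only the antisymmetric remainder has to be matched by the $`\overline{F}_i`$ (type I) or by $`\overline{G}`$ (type II).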
## 1 Introduction High-power proton or ion accelerators delivering hundreds of MW in continuous mode are needed for Accelerator Driven Transmutation Technologies (ADTT) in various industrial applications of nuclear power production. The same cyclotron accelerator system, however, can also be used to produce ultra-short beam pulses for scientific investigations, in particular for powerful neutron sources. In this case the generated pulsed beams may have peak powers in the range of 10-100 GW. If, in addition, the duration of the ejected pulses is less than 1.0 ms at energies of 1.0 GeV, it also becomes possible to dispense with a last-stage storage ring for pulse compression when producing intense neutron bursts . This work was performed to optimize plans for the construction of a proton accelerator system with 1.0 GeV energy and tens of mA of current, to be realized in the existing shielded structure of the Yerevan 6-GeV Electron Synchrotron. The primary approaches, based on the use of large-radius isochronous Coaxial Ring Cyclotrons (CRC) and Asynchronous Cyclotrons (AC), were described in references \[3-7\]. The key problems related to the construction of powerful proton accelerators are the following: \- to greatly decrease the beam losses; \- to increase the efficiency and reliability of the accelerator while reducing operating costs; \- to decrease the development and construction costs. The most difficult problem is to obtain high values of average beam current with acceptably low losses of the accelerated beam. ## 2 Concept of multistage coaxial cyclotrons The concept of the Multistage Coaxial Cyclotron (MCC) provides an opportunity for flexible management of separated cyclotron orbits in the energy range of 120-2000 MeV, and also for obtaining hundreds of mA of average beam current in continuous operation.
These advantages are based on demonstrated accelerator physics and technologies that have been implemented predominantly in powerful synchrotrons and linear accelerators (linacs). We first note that HERA has demonstrated that injection and acceleration of proton beams in synchrotrons (without storage) proceed well when the number of particles in each bunch is limited to $`(1.2÷1.4)\times 10^{11}`$, for acceleration frequencies of 10-100 MHz and energies from tens of MeV to tens of GeV. Second, the same publications show that, owing to high-precision strong focusing and high vacuum, beam losses during many years of HERA operation have not exceeded the tolerable value of 0.01 nA/m for energies above 100 MeV. Third, in modern high-power linacs , the construction, transmission and handling of megawatt-level RF power from single sources have been demonstrated at frequencies of 350-700 MHz. In principle, similar approaches can also be developed for large acceleration cavities at low frequencies of 50-100 MHz. It also appears that in a large-radius MCC it would be possible not only to create conditions like those existing in proton synchrotrons for strong longitudinal and transverse beam focusing, but to attain much better conditions, owing to the more flexible and relatively independent management of the separate orbit parameters. Thus, if we set the accelerating field of the cyclotrons to a frequency in the range of 50-100 MHz and, as at HERA, limit the number of particles per bunch to $`(1.2÷1.4)\times 10^{11}`$, then in continuous operation an average accelerated current of up to one Ampere $`[(1.2÷1.4)\times 10^{11}\times (50÷100)\times 10^6]`$ should be feasible at reasonably low beam interception losses. In order to realize such an MCC, the structure, the number of stages and their parameters will be chosen according to the following considerations: 1.
In order to provide flexible longitudinal beam focusing, the equilibrium phase in the accelerating fields will be independently tunable over a wide range, up to tens of degrees, in one or two pairs of turn sectors of each separate orbit turn, while maintaining isochronism. 2. To create strong, high-quality transverse focusing of the beam, enough space is provided between the accelerating cavities to install a warm magnet system with separated functions and with additional sextupole lenses to correct the chromaticity, and also to allow changing the mechanical coordinates of the magnet elements within a range of several millimeters and the length of the bending magnets within a range of several centimeters. 3. In order to increase the efficiency and simplify the injection and ejection of beams, the parameters of the MCC stages are tuned so that the stages can be placed coaxially or concentrically, each inside the other. 4. In order to widen the possibilities for tuning the beam parameters, the turn separation is chosen to be more than 20 cm and the chamber aperture not less than $`3\times 5cm`$. 5. To compensate the betatron-oscillation frequency shift caused by the beam space charge, the Q value can be changed slowly from one magnet period to the next by a computer code, depending on the measured beam parameters. 6. The cavities can be either warm or superconducting (SC). The choice depends on many factors, first of all on the capabilities and preferences of the designers. 7. In order to better compensate the mutual influence of the beams in the accelerating cavities and the spread of cavity parameters during accelerator tuning, in addition to condition 1, the accelerating voltage in each beam channel can be changed independently by non-mechanical means. 8. It is reasonable to build the MCC for energies above 120 MeV.
Lower-energy versions appear technically unattractive, because the "RF acceptance" decreases substantially and the tolerances on the magnetic field quality tighten. This becomes more pronounced at high values of the harmonic number h of the accelerating RF field; on the other hand, stable operation of the accelerator is usually easy to achieve with a very narrow phase width of the accelerated beam. The upper energy is not limited, since there are no formal technical obstacles; only economic considerations apply, for which an acceptable limit seems to be around 2000 MeV. 9. Present-day possibilities for constructing proton accelerators with energies up to 120 MeV are wide enough, ranging from RFQs to SC cyclotrons with separated orbits. Therefore the design of an injection system with a final energy of up to 120 MeV depends on the capabilities and preferences of the designers and will be considered separately. 10. The cost of construction and operation of the MCC should be substantially lower than that of an equivalent linac. 11. The development of an MCC project must also be based on sufficient study of the experience and results of designing and constructing similar cyclotrons with separated orbits . ## 3 MCC pulsed operation mode In any case, even when constructing an MCC for continuous operation, it is reasonable to retain the possibility of performing the initial adjustment and start-up of the accelerator in pulsed mode, preferably with acceleration of a single-bunch beam formed in the injector. The radius of the last MCC stage is chosen so that the duration of one beam turn is less than 1.0 μs, in order to satisfy the requirements imposed on the proton beam by the neutron-source system. Thus, if the peak current of the accelerated beam in the MCC is high enough (tens of Amperes), it is not necessary to build storage rings .
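The one-Ampere estimate quoted in Section 2 is simply the charge per bunch times the bunch repetition rate. A quick numerical check (Python; the numbers are those stated in the text):

```python
E_CHARGE = 1.602176634e-19  # elementary charge, C

def avg_current_A(particles_per_bunch, bunch_rate_hz):
    """Average current of a continuous bunch train: N * f * e."""
    return particles_per_bunch * bunch_rate_hz * E_CHARGE

low = avg_current_A(1.2e11, 50e6)    # lower ends of the quoted ranges
high = avg_current_A(1.4e11, 100e6)  # upper ends
print(f"{low:.2f} A to {high:.2f} A")  # roughly 0.96 A to 2.24 A
```

So $`1.2\times 10^{11}`$ particles per bunch at 50 MHz already gives just under 1 A of average current, consistent with the claim.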
In the one-bunch regime of MCC operation, at any given time only one bunch passes through any cavity, and the time interval between beam (bunch) loadings of a cavity equals the duration of one beam turn at the given stage. The instabilities of such a regime, and the associated severe intensity limitations that occur in the regime of continuous beam acceleration , simply disappear. Hence, during short pulsed feeding of a cavity (20-40 μs), depending on the coupling to the RF generator and the quality factor , it is possible to increase the electric field in the cavity significantly (before the onset of multipactor effects) and to use warm cavities. At the same time, it does not seem technically advantageous to build pulsed power supplies for the magnet elements (ME: bending magnets and lenses) for the pulsed accelerating mode of the MCC. The reason is that with pulsed feeding of the ME the distortions and instabilities of the ME magnetic fields increase dramatically. This is connected mainly with the drift of the coercivity of the magnet core iron, and demagnetization by a reverse-polarity current is not a sufficiently stabilizing measure, particularly for the magnet edges and short lenses. Besides, pulsed feeding requires ME construction with thinner steel sheets, which increases the mechanical distortions of the core. Moreover, the winding insulation must be thicker, which correspondingly increases the ME dimensions. As a result, the possibility of building the small-size ME that are very important for this type of cyclotron is reduced. The ME feeding system also becomes more complicated, particularly with respect to its precise stabilization. Most importantly, the identity of such a magnet system would be destroyed in the transition from pulsed to continuous feeding.
Therefore, choosing continuous current for the ME feeding during pulsed one-bunch beam acceleration in the MCC will allow a smooth transition from single-bunch to multi-bunch acceleration, up to the continuous regime, at the expense only of increasing the RF system power, which seems technically justified. In any case, initial tuning of the MCC with acceleration of one bunch per cycle opens additional opportunities to increase the quality of the cyclotron. For example, the same bunch can be used to calibrate all detectors, as well as other accelerator parameters, in all channels and stages. Besides, the repetition rate need not be fixed at 50-60 Hz but can be varied over a wide range, from zero up to several thousand Hz. The described advantages of one-bunch acceleration will, by rough estimate, allow the number of particles in a single bunch to be increased by more than an order of magnitude, i.e., to values of $`10^{12}`$ and more. Taking into account that the phase duration of a bunch is usually about 0.1 of the period of the accelerating RF field in the cavity, at an RF frequency of about 50 MHz the peak power of the ejected MCC beam at an energy of about 1 GeV will be about 80 GW with a pulse duration of 2 ns. This seems sufficient for the construction of a powerful pulsed neutron source without a storage ring . In the future, with each increase in the number of bunches, the number of stably accelerated particles in each bunch will decrease slightly. This process will continue until the continuous mode is reached, after which it essentially stops. However, if the MCC system is constructed and adjusted correctly, the number of particles in each bunch in the continuous regime should not fall below the value already reached in practice, i.e., $`10^{11}`$. Increasing the average beam current then mainly requires a corresponding increase in the accelerating RF system power, which is chiefly a question of financial resources.
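The quoted 80 GW figure follows directly from the numbers in the text: $`10^{12}`$ protons at 1 GeV carry about 160 J, delivered in a 2 ns pulse (0.1 of a 50 MHz RF period). A sketch of the arithmetic (Python):

```python
E_CHARGE = 1.602176634e-19       # C; hence 1 eV = 1.602...e-19 J

n_protons = 1e12                 # particles per bunch (text's estimate)
kinetic_energy_eV = 1e9          # 1 GeV per proton
pulse_duration_s = 2e-9          # 2 ns = 0.1 of a 50 MHz RF period

pulse_energy_J = n_protons * kinetic_energy_eV * E_CHARGE  # ~160 J
peak_power_W = pulse_energy_J / pulse_duration_s           # ~8e10 W
print(f"{pulse_energy_J:.0f} J, {peak_power_W / 1e9:.0f} GW")  # 160 J, 80 GW
```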
Thus, one more practical advantage of the suggested type of accelerator is the possibility of gradually upgrading its power without changing the structure or the magnet system; any upgrade depends only on financial possibilities. It would not be correct to estimate the cost of an MCC in general, because it depends strongly on the country of construction. We therefore first consider some relative cost factors in comparison with a linac, which seem to be sufficiently independent of the country. For example, the length of a 1-GeV linac in SNS and ADTT systems is about 1.0 km, so the length of the 3-m-thick shielding concrete wall is about 2 km. The same external wall for an MCC with similar beam parameters and a final radius of about 38 m (see below) will be almost an order of magnitude shorter, and one can expect the cost of this expensive wall to decrease by about a factor of ten as well. Besides, accelerating RF equipment at 50 MHz is usually significantly cheaper than similar equipment at 350 and 700 MHz. An approximate picture of the construction cost of the suggested type of accelerator can be formed from the estimates for the construction of a four-stage, 1-GeV MCC for one-bunch beam acceleration in the existing radiation zones of the 6-GeV electron synchrotron at the Yerevan Physics Institute (YerPhI) (see below). Based on these estimates, we conclude that the construction of the MCC will cost about 40,000,000 US dollars, which is more than an order of magnitude less than the construction cost of the similar accelerators described in . We note that this cost estimate does not include the shielded structures, manpower, or the construction of the 120-MeV injection system. The injection system is estimated at about $`15\%`$ of the total sum, while the manpower and shielded-structure costs depend essentially on the current economic situation in the country of construction.
## 4 MCC based on the YerPhI synchrotron The aim of this work is to demonstrate the principal possibility and the advantages of a widely applicable MCC, relative to similar accelerators, using the example developed for YerPhI . The following data are used as the basis of the calculations: the main-ring average radius is 34.5 m, the tunnel width is 6.0 m with 3.0-m-thick tunnel walls, and the inner circular hall diameter is 57.0 m. As shown in Fig. 1, the first three cyclotron stages are nested and concentrically installed in the hall, acting as injectors for the fourth-stage cyclotron in the YerPhI main-ring tunnel. The choice of the MCC structure and its parameters depends on the choice of the type and parameters of the accelerating RF cavities. In , the research and design of large, low-frequency (90-170 MHz) accelerating SC cavities without spherical symmetry and with a substantial radial extent of the gap are described. In these cavities, as well as in large warm ones with a radial gap extent of more than 4 m at 50 MHz frequency , the inner active surfaces are made of Pb or PbSn (4Sn 96Pb atoms). Electric fields of 6.2 MV/m are reached in the warm cavities and 10.6 MV/m in the SC ones, with a ratio of the peak electric field to its usable part of $`E_{peak}/E_{acc}=1.5`$. For the proposed four-stage MCC-YerPhI, a warm cavity design is assumed, similar to those modeled in and based on the modern cavities presently operating at PSI, which achieve peak voltages of up to 1.1 MV at frequencies of 40-50 MHz with a radial gap extent of up to 4 m. The results of the calculations for one version of the MCC-YerPhI, consisting of 4 rings, are given in Table 1. The transit-time factor is assumed to be 0.95. The magnet period structure is assumed to be a FODO lattice with separated functions.
Table 1 shows that the orbit separation is in the range of 40-23 cm, and the length of the drift space is not less than 3.7 m, which allows not only the free placement of focusing magnet elements and detectors , but also easy $`100\%`$ ejection of the beam. The total number of turns in the MCC-YerPhI is 39.5. Table 2 shows the computer calculations for the first beam turn in the first MCC stage, with an artificial change of the equilibrium phase of +3.6 degrees in sector 5 and -3.6 degrees in sector 6, while maintaining isochronism. Using the estimated errors in the dipole and quadrupole alignment, the beam envelope, including orbit distortions, was computed. Its size in the different MCC stages lies within the limits of $`40mm\times 60mm`$ and determines the aperture and the outside dimensions of the magnets, the latter being $`20cm\times 20cm`$. For the calculations, the following values of the injected beam emittance were taken: $`E_x=15\pi \times mm\times mrad`$ and $`E_z=10\pi \times mm\times mrad`$. Table 3 shows the main parameters of the extracted beam and the cost estimates for the MCC-YerPhI construction for two modes of accelerator operation. The continuous regime with hundreds of MW of beam production is not considered, because the probability of realizing it at YerPhI is low for financial reasons. Still, one can easily obtain from these tables the main information needed to extrapolate to the more powerful continuous regime of accelerator operation. One can see that for this type of machine it is advantageous to choose low values of the RF accelerating frequency, because this correspondingly eases the precision requirements for almost all systems; on the other hand, the problems of obtaining high electric fields in the cavities and a high quality factor (Q) then become more difficult, although in the SC case this effect is weaker.
Besides, high values of the harmonic number widen the possibilities for tuning the equilibrium phase and for stable acceleration. Therefore, the necessity of seeking compromise solutions is obvious. To increase the quality and stability of the MCC acceleration, sextupole lenses will be installed near the quadrupole lenses in regions of non-zero orbit dispersion to compensate the chromaticity. It is also foreseen to construct electronic feedback systems with a controlled passband and with correcting voltages applied to special electrodes, as well as to apply other known methods. All these methods, including newly developed ones, will ensure minimal beam losses during acceleration along the spiral orbit with varying orbit separation. ## 5 Conclusions The main goal of this work is to demonstrate the importance of continuing the design, development and improvement of such a new and promising type of accelerator as the MCC complex, which will be useful not only for scientific investigations but also for the nuclear industry. In conclusion, the authors would like to thank their colleagues from YerPhI, particularly A.Ts. Amatuni, and colleagues from JINR (Dubna) for useful discussions and remarks.
# Limits for entanglement measures ## Abstract A basic principle of entanglement processing says that entanglement cannot increase under local operations and classical communication. Based on this principle, we show that any entanglement measure $`E`$ suitable for the regime of a high number of identically prepared entangled pairs satisfies $`E_D\le E\le E_F`$, where $`E_D`$ and $`E_F`$ are the entanglement of distillation and of formation, respectively. Moreover, we exhibit a theorem establishing a very general form of bounds for distillable entanglement. Since the pioneering papers on quantifying entanglement , much has been done in this field. However, in the case of mixed states, we are still at the stage of gathering phenomenology. In the very fruitful axiomatic approach there is not even agreement as to what postulates should be satisfied by candidates for entanglement measures. Moreover, we do not know the quantum communication meaning of the known measures apart from the entanglement of formation $`E_F`$ and the entanglement of distillation $`E_D`$ , which have the following dual meaning: * $`E_D(\varrho )`$ is the maximal number of singlets that can be produced from the state $`\varrho `$ by means of local operations and classical communication (LQCC). * $`E_F(\varrho )`$ is the minimal number of singlets needed to produce the state $`\varrho `$ by LQCC operations. (More precisely, $`E_D`$ and $`E_F`$ count singlets per copy of the state $`\varrho `$, in the asymptotic sense of considering $`n\to \infty `$ copies altogether.) Unfortunately, they are very hard to deal with. One can ask a general question: Is there a rule that would somehow order the many possible measures satisfying some reasonable axioms? Moreover, is there any connection between the axiomatically defined measures and the entanglement of distillation and formation? Surprisingly, it turns out that precisely the two historically first measures of entanglement constitute the sought-after rule, being the extreme measures.
In this paper we show that any measure satisfying certain natural axioms (two of them specific to the asymptotic regime of a high number of identically prepared entangled pairs) must be confined between $`E_D`$ and $`E_F`$: $$E_D\le E\le E_F.$$ (1) The result is compatible with some earlier results in this direction. In Ref. , Vedral and Plenio provided heuristic argumentation that an additive measure of entanglement should be no less than $`E_D`$. Uhlmann showed that the non-regularized entanglement of formation (closely related to $`E_F`$ ) is an upper bound for all convex functions that agree with it on pure states . Finally, the presented result is compatible with the result by Popescu and Rohrlich , completed by Vidal , stating the uniqueness of the entanglement measure for pure states. The proof of the result (contained in Theorem 1) is very simple, but it is very powerful. Indeed, as a by-product we obtain (Theorem 2) surprisingly weak conditions for a function to be an upper bound for $`E_D`$. This is a remarkable result, as the evaluation of $`E_D`$ is one of the central tasks of the present stage of quantum entanglement theory. In particular, we obtain an elementary proof that the relative entropy of entanglement $`E_r`$ and the function considered by Rains are bounds for distillable entanglement. Note that the proof of Ref. involves complicated mathematics, while that of Ref. is based on a still unproven additivity assumption. In addition, our result is very general, and we expect it to simplify the search for bounds on distillable entanglement. It is crucial that the basic tool we employ to obtain the results is the fundamental principle of entanglement theory stating that entanglement cannot increase under local operations and classical communication . Thus the principle, putting bounds on the efficiency of distillation, plays a role similar to that of the second law of thermodynamics (cf. ), the basic restriction on the efficiency of heat engines.
Let us first set out the list of postulates we impose on an entanglement measure. So far, the rule for choosing some postulates and discarding others has been an intuitive understanding of what entanglement is. Now, we would like to add a new rule: Entanglement of distillation is a good measure. Thus, we cannot accept a postulate that is not satisfied by $`E_D`$. This is reasonable because $`E_D`$ has the direct meaning of the quantum capacity of the teleportation channel constituted by the source producing bipartite systems. We will see that this rule suppresses some of the hitherto accepted postulates: this is the lesson taught us by the existence of bound entangled states . We split the postulates into the following three groups: 1. Obvious postulates.- * Non-negativity: $`E(\varrho )\ge 0`$; * vanishing on separable states: $`E(\varrho )=0`$ if $`\varrho `$ is separable; * normalization: $`E(|\psi _+\rangle \langle \psi _+|)=1`$, where $`\psi _+=\frac{1}{\sqrt{2}}(|00\rangle +|11\rangle )`$. 2. Fundamental postulate: monotonicity under LQCC operations.- * Monotonicity under local operations: If either of the parties sharing the pair in the state $`\varrho `$ performs an operation leading to the state $`\sigma _i`$ with probability $`p_i`$, then the expected entanglement cannot increase: $`E(\varrho )\ge \sum _ip_iE(\sigma _i);`$ * convexity (monotonicity under discarding information): $`E\left(\sum _ip_i\varrho _i\right)\le \sum _ip_iE(\varrho _i).`$ 3. Asymptotic regime postulates.- * Partial additivity: $`E(\varrho ^n)=nE(\varrho );`$ * continuity: If $`\langle \psi ^n|\varrho _n|\psi ^n\rangle \to 1`$ for $`n\to \infty `$, then $`\frac{1}{n}|E(\psi ^n)-E(\varrho _n)|\to 0,`$ where $`\varrho _n`$ is some joint state of $`n`$ pairs. Let us now briefly discuss the considered postulates. In the first group, the postulate of normalization is there to exclude the many trivial measures given by positive constant multiples of some measure $`E`$.
The axiom 1b) is indeed obvious (a separable state contains no entanglement). What, however, is not obvious, is: Should we not require vanishing of $`E`$ if and only if the state is separable? The latter seems reasonable, because if the state is not separable, it contains entanglement, which should be indicated by the entanglement measure. However, according to our rule, we should look at distillable entanglement. We can then see that the bound entangled states are entangled, but have $`E_D`$ equal to zero. Thus we should accept entanglement measures that indicate no entanglement for some entangled states. This curiosity is due to the existence of different types of entanglement. Let us now pass to the second group. The fundamental postulate, displaying the basic feature of entanglement (that creating entanglement requires global quantum interaction), was introduced in Ref. and developed in Refs. . It was put into the above, very convenient form in Ref. . Any function satisfying it must be invariant under product unitary transformations and constant on separable states . It also follows that if a trace-preserving map $`\mathrm{\Lambda }`$ can be realized as an LQCC operation, then $`E(\mathrm{\Lambda }(\varrho ))\le E(\varrho )`$. The postulates of the first and the second groups are commonly accepted. The functions that satisfy them (without the normalization axiom) have been called entanglement monotones . Let us now discuss the last group of postulates, called "asymptotic regime" ones because they are necessary in the limit of a large number of identically prepared entangled pairs, and can be discarded if a small number of pairs is considered. This asymptotic regime is extremely important, as it is the natural regime both for the directly related theory of quantum channel capacity and for the recently developed "thermodynamics of entanglement" .
Partial additivity says that if we have a stationary, memoryless source producing pairs in the state $`\varrho `$, then the entanglement content grows linearly with the number of pairs. A plausible argument for accepting this postulate was given in Ref. in the context of thermodynamical analogies. Vedral and Plenio considered full additivity, $`E(\varrho \sigma )=E(\varrho )+E(\sigma )`$, as a desired property. However, the effect of activation of bound entanglement suggests that $`E_D`$ is not fully additive, so, according to our rule, we will not impose this stronger additivity. Let us now pass to the last property. It says that in the region close to the pure states, our measure must behave regularly: If the joint state of a large number of pairs is close to a product of pure states, then the densities of entanglement (entanglement per pair) of the two states should be close to each other, too. This is a very weak form of the continuity exhibited e.g. by the von Neumann entropy, which follows from the Fannes inequality . We do not require the latter, strong continuity, because we expect that the entanglement of distillation can exhibit some peculiarities at the boundary of the set of bound entangled states. However, it can be seen that $`E_D`$ satisfies the weak continuity displayed as the last postulate of our list. The continuity property as a potential postulate for entanglement measures was considered by Vidal in the context of the problem of uniqueness of the entanglement measure for pure states. Namely, Popescu and Rohrlich, starting from thermodynamical analogies, argued that entanglement of formation (equal to entanglement of distillation for pure states ) is the unique measure, if one imposes additivity and monotonicity (and, of course, normalization). Later on, many monotones different from $`E_F`$ on pure states were designed . There was still no contradiction, because they were not additive.
However, Vidal constructed a set of monotones, additive for pure states, that still differed from $`E_F`$ on pure states . He removed the contradiction by pointing out that the missing assumption is just the considered continuity. The uniqueness theorem, completed in this way, states that a function satisfying the listed axioms must be equal to the entanglement of formation on pure states. In the following we will show that the above theorem can be viewed as a special case of a general property of entanglement measures (in this paper we will call the functions satisfying the list of postulates entanglement measures). Before we state the theorem, we need the definitions of the entanglement of distillation and formation. We accept the following definitions. $`E_F`$ is a regularized version of the original entanglement of formation $`E_f`$ , defined as follows. For pure states, $`E_f`$ is equal to the entropy of entanglement, i.e., the von Neumann entropy of either of the subsystems. For mixed states, it is given by $$E_f(\varrho )=\mathrm{min}\sum _ip_iE_f(\psi _i),\text{with}\varrho =\sum _ip_i|\psi _i\rangle \langle \psi _i|$$ (2) where the minimum is taken over all possible decompositions of $`\varrho `$ (we call the decomposition realizing the minimum the optimal decomposition of $`\varrho `$). Now $`E_F\equiv \underset{n\to \infty }{lim}E_f(\varrho ^n)/n`$. To define the distillable entanglement $`E_D`$ (see Ref. for a justification of this definition) of the state $`\varrho `$, we consider distillation protocols $`𝒫`$ given by a sequence of trace-preserving, completely positive superoperators $`\mathrm{\Lambda }_n`$ that can be realized by means of LQCC operations, and that map the state $`\varrho ^n`$ of $`n`$ input pairs into a state $`\sigma _n`$ acting on the Hilbert space $`\mathcal{H}_n^{out}=\mathcal{H}_n\mathcal{H}_n`$ with $`dim\mathcal{H}_n=d_n`$. Define the maximally entangled state on the space $`\mathcal{H}`$ by $$P_+(\mathcal{H})=|\psi _+(\mathcal{H})\rangle \langle \psi _+(\mathcal{H})|,\psi _+(\mathcal{H})=\frac{1}{\sqrt{d}}\sum _{i=1}^{d}|ii\rangle $$ (3) where $`|i\rangle `$ are basis vectors in $`\mathcal{H}`$, while $`d=dim\mathcal{H}`$.
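For a pure bipartite state, $`E_f`$ is the von Neumann entropy of either reduced density matrix, computable directly from the Schmidt (singular-value) coefficients. A small numerical sketch (Python with NumPy; an illustration, not part of the paper), confirming the normalization axiom $`E(|\psi _+\rangle \langle \psi _+|)=1`$ for the two-qubit state:

```python
import numpy as np

def entropy_of_entanglement(psi, d_a, d_b):
    """Von Neumann entropy (base 2) of the reduced state of a
    bipartite pure state psi living on C^d_a (x) C^d_b."""
    coeffs = np.linalg.svd(psi.reshape(d_a, d_b), compute_uv=False)
    p = coeffs**2                 # Schmidt probabilities
    p = p[p > 1e-12]              # drop zeros (0 log 0 = 0 convention)
    return float(-(p * np.log2(p)).sum())

# psi_+ = (|00> + |11>)/sqrt(2): exactly 1 ebit of entanglement.
psi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(entropy_of_entanglement(psi_plus, 2, 2))  # 1.0
```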
Now $`𝒫`$ is a distillation protocol if, for high $`n`$, the final state approaches the above state $`P_+`$, $$F\equiv \langle \psi _+(\mathcal{H}_n)|\sigma _n|\psi _+(\mathcal{H}_n)\rangle \to 1$$ (4) (i.e. the fidelity $`F`$ tends to 1). The asymptotic ratio $`D_𝒫`$ of distillation via the protocol $`𝒫`$ is given by $$D_𝒫(\varrho )\equiv \underset{n\to \infty }{lim}\frac{\mathrm{log}_2dim\mathcal{H}_n}{n}$$ (5) The distillable entanglement is defined by the maximum of $`D_𝒫`$ over all protocols, $$E_D(\varrho )=\underset{𝒫}{sup}D_𝒫.$$ (6) Now, the main result of this paper is the following. Theorem 1. For any function $`E`$ satisfying the introduced postulates, and for any state $`\varrho `$, one has $$E_D(\varrho )\le E(\varrho )\le E_F(\varrho ).$$ (7) Remark. For pure states we have $`E_D=E_F`$; hence from the above inequality it follows that all measures are equal to $`E_F`$ in this case. This is compatible with the uniqueness theorem. Proof. Surprisingly enough, the proof is elementary. Both the left- and the right-hand-side inequality of the theorem are proved by the same line of argumentation: * by definition, $`E_D`$ ($`E_F`$) is asymptotically constant during the optimal distillation (formation) protocol; * a distillation (formation) protocol is an LQCC operation and cannot increase any entanglement measure; * the final (the initial) state is a pure one; * for pure states, all measures coincide by virtue of the uniqueness theorem. Then it easily follows that, if the given measure $`E`$ were, e.g., less than $`E_D`$, it would have to increase under the optimal distillation protocol. We used additivity here, because formation and distillation protocols are collective operations (performed on $`\varrho ^n`$). Continuity is needed because we use the uniqueness theorem.
By writing the above more formally in the case $`E\le E_F`$ we obtain: $`E(\varrho )={\displaystyle \frac{E(\varrho ^{\otimes n})}{n}}\le {\displaystyle \frac{\underset{i}{\sum }p_iE(\psi _i)}{n}}={\displaystyle \frac{\underset{i}{\sum }p_iE_f(\psi _i)}{n}}`$ (8) $`={\displaystyle \frac{E_f(\varrho ^{\otimes n})}{n}}\stackrel{n\to \infty }{\longrightarrow }E_F(\varrho )`$ (9) where we chose the optimal decomposition of $`\varrho ^{\otimes n}`$, so that $`\sum _ip_iE_f(\psi _i)`$ is minimal and hence equal to $`E_f(\varrho ^{\otimes n})`$ . The first equality comes from additivity, the inequality is a consequence of monotonicity (more precisely, of convexity, axiom 2b)). The next-to-last equality follows from the uniqueness theorem. We will skip the formal proof of the inequality $`E_D\le E`$, because in the following we prove formally a stronger result concerning bounds for the entanglement of distillation. Below we will show that the above, very transparent line of argumentation is a powerful tool, as it allows one to prove a very general theorem on upper bounds of $`E_D`$. Theorem 2. Any function $`B`$ satisfying the conditions a)-c) below is an upper bound for the entanglement of distillation: a) Weak monotonicity: $`B(\varrho )\ge B(\mathrm{\Lambda }(\varrho ))`$, where $`\mathrm{\Lambda }`$ is a trace-preserving superoperator realizable by means of LQCC operations. b) Partial subadditivity: $`B(\varrho ^{\otimes n})\le nB(\varrho )`$ c) Continuity for the isotropic state $`\varrho (F,d)`$ . The latter is of the form $$\varrho (F,d)=pP_+(C^d)+(1-p)\frac{1}{d^2}I,\quad 0\le p\le 1$$ (10) with $`\text{Tr}\left[\varrho (F,d)P_+(C^d)\right]=F`$. Suppose now that we have a sequence of isotropic states $`\varrho (F_d,d)`$, such that $`F_d\to 1`$ as $`d\to \infty `$. Then we require $$\underset{d\to \infty }{lim}\frac{1}{\mathrm{log}_2d}B(\varrho (F_d,d))\ge 1.$$ (11) Remarks. (1) The above conditions are implied by our postulates for entanglement measures. Specifically: condition a) is implied by monotonicity; condition b), by additivity; and condition c), by continuity plus additivity. 
(2) If instead of LQCC operations we take some other class $`C`$ of operations including one-way classical communication, the proof also applies mutatis mutandis (then condition a) would involve the class $`C`$). Proof. We will perform an evaluation analogous to that in formula (9) (now, however, we will not even use the uniqueness theorem). By subadditivity we have $$B(\varrho )\ge \frac{1}{n}B(\varrho ^{\otimes n}).$$ (12) Since the only relevant parameters of the output of the process of distillation are the dimension of the output Hilbert space and the fidelity $`F`$ (see the definition of distillable entanglement), we can consider a distillation protocol ended by twirling , which results in an isotropic final state. By condition a), distillation does not increase $`B`$, hence $$\frac{1}{n}B(\varrho ^{\otimes n})\ge \frac{1}{n}B(\varrho (F_{d_n},d_n))$$ (13) Now, in the limit of large $`n`$, the distillation protocol produces $`F\to 1`$ and $`(\mathrm{log}_2d_n)/n\to E_D(\varrho )`$; hence by condition c) the right hand side of the inequality tends to $`E_D(\varrho )`$. Thus we obtain that $`B(\varrho )\ge E_D(\varrho )`$. Using the above theorem, to find a bound for $`E_D`$ three things must be done: one should show that the chosen function satisfies weak monotonicity, then check subadditivity, and finally calculate it for the isotropic state to check condition c). Note that weak monotonicity is indeed much easier to prove than the full monotonicity given by postulate 2a). Checking subadditivity, in contrast to additivity, is in many cases immediate: it in fact holds for all so-far-known entanglement monotones. Finally, the isotropic state is probably the easiest possible state for which to calculate the value of a given function. To illustrate the power of the result let us prove that the relative entropy of entanglement $`E_r`$ is a bound for $`E_D`$. Subadditivity and weak monotonicity are immediate consequences of the properties of the relative entropy used in the definition of $`E_r`$ (subadditivity proved in Ref. , weak monotonicity in Ref. ). 
The calculation of $`E_r`$ for the isotropic state is a little more involved, but by using the high symmetry of the state it was found to be $`E_r(\varrho (F,d))=\mathrm{log}_2d+F\mathrm{log}_2F+(1-F)\mathrm{log}_2\frac{1-F}{d-1}`$. By evaluating this expression for large $`d`$, we easily obtain that condition c) is satisfied. The proof applies without any change to the Rains bound . To summarize, we have presented two results. The first one has a conceptual meaning, leading to a deeper understanding of the phenomenon of entanglement. It provides a synthetic overview of the domain of quantifying entanglement in the asymptotic regime. One possible application of the result would be to reverse the direction of reasoning, and accept the condition $`E_D\le E\le E_F`$ as a preliminary test for a good candidate for an entanglement measure. The second result presented in this paper is of direct practical use. We believe that it will make the search for strong bounds on $`E_D`$ much easier, especially in higher dimensions. Finally, we would like to stress that the results display the power of the fundamental principles of entanglement processing: they allow one not only to replace a complicated proof by a straightforward one, but also make the argumentation very transparent from the physical point of view. We are grateful to E. Rains for stimulating discussion. We would also like to thank the participants of the ESF-Newton workshop (Cambridge 1999), especially C. H. Bennett, S. Lloyd and G. Vidal for helpful comments. The work is supported by Polish Committee for Scientific Research, contract No. 2 P03B 103 16.
# Crossed-boson exchange contribution and Bethe-Salpeter equation ## 1 Introduction The necessity of introducing a contribution due to the crossed-boson exchange in the interaction kernel entering the Bethe-Salpeter equation has been referred to many times in the literature . This component of the interaction is indeed required if one wants the Bethe-Salpeter equation to reproduce results obtained with the Dirac or Klein-Gordon equations, which are known to provide a good account of the energy spectrum of charged particles in the field of a heavy system (one-body limit). At the same time it removes an undesirable contribution of the order $`\alpha ^3\mathrm{log}\alpha `$ (or $`\alpha ^3`$) expected from the simplest ladder approximation , indirectly supporting the validity of the instantaneous approximation in describing the one-boson exchange contribution to the two-body interaction. Transparent details about the derivation of the above result are, however, scarcely found and it may thus be thought that it pertains to QED or to the one-body limit. The role of the spin and charge of the exchanged boson, as well as of its mass, is hardly mentioned. Recently, Nieuwenhuis and Tjon , employing the Feynman-Schwinger representation (FSR), performed a calculation of the energy for a system of two equal-mass constituents with zero spin exchanging a zero-spin massive boson. They found a strong discrepancy with the ladder Bethe-Salpeter approximation, pointing to the contribution of crossed-boson exchanges. Their binding energies are also larger than those obtained from various equations inspired by the instantaneous (equal-time) approximation. Using a non-relativistic but field-theory motivated approach, Amghar and Desplanques found that the two-boson exchange contribution to the interaction was cancelled by the crossed two-boson exchange one, recovering the instantaneous approximation as an effective interaction. 
This result was extended to constituents with unequal masses but is restricted to spinless and chargeless bosons. In the present work, we look at the contribution of the crossed two-boson exchange in the framework of the Bethe-Salpeter equation. To the best of our knowledge, this is the first time that such a calculation has been performed. The present study is obviously motivated by the two above works, which provide separate benchmarks. In the first case, it may allow one to determine which part of the large discrepancy between the FSR results and those obtained from the Bethe-Salpeter equation is due to the crossed two-boson exchange. In the second case, it is interesting to see whether the results of a relativistic framework confirm the qualitative features evidenced by a non-relativistic approach. The plan of the paper is as follows. In the second section, we derive and discuss the expression we will use to describe the contribution of the crossed two-boson exchange. Particular attention is given to the off-shell extension of the corresponding operator. The third section is devoted to the description of the method used to solve the Bethe-Salpeter equation. Results for a system of two distinguishable, scalar constituents are presented and discussed in the fourth section. While the present work has not much relationship to the main research activity of W. Glöckle, we don't think it is orthogonal to his physics interests. Looking at his work, one can indeed find some publications dealing with specific aspects pertinent to the relativistic description of a few-body system. We feel that this contribution will fit nicely with his concerns in physics. ## 2 Expression of the crossed-boson exchange contribution to the interaction kernel For definiteness, we first recall the expression of the Bethe-Salpeter equation for two scalar particles of equal mass, interacting via the exchange of another scalar particle. 
It is given in momentum space by: $$(m^2-(\frac{P}{2}+p)^2)(m^2-(\frac{P}{2}-p)^2)\mathrm{\Phi }(p,P)=-i\int \frac{d^4p^{\prime }}{(2\pi )^4}K(p,p^{\prime },P)\mathrm{\Phi }(p^{\prime },P),$$ (1) where the quantities $`m`$, $`P`$, $`p`$, $`p^{\prime }`$ represent the mass of the constituents, their total and their relative momenta, respectively. $`K(p,p^{\prime },P)`$ represents the interaction kernel whose expression is specified below. It contains a well determined contribution due to a single boson exchange: $$K^{(1)}=\frac{g^2}{\mu ^2-t}=\frac{g^2}{\mu ^2-(p-p^{\prime })^2-iϵ},$$ (2) which has been extensively used in the literature. In this equation, the coupling, $`g^2`$, has the dimension of a mass squared. The quantity $`\frac{g^2}{4m^2}`$ is directly comparable to the coupling, $`g_{MNN}^2`$, currently used in hadronic physics, or to $`4\pi \alpha `$, where $`\alpha `$ is the usual QED coupling. The full kernel $`K`$ also contains multi-boson exchange contributions that Eq. (2) cannot account for, namely those of the non-ladder type. Examples of such contributions are shown in Fig. 1, involving crossed two- and three-boson exchanges. Dealing with the crossed two-boson exchange would not raise difficulties if the expression of its contribution were tractable. The latter, as derived from the Feynman diagram of Fig. 1a, is quite complicated and, moreover, not amenable to the mathematical methods generally employed in solving Eq. (1). There exists however an expression for the on-shell amplitude that has a structure similar to Eq. (2). 
It consists in writing a double dispersion relation with respect to the $`t`$ and $`u`$ variables : $`K^{(2)}(s,t,u)={\displaystyle \frac{g^4}{8\pi ^2}}{\displaystyle \int _{4\mu ^2}^{\infty }}{\displaystyle \int _{4m^2}^{\infty }}{\displaystyle \frac{dt^{\prime }}{t^{\prime }-t-iϵ}}{\displaystyle \frac{du^{\prime }}{u^{\prime }-u-iϵ}}{\displaystyle \frac{1}{\sqrt{\kappa (u^{\prime },t^{\prime })}}}\theta (\kappa (u^{\prime },t^{\prime })),`$ (3) $`\text{}\mathrm{with}\kappa (u^{\prime },t^{\prime })=u^{\prime }t^{\prime }[(t^{\prime }-4\mu ^2)(u^{\prime }-4m^2)-4\mu ^4].`$ The quantities $`s`$, $`t`$ and $`u`$ represent the standard Mandelstam variables. For a physical process, they verify the equality $`s+t+u=4m^2`$. The integration in Eq. (3) runs over the domain $`t^{\prime }>4\mu ^2`$, $`u^{\prime }>4m^2`$ where the quantity $`\kappa (u^{\prime },t^{\prime })`$ entering the square root function is positive. One of the integrations in Eq. (3) can be performed analytically, allowing one to write: $`K^{(2)}`$ $`=`$ $`{\displaystyle \frac{g^4}{8\pi ^2}}{\displaystyle \int _{4\mu ^2}^{\infty }}{\displaystyle \frac{dt^{\prime }}{t^{\prime }-t-iϵ}}{\displaystyle \frac{1}{\sqrt{\kappa (u,t^{\prime })}}}{\displaystyle \frac{1}{2}}\mathrm{log}\left({\displaystyle \frac{\alpha (u,t^{\prime })+\frac{1}{2}\sqrt{\frac{t^{\prime }-4\mu ^2}{t^{\prime }}}\sqrt{\kappa (u,t^{\prime })}}{\alpha (u,t^{\prime })-\frac{1}{2}\sqrt{\frac{t^{\prime }-4\mu ^2}{t^{\prime }}}\sqrt{\kappa (u,t^{\prime })}}}\right),`$ (4) $`\mathrm{with}\alpha (u,t^{\prime })={\displaystyle \frac{1}{4}}[(4m^2-u)(t^{\prime }-4\mu ^2)+4\mu ^4-u(t^{\prime }-4\mu ^2)].`$ The expression for $`\alpha (u,t^{\prime })`$ has been written as the sum of two terms appearing in a product form in the second term of the numerator and denominator of the log function in Eq. (4). This suggests an alternative expression for the last factor in Eq. 
(4) (to be used with care): $$\frac{1}{2}\mathrm{log}\left(\frac{\alpha (u,t^{\prime })+\frac{1}{2}\sqrt{\frac{t^{\prime }-4\mu ^2}{t^{\prime }}}\sqrt{\kappa (u,t^{\prime })}}{\alpha (u,t^{\prime })-\frac{1}{2}\sqrt{\frac{t^{\prime }-4\mu ^2}{t^{\prime }}}\sqrt{\kappa (u,t^{\prime })}}\right)=\mathrm{log}\left(\frac{\sqrt{(4m^2-u)(t^{\prime }-4\mu ^2)+4\mu ^4}+\sqrt{-u(t^{\prime }-4\mu ^2)}}{\sqrt{(4m^2-u)(t^{\prime }-4\mu ^2)+4\mu ^4}-\sqrt{-u(t^{\prime }-4\mu ^2)}}\right).$$ (5) Expressions for $`K^{(2)}`$ differing from the previous ones by the front factor may be found in the literature (two times smaller in ref. and two times too large in ref. , which is perhaps due to the consideration of identical particles). For this reason, we felt obliged to enter somewhat into details above and make our inputs precise. Methods based on dispersion relations are powerful ones, allowing one to make off-shell extrapolations of on-energy-shell amplitudes, sometimes far from the physical domain. This extrapolation is not unique however (how does one extrapolate an amplitude which is equal to zero on-energy shell?) and some uncertainty may thus result in a calculation involving a limited number of exchanged bosons. Without considering all possibilities, the dependence on the variable $`u`$ of the crossed two-boson exchange contribution given by Eq. (4) can provide some insight on the above uncertainties. Indeed, the variable $`u`$, which is an independent one in Eq. (4), can be replaced by $`4m^2-s-t`$ on-energy shell, thus introducing a possible dependence on the variables $`s`$ and $`t`$. The dependence on $`s`$ makes the interaction energy dependent. There is nothing wrong with this feature but it is not certain that it is physically relevant. It may well be an economical way to account for higher order processes. On the other hand, the arbitrary character of its contribution should be removed by considering the contribution of these higher order processes. In any case, the effect of the dependence on $`s`$ gives some order of magnitude for contributions not included in this work. 
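As a sanity check on the signs entering these expressions (our own numerical aside, with arbitrary illustrative values $`m=1`$, $`\mu =0.15`$ and $`u<0`$ so that all square roots stay real), one can verify the rewriting of the log factor in Eq. (5) directly:

```python
import numpy as np

m, mu = 1.0, 0.15   # illustrative values, not those used in the calculations below

def log_factor_alpha_kappa(u, tp):
    """Log factor written with alpha(u,t') and kappa(u,t'), as on the left of Eq. (5)."""
    kappa = u * tp * ((u - 4*m**2) * (tp - 4*mu**2) - 4*mu**4)
    alpha = 0.25 * ((4*m**2 - u) * (tp - 4*mu**2) + 4*mu**4 - u * (tp - 4*mu**2))
    x = 0.5 * np.sqrt((tp - 4*mu**2) / tp) * np.sqrt(kappa)
    return 0.5 * np.log((alpha + x) / (alpha - x))

def log_factor_ab(u, tp):
    """The same factor in the form on the right of Eq. (5)."""
    a = np.sqrt((4*m**2 - u) * (tp - 4*mu**2) + 4*mu**4)
    b = np.sqrt(-u * (tp - 4*mu**2))
    return np.log((a + b) / (a - b))

for u, tp in [(-0.3, 1.0), (-2.0, 0.5), (-0.05, 3.0)]:
    print(u, tp, log_factor_alpha_kappa(u, tp), log_factor_ab(u, tp))
```

The two columns agree to machine precision, since $`\alpha `$ is one quarter of the sum of the two squared square-root arguments.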
The dependence on $`t`$ through the dependence on $`u`$ raises another type of problem. It is illustrated by rewriting the factor appearing in Eq. (3): $$\frac{1}{(t^{\prime }-t)(u^{\prime }-u)}=\frac{1}{(t^{\prime }-t)(u^{\prime }+s-4m^2+t)}=\left(\frac{1}{t^{\prime }-t}+\frac{1}{u^{\prime }+s-4m^2+t}\right)\frac{1}{u^{\prime }+s-4m^2+t^{\prime }}.$$ (6) While the first term in the bracket has mathematical properties similar to Eq. (2), the second term evidences different properties, poles appearing for negative values of $`t`$. This prevents us from applying the Wick rotation and, with it, the numerical methods employed to solve Eq. (1). In the following, we will consider various choices for the off-shell extrapolation of Eq. (4). For non-relativistic systems, we expect the uncertainties to be small, in relation with the fact that the quantity $`u`$ appearing in the factor $`(u^{\prime }-u)`$ in the denominator of Eq. (3) is small in comparison with $`u^{\prime }`$ ($`>4m^2`$). Our first choice assumes that $`u=0`$, as was done in studies relative to the contribution of the two-pion exchange to the nucleon-nucleon force . It is quite conservative and possibly appropriate for a non-relativistic approach, which assumes that the potential is energy independent. The expression of $`K^{(2)}`$ is then given by: $$K^{(2)}=\frac{g^4}{8\pi ^2}\int _{4\mu ^2}^{\infty }\frac{dt^{\prime }}{t^{\prime }-t}\sqrt{\frac{t^{\prime }-4\mu ^2}{t^{\prime }}}\frac{1}{2(\mu ^4+m^2(t^{\prime }-4\mu ^2))}.$$ (7) A second choice consists in replacing the variable $`u`$ in Eq. (4) by the factor $`4m^2-s-t^{\prime }`$. Upon inspection, one finds that this is equivalent to neglecting the second term in the bracket of Eq. (6). 
The corresponding interaction kernel is given by: $`K^{(2)}={\displaystyle \frac{g^4}{8\pi ^2}}{\displaystyle \int _{4\mu ^2}^{\infty }}{\displaystyle \frac{dt^{\prime }}{t^{\prime }-t}}\sqrt{{\displaystyle \frac{t^{\prime }-4\mu ^2}{t^{\prime }}}}{\displaystyle \frac{1}{AB}}\mathrm{log}{\displaystyle \frac{A+B}{A-B}},`$ (8) $`\mathrm{with}A=\sqrt{(t^{\prime }-2\mu ^2)^2+s(t^{\prime }-4\mu ^2)},B=\sqrt{(t^{\prime }+s-4m^2)(t^{\prime }-4\mu ^2)}.`$ This expression depends on the variable $`s`$ and, thus, can provide some order of magnitude for the effect due to higher order processes. Its effect can be directly compared to that in the non-relativistic limit corresponding to $`s=4m^2`$, whose expression, also conservative, is given by: $`K^{(2)}={\displaystyle \frac{g^4}{8\pi ^2}}{\displaystyle \int _{4\mu ^2}^{\infty }}{\displaystyle \frac{dt^{\prime }}{t^{\prime }-t}}\sqrt{{\displaystyle \frac{t^{\prime }-4\mu ^2}{t^{\prime }}}}{\displaystyle \frac{1}{AB}}\mathrm{log}{\displaystyle \frac{A+B}{A-B}},`$ (9) $`\mathrm{with}A=\sqrt{(t^{\prime }-2\mu ^2)^2+4m^2(t^{\prime }-4\mu ^2)},B=\sqrt{t^{\prime }(t^{\prime }-4\mu ^2)}.`$ A last expression of interest corresponds to the non-relativistic limit where $`m\to \infty `$. Not surprisingly, it is identical to that obtained by considering time-ordered diagrams in the same limit and may be written as: $$K^{(2)}=\frac{g^4}{8\pi ^2}\int _{4\mu ^2}^{\infty }\frac{dt^{\prime }}{t^{\prime }-t}\sqrt{\frac{t^{\prime }-4\mu ^2}{t^{\prime }}}\frac{1}{2m^2(t^{\prime }-4\mu ^2)}.$$ (10) In this last expression, the possible effect of off-shell extrapolations might be studied by replacing the factor $`m^2`$ by $`\frac{s}{4}`$. As a theoretical model, the Bethe-Salpeter equation is most often used with an interaction kernel derived from the exchange of a scalar neutral particle. Anticipating the conclusion that the contribution of the crossed two-boson exchange in this case could be strongly misleading, we also considered the case of the exchange of bosons that carry some “isospin” . 
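For orientation (a numerical sketch of ours, with illustrative parameter values rather than those used later in the paper), the $`t^{\prime }`$ integral in the $`u=0`$ kernel of Eq. (7) is straightforward to evaluate with a standard quadrature; the integrand falls off like $`1/t^{\prime 2}`$ at large $`t^{\prime }`$, so the integral converges for $`t<4\mu ^2`$:

```python
import numpy as np
from scipy.integrate import quad

def k2_u0(t, m=1.0, mu=0.15, g2=10.0):
    """Crossed two-boson exchange kernel, u = 0 choice of Eq. (7), for t < 4*mu**2.
    m, mu, g2 are illustrative values (our assumption, not the paper's)."""
    def integrand(tp):
        return (np.sqrt((tp - 4*mu**2) / tp) / (tp - t)
                / (2.0 * (mu**4 + m**2 * (tp - 4*mu**2))))
    val, _ = quad(integrand, 4*mu**2, np.inf, limit=200)
    return g2**2 / (8.0 * np.pi**2) * val

for t in (-2.0, -1.0, 0.0):
    print(t, k2_u0(t))
```

The kernel is positive and grows monotonically as $`t`$ approaches the cut at $`4\mu ^2`$, just like the single-boson exchange of Eq. (2).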
In particular, for the case of two isospin $`\frac{1}{2}`$ constituents and an isospin 1 exchanged particle, a factor $`\vec{\tau }_1\cdot \vec{\tau }_2`$ would appear in the single boson exchange contribution, Eq. (2), while a factor $`(3+2\vec{\tau }_1\cdot \vec{\tau }_2)`$ should be inserted in the expressions involving the crossed two-boson exchange, Eqs. (7) - (10). The factor relative to the iterated one-boson exchange would be $`(3-2\vec{\tau }_1\cdot \vec{\tau }_2)`$. For a state with an “isospin” equal to 1, the factor $`\vec{\tau }_1\cdot \vec{\tau }_2`$ is equal to 1 and thus the single boson exchange, Eq. (2), remains unchanged. It is easily checked that the factor relative to the iterated single boson exchange, $`(3-2\vec{\tau }_1\cdot \vec{\tau }_2)`$, is also equal to 1 (as well as for the multi-iterated exchanges). On the contrary, for the crossed two-boson exchange, the factor $`(3+2\vec{\tau }_1\cdot \vec{\tau }_2)`$ makes a strong difference. It is equal to 5 instead of 1. Its consequences, and the possible role of crossed-boson exchange in restoring the validity of the instantaneous approximation lost in the ladder Bethe-Salpeter equation, will be examined in Sect. 4. ## 3 Numerical method In order to solve Eq. (1) we employ a variational method that, in a similar form, was already used for one of the first numerical solutions of Bethe-Salpeter equations . This method has several advantages, like requiring little computer time and allowing for an easy control of the numerical solutions. The drawback lies in the fact that in the weak binding limit, due to our particular choice of trial functions, the convergence of the eigenvalues becomes extremely slow, which implies less accurate solutions in this limit. This kind of problem is similar to the one encountered in ref. , where the rate of convergence was studied in detail. 
We have checked carefully that, in the region for which we report numerical solutions in this work, we reproduce the results of ref. for calculations in the ladder approximation. ### 3.1 Solution of the equation in ladder approximation We consider the problem of two scalar particles of equal mass $`m`$ which interact via the exchange of a third scalar particle of mass $`\mu `$. After removal of the center of mass coordinate and performing the Wick rotation to work in Euclidean space, one is left with an eigenvalue problem in four dimensions for the coupling constant $`\lambda `$. We use the parameter $`ϵ=E/2m`$, $`0<ϵ<1`$, where $`E`$ is the total energy of the bound state, and we take $`m`$ as our mass unit. The Bethe-Salpeter equation in coordinate space takes the form $$L\mathrm{\Psi }=\lambda V\mathrm{\Psi },$$ (11) where $$L=\left(-\mathrm{\Box }+1-ϵ^2\right)^2-4ϵ^2\frac{\partial ^2}{\partial x_0^2},\mathrm{\Box }=\underset{i=1}{\overset{4}{\sum }}\frac{\partial ^2}{\partial x_i^2},$$ (12) and the interaction in ladder approximation is $$V(R)=\frac{1}{\pi ^2}\int d^4q\frac{e^{iqx}}{q^2+\mu ^2}=\frac{4\mu }{R}K_1(\mu R),R=\left(\underset{i=1}{\overset{4}{\sum }}x_i^2\right)^{1/2}.$$ (13) In the last equation, $`K_1(x)`$ is the modified Bessel function of the second kind. For $`\mu =0`$, the potential function $`V(R)`$ becomes simply $`4/R^2`$. The eigenvalue $`\lambda `$ in Eq. (11) is related to the coupling constant $`g^2`$ used in the previous section by $`g^2=(4\pi )^2m^2\lambda `$. Equation (11) is invariant under rotations in the three-dimensional subspace (but not in the complete four-space, except for $`ϵ=0`$). One can therefore separate the angular part of the wavefunction, to get a partial differential equation in two variables. 
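As a quick check of Eq. (13) (our own aside), the $`\mu \to 0`$ limit $`V(R)\to 4/R^2`$ follows from $`K_1(x)\simeq 1/x`$ at small $`x`$ and is easily confirmed numerically with scipy's modified Bessel function:

```python
import numpy as np
from scipy.special import k1

def V(R, mu):
    """Ladder potential of Eq. (13)."""
    return 4.0 * mu * k1(mu * R) / R

R = 2.0
for mu in (0.5, 0.1, 0.001):
    print(mu, V(R, mu), 4.0 / R**2)  # last column: the mu = 0 limit 4/R^2
```

The tabulated values approach $`4/R^2`$ from below as $`\mu `$ decreases, reflecting the exponential screening of the Yukawa-like potential at finite boson mass.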
A more convenient way to proceed is to switch to spherical coordinates in four dimensions and introduce the four-dimensional spherical harmonics $$|nlm\rangle =\sqrt{\frac{2^{2l+1}}{\pi }\frac{(n+1)(n-l)!l!^2}{(n+l+1)!}}\mathrm{sin}^l\chi C_{n-l}^{l+1}(\mathrm{cos}\chi )Y_l^m(\theta ,\varphi ).$$ (14) This choice is particularly suited for our problem, since apart from obeying simple orthonormality relations, these spherical harmonics are eigenfunctions of the d’Alembertian operator $`\mathrm{\Box }`$. The $`C_n^l(x)`$ in Eq. (14) are Gegenbauer polynomials. We expand the functions $`\mathrm{\Psi }`$ of Eq. (11) in these four-dimensional spherical harmonic functions, which implies that we have to determine a set of functions of one variable $`R`$ only. We thus arrive at the structure of our trial function $$\mathrm{\Psi }=\underset{n}{\sum }R^nf_{nl}(R)|nlm\rangle $$ (15) where the factor $`R^n`$ follows from the asymptotic behaviour of the wave function at the origin. Now we expand the functions $`f_{nl}`$ in terms of some basis functions, which are chosen of gaussian type: $$f_{nl}(R)=\underset{i}{\sum }c_ie^{-\alpha _iR^2}$$ (16) where the $`\alpha _i`$ are stochastically chosen parameters. Note that due to the transformation properties of the spherical harmonic functions, the sum in Eq. (15) always runs over either even or odd values of $`n`$, according to the quantum numbers of the state. In particular this allows us to calculate states of different relative time parity separately. With the basis functions of Eq. (16) the matrix elements of $`L`$ are simply computed and for those of $`V`$ we just need the integrals $$\int _0^{\infty }R^{2n}e^{-\alpha R^2}K_1(\mu R)dR=\frac{\mathrm{\Gamma }(1+n)\mathrm{\Gamma }(n)}{2\alpha ^n\mu }e^{\frac{\mu ^2}{8\alpha }}W_{-n,\frac{1}{2}}\left(\frac{\mu ^2}{4\alpha }\right),$$ (17) where $`W_{\mu ,\nu }(z)`$ is Whittaker’s function. ### 3.2 Inclusion of crossed diagrams The potential function of Eq. (13) was obtained from the one-meson exchange Feynman diagram, which gave a contribution (in Minkowski space and momentum representation) $$K^{(1)}=-(4\pi )^2m^2\frac{\lambda }{t-\mu ^2},t=(p-p^{\prime })^2,$$ (18) where $`\mu `$ is the mass of the exchanged meson. With the results of Sect. 2, the contribution of the crossed diagram can be expressed with the help of a dispersion relation in the form $$K^{(2)}=-(4\pi )^2\mathrm{\hspace{0.17em}2}m^4\int _{4\mu ^2}^{\infty }\frac{\lambda ^2}{t-t^{\prime }}F(t^{\prime })dt^{\prime },$$ (19) where the spectral function $`F(t^{\prime })`$ is given by one of the $`t`$-independent forms under the integrals of Eqs. (7)-(10). Now, since for all the cases considered here $`F(t^{\prime })`$ is an analytic function everywhere in the integration domain, we are allowed to carry out the Wick rotation, and by making the transition to configuration space we obtain the generalization of Eq. (11), $$L\mathrm{\Psi }=\lambda V\mathrm{\Psi }+\lambda ^2W\mathrm{\Psi },$$ (20) where the second potential function is given by $$W(s,R)=\frac{8}{R}\int _{4\mu ^2}^{\infty }\sqrt{t^{\prime }}K_1(R\sqrt{t^{\prime }})F(t^{\prime })dt^{\prime }.$$ (21) We have indicated by the functional argument in Eq. (21) that, contrary to $`V`$, the new function $`W`$ may also depend on the total energy of the system, i.e. the Mandelstam variable $`s`$. For the choices of $`F(t^{\prime })`$ that we consider here, this is only the case for Eq. (8). As we see, Eq. (20) is no longer a simple eigenvalue problem, but it can easily be solved by iteration in the form $$L\mathrm{\Psi }=\lambda (V+\lambda W)\mathrm{\Psi }\equiv \lambda U\mathrm{\Psi },$$ (22) where at each step the determined value of $`\lambda `$ is reinserted into $`U`$. ## 4 Results ### 4.1 Comparison with non-perturbative results We first present results for the ground state mass as a function of the dimensionless coupling constant $`g^2/4\pi m^2=4\pi \lambda `$ for the case $`\mu /m=0.15`$. 
The reason for this choice is that recently Nieuwenhuis and Tjon reported the first calculation of bound state properties beyond the ladder approximation using the Feynman-Schwinger representation (FSR). Since this formulation takes into account all ladder and crossed ladder diagrams, we may expect the results of the present approach to lie somewhere between those of the ladder and the Feynman-Schwinger calculations. If the perturbative series expansion of the Bethe-Salpeter kernel turns out to converge reasonably fast, which we would like to be the case, our results should actually be closer to those of the Feynman-Schwinger approach. This would imply that even higher order terms of the kernel could safely be neglected in perturbative calculations. In Fig. (2) we show the ladder results and the Feynman-Schwinger results (taken from ref. ) together with the various off-shell extrapolations given by Eqs. (7) - (10). For small couplings, the differences between these various choices are small compared to the difference between the ladder and the Feynman-Schwinger results, allowing one to make safe statements about the contribution of the crossed two-boson exchange diagram. The binding energies so obtained are found just about halfway between the FSR and the ladder results. The remaining, still considerable, discrepancy with the exact binding energies of the non-perturbative approach makes us believe that even higher order terms than the crossed two-boson exchange term in the kernel are essential for doing reliable calculations within the Bethe-Salpeter framework at large coupling. The inset of Fig. (2) shows the evolution of our solutions over the whole range of binding energies. As can be seen, the solution corresponding to the parametrization of Eq. (8) shows a double-valued structure with a lower branch where the binding energy decreases with increasing coupling. 
This is a consequence of the energy dependence of this choice, since replacing $`s`$ by $`4m^2`$, which corresponds to Eq. (9), renders the curve monotonically decreasing. It is however interesting to note that the lower branch does not start off at a coupling constant equal to 0 for $`s=0`$, as is the case, for instance, for the Gross equation (see below). Also the lower branch solution of the equal mass Klein-Gordon equation goes through 0 for $`s=0`$. This qualitative difference to well known facts led us to investigate the $`\mu `$ dependence of our results. It was found that, from a value of the exchanged particle's mass of about $`\mu \simeq 0.5`$ on, the curves become monotonically decreasing even for the choice of Eq. (8). On the other hand, for $`\mu \to 0`$, the lower branch tends to $`s=0`$ for vanishing coupling. This could have been immediately expected, since for $`\mu =0`$ the integral of Eq. (8) diverges, reminiscent of the original infrared divergence of the box diagram. This is somewhat of a pity, since the case $`\mu =0`$, in the ladder approximation, corresponds to the Wick-Cutkosky model, which for $`s=0`$ admits analytic solutions in the form $`\lambda =N(N+1)`$ . With the crossed box diagram included, all these solutions become degenerate at $`\lambda =0`$. We shall not comment any more on this subject, as we believe that further theoretical work is required in order to correctly account for the contribution of the crossed box diagram in the case $`\mu =0`$. Nieuwenhuis and Tjon also give a detailed comparison of their results with those of various quasipotential equations. They considered the BSLT equation, the equal-time (ET) equation and the Gross equation . Generally, these equations reduce the description from a 4-dimensional to a 3-dimensional one by making an ansatz for the propagators and the potentials involved. 
Specifically, the propagator factor of the Bethe-Salpeter equation in ladder approximation $$S^{-1}(p)\mathrm{\Phi }(p)=\frac{1}{(2\pi )^4}\int d^4p^{\prime }V(p-p^{\prime })\mathrm{\Phi }(p^{\prime }),$$ (23) (that is Eq. (1) after the Wick rotation) is replaced by the following forms in the various approaches (compare ref. ): $`S_{\mathrm{QPE}}(p)`$ $`\stackrel{\mathrm{BSLT}}{=}`$ $`{\displaystyle \frac{1}{4\sqrt{\vec{p}^2+m^2}}}{\displaystyle \frac{2\pi \delta (p_0)}{\vec{p}^2+m^2-\frac{1}{4}s}},`$ (24) $`\stackrel{\mathrm{ET}}{=}`$ $`{\displaystyle \frac{1}{4\sqrt{\vec{p}^2+m^2}}}{\displaystyle \frac{2\pi \delta (p_0)}{\vec{p}^2+m^2-\frac{1}{4}s}}\times \left(2-{\displaystyle \frac{s}{4(\vec{p}^2+m^2)}}\right),`$ (25) $`\stackrel{\mathrm{Gross}}{=}`$ $`{\displaystyle \frac{1}{4\sqrt{s}\sqrt{\vec{p}^2+m^2}}}{\displaystyle \frac{2\pi \delta \left(p_0+\frac{1}{2}\sqrt{s}-\sqrt{\vec{p}^2+m^2}\right)}{\sqrt{\vec{p}^2+m^2}-\frac{1}{2}\sqrt{s}}}.`$ (26) In the potential function $`V(p-p^{\prime })=K^{(1)}(p,p^{\prime },P)`$ for the BSLT and ET equations, the time component is simply neglected: $$V(p-p^{\prime })=g^2\frac{1}{(\vec{p}-\vec{p}^{\prime })^2+\mu ^2},$$ (27) whereas for the Gross equation $`V`$ takes the form $$V(p-p^{\prime })=g^2\frac{1}{(\vec{p}-\vec{p}^{\prime })^2-(\omega -\omega ^{\prime })^2+\mu ^2},$$ (28) with $`\omega =\sqrt{\vec{p}^2+m^2}-\frac{1}{2}\sqrt{s}`$ and $`\omega ^{\prime }=\sqrt{\vec{p}^{\prime 2}+m^2}-\frac{1}{2}\sqrt{s}`$ . In ref. it was found that all the binding energies obtained in these quasipotential models are distributed between the energies of the FSR and the ladder calculations. In Fig. (3) we compare our results with those of the equations specified above. For small coupling constants, we find that our binding energies, represented by the choice $`u=0`$ indicated in Fig. (3), are remarkably close to those of the ET and Gross equation. 
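To make the comparison concrete (an illustrative aside of ours, with $`m=1`$ and an arbitrary below-threshold value of $`s`$), the three-dimensional weight factors multiplying $`2\pi \delta (\mathrm{})`$ in Eqs. (24)-(26) can be tabulated directly; note that for $`\vec{p}=0`$ and $`s\to 4m^2`$ the extra ET factor tends to 1, so the ET and BSLT weights coincide there:

```python
import numpy as np

M = 1.0  # constituent mass, our mass unit

def w_bslt(p, s):
    e2 = p**2 + M**2
    return 1.0 / (4.0 * np.sqrt(e2)) / (e2 - s / 4.0)

def w_et(p, s):
    return w_bslt(p, s) * (2.0 - s / (4.0 * (p**2 + M**2)))

def w_gross(p, s):
    e = np.sqrt(p**2 + M**2)
    return 1.0 / (4.0 * np.sqrt(s) * e) / (e - np.sqrt(s) / 2.0)

s = 3.6  # an arbitrary bound-state energy below threshold s = 4*M**2
for p in (0.0, 0.5, 1.0):
    print(p, w_bslt(p, s), w_et(p, s), w_gross(p, s))
```

All three weights are positive below threshold; the differences between them grow with $`|\vec{p}|`$, which is one way of seeing why the quasipotential binding energies spread out at large coupling.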
Since it is often conjectured that results like these may tend to support the validity of the instantaneous approximation, we would like to point out that this is true only for the specific model considered here. In fact, one can quickly convince oneself of the special nature of this approximate agreement by considering another specific case, where the involved particles would carry some sort of isospin. For the case already mentioned in Sect. 2, the contribution of the crossed box for a state of isospin 1 gets multiplied by a factor of five, whereas the ladder terms remain unchanged. It can be seen in Fig. (3) that the corresponding curve even largely overshoots the Feynman-Schwinger points. The inset of Fig. (3) shows again the evolution of the curves over the whole range of binding energies, and one can notice the unphysical lower branch of the Gross equation already mentioned above and discussed in ref. . Let us finally notice that results quite similar to the ones presented here can also be obtained in a non-relativistic scheme. Details about this approach will be presented elsewhere . ### 4.2 Excited states In Fig. 4 we show the complete spectrum of the lowest states for a spatial orbital angular momentum $`l=0`$ and an exchanged particle of mass $`\mu =0.15`$. The spectrum of the ladder approximation, on the left hand side, is very similar to the one of the Wick-Cutkosky model. There are normal and abnormal solutions, the latter ones corresponding to excitations in the relative time variable. For $`s=0`$ an approximate degeneracy appears. This can be traced back to the extra symmetry that occurs in the Wick-Cutkosky model, i.e. for $`\mu =0`$, when $`s\to 0`$ (O(5) instead of O(4)). Like in this model, the normal states tend to the correct non-relativistic limit when the coupling gets small, whereas the abnormal states exist only for larger values of the coupling constant. 
The only qualitative difference from the Wick-Cutkosky model is the fact that the different abnormal solutions do not tend to a common value of the coupling constant when the binding energy becomes small. Quantitatively, however, the spectrum is changed considerably by the inclusion of the crossed-box diagram. On the right hand side of Fig. 4 we show the results for the energy-independent choice $`u=0`$ of Eq. (7). First note the different scale in the coupling constant. Generally, the crossed box increases the binding energies for all states as compared with the ladder results. Second, it has to be stated that we still find abnormal solutions, with properties in fact similar to those in the ladder approximation. Ever since the initial conjecture of Wick, it has been repeatedly claimed in the literature that these abnormal states should be spurious consequences of the ladder approximation. The mere existence of these states beyond the ladder approximation is therefore already an interesting result per se. As for the qualitative properties of the solutions, we see that the normal ones still behave normally, that is, they seem to have the correct weak-binding limit. The abnormal solutions also behave as in the ladder approximation in this limit, since they tend to different, but larger, values of the coupling constant. In the strong-binding limit, however, the approximate degeneracy of states is severely lifted. This could have been expected, since the effect of the term on the right hand side of Eq. (7) is similar to the ladder term, Eq. (2), with a much higher effective mass $`\mu `$. An interesting feature is the fact that the abnormal solutions which are odd functions of the relative time receive a more attractive contribution from the crossed-box diagram than the even time-parity abnormal solutions. This is clearly seen in Fig. 4, where the odd time-parity solutions get shifted to the left, and the even time-parity solutions to the right, of the corresponding normal solutions.
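The lifting of a degeneracy by an additional coupling between configurations is generic; a minimal two-level toy model (not the Bethe-Salpeter system itself, and with purely illustrative numbers) shows how a nonzero mixing matrix element pushes two nearly degenerate levels apart:

```python
import math

def two_level(e1, e2, v):
    """Eigenvalues of the symmetric 2x2 matrix [[e1, v], [v, e2]];
    v plays the role of a configuration-mixing matrix element."""
    avg = 0.5 * (e1 + e2)
    gap = math.hypot(0.5 * (e1 - e2), v)
    return avg - gap, avg + gap

# Sweep a parameter g along which the uncoupled levels would cross at g = 0.
for g in (-1.0, -0.5, 0.0, 0.5, 1.0):
    lo, hi = two_level(g, -g, 0.2)
    print(g, lo, hi)   # the splitting hi - lo never drops below 2*|v|
```

The same mechanism is behind the level repulsion near crossing regions in coupled-channel spectra: for any nonzero mixing the two eigenvalues never actually meet.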
This even leads to the crossing of abnormal states that in the ladder approximation would belong to different O(4) multiplets. This crossing of states does not cause any numerical problems since, as outlined in Sect. 3, we calculate even and odd time-parity solutions separately. A problem occurs, however, for the even time-parity case. Since the normal solutions all tend to a value that is closer to zero than the limit of the abnormal solutions, the former necessarily cross some of the latter if they initially belonged to a higher O(4) multiplet. This leads to a perturbation of the eigenvalues near the crossing region, which can be clearly seen in Fig. (4) in the case of the two positive time-parity abnormal solutions. The effect is in principle also present in the ladder approximation, but it seems somewhat less pronounced there. In fact, taking a closer look at the crossing region, one rather finds a repulsion of the two solutions, with no real crossing taking place. In this region, it is obviously not possible to identify the two solutions, due to configuration mixing. It is only by the smooth continuation of the curves after the crossing that the solutions were identified in order to draw the graph of Fig. 4. The curve for the abnormal solution after the crossing is, however, not expected to be meaningful anymore, since it gets repeatedly perturbed by all the normal states crossing it. ## 5 Conclusion In this work, we have studied the contribution due to crossed two-boson exchange to the binding energy of a system made of two distinguishable particles. This was done in the framework of the Bethe-Salpeter equation. The present work completes the one by Nieuwenhuis and Tjon for the lowest $`l=0`$ state, allowing one to determine the role of the simplest crossed-boson exchange contribution among all those included in their work.
For the range of coupling constants where a comparison is possible, this contribution is rather well determined and accounts for roughly half of the total effect, the discrepancy tending to increase with the coupling. This is consistent with the expectation that the role of multi-boson exchanges not included here should increase similarly. The result is, however, somewhat disturbing. It implies that the convergence of the Bethe-Salpeter approach in terms of an expansion of the interaction kernel in the coupling constant is likely to be slow. We obviously assumed that the comparison is meaningful, namely that the results of Nieuwenhuis and Tjon exclude effects from the self-energy or vertex corrections ignored here. Amazingly, our results, which include crossed two-boson exchange contributions, are close to those obtained with the instantaneous approximation, where these contributions are absent. This confirms a theoretical result obtained in a non-relativistic scheme . As there, the specific character of this coincidence is demonstrated by the consideration of a model with “charged” bosons, which leads to a different conclusion, showing that the validity of the instantaneous approximation is limited to the Born approximation. We have looked at excited states, including abnormal ones. Our results are partly academic. The effect due to the simplest crossed-boson exchange is so large that one has to seriously worry about higher-order crossed-boson diagrams. Interesting features nevertheless show up. There is no tendency for abnormal states to disappear, as sometimes conjectured in the literature. Their energies are affected differently, depending on the parity of the states under a change in the sign of the relative time coordinate. We only briefly considered the case of an exchanged boson with zero mass, which would be most interesting in view of possible applications to QED. This case requires further theoretical work to deal with the divergences that appear.
Getting rid of the $`\alpha ^3\mathrm{log}\alpha `$ corrections to the binding energy obtained in the ladder approximation for the case of scalar neutral bosons should be easily achieved. Removing the $`\alpha ^3`$ corrections, which are absent in results obtained from the Dirac or Klein-Gordon equations, is more delicate. In any case, the difficulty has a somewhat general character and is not peculiar to the present approach. Another development concerns the understanding of the remaining discrepancies with the FSR results. Estimating the contribution of crossed three-boson exchange is not out of reach with some approximations. The net effect is unclear, however. In the non-relativistic case, at the order $`(\frac{1}{m})^0`$, it is expected to vanish as a result of a cancellation with a contribution due to the renormalization of the interaction . There remains the possibility that the higher-order corrections, which seem to be responsible for a slight extra binding in the present results, play an increasingly important role as the treatment of the problem becomes more complete.
## 1 Introduction According to the so-called holography principle, the maximum number of degrees of freedom in a volume is proportional to its bounding surface area . If true, this would amount to an enormous simplification of the world, as it would drastically reduce the degrees of freedom required to understand it. Furthermore, it would be informative as it could provide a holographic bound on entropy in a variety of physical settings, including cosmology. There are two ways to approach this question: either phenomenologically or at a fundamental level. Here - in line with other applications of this principle to cosmology - we shall concentrate on the former. We recall that an important motivation for this idea comes from the Bekenstein-Hawking results concerning black holes, according to which the entropy $`S_M`$ of the matter inside a black hole cannot exceed the Bekenstein-Hawking entropy $`S_{BH}`$, given by a quarter of the area $`A`$ of its event horizon in Planck units, i.e. $$S_M\le S_{BH}=\frac{A}{4}.$$ (1) The aim of the holographic principle is essentially to generalise this result to more general settings, including cosmology . Leaving aside the justification for this enormous extrapolation, this generalisation poses important questions. To begin with, as opposed to the case of black holes (BH), where appropriate notions of volume and surface are naturally provided by the event horizon, it is not clear whether appropriate analogues of these notions in fact exist in general cosmological settings and, if so, whether they are unique and how they should be determined. This is particularly true of the choice of surfaces, as even for a fixed volume, the surface is not uniquely defined. Recently Fischler and Susskind have considered the application of the holography principle to cosmology. Their proposal may be stated as follows: Fischler–Susskind Proposal: Let $`M`$ be a four-dimensional spacetime.
Let $`\mathrm{\Gamma }`$ be a spatial region in $`M`$ with a two-dimensional spatial boundary $`B`$. Let $`L`$ be the light surface bounded by $`B`$ and generated by the past light rays from $`B`$ towards the centre of $`\mathrm{\Gamma }`$. Then the entropy passing through $`L`$ never exceeds the area of the bounding surface $`B`$. In particular, in the case of adiabatic evolution, the total entropy of the matter within the particle horizon<sup>1</sup><sup>1</sup>1Here this is taken to mean the creation light cone, as in , rather than the set of particles bounding causal connection, as originally defined by Rindler . must be smaller than the area of the horizon. They have shown that this proposal is compatible with flat and open Friedmann-Lemaître-Robertson-Walker (FLRW) cosmologies, but that it fails for the $`k=+1`$ recollapsing models. A number of attempts have subsequently been made to remedy this difficulty . In particular, Bousso has recently put forward a generalisation of this proposal and has applied it to a number of examples, including the recollapsing $`k=+1`$ FLRW cosmological models<sup>2</sup><sup>2</sup>2See for a recent critique of this scenario.. Here we take a closer look at the question of holography in a generic realistic inhomogeneous cosmological setting. We consider the proposal by Bousso and also put forward a modified version, in each case discussing the nature of the resulting light surfaces and the difficulties in their operational definability. In section 2, we briefly look at the application of this principle to FLRW models. Sections 3 and 4 contain a brief discussion of Bousso’s proposal in inhomogeneous cosmological settings and of the nature of the resulting light surfaces in these settings, respectively. In section 5 we put forward a modified version of this proposal and discuss the nature of the resulting light surfaces. Finally, section 6 contains our conclusions.
## 2 Holography and FLRW universes To begin with, let us briefly recall how the proposal by Fischler and Susskind runs into difficulty in the case of $`k=+1`$ recollapsing FLRW universes. In this case the metric is given by $$ds^2=-dt^2+a^2(t)\left(d\chi ^2+\mathrm{sin}^2\chi d\mathrm{\Omega }^2\right),$$ (2) where $`a(t)`$ is the scale factor, $`\chi `$ is the azimuthal angle and $`d\mathrm{\Omega }^2`$ is the line element of the 2–sphere at constant $`\chi `$. Assuming a constant<sup>3</sup><sup>3</sup>3An assumption that will obviously not be correct in a general inhomogeneous universe. comoving entropy density $`\sigma `$, the ratio $`S/A`$ can be readily given as $$\frac{S}{A}=\sigma \left[\frac{2\chi _H-\mathrm{sin}2\chi _H}{4a^2(\chi _H)\mathrm{sin}^2(\chi _H)}\right],$$ (3) where $`\chi _H=\int _0^t\frac{dt^{}}{a(t^{})}`$ is the comoving horizon size. This clearly shows that the bound can be violated in this case, on noting that the area (given by the denominator of (3)) becomes zero at the epoch of maximum expansion ($`a=a_{max}`$) given by $`\chi _H=\pi `$. In order to remedy this shortcoming, Bousso has put forward a generalisation of the Fischler–Susskind Proposal which considers all four light-like directions and selects some according to an additional criterion of non-positive expansion of the null congruences generating the null surfaces orthogonal to the starting surface $`B`$. More precisely, the idea is as follows: Bousso’s Proposal: Let $`M`$ be a four-dimensional spacetime which satisfies Einstein’s equations with the dominant energy condition holding for matter. Let $`A`$ be the connected area of a two-dimensional spatial surface $`B`$ contained in $`M`$. Let $`L`$ be the connected part of a hypersurface bounded by $`B`$ and generated by one of the four null congruences orthogonal to $`B`$ such that the expansion of this congruence, measured in the direction away from $`B`$, is non-positive everywhere. Let $`S`$ be the total entropy contained in $`L`$.
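As a simple numerical illustration of Eq. (3) (not part of the original analysis), one can take a dust-filled $`k=+1`$ model, for which $`a(\eta )=\frac{1}{2}a_{max}(1-\mathrm{cos}\eta )`$ in conformal time and $`\chi _H=\eta `$; the normalisation below is illustrative only.

```python
import math

def s_over_a(eta, sigma=1.0, a_max=1.0):
    """Eq. (3) for a dust k=+1 FLRW model, with chi_H = eta (conformal time)
    and a(eta) = (a_max/2)(1 - cos(eta)); units are illustrative."""
    a = 0.5 * a_max * (1.0 - math.cos(eta))
    num = 2.0 * eta - math.sin(2.0 * eta)        # entropy within the horizon
    den = 4.0 * a * a * math.sin(eta) ** 2       # horizon area (up to 4*pi)
    return sigma * num / den

for eta in (0.5, 1.5, 2.5, 3.0, 3.1):
    print(eta, s_over_a(eta))
```

The ratio grows without bound as $`\eta \to \pi `$, where the horizon area in the denominator vanishes at maximum expansion; this is the violation of the Fischler-Susskind bound discussed in the text.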
Then $`S\le A/4`$. A hypersurface $`L`$ with the above properties is then referred to as a light sheet (surface) $`𝒮`$ for the surface $`B`$. The crucial points regarding this proposal are that, first, it selects which of the four null surfaces orthogonal to $`B`$ can be considered as light sheets, and second, it determines what part of those null surfaces will be included in the light sheet: namely, they start at $`B`$, and any caustics present in the surface must act as end points to the light sheet, if it is extended that far. Considering the case of FLRW universes, on choosing a surface within the apparent horizon as the surface $`B`$, this proposal prevents the violation of the holography bound in the contracting phase . We note that in this simple (homogeneous and isotropic) case the light sheet, given optimally by the apparent horizon in a flat radiation-dominated universe, is indeed connected as well as being differentiable. Note that while the definition used is time-symmetric, the null surfaces in the expanding universe are not invariant under time reversal except at an instant of maximum expansion in a homogeneous universe (which does not correspond to the present-day situation), and clearly almost never in an inhomogeneous universe. In the following sections we study various aspects of this proposal in inhomogeneous cosmological settings. This is an essential extension of the previous work if it is to be taken as referring to the real universe. ## 3 Holography and inhomogeneous universes Let us consider a general realistic inhomogeneous universe, which may possibly possess a recollapsing phase. To begin with, recall that a fundamental feature of classical self-gravitating systems is that in general they are not in equilibrium states. This instability gives rise to the spontaneous creation of structure (lumpiness), which in the physically realistic case increases with time.
As an example of this we briefly recall a recent study of this question in the context of Lemaître–Tolman and Szekeres inhomogeneous cosmological models . Employing, as a measure of density contrast (structuration), covariant density contrast indicators of the form $$DC=\int _\mathrm{\Sigma }\left|\frac{h^{ab}}{\rho }\frac{\partial \rho }{\partial x^a}\frac{\partial \rho }{\partial x^b}\right|dV,$$ (4) it has been shown that in general such structuration varies with time, as expected<sup>4</sup><sup>4</sup>4Moreover, it has been shown that indicators of this kind exist which grow monotonically with time for both ever-expanding and recollapsing models of Lemaître–Tolman and Szekeres types, simultaneously (see for details).. Here $`\rho `$ is the density, $`h_{ab}=g_{ab}+u_au_b`$ projects orthogonal to the unit 4-velocity $`u^a`$ and $`\mathrm{\Sigma }`$ is a 3-surface. This indicates that density contrast (lumpiness) is likely to change with time in inhomogeneous cosmological models, which is bound to be reflected in the behaviour of the corresponding Ricci and Weyl tensors. Now the geometry of an arbitrary null surface encodes detailed information about the Ricci and the Weyl tensors encountered by that surface, as shown by the usual optical scalar equation (see e.g. ) $$\frac{d\theta }{d\lambda }=-\frac{1}{2}\theta ^2-\sigma _{ab}\sigma ^{ab}+\omega _{ab}\omega ^{ab}-R_{ab}\kappa ^a\kappa ^b$$ (5) together with $`\kappa ^c\nabla _c\sigma _{ab}`$ $`=`$ $`-\theta \sigma _{ab}+C_{cbad}\kappa ^c\kappa ^d`$ (6) $`\kappa ^c\nabla _c\omega _{ab}`$ $`=`$ $`-\theta \omega _{ab},`$ (7) where $`\theta `$, $`\sigma _{ab}`$ and $`\omega _{ab}`$ are respectively the expansion, shear and the twist of a congruence of null geodesics with the tangent field $`\kappa ^a`$. The convergence of the null generators changes according to the null component of the Ricci tensor sampled by the tangent vector, and the rate of change of distortion according to the Weyl tensor component sampled by these generators.
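The focusing described by Eq. (5) can be illustrated in its simplest (vacuum, shear- and twist-free) reduction, $`d\theta /d\lambda =-\frac{1}{2}\theta ^2`$, where an initially converging congruence reaches a caustic at finite affine parameter. The sketch below (illustrative numbers, not from the paper) integrates this reduced equation and compares it with the closed-form solution:

```python
def theta_exact(theta0, lam):
    # Solution of d(theta)/d(lambda) = -theta^2/2 (vacuum, shear- and twist-free)
    return theta0 / (1.0 + 0.5 * theta0 * lam)

theta0 = -1.0                 # initially converging congruence (theta < 0)
lam_caustic = -2.0 / theta0   # the expansion diverges here: the caustic

# Simple Euler integration up to just before the caustic
theta, lam, dlam = theta0, 0.0, 1e-4
while lam < 0.9 * lam_caustic:
    theta += -0.5 * theta * theta * dlam
    lam += dlam
print(theta, theta_exact(theta0, 0.9 * lam_caustic))
```

Adding the Ricci term on the right of Eq. (5) only strengthens the convergence when the null energy condition holds, which is why matter generically produces the caustics discussed below.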
The shear in turn alters the expansion, which determines the local rate of change of area along the generators. The total area entering the inequality in the holography conjecture is a summation of all the resulting infinitesimal area elements of the null surface, and so is a coarse-grained summation of all this information (in which all the fine details are lost). As the universe evolves and structures form, the gravitational focusing (and caustic<sup>5</sup><sup>5</sup>5Clearly focusing does not always lead to caustics.) properties in inhomogeneous cosmologies are time dependent, which in turn makes the structure of the light surfaces time dependent in these models, i.e. if we look at light surfaces associated with a spatial surface $`B_2`$ that lies to the future of a surface $`B_1`$, we expect a time dependence in the associated area and entropy. To fix ideas, consider an inhomogeneous universe possessing $`N`$ lumps at a given time $`t`$. Let us denote by $`C_i`$ the caustics produced by the lump $`i`$ in a null surface orthogonal to $`B`$ that starts off with non-expanding normals at $`B`$. Now given that, according to the above proposal, caustics must act as end points to the light surface $`𝒮`$, a necessary condition that $`𝒮`$ needs to satisfy to ensure the holography bound is that it should contain all such caustics, and hence $$𝒮\supseteq \underset{i}{\bigcup }C_i.$$ (8) This immediately raises a number of fundamental issues concerning the nature of such light surfaces and their operational definability in practice. ## 4 Nature of the light sheets in inhomogeneous universes Assuming that the bound defined by $`𝒮`$ does indeed hold in general inhomogeneous cosmological settings, a number of important questions still need to be addressed.
They include: ### 4.1 Differentiability and connectedness Given that light surfaces end at caustics, their structures are in general forced to be extremely complex and non-differentiable, with possibly disconnected or even fractal boundaries, depending upon the nature of the inhomogeneous lumpiness and the resultant caustics in the universe. We recall that a given source (lens) can in principle produce a hierarchy of caustics with a range of intensities. Thus each star will generally cause strong gravitational lensing with associated caustics<sup>6</sup><sup>6</sup>6The corresponding multiple images will in general not be detectable because they will lie too close to the apparent surface of the star.. However, if the star is in the core of a galaxy, there will also be much larger-scale multiple images and caustics associated with the gravitational field of the galaxy itself; and if that in turn is in the core of a rich cluster of galaxies, the cluster will produce strong lensing with associated arcs and arclets at even larger angular scales. In this way, each such star would contribute to multiple levels of lensing and caustics. Furthermore, strong gravitational lensing is a commonplace phenomenon. Indeed in the real universe we expect such a hierarchical structure, with at least $`10^{22}`$ caustics in our past light cone because of lensing caused by all the stars our past light cone intersects in all visible galaxies, with further multiple layers of caustics caused by additional lensing associated with at least some galaxy cores and some rich clusters of galaxies, as just indicated. At each level, caustics occur that are associated with parts of the past light cone that lie as indentations inside the boundary of the past, and are associated with multiple images of distant objects (see for example ).
When lensing occurs, the past light rays generating a past light cone self-intersect, first non-locally and then locally, as one follows them back from the apex of the light cone. A light ray near a lensing object is deflected inwards by the gravitational field of the lens as it passes near it. It swings back towards the optic axis (the null geodesic from the observer through the centre of the lens) and self-intersects a similar family of geodesics coming from the other side. At this point it moves from generating the outer part of the past light cone (the boundary of the past) to an inner part, folded inside and lying within the past of the apex point<sup>7</sup><sup>7</sup>7As is implied by Figure 3 of .. It continues until local self-intersection occurs at a cusp; from there on it generates the back part of the folded light cone, which also lies inside the past of the apex point. This is a general structure that results from the nature of the boundary of the past of a set of points in a generic space-time . In the case of non-spherical lenses, multiple caustics due to a single lens can lie inside each other; these complex null-cone geometries have been investigated analytically in the case of elliptical lenses, and numerically in the case of realistic lensing models (see e.g. ). Additionally, it has been shown that the presence of a single BH can produce an infinity of caustics associated with light rays that circle the black hole an arbitrary number of times - albeit with rapidly decreasing intensities<sup>8</sup><sup>8</sup>8Note that even though in practice caustics below a certain cut-off intensity may be ignored, in principle they all need to be taken into account..
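The multiple imaging underlying this caustic structure can be illustrated with the standard point-mass lens equation $`\beta =\theta -\theta _E^2/\theta `$ (angles in units of the Einstein angle $`\theta _E`$); the sketch below is a textbook result used only for illustration, not a construction from the paper.

```python
import math

def point_lens_images(beta, theta_E=1.0):
    """Image positions for a point-mass lens, beta = theta - theta_E^2/theta.
    Solving the quadratic theta^2 - beta*theta - theta_E^2 = 0 gives
    two images on opposite sides of the lens for every source position."""
    disc = math.sqrt(beta * beta + 4.0 * theta_E * theta_E)
    return 0.5 * (beta + disc), 0.5 * (beta - disc)

# As the source approaches the optic axis (beta -> 0), the two images
# approach the Einstein ring at theta = +/- theta_E.
for beta in (1.0, 0.1, 0.01):
    tp, tm = point_lens_images(beta)
    print(beta, tp, tm)
```

Even this simplest lens doubles the covering of the source plane; non-spherical lenses and multiply orbiting rays around a BH, as described above, multiply the sheets and caustics further.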
Thus even a finite universe with say $`10^{11}`$ black holes in the visible region - a very conservative estimate, given that we expect massive black holes at the centres of many galaxies, as well as all those resulting from the collapse of super-massive stars - could give rise to an infinite number of caustics associated with each of these black holes, and hence to an extremely complicated light surface. In this way the light sheet $`𝒮`$ (whether the past light cone of a single point or not) may be said to light trace the content of the universe on all scales and thus encode its complexity, particularly through its caustic structure. This makes sense since, in contrast to the case of BH<sup>9</sup><sup>9</sup>9Where the presence of the no-hair theorem ‘smoothes’ the information on the event horizon boundary. and completely smooth FLRW cosmological models, for which $`𝒮`$ is readily given in terms of the small number of parameters which characterise these systems<sup>10</sup><sup>10</sup>10Namely mass $`M`$ in the case of BH and the deceleration and the Hubble parameters ($`q_0,H`$) in the case of FLRW models., one would expect this surface to be complex in general cosmological settings, since no such constraints exist. To describe the detailed structure of a null surface (e.g. a past light cone) in a realistic cosmological setting will require many millions of parameters. ### 4.2 Operational definability Strictly speaking, to define the holography bound precisely, all $`C_i`$ need to be included in the construction of $`𝒮`$. The problem, however, is that the details of the $`C_i`$ are not given a priori by theory, but depend on the details of the contents of the universe (including the masses and sizes of the sources and lenses, together with detailed knowledge of their distributions in space and time), which need to be specified through observations. The crucial point is that $`𝒮`$ is constructive rather than theoretically given.
This then raises the important question of the operational definability of $`𝒮`$ for the real universe. Now given that all observations possess finite resolutions, only sources, lenses and caustics above certain threshold levels will be observable in practice. In this way, a cut-off (coarse-graining) is inevitably involved in the definition of $`𝒮`$. Thus limitations in observational resolution become a barrier to constructing the precise form of $`𝒮`$ and hence to ensuring the bound. This then raises the interesting question of whether one could formulate an averaged holography principle in terms of the averaged (coarse-grained) light surface $`𝒜𝒮`$. The problem, however, is that the coarse-graining (averaging) of the content (say the matter distribution) does not commute with the averaging of the geometry, mainly due to the nonlinearity of the curvature tensor (see for example and references therein for a detailed discussion of this issue). Worse still, the coarse-graining of neither of these two quantities would in general commute with the coarse-graining of the caustics. For example, a single BH of a given mass can produce an infinite number of caustics; it is not clear that a cut-off on the mass of the lens in general results in a similar cut-off in the area of the resulting caustics. ### 4.3 Time dependence and reversibility As was pointed out above, in a real universe the number of lumps $`N`$, as well as their positions, masses and shapes, vary with time. This is interesting since, in addition to the precise distribution of sources, lenses, etc., their time evolution is also important for the construction of $`𝒮`$, which is time dependent as the surface $`B`$ is moved to different epochs in the universe’s history. Now given that this evolution must ultimately be related to the question of entropy (whatever its precise formulation may be in the presence of gravitational fields), the surface $`𝒮`$ seems to encode time-dependent information regarding entropy as well.
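The non-commutativity of averaging with a nonlinear functional can be made explicit with a toy one-dimensional "density field"; the field values and the quadratic functional below are of course illustrative stand-ins for the matter distribution and the curvature's nonlinear dependence on it.

```python
def mean(xs):
    return sum(xs) / len(xs)

# Toy "density field": two lumps on an otherwise empty line.
rho = [0.0, 0.0, 8.0, 0.0, 0.0, 8.0, 0.0, 0.0]

# A nonlinear functional standing in for the curvature's dependence on rho.
f = lambda r: r * r

smooth_then_f = f(mean(rho))               # coarse-grain first: f(<rho>)
f_then_smooth = mean([f(r) for r in rho])  # fine-grained average: <f(rho)>

print(smooth_then_f, f_then_smooth)  # 4.0 vs 16.0: the operations do not commute
```

The gap between the two numbers is exactly the information about the lumps that the smoothing discards; the discussion of information loss in Sect. 5.2 trades on the same effect.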
What has not been done is to show that at later times in a realistic cosmological setting the total entropy encoded this way will be larger than at earlier times. This is one of the issues that needs clarifying; if the definition of entropy has the desired properties, this must work out successfully. Additionally, even though locally (in a spacetime sense) in the neighbourhood of the bounding surface $`B`$ one might argue (as is done by Bousso ) that the screen definition is invariant under time reversal, the actual surfaces will not be so in an expanding universe: a unique direction of time will be picked out by the expansion of the universe, and usually this will be marked by a difference in the expansions of the null normals to $`B`$. This difference will be enhanced by a major difference between the caustics encountered in the future and the past of $`B`$, with the growth of inhomogeneities at quite different evolutionary stages in the two directions of time from $`B`$, giving another way in which the geometry of $`𝒮`$ causes this time symmetry to be violated. ## 5 A modified proposal In the previous section it was shown that the light surfaces in Bousso’s proposal are likely to have extremely complicated structures in a real inhomogeneous universe. This is a direct consequence of the fact that in this proposal light surfaces are taken to end at caustics. Here we put forward a modified proposal which drastically simplifies the structure of these light surfaces. Before doing so, we note that it is important to distinguish carefully between $`𝒫_{}`$, the boundary of the past of the 2–surface $`B`$ that starts off from $`B`$ with converging null geodesics, and the light sheet $`𝒮`$ suggested by Bousso. The former is a subset of the latter; the boundary $`𝒫_{}`$ ends at the first self-intersections of the null sheets orthogonal to $`B`$, which will usually be non-local intersections, whereas the light sheet ends at caustics, which are local self-intersections.
It is in the region between these self-intersections that the complex connectivity occurs. It therefore makes sense to separate out the part of the light surface which is not part of the boundary of the past of $`B`$. We shall call this the inner light sheet ($`ℐ𝒮`$) and refer to the rest of the $`𝒮`$, i.e. the part of the null surface through $`B`$ that is also the null boundary of the past of $`B`$, as the outer light sheet ($`𝒪𝒮`$). Then $`ℐ𝒮`$ encodes detailed information on the strong lensing that occurs for the null surface, for it bounds the region after self-intersection but before caustics. The number and topology of such components depend on the lensing objects and hence reflect the degree of strong density inhomogeneity. However, weak inhomogeneities will not cause strong lensing and so will not be encoded in $`ℐ𝒮`$. It is thus this inner light sheet that produces the enormous complexity in the light sheets proposed by Bousso. Additionally, continuing the null surface beyond the first self-intersections until the caustics actually results in multiple coverings of part of the interior of $`B`$ by the null surfaces, which have turned in on themselves. Thus including this part results in excess area being counted, and a much more complex projection of data onto the null boundary than is necessary when setting up the holographic principle. For this purpose, it is only necessary to include data on the outer light surface $`𝒪𝒮`$; the data on $`ℐ𝒮`$ is then redundant, having already been counted on $`𝒪𝒮`$. We therefore propose a modified version of Bousso’s proposal thus: Proposal: Let $`M`$ be a four-dimensional spacetime which satisfies Einstein’s equations with the dominant energy condition holding for matter. Let $`A`$ be the connected area of a two-dimensional spatial surface $`B`$ contained in $`M`$.
Let $`L`$ be the hypersurface bounded by $`B`$ and generated by one of the four null congruences orthogonal to $`B`$ such that the expansion of this congruence, measured in the direction away from $`B`$, is non-positive everywhere, and ending on the boundary of the past of $`B`$. Let $`S`$ be the total entropy contained in $`L`$. Then $`S\le A/4`$. The hypersurface $`L`$ with the above properties is the outer light surface $`𝒪𝒮`$. The important feature of this modified proposal is that it cuts out the $`ℐ𝒮`$, together with the caustics and the fractal boundaries arising from them, and therefore has a much simpler light sheet structure. It also covers regions in the interior of $`B`$ only once. We therefore suggest that, in the case of realistic inhomogeneous cosmologies, this is a better surface to choose for the holographic principle and the associated entropy conjecture. ### 5.1 Nature of the light sheets in the modified proposal To begin with, let us note that the modified covariant entropy conjecture proposed above leaves unchanged all the examples considered by Bousso , including the $`k=+1`$ FLRW model. This is clear since none of the null surfaces in these examples contain self-intersections other than caustics, for example those at the origin of coordinates ($`r=0`$) in the light-sheets of 2-surfaces $`B`$ that are spherically placed about the origin (see Figure 2 of ). In the case of the more complicated inhomogeneous models with both caustics and self-intersections present, on the other hand, replacing $`𝒮`$ by $`𝒪𝒮`$ enormously simplifies the structure of the light surface. Despite this, a number of difficulties still remain. Firstly, even though the caustics are removed in this formulation, the non-continuity of the generators of the boundary of the past of $`B`$ still remains at the self-intersections, which makes the $`𝒪𝒮`$ surface non-differentiable there.
However, a simple smoothing over these regions where the outer surface has self-intersections should deal with this adequately in most cases, with the area of the smoothed surface arbitrarily close to that of the real surface. When this smoothing cannot be done, new effects may occur and a very careful analysis will be required. Secondly, since caustics can be locally determined, the end points of the $`𝒮`$ in Bousso’s proposal were definable locally - at least in principle. Our proposal relies on the determination of the null boundary of the past of $`B`$, which cannot be determined locally. So in this sense there are both advantages and disadvantages with this new proposal. It greatly simplifies the shape of the $`𝒮`$ - at least in theory - but operationally it is still difficult to determine $`𝒪𝒮`$. ### 5.2 Coarse-graining and information loss Crucial to the whole discussion is the issue of the scale of description used in the space-time model. One can represent the same physical situation at different averaging scales: thus we can represent the real universe (a) at a smoothed-out cosmological scale, where an FLRW model will suffice; (b) at a finer scale, where each cluster of galaxies is represented as an inhomogeneity; (c) at a finer scale still, where each galaxy is represented; (d) at a still finer scale, where each star and each black hole is individually represented. The nature of the surface $`𝒮`$ will be drastically different in these different representations. A coarse-graining procedure will relate them to each other - remembering all the time that these are different geometrical representations of the same physical situation. Now it is plausible that in most circumstances the definition of entropy is closely associated with coarse-graining, and with the loss of information that results from coarse-graining (see also ).
We might therefore expect quite different results for the entropy determined in terms of the areas associated with the null surfaces obtained on different averaging scales for models representing the same physical situation. We regard this as a fundamental issue but will not pursue it further here except for the following remarks. The area of the smoothed out model can be expected to be close to that obtained in the detailed (lumpy) model for the surface $`𝒪𝒮`$. It is the area of $`𝒮`$ and of $`𝒮`$ that will be very different in these two cases; indeed the latter will be empty in the smoothed-out case. The area of $`𝒮`$ is associated with strong lensing only, and may perhaps be considered as a measure of the pure gravitational entropy of the solution: the larger that area is, the larger the degree of inhomogeneity (and hence entropy) encoded in the gravitational field. On coarse-graining and consequent smoothing of the matter representation, the corresponding $`𝒮`$ will decrease to zero. The loss of this area represents loss of detailed information on the inhomogeneity structure of the gravitational field resulting from this coarse-graining. The one-way nature of the information loss associated with coarse-graining is reflected in the fact that the area of the fine-grained surface $`𝒮`$ is necessarily larger than that of $`𝒮`$, the latter being close to the area of the $`𝒪𝒮`$ in the smoothed-out (coarse-grained) description. There are potential parallels here with the presence of reversibility at the level of microphysics and irreversibility at the macroscopic level. What is needed to make the definitions and theory compelling is a comparison of entropy estimates and associated areas at earlier and later times in the history of the expansion of the universe, at different scales of description. We do not attempt this here, but note it as an important problem. 
## 6 Conclusion We have taken a closer look at the applications of the holography principle to cosmology, and in particular the proposal recently put forward by Bousso. We have argued that in a real inhomogeneous universe, the light surfaces defined in his way in order to satisfy the holography principle would be non-differentiable and extremely (in principle infinitely) complicated. Such a light surface can be viewed as a light tracing of the complexity in the universe, projected onto this surface; like a cosmological analogue of the images on the walls of Plato’s cave! In this way, satisfying the holography principle in a general inhomogeneous universe requires a detailed knowledge of the contents of the universe and in turn of its detailed caustic structure. Furthermore, the inevitable limits to the observational resolution put fundamental limits on the operational definability of this surface. Moreover, given the dynamical (and irreversible) evolutionary nature of light surfaces in general, such bounds cannot remain invariant under time reversal. We have introduced an alternative proposal which results in a much simpler light surface. However, operationally it is still very difficult to define such surfaces in practice. This leads us to conclude that in a realistic setting the theoretical existence of such surfaces must be clearly distinguished from their complexity and operational definability. It would be extremely useful if an averaged holography principle could be formulated. Failing this, given the enormous amount of detailed information required for the construction of such light sheets, it is difficult to see how such a principle - formulated phenomenologically - can prove useful in simplifying our understanding of the cosmos in practice. This of course does not rule out the possibility that such a principle is correct and useful in the real world at a fundamental level, which could still be vitally important. 
On the other hand, the phenomenological difficulties raised here, including the complexity, non–differentiability and potential fractality of such surfaces, might have some relevance in debates regarding the applications of the holographic principle at a fundamental level in other settings such as string theory. Acknowledgments: We wish to thank Robert Brandenberger, Nemanja Kaloper and Lee Smolin for valuable discussions and comments. RT benefited from PPARC UK Grant No. L39094 and GE from support from the FRD (South Africa).
# On the possibility of variation of the fundamental constants of physics in the static universe ## 1 Introduction In 1938 Dirac developed a cosmology based on a remarkable numerical coincidence among the fundamental quantities of physics. He proposed that only the gravitational constant changes with time. There have been many efforts to register this experimentally. Other scientists varied different physical constants (the electron mass, the light velocity, etc.) to explain the Large Numbers in physics or the Hubble law . We formulate our question another way: which fundamental constants, and how many of them, could be changed if the universe around us remained the same at all points of space and at all times. The synchronous variation of a number of fundamental constants is considered in the work . There, a static cosmological model has been constructed, based on the hypothesis of a continuous variation (decrease) of the light velocity in vacuum. The author obtains a variation of the fine structure constant, and this variation may be observable over an interval of 10 years. Our work is based on a “perfect cosmological principle”: the universe is spatially isotropic and homogeneous and it looks the same at all times . We also suggest that the universe expands with time. In our work we make no assumption about the expansion rate of the universe, described by Hubble’s constant. The classical steady state model requires that the expansion must always occur at the same rate $`H_0`$ (Hubble’s constant) as we measure today. The greatest shortcomings of the static model are its inability to describe the microwave background radiation and the abundance of light elements in the universe. Troitskii explains the microwave background radiation by the thermal radiation of galaxies . Burbidge and Hoyle conclude that all of the chemical elements and the microwave background radiation were produced by hydrogen burning in stars . The synchronous variation of fundamental constants is proposed in . 
Assumptions of the constancy of the fine structure constant, the Coulomb and Newton forces and the electric charge are made. Here we start from the concept that all bodies, including atoms, galaxies and the universe itself, expand with time. Thus the universe is static for an observer. We assume that the velocity of light is constant for any local observer. One way to make this assumption valid when the dimensions of bodies increase is to suggest that the velocity of light increases at the same scale as the expansion of bodies. We find that Planck’s constant increases and the masses of bodies decrease with time. ## 2 Assumptions An expansion of the universe is accepted to explain the redshift of light from remote celestial bodies. A wavelength $`\lambda _0`$ observed here at time $`t_0`$ is related to a wavelength $`\lambda _1`$ emitted at time $`t_1`$ by $$\frac{\lambda _0}{\lambda _1}=\frac{R(t_0)}{R(t_1)},$$ (1) where R(t) is a cosmic scale factor at time $`t`$. If the universe is expanding, then R($`t_0`$) $`>`$ R($`t_1`$), and (1) gives a redshift, while if the universe is contracting, then R($`t_0`$) $`<`$ R($`t_1`$), and (1) gives a blueshift. We now make our first assumption: the dimensions of all bodies vary with time at the same scale as the cosmic scale factor. Distances between bodies drawn on a rubber balloon expand as the balloon inflates, and the dimensions of the bodies on the balloon increase at the same scale as the distances between them. The dimensions of an atom also have to expand. The expansion of bodies should be understood as averaged over the universe. Our measurement tools are themselves made of bodies, so we are not able to detect this alteration of space and dimensions. So, the universe should obey a “perfect cosmological principle” : it looks the same not only at all points and in all directions, but also at all times. It is a steady state model of the universe, but without the necessity of continuous creation of matter. 
However, it is also a static universe for an observer, because he is not able to register this expansion. If the length unit increases while the velocity of light and the time rate remain unaltered, the observer will register that the velocity of light decreases with time. By this, we mean that the length unit increases but it stays the length unit for the observer at all times. We are not able to observe its increase in any way other than as an apparent decrease of the velocity of light. We preserve Einstein’s assumption of the constancy of the velocity of light by making another assumption: that the velocity of light increases from time $`t_1`$ to time $`t_0`$ ($`t_1<t_0`$) by the factor $$\mathrm{\Omega }(t_1\to t_0)=\frac{\lambda _0}{\lambda _1}.$$ (2) That is, $`\mathrm{\Omega }>1`$. Further, we will use $`\mathrm{\Omega }`$ instead of $`\mathrm{\Omega }(t_1\to t_0)`$. So we have supposed that the dimensions of bodies are smaller when the velocity of light is smaller, while the clock rate is the same. Therefore, observers are unable to register an alteration of $`c`$. These assumptions can be written as $$c_0=c_1\mathrm{\Omega },$$ (3) and $$l_0=l_1\mathrm{\Omega }.$$ (4) At time $`t_0`$ ($`t_1`$), $`c_0`$ and $`l_0`$ ($`c_1`$, $`l_1`$) are the velocity of light and the length, respectively. From (4) we conclude that the velocity $`v_0`$ of a free body at time $`t_0`$ can be expressed through the velocity $`v_1`$ of the same body at time $`t_1`$ as $$v_0=v_1\mathrm{\Omega }.$$ (5) It is important to note that the body has not been acted on by any forces while it was moving from time $`t_1`$ to time $`t_0`$. An observer will not register this velocity alteration, since the dimensions alter at the same scale and the time rate remains unchanged. 
The time derivative of the momentum is equal to the force, so the momentum of a free body does not change. It can be written as the product of the body mass $`m`$ and the velocity $`v`$. So, we get from equation (5) the relationship $$m_0=m_1/\mathrm{\Omega },$$ (6) where $`m_0`$ ($`m_1`$) is the body mass at time $`t_0`$ ($`t_1`$). The mass of the body decreases with time. Every body is composed of atoms. The dimension of an atom can be expressed in units of the Bohr radius, which is equal to $$a_0=\frac{h^24\pi ϵ_0}{m_ee^2},$$ (7) where $`ϵ_0=10^7/(4\pi c^2)`$ is the permittivity of empty space, $`m_e`$ is the electron mass, $`e`$ is the electron charge and $`h`$ is Planck’s constant. The variation of $`ϵ_0`$ depends on the variation of $`c`$. According to (3), (4), (6) and (7), we can express Planck’s constant $`h_0`$ at time $`t_0`$ through Planck’s constant $`h_1`$ at time $`t_1`$, when the electric charge remains unchanged, as $$h_0=h_1\mathrm{\Omega },$$ (8) that is, Planck’s constant increases with time in the same way as the velocity of light. The variation of these fundamental constants leads to a variation of the dimensions of bodies. This corresponds to changes of the space geometry. When the variation of constants depends on the space and time coordinates, we have curved spacetime. This curving of spacetime is caused by the changes of the linear dimensions of bodies. If the linear dimensions of an observer are smaller in some area than the linear dimensions of another observer in the same area, then this area will be larger for the first observer and he will need more time to cross it. So, we have curved spacetime. Meanwhile, the dimensions of bodies change in our curved spacetime. Thus, our spacetime differs from the accepted model of curved spacetime, in which the dimensions of bodies remain unchanged and only space and time alter. 
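The bookkeeping behind these scalings can be checked mechanically. Below is a minimal Python sketch (an illustration, not part of the original paper): each quantity is assigned the power of $`\mathrm{\Omega }`$ it picks up between $`t_1`$ and $`t_0`$, with the electric charge assumed unchanged as in the text, and the combined exponents of the fine structure constant $`e^2/(4\pi ϵ_0hc)`$ and of the Bohr radius of eq. (7) are verified:

```python
# Power of Omega that each quantity picks up between t1 and t0,
# read off from eqs. (3), (4), (6) and (8); the charge e is assumed unchanged.
scaling = {"c": 1, "l": 1, "m": -1, "h": 1, "e": 0}
scaling["eps0"] = -2 * scaling["c"]          # eps0 = 10^7/(4 pi c^2) ~ c^-2

def omega_power(product):
    """Total Omega-exponent of a product given as {quantity: power}."""
    return sum(scaling[q] * p for q, p in product.items())

# Fine structure constant e^2/(4 pi eps0 h c): must be invariant (exponent 0)
alpha = omega_power({"e": 2, "eps0": -1, "h": -1, "c": -1})

# Bohr radius, eq. (7): h^2 4 pi eps0 / (m_e e^2): must scale like a length (+1)
bohr = omega_power({"h": 2, "eps0": 1, "m": -1, "e": -2})

print(alpha, bohr)  # 0 1
```

The zero exponent for the fine structure constant and the unit exponent for the Bohr radius reproduce, by pure power counting, the consistency of the scheme.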
## 4 Variation of some physical quantities The assumption of an unaltered time rate means that the gravitational and electromagnetic forces remain unchanged in any situation. Since the Newton force $$F=\gamma \frac{m_1m_2}{r^2}$$ (9) remains unchanged, from equations (4) and (6) we get $$\gamma _0=\gamma _1\mathrm{\Omega }^4.$$ (10) The gravitational constant $`\gamma `$ increases with time. The ratio of the gravitational potential $$\phi =\gamma \frac{m}{r}$$ (11) to the square of the light velocity is constant. The Coulomb force $$F=\frac{1}{4\pi ϵ_0}\frac{q_1q_2}{r^2}$$ (12) between two charges $`q_1`$ and $`q_2`$ separated by a distance $`r`$ remains unchanged according to the assumptions made above. The dimensionless constant (fine structure constant) $$\alpha =\frac{1}{4\pi ϵ_0}\frac{e^2}{hc}$$ (13) remains unaltered under the variations of $`ϵ_0`$, $`c`$ and $`h`$ that we have proposed. In the same manner, we can obtain the variation of other electromagnetic quantities: $`\vec{E}_0`$ $`=`$ $`\vec{E}_1,`$ (14) $`\vec{B}_0`$ $`=`$ $`\vec{B}_1/\mathrm{\Omega },`$ $`\rho _0`$ $`=`$ $`\rho _1/\mathrm{\Omega }^3,`$ $`\vec{j}_0`$ $`=`$ $`\vec{j}_1/\mathrm{\Omega }^2,`$ and for the $`\vec{\nabla }`$ operator $$\vec{\nabla }_0=\frac{1}{\mathrm{\Omega }}\vec{\nabla }_1.$$ (15) Here $`\vec{E}`$ and $`\vec{B}`$ are the electric and magnetic field vectors, respectively; $`\rho `$ is the charge density and $`\vec{j}`$ is the current density. The above-mentioned variation of constants, if applied to the Maxwell equations, leaves these equations unchanged when $`\mathrm{\Omega }`$ is constant, or when the dependence of $`\mathrm{\Omega }`$ on the time or space coordinates can be neglected. In the same way one can determine how other fundamental constants or physical quantities vary. ## 5 Hubble law Our proposed method of varying the fundamental constants of physics corresponds to changes of the energy distances between levels in atoms. 
For example, the energies of the allowed states of the hydrogen atom are $$E_n=-\left(\frac{m_ee^4}{8ϵ_0^2h^2}\right)\frac{1}{n^2}.$$ (16) After varying $`ϵ_0`$, $`h`$ and $`m_e`$ we get $$E_{n_0}=E_{n_1}\mathrm{\Omega }.$$ (17) This means that when $`\mathrm{\Omega }>1`$, the corresponding levels of hydrogen atoms were higher, and the distances between them smaller, in the past. So, a photon emitted at time $`t_1`$ has lower energy than a photon emitted at time $`t_0`$ ($`t_1<t_0`$) by the same atom. An atomic spectrum from remote bodies is shifted to the red side because of the finite value of $`c`$. We should note that all physical quantities with the same dimension vary in the same way during the synchronous variation of constants. Thus, all lines (not only those described by equation (16)) of distant objects are redshifted. A time measurement based on transitions between levels in atoms shows an increase of the time rate when the light velocity increases. Meanwhile, the rate of a dynamical clock (for example, the period of revolution of the Earth about the Sun) does not depend on the light velocity variation. This means that a measurement of the light velocity with a material length standard and a quantum clock will show a decrease of the light velocity with time. It is important to note that the light velocity increases while the measurement shows a decrease. This happens because the quantum time rate increases when the light velocity and the length standard increase too. Troitskii finds that the light velocity measured in this way remains unchanged. The definition of the meter as the distance which light covers in 1/299792458 seconds is not meaningful unless the type of clock is specified. Measurements of time by dynamical and quantum clocks will yield different length standards. 
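As a numerical cross-check of eqs. (16) and (17), the following Python sketch evaluates the hydrogen ground-state energy with approximate SI values and then rescales the constants by an arbitrary factor $`\mathrm{\Omega }=1.25`$ (the value is illustrative, not from the paper):

```python
import math

def E_n(n, m_e, e, eps0, h):
    """Hydrogen energies of eq. (16): E_n = -(m_e e^4 / (8 eps0^2 h^2)) / n^2."""
    return -(m_e * e**4 / (8.0 * eps0**2 * h**2)) / n**2

# Approximate SI values
m_e, e, h, c = 9.109e-31, 1.602e-19, 6.626e-34, 2.998e8
eps0 = 1e7 / (4.0 * math.pi * c**2)   # permittivity of empty space, ~8.85e-12

E1 = E_n(1, m_e, e, eps0, h)
print(E1 / 1.602e-19)                 # ≈ -13.6 eV, the usual ground state

# Rescale as in the text: c -> Omega*c, h -> Omega*h, m -> m/Omega,
# eps0 -> eps0/Omega^2 (since eps0 ~ c^-2); e unchanged
Om = 1.25
E1_scaled = E_n(1, m_e / Om, e, eps0 / Om**2, h * Om)
print(E1_scaled / E1)                 # = Omega, i.e. eq. (17)
```

The energy ratio comes out exactly equal to $`\mathrm{\Omega }`$, since $`E_n`$ carries a total $`\mathrm{\Omega }`$-exponent of $`1+42=+1`$.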
## 6 Conclusions As we see, the assumptions of the expansion of bodies with time and of the constancy of the light velocity for a local observer lead to a variation of the other fundamental constants. The accepted cosmological theory forbids the expansion of bodies such as atoms, the Earth, the Sun and so on. The expansion of all material bodies is forbidden primarily because there would then be no criteria by which the expansion of the universe could be measured. We perceive this separation as very strange, because all these objects can be characterized by their dimensions. Therefore, in our cosmological model all bodies undergo expansion because of the variation of the light velocity, Planck’s constant, and the masses of bodies. A consequence of this variation is a steady state model without the creation of mass. The variation of the fundamental constants gives the Hubble law. Also, it changes neither the fine structure constant nor the electromagnetic equations. The fundamental constants of physics vary, and our universe is steady, without a Big Bang. The same method of variation of fundamental constants can be applied to gravitation. The assumption of a decrease of the velocity of light in the gravitational field should then be made. Thus, the factor $`\mathrm{\Omega }`$ is less than 1 and depends on the gravitational potential. We found that the quantum time rate varies but the dynamical clock rate remains unaltered. This discrepancy of rates can be registered by a measurement of the light velocity with a material length standard and a quantum clock. The measurement will show a decrease of the light velocity with time. The same measurements will show that the light velocity increases in the gravitational field. According to general relativity, a local observer will register the same light velocity.
Figure 1: Variance $`\sigma ^2(q)`$ of the distribution of overlaps as a function of system size $`L`$. The line represents the function $`0.71L^{-0.5}`$. Please note the double logarithmic scale. # Comment on “Glassy Transition in a Disordered Model for the RNA Secondary Structure” In a recent, very interesting paper, Pagnani, Parisi and Ricci-Tersenghi have studied the low-temperature behavior of a model for the RNA secondary structure. They claim that the model exhibits a breaking of the replica symmetry, since the width of the distribution $`P(q)`$ of overlaps may converge to a finite value at $`T=0`$. The authors used an exact enumeration method to obtain all ground states for a given RNA sequence. Because of the exponentially growing degeneracy, only sequences up to length $`L=256`$ could be studied. Here it is shown that, in contrast to the previous results, by going to much larger sizes such as $`L=2000`$, the variance behaves as $`\sigma ^2(q)\sim L^{-0.5}`$. This means that $`P(q)`$ becomes a delta function in the thermodynamic limit at $`T=0`$. The method used here combines the ideas presented in and . The method is faster than the algorithm of , because no floating-point arithmetic is necessary. Furthermore, the algorithm of is not exact, although usually true ground states are obtained. The technique of guarantees exact ground states but is restricted to small sizes. Here, a finite number of exact ground states is selected randomly from the set of all ground states, which is represented by a graph. As in an ordinary Monte-Carlo simulation, it has to be guaranteed that each ground state appears with the proper weight, i.e. with the same probability, since all ground states have exactly the same energy. This is ensured by the following technique: Let $`G_{i,j}`$ denote the set of ground states for the sequence $`[r_i,\dots ,r_j]`$. 
Similar to the representation of the partition function applied in , $`G_{i,j}`$ can be expressed in terms of ground states for smaller sequences: a ground state for the sequence $`[r_i,\dots ,r_j]`$ is either a ground state of $`[r_{i+1},\dots ,r_j]`$ (if the energy is low enough), or a combination of a pair $`(i,k)`$ ($`k\in \{i+1,\dots ,j\}`$) with an arbitrary ground state of $`[r_{i+1},\dots ,r_{k-1}]`$ and an arbitrary ground state of $`[r_{k+1},\dots ,r_j]`$ (if the energy is low enough). The calculation of all ground states proceeds as follows: $`G_{i,i}=G_{i,i+1}=\emptyset `$ holds for all feasible $`i`$. Starting with $`G_{i,i+2}=\{(i,i+2)\}`$ ($`i=1,\dots ,L-2`$), the complete set of ground states can be calculated recursively. The result is stored as an acyclic directed graph with the $`G_{i,j}`$ ($`1\le i\le j\le L`$) being the nodes and $`G_{1,L}`$ the root. At each node, edges pointing to the descendant sets $`G_{i+1,j}`$, $`G_{i+1,k-1}`$ and $`G_{k+1,j}`$ are stored instead of enumerating the states. Additionally, along with each node the ground-state energy $`E_0(i,j)`$ and the number of ground states $`d_{i,j}`$ are kept. The degeneracy $`d_{i,j}`$ can be calculated recursively as well. The selection of a ground state is performed by a steepest descent into the graph. Each ground state consists of the pairs encountered during the descent. At each node the descent continues either into the single descendant $`G_{i+1,j}`$ or into the two descendants $`G_{i+1,k-1},G_{k+1,j}`$, the alternative being chosen randomly. The probability for each choice is proportional to the number of ground states found in the corresponding branch(es). For that purpose the degeneracy values $`d_{i,j}`$ are used. This means that a path which contains twice the number of ground states of another path is selected on average twice as often. 
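The counting-and-sampling scheme can be illustrated with a simplified stand-in for the model: a Nussinov-style dynamic program that maximizes the number of pairs instead of using the paper’s disordered energy function (the toy A-U/G-C pairing rule and the absence of a minimum loop length are assumptions). The degeneracies play the role of $`d_{i,j}`$, and each branch of the descent is chosen with probability proportional to its count:

```python
import random

def pairs(a, b):
    """Toy pairing rule (an assumption): only A-U and G-C may pair."""
    return {a, b} in ({"A", "U"}, {"G", "C"})

def count_structures(seq):
    """Nussinov-style DP: N[i][j] = maximal number of pairs in seq[i..j],
    D[i][j] = number of distinct optimal structures (the degeneracy d_{i,j})."""
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    D = [[1] * n for _ in range(n)]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            best, deg = N[i + 1][j], D[i + 1][j]       # base i left unpaired
            for k in range(i + 1, j + 1):
                if pairs(seq[i], seq[k]):              # base i paired with k
                    nl = N[i + 1][k - 1] if k > i + 1 else 0
                    nr = N[k + 1][j] if k < j else 0
                    dl = D[i + 1][k - 1] if k > i + 1 else 1
                    dr = D[k + 1][j] if k < j else 1
                    if 1 + nl + nr > best:
                        best, deg = 1 + nl + nr, dl * dr
                    elif 1 + nl + nr == best:
                        deg += dl * dr
            N[i][j], D[i][j] = best, deg
    return N, D

def sample_structure(seq, N, D, i, j, rng):
    """Descend through the DP graph, choosing each branch with probability
    proportional to its degeneracy, so every ground state is equally likely."""
    if i >= j:
        return []
    options, weights = [], []
    if N[i + 1][j] == N[i][j]:                         # branch: i unpaired
        options.append(None)
        weights.append(D[i + 1][j])
    for k in range(i + 1, j + 1):                      # branches: pair (i, k)
        if pairs(seq[i], seq[k]):
            nl = N[i + 1][k - 1] if k > i + 1 else 0
            nr = N[k + 1][j] if k < j else 0
            if 1 + nl + nr == N[i][j]:
                dl = D[i + 1][k - 1] if k > i + 1 else 1
                dr = D[k + 1][j] if k < j else 1
                options.append(k)
                weights.append(dl * dr)
    k = rng.choices(options, weights=weights)[0]
    if k is None:
        return sample_structure(seq, N, D, i + 1, j, rng)
    return ([(i, k)]
            + (sample_structure(seq, N, D, i + 1, k - 1, rng) if k > i + 1 else [])
            + (sample_structure(seq, N, D, k + 1, j, rng) if k < j else []))

rng = random.Random(0)
seq = "AUAU"
N, D = count_structures(seq)
print(N[0][3], D[0][3])  # 2 optimal pairs, degeneracy 2
print(sorted(sample_structure(seq, N, D, 0, 3, rng)))
```

With this weighting, a subtree containing twice as many optimal structures is entered twice as often on average, so all ground states are sampled with equal probability.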
Therefore, it is guaranteed that each single ground state contributes with the same weight and a statistically correct $`T=0`$ average is obtained. For each sequence length the calculations were performed for 8000 independent realizations of the disorder, except for $`L=2000`$, where only 1800 random sequences were generated. For each realization 100 ground states were selected randomly and stored for further evaluation. It was tested that increasing this number does not change the results significantly. The resulting values of $`\sigma ^2(q)`$ are shown in Fig. 1 using a double logarithmic scale. Clearly, the function converges towards zero, thus $`P(q)\to \delta (q)`$ for $`L\to \mathrm{\infty }`$; a similar result was found for the model presented in . For small sizes, this convergence is much slower due to finite-size effects. This may be the reason that in no decision about the behavior of the width of $`P(q)`$ could be made. The author thanks A. Pagnani and F. Ricci-Tersenghi for interesting discussions on the subject. A.K. Hartmann, Institut für theoretische Physik, Göttingen, Germany
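The analysis of Fig. 1 amounts to a linear fit on the double logarithmic scale. A minimal sketch with synthetic data drawn around the quoted law $`0.71L^{-0.5}`$ (the sizes and the noise level are illustrative assumptions):

```python
import math, random

# Synthetic data following the quoted fit sigma^2(q) ~ 0.71 * L^(-0.5);
# the sizes and the 2% multiplicative noise are illustrative assumptions.
rng = random.Random(0)
sizes = [32, 64, 128, 256, 512, 1000, 2000]
var = [0.71 * L**-0.5 * math.exp(rng.gauss(0.0, 0.02)) for L in sizes]

# Least-squares line log(sigma^2) = log(a) + b*log(L): the slope b is the
# finite-size exponent read off the double logarithmic plot.
xs = [math.log(L) for L in sizes]
ys = [math.log(v) for v in var]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)
print(b, a)  # close to -0.5 and 0.71
```

A slope compatible with $`-0.5`$ then implies a vanishing width, $`\sigma ^2(q)\to 0`$ for $`L\to \mathrm{\infty }`$.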
# Influence of uncorrelated overlayers on the magnetism in thin itinerant-electron films ## I INTRODUCTION In the past few years, there has been growing interest in the magnetic behavior of thin metallic films. This attention originates both from fundamental physics and from applications. The reduced symmetry and the lower coordination numbers at the surface offer the possibility of inducing new and exotic phenomena, such as perpendicular anisotropy in ultrathin films, as well as interlayer coupling and giant magnetoresistance (GMR) in ferromagnetic metal/nonmagnetic metal (FM/NM) multilayers. In experiment, it has been shown that ultrathin transition metal films can display long-range ferromagnetic order from a monolayer on. It is well known that, due to the Mermin-Wagner theorem, an effectively two-dimensional spin-isotropic system cannot display long-range magnetic order at any finite temperature. However, in real ultrathin films there always exists an anisotropy which allows almost two-dimensional magnetism to occur. In experiment, magnetic thin films are normally grown on a nonmagnetic substrate. The effects of the presence of nonmagnetic overlayers on the magnetic properties of these thin films are widely studied both in experiment and in theory. It has been shown that the interfaces between magnetic and nonmagnetic layers play an important role with respect to the magnetic properties of multilayer systems. The composition of the interface between FM and NM has a strong effect on the magnetic properties of the FM films. The critical Co film thickness for the reorientation transition of the magnetization has been observed to shift from 3–4 ML up to 18 ML by the presence of carbon at the interface. In addition, a very low coverage of nonmagnetic material on top of a magnetic layer can have a strong effect on the direction of the magnetization. 
For example, an almost 90° rotation of the magnetization from the in-plane direction to the out-of-plane direction is induced by 0.03 monolayer (ML) of Cu on a 7 ML thick Co film on a stepped Cu(001) surface. The magnetic properties of ultrathin films do not change monotonically with the overlayer thickness in some systems. The magnetic anisotropy of a thin FM film is observed to rotate from the in-plane direction to the out-of-plane direction for very thin NM overlayers, while it returns to the in-plane direction when the NM overlayers become thicker. First-principles calculations for a Co ML on Cu(111) show a switch from the in-plane anisotropy of the uncovered Co monolayer to perpendicular anisotropy only for a 1 ML thick Cu overlayer, while 2 ML Cu produce a slight in-plane anisotropy again. The induced polarization in the NM layers has been investigated experimentally in various systems, such as Pt/Co, Pd/Ni, Pd/Fe, Pd/Co, and Cu/Co. The direction of the induced polarization has been found to be different for different systems. In Pd/Fe films, the Pd near the interface is ferromagnetically coupled to the Fe film. The induced moment of the Pd has been shown to enhance the Curie temperature $`T_C`$ of the film. On the other hand, in Pt/Co systems with thicknesses up to 1.5 ML the Pt layer is negatively spin polarized (antiferromagnetic coupling). In Cu/Co systems the spin polarization of the Cu layers has been shown to oscillate as a function of the Cu thickness. This oscillation is regarded as the origin of the oscillation of the interlayer magnetic coupling in Cu/Co multilayers . In theory, based on first-principles calculations, the polarization has been found to be positive both in Pd/Fe and in NM/TM (NM denotes a noble metal such as Au, Ag or Cu; TM: Fe, Cr) systems. Systematic calculations show that Fe, Co and Ni overlayers on Pd favour a ferromagnetic configuration, whereas V, Cr and Mn overlayers lead to an antiferromagnetic superstructure. 
Clearly, nonmagnetic overlayers play a very important role with respect to the magnetic properties of thin metallic films. Theoretically, the influence of nonmagnetic overlayers on transition metal films has mostly been investigated within first-principles calculations. However, these calculations can give only the zero-temperature properties of the films. Certain idealized model systems have proved to be a good starting point for investigating the magnetic behavior at finite temperatures. In order to investigate the temperature dependence of the magnetization, the Ising model was adopted in Ref. . However, in transition metals the magnetically active electrons are itinerant. It is by no means clear to what extent the results obtained by localized spin models are applicable to transition metal films. In addition, the quantum interference of Bloch waves in the NM layers has been applied to study the interlayer magnetic coupling in FM/NM multilayers. Within this theory, only the properties of the NM layers are considered, while the properties of the FM layers are neglected completely. In a theory of interlayer magnetic coupling in FM/NM multilayers, Edwards et al. obtained the difference of the exchange coupling between two FM layers for different band occupations in the NM layers. Their results correspond to a Hartree-Fock treatment of a Hubbard-like one-band tight-binding model with on-site interaction $`U=0`$ in the NM and $`U=\infty `$ in the FM. However, they did not calculate the magnetization of the FM and NM, and the effect of the NM on the properties of the FM was neglected. The aim of the present paper is to investigate the influence of NM overlayers on the magnetism of itinerant-electron thin films within the Hubbard model. The influence of the coupling between the FM and NM layers on both the magnetic properties of the FM layers and the induced magnetization in the NM layers will be investigated in detail. 
The Hubbard model was originally introduced to explain band magnetism in transition metals and has over the years become a standard model for studying the essential physics of strongly correlated electron systems, such as spontaneous magnetism, the metal-insulator transition and high-temperature superconductivity. It incorporates in the simplest way the interplay of the kinetic energy, the Coulomb interaction, the Pauli principle and the lattice geometry. In systems with reduced translational symmetry, the Hubbard model has been successfully applied to describe the temperature-driven reorientation transition of the magnetization in itinerant-electron films, the metal-insulator transition in thin films, surface magnetism and low-dimensional magnetism. The model, though rather simple in principle, nevertheless poses a non-trivial many-body problem, which up to now has been solved only for some special cases. In two and three dimensions, one still has to resort to approximate treatments. Due to the reduced translational symmetry, even more complications are introduced in thin films. Recently a generalization of the spectral density approach (SDA) has been applied to study the magnetism of thin metallic films and surfaces . The SDA, which reproduces the exact results of Harris and Lange concerning the general shape of the spectral density in the strong-coupling limit ($`U\gg W`$, $`U`$: on-site Coulomb interaction, $`W`$: bandwidth of the Bloch density of states), leads to rather convincing results concerning the magnetic properties of the Hubbard model. By comparison with different approximation schemes for the Hubbard model, as well as with numerically exact QMC calculations in the limit of infinite dimensions, it has been shown that the correct inclusion of the exact results of Harris and Lange in the strong-coupling regime is vital for a reasonable description of the magnetic behavior of the Hubbard model, especially at finite temperatures. The structure of the paper is as follows. 
First, the Hamiltonian of our model is proposed. Then the SDA for the Hubbard film is described in a simple way. In section IV we show the results of the numerical evaluation of the theory and discuss the magnetic behavior of the film system in terms of the layer- and temperature-dependent magnetizations and the quasiparticle density of states. Finally, a summary will be given. ## II Hamiltonian of the model In this paper, we will concentrate on the essence of the effect of NM overlayers on the magnetic properties of itinerant-electron thin films, as well as the influence of the FM layers on the NM layers. The structure discussed here is a NM/FM/NM sandwich structure. We study the symmetric situation where the numbers of the NM layers both above and below the FM layers are assumed to be equal. The description of this film geometry requires some care. Each lattice vector of the film is decomposed into two parts: $$\mathbf{R}_{i\alpha }=\mathbf{R}_i+\mathbf{r}_\alpha $$ (1) $`\mathbf{R}_i`$ denotes a lattice vector of the two-dimensional Bravais lattice of the surface layer with $`N`$ sites. To each lattice point $`\mathbf{R}_i`$ a $`d`$-atom basis $`\mathbf{r}_\alpha `$ ($`\alpha =1,2,\dots ,d`$) is associated, referring to the $`d=2d_{NM}+d_{FM}`$ layers of the film. Here, $`d_{NM}`$ denotes the thickness of the NM layers and $`d_{FM}`$ the thickness of the FM layers. Within each layer we assume translational invariance. Then a Fourier transformation with respect to the underlying two-dimensional (surface) Bravais lattice can be applied. 
The considered model Hamiltonian reads as follows: $$\mathcal{H}=\underset{i,j,\alpha ,\beta ,\sigma }{\sum }(T_{ij}^{\alpha \beta }-\mu \delta _{ij}^{\alpha \beta })c_{i\alpha \sigma }^{+}c_{j\beta \sigma }+\frac{U}{2}\underset{i,\alpha ,\sigma }{\sum }V(\alpha )\,n_{i\alpha \sigma }n_{i\alpha -\sigma },$$ (2) where $`c_{i\alpha \sigma }^{+}`$ ($`c_{i\alpha \sigma }`$) stands for the creation (annihilation) operator of an electron with spin $`\sigma `$ at the lattice site $`𝑹_{i\alpha }`$, $`n_{i\alpha \sigma }=c_{i\alpha \sigma }^{+}c_{i\alpha \sigma }`$ is the number operator, and $`T_{ij}^{\alpha \beta }`$ denotes the hopping-matrix element between the lattice sites $`𝑹_{i\alpha }`$ and $`𝑹_{j\beta }`$. The hopping-matrix element between nearest-neighbour sites is set to $`t_{FM}`$ ($`t_{NM}`$) in the FM (NM) layers and to $`t_{NF}`$ between FM and NM layers. $`U`$ denotes the on-site Coulomb matrix element and $`\mu `$ is the chemical potential. The Coulomb interaction between the electrons is considered only in the FM layers. Therefore we choose for $`V(\alpha )`$: $$V(\alpha )=\{\begin{array}{cc}0,\hfill & \alpha \le d_{NM}\text{ or }\alpha >d_{NM}+d_{FM},\hfill \\ 1,\hfill & d_{NM}<\alpha \le d_{NM}+d_{FM}.\hfill \end{array}$$ (3) In the following, all quantities related to the NM layers will be labelled by a subscript $`NM`$, those related to the FM layers by a subscript $`FM`$. ## III Spectral-density approach to the Hubbard film Recently a generalization of the SDA has been proposed to deal with the modifications due to reduced translational symmetry. In the following we give only a brief derivation of the SDA solution and refer the reader to previous papers for a detailed discussion. The basic quantity to be calculated is the retarded single-electron Green function $$G_{ij\sigma }^{\alpha \beta }(E)=\langle \langle c_{i\alpha \sigma };c_{j\beta \sigma }^{+}\rangle \rangle _E.$$ (4) All relevant information about the system can be obtained from the Green function. 
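To illustrate the film geometry and the hopping conventions just described, the following hypothetical sketch builds the $`k`$-dependent $`d\times d`$ hopping matrix of a NM/FM/NM film after the in-plane Fourier transformation. The fcc(100) structure factors (four in-plane and four interlayer nearest neighbours, surface lattice constant set to 1) and all parameter values are our illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical sketch: k-dependent hopping matrix T(k) of a d-layer
# NM/FM/NM fcc(100) film with nearest-neighbour hopping only.
d_NM, d_FM = 1, 3
t_FM = t_NM = t_NF = 0.25            # eV, uniform hopping as in Sec. IV
d = 2 * d_NM + d_FM                  # total number of layers

def hopping(alpha, beta):
    """Nearest-neighbour hopping between layers alpha and beta (0-based)."""
    FM = range(d_NM, d_NM + d_FM)
    if alpha in FM and beta in FM:
        return t_FM
    if alpha not in FM and beta not in FM:
        return t_NM
    return t_NF                      # FM/NM interface bond

def T_of_k(kx, ky):
    """d x d hopping matrix at in-plane wavevector (kx, ky)."""
    T = np.zeros((d, d))
    for a in range(d):
        # intralayer: 4 in-plane neighbours of the square surface lattice
        T[a, a] = 2 * hopping(a, a) * (np.cos(kx) + np.cos(ky))
        if a + 1 < d:
            # interlayer: 4 neighbours at (+-1/2, +-1/2) in the next layer
            tp = 4 * hopping(a, a + 1) * np.cos(kx / 2) * np.cos(ky / 2)
            T[a, a + 1] = T[a + 1, a] = tp
    return T

print(T_of_k(0.0, 0.0))
```

At the $`\mathrm{\Gamma }`$ point all four in-plane (interlayer) bonds add up coherently, so every non-zero matrix element is simply $`4t`$.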
For example, the spin- and layer-dependent quasiparticle density of states (QDOS) is determined by the diagonal part of $`G_{ij\sigma }^{\alpha \beta }(E)`$: $$\rho _{\alpha \sigma }(E)=-\frac{1}{\pi }\text{Im}\,G_{ii\sigma }^{\alpha \alpha }(E-\mu ).$$ (5) The band occupations $`n_{\alpha \sigma }`$ are given by $$n_{\alpha \sigma }=\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}dE\,f_{-}(E)\,\rho _{\alpha \sigma }(E),$$ (6) where $`f_{-}(E)`$ is the Fermi function. Ferromagnetism is indicated by a spin asymmetry in the band occupations $`n_{\alpha \sigma }`$ leading to non-zero layer magnetizations $`m_\alpha =n_{\alpha \uparrow }-n_{\alpha \downarrow }`$. The band occupation in each layer is given by $`n_\alpha =n_{\alpha \uparrow }+n_{\alpha \downarrow }`$. The equation of motion for the single-electron Green function reads: $$\underset{l,\gamma }{\sum }[(E+\mu )\delta _{il}^{\alpha \gamma }-T_{il}^{\alpha \gamma }-V(\gamma )\mathrm{\Sigma }_{il\sigma }^{\alpha \gamma }(E)]G_{lj\sigma }^{\gamma \beta }(E)=\mathrm{\hbar }\delta _{ij}^{\alpha \beta }.$$ (7) Here we have introduced the electronic self-energy $`\mathrm{\Sigma }_{ij\sigma }^{\alpha \beta }(E)`$ which incorporates all effects of electron correlations. The self-energy automatically vanishes within the NM layers due to $`V(\alpha )=0`$. A local approximation for the self-energy in the FM layers is adopted; this approximation has recently been tested for the case of reduced translational symmetry. Since translational invariance is assumed within each layer, we have $`\mathrm{\Sigma }_{ij\sigma }^{\alpha \beta }(E)=\delta _{ij}^{\alpha \beta }\mathrm{\Sigma }_\sigma ^\alpha (E)`$. The key point of the SDA is to find a reasonable ansatz for the self-energy in the FM layers. 
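Numerically, Eqs. (5)-(7) amount to a matrix inversion at each in-plane wavevector. The following minimal sketch computes the layer-resolved QDOS of a toy three-layer film with $`\mathrm{\Sigma }=0`$ (i.e. uncorrelated layers), a flat interlayer hopping, and a small imaginary broadening $`\eta `$ standing in for the retarded limit $`E\to E+i0^+`$; all parameter values are ours and serve only as an illustration.

```python
import numpy as np

t, nlayers, eta = 0.25, 3, 0.05      # eV; eta replaces the retarded i0+

def build_T(kx, ky):
    """Toy tridiagonal hopping matrix: square-lattice layers coupled by t."""
    T = np.zeros((nlayers, nlayers))
    for a in range(nlayers):
        T[a, a] = 2 * t * (np.cos(kx) + np.cos(ky))
        if a + 1 < nlayers:
            T[a, a + 1] = T[a + 1, a] = t
    return T

def qdos(E, nk=40):
    """Layer-resolved rho_alpha(E) = -(1/pi) Im G_aa, k-averaged (mu = 0)."""
    ks = 2 * np.pi * np.arange(nk) / nk
    rho = np.zeros(nlayers)
    for kx in ks:
        for ky in ks:
            G = np.linalg.inv((E + 1j * eta) * np.eye(nlayers)
                              - build_T(kx, ky))
            rho -= np.imag(np.diag(G)) / np.pi
    return rho / nk**2

print(qdos(0.0))
```

Because the toy film is mirror symmetric, the two outer layers yield identical QDOS, which is a convenient sanity check for the matrix setup.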
Guided by the exactly solvable atomic limit of vanishing hopping ($`t_{FM}=0`$) and by the findings of Harris and Lange in the strong-coupling limit ($`U/t_{FM}\gg 1`$), a one-pole ansatz for the self-energy $`\mathrm{\Sigma }_\sigma ^\alpha (E)`$ can be motivated: $$\mathrm{\Sigma }_\sigma ^\alpha (E)=g_{1\sigma }^\alpha \frac{E-g_{2\sigma }^\alpha }{E-g_{3\sigma }^\alpha }$$ (8) where the spin- and layer-dependent parameters $`g_{1\sigma }^\alpha `$, $`g_{2\sigma }^\alpha `$ and $`g_{3\sigma }^\alpha `$ are fixed by exploiting the equality between two alternative but exact representations for the moments of the layer-dependent QDOS: $$\begin{array}{cc}M_{ij\sigma }^{(m)\alpha \beta }\hfill & =-\frac{1}{\pi \mathrm{\hbar }}\text{Im}\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}dE\,E^m\,G_{ij\sigma }^{\alpha \beta }(E)\hfill \\ & \\ & =\langle [\underset{m\text{ times}}{\underbrace{[\mathrm{\dots }[c_{i\alpha \sigma },\mathcal{H}]_{-}\mathrm{\dots },\mathcal{H}]_{-}}},c_{j\beta \sigma }^{+}]_{+}\rangle .\hfill \end{array}$$ (9) Here $`\langle \mathrm{\dots }\rangle `$ denotes the grand-canonical average and $`[\mathrm{\dots },\mathrm{\dots }]_{-(+)}`$ is the commutator (anticommutator). It has been shown that the inclusion of the first four moments of the QDOS ($`m=`$0–3) is vital for a proper description of ferromagnetism in the Hubbard model, especially at finite temperatures. Further, the first four moments represent a necessary condition for consistency with the strong-coupling results of Harris and Lange. 
Taking into account the first four moments to fix the three parameters $`g_{1\sigma }^\alpha `$, $`g_{2\sigma }^\alpha `$ and $`g_{3\sigma }^\alpha `$ in (8), one obtains the following self-energy : $$\mathrm{\Sigma }_\sigma ^\alpha (E)=Un_{\alpha -\sigma }\frac{E+\mu -B_{\alpha -\sigma }}{E+\mu -B_{\alpha -\sigma }-U(1-n_{\alpha -\sigma })}.$$ (10) The self-energy depends on the spin-dependent occupation numbers $`n_{\alpha \sigma }`$ and the so-called band shift $`B_{\alpha \sigma }`$ that consists of higher correlation functions: $$B_{\alpha \sigma }=T_{ii}^{\alpha \alpha }+\frac{1}{n_{\alpha \sigma }(1-n_{\alpha \sigma })}\underset{j\beta }{\overset{j\beta \ne i\alpha }{\sum }}T_{ij}^{\alpha \beta }\langle c_{i\alpha \sigma }^{+}c_{j\beta \sigma }(2n_{i\alpha -\sigma }-1)\rangle .$$ (11) A spin-dependent shift of the band centers of gravity, $`n_{\alpha -\sigma }B_{\alpha -\sigma }`$ ($`(1-n_{\alpha -\sigma })B_{\alpha -\sigma }`$) for the lower (upper) subband, may generate and stabilize ferromagnetic solutions. Although $`B_{\alpha \sigma }`$ consists of higher correlation functions, it can be expressed exactly via $`\rho _{\alpha \sigma }`$ and $`\mathrm{\Sigma }_\sigma ^\alpha (E)`$: $$\begin{array}{cc}B_{\alpha \sigma }\hfill & =T_{ii}^{\alpha \alpha }+\frac{1}{n_{\alpha \sigma }(1-n_{\alpha \sigma })}\frac{1}{\mathrm{\hbar }}\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}dE\,f_{-}(E)\hfill \\ & \\ & \times \left(\frac{2}{U}\mathrm{\Sigma }_\sigma ^\alpha (E-\mu )-1\right)\left[E-\mathrm{\Sigma }_\sigma ^\alpha (E-\mu )-T_{ii}^{\alpha \alpha }\right]\rho _{\alpha \sigma }(E).\hfill \end{array}$$ (12) Now a closed set of equations is established via Eqs. (6), (7), (10) and (12), which can be solved self-consistently. ## IV Results and discussion In our calculations, a fcc(100) geometry is assumed for both the FM and the NM layers. We consider a uniform hopping $`t_{FM}=t_{NM}=t_{NF}=0.25`$eV between nearest-neighbour sites. The band occupation of the FM layers is set to $`n_{FM}=1.4`$ for all calculations. The band occupation in the NM layers is denoted by $`n_{NM}`$. 
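To make the self-consistent structure of the closed set of equations concrete, here is a drastically simplified sketch: a single lattice site in the atomic limit (all hopping, and hence the band shift, set to zero), where the self-energy (10) produces exactly two poles, at $`E=0`$ and $`E=U`$, carrying the weights $`(1-n_{-\sigma })`$ and $`n_{-\sigma }`$. Occupations and chemical potential are iterated to self-consistency; $`U=12`$ eV and $`n=1.4`$ follow Sec. IV, while the inverse temperature is our arbitrary choice.

```python
import numpy as np

# Atomic-limit caricature of the SDA self-consistency cycle:
# n_sigma = (1 - n_{-sigma}) f(0) + n_{-sigma} f(U)   (cf. Eqs. (6), (10))
U, beta, n_target = 12.0, 50.0, 1.4      # eV, 1/eV, electrons per site

def fermi(E, mu):
    # overflow-safe Fermi function f_-(E)
    return 0.5 * (1.0 - np.tanh(0.5 * beta * (E - mu)))

def occupations(mu, n_up=0.9, n_dn=0.5, iters=300):
    """Damped iteration of the coupled occupation equations."""
    for _ in range(iters):
        new_up = (1 - n_dn) * fermi(0.0, mu) + n_dn * fermi(U, mu)
        new_dn = (1 - n_up) * fermi(0.0, mu) + n_up * fermi(U, mu)
        n_up, n_dn = 0.5 * (n_up + new_up), 0.5 * (n_dn + new_dn)
    return n_up, n_dn

# fix mu by bisection so that n_up + n_dn = n_target
lo, hi = -5.0, U + 5.0
for _ in range(60):
    mu = 0.5 * (lo + hi)
    n_up, n_dn = occupations(mu)        # spin-asymmetric starting point
    lo, hi = (mu, hi) if n_up + n_dn < n_target else (lo, mu)

print(mu, n_up, n_dn)
```

As expected physically, the atomic limit yields only the paramagnetic solution $`n_{\uparrow }=n_{\downarrow }`$; ferromagnetism requires the hopping-induced band shift of Eq. (11).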
By adjusting the on-site hopping integrals $`T_{ii}^{\alpha \alpha }`$ we explicitly exclude charge transfer within the FM and the NM layers. Further, we keep the on-site Coulomb interaction in the FM layers fixed at $`U=12`$eV, which is three times the bandwidth of the three-dimensional fcc lattice and clearly refers to the strong-coupling regime. In the following we will refer to the considered structure as $`d_{NM}/d_{FM}/d_{NM}`$. In Fig. 1, the layer-dependent magnetizations are plotted as a function of temperature for different numbers of magnetic layers. The magnetizations are strongly layer-dependent. Without NM overlayers (Fig. 1e; 0/5/0 structure), each layer is fully polarized at very low temperatures. The magnetization curves of the inner layers show the usual Brillouin-type behavior. The surface magnetization, however, behaves very differently: it depends almost linearly on temperature in the range $`T/T_C^B=0.7`$–$`0.9`$. Compared to the inner layers, the surface magnetization decreases significantly faster as a function of temperature and shows a tendency towards a reduced Curie temperature. However, due to the coupling between the surface and the inner layers that is induced by the electron hopping, a unique Curie temperature for the whole film is obtained. When the NM overlayers are taken into account (see Fig. 1a,1b,1c,1d; 4/$`d_{FM}`$/4 structure), the interface magnetization of the FM layers and its temperature behavior are strongly affected by the interaction between the FM and NM layers which is induced by the electron hopping at the interface. The interface layer is no longer fully polarized at low temperatures, and its magnetization decreases more gradually as a function of temperature than in the case without NM overlayers. The linear temperature dependence of the interface magnetization in the range $`T/T_C^B=0.7`$–$`0.9`$ has disappeared. 
The magnetization of the inner FM layers is only weakly affected by the presence of the NM layers (see Fig. 1d). From Fig. 1, one can also read off that the Curie temperature $`T_C`$ increases gradually as a function of the number of FM layers $`d_{FM}`$ and reaches its bulk value for $`d_{FM}=8`$–$`10`$. The influence of the NM layers on the Curie temperature is analyzed in more detail in Fig. 2, where $`T_C(d_{FM})`$ is shown for different numbers of NM layers. As can be seen in Fig. 2, the behavior of $`T_C`$ as a function of $`d_{FM}`$ is quite different for $`d_{NM}=0`$ and $`d_{NM}\ne 0`$. In the case without NM overlayers ($`d_{NM}=0`$) $`T_C`$ increases very steeply as a function of $`d_{FM}`$ and saturates already for $`d_{FM}=3`$–$`5`$. The influence of the NM layers is most important for very thin magnetic films $`d_{FM}<6`$. For $`d_{FM}=3,4`$ the presence of only one NM overlayer leads to a reduction of the Curie temperature of about 5% compared to its free FM film value. It is interesting to note that there are only minor changes in $`T_C`$ when the number of NM layers is further increased. Note that for $`d_{FM}\ge 3`$, the reduction of $`T_C`$ is strongest for just one NM layer. As $`d_{NM}`$ is further increased, the reduction of $`T_C`$ becomes slightly weaker again. $`T_C`$ as a function of $`d_{NM}`$ saturates already for $`d_{NM}=3`$. This indicates that a very thin NM capping influences the magnetic properties of the FM films more strongly than thick NM overlayers do. In experiment, it has also been observed that very thin NM layers have a stronger effect on the magnetic properties than thick overlayers. For example, the magnetization direction of a Co film rotates from in-plane to out-of-plane under only a very thin coverage of Cu, while for a thick coverage of Cu the magnetization of the FM film turns back to the in-plane direction. 
First-principles calculations for a Co ML on Cu(111) predict a switch from the in-plane anisotropy of the uncovered Co monolayer to perpendicular anisotropy only for a 1 ML thick Cu overlayer, while 2 ML Cu produce a slight in-plane anisotropy again. In order to investigate the influence of the NM overlayers on the magnetism of the FM films in more detail, we have calculated the Curie temperature as a function of the band occupation $`n_{NM}`$ of the NM layer for a 1/3/1 sandwich structure (Fig. 3a). $`T_C`$ does not change monotonically as a function of $`n_{NM}`$. For very small $`n_{NM}`$ it decreases rapidly from the free FM film value $`T_C^0`$ ($`d_{NM}=0`$) to a minimum at $`n_{NM}=0.35`$–$`0.4`$. Then it increases from the minimum to a maximum (at $`n_{NM}\approx 1.97`$) which lies above $`T_C^0`$. Finally, it drops quickly to $`T_C^0`$ at $`n_{NM}=2.0`$. The two limiting cases ($`T_C=T_C^0`$ at $`n_{NM}=0`$ and $`n_{NM}=2.0`$) are easy to understand. In these two situations, the NM bands are either empty or fully occupied and, therefore, have no influence on the properties of the FM film. To understand the behavior of $`T_C`$ as a function of $`n_{NM}`$, we have also calculated the magnetizations of the NM layer and the FM layers as a function of $`n_{NM}`$ at low temperature in Fig. 3b and Fig. 3c. The magnetization of the NM layer shows a similar dependence on $`n_{NM}`$ as the Curie temperature (see Fig. 3b). Since the magnetization of the FM layers is always positive, the sign of the magnetization of the NM layer represents the coupling between FM and NM at the interface. A positive sign corresponds to ferromagnetic coupling, a negative sign to antiferromagnetic coupling. The results indicate that $`T_C`$ is strongly affected by the coupling of FM and NM. From Fig. 3c, one can see that the coupling of FM and NM also has a strong effect on the magnetizations in the FM layers, especially on the interface magnetization at low temperature. 
Further insight into the coupling between the FM and the NM layers can be obtained from the layer-dependent QDOS. In Fig. 4 the QDOS of a 1/3/1 sandwich structure is shown for $`T=0.1T_C^0`$ and three different values of the band occupation of the NM layers ($`n_{NM}=0.4,\mathrm{\hspace{0.17em}0.8},\mathrm{\hspace{0.17em}1.4}`$). For $`n_{NM}=0.8`$ the temperature dependence of the QDOS is plotted in Fig. 5 for $`T=0.1,\mathrm{\hspace{0.17em}0.9},\mathrm{\hspace{0.17em}1.0}T_C`$. Two kinds of splittings are observed in the FM spectrum. The strong Coulomb interaction between the electrons leads to a splitting of the spectrum into a high- and a low-energy part (“Hubbard splitting”). These two quasiparticle subbands (“Hubbard bands”) are separated by an energy of the order of $`U`$. In the lower subband an electron mainly hops over empty lattice sites, whereas in the upper subband it hops over lattice sites that are already occupied by another electron with opposite spin. The weights of the subbands scale with the probability that a propagating electron meets the one or the other situation. The total weight of the QDOS of each layer is normalized to 1. In the strong-coupling limit the weights of the lower and the upper subband are given by $`(1-n_{\alpha -\sigma })`$ and $`n_{\alpha -\sigma }`$, respectively. Due to the vanishing Coulomb repulsion the Hubbard splitting disappears in the NM spectrum. For temperatures below $`T_C`$, an additional spin splitting (“exchange splitting”) between the majority- ($`\sigma =\uparrow `$) and minority- ($`\sigma =\downarrow `$) spin directions occurs in both the FM and the NM spectrum, leading to a non-zero magnetization $`m_\alpha =n_{\alpha \uparrow }-n_{\alpha \downarrow }`$. Note that, in principle, the NM and the FM spectrum for each spin direction occupy exactly the same energy region, thus preventing the electrons from being trapped. However, the corresponding spectral weight may become very small, as can be seen in Fig. 4 and Fig. 5. 
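The Hubbard splitting described here can be checked in the smallest possible setting: an exact diagonalization of the two-site Hubbard model with the parameter values of Sec. IV ($`t=0.25`$ eV, $`U=12`$ eV). This toy sketch is ours and only illustrates how the electron addition energies separate into two groups roughly $`U`$ apart, the finite-size precursors of the lower and upper Hubbard bands.

```python
import numpy as np

t, U = 0.25, 12.0

# two-electron, S_z = 0 sector in the basis
# {|up1 dn1>, |up1 dn2>, |up2 dn1>, |up2 dn2>}
H2 = np.array([[U,   -t,  -t,  0.0],
               [-t,  0.0, 0.0, -t],
               [-t,  0.0, 0.0, -t],
               [0.0, -t,  -t,  U]])
E2 = np.linalg.eigvalsh(H2).min()    # ground state, ~ -4 t^2 / U for U >> t
E1 = -t                              # one-electron ground state (bonding level)
E3 = U - t                           # three-electron ground state
mu_low = E2 - E1                     # energy to add the 2nd electron
mu_high = E3 - E2                    # energy to add the 3rd electron
print(mu_low, mu_high, mu_high - mu_low)   # the difference is close to U
```

The small residual deviation of the splitting from $`U`$ is of order $`t`$, i.e. it vanishes relative to $`U`$ in the strong-coupling limit, in line with the Harris-Lange picture invoked above.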
First we want to discuss the $`n_{NM}`$-dependence of the QDOS at low temperatures. For the case of $`d_{NM}=0`$, the majority QDOS lies completely below the chemical potential, and the system is fully polarized ($`n_{\alpha \uparrow }=1`$) at $`T=0`$ (see also Fig. 1e). If the NM overlayer is taken into account (Fig. 4), the majority QDOS of both the interface and the inner FM layers has tails above the chemical potential due to the hybridization between FM and NM layers. As a consequence the system is no longer fully polarized at low temperatures (see also Fig. 1a,1b,1c,1d). The weight of the tail determines the reduction of the magnetization compared to the fully polarized state. Since the weight of the tail becomes smaller as the band occupation of the NM layer is increased from $`n_{NM}=0.4`$ to $`n_{NM}=1.4`$, the reduction of the magnetization at the interface gets weaker as well (see also Fig. 3c). Let us now turn to the NM layer. As mentioned above, the QDOS of the NM layer shows just one kind of splitting – the exchange splitting. The QDOS of the NM layer is quite different for the majority- and minority-spin direction. Because there is no Coulomb interaction in the NM layers, the majority- and minority-QDOS do not affect each other. It is interesting to note that above the chemical potential the majority NM-QDOS resembles the BDOS (Bloch density of states) of the square lattice, which is equivalent to that of a free-standing fcc(100) monolayer. This is because for energies above the chemical potential a ($`\sigma =\uparrow `$)-electron within the NM layer is effectively isolated, since the spectral weight of the ($`\sigma =\uparrow `$)-FM-QDOS is very small in this energy region. For $`n_{NM}=0.4`$ (Fig. 4a), there is very low spectral weight in the majority NM-QDOS below the chemical potential. As a consequence the number of spin-up electrons is smaller than the number of spin-down electrons in the NM layer. 
The magnetization of the NM layer is negative, i.e., the NM and FM layers are antiferromagnetically coupled. With increasing $`n_{NM}`$ the center of gravity of the QDOS of the NM layer gets shifted to lower energies. When the peak of the majority NM-QDOS crosses the chemical potential, the number of majority-spin electrons increases faster as a function of $`n_{NM}`$ than the number of minority-spin electrons. The magnetization of the NM layer increases and becomes positive (see Fig. 4b and 4c, also Fig. 3b). Of course, as $`n_{NM}`$ increases, the shape of the QDOS changes as well. Up to $`n_{NM}\approx 1.97`$, the magnetization of the NM layer increases as a function of $`n_{NM}`$. Then it drops quickly, because both the majority- and the minority-spin QDOS get shifted below the chemical potential. The temperature dependence of the FM-QDOS (Fig. 5) is dominated by two correlation effects. As a function of increasing temperature the spin splitting between the centers of gravity of the majority and minority quasiparticle subbands decreases. In addition there is a temperature-dependent transfer of spectral weight between the lower and the upper quasiparticle subband. The interplay of these two correlation effects leads to a rather rapid demagnetization of the system as a function of temperature and allows for Curie temperatures of a physically reasonable order of magnitude . At $`T=T_C`$, the spin splitting has disappeared completely, both in the FM and the NM spectrum (Fig. 5c). Due to the coupling between FM and NM layers, the NM-QDOS becomes temperature dependent as well. While the band edges of the NM-QDOS stay fixed, there is a redistribution of spectral weight within the NM-QDOS as a function of temperature. We would like to point out that this temperature dependence may even result in a temperature-induced change from ferromagnetic to antiferromagnetic coupling between the FM and NM layers. From Fig. 
5b, one can read off that at $`T=0.9T_C`$ the number of spin-up electrons is smaller than the number of spin-down electrons. Due to the reduced spin splitting in the FM spectrum, the main peak of the majority NM-QDOS gets shifted above the chemical potential. As a consequence the coupling between the FM and the NM layer at the interface changes from ferromagnetic (Fig. 5a) to antiferromagnetic (Fig. 5b) coupling. This kind of temperature-induced change of the coupling is observed only for very thin NM layers ($`d_{NM}=1`$). For $`d_{NM}>1`$, we have not found such a behavior. Finally, we want to discuss the induced polarization in the NM layers, which is shown in Fig. 6. The induced magnetization of the NM layers oscillates with the layer index for $`d_{NM}\ge 4`$. As a consequence, the total magnetization of the NM layers oscillates as a function of the thickness of the NM layers as well. An oscillatory behavior of the total magnetization of the NM layers as a function of the NM thickness has also been obtained by Bruno within the theory of interlayer magnetic coupling. However, our starting point and our results are completely different. There, quantum interference was introduced to understand the properties of the NM layers: the Bloch waves of the NM layers are regarded as being confined between two FM perturbations (or one FM perturbation and the vacuum). Due to constructive and destructive reflection at this confinement, the densities of spin-up and/or spin-down electrons are found to oscillate as a function of the NM thickness. Long-range magnetic order, however, is excluded in that theory. Within this concept, one finds that the magnetization of the NM interface should oscillate around zero. On the contrary, in our calculation the amplitude of the oscillation decreases from the interface to the surface (Fig. 6a) and the profile of the induced magnetization is almost independent of $`d_{NM}`$ (Fig. 6a). 
The induced magnetization at the interface exhibits only a very small oscillation around a certain negative value for $`d_{NM}\ge 4`$ (see Fig. 6c). This indicates that the coupling between FM and NM layers is only affected by the properties of the FM and NM layers close to the interface, whereas it is independent of the thickness of the NM overlayers for $`d_{NM}\ge 4`$. Due to the oscillatory behavior of the induced magnetization, we find the surface magnetization also to oscillate as a function of the number of NM layers (Fig. 6c). This surface magnetization may be observed experimentally by means of spin-polarized photoemission for very short mean free paths. We would like to point out that the oscillation of the NM magnetizations with respect to the layer index is affected by the band occupation of the NM layers. Both the amplitude and the period of the oscillation change when the band occupation of the NM layers is changed (see Fig. 6b). ## V Conclusions The effect of uncorrelated layers on the magnetic properties of thin Hubbard films has been studied by means of the spectral density approach (SDA). The SDA, which reproduces the exact results of the $`t/U`$-perturbation theory of Harris and Lange in the strong-coupling limit, leads to rather convincing results concerning the magnetic properties of the Hubbard model. By comparison with different approximation schemes for the Hubbard model as well as with numerically exact QMC calculations in the limit of infinite dimensions it has been shown that the correct inclusion of the exact results of Harris and Lange in the strong-coupling regime is vital for a reasonable description of the magnetic behavior of the Hubbard model, especially at finite temperatures. Within our theory, FM and NM layers are treated on the same footing. The effects of the coupling between FM and NM on the magnetic properties of the FM layers, as well as on the NM layers, have been studied in detail. 
The Curie temperature of thin FM films is modified by the presence of NM layers. For $`n_{FM}=1.4`$ and $`n_{NM}=0.8`$ the Curie temperature increases gradually as a function of the number of FM layers and converges to the corresponding bulk value for $`d_{FM}=8`$–$`10`$. The induced polarization in the NM layers displays a long-range, decaying oscillation with respect to the layer index of the NM layers. The induced magnetization of the interface NM layer hardly changes as a function of the number of NM layers for $`d_{NM}\ge 4`$. This means that the coupling between the FM and NM layers is determined by the intrinsic properties of the FM and NM layers, such as the band occupation, the on-site Coulomb interaction and the hopping, and is independent of the numbers of FM and NM layers. The magnetic properties of this thin film system have been understood microscopically by means of the spin-, layer- and temperature-dependent quasiparticle density of states (QDOS) for a 1/3/1 sandwich structure. There exist two correlation-induced band splittings in the FM spectrum. Besides the Hubbard splitting there is an additional exchange splitting for temperatures below $`T_C`$. Due to the vanishing Coulomb repulsion the Hubbard splitting disappears in the NM spectrum. For the case of $`d_{NM}=0`$, the majority FM-QDOS lies completely below the chemical potential, and the system is fully polarized ($`n_{\alpha \uparrow }=1`$) at zero temperature. If the NM layers are taken into account, the majority QDOS of both the interface and the inner FM layers has tails above the chemical potential due to the hybridization between the FM and NM layers. As a consequence the system is no longer fully polarized at low temperatures. There is a reduction of the FM magnetization compared to the fully polarized state. The reduction of the magnetization of the interface FM layer gets weaker as the band occupation of the NM layer is increased from $`n_{NM}=0.4`$ to $`n_{NM}=1.4`$. 
In addition, for $`n_{NM}=0.4`$ there is very low spectral weight in the majority NM-QDOS below the chemical potential. As a consequence the number of spin-up electrons is smaller than the number of spin-down electrons in the NM layer. The magnetization of the NM layer is negative, i.e., the NM and FM layers are antiferromagnetically coupled. As $`n_{NM}`$ increases, the magnetization of the NM layer increases and becomes positive. The change of the coupling between the FM and NM layers can also be induced by temperature. This temperature-induced change of the coupling is observed only for very thin NM layers. ## Acknowledgements One of us (J.H.W.) wishes to thank the Humboldt-Universität for its hospitality and financial support. Parts of this work have been done within the Sonderforschungsbereich 290 (“Metallische dünne Filme: Struktur, Magnetismus und elektronische Eigenschaften”) of the Deutsche Forschungsgemeinschaft.
# Crystal symmetry, step-edge diffusion and unstable growth ## I Introduction The kinetic stability of a crystal growing homoepitaxially by Molecular Beam Epitaxy is determined primarily by the possible existence of a slope-dependent mass current $`\stackrel{}{j}(\stackrel{}{m})`$ along the surface, i.e. by a current which does not vanish in the limiting case of a constant slope $`\stackrel{}{m}`$ ($`\stackrel{}{m}=\nabla z`$, where $`z(\stackrel{}{r},t)`$ is the local height) . Such a current is generally ascribed to the so-called Ehrlich-Schwoebel (ES) effect at step edges, which hinders interlayer diffusion . On singular surfaces, experimental results (mainly on metal growth) show that the instability leads to mound formation and often to a coarsening process, where the typical size $`L`$ of the mounds increases in time (generally with a power law: $`L(t)\sim t^n`$) . The template of the mound structure is already formed in the early stages of growth (the so-called ‘linear regime’), and here the crystal structure should determine the shape and orientation of the mounds. For example, cubic crystals are characterized by a four-fold and a six-fold symmetry on (100) and (111) faces, respectively: experimental analysis by Scanning Tunneling Microscopy has indeed shown square-based mounds on Fe and Cu(100) and triangular-based ones on Rh and Pt(111) . The relevance of the in-plane symmetry for the later stages of the growth process has been definitely proven by Siegert , who has shown –through a continuum description of the surface– that unstable currents with different in-plane symmetries may give rise to different coarsening exponents $`n`$. For vicinal surfaces, ES barriers at steps are known to stabilize against step bunching and to destabilize against step meandering . 
It is therefore extremely important to determine what the microscopic mechanisms giving rise to slope-dependent currents $`\stackrel{}{j}`$ are, what the expression of $`\stackrel{}{j}`$ is, and how lattice symmetry enters it. One of the main results of the present paper is the finding of two contributions to the slope-dependent current, one due to terrace diffusion ($`\stackrel{}{j}_t`$) and one due to step diffusion ($`\stackrel{}{j}_s`$). Both contributions are singular at zero slope. (In the limit of a very large ES effect, $`\stackrel{}{j}_t`$ is no longer anisotropic and therefore no longer singular at $`\stackrel{}{m}=0`$.) This is at odds with the usual phenomenological expressions of $`\stackrel{}{j}_t`$, used in the continuum description of surface growth , which all reduce to the simple isotropic form $`\stackrel{}{j}_t=a\stackrel{}{m}`$ in the small-slope regime ($`\stackrel{}{m}\to 0`$). We will see that our expressions for $`\stackrel{}{j}_t`$ and $`\stackrel{}{j}_s`$ remain anisotropic even in this limit, and that this implies a singular behaviour at $`\stackrel{}{m}=0`$. Other important results concern the step current $`\stackrel{}{j}_s`$, which is found to destabilize layer-by-layer growth against mound formation on a high-symmetry surface, and step flow against step bunching on a vicinal surface. Step flow is stable or unstable against step meandering, depending on the step orientation. The destabilizing effect of $`\stackrel{}{j}_s`$ on a singular surface has been observed independently by O. Pierre-Louis et al. and by Ramana Murty and Cooper . The former have also studied analytically the effect on step meandering. Here we provide a unified treatment of these diverse effects within a continuum description of the surface, we predict the new phenomenon of step bunching induced by step currents, and we analyze the different anisotropic behaviours of $`\stackrel{}{j}_s`$ and $`\stackrel{}{j}_t`$. ## II Terrace current In Fig. 
1 we draw a piece of a vicinal surface corresponding to a constant slope $`\stackrel{}{m}`$, and a piece of a step. Once adatoms have landed on the surface, they perform a diffusion process until they stick to the upper or lower step. The attachment rate $`D^{\prime }`$ from below is considered extremely fast ($`D^{\prime }/D=\mathrm{\infty }`$, $`D`$ being the diffusion constant on the terrace); the rate $`D^{\prime \prime }`$ from above defines the ES length $`\ell _{\mathrm{ES}}=(D/D^{\prime \prime }-1)`$ (in units of the lattice spacing) . This should be compared to the diffusion length $`\ell _\mathrm{D}`$ representing the minimal distance between nucleation centers on a high-symmetry surface . Under the usual conditions of crystal growth we have $`\ell _\mathrm{D}\gg 1`$, while both the cases $`\ell _{\mathrm{ES}}\ll \ell _\mathrm{D}`$ (weak ES effect) and $`\ell _{\mathrm{ES}}\gg \ell _\mathrm{D}`$ (strong ES effect) may take place . In one dimension (1d) we write the ES current (due to terrace diffusion) as $`j_t=mf(m^2)`$, and at small slopes (in the sense that $`m\ll 1/\ell _\mathrm{D}`$) we have the linear behaviour $`j_t=am\equiv f(0)m`$, with $`a=F\ell _{\mathrm{ES}}\ell _\mathrm{D}^2/[2(\ell _{\mathrm{ES}}+\ell _\mathrm{D})]`$, $`F`$ being the intensity of the external flux (i.e. the number of particles landing on the surface per unit time and lattice site). In two dimensions (2d), if we neglect in-plane anisotropy we can generalize and write $`\stackrel{}{j}_t=\stackrel{}{m}f(m^2)`$. Let us now discuss the microscopic origin of anisotropy and how it modifies $`\stackrel{}{j}_t`$. Throughout we will consider a (100) surface with fourfold symmetry, and take the $`x`$ and $`y`$ axes along the two equivalent close-packed in-plane orientations, denoting by $`\widehat{x}`$ and $`\widehat{y}`$ the corresponding in-plane unit vectors. The extension to other crystal symmetries is straightforward in principle. 
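The 1d coefficient $`a=F\ell _{\mathrm{ES}}\ell _\mathrm{D}^2/[2(\ell _{\mathrm{ES}}+\ell _\mathrm{D})]`$ quoted above interpolates between the weak- and strong-barrier regimes; the following small sketch simply evaluates the two limits, with arbitrarily chosen illustrative values of $`F`$ and $`\ell _\mathrm{D}`$.

```python
F, l_D = 1.0, 20.0   # flux and diffusion length, arbitrary illustrative values

def a(l_ES):
    """Slope coefficient of the 1d ES current j_t = a*m at small slopes."""
    return F * l_ES * l_D**2 / (2 * (l_ES + l_D))

# weak ES effect (l_ES << l_D): a ~ F * l_ES * l_D / 2
print(a(0.1), F * 0.1 * l_D / 2)
# strong ES effect (l_ES >> l_D): a -> F * l_D**2 / 2, independent of l_ES
print(a(2000.0), F * l_D**2 / 2)
```

Since the weak-barrier limit keeps the factor $`\ell _{\mathrm{ES}}`$ while the strong-barrier limit does not, only the former can transmit an orientation dependence of the ES barrier to the current.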
In the absence of surface reconstructions, terrace diffusion by itself is an isotropic process, at least in its continuum description. In contrast, the sticking of an adatom to a step depends on the microscopic environment, which in turn depends on the step orientation. So, the ES barrier seen by an adatom approaching a step depends on the orientation of the step, and this dependence translates into an orientation-dependent ES length $`\ell _{\mathrm{ES}}=\ell _{\mathrm{ES}}(\theta )`$, where $`\theta =\mathrm{arctan}(m_x/|m_y|)`$ is the angle of the step relative to the $`x`$-axis. Assuming straight steps, the expression for a one-dimensional surface can be taken over, and for small slopes ($`m\ell _\mathrm{D}\ll 1`$) we obtain $$\stackrel{}{j}_t=a(\theta )\stackrel{}{m}=\frac{F\ell _{\mathrm{ES}}(\theta )\ell _\mathrm{D}^2}{2(\ell _{\mathrm{ES}}(\theta )+\ell _\mathrm{D})}\stackrel{}{m}.$$ (1) The coefficient $`a`$ becomes independent of $`\theta `$ only in the regime of strong ES barriers, $`\ell _{\mathrm{ES}}(\theta )\gg \ell _\mathrm{D}`$ (in this limit, $`a=F\ell _\mathrm{D}^2/2`$). For weak barriers, in-plane anisotropy is therefore present even in the ‘linear’ regime $`m\ell _\mathrm{D}\ll 1`$ ($`a=F\ell _{\mathrm{ES}}(\theta )\ell _\mathrm{D}/2`$). Through the dependence of $`\theta `$ on $`\stackrel{}{m}`$, Eq. (1) is manifestly non-analytic at $`\stackrel{}{m}=0`$. ## III Step current Next we show that crystal symmetry manifests itself also (and perhaps mainly) through step diffusion. Once adatoms have reached a step, they can diffuse along it at a rate $`D_s`$ and stick to a kink site (see Fig. 1). Similarly to terrace diffusion, a net step current $`\stackrel{}{j}_s`$ exists only if there is an asymmetry between $`D_s^{\prime }`$ and $`D_s^{\prime \prime }`$; the strength of the asymmetry determines an ES length along the step, which will be called $`\ell _k`$, the subscript $`k`$ standing for kink. 
Step diffusion biased by kink barriers is similar to terrace diffusion on a one-dimensional surface , but some differences are worth stressing. i) All the possible in-plane orientations $`\theta `$ of the step, with the correct symmetries, should be taken into account, because, especially on a high-symmetry surface, all values of $`\theta `$ are found on the same surface. This may be true also for a vicinal surface, if steps are subject to a strong meandering . In particular, orientations corresponding to $`\theta =0`$ (steps along $`[100]`$) and $`\theta =\pi /4`$ (steps along $`[110]`$) will be seen to behave in qualitatively different ways. ii) Adatoms arrive at the step at a rate $`F_s`$ depending on the terrace size $`\ell `$. For equally spaced steps $`F_s=F\ell `$. However, since (in 2d) the surface current is defined as the number of atoms crossing, per unit time, a segment of unit length orthogonal to the current, the actual expression for the current is obtained by multiplying the ‘single-step current’ by the number of steps per unit length, i.e. $`1/\ell `$. This factor cancels the factor $`\ell `$ appearing in $`F_s`$, since the current is proportional to $`F_s`$ as well. iii) A step is a one-dimensional object, and therefore it has a larger roughness than a two-dimensional surface. In the expression for the unstable current, the diffusion length gives the minimal distance between steps (in 2d) or kinks (in 1d) along a high-symmetry orientation. In 2d, steps are created by nucleation and growth, and $`\ell _\mathrm{D}`$ is generally given by an expression of the form $`\ell _\mathrm{D}\sim (D/F)^\gamma `$ with the exponent $`\gamma `$ depending on the details of the nucleation process . In 1d, the corresponding expression $`\ell _{D_s}\sim (D_s/F_s)^{\gamma _s}`$ should be compared to the distance between thermally excited kinks, and the smaller of the two (called $`\ell _d`$) should be chosen. 
In most of our discussion we will assume that $`\ell _d`$ is sufficiently large so that double or multiple kinks can be neglected. The high-symmetry in-plane orientations $`[100]`$ and $`[110]`$ of a step differ markedly in the mechanisms giving rise to a step-edge current. Along a $`[100]`$ segment, step diffusion takes place between nearest-neighbour lattice sites at a rate $`D_s`$, and the analogy with a one-dimensional surface is appropriate. In particular, an asymmetry in the sticking rates to a kink determines a net current along the straight segments of the step, i.e. in the $`\widehat{x}`$ direction; when $`\theta \ne 0`$ this current does not vanish and it has a component along the slope $`\stackrel{}{m}`$, which will be seen to destabilize the flat surface. Conversely, along a $`[110]`$ orientation, diffusion is a two-step process and is much slower, because it requires detachment from a high-coordination site. As a first approximation, it may even be reasonable to assume that no detachment takes place at all. This does not prevent a nonzero step-edge current, for the following reason. Along the $`[100]`$ orientation, kinks are due to nucleation, or they must be thermally activated, because a kink increases the total length of the step. Along and close to the $`[110]`$ orientation, the total length of the step does not depend on the specific sequence of $`[100]`$ and $`[010]`$ segments (see Fig. 2), and therefore the step is rough even at zero temperature. The path from the origin $`O`$ to $`P`$ is equivalent to a directed random walk, where the asymmetry $`p`$ between the probabilities $`p_{-}=(1-p)/2`$ and $`p_+=(1+p)/2`$ to go in the $`\widehat{x}`$ and $`\widehat{y}`$ directions, respectively, is nothing but the tangent of the angle $`\beta =\theta -\pi /4`$ formed by the average orientation of the step with the $`[110]`$ direction. 
Since step diffusion does take place along $`[100]`$ and $`[010]`$ segments, each step segment longer than one lattice constant contributes to the $`\widehat{x}`$ and $`\widehat{y}`$ components of the step current. This implies that $`\stackrel{}{j}_s`$ is nonzero also for $`\theta =\pi /4`$: in this case $`\stackrel{}{j}_s`$ is exactly parallel to $`\stackrel{}{m}`$ and has a destabilizing character. In the following we will consider separately the cases of small $`\theta `$ and of $`\theta `$ close to $`\pi /4`$, and afterwards we will write down a general expression for $`\stackrel{}{j}_s`$, valid for any value of the angle $`\theta `$. For the moment, we will suppose that the slope $`m=|\stackrel{}{m}|`$ of the surface is larger than $`1/\ell _\mathrm{D}`$, i.e. we are in the ‘vicinal’ regime. The step current $`\stackrel{}{j}_s`$ can always be written as the sum of $`\stackrel{}{j}_s^{\parallel }`$ and $`\stackrel{}{j}_s^{\perp }`$, where $`\stackrel{}{j}_s^{\parallel }=(\stackrel{}{j}_s\cdot \stackrel{}{m})\stackrel{}{m}/m^2`$ and $`\stackrel{}{j}_s^{\perp }=(\stackrel{}{j}_s\cdot \stackrel{}{m}_{\perp })\stackrel{}{m}_{\perp }/m^2`$. The vector $`\stackrel{}{m}_{\perp }=(-m_y,m_x)`$ is orthogonal to $`\stackrel{}{m}`$. If we are close to the $`[100]`$ orientation, the current $`\stackrel{}{j}_s`$ is simply $`\stackrel{}{j}_{[100]}=j_{1d}(m_s)\widehat{x}`$, where $`j_{1d}(m_s)`$ is the usual unstable current for a one-dimensional surface whose slope is $`m_s=\mathrm{tan}\theta =m_x/|m_y|`$. For small $`\theta `$, $`j_{1d}=a_sm_s`$, with $`a_s=(F_s/\ell )\ell _k\ell _d^2/[2(\ell _k+\ell _d)]`$. By decomposing $`\stackrel{}{j}_{[100]}`$ along $`\stackrel{}{m}`$ and $`\stackrel{}{m}_{\perp }`$, we obtain $$\stackrel{}{j}_{[100]}=\frac{j_{1d}(m_s)}{m^2}[m_x\stackrel{}{m}-m_y\stackrel{}{m}_{\perp }].$$ (2) The uphill component is $`(\stackrel{}{j}_{[100]}\cdot \stackrel{}{m}/m)\simeq a_sm_x^2/m_y^2>0`$, for small $`m_s`$. The positive value of this component explains why the step-edge current alone is enough to destabilize a flat, high-symmetry surface. 
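The decomposition of $`\stackrel{}{j}_{[100]}=j_{1d}\widehat{x}`$ can be verified directly. The sketch below uses illustrative slope values and assumes the convention $`\stackrel{}{m}_{\perp }=(-m_y,m_x)`$; it reassembles the current from its components along $`\stackrel{}{m}`$ and $`\stackrel{}{m}_{\perp }`$ and checks that the uphill component is positive:

```python
import math

a_s = 1.0
m_x, m_y = 0.05, -1.0        # nearly [100]-oriented step, small positive theta
m = math.hypot(m_x, m_y)
m_s = m_x / abs(m_y)         # m_s = tan(theta)
j1d = a_s * m_s              # linear one-dimensional step current

mv = (m_x, m_y)              # slope vector m
mp = (-m_y, m_x)             # assumed m_perp convention
# decomposition of Eq. (2): j = (j1d/m^2) * (m_x * m - m_y * m_perp)
jx = j1d / m**2 * (m_x * mv[0] - m_y * mp[0])
jy = j1d / m**2 * (m_x * mv[1] - m_y * mp[1])

uphill = (jx * m_x + jy * m_y) / m   # component of j along the slope
```

The reassembled vector equals $`(j_{1d},0)`$, and the uphill component reduces to $`a_sm_x^2/(|m_y|m)`$, positive for any tilt of the step.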
More details are given in Sec. V. If we are close to the $`[110]`$ orientation, as explained above, the current originates from entropic fluctuations around the straight step, which create segments along the $`[100]`$ and $`[010]`$ directions. Each segment of length $`s`$ contributes a local current proportional to $`(s-1)`$. Therefore, the components $`j_x`$ and $`j_y`$ of the step current along the $`\widehat{x}`$ and $`\widehat{y}`$ axes are simply proportional to the probabilities (per step site) $`p_{-}^2`$ and $`p_+^2`$ of having a pair of consecutive step sites in the horizontal and vertical direction, respectively: $$\stackrel{}{j}_{[110]}=\frac{F_s}{\ell }(p_{-}^2\widehat{x}-p_+^2\widehat{y})=F\left(\frac{1-p}{2}\right)^2\widehat{x}-F\left(\frac{1+p}{2}\right)^2\widehat{y}.$$ (3) By using the relations $`p=\mathrm{tan}(\theta -\pi /4)`$ and $`m_s=\mathrm{tan}\theta `$, after some easy algebra we obtain $$\stackrel{}{j}_{[110]}=\frac{F}{\sqrt{1+m_s^2}}\left[\frac{|m_s|}{1+|m_s|}\frac{\stackrel{}{m}}{m}+\frac{1-|m_s|^3}{(1+|m_s|)^2}\frac{\stackrel{}{m}_{\perp }}{m}\right].$$ (4) The expressions (2) and (4) are valid close to the $`[100]`$ and $`[110]`$ orientations: they can be generalized to any value of $`m_s`$, i.e. to any angle $`\theta `$, by writing $$\stackrel{}{j}_s=A(\theta )\stackrel{}{m}/m+B(\theta )\stackrel{}{m}_{\perp }/m.$$ (5) Both $`A`$ and $`B`$ are periodic in $`\theta `$, with period $`\pi /2`$ (see Fig. 3). Their behaviours close to $`\theta =0`$ and $`\theta =\pi /4`$ follow from $`\stackrel{}{j}_{[100]}`$ (Eq. (2)) and $`\stackrel{}{j}_{[110]}`$ (Eq. (4)). More precisely, $`A(\theta )`$ is always nonnegative; it has a minimum at $`\theta =0`$ and a maximum at $`\theta =\pi /4`$. The function $`B(\theta )`$ vanishes at $`\theta =0,\pi /4`$; it has a positive slope at $`\theta =0`$ and a negative slope at $`\theta =\pi /4`$. These properties are all that we need in the following, and they do not depend on the specific model assumed to derive Eqs. 
(2,4), because they are mainly due to symmetry considerations. For example, if multiple kinks are allowed when the step is close to the $`[100]`$ orientation, $`A(\theta )`$ is no longer zero at $`\theta =0`$, but $`A(0)>0`$ and $`\theta =0`$ is still a minimum. Finally note that, in contrast to the terrace current $`\stackrel{}{j}_t`$ , the step contribution (5) is independent of the surface slope $`m`$, i.e. the step distance, in the vicinal regime ($`m\ell _\mathrm{D}\gg 1`$). (Strictly speaking, a slope dependence is maintained through the $`\ell `$-dependence of $`\ell _{D_s}`$, but when $`\ell `$ is small, $`\ell _{D_s}`$ should be replaced by the distance between thermally activated kinks; see the main text.) Before going on, let us generalize the expression of $`\stackrel{}{j}_s`$ to any value of the surface slope $`\stackrel{}{m}`$. In the limit $`m\ll 1/\ell _\mathrm{D}`$, both $`\stackrel{}{j}_s`$ and $`\stackrel{}{j}_t`$ go to zero, because contributions coming from steps and terraces of opposite sign compensate . In a region of small slope, $`\stackrel{}{j}_{s,t}|_{\mathrm{small}\mathrm{slope}}=\frac{N_+-N_{}}{N}\stackrel{}{j}_{s,t}|_{\mathrm{large}\mathrm{slope}}`$, where $`N_\pm `$ is the number of positive and negative steps (for $`\stackrel{}{j}_s`$) or terraces (for $`\stackrel{}{j}_t`$) and $`N=N_++N_{}`$. Since $`(N_+-N_{})/N=m\ell _\mathrm{D}`$, in the small slope regime $`\stackrel{}{j}_s^{\parallel }=A(\theta )\ell _\mathrm{D}\stackrel{}{m}`$ and $`\stackrel{}{j}_s^{\perp }=B(\theta )\ell _\mathrm{D}\stackrel{}{m}_{\perp }`$. For a generic slope, we can write $$\stackrel{}{j}_s^{\parallel }=A(\theta )g(m^2)\stackrel{}{m},\stackrel{}{j}_s^{\perp }=B(\theta )g(m^2)\stackrel{}{m}_{\perp }$$ (6) where $`g(m^2)\simeq \ell _\mathrm{D}`$ for $`m\ell _\mathrm{D}\ll 1`$ and $`g(m^2)\simeq 1/m`$ for $`m\ell _\mathrm{D}\gg 1`$. 
The simplest function interpolating between the two limiting forms is $`g(m^2)=\ell _\mathrm{D}/\sqrt{1+m^2\ell _\mathrm{D}^2}`$, but its actual form does not need to be specified. ## IV Stability of vicinal surfaces Let us now perform a linear stability analysis of a growing vicinal surface of average slope $`\stackrel{}{m}_0`$. The local height is $`z(\stackrel{}{x},t)=\stackrel{}{m}_0\cdot \stackrel{}{x}+ϵ(\stackrel{}{x},t)`$ and the local slope is $`\stackrel{}{m}=\stackrel{}{m}_0+\nabla ϵ`$. The evolution, as determined by the step current, is given by the equation $`\partial _tz=-\nabla \cdot \stackrel{}{j}_s`$. By using the general properties given above for $`A`$ and $`B`$, and working in the special cases $`\theta _0=0`$ and $`\theta _0=\pi /4`$, we obtain: $$\partial _tz=-\nabla \cdot \stackrel{}{j}_s=-g(m_0^2)[A(\theta _0)+B^{\prime }(\theta _0)]\partial _{\perp }^2ϵ-A(\theta _0)(\partial /\partial m_0)[m_0g(m_0^2)]\partial _{\parallel }^2ϵ$$ (7) where $`\partial _{\perp }`$ and $`\partial _{\parallel }`$ are directional derivatives perpendicular and parallel to $`\stackrel{}{m}`$ (i.e. parallel and perpendicular to the step). Thus the coefficient of $`\partial _{\perp }^2`$ gives information on step meandering, and that of $`\partial _{\parallel }^2`$ on step bunching . Since $`A(\theta )\ge 0`$ and $`(\partial /\partial m_0)[m_0g(m_0^2)]>0`$, the current $`\stackrel{}{j}_s`$ (more precisely $`\stackrel{}{j}_s^{\parallel }`$) always has a destabilizing character with respect to step bunching; if multikinks are not allowed along the $`[100]`$ orientation, $`A(0)=0`$ and the effect is absent along the $`\widehat{x},\widehat{y}`$ axes. Also, as may be expected, $`\stackrel{}{j}_s^{\perp }`$ has no effect on this instability. Concerning step meandering, we must distinguish between $`\theta _0=0`$ and $`\theta _0=\pi /4`$, because $`B^{\prime }(\theta _0)`$ has opposite signs in the two cases. For $`\theta _0=0`$ the derivative is positive and therefore steps along the $`[100]`$ orientation are destabilized by step meandering. On the contrary, the evaluation of $`[A(\pi /4)+B^{\prime }(\pi /4)]`$ gives, using Eq. (4), a negative result ($`=-F/\sqrt{2}`$). 
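The quoted value $`-F/\sqrt{2}`$ can be reproduced numerically from Eq. (4), reading off $`A(\theta )`$ and $`B(\theta )`$ as the coefficients of $`\stackrel{}{m}/m`$ and $`\stackrel{}{m}_{\perp }/m`$ and evaluating $`B^{\prime }`$ by a central finite difference (a sketch with $`F=1`$):

```python
import math

F = 1.0

def A(theta):
    # coefficient of m/m in Eq. (4), with m_s = tan(theta)
    ms = abs(math.tan(theta))
    return F / math.sqrt(1 + ms**2) * ms / (1 + ms)

def B(theta):
    # coefficient of m_perp/m in Eq. (4)
    ms = abs(math.tan(theta))
    return F / math.sqrt(1 + ms**2) * (1 - ms**3) / (1 + ms)**2

t = math.pi / 4
h = 1e-5
Bprime = (B(t + h) - B(t - h)) / (2 * h)   # central finite difference
value = A(t) + Bprime                       # expected: -F/sqrt(2)
```

This confirms that the meandering coefficient for a $`\theta _0=\pi /4`$ step is negative, i.e. stabilizing.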
Therefore, steps along the $`[110]`$ orientations are stabilized against step meandering. Our conclusion regarding the $`[100]`$ steps agrees with the analysis of Pierre-Louis et al., while in the case of $`[110]`$ steps they find stability (instability) for large (small) values of the kink ES length $`\ell _k`$. Our expression (4) is valid if adatoms are not allowed to turn around corners, which effectively sets $`\ell _k=\infty `$. Its generalization to any $`\ell _k`$ indeed shows that $`A(\pi /4)`$ becomes a minimum (and the quantity $`[A(\pi /4)+B^{\prime }(\pi /4)]`$ changes sign) for $`\ell _k<2\ell ^{*}`$, where $`\ell ^{*}`$ is the typical distance between corners along a step. In the ‘random walk’ model for the step, $`\ell ^{*}=2`$ and therefore the $`[110]`$ orientation is stabilized unless the kink ES effect is extremely weak. Ramana Murty and Cooper have performed Monte Carlo simulations of a vicinal surface, with steps along the $`[100]`$ axis. Step meandering is indeed observed, even if the terrace current $`\stackrel{}{j}_t`$ is absent. Conversely, no step bunching seems to occur, suggesting that their simulations correspond to $`A(0)=0`$. ## V Stability of singular surfaces The analysis of a high-symmetry surface is complicated by the non-analytic behavior of $`\stackrel{}{j}_s`$ and $`\stackrel{}{j}_t`$ at $`\stackrel{}{m}=0`$. In the small slope regime ($`m\ll 1/\ell _\mathrm{D}`$), $`\stackrel{}{j}_s^{\parallel }`$ and $`\stackrel{}{j}_s^{\perp }`$ become (see Eq. (6)) $$\stackrel{}{j}_s^{\parallel }=\ell _\mathrm{D}A(\theta )\stackrel{}{m},\stackrel{}{j}_s^{\perp }=\ell _\mathrm{D}B(\theta )\stackrel{}{m}_{\perp }.$$ (8) It should be stressed that the singularity is physically justified, as we now argue. Close to an extremum of the profile, $`z(x,y)=z_0+(c_1/2)x^2+(c_2/2)y^2+c_3xy`$, $`\stackrel{}{m}=(c_1x+c_3y,c_2y+c_3x)`$ and $`m_s=(c_1r+c_3)/|c_2+c_3r|`$, where $`r=x/y`$. The prefactors $`A`$ and $`B`$ are therefore manifestly non-analytic functions at $`x=y=0`$. 
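The direction dependence at the extremum can be made explicit numerically: approaching $`x=y=0`$ along different rays $`r=x/y`$ gives different limiting values of $`m_s`$, so $`A`$ and $`B`$ have no unique value there. A sketch with arbitrary illustrative curvatures $`c_1,c_2,c_3`$:

```python
c1, c2, c3 = 1.0, 2.0, 0.5   # illustrative curvatures of the extremum

def m_s(x, y):
    # m_s = m_x / |m_y| for the quadratic profile around the extremum
    mx, my = c1 * x + c3 * y, c2 * y + c3 * x
    return mx / abs(my)

# approach the extremum along two different rays r = x/y
vals_ray1 = [m_s(t, t) for t in (1e-2, 1e-4, 1e-6)]        # r = 1
vals_ray2 = [m_s(t, 2 * t) for t in (1e-2, 1e-4, 1e-6)]    # r = 1/2
```

Along each ray $`m_s`$ is constant (here $`3/5`$ and $`4/9`$, respectively), but the two limits differ, which is exactly the non-analyticity discussed above.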
The reason is that close to an extremum, steps are closed lines, and as the top (or the bottom) of the profile is approached, the step orientation is no longer defined. The angular dependence of $`A`$ and $`B`$ also implies that the evolution equation does not become linear in the small-slope regime, and hence arbitrary profiles cannot be treated as superpositions of harmonic ones. The problem of non-analyticity does not appear when we consider a one-dimensional profile, i.e. a profile varying only in one direction (for example, $`z=z(x,t)`$), because the prefactors $`A`$ and $`B`$ are now constants (the angle $`\theta `$ may indeed take the values $`\theta _0`$ and $`(\theta _0+\pi )`$, corresponding to $`\mathrm{tan}\theta =\pm |m_s|`$, but because of the $`\pi /2`$ periodicity, $`A(\theta _0)=A(\theta _0+\pi )`$; this is no longer true for the (111) surface of a cubic crystal). This implies that the divergence of the current is easily evaluated: $`\partial _tz`$ $`=`$ $`-\nabla \cdot (\stackrel{}{j}_s+\stackrel{}{j}_t)=-\nabla \cdot [\ell _\mathrm{D}A(\theta )\stackrel{}{m}+\ell _\mathrm{D}B(\theta )\stackrel{}{m}_{\perp }+a(\theta )\stackrel{}{m}]`$ (9) $`=`$ $`-[\ell _\mathrm{D}A(\theta _0)+a(\theta _0)]\nabla ^2z\equiv -\nu (\theta _0)\nabla ^2z.`$ (10) The component of the step current parallel to the step ($`\stackrel{}{j}_s^{\perp }`$) does not contribute, because $`\nabla \cdot \stackrel{}{m}_{\perp }\equiv 0`$, while the component parallel to the slope ($`\stackrel{}{j}_s^{\parallel }`$) destabilizes the flat surface, analogously to $`\stackrel{}{j}_t`$. The instability gives rise to pyramid-shaped mounds, whose orientations $`\theta _i`$ should be the most unstable ones, i.e. correspond to the maxima of $`\nu (\theta )`$. In this respect, the step current favours the orientations forming $`45^{\circ }`$ with the crystallographic axes, while the anisotropy induced by the terrace current depends on the microscopic details of the sticking processes. 
Since the presence of more kinks along the step should favour the descent of the adatom, it is reasonable to think that $`\ell _{\mathrm{ES}}(\theta )`$ is maximal at $`\theta =0,\pi /2`$ , and therefore the two contributions to $`\nu (\theta )`$ should compete. Close to $`\theta =0`$, we have $$\nu (\theta )=a_sm_s\frac{m_x}{m}+a(\theta )\simeq a_s\theta ^2+a(\theta ).$$ (11) By using expression (1) for $`a(\theta )`$ and the expression for $`a_s`$ given above Eq. (2), we obtain $$\nu ^{\prime \prime }(0)=F\left[\frac{\ell _\mathrm{D}^3\ell _{\mathrm{ES}}^{\prime \prime }(0)}{2(\ell _{\mathrm{ES}}+\ell _\mathrm{D})^2}+\frac{\ell _k\ell _d^2}{\ell _k+\ell _d}\right].$$ (12) Mounds will align along the crystallographic axes if $`\nu (\theta )`$ has a maximum at $`\theta =0`$, i.e. if $`\nu ^{\prime \prime }(0)<0`$. It is apparent that for a sufficiently large ES effect at steps this condition is not fulfilled, because the anisotropic character of $`\stackrel{}{j}_t`$ disappears. On the other hand, $`\ell _{\mathrm{ES}}`$ should also not be too small, otherwise $`\stackrel{}{j}_t`$ itself would be negligible. Therefore $`\nu (\theta )`$ will have maxima at $`\theta =0,\pi /2`$ only if several conditions are simultaneously satisfied: i) $`\ell _{\mathrm{ES}}^{\prime \prime }(0)`$ must be negative (in simulations of solid-on-solid models the step-edge barriers are often implemented so as to reduce the probability for adatoms to approach steps, rather than to descend from them ; in this case the barrier at a $`[110]`$ step may in fact slightly exceed that of the close-packed $`[100]`$ step ); ii) $`\ell _{\mathrm{ES}}/\ell _\mathrm{D}`$ should be neither too large nor too small; iii) $`\ell _k/\ell _d`$ should be small. Two recent works reported simulations on high-symmetry surfaces, taking step-edge diffusion into account . In both cases, if kink barriers are present the mound sides align along the $`[110]`$ and equivalent axes. 
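Equation (12) can be cross-checked against a finite-difference second derivative of Eq. (11). The sketch below assumes a simple model anisotropy $`\ell _{\mathrm{ES}}(\theta )=\ell _0-c\theta ^2`$ near $`\theta =0`$ (so that $`\ell _{\mathrm{ES}}^{\prime \prime }(0)=-2c`$); all parameter values are illustrative:

```python
F, l_D = 1.0, 30.0
l_k, l_d = 2.0, 10.0
l0, c = 20.0, 40.0                      # assumed model: l_ES(theta) = l0 - c*theta^2

def l_ES(theta):
    return l0 - c * theta**2            # l_ES''(0) = -2c < 0

def a(theta):                           # terrace coefficient, Eq. (1)
    le = l_ES(theta)
    return F * le * l_D**2 / (2 * (le + l_D))

a_s = F * l_k * l_d**2 / (2 * (l_k + l_d))

def nu(theta):                          # Eq. (11), small-theta form
    return a_s * theta**2 + a(theta)

h = 1e-3
nu2_num = (nu(h) - 2 * nu(0.0) + nu(-h)) / h**2          # finite difference
nu2_formula = F * (l_D**3 * (-2 * c) / (2 * (l0 + l_D)**2)
                   + l_k * l_d**2 / (l_k + l_d))          # Eq. (12)
```

For these (arbitrary) parameters the two evaluations agree and $`\nu ^{\prime \prime }(0)<0`$, i.e. the terrace anisotropy wins and mounds would align along the crystallographic axes.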
In the former there are no ES barriers at steps, and in the latter the barriers are infinite and therefore isotropic; hence mound orientation is determined only by $`\stackrel{}{j}_s`$, in agreement with our picture. ## VI Discussion Some aspects of the subject considered in the present paper have not been sufficiently clarified and deserve further analysis. First of all, the non-analytic behaviour of the surface current remains problematic, because it implies that the continuum evolution equation $`\partial _tz+\nabla \cdot \stackrel{}{j}=0`$ is not defined at $`\stackrel{}{m}=0`$. As we have argued above, this non-analyticity is an inescapable consequence of the persistence of crystal anisotropy in the ‘linear’ regime of the instability; if it could be removed, e.g. through a more careful treatment of the interpolation between vicinal and singular surfaces, this would also imply that mounds are initially isotropic and develop their anisotropic shapes only in the nonlinear regime. It is however also conceivable that, as in the case of equilibrium surface relaxation below the roughening temperature , the appearance of a singularity in the continuum evolution equation carries a real physical message: that the surface is not well described by a smooth function $`z(\stackrel{}{r},t)`$ near its maxima and minima. A second important aspect concerns vicinal surfaces. We have seen that for $`[110]`$ steps $`\stackrel{}{j}_s`$ has a stabilizing character with respect to step meandering and a destabilizing character with respect to step bunching. It would be interesting to evaluate these effects quantitatively and to compare them with the opposing effects of the terrace current. This comparison has been done for step meandering , and it seems that the effect of $`\stackrel{}{j}_s`$ may dominate over $`\stackrel{}{j}_t`$. At any rate, the predicted step bunching instability should be clearly visible in simulations of models which have no ES barriers but only asymmetric sticking at kinks . 
Finally, in this work we have not addressed the effects of crystal anisotropy on the smoothening terms in the continuum evolution equation, which are crucial in determining the actual length scale of the instability . Under far-from-equilibrium conditions, the dominant smoothening mechanism is believed to be due to random nucleation , which is manifestly isotropic; the anisotropy of the equilibrium step free energies will however be felt if detachment from steps becomes significant. ## VII Conclusions In this paper we have studied the different contributions to the surface current on a (100)-surface, which depend only on the slope $`\stackrel{}{m}`$. Such contributions come from biased surface diffusion, both on terraces ($`\stackrel{}{j}_t`$) and along steps ($`\stackrel{}{j}_s`$), where the bias mechanism is an Ehrlich-Schwoebel barrier at steps and kinks, respectively. The expressions for $`\stackrel{}{j}_{s,t}`$ are relevant in two respects: they determine the linear stability of the flat surface and, in the case of unstable growth, they also determine the shape and the orientation of the emerging structure. The terrace current is parallel to the slope, while the step current has components parallel and perpendicular to the slope, because step diffusion takes place along the $`[100]`$- and $`[010]`$-segments that constitute the step. A first important result is that the anisotropic character of $`\stackrel{}{j}_s`$ and $`\stackrel{}{j}_t`$ persists in the small slope regime $`\stackrel{}{m}\to 0`$: this means that they are both non-analytic at $`\stackrel{}{m}=0`$, which prevents a complete continuum description of the evolution of a high-symmetry surface. Concerning the stability of a singular surface, $`\stackrel{}{j}_s`$ is found to be destabilizing, because it has a positive component in the direction of $`\stackrel{}{m}`$. 
The most unstable orientations form angles of $`45^{\circ }`$ with the crystallographic axes, while $`\stackrel{}{j}_t`$ is usually (but not always, see the footnote in Sec. V) thought to select the $`\widehat{x},\widehat{y}`$ axes. If a competition exists, $`\stackrel{}{j}_s`$ should prevail (see the discussion below Eq. (12)). The stability of a vicinal surface is more complex. The terrace current $`\stackrel{}{j}_t`$ is known to stabilize against step bunching and to destabilize towards step meandering, whatever the orientation of the surface. Surprisingly, the step current is instead found to generally favor step bunching (along the crystallographic axes $`\stackrel{}{j}_s`$ has no effect if multiple kinks are not present). Finally, the effect of $`\stackrel{}{j}_s`$ on step meandering strongly depends on the surface orientation and on the strength of the ES effect at kinks: $`[100]`$ steps are destabilized, while $`[110]`$ steps may be stabilized if the kink ES barrier is not too weak. Acknowledgements. We have benefited from discussions with Th. Michely and P. Šmilauer. P.P. acknowledges support by the Alexander von Humboldt foundation. The work of J.K. was supported by DFG within SFB 237. 
## A List of symbols | $`z(\stackrel{}{x},t)`$ | local height of the surface at point $`\stackrel{}{x}`$ and time $`t`$ | | --- | --- | | $`\stackrel{}{m}=\nabla z`$ | local slope | | $`\stackrel{}{m}_{\perp }=(-m_y,m_x)`$ | vector orthogonal to the slope | | $`\theta =\mathrm{arctan}\left(\frac{m_x}{|m_y|}\right)`$ | angle between the average orientation of a step and the $`\widehat{x}`$ axis | | $`m=|\stackrel{}{m}|=|\stackrel{}{m}_{\perp }|`$ | modulus of the local slope | | $`\ell =1/m`$ | terrace size | | $`\stackrel{}{m}_0`$ | average slope of a vicinal surface | | $`\stackrel{}{j}_t`$ | terrace current | | $`\stackrel{}{j}_s`$ | step current | | $`\stackrel{}{j}_s^{\parallel }`$ | step current parallel to the slope $`\stackrel{}{m}`$ | | $`\stackrel{}{j}_s^{\perp }`$ | step current perpendicular to the slope $`\stackrel{}{m}`$ | | $`F`$ | intensity of the external flux of particles | | $`F_s=F\ell `$ | rate of particles arriving at the step | | $`D`$ | diffusion constant on the terrace | | $`D_s`$ | diffusion constant along a step | | $`D^{\prime }`$ | attachment rate of an adatom on a terrace to the ascending step | | $`D^{\prime \prime }`$ | attachment rate of an adatom on a terrace to the descending step | | $`D_s^{\prime }`$ | attachment rate of an adatom on a step to the ascending kink | | $`D_s^{\prime \prime }`$ | attachment rate of an adatom on a step to the descending kink | | $`\ell _{\mathrm{ES}}`$ | ES length on a terrace | | $`\ell _k`$ | ES length along a step | | $`\ell _\mathrm{D}`$ | diffusion length on a terrace | | $`\ell _{D_s}`$ | diffusion length along a step | | $`\ell _d`$ | the minimum of $`\ell _{D_s}`$ and the distance between thermally | | | activated kinks | | $`p_{\mp }`$ | In the directed random walk picture of a step close to the $`[110]`$ orientation, | | --- | --- | | | probabilities that the step goes in the $`\widehat{x}`$ ($`p_{-}`$) and $`\widehat{y}`$ ($`p_+`$) directions | | $`p=p_+-p_{-}`$ | asymmetry in the probabilities $`p_\pm `$ |
# Magnetic relaxation in a classical spin chain as model for nanowires ## I Introduction Many novel physical effects occur in connection with the decreasing size of the systems under investigation. Magnetic materials are now controllable in the nanometer regime, and there is broad interest in understanding the magnetism of small magnetic structures and particles, owing to the wide variety of industrial applications . Magnetic particles which are small enough to be single-domain are proposed to have good qualities for magnetic recording, and arrays of isolated, nanometer-sized particles are thought to enhance the density of magnetic storage. On the other hand, there is an ultimate limit for the density of magnetic storage, given by the particle size below which superparamagnetism sets in . Therefore, the role of thermal activation for the stability of the magnetization in nanometer-sized structures and particles is currently studied both experimentally and theoretically. Wernsdorfer et al. measured the switching times in isolated nanometer-sized particles , and wires . For small enough particles they found agreement with the theory of Néel and Brown , who described the magnetization switching in Stoner-Wohlfarth particles by thermal activation over a single energy barrier resulting from the coherent rotation of the magnetic moments of the particle. For larger particles, and also for wires, nucleation processes and domain wall motion were found to become relevant. Asymptotic formulae for the escape rates following from the corresponding Fokker-Planck equations have been derived for ensembles of isolated Stoner-Wohlfarth particles as well as for a one-dimensional model . Most of the numerical studies of magnetization switching are based on Monte Carlo methods. 
Here, mainly nucleation phenomena in Ising models have been studied, but vector spin models have also been used to investigate the magnetization reversal in systems with many, continuous degrees of freedom qualitatively. However, Monte Carlo methods, even though well established in the context of equilibrium thermodynamics, do not allow for a quantitative interpretation of the results in terms of a realistic dynamics. Only recently, a Monte Carlo method with a quantified time step was introduced . Here, the interpretation of a Monte Carlo step as a realistic time interval was achieved by a comparison of one step of the Monte Carlo process with a time interval of a Langevin equation, i.e., a stochastic Landau-Lifshitz-Gilbert equation. Numerical methods for the direct integration of a Langevin equation are more time consuming than a Monte Carlo method, but are nevertheless highly desirable, since here a realistic time is naturally introduced by the equation of motion. The validity of different integration schemes is still under discussion (see for a discussion of the validity of Itô and Stratonovich integration schemes) and, hence, here, as well as for the Monte Carlo methods, analytically solvable models are desirable as test tools for the evaluation of the numerical techniques. In this paper we will consider a chain of classical magnetic moments — a system which can be interpreted as a simplified model for ferromagnetic nanowires or extremely elongated nanoparticles . This model is very useful for two reasons: i) since it has been treated analytically, asymptotic results for the energy barriers as well as for the escape rates are available; hence, the model can be used as a test tool for numerical techniques. ii) Depending on the system size, the given anisotropies, and the strength of the magnetic field, different reversal mechanisms may appear and can be investigated. 
For small system sizes the magnetic moments are expected to rotate coherently , while for sufficiently large system sizes the so-called soliton-antisoliton nucleation is proposed . Therefore, a systematic numerical investigation of the relevant energy barriers, time scales, and of the crossover from coherent rotation to nucleation is possible. The outline of the paper is as follows. In Sec. II we first introduce the model, and we compare the two different numerical techniques mentioned above, namely the Monte Carlo method and the numerical solution of the Langevin equation, both of which we will use throughout the paper. In Sec. III we compare our numerical results with the theoretical considerations of Braun, concerning the energy barriers as well as the mean first passage times for magnetization switching by coherent rotation and by soliton-antisoliton nucleation. For higher temperatures or driving fields, we also find a crossover to multidroplet nucleation, similar to what is known from Ising models. In Sec. IV we summarize our results and relate them to experimental work. ## II Model and Methods ### A Model Our intention is to compare our numerical results with Braun’s analytical work, which is based on a continuum model for a magnetic nanowire. For our numerical investigations we use a discretized version of this model, namely a one-dimensional classical Heisenberg model. We consider a chain of magnetic moments of length $`L`$ (number of spins) with periodic boundary conditions, defined by the Hamiltonian $`E`$ $`=`$ $`-J{\displaystyle \underset{\langle ij\rangle }{}}𝐒_i\cdot 𝐒_j-d_x{\displaystyle \underset{i}{}}(S_i^x)^2`$ (1) $`+`$ $`d_z{\displaystyle \underset{i}{}}(S_i^z)^2-\mu _s𝐁\cdot {\displaystyle \underset{i}{}}𝐒_i,`$ (2) where the $`𝐒_i=𝝁_i/\mu _s`$ are three-dimensional magnetic moments of unit length. The first sum, which represents the exchange interaction of the magnetic moments, runs over nearest-neighbor pairs with exchange coupling constant $`J`$. 
The second and third sums represent uniaxial anisotropies of the system, where the $`x`$-axis is the easy axis and the $`z`$-axis the hard axis of the system (anisotropy constants $`d_x=0.1J`$, $`d_z=J`$). These anisotropy terms may contain contributions from shape anisotropy as well as from crystalline anisotropies . Even though an exact treatment of dipolar interactions would be desirable, we leave this problem for future work, so that our results presented here are comparable to the analytical work of Braun. The last sum is the coupling of the moments to an external magnetic field, where $`𝐁`$ is the induction. All our simulations using the two different methods above start from a configuration where all magnetic moments point in the $`x`$ direction, antiparallel to the external magnetic field $`𝐁=-B_x\widehat{x}`$. The time $`\tau `$ at which the $`x`$-component of the magnetization changes its sign, averaged over many simulation runs (100-1000, depending on system size and simulation method), is the most important quantity which we determine. In the case where the temperature is low compared to the energy barrier, the system remains in the metastable initial state for a very long time, while the time needed for the magnetization reversal itself is extremely short. In this case $`\tau `$ can be considered to be the so-called mean first passage time required to overcome the energy barrier. For low enough temperatures the mean first passage time should also be comparable to the reciprocal of the escape rate $`\lambda `$, which has been calculated for the model considered here asymptotically from the Fokker-Planck equation in certain limits (for details see ). In classical nucleation theory the so-called lifetime or nucleation time is the time required by the system to build a supercritical droplet, which from then on will grow systematically and reverse the system. The metastable lifetime was measured numerically in Ising models by use of similar methods . 
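The discretized Hamiltonian, Eqs. (1)-(2), can be sketched as a short energy function. The function name and array layout below are our own; the parameters $`d_x=0.1J`$ and $`d_z=J`$ are the values quoted in the text, and `muB` stands for the vector $`\mu _s𝐁`$:

```python
import numpy as np

def chain_energy(S, J=1.0, d_x=0.1, d_z=1.0, muB=np.zeros(3)):
    """Energy of the classical spin chain, Eqs. (1)-(2), with periodic
    boundary conditions. S is an (L, 3) array of unit vectors."""
    exch = -J * np.sum(S * np.roll(S, -1, axis=0))   # sum_i S_i . S_{i+1}
    aniso = -d_x * np.sum(S[:, 0] ** 2) + d_z * np.sum(S[:, 2] ** 2)
    zeeman = -np.dot(muB, S.sum(axis=0))
    return exch + aniso + zeeman

L = 8
S = np.zeros((L, 3))
S[:, 0] = 1.0                    # metastable start: all moments along +x
E0 = chain_energy(S)             # exchange -J*L plus easy-axis term -d_x*L
```

For the uniform initial configuration the hard-axis and Zeeman terms vanish (at zero field), so $`E_0=-(J+d_x)L`$, which is a convenient sanity check for any implementation.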
For low enough temperature $`T`$ all the different times mentioned above are expected to coincide. ### B Langevin dynamics The equation describing the dynamics of a system of magnetic moments is the Landau-Lifshitz-Gilbert (LLG) equation of motion with Langevin dynamics. This equation has the form $`\frac{(1+\alpha ^2)\mu _s}{\gamma }{\displaystyle \frac{\partial 𝐒_i}{\partial t}}`$ $`=`$ $`-𝐒_i\times \left(𝜻_i(t)-{\displaystyle \frac{\partial E}{\partial 𝐒_i}}\right)`$ (3) $`-`$ $`\alpha 𝐒_i\times \left(𝐒_i\times \left(𝜻_i(t)-{\displaystyle \frac{\partial E}{\partial 𝐒_i}}\right)\right),`$ (4) with the gyromagnetic ratio $`\gamma =1.76\times 10^{11}(\text{Ts})^{-1}`$ and the dimensionless damping constant $`\alpha `$. The first part of Eq. 3 describes the spin precession, while the second part includes the relaxation of the moments. In both parts of this equation the thermal noise $`𝜻_i(t)`$ is included, representing thermal fluctuations, with $`\langle 𝜻_i(t)\rangle =0`$ and $`\langle 𝜻_i^k(t)𝜻_j^l(t^{\prime })\rangle =2\delta _{ij}\delta _{kl}\delta (t-t^{\prime })\alpha k_\mathrm{B}T\mu _s/\gamma `$, where $`i,j`$ denote the lattice sites and $`k,l`$ the Cartesian components. Eq. 3 is solved numerically using the Heun method, which corresponds to the Stratonovich discretization scheme . Note that, since the Langevin dynamics simulations are much more time consuming than the Monte Carlo simulation, we use this method here mainly for comparison, and we will present only data for the relaxation on shorter time scales. ### C Monte Carlo simulations The Langevin dynamics simulation cannot be used over the whole time scale of physical interest, so we simulate the system also using Monte Carlo methods with a heat-bath algorithm. Monte Carlo approaches in general have no physical time associated with each step, so that an unquantified dynamic behavior is represented. 
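The predictor-corrector structure of the Heun scheme used in Sec. II B can be illustrated in the noise-free (zero-temperature) limit, where a single moment simply precesses around and relaxes towards a constant effective field. The units, field, and step size below are arbitrary illustrative choices, not the simulation parameters of this work:

```python
import numpy as np

def llg_rhs(S, H, alpha=4.0, gamma_eff=1.0):
    # deterministic LLG right-hand side: precession plus Gilbert-type damping
    return -gamma_eff * (np.cross(S, H) + alpha * np.cross(S, np.cross(S, H)))

def heun_step(S, H, dt, **kw):
    k1 = llg_rhs(S, H, **kw)
    S_pred = S + dt * k1                     # Euler predictor
    k2 = llg_rhs(S_pred, H, **kw)
    S_new = S + 0.5 * dt * (k1 + k2)         # trapezoidal corrector
    return S_new / np.linalg.norm(S_new)     # enforce |S| = 1

H = np.array([0.0, 0.0, 1.0])                # effective field along z
S = np.array([1.0, 0.0, 0.0])
for _ in range(2000):
    S = heun_step(S, H, dt=1e-3)
```

In the stochastic case the field is augmented by the noise $`𝜻_i(t)`$, drawn once per time step and used in both the predictor and the corrector stage, which is what makes the scheme converge to the Stratonovich solution.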
However, we use a new time quantified Monte Carlo method which was proposed in , where the interpretation of a Monte Carlo step as a realistic time interval was achieved by a comparison of one step of the Monte Carlo process with a time interval of the LLG equation in the high damping limit. We will use this algorithm in the following. The trial step of this algorithm is a random movement of a magnetic moment within a cone of size $`r`$ with $$r^2=\frac{20k_BT\alpha \gamma }{(1+\alpha ^2)\mu _s}\mathrm{\Delta }t.$$ (5) Using this algorithm one Monte Carlo step represents a given time interval $`\mathrm{\Delta }t`$ of the LLG equation in the high damping limit as long as $`\mathrm{\Delta }t`$ is chosen appropriately (for details see ). To test the algorithm, in Fig. 1 the $`\alpha `$ dependence of the mean first passage time from the Monte Carlo data is compared with the data of the Langevin dynamics simulation. Each data point is an average over 1000 independent runs. For low values of the damping constant the data do not coincide, while in the high damping limit the Monte Carlo and Langevin dynamics data converge. Hence, throughout this work we will use $`\alpha =4`$, which is large enough so that our Monte Carlo simulation and the numerical solution of Eq. 3 yield identical time scales. Even though this is an unphysically large value for $`\alpha `$, we can compare our results with the analytically obtained high damping asymptotes. ## III Results We investigate the influence of the system size on the occurring reversal mechanisms. There are two extreme cases of reversal mechanisms which might occur in our model, namely coherent (or uniform) rotation and nucleation. For small system sizes all magnetic moments of the particle rotate uniformly in order to minimize the exchange energy. For larger system sizes it is favorable for the system to divide into parts of opposite directions of magnetization parallel to the easy axis which minimizes the anisotropy energy. 
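A minimal sketch of such a time-quantified trial move follows (our illustration: the acceptance rule shown is a simple Metropolis criterion, whereas the paper uses a heat-bath algorithm, and the move displaces each component by up to $`r`$ rather than sampling an exact cone; all parameter values are illustrative).

```python
import math, random

# Sketch of a time-quantified Monte Carlo move based on Eq. 5.  The radius
# r links one Monte Carlo step to a time interval dt of the LLG equation
# in the high damping limit.
def cone_radius(kT, alpha, gamma, mu_s, dt):
    return math.sqrt(20.0 * kT * alpha * gamma * dt / ((1.0 + alpha**2) * mu_s))

def trial_move(S, r, rng=random):
    """Displace each component of a unit spin by up to r, then renormalize
    (a simple variant of the cone move described in the text)."""
    new = [s + r * (2.0 * rng.random() - 1.0) for s in S]
    norm = math.sqrt(sum(c * c for c in new))
    return [c / norm for c in new]

def metropolis_accept(dE, kT, rng=random):
    """Accept the move with the Boltzmann probability."""
    return dE <= 0.0 or rng.random() < math.exp(-dE / kT)
```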
This is a magnetization reversal driven by nucleation and subsequent domain wall propagation. The crossover between these mechanisms is discussed later in this section. First, we study the different reversal mechanisms and compare our numerical data with theoretical formulae. ### A Coherent Rotation In the case of small chain length the magnetic moments rotate coherently while overcoming the energy barrier which is due to the anisotropy of the system. The first theoretical description of the coherent rotation of elongated single-domain particles was developed by Stoner and Wohlfarth . The corresponding energy barrier is $$\mathrm{\Delta }E_{\mathrm{c}r}=Ld_x(1-h)^2$$ (6) with $`h=\mu _sB_x/(2d_x)`$. Néel extended this model to the case of thermal activation and Brown calculated the escape time $$\tau =\tau _{\mathrm{c}r}^{}\mathrm{exp}\frac{\mathrm{\Delta }E_{\mathrm{c}r}}{k_\mathrm{B}T}$$ (7) following a thermal activation law. The prefactor $`\tau ^{}`$ is not a simple constant attempt frequency, as is frequently claimed by several authors, but a complicated function which in general may depend on system size, temperature, field and anisotropies. For our model it is $`{\displaystyle \frac{\tau _{\mathrm{c}r}^{}\gamma }{\mu _s}}=\frac{\pi (1+\alpha ^2)\sqrt{\frac{d_z(1+h)}{d_x(1-h)+d_z}}}{\alpha (d_x(1-h^2)-d_z)+\sqrt{(\alpha (d_x(1-h^2)+d_z))^2+4d_xd_z(1-h^2)}}`$ (8) as was calculated from the Fokker-Planck equation . The main assumption underlying all results above is that the system can be described by a single degree of freedom, namely the magnetic moment of the particle which must have a constant absolute value. Then, the equations above should hold for low enough temperatures $`k_\mathrm{B}T\ll \mathrm{\Delta }E_{\mathrm{c}r}`$ and for $`\mu _sB_x<2d_x`$. For larger fields the energy barrier is zero, so that the reversal is spontaneous without thermal activation (non-thermal magnetization reversal). 
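For concreteness, Eqs. 6-8 can be evaluated numerically as in the following sketch (our own, in units where $`\mu _s=\gamma =1`$; the parameter values used in it are illustrative choices such as $`d_x=0.1J`$, $`d_z=J`$).

```python
import math

# Sketch: Stoner-Wohlfarth barrier (Eq. 6), escape-time prefactor (Eq. 8)
# and Arrhenius escape time (Eq. 7) in units with mu_s = gamma = 1.
def barrier_cr(L, dx, h):
    return L * dx * (1.0 - h)**2

def prefactor_cr(alpha, dx, dz, h):
    num = math.pi * (1.0 + alpha**2) * math.sqrt(dz * (1.0 + h) / (dx * (1.0 - h) + dz))
    den = (alpha * (dx * (1.0 - h**2) - dz)
           + math.sqrt((alpha * (dx * (1.0 - h**2) + dz))**2
                       + 4.0 * dx * dz * (1.0 - h**2)))
    return num / den

def escape_time_cr(L, dx, dz, h, alpha, kT):
    return prefactor_cr(alpha, dx, dz, h) * math.exp(barrier_cr(L, dx, h) / kT)
```

As expected from the thermal activation law, the escape time grows steeply as the temperature is lowered.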
In the following, our numerical results for the mean first passage times $`\tau `$ for coherent rotation are compared with the equations above. Fig. 2 shows the temperature dependence of $`\tau `$ for a given value of the external magnetic field $`h=0.75`$ and three different system sizes. For temperatures $`k_\mathrm{B}T<\mathrm{\Delta }E_{\mathrm{c}r}`$ our data confirm the asymptotic solutions above for smaller system sizes. For the largest system shown here ($`L=16`$) the numerical data are systematically lower than the theoretical prediction. Obviously, the prefactor $`\tau _{\mathrm{c}r}^{}`$ depends on the system size in contradiction to Eq. 8, while the energy barrier (Eq. 6) is obviously still correct (the slope of the data). This size dependence of $`\tau _{\mathrm{c}r}^{}`$ can be explained with the temperature dependence of the absolute value of the magnetic moment of the extended system. In an extended system, the energy barrier for vanishing field, $`d_xL`$, is reduced to $`d_x\sum _i\langle (S_i^x)^2\rangle `$ which, expanded to first order in the temperature, can lead to corrections of the form $`d_x\sum _i\langle (S_i^x)^2\rangle \simeq d_xL(1-aT)`$. Including this in Eq. 7 remarkably leads to an effective reduction of the prefactor $`\tau _{\mathrm{c}r}^{}`$ by a factor $`\mathrm{exp}(-ad_xL)`$ and not to a reduction of the effective energy barrier $`\mathrm{\Delta }E_{\mathrm{c}r}`$. To conclude, in extended systems with many degrees of freedom the reduction of the magnetization effectively leads to a size dependent reduction of the prefactor of the thermal activation law for low temperatures. ### B Soliton-Antisoliton Nucleation With increasing length of the chain a different reversal mechanism becomes energetically favorable, namely the so-called soliton-antisoliton nucleation proposed by Braun . Here, during the reversal the system splits into two parts with opposite directions of magnetization parallel to the easy axis. 
These two parts are separated by two domain walls with opposite directions of rotation in the easy $`xy`$-plane (a soliton-antisoliton pair) which pass through the system during the reversal. Note that we consider periodic boundary conditions; otherwise the nucleation of a single soliton at one end of the chain would also be possible. The energy barrier $`\mathrm{\Delta }E_{\mathrm{n}u}`$ which has to be overcome during this nucleation process is $$\mathrm{\Delta }E_{\mathrm{n}u}=\sqrt{2Jd_x}\left(4\text{tanh}R-4hR\right),$$ (9) with $`R=\text{arcosh}(\sqrt{1/h})`$. In the limit of small magnetic fields, $`h\to 0`$, this energy barrier takes the form $`\mathrm{\Delta }E_{\mathrm{n}u}=4\sqrt{2Jd_x}`$ which represents the well-known energy of two domain walls. The corresponding mean first passage time once again follows a thermal activation law (see Eq. 7). Since our Monte Carlo simulations are for rather large damping, $`\alpha =4`$, we compare our numerical data with the prefactor obtained in the over-damped limit (Eq. 5.4 in ) which in our units is $`{\displaystyle \frac{\tau _{\mathrm{n}u}^{}\gamma }{\mu _s}}={\displaystyle \frac{\pi ^{3/2}(1+\alpha ^2)(k_\mathrm{B}T)^{1/2}(2J)^{1/4}}{16\alpha Ld_x^{7/4}|E_0(R)|\mathrm{tanh}^{3/2}R\mathrm{sinh}R}}`$ (10) The eigenvalue $`E_0(R)`$ has been calculated numerically in . In the limit $`h\to 1`$ it is $`|E_0(R)|\simeq 3R^2`$. The $`1/L`$ dependence of the prefactor reflects the size dependence of the probability of nucleation. The larger the system the more probable is the nucleation process. Furthermore, the prefactor has a remarkable $`\sqrt{T}`$ dependence leading to a slight curvature in the semi-logarithmic plot of the thermal activation law (see Eq. 7). Fig. 3 shows the temperature dependence of the reduced mean first passage time for $`h=0.75`$ and two different system sizes. 
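The barrier of Eq. 9 and its small-field limit can be checked numerically with a few lines (our sketch; the values of $`J`$ and $`d_x`$ used in it are illustrative).

```python
import math

# Sketch of the soliton-antisoliton nucleation barrier, Eq. 9.
# For h -> 0 it approaches 4*sqrt(2*J*dx), the energy of two domain walls.
def barrier_nu(J, dx, h):
    R = math.acosh(math.sqrt(1.0 / h))  # R = arcosh(sqrt(1/h))
    return math.sqrt(2.0 * J * dx) * (4.0 * math.tanh(R) - 4.0 * h * R)
```

The barrier decreases monotonically with increasing field, so nucleation becomes easier at larger $`h`$.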
The formulae above are confirmed only for rather low temperatures ($`\mathrm{\Delta }E_{\mathrm{n}u}/k_\mathrm{B}T>8`$ for the smaller system and $`\mathrm{\Delta }E_{\mathrm{n}u}/k_\mathrm{B}T>10`$ for the larger system). In the range of intermediate temperatures (but still $`k_\mathrm{B}T<\mathrm{\Delta }E_{\mathrm{n}u}`$) the numerical data of both Langevin dynamics and Monte Carlo simulations deviate from the formulae above. Interestingly, in this region the mean first passage times do not depend on system size, in contrast to Eq. 10 where a $`1/L`$-dependence occurs due to the size dependence of the nucleation probability. In order to understand this effect occurring in the intermediate temperature range, Fig. 4 shows the $`x`$-component of the magnetic moments of the chain at the mean first passage time $`\tau `$. The upper diagram shows the pure soliton-antisoliton nucleation occurring at low temperatures. Due to thermal fluctuations a nucleus originates and two domain walls pass through the system. The lower diagram shows that alternatively several nuclei may also grow simultaneously. Obviously, depending on temperature (and also on other quantities like system size and field), many nuclei may arise at the same time with a certain probability. This is multiple nucleation, which was investigated mainly in the context of Ising models, where it is called multidroplet nucleation. ### C Multidroplet Nucleation The mean first passage time $`\tau _{\mathrm{m}n}`$ for the multidroplet nucleation can be calculated with the aid of classical nucleation theory . In the case of multiple nucleation many nuclei with a critical size originate within the same time. These supercritical nuclei grow and join each other, leading to the magnetization reversal. The mean first passage time for this process is determined by the probability for the occurrence of supercritical nuclei, $`1/\tau _{\mathrm{n}u}`$, and the time that is needed for the growth of the nuclei. 
Let us assume that the radius of a supercritical nucleus grows linearly with time $`t`$. Then $`\tau _{\mathrm{m}n}`$ is given by the condition that the change of the magnetization $`\mathrm{\Delta }M`$ equals the system size , $$\mathrm{\Delta }M(\tau _{\mathrm{m}n})=\int _0^{\tau _{\mathrm{m}n}}\frac{(2vt)^D}{\tau _{\mathrm{n}u}}\text{d}t=L^D,$$ (11) where $`v`$ is the domain wall velocity and $`D`$ the dimension of the system. Hence, the time when half of the system is reversed is given by $$\tau _{\mathrm{m}n}=\left(\frac{L}{2v}\right)^{\frac{D}{D+1}}\left((D+1)\tau _{\mathrm{n}u}^{}\right)^{\frac{1}{D+1}}\mathrm{exp}\frac{\mathrm{\Delta }E_{\mathrm{n}u}}{(D+1)k_\mathrm{B}T}.$$ (12) For the one dimensional system which we consider in this paper the lifetime is given by $$\tau _{\mathrm{m}n}=\sqrt{\frac{L\tau _{\mathrm{n}u}^{}}{v}}\mathrm{exp}\frac{\mathrm{\Delta }E_{\mathrm{n}u}}{2k_\mathrm{B}T}.$$ (13) This means that the (effective) energy barrier for the multidroplet nucleation is reduced by a factor 1/2 and the prefactor $`\tau _{\mathrm{m}n}^{}`$ no longer depends on the system size, since $`\tau _{\mathrm{n}u}^{}`$ for the soliton-antisoliton nucleation has a $`1/L`$-dependence (see Eq. 10). Since we do not know the value of the domain wall velocity, we had to fit this parameter, which influences the prefactor $`\tau _{\mathrm{m}n}^{}`$, for the comparison in Fig. 3 in the intermediate temperature range. Nevertheless, in Fig. 3 the reduction of the slope of the curve by the factor 1/2 in the multidroplet regime is confirmed. Also, it can be seen that the prefactor does not depend on the system size, since the data for both sizes coincide, as is explained by the considerations above (we used, of course, the same value of $`v`$ for both curves, since $`v`$ should not depend on the system size). A similar crossover from single to multidroplet excitations was observed in Ising models, field dependent as well as temperature dependent . 
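Eqs. 12 and 13 can be sketched as follows (our illustration; the domain wall velocity `v` is a fitted parameter, as stated above, and all numbers in the sketch are illustrative).

```python
import math

# Sketch of the multidroplet lifetime, Eq. 12, which reduces to
# sqrt(L*tau_nu*/v) * exp(dE/(2*kB*T)) for D = 1 (Eq. 13).
def tau_multidroplet(L, v, tau_nu_star, dE_nu, kT, D=1):
    return ((L / (2.0 * v))**(D / (D + 1.0))
            * ((D + 1.0) * tau_nu_star)**(1.0 / (D + 1.0))
            * math.exp(dE_nu / ((D + 1.0) * kT)))
```

Note how the effective barrier in the exponent is divided by $`D+1`$, which is the factor-1/2 reduction of the slope discussed in the text for $`D=1`$.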
Comparing the mean first passage times for single and multiple nucleation, we get for the intersection of these two times the crossover condition $`L_{\mathrm{s}m}=\sqrt{v\tau _{\mathrm{n}u}^{}L_{\mathrm{s}m}}\mathrm{exp}{\displaystyle \frac{\mathrm{\Delta }E_{\mathrm{n}u}}{2k_\mathrm{B}T}}.`$ (14) The corresponding time is $`L_{\mathrm{s}m}/v`$, the time that a domain wall needs to cross the system. This result is also comparable to calculations in Ising models . ### D Crossover In this section we are interested in the crossover between coherent rotation and soliton-antisoliton nucleation. Therefore, we compare the energy barrier of soliton-antisoliton nucleation (see Eq. 9) with the energy barrier of coherent rotation (see Eq. 6) in order to obtain a condition for the mechanism with the lowest activation energy. The resulting crossover line $`L_c`$ has the form $`L_c={\displaystyle \frac{\sqrt{2Jd_x}\left(4\text{tanh}R-4hR\right)}{d_x(1-h)^2}}.`$ (15) For vanishing magnetic field it is $`L_c=4\sqrt{\frac{2J}{d_x}}`$ and we get the simple condition that four domain walls (not two!) have to fit into the system. Note that Braun also determined this crossover line from a slightly different condition. He calculated the system size below which no nonuniform solution of the Euler-Lagrange equations exists, and he obtained a limit which is similar to Eq. 15 but a little bit lower. A diagram showing which reversal mechanisms occur in our model depending on system size and field is presented in Fig. 5. The crossover line $`L_c`$ derived above separates the coherent rotation region from that of soliton-antisoliton nucleation. For $`h>1`$ the reversal is non-thermal. In the nucleation region, for larger fields a temperature dependent crossover to multiple nucleation sets in. However, this region shrinks as the temperature is lowered. 
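The crossover line of Eq. 15 and its zero-field limit $`L_c=4\sqrt{2J/d_x}`$ can be evaluated directly (our sketch with illustrative parameter values):

```python
import math

# Sketch of Eq. 15: system size at which the nucleation barrier (Eq. 9)
# equals the coherent-rotation barrier (Eq. 6).
def L_crossover(J, dx, h):
    R = math.acosh(math.sqrt(1.0 / h))
    return (math.sqrt(2.0 * J * dx) * (4.0 * math.tanh(R) - 4.0 * h * R)
            / (dx * (1.0 - h)**2))
```

Below $`L_c`$ coherent rotation has the lower barrier; above it, soliton-antisoliton nucleation wins.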
As an example we show in the figure the line $`L_{\mathrm{s}m}`$ which separates single from multiple nucleation for $`k_\mathrm{B}T=0.006J`$ and under the assumption that the domain wall velocity is proportional to the field . In order to confirm Eq. 15 numerically we determine the mean correlations of the $`y`$-components of two spins located at a distance $`L/2`$ (the maximum distance in a system with periodic boundary conditions) at the time $`\tau `$ during the reversal, $`c_y(L/2)=\left[{\displaystyle \frac{1}{N}}{\displaystyle \sum _{i=1}^N}S_i^y(\tau )S_{i+L/2}^y(\tau )\right]_{\mathrm{a}v}.`$ This quantity characterizes the reversal mechanism. Fig. 6 shows the system size dependence of $`c_y(L/2)`$ for different values of the magnetic field. For coherent rotation the limiting value of $`c_y(L/2)`$ is 1 since at the time $`\tau `$ all moments point into the $`y`$ direction (small systems). On the other hand, for nucleation the system is mainly split into two parts where the moments point into the $`\pm x`$ direction. In this case the correlations in the $`y`$ direction are zero (larger system sizes). As a criterion for the crossover between nucleation and coherent rotation we use a value of $`1/2`$ for the correlation function and define the corresponding system size as the crossover length. This analysis leads to the numerical data shown in Fig. 5. The MC data confirm the theoretical crossover line except for the largest magnetic field. Here, the numerical data differ from the theoretical crossover line which diverges in the limit $`h\to 1`$. In this region a mixed reversal mechanism appears: first the magnetic moments rotate coherently up to a certain rotation angle and then an instability sets in and a (restricted) soliton-antisoliton pair arises. Obviously, the coherent rotation is unstable in this region. A hint for the existence of this mixed mechanism can also be found in Fig. 
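The diagnostic $`c_y(L/2)`$ is easily computed from a single spin snapshot (our sketch for a periodic chain; the run-averaging bracket $`[\mathrm{}]_{\mathrm{a}v}`$ is left to the caller):

```python
import numpy as np

# Sketch: correlation of y-components at the maximum separation L/2 in a
# periodic chain, evaluated for one snapshot taken at time tau.
def c_y_half(Sy):
    """Sy: array of the y-components S_i^y(tau) of the N spins."""
    N = len(Sy)
    return float(np.mean(Sy * np.roll(Sy, N // 2)))
```

For coherent rotation (all spins along $`y`$ at time $`\tau `$) this returns 1; for a chain split into $`\pm x`$ domains with vanishing $`y`$-components it returns 0.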
6: for the field $`h=0.95`$ the correlation is not zero for large system sizes but remains finite due to the fact that the whole system first rotates by a certain angle before the nucleation sets in. ## IV Conclusions We investigated the magnetization switching in a classical Heisenberg spin chain which we consider as a simple model for nanowires or elongated nanoparticles and as a test tool for numerical techniques, since analytical expressions for the relevant energy barriers and time scales exist in several limits. Numerically we used Monte Carlo methods as well as Langevin dynamics simulations, and we confirmed that the Monte Carlo algorithm we use yields time quantified data comparable to those of the Langevin dynamics simulation in the limit of high damping. Varying the system size we observed different reversal mechanisms, calculated the mean first passage times for the reversal (i.e. the time scale of the relaxation process) and compared our results with analytical considerations. The comparison of our numerical data with theoretical considerations confirms that both of our numerical techniques can be used to study the relaxation behavior of magnetic systems qualitatively and quantitatively. Fig. 7 summarizes our results. It shows the system size dependence of the reduced mean first passage time for two different temperatures. For small system sizes the spins rotate coherently. Here, the energy barrier (Eq. 6) is proportional to the system size, leading to an increase of $`\tau `$ with system size. Following Eq. 8 the prefactor of the thermal activation law should not be $`L`$ dependent. However, here we found slight deviations from the asymptotic expressions stemming from the non-constant magnetization of extended systems. In the regime of soliton-antisoliton nucleation the energy barrier (Eq. 9) does not depend on the system size but the prefactor (Eq. 10) has a $`1/L`$ dependence. 
Interestingly, this leads to a decrease of the mean first passage time with increasing system size. Therefore, there is a maximum relaxation time close to where the crossover from coherent rotation to nucleation occurs (see Eq. 15). This decrease ends where the so-called multidroplet nucleation sets in (Eq. 14). From here on, with increasing system size the mean first passage time remains constant (Eq. 13). Note that qualitatively the same behavior can be found in the particle size dependence of the dynamic coercivity. Solving the equation describing the thermal activation (Eq. 7) in the three regimes explained above for $`h(L,\tau =\text{const})`$, one finds an increase of the coercivity in the coherent rotation regime, a decrease in the nucleation regime, and at the end a constant value for multiple nucleation. This is qualitatively in agreement with measurements of the size dependence of barium ferrite recording particles . Comparing the energy barrier for the nucleation process (Eq. 9) with experimental values, one should first mention that in an experimental system nucleation will usually start at the sample ends. This reduces the energy barrier by a factor of 2, since without periodic boundary conditions one does not need a soliton-antisoliton pair but just one single excitation. The energy barrier for the nucleation process (Eq. 9) was compared in with energy barriers measured in isolated Ni nanowires, where the experimental energy barrier was found to be reduced by a factor of 3. This agreement is rather encouraging taking into account that in realistic systems the energy barrier might be reduced depending on the form of the sample ends. Regarding the crossover from coherent rotation to nucleation, a similar analysis, even though less rigorous, has been performed for three dimensional systems . Here, in the limit $`h\to 0`$ the crossover diameter is $`L_c=3/2\sqrt{2J/d_x}`$ instead of $`2\sqrt{2J/d_x}`$ for the spin chain with open boundary conditions. 
As discussed in for CoPt particles the crossover length should be of the order of 30nm, which is also roughly in agreement with experiments on Co particles . We should mention that the model treated here contains magnetostatic energy only in a local approximation . Arguments against this treatment have been brought forward , so that the question whether an explicit inclusion of the dipole-dipole interaction would lead to additional reversal modes (e.g. curling), possibly also depending on whether the diameter of the nanowire is larger or smaller than the exchange length, remains open. Calculations following these lines are therefore highly desirable. ## Acknowledgments We thank K. D. Usadel and D. A. Garanin for fruitful discussions and H. B. Braun also for providing numerical results for the eigenvalue $`|E_0(R)|`$. This work was supported by the Deutsche Forschungsgemeinschaft through the Graduiertenkolleg ”Struktur und Dynamik heterogener Systeme” and was done within the framework of the COST action P3 working group 4.
# Quantum analysis of Rydberg atom cavity detector for dark matter axion search ## I Introduction Search for the so-called “invisible” axions as non-baryonic dark matter is one of the most challenging issues in particle physics and cosmology . The mass range $`m_a\sim 10^{-6}\mathrm{eV}`$ to $`10^{-3}\mathrm{eV}`$ is still open for the cosmic axions . (The light velocity is set to be $`c=1`$ throughout the present article.) As originally proposed by Sikivie , the basic idea for the dark matter axion search is to convert axions into microwave photons in a resonant cavity under a strong magnetic field via the Primakoff process. It is, however, extremely difficult to detect the cosmic axions due to their unusually weak interactions with ordinary matter. Pioneering attempts with the amplification-heterodyne method were already published . An advanced experiment by the US group is currently being continued, and some results have been reported, where the KSVZ axion of mass $`2.9\times 10^{-6}\mathrm{eV}`$ to $`3.3\times 10^{-6}\mathrm{eV}`$ is excluded at the 90% confidence level as the dark matter in the halo of our galaxy . In this paper, we describe a quite efficient scheme for the dark matter axion search, where Rydberg atoms are utilized to detect the axion-converted microwave photons . An experimental apparatus called CARRACK I (Cosmic Axion Research with Rydberg Atoms in resonant Cavities in Kyoto) is now running to search for the dark matter axions over a 10 % mass range around $`10^{-5}\mathrm{eV}`$. Based on the performance of CARRACK I, a new large-scale apparatus CARRACK II has been constructed recently to search for the axions over a wide range of mass . Clearly, in order to derive quantitative and rigorous results from the axion search with this type of Rydberg atom cavity detector, it is essential to develop the quantum theoretical formulations and calculations on the interaction of cosmic axions with Rydberg atoms via the Primakoff process in the resonant cavity . 
Here, we present the details of these quantum calculations. This theoretical analysis actually provides important guides for the detailed design of the Rydberg atom cavity detector, which will be examined separately in forthcoming papers. It is also fascinating that these quantum theoretical investigations on the axion-photon-atom interaction will be useful from the viewpoint of applications of cavity quantum electrodynamics to fundamental research. This paper is organized as follows. In Sec. II, we describe the Rydberg atom cavity detector for the dark matter axion search. From the quantum theoretical point of view, this Rydberg atom cavity detector can be treated as a dynamical system of interacting oscillators with dissipation. In Sec. III, these quantum oscillators describing the photons, axions and atoms are introduced appropriately. Then, in Sec. IV, the interaction Hamiltonians are provided in terms of these quantum oscillators. In Sec. V, the quantum dynamics of interacting oscillators with dissipation is generally formulated in the Liouville picture. A series of master equations is derived to determine the time evolution of the quantum averages of the occupation numbers and higher-order correlations of these oscillators. By applying this formulation to the axion-photon-atom system in the resonant cavity, we can, in particular, calculate the number of the atoms in the upper state which are excited by absorbing the axion-converted and thermal background photons. In Sec. VI, we examine some characteristic properties of the axion-photon-atom interaction by solving analytically the master equation for the simple case with time-independent atom-photon coupling. Then, in Sec. 
VII, in order to make precise estimates on the sensitivity of the Rydberg atom cavity detector, we elaborate the calculations by taking into account the motion and uniform distribution of the Rydberg atoms in the incident beam and also the spatial variation of the electric field in the cavity. In Sec. VIII, the dependence of the detection sensitivity on the relevant experimental parameters is discussed. Detailed numerical calculations are performed in Sec. IX, and the quantum evolution of the axion-photon-atom system in the resonant cavity is determined precisely. The counting rates of signal and noise are calculated with the steady solutions of the master equation. The sensitivity of the Rydberg atom cavity detector is then estimated from these calculations. Finally, we summarize the present quantum analysis in Sec. X. Appendices are devoted to some supplementary issues. ## II Rydberg atom cavity detector for dark matter axion search A schematic diagram of the Rydberg atom cavity detector (CARRACK I) is shown in Fig. 1. The axions are first converted into photons under a strong magnetic field in the conversion cavity. Then, the photons are transferred to the detection cavity via a coupling hole, and they are absorbed there by Rydberg atoms. The detection cavity is set to be free from magnetic field to avoid the complexity of the atomic energy levels due to the Zeeman splitting. The Rydberg atoms, the transition frequency of which is tuned approximately to the cavity resonant frequency $`\sim 1\mathrm{GHz}`$, are prepared initially in a lower state with principal quantum number $`n\sim 100`$ by exciting alkaline atoms in the ground state with laser excitation just in front of the detection cavity. It should, however, be noted that atoms in the upper state with $`n^{}(>n)`$ are not generated at this stage. 
The atoms prepared in this way are excited to the upper state by absorbing the microwave photons in the detection cavity, and they are detected quite efficiently with the selective field ionization method just outside the cavity. By cooling the resonant cavity system down to about 10 mK with a dilution refrigerator in high vacuum, the thermal background photons can be reduced sufficiently to obtain a significant signal-to-noise ratio. Hence the Rydberg atom cavity detector, which is intrinsically free from amplifier noise, is expected to be quite efficient for the dark matter axion search. From the quantum theoretical point of view, the Rydberg atom cavity detector can be treated as a dynamical system of interacting oscillators with dissipation which appropriately describe the axions, photons and Rydberg atoms. In the following sections we develop detailed quantum theoretical formulations and calculations for this dynamical system. Specifically, the analysis is made by taking into account the actual experimental situation such as the motion and uniform distribution of the Rydberg atoms in the incident beam and also the spatial variation of the electric field in the cavity. This quantum treatment provides proper estimates on the sensitivity of the Rydberg atom cavity detector for the dark matter axion search. ## III Axion-photon-atom system in the cavity We first identify the quantum oscillators which appropriately describe the photons, axions and Rydberg atoms in the resonant cavity. This provides the basis for investigating various properties of the axion-photon-atom system. ### A Resonant mode of photons The electric field operator in the cavity $`𝒱`$ is given by $$𝑬(𝐱,t)=(\mathrm{}\omega _c/2ϵ_0)^{1/2}[𝜶(𝐱)c(t)+𝜶^{}(𝐱)c^{}(t)]$$ (1) for the radiation mode with a resonant frequency $`\omega _c`$, where $`ϵ_0`$ is the dielectric constant. 
The creation and annihilation operators satisfy the usual commutation relation $$[c,c^{}]=1.$$ (2) The mode vector field $`𝜶(𝐱)`$ is normalized by the condition $$\int _𝒱|𝜶(𝐱)|^2d^3x=1.$$ (3) The whole cavity $`𝒱`$ may be viewed as a combination of two subcavities, the conversion cavity $`𝒱_1`$ with volume $`V_1`$ and the detection cavity $`𝒱_2`$ with volume $`V_2`$, which are coupled together: $$𝒱=𝒱_1𝒱_2.$$ (4) The axion-photon conversion takes place in $`𝒱_1`$ under the strong magnetic field, while the Rydberg atoms are excited by absorbing the photons in $`𝒱_2`$. It is then suitable to divide the mode vector as $$𝜶(𝐱)=𝜶_1(𝐱)+𝜶_2(𝐱),$$ (5) where $`𝜶_1(𝐱)=\mathrm{𝟎}`$ for $`𝐱\in 𝒱_2`$ and $`𝜶_2(𝐱)=\mathrm{𝟎}`$ for $`𝐱\in 𝒱_1`$, respectively. The normalization condition of $`𝜶(𝐱)`$ is rewritten as $$\int _{𝒱_1}|𝜶_1(𝐱)|^2d^3x+\int _{𝒱_2}|𝜶_2(𝐱)|^2d^3x=1.$$ (6) The actual cavity is designed so that, neglecting the small joint region, the subcavities $`𝒱_1`$ and $`𝒱_2`$ admit the mode vectors $`𝜶_1^0(𝐱)`$ and $`𝜶_2^0(𝐱)`$ (up to the normalization and complex phase), respectively, whose frequencies are tuned to be almost equal to some common value $`\omega _c^0`$. In this situation, as confirmed by numerical calculations and experimental observations, two nearby eigenmodes with the frequencies $`\omega _c,\omega _c^{}\simeq \omega _c^0`$ are obtained for the whole cavity $`𝒱`$. Then, the mode vector $`𝜶(𝐱)`$ is constructed approximately of $`𝜶_1(𝐱)\propto 𝜶_1^0(𝐱)`$ and $`𝜶_2(𝐱)\propto 𝜶_2^0(𝐱)`$ with significant magnitudes in both $`𝒱_1`$ and $`𝒱_2`$. The conversion of the cosmic axions takes place predominantly to the radiation mode which is resonant with the axions, satisfying the condition $`|\omega _c-m_a/\mathrm{}|\lesssim \gamma _a`$ (axion width) $`\simeq `$ small fraction of $`\gamma `$ (cavity damping rate), as will be discussed later. The cavity can be designed so as to give a sufficient separation of $`|\omega _c-\omega _c^{}|>\mathrm{several}\gamma `$ for the nearby modes with strong coupling between $`𝒱_1`$ and $`𝒱_2`$. 
Therefore, in the search for the signal from the cosmic axions, a single resonant mode can be extracted for the electric field to a good approximation, as given in Eq. (1), whose frequency $`\omega _c`$ is supposed to be close enough to the axion frequency $`\omega _a=m_a/\mathrm{}`$. ### B Coherent mode of cosmic axions The axion field operator is expanded as usual in terms of the continuous modes: $`\varphi (𝐱,t)`$ $`=`$ $`\mathrm{}^{1/2}{\displaystyle \int \frac{d^3k}{(2\pi )^32\omega _k}\left[a_𝐤(t)\mathrm{e}^{i𝐤𝐱}+a_𝐤^{}(t)\mathrm{e}^{-i𝐤𝐱}\right]}`$ (7) $`\equiv `$ $`\varphi ^+(𝐱,t)+\varphi ^{}(𝐱,t),`$ (9) where $`\varphi ^+`$ and $`\varphi ^{}`$ represent the positive and negative frequency parts, respectively. The cosmic axions form a coherent state with momentum distribution $`\eta _a(𝐤)`$, $$|\eta _a\rangle =N_\eta ^{-1/2}\mathrm{exp}\left(\int \frac{d^3k}{(2\pi )^32\omega _k}\eta _a(𝐤)a_𝐤^{}\right)|0\rangle ,$$ (10) where $`N_\eta `$ is the normalization constant to ensure the condition $`\langle \eta _a|\eta _a\rangle =1`$. The velocity dispersion of the galactic axions is expected to be very small, $`\beta _a\sim 10^{-3}`$, so that $`\eta _a(𝐤)`$ takes significant values only in a small region $`_a`$ where $`|𝐤|\lesssim \beta _am_a/\mathrm{}`$ . The axion field operator (9) may be divided into the low-momentum part $`\varphi __a`$ with $`𝐤\in _a`$ and the residual part $`\varphi _{\mathrm{res}}`$: $$\varphi (𝐱,t)=\varphi __a(𝐱,t)+\varphi _{\mathrm{res}}(𝐱,t).$$ (11) Since the coherent region $`_a`$ is chosen so as to give $`\langle \eta _a|\varphi |\eta _a\rangle \simeq \langle \eta _a|\varphi __a|\eta _a\rangle `$, the residual part $`\varphi _{\mathrm{res}}`$ does not provide significant contributions in the following calculations, i.e., $`\langle \eta _a|\varphi _{\mathrm{res}}|\eta _a\rangle \simeq 0`$. It is further noticed that the spatial variation of $`\varphi __a(𝐱,t)`$ is negligible in the cavity region, i.e., $`\mathrm{e}^{i𝐤𝐱}\simeq 1`$ for $`|𝐤|\lesssim \beta _am_a/\mathrm{}`$ and $`𝐱\in 𝒱_1`$. 
This is because the de Broglie wavelength of the axions, $`\lambda _a\sim h/(\beta _am_a)\sim 100\mathrm{m}`$ typically for $`m_a\sim 10^{-5}\mathrm{eV}`$ and $`\beta _a\sim 10^{-3}`$, is much longer than the microwave cavity scale $`\sim 0.1\mathrm{m}`$. In this situation, to ensure approximately $`\varphi __a(𝐱,t)\simeq \varphi __a(\mathrm{𝟎},t)`$ in the cavity, the coherent axion mode can be identified as $`a(t)`$ $`=`$ $`(\mathrm{}\mathrm{\Sigma }_a)^{-1/2}\varphi __a^+(\mathrm{𝟎},t)`$ (12) $`=`$ $`\mathrm{\Sigma }_a^{-1/2}{\displaystyle \int __a}{\displaystyle \frac{d^3k}{(2\pi )^32\omega _k}}a_𝐤(t).`$ (13) Here, the normalization factor is given by $$\mathrm{\Sigma }_a=\int __a\frac{d^3k}{(2\pi )^32\omega _k}\simeq \frac{1}{2m_a}\left(\frac{\beta _am_a}{2\pi \mathrm{}}\right)^3,$$ (14) so that the coherent mode operator satisfies the canonical commutation relation, $$[a,a^{}]=1.$$ (15) The normalization factor $`\mathrm{\Sigma }_a`$ is expressed in terms of the mean velocity $`\beta _a`$ of the galactic axions with $`\omega _k\simeq m_a/\mathrm{}`$ and $`\beta _a\ll 1`$. It should, however, be realized that this parameter $`\beta _a`$ is introduced just for the normalization of the coherent axion mode. The actual mean velocity of axions is rather determined by the axion spectrum, which may be slightly different from the parameter $`\beta _a`$ given in Eq. (14). The normalization factor $`\mathrm{\Sigma }_a`$ is canceled out anyway in calculating the axion signal rate, as explicitly seen later. The coherent cosmic axions can be described effectively in terms of a single mode oscillator, as shown above. Then, the energy spread of the galactic axions can be taken into account as the damping rate of the coherent axion mode, $$\gamma _a\sim \beta _a^2m_a/\mathrm{}.$$ (16) The expectation value of the number operator $`a^{}a`$ of the coherent axion mode is determined by the energy density $`\rho _a`$ of the cosmic axions as follows. 
The number density operator for the axion field is given by $`\widehat{n}_a(𝐱,t)=(2\mathrm{})^{1}\left(i\varphi ^{}\partial _0\varphi ^+i(\partial _0\varphi ^{})\varphi ^+\right)`$, and its expectation value is calculated for $`𝐱\in 𝒱_1`$ and $`\beta _a\ll 1`$ as $`\eta _a|\widehat{n}_a(𝐱,0)|\eta _a`$ $`\simeq `$ $`\mathrm{}^{2}m_a\eta _a|\varphi __a^{}(\mathrm{𝟎},0)\varphi __a^+(\mathrm{𝟎},0)|\eta _a`$ (17) $`\simeq `$ $`\rho _a/m_a.`$ (18) Then, by considering Eq. (13) to relate the relevant part of the axion field operator $`\varphi __a`$ to the coherent mode operators $`a`$ and $`a^{}`$, we can calculate the occupation number of the coherent axion mode as $$\overline{n}_a=\eta _a|a^{}(0)a(0)|\eta _a\simeq \left(\frac{2\pi \mathrm{}}{\beta _am_a}\right)^3\left(\frac{\rho _a}{m_a}\right).$$ (19) This expression implies that the coherent axion mode is normalized suitably in a box with a volume of $`(\text{de Broglie wavelength})^3`$. ### C Oscillator for Rydberg atoms The Rydberg atom is treated well as a two-level system in the resonant cavity. The relevant lower and upper atomic states are represented by $`|n`$ and $`|n^{}`$, respectively. They are connected by the electric dipole transition with frequency $`\omega _b=(E_{n^{}}E_n)/\mathrm{}`$, which is actually fine-tuned to be almost equal to the cavity frequency $`\omega _c`$ by utilizing the small Stark shift. This two-level atomic system is described in terms of spin 1/2 like operators $`D^+`$ $`=`$ $`|n^{}n|,D^{}=|nn^{}|,`$ (20) $`D^3`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left(|n^{}n^{}||nn|\right).`$ (21) These operators satisfy the usual SU(2) Lie algebra, $$[D^+,D^{}]=2D^3,[D^3,D^\pm ]=\pm D^\pm .$$ (22) The free Hamiltonian for the two-level system is then given by $$H_{\mathrm{atom}}=\mathrm{}\omega _bD^3.$$ (23) The Rydberg atoms are initially prepared in the lower state in the present detection scheme for cosmic axions. It is also expected that the probability of the excitation of an atom to the upper state by absorbing a photon is small enough. 
This is because the average number of photons in the cavity is much smaller than one at sufficiently low temperatures ($`\sim 10\mathrm{mK}`$). Therefore, the atoms almost remain in the lower state, providing a good approximation $$D^3\approx n|D^3|n=\frac{1}{2}$$ (24) in the first commutation relation of Eq. (22). Then, the two-level atomic system operators may be substituted by those of an oscillator as $$b\equiv D^{},b^{}\equiv D^+,$$ (25) which satisfy the commutation relation $$[b,b^{}]=1.$$ (26) The second commutation relation of Eq. (22) is also reproduced with $`D^3=\frac{1}{2}bb^{}`$ up to the higher order terms, and accordingly the free atomic Hamiltonian is expressed effectively as $$H_{\mathrm{atom}}=\mathrm{}\omega _bb^{}b,$$ (27) where the irrelevant constant $`\mathrm{}\omega _b/2`$ is discarded. This treatment of the Rydberg atom in terms of the quantum oscillator is valid if the probability in the upper state (or the lower state) is very small, i.e., $`b^{}b\ll 1`$, as is the case in the present scheme. ### D Characteristics of the photons, axions and atoms We have seen so far that the resonant photons, coherent cosmic axions and Rydberg atoms in the resonant cavity are described in terms of the appropriate quantum oscillators with dissipation due to the couplings to the relevant reservoirs. The characteristic properties of these quantum oscillators are summarized as follows. The thermal photon number $`\overline{n}_c`$ of the resonant mode is determined by the cavity temperature $`T_c`$ as $$\overline{n}_c=\left(\mathrm{e}^{\mathrm{}\omega _c/k_\mathrm{B}T_c}1\right)^{1}.$$ (28) The damping rate $`\gamma _c`$ of photons is described in terms of the quality factor $`Q`$ of the cavity as $$\gamma _c\equiv \gamma =5\times 10^{10}\mathrm{eV}\mathrm{}^{1}\left(\frac{\mathrm{}\omega _c}{10^5\mathrm{eV}}\right)\left(\frac{2\times 10^4}{Q}\right).$$ (29) The coherent axion mode is normalized in a box of the de Broglie wavelength, as seen in Eq. (19). 
The axion number is then estimated as $$\overline{n}_a=5.7\times 10^{25}\left(\frac{\rho _a}{0.3\mathrm{GeV}\mathrm{cm}^3}\right)\left(\frac{10^3}{\beta _a}\right)^3\left(\frac{10^5\mathrm{eV}}{m_a}\right)^4,$$ (30) where the energy density of the cosmic axions $`\rho _a`$ is taken to be equal to that of the galactic dark halo $`\rho _{\mathrm{halo}}\simeq 0.3\mathrm{GeV}\mathrm{cm}^3`$. The dissipation of the coherent axions is characterized by their energy spread as $$\gamma _a\simeq \beta _a^2m_a/\mathrm{}=0.02\gamma \left(\frac{\beta _a^2Q}{10^6\times 2\times 10^4}\right),$$ (31) where the resonant condition $`m_a\simeq \mathrm{}\omega _c`$ is considered. All the atoms are prepared in the lower state so that $$\overline{n}_b=0.$$ (32) The atomic dissipation is determined by its lifetime $`\tau _b`$ as $`\gamma _b=\tau _b^{1}`$. The lifetime of the Rydberg state is typically $`\tau _b\sim 10^3\mathrm{s}`$ for $`n\sim 100`$ in the vacuum . Since the transitions to the off-resonant states are highly suppressed in the resonant cavity, the atomic lifetime may even be longer. Hence, the damping rate of Rydberg atoms is estimated to be at most $$\gamma _b=6.6\times 10^{13}\mathrm{eV}\mathrm{}^{1}\left(\frac{10^3\mathrm{s}}{\tau _b}\right),$$ (33) which is actually much smaller than the photon damping rate $`\gamma `$. ## IV Interactions in the resonant cavity In this section, we provide the interaction Hamiltonians in terms of the quantum oscillators. They determine the quantum evolution of the axion-photon-atom system in the resonant cavity, which will be investigated in detail in the following sections. 
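The benchmark numbers in Eqs. (28)–(33) are easy to cross-check numerically. The following sketch (a Python illustration added here, not part of the original analysis) evaluates the Planck occupation of the resonant mode at the quoted cavity temperature and the coherent-axion occupation as the number of axions per de Broglie volume, using ℏc and k<sub>B</sub> in eV units:

```python
import math

kB_eV_per_K = 8.617e-5        # Boltzmann constant, eV / K
hbar_c_eV_nm = 197.327        # hbar * c, eV * nm

def thermal_photons(omega_eV, T_K):
    """Planck occupation of the resonant mode, Eq. (28)."""
    return 1.0 / (math.exp(omega_eV / (kB_eV_per_K * T_K)) - 1.0)

def axion_occupation(m_a_eV, beta_a=1e-3, rho_GeV_cm3=0.3):
    """Coherent-mode occupation, Eq. (30): halo axions per de Broglie volume."""
    lam_cm = 2 * math.pi * hbar_c_eV_nm / (beta_a * m_a_eV) * 1e-7  # nm -> cm
    n_density = rho_GeV_cm3 * 1e9 / m_a_eV                          # axions / cm^3
    return lam_cm ** 3 * n_density

print(thermal_photons(1e-5, 0.01))   # ~9e-6 photons at 10 mK
print(axion_occupation(1e-5))        # ~5.7e25, reproducing Eq. (30)
```

The de Broglie wavelength evaluated inside `axion_occupation` is about 124 m for the default parameters, consistent with the ~100 m scale quoted above.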
### A Axion-photon interaction The axion-photon interaction under a strong static magnetic field with flux density $`𝑩_0`$ is described by the Lagrangian density $$_a=\mathrm{}^{1/2}ϵ_0g_{a\gamma \gamma }\varphi 𝑬𝑩_0,$$ (34) where $`\mathrm{}^{1/2}`$ and $`ϵ_0`$ (dielectric constant) are explicitly factored out so that the Lagrangian density has the right dimension $`_a\mathrm{}\mathrm{s}^1\mathrm{m}^3\mathrm{eVm}^3`$ with the axion-photon-photon coupling constant $`g_{a\gamma \gamma }\mathrm{eV}^1`$, as given below. The axion-photon-photon coupling constant is calculated as $$g_{a\gamma \gamma }=c_{a\gamma \gamma }\frac{\alpha }{2\pi ^2}\frac{m_a}{f_\pi m_\pi }\frac{(1+Z)}{\sqrt{Z}},$$ (35) where $`Z=m_u/m_d`$, and $$c_{a\gamma \gamma }=\frac{E}{C}\frac{2(4+Z)}{3(1+Z)}$$ (36) with $$E=\mathrm{Tr}Q_{\mathrm{PQ}}Q_{\mathrm{em}}^2,C\delta _{ab}=\mathrm{Tr}Q_{\mathrm{PQ}}\lambda _a\lambda _b.$$ (37) The parameter $`c_{a\gamma \gamma }`$ represents the variation of the axion-photon-photon coupling depending on the respective Peccei-Quinn models such as the so-called KSVZ and DFSZ . The original Lagrangian density for the axion-photon-photon coupling in Eq. (34) provides the effective interaction Hamiltonian between the coherent axion mode $`a`$ and the resonant radiation mode $`c`$, $$H_{ac}=\mathrm{}\kappa (a^{}c+ac^{}).$$ (38) The axion-photon conversion in the cavity $`𝒱_1`$ is well described with this interaction Hamiltonian. The coupling constant $`\kappa `$ is determined for $`\omega _cm_a/\mathrm{}`$ by considering the relations $`|\eta _a|\varphi ^\pm |\eta _a|\mathrm{}(\rho _a/m_a^2)^{1/2}`$ from Eq. (18) and $`|\eta _a|a|\eta _a|=\overline{n}_a^{1/2}`$ from Eq. 
(19) in calculating $`\eta _a|\int _{𝒱_1}(_a)d^3x|\eta _a\simeq \eta _a|H_{ac}|\eta _a`$ : $`\kappa `$ $`=`$ $`\mathrm{}^{1/2}g_{a\gamma \gamma }ϵ_0^{1/2}B_{\mathrm{eff}}\left[\left({\displaystyle \frac{\beta _am_a}{2\pi \mathrm{}}}\right)^3{\displaystyle \frac{V_1}{2}}\right]^{1/2}`$ (39) $`=`$ $`4\times 10^{26}\mathrm{eV}\mathrm{}^{1}\left({\displaystyle \frac{g_{a\gamma \gamma }}{1.4\times 10^{15}\mathrm{GeV}^{1}}}\right)\left({\displaystyle \frac{B_{\mathrm{eff}}}{4\mathrm{T}}}\right)`$ (40) $`\times `$ $`\left({\displaystyle \frac{\beta _am_a}{10^3\times 10^5\mathrm{eV}}}\right)^{3/2}\left({\displaystyle \frac{V_1}{5000\mathrm{cm}^3}}\right)^{1/2},`$ (41) where $$B_{\mathrm{eff}}=\zeta _1GB_0,$$ (42) and $`B_0`$ is the maximal density of the external magnetic flux. The axion-photon-photon coupling constant $`g_{a\gamma \gamma }`$ is taken here to be the value expected from the DFSZ axion model at $`m_a=10^5\mathrm{eV}`$. This effective axion-photon coupling $`\kappa `$ apparently involves the $`\beta _a`$ dependence coming from the normalization given in Eq. (14). This $`\beta _a`$ dependence is, however, canceled with that of $`\overline{n}_a`$ given in Eq. (19) in calculating the signal rate, which is actually proportional to $`(\kappa /\gamma )^2\overline{n}_a`$. The form factor for the magnetic field is given by $$G=\zeta _1^{1}V_1^{1/2}\left|\int _{𝒱_1}d^3x𝜶_1(𝐱)[𝑩_0(𝐱)/B_0]\right|$$ (43) with $$\zeta _1=\left[\int _{𝒱_1}d^3x|𝜶_1(𝐱)|^2\right]^{1/2}.$$ (44) This additional factor $`\zeta _1`$ ($`<1`$ as seen from Eq. (6)) represents the effective reduction of the axion-photon conversion, which is due to the fact that the magnetic field is applied only in the conversion cavity $`𝒱_1`$. We may obtain the effective magnetic field strength $`B_{\mathrm{eff}}\simeq 4\mathrm{T}`$, as taken in Eq. (41), by using typically a magnet of $`B_0\simeq 7\mathrm{T}`$ and the cavity system with $`G=\sqrt{0.7}`$ of $`\mathrm{TM}_{010}`$ mode and $`\zeta _1\simeq 0.7`$ for the conversion cavity. 
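Since the signal rate is proportional to $`(\kappa /\gamma )^2\overline{n}_a`$, it is useful to have the size of $`\kappa /\gamma `$ at hand. The short sketch below (an illustration, not from the paper) reproduces only the scaling quoted in Eq. (41), anchored to the reference value 4×10⁻²⁶ eV stated there, and compares it to the cavity damping of Eq. (29):

```python
# Scaling of the effective axion-photon coupling, Eq. (41). The prefactor
# 4e-26 eV at the default (DFSZ, 4 T, beta*m = 1e-8 eV, 5000 cm^3) parameters
# is taken from the text; this function only propagates the quoted scaling.
def kappa_eV(g_GeV=1.4e-15, B_T=4.0, beta_m_eV=1e-8, V1_cm3=5000.0):
    return (4e-26 * (g_GeV / 1.4e-15) * (B_T / 4.0)
            * (beta_m_eV / 1e-8) ** 1.5 * (V1_cm3 / 5000.0) ** 0.5)

gamma_eV = 5e-10                 # cavity damping, Eq. (29)
kappa = kappa_eV()
print(kappa / gamma_eV)          # ~8e-17: the conversion is extremely weak
```

The tiny ratio κ/γ is compensated in the signal by the enormous occupation n̄ₐ of the coherent axion mode, as discussed around Eq. (19).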
It should here be remarked that the possible interaction term $`\mathrm{}\kappa (ac+a^{}c^{})`$ is not included in Eq. (38). This is justified as follows. The characteristic time scale for the evolution of the system considered in the present scheme is set by the lifetime of photons $`\tau _\gamma =\gamma ^{1}`$ in the cavity. Hence, although the interaction term $`\mathrm{}\kappa (ac+a^{}c^{})`$ representing the processes with energy change of $`\mathrm{}\omega _c+m_a`$ is also obtained from the original axion-photon-photon coupling $`_a`$, its effects are suppressed sufficiently for $`Q\gg 1`$ by the energy conservation in this time scale $`\tau _\gamma `$. ### B Atom-photon interaction We next consider the interaction between the Rydberg atoms and the resonant photons. The Rydberg atoms are utilized for counting photons in the cavity. They are excited by absorbing the photons through the electric dipole transition with frequency $`\omega _b`$. The emission and absorption of a photon by the two-level atomic system is described by the interaction Hamiltonian $`H_\mathrm{\Omega }`$ $`=`$ $`\mathrm{}\mathrm{\Omega }[D^+c+D^{}c^{}]`$ (45) $`\simeq `$ $`\mathrm{}\mathrm{\Omega }[b^{}c+bc^{}].`$ (46) Here, we may assume for simplicity, though not essential, that the mode vector field is described in terms of a uniform complex polarization vector $`\mathit{ϵ}_2`$ ($`|\mathit{ϵ}_2|=1`$) and a real profile function $`f_2(𝐱)`$ as $$𝜶_2(𝐱)=\mathit{ϵ}_2\overline{V}_2^{1/2}f_2(𝐱)$$ (47) with $`\overline{V}_2\equiv |𝜶_2(𝐱)|_{\mathrm{max}}^2`$. The profile function is normalized for convenience by the condition $`|f_2(𝐱)|_{\mathrm{max}}=1`$ at the antinodal positions. 
Then, the intrinsic atom-photon coupling constant $`\mathrm{\Omega }`$ is evaluated at the antinodal position in terms of the electric dipole transition matrix element $`d=|\mathit{ϵ}_2\cdot 𝒅|`$ projected to the direction of polarization: $$\mathrm{\Omega }=\frac{d}{\mathrm{}}\left(\frac{\mathrm{}\omega _c}{2ϵ_0\overline{V}_2}\right)^{1/2},$$ (48) where $`\overline{V}_2\simeq V_2`$ is expected in accordance with the electric field normalization condition (6). The atom-photon coupling at an arbitrary atomic position in the cavity is also given with the profile function by $$\mathrm{\Omega }(𝐱)=\mathrm{\Omega }f_2(𝐱).$$ (49) In the actual experimental system, the Rydberg atoms are injected as a uniform beam. Then, a certain number $`N`$ of Rydberg atoms is constantly present in the cavity. Such an ensemble of identical atoms behaves as a collective system in the interaction with the resonant radiation mode. The novelty of Rydberg atom physics is to provide situations where this collective behavior appears for a relatively small number of atoms, typically $`N\sim 10^3`$–$`10^6`$ . Suppose for simplicity that the $`N`$ atoms are at the antinodal position ($`\mathrm{\Omega }(𝐱)=\mathrm{\Omega }`$) in the detection cavity. Then, the effective coupling between the radiation mode and the $`N`$ atoms is actually given by $$H_{bc}=\underset{i=1}{\overset{N}{\sum }}H_\mathrm{\Omega }(b_i,c)=\mathrm{}\mathrm{\Omega }_N(b^{}c+bc^{}),$$ (50) where the collective atomic mode operator is defined by $$b\equiv \frac{1}{\sqrt{N}}\underset{i=1}{\overset{N}{\sum }}b_i$$ (51) satisfying the commutation relation $`[b,b^{}]=1`$. (Hereafter we may use without confusion the operators $`b`$ and $`b^{}`$ for the collective atomic mode rather than the single atomic mode.) 
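The √N enhancement of the collective coupling is what brings the atom-photon interaction up to the scale of the cavity damping. A few lines of Python (an added illustration using the representative values quoted in Eqs. (29) and (52)) make the comparison explicit:

```python
import math

hbar_eVs = 6.582e-16       # hbar, eV * s
Omega = 5e3                # single-atom coupling at an antinode, s^-1 (Eq. (52))
N = 1e3                    # Rydberg atoms simultaneously in the cavity

Omega_N = hbar_eVs * Omega * math.sqrt(N)   # collective coupling, in eV
gamma = 5e-10                               # cavity damping, eV (Eq. (29))
print(Omega_N, Omega_N / gamma)             # ~1e-10 eV, i.e. Omega_N ~ 0.2 gamma
```

So already for N ~ 10³ atoms the collective coupling is a sizable fraction of γ, as stated after Eq. (52).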
The collective atom-photon coupling $`\mathrm{\Omega }_N=\mathrm{\Omega }\sqrt{N}`$ is estimated typically as $$\mathrm{\Omega }_N=1\times 10^{10}\mathrm{eV}\mathrm{}^{1}\left(\frac{\mathrm{\Omega }}{5\times 10^3\mathrm{s}^{1}}\right)\left(\frac{N}{10^3}\right)^{1/2}.$$ (52) Hence, if the number of Rydberg atoms is as large as $`N\gtrsim 10^3`$, this collective coupling can be comparable to the cavity damping rate $`\gamma `$ as given in Eq. (29). When the atomic motion and distribution in the cavity are taken into account, the atom-photon coupling should be modified suitably. This point will be treated in Sec. VII. ## V Quantum evolution of the system We here formulate the quantum dynamics of interacting oscillators with dissipation. A systematic procedure is developed in the Liouville picture to calculate the quantum averages of the particle numbers and higher-order correlations. In practice, a series of master equations is derived for such quantities from the Fokker-Planck equation based on the Liouville picture. (The quantum evolution of the system can also be described in the Langevin picture, which is considered in Appendix B.) By applying this formulation, the quantum evolution of the axion-photon-atom system in the resonant cavity is determined with the effective interaction Hamiltonians presented in the preceding section. Then, we can calculate, in particular, the number of the atoms in the upper state which are excited by absorbing the axion-converted and thermal photons. 
### A Evolution in Liouville picture The reduced density matrix of the damped oscillators $`q_i`$ (such as $`a,b,c`$) interacting with each other obeys the Liouville equation , $$\frac{d\rho }{dt}=\frac{1}{i\mathrm{}}[H,\rho ]+\mathrm{\Lambda }\rho .$$ (53) The Liouvillian relaxations are represented by the operator $`\mathrm{\Lambda }\rho `$, which may explicitly be given by $`\mathrm{\Lambda }\rho `$ $`=`$ $`{\displaystyle \underset{i}{\sum }}{\displaystyle \frac{\gamma _i}{2}}\left[2q_i\rho q_i^{}q_i^{}q_i\rho \rho q_i^{}q_i\right]`$ (54) $`+`$ $`{\displaystyle \underset{i}{\sum }}\gamma _i\overline{n}_i\left[q_i^{}\rho q_i+q_i\rho q_i^{}q_i^{}q_i\rho \rho q_iq_i^{}\right],`$ (55) where $`\gamma _i`$ and $`\overline{n}_i`$ are the damping rates and equilibrium occupation numbers, respectively. The total Hamiltonian is given by $$H=\underset{i}{\sum }\mathrm{}\omega _iq_i^{}q_i+\underset{ij}{\sum }\mathrm{}\mathrm{\Omega }_{ij}(t)q_i^{}q_j,$$ (56) where $`\mathrm{\Omega }_{ij}(t)=\mathrm{\Omega }_{ji}^{}(t)`$ represent the interaction terms, which may be time-dependent, as is the case for the interaction between photons and moving atoms in the cavity (see Eq. (64)). The solution of the Liouville equation is generally obtained in terms of the creation and annihilation operators as $$\rho =\rho (q_i,q_i^{},t).$$ (57) The quantum average of a physical quantity represented by a relevant operator $`𝒪`$ is evaluated with this density matrix by the formula $$\overline{𝒪}(t)=𝒪\equiv \mathrm{Tr}[𝒪(q_i,q_i^{},t)\rho (q_i,q_i^{},t)].$$ (58) In practical calculations, it may be useful to take the coherent state basis . 
Then, the density operator $`\rho `$ is represented by the "classical" time-dependent distribution function $`P(\alpha _i,\alpha _i^{},t)`$, and the quantum average is calculated by $$\overline{𝒪}(t)=\int \underset{i}{\prod }d^2\alpha _i𝒪_\mathrm{n}(\alpha _i,\alpha _i^{},t)P(\alpha _i,\alpha _i^{},t).$$ (59) Here the subscript "n" means the expression of the operator $`𝒪`$ when it is written in the normal ordered form. For example, $`𝒪_\mathrm{n}(\alpha ,\alpha ^{})=\alpha ^{}\alpha +1`$ for $`𝒪(q,q^{})=qq^{}`$. The Liouville equation is expressed in the coherent state basis as the Fokker-Planck equation to determine the distribution function $`P`$: $`{\displaystyle \frac{\partial P}{\partial t}}`$ $`=`$ $`i_{ij}(t){\displaystyle \frac{\partial }{\partial \alpha _i}}(\alpha _jP)i_{ij}^{}(t){\displaystyle \frac{\partial }{\partial \alpha _i^{}}}(\alpha _j^{}P)`$ (60) $`+`$ $`𝒟_{ij}{\displaystyle \frac{\partial ^2P}{\partial \alpha _i\partial \alpha _j^{}}}.`$ (61) Here, the effective Hamiltonian and diffusion term for the damped oscillators are represented by the matrices $`_{ij}(t)`$ $`=`$ $`\left(\omega _i\frac{i}{2}\gamma _i\right)\delta _{ij}+\mathrm{\Omega }_{ij}(t)(1\delta _{ij}),`$ (62) $`𝒟_{ij}`$ $`=`$ $`\gamma _i\overline{n}_i\delta _{ij}.`$ (63) They are given explicitly for the axion-photon-atom system under consideration by $$(t)=\left(\begin{array}{ccc}\omega _b\frac{i}{2}\gamma _b& \mathrm{\Omega }(t)& 0\\ \mathrm{\Omega }(t)& \omega _c\frac{i}{2}\gamma _c& \kappa \\ 0& \kappa & \omega _a\frac{i}{2}\gamma _a\end{array}\right)$$ (64) with $`\omega _a=m_a/\mathrm{}`$, and $$𝒟=\left(\begin{array}{ccc}\gamma _b\overline{n}_b& 0& 0\\ 0& \gamma _c\overline{n}_c& 0\\ 0& 0& \gamma _a\overline{n}_a\end{array}\right).$$ (65) The collective atom-photon coupling becomes time-dependent through the atomic motion along the $`x`$ axis with velocity $`v`$ as $$\mathrm{\Omega }(t)=\mathrm{\Omega }_Nf(vt).$$ (66) Here, it is assumed for simplicity that the $`N`$ atoms are injected together as a pulsed beam. 
This time-dependence is determined by the atomic velocity $`v`$ and the electric field profile $`f(x)\equiv f_2(x,y_0,z_0)`$ of the detection cavity along the atomic beam which is injected in the $`x`$ direction from the point $`(0,y_0,z_0)`$ in the $`y`$-$`z`$ plane. ### B Master equations We may be interested in the self-adjoint multiple moments of the oscillators, $`𝒩_{i_1\mathrm{}i_pj_1\mathrm{}j_p}^{(2p)}(t)\equiv q_{i_1}^{}\mathrm{}q_{i_p}^{}q_{j_1}\mathrm{}q_{j_p}`$ (67) $`={\displaystyle \int \underset{i}{\prod }d^2\alpha _i\alpha _{i_1}^{}\mathrm{}\alpha _{i_p}^{}\alpha _{j_1}\mathrm{}\alpha _{j_p}P(\alpha _i,\alpha _i^{},t)},`$ (68) where $`2p`$ denotes the number of involved operators. A series of master equations is then obtained for these moments by considering the Fokker-Planck equation for the distribution function: $$\frac{d𝒩^{(2p)}}{dt}=i^{(2p)}(t)𝒩^{(2p)}+𝒟^{(2p2)}𝒩^{(2p2)}.$$ (69) Here the linear operators $`^{(2p)}`$ and $`𝒟^{(2p2)}`$ acting on the moments are defined by $`(^{(2p)}(t)𝒩^{(2p)})_{IJ}`$ $`\equiv `$ $`{\displaystyle \underset{k}{\sum }}𝒩_{i_1\mathrm{}i_pj_1\mathrm{}j_k^{}\mathrm{}j_p}^{(2p)}_{j_k^{}j_k}^\mathrm{T}(t)`$ (70) $``$ $`{\displaystyle \underset{k}{\sum }}_{i_ki_k^{}}^{}(t)𝒩_{i_1\mathrm{}i_k^{}\mathrm{}i_pj_1\mathrm{}j_p}^{(2p)},`$ (71) $`(𝒟^{(2p2)}𝒩^{(2p2)})_{IJ}`$ $`\equiv `$ $`{\displaystyle \underset{k,l}{\sum }}𝒟_{i_kj_l}𝒩_{I_k^{}J_l^{}}^{(2p2)}`$ (72) with abbreviation of the indices $`I\equiv i_1\mathrm{}i_p`$, $`J\equiv j_1\mathrm{}j_p`$, $`I_k^{}\equiv i_1\mathrm{}i_{k1}i_{k+1}\mathrm{}i_p`$ and $`J_l^{}\equiv j_1\mathrm{}j_{l1}j_{l+1}\mathrm{}j_p`$. We examine, in particular, the second-order moment $`𝒩_{ij}(t)\equiv 𝒩_{ij}^{(2)}(t)`$. The number of the atoms which are excited by absorbing the photons is given by the diagonal component of $`𝒩(t)`$ as $$n_b(t)=b^{}b=𝒩_{bb}(t).$$ (73) The master equation for $`𝒩(t)`$ reads $$\frac{d𝒩}{dt}=i𝒩^\mathrm{T}(t)+i^{}(t)𝒩+𝒟.$$ (74) This linear equation with an inhomogeneous term may be solved formally as follows. 
We first introduce a new matrix $`𝒩^{}(t)`$ instead of $`𝒩(t)`$ by $$𝒩^{}(t)=𝒰^{1}(t)𝒩(t)𝒰^{1\mathrm{T}}(t).$$ (75) The linear transformation $`𝒰(t)`$ representing the time evolution due to $`(t)`$ is given by $$𝒰(t)=\mathrm{P}\left[\mathrm{exp}\left(i\int _0^t(\tau )d\tau \right)\right]$$ (76) satisfying the equation $`d𝒰/dt=i(t)𝒰`$ with $`𝒰(0)=\mathrm{𝟏}`$, where "P" denotes the chronological product. Then, the master equation (74) is reduced as $$\frac{d𝒩^{}}{dt}=𝒰^{1}(t)𝒟𝒰^{1\mathrm{T}}(t).$$ (77) By considering the initial condition $`𝒩^{}(0)=𝒩(0)`$ with $`𝒰(0)=\mathrm{𝟏}`$ and Eq. (63) for $`𝒟`$, we can solve the above equation as $$𝒩_{ij}^{}(t)=_{ij}^k(t)\overline{n}_k+𝒩_{ij}(0),$$ (78) where $$_{ij}^k(t)=\int _0^t\gamma _k𝒰_{ik}^{1}(\tau )𝒰_{kj}^{1\mathrm{T}}(\tau )d\tau .$$ (79) Then, we obtain the solution as $$𝒩_{ij}(t)=_{ij}^k(t)\overline{n}_k+𝒰_{ik}^{}(t)𝒩_{kl}(0)𝒰_{lj}^\mathrm{T}(t),$$ (80) where $$_{ij}^k(t)=\left[𝒰^{}(t)^k(t)𝒰^\mathrm{T}(t)\right]_{ij}.$$ (81) The same result is also obtained in the Langevin picture (see Appendix B). This solution approaches a certain asymptotic value after a long enough time as $$𝒩_{ij}(t)_{ij}^k(t)\overline{n}_k(t\gg \mathrm{max}[\gamma _i^{1}]).$$ (82) In practical calculations, the master equation (74) will be solved numerically with the time-dependent effective Hamiltonian $`(t)`$ as given in Eq. (64). ## VI Aspects of axion-photon-atom interaction We can see some characteristic properties of the axion-photon-atom interaction in the resonant cavity by examining the simple case with the constant atom-photon coupling $`\mathrm{\Omega }(t)=\mathrm{\Omega }_N`$ for the effective Hamiltonian $`(t)=`$ in Eq. (64). 
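Such a numerical solution of the second-moment equation is straightforward. The sketch below (a Python/NumPy illustration with assumed parameters in units of γ, not from the paper) integrates dN/dt = −iNH^T + iH*N + D by fourth-order Runge-Kutta for the atom-photon block alone (axion coupling switched off), and verifies that both occupations relax to the thermal photon number, in accordance with the thermalization property noted later around Eqs. (94)-(95):

```python
import numpy as np

# Second-moment master equation for the atom-photon block of Eq. (64),
# in the rotating frame (the common real frequency drops out, so omega = 0).
gamma, gamma_b = 1.0, 0.0     # photon and (neglected) atom damping
Omega_N = 0.5                 # collective atom-photon coupling, strong coupling
nbar_c = 2.0                  # thermal photon number (assumed for illustration)

H = np.array([[0.0 - 0.5j * gamma_b, Omega_N],
              [Omega_N, 0.0 - 0.5j * gamma]])
D = np.diag([0.0, gamma * nbar_c])

def rhs(N):
    # dN/dt = -i N H^T + i H* N + D
    return -1j * N @ H.T + 1j * H.conj() @ N + D

N = np.zeros((2, 2), dtype=complex)
dt = 0.01
for _ in range(5000):         # evolve to t = 50 / gamma (fully relaxed)
    k1 = rhs(N)
    k2 = rhs(N + 0.5 * dt * k1)
    k3 = rhs(N + 0.5 * dt * k2)
    k4 = rhs(N + dt * k3)
    N += (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

n_b, n_c = N[0, 0].real, N[1, 1].real
print(n_b, n_c)               # both relax to nbar_c
```

With γ_b = 0 one can check analytically that N = n̄_c 𝟏 is the exact fixed point of this block, which the integration reproduces from N(0) = 0.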
In this case, as derived in the Appendix A, the analytic solution is obtained for the particle numbers (with the condition $`\overline{n}_b=0`$) as $$n_i(t)=q_i^{}q_i=r_{ic}(t)\overline{n}_c+r_{ia}(t)\overline{n}_a,$$ (83) where $$r_{ij}(t)=\underset{m,n}{\sum }g_{ij}^mg_{ij}^n\left[\left(1\frac{\gamma _j}{\mathrm{\Lambda }_{mn}}\right)\mathrm{e}^{\mathrm{\Lambda }_{mn}t}+\frac{\gamma _j}{\mathrm{\Lambda }_{mn}}\right]$$ (84) with $`g_{ij}^k`$ $`=`$ $`\underset{s\to i\lambda _k}{\mathrm{lim}}(s+i\lambda _k)\left[(s\mathrm{𝟏}+i)^{1}\right]_{ij},`$ (85) $`\mathrm{\Lambda }_{mn}`$ $`=`$ $`i(\lambda _m^{}\lambda _n).`$ (86) Here, the atomic damping rate $`\gamma _b`$ may be neglected for simplicity, since it is sufficiently smaller than $`\gamma _a`$ and $`\mathrm{\Omega }_N`$ ($`\gamma _b\simeq 0.001\gamma `$ with $`\tau _b\sim 10^3\mathrm{s}`$ for $`\omega _c\sim 10^5\mathrm{eV}`$ and $`Q\sim 10^4`$). The condition $`\omega _b=\omega _c`$ may also be taken for definiteness, since the atomic transition frequency should be tuned almost equal to the cavity frequency. Then, the eigenvalues of the Hamiltonian $``$ are given by $`\lambda _1`$ $`=`$ $`\omega _c{\displaystyle \frac{i}{4}}\gamma +i{\displaystyle \frac{(\gamma ^216\mathrm{\Omega }_N^2)^{1/2}}{4}},`$ (87) $`\lambda _2`$ $`=`$ $`\omega _c{\displaystyle \frac{i}{4}}\gamma i{\displaystyle \frac{(\gamma ^216\mathrm{\Omega }_N^2)^{1/2}}{4}},`$ (88) $`\lambda _3`$ $`=`$ $`\omega _a{\displaystyle \frac{i}{2}}\gamma _a,`$ (89) where $$(\gamma ^216\mathrm{\Omega }_N^2)^{1/2}=\{\begin{array}{cc}\sqrt{\gamma ^216\mathrm{\Omega }_N^2}\hfill & (\mathrm{\Omega }_N/\gamma \le 1/4)\hfill \\ i\sqrt{16\mathrm{\Omega }_N^2\gamma ^2}\hfill & (\mathrm{\Omega }_N/\gamma >1/4)\hfill \end{array}.$$ (90) If the number of atoms $`N`$ (or the atomic beam intensity $`I_{\mathrm{Ryd}}`$) is not so large, giving $`\mathrm{\Omega }_N/\gamma <1/4`$, the damping rate of the eigenmode of $`\lambda _1`$ is smaller than $`\gamma /2`$, and that of $`\lambda _2`$ lies between $`\gamma /2`$ and $`\gamma `$. 
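The closed-form eigenvalues (87)-(90) are easily checked against a direct diagonalization of the atom-photon block of the effective Hamiltonian (64). The following sketch (an added NumPy illustration, with ω_b = ω_c set to zero and γ_b neglected, in units of γ) covers both the weak- and strong-coupling regimes:

```python
import numpy as np

gamma = 1.0
for Omega_N in (0.1, 1.0):                 # below and above the threshold 1/4
    H = np.array([[0.0, Omega_N],
                  [Omega_N, -0.5j * gamma]])
    lam = np.linalg.eigvals(H)             # numerical eigenvalues

    # closed form: lambda = -i gamma/4 +/- (i/4) (gamma^2 - 16 Omega_N^2)^{1/2}
    s = (gamma ** 2 - 16 * Omega_N ** 2 + 0j) ** 0.5
    lam_closed = np.array([-0.25j * gamma + 0.25j * s,
                           -0.25j * gamma - 0.25j * s])
    assert np.allclose(np.sort_complex(lam), np.sort_complex(lam_closed))
```

For Omega_N = 1.0 the two eigenvalues come out as a real doublet at ±(16Ω_N²−γ²)^{1/2}/4 with a common imaginary part −γ/4 (i.e. occupation damping γ/2), the Rabi splitting discussed next; for Omega_N = 0.1 they are purely imaginary with unequal damping rates.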
On the other hand, in the strong coupling region of $`\mathrm{\Omega }_N/\gamma >1/4`$ for the collective atom-photon interaction the eigenmodes of $`\lambda _1`$ and $`\lambda _2`$ form a doublet around the frequency $`\omega _c`$ with the same damping rate $`\gamma /2`$ (Rabi splitting). Among the damping rates $`\mathrm{Re}[\mathrm{\Lambda }_{mn}]`$ of the respective terms in the factors $`r_{ij}(t)`$ representing the contributions of thermal photons and axions, $`\mathrm{Re}[\mathrm{\Lambda }_{33}]=\gamma _a`$ ($`\simeq 0.01\gamma `$ for $`\beta _a\sim 10^3`$ and $`Q\sim 10^4`$, typically) is the smallest one. The rate $`\mathrm{Re}[\mathrm{\Lambda }_{11}]\simeq 4\gamma (\mathrm{\Omega }_N/\gamma )^2`$ for $`\mathrm{\Omega }_N/\gamma \lesssim 0.1`$ may also be comparable to the smallest rate $`\mathrm{Re}[\mathrm{\Lambda }_{33}]`$. The atomic transit time through the cavity is, on the other hand, given by $$t_{\mathrm{tr}}=L/v$$ (91) with the detection cavity length $`L`$ and the atomic velocity $`v`$. This transit time provides the effective cut-off for the axion-photon-atom interaction in the cavity. (Here we assume for simplicity that the atoms have the uniform velocity.) It is typically $`t_{\mathrm{tr}}\simeq 400\tau _\gamma `$ with $`L=0.2\mathrm{m}`$ and $`v=350\mathrm{m}\mathrm{s}^{1}`$ for $`m_a=10^5\mathrm{eV}`$ and $`Q=2\times 10^4`$ in the case of the detection apparatus such as CARRACK I. Hence, the transit time can be regarded to be long enough compared to $`(\mathrm{Re}[\mathrm{\Lambda }_{mn}])^{1}`$, i.e., $$t_{\mathrm{tr}}>\mathrm{several}\times \gamma _a^{1},$$ (92) so that the respective particle numbers will almost reach the asymptotic values as $$r_{ij}(t_{\mathrm{tr}})\simeq r_{ij}(\mathrm{})=\underset{m,n}{\sum }g_{ij}^mg_{ij}^n\frac{\gamma _j}{\mathrm{\Lambda }_{mn}}.$$ (93) By using Eqs. (87) – (89) and the explicit matrix form (A21) for $`(s\mathrm{𝟏}+i)^{1}`$ given in Appendix A, we can calculate the coefficients $`g_{ij}^k`$ in Eq. (85). 
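The quoted ratio between transit time and photon lifetime follows directly from the stated parameters; a short check (an added illustration using the CARRACK I-like numbers in the text):

```python
hbar_eVs = 6.582e-16          # hbar, eV * s
m_a = 1e-5                    # axion mass ~ hbar * omega_c, eV
Q = 2e4
gamma = m_a / Q / hbar_eVs    # photon damping rate, s^-1 (cf. Eq. (29))
tau_gamma = 1.0 / gamma       # photon lifetime in the cavity, s

t_tr = 0.2 / 350.0            # transit time L / v, Eq. (91)
print(t_tr / tau_gamma)       # ~4e2 photon lifetimes per transit
```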
Then, we can show the relations $$r_{bc}(\mathrm{})=r_{cc}(\mathrm{})=1.$$ (94) This implies that if the axion-photon interaction is turned off, the numbers of the photons and excited atoms reach the same asymptotic value $`\overline{n}_c`$ of the thermal photon number at $`T_c`$ : $$n_b[c\to b]\simeq n_c[c\to c]\simeq \overline{n}_c.$$ (95) The number of axion-converted photons is, on the other hand, given approximately by $$n_c[a\to c]\simeq r_{ca}(\mathrm{})\overline{n}_a.$$ (96) The number of excited atoms due to the axion-converted photons is also given by $$n_b[a\to c\to b]\simeq r_{ba}(\mathrm{})\overline{n}_a.$$ (97) These contributions from the axions are essentially determined by the factors $$g_{ca}^m,g_{ba}^m\propto \frac{\kappa }{(\lambda _m\lambda _k)(\lambda _m\lambda _l)},$$ (98) where $`m\ne k,l`$ and $`k\ne l`$. The detuning of the axion mass from the cavity frequency is given by $$\mathrm{\Delta }\omega _a\equiv \omega _a\omega _c.$$ (99) Then, the above factors are enhanced if the axion detuning $`\mathrm{\Delta }\omega _a`$ lies in the range where the resonant conditions $`\lambda _3\approx \lambda _1`$ and/or $`\lambda _3\approx \lambda _2`$ are satisfied. In the leading order of the axion-photon coupling $`\kappa `$ with Eq. (98), the factors for the axion contributions are given with $`r_{ca}(\mathrm{}),r_{ba}(\mathrm{})\propto (\kappa /\gamma )^2`$. Then, by noting the relation $`(\kappa /\gamma )^2\overline{n}_a\propto (\rho _a/m_a)V_1`$ from Eqs. (19) and (41) it is found that the axion-converted photons $`n_c[a\to c]`$ and the number of atoms $`n_b[a\to c\to b]`$ excited by such photons are both proportional to the number of axions contained in the conversion cavity. 
It is here relevant to define the form factors for the axion contributions with respect to the axion detuning ($`t_{\mathrm{tr}}>\mathrm{several}\times \gamma _a^{1}\gg \gamma ^{1}`$) by $$\sigma _{ia}(\mathrm{\Delta }\omega _a)=\frac{r_{ia}(t_{\mathrm{tr}})}{[\kappa /(\gamma /2)]^2}\simeq \frac{r_{ia}(\mathrm{})}{[\kappa /(\gamma /2)]^2}(i=b,c).$$ (100) These form factors $`\sigma _{ba}(\mathrm{\Delta }\omega _a)`$ for $`a\to c\to b`$ (solid lines) and $`\sigma _{ca}(\mathrm{\Delta }\omega _a)`$ for $`a\to c`$ (dotted lines) are plotted together in Fig. 2 for some typical values of the atom-photon coupling, $`\mathrm{\Omega }_N/\gamma =0.1,0.5,1.0`$. The behavior of $`\sigma _{ba}(\mathrm{\Delta }\omega _a)`$ is, in particular, understood by noting its dominant contribution which is in fact given by the term of $`|g_{ba}^{m=3}|^2`$ ($`\gamma _a\ll \gamma `$) as $$\sigma _{ba}(\mathrm{\Delta }\omega _a)\simeq \frac{4\mathrm{\Omega }_N^2\gamma ^2}{4\mathrm{\Delta }\omega _a^2\gamma (\gamma 4\gamma _a)+(4\mathrm{\Delta }\omega _a^24\mathrm{\Omega }_N^2+\gamma _a\gamma )^2}.$$ (101) The peak of this form factor appears at $$\mathrm{\Delta }\omega _a(\mathrm{peak})=\{\begin{array}{cc}0\hfill & (\mathrm{\Omega }_N\le \overline{\mathrm{\Omega }})\hfill \\ \pm \sqrt{\mathrm{\Omega }_N^2\overline{\mathrm{\Omega }}^2}\hfill & (\mathrm{\Omega }_N>\overline{\mathrm{\Omega }})\hfill \end{array},$$ (102) where $`\overline{\mathrm{\Omega }}=\gamma /\sqrt{8}+O(\gamma _a)`$. (Although $`\sigma _{ba}(0)`$ is apparently divergent for $`\mathrm{\Omega }_N=\frac{1}{2}(\gamma _a\gamma )^{1/2}`$ in the approximate formula of Eq. (101), the other contributions also become significant around $`\mathrm{\Delta }\omega _a=0`$ so as to give a finite value of $`\sigma _{ba}(0)`$.) It is found that for $`\mathrm{\Omega }_N\gtrsim \gamma /4`$ the peak value of the axion signal decreases due to the Rabi splitting, approaching $`\sigma _{ba}(\mathrm{\Delta }\omega _a(\mathrm{peak}))\simeq 1`$ for $`(\mathrm{\Omega }_N/\gamma )^2\gg 1`$. 
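The peak position (102) can be confirmed by scanning the Lorentzian-like form of Eq. (101) directly. The sketch below (an added Python illustration; the relative signs in the denominator are those fixed by the quoted divergence condition and peak formula) uses units of γ and a strong-coupling value of Ω_N:

```python
import math

gamma, gamma_a = 1.0, 0.02        # cavity and axion damping, in units of gamma

def sigma_ba(dw, Omega_N):
    """Dominant a -> c -> b form factor, Eq. (101)."""
    num = 4 * Omega_N ** 2 * gamma ** 2
    den = (4 * dw ** 2 * gamma * (gamma - 4 * gamma_a)
           + (4 * dw ** 2 - 4 * Omega_N ** 2 + gamma_a * gamma) ** 2)
    return num / den

Omega_N = 0.6                                  # > gamma / sqrt(8): split peaks
grid = [i * 1e-4 for i in range(10001)]        # scan detuning 0 .. gamma
dw_peak = max(grid, key=lambda dw: sigma_ba(dw, Omega_N))

# Eq. (102): peak at +/- sqrt(Omega_N^2 - Obar^2), Obar = gamma/sqrt(8) + O(gamma_a)
Obar2 = gamma ** 2 / 8 - gamma * gamma_a / 4
print(dw_peak, math.sqrt(Omega_N ** 2 - Obar2))   # both ~0.49
```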
On the other hand, for the appropriate atom-photon coupling such as $$\mathrm{\Omega }_N\sim (\gamma _a\gamma )^{1/2}$$ (103) the signal is enhanced significantly, as observed in Fig. 2 ($`\mathrm{\Omega }_N=0.1\gamma `$ and $`\gamma _a=0.02\gamma `$), by virtue of the narrow width (long coherence) of the galactic axions as $$\sigma _{ba}(\mathrm{\Delta }\omega _a)\sim (\gamma _a/\gamma )^{1},$$ (104) when the axion detuning becomes small enough to satisfy the condition $$|\mathrm{\Delta }\omega _a|\lesssim \gamma _a.$$ (105) Here, the energy uncertainty $`\mathrm{}/t_{\mathrm{tr}}`$ due to the atomic motion is assumed to be smaller than the axion width $`\mathrm{}\gamma _a`$. The form factor $`\sigma _{ca}(\mathrm{\Delta }\omega _a)`$ determines the equilibrium number $`r_{ca}(\mathrm{})\overline{n}_a=4\sigma _{ca}(\mathrm{\Delta }\omega _a)(\kappa /\gamma )^2\overline{n}_a`$ of the axion-converted photons in the cavity. As seen in Fig. 2, its peak value is $`\sigma _{ca}\simeq 1`$ almost independently of the atom-photon coupling $`\mathrm{\Omega }_N`$. It is here, in particular, interesting to observe a narrow dip in $`\sigma _{ca}(\mathrm{\Delta }\omega _a)`$ around $`\mathrm{\Delta }\omega _a=0`$ for $`\mathrm{\Omega }_N/\gamma =0.1`$. This indicates that the axion-converted photons are efficiently absorbed by the atoms for $`\mathrm{\Omega }_N/\gamma \sim (\gamma _a/\gamma )^{1/2}`$. For larger $`\mathrm{\Omega }_N`$, two separate peaks appear in $`\sigma _{ca}(\mathrm{\Delta }\omega _a)`$ due to the Rabi splitting. Some characteristic features concerning the axion-photon-atom interaction in the resonant cavity have been discussed so far by making the analytic calculations for the simple case with the constant atom-photon coupling $`\mathrm{\Omega }_N`$. They will indeed be confirmed in Sec. IX by performing detailed numerical calculations for the realistic situation with the continuous atomic beam passing through the spatially varying electric field. 
## VII Detection sensitivity with continuous atomic beam In order to make precise estimates for the sensitivity of the Rydberg atom cavity detector, we have to take into account (i) the motion and (ii) the almost uniform distribution of the atoms in the incident beam as well as (iii) the spatial variation of the electric field in the cavity. We here elaborate the calculations presented in the preceding sections by treating these points appropriately. The electric field felt by the atoms varies with time through the atomic motion. Accordingly, the effect of atomic motion can be incorporated by introducing the relevant time-dependence for the atom-photon coupling, which is determined by the profile of the electric field in the cavity, as given in Eq. (66). On the other hand, in order to treat the spatial distribution of the atoms, we divide the atoms in the cavity into $`K`$ bunches with a fixed beam intensity $$I_{\mathrm{Ryd}}=N/t_{\mathrm{tr}}.$$ (106) Here, $`N`$ is the total number of Rydberg atoms in the cavity. Then, the continuous atomic beam will actually be realized for $`K\gg 1`$. 
The collective atomic mode of each bunch is denoted by $`b_i`$ ($`i=1,2,\mathrm{},K`$), and the effective Hamiltonian is given by a $`(K+2)\times (K+2)`$ matrix $$(t)=\left(\begin{array}{cc}(\omega _b\frac{i}{2}\gamma _b)\mathrm{𝟏}& \begin{array}{cc}\mathrm{\Omega }_1(t)\hfill & 0\\ \mathrm{}\hfill & \mathrm{}\\ \mathrm{\Omega }_K(t)\hfill & 0\end{array}\\ \begin{array}{ccc}\mathrm{\Omega }_1& \mathrm{}& \mathrm{\Omega }_K(t)\\ 0& \mathrm{}& 0\end{array}& \begin{array}{cc}\omega _c\frac{i}{2}\gamma _c& \kappa \\ \kappa & \omega _a\frac{i}{2}\gamma _a\end{array}\end{array}\right).$$ (107) Now suppose that the $`i`$-th atomic bunch locates around $$x_i=(i1)\delta x$$ (108) for a time interval $$M\delta tt<(M+1)\delta t,$$ (109) where $`M=0,1,2,\mathrm{}`$, and $$\delta x=L/K,\delta t=t_{\mathrm{tr}}/K.$$ (110) Then, the collective atom-photon coupling for the $`i`$-th bunch containing $`N/K`$ atoms is given with the electric field profile by $$\mathrm{\Omega }_i(t)=(\mathrm{\Omega }_N/\sqrt{K})f(x_i+v\overline{t}),$$ (111) where $$\overline{t}tM\delta t(0\overline{t}<\delta t).$$ (112) After each time interval of $`\delta t`$, the $`K`$-th atomic bunch leaves the cavity, and a new one comes in. 
Then, the collective atomic modes of the respective bunches should be replaced as $$b_i\to b_{i+1},$$ (113) which is represented by a $`(K+2)\times (K+2)`$ matrix $$𝒫=\left(\begin{array}{cccccccc}0& 0& 0& \mathrm{}& 0& 0& 0& 0\\ 1& 0& 0& \mathrm{}& 0& 0& 0& 0\\ 0& 1& 0& \mathrm{}& 0& 0& 0& 0\\ \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}\\ 0& 0& 0& \mathrm{}& 0& 0& 0& 0\\ 0& 0& 0& \mathrm{}& 1& 0& 0& 0\\ 0& 0& 0& \mathrm{}& 0& 0& 1& 0\\ 0& 0& 0& \mathrm{}& 0& 0& 0& 1\end{array}\right).$$ (114) In accordance with this replacement of the collective atomic modes at each period of $`\delta t`$, the time evolution of the particle number matrix $`𝒩_{ij}(t)`$ should be determined by dividing it into the corresponding parts in time as $$\overline{𝒩}(\overline{t},M)\equiv 𝒩(t)(M\delta t\le t<(M+1)\delta t).$$ (115) These parts are connected together at $`t=M\delta t`$ with a suitable matching condition $$\overline{𝒩}(0,M+1)=𝒫\overline{𝒩}(\delta t,M)𝒫^\mathrm{T}.$$ (116) This condition is written down explicitly as $$\begin{array}{cc}\overline{𝒩}_{ij}(0,M+1)=\overline{𝒩}_{i-1j-1}(\delta t,M)\hfill & [2\le i,j\le K],\hfill \\ \overline{𝒩}_{1j}(0,M+1)=0\hfill & [1\le j\le K+2],\hfill \\ \overline{𝒩}_{i1}(0,M+1)=0\hfill & [1\le i\le K+2],\hfill \\ \overline{𝒩}_{ij}(0,M+1)=\overline{𝒩}_{i-1j}(\delta t,M)\hfill & [2\le i\le K,j=a,c],\hfill \\ \overline{𝒩}_{ij}(0,M+1)=\overline{𝒩}_{ij-1}(\delta t,M)\hfill & [i=a,c,2\le j\le K],\hfill \\ \overline{𝒩}_{ij}(0,M+1)=\overline{𝒩}_{ij}(\delta t,M)\hfill & [i,j=a,c].\hfill \end{array}$$ (117) Here, the second and third lines imply that the incoming atomic mode ($`b_1`$) does not have any correlation initially with the other quantum modes ($`b_2,\mathrm{},b_K,c,a`$) already interacting in the cavity.
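The bookkeeping of the bunch replacement in Eqs. (113)–(117) can be checked with a small numerical sketch. The following Python snippet is purely illustrative (it is not part of the original analysis): it builds the shift matrix for $`K=3`$ atomic bunches plus the photon and axion modes, applies the matching condition $`𝒩\to 𝒫𝒩𝒫^\mathrm{T}`$ to an arbitrary particle number matrix, and verifies the component rules of Eq. (117).

```python
# Illustrative sketch (not from the original analysis) of the bunch-shift
# matching condition: the shift matrix P maps bunch mode b_i to b_{i+1}
# while keeping the photon (c) and axion (a) modes, and the particle-number
# matrix is updated as N -> P N P^T at each step delta_t.
K = 3                       # number of atomic bunches (illustrative)
dim = K + 2                 # indices 0..K-1: bunches, K: photon, K+1: axion

P = [[0.0] * dim for _ in range(dim)]
for i in range(1, K):
    P[i][i - 1] = 1.0       # bunch i-1 -> bunch i; row 0 stays empty (fresh bunch)
P[K][K] = 1.0               # photon mode unchanged
P[K + 1][K + 1] = 1.0       # axion mode unchanged

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

# An arbitrary particle-number matrix N(delta_t, M) for the check.
N = [[(i + 1) * 10 + (j + 1) for j in range(dim)] for i in range(dim)]
N_new = matmul(matmul(P, N), transpose(P))   # N(0, M+1)
```

The fresh bunch enters with no correlations (first row and column vanish), the interior block is shifted down the diagonal, and the photon-axion block is untouched, exactly as the explicit rules state.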
In this way, by solving the master equation (74) for many time intervals, we obtain the steady solution $$\overline{𝒩}(\overline{t})\equiv \overline{𝒩}(\overline{t},M\gg 1).$$ (118) It in fact appears independently of the choice of the initial value $`𝒩(0)`$, which is due to the dissipation of the quantum modes for the axions, photons and atoms. The number of excited atoms contained in each atomic bunch is then given by $$n_{b_i}(\overline{t})=\overline{𝒩}_{ii}(\overline{t})=n_{b_i}^a(\overline{t})+n_{b_i}^c(\overline{t}).$$ (119) Here, the contributions of the axions and thermal photons are proportional to the respective particle numbers as $$n_{b_i}^a(\overline{t})=r_{b_ia}(\overline{t})\overline{n}_a,n_{b_i}^c(\overline{t})=r_{b_ic}(\overline{t})\overline{n}_c.$$ (120) The distribution of excited atoms in the cavity is determined with this steady solution as $$\overline{\rho }_b(x)=\frac{n_{b_i}(\overline{t}=x/v)}{\delta x}((i-1)\delta x\le x<i\delta x).$$ (121) The $`K`$-th atomic bunch exits the cavity at $`\overline{t}=\delta t`$, and the excited atoms contained there are detected. Accordingly, the counting rates for the contributions of the axions and thermal photons are calculated, respectively, for large enough $`K`$ and $`M`$ by $$R_s=\frac{n_{b_K}^a(\delta t)}{\delta t},$$ (122) $$R_n=\frac{n_{b_K}^c(\delta t)}{\delta t}.$$ (123) By using these counting rates, the measurement time required to search for the axion signal at the confidence level of $`m\sigma `$ is estimated as $$\mathrm{\Delta }t=\frac{m^2(1+R_n/R_s)}{R_s}.$$ (124) In the search for the axions with unknown mass, the cavity frequency ($`\omega _b=\omega _c`$ for definiteness) should be changed with an appropriate scanning step $`\mathrm{\Delta }\omega _c`$.
The total scanning time over a 10% frequency range is then given by $$t_{\mathrm{tot}}=\frac{0.1\omega _c}{\mathrm{\Delta }\omega _c}\mathrm{\Delta }t.$$ (125) The frequency step $`\mathrm{\Delta }\omega _c`$ is determined by considering the resonant condition for the absorption of the axion-converted photons by the Rydberg atoms. Specifically, as seen in Eqs. (103), (104) and (105), the signal rate is enhanced significantly for the axion detuning $`|\mathrm{\Delta }\omega _a|\lesssim \gamma _a`$ with $`\mathrm{\Omega }_N/\gamma \simeq (\gamma _a/\gamma )^{1/2}`$. Hence, the scanning frequency step should be taken as $$\mathrm{\Delta }\omega _c\sim \gamma _a.$$ (126) Then, if the axion really exists in a mass range searched with this frequency step, the resonant condition for the axion signal can be satisfied at a certain scanning step. ## VIII Dependence on the relevant parameters We here consider how the counting rates of the axion signal and thermal photon noise depend on the relevant experimental parameters, before presenting detailed numerical calculations in the next section. Although the notation for the case of $`K=1`$ with constant atom-photon coupling $`\mathrm{\Omega }_N`$ is used for simplicity, the essential features argued below are indeed valid for the realistic case with a continuous atomic beam.
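The bookkeeping in Eqs. (124) and (125) is simple arithmetic; a minimal Python sketch may make it explicit. The numerical rates used below are illustrative placeholders, not values from the text.

```python
# Minimal sketch of the sensitivity bookkeeping, Eqs. (124) and (125).
def measurement_time(R_s, R_n, m=3.0):
    """Eq. (124): time for an m-sigma detection with signal/noise rates [1/s]."""
    return m**2 * (1.0 + R_n / R_s) / R_s

def total_scanning_time(dt, omega_c, delta_omega_c, frac=0.1):
    """Eq. (125): cover a `frac` (10%) frequency range in steps of delta_omega_c."""
    return (frac * omega_c / delta_omega_c) * dt

# Illustrative numbers only.
dt = measurement_time(0.05, 0.1)                 # 9 * (1 + 2) / 0.05 = 540 s
t_tot = total_scanning_time(dt, 1.0e6, 1.0e3)    # 100 steps * 540 s
```

Note how a noise rate comparable to the signal rate directly inflates the per-step time through the factor $`(1+R_n/R_s)`$.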
The energy scales involved in the effective Hamiltonian are as follows: $`\begin{array}{ccc}\mathrm{𝐝𝐞𝐭𝐮𝐧𝐢𝐧𝐠𝐬}\hfill & :& \mathrm{\Delta }\omega _a\equiv \omega _a-\omega _c,\mathrm{\Delta }\omega _b\equiv \omega _b-\omega _c,\hfill \\ \mathrm{𝐜𝐨𝐮𝐩𝐥𝐢𝐧𝐠𝐬}\hfill & :& \kappa ,\mathrm{\Omega }_N,\hfill \\ \mathrm{𝐝𝐚𝐦𝐩𝐢𝐧𝐠𝐬}\hfill & :& \gamma _a,\gamma _b,\gamma _c\equiv \gamma .\hfill \end{array}`$ The atomic transit time is also important to determine the time evolution of the system, which is controlled by the atomic velocity $`v`$ with a fixed cavity length $`L`$ as $$v=L/t_{\mathrm{tr}}.$$ (127) The effects of the dissipation of axions and atoms are rather small compared to that of the photons in the actual situation with $`\gamma _a,\gamma _b\ll \gamma `$. The atomic transit time is in fact much longer than the other characteristic time scales except for $`\kappa ^{-1}`$, e.g., $`t_{\mathrm{tr}}\simeq 400\tau _\gamma `$ for $`v=350\mathrm{m}\mathrm{s}^{-1}`$ with $`L=0.2\mathrm{m}`$, $`Q=2\times 10^4`$ and $`m_a=10^{-5}\mathrm{eV}`$. In this situation, the quantum averaged particle numbers approach the asymptotic values with the factors $`r_{ij}(t_{\mathrm{tr}})\simeq r_{ij}(\mathrm{\infty })`$, as given in Eq. (93). Then, the counting rates of the signal and noise, which are calculated by $`R_s=r_{ba}(t_{\mathrm{tr}})\overline{n}_a/t_{\mathrm{tr}}`$ and $`R_n=r_{bc}(t_{\mathrm{tr}})\overline{n}_c/t_{\mathrm{tr}}`$, are found to be roughly proportional to the atomic velocity $`v`$. In the weak atom-photon coupling region of $`\mathrm{\Omega }_N/\gamma \lesssim 0.1`$, the counting rates $`R_s`$ and $`R_n`$ are roughly proportional to $`\mathrm{\Omega }_N^2\propto N\propto I_{\mathrm{Ryd}}`$, as seen in Eq. (101) for the axion signal. On the other hand, when the atom-photon coupling becomes significant with sufficient atomic beam intensity $`I_{\mathrm{Ryd}}`$, the noise rate saturates to a certain value $`R_n\sim \overline{n}_c/t_{\mathrm{tr}}`$, and the signal rate $`R_s`$ is maximized, as seen in Eqs.
(103) and (104), for the atom-photon coupling $$\mathrm{\Omega }_N/\gamma \simeq (\gamma _a/\gamma )^{1/2}\simeq 0.1\left(\frac{\beta _a}{10^{-3}}\right)\left(\frac{Q}{10^4}\right)^{1/2}.$$ (128) It is possible to attain this optimal coupling in the actual detection apparatus by preparing the appropriate number of Rydberg atoms with a suitable laser system. The beam intensity of Rydberg atoms $`I_{\mathrm{Ryd}}=N/t_{\mathrm{tr}}`$ providing this optimal value (128) for the collective atom-photon coupling $`\mathrm{\Omega }_N=\mathrm{\Omega }\sqrt{N}`$ is chosen depending on the relevant parameters as $$I_{\mathrm{Ryd}}\propto v\beta _a^2m_a^2Q^{-1}\mathrm{\Omega }^{-2}.$$ (129) It is typically estimated as $`I_{\mathrm{Ryd}}\simeq 7\times 10^5\mathrm{s}^{-1}`$ with $`N\simeq 400`$ atoms for $`m_a=10^{-5}\mathrm{eV}`$, $`\beta _a=10^{-3}`$, $`Q=2\times 10^4`$, $`\mathrm{\Omega }=5\times 10^3\mathrm{s}^{-1}`$, $`v=350\mathrm{m}\mathrm{s}^{-1}`$ and $`L=0.2\mathrm{m}`$ ($`t_{\mathrm{tr}}\simeq 400\tau _\gamma `$). On the other hand, if the atom-photon coupling becomes too strong, the signal rate decreases due to the appearance of Rabi splitting, as seen in Fig. 2. We have observed that the atom-photon interaction cannot be treated perturbatively for $`\mathrm{\Omega }_N/\gamma \gtrsim 0.1`$ . The conversion between atoms and photons appears to be reversible in this case. On the other hand, since the axion-photon coupling is extremely small, the conversion of axions to photons can well be treated perturbatively as an irreversible process. The signal rate $`R_s`$ is then found to be proportional to $`(\kappa /\gamma )^2\overline{n}_a`$ in a very good approximation.
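The quoted numbers can be checked with a short sketch. Assuming $`\omega _c=m_a/\mathrm{}`$, $`\gamma =\omega _c/Q`$, $`\gamma _a=\beta _a^2\omega _c`$ and $`\mathrm{\Omega }_N=\mathrm{\Omega }\sqrt{N}`$ (relations used in the text), the collective coupling for the quoted parameters indeed lands near the optimum $`(\gamma _a/\gamma )^{1/2}`$:

```python
import math

# Check of the optimal coupling condition, Eq. (128), with the parameter
# values quoted in the text; the relations below are those of the text
# (omega_c = m_a/hbar, gamma = omega_c/Q, gamma_a = beta^2 * omega_c).
hbar_eVs = 6.582e-16            # hbar [eV*s]
m_a   = 1.0e-5                  # axion mass [eV]
beta  = 1.0e-3                  # axion velocity dispersion
Q     = 2.0e4                   # cavity quality factor
Omega = 5.0e3                   # single-atom coupling [1/s]
N     = 400                     # atoms in the cavity

omega_c = m_a / hbar_eVs        # photon (= axion) angular frequency [1/s]
gamma   = omega_c / Q           # cavity damping rate
gamma_a = beta**2 * omega_c     # axion linewidth
Omega_N = Omega * math.sqrt(N)  # collective coupling Omega*sqrt(N)

ratio   = Omega_N / gamma              # comes out ~0.13
optimum = math.sqrt(gamma_a / gamma)   # (gamma_a/gamma)^(1/2) ~ 0.14
```

Both quantities come out near 0.1, consistent with the statement that $`N\simeq 400`$ atoms realize the optimal coupling for these parameters.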
Then, by considering the relations (19), (29) and (41) with $`\omega _c\simeq m_a/\mathrm{\hbar }`$, the dependence of the signal rate on the relevant parameters is specified as $$R_s\propto vc_{a\gamma \gamma }^2B_{\mathrm{eff}}^2Q^2V_1(\rho _a/m_a)\overline{\sigma }_{ba}.$$ (130) Here, the form factor $`\overline{\sigma }_{ba}=\sigma _{ba}(\pm \mathrm{\Delta }\omega _c/2)`$ for $`\mathrm{\Delta }\omega _a=\pm \mathrm{\Delta }\omega _c/2`$ ($`\mathrm{\Delta }\omega _b=0`$) is also indicated. This form factor actually depends on the choices of the atom-photon coupling $`\mathrm{\Omega }_N`$ and the scanning frequency step $`\mathrm{\Delta }\omega _c`$, as seen in Eqs. (103), (104) and (105). It is noticed in Eq. (130) that the signal rate is proportional to the number of axions contained in the conversion cavity $`(\rho _a/m_a)V_1`$. The dependence of the signal rate on the axion mass $`m_a`$ is further specified by considering the relation for the conversion cavity volume $$V_1\propto m_a^{-2}.$$ (131) This relation is due to the fact that the diameter of the cavity is proportional to $`m_a^{-1}`$ while its length is fixed. To summarize these arguments, the optimal situation, as given in Eqs. (103), (104) and (105), can be set up experimentally for the dark matter axion search with the Rydberg atom cavity detector. Then, the counting rates are expected to behave with respect to the changes of the relevant parameters as $`R_s`$ $`\propto `$ $`vc_{a\gamma \gamma }^2B_{\mathrm{eff}}^2Q\beta _a^{-2}m_a^{-3}\rho _a,`$ (132) $`R_n`$ $`\propto `$ $`v\overline{n}_c.`$ (133) Here, the noise rate is proportional to the thermal photon number $`\overline{n}_c[m_a/T_c]`$ which is determined as a function of the ratio $`m_a/T_c`$. The resonant value $`\overline{\sigma }_{ba}\simeq (\gamma _a/\gamma )^{-1}=\beta _a^{-2}Q^{-1}`$ is obtained for the axion-photon-atom interaction with the small enough axion detuning $`|\mathrm{\Delta }\omega _a|\simeq \mathrm{\Delta }\omega _c/2\lesssim \gamma _a`$.
(This is valid as long as the quantum uncertainty of energy $`\mathrm{\hbar }/t_{\mathrm{tr}}`$ due to the finite atomic transit time is smaller than the axion width $`\mathrm{\hbar }\gamma _a`$, as ensured in the actual detection system.) It should also be remarked that a small detuning $`\mathrm{\Delta }\omega _b`$ of the atomic transition frequency only shifts slightly the location of the peak of the axion signal from $`\mathrm{\Delta }\omega _a(\mathrm{peak})=0`$ to $`\mathrm{\Delta }\omega _a(\mathrm{peak})\simeq \mathrm{\Delta }\omega _b`$. By taking these signal and noise rates in the optimal case, the measurement time $`\mathrm{\Delta }t`$ at each scanning step is estimated with respect to the relevant parameters as $$\mathrm{\Delta }t\propto \{\begin{array}{cc}v^{-1}c_{a\gamma \gamma }^{-4}B_{\mathrm{eff}}^{-4}Q^{-2}\beta _a^4m_a^6\rho _a^{-2}\overline{n}_c\hfill & (R_s/R_n<1)\hfill \\ c_{a\gamma \gamma }^{-2}B_{\mathrm{eff}}^{-2}Q^{-2}\beta _a^2m_a^3\rho _a^{-1}\hfill & (R_s/R_n\gg 1)\hfill \end{array}.$$ (134) The total scanning time is then estimated with the appropriate scanning step $`\mathrm{\Delta }\omega _c\sim \gamma _a`$ as $$t_{\mathrm{tot}}\sim \mathrm{\Delta }t/\beta _a^2.$$ (135) Here the negative power of $`\beta _a`$ appearing on the right-hand side is indeed compensated by the positive power of $`\beta _a`$ contained in Eq. (134) for $`\mathrm{\Delta }t`$. The sensitivity apparently seems to be improved for a high atomic velocity in Eqs. (134) and (135). It should, however, be remarked that the condition $`\mathrm{\hbar }/t_{\mathrm{tr}}<\mathrm{\hbar }\gamma _a`$ for the resonant axion-photon-atom interaction is no longer ensured if the atomic velocity becomes too high. A high atomic beam intensity is even required in such a case in accordance with Eq. (129). Hence, we will in fact see in the next section that the preferable atomic velocity is $`v\simeq 100\mathrm{m}\mathrm{s}^{-1}`$–$`1000\mathrm{m}\mathrm{s}^{-1}`$ for the actual experimental apparatus.
Some essential features have been discussed so far concerning the detection sensitivity of the Rydberg atom cavity detector. In the next section, they will indeed be confirmed by detailed numerical calculations for the realistic situation with a continuous atomic beam. ## IX Numerical analysis Numerical calculations have been performed to determine precisely the quantum evolution of the axion-photon-atom system in the resonant cavity, where some practical values are taken for the experimental parameters such as $`\begin{array}{c}m_a=3\times 10^{-6}\mathrm{eV}\text{–}3\times 10^{-5}\mathrm{eV},\hfill \\ \beta _a=3\times 10^{-4}\text{–}3\times 10^{-3},\hfill \\ \rho _a=\rho _{\mathrm{halo}}=0.3\mathrm{GeVcm}^{-3},\hfill \\ T_c=10\mathrm{m}\mathrm{K}\text{–}15\mathrm{m}\mathrm{K},\hfill \\ Q=1\times 10^4\text{–}7\times 10^4,B_{\mathrm{eff}}=4\mathrm{T},\hfill \\ V_1=5000\mathrm{c}\mathrm{m}^3\left(m_a/10^{-5}\mathrm{eV}\right)^{-2},L=0.2\mathrm{m},\hfill \\ I_{\mathrm{Ryd}}=10^3\mathrm{s}^{-1}\text{–}10^7\mathrm{s}^{-1},\hfill \\ v=350\mathrm{m}\mathrm{s}^{-1}\text{–}10000\mathrm{m}\mathrm{s}^{-1},\hfill \\ \mathrm{\Omega }=5\times 10^3\mathrm{s}^{-1},\tau _b=10^{-3}\mathrm{s}.\hfill \end{array}`$ The steady state is realized by solving the master equation (74) for a long enough time interval $`M\delta t`$ ($`M\gg 1`$). It in practice appears independently of the choice of initial value $`𝒩(0)`$ due to the finite damping rates of the axions, photons and atoms. The spatial distribution of the excited atoms in the cavity is calculated by Eq. (121) with this steady solution. The contributions of the galactic axions $`\overline{\rho }_b^{[a]}(x)`$ and the thermal photons $`\overline{\rho }_b^{[\gamma ]}(x)`$ are depicted together in Figs. 3 and 4 for the case of the DFSZ axion with the tanh-type and sine-type electric field profiles, respectively, where the relevant parameters are taken as $`m_a=10^{-5}\mathrm{eV}`$ ($`\omega _c=\omega _a`$), $`T_c=12\mathrm{m}\mathrm{K}`$, $`Q=2\times 10^4`$, $`\mathrm{\Omega }_N/\gamma =0.1`$ and $`v=350\mathrm{m}\mathrm{s}^{-1}`$.
These results obtained with $`K=10`$ and $`M=20`$ in fact appear to be smooth enough, indicating that the steady state in the case with a continuous atomic beam is realized in a good approximation. It is here remarkable that the form of the electric field profile is reflected in the distribution of the excited atoms. In the following calculations, we take the tanh-type electric field profile for definiteness. The essential results concerning the interaction between axions and atoms mediated by photons in the resonant cavity, however, hold even in the cases with more realistic electric field profiles. We can compute the signal and noise rates with the steady solutions obtained in this way. (The following calculations are made for $`K=5`$ and $`M=10`$ with the tanh-type electric field profile. The results are changed by less than 10% even if larger $`K`$ and $`M`$ are taken.) It is readily checked by these calculations that the signal rate $`R_s`$ is indeed proportional to $`c_{a\gamma \gamma }^2B_{\mathrm{eff}}^2V_1(\rho _a/m_a)`$, as shown in Eq. (130). Hence, the signal rate for the KSVZ axion is larger than that for the DFSZ axion by a factor $`c_{a\gamma \gamma }^2(\mathrm{KSVZ})/c_{a\gamma \gamma }^2(\mathrm{DFSZ})\simeq 7`$. In Fig. 5, we plot the signal and noise rates, $`R_s`$ and $`R_n`$, together depending on the atomic beam intensity $`I_{\mathrm{Ryd}}`$ (and the corresponding atom-photon coupling $`\mathrm{\Omega }_N`$), where the relevant parameters are taken as $`m_a=10^{-5}\mathrm{eV}`$ ($`\omega _c=\omega _a`$), $`Q=2\times 10^4`$, $`L=0.2\mathrm{m}`$, $`v=350\mathrm{m}\mathrm{s}^{-1}`$, $`\mathrm{\Omega }=5\times 10^3\mathrm{s}^{-1}`$ and $`T_c=10\mathrm{m}\mathrm{K},12\mathrm{mK},15\mathrm{mK}`$.
The noise rate is clearly proportional to the thermal photon number $`\overline{n}_c[m_a/T_c]`$, which is $`\overline{n}_c=8.9\times 10^{-6},6.2\times 10^{-5},4.3\times 10^{-4}`$ at $`T_c=10\mathrm{m}\mathrm{K}`$, $`12\mathrm{m}\mathrm{K}`$, $`15\mathrm{m}\mathrm{K}`$, respectively, for the axion mass $`m_a=10^{-5}\mathrm{eV}`$. In the weak beam intensity region, the signal and noise rates increase almost proportionally to $`I_{\mathrm{Ryd}}=N/t_{\mathrm{tr}}`$. On the other hand, for sufficiently strong beam intensities the noise rate saturates to a certain asymptotic value. In this case, the atoms interact reversibly with the photons in the cavity so that the equilibrium with $`n_b^{[c\to b]}\simeq \overline{n}_c`$ is realized. (In practice, due to the finite atomic lifetime the total number of excited atoms $`n_b^{[c\to b]}=\int _0^L\overline{\rho }_b^{[\gamma ]}(x)𝑑x`$ obtained from the thermal photons is slightly different from $`\overline{n}_c`$.) As for the signal rate, we clearly observe in Fig. 5 that it is optimized for a certain atomic beam intensity corresponding to the condition (128) for the atom-photon coupling. If the atomic beam intensity is too strong, the signal rate even decreases due to the Rabi splitting. It is hence important to adjust the beam intensity so as to attain the optimal condition for the signal. It is also checked that the lines representing the noise rate $`R_n`$ versus beam intensity $`I_{\mathrm{Ryd}}`$ shift horizontally under changes of the intrinsic atom-photon coupling $`\mathrm{\Omega }`$. Although the theoretical estimate of the electric dipole transition matrix element $`d`$ is available for calculating $`\mathrm{\Omega }`$ with Eq. (48), some ambiguities would be present in such a naive theoretical estimate of $`\mathrm{\Omega }`$. Hence, it may rather be necessary to determine $`\mathrm{\Omega }`$ experimentally by fitting the thermal noise data in the weak beam intensity region to the expected lines, as shown in Fig.
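The thermal photon numbers quoted above follow from the Planck occupation of the cavity mode, $`\overline{n}_c=1/(\mathrm{e}^{m_a/k_\mathrm{B}T_c}-1)`$. The sketch below re-evaluates them independently (our own evaluation; small differences from the text's rounding may remain):

```python
import math

# Planck occupation number of a cavity mode with photon energy m_a at
# temperature T; reproduces the order of magnitude of the quoted values.
k_B = 8.617e-5            # Boltzmann constant [eV/K]

def n_thermal(m_a_eV, T_K):
    # expm1 gives an accurate e^x - 1 for the Bose-Einstein denominator
    return 1.0 / math.expm1(m_a_eV / (k_B * T_K))

m_a = 1.0e-5              # axion mass [eV]
occ = [n_thermal(m_a, T) for T in (0.010, 0.012, 0.015)]  # 10, 12, 15 mK
```

The strong (roughly exponential) temperature dependence in this regime is what makes cooling the cavity by a few mK so effective in suppressing the noise rate.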
5 for $`\mathrm{\Omega }=5\times 10^3\mathrm{s}^{-1}`$, which are calculated by varying $`\mathrm{\Omega }`$ around some plausible value suggested by the theoretical calculation of $`d`$ . Another aspect should be pointed out concerning the counting rates of the excited atoms. As given in Eqs. (121) and (123), the noise rate is calculated by the formula $`R_n=v\overline{\rho }_b^{[\gamma ]}(L)`$ with the excited atom density around the exit of the cavity ($`x=L`$). Then, it may be noticed in Fig. 5 that the saturated value of $`R_n`$ in the strong beam intensity region, e.g., $`0.23\mathrm{s}^{-1}`$ for $`T_c=12\mathrm{m}\mathrm{K}`$, is in fact significantly larger than the naive estimate $`\overline{n}_c/t_{\mathrm{tr}}`$, e.g., $`0.11\mathrm{s}^{-1}`$ for $`\overline{n}_c[10^{-5}\mathrm{eV}/12\mathrm{m}\mathrm{K}]=6.2\times 10^{-5}`$, $`v=350\mathrm{m}\mathrm{s}^{-1}`$ and $`L=0.2\mathrm{m}`$. This enhancement of the counting rates, which is expected for the signal as well as the noise, is due to the fact that the densities of excited atoms $`\overline{\rho }_b^{[a]}(L)`$ and $`\overline{\rho }_b^{[\gamma ]}(L)`$ around the exit of the cavity become higher than the average values, as seen in Figs. 3 and 4. The nonuniform density of the excited atoms in the cavity is indeed brought about by the fact that the atoms in the continuous beam are passing through the spatially varying electric field with finite transit time. The behavior of the signal rate $`R_s`$ with respect to the axion detuning $`\mathrm{\Delta }\omega _a`$ is shown in Fig. 6. Here some typical values are taken for the atom-photon coupling, $`\mathrm{\Omega }_N/\gamma =0.1,0.3,0.7`$, and the other relevant parameters are chosen as $`m_a=10^{-5}\mathrm{eV}`$, $`\beta _a=10^{-3}`$, $`Q=2\times 10^4`$, $`L=0.2\mathrm{m}`$ and $`v=350\mathrm{m}\mathrm{s}^{-1}`$. This behavior almost agrees with that of the form factor in Fig. 2 which is obtained for the simple case. (Note that the log scale is taken for $`R_s`$ in Fig. 6.)
It is therefore confirmed by these numerical calculations that the resonant axion-photon-atom interaction takes place if the axion detuning is small enough as $`|\mathrm{\Delta }\omega _a|\lesssim \gamma _a`$ with the optimal atom-photon coupling $`\mathrm{\Omega }_N/\gamma \simeq (\gamma _a/\gamma )^{1/2}`$ (as long as $`\mathrm{\hbar }/t_{\mathrm{tr}}<\mathrm{\hbar }\gamma _a`$). It is also checked that if there is a small detuning $`\mathrm{\Delta }\omega _b`$ of the atomic transition frequency, this resonant condition is slightly modified as $`|\mathrm{\Delta }\omega _a-\mathrm{\Delta }\omega _b|=|\omega _a-\omega _b|\lesssim \gamma _a`$. As clearly observed in Fig. 6, a salient feature which is found by these calculations for the case with a continuous atomic beam is the appearance of a ripple structure in $`R_s`$ versus $`\mathrm{\Delta }\omega _a`$ for relatively strong atom-photon coupling, $`\mathrm{\Omega }_N/\gamma >0.1`$ for the present choice of parameters. It is realized that this structure is brought about by the narrow axion width $`\gamma _a\simeq \beta _a^2m_a/\mathrm{\hbar }`$. We have checked that this fine structure indeed disappears if the axion spectrum spreads much wider with a higher mean velocity such as $`\beta _a=3\times 10^{-3}`$. The signal rate $`R_s`$ versus axion detuning $`\mathrm{\Delta }\omega _a`$ is calculated for some typical axion velocities, $`\beta _a=3\times 10^{-4},1\times 10^{-3},3\times 10^{-3}`$. The results are shown in Fig. 7, where $`m_a=10^{-5}\mathrm{eV}`$, $`Q=2\times 10^4`$, $`\mathrm{\Omega }_N/\gamma =0.1`$, $`L=0.2\mathrm{m}`$ and $`v=350\mathrm{m}\mathrm{s}^{-1}`$ are taken. It is here clearly observed that this signal form factor exhibits the structure determined by the energy spread of axions $`\mathrm{\hbar }\gamma _a\simeq \beta _a^2m_a`$. The galactic axion spectrum can actually have some narrow peaks, as pointed out in .
Then, by taking a small enough scanning frequency step $`\mathrm{\Delta }\omega _c`$, such peaks may be observed in the present detection scheme as well as with the heterodyne method . The energy resolution is now limited by the quantum uncertainty $`\mathrm{\hbar }/t_{\mathrm{tr}}`$, which could be improved by utilizing a lower-velocity atomic beam. The dependence of the axion signal $`R_s`$ on the atomic velocity $`v`$ is also shown in Fig. 8 for $`m_a=10^{-5}\mathrm{eV}`$, $`\beta _a=10^{-3}`$, $`\mathrm{\Omega }_N/\gamma =0.1`$, $`L=0.2\mathrm{m}`$ and $`v=350\mathrm{m}\mathrm{s}^{-1},2000\mathrm{m}\mathrm{s}^{-1},10000\mathrm{m}\mathrm{s}^{-1}`$. Here, we note that for high atomic velocities such as $`v=10000\mathrm{m}\mathrm{s}^{-1}`$ this axion signal form factor has a width much larger than that of the axions. It is in fact determined by the energy uncertainty $`\mathrm{\hbar }/t_{\mathrm{tr}}`$ (e.g., $`\simeq 0.1\mathrm{\hbar }\gamma `$ for $`v=10000\mathrm{m}\mathrm{s}^{-1}`$) due to the atomic motion. The measurement time $`\mathrm{\Delta }t`$ required for the $`3\sigma `$ level at each scanning step is shown in Fig. 9 depending on the atomic beam intensity $`I_{\mathrm{Ryd}}`$. Here the axion detuning is taken for definiteness as $`|\mathrm{\Delta }\omega _a|\simeq \mathrm{\Delta }\omega _c/2`$ with the scanning frequency step $$\mathrm{\Delta }\omega _c=0.05\gamma \simeq 6\mathrm{k}\mathrm{H}\mathrm{z}\left(\frac{m_a}{10^{-5}\mathrm{eV}}\right)\left(\frac{2\times 10^4}{Q}\right),$$ (136) which is of the order of the axion width $`\gamma _a`$ for $`\beta _a\simeq 10^{-3}`$, meeting the resonant condition (105) for the axion signal. Then, the total scanning time $`t_{\mathrm{tot}}`$ is estimated, as shown in Fig. 10. These calculations are made for the DFSZ axion by taking some typical values of $`T_c`$, $`Q`$ and $`v`$, which are indicated in Figs. 9 and 10. The other relevant parameters are chosen as $`m_a=10^{-5}\mathrm{eV}`$, $`\beta _a=10^{-3}`$, $`\mathrm{\Omega }_N/\gamma =0.1`$, $`L=0.2\mathrm{m}`$ and $`v=350\mathrm{m}\mathrm{s}^{-1}`$.
Here, we can see that the beam intensity to optimize the detection sensitivity changes with respect to $`Q`$ and $`v`$ according to the relation (129), as discussed before. The optimal estimates of the measurement time and the total scanning time also show roughly the dependence on $`Q`$ and $`v`$ indicated in (134) and (135). For the KSVZ axion, the sensitivity is much better by a factor $`7^2\simeq 50`$ ($`R_s/R_n<1`$) or $`7`$ ($`R_s/R_n\gg 1`$). The detection sensitivity at the $`3\sigma `$ level is finally estimated over the axion mass range $`m_a=3\times 10^{-6}\mathrm{eV}`$–$`3\times 10^{-5}\mathrm{eV}`$, which can be searched with the present type of Rydberg atom cavity detector. In these calculations, the optimal atomic beam intensity is taken as $`I_{\mathrm{Ryd}}=4\times 10^5\mathrm{s}^{-1}(m_a/10^{-5}\mathrm{eV})^2(Q/3\times 10^4)^{-1}`$ in accordance with the relation (129), which is also read from Figs. 9 and 10. The volume of the conversion cavity is taken as $`V_1=5000\mathrm{c}\mathrm{m}^3(m_a/10^{-5}\mathrm{eV})^{-2}`$ by considering the relation (131). The other relevant parameters are chosen as $`\beta _a=10^{-3}`$, $`L=0.2\mathrm{m}`$, $`v=350\mathrm{m}\mathrm{s}^{-1}`$, $`\mathrm{\Delta }\omega _c=0.05\gamma `$ and $`T_c=10\mathrm{m}\mathrm{K},12\mathrm{mK},15\mathrm{mK}`$. The results are shown in Figs. 11 and 12. It should here be noted that the $`Q`$ factor can actually increase for lower frequencies. In order to take into account this property of the $`Q`$ factor, we have assumed for example a relation $$Q(m_a)=3\times 10^4(10^{-5}\mathrm{eV}/m_a)^{2/3},$$ (137) providing $`Q(m_a)\simeq 1.4\times 10^4`$–$`6.7\times 10^4`$ for $`m_a=3\times 10^{-5}\mathrm{eV}`$–$`3\times 10^{-6}\mathrm{eV}`$. The results obtained by taking a fixed value and this relation for the $`Q`$ factor are plotted together as solid and dotted lines, respectively, in Figs. 11 and 12. Here we clearly find that if the $`Q`$ factor increases as given in Eq.
(137), the detection sensitivity can be improved significantly for the lower axion masses $`m_a10^6\mathrm{eV}`$. It is also observed in most cases that the the measurement time $`\mathrm{\Delta }t`$ and the total scanning time $`t_{\mathrm{tot}}`$ once become local minimum for certain axion mass around $`m_a=10^5\mathrm{eV}`$. As the axion mass gets smaller from this minimum point, $`\mathrm{\Delta }t`$ and $`t_{\mathrm{tot}}`$ increase until reaching a local maximum, and then turn to decrease again. This feature is understood by noting the behaviors of the signal and noise rates with respect to the axion mass. In fact, as shown in Eq. (132), the singal rate increases approximately as $`R_sm_a^3`$ (with fixed $`Q`$) when the axion mass gets smaller. This increase of the signal rate is eventually overwhelmed by the more rapid increase of the noise rate $`R_n\mathrm{e}^{m_a/k_\mathrm{B}T_c}`$ for $`m_a/k_\mathrm{B}T_c1`$ proportional to the thermal photon number $`\overline{n}_c`$. Then, as the axion mass gets smaller to be comparable to the cavity temperature $`T_c`$, the increase of the noise rate becomes gradually moderate, and the singnal rate begins to dominate again. We can now conclude with these detailed numerical calculations. If the galactic axion search is made by utilizing the Rydberg atom cavity detecter, the DFSZ axion limit can be reached in frequency ranges of 10% around the axion mass $`m_a10^6\mathrm{eV}10^5\mathrm{eV}`$ for reasonable scanning times. The optimal condition for the detection sensitivity is attained by cooling the cavity down to a temperature $`T_c10\mathrm{m}\mathrm{K}`$ and adjusting the experimental parameters such as the atomic beam intensity $`I_{\mathrm{Ryd}}`$ and velocity $`v`$ and also the scanning frequency step $`\mathrm{\Delta }\omega _c`$. 
## X Summary We have developed quantum theoretical calculations on the dynamical system consisting of the cosmic axions, photons and Rydberg atoms which are interacting in the resonant cavity. The time evolution is determined for the number of Rydberg atoms in the upper state which are excited by absorbing the axion-converted and thermal background photons. The calculations are made, in particular, by taking into account the actual experimental situation such as the motion and uniform distribution of the Rydberg atoms in the incident beam and also the spatial variation of the electric field in the cavity. Some essential aspects on the axion-photon-atom interaction in the resonant cavity are clarified by these detailed calculations. Then, by using these results the detection sensitivity of the Rydberg atom cavity detector is estimated properly. The present quantum analysis clearly shows that the Rydberg atom cavity detector is quite efficient for the dark matter axion search. ###### Acknowledgements. This research was partly supported by a Grant-in-Aid for Specially Promoted Research by the Ministry of Education, Science, Sports and Culture, Japan under the program No. 09102010. ## A Analytic solution Basic properties of the axion-photon-atom interaction in the cavity can be examined by considering the simple case where the atom-photon coupling does not depend on the time (i.e., the spatial variation of the electric field in the cavity is not considered). In this case, the analytic solution is obtained for the particle number matrix $`𝒩_{ij}(t)`$ as follows . We first consider a more general case with $`K`$ atomic bunches in the incident beam where the atom-photon couplings $`\mathrm{\Omega }_1`$$`\mathrm{\Omega }_K`$, which may be defferent each other, are independent of the time. The effective Hamiltonian $``$ given in Eq. 
(107) with constant $`\mathrm{\Omega }_1,\mathrm{},\mathrm{\Omega }_K`$ is diagonalized by a nonsingular linear transformation $`𝒯`$ (not unitary for nonhermitian $`ℋ`$): $$\overline{ℋ}=𝒯^{-1}ℋ𝒯=\mathrm{diag}.(\lambda _1,\lambda _2,\mathrm{},\lambda _{K+2}).$$ (A1) Accordingly, the relevant matrices are introduced by $`\overline{𝒩}^{\prime }(t)`$ $`=`$ $`𝒯^{-1\ast }𝒩^{\prime }(t)𝒯^{-1\mathrm{T}},`$ (A2) $`\overline{𝒟}`$ $`=`$ $`𝒯^{-1\ast }𝒟𝒯^{-1\mathrm{T}},`$ (A3) where $`𝒩^{\prime }(t)=𝒰^{-1\ast }(t)𝒩(t)𝒰^{-1\mathrm{T}}(t)`$, as given in Eq. (75), with $`𝒰(t)=\mathrm{exp}[-iℋt]`$ for the time-independent $`ℋ`$. Then, the master equation (74) is reduced to a set of unmixed equations, $$\frac{d\overline{𝒩}_{ij}^{\prime }}{dt}=\mathrm{e}^{i(\lambda _i^{\ast }-\lambda _j)t}\overline{𝒟}_{ij}.$$ (A4) These equations are readily solved with an appropriate initial value $`𝒩(0)=𝒩^{\prime }(0)`$ as $$\overline{𝒩}_{ij}^{\prime }(t)=\overline{𝒩}_{ij}^{\prime }(0)+\overline{𝒞}_{ij}(t),$$ (A5) where $$\overline{𝒞}_{ij}(t)=-i\frac{\overline{𝒟}_{ij}}{\lambda _i^{\ast }-\lambda _j}\left[\mathrm{e}^{i(\lambda _i^{\ast }-\lambda _j)t}-1\right].$$ (A6) By using this solution for $`\overline{𝒩}^{\prime }(t)`$, the original $`𝒩(t)`$ is calculated as $$𝒩(t)=𝒰^{\ast }(t)\left[𝒩(0)+𝒞(t)\right]𝒰^\mathrm{T}(t),$$ (A7) where $`𝒞(t)`$ $`=`$ $`𝒯^{\ast }\overline{𝒞}(t)𝒯^\mathrm{T},`$ (A8) $`𝒰(t)`$ $`=`$ $`𝒯\mathrm{exp}[-i\overline{ℋ}t]𝒯^{-1}.`$ (A9) The above procedure with Eqs. (A5) – (A9) to find the solution may be viewed as an inhomogeneous linear transformation from $`𝒩(0)`$ to $`𝒩(t)`$ depending on the form of $`ℋ`$: $$𝒩(t)=𝒰_ℋ[𝒩(0)].$$ (A10) The analytic solution obtained in this way can be used in a good approximation for a sufficiently small time interval where the variation of the atom-photon coupling is neglected. (This is equivalent to approximating the electric field profile $`f(x)`$ with a relevant step-wise function, if the time interval is taken to be $`\delta t=\delta x/v`$.)
Then, by applying the transformation (A10) for a time sequence $`t\equiv t_{z+1}>t_z>t_{z-1}>\mathrm{}>t_0`$ with small intervals $`t_k-t_{k-1}`$ ($`1\le k\le z+1`$) the approximate solution for $`𝒩(t)`$ may be obtained as $$𝒩(t_0)\stackrel{𝒰_{ℋ\left(t_0\right)}}{\to }𝒩(t_1)\stackrel{𝒰_{ℋ\left(t_1\right)}}{\to }\mathrm{}\stackrel{𝒰_{ℋ\left(t_z\right)}}{\to }𝒩(t).$$ (A11) We now consider the specific case of $`K=1`$ with $`\mathrm{\Omega }(t)=\mathrm{\Omega }_N`$, where the Hamiltonian is given by $$ℋ=\left(\begin{array}{ccc}\omega _b-\frac{i}{2}\gamma _b& \mathrm{\Omega }_N& 0\\ \mathrm{\Omega }_N& \omega _c-\frac{i}{2}\gamma _c& \kappa \\ 0& \kappa & \omega _a-\frac{i}{2}\gamma _a\end{array}\right).$$ (A12) Then, the transformation matrix $`𝒰(t)`$ is calculated from Eq. (A9) as $$𝒰_{ij}(t)=\underset{k}{\sum }g_{ij}^k\mathrm{e}^{-i\lambda _kt},$$ (A13) with $$g_{ij}^k=𝒯_{ik}𝒯_{kj}^{-1},$$ (A14) where the sum is not taken over the index $`k`$. This transformation matrix may also be expressed in terms of the Laplace transform $`ℒ`$ as $$𝒰(t)=ℒ^{-1}[(s\mathrm{𝟏}+iℋ)^{-1}].$$ (A15) The coefficients $`g_{ij}^k`$ are readily found by using the Laplace transform as $$g_{ij}^k=\underset{s\to -i\lambda _k}{lim}(s+i\lambda _k)(s\mathrm{𝟏}+iℋ)_{ij}^{-1},$$ (A16) where in the leading orders of $`\kappa `$ $`(s\mathrm{𝟏}+iℋ)^{-1}`$ (A17) $`\simeq \mathrm{\Lambda }^{-1}(s)\left(\begin{array}{ccc}s_as_c& -i\mathrm{\Omega }_Ns_a& -\kappa \mathrm{\Omega }_N\\ -i\mathrm{\Omega }_Ns_a& s_as_b& -i\kappa s_b\\ -\kappa \mathrm{\Omega }_N& -i\kappa s_b& s_cs_b+\mathrm{\Omega }_N^2\end{array}\right)`$ (A21) with $$s_i\equiv s+i\omega _i+\frac{1}{2}\gamma _i,$$ (A22) and $`\mathrm{\Lambda }(s)`$ $`=`$ $`\mathrm{det}(s\mathrm{𝟏}+iℋ)`$ (A23) $`=`$ $`(s+i\lambda _1)(s+i\lambda _2)(s+i\lambda _3).`$ (A24) The respective particle numbers are then obtained with the initial value $`𝒩(0)=\mathrm{diag}.(0,\overline{n}_c,\overline{n}_a)`$ as $$n_i(t)=𝒩_{ii}(t)=r_{ic}(t)\overline{n}_c+r_{ia}(t)\overline{n}_a,$$ (A25) where
$$r_{ij}(t)=\sum _{m,n}g_{ij}^{m*}g_{ij}^n\left[\left(1-\frac{\gamma _j}{\mathrm{\Lambda }_{mn}}\right)\mathrm{e}^{-\mathrm{\Lambda }_{mn}t}+\frac{\gamma _j}{\mathrm{\Lambda }_{mn}}\right]$$ (A26) with $$\mathrm{\Lambda }_{mn}=-i(\lambda _m^{*}-\lambda _n).$$ (A27) The eigenmodes of $`ℋ`$ are readily found as follows in the limit of $`\kappa \to 0`$, which is indeed sufficient for $`\kappa /\gamma <10^{-15}`$ or so: $`\lambda _1`$ $`=`$ $`\omega _b-{\displaystyle \frac{i}{2}}\gamma _b-\mathrm{\Delta }_{bc}+\left[(\mathrm{\Delta }_{bc})^2+\mathrm{\Omega }_N^2\right]^{1/2},`$ (A28) $`\lambda _2`$ $`=`$ $`\omega _c-{\displaystyle \frac{i}{2}}\gamma _c+\mathrm{\Delta }_{bc}-\left[(\mathrm{\Delta }_{bc})^2+\mathrm{\Omega }_N^2\right]^{1/2},`$ (A29) $`\lambda _3`$ $`=`$ $`\omega _a-{\displaystyle \frac{i}{2}}\gamma _a,`$ (A30) where $`\mathrm{\Delta }_{bc}\equiv \frac{1}{2}(\omega _b-\omega _c)-\frac{i}{4}(\gamma _b-\gamma _c)`$, and $`z^{1/2}\equiv |z|^{1/2}\mathrm{exp}[i\mathrm{arg}(z)/2]`$. The eigenmodes $`\lambda _1`$ and $`\lambda _2`$, which mainly consist of the atom and photon, are determined by the equation $$\left(\lambda -\omega _b+\frac{i}{2}\gamma _b\right)\left(\lambda -\omega _c+\frac{i}{2}\gamma _c\right)-\mathrm{\Omega }_N^2=0.$$ (A31) The eigenmode $`\lambda _3`$ is, on the other hand, almost identical to the axion mode. These eigenmodes of $`ℋ`$ satisfy the conditions $$\mathrm{Re}[\mathrm{\Lambda }_{mn}]=-(\mathrm{Im}[\lambda _m]+\mathrm{Im}[\lambda _n])>0,$$ (A32) so that the respective particle numbers $`n_i(t)`$ converge to certain asymptotic values for $`t\to \mathrm{\infty }`$. ## B Langevin picture We here describe the quantum evolution of the axion-photon-atom system in the Langevin picture, which is supplementary to the treatment in the Liouville picture and reproduces the same results for the quantum averages of the relevant quantities.
The Langevin equation for the damped oscillators is given by $$\frac{dq_i}{dt}=-iℋ_{ij}(t)q_j+F_i.$$ (B1) The external forces $`F_i`$ are introduced for the Liouvillian relaxations. These external operators obey the following correlation property: $`\langle F_i^{\dagger }(\tau )F_j(\tau ^{\prime })\rangle `$ $`=`$ $`\delta _{ij}\gamma _i\overline{n}_i\delta (\tau -\tau ^{\prime }),`$ (B2) $`\langle F_i^{\dagger }(\tau )q_j(0)\rangle `$ $`=`$ $`0(\tau ,\tau ^{\prime }>0).`$ (B3) The solution of the Langevin equation (B1) is readily found with the transformation matrix $`𝒰(t)`$ representing the time evolution due to $`ℋ(t)`$, as given in Eq. (76): $$q_i(t)=𝒰_{ij}(t)q_j(0)+\int _0^t\left[𝒰(t)𝒰^{-1}(\tau )\right]_{ij}F_j(\tau )𝑑\tau .$$ (B4) Then, the time evolution of the particle numbers is calculated with this solution (B4) of the Langevin equation and the correlations (B3) as $`𝒩_{ij}(t)=\langle q_i^{\dagger }(t)q_j(t)\rangle `$ (B5) $`=𝒰_{ik}^{*}(t)\left[𝒩(0)+{\displaystyle \int _0^t}𝒰^{-1*}(\tau )𝒟𝒰^{-1\mathrm{T}}(\tau )𝑑\tau \right]_{kl}𝒰_{lj}^\mathrm{T}(t),`$ (B6) where $`𝒟_{ij}=\gamma _i\overline{n}_i\delta _{ij}`$. This solution for $`𝒩(t)`$ coincides with that given in Eq. (80), which was obtained in the Liouville picture.
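As a cross-check, the closed-form construction of Eqs. (A5) – (A9) (equivalently (B6)) can be compared with a brute-force integration of the master equation. The sketch below is only an illustration (Python with NumPy; all parameter values are invented, and the evolution equation $`d𝒩/dt=iℋ^{*}𝒩-i𝒩ℋ+𝒟`$ is the one implied by the Langevin equation (B1) for a symmetric $`ℋ`$); it builds both solutions for a single $`3\times 3`$ example and checks that they agree:

```python
import numpy as np

# Invented 3x3 non-hermitian Hamiltonian of the form (A12):
# complex symmetric, with decay rates gamma_i on the diagonal.
H = np.array([[1.0 - 0.10j, 0.20, 0.0],
              [0.20, 1.2 - 0.15j, 0.05],
              [0.0, 0.05, 0.8 - 0.05j]])
D = np.diag([0.0, 0.4, 0.7]).astype(complex)    # D_ij = gamma_i nbar_i delta_ij
N0 = np.diag([0.0, 2.0, 5.0]).astype(complex)   # initial occupation numbers
t = 1.3

# Closed form in the eigenbasis of H, following Eqs. (A5)-(A9)/(B6).
lam, T = np.linalg.eig(H)
Tinv = np.linalg.inv(T)
Dbar = Tinv.conj() @ D @ Tinv.T
Lam = -1j * (lam.conj()[:, None] - lam[None, :])   # Lambda_mn of Eq. (A27)
Cbar = Dbar * (np.exp(Lam * t) - 1.0) / Lam        # integrated source term
U = T @ np.diag(np.exp(-1j * lam * t)) @ Tinv      # U(t) = exp(-i H t)
N_closed = U.conj() @ (N0 + T.conj() @ Cbar @ T.T) @ U.T

# Direct RK4 integration of dN/dt = i H* N - i N H + D.
def rhs(N):
    return 1j * H.conj() @ N - 1j * N @ H + D

N_num, steps = N0.copy(), 2000
dt = t / steps
for _ in range(steps):
    k1 = rhs(N_num)
    k2 = rhs(N_num + 0.5 * dt * k1)
    k3 = rhs(N_num + 0.5 * dt * k2)
    k4 = rhs(N_num + dt * k3)
    N_num = N_num + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```

The same construction applies for any number of modes; for a time-dependent coupling it would be applied piecewise over small intervals, as in Eq. (A11).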
# Pattern Formation and a Clustering Transition in Power-Law Sequential Adsorption ## Abstract A new model that describes adsorption and clustering of particles on a surface is introduced. A clustering transition is found which separates between a phase of weakly correlated particle distributions and a phase of strongly correlated distributions in which the particles form localized fractal clusters. The order parameter of the transition is identified and the fractal nature of both phases is examined. The model is relevant to a large class of clustering phenomena such as aggregation and growth on surfaces, population distribution in cities, plant and bacterial colonies as well as gravitational clustering. Many of the growth and pattern formation phenomena in nature occur via adsorption and clustering of particles on surfaces . The richness of these phenomena may be attributed to the great variety of structures and symmetries of the adsorbed particles and substrates. Nonequilibrium growth models often give rise to fractal structures, which are statistically self similar over a range of length-scales . In a large class of surface adsorption systems, the dominant dynamical process is the diffusion of the adsorbed particles which hop randomly on the surface until they nucleate into immobile clusters . The formation of fractal clusters, which are typical in these systems can be described by the diffusion limited aggregation (DLA) process . In DLA a cluster of particles grows due to a slow flux of particles which diffuse as random walkers until they attach to the cluster. The model describes a great variety of aggregation processes such as island growth in molecular beam epitaxy , electrodeposition, viscous fingering, dielectric breakdown, and various biological systems . In many other physical systems, once an adsorbed particle sticks to the surface it becomes immobile. These systems can be described by random sequential adsorption (RSA) processes . 
Within the RSA processes, one should distinguish between systems in which particles cannot overlap and systems in which they can adsorb on top of each other. Systems in which particles cannot overlap tend to reach a jamming limit, in which the sticking probability of a new particle vanishes . Models which allow multilayer growth describe a large class of physical systems, including deposition of colloids, liquid crystals , polymers and fiber particles . Recently, the case of power law distribution of particle sizes was studied both for uncorrelated adsorption and for non-overlapping particles . In the case of uncorrelated adsorption it was found that the boundary of the particle clusters is fractal . For non-overlapping particles, it was found that the area that remains exposed is fractal . Models that describe growth dynamics have been employed in recent years in a vast range of scientific fields as diverse as city organization and growth , city and highway traffic and growth of bacterial colonies . A common feature is the tendency of the basic objects to form clusters of high density (typically of fractal shape), surrounded by low density areas or voids. Other examples of clustering appear in the distribution of mass in the universe , in dissipative gases and granular flow as well as in step bunching on crystal surfaces during growth and due to electromigration . The phenomenon of cluster formation is therefore generic in a broad class of systems in spite of the fact that the pattern forming dynamical processes may vary substantially from system to system. This richness of clustering phenomena is not yet fully backed-up by appropriate models. In this Letter we introduce a new model: the Power-Law Sequential Adsorption (PLSA) model which describes a variety of surface adsorption and clustering processes. This model leads to a rich variety of structures, many of which are fractal, which mimic the experimental morphologies found in the examples cited above. 
In particular, it exhibits a unique clustering transition which separates between a weakly correlated phase in which the adsorbed particles are distributed homogeneously on the surface and a strongly correlated phase in which they form clusters. In the PLSA model circular particles of diameter $`d`$ are randomly deposited on a two dimensional (2D) surface one at a time. The deposition process starts from an initial state where there is one seed particle on the surface. The sticking probability $`0\le p\le 1`$ of a newly deposited particle is determined by the distance $`r`$ from its center to the center of the nearest particle which is already on the surface. This probability is given by $$p(r)=\{\begin{array}{cc}1\hfill & r\le d\hfill \\ (d/r)^\alpha \hfill & r>d,\hfill \end{array}$$ (1) where the exponent $`\alpha \ge 0`$ is a parameter of the model. The model thus exhibits a positive feedback clustering, like many of the clustering phenomena listed above. The random deposition process is repeated until the desired number of particles, $`M`$, stick to the surface. Since the sticking probability is given by a power law function, which is of a long range nature, this process should have been studied, in principle, in the infinite system limit. Numerical simulations, however, are done on a finite system of area $`L\times L`$, where $`L/d\gg 1`$. The coverage is given by $`\eta =A/L^2`$ where $`A`$ is the total area covered by the particles. Also, let $`\eta _0=\pi (d/2)^2M/L^2`$.
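A minimal Monte Carlo sketch of this deposition rule may be helpful (Python with NumPy; the function names, the particle number and the value of $`d`$ below are arbitrary illustration choices, not values from the paper):

```python
import numpy as np

def plsa(M, alpha, d=0.005, L=1.0, seed=1):
    """Power-law sequential adsorption: deposit trial particles uniformly
    and accept each with the sticking probability p(r) of Eq. (1), where
    r is the distance to the nearest particle already on the surface."""
    rng = np.random.default_rng(seed)
    pts = np.empty((M, 2))
    pts[0] = (0.5 * L, 0.5 * L)              # the initial seed particle
    n = 1
    while n < M:
        x = rng.uniform(0.0, L, 2)           # trial deposition site
        r = np.min(np.linalg.norm(pts[:n] - x, axis=1))
        p = 1.0 if r <= d else (d / r) ** alpha
        if rng.random() < p:                 # accept with probability p(r)
            pts[n] = x
            n += 1
    return pts

def mean_nn_distance(pts):
    """Mean distance from each particle to its nearest neighbor."""
    sep = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(sep, np.inf)
    return sep.min(axis=1).mean()

weak = plsa(50, 0.5)    # weakly correlated regime
strong = plsa(50, 4.0)  # strongly correlated regime
```

For small $`\alpha `$ the accepted particles spread over the whole box, while for large $`\alpha `$ they pile up around the seed, which shows up as a much smaller mean nearest-neighbor distance.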
The lower cutoff $`r_0`$ is given by the particle diameter $`r_0=d`$ while the upper cutoff $`r_1`$ is given by the average gap between adjacent particles, namely $`r_1=\rho ^{-1/2}d`$, where $`\rho =M/L^2`$ is the particle density. The limit of strongly correlated adsorption is obtained for $`\alpha \to \mathrm{\infty }`$. In this case only a single, connected cluster is generated on the surface. The perimeter of this cluster grows slowly when new particles are deposited on its edge, while it becomes more dense inside. We will now examine the morphological properties of the configurations of adsorbed particles for the full range of $`0<\alpha <\mathrm{\infty }`$ using fractal analysis. For this analysis we use the box-counting (BC) procedure in which one covers the plane by non-overlapping boxes of linear size $`r`$. The box-counting function $`N(r)`$ is obtained by counting the number of boxes which have a non-empty intersection with the (fractal) set. A fractal dimension $`D_{BC}`$ is declared to prevail at a certain range of length-scales if a relation of the type $`N_{BC}(r)\sim r^{-D_{BC}}`$ holds, or equivalently, if the slope of the log-log plot $$D_{BC}=-\mathrm{slope}\{\mathrm{log}r,\mathrm{log}[N_{BC}(r)]\}$$ (2) is found to be constant over that range. Two configurations of particles, randomly deposited and adsorbed according to Eq. (1) onto the unit square ($`L=1`$), are shown in Fig. 1 for $`\eta _0=0.01`$. For $`\alpha =1.5`$ the particle distribution exhibits local density fluctuations but on larger scales it is rather homogeneous and extends over the entire system \[Fig. 1(a)\]. For $`\alpha =2.5`$ we observe a strongly clustered structure \[Fig. 1(b)\]. This structure resembles the set of turning points of a Lévy flight random walker. In fact, a Lévy flight corresponds to the special case in which the sticking probability of the next deposited particle depends only on the position of the latest particle adsorbed on the surface.
Unlike Lévy flights, which typically describe dynamic behavior, our model describes clustering in spatial structures. It is also related to other models of spatial structures such as the continuous percolation model which is approached when the interaction is suppressed, at $`\alpha \to 0`$. Another related model, which describes the growth of a percolation cluster and exhibits power-law correlations between growth sites, is presented in Ref. . The box-counting functions for the configurations generated by the PLSA model are shown in Fig. 2. It is observed that for $`\alpha <2`$ the box-counting function resembles the shape obtained for the uncorrelated case. This indicates that the basic features of the model studied in Ref. are maintained not only for short range correlations but for an entire class of long range correlations. The box-counting function for $`\alpha >2`$ exhibits a nearly linear behavior for the entire range from the particle size to the cluster size. To obtain the fractal dimensions of the sets from the box-counting functions one should identify the relevant range of length-scales over which the linear fit should be done. For the weakly correlated distributions the relevant range of length scales spans between $`r_0=d`$ and $`r_1=\rho ^{-1/2}d`$. For the strongly correlated distributions where clusters are formed, the relevant range is limited from above by the linear size of the entire cluster. The quality of the linear fit is measured by the coefficient of determination $`R^2`$. In both cases, given a desired value of $`R^2`$ one can further narrow the range within the cutoffs described above to find the broadest range $`(r_0,r_1)`$ within which the linear regression maintains the given value of $`R^2`$. The fractal dimension $`D`$ as a function of $`\alpha `$ is shown in Fig. 3 for $`\eta _0=0.1`$, $`0.01`$ and $`0.001`$.
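The box-counting procedure of Eq. (2) is straightforward to implement. The sketch below (Python with NumPy; the dyadic box sizes and the two test sets are arbitrary choices) recovers the known dimensions $`D=2`$ for an area-filling set of points and $`D=1`$ for points scattered along a smooth curve:

```python
import numpy as np

def box_counting_dimension(pts, kmax=5):
    """Estimate D_BC from the slope of log N_BC(r) versus log r, Eq. (2),
    using dyadic box sizes r = 2**-k on the unit square."""
    log_r, log_N = [], []
    for k in range(1, kmax + 1):
        r = 0.5 ** k
        # Occupied boxes = distinct integer box indices hit by the points.
        occupied = {tuple(ix) for ix in np.floor(pts / r).astype(int)}
        log_r.append(np.log(r))
        log_N.append(np.log(len(occupied)))
    return -np.polyfit(log_r, log_N, 1)[0]

rng = np.random.default_rng(0)
area_pts = rng.uniform(0.0, 1.0, (20000, 2))   # statistically space-filling set
t = rng.uniform(0.0, 1.0, 5000)
line_pts = np.column_stack([t, t])             # points along a smooth curve
```

Applied to PLSA configurations, the fit would be restricted to the physically relevant range of box sizes discussed above rather than to all dyadic scales.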
Two domains are identified: a plateau of low dimension for the weakly correlated case and a plateau of high dimension for the strongly correlated case. Consider a seed particle located at the origin of an infinite plane. Particles are randomly deposited one at a time according to the PLSA model until one particle sticks to the surface. Consider the probability that the distance $`r`$ between the first particle which sticks and the seed particle at the origin will be larger than some value $`r_f`$ where $`r_f>d`$. This probability is given by: $$P(r>r_f)=\frac{\int _{r_f}^{\mathrm{\infty }}2\pi r(\frac{d}{r})^\alpha 𝑑r}{\int _0^d2\pi r𝑑r+\int _d^{\mathrm{\infty }}2\pi r(\frac{d}{r})^\alpha 𝑑r}.$$ (3) One readily verifies that for $`\alpha <2`$ the probability $`P(r>r_f)=1`$ for any finite $`r_f`$. For $`\alpha >2`$, on the other hand, this probability is given by $$P(r>r_f)=\frac{2}{\alpha }\left(\frac{d}{r_f}\right)^{\alpha -2}.$$ (4) Therefore, in the infinite system limit of the weakly correlated phase ($`\alpha <\alpha _c`$) the probability that the next particle will stick within any finite distance from an existing cluster is zero. In the strongly correlated phase ($`\alpha >\alpha _c`$), the probability that the next particle will stick within a finite distance $`r_f`$ from the cluster can be made arbitrarily close to one, by an adjustment of $`r_f`$ according to Eq. (4). In general, the value of $`\alpha _c`$ for which the clustering transition takes place is equal to the space dimension. The order parameter of the clustering transition is $`V=(\eta _0-\eta )/\eta _0`$, namely the fraction of the total area of the adsorbed particles which is lost due to overlap. Consider a finite number $`M`$ of particles of diameter $`d=1`$ adsorbed on the surface in the infinite system limit $`(L\to \mathrm{\infty })`$. For $`\alpha <2`$, in this low coverage limit overlaps are negligible and $`V=0`$. For $`\alpha >2`$ clusters become more dense and overlaps more dominant as $`\alpha `$ increases.
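Equation (4) can be checked against a direct numerical evaluation of Eq. (3). In the sketch below (Python with NumPy; the values of $`d`$, $`\alpha `$ and $`r_f`$ are arbitrary), the numerator is integrated on a finite grid up to a truncation radius $`R`$ and the remaining tail is added analytically:

```python
import numpy as np

d, alpha, r_f = 1.0, 3.0, 5.0

def integrate(f, a, b, n=400000):
    """Simple trapezoidal rule on a uniform grid."""
    x = np.linspace(a, b, n)
    y = f(x)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

R = 2000.0  # truncation radius; the tail beyond R is added in closed form
tail = 2 * np.pi * d**alpha * R**(2 - alpha) / (alpha - 2)
numerator = integrate(lambda r: 2 * np.pi * r * (d / r)**alpha, r_f, R) + tail
# Denominator of Eq. (3): pi d^2 from the core plus the power-law part.
denominator = np.pi * d**2 + 2 * np.pi * d**2 / (alpha - 2)
P_numeric = numerator / denominator
P_closed = (2.0 / alpha) * (d / r_f)**(alpha - 2)   # Eq. (4)
```

For $`\alpha =3`$ and $`r_f=5d`$ both evaluations give $`P=2/15`$, and the numeric value converges to the closed form as the grid is refined.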
Our numerical studies are done on a finite system of size $`L=1`$ for a range of coverages. The order parameter $`V`$ as a function of $`\alpha `$, for $`\eta _0=0.01`$, is shown in Fig. 4. To examine the critical behavior in the infinite system limit we performed analytical calculations in one dimension (1D). In 1D the configuration is fully specified by the ordered list of $`M-1`$ gaps between the $`M`$ particles. For the 1D case we have obtained the critical exponent $`\beta `$ for the order parameter $`V\sim (\alpha -\alpha _c)^\beta `$ in the $`L\to \mathrm{\infty }`$ limit by constructing the probability distribution $`P_i`$, $`i=0,\mathrm{\dots },M`$ that the next particle that sticks will stick within the gap $`g_i`$ (where $`g_0`$ and $`g_M`$ are the two semi-infinite gaps on both sides). The overlap was then calculated as a weighted average over all gaps. The result we obtain is that the exponent $`\beta =1`$. We also found that the fractal dimension exhibits critical behavior of the form $`D\sim (\alpha -\alpha _c)^\gamma `$, where $`\alpha _c=1`$, with the exponent $`\gamma =1`$. For the weakly correlated phase at $`\alpha <1`$ the fractal dimension in the $`L\to \mathrm{\infty }`$ limit is $`D=0`$ while for $`\alpha >2`$ the dimension is $`D=2`$. In summary, we introduced a new model for random sequential adsorption, characterized by a power law distribution of sticking probabilities. This model exhibits a continuous phase transition between weakly correlated adsorption, in which the particle distribution is homogeneous on large scales and extends over the entire system, and strongly correlated adsorption in which a highly clustered structure is generated. We thus identified a broad class of distributions which maintain the basic properties of the weakly correlated random structures studied in Ref. and found the borderline between this class and the class of strongly correlated structures which exhibit clustering phenomena.
The model should be useful in the study of a great variety of clustering problems.
# Supersymmetric Three-cycles and (Super)symmetry Breaking hep-th/9908135 SU-ITP-99/41 Shamit Kachru<sup>1</sup> and John McGreevy<sup>2</sup> <sup>1</sup>Department of Physics and SLAC Stanford University Stanford, CA 94305 Email: skachru@leland.stanford.edu <sup>2</sup> Department of Physics University of California at Berkeley Berkeley, CA 94720 Email: mcgreevy@socrates.berkeley.edu We describe physical phenomena associated with a class of transitions that occur in the study of supersymmetric three-cycles in Calabi-Yau threefolds. The transitions in question occur at real codimension one in the complex structure moduli space of the Calabi-Yau manifold. In type IIB string theory, these transitions can be used to describe the evolution of a BPS state as one moves through a locus of marginal stability: at the transition point the BPS particle becomes degenerate with a supersymmetric two particle state, and after the transition the lowest energy state carrying the same charges is a non-supersymmetric two particle state. In the IIA theory, wrapping the cycles in question with D6-branes leads to a simple realization of the Fayet model: for some values of the CY modulus gauge symmetry is spontaneously broken, while for other values supersymmetry is spontaneously broken. August 1999 1. Introduction In the study of string compactifications on manifolds of reduced holonomy, odd-dimensional supersymmetric cycles play an important part (see for instance \[1,2,3,4,5,6,7,8,9,10\] and references therein). In type IIB string theory, a supersymmetric three-cycle can be wrapped by a D3-brane to yield a BPS state whose properties are amenable to exact study; in the IIA theory or in M theory, Euclidean membranes can wrap the three-cycle and contribute to “holomorphic” terms in the low energy effective action of the spacetime theory (that is, terms that are integrated over only a subset of the fermionic superspace coordinates).
Of particular interest, partially due to their role in mirror symmetry \[5,7\], have been special Lagrangian submanifolds in Calabi-Yau threefolds. In an interesting recent paper by Joyce, various transitions which these cycles undergo as one moves in the complex structure or Kähler moduli space of the underlying CY manifold were described. In this note, we study some of the physics associated with the simplest such transitions, discussed in §6 and §7 of Joyce’s paper. These transitions are reviewed in §2. The physical picture which one obtains by wrapping D3-branes on the relevant cycles in IIB string theory is described in §3, while the physics of wrapped D6-branes in type IIA string theory occupies §4. Our discussion is purely local (in both the moduli space and the Calabi-Yau manifold), as was Joyce’s analysis; we close with some speculations about more global aspects in §5. At all points in this paper, we will be concerned with $`\mathrm{𝚛𝚒𝚐𝚒𝚍}`$ special Lagrangian three cycles. Since the moduli space of a special Lagrangian three cycle $`N`$ (including Wilson lines of a wrapped D-brane) is a complex Kähler manifold of dimension $`b_1(N)`$ \[11,6\], this means we have to focus on so-called “rational homology three spheres” with $`H_1(N,\text{ZZ})`$ at most a discrete group. We will further assume that $`H_1(N,\text{ZZ})`$ is trivial. 2. Splitting Supersymmetric Cycles 2.1. Definitions Let $`M`$ be a Calabi-Yau threefold equipped with a choice of complex structure and Kähler structure. Let $`\omega `$ be the Kähler form on $`M`$, and let $`\mathrm{\Omega }`$ be the holomorphic three-form, normalized to satisfy $$\frac{\omega ^3}{3!}=\frac{i}{8}\mathrm{\Omega }\wedge \overline{\mathrm{\Omega }}$$ This also allows us to define two real, closed three forms on $`M`$, Re($`\mathrm{\Omega }`$) and Im($`\mathrm{\Omega }`$). Let $`N`$ be an oriented real three-dimensional submanifold of $`M`$.
We call $`N`$ special Lagrangian with phase $`e^{i\theta }`$ iff a) $`\omega |_N=0`$ b) $`(sin(\theta )\mathrm{Re}(\mathrm{\Omega })-cos(\theta )\mathrm{Im}(\mathrm{\Omega }))|_N=0`$ (a) and (b) together imply that $$\int _N(cos(\theta )\mathrm{Re}(\mathrm{\Omega })+sin(\theta )\mathrm{Im}(\mathrm{\Omega }))=vol(N)$$ where $`vol(N)`$ is the volume of $`N`$. Physically, the relevance of $`\theta `$ for us will be the following. Let $`N`$ and $`N^{\prime }`$ be three-cycles which are special Lagrangian with different phases $`\theta `$ and $`\theta ^{\prime }`$. Compactifying, say, IIB string theory on $`M`$, we can obtain BPS states which preserve half of the $`𝒩=2`$ spacetime supersymmetry by wrapping three-branes on $`N`$ or $`N^{\prime }`$. The surviving supersymmetries in the presence of a D3-brane on $`N`$, for example, are generated by $$ϵ_\delta =e^{i\delta }ϵ_++e^{-i\delta }ϵ_{-},$$ with $`\delta =\frac{\theta }{2}-\frac{\pi }{4}.`$ For generic $`\theta \ne \theta ^{\prime }`$, however, $`N`$ and $`N^{\prime }`$ preserve different $`𝒩=1`$ supersymmetries and the state with both wrapped three-branes would break all of the supersymmetry. 2.2. Transitions The following supersymmetric three-cycle transitions are conjectured by Joyce to occur in compact Calabi-Yau threefolds $`M`$. Choose two homology classes $`\chi ^\pm \in H_3(M,\text{ZZ})`$ which are linearly independent in $`H_3(M,\mathrm{IR})`$. For any $`\mathrm{\Phi }\in H^3(M,\text{ }\mathrm{C})`$, define $$\mathrm{\Phi }\chi ^\pm =\int _{\chi ^\pm }\mathrm{\Phi }$$ Thus $`\mathrm{\Phi }\chi ^\pm `$ are complex numbers. Following Joyce, define a subset $`W(\chi ^+,\chi ^{-})`$ in $`H^3(M,\text{ }\mathrm{C})`$ by $$W(\chi ^+,\chi ^{-})=\{\mathrm{\Phi }\in H^3(M,\text{ }\mathrm{C}):(\mathrm{\Phi }\chi ^+)(\overline{\mathrm{\Phi }}\chi ^{-})\in (0,\mathrm{\infty })\}$$ So $`W(\chi ^+,\chi ^{-})`$ is a real hypersurface in $`H^3(M,\text{ }\mathrm{C})`$. Fix some small, positive angle $`ϵ`$.
For $`\mathrm{\Phi }\in H^3(M,\text{ }\mathrm{C})`$ write $$(\mathrm{\Phi }\chi ^+)(\overline{\mathrm{\Phi }}\chi ^{-})=Re^{i\theta }$$ where $`R\ge 0`$ and $`\theta \in (-\pi ,\pi ]`$. Then we say $`\mathrm{\Phi }`$ lies in $`W(\chi ^+,\chi ^{-})`$ if $`R>0`$ and $`\theta =0`$. We say that $`\mathrm{\Phi }`$ lies on the positive side of $`W(\chi ^+,\chi ^{-})`$ if $`R>0`$ and $`0<\theta <ϵ`$. We say that $`\mathrm{\Phi }`$ lies on the negative side of $`W(\chi ^+,\chi ^{-})`$ if $`R>0`$ and $`-ϵ<\theta <0`$. Then, Joyce argues that the following kinds of transitions should occur. We are given a Calabi-Yau $`M`$ with compact, nonsingular three cycles $`N^\pm `$ in homology classes $`[N^\pm ]=\chi ^\pm `$. $`N^\pm `$ are taken to be special Lagrangian with phases $`\theta ^\pm `$. We assume $`N^\pm `$ intersect at one point $`p\in M`$, with $`N^+\cap N^{-}`$ a positive intersection. As we deform the complex structure of $`M`$, the holomorphic three form moves around in $`H^3(M,\text{ }\mathrm{C})`$ and therefore the phases $`\theta ^\pm `$ of $`N^\pm `$ change. When $`[\mathrm{\Omega }]`$ is on the positive side of $`W(\chi ^+,\chi ^{-})`$ there exists a special Lagrangian threefold $`N`$ which is diffeomorphic to the connected sum $`N^+\mathrm{\#}N^{-}`$, with $`[N]=[N^+]+[N^{-}]`$ in $`H_3(M,\text{ZZ})`$. $`N`$ can be taken to be special Lagrangian with phase $`\theta =0`$ (this fixes the phase of $`\mathrm{\Omega }`$ for us). As we deform $`[\mathrm{\Omega }]`$ through $`W(\chi ^+,\chi ^{-})`$, $`N`$ converges to the singular union $`N^+\cup N^{-}`$. When $`[\mathrm{\Omega }]`$ is in $`W(\chi ^+,\chi ^{-})`$, the phases $`\theta ^\pm `$ align with $`\theta =0`$. On the negative side of $`W(\chi ^+,\chi ^{-})`$, $`N`$ ceases to exist as a special Lagrangian submanifold of $`M`$ (while $`\theta ^\pm `$ again become distinct). For completeness and to establish some notation we will find useful, we briefly mention some motivation for the existence of these transitions.
In Joyce’s model of the transition, there exists a manifold, $`D`$, with boundary $`S\subset N`$, which is special Lagrangian with phase $`i`$. If we call its volume $`A`$, this means that $`iA=\int _D\mathrm{\Omega }`$. $`S`$ defines a 2-chain in $`N`$; since we are assuming that $`H_1(N,\text{ZZ})`$ is trivial, by Poincaré duality, $`S`$ must be trivial in homology. Because $`S`$ is real codimension one in $`N`$, it actually splits $`N`$ into two parts: $$N=C^+\cup C^{-},\partial C^+=S,\partial C^{-}=-S.$$ So $`C^\pm \mp D`$ define 3-chains and in fact it turns out that $$[C^\pm \mp D]=\chi ^\pm =[N^\pm ]$$ We see that we can determine the volume of $`D`$ just from knowledge of $`\chi ^\pm `$: $$A=\frac{1}{i}\int _D\mathrm{\Omega }=\int _D\mathrm{Im}(\mathrm{\Omega })=-\int _{\chi ^+}\mathrm{Im}(\mathrm{\Omega })$$ using $`\mathrm{Re}(\mathrm{\Omega })|_D=0`$ and $`\mathrm{Im}(\mathrm{\Omega })|_N=0`$. But when $`[\mathrm{\Omega }]`$ goes through $`W(\chi ^+,\chi ^{-})`$, we see from (2.1) and from the definition of $`W(\chi ^+,\chi ^{-})`$ that $`A`$ becomes negative; at least in the local model in $`\text{ }\mathrm{C}^3`$, this means that $`N`$ does not exist. 3. Formerly BPS States in IIB String Theory Now, consider Type IIB string theory compactified on $`M`$. When the complex structure is such that $`[\mathrm{\Omega }]`$ is on the positive side of $`W(\chi ^+,\chi ^{-})`$, one can obtain a BPS hypermultiplet by wrapping a D3-brane on $`N`$. One can also obtain BPS hypermultiplets by wrapping D3-branes on $`N^+`$ or $`N^{-}`$. Because $$[N]=[N^+]+[N^{-}]$$ one can make a state carrying the same charges as the BPS brane wrapping $`N`$ by considering the two particle state with D3-branes wrapping both $`N^+`$ and $`N^{-}`$. How does the energy of the two states compare? Recall that the disc $`D`$ with boundary on $`N`$ splits $`N`$ into two components, $`C^\pm `$.
Define $$B^\pm =\int _{C^\pm }\mathrm{\Omega }$$ Then if we let $`V`$ denote the volume of $`N`$ and $`V^\pm `$ denote the volumes of $`N^\pm `$, we recall: $$V=B^++B^{-}$$ $$V^\pm e^{i\theta ^\pm }=B^\pm \pm iA$$ where $`A`$ is the volume of $`D`$. Since on this side of the transition $`A`$ is positive, $`\theta ^+`$ is small and positive while $`\theta ^{-}`$ is small and negative. In fact, reality of the volumes $`V^\pm `$ lets us solve for $`\theta ^\pm `$ in terms of $`B^\pm `$ yielding $$\theta ^\pm =\pm \frac{A}{B^\pm }$$ The energy of the single particle state obtained by wrapping a D3-brane on $`N`$ is $`T_{D3}\times V`$ where $`T_{D3}`$ is the D3 brane tension. The energy of the (nonsupersymmetric) state obtained by wrapping D3-branes on both $`N^\pm `$ can be approximated by $`T_{D3}\times (V^++V^{-})`$. Expanding (3.1) for small $`\theta ^\pm `$, we find: $$V^++V^{-}=V+A(\theta ^+-\theta ^{-})=V+A^2(\frac{1}{B^+}+\frac{1}{B^{-}})$$ So since $`A>0`$ and $`\pm \theta ^\pm >0`$ on this side of the transition, we see that the single wrapped brane on $`N`$ is energetically preferred. Therefore, when the complex structure is on the positive side of $`W(\chi ^+,\chi ^{-})`$, the BPS state indeed has lower energy than the nonsupersymmetric two particle state carrying the same charges, by roughly $`T_{D3}\times A(\theta ^+-\theta ^{-})`$. Now as one moves in the complex structure moduli space of $`M`$ through a point where $`[\mathrm{\Omega }]`$ lies in $`W(\chi ^+,\chi ^{-})`$, $`A`$ and $`\theta ^\pm `$ vanish. Therefore, (3.1) shows that the mass of the two particle state becomes equal to that of the single particle state: we are passing through a locus of marginal stability. On this locus, the two particle state consisting of branes wrapping $`N^\pm `$ is supersymmetric, since $`N^\pm `$ are special Lagrangian with the same phase. Finally, move through to the region where $`[\mathrm{\Omega }]`$ lies on the negative side of $`W(\chi ^+,\chi ^{-})`$.
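The energetics here is elementary to verify numerically. The sketch below (Python with NumPy; the values of $`B^\pm `$ and $`A`$ are hypothetical) uses the exact relation $`V^\pm =|B^\pm \pm iA|`$ implied by (3.1) and checks that the energy excess of the two particle state is positive and vanishes quadratically as $`A\to 0`$:

```python
import numpy as np

B_plus, B_minus = 2.0, 3.0   # hypothetical (real, positive) chain volumes

def excess(A):
    """(V+ + V-) - V, with V± = |B± ± iA| and V = B+ + B-."""
    return np.hypot(B_plus, A) + np.hypot(B_minus, A) - (B_plus + B_minus)
```

For any nonzero $`A`$ the excess is strictly positive, so the single wrapped brane is preferred, and the excess scales as $`A^2`$, vanishing exactly at the marginal stability locus $`A=0`$.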
Here, $`\pm \theta ^\pm <0`$. Since $`N`$ ceases to exist as a supersymmetric cycle, the two particle state with D3-branes wrapping $`N^\pm `$ is the lowest energy state carrying its charges.<sup>1</sup> In a global model, even if there do exist other supersymmetric cycles in the same class, there will be some region in moduli space close to the transition where the energy cost for moving to them in the Calabi-Yau will be larger than the energy gained. Note that the two particle state is $`\mathrm{𝚗𝚘𝚗𝚜𝚞𝚙𝚎𝚛𝚜𝚢𝚖𝚖𝚎𝚝𝚛𝚒𝚌}`$, since $`N^\pm `$ are special Lagrangian with different phases. Here, we are making the conservative assumption that there is no stable, nonsupersymmetric bound state of these two particles – such a bound state would be reflected in the existence of a (nonsupersymmetric) cycle in the homology class $`[N^+]+[N^{-}]`$ with lower volume than $`V^++V^{-}`$. This is tantamount to assuming that the force between the two particles is repulsive for slightly negative $`A`$. This is reasonable since for $`A`$ positive there is an attractive force and a (supersymmetric) bound state, and as $`A`$ decreases to zero the magnitude of the force and the binding energy decrease until they vanish when $`A=0`$. This phenomenon is an interesting variant on previously studied examples, in which a stable nonsupersymmetric state passes through a locus of marginal stability and becomes unstable to decay to a pair of BPS particles (which together break all of the supersymmetries). In the present example, a BPS particle becomes, as we move in complex structure moduli space, unstable to decay to a pair of BPS particles. Moving slightly further in moduli space, we see that the two BPS particles together break all of the supersymmetries. 4. D6-Branes and the Fayet Model Now, consider type IIA string theory on the Calabi-Yau $`M`$ in which the phenomena of §2 are taking place.
Instead of studying particles in the resulting $`𝒩=2`$ supersymmetric theory, we wrap the three-cycle $`N`$ with a space-filling D6-brane (i.e., 3+1 of the dimensions of the D6-brane fill the non-compact space). This yields an $`𝒩=1`$ supersymmetric theory in the non-compact dimensions. For simplicity (since all our considerations are local), we can assume $`M`$ is non-compact so we do not have to worry about cancelling the D6 Ramond-Ramond charge. Alternatively, we could imagine the model discussed below arising as part of a larger system of branes and/or orientifolds on $`M`$. First, let’s discuss the physics when $`[\mathrm{\Omega }]`$ is on the positive side of $`W(\chi ^+,\chi ^{-})`$. Since $`b_1(N)=0`$, $`N`$ has no moduli in $`M`$. Therefore, there are no moduli in the effective 3+1 dimensional field theory on the wrapped D6-brane. The $`U(1)`$ gauge field on the brane survives reduction on $`N`$, so the 3+1 dimensional low energy effective theory has a $`U(1)`$ gauge symmetry. Finally, because $`N`$ is a supersymmetric cycle with $`H_1(N,\text{ZZ})`$ trivial, there is a unique supersymmetric ground state in the gauge theory (as opposed to a discrete set of ground states parametrized by Wilson lines around $`N`$). What about the physics when $`[\mathrm{\Omega }]`$ is on the negative side of $`W(\chi ^+,\chi ^{-})`$? The D6 which was wrapping $`N`$ has now split into two D6-branes, wrapping $`N^+`$ and $`N^{-}`$. The $`U(1)`$ gauge field on each survives, yielding a $`U(1)^2`$ gauge theory. Because $`N^+`$ and $`N^{-}`$ are supersymmetric cycles with different phases, the theory has no supersymmetric ground state. We do expect a stable nonsupersymmetric ground state, as long as $`[\mathrm{\Omega }]`$ is close enough to $`W(\chi ^+,\chi ^{-})`$. What is the physics associated with the phase transition when $`[\mathrm{\Omega }]`$ lies in $`W(\chi ^+,\chi ^{-})`$?
At this point, the two D6-branes wrapping $`N^+`$ and $`N^{-}`$ preserve the same supersymmetry, and intersect at a point in $`M`$. Because the light states are localized at the intersection, the global geometry of the intersecting cycles doesn’t matter and we can model the physics by a pair of flat special Lagrangian three-planes intersecting at a point. This kind of system was discussed in , and using their results it is easy to see that the resulting light strings give rise to precisely one chiral multiplet with charges $`(+,-)`$ under the $`U(1)^2`$ gauge group of the two wrapped D6-branes. Therefore, one linear combination of the $`U(1)`$s (the normal “center of mass” $`U(1)`$) remains free of charged matter, while the other (the “relative” $`U(1)`$) gains a single charged chiral multiplet $`\mathrm{\Phi }`$. The relative $`U(1)`$ is therefore anomalous; demonstrates that the anomaly is cancelled by inflow from the bulk. Ignoring the center of mass $`U(1)`$ (which we identify with the surviving $`U(1)`$ on the positive side of $`W`$), the physics of this model is precisely reproduced by the Fayet model, the simplest model of spontaneous (super)symmetry breaking . This is a $`U(1)`$ gauge theory with a single charged chiral multiplet $`\mathrm{\Phi }`$ (containing a complex scalar $`\varphi `$). There is no superpotential, but including a Fayet-Iliopoulos term $`rD`$ in the spacetime Lagrangian, the potential energy is $$V(\varphi )=\frac{1}{g^2}(|\varphi |^2-r)^2$$ where $`g`$ is the gauge coupling. The phase structure of the model is quite simple: For $`r>0`$, there is a unique supersymmetric minimum, and the $`U(1)`$ gauge symmetry is Higgsed. For $`r<0`$, there is a unique nonsupersymmetric minimum at $`\varphi =0`$, so the $`U(1)`$ symmetry is unbroken. Precisely when $`r=0`$, there is a $`U(1)`$ gauge theory with a massless charged chiral field and a supersymmetric ground state.
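The phase structure just described is simple enough to check numerically. Below is a minimal sketch (the coupling `g = 1` and the grid scan are our illustrative choices, not part of the string construction) verifying that for `r > 0` the minimum sits at `|phi|^2 = r` with vanishing energy, while for `r < 0` it sits at `phi = 0` with positive energy, signalling broken supersymmetry:

```python
# Minimal check of the Fayet model phase structure:
# V(phi) = (1/g^2) * (|phi|^2 - r)^2, with illustrative g = 1.

def fayet_potential(abs_phi, r, g=1.0):
    """Potential energy as a function of |phi| for FI parameter r."""
    return (abs_phi**2 - r)**2 / g**2

def ground_state(r, g=1.0):
    """Locate the minimum of V over a grid of |phi| values."""
    grid = [i * 1e-3 for i in range(4001)]  # |phi| in [0, 4]
    best = min(grid, key=lambda a: fayet_potential(a, r, g))
    return best, fayet_potential(best, r, g)

# r > 0: supersymmetric Higgs vacuum at |phi|^2 = r, with V = 0
phi_pos, v_pos = ground_state(r=1.0)

# r < 0: nonsupersymmetric vacuum at phi = 0, with V = r^2/g^2 > 0
phi_neg, v_neg = ground_state(r=-1.0)
```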
Thus, we are led to identify the regions of positive, vanishing and negative $`r`$ with the positive side of $`W(\chi ^+,\chi ^{-})`$, the locus where $`[\mathrm{\Omega }]`$ is in $`W`$, and the negative side of $`W`$. The single real modulus which varies in the transition experienced by the supersymmetric three-cycle $`N`$ can be identified with the Fayet-Iliopoulos parameter $`r`$. This identification is consistent with the conjecture in that in worldvolume gauge theories of A-type D-branes on Calabi-Yau spaces, complex structure moduli only enter as D-terms.<sup>2</sup> Note that the D6-branes in question here are considered A-type branes in the conventions of since the three non-compact spatial dimensions are ignored.

5. Discussion

Exploration of the phenomena involving supersymmetric cycles in a Calabi-Yau manifold $`M`$ under variation of the moduli of $`M`$ has just started. It should be clear that as such phenomena are understood, they will have interesting implications for the physics of D-branes on Calabi-Yau spaces (for a nice discussion of various aspects of this, see ). One of the most enticing possibilities is that as more such phenomena are uncovered, we will find new ways to “geometrize” the study of supersymmetry breaking models in string theory. This would provide a complementary approach to attempts to write down interesting nonsupersymmetric string models informed by AdS/CFT considerations or insights about tachyon condensation and nonsupersymmetric branes . As a small step in this direction, it would be nice to find ways of going over small potential hills between different supersymmetric vacua of string theory. The transitions studied here, when put in the more global context of a manifold $`M`$ with (possibly) several supersymmetric cycles in each homology class, might provide a way of doing this.
For instance, in §4, as one moves $`[\mathrm{\Omega }]`$ into the negative side of $`W(\chi ^+,\chi ^{-})`$, it is clear that one is increasing the scale of supersymmetry breaking (at least in the region close to the transition). Suppose that after one moves through the negative side of $`W`$ in complex structure moduli space, eventually $`N^+`$ and $`N^{-}`$ approach each other and intersect again and the phenomenon of §2 occurs in reverse, with a new supersymmetric cycle $`N^{\prime }`$ in the same homology class as $`[N^+]+[N^{-}]`$ popping into existence. In such a case, one would have a nonsupersymmetric ground state for some range of parameters on the negative side of $`W`$, and then eventually reach a supersymmetric ground state again (with the D6-brane wrapping $`N^{\prime }`$). Similarly, on the negative side of $`W`$ there could exist “elsewhere” in $`M`$ a supersymmetric cycle $`\stackrel{~}{N}`$ in the same class as $`[N^+]+[N^{-}]`$. Although the cost in energy to move from wrapping $`N`$ to wrapping $`\stackrel{~}{N}`$ is nonzero and hence on the negative side of $`W`$ the phenomena of §3, §4 occur, eventually it may become advantageous for the D6-branes to shift over to wrapping $`\stackrel{~}{N}`$. This would again be a situation where supersymmetry is broken, and then restored, as one dials the complex structure modulus of the Calabi-Yau space.

Acknowledgements

We are grateful to J. Harvey, G. Moore and E. Silverstein for discussions. The research of S.K. is supported by an A.P. Sloan Foundation Fellowship and a DOE OJI Award. The research of J.M. is supported by the Department of Defense NDSEG Fellowship program.

References

K. Becker, M. Becker and A. Strominger, “Five-Branes, Membranes and Nonperturbative String Theory,” Nucl. Phys. B456 (1995) 130, hep-th/9507158.
J. Harvey and G. Moore, “Algebras, BPS States, and Strings,” Nucl. Phys. B463 (1996) 315, hep-th/9510182; J. Harvey and G. Moore, “On the Algebras of BPS States,” Comm. Math. Phys.
197 (1998) 489, hep-th/9609017; J. Harvey and G. Moore, “Superpotentials and Membrane Instantons,” hep-th/9907206.
M. Bershadsky, V. Sadov and C. Vafa, “D-Strings on D-Manifolds,” Nucl. Phys. B463 (1996) 398, hep-th/9510225; M. Bershadsky, V. Sadov and C. Vafa, “D-Branes and Topological Field Theories,” Nucl. Phys. B463 (1996) 420, hep-th/9511222.
H. Ooguri, Y. Oz and Z. Yin, “D-Branes on Calabi-Yau Spaces and their Mirrors,” Nucl. Phys. B477 (1996) 407, hep-th/9606112; K. Becker, M. Becker, D. Morrison, H. Ooguri, Y. Oz and Z. Yin, “Supersymmetric Cycles in Exceptional Holonomy Manifolds and Calabi-Yau 4 Folds,” Nucl. Phys. B480 (1996) 225, hep-th/9608116.
A. Strominger, S.T. Yau and E. Zaslow, “Mirror Symmetry is T-Duality,” Nucl. Phys. B479 (1996) 243, hep-th/9606040.
N. Hitchin, “The moduli space of special Lagrangian submanifolds,” math.dg/9711002; N. Hitchin, “Lectures on Special Lagrangian Submanifolds,” math.dg/9907034.
C. Vafa, “Extending Mirror Conjecture to Calabi-Yau with Bundles,” hep-th/9804131.
A. Karch, D. Lüst and A. Miemiec, “N=1 Supersymmetric Gauge Theories and Supersymmetric Three Cycles,” hep-th/9810254.
I. Brunner, M. Douglas, A. Lawrence and C. Romelsberger, “D-branes on the Quintic,” hep-th/9906200.
D. Joyce, “On counting special Lagrangian homology 3-spheres,” hep-th/9907013.
R.C. McLean, Deformations and moduli of calibrated submanifolds. PhD thesis, Duke University, 1990.
A. Sen, “BPS D-Branes on Non-Supersymmetric Cycles,” hep-th/9812031.
M. Berkooz, M. Douglas, and R. Leigh, “Branes Intersecting at Angles,” Nucl. Phys. B480 (1996) 265, hep-th/9606139.
P. Fayet, “Higgs Model and Supersymmetry,” Nuovo Cim. 31A (1976) 626.
S. Kachru and E. Silverstein, “4d Conformal Field Theories and Strings on Orbifolds,” Phys. Rev. Lett. 80 (1998) 4855, hep-th/9802183; S. Kachru, J. Kumar and E. Silverstein, “Vacuum Energy Cancellation in a Nonsupersymmetric String,” Phys.
Rev. D59 (1999) 106004, hep-th/9807076.
For a review of this program with extensive references see: A. Sen, “Non-BPS States and Branes in String Theory,” hep-th/9904207.
## ABSTRACT

We will present some results on the broad–band observations of BeppoSAX of the bright Seyfert galaxies NGC 4151 and NGC 5548.

## 1 INTRODUCTION

In the last ten years the increased sensitivity, resolution and bandpass of X-ray missions have drastically changed our view of the X-ray spectrum of emission line AGN. We have moved from an almost featureless power law to a complex shape, where several broad and narrow features, produced in different sites around the central engine, are imprinted onto the power law. These components span a wide range of energies, sometimes overlapping with each other. An unambiguous determination of each component is then difficult, unless simultaneous broad–band spectral measurements are secured. BeppoSAX, with its 0.1-200 keV range, appears particularly suited to undertake a broad-band study of AGN in X-rays. In this contribution we will present some results of Core Program (and Science Verification Phase) observations devoted to the study of broad-band spectral variability of Seyfert 1 galaxies. The scientific goals of the program are:

* Probe the environment near the central source. This is achieved by disentangling each spectral component and studying its temporal behaviour, in particular the response to changes of the intrinsic continuum and the correlation with other components.
* Investigate the origin of the intrinsic continuum. The key spectral parameters of the continuum, i.e. the slope $`\alpha `$ and the high energy cut-off $`E_c`$, can be determined with unprecedented precision by BeppoSAX. This should allow us to investigate the presence of correlations between those parameters and the luminosity, an important test-point for radiative models.

The baseline observing strategy we chose is that of long looks at those bright Seyfert 1 galaxies characterized by a variability time-scale of $`\sim `$ 1 day.
This assures a contiguous sequence of spectra with the S/N needed for spectral measurements up to $`100keV`$ and without substantial variation within each time bin ($`\lesssim `$ variability time–scale). We will present here some of the results obtained during the Science Verification Phase on NGC 4151, and during AO1 on NGC 5548.

## 2 Broad–band spectral variability: a probe of the environment and of the origin of the intrinsic continuum

The X-ray spectrum of NGC 4151 is the most complex ever observed in a Seyfert galaxy, as the broad-band picture of BeppoSAX clearly shows in Fig.1. The spectrum is characterized by features typical of both Seyfert 1 and Seyfert 2 galaxies, making NGC 4151 the best laboratory for the study of these objects. The different temporal behaviour of these components yields the complex spectral variability shown by this object. In Fig.2 we present the ratio of spectra taken 2 days apart ($`July_{HI},July_{Low}`$) and a few months apart ($`Dec`$) in 1996. Let us first comment on the long-term behaviour (lower panel of Fig.2). All the variations can be attributed to a change in the structure of the absorber. The intrinsic continuum (cf. the ratio above 3-4 keV) remained unchanged, while below 1 keV the predominant constant soft components (Perola et al. 1986, Weaver et al. 1994) force the spectral ratio to 1. The variability observed in July on a $`\sim `$ day time–scale has a different origin. The factor–of–two flux increase is associated with a steepening of the intrinsic power law ($`\mathrm{\Delta }\alpha \sim 0.3`$), fully consistent with the $`\alpha `$ vs. $`F_X`$ relationship observed first with EXOSAT by Perola et al. 1986 and then confirmed with GINGA by Yaqoob et al. 1993. Fig.2 (upper panel) shows that the 2-10 keV spectral variability is indeed well reproduced by a change of the intrinsic slope.
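This behaviour of the spectral ratio can be sketched with a toy two-power-law model; all normalizations, slopes and the constant ("reflection-like") level below are illustrative numbers, not values fitted to NGC 4151. It shows both effects at once: a steeper, brighter high state makes the ratio fall with energy, while a constant component added to both states pins the ratio near 1 at high energies:

```python
# Toy high/low spectral ratio: power laws F(E) = N * E**(-alpha),
# plus an optional constant component (mimicking reflection).
# All parameter values are illustrative, not fitted to NGC 4151.

def flux(energy_kev, norm, alpha, const=0.0):
    return norm * energy_kev**(-alpha) + const

def spectral_ratio(energy_kev, const):
    hi = flux(energy_kev, norm=2.0, alpha=1.2, const=const)  # high state, steeper
    lo = flux(energy_kev, norm=1.0, alpha=0.9, const=const)  # low state
    return hi / lo

# Pure power laws: the ratio keeps decreasing above 10 keV ...
r10, r50 = spectral_ratio(10.0, const=0.0), spectral_ratio(50.0, const=0.0)

# ... while a constant component flattens the ratio towards 1 at high energies.
r10_c, r50_c = spectral_ratio(10.0, const=0.5), spectral_ratio(50.0, const=0.5)
```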
The presence of constant soft components damps the spectral ratio down to 1 below 2 keV, but above 10 keV the ratio should continue to decrease with energy (thin line), contrary to what is observed. This is fixed by a constant reflection component, whose presence is already required by spectral fitting (Piro et al. 1998) (thick line). Note also that the intensity of the iron line remained constant, notwithstanding the substantial change of the ionizing flux (see also Perola et al. 1986). This suggests that the line region is far from the central source and possibly coincides with the site of the reflection component. Intrinsic spectral variability substantially narrows the range of models of the intrinsic continuum. This point has not received much attention from theoreticians (with some noticeable exceptions), probably because the only solid evidence of intrinsic spectral variability was that of NGC 4151. With the broad-band spectral capability of BeppoSAX we can address this issue with much less ambiguity than in the past. Indeed, in the long-look observation of NGC 5548 we find a behaviour similar to that observed in NGC 4151 (Fig. 3).

## 3 The high energy cut-off

With the exception of NGC 4151, where the high energy cut-off $`E_C`$ is fairly well determined (Piro et al. 1998 and references therein), in other Seyfert galaxies only an average value has been derived (Zdziarski et al. 1995). With BeppoSAX we can measure $`E_C`$ in single objects, with the perspective of studying its correlation with $`\alpha `$ and the luminosity (or the compactness parameter). The long observation of NGC 5548 has allowed us to determine the cut-off fairly precisely ($`E_C=140_{-30}^{+60}`$ keV, Nicastro et al. 1999), even with the source at a rather low flux (Fig.4). It should also be possible, at least in the brightest objects, to search for changes of $`E_C`$ correlated with intensity.
Indeed, in NGC 4151, there may be an indication of a variation of $`E_C`$ from the low to the high state (upper panel of Fig.3).

## 4 REFERENCES

* Nicastro F., Piro L., De Rosa A. et al. 1999, ApJ, submitted
* Perola G.C., Piro L., Altamore A. et al. 1986, ApJ, 306, 508
* Piro L., Nicastro F., Feroci M. et al. 1998, Nucl. Phys. B, 69/1-3, 481
* Weaver K.A., Yaqoob T., Holt S. et al. 1994, ApJ, L27
* Yaqoob T., Warwick R., Makino F. et al. 1993, MNRAS, 262, 435
* Zdziarski A., Johnson N., Done C. et al. 1995, ApJ, 438, L63
UG-FT-101/99 hep-ph/9908299 August 1999

FOUR FERMION CONTACT TERMS IN CHARGED CURRENT PROCESSES AND LARGE EXTRA DIMENSIONS

FERNANDO CORNET <sup>1</sup><sup>1</sup>1E-mail address: cornet@ugr.es Departamento de Física Teórica y del Cosmos, Universidad de Granada, 18071 Granada, Spain MONICA RELAÑO <sup>2</sup><sup>2</sup>2E-mail address: mpastor@ll.iac.es Instituto de Astrofísica de Canarias E-38200 La Laguna, Tenerife, Spain and JAVIER RICO <sup>3</sup><sup>3</sup>3E-mail address: javier.rico@cern.ch Institut für Teilchenphysik ETHZ, CH-8093 Zürich, Switzerland

ABSTRACT

We study the bounds that can be obtained on four-fermion contact terms from the experimental data for $`e^+p\to \overline{\nu }X`$ obtained at HERA and $`p\overline{p}\to e^\pm \nu (\overline{\nu })`$, measured at TEVATRON. We compare these bounds with the ones available in the literature. Finally, we apply these results to study the compactification radius in theories with large extra dimensions and we obtain the bound $`M_c\ge 3.3TeV`$.

The Standard Model is in excellent shape from the experimental point of view. The only problem appears to be the recently reported evidence for neutrino oscillations , but the agreement between the theoretical predictions and all the high energy experimental data is remarkably good . Two years ago an excess of events in neutral ($`e^+p\to e^+X`$) and charged current ($`e^+p\to \nu _eX`$) Deep Inelastic Scattering was reported by the two HERA experiments: H1 and ZEUS . However, after a spectacular increase in the collected luminosity this possible hint for Physics Beyond the Standard Model has disappeared . It is now interesting (although certainly less exciting) to study the bounds that can be obtained from the new data on the mechanisms that were proposed as possible explanations for the excess of events. The effects of new physics can be parametrized in terms of higher dimension ($`d>4`$) operators .
In particular, the first operators contributing to Deep Inelastic Scattering are dimension $`6`$, four-fermion contact interactions. These terms were introduced as an effective interaction relevant in case quarks and leptons were composite objects . However, it is clear that contact terms also appear as the low energy limit of the exchange of heavy particles, in the same way as the $`W^\pm `$ gauge boson exchange can be parametrized in the Fermi Lagrangian for energies much lower than $`M_W`$. The main difference between both approaches to contact terms is the interpretation of the mass scale $`\mathrm{\Lambda }`$ they contain. In the first case, compositeness, the mass scale is related to the inverse of the size of the composite object. In the second case, $`\mathrm{\Lambda }`$ is related to the mass and coupling constants of the exchanged particle. A particularly interesting application of contact terms appears as a result of recent advances in superstring theories, where it has been observed that compact dimensions of a radius of $`O(1TeV^{-1})`$ can be at the origin of supersymmetry breaking . Also, even larger compact dimensions, with sub-millimeter compactification radius, allow one to reduce the Planck scale to become of the order of a few $`TeV`$, avoiding the gauge hierarchy problem . So, if we have $`6-n`$ dimensions of $`O(TeV^{-1})`$ and $`n-2`$ dimensions with a compactification size $`O(mm-fm)`$, in Type I/I′ and Type IIA superstring theories we can achieve that the scale at which gravity becomes strong and the string scale are, both, $`O(TeV)`$ . Gravitons propagate in the $`n`$-dimensional space, giving rise to new, effective interactions among the ordinary particles that live in our conventional four dimensional space-time . Gauge bosons propagate in the $`10-n`$ dimensional space and in the ordinary four dimensional space appear as a Kaluza-Klein tower of states.
Neutral current processes receive two types of new contributions: graviton exchange and KK-states exchange, while charged current processes only receive the second type of contributions. The phenomenology of these models is being discussed extensively in these days . In this note we are going to use recent data from HERA ($`e^+p\to \overline{\nu }_eX`$) and TEVATRON ($`p\overline{p}\to e\overline{\nu }_eX`$) to obtain bounds on the mass scale appearing in the $`e\nu qq^{\prime }`$ contact term and we will compare them with the bounds obtained from the unitarity of the CKM matrix . We will finish with a discussion of the connection of our results with extra dimensions physics. Low energy effects of physics beyond the SM, characterized by a mass scale $`\mathrm{\Lambda }`$ much larger than the Fermi scale, can be studied by a non-renormalizable effective lagrangian, in which all the operators are organized according to their dimensionality. Since the energies and momenta that can be reached in present experiments are much lower than $`\mathrm{\Lambda }`$, it is expected that the lowest dimension operators provide the dominant corrections to the SM predictions. Requiring $`SU(2)\times U(1)`$ invariance, the relevant lagrangian for $`e\nu qq^{\prime }`$ charged current processes including dimension $`6`$ four-fermion operators is: $$ℒ=ℒ_{SM}+\eta ^{lq}(\overline{l}\gamma _\mu \tau ^Il)(\overline{q}\gamma ^\mu \tau ^Iq)+ℒ_S,$$ (1) where $`ℒ_{SM}`$ is the SM lagrangian, $`l=(\nu ,e)`$ and $`q=(u,d)`$ are the $`SU(2)`$ doublets containing the left-handed lepton and quark fields, $`\tau ^I`$ are the Pauli matrices and $`ℒ_S`$ are four-fermion terms containing scalar currents instead of the vector currents shown explicitly in Eq. (1). It is customary to replace the coefficient $`\eta `$ by a mass scale $`\mathrm{\Lambda }`$: $$\eta =\frac{ϵg^2}{\mathrm{\Lambda }^2},$$ (2) with $`ϵ=\pm 1`$ taking into account the two possible interference patterns.
For historical reasons $`\mathrm{\Lambda }`$ is usually interpreted as the mass scale for new physics in the strong coupling regime, i.e. with $$\frac{g^2}{4\pi }=1.$$ (3) If the contact term is due to the $`s`$- or $`t`$-channel exchange of a heavy particle with a mass $`M_h`$ much larger than the center of mass energy then $$\eta =\frac{\overline{g}^2}{M_h^2}\quad \text{and}\quad \mathrm{\Lambda }=\frac{\sqrt{4\pi }M_h}{\overline{g}},$$ (4) where $`\overline{g}`$ is the coupling constant of the heavy particle to a fermion pair. We will not consider the terms in $`ℒ_S`$ in our analysis because it has been shown that the ratio $$R=\frac{\mathrm{\Gamma }(\pi ^\pm \to e^\pm \nu )}{\mathrm{\Gamma }(\pi ^\pm \to \mu ^\pm \nu )}=(1.230\pm 0.004)\times 10^{-4}$$ (5) provides a very strong bound on these terms: $$\mathrm{\Lambda }_s>500TeV.$$ (6) This is due to the fact that the scalar currents appearing in $`ℒ_S`$ do not lead to helicity suppression in the pion decay amplitude as $`V-A`$ currents do . The expression for the cross-section $`\sigma (e^+p\to \overline{\nu }X)`$ can be found in Ref. , where a first analysis of the HERA data was presented. Here we perform a fit to the combined data for $`d\sigma /dxdQ^2`$ shown by the two experiments, H1 and ZEUS , at the Moriond and Vancouver conferences. The total integrated luminosity collected is $`37pb^{-1}`$ for H1 and $`47pb^{-1}`$ for ZEUS. We have used the MRSA parameterization , but one cannot expect important changes in the result when using another parameterization. In our fit we have included Standard Model radiative corrections, but we have neglected the interference between these corrections and the new terms. Certainly, the data are compatible with the Standard Model predictions and we can only obtain $`95\%C.L.`$ bounds on $`\mathrm{\Lambda }_+^{lq}`$ and $`\mathrm{\Lambda }_{-}^{lq}`$, where the subscript refers to the value of $`ϵ`$.
In order to obtain these bounds we have assumed that the probability density has the form: $$f(z,n)=\frac{z^{n/2-1}e^{-z/2}}{2^{n/2}\mathrm{\Gamma }(n/2)},$$ (7) where $`n=m-1`$, and $`m`$ is the number of data points included in the fit. This expression corresponds to a $`\chi ^2`$ distribution with $`n`$ degrees of freedom. The $`95\%C.L.`$ bounds we have obtained are: $$\begin{array}{ccc}\mathrm{\Lambda }_+^{lq}\ge 3.5TeV\hfill & \text{and}\hfill & \mathrm{\Lambda }_{-}^{lq}\ge 3.1TeV.\hfill \end{array}$$ (8) It is interesting to note at this point that, due to the dominance of the first family quarks in the proton structure functions, the results shown in Eq. (8) are strongly dominated by contact terms involving only first family quarks and leptons, i.e. an $`e\nu ud`$ contact term. Indeed, neglecting new terms involving quarks from the second family we arrive at very similar bounds: $$\begin{array}{ccc}\mathrm{\Lambda }_+^{lq}\ge 3.2TeV\hfill & \text{and}\hfill & \mathrm{\Lambda }_{-}^{lq}\ge 2.8TeV.\hfill \end{array}$$ (9) We now turn our attention to the closely related processes $`p\overline{p}\to e^\pm \nu (\overline{\nu })`$ measured at TEVATRON. At the partonic level this process is related to the one at HERA via $`t`$- to $`s`$-channel exchange. This, however, introduces a problem because the $`W^\pm `$ gauge boson can now be produced on mass shell, producing a very large background for the study of new physics. In our fit we have used the data for $`{\displaystyle \frac{d\sigma }{dm_t}}`$, where $`m_t`$ is the $`e\nu `$ transverse mass, from Ref. with $`m_t\ge 110GeV`$. This value has been chosen to optimize the sensitivity to new physics. The values for the $`W`$ mass and width we have used are $`M_W=(80.41\pm 0.10)GeV`$ and $`\mathrm{\Gamma }_W=(2.06\pm 0.06)GeV`$, and we have checked that our results are not sensitive to changes of these parameters within one standard deviation.
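The density of Eq. (7) used in these fits is just the standard χ² distribution, so it should be normalized to unity and have mean n. A quick numerical sketch (the value of n is arbitrary here, chosen only for illustration) confirms both properties:

```python
import math

# Eq. (7): chi-squared probability density with n degrees of freedom,
# f(z, n) = z**(n/2 - 1) * exp(-z/2) / (2**(n/2) * Gamma(n/2)).

def chi2_pdf(z, n):
    return z**(n / 2 - 1) * math.exp(-z / 2) / (2**(n / 2) * math.gamma(n / 2))

n = 10                                           # illustrative degrees of freedom
dz = 0.01
zs = [dz * (i + 0.5) for i in range(20000)]      # midpoint grid on (0, 200)

norm = sum(chi2_pdf(z, n) * dz for z in zs)      # should be ~ 1
mean = sum(z * chi2_pdf(z, n) * dz for z in zs)  # should be ~ n
```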
The bounds we obtain, $$\begin{array}{ccc}\mathrm{\Lambda }_+^{lq}\ge 2.0TeV\hfill & \text{and}\hfill & \mathrm{\Lambda }_{-}^{lq}\ge 1.2TeV,\hfill \end{array}$$ (10) are much less restrictive than the ones obtained at HERA. The most stringent bounds on lepton-quark charged current contact terms have been obtained from the observed unitarity of the Cabibbo-Kobayashi-Maskawa matrix elements in Ref. : $$|V_{ud}|^2+|V_{us}|^2+|V_{ub}|^2=0.9965\pm 0.0021.$$ (11) Since CKM matrix elements are experimentally determined from the ratio of semileptonic to leptonic processes, both lepton-quark and purely leptonic contact terms contribute to Eq. (11): $$V_{ud_j}^{obs}=V_{ud_j}^{SM}\left(1-\frac{\eta ^{lq}-\eta ^{ll}}{8\sqrt{2}G_F}\right),$$ (12) where $`\eta ^{lq}`$ and $`\eta ^{ll}`$ stand for the lepton-quark and purely leptonic contact term couplings, respectively. The bounds on $`\eta ^{lq}`$ and, consequently, on $`\mathrm{\Lambda }^{lq}`$ depend on the ones that can be obtained for $`\eta ^{ll}`$ (or $`\mathrm{\Lambda }^{ll}`$). Hagiwara and Matsumoto have performed a fit to electroweak parameters measured at LEP1, TEVATRON and LEP2 to obtain a value for the $`T`$ parameter from which the bounds : $$\begin{array}{ccc}\mathrm{\Lambda }_+^{ll}\ge 7.5TeV\hfill & \text{and}\hfill & \mathrm{\Lambda }_{-}^{ll}\ge 10.2TeV\hfill \end{array}$$ (13) were obtained. Introducing this result into Eqs. (11), (12) and assuming that the contact terms are the same for all three families they obtained: $$\begin{array}{ccc}\mathrm{\Lambda }_+^{lq}\ge 5.8TeV\hfill & \text{and}\hfill & \mathrm{\Lambda }_{-}^{lq}\ge 10.1TeV.\hfill \end{array}$$ (14) These bounds are more stringent than the ones obtained from HERA. However, one should notice that both sets of bounds are complementary because of the different assumptions used.
Indeed, the result (14) relies on the assumption that $`V_{ud}`$, $`V_{us}`$ and $`V_{ub}`$ receive the same contribution from contact terms, while the bounds obtained from ZEUS and H1 data are independent of this assumption, as we have explicitly shown in Eqs. (8) and (9). We should also point out that the HERA data we used in our fit have been obtained with positron beams. Since positrons interact via charged current processes with $`d`$ and $`\overline{u}`$ quarks in the proton, while electrons interact with $`u`$ and $`\overline{d}`$ quarks, the cross section with positrons in the initial state is much smaller at large $`x`$ and $`Q^2`$ than the one with electrons. Thus, with the same integrated luminosity using electron beams as the one collected up to now with positron beams, the bounds shown in Eqs. (8) and (9) will improve in such a way that the one for $`\mathrm{\Lambda }_+^{lq}`$ can become similar to the one in Eq. (14). The contact terms we have been studying up to now can be easily related to the exchange of a tower of KK states corresponding to the $`W`$ boson. Such a tower appears when the number of space-time dimensions is larger than $`4`$ and gauge bosons can propagate in the new dimensions. For energies lower than the inverse of the compactification radius $`(R\sim 1/M_c)`$ the gauge bosons propagating in the new dimensions appear as a tower of states with the same couplings as the standard bosons. The lightest of the new states has a mass $`O(M_c)`$. For $`M_c\sim O(1TeV)`$ it is, thus, justified to approximate the effects of the exchange of the new particles in four-fermion processes by a contact term of the type introduced in Eq. (1).
The relation between the mass scale $`\mathrm{\Lambda }`$ and the compactification scale $`M_c`$ is particularly simple in the case of only one extra dimension with compactification scale $`O(TeV)`$: $$M_c^2=\frac{g^2\mathrm{\Lambda }_{-}^2}{2\pi }\sum _{n=1}^{\mathrm{\infty }}\frac{1}{n^2},$$ (15) where $`g`$ is the $`SU(2)`$ coupling constant and the sum covers the contribution of the infinite number of states. This sum is finite and turns out to be $`\pi ^2/6`$. In case there is more than one extra dimension with the same compactification radius, the sum is divergent and a new parameter, a cut-off, must be introduced. Since the coupling of the new states to left-handed fermions is universal, as is the case for the $`W`$ boson, we just have to take the most stringent bound for $`\mathrm{\Lambda }_{-}`$ and convert it into a bound for $`M_c`$. Thus, using $`\mathrm{\Lambda }_{-}\ge 10.2TeV`$ we obtain $$M_c\ge 3.3TeV.$$ (16) In summary, we have obtained bounds on the mass scale of the $`SU(2)\times U(1)`$ invariant, four-fermion, charged current contact term from the recent HERA and TEVATRON data. The first ones appear to be more sensitive to the presence of this contact term, but more luminosity (especially with electron beams) is needed before the bounds obtained from these processes can be competitive with the ones obtained from the unitarity of the CKM matrix. Finally, we have converted these results into a bound on the compactification scale of a large extra dimension in which the $`W`$ boson can propagate, obtaining $`M_c\ge 3.3TeV`$. This result is particularly interesting, not only because it is one of the largest lower bounds obtained, but also because, being obtained from charged current processes, it is free from any assumption on the effects of the KK tower of the graviton in theories with a low gravity scale. We thank A. Kotwal, K. Hagiwara, W. Hollik and M. Masip for very helpful discussions and comments.
This research was partially supported by CICYT, under contract number AEN96-1672, and Junta de Andalucia, under contract FQM 101.
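As a numerical footnote, the step from Eq. (15) to the bound (16) can be reproduced in a few lines. The value used below for the SU(2) coupling, g² = 4πα_em/sin²θ_W evaluated at the weak scale (α_em ≈ 1/128, sin²θ_W ≈ 0.231), is our own assumption, not a number taken from the text:

```python
import math

# Eq. (15): M_c^2 = g^2 * Lambda_-^2 / (2*pi) * sum_{n>=1} 1/n^2,
# with the sum equal to pi^2 / 6.
alpha_em = 1.0 / 128.0   # assumed QED coupling at the weak scale
sin2_theta_w = 0.231     # assumed weak mixing angle
g2 = 4 * math.pi * alpha_em / sin2_theta_w

lambda_minus = 10.2      # TeV, the CKM-unitarity bound quoted in Eq. (13)
zeta_2 = math.pi**2 / 6
m_c = math.sqrt(g2 * lambda_minus**2 / (2 * math.pi) * zeta_2)
# m_c lands close to the 3.3 TeV of Eq. (16)
```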
# Exact solution for core-collapsed isothermal star clusters

## ACKNOWLEDGMENTS

It is a pleasure to thank A. Allahverdyan for stimulating discussion.
## 1 Introduction

A supersymmetric index (Witten index) $`\mathrm{Tr}(-1)^F`$ is a useful criterion for dynamical supersymmetry breaking in supersymmetric field theories . The index is given by the difference between the numbers of bosonic and fermionic zero energy states and counts the number of supersymmetric vacua. When $`\mathrm{Tr}(-1)^F\ne 0`$ supersymmetry is not spontaneously broken, since even if pairs of bosonic and fermionic zero energy states gain a non-zero energy, at least $`|\mathrm{Tr}(-1)^F|`$ states remain as zero energy states. Conversely, however, in the case $`\mathrm{Tr}(-1)^F=0`$ we cannot conclude whether supersymmetry is broken or not, but it indicates a possibility of spontaneous supersymmetry breaking. The indices for various supersymmetric theories have been computed. Recently, the index for a supersymmetric gauge theory with a mass gap, namely $`N=1`$ supersymmetric Yang-Mills Chern-Simons theory with gauge group $`G`$, was given by Witten . His result shows that $`\mathrm{Tr}(-1)^F\ne 0`$ if $`|k|\ge h/2`$, where $`h`$ is the dual Coxeter number of $`G`$, and suggests that dynamical supersymmetry breaking occurs for $`|k|<h/2`$. Three-dimensional supersymmetric Yang-Mills Chern-Simons theory can be realized on a brane configuration with a $`(p,q)`$5-brane in Type IIB superstring theory . For the $`N=1`$ supersymmetric theory without any extra matter, the corresponding brane configuration has not been well understood, but $`N=2,3`$ supersymmetric theories are easy to handle with branes. The moduli space of vacua of the $`N=2,3`$ theory can be explained by the brane configuration . So we focus only on the $`N=2,3`$ theory throughout the present paper. In this paper, we extend Witten’s computation of the index to $`N=2,3`$ $`SU(n)`$ super Yang-Mills Chern-Simons theory at level $`k`$ and show that this index is well interpreted by using the Type IIB brane configuration.
In particular, the dynamical supersymmetry breaking in some region of $`k`$ is explained by the so-called “$`s`$-rule” (supersymmetric rule) for the branes. Moreover, when the index is non-zero we show that it exactly coincides with the number of possible supersymmetric brane configurations. The organization of this paper is as follows. In section 2, we briefly review the microscopic computation of the index by Witten. The computation can be straightforwardly extended to $`N=2,3`$ extended supersymmetry, and we will give the formulae for the indices. In section 3, we explain the $`s`$-rule for the branes and generalize it to D3-branes stretched between two different types of $`(p,q)`$5-branes. We next show in section 4 that if this generalized $`s`$-rule is applied to the $`N=2,3`$ Yang-Mills Chern-Simons brane configuration, the configuration for $`k<n`$ cannot satisfy the $`s`$-rule, that is, the configuration is not supersymmetric. These results are consistent with the computation of the index. We will give the exact formula for the index by counting the number of possible supersymmetric configurations from an M-theoretical point of view. We also discuss the relation between a supersymmetric quantum mechanics and the computation of the index in terms of the branes. In the subsequent subsection, we construct a family of theories which have the same index by using some brane dynamics. Finally, section 5 is devoted to a summary of our results and a discussion of further problems.

## 2 Brief review of the microscopic computation of the index

We first consider $`N=1`$ $`SU(n)`$ supersymmetric Yang-Mills Chern-Simons theory on $`𝐑\times T^2`$. The computation is done by a Born-Oppenheimer approximation. In this approximation we take a small volume limit of the torus. If $`g`$ and $`r`$ denote the gauge coupling and the radius of the torus, the mass of the vector multiplet $`kg^2`$ is much smaller than the Kaluza-Klein mass of order $`1/r`$.
We will consider the quantization of the “zero energy” states, whose energies are of order $`kg^2`$, by means of this approximation. Now let us consider the moduli space of flat $`G`$-connections on $`T^2`$, which are the zero-energy classical gauge field configurations. The moduli space $`ℳ`$ of flat $`G`$-connections on $`T^2`$ is given by $$ℳ=(𝐔\times 𝐔)/W,$$ (2.1) where $`𝐔`$ is the maximal torus of $`G`$ and $`W`$ is the Weyl group. For $`G=SU(n)`$, $`ℳ`$ becomes simply the complex projective space $`ℂℙ^{n-1}`$ . We next consider “zero modes” of the gluino fields of positive and negative chirality, $`\lambda _+`$ and $`\lambda _{-}`$, which have at most energy of order $`kg^2`$. We assume the zero modes of $`\lambda _\pm `$ take the form: $$\lambda _\pm =\sum _{a=1}^{r}\eta _\pm ^aT^a,$$ (2.2) where the $`T^a,a=1,\mathrm{},r`$ are a basis of the Lie algebra of $`𝐔`$ and the $`\eta _\pm ^a`$ are fermionic constants, which obey the following canonical anti-commutation relations $$\{\eta _+^a,\eta _{-}^b\}=\delta ^{ab},\{\eta _+,\eta _+\}=\{\eta _{-},\eta _{-}\}=0.$$ (2.3) The $`\eta _+`$ and $`\eta _{-}`$ can be regarded as creation and annihilation operators, so we now introduce ground states annihilated by the $`\eta _\pm ^a`$: $$\eta _\pm |\mathrm{\Omega }_\pm ⟩=0.$$ (2.4) These two states are related to each other by $$|\mathrm{\Omega }_+⟩=\prod _{a=1}^{r}\eta _+^a|\mathrm{\Omega }_{-}⟩.$$ (2.5) The anti-commutation relations (2.3) can be regarded as the Clifford algebra on $`ℳ`$. Therefore, the Hilbert space made by quantizing the fermion zero modes maps to the space of spinor fields on $`ℳ`$. Since $`ℳ`$ is a complex manifold, a spinor field on $`ℳ`$ is simply represented by a form with values in $`K^{1/2}`$, where $`K`$ is the canonical line bundle of $`ℳ`$. So if we consider a general state in the fermionic Fock space $$\eta _{-}^{a_1}\mathrm{}\eta _{-}^{a_q}|\mathrm{\Omega }_+⟩,$$ (2.6) this is regarded as a $`(0,q)`$-form on $`ℳ`$ with values in $`K^{1/2}`$.
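As a concrete illustration (ours, not from the paper), the algebra (2.3) can be realized with finite matrices through a Jordan-Wigner construction; the helper names below are invented for this sketch, which builds $`\eta _{-}^a`$ and $`\eta _+^a`$ for $`r=2`$ modes and checks the anti-commutators numerically.

```python
def kron(A, B):
    """Kronecker product of two square matrices given as nested lists."""
    m = len(B)
    n = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n)] for i in range(n)]

def mul(A, B):
    n = len(A)
    return [[sum(A[i][l] * B[l][j] for l in range(n)) for j in range(n)] for i in range(n)]

def anticomm(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(AB, BA)]

def transpose(A):
    return [list(r) for r in zip(*A)]

# Jordan-Wigner building blocks (real matrices, so dagger = transpose)
I2 = [[1, 0], [0, 1]]
Z = [[1, 0], [0, -1]]
SM = [[0, 1], [0, 0]]                    # sigma^-: annihilates the occupied state

eta_m = [kron(SM, I2), kron(Z, SM)]      # eta_-^a, a = 1, 2 (annihilation operators)
eta_p = [transpose(m) for m in eta_m]    # eta_+^a (creation operators)
```

The relations $`\{\eta _+^a,\eta _{-}^b\}=\delta ^{ab}`$ and $`\{\eta _\pm ,\eta _\pm \}=0`$ then hold as exact matrix identities on the $`2^r`$-dimensional Fock space, which is the finite-dimensional model of the spinor space discussed above.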
More precisely speaking, the forms on $`ℳ`$ take values not only in $`K^{1/2}`$ but also in a line bundle $`𝒲`$, since $`|\mathrm{\Omega }_+⟩`$ is regarded as a $`(0,0)`$-form on $`ℳ`$ with values in $`𝒲`$. To determine the line bundle $`𝒲`$ let us consider the canonical quantization of the Yang-Mills Chern-Simons theory. The momentum conjugate to $`A_i^a`$ is given by $$\mathrm{\Pi }_i^a=\frac{1}{g^2}F_{0i}^a-\frac{k}{4\pi }ϵ_{ij}A_j^a.$$ (2.7) Writing formally $`\mathrm{\Pi }_i^a=-i\delta /\delta A_i^a`$, we have $$\frac{1}{g^2}F_{0i}^a=-i\frac{D}{DA_i^a},$$ (2.8) where $$\frac{D}{DA_i^a}=\frac{\delta }{\delta A_i^a}+i\frac{k}{4\pi }ϵ_{ij}A_j^a$$ (2.9) is a connection on the line bundle $`𝒲`$. The connection form $`\frac{k}{4\pi }ϵ_{ij}A_j^a`$ means that the line bundle over $`ℳ`$ is $`ℒ^k`$, where $`ℒ`$ is the basic line bundle. The supercharges of the theory are written in terms of this connection: $`Q_+`$ $`=`$ $`{\displaystyle \frac{1}{g^2}}{\displaystyle \int _{T^2}}\mathrm{Tr}F_{0z}\lambda _{-}={\displaystyle \int _{T^2}}\mathrm{Tr}\lambda _{-}{\displaystyle \frac{D}{DA_{\overline{z}}}},`$ (2.10) $`Q_{-}`$ $`=`$ $`{\displaystyle \frac{1}{g^2}}{\displaystyle \int _{T^2}}\mathrm{Tr}F_{0\overline{z}}\lambda _+={\displaystyle \int _{T^2}}\mathrm{Tr}\lambda _+{\displaystyle \frac{D}{DA_z}}.`$ (2.11) Therefore, the supercharges $`Q_+`$ and $`Q_{-}`$ can be identified with the $`\overline{\partial }`$ and $`\overline{\partial }^{\dagger }`$ operators, which form a decomposition of the Dirac operator acting on spinors valued in $`𝒲=ℒ^k`$. Since these two operators obey $$\{\overline{\partial },\overline{\partial }^{\dagger }\}=H,\overline{\partial }^2=(\overline{\partial }^{\dagger })^2=0,$$ (2.12) where $`H`$ is the Hamiltonian, the space of supersymmetric ground states is given by the cohomology $$\bigoplus _{i=0}^{n-1}H^i(ℳ,𝒲⊗K^{1/2}).$$ (2.13) For $`G=SU(n)`$, $`ℳ≅ℂℙ^{n-1}`$. The basic line bundle over $`ℳ`$ is $`ℒ=𝒪(1)`$ and the canonical bundle of $`ℳ`$ is $`K=ℒ^{-n}`$.
Therefore, the above cohomology groups are rewritten as $$\bigoplus _{i=0}^{n-1}H^i(ℂℙ^{n-1},ℒ^{k-n/2}),$$ (2.14) where we use $`𝒲=ℒ^k`$. Thus, the supersymmetric index is $$I_{N=1}(k)=\sum _{i=0}^{n-1}(-1)^idimH^i(ℂℙ^{n-1},ℒ^{k-n/2}).$$ (2.15) These cohomology groups were computed by Serre and their dimensions are as follows $$dimH^i(ℂℙ^{n-1},ℒ^r)=\{\begin{array}{cc}0\hfill & \text{for }0<i<n-1\hfill \\ 0\hfill & \text{for }i=n-1\text{ and }r>-n\hfill \\ 0\hfill & \text{for }i=0\text{ and }r<0\hfill \\ \left(\genfrac{}{}{0pt}{}{n+r-1}{r}\right)\hfill & \text{for }i=0\text{ and }r≥0\hfill \end{array},$$ (2.16) where $`\left(\genfrac{}{}{0pt}{}{n+r-1}{r}\right)`$ is a binomial coefficient and equals the dimension of the vector space of degree $`r`$ homogeneous polynomials in the $`n`$ homogeneous coordinates of $`ℂℙ^{n-1}`$. Using this formula we obtain the supersymmetric index of the $`N=1`$ Yang-Mills Chern-Simons theory, $$I_{N=1}(k)=\{\begin{array}{cc}0\hfill & \text{for }|k|<n/2\hfill \\ \left(\genfrac{}{}{0pt}{}{k+n/2-1}{k-n/2}\right)\hfill & \text{for }|k|≥n/2\hfill \end{array},$$ (2.17) where for the case of $`k≤-n/2`$ we use Serre duality $$H^{n-1}(ℂℙ^{n-1},ℒ^r)≅H^0(ℂℙ^{n-1},ℒ^{-n-r})\text{for }r≤-n.$$ (2.18) We now extend this computation of the index to the case of $`N=2,3`$ supersymmetric Yang-Mills Chern-Simons theory in three dimensions. First, we note that the $`N=2`$ vector multiplet consists of one spin 1 vector boson $`A_\mu `$, two spin 1/2 spinors $`\lambda _1,\lambda _2`$ and one spin 0 real adjoint scalar $`X`$. Since the adjoint scalar has mass $`kg^2`$, the Coulomb branch of this theory is completely lifted. Therefore, the presence of the adjoint scalar does not affect the index.
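As a quick consistency check (not part of the original paper), the closed form (2.17) can be verified against the case-by-case dimensions (2.16) with a few lines of Python; the function names are illustrative, and $`n`$ is taken even here so that $`r=k-n/2`$ is an integer.

```python
from math import comb

def dim_H(n, r, i):
    """dim H^i(CP^{n-1}, L^r) following Serre's formula (2.16),
    with Serre duality (2.18) supplying the i = n-1, r <= -n case."""
    if i == 0 and r >= 0:
        return comb(n + r - 1, r)
    if i == n - 1 and r <= -n:
        return comb(-r - 1, -n - r)   # = dim H^0(CP^{n-1}, L^{-n-r})
    return 0

def index_N1(n, k):
    """Witten index of N=1 SU(n) Yang-Mills Chern-Simons at level k
    via the alternating cohomology sum (2.15); n must be even here."""
    r = k - n // 2
    return sum((-1) ** i * dim_H(n, r, i) for i in range(n))
```

For even $`n`$ and $`k≥0`$ this reproduces (2.17): the sum vanishes for $`k<n/2`$ and equals the binomial coefficient otherwise.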
So, the computation of the index is modified by the two gluino fields as $`I_{N=2}(k)`$ $`=`$ $`{\displaystyle \sum _{i=0}^{n-1}}(-1)^idimH^i(ℂℙ^{n-1},𝒲⊗K^{1/2}⊗K^{1/2})`$ (2.19) $`=`$ $`{\displaystyle \sum _{i=0}^{n-1}}(-1)^idimH^i(ℂℙ^{n-1},ℒ^{k-n}).`$ Similarly, for the $`N=3`$ theory, the massive vector multiplet contains one spin 1 vector boson $`A_\mu `$, three spin 1/2 spinors $`\lambda _1,\lambda _2,\lambda _3`$, three spin 0 real adjoint scalars $`X_1,X_2,X_3`$ and one spin -1/2 spinor $`\chi `$. In this case, there are also no Coulomb branch moduli. The gluino fields contain one field of opposite spin, so the contribution of one of the spin 1/2 fields is canceled by the spin -1/2 field $`\chi `$: $`I_{N=3}(k)`$ $`=`$ $`{\displaystyle \sum _{i=0}^{n-1}}(-1)^idimH^i(ℂℙ^{n-1},𝒲⊗K^{3/2}⊗K^{-1/2})`$ (2.20) $`=`$ $`{\displaystyle \sum _{i=0}^{n-1}}(-1)^idimH^i(ℂℙ^{n-1},ℒ^{k-n}).`$ Therefore, the index of the $`N=3`$ theory coincides with the $`N=2`$ one. We can again obtain the index of the $`N=2,3`$ theories by Serre’s formula: $$I_{N=2,3}(k)=\{\begin{array}{cc}0\hfill & \text{for }0<k<n\hfill \\ \left(\genfrac{}{}{0pt}{}{k-1}{k-n}\right)=\left(\genfrac{}{}{0pt}{}{k-1}{n-1}\right)\hfill & \text{for }k≥n\hfill \end{array}.$$ (2.21) For $`k≤0`$ we can also compute the index by using Serre duality, and it is non-zero for any $`n`$, but the field theoretical meaning of this duality has not been understood at present. So, we assume that $`k`$ is a positive integer in the following. We will discuss the relation between the index and the brane configuration in the following section. ## 3 The $`s`$-rule for Type IIB branes In this section, we explain the $`s`$-rule for branes in string theory. The $`s`$-rule is a phenomenological rule of brane dynamics, first proposed by Hanany and Witten , which is needed in order to make configurations supersymmetric.
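The resulting formula (2.21) is a one-liner in code; the sketch below (our illustrative names, not from the paper) also checks the binomial identity used there.

```python
from math import comb

def index_N23(n, k):
    """Index of N=2,3 SU(n) Yang-Mills Chern-Simons at level k > 0, eq. (2.21)."""
    if 0 < k < n:
        return 0            # supersymmetry is dynamically broken
    return comb(k - 1, k - n)
```

The identity $`\left(\genfrac{}{}{0pt}{}{k-1}{k-n}\right)=\left(\genfrac{}{}{0pt}{}{k-1}{n-1}\right)`$ is just the symmetry of the binomial coefficient.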
For example, an NS5-brane and a D5-brane which are completely twisted in the configuration space can be supersymmetrically connected by only one D3-brane. If we use this rule we can find the exact correspondence between the brane configuration and the supersymmetric vacua of field theories. Some explanations of this rule are given from various points of view in Refs. . If we map the D3-brane between the NS5-brane and the D5-brane to an M2-brane between two M5-branes by U-duality, we obtain the following rule in M-theory: A configuration in which two completely twisted M5-branes are connected by more than one M2-brane is not supersymmetric. We can obtain all other supersymmetric brane configurations keeping the $`s`$-rule in the various string theories from this M-theoretical rule by string duality. So we now consider the generalization of the $`s`$-rule to D3-branes between two different types of $`(p,q)`$5-branes. The $`(p,q)`$5-brane in Type IIB theory is described by a single M5-brane wrapping simultaneously on two cycles of the compactified torus of M-theory. The wrapping number on each cycle corresponds to the charges of the $`(p,q)`$5-brane. This wrapping cycle is denoted by $`p\alpha +q\beta `$, where $`\alpha `$ and $`\beta `$ stand for the two independent cycles of the torus. If we introduce another $`(p^{},q^{})`$5-brane, the corresponding M5’-brane similarly wraps on a $`p^{}\alpha +q^{}\beta `$ cycle. The M5-brane and the M5’-brane meet $`|pq^{}-qp^{}|`$ times on the torus, where $`|pq^{}-qp^{}|`$ is the intersection number of the cycles $`p\alpha +q\beta `$ and $`p^{}\alpha +q^{}\beta `$. Since only one M2-brane can be attached at each intersection point if we use the above $`s`$-rule in M-theory, the maximal number of M2-branes between the M5- and M5’-brane must be $`|pq^{}-qp^{}|`$ to preserve supersymmetry. If the number of M2-branes is more than $`|pq^{}-qp^{}|`$, we cannot arrange the M2-branes in a supersymmetric way.
By using string duality, the M5- and M5’-brane map to the $`(p,q)`$5- and $`(p^{},q^{})`$5-brane in Type IIB theory. So we find that if the number of D3-branes between the $`(p,q)`$5- and $`(p^{},q^{})`$5-brane is more than $`|pq^{}-qp^{}|`$, then the configuration is not supersymmetric.<sup>1</sup><sup>1</sup>1If $`pq^{}-qp^{}`$ is negative, it means that the stretched D3-branes are anti-D3-branes, which have the opposite orientation and charge. ## 4 Comparison with the supersymmetric Yang-Mills Chern-Simons theory configuration In this section, we apply the result of the previous section to the brane configuration of supersymmetric Yang-Mills Chern-Simons theory and compare with the computation of the index. $`N=2,3`$ supersymmetric $`SU(n)`$ Yang-Mills Chern-Simons theories at level $`k`$ are realized on generalized Hanany-Witten type configurations, in which $`n`$ D3-branes are suspended between an NS5-brane and a $`(p,q)`$5-brane . The coefficient of the Chern-Simons term, that is, the level of the Chern-Simons theory, is given by $`k=p/q`$ in this brane setup. For non-Abelian gauge theory this coefficient should be an integer due to the quantization condition. So, we set $`p=k`$ and $`q=1`$ in the pages that follow to avoid this subtlety. Moreover, in order to compare with the result of the index computation, we only consider theories without any extra matter except for the vector multiplet. For $`N=2`$ Yang-Mills Chern-Simons theory, the configuration is<sup>2</sup><sup>2</sup>2For the derivation and notation, see Ref. .
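The generalized $`s`$-rule of this section amounts to a one-line bound; a minimal sketch (function name ours):

```python
def max_susy_d3(p, q, pp, qq):
    """Maximal number of D3-branes that can be supersymmetrically stretched
    between a (p,q)5-brane and a (p',q')5-brane: the intersection number |pq' - qp'|."""
    return abs(p * qq - q * pp)
```

For an NS5-brane, i.e. a $`(0,1)`$5-brane, and a $`(k,1)`$5-brane this gives $`k`$, which is exactly the bound used in the next section.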
$$\begin{array}{c}\text{NS5}(012345)\hfill \\ \text{D3}(012|6|)\hfill \\ (k,1)\text{5}\left(01278\left[\genfrac{}{}{0pt}{}{5}{9}\right]_\theta \right)\hfill \end{array},$$ (4.1) and for $`N=3`$, $$\begin{array}{c}\text{NS5}(012345)\hfill \\ \text{D3}(012|6|)\hfill \\ (k,1)\text{5}\left(012\left[\genfrac{}{}{0pt}{}{3}{7}\right]_\theta \left[\genfrac{}{}{0pt}{}{4}{8}\right]_\theta \left[\genfrac{}{}{0pt}{}{5}{9}\right]_\theta \right)\hfill \end{array}.$$ (4.2) ### 4.1 Supersymmetric index from branes We first apply the generalized $`s`$-rule to the Type IIB brane configuration which describes $`SU(n)`$ Yang-Mills Chern-Simons theory at level $`k`$. In this configuration one of the 5-branes is an NS5-brane, namely, a $`(0,1)`$5-brane. The other is a $`(k,1)`$5-brane. These 5-branes are completely twisted in the $`N=2`$ and $`N=3`$ configurations (4.1) and (4.2). So, the supersymmetric configurations are restricted by the $`s`$-rule. The intersection number of these 5-branes in M-theory is $`k`$. Therefore, for supersymmetry, the maximal number of D3-branes must be $`k`$. We thus find that the configuration is supersymmetric if $`n≤k`$, while the situation for $`n>k`$ violates the $`s`$-rule and spontaneously breaks supersymmetry. This exactly agrees with the computation of the index, that is, $`I_{N=2,3}(k)≠0`$ for $`n≤k`$ and $`I_{N=2,3}(k)=0`$ for $`n>k`$. We next consider how the value of the index itself is described in the brane configuration. The value of the index gives the number of possible supersymmetric vacua. So, we can expect that the non-zero value of the index coincides with the number of possible supersymmetric configurations of branes. In order to count the number of supersymmetric configurations, we again lift the Type IIB configuration to M-theory. The NS5-brane is an M5-brane wrapping on the cycle $`\beta `$ and the $`(k,1)`$5-brane is an M5-brane wrapping on the cycle $`k\alpha +\beta `$ in M-theory. These M5-branes intersect $`k`$ times on the torus.
Therefore an M2-brane can be attached at each of the $`k`$ intersection points, but no two can be attached at the same point by the $`s`$-rule. If $`n>k`$, the $`n`$ M2-branes cannot be arranged without violating the $`s`$-rule. So, supersymmetry is broken. For $`n≤k`$ the number of possible supersymmetric configurations coincides with the number of choices of $`n`$ points out of $`k`$ points, that is, $`\left(\genfrac{}{}{0pt}{}{k}{n}\right)`$. This number does not seem to be the same as the index (2.21). However, precisely speaking, the gauge group $`G`$ on the $`n`$ D3-branes which we are considering is $`U(n)`$ rather than $`SU(n)`$. Since the computation of the index in section 2 is for $`G=SU(n)`$, this discrepancy occurs. To compare with the index for $`SU(n)`$ gauge theory, we must fix one of the positions of the M2-branes, which corresponds to a phase of the overall $`U(1)`$ factor of $`U(n)`$. If we fix the position of one M2-brane, the number of remaining supersymmetric arrangements of the M2-branes is $`\left(\genfrac{}{}{0pt}{}{k-1}{n-1}\right)`$. This number exactly agrees with the index (2.21). A similar derivation of the supersymmetric index by counting possible supersymmetric configurations is discussed in the case of the M-theory description of $`N=1`$ supersymmetric Yang-Mills theory in Ref. . As discussed in Ref. , the positions of the M2-branes correspond to vevs of the Wilson line operator $`W_2`$ along the $`x^2`$-direction. In the supersymmetric configuration, all of the M2-brane positions are different. So, we find that the vevs of the Wilson line operator must take the following form $$W_2=\mathrm{diag}(e^{2\pi im_1/k},e^{2\pi im_2/k},\mathrm{},e^{2\pi im_n/k}),$$ (4.3) up to an overall phase, where the $`m_i`$ are distinct integers which satisfy $`0≤m_1<m_2<\mathrm{}<m_n<k`$. Therefore, we can conclude that in the supersymmetric phase, the gauge symmetry of $`N=2,3`$ $`U(n)`$ Yang-Mills Chern-Simons theory is broken to $`U(1)^n`$ by the vev of the Wilson line operator.
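The counting argument above can be made completely explicit by brute force (our illustrative sketch, not from the paper): enumerate which of the $`k`$ intersection points carry an M2-brane, fix one brane at point 0 to mod out the overall $`U(1)`$, and compare with the index (2.21).

```python
import cmath
from itertools import combinations
from math import comb

def susy_configs(n, k):
    """All s-rule-respecting placements of n M2-branes on k intersection points,
    with one brane fixed at point 0 (removing the overall U(1) phase)."""
    return [c for c in combinations(range(k), n) if 0 in c]

def wilson_line(ms, k):
    """Diagonal Wilson line vevs W_2 of eq. (4.3) for occupied positions ms."""
    return [cmath.exp(2j * cmath.pi * m / k) for m in ms]
```

For $`n≤k`$ the count reproduces $`\left(\genfrac{}{}{0pt}{}{k-1}{n-1}\right)`$, and for $`n>k`$ the list is empty, matching the vanishing of the index.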
(For $`G=SU(n)`$, it is broken to $`U(1)^{n-1}`$.) ### 4.2 Supersymmetric quantum mechanics on the brane Let us next consider a dual description of the above Type IIB brane configuration which describes supersymmetric Yang-Mills Chern-Simons theory. That picture makes it easier to understand the connection to the computation of the index. We first consider Maxwell Chern-Simons theory for simplicity. The corresponding brane configuration is a single D3-brane suspended between an NS5-brane and a $`(k,1)`$5-brane. If we consider the long wavelength limit of the Maxwell Chern-Simons Lagrangian, in which we drop all spatial derivatives, we obtain $$L=\frac{1}{2g^2}\dot{A}_i^2+\frac{k}{2}ϵ^{ij}\dot{A}_iA_j,$$ (4.4) where the gauge coupling $`g`$ is given by the string coupling $`g_s`$ and the distance $`L`$ between the two 5-branes as $`1/g^2=L/g_s`$. This Lagrangian has exactly the same form as the Lagrangian for a non-relativistic charged particle with mass $`1/g^2`$ moving in the plane in the presence of an external magnetic flux $`k`$ perpendicular to the plane. Therefore, the Maxwell Chern-Simons system has the same canonical structure as the Landau problem. On the other hand, if we take T-duality along the $`x^1`$\- and $`x^2`$-directions and S-duality in Type IIB theory applied to the $`N=2`$ configuration (4.1), we have $$\begin{array}{c}\text{D5}(012345)\hfill \\ \text{F1}(0|6|)\hfill \\ k\text{D3}\left(078\left[\genfrac{}{}{0pt}{}{5}{9}\right]_\theta \right)\text{D5}\left(01278\left[\genfrac{}{}{0pt}{}{5}{9}\right]_\theta \right)\hfill \end{array},$$ (4.5) where $`k\text{D3}\text{D5}`$ is a bound state of $`k`$ D3-branes and a D5-brane. In the D3-D5 bound state, the $`k`$ D3-branes are regarded as $`k`$ units of magnetic flux on the D5-brane. Therefore, on the D3-D5 bound state the end of the fundamental string stretched from the D5-brane looks like a charged particle in $`k`$ units of magnetic flux. (See Fig. 2.)
Since the gauge field configuration $`A_i`$ maps to the position of the fundamental string $`X_i`$ under T-duality, after using T-duality and S-duality we indeed obtain the (supersymmetric) quantum mechanical Lagrangian of the Landau problem for the position of the string in the $`(x^1,x^2)`$-plane $$L=\frac{1}{2}\left(\frac{L}{2\pi l_s^2}\right)\dot{X}_i^2+\frac{\stackrel{~}{k}}{2}ϵ^{ij}\dot{X}_iX_j,$$ (4.6) where $`l_s`$ is the string length and $`\stackrel{~}{k}=\frac{k}{2\pi l_s^2g_s}`$. This is the Lagrangian for a particle whose mass is the string tension times the length $`L`$, in a magnetic flux proportional to $`k`$, as expected from the dualized brane configuration. If we now extend the above situation to $`SU(n)`$ gauge theory, $`n`$ becomes the number of particles. Due to the $`SU(n)`$ restriction the sum of the particle positions must vanish. The moduli space of $`n`$-tuples of such particle positions on the torus is known to be a copy of the complex projective space $`ℂℙ^{n-1}`$ , which is the phase space of the quantum mechanical system (4.6). After all, the analysis of the vacuum structure of supersymmetric Yang-Mills Chern-Simons theory is replaced, by using U-duality, with the quantization of the supersymmetric quantum mechanics on the D3-D5 bound state. This is the same as the microscopic computation of the index in section 2. The covariant derivative (2.9) and the supercharges (2.10) and (2.11) can be regarded as operators acting on the supersymmetric quantum mechanics of the Landau problem. Finally, we comment on a non-commutative nature of the torus. In the limit $`L≪l_s`$, the Lagrangian (4.6) becomes $`L=\frac{\stackrel{~}{k}}{2}ϵ^{ij}\dot{X}_iX_j`$. This is first order in time derivatives, so the two coordinates $`X_1`$ and $`X_2`$ are canonically conjugate to one another, that is, $$[X_i,X_j]=iϵ_{ij}/\stackrel{~}{k}.$$ (4.7) Thus, the coordinates of the ends of open strings are non-commutative.
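The statement that the coordinates become conjugate in the strong-flux limit can be checked numerically in the ordinary Landau problem. The sketch below is our own illustration, with all names invented: it builds position and momentum operators in a truncated oscillator basis, forms the guiding-center coordinates in the symmetric gauge, and verifies $`[R_x,R_y]=-i/B`$ away from the truncation edge, the finite-$`\stackrel{~}{k}`$ analogue of (4.7).

```python
import math

def kron(A, B):
    m = len(B)
    n = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n)] for i in range(n)]

def mul(A, B):
    n = len(A)
    return [[sum(A[i][l] * B[l][j] for l in range(n)) for j in range(n)] for i in range(n)]

def lincomb(c1, A, c2, B):
    return [[c1 * a + c2 * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def comm(A, B):
    return lincomb(1, mul(A, B), -1, mul(B, A))

Nlev = 6  # oscillator levels kept per mode (basis truncation)
a = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(Nlev)] for i in range(Nlev)]
ad = [[a[j][i] for j in range(Nlev)] for i in range(Nlev)]        # a-dagger (a is real)
x1 = lincomb(1 / math.sqrt(2), a, 1 / math.sqrt(2), ad)          # position operator
p1 = lincomb(1j / math.sqrt(2), ad, -1j / math.sqrt(2), a)       # momentum operator
Id = [[1.0 if i == j else 0.0 for j in range(Nlev)] for i in range(Nlev)]

X, Y = kron(x1, Id), kron(Id, x1)
PX, PY = kron(p1, Id), kron(Id, p1)

B = 3.0                                  # magnetic flux density, the analogue of k~
pix = lincomb(1, PX, B / 2, Y)           # kinetic momenta in the symmetric gauge
piy = lincomb(1, PY, -B / 2, X)
Rx = lincomb(1, X, 1 / B, piy)           # guiding-center coordinates
Ry = lincomb(1, Y, -1 / B, pix)
C = comm(Rx, Ry)                         # equals -i/B away from the truncation edge
```

Only matrix elements involving the highest kept level deviate from $`-i/B`$, which is the usual artifact of truncating the oscillator basis.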
We can understand this property as the non-commutative torus by turning on a non-zero B-field background using a gauge transformation . This non-commutative nature is closely related to the dynamics of Chern-Simons theory. ### 4.3 Duality and mirror relations In this subsection we present a family of theories with a Chern-Simons term which have the same supersymmetric index. We first consider an exchange of 5-branes in the $`x^6`$-direction. When two different types of twisted 5-branes cross in the $`x^6`$-coordinate and exchange positions, some D3-branes are annihilated or created by the Hanany-Witten transition . For example, if $`n`$ D3-branes are stretched between the NS5-brane and the $`(k,1)`$5-brane ($`n<k`$), then the number of stretched D3-branes becomes $`k-n`$ after the transition. This phenomenon is also simply explained from the M-theoretical point of view . In M-theory, the Hanany-Witten transition is the creation or annihilation of a single M2-brane between two twisted M5-branes, which cannot avoid each other in the configuration space. In the case of $`n<k`$, M2-branes can be attached at $`n`$ different positions among the $`k`$ allowed positions on the M5-brane. After exchanging the positions of the two M5-branes in the $`x^6`$-coordinate, the $`n`$ M2-branes disappear and new M2-branes are created at the $`k-n`$ open positions. Thus, we find that by the Hanany-Witten transition the gauge group of $`N=2,3`$ supersymmetric $`U(n)`$ Yang-Mills Chern-Simons theory at level $`k`$ becomes $`U(k-n)`$. (If $`G=SU(n)`$, then $`\widehat{G}=SU(k-n+1)`$ since we must fix one of the M2-branes.) Under this transition the number of possible supersymmetric configurations, namely the index, does not change because of the equality of the combinations $`\left(\genfrac{}{}{0pt}{}{k}{n}\right)=\left(\genfrac{}{}{0pt}{}{k}{k-n}\right)`$. Another operation on the brane configuration is S-duality in Type IIB theory.
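In the counting picture of section 4.1, the Hanany-Witten transition acts simply as complementation of the occupied intersection points; the small sketch below (ours, with invented names) makes the invariance of the $`U(n)`$ index manifest.

```python
from itertools import combinations
from math import comb

def hw_transition(config, k):
    """Hanany-Witten move: occupied intersection points become empty and
    vice versa, so n stretched D3-branes become k - n."""
    return tuple(sorted(set(range(k)) - set(config)))

k, n = 5, 2
configs = list(combinations(range(k), n))          # U(n) configurations
images = [hw_transition(c, k) for c in configs]    # U(k-n) configurations
```

Complementation is a bijection between $`n`$-brane and $`(k-n)`$-brane configurations, which is the combinatorial content of $`\left(\genfrac{}{}{0pt}{}{k}{n}\right)=\left(\genfrac{}{}{0pt}{}{k}{k-n}\right)`$.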
If we apply S-duality to the brane configuration which describes supersymmetric Yang-Mills Chern-Simons theory at level $`k`$, we have a self-dual model at level $`1/k`$ as the worldvolume effective theory . Since this S-duality transformation can be understood as just a coordinate flip of $`x^2`$ and $`x^{10}`$ in M-theory, there is no difference between the configurations of the Yang-Mills Chern-Simons theory and the self-dual model themselves. So, we expect that these theories have the same supersymmetric index. In this way, we can construct a family of theories with the same index by using these operations in superstring theory. For $`N=2,3`$ supersymmetric theories, the $`U(n)`$ and $`U(k-n)`$ Yang-Mills Chern-Simons theories at level $`k`$ and the $`U(n)`$- and $`U(k-n)`$-valued self-dual models at level $`1/k`$ all have the same supersymmetric index. This suggests that these theories are related to each other by duality and mirror symmetry of three-dimensional supersymmetric field theory. ## 5 Conclusion and discussions We have investigated in this paper the relation between the supersymmetric index of three-dimensional supersymmetric Yang-Mills Chern-Simons theory and the brane configuration in Type IIB superstring theory. We have found that when the index is non-zero, the corresponding brane configuration is supersymmetric. Conversely, when the index vanishes, the configuration violates the $`s`$-rule and becomes non-supersymmetric. In addition, we have found that the number of possible supersymmetric configurations exactly coincides with the value of the index. These results can be considered as an explanation of the $`s`$-rule of brane dynamics from the point of view of the worldvolume effective theory. The computation of the index can be generalized to other gauge groups $`G`$ of rank $`r`$ .
The moduli space $`ℳ`$ of flat $`G`$-connections is a weighted projective space $`𝕎ℙ_{s_0,s_1,\mathrm{},s_r}^r`$, where the weights $`s_i`$ are 1 and the coefficients of the highest coroot of $`G`$ and obey $`\sum _{i=0}^rs_i=h`$. The supersymmetry condition is also generalized by using this dual Coxeter number $`h`$. The corresponding brane configuration is probably constructed by adding orientifold planes, as in the case of supersymmetric Yang-Mills theory with orthogonal and symplectic gauge groups . It would be interesting to find the correspondence between the index and the more general $`s`$-rule including the orientifold planes. The original computation of the index is for $`N=1`$ Yang-Mills Chern-Simons theory, which is chiral. It is generally hard to construct chiral theories on branes. Moreover, the Chern-Simons coefficient of the $`N=1`$ supersymmetric theory is renormalized and shifted by $`\mathrm{sgn}(k)\frac{h}{2}`$ . This renormalization is closely related to the derivation of the index. However, we have not understood the meaning of this renormalization of the coefficient in terms of the branes in superstring theory. We hope that we can also analyze the dynamics of theories with less supersymmetry by using branes. The construction of the dual or mirror theory in terms of brane dynamics is simple, as mentioned. However, the field theoretical meanings of these symmetries have not been so clear. It is interesting to extend the analysis of Ref. to the non-Abelian case. We hope that the brane configurations in superstring theory help us to understand the non-perturbative dynamics of supersymmetric quantum field theories in three dimensions. ### Note added As I completed this paper, I received Ref. , which also considers supersymmetry breaking in $`N=3`$ Yang-Mills Chern-Simons theory and its $`N=2,1`$ deformations in a similar manner using branes.
## Acknowledgments I would like to thank the organizers of the Summer Institute ’99 in Fuji-Yoshida for hospitality during completion of this work. I am also grateful to all participants for useful comments and discussions.
# New Tetrahedral Global Minimum for the 98-atom Lennard-Jones Cluster ## Abstract A new atomic cluster structure corresponding to the global minimum of the 98-atom Lennard-Jones cluster has been found using a variant of the basin-hopping global optimization algorithm. The new structure has an unusual tetrahedral symmetry with an energy of $`-543.665361ϵ`$, which is $`0.022404ϵ`$ lower than the previous putative global minimum. The new LJ<sub>98</sub> structure is of particular interest because its tetrahedral symmetry establishes it as one of only three types of exceptions to the general pattern of icosahedral structural motifs for optimal LJ microclusters. Similar to the other exceptions the global minimum is difficult to find because it is at the bottom of a narrow funnel which only becomes thermodynamically most stable at low temperature. 02.60.Pn,36.40.Mr,61.46.+w The determination of the global minima of Lennard-Jones (LJ) clusters by numerical global optimization techniques has been intensely studied in the size range $`N`$=13-147 by both chemical physicists and applied mathematicians. The LJ potential, which is given by $$E=4ϵ\sum _{i<j}\left[\left(\frac{\sigma }{r_{ij}}\right)^{12}-\left(\frac{\sigma }{r_{ij}}\right)^6\right],$$ (1) where $`ϵ`$ is the pair well depth and $`2^{1/6}\sigma `$ is the equilibrium pair separation, is a simple yet reasonably accurate model of the interactions between heavy rare gas atoms. In general, there has been good agreement between physical measurements on rare gas clusters from electron diffractometry and mass spectrometry and computational global optimization results regarding magic number sizes and corresponding cluster geometries . Both approaches find that Mackay icosahedra are the dominant structural motif. The LJ microcluster problem has also become a benchmark for evaluating global optimization algorithms.
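Eq. (1) is straightforward to evaluate for a given set of coordinates; a minimal sketch in reduced units (function name ours):

```python
def lj_energy(coords, eps=1.0, sigma=1.0):
    """Total Lennard-Jones energy, eq. (1); coords is a list of (x, y, z) tuples."""
    E = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            r2 = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j]))
            sr6 = (sigma * sigma / r2) ** 3
            E += 4.0 * eps * (sr6 * sr6 - sr6)
    return E
```

A dimer at the equilibrium separation $`2^{1/6}\sigma `$ has energy $`-ϵ`$, and an equilateral triangle with that side length has energy $`-3ϵ`$, which are convenient sanity checks.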
The number of local minima (excluding permutational isomers) on the potential energy surface (PES) is believed to grow exponentially with $`N`$ and is estimated to be of the order of $`10^{40}`$ for $`N`$=98. A wide variety of global optimization techniques including simulated annealing , genetic algorithms , smoothing and hypersurface deformation techniques , lattice methods , growth sequence analysis , and tunneling have been applied to the problem. Unbiased methods that make no assumptions regarding cluster geometry are of the most interest, since these have the best chance of successful generalization to more complex potentials such as those in the protein folding problem. Most of the global minima in this size range were first found by Northby in a lattice-based search of icosahedral structures . These structures consist of a core Mackay icosahedron (Figure 1b) surrounded by a partially filled outer shell. More recently, there have been a number of improvements in some of these putative global minima. Firstly, further refinements of Northby’s algorithm, particularly the relaxation of the assumption that the core Mackay icosahedron is always complete, have led to a number of new global minima . Secondly, consideration of particularly stable face-centred-cubic (fcc) and decahedral forms has also led to new global minima . At $`N`$=38 the global minimum is an fcc truncated octahedron (Figure 1a) and at $`N`$=75-77 and 102-104 the global minima are based on Marks decahedra (Figure 1c). Thirdly, powerful unbiased global optimization algorithms, particularly the basin-hopping and genetic algorithms, have recently begun to catch up with those methods that incorporate particular physical insights into the LJ problem, and are now able to find all the known lowest-energy minima. Given this combined attack on the LJ optimization problem, it might have been imagined that all the global minima for $`N<150`$ had been found.
Here, however, we report a new lowest-energy structure for LJ<sub>98</sub>, which has an energy of $`-543.665361ϵ`$ and $`T_d`$ point group symmetry. This compares to an energy of $`-543.642957ϵ`$ for the previous icosahedral putative global minimum, which was found by Deaven et al . The LJ<sub>98</sub> global minimum is organized around a central fcc tetrahedron with four atoms on each edge (Figure 2c). Four additional fcc tetrahedra (minus apices) are erected over the faces of the central tetrahedron to form a 56-atom stellated tetrahedron (Figure 2b). An additional 42 atoms decorate the close-packed sites on the surface of the stellated tetrahedron to complete the structure (Figure 2a). The new LJ<sub>98</sub> structure is of particular interest because its tetrahedral symmetry establishes it as only the third known type of exception to the general pattern of icosahedral structural motifs for optimal LJ microclusters, and the first to be discovered by an unbiased optimization method. Given its unusual structure one might wonder why it is so low in energy. For LJ clusters optimizing the energy is a balance between maximizing the number of nearest neighbours and minimizing the strain energy (the energetic penalty for nearest-neighbour distances deviating from the equilibrium pair value) . The spherical shape and high proportion of $`\{111\}`$ faces give the structure a large number of nearest neighbours (432 compared to 437 for the lowest-energy icosahedral minimum and 428 for the lowest-energy decahedral structure), whilst its strain energy is intermediate between icosahedral and decahedral structures. The lower strain energy allows it to be lower in energy than the icosahedral minima, even though it has fewer nearest neighbours. The strain in the structure is focussed around the six edges of the central fcc tetrahedron. The atoms along these edges have the same local coordination as atoms along the five-fold axis of a decahedron.
It is also natural to ask how general this structure is. Firstly, analogous structures can be formed with smaller and larger tetrahedra at their core. The previous one in this series is at $`N`$=34 and the next one is at $`N`$=195. However, these structures are not energetically competitive: the former because it has too high a proportion of $`\{100\}`$ faces, and the latter because it is not sufficiently spherical. Secondly, the structures of the other non-icosahedral LJ global minima have been experimentally observed for gold and nickel clusters, and found to be particularly stable in theoretical calculations of transition metal clusters . Therefore, we performed some optimization calculations for the Sutton-Chen family of potentials . The tetrahedral structure was lowest in energy for silver, but a decahedral minimum was lower in energy for nickel and an fcc minimum for gold. This is consistent with previous results, which indicated that, of these three metals, silver clusters exhibited ordered structures with the most strain. The new LJ<sub>98</sub> optimum was found using a variant of the basin-hopping global optimization algorithm . The key idea behind the algorithm is the mapping of the original LJ potential energy function, $`E(𝐱)`$, for each point $`𝐱`$ on the $`3N`$-dimensional Cartesian coordinate space onto a “transformed” energy function, $`T(𝐱)`$. $`T(𝐱)`$ takes the value of $`E(𝐱)`$ at the local minimum, $`𝐱_{\mathrm{𝐦𝐢𝐧}}`$, arrived at by applying a given local optimization procedure, such as the conjugate gradient algorithm, with $`𝐱`$ as the starting point for the algorithm. Thus $`T(𝐱)`$ is a “plateau” function that takes on the constant value $`E(𝐱_{\mathrm{𝐦𝐢𝐧}})`$ on the catchment basin surrounding each local minimum $`𝐱_{\mathrm{𝐦𝐢𝐧}}`$.
$`T(𝐱)`$ is a lower bound to $`E(𝐱)`$ and coincides with $`E(𝐱)`$ at all of the latter’s local minima, but all barriers are removed in the $`T(𝐱)`$ landscape and transitions between catchment basins can take place all along the basin boundaries. The original basin-hopping algorithm consists of a Metropolis search of the transformed landscape, $`T(𝐱)`$, using a Monte Carlo sampling procedure to move between local minima. In the variant used in the discovery of LJ<sub>98</sub> , the Metropolis criterion of accepting uphill moves with a probability that is an exponentially decreasing function of the energy increment is abandoned in favor of only accepting downhill moves. The algorithm is restarted from a fresh random starting local minimum whenever progress stalls for a sufficiently large number of move attempts. The variant was successful in locating the LJ<sub>98</sub> global minimum in 6 of 1000 random starts, with a mean computational time between encounters of about 30 hours on a 333 MHz Sun Ultra II processor. This structure has also been subsequently found using the original basin-hopping algorithm . These results show that the LJ<sub>98</sub> global minimum is particularly difficult to find. The origins of this difficulty are probably similar to those for the other non-icosahedral clusters. Analyses of the PESs of LJ<sub>38</sub> and LJ<sub>75</sub> using disconnectivity graphs have shown that they consist of a wide icosahedral “funnel” and a much narrower funnel leading to the global minimum . On relaxation down the PES the cluster is much more likely to enter the icosahedral funnel, where it is then trapped because of the large (free) energy barriers to escape from this funnel into the funnel of the global minimum. This situation is compounded by the thermodynamics of these clusters . The icosahedral funnel has a larger entropy because of the larger number of low-energy minima, and so the funnel of the global minimum is only lowest in free energy at low temperatures. 
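A minimal sketch of the downhill-only variant with restarts, run on a one-dimensional toy landscape rather than the $`3N`$-dimensional LJ surface. The hop size, patience, and restart count are illustrative choices of ours, not the published settings.

```python
import math
import random

def local_min(E, dE, x, step=0.005, tol=1e-8, iters=200000):
    """Crude gradient descent standing in for the local optimizer."""
    for _ in range(iters):
        g = dE(x)
        if abs(g) < tol:
            break
        x -= step * g
    return x

def monotonic_basin_hopping(E, dE, span=4.0, hop=0.8, patience=50,
                            restarts=40, seed=0):
    """Downhill-only basin hopping with random restarts: accept a hop only
    if the new local minimum is lower; restart after `patience` stalls."""
    rng = random.Random(seed)
    best_x, best_e = None, float("inf")
    for _ in range(restarts):
        x = local_min(E, dE, rng.uniform(-span, span))
        e = E(x)
        stalls = 0
        while stalls < patience:
            x_new = local_min(E, dE, x + rng.gauss(0.0, hop))
            e_new = E(x_new)
            if e_new < e:          # accept only downhill moves
                x, e, stalls = x_new, e_new, 0
            else:
                stalls += 1
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Rugged toy landscape with many local minima; global minimum near x = -0.5.
E = lambda x: x * x + 5.0 * math.sin(3.0 * x)
dE = lambda x: 2.0 * x + 15.0 * math.cos(3.0 * x)
```

The downhill-only rule gives up the detailed-balance guarantees of Metropolis sampling; the restarts compensate by resampling the landscape whenever a run gets stuck in one funnel.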
Therefore, at temperatures where the dynamics occur at a reasonable rate there is a thermodynamic driving force to enter the icosahedral funnel. For LJ<sub>98</sub> there are at least 114 minima that are lower in energy than the second lowest-energy minimum in the tetrahedral funnel, and so the global minimum is only lowest in free energy below $`T`$=0.0035$`ϵk^{-1}`$ (a typical melting temperature for a LJ cluster is 0.3$`ϵk^{-1}`$). This transition temperature is markedly lower than for LJ<sub>38</sub> or LJ<sub>75</sub> . The basin-hopping transformation of the PES helps to ameliorate some of these difficulties. The transformation changes the thermodynamics so that the global minimum still has a significant occupation probability at temperatures where the cluster can escape from the icosahedral funnel. However, on relaxation down the PES the system is still much more likely to enter the icosahedral funnel. For example, our optimization runs were fifteen times more likely to terminate at the lowest-energy LJ<sub>98</sub> icosahedral minimum than at the global minimum. Coordinate files for the new LJ<sub>98</sub> structure, as well as all other putative LJ microcluster global optima, can be found in the Cambridge Cluster Database .
# Possible charge inhomogeneities in the CuO2 planes of YBa2Cu3O6+x (x=0.25, 0.45, 0.65, 0.94) from pulsed neutron diffraction ## I Introduction The observation of charge stripes in La<sub>2-x-y</sub>Nd<sub>y</sub>Sr<sub>x</sub>CuO<sub>4</sub> raises the interesting possibility that inhomogeneous charge distributions in general, and stripes in particular, are a generic phenomenon of high-temperature superconductors. Charge stripes give rise to local structural distortions which can be detected using local structural probes. For example, both atomic pair distribution function (PDF) analysis of neutron powder diffraction data and extended x-ray absorption fine structure (XAFS) indicate that in the La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> system local structural distortions exist which are consistent with such charge inhomogeneities. It is clearly important to establish their presence more widely in the high-temperature superconductors. A long-standing controversy exists between the diffraction and XAFS communities concerning the existence of a double-well potential for the apical, O(4), ion in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub> (123). This was first reported from XAFS data as a double well with the minima of the wells separated along the $`c`$-direction by 0.13 Å. It was also reported that the well structure becomes modified close to T<sub>c</sub>. The controversy arose because this result seemed to contradict single-crystal and powder diffraction data which gave no evidence of enlarged thermal factors for O(4) along the $`z`$ direction, as would be expected if such a double well existed. In addition, the XAFS result predicts an anomalously short Cu(1)-O(4) bond (Cu(1) is the chain copper) which seems questionable on chemical grounds. 
Nonetheless, subsequent XAFS studies have consistently reproduced the main result: that the Cu(1)-O(4) and Cu(2)-O(4) pair distributions from the data are best modeled as each having two equally populated components separated by $`\sim 0.1`$ Å. There appears to be no correlation between superconducting properties and the observation of the split position. Also, the split Cu-O(4) correlations are not present in all samples. The importance of this structural feature to the superconductivity is clearly doubtful; however, a solution to this controversy may elucidate important information about the properties of these materials, especially in light of the significant evidence that lattice effects are important in the superconductors. We have taken a different approach to study this problem. We have made an atomic pair distribution function (PDF) analysis of neutron powder diffraction data. The PDF technique is a diffraction technique which, nonetheless, reveals local atomic structure directly. In this sense it bridges the diffraction and local-structure domains. We would expect the PDF to reflect the atomic pair distributions observed with XAFS, whereas a conventional Rietveld analysis of the same data set should recover the crystallographic result. In the PDF technique the total scattering data are measured, including both Bragg and diffuse scattering. These data are Fourier transformed into real space, yielding the PDF directly. There are two main advantages of this approach over XAFS. First, the data reduction to obtain the PDF is straightforward and deductive and results in a virtually undistorted, high-resolution PDF. This can also be recovered from XAFS data, but only by careful fitting procedures. Second, the PDF is obtained over a wide range of atomic separation, $`r`$. 
This allows data modeling to be carried out over an extended range of the PDF, which puts more constraints on possible data interpretations and makes structural solutions more (though not completely) unique. The disadvantage of the present study with respect to XAFS is that the total PDF is measured rather than a chemically specific PDF, which has fewer atom-pairs contributing to the observed PDF. Also, most XAFS studies relating to this question were made on oriented samples with a polarized beam, which further reduces the number of correlations in the resulting PDF. Whilst this kind of analysis is, in principle, possible using diffraction, it has not been done. We have measured PDFs from four samples of YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub> with x=0.25, 0.45, 0.65, 0.94 at 15K. These were analyzed using PDFFIT, a full-profile fitting program analogous to Rietveld refinement but which fits the PDF and therefore yields local structural information. The resulting structural parameters are in good agreement with the average crystal structure; in particular, we refine a thermal factor on the O(4) site which is not unphysically large. We have attempted to refine split positions on O(4) without success. However, we do refine a split position along $`z`$ on the in-plane copper site which results in a small improvement in agreement. We interpret this observation in terms of an inhomogeneous charge distribution in the CuO<sub>2</sub> plane consistent with the presence of localized charges; for example, as would be expected in the presence of charge stripes. ## II Experimental The YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub> samples were prepared using standard solid state reaction methods. Stoichiometric quantities of CuO, BaCO<sub>3</sub>, and Y<sub>2</sub>O<sub>3</sub> were ground in an Al<sub>2</sub>O<sub>3</sub> mortar and pestle under acetone until well mixed. 
The sample was air-dried and the powder contents were loaded into a 3/4” diameter steel die and uni-axially pressed at 1000 lbs. The pellet sample was removed and placed into an alumina boat. The sample was placed into a preheated (800 C) tube furnace under 1 atm of flowing oxygen. The furnace tube was sealed and the furnace temperature was immediately raised to 960 C. The sample was fired for 72 hours and quenched. After cooling to room temperature, the sample was removed, ground under acetone, and repressed. This firing cycle was repeated until the powder x-ray diffraction traces showed no evidence of the presence of second phases. The typical reaction time was on the order of 1 week. Fully oxidized YBCO ($`x=0.94`$) was prepared by heating a portion of the YBCO sample to 450 C in 1 atm PO<sub>2</sub>. The sample was cooled to room temperature at a rate of 2 C/min and at 1 atm PO<sub>2</sub>. The oxygen non-stoichiometric YBCO samples were prepared using data from Kishio et al. Portions of the first YBCO sample were divided and placed into separate alumina boats. The annealing temperature and PO<sub>2</sub> were determined from Kishio et al. as set by the desired oxygen content for each sample. An Ametek oxygen analyzer was used to set the oxygen content of the annealing gas mixture as determined from the furnace exhaust while the furnace was at room temperature. The oxygen concentration was maintained by diluting a 20% O<sub>2</sub>/Ar balance gas mixture with Ar. Each sample was annealed at the selected temperature until the oxygen content of the exhaust gas returned to the concentration that was initially set by the mixing manifold. Typically, upon heating, the samples would lose oxygen and this would cause a spike in the measured oxygen concentration. After equilibration, the boat was quenched to room temperature under the controlled PO<sub>2</sub> atmosphere with the aid of a Pt wire that passed through a septum cap to the furnace tube exterior. 
Thus the atmosphere inside the furnace was maintained during the entire anneal and subsequent quench to room temperature. The oxygen stoichiometry of each sample was determined by reduction of the sample in 6% H<sub>2</sub>/Ar forming gas. The TGA scans were made at 5 C/min to a maximum temperature of 1100 C. The gas flow rate was 80 ml/min. A high resolution Siemens D5000 powder x-ray diffractometer using Cu Kα radiation and an incident beam monochromator was used for XRD. A Perkin/Elmer TGA 7 was used for thermogravimetry. Neutron powder diffraction data were collected on the High Intensity Powder Diffractometer (HIPD) at the Manuel Lujan Neutron Scattering Center (MLNSC) at Los Alamos National Laboratory for the $`x=0.65`$ and 0.94 samples and on the Glass, Liquids and Amorphous Materials Diffractometer (GLAD) at the Intense Pulsed Neutron Source (IPNS) at Argonne National Laboratory for the $`x=0.25`$ and 0.45 samples. The samples of about 10 g were sealed in a cylindrical vanadium tube with helium exchange gas. Data were collected at various temperatures between 15 K and room temperature in a closed-cycle helium refrigerator. Additional data sets were collected to account for the scattering from the sample environment and the empty can. A vanadium rod was measured to account for the flux distribution at the sample position. The data are corrected for detector deadtime and efficiency, background, absorption, multiple scattering and inelasticity effects, and are normalized with respect to the incident flux and the total sample scattering cross-section to yield the total scattering structure function, $`S(Q)`$. This quantity is Fourier transformed according to $$G(r)=\frac{2}{\pi }\int _0^{\mathrm{}}Q[S(Q)-1]\mathrm{sin}(Qr)𝑑Q.$$ (1) Data collection and analysis procedures have been described elsewhere. Random errors in the data from statistical counting fluctuations are estimated by propagating the errors from the raw data using standard error propagation. 
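Eq. (1) can be evaluated numerically by direct quadrature over the measured Q grid. The sketch below uses a synthetic S(Q) built from a single damped sinusoid, chosen so that the transform produces one peak at an assumed pair distance of 1.9 Å; the damping constant, Q<sub>max</sub>, and grids are illustrative, not the experimental values.

```python
import math

def pdf_from_sq(Q, S, r_values):
    """Evaluate G(r) = (2/pi) * Int_0^Qmax Q [S(Q) - 1] sin(Q r) dQ by the
    trapezoidal rule on the measured Q grid (the finite Qmax truncation
    produces the usual termination ripples)."""
    G = []
    for r in r_values:
        f = [q * (s - 1.0) * math.sin(q * r) for q, s in zip(Q, S)]
        G.append(2.0 / math.pi * sum(0.5 * (f[i] + f[i + 1]) * (Q[i + 1] - Q[i])
                                     for i in range(len(Q) - 1)))
    return G

# Synthetic structure function: Q[S(Q)-1] = sin(Q*r0) exp(-a Q^2), so the
# transform should peak at the assumed pair distance r0 = 1.9 A.
r0, qmax, n = 1.9, 25.0, 2000
Q = [qmax * i / n for i in range(1, n + 1)]
S = [1.0 + math.sin(q * r0) * math.exp(-0.005 * q * q) / q for q in Q]
r_values = [0.5 + 0.01 * i for i in range(300)]
G = pdf_from_sq(Q, S, r_values)
peak_r = r_values[max(range(len(G)), key=lambda i: G[i])]
```

A sharper real-space peak requires data to higher Q<sub>max</sub>, which is the practical reason the PDF fits below use a wider Q range than the Rietveld fits.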
The error propagation process has been described in detail elsewhere . The reduced structure factor $`F(Q)=Q[S(Q)-1]`$ from a typical data-set is shown in Fig. 1(a). The resulting PDF is shown in Fig. 1(b). The PDF is a real-space representation of the local structure in the form of pair distances. Modeling of the PDF was carried out using the PDFFIT program to perform a least-squares full-profile fit. The structural inputs for the program are atomic positions, occupancies and anisotropic thermal factors, directly analogous to Rietveld refinements, allowing direct comparison of data analyzed in real and reciprocal space. Estimated standard deviations on the refined values are obtained from the variance-covariance matrix in the usual way. Again, we stress that the PDF is being fit and the local structure is obtained from the PDF refinement. This is because the PDF is obtained from both Bragg and diffuse scattering whereas the Rietveld fits include only Bragg scattering. We have also carried out Rietveld refinement of our data in reciprocal space using the program GSAS, and these fits are compared with the PDF fits. Local distortions away from the average structure are incorporated into the PDF modeling by reducing the symmetry or increasing the size of the unit cell used in the model. ## III Results ### A Comparison with the crystallographic structure We would like to know if refinements of the PDF reproduce the average crystallographic structure. We compare our refinements with the results of the single-crystal x-ray diffraction study of Schweiss et al. The results are shown in Table I. The data are from the $`x=0.94`$ sample taken at 90 K, and the constraints on the atomic positional and anisotropic thermal factors were made to mirror those of the single-crystal study. We took two separate data-sets at 90 K on cooling and warming. The results from each data-set reproduce very well, so only the cooling-cycle data-set refinements are reproduced in the table. 
The agreement with the single-crystal study is clearly very good. We have also compared our PDF fits with Rietveld refinements of our own data using the GSAS Rietveld package. Again, the agreement is good, although the Rietveld refinement produced some unphysical thermal factors. In the case of the PDF fits the data are used over a wider range of $`Q`$ than in the Rietveld fits: $`Q_{max}=25`$ Å<sup>-1</sup> for the PDFs and $`Q_{max}=15.7`$ Å<sup>-1</sup> ($`d=0.4`$ Å) for the Rietveld refinements. The qualitative results of the average structure are reproduced in the local structure; for example, U<sub>33</sub> on the apical oxygen site is small (0.0046(2)) in both the Rietveld and PDFFIT refinements. The good agreement between the PDF and crystallographic fits suggests that the difference between the XAFS and crystallographic results cannot be explained simply by the different length-scales of the two measurements. ### B Comparison with the XAFS results In Fig. 2(a) we compare two PDF models which simulate the crystallographic and XAFS models. The PDF calculated using the average crystal structure is shown as a solid line. The dashed line shows the PDF calculated when the O(4) site is split by 0.13 Å along $`z`$ and each site is occupied 50%, as suggested by the XAFS results. All other parameters in the models were kept the same. The difference is shown below. The presence of such a split position clearly has a small but significant effect on the PDF throughout the $`r`$-range. The magnitude of the signal expected from a split O(4) site is small in the total PDF; however, by fitting over a range of $`r`$ it should be possible to establish the existence of such a split from the PDF if it exists. In Fig. 2(b) we show the data from $`x=0.94`$, $`T=90`$ K, as symbols with the PDF from the converged crystallographic model plotted as a solid line. This is the same as the solid line in Fig. 2(a). The difference curve is shown below. 
If the split site exists in the data but not in the model, which is constrained to have the crystallographic symmetry with no split site, we would expect the two difference curves to be similar. The difference curves appear uncorrelated, suggesting qualitatively that the split O(4) site is not present in the data. ### C Search for anharmonic atomic sites Using PDFFIT we searched for possible split positions in the structure. First, motivated by the XAFS model, we tried refining a split position for the O(4) ion along the $`z`$-direction. Refinements were carried out over the ranges $`1\le r\le 15`$ Å and $`1\le r\le 5`$ Å to check the response of the intermediate- and short-range structure. An initial split of 0.1 Å was given to the model and $`\mathrm{\Delta }`$, the magnitude of the split, was allowed to vary. The occupancy of each split site was set initially to 0.5 but allowed to vary with the constraint that $`n_A+n_B=1.0`$, where $`n_A`$ and $`n_B`$ are the occupancies of the $`A`$ and $`B`$ sites respectively (see Fig. 3 for details). In both cases $`\mathrm{\Delta }`$ refined to a negligible value, again questioning the existence of such a split site. Based on the crystallographic results we notice that the chain oxygen, O(1), has a large thermal factor along the $`a`$ direction perpendicular to the chain. Also Cu(1), the chain copper, has a relatively large thermal factor perpendicular to the chain. Also, interestingly, the in-plane copper, Cu(2), has a similarly large thermal factor along $`z`$. We searched for evidence of split atomic sites on each of these atoms in the directions indicated, taking a similar approach to that used for the O(4). Interestingly, the only case where a split site refined to a finite value giving a lower residual was in the case of Cu(2) split along $`z`$. The results are summarized in Tables II and III. The geometry of the split Cu(2) $`z`$ sites is shown in Fig. 3. 
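The logic of the split-site refinement — a two-component model with occupancies constrained by n<sub>A</sub> + n<sub>B</sub> = 1 and the split magnitude Δ varied to minimize the misfit — can be mimicked on synthetic data. Everything below (the Gaussian peak shape, the 1.93 Å center, the 0.15 Å split, and a grid search standing in for PDFFIT's least squares) is an illustrative assumption, not the actual refinement.

```python
import math

def gauss(r, mu, sigma=0.08):
    """Unnormalized Gaussian peak profile (sigma assumed)."""
    return math.exp(-0.5 * ((r - mu) / sigma) ** 2)

def split_model(r, c, delta, nA):
    """Split-site peak with the occupancy constraint n_A + n_B = 1."""
    return nA * gauss(r, c - delta / 2) + (1.0 - nA) * gauss(r, c + delta / 2)

# Synthetic PDF peak generated from an (assumed) split site.
c, delta_true = 1.93, 0.15
r = [1.5 + 0.005 * i for i in range(200)]
data = [split_model(ri, c, delta_true, 0.5) for ri in r]

def residual(delta):
    """Sum of squared misfit for a candidate split magnitude."""
    return sum((d - split_model(ri, c, delta, 0.5)) ** 2
               for ri, d in zip(r, data))

# Grid search over the split magnitude Delta (least squares stand-in).
grid = [0.01 * k for k in range(31)]
best = min(grid, key=residual)
```

On noiseless synthetic data the residual vanishes at the true Δ; with real, noisy data the improvement over the unsplit model (Δ = 0) is what decides whether the split is significant, as in the Cu(2) case discussed below.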
The two distinct copper sites are labeled A and B, corresponding to sites which are displaced respectively towards and away from the apical oxygen. For fits over the range up to 15.27 Å, refinements to all the data-sets were stable and convergent for the split position on Cu(2) along $`z`$ (which was not the case for a split on O(4), for example); however, the split sometimes refined to a negligibly small value and the improvement in fit is always small and barely significant. The results are much more robust when the fit is confined to the range $`1.5<r<5.2`$ Å. In this case significant improvements in agreement factor are produced when splits in the range $`\mathrm{\Delta }=0.10(5)`$ to $`0.28(8)`$ Å are refined. Refined values from the GLAD data and the HIPD data are self-consistent, but there is not quantitative agreement between the results from these two instruments. The GLAD data refine a significantly larger split. This is consistent with the observation in Table II that GLAD thermal factors are consistently larger than those refined from HIPD data. The origin of this is that the GLAD data have a lower real-space resolution. We correct in the modeling for the experimental resolution coming from the finite $`Q_{max}`$ of the measurement; however, there is an additional loss in real-space resolution which comes from a $`Q`$-dependent asymmetric line broadening from time-of-flight spectrometers, which is significant in the GLAD data and is not, at present, corrected in the modeling. Thus, we expect the GLAD data to overestimate the split. The most likely value of the split is between 0.1 and 0.2 Å. The observation of disorder on the Cu(2) site but not the apical oxygen site is consistent with another recent differential PDF study on optimally doped YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub>. ## IV Discussion First we discuss the inability to refine a split site on O(1) perpendicular to the chain. 
The thermal factors are enormous on this site in this direction and there is certainly lateral disorder associated with the chain oxygens. It was therefore a surprise that a split position did not improve the fit. Presumably the reason is that the chain is buckled in such a way that the oxygen ions take up a variety of displaced positions rather than two distinct displaced positions. This implies that the wavelength of the buckling is longer than two unit cells. We cannot determine if this buckling is static or dynamic. A similar argument can be made for the Cu(1) displacements in the $`a`$ direction, though these are somewhat smaller than those observed on the chain oxygen ions. We turn now to the planar copper ions. In this case a finite split is refined with an improvement in residual. Based on preceding arguments, this suggests that the distribution of Cu(2) is bimodal (Fig. 3). This might be expected if every copper site is not in the same charge-state as would happen in the presence of polarons or local charge-stripes in which case some sites would be Cu<sup>2+</sup> and others Cu<sup>3+</sup>, for example. Cu(2) sits at the base of a pyramidal cap of oxygen ions and does not lie on a center of symmetry; this is in contrast to the copper in single layer materials where it is octahedrally coordinated. In the present case, in response to increasing its charge state, the copper might be expected to change its position in such a way as to move towards the negative oxygen ions: Cu<sup>3+</sup> is expected to move into the oxygen pyramid somewhat. This is seen in the average structure as a function of oxygen concentration. The average Cu(2)-O(4) bond length changes from 2.4421 Å to 2.2708 Å on going from $`x=0.1`$ to $`x=0.94`$. This effect is also seen in XAFS. The difference in these two bond lengths is 0.1713 Å which is a similar magnitude to the observed split, $`\mathrm{\Delta }`$. 
If there are charge inhomogeneities in the CuO<sub>2</sub> plane, copper ions with different Cu(2)-O(4) bond-lengths will coexist. In the absence of long-range order this will only be evident directly in the local structure, though an elongated thermal factor will be apparent crystallographically. Thus, the PDF results are consistent with crystallographic observations of an enlarged Cu(2) U<sub>33</sub> (at least in underdoped samples). Recently, yttrium XAFS results also point towards a Cu(2) site split along the $`z`$-direction. In this case the split is smaller ($`\sim 0.05`$ Å) and is correlated with displacements of O(2) and O(3), and probably of the whole pyramid of O(2,3,4). However, the disagreement with this study in the size of the split is probably not significant given the uncertainty of both measurements. It would be interesting to model correlated atom displacements; however, we plan to collect data with better statistics or differential PDF data before attempting this. Correlated displacements of Cu(2) in directions out of the plane are also consistent with the presence of diffuse scattering in electron diffraction. We also note that there is independent evidence for disorder on copper sites which may be correlated with the charge-state of the copper. Ion-channeling experiments found an anomaly in the Y-Ba-Cu signal in superconducting 123 around T<sub>c</sub> but not in the Y-Ba signal of the same sample. The motivation for this study was to see whether doping induces a double-well potential at the apical oxygen site as suggested from earlier XAFS work. Our results clearly rule out a local split site for O(4) along $`z`$. However, we note that our findings are not necessarily in contradiction with the XAFS results. This can be understood as follows: in an XAFS experiment the neighbourhood of a particular atomic species is probed. The single-scattering paths correspond to pair distance distributions similar to those observed with the PDF. 
However, one of the atoms in the pair is the photoabsorber. If a distance appears split, it is therefore ambiguous which of the two atoms in the pair sits on a split site. Thus, in the present case the beat observed in the Cu XAFS may possibly be explained by a split position on Cu(2). In principle this could be resolved by investigating the pair correlations between other atoms too. In practice this is difficult since typically in the XAFS data analysis multiple-scattering paths involving triple and higher scattering contribute significantly beyond the first two nearest-neighbour peaks. The multiple-scattering paths are generally difficult to take into account and their number grows rapidly with increasing distance from the photoabsorber. We also note that the XAFS signal is affected by the valence of the photoabsorber ion. We are not aware of an analysis of the XAFS data which takes into account the possibility that some Cu sites are 2+ and others 3+. This might also help to reconcile the XAFS and diffraction work. It is interesting that refinement of the Cu(2) split position is more robust when a narrower range of $`r`$ is fit. This suggests that even by 15 Å the structure resembles the average structure, and that any charge inhomogeneities are atomic scale: there is no evidence of even nanometer-scale charge phase separation. At optimal doping we would expect $`\sim 15\%`$ of Cu sites to contain holes. In the stripe model the distance of closest approach of these holes would be $`\sim 5.4`$ Å and the separation of stripes would be $`\sim 16`$ Å . Atomic pair correlations originating in one stripe and terminating in an adjacent one will more resemble the average structure than the local configuration either in a charge stripe or between stripes. Our PDF results support charge inhomogeneities on this atomic length scale rather than some kind of longer length-scale phase separation. 
Local structural inhomogeneities in La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> disappear in the overdoped state. This coincides with the point where the pseudogap transition, $`T_p`$, merges with $`T_c`$. The suggestion is that below $`T_p`$ the charge dynamics are those of fluctuating localized charges, whereas above $`T_p`$ the carriers are delocalized and electronically induced structural inhomogeneities go away. We note that $`T_p`$ falls rapidly to $`T_c`$ between $`x=0.95`$ and $`x=1.0`$. The optimum doping level is sometimes inferred from the value of $`T_c`$ only, and the oxygen content is not well characterized. However, $`T_c`$ versus doping is quite flat around optimum doping. A sample whose true doping level falls slightly below optimum doping would be in the underdoped regime and therefore exhibit structural distortions. A slightly overdoped sample, on the contrary, would not exhibit structural distortions. This might explain the fact that for samples around optimal doping there is some disagreement about the existence or otherwise of local structural distortions. The presence of atomic-scale charge inhomogeneities in the CuO<sub>2</sub> planes is presently emerging as a common feature of high-T<sub>c</sub> cuprates. Inhomogeneous local bucklings of the Cu-O bond have been observed in La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> and Nd<sub>2-x</sub>Ce<sub>x</sub>CuO<sub>4</sub>. Recently stripes have been observed in hole-doped 214 cuprates. Our results suggest that such inhomogeneities are also present in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub>. In the present analysis we address only the question of whether sites are split or not, and not the possibility of short-range ordered patterns of atomic displacements in the chains and planes. The existence of stripes cannot be invoked directly from our analysis. 
However, the observed displacements can be interpreted as being due to the presence of atomic-scale charge inhomogeneities, consistent with the presence of a dynamic local stripe phase. ## V Conclusions From our pair distribution function analysis of YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub> (x=0.25, 0.45, 0.65, 0.94) we find no evidence for a split apical oxygen site. A finite split of order 0.1 Å obtained for the in-plane Cu(2) along $`z`$ resulted in an improvement of the fit over the average crystallographic model. The origin of such a split can be explained by assuming the presence of both Cu<sup>2+</sup> and Cu<sup>3+</sup> in the CuO<sub>2</sub> planes, as might be expected in the presence of charge stripes or polarons. ###### Acknowledgements. We would like to thank V. Petkov, Th. Proffen and T. A. Tyson for invaluable discussions and J. Johnson for her help with GLAD data collection. This work was supported by the NSF through grant DMR-9700966. MG acknowledges support from the Swiss National Science Foundation. The IPNS is funded by the U.S. Department of Energy under contract W-31-109-Eng-38 and the MLNSC under contract W-7405-ENG-36.
# Electronic Raman Scattering in Superconducting Cuprates ## Abstract We show that the novel features observed in Raman experiments on optimally doped and underdoped Bi-$`2212`$ compounds in $`B_{1g}`$ geometry can be explained by a strong fermionic self-energy due to the interaction with spin fluctuations. We compute the Raman intensity $`R(\omega )`$ both above and below $`T_c`$, and show that in both cases $`R(\omega )`$ progressively deviates, with decreasing doping, from that in a Fermi gas due to an increasing contribution from the fermionic self-energy. We also show that the final-state interaction increases with decreasing doping and gradually transforms the $`2\mathrm{\Delta }`$ peak in the superconducting state into a pseudo-resonance mode below $`2\mathrm{\Delta }`$. We argue that these results agree well with the experimental data for Bi-$`2212`$. The form of the fermionic spectral function in optimally doped and underdoped cuprate superconductors has been the subject of intensive experimental and theoretical studies over the last few years. ARPES and neutron experiments demonstrated that the fermionic spectral function undergoes a substantial evolution with decreasing doping, and for underdoped cuprates is very different from the one in a Fermi gas (FG) . Raman scattering in $`B_{1g}`$ geometry is another spectroscopy with which to study the electronic properties in this momentum region. For electronic Raman scattering, the intensity $`R(\omega )`$ is in general given by the imaginary part of the fermionic particle-hole bubble at small external momentum $`𝐪`$ and finite frequency, weighted with the Raman vertices . In this paper we focus on $`B_{1g}`$ Raman scattering. In $`B_{1g}`$ geometry, the Raman vertex $`V_{B_{1g}}\propto \mathrm{cos}k_x-\mathrm{cos}k_y`$, and the scattering thus mostly probes the vicinity of $`(0,\pi )`$ . 
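A quick numerical check of what this vertex weights: with $`V_{B_{1g}}\propto \mathrm{cos}k_x-\mathrm{cos}k_y`$, the squared vertex is maximal at the antinodal points and vanishes along the zone diagonal. A minimal sketch (overall normalization dropped):

```python
import math

def v_b1g(kx, ky):
    """B1g Raman vertex, up to an overall constant: cos(kx) - cos(ky)."""
    return math.cos(kx) - math.cos(ky)

# The squared vertex weights the particle-hole bubble: it is maximal at the
# antinodal points (0, pi) and (pi, 0), and vanishes on the diagonal kx = ky,
# so B1g scattering predominantly probes the vicinity of (0, pi).
antinodal_weight = v_b1g(0.0, math.pi) ** 2   # (1 - (-1))^2 = 4
nodal_weight = v_b1g(0.7, 0.7) ** 2           # identically 0 on the diagonal
```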
Recent Raman experiments on overdoped, optimally doped and underdoped Bi-2212 demonstrated that the $`B_{1g}`$ Raman intensity $`R(\omega )`$ undergoes significant changes with decreasing doping , and progressively deviates from the predictions of FG theory. Indeed, according to the FG theory, $`R(\omega )`$ in the normal state is finite only for nonzero external momentum $`q`$, and vanishes at $`\omega >v_Fq`$, where $`v_F`$ is the Fermi velocity . In the superconducting state, $`R(\omega )`$ is finite due to particle-hole mixing and possesses a peak at $`\omega =2\mathrm{\Delta }`$, where $`\mathrm{\Delta }`$ is the maximum of the superconducting gap $`\mathrm{\Delta }_k`$ . Below $`2\mathrm{\Delta }`$, the intensity scales as $`\omega ^3`$ at small frequencies due to the presence of nodes in the $`d`$-wave gap . In Fig. 1 we present the experimental Raman intensity for several Bi-2212 compounds both below and above $`T_c`$ . In the overdoped, $`T_c=82K`$ material, the behavior of $`R(\omega )`$ is qualitatively consistent with the FG theory, i.e., it is featureless above $`T_c`$ (not shown), while at $`T<T_c`$, $`R(\omega )\propto \omega ^3`$ at small frequencies , and exhibits a sharp peak at $`2\mathrm{\Delta }\approx 50`$ meV. At and below optimal doping, however, the form of $`R(\omega )`$ is inconsistent with the FG theory. In the normal state, the intensity increases with frequency as $`R(\omega )\propto \sqrt{\omega }`$, and saturates at a few hundred meV (see the inset in Fig. 1). In the superconducting state, the key experimental observation is that with underdoping, the peak in $`R(\omega )`$ occurs at a progressively smaller frequency than the “$`2\mathrm{\Delta }`$” extracted from tunneling experiments , and almost saturates at about $`75`$ meV . 
Simultaneously, the low frequency behavior becomes predominantly linear in $`\omega `$ with a kink around 40–50 meV, while above the peak the intensity develops a dip at about $`90`$ meV and at even larger frequencies recovers the normal state $`\sqrt{\omega }`$ form. In this Letter, we make two points. First, we argue that the inconsistency of the experimental results for Bi-$`2212`$ at and below optimal doping with the FG scenario indicates that the fermionic self-energy is large and substantially modifies the form of the fermionic Green’s function both in the normal and in the superconducting state. Phenomenologically, this effect was considered in . Here we perform a microscopic analysis within a spin-fermion model in which the fermionic self-energy emerges due to an interaction with overdamped spin fluctuations and is the largest for fermions in the vicinity of $`(0,\pi )`$. Since the Raman intensity in $`B_{1g}`$ geometry predominantly probes the region near $`(0,\pi )`$, it directly reflects the changes in the fermionic spectrum imposed by the large self-energy. Second, we show that a magnetically induced final state interaction replaces the $`2\mathrm{\Delta }`$ peak in $`R(\omega )`$ by a pseudo resonance peak at a smaller frequency $`\omega _{res}`$. Near optimal doping, this effect is almost unobservable, but it becomes visible in underdoped cuprates and explains the discrepancy between the Raman peak frequency and the one in the density of states. We now turn to the calculations. The Raman intensity in a superconductor is given by a set of fermionic bubbles made of normal and anomalous Green’s functions (Fig.2). We first compute $`R(\omega )`$ without final state interaction. In this approximation, only the first two diagrams in Fig.
2a contribute (with bare vertices), and we have $`R(\omega )\propto Im{\displaystyle \int 𝑑k𝑑\mathrm{\Omega }V_{B_{1g}}^2(𝐤)}`$ (1) $`\times (G_{sc}(k,\mathrm{\Omega }_+)G_{sc}(k,\mathrm{\Omega }_{-})+F(k,\mathrm{\Omega }_+)F(k,\mathrm{\Omega }_{-})),`$ (2) where $`\mathrm{\Omega }_\pm =\mathrm{\Omega }\pm \omega /2`$ and $`G_{sc}(k,\omega )`$ $`=`$ $`G_n^{-1}(k,-\omega )/(G_n^{-1}(k,\omega )G_n^{-1}(k,-\omega )+\mathrm{\Delta }_k^2)`$ (3) $`F(k,\omega )`$ $`=`$ $`i\mathrm{\Delta }_k/(G_n^{-1}(k,\omega )G_n^{-1}(k,-\omega )+\mathrm{\Delta }_k^2).`$ (4) Here $`\mathrm{\Delta }_k`$ is the $`d`$-wave superconducting gap, and $`G_n^{-1}(k,\omega )=\omega -ϵ_k+\overline{g}^2\mathrm{\Sigma }(𝐤,\omega )`$, where $`\overline{g}`$ is a dimensionless spin-fermion coupling, and $`\mathrm{\Sigma }(𝐤,\omega )`$ is the $`\mathrm{\Delta }`$-dependent fermionic self-energy. Theoretically, $`\overline{g}^2\propto \xi `$, where $`\xi `$ is the magnetic correlation length . In strongly overdoped cuprates, $`\overline{g}\lesssim 1`$, and the system behavior resembles that in a FG. However, at and below optimal doping, $`\overline{g}\gg 1`$ in which case the self-energy overshadows the bare $`\omega `$ term in the Green’s function, i.e., $`G_n^{-1}(k,\omega )\approx \overline{g}^2\mathrm{\Sigma }(k,\omega )-ϵ_k`$. The form of the fermionic self-energy is an input for our Raman calculations. Obviously, the self-energy is the largest near $`(0,\pi )`$ and symmetry related points where the scattering by nearly antiferromagnetic spin fluctuations is the strongest. The self-energy in this $`k`$-range also strongly depends on doping as evidenced by ARPES data. Since our goal is to relate the doping dependent changes in $`R(\omega )`$ with those in $`\mathrm{\Sigma }(k,\omega )`$, we restrict our consideration to the vicinity of $`(0,\pi )`$ where $`\mathrm{\Delta }_k`$ is close to its maximum value $`\mathrm{\Delta }`$.
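As a quick sanity check on the Green’s functions of Eqs. (3)-(4): setting the self-energy to zero, so that $`G_n^{-1}(k,\omega )=\omega -ϵ_k`$, the common denominator of $`G_{sc}`$ and $`F`$ reduces to the BCS form, with poles at the quasiparticle energies $`\pm E_k`$, $`E_k=\sqrt{ϵ_k^2+\mathrm{\Delta }_k^2}`$. A minimal numerical sketch (our illustration, not from the paper):

```python
import math

def denom(eps, w, delta):
    # Free-fermion (Sigma = 0) limit: G_n^{-1}(w) = w - eps.
    # Common denominator of G_sc and F: G_n^{-1}(w) * G_n^{-1}(-w) + Delta^2
    g_inv_plus = w - eps
    g_inv_minus = -w - eps
    return g_inv_plus * g_inv_minus + delta**2

eps, delta = 0.7, 1.2
E = math.sqrt(eps**2 + delta**2)   # BCS quasiparticle energy

# The Gor'kov functions have poles at w = +/- E, as in BCS theory
print(denom(eps, E, delta))   # numerically ~0
print(denom(eps, -E, delta))  # numerically ~0
```

The vanishing denominator at $`\omega =\pm E_k`$ confirms that the strong-coupling expressions collapse onto the familiar BCS propagators when $`\overline{g}\to 0`$.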
We will argue below that for strong coupling, the $`B_{1g}`$ Raman intensity is dominated by $`k`$ near $`(0,\pi )`$ down to $`\omega \sim \mathrm{\Delta }`$, and crosses over to $`\omega ^3`$ behavior due to the nodes of the d-wave gap only at vanishingly small frequencies. It has been argued that for $`\overline{g}\gg 1`$, $`\mathrm{\Sigma }(k,\omega )`$ near $`(0,\pi )`$ is independent of the quasiparticle energy up to corrections $`O((\mathrm{log}\overline{g})/\overline{g}^2)`$, and in a superconductor behaves as $`\mathrm{\Sigma }(\omega )\propto \omega `$ at $`\omega \ll 2\overline{\mathrm{\Delta }}`$, and as $`\mathrm{\Sigma }(\omega )\propto e^{i\pi /4}\sqrt{|\omega |}sgn\omega `$ at $`\omega \gg max(2\overline{\mathrm{\Delta }},\omega _{sf})`$, where $`\overline{\mathrm{\Delta }}=\mathrm{\Delta }/\overline{g}`$ is a measured gap, and $`\omega _{sf}\propto \xi ^{-2}`$ is a typical relaxation frequency of spin fluctuations. The physical reasoning here is twofold. First, in the normal state, the scattering by nearly-critical overdamped spin fluctuations yields $`\mathrm{\Sigma }(\omega )=2\omega /(1+\sqrt{1-i|\omega |/\omega _{sf}})`$ which displays a crossover from a Fermi-liquid behavior at $`\omega <\omega _{sf}`$ to a quantum-critical behavior for $`\omega >\omega _{sf}`$ where $`\mathrm{\Sigma }(\omega )\propto \sqrt{\omega }`$ . Second, in a superconductor, the strength of the scattering by spin fluctuations is reduced below $`2\overline{\mathrm{\Delta }}`$ due to a feedback effect on the spin damping . The calculation of the full $`\mathrm{\Sigma }(\omega )`$ is rather involved and requires one to solve a set of two coupled complex integral equations for the fermionic self-energy and the spin polarization operator. Below we use the approximate self-consistent solution of this set of equations which correctly reproduces the behavior of $`\mathrm{\Sigma }(\omega )`$ at large and small frequencies.
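The Fermi-liquid to quantum-critical crossover of the normal-state self-energy can be seen directly by evaluating $`\mathrm{\Sigma }(\omega )=2\omega /(1+\sqrt{1-i|\omega |/\omega _{sf}})`$ numerically (our sketch, with the overall coupling set to one):

```python
import cmath

def sigma_n(w, w_sf=1.0):
    """Normal-state spin-fermion self-energy (up to the overall coupling):
    Sigma(w) = 2 w / (1 + sqrt(1 - i |w| / w_sf))."""
    return 2.0 * w / (1.0 + cmath.sqrt(1.0 - 1j * abs(w) / w_sf))

# Fermi-liquid regime, w << w_sf: Sigma(w) ~ w
print(sigma_n(1e-3))

# Quantum-critical regime, w >> w_sf: |Sigma| ~ 2 sqrt(w w_sf), phase -> pi/4
for w in (100.0, 400.0):
    s = sigma_n(w)
    print(w, abs(s), cmath.phase(s))
```

Quadrupling $`\omega `$ in the second regime roughly doubles $`|\mathrm{\Sigma }|`$, and the complex phase approaches $`\pi /4`$, in line with the asymptotic forms quoted above.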
In the latter case it yields $`\mathrm{\Sigma }(\omega )\propto \omega `$, i.e., the peak in the spectral function occurs right at $`\omega =\overline{\mathrm{\Delta }}`$. We now proceed with $`R(\omega )`$. We assume that the density of states near $`(0,\pi )`$ depends only weakly on $`ϵ_k`$ and replace the $`k`$-integration in (2) by the integration over $`ϵ_k`$. We then obtain (neglecting an overall prefactor) $`R(\omega )=Im\chi (\omega )`$ where $$\chi (\omega )=\int _0^{\mathrm{\infty }}𝑑\mathrm{\Omega }\frac{\overline{\mathrm{\Delta }}^2-\mathrm{\Sigma }(\mathrm{\Omega }_+)\mathrm{\Sigma }(\mathrm{\Omega }_{-})+D(\mathrm{\Omega }_+)D(\mathrm{\Omega }_{-})}{D(\mathrm{\Omega }_+)D(\mathrm{\Omega }_{-})(D(\mathrm{\Omega }_+)+D(\mathrm{\Omega }_{-}))}+C,$$ (5) and $`D(\mathrm{\Omega }_\pm )=\sqrt{\mathrm{\Sigma }^2(\mathrm{\Omega }_\pm )-\overline{\mathrm{\Delta }}^2}`$. The constant $`C>0`$ is a real number. Its presence in (5) is related to the fact that Eq. (2) with the $`k`$-integration substituted by the integration over $`ϵ_k`$ lacks convergence. In this case, one cannot simply interchange frequency and energy integrations and has to include a regularization procedure . The value of $`C`$ is irrelevant for the calculations of the Raman intensity without vertex corrections as it does not contribute to $`Im\chi (\omega )`$. It however becomes relevant when one includes the effects of the final state interaction which accounts for the renormalization of the Raman vertex (see below). These vertex corrections involve the $`d`$-wave component of the effective interaction, $`\mathrm{\Gamma }_d`$, which decreases at large frequencies and therefore provides the physical regularization of the Raman bubble. In the spin-fluctuation approach, $`\mathrm{\Gamma }_d`$ starts decreasing at $`\omega \sim ϵ_F^2/\overline{g}`$ . Another regularization is provided by the fact that the integral over $`ϵ_k`$ is cut at $`ϵ_F`$. Obviously, the value of $`C`$ depends on the ratio $`ϵ_F/\overline{g}`$.
For $`ϵ_F\gg \overline{g}`$, which is the case at weak coupling, the cutoff in momentum space dominates. In this situation, an adequate way to evaluate the Raman bubble is to integrate first over frequency. This is how previous calculations of $`R(\omega )`$ have been performed . One then obtains $`C=1`$. However, in the strong coupling limit $`ϵ_F\ll \overline{g}`$, which is likely to be satisfied in underdoped cuprates, a more adequate way to evaluate the Raman bubble is to integrate first over $`ϵ_k`$ as we did. In this case, $`C=0`$. We now analyze Eq. (5) first in the normal state and then in the superconducting state. In the normal state, a substitution of the explicit form of $`\mathrm{\Sigma }(\omega )`$ into (5) yields $`R(\omega )=R(\omega _{sf})\mathrm{\Phi }(\omega /\omega _{sf})`$ where $`\mathrm{\Phi }(x)\approx 1.07x`$ for $`x\ll 1`$, and $`\mathrm{\Phi }(x)\approx 1.73\sqrt{x}`$ for $`1\ll x\ll \overline{g}^4`$. We see that at $`\omega >\omega _{sf}`$, relevant to experiments, $`R(\omega )\propto \sqrt{\omega }`$. We fitted the normal state data by this form and found almost perfect agreement with the experiment (see the inset of Fig. 1). For even larger $`x\gtrsim \overline{g}^4`$, the bare $`\omega `$ term in the quasiparticle Green’s function begins to dominate, and the theoretical $`R(\omega )`$ saturates, passes through a very broad maximum at $`x=2.24\overline{g}^4`$ and then slowly decays as $`1/\sqrt{x}`$. This also agrees with the data. A similar behavior was found numerically in Ref. . In the superconducting state, the form of $`R(\omega )`$ clearly depends on the ratio $`b=\overline{\mathrm{\Delta }}/\omega _{sf}`$. Experimentally, $`b\ll 1`$ in strongly overdoped cuprates, and $`b\gg 1`$ in strongly underdoped cuprates . For small $`b`$, Eq.(5) expectedly reproduces the FG result for the Raman intensity: $`R(\omega )=0`$ for $`\omega <2\overline{\mathrm{\Delta }}`$ and $`R(\omega )\propto (\omega \sqrt{\omega ^2-4\overline{\mathrm{\Delta }}^2})^{-1}`$ for $`\omega >2\overline{\mathrm{\Delta }}`$.
The inclusion of the $`k`$ dependence of the gap corrects this behavior at small frequencies where it yields a finite $`R(\omega )\propto (\omega /\overline{\mathrm{\Delta }})^3`$ , and also very near $`2\overline{\mathrm{\Delta }}`$ where it changes the square root singularity to a logarithmic one. For large $`b`$, however, the form of the Raman intensity substantially deviates from the FG result. We computed numerically the Raman intensity for different values of $`b`$, and present the results in Fig. 3. We emphasize four key features in $`R(\omega )`$: (i) the $`2\overline{\mathrm{\Delta }}`$ peak is still present even for large $`b`$, (ii) with increasing $`b`$, the peak frequency becomes progressively larger than $`2\overline{\mathrm{\Delta }}`$, (iii) there is a dip in $`R(\omega )`$ above the peak, (iv) at small frequencies, $`\omega \ll 2\overline{\mathrm{\Delta }}`$, $`R(\omega )`$ is predominantly linear in $`\omega `$. These features in the Raman intensity can all be understood analytically by analyzing Eq. (5) in various limits. The broadening of the peak, the dip and the linear behavior at low frequencies all agree with the features present in the data in Fig. 1. However, one key disagreement with the data remains. Namely, we calculated the density of states $`N(\omega )\propto \int 𝑑kImG_{sc}(k,\omega )`$ along the same lines as the Raman intensity and found that the peak in $`N(\omega )`$ is located at almost exactly half of the peak frequency in $`R(\omega )`$, although both peak frequencies shift to larger values with increasing $`b`$. The data, we remind, show that with underdoping, the peak in $`R(\omega )`$ occurs at progressively lower frequencies than twice the peak frequency in $`N(\omega )`$.
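The location of the peak in $`N(\omega )`$ can be illustrated with the simplest ingredient alone: for a pure $`d`$-wave gap and no self-energy, the angle-averaged density of states $`N(\omega )=Re\omega /\sqrt{\omega ^2-\mathrm{\Delta }^2\mathrm{cos}^2(2\varphi )}`$ has its (logarithmic) maximum at the gap maximum. A toy sketch of ours (the paper’s actual $`N(\omega )`$ of course includes the full $`\mathrm{\Sigma }`$):

```python
import math

def dos_dwave(w, delta=1.0, eta=0.02, n_phi=2000):
    """Angle-averaged BCS-like density of states for a d-wave gap
    Delta(phi) = delta * cos(2 phi): N(w) = Re < w / sqrt(w^2 - Delta(phi)^2) >,
    with a small broadening eta to regularize the square root."""
    total = 0.0
    for i in range(n_phi):
        phi = (i + 0.5) * (math.pi / 2) / n_phi
        gap = delta * math.cos(2 * phi)
        z = complex(w, eta) ** 2 - gap**2
        total += (complex(w, eta) / z**0.5).real
    return total / n_phi

ws = [0.02 * k for k in range(1, 100)]   # frequencies up to ~2 delta
peak_w = max(ws, key=dos_dwave)
print(peak_w)   # lands close to delta = 1.0: the DOS peak sits at the gap maximum
```

So the factor of two between the $`R(\omega )`$ peak (near $`2\overline{\mathrm{\Delta }}`$) and the $`N(\omega )`$ peak (near $`\overline{\mathrm{\Delta }}`$) is built in already at this crude level.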
We now argue that the experimentally observed downturn renormalization of the peak in $`R(\omega )`$ compared to twice that in $`N(\omega )`$ is due to the fact that the final state interaction between scattered quasiparticles gives rise to a bound state below $`2\overline{\mathrm{\Delta }}`$. The final state interaction in both $`s`$-wave and $`d`$-wave superconductors has been considered several times in the literature . It gives rise to corrections to the particle-hole vertex (Fig. 2b), and also introduces an additional scattering process which mixes particle-hole and particle-particle channels (Fig. 2a). A simple calculation shows that the vertex corrections to the $`B_{1g}`$ vertex involve the $`d`$-wave component of the effective interaction in the zero-sound channel, while the “mixed” diagram (the last one in Fig. 2a) contains the fully renormalized $`s`$-wave vertex in the particle-particle channel. Several authors argued that for a $`d`$-wave superconductor with spin-independent interaction, the effective $`d`$-wave coupling in the zero-sound channel is repulsive, and the bound state does not appear unless the mixed diagram prevails over the conventional, RPA-type vertex renormalization. We, however, demonstrate below that for magnetically-mediated $`d`$-wave superconductivity there is an additional sign change between vertices in the zero-sound and the Cooper channels, and this eventually gives rise to an attractive $`d`$-wave coupling in the zero-sound channel. Indeed, consider the effective interaction between fermions mediated by the exchange of spin fluctuations with momenta near $`Q`$. We have $`\mathrm{\Gamma }=g^2\chi (𝐪,\mathrm{\Omega })\sigma _{\alpha \beta }\sigma _{\gamma \delta }`$ where $`𝐪=𝐤-𝐤^{}`$ and $`\mathrm{\Omega }=\omega -\omega ^{}`$ are the transferred momentum and frequency, respectively.
To simplify the discussion on the sign of the interaction, assume that $`\mathrm{\Gamma }`$ has a dominant $`d`$-wave partial amplitude $`\mathrm{\Gamma }_d`$, i.e., $$\mathrm{\Gamma }(k-k^{},\omega -\omega ^{})\approx d_kd_k^{}\mathrm{\Gamma }_d\sigma _{\alpha \beta }\sigma _{\gamma \delta }$$ (6) where $`d_k`$ are $`d`$-wave eigenfunctions. In general, $`\mathrm{\Gamma }_d`$ depends on the transferred frequency. However, one can straightforwardly demonstrate that for a relaxational form of the spin susceptibility, the frequency dependence of $`\mathrm{\Gamma }_d`$ becomes relevant at frequencies $`\omega \sim ϵ_F^2/\overline{g}\gg \overline{\mathrm{\Delta }}`$ . The vertex renormalization on the other hand is mostly determined by much smaller $`\omega \sim \overline{\mathrm{\Delta }}`$. In this situation, $`\mathrm{\Gamma }_d`$ can, to a good accuracy, be approximated by a constant. Let us first verify that there is a superconducting instability in the spin-singlet particle-particle channel. Substituting (6) into the equation for the full particle-particle vertex and making use of the identity $$\sigma _{\alpha \beta }\sigma _{\gamma \delta }=T-3S$$ (7) where $`T,S=(\delta _{\alpha \beta }\delta _{\gamma \delta }\pm \delta _{\alpha \delta }\delta _{\gamma \beta })/2`$ are triplet and singlet spin configurations, respectively, we obtain (using for simplicity Fermi gas Green’s functions) $$\mathrm{\Gamma }_S^{tot}=-3\mathrm{\Gamma }_d/(1+3\mathrm{\Gamma }_dL)$$ (8) where $`L=log(\omega _{max}/T)>0`$. We see that the behavior of the total particle-particle vertex in the spin singlet channel depends on the sign of $`\mathrm{\Gamma }_d`$. Several authors have demonstrated that near the antiferromagnetic instability, $`\mathrm{\Gamma }_d<0`$ . This obviously implies the superconducting instability. However, if the interaction were spin independent, then the $`d`$-wave instability would require a positive $`\mathrm{\Gamma }_d`$. Let us turn to the $`B_{1g}`$ Raman intensity.
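The spin identity used in (7) can be verified by brute force: summing $`\sigma _{\alpha \beta }^i\sigma _{\gamma \delta }^i`$ over the three Pauli matrices and comparing with $`T-3S`$, with $`T`$ and $`S`$ as defined above. A standalone check in plain Python:

```python
# Pauli matrices as nested lists (no external dependencies)
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]
kron = [[1, 0], [0, 1]]   # Kronecker delta

def lhs(a, b, c, d):
    # sum_i sigma^i_{ab} sigma^i_{cd}
    return sum(s[a][b] * s[c][d] for s in (sx, sy, sz))

def rhs(a, b, c, d):
    T = (kron[a][b] * kron[c][d] + kron[a][d] * kron[c][b]) / 2
    S = (kron[a][b] * kron[c][d] - kron[a][d] * kron[c][b]) / 2
    return T - 3 * S

ok = all(abs(lhs(a, b, c, d) - rhs(a, b, c, d)) < 1e-12
         for a in range(2) for b in range(2)
         for c in range(2) for d in range(2))
print(ok)  # True
```

This is the standard Fierz-type rearrangement that produces the factor $`-3`$ in front of $`\mathrm{\Gamma }_d`$ in the singlet channel.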
The simplest way to check the sign of the vertex correction is to consider a Fermi gas in the normal state, and compute the density-density correlator at zero frequency and at a finite momentum. In the normal state, the mixed diagram does not contribute, and the ladder series of vertex correction diagrams can be easily summed up. We found $$V_{B_{1g}}^{full}(q)=\frac{V_{B_{1g}}(q)}{1+\mathrm{\Gamma }_d\chi _d(q)}$$ (9) where $`V_{B_{1g}}`$ is the bare $`B_{1g}`$ vertex, and $$\chi _d(q)=\int 𝑑𝐤(d_k)^2\frac{\mathrm{\Theta }(ϵ_+)-\mathrm{\Theta }(ϵ_{-})}{ϵ_+-ϵ_{-}}$$ (10) is the uniform $`d`$-wave susceptibility ($`ϵ_\pm =ϵ_{k\pm q/2}`$, and $`\mathrm{\Theta }(x)=1`$ when $`x>0`$ and $`\mathrm{\Theta }(x)=0`$ when $`x<0`$). Obviously, $`\chi _d(q)>0`$. We see that for negative $`\mathrm{\Gamma }_d`$, the $`B_{1g}`$ vertex in the particle-hole channel is enhanced, i.e., the effective interaction in this channel is attractive. We show below that in the superconducting state this attraction gives rise to a pseudo-resonance below $`2\overline{\mathrm{\Delta }}`$. Previous analytical studies obtained the same expression as in (10), but they considered a spin-independent $`d`$-wave interaction, and therefore set $`\mathrm{\Gamma }_d>0`$. In this situation, vertex corrections reduce the $`B_{1g}`$ vertex and do not give rise to a resonance behavior. The conclusion that vertex corrections reduce the $`B_{1g}`$ vertex has recently been reached in numerical studies of the Hubbard model . The reasons for the discrepancy with our analysis are not clear to us because the effective interaction selected in is apparently also mediated by spin fluctuations. We now continue our analysis of the superconducting state. The vertex renormalization is given by a set of diagrams in Fig. 2. For a spin-mediated interaction, the $`s`$-wave coupling is repulsive such that the last diagram in Fig. 2a does not contain a low-energy resonance mode and can be safely neglected.
We are then left with the ladder series of vertex correction diagrams. For a frequency-independent $`\mathrm{\Gamma }_d`$, the series of vertex corrections is then geometrical and yields $$R_{full}(\omega )=\frac{Im\chi (\omega )}{(1+\mathrm{\Gamma }_dRe\chi (\omega ))^2+(\mathrm{\Gamma }_dIm\chi (\omega ))^2}$$ (11) where, we remind, $`\mathrm{\Gamma }_d<0`$, and $`\chi (\omega )`$ is given by (5). Now recall that in a superconductor $`Im\chi (\omega )`$ is small at $`\omega <2\overline{\mathrm{\Delta }}`$. Evaluating $`Re\chi (\omega )`$ we found that it is positive for $`\omega <2\overline{\mathrm{\Delta }}`$. Thus for small frequencies, we found $`Re\chi (\omega )=A(\omega /\overline{\mathrm{\Delta }})^2`$ where in a FG, $`A=1/3`$. Substituting this result into (11), we find that $`R_{full}`$ possesses a resonance peak below $`2\overline{\mathrm{\Delta }}`$, at a frequency $`\omega =\omega _{res}`$ where $`|\mathrm{\Gamma }_d|Re\chi (\omega _{res})=1`$. Actually, the solution for $`\omega _{res}`$ exists already at weak coupling simply because in a FG, $`Re\chi (\omega )`$ diverges logarithmically as $`\omega `$ approaches $`2\overline{\mathrm{\Delta }}`$ from below. As the coupling increases, the peak position progressively deviates downwards from $`2\overline{\mathrm{\Delta }}`$. A similar reasoning has been previously applied to explain the presence of the resonance peak in the neutron scattering data below $`T_c`$ . Indeed, the bound state which we found is only a pseudo resonance because in a $`d`$-wave superconductor, $`Im\chi (\omega )`$ is finite for all $`\omega \ne 0`$ because of the nodes of the gap. On the other hand, the very existence of the peak only requires a reduction of $`Im\chi (\omega )`$ below $`2\overline{\mathrm{\Delta }}`$ which is a natural consequence of a reduction of the fermionic spectral weight at low frequencies in a superconductor.
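How the resonance condition picks $`\omega _{res}`$ can be seen with a toy parametrization of ours (not the full calculation): feed the small-frequency form $`Re\chi =A(\omega /\overline{\mathrm{\Delta }})^2`$ into Eq. (11), with a small constant $`Im\chi `$ standing in for the residual $`d`$-wave background. Writing $`g=|\mathrm{\Gamma }_d|`$ (recall $`\mathrm{\Gamma }_d<0`$), the denominator is minimized when $`gA(\omega /\overline{\mathrm{\Delta }})^2=1`$, i.e. $`\omega _{res}=\overline{\mathrm{\Delta }}/\sqrt{Ag}`$, which moves downward as the coupling grows:

```python
import math

def raman_full_toy(w, g, delta=1.0, A=1.0/3.0, im_chi=0.05):
    # Eq. (11) with Gamma_d = -g (g > 0) and the small-w form Re chi = A (w/delta)^2;
    # Im chi is held small and constant purely for illustration
    re_chi = A * (w / delta) ** 2
    return im_chi / ((1.0 - g * re_chi) ** 2 + (g * im_chi) ** 2)

ws = [0.005 * k for k in range(1, 400)]  # scan 0 < w < 2*delta
for g in (1.0, 2.0, 3.0):
    w_res = max(ws, key=lambda w: raman_full_toy(w, g))
    print(g, w_res, 1.0 / math.sqrt(g / 3.0))  # numeric peak vs analytic delta/sqrt(A g)
```

Larger $`|\mathrm{\Gamma }_d|`$ (underdoping) pushes the toy peak to lower frequency, which is the trend discussed in the text; near $`2\overline{\mathrm{\Delta }}`$ the quadratic form breaks down and the logarithmic divergence of $`Re\chi `$ takes over.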
Moreover, as the spectral weight reduction occurs already in the pseudogap regime , our theory predicts that the Raman peak should survive above $`T_c`$ and disappear only at a temperature where the pseudogap behavior becomes invisible. In Fig. 4 we fitted the data from Fig. 1 to Eq. (11) using $`\mathrm{\Gamma }_d`$ as an adjustable parameter. We found that in overdoped cuprates, the pseudo resonance frequency is almost indistinguishable from twice the peak frequency extracted from the tunneling data . However, as the system moves towards lower doping, $`|\mathrm{\Gamma }_d|`$ obviously increases, and the peak in $`R_{full}(\omega )`$ moves to progressively lower frequencies compared to twice the peak frequency of the tunneling data. One more point. For the $`s`$-wave case, weak coupling calculations have shown that for an attractive zero-sound coupling, $`R_{full}(\omega )`$ has two peaks, one at $`\omega _{res}`$ and another near $`2\mathrm{\Delta }`$. We did not find indications for a two peak structure in $`R_{full}(\omega )`$. The reason is that in our case, the peak in $`R(\omega )`$ is already rather broad. Furthermore, we found numerically that the overall shape of $`R_{full}(\omega )`$ does not change much compared to that in $`R(\omega )`$, i.e., the dip and the linear behavior at small frequencies are present also in $`R_{full}(\omega )`$ (see Fig. 3). From this perspective, the shape of the Raman intensity is mostly determined by the self-energy corrections, while the position of the Raman peak is determined by the resonance in the Raman vertex. To summarize, we have demonstrated that the experimentally observed doping evolution of the $`B_{1g}`$ Raman intensity in cuprates can be explained by an interaction with spin fluctuations.
We argued that for the optimally doped and underdoped materials our results capture all salient features of the experimental data in Fig.1: (i) the $`\sqrt{\omega }`$ behavior of the intensity in the normal state, (ii) a predominantly linear low frequency behavior of $`R(\omega )`$ in a superconductor, (iii) a reduction of the peak amplitude with decreasing doping and a development of a dip above the peak, and (iv) a progressive downturn deviation of the Raman peak position compared to the distance between the peaks in the tunneling density of states. Finally, the prediction that the Raman peak survives in the pseudogap regime is also consistent with the data . An issue which we didn’t address in this paper is the experimentally observed strong discrepancy between the Raman data in $`B_{1g}`$ and $`A_{1g}`$ geometries . This discrepancy (e.g., different locations of the peak frequencies) is still unexplained and clearly calls for more theoretical work on Raman scattering. It is our pleasure to thank T. Devereaux, R. Gatt, R. Joynt, M. V. Klein, A. Millis, H. Monien, D. Pines and J. Schmalian for useful conversations. The research was supported by NSF grant DMR-9629839 (A.C.) and by the NSF cooperative agreement DMR91-20000 through STCS (D.M. and G.B.).
no-problem/9908/astro-ph9908356.html
ar5iv
text
# Time-Varying Fine-Structure Constant Requires Cosmological Constant ## Abstract Webb et al. presented preliminary evidence for a time-varying fine-structure constant. We show Teller’s formula for this variation to be ruled out within the Einstein-de Sitter universe, however, it is compatible with cosmologies which require a large cosmological constant. The possibility of time-varying physical constants was suggested by Dirac and Milne . This suggestion has been widely discussed for a number of motivations: (i) The assumption of physical quantities being constant in space and time is ad hoc. It has to be either confirmed or rejected by observation . (ii) The possibility of life as we know it depends on the values of a few basic physical constants and appears to be sensitive to their numerical values. This argument appeals to the anthropic principle . (iii) The numerical values of the basic dimensionless constants, e. g. the fine-structure constant, are not yet explained. If they depend on other constants and the age of the universe and are therefore time-varying, then the actual number of the independent free parameters of the standard theory of particle physics would be reduced. Products and quotients of physical constants have been shown to be of fundamental importance for new physical phenomena: (i) The unit of flux quantization in superconductors is $`\varphi =h/(2e)`$, where $`h`$ denotes the Planck constant and $`e`$ the unit electric charge . This finding was the first experimental proof of the famous BCS-theory . Furthermore, the flux quantization in units of $`\varphi `$ is of central importance for the Josephson junctions . (ii) The characteristic magnetic flux in the Aharonov-Bohm effect is quantized in units of $`\varphi =h/e`$. (iii) The unit conductance in the integer quantum Hall effect is $`\widehat{\sigma }=e^2/h`$. In the fractional quantum Hall effect the conductance is $`\sigma =pe^2/h`$, where $`p`$ is a rational number. 
(iv) The Chandrasekhar limit mass of white dwarf stars is $$M_c=\frac{3\sqrt{\pi }}{2\mu ^2M_p^2}\left(\frac{\mathrm{\hbar }c}{G}\right)^{3/2},$$ (1) where $`\mathrm{\hbar }=h/(2\pi )`$, $`c`$ is the speed of light, $`G`$ is Newton’s gravitational constant, $`M_p`$ is the proton rest mass, and $`\mu `$ is the number ratio of nucleons and electrons. In order to find further fundamental and important physical phenomena one is tempted to try to multiply or divide the Planck units , $`l_p`$ $`=`$ $`(G\mathrm{\hbar }/c^3)^{1/2}`$ (2) $`t_p`$ $`=`$ $`(G\mathrm{\hbar }/c^5)^{1/2}`$ (3) $`m_p`$ $`=`$ $`(\mathrm{\hbar }c/G)^{1/2},`$ (4) with the Einstein constant , $$\kappa =8\pi G/c^4,$$ (5) the “natural units” $`\mathrm{\hbar }`$ and $`c`$, and the present value $`H_0`$ of the Hubble parameter . By just doing this, Teller found the remarkable relation (in modern notation), $$\kappa m_pH_0c=\kappa \mathrm{\hbar }H_0/l_p=8\pi t_pH_0=\text{exp}(-1/\alpha _0),$$ (6) where $$\alpha _0=\frac{e^2}{4\pi \epsilon _0\mathrm{\hbar }c}$$ (7) is the fine-structure constant. Here and in the following the subscript “0” refers to the present value of the respective parameter. From Teller’s equation one can derive the change rate $$\frac{\dot{\alpha }_z}{\alpha _z}=\alpha _z\frac{\dot{H}_z}{H_z},$$ (8) where the subscript “$`z`$” denotes the value of the respective parameter at cosmological redshift $`z`$ and the dot denotes the derivative with time. This rate can be determined experimentally. By examining quasar absorption lines in the redshift region $`0.5<z<1.6`$, Webb et al. found preliminary evidence for a time-varying fine-structure constant, $$\frac{\mathrm{\Delta }\alpha }{\alpha }=(-1.09\pm 0.36)\times 10^{-5},$$ (9) implying $$\frac{\dot{\alpha }_z}{\alpha _z}=1\times 10^{-5}H_z.$$ (10) If the preliminary finding by Webb et al. is not confirmed by future investigations, then Eq. 10 should be considered as an upper limit for the change rate of the fine-structure constant.
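Teller’s relation (6), together with the measured Planck time and fine-structure constant, fixes $`H_0`$ numerically. A short check of ours (CODATA-style SI input values, with the exponent taken as $`-1/\alpha _0`$, which is what makes the relation numerically sensible):

```python
import math

# CODATA-style constants (SI); treated as fixed reference values here
hbar = 1.054571817e-34    # J s
G    = 6.67430e-11        # m^3 kg^-1 s^-2
c    = 2.99792458e8       # m/s
alpha0 = 7.2973525693e-3  # fine-structure constant

t_p = math.sqrt(hbar * G / c**5)   # Planck time, ~5.39e-44 s
# Teller's relation: 8 pi t_p H0 = exp(-1/alpha0)
H0 = math.exp(-1.0 / alpha0) / (8 * math.pi * t_p)

km_per_Mpc = 3.0857e19
print(H0 * km_per_Mpc)   # ~69.7 km s^-1 Mpc^-1
```

The result reproduces the $`H_0=69.7\text{km/(s Mpc)}`$ quoted later in this paper as the predicted value.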
It is our aim to examine which cosmologies are compatible with both Teller’s equation and (the upper limit of) the change rate of the fine-structure constant determined by Webb et al. Compatibility is satisfied for cosmologies like the de Sitter universe and the steady state cosmology , where the Hubble parameter is constant in time. Unfortunately, these models have already been ruled out as viable cosmologies, because they contradict a number of other cosmological findings . According to general agreement the expansion of the universe is explained by the Friedmann-Lemaître equation , $$H^2=\frac{8\pi G\varrho }{3}-\frac{kc^2}{R^2}+\frac{\lambda c^2}{3}.$$ (11) The curvature radius $`R`$ and the mean mass density $`\varrho `$ scale with the redshift as, $`R`$ $`=`$ $`R_0/(1+z)`$ (12) $`\varrho `$ $`=`$ $`\varrho _0(1+z)^3.`$ (13) By defining the present values of the mass parameter $$\mathrm{\Omega }_0=\frac{8\pi G\varrho _0}{3H_0^2}$$ (14) and the cosmological parameter $$\lambda _0=\frac{\lambda c^2}{3H_0^2},$$ (15) the Friedmann-Lemaître equation can be written as , $$\left(\frac{H_z}{H_0}\right)^2=\mathrm{\Omega }_0(1+z)^3-(\mathrm{\Omega }_0+\lambda _0-1)(1+z)^2+\lambda _0.$$ (16) The Einstein-de Sitter universe , where $`\mathrm{\Omega }_0=1`$ and $`\lambda _0=0`$, is a special case of the Friedmann-Lemaître universe. In this cosmology, Eq. 16 is reduced to $$\left(\frac{H_z}{H_0}\right)^2=(1+z)^3.$$ (17) By using this equation Dyson derived from Teller’s equation the formula $$\alpha _z=\frac{\alpha _0}{1-\frac{3}{2}\alpha _0\text{ln}(1+z)},$$ (18) which somewhat resembles expressions known from running coupling constants in quantum field theory. Dyson’s equation (18) yields the rate $$\frac{\dot{\alpha }_z}{\alpha _z}=-\frac{3}{2}\alpha _zH_z$$ (19) and is, unfortunately, ruled out by the experimental result of Webb et al. By regarding Eq. 16 we find Teller’s equation to be compatible with the observation by Webb et al.
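The incompatibility of the Einstein-de Sitter case with the Webb et al. rate is easy to check numerically from Eqs. (8) and (16): converting the $`z`$-derivative of $`H`$ to a time derivative via $`dz/dt=-(1+z)H_z`$ gives $`\dot{\alpha }_z/\alpha _z=-\alpha _z(1+z)(d\mathrm{ln}H/dz)H_z`$, which for $`\mathrm{\Omega }_0=1`$, $`\lambda _0=0`$ equals $`-(3/2)\alpha _zH_z`$, i.e. a magnitude of about $`1.1\times 10^2H_z`$, roughly three orders of magnitude above the observed $`10^{-5}H_z`$. A sketch of ours (using $`\alpha _z\alpha _0`$ to leading order):

```python
import math

alpha0 = 7.2973525693e-3  # fine-structure constant

def hubble_ratio(z, omega0=1.0, lam0=0.0):
    """H_z / H_0 from the Friedmann-Lemaitre equation, Eq. (16)."""
    return math.sqrt(omega0 * (1 + z)**3
                     - (omega0 + lam0 - 1) * (1 + z)**2 + lam0)

def alpha_rate_over_H(z, omega0=1.0, lam0=0.0, dz=1e-6):
    """(d alpha / dt) / (alpha H_z), combining Teller's Eq. (8) with
    dz/dt = -(1+z) H_z; the z-derivative is taken numerically."""
    dlnH_dz = (math.log(hubble_ratio(z + dz, omega0, lam0))
               - math.log(hubble_ratio(z - dz, omega0, lam0))) / (2 * dz)
    return -alpha0 * (1 + z) * dlnH_dz

# Einstein-de Sitter: rate = -(3/2) alpha H_z, i.e. |rate|/H_z ~ 1.1e-2,
# far above the Webb et al. bound of ~1e-5
print(alpha_rate_over_H(1.0))
```

The same machinery, applied with a tiny $`\mathrm{\Omega }_0`$ and $`\lambda _0`$ slightly above one, yields a far smaller $`|\dot{H}_z/H_z|`$, which is the quantitative content of the compatibility claim that follows.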
if the present value $`\mathrm{\Omega }_0`$ of the mass parameter is extremely small and the cosmological parameter $`\lambda _0`$ slightly above unity. The lower bound on $`\mathrm{\Omega }_0`$ is given by the baryonic matter content of the universe. The visible baryonic matter of the universe was determined to be $$\mathrm{\Omega }_0=0.003h^{-1},$$ (20) where $`h=H_0/(100\text{km/(s Mpc)})`$. (The parameter $`h`$ was introduced into cosmology for ease of notation; it should not be confused with the Planck constant.) This extremely small value of $`\mathrm{\Omega }_0`$ is compatible with other determinations of $`\mathrm{\Omega }_0`$ derived from examinations of the dynamics of clusters of galaxies if the majority of their mass consists of a network of cosmic strings. This hypothetical network contributes to the local mass but does not influence the expansion of the universe . Consequently, the mass contribution of the cosmic string network does not appear in the Friedmann-Lemaître equation. Furthermore, determinations of $`\mathrm{\Omega }_0`$ from hot big bang nucleosynthesis are model-dependent (especially the degree of inhomogeneities in the early universe is not well understood), for a discussion see Refs. and . If the observation by Webb et al. is confirmed and Teller’s equation is correct, then we can predict the following present values of cosmological parameters: $`H_0`$ $`=`$ $`69.7\text{km/(s Mpc)}`$ (21) $`\mathrm{\Omega }_0`$ $`=`$ $`0.004`$ (22) $`\lambda _0`$ $`=`$ $`1.004.`$ (23) This value of $`H_0`$ is compatible with the majority of its recent determinations, e. g. Pierce et al. found $`H_0=(87\pm 7)\text{km/(s Mpc)}`$, Freedman et al. reported $`H_0=(80\pm 17)\text{km/(s Mpc)}`$, and Tanvir et al. measured $`H_0=(69\pm 8)\text{km/(s Mpc)}`$. Harris et al. found $`H_0=(77\pm 8)\text{km/(s Mpc)}`$ and Madore et al. reported $`H_0=(73\pm 21)\text{km/(s Mpc)}`$.
Interestingly, the cosmological parameters of the Bonn-Potsdam model resemble those suggested above. Recently, van de Bruck has examined the effects of the hypothetical cosmic string network on the cosmological velocity field within the framework of this model and has shown them to be compatible with cosmological observations. The possible time-variation of the fine-structure constant implies the variation of at least one of the constants $`c`$, $`\mathrm{\hbar }`$, $`\epsilon _0`$, and $`e`$. A time-varying speed of light, for example, would imply a change in the radiation losses of accelerated charges, which would be of importance especially in the very early universe . In practice, time-variations of quantities with nonzero dimension can be measured and defined only relative to other quantities which have the same dimension. We will suggest a convention where the most fundamental constants are defined as being constant in time and can serve as a reference for the possible time-variation of other quantities. We suggest considering the “natural units” $`c`$, $`\mathrm{\hbar }`$, $`k_B`$, and $`\epsilon _0`$ as fundamental and constant in time. This is because according to relativity, lengths are measured as time intervals multiplied with the speed of light. Even our unit length, the meter, is defined as the distance which light travels in 1/299 792 458 seconds. Furthermore, quantum phenomena are described either by the particle picture or by the wave picture. The “conversion factor” between some of the quantities of this duality is $`\mathrm{\hbar }`$. Finally, temperature is a macroscopic quantity and can be defined by the energy of microscopic objects divided by the Boltzmann constant $`k_B`$. According to concepts of quantum gravity, the Planck units, especially the Planck time, are most fundamental.
Hence, we suggest measuring all quantities in units of the Planck units (and multiples thereof), $`t_p`$, $`l_p=ct_p`$, $`m_p=\mathrm{\hbar }/(cl_p)`$, $`T_p=m_pc^2/k_B`$, and $`I_p=(\epsilon _0\mathrm{\hbar }c)^{1/2}/t_p`$. Under these conventions, any time-variation of the fine-structure constant implies the variation of the unit electric charge $`e`$.
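For reference, the suggested units can be made concrete by evaluating them in SI (CODATA-style input values; our illustration):

```python
import math

hbar = 1.054571817e-34   # J s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
c    = 2.99792458e8      # m/s
k_B  = 1.380649e-23      # J/K
eps0 = 8.8541878128e-12  # F/m

t_p = math.sqrt(hbar * G / c**5)         # Planck time, ~5.39e-44 s
l_p = c * t_p                            # Planck length, ~1.62e-35 m
m_p = hbar / (c * l_p)                   # Planck mass, ~2.18e-8 kg
T_p = m_p * c**2 / k_B                   # Planck temperature, ~1.42e32 K
I_p = math.sqrt(eps0 * hbar * c) / t_p   # Planck current, ~9.8e24 A

for name, val in [("t_p", t_p), ("l_p", l_p), ("m_p", m_p), ("T_p", T_p), ("I_p", I_p)]:
    print(name, val)
```

Note that $`(\epsilon _0\mathrm{\hbar }c)^{1/2}`$ is just $`e/\sqrt{4\pi \alpha _0}`$, so in this unit system a time-varying $`\alpha `$ is indeed carried entirely by $`e`$.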
no-problem/9908/cond-mat9908030.html
ar5iv
text
# Aging phenomena in spin glass and ferromagnetic phases: domain growth and wall dynamics ## 1 Introduction In a spin glass, cooling below $`T_g`$ is the starting point of the slow ‘aging’ evolution. The ac susceptibility relaxes downwards with time after the initial quench (the ‘age’) (see e.g. and references therein). A natural basis for the interpretation of aging is the idea of a slow domain growth of a spin-glass type ‘ordered phase’, as was developed in . However, several non-trivial experimental features of aging in spin glasses are hard to interpret within a simple domain growth scenario . Namely, after aging at a given temperature, aging is reinitialized by a further temperature decrease <sup>1</sup><sup>1</sup>1This has been called ‘chaos’, by analogy with the theoretical scenario of . However, as we discuss below, this effect can be related to the temperature variation of Boltzmann weights, and a better word might be ‘rejuvenation’., and the memory of previous aging can be retrieved upon heating back. On the other hand, domain growth phenomena should be more directly relevant to the case of ferromagnets. Our motivation in the present work is to compare aging in these systems and in spin glasses, having in mind the discussion of a consistent picture of aging in the latter, more complex case. ## 2 General features of the reentrant system The $`CdCr_{22x}In_{2x}S_4`$ chromium thiospinel insulator has been extensively characterized by neutron diffraction measurements . For $`x=0`$, it is a ferromagnet ($`T_c=84.5K`$) with ferromagnetic nearest neighbour interactions and antiferromagnetic next-nearest neighbour interactions. For $`x0.10`$, the ferromagnetic phase disappears, and a spin-glass phase appears at $`20K`$ . The $`x=0.15`$ system is a well studied spin-glass compound . In the intermediate region $`0<x<0.10`$, the ferromagnetic phase is followed at lower temperature by a reentrant spin glass phase. 
The main sample of our present study is the $`x=0.05`$ compound (of the same batch as in ). Fig.1 shows its general magnetic features. The dc susceptibilities $`\chi _{FC}`$ and $`\chi _{ZFC}`$ have been measured along the usual field-cooled (FC) and zero-field cooled (ZFC) procedures. In the high temperature region, the dc and ac susceptibilities follow a paramagnetic behaviour. At $`T_c70K`$, they rise up abruptly, reaching a plateau. $`\chi _{ZFC}`$ and $`\chi ^{}`$ at low frequency are at about the same level, $`15\%`$ below the $`\chi _{FC}`$ value. The ac out-of-phase susceptibility $`\chi ^{\prime \prime }`$ peaks around $`T_c`$, and remains non-negligible in the ferromagnetic plateau region, confirming the existence of magnetic irreversibilities . At $`T_g10K`$, $`\chi ^{\prime \prime }`$ shows a strong peak, while $`\chi ^{}`$ and $`\chi _{ZFC}`$ decrease with decreasing temperature. In this low-temperature phase, spin glass features coexist with those of a regular ferromagnetic state (non zero magnetisation, Bragg peaks, spin waves ). The dc plateau susceptibility in the ferromagnetic phase, which corresponds approximately to the expected demagnetizing factor level, indicates that the system (although disordered) is organized in ferromagnetic domains. In Fig.2, we show $`\chi _{ac}(T)`$ measurements in the vicinity of the ferromagnetic transition. The curves have been taken upon cooling from the paramagnetic phase down to $`65K`$ and heating back, for two different cooling/heating rates. $`\chi ^{\prime \prime }`$ is clearly hysteretic (when heating back it is always lower than during cooling), and sensitive to the rate of temperature change (at a given temperature, $`\chi ^{\prime \prime }`$ is lower for slower rates). This effect is much larger than in spin-glasses, in which the cooling rate is to a large extent irrelevant (see ). But it is similar to the behaviour observed e.g. 
in a dipolar glass , where thermally activated domain growth is, as in the present case, the natural scenario. In Fig.2, the obtained heating curve is the same for both cooling/heating rates. The hysteresis seen in Fig.2 is essentially determined during the time spent around $`T_c`$, where domain walls are indeed expected to be weakly pinned. Finally, no significant hysteresis is seen on $`\chi ^{}`$ (inset of Fig.2), probably because $`\chi ^{}`$, dominated by a volume response of the domains, is much less sensitive than $`\chi ^{\prime \prime }`$ to domain wall dynamics. ## 3 Comparing aging phenomena in the spin-glass and ferromagnetic phases We have applied to this system the procedure which allowed the characterization of the so-called ‘memory and chaos effects’ in spin glasses . The ac susceptibility has been measured at the three frequencies of 0.04, 0.4 and 4 Hz in the same run. We use the data in the paramagnetic regime (assuming $`\chi ^{\prime \prime }=0`$) for checking and correcting slight frequency-dependent phase shifts in the detection setup. We first determine ‘reference curves’ (solid lines in Fig.3-4): starting from the paramagnetic phase, the temperature is continuously decreased down to 3K at a rate of $`0.1K/min`$ , and then raised back to 100K at this same rate. In a second experiment, we study aging in the following way. At 6 temperatures $`T_i=67,40,20,13,8\mathrm{and}5\mathrm{K}`$, we have interrupted the cooling, stabilized the temperature, and let the system age at constant $`T_i`$ during 9 hours. When reaching 3K, we continuously re-heated up to 100K (the whole measurement lasts about 3 days). At each temperature $`T_i`$, $`\chi ^{\prime \prime }`$ slowly relaxes downwards with time. In the spin-glass temperature region (Fig.3), the $`\chi ^{\prime \prime }`$ relaxation is the strongest in absolute as well as relative value (amounting, for $`0.04Hz`$, resp. to $`9,20\mathrm{and}10\%`$ at $`5,8,\mathrm{and}13\mathrm{K}`$). 
The corresponding $`\chi ^{}`$ relaxation, although systematic, remains lower than $`1\%`$. In the ferromagnetic region (Fig.4), a $`\chi ^{\prime \prime }`$ relaxation is also clearly observed (although weaker than at low temperature, amounting for $`0.04Hz`$ to resp. 4 and 5 % at 40 and 67K). Fig. 5 shows the frequency dependence of the aging part of $`\chi ^{\prime \prime }`$. In both ferromagnetic and spin-glass phases, the same systematic trend of a stronger relaxation at lower frequency is found. Quantitatively, the curves can be rescaled onto a unique curve as a function of the scaling variable $`\omega (t+t_0)`$ ($`t_0`$ being an off-set time which takes into account the fact that the cooling procedure is not instantaneous). This scaling is equivalent to the (approximate) $`t/t_w`$ scaling of dc experiments, which is typically seen in spin-glasses (see discussions in ). The most important point concerns the effect on aging of temperature changes. At all temperatures $`T_i`$ (hence in both phases), we find that when cooling is resumed after aging $`\chi ^{\prime \prime }`$ merges back (even increasing in some cases) with the reference curve, although this reference is a non-equilibrium curve. This happens here exactly like in spin glasses , in a similar fashion as in some other systems . Aging processes have to restart at lower temperatures, as if the aging evolution was not cumulative with that at a higher temperature (‘chaos-like’ or ‘rejuvenation’ effect), in contrast with the continuous hysteresis phenomenon displayed in Fig.2. Another important feature is seen in the $`\chi ^{\prime \prime }(T)`$ curve taken upon re-heating steadily from $`3K`$. When reaching each of the temperatures $`T_i`$ in the spin-glass region (5, 8 and 13K), $`\chi ^{\prime \prime }(T)`$ departs from the reference curve and traces back a dip which displays the memory of the past relaxation at $`T_i`$. Such memory effects have been characterized in details in spin glasses . 
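The $`\omega (t+t_0)`$ scaling can be illustrated with a toy model; the power-law form and the parameter values below are hypothetical choices, not fits to the data, and serve only to show how curves taken at different frequencies collapse onto a single curve:

```python
# Toy illustration of the omega*(t + t0) scaling of the aging part of chi''.
# The power-law form and its parameters are hypothetical, chosen only to
# demonstrate the collapse; they are not fitted to the measurements.
t0, b, A = 300.0, 0.2, 1.0       # offset time (s), aging exponent, amplitude

def chi2_aging(omega, t):
    """Aging part of the out-of-phase susceptibility (arbitrary units)."""
    return A * (omega * (t + t0)) ** (-b)

freqs = [0.04, 0.4, 4.0]         # Hz, the three measurement frequencies
x = 4000.0                       # a common value of the scaling variable
# All three frequencies give the same chi'' at the same omega*(t+t0):
collapsed = [chi2_aging(w, x / w - t0) for w in freqs]
```

In such a model the relaxation at a given time is stronger at lower frequency, reproducing the systematic trend noted above.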
In the ferromagnetic phase (20, 40 and 67K), in contrast, no memory effect is found with this procedure (Fig. 4). This result is in agreement with previous ac and dc temperature cycling experiments performed on another reentrant system , in which however disorder effects seem to be stronger than here (larger FC-ZFC separation, widely rounded ferromagnetic $`\chi _{FC}`$ plateau, a behaviour which is very similar to that of the more diluted $`x=0.90`$ thiospinel ). In , it is observed that the effect of positive and negative temperature cycles is similar in the ferromagnetic phase (rejuvenation-type upon cooling as well as heating), while it is strongly asymmetric in the spin-glass phase (rejuvenation upon cooling, memory upon heating). Hence, rejuvenation effects are found in these disordered ferromagnets like in spin glasses, but memory effects are not readily seen in the ferromagnetic phase, where domain growth processes are important. In the following, we discuss wall dynamics as a possible source of interplay between spin-glass like and domain growth processes, and show that memory effects can be obtained in the ferromagnetic phase provided that the low-temperature cycling is short enough. ## 4 Ferromagnetic aging: a combination of domain growth and spin-glass like dynamics In an ideal ferromagnet, domain growth only involves microscopic time scales. In the presence of pinning disorder (here dilution of $`Cr`$ by $`In`$, or structural defects), a domain wall has many metastable configurations between which it makes thermally activated hops. This gives rise to slow dynamics at the laboratory time scale. Actually, the problem of pinned elastic interfaces has similarities with that of spin-glasses. The frustration comes from the elastic energy associated to a deformation of the wall, which limits the possible excursions between the favourable pinning sites. 
Several theoretical arguments suggest that the energy landscape of a pinned domain wall is hierarchical, with small length scale $`\ell `$ reconformations corresponding to small energy barriers $`E(\ell )\sim \mathrm{\Upsilon }\ell ^\theta `$. In the spin glass phase, the rejuvenation upon cooling and the memory when heating back have been ascribed to a hierarchical organization of the metastable states, which are progressively uncovered as the temperature is decreased (see also ). In brief, when the temperature is reduced, the system remains in a deep well (allowing for the ‘memory effect’), while new subwells appear, inducing new aging processes (rejuvenation effect). For a pinned domain wall, the hierarchy of reconformation length scales is an appealing transposition in ‘real space’ of the spin-glass hierarchy of metastable states, in the following sense. As time elapses at fixed temperature, wall reconformations occur at longer and longer length scales (aging), while shorter length processes $`\ell <\ell ^{*}`$, with $`E(\ell ^{*})=kT`$, are fluctuating in equilibrium. When the temperature is lowered, the long wavelength modes become frozen (thus corresponding to the deep well, which allows the memory), and the ‘glass length’ $`\ell ^{*}`$ decreases to $`\ell ^{**}`$. The modes such that $`\ell ^{**}<\ell <\ell ^{*}`$, on the other hand, are no longer in equilibrium because the Boltzmann weights of the different ‘sub-wells’ have changed, and start aging (rejuvenation) . These reconformations are expected to contribute to $`\chi ^{\prime \prime }`$ as a function of $`\omega t`$ , as indeed observed here in both phases. In the ferromagnetic phase (and particularly close to $`T_c`$), in parallel with these reconformation processes, the average domain size grows with time. Thus, the impurities with which a domain wall interacts are changing. Obviously, this net motion of the domain walls cannot preserve the memory of the reconformations. 
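The activated reconformation picture can be made quantitative in a minimal model: with barriers $`E(\ell )\sim \mathrm{\Upsilon }\ell ^\theta `$ and Arrhenius dynamics $`t\sim \tau _0\mathrm{exp}(E/kT)`$, the largest equilibrated length after a waiting time $`t`$ is $`\ell ^{*}(T,t)=(kT\mathrm{ln}(t/\tau _0)/\mathrm{\Upsilon })^{1/\theta }`$. The parameter values in the sketch below are purely illustrative:

```python
import math

# Minimal model of thermally activated wall reconformations:
# barriers E(l) = Upsilon * l**theta, Arrhenius times t = tau0 * exp(E / kT).
# tau0, theta and Upsilon are illustrative values, not measured quantities.
tau0, theta, Upsilon = 1e-12, 0.5, 1.0   # microscopic time (s), exponent, stiffness

def l_star(T, t):
    """Largest length scale equilibrated after waiting time t (units with k_B = 1)."""
    return (T * math.log(t / tau0) / Upsilon) ** (1.0 / theta)
```

In this sketch $`\ell ^{*}`$ grows only logarithmically with waiting time (aging) and shrinks when the temperature is lowered, so the modes between the new and the old glass lengths fall out of equilibrium and restart aging (rejuvenation), while the longer, frozen modes retain the memory.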
Domain growth effects are directly visible in the overall decrease of $`\chi ^{\prime \prime }`$ between cooling and heating, a strong effect close to the transition at 70K (Fig.2 and 4), also present (slighter, but systematic) in the spin-glass phase (Fig.3). The absence of memory in Fig.4 is observed after a long period spent at lower temperatures, during which it is likely that the positions of the domain walls have indeed significantly changed. We have therefore performed a slightly different experiment, with the aim of controlling the effect of domain growth after the aging relaxation. After aging $`3.5h`$ at $`T_i=66.6K`$, the sample is cooled to a temperature $`T_d`$ which remains close to $`T_i`$, and then re-heated. To further speed up the procedure, we have omitted the measurement at the lowest frequency of $`0.04`$Hz. Fig.6 shows that when the low-temperature excursion is small enough (like $`T_d=64K`$), a partial memory of aging is revealed. The curves are somewhat noisy, but the effect is systematic as a function of $`T_d`$ : for lower and lower $`T_d`$ values (60, 50 and 30K), the dip progressively disappears . The same experiment, performed in the middle of the ferromagnetic plateau ($`T_i=40K`$, $`T_d=30\mathrm{and}35K`$), shows the same qualitative trend (although with weaker relaxation) . Furthermore, we have recently started the investigation of the non-diluted $`CdCr_2S_4`$ ferromagnet . The first results (inset of Fig.6) again show the same ‘rejuvenation and partial memory’ effects. In this non-diluted system, there are also competing interactions . Further studies on ferromagnets with non-frustrated interactions are needed in order to clarify the respective contributions of the frustration of interactions on the one hand, and of pinning mechanisms on defects of various origins on the other hand . Finally, let us note that a similar systematic shift of the dip position with $`T_d`$ is observed in both samples of Fig.6. 
This is reminiscent of the way in which the memory is erased by a slight re-heating in the $`x=0.85`$ thiospinel spin glass (see 2nd ref. of ), and is yet unexplained. ## 5 Conclusions Within the domain-growth description of spin-glasses , the way to interpret the rejuvenation effect (and the weak cooling rate dependence of the measured quantities) is to invoke the idea of the chaotic evolution of the spin-glass order with temperature . However, as emphasized in , this is difficult to reconcile with the simultaneous memory effect discussed above. Furthermore, such chaos should be absent in the ferromagnetic phase (or indeed only concern the domain wall conformations). Let us finally note that until now numerical simulations have failed to identify this type of ‘chaos’ . On the other hand, the coexistence of memory and rejuvenation has readily been interpreted in a hierarchical phase space picture , which finds in the present case a natural real space interpretation in terms of wall reconformations . Comparing the spin-glass and ferromagnetic situations suggests that the dynamics in spin glasses is dominated by wall reconformation processes and not by domain growth, which would erase the memory, and lead to strong cooling rate dependence. The extrapolation of this idea to the case of spin glasses raises intriguing questions about the nature of ‘domains’ and ‘walls’ in this case. Many ideas on the non-trivial geometry of the spin-glass domains have indeed been proposed over the years . A possibility is that the ordered phase of a spin-glass contains a large number of pinned, zero tension walls, which reconform in their disordered landscape without any overall tendency to coarsen. 
The growth of a spin-glass correlation length, reported both numerically and experimentally , could then be understood as the progressive increase of a typical length scale for wall reconformations at fixed temperature, while the effect of temperature variations would involve a hierarchy of different length scales. \*** ## 6 Acknowledgements We are grateful to F. Alet for his active participation in this work. We benefited from stimulating discussions with M. Ocio, J. Houdayer, A. Levelut, O. Martin, S. Miyashita and H. Yoshino, and thank L. Le Pape for his skillful technical support.
no-problem/9908/astro-ph9908334.html
ar5iv
text
# Diquark Condensates and Compact Star Cooling ## 1 Introduction The interiors of compact stars have been discussed as systems where high-density phases of strongly interacting matter do occur in nature, see (Glendenning, 1996) and (Weber, 1999). The consequences of different phase transition scenarios for the cooling behaviour of compact stars have been reviewed recently (Page, 1992; Schaab et al., 1996, 1997) in comparison with existing X-ray data. A particular discussion has been devoted to the idea that a strong attraction in three flavor $`uds`$-quark matter may allow for the existence of super-dense anomalous nuclei and strange quark stars (Bodmer, 1971; Witten, 1984; de Rujula and Glashow, 1984). Depending on the value of the bag constant $`B`$, different possible types of stars were discussed: ordinary neutron stars (NS) without any quark core, neutron stars with quark matter present only in their deep interiors (for somewhat intermediate values of $`B`$), neutron stars with a large quark core (QCNS) and a crust typical for neutron stars, and quark stars (QS) with a tiny crust of normal matter and with no crust (both for low $`B`$ values). By QCNS we understand compact stars in which the hadronic shell is rather narrow in the sense that it does not essentially affect the cooling which is mainly due to the quark core. However, this hadronic shell plays the role of an insulating layer between quark matter and the normal crust. By QS we mean compact stars for which the hadronic shell is absent, which allows only for tiny crusts with maximum densities below the neutron drip density ($`10^{11}`$ g/cm<sup>3</sup>) and masses $`M_{\mathrm{cr}}\stackrel{<}{_{}}10^{-5}M_{}`$ (Alcock et al., 1986; Horvath et al., 1991). Therefore, we suppose that the difference between the models of QS and QCNS regarding their cooling behaviour is only in the thickness of the crust. 
This results in completely different relations between internal ($`T`$) and surface ($`T_\mathrm{s}`$) temperatures for QS and QCNS. Recent works (Alford et al., 1998; Rapp et al., 1998; Schäfer, 1998; Carter and Diakonov, 1999; Rapp et al., 1999; Blaschke and Roberts, 1998; Bloch et al., 1999; Pisarski and Rischke, 1999; Alford et al., 1999; Schäfer and Wilczek, 1999a; Alford et al., 1999; Schäfer and Wilczek, 1999b) demonstrate the possibility of diquark condensates characterized by large pairing gaps ($`\mathrm{\Delta }\sim 100`$ MeV) in quark cores of some neutron stars and in QS and discuss different possible phases of quark matter. Large gaps arise for quark-quark interactions motivated by instantons (Diakonov et al., 1996; Carter and Diakonov, 1999; Rapp et al., 1999) and by nonperturbative gluon propagators (Blaschke and Roberts, 1998; Bloch et al., 1999; Pisarski and Rischke, 1999). To be specific in our predictions we will consider models of the canonical QS and QCNS of 1.4 solar masses ($`1.4M_{}`$) at a constant density. The constant density profile is actually a very good approximation for QS of the mass $`M\simeq 1.4M_{}`$, see Alcock et al. (1986). We consider the model of QCNS with a crust of the mass $`M_{\mathrm{cr}}\sim 10^{-1}M_{}`$, the model of QS with a tiny crust mass ($`M_{\mathrm{cr}}\stackrel{<}{_{}}10^{-5}M_{}`$) and the model of QS with no crust. For QCNS we shall use the same $`T_s/T`$ ratio as for ordinary NS, see (Tsuruta, 1979; Maxwell, 1979; Shapiro and Teukolsky, 1983), whereas for QS we use somewhat larger values for this ratio, namely $`T_s=5\times 10^{-2}T`$ for a tiny crust (Horvath et al., 1991) and $`T_s=T`$ for a negligible crust (Pizzochero, 1991). In the latter case, however, we assume the existence of black body photon radiation from the surface as for the cases of more extended crusts. 
We will estimate the cooling of QS and QCNS first in the absence of color superconductivity and then in the presence of color superconductivity for small quark gaps ($`\mathrm{\Delta }`$ 0.1…1 MeV), as suggested by Bailin and Love (1984) and for large gaps ($`\mathrm{\Delta }\sim 100`$ MeV) as obtained in Refs. (Alford et al., 1998; Rapp et al., 1998; Schäfer, 1998; Carter and Diakonov, 1999; Rapp et al., 1999; Blaschke and Roberts, 1998; Pisarski and Rischke, 1999; Alford et al., 1999; Schäfer and Wilczek, 1999a; Alford et al., 1999; Schäfer and Wilczek, 1999b). In the latter case we will consider two phases: the color-flavor-locked $`uds`$-phase (Alford et al., 1999; Schäfer and Wilczek, 1999a; Alford et al., 1999; Schäfer and Wilczek, 1999b) and the $`N_f=2`$ color superconducting phase (Alford et al., 1998; Rapp et al., 1998; Schäfer, 1998; Blaschke and Roberts, 1998; Carter and Diakonov, 1999; Rapp et al., 1999; Bloch et al., 1999; Pisarski and Rischke, 1999), in which the $`s`$-quark is absent and the $`ud`$-diquark condensate selects a direction in color space whereby the color charge has to be compensated by the remaining unpaired quarks. Finally, we want to discuss the question whether the hypothesis of a color superconducting quark matter phase in compact star interiors is compatible with existing X-ray data. ## 2 Normal quark matter A detailed discussion of the neutrino emissivity of quark matter has first been given by Iwamoto (1982), where the possibility of color superconductivity was not discussed. In this work the quark direct Urca reactions (QDU) $`d\to u+e+\overline{\nu }`$ and $`u+e\to d+\nu `$ were suggested as the most efficient processes. Their emissivities were estimated as <sup>1</sup><sup>1</sup>1We note that the numerical factors in the estimates below are valid only within an order of magnitude. This is sufficient for the qualitative comparison of cooling scenarios we present in this work. 
$`ϵ_\nu ^{\mathrm{QDU}}\simeq 10^{26}\alpha _s\left({\displaystyle \frac{\rho _b}{\rho _0}}\right)Y_e^{1/3}T_9^6\mathrm{erg}\mathrm{cm}^{-3}\mathrm{sec}^{-1},`$ (1) where at baryon densities $`\rho _b\simeq 2\rho _0`$ the strong coupling constant is $`\alpha _s\simeq 1`$ and decreases logarithmically at still higher densities Kisslinger and Morley (1976). The nuclear saturation density is $`\rho _0=0.17\mathrm{fm}^{-3}`$, $`Y_e=\rho _e/\rho _b`$ is the electron fraction, and $`T_9`$ is the temperature in units of $`10^9`$ K. The larger the density of the $`uds`$-system, the smaller is its electron fraction. For a density $`\rho _b\simeq 3\rho _0`$ one can expect a rather low electron fraction of strange quark matter $`Y_e\sim 10^{-5}`$ (Glendenning, 1996) and eq. (1) yields $`ϵ_\nu ^{\mathrm{QDU}}\simeq 10^{25}T_9^6\mathrm{erg}\mathrm{cm}^{-3}\mathrm{sec}^{-1}`$, see (Duncan, 1983; Horvath et al., 1991). We did not yet discuss the strange quark contribution given by the direct Urca processes $`s\to u+e+\overline{\nu }`$ and $`u+e\to s+\nu `$. Although these processes can occur, their contribution is suppressed compared to the corresponding $`ud`$-reactions (Duncan, 1983) by an extra factor $`\text{sin}^2\theta _\mathrm{C}\sim 10^{-3}`$, where $`\theta _\mathrm{C}`$ is the Cabibbo angle. If for somewhat larger density the electron fraction were too small ($`Y_e<Y_{ec}\simeq \sqrt{3}\pi m_e^3/(8\alpha _s^{3/2}\rho _b)\simeq 2\times 10^{-8}`$, for $`\alpha _s\simeq 0.7`$ and $`\rho _b\simeq 5\rho _0`$, $`m_e`$ is the electron mass), then all the QDU processes would be completely switched off (Duncan, 1983) and the neutrino emission would be governed by two-quark reactions like the quark modified Urca (QMU) $`d+q\to u+q+e+\overline{\nu }`$ and the quark bremsstrahlung (QB) processes $`q_1+q_2\to q_1+q_2+\nu \overline{\nu }`$. 
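As a numerical cross-check (not part of the original text), the estimate (1) can be evaluated for the parameter set quoted above, $`\rho _b=3\rho _0`$, $`\alpha _s=1`$ and $`Y_e=10^{-5}`$:

```python
def eps_qdu(T9, rho_ratio=3.0, alpha_s=1.0, Ye=1e-5):
    """Order-of-magnitude QDU emissivity, eq. (1), in erg cm^-3 s^-1."""
    return 1e26 * alpha_s * rho_ratio * Ye ** (1.0 / 3.0) * T9 ** 6

# For rho_b = 3*rho_0, alpha_s = 1, Y_e = 1e-5 this reproduces the quoted
# ~1e25 * T9**6 erg cm^-3 s^-1 order of magnitude.
```

The strong $`T_9^6`$ dependence is what makes this process dominate the early cooling.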
The emissivities of the QMU and QB processes were estimated as (Iwamoto, 1982) $`ϵ_\nu ^{\mathrm{QMU}}\sim ϵ_\nu ^{\mathrm{QB}}(\mathrm{scr})\sim 10^{20}T_9^8\mathrm{erg}\mathrm{cm}^{-3}\mathrm{sec}^{-1}.`$ (2) The latter estimate of the QB emissivity relies on the assumption that the exchanged gluon is screened. If one assumes instead that quarks are coupled by transverse non-screened gluons, then one gets (Price, 1980) $`ϵ_\nu ^{\mathrm{QB}}(\mathrm{unscr})\sim 10^{22}T_9^6\mathrm{erg}\mathrm{cm}^{-3}\mathrm{sec}^{-1}`$ . With a nonperturbative gluon exchange (Blaschke and Roberts, 1998; Bloch et al., 1999) we expect that the estimate $`ϵ_\nu ^{\mathrm{QB}}(\mathrm{scr})`$ is more appropriate than the one given by $`ϵ_\nu ^{\mathrm{QB}}(\mathrm{unscr})`$. Therefore, in evaluating the emissivity of QB processes we will use (2). Other neutrino processes (like the plasmon decay $`\gamma _{\mathrm{pl}}\to ee^{-1}\to \nu \overline{\nu }`$ which goes via coupling to intermediate electron-electron hole states, and the corresponding color plasmon decay $`g_{\mathrm{pl}}\to qq^{-1}\to \nu \overline{\nu }`$ which goes via coupling of the gluon to quark-quark hole states (Iwamoto, 1982), see Fig. 4) have much smaller emissivities in the normal quark matter phase under consideration and can be neglected. Among the processes in the crust, the electron bremsstrahlung on nuclei gives the largest contribution to the emissivity as estimated in (Iwamoto, 1982) $`ϵ_\nu ^{\mathrm{cr}}\simeq 10^{21}\left({\displaystyle \frac{M_{cr}}{M_{}}}\right)T_9^6\mathrm{erg}\mathrm{cm}^{-3}\mathrm{sec}^{-1}.`$ (3) This contribution can be neglected for QS due to the tiny mass of the QS crust $`M_{cr}\stackrel{<}{_{}}10^{-5}M_{}`$. Besides, one should add the photon contribution $`ϵ_\gamma \simeq 2\times 10^{18}\left({\displaystyle \frac{R}{10\mathrm{km}}}\right)^2T_{s7}^4\mathrm{erg}\mathrm{cm}^{-3}\mathrm{sec}^{-1},`$ (4) where $`T_{s7}`$ is the surface temperature in units of $`10^7`$ K, see (Shapiro and Teukolsky, 1983). 
This process becomes the dominant one for QS at essentially shorter times than for the QCNS due to the higher $`T_s/T`$ ratios for the former. Internal and surface temperatures are related by a coefficient determined by the scattering processes occurring in the outer region, where the electrons become non-degenerate. For NS with a rather thick crust, an appropriate fit to numerical calculations (Tsuruta, 1979) is given by a simple formula (Shapiro and Teukolsky, 1983) $`T_s=(10T)^{2/3},`$ (5) where $`T_s`$ and $`T`$ both are measured in units of K. We shall use this expression when dealing with QCNS. A rough estimate of (5) yields $`T_s=a\times 10^{-2}T`$ with $`a`$ ranging from about 0.2 to 2, depending on the internal temperature, which varies in the interval $`10^{10}`$…$`10^7`$ K of our interest. In the QS the crust is much thinner than in the NS and the ratio $`T_s/T`$ should be significantly larger. Therefore, depending on the thickness of the crust we shall use two estimates for QS scenarios: $`T_s=5\times 10^{-2}T`$ for a tiny crust (Horvath et al., 1991), and $`T_s\simeq T`$ for a negligible crust (Pizzochero, 1991). In order to compute the cooling history of the star we still need the specific heat of the electron, photon, gluon and quark sub-systems. In accordance with the estimates of Iwamoto (1982), Horvath et al. (1991) we have $$c_e\simeq 0.6\times 10^{20}\left(\frac{Y_e\rho _b}{\rho _0}\right)^{2/3}T_9\mathrm{erg}\mathrm{cm}^{-3}\mathrm{K}^{-1},$$ (6) $$c_\gamma \simeq 0.3\times 10^{14}T_9^3\mathrm{erg}\mathrm{cm}^{-3}\mathrm{K}^{-1},$$ (7) $$c_g\simeq 0.3\times 10^{14}N_gT_9^3\mathrm{erg}\mathrm{cm}^{-3}\mathrm{K}^{-1},$$ (8) $$c_q\simeq 10^{21}\left(\frac{\rho _b}{\rho _0}\right)^{2/3}T_9\mathrm{erg}\mathrm{cm}^{-3}\mathrm{K}^{-1},$$ (9) where $`N_g`$ is the number of different color states of massless gluons. The very small contribution to the specific heat of the crust can be neglected (Lattimer et al., 1994). 
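The interplay of the dominant terms can be sketched numerically: keeping only the QDU emissivity (1) and the quark specific heat (9), the thermal balance $`c_qdT/dt=-ϵ_\nu ^{\mathrm{QDU}}`$ has a closed-form solution. The sketch below is our illustration, not part of the original analysis; it neglects photon cooling and the crust, so it applies only to the early, neutrino-dominated era:

```python
YEAR = 3.15e7                     # seconds

def cool(T9_0, t, rho_ratio=3.0, alpha_s=1.0, Ye=1e-5):
    """Closed-form solution of c_q dT/dt = -eps_QDU, with T9 = T / 10^9 K.
    Only QDU cooling, eq. (1), and the quark specific heat, eq. (9), are kept."""
    eps0 = 1e26 * alpha_s * rho_ratio * Ye ** (1.0 / 3.0)   # eq. (1) prefactor
    cq0 = 1e21 * rho_ratio ** (2.0 / 3.0)                   # eq. (9) prefactor
    k = eps0 / (cq0 * 1e9)        # so that dT9/dt = -k * T9**5
    return T9_0 * (1.0 + 4.0 * k * T9_0**4 * t) ** -0.25

# Starting from T_{0,9} = 10, the core drops to a few times 1e8 K within a year.
T9_after_1yr = cool(10.0, YEAR)
```

The asymptotic $`T_9t^{1/4}`$ behaviour of this solution reflects the $`T^6`$ emissivity against the linear-in-$`T`$ specific heat of degenerate quarks.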
The cooling equation reads $`{\displaystyle \underset{i=q,e,\gamma ,g}{\sum }}c_i{\displaystyle \frac{dT}{dt}}=-ϵ_\gamma -{\displaystyle \underset{j=\mathrm{QDU},\mathrm{QMU},\mathrm{QB},\mathrm{cr}}{\sum }}ϵ_\nu ^j,`$ (10) where the summation is over all contributions to the specific heats and emissivities as discussed above. The evolution at large times on which we are focusing our interest here is insensitive to the assumed value of the initial temperature $`T_0`$. We checked this using different initial temperatures. To be specific we choose the value $`T_{0,9}=10`$ as a typical initial temperature for proto-neutron stars. In Figs. 1-3 we show the cooling history of QCNS with standard thickness of the NS crust (when internal and surface temperatures are related by Eq. (5)), QS with a tiny crust ($`T_s=5\times 10^{-2}T`$) and QS with negligible crust ($`T_s=T`$), respectively. Solid curves are for the matter suggested to be in the normal state, i.e. in the absence of color superconductivity. Different groups of data points are taken from Table 3 of Ref. (Schaab et al., 1999) where the notations are explained. In the lower panels of Figs. 1-3, we show the results for $`Y_e>Y_{ec}`$ taking $`Y_e=10^{-5}`$, $`\alpha _s=1`$, $`\rho =3\rho _0`$. This is a representative set of parameters for which the QDU processes contribute to the cooling. We see that two low-temperature pulsars can be explained as QCNS being in normal state (Fig. 1, thick solid curve, lower panel) and many observations can be interpreted as QS in normal state with a tiny crust (Fig. 2, lower panel). The upper panels of Figs. 1-3 demonstrate the cooling history of QCNS and QS for $`Y_e<Y_{ec}`$, namely for $`Y_e=0`$, $`\alpha _s=0.7`$, and $`\rho =5\rho _0`$. QCNS being in normal state (Fig. 
1, thick solid curve, upper panel) cool down rather slowly but are still in agreement with the data for a few pulsars. For QS with a tiny crust (Fig. 2, $`T_s=5\times 10^{-2}T`$) we get a nice fit of many data points. In both cases, $`Y_e<Y_{ec}`$ and $`Y_e>Y_{ec}`$ (see Fig. 3), QS with negligible crust cool down too fast in disagreement with the X-ray data. ## 3 Color superconductivity In the standard scenario of NS cooling the inclusion of nucleon pairing suppresses the emissivity resulting in a more moderate cooling. Now, considering QS and QCNS we will show that we deal with the opposite case. Due to the pairing, the emissivity of QDU processes is suppressed by a factor $`\text{exp}(-\mathrm{\Delta }/T)`$ and the emissivities of QMU and QB processes are suppressed by a factor $`\text{exp}(-2\mathrm{\Delta }/T)`$ for $`T<T_c`$. Accordingly, in our calculations we will use expression (1) for QDU, suppressing the rate by $`\text{exp}(-\mathrm{\Delta }/T)`$, and expressions (2) for QMU and QB, suppressing the rates by $`\text{exp}(-2\mathrm{\Delta }/T)`$ for $`T<T_c`$. We also observe that plasmon and color plasmon decay processes are switched off in the superconducting phase when the photons and the gluons acquire masses due to the Higgs effect, as it happens for photons in usual superconductors. Voskresensky et al. (1998) however demonstrated that in superconducting matter there appears a new neutrino neutral current process analogous to the plasmon decay but now due to a massive photon decay. Its emissivity is suppressed by the factor $`\text{exp}(-m_\gamma /T)`$ rather than by $`\text{exp}(-\mathrm{\Delta }/T)`$, as for direct Urca processes, or by $`\text{exp}(-2\mathrm{\Delta }/T)`$, as for modified Urca and corresponding bremsstrahlung processes. This results in a large contribution for small but finite values of $`m_\gamma `$. 
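The impact of these suppression factors can be quantified directly (an illustration, not part of the original text): at $`T=10^9`$ K one has $`T0.086`$ MeV, so even a 1 MeV gap suppresses the QDU rate by several orders of magnitude, while a 100 MeV gap removes it entirely:

```python
import math

MEV_IN_K = 1.1604e10              # 1 MeV / k_B in kelvin

def suppression(delta_mev, T9, n=1):
    """Pairing suppression factor exp(-n*Delta/T) of an emissivity."""
    T_mev = T9 * 1e9 / MEV_IN_K
    return math.exp(-n * delta_mev / T_mev)

# At T = 1e9 K (T ~ 0.086 MeV): a small gap of 1 MeV suppresses QDU by ~1e-5,
# while a large gap of ~100 MeV switches the quark processes off entirely
# (the factor underflows to zero in double precision).
small = suppression(1.0, 1.0)
large = suppression(100.0, 1.0)
```

For QMU and QB the same function with `n=2` applies, making their suppression even stronger.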
Naively, one could expect that in a color superconductor the squared photon mass is proportional to the fine structure constant $`\alpha =1/137`$ as it is in ordinary superconductors. Then it would be much smaller than the squared gluon mass since the latter quantity in the color superconducting phase has to be proportional to the corresponding strong coupling constant $`\alpha _s`$. In reality, due to the common gauge transformation for electromagnetic and color fields one deals with mixed photon-gluon excitations. We demonstrate this using the expression for the free energy density of the color superconducting phase (Bailin and Love, 1984), $`f`$ $`=`$ $`f_n+ad^{*}d+{\displaystyle \frac{1}{2}}b(d^{*}d)^2`$ (11) $`+`$ $`c(\nabla +iq\stackrel{\to }{A}+{\displaystyle \frac{ig\stackrel{\to }{B}}{\sqrt{3}}})d^{*}(\nabla -iq\stackrel{\to }{A}-{\displaystyle \frac{ig\stackrel{\to }{B}}{\sqrt{3}}})d`$ $`+`$ $`{\displaystyle \frac{(\text{rot}\stackrel{\to }{A})^2}{8\pi }}+{\displaystyle \frac{(\text{rot}\stackrel{\to }{B})^2}{8\pi }},`$ where $`f_n`$ is the normal part of the free energy density, $`a=\mu p_{Fq}t/\pi ^2`$, $`t=(T-T_c)/T_c<0`$, $`b=7\zeta (3)\mu p_{Fq}/(8\pi ^4T_c^2)`$, $`c=p_{Fq}^2b/(6\mu ^2)`$, $`\mu \simeq p_{Fq}`$ is the chemical potential of an ultra-relativistic quark, $`q=\sqrt{\alpha }/3`$ is the electric charge of a $`ud`$-pair, $`\alpha =1/137`$. We have introduced an interaction with two gauge fields $`A_\mu `$ and $`B_\mu `$. $`A_\mu `$ is the electromagnetic field and $`B_\mu `$ relates to the gluons. For simplicity we consider only fluctuations of space-like components of the fields and assume the $`B_\mu `$ field to be an Abelian field. Variation of (11) with respect to the fields gives the corresponding equations of motion. Taking $`d=d_0+d^{\prime }`$, where $`d_0=\sqrt{-a/b}`$ is the order parameter and $`d^{\prime }`$ is the fluctuating field, we linearize the equations of motion for the fluctuating fields $`d^{\prime },\stackrel{\to }{A},\stackrel{\to }{B}`$. 
Solving these equations in the Fourier representation, we get three branches of the spectrum. The branch $`\omega ^2=\stackrel{}{q}^2+2|a|`$ corresponds to fluctuations of the order parameter, characterized by a large mass $`m_d=\sqrt{2|t|\mu p_{Fq}}/\pi \sim m_\pi =140`$ MeV. The branch $`\omega ^2=\stackrel{}{q}^2+m_{\gamma ,g}^2`$ (12) describes a massive photon-gluon excitation with a mass $`m_{\gamma ,g}^2=8\pi c(\alpha +3\alpha _s)d_0^2/9`$. The extra branch $`\omega ^2=\stackrel{}{q}^2`$ describes a massless mixed photon-gluon Goldstone excitation. Thus, in contrast to the usual proton superconducting phase of a NS, where the photon has a rather low mass $`m_\gamma =d_0\sqrt{8\pi c\alpha }\simeq 4`$ MeV for $`\mu \simeq 400`$ MeV, in the color superconductor we probably deal with a much more massive mixed photon-gluon excitation (with $`\alpha `$ being replaced by $`\alpha +3\alpha _s`$) and with the corresponding Goldstone boson (note, however, that the penetration depth of the external magnetic field is associated with the above-mentioned small photon mass rather than with the massive photon-gluon excitation or the massless Goldstone excitation; see Blaschke et al., 1999). The Goldstone boson does not contribute to the mentioned photon-gluon decay process, whereas the massive excitation does. Now, armed with an expression for the photon-gluon mass, we may estimate the emissivity of the corresponding processes $`(\gamma g)\to ee^{-1}+qq^{-1}\to \nu \overline{\nu }`$, where $`e^{-1}`$ and $`q^{-1}`$ denote the electron hole and the quark hole, respectively, see Fig. 4. 
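The splitting into one massless and one massive mixed mode can be checked by diagonalizing the 2×2 gauge-field mass matrix generated by the $`|(q\stackrel{}{A}+g\stackrel{}{B}/\sqrt{3})d_0|^2`$ term of eq. (11). The sketch below is only illustrative: the overall factor $`8\pi cd_0^2`$ is set to one, and $`g^2`$ is identified with $`\alpha _s`$ (a unit convention inferred from $`q=\sqrt{\alpha }/3`$, not stated explicitly in the text).

```python
import numpy as np

alpha, alpha_s = 1.0 / 137.0, 1.0   # fine-structure and strong coupling constants
q2 = alpha / 9.0                    # q^2, squared electric charge of a ud-pair
g2_3 = alpha_s / 3.0                # (g/sqrt(3))^2, with g^2 identified with alpha_s

# Gauge-field mass matrix in the (A, B) basis, in units of 8*pi*c*d0^2.
# It is the rank-one matrix v v^T with v = (q, g/sqrt(3)), so one eigenvalue
# vanishes (the Goldstone mode) and the other carries the whole mass.
M2 = np.array([[q2,                 np.sqrt(q2 * g2_3)],
               [np.sqrt(q2 * g2_3), g2_3              ]])

ev = np.sort(np.linalg.eigvalsh(M2))
print(ev[0])                               # ~0: massless mixed photon-gluon mode
print(ev[1], (alpha + 3 * alpha_s) / 9.0)  # massive mode, (alpha + 3*alpha_s)/9
```

The massive eigenvalue reproduces the combination $`(\alpha +3\alpha _s)/9`$ quoted for $`m_{\gamma ,g}^2`$ in eq. (12).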
Using the result for $`\gamma \to ee^{-1}+pp^{-1}\to \nu \overline{\nu }`$ (Voskresensky et al., 1998) we easily get $`ϵ_{(\gamma g)}`$ $`\simeq `$ $`10^{29}\left({\displaystyle \frac{m_{\gamma ,g}}{\text{MeV}}}\right)^{7/2}T_9^{3/2}\left(1+{\displaystyle \frac{3T}{2m_{\gamma ,g}}}\right)`$ (13) $`\times `$ $`\text{exp}(-m_{\gamma ,g}/T)\text{erg}\text{cm}^{-3}\text{sec}^{-1},`$ for $`T<m_{\gamma ,g}`$, and using the condition $`\mathrm{\Delta }\ll \mu `$. As we see, the emissivity of this process is strongly suppressed for the values $`m_{\gamma ,g}\sim 70`$ MeV following from eq. (12). Also the specific heat of this mixed photon-gluon excitation is suppressed by the same exponential factor $`\text{exp}(-m_{\gamma ,g}/T)`$. For the Goldstone excitation the contribution to the specific heat is given by Eq. (7). For the quark specific heat at $`T<T_c`$ we use an expression similar to the one which applies in the case of nucleon pairing (Mühlschlegel, 1959; Maxwell, 1979; Horvath et al., 1991), i.e. $`c_{sq}`$ $`=`$ $`3.2c_q(T_c/T)\text{exp}(-\mathrm{\Delta }/T)`$ (14) $`\times `$ $`\left[2.5-1.7T/T_c+3.6(T/T_c)^2\right],`$ where $`T_c`$ is related to $`\mathrm{\Delta }`$ as $`\mathrm{\Delta }=1.76T_c`$ for the case of small gaps. For the CFL and 2SC phases we will use $`T_c\simeq 0.4\mathrm{\Delta }`$. Actually, in the latter case the relation between $`T_c`$ and $`\mathrm{\Delta }`$ is unsettled. However, it is believed that the coefficient in the standard BCS formula $`T_c\simeq 0.57\mathrm{\Delta }`$ is appreciably reduced due to the impact of instanton-anti-instanton molecules (Rapp et al., 1999). The mentioned uncertainty in the value of $`T_c`$ for the CFL and 2SC phases does not significantly affect the cooling curves, since the dominant effect comes from the exponential factor, where $`\mathrm{\Delta }`$ enters rather than $`T_c`$. Now, armed with all necessary expressions, we may estimate the cooling of QS and QCNS in the $`uds`$\- or 2SC phases (except for the crust). 
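The strength of the exponential suppression in eq. (13) is easy to quantify. The snippet below evaluates the emissivity for $`m_{\gamma ,g}=70`$ MeV; the conversion $`k_BT\approx 0.0862\,T_9`$ MeV (with $`T_9=T/10^9`$ K) is our assumption about the units of $`T`$ in the Boltzmann factor, since eq. (13) mixes $`T_9`$ and MeV.

```python
import math

def eps_gamma_g(m_MeV, T9):
    """Emissivity of the massive photon-gluon decay, eq. (13), in erg cm^-3 s^-1.
    T9 = T / 10^9 K; the Boltzmann factor uses T in MeV via k_B*10^9 K ~ 0.0862 MeV."""
    T_MeV = 8.617e-2 * T9
    return (1e29 * m_MeV ** 3.5 * T9 ** 1.5
            * (1.0 + 1.5 * T_MeV / m_MeV)
            * math.exp(-m_MeV / T_MeV))

# For m_{gamma,g} = 70 MeV the rate collapses as the star cools:
for T9 in (100.0, 30.0, 10.0):
    print(T9, eps_gamma_g(70.0, T9))
```

The rate drops by roughly thirty orders of magnitude between $`T_9=100`$ and $`T_9=10`$, which is why this process matters only at the early stage of the cooling.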
### 3.1 Cooling of different $`uds`$-phases: $`uds`$-small gaps, CFL ($`Y_e>Y_{ec}`$), and CFL ($`Y_e<Y_{ec}`$) We select the following values of the pairing gaps: small gaps $`\mathrm{\Delta }=0.1\mathrm{\dots }1`$ MeV, as suggested to occur in the color superconducting region by Bailin and Love (1984), and a large gap $`\mathrm{\Delta }\sim 50`$ MeV, as suggested for the CFL phase in recent works (Alford et al., 1998; Rapp et al., 1999). Horvath et al. (1991) have discussed the cooling of superconducting QS and QCNS for very small gaps with critical temperatures $`T_{c9}=0.1,0.5,1`$. However, for QDU processes the suppression factor $`\text{exp}(-2\mathrm{\Delta }/T)`$ has been used there rather than $`\text{exp}(-\mathrm{\Delta }/T)`$. The value of the critical temperature $`T_{c9}=0.1`$ seems to be quite small; therefore we use $`\mathrm{\Delta }=0.1\mathrm{\dots }1`$ MeV for the case of small gaps. We calculate the cooling history of QS and QCNS using eq. (10), where now the summation over $`j`$ implies summation of the emissivities of QDU, eq. (1), suppressed by $`\text{exp}(-\mathrm{\Delta }/T)`$, QMU and QB, eq. (2), suppressed by $`\text{exp}(-2\mathrm{\Delta }/T)`$, the emissivity of the crust, eq. (3), and the emissivity of photon-gluon decay, eq. (13). The summation over $`i`$ implies summation of the quark contribution evaluated according to eq. (14), the electron contribution, eq. (6), the massless photon-gluon Goldstone contribution, which coincides with that given by eq. (7), and the contributions of massive gluons given by eq. (8), suppressed by $`\text{exp}(-m_{\gamma g}/T)`$ and thus very small. The contribution of the crust to the specific heat is negligible. In the CFL phase there also exist 9 hadronic quasi-Goldstone modes. Although their masses are not known, we may roughly estimate them as $`m_h>m_q`$, where $`m_q`$ is the bare quark mass with a minimum value of about $`5`$ MeV. 
With these masses the contribution of the hadronic quasi-Goldstone modes is also very small at the temperatures of our interest and can be neglected. The dotted curves in Figs. 1–3 demonstrate the cooling history of QCNS and QS for the gap $`\mathrm{\Delta }=0.1`$ MeV, whereas the dashed lines correspond to the cooling of the CFL phase for $`\mathrm{\Delta }=50`$ MeV. All thin dotted and dashed lines correspond to the case when the process of massive mixed photon-gluon decay is artificially excluded, whereas the corresponding thick lines represent the cooling history when this process is taken into account according to Eq. (13). This new process strongly influences the early stage of the cooling, although the mass of the mixed photon-gluon excitation was taken to be very high ($`m_{\gamma ,g}=70`$ MeV for $`Y_e=10^{-5}>Y_{ec}`$, lower panel, and $`m_{\gamma ,g}=60`$ MeV for $`Y_e=0`$, upper panel). This is due to the big numerical factor in eq. (13). In all the cases we obtain very rapid cooling, in disagreement with the data. Particularly rapid cooling occurs for the CFL phase. In the latter case the contributions of the QDU, QMU and QB processes to the emissivity are suppressed, as well as the quark contribution to the specific heat. The rate is governed by the photon emissivity from the surface. For $`Y_e>Y_{ec}`$, the specific heat is determined by the electrons. As a consequence of this reduction of the specific heat we get a very rapid cooling of the CFL ($`Y_e>Y_{ec}`$) phase, see the lower panel of Figs. 1–3. For $`Y_e=0`$ (upper panel of Figs. 1–3) there are no electrons and the specific heat is determined by the very small contribution of the Goldstone mode given by eq. (7), so that both QCNS and QS cool down even faster than in the case $`Y_e=10^{-5}>Y_{ec}`$. In both the $`Y_e>Y_{ec}`$ and $`Y_e<Y_{ec}`$ cases the cooling time of the CFL phase is extremely small. 
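The extreme shortness of the CFL cooling time can be illustrated with a toy thermal balance $`C(T)\,dT/dt=-L(T)`$, for which the cooling time is $`t=\int _{T_f}^{T_i}C(T)/L(T)\,dT`$. The scalings below (surface photon luminosity $`L\propto T^{2.2}`$, $`C\propto T`$ for unpaired fermions, an exponentially suppressed paired-quark term plus a tiny Goldstone $`T^3`$ term) follow the qualitative discussion in the text, but every prefactor is a placeholder chosen for illustration only, not an input of eq. (10).

```python
import math

def cooling_time(Ti, Tf, C, L, n=20000):
    """t = integral_Tf^Ti C(T)/L(T) dT, midpoint rule; temperatures in toy units."""
    dT = (Ti - Tf) / n
    return sum(C(Tf + (k + 0.5) * dT) / L(Tf + (k + 0.5) * dT)
               for k in range(n)) * dT

Delta = 50.0                                   # pairing gap, toy (MeV-like) units
L_surf = lambda T: T ** 2.2                    # surface photon luminosity, toy scaling
C_norm = lambda T: T                           # normal quarks / electrons: C ~ T
C_cfl = lambda T: T * math.exp(-Delta / T) + 1e-12 * T ** 3   # paired quarks + Goldstone

t_norm = cooling_time(5.0, 0.5, C_norm, L_surf)
t_cfl = cooling_time(5.0, 0.5, C_cfl, L_surf)
print(t_cfl / t_norm)    # << 1: the suppressed specific heat shortens cooling drastically
```

Whatever the toy prefactors, the ratio of cooling times is controlled by the factor $`\text{exp}(-\mathrm{\Delta }/T)`$ in the specific heat, consistent with the extremely short CFL cooling time found above.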
This means that in reality the cooling is governed by the heat transport in the thin crust (Pizzochero, 1991), which we did not take into account. Thus we see that QCNS and QS, if present among the objects measured in X-rays, cannot be in the CFL phase. The cooling of this phase is so rapid that one might expect problems not only for the models of QS and QCNS but also for the models of NS with quark cores consisting of the CFL phase only in their deep interiors. ### 3.2 Cooling of the 2SC phase This phase is probably more relevant for QCNS than for QS, since the CFL phase is energetically favorable in the latter case. The 2SC phase is characterized by large gaps, $`\mathrm{\Delta }\sim 100`$ MeV (Alford et al., 1998; Rapp et al., 1998, 1999). To be specific we suppose that the blue and green $`u`$\- and $`d`$-quarks are paired, whereas the red $`u`$\- and $`d`$-quarks ($`u_r,d_r`$) remain unpaired. This has the consequence that the QDU processes on the red (unpaired) quarks, as $`d_r\to u_re\overline{\nu }`$, as well as QMU, $`d_rq_r\to u_rq_re\overline{\nu }`$, and QB, $`q_{1r}q_{2r}\to q_{1r}q_{2r}\overline{\nu }\nu `$, are not blocked by the gaps, whereas other processes involving paired quarks are blocked out by the large diquark gaps. The QDU process on red quarks occurs in the $`Y_e>Y_{ec}`$ case only. Its emissivity is given by $`ϵ_\nu ^{\mathrm{QDU}}(d_r)`$ $`\simeq `$ $`10^{25}\alpha _s(\rho _b/\rho _0)Y_e^{1/3}T_9^6`$ (15) $`\mathrm{erg}\mathrm{cm}^{-3}\mathrm{sec}^{-1}.`$ The extra suppression factor relative to the rate (1) comes from the fact that the number of available unpaired color states is reduced. QMU and QB processes on red quarks are also rather efficient. Although there is no one-gluon exchange between $`d_rd_r`$, the QMU and QB processes may go via a residual quark-quark interaction, e.g., via two-gluon exchange. 
We roughly estimate the corresponding emissivities as $`ϵ_\nu ^{\mathrm{QMU}}(d_rq_r)`$ $`\sim `$ $`ϵ_\nu ^{\mathrm{QB}}(q_{1r}q_{2r})`$ (16) $`\sim `$ $`10^{19}T_9^8\text{erg}\text{cm}^{-3}\text{sec}^{-1}.`$ In the 2SC ($`Y_e>Y_{ec}`$) phase the QDU process on red quarks is the dominant process, and the QMU and QB processes on red quarks are subdominant, whereas in the 2SC ($`Y_e<Y_{ec}`$) phase QDU processes do not occur and the QMU and QB processes on red quarks become the dominant ones. Other processes, like QDU, QMU and QB with the participation of quarks of other colors and flavors, remain appreciably suppressed by the large gaps. The specific heat is also changed in the 2SC phase, since the $`d_r`$ and $`u_r`$ contributions are not suppressed by a factor $`\text{exp}(-\mathrm{\Delta }/T)`$, whereas the color-paired $`ud`$-contributions remain suppressed. With these findings we calculate the cooling history of QCNS and QS. The results are presented in Fig. 5 for $`Y_e=10^{-5}`$, $`\rho =3\rho _0`$ (thick lines), and $`Y_e=0`$, $`\rho =5\rho _0`$ (thin lines). We see that in both cases the cooling history of QCNS, and also of QS with a tiny crust ($`T_s=5\times 10^{-2}T`$), nicely agrees with the X-ray data. The cooling of QS with negligible crust does not agree with the data. ## 4 Conclusions We have estimated the contributions of various quark processes to the emissivity. Among them, the new decay process of the massive mixed photon-gluon excitation operates at the early stage of the cooling, and the QDU, QMU and QB processes on red quarks determine the cooling of the 2SC phase. 
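As a quick numerical recap of the hierarchy described in §3.2, eqs. (15) and (16) can be compared directly; the parameter choices ($`\alpha _s=1`$, $`\rho _b/\rho _0=3`$, $`Y_e=10^{-5}`$) are the representative values used in the text, and the prefactors are taken at face value from the two estimates.

```python
def eps_qdu_red(T9, alpha_s=1.0, rho_ratio=3.0, Ye=1e-5):
    """QDU on unpaired red quarks, eq. (15), in erg cm^-3 s^-1."""
    return 1e25 * alpha_s * rho_ratio * Ye ** (1.0 / 3.0) * T9 ** 6

def eps_qmu_red(T9):
    """QMU/QB on red quarks via two-gluon exchange, eq. (16)."""
    return 1e19 * T9 ** 8

for T9 in (0.01, 0.1, 1.0):
    print(T9, eps_qdu_red(T9) / eps_qmu_red(T9))
# The ratio scales as 1/T9^2 and stays >> 1 for all temperatures of interest,
# so QDU on red quarks dominates in the 2SC (Y_e > Y_ec) phase.
```

In the $`Y_e<Y_{ec}`$ case eq. (15) is switched off and eq. (16) takes over as the dominant rate, exactly as stated above.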
We discussed the cooling history of QS and QCNS taking into account different possibilities: $`Y_e>Y_{ec}`$ and $`Y_e<Y_{ec}`$, the normal quark phase, and various color superconducting phases, such as the “$`uds`$-phase with small gaps” suggested by Bailin and Love (1984), the CFL phase, and the 2SC phase, as suggested in recent works (Alford et al., 1998; Rapp et al., 1998; Schäfer, 1998; Alford et al., 1999; Rapp et al., 1999). In all the cases we see that QS and QCNS in the CFL phase cool down extremely fast, in disagreement with the known X-ray data. Also the cooling curves for the case of small gaps ($`\mathrm{\Delta }=0.1\mathrm{\dots }1`$ MeV) disagree with the data. Even if the CFL phase were realised only in the deep interior region of a NS, it would be problematic to satisfy the X-ray data. In this case the star would radiate mostly not from the surface but from the CFL region, due to its extremely small specific heat related to the Goldstone excitation. Thus the cooling time would be determined by the heat transport from the exterior regions to the center rather than by the cooling of the hadronic shell. The cooling history of QS and QCNS with a crust, being in the normal state, agrees with the data. In this respect the following remark is in order. It is now believed that quark matter below $`T_c\sim 50`$ MeV is in the color superconducting state characterized by a diquark condensate with large energy gaps ($`\mathrm{\Delta }\sim 100`$ MeV), rather than being in the normal state or in the superfluid state characterized by small gaps ($`\mathrm{\Delta }\stackrel{<}{_{}}1`$ MeV). If so, one could think that our above discussion of normal quark matter and of the case of a small gap is of purely pedagogical interest. However, this is not the case. 
Indeed, besides the idea of abnormal strange nuclei and strange stars (Bodmer, 1971; Witten, 1984; de Rujula and Glashow, 1984) there is the very similar idea of abnormal pion condensate nuclei and of stars with pion condensate nuclei, see (Migdal, 1971; Voskresensky, 1977) and the review (Migdal et al., 1990), chapters 15, 16. The same applies to kaon condensate objects. Pion condensate systems cool down at about the same rate as given by QDU processes (for $`Y_e\sim 10^{-5}`$) (one should bear in mind a strong suppression of the emissivities of pion condensate processes due to nucleon-nucleon correlation effects (Blaschke et al., 1995) and an enhancement of the specific heat (Voskresensky and Senatorov, 1984, 1986; Migdal et al., 1990), which are often ignored in cooling simulations). Besides, they can be in the normal state or in the superfluid state characterized by very small gaps $`\mathrm{\Delta }\stackrel{<}{_{}}0.1`$ MeV. The cooling history of systems in the normal state is described by the thick solid curves in the lower panels of Figs. 1–3. Thus we may also conclude that the hypothesis of pion condensate nuclei-stars (in the normal $`\mathrm{\Delta }=0`$ state with a crust) does not contradict the X-ray observations. Stars with pion and kaon condensate nuclei in the superfluid state with gaps $`\mathrm{\Delta }\stackrel{>}{_{}}0.1`$ MeV are ruled out as the objects observed in X-rays. The cooling history of the 2SC phase of QCNS and QS with a tiny crust ($`T_s=5\times 10^{-2}T`$) agrees with the X-ray data. The cooling history of QS with no crust disagrees with the X-ray data. Three final remarks are in order: (i) It is conceivable that there are more complex collective effects which substantially affect the specific heat and the luminosity. 
E.g., we calculated the mixed photon-gluon spectrum in a simplified model of two Abelian gauge fields and concluded that the mass of the excitation is large, whereas one cannot exclude that in the realistic non-Abelian case there exists a photon-gluon excitation of small mass that could lead to very efficient cooling via the mixed photon-gluon decay process given by eq. (13). The masses of the hadronic quasi-Goldstone modes in the CFL phase should be carefully studied. (ii) As we mentioned above in the discussion of the DU process, we have neglected the contribution of strange quarks (QDU-s) relative to that of the light quarks. The former contribution is about $`10^3`$ times smaller than the latter in the case of normal matter, and in the case when all diquark gaps are identical the discussion given for the CFL phase applies. However, if the pairing gaps for strange diquarks are smaller than those for nonstrange diquarks, the QDU-s contribution to the emissivity can be substantially enhanced. Up to now there exist only rough estimates of the values of the gaps, and this gives rise to large uncertainties in the final estimates of the emissivity. Above, in order to be specific, we made calculations considering QDU on $`u`$\- and $`d`$-quarks only. The inclusion of QDU-s in the case when the strange diquark gaps can be smaller than those for non-strange diquarks is simulated by varying the values of the gaps in a wide interval. This does not change the qualitative picture of the QS and QCNS cooling we have discussed in the present work. (iii) We also would like to point out that, if the compact object formed in the explosion of SN 1987A was a QS or a QCNS in the CFL phase ($`Y_e<Y_{ec}`$), it is now so cold that it is already impossible to observe it in soft X-rays. This becomes particularly interesting if continued observations of SN 1987A were to find a pulsar that is not observed in X-rays. We acknowledge the important remarks and fruitful discussions by M. Alford and K. 
Rajagopal after reading the draft of the paper. We also thank B. Friman, J. E. Horvath, E. Kolomeitsev, R. Pisarski, D. Rischke, A. Sedrakian, and F. Weber for their discussions. One of us (DNV) is grateful for the hospitality extended to him during the visit at Rostock University and acknowledges financial support from the Max-Planck-Gesellschaft.
no-problem/9908/cond-mat9908072.html
ar5iv
text
# Nonlinear Dynamics of Nuclear–Electronic Spin Processes in Ferromagnets ## Abstract Spin dynamics is considered in ferromagnets consisting of electron and nuclear subsystems interacting with each other through hyperfine forces. In addition, the ferromagnetic sample is coupled with a resonance electric circuit. Under these conditions, spin relaxation from a strongly nonequilibrium initial state displays several peculiarities absent in the standard set–up for studying spin relaxation. The main feature of the nonlinear spin dynamics considered in this communication is the appearance of ultrafast coherent relaxation, with characteristic relaxation times several orders of magnitude shorter than the transverse relaxation time $`T_2`$. This type of coherent spin relaxation can be used for extracting additional information on the intrinsic properties of ferromagnetic materials and can also be employed in various technical applications. I. INTRODUCTION Spin systems can exhibit rather nontrivial dynamics when the magnetic sample is prepared in a strongly nonequilibrium state and, in addition, is coupled with a resonance electric circuit. Due to the resonator feedback field, a coherent motion of spins can develop, resulting in their ultrafast relaxation. However, the feedback field can organize coherent relaxation only when some initial mechanism triggers the process. A simple case could be the application of an external pulse at the initial time. If this pulse is sufficiently strong, the spin dynamics can be described by the Bloch equations. A more difficult, but interesting, situation is when no external pulse starts the process, but the latter develops in a self–organized way due to local spin fluctuations caused by spin interactions. In such a case, the Bloch equations are not appropriate and one has to resort to microscopic models. 
A theory of coherent spin relaxation in a system of nuclear spins inside a paramagnetic matrix has been developed, based on a microscopic Hamiltonian with dipole interactions between nuclei. In the present paper we generalize this theory to include ferromagnetic materials. Since ferrimagnets are often described as ferromagnets with an effective magnetization, our approach is applicable to ferrimagnets as well. In this way, our aim is to suggest a general microscopic theory valid for a wide class of materials, including those having long–range magnetic order. II. THEORY We consider a magnet consisting of electronic and nuclear subsystems interacting with each other by hyperfine forces. The system of electrons possesses long–range ferromagnetic order. The subsystem of nuclear spins is prepared in a strongly nonequilibrium state, which can be done by dynamic nuclear polarization techniques. The sample is inserted into a coil connected with a resonance electric circuit. The general Hamiltonian describing a wide class of magnetic materials can be taken in the form $$\widehat{H}=\widehat{H}_e+\widehat{H}_n+\widehat{H}_{int},$$ (1) in which $$\widehat{H}_e=-\frac{1}{2}\underset{ij}{\sum }J_{ij}𝐒_i\cdot 𝐒_j-\mu _e\underset{i}{\sum }𝐁\cdot 𝐒_i$$ (2) is the Hamiltonian of electron spins, $$\widehat{H}_n=\frac{1}{2}\underset{ij}{\sum }\underset{\alpha \beta }{\sum }C_{ij}^{\alpha \beta }I_i^\alpha I_j^\beta -\mu _n\underset{i}{\sum }𝐁\cdot 𝐈_i$$ (3) is the nuclear spin Hamiltonian, and $$\widehat{H}_{int}=A\underset{i}{\sum }𝐒_i\cdot 𝐈_i+\frac{1}{2}\underset{ij}{\sum }\underset{\alpha \beta }{\sum }A_{ij}^{\alpha \beta }S_i^\alpha I_j^\beta $$ (4) is the term corresponding to hyperfine interactions. 
Here $`J_{ij}`$ is an exchange interaction; $`\mu _e=g_e\mu _B`$, with $`g_e`$ being the electron gyromagnetic ratio and $`\mu _B`$ the Bohr magneton; the nuclear dipole interactions $`C_{ij}^{\alpha \beta }=\mu _n^2\left(\delta _{\alpha \beta }-3n_{ij}^\alpha n_{ij}^\beta \right)/r_{ij}^3`$ contain $`\mu _n=g_n\mu _N`$, with $`g_n`$ being the nuclear gyromagnetic ratio and $`\mu _N`$ the nuclear magneton, and $`r_{ij}\equiv |𝐫_{ij}|,𝐧_{ij}\equiv 𝐫_{ij}/r_{ij},𝐫_{ij}\equiv 𝐫_i-𝐫_j`$; the hyperfine interactions consist of a contact part with a constant $`A`$ and of a dipole-dipole part with $`A_{ij}^{\alpha \beta }=\mu _e\mu _n\left(\delta _{\alpha \beta }-3n_{ij}^\alpha n_{ij}^\beta \right)/r_{ij}^3`$; the indices $`i`$ and $`j`$ enumerate electrons or nuclei according to the context, and $`\alpha ,\beta =x,y,z`$; $`𝐒_i`$ is an electron spin operator, while $`𝐈_j`$ is a nuclear spin operator. The total magnetic field $`𝐁`$ is the vector sum of an external magnetic field $`H_0`$ in the $`z`$ direction and of a transverse field $`H_1=H_a+H`$ in the $`x`$ direction, consisting of an effective field $`H_a`$ of a transverse magnetocrystalline anisotropy and of a resonator feedback field $`H`$. The latter satisfies the Kirchhoff equation $$\frac{dH}{dt}+2\gamma _3H+\omega ^2\int _0^tH(\tau )d\tau =-4\pi \eta \frac{dM_x}{dt},$$ (5) in which $`\eta `$ is a filling factor; $`\omega `$ is the resonator natural frequency; $`\gamma _3\equiv \omega /2Q`$ is the resonator ringing width; $`Q`$ is the quality factor; and $`M_x=\frac{1}{V}\underset{i}{\sum }(\mu _e<S_i^x>+\mu _n<I_i^x>)`$ is the transverse magnetization, where the angle brackets mean statistical averaging. 
Employing the Heisenberg equations of motion, we derive the time evolution equations for the following averages, related to the electron and nuclear spins, $$x\equiv \frac{1}{N_e}\underset{i}{\sum }<S_i^{-}>,z\equiv \frac{1}{N_e}\underset{i}{\sum }<S_i^z>,$$ $$u\equiv \frac{1}{N_n}\underset{i}{\sum }<I_i^{-}>,s\equiv \frac{1}{N_n}\underset{i}{\sum }<I_i^z>,$$ (6) where $`N_e`$ and $`N_n`$ are the numbers of electrons and nuclei, respectively, and $`S_i^{-}`$ and $`I_i^{-}`$ are the lowering ladder operators. As the transverse variables $`x`$ and $`u`$ are complex, we also need the equations of motion for either $`x^{\ast }`$ and $`u^{\ast }`$ or $`|x|`$ and $`|u|`$. In this way, we obtain seven evolution equations: three for the electron variables $`x,z`$ and $`|x|^2`$, three for the nuclear variables $`u,s`$ and $`|u|^2`$, and the feedback–field equation (5). Although this is a rather complicated system of nonlinear equations, it can be treated by using the scale separation approach. Details of this approach have been thoroughly described in Refs. \[3–5\]. Note first that if one invokes the standard semiclassical decoupling of spin correlators, assuming the translational invariance of the average spins, then some of the terms in the evolution equations become zero because of the properties of the dipolar interactions. The translational invariance of the averages is equivalent to neglecting inhomogeneous local spin fluctuations. However, the latter are crucially important for the correct description of spin dynamics \[1–3\]. The inhomogeneous spin fluctuations can be retained by treating them as random local fields. Thus we come to the stochastic semiclassical approximation \[3–5\]. Then, using the method of Laplace transforms, we may express the feedback field from Eq. (5) through the derivatives of the spin variables and employ this relation in the evolution equations for $`x,z,|x|^2`$, and $`u,s,|u|^2`$. The latter spin variables can be classified into fast and slow with respect to each other by comparing their time derivatives. 
We keep in mind the following usual inequalities: $$\left|\gamma _1/\omega _E\right|\ll 1,\left|\gamma _2/\omega _E\right|\ll 1,\left|\mathrm{\Gamma }_1/\omega _N\right|\ll 1,\left|\mathrm{\Gamma }_2/\omega _N\right|\ll 1,$$ (7) in which $`\gamma _1`$ and $`\mathrm{\Gamma }_1`$ are the longitudinal attenuations for the electron and nuclear spins, and $`\gamma _2`$ and $`\mathrm{\Gamma }_2`$ are the transverse attenuations for electron and nuclear spins, respectively, and $$\omega _E\equiv \left(\mu _eH_0-As\right)/\mathrm{\hbar },\omega _N\equiv \left(\mu _nH_0-Am\right)/\mathrm{\hbar }$$ (8) are the electron spin resonance frequency and the nuclear magnetic resonance frequency, respectively, $`m`$ being an average magnetization in the electron system. Another reasonable assumption is that the external magnetic field $`H_0`$ is stronger than the magnetocrystalline anisotropy field $`H_a`$ and that, similarly to (7), the inhomogeneous broadening, caused by local spin fluctuations, is smaller than the corresponding frequencies, so that $$\left|\alpha _e/\omega _E\right|\ll 1,\left|\alpha _n/\omega _N\right|\ll 1,\left|\gamma _{\ast }/\omega _E\right|\ll 1,\left|\mathrm{\Gamma }_{\ast }/\omega _N\right|\ll 1,$$ (9) where $`\alpha _e\equiv \mu _eH_a/\mathrm{\hbar }`$ and $`\alpha _n\equiv \mu _nH_a/\mathrm{\hbar }`$ are the anisotropy parameters and $`\gamma _{\ast }`$ and $`\mathrm{\Gamma }_{\ast }`$ are the inhomogeneous widths for electrons and nuclei, respectively. Also, because the nuclear magneton is three orders of magnitude smaller than the Bohr magneton, we have $$\left|\mu _n/\mu _e\right|\ll 1,\mathrm{\Gamma }_1/\gamma _1\ll 1,\mathrm{\Gamma }_2/\gamma _2\ll 1.$$ (10) Finally, we consider the case of a high-quality resonator, having a large quality factor, and we assume that the resonator natural frequency is tuned close to the frequency of nuclear magnetic resonance, so that $`|\mathrm{\Delta }_N/\omega _N|\ll 1`$ and $`\gamma _3/\omega \ll 1`$, where $`\mathrm{\Delta }_N=\omega -\omega _N`$. With these inequalities, we can classify all variables into fast and slow with respect to each other. 
Following the general scheme \[3–5\], we solve the equations for the fast variables, treating the slow variables as quasi–integrals of motion. Then the solutions found for the fast variables are substituted into the equations for the slow variables, and the right–hand sides of the latter equations are averaged over the periods of the fast oscillations and over the random local fields. Introducing also the change of variables $$w=|u|^2-\frac{\alpha _n^2+\mathrm{\Gamma }_{\ast }^2+\delta ^2}{\omega _N^2}s^2,\delta \equiv \frac{\sqrt{2}\pi ^2\eta \gamma _{\ast }\rho _e\mu _e\mu _n}{\omega _N}m,$$ (11) we come to the equations describing the slow nuclear spin variables $$\frac{ds}{dt}=\mathrm{\Gamma }_2gw-\mathrm{\Gamma }_1(s-\zeta ),\frac{dw}{dt}=-2\mathrm{\Gamma }_2(1+gs)w,$$ (12) where $$g\equiv \pi ^2\eta \frac{\rho _n\mu _n^2\omega _N}{\mathrm{\Gamma }_2\omega }\left(1+\frac{\rho _e\mu _eAm}{\rho _n\mu _n\omega _N}\right)$$ (13) is the parameter of the effective coupling of the nuclear spins with the resonator. For the relaxation times $`T_1\equiv \mathrm{\Gamma }_1^{-1}`$ and $`T_2\equiv \mathrm{\Gamma }_2^{-1}`$ one usually has the relation $`T_2\ll T_1`$. Therefore, for the times $`t\ll T_1`$, equations (12) can be solved analytically, giving $$s=\frac{T_2}{g\tau _0}\mathrm{tanh}\left(\frac{t-t_0}{\tau _0}\right)-\frac{1}{g},w=\left(\frac{T_2}{g\tau _0}\right)^2\mathrm{sech}^2\left(\frac{t-t_0}{\tau _0}\right),$$ (14) where $`\tau _0`$ is the collective relaxation time and $`t_0`$ is the delay time, respectively, $$\tau _0=\frac{T_2}{\sqrt{(1+gs_0)^2+g^2w_0}},t_0=\frac{\tau _0}{2}\mathrm{ln}\left|\frac{T_2-\tau _0(1+gs_0)}{T_2+\tau _0(1+gs_0)}\right|,$$ (15) with $`s_0=s(0)`$ and $`w_0=w(0)`$ defined by the initial conditions. III. CONCLUSION We analysed the obtained solutions for the parameters typical of such ferromagnetic materials as EuO, EuS, EuSe, Li<sub>x</sub>Fe<sub>3-x</sub>O<sub>4</sub>, Mn<sub>x</sub>Sb<sub>1-x</sub>, NiMnSb, NiMnSi, Co<sub>2</sub>MnSi, and Co in the fcc and hcp phases. 
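As a consistency check of eqs. (12)–(15), the slow equations can be integrated numerically with $`\mathrm{\Gamma }_1=0`$ (i.e. for times well below $`T_1`$) and compared with the analytic solution (14). The parameter values below ($`g=10^4`$, $`T_2=10^{-4}`$ s, a small fluctuation seed $`w_0=10^{-6}`$ instead of $`w_0=0`$) are illustrative order-of-magnitude choices, not fitted material constants.

```python
import math

g, T2, I = 1.0e4, 1.0e-4, 0.5       # coupling, spin-spin time (s), nuclear spin
G2 = 1.0 / T2                        # Gamma_2
s0, w0 = -I, 1.0e-6                  # inverted polarization; tiny fluctuation seed

tau0 = T2 / math.sqrt((1.0 + g * s0) ** 2 + g ** 2 * w0)           # eq. (15)
t0 = 0.5 * tau0 * math.log(abs((T2 - tau0 * (1.0 + g * s0)) /
                               (T2 + tau0 * (1.0 + g * s0))))      # eq. (15)

# forward-Euler integration of eqs. (12) with Gamma_1 = 0
s, w, t = s0, w0, 0.0
dt = tau0 / 2000.0
while t < t0 + 5.0 * tau0:
    s, w, t = (s + G2 * g * w * dt,
               w - 2.0 * G2 * (1.0 + g * s) * w * dt,
               t + dt)

s_exact = (T2 / (g * tau0)) * math.tanh((t - t0) / tau0) - 1.0 / g  # eq. (14)
print(tau0, t0, s, s_exact)
```

The numerical curve reproduces the analytic kink: the polarization stays near $`-I`$ for a delay time $`t_0`$ of a few $`\tau _0`$ and then reverses within a time of order $`\tau _0`$.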
Since ferrimagnets can often be modeled as ferromagnets with an effective magnetization, our analysis is applicable as well to ferrimagnetic materials, such as MnFe<sub>2</sub>O<sub>4</sub>. For these materials, taking as initial conditions $`s_0=-I,w_0=0`$, where $`I`$ is the nuclear spin, we obtain the relaxation time $`\tau _0\approx T_2/gI`$, where $`g\approx \pi ^2\mu _e/2\mu _n`$, and the delay time $`t_0\approx \tau _0\mathrm{ln}(2\omega _N/10^7)`$. This gives the coupling parameter $`g\sim 10^4`$, which, with $`T_2\sim 10^{-4}`$ s, yields the relaxation time $`\tau _0\sim 10^{-8}`$ s. Because $`\omega _N\sim 10^9`$ s<sup>-1</sup>, we have the delay time $`t_0\sim 5\times 10^{-8}`$ s. The described regime corresponds to ultrafast coherent spin relaxation, when during the time $`t_0+\tau _0`$ the initial strongly nonequilibrium spin polarization $`s_0=-I`$ changes to its equilibrium value $`s(t)\approx I`$ at $`t\gg t_0`$. The time $`t_0+\tau _0`$ can be four orders of magnitude shorter than the standard spin–spin relaxation time $`T_2`$. This ultrafast coherent spin relaxation can serve as an additional technique for studying the spin-spin correlations in ferromagnetic materials, complementing other known techniques, such as neutron diffraction, light scattering, and nuclear magnetic resonance. The ultrafast relaxation mechanism can also find application in the important problem of fast repolarization of solid–state targets used in scattering experiments, as well as in fast switching devices in electronics and computing. ACKNOWLEDGEMENT Financial support from the University of Western Ontario and NSERC of Canada is appreciated.
no-problem/9908/cond-mat9908458.html
ar5iv
text
# Excitation Spectrum for Quantum Spin Systems with Ladder, Plaquette and Mixed-Spin Structures ## I Introduction Two-dimensional (2D) antiferromagnetic quantum spin systems with a spin gap provide a new interesting paradigm for quantum phase transitions. A typical example is the spin plaquette system such as $`\mathrm{CaV}_4\mathrm{O}_9`$, which may be described by the 2D Heisenberg model with a plaquette structure. Another interesting example found recently is the 2D spin system composed of orthogonal spin dimers, such as $`\mathrm{SrCu}_2(\mathrm{BO}_3)_2`$, which may be described by the 2D Heisenberg model on a square lattice with some diagonal exchange couplings. In these 2D spin systems, the plaquette or dimer structure is essential to stabilize the non-magnetic phase with the spin gap. Concerning the spin gap formation, the topological nature of the spins is also important in low-dimensional systems. In this context, mixed-spin systems have attracted considerable attention recently, in which the spatial arrangement of different spins plays a crucial role in generating the spin gap or inducing an antiferromagnetic long-range order. For instance, see the references for experiments and theories in the 1D case. In the previous paper, we investigated the ground state quantities for the 2D spin systems with ladder, plaquette and mixed-spin structures, by extending the works based on the series expansion methods. Although the quantum phase transitions have been described qualitatively well, it has turned out that the obtained results lead to unsatisfactory estimates for the phase boundary in some regions of the phase diagram. A more crucial problem is to what extent our series expansion correctly captures the lattice structure and/or the topological properties of the spins, since our expansion approach has relied on low-order expansions in the coupling constants. 
Not only to resolve this problem but also to confirm that our approach is reliable, it is desirable to produce more accurate results by improving the series expansions, and also to study other quantities besides the ground-state ones. The purpose of this work is to study the excitation spectrum for the 2D quantum spin systems with the above-mentioned structures, and to clarify the role of the competing interactions in the disordered phase with the spin gap. We shall see that the excitation spectrum, calculated to higher orders than the ground-state susceptibility, improves the phase diagram, and at the same time confirms that our series-expansion approach indeed provides reliable results. The paper is organized as follows. In §2, we briefly summarize how to apply the series expansion techniques to our systems. By performing the series expansion for the excited states and employing the asymptotic analysis of power-series expansions in §3, we obtain the dispersion relations and the phase diagram. We then discuss how the spin gap phase competes with the magnetically ordered phase for the three kinds of 2D quantum spin systems mentioned above. The last section is devoted to a brief summary. ## II Series Expansion Methods We begin by briefly summarizing the series expansion method. We employ here the cluster expansion around a given strong-coupling spin singlet state. Let us explain the idea taking the dimer expansion as an example. First, the 2D Hamiltonian is divided into two parts: $`H=H_0+H_1`$. The unperturbed Hamiltonian $`H_0`$ is composed of an assembly of isolated singlet dimers which are formed by the strong antiferromagnetic bonds. Namely, our starting configuration for the perturbation is the disordered ground state with the spin gap. We then introduce the interaction term $`H_1`$ among the independent dimers and observe how the physical quantities are changed by exploiting the power series expansion with respect to $`H_1`$. 
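The dimer-expansion setup can be made concrete on the smallest cluster: two rung dimers of a spin-1/2 ladder (four sites), with rung coupling $`J_1=1`$ forming $`H_0`$ and leg coupling $`\lambda `$ forming $`H_1`$. Exact diagonalization of the resulting 16×16 problem shows the unperturbed product of rung singlets at $`\lambda =0`$ and the $`O(\lambda ^2)`$ correction that the cluster expansion organizes order by order. This is an illustrative sketch, not the production series-expansion code used in the paper.

```python
import numpy as np

# spin-1/2 operators
sx = np.array([[0.0, 0.5], [0.5, 0.0]])
sy = np.array([[0.0, -0.5j], [0.5j, 0.0]])
sz = np.array([[0.5, 0.0], [0.0, -0.5]])
id2 = np.eye(2)

def site_op(op1, site, n=4):
    """Embed a single-site operator at position `site` of an n-site cluster."""
    mats = [id2] * n
    mats[site] = op1
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def hamiltonian(lam):
    """Two rung dimers (sites 0-1 and 2-3, J1 = 1) coupled by legs of strength lam."""
    bonds = [(0, 1, 1.0), (2, 3, 1.0),    # strong rungs: H_0
             (0, 2, lam), (1, 3, lam)]    # weak legs:    H_1
    H = np.zeros((16, 16), dtype=complex)
    for i, j, J in bonds:
        for s in (sx, sy, sz):
            H += J * site_op(s, i) @ site_op(s, j)
    return H

def e0(lam):
    return np.linalg.eigvalsh(hamiltonian(lam))[0]

print(e0(0.0))                 # two independent rung singlets: 2 * (-3/4)
for lam in (0.1, 0.2):
    print(lam, e0(lam) + 1.5)  # leading correction is O(lam^2): singlets
                               # acquire no first-order energy shift
```

The correction to the singlet-product energy grows quadratically in $`\lambda `$, which is exactly the structure the cluster expansion exploits when it generates the coefficients of the perturbation series on larger and larger clusters.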
The advantage of this cluster expansion is that we can combine analytical and numerical techniques in a complementary way. For example, the computer can be utilized to systematically generate the higher order terms from the lower ones. We exploit the cluster expansions which may be most appropriate for each system with different structures. To discuss how the introduction of $`H_1`$ perturbs the disordered state with the spin gap and enhances the antiferromagnetic correlation, we calculate the dispersion relation $`E(𝐤)`$ for 2D spin systems with various structures. This quantity is expanded as a power series in $`\lambda `$ as $`E(𝐤)`$ $`=`$ $`{\displaystyle \underset{l,m,n}{\sum }}a_{lmn}\mathrm{cos}(lk_x+mk_y)\lambda ^n,`$ (1) where the wave number is denoted by $`𝐤=(k_x,k_y)`$ and the Brillouin zone for each model will be defined in the following section. We shall calculate the dispersion relation up to the eighth order in $`\lambda `$ for the ladder-structure system and the fifth order for both the plaquette system and the mixed-spin system. To estimate the minimum value of the dispersion relation in the first Brillouin zone, we also expand the spin gap $`\mathrm{\Delta }`$ up to the same order. Since we are not able to analyze the critical phenomena with the obtained power series alone, further asymptotic analyses are necessary to discuss the phase transitions. To this end, we make use of the Padé approximants and the differential methods to estimate the critical point for the phase transition, the dispersion relation, etc. In particular, the critical point between the magnetically ordered and disordered phases is estimated not only by the ordinary Dlog Padé approximants but also by the biased Padé approximants. In the biased method we assume that the phase transition in our 2D quantum spin model should belong to the universality class of the 3D classical Heisenberg model. 
Namely, the critical value $`\lambda _c`$ for the perturbation parameter is determined by the formula $`\mathrm{\Delta }\propto (\lambda _c-\lambda )^\nu `$ with the known exponent $`\nu =0.71`$ around the transition point. We also apply the first-order inhomogeneous differential method to the power series to obtain the dispersion relation. It should be noted here that since higher-order coefficients in the series expansions are necessary to deduce the dispersion relation in this method correctly, we might sometimes be left with wrong values at certain wave numbers after applying the asymptotic analysis. It is known that this type of pathology occasionally happens in these asymptotic approximations. If we carefully discard this spurious behavior to find the correct one, these analyses provide a fairly good approximation in many cases, which will be explicitly shown in each case treated below. ## III Excitation Spectrum and Phase Diagram Let us now introduce the 2D antiferromagnetic quantum spin system defined by the Heisenberg Hamiltonian, $`H`$ $`=`$ $`H_0+H_1,`$ (2) $`H_0`$ $`=`$ $`J_1{\displaystyle \underset{(i,j)\in G_1}{\sum }}𝐒_i𝐒_j,`$ (3) $`H_1`$ $`=`$ $`J_2{\displaystyle \underset{(i,j)\in G_2}{\sum }}𝐒_i𝐒_j+J_3{\displaystyle \underset{(i,j)\in G_3}{\sum }}𝐒_i𝐒_j,`$ (4) where $`J_1`$, $`J_2`$ and $`J_3`$ denote the antiferromagnetic coupling constants, and $`𝐒_j`$ is the spin operator at the $`j`$-th site. To treat the mixed-spin systems as well as the ladder and plaquette systems, the spin $`𝐒_j`$ is allowed to take different values at each site. We denote the bonds $`(i,j)`$ for the non-perturbed Hamiltonian as $`G_1`$, while those for the perturbed parts as $`G_2`$ and $`G_3`$. By appropriately choosing the set of ($`G_1`$, $`G_2`$, $`G_3`$), we can deal with the 2D systems with various structures by the series expansion techniques. We treat below the case of $`\lambda (\equiv J_2/J_1)<1`$ and $`\alpha \lambda (\equiv J_3/J_1)<1`$ $`(0<\alpha <1)`$. 
In the following, the excitation spectrum is analyzed to discuss the quantum phase transitions for the 2D antiferromagnetic spin systems with ladder, plaquette and mixed-spin structures. We carry out the dimer expansion, the plaquette expansion and the mixed-spin cluster expansion. Starting with the above strong-coupling spin singlet states, we can perform the cluster expansion with respect to $`\lambda `$ and $`\alpha \lambda `$. ### A Dimer expansion for ladder-structure systems We first discuss a 2D spin system with the ladder structure, which is shown schematically in Fig. 1, where the bold, the thin and the dashed lines represent the coupling constants $`1`$, $`\lambda _\mathrm{L}`$ and $`\alpha _\mathrm{L}\lambda _\mathrm{L}`$, respectively. It is noted here that the spin ladder system ($`\alpha _\mathrm{L}=0`$) was already studied in detail by the cluster expansion. By changing $`\alpha _\mathrm{L}`$, we can see how the isolated 2-leg ladder $`(\alpha _\mathrm{L}=0)`$ is changed to the 2D system. We calculate the energy for spin-triplet excitations by means of the dimer expansion up to the eighth order in $`\lambda _\mathrm{L}`$ for various values of $`\alpha _\mathrm{L}`$. Note that the Brillouin zone is reduced to half of the original one because the dimer singlet is composed of two spins in the $`y`$-direction. By applying the first-order inhomogeneous differential method to the power series computed above, we obtain the spin-triplet excitation spectrum shown in Fig. 2 in the case of $`\lambda _\mathrm{L}=0.5`$. When $`\alpha _\mathrm{L}=0`$, the system is reduced to the isolated 2-leg ladders with the inter-leg (intra-leg) coupling constant $`1(\lambda _\mathrm{L}=0.5)`$, which is known to have the disordered ground state with the spin gap. This gives rise to the flat dispersion between $`(\pi ,0)`$ and $`(\pi ,\pi /2)`$. 
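Dispersion series of the form (1) are straightforward to evaluate once a coefficient table is given. The short sketch below is purely illustrative — the two-term table `toy` is hypothetical and is not one of the coefficient tables computed in this work.

```python
from math import cos, pi

def dispersion(kx, ky, lam, coeffs):
    """Evaluate Eq. (1): E(k) = sum_{l,m,n} a_{lmn} cos(l*kx + m*ky) * lam**n,
    given a coefficient table {(l, m, n): a_lmn}."""
    return sum(a * cos(l * kx + m * ky) * lam ** n
               for (l, m, n), a in coeffs.items())

# Hypothetical two-term table, for illustration only:
toy = {(0, 0, 0): 1.0, (1, 0, 1): 0.5}
# At k = (pi, 0) with lam = 0.5: E = 1 + 0.5*cos(pi)*0.5 = 0.75
E = dispersion(pi, 0.0, 0.5, toy)
```

Scanning such a function over the reduced Brillouin zone locates the dispersion minimum, i.e. the spin gap.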
The computed coefficients for the spin gap $`\mathrm{\Delta }=E(\pi ,0)`$ in the series of $`\lambda _\mathrm{L}`$ are tabulated for some particular values of $`\alpha _\mathrm{L}`$ in Table I. Note that for the isolated ladder case $`(\alpha _\mathrm{L}=0)`$, our results correspond to those obtained previously by Oitmaa et al. The obtained spin gap with a fixed $`\lambda _\mathrm{L}`$ is shown in Fig. 3. It is seen that the spin gap decreases with the increase of the inter-ladder coupling $`\alpha _\mathrm{L}`$ and finally vanishes, at which point the quantum phase transition to the antiferromagnetically ordered state occurs. We wish to mention that the order in our cluster expansion is not high enough to deduce the accurate dispersion for $`\alpha _\mathrm{L}`$ close to the transition point within the first-order inhomogeneous differential approximation, as seen in Fig. 3. It thus seems difficult to deduce the critical point $`\alpha _c`$ correctly. However, as far as the critical value is concerned, we can use an alternative analysis based on the Padé approximants, which provides a rather accurate estimate for $`\alpha _c`$, by assuming $`\mathrm{\Delta }\propto (\alpha _c-\alpha )^\nu `$ near the critical point. By employing the latter analysis complementarily around the critical point $`\alpha _c`$, we have obtained the corrected spin gap as a function of the inter-ladder coupling $`\alpha _\mathrm{L}`$, which is shown as the solid line in Fig. 3. We also show the phase diagram for the coupled-ladder system in Fig. 4. The solid line represents the phase boundary obtained by the biased \[4/3\] Padé approximants for the spin gap, and the dashed line is the boundary determined previously by the staggered susceptibility. It is remarkable that the present result is in fairly good agreement with the QMC simulations, and considerably improves the previous one especially in the region with small $`\alpha _\mathrm{L}`$. 
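The biased analysis can be sketched in a few lines of code: assuming a singularity of the form $`\mathrm{\Delta }\propto (\alpha _c-\alpha )^\nu `$, the series of $`\mathrm{\Delta }^{1/\nu }`$ (built term by term with J. C. P. Miller's recurrence for powers of a formal power series) vanishes linearly at $`\alpha _c`$, so its zero estimates the critical coupling. The sketch below runs on a synthetic test series, $`\mathrm{\Delta }(\alpha )=(1-\alpha )^\nu `$, not on the tabulated coefficients of this paper.

```python
def series_pow(c, p, order):
    """Coefficients of f = g**p for a power series g (c[0] != 0), using
    Miller's recurrence: n*c0*f_n = sum_{k>=1} ((p+1)*k - n) * c_k * f_{n-k}."""
    f = [c[0] ** p]
    for n in range(1, order + 1):
        s = sum(((p + 1) * k - n) * c[k] * f[n - k]
                for k in range(1, n + 1) if k < len(c))
        f.append(s / (n * c[0]))
    return f

def zero_of_series(f, lo=0.0, hi=2.0, steps=60):
    """Bisect for a zero of the truncated polynomial sum_n f[n] * x**n."""
    poly = lambda x: sum(a * x ** n for n, a in enumerate(f))
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if poly(lo) * poly(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

nu = 0.71                                # 3D Heisenberg exponent used in the text
gap = series_pow([1.0, -1.0], nu, 8)     # synthetic gap series (1 - alpha)**nu
# Biased step: gap**(1/nu) = 1 - alpha exactly, so its zero is alpha_c = 1.
alpha_c = zero_of_series(series_pow(gap, 1.0 / nu, 8))
```

In practice the unbiased Dlog Padé variant would instead estimate $`\alpha _c`$ and $`\nu `$ simultaneously from the pole and residue of a Padé approximant to $`\mathrm{\Delta }^{}/\mathrm{\Delta }`$.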
The above analysis may not be sufficient to discuss the 2D ladder-structure systems generically, because the parameter region treated by the cluster expansion is restricted. To make a complementary analysis, we next regard the present system as the coupled-dimer chains, for which the bold, the thin and the dashed lines in Fig. 1 represent the coupling constants $`1`$, $`\alpha _\mathrm{D}\lambda _\mathrm{D}`$ and $`\lambda _\mathrm{D}`$. We carry out the similar calculation up to the eighth order in $`\alpha _\mathrm{D}`$, and list the resulting power series for several values $`\alpha _\mathrm{D}`$ in Table II. Applying the Padé approximants to the computed spin gap, we obtain the phase diagram shown in Fig. 5. In this figure, the solid line represents the phase boundary obtained by the biased \[4/3\] Padé approximants, and the dashed line is the one obtained previously by the staggered susceptibility. We find that these two boundaries are in fairly good agreement with each other, and furthermore consistent with those of the QMC simulations (the solid circles in Fig. 5). By these comparisons, we can say that our cluster expansion approach gives quite accurate results for the phase diagram. ### B Plaquette expansion In the following, we consider the plaquette-structure systems. Introducing the spin systems with two kinds of the plaquette structures, we discuss the quantum phase transitions between the ordered and the disordered states. We note here that series expansion studies on plaquette systems have been done extensively by several groups so far, which we shall also compare with our results in some special cases. #### 1 plaquettes on a square lattice First, we treat the plaquettes on a square lattice shown in Fig. 6. The starting Hamiltonian $`H_0`$ is composed of the isolated plaquettes, whose ground state is spin singlet with the excitation gap $`\mathrm{\Delta }=1`$. 
We study how the antiferromagnetic correlation develops in the presence of the inter-plaquette interactions $`\lambda `$ and $`\alpha \lambda `$. In the previous paper, we calculated the staggered susceptibility up to the fourth order and determined the phase boundary between the magnetically ordered and the disordered states (see the dashed line in Fig. 9). We here calculate the dispersion for spin excitations up to the fifth order, and list the obtained series of the spin gap $`\mathrm{\Delta }=E(0,0)`$ for several values of $`\alpha `$ in Table III. Using the first-order inhomogeneous differential methods, we obtain the dispersion relation shown in Fig. 7. Note that the Brillouin zone is reduced to a quarter of the original one due to the plaquette structure. We also show the spin gap $`\mathrm{\Delta }=E(0,0)`$ as a function of $`\alpha `$ in Fig. 8. As the inter-plaquette coupling $`\alpha `$ is increased, the antiferromagnetic correlation grows, which causes the decrease in the excitation gap, and finally induces the quantum phase transition to the antiferromagnetic phase with vanishing spin gap. In the case $`(\alpha ,\lambda )=(0,1)`$, our model is reduced to the independent isotropic two-leg ladders, for which the spin gap $`\mathrm{\Delta }=0.43`$ is obtained. This value is slightly smaller than $`\mathrm{\Delta }=0.504`$ (density matrix renormalization group) and 0.5028 (dimer expansion), which implies that higher-order cluster expansions may be necessary to obtain more accurate values of the spin gap for the plaquette system. In contrast, it is shown below that the phase diagram can be obtained with much higher accuracy. By applying Padé approximants to the power series of the spin gap, we obtain the phase diagram in Fig. 9. It is remarkable that the phase boundary given in this paper considerably improves the previous one in the small $`\alpha `$ regime, which can be confirmed by the result of the QMC simulations (the dot shown in the figure). 
We note here that for the special case of $`\alpha =1`$, similar results were previously reported by Fukumoto et al. and Weihong et al. #### 2 plaquettes on a 1/5 depleted square lattice We next deal with the plaquette system shown in Fig. 10, which may be regarded as a 1/5 depleted square lattice, by extending the work done by Gelfand et al. This system can also be considered to be made of plaquette chains, since the model with $`\alpha =0`$ and finite $`\lambda `$ is reduced to the isolated plaquette chains. The results for the cluster expansion of the spin gap up to the fifth order are tabulated in Table IV for several values of $`\alpha `$. The resulting value of $`\mathrm{\Delta }`$ deduced by the Padé analysis is shown in Fig. 11 as a function of $`\alpha `$. In the case of $`\alpha =0`$, the model is reduced to the isolated plaquette chains with the spin gap. In this case, by applying the differential methods to the power series, the spin gap is estimated as $`\mathrm{\Delta }=0.607\pm 0.001`$ for $`\lambda =1`$, which is in good agreement with the result of the exact diagonalization, $`\mathrm{\Delta }=0.6086`$. To observe the phase transition when the coupling $`\alpha `$ between the plaquette chains is increased, the phase diagram is shown in Fig. 12. Here, the phase boundary (solid line) is determined by applying the biased \[2/3\] Padé approximants to the spin gap. We find that this line is quite consistent with the previous results shown as the dashed line. The fact that the two lines evaluated for different quantities in different orders produce quite similar behavior confirms that the obtained boundary is indeed reliable, although our calculation is restricted to the lower-order expansions. We also note that the results already obtained by QMC and also by the plaquette expansion in the case of $`\alpha =1`$ are in good agreement with the present one. 
### C Mixed-spin cluster expansion Let us now turn to another interesting 2D system composed of two kinds of different spins, which has attracted much attention recently. In this mixed-spin system, the topological nature of spins is important for the system to generate the spin gap or induce an antiferromagnetic long-range order. In this subsection, we extend the previous calculations to those of the excited states, and quantitatively discuss the phase transition in 2D mixed-spin systems. We will clarify that the arrangement of different spins affects the nature of the quantum phase transitions from the spin-gap phase to the antiferromagnetic phase. We shall also check that our series expansion approach correctly captures the spin structure though our calculation is based on the lower-order perturbations. We deal with two typical systems composed of $`s=1/2`$ and $`1`$, as displayed in Figs. 13 and 14. #### 1 columnar-type mixed-spin system We begin with the columnar-type mixed-spin system, for which the mixed-spin chains are stacked uniformly in the vertical direction (Fig. 13). Starting from the mixed-spin clusters of $`1/2-1-1/2`$, we perform the series expansion with respect to $`\lambda `$. Note that the Brillouin zone is reduced to a third of the original one since the mixed-spin cluster is composed of three spins in the $`x`$-direction. We list the power series obtained up to the fifth order for the excitation spectrum in Table V. It is noted that in the case of the isolated mixed-spin chain ($`\alpha =0`$), these coefficients are the same as those for the plaquette chain (see Fig. 10), and thus the isotropic mixed-spin chain with $`\lambda =1`$ has the same spin gap $`\mathrm{\Delta }=0.607`$. We can indeed prove that the mixed-spin chain is identical to the plaquette chain as far as the ground state and the low-energy elementary excitation are concerned. Using the first-order inhomogeneous differential methods, we obtain the dispersion relation shown in Fig. 
15. We recall that the mixed-spin system in the case of $`\alpha =0`$ is reduced to the mixed-spin chain with the spin gap defined at the wave number $`𝐤=(\pi /3,\pi )`$. Increasing the inter-chain coupling $`\alpha `$, we can see that the spin gap decreases as the magnetic correlation grows up, and finally vanishes at which the phase transition to the magnetically ordered phase takes place. In Fig. 16, the phase boundary is shown by the left solid line, which is obtained by the biased \[2/3\] Padé approximants for the spin gap. In comparison, we also display the previous result by the left dashed line, which was determined by the staggered susceptibility in the fourth-order expansion. As has been the case for the plaquette systems, we can see again that the phase boundaries which were determined via the different physical quantities are consistent with each other. This demonstrates that the reliable phase boundary is established by the present analysis. #### 2 diagonal-type mixed-spin system We next discuss the diagonal-type mixed-spin system shown in Fig. 14, for which the mixed-spin chains are stacked diagonally. According to this structure, the shape of the Brillouin zone for the diagonal system is quite different from those for the columnar one as shown in Fig. 17. The definition of the coupling constants is the same as that in Fig. 13. The mixed-spin cluster expansion for the excited states up to the fifth order with the asymptotic analysis yields the phase diagram and the dispersion relations shown in Figs. 16 and 18, respectively. In Fig. 16, the right solid line represents the phase boundary determined by the biased \[3/2\] Padé approximants for the spin gap . The resulting series for some particular values of $`\alpha `$ are tabulated in Table VI. For $`\alpha =0`$, the system correctly reproduces an assembly of independent mixed-spin chains which have the disordered ground state. 
Increasing $`\alpha `$, the correlation among the mixed-spin chains grows, and the quantum phase transition occurs. In particular, in the case of the mixed-spin chains with the isotropic bonds $`(\lambda =1)`$, the phase transition to the ordered state occurs at the critical value $`\alpha _c=0.21`$. We note that, as in the columnar case, this line is consistent with the phase boundary determined from the ground-state susceptibility, shown as the dashed line in Fig. 16, which may ensure that the phase diagrams for both of the two distinct mixed-spin systems are determined with rather high accuracy. ## IV Summary We have performed the systematic cluster expansion to study the two-dimensional quantum spin systems with modulated lattice as well as spin structures. By applying the asymptotic analysis to the obtained series, we have calculated the excitation spectrum and have discussed the quantum phase transitions. We have thus constructed the phase diagram which improves the previous one obtained via the staggered susceptibility. In particular, we find that the present results for the systems with the ladder and plaquette structures are in fairly good agreement with the results of the QMC simulations. We have further studied the critical phenomena for the spin systems with the modulated $`S=1,1/2`$ structure. By a careful study of two types of slightly different systems with mixed spins, it has been clarified that the topology of the spin arrangement plays an important role in stabilizing the spin-gap phase. ## Acknowledgements The work is partly supported by a Grant-in-Aid from the Ministry of Education, Science, Sports, and Culture. A. K. is supported by the Japan Society for the Promotion of Science. Numerical computation in this work was carried out at the Yukawa Institute Computer Facility.
# Comment on “Recurrences without closed orbits” ## Abstract In a recent paper Robicheaux and Shaw \[Phys. Rev. A 58, 1043 (1998)\] calculate the recurrence spectra of atoms in electric fields with non-vanishing angular momentum $`L_z\ne 0`$. Features are observed at scaled actions “an order of magnitude shorter than for any classical closed orbit of this system.” We investigate the transition from zero to nonzero angular momentum and demonstrate the existence of short closed orbits with $`L_z\ne 0`$. The real and complex “ghost” orbits are created in bifurcations of the “uphill” and “downhill” orbit along the electric field axis, and can serve to interpret the observed features in the quantum recurrence spectra. In Ref. Robicheaux and Shaw calculate quantum photoabsorption spectra of atoms in electric fields with nonzero magnetic quantum numbers, $`m\ne 0`$, and observe recurrence peaks at short actions in the Fourier transform recurrence spectra. For spectra with magnetic quantum number $`m=0`$ these peaks can be directly interpreted as the recurrences of the “uphill” and “downhill” orbit along the electric field axis. For nonzero angular momenta the authors argue that “these two orbits are not possible, because $`L_z`$ is conserved and there is a repulsive $`L_z^2/(x^2+y^2)`$ term in the potential.” The observed features at short actions are therefore interpreted as “recurrences without closed orbits.” It is the purpose of this comment to demonstrate that the uphill and downhill orbit do not simply disappear at the transition from zero to nonzero angular momentum $`L_z`$; by contrast, closed orbits with approximately the same short action still exist for $`L_z\ne 0`$. As will be shown, the orbits along the $`z`$ axis undergo bifurcations and split into a real and a complex “ghost” orbit. The importance of ghost orbits for the photoabsorption spectra of atoms in a magnetic field has been discussed at length in . 
For the hydrogen atom in an electric field the Hamiltonian separates in semiparabolical coordinates, $`\mu =\sqrt{r+z}`$, $`\nu =\sqrt{r-z}`$, i.e., $`H=H_\mu +H_\nu `$ with $`H_\mu `$ $`=`$ $`{\displaystyle \frac{1}{2}}p_\mu ^2-\epsilon \mu ^2+{\displaystyle \frac{\stackrel{~}{L}_z^2}{2\mu ^2}}+{\displaystyle \frac{1}{2}}\mu ^4=2Z_1,`$ (1) $`H_\nu `$ $`=`$ $`{\displaystyle \frac{1}{2}}p_\nu ^2-\epsilon \nu ^2+{\displaystyle \frac{\stackrel{~}{L}_z^2}{2\nu ^2}}-{\displaystyle \frac{1}{2}}\nu ^4=2Z_2,`$ (2) $`Z_1+Z_2=1`$, $`\epsilon =EF^{-1/2}`$ the scaled energy, and $`\stackrel{~}{L}_z=L_zF^{1/4}`$ the scaled angular momentum. $`F`$ is the electric field strength. Obviously, for $`L_z\ne 0`$ the centrifugal barrier does not allow trajectories to start exactly at the origin. However, the shortest closed orbits can easily be derived from the conditions on the time evolution $`p_\nu (\tau )=0`$, $`\nu (\tau )=\nu _0=\text{const}`$ for the orbits bifurcating from the uphill orbit and $`p_\mu (\tau )=0`$, $`\mu (\tau )=\mu _0=\text{const}`$ for the orbits bifurcating from the downhill orbit. In the following we discuss the bifurcation of the uphill orbit; for the downhill orbit similar results are obtained at low energies $`\epsilon \lesssim -2.0`$. The effective potential $`V(\nu )=-\epsilon \nu ^2+\stackrel{~}{L}_z^2/2\nu ^2-\nu ^4/2`$ has a local minimum for energies $`\epsilon <-\frac{3}{2}\stackrel{~}{L}_z^{2/3}`$. The stationary $`\nu `$ motion is obtained from the condition of vanishing derivative $`dV(\nu )/d\nu =0`$, yielding $$2\nu _0^6+2\epsilon \nu _0^4+\stackrel{~}{L}_z^2=0.$$ (3) Two approximate solutions of (3) at $`\epsilon \ll 0`$ are $`\nu _0^2\approx \pm \stackrel{~}{L}_z/\sqrt{-2\epsilon }`$, approaching $`\nu _0=0`$ in the limit $`\stackrel{~}{L}_z\to 0`$. The two solutions represent a real and a ghost orbit for $`\nu _0`$ real and imaginary, respectively. 
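The small root of Eq. (3) and the quality of the approximation for $`\nu _0^2`$ are easy to check numerically. The sketch below uses illustrative bound-regime values of the scaled energy and angular momentum ($`\epsilon =-4.0`$, $`\stackrel{~}{L}_z=0.014`$) and simple bisection; it is an illustrative check, not part of the original computation.

```python
def stationary_u(eps, lz, lo=0.0, hi=0.5, steps=80):
    """Small positive root u = nu0**2 of Eq. (3): 2u^3 + 2*eps*u^2 + lz^2 = 0."""
    f = lambda u: 2.0 * u ** 3 + 2.0 * eps * u ** 2 + lz ** 2
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

eps, lz = -4.0, 0.014                  # illustrative scaled energy and angular momentum
u_exact = stationary_u(eps, lz)        # stationary value of nu0**2
u_approx = lz / (-2.0 * eps) ** 0.5    # leading approximation L_z / sqrt(-2*eps)
```

The two values agree to a few parts in $`10^3`$ for these parameters, confirming that the neglected $`\nu _0^6`$ term is indeed tiny.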
With given $`\nu _0`$ it is a straightforward task to calculate the constant of motion $`Z_1=1-Z_2`$ and to solve for $`\mu (\tau )`$ in Eq. 1. The shapes of the closed orbits are presented in dimensionless scaled coordinates $`(\stackrel{~}{\rho }=\rho F^{1/2},\stackrel{~}{z}=zF^{1/2})`$ in Fig. 1 for $`\epsilon =-4.0`$ and scaled angular momentum $`\stackrel{~}{L}_z=0.014`$, which belongs to the magnetic quantum number $`m=1`$ at a value $`\omega =2\pi \sqrt{\epsilon /E}=450`$. This corresponds to the values chosen in Figure 1 of Ref. . The solid line is the real orbit, which is the uphill orbit distorted by the repulsive centrifugal barrier. An analogous orbit has been discovered and discussed for the diamagnetic Kepler system with non-vanishing angular momentum . The dashed and dash-dotted lines are the real and imaginary parts of the complex ghost orbit, respectively. The real part of the ghost orbit is nearly identical with the uphill orbit at vanishing angular momentum $`\stackrel{~}{L}_z=0`$. The scaled action of both orbits is $`\stackrel{~}{S}\approx 1/\sqrt{-2\epsilon }=0.35`$, in perfect agreement with the first recurrence peak in Figures 2 and 3 of Ref. . As mentioned above, for $`L_z\ne 0`$ the centrifugal barrier does not allow trajectories to start exactly at the origin. The nearest distance of closed orbits from the origin depends on the values of the constants of motion $`Z_1`$ and $`Z_2`$ in Eqs. 1 and 2. It is negligibly small for orbits with $`Z_1\approx Z_2\approx 0.5`$ and increases when $`Z_1`$ or $`Z_2`$ approaches the minimal allowed value. For the real closed orbit in Fig. 1 the nearest distance from the origin is $`\stackrel{~}{r}_{\mathrm{min}}=r_{\mathrm{min}}F^{1/2}=0.0025`$ in scaled units, which is about 13 Bohr radii at $`\omega =2\pi \sqrt{\epsilon /E}=450`$. 
This is slightly outside the classically allowed region of the initial state $`|2p1\rangle `$; however, it should be noted that a small change of the initial conditions results in approximately closed orbits where the distance to the origin at the start and return is reduced to about a few Bohr radii. The real closed orbit in Fig. 1 is more strongly excited in dipole transitions from initial states of larger size than the size of the hydrogenic $`|2p1\rangle `$ state, as can clearly be seen in Figs. 5 and 6 of Ref. for the excitation of the K and Cs atoms, respectively. The relatively large nearest distance of the closed orbit to the origin suppresses effects of classical core scattering , especially for the K atom (see Fig. 5 in Ref. ), which might result in strong damping of the multiple repetitions for orbits with $`L_z=0`$ diving deeply into the ionic core. For the Cs atom the ionic core has larger size and core scattering can be observed in Fig. 6 of Ref. , but, in contrast to the interpretation given in , orbits are scattered into the truly existing short closed orbit presented in Fig. 1. However, it is still an outstanding task to reproduce quantitatively the amplitudes of recurrence peaks in the Fourier transform quantum spectra of the hydrogen atom and non-hydrogenic atoms by application of closed orbit theory. Robicheaux and Shaw are probably right that the theory may need to be generalized to account for the effects of orbits not starting at and returning back exactly to the origin. In conclusion, we have investigated the bifurcation scenario of the shortest closed orbits of the hydrogen atom in an electric field at the transition from zero to non-vanishing angular momentum $`L_z`$, and revealed the existence of short real and complex ghost orbits with $`L_z\ne 0`$. They are born in bifurcations of the uphill and downhill orbit along the field axis and can serve to interpret features at short scaled actions in the quantum recurrence spectra calculated by Robicheaux and Shaw . 
This work was supported by the Deutsche Forschungsgemeinschaft. I am grateful to G. Wunner for a critical reading of the manuscript.
## 1 Introduction As is well known supersymmetric theories contain many new sources of CP violation which mostly arise from the phases of the soft SUSY breaking parameters, and such phases contribute to the electric dipole moments (EDMs) of the electron and of the neutron. Experimentally the electron and the neutron EDMs have very strict limits, i.e., for the neutron the limit is $$|d_n|<6.3\times 10^{-26}ecm$$ (1) and for the electron the limit is $$|d_e|<4.3\times 10^{-27}ecm$$ (2) and these limits impose stringent constraints on particle physics models. In SUSY/string models one normally expects CP violating phases of O(1), and phases of this size typically lead to EDM predictions in such models already in excess of the current experimental limits. Of the possible remedies to this problem the conventional approach has been to assume that the phases are small, typically $`O(10^{-2}-10^{-3})`$, which, however, constitutes a fine tuning. Another possibility suggested is to assume that the SUSY spectrum is heavy, in the several TeV region. Generally, a heavy spectrum may constitute fine tuning except in certain limited domains of the parameter space. Further, such a heavy spectrum may lie outside the reach of even the Large Hadron Collider (LHC) and is thus a disappointing scenario from the point of view of particle physicists. A third, more encouraging possibility is that the large phases could indeed be there, but one escapes the experimental EDM constraints because of cancellations among the various contributions to the EDMs. This possibility was proposed in Ref. and there have been further verifications and developments, and applications such as in dark matter, in low energy processes, and on other SUSY phenomena. The cancellation mechanism opens a new window on the SUSY parameter space where large CP phases along with a light SUSY spectrum can co-exist. Thus significant effects on SUSY phenomena can result. 
One of the quantities affected by CP phases is $`a_\mu =(g_\mu -2)/2`$, where $`g_\mu -2`$ is the anomalous magnetic moment of the muon. This quantity is of considerable current interest since the new Brookhaven experiment will improve the accuracy of the $`a_\mu `$ measurement by better than a factor of 20. Further, recently there has been considerable progress in reducing the hadronic error. With the reduced hadronic error the new $`g_\mu -2`$ experiment will test the Standard Model (SM) electro-weak correction, which, including the two loop SM corrections, stands at $$a_\mu ^{SM}=15.1\times 10^{-10}$$ (3) It turns out that the supersymmetric electro-weak corrections to $`a_\mu `$ can be quite large and these supersymmetric effects on $`a_\mu `$ have been investigated for many years. However, the CP violating supersymmetric electro-weak effects on $`a_\mu `$ have been ignored for the reason that small CP phases, or large CP phases with a heavy spectrum, lead only to negligible effects on $`a_\mu `$. With the cancellation mechanism the possibility of large CP phases along with a light spectrum arises, and such a situation can lead to very significant effects on $`a_\mu `$. Indeed in a recent work the effects of CP phases on $`a_\mu `$ in the context of the minimal supergravity model (mSUGRA) were analysed and it was shown that CP violating phases can produce significant effects on $`a_\mu `$. In the absence of CP violating phases the soft SUSY breaking parameters at the GUT scale in mSUGRA consist of the universal scalar mass $`m_0`$, the universal gaugino mass $`m_{\frac{1}{2}}`$, the universal trilinear coupling $`A_0`$ and $`\mathrm{tan}\beta =<H_2>/<H_1>`$, where $`H_2`$ gives mass to the up quark and $`H_1`$ gives mass to the down quark. More generally the soft SUSY breaking parameters as well as the Higgs VEVs are complex and have phases. However, by a redefinition of fields it is easily seen that there are only two CP violating phases in mSUGRA. 
These can be chosen to be the phase of $`A_0`$ and the phase of $`\mu _0`$ where $`\mu _0`$ appears in the Higgs mixing term, i.e., in the term $`\mu _0H_1H_2`$ in the superpotential. In this paper we extend our analysis of the effects of CP violating phases on $`a_\mu `$ to supergravity models with non-universalities and to the Minimal Supersymmetric Standard Model (MSSM) which has many more CP violating phases. The existence of a larger set of CP phases widens the region of the parameter space where cancellations can occur. The purpose of this paper is to derive the general one loop supersymmetric correction to $`a_\mu `$ with the most general set of CP violating phases allowed in MSSM and determine the numerical effects of these CP violating phases on $`a_\mu `$ under the experimental constraints on the electron and on the neutron EDM. The outline of the rest of the paper is as follows: In Sec.2 we derive the general one loop formula for $`a_f`$ for the case of a fermion $`f`$ interacting with a fermion and a scalar in the presence of CP violating phases and without any approximation on the relative size of the external and internal particle masses in the loop. In Sec.3 we apply this formula for the computation of the chargino and neutralino exchange contributions to $`a_\mu `$ for the most general allowed set of CP violating phases in this sector. In Sec.4 and in Appendix A we study the combination of CP phases that enter $`a_\mu `$ and compare them with the corresponding combinations that arise in the expressions for the electron and the neutron EDMs. In Sec.5 and in Appendix B the supersymmetric limit of our result is given and it is explicitly shown how the one loop Standard Model contribution to $`a_\mu `$ including the one loop QED correction, i.e. $`\alpha _{em}/2\pi `$, is cancelled by the supersymmetric contribution. In Sec.6 we give a discussion of the satisfaction of the EDM constraints. 
In Sec.7 we give an analysis of CP violating effects on $`a_\mu `$. Conclusions are given in Sec.8. Appendix C is devoted to a discussion of the limit of vanishing CP violating phases and a comparison of our results with previous analyses. ## 2 CP Effects on $`g-2`$ in MSSM We give here the general analysis for the CP effects on $`g-2`$ of a fermion. In general, for the interaction of a fermion $`\psi _f`$ of mass $`m_f`$ with a fermion $`\psi _i`$ of mass $`m_i`$ and a scalar $`\varphi _k`$ of mass $`m_k`$, the vertex interaction has the general form $$\mathcal{L}_{int}=\sum _{ik}\overline{\psi _f}(K_{ik}\frac{1-\gamma _5}{2}+L_{ik}\frac{1+\gamma _5}{2})\psi _i\varphi _k+H.c.$$ (4) This interaction violates CP invariance if and only if $`Im(K_{ik}L_{ik}^{*})\ne 0`$. The one loop contribution to $`a_f`$ is given by $$a_f=a_f^1+a_f^2$$ (5) where $`a_f^1`$ and $`a_f^2`$ arise from Fig. 1(a) and Fig. 1(b) respectively. $`a_f^1`$ is a sum of two terms, $`a_f^1=a_f^{11}+a_f^{12}`$, where $$a_f^{11}=\sum _{ik}\frac{m_f}{8\pi ^2m_i}Re(K_{ik}L_{ik}^{*})I_1(\frac{m_f^2}{m_i^2},\frac{m_k^2}{m_i^2})$$ (6) and $$I_1(\alpha ,\beta )=\int _0^1dx\int _0^{1-x}dz\frac{z}{\alpha z^2+(1-\alpha -\beta )z+\beta }$$ (7) and where $$a_f^{12}=\sum _{ik}\frac{m_f^2}{16\pi ^2m_i^2}(|K_{ik}|^2+|L_{ik}|^2)I_2(\frac{m_f^2}{m_i^2},\frac{m_k^2}{m_i^2})$$ (8) and $$I_2(\alpha ,\beta )=\int _0^1dx\int _0^{1-x}dz\frac{z^2-z}{\alpha z^2+(1-\alpha -\beta )z+\beta }$$ (9) Similarly, $`a_f^2`$ consists of two terms, $`a_f^2=a_f^{21}+a_f^{22}`$, where $$a_f^{21}=\sum _{ik}\frac{m_f}{8\pi ^2m_i}Re(K_{ik}L_{ik}^{*})I_3(\frac{m_f^2}{m_i^2},\frac{m_k^2}{m_i^2})$$ (10) and $$I_3(\alpha ,\beta )=\int _0^1dx\int _0^{1-x}dz\frac{1-z}{\alpha z^2+(\beta -\alpha -1)z+1}$$ (11) and where $$a_f^{22}=\sum _{ik}\frac{m_f^2}{16\pi ^2m_i^2}(|K_{ik}|^2+|L_{ik}|^2)I_4(\frac{m_f^2}{m_i^2},\frac{m_k^2}{m_i^2})$$ (12) and $$I_4(\alpha ,\beta )=\int _0^1dx\int _0^{1-x}dz\frac{z^2-z}{\alpha z^2+(\beta -\alpha -1)z+1}$$ (13) In the above we have given the exact expressions for the 
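The integrals $`I_1`$–$`I_4`$ are elementary to evaluate numerically. The sketch below is ours (plain midpoint quadrature over the triangle $`0<z<1-x`$, not code from the paper); it reproduces the simple exact values at $`(\alpha ,\beta )=(0,1)`$, where every denominator reduces to 1 and the integrals become moments of $`z`$ over the triangle.

```python
def tri_integral(g, n=400):
    # Midpoint rule for \int_0^1 dx \int_0^{1-x} dz g(z).
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        hz = (1.0 - x) / n            # inner step; the x-dependent limit is handled exactly
        total += sum(g((j + 0.5) * hz) for j in range(n)) * hz * h
    return total

def I1(a, b, n=400):
    return tri_integral(lambda z: z / (a*z*z + (1 - a - b)*z + b), n)

def I2(a, b, n=400):
    return tri_integral(lambda z: (z*z - z) / (a*z*z + (1 - a - b)*z + b), n)

def I3(a, b, n=400):
    return tri_integral(lambda z: (1 - z) / (a*z*z + (b - a - 1)*z + 1), n)

def I4(a, b, n=400):
    return tri_integral(lambda z: (z*z - z) / (a*z*z + (b - a - 1)*z + 1), n)
```

At $`(\alpha ,\beta )=(0,1)`$ one finds $`I_1=1/6`$, $`I_2=-1/12`$, $`I_3=1/3`$, $`I_4=-1/12`$; the negative values of $`I_2`$ and $`I_4`$ simply reflect the $`z^2-z<0`$ numerator on the integration region.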
integrals $`I_1`$–$`I_4`$ rather than their approximate forms in the limit when one neglects terms of order $`m_f^2`$ relative to $`m_k^2`$ and $`m_i^2`$, which allows one to write simple closed form expressions for them. We will see that the general expressions are needed to discuss the supersymmetric limit of our results, which provides an absolute check on our normalizations. ## 3 $`g_\mu -2`$ with CP Violating Phases We apply now the above relations to the computation of the chargino and the neutralino exchange contributions. We consider the chargino exchange contributions first. The CP violating phases enter here via the chargino mass matrix defined by $$M_C=\left(\begin{array}{cc}|\stackrel{~}{m}_2|e^{i\xi _2}& \sqrt{2}m_W\mathrm{sin}\beta e^{-i\chi _2}\\ \sqrt{2}m_W\mathrm{cos}\beta e^{-i\chi _1}& |\mu |e^{i\theta _\mu }\end{array}\right)$$ (14) where $`\chi _1`$ and $`\chi _2`$ are the phases of the Higgs VEVs, i.e., $`<H_i>=|<H_i>|e^{i\chi _i}`$ (i=1,2). The matrix of Eq.(14) can be diagonalized by the biunitary transformation $`U^{*}M_CV^{-1}=diag(\stackrel{~}{m}_{\chi _1^+},\stackrel{~}{m}_{\chi _2^+})`$ where U and V are unitary matrices. By looking at the muon-chargino-sneutrino interaction one can identify $`K_i`$ and $`L_i`$ and one finds $$a_\mu ^{\chi ^{-}}=a_\mu ^{21}+a_\mu ^{22}$$ (15) where $`a_\mu ^{21}`$ and $`a_\mu ^{22}`$ are given below. 
We exhibit these only in the limit where $`I_3(\alpha ,\beta )`$ and $`I_4(\alpha ,\beta )`$ have their first arguments set to zero, where one may write $$I_3(0,x)=-\frac{1}{2}F_3(x),I_4(0,x)=-\frac{1}{6}F_4(x)$$ (16) where $$F_3(x)=\frac{1}{(x-1)^3}(3x^2-4x+1-2x^2\mathrm{ln}x)$$ (17) $$F_4(x)=\frac{1}{(x-1)^4}(2x^3+3x^2-6x+1-6x^2\mathrm{ln}x).$$ (18) In the above approximation we have $$a_\mu ^{21}=\frac{m_\mu \alpha _{EM}}{4\pi \mathrm{sin}^2\theta _W}\sum _{i=1}^{2}\frac{1}{M_{\chi _i^+}}Re(\kappa _\mu U_{i2}^{*}V_{i1}^{*})F_3(\frac{M_{\stackrel{~}{\nu }}^2}{M_{\chi _i^+}^2}).$$ (19) and $$a_\mu ^{22}=\frac{m_\mu ^2\alpha _{EM}}{24\pi \mathrm{sin}^2\theta _W}\sum _{i=1}^{2}\frac{1}{M_{\chi _i^+}^2}(|\kappa _\mu U_{i2}^{*}|^2+|V_{i1}|^2)F_4(\frac{M_{\stackrel{~}{\nu }}^2}{M_{\chi _i^+}^2}).$$ (20) where $$\kappa _\mu =\frac{m_\mu }{\sqrt{2}M_W\mathrm{cos}\beta }e^{-i\chi _1}$$ (21) Next we discuss the neutralino exchange contribution to $`a_\mu `$. The CP violating effects here are all contained in the neutralino and smuon mass matrices. 
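Because the integrands of $`I_3`$ and $`I_4`$ do not depend on the outer variable, at $`\alpha =0`$ the $`x`$-integration simply supplies a factor $`(1-z)`$, and the relations between the exact integrals and the closed forms $`F_3`$, $`F_4`$ can be verified numerically. The sketch below is ours; the overall signs are fixed by the $`x\rightarrow 1`$ limit, where $`I_3(0,1)=1/3`$ while the bracket in $`F_3`$ gives $`-2/3`$.

```python
import math

def I3_0(x, n=4000):
    # I_3(0, x) = \int_0^1 (1-z)^2 / ((x-1) z + 1) dz   (midpoint rule)
    h = 1.0 / n
    return sum((1 - z)**2 / ((x - 1)*z + 1)
               for z in ((i + 0.5)*h for i in range(n))) * h

def I4_0(x, n=4000):
    # I_4(0, x) = \int_0^1 (1-z)(z^2 - z) / ((x-1) z + 1) dz
    h = 1.0 / n
    return sum((1 - z)*(z*z - z) / ((x - 1)*z + 1)
               for z in ((i + 0.5)*h for i in range(n))) * h

def F3(x):
    # closed form entering the chargino contribution a_mu^{21}
    return (3*x*x - 4*x + 1 - 2*x*x*math.log(x)) / (x - 1)**3

def F4(x):
    # closed form entering the chargino contribution a_mu^{22}
    return (2*x**3 + 3*x*x - 6*x + 1 - 6*x*x*math.log(x)) / (x - 1)**4
```

For sample values of $`x=M_{\stackrel{~}{\nu }}^2/M_{\chi ^+}^2`$ on either side of 1, one finds $`I_3(0,x)=-F_3(x)/2`$ and $`I_4(0,x)=-F_4(x)/6`$ to quadrature accuracy.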
For the neutralino mass matrix the CP violating phases enter as below $$\left(\begin{array}{cccc}|\stackrel{~}{m}_1|e^{i\xi _1}& 0& -M_z\mathrm{sin}\theta _W\mathrm{cos}\beta e^{-i\chi _1}& M_z\mathrm{sin}\theta _W\mathrm{sin}\beta e^{-i\chi _2}\\ 0& |\stackrel{~}{m}_2|e^{i\xi _2}& M_z\mathrm{cos}\theta _W\mathrm{cos}\beta e^{-i\chi _1}& -M_z\mathrm{cos}\theta _W\mathrm{sin}\beta e^{-i\chi _2}\\ -M_z\mathrm{sin}\theta _W\mathrm{cos}\beta e^{-i\chi _1}& M_z\mathrm{cos}\theta _W\mathrm{cos}\beta e^{-i\chi _1}& 0& -|\mu |e^{i\theta _\mu }\\ M_z\mathrm{sin}\theta _W\mathrm{sin}\beta e^{-i\chi _2}& -M_z\mathrm{cos}\theta _W\mathrm{sin}\beta e^{-i\chi _2}& -|\mu |e^{i\theta _\mu }& 0\end{array}\right).$$ (22) The neutralino mass matrix $`M_{\chi ^0}`$ is a complex, non hermitian and symmetric matrix and can be diagonalized using a unitary matrix X such that $`X^TM_{\chi ^0}X`$=$`\mathrm{diag}(\stackrel{~}{m}_{\chi _1^0},\stackrel{~}{m}_{\chi _2^0},\stackrel{~}{m}_{\chi _3^0},\stackrel{~}{m}_{\chi _4^0})`$. Since the loop correction involving the neutralino exchange also involves the smuon exchange (see Fig.1a), the CP phases in the smuon $`(mass)^2`$ matrix also enter the analysis. 
The smuon $`(mass)^2`$ matrix is given by $$M_{\stackrel{~}{\mu }}^2=\left(\begin{array}{cc}M_{\stackrel{~}{\mu }11}^2& m_\mu (A_\mu ^{*}m_0-\mu \mathrm{tan}\beta e^{i(\chi _1+\chi _2)})\\ m_\mu (A_\mu m_0-\mu ^{*}\mathrm{tan}\beta e^{-i(\chi _1+\chi _2)})& M_{\stackrel{~}{\mu }22}^2\end{array}\right),$$ (23) This matrix is hermitian and can be diagonalized by the unitary transformation $$D^{\dagger }M_{\stackrel{~}{\mu }}^2D=\mathrm{diag}(M_{\stackrel{~}{\mu }1}^2,M_{\stackrel{~}{\mu }2}^2)$$ (24) The neutralino exchange contribution to $`a_\mu `$ is given by $$a_\mu ^{\chi ^0}=a_\mu ^{11}+a_\mu ^{12}$$ (25) where $$a_\mu ^{11}=\frac{m_\mu \alpha _{EM}}{2\pi \mathrm{sin}^2\theta _W}\sum _{j=1}^{4}\sum _{k=1}^{2}\frac{1}{M_{\chi _j^0}}Re(\eta _{\mu j}^k)I_1(\frac{m_\mu ^2}{M_{\chi _j^0}^2},\frac{M_{\stackrel{~}{\mu _k}}^2}{M_{\chi _j^0}^2})$$ (26) and $`\eta _{\mu j}^k`$ $`=(\frac{1}{\sqrt{2}}[\mathrm{tan}\theta _WX_{1j}+X_{2j}]D_{1k}^{*}-\kappa _\mu X_{3j}D_{2k}^{*})`$ (27) $`(\sqrt{2}\mathrm{tan}\theta _WX_{1j}D_{2k}+\kappa _\mu X_{3j}D_{1k})`$ and $`a_\mu ^{12}`$ is given by $$a_\mu ^{12}=\frac{m_\mu ^2\alpha _{EM}}{4\pi \mathrm{sin}^2\theta _W}\sum _{j=1}^{4}\sum _{k=1}^{2}\frac{1}{M_{\chi _j^0}^2}X_{\mu j}^kI_2(\frac{m_\mu ^2}{M_{\chi _j^0}^2},\frac{M_{\stackrel{~}{\mu _k}}^2}{M_{\chi _j^0}^2})$$ (28) where $`X_{\mu j}^k`$ $`=\frac{m_\mu ^2}{2M_W^2\mathrm{cos}^2\beta }|X_{3j}|^2`$ (29) $`+\frac{1}{2}\mathrm{tan}^2\theta _W|X_{1j}|^2(|D_{1k}|^2+4|D_{2k}|^2)+\frac{1}{2}|X_{2j}|^2|D_{1k}|^2`$ $`+\mathrm{tan}\theta _W|D_{1k}|^2Re(X_{1j}X_{2j}^{*})`$ $`+\frac{m_\mu \mathrm{tan}\theta _W}{M_W\mathrm{cos}\beta }Re(e^{-i\chi _1}X_{3j}X_{1j}^{*}D_{1k}D_{2k}^{*})`$ $`-\frac{m_\mu }{M_W\mathrm{cos}\beta }Re(e^{-i\chi _1}X_{3j}X_{2j}^{*}D_{1k}D_{2k}^{*})`$ If one ignores the muon mass with respect to the other masses involved in the problem, the 
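Eq.(24) is a standard $`2\times 2`$ hermitian diagonalization: first phase out the off-diagonal element, then rotate by the mixing angle. A minimal sketch (ours; the numbers in the test are illustrative, not a parameter point of the paper):

```python
import cmath
import math

def diagonalize_hermitian_2x2(m11, m22, m12):
    """Return a unitary D (columns = eigenvectors) with D^+ M D diagonal,
    for M = [[m11, m12], [conj(m12), m22]] with real m11, m22."""
    phi = cmath.phase(m12)                       # phase of the off-diagonal element
    theta = 0.5 * math.atan2(2*abs(m12), m11 - m22)   # mixing angle
    c, s = math.cos(theta), math.sin(theta)
    # D = diag(1, e^{-i phi}) . R(theta)
    return [[c, -s],
            [s*cmath.exp(-1j*phi), c*cmath.exp(-1j*phi)]]

def conj_transform(D, M):
    """Compute D^+ M D for 2x2 matrices (lists of lists)."""
    Dd = [[D[j][i].conjugate() for j in range(2)] for i in range(2)]
    A = [[sum(Dd[i][k]*M[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
    return [[sum(A[i][k]*D[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
```

The phase of the off-diagonal entry, removed by the `diag(1, e^{-i phi})` factor, is what carries the CP-violating combination of $`\alpha _{A_\mu }`$, $`\theta _\mu `$, $`\chi _1`$ and $`\chi _2`$ into the diagonalizing matrix D.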
form factors $`I_1(\alpha ,\beta )`$ and $`I_2(\alpha ,\beta )`$ become $$I_1(0,x)=-\frac{1}{2}F_1(x),I_2(0,x)=\frac{1}{6}F_2(x)$$ (30) where $$F_1(x)=\frac{1}{(x-1)^3}(1-x^2+2x\mathrm{ln}x)$$ (31) and $$F_2(x)=\frac{1}{(x-1)^4}(-x^3+6x^2-3x-2-6x\mathrm{ln}x).$$ (32) ## 4 The number of independent linear combinations of phases that enter $`a_\mu `$ Not all the phases that enter in the chargino, neutralino and smuon mass matrices are independent. We discuss here the set of independent phases that enter $`a_\mu `$. We consider the chargino contribution to $`a_\mu `$ first. Here the matrix elements of $`U`$ and $`V`$ as defined in the paragraph following Eq.(14), along with $`\kappa _\mu `$ as defined by Eq.(21), carry the phases $`\xi _2`$, $`\theta _\mu `$, $`\chi _1`$ and $`\chi _2`$. By introducing the transformation $`M_C=B_RM_C^{'}B_L^{\dagger }`$ and choosing $`B_R=diag(e^{i\xi _2},e^{-i\chi _1})`$ and $`B_L=diag(1,e^{i(\chi _2+\xi _2)})`$ we can rotate the phases so that $`M_C^{'}`$ is given by $$M_C^{'}=\left(\begin{array}{cc}|\stackrel{~}{m}_2|& \sqrt{2}m_W\mathrm{sin}\beta \\ \sqrt{2}m_W\mathrm{cos}\beta & |\mu |e^{i(\theta _\mu +\xi _2+\chi _1+\chi _2)}\end{array}\right)$$ (33) The matrix $`M_C^{'}`$ can be diagonalized by the biunitary transformation $`U_R^{\dagger }M_C^{'}U_L`$=diag$`(\stackrel{~}{m}_{\chi _1^+},\stackrel{~}{m}_{\chi _2^+})`$. It is clear that the matrix elements of $`U_L`$ and $`U_R`$ are functions only of the combination $`\theta =\theta _\mu +\xi _2+\chi _1+\chi _2`$. We also have $`U^{*}M_CV^{-1}=diag(\stackrel{~}{m}_{\chi _1^+},\stackrel{~}{m}_{\chi _2^+})`$ where $`U=(B_RU_R)^T`$, and V=$`(B_LU_L)^{\dagger }`$. By inserting these forms of $`U`$ and $`V`$ in the chargino contribution one finds (as shown in Appendix A) that $`a_\mu ^{21}`$ and $`a_\mu ^{22}`$ depend on only one combination of phases, i.e., $`\theta =\theta _\mu +\xi _2+\chi _1+\chi _2`$. 
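The statement above can be checked mechanically: for any input phases, $`B_R^{\dagger }M_CB_L`$ has real (1,1), (1,2) and (2,1) entries, and the phase of the (2,2) entry is exactly $`\theta =\theta _\mu +\xi _2+\chi _1+\chi _2`$. A sketch (ours, with the Higgs-VEV phases entering $`M_C`$ as $`e^{-i\chi _{1,2}}`$ and illustrative numbers):

```python
import cmath
import math

def chargino_matrix(m2, mu_abs, mw, beta, xi2, chi1, chi2, th_mu):
    """Chargino mass matrix with the Higgs-VEV phases entering as e^{-i chi}."""
    r2 = math.sqrt(2.0)
    return [[m2*cmath.exp(1j*xi2),
             r2*mw*math.sin(beta)*cmath.exp(-1j*chi2)],
            [r2*mw*math.cos(beta)*cmath.exp(-1j*chi1),
             mu_abs*cmath.exp(1j*th_mu)]]

def rotate_chargino(M, xi2, chi1, chi2):
    """Compute B_R^+ M B_L with B_R = diag(e^{i xi2}, e^{-i chi1}),
    B_L = diag(1, e^{i (chi2 + xi2)})."""
    br = [cmath.exp(1j*xi2), cmath.exp(-1j*chi1)]
    bl = [1.0, cmath.exp(1j*(chi2 + xi2))]
    return [[br[i].conjugate()*M[i][j]*bl[j] for j in range(2)] for i in range(2)]
```

After the rotation, the single surviving phase sits in the (2,2) entry, which is why the chargino contribution can depend only on the one combination $`\theta `$.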
Now we turn to the neutralino contribution. The phases that enter here are $`\theta _\mu `$, $`\alpha _{A_\mu }`$, $`\xi _2`$, $`\xi _1`$, $`\chi _1`$ and $`\chi _2`$, and they are carried by the matrix elements of $`X`$, $`D_\mu `$ and the phase of $`\kappa _\mu `$. Next we make the transformation $`M_{\chi ^0}`$=$`P_{\chi ^0}^T`$ $`M_{\chi ^0}^{'}`$ $`P_{\chi ^0}`$ where $$P_{\chi ^0}=diag(e^{i\frac{\xi _1}{2}},e^{i\frac{\xi _2}{2}},e^{-i(\frac{\xi _1}{2}+\chi _1)},e^{-i(\frac{\xi _2}{2}+\chi _2)})$$ (34) After the transformation the matrix $`M_{\chi ^0}^{'}`$ takes the form $$\left(\begin{array}{cccc}|\stackrel{~}{m}_1|& 0& -M_z\mathrm{sin}\theta _W\mathrm{cos}\beta & M_z\mathrm{sin}\theta _W\mathrm{sin}\beta e^{-i\frac{\mathrm{\Delta }\xi }{2}}\\ 0& |\stackrel{~}{m}_2|& M_z\mathrm{cos}\theta _W\mathrm{cos}\beta e^{i\frac{\mathrm{\Delta }\xi }{2}}& -M_z\mathrm{cos}\theta _W\mathrm{sin}\beta \\ -M_z\mathrm{sin}\theta _W\mathrm{cos}\beta & M_z\mathrm{cos}\theta _W\mathrm{cos}\beta e^{i\frac{\mathrm{\Delta }\xi }{2}}& 0& -|\mu |e^{i\theta ^{'}}\\ M_z\mathrm{sin}\theta _W\mathrm{sin}\beta e^{-i\frac{\mathrm{\Delta }\xi }{2}}& -M_z\mathrm{cos}\theta _W\mathrm{sin}\beta & -|\mu |e^{i\theta ^{'}}& 0\end{array}\right).$$ (35) where $`\theta ^{'}=\frac{\xi _1+\xi _2}{2}+\theta _\mu +\chi _1+\chi _2`$, and $`\mathrm{\Delta }\xi =(\xi _1-\xi _2)`$. The matrix $`M_{\chi ^0}^{'}`$ can be diagonalized by the transformation $`Y^TM_{\chi ^0}^{'}Y`$=$`\mathrm{diag}(\stackrel{~}{m}_{\chi _1^0},\stackrel{~}{m}_{\chi _2^0},\stackrel{~}{m}_{\chi _3^0},\stackrel{~}{m}_{\chi _4^0})`$ where Y is a function only of $`\theta ^{'}`$ and $`\mathrm{\Delta }\xi /2`$. Thus the complex, non hermitian and symmetric matrix $`M_{\chi ^0}`$ can be diagonalized using the unitary matrix $`X=P_{\chi ^0}^{*}Y`$ such that $`X^TM_{\chi ^0}X=\mathrm{diag}(\stackrel{~}{m}_{\chi _1^0},\stackrel{~}{m}_{\chi _2^0},\stackrel{~}{m}_{\chi _3^0},\stackrel{~}{m}_{\chi _4^0})`$. 
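The same mechanical check works in the neutralino sector: with the diagonal $`P_{\chi ^0}`$ of Eq.(34) one has $`(M_{\chi ^0}^{'})_{ij}=(M_{\chi ^0})_{ij}/(p_ip_j)`$, and the only surviving phases are $`\pm \mathrm{\Delta }\xi /2`$ and $`\theta ^{'}`$. A sketch (ours; sign and phase conventions as written above, illustrative numbers):

```python
import cmath
import math

def neutralino_matrix(m1, m2, mu, mz, thw, beta, xi1, xi2, chi1, chi2, th_mu):
    """4x4 neutralino mass matrix with explicit CP phases."""
    sw, cw = math.sin(thw), math.cos(thw)
    cb, sb = math.cos(beta), math.sin(beta)
    e = cmath.exp
    a = mz*sw*cb*e(-1j*chi1); b = mz*sw*sb*e(-1j*chi2)
    c = mz*cw*cb*e(-1j*chi1); d = mz*cw*sb*e(-1j*chi2)
    mu_ph = mu*e(1j*th_mu)
    return [[m1*e(1j*xi1), 0.0,          -a,     b],
            [0.0,          m2*e(1j*xi2),  c,    -d],
            [-a,           c,             0.0,  -mu_ph],
            [b,           -d,            -mu_ph, 0.0]]

def rotate_neutralino(M, xi1, xi2, chi1, chi2):
    # M' with M = P^T M' P and P = diag(e^{i xi1/2}, e^{i xi2/2},
    # e^{-i(xi1/2+chi1)}, e^{-i(xi2/2+chi2)}); hence M'_ij = M_ij/(p_i p_j).
    e = cmath.exp
    p = [e(1j*xi1/2), e(1j*xi2/2),
         e(-1j*(xi1/2 + chi1)), e(-1j*(xi2/2 + chi2))]
    return [[M[i][j]/(p[i]*p[j]) for j in range(4)] for i in range(4)]
```

With, e.g., $`\xi _1=0.5`$, $`\xi _2=0.9`$, $`\chi _1=0.3`$, $`\chi _2=-0.4`$, $`\theta _\mu =0.7`$ one has $`\mathrm{\Delta }\xi /2=-0.2`$ and $`\theta ^{'}=1.3`$, and the rotated matrix carries exactly those phases.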
As shown in Appendix A, by applying the above transformations to each term of $`\eta _{\mu j}^k`$ and $`X_{\mu j}^k`$ one finds that the phase combinations that enter here are $`\theta ^{'}`$, $`\mathrm{\Delta }\xi /2`$ and $`\alpha _{A_\mu }+\theta _\mu +\chi _1+\chi _2`$, from which we can construct the three combinations $`\xi _1+\theta _\mu +\chi _1+\chi _2`$, $`\xi _2+\theta _\mu +\chi _1+\chi _2`$ and $`\alpha _{A_\mu }+\theta _\mu +\chi _1+\chi _2`$. By defining $`\theta _1=\theta _\mu +\chi _1+\chi _2`$ one finds that the $`a_\mu `$ dependence on phases from both the chargino and the neutralino exchanges consists only of the three combinations $`\alpha _{A_\mu }+\theta _1`$, $`\xi _1+\theta _1`$ and $`\xi _2+\theta _1`$. One may compare these combinations with those that appear in the supersymmetric contribution to the electron and the neutron EDMs. In the analysis of Ref. we found that the electron and the neutron EDMs depend on the following combinations: $`\xi _i+\theta _1`$, $`(i=1,2,3)`$ and $`\alpha _{A_k}+\theta _1`$ with $`k=u,d,t,b,c,s;l`$. We note that even though $`a_\mu `$ and the EDMs are very different physical quantities, the linear combinations of phases that enter them are similar. In fact the phases that enter $`a_\mu `$ are a subset of the phases that enter the supersymmetric contributions to the EDMs. ## 5 The Supersymmetric Limit To check the absolute normalization of our results we discuss now their supersymmetric limit. In Ref. the supersymmetric contributions to $`a_\mu `$ from the chargino sector in the supersymmetric limit were computed by going to the limit such that $$U^{*}M_CV^{-1}=diag(M_W,M_W)$$ (36) In this limit it was shown that the contribution from this sector is precisely the negative of the contribution from the W exchange. The analysis of Ref. was carried out in the framework of mSUGRA with two CP violating phases. 
For the MSSM case being discussed here, with many CP phases, the structure of the $`U`$ and $`V`$ matrices in the supersymmetric limit is modified, so that $$U=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1& e^{-i\chi _1}\\ 1& -e^{-i\chi _1}\end{array}\right),V=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1& e^{-i\chi _2}\\ -1& e^{-i\chi _2}\end{array}\right)$$ (37) However, taking the phases $`\chi _1`$ and $`\chi _2`$ into account, we find exactly the same result as in Ref. due to the appearance of the $`\kappa _\mu `$ phase. Thus the sum of the W exchange contribution and of the chargino exchange contributions cancels in the supersymmetric limit. Similarly it was shown in Ref. by making a unitary transformation that the neutralino mass matrix in the supersymmetric limit can be written in the form $$X^TM_{\chi ^0}X=diag(0,0,M_Z,M_Z)$$ (38) where the non-vanishing eigen-values are positive definite. It was then shown that the last two eigen-modes give a contribution which is the negative of the contribution from the Z exchange in the Standard Model. In the case of MSSM we are discussing here, the structure of the diagonalizing matrix $`X`$ is changed because of the $`\chi _1`$ and $`\chi _2`$ phases (see Appendix B). However, the final result we arrived at in Ref. still holds, due once again to the appearance of the $`\kappa _\mu `$ phase. Thus the sum of the Z exchange and of the heavy neutralino exchanges exactly cancels in the supersymmetric limit. We now turn to the supersymmetric limit of the contribution from the first two massless eigen-states of the neutralino mass matrix. 
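Unitarity of U and V in Eq.(37) requires the relative minus signs in the second column of U and the second row of V; with one such consistent sign choice (ours, assuming the Higgs phases enter $`M_C`$ as $`e^{-i\chi _{1,2}}`$), $`U^{*}M_CV^{-1}=diag(M_W,M_W)`$ holds for any $`\chi _1`$, $`\chi _2`$ in the supersymmetric limit $`\stackrel{~}{m}_2,\mu \rightarrow 0`$, $`\mathrm{tan}\beta =1`$:

```python
import cmath
import math

def matmul2(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def dagger2(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def susy_limit_check(mw, chi1, chi2):
    """Return U* M_C V^{-1} in the susy limit (m2 = mu = 0, tan beta = 1)."""
    e = cmath.exp
    r = 1.0 / math.sqrt(2.0)
    MC = [[0.0, mw*e(-1j*chi2)], [mw*e(-1j*chi1), 0.0]]
    U  = [[r,  r*e(-1j*chi1)], [r, -r*e(-1j*chi1)]]
    V  = [[r,  r*e(-1j*chi2)], [-r, r*e(-1j*chi2)]]
    Uc = [[U[i][j].conjugate() for j in range(2)] for i in range(2)]
    return matmul2(matmul2(Uc, MC), dagger2(V))   # V^{-1} = V^+ since V is unitary
```

The product comes out as diag(M_W, M_W) with the phases $`\chi _1`$, $`\chi _2`$ cancelling between U, V and M_C, which is the mechanism behind the chargino/W cancellation quoted above.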
A direct sum over the first two eigen-modes for the case of Eq.(38) in the supersymmetric limit gives $$a_\mu ^{susy}(zeromodes)=-\frac{\alpha _{em}}{2\pi }$$ (39) Thus the sum of the Standard Model contributions to $`a_\mu `$ from the photon at one loop and from the Z and the W exchanges at one loop is cancelled by the supersymmetric contributions from the neutralino and the chargino exchanges in MSSM at one loop in the supersymmetric limit, i.e., in the supersymmetric limit one has $$a_\mu ^{MSSM}=0$$ (40) This result is consistent with the expectation on general grounds. The details of the derivation of Eq.(39) are given in Appendix B. ## 6 Satisfaction of EDM Constraints Before proceeding to discuss the CP effects on $`a_\mu `$ we describe briefly the EDM constraints on the CP violating phases. As is well known, for the case of the neutron EDM there are three operators that contribute, namely, the electric dipole moment operator, the color dipole moment operator and the purely gluonic dimension six operator. Both the electric and the color operators have three components each, from the chargino, neutralino and gluino contributions. For the electron case we have only the electric dipole moment operator, which has only two components, the chargino and the neutralino ones. Recently it has been pointed out that in addition to the above contributions certain two loop graphs may also contribute significantly in some regions of the parameter space. In our analysis here we include the effects of these contributions as well. However, the effect of these terms is found to be generally very small compared to the other contributions in most of the parameter space we consider. Satisfaction of the EDM constraints can be achieved in a straightforward fashion using the cancellation mechanism. ## 7 Analysis of CP Violating Effects In the above we have given the most general analysis of $`a_\mu `$ within the framework of MSSM with the inclusion of CP violating phases. 
Our results reduce to those of Refs. in the limit when CP violating effects vanish (see Appendix C for details). For the case of the general analysis with phases in MSSM, the number of parameters that enter $`a_\mu `$, along with the number of parameters that enter the EDM constraints which must be imposed on the CP violating phases, is large. For the purpose of a numerical study of the CP violating effects on $`a_\mu `$ we shall confine ourselves to a more constrained set. Here we shall generate the masses of the sparticles at low energy starting with parameters at the GUT scale and evolve these downwards using renormalization group equations. At the GUT scale we use the parameters $`m_0`$, $`m_{1/2}`$, and $`A_0`$, and $`\mu `$ is determined via radiative breaking of the electro-weak symmetry. We set $`\chi _1+\chi _2=0`$ and choose the phases that we vary to consist of $`\theta _\mu `$, $`\alpha _{A_0}`$, and $`\xi _i`$ $`(i=1,2,3)`$. The choice of the above constrained set is simply for the purpose of reducing the number of parameters for the numerical study. We begin our discussion of the numerical results by exhibiting the dependence of $`a_\mu `$ on the CP violating phases but without the imposition of the EDM constraints. The dependence of $`a_\mu `$ on $`\theta _\mu `$ and $`\alpha _{A_0}`$ was already studied in Ref. and we confine ourselves here to the dependence of $`a_\mu `$ on $`\xi _1`$ and $`\xi _2`$. In Fig.2 we exhibit the dependence of $`a_\mu `$ on $`\xi _1`$ and in Fig.3 the dependence of $`a_\mu `$ on $`\xi _2`$. From Figs.2 and 3 we find that $`a_\mu `$ is significantly affected by both $`\xi _1`$ and $`\xi _2`$. However, a comparison of Fig.2 and Fig.3 shows that the dependence of $`a_\mu `$ on $`\xi _2`$ is much stronger than that on $`\xi _1`$. The reason behind this difference is easily understood. 
The relatively weaker dependence on the $`\xi _1`$ phase arises because this phase appears only in the neutralino contribution, while the $`\xi _2`$ phase appears both in the neutralino and in the chargino contributions to $`a_\mu `$. We discuss now the effects of CP violating phases on $`a_\mu `$ under the EDM constraints. In Table 1 we show four points which lie on the curves of Fig.2 and Fig.3. As one can see, the SUSY mass parameters $`m_0`$ and $`m_{1/2}`$ are relatively small (i.e., $`m_0,m_{1/2}<<1TeV`$), the CP phases are large, and there is compatibility with the experimental constraints on the electron EDM and on the neutron EDM as a consequence of the cancellation mechanism. One also finds, on comparing $`a_\mu `$ with and without phases, that the effects of CP violating phases on $`a_\mu `$ are very significant. | Table 1: | | | | | | | | --- | --- | --- | --- | --- | --- | --- | | | $`\theta _{\mu _0}`$ | $`\alpha _{A_0}`$ | $`d_n(10^{-26}ecm)`$ | $`d_e(10^{-27}ecm)`$ | $`[a_\mu (phases)](10^{-9})`$ | $`[a_\mu (0)](10^{-9})`$ | | (1) | 2.35 | $`-.4`$ | $`-3.08`$ | $`-0.86`$ | $`-4.8`$ | 7.45 | | (2) | $`-1.98`$ | $`-0.4`$ | $`-0.34`$ | $`-1.67`$ | $`-7.8`$ | $`-11.7`$ | | (3) | $`-1.2`$ | $`-1.5`$ | $`-1.87`$ | $`-2.24`$ | $`-3.25`$ | $`-5.6`$ | | (4) | $`-2.7`$ | $`-0.4`$ | $`-1.87`$ | $`-0.03`$ | $`-15.5`$ | $`-3.15`$ | Table caption: Parameters other than those exhibited corresponding to the cases (1)-(4) are: (1) $`m_0`$=70, $`m_{1/2}`$=99, $`tan\beta `$=3 , $`|A_0|`$=5.6, $`\xi _1`$=$`-1`$, $`\xi _2`$=1.5, $`\xi _3`$=0.62; (2) $`m_0`$=80, $`m_{1/2}`$=99, $`tan\beta `$=5 , $`|A_0|`$=5.5, $`\xi _1`$=$`-0.8`$, $`\xi _2`$=1.5, $`\xi _3`$=0.95; (3) $`m_0`$=75, $`m_{1/2}`$=132, $`tan\beta `$=4 , $`|A_0|`$=6.6, $`\xi _1`$=$`-1`$, $`\xi _2`$=1.78, $`\xi _3`$=2.74; (4) $`m_0`$=70, $`m_{1/2}`$=99, $`tan\beta `$=6 , $`|A_0|`$=3.2, $`\xi _1`$=0.63, $`\xi _2`$=0.41, $`\xi _3`$=0.47, where all masses are in GeV units and all phases are in rad. 
In Fig.4 we exhibit $`a_\mu `$ as a function of $`m_{1/2}`$, where all points on these trajectories satisfy the experimental constraints on the electron EDM and on the neutron EDM by cancellation. One finds that the magnitude of the supersymmetric electro-weak contribution is comparable to, and can even be larger than, the Standard Model electro-weak contribution as given by Eq.(3). ## 8 Conclusions We have given in this paper a complete one loop analysis of the effects of CP violating phases on $`a_\mu `$ with the most general set of allowed phases in MSSM in this sector. We have checked the absolute normalization of our results by exhibiting the complete cancellation, in the supersymmetric limit, of the supersymmetric result against the Standard Model result including the QED one loop correction to $`a_\mu `$, i.e., $`\alpha _{em}/2\pi `$. A detailed numerical analysis of the CP violating effects on $`a_\mu `$ for the regions which satisfy the EDM constraints is also given. Computation of $`a_\mu `$ under the EDM constraints shows that the supersymmetric electro-weak effects can generate significant contributions to $`a_\mu `$ even with moderate values of $`\mathrm{tan}\beta `$, i.e., $`\mathrm{tan}\beta \sim 3-6`$, which can be comparable to the Standard Model electro-weak correction. Thus supersymmetric CP effects on $`a_\mu `$ are within the realm of observability in the new Brookhaven $`g_\mu -2`$ experiment. Acknowledgements This research was supported in part by NSF grant PHY-9901057. Appendix A: In this appendix we give the explicit derivation of the linear combinations of phases on which $`a_\mu `$ depends. For the chargino contributions the phases are contained in the quantities $`Re(\kappa _\mu U_{i2}^{*}V_{i1}^{*})`$ and $`|\kappa _\mu U_{i2}^{*}|^2+|V_{i1}|^2`$. 
By using $`U=(B_RU_R)^T`$ and $`V=(B_LU_L)^{\dagger }`$ as defined in Sec.4, where $`U_L`$ and $`U_R`$ are functions of the combination $`\theta =\theta _\mu +\xi _2+\chi _1+\chi _2`$, one finds $$U_{i2}^{*}=e^{i\chi _1}U_{R2i}^{*}$$ (41) and $$V_{i1}^{*}=U_{L1i}$$ (42) which leads to $$\kappa _\mu U_{i2}^{*}V_{i1}^{*}=|\kappa _\mu |U_{R2i}^{*}U_{L1i}$$ (43) and $$|\kappa _\mu U_{i2}^{*}|^2+|V_{i1}|^2=|\kappa _\mu |^2|U_{R2i}|^2+|U_{L1i}|^2$$ (44) Eqs.(43) and (44) show that the chargino contribution to $`a_\mu `$ depends on only one combination of phases, i.e., $`\theta =\theta _\mu +\xi _2+\chi _1+\chi _2`$. For the case of the neutralino contribution to $`a_\mu `$, the phases are contained in the quantities $`Re(\eta _{\mu j}^k)`$ and $`X_{\mu j}^k`$. We first consider the quantity $`Re(\eta _{\mu j}^k)`$. It consists of six terms in the product $`\eta _{\mu j}^k`$ $`=([aX_{1j}+bX_{2j}]D_{1k}^{*}-\kappa _\mu X_{3j}D_{2k}^{*})`$ (45) $`(cX_{1j}D_{2k}+\kappa _\mu X_{3j}D_{1k})`$ where $`a`$, $`b`$ and $`c`$ are real numbers independent of the phases. The first term in the expansion of Eq.(45) is $$acX_{1j}^2D_{1k}^{*}D_{2k}=\pm acX_{1j}^2\mathrm{cos}\theta _f\mathrm{sin}\theta _fe^{i\beta _f}$$ (46) where the $`+(-)`$ sign is for $`k=1(2)`$ and where the following definitions are used $$\mathrm{tan}2\theta _f=\frac{2m_\mu [|m_0A_\mu |^2+|\mu R_\mu |^2-2|m_0A_\mu \mu R_\mu |\mathrm{cos}\alpha ]^{1/2}}{M_{\stackrel{~}{\mu }11}^2-M_{\stackrel{~}{\mu }22}^2}$$ (47) Here $`R_\mu =\mathrm{tan}\beta e^{i(\chi _1+\chi _2)}`$ and $`\alpha =\alpha _{A_\mu }+\theta _\mu +\chi _1+\chi _2`$. 
The phase $`\beta _f`$ is defined such that $$\mathrm{cos}\beta _f=\frac{A}{[A^2+B^2]^{1/2}}$$ (48) and $$\mathrm{sin}\beta _f=\frac{B}{[A^2+B^2]^{1/2}}$$ (49) where $`A`$ is defined by $$A=|m_0A_\mu |\mathrm{cos}\alpha _{A_\mu }-|\mu R_\mu ^{*}|\mathrm{cos}(\theta _\mu +\chi _1+\chi _2)$$ (50) and $`B`$ is defined by $$B=|m_0A_\mu |\mathrm{sin}\alpha _{A_\mu }+|\mu R_\mu ^{*}|\mathrm{sin}(\theta _\mu +\chi _1+\chi _2)$$ (51) By using $`X_{1j}=Y_{1j}e^{-i\xi _1/2}`$, where the $`Y_{1j}`$ are functions only of $`\theta ^{'}`$ and $`\mathrm{\Delta }\xi /2`$, one can write the first term as given by Eq.(46) as follows $$acX_{1j}^2D_{1k}^{*}D_{2k}=acY_{1j}^2f_k(\alpha )e^{-i(\xi _1-\beta _f)}$$ (52) where the $`f_k(\alpha )`$ are real functions of $`\alpha `$. By using the definition of $`\beta _f`$ as given by Eqs.(48) and (49), and by taking the real part of Eq.(52), we find that the right hand side of Eq.(52) contains the three combinations $`\theta ^{'}`$, $`\mathrm{\Delta }\xi /2`$ and $`\alpha `$, which come from the first part of the right hand side of Eq.(52), and in addition it contains the following two combinations: $`\alpha _{A_\mu }-\xi _1`$ and $`\theta _\mu +\chi _1+\chi _2+\xi _1`$, which come from the exponent. But the latter two combinations are linear combinations of the first three. Thus the left hand side of Eq.(46) or Eq.(52) depends only on the combinations $`\theta ^{'}`$, $`\mathrm{\Delta }\xi /2`$ and $`\alpha `$. The same analysis can be applied to the other five terms and each one of them gives the same three linear combinations. Next we consider $`X_{\mu j}^k`$. It consists of six terms and the quantities in them which contain phases are $`|X_{3j}|^2`$, $`|X_{1j}|^2(|D_{1k}|^2+4|D_{2k}|^2)`$, $`|X_{2j}|^2|D_{1k}|^2`$, $`|D_{1k}|^2Re(X_{1j}X_{2j}^{*})`$, $`Re(e^{-i\chi _1}X_{3j}X_{1j}^{*}D_{1k}D_{2k}^{*})`$ and $`Re(e^{-i\chi _1}X_{3j}X_{2j}^{*}D_{1k}D_{2k}^{*})`$. The first one of them, i.e. 
$`|X_{3j}|^2`$, can be written in terms of the $`Y`$ matrix as $`|Y_{3j}|^2`$, which depends only on the two combinations $`\theta ^{'}`$ and $`\mathrm{\Delta }\xi /2`$. The second expression can be written as $`|Y_{1j}|^2g_k(\alpha )`$ where the $`g_k(\alpha )`$ are real functions of $`\alpha `$ as defined after Eq.(47). So this term depends on the three combinations $`\theta ^{'}`$, $`\mathrm{\Delta }\xi /2`$ and $`\alpha `$. The third expression is similar to the second one and gives the same combinations. The fourth expression can be written as $`h_k(\alpha )Re(Y_{1j}Y_{2j}^{*})`$ where the $`h_k(\alpha )`$ are real functions of $`\alpha `$. So this term also depends on the same three combinations. The fifth expression can be written as $$Re(e^{-i\chi _1}X_{3j}X_{1j}^{*}D_{1k}D_{2k}^{*})=Re(Y_{3j}Y_{1j}^{*}s_k(\alpha )e^{i(\xi _1-\beta _f)})$$ (53) where the $`s_k(\alpha )`$ are real functions of $`\alpha `$. By treating the exponential term as we did in the first term of $`\eta _{\mu j}^k`$, it gives us two extra combinations besides the usual three. These are $`\alpha _{A_\mu }-\xi _1`$ and $`\theta _\mu +\chi _1+\chi _2+\xi _1`$ which, however, are linear combinations of the usual three. Thus we end up here with the same three combinations. The sixth expression can be written as $$Re(e^{-i\chi _1}X_{3j}X_{2j}^{*}D_{1k}D_{2k}^{*})=Re(Y_{3j}Y_{2j}^{*}s_k(\alpha )e^{i(\frac{\xi _1+\xi _2}{2}-\beta _f)})$$ (54) from which we can identify the usual three combinations, and in addition one has the following combinations in the exponent: $`\alpha _{A_\mu }-\frac{\xi _1+\xi _2}{2}`$ and $`\theta _\mu +\chi _1+\chi _2+\frac{\xi _1+\xi _2}{2}`$. These again are linear combinations of the usual three, and thus we end up with only three phase combinations in the neutralino contribution, i.e., $`\theta ^{'}`$, $`\mathrm{\Delta }\xi /2`$ and $`\alpha `$. Appendix B In this appendix we discuss the supersymmetric limit in the massless sector. 
For this purpose we begin by exhibiting the unitary matrix X that diagonalizes the neutralino mass matrix in the supersymmetric limit, with the eigen-values arranged so that $$X^TM_{\chi ^0}X=diag(0,0,M_Z,M_Z)$$ (55) With the above ordering the unitary matrix X takes the form $$\left(\begin{array}{cccc}\alpha & \beta & \frac{\mathrm{sin}\theta _W}{\sqrt{2}}& i\frac{\mathrm{sin}\theta _W}{\sqrt{2}}\\ \alpha \mathrm{tan}\theta _W& \beta \mathrm{tan}\theta _W& -\frac{\mathrm{cos}\theta _W}{\sqrt{2}}& -i\frac{\mathrm{cos}\theta _W}{\sqrt{2}}\\ \alpha e^{i\chi _1}& -\frac{1}{2}\beta sec^2\theta _We^{i\chi _1}& -\frac{1}{2}e^{i\chi _1}& \frac{i}{2}e^{i\chi _1}\\ \alpha e^{i\chi _2}& -\frac{1}{2}\beta sec^2\theta _We^{i\chi _2}& \frac{1}{2}e^{i\chi _2}& -\frac{i}{2}e^{i\chi _2}\end{array}\right).$$ (56) where $$\alpha =\frac{1}{\sqrt{3+\mathrm{tan}^2\theta _W}},\beta =\frac{1}{\sqrt{1+\mathrm{tan}^2\theta _W+\frac{1}{2}sec^4\theta _W}}$$ (57) Using these results it is easily seen that the sum over the first two neutralino eigen-modes gives $$a_\mu ^{11}(zeromodes)=\frac{\alpha _{EM}}{2\pi \mathrm{sin}^2\theta _W}H\sum _{j=1}^{2}\sum _{k=1}^{2}Re(\eta _{\mu j}^k)(\frac{M_{\chi _j^0}}{m_\mu })\rightarrow 0$$ (58) where we set $`M_{\stackrel{~}{\mu }k}=m_\mu `$ and the factor $`H`$ is defined by $$H=\int _0^1dx\int _0^{1-x}dz\frac{z}{(z-1)^2}$$ (59) Thus in the supersymmetric limit the entire supersymmetric contribution to $`a_\mu ^1`$ from the massless neutralino states comes from $`a_\mu ^{12}`$. To compute this contribution we need the sum $$a_\mu ^{12}=\frac{m_\mu ^2\alpha _{EM}}{4\pi \mathrm{sin}^2\theta _W}\sum _{j=1}^{2}\sum _{k=1}^{2}\frac{1}{M_{\chi _j^0}^2}X_{\mu j}^kI_2(\frac{m_\mu ^2}{M_{\chi _j^0}^2},\frac{M_{\stackrel{~}{\mu _k}}^2}{M_{\chi _j^0}^2})$$ (60) where the sum over j runs only over the first two modes. 
In the supersymmetric limit we set $`M_{\stackrel{~}{\mu }k}^2=m_\mu ^2`$, $`M_{\chi _j^0}\rightarrow 0`$ (j=1,2), and $`x_{\mu j}\equiv \frac{m_\mu ^2}{M_{\chi _j^0}^2}\rightarrow \infty `$ (j=1,2) and $$\frac{m_\mu ^2}{M_{\chi _j^0}^2}I_2(x_{\mu j},x_{\mu j})\rightarrow -\frac{1}{2}$$ (61) Now substitution of the explicit form of the X matrix gives $$\sum _{j=1}^{2}\sum _{k=1}^{2}X_{\mu j}^k=4sin^2\theta _W$$ (62) Use of Eqs. (61) and (62) in Eq.(60) gives $$a_\mu ^{12}(zeromodes)=-\frac{\alpha _{em}}{2\pi }$$ (63) Thus we find that in the supersymmetric limit the exchange of the two massless neutralinos gives a one loop contribution to $`a_\mu `$ which is exactly the negative of the photonic one loop contribution. Thus in the supersymmetric limit the sum of the one loop contributions of the zero modes of the theory cancels. The cancellation provides an absolute check on the normalization of our supersymmetric result in this sector. Appendix C In this section we consider the limit of vanishing CP violating phases and compare our results with those of previous works. We first compare our results with those of Ref.. We consider the chargino contribution first. Using Eq.(2.8) of Ref. and noting that the free part of the Lagrangian density for the complex scalar fields in that work is given by $`\frac{1}{2}(\partial _\mu z^{*}\partial ^\mu z-m^2z^{*}z)`$, we find that our $`K_i`$ and $`L_i`$ are related to the $`A_L^\pm `$ and $`A_R^\pm `$ of Ref. as follows: $$K_{1,2\nu }\rightarrow i\sqrt{2}A_L^{+,-},L_{1,2\nu }\rightarrow i\sqrt{2}A_R^{+,-}$$ (64) Further, our form factors $`F_3(x)`$ and $`F_4(x)`$ are related to the form factors $`F_1`$ and $`F_2`$ of Ref. as follows: $$F_3(x)=F_2(x),F_4(x)=F_1(x)$$ (65) Defining $$g_1^{\stackrel{~}{W}}=2a_\mu ^{22},g_2^{\stackrel{~}{W}}=2a_\mu ^{21}$$ (66) we find that our Eq.(12) in the limit of vanishing CP violating phases is given in the notation of Ref. 
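The limit in Eq.(61) can be seen directly: in the collapsed one-dimensional form, $`xI_2(x,x)=-\int _0^1z[x(1-z)^2/(x(1-z)^2+z)]dz\rightarrow -\int _0^1zdz=-1/2`$ as $`x\rightarrow \infty `$. A numerical sketch (ours; midpoint rule):

```python
def x_times_I2(x, n=20000):
    # x * I_2(x, x), using the collapsed form
    # I_2(x, x) = -\int_0^1 z (1-z)^2 / (x (1-z)^2 + z) dz
    h = 1.0 / n
    s = 0.0
    for i in range(n):
        z = (i + 0.5) * h
        w = (1.0 - z)**2
        s += z * w / (x * w + z)
    return -x * s * h
```

The approach to $`-1/2`$ is slow, of order $`1/\sqrt{x}`$, coming from the region $`1-z\sim 1/\sqrt{x}`$ where the bracket in the integrand departs from 1.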
by $$g_1^{\stackrel{~}{W}}=\frac{m_\mu ^2}{24\pi ^2}\sum _{a=1,2}\frac{A_R^{(a)2}+A_L^{(a)2}}{\stackrel{~}{m}_a^2}F_1(x_a)$$ (67) and similarly our Eq.(10) in the same limit in the notation of Ref. is given by $$g_2^{\stackrel{~}{W}}=\frac{m_\mu }{4\pi ^2}\sum _{a=1,2}\frac{A_R^{(a)}A_L^{(a)}}{\stackrel{~}{m}_a}F_2(x_a)$$ (68) where $$x_a=\frac{\stackrel{~}{m}_\nu ^2}{\stackrel{~}{m}_a^2};a=1,2$$ (69) Eqs. (67) and (68) agree precisely with Eqs.(2.6a) and (2.6b) of Ref. to leading order in $`\mu ^2/\stackrel{~}{m}_a^2`$, taking account of the typo in Eq.(2.6a) where $`A_R^{(a)}`$ should read $`A_R^{(a)2}`$, and noting that $`A_L^{2+,-}`$ is proportional to $`m_\mu ^2/M_W^2`$ and thus does not contribute to leading order. We consider next the neutralino contribution. From the interaction Lagrangian Eq.(2.4) of Ref. we find the transition from our notation to that of Ref. as follows: $$K_{kr}\rightarrow \sqrt{2}i(O_{1r}^{'}B_k^L-C_kO_{2r}^{'}),L_{kr}\rightarrow \sqrt{2}i(O_{2r}^{'}B_k^R+C_kO_{1r}^{'})$$ (70) where we identify $`O^{'}`$ to be $$O^{'}=\left(\begin{array}{cc}\mathrm{cos}\delta & \mathrm{sin}\delta \\ -\mathrm{sin}\delta & \mathrm{cos}\delta \end{array}\right)$$ (71) Noting that our form factors $`F_1(x)`$ and $`F_2(x)`$ are related to the form factors $`G_2(x)`$ and $`G_1(x)`$ of Ref. by $$F_1(x)=G_2(x),F_2(x)=2G_1(x)$$ (72) and defining $`g_1^{\stackrel{~}{Z}}=2a_\mu ^{12}`$ and $`g_2^{\stackrel{~}{Z}}=2a_\mu ^{11}`$, we find that our Eq.(8) gives precisely Eq.(2.10) of Ref., taking account of the typos in Eq.(2.10a), in that $`1/\stackrel{~}{\mu }_k`$ should read $`1/\stackrel{~}{\mu }_k^2`$ and $`G_2(x_{2k})`$ in the same equation should read $`G_1(x_{2k})`$. Further, our Eq.(6) agrees precisely with Eq.(2.12) of Ref.. Next we compare our results with those of Ref.. For this purpose in the chargino sector we identify the $`\stackrel{~}{W}_1`$ and $`\stackrel{~}{W}_2`$ states with the states $`\stackrel{~}{W}^{-}`$ and $`\stackrel{~}{H}^{-}`$ of Ref. 
in order to use Table 1 of Ref. With this identification in the limit of vanishing CP violating phases we find that in the chargino sector our matrices V and U are real and orthogonal and are related to the matrices $`O_1`$ and $`O_2`$ of Ref. as follows $$V_{km}^{*}\rightarrow O_{1mk},U_{km}^{*}\rightarrow O_{2mk}$$ (73) The analysis of Ref. computes only the contribution $`a_\mu ^{21}`$ of $`a_\mu ^{\chi ^{-}}`$ in their Eq.(5). Relating our $`F_3(\eta )`$ to their $`F_{s\nu }(\eta )`$ by $`F_3(\eta )=2F_{s\nu }(\eta )`$, we find that our Eq.(19) can be written in the form $$2a_\mu ^{21}=\frac{m_\mu e^2}{4\pi ^2\mathrm{sin}^2\theta _W}\sum _k\frac{m_\mu }{\sqrt{2}M_km_W\mathrm{cos}\beta }O_{22k}O_{1k1}^TF_{s\nu }(\eta _{\nu k}^{^{\prime }})$$ (74) which is exactly Eq.(5) of Ref. on relating their $`\mathrm{sin}\theta _H`$ to our $`\mathrm{cos}\beta `$ by $`\mathrm{sin}\theta _H=\mathrm{cos}\beta `$. The consistency of the analysis of Ref. with our analysis, however, requires that the sign of the terms with $`M_W`$ in the chargino mass matrix given by Eq.(3b) of Ref. be reversed. To compare our results to those of Ref. in the neutralino sector we note that our $`H_1^0`$ and $`H_2^0`$ states are related to their $`H^0`$ and $`H^{0^{\prime }}`$ states by $`H_1^0=H^0`$ and $`H_2^0=H^{0^{\prime }}`$. In the limit of vanishing CP violating phases, our neutralino and smuon mass matrices become real, and the corresponding diagonalizing matrices X and D become orthogonal and can be identified with the real orthogonal matrices O and S of Ref.: $$X\rightarrow O,D\rightarrow S$$ (75) The consistency of the analysis of Ref. with our analysis, however, requires that the sign of the terms with $`M_Z`$ in the neutralino mass matrix given by Eq.(3a) of Ref. be reversed. The analysis of Ref. calculated only the part $`a_\mu ^{11}`$ in their Eq.(6). To compare the result of $`a_\mu ^{11}`$ of our Eq.(25) with their Eq.(6) we first note that our $`F_1`$ is related to their F by $`F_1(\eta )=F(\eta )`$.
Second we need to identify the fields $`\stackrel{~}{W}_i`$ (i=1,2,3) in Eq.(6) of Ref. in order to use Table 1 of Ref. to write out in detail the interactions of Eq.(6). This identification is as follows: $$\stackrel{~}{W}_1=\stackrel{~}{B}^0,\stackrel{~}{W}_2=\stackrel{~}{W}^0,\stackrel{~}{W}_3=\stackrel{~}{H}^0.$$ (76) Further, in Ref. we identify $`L=1`$ and $`R=2`$ in their Eq.(6), and we need to complete their Table 1, since the term $`g(\mu _L\stackrel{~}{H}^0s_\mu ^R)`$ is missing in Table 1 and one needs it to expand out Eq.(6). Here we find that the entry for the magnitude of this coupling in their Table 1 should be the same as the magnitude of the coupling $`g(\mu _R\stackrel{~}{H}^0s_\mu ^L)`$ listed in Table 1 (see Eqs. (5.1) and (5.4) of Ref.). Using the above correspondence we find that our result for $`2a_\mu ^{11}`$ obtained from our Eq.(25) reproduces exactly Eq.(6) of Ref. in the limit of vanishing CP phases.
# Traversable wormholes from massless conformally coupled scalar fields ## 1 Introduction The equations of general relativity relate the geometry of a spacetime manifold to its matter content. Given a geometry one can find the distribution of mass-energy and momenta that support it. Unless there are restrictions on the features that the matter content can possess, general relativity allows the existence of spacetime geometries in which apparently distant regions of space are close to each other through wormhole connections. The existence or not of traversable wormhole geometries has many implications that could change our way of looking at the structure of spacetime (see for a survey and bibliography on the subject). The analysis of M. Morris and K. Thorne regarding traversable wormhole geometries showed that, in order for the flaring out of geodesics characteristic of a Lorentzian wormhole throat to happen, it is necessary that the matter that supports the wormhole throat be peculiar: it has to violate the Null Energy Condition (NEC) . That is to say, even a null geodesic observer would see negative energy densities on passing the throat. This analysis was originally done with static spherically symmetric configurations, but the NEC violation is a generic property of an arbitrary wormhole throat . It is often (mistakenly) thought that classical matter always satisfies NEC. By contrast, in the quantum regime there are well-known situations in which NEC violations can easily be obtained . For this reason, most investigations regarding wormhole physics are developed within the realm of semiclassical gravity, where the expectation value of the quantum energy-momentum tensor is used as the source for the gravitational field . However, one can easily see that the energy-momentum tensor of a scalar field conformally coupled to gravity can violate NEC even at the classical level . 
We would like to highlight that this energy-momentum tensor is the most natural one for a scalar field at low energies (energies well below the Planck scale, or any other scale below which a scalar field theory might become non-renormalizable), because its matrix elements are finite in every order of renormalized perturbation theory . In flat spacetime, this energy-momentum tensor defines the same four-momentum and Lorentz generators as that associated with a minimally coupled scalar field and, in fact, it was first constructed as an improvement of the latter . Thus, we wish to focus attention on the so-called “new improved energy-momentum tensor” of particle physics. At higher energies, still well below the Planck scale, there may also be other forms of classical violations of the NEC, such as higher derivative theories , Brans–Dicke theory , or even more exotic possibilities.<sup>1</sup><sup>1</sup>1 For instance, we mention the work by H. Ellis in which he considered changing the sign in front of the energy-momentum tensor for a minimally coupled scalar field. Reversing this sign from the usual one, he found classical wormhole solutions, which with hindsight is not surprising since reversing the energy-momentum tensor explicitly violates the energy conditions. This paper is of particular interest since it pre-dates the Morris–Thorne analysis by 15 years. In this paper, we will concentrate on the massless conformally coupled scalar field. We will explicitly show that it can provide us with the flaring out condition characteristic of traversable wormholes. We have analytically solved the Einstein equations for static and spherically symmetric geometries. We find a three-parameter class of exact solutions. These solutions include the Schwarzschild geometry, certain naked singularities, and a collection of traversable wormholes. However, in all these wormhole geometries the effective Newton’s constant has a different sign in the two asymptotic regions. 
At the end of the paper we will briefly discuss some ways of escaping from this somewhat disconcerting conclusion. ## 2 Einstein conformal scalar field solutions In this section, we will describe the exact solutions to the combined system of equations for Einstein gravity and a massless conformally coupled scalar field, in the simple case of static and spherically symmetric configurations. The Einstein equations can be written as $`\kappa G_{\mu \nu }=T_{\mu \nu }`$, where $`G_{\mu \nu }=R_{\mu \nu }-\frac{1}{2}g_{\mu \nu }R`$ is the Einstein tensor, with $`R_{\mu \nu }`$ the Ricci tensor and $`R`$ the scalar curvature. $`T_{\mu \nu }`$ is the energy-momentum tensor of the matter field, and $`\kappa =(8\pi G_N)^{-1}`$, with $`G_N`$ denoting Newton’s constant. For a massless conformal scalar field $`\varphi _c`$, the energy-momentum tensor acquires the form $$T_{\mu \nu }=\nabla _\mu \varphi _c\nabla _\nu \varphi _c-\frac{1}{2}g_{\mu \nu }(\nabla \varphi _c)^2+\frac{1}{6}\left[G_{\mu \nu }\varphi _c^2-2\nabla _\mu (\varphi _c\nabla _\nu \varphi _c)+2g_{\mu \nu }\nabla ^\lambda (\varphi _c\nabla _\lambda \varphi _c)\right],$$ (2.1) with the field satisfying the equation $`(\nabla ^2-\frac{1}{6}R)\varphi _c=0`$. This is the generalization to curved spacetime of the “new improved energy-momentum tensor” more usually invoked in a particle physics context . The key feature of the energy-momentum tensor for a conformal field is that it is traceless, $`T_{\mu \nu }g^{\mu \nu }=0`$, and therefore $`R=0`$. For this reason, we can write the coupled Einstein plus conformal scalar field equations as $`R_{\mu \nu }=`$ $`\left(\kappa -{\displaystyle \frac{1}{6}}\varphi _c^2\right)^{-1}\left({\displaystyle \frac{2}{3}}\nabla _\mu \varphi _c\nabla _\nu \varphi _c-{\displaystyle \frac{1}{6}}g_{\mu \nu }(\nabla \varphi _c)^2-{\displaystyle \frac{1}{3}}\varphi _c\nabla _\mu \nabla _\nu \varphi _c\right),`$ (2.2) $`\nabla ^2\varphi _c=`$ $`0.`$ (2.3) We are interested in static and spherically symmetric solutions of these equations.
In order to find these solutions, we will start by looking for metrics conformally related to the Janis–Newman–Winicour–Wyman (JNWW) static spherically symmetric solutions of the Einstein minimally coupled massless scalar field equations: $`R_{\mu \nu }=`$ $`\kappa ^{-1}\left(\nabla _\mu \varphi _m\nabla _\nu \varphi _m\right),`$ (2.4) $`\nabla ^2\varphi _m=`$ $`0.`$ (2.5) The JNWW solutions can be expressed as $`ds_m^2=-\left(1-{\displaystyle \frac{2\eta }{r}}\right)^{\mathrm{cos}\chi }dt^2+\left(1-{\displaystyle \frac{2\eta }{r}}\right)^{-\mathrm{cos}\chi }dr^2+\left(1-{\displaystyle \frac{2\eta }{r}}\right)^{1-\mathrm{cos}\chi }r^2(d\theta ^2+\mathrm{sin}^2\theta d\mathrm{\Phi }^2),`$ (2.6) $`\varphi _m=\sqrt{{\displaystyle \frac{\kappa }{2}}}\mathrm{sin}\chi \mathrm{ln}\left(1-{\displaystyle \frac{2\eta }{r}}\right).`$ (2.7) The JNWW solutions possess obvious symmetries under $`\chi \rightarrow -\chi `$, (with $`\varphi _m\rightarrow -\varphi _m`$). Less obvious is that by making a coordinate transformation $`r\rightarrow \stackrel{~}{r}=r-2\eta `$, one uncovers an additional symmetry under $`\{\eta ,\chi \}\rightarrow \{-\eta ,\chi +\pi \}`$, (with $`\varphi _m\rightarrow +\varphi _m`$). In view of these symmetries one can without loss of generality take $`\eta \geq 0`$ and $`\chi \in [0,\pi ]`$.
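The coordinate identity behind the second symmetry can be checked numerically. The short sketch below (ours, not part of the original text; variable names are illustrative) verifies that, with $`\stackrel{~}{r}=r-2\eta `$, the metric functions of (2.6) are unchanged under the replacement of $`\{\eta ,\chi \}`$ by $`\{-\eta ,\chi +\pi \}`$:

```python
import math

def gtt(eta, chi, r):
    # |g_tt| of the JNWW metric (2.6): (1 - 2*eta/r)**cos(chi)
    return (1.0 - 2.0 * eta / r) ** math.cos(chi)

def ang(eta, chi, r):
    # angular coefficient: (1 - 2*eta/r)**(1 - cos(chi)) * r**2
    return (1.0 - 2.0 * eta / r) ** (1.0 - math.cos(chi)) * r**2

eta, chi = 1.0, 0.7
for r in (3.0, 5.0, 10.0):
    rt = r - 2.0 * eta                     # shifted radial coordinate
    # profile identity: (1 - 2*eta/r) = (1 + 2*eta/rt)**(-1)
    assert abs((1 - 2 * eta / r) - 1.0 / (1 + 2 * eta / rt)) < 1e-12
    # metric functions unchanged under {eta, chi} -> {-eta, chi + pi}
    assert abs(gtt(eta, chi, r) - gtt(-eta, chi + math.pi, rt)) < 1e-12
    assert abs(ang(eta, chi, r) - ang(-eta, chi + math.pi, rt)) < 1e-9
print("JNWW symmetry verified")
```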
Similar symmetries will be encountered for conformally coupled scalars.<sup>2</sup><sup>2</sup>2 The key to this symmetry is to realise that $$\left(1-\frac{2\eta }{r}\right)=\left(1+\frac{2\eta }{\stackrel{~}{r}}\right)^{-1}.$$ The requirement that a metric conformal to the JNWW metric, $`ds=\mathrm{\Omega }(r)ds_m`$ with $`\mathrm{\Omega }(r)`$ the conformal factor, have a zero scalar curvature \[necessary if it has to be solution of the system of equations (2.2)\], easily provides a second-order differential equation for the conformal factor as a function of $`\varphi _m`$: $$\frac{d^2\mathrm{\Omega }(\varphi _m)}{d\varphi _m^2}=\frac{1}{6\kappa }\mathrm{\Omega }(\varphi _m).$$ (2.8) Its solutions can be parametrized in the form $$\mathrm{\Omega }=\alpha _+\mathrm{exp}\left(+\varphi _m/\sqrt{6\kappa }\right)+\alpha _{-}\mathrm{exp}\left(-\varphi _m/\sqrt{6\kappa }\right),$$ (2.9) with $`\alpha _+`$ and $`\alpha _{-}`$ two real constants. The equation $`\nabla ^2\varphi _c=0`$ can now be integrated yielding $$\varphi _c=A\frac{\sqrt{6\kappa }}{4\alpha _+\alpha _{-}}\left[\frac{\alpha _+\mathrm{exp}\left(+\varphi _m/\sqrt{6\kappa }\right)-\alpha _{-}\mathrm{exp}\left(-\varphi _m/\sqrt{6\kappa }\right)}{\alpha _+\mathrm{exp}\left(+\varphi _m/\sqrt{6\kappa }\right)+\alpha _{-}\mathrm{exp}\left(-\varphi _m/\sqrt{6\kappa }\right)}\right]+B,$$ (2.10) where $`A`$ and $`B`$ are two additional integration constants. In order that the metric and conformal scalar field just found be solutions of the whole set of equations (2.2), the four integration constants, $`\alpha _+`$, $`\alpha _{-}`$, $`A`$, and $`B`$, must be inter-related in a specific way. After a little algebra the equation for the $`tt`$ component in (2.2) implies the following relations<sup>3</sup><sup>3</sup>3 In view of the assumed static and spherical symmetries, the Einstein equations provide only three constraints, and by the contracted Bianchi identities only two of these are independent.
Thus it is only necessary to consider the trace $`R`$ and the $`tt`$ component $`R_{\widehat{t}\widehat{t}}`$ to guarantee a solution of the entire tensor equation. $`AB\alpha _+\alpha _{-}=0,`$ (2.11) $`-\alpha _+^2\alpha _{-}^2\left(1-{\displaystyle \frac{B^2}{6\kappa }}\right)+{\displaystyle \frac{A^2}{16}}=0.`$ (2.12) Therefore, we have two options: Case i) $`B=0`$, $`A=\pm 4\alpha _+\alpha _{-}`$; $`\mathrm{\Omega }=\alpha _+\mathrm{exp}\left(+\varphi _m/\sqrt{6\kappa }\right)+\alpha _{-}\mathrm{exp}\left(-\varphi _m/\sqrt{6\kappa }\right),`$ (2.13) $`\varphi _c=\pm \sqrt{6\kappa }\left[{\displaystyle \frac{\alpha _+\mathrm{exp}\left(+\varphi _m/\sqrt{6\kappa }\right)-\alpha _{-}\mathrm{exp}\left(-\varphi _m/\sqrt{6\kappa }\right)}{\alpha _+\mathrm{exp}\left(+\varphi _m/\sqrt{6\kappa }\right)+\alpha _{-}\mathrm{exp}\left(-\varphi _m/\sqrt{6\kappa }\right)}}\right].`$ (2.14) Case ii) $`A=0`$, $`B=\pm \sqrt{6\kappa }`$; $`\mathrm{\Omega }=\alpha _+\mathrm{exp}\left(+\varphi _m/\sqrt{6\kappa }\right)+\alpha _{-}\mathrm{exp}\left(-\varphi _m/\sqrt{6\kappa }\right),`$ (2.15) $`\varphi _c=\pm \sqrt{6\kappa }.`$ (2.16) Notice that these two branches of solutions intersect when either $`\alpha _+`$ or $`\alpha _{-}`$ is equal to zero. The set of solutions we have found is a generalization of the solutions found by Froyland , and some time later by Agnese and La Camera . Indeed, in the case $`\alpha _+=\alpha _{-}`$ the conformal factor becomes $`\mathrm{\Omega }=\mathrm{cosh}(\varphi _m/\sqrt{6\kappa })`$, (we drop an unimportant constant factor), and the field $`\varphi _c=\pm \sqrt{6\kappa }\mathrm{tanh}(\varphi _m/\sqrt{6\kappa })`$, in agreement with the expression given by Agnese and La Camera. The Froyland solution is in fact identical to that of Agnese–La Camera, though this is not obvious because Froyland chose to work in Schwarzschild coordinates.
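As a sanity check (ours, not the authors’), the following sketch verifies by finite differences that the parametrization (2.9) solves a conformal-factor equation of the form $`d^2\mathrm{\Omega }/d\varphi _m^2=\mathrm{\Omega }/(6\kappa )`$, the reading of (2.8) consistent with (2.9), and that with $`B=0`$ the quadratic constraint is satisfied by $`A=\pm 4\alpha _+\alpha _{-}`$, as in Case i:

```python
import math

kappa = 0.3
ap, am = 1.3, -0.6          # alpha_+, alpha_- (arbitrary test values)
s = math.sqrt(6.0 * kappa)

def Omega(phi):
    # conformal factor (2.9)
    return ap * math.exp(phi / s) + am * math.exp(-phi / s)

# finite-difference second derivative matches Omega / (6 kappa)
h = 1e-4
for phi in (-0.8, 0.0, 1.1):
    d2 = (Omega(phi + h) - 2 * Omega(phi) + Omega(phi - h)) / h**2
    assert abs(d2 - Omega(phi) / (6 * kappa)) < 1e-5

# with B = 0 the constraint is solved by A = +/- 4 alpha_+ alpha_- (Case i)
A = 4 * ap * am
assert abs(-(ap * am) ** 2 + A**2 / 16.0) < 1e-12
print("conformal-factor checks passed")
```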
Because of this, Froyland could only provide an implicit (rather than explicit) solution.<sup>4</sup><sup>4</sup>4 That is, Froyland calculated $`\mathcal{R}(\varphi _c)`$, which in Schwarzschild coordinates is not analytically invertible to provide $`\varphi _c(\mathcal{R})`$. The coordinate system chosen by Agnese and La Camera is much better behaved in this regard and $`\varphi _c(r)`$ can be explicitly calculated as we have seen above. The trade-off is that whereas $`\mathcal{R}(r)`$ can be written down explicitly, there is no way of analytically inverting this function to get $`r(\mathcal{R})`$. Let us now analyze the different behaviours of these solutions. For this task, it is convenient to look at the conformal factor as a function of $`r`$, $$\mathrm{\Omega }(r)=\alpha _+\left(1-\frac{2\eta }{r}\right)^{+\frac{\mathrm{sin}\chi }{2\sqrt{3}}}+\alpha _{-}\left(1-\frac{2\eta }{r}\right)^{-\frac{\mathrm{sin}\chi }{2\sqrt{3}}}.$$ (2.17) In the same way, as a function of $`r`$, the Schwarzschild radial coordinate $`\mathcal{R}`$ has the form $$\mathcal{R}(r)=r\left(1-\frac{2\eta }{r}\right)^{\frac{1-\mathrm{cos}\chi }{2}}\mathrm{\Omega }(r).$$ (2.18) If we define an angle $`\mathrm{\Delta }`$ by $`\mathrm{tan}(\mathrm{\Delta }/2)=(\alpha _+-\alpha _{-})/(\alpha _++\alpha _{-})`$, then the domain $`\mathrm{\Delta }\in (-\pi ,\pi ]`$ exhausts all possible metric configurations, as a constant overall factor in the metric can be absorbed in the definition of coordinates. We have a three-parameter family of solutions depending on $`\eta `$, the angle $`\mathrm{\Delta }`$, and the angle $`\chi \in (-\pi ,\pi ]`$. In fig. 1 we have drawn the parameter space as a square. Indeed, parallel edges are identified, so the parameter space is an orbifold (a two-torus subjected to symmetry identifications). In fact, the solution space is invariant if we change $`\{\eta ,\chi ,\mathrm{\Delta }\}`$ to $`\{\eta ,-\chi ,-\mathrm{\Delta }\}`$.
Furthermore, the coordinate transformation $`r\rightarrow \stackrel{~}{r}=r-2\eta `$ can now be used to deduce an invariance under $`\{\eta ,\chi ,\mathrm{\Delta }\}\rightarrow \{-\eta ,\chi +\pi ,-\mathrm{\Delta }\}`$. Combining the two symmetries, we deduce an invariance under $`\{\eta ,\chi ,\mathrm{\Delta }\}\rightarrow \{-\eta ,\pi -\chi ,\mathrm{\Delta }\}`$. Thus without loss of generality we only have to deal with the region $`\eta \geq 0`$ with $`\chi \in [0,\pi ]`$. Also, we need only consider the geometries in which $`\mathrm{\Delta }\neq \pi `$ because, for that value, the geometries do not have an asymptotically flat region when $`r`$ approaches infinity.<sup>5</sup><sup>5</sup>5 We mention in particular that the physical scale length, effective Newton constant, and physical mass of the spacetime can be isolated from a weak-field expansion near spatial infinity. We find $$G_{\mathrm{eff}}M=\eta \left(\mathrm{cos}\chi +\mathrm{tan}\left[\frac{\mathrm{\Delta }}{2}\right]\frac{\mathrm{sin}\chi }{\sqrt{3}}\right).$$ We note that this scale length is invariant under all the symmetries mentioned above (as it must be). The effective Newton constant is $$G_{\mathrm{eff}}=\frac{1}{8\pi (\kappa -\frac{1}{6}\varphi _{\mathrm{\infty }}^2)}=\frac{1}{8\pi \kappa }\left(1-\mathrm{tan}^2\frac{\mathrm{\Delta }}{2}\right)^{-1}.$$ Thus $`G_{\mathrm{eff}}`$, $`M`$, and $`G_{\mathrm{eff}}M`$ are separately invariants of the symmetries discussed above. For the objectionable value $`\mathrm{\Delta }=\pi `$, the lack of an asymptotically flat region is reflected in an infinite physical mass. For the rest of parameter space, an analysis of the behaviour of the $`tt`$ component of the Ricci tensor ($`R_{\widehat{t}\widehat{t}}`$ in an orthonormal coordinate basis) as $`r`$ approaches $`2\eta `$ shows that it diverges for every point in parameter space except when $`\chi =0`$, $`\{\chi =\frac{\pi }{3},\mathrm{\Delta }\neq \frac{\pi }{2}\}`$, $`\{\chi =\frac{2\pi }{3},\mathrm{\Delta }=\frac{\pi }{2}\}`$, or $`\chi =\pi `$.
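A quick numeric check (ours) that the scale-length formula of footnote 5 behaves as claimed: at $`\chi =\pi /3`$ it reduces to $`(\eta /2)(1+\mathrm{tan}(\mathrm{\Delta }/2))`$, the value quoted in eq. (3.26) below, and at $`\chi =\pi `$ it gives $`G_{\mathrm{eff}}M=-\eta `$, the negative mass Schwarzschild case:

```python
import math

def scale_length(eta, chi, Delta):
    # G_eff * M from footnote 5
    return eta * (math.cos(chi)
                  + math.tan(Delta / 2.0) * math.sin(chi) / math.sqrt(3.0))

eta = 2.0
for Delta in (-2.0, -1.0, 0.5):
    # at chi = pi/3: sin(pi/3)/sqrt(3) = 1/2, so this is (eta/2)(1 + tan(Delta/2))
    lhs = scale_length(eta, math.pi / 3.0, Delta)
    rhs = 0.5 * eta * (1.0 + math.tan(Delta / 2.0))
    assert abs(lhs - rhs) < 1e-12

# chi = pi: negative-mass Schwarzschild, G_eff M = -eta
assert abs(scale_length(eta, math.pi, 0.3) + eta) < 1e-12
print("scale-length checks passed")
```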
Thus, except for these parameter values, we have geometries with a naked curvature singularity at $`r=2\eta `$. Among these singular geometries, those with $`0<\chi <\frac{\pi }{3}`$ and $`\mathrm{\Delta }\neq \frac{\pi }{2}`$ deserve additional attention. For them, the Schwarzschild radial coordinate blows up when approaching $`r=2\eta `$. They are geometries with wormhole shape, but with something “strange” in the other “asymptotic” region. We can easily check that the proper radial distance between every point $`r>2\eta `$ and $`r=2\eta `$ is finite. Also the proper volume beyond every sphere at finite $`r>2\eta `$ is itself finite, even though the proper area of the spherical sections diverges as one approaches $`r=2\eta `$. Therefore, although one certainly encounters a flare-out (a wormhole throat) before reaching $`r=2\eta `$, we cannot speak properly of another asymptotic region.<sup>6</sup><sup>6</sup>6 Thus these geometries are certainly Lorentzian wormholes, and could even be called “traversable in principle”, but because of the nasty behaviour on the other side of the wormhole throat, they do not deserve to be called “traversable in practice”. The first non-singular case is $`\chi =0`$. In this case we recover the Schwarzschild black hole geometry. This geometry, of course, does not have a curvature singularity at $`r=2\eta `$, just a coordinate singularity. It is instead singular at $`r=0`$. For the special cases $`\{\chi =\frac{\pi }{3},\mathrm{\Delta }\neq \frac{\pi }{2}\}`$ there exist situations in which the Schwarzschild radial coordinate $`\mathcal{R}`$ and the gravitational potential $`g_{tt}`$ go to non-zero constants when approaching $`r=2\eta `$. This suggests that we might be able to extend the geometry beyond $`r=2\eta `$. We leave for the next section the analysis of how this extension can be done, showing that there exist genuine wormhole solutions.
In the case $`\{\chi =\frac{2\pi }{3},\mathrm{\Delta }=\frac{\pi }{2}\}`$ we can write the metric as $$ds^2=-dt^2+\left(1-\frac{2\eta }{r}\right)dr^2+(r-2\eta )^2(d\theta ^2+\mathrm{sin}^2\theta d\mathrm{\Phi }^2).$$ (2.19) Writing $`\stackrel{~}{r}=r-2\eta `$, so that $$ds^2=-dt^2+\frac{d\stackrel{~}{r}^2}{\left(1+\frac{2\eta }{\stackrel{~}{r}}\right)}+\stackrel{~}{r}^2(d\theta ^2+\mathrm{sin}^2\theta d\mathrm{\Phi }^2),$$ (2.20) a brief calculation shows that this geometry is also singular at $`\stackrel{~}{r}=0`$, $`r=2\eta `$, even though $`R_{\widehat{t}\widehat{t}}`$ remains finite there. (The other Ricci components, $`R_{\widehat{r}\widehat{r}}`$ and $`R_{\widehat{\theta }\widehat{\theta }}`$, diverge as $`\stackrel{~}{r}\rightarrow 0`$.) Finally, the case $`\chi =\pi `$ corresponds to a negative mass Schwarzschild geometry. It has a naked curvature singularity at $`r=2\eta `$, but after the coordinate change $`r\rightarrow \stackrel{~}{r}=r-2\eta `$, the naked curvature singularity moves to $`\stackrel{~}{r}=0`$, and the character of the geometry becomes obvious. ## 3 Traversable wormhole solutions This section is devoted to the analysis of the case $`\{\chi =\frac{\pi }{3},\mathrm{\Delta }\neq \frac{\pi }{2}\}`$. For this task, it is convenient to change to isotropic coordinates $$r=\overline{r}\left(1+\frac{\eta }{2\overline{r}}\right)^2.$$ (3.21) With the new radial coordinate running from $`\overline{r}=\eta /2`$ to $`\mathrm{\infty }`$ we cover the same portion of the metric manifold that $`r\in [2\eta ,\mathrm{\infty })`$ did before. However, can the manifold be analytically extended beyond $`\overline{r}=\eta /2`$? The answer is yes.
We can write the metric in isotropic coordinates as $$ds^2=\left[\alpha _+\frac{\left(1-\frac{\eta }{2\overline{r}}\right)}{\left(1+\frac{\eta }{2\overline{r}}\right)}+\alpha _{-}\right]^2\left[-dt^2+\left(1+\frac{\eta }{2\overline{r}}\right)^4[d\overline{r}^2+\overline{r}^2(d\theta ^2+\mathrm{sin}^2\theta d\mathrm{\Phi }^2)]\right],$$ (3.22) noticing that it is perfectly well behaved at $`\overline{r}=\eta /2`$. We want to point out that for $`0<\overline{r}<\eta /2`$ the conformal factor $`\mathrm{\Omega }^2`$ is real and negative, while the JNWW solution is ill-behaved in the sense that the metric $`ds_m^2`$ has opposite signature to the usual. Nevertheless, the metric $`ds^2`$ is perfectly well behaved. Thus, strictly speaking, only the region $`\overline{r}>\eta /2`$ is conformally related with its corresponding JNWW solution. Among these solutions, in those with $`\mathrm{\Delta }\in (-\pi ,0)`$ the Schwarzschild radial coordinate $`\mathcal{R}(\overline{r})`$ reaches a minimum value at $`\overline{r}=(\eta /2)|\mathrm{tan}(\mathrm{\Delta }/2)|^{1/2}`$, and comes back to infinity at $`\overline{r}=0`$, showing up as another asymptotically flat region. As well as this, the $`tt`$ component of the metric is everywhere non-zero. Therefore, the region in parameter space $`\{\chi =\frac{\pi }{3},\mathrm{\Delta }\in (-\pi ,0)\}`$ represents genuine traversable wormhole solutions, with the throat of the wormhole being located at $`\overline{r}=(\eta /2)|\mathrm{tan}(\mathrm{\Delta }/2)|^{1/2}`$. For completeness, in the solution with $`\mathrm{\Delta }=0`$, $`\mathcal{R}(\overline{r})`$ reaches a minimum at $`\overline{r}=0`$, but this sphere is at an infinite proper distance from every other $`\overline{r}`$, so we can conclude that this geometry is a “cornucopia” (tube without end). ($`\mathrm{\Delta }=\pi `$ represents the reversed cornucopia with no asymptotic region at $`\overline{r}=\mathrm{\infty }`$).
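The quoted throat location can be confirmed numerically from the isotropic form (3.22). The sketch below (ours; a crude grid scan with sample values $`\alpha _+=1`$, $`\alpha _{-}=3`$, i.e. $`\mathrm{tan}(\mathrm{\Delta }/2)=-1/2`$) locates the minimum of the Schwarzschild radial coordinate and compares it with $`(\eta /2)|\mathrm{tan}(\mathrm{\Delta }/2)|^{1/2}`$:

```python
import math

eta, ap, am = 1.0, 1.0, 3.0             # sample wormhole parameters
tan_half = (ap - am) / (ap + am)         # tan(Delta/2) = -1/2, Delta in (-pi, 0)

def R(rb):
    # Schwarzschild radial coordinate read off the isotropic metric (3.22)
    conf = ap * (1 - eta / (2 * rb)) / (1 + eta / (2 * rb)) + am
    return abs(conf) * (1 + eta / (2 * rb)) ** 2 * rb

# locate the minimum of R by a fine grid scan
rbs = [0.05 + 1e-4 * i for i in range(30000)]
rb_min = min(rbs, key=R)
throat = 0.5 * eta * math.sqrt(abs(tan_half))   # (eta/2)|tan(Delta/2)|^(1/2)
assert abs(rb_min - throat) < 1e-3
print("throat at r_bar =", round(rb_min, 4))
```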
For the remaining values of $`\mathrm{\Delta }\in (0,\pi )`$ the geometry pinches off (the conformal prefactor in equation (3.22) goes to zero) at finite positive radius $`\overline{r}=(\eta /2)\mathrm{tan}(\mathrm{\Delta }/2)`$. We have seen that a scalar field coupled conformally to gravity can support wormhole geometries. On these wormhole configurations the conformal scalar field takes the following form $$\varphi _c=\pm \sqrt{6\kappa }\frac{\alpha _+\left(1-\frac{\eta }{2\overline{r}}\right)-\alpha _{-}\left(1+\frac{\eta }{2\overline{r}}\right)}{\alpha _+\left(1-\frac{\eta }{2\overline{r}}\right)+\alpha _{-}\left(1+\frac{\eta }{2\overline{r}}\right)}.$$ (3.23) It is a monotonically increasing or decreasing function (depending on the overall sign) between one asymptotic region and the other, taking the values $$\varphi _c|_{\mathrm{asym}_1}=\pm \sqrt{6\kappa }\mathrm{tan}\frac{\mathrm{\Delta }}{2},\mathrm{and}\varphi _c|_{\mathrm{asym}_2}=\pm \frac{\sqrt{6\kappa }}{\mathrm{tan}\frac{\mathrm{\Delta }}{2}}.$$ (3.24) This monotonic behaviour for the scalar field causes an asymmetry between the asymptotic regions. In fact, we can appreciate the real importance of this asymmetry by looking at the effective Newton’s constant, $`G_{\mathrm{eff}}=[8\pi (\kappa -\frac{1}{6}\varphi _c^2)]^{-1}`$, that can be defined on these systems. It not only reaches a different value on each asymptotic region, $$G_{\mathrm{eff}}|_{\mathrm{asym}_1}=\frac{1}{8\pi \kappa }\frac{1}{(1-\mathrm{tan}^2\frac{\mathrm{\Delta }}{2})},G_{\mathrm{eff}}|_{\mathrm{asym}_2}=-\frac{1}{8\pi \kappa }\frac{\mathrm{tan}^2\frac{\mathrm{\Delta }}{2}}{(1-\mathrm{tan}^2\frac{\mathrm{\Delta }}{2})},$$ (3.25) it also reaches a different sign. From the point of view of the asymptotic region with a positive effective Newton’s constant, the wormhole throat is located in the region in which $`G_{\mathrm{eff}}`$ has already changed its sign.
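The asymptotic values (3.24) and the sign flip of the effective Newton’s constant can likewise be verified numerically; the following sketch (ours, with sample values $`\alpha _+=1`$, $`\alpha _{-}=3`$) evaluates the scalar field of (3.23) deep into both asymptotic regions:

```python
import math

kappa, eta = 1.0, 1.0
ap, am = 1.0, 3.0                       # gives tan(Delta/2) = -1/2
tan_half = (ap - am) / (ap + am)
s6k = math.sqrt(6.0 * kappa)

def phi_c(rb):
    # conformal scalar (3.23), upper sign
    up = ap * (1 - eta / (2 * rb)) - am * (1 + eta / (2 * rb))
    dn = ap * (1 - eta / (2 * rb)) + am * (1 + eta / (2 * rb))
    return s6k * up / dn

# asymptotic values, eq. (3.24)
assert abs(phi_c(1e8) - s6k * tan_half) < 1e-6
assert abs(phi_c(1e-8) - s6k / tan_half) < 1e-6

# kappa - phi^2/6 changes sign between the two asymptotic regions,
# so G_eff = [8 pi (kappa - phi^2/6)]^(-1) has opposite signs there
k1 = kappa - phi_c(1e8) ** 2 / 6.0
k2 = kappa - phi_c(1e-8) ** 2 / 6.0
assert k1 > 0 and k2 < 0
print("G_eff sign flip confirmed")
```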
This asymmetry is reflected also in the values of the asymptotic masses measured on the two sides of the wormhole throat. The scale lengths are $$(G_{\mathrm{eff}}M)|_{\mathrm{asym}_1}=\frac{\eta }{2}\left(1+\mathrm{tan}\frac{\mathrm{\Delta }}{2}\right),(G_{\mathrm{eff}}M)|_{\mathrm{asym}_2}=\frac{\eta }{2}\left(1+\frac{1}{\mathrm{tan}\frac{\mathrm{\Delta }}{2}}\right).$$ (3.26) The observers in the asymptotic region with a positive $`G_{\mathrm{eff}}`$ see a positive asymptotic mass, while those in the other asymptotic region see a negative $`G_{\mathrm{eff}}`$ and a positive asymptotic mass. The case $`\mathrm{\Delta }=-\frac{\pi }{2}`$ corresponds to $`\alpha _+=0`$, so $`ds^2=-dt^2+\left(1+{\displaystyle \frac{\eta }{2\overline{r}}}\right)^4[d\overline{r}^2+\overline{r}^2(d\theta ^2+\mathrm{sin}^2\theta d\mathrm{\Phi }^2)],`$ (3.27) $`\varphi _c=\pm \sqrt{6\kappa }.`$ (3.28) This describes a limiting symmetric wormhole with $`\kappa -\frac{1}{6}\varphi _c^2=0`$ everywhere (so the effective Newton’s constant formally diverges) and both asymptotic masses equal to zero. These asymmetric features are disappointing because we would like to have wormholes connecting equivalent regions of space. At this point we have to say that while a conformal scalar field can provide us with the “flaring out” condition for geodesics, it has the drawback of reversing the sign of the effective Newton’s constant in the other asymptotic region.
Inspecting the expression for the Laplacian of a scalar field in a static and spherically symmetric geometry $$\nabla ^2\varphi _c=\frac{1}{\sqrt{-g}}\partial _r\left[\sqrt{-g}g^{rr}\partial _r\varphi _c\right],$$ (4.29) we see that in order for the scalar field to be able to change its monotonic behaviour in a non-singular geometry, there must be some points at which $`\nabla ^2\varphi _c\neq 0`$. This suggests that in more general situations than those analyzed in this paper we could find traversable wormhole solutions with no asymmetry between the asymptotic regions. For example, if we add to our system a quantity of normal matter with a positive trace for the energy-momentum tensor ($`T>0`$), and we place this normal matter in two thin spherical shells (see fig. 2), we can join smoothly an inner region with the geometry of the symmetric wormhole solution (3.27) to two outer asymptotic geometries, both with positive effective Newton’s constants.<sup>7</sup><sup>7</sup>7 Thus we are considering geometries that are built piecewise out of segments of the solutions considered in this paper, with junction conditions applied at the thin shells. The requirement that $`T>0`$ in the shells translates into a localized negative scalar curvature, $`R<0`$, necessary for bringing down the value of the scalar field from that in the inner region. Finally, we conclude by emphasising that it is not so much the occurrence of classical wormholes in and of themselves that is the main surprise of this paper. Rather, what is truly surprising here is that such an inoffensive and physically well-motivated classical source, the “new improved stress-energy tensor”, leads to classical traversable wormholes. ## Acknowledgments The research of CB was supported by the Spanish Ministry of Education and Culture (MEC). MV was supported by the US Department of Energy.
# Ionisation, shocks and evolution of the emission line gas of distant 3CR radio galaxies ## 1 Introduction Powerful high redshift ($`z\gtrsim 1`$) radio galaxies display a number of remarkable characteristics. Their near infrared emission shows them to be amongst the most massive galaxies known in the early Universe and to have radial light profiles following de Vaucouleurs law, indicating that they are fully formed giant elliptical galaxies \[Best et al. 1998\]. At optical and rest–frame ultraviolet (UV) wavelengths, however, the galaxies have very irregular morphologies, frequently showing a strong excess of emission aligned along the axis of the radio source \[McCarthy et al. 1987, Chambers et al. 1987\]. Their emission line properties are equally spectacular; luminous emission line regions surround the radio galaxies, extending for several tens of kiloparsecs or more (e.g. McCarthy et al. 1995), with velocity shears up to a few hundred km s<sup>-1</sup> and line widths as high as 1500 km s<sup>-1</sup> (e.g. McCarthy et al. 1996). The emission line regions are characterised by high ionisation spectra including strong emission from species such as NeV and CIV. The origin of this luminous emission line gas, its kinematics and ionisation, and their connection to the radio source phenomenon remain important astrophysical questions. Over the past few years we have been carrying out a detailed investigation of a sample of 28 radio galaxies with redshifts $`z\sim 1`$ from the revised 3CR catalogue \[Laing et al. 1983\], using optical imaging with the Hubble Space Telescope (HST), high resolution radio interferometry with the Very Large Array (VLA) and near infrared imaging with UKIRT \[Longair et al. 1995, Best et al. 1996, Best et al. 1997, Best et al. 1998\]. Here, and in an accompanying paper (Best et al.
1999; hereafter Paper 1), results are presented from a deep spectroscopic campaign using the William Herschel Telescope (WHT) on 14 of these radio galaxies, to study in detail the emission line gas. The reader is referred to Paper 1 for details of the sample selection, data reduction, the reduced one dimensional spectra and tabulated line fluxes, and the distributions of the intensity, velocity and line width of the emission line gas as a function of position along the slit for each galaxy. The current paper is concerned with investigating the galaxy to galaxy variations in the ionisation and kinematics of the emission line gas, and comparing these variations with the radio and optical properties. The layout is as follows. In Section 2 the photoionisation and shock ionisation models are discussed and their predictions are compared with the observed emission line ratios of the galaxies on an emission line diagnostic diagram. The kinematics of the gas are studied in Section 3. In Section 4 the results are compared with those of a low redshift sample of galaxies and a scenario is proposed to explain the observed properties of the emission line gas at low and high redshifts. The spectroscopic properties of the distant galaxies are compared to their optical properties, and the implications of these results for the continuum alignment effect are discussed. Conclusions are drawn in Section 5. Throughout the paper, values for the cosmological parameters of $`\mathrm{\Omega }=1`$ and $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup> are assumed. ## 2 The ionisation of the extended emission line regions Robinson et al. \[Robinson et al. 1987\] showed that for most low redshift ($`z\lesssim 0.1`$) radio galaxies the emission line spectrum can be explained adequately if it is assumed that the gas is photoionised by a power–law emission source such as that provided by an active galactic nucleus (AGN). Baum et al. \[Baum et al.
1992\] obtained similar results for a sample of radio galaxies out to redshift 0.2, although they noted that photoionisation models could not reproduce \[NII\] 6584 /H$`\alpha `$ ratios as high as were observed for some galaxies; they suggested that localised sources of heating and ionisation, for example shocks or the UV continuum of surrounding hot gas (e.g. Heckman et al. 1989), may play a role in some radio sources. McCarthy \[McCarthy 1993\] constructed a composite spectrum from a large sample of radio galaxies with redshifts $`0.1<z<3`$; he showed that the emission line spectra of these more distant (more radio powerful) sources are also consistent with being photoionised, but as Villar–Martín et al. \[Villar–Martín et al. 1997\] argued, this composite spectrum is dominated by the few most highly ionised galaxies and so this result does not necessarily apply to the population as a whole. This photoionisation mechanism is in complete agreement with the currently popular orientation–based unification schemes of radio galaxies and radio loud quasars (e.g. Barthel 1989) in which all radio galaxies should host a powerful obscured active galactic nucleus, supplying a large flux of anisotropically emitted ionising photons. The presence of such an obscured quasar nucleus is also indicated by the detection of spatially extended polarised emission and broad permitted lines in polarised light, due to scattering of the AGN light by electrons or dust (see Antonucci 1993 for a review). Photoionisation is not the only story, however. As reviewed by Binette et al. \[Binette et al. 1996\], simple photoionisation models fail to reproduce some important features of the emission line spectra of the narrow line regions of active galaxies. In particular, the strengths of many high excitation lines (e.g. 
\[NeV\] 3426, CIV 1549, and high ionisation Fe lines) are under–predicted by factors as large as 10, the electron temperatures derived by simple photoionisation models are too low when compared with those inferred from the line ratio \[OIII\] 4363 / \[OIII\] 5007, and photoionisation models alone cannot reproduce the large observed scatter in the HeII 4686 / H$`\beta `$ ratio. Moreover, a significant fraction of radio galaxies show indications of interactions between the radio jets and the surrounding emission line gas, with the radio source shocks determining the morphology and kinematics of the gas. Detailed studies of individual sources (e.g. PKS 2250$`-`$41, $`z=0.31`$, Clark et al. 1997, Villar–Martín et al. 1999; 3C171, $`z=0.24`$, Clark et al. 1998) have shown that in some regions of the source the shocks can also dominate the ionisation; for example, minima in the ionisation state are observed coincident with the radio hotspots, and an anticorrelation is found between the ionisation state of the extended gas and its (jet shock broadened) line width. At high redshifts, $`z\gtrsim 0.6`$, interactions between the radio jets and the gas are readily apparent from the kinematics of the ionised gas (see Section 3). What has not been clear, however, is to what extent the shocks play a role in the ionisation of these high redshift sources. ### 2.1 The CIII\] 1909 / CII\] 2326 vs \[NeIII\] 3869 / \[NeV\] 3426 diagram Line ratio diagnostic diagrams, pioneered by Baldwin et al. \[Baldwin et al. 1981\], provide a powerful tool for investigating the ionisation mechanism of emission line gas. They have been widely used to distinguish the extended emission line regions of active nuclei from HII regions and planetary nebulae, and in recent years also between shock and photoionisation models for AGN. 
Standard emission line diagnostics at rest–frame optical wavelengths are shifted into the near–infrared wavebands for redshifts $`z\gtrsim 1`$, and so cannot easily be used for high redshift radio galaxies. In the past couple of years, however, new diagnostic diagrams have been constructed for rest–frame UV emission lines redshifted into the optical window \[Villar–Martín et al. 1997, Allen et al. 1998\]. The emission line pairs CIII\] 1909 & CII\] 2326 and \[NeIII\] 3869 & \[NeV\] 3426 are well–suited for use as line ratio diagnostics for distant radio galaxies for a number of reasons. All four of these fairly high excitation lines are relatively strong in AGN spectra, and so their fluxes can be determined with sufficient accuracy even for high redshift radio galaxies. The two lines in each pair involve the same element, and therefore there is no dependence of the line ratios on metallicity or abundances; they are also close in wavelength, and so the effects of differential reddening or calibration errors are minimised. Perhaps most importantly, the predictions of shock and photoionisation models for these line ratios are significantly different. The line ratio CIII\] 1909 / CII\] 2326 was determined for 13 of the 14 sources in the sample from the data presented in Paper 1; the values of this ratio are given in Table 1 together with many other derived quantities of the radio galaxies. For the lowest redshift source, 3C340, CIII\] 1909 is not redshifted to a sufficiently high wavelength to be observed. The ratio \[NeIII\] 3869 / \[NeV\] 3426 is only available from the Paper 1 data for the source 3C324, but for nine of the other sources in the sample values are available from the literature; these also are compiled in Table 1. The two line ratios are plotted against each other on Figure 1, where they are compared to the theoretical predictions of shock and photoionisation as discussed in the following subsections. 
### 2.2 Photoionisation models The theoretical line ratios of CIII\] 1909 / CII\] 2326 and \[NeIII\] 3869 / \[NeV\] 3426 for photoionised gas were taken from the work of Allen et al. \[Allen et al. 1998\] who calculated the predicted line ratios for a number of emission lines using the MAPPINGS II code \[Sutherland et al. 1993\]. Allen et al. considered a planar slab of gas illuminated by a power-law spectrum of ionising radiation, and calculated the emission line ratios for a wide range of conditions: for two different spectral indices of the input spectrum ($`F_\nu \propto \nu ^\alpha `$ with $`\alpha =-1.0`$ and $`\alpha =-1.4`$), and two different densities of cloud ($`n_\mathrm{e}=100`$ and 1000 cm<sup>-3</sup>), the ionisation parameter $`U`$<sup>1</sup><sup>1</sup>1The ionisation parameter $`U`$ is defined as the ratio of the number density of ionising photons striking the cloud to the gas density ($`n_\mathrm{H}`$) at the front face of the cloud $`[U=(cn_\mathrm{H})^{-1}\int _{\nu _0}^{\infty }(F_\nu /h\nu )\mathrm{d}\nu ]`$, where c is the speed of light and $`\nu _0`$ is the frequency corresponding to the ionisation potential of hydrogen. was allowed to vary in the range $`10^{-4}\le U\le 1`$. A high energy cutoff was applied to the ionising spectrum at 1.36 keV to avoid over-producing the intensity of the soft X-rays. The models are ionisation bounded and correspond to a range in cloud sizes from 0.003 to 32 parsec. The resultant line ratio sequences are shown on Figure 1. It is beyond the scope of this paper to provide a more detailed description of the photoionisation models, or of the other theoretical models considered next. For more complete discussions of these modelling techniques the reader is referred to the papers from which the theoretical line ratios have been drawn, in this case the work of Allen et al. \[Allen et al. 1998\]. ### 2.3 Photoionisation including matter bounded clouds To avoid the shortcomings of simple photoionisation models discussed at the beginning of Section 2, Binette et al. 
\[Binette et al. 1996\] considered photoionisation of a composite population containing both optically thin (matter bounded; MB) and optically thick (ionisation bounded; IB) clouds (cf. Viegas and Prieto 1992). In their models, all of the photoionising radiation passes initially through the MB clouds which absorb a fraction ($`F_{\mathrm{MB}}\simeq 40`$%) of the impinging ionising photons and produce the majority of the high ionisation lines in the spectrum. The radiation which is not absorbed then strikes the population of IB clouds; this radiation has already been filtered by the MB clouds, as a result of which the IB clouds give rise to predominantly low and intermediate excitation lines. According to these models, the variation in the emission line ratios from galaxy to galaxy has its origin in the variation of the ratio (hereafter $`A_{\mathrm{M}/\mathrm{I}}`$) of the solid angle from the photoionising source subtended by MB clouds relative to that of IB clouds. A larger value of $`A_{\mathrm{M}/\mathrm{I}}`$ corresponds to a larger weight given to the MB clouds and hence a higher excitation spectrum. Since the model states that all of the ionising radiation striking the IB clouds must first have passed through the MB clouds, the ratio $`A_{\mathrm{M}/\mathrm{I}}`$ strictly cannot be below unity. Binette et al. consider two physical situations which might produce such a composite cloud population (some combination of the two would also be possible): (i) the MB ‘clouds’ may be optically thin shells surrounding a denser IB core of a cloud; (ii) the MB clouds could be a separate population of clouds which lie close to the ionising source. In the second case, the MB clouds would have to have a covering factor of unity in order that all lines of sight to the IB clouds pass through a MB cloud; this is possible for very small clouds, consistent with the fact that they would be optically thin. 
Note that in the case of the MB clouds forming a separate cloud population, if some fraction of them are obscured from the observer, for example by the same material that obscures the active nucleus itself, this may give rise to an apparent $`A_{\mathrm{M}/\mathrm{I}}<1`$. Binette et al. tabulated the line ratios in the two cloud populations for a single set of parameters, chosen to give a good match to Seyfert spectra. They adopted a power-law spectrum with a spectral index of $`\alpha =-1.3`$ ($`F_\nu \propto \nu ^\alpha `$), ionisation parameters of $`U_{\mathrm{MB}}=0.04`$ and $`U_{\mathrm{IB}}=5.2\times 10^{-4}`$, a density in the MB clouds of 50 cm<sup>-3</sup>, and absorbed ionising photon fractions of $`F_{\mathrm{MB}}=0.4`$ and $`F_{\mathrm{IB}}=0.97`$ (see their paper for a more detailed discussion of these quantities). From these data, a sequence of line ratios of CIII\] 1909 / CII\] 2326 and \[NeIII\] 3869 / \[NeV\] 3426 has been calculated allowing the quantity $`A_{\mathrm{M}/\mathrm{I}}`$ to vary in the range $`0.001\le A_{\mathrm{M}/\mathrm{I}}\le 100`$; this sequence is shown on Figure 1. ### 2.4 Shock ionisation models Fast radiative shocks are a powerful source of ionising photons which can have a profound influence upon the temperature and ionisation properties of the gas in the post–shock region. An overview of shock ionisation models is provided by Dopita & Sutherland \[Dopita & Sutherland 1996\]. The two most important parameters for controlling the post–shock emission line spectrum are the velocity of the shock and the ratio $`B/\sqrt{n}`$, where $`B`$ is the pre–shock transverse magnetic field and $`n`$ is the pre–shock number density of the emission line clouds. 
The latter ratio controls the density, and hence the effective ionisation parameter, of the post–shock gas since at high shock velocities the transverse magnetic field limits the compression caused by the shock through a balance between the magnetic pressure of the cloud ($`\propto B^2`$) and the ram pressure of the shock ($`\propto n`$). Dopita and Sutherland \[Dopita & Sutherland 1996\] calculated the emission line ratios expected from gas ionised by the photons produced in shocks for a range of physical conditions: the velocity of the shock through the emission line clouds was allowed to vary from 150 to 500 km s<sup>-1</sup>, and the ‘magnetic parameter’ was varied in the range $`0\le B/\sqrt{n}\le 4`$ $`\mu `$G cm<sup>-1.5</sup>, which spans the expected range of values (see their paper for more details). These authors also emphasised the importance of photons produced by the shock diffusing upstream and ionising the pre–shock gas. This may give rise to extensive precursor emission line regions, with different spectral characteristics to the compressed shocked gas. They therefore calculated the emission line spectra predicted for these precursor regions for a range of shock velocities from 200 to 500 km s<sup>-1</sup>. In distant radio galaxies such as the ones studied in this paper, the spatial resolution is insufficient to distinguish between the precursor and post-shock emission regions, and a combined spectrum of the two would be observed. Using the data tables provided by Dopita & Sutherland, the emission line ratios of CIII\] 1909 / CII\] 2326 and \[NeIII\] 3869 / \[NeV\] 3426 were calculated both for simple shock models and for shock models including a precursor region. These theoretical ratios are shown on Figure 1. 
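Reading a classification off a diagram of this kind amounts to asking which family of model predictions an observed pair of line ratios lies closest to in log-ratio space. The sketch below illustrates the idea with entirely hypothetical numbers: the `models` coordinates, the `classify` helper and the test point are placeholders of our own, not the published Allen et al. (1998) photoionisation grids or the Dopita & Sutherland (1996) shock grids.

```python
import math

# Illustrative grid points in (log10 carbon ratio, log10 neon ratio) space.
# These coordinates are placeholders, NOT the published Allen et al. (1998)
# photoionisation grids or the Dopita & Sutherland (1996) shock grids.
models = {
    "photoionisation": [(0.6, -0.5), (0.8, -0.7), (1.0, -0.9)],
    "shock": [(0.0, 0.3), (0.1, 0.5), (-0.1, 0.4)],
}

def classify(log_carbon, log_neon):
    """Return the model family with the nearest grid point in log-ratio space."""
    best_family, best_dist = None, float("inf")
    for family, points in models.items():
        for x, y in points:
            dist = math.hypot(log_carbon - x, log_neon - y)
            if dist < best_dist:
                best_family, best_dist = family, dist
    return best_family

# A hypothetical source with a low carbon ratio and a high neon ratio:
print(classify(0.05, 0.45))  # prints "shock"
```

In practice the model predictions form continuous sequences rather than isolated points, and the measurement uncertainties, often comparable to the separation between the grids, must be taken into account before any such classification is trusted.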
### 2.5 Interpreting the line diagnostic diagram Figure 1 clearly demonstrates that the ionisation states of radio galaxies, even within a tightly defined sample such as the one studied here, show considerable variations: the CIII\] 1909 / CII\] 2326 ratio differs by nearly a factor of ten between 3C368 and 3C340. The nine sources for which ratios are available for both lines clump into two groups; 3C217, 3C324, 3C352 and 3C368 are grouped together in the region corresponding to the shock ionisation models, while 3C22, 3C265, 3C280, 3C340 and 3C356 are all close to the photoionisation predictions. Interestingly, the four sources in the first region all have projected radio linear sizes smaller than 115 kpc and the five sources in the second group all have sizes larger than this value. This result is more apparent in Figure 2 where the ratio of CIII\] 1909 / CII\] 2326<sup>2</sup><sup>2</sup>2This ratio is preferred to the \[NeIII\] 3869 / \[NeV\] 3426 ratio here, and in later figures, because data are available for more of the galaxies and because the ratios are all drawn from the homogeneous data set presented in Paper 1. The neon ratio provides similar results, as is apparent from the strong inverse correlation between the two ratios seen on Figure 1. is plotted against the linear size of the radio source; the two parameters are correlated at the 98.5% significance level in a Spearman rank test. For all nine sources, the photoionisation models of Binette et al. \[Binette et al. 1996\] including matter–bounded clouds provide an acceptable fit to the data. There is, however, a problem with this model. As discussed in Section 2.3, a value of $`A_{\mathrm{M}/\mathrm{I}}<1`$ is only possible if the MB clouds form a separate population of clouds, some of which are obscured from the observer. 
The four small radio sources would have to have a value $`A_{\mathrm{M}/\mathrm{I}}\simeq 0.1`$, implying that over 90% of their MB clouds are obscured, while there is no requirement for obscuration of the larger radio sources ($`A_{\mathrm{M}/\mathrm{I}}\gtrsim 1`$). The amount of obscuration would therefore have to depend upon the size of the radio source. Three factors determine the projected linear size of a radio source: the orientation of the source with respect to the line of sight, the age of the radio source and the advance rate of the hotspots. The first of these cannot be responsible since sources orientated more towards the line of sight, hence appearing smaller, will have less, not more, obscuration towards their central regions (cf. orientation–based schemes of radio galaxies and radio loud quasars). Although the second option cannot be excluded, it seems improbable that the obscuration of MB clouds in the central regions will decrease by an order of magnitude during the short timescale of a radio source lifetime (a few $`\times 10^7`$ years) without the same process either destroying the clouds themselves or having other consequences, for example for the visibility of the broad–line regions. The third possibility, that there may be a connection between a higher obscuration of the central regions of small radio sources and a slower advance rate of their hotspots, has parallels with the suggestion that compact symmetric radio sources are small because they are confined by a dense (obscuring) surrounding medium (e.g. van Breugel et al. 1984). However, recent investigations of compact radio sources using VLBI techniques have derived hotspot advance velocities of significant fractions of the speed of light ($`\sim 0.2c`$, e.g. Owsianik & Conway 1998, Owsianik et al. 1998), supporting a youth rather than confinement scenario for these sources. 
This indicates that there is no strong connection between hotspot advance speed and obscuration (density) of the central regions of radio sources. It seems unlikely, therefore, that the value of $`A_{\mathrm{M}/\mathrm{I}}`$ should be strongly dependent upon the radio source size. Although this possibility cannot categorically be excluded, our preferred interpretation of the line diagnostic diagram is that for four sources, for which the extent of the emission line region is within a factor of two of the radio source size, the ionisation is dominated by shocks, whilst for the other five sources photoionisation dominates. This interpretation will be supported by the discussion of the kinematics in the following section; the matter–bounded cloud photoionisation model would provide no clear explanation of the variations seen in the kinematical properties. The uncertainties in the emission line ratios for each individual galaxy are too large to pin down any parameters of the ionisation accurately, but one feature is readily apparent. The five sources in the photoionisation region of the diagram are significantly more consistent with a flatter spectral index ($`\alpha \simeq -1.0`$) for the power–law ionising continuum than with the steeper one ($`\alpha \simeq -1.4`$) typically adopted for low redshift sources. Villar–Martín et al. \[Villar–Martín et al. 1997\] found a similar result analysing the rest–frame UV emission lines of radio galaxies with redshifts $`z>1.7`$, suggesting that this may be a general feature of high redshift AGN. ## 3 Morphological and kinematical properties of the emission line gas The emission line gas surrounding low redshift radio galaxies shows velocity shears within the galaxies of between 50 and 500 km s<sup>-1</sup>, and (deconvolved) full widths at half maximum (FWHM) of the emission lines typically in the range 200 to 600 km s<sup>-1</sup> \[Tadhunter et al. 1989b, Baum et al. 1992\]. 
In many cases the kinematics are consistent with a gravitational origin. At high redshifts the kinematics can be much more extreme, with velocity dispersions often in excess of 1000 km s<sup>-1</sup> (McCarthy et al. 1996, Paper 1) and components offset by several hundreds of km s<sup>-1</sup> with respect to the bulk of the gas (Tadhunter 1991, Paper 1). These remarkable kinematics are inconsistent with a gravitational origin (cf. Tadhunter 1991). In this section the variation in the kinematics is compared with other properties of the radio source to investigate their origin. In Table 1 a number of the parameters of the emission line properties of the gas are provided. The integrated \[OII\] 3727 emission line intensity and the rest–frame equivalent width of this emission line are as calculated in Paper 1. The projected linear size of the emission line region along the slit direction was determined from the extent of the locations at which fits to the \[OII\] 3727 emission line profile were obtained in Figures 2 to 15(d) of Paper 1 (excluding the detached emission line systems for 3C356 and 3C441). The range in relative velocities was calculated, to the nearest 25 km s<sup>-1</sup>, from Figures 2 to 15(e) of Paper 1, considering the velocity separation between the most positive and most negative velocity components of the \[OII\] 3727 emission line, excluding any data points with uncertainties greater than 100 km s<sup>-1</sup>. The maximum value of the FWHM of the emission line gas was determined from Figures 2 to 15(f) in Paper 1, again excluding any locations with uncertainties greater than 100 km s<sup>-1</sup>. One further parameter was calculated, hereafter referred to as the ‘number of velocity components’, $`N_\mathrm{v}`$, to provide an indication of the smoothness of the velocity profile. $`N_\mathrm{v}`$ was defined as the number of single velocity gradient components (i.e. 
straight lines) necessary to fit, within the errors, the velocity profiles along the slit direction (Figures 2 to 15e of Paper 1; cf. van Ojik et al. 1997). A galaxy whose mean motion is consistent with simple rotation will provide a single component fit; higher values of $`N_\mathrm{v}`$ correspond to irregular motions. This analysis, being by its very nature somewhat subjective, was carried out separately by two of the authors and by a third independent scientist. For 11 of the 14 galaxies a unanimous value of $`N_\mathrm{v}`$ was obtained. The remaining three galaxies show more complicated profiles and their classification is ambiguous: for 3C280, values of 3, 4 and 5 were obtained, and so a value of $`4\pm 1`$ is adopted; for 3C324 the profile is very different from the other galaxies, being composed of two kinematically distinct systems as discussed in Paper 1 — a value of 2 is used; values of 2, 3 and 4 were assigned to 3C368, and so $`3\pm 1`$ is adopted. The precise values of $`N_\mathrm{v}`$ for these galaxies are of less importance than the fact that they are clearly inconsistent with a value of 1. The values of $`N_\mathrm{v}`$ are compiled in Table 1. A number of features are immediately apparent from Table 1. It is noteworthy that of the seven galaxies with projected radio sizes smaller than 150 kpc, six have values $`N_\mathrm{v}>1`$ with only one having $`N_\mathrm{v}=1`$, while six of the seven sources larger than this size have $`N_\mathrm{v}=1`$ (see Figure 3). A chi–squared test shows that the probability of this occurring by chance is below 1%. Small radio sources predominantly have emission line gas with distorted velocity profiles, whilst the emission line gas of large radio sources has a velocity profile generally consistent with rotation. A similar result is found with the variation of the maximum FWHM with radio size, shown in Figure 4. 
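The quoted below-1% chance probability can be reproduced with a standard chi-squared test on the 2×2 split of $`N_\mathrm{v}`$ against radio size. A minimal sketch using only the Python standard library; the choice of no Yates continuity correction is our assumption, since the text does not specify which variant of the test was used.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for the 2x2 table [[a, b], [c, d]],
    without a continuity correction, and its p-value for 1 degree of freedom."""
    n = a + b + c + d
    # Closed form for 2x2 tables: chi2 = n*(ad - bc)^2 / (product of the margins)
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # Survival function of the chi-squared distribution with 1 degree of freedom
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p

# From the text: of the 7 sources smaller than 150 kpc, 6 have N_v > 1;
# of the 7 larger sources, 6 have N_v = 1.
chi2, p = chi2_2x2(6, 1, 1, 6)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # the probability comes out below 1%
```

With a Yates continuity correction, or with Fisher's exact test, the probability rises to roughly 3%, so the exact variant of the test matters for small tables like this one.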
These two parameters are anti-correlated at greater than the 99% significance level (Spearman rank test), with the four sources lying in the ‘shock’ region of the line diagnostic diagram (Figure 1) having clearly the highest values. This latter point is seen more clearly in Figure 5, where the FWHM of the \[OII\] 3727 emission is inversely correlated with the CIII\] 1909 / CII\] 2326 emission line ratio, at the 98.5% significance level using a Spearman rank correlation test. The kinematical and ionisation properties of these galaxies are fundamentally connected. It is not only the kinematics of the gas that evolve with the radio source size, but also the physical extent and the luminosity of the line emission. Figure 6 shows the variation of the equivalent width of the \[OII\] 3727 emission line with increasing size of the radio source. Although this correlation is less strong (96% significance in a Spearman rank test), it is apparent that the small sources in the ‘shock–dominated’ region of Figure 1 show enhanced \[OII\] 3727 equivalent widths. A more accurate description of Figure 6 is not that there is an inverse correlation between the equivalent width of the \[OII\] emission and radio size, but rather that at large ($`\gtrsim 150`$ kpc) radio sizes the distribution of equivalent widths is fairly flat, and at small sizes there is often a factor of 2 to 3 excess emission relative to this level. This enhancement of the line equivalent widths of small radio sources with respect to large sources implies an even greater boosting of their line luminosities, for two reasons. First, the optical continuum emission of small radio sources is more luminous than that of large sources, as indicated by Best et al. \[Best et al. 1996\], decreasing the apparent increase in the emission line equivalent width. 
Second, the equivalent width is determined from the extracted 1–dimensional spectrum from a spatial region along the slit of about 35 kpc (see Paper 1); the physical extent of the emission line regions of small radio sources is greater than that of large radio sources, as shown in Figure 7. Excluding 3C265, which is an exceptional source in many ways (e.g. see discussion in Best et al. 1997), the emission line regions of radio sources with sizes $`\gtrsim 200`$ kpc have total extents of up to about 50 kpc (25 kpc radius, if symmetrical). Smaller radio sources, however, have emission line regions ranging from this size up to about 100 kpc, a size comparable to the extent of the radio source. In other words, line emission at distances from the AGN of 30 to 50 kpc generally is only seen at the stage of radio source evolution when the hotspots are passing, or have just passed, through this region. ## 4 Discussion A number of results have been derived in the previous sections and these are summarised here for clarity. * Radio sources with small linear sizes ($`\lesssim 120`$ kpc) have lower ionisation states than larger radio sources. Their emission line ratios are in agreement with the theoretical predictions of shock ionisation models, whilst those of large radio sources are consistent with photoionisation. * There is a strong inverse correlation between the FWHM of the \[OII\] 3727 emission and the size of the radio source. The four sources with ‘shock–dominated’ ionisation states have the highest FWHM. * Large radio sources often have smooth velocity profiles consistent with rotation, whilst those of small sources are more distorted. * The \[OII\] 3727 emission line strength correlates inversely with the radio source size. The four ‘shock–dominated’ sources have the highest integrated \[OII\] 3727 equivalent widths. * The physical extent of the line emitting regions is larger in smaller radio sources. 
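Several of these results rest on Spearman rank tests. For samples without tied values the statistic has a simple closed form, sketched here with illustrative size and line-width pairs; the numbers are hypothetical, chosen only to mimic the sense of the anticorrelation in Figure 4, and are not the Table 1 measurements.

```python
def spearman_rho(x, y):
    """Spearman rank correlation for samples with no tied values:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), d_i = rank(x_i) - rank(y_i)."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Hypothetical (radio size [kpc], maximum FWHM [km/s]) pairs -- NOT the
# actual Table 1 data, merely an anticorrelated toy sample of nine sources.
sizes = [30, 50, 80, 110, 150, 200, 300, 420, 560]
fwhm = [1500, 1300, 1400, 1100, 700, 600, 800, 400, 350]
print(round(spearman_rho(sizes, fwhm), 3))  # strongly negative, near -0.93
```

The significance level then follows from the null distribution of the statistic for the given sample size, which is the step the quoted percentages refer to.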
Before discussing the interpretation of these correlations, first a comparison is made to see if such results also hold for low redshift radio galaxies. ### 4.1 Comparison with low redshift radio galaxies Baum et al. \[Baum et al. 1992\] studied the ionisation and kinematics of a sample of 40 radio galaxies with redshifts $`z\lesssim 0.2`$. Their sample contained a large mixture of radio source types, including both Fanaroff & Riley (1974; hereafter FR) class I and II objects<sup>3</sup><sup>3</sup>3FR I radio sources are edge–darkened sources of generally lower radio luminosity than the FR II sources; FR II’s are characterised by bright hotspots towards the extremities of each lobe. as well as sources with intermediate structures. Many differences are now known to exist between the FR I and FR II sources besides the large differences at radio wavelengths, such as the luminosity and environments of their host galaxies \[Hill & Lilly 1991, Baum et al. 1995, Ledlow & Owen 1996\], the luminosity of their emission line gas \[Zirbel & Baum 1995\], differences in the dust properties \[de Koff et al. 1999\], and possibly a different mode of accretion on to the central black hole \[Reynolds et al. 1996\]. Baum et al. also found a significant difference in the host galaxy kinematics between the two radio source types. They classified the kinematics of the radio galaxies into three classes, ‘rotators’, ‘calm non–rotators’ and ‘violent non–rotators’. They found that almost all of the FR II sources fell into the rotator or violent non-rotator classes; most of the FR I and intermediate type sources were calm non-rotators. 
All of the FR II’s had strong emission lines with a relatively high ionisation parameter, whilst the FR I and intermediate class sources had much weaker emission lines of lower ionisation, with the surrounding hot interstellar and intracluster medium likely to play an important role in the ionisation, both through heat conduction and through ionisation by its ultraviolet and soft X–ray emission (see also Baum et al. 1995, Zirbel et al. 1995). Given the large differences between FR I and FR II sources, to allow a direct comparison with the high redshift sample, attention here is restricted to only the FR II sources in their sample. Four FR II’s in the sample lie more southerly than declination $`-30^{\circ }`$ and for two of these accurate determinations of the radio size could not be found in the literature; to avoid introducing any biases by selecting only the well-studied sources, all four of these sources have been excluded from further consideration. The remaining sample of low redshift FR II radio galaxies is listed in Table 2, along with the linear size of the radio source taken from the literature and the kinematic classification given by Baum et al. \[Baum et al. 1992\]. In Figure 8 a histogram of the linear sizes is presented, separating the rotator and non-rotator classes. It is clear that the non-rotator classes are associated with small radio sources, and the rotator class with larger sources, exactly as is found for the high redshift sample. The only exception to this rule is 3C305, which is a rotator with a small radio size: indeed, this is the smallest radio source in the sample (15 kpc), and it could be argued that any shocks associated with the radio source have not yet passed through a significant proportion of the host galaxy, accounting for the lack of clear non-rotational kinematics. 
Even including 3C305, the probability of the radio sizes of the rotator and non-rotator classes being drawn from the same parent samples is less than 0.5% (using a Mann–Whitney U–test). All of the FR II’s in the sample have relatively high ionisation states, but differences are seen from galaxy to galaxy. Baum et al. \[Baum et al. 1990\] present the line strengths of the \[OI\] 6300.3, \[NII\] 6548.1,6583.4, H$`\alpha `$ and \[SII\] 6716.4,6730.8 emission lines as a function of position for half of the sample considered in their 1992 paper. Although these are all relatively low ionisation lines and therefore not the most sensitive to differences between shock and photoionisation, the \[OI\] / H$`\alpha `$ ratio should be somewhat higher for shock ionised gas than for photoionised gas. An ‘average’ value of this emission line ratio has been calculated for each galaxy as the mean of the ratios at the various positions tabulated by Baum et al. \[Baum et al. 1990\]; these are given in Table 2, the errors quoted representing the scatter in the ratio with location in the galaxy. In Figure 9 these ratios are plotted against radio size: a Spearman rank test shows that this emission line ratio is anticorrelated with radio size at the 96% confidence level. The low redshift sample therefore provides similar results to the high redshift sample. Large FR II radio sources have kinematics consistent with rotation and higher ionisation states than small radio sources, whose ionisation and kinematics show more evidence for the role of shocks. It is of note that the low and intermediate redshift sources for which individual studies have shown unambiguously that the kinematics and ionisation are dominated by shocks are almost invariably cases in which the radio source is of comparable size to the extended emission line regions (e.g. Clark et al. 1997, 1998), naturally agreeing with this picture. 
One significant difference that remains between the low and high redshift samples is that the high redshift sources are more extreme in their emission line properties (luminosities, line widths, etc) than those at low redshifts. The most important factor influencing this is the sharp increase of the radio power with redshift in the flux–limited samples, with corresponding increases in both the flux of ionising photons from the AGN and the energy of the jet shocks. However, Tadhunter et al. \[Tadhunter et al. 1998\] investigated the correlations of different emission line strengths with redshift and showed that this cannot be the only reason: the ionisation–sensitive \[OII\] 3727 / \[OIII\] 5007 ratio does not decrease strongly with redshift as it should if the only difference between the low power, low redshift and the high power, high redshift objects was that the latter contained a more luminous photoionising source. They concluded that a secondary effect such as an increase in the density of the intergalactic medium or an increase in the importance of jet-cloud interactions with redshift is also required. ### 4.2 The role of shocks in small sources The results presented in the previous sections, coupled with the evidence that a similar situation is seen at low redshifts, lead naturally to a single scenario to explain all of the emission line properties. For small radio sources the morphology, kinematics and ionisation properties of the emission line gas are dominated by the effects of the bow shock associated with the expansion of the radio source. As this bow shock passes through the interstellar and intergalactic medium (ISM & IGM), the inter–cloud gas is quickly accelerated to the velocity of the bow shock, but the warm emission line clouds are essentially bypassed by the shock front (e.g. Rees 1989, Begelman & Cioffi 1989). 
The clouds are accelerated during the short time it takes the shock to pass the cloud, due to the imbalance in the pressures between the pre–shock and post–shock gas on the front and back of the cloud. The velocity to which the clouds are accelerated in this way is easily shown to be independent of cloud size and to be well below 100 km s <sup>-1</sup> (e.g. Rees 1989). Much larger velocities are induced, however, if the effect of ram–pressure acceleration by the shocked IGM gas (often referred to as entrainment) is considered. Behind the initial bow shock, the clouds find themselves in a shocked layer of IGM, moving outwards at speeds approaching that of the bow shock. The clouds will be accelerated within this medium until they pass across the contact discontinuity into the radio cocoon, where the pressure is the same as in the shocked layer of gas but the density is much lower, and they are no longer accelerated; there is essentially no mixing of the hot inter–cloud gas across this contact discontinuity (e.g. Norman et al. 1982). During the time $`\mathrm{\Delta }T`$ for which a cloud is between the bow shock and the contact discontinuity, the momentum imparted to the cloud by the shocked IGM can be approximated to first order as $`r_\mathrm{c}^2v_\mathrm{s}^2n_\mathrm{g}m_\mathrm{p}\mathrm{\Delta }T`$, where $`r_\mathrm{c}`$ is the cloud size, $`v_\mathrm{s}`$ is the bow shock velocity, $`n_\mathrm{g}`$ is the post–shock number density of the inter–cloud gas and $`m_\mathrm{p}`$ is the proton mass.
The mass of the cloud is of order $`r_\mathrm{c}^3n_\mathrm{c}m_\mathrm{p}`$, where $`n_\mathrm{c}`$ is the cloud number density, and the timescale $`\mathrm{\Delta }T`$ is of order $`D/v_\mathrm{s}`$, where $`D`$ is the distance between the bow–shock and the contact discontinuity; therefore, the velocity to which the cloud is accelerated is $$v_\mathrm{c}\approx \frac{D}{r_\mathrm{c}}\frac{n_\mathrm{g}}{n_\mathrm{c}}v_\mathrm{s}.$$ (1) In the radio source evolution models of Kaiser & Alexander \[Kaiser & Alexander 1997\], radio sources grow self–similarly and $`D`$ is found to be about 3% of the distance between the AGN and the bow shock \[Kaiser & Alexander 1999\]; for the radio source passing through the emission line region at radius $`\sim 15`$ kpc then, $`D\sim 0.5`$ kpc. Assuming a density ratio of $`n_\mathrm{g}/n_\mathrm{c}\sim 10^{-4}`$ for pressure equilibrium between the clouds ($`T\sim 10^4`$ K) and the surrounding IGM ($`T\sim 10^8`$ K), and a shock velocity of $`v_\mathrm{s}\sim 0.05c`$ from typical hot-spot advance velocities (e.g. Liu, Pooley & Riley 1992), then a cloud of size $`r_\mathrm{c}\sim 1`$ pc will be accelerated to 750 km s <sup>-1</sup>, comparable to the velocities observed in the small radio sources (the actual velocities may need to be slightly higher since the radio galaxies are believed to lie close to the plane of the sky). The spread in projected cloud velocities from clouds of different sizes and from clouds accelerated through different regions of the bow shock will lead to the broad velocity dispersions. The acceleration of the emission line gas clouds by the radio bow shocks therefore explains the distorted velocity profiles and large line widths observed in small radio sources. It is interesting to note that the acquired cloud velocities are proportional to the bow–shock velocity $`v_\mathrm{s}`$. If the bow–shock velocity increases with radio power (redshift), as has been suggested from spectral ageing measurements of hotspot advance velocities (e.g.
Liu, Pooley & Riley 1992), this would explain why greater velocity widths are seen in high redshift sources than low redshift sources. The ionisation state of the large radio sources indicates that the dominant source of ionising photons in these sources is the AGN. Since the properties of the AGN are not expected to change dramatically between small and large radio sources, the gas surrounding the small sources should receive a similar flux of photoionising radiation. The lower ionisation state seen in the spectra of these galaxies arises in part due to compression of emission line gas clouds by the radio source shocks, decreasing the ionisation parameter. The presence of extra (softer) ionising photons associated with the shocks further influences the ionisation state. Bicknell et al. (1997; see also Dopita and Sutherland 1996) have investigated the emission line luminosity that can be generated by radio source shocks expanding through a single–phase ISM with a power–law density gradient, $`\rho (r)=\rho _0(r/r_0)^{-\delta }`$. For $`\delta =2`$, they show that the work done on the ISM by the expanding radio cocoon ($`P\mathrm{d}V`$) is approximately half of the energy supplied by the radio jet; if the shock is fully radiative then a significant proportion of this energy is fed into emission line luminosity. The luminosity of the \[OIII\] 5007 emission line can be estimated as $$L([\mathrm{OIII}])\approx \frac{6}{8-\delta }\left(\frac{\kappa _{1.4}}{10^{-11}}\right)^{-1}\left(\frac{P_{1.4}}{10^{27}\mathrm{W}\mathrm{Hz}^{-1}}\right)\times 10^{43}\mathrm{ergs}\mathrm{s}^{-1},$$ where $`P_{1.4}`$ is the monochromatic power of the radio source at 1.4 GHz, and $`\kappa _{1.4}`$ is the conversion factor from the energy flux of the jet to the monochromatic radio power at 1.4 GHz, which Bicknell et al. \[Bicknell et al. 1997\] estimate to be of order $`10^{-10.5}`$.
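Stepping back to the entrainment estimate, the order-of-magnitude numbers quoted for Equation (1) can be multiplied out directly; this sketch just reproduces that arithmetic (all inputs approximate, taking c ≈ 3×10⁵ km/s).

```python
# Order-of-magnitude check of Equation (1): v_c ~ (D/r_c)(n_g/n_c) v_s,
# with D ~ 0.5 kpc, r_c ~ 1 pc, n_g/n_c ~ 1e-4 and v_s ~ 0.05c.
c_kms = 3.0e5                 # speed of light, km/s (approximate)
D_over_rc = 0.5e3 / 1.0       # D = 0.5 kpc in pc, divided by r_c = 1 pc
density_ratio = 1.0e-4        # n_g / n_c for pressure equilibrium
v_s = 0.05 * c_kms            # bow-shock velocity, km/s

v_c = D_over_rc * density_ratio * v_s
print(v_c)                    # -> 750.0 km/s, as quoted in the text
```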
Adopting this value, taking the flux density of a typical $`z\sim 1`$ 3CR source at an observed frequency of 1.4 GHz to be 2 Jy, and assuming that the \[OII\] 3727 / \[OIII\] 5007 emission line flux ratio is $`\sim 0.5`$ \[McCarthy 1993\], then for $`\delta =2`$ the observed \[OII\] 3727 emission line flux produced by the shocks is calculated to be $`f([\mathrm{OII}])\sim 3\times 10^{-15}`$ ergs s<sup>-1</sup> cm<sup>-2</sup>. Of this, probably between a third and a half (that is, 1 to $`1.5\times 10^{-15}`$ ergs s<sup>-1</sup> cm<sup>-2</sup>) will fall within the projected sky area from which the spectrum was extracted. This predicted emission line flux can be compared to the \[OII\] 3727 emission line fluxes observed in the data (Table 1), which lie in the range $`0.5`$ to $`5\times 10^{-15}`$ ergs s<sup>-1</sup> cm<sup>-2</sup> with the smaller radio sources typically having the higher values (see also Figure 6). These results are completely consistent with a small (factors of 2 to 5) boosting of the emission line luminosities of small sources due to the extra energy input from the shocks. Once the radio source shocks have passed beyond the emission line clouds, the shock–induced emission line luminosity will fall. Under the simplest assumptions, once the jets pass beyond the confining ISM the pressure inside the cocoon will drain away, and the cocoon wall shocks will no longer be pressure driven (e.g. Dopita 1999). These shocks will pass into a momentum conserving phase; their velocity will decrease roughly as $`v_\mathrm{s}\propto r^{-2}`$, and so since the shock luminosity per unit area scales as $`v_\mathrm{s}^3`$, the shock–induced emission line luminosity will fall as $`r^{-4}`$.
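The shock-luminosity scaling can be evaluated numerically. The sketch below assumes a denominator of (8 − δ), a reference normalisation of κ ≈ 10⁻¹¹ and a typical κ of order 10⁻¹⁰·⁵ (my reading of the quoted constants); the function name is illustrative, and converting the result to an observed flux would further require a luminosity distance.

```python
def shock_oiii_luminosity(P14_W_Hz, delta=2.0, kappa=10.0 ** -10.5):
    """[OIII]5007 luminosity estimate from radio-source shocks, in erg/s,
    following the Bicknell et al. (1997) scaling as quoted in the text."""
    return (6.0 / (8.0 - delta)) * (kappa / 1e-11) ** -1 \
        * (P14_W_Hz / 1e27) * 1e43

# With delta = 2 and the reference kappa = 1e-11, a P_1.4 = 1e27 W/Hz
# source gives 1e43 erg/s by construction:
print(shock_oiii_luminosity(1e27, kappa=1e-11))   # -> 1e+43
# With kappa = 10**-10.5 the estimate is a factor 10**0.5 ~ 3 lower:
print(shock_oiii_luminosity(1e27))
```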
Although these assumptions are oversimplified, taking no account of confinement by an intracluster medium for example, it is clear that once the shock fronts have passed well beyond the emission line regions, the contribution of ionising photons produced by the shocks will decrease rapidly; this is in complete accord with the larger sources having photoionisation dominated emission line regions.

### 4.3 The physical extent of the emission line gas

The physical extent of the emission line region of each galaxy along the slit direction was provided in Table 1. For comparison, the extent of the aligned optical (rest–frame ultraviolet) emission has also been determined from the HST observations of Best et al. \[Best et al. 1997\]; using the HST image taken through the filter at a rest–frame wavelength of about 4000Å, the angular distance over which optical emission was observed at greater than three times the rms sky noise level of the image was measured for each galaxy, and the corresponding ‘optical sizes’ are given in Table 1. These values can further be compared with the results of Best et al. \[Best et al. 1998\], who showed from near–infrared imaging that, underlying the aligned emission, the radio sources are hosted by giant elliptical galaxies with characteristic radii of typically 10 to 15 kpc. The extent of the optical aligned emission does not exceed 25 kpc except in three cases: 3C247, 3C265 and 3C368. For 3C368, the HST ‘continuum’ image is actually dominated by a combination of line emission and the correspondingly luminous nebular continuum emission (see discussion in Section 4.5). The large extent of 3C247 is also likely to be predominantly line emission, since it arises from a diffuse halo of emission exactly tracking that seen in a narrow–band \[OII\] 3727 image by McCarthy et al. \[McCarthy et al. 1995\].
With the exception of 3C265 (which, as discussed in Paper 1, is an unusual source in many ways), it is therefore reasonable to say that the aligned continuum emission has an extent of only a couple of characteristic radii, and so lies within the body of the host galaxy. The situation with the emission line gas is very different: this has a physical extent which can exceed 100 kpc, with a mean extent of over 60 kpc. The emission line gas clearly extends well beyond the confines of the host galaxy. As was shown in Figure 7, there is also a difference in the physical extent of the line emitting regions between large and small radio sources, with line emission at radii of 30 to 50 kpc generally only seen in small radio sources. Unless there is an intrinsic difference between the environments of the small and large radio sources, which seems unlikely given all of the correlations found, the emission line gas clouds must also be present out to radii $`\gtrsim 30`$ kpc in large radio sources, but are not visible. Again, the role of shocks can be considered to explain this. At these radii, the flux of ionising photons from the active nucleus may be insufficient to produce an observable emission line luminosity. As the radio source shocks pass through these regions, however, the gas density will be increased and, as discussed above, a large source of local ionising photons will become available, pushing up the emission line luminosity. Following the passage of the radio shocks and the consequent removal of the associated ionising photons, this enhanced line emission will fade over timescales much shorter than the radio source lifetime. Thus, luminous line emission is only seen from the clouds at radii 30 to 50 kpc at the time that the radio source shocks are passing through these regions.
A direct consequence of this model is that for radio sources smaller than about 100 kpc a positive correlation between radio source size and emission line region size should be observed, since line emission from the clouds at radii 30 to 50 kpc will not be seen until the radio source has advanced that far. Such a correlation has indeed been observed in the Ly-$`\alpha `$ emission of radio galaxies with $`z\sim 2`$ \[van Ojik et al. 1997\]. An interesting test of the model presented here could be carried out by taking high spatial resolution long–slit spectra of a sample of radio galaxies with radio sizes smaller than the size of the emission line regions. The prediction is that within the region of the host galaxy occupied by the radio source, the radio source shocks will be important; the emission line ratios will be consistent with shock ionisation, and the gas kinematics will be distorted with broad velocity dispersions. Outside of this region, however, the gas clouds will not yet have been influenced by the radio source shocks and photoionisation should dominate. A study with a similar principle has been carried out on the radio source 1243+036, a radio galaxy of radio size about 50 kpc at a redshift $`z=3.6`$. Distorted Ly-$`\alpha `$ velocity structures with large velocity FWHM are seen within the radio source structure, but Ly-$`\alpha `$ emission also extends beyond that to at least 75 kpc radius in an apparently rotating halo \[van Ojik et al. 1996\]. Villar–Martín et al. \[Villar–Martín et al. 1999\] have also found that the line emission of PKS 2250$`-`$41 ($`z=0.308`$) is composed of distinct kinematic components: a low ionisation component with broad velocity width in the region of the radio source structure, and a narrower high ionisation component which extends beyond the radio lobe.
Carrying out studies such as these for a large sample of radio sources is important because the velocity structures of the line emission in regions outside the radio shocks will directly show the initial motions of the emission line clouds and can be used to determine whether these clouds are simply material associated with the formation of the galaxy which has been expelled into the IGM, or whether they have an external origin, brought in either by a galaxy merger or a cooling flow. It is difficult to distinguish between such scenarios in larger radio sources since information on the initial cloud velocities has been destroyed by the bow shock acceleration.

### 4.4 Evolution of the velocity structures

One significant issue remains to be explained in this picture, and that is how the velocity structures of the large radio sources are produced. The high gas velocities and velocity dispersions induced by the shocks in small radio sources are seen to evolve within the timescale of a radio source lifetime, a few $`\times 10^7`$ years, such that the emission line clouds obtain an underlying velocity profile consistent with a rotating halo, albeit with a still high velocity dispersion. Questions that need to be considered are whether this truly is rotation that is being seen, over what timescale can the extreme shock–induced kinematics be damped down, and can a mean rotation profile be produced whilst the velocity dispersion remains so high? Regarding the first question, given the single slit position and relatively low spatial resolution for the high redshift radio galaxies, it cannot categorically be stated that the emission line profiles of large radio sources are rotation profiles. The data are consistent, for example, with outflow along the radio axes, although in this case it is not clear why the velocity increases with radius (a structure more like that of 3C324 – see Paper 1 – might be expected) while the velocity dispersion decreases with increasing source size.
At low redshifts, however, much higher spatial resolution studies using multiple slit positions show clearly that the gas is in rotating structures (e.g. Baum et al. 1990). It therefore seems reasonable to assume that this may also be true at higher redshifts, and even if this is not the case, the questions noted above still need to be addressed for the low redshift radio sources. Three plausible mechanisms can be considered for the evolution in the velocities of the emission line clouds over the radio source lifetime. The first is that the emission line clouds settle back into stable orbits within the host galaxy through gravitational dynamics alone. The timescale for this process is of order a few crossing times of the clouds, where for clouds moving with velocity $`v_\mathrm{c}\sim 500`$ km s <sup>-1</sup> at a radius $`r\sim 15`$ kpc in the galaxy, the crossing time is $`t_\mathrm{c}\sim 2r/v\sim 6\times 10^7`$ years. This timescale is longer than the radio source lifetime, and so gravity alone cannot give rise to the observed evolution in the emission line structures. A second possibility concerns the deceleration of emission line clouds moving with respect to the interstellar medium, due to ram–pressure arguments. This works through the same process as the acceleration argument discussed in Section 4.2. As the emission line clouds move through the inter–cloud gas, those clouds moving with the largest velocities sweep up the greatest mass of inter–cloud gas, and so are decelerated most strongly. This process will decrease the width of the cloud velocity distribution. Simulations have been carried out, as detailed in Appendix A, to investigate the timescale over which the mean velocity of an ensemble of emission line clouds (with an initial velocity distribution similar to that seen in small radio sources) evolves to that of the IGM in which the clouds are moving, and the timescale over which the dispersion of the velocity distribution is decreased.
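The crossing-time estimate above is simple to verify numerically (the unit conversion constants are approximate):

```python
# Crossing time t_c ~ 2r/v for clouds with v ~ 500 km/s at r ~ 15 kpc.
kpc_km = 3.086e16      # kilometres per kiloparsec
yr_s = 3.156e7         # seconds per year

t_c_s = 2.0 * 15.0 * kpc_km / 500.0   # crossing time in seconds
t_c_yr = t_c_s / yr_s
print(t_c_yr)          # ~6e7 years, as stated in the text
```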
It is found that the peak of the velocity distribution evolves to that of the gas in which it is moving considerably more quickly than the velocity width decreases. Both timescales depend upon the typical cloud size and the ratio of the cloud density to that of the inter–cloud medium within the radio cocoon, and for reasonable assumptions the timescale for decrease of the velocity widths is comparable to the radio source lifetime (see Appendix A for details). Therefore, if a population of emission line clouds were placed within the rotating ISM of a galaxy, a mean rotation profile for the emission line clouds could be recovered whilst the FWHM of the emission lines remained large, as is observed in the radio galaxies. The problem with this model, however, is that the radio bow shock sweeps up essentially all of the inter–cloud gas, with little mixing through the contact discontinuity (e.g. Norman et al. 1982). The radio cocoon is filled primarily with material supplied by the radio jets, and so there is essentially no gas left following a rotation profile. Such gas would have to be resupplied to the ISM, for example by supernovae and stellar winds from stars in rotational orbits, but it is very unlikely that enough gas can be supplied in this manner. Alternatively, the cocoon material supplied by the radio jets would itself have to be in rotational motion, perhaps through angular momentum transfer from the rotating IGM to the radio source as the bow shocks advance. To summarise, although this mechanism can decrease the velocity widths of the gas, it is not clear whether a rotation profile can be re-established quickly enough. The third possibility considers the evolution of the population of radiating clouds through a combination of galaxy rotation and cloud shredding. Klein et al. \[Klein et al.
1994\] showed that in the aftermath of a bow shock, emission line clouds may be susceptible to shredding due to growing Kelvin–Helmholtz and Rayleigh–Taylor instabilities on their surface. The clouds could be shredded over a timescale of a few ‘cloud crushing times’, $`t_{\mathrm{cc}}\approx \chi ^{1/2}r_\mathrm{c}/v_\mathrm{b}`$, where $`\chi `$ is the density ratio of the cloud to the surrounding medium inside the cocoon, $`r_\mathrm{c}`$ is the post–shock cloud radius, and $`v_\mathrm{b}`$ is the velocity of the bow shock through the IGM. Kaiser et al. \[Kaiser et al. 1999\] considered such cloud disruption as a way to resupply material to the radio cocoon in order to explain how a secondary hotspot can be formed in the newly discovered class of double–double radio galaxies (e.g. Schoenmakers et al. 1999); they derived a value of $`t_{\mathrm{cc}}\approx 5\times 10^6(r_\mathrm{c}/\mathrm{pc})`$ yrs. The cloud shredding time is shortest for the smallest emission line clouds; clouds smaller than about a parsec will be shredded on timescales shorter than the radio source lifetime. These small clouds were the most rapidly accelerated (see Equation 1) and so are responsible for producing much of the high velocity dispersions and distorted velocity structures. If these high velocity small clouds are destroyed then the line emission will become dominated by the remaining more massive clouds, which were less accelerated by the radio source shocks, have a lower velocity dispersion, and may still maintain the vestiges of a rotation profile. Equation 1 further shows that the velocity acquired by the warm clouds is proportional to the velocity of the bow shock. In directions perpendicular to the radio axis the bow shock velocity is lower by a factor of the aspect ratio of the cocoon (typically between about 1.5 and 6, e.g. Leahy et al. 1989), and so the warm clouds in these directions will be less accelerated.
In small radio sources these clouds will not be very luminous since they lie away from the strongest radio source shocks and outside of the cone of photoionising radiation from the partially obscured AGN; the emission will be dominated by the higher velocity clouds along the radio jet direction. Over a rotation timescale ($`10^7`$ years), however, these low velocity clouds may be brought within the ionisation cone of the AGN, become ionised, and contribute significantly to the emission line luminosity. Likewise, clouds in small radius orbits around the AGN will acquire lower velocities, since the distance between the bow shock and the contact discontinuity is less and so the period of acceleration is shorter. Thus the rotation profile may re-establish itself from the central regions of the galaxy outwards. By these two mechanisms of shredding and mixing of the cloud populations, the observed population of emission line clouds will evolve such that, in large radio sources, an increasing percentage of the emission will arise from clouds which were less accelerated by the bow shock and so the rotation profile will be gradually recovered. The cloud shredding model has a further advantage that if some fraction of the clouds are destroyed in large radio sources then the emission line luminosity will decrease with increasing radio size, as is observed. The one drawback of this model is that it is surprising that the distinction between radio sources showing rotation profiles and those with distorted profiles is so sharp. Note also that if this scenario is the correct one then the emission line clouds must lie in rotating orbits prior to the radio source activity, providing some information as to their origin. In conclusion, the observation of emission line clouds in rotating halos around large radio galaxies is not trivial to explain, given the large influence of the radio source bow shocks passing through the medium. 
Gravitational effects alone cannot be responsible for re–establishing rotation profiles, but a combination of cloud shredding and cloud mixing, maybe with some help from ram–pressure deceleration, could reproduce the effect.

### 4.5 Implications for the alignment effect

In 1987, McCarthy et al. and Chambers et al. demonstrated that the optical–UV emission of radio galaxies with redshifts $`z\gtrsim 0.6`$ has a strong tendency to be elongated and aligned along the direction of the radio source. HST images of a sample of 28 of these radio galaxies \[Best et al. 1997\] have demonstrated that the form of this so–called ‘alignment effect’ varies strongly from galaxy to galaxy, and in particular appears to evolve with increasing size of the radio source \[Best et al. 1996\]. Small radio sources show a number of intense blue knots tightly aligned along the direction of the radio jet, whilst larger sources generally have more diffuse optical–UV morphologies. Given the strong similarity between this radio size evolution of the continuum alignment effect and the evolution of the emission line gas properties, it is instructive to examine the role of the radio source shocks and the emission line clouds in giving rise to continuum emission. One direct connection is the nebular continuum emission from the warm emission line gas clouds \[Dickson et al. 1995\], that is, free–free emission, free–bound recombination, two–photon continuum and the Balmer forest lines. The flux density of this emission is directly connected to the flux of the H$`\beta `$ emission line. The very luminous line emission seen in the spectra of these powerful radio galaxies (e.g. Paper 1) thus implies that nebular continuum emission is likely to make a significant contribution to their UV flux density. Indeed, 3C368 was one of the original three radio galaxies studied by Dickson et al. \[Dickson et al.
1995\], and they found a nebular continuum contribution in the northern knots as high as 60% of the total continuum emission at rest–frame wavelengths just below the 3646Å Balmer break (see also Stockton et al. 1996). As can be seen from Table 1, 3C368 is a somewhat extreme case and the contribution for more typical galaxies will be somewhat lower, but still of great significance. In Section 3 it was shown that the luminosity of the emission lines correlated inversely with the size of the radio source (Figure 6); therefore, the strength of nebular continuum emission will decrease with increasing radio source size, and in small sources will be found predominantly along the radio jet tracing the strongest radio source shocks. This reflects exactly the observed evolution of the continuum alignment effect. A second alignment effect hypothesis involving the emission line clouds is that star formation is induced by the passage of the radio jet, due to the radio source shocks compressing gas clouds and pushing them over the Jeans limit (e.g. Rees 1989, Begelman and Cioffi 1989, De Young 1989). It should be noted that it is the most massive clouds which would collapse to form the stars, and these are distinct from the smallest clouds which are the most likely to be destroyed by the bow shock. In regions which might be star–forming, $`\lesssim 10^6`$ years behind the bow shock, the only clouds which will already have been destroyed by instabilities on their surface are those of size $`r_\mathrm{c}\lesssim 0.1`$ pc (see Section 4.3); for a mean cloud density of 100 cm<sup>-3</sup> this corresponds to a total cloud mass of less than $`10^{-2}M_{\odot }`$, not massive enough to have formed a star anyway. As discussed by Best et al. \[Best et al.
1996\], the jet–induced star formation mechanism can also account directly for the evolution of the optical–UV morphology with radio size: the mass of stars required to produce the excess optical–UV emission is only a few $`\times 10^8M_{\odot }`$ \[Lilly & Longair 1984, Dunlop et al. 1989\], well below 1% of the stellar mass of the galaxy, and since the starburst luminosity drops rapidly with age they become indistinguishable from the evolved star population over a timescale of a few $`\times 10^7`$ years. On the negative side, no direct evidence for young stars in these radio galaxies was found in our spectra (cf. 4C41.17 at higher redshift, $`z=3.8`$; Dey et al. 1997), although the clearest features of young stellar populations fall outside the observed wavelength ranges. Another important continuum alignment model is scattering of light from a hidden quasar nucleus by electrons \[Fabian 1989\] or dust (e.g. Tadhunter et al. 1989a, di Serego Alighieri et al. 1989). Strong support for this model comes from the observation that the optical emission of some distant radio galaxies is polarised at the $`\sim 10`$% level with the electric vector oriented perpendicular to the radio axis (e.g. Cimatti et al. 1996 and references therein), and the detection of broad permitted lines in polarised light \[Dey & Spinrad 1996, Cimatti et al. 1996, Tran et al. 1998\]: clearly some fraction of the excess optical–UV emission must be associated with this mechanism. However, the lack of polarised emission from some sources (e.g. 3C368, van Breugel et al. 1996; see also Tadhunter et al. 1997) dictates that this is not universal; even for 3C324 where the polarisation percentage is high, only a fraction $`\lesssim 30`$–50% of the optical–UV emission is associated with the scattered component \[Cimatti et al. 1996\].
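Returning to the cloud-mass estimate quoted earlier (a uniform sphere of radius 0.1 pc at a mean density of 100 cm⁻³), the arithmetic can be checked directly; constants are approximate.

```python
import math

pc_m = 3.086e16        # metres per parsec
m_p = 1.673e-27        # proton mass, kg
M_sun = 1.989e30       # solar mass, kg

r = 0.1 * pc_m         # cloud radius in metres
n = 100.0 * 1e6        # 100 cm^-3 expressed in m^-3

mass = (4.0 / 3.0) * math.pi * r**3 * n * m_p
print(mass / M_sun)    # ~0.01, i.e. of order 1e-2 solar masses
```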
A problem for scattering models is that, in the simplest picture, a biconical emission region is expected for the scattered light, rather than the knotty strings of emission observed to lie along the radio jet. However, in light of jet–shock models, this could be explained by extra scattering particles being made available along the radio jet axis, either as dust grains being produced in jet–induced star forming regions, or by radio source shocks disrupting optically thick clouds along the radio jet direction and exposing previously hidden dust grains \[Bremer et al. 1997\]. In conclusion, radio source shocks will play a key role in producing the observed morphology and radio size evolution of the continuum alignment effect. Nebular continuum emission will be enhanced in small radio sources, some gas clouds may be induced to collapse and form stars, and extra scattering particles associated either with any star formation or the disruption of gas clouds could enhance the scattered component.

## 5 Conclusions

The main conclusions of this work can be summarised as follows:

* Small radio sources show a lower ionisation state than large radio sources. The emission line ratios of radio sources with linear sizes $`\lesssim 120`$ kpc are consistent with the gas being ionised by photons produced by the shocks associated with the radio source. The emission line luminosities of the small sources are boosted by a small factor (2–5) relative to large sources, in accord with them receiving an extra source of ionising photons from the shock.

* Small radio sources have very distorted velocity profiles, large velocity widths, and emission line regions covering a larger spatial extent than those of large sources; the latter have much smoother velocity profiles which appear to be dominated by gravitation. These properties are fully explained in terms of the passage of the shocks associated with the radio source.
* A strong correlation is found between the ionisation state of the gas and its kinematical properties, indicating that the two are fundamentally connected.

* These correlations, originally derived for the sample of redshift one radio galaxies studied in Paper 1, are shown also to hold for a sample of FR II radio galaxies with redshifts $`z\lesssim 0.2`$.

* The similarity of the evolution of the emission line gas properties with radio size to that of the continuum alignment effect makes a strong case for the continuum alignment effect also having a large dependence upon radio source shocks.

* The continuum alignment effect is generally confined to within the extent of the host galaxy, but line emission is observed over a considerably larger spatial extent.

## Acknowledgements

The William Herschel Telescope is operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. This work was supported in part by the Formation and Evolution of Galaxies network set up by the European Commission under contract ERB FMRX–CT96–086 of its TMR programme. We are grateful to Mark Allen for supplying the output of the MAPPINGS II photoionisation models in digitised form, and Hy Spinrad for providing the Neon line ratios for 3C217 and 3C340. We thank Matt Lehnert, Arno Schoenmakers and Christian Kaiser for useful discussions, and the referee, Mike Dopita, for his careful consideration of the original manuscript and a number of useful suggestions.

## Appendix A Deceleration of emission line clouds moving through the IGM

Consider a spherical cloud of emission line gas with number density $`n_\mathrm{c}`$ and radius $`r_\mathrm{c}`$ travelling at velocity $`v_\mathrm{c}`$ through gas of number density $`n_\mathrm{g}`$ and velocity $`v_\mathrm{g}`$.
In a time $`\mathrm{d}t`$ a mass of gas of approximately $`\pi r_\mathrm{c}^2n_\mathrm{g}m_p(v_\mathrm{c}-v_\mathrm{g})\mathrm{d}t`$, where $`m_p`$ is the proton mass, is displaced by the cloud and accelerated from velocity $`v_\mathrm{g}`$ to velocity $`v_\mathrm{c}`$. The momentum of the cloud is correspondingly decreased: $$\frac{4}{3}\pi r_\mathrm{c}^3n_\mathrm{c}m_p\mathrm{d}v_\mathrm{c}=-\pi r_\mathrm{c}^2n_\mathrm{g}m_p(v_\mathrm{c}-v_\mathrm{g})^2\mathrm{d}t$$ Defining $`t_0`$ as $`t_0=4n_\mathrm{c}r_0/3n_\mathrm{g}`$, where $`r_0`$ is a typical cloud radius, then $$\mathrm{d}v_\mathrm{c}=-\frac{(v_\mathrm{c}-v_\mathrm{g})^2}{r_\mathrm{c}/r_0}\frac{\mathrm{d}t}{t_0}$$ Using this equation it is possible to follow the evolution of an ensemble of such emission line clouds. For simplicity the distribution of emission line cloud radii was chosen to be flat in logarithm space over a factor of 1000 range centred on $`r_0`$, that is, $`\mathrm{P}(\mathrm{log}(\mathrm{r}_\mathrm{c}))`$ is constant in the range $`-1.5\le \mathrm{log}(r_\mathrm{c}/r_0)\le 1.5`$, and 0 outside that range. The initial velocity distribution of the clouds was set to follow a Gaussian distribution with a mean velocity of zero and a FWHM of 1000 km s <sup>-1</sup>, chosen to represent the velocity dispersion observed in small radio sources. A Monte Carlo simulation was then used to follow the evolution of the velocity distribution of the cloud population in gas moving with velocity +200 km s <sup>-1</sup>, typical of the relative velocity offsets seen in the galaxy profiles; the results are shown in Figure 10. It can be seen that the peak of the velocity distribution of the cloud population evolves rapidly to that of the gas in which it is moving; the width of the velocity distribution becomes progressively narrower but over a much longer timescale.
The resulting velocity distribution is no longer Gaussian, but can be approximated as a Gaussian distribution plus extended broad wings (the slight dip at 200 km s <sup>-1</sup> for the first plotted time interval should be ignored; it arises only due to the simplicity of the model, in which a cloud whose initial velocity is close to 200 km s <sup>-1</sup> will decelerate very slowly). The timescale over which the FWHM of the emission line clouds decreases to that observed in large radio sources (a few hundred km s <sup>-1</sup>) can be used to test the plausibility of this model. This time interval is $`T\approx 5\times 10^7t_0`$ (the solid line on Figure 10), where $`t_0=4n_\mathrm{c}r_0/3n_\mathrm{g}`$. Taking $`T\approx 3\times 10^7`$ yr as an appropriate age for radio sources a few hundred kpc in size, and assuming pre–shock density ratio $`n_\mathrm{c}/n_\mathrm{g}\approx 10^4`$ with the IGM density decreased a further factor $`\mathrm{\Delta }`$ by the bow shock, this gives $`(r_0/\mathrm{pc})\mathrm{\Delta }\approx 5`$. For $`\mathrm{\Delta }\approx 40`$ (e.g. Clarke & Burns 1991) the median cloud size would be about 0.15 parsec; although small, this is certainly plausible given the simplicity of the assumptions.
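The ensemble evolution described in the appendix can be sketched in a few lines. The snippet below is not the paper's code: it uses the log-flat radius distribution and Gaussian initial velocities quoted above, and integrates the drag law exactly in closed form (for $`u=v_\mathrm{c}-v_\mathrm{g}`$ the equation $`\mathrm{d}u/\mathrm{d}\tau =-\mathrm{sign}(u)\,u^2/(r_\mathrm{c}/r_0)`$ gives $`u(\tau )=u_0/(1+|u_0|\tau r_0/r_\mathrm{c})`$), with the sign chosen so that clouds always relax toward the gas velocity. Units are code units with $`r_0=t_0=1`$; only the qualitative behaviour (peak shifts quickly, width narrows slowly) is being illustrated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Log-flat cloud radii over a factor of 1000 centred on r0 (units of r0).
rc = 10.0 ** rng.uniform(-1.5, 1.5, n)

# Gaussian initial cloud velocities, FWHM = 1000 km/s (sigma = FWHM/2.3548).
v0 = rng.normal(0.0, 1000.0 / 2.3548, n)
vg = 200.0  # velocity of the surrounding gas, km/s

def velocities(t):
    """Closed-form solution of du/dt = -sign(u) u^2 / rc for u = v - vg,
    i.e. the drag law above integrated exactly (t in code units)."""
    u0 = v0 - vg
    return vg + u0 / (1.0 + np.abs(u0) * t / rc)

for t in (0.0, 1e-3, 1e-2, 1e-1):
    v = velocities(t)
    q16, q50, q84 = np.percentile(v, [16, 50, 84])
    print(f"t = {t:7.0e}: median = {q50:7.1f} km/s, 16-84% width = {q84 - q16:7.1f} km/s")
```

As in Figure 10, the median locks on to the gas velocity long before the width of the distribution has narrowed, because the residual width is carried by the largest (slowest-decelerating) clouds.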
no-problem/9908/astro-ph9908085.html
ar5iv
text
# Phenomenology of a realistic accelerating universe using only Planck-scale physics \[ ## Abstract Modern data is showing increasing evidence that the Universe is accelerating. So far, all attempts to account for the acceleration have required some fundamental dimensionless quantities to be extremely small. We show how a class of scalar field models (which may emerge naturally from superstring theory) can account for acceleration which starts in the present epoch with all the potential parameters $`O(1)`$ in Planck units. \] Current evidence that the Universe is accelerating, if confirmed, requires dramatic changes in the field of theoretical cosmology. Until recently, there was strong prejudice against the idea that the Universe could be accelerating. There simply is no compelling theoretical framework that could accommodate an accelerating universe. Since the case for an accelerating universe continues to build (see for example ), attempts have been made to improve the theoretical situation, with some modest success. Still, major open questions remain. All attempts to account for acceleration introduce a new type of matter (the “dark energy” or “quintessence”) with an equation of state $`p_Q=w_Q\rho _Q`$ relating pressure and energy density. Values of $`w_Q<-0.6`$ today are preferred by the data and in many models $`w_Q`$ can vary over time. (In this framework, $`w_Q=-1,\dot{w}_Q=0`$ gives a cosmological constant.) One challenge faced by quintessence models is similar to the old “flatness problem” which is addressed by cosmic inflation. Consider $`\mathrm{\Omega }_{tot}\equiv \rho _{tot}/\rho _c`$ where $`\rho _c`$ is the critical density (achieved by a perfectly flat universe). The dynamics of the standard big bang drive $`\mathrm{\Omega }_{tot}`$ away from unity, and require extreme fine tuning of initial conditions for $`\mathrm{\Omega }_{tot}`$ to be as close to unity as it is today (inflation can set up the required initial conditions).
In models with quintessence one must consider $`\mathrm{\Omega }_Q\equiv \rho _Q/\rho _c`$ which is observed to obey $$\mathrm{\Omega }_Q\approx \mathrm{\Omega }_{other}\equiv (\rho _{tot}-\rho _Q)/\rho _c$$ (1) today. The “fine tuning” problem in quintessence models comes from the tendency for $`\mathrm{\Omega }_Q`$ to evolve away from $`\mathrm{\Omega }_{other}`$. Equation 1 is achieved in these models either 1) by fine tuning initial conditions or 2) by introducing a small scale into the fundamental Lagrangian which causes $`\rho _Q`$ to only start the acceleration today. This second category includes cosmological constant models and also a very interesting category of “tracker” quintessence models which achieve the right behavior independently of the initial conditions for the quintessence field. One then has to explain the small scale in the Lagrangian, and this may indeed be possible. Here we discuss a class of quintessence models which behave differently. Like the “tracker” models, these models predict acceleration independently of the initial conditions for the quintessence field. These models also have $`\rho _Q(today)`$ fixed by parameters in the fundamental Lagrangian. The difference is that all the parameters in our quintessence potential are $`O(1)`$ in Planck units. As with all known quintessence models, our model does not solve the cosmological constant problem: We do not have an explanation for the location of the zero-point of our potential. This fact limits the extent to which any quintessence model can claim to “naturally” explain an accelerating universe. Recently Steinhardt has suggested that $`M`$-theory arguments specify the zero-point of potentials in 3+1 dimensions. Our zero point coincides with the case favored by Steinhardt’s argument. We start by considering a homogeneous quintessence field $`\varphi `$ moving in a potential of the form $$V(\varphi )=e^{-\lambda \varphi }.$$ (2) We work in units where $`M_P\equiv (8\pi G)^{-1/2}=\hbar =c=1`$.
The role of spatial variations in such a field has been studied in . Inhomogeneities can be neglected for our purpose, which is to study the large-scale evolution of the universe. We assume inflation or some other mechanism has produced what is effectively a flat Friedmann-Robertson-Walker universe and work entirely within that framework. Cosmological fields with this type of exponential potential have been studied for some time, and are well understood. (A nice review can be found in .) Let us review some key features: A quintessence field with this potential will approach a “scaling” solution, independent of initial conditions. During scaling $`\mathrm{\Omega }_Q`$ takes on a fixed value which depends only on $`\lambda `$ (and changes during the radiation-matter transition). In general, if the density of the dominant matter component scales as $`\rho \propto a^{-n}`$ then after an initial transient the quintessence field obeys $`\mathrm{\Omega }_\varphi =\frac{n}{\lambda ^2}`$, effectively “locking on” to the dominant matter component. Figure 1(upper panel) shows $`\mathrm{\Omega }_Q(a)`$ for scaling solutions in exponential potential models, where $`a`$ is the scale factor of the expanding universe ($`a(today)=1`$). At the Planck epoch $`a\approx 10^{-30}`$. In Fig. 1(upper panel) one can see the initial transients which all approach the unique scaling solution determined only by $`\lambda `$. In it is shown that exponential potentials are the only potentials that give this scaling behavior. Scaling models are special because the condition $`\mathrm{\Omega }_Q\approx \mathrm{\Omega }_{other}`$ is achieved naturally through the scaling behavior. The problem with these exponential models is that no choice of $`\lambda `$ can give a model that accelerates today and is consistent with other data. The tightest constraint comes from requiring that $`\mathrm{\Omega }_Q`$ not be too large during nucleosynthesis (at $`a\approx 10^{-10}`$).
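The quoted scaling result ($`\mathrm{\Omega }_\varphi =n/\lambda ^2`$, independent of initial conditions) is easy to verify numerically. A minimal sketch, not taken from the paper: it evolves the standard dimensionless variables $`x=\dot{\varphi }/(\sqrt{6}H)`$, $`y=\sqrt{V}/(\sqrt{3}H)`$ of Copeland, Liddle & Wands for $`V=e^{-\lambda \varphi }`$ in a flat universe with a background fluid $`\rho _b\propto a^{-n}`$ ($`n=3\gamma `$), using e-folds $`N=\mathrm{ln}a`$ as the time variable.

```python
import numpy as np

def rhs(s, lam, gamma):
    """Copeland-Liddle-Wands autonomous system for V = exp(-lambda*phi)
    with a background fluid of barotropic index gamma = 1 + w_b."""
    x, y = s
    c = 1.5 * (2 * x**2 + gamma * (1 - x**2 - y**2))
    dx = -3 * x + lam * np.sqrt(1.5) * y**2 + x * c
    dy = -lam * np.sqrt(1.5) * x * y + y * c
    return np.array([dx, dy])

def evolve(lam, gamma, n_efolds=60.0, h=0.01, s0=(1e-3, 1e-3)):
    """RK4 integration in e-folds N = ln(a); returns (x, y) at the end."""
    s = np.array(s0)
    for _ in range(int(n_efolds / h)):
        k1 = rhs(s, lam, gamma)
        k2 = rhs(s + 0.5 * h * k1, lam, gamma)
        k3 = rhs(s + 0.5 * h * k2, lam, gamma)
        k4 = rhs(s + h * k3, lam, gamma)
        s = s + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return s

lam, gamma = 3.0, 1.0          # matter background: rho_b ~ a^-3, so n = 3*gamma = 3
x, y = evolve(lam, gamma)
print("Omega_phi =", x**2 + y**2, " expected n/lambda^2 =", 3 * gamma / lam**2)
```

Starting from a tiny field energy, the solution locks on to $`\mathrm{\Omega }_\varphi =1/3`$ for $`\lambda =3`$ in a matter background, and to $`4/9`$ if the background is radiation ($`\gamma =4/3`$), illustrating the attractor nature of the scaling solution.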
The heavy curve in Fig 1(upper panel) just saturates a generous $`\mathrm{\Omega }_Q<0.2`$ bound at nucleosynthesis, and produces a sub-dominant $`\mathrm{\Omega }_Q`$ today. The combined effects of sub-dominance and scaling cause $`w_Q=0`$ in the matter era, so this solution is irrelevant to a Universe which is accelerating today. The lower panel of Fig. 1 shows how the value of $`\varphi `$ changes by only about an order of magnitude over the entire history of the universe, while the scale factor (and $`\rho _Q`$) change by many orders of magnitude. This effect, which is due to the exponential form of the potential, plays a key role in what follows. The point is that modest variations to the simple exponential form can produce much more interesting solutions. Because $`\varphi `$ takes on values throughout history that are $`O(1)`$ in Planck units, the parameters in the modified $`V(\varphi )`$ can also be $`O(1)`$ and produce solutions relevant to current observations. Many theorists believe that fields with potentials of the form $$V(\varphi )=V_p(\varphi )e^{-\lambda \varphi }.$$ (3) are predicted in the low energy limit of $`M`$-theory , where $`V_p(\varphi )`$ is a polynomial. As a simple example we consider $$V_p(\varphi )=(\varphi -B)^\alpha +A.$$ (4) For a variety of values for $`\alpha `$, $`A`$ and $`B`$ solutions like the one shown in Fig. 2 can be produced. In this solution $`\mathrm{\Omega }_Q`$ is well below the nucleosynthesis bound, and the universe is accelerating today. We show $`\mathrm{\Omega }_Q(a)`$ (dashed), $`\mathrm{\Omega }_{\mathrm{matter}}(a)`$ (solid) and $`\mathrm{\Omega }_{\mathrm{radiation}}(a)`$ (dotted). The lower panel in figure 2 plots $`w_Q(a)`$ and shows that the necessary negative values are achieved at the present epoch. Figure 3 illustrates how the solutions depend on the parameters in $`V_p(\varphi )`$. We plot quintessence energy density $`\rho _Q`$ as a function of the scale factor $`a`$.
After showing some initial transient behavior each solution scales with the other matter for an extended period before $`\rho _Q`$ comes to dominate. The radiation-matter transition, which occurs at around $`a=10^{-5}`$, can be seen in figure 3 as a change in the slope in the scaling domain. The constant parameter $`B`$ in Eqn. 4 has been selected from the range 14 to 40 for these models, yet the point of $`\varphi `$ domination shifts clear across the entire history of the universe. In this picture, the fact that $`\mathrm{\Omega }_Q`$ is just approaching unity today rather than $`10^{10}`$ years ago is put in by hand, as is the case with other models of cosmic acceleration. Our models are special because this can be accomplished while keeping the parameters in the potential $`O(1)`$ in Planck units. Although we only illustrate the $`B`$ dependence here, we have found that similar behavior holds when other parameters in $`V_p(\varphi )`$ are varied. Let us examine more closely what is going on: The derivative of $`V(\varphi )`$ is given by $$\frac{dV}{d\varphi }=\left(\frac{V_p^{\prime }}{V_p}-\lambda \right)V.$$ (5) In regions where $`V_p`$ is dominated by a single power-law $`\varphi ^n`$ behavior $`V_p^{\prime }/V_p\approx n/\varphi `$ which, unless $`n`$ is large, rapidly becomes $`\ll \lambda `$ for values of $`\lambda `$ large enough to evade the nucleosynthesis bound, leading to little difference from a simple exponential potential. However, there will be points where $`V_p`$ can show other behavior which can impact $`V^{\prime }`$. Using equation 4 gives $$\frac{V_p^{\prime }}{V_p}=\frac{\alpha (\varphi -B)^{\alpha -1}}{(\varphi -B)^\alpha +A}$$ (6) This varies rapidly near $`\varphi =B`$ and for $`\alpha =2`$ peaks at the value $`V_p^{\prime }/V_p=1/\sqrt{A}`$. The upper panel of Fig. 4 shows the behavior of $`V`$ near $`\varphi =B=34.8`$ for the solution shown in Fig 2. The dashed curve shows a pure exponential for comparison. The lower panel shows the curves $`V_p^{\prime }/V_p`$ and $`\lambda `$ (the constant curve).
Where these two curves cross, $`V^{\prime }=0`$. Because the peak value $`1/\sqrt{A}>\lambda `$, two zeros are produced in $`V^{\prime }`$ creating the bump shown in the figure. In our solution $`\rho _Q`$ is coming to dominate near $`\varphi =B`$ because the field is getting trapped in the local minimum. The behavior of the scaling solution ensures that $`\varphi `$ gets stuck in the minimum rather than rolling on through (regardless of the initial conditions). At one stage in this work we focused on potentials of the form $`V(\varphi )=\mathrm{exp}(-\lambda _{\mathrm{eff}}\varphi )`$ with the idea that $`\lambda _{\mathrm{eff}}`$ might not be absolutely constant, but could be slowly varying with $`\varphi `$. We considered forms such as $`\lambda _{\mathrm{eff}}=\lambda (1-(\varphi /B)^\alpha )`$ and found many interesting solutions, especially for moderately large values of $`B`$ which make $`\lambda _{\mathrm{eff}}`$ slowly varying. For example, $`\lambda =13`$, $`B=65`$, and $`\alpha =1.5`$ give a solution similar to Fig 2. If this form for $`\lambda _{\mathrm{eff}}`$ were taken seriously for large $`\varphi `$, then these models have an absolute minimum in $`V`$ which $`\varphi `$ settles into (or at least approaches) at the start of acceleration. But our expression may just represent an approximation to $`\lambda _{\mathrm{eff}}`$ over the relevant (finite) range of $`\varphi `$ values. Of course we always can re-write Eqn. 3 in terms of $`\lambda _{\mathrm{eff}}`$ with $`\lambda _{\mathrm{eff}}=\lambda -\mathrm{ln}(V_p)/\varphi `$. In the end we focused on potentials in the form of Eqn. 3 because they seem more likely to connect with ideas from $`M`$-theory. Whatever form one considers for $`V`$, the concept remains the same. Simple corrections to pure $`V=\mathrm{exp}(-\lambda \varphi )`$ can produce interesting solutions with all parameters $`O(1)`$ in Planck units. We should acknowledge that we use $`O(1)`$ rather loosely here.
In the face of the sort of numbers required by other quintessence models or for, say, a straight cosmological constant ($`\rho _\mathrm{\Lambda }\sim 10^{-120}`$) numbers like $`.01`$ and $`34.8`$ are $`O(1)`$. Also, the whole “quintessence” idea has several important open questions. Some authors argue that values of $`\varphi >1`$ should not be considered without a full quantum gravitational treatment, although currently most cosmologists do not worry as long as the densities are $`\ll 1`$ (a condition our models easily meet). Another issue that has been emphasized by Carroll is that even with the (standard) assumption that $`\varphi `$ is only coupled to other matter via gravity, there still will be other observable consequences that will constrain quintessence models and require small couplings. Because in our models $`\dot{\varphi }\approx 0`$ today the tightest constraints in are evaded, but there would still be effective dimensionless parameters $`\sim 10^{-4}`$ required. Looking toward the bigger picture, a general polynomial $`V_p`$ will produce other features of the sort we have noted. Some bumps in the potential can be “rolled” over classically, but may produce features in the perturbation spectrum or other observable effects. We are investigating a variety of cosmological scenarios with a more general version of $`V_p`$. We are also looking at the effect of quantum decay processes which are relevant to local minima of the sort we consider here. We expect a range of possibilities depending on the nature of $`V_p`$. In conclusion, we have exhibited a class of quintessence models which show realistic accelerating solutions. These solutions are produced with parameters in the quintessence potential which are $`O(1)`$ in Planck units. Without a fundamental motivation for such a potential, all arguments about “naturalness” and “fine tuning” are not very productive.
We feel, however, that this work represents interesting progress at a phenomenological level, and might point out promising directions in which to search for a more fundamental picture. ACKNOWLEDGMENTS: We thank J. Lykken, P. Steinhardt, N. Turok, and B. Nelson for helpful conversations. We acknowledge support from DOE grant DE-FG03-91ER40674, UC Davis, and thank the Isaac Newton Institute for hospitality while this work was completed.
no-problem/9908/gr-qc9908041.html
ar5iv
text
# Radiative falloff in black-hole spacetimes ### Introduction A theorem that establishes the uniqueness of the Schwarzschild black hole as the endpoint of gravitational collapse without rotation was proved by Werner Israel more than 30 years ago , and the mechanism by which the gravitational field eventually relaxes to the Schwarzschild form was elucidated by Richard Price more than 25 years ago . Given the venerable age of this topic, it is surprising that more can be said about it today. Yet, many papers on radiative falloff have been written in the last few years . Most of the new developments are concerned with rotating collapse, and how the gravitational field eventually relaxes to the Kerr form. The question we pursue in this two-part contribution is different. Focusing our attention on nonrotating black holes, we ask: How do the conditions far away from the black hole affect the relaxation process? In Part I we consider a black hole immersed in an inflationary universe. (This was first done by Brady et al. , and additional details can be found in Ref. .) In Part II, William G. Laarakkers will consider a black hole immersed in a spatially-flat, dust-filled universe. ### Radiative falloff in Schwarzschild spacetime Price’s result can be summarized as follows. As a nonspherical star undergoes gravitational collapse, the gravitational field becomes highly dynamical, and the escaping radiation interacts with the spacetime curvature surrounding the star. At late times, well after the initial burst of radiation was emitted, the gravitational field relaxes to a pure spherical state. If $`\delta g`$ schematically represents the deviation of the metric from the Schwarzschild form, then $`\delta g\sim t^{-(2\ell +2)}`$, where $`\ell `$ is the multipole order of the perturbation; the dominant contribution to $`\delta g`$ comes from the quadrupole ($`\ell =2`$) mode.
The inverse power-law decay applies to many other situations involving radiation interacting with the curvature created by a massive object. The simplest model problem which exhibits this behaviour involves a massless scalar field in Schwarzschild spacetime. In this context, the background geometry is not affected by the field $`\mathrm{\Phi }`$, which satisfies the wave equation $$\left(g^{\alpha \beta }\nabla _\alpha \nabla _\beta -\xi R\right)\mathrm{\Phi }=0,$$ (1) where $`g_{\alpha \beta }`$ is the spacetime metric, $`R`$ the Ricci scalar (which vanishes for Schwarzschild spacetime), and $`\xi `$ a coupling constant. Because the spacetime is spherically symmetric, the field can be decomposed according to $$\mathrm{\Phi }=\sum _{\ell m}\frac{1}{r}\psi _{\ell }(t,r)Y_{\ell m}(\theta ,\varphi ).$$ (2) This leads to a decoupled equation for each wave function $`\psi _{\ell }`$, and we can focus on a single mode at a time. The problem is formulated as follows. A pulse of scalar radiation (described by $`\psi _{\ell }`$) impinges on the black hole and interacts with the spacetime curvature, which creates a potential barrier fairly well localized near $`r=3M`$. The wave pulse is partially reflected and transmitted, and at late times, a tail remains. At such times, the field falls off as $`\psi _{\ell }\sim t^{-(2\ell +3)}`$. This is Price’s power-law decay, and this behaviour is displayed in Fig. 1. A number of analytical and numerical studies of radiative dynamics have revealed that the inverse power-law behaviour is not sensitive to the presence of an event horizon. In fact, power-law tails are a weak-curvature phenomenon, and it is the asymptotic structure of the spacetime at radii $`r\gg 2M`$ which dictates how the field behaves at times $`t\gg 2M`$. It is this observation that motivated our work: How is the field’s evolution affected if the conditions at infinity are altered?
### Radiative falloff in Schwarzschild-de Sitter spacetime To provide an answer to this question, we remove the black hole from its underlying flat spacetime and place it in de Sitter spacetime, which describes an exponentially expanding universe. The Schwarzschild-de Sitter (SdS) spacetime has a metric given by $`ds^2`$ $`=`$ $`-f\,dt^2+f^{-1}dr^2+r^2d\mathrm{\Omega }^2,`$ (3) $`f`$ $`=`$ $`1-2M/r-r^2/a^2.`$ (5) Here, $`a^2=3/\mathrm{\Lambda }`$, where $`\mathrm{\Lambda }`$ is the cosmological constant. (The SdS metric is a solution to the modified vacuum field equations, $`G_{\alpha \beta }+\mathrm{\Lambda }g_{\alpha \beta }=0`$, which imply $`R=4\mathrm{\Lambda }=12/a^2`$.) The spacetime possesses an event horizon at $`r=r_e\approx 2M`$ and a cosmological horizon at $`r=r_c\approx a`$. We assume that $`r_e\ll r_c`$, so that the two length scales are cleanly separated. We examine the time evolution of a scalar field in SdS spacetime; the field is still governed by Eq. (1), and it still admits the decomposition of Eq. (2). Figure 2 provides a comparison between the behaviour of $`\psi _{\ell }`$ in the two spacetimes (Schwarzschild and SdS). We see that at early times, the wave functions behave identically; the field has not yet become aware of the different conditions at $`r\gg r_e`$. At later times, however, deviations become apparent. For $`\ell =0`$, the Schwarzschild behaviour $`\psi _0\sim t^{-3}`$ is replaced by the wave function changing sign at $`t\approx 260`$, and settling down to a constant value at late times. For $`\ell =1`$, the Schwarzschild behaviour $`\psi _1\sim t^{-5}`$ is replaced by a faster decay which eventually becomes exponential. The field’s exponential decay is confirmed by monitoring its evolution up to times $`t>r_c`$. If $`\xi =0`$, we find that $`\psi _{\ell }\sim e^{-\ell \kappa _ct}`$ at late times , where $`\kappa _c\approx 1/r_c`$ is the surface gravity of the cosmological horizon.
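As a quick check on the two length scales just introduced, the horizons are the positive roots of $`f(r)=0`$, a cubic in $`r`$. A small sketch with illustrative values (not from the paper):

```python
import numpy as np

def sds_horizons(M, a):
    """Horizons of Schwarzschild-de Sitter spacetime: the two positive roots of
    f(r) = 1 - 2M/r - r^2/a^2 = 0, equivalently r^3/a^2 - r + 2M = 0."""
    roots = np.roots([1.0 / a**2, 0.0, -1.0, 2.0 * M])
    r = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
    r = r[r > 0]
    return r[0], r[1]          # event horizon r_e, cosmological horizon r_c

M, a = 1.0, 100.0              # cleanly separated scales, r_e << r_c
r_e, r_c = sds_horizons(M, a)
print(f"r_e = {r_e:.4f} (close to 2M), r_c = {r_c:.4f} (close to a)")
```

For $`M\ll a`$ the roots sit just outside $`2M`$ and just inside $`a`$, confirming the approximations $`r_e\approx 2M`$ and $`r_c\approx a`$ used in the text.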
A rich spectrum of late-time behaviours is revealed when $`\xi `$, the curvature-coupling constant, is allowed to be nonzero. Figure 3 shows the time-dependence of $`\psi _0`$ for several values of $`\xi `$. For $`\xi `$ smaller than a critical value $`\xi _c`$, the field decays monotonically with a decay constant that increases with increasing $`\xi `$. When $`\xi >\xi _c`$, however, the wave function oscillates with a decaying amplitude. As $`\xi `$ is increased away from the critical value $`\xi _c`$, the frequency of the oscillations increases, but the decay constant stays the same. This qualitative change of behaviour as $`\xi `$ goes through $`\xi _c`$ is quite remarkable. It can be explained with a detailed analytical calculation that will not be presented here (see Ref. ). This calculation reveals that at late times, the field behaves as $`\psi _{\ell }\sim e^{-p\kappa _ct}`$, where $$p=\ell +\frac{3}{2}-\frac{1}{2}\sqrt{9-48\xi }+O\left(\frac{r_e}{r_c}\right).$$ (6) This relation implies that $`p`$ becomes complex, and $`\psi _{\ell }`$ oscillatory, when $`\xi >\xi _c=3/16`$. Part II by William G. Laarakkers ### The spacetime The background spacetime in which the scalar field’s evolution is followed is the Schwarzschild-Einstein-de Sitter spacetime. Qualitatively, it can be described as follows. The idea is to start out with a spatially-flat, expanding, dust-filled universe. Then a ball of dust is “scooped” out, which leaves behind a spherical vacuum region. The dust that was removed is replaced by a Schwarzschild black hole, which is placed in the middle of the vacuous region. This produces a spacetime with two distinct regions. The inner (black hole) region is described by the Schwarzschild metric, and the outer (cosmological) region is described by the Friedmann-Robertson-Walker (FRW) metric (see Fig. 4). There are two important things to note about this spacetime.
First, if the mass of the black hole is the same as the mass of the dust that was scooped out, the metric will be smooth across the boundary separating the two regions of the spacetime. Also, since the dust is pressureless it will not flow across the boundary, and the boundary itself will be co-moving with the universe. Because the specific finite-difference equation used in the numerical work requires the use of null coordinates (see ), the metrics of the two regions must be put in double-null coordinate form. For the black hole region the metric is written as $`ds^2=-\left(1-\frac{2M}{r}\right)dudv+r^2d\mathrm{\Omega }^2.`$ (7) Here, $`u`$ and $`v`$ are ingoing and outgoing null coordinates, and $`r`$ is defined implicitly by $`r+2M\mathrm{ln}(r/2M-1)=(v-u)/2`$. In the cosmological region the metric takes the form $`ds^2=a^2(u^{\prime },v^{\prime })(-du^{\prime }dv^{\prime }+\chi ^2d\mathrm{\Omega }^2),`$ (8) where $`u^{\prime }`$ and $`v^{\prime }`$ are ingoing and outgoing null coordinates of the FRW spacetime, different from $`u`$ and $`v`$. The FRW radial coordinate is $`\chi =\frac{1}{2}(v^{\prime }-u^{\prime })`$. The scale factor $`a`$ is given by $`a(u^{\prime },v^{\prime })=\frac{1}{16}C(u^{\prime }+v^{\prime })^2`$, where $`C`$ is a constant that depends on the mass $`M`$ of the black hole and the density of the dust. The first task is to find one coordinate system that can describe both regions of the spacetime. This is required so that a single wave equation valid over the entire spacetime can be constructed. Since it is known that the metric is continuous across the boundary we can evaluate the metric induced on both sides of the boundary hypersurface, and set them equal. This construction allows us to find the ingoing Schwarzschild coordinate $`u`$ as a function of the ingoing cosmological coordinate $`u^{\prime }`$, and the outgoing coordinate $`v`$ as a function of $`v^{\prime }`$. Thus we now have a single coordinate system covering both regions of the spacetime.
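The implicit relation $`r+2M\mathrm{ln}(r/2M-1)=(v-u)/2`$ quoted above must be inverted numerically at each grid point. A minimal sketch (not the paper's code) using Newton's method, exploiting $`dr^{}/dr=1/f`$ with $`f=1-2M/r`$ for the tortoise coordinate $`r^{}`$:

```python
import numpy as np

def r_from_rstar(rstar, M=1.0, tol=1e-12):
    """Invert rstar = r + 2M ln(r/2M - 1) for r > 2M by Newton iteration.
    Since d(rstar)/dr = 1/f with f = 1 - 2M/r, each Newton step is
    dr = (rstar - rstar(r)) * f."""
    # Starting guess: r ~ rstar far out; r ~ 2M(1 + e^{rstar/2M - 1}) near the horizon.
    r = rstar if rstar > 4.0 * M else 2.0 * M * (1.0 + np.exp(rstar / (2.0 * M) - 1.0))
    for _ in range(100):
        residual = rstar - (r + 2.0 * M * np.log(r / (2.0 * M) - 1.0))
        dr = residual * (1.0 - 2.0 * M / r)
        r += dr
        if abs(dr) < tol:
            break
    return r

for rs in (-10.0, 0.0, 10.0, 100.0):
    r = r_from_rstar(rs)
    print(f"rstar = {rs:7.1f}  ->  r = {r:.6f}")
```

The near-horizon starting guess keeps the iterate above $`r=2M`$, where the logarithm would otherwise be undefined; convergence is quadratic once close.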
### The wave equation The wave equation that governs the evolution of the scalar field is one without curvature coupling (equivalent to setting $`\xi =0`$; see Part I and ). Thus the massless scalar field $`\mathrm{\Phi }`$ obeys the equation $`\mathrm{\Box }\mathrm{\Phi }=g^{\alpha \beta }\nabla _\alpha \nabla _\beta \mathrm{\Phi }=0.`$ (9) The spherical symmetry of the problem allows us to decompose the field in terms of spherical harmonics, and then to evolve only the part of the field that depends on the null coordinates. Thus the field can be decomposed as $`\mathrm{\Phi }=\sum _{\ell m}\frac{1}{R}\psi _{\ell }Y_{\ell m}(\theta ,\varphi ),`$ (10) where $`R=r`$, $`\psi _{\ell }=\psi _{\ell }(u,v)`$ in the black hole region, and $`R=a\chi `$, $`\psi _{\ell }=\psi _{\ell }(u^{\prime },v^{\prime })`$ in the cosmological region. When all quantities are expressed in the starred coordinate system, each wavefunction $`\psi _{\ell }`$ satisfies the equation $`4\frac{\partial ^2\psi }{\partial u^{\prime }\partial v^{\prime }}+V\psi =0,`$ (11) where the potential $`V`$ takes a different form depending on which region of the spacetime the field lies in: $`V_{\mathrm{Schild}}`$ $`=`$ $`\frac{du}{du^{\prime }}\frac{dv}{dv^{\prime }}f\left[\frac{\ell (\ell +1)}{r^2}+\frac{2M}{r^3}\right]`$ () $`V_{\mathrm{FRW}}`$ $`=`$ $`\frac{4\ell (\ell +1)}{(v^{\prime }-u^{\prime })^2}-\frac{8}{(v^{\prime }+u^{\prime })^2}.`$ () ### Results The numerical code evaluates the field on the event horizon, on the boundary between the two regions of the spacetime, and at future null infinity. Our discussion here will be restricted to the value of the field on the event horizon for the $`\ell =0`$ and $`\ell =1`$ modes.
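Equation (11) is typically integrated on a null grid with the standard diamond-cell scheme (e.g. Gundlach, Price & Pullin): with grid spacing $`h`$ in both null directions, $`\psi _N=\psi _E+\psi _W-\psi _S-(h^2/8)V_c(\psi _E+\psi _W)`$, where $`V_c`$ is the potential at the cell centre. A self-contained sketch (not the paper's code), verified here on the $`V=0`$ case, where any $`\psi =F(u)+G(v)`$ is an exact solution and the stencil reproduces it to round-off:

```python
import numpy as np

def evolve_null_grid(V, psi_u0, psi_v0, u, v):
    """Diamond-cell integration of 4 psi_{uv} + V(u, v) psi = 0 on a null grid.
    psi_u0 is initial data on the slice v = v[0]; psi_v0 on u = u[0]."""
    h = u[1] - u[0]                      # assumes equal spacing in u and v
    psi = np.zeros((len(u), len(v)))
    psi[:, 0] = psi_u0
    psi[0, :] = psi_v0
    for i in range(1, len(u)):
        for j in range(1, len(v)):
            Vc = V(0.5 * (u[i] + u[i - 1]), 0.5 * (v[j] + v[j - 1]))
            psi[i, j] = (psi[i - 1, j] + psi[i, j - 1] - psi[i - 1, j - 1]
                         - (h * h / 8.0) * Vc * (psi[i - 1, j] + psi[i, j - 1]))
    return psi

# Sanity check with V = 0: psi = sin(u) + cos(v) solves psi_{uv} = 0 exactly.
u = np.linspace(0.0, 2.0, 101)
v = np.linspace(0.0, 2.0, 101)
exact = np.sin(u)[:, None] + np.cos(v)[None, :]
psi = evolve_null_grid(lambda uu, vv: 0.0, exact[:, 0], exact[0, :], u, v)
print("max error:", np.abs(psi - exact).max())
```

With a nonzero potential the same stencil is second-order accurate, which is why the scheme is the workhorse for tail calculations of this kind.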
The evolution was started at a “late” time, meaning that the boundary has expanded far enough that it can be clearly seen that the field initially behaves as it would in pure Schwarzschild spacetime. For both modes considered we see in figures 5 and 6 that the field first exhibits quasi-normal ringing followed by the well known power law decay (see, among others, ). However, at a certain time in the evolution, the field’s behaviour deviates from the behaviour exhibited in the pure Schwarzschild case. The point at which the field changes behaviour corresponds to the time at which information about the existence of the cosmological region reaches the event horizon. As the wave packet falls towards the event horizon (approximated by $`u^{\prime }=u_{\mathrm{max}}`$, where $`u_{\mathrm{max}}`$ is the largest value of $`u^{\prime }`$ in the numerical grid) it encounters the localized potential (dashed line in Fig. 7). Part of the wave is transmitted through the barrier and reaches the event horizon, and part of the wave is back-scattered by the potential. The reflected wave heads out towards the cosmological region (to the right), where it encounters the boundary $`\mathrm{\Sigma }`$. For the $`\ell =0`$ mode the potential at the boundary is discontinuous and negative, and the field now changes sign. This sign change is the large dip in Fig. 5 (note that this is a log scale, so that as the field passes through $`\psi =0`$ the logarithm goes to negative infinity). It is when this information reaches the event horizon that the evolution of the field deviates from its evolution in pure Schwarzschild spacetime. The field continues to decay with a power-law falloff, but the falloff is much slower than in the pure Schwarzschild case. The discussion for the $`\ell =1`$ case is similar, until the reflected wave reaches the boundary $`\mathrm{\Sigma }`$.
This is because the potential at the boundary is discontinuous and positive for $`\ell >0`$ (see the expression for $`V_{\mathrm{FRW}}`$ in the previous section). Therefore the field will be partially transmitted through the barrier at the boundary and partially reflected off. The transmitted wave will make its way off to future null infinity. As for the part of the wave packet that has now been reflected twice, it will fall back towards the black hole where it once again encounters the localized potential. The part of the wave that manages to make it through the potential on its second encounter heads back towards the event horizon, carrying information about the existence of the boundary. This second encounter with the localized potential has the same effect on the packet as it did the first time: the field again exhibits quasi-normal ringing (see inset of Fig. 6). This “echoing” phenomenon occurs only for the $`\ell >0`$ modes of the field. ## Acknowledgments The work presented in Part I was carried out in collaboration with Patrick Brady and Chris Chambers; additional details can be found in Ref. . This work was supported by the Natural Sciences and Engineering Research Council.
no-problem/9908/quant-ph9908090.html
ar5iv
text
# Untitled Document I am terribly sorry! My inexperience with the e-print archive and the cross-listing mechanics led me to erroneously re-submit the paper to Mathematical Physics. Readers should look at math-ph/9909004. I hope that you will forgive me!
no-problem/9908/astro-ph9908098.html
ar5iv
text
# Cosmic ray composition estimation below the knee of the spectrum from the time structure of Čerenkov light in EAS ## 1 Introduction The chemical composition of cosmic rays measured at the Earth is an important key to understanding the production and propagation of cosmic rays. Up to $`\sim `$1 TeV per particle the flux of cosmic rays is sufficiently high that direct measurements of high statistical significance can be made with satellite or balloon based detectors. At energies of 100TeV per particle the flux of primaries is so low that direct composition measurements are limited by large statistical uncertainties. Current knowledge of the composition at 100TeV, obtained from direct measurement, is summarized in . Above 100TeV primary fluxes are such that cosmic rays can only be studied through extensive air showers (EAS), generated as the primaries interact with the earth’s atmosphere. At ground level EAS can be characterized by measuring secondary electrons, muons, hadrons and Čerenkov light. If the primary energy is sufficiently large ($`>10^{17}`$eV) fluorescence light from nitrogen excitation is also detectable. On average, EAS from primaries of different mass will develop in different ways, leading to composition dependent differences in the secondary observables. In practice, inherent fluctuations in the development of EAS and the complexity of interpreting ground level measurements has limited the success of composition measurement around the knee of the spectrum ($`\sim 10^{15}`$eV). Historically, the mass resolution of ground based experiments has been so poor that results are expressed as the ratio of light (protons and helium) to heavy (mass $`>`$ helium) components. Estimates of this ratio at the knee from current experiments vary considerably ($`\sim `$0.3 to $`\sim `$0.6 ) although there is general agreement that the average composition around the knee becomes heavier with increasing energy.
Several new experiments, designed to simultaneously measure many of the secondary observables of EAS, should considerably improve the current knowledge of the cosmic ray composition around the knee of the spectrum . ## 2 Čerenkov light from extensive air showers The arrival time distribution of Čerenkov photons from EAS has been studied for a large range of primary particle energies (see, for example and references therein). For vertically incident primaries with energy $`>`$100TeV, which are detectable by ground level particle arrays, the vast majority of Čerenkov photons come from the electromagnetic (EM) component of the cascade. In this case the basic core-distance dependent time structure of the Čerenkov pulse can be described by the simple model outlined in . Most of the Čerenkov emission occurs from energetic particles traveling at speed $`c`$ near the core of the shower, which can be approximated as a single line of emission. The time structure is determined by a combination of varying distances and refractive index induced delays between the observer and different parts of the cascade. At the core, photons from the bottom of the shower will arrive first, with photons emitted higher up being delayed by the refractive index of the atmosphere. Away from the core the Čerenkov photons emitted at the bottom of the cascade experience greater geometrical delays than those emitted higher up. At the “Čerenkov shoulder” () refractive and geometrical delays cancel and, in this simple model, photons from all parts of the cascade arrive simultaneously. Beyond the Čerenkov shoulder the geometrical delays dominate and the width of the pulse becomes a strong function of core distance. Clearly the greater the longitudinal extent of the shower, the wider the Čerenkov pulse at most core locations. The simple model described above predicts reasonably well the general behavior of Čerenkov pulses from EAS.
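As a rough numerical illustration (my own sketch, not from the paper), the single-line-of-emission model can be coded in a few lines. The exponential-atmosphere constants `ETA0` and `H0` and the use of a vertical path-averaged refractive index are simplifying assumptions:

```python
import numpy as np

C = 0.299792458      # speed of light, m/ns
ETA0 = 2.92e-4       # assumed n - 1 at sea level
H0 = 7250.0          # assumed refractive-index scale height, m

def mean_n_minus_1(h):
    # path-averaged (n - 1) for a roughly vertical path from emission height h
    return ETA0 * H0 * (1.0 - np.exp(-h / H0)) / h

def delay(h, r):
    """Arrival delay (ns) at core distance r of a photon emitted at height h,
    relative to a shower front moving at speed c down the axis."""
    path = np.hypot(h, r)
    return ((1.0 + mean_n_minus_1(h)) * path - h) / C
```

Near the core the refractive term dominates, so light emitted high in the cascade arrives last; well outside the shoulder the geometric term wins and the ordering reverses, reproducing the qualitative behavior described above.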
For real cascades, however, the relationship between core location and Čerenkov pulse width is blurred by the distribution of particle energies and the finite lateral extent of the shower core. The model also ignores the contribution of Čerenkov light from muons. The highest energy muons are created early in the hadronic core of the cascade and can easily survive to produce Čerenkov light down to ground level. This light will arrive in advance of light produced by the EM component of the cascade. The total muon energy of the cascade is carried by relatively few particles, leading to a poor efficiency in Čerenkov production compared to the EM component. As the energy of the primary is reduced, however, the relative contribution of the muons to the total Čerenkov yield is increased. This is particularly true for the region inside the shoulder of the lateral distribution, where many of the photons from the most deeply penetrating part of the EM cascade arrive (). As the primary energy increases, the multiplicative nature of the EM cascade efficiently converts the extra primary energy into large numbers of Čerenkov producing electrons. The cascade develops deeper in the atmosphere, so the Čerenkov light is more concentrated at ground level and suffers less atmospheric absorption than light produced higher in the atmosphere. While a higher energy primary also results in more energy in the muon channel, much of that energy is carried by a few very energetic muons or partly lost to the EM component if the charged pions interact rather than decay. For a vertically incident primary hadron of a few TeV, the simple model of Čerenkov pulse production described previously becomes inadequate. The electromagnetic component of the cascade will develop rapidly, and within $`\sim`$150m of the core the Čerenkov light produced will appear as a “flash” inasmuch as the duration will be short compared to the duration of the entire pulse.
The majority of the time structure of the pulse comes from Čerenkov light from penetrating muons that appears on the leading edge of the pulse. The ratio of light on the leading edge to that in the “flash” will reflect the ratio of muons/electrons in the cascade capable of generating Čerenkov light. The total time spread of Čerenkov photons observed within the shoulder of the lateral distribution is determined by the atmospheric thickness between the EM Čerenkov emission and the observer. If observations are made at sufficiently large zenith angles, the timing separation between EM and muonic Čerenkov light will be maintained for even a very energetic primary. ## 3 The dependence of the pulse profile on the mass of the primary The effect of primary mass on the shape of the Čerenkov pulse profile can be predicted through general arguments about EAS development: a detailed characterization of pulse profiles, obtained from Monte Carlo simulations, will be presented in section 5. Assuming maximal or near-maximal fragmentation of the primary nuclei, consider now the differences in the development of the electromagnetic components of EAS generated by protons and iron nuclei of the same total energy. The longitudinal development profiles of proton and iron induced EAS are remarkably similar ( ). The individual sub-showers from the nucleons of the iron primary develop and decay more rapidly than the primary proton EAS, but these component nucleons interact at a variety of atmospheric depths effectively elongating the cascade. While the development of the proton and iron cascades will have similar profiles, on average the iron cascades will develop higher in the atmosphere. The transverse momentum of the pions in a cascade increases only slowly with total momentum ( ), so the lateral extent of the secondary particles in the iron cascade will be greater than that of the proton cascade. 
The combination of these two effects, a greater height of maximum and a wider lateral distribution, results in the Čerenkov light from the EM component of the iron induced EAS being more diffuse at ground level than for the proton induced EAS. Over the energy range considered here, the Čerenkov photon density at ground level for a primary iron nucleus is about half that of a primary proton of the same total energy. The arguments used to describe the development of the EM cascade also apply to some extent to the muonic component of the cascade: the muons in the iron cascade tend to be produced higher and with greater lateral spread. The muonic cascade from the iron primary is, however, much more efficient at producing Čerenkov light. The energy of the muonic component of an iron induced cascade is carried by large numbers of relatively low energy muons. The much higher energy interactions at the hadronic core of the proton cascade provide fewer muons with larger average energy. The overall result is that the ratio of the total Čerenkov light that is derived from the muons increases with increasing primary mass. ## 4 Measuring Čerenkov pulse profiles To fully exploit the mass dependent differences between EAS, the detector must be able to collect enough photons to make a detailed pulse profile for those EAS with EM components maximizing high in the atmosphere. The bandwidth of the system must be high, and the field of view sufficiently small that pulse parameterization is not seriously affected by the night sky background. An isochronous large area mirror, such as those used in TeV gamma-ray astronomy, viewed by a single photomultiplier tube would fulfill these conditions. The use of such a system for cosmic ray composition measurement has been described in , and examined in detail for VHE cosmic rays (E$`<`$10TeV) in . At any single zenith angle the range of primary energies that can be investigated is quite limited.
The primary energy must be sufficiently high that a large number of Čerenkov photons are available but the steep nature of the primary energy spectrum and the shape of the Čerenkov lateral distribution bias any sample towards lower energy events. The higher the primary energy at a fixed zenith angle the less distinct is the timing separation between the Čerenkov light of muonic and EM origin (see section 2). A further consideration is that for the higher energy events, the apparent image size is much larger so that on average less of the total angular distribution of the Čerenkov light is sampled by a narrow FOV detector. Fortunately the limited energy range is easily overcome by observing at a range of zenith angles. The total atmospheric thickness changes from $`\sim`$1000 g cm<sup>-2</sup> at zenith to $`\sim`$36000 g cm<sup>-2</sup> for horizontal observations. This, in principle, would allow Čerenkov composition measurements over a very large energy range (a few TeV to tens of PeV). Observing at large zenith angles provides increased collection area for the higher energy primaries, and also provides a greater distance over which the Čerenkov emission can occur. This tends to stretch the pulse out, making the timing measurement easier and less affected by systematic uncertainties in the measurement system. A system similar to that described above has been operated on the BIGRAT atmospheric Čerenkov detector. This system comprised a 4m diameter parabolic mirror viewed by a single photomultiplier tube subtending a field of view (FOV) of $`1.0^{\circ }`$. The system was designed to be sensitive to the differences between Čerenkov pulses initiated by gamma-rays and cosmic rays for large zenith angle observations. While no detailed composition analysis was performed, it was noted that the shape of the cosmic ray pulse profiles was inconsistent with a pure proton composition ( ).
## 5 Monte Carlo Simulations The Monte Carlo simulations presented here have been made using CORSIKA version 4.5 , with GHEISHA code for low energy hadrons and VENUS for high energy hadrons. The EM cascade is fully simulated using the EGS routines and Rayleigh, Mie and ozone absorption processes are modeled for the Čerenkov light. The detector consists of a single 5m diameter isochronous light collector located at 160m above sea level. The mirror is viewed by a single photomultiplier tube with assumed bialkali spectral sensitivity, subtending a full FOV of $`1.6^{\circ }`$. This FOV has not been rigorously optimized for pulse profile measurement: it is large enough that it can sample most of the angular distribution of the EAS of interest, and small enough to exclude very large-arrival-angle large-core-distance cascades. The photoelectrons detected by the photomultiplier are converted into a pulse by convolving the arrival time of each photoelectron with a simple symmetric detector response function with a rise-time (0-100%) of 2ns. The waveform that is generated is sampled 4 times per nano-second. The night sky background is simulated by adding Poisson distributed photoelectrons to the waveform at an average rate of 2 per nano-second. In this paper, results of simulations at $`60^{\circ }`$ and $`70^{\circ }`$ from zenith will be presented. At $`60^{\circ }`$ proton, helium, oxygen and iron primaries have been simulated, but only proton and iron at $`70^{\circ }`$ from zenith. To model a single telescope realistically it is important to include primaries over the full range of energies, core locations and arrival directions to which the instrument is sensitive (see table 1 for a summary). For all species an integral spectral index of -1.6 has been assumed. To reduce computing time each shower has been sampled a total of eight times. At $`60^{\circ }`$ and $`70^{\circ }`$ from zenith the slant distances are $`2`$ and $`3`$ vertical atmospheres respectively.
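A minimal sketch (my own construction, not the authors' code) of the waveform model just described: 0.25 ns sampling, a simple symmetric impulse response with a 2 ns rise-time, and Poisson night-sky background at 2 photoelectrons per nano-second:

```python
import numpy as np

rng = np.random.default_rng(1)
DT = 0.25        # ns per sample (4 samples per nano-second)
RISE = 2.0       # 0-100% rise-time of the detector response, ns
NSB = 2.0        # night-sky background photoelectrons per ns

def kernel():
    # assumed symmetric triangular impulse response with a 2 ns rise
    t = np.arange(0.0, 2.0 * RISE + DT, DT)
    return np.minimum(t, 2.0 * RISE - t).clip(0.0) / RISE

def waveform(pe_times, duration=50.0):
    """Convolve photoelectron arrival times (ns) with the detector response
    and add Poisson-distributed night-sky photoelectrons."""
    n = int(duration / DT)
    counts, _ = np.histogram(pe_times, bins=n, range=(0.0, duration))
    counts = counts + rng.poisson(NSB * DT, size=n)
    return np.convolve(counts.astype(float), kernel(), mode="full")[:n]
```

The triangular response is only a stand-in for the real (unspecified) detector transfer function; any measured impulse response could be substituted for `kernel()`.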
It is possible to extend the energy range of observations up to the knee region by observing at even larger zenith angles, but this is beyond the limitations of the Monte Carlo simulation package used here. CORSIKA v4.5 uses a flat earth/atmosphere and beyond $`70^{\circ }`$ this leads to increasing inaccuracies in describing the depth profile of the atmosphere. At extreme zenith angles the atmospheric depth also changes considerably across the full angular acceptance of the detector ($`4^{\circ }`$), further complicating the interpretation of results. As the total atmospheric depth traversed by the Čerenkov light increases, the effects of atmospheric absorption become more important; this issue will be addressed in more detail in section 7. Fig. 3 shows the average pulse profiles for proton and iron primaries at $`60^{\circ }`$ from zenith. The pulses contain between 600 and 900 photoelectrons, but no other selection conditions have been applied. The pulse size selection acts to limit the range of energies (and subsequently core locations) that are present in the sample. The individual contributions to the Čerenkov pulse by the muonic and EM components are also shown. It can be seen that the muonic Čerenkov light is typically well in advance of the light from the EM component and that the muonic/EM Čerenkov light ratio of iron primaries is higher than that of proton primaries. The differences between iron and proton initiated Čerenkov pulse profiles can be seen in simple pulse parameters, such as rise-time (10% to 90% of pulse maximum) and full width at half maximum (FWHM). The distributions of these parameters as a function of core location are shown in Fig. 2 and Fig. 2. In addition to rise-time and FWHM a third parameter, called LT-ratio (Leading to Trailing signal ratio), will also be defined. The LT-ratio parameter is the ratio of the signal on the leading edge of the pulse to the signal on the trailing edge of the pulse.
The signals on the leading and trailing edges are calculated from the sum of photoelectrons arriving in a 10ns period that starts 2.5ns and finishes 12.5ns from the maximum height of the pulse. The LT-ratio parameter is useful for rejecting a small number of events ($`\sim`$10% of iron and $`\sim`$5% of protons) where a large muon peak is present on the leading edge of the pulse. This peak can cause a mis-characterization of the pulse by the simplistic determination of the rise-time and FWHM parameters. The relationship between primary composition and rise-time is strongest around the Čerenkov shoulder. The distribution of core locations can be limited to some extent by making a simple FWHM cut (see Fig. 2). Fig. 4 shows the distribution of rise-times and other parameters for proton and iron primaries after pulses with FWHM greater than 5.0ns have been rejected. There are clear differences between the Čerenkov pulse profiles of proton and iron initiated EAS and this is reflected in the distributions of the rise-time parameter. Also shown are the rise-time distributions of helium and oxygen primaries at $`60^{\circ }`$ from zenith (Fig. 6), and of proton and iron primaries at $`70^{\circ }`$ from zenith (Fig. 6). Many of the difficulties in interpreting EAS data at ground level are due to the fluctuations in shower development. In particular, the depth of first interaction (DOFI) variation for primary protons causes large variations in the secondary particle properties at ground level. Fig. 7 shows that the Čerenkov pulse profile of a primary proton is largely independent of the DOFI. ## 6 Composition estimation While clear differences exist between the pulse parameters of various primary species, the interpretation of experimental results leading to a composition estimate over a range of energies will clearly be complex.
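The three pulse parameters defined in section 5 translate directly into code. This is my own sketch (with the 0.25 ns sampling step assumed above), not the authors' implementation:

```python
import numpy as np

DT = 0.25  # ns per sample

def rise_time(wf):
    """10%-90% rise-time (ns) on the leading edge of the pulse."""
    peak = int(np.argmax(wf))
    lead = wf[:peak + 1]
    t10 = np.argmax(lead >= 0.1 * wf[peak])   # first sample above 10%
    t90 = np.argmax(lead >= 0.9 * wf[peak])   # first sample above 90%
    return (t90 - t10) * DT

def fwhm(wf):
    """Full width at half maximum (ns)."""
    above = np.flatnonzero(wf >= 0.5 * wf.max())
    return (above[-1] - above[0]) * DT

def lt_ratio(wf):
    """Leading/trailing signal in 10 ns windows 2.5-12.5 ns from the peak."""
    p = int(np.argmax(wf))
    off, w = int(2.5 / DT), int(10.0 / DT)
    lead = wf[max(p - off - w, 0):p - off].sum()
    trail = wf[p + off:p + off + w].sum()
    return lead / trail
```

A symmetric pulse gives an LT-ratio near one, while extra muon light on the leading edge pushes it above one, which is the behavior the parameter is designed to flag.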
Even for a narrow range of total pulse sizes at a fixed zenith angle each primary species will have a different distribution of energies, core locations and arrival directions. As with other ground based experiments, correct interpretation of results will rely on accurate modeling of cascade development, atmospheric attenuation and the detector response. The considerable overlap between the rise-time distributions of the various primary species shows that it will be impossible to unambiguously assign a primary mass on an event by event basis. Instead, the composition may be inferred by combining the simulated rise-time distributions of individual primary species to reproduce the experimentally observed rise-time distribution. The Monte Carlo simulations allow the ratio of each species derived from such a comparison to be converted directly to a flux. If observations are taken over a range of zenith angles, such that the average energy at each zenith angle increases by a factor of, say, 5, the energy spectrum for each primary species can be inferred over a wide range of energies. The assumed spectral index for each species within each energy band can be adjusted by statistical resampling, and the comparison process repeated to achieve consistency between the different energy bands. An example of the accuracy to which the ratios of various primary species can be estimated is shown in Fig. 8. This example, at $`60^{\circ }`$ from zenith, represents the simplest case, where the cosmic ray flux is assumed to consist of only protons and iron nuclei. The Monte Carlo data set for each species has been divided randomly into two halves. From the first half, a “test distribution” of rise-times has been created, which will represent an experimentally measured sample.
If the test distribution is created assuming equal fluxes of proton and iron primaries, after allowing for triggering efficiency, collecting area and event selection, the ratios of events in the sample are (proton:iron=0.79:0.21). The second half of the Monte Carlo rise-time data set has then been repeatedly sampled, allowing the flux ratios of the primary species to vary over all possible values. Each of these “sample distributions” is then compared to the “test distribution” using a Kolmogorov-Smirnov (K-S) test. If the K-S test statistic indicates a probability greater than 90% that the test and sample distributions are drawn from the same parent distribution, then the primary ratios are recorded. The most probable ratio for each species is determined with high precision, but the absolute accuracy is limited, mainly by statistical fluctuations in the test distribution. Fig. 8 shows the distribution of most likely ratios of primary species for repeated regeneration of the test distribution. It should be noted that each test and sample distribution is not fully independent, each being drawn from a limited Monte Carlo data set. Each sampled distribution corresponds to only $`\sim 10`$ hours of actual observations (400 events in each of the test and sample distributions). A reasonable observational data-set of several hundreds of hours duration, and more Monte Carlo simulations, would provide greater flux accuracy than indicated in Fig. 8. The procedure described above can also be applied to a four component cosmic ray flux (proton:helium:oxygen:iron), although the flux accuracy is reduced compared to the two component (proton and iron) fit. In addition, with the limited size of the Monte Carlo data set, a completely unbiased search is not possible, and the range of compositions searched must be limited to avoid local statistical minima in the differences between the test and sample distributions.
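A toy version of the two-component K-S comparison described above. This is my own construction: the Gaussian rise-time "libraries" are stand-ins for the real CORSIKA distributions, and the grid scan simply looks for the iron fraction whose mixture is closest to the pretend-measured sample:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)

# stand-in rise-time libraries (ns); the real ones come from the simulations
proton_lib = rng.normal(2.8, 0.6, 5000)
iron_lib = rng.normal(3.6, 0.8, 5000)

def sample_mix(f_iron, n=400):
    """Draw a mixed rise-time sample with iron fraction f_iron."""
    n_fe = rng.binomial(n, f_iron)
    return np.concatenate([rng.choice(iron_lib, n_fe),
                           rng.choice(proton_lib, n - n_fe)])

test_dist = sample_mix(0.21)   # plays the role of the measured distribution

# scan candidate iron fractions; the K-S statistic is smallest near the truth
scan = [(ks_2samp(test_dist, sample_mix(f, n=4000)).statistic, f)
        for f in np.linspace(0.0, 1.0, 21)]
best = min(scan)[1]
```

With only 400 "measured" events the recovered fraction fluctuates by roughly $`\pm`$0.1, consistent with the statistical limitations discussed in the text.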
## 7 Experimental considerations One of the advantages of using a single mirror/single PMT combination is the ease of calibration of such an experiment. The mirror reflectivity, PMT quantum efficiency, gain and impulse response can all be accurately determined. The background noise to the Čerenkov pulses can be easily monitored and incorporated into Monte Carlo simulations. The greatest source of uncertainty will be in characterizing the atmosphere, and in particular describing the absorption of the Čerenkov light in the atmosphere. Failure to correctly describe the absorption profile of the atmosphere will distort the apparent ratio of light emitted at varying depths from the observation point. Demanding consistency of pulse parameter distributions on a night by night basis should reject nights where the atmosphere is disturbed (significantly different from a molecular atmosphere). In addition to this, atmospheric attenuation could be measured directly through stellar extinction and ground-level standard light sources placed at varying distances from the observatory. Although accurate accounting for absorption is most critical for observations at large zenith, the effects should also be observable for near-zenith observations. It should be possible, therefore, to gauge the accuracy of the absorption estimate and other calibration procedures by comparing the Čerenkov pulse profile estimate of the primary cosmic ray composition with that obtained by direct measurement. This comparison should also be useful in determining the accuracy of the Monte Carlo simulations as a whole. ## 8 Conclusion Monte Carlo simulations presented in this paper have shown that the temporal distribution of Čerenkov light emitted from EAS is sensitive to the muon/electron ratio of the cascade. 
Using a single large area mirror coupled to a narrow field of view photo-detector, it is possible to use these pulse profiles to estimate the chemical composition of primary cosmic rays over a large range of energies. The author would like to thank Philip Edwards, Jamie Holder, Bruce Dawson, John Patterson, Roger Clay and Gavin Rowell for helpful comments. The author acknowledges the receipt of a JSPS postdoctoral fellowship.
no-problem/9908/hep-th9908056.html
ar5iv
text
# 1 The model ## 1 The model Gauge theories on noncommutative spaces are believed to be relevant to the quantization of D-branes in background $`B_{\mu \nu }`$ fields . The structure of such theories is similar to that of ordinary gauge theory except that the usual product of fields is replaced by a “star product” defined by $`\varphi \star \chi =\varphi (X)\mathrm{exp}\{i\theta ^{\mu \nu }{\displaystyle \frac{\partial }{\partial X^\mu }}{\displaystyle \frac{\partial }{\partial Y^\nu }}\}\chi (Y)`$ (1) where $`\theta ^{\mu \nu }`$ is an antisymmetric constant tensor. The effect of such a modification is reflected in the momentum space vertices of the theory by factors of the form $`\mathrm{exp}[i\theta ^{\mu \nu }p_\mu q_\nu ]\equiv e^{ip\wedge q}`$ (2) The purpose of this paper is to show how these factors arise in an elementary way. We will begin by describing a simple quantum mechanical system which is fundamental to our construction. We then consider string theory in the presence of a D3-brane and a constant large $`B_{\mu \nu }`$ field. In the light cone frame the first quantized string is described by our elementary model. We use the model to compute the string splitting vertex and show how the factors in eq. (2) emerge. We then turn to the structure of the perturbation series for the non-commutative theory in infinite flat space. We find that planar diagrams with any number of loops are identical to their commutative counterparts apart from trivial external line phase factors. Compactification, which can lead to entirely new features, is not studied in this paper. ### 1.1 The model at classical level Consider a pair of unit charges of opposite sign in a magnetic field $`B`$ in the regime where the Coulomb and the radiation terms are negligible. The coordinates of the charges are $`\vec{x}_1`$ and $`\vec{x}_2`$ or in component form $`x_1^i`$ and $`x_2^i`$.
The Lagrangian is $`\mathcal{L}={\displaystyle \frac{m}{2}}\left((\dot{x}_1)^2+(\dot{x}_2)^2\right)+{\displaystyle \frac{B}{2}}ϵ_{ij}\left(\dot{x}_1^ix_1^j-\dot{x}_2^ix_2^j\right)-{\displaystyle \frac{K}{2}}(x_1-x_2)^2`$ (3) where the first term is the kinetic energy of the charges, the second term is their interaction with the magnetic field and the last term is a harmonic potential between the charges. In what follows we will be interested in the limit in which the first term can be ignored. This is typically the case if $`B`$ is so large that the available energy is insufficient to excite higher Landau levels . Thus we will focus on the simplified Lagrangian $`\mathcal{L}={\displaystyle \frac{B}{2}}ϵ_{ij}\left(\dot{x}_1^ix_1^j-\dot{x}_2^ix_2^j\right)-{\displaystyle \frac{K}{2}}(x_1-x_2)^2`$ (4) Let us first discuss the classical system. The canonical momenta are given by $`\begin{array}{c}p_i^1={\displaystyle \frac{\partial \mathcal{L}}{\partial \dot{x}_1^i}}=Bϵ_{ij}x_1^j\hfill \\ p_i^2=-Bϵ_{ij}x_2^j\hfill \end{array}`$ (7) Let us define center of mass and relative coordinates $`X`$, $`\mathrm{\Delta }`$: $`\begin{array}{c}\vec{X}=(\vec{x}_1+\vec{x}_2)/2\hfill \\ \vec{\mathrm{\Delta }}=(\vec{x}_1-\vec{x}_2)/2\hfill \end{array}`$ (10) The Lagrangian is $`\mathcal{L}=m((\dot{X})^2+(\dot{\mathrm{\Delta }})^2)+2Bϵ_{ij}\dot{X}^i\mathrm{\Delta }^j-2K(\mathrm{\Delta })^2`$ (11) Dropping the kinetic terms gives $`\mathcal{L}=2Bϵ_{ij}\dot{X}^i\mathrm{\Delta }^j-2K(\mathrm{\Delta })^2`$ (12) The momentum conjugate to $`X`$ is $`{\displaystyle \frac{\partial \mathcal{L}}{\partial \dot{X}^i}}=2Bϵ_{ij}\mathrm{\Delta }^j=P_i`$ (13) This is the center of mass momentum. Finally, the Hamiltonian is $`\mathcal{H}=2K(\mathrm{\Delta })^2=2K\left({\displaystyle \frac{P}{2B}}\right)^2={\displaystyle \frac{K}{2B^2}}P^2`$ (14) This is the Hamiltonian of a nonrelativistic particle with mass $`M={\displaystyle \frac{B^2}{K}}`$ (15) Evidently the composite system of opposite charges moves like a Galilean particle of mass $`M`$.
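As a cross-check (my own, not part of the original derivation), the effective mass $`M=B^2/K`$ can be verified symbolically. Since the kinetic terms are dropped, $`\mathrm{\Delta }`$ carries no time derivative and can be integrated out:

```python
import sympy as sp

B, K = sp.symbols("B K", positive=True)
Xd = sp.Matrix(sp.symbols("Xdot1 Xdot2"))    # center-of-mass velocity
D = sp.Matrix(sp.symbols("Delta1 Delta2"))   # relative coordinate
eps = sp.Matrix([[0, 1], [-1, 0]])           # epsilon_ij

# simplified dipole Lagrangian with the kinetic terms dropped
L = 2 * B * (Xd.T * eps * D)[0, 0] - 2 * K * (D.T * D)[0, 0]

# Delta is auxiliary (no time derivative): eliminate it via dL/dDelta = 0
sol = sp.solve([sp.diff(L, d) for d in D], list(D), dict=True)[0]

# center-of-mass momentum P_i = dL/dXdot_i, evaluated on that solution
P = sp.Matrix([sp.diff(L, v) for v in Xd]).subs(sol)

print(sp.simplify(P - (B**2 / K) * Xd))      # zero matrix: P = (B^2/K) Xdot
```

The momentum comes out proportional to the velocity with coefficient $`B^2/K`$, in agreement with the Hamiltonian argument in the text.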
What is unusual is that the spatial extension $`\mathrm{\Delta }`$ of the system is related to its momentum so that the size grows linearly with $`P`$ according to eq. (13). How does this growth with momentum affect the interactions of the composite? Let’s suppose charge 1 interacts locally with an “impurity” centered at the origin. The interaction has the form $`V(\vec{x}_1)=\lambda \delta (\vec{x}_1)`$ (16) In terms of $`X`$ and $`\mathrm{\Delta }`$ this becomes $`V=\lambda \delta (X+\mathrm{\Delta })=\lambda \delta (X^i-{\displaystyle \frac{1}{2B}}ϵ^{ij}P_j)`$ (17) Note that the interaction in terms of the center of mass coordinate is nonlocal in a particular way. The interaction point is shifted by a momentum dependent amount. This is the origin of the peculiar momentum dependent phases that appear in interaction vertices on the noncommutative plane. More generally, if particle 1 sees a potential $`V(x_1)`$ the interaction becomes $`V\left(X-{\displaystyle \frac{ϵP}{2B}}\right)`$ (18) ### 1.2 Quantum level The main problem in quantizing the system is to correctly define expressions like (18) which in general have factor ordering and other quantum ambiguities. In order to define them, let us assume that $`V`$ can be expressed as a Fourier transform $`V(x)={\displaystyle \int 𝑑q\stackrel{~}{V}(q)e^{iqx}}`$ (19) We can then formally write $`V(X-{\displaystyle \frac{ϵP}{2B}})={\displaystyle \int 𝑑q\stackrel{~}{V}(q)e^{iq(X-\frac{ϵP}{2B})}}`$ (20) The factor ordering is not ambiguous because $`[q_iX^i,q_lϵ^{lj}P_j]=q_iq_lϵ^{lj}[X^i,P_j]=0`$ (21) Consider the matrix element $`\langle k|\mathrm{exp}[iq(X-{\displaystyle \frac{ϵP}{2B}})]|l\rangle `$ (22) where $`\langle k|`$ and $`|l\rangle `$ are momentum eigenvectors.
Using eq.(17) we can write this as $`\langle k|\mathrm{exp}[iqX]\mathrm{exp}[-i{\displaystyle \frac{qϵP}{2B}}]|l\rangle `$ (23) Since $`|l\rangle `$ is an eigenvector of $`P`$ this becomes $`\langle k|\mathrm{exp}[iqX]|l\rangle \mathrm{exp}[-i{\displaystyle \frac{qϵl}{2B}}]=\delta (k-q-l)\mathrm{exp}[-iqϵl/2B]`$ (24) The phase factor is the usual Moyal bracket phase that is ubiquitous in noncommutative geometry. ## 2 String theory in magnetic fields Let us consider bosonic string theory in the presence of a D3-brane. The coordinates of the brane are $`x^0`$, $`x^1`$, $`x^2`$, $`x^3`$. The remaining coordinates will play no role. We will also assume a background antisymmetric tensor field $`B_{\mu \nu }`$ in the 1,2 direction. We will study the open string sector with string ends attached to the D3-brane in the light cone frame. Define $`x^\pm =x^0\pm x^3`$ (25) and make the usual light cone choice of world sheet time $`\tau =x^+`$ (26) The string action is $`S={\displaystyle \frac{1}{2}}{\displaystyle \int _{-L}^{L}}𝑑\tau 𝑑\sigma \left[\left({\displaystyle \frac{\partial x^i}{\partial \tau }}\right)^2-\left({\displaystyle \frac{\partial x^i}{\partial \sigma }}\right)^2+B_{ij}\left({\displaystyle \frac{\partial x^i}{\partial \tau }}\right)\left({\displaystyle \frac{\partial x^j}{\partial \sigma }}\right)\right]`$ (27) We have numerically fixed $`\alpha ^{\prime }`$ and the parameter $`L`$ can be identified with $`P_{-}`$, the momentum conjugate to $`x_{-}`$. In what follows we will be interested in the limit $`B\to \mathrm{\infty }`$. Let us make the following rescalings $`\{\begin{array}{c}x^i={\displaystyle \frac{y^i}{\sqrt{B}}}\hfill \\ \tau =tB\hfill \end{array}`$ (30) Then $`S={\displaystyle \frac{1}{2}}{\displaystyle \int _{-L}^{L}}𝑑\tau 𝑑\sigma \left[{\displaystyle \frac{1}{B^2}}\left({\displaystyle \frac{\partial y}{\partial t}}\right)^2-\left({\displaystyle \frac{\partial y}{\partial \sigma }}\right)^2+ϵ_{ij}\left({\displaystyle \frac{\partial y^i}{\partial t}}\right)\left({\displaystyle \frac{\partial y^j}{\partial \sigma }}\right)\right]`$ (31) Now for $`B\to \mathrm{\infty }`$ we can drop the first term.
Furthermore by an integration by parts and up to a total time derivative the last term can be written $`ϵ_{ij}{\displaystyle \frac{\partial y_i}{\partial t}}y_j|_{-L}^{L}`$ (32) Thus $`S=-{\displaystyle \frac{1}{2}}{\displaystyle \int 𝑑\sigma 𝑑\tau \left(\frac{\partial y}{\partial \sigma }\right)^2}+ϵ_{ij}\dot{y_i}y_j|_{-L}^{L}`$ (33) Since for $`\sigma \ne \pm L`$ the time derivatives of $`y`$ do not appear in $`S`$ we may trivially integrate them out. The solution of the classical equation of motion is $`y(\sigma )=y+{\displaystyle \frac{\mathrm{\Delta }\sigma }{L}}`$ (34) with $`\mathrm{\Delta }`$ and $`y`$ independent of $`\sigma `$. The resulting action is $`\mathcal{L}=\left[-{\displaystyle \frac{2\mathrm{\Delta }^2}{L}}+\dot{y}ϵ\mathrm{\Delta }\right]`$ (35) Evidently, the action is of the same form as the model in section 1 with $`B`$ and $`K`$ rescaled. ## 3 The interaction vertex Interactions in light cone string theory are represented by string splitting and joining. Consider two incoming strings with momenta $`p_1`$, $`p_2`$ and center of mass positions $`y_1`$, $`y_2`$. If their endpoints coincide they can join to form a third string with momentum $`p_3`$. The constraints on the endpoints are summarized by the overlap $`\delta `$ function $`\nu =\delta ((y_1-\mathrm{\Delta }_1)-(y_2+\mathrm{\Delta }_2))\delta ((y_2-\mathrm{\Delta }_2)-(y_3+\mathrm{\Delta }_3))\delta ((y_3-\mathrm{\Delta }_3)-(y_1+\mathrm{\Delta }_1))`$ (36) From eq. (29) we see that the center of mass momentum is related to $`\mathrm{\Delta }`$ by $`P=ϵ\mathrm{\Delta }`$ (37) Inserting this in eq. (30) gives the vertex $`\nu =\delta (y_1-y_2+(ϵp_1+ϵp_2))\delta (y_2-y_3+(ϵp_2+ϵp_3))\delta (y_3-y_1+(ϵp_3+ϵp_1))`$ (38) To get the vertex in momentum space multiply by $`e^{i(p_1y_1+p_2y_2+p_3y_3)}`$ and integrate over $`y`$. This yields $`\nu =e^{i(p_1ϵp_2)}\delta (p_1+p_2+p_3)`$ (39) This is the usual form of the vertex in noncommutative field theory.
We have scaled the “transverse” coordinates $`x^1`$, $`x^2`$ (but not $`x^0`$, $`x^3`$) and momenta so that the $`B`$ field does not appear in the vertex. If we go back to the original units the phases in (39) will be proportional to $`1/B`$. Evidently a quantum of noncommutative Yang Mills theory may be thought of as a straight string connecting two opposite charges. The separation vector $`\mathrm{\Delta }`$ is perpendicular to the direction of motion $`P`$. Now consider the geometry of the 3-body vertex. The string endpoints $`u`$, $`v`$, $`w`$ define a triangle with sides $`\begin{array}{c}\mathrm{\Delta }_1=(u-v)\\ \mathrm{\Delta }_2=(v-w)\\ \mathrm{\Delta }_3=(w-u)\end{array}`$ (43) and the three momenta are perpendicular to the corresponding $`\mathrm{\Delta }`$. It is straightforward to see that the phase $`ϵ_{ij}p_iq_j/B\equiv p\wedge q`$ (44) is just the area of the triangle times $`B`$. In other words, it is the magnetic flux through the triangle. Note that it can be of either sign. More generally, we may consider a Feynman tree diagram constructed from such vertices. For example consider figure (1a). The overall phase is the total flux through the triangles A, B and C. In fact we can simplify this by shrinking the internal propagators to get figure (1b). Thus the phase is the flux through a polygon formed from the $`\mathrm{\Delta }`$’s of the external lines. The phase depends only on the momenta of the external lines and their cyclic order. ## 4 Structure of Perturbation Theory In this section we will consider the effects of the Moyal phases on the structure of Feynman amplitudes in noncommutative Yang Mills theory. Let us first review the diagram rules for ordinary Yang Mills theory in ’t Hooft double-line representation. The gauge propagator can be represented as a double line as if the gauge boson were a quark-antiquark pair as in figure (2).
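The "flux through the triangle" statement is easy to check numerically. In the sketch below (my own, in scaled units with the $`B`$-dependent normalization set to one, so overall factors of 2 and $`B`$ are a matter of convention) the vertex phase is cyclically invariant and proportional to the triangle's area:

```python
import numpy as np

def wedge(p, q):
    # antisymmetric product eps_ij p_i q_j in the 1-2 plane
    return p[0] * q[1] - p[1] * q[0]

rng = np.random.default_rng(0)
d1, d2 = rng.normal(size=2), rng.normal(size=2)
d3 = -(d1 + d2)                      # the three sides close the triangle

eps = np.array([[0.0, 1.0], [-1.0, 0.0]])
p1, p2, p3 = eps @ d1, eps @ d2, eps @ d3   # momenta perpendicular to sides

area = 0.5 * abs(wedge(d1, d2))      # triangle area from two of its sides
phase = wedge(p1, p2)                # vertex phase in these units
```

Because $`p_1+p_2+p_3=0`$, the phase computed from any adjacent pair of momenta is the same, which is why it depends only on the external momenta and their cyclic order.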
Each gluon is equipped with a pair of gauge indices $`i,j`$, a momentum $`p`$ and a polarization $`\epsilon `$ satisfying $`\epsilon \cdot p=\epsilon ^\mu p_\mu =0`$. The vertex describing the 3-gauge-boson interaction is shown in figure (3). In addition to Kronecker $`\delta `$ for the gauge indices and momentum $`\delta `$ functions the vertex contains the factor $`(\epsilon _1\cdot p_3+\epsilon _3\cdot p_2+\epsilon _2\cdot p_1)`$ (45) The factor is antisymmetric under interchange of any pair and so it must be accompanied by an antisymmetric function of the gauge indices. For a purely abelian theory the vertex vanishes when symmetrized. Now we add the new factor coming from the Moyal bracket. This factor is $`e^{ip_1\wedge p_2}=e^{ip_2\wedge p_3}=e^{ip_3\wedge p_1}`$ (46) where $`p_a\wedge p_b`$ indicates an antisymmetric product $`\begin{array}{c}p\wedge q=p_\mu q_\nu \theta ^{\mu \nu }\\ \theta ^{\mu \nu }=-\theta ^{\nu \mu }\end{array}`$ (49) Because these factors are not symmetric under interchange of particles, the vertex no longer vanishes when Bose symmetrized even for the Abelian theory. The phase factors satisfy an important identity. Let us consider the phase factors that accompany a given diagram. In fact from now on a diagram will indicate only the phase factor from the product of vertices. Now consider the diagram in figure (4a). It is given by $`e^{i(p_1\wedge p_2)}e^{i(p_1+p_2)\wedge p_3}=e^{i(p_1\wedge p_2+p_2\wedge p_3+p_1\wedge p_3)}`$ (50) On the other hand the dual diagram figure (4b) is given by $`e^{i(p_1\wedge (p_2+p_3)+p_2\wedge p_3)}`$. It is identical to the previous diagram. Thus the Moyal phases satisfy old fashioned “channel duality”. This conclusion is also obvious from the “flux through polygon” construction of the previous section. In what follows, a “duality move” will refer to a replacement of a diagram such as in figure (4a) by the dual diagram in figure (4b). Now consider any planar diagram with $`L`$ loops.
By a series of “duality moves” it can be brought to the form indicated in figure (5) consisting of a tree with $`L`$ simple one-loop tadpoles. Let us consider the tadpole, figure (6). The phase factor is just $`e^{iq\wedge q}=1`$. Thus the loop contributes nothing to the phase and the net effect of the Moyal factors is exactly that of the tree diagram. In fact all trees contributing to a given topology have the same phase, which is a function only of the external momenta. The result is that for planar diagrams the Moyal phases do not affect the Feynman integrations at all. In particular the planar diagrams have exactly the same divergences as in the commutative theory. Evidently in the large N limit noncommutative Yang Mills = ordinary Yang Mills. On the other hand, divergences that occur in nonplanar diagrams can be regulated by the phase factors. For example consider the nonplanar diagram in figure (7). The Moyal phase for the diagram is $`e^{ip\wedge q}e^{ip\wedge q}=e^{2ip\wedge q}`$ (51) and does not cancel. It is not difficult to see that such oscillating phases will regulate divergent diagrams and make them finite, unless the diagram contains divergent planar subdiagrams. Thus it seems that the leading high momentum behavior of the theory is controlled by the planar diagrams. Among other things it means that in this region the $`1/N`$ corrections to the ’t Hooft limit must vanish. An interesting question arises if we study the theory on a torus of finite size . For an ordinary local field theory high momentum behavior basically corresponds to small distance behavior. For this reason we expect the high momentum behavior on a torus to be identical to that in infinite space once the momentum becomes much larger than the inverse size of the torus. However in the noncommutative case the story is more interesting. We have seen that high momentum in the 1,2 plane is associated with large distances in the perpendicular direction.
Most likely this means that the finite torus generically behaves very differently at high momentum than the infinite plane. Indeed there is an exception to the rule that nonplanar diagrams are finite. If a line with a nonplanar self energy insertion such as in figure (7) happens to have vanishing momentum in the 1,2 plane then according to eq. (40) its phase will vanish. Thus for a set of measure zero, the nonplanar self energy diagram can diverge. This presumably leads to no divergences in infinite space when the line in question is integrated over. The situation could be different for compact noncommutative geometries since integrals over momenta are replaced by sums . The fact that the large N limit is essentially the same for noncommutative and ordinary Yang Mills theories implies that in the AdS/CFT correspondence the introduction of noncommutative geometry does not change the thermodynamics of the theory . It may also be connected to the fact that in the matrix theory construction of Connes-Douglas-Schwartz and Douglas-Hull, the large N limit effectively decompactifies $`X^{11}`$ and should therefore eliminate dependence on the 3-form potential. However the argument is not straightforward since in matrix theory we are not usually in the ’t Hooft limit. ## Acknowledgements L. S. would like to thank Steve Shenker for discussion.
# Crystallization Kinetics in the Swift–Hohenberg Model ## Effect of noise. Let us consider the effect of weak additive noise $`\eta (x,t)`$ in the r.h.s. of Eq. (1). For simplicity we assume that $`\eta `$ is delta-correlated noise with the intensity (temperature) $`T`$: $$\langle \eta (x,t)\eta (x^{\prime },t^{\prime })\rangle =T\delta (x-x^{\prime })\delta (t-t^{\prime }).$$ In the presence of noise, there is no sharp threshold for the onset of motion at $`ϵ<ϵ_c`$. Instead, thermally-activated motion (creep) occurs at $`ϵ>ϵ_c`$ (see Fig. 5). In this case the average creep velocity is determined by the intensity of noise. For $`ϵ<ϵ_c`$ the noise will slightly increase the speed of the front. In contrast to the deterministic motion, the intervals between consecutive nucleation events are random. In order to estimate the effect of the noise at $`\delta >0`$, we will treat $`\eta `$ as a small perturbation. In this analysis, we shall not introduce scaling explicitly, since the scaling of noise that is comparable by its effect with deterministic perturbations cannot be determined a priori. Following the lines of the above analysis, we project the noise onto the zero mode to obtain the solvability condition $$\alpha \dot{a}=\delta ^{1/2}(\beta -\gamma a^2)+\delta ^{-1/2}\stackrel{~}{\eta }(t),$$ (5) where $`\stackrel{~}{\eta }(t)=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}\eta (x,t)U_1(x)𝑑x`$. For $`T=0`$, Eq. (5) has a stable fixed point $`a_s=\sqrt{\beta /\gamma }`$ and an unstable one $`a_u=-\sqrt{\beta /\gamma }`$. At $`a<a_u`$, one has explosive growth of the solution to Eq. (5), while $`a\to a_s`$ at $`a>a_u`$. Thus, we have to estimate the probability $`P`$ for the amplitude of the zero mode $`a`$ to be smaller than $`a_u`$. This quantity can be derived from the corresponding Fokker-Planck equation for the probability density $`p(a,t)`$: $$p_t=-\frac{\delta ^{1/2}}{\alpha }\frac{\partial }{\partial a}\left[(\beta -\gamma a^2)p\right]+\frac{T}{2\alpha \delta }\partial _a^2p,$$ (6) where we used $`\langle \stackrel{~}{\eta }^2\rangle =\alpha T`$. This is the standard Kramers problem.
The stationary probability $`p(a)`$ is given by $$p\propto \mathrm{exp}\left[\frac{2\delta ^{3/2}(\beta a-\gamma a^3/3)}{T}\right].$$ (7) The probability $`P`$ is given by the integral $`P=\int _{-\mathrm{\infty }}^{a_u}p(x)𝑑x`$. For $`T\ll \delta ^{3/2}`$, we can use the saddle-point method, which gives the following result: $$P(a<a_u)\propto \mathrm{exp}\left[-\frac{4\delta ^{3/2}\beta ^{3/2}}{3\gamma ^{1/2}T}\right]=\mathrm{exp}\left[-\frac{0.57\delta ^{3/2}}{T}\right].$$ (8) Since the time between the nucleation events $`\tau _n\propto 1/P`$, we find that the velocity of the front in the stable region is given by $`v\propto 1/\tau _n\propto \mathrm{exp}\left[-0.57\delta ^{3/2}/T\right]`$. At very large $`ϵ>ϵ_0=6.287`$, the flat state $`u=\pm \sqrt{ϵ-1}`$ has lower energy than the periodic state. Nevertheless, the uniform state does not invade the periodic state at any $`ϵ`$ because of the strong self-induced pinning. With large-amplitude noise there will be some probability for the flat state to propagate towards the periodic state by thermally-activated annihilation events at the edge of the periodic pattern; however, at large $`ϵ`$ the probability of annihilation at the edge is of the same order as that in the bulk of the patterned state. Thus, for very large $`ϵ`$ and large $`T`$ we may expect melting of the periodic structure both on the edge and in the bulk. The above results can be trivially extended to regular two-dimensional periodic structures (rolls) selected by the SH equation . More interesting is the behavior of a 2$`D`$ hexagonal lattice which was predicted by weakly nonlinear analysis near the bifurcation of nontrivial uniform solutions at $`ϵ=\frac{3}{2}`$ and found numerically . We anticipate that propagation of the hexagonal structure into the uniform state will exhibit the same features of stick-and-slip motion as described above, and can be studied by similar methods.
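The two key formulas of the noise analysis above — the stationary density (7) and the saddle-point exponent in (8) — can be verified symbolically; a sympy sketch:

```python
import sympy as sp

a, T, alpha, beta, gamma, delta = sp.symbols(
    'a T alpha beta gamma delta', positive=True)

# exponent of the stationary density (7)
U = 2 * delta**sp.Rational(3, 2) * (beta * a - gamma * a**3 / 3) / T
p = sp.exp(U)

# p solves the stationary Fokker-Planck equation (6)
fp = (-sp.sqrt(delta) / alpha * sp.diff((beta - gamma * a**2) * p, a)
      + T / (2 * alpha * delta) * sp.diff(p, a, 2))
assert sp.simplify(fp / p) == 0

# the barrier value of (7) at a_u = -sqrt(beta/gamma) reproduces the
# saddle-point exponent of (8)
exponent = sp.simplify(U.subs(a, -sp.sqrt(beta / gamma)))
target = (-sp.Rational(4, 3) * delta**sp.Rational(3, 2)
          * beta**sp.Rational(3, 2) / (sp.sqrt(gamma) * T))
assert sp.simplify(exponent - target) == 0
```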
For SH equation (1) at $`ϵ>\frac{3}{2}`$, hexagonal lattices coexist with roll patterns which have lower energy, so the front propagation may in fact give rise to rolls. However, the hexagons may become dominant in the modified SH equation with an added quadratic nonlinearity near $`ϵ=0`$, where the robust “crystallization” of the hexagonal lattice is expected. The work on this subject is now in progress. The authors thank the Max-Planck-Institut für Physik komplexer Systeme, Dresden, Germany for hospitality during the Workshop on Topological Defects in Non-Equilibrium Systems and Condensed Matter. ISA and LST acknowledge support from the U.S. DOE under grants No. W-31-109-ENG-38, DE-FG03-95ER14516, DE-FG03-96ER14592 and NSF, STCS #DMR91-20000. LMP acknowledges the support by the Fund for Promotion of Research at the Technion and by the Minerva Center for Nonlinear Physics of Complex Systems.
# Theory of Quantum Error Correction for General Noise ## Proof. Recall the necessary and sufficient conditions for a code $`𝒞`$ to permit correction of the errors in $`𝒥_e`$: $`𝒞`$ detects the operators in $`𝒥_e^{\dagger }𝒥_e`$. This condition is also correct for c-codes with a transmission basis. Since $`𝒥_e^{\dagger }𝒥_e\subseteq 𝒥_{2e}`$, the result follows. Error Bounds. To make the analysis based on minimum distance and $`e`$-error correction useful, it is necessary to show that $`e`$-error-correcting codes protect information well. We give a quantitative relationship for the worst-case error as a function of time $`t`$. ###### Theorem 2 The error amplitude of information protected in an $`e`$-error-correcting (c-)code is at most $`(\lambda t)^{e+1}/(e+1)!`$. We defer the proof until after error-correcting codes have been characterized as subsystems. Note that independence from the internal Hamiltonian of the environment implies that even if the latter is subject to arbitrary, adversarial manipulation, the error-correcting code still effectively protects information on a time scale of $`O(1/\lambda )`$. Existence of Large Codes. A goal of constructing good error-correcting codes is to maximize the dimension of minimum (c-)distance $`d`$ codes. The greedy algorithm for constructing good minimum-distance classical codes works well in the general case. Let $`\{E_1=I,E_2,\mathrm{},E_D\}`$ be a basis of $`𝒥_{d-1}`$, with dimension $`D`$, and let $`\lceil x\rceil `$ denote the least integer $`\geq x`$. ###### Theorem 3 There exist codes of $`S`$ with minimum c-distance $`d`$ of dimension at least $`\lceil \frac{N}{D}\rceil `$. ## Proof. Minimum c-distance is equivalent to the existence of an orthonormal basis $`|c_1\rangle ,\mathrm{},|c_k\rangle `$ of the c-code such that, for each operator $`E_l`$ to be detected, $`\langle c_i|E_l|c_j\rangle `$ $`=`$ $`\alpha _{i,l}\delta _{i,j}.`$ (4) The proof greedily constructs such a basis. Let $`|c_1\rangle `$ be any state of $`S`$. Suppose that $`|c_1\rangle ,\mathrm{},|c_k\rangle `$ have been found, fulfilling (4).
Choose $`|c_{k+1}\rangle `$ orthogonal to the vectors $`E_i|c_j\rangle `$, $`i=1,\mathrm{},D`$, $`j=1,\mathrm{},k`$. Such a state exists provided that $`kD<N`$. The new set of $`|c_i\rangle `$ satisfies (4). Upon continuing until the set cannot be extended, a c-code of dimension at least $`\lceil \frac{N}{D}\rceil `$ is found. Our best general construction of good codes for quantum information is based on finding a subcode of a c-code. ###### Theorem 4 There exist minimum distance $`d`$ codes of $`S`$ of dimension at least $`\lceil \frac{N}{D}\rceil \frac{1}{D+1}`$. ## Proof. Let $`𝒞`$ be a c-code of $`S`$ of dimension at least $`\lceil \frac{N}{D}\rceil `$ with basis $`|c_i\rangle `$ satisfying (4). Let $`Y`$ be the set of indices of the basis vectors. To construct a large quantum code, we partition $`Y`$ into subsets $`Y_i`$ and seek non-negative coefficients $`\beta _{i,j}`$, satisfying $`\sum _{j\in Y_i}\beta _{i,j}=1`$. Let $`|q_i\rangle =\sum _{j\in Y_i}\sqrt{\beta _{i,j}}|c_j\rangle `$. Then the orthonormal vectors $`|q_i\rangle `$ span the desired code provided $`\langle q_i|E_l|q_j\rangle `$ $`=`$ $`\gamma _l\delta _{ij},\forall i,j,l.`$ (5) Compute $`\gamma _{l,i}=\sum _{j\in Y_i}\beta _{i,j}\alpha _{j,l}`$, the $`\alpha _{j,l}`$’s being given in (4). We need the $`\gamma _{l,i}`$ to be independent of $`i`$. This problem can be cast in terms of a convex sets problem. We need to find as many disjoint subsets as possible of the set of vectors $`\stackrel{\to }{\alpha }_j=\{\alpha _{j,l}\}_l`$ with the property that their convex closures have a common intersection. Since $`𝒥_{d-1}`$ is †-closed, the $`\stackrel{\to }{\alpha }_j`$ live in a subspace of real dimension $`D`$. By invoking a generalization of Radon’s Theorem , a sufficient condition for the existence of at least $`r`$ such sets is $`r(D+1)-D\leq \lceil N/D\rceil `$. The result follows. While the result of Thm. 4 is fully general, the proof does not yield a straightforward constructive method.
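The greedy construction of Theorem 3, by contrast, is directly implementable. A toy numpy sketch for a small system — random Hermitian operators stand in for a generic †-closed error basis, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                   # dimension of S

def herm(n):                            # random Hermitian "error" operator
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (A + A.conj().T) / 2

# a dagger-closed error space: span{I, E2, E3}, so D = 3
ops = [np.eye(N, dtype=complex), herm(N), herm(N)]
D = len(ops)

v = rng.standard_normal(N) + 1j * rng.standard_normal(N)
basis = [v / np.linalg.norm(v)]         # greedy c-code basis: start anywhere
while True:
    spanned = np.array([E @ c for E in ops for c in basis]).T   # N x (D*k)
    U, s, _ = np.linalg.svd(spanned, full_matrices=True)
    rank = int((s > 1e-10).sum())
    if rank >= N:                       # no orthogonal room left: stop
        break
    basis.append(U[:, rank])            # any unit vector in the complement

k = len(basis)
assert k >= int(np.ceil(N / D))         # Theorem 3 lower bound
for E in ops:                           # condition (4): off-diagonals vanish
    for i, ci in enumerate(basis):
        for j, cj in enumerate(basis):
            assert i == j or abs(np.conj(ci) @ E @ cj) < 1e-8
```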
Also, the lower bound on the existence of good codes is sub-optimal for various systems of interest, including qubits with linear interactions, where the achieved rate is well below the best lower bounds known . According to Thms. 3 and 4, the above bounds for minimum distance $`d`$ codes are found to be less favorable for independent than for collective interactions, both for classical and for quantum information. This reflects the lower level of complexity of the error process generated by collective interactions. Subsystems. If a system consists of a number of qubits, the obvious subsystems are the qubits. If the system consists of a number of photon modes, each mode is a subsystem. However, in order to use these modes as qubits, one could choose the two polarization states for a single photon in a mode as the computational basis. The relevant system is then the subspace where each mode is occupied by exactly one photon, and it is in this subspace that we can identify the qubit subsystems. In both examples, subsystems appear as factors (in the tensor product sense) of subspaces of a larger state space. To avoid working with explicit bases and states, it is convenient to resort to a general algebraic definition. We shall characterize a subsystem of $`S`$ in terms of a subalgebra of operators acting on $`𝒮`$ together with an irrep of the subalgebra. This is motivated by the following fundamental result from the representation theory of †-closed operator algebras . ###### Theorem 5 Let $`𝒜`$ be a †-closed algebra of operators on $`𝒮`$, including the identity. Then $`𝒮`$ is isomorphic to a direct sum, $`𝒮`$ $`\cong `$ $`{\displaystyle \underset{i}{\oplus }}𝒞_i\otimes 𝒵_i,`$ (6) in such a way that in the representation on the right-hand side, $`𝒜={\oplus }_i\text{Mat}(𝒞_i)\otimes I^{(Z_i)}`$ and the commutant of $`𝒜`$ is given by $`Z(𝒜)={\oplus }_iI^{(C_i)}\otimes \text{Mat}(𝒵_i)`$.
Here, $`\text{Mat}(ℋ)`$ denotes the set of all linear operators from $`ℋ`$ to itself, while the commutant $`Z(𝒜)`$ is the space of all operators commuting with $`𝒜`$. Formally, each factor $`Z_i`$ $`(C_i)`$ in Thm. 5 defines a subsystem of $`S`$ with associated state space $`𝒵_i`$ $`(𝒞_i)`$. As a result of the theorem, subsystems are naturally definable in terms of either algebras or their commutants. Noiseless Subsystems. Consider the interaction algebra $`𝒥`$ associated with (1). Since $`𝒥`$ is †-closed, the representation of Thm. 5 applies. For each subsystem $`Z_i`$, states in $`𝒵_i`$ are completely immune to the interaction, as the interaction operators only act on the co-factor $`𝒞_i`$. Thus, $`Z_i`$ is a noiseless subsystem, i.e., a subsystem where information is intrinsically stabilized against the effects of the noise. As remarked above, a trivial example occurs if we are given two qubits, where only the second one is susceptible to noise. In this case, information in the first qubit is canonically maintained with no need for corrective action. Noiseless (or decoherence-free) subspaces can be recognized as special cases of the general decomposition (6) for appropriate interaction algebras $`𝒥`$. However, relevant situations can be devised, where noiseless subsystems exist in the absence of noiseless subspaces. Example. Let us consider three qubits $`A,B,C`$ with collective linear interactions. The interactions are the generators for spatial rotations. As pointed out in , no state of three qubits is invariant under spatial rotations, the minimal implementation of a noiseless subspace requiring $`n=4`$ qubits. However, the state space decomposes into one spin-$`\frac{3}{2}`$ and two spin-$`\frac{1}{2}`$ irreducible subspaces. The two spin-$`\frac{1}{2}`$ components together are representable as the product of two two-state spaces as in Thm. 5, with $`𝒥`$ acting only on the first. Thus the second one is a noiseless subsystem.
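The claimed decomposition can be confirmed numerically by diagonalizing the total angular momentum of the three qubits; a short numpy sketch:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def on_qubit(a, pos):                   # operator a acting on qubit pos of 3
    out = np.array([[1.0 + 0j]])
    for q in range(3):
        out = np.kron(out, a if q == pos else I2)
    return out

# collective (rotation) generators J_alpha = (1/2) sum_q sigma_alpha^(q)
J = [sum(on_qubit(s, q) for q in range(3)) / 2 for s in (sx, sy, sz)]
J2 = sum(Ja @ Ja for Ja in J)           # total angular momentum squared

evals = np.round(np.linalg.eigvalsh(J2), 6)
# one spin-3/2 multiplet: j(j+1) = 15/4, four states
assert (evals == 3.75).sum() == 4
# two spin-1/2 multiplets: j(j+1) = 3/4, 2 x 2 = 4 states
assert (evals == 0.75).sum() == 4
```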
Another method of finding this subsystem is to observe that the commutant $`Z(𝒥)`$ is non-trivial. In particular, it includes the scalars under spatial rotations, $`s_1`$ $`=`$ $`\sigma _x^{(A)}\sigma _x^{(B)}+\sigma _y^{(A)}\sigma _y^{(B)}+\sigma _z^{(A)}\sigma _z^{(B)},`$ (7) $`s_2`$ $`=`$ $`\sigma _x^{(A)}\sigma _x^{(C)}+\sigma _y^{(A)}\sigma _y^{(C)}+\sigma _z^{(A)}\sigma _z^{(C)},`$ (8) which are generating observables for the noiseless subsystem. Equivalently, the latter is seen to support one of the irreps of the algebra generated by the scalars. Error-correcting Codes as Subsystems. The traditional view of error-correcting codes involves encoding the information and correcting errors after the information carriers are transmitted through a noisy channel. The concept of noiseless subsystem shows that, for the purposes of information maintenance, it is not necessary to correct errors, insofar as they affect components independent of the system where information is stored. In general, we wish to protect the information against all errors in $`𝒥_e`$ for some reasonably large $`e`$. A subsystem unaffected by the operators in $`𝒥_e`$ is automatically noiseless, but in most cases of interest noiseless subsystems do not exist, so it is necessary to take an active role in maintaining information. Rather than using error correction to restore the overall state of the system after errors have happened, we propose to use a quantum operation before the latter occur, in such a way that the net effect of the quantum operation followed by errors in $`𝒥_e`$ assures preservation of the information in a subsystem. A quantum operation is described by a family $`𝒜=\{A_i\}_i`$ of linear operators acting on $`𝒮`$, evolving the system density operator as $`\rho `$ $`\to `$ $`\sum _iA_i\rho A_i^{\dagger }`$. The combined action of the quantum operation, followed by errors in $`𝒥_e`$, is represented by the product of an operator $`E\in 𝒥_e`$ and one of the operators $`A_i\in 𝒜`$.
Consequently, a state of a noiseless subsystem of the †-closed algebra generated by $`𝒥_e𝒜`$ is preserved in this process. ###### Theorem 6 Every $`e`$-error-correcting code arises as a noiseless subsystem of $`𝒥_e𝒜`$ for some $`𝒜`$ with the property that $`I\in \text{span}(𝒜^{\dagger }𝒜)`$. Conversely, every noiseless subsystem of $`𝒥_e𝒜`$ with $`𝒜`$ satisfying the above condition corresponds to an $`e`$-error-correcting code. ## Proof. The fact that error-correcting codes yield such noiseless subsystems follows from Thm. III.5 of by letting $`𝒜`$ consist of operators that return the state of the error system to the state $`|(0)\rangle `$ (in the notation of ). Conversely, the condition $`I\in \text{span}(𝒜^{\dagger }𝒜)`$ ensures the existence of a quantum operation whose operators are in $`𝒜`$. Thus, the process suggested earlier protects the information against errors in $`𝒥_e`$. Because of the necessity of the conditions for error-correcting codes , there exists an associated $`e`$-error-correcting code in the usual sense. As a consequence, noiseless subsystems are infinite-distance quantum error-correcting codes. Error Analysis. We are now ready to give a proof of Thm. 2 based on viewing error-correcting codes as subsystems protected by an initial quantum operation $`𝒜`$. ## Proof of Thm. 2. By purifying the environment , we can assume that the environment’s initial state is $`|\psi \rangle _B`$. The initial state of the system has the intended state in the subsystem associated with the error-correcting code. Again, by purifying and by adding the reference system to $`S`$, we can assume that the state is given by $`|\varphi _0\rangle _S`$. The quantum operation $`𝒜`$ can be assumed to arise from a unitary evolution $`U`$ applied to $`|\varphi _0\rangle _S|0\rangle _A`$, where $`A`$ is an ancillary system. Let $`|\varphi \rangle =U|\varphi _0\rangle _S|0\rangle _A`$ and consider the subsequent interaction with the environment over time $`t`$.
By slicing the interaction time into intervals of duration $`t/n`$, the overall evolution up to time $`t`$ can be written as ($`\hslash =1`$) $`\underset{n\to \mathrm{\infty }}{lim}{\displaystyle \underset{k=1}{\overset{n}{\prod }}}\delta U_k^{(S)}\delta U_k^{(B)}|\varphi \rangle |\psi \rangle _B,`$ (9) where $`\delta U_k^{(S)}`$, $`\delta U_k^{(B)}`$ denote the unitary evolutions during the $`k`$’th interval due to $`J`$ and to the environment’s internal Hamiltonian respectively. It suffices to consider a first-order expansion $`\delta U_k^{(S)}=I-iJ(t/n)+O((t/n)^2)`$. The elements contributing noise all involve at least $`e+1`$ factors of $`J`$. By distributing some of the sums $`I-iJt/n`$ starting at the first time interval, the expression inside the limit can be thought of as a sum over the branches of a binary tree of products of operators associated with the edges and nodes of the tree . The root node is labeled $`\delta U_1^{(B)},`$ and its two edges by $`I`$ and $`-iJt/n`$ respectively. The two children are labeled by $`\delta U_2^{(B)}`$, their descendant edges by $`I`$ and $`-iJt/n`$ and so on. We choose to terminate a branch at a point where there are $`e+1`$ factors of $`-iJt/n`$ on its path and label the leaf with the remaining product of unitary operators. The total error is estimated by summing the error amplitudes associated with the products along each of these terminated branches. A counting argument shows that there are $`\left(\genfrac{}{}{0pt}{}{n}{e+1}\right)`$ such branches. Using $`|CD|\leq |C||D|`$ and the fact that unitary operators preserve the amplitude, the error of each such branch is bounded by $`(\lambda t/n)^{e+1}`$. Thus, the error amplitude is at most $`\left(\genfrac{}{}{0pt}{}{n}{e+1}\right)(\lambda t/n)^{e+1}\leq (\lambda t)^{e+1}/(e+1)!`$.
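As a numeric sanity check of the branch-counting estimate in the proof above, the finite-$`n`$ count sits below $`(\lambda t)^{e+1}/(e+1)!`$ and approaches it as $`n\to \mathrm{\infty }`$ (the values of $`\lambda t`$ and $`e`$ below are arbitrary):

```python
from math import comb, factorial

lam_t, e = 0.3, 2                       # lambda*t and the error order e
limit = lam_t**(e + 1) / factorial(e + 1)

for n in (10, 10**3, 10**6):
    count = comb(n, e + 1) * (lam_t / n)**(e + 1)
    # binom(n, e+1) (lambda t / n)^{e+1} <= (lambda t)^{e+1} / (e+1)!
    assert count <= limit + 1e-18

# and the bound is saturated in the n -> infinity limit
final = comb(10**6, e + 1) * (lam_t / 10**6)**(e + 1)
assert abs(final - limit) / limit < 1e-5
```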
For instance, when the noise is to a good approximation Markovian, the evolution of the system density operator can be written as $`\rho `$ $`\to `$ $`\rho _t=lim_{n\to \mathrm{\infty }}_{t/n}^n(\rho )`$, where the superoperator $`_{t/n}`$ takes the form $`_{t/n}(\rho )=\rho +(t/n)(\sum _iL_i\rho L_i^{\dagger }+V\rho +\rho V^{\dagger })`$ for a suitable choice of operators $`L_i`$ and $`V`$ . Our techniques apply with $`𝒥_1`$ given by the linear span of $`I`$, these operators and their Hermitian transposes. The bound of Thm. 2 holds provided we replace error amplitude with error probability and redefine $`\lambda `$ as $`\lambda `$ $`=`$ $`2|V|+|L_1|^2+|L_2|^2+\mathrm{}.`$ (10) The need to consider error probability rather than amplitude is due to the statistical nature of Markovian noise. ###### Theorem 7 The error probability of information protected in an $`e`$-error-correcting (c-)code subjected to Markovian noise is bounded by $`(\lambda t)^{e+1}/(e+1)!`$. ## Proof. Let $`\rho `$ be the state of the system after the protecting quantum operation has been applied, and $`\rho _t=lim_{n\to \mathrm{\infty }}_{t/n}^n(\rho )`$. If we write $`\rho _t=\rho _c+\rho _e`$, where $`\rho _c`$ has no error in the subsystem of interest, then the error probability is bounded by $`\text{tr}(\sqrt{\rho _e^{\dagger }\rho _e})`$. The product of $`n`$ infinitesimal evolutions can be expanded as in the proof of Thm. 2, but replacing unitary operators with trace-preserving superoperators and omitting the environmental contribution. The non-identity terms in the branches are superoperators of the form $`\rho ^{\prime \prime }`$ $`\to `$ $`(t/n)(\sum _iL_i\rho ^{\prime \prime }L_i^{\dagger }+V\rho ^{\prime \prime }+\rho ^{\prime \prime }V^{\dagger })`$, whose effect can be bounded by the parameter $`\lambda `$ given in (10). (Use the fact that if we define $`l_1(\rho )=\text{tr}(\sqrt{\rho ^{\dagger }\rho })`$, then $`l_1(\rho U)\leq l_1(\rho )|U|`$). In the past, error probability has been used in almost all treatments of noise. Therefore, Thm.
7 further connects our formulation of error correction to the usual one. Discrete Quantum Operations. A picture of the situation involving qubits coupled to independent environments, which is what has been typically addressed by quantum error-correction theory, is that a known, somewhat noisy quantum operation is applied to each qubit. In this case, time does not play an explicit role. Instead, we are given quantum operations $`_i`$, each expressible in the form $`_i(\rho )`$ $`=`$ $`(I+V_i)\rho (I+V_i^{\dagger })+L_{i1}\rho L_{i1}^{\dagger }+L_{i2}\rho L_{i2}^{\dagger }+\mathrm{}`$. The noise operation involves applying each quantum operation to the system in some order. To apply our theory and bounds to this situation, define $`𝒥_1`$ as the span of $`I`$, the operators $`V_i`$, $`L_{ij}`$ and their Hermitian transposes. The noise-strength parameter can be redefined as $`\lambda =\mathrm{max}_i(2|V_i|+|V_i|^2+\sum _j|L_{ij}|^2)`$, so that Thm. 7 applies for $`t=1`$. This gives estimates not far from the standard ones in the case of qubits subject to depolarizing noise . Time-Dependent Noise. Time-dependent noise can arise from the use of a rotating frame to compensate for internal evolution of the system, or to compensate for time-dependent control actions, such as the ones exploited in decoupling schemes for open quantum systems . To adapt our theory to this situation, it suffices to maximize the expression of $`\lambda `$ over time and choose $`𝒥_1`$ as the span of all first-order operators occurring at various times. Summary. By suitably incorporating the description of the error generation process within a general algebraic setting, we showed how to reformulate quantum error correction without restricting the statistical properties of the environmental noise. The existence of large codes was established for both classical and quantum information, opening the way to accurate quantum computations in the presence of arbitrary errors.
In addition to substantially strengthening the power of quantum error-correction theory, our analysis points to the notion of a noiseless subsystem as an emerging unifying framework for quantum information protection. Full exploitation of the above concept should prove fruitful in the general context of quantum information processing. Acknowledgments. We thank Asher Peres for the three qubit example of a noiseless subsystem. E. K. and R. L. received support from the Department of Energy, under contract W-7405-ENG-36, and from the NSA. L. V. was supported in part by DARPA/ARO under the QUIC initiative.
# Adaptive optics near-IR imaging of NGC 2992 - unveiling core structures related to radio figure-8 loops ## 1 Introduction NGC 2992 is an Sa galaxy seen almost edge-on with a nearby, likely interacting, companion (NGC 2993). It possesses an active Seyfert 1.9 nucleus. A large and prominent dust lane runs through the center of the galaxy roughly north to south, splitting the nuclear region in two. Ulvestad and Wilson (1984) found that the radio structure of the nucleus of NGC 2992 has the shape of a “figure-8”, with a maximum extent of about 550 pc, oriented out of the plane of the galactic disk (assumed distance 37.3 Mpc using the recession velocity relative to the Local Group of 1864 km s<sup>-1</sup> (Ward et al. 1980) and $`H_0=50`$ km s<sup>-1</sup>Mpc<sup>-1</sup>, angular scale of 182 pc / 1″). Most of the 6 cm radio emission from the center of the galaxy arises in the loops of the figure-8 rather than in the nucleus. There are several favored models for such figure-8 radio emission. The loops could result from expanding gas bubbles which are seen preferentially as limb-brightened loops (Wehrle & Morris 1988). Such outflows may be associated with the AGN core. The continually diminishing X-ray emission has been interpreted as a dying active nucleus (Bassani et al. 1998), possibly because fuel is no longer being channeled down to an accretion disk region. However, this would not likely affect the appearance of the surrounding region at our resolution of ∼20 pc, since even at outflow speeds close to $`c`$ the timescale is ∼50 years. Alternatively, starburst-driven superwinds could result in expanding gas bubbles leading to such emission (Heckman et al. 1990). Extended X-ray and H$`\alpha `$ emission (Colbert et al. 1996, Colbert et al. 1998) perpendicular to the plane of the galaxy may be an indication of the superwind.
If the superwind were produced by an energetic burst of supernovae in the past, the current radius of the loop would then imply that the SNe explosion occurred over $`2.35\times 10^5`$ years ago for a typical expansion velocity of 1000 km/s (Koo et al. 1992, Tenorio-Tagle et al. 1998). A different model proposes toroidal magnetic fields which result in loop-like synchrotron emission (Wehrle & Morris 1988). Here, strong differential rotation in the galactic nuclear disk builds up the magnetic field having some radial component, until an instability occurs leading to an expanding magnetic arch. The field configuration upon expansion away from the nucleus is then a pair of loops. The synchrotron emission results when particles are accelerated to relativistic energies in the magnetic arches. We have observed NGC 2992 at high spatial resolution using the Adaptive Optics Bonnette (AOB) on the CFHT. This is part of a larger project to map the cores of nearby Seyfert galaxies in the near-IR with adaptive optics in order to study core morphologies. Our new near-IR imagery of NGC 2992 unveils emission features (dust obscured in optical-wavelength HST imaging) which may be related to the VLA radio emission. This discovery allows a consistent model of the radio emission to be formulated, incorporating the larger scale bi-polar morphology observed in H$`\alpha `$ by Allen et al. (1998). In section 2 we outline the observations and reductions. In section 3 we present the results of our multi-wavelength analysis. Finally in section 4, we discuss the implications of the observed morphologies for various models. ## 2 Observational Details ### 2.1 CFHT AOB imaging Observations were obtained using the CFHT in March 1997 and 1998, with the Adaptive Optics Bonnette (AOB) (Rigaut et al. 1998) feeding the MONICA (1997) and KIR (1998) near-IR cameras (Nadeau et al. 1994). The AOB is based on the curvature wavefront sensing concept (Roddier et al.
1991), and uses a 19 zone bimorph mirror to correct for wavefront distortions. The MONICA detector is a Rockwell NICMOS3 array with 256x256 pixels and 0.034”/pixel sampling. The KIR camera replaced MONICA in early 1998, and uses the Hawaii 1k x 1k pixel array with the same pixel sampling. The core of this galaxy is sufficiently bright as a guiding source to enable near diffraction-limited resolution under favorable natural seeing conditions. AO correction is much more efficient in the near-IR as a result of the $`\lambda ^{1.2}`$ dependence of the Fried parameter, $`R_0`$, which characterizes the atmospheric coherence length. We obtained an H-band image with MONICA under good observing conditions (seeing FWHM $`<`$ 0$`\stackrel{\mathrm{\prime \prime }}{.}`$6), resulting in 0$`\stackrel{\mathrm{\prime \prime }}{.}`$15 resolution. The $`J`$-, $`H`$-, $`K`$-band and narrow-band $`CO`$ images obtained with KIR had worse natural seeing ($`>`$0$`\stackrel{\mathrm{\prime \prime }}{.}`$9) and the corrected PSF has a FWHM of ∼0$`\stackrel{\mathrm{\prime \prime }}{.}`$45 at $`H`$, $`K`$ and $`CO`$, and ∼0$`\stackrel{\mathrm{\prime \prime }}{.}`$60 at $`J`$. When forming colour maps, images are always smoothed to the worst resolution of the pair of images in question. As the field size is small (9” x 9” for MONICA, 36” x 36” for KIR), blank sky images were taken intermittently between science frames. On-source images were taken in a mosaic of 4 positions, alternately putting the galaxy core in each of the four quadrants of the array to account for bad pixels and image uniformity. Flux and PSF calibrations were performed using the UKIRT standard stars fs13 and fs25. Flat-field images were taken on the dome with the lamps turned on and off to account for the thermal glow of the telescope. Data reduction was performed in the standard way for near-IR imaging: i) bad pixel correction; ii) sky subtraction, using a median-averaged sky estimate; iii) flat-field correction; iv) re-centering and co-addition of the different exposures through cross-correlation techniques.
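The four reduction steps can be sketched as follows — a toy numpy version with illustrative names; a real pipeline handles many more instrumental effects:

```python
import numpy as np

def reduce_stack(frames, sky_frames, flat_on, flat_off, bad_mask):
    """Toy version of steps (i)-(iv): bad-pixel repair, median sky
    subtraction, flat-fielding, FFT cross-correlation recentering and
    co-addition. Illustrative only."""
    sky = np.median(sky_frames, axis=0)            # ii) median-averaged sky
    flat = flat_on - flat_off                      # flat minus thermal glow
    flat = flat / np.median(flat)
    reduced = []
    for f in frames:
        g = (f - sky) / flat                       # ii) + iii)
        g[bad_mask] = np.median(g)                 # i) crude bad-pixel fill
        reduced.append(g)
    ref = reduced[0]
    Fref = np.fft.fft2(ref)
    out = np.zeros_like(ref)
    for g in reduced:                              # iv) recenter and co-add
        cc = np.fft.ifft2(Fref * np.conj(np.fft.fft2(g))).real
        dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
        out += np.roll(g, (dy, dx), axis=(0, 1))
    return out / len(reduced)
```

Note the integer-pixel alignment from the cross-correlation peak; sub-pixel recentering would interpolate around the peak instead.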
The continuum image for the vibrational transition $`CO`$(2-0) absorption ($`\lambda _c`$ = 2.296$`\mu `$m, $`\mathrm{\Delta }\lambda `$ = 200 Å) was estimated by fitting a line through several regions in the $`J`$, $`H`$, and $`K`$ images with no apparent dust-lane structure. All images were convolved to 0$`\stackrel{}{.}`$6 resolution before estimating the CO continuum image. This extrapolation was then sampled with a Gaussian of the same width as the $`CO`$ narrow filter centered at 2.296$`\mu `$m. This flux level was then used to normalize the $`K`$-band image for subtraction of the $`CO`$ image, resulting in a \[continuum - $`CO`$\] index within the range typically observed (see for example Davidge & Courteau, 1999 for results on M81 with the same $`CO`$ filter used at CFHT). The above result was also checked against the following procedure. The outer regions of the image have typical early-spiral bulge colours (e.g. Glass & Moorwood 1985) and are likely to be relatively free of dust emission. By matching the $`K`$-band image in flux to the $`CO`$ image near the outer edges of the field, we obtain a zero-point in the \[continuum - $`CO`$\] image. These two procedures were found to agree within 15%. Obtaining an accurate measure of the resulting PSF of the image with AO is problematic, since the atmospheric conditions are continually changing, and a stellar image taken before or after the science exposure will likely not resemble the PSF of the science frame. An accurate PSF must be temporally and spatially coincident with the actual region of interest in the field, since the PSF degrades away from the guiding source. Even when there is a star within the relatively small field of view of the near-IR cameras used with AOB, the PSF is frequently quite different from the core of the active galaxy (guiding source). Attempts have been made to model the off-axis PSF in these cases with some success (Hutchings et al. 1998).
A technique has been developed to reconstruct a model PSF for the nuclear region of the galaxy using the adaptive optics modal control loop information obtained during the actual observations (Veran et al. 1998a,1998b). For the core brightness as seen by the wavefront sensor (V=16.5) in the MONICA H-band image, simulations have shown that the reconstructed PSF should match the true PSF to an accuracy of approximately 10% (Veran et al. 1998). The factors degrading such a reconstruction are mostly the faintness and extension of the particular galaxy core, and superior results are obtained with brighter, point-like galaxy nuclei (or stars). The error in our reconstruction is mostly in the Strehl ratio and outer artifacts, with the FWHM being very close to that of the actual PSF. The image is then deconvolved to the point at which the LUCY algorithm (Lucy 1984) converges ($``$25 iterations), using the model PSF as input. The LUCY deconvolution can create ringing artifacts around bright unresolved galaxy cores. However the core of NGC 2992 is extended while many of the structures surrounding the core are point-like. This is ideal for LUCY since the core region consists essentially of point sources superposed over a varying background. The algorithm was found to provide a believable deconvolution, with no new features appearing which were not in the original image at some level. The main benefit is the reduction of the sidelobes and of the seeing pedestal that remains at the achieved Strehl ratios of 20-30%. The estimated final resolution in the image is 0$`\stackrel{}{.}`$12 corresponding to 4 pixels. ### 2.2 Existing HST and UKIRT data An archive HST F606W filter image was obtained with the WFPC2 camera in 1994 as part of a snapshot survey of nearby active galaxies (Malkan 1998). The pixel scale with the PC camera is 0$`\stackrel{}{.}`$044 per pixel.
In order to compare this directly with our AOB image, we have interpolated and rebinned the image after rotating it to the proper orientation. The prescription to calibrate the HST F606W image to Johnson V-band as outlined in Malkan et al. (1998) is used for NGC 2992. NGC 2992 was also observed at the United Kingdom InfraRed Telescope (UKIRT) in the near-IR bands J, H and K. The plate scale is 0.29″/pixel. These data were originally presented by Alonso-Herrero et al. (1998). ### 2.3 VLA radio maps A VLA image at 6 cm (5 GHz) was obtained in 1987 by A. Wehrle and is reproduced here with her permission (figure 1, right panel). An image at 8.4GHz (3.57cm) was also obtained from the VLA archive (courtesy of H. Falcke) in C-configuration which resolves more clearly into knot-like structures (figure 1, left panel). ## 3 Results The morphology in the central region of NGC 2992 is a complicated superposition of various components, appearing most prominently in different wavelength regimes. An irregularly shaped patchy region dominates much of the western portion of the shorter wavelength images (especially $`R`$-band – see figure 2 and subsequent images). The fact that this region appears clearly as a deficit in the R-band image and takes on a patchy morphology is strong evidence for dust obscuration as the source of the colour gradient. The region is therefore most pronounced in long baseline colour maps, such as R-H, with longer wavelengths being least affected by dust. Larger scale images of the galaxy reveal this feature as a dust-lane structure bisecting the core region (see Wehrle & Morris 1988). In Table 1, we list the various structural components in the core along with the figure which best highlights the feature. ### 3.1 The Core Region In figure 2, we present the near diffraction limited H-band image of the central 7″ of NGC 2992. The HST F606W (covering V+R band - hereafter called R) image is also depicted.
The galaxy core is resolved in both the H and R filters, with a FWHM $``$0$`\stackrel{}{.}`$26 corresponding to 47.2 pc. For the mediocre correction in the KIR J and K images the core is essentially unresolved with FWHM 0$`\stackrel{}{.}`$45 ($`K`$) and 0$`\stackrel{}{.}`$6 ($`J`$). It is important to understand what percentage of the nuclear light may be emitted by an unresolved (AGN) component. We estimate the contribution to the extended galaxy core from the unresolved H-band component. Our model-constructed PSF (see section 2 and Veran et al. 1998a) is a reasonable estimate of the true PSF for the diffraction limited H-band image. We first produce a surface brightness profile of the galaxy using the IRAF task $`ELLIPSE`$, and another one for the PSF. The PSF is then scaled to match the peak of the galaxy brightness. Both profiles are integrated out to a radial distance of 1″ with the result that an unresolved component contributes at most 38% at H in a 0.2″ diameter aperture, and 17% in a 0.5″ aperture. In Table 2 we present aperture photometry of the core region. The V-H colour of the nucleus (V-H = 3.4) is typical for a reddened stellar population (Glass & Moorwood 1985). Note that the deconvolved magnitudes agree with the raw data at an aperture radius of 3″. This would be as expected given that the deconvolution process restores flux to the central spike from the surrounding pedestal. This is encouraging despite the claims that deconvolved images may not be appropriate for photometry since the non-linear algorithm does not necessarily conserve flux (Magain et al. 1998). The disagreement with Alonso-Herrero et al. (1998) for the H-band apertures less than 3″ diameter is likely a result of the improved resolution of our image coupled with our absolute errors of 4% for the RMS of calibration star measurements throughout the observing run.
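The unresolved-component estimate described above (scale the PSF profile to the galaxy peak, then compare the fluxes integrated within an aperture) can be sketched as follows; a minimal illustration assuming 1-D azimuthally averaged surface-brightness profiles on a uniform radial grid (hypothetical stand-ins for the ELLIPSE output):

```python
import numpy as np

def unresolved_fraction(r, galaxy_profile, psf_profile, aperture_radius):
    """Upper limit on the flux fraction from an unresolved component:
    PSF profile scaled to the galaxy peak, both integrated as
    I(r) * 2*pi*r dr out to the aperture radius."""
    # Scale the PSF to match the peak of the galaxy surface-brightness profile
    psf_scaled = psf_profile * (galaxy_profile[0] / psf_profile[0])

    dr = r[1] - r[0]                     # uniform radial grid assumed
    inside = r <= aperture_radius
    flux_gal = np.sum(galaxy_profile[inside] * 2 * np.pi * r[inside]) * dr
    flux_psf = np.sum(psf_scaled[inside] * 2 * np.pi * r[inside]) * dr
    return flux_psf / flux_gal
```

With a PSF narrower than the galaxy profile, the fraction decreases as the aperture grows, which is the qualitative behaviour of the 38% (0.2″) versus 17% (0.5″) numbers quoted above.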
The radio core spectrum is flat to within 5% measured from our 8.4 GHz VLA image, in conjunction with archive 5 GHz and 1.5 GHz VLA measurements (mean of $`\nu F_\nu =3.4\times 10^{-16}`$ ergs s$`^{-1}`$; Wehrle & Morris 1988, Ulvestad et al. 1984). This is indicative of synchrotron emission arising in the AGN core dominating over any possible synchrotron emission associated with supernovae at these wavelengths (Robson 1996). Examination of the cores in both optical and near-IR images (figures 2, 4 and 5) reveals a bright point-like knot to the south $``$110 pc (0.6″) from the nucleus of the galaxy. A second knot to the southwest appears in the near-IR only. An \[R-H\] colour map is formed by first convolving the HST image with a Gaussian of width 0$`\stackrel{}{.}`$12 to match the AOB resolution. The colour map (figure 3) reveals the southwestern knot is almost one magnitude redder in R-H than the southern knot, likely as a result of dust. The SW knot is completely obscured in the HST image. We find the colours in these regions (V-H = 3.6 to 4.2) are consistent with reddened stellar populations. We perform aperture photometry on these knots in H-band and find apparent magnitudes between 16.0 and 16.5 at H depending on the aperture size used (the aperture measurements have relatively large error due to their proximity and overlap with the central AGN core). At the distance of NGC 2992, the absolute magnitudes lie between $`-15.3`$ and $`-14.8`$ which are comparable to the luminosities of stellar clusters detected in the interacting galaxy Arp299, which have an average H-band absolute magnitude of $`H=-15`$ (Lai et al. 1998, Alonso-Herrero et al. 1998). Arp220 (Scoville et al. 1998) also has compact stellar clusters of similar luminosities at 1.6$`\mu `$m (H-band). In the latter work, merger remnants have been suggested as an explanation for such bright knots near the galaxy core ($`>10`$% of the AGN peak for NGC 2992).
### 3.2 Spiral Structure The H- and R-band images (figure 2) show elongated isophotes to the southwest along the galaxy disk, and extending east from the nucleus. In the R-band case the galaxy morphology is much more distorted due to the effects of dust. These extensions and distortions in the isophotes suggest structure underlying the elliptical symmetry of the disk within the central regions of the galaxy. We subtract a model image of the galaxy for both a large-scale H-band image (60″$`\times `$ 60″) and the AOB H-band image. The model is built from either fitted elliptical isophotes (figure 4), or a running median filter image of FWHM twice the resolution (figure 5). The elliptical isophote model has the advantage of highlighting structures deviating substantially from elliptical symmetry, while the median filtering tends to bring out fainter point-like structures. The profile in ellipticity ($`E`$) and position angle ($`PA`$) of the isophotal model shows that this region has a substantial twist in both $`E`$ and $`PA`$. In figure 4 (upper panel), the model subtracted large-scale image displays what appears to be a spiral structure along the disk, as well as a $``$3″ extension to the west, also noted in Alonso-Herrero et al. (1998). Our new high resolution images show that both the spiral structure and western extension can be traced down to the very core. This latter extension will be taken up in the next section. The image of the larger-scale galaxy shows a break in the spiral structures at $``$7″ radius. This may indicate nested structures which are kinematically distinct, as in the multi-level spiral arm structure observed with adaptive optics in NGC 5248 (Laine et al. 1998). Indeed, the knotty morphology of the spiral structures seen in the AOB and HST images of NGC 2992 is suggestive of star formation in spiral arms.
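The median-filter model subtraction described above can be sketched with scipy; a minimal illustration in which the filter size (twice the resolution FWHM, as in the text) and the test image are placeholders:

```python
import numpy as np
from scipy.ndimage import median_filter

def residual_image(image, resolution_fwhm_pix):
    """Subtract a smooth galaxy model built from a running median
    filter of FWHM twice the image resolution; faint point-like
    structures (e.g. knots in spiral arms) survive the subtraction."""
    model = median_filter(image, size=int(2 * resolution_fwhm_pix))
    return image - model
```

A point source sitting on a smooth background passes through the subtraction almost unchanged, while the background itself is removed.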
The high inclination of NGC 2992 makes deprojection unreliable and it is difficult to discern how these larger- and smaller-scale spiral structures relate. By the same measure, it is also difficult to know whether the isophotal twists in the core may be due to a bar, a triaxial bulge seen in projection, or simply the effect of the spiral arms (Friedli et al. 1996). In figures 4 and 5, the 8.4 GHz radio contours are overlayed on the model-subtracted images. There is clearly some radio emission coincident with the southern spiral arm, which breaks up into a similar knotty morphology to the H-band model-subtracted image. Note especially that the strong near-IR peak at the southern tip of the spiral arm has an associated peak in the radio, $``$3″ south and 2″ west of the nucleus. We find the radio spectrum to be steeper along the spiral arm to the south than in the core, possibly indicative of star formation. The prominent southern knots near the core, outlined in the previous section, clearly lie along the southern spiral arm. By comparing the brightest knots within the spiral arms in the optical and near-IR images, it appears that the nuclei of the galaxy in H- and R-bands, determined by centroiding on the cores, do not line up precisely. The knots in the southern spiral arm, revealed in the model subtracted images, have similar geometric configurations in the two filters (H and R), but when these are superposed, there is an (x,y) offset of (0$`\stackrel{}{.}`$3, 0$`\stackrel{}{.}`$15) between the nuclei corresponding to a distance of $``$70 pc. A true offset is possible due to dust obscuration and/or extended unresolved starbursts (see for instance NGC 1068, Alloin et al. 1998, for a similar case). Given the much more distorted nuclear region in the HST R-band image compared to the H-band, such a scenario is plausible. However, the spiral arm morphology may be slightly different in the two wavelengths, due to the effects of both dust and unresolved, blended structures.
The difficulties in matching structures with slightly different morphologies in the outer galaxy lead us to register the images by centroiding on the galaxy nucleus. ### 3.3 Figure-8 Loops There is little sign of optical or near-IR counterparts to the radio loops out past the disk of the galaxy, even at K-band where the ability to see through the dust lane is greatest. However, as noted above, there is an extended feature to the northwest, observed as extended contours in the near-IR (figure 2) and as a prominent feature in the model subtracted near-IR images (figure 4), extending $``$1.5″ before becoming more diffuse and mixing with the stellar emission on the western side of the dust-lane. In the optical HST image (figures 2b, 5b), there is no sign of this extended feature, likely due to the dust lane obscuration. This extension aligns with the mouth of the northern radio loop and appears to continue outwards into the loop, which is verified in the coarser resolution UKIRT image where the S/N allows tracing the feature out further. This indicates that the source of the radio loop may be connected to this feature. Figure 3 shows that the extended feature is the reddest region in the central 7$`\stackrel{}{.}`$0 of the galaxy with an R-H colour of 4.5. The lower resolution KIR images in J,H,K show that the colours in this region clearly stand out from the surrounding disk colours, possibly as non-stellar (in a 1″ aperture, J-H=0.5, H-K=1.0). These colours may be related to a highly reddened ($`A_V>5`$) burst of stars, possibly with a nebular component. However they are also consistent with a reddened continuum power-law emission (Glass & Moorwood 1985). Artifacts associated with the AO correction have been shown to produce extensions to the PSF when guiding on extended objects such as Seyfert nuclei (Chapman et al. 1998).
Although caution must therefore be taken in associating such a structure with a physical interpretation, we have compared our AOB H-band image with an HST NICMOS K-band image, and found the same extended isophotes to the west of the core. ### 3.4 Diffuse Inner Loop The R-H map (figure 3) shows extended red emission which takes on a loop-like morphology extending north of the nucleus from the 1″ to the 2″ declination offsets, an enhancement in H rather than a deficit in R. It does not appear to be associated with the spiral arm further to the east. In the median filtered H-band image (figure 5), it is also possible to discern knotty features in a loop-like morphology, however the emission lies deep within the region taken to be a dust-lane based on the obscuration of the HST optical imagery. No counterpart is seen in the model subtracted R-band (figure 5). The 8.4 GHz radio contours superposed over the above figures (3 & 5) reveal a similar loop-like diffuse emission embedded within the larger, well defined radio loop. The spectrum (from 5GHz/8.4GHz) is steeper here than in the core, consistent with star forming regions. Buried star formation regions have been identified in the dust lane crossing the nuclear region of Centaurus A (Schreier et al. 1996,1998), using HST-NICMOS images in H-band. The J,H,K colours in this region of NGC 2992 are, however, difficult to interpret, given the dust absorption gradient across the dust lane. It is not clear that the structure has a true loop morphology; it may simply be a result of the way the dust lane cuts through the core region. However, the coincidence of the radio emission suggests that the near-IR excess may represent more than the artifact of dust absorption. ### 3.5 The CO map The radial distribution of the CO index, (continuum – CO), is shown in figure 6, where the plotted values are azimuthal averages in 0.1″ bins.
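The azimuthal averaging used for figure 6 can be sketched as follows; a minimal illustration that bins a 2-D index map in radial annuli (the centre and bin width are inputs; the paper uses 0.1″ bins):

```python
import numpy as np

def radial_profile(index_map, center, bin_width_pix):
    """Azimuthally average a 2-D map in radial bins about `center`.
    Returns the bin-centre radii and the mean value in each annulus."""
    y, x = np.indices(index_map.shape)
    r = np.hypot(y - center[0], x - center[1])
    bins = (r / bin_width_pix).astype(int)
    n_bins = bins.max() + 1
    # Sum of map values and pixel counts per annulus
    sums = np.bincount(bins.ravel(), weights=index_map.ravel(), minlength=n_bins)
    counts = np.bincount(bins.ravel(), minlength=n_bins)
    radii = (np.arange(n_bins) + 0.5) * bin_width_pix
    return radii, sums / counts
```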
Although the CO(2-0) filter used corresponds to the standard photometric index (2.296$`\mu `$m center, 200 Å width) (Doyon et al. 1994), the subtracted continuum was extrapolated from the broad band colours (see section 2). There is then likely a systematic offset in absolute photometry, and we use caution in interpreting the CO magnitudes. When $`r<`$ 1.5″ the CO index strengthens, implying much more CO absorption within the central 3″. The CO index depends on both the metallicity and the age of the stellar population (giants versus supergiants), and has been found to be difficult to model. If we assume that the metallicity is constant within the center, then the variation may be an age effect. In general, the CO index increases as the starburst ages (Vanzi, Alonso-Herrero & Rieke 1998, Doyon et al. 1994). The NGC 2992 CO profile suggests that a somewhat younger population is present in a ring around the galaxy center, while the stellar population in the very core would be slightly older than the surroundings. Given the rather poor resolution in these images (0.45″), we are unable to discern at what level the stellar population contributes within the resolved 50 pc core, evident in the HST R- and MONICA H-band images. However, it is clear that hot dust ($``$ 1000 K) emission does not contribute significantly to the core region. The power-law tail of a strong hot dust continuum contribution at K-band would swamp the CO absorption signature, leading to a weakening CO index (less CO absorption). At larger distances the CO index shows a slight radial gradient, but this is not significant given the uncertainties in the background. ### 3.6 Characterizing the Extinction The effects of dust absorption become noticeably less with increasing wavelength in our images of NGC 2992. The K-band image has the most symmetrical appearance, with emission from the galaxy disk visible furthest into the dust-lane.
By referencing fiducial colours of expected stellar populations to the actual observed colours in NGC 2992, we can estimate the extinction due to dust. The H-K colour map is in general good for characterizing the extinction because it probes deeper within the dusty regions. However it has the problem that K-band may be contaminated by hot dust (1000 K) emission, resulting in possible overestimates of the extinction. Our CO-narrowband imaging confirms previous speculation from ground-based J-K and K-L colours (Alonso-Herrero et al. 1998) that hot dust is unlikely to contribute significantly. We assume that the colour of a typical early-type bulge stellar population is $`H-K=0.2`$ (Glass & Moorwood 1985), and that redder colours imply some degree of obscuration. Taking into account that the differential extinction between H and K is $`A(H)=0.175A(V)`$ and $`A(K)=0.112A(V)`$ (Rieke et al. 1985), we find $$A(H-K)=-2.5\mathrm{log}(\frac{f(H)}{f(H_0)})+2.5\mathrm{log}(\frac{f(K)}{f(K_0)})$$ where $`f(K_0)/f(H_0)`$ is the flux ratio corresponding to $`H-K=0.2`$. Thus $`A(V)=A(H-K)/0.063`$ in magnitudes. Such an analysis is only true for extinction to the stars. Other emission processes (power law, emission line gas, etc.) will skew the results. The J-H and H-K colours show that there are two regions which likely cannot be explained simply by reddened stellar colours. The first is within the dust lane, along the knotty spiral structure to the south and to the north along the diffuse loop described in section 3.4. The colours here tend to be bluer in both J-H and H-K, possibly indicative of nebular emission. The second is the extended region to the west of the nucleus where the colours are far from the locus of normal stellar colors, and different from both the disk/bulge to the east and the rest of the colours within the dust lane.
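The prescription above reduces to converting the H-K colour excess into A(V); a minimal sketch using the intrinsic colour and extinction coefficients quoted in the text:

```python
def extinction_from_hk(h_mag, k_mag, intrinsic_hk=0.2,
                       a_h_over_av=0.175, a_k_over_av=0.112):
    """A(V) from the H-K colour excess over an assumed intrinsic
    bulge-population colour of H-K = 0.2 (Glass & Moorwood 1985)."""
    colour_excess = (h_mag - k_mag) - intrinsic_hk      # = A(H) - A(K)
    return colour_excess / (a_h_over_av - a_k_over_av)  # = A(H-K) / 0.063
```

For example, an observed H-K of 0.83 corresponds to A(V) = (0.83 - 0.2)/0.063 = 10 mag; the caveat in the text applies, since non-stellar emission will skew the estimate.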
The extinction map (figure 7) shows that if the central peak of emission is in fact due mainly to a compact stellar core, then there is an optical extinction of A(V)$``$4 magnitudes. The J-H/H-K colours of the nucleus are consistent with a reddened supergiant population. Taken at face value, the extinction map contours indicate A(V) extinction from 2.6 to 8.6 magnitudes with 2 magnitude intervals, although non-stellar emission likely over-reddens some of the circumnuclear regions as discussed above. Using the J-H map results in an extinction estimate which is lower, possibly as a result of the larger optical depth at shorter wavelengths. ## 4 Discussion We have combined the various structures observed in the different wavelength regimes and depicted them schematically in figure 8. The radio loops lack any optical or near-IR counterparts except along the galaxy disk axis. The bipolar outflows along the spin axis of the galaxy observed at larger scales in the optical (Allen et al. 1998) align along the same axis defined by the radio loops. These facts suggest the actual figure-8 loops likely lie out of the galactic disk plane. However, the strong radio contours, emanating from the nucleus to the southwest, lie along the buried spiral arm within the disk. Also, the red R-H loop to the north appears to be associated with the diffuse radio emission embedded within the larger radio loop. Our hypothesis is then that the radio morphology consists of two components superimposed: 1) the loops out of the plane of the disk; 2) a component in the disk associated with the southern spiral arm, and a diffuse loop to the north. Starburst SNe remnants are the likely source of these radio components. The appearance of the radio figure-8 becomes more symmetrical if the galactic disk components (point 2 above) are subtracted, which supports such a superposition scenario.
The assumption of trailing large-scale spiral arms, in addition to the prominent dust lane likely lying in front of the bulge, implies that the NW edge of the galaxy disk is closer to us. This scenario places the southern portion of the Extended Emission Line Region (EELR) closer to us, with associated outflowing material (Allen et al. 1998). The southern radio loop would then also be closer to us, with the northern loop lying partially behind the dust lane. However, the larger-scale spiral arms extending radially out past 30″ appear to wind in the opposite orientation to the inner spiral observed in our high resolution imagery. This either forces the inner spiral arms to be leading, or else they are trailing in counter-rotation, with the inner region kinematically distinct from the outer galaxy. The case is not clear from the velocity fields presented in Allen et al. (1998), which have low spatial resolution coupled with a complicated superposition of rotation and outflow components. The near-IR extended emission feature to the northwest gives the appearance of expanding into the northern radio loop, and the two emission features may be associated. The morphology and the rather extreme near-IR colours suggest that an AGN driven jet, possibly with some continuum component, could be directed into the radio loop. Hot dust present in an outflow is not ruled out for this feature either. This sort of near-IR “jet” may exist towards the southern radio loop as well, at a lower level. ### 4.1 Interpretation As noted in the introduction, there have been several models put forward for such figure-8 radio emission. The most convincing in light of our new near-IR imaging is that the figure-8 loops result from expanding gas bubbles which are seen preferentially as limb-brightened loops (Wehrle and Morris 1988). The northern loop may be related to the near-IR extended emission feature to the northwest.
Further evidence for the outflow picture exists in the form of soft X-ray emission extended perpendicular to the disk of the galaxy (Colbert et al. 1998), and H$`\alpha `$ imagery clearly showing the location of an extended emission line region (EELR) (Colbert et al. 1996). These authors discuss two possible explanations for the extended X-ray emission: 1) an AGN driven hot plasma, 2) a superwind from a compact starburst. A galactic-scale superwind can be generated by either a compact starburst in the galaxy core, or an AGN driven outflow which thermalizes the ISM at some distance from the core (Colbert et al. 1996). In both cases the superwind would blow preferentially out of the galaxy plane where the pressure is lowest, as observed in NGC 253 (Unger et al. 1987). With the resolved galaxy core in our images, it is not clear to what extent dust absorption, hot dust emission, scattered AGN light, or a compact stellar cluster contribute to the extended emission. The colours appear most consistent with reddened stellar light. Thus either picture could be consistent with our data since a stellar cluster and AGN (optical emission lines, flat radio spectrum) are likely contributing to core emission processes. However, the larger scale EELR need not be aligned with the 2″ diameter radio loops in the case of the AGN driven source. The orientation of the EELR observed at larger scales (Allen et al. 1998) is in fact roughly aligned with the radio loops. For superwind models, the anisotropic EELR, seen in \[OIII\] and H$`\alpha `$, is likely to be associated with the outflow regions. However, the superwind in itself does not yield a mechanism to produce continuum emission (Allen et al. 1998), thus our near-IR data may rule out this latter model. If the near-IR “jet” is not actually related to the radio loops, the superwind model is still a plausible source of the loop emission. The CO index provides evidence for a substantial population gradient in the core.
We consider the case where the radio loops are due to an energetic burst of supernovae in the past. The luminosity of the stellar cluster must be at least three times that of the bubble so the shock can reach a galactic scale height (Koo et al. 1992, Tenorio-Tagle et al. 1998). The stellar cluster luminosity is estimated from the H-band image. We determine the size of the “hypothetical” stellar cluster to be 0.5″ with an absolute H-magnitude of $`-16.5+M_{AGN}`$, where the AGN contribution is unknown and may be almost zero if the supermassive black hole is no longer being fueled, as described in the introduction (Bassini et al. 1998). The model PSF (section 3.1) scaled to the peak of the extended core revealed that an unresolved AGN component could contribute at most 37% to the emission within a 0.2″ diameter aperture. The second model outlined in the introduction, with the toroidal magnetic fields causing the emission, is more difficult to explain in light of the near-IR extension and knotty emission along the southern radio loop. The outflow driven bubble model explains the currently available data much more naturally. In addition, a calculation of the magnetic energy in the loops from the 8.4GHz VLA data (Falcke et al. 1997, Wehrle et al. 1988) makes it difficult to model consistently in this manner. Other problems with this model (Wehrle et al. 1988, Cameron 1985, Heyvaerts et al. 1987), associated with rotation timescales and the lack of twisting of the radio/near-IR loops, make it even less plausible. ## 5 Conclusions We have presented adaptive optics near-IR and radio images of NGC 2992 in conjunction with archive HST optical imagery. A spiral structure within the central 6″ and a 1″ extended feature are traced down to the core at the resolution of our images.
We speculate that multiple radio components are superposed which contribute to the observed figure-8 morphology in the VLA images: one associated with the spiral structure in the galaxy disk, and another flowing out of the galaxy plane. IR and optical spectra at high spatial resolution will likely provide the means of determining if the population gradients in the core of NGC 2992 are due to changes in age and/or metallicity. Such spectral imagery will also permit the nature of the extended structures to be explored, shedding light on the possible connection to the radio loops. Our current hypothesis concerning the radio loops involves an AGN outflow powering the loop rather than a starburst superwind, as any near-IR emission related to the jet would be unlikely in the latter case. NGC 2992 represents yet another example of star formation and AGN components both existing in the galaxy core (Storchi-Bergmann et al. 1997). There is no obvious indication in our data of whether there is any connection between the two in evolutionary terms. ### ACKNOWLEDGEMENTS We would like to acknowledge the staff at CFHT and VLA for facilitating these observations. The CADC database was invaluable in obtaining HST images.
# Interstellar extinction towards the inner Galactic Bulge
Based on observations collected at the European Southern Observatory, La Silla, Chile. ## 1 Introduction Studying the stellar populations in the Galactic Bulge requires knowledge of the interstellar extinction. Previous studies (e.g. Catchpole et al. Catchpole90 (1990), Frogel et al. Frogel99 (1999), Unavane et al. Unavane98 (1998)) have shown that it can vary from $`\mathrm{A}_\mathrm{V}=5^\mathrm{m}`$ up to $`\mathrm{A}_\mathrm{V}=35^\mathrm{m}`$ towards the Galactic Centre region. As in most parts of the Galactic Bulge, interstellar absorption is not homogeneous but occurs in clumps; a detailed extinction map of the Bulge is therefore essential. Catchpole et al. (Catchpole90 (1990)) mapped the interstellar extinction around the Galactic Centre ($`2\mathrm{deg}^2`$) using the red giant branch of 47 Tuc as a reference. Stanek et al. (Stanek96 (1996)) mapped the interstellar extinction of Baade’s window using OGLE photometry of red clump stars. Frogel et al. (Frogel99 (1999)) determined the interstellar reddening for a few fields in the inner Galactic Bulge using the red giant branch of Baade’s window as a reference. $`\mathrm{A}_\mathrm{K}`$ values ranging from 0.27 up to 2.15 mag were found by the latter. The DENIS survey (Epchtein et al. Epchtein97 (1997), Persi et al. Persi95 (1995)), together with 2MASS (Skrutskie et al. Skrutskie97 (1997)), is the first attempt to carry out a complete survey of the southern sky. The limiting magnitudes in the three near-IR photometric bands (I = 0.8 $`\mu `$m, J = 1.25 $`\mu `$m and $`\mathrm{K}_\mathrm{S}=2.15\mu `$m) for point sources are $`18^\mathrm{m}`$, $`16^\mathrm{m}`$ and $`13^\mathrm{m}`$ respectively. The photometric accuracy (rms) is better than 0.1 mag and the astrometric accuracy better than 1 arcsec. These numbers are for uncrowded fields.
DENIS $`\mathrm{K}_\mathrm{S}/(\mathrm{J}-\mathrm{K}_\mathrm{S})`$ colour magnitude diagrams (CMDs) in regions of low extinction show a well defined RGB and AGB sequence in the Galactic Bulge (see Fig. 1). Due to their high luminosities (up to $`\mathrm{M}_{\mathrm{bol}}\simeq -4.5`$ for non-saturated sources in DENIS) these stars are ideal tracers of the stellar populations in the Bulge and are found even in highly extincted regions. In this paper we present a method to derive interstellar extinction using isochrones from Bertelli et al. (Bertelli94 (1994)) in combination with DENIS CMDs. We show that this method is appropriate for low as well as for highly extincted regions. Finally we present a map of the extinction in the inner Bulge between -8$`<`$$`\mathrm{}`$$`<`$8 and -1.5$`<`$b$`<`$$`1.5^{}`$. The finer details of the features seen in the map will be discussed in a subsequent paper. ## 2 Observations The near infrared data were acquired in the framework of the DENIS survey, in a dedicated observation of a large Bulge field, simultaneously (Summer 1998) in the three usual DENIS bands, Gunn-I (0.8 $`\mu `$m), J (1.25 $`\mu `$m) and $`\mathrm{K}_\mathrm{S}`$ (2.15 $`\mu `$m). For the source extraction we used PSF fitting optimised for crowded fields (Alard et al. in preparation). For the astrometry, the individual DENIS frames were cross-correlated with the PMM catalog (USNO-A2.0). The absolute astrometry is then fixed by the accuracy of this catalog ($``$ 1”). The internal accuracy of the DENIS astrometry, derived from the identifications in the overlaps, is of the order of 0.5”. For the determination of the zero points all standard stars observed in a given night have been used. The typical uncertainty of the zero points has been derived from the overlapping regions and is about 0.05, 0.15 and 0.15 mag in the I, J and $`\mathrm{K}_\mathrm{S}`$ bands respectively.
## 3 Extinction determination using isochrones Colour-Magnitude Diagrams were constructed for sources in a small window (radius of 2 arcmin) in the field. The modal value of the distribution of the $`\mathrm{A}_\mathrm{V}`$ values required to move the stars in the CMD onto the zero-extinction isochrone was taken as the extinction for this window. The interstellar extinction law ($`\mathrm{A}_\mathrm{V}:\mathrm{A}_\mathrm{J}:\mathrm{A}_{\mathrm{K}_\mathrm{S}}=1:0.256:0.089`$) from Glass (Glass99 (1999)) was used. The window was then displaced laterally in uniform steps to construct an image of the spatial distribution of the extinction over the whole field. We have found that towards the Galactic Bulge a sampling window of radius 2 arcmin provides a sufficient number of stars to form a sequence enabling a reliable estimate of the extinction in such a window. We presently do not use the DENIS data with $`\mathrm{K}_\mathrm{S}<7`$ due to the saturation of the detectors. We only use those sources which have been detected in J as well as in $`\mathrm{K}_\mathrm{S}`$, with $`\mathrm{K}_\mathrm{S}`$ brighter than 11 mag, in order to be as complete as possible in J. This criterion is also quite important in order to rule out fake sources at the fainter end of the luminosity function and to guarantee an RGB/AGB identification. However, in regions with very high extinction ($`\mathrm{A}_\mathrm{V}>25^\mathrm{m}`$) we find that a large proportion of the sources detected at $`\mathrm{K}_\mathrm{S}`$ do not have counterparts at the shorter wavelengths; these ’missing’ J sources are concentrated in precisely such regions. Hence for regions with large extinction, where the number of sources detected with the above criterion is smaller than in regions with lower extinction, we only get a lower limit on the extinction. Results with such a sampling window and with displacement steps of 1 arcmin are discussed below.
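The windowed modal-extinction estimate described above can be sketched as follows. This is a simplified illustration, not the actual reduction pipeline: the isochrone reference colour (`jks_isochrone = 0.9`) and the histogram bin width are assumed values chosen only for the example, while the extinction-law ratios are those of Glass (1999) quoted in the text.

```python
import statistics

# Glass (1999) extinction-law ratios quoted in the text:
# A_V : A_J : A_Ks = 1 : 0.256 : 0.089
A_J_OVER_A_V = 0.256
A_KS_OVER_A_V = 0.089
# Reddening of the (J - Ks) colour per magnitude of A_V
E_JKS_OVER_A_V = A_J_OVER_A_V - A_KS_OVER_A_V  # = 0.167

def a_v_of_star(j, ks, jks_isochrone):
    """A_V needed to shift one star back onto the assumed
    zero-extinction isochrone colour jks_isochrone."""
    return (j - ks - jks_isochrone) / E_JKS_OVER_A_V

def modal_a_v(stars, jks_isochrone=0.9, bin_width=0.5):
    """Modal A_V of the (J, Ks) pairs in one sampling window,
    taken from a coarse histogram of bin_width magnitudes."""
    values = [a_v_of_star(j, ks, jks_isochrone) for j, ks in stars]
    binned = [round(v / bin_width) * bin_width for v in values]
    return statistics.mode(binned)
```

For instance, a window of stars all reddened by $`\mathrm{A}_\mathrm{V}=10`$ (i.e. with $`\mathrm{J}-\mathrm{K}_\mathrm{S}`$ shifted by $`10\times 0.167`$ mag) returns a modal value of 10.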
Isochrones from Bertelli et al. (Bertelli94 (1994)), placed at a distance of 8 kpc for a 10 Gyr stellar population with Z=0.02, have been used as a reference system. The isochrones were calculated for the ESO filter system by convolving the near-infrared bands with the spectral energy distributions from Kurucz (Kurucz92 (1992)) for temperatures higher than 4000 K. At lower temperatures the effective temperature scale from Ridgway et al. (Ridgway80 (1980)) for the late M giants and the Lançon & Rocca-Volmerange (1992) scale for the early M giants have been used. The lack of very red standards limits the near-infrared colour transformation (Bressan & Nasi Bressan95 (1999)) and causes the colours of the Z=0.02 isochrone to ’saturate’ around $`(\mathrm{J}-\mathrm{K})_0=1.35`$. Therefore a new, empirical $`\mathrm{T}_{\mathrm{eff}}`$-(J–K) colour relation has been derived by making a fit through the $`\mathrm{T}_{\mathrm{eff}}/(\mathrm{J}-\mathrm{K})_0`$ data available for cool giants \[see Schultheis et al. Schultheis98 (1998) and Ng et al. (in preparation)\]. Schultheis et al. (Schultheis98 (1998)) demonstrated the good agreement of the isochrones with the new $`\mathrm{T}_{\mathrm{eff}}`$-colour relation using NIR photometry of a sample of Miras and Semiregular Variables in a field located in the outer Bulge. However, the upper part above the RGB tip ($`\mathrm{K}_\mathrm{S}\simeq 8.0`$) remains more uncertain. Based on observed near infrared spectra for a sample of M giants and Mira Variables, kindly provided by A. Lançon, we have found the difference between K and $`\mathrm{K}_\mathrm{S}`$ to be small, of the order of 0.04–0.05 mag. ### 3.1 Uncertainties of the isochrone method Figure 1 shows a DENIS CMD in a part of Baade’s Window (SgrI, $`0.06\mathrm{deg}^2`$). Overlaid are the isochrones with different metallicities (for $`\mathrm{A}_\mathrm{V}=0`$).
The isochrone with Z=0.02 is found to follow the observed CMD well and is shown at $`\mathrm{A}_\mathrm{V}=0`$ and also at $`\mathrm{A}_\mathrm{V}=1.35`$, which is the modal value of the extinction found for the field. It is well known that in the Galactic Bulge one deals with a wide range of metallicities and ages. Using isochrones with a different Z would give us different extinction values. In order to estimate the effect of metallicity on the derived extinction values, isochrones with Z=0.05 and Z=0.008 have been considered. Between the isochrones with Z=0.008 and Z=0.05 there is a difference $`\mathrm{\Delta }(\mathrm{J}-\mathrm{K})\simeq 0.2`$ (see also e. g. Bressan et al. Bressan98 (1998)), which corresponds to $`\mathrm{\Delta }\mathrm{A}_\mathrm{V}\simeq 1^\mathrm{m}`$. This is the typical uncertainty in the determination of the extinction assuming solar metallicity. In contrast to the effect of metallicity, the isochrones are hardly affected by the age range (see e. g. Schultheis et al. 1998). Analysis of repeated observations (1996 & 1998) shows that the internal dispersion in the photometry, in the crowded regions, is less than 0.15 mag for $`\mathrm{K}_\mathrm{S}<11`$. Compared to the distance spread of the Galactic Bulge ($`\simeq 0.35\mathrm{mag}`$), the errors in the photometric accuracy as well as the errors coming from the isochrones are negligible. ### 3.2 CMDs in sampling windows In Figure 2 we show the CMDs for sampling windows at three different locations, namely a field with low extinction in Baade’s window, one with intermediate extinction at $`\ell =0.37^o`$ and $`b=0.5^o`$ and one at the Galactic Centre ($`\ell =0,b=0`$). The low extinction field considered here is a small part of the Sgr I field whose CMD is presented in Figure 1. While Baade’s window shows a narrow RGB/AGB sequence, for the two other fields one sees clearly a wide spread in the $`(\mathrm{J}-\mathrm{K}_\mathrm{S})`$ colour.
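As a quick arithmetic check of the quoted sensitivity to metallicity, a colour shift $`\mathrm{\Delta }(\mathrm{J}-\mathrm{K})`$ maps to an extinction error through the Glass (1999) ratios adopted above:

```python
# With A_J/A_V = 0.256 and A_Ks/A_V = 0.089 (Glass 1999), a colour
# shift Delta(J-K) corresponds to Delta A_V = Delta(J-K) / 0.167.
delta_jk = 0.2                        # metallicity-induced colour shift
delta_av = delta_jk / (0.256 - 0.089)
print(round(delta_av, 2))             # -> 1.2, i.e. about 1 mag
```

which reproduces the $`\mathrm{\Delta }\mathrm{A}_\mathrm{V}\simeq 1^\mathrm{m}`$ uncertainty stated in the text.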
Baade’s window is known to have low and well-behaved extinction, and this is seen as a sharp peak in the distribution of $`\mathrm{A}_\mathrm{V}`$ (Fig. 2). For the field at $`\ell =0.37,b=0.5`$ and around the Galactic Centre one does not find a single well-defined peak but perhaps two different peaks, which indicates that there may be two or more distinct layers of extinction-causing material along these lines of sight. Alternatively, the absorbing matter may show clumpiness on a scale smaller than the 4 arcminute diameter of the sampling window. As mentioned earlier, our J and $`\mathrm{K}_\mathrm{S}`$ detections are complete in regions with low extinction ($`\mathrm{A}_\mathrm{V}<25`$), while a large proportion of $`\mathrm{K}_\mathrm{S}`$ sources (up to 60–90%) do not have J counterparts in the regions with high extinction (GC). It is unfortunately not possible to use the results from the regions with low extinction to assign a completeness limit for the obscured ones, because the controlling factors are confusion in the first case and the detector sensitivity in the second. A map of the ’missing’ J sources shows that they are a significant source of uncertainty in regions with $`\mathrm{A}_\mathrm{V}>25`$. ## 4 Results The whole map for the inner Galactic Bulge is shown in Fig. 3. The map has a resolution of 4 arcminutes (the diameter of the sampling window used). Note the clumpy and filamentary behaviour of the distribution of extinction, especially close to the Galactic plane. There are also small pockets with uniform extinction of about $`\mathrm{A}_\mathrm{V}=6`$ magnitudes (for example the ISOGAL field $`\ell =0,b=1`$ studied by Omont et al. 1999). Table 1 gives the values of the mean extinction at various locations in the inner Bulge. The distribution of extinction along the line of sight through the Bulge will be discussed in detail in a subsequent paper (Ganesh et al. in preparation). In Fig. 4 we present a contour map of the extinction around the Galactic Centre.
The distribution of extinction in this region has been studied earlier by Catchpole et al. (Catchpole90 (1990)) using H&K data. We find good agreement between the observed structures, bearing in mind the higher resolution of the results presented here. The difference in $`\mathrm{A}_\mathrm{V}`$ compared to Catchpole et al. is typically smaller than $`3^\mathrm{m}`$. The average interstellar extinction along the Galactic plane (see Fig. 4) is about $`24^\mathrm{m}`$ in $`\mathrm{A}_\mathrm{V}`$, although this might be a lower limit (as suggested by the presence of patches with $`\mathrm{A}_\mathrm{V}>30`$) due to the effect of the ’missing’ J sources discussed earlier. Along the minor axis, $`\mathrm{A}_\mathrm{V}`$ decreases from nearly $`25^\mathrm{m}`$ near the Centre to about $`15^\mathrm{m}`$ towards the edges. The extinction values derived from Fig. 4 are also qualitatively in agreement with the values found by Wood et al. (1998) near the Galactic Centre. ## 5 Conclusion DENIS observations in the J and $`\mathrm{K}_\mathrm{S}`$ bands, together with isochrones by Bertelli et al. (Bertelli94 (1994)), are an excellent tool to map the interstellar extinction. A comparison with the field studied by Catchpole et al. shows very good agreement, although we see more detail owing to the better resolution. Several fields with relatively low ($`\mathrm{A}_\mathrm{V}\simeq 6`$) and homogeneous extinction can be identified on the basis of the extinction map. This identification should help further detailed investigation of the stellar population in these windows. A study of the three-dimensional distribution of the material responsible for the interstellar extinction should also be facilitated by the availability of the extinction map. Acknowledgements We want to thank I. S. Glass, J. van Loon and Y. K. Ng for useful discussions. The referee, G. P. Tiede, is thanked for useful comments and suggestions. SG was supported by a fellowship from the Ministère des Affaires Étrangères, France.
MS acknowledges the receipt of an ESA fellowship. The DENIS project is partially funded by the European Commission through SCIENCE and Human Capital and Mobility plan grants. It is also supported, in France by the Institut National des Sciences de l’Univers, the Education Ministry and the Centre National de la Recherche Scientifique, in Germany by the State of Baden-Württemberg, in Spain by the DGICYT, in Italy by the Consiglio Nazionale delle Ricerche, in Austria by the Fonds zur Förderung der wissenschaftlichen Forschung und Bundesministerium für Wissenschaft und Forschung, in Brazil by the Foundation for the Development of Scientific Research of the State of São Paulo (FAPESP), and in Hungary by an OTKA grant and an ESOC&EE grant.
no-problem/9908/cond-mat9908298.html
ar5iv
text
# Johnson-Nyquist noise in narrow wires ## Abstract The Johnson-Nyquist noise in narrow semiconducting wires having a transverse size smaller than the screening length is shown to be white up to frequency $`D/L^2`$ and to decay at higher frequencies as $`\omega ^{-\frac{1}{2}}`$. This result is contrasted with the noise spectra in neutral and charged liquids. It is interesting to compare properties of charged and neutral systems. The role of the Coulomb interaction crucially depends on the effective dimensionality of the charged system. For instance, due to the long-range nature of the Coulomb interaction in three-dimensional systems, the density excitations of neutral liquids (acoustic phonons) are transformed into gapped plasmons, while in one- and two-dimensional systems plasmons remain gapless. Here I examine the noise spectrum as another aspect of the singular role of the Coulomb interaction that depends critically on the dimensionality. The noise spectrum is quite different in charged and neutral liquids. The equilibrium Johnson-Nyquist noise in an electrical conductor (with a screening length smaller than any size of the conductor) is white up to very high frequencies (the smaller of the elastic scattering rate $`1/\tau `$ and the Maxwell relaxation frequency $`4\pi \sigma `$); in neutral liquids, by contrast, the noise becomes frequency-dependent above the “Thouless” frequency $`D/L^2`$ ($`D`$ is a diffusion constant and $`L`$ is the distance between points). The difference is due to screening in charged liquids and depends on the dimensionality of the conductor. I show here that for electrical wires having a transverse size $`a`$ smaller than the Debye screening length $`\lambda _D`$ (referred to here as narrow wires), the Johnson-Nyquist noise decays as $`\omega ^{-\frac{1}{2}}`$ above the “Thouless” frequency, independently of external screening. The threshold frequency $`D/L^2`$ can be made arbitrarily small by increasing the separation $`L`$ between the two points.
To calculate the fluctuations of the electrochemical potential, we need to relate it to the coupled fluctuations of charge density and currents. We start by writing the continuity equation and the current equation valid in the hydrodynamic limit: $`\frac{\partial \rho }{\partial t}+div(\stackrel{}{j})=0;\stackrel{}{j}=\sigma \stackrel{}{E}^{tot}-D\stackrel{}{\mathrm{\nabla }}\rho .`$ (1) For self-consistency, we need to account for the potential induced by the fluctuation of charge density: $`\varphi _{q,\omega }^{ind}=\frac{u_1(q)}{ϵ_{\mathrm{\infty }}}\rho _{q,\omega }`$ (Coulomb’s law). Since we consider a conductor with transverse dimensions $`a`$ smaller than the screening length $`\lambda _D`$, $`u_1(q)=2ln\frac{1}{qa}`$ is a one-dimensional Coulomb potential ($`q`$ is a wave vector along the one-dimensional conductor). The total potential driving the current is the sum of the external and induced potentials. Thus the full system of equations is: $`i\omega \rho _{q,\omega }+iqj_{q,\omega }=0,j_{q,\omega }=\sigma (-iq)\varphi _{q,\omega }^{tot}+D(-iq)\rho _{q,\omega },`$ (2) $`\varphi _{q,\omega }^{tot}=\varphi _{q,\omega }^{ext}+\varphi _{q,\omega }^{ind},\varphi _{q,\omega }^{ind}=\frac{2ln\frac{1}{qa}}{ϵ_{\mathrm{\infty }}}\rho _{q,\omega }.`$ (3) Finally, after some elementary algebra, we can use the above equations to relate the charge density variation to the external potential: $`\rho _{q,\omega }=-\frac{\sigma _1q^2}{i\omega +(D+2(\sigma _1/ϵ_{\mathrm{\infty }})ln(1/qa))q^2}\varphi _{q,\omega }^{ext}.`$ (4) Using the Einstein relation $`\sigma =D\chi _0`$ and the expression for the one-dimensional conductivity $`\sigma _1=\sigma a^2`$, the density-density response function is $`\chi _{q,\omega }\equiv -\frac{D\chi _0a^2q^2}{i\omega +Dq^2(1+(2a^2\chi _0/ϵ_{\mathrm{\infty }})ln(1/qa))}.`$ (5) We can now apply the fluctuation dissipation theorem (FDT) to calculate the density fluctuation spectrum (assuming classical fluctuations, $`\hbar \omega \ll kT`$): $`<|\delta \rho _{q,\omega }|^2>=\hbar \,Im\chi _{q,\omega }\mathrm{coth}\left(\frac{\hbar \omega }{2kT}\right)\simeq \frac{2kT}{\omega }Im\chi _{q,\omega }.`$ (6) The static charge compressibility $`\chi _0`$ is simply related to the Debye screening (or Thomas-Fermi) length: $`1/\lambda _D^2=4\pi \chi _0`$. The induced potential fluctuations can be expressed through the charge density fluctuations: $`<|\varphi _{q,\omega }^{ind}|^2>=\frac{(u_1(q))^2}{ϵ_{\mathrm{\infty }}^2}<|\delta \rho _{q,\omega }|^2>,`$ (7) $`<|\varphi _{q,\omega }^{ind}|^2>=\frac{2kT}{ϵ_{\mathrm{\infty }}^2}\frac{\sigma _1q^2(2ln\frac{1}{qa})^2}{\omega ^2+D^2(1+\frac{a^2}{2\pi ϵ_{\mathrm{\infty }}\lambda _D^2}ln\frac{1}{qa})^2q^4}.`$ (8) We can compare Eqn. (8) with the spectral density of potential fluctuations in bulk three-dimensional charged and neutral liquids. In the case of a three-dimensional charged liquid, we need to use the three-dimensional Coulomb potential $`\varphi _{q,\omega }^{ind}=\frac{u_3(q)}{ϵ_{\mathrm{\infty }}}\rho _{q,\omega }=\frac{4\pi }{ϵ_{\mathrm{\infty }}q^2}\rho _{q,\omega }`$. Following the above simple derivation, we get the expression for the voltage fluctuations (it is sufficient for our purposes to consider only longitudinal fluctuations) in a three-dimensional conductor: $`<|\varphi _{q,\omega }^{ind(3d)}|^2>=\frac{32\pi ^2kT}{ϵ_{\mathrm{\infty }}^2q^2}\frac{\sigma }{\omega ^2+(Dq^2+4\pi \sigma /ϵ_{\mathrm{\infty }})^2}.`$ (9) In the case of a neutral liquid, there is no long-range induced potential; therefore, we get the standard density-density response function and potential fluctuations describing diffusion: $`<|\varphi _{q,\omega }^{(n)}|^2>=\frac{<|\delta \rho _{q,\omega }|^2>}{\chi _0^2}=\frac{2kT}{\chi _0^2}\frac{D|\chi _0|q^2}{\omega ^2+(Dq^2)^2}.`$ (10) We can now use the spectral densities (Eqns.
8-10) to calculate the experimentally measured differential noise between the two ends of the sample, averaged over transverse modes: $`<|\varphi _{12}(\omega )|^2>=\sum _{q_x}\frac{sin^2(q_xa)}{q_x^2a^2}\sum _{q_y}\frac{sin^2(q_ya)}{q_y^2a^2}`$ (11) $`\times \int dq_z\,4sin^2(\frac{q_zL}{2})<|\varphi (q,\omega )|^2>.`$ (12) The Johnson-Nyquist noise in a three-dimensional conductor (9) can be easily calculated, because the dominant contribution to the sums and the integral comes from “zero modes” ($`q_x,q_y,q_z\to 0`$): $`<|\varphi _{12}^{3d}(\omega )|^2>=2kT\frac{16\pi ^2\sigma L}{ϵ_{\mathrm{\infty }}^2S}\frac{1}{(\frac{4\pi \sigma }{ϵ_{\mathrm{\infty }}})^2+\omega ^2}.`$ (13) Such noise is readily interpreted as the noise from a conductor having an internal resistance $`R=\frac{L}{\sigma S}`$ and an internal capacitance $`C=\frac{ϵ_{\mathrm{\infty }}S}{4\pi L}`$ connected in parallel: $`R(\omega )=R/(1+(RC\omega )^2)`$. Remarkably, the Johnson-Nyquist noise is white up to the frequency $`4\pi \sigma `$, which is independent of the length of the wire. The appropriate physical picture of fluctuations in an electrical conductor is that charge fluctuations relax on a very fast time scale $`\sim 1/4\pi \sigma `$, producing quasi-homogeneous current fluctuations. It is important to point out that the noise can depend on frequency through the frequency dependence of the conductivity $`\sigma (\omega )`$. For the Drude model of conductivity, the characteristic frequency for the fall-off of the conductivity $`\sigma (\omega )`$ is then the elastic scattering rate $`1/\tau `$. In the case of a “one-dimensional” wire ($`\lambda _D>a`$), we can take into account only one “zero mode” ($`q_x=q_y=0`$), since higher harmonics make contributions smaller in powers of $`(a/L)^2`$.
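The parallel-RC reading of Eqn (13) can be checked directly: in Gaussian units $`R=L/\sigma S`$ and $`C=ϵ_{\mathrm{\infty }}S/4\pi L`$, so the rolloff time $`RC=ϵ_{\mathrm{\infty }}/4\pi \sigma `$ drops out of the geometry entirely. A minimal sketch (the numerical value of $`\sigma `$ is an arbitrary illustrative choice):

```python
from math import pi, isclose

# Gaussian units: R = L/(sigma*S), C = eps*S/(4*pi*L), so the rolloff
# time R*C = eps/(4*pi*sigma) is independent of L and S.
def rc_time(sigma, L, S, eps=1.0):
    R = L / (sigma * S)          # internal resistance
    C = eps * S / (4 * pi * L)   # internal capacitance
    return R * C

sigma = 1.0e17                   # illustrative conductivity [1/s]
t1 = rc_time(sigma, L=1.0, S=1e-4)
t2 = rc_time(sigma, L=37.0, S=2.5)
assert isclose(t1, 1.0 / (4 * pi * sigma)) and isclose(t1, t2)
```

This is the statement in the text that the white-noise band extends up to $`4\pi \sigma `$ independently of the length of the wire.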
If we approximate the weak logarithmic dependence in Eqn. (8) by a constant, $`ln\frac{1}{qa}\simeq ln\frac{L}{a}`$, we get an expression similar to Eqn. (10) with the renormalized diffusion coefficient $`D^{}\equiv D(1+\frac{a^2}{2\pi ϵ_{\mathrm{\infty }}\lambda _D^2}ln(L/a))`$. Thus the frequency dependence of the noise for a “one-dimensional” wire is the same as for a neutral liquid. This result is expected, since screening is not efficient in one dimension. The integral over the wave vector $`q_z`$ can be evaluated explicitly, while the approximation of the logarithm by a constant is reliable in the limits $`\omega \to 0`$ and $`\omega \to \mathrm{\infty }`$: $`<|\varphi _{12}^{1d}(\omega )|^2>=2kT\frac{D^2}{(D^{})^2}\frac{\pi L}{\sigma _1\theta }(1-e^{-\theta }(cos\theta -sin\theta )),`$ (14) where $`\theta =(\omega /2\omega _0)^{1/2}`$ and $`\omega _0=D^{}/L^2`$ is the natural diffusion frequency. From the above expression for the noise in a one-dimensional wire, we see that it is white up to the “Thouless” frequency $`\omega _0`$ and decays above this frequency as $`1/\sqrt{\omega }`$. The same frequency dependence (with $`D^{}\simeq D`$) is expected for the fluctuations of the chemical potential between two points in a narrow vessel ($`\omega \ll D/a^2`$) of liquid. It is clear indeed that the difference in chemical potentials between two points relaxes through diffusion on a characteristic time scale $`L^2/D`$. In fact, this is the classical result for any quantity (such as temperature or density) obeying a diffusion process without long-range correlations. The nature of the relaxation of a random potential fluctuation is quite different in charged and neutral liquids: in charged liquids it is essentially the fast process of screening, while in neutral liquids it is the process of diffusion. In systems of reduced dimensionality (such as narrow wires), the Coulomb interaction does not cause long-range correlations; therefore, the noise in a narrow conductor can be similar to that in neutral systems.
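The crossover contained in Eqn (14) is easy to exhibit numerically. Writing its frequency-dependent part as $`g(\theta )=(1-e^{-\theta }(cos\theta -sin\theta ))/\theta `$ with $`\theta =(\omega /2\omega _0)^{1/2}`$, $`g`$ tends to a constant (the value 2) for $`\theta \ll 1`$, i.e. white noise, and to $`1/\theta \propto \omega ^{-1/2}`$ for $`\theta \gg 1`$:

```python
from math import exp, cos, sin

def g(theta):
    """Frequency-dependent factor of Eq. (14), divided by theta."""
    return (1.0 - exp(-theta) * (cos(theta) - sin(theta))) / theta

# White-noise plateau: g -> 2 as theta -> 0 (omega << D'/L^2)
print(round(g(1e-4), 3))          # -> 2.0
# High-frequency tail: g -> 1/theta, i.e. noise ~ omega**(-1/2)
print(round(g(50.0) * 50.0, 3))   # -> 1.0
```

Since $`\theta \propto \sqrt{\omega }`$, the second limit is exactly the $`1/\sqrt{\omega }`$ decay quoted in the text.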
The experimental observation of the predicted noise properties is feasible in semiconducting materials having a low concentration of carriers. The screening length $`\lambda _D`$ in such materials can be as large as $`10^{-4}\mathrm{cm}`$. In fact, with current experimental techniques (see Reference for a review of experiments), even the high Maxwell relaxation frequency crossover can be observed in “wide” wires ($`a\gg \lambda _D`$, the situation almost always encountered) with poor conductivity. In metals, normally both the elastic rate $`1/\tau `$ and the Maxwell frequency $`4\pi \sigma `$ are high and difficult to observe. The question of the frequency dependence of equilibrium and “shot” noise was raised recently by Y. Naveh et al. Special geometries (sandwich and ground plane) were suggested to observe the Maxwell and Thouless crossover frequencies. The above calculation shows that the crossover at the Maxwell relaxation frequency is a general property of Coulomb systems and should be observed independently of geometry and length $`L`$ for “wide” wires. Moreover, for “narrow” wires ($`\lambda _D>a`$) the Thouless frequency crossover should be seen independently of “external screening” by electrodes or the ground plane. In conclusion, the noise in narrow conductors ($`\lambda _D>a`$) becomes frequency-dependent starting from the low frequency $`D/L^2`$ (quite similar to simple diffusion systems), although in wide conductors the noise is white up to the smaller of the frequencies $`4\pi \sigma `$ and $`1/\tau `$. This work was supported by the National Science Foundation through the Science and Technology Center for Superconductivity (Grant No. DMR-91-20000). I thank A. Leggett and M. Weissman for helpful discussions.
no-problem/9908/hep-ex9908011.html
ar5iv
text
# Search for Neutral Heavy Leptons in a High-Energy Neutrino Beam ## References
no-problem/9908/astro-ph9908279.html
ar5iv
text
# Magnetar Spin-Down ## 1 INTRODUCTION There is a growing collection of pulsating high-energy sources which occupy a unique phase-space in their combination of long ($`>5`$ s), monotonically increasing periods and high period derivatives. One subgroup in this collection are the soft gamma-ray repeaters (SGRs), transient sources that exhibit repeated bursts of soft ($`\sim 30`$ keV), short duration ($`\sim 0.1`$ s) $`\gamma `$-rays. Bursts of average energy $`\sim 10^{41}`$ ergs repeat on irregular intervals, while giant bursts of energy $`\sim 10^{45}`$ ergs have been observed once in each of the sources SGR 0526-66 and SGR 1900+14. Recently, a period of $`P=7.47`$ s has been detected from SGR 1806-20 in quiescent emission (Kouveliotou et al. 1998) and a period of $`P=5.16`$ s from SGR 1900+14 in both quiescent (Kouveliotou et al. 1999) and giant burst emission (Hurley et al. 1999a). Both have measured period derivatives around $`\dot{P}\sim 10^{-10}\mathrm{s}\mathrm{s}^{-1}`$. Another group of sources having similar $`P`$ and $`\dot{P}`$ are the anomalous X-ray pulsars (AXPs), pulsating X-ray sources with periods in the range $`6-12`$ s and period derivatives in the range $`10^{-12}-10^{-11}\mathrm{s}\mathrm{s}^{-1}`$ (Gotthelf & Vasisht 1998). These sources have shown only quiescent emission with no bursting behavior. The most plausible and commonly invoked model to explain the characteristics of both the SGRs and AXPs is that of a neutron star having a dipole magnetic field of $`>10^{14}`$ G, much higher than the fields of ordinary, isolated pulsars and well above the quantum critical field of $`4.4\times 10^{13}`$ G. Such stars, known as magnetars, were first proposed by Duncan & Thompson (1992), Usov (1992) and Paczynski (1992) to account for various properties of SGRs and $`\gamma `$-ray bursts. All of the SGR and AXP sources are found in or near young ($`\tau <10^5`$ yr) supernova remnants (SNR).
SGRs in addition have been associated with X-ray and radio plerions whose emission power far exceeds the dipole spin-down power. It was therefore proposed that relativistic particle outflows from the SGR bursts (Tavani 1994, Harding 1995, Frail et al. 1997), or from a steady flux of Alfvén waves (Thompson & Duncan 1996), provide the power for the nebular emission. The existence of such a wind has been inferred indirectly by X-ray and radio observations of the synchrotron nebula G10.0–0.3 around SGR1806–20 (Murakami et al. 1994, Kulkarni et al. 1994)<sup>1</sup><sup>1</sup>1Note that a revised IPN location now places the SGR source outside the core of the radio plerion (Hurley et al. 1999b). Thompson & Duncan (1996) estimated that the particle luminosity from SGR1806–20 is of the order of $`10^{37}\mathrm{erg}\mathrm{s}^{-1}`$. Such energetic particle winds will also affect the spin-down torque of the star by distorting the dipole field structure near the light cylinder (Thompson & Blaes 1998 \[TB98\]). We will show, however, by deriving a formula similar to the one given by TB98, that if relativistic wind outflow continuously dominates the spin-down of the neutron star in SGR 1806-20 at a level $`\gtrsim 10^{36}\mathrm{erg}\mathrm{s}^{-1}`$, then the surface magnetic dipole field is too low to be consistent with a magnetar model. It is possible, however, that the wind outflows from SGR sources are episodic, lasting for a time following each burst that may be small compared to the time between bursts. In this case, dipole radiation will dominate the spin-down between bursts, even though wind outflow may dominate the average rotational energy-loss rate. Using a general formula for the spin-down torque that includes both dipole radiation and episodic particle winds, we derive the magnetic field and characteristic age of the neutron star as a function of the observed $`P`$ and $`\dot{P}`$, the wind luminosity and the wind duty cycle.
We find that the derived surface field and age can have a range of values between the pure dipole and pure wind cases, depending on the duty cycle of the wind outflows, even for a constant value of the average wind luminosity. ## 2 ENERGY LOSS FROM A WIND It is known that neutron star rotation drastically distorts the magnetic field near and beyond the light cylinder radius $`R_{\mathrm{LC}}=c/\mathrm{\Omega }`$ (where $`\mathrm{\Omega }=2\pi /P`$ is the neutron star angular velocity; $`P`$ is the period), and that magnetic field lines crossing the light cylinder remain open all the way to ‘infinity’. Field lines will also open up in the presence of a powerful wind of particles emitted from the neutron star. Near the surface of the star, the dipole magnetic field pressure is high enough to completely dominate the wind stresses. However, the magnetic energy density drops much faster with distance than that of a quasi–isotropic particle wind of kinetic luminosity $`L_p`$ at infinity; thus, ignoring a transitional region between these two regimes, beyond a distance of the order of $`r_{\mathrm{open}}\simeq r_0(B_0^2r_0^2c/2L_p)^{1/4}`$ magnetic field lines will be ‘combed out’ by the wind. Here, $`B_0`$ is the value of the neutron star surface dipole magnetic field and $`r_0\simeq 10`$ km is the radius of the star (henceforth, we will use small $`r`$ to denote spherical distances \[from the center\], and capital $`R`$ for cylindrical distances \[from the axis\]). The fraction of open field lines originates from a region of radial extent $$R_{pc}\simeq r_0(r_0/r_{\mathrm{open}})^{1/2}\ll r_0$$ (1) around the axis, the so-called polar cap (this estimate is obtained for an undistorted dipole). An aligned magnetic rotator, the simplified geometry examined herein, spins down (even though an aligned magnetic rotator in vacuum does not), because a neutron star magnetosphere cannot be a true vacuum (Goldreich & Julian 1969).
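To make the scales concrete, $`r_{\mathrm{open}}`$ and the polar cap radius of Eqn (1) can be evaluated for illustrative parameters. The field strength and wind luminosity below are assumed round numbers chosen only for this sketch, not fitted values:

```python
from math import pi, sqrt

# Illustrative numbers (assumed, cgs): a 10 km star with B_0 = 1e14 G
# and wind luminosity L_p = 1e37 erg/s, spinning at P = 7.48 s.
c = 3.0e10          # speed of light [cm/s]
r0 = 1.0e6          # stellar radius [cm]
B0 = 1.0e14         # surface dipole field [G] (assumed)
Lp = 1.0e37         # wind luminosity [erg/s] (assumed)
P = 7.48            # spin period [s]

r_open = r0 * (B0**2 * r0**2 * c / (2 * Lp)) ** 0.25   # field lines combed out
R_lc = c * P / (2 * pi)                                # light cylinder radius
R_pc = r0 * sqrt(r0 / r_open)                          # polar cap radius, Eq. (1)

print(f"r_open ~ {r_open:.1e} cm, R_LC ~ {R_lc:.1e} cm, R_pc ~ {R_pc:.1e} cm")
# Here r_open << R_LC, so the wind (not the dipole) sets the open flux,
# and R_pc << r0 as stated in Eq. (1).
assert r_open < R_lc and R_pc < r0
```

For these numbers the field lines open at a few thousand stellar radii, well inside the light cylinder, and the polar cap is a few hundred meters across.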
As is discussed in Contopoulos, Kazanas & Fendt (1999; hereafter CKF), a neutron star magnetosphere is spontaneously charged in order to support the charges and electric currents required in the realistic solution. An electric current also flows in a large-scale electric circuit: it flows out through the polar cap to infinity, returns along an equatorial current-sheet discontinuity that connects to the interface between open and closed field lines at the light cylinder, and closes across field lines along the polar cap. This electric current $`I_{pc}`$ flows along the magnetic field lines crossing the polar cap and generates the required spin-down magnetic torque on the neutron star. The neutron star spins down because the electromagnetic torque generated at the surface of the polar cap opposes its rotation. Inside the neutron star surface, this current flows horizontally towards the edge of the polar cap, where it flows out in a current sheet along the interface between open and closed field lines. The electric current flowing through the polar cap is (to a good approximation) distributed as $`I\propto \mathrm{\Psi }(2-\mathrm{\Psi }/\mathrm{\Psi }_{pc})`$, where $`\mathrm{\Psi }`$ is the total amount of magnetic flux contained inside cylindrical radius $`R`$, and $`\mathrm{\Psi }_{pc}`$ is the total amount of magnetic flux which opens up to infinity. This is an exact expression for a magnetic (split) monopole, and CKF showed that it remains approximately valid even for a dipole.
Since on the neutron star surface $`\mathrm{\Psi }\propto R^2`$ when $`R<R_{pc}\ll r_0`$, $$I(R)=I_{pc}\left(\frac{R}{R_{pc}}\right)^2\left[2-\left(\frac{R}{R_{pc}}\right)^2\right].$$ (2) When this current flows horizontally in a layer of thickness $`h(R)`$ across the polar cap, an electric current density $$J(R)=\frac{I_{pc}}{2\pi Rh(R)}\left(\frac{R}{R_{pc}}\right)^2\left[2-\left(\frac{R}{R_{pc}}\right)^2\right]$$ (3) will flow horizontally, which, combined with the axial magnetic field $`B_0`$ threading the polar cap, generates an azimuthal Lorentz force per unit volume, $`f(R)=\frac{1}{c}J(R)\times B_0`$. Integrating $`f(R)`$ over the volume of the polar cap crust where the above electric current flows horizontally, and doubling our result to account for the two (north/south) polar caps, we obtain the total electromagnetic torque $$T\simeq \frac{2}{3c}I_{pc}B_0R_{pc}^2.$$ (4) We present two simple, physically equivalent, estimates of the electric current flowing in the magnetosphere. One is to consider particles (electrons/positrons) with the Goldreich–Julian charge density $`\rho _{GJ}\simeq B_0/2\pi R_{\mathrm{LC}}`$ flowing outwards at the speed of light from the polar cap. This naive estimate gives $$I_{pc}\simeq \pi R_{pc}^2\rho _{GJ}c=\frac{B_0r_0c}{2}\left(\frac{r_0}{R_{\mathrm{LC}}}\right)\left(\frac{r_0}{r_{\mathrm{open}}}\right)$$ (5) Another equivalent, more physical, way to estimate the total amount of electric current flowing in the magnetosphere through the polar cap is to make the naive (and correct) assumption that, at the distance of the light cylinder, the two magnetic field components (toroidal and poloidal) must be of the same order of magnitude, $`B_\varphi |_{\mathrm{LC}}\simeq B_p|_{\mathrm{LC}}.`$ This is indeed true in the force–free axisymmetric magnetosphere (without wind), since the light cylinder is the Alfvén point (Li & Melrose 1994). In general, the Alfvén point is a short distance inside the light cylinder.
When the two field components are scaled back to the surface of the star at the edge of the polar cap, we obtain $$B_p|_{pc}=B_p|_{\mathrm{LC}}\left(\frac{R_{\mathrm{LC}}}{r_{\mathrm{open}}}\right)^2\left(\frac{r_{\mathrm{open}}}{r_0}\right)^3\simeq B_0.$$ (6) The structure of the field is dipole–like out to $`r_{\mathrm{open}}`$ and monopole–like out to the light cylinder. From Eqn (6) and the relation $`B_\varphi |_{\mathrm{LC}}\simeq B_p|_{\mathrm{LC}}`$, we have $$B_\varphi |_{pc}=B_0\frac{r_0^3}{R_{pc}R_{\mathrm{LC}}r_{\mathrm{open}}},\mathrm{and}\mathrm{therefore}$$ (7) $$I_{pc}\simeq \frac{B_\varphi |_{pc}R_{pc}c}{2}=\frac{B_0r_0c}{2}\left(\frac{r_0}{R_{\mathrm{LC}}}\right)\left(\frac{r_0}{r_{\mathrm{open}}}\right).$$ (8) This is a very simple result, and shows that the two ways of looking at the problem are equivalent. Using the above relation for the polar cap current in equation (4), we obtain the expression for the torque $`T`$ associated with the above spin–down arguments. The corresponding energy loss rate due to this torque is $`\dot{E}=T\mathrm{\Omega }`$, given more explicitly by $$\dot{E}_w=T\mathrm{\Omega }=\frac{B_0^2r_0^6\mathrm{\Omega }^4}{3c^3}\left(\frac{R_{\mathrm{LC}}}{r_{\mathrm{open}}}\right)^2=\dot{E}_D\left(\frac{L_p}{\dot{E}_D}\right)^{1/2}$$ (9) where we have used $`L_p/4\pi cr_{\mathrm{open}}^2=B(r_{\mathrm{open}})^2/8\pi `$ to obtain an expression for $`\dot{E}_w`$ in terms of the wind luminosity, $`L_p`$, and the dipole energy loss, $`\dot{E}_D`$. Note that the standard dipole spin–down formula (modulo the different numerical factor in the denominator) is modified by the term $`(R_{\mathrm{LC}}/r_{\mathrm{open}})^2`$, which incorporates the effects of the “loading” of the magnetosphere with the outflowing wind. This expression, while of similar functional form, differs from that of TB98 because of a normalization error in the latter work (Thompson, priv. comm.), but agrees with a corrected expression given by Thompson et al. (1999). Using Eqn.
(9) with values for $`P=P_{1806}=7.48`$ s, $`\dot{P}=\dot{P}_{1806}=8.3\times 10^{-11}\mathrm{s}\mathrm{s}^{-1}`$, i.e. those observed in SGR1806–20, and assuming the presence of a steady wind of luminosity $`L_{37}=L_p/10^{37}`$ erg s<sup>-1</sup>, we obtain an estimate of the surface magnetic field of the neutron star: $$B_0\simeq 3\times 10^{13}\mathrm{G}\left(\frac{P}{P_{1806}}\right)^{-1}\left(\frac{\dot{P}}{\dot{P}_{1806}}\right)L_{37}^{-1/2},$$ (10) where we have assumed $`r_0=10`$ km and $`I=10^{45}\mathrm{g}\mathrm{cm}^2`$. If one uses the observed value of the spin–down rate and the average value of the particle luminosity needed to account for the energetics of the nebula, then $`B_0`$ is significantly below the estimate of $`B_0\simeq 10^{15}`$ G obtained using Eqn (9) with $`R_{\mathrm{LC}}=r_{\mathrm{open}}`$. This modified spin–down law leads to an exponential increase in the pulsar period instead of the power law increase associated with the purely dipole emission (this is easily seen by equating $`\dot{E}_w=-I\mathrm{\Omega }\dot{\mathrm{\Omega }}`$ and integrating). One can thus estimate the age $`\tau `$ of SGR1806-20 through the relation $$\tau =\frac{P}{2\dot{P}}\mathrm{ln}\left[\frac{L_pP^3}{4\pi ^2I\dot{P}}\right],$$ (11) which yields $`\tau \simeq `$ 11,800 yr for $`L_p=10^{37}`$ erg s<sup>-1</sup>. Thus the steady wind model can naturally account for the fact that the age of the SNR G10.0–0.3 is much larger than the characteristic dipole spin-down age, but with the penalty that the magnetar model must be abandoned. ## 3 COMBINED WIND AND DIPOLE SPIN-DOWN The expressions for the magnetic field and characteristic age of the neutron star given above are only valid if the relativistic particle wind completely and continuously dominates the spin down of the star. If the particle wind flow is either discontinuous or not dominant, then a more general description of the spin-down energy loss must be used.
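The pure-wind numbers quoted above can be reproduced with a few lines of arithmetic. The sketch below uses the fiducial values assumed in the text; the wind term is normalized as in Eqn (12) below, so the field estimate agrees with the coefficient of Eqn (10) only to order unity.

```python
import math

# Fiducial values for SGR 1806-20 assumed in the text (cgs units).
P = 7.48              # spin period, s
Pdot = 8.3e-11        # period derivative, s/s
I = 1e45              # moment of inertia, g cm^2
r0 = 1e6              # stellar radius, cm
c = 3e10              # speed of light, cm/s
Lp = 1e37             # steady wind luminosity, erg/s
yr = 3.156e7          # seconds per year

# Observed spin-down luminosity, Edot = 4 pi^2 I Pdot / P^3.
Edot = 4 * math.pi**2 * I * Pdot / P**3

# Wind-dominated surface field: invert Edot = B0 r0^3 Omega^2 Lp^{1/2} / sqrt(6 c^3).
Omega = 2 * math.pi / P
B0 = Edot * math.sqrt(6 * c**3) / (r0**3 * Omega**2 * math.sqrt(Lp))

# Characteristic age of Eqn (11), in years.
tau = (P / (2 * Pdot)) * math.log(Lp * P**3 / (4 * math.pi**2 * I * Pdot)) / yr
```

With these inputs the field comes out at a few $`\times 10^{13}`$ G and the age at roughly $`10^4`$ yr: well below the magnetar range, but consistent with the SNR age, as stated above.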
If the wind has instantaneous luminosity $`L_p`$ during its times of activity and duty cycle $`D_p`$, defined as the fraction of total on-time, then the average energy loss from combined wind and dipole is $`\dot{E}`$ $`=`$ $`-I\mathrm{\Omega }\dot{\mathrm{\Omega }}=\dot{E}_D(1-D_p)+\dot{E}_wD_p`$ (12) $`=`$ $`{\displaystyle \frac{B_0^2r_0^6\mathrm{\Omega }^4}{6c^3}}(1-D_p)+L_p^{1/2}D_p{\displaystyle \frac{B_0r_0^3\mathrm{\Omega }^2}{\sqrt{6c^3}}},`$ where we have used Eqn (9). The surface magnetic field may then be found as the solution to the quadratic equation, giving $$B_0=-\frac{\sqrt{6c^3}}{8\pi ^2}\frac{L_p^{1/2}D_pP^2}{(1-D_p)r_0^3}F(P,\dot{P})$$ (13) where, $$F(P,\dot{P})=\left\{1-\left(1+\frac{4\dot{E}(1-D_p)}{L_pD_p^2}\right)^{1/2}\right\},$$ (14) and $`\dot{E}=4\pi ^2I\dot{P}/P^3`$. Note that when $`L_pD_p^2\ll 4\dot{E}(1-D_p)`$, $$B_0\simeq \left(\frac{3c^3I\dot{P}P}{2\pi ^2r_0^6(1-D_p)}\right)^{1/2}$$ (15) which gives the standard dipole formula when $`D_p=0`$. If $`L_pD_p^2\gg 4\dot{E}(1-D_p)`$, Eqn (13) gives the pure wind formula Eqn(10) with $`L_p`$ replaced by $`L_pD_p^2`$. We may also integrate Eqn (12) from the initial period $`P_0`$ to the present period $`P`$ to obtain the general expression for the neutron star characteristic age $`\tau `$. Assuming that $`P_0\ll P`$, $$\tau \simeq -\frac{4\pi ^2I}{L_pD_p^2P^2}\frac{\mathrm{ln}[1-2/F(P,\dot{P})]}{F(P,\dot{P})}.$$ (16) This expression gives the usual characteristic age for dipole spin down, $`\tau =P/2\dot{P}`$, in the limit $`L_pD_p^2\ll 4\dot{E}(1-D_p)`$ and $`D_p=0`$. One must be careful in the limit $`L_pD_p^2\gg 4\dot{E}(1-D_p)`$. If $`D_p`$ is close to 1, then the first term in Eqn (12) should be dropped to give an expression for $`\tau `$ which is the same as Eqn (11), again with $`L_p`$ replaced by $`L_pD_p^2`$. We now examine the consequences of the general expressions (13) and (16) for SGR1806-20, the SGR source for which we have the best estimate of the particle wind luminosity.
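As a cross-check, the quadratic of Eqn (12) can be solved numerically. The sketch below is our own; it writes the root with a positive $`F`$, equivalent to Eqns (13)-(14) up to an overall sign, and uses the wind-term normalization of Eqn (12).

```python
import math

def edot_combined(B0, P, Dp, Lp, r0=1e6, c=3e10):
    """Average energy-loss rate of Eqn (12): dipole plus duty-cycled wind."""
    Om = 2 * math.pi / P
    Ed = B0**2 * r0**6 * Om**4 / (6 * c**3)                        # dipole term
    Ew = math.sqrt(Lp) * B0 * r0**3 * Om**2 / math.sqrt(6 * c**3)  # wind term
    return Ed * (1 - Dp) + Ew * Dp

def B0_combined(P, Pdot, Dp, Lp, I=1e45, r0=1e6, c=3e10):
    """Closed-form root of the quadratic, Eqns (13)-(14) written with F > 0."""
    Edot = 4 * math.pi**2 * I * Pdot / P**3
    F = math.sqrt(1 + 4 * Edot * (1 - Dp) / (Lp * Dp**2)) - 1
    return (math.sqrt(6 * c**3) / (8 * math.pi**2)) \
        * (math.sqrt(Lp) * Dp * P**2 / ((1 - Dp) * r0**3)) * F
```

Plugging the root back into `edot_combined` recovers the observed $`\dot{E}`$, and for small duty cycle the root approaches the dipole value of Eqn (15).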
Figure 1 shows the values of $`B_0`$ and $`\tau `$ computed from Eqns (13) and (16) for the measured $`P`$ and $`\dot{P}`$ of SGR 1806-20 (Kouveliotou et al. 1998) as a function of the parameter $`L_pD_p^2/\dot{E}`$, indicating the fractional wind contribution to the spin-down energy loss rate, assuming $`D_p\ll 1`$. For small $`L_pD_p^2/\dot{E}`$, the curves approach their dipole radiation values of $`B_0\simeq 10^{15}`$ G and $`\tau \simeq 1500`$ yr. For $`L_pD_p^2/\dot{E}\stackrel{>}{\sim }0.1`$, $`B_0`$ and $`\tau `$ begin to depart from these values, with the magnetic field decreasing and the age increasing to connect smoothly to the wind-dominated solutions. We have seen in Section II that assuming continuous, wind-dominated spin-down can give a characteristic age that agrees with the age ($`\tau \simeq 10^4`$ yr) of the plerion surrounding SGR1806-20, but that the derived magnetic field drops into the range of normal radio pulsars. In such a case, however, the free energy associated with the magnetic field decay is not sufficient to account for the observed luminosity, and an alternative free energy source must be considered. The parameter $`L_pD_p^2`$ may be estimated for SGR 1806-20 from its observed burst characteristics. In general, we can write the particle luminosity associated with a burst as $`L_p=E_\gamma ϵ_\gamma ^{-1}\mathrm{\Delta }\tau _w^{-1}`$, where $`E_\gamma `$ is the $`\gamma `$-ray burst energy, $`ϵ_\gamma `$ is the conversion efficiency of particle energy to $`\gamma `$-rays and $`\mathrm{\Delta }\tau _w`$ is the duration of the wind outflow following the burst. If $`T`$ is the average time between bursts, then the wind duty cycle is $`D_p=\mathrm{\Delta }\tau _w/T`$, and $$L_pD_p^2=E_\gamma ϵ_\gamma ^{-1}\mathrm{\Delta }\tau _wT^{-2}.$$ (17) In addition, the requirement that the X-ray nebula (Murakami et al.
1994) is powered by the aggregate of the bursts’ wind outflows leads to the condition $`L_pD_p=10^{42}\mathrm{erg}E_{40}(10^{-2}/ϵ_\gamma )/T=10^{37}\mathrm{erg}\mathrm{s}^{-1}(10^{-2}/\eta )`$, where $`\eta `$ is the conversion efficiency of particle luminosity to nebular emission and $`E_{40}\equiv E_\gamma /10^{40}\mathrm{erg}`$. For the multiple SGR bursts from SGR 1806-20, combining Eqn (17) with the above requirement on $`L_pD_p`$, we find $`E_{40}(10^{-2}/ϵ_\gamma )=(T_{SGR}/10^5\mathrm{s})(10^{-2}/\eta )`$, giving $`L_pD_p^2=10^{34}\mathrm{erg}\mathrm{s}^{-1}(\mathrm{\Delta }\tau _w/10^2\mathrm{s})(10^5\mathrm{s}/T_{SGR})(10^{-2}/\eta )`$. Likewise, for the giant bursts, we have $`L_pD_p^2=3\times 10^{35}\mathrm{erg}\mathrm{s}^{-1}(\mathrm{\Delta }\tau _w/10^7\mathrm{s})(10\mathrm{yr}/T_G)(10^{-2}/\eta )`$. From Eqn (13), Eqn (16) and Figure 1, the conflicting goals of preserving the magnetar model (i.e. $`B_0\stackrel{>}{\sim }10^{14}`$ G) and bringing the characteristic age within a factor of 2 of the $`10^4`$ yr age of G10.0-0.3, may be satisfied with $`L_pD_p^2/\dot{E}\simeq 10`$–100. Since $`\dot{E}=8\times 10^{33}\mathrm{erg}\mathrm{s}^{-1}`$, the duration of the particle outflow must be much larger than the $`\gamma `$-ray burst duration and the wind flow duty cycle must be $`D_p\simeq 0.008`$–0.08. ## 4 DISCUSSION The results of our analysis leave two alternatives for interpreting the spin down of SGRs, given the present limited data. The first assumes a continuous wind outflow at a luminosity sufficient to yield a characteristic age in agreement with that of the surrounding SNR. We have shown that in the case of SGR1806-20, this assumption leads to a surface magnetic field of only $`B_0=3\times 10^{13}`$ G, well below the magnetar range ($`>10^{14}`$ G). However, this alternative requires a source of free energy other than the field decay to power both the nebular emission and the SGR bursts.
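The duty-cycle bookkeeping behind the burst-wind numbers above is simple enough to check directly. This is a sketch with the fiducial values quoted in the text, all of which are assumptions of the estimate rather than measurements.

```python
# Multiple-burst wind of SGR 1806-20: fiducial values from the text.
E_gamma = 1e40      # gamma-ray energy per burst, erg
eps_gamma = 1e-2    # particle-to-gamma-ray conversion efficiency
dtau_w = 1e2        # duration of the wind outflow per burst, s
T_sgr = 1e5         # mean time between bursts, s
Edot = 8e33         # spin-down luminosity, erg/s

Lp = E_gamma / (eps_gamma * dtau_w)   # instantaneous wind luminosity
Dp = dtau_w / T_sgr                   # wind duty cycle
LpDp = Lp * Dp                        # average wind power into the nebula
LpDp2 = Lp * Dp**2                    # combination entering Eqns (13)-(16)

# Duty cycle required for Lp*Dp^2/Edot ~ 10-100 at fixed Lp*Dp = 1e37 erg/s.
Dp_lo = 10 * Edot / 1e37
Dp_hi = 100 * Edot / 1e37
```

This reproduces $`L_pD_p=10^{37}`$ erg s<sup>-1</sup>, $`L_pD_p^2=10^{34}`$ erg s<sup>-1</sup>, and the quoted duty-cycle range of roughly 0.008–0.08.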
The second option assumes an episodic (or at least variable) wind outflow such that the average wind luminosity is sufficient to power the nebular emission. This allows a range of combinations of surface field and age, and depends on the wind duty cycle. We show that, in the case of SGR1806-20, it is possible to accommodate both a characteristic age $`\tau \simeq 7500`$ yr, consistent with the estimated SNR age of $`10^4`$ yr, and a magnetar model ($`B_0=10^{14}`$ G). One should then observe a sudden increase in the period derivative following SGR bursts. Evidence for such an increase was seen following the bursting activity of June - August 1998 from SGR1900+14 (Marsden et al. 1999). These options assume that gravitational radiation did not play a significant role in the early spin-down evolution of the star, but it is unlikely to have made a large difference in the characteristic age. There are a number of arguments in favor of the magnetar model for SGRs, most of which have been discussed by Thompson & Duncan (1995) and Baring & Harding (1998). An additional argument is that the pure dipole fields of AXPs, which are spinning down smoothly (Gotthelf et al. 1999) and have much lower luminosity wind flows (if any), lie in the magnetar range. We suggest that detailed monitoring of the spin periods of the SGRs, to search for variations in the period derivative, can measure or place limits on the duty cycle of particle outflows and thus determine whether a magnetar model for these sources is viable. We thank Rob Duncan, Chris Thompson, Peter Woods and the referee Cole Miller for valuable comments and discussions.
no-problem/9908/gr-qc9908075.html
ar5iv
text
# Brans-Dicke-type theories and avoidance of the cosmological singularity ## I Introduction Scalar-tensor (ST) theories of gravity can be formulated in an infinite number of equivalent frames related by conformal rescalings of the spacetime metric. Among all conformally related frames the Jordan frame (JF) and the Einstein frame (EF) are distinguished . Although the JF and the EF formulations of a given ST theory provide mathematically equivalent descriptions of the same physics, the physical equivalence of these descriptions is under discussion (for an exhaustive review on the subject see ). Moreover, most authors on the subject share the conviction that only one of the conformally related frames is the ’physical frame’. Others admit that the JF and EF formulations of ST theory provide just two different descriptions of the same physics but, they claim, only one of the JF and EF metrics is the ’physical metric’, i.e. the metric that is measured with clocks and rods made of ordinary matter . Among those that share the viewpoint of the physical non-equivalence of the JF and the EF formulations, there exists no unified criterion about which frame is the physical one . Some authors of this group choose the Jordan frame as the physical frame since the ordinary matter is minimally coupled to the JF metric. Others reject this choice using energy arguments. The scalar field kinetic energy is negative definite, or indefinite, in the Jordan frame. This implies that, in this frame, the theory does not have a stable ground state. In the Einstein frame, meanwhile, the scalar field possesses a positive definite kinetic energy for $`\omega >-\frac{3}{2}`$. We feel this remains an open question. In reference a point of view on this subject was presented that avoids any discussion of the physical preference of one or another conformal frame for the formulation of a given theory of gravity. It is based on the following observation.
The conformal transformations of the metric can be considered as particular transformations of the units of length, time, and mass (a simple point-dependent scale factor applied to the units of length, time, and reciprocal mass). Spacetime coincidences are not affected by these transformations, i.e. spacetime measurements are not sensitive to the conformal rescalings of the metric<sup>*</sup><sup>*</sup>*Another way of looking at this is realizing that the experimental measurements deal always with dimensionless numbers and these are unchanged under the transformations of the physical units. For a readable discussion on the dimensionless nature of measurements we recommend reading section II of reference . This means that experimental observations are unchanged under these transformations. Consequently, the different conformal formulations of a given theory of gravity are experimentally indistinguishable. This line of reasoning leads to the conclusion that a statement such as ’the Jordan frame (or any other) formulation of Brans-Dicke-type theories is the physical one’ is devoid of any physical, i.e. experimentally testable, meaning. It can only be taken as an independent postulate of the theory. Then the discussion about which conformal frame is the physical one is devoid of interest, since it is an ill-posed question. An alternative approach to this subject can be based on the following postulate. The different conformal representations of a given BD-type theory of gravity are physically equivalent. In the present paper we shall study the consequences of this postulate for flat Friedmann-Robertson-Walker (FRW) cosmology in a Brans-Dicke-type theory of gravitation with minimal coupling between the scalar field and the matter fields in the Einstein frame. This last is just general relativity (GR) with an extra scalar field. This paper has been organized as follows.
In section II the class of Brans-Dicke-type theories of gravitation is presented and a brief discussion on conformal equivalence among members of this class is given. The meaning of the physical equivalence among the different conformal formulations is discussed. The Jordan frame formulation of general relativity is considered in section III. The Jordan frame flat FRW cosmology for a universe filled with barotropic perfect-fluid-type matter is studied in section IV. The implications for string theory of the results obtained in this last section are outlined in section V. Finally, in section VI a physical discussion of these results is given and the final fate of the cosmological singularity is conjectured. ## II Brans-Dicke-type theories of gravity and conformal equivalence The Jordan frame Lagrangian for BD-type theories is given by: $$L=\frac{\sqrt{-g}}{16\pi }(\varphi R-\frac{\omega }{\varphi }(\nabla \varphi )^2),$$ (1) where $`R`$ is the Ricci scalar of the Jordan frame metric $`𝐠`$, $`\varphi `$ is the BD scalar field and $`\omega `$ is the BD coupling constant (a free parameter). Under the conformal rescaling of the metric: $$\widehat{g}_{ab}=\varphi g_{ab},$$ (2) and the scalar field redefinition $`\widehat{\varphi }=\mathrm{ln}\varphi `$, the JF Lagrangian for BD-type theory (2.1) is mapped into the Einstein frame Lagrangian for BD-type theory: $$L=\frac{\sqrt{-\widehat{g}}}{16\pi }(\widehat{R}-(\omega +\frac{3}{2})(\widehat{\nabla }\widehat{\varphi })^2),$$ (3) where $`\widehat{R}`$ is the curvature scalar given in terms of the EF metric $`\widehat{𝐠}`$. Regarding interactions with matter in BD-type theory, only two possibilities seem to be physically interesting and reasonable : 1. Matter minimally couples to the metric in the Jordan frame: $$L=\frac{\sqrt{-g}}{16\pi }(\varphi R-\frac{\omega }{\varphi }(\nabla \varphi )^2)+L_{matter},$$ (4) where $`L_{matter}`$ is the Lagrangian density for the matter fields. The theory given by (2.4) is just the JF formulation of Brans-Dicke theory. 2.
Matter minimally couples to the metric in the Einstein frame: $$L=\frac{\sqrt{-\widehat{g}}}{16\pi }(\widehat{R}-(\omega +\frac{3}{2})(\widehat{\nabla }\widehat{\varphi })^2)+L_{matter}.$$ (5) In this case the scalar field $`\widehat{\varphi }`$ is minimally coupled to curvature, so the dimensional gravitational constant $`G`$ is a real constant. Due to the minimal coupling between ordinary matter and the spacetime metric, the rest mass of any test particle $`m`$ is constant too over the manifold. This leads to the dimensionless gravitational coupling constant $`Gm^2`$ ($`\hbar =c=1`$) being a real constant too, unlike BD theory where $`Gm^2\propto \varphi ^{-1}`$. The field equations derivable from (2.5) are: $$\widehat{G}_{ab}=8\pi \widehat{T}_{ab}+(\omega +\frac{3}{2})(\widehat{\nabla }_a\widehat{\varphi }\widehat{\nabla }_b\widehat{\varphi }-\frac{1}{2}\widehat{g}_{ab}(\widehat{\nabla }\widehat{\varphi })^2),$$ (6) $$\widehat{\Box }\widehat{\varphi }=0,$$ (7) with $`\widehat{G}_{ab}\equiv \widehat{R}_{ab}-\frac{1}{2}\widehat{g}_{ab}\widehat{R}`$. The conservation equation: $$\widehat{\nabla }_n\widehat{T}^{na}=0,$$ (8) is fulfilled. $`\widehat{T}_{ab}=\frac{2}{\sqrt{-\widehat{g}}}\frac{\delta }{\delta \widehat{g}^{ab}}(\sqrt{-\widehat{g}}L_{matter})`$ are the components of the stress-energy tensor for the ordinary matter fields. The theory given by (2.5) is just Einstein’s general relativity with a scalar field as an additional matter source of gravity. For $`\widehat{\varphi }=const.`$ or $`\omega =-\frac{3}{2}`$ we recover the usual Einstein theory. This formulation of general relativity is linked with Riemann geometry because the test particles follow the geodesics of the metric $`\widehat{𝐠}`$, $$\frac{d^2x^a}{d\widehat{s}^2}=-\widehat{\mathrm{\Gamma }}_{mn}^a\frac{dx^m}{d\widehat{s}}\frac{dx^n}{d\widehat{s}},$$ (9) where $`\widehat{\mathrm{\Gamma }}_{bc}^a=\frac{1}{2}\widehat{g}^{an}(\widehat{g}_{bn,c}+\widehat{g}_{cn,b}-\widehat{g}_{bc,n})`$ are the Christoffel symbols of the metric $`\widehat{𝐠}`$.
In fact, Riemann geometry is based on the parallel transport law: $$d\xi ^a=-\widehat{\gamma }_{mn}^a\xi ^mdx^n,$$ (10) and the length preservation requirement: $$d\widehat{g}(\xi ,\xi )=0,$$ (11) where, in the coordinate basis, $`\widehat{g}(\xi ,\xi )=\widehat{g}_{nm}\xi ^n\xi ^m`$, $`\widehat{\gamma }_{bc}^a`$ are the affine connections of the manifold, and $`\xi ^a`$ are the components of an arbitrary vector $`\xi `$. The above postulates of parallel transport and length preservation imply that, in Riemann geometry, the affine connections of the manifold coincide with the Christoffel symbols of the metric, $`\widehat{\gamma }_{bc}^a=\widehat{\mathrm{\Gamma }}_{bc}^a`$. Under the conformal transformation (2.2) the Lagrangian (2.4) is mapped into the Einstein frame Lagrangian for BD theory: $$L=\frac{\sqrt{-\widehat{g}}}{16\pi }(\widehat{R}-(\omega +\frac{3}{2})(\widehat{\nabla }\widehat{\varphi })^2)+e^{-2\widehat{\varphi }}L_{matter},$$ (12) while (2.5) is mapped into: $$L=\frac{\sqrt{-g}}{16\pi }(\varphi R-\frac{\omega }{\varphi }(\nabla \varphi )^2)+\varphi ^2L_{matter},$$ (13) that is the JF Lagrangian for general relativity with an extra scalar field. At the same time, under (2.2) the parallel transport law (2.10) is mapped into: $$d\xi ^a=-\gamma _{mn}^a\xi ^mdx^n,$$ (14) where $`\gamma _{bc}^a=\mathrm{\Gamma }_{bc}^a+\frac{1}{2}\varphi ^{-1}(\nabla _b\varphi \delta _c^a+\nabla _c\varphi \delta _b^a-\nabla ^a\varphi g_{bc})`$ are the affine connections of a Weyl-type manifold. These do not coincide with the Christoffel symbols of the Jordan frame metric $`𝐠`$ and, correspondingly, one can define metric and affine magnitudes and operators in a Weyl-type manifold. Weyl-type geometry is given by the parallel transport law (2.14) and by the length transport law: $$dg(\xi ,\xi )=-\varphi ^{-1}dx^n\nabla _n\varphi g(\xi ,\xi ),$$ (15) that is equivalent to (2.11) with respect to the conformal transformation (2.2).
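The statement that the Einstein frame Lagrangian with kinetic coefficient $`(\omega +\frac{3}{2})`$ maps back to the Jordan frame Brans-Dicke form can be checked with the standard four-dimensional conformal identity; the short computation below is our own check, not part of the original derivation:

```latex
% Conformal rescaling \hat g_{ab} = \varphi\, g_{ab} in four dimensions:
\sqrt{-\hat g} = \varphi^{2}\sqrt{-g},\qquad
\hat R = \varphi^{-1}\left[R - 3\,\Box\ln\varphi
       - \tfrac{3}{2}\,(\nabla\ln\varphi)^{2}\right].
% With \hat\varphi = \ln\varphi one also has
% \sqrt{-\hat g}\,(\hat\nabla\hat\varphi)^{2} = \sqrt{-g}\,\varphi^{-1}(\nabla\varphi)^{2},
% so that, discarding a total divergence,
\sqrt{-\hat g}\left[\hat R - \left(\omega+\tfrac{3}{2}\right)(\hat\nabla\hat\varphi)^{2}\right]
 = \sqrt{-g}\left[\varphi R - \frac{\omega}{\varphi}\,(\nabla\varphi)^{2}\right].
```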
All of this means that the Jordan frame formulation of general relativity should be linked with a Weyl-type geometry, with units of measure of varying length over the manifold according to the length transport law (2.15). In the Jordan frame GR, in particular, the gravitational constant $`G`$ varies in spacetime like $`\varphi ^{-1}`$ while the rest masses of material particles $`m`$ vary like $`\varphi ^{\frac{1}{2}}`$, i.e. $`Gm^2=const.`$ is preserved. It is a conformal invariant feature of general relativity. In fact $`Gm^2`$ is a dimensionless constant and hence it is unchanged under the transformation of the physical units (2.2). According to the viewpoint on the physical equivalence of the conformal formulations of a given theory of gravitation presented in , both the Jordan frame and the Einstein frame formulations of general relativity are physically equivalent. Put differently, these formulations of GR are equally consistent with the observational evidence. These are not alternative theories but alternative formulations of the same theory. This leads to two different geometrical representations of the same physical situation. In one representation (Riemann geometry) the units of measure are constant over the manifold. In the other representation (Weyl-type geometry) the units of measure are variable over the spacetime. Neither of them is preferred over the other. It is a matter of mathematical convenience or, maybe, philosophical prejudice which representation of the theory one chooses for the description of the given physics. This point will be further discussed in section VI. ## III Jordan frame general relativity This formulation of general relativity is based on the Lagrangian (2.13). It is not a complete geometrical theory. Gravitational effects are described here by a scalar field in a Weyl-type manifold, i.e. the gravitational field shows both tensor (spin-2) and scalar (spin-0) modes.
For example, in this formulation of GR the gravitational redshift effect appears only partially as a metric phenomenon, the remainder of the effect being due to a real change in the energy levels of the atoms ($`m\propto \varphi ^{\frac{1}{2}}`$). The field equations of the Jordan frame GR theory can be derived either directly from (2.13), by taking the variational derivatives of the Lagrangian with respect to the dynamical variables, or by conformally mapping equations (2.6) and (2.7) back to the JF metric according to (2.2). We obtain: $$G_{ab}=\frac{8\pi }{\varphi }T_{ab}+\frac{\omega }{\varphi ^2}(\nabla _a\varphi \nabla _b\varphi -\frac{1}{2}g_{ab}g^{nm}\nabla _n\varphi \nabla _m\varphi )+\frac{1}{\varphi }(\nabla _a\nabla _b\varphi -g_{ab}\Box \varphi ),$$ (16) and $$\Box \varphi =0,$$ (17) where $`T_{ab}=\frac{2}{\sqrt{-g}}\frac{\delta }{\delta g^{ab}}(\sqrt{-g}\varphi ^2L_{matter})`$ is the stress-energy tensor for ordinary matter in the Jordan frame. The energy is not conserved, since the scalar field $`\varphi `$ exchanges energy with the metric and with the matter fields. The corresponding dynamic equation is: $$\nabla _nT^{na}=\frac{1}{2}\varphi ^{-1}\nabla ^a\varphi T,$$ (18) The equation of motion of an uncharged, spinless mass point that is acted on by the JF metric field $`𝐠`$ and by the scalar field $`\varphi `$, $$\frac{d^2x^a}{ds^2}=-\mathrm{\Gamma }_{mn}^a\frac{dx^m}{ds}\frac{dx^n}{ds}-\frac{1}{2}\varphi ^{-1}\nabla _n\varphi (\frac{dx^n}{ds}\frac{dx^a}{ds}-g^{an}),$$ (19) does not coincide with the geodesic equation of the Jordan frame metric. Most authors consider that one of the most undesirable features of the Jordan frame formulation of BD-type theories is linked with the fact that, in this frame, the stress-energy tensor for the scalar field $`\varphi `$ ($`\frac{\varphi }{8\pi }`$ times the sum of the 2nd and 3rd terms in the right hand side (r.h.s.) of eq.(3.1)) has a non-canonical form. This leads to the scalar field kinetic energy being negative definite (or indefinite), implying that the theory may not have a stable ground state.
However, as noted in reference the terms with the second covariant derivatives of the scalar field contain the connection, and hence a part of the dynamical description of gravity. In a new connection was presented that leads to a canonical form of the scalar field stress-energy tensor in the Jordan frame. We obtain the same result in an alternative way. Equation (3.1) can be written in terms of affine magnitudes in the Weyl-type manifold. In this case the affine connections of the JF (Weyl-type) manifold $`\gamma _{bc}^a`$ do not coincide with the Christoffel symbols of the JF metric $`\mathrm{\Gamma }_{bc}^a`$ (see section II). Then we can define the ’affine’ Einstein tensor $`{}_{}{}^{\gamma }G_{ab}^{}`$ that is given in terms of the affine connections of the manifold $`\gamma _{bc}^a`$ instead of the Christoffel symbols of the Jordan frame metric $`\mathrm{\Gamma }_{bc}^a`$. Equation (3.1) can then be rewritten as: $${}_{}{}^{\gamma }G_{ab}^{}=\frac{8\pi }{\varphi }T_{ab}+\frac{(\omega +\frac{3}{2})}{\varphi ^2}(\nabla _a\varphi \nabla _b\varphi -\frac{1}{2}g_{ab}g^{nm}\nabla _n\varphi \nabla _m\varphi ),$$ (20) where now $`\frac{\varphi }{8\pi }`$ times the second term in the r.h.s. of this equation shows the canonical form for the scalar field stress-energy tensor. In this way the main physical objection against this formulation of general relativity is removed. We shall call this the ’true’ stress-energy tensor for $`\varphi `$, while $`\frac{\varphi }{8\pi }`$ times the sum of the 2nd and 3rd terms in the r.h.s. of eq.(3.1) we shall call the ’effective’ stress-energy tensor for the BD scalar field $`\varphi `$. The r.h.s. of eq.(3.1) may be negative definite, implying that some energy conditions may not be fulfilled and hence the relevant singularity theorems may not hold.
However, in the light of the comments above, this does not imply that the energy conditions for the ’true’ matter content of the theory (the usual energy conditions applied to the sum of the stress-energy tensor for ordinary matter and the ’true’ stress-energy tensor for the scalar field) do not hold. Another remarkable feature of the Jordan frame GR theory is that it is invariant in form under the following conformal transformations (these are in fact transformations of the units of length, time, and mass): $`\stackrel{~}{g}_{ab}`$ $`=`$ $`\varphi ^2g_{ab},`$ (21) $`\stackrel{~}{\varphi }`$ $`=`$ $`\varphi ^{-1},`$ (22) and $`\stackrel{~}{g}_{ab}`$ $`=`$ $`fg_{ab},`$ (23) $`\stackrel{~}{\varphi }`$ $`=`$ $`f^{-1}\varphi ,`$ (24) where $`f`$ is some smooth function given on the manifold. The invariance can be verified by direct substitution of (3.6) or (3.7) in (2.13) or (3.1-3.4). The Lagrangian (2.13) is also invariant with respect to the more general conformal transformation (first presented in ): $`\stackrel{~}{g}_{ab}`$ $`=`$ $`\varphi ^{2\alpha }g_{ab},`$ (25) $`\stackrel{~}{\varphi }`$ $`=`$ $`\varphi ^{1-2\alpha }.`$ (26) This transformation is accompanied by a redefinition of the BD coupling constant: $$\stackrel{~}{\omega }=\frac{\omega -6\alpha (\alpha -1)}{(1-2\alpha )^2},$$ (27) with $`\alpha \ne \frac{1}{2}`$. The case $`\alpha =\frac{1}{2}`$ constitutes a singularity in the transformations (3.10-3.12). We shall point out that, for instance, Brans-Dicke theory (Lagrangian (2.4)) is invariant under (3.8,3.9) only in the absence of ordinary matter or for matter with a trace-free stress-energy tensor. The Einstein frame formulation of BD-type theories is not invariant under the conformal transformations (3.8,3.9) either.
## IV Jordan frame general relativity and the cosmological singularity In this section we shall study flat FRW universes given, in the Einstein frame, by the line element (we use coordinates $`t,r,\theta ,\phi `$): $$d\widehat{s}^2=-dt^2+\widehat{a}^2(t)(dr^2+r^2d\mathrm{\Omega }^2),$$ (28) where $`\widehat{a}(t)`$ is the Einstein frame scale factor, and $`d\mathrm{\Omega }^2\equiv d\theta ^2+sin^2\theta d\phi ^2`$. The universe is supposed to be filled with a barotropic perfect fluid ($`\widehat{p}=(\gamma -1)\widehat{\mu }`$, where $`\widehat{\mu }`$ is the energy density of matter and the barotropic index $`0<\gamma <2`$). We shall consider arbitrary $`\omega >-\frac{3}{2}`$. For $`\omega <-\frac{3}{2}`$ the kinetic term of the BD scalar field $`\widehat{\varphi }`$ in the Einstein frame has a negative energy. The case with $`\omega =-\frac{3}{2}`$ has been already studied in ref.. The Einstein frame field equation (2.6) can be reduced to the following equation for determining the Einstein frame scale factor: $$(\frac{\dot{\widehat{a}}}{\widehat{a}})^2=\frac{8\pi }{3}\frac{(C_2)^2}{\widehat{a}^{3\gamma }}+\frac{1}{6}(\omega +\frac{3}{2})\frac{(C_1)^2}{\widehat{a}^6},$$ (29) where $`C_1`$ and $`C_2`$ are arbitrary integration constants. While deriving (4.2) we have considered that $`\widehat{\mu }=\frac{(C_2)^2}{\widehat{a}^{3\gamma }}`$ and that, after integrating eq.(2.7) once, $$\dot{\widehat{\varphi }}=\frac{C_1}{\widehat{a}^3}.$$ (30) The overdot means derivative with respect to the Einstein frame proper time $`t`$. If we introduce the time variable: $$dt=\frac{\widehat{a}^{3\gamma -3}}{6-3\gamma }d\eta ,$$ (31) then eq.(4.2) can be readily integrated to give: $$\widehat{a}(\eta )=A^{\frac{\alpha }{3}}(\eta ^2-\eta _0^2)^{\frac{\alpha }{3}},$$ (32) where we have defined $`A\equiv \frac{2\pi }{3}(C_2)^2`$, $`\alpha \equiv \frac{1}{2-\gamma }`$, $`\eta _0=\frac{3}{8\pi }\frac{C_1}{\beta (C_2)^2}`$, and $`\beta \equiv \frac{1}{\sqrt{\frac{2}{3}\omega +1}}`$.
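That (4.5), with the definitions just given, indeed solves (4.2) under the reparametrization (4.4) is easy to verify numerically. The sketch below uses illustrative parameter values (dust, $`\gamma =1`$, and $`\omega =0`$) that are our assumptions:

```python
import math

# Illustrative parameters (assumed): dust, omega = 0, arbitrary C1, C2.
gamma, omega = 1.0, 0.0
C1, C2 = 0.3, 1.0

alpha = 1.0 / (2.0 - gamma)
beta = 1.0 / math.sqrt(2.0 * omega / 3.0 + 1.0)
A = (2.0 * math.pi / 3.0) * C2**2
eta0 = (3.0 / (8.0 * math.pi)) * C1 / (beta * C2**2)

def a_hat(eta):
    """Einstein-frame scale factor of Eqn (4.5)."""
    return (A * (eta**2 - eta0**2))**(alpha / 3.0)

def rhs(a):
    """Right-hand side of the Friedmann equation (4.2)."""
    return (8.0 * math.pi / 3.0) * C2**2 / a**(3.0 * gamma) \
        + (omega + 1.5) * C1**2 / (6.0 * a**6)

# Left-hand side: (da/dt / a)^2, with dt = a^{3 gamma - 3}/(6 - 3 gamma) d eta.
eta, h = 5.0 * eta0, 1e-6
da_deta = (a_hat(eta + h) - a_hat(eta - h)) / (2.0 * h)
dt_deta = a_hat(eta)**(3.0 * gamma - 3.0) / (6.0 - 3.0 * gamma)
H2 = (da_deta / dt_deta / a_hat(eta))**2
```

The two sides of (4.2) agree to the accuracy of the finite difference, which also confirms the quoted expression for $`\eta _0`$ in terms of $`C_1`$, $`C_2`$ and $`\beta `$.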
The scalar field $`\widehat{\varphi }`$ can be found from eq.(4.3): $$\widehat{\varphi }^\pm (\eta )=\widehat{\varphi }_0\pm \mathrm{ln}\left[\frac{\eta -\eta _0}{\eta +\eta _0}\right]^{\frac{2}{3}\alpha \beta }.$$ (33) The Jordan frame scale factor $`a^\pm (\eta )=\widehat{a}(\eta )\mathrm{exp}[-\frac{1}{2}\widehat{\varphi }^\pm (\eta )]`$ is given by the following expression: $$a^\pm (\eta )=\frac{A^{\frac{\alpha }{3}}}{\sqrt{\varphi _0}}[(\eta -\eta _0)^{1\mp \beta }(\eta +\eta _0)^{1\pm \beta }]^{\frac{\alpha }{3}}.$$ (34) This solution shows two branches. These are given by the choice of the ’+’ and the ’-’ signs in (4.7). It is valid for any finite $`\beta >0`$ ($`-\frac{3}{2}<\omega <\infty `$). The relation between the proper time $`\tau `$ in the Jordan frame and the time variable $`\eta `$ is given by: $$(\tau -\tau _0)^\pm =\frac{\alpha A^{(\gamma -1)\alpha }}{3\sqrt{\varphi _0}}\int [(\eta -\eta _0)^{\gamma -1\mp \frac{\beta }{3}}(\eta +\eta _0)^{\gamma -1\pm \frac{\beta }{3}}]^\alpha d\eta .$$ (35) For $`\eta \gg \eta _0`$, $$(\tau -\tau _0)^\pm \simeq \frac{A^{(\gamma -1)\alpha }}{3\sqrt{\varphi _0}\gamma }\eta ^{\alpha \gamma },$$ (36) so that $`\eta \to +\infty `$ leads to $`\tau \to +\infty `$ for both the ’+’ and ’-’ branches of our solution. For $`\eta \to \eta _0`$ we have: $$(\tau -\tau _0)^\pm \simeq \frac{A^{(\gamma -1)\alpha }}{(3\mp \beta )\sqrt{\varphi _0}}[(2\eta _0)^{\gamma -1\pm \frac{\beta }{3}}(\eta -\eta _0)^{1\mp \frac{\beta }{3}}]^\alpha .$$ (37) In the ’+’ branch of the solution it is valid for any $`\beta \ne 3`$ ($`\omega \ne -\frac{4}{3}`$). For $`\beta =3`$ we obtain in this limit (in the ’+’ branch of the solution): $$\tau -\tau _0\simeq \frac{\alpha A^{(\gamma -1)\alpha }}{3\sqrt{\varphi _0}}(2\eta _0)^{\gamma \alpha }\mathrm{ln}(\eta -\eta _0).$$ (38) Summing up. For the ’-’ branch we get that, when $`\eta \to \eta _0`$ in the Einstein frame, in its conformal Jordan frame $`\tau \to \tau _0`$.
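The qualitative difference between the two branches of (4.7) can also be seen numerically. In the sketch below (illustrative, assumed values $`\gamma =1`$ and $`\omega =-1.4`$, so that $`\beta 3.87>3`$) the ’+’ branch attains a positive minimum at $`\eta =\beta \eta _0`$ while blowing up at both ends of the scan:

```python
import math

# Illustrative (assumed) parameters in the range -3/2 < omega <= -4/3.
gamma, omega = 1.0, -1.4
C1, C2, phi0 = 0.3, 1.0, 1.0

alpha = 1.0 / (2.0 - gamma)
beta = 1.0 / math.sqrt(2.0 * omega / 3.0 + 1.0)   # ~3.87, i.e. beta > 3
A = (2.0 * math.pi / 3.0) * C2**2
eta0 = (3.0 / (8.0 * math.pi)) * C1 / (beta * C2**2)

def a_plus(eta):
    """'+' branch of the Jordan-frame scale factor, Eqn (4.7)."""
    return (A**(alpha / 3.0) / math.sqrt(phi0)) * \
        ((eta - eta0)**(1.0 - beta) * (eta + eta0)**(1.0 + beta))**(alpha / 3.0)

# Scan eta just above eta0: a_plus diverges at both ends of the scan and
# dips to a positive minimum in between, at eta = beta * eta0 (where the
# logarithmic derivative of the bracket in (4.7) vanishes).
etas = [eta0 * (1.001 + 0.01 * k) for k in range(2000)]
values = [a_plus(e) for e in etas]
i_min = min(range(len(values)), key=values.__getitem__)
eta_star = etas[i_min]
```

The scale factor never vanishes on the scanned interval, which is the numerical face of the singularity-free behaviour discussed next.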
Then in this branch of the solution the evolution of the flat, perfect-fluid-filled, FRW universe (in the Jordan frame) is basically the same as in the Einstein frame. It evolves from a global (cosmological) singularity at the beginning of time $`\tau _0`$ ($`a^{-}(\eta _0)=0`$) into an infinite size universe at the infinite future $`\tau =+\infty `$ ($`a^{-}(+\infty )=\infty `$). Otherwise, when we work with the ’+’ branch of the solution, in the range $`3\le \beta <\infty `$ ($`-\frac{3}{2}<\omega \le -\frac{4}{3}`$), $`\eta \to \eta _0`$ means $`\tau \to -\infty `$. In this case the Jordan frame flat FRW universe evolves from an infinite size at the infinite past ($`a^{+}(-\infty )=\infty `$) into an infinite size at the infinite future ($`a^{+}(+\infty )=\infty `$), through a minimum size $`a^{*}=\frac{A^{\frac{\alpha }{3}}}{\sqrt{\varphi _0}}\left[\frac{(\beta +1)^{\beta +1}}{(\beta -1)^{\beta -1}}\right]^{\frac{\alpha }{3}}\eta _0^{\frac{2}{3}\alpha }`$ at some intermediate time $`\eta ^{*}=\beta \eta _0`$. In this range there is no curvature singularity, neither in the past nor in the future. In reference the same behaviour was found in the case $`\omega =-\frac{3}{2}`$. This way, when we study general relativity with an extra scalar field (Lagrangians (2.5) for the Einstein frame formulation and (2.13) for the Jordan frame one), we find that there is a branch of the flat, perfect-fluid, FRW solution to the field equations of the theory where, in the range $`-\frac{3}{2}\le \omega \le -\frac{4}{3}`$, $`0<\gamma <2`$ of the free parameters $`\omega `$ and $`\gamma `$, the Jordan frame universe is free of the cosmological singularity. Unlike this, in the Einstein frame representation the cosmological singularity is always present. We should compare this result with that obtained in ref. for BD theory given by the JF Lagrangian (2.4). In this case the cosmological singularity is avoided only in some regions (regions IV and VII in fig.5 of ref.)
in the region $`-\frac{3}{2}\le \omega \le -\frac{4}{3}`$, $`0<\gamma <2`$ of the parameter space. Our result serves as an illustration of the notion of geometrical duality developed in (see section III), and should be interpreted (in the light of the postulate of the physical equivalence of conformal representations of a given classical theory of gravity) in the following way. Both the Einstein frame picture with the cosmological singularity and the Jordan frame one without it (in the given region of the parameter space) are physically equivalent and equally consistent with the observational evidence. In the EF picture (Lagrangian (2.5)), linked with Riemann geometry, test particles' rest masses as well as the gravitational constant $`G`$ are real constants over the spacetime. Meanwhile, in its dual Jordan frame picture, due to the Lagrangian (2.13) and linked with Weyl-type geometry, these magnitudes are not constant anymore. This is directly related to the fact that, in Weyl-type geometry, units of measure vary over the manifold. Although we can work (in principle) in any one of the conformal frames (JF, EF or other conformal frames), when we approach the cosmological singularity occurring in the Einstein frame formulation of general relativity, the only way we are able to describe the physics there (without renouncing the known physical laws) is by 'jumping' to the Jordan frame representation, where the singularity is removed (in the '+' branch of the solution for the given region in the parameter space). The following paradox is remarkable and should be discussed. If we take the Einstein frame GR for the description of the physics, the following situation takes place. The past-directed geodesics of all of the fluid particles converge at a point of infinite density where the known physical laws break down and the very existence of these particles abruptly ends. 
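As a quick numerical cross-check of the '+' branch behaviour described in section IV, the sketch below locates the minimum of the Jordan frame scale factor $`a^{+}(\eta )\propto [(\eta -\eta _0)^{1-\beta }(\eta +\eta _0)^{1+\beta }]^{\alpha /3}`$ by a grid search and recovers $`\eta ^{*}=\beta \eta _0`$. The parameter values $`\beta =4`$, $`\eta _0=1`$, $`\alpha =0.7`$ are purely illustrative assumptions, not values fixed by the paper:

```python
import numpy as np

# '+' branch scale factor (up to a constant prefactor, which does not
# move the minimum); beta, eta0, alpha are illustrative assumptions.
beta, eta0, alpha = 4.0, 1.0, 0.7

def log_a_plus(eta):
    # Work with the logarithm to avoid overflow near eta -> eta0.
    return (alpha / 3.0) * ((1.0 - beta) * np.log(eta - eta0)
                            + (1.0 + beta) * np.log(eta + eta0))

eta = np.linspace(eta0 + 1e-6, 20.0 * eta0, 400_001)
eta_star = eta[np.argmin(log_a_plus(eta))]
print(eta_star)  # ≈ beta * eta0 = 4.0
```

Analytically, setting the logarithmic derivative $`(1-\beta )/(\eta -\eta _0)+(1+\beta )/(\eta +\eta _0)`$ to zero gives $`2\eta -2\beta \eta _0=0`$, so the grid result is exactly what one expects.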
When we choose the Jordan frame formulation of general relativity for the description of the given physical situation, we find a very different picture. The past-directed world-lines of the free-falling fluid particles (these do not coincide with the geodesics of the JF metric) converge up to a time in the past where the cross-section area of the world-tube is a minimum, and then they diverge for ever. Even if the interpretation of the physical reality may not be unique, the physical reality itself should be unique (this is the basic postulate in physics). The paradoxical fact is that the fluid particles are part of this reality (they are unique) and it is difficult to imagine that in one picture they have a finite life-time (into the past) while in the other they are eternal. This question is even more difficult since experiment cannot help us in the search for an answer. This point will be further discussed in section VI. ## V The low-energy limit of string theory Finally we shall outline some implications of the present viewpoint for the low-energy limit of string theory. It is rooted in the belief that at Planck energy scales gravity is not driven by Einstein's general relativity, but by some of its scalar-tensor modifications. In particular, the low-energy theory of the fundamental string contains the BD-type theory given by the basic Lagrangian (2.1) with $`\omega =-1`$. Actually, for pure dilaton gravity we have: $$L=\frac{\sqrt{-g}}{16\pi }e^{-2\phi }(R+4(\mathrm{\nabla }\phi )^2),$$ (39) where $`R`$ is the Ricci scalar of the four-dimensional external spacetime and $`\phi `$ is the dilaton field. Under the field redefinition $`\varphi =e^{-2\phi }`$, the Lagrangian (5.1) can be transformed into (2.1) with $`\omega =-1`$. 
We should remember, however, that the theory given by (2.1) can be linked either with Riemann geometry or with Weyl-type geometry indistinctly (see section II), so, in this case, BD theory and general relativity with an extra scalar field are indistinguishable. This degeneracy vanishes when matter fields are present. In this case dilaton gravity is given by : $$L=\frac{\sqrt{-g}}{16\pi }e^{-2\phi }(R+4(\mathrm{\nabla }\phi )^2)+e^{2(a-1)\phi }L_{matter}.$$ (40) This Lagrangian cannot be reduced to the corresponding Lagrangian (2.4) for Brans-Dicke theory with $`\omega =-1`$ by the redefinition $`\varphi =e^{-2\phi }`$, because of the non-minimal coupling between the matter Lagrangian $`L_{matter}`$ and the dilaton field $`\phi `$ in (5.2). Only for $`a=1`$ is there no coupling between $`L_{matter}`$ and the dilaton, and only then can the Lagrangian (5.2) be transformed into the corresponding BD Lagrangian (2.4) . However, other values of $`a`$ ($`a\ne 1`$) are also available and should be taken into account. In particular, when $`a=-1`$, eq.(5.2) can be transformed into (2.13) with $`\omega =-1`$, that is, the JF Lagrangian for GR theory with an extra scalar field, given in the EF by: $$L=\frac{\sqrt{-\widehat{g}}}{16\pi }(\widehat{R}-\frac{1}{2}(\widehat{\mathrm{\nabla }}\widehat{\varphi })^2)+L_{matter}.$$ (41) When solitonic degrees of freedom such as p-branes are taken into account, the effective Lagrangian can be written as the BD Lagrangian (2.4) with $`\omega `$ given by : $$\omega =-\frac{(D-1)(d-2)-d^2}{(D-2)(d-2)-d^2},$$ (42) where $`d=p+1`$. In four dimensions ($`D=4`$), $`\omega =-\frac{4}{3}`$ for the 0-brane and $`\omega =-\frac{3}{2}`$ for the instanton ($`p=-1`$). 
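Reading eq. (5.4) with the minus signs restored as $`\omega =-\frac{(D-1)(d-2)-d^2}{(D-2)(d-2)-d^2}`$ (our reconstruction of the garbled formula), the quoted values can be checked mechanically in exact rational arithmetic:

```python
from fractions import Fraction

def omega(D, d):
    # Effective Brans-Dicke parameter for a p-brane with worldvolume
    # dimension d = p + 1 in D spacetime dimensions (reconstructed eq. 5.4).
    D, d = Fraction(D), Fraction(d)
    return -((D - 1) * (d - 2) - d * d) / ((D - 2) * (d - 2) - d * d)

# D = 4: instanton (p = -1), 0-brane (p = 0), fundamental string (p = 1)
print(omega(4, 0), omega(4, 1), omega(4, 2))  # -3/2  -4/3  -1
```

The $`d=2`$ value $`\omega =-1`$ reproduces the fundamental-string case of the dilaton gravity Lagrangian (5.1), which is a useful internal consistency check of the restored signs.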
If one assumes that, in the regime of Planck length curvature, gravity is described by general relativity with an extra scalar field, given in the Jordan frame by the Lagrangian (2.13) with $`\omega `$ given by (5.4) ($`D=4`$), and treats the gas of solitonic p-branes as a perfect fluid with a barotropic equation of state , then one can conclude that the cosmological singularity can be avoided in some cases (in particular for the 0-brane and for the instanton), while for the fundamental string (1-brane) the cosmological singularity is unavoidable, because the value $`\omega =-1`$ falls outside the range $`-\frac{3}{2}\le \omega \le -\frac{4}{3}`$ (see section IV). This result should be interpreted in the light of recent developments of string theory suggesting that, in the high curvature regime, the solitonic p-branes will be copiously produced, since they become light and dominate the universe in that regime . Then our result is an indication that, in such an extreme regime, the cosmological singularity may be removed by solitonic degrees of freedom such as the 0-brane and the instanton. ## VI Is the cosmological singularity really avoidable? The result that in the Jordan frame formulation of general relativity the cosmological singularity vanishes (in the 'plus' branch of the solution) was expected, since $`R_{mn}k^mk^n`$ is negative definite in this frame for any non-spacelike vector $`𝐤`$. This means that the relevant singularity theorems may not hold. This is in contradiction with the Einstein frame formulation, where $`\widehat{R}_{mn}k^mk^n`$ is non-negative and a space-like singularity at the beginning of time $`t=0`$ ($`\eta =\eta _0`$) occurs. 
The very striking fact is that both geometrical representations, with and without the cosmological singularity, are observationally indistinguishable since, on the one hand, they are equivalent with respect to the conformal transformation of the metric (2.2) and, on the other, physical experiment is insensitive to this particular transformation of the physical units. A more careful analysis of the physical equivalence between a picture with a singularity and a picture without one shows that this is a very paradoxical situation (see section IV). In fact, we can, in principle, link a physical observer with each one of the fluid particles. Suppose that one part of these observers (A observers) take the Einstein frame formulation of general relativity for the description of the evolution of the universe, while the other part (B observers) take the Jordan frame GR for modeling the universe. The free-fall world-lines of the A observers (their geodesic lines) meet together a finite proper time into the past. At this point the known physical laws break down and the very existence of the A observers abruptly ends, i.e. their life-time is finite into the past. The B observers, on the contrary, are eternal (both into the past and into the future) and the physical laws they know (these are the same for both sets of observers) hold for all times. This is a very profound paradox and we do not claim to solve it; we can only conjecture on this subject. Two explanations of this paradoxical situation are possible. The first one is based on the fact that Einstein's theory is a classical theory of spacetime, and near the cosmological singularity we need a quantum theory (such a theory has not been well established at present). When a viable quantum gravity theory is worked out, it may be that this singularity is removed. 
Our result, when applied to the low-energy limit of string theory (section V), is an indication in favour of this idea, since string theory seems to be the best candidate for a final quantum theory of gravity. In the Jordan frame no singularity occurs (in the '+' branch of the solution for the given region $`-\frac{3}{2}\le \omega \le -\frac{4}{3}`$, $`0<\gamma <2`$ in the parameter space) and, consequently, we do not need any quantum theory for describing gravitation (this is true provided the minimum value $`a^{*}`$ of the Jordan frame scale factor is much greater than the Planck length). This explanation is in agreement with a point of view developed in reference . According to this viewpoint, to bring the quantum effects into the classical gravity theory one needs only to make a conformal transformation. If we start with Einstein's classical theory of gravitation, then we can incorporate the quantum effects of matter by simply making a conformal transformation into, say, the Jordan frame. In this sense the Jordan frame formulation of general relativity already contains the quantum effects (Jordan frame GR represents a unified description of both gravity and the quantum effects of matter). The second possibility is more radical. The Einstein frame formulation is not invariant under the particular transformations of the units of time, length and mass studied in section III. This is very striking, since the physical laws should be invariant under transformations of units. Unlike this, the Jordan frame formulation of general relativity is invariant with respect to these transformations. This means that the picture without the cosmological singularity is more viable than the one with it. Consequently, the cosmological singularity is a fictitious entity due to a wrong choice of the formulation of the theory. In the last instance it may be that both possibilities are connected. 
We hope that, when a viable Einstein frame quantum theory of gravity is worked out, it will be invariant under the transformations of the physical units. All of this is, of course, a matter of pure conjecture, and we hope other possibilities will be worked out. The fatal point is that this will remain a subject of conjecture, since experiment cannot help us in solving the present paradox. ACKNOWLEDGEMENT We thank the unknown referee for recommendations and the MES of Cuba for financial support.
no-problem/9908/math9908081.html
ar5iv
text
# Universal Metric Spaces and Extension Dimension ## Abstract. For any countable $`CW`$-complex $`K`$ and a cardinal number $`\tau \ge \omega `$ we construct a completely metrizable space $`X(K,\tau )`$ of weight $`\tau `$ with the following properties: $`\mathrm{e}\mathrm{dim}X(K,\tau )\le K`$, $`X(K,\tau )`$ is an absolute extensor for all normal spaces $`Y`$ with $`\mathrm{e}\mathrm{dim}Y\le K`$, and for any completely metrizable space $`Z`$ of weight $`\le \tau `$ and $`\mathrm{e}\mathrm{dim}Z\le K`$ the set of closed embeddings $`Z\to X(K,\tau )`$ is dense in the space $`C(Z,X(K,\tau ))`$ of all continuous maps from $`Z`$ into $`X(K,\tau )`$ endowed with the limitation topology. This result is applied to prove the existence of universal spaces for all metrizable spaces of given weight and with a given cohomological dimension. ###### Key words and phrases: Extension dimension, strongly universal space, $`K`$-soft maps The existence of universal separable metric spaces for extension dimension with respect to countable $`CW`$-complexes was proved by Olszewski in . In the class of all metric spaces of a given weight this problem was recently solved by Levin . In the present note we show the existence of universal metric spaces having some extra properties (see Theorem 1 below). The concept of extension dimension was introduced by Dranishnikov (see also , ). For a normal space $`X`$ and a $`CW`$-complex $`K`$ we write $`\mathrm{e}\mathrm{dim}X\le K`$ (the extension dimension of $`X`$ does not exceed $`K`$) if $`K`$ is an absolute extensor for $`X`$. This means that any continuous map $`f:A\to K`$, defined on a closed subset $`A`$ of $`X`$, admits a continuous extension $`\overline{f}:X\to K`$. 
Since not every $`CW`$-complex is an absolute neighborhood extensor for normal spaces, we can enlarge the class of normal spaces $`X`$ with $`\mathrm{e}\mathrm{dim}X\le K`$ ($`K`$ is a $`CW`$-complex) by introducing the following notion (see \[14, Definition 2.5\]): A normal space $`X`$ is in the class $`\alpha (K)`$ if every continuous map from a closed $`A\subset X`$ to $`K`$ which extends to a map of a neighborhood of $`A`$ to $`K`$ can be extended to a map of $`X`$ to $`K`$. Obviously, if $`K\in ANE(X)`$ (this, for example, holds for every $`X`$ admitting a perfect map onto a first countable paracompact space ), then $`X\in \alpha (K)`$ if and only if $`\mathrm{e}\mathrm{dim}X\le K`$. We also adopt the following definition: a continuous map $`f:X\to Y`$ is called $`K`$-soft (resp., $`K`$-soft with respect to metrizable spaces) if for any normal (resp., metrizable) space $`Z`$ with $`Z\in \alpha (K)`$, any closed $`Z_0\subset Z`$, and any two maps $`g:Z_0\to X`$, $`h:Z\to Y`$ with $`fg=h|Z_0`$, there exists a map $`k:Z\to X`$ such that $`fk=h`$ and $`k|Z_0=g`$. For any $`CW`$-complex $`K`$ and any cardinal number $`\tau \ge \omega `$ let $`(K,\tau )`$ be the class of all completely metrizable spaces $`X`$ of weight $`\le \tau `$ with $`\mathrm{e}\mathrm{dim}X\le K`$. The following theorem is our main result: ###### Theorem 1. For any countable $`CW`$-complex $`K`$ and a cardinal number $`\tau \ge \omega `$ there exists a completely metrizable space $`X(K,\tau )`$ and a $`K`$-soft map $`f(K,\tau ):X(K,\tau )\to l_2(\tau )`$ satisfying the following properties: * $`X(K,\tau )\in (K,\tau )`$. * $`X(K,\tau )`$ is an absolute extensor for all normal spaces $`Y`$ with $`Y\in \alpha (K)`$. * $`f(K,\tau )`$ is strongly $`(K,\tau )`$-universal, i.e. for any open cover $`𝒰`$ of $`X(K,\tau )`$, any (complete) metric space $`Z`$ of weight $`\le \tau `$ with $`\mathrm{e}\mathrm{dim}Z\le K`$ and any map $`g:Z\to X(K,\tau )`$ there exists a (closed) embedding $`h:Z\to X(K,\tau )`$ $`𝒰`$-close to $`g`$ with $`f(K,\tau )g=f(K,\tau )h`$. 
For the case $`K=S^n`$, $`n\in \mathbb{N}`$, Theorem 1 was proved in \[4, Theorem 2.7\]. Our proof of Theorem 1 is based on the next few lemmas and the techniques developed in and . ###### Lemma 2. For any countable $`CW`$-complex $`K`$ and any separable (completely) metrizable space $`X`$ there exists a separable (completely) metrizable space $`Y_X`$ with $`\mathrm{e}\mathrm{dim}Y_X\le K`$ and a $`K`$-soft map $`f:Y_X\to X`$. ###### Proof. Let $`P`$ be a Polish $`ANR`$ homotopically equivalent to $`K`$, and let $`\phi :K\to P`$ and $`\psi :P\to K`$ be two maps such that $`\phi \psi `$ is homotopic to $`id_P`$ and $`\psi \phi `$ is homotopic to $`id_K`$. For extension dimension with respect to $`P`$ this lemma was proved in \[2, Proposition 5.9\]. So, for a given (complete) separable metric space $`X`$ there is a (complete) separable metric space $`Y_X`$ with $`\mathrm{e}\mathrm{dim}Y_X\le P`$ and a $`P`$-soft map $`f:Y_X\to X`$. According to the next claim, $`f`$ is $`K`$-soft. Claim. If $`Z\in \alpha (K)`$ is normal, then $`Z\in \alpha (P)`$. Suppose $`Z\in \alpha (K)`$ is a normal space. Since every Polish $`ANR`$ is an $`ANE`$ for normal spaces, $`Z\in \alpha (P)`$ is equivalent to $`P\in \AE (Z)`$. Take a map $`g:A\to P`$, where $`A\subset Z`$ is closed, and consider the map $`\psi g:A\to K`$. Because $`g`$ can be extended to a map from a neighborhood $`U`$ of $`A`$ into $`P`$, $`\psi g`$ can be extended to a map from $`U`$ to $`K`$. Since $`Z\in \alpha (K)`$, there is an extension $`h:Z\to K`$ of $`\psi g`$. Then the restriction $`(\phi h)|A`$ is homotopic to $`g`$. Finally, using that the Homotopy Extension Theorem holds for normal spaces and Polish $`ANR`$'s, we conclude that $`g`$ is extendable to a map from $`Z`$ into $`P`$. Hence $`Z\in \alpha (P)`$. It remains only to show that $`\mathrm{e}\mathrm{dim}Y_X\le K`$. And this follows from $`\mathrm{e}\mathrm{dim}Y_X\le P`$ and the fact that the Homotopy Extension Theorem holds for metric spaces and $`CW`$-complexes . ∎ ###### Lemma 3. 
Let $`f:X\to Y`$ be a uniformly 0-dimensional map of metrizable spaces $`X`$ and $`Y`$. Then $`\mathrm{e}\mathrm{dim}X\le \mathrm{e}\mathrm{dim}Y`$. Recall that a map $`f:X\to Y`$, where $`X`$ and $`Y`$ are metrizable spaces, is called uniformly 0-dimensional if there exists a compatible metric on $`X`$ such that for every $`\epsilon >0`$ and every $`y\in f(X)`$ there is an open neighborhood $`U`$ of $`y`$ such that $`f^{-1}(U)`$ can be represented as the union of disjoint open sets of $`diam<\epsilon `$. It is well known that every metric space admits a uniformly 0-dimensional map into the Hilbert cube $`Q`$. ###### Lemma 4. For any countable $`CW`$-complex $`K`$ and a (completely) metrizable space $`Y`$ of weight $`\tau `$ there exist a (completely) metrizable space $`X`$ of weight $`\tau `$ and a $`K`$-soft map $`f:X\to Y`$ such that $`\mathrm{e}\mathrm{dim}X\le K`$. ###### Proof. It suffices to prove this lemma when $`Y`$ is the space $`l_2(\tau )`$. Fix a compatible metric $`d_1`$ on $`l_2(\tau )`$ and a uniformly 0-dimensional surjection (with respect to $`d_1`$) $`g:l_2(\tau )\to A`$ with $`A`$ a separable metric space. By Lemma 2, there exists a separable metric space $`Z`$ with $`\mathrm{e}\mathrm{dim}Z\le K`$ and a $`K`$-soft map $`h:Z\to A`$. Let $`X`$ be the fibered product of $`l_2(\tau )`$ and $`Z`$ with respect to $`g`$ and $`h`$, and let $`f:X\to l_2(\tau )`$ and $`p:X\to Z`$ denote the corresponding projections of this fibered product. If $`d_2`$ is any metric on $`Z`$, then $`p`$ is uniformly 0-dimensional with respect to the metric $`(d_1^2+d_2^2)^{1/2}`$ (see ). Hence, by Lemma 3, $`\mathrm{e}\mathrm{dim}X\le K`$. The $`K`$-softness of $`h`$ implies that $`f`$ is also $`K`$-soft. It remains only to show that $`X`$ is completely metrizable. To this end let $`B_X`$ be the space obtained from $`\beta X`$ by making the points of $`\beta X\setminus X`$ isolated. According to \[13, Lemma 2\], $`B_X`$ is paracompact, and obviously, $`B_X`$ is first countable. Claim. 
$`\mathrm{e}\mathrm{dim}B_X\le K`$. Let $`s:F\to K`$ be an arbitrary map, where $`F\subset B_X`$ is closed. Since $`\mathrm{e}\mathrm{dim}X\le K`$, there exists an extension $`s_1:F\cup X\to K`$ of $`s`$. Now we need the following result \[8, Theorem 11.2\]: any contractible $`CW`$-complex is an absolute extensor for all spaces admitting a perfect map onto a first countable paracompact space. This statement implies that $`\mathrm{Cone}(K)`$ is an absolute extensor for $`B_X`$. Therefore, there exists an extension $`s_2:B_X\to \mathrm{Cone}(K)`$ of $`s_1`$. Let $`H=s_2^{-1}(\mathrm{Cone}(K)\setminus \{b\})`$, where $`b\in \mathrm{Cone}(K)\setminus K`$. Fix a retraction $`r:\mathrm{Cone}(K)\setminus \{b\}\to K`$. Since $`H`$ is clopen in $`B_X`$, it follows that $`rs_2`$ can be extended to a map $`s_3:B_X\to K`$. Then $`s_3`$ is an extension of $`s`$ and, consequently, $`\mathrm{e}\mathrm{dim}B_X\le K`$. Now let us go back to the proof of the completeness of $`X`$. Considering $`X`$ as a closed subset of $`B_X`$ and using the fact that $`l_2(\tau )`$ is an absolute extensor for paracompact spaces, we can find a map $`q:B_X\to l_2(\tau )`$ such that $`q|X=f`$. Then, since $`f`$ is $`K`$-soft and $`\mathrm{e}\mathrm{dim}B_X\le K`$, there exists a retraction from $`B_X`$ onto $`X`$. Finally, applying the argument from \[13, the proof of Lemma 2\], we conclude that $`X`$ is complete. ∎ Proof of Theorem 1. We will construct an inverse sequence $`S=\{X_n,p_n^{n+1},n\in \mathbb{N}\}`$ such that: 1. $`X_1=l_2(\tau )`$ and $`X_n\in (K,\tau )`$ for each $`n\ge 1`$; 2. each $`p_n^{n+1}:X_{n+1}\to X_n`$ is a $`K`$-soft map such that for any completely metrizable space $`Z`$ of weight $`\le \tau `$ with $`\mathrm{e}\mathrm{dim}Z\le K`$ and any map $`g:Z\to X_n`$ there exists a closed embedding $`h:Z\to X_{n+1}`$ with $`p_n^{n+1}h=g`$. 
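The fibered product used in the proof of Lemma 4 has a transparent set-level analogue. The toy sketch below (finite sets only, purely illustrative; it ignores all topology) builds $`X=\{(y,z):g(y)=h(z)\}`$ with its two projections and checks that the resulting square commutes:

```python
def fibered_product(Y, Z, g, h):
    """Set-level fibered product of g: Y -> A and h: Z -> A.

    Returns X = {(y, z) : g(y) = h(z)} together with the projections
    f(y, z) = y and p(y, z) = z, so that g∘f = h∘p on X.
    """
    X = [(y, z) for y in Y for z in Z if g(y) == h(z)]
    return X, (lambda pair: pair[0]), (lambda pair: pair[1])

# Toy example with common target A = {0, 1}.
Y, Z = [0, 1, 2, 3], ['a', 'b', 'c']
g = lambda y: y % 2
h = lambda z: 0 if z == 'a' else 1
X, f, p = fibered_product(Y, Z, g, h)
assert all(g(f(pair)) == h(p(pair)) for pair in X)  # the square commutes
print(len(X))  # 6: even y pair with 'a', odd y with 'b' or 'c'
```

In the proof the same shape appears with $`g:l_2(\tau )\to A`$ and $`h:Z\to A`$; the point there is that good properties of $`h`$ ($`K`$-softness) and of $`g`$ (uniform 0-dimensionality) are inherited by the opposite projections of the square.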
If $`X_i`$ and $`p_{i-1}^i`$ have already been constructed for $`i=1,2,\mathrm{},n`$, then, by Lemma 4, there exist a completely metrizable space $`X_{n+1}`$ of weight $`\tau `$ and a $`K`$-soft map $`h_{n+1}:X_{n+1}\to X_n\times l_2(\tau )`$ such that $`\mathrm{e}\mathrm{dim}X_{n+1}\le K`$. Let $`p_n^{n+1}=\pi _nh_{n+1}`$, where $`\pi _n:X_n\times l_2(\tau )\to X_n`$ is the natural projection. Denote by $`X(K,\tau )`$ the limit space of $`S`$ and by $`f(K,\tau )`$ the limit projection $`p_1:X(K,\tau )\to X_1`$. Obviously, $`X(K,\tau )`$ is a completely metrizable space of weight $`\tau `$ and $`f(K,\tau )`$ is $`K`$-soft. Following the proof of Lemma 2.6 from one can show that $`f(K,\tau )`$ is strongly $`(K,\tau )`$-universal. Finally, by the limit theorem of Rubin-Schapiro , $`\mathrm{e}\mathrm{dim}X(K,\tau )\le K`$. Observe that $`X(K,\tau )`$ is an absolute extensor for all normal spaces $`Y`$ with $`\mathrm{e}\mathrm{dim}Y\le K`$ because $`l_2(\tau )`$ is an absolute extensor for normal spaces and $`f(K,\tau )`$ is $`K`$-soft. We can apply Theorem 1 to obtain universal spaces for all metrizable spaces with a given cohomological dimension and a given weight. Recall that, for any abelian group $`G`$ and a natural number $`n`$, the cohomological dimension $`\mathrm{dim}_GX`$ of $`X`$ with a coefficient group $`G`$ is $`\le n`$ iff $`\mathrm{e}\mathrm{dim}X\le K(G,n)`$, where $`X`$ is a normal space and $`K(G,n)`$ is the Eilenberg-MacLane complex. Let us agree on the following notation: a map $`f`$ is called $`(G,n)`$-soft iff it is $`K(G,n)`$-soft, and $`f`$ is strongly $`(G,n,\tau )`$-universal iff $`f`$ is strongly $`(K(G,n),\tau )`$-universal. ###### Corollary 5. Let $`G`$ be a countable (resp., torsion) abelian group. Then for every $`n\in \mathbb{N}`$ and $`\tau \ge \omega `$ there exists a completely metrizable space $`X_\tau (G,n)`$ of weight $`\tau `$ and a map $`f_\tau (G,n):X_\tau (G,n)\to l_2(\tau )`$ such that: * $`\mathrm{dim}_GX_\tau (G,n)=n`$. 
* $`X_\tau (G,n)`$ is an absolute extensor for all normal (resp., metrizable) spaces $`Y`$ with $`\mathrm{dim}_GY\le n`$. * $`f_\tau (G,n)`$ is strongly $`(G,n,\tau )`$-universal and $`(G,n)`$-soft (resp., $`(G,n)`$-soft with respect to metrizable spaces). ###### Proof. If $`G`$ is countable, the proof follows directly from Theorem 1 with $`K=K(G,n)`$. If $`G`$ is torsion, by \[7, Theorem B(a)\], there exists a countable family $`\sigma (G)`$ of countable groups such that $`\mathrm{dim}_GY=\mathrm{max}\{\mathrm{dim}_HY:H\in \sigma (G)\}`$ for any metrizable space $`Y`$. Then, according to \[9, Lemma 2.4\], for each $`n\in \mathbb{N}`$ there is a countable complex $`K_n`$ such that $`\mathrm{dim}_GY\le n`$ if and only if $`\mathrm{e}\mathrm{dim}Y\le K_n`$, where $`Y`$ is any metrizable space. Finally, apply Theorem 1 to $`K_n`$. ∎ Similarly, using Theorem 1 and \[7, Theorem B(a),(b) and (f)\], we can obtain the following ###### Corollary 6. Let $`G`$ be an arbitrary abelian group. Then for every $`n\in \mathbb{N}`$ and $`\tau \ge \omega `$ there exists a completely metrizable space $`Y_\tau (G,n)`$ of weight $`\tau `$ and a map $`g_\tau (G,n):Y_\tau (G,n)\to l_2(\tau )`$ such that: * $`\mathrm{dim}_GY_\tau (G,n)\le n+1`$. * $`Y_\tau (G,n)`$ is an absolute extensor for all metrizable spaces $`Z`$ with $`\mathrm{dim}_GZ\le n`$. * $`g_\tau (G,n)`$ is strongly $`(G,n,\tau )`$-universal and $`(G,n)`$-soft with respect to metrizable spaces. For $`\tau =\omega `$ weaker versions of Corollary 5 and Corollary 6 were proved in .
no-problem/9908/nucl-th9908085.html
ar5iv
text
# Comment on ”Evidence for the Existence of Supersymmetry in Atomic Nuclei” ## Acknowledgement This work has been partially supported by Polish KBN Grant 2 P03B 076 13 and the Max Planck Institut für Astrophysik (Garching).
no-problem/9908/hep-ph9908279.html
ar5iv
text
# The $`x`$-dependence of Parton Distributions Compared with Neutrino Data ## Abstract We use the variational principle of Quantum HadronDynamics, an alternative formulation of Quantum ChromoDynamics, to determine the wavefunction of valence quarks in a baryon at a low value of $`Q^2`$. This can be used to predict the structure function $`xF_3(x,Q^2)`$ at higher values of $`Q^2`$ using the evolution equations of perturbative QCD. This prediction is compared to the measurements of neutrino scattering cross-section by the CDHS and CCFR experiments. The agreement is quite good, confirming the validity of QHD as a way of studying hadronic structure. Keywords: Structure Functions; Parton Model; Deep Inelastic Scattering; Neutrino Scattering; QCD; Skyrme model; Quantum HadronDynamics. PACS : 12.39Ki,13.60.-r, 12.39Dc,12.38Aw. One of us has proposed that it is possible to describe strong interactions directly in terms of hadrons rather than in terms of quarks and gluons; this new approach was called Quantum HadronDynamics (QHD). It is equivalent to Quantum ChromoDynamics (QCD, the universally accepted theory of strong interactions), except that the semi-classical approximation of QHD is a good approximation to nature: it is related to the large $`N_c`$ limit of QCD. Semi-classical methods applied directly to QCD give results that are too inaccurate except in the limit of short distances. QHD has been constructed so far only in two space-time dimensions. 
However, that is sufficient to obtain the structure functions of Deep Inelastic Scattering: in the limit of zero transverse momentum for the constituents, we can dimensionally reduce the theory to two space-time dimensions. More precisely, the dependence of the structure functions on the Bjorken variable is determined by ignoring the transverse dimensions. Thus we can predict the non-perturbative $`x_B`$ dependence of structure functions, something that is completely inaccessible to the standard perturbative formulation of QCD. The leading order effects of transverse momenta are described by the DGLAP equations that determine the $`Q^2`$ dependence of the structure functions: this is the fundamental insight of the paper of Altarelli and Parisi. In this paper we will focus on understanding the distribution of valence quarks in the nucleon. This is measured directly in neutrino (and anti-neutrino) scattering against the nucleon: the isospin averaged valence parton distribution function is just the structure function $`F_3(x_B)`$ measured in this process . To leading order in $`\frac{1}{\mathrm{log}(Q^2)}`$, it is not necessary to know the gluon structure functions of the hadron in the DGLAP evolution of this quantity. The anti-quark content of the baryon is zero at the initial value $`Q^2=Q_0^2`$ within our approximations. 
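Since only the non-singlet combination matters at leading order, the $`Q^2`$ evolution referred to above can be sketched with a few lines of numerics. The code below is an illustrative toy implementation (not the authors' code): it evolves a made-up valence-like input $`q(x)\propto \sqrt{x}(1-x)^3`$ with the leading-order splitting function $`P_{qq}`$ at a fixed coupling, using the standard subtracted form of the plus prescription, and it checks the two qualitative features relevant here: the valence number $`\int _0^1q\,dx`$ is conserved, while the momentum fraction $`\int _0^1xq\,dx`$ decreases as $`Q^2`$ grows.

```python
import numpy as np

CF, ALPHA_S = 4.0 / 3.0, 0.2   # fixed coupling: a toy simplification

def trap(y, x):
    # Simple trapezoid rule (avoids version-specific numpy helpers).
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def dglap_rhs(xs, q):
    """LO non-singlet DGLAP right-hand side dq/d(ln Q^2) on the grid xs.

    Uses P_qq = CF * [(1+z^2)/(1-z)]_+ written in subtracted form:
      (alpha_s/2pi) CF { int_x^1 dz (1+z^2)/(1-z) [q(x/z)/z - q(x)]
                         + q(x) * (x + x^2/2 + 2 ln(1-x)) }.
    """
    rhs = np.empty_like(q)
    for i, x in enumerate(xs):
        z = np.linspace(x, 1.0 - 1e-6, 400)
        qxz = np.interp(x / z, xs, q)          # q(x/z) by linear interpolation
        integrand = (1.0 + z**2) / (1.0 - z) * (qxz / z - q[i])
        rhs[i] = trap(integrand, z) + q[i] * (x + x * x / 2.0
                                              + 2.0 * np.log1p(-x))
    return (ALPHA_S / (2.0 * np.pi)) * CF * rhs

xs = np.geomspace(1e-3, 0.999, 200)
q = np.sqrt(xs) * (1.0 - xs) ** 3              # toy valence-like input
number0, mom0 = trap(q, xs), trap(xs * q, xs)

t, steps = np.log(4.0), 120                    # evolve Q^2 by a factor of 4
for _ in range(steps):                         # simple Euler stepping in ln Q^2
    q = q + (t / steps) * dglap_rhs(xs, q)

number1, mom1 = trap(q, xs), trap(xs * q, xs)
print(number1 / number0, mom1 / mom0)  # ~1 (number conserved), < 1 (momentum lost to gluons)
```

With a running $`\alpha _s`$ and a realistic input this is the standard way $`xF_3`$ is evolved in $`Q^2`$; the fixed coupling here only keeps the sketch short.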
This is consistent with the phenomenological model of Glück and Reya, that the anti-quark distribution of the proton is dynamically generated by $`Q^2`$ evolution from an initial value of zero at low $`Q^2`$. We will, in a later paper, calculate the anti-quark distribution functions and show that this is indeed justified. However, we expect the “primordial” gluon distribution to be non-zero. In a previous paper we derived a variational principle that determines this valence quark distribution function in the semi-classical approximation of QHD. It was also shown there how to derive this variational principle from QCD. In a separate paper we derived the same principle from an interacting valence parton model of the baryon. In this paper we will obtain approximate solutions of the variational principle and compare with the experimental data of the CCFR and CDHS collaborations. The agreement of the predictions of QHD with data is quite spectacular: we now have a theory of the $`x_B`$ dependence of deep inelastic structure functions. Let $`\stackrel{~}{\psi }(p)`$ be the wavefunction of a valence parton in a proton, thought of as a function of the null component of momentum ($`p=p^0-p^1`$). For kinematical reasons, $`P>p>0`$, where $`P`$ is the total momentum of the proton. 
Naively, we would have the sum rules, $$\int _0^P|\stackrel{~}{\psi }(p)|^2\frac{dp}{2\pi }=1,\qquad N_c\int _0^Pp|\stackrel{~}{\psi }(p)|^2\frac{dp}{2\pi }=P.$$ (1) However, we have to modify the momentum sum rule since it is known that the valence partons carry only about half the momentum of the proton: the rest is carried mostly by the gluons, with a small contribution from anti-quarks. 
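For an illustrative trial wavefunction $`\stackrel{~}{\psi }(p)=c\,p^a(P-p)^b`$ (an assumed ansatz for this sketch, not the paper's solution), both integrals in eq. (1) are Euler Beta functions, $`\int _0^Pp^{2a}(P-p)^{2b}dp=P^{2a+2b+1}B(2a+1,2b+1)`$, so the valence momentum fraction comes out in closed form as $`f=N_c(2a+1)/(2a+2b+2)`$, independent of $`P`$ and of the normalization $`c`$. A quick check:

```python
import numpy as np

def trap(y, x):
    # Simple trapezoid rule (avoids version-specific numpy helpers).
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def momentum_fraction(a, b, Nc=3):
    # f = Nc * B(2a+2, 2b+1) / B(2a+1, 2b+1) = Nc * (2a+1) / (2a+2b+2),
    # using the identity B(x+1, y) / B(x, y) = x / (x + y).
    return Nc * (2.0 * a + 1.0) / (2.0 * a + 2.0 * b + 2.0)

P, a, b = 1.0, 0.4, 4.0                      # illustrative parameter values
f_closed = momentum_fraction(a, b)

# Cross-check against direct quadrature of the eq. (1) integrals.
p = np.linspace(1e-9, P - 1e-9, 400_001)
w = p ** (2 * a) * (P - p) ** (2 * b)        # |psi_tilde(p)|^2 up to c^2
f_numeric = 3 * trap(p * w, p) / (P * trap(w, p))
print(f_closed, f_numeric)                   # both ≈ 0.5
```

The choice $`b=5a+2`$ makes $`f=1/2`$ exactly for $`N_c=3`$, which is the physically motivated value discussed in the text (valence quarks carrying about half the proton momentum).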
Thus we impose $`\textcolor[rgb]{0,0,0}{N}_\textcolor[rgb]{0,0,0}{c}\textcolor[rgb]{0,0,0}{}_\textcolor[rgb]{0,0,0}{0}^\textcolor[rgb]{0,0,0}{P}\textcolor[rgb]{0,0,0}{p}\textcolor[rgb]{0,0,0}{|}\stackrel{\textcolor[rgb]{0,0,0}{~}}{\textcolor[rgb]{0,0,0}{\psi }}\textcolor[rgb]{0,0,0}{(}\textcolor[rgb]{0,0,0}{p}\textcolor[rgb]{0,0,0}{)}\textcolor[rgb]{0,0,0}{|}^\textcolor[rgb]{0,0,0}{2}\frac{\textcolor[rgb]{0,0,0}{d}\textcolor[rgb]{0,0,0}{p}}{\textcolor[rgb]{0,0,0}{2}\textcolor[rgb]{0,0,0}{\pi }}\textcolor[rgb]{0,0,0}{=}\textcolor[rgb]{0,0,0}{f}\textcolor[rgb]{0,0,0}{P}`$, where $`\textcolor[rgb]{0,0,0}{f}`$ is the fraction of the momentum carried by the valence partons. We will address the problem of gluon distribution functions in a later paper, where we will attempt to calculate $`\textcolor[rgb]{0,0,0}{f}`$; here we will treat it as a parameter. The wavefunction $`\stackrel{\textcolor[rgb]{0,0,0}{~}}{\textcolor[rgb]{0,0,0}{\psi }}`$ is determined by minimizing the energy $`\textcolor[rgb]{0,0,0}{E}_\textcolor[rgb]{0,0,0}{1}\textcolor[rgb]{0,0,0}{(}\textcolor[rgb]{0,0,0}{\psi }\textcolor[rgb]{0,0,0}{)}\textcolor[rgb]{0,0,0}{=}{\displaystyle \textcolor[rgb]{0,0,0}{}_\textcolor[rgb]{0,0,0}{0}^\textcolor[rgb]{0,0,0}{P}}{\displaystyle \frac{\textcolor[rgb]{0,0,0}{1}}{\textcolor[rgb]{0,0,0}{2}}}\textcolor[rgb]{0,0,0}{[}\textcolor[rgb]{0,0,0}{p}\textcolor[rgb]{0,0,0}{+}{\displaystyle \frac{\textcolor[rgb]{0,0,0}{\mu }^\textcolor[rgb]{0,0,0}{2}}{\textcolor[rgb]{0,0,0}{p}}}\textcolor[rgb]{0,0,0}{]}\textcolor[rgb]{0,0,0}{|}\stackrel{\textcolor[rgb]{0,0,0}{~}}{\textcolor[rgb]{0,0,0}{\psi }}\textcolor[rgb]{0,0,0}{(}\textcolor[rgb]{0,0,0}{p}\textcolor[rgb]{0,0,0}{)}\textcolor[rgb]{0,0,0}{|}^\textcolor[rgb]{0,0,0}{2}{\displaystyle \frac{\textcolor[rgb]{0,0,0}{d}\textcolor[rgb]{0,0,0}{p}}{\textcolor[rgb]{0,0,0}{2}\textcolor[rgb]{0,0,0}{\pi }}}\textcolor[rgb]{0,0,0}{+}{\displaystyle 
\frac{\stackrel{\textcolor[rgb]{0,0,0}{~}}{\textcolor[rgb]{0,0,0}{g}}^\textcolor[rgb]{0,0,0}{2}}{\textcolor[rgb]{0,0,0}{2}}}{\displaystyle \textcolor[rgb]{0,0,0}{}\textcolor[rgb]{0,0,0}{|}\textcolor[rgb]{0,0,0}{\psi }\textcolor[rgb]{0,0,0}{(}\textcolor[rgb]{0,0,0}{x}\textcolor[rgb]{0,0,0}{)}\textcolor[rgb]{0,0,0}{|}^\textcolor[rgb]{0,0,0}{2}\textcolor[rgb]{0,0,0}{|}\textcolor[rgb]{0,0,0}{\psi }\textcolor[rgb]{0,0,0}{(}\textcolor[rgb]{0,0,0}{y}\textcolor[rgb]{0,0,0}{)}\textcolor[rgb]{0,0,0}{|}^\textcolor[rgb]{0,0,0}{2}\frac{\textcolor[rgb]{0,0,0}{|}\textcolor[rgb]{0,0,0}{x}\textcolor[rgb]{0,0,0}{}\textcolor[rgb]{0,0,0}{y}\textcolor[rgb]{0,0,0}{|}}{\textcolor[rgb]{0,0,0}{2}}\textcolor[rgb]{0,0,0}{𝑑}\textcolor[rgb]{0,0,0}{x}\textcolor[rgb]{0,0,0}{𝑑}\textcolor[rgb]{0,0,0}{y}}`$ (2) subject to the above conditions. (Here $`\textcolor[rgb]{0,0,0}{\psi }\textcolor[rgb]{0,0,0}{(}\textcolor[rgb]{0,0,0}{x}\textcolor[rgb]{0,0,0}{)}\textcolor[rgb]{0,0,0}{=}\textcolor[rgb]{0,0,0}{}_\textcolor[rgb]{0,0,0}{0}^\textcolor[rgb]{0,0,0}{P}\stackrel{\textcolor[rgb]{0,0,0}{~}}{\textcolor[rgb]{0,0,0}{\psi }}\textcolor[rgb]{0,0,0}{(}\textcolor[rgb]{0,0,0}{p}\textcolor[rgb]{0,0,0}{)}\textcolor[rgb]{0,0,0}{e}^{\textcolor[rgb]{0,0,0}{i}\textcolor[rgb]{0,0,0}{p}\textcolor[rgb]{0,0,0}{x}}\frac{\textcolor[rgb]{0,0,0}{d}\textcolor[rgb]{0,0,0}{p}}{\textcolor[rgb]{0,0,0}{2}\textcolor[rgb]{0,0,0}{\pi }}`$ is the wave-function in position space.) This variational principle was derived from QCD through QHD in Ref. as well as from an interacting parton model in Ref. . It describes partons (within the mean field approximation) which are interacting with each other through a linear Coulomb potential, which binds them into a baryon. The linear potential comes from eliminating the gluon fields: it is the one-dimensional Fourier transform of the gluon propagator. 
A Lorentz-invariant formulation (which is more convenient for our purposes) is to minimize the mass<sup>2</sup> of the baryon rather than its energy: $$\mathcal{M}^2=\left[\int _0^P\frac{p}{2}|\tilde{\psi }(p)|^2\frac{dp}{2\pi }\right]\left[\int _0^P\frac{\mu ^2}{2p}|\tilde{\psi }(p)|^2\frac{dp}{2\pi }+\frac{\tilde{g}^2}{2}\int |\psi (x)|^2|\psi (y)|^2\frac{|x-y|}{2}\,dx\,dy\right].$$ (3) We are ignoring the flavor and spin quantum numbers of the partons, so what we get will be the spin- and flavor-averaged wavefunction. Also, $`\mu ^2=m^2-\frac{\tilde{g}^2}{\pi }`$ is the quark mass<sup>2</sup> after a finite renormalization , and $`m`$ is the current quark mass. The dimensional parameter $`\tilde{g}\sim \mathrm{\Lambda }_{QCD}`$ determines the strength of the interaction. 
We now define the number density of valence partons $$V(x_B)=N_c\left[1+C_1\left(\frac{\alpha _s(Q_0^2)}{\pi }\right)+C_2\left(\frac{\alpha _s(Q_0^2)}{\pi }\right)^2+C_3\left(\frac{\alpha _s(Q_0^2)}{\pi }\right)^3\right]\frac{P}{2\pi }\left|\tilde{\psi }(x_BP)\right|^2.$$ (4) We have normalized this so that the Gross–Llewellyn Smith sum rule (including the perturbative corrections up to order $`\alpha _s^3(Q_0^2)`$) is satisfied. The coefficients $`C_1`$, $`C_2`$ and $`C_3`$ are given in Ref. . We finally set the number of colors $`N_c`$ to 3 and the number of flavours to 2 at the initial low value of $`Q_0^2`$. Since the total momentum scales like $`P\sim N_c`$, in the limit $`N_c\to \infty `$ the parton momentum has the range $`0\le p<\infty `$. In the limit when $`N_c\to \infty `$ and $`m=0`$, we have found an exact minimum of the variational principle: $`\tilde{\psi }(p)=Ce^{-p/g}`$. The condition for minimizing $`\mathcal{M}^2`$ is an integral equation, and we can verify that this function is a solution by explicit computation. 
(The calculation involves infrared-singular integrals, which are defined through an appropriate principal-value prescription, as in .) This suggests that even when $`N_c`$ is finite, a reasonable variational ansatz would be $$\tilde{\psi }(p)=C\left(\frac{p}{g}\right)^a\left[1-\frac{p}{P}\right]^b.$$ (Recall that $`e^{-x}=\lim _{n\to \infty }[1-\frac{x}{n}]^n`$.) The constant $`C`$ is determined by the normalization condition. By using the momentum sum rule, we get $$b=\frac{N_c}{2f}-1+a\left[\frac{N_c}{f}-1\right].$$ The exponent $`a`$ is determined by the variational principle. 
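The quoted relation between $`b`$, $`a`$ and $`f`$ can be checked directly: for this ansatz both sum rules reduce to Euler beta functions, and the momentum fraction comes out as $`N_c(2a+1)/(2a+2b+2)`$. A quick numerical sketch (the test values of $`a`$, $`N_c`$ and $`f`$ below are arbitrary illustrations):

```python
import math

def beta_fn(p, q):
    # Euler beta function B(p, q)
    return math.gamma(p) * math.gamma(q) / math.gamma(p + q)

def momentum_fraction(a, b, nc):
    # N_c <p/P> for |psi(p)|^2 proportional to (p/P)^(2a) (1 - p/P)^(2b);
    # the momentum sum rule demands that this equal f
    return nc * beta_fn(2 * a + 2, 2 * b + 1) / beta_fn(2 * a + 1, 2 * b + 1)

def b_of(a, nc, f):
    # b = N_c/(2f) - 1 + a [N_c/f - 1], as quoted in the text
    return nc / (2 * f) - 1.0 + a * (nc / f - 1.0)
```

For any admissible choice of parameters the two functions are consistent to machine precision, confirming the quoted formula for $`b`$.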
It is zero in the limit of chiral symmetry and rises like a power of $`\frac{m^2}{\tilde{g}^2}`$. In the limit $`N_c\to \infty `$, $`a`$ is determined by the transcendental equation $$\frac{\pi m^2}{\tilde{g}^2}=1+\int _0^1\frac{dy}{y^2}\left[(1+y)^a+(1-y)^a-2\right]+\int _1^{\infty }\frac{dy}{y^2}\left[(1+y)^a-2\right]$$ (5) which we derived by a Frobenius-type analysis of the integral equation for the minimization of $`\mathcal{M}^2`$ in an earlier paper. Thus in the limit $`m=0`$, $`a=0`$, and we have the valence parton distribution function $`V(x_B)=C[1-x_B]^{\frac{N_c}{f}-2}`$, where $`C`$ is fixed by the GLS sum rule. This variational approximation agrees well with our numerical solution of the same problem in Ref. , but is much simpler to use. The limit of chiral symmetry is a good approximation in the case of the nucleon, since the current quark masses of the up and down quarks (5–8 MeV) are small compared to the energy scale of the strong interactions ($`\mathrm{\Lambda }_{QCD}\sim `$ 200 MeV). The valence parton distribution depends on $`\tilde{g}`$ only through the ratio $`\frac{m^2}{\tilde{g}^2}`$, and in the limit $`m=0`$ it is independent of $`\tilde{g}`$. 
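With $`N_c=3`$ and $`f=0.5`$ the chiral-limit distribution is $`V(x_B)=15(1-x_B)^4`$, and both sum rules can be verified numerically. A minimal sketch (ignoring the perturbative correction factor appearing in Eq. (4)):

```python
def valence(x, nc=3.0, f=0.5):
    # V(x_B) = C (1 - x_B)^(nc/f - 2); the number sum rule
    # int_0^1 V dx = nc fixes C = nc (nc/f - 1)
    return nc * (nc / f - 1.0) * (1.0 - x) ** (nc / f - 2.0)

def simpson(g, a, b, n=1000):
    # composite Simpson rule (n even)
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
    return s * h / 3.0

number = simpson(valence, 0.0, 1.0)                      # number sum rule: N_c = 3
momentum = simpson(lambda x: x * valence(x), 0.0, 1.0)   # momentum sum rule: f = 0.5
```

Both quadratures reproduce the constraints that fixed $`C`$ and the exponent, which is a useful sanity check before evolving the distribution.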
The structure function $`F_3(x_B,Q^2)`$ of neutrino scattering on an isoscalar target has been accurately measured by the CCFR and CDHS experimental groups at several values of $`x_B`$ and $`Q^2`$. To compare with that data we need to evolve our computed distribution function to the appropriate values of $`Q^2`$ by the DGLAP equation: $$\frac{dV(x_B,t)}{dt}=\frac{\alpha _s(t)}{2\pi }\int _{x_B}^1\frac{dy}{y}V(y,t)P_{qq}\left(\frac{x_B}{y}\right),$$ (6) where $`t=\mathrm{log}(Q^2/Q_0^2)`$, $`\alpha _s(t)`$ is evaluated using the two-loop $`\beta `$ function, and $`P_{qq}`$ is the evolution kernel to leading order given in Ref. . We solve the evolution equation numerically, assuming an initial value of $`Q_0^2=0.4\,\mathrm{GeV}^2`$. This low value of $`Q_0^2`$ is justified since we can show that the anti-quark distribution function of the nucleon is quite small. The range of $`Q`$ over which we evolve is small ($`Q\sim 0.6`$ to $`5\,\mathrm{GeV}`$), and its effect is correspondingly small, justifying the use of the leading-order DGLAP equation. We also set $`N_c=3`$, $`\mathrm{\Lambda }_{QCD}=200`$ MeV and the current quark mass $`m=0`$. 
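Equation (6) is straightforward to integrate numerically. The sketch below uses the standard leading-order non-singlet kernel with the plus prescription written out explicitly; for brevity it runs the coupling at one loop (the text uses two loops) and steps in $`t`$ with a forward-Euler scheme, so it is illustrative rather than production-quality. The grid sizes and $`n_f=4`$ are arbitrary choices:

```python
import math
from bisect import bisect_left

CF = 4.0 / 3.0

def alpha_s(t, q0sq=0.4, lam=0.2, nf=4):
    # one-loop running coupling at Q^2 = q0sq * e^t (GeV^2)
    b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    return 1.0 / (b0 * math.log(q0sq * math.exp(t) / lam ** 2))

def interp(xs, V, x):
    # piecewise-linear V(x) on the grid, extrapolated linearly to V(1) = 0
    if x >= xs[-1]:
        return V[-1] * (1.0 - x) / (1.0 - xs[-1])
    i = max(bisect_left(xs, x), 1)
    w = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return (1.0 - w) * V[i - 1] + w * V[i]

def dglap_rhs(xs, V, t, nz=150):
    # Eq. (6) with the LO non-singlet P_qq; the plus prescription is written
    # out so that the z -> 1 singularity cancels inside the bracket
    a = CF * alpha_s(t) / (2.0 * math.pi)
    out = []
    for x, vx in zip(xs, V):
        s = 0.0
        for i in range(nz):                      # midpoint rule on (x, 1)
            z = x + (1.0 - x) * (i + 0.5) / nz
            s += ((1.0 + z * z) * interp(xs, V, x / z) / z - 2.0 * vx) / (1.0 - z)
        s = s * (1.0 - x) / nz + (1.5 + 2.0 * math.log(1.0 - x)) * vx
        out.append(a * s)
    return out

def evolve(xs, V, t_final, steps=16):
    # forward Euler in t = log(Q^2 / Q0^2)
    V, dt = list(V), t_final / steps
    for k in range(steps):
        V = [v + dt * r for v, r in zip(V, dglap_rhs(xs, V, k * dt))]
    return V
```

Evolving $`V(x)=15(1-x)^4`$ from $`Q_0^2=0.4\,\mathrm{GeV}^2`$ to $`Q^2=5\,\mathrm{GeV}^2`$ shifts the valence partons toward smaller $`x`$: the number sum rule is roughly conserved on the grid, while the valence momentum fraction drops, as it must when gluons are radiated.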
As for the fraction of baryon momentum carried by valence partons, $`f`$, we choose it to be $`0.5`$, in agreement with phenomenological fits of parton distribution functions (see, e.g., ). In a later paper we will derive the gluon structure function as well, and at that time we will have a theoretical prediction for this parameter. The plots show that we have quite good agreement with data. Thus we have established that Quantum HadronDynamics is a successful way of deriving hadronic structure functions from QCD. Acknowledgements: We thank A. Bodek and especially V. John for discussions. This work was supported in part by U.S. Department of Energy grant No. DE–FG02-91ER40685. Figure 1: Comparison of the QHD prediction of $`xF_3`$ (solid curve) with measurements by CCFR($``$) and CDHS($`\mathrm{}`$). We have chosen the parameters $`Q_0^2=0.4\,\mathrm{GeV}^2`$ and $`f=0.5`$. 
(a) CCFR at $`\textcolor[rgb]{0,0,0}{Q}^\textcolor[rgb]{0,0,0}{2}\textcolor[rgb]{0,0,0}{=}\textcolor[rgb]{0,0,0}{7.9}\textcolor[rgb]{0,0,0}{G}\textcolor[rgb]{0,0,0}{e}\textcolor[rgb]{0,0,0}{V}^\textcolor[rgb]{0,0,0}{2}`$, CDHS at $`\textcolor[rgb]{0,0,0}{7.13}\textcolor[rgb]{0,0,0}{}\textcolor[rgb]{0,0,0}{Q}^\textcolor[rgb]{0,0,0}{2}\textcolor[rgb]{0,0,0}{}\textcolor[rgb]{0,0,0}{8.46}\textcolor[rgb]{0,0,0}{G}\textcolor[rgb]{0,0,0}{e}\textcolor[rgb]{0,0,0}{V}^\textcolor[rgb]{0,0,0}{2}`$ and QHD prediction at $`\textcolor[rgb]{0,0,0}{Q}^\textcolor[rgb]{0,0,0}{2}\textcolor[rgb]{0,0,0}{=}\textcolor[rgb]{0,0,0}{7.9}\textcolor[rgb]{0,0,0}{G}\textcolor[rgb]{0,0,0}{e}\textcolor[rgb]{0,0,0}{V}^\textcolor[rgb]{0,0,0}{2}`$. (b) CCFR at $`\textcolor[rgb]{0,0,0}{Q}^\textcolor[rgb]{0,0,0}{2}\textcolor[rgb]{0,0,0}{=}\textcolor[rgb]{0,0,0}{12.6}\textcolor[rgb]{0,0,0}{G}\textcolor[rgb]{0,0,0}{e}\textcolor[rgb]{0,0,0}{V}^\textcolor[rgb]{0,0,0}{2}`$, CDHS at $`\textcolor[rgb]{0,0,0}{12.05}\textcolor[rgb]{0,0,0}{}\textcolor[rgb]{0,0,0}{Q}^\textcolor[rgb]{0,0,0}{2}\textcolor[rgb]{0,0,0}{}\textcolor[rgb]{0,0,0}{14.3}\textcolor[rgb]{0,0,0}{G}\textcolor[rgb]{0,0,0}{e}\textcolor[rgb]{0,0,0}{V}^\textcolor[rgb]{0,0,0}{2}`$ and QHD prediction at $`\textcolor[rgb]{0,0,0}{Q}^\textcolor[rgb]{0,0,0}{2}\textcolor[rgb]{0,0,0}{=}\textcolor[rgb]{0,0,0}{13}\textcolor[rgb]{0,0,0}{G}\textcolor[rgb]{0,0,0}{e}\textcolor[rgb]{0,0,0}{V}^\textcolor[rgb]{0,0,0}{2}`$. 
Figure 2: Comparison of QHD prediction of $`\textcolor[rgb]{0,0,0}{x}\textcolor[rgb]{0,0,0}{F}_\textcolor[rgb]{0,0,0}{3}`$ (solid curve) with CCFR($`\textcolor[rgb]{0,0,0}{}`$) measurements at (a) $`\textcolor[rgb]{0,0,0}{Q}^\textcolor[rgb]{0,0,0}{2}\textcolor[rgb]{0,0,0}{=}\textcolor[rgb]{0,0,0}{20}\textcolor[rgb]{0,0,0}{G}\textcolor[rgb]{0,0,0}{e}\textcolor[rgb]{0,0,0}{V}^\textcolor[rgb]{0,0,0}{2}`$ and (b) $`\textcolor[rgb]{0,0,0}{Q}^\textcolor[rgb]{0,0,0}{2}\textcolor[rgb]{0,0,0}{=}\textcolor[rgb]{0,0,0}{31.6}\textcolor[rgb]{0,0,0}{G}\textcolor[rgb]{0,0,0}{e}\textcolor[rgb]{0,0,0}{V}^\textcolor[rgb]{0,0,0}{2}`$.
no-problem/9908/cond-mat9908427.html
# The Upper Critical Field in Disordered Two-Dimensional Superconductors ## I Introduction Increasing disorder is known to suppress superconductivity in low-dimensional systems such as thin films and narrow wires. This occurs because the disorder causes electrons to move diffusively rather than ballistically, making them less efficient at screening the Coulomb repulsion between electrons. The increased Coulomb repulsion decreases both the electron-electron attraction needed for superconductivity, and the density of states of electrons available for pairing at the Fermi surface. Typical types of experimental data are: (i) $`T_c(R_{\square })`$, the transition temperature as a function of normal-state resistance per square; (ii) $`\mathrm{\Delta }_0(R_{\square })`$, the order parameter at zero temperature, as a function of normal-state resistance per square; (iii) $`H_{c2}(T,R_{\square })`$, equivalently $`T_c(R_{\square },H)`$, the upper critical field as a function of temperature and normal-state resistance per square; (iv) $`T_c(R_{\square },1/\tau _s)`$, the transition temperature as a function of resistance per square and spin-flip scattering rate in films with magnetic impurities. It is found experimentally that $`T_c(R_{\square })`$ curves from a wide variety of materials fit a universal curve with a single fitting parameter, whilst the few experimental measurements of $`\mathrm{\Delta }_0(R_{\square })`$ seem to have $`\mathrm{\Delta }_0(R_{\square })/2k_BT_c(R_{\square })`$ roughly constant. This fitting to a single curve, whilst pleasing in showing that the basic ingredients of our theories are correct, does not allow detailed analysis of the theory. 
Data of types (iii) and (iv) are more promising because there is an additional parameter to vary – the magnetic field in (iii), and the spin-flip scattering rate in (iv). To the best of our knowledge, only one experiment of type (iv) has been performed, and we discuss it elsewhere. Several experiments of type (iii) have been performed; some seem to show a positive curvature in $`H_{c2}(T)`$ at low temperature as disorder is increased. Moreover this effect is predicted by theory, and this seems to be another confirmation of the basic theoretical model. However, we need to be careful: positive curvature in $`H_{c2}(T)`$ is a ubiquitous feature of exotic superconductors, and occurs in many systems where localization is not believed to be the cause. Indeed any pair-breaking mechanism that varies as a function of magnetic field can lead to such anomalous behaviour in $`H_{c2}`$. This means that it is often difficult to distinguish between the various mechanisms that might be present. It is therefore particularly important to be sure of our theory, and in this light we re-examine the predictions of localization theory. One of the main problems of the localization theory is that even first-order perturbation theory results are hard to obtain correctly. The first-order results are capable of explaining experimental data in the weak-disorder regime, but for stronger disorder it is clear that we need something else. As an example consider the prediction for $`T_c`$ suppression, $$\mathrm{ln}\left(\frac{T_c}{T_{c0}}\right)=-\frac{1}{3}\frac{R_{\square }}{R_0}\mathrm{ln}^3\left(\frac{1}{2\pi T_{c0}\tau }\right)$$ (1) where $`T_{c0}`$ is the transition temperature of the clean system, $`R_0=2\pi h/e^2\approx 162\,\mathrm{k\Omega }`$, and $`\tau `$ is the elastic scattering time. This yields an exponential curve for $`T_c(R_{\square })`$, which behaves like a straight line for small $`R_{\square }`$. 
It is clear that $`T_c`$ deduced from this equation can never go to zero for finite $`R_{\square }`$, as happens in experiment. A very simple ad hoc way of going beyond simple perturbation theory is to replace $`T_{c0}`$ on the right-hand side by $`T_c`$, appealing perhaps to self-consistency. If we define $`x=\mathrm{ln}(T_{c0}/T_c)`$, $`\beta =\mathrm{ln}(1/2\pi T_{c0}\tau )`$, and $`t=R_{\square }/R_0`$, the new equation has the cubic form $$x=\frac{t}{3}(\beta +x)^3$$ (2) and can easily be solved. However a new problem emerges because there are two positive roots for every value of $`R_{\square }`$. At first we can take the larger of the roots, on physical principles, because it is this root which tends to $`T_c/T_{c0}=1`$ at $`R_{\square }=0`$. However we eventually come to a re-entrance point beyond which no solutions exist. It is clear that this re-entrance is unphysical, an artefact of our ad hoc extension of perturbation theory. In the case of $`T_c(R_{\square })`$ the story has a happy ending, in that perturbation theory can be correctly extended by a renormalization group (RG) treatment based on Finkel’stein’s interacting non-linear sigma model. This leads to the result $$\mathrm{ln}\left(\frac{T_c}{T_{c0}}\right)=\frac{1}{|\gamma |}-\frac{1}{2\sqrt{t}}\mathrm{ln}\left(\frac{1+\sqrt{t}/|\gamma |}{1-\sqrt{t}/|\gamma |}\right),$$ (3) where $`\gamma =1/\mathrm{ln}(T_{c0}/1.13\tau )`$. This equation reduces to the first-order result for small $`t`$, and now $`T_c`$ goes smoothly to zero at $`t=\gamma ^2`$. The three curves are plotted for comparison in Fig. (1). The reason for discussing $`T_c(R_{\square })`$ in detail above is that the same problem occurs for $`H_{c2}(T,R_{\square })`$, equivalently $`T_c(R_{\square },H)`$. 
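The three $`T_c(R_{\square })`$ curves compared in Fig. 1 are easy to generate numerically. The sketch below takes an arbitrary illustrative value $`T_{c0}\tau =0.01`$, and reads $`|\gamma |`$ as $`1/\mathrm{ln}(1/1.13T_{c0}\tau )`$, which is an assumed convention:

```python
import math

TC0_TAU = 0.01                                     # assumed value of Tc0*tau
BETA = math.log(1.0 / (2.0 * math.pi * TC0_TAU))   # beta = ln(1/(2 pi Tc0 tau))
GAMMA = 1.0 / math.log(1.0 / (1.13 * TC0_TAU))     # |gamma| (assumed convention)

def tc_first_order(t):
    # Eq. (1): Tc/Tc0 = exp(-(t/3) beta^3)
    return math.exp(-(t / 3.0) * BETA ** 3)

def tc_adhoc(t):
    # Eq. (2): x = (t/3)(beta + x)^3; take the root connected to x = 0 at
    # t = 0 (the larger Tc); return None past the re-entrance point
    f = lambda x: x - (t / 3.0) * (BETA + x) ** 3
    xstar = 1.0 / math.sqrt(t) - BETA              # location of the maximum of f
    if xstar <= 0.0 or f(xstar) <= 0.0:
        return None                                # no solution: re-entrance
    lo, hi = 0.0, xstar                            # f(lo) < 0 < f(hi)
    for _ in range(100):                           # bisect to the smaller x root
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return math.exp(-0.5 * (lo + hi))              # Tc/Tc0 = e^{-x}

def tc_rg(t):
    # Eq. (3); Tc vanishes smoothly at t = gamma^2
    u = math.sqrt(t) / GAMMA
    if u >= 1.0:
        return 0.0
    return math.exp(1.0 / GAMMA
                    - math.log((1.0 + u) / (1.0 - u)) / (2.0 * math.sqrt(t)))
```

The ad hoc branch tracks the first-order curve at small $`t`$, suppresses $`T_c`$ more strongly as $`t`$ grows, and then disappears at the re-entrance point, while the RG expression interpolates smoothly all the way to $`T_c=0`$.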
The standard theory in this case, due to Maekawa, Ebisawa and Fukuyama (MEF), is the equivalent of the ad hoc extension discussed above, and has the form $$\mathrm{ln}\left(\frac{T_c}{T_{c0}}\right)=\psi \left(\frac{1}{2}\right)-\psi \left(\frac{1}{2}+\frac{1}{2\pi T_c\tau _H}\left[1-2t\mathrm{ln}\left(\frac{1}{2\pi T_c\tau }\right)\right]\right)-R_{HF}-R_V$$ (4) $$R_{HF}=\frac{1}{2}t\mathrm{ln}^2\left(\frac{1}{2\pi T_c\tau }\right)-t\mathrm{ln}\left(\frac{1}{2\pi T_c\tau }\right)\left[\psi \left(\frac{1}{2}\right)-\psi \left(\frac{1}{2}+\frac{1}{2\pi T_c\tau _H}\right)\right]$$ (5) $$R_V=\frac{1}{3}t\mathrm{ln}^3\left(\frac{1}{2\pi T_c\tau }\right)-t\mathrm{ln}^2\left(\frac{1}{2\pi T_c\tau }\right)\left[\psi \left(\frac{1}{2}\right)-\psi \left(\frac{1}{2}+\frac{1}{2\pi T_c\tau _H}\right)\right],$$ (6) where $`1/\tau _H=DeH`$. This equation suffers similar re-entrance problems at finite $`1/\tau _H`$ to those found at $`1/\tau _H=0`$, when it is just the $`T_c`$ equation. Indeed the $`H_{c2}(T)`$ curves can only be plotted down to the value of $`T`$ at which re-entrance occurs, and at this point the curves appear to have infinite slope. This leads us to ask whether the positive curvature in $`H_{c2}(T)`$ is also an artefact of the ad hoc approximation used. What we need to answer this question is the finite-magnetic-field analogue of the RG result discussed above. However the RG is very difficult, and the answer is not forthcoming from this source. Fortunately Oreg and Finkel’stein have recently shown that the RG result can be obtained from a diagrammatic resummation technique. 
This method has the great advantage of being analytically tractable and easy to use. In this paper we extend this approach to finite magnetic field to see whether we really do expect positive curvature in $`H_{c2}(T)`$. Indeed this paper is intended as somewhat of a showcase for this resummation method, to demonstrate the ease of its extension to a wide variety of problems. Oreg and Finkel’stein take the Coulomb interaction to be featureless, and we shall follow them. We should therefore explain why it is legitimate to use a featureless interaction rather than the correct screened Coulomb interaction. The screened Coulomb interaction has a singularity at low momentum, and one might naively think that this would lead to a strong enhancement of the suppression of transition temperature. However a cancellation occurs between diagrams 1–4 and diagram 5 of Fig. 2, which effectively removes the low momentum singularity. The Coulomb interaction is then effectively featureless, all diagrammatic sums being dominated by large frequency and momentum. It turns out that one gets the same result from doing the perturbation theory correctly with the screened Coulomb interaction as from using a constant interaction of strength $`g=N(0)V=1/2`$ and keeping only diagrams 1–4. This is what we shall do in this paper. We also note that the re-summation method of Oreg and Finkelstein keeps only diagrams 3 and 4, which is again legitimate as their contribution is greater than that from diagrams 1 and 2. However it is not difficult to include these terms in the re-summation, as we will show in section III. The cancellation of low momentum singularities is expected to be a general feature, enforced by gauge invariance, so we are free to ignore them and use a featureless interaction as long as we include the diagrams that give the dominant contribution at large frequency and momentum. The outline of the rest of the paper is as follows. 
In section II we review the first-order perturbation calculation for $`H_{c2}(T)`$, assuming a featureless Coulomb interaction. We derive the analytical MEF formula by making an asymptotic approximation to the Matsubara frequency sums which arise in perturbation theory. We make plots of $`H_{c2}(T)`$ both by solving the implicit MEF equation, and its equivalent where the Matsubara sums are performed exactly. Surprisingly, we find no positive curvature in the latter case. In section III we derive $`H_{c2}(T)`$ using the resummation technique, both in the form used by Oreg and Finkel’stein, and also in an extended form which includes self-energy diagrams missed in their formalism. Again we find no positive curvature in $`H_{c2}(T)`$. In section IV we discuss the experimental situation and draw conclusions. ## II Review of First-Order Perturbation Theory In this section we will carefully review the first-order perturbation theory calculation of $`H_{c2}(T)`$. We do this in detail because we will show that accurately performing the sums, rather than making an analytical approximation to them, removes the upward curvature in $`H_{c2}(T)`$. We identify $`T_c`$ as the temperature at which the magnetic-field-dependent pair propagator diverges. To find the latter we calculate the correction to the momentum-dependent pair propagator, $`L(q,0)`$, and then make the usual substitution $`Dq^2\to 2/\tau _H`$, where $`1/\tau _H=DeH`$. 
The zeroth-order (mean field) pair propagator is given by $$L_0(q,0)^{-1}=N(0)\left[\mathrm{ln}\left(\frac{T}{T_{c0}}\right)+\psi \left(\frac{1}{2}+\frac{Dq^2}{4\pi T}\right)-\psi \left(\frac{1}{2}\right)\right]$$ (7) so that upon the substitution $`Dq^2\rightarrow 2/\tau _H`$ we get the usual Abrikosov-Gorkov result for the pair-breaking effect of the magnetic field $$\mathrm{ln}\left(\frac{T_c}{T_{c0}}\right)=\psi \left(\frac{1}{2}\right)-\psi \left(\frac{1}{2}+\frac{1}{2\pi T_c\tau _H}\right).$$ (8) We will calculate the corrections to the pair polarization bubble, $`\delta P(q,0)`$, which will lead to a change in $`T_c`$ given by $$\mathrm{ln}\left(\frac{T_c}{T_{c0}}\right)=\psi \left(\frac{1}{2}\right)-\psi \left(\frac{1}{2}+\frac{1}{2\pi T_c\tau _H}\right)+\frac{\delta P(q,0)}{N(0)}$$ (9) Since we will assume a featureless interaction, we need to evaluate diagrams 1 to 4 of Fig. 2. Diagrams 1 and 3 involve the summation of three terms to form a Hikami box, the general form for which is $$2\pi N(0)\tau ^4\left[D(\mathrm{\Delta }_4^1+2\mathrm{\Delta }_4^2)+|ϵ_1|+|ϵ_2|+|ϵ_3|+|ϵ_4|\right]\theta (ϵ_1ϵ_2)\theta (ϵ_2ϵ_3)\theta (ϵ_3ϵ_4)$$ (10) where $`\mathrm{\Delta }_4^1=\sum _{i=1}^4𝐪_i\cdot 𝐪_{i+1}`$ and $`\mathrm{\Delta }_4^2=\sum _{i=1}^2𝐪_i\cdot 𝐪_{i+2}`$, the $`𝐪_i`$ being the incoming momenta, and the $`ϵ_i`$ the Matsubara frequencies on the electron Green functions in the box.
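Eqn. (8) defines $`T_c`$ only implicitly, but it is straightforward to solve numerically. The sketch below is our own illustration, not code from the paper: it solves the standard Abrikosov-Gorkov relation for $`t=T_c/T_{c0}`$ by bisection, in units where $`T_{c0}=1`$, with the digamma function approximated by a central difference of the standard library's lgamma. The function name tc_ag and the parameterization by $`\alpha _0=1/(2\pi T_{c0}\tau _H)`$ are our own conventions.

```python
import math

def psi(x, h=1e-5):
    """Digamma function via a central difference of math.lgamma
    (stdlib only; accurate to far better than we need here)."""
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2.0 * h)

def tc_ag(alpha0, iters=200):
    """Solve ln t = psi(1/2) - psi(1/2 + alpha0/t) for t = Tc/Tc0.

    alpha0 = 1/(2*pi*Tc0*tauH) is the pair-breaking strength measured
    at Tc0; note that 1/(2*pi*Tc*tauH) = alpha0/t at fixed field.
    The left-hand side minus the right-hand side is monotonic in t,
    so simple bisection on (0, 1] suffices.
    """
    if alpha0 == 0.0:
        return 1.0
    f = lambda t: math.log(t) - psi(0.5) + psi(0.5 + alpha0 / t)
    lo, hi = 1e-9, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Since $`1/(2\pi T_c\tau _H)=\alpha _0/t`$, one scalar root-find per field value traces out the mean-field $`H_{c2}(T)`$ curve.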
Using standard diagrammatic rules, and the above result for the Hikami box, we obtain the 4 contributions to the pair polarization bubble $$P_1=4\pi N(0)VT\sum _{ϵ_l}T\sum _{ϵ_n}\sum _{q^{\prime }}\frac{Dq^{\prime 2}+Dq^2+3|ϵ_l|+|ϵ_n|}{(Dq^2+2|ϵ_l|)^2(Dq^{\prime 2}+|ϵ_l|+|ϵ_n|)^2}\theta (ϵ_lϵ_n)$$ (11) $$P_2=4\pi N(0)VT\sum _{ϵ_l}T\sum _{ϵ_n}\sum _{q^{\prime }}\frac{1}{(Dq^2+2|ϵ_l|)^2(D(q^{\prime }+q)^2+|ϵ_l|+|ϵ_n|)}\theta (ϵ_lϵ_n)$$ (12) $$P_3=4\pi N(0)VT\sum _{ϵ_l}T\sum _{ϵ_n}\sum _{q^{\prime }}\frac{Dq^{\prime 2}+Dq^2+|ϵ_l|+|ϵ_n|}{(Dq^2+|ϵ_l|)(Dq^2+|ϵ_n|)(Dq^{\prime 2}+|ϵ_l|+|ϵ_n|)^2}\theta (ϵ_lϵ_n)$$ (13) $$P_4=4\pi N(0)VT\sum _{ϵ_l}T\sum _{ϵ_n}\sum _{q^{\prime }}\frac{1}{(Dq^2+2|ϵ_l|)(Dq^2+|ϵ_n|)(D(q^{\prime }+q)^2+|ϵ_l|+|ϵ_n|)}\theta (ϵ_lϵ_n)$$ (14) We note that the relative signs of $`ϵ_l`$ and $`ϵ_n`$ are irrelevant in the sum, since the summand depends only upon $`|ϵ_l|`$ and $`|ϵ_n|`$, and that the featurelessness of the potential allows us to shift the momentum sum in terms 2 and 4. We find that terms 1 and 2 partially cancel, whilst terms 3 and 4 reinforce, to yield the sum $$P=-\frac{g}{D}T\sum _{ϵ_l>0}T\sum _{ϵ_n>0}\left\{\frac{1}{[ϵ_l+1/\tau _H][ϵ_l+ϵ_n]}+\frac{1}{[ϵ_l+1/\tau _H][ϵ_n+1/\tau _H]}\left[\mathrm{ln}\left(\frac{1}{[ϵ_l+ϵ_n]\tau }\right)+\frac{1}{[ϵ_l+ϵ_n]\tau _H}\right]\right\}$$ (15) where $`g=N(0)V`$ and we have performed the $`q^{\prime }`$-sum and set $`Dq^2\rightarrow 2/\tau _H`$. We first reproduce MEF’s analytic approximation to the sums over Matsubara frequencies. To do this it turns out to be easier to make a choice of relative sign of Matsubara frequencies, $`ϵ_lϵ_n<0`$, and to set $`ϵ_n=-ϵ_l+\omega _m`$.
If $`ϵ_l=2\pi T(l+1/2)`$ and $`\omega _m=2\pi Tm`$, the sum becomes $$P=-\frac{g}{4\pi ^2D}\sum _{m=1}^M\sum _{l=0}^{m-1}\left\{\frac{1}{[l+1/2+\alpha ]m}+\frac{1}{[l+1/2+\alpha ][m-l-1/2+\alpha ]}\left[\mathrm{ln}\left(\frac{M}{m}\right)+\frac{\alpha }{m}\right]\right\}$$ (16) $$=-\frac{g}{4\pi ^2D}\sum _{m=1}^M\left[\psi \left(\frac{1}{2}+m+\alpha \right)-\psi \left(\frac{1}{2}+\alpha \right)\right]\left\{\frac{1}{m}+\frac{2}{(m+2\alpha )}\left[\mathrm{ln}\left(\frac{M}{m}\right)+\frac{\alpha }{m}\right]\right\}$$ (17) where $`\alpha =1/2\pi T\tau _H`$, $`M=1/2\pi T\tau `$. If we first evaluate the sum for $`\alpha =0`$, we see that it will be dominated by large $`m`$, so we replace the difference of digamma functions by $`\mathrm{ln}m`$, and the sum over $`m`$ by an integral to get $$P(0)=-\frac{g}{4\pi ^2D}\left[\frac{1}{3}\mathrm{ln}^3\left(\frac{1}{2\pi T_c\tau }\right)+\frac{1}{2}\mathrm{ln}^2\left(\frac{1}{2\pi T_c\tau }\right)\right].$$ (18) To evaluate the result for finite $`\alpha `$ we subtract off the $`\alpha =0`$ result and try to analytically approximate the difference. Again we expect the sum to be dominated by large $`m`$, so we can ignore the $`\alpha `$ in $`\psi (1/2+m+\alpha )`$ and $`(m+2\alpha )`$, and ignore the term proportional to $`\alpha `$ as it is less divergent.
The difference then has the form $$P(\alpha )-P(0)=-\frac{g}{4\pi ^2D}\sum _{m=1}^M\left[\psi \left(\frac{1}{2}\right)-\psi \left(\frac{1}{2}+\alpha \right)\right]\left\{\frac{1}{m}+\frac{2}{m}\mathrm{ln}\left(\frac{M}{m}\right)\right\}$$ (19) $$=-\frac{g}{4\pi ^2D}\left[\psi \left(\frac{1}{2}\right)-\psi \left(\frac{1}{2}+\alpha \right)\right]\left\{\mathrm{ln}\left(\frac{1}{2\pi T_c\tau }\right)+\mathrm{ln}^2\left(\frac{1}{2\pi T_c\tau }\right)\right\}$$ (20) From our knowledge of the calculation which includes the full screened Coulomb interaction we know that we should set $`g=1/2`$, from which it follows that $`g/4\pi ^2N(0)D=t`$. Putting Eqns. (19) and (18) into Eqn. (9) yields the MEF formula cited in Eqn. (4). It turns out that this approximation, although apparently reasonable, is not justified. If we plot the values of $`P(\alpha )`$ calculated by performing the sums directly, we find that they do not agree with Eqn. (4). Consequently the $`H_{c2}(T)`$ curves predicted by the analytic approximation and the exact sum are also different. On the left side of Fig. 3 we display $`H_{c2}(T)`$ predicted by the MEF result of Eqn. (4) for several values of the sheet resistance $`R_\square `$ in the case where $`\mathrm{ln}(1/2\pi T_{c0}\tau )=6`$. We clearly see the positive curvature at low temperature, and the result that the curves all terminate at finite $`T`$ due to re-entrance problems. On the right side of Fig. 3 we plot $`H_{c2}(T)`$ deduced directly from the first-order perturbation theory result of Eqn. (16). We note that since the answers we get depend upon the logarithm of the upper cut-off, if we treat the upper cut-off differently, we will get slightly different answers.
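The disagreement is easy to see numerically. The sketch below is our own check, not the authors' code: it evaluates the double Matsubara sum of Eqn. (17) term by term, dropping the overall $`g/4\pi ^2D`$ prefactor, and compares it with the analytic approximation assembled from Eqns. (18) and (20). The helper psi and the hard cut-off at $`M`$ are our choices.

```python
import math

def psi(x, h=1e-5):
    """Digamma via a central difference of math.lgamma (stdlib only)."""
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2.0 * h)

def pair_sum_exact(alpha, M):
    """Direct evaluation of the m-sum of Eqn. (17),
    without the overall prefactor."""
    total = 0.0
    for m in range(1, M + 1):
        dig = psi(0.5 + m + alpha) - psi(0.5 + alpha)
        total += dig * (1.0 / m
                        + 2.0 / (m + 2.0 * alpha)
                          * (math.log(M / m) + alpha / m))
    return total

def pair_sum_mef(alpha, M):
    """The asymptotic (MEF) approximation to the same quantity,
    assembled from Eqns. (18) and (20) with ln(1/(2 pi T tau)) = ln M."""
    L = math.log(M)
    return L**3 / 3.0 + L**2 / 2.0 + (psi(0.5) - psi(0.5 + alpha)) * (L + L**2)
```

Already at $`\alpha =0`$ the directly evaluated sum exceeds the asymptotic form substantially at realistic cut-offs, which is the sense in which the approximation, "although apparently reasonable, is not justified."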
However, this difference will always be lost in fitting to experiment since we determine the unknown upper cut-off parameter from the initial slope of the data. The reason we mention this is that the exact sums can be performed in two slightly different ways: we can have the sum over $`ϵ_l`$ and $`\omega _m`$, or the sum over $`ϵ_l`$ and $`ϵ_n`$, the cut-offs in each case being taken at $`1/\tau `$. The results are slightly different due to different treatment of the upper cut-off. However in both cases we find no positive curvature in $`H_{c2}(T)`$. There is still the problem that the curves finish at finite $`T`$ because of re-entrance, and this tells us that the ad hoc prescription being used to go beyond first-order perturbation theory is still inadequate. But again we stress the key point of this section: even within the ad hoc extension of first-order perturbation theory, we do not get positive curvature in $`H_{c2}(T)`$ if we do the Matsubara sums exactly. ## III Oreg and Finkel’stein’s Resummation Technique In this section we answer the question of what is the correct way to go beyond first-order perturbation theory. To do this, we extend a resummation technique recently developed by Oreg and Finkel’stein (OF). This involves calculating the pair scattering amplitude, $`\mathrm{\Gamma }_c(ϵ_n,ϵ_l)`$, and identifying $`T_c`$ as the temperature at which it first diverges. The ladder summation involved is shown diagrammatically in Fig. 4. This is actually an extended version of the OF approach because as well as considering diagrams 3 and 4 of Fig. 2, which correspond to an effective Coulomb pseudopotential, and end up in the block $`t\mathrm{\Lambda }`$, we also consider diagrams 1 and 2 of Fig. 2, which correspond to an effective self-energy, and end up in the block $`\mathrm{\Sigma }`$ which renormalises the Cooperon impurity ladder.
In fact we will consider both versions of the summation technique: the simpler one involves using the bare Cooperon $`C_0`$ rather than the dressed one $`C`$. OF demonstrate that the matrix equation they obtain in 2D can be approximated by a parquet-like differential equation that turns out to be identical to that obtained from renormalization group analysis. Instead of generating a continuous approximation to the matrix equation, we simply solve the equation and change the temperature (which affects the upper cut-off $`M=1/2\pi T\tau `$) until we find the first zero eigenvalue of the matrix, at which point it has become singular, and we have reached $`T_c`$. Numerically this involves diagonalizing matrices of rank less than 2000 or so. Let us now proceed to the calculation of the pair amplitude matrix $`\mathrm{\Gamma }`$. If we evaluate the diagrams of Fig. 4, but ignore the magnetic field and self-energy correction to the Cooperon, we obtain the same equation as OF, $$\mathrm{\Gamma }(ϵ_n,ϵ_l)=-|\gamma |+t\mathrm{\Lambda }(ϵ_n,ϵ_l)-\pi T\sum _{m=-(M+1)}^M\left[-|\gamma |+t\mathrm{\Lambda }(ϵ_n,ϵ_m)\right]\frac{1}{|ϵ_m|}\mathrm{\Gamma }(ϵ_m,ϵ_l)$$ (21) except that we have explicitly kept both positive and negative Matsubara frequencies because the expression for $`\mathrm{\Lambda }(ϵ_l,ϵ_n)`$ will turn out to depend upon whether its two Matsubara frequencies have the same or opposite sign. This is a reflection of the breaking of time-reversal invariance caused by the magnetic field. Let us first see how to solve the above equation, and then later put in the magnetic field, and the correction to the Cooperon.
As a matrix equation, it takes the form $$\widehat{\mathrm{\Gamma }}=-|\gamma |\widehat{1}+t\widehat{\mathrm{\Lambda }}-\frac{1}{2}[-|\gamma |\widehat{1}+t\widehat{\mathrm{\Lambda }}]\widehat{ϵ}^{-1}\widehat{\mathrm{\Gamma }}$$ (22) where $`\widehat{\mathrm{\Gamma }}_{nm}=\mathrm{\Gamma }(ϵ_n,ϵ_m)`$, $`\widehat{1}_{nm}=1`$, $`\widehat{\mathrm{\Lambda }}_{nm}=\mathrm{\Lambda }(ϵ_n,ϵ_m)`$ and $`\widehat{ϵ}_{nm}=(n+1/2)\delta _{nm}`$. This has the solution $$\widehat{\mathrm{\Gamma }}=\widehat{ϵ}^{1/2}(\widehat{I}-|\gamma |\widehat{\mathrm{\Pi }})^{-1}\widehat{ϵ}^{-1/2}(-|\gamma |\widehat{1}+t\widehat{\mathrm{\Lambda }})$$ (23) where $$\widehat{\mathrm{\Pi }}=\frac{1}{2}\widehat{ϵ}^{-1/2}[\widehat{1}-|\gamma |^{-1}t\widehat{\mathrm{\Lambda }}]\widehat{ϵ}^{-1/2}$$ (24) and $`\widehat{I}_{nm}=\delta _{nm}`$ is the identity matrix. It follows that when the matrix $`\widehat{\mathrm{\Pi }}`$ has an eigenvalue equal to $`1/|\gamma |`$, the pair amplitude diverges, and we have found $`T_c`$. Our approach is therefore to start at the mean field transition temperature, $`T_{c0}`$, and decrease temperature until one of the eigenvalues of $`(\widehat{I}-|\gamma |\widehat{\mathrm{\Pi }})`$ changes sign. The matrix $`\widehat{\mathrm{\Pi }}(T)`$ depends upon $`T`$ both through the dependence of $`\widehat{\mathrm{\Lambda }}`$ upon $`T`$ and through its rank $`2M`$, where $`M=(2\pi T\tau )^{-1}`$. We start at the mean-field value of $`M`$, which we will call $`M_0`$, and decrease the temperature by increasing $`M`$ successively by one. We diagonalize the matrix $`\widehat{\mathrm{\Pi }}`$ for each value of $`M`$ until an eigenvalue changes sign. At this point we have found $`T_c`$ for the given problem, and $`T_c/T_{c0}=M_0/M`$. We can then change a parameter such as $`t`$ or $`\alpha `$ and repeat the procedure. This removes the need for any perturbative expansion. Let us now put back in the magnetic field and Cooperon self-energy corrections. The $`|ϵ_m|`$ denominator in the Eqn.
(21) comes from the Cooperon $$C_0(ϵ_m)=\frac{1}{2\pi N(0)\tau ^2}\frac{1}{Dq^2+2|ϵ_m|}$$ (25) and in the presence of a magnetic field, $`|ϵ_m|`$ is replaced by $`|ϵ_m|+1/\tau _H`$, and hence $`\widehat{ϵ}_{nm}=(n+1/2+\alpha )\delta _{nm}`$, where $`\alpha =(2\pi T\tau _H)^{-1}`$. The contributions to $`t\mathrm{\Lambda }`$ from the diagrams of Fig. 4 which correspond to diagrams 3 and 4 of Fig. 2 are given by $$t\mathrm{\Lambda }_3(ϵ_n,ϵ_m)=\frac{g}{\pi N(0)}\sum _{q^{\prime }}\frac{Dq^{\prime 2}+Dq^2+|ϵ_n|+|ϵ_m|}{[Dq^{\prime 2}+|ϵ_n|+|ϵ_m|]^2}\theta (-ϵ_nϵ_m)$$ (26) $$t\mathrm{\Lambda }_4(ϵ_n,ϵ_m)=\frac{g}{\pi N(0)}\sum _{q^{\prime }}\frac{1}{[Dq^{\prime 2}+|ϵ_n|+|ϵ_m|]}\theta (ϵ_nϵ_m)$$ (27) and so performing the $`q^{\prime }`$-sum and setting $`Dq^2\rightarrow 2/\tau _H`$ gives $$t\mathrm{\Lambda }(ϵ_n,ϵ_m)=\frac{g}{4\pi ^2N(0)D}\{\begin{array}{cc}\mathrm{ln}\left[\frac{1}{(|ϵ_n|+|ϵ_m|)\tau }\right]\hfill & \text{ }ϵ_nϵ_m>0\hfill \\ \mathrm{ln}\left[\frac{1}{(|ϵ_n|+|ϵ_m|)\tau }\right]+\frac{2}{(|ϵ_n|+|ϵ_m|)\tau _H}\hfill & \text{ }ϵ_nϵ_m<0\hfill \end{array}$$ (28) It follows that the matrix elements of $`\widehat{\mathrm{\Lambda }}`$ are $$\mathrm{\Lambda }_{nm}=\{\begin{array}{cc}\mathrm{ln}\left(\frac{M}{n+m+1}\right)\hfill & \text{ }ϵ_nϵ_m>0\hfill \\ \mathrm{ln}\left(\frac{M}{n+m+1}\right)+\frac{2\alpha }{(n+m+1)}\hfill & \text{ }ϵ_nϵ_m<0\hfill \end{array}$$ (29) To include the self-energy correction into the Cooperon, we note that the corrected Cooperon is given by $$C=[C_0^{-1}-\mathrm{\Sigma }]^{-1}=\frac{1}{2\pi N(0)\tau ^2}\left[Dq^2+2|ϵ_m|-\frac{1}{2\pi N(0)\tau ^2}\mathrm{\Sigma }\right]^{-1}$$ (30) so that we need to absorb a factor $`1/2\pi N(0)\tau ^2`$ into $`\mathrm{\Sigma }`$. The contributions from the diagrams of Fig. 4 which correspond to diagrams 1 and 2 of Fig.
2 are $$\mathrm{\Sigma }_1(ϵ_n)=-\frac{2g}{N(0)}T\sum _{ϵ_m}\sum _{q^{\prime }}\left[\frac{Dq^{\prime 2}+Dq^2+3|ϵ_n|+|ϵ_m|}{(Dq^{\prime 2}+|ϵ_n|+|ϵ_m|)^2}\right]\theta (ϵ_nϵ_m)$$ (31) $$\mathrm{\Sigma }_2(ϵ_n)=\frac{2g}{N(0)}T\sum _{ϵ_m}\sum _{q^{\prime }}\frac{1}{(Dq^{\prime 2}+|ϵ_n|+|ϵ_m|)}\theta (ϵ_nϵ_m)$$ (32) which partially cancel to give the result $$\mathrm{\Sigma }(ϵ_n)=-\frac{g}{2\pi N(0)D}T\sum _m\frac{[Dq^2+2|ϵ_n|]}{(|ϵ_n|+|ϵ_m|)}\theta (ϵ_nϵ_m)=-\frac{g}{4\pi ^2N(0)D}\left(\sum _{k=n+1}^M\frac{1}{k}\right)[Dq^2+2|ϵ_n|]$$ (33) The weak-localization contribution to the Cooperon self-energy is given by $$\mathrm{\Sigma }_{WL}(ϵ_n)=\frac{1}{2\pi N(0)}\sum _{q^{\prime }}\frac{2D(q^2+q^{\prime 2})+4|ϵ_n|}{Dq^{\prime 2}+2|ϵ_n|}=\frac{1}{8\pi ^2N(0)D}\mathrm{ln}\left(\frac{1}{2|ϵ_n|\tau }\right)[Dq^2]$$ (34) Incorporating $`\mathrm{\Sigma }`$ into the Cooperon means that $`\widehat{ϵ}`$ has elements $$\widehat{ϵ}_{nm}=\left\{\left(n+\frac{1}{2}+\alpha \right)\left[1+t\sum _{k=n+1}^M\frac{1}{k}\right]-2\alpha t\mathrm{ln}\left(\frac{M}{n+1/2}\right)\right\}\delta _{nm}$$ (35) Using the new formulas for $`\widehat{ϵ}`$ and $`\widehat{\mathrm{\Lambda }}`$, we now plot $`H_{c2}(T)`$ for various values of $`t=R_\square /R_0`$. The results are shown in Fig. 5: the plot on the left does not include self-energy corrections; the plot on the right does. The results are very similar and show no sign of upward curvature in $`H_{c2}(T)`$. We note that there are no re-entrance problems in our calculation of $`T_c`$: we can plot the curves down to as low a temperature as we like if we are prepared to diagonalize large enough matrices.
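The scan over $`M`$ just described is easy to prototype. The sketch below is a deliberately small toy version, written by us for illustration only: it uses the simpler scheme (bare Cooperon, no self-energy block $`\mathrm{\Sigma }`$), holds the pair-breaking parameter $`\alpha `$ fixed instead of letting it scale with temperature, detects the singularity through the sign of det$`(\widehat{I}-|\gamma |\widehat{\mathrm{\Pi }})`$ rather than by full diagonalization, and assumes the sign convention in which the Coulomb block $`t\mathrm{\Lambda }`$ suppresses pairing. All names and parameter values are ours.

```python
import math

def det_sign(a):
    """Sign of det(a) via Gaussian elimination with partial pivoting."""
    n = len(a)
    a = [row[:] for row in a]
    sign = 1
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        if abs(a[p][k]) < 1e-300:
            return 0
        if p != k:
            a[k], a[p] = a[p], a[k]
            sign = -sign
        if a[k][k] < 0.0:
            sign = -sign
        for i in range(k + 1, n):
            f = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= f * a[k][j]
    return sign

def build_matrix(M, gamma, t, alpha):
    """I - |gamma| Pi for rank 2M (both Matsubara signs), toy version."""
    idx = list(range(M)) * 2            # |epsilon| label: n -> n + 1/2
    sgn = [1] * M + [-1] * M
    eps = [n + 0.5 + alpha for n in idx]
    A = [[0.0] * (2 * M) for _ in range(2 * M)]
    for i in range(2 * M):
        for j in range(2 * M):
            lam = math.log(M / (idx[i] + idx[j] + 1))
            if sgn[i] != sgn[j]:        # field term for opposite signs
                lam += 2.0 * alpha / (idx[i] + idx[j] + 1)
            A[i][j] = (1.0 if i == j else 0.0) \
                      - 0.5 * (gamma - t * lam) / math.sqrt(eps[i] * eps[j])
    return A

def tc_ratio(gamma, t, alpha, m_max=300):
    """Increase M (lower T) from the mean-field M0 until the
    determinant changes sign; then Tc/Tc0 = M0/M."""
    s, M0 = 0.0, 0
    while s < 1.0 / gamma:              # BCS-like condition fixes M0
        s += 1.0 / (M0 + 0.5)
        M0 += 1
    for M in range(M0, m_max):
        if det_sign(build_matrix(M, gamma, t, alpha)) < 0:
            return M0 / M
    return None
```

With these toy parameters the suppression of $`T_c/T_{c0}`$ by the Coulomb term and by the field is visible already for matrices of rank below about 20; the production calculation in the text instead diagonalizes matrices of rank up to about 2000 and lets $`\alpha `$ track the temperature.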
## IV Discussion and Conclusions The central message of the paper is that the theory of localization and interaction does not predict an anomalous positive curvature in the upper critical field $`H_{c2}(T)`$ of thin film superconductors. A subsidiary message is that the resummation method developed by Oreg and Finkel’stein is a very powerful and adaptable tool for going beyond perturbation theory in disordered superconductors. We suspected that the positive curvature in $`H_{c2}(T)`$ was an artefact related to the re-entrance problem found in $`T_c(R_\square )`$, and that it would not survive a systematic non-perturbative treatment. The latter is true, but the cause of the positive curvature artefact turns out to be an incorrect analytic approximation to sums of Matsubara frequencies. We will now consider the experimental situation by attempting to fit experimental data to both the MEF formula of Eqn. (4), and the exact first-order perturbation theory result of Eqn. (16). The data which fits best to the localization theory is that of Graybeal and Beasley on amorphous films of Mo-Ge, because values of $`T_c`$ for all films can be obtained by choosing a single value of the parameter $`\beta =\mathrm{ln}(1/2\pi T_{c0}\tau )`$. The fits in Fig. 6 used the parameter value $`\beta =7.17`$ for the MEF curves and $`\beta =5.3`$ for the exact sum curves. Values of $`T_{c0}=7.2K`$ and $`H_{c0}=120`$kOe were taken directly from the data. The experimental data appears to fit better to the exact sum curves than the MEF curves, and there seems little sign of an upward curvature in $`H_{c2}(T)`$. When the same procedure is applied to the data of Okuma et al on Zn films, we find that we need to use different values of $`\beta `$ for the two films to get the correct $`T_c`$ for given $`R_\square `$. The plots in Fig.
7 use values $`\beta =7.5`$ for the $`400\mathrm{\Omega }`$ film and $`\beta =8.1`$ for the $`600\mathrm{\Omega }`$ film for the MEF formula; $`\beta =5.56`$ for the $`400\mathrm{\Omega }`$ film and $`\beta =6.1`$ for the $`600\mathrm{\Omega }`$ film for the exact sum formula. Both films have the same thickness of $`100\AA `$, but different resistances, and hence different diffusion constants $`D`$. The latter were calculated using the material parameters for Zn to get values of $`H_{c2}(T=0)`$ for zero-resistance films of the same composition equal to 5.08kOe for the $`400\mathrm{\Omega }`$ film and 7.62kOe for $`600\mathrm{\Omega }`$ film. $`T_{c0}`$ was taken to be $`1K`$. Again the data seems to fit better to the exact sum curves than the MEF formula. To fit the data of Hebard and Paalanen on amorphous In-InO<sub>x</sub> films again requires a different value of $`\beta `$ for each film. The plots in Fig. 8 use $`\beta =3.5`$ for the $`2250\mathrm{\Omega }`$ film and $`\beta =4.0`$ for the $`2900\mathrm{\Omega }`$ and $`3300\mathrm{\Omega }`$ films for the MEF curves; $`\beta =1.6`$ for $`2250\mathrm{\Omega }`$ film and $`\beta =2.0`$ for the $`2900\mathrm{\Omega }`$ and $`3300\mathrm{\Omega }`$ films for the exact sum curves. The value of $`T_{c0}=3.6K`$ quoted in the paper was used, and a value of $`H_{c2}(0)=100kOe`$ was estimated from the experimental plots. There is a clear upturn in $`H_{c2}(T)`$ at low $`T`$, but it is not explained by the MEF curves which are effectively all upturn at these parameter values. Again the fit to exact sum curves seems better. In conclusion it seems that the experimental curves fit better to the exact sum curves than the MEF curves. Any low temperature upturns in $`H_{c2}(T)`$ are not explained by the MEF formula, and presumably correspond to different physics. ACKNOWLEDGEMENTS We thank I. Aleiner, A. Clerk, A.M. Finkel’stein, I.V. Lerner and Y. Oreg for helpful discussions. 
RAS acknowledges the support of a Nuffield Foundation Award to Newly Appointed Lecturers in Science and Mathematics. BSH acknowledges the support of a UK EPSRC Graduate Studentship. VA is supported by the US National Science Foundation under grant DMR-9805613.
# Spectrophotometry of HII Regions, Diffuse Ionized Gas and Supernova Remnants in M31: The Transition from Photo- to Shock-Ionization ## 1 Introduction Our nearest large galaxy neighbor, M31, provides us with an excellent opportunity to study the star-formation process and the properties of the interstellar medium in great detail. This Sb spiral is interesting in that it has a low star formation rate, about 0.2–0.5 M<sub>⊙</sub>/yr (Walterbos 1988), relative to typical Sb galaxies, which can have star formation rates up to 4 M<sub>⊙</sub>/yr (Kennicutt, Tamblyn, & Congdon 1994). M31’s proximity<sup>1</sup><sup>1</sup>1Freedman & Madore 1990 determined a distance for M31 of ≈ 750 kpc based on Cepheid observations; for this work, however, we adopt the traditional value D ≈ 690 kpc, consistent with the WB92 catalog. has inspired numerous studies of its emission-line nebulae both in imaging and spectroscopic mode (e.g. Pellet et al. 1978; Blair, Kirshner, & Chevalier 1981; Dennefeld & Kunth 1981; Blair, Kirshner, & Chevalier 1982; Dopita et al. 1984a; Walterbos & Braun 1992; Meyssonnier, Lequeux, & Azzopardi 1993; Magnier et al. 1995). Due to the large angular size of M31, the spatial coverage of the spectroscopic observations has necessarily been rather limited. An H$`\alpha `$ and \[SII\] 6716,31$`\mathrm{\AA }`$ imaging survey of most of the northeast half of M31 resulted in a catalog of 958 gaseous nebulae (Walterbos & Braun 1992, hereafter WB92). The sensitive survey (rms noise ≈ 1.1 × 10<sup>-17</sup> erg cm<sup>-2</sup> sec<sup>-1</sup> pix<sup>-1</sup>) also allowed Walterbos & Braun (1994) to study the diffuse ionized gas (DIG) in this galaxy. This faint component of the ISM, which permeates the disk and can contribute up to 50% of the total H$`\alpha `$ luminosity of spiral galaxies (see also e.g. Ferguson et al. 1996; Hoopes et al. 1996), remains ill-understood because the ionizing source has not been well constrained.
The proximity of M31 facilitates the isolation of gaseous nebulae from the DIG, allowing a direct comparison of their properties. The WB92 survey also yielded a set of 52 supernova remnant (SNR) candidates, presented in Braun & Walterbos (1993, hereafter BW93), which are generally fainter than the candidates in other M31 surveys such as Blair, Kirshner, & Chevalier 1981 and Magnier et al. 1995. To confirm the BW93 SNR candidates, and to further study and compare the spectral properties of HII regions and DIG in M31, we chose several targets for spectroscopic observation. This paper presents the optical spectrophotometric results of a sample of 44 distinct HII regions, 18 SNR candidates (two of which turned out to be HII regions), and numerous regions of DIG. Greenawalt, Walterbos, & Braun (1997) discussed the global spectral characteristics of the DIG in M31 based on averaged spectra obtained from the same data discussed here. In this paper, we will compare spectra of individual regions of DIG with HII region and SNR spectra. Details about the observations and data reduction are given in §2. We discuss the extinction corrections in §3. Next we consider the nebular conditions and radial trends in various line ratios for our sample of HII regions in §4. In §5, we discuss the confirmation and radial spectral line variations of the SNRs. This section concludes with a summary of what we can determine about the radial abundance gradient in M31 from the HII region and SNR spectra. It will become clear that while abundance trends are definitely present, line ratio variations due to excitation effects in HII regions and variations in shock conditions in SNRs contribute substantial scatter to the line ratios. We therefore conclude with a comparison of the emission-line properties of HII regions, DIG and SNRs in §6, to shed further light on the ionization mechanism for the DIG. A summary of our results is presented in §7. 
## 2 Observations and Data Reduction A complete description of the observations and data reduction has already been provided elsewhere (Greenawalt, Walterbos, & Braun 1997); we briefly summarize the important aspects here. Our long-slit spectra were obtained with the RC spectrograph on the Mayall 4m telescope at Kitt Peak National Observatory in November 1991. The dispersion of the KPC-10A grating we used is 2.77$`\mathrm{\AA }`$/pixel, which provides a spectral resolution of 6$`\mathrm{\AA }`$. The total spectral coverage was 3550$`\mathrm{\AA }`$ to 6850$`\mathrm{\AA }`$ with a large overlap region 4000-6400$`\mathrm{\AA }`$. The dimensions of the slit were 2” × 5.4’, with a spatial scale of 0.69”/pixel. Sixteen different slit positions were obtained on the NE half of M31, concentrated in the three spiral arms between 5 kpc and 15 kpc from the center of the galaxy, with special attention to the annulus of high star-formation activity near 10 kpc. The slit positions on the sky were carefully chosen to obtain at least one HII region, one SNR candidate, and DIG in one slit. Two 15-minute observations were taken at each position with an observation of a nearby standard star in between. These standards aided in the flux-calibration of the object spectra, which was conducted using the standard IRAF<sup>2</sup><sup>2</sup>2IRAF is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. package techniques.
Table 1 provides the following details on the observations: 2D spectrum name which corresponds to the WB92 identification for the most prominent source in the slit (column 1); epoch 2000 right ascension and declination for slit center (columns 2 and 3); slit position angle on the sky, chosen in each case so as not to compromise the parallactic angle (column 4); image field from WB92 where the slit was positioned (column 5); and all WB92 catalog objects for which 1D spectra were extracted (column 6). The extraction of one-dimensional spectra from the two-dimensional images proceeded as follows. All discrete objects on the 2D spectra were identified on the H$`\alpha `$ images of WB92. The object boundaries were determined from a comparison of the H$`\alpha `$ emission-line profile along the slit and a cut across the H$`\alpha `$ images which most closely corresponds to the position of the slit on the sky. This was not always a straightforward task because of the difference in spatial resolution between the spectra and the emission-line images, but close matches were obtained after some trial and error. Sky apertures were identified as close as possible on either side of the object apertures and often included the faint emission from nearby DIG. Background sky levels were determined with a first or second order fit in IRAF and subtracted from each object spectrum. The spectra for 50 separate regions of DIG were identified in a similar manner, using apparent morphology and an upper cutoff in emission measure; all DIG extractions have a surface brightness ≤ 100 pc cm<sup>-6</sup>. The DIG spectrum apertures vary from 10 pixels up to 60 pixels. Many of the large objects in our sample show significant internal structure. We have extracted separate spectra for each of these structures using the H$`\alpha `$ emission variations as indicators of boundaries. The first spectrum extracted for each nebula contains light from across the entire object in the slit.
Subapertures were identified and labelled from left to right across the slit independent of position angle. All the apertures and subapertures are labelled in Fig 1a and Fig 1b. Each structure was classified based on the apparent morphology and light profile in the H$`\alpha `$ images. We have adopted the classification scheme for the HII regions as published in WB92 with only a few slight variations described further in the Appendix. Emission-line fluxes were determined by fitting Gaussians to the line profiles with the IRAF routine SPLOT. Uncertainties were estimated by measuring the rms noise on either side of the emission features with an additional 3% flux calibration error based on the spectra of the standard stars. The fluxes of the emission-lines measured in the overlap region between the blue and red spectra (4000-6400$`\mathrm{\AA }`$) were averaged together. The lines included in this overlap region are H$`\gamma `$, H$`\beta `$, \[OIII\]4959$`\mathrm{\AA }`$, 5007$`\mathrm{\AA }`$, and \[OI\]6300$`\mathrm{\AA }`$. ## 3 Extinction Corrections The emission lines of HII regions and SNR were corrected for interstellar reddening using the Balmer decrement derived from the observed H$`\alpha `$ to H$`\beta `$ line flux ratio for each spectrum assuming a standard Galactic extinction curve (Savage & Mathis 1979). The extinction in the faint extended DIG regions was addressed carefully by Greenawalt, Walterbos & Braun (1997). They concluded that assuming a well-mixed dust layer rather than a foreground dust layer produced a difference of less than 10% in the line flux ratios for the average observed Balmer decrement, and we will use the foreground screen model here for the DIG as well as for the HII regions and SNRs. We have assumed the same reddening correction law for the DIG and the HII regions. 
The intrinsic value of H$`\alpha `$/H$`\beta `$ was assumed to be 2.86 for the HII regions and DIG, based on Case B recombination and a gas T<sub>e</sub> $``$ 10,000K (Osterbrock 1989). The SNR H$`\alpha `$/H$`\beta `$ was taken to be 3.00, the collisional value consistent with shock models of Raymond (1979) and Shull & McKee (1979). The extinction and other properties of the nebulae are presented in Table 2 and Table 3 for HII regions and SNRs, respectively. Column 3 lists the galactocentric radius for each nebula published in WB92. This radial distance is based on the spiral arm most likely associated with the object as determined by the kinematical model of Braun (1991). Columns (4,5) list the observed H$`\alpha `$/H$`\beta `$ line ratio and the extinction in magnitudes, A(V). These parameters were used to correct each individual spectrum for interstellar reddening. The dispersion of A(V) within an HII region can provide important information about the distribution of dust and gas within the nebula. It has been shown that the density within a single HII region is not homogeneous (e.g. Kennicutt 1984). M31 is close enough that our spectra can be used to investigate variations in A(V) across large HII region complexes. By analyzing emission from a cross-cut of an entire HII complex, we are no longer biased by selecting only the bright, hence possibly less extincted sections of the HII region. We can thus investigate if single extinction values derived for more distant extra-galactic HII regions could be in error due to significant extinction variations across the complexes. In a recent study of HII regions in M33 and M101, Petersen & Gammelgaard (1997) found that the variation of A(V) within large HII region complexes was typically 0.3 magnitudes or less and that the extinction values usually increased toward the edges of the nebulae. We conducted a similar exercise for the largest HII regions in our sample. 
Fig 2 shows the A(V) variation within HII region complexes with more than 3 aperture extractions. The extinction obtained from each subaperture is plotted along with the H$`\alpha `$ profile of the entire complex within the slit. We note several points. Not surprisingly, the extinction measured for the brightest region of a complex tends to be close to the derived average extinction for that complex, a consequence of the luminosity weighting that occurs in deriving an average extinction. However, the brightest regions in a complex are not necessarily the ones with the least extinction (e.g. K703, K525). This is important, because if they were, then the derived average extinctions for complexes would be systematically underestimated as a result of this luminosity weighting. Typical variations in extinction across a complex amount to $`\pm 0.5`$ mag. While this is not negligible, this variation is less than that observed in the HII region population as a whole (panel g in Fig 2). There is a faint region in the complex K703 which apparently has significantly higher extinction than the rest of the complex (by 2 mag). In the present small sample, however, it is the only such region and it is therefore not clear how common such variations are. The peak in the distribution of A(V) (Fig 2g) for our sample is 0.8 magnitudes. The observed average foreground extinction toward M31 is about 0.25 V mag (Burstein & Heiles 1984, Walterbos & Schwering 1987). The excess must be due to internal extinction in the disk of M31. The peak in our distribution is only slightly lower than the peak of the whole WB92 catalog of 958 nebulae, which was near 1.0 mag. The extinction corrections in WB92 were determined by estimating the amount of atomic hydrogen gas present in front of each object. Our extinction values, based on Balmer decrements obtained from spectra, are much more accurate.
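The Balmer-decrement conversion used above amounts to a one-line formula. In the sketch below (our own illustration, not code from the paper), the extinction-curve values k(Hα) = 2.53 and k(Hβ) = 3.61 and the ratio R<sub>V</sub> = A(V)/E(B−V) = 3.1 are assumed numbers appropriate to a standard Galactic extinction curve of the Savage & Mathis type; they are not quoted from the text.

```python
import math

# Assumed extinction-curve coefficients, A(lambda)/E(B-V), for a
# standard Galactic curve (values close to Savage & Mathis 1979).
K_HA, K_HB = 2.53, 3.61
R_V = 3.1                      # A(V)/E(B-V), assumed

def a_v(ratio_obs, ratio_int=2.86):
    """A(V) in magnitudes from an observed Halpha/Hbeta flux ratio,
    assuming a foreground dust screen and an intrinsic Case B
    ratio of 2.86 (use ratio_int=3.00 for shock-heated SNR gas)."""
    ebv = 2.5 / (K_HB - K_HA) * math.log10(ratio_obs / ratio_int)
    return R_V * ebv
```

For SNR spectra one would pass ratio_int=3.00, the collisional value adopted in the text; an observed decrement near 3.9 then corresponds to roughly one magnitude of visual extinction.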
While there is overall agreement in the measured range of extinctions derived here and in WB92, the agreement for individual objects is poor, which is not surprising given that the WB92 method only provides a statistical estimate of the extinctions. We find no clear trends in A(V) with observed H$`\alpha `$ intensity, but we do observe the highest measured extinctions in the annulus of vigorous star-formation between 8 and 12 kpc, where the gas distributions also peak (e.g. Walterbos 1988). ## 4 HII Regions We have spectra for 46 separate HII regions from the WB92 catalog of the NE half of M31; this includes two spectroscopically rejected SNR candidates which have been reclassified as faint HII regions. The spatial distribution of these nebulae is concentrated in the high star-formation 10 kpc spiral arm, with the fewest objects observed at the inner 5 kpc ring. This makes the study of radial trends, including the determination of the chemical abundance gradient, more uncertain. However, we have observed some correlations between line ratios and position in the galaxy as well as spectral differences between nebulae of different morphologies. These are presented in §4.2, while the next section discusses the morphological classification of the HII regions. ### 4.1 H$`\alpha `$ Morphology The general properties of our HII region sample are shown in Fig 3. Average emission measures for each object were determined from the reddening corrected total H$`\alpha `$ fluxes for each nebula using the spectroscopically derived A(V) and the sizes given in WB92. The nebular diameters are the average of D<sub>major</sub> and D<sub>minor</sub> quoted in WB92. The average electron density, n<sub>e,rms</sub>, is derived from the corrected H$`\alpha `$ emission measure assuming unit filling factor (E.M. = $`\int n_e^2\,dl`$). 
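With unit filling factor and a constant density along the line of sight, the rms density follows directly from the emission measure and the path length. A minimal sketch (EM in pc cm<sup>-6</sup>, path length in pc; the function name is illustrative):

```python
import math

def ne_rms(emission_measure, path_length_pc):
    """rms electron density (cm^-3) from the H-alpha emission measure,
    assuming unit filling factor so that EM = n_e^2 * L."""
    return math.sqrt(emission_measure / path_length_pc)

# e.g. EM = 2500 pc cm^-6 through a 100 pc nebula gives n_e,rms = 5 cm^-3
print(ne_rms(2500.0, 100.0))
```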
It was also assumed that the HII regions are spherically symmetric on average such that the path length through the nebula along the line of sight is the same as the apparent linear size. It has been noted (Kennicutt 1984) that these are unrealistic assumptions; we merely want to demonstrate the range in properties in comparison to other HII region samples. The dotted lines in this plot represent lines of constant emission measure. The point types used in Fig 3 distinguish between the morphological classifications given in the WB92 catalog: center-brightened compact (shown as triangles), diffuse (five-pointed stars) and rings (open circles). We have also included the large HII region complexes in these plots, shown as open squares. The plot shows that a rather smooth trend exists from the lower surface brightness extended objects to the compact sources. The distinction between the compact sources and the extended objects is much greater than between the diffuse, ring and HII region complexes. This separation also appears in the spectral properties of the objects, as is shown in the next section. ### 4.2 HII Region Emission-Lines and Nebular Conditions The most prominent HII region emission-line fluxes corrected for interstellar reddening are presented in Table 4 in the form of line ratios with respect to H$`\beta `$ or H$`\alpha `$. Columns (1,2) list the name of the object as designated in the WB92 catalog plus cross-references to Pellet et al. (1978) objects, and the aperture sequence label. The other columns list the extraction aperture width in arcseconds (column 3), the H$`\alpha `$ emission measure for the region within the spectrum corrected for interstellar reddening (column 4), and corrected line flux ratios: \[OII\]3727$`\mathrm{\AA }`$/H$`\beta `$ (column 5); \[OIII\]5007$`\mathrm{\AA }`$/H$`\beta `$ (column 6); \[NII\]6548,83$`\mathrm{\AA }`$/H$`\alpha `$ (column 7); and \[SII\]6717,31$`\mathrm{\AA }`$/H$`\alpha `$ (column 8). 
The assigned morphology is listed in the final column of Table 4. We have used a similar classification for the subapertures as that used for the discrete nebulae in WB92 with a few modifications; see the Appendix for a complete description. Faint emission lines rarely found in M31 HII regions were also detected in a few of our objects. These features include the \[NeIII\] forbidden lines and He I recombination lines. The extinction corrected fluxes for these lines are listed in Table 5. Electron temperatures and densities were derived from the appropriate optical emission line ratios using the Lick Observatory FIVELEV code (De Robertis, Dufour, & Hunt 1987) and recently updated atomic data. Columns (6,7) in Table 2 list the density sensitive \[SII\] line ratio and the derived n<sub>e</sub> (cm<sup>-3</sup>) for the HII regions. Most of our objects are in the low density limit and only two objects had marginally detectable \[OIII\]4363, required to determine T<sub>e</sub> (Osterbrock 1989). The electron temperatures were estimated at T<sub>e</sub> $`\sim `$ 16,500K $`\pm `$ 4,000K for K87 and T<sub>e</sub> $`\sim `$ 9,000K $`\pm `$ 1,000K for K932. A canonical HII region temperature of 10<sup>4</sup>K was assumed for all other objects. While the HII regions show emission-line ratios within the range predicted by photoionization models, we do observe some spatial variations in line flux even within the limited galactocentric radius sampled by our data. These variations are likely due to the chemical abundance gradient in the disk of the galaxy but may also reflect differences in the ionization level and excitation between nebulae at different radii. Due to the large inclination of the disk of M31, the galactocentric radii of most of our objects are uncertain. In order to study the effects of uncertain position in the galaxy, we have considered three radial distance determinations in studying the spectral variations. 
The first and least accurate distance (D<sub>1</sub>) is determined by assuming that M31 is an infinitely thin disk projected at an inclination angle of 77°; the radial distances are based on the x-y positions of the objects in this coordinate system. The M31 kinematical model of Braun (1991) provides two possible distance measurements (D<sub>2</sub>, D<sub>3</sub>), based on fitting spiral arms to the HI kinematics; WB92 lists these values as the position of the first and second most likely spiral arm associated with each source. As is apparent in all the radial plots that follow, line ratio trends with radius are, with some exceptions, marginal or not seen at all. The plots show a lot of scatter even at a single radius. While all the radial plots were created using each of the possible distance measurements, we have chosen to show only the plots based on D<sub>2</sub>, which show radial gradients with the least scatter. Some of the scatter in these plots is undoubtedly due to the uncertainty in the radial distance. However, much of the scatter is real and presents an important problem in the interpretation of radial gradients in emission-line ratios, as has been pointed out by Blair & Long (1997). For example, the scatter may be caused by internal variations in temperature, excitation or density. Most of the data points are based on small sections of nebulae that are already modest in luminosity to begin with. If the HII regions can be ionized by just a few massive stars, the presence or absence of just one bright O star can significantly affect the spectrum. These effects are discussed further in §6. A line was fit to the data points in each plot, using a weighted least squares method. The fits are shown only if the resulting slope differed significantly from 0, and if the correlation coefficient exceeded 0.5. 
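The line-fitting procedure just described can be sketched as follows. This is a minimal illustration using NumPy: the paper does not specify its weighting scheme or significance test, so 1/$`\sigma `$ weights and the |r| > 0.5 correlation cut quoted above are the only assumptions encoded here.

```python
import numpy as np

def fit_radial_gradient(radius, ratio, sigma):
    """Weighted least-squares line fit of a line ratio (or its log)
    versus galactocentric radius.  Returns (slope, intercept, r)."""
    slope, intercept = np.polyfit(radius, ratio, deg=1,
                                  w=1.0 / np.asarray(sigma))
    r = float(np.corrcoef(radius, ratio)[0, 1])
    return slope, intercept, r

# illustrative synthetic data: a -0.06 dex/kpc gradient over 5-15 kpc
radius = np.linspace(5.0, 15.0, 20)
ratio = -0.06 * radius + 9.0
slope, intercept, r = fit_radial_gradient(radius, ratio,
                                          np.full(radius.size, 0.02))
if abs(r) > 0.5:            # show the fit only if correlated, as in the text
    print(slope, intercept, r)
```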
The first set of radial diagrams shows the variation of three oxygen line ratios: \[OIII\]5007$`\mathrm{\AA }`$/H$`\beta `$ (Fig 4a-d), \[OII\]3727$`\mathrm{\AA }`$/\[OIII\]5007$`\mathrm{\AA }`$ (Fig 4e-h), and R<sub>23</sub> (Fig 4i-l), defined by Edmunds & Pagel (1984) as the ratio of the sum of \[OIII\]$`\lambda `$5007$`\mathrm{\AA }`$, \[OIII\]$`\lambda `$4959$`\mathrm{\AA }`$, and \[OII\]$`\lambda `$3727$`\mathrm{\AA }`$ with respect to H$`\beta `$. The size difference in the symbols used indicates whether the data are from a discrete object (large size) or a subaperture of an extended object or HII region complex (small size). This distinction is almost equivalent to a luminosity coding of the data points. The line ratio \[OIII\]5007$`\mathrm{\AA }`$/H$`\beta `$, the excitation parameter ($`\eta `$), essentially measures the hardness of the radiation field within the ionized nebula or, equivalently, the temperature of the central OB star(s). This line ratio is also sensitive to chemical composition (Searle 1971). An outward radial increase in \[OIII\]/H$`\beta `$ has been known to exist in disk galaxies for some time (Aller 1942; Searle 1971) and is commonly used as evidence for an oxygen abundance gradient in disk galaxies (Scowen, Dufour, & Hester 1992; Zaritsky, Hill, & Elston 1990). However, a strict empirical correlation between chemical abundance and the observed \[OIII\]5007$`\mathrm{\AA }`$/H$`\beta `$ line ratio is debatable since the line ratio only measures one ionization state of oxygen (Edmunds & Pagel 1984; Zaritsky 1992). For M31, the expected increase in excitation with galactocentric distance shows up only in the compact HII regions. The diffuse nebulae, ring segments and DIG do not reproduce this trend and appear only to scatter in the range covered by the compact sources. 
The high surface-brightness compact nebulae show a decrease in the line ratio \[OII\]/\[OIII\] with increasing distance from the center, as observed in HII regions in other galaxies. The extended objects and DIG again appear to show mostly scatter. While the \[OII\]/\[OIII\] ratio gives some information about the level of ionization in a nebula, it is also affected by metallicity. To properly study the ionization of HII regions, one would need to sample the multiple ionization states of another element, like sulfur, but this requires spectral coverage to the infrared (\[SIII\]9069$`\mathrm{\AA }`$ and \[SIII\]9532$`\mathrm{\AA }`$). The R<sub>23</sub> parameter has been suggested as a good probe of oxygen abundance in cases where the electron temperature, T<sub>e</sub>, cannot be determined directly from observations (Pagel et al. 1979). Several calibrations of this line ratio with oxygen abundance have been published (Pagel, Edmunds & Smith 1980; Edmunds & Pagel 1984; Vilchez 1989), including the use of a weighted sum of the \[OII\] and \[OIII\] lines (Binette et al. 1982). Radial trends in the R<sub>23</sub> ratio have been observed and studied extensively in many nearby galaxies (Evans 1986; Henry & Howard 1992; Garnett et al. 1997; Kennicutt & Garnett 1996). We find a similar trend in our M31 data, although it again only appears in the center-brightened nebulae. We studied the luminosity dependence of the oxygen emission-line ratios with position in the galaxy to see if the lack of radial trends and the large amount of scatter at any given radius could be explained by differences in the number of ionizing stars within the nebulae. Encoding the data points with the integrated luminosity of the nebula from which the aperture was extracted indeed strengthens the radial trends in the compact objects but has little effect on the scatter observed in the other objects. 
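The R<sub>23</sub> index used in these plots is a simple combination of the reddening-corrected fluxes. A minimal sketch (the argument names are illustrative; fluxes are assumed to be in the same units):

```python
import math

def r23(f_oii_3727, f_oiii_4959, f_oiii_5007, f_hbeta):
    """R23 = ([OII]3727 + [OIII]4959 + [OIII]5007) / Hbeta
    (Edmunds & Pagel 1984)."""
    return (f_oii_3727 + f_oiii_4959 + f_oiii_5007) / f_hbeta

# log R23 is the quantity usually plotted against radius and
# calibrated against oxygen abundance
print(math.log10(r23(3.0, 0.8, 2.4, 1.0)))
```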
The radial plots of \[NII\]6583$`\mathrm{\AA }`$ and \[SII\]6716,31$`\mathrm{\AA }`$ relative to H$`\alpha `$ are shown in Fig 5. There appears to be a decreasing trend towards the outer disk of M31 in the \[NII\]/H$`\alpha `$ ratio in the lower surface brightness objects only. We will come back to this in the discussion of the SNR spectra (§5.2 and §5.3). For the bright compact HII regions, no such trend is observed, possibly because a significant fraction of the nitrogen may be doubly ionized there. The radial distribution of H$`\alpha `$ surface brightness of the object sections for which we have obtained spectra is shown in Fig 5i-l. This figure shows that the observed emission-line ratio trends discussed above are not caused by a correlation between H$`\alpha `$ surface brightness and radial distance. We believe the observed trends in oxygen and nitrogen are the consequence of the chemical abundance gradient in M31 (see §5.3). ## 5 Supernova Remnants ### 5.1 Confirmation of Candidates One of the original goals of this project was to spectroscopically confirm the supernova remnant candidates from the BW93 catalog. This catalog consists of 52 SNR candidates which were identified by high \[SII\]/H$`\alpha `$ line ratios (\[SII\]/H$`\alpha `$ $`>`$ 0.45) in the narrow-band images published in WB92. Several of these candidates coincide with SNR candidate objects in other catalogs such as Blair, Kirshner, & Chevalier (1981, hereafter BKC81) and Magnier et al. (1995). A recent attempt has been made to confirm SNR candidates using the ROSAT HRI (Magnier et al. 1997). Less than 10% of the optically identified candidates were detected in the X-ray, however. This can be explained as the result of a very tenuous interstellar medium in the vicinity of SNR in M31, with typical densities less than 0.1 cm<sup>-3</sup>. 
Thus, it is quite difficult to confirm the candidates using X-ray emission alone and optical spectroscopy continues to be the necessary and preferred method of identifying these objects in M31. Table 6 lists the spectral line ratios for all SNR candidate spectra. Included in this table are the object name plus cross-reference to BKC81 and Blair, Kirshner & Chevalier (1982, hereafter BKC82) and aperture sequence label in Columns (1,2), aperture width in arcseconds in Column (3), and reddening corrected H$`\alpha `$ emission measure in Column (4). Columns (5-9) list the reddening corrected flux ratios of \[OII\]3727$`\mathrm{\AA }`$ and \[OIII\]5007$`\mathrm{\AA }`$ to H$`\beta `$ and \[OI\]6300$`\mathrm{\AA }`$, \[NII\]6548,83$`\mathrm{\AA }`$ and \[SII\]6716,31$`\mathrm{\AA }`$ to H$`\alpha `$. To address the candidacy of our objects, we first compared the observed spectroscopic \[SII\]/H$`\alpha `$ line ratio with the ratio obtained from the narrow-band line images for the exact object section captured in each spectrum. In all but a few rare cases, there is good agreement between the spectral ratio and image ratio. Bad pixels in the line images or in the 2D spectra images affected the agreement in a few cases; poor signal-to-noise was also a factor for a few candidates. The morphologies of the candidates studied here include discrete ring-like structures, faint diffuse nebulae and compact sources embedded in nebular complexes (indicated by an ’A’ in the object name). Most of the SNR extractions show enhanced \[OII\]3727$`\mathrm{\AA }`$ and \[OIII\]5007$`\mathrm{\AA }`$ emission, and a few of the objects show a bright \[OI\]6300$`\mathrm{\AA }`$ feature in their spectra. These factors taken together help in the confirmation of these candidates. In several cases, the gas surrounding embedded compact SNR candidates does not show enhanced \[SII\] emission relative to H$`\alpha `$. 
Two SNR candidates (K310 and K446) were rejected as supernova remnants due to low \[SII\]/H$`\alpha `$ and lack of bright \[OIII\] emission. It is suspected that these objects are in fact HII regions. SNR candidates of particular interest are described in the Appendix. ### 5.2 SNR Line Ratios and Nebular Conditions The brightness of optical emission lines in shocked gas depends on multiple factors: the electron density and electron temperature in the post-shock region, which are related to the velocity of the shock front; the density of the medium into which the shock is propagating; and the chemical abundance of elements in the swept up material, among other things. The modelling of these effects on the optical spectra of SNR has been carried out by Dopita, Mathewson & Ford (1977), Shull & McKee (1979), and Raymond (1979). Through a series of diagnostic diagrams based on the ratios of optical emission lines, it has been shown that the variations in the spectra of SNRs are due primarily to chemical abundance effects and only minimally to the shock conditions for shock velocities near and above 100 km s<sup>-1</sup> (Dopita et al. 1984a). However, the modelling of interstellar shocks is not straightforward due to the large number of variables involved, and the theoretical models do not always agree with observations. There are many complications due to the depletion of volatiles onto grains, the destruction of grains and even magnetic field effects. However, there are several basic line ratio diagnostics which can help probe the physical conditions of the shocks. BKC82 have used the variation in the observed line flux ratios of SNR to derive the abundance gradient in the disk of M31. The spatial distribution of our confirmed candidates is limited to the outer regions of the disk and we have no objects at galactocentric radii smaller than 8 kpc. Thus we cannot use our data to directly measure the abundance gradient in M31. 
However, the general radial trends found by BKC82 are fairly well reproduced with our new data. For comparison, we have plotted the BKC82 data points (in open squares) as well as our confirmed SNR data points (solid squares) in Fig 6. The agreement in the slopes and also in the scatter is good given the faintness of most of our remnants relative to those studied by BKC82. Fig 6a shows the radial variation of the excitation line ratio \[OIII\]5007$`\mathrm{\AA }`$/H$`\beta `$ for SNRs. Contrary to the trend found in the HII region spectra, the SNR spectra do not show a clear increase in excitation with distance from the center, though there is significant scatter in the ratio at any given position. Since the \[OIII\] emission arises from a region very close to the shock front, two competing factors govern the brightness of the \[OIII\] lines: the postshock temperature (equivalent to shock velocity) and the oxygen abundance (Dopita 1977). The ionization line ratio, \[OIII\]5007$`\mathrm{\AA }`$/\[OII\]3727$`\mathrm{\AA }`$, on the other hand, has been shown to be more sensitive to the postshock conditions and less sensitive to metallicity (Dopita 1977). Our data in Fig 6b show fair agreement with the BKC points, with more scatter, likely due to measurement error. These two emission line ratios likely do not indicate a chemical abundance gradient, but instead reflect a gradient in the SNR physical conditions across the galaxy. Our SNR data also show the correlation between \[OII\] and \[OIII\] emission found by BKC82 (Fig 6c). The curves plotted here are shock model predictions for metallicity variation (thick solid curve), shock front velocity variation (dash-dot curve) and shock velocity variation with pre-ionization of the medium (dash curve), taken from Dopita et al. (1984). The behavior of the two oxygen line ratios can be reproduced by varying the abundance of oxygen relative to hydrogen (by number) but not by varying the shock conditions. 
This indicates that the group of SNRs studied here does show a wide range of oxygen abundance. The \[NII\]/H$`\alpha `$ ratio (Fig 6d) shows the cleanest radial trend. The \[NII\] lines are not strongly affected by shock temperature since they originate in the large recombination region behind the shock front. These lines are also not affected by collisional de-excitation and so are relatively insensitive to electron density. This trend must be a direct result of the abundance gradient in M31. The \[SII\]/H$`\alpha `$ gradient (Fig 6e) may also primarily show an abundance effect. The physical conditions derived from the optical spectra for our sample of SNRs are listed in Table 3 (density sensitive \[SII\] line ratio and n<sub>e</sub> in Columns (6,7), respectively) and Table 7 (T<sub>e</sub>). Electron densities were determined for those regions with good \[SII\] doublet line detections (\[SII\]6716$`\mathrm{\AA }`$/\[SII\]6731$`\mathrm{\AA }`$ within the theoretical limits) using the Lick Observatory FIVEL program with the assumption that the recombination zone from which the sulfur doublet lines arise has a temperature of 10<sup>4</sup>K. In most cases, the SNRs are in the low density limit. This has been recently confirmed by X-ray data, which show that upper limits to the electron density in the interstellar medium in the vicinity of SNR candidates in M31 are about 0.1 cm<sup>-3</sup> (Magnier et al. 1997). Unlike in HII regions, the conditions in the postshock gas of SNR are such that the high excitation emission-line \[OIII\]4363$`\mathrm{\AA }`$ is easier to detect. This line was detected in a few of our SNR candidate spectra, allowing the determination of T<sub>e</sub>. Using the formalism of Kaler et al. (1976) and the \[OIII\]4959$`\mathrm{\AA }`$ \+ 5007$`\mathrm{\AA }`$/\[OIII\]4363$`\mathrm{\AA }`$ emission-line ratio, we have determined the temperatures listed in Table 7. 
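In the low-density limit the \[OIII\] temperature follows from the (4959+5007)/4363 ratio alone. The sketch below uses the standard approximate relation R $`\sim `$ 7.90 exp(3.29 × 10<sup>4</sup>/T<sub>e</sub>) from Osterbrock (1989); the Kaler et al. (1976) formalism actually used in the paper may differ in detail, so this is illustrative only.

```python
import math

def te_oiii(ratio_4959_5007_to_4363):
    """Electron temperature (K) from the [OIII](4959+5007)/4363 flux
    ratio in the low-density limit, inverting the approximate relation
    R = 7.90 * exp(3.29e4 / T_e)."""
    return 3.29e4 / math.log(ratio_4959_5007_to_4363 / 7.90)

# a ratio near 300 corresponds to T_e of roughly 9000 K
print(round(te_oiii(300.0)))
```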
In Fig 6f, we plot the electron temperatures derived for the SNR with available \[OIII\]4363$`\mathrm{\AA }`$ measurements as a function of radial distance. Our data points are in fair agreement with the BKC82 points at radii near 10 kpc. There are too few points in the outer regions of the galaxy to see a spatial correlation in the post-shock temperature in the galaxy. Such a correlation would explain the slight increase of the \[OIII\]/H$`\beta `$ and \[OIII\]/\[OII\] line ratios towards the center of the galaxy shown in Fig 6a and 6b. ### 5.3 Summary of abundance trends in HII regions and SNRs Despite our limited radial sampling of the disk of M31, we have detected radial trends in some of the emission lines for both HII regions and SNRs which likely reflect the abundance gradient in M31. We compare our results, which concern mostly low-luminosity objects, with those of BKC81 and BKC82 who concentrated on the brightest objects and covered a larger radial range. Dopita et al. (1984) reanalyzed the Blair et al. data with improved shock ionization models, to resolve a discrepancy in the derived oxygen abundances for HII regions and SNRs. The oxygen line ratios \[OIII\]/H$`\beta `$, \[OII\]/\[OIII\], and R<sub>23</sub> observed in the center-brightened HII regions (Fig 4a, 4e, 4i) show the expected correlation with radial position within the galaxy. These line ratios are all consistent with increasing metallicity towards the center of the galaxy. The gradient we can infer from the R<sub>23</sub> parameter is -0.06 $`\pm `$ 0.03 dex kpc<sup>-1</sup>, in fair agreement with the gradient derived from HII region spectra by BKC82 and the gradient based on shock-ionization modeling of SNR line ratios (Dopita et al. 1984), -0.05 $`\pm `$ 0.02 dex kpc<sup>-1</sup>. Peculiarly, none of the other types of HII regions show a strong radial gradient in the oxygen emission lines. 
The correlation between the \[OII\] and \[OIII\] emission from the SNRs (Fig 6c) indicates that these objects also exhibit metallicity differences (Dopita et al. 1984). The radial distribution of \[OII\]/H$`\beta `$ and \[OIII\]/H$`\beta `$ observed in our sample of SNRs is in good agreement with the line ratios found in the BKC82 sample of SNRs. This further confirms the oxygen abundance gradient determined from the proper shock-ionization treatment of the BKC82 SNR emission lines by Dopita et al. (1984). The center-brightened HII regions do not show a radial gradient in \[NII\]/H$`\alpha `$ over the range 5 to 15 kpc (see Fig 5), consistent with BKC82, who mainly show higher ratios for this line within 5 kpc from the center. However, the \[NII\]/H$`\alpha `$ ratios are correlated with radial distance in the case of the ring-like nebulae, which show a gradient in \[NII\]/H$`\alpha `$ (Fig 5) of -0.02 $`\pm `$ 0.01 dex kpc<sup>-1</sup>; the diffuse nebulae and the DIG also show this trend, but with more scatter. The photoionization models for these low-excitation regions (e.g. Domgörgen & Mathis 1994) indeed show the line ratio of \[NII\]/H$`\alpha `$ to correlate with the nitrogen abundance. These authors note that the forbidden line ratios depend on electron temperature and abundance in complicated ways, but their calculations show a monotonic pattern for \[NII\]/H$`\alpha `$, while the \[SII\]/H$`\alpha `$ model trends with abundance in Domgörgen & Mathis (1994) seem to be more complex. This may explain why our data do not show a radial trend in the \[SII\]/H$`\alpha `$ ratios. In comparison, the SNRs show a steeper gradient of -0.04 $`\pm `$ 0.01 dex kpc<sup>-1</sup> in the \[NII\]/H$`\alpha `$ line ratio (Fig 6d). This discrepancy, although not explicitly noted by BKC82, can be reproduced using their HII region and SNR data. 
Both the average value of the \[NII\]/H$`\alpha `$ line ratio and the radial gradient of this line seen in the SNRs and the HII regions differ significantly. However, the methods used to determine the nitrogen abundance in the photoionized nebulae and the shock-ionized nebulae are distinct, so despite the differences in the line flux ratios, the nitrogen abundances derived from both types of objects are consistent with each other (BKC82). The line ratios of our ring-like and diffuse nebulae and of our SNRs are consistent with those of the HII regions and SNRs in BKC82, so we confirm the abundance gradient published previously for nitrogen in M31, approximately -0.07 dex kpc<sup>-1</sup>. ## 6 Discussion The optical spectra of HII regions, SNRs, and regions of DIG make it possible to compare the emission-line properties of these different classes of objects. Diagnostic line-ratio diagrams are a good way to constrain the ionization and excitation properties of various phases of the ISM. This type of spectral analysis is especially important in the case of the DIG, where the ionization mechanism has not been well constrained. We consider observed line ratio trends with H$`\alpha `$ surface brightness, present diagnostic diagrams which separate stellar photoionized from shock-ionized nebulae, and discuss how a smooth transition between HII regions and DIG is indicated. ### 6.1 Spectral Line Transitions: HII Regions and DIG As noted by several authors (Martin 1997; Wang, Heckman & Lehnert 1997; Greenawalt, Walterbos & Braun 1997), there is a strong correlation between the \[SII\]/H$`\alpha `$ line flux ratio and the H$`\alpha `$ surface brightness for HII regions and DIG. Is such a trend present for other forbidden lines as well? The four panels of Fig 7 show the line ratio variations with H$`\alpha `$ emission measure in HII regions and DIG for \[OIII\]/H$`\beta `$, \[SII\]/H$`\alpha `$, \[NII\]/H$`\alpha `$, and \[OII\]/H$`\beta `$. 
Error bars have been included only for the DIG, to reduce clutter; error bars for the HII region data are typically much smaller due to higher signal-to-noise. No systematic variation is observed in the \[OIII\] emission with changing surface brightness in either the HII regions or in the DIG. In fact, the DIG \[OIII\] emission is comparable to that of HII regions. In the Wang, Heckman & Lehnert (1997) scenario, which proposes the existence of two types of DIG, this suggests that we may be mostly seeing diffuse gas which resides in the plane of the galaxy and is associated with star-formation. Strong \[OIII\] emission would be consistent with a second type of DIG, a disturbed phase which may exist higher above the plane of disk galaxies and may be shock-ionized (Wang, Heckman & Lehnert, 1997; Rand 1998). Of course, the relatively edge-on orientation of M31 might hide the faint emission from this gas far above the midplane of the galaxy, but there are other reasons to believe that M31’s DIG layer is not very thick (Walterbos & Braun 1994). We have already mentioned that the \[OIII\] emission from the center-brightened HII regions varies as a function of position in the galaxy due to the metallicity gradient in M31. This increase is not seen in the DIG surrounding these HII regions (see §3.3), which may explain why there is less scatter in this ratio for the DIG than for HII regions. The strongest correlation observed to date is the relative increase in the \[SII\] emission with decreasing H$`\alpha `$ surface brightness (Fig 7b). The \[SII\]/H$`\alpha `$ ratio has commonly been used as one of the defining characteristics separating HII regions and DIG, the other being the low H$`\alpha `$ surface brightness of the DIG, coupled with its diffuse morphology. Most HII regions have \[SII\]/H$`\alpha `$ ratios below 0.3 while the DIG has ratios which reach up to 1.0. 
The interesting point about this plot is the positioning of the diffuse and ring nebulae relative to the center-brightened sources and the DIG. These points seem to fill the gap between the two brightness extremes creating a smooth sequence in the \[SII\]/H$`\alpha `$ ratio. The extended nebulae may be ionized by a more diffuse radiation field than the center-brightened sources. Thus the smooth trend observed in this plot is likely the result of a smoothly decreasing ionization parameter from the compact sources down to the most diffuse ionized gas. This trend is reproduced in the fainter \[NII\] emission (Fig 7c) albeit with much more scatter than the \[SII\] lines. In this case, abundance effects may play a role in the scatter. As is the case for \[OIII\], the \[OII\] emission (Fig 7d) is not strongly correlated with H$`\alpha `$ emission measure in the HII regions or the DIG. One might have naively expected to see the same increasing trend with decreasing H$`\alpha `$ emission measure in all emission lines arising from singly-ionized species if the increase in flux is due to a decreasing ionization parameter. However, the photoionization models of Domgörgen & Mathis (1994) predict that only the \[NII\] and \[SII\] flux will increase as the ionization parameter decreases. The \[OII\] emission is nearly independent of the ionization parameter and thus remains fairly constant in the HII regions and in the DIG. ### 6.2 Diagnostic Diagrams: HII Regions, SNR and DIG Over the last two decades, theoretical modeling of the emission line spectrum of ionized gas has proven quite useful in determining various characteristics of the gas. Line ratio diagnostics of excitation conditions were investigated by Baldwin, Phillips & Terlevich (1981, hereafter BPT81), resulting in a set of diagrams which separate photoionized gas from shock-ionized gas. 
Evans & Dopita (1985) developed a set of solar metallicity HII region grids on these diagrams to study the effects of varying the stellar ionization temperature T<sub>ion</sub>, the mean ionization parameter $`\overline{Q}`$(H), and the element-averaged metallicity $`\overline{Z}`$. These grids are based on photoionization models for steady-state spherically symmetric nebulae with unit filling factor and a centrally located OB association, all assumptions which do not necessarily hold in real HII regions. However, observational data on HII regions fit quite nicely into these grids, showing relatively good agreement between known conditions and those predicted by the models. Here we use recent ionization models which predict line ratios for various ionization mechanisms, ionization parameters and nebular conditions (e.g. Shields & Filippenko 1990; Sokolowski 1993; Domgörgen & Mathis 1994) on these diagnostic diagrams to study the differences in the spectra of the different types of HII regions, DIG and SNRs. Although in all cases we have used reddening corrected line ratios, one of the advantages of diagnostic diagrams is that most of the ratios used are based on lines very near in wavelength, so errors in the reddening correction will not affect our results. The first set of diagnostic diagrams compares the \[OIII\] and \[NII\] emission to the \[SII\] emission in the HII regions, DIG and SNRs. The group of data points clearly shifts towards higher \[SII\]/H$`\alpha `$ in each consecutive panel (Fig 8a-d). The center-brightened HII regions fall completely within the photoionization model box (labelled “HII region”) of Shields & Filippenko (1990), while the diffuse and ring-like nebulae begin to scatter outside this box towards shock ionization models (labelled “LINER”). The DIG and SNRs are found well within the box for shock-ionization, with very few exceptions. 
Although the \[SII\] emission from the DIG is inconsistent with the photoionization models of Shields & Filippenko (1990), the predictions of Sokolowski (1993) and of Domgörgen & Mathis (1994), which specifically addressed photoionization by a dilute radiation field, do reproduce the enhancement of the \[SII\] emission in the lower emission measure gas. The arrows on the model curves indicate the direction of decreasing ionization parameter. The \[NII\] and \[SII\] emission features can be used together to best separate photoionization and shock-ionization mechanisms (Fig 8e-h). All confirmed SNR in our data have by definition \[SII\]/H$`\alpha `$ $`\geq `$ 0.45, so we have divided up each of the plots by a dashed line at log(\[SII\]/H$`\alpha `$) = -0.35. From the plots, it is apparent that there also seems to be a lower limit on the \[NII\] emission of the shock-ionized nebulae near log(\[NII\]/H$`\alpha `$) = -0.3. We have placed a dashed line at this value to split up each plot into four quadrants. Only the SNR occupy the upper right corner while the HII regions and DIG are found mostly outside of this quadrant. The two Domgörgen & Mathis (1994) models (“leaky” and “composite”) are in fair agreement with the observed spectra of the DIG, showing the increase in \[NII\] with a more rapid increase in the \[SII\] emission as the ionization parameter decreases. Two classic BPT81 excitation plots based on the ionization line ratio \[OII\]/\[OIII\] are given in Fig 9. The first set of panels shows the correlation between \[OIII\]/H$`\beta `$ and \[OII\]/\[OIII\]. The center-bright HII regions lie completely below the upper envelope of photoionization models (Evans & Dopita, 1985). The diffuse and ring nebulae show a bit more scatter which places a few points above this curve. The one point lying far above this curve corresponds to the bright ring segment, K450.a which shows extraordinarily high reddening due to a very weak H$`\beta `$ flux.
The DIG (Fig 9c) also lie mostly below this curve, suggesting the oxygen line ratios are consistent with even the simplest photoionization models. The Domgörgen & Mathis (1994) models at the higher ionization parameter predict the \[OIII\] emission fairly well. The SNRs (Fig 9d) cluster about the upper photoionization curve with most points lying above the curve, as expected, although the number of points lying below the theoretical curve is substantial. The \[OIII\] emission in a SNR is sensitive to both the shock velocity and chemical abundance, as has already been pointed out, so the points below this curve are not necessarily consistent with photoionization model predictions. Placing our data points in the \[NII\]6584/H$`\alpha `$ versus \[OII\]/\[OIII\] plane (Fig 9e-h) also separates the two ionization mechanisms. The dashed-line boxes represent theoretical models for photoionization and shock ionization based on Shields & Filippenko (1990). As in previous plots we also show the models of Domgörgen & Mathis (1994). Although the center-brightened nebulae show a wide range in the ionization ratio, the \[NII\] emission remains fairly constant around the typical HII region value of log(\[NII\]/H$`\alpha `$) $`\sim `$ -0.4. The DIG, diffuse and ring-like HII regions (in Fig 9b and 9c) show a similar distribution consistent with photoionization with only a few points scattering into the shock model regime. However, the SNRs cluster about an average value of log(\[NII\]/H$`\alpha `$) $`\sim `$ -0.2, forcing the points to lie in the shock-ionization box. Again, there are numerous SNRs which lie below the shock-ionization model box and there appears to be even more overlap between the SNRs and the HII regions than in Fig 9a-d. This is most likely an abundance effect, since the \[NII\] lines in SNRs are sensitive to chemical composition, not the shock conditions.
The SNRs which fall below log(\[NII\]/H$`\alpha `$)=-0.2 still show the elevated \[SII\] emission expected from shock conditions as is shown in Fig 8h. Analysis of the emission-line diagnostic diagrams leads to the following conclusions: * The enhanced \[SII\] lines from the DIG are the only forbidden lines not clearly reproduced by simple photoionization models; they would seem to be more consistent with shock-ionization. However, dilute photoionization models such as those presented by Domgörgen & Mathis (1994), do explain the enhanced \[SII\] emission while keeping the \[NII\], \[OII\], and \[OIII\] line strengths near the values found in typical HII regions. The optical emission lines from the DIG observed thus far in M31 are consistent with the low ionization parameter photoionization models as described in Greenawalt, Walterbos, & Braun (1997). * The diagnostic diagrams presented here separate the spectroscopically confirmed SNRs from the photoionized nebulae fairly well, the best case being that of the \[NII\]/H$`\alpha `$ vs. \[SII\]/H$`\alpha `$ plot (Fig 8). The shock-ionized nebulae exist in a very restricted quadrant of this plot with enhanced \[SII\] and \[NII\] emission. The DIG does not occupy this quadrant. * The line ratios for the diffuse HII regions and the ring nebulae are consistent with photoionization models and form a transition between center-brightened HII regions and the DIG in terms of the \[SII\]/H$`\alpha `$ ratio. ## 7 Summary We have obtained deep optical spectroscopic observations for 46 HII regions (including 2 rejected SNRs), 16 confirmed SNRs, and numerous regions of diffuse ionized gas in M31. The majority of the HII regions studied here are in the low density limit and have low enough temperatures that direct determination of T<sub>e</sub> by the observation of the \[OIII\] 4363$`\mathrm{\AA }`$ line is impossible.
This complicates the determination of the abundance gradient in M31, although the observed variations of the line ratios with radius indicate that a gradient does exist. The abundance gradient we determine for M31 based on the R<sub>23</sub> parameter is -0.06 $`\pm `$ 0.03 dex kpc<sup>-1</sup>, consistent with all previously determined values. This radial variation of line ratios does appear to depend on morphology. The higher surface brightness HII regions show spectra which are dominated by local chemical abundance effects and thus can be used to trace out the chemical gradient in the galaxy. The spectra of the diffuse and ring HII regions do not show radial line ratio trends and appear to be dominated by ionization and excitation effects rather than metallicity. We have confirmed 16 of the 18 observed SNR candidates from the BW93 catalog. The two remaining objects were rejected due to low \[SII\] emission. These objects also do not show enhanced \[OIII\] or \[OI\] emission in their spectra and were thus reclassified as normal HII regions. The radial trends in the line ratios for the confirmed SNR agree with previously published results (BKC81 and BKC82). While the \[SII\] emission from the HII regions and DIG shows a strong increase with decreasing H$`\alpha `$ emission measure, no other spectral feature exhibits such behavior. The smooth transition in \[SII\]/H$`\alpha `$ seen from the center-brightened HII regions to the DIG strongly suggests that the DIG is affected by a lower ionization parameter, diluted radiation field. The diffuse and ring-like HII regions, which exhibit strong ionization and excitation effects in their optical spectra, represent transitional objects between the bright HII regions and the faint DIG. Using diagnostic diagrams which compare the \[OIII\], \[NII\], and \[SII\] emission to the brightest Balmer lines, we have shown how most of the emission features found in the DIG are consistent with even the simplest photoionization models.
However, the \[SII\] emission from the DIG cannot be explained by these simple models. The increase in \[SII\] emission with decrease in H$`\alpha `$ surface brightness we have observed is well explained by the diluted radiation field photoionization models of Domgörgen & Mathis (1994). There is no evidence for shock ionization in the DIG layer of M31. We would like to acknowledge the insightful discussions with Don Garnett on abundance gradient effects on emission-line ratios in spiral galaxies, and Stacy McGaugh on empirical abundance calibrators. We gratefully thank Charles Hoopes for reading and commenting on the manuscript during its development. The comments from an anonymous referee helped to clarify the presentation of the results. This work has been supported by NSF grant AST-9617014 and a Cottrell Scholar Award from Research Corporation to R.A.M.W. ## 8 Appendix : Special Cases Of the 46 discrete HII regions observed we have identified 9 diffuse nebulae, 16 center-bright nebulae, 12 extended complexes, and 9 discrete ring structures. In a very few cases, the new classification used is in disagreement with the WB92 classification. The subapertures obtained for structures within extended nebulae have also been classified using the same nomenclature. It is important to note, in this case, that a morphological classification of “ring” can mean that the spectrum contains light across an entire object which appears as an H$`\alpha `$ ring or can mean that the spectrum is from a section of a ring; a distinction is made only in Fig 4 and Fig 5 where the point size indicates if the object is a discrete nebula or a substructure within a nebula. We have also obtained spectra of large complexes and other nebulae within which are embedded SNR candidates from the BW93 catalog. Here we provide a set of brief descriptions for these objects as well as for objects which have been re-classified or which require a more detailed description of their morphology.
The central positions for apertures are labelled in the finding charts, Fig 1a and Fig 1b. K87 was classified as a center-brightened source by WB92 although it consists of a bright compact source (K87.b) centered within a faint diffuse shell of emission approximately 150 pc in diameter. It is unclear whether this diffuse ring of gas is really associated with the bright knot, especially since the spectral properties are quite different. The line fluxes in the spectrum of the west filament (K87.a) are included in Table 4; the east filament had insufficient signal-to-noise to include in our results. K103.a is a filamentary structure which appears on the far northern edge of a confirmed supernova remnant, K103A. Several spectra were extracted to study the complicated structures in the large complex, K103. The spectra of various knots and filaments do show the elevated \[SII\]/H$`\alpha `$ ratio indicating shock-ionization and confirm the nature of the embedded supernova remnant candidate. The filament K103.a, however, has a very low \[SII\] ratio, suggesting that it is actually not part of the supernova remnant. K132 is listed as a diffuse nebula in WB92 but appears to be a bright compact source embedded within an amorphous diffuse nebula. This knot is a center-brightened 25 pc HII region. The diffuse gas surrounding this object is too faint to obtain a good spectrum. K310 is one of two BW93 supernova remnant candidates we have rejected due to the low \[SII\]/H$`\alpha `$ ratio observed in its spectrum. It is a 55 pc, very diffuse and faint (E.M. $`\sim `$ 90 pc cm<sup>-6</sup>) amorphous nebula. Though it is fainter than our adopted DIG cutoff surface brightness, it is not classified as DIG because it appears as a discrete nebula. K446 was also suspected to be a supernova remnant due to the high \[SII\]/H$`\alpha `$ ratio obtained from line images. Our spectroscopy has ruled out this candidate, however, as it shows a ratio of only 0.24.
The low ratio determined from the images was affected by a few bad pixels. K496 is a large HII region which contains an embedded SNR candidate, K496A (BW93). We did not obtain a spectrum of the candidate however. The section of the object for which we have obtained a spectrum does not show the signature enhanced \[SII\]/H$`\alpha `$ line ratio so we have included K496 with the HII regions. K525 is a large HII region complex consisting of several diffuse structures, faint arcs, bright compact knots and includes an embedded supernova remnant candidate, K525A (BW93) toward the center of the region. All but the central aperture which corresponds to the supernova remnant (K525.c), show \[SII\]/H$`\alpha `$ ratios typically found in photoionized nebulae. K526 The spectrum obtained for this HII region complex contains light from an embedded supernova remnant candidate K526A, and a center-bright nebula with a diameter near 80 pc. The \[SII\]/H$`\alpha `$ ratio in two spectra (K526.b and K526.c) are enhanced, confirming the embedded SNR; the other aperture, K526.a, which corresponds to the compact source, shows \[SII\] emission typical of HII regions. K877 is a large (d $``$ 200 pc) complex which contains an embedded Wolf-Rayet star (#1201 in MLA93) near the center of a small ring shaped structure. K932 was classified as a single object in the WB92 catalog, though closer inspection of the H$`\alpha `$ image reveals that it is actually a cluster of several very bright compact sources and may include a small faint complete ring to the north. The first spectrum (K932.a) is that of a small knot near the rim of the faint ring-like HII region. The second (K932.b) is a bright compact source with a diameter of about 40 pc. We have listed K932 as a complex because it does not appear to be a discrete nebula.
# The obscured growth of massive black holes ## 1 Introduction The local mass density of massive black holes residing in the nuclei of galaxies (Magorrian et al 1998; Richstone et al 1998) is in good agreement with that expected from the intensity of the X-ray background at 30 keV (Fabian & Iwasawa 1999; Salucci et al 1999). The X-ray background is assumed to be due to radiatively-efficient accretion onto black holes at redshifts $`z`$ of about 1–2, with accretion accounting for the bulk of their mass. An important requirement is the presence of large column densities of absorbing matter, $`N_\mathrm{H}\sim 10^{24}\mathrm{cm}^{-2}`$, around many of the X-ray sources. Intrinsically they are assumed to radiate the broad-band spectrum of a typical quasar (Elvis et al 1994), with a large UV bump, and a power-law X-ray continuum of photon index $`\mathrm{\Gamma }\sim 2`$, up to a few $`100\mathrm{keV}`$. Photoelectric absorption in this matter then hardens the observed spectrum considerably below 30 keV so that the cumulative spectrum from a population of sources with a range of column densities and redshifts resembles that of the observed X-ray Background (Setti & Woltjer 1989; Madau et al 1994; Celotti et al 1995; Matt & Fabian 1994; Comastri et al 1995; Wilman & Fabian 1999). Provided that the sources are not too Thomson-thick, it is only above about 30 keV that the unabsorbed radiation from growing black holes is observed. Two major issues for the model are discussed here: a) the typical mass and luminosity of the obscured objects and b) the source of the obscuring matter. The first, a), is an issue because few luminous obscured quasars with $`L(2\text{–}10\,\mathrm{keV})>10^{44}\mathrm{erg}\,\mathrm{s}^{-1}`$ have yet been found by X-ray (or other waveband) surveys or studies (Boyle et al 1998; Brandt et al 1997; Halpern et al 1999), so it might seem that highly obscured objects are only of low luminosity.
This could be a serious problem for the growth of black holes of mass $`10^9\mathrm{M}_{\odot}`$ or more, which might have considerably higher luminosities than that. The second, b), requires large column densities to obscure most of the Sky as seen from the sources, in order that most accretion power in the Universe is absorbed (Fabian & Iwasawa 1999 estimate that about 85 per cent is absorbed). We have previously argued that a circumnuclear starburst is responsible for the obscuration in nearby objects (Fabian et al 1998), but in its simple form this could be difficult to sustain if the central black hole takes a Gyr or more to grow to its present mass. A model is presented here in which a black hole grows with its surrounding stellar spheroid by the accretion of hot gas. The gas is also cooling and forming a distributed gaseous cold component within which stars slowly form. In the central regions the cold component provides the required column density for absorption of the quasar radiation from the accreting black hole. The quasar is assumed to drive a slow wind, which eventually becomes powerful enough to blow away the absorbing gas, and an unobscured quasar is then seen (see Silk & Rees 1998 and Blandford 1999 for a similar wind model). Because the fuel supply for the black hole has also been ejected, however, the quasar dies when its accretion disc is exhausted. The high obscuration phase occurs only during the growing phase and thus at redshifts greater than unity, which have not yet been explored in the hard X-ray band. Reprocessing of the radiation by the absorbing gas into far infrared emission may make these objects detectable in the sub-millimetre band. ## 2 Growth of the black hole Magorrian et al (1998) find from the demographics of nearby massive black holes that the mass of the hole $`M_{\mathrm{BH}}`$ is proportional to the mass of the surrounding spheroid $`M_{\mathrm{sph}}`$ or bulge; $`M_{\mathrm{BH}}\approx 0.005M_{\mathrm{sph}}`$.
There is considerable scatter about this relation of order $`\pm 1`$ dex. This can be combined with the mass function of galactic bulges, where most of the mass resides. Salucci et al (1999) use the Schechter function luminosity function of bulges to obtain the local mass function of black holes. Most of the mass lies in black holes of individual mass around the break in the function, i.e. $`3\times 10^8\mathrm{M}_{\odot}`$. Taking this as a typical mass, and assuming growth by accretion with a radiative efficiency of $`\eta =0.1\eta _{0.1}`$, the bolometric luminosity of the final object is $`L_\mathrm{B}=3\times 10^{46}`$–$`3\times 10^{44}\mathrm{erg}\,\mathrm{s}^{-1}`$ if it is at one and 0.01 times the Eddington limit $`L_{\mathrm{Edd}}`$, respectively. The Salpeter time for the growth of the black hole, i.e. the mass doubling time, is $`t_\mathrm{s}=3\times 10^7(L_{\mathrm{Edd}}/L_\mathrm{B})\eta _{0.1}\mathrm{yr}.`$ If we now assume that the initial mass of all massive black holes is less than that of the central black hole in our Galaxy, say $`10^6\mathrm{M}_{\odot}`$, and has grown to $`3\times 10^8\mathrm{M}_{\odot}`$ within 3 Gyr, which corresponds to $`z\approx 2`$, then $`8t_\mathrm{s}<3\times 10^9\mathrm{yr}`$ and the black hole must have grown at about 10 per cent of the Eddington rate or more. Thus over the final $`3\times 10^8\mathrm{yr},`$ $`L_\mathrm{B}>3\times 10^{45}\mathrm{erg}\,\mathrm{s}^{-1}`$. From the work of Elvis et al (1994) the observed 2–10 keV luminosity is about 3 per cent of the bolometric one for a quasar so we can conclude that $`L(2\text{–}10\,\mathrm{keV})>10^{44}\mathrm{erg}\,\mathrm{s}^{-1}`$ during the growth of a typical black hole and we are dealing with powerful, obscured quasars. It is clear that the objects making up the X-ray background, and the radiative growth phase of most of the mass in nearby black holes, are not represented by any object observed so far. ## 3 The obscuration The spectrum of the X-ray Background requires that most accretion power is absorbed.
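These order-of-magnitude figures can be verified with a short numerical sketch. This is a rough check only: standard cgs constants are assumed, and since the coefficients quoted above are rounded, agreement is only to be expected at the tens-of-per-cent level.

```python
# Rough numerical check of the Eddington luminosity for a 3e8 M_sun black hole
# and of the Salpeter (mass-doubling) time for radiative efficiency eta = 0.1.
import math

G = 6.674e-8          # gravitational constant [cgs]
m_p = 1.673e-24       # proton mass [g]
c = 2.998e10          # speed of light [cm/s]
sigma_T = 6.652e-25   # Thomson cross section [cm^2]
M_sun = 1.989e33      # solar mass [g]
yr = 3.156e7          # seconds per year

M_BH = 3e8 * M_sun    # typical final mass, at the break of the mass function

# Eddington luminosity: L_Edd = 4 pi G M m_p c / sigma_T
L_Edd = 4 * math.pi * G * M_BH * m_p * c / sigma_T

# Mass-doubling time at L_B = L_Edd: the e-folding time is
# eta sigma_T c / (4 pi G m_p); the doubling time is ln 2 times that.
eta = 0.1
t_double_yr = math.log(2) * eta * sigma_T * c / (4 * math.pi * G * m_p) / yr

print(f"L_Edd = {L_Edd:.2e} erg/s")    # a few times 10^46 erg/s
print(f"t_s   = {t_double_yr:.2e} yr") # ~3e7 yr, as quoted above
```

Both numbers reproduce the values in the text: an Eddington luminosity of a few $`\times 10^{46}`$ erg s<sup>-1</sup> and a doubling time of about $`3\times 10^7\eta _{0.1}`$ yr.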
This means that some absorption (say $`N_\mathrm{H}>10^{22}\mathrm{cm}^{-2}`$) occurs in 90 per cent of objects and heavy absorption ($`N_\mathrm{H}>10^{24}\mathrm{cm}^{-2}`$) in 30–50 per cent of them. 10 per cent are unabsorbed and are the quasars identified in blue optical surveys or are bright in the soft X-ray band of ROSAT. So far these could be distinct populations of quasars or different phases in the growth of all quasars. Such high absorbed fractions mean that the covering fraction of the sky by high column density material seen from a growing black hole must approach $`4\pi \mathrm{sr}`$. This cannot be provided by any thin disc or by the standard torus of unified models. The absorbing matter is probably concentrated within the innermost few 100 pc, or its total mass becomes high. It must presumably be cold and fairly neutral, or it will not absorb the X-rays. At first sight this is at variance with it being space covering, especially if it must be so for a Gyr or more. Such matter should collide and dissipate into a disc-like structure. Following our earlier work on a circumnuclear starburst (Fabian et al 1998), it is plausible that collisions do take place, leading to dissipation, but that some massive star formation occurs in the shocked and cooled dense gas. The winds and supernovae from those stars then supply the energy to keep the rest of the cold matter in a chaotic and space covering state. A more detailed model can be developed by assuming that the black hole is growing at the same time as the galaxy does. The stellar spheroid continues to grow by cooling of the gas heated by gravitational collapse of the protogalaxy. It is likely that the hot phase density while the galaxy continues to grow is the maximum possible, which means that the radiative cooling time of the gas equals the gravitational infall time. This condition has been studied in the context of quasar fuelling and growth by Nulsen & Fabian (1999).
The gas accretes in a Bondi flow, probably forming a disc well within the Bondi radius. The accretion rate is such that it can typically be 10 per cent of the Eddington value. The situation as envisaged here is essentially a maximal cooling flow and we can use the properties of observed cooling flows (Fabian 1994 and references therein) to indicate how the cooled gas is distributed and how it may lead to absorption. X-ray observations of cluster cooling flows show that the mass deposited by cooling is distributed with $`\dot{M}(<r)\propto r`$, where $`r`$ is the radius. The density distribution of the cooled gas is therefore $`\rho \propto r^{-2}`$. If the gravitational potential of the galaxy is isothermal it remains so. X-ray absorption with column densities of the order of $`10^{21}\mathrm{cm}^{-2}`$ is also observed in many cluster cooling flows. Although not observed in other wavebands, it could represent very cold, dusty gas (Fabian et al 1994). It is therefore proposed that the cooled gas in protogalaxies does not all rapidly form stars but that much of it forms long-lived cold dusty clouds. As discussed above, energy from the massive stars helps to prevent the clouds rapidly dissipating into a large disc. The inner regions of protogalaxies are therefore highly obscured.
The column density through the cold clouds is obtained from integration of the cold gas density distribution, assumed to be $$n=n_0r_0^2r^{-2}.$$ $`(1)`$ If the cold clouds are a fraction $`f`$ of the total mass within radius $`r,M(<r)=2v^2r/G,`$ then $$n_0r_0^2=f\frac{v^2}{2\pi Gm_\mathrm{p}}$$ $`(2)`$ and the column density in to radius $`r_{\mathrm{in}}`$ $$N_\mathrm{H}=\frac{v^2}{2\pi Gm_\mathrm{p}}\frac{f}{r_{\mathrm{in}}}=10^{24}f\frac{v_{2.5}^2}{r_2}\mathrm{cm}^{-2}.$$ $`(3)`$ Here the (line of sight) velocity dispersion of the (isothermal) spheroid is $`300v_{2.5}\mathrm{km}\,\mathrm{s}^{-1}`$ and $`r_{\mathrm{in}}=100r_2\mathrm{pc}.`$ It is therefore reasonable to assume that column densities at the level required by X-ray Background models can be produced in this manner, provided that $`f`$ is fairly high. It is interesting that the mass within the radius where the column density becomes Thomson thick ($`N_\mathrm{T}=\sigma _\mathrm{T}^{-1}\approx 2\times 10^{24}\mathrm{cm}^{-2}`$, where $`\sigma _\mathrm{T}`$ is the Thomson electron scattering cross section) is given by $$r_\mathrm{T}=\frac{v^2f}{2\pi Gm_\mathrm{p}N_\mathrm{T}},$$ $`(4)`$ for which the enclosed mass is $$M_\mathrm{T}=\frac{v^4f}{2\pi G^2m_\mathrm{p}N_\mathrm{T}}.$$ $`(5)`$ $`M_\mathrm{T}`$ is thus approximately proportional to the mass of the whole spheroid, $`M_{\mathrm{sph}}`$, since approximately $`M_{\mathrm{sph}}\propto v^4`$, as suggested by the Faber-Jackson relation and by galaxy modelling (Salucci & Persic 1997). The last authors note that $`r_\mathrm{e}\propto L_B^{0.7}`$ and $`M_{\mathrm{sph}}\propto L_B^{1.35},`$ which with $`v\propto (M/r)^{0.5}`$ yields a result very close to $`M_{\mathrm{sph}}\propto v^4`$. As the stellar content of the galaxy and the central black hole grow, the accretion radius of the hole also grows. When $`M_{\mathrm{BH}}\approx M_\mathrm{T}`$ the accretion radius is at $`r_\mathrm{T}`$ and the Thomson depth of the obscuring matter around the quasar has dropped to unity. The growth of the quasar continues until it runs out of gas.
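The scales implied by equations (3) and (5) can be checked with a rough numerical sketch, assuming standard cgs constants, $`f=1`$, $`v=300`$ km s<sup>-1</sup> and $`r_{\mathrm{in}}=100`$ pc; the quoted coefficients are order-of-magnitude values, so agreement to within a factor of a few is all that should be expected.

```python
# Order-of-magnitude check of the column density in to r_in (eq. 3) and the
# mass enclosed within the Thomson radius (eq. 5).  A sketch only.
import math

G = 6.674e-8          # [cgs]
m_p = 1.673e-24       # proton mass [g]
sigma_T = 6.652e-25   # Thomson cross section [cm^2]
M_sun = 1.989e33      # [g]
pc = 3.086e18         # [cm]

f = 1.0               # cold-cloud mass fraction (assumed)
v = 3e7               # 300 km/s line-of-sight velocity dispersion [cm/s]
r_in = 100 * pc       # inner radius [cm]

# Equation (3): N_H = f v^2 / (2 pi G m_p r_in)
N_H = f * v**2 / (2 * math.pi * G * m_p * r_in)

# Equation (5): M_T = f v^4 / (2 pi G^2 m_p N_T), with N_T = 1/sigma_T
N_T = 1.0 / sigma_T
M_T = f * v**4 / (2 * math.pi * G**2 * m_p * N_T) / M_sun

print(f"N_H = {N_H:.1e} cm^-2")   # a few times 10^24 for f = 1
print(f"M_T = {M_T:.1e} M_sun")   # a few times 10^9 M_sun at this v
```

The column density comes out at a few $`\times 10^{24}f\mathrm{cm}^{-2}`$, confirming that Thomson-thick obscuration is reached for fairly high $`f`$, as stated above.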
This could be because all the gas has cooled and formed stars. More likely it is connected with the central quasar (Silk & Rees 1998; Blandford 1999). Consider the possibility that as well as radiation the quasar produces a wind at velocity $`v_\mathrm{w}\ll c`$ and with a kinetic power $`L_\mathrm{w}`$ which scales with the radiated power $`L_{\mathrm{rad}}`$ as $`L_\mathrm{w}/L_{\mathrm{rad}}=L_{\mathrm{Bol}}/L_{\mathrm{Edd}}`$. Thus when the bolometric power is 10 per cent of the Eddington value the power of the wind is 10 per cent of the radiated power. Note that winds and outflows are observed from many classes of accreting objects (see e.g. Livio 1997). The ratio of wind to (optically-thin) radiation pressure is then $$\frac{P_\mathrm{w}}{P_{\mathrm{rad}}}=\frac{1}{2}\frac{L_\mathrm{w}}{L_{\mathrm{rad}}}\frac{c}{v_\mathrm{w}}=\frac{1}{2}\frac{L_{\mathrm{Bol}}}{L_{\mathrm{Edd}}}\frac{c}{v_\mathrm{w}}.$$ $`(6)`$ For an optically-thin medium of Thomson depth $`\tau _\mathrm{T}`$, the effective Eddington limit $$L_{\mathrm{Edd}}^\mathrm{w}=L_{\mathrm{Edd}}\frac{\tau }{2}\frac{v_\mathrm{w}}{c}.$$ $`(7)`$ This means that if $`v_\mathrm{w}\approx 15,000\mathrm{km}\,\mathrm{s}^{-1}`$ and $`L_{\mathrm{Bol}}/L_{\mathrm{Edd}}\approx 0.1`$ then the wind pressure balances gravity. A wind force slightly in excess of this can therefore eject the gas. Since $`\tau \propto r^{-1}`$ and $`M\propto r`$, this condition applies throughout the galaxy and all the gas is ejected. (The assumption here that the mass is in a thin shell is applicable to the gas as it is swept up.)
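For the numbers just quoted, equation (6) gives a wind-to-radiation pressure ratio of unity, which can be checked in a line or two (a sketch only; the values of $`v_\mathrm{w}`$ and $`L_{\mathrm{Bol}}/L_{\mathrm{Edd}}`$ are those assumed in the text):

```python
# Check of equation (6): with L_Bol/L_Edd = 0.1 and v_w = 15,000 km/s the
# wind pressure equals the (optically-thin) radiation pressure.
c = 2.998e5                 # speed of light [km/s]
v_w = 1.5e4                 # wind velocity [km/s]
L_bol_over_L_edd = 0.1      # assumed sub-Eddington accretion rate

# P_w / P_rad = (1/2) (L_Bol/L_Edd) (c/v_w)
ratio = 0.5 * L_bol_over_L_edd * (c / v_w)
print(f"P_w/P_rad = {ratio:.3f}")   # ~1
```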
This result can be placed on a surer footing by noting that the force balance between gravity acting on a column of matter at radius $`r`$ of total mass $`N_\mathrm{H}4\pi r^2m_\mathrm{p}`$ and the outward force due to a wind $`2L_\mathrm{w}/v_\mathrm{w}`$ yields the limiting luminosity $$L_{\mathrm{Edd}}^\mathrm{w}=2\pi GMm_\mathrm{p}N_\mathrm{H}v_\mathrm{w}.$$ $`(8)`$ Substituting for $`M`$ and $`N_\mathrm{H}`$ in the model galaxy we obtain $$L_{\mathrm{Edd}}^\mathrm{w}=\frac{2v^4fv_\mathrm{w}}{G}.$$ $`(9)`$ The gas is ejected by the wind when the wind power exceeds this limit, i.e. when $$L_\mathrm{w}>L_{\mathrm{Edd}}^\mathrm{w}.$$ $`(10)`$ If $`L_\mathrm{w}=aL_{\mathrm{Edd}}`$ then the limit occurs when $$a\frac{4\pi GMm_\mathrm{p}c}{\sigma _\mathrm{T}}=\frac{2v^4fv_\mathrm{w}}{G},$$ $`(11)`$ or at the critical mass $$M_\mathrm{c}=\frac{v^4}{2\pi G^2m_\mathrm{p}N_\mathrm{T}}\frac{v_\mathrm{w}}{c}\frac{f}{a}=\frac{M_\mathrm{T}}{2a}\frac{v_\mathrm{w}}{c}.$$ $`(12)`$ Thus if $`v_\mathrm{w}\approx 0.1c`$ and $`a\approx 0.1`$ then $`M_\mathrm{c}\approx M_\mathrm{T}/2.`$ Assuming that most of the mass within $`r_\mathrm{T}`$ lies in the central black hole again means that $`M_{\mathrm{BH}}\approx M_\mathrm{T}\propto M_{\mathrm{sph}}.`$ Most of the power during the main growth phase of a massive black hole is then radiated into gas with a Thomson depth of about unity, as required for modelling the X-ray Background spectrum. Clearly $`f`$ cannot equal unity if a galaxy is to be formed. In practice it will be a function of time. The important issue here is that it cannot be small, i.e. it probably lies in the range of 0.1 – 0.5. The black hole mass is therefore proportional to the spheroid mass, as observed (Magorrian et al 1998). The quasar is now unobscured and is observable as an ordinary blue excess object for as long as it has fuel. A reasonable estimate would be about a million yr for the disc to empty. The ejection phase is tentatively identified with broad absorption line quasars (BAL quasars; see e.g.
Weymann 1997). The normalization for the relation between $`M_{\mathrm{BH}}`$ and $`M_{\mathrm{sph}}`$ is obtained by using the Faber-Jackson relation given by Binney & Tremaine (1992), where $`v=220(L/L_{\ast})^{0.25}\mathrm{km}\,\mathrm{s}^{-1}`$ and $`L_{\ast}=4\times 10^{10}\mathrm{L}_{\odot}`$ in the V-band, and a mass-to-light ratio in that band of 6 (both for a Hubble constant of $`50\mathrm{km}\,\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$). The result is $$\frac{M_\mathrm{T}}{M_{\ast}}=0.01$$ $`(13)`$ so $$\frac{M_{\mathrm{BH}}}{M_{\mathrm{sph}}}\approx 0.005,$$ $`(14)`$ for the values of $`a`$ and $`v_\mathrm{w}`$ above. This is in good agreement with the relation found by Magorrian et al (1998). In detail, the weak variations in $`M/L`$ such as summarized by Salucci & Persic (1997) could shift the observed relation away from being strictly linear. Note that the ordinary quasar phase marks the end of the main growth phase of both the black hole and its host spheroid. The central black hole in galaxies therefore has a profound effect on the whole galaxy, in providing a limit to the stellar component. In this context, Silk & Rees (1998) have shown that a powerful quasar wind could end the formation of a galaxy and Blandford (1999) has noted that it could prevent the formation of a galactic disc. ## 4 Discussion It has been shown here that a black hole, growing by accretion in the centre of a young, isothermal, spheroidal galaxy which has a large mass component of cold gas, will appear highly obscured ($`\tau _\mathrm{T}>1`$). Provided that a significant fraction of the accretion power, assumed to be close to the Eddington limit for the black hole, is released as a sub-relativistic wind, then, as suggested by Silk & Rees (1998), the cold, and accompanying hot, gas components are ejected from the galaxy when the Thomson depth to the outside $`\tau _\mathrm{T}`$ drops to about unity. The central accretion source then appears as an unobscured quasar which lasts until its disc empties.
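The normalization in equations (13) and (14) can also be checked with a rough numerical sketch, assuming standard cgs constants, $`f\approx 1`$, $`N_\mathrm{T}=\sigma _\mathrm{T}^{-1}`$ and the Faber-Jackson values quoted above; the result agrees with the quoted figures to within a factor of about 1.5, as expected given the rounded coefficients.

```python
# Rough check of M_T/M_* (eq. 13) and M_BH/M_sph (eq. 14) for an L_* spheroid.
import math

G = 6.674e-8          # [cgs]
m_p = 1.673e-24       # proton mass [g]
sigma_T = 6.652e-25   # Thomson cross section [cm^2]
M_sun = 1.989e33      # [g]

v = 2.2e7                   # 220 km/s for an L_* spheroid [cm/s]
M_star = 6 * 4e10 * M_sun   # M/L_V = 6, L_* = 4e10 L_sun -> 2.4e11 M_sun [g]

# Equation (5) with f = 1 and N_T = 1/sigma_T:
M_T = v**4 / (2 * math.pi * G**2 * m_p / sigma_T)
ratio_T = M_T / M_star        # compare with 0.01 in equation (13)
ratio_BH = 0.5 * ratio_T      # M_c ~ M_T/2 for v_w ~ 0.1c, a ~ 0.1 (eq. 14)

print(f"M_T/M_*    = {ratio_T:.4f}")
print(f"M_BH/M_sph = {ratio_BH:.4f}")
```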
The growth of both the central black hole and the stellar body of the galaxy then terminates, unless fresh gas and stars are brought in from outside, for example by a merger. Assuming that most of the mass within the region where $`\tau _\mathrm{T}\approx 1`$ has been accreted into the black hole, it is found that its mass $`M_{\mathrm{BH}}\propto M_{\mathrm{sph}}`$, the mass of the spheroid of the galaxy. The rough explanation for the proportionality is that a more massive galaxy has more cold gas and so requires a more powerful wind to eject the gas. A stronger wind requires a more massive black hole. The normalization results by relating the wind power to the Eddington limit. The model accounts for the bulk of the X-ray Background, which requires that most accretion power, resulting from the growth of massive black holes, is obscured. Also, such obscured accretion leads to agreement with the local mass density of black holes (Fabian & Iwasawa 1999). Most of the absorbed radiation will be reradiated in the far infrared/sub-mm bands and contribute to the source counts and backgrounds there. It can plausibly account for some of the sub-mm sources recently discovered by SCUBA (Barger et al 1998; Hughes et al 1998; Blain et al 1999). Indeed it predicts a population of distant Ultra-Luminous Infrared Galaxies (ULIRGs; Sanders & Mirabel 1996) associated with the main growth phase of massive black holes and distinct from the nearby population, which is probably due to mergers briefly fuelling central starbursts and fully grown black holes. (Whether radiation from the black hole can then dominate the ULIRG emission depends on the Eddington limit, i.e. the mass of the black hole, relative to the strength of the starburst; see e.g. Wilman et al 1999.)
The bolometric power of a typical growing black hole must be high, $`L_{\mathrm{Bol}}>10^{46}\mathrm{erg}\,\mathrm{s}^{-1}`$, which means an observed 2–10 keV flux, when $`N_\mathrm{H}\approx 2\times 10^{24}\mathrm{cm}^{-2}`$, of about $`3\times 10^{-15}\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$. Current ASCA (e.g. Ueda et al 1998) and BeppoSAX (Fiore et al 1999) hard X-ray surveys have not probed deep enough to reveal these objects, although they should be obvious in the deeper surveys planned for Chandra and XMM. These surveys will enable the major growth phase of black holes and, with optical/infrared identifications, their redshift distribution to be studied. Note that the obscured growth phase of a massive black hole represents a distinctly different phase from the briefer unobscured phase predicted after the gas is ejected and the quasar dies, or any later phase, obscured or unobscured, when the quasar is revived by a merger or other transient fuelling event. These last phases are the ones which have been observed so far; the major growth phase has not. It is unlikely therefore that the properties of the major growth phase are obtainable by any simple extrapolation from observations of any transient recent phases, i.e. from studies of the properties of active galaxies at low redshift ($`z<0.5`$). It is important that the cold gas in the young galaxy, which provides the X-ray (and other waveband) obscuration, be metal enriched, and probably dusty. This is likely to be a consequence of continued star formation, particularly of massive stars, throughout the galaxy. The energy of stellar winds and supernovae helps to keep the cold gas space covering. Note that the injected metals and stellar mass loss will be distributed both as assumed for the stars and the cold gas. Indeed the metallicity of the obscuring gas closest to the black hole may be higher than the solar values, which leads to better fits to the X-ray Background spectrum (Wilman & Fabian 1999).
This also accounts for the high metallicity inferred in the broad-line region gas for many quasars (Hamann & Ferland 1999). The column density distribution of the gas obscuring the radiation from the growing black hole will be such that most power is emitted just before the gas is ejected, which happens around $`\tau _\mathrm{T}\approx 1`$. The fraction of the power radiated at optical depths greater than $`\tau _\mathrm{T}`$ scales roughly as $`\tau _\mathrm{T}^{-1}`$. Angular momentum and other factors may cause the gas in the young spheroidal galaxy, and the wind, not to be completely spherical. Thus when the wind ejects the gas it may do so mostly along one axis and only later eject gas in other directions (if at all). This means that the column density distribution for $`\tau _\mathrm{T}<1`$ is complicated to predict and is best found from the shape of the X-ray Background spectrum. An important consequence for the stellar radiation in young growing galaxies is that most of it too is obscured. This agrees with recent evidence for the star formation history of the Universe from sub-mm and other observations. The ejection of the metal-rich cold gas in the young galaxy should terminate its stellar growth, as well as the growth of the black hole. The final appearance of a galaxy is thus significantly affected by its central black hole. How far the gas is ejected depends on how long the unobscured quasar phase lasts and on the surrounding gas mass and density; whether, for example, the galaxy is in a group or cluster. The most massive black holes will be in the most massive galaxies and may last longest in the unobscured quasar phase. They might also be surrounded by a hot intragroup medium which could prevent much of the hotter space-filling phase from being ejected. If a surrounding hot phase is a necessary ingredient for a radio source then such objects might be more likely to be radio galaxies. 
During the ejection phase the quasar might be classed as a BAL, and later it might be seen to be surrounded by extended metal-rich filaments, depending on the velocity of ejection of the cold gas. The metal-rich gas, if mixed with surrounding hot intracluster gas, will enhance the local metallicity, providing one source for the extensive metallicity gradients found by X-ray spectroscopy around many cD galaxies in clusters (Fukazawa et al 1994). Finally, it is noted that the model requires a significant power output in the form of a wind associated with the growth of black holes. This wind power is dissipated as heat in the surrounding medium. It may have a marked effect on surrounding intracluster gas (Ensslin et al 1998; Wu, Fabian & Nulsen 1999), possibly contributing to the heating required to change the X-ray luminosity–temperature relation, $`L_\mathrm{x}\propto T_\mathrm{x}^\alpha `$, from the predicted one with $`\alpha \approx 2`$ to the observed one with $`\alpha \approx 3`$. The estimates of Wu et al (1999) indicate that it will also heat the general intergalactic medium to a temperature of $`10^7\mathrm{K}`$ at $`z\approx 1`$–2. In summary, the growth of both massive black holes and galactic bulges is a highly obscured, and related, process, best observed directly in the hard X-ray band and indirectly, through radiation of the absorbed energy, in the sub-mm band. ## 5 Acknowledgements I thank the referee for comments and The Royal Society for support.
no-problem/9908/hep-lat9908057.html
ar5iv
text
# Elsevier instructions for the preparation of a 2-column format camera-ready paper in LaTeX ## 1 FORMAT Text should be produced within the dimensions shown on these pages: each column 7.5 cm wide with 1 cm middle margin, total width of 16 cm and a maximum length of 20.2 cm on first pages and 21 cm on second and following pages. The document style uses the maximal stipulated length apart from the following two exceptions: (i) it does not begin a new section directly at the bottom of a page, but transfers the heading to the top of the next page; (ii) it never (well, hardly ever) exceeds the length of the text area in order to complete a section of text or a paragraph. ### 1.1 Spacing We normally recommend the use of 1.0 (single) line spacing. However, when typing complicated mathematical text, LaTeX automatically increases the space between text lines in order to prevent sub- and superscript fonts overlapping one another and making your printed matter illegible. ### 1.2 Fonts These instructions have been produced using 10 point Computer Modern Roman. Other recommended fonts are 10 point Times Roman, New Century Schoolbook, Bookman Light and Palatino. ## 2 PRINTOUT The most suitable printer is a laser printer. A dot matrix printer should only be used if it possesses an 18 or 24 pin printhead (“letter-quality”). The printout submitted should be an original; a photocopy is not acceptable. Please make use of good quality plain white A4 (or US Letter) paper size. The dimensions shown here should be strictly adhered to: do not make changes to these dimensions, which are determined by the document style. The document style leaves at least 3 cm at the top of the page before the head, which contains the page number. Printers sometimes produce text which contains light and dark streaks, or has considerable lighting variation either between left-hand and right-hand margins or between text heads and bottoms. 
To achieve optimal reproduction quality, the contrast of text lettering must be uniform, sharp and dark over the whole page and throughout the article. If corrections are made to the text, print completely new replacement pages. The contrast on these pages should be consistent with the rest of the paper, as should text dimensions and font sizes. ## 3 TABLES AND ILLUSTRATIONS Tables should be made with LaTeX; illustrations should be originals or sharp prints. They should be arranged throughout the text and preferably be included on the same page as they are first discussed. They should have a self-contained caption and be positioned in flush-left alignment with the text margin within the column. If they do not fit into one column they may be placed across both columns (using `\begin{table*}` or `\begin{figure*}` so that they appear at the top of a page). ### 3.1 Tables Tables should be presented in the form shown in Table 1. Their layout should be consistent throughout. Horizontal lines should be placed above and below table headings, above the subheadings and at the end of the table above any notes. Vertical lines should be avoided. If a table is too long to fit onto one page, the table number and headings should be repeated above the continuation of the table. For this you have to reset the table counter with `\addtocounter{table}{-1}`. Alternatively, the table can be turned by $`90^{\circ }`$ (‘landscape mode’) and spread over two consecutive pages (first an even-numbered, then an odd-numbered one) created by means of `\begin{table}[h]` without a caption. To do this, you prepare the table as a separate document and attach the tables to the empty pages with a few spots of suitable glue. ### 3.2 Line drawings Line drawings should be drawn in India ink on tracing paper with the aid of a stencil or should be glossy prints of the same; computer prepared drawings are also acceptable. 
They should be attached to your manuscript page, correctly aligned, using suitable glue and not transparent tape. When placing a figure at the top of a page, the top of the figure should be at the same level as the bottom of the first text line. All notations and lettering should be no less than 2 mm high. The use of heavy black, bold lettering should be avoided as this will look unpleasantly dark when printed. ### 3.3 Black and white photographs Photographs must always be sharp originals (not screened versions) and rich in contrast. They will undergo the same reduction as the text and should be pasted on your page in the same way as line drawings. ### 3.4 Colour photographs Sharp originals (not transparencies or slides) should be submitted close to the size expected in publication. Charges for the processing and printing of colour will be passed on to the author(s) of the paper. As costs involved are per page, care should be taken in the selection of size and shape so that two or more illustrations may be fitted together on one page. Please contact the Technical Editor in the Camera-Ready Publications Department at Elsevier for a price quotation and layout instructions before producing your paper in its final form. ## 4 EQUATIONS Equations should be flush-left with the text margin; LaTeX ensures that the equation is preceded and followed by one line of white space. LaTeX provides the document-style option fleqn to get the flush-left effect. $$H_{\alpha \beta }(\omega )=E_\alpha ^{(0)}(\omega )\delta _{\alpha \beta }+\langle \alpha |W_\pi |\beta \rangle $$ (1) You need not put in equation numbers, since this is taken care of automatically. The equation numbers are always consecutive and are printed in parentheses flush with the right-hand margin of the text and level with the last line of the equation. For multi-line equations, use the eqnarray environment. For complex mathematics, use the AMS-LaTeX package. References should be collected at the end of your paper. 
Do not begin them on a new page unless this is absolutely necessary. They should be prepared according to the sequential numeric system, making sure that all material mentioned is generally available to the reader. Use `\cite` to refer to the entries in the bibliography so that your accumulated list corresponds to the citations made in the text body. Above we have listed some references according to the sequential numeric system.
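A minimal sketch of the sequential numeric citation scheme described above (the document class and the bibliography entry are placeholders, not part of the Elsevier instructions):

```latex
\documentclass{article}
\begin{document}
% In the text body, \cite produces the sequential number:
A related result was obtained earlier~\cite{example}.

% References collected at the end of the paper:
\begin{thebibliography}{9}
\bibitem{example} A. Author and B. Author, Example J. Phys. 1 (1999) 1.
\end{thebibliography}
\end{document}
```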
no-problem/9908/nucl-ex9908004.html
ar5iv
text
# High-precision Studies of the 3He(e,e′p) Reaction at the Quasielastic Peak ## Abstract Precision studies of the reaction <sup>3</sup>He(e,e′p) using the three-spectrometer facility at the Mainz microtron MAMI are presented. All data are for quasielastic kinematics at $`|\stackrel{}{q}|=685`$ MeV/c. Absolute cross sections were measured at three electron kinematics. Over the measured missing-momentum range from 10 to 165 MeV/c, no strength is observed for missing energies higher than 20 MeV. Distorted momentum distributions were extracted for the two-body breakup and the continuum. The longitudinal and transverse behavior was studied by measuring the cross section for three photon polarizations. The longitudinal and transverse nature of the cross sections is well described by a currently accepted and widely used prescription of the off-shell electron-nucleon cross-section. The results are compared to modern three-body calculations and to previous data. The study of few-body nuclear systems has acquired new importance due to recent developments on both theoretical and experimental fronts. Several schemes have been developed to perform microscopic calculations which are based on the NN interaction rather than on a mean-field approach. These include non-relativistic Faddeev-type calculations for three-body systems and Monte-Carlo variational calculations for three- and four-body systems. Fully-relativistic calculations are also being developed for three bodies. New experimental facilities with high-quality continuous-wave (cw) electron beams and high-resolution spectrometers provide the means to rigorously test these modern calculations. In particular, precision measurements of electromagnetic response functions, which are selectively sensitive to the various components of the nuclear currents, are possible. 
The first results of a program to study <sup>3,4</sup>He(e,e′p) at a fixed 3-momentum transfer, $`q=|\stackrel{}{q}|=685`$ MeV/c, and three energy transfers, $`\omega `$, corresponding to kinematics on top of the quasielastic peak, well above it (“dip”) and well below it are reported here. By varying the energy transfer, we hope to selectively enhance or suppress various effects contributing to the interaction. In order to further understand these contributing effects, we studied the longitudinal and transverse components of the cross section by measuring it at three electron scattering angles (virtual photon polarizations, $`ϵ`$). The ongoing research program was carried out in the three-spectrometer facility at the Mainz microtron MAMI by the A1 collaboration. The results reported here are from measurements performed on <sup>3</sup>He in quasielastic kinematics ($`x_\mathrm{B}=1`$; $`\omega =228`$ MeV), where the dominant mechanism is expected to be the quasi-free knockout of a single proton. Further experimental details are described in Ref. . Few exclusive and semi-exclusive electron-scattering measurements have been performed on <sup>3</sup>He, and the existing data are inconclusive. The <sup>3</sup>He(e,e′p) reaction was measured at Saclay at 3-momentum transfers of 300 and 430 MeV/c. Measurements covered the missing energy range 0–70 MeV and missing momentum range $`0\le p_m\le 300`$ MeV/c. Momentum distributions were extracted for the two-body breakup channel, <sup>3</sup>He(e,e′p)<sup>2</sup>H. Another <sup>3</sup>He(e,e′p) measurement at Saclay in dip kinematics suggested the existence of two-body (short-range) correlations. The latter data exhibit some agreement with calculations by Laget . Both experiments were performed in perpendicular kinematics. The longitudinal and transverse response functions were measured for <sup>3</sup>He(e,e′p)<sup>2</sup>H in quasielastic kinematics at Saclay over the $`q`$ range of 350–700 MeV/c. 
At the lower $`q`$, the longitudinal response is quenched by about 30% relative to the transverse, while at $`q>500`$ MeV/c, the experimental spectral functions are consistent with PWIA predictions. In another L/T measurement in the dip region at missing momentum 260 MeV/c, the longitudinal spectral function was observed to be strongly quenched for both the two-body breakup and the continuum channels, in agreement with calculations by Laget which include meson-exchange currents and final-state interactions (FSI). A few experiments are planned at the high momentum-transfer range now available at TJNAF. In particular, a measurement of cross sections and response functions at high $`q`$ and for high $`p_m`$ is scheduled for late 1999. In this Letter, we report on measurements of the <sup>3</sup>He(e,e′p) reaction at a central 3-momentum transfer $`q=685`$ MeV/c and central energy transfer $`\omega =228`$ MeV, corresponding to the center of the quasielastic peak. Three incident beam energies ($`E_b=`$ 540, 675, and 855 MeV) with electron scattering angles ($`\theta _e=`$ 103.85°, 72.05°, and 52.36°) were used. This corresponds to three virtual photon polarizations ($`ϵ=`$ 0.214, 0.457, 0.648) respectively. Protons with momenta ranging from 393 to 710 MeV/c were detected in parallel kinematics $`(\stackrel{}{p_p}\parallel \stackrel{}{q})`$ to facilitate the study of the longitudinal and transverse components of the cross section. The incident cw beam current was 40 $`\mu `$A, and the beam was rastered on the target by $`\pm 3.5`$ mm in both the horizontal and vertical directions. Scattered electrons were detected in Spectrometer A ($`\mathrm{\Delta }\mathrm{\Omega }\approx 21`$ msr) and time-coincident protons in Spectrometer B ($`\mathrm{\Delta }\mathrm{\Omega }\approx 5.6`$ msr). Spectrometer C was used as a luminosity monitor by detecting electrons at a fixed setting for each kinematics. 
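The quoted polarizations follow from the kinematics via the standard definition $`ϵ=[1+2(|\stackrel{}{q}|^2/Q^2)\mathrm{tan}^2(\theta _e/2)]^{-1}`$ with $`Q^2=|\stackrel{}{q}|^2\omega ^2`$ (the standard definition, not spelled out in the text); a short Python check:

```python
import math

def virtual_photon_polarization(q3, omega, theta_e_deg):
    """Standard virtual photon polarization
    eps = [1 + 2(|q|^2/Q^2) tan^2(theta_e/2)]^(-1),  Q^2 = |q|^2 - omega^2.
    q3 and omega in MeV, theta_e_deg in degrees."""
    q2 = q3 * q3
    Q2 = q2 - omega * omega
    t2 = math.tan(math.radians(theta_e_deg) / 2.0) ** 2
    return 1.0 / (1.0 + 2.0 * (q2 / Q2) * t2)

# Reproduces the three quoted settings at q = 685 MeV/c, omega = 228 MeV:
#   theta_e = 103.85, 72.05, 52.36 deg  ->  eps ~ 0.214, 0.457, 0.648
```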
The solid angle subtended by the spectrometers for an extended target was extensively studied, and a detailed description can be found in Ref. . The target consisted of cold <sup>3</sup>He gas ($`T=20`$–23 K and $`P=5`$–10 atm) encapsulated in a stainless steel sphere 8 cm in diameter and having 82 $`\mu `$m thick walls. A fan circulated the target gas from the cell through a heat exchanger to dissipate the heat deposited by the incident electron beam. The target density was determined at each beam energy by measuring elastic electron scattering at a low beam current (5 $`\mu `$A) and comparing the result to published cross sections. The target density (which varied with the incident electron beam current) was then monitored and determined continuously using the singles rate in Spectrometer C. The systematic error on the overall normalization is $`\pm `$5% and is dominated by the uncertainties in the <sup>3</sup>He elastic cross sections and the monitoring of the target density over time. At each kinematic setting, the <sup>3</sup>He(e,e′p) cross section was measured as a function of both missing momentum ($`p_m`$) and missing energy ($`E_m`$). The missing momentum range accessed was different for each kinematic setting. Hence, at $`ϵ=`$ (0.214, 0.457, 0.648) the $`p_m`$ range was (10–95, 10–125, 10–165 MeV/c) respectively. Great care was given to the radiative corrections, which were performed by unfolding the radiative tails in a 2-dimensional $`(E_m,p_m)`$ space. The correction factor proposed by Penner was used for internal (Schwinger) radiation, and the factor proposed by Friedrich for external radiation, which included substantial contributions from the target walls. Additional details about the radiative corrections can be found in Ref. . Fig. 1(a) displays radiatively-corrected missing energy spectra for <sup>3</sup>He(e,e′p) at the three different values of $`ϵ`$ and for an arbitrarily chosen missing-momentum bin of 40–50 MeV/c. 
The high resolution evident in the figure enables accurate definition of the parameters ($`E_m,p_m`$) at which the cross sections were extracted. The dominant features are the two-body breakup (<sup>3</sup>He → p + <sup>2</sup>H) peak at 5.49 MeV and the threshold for three-body breakup (<sup>3</sup>He → p + p + n) at 7.72 MeV. Higher missing energies correspond to the continuum of unbound states of the undetected pn-pair. The continuum includes the excitation of the unbound singlet S-state of the deuteron at $`E_m=(7.72+0.55)`$ MeV. There is no strength at $`E_m>20`$ MeV in the radiatively-corrected spectrum. We note that there was no measured strength at $`E_m>20`$ MeV for the entire $`p_m`$ range for which these results are reported. The 2-dimensional experimental spectral function, $$S^{\mathrm{exp}}(E_m,p_m)=\frac{1}{p_p^2\,\sigma _{ep}^{CC1}}\frac{d^6\sigma }{d\mathrm{\Omega }_ed\mathrm{\Omega }_pdE_edp_p},$$ (1) where $`\sigma _{ep}^{CC1}`$ is the electron-nucleon off-shell cross section of de Forest , has been determined. Fig. 1(b) displays the measured spectral functions for the three virtual photon polarizations and for an arbitrary missing momentum bin $`40\le p_m\le 50`$ MeV/c. The extracted <sup>3</sup>He(e,e′p) spectral functions are found to be independent of $`ϵ`$ within statistical and systematic uncertainties over the entire region $`0\le E_m\le 20`$ MeV. Also displayed in the figure is a theoretical spectral function from Schulze and Sauer for $`p_m=45`$ MeV/c. The theoretical spectral function describes the shape of the experimental one well, but the magnitude is approximately 20% larger. Note that the experimental values were affected by FSI which account for at least part of this difference. For the two-body breakup channel, <sup>3</sup>He(e,e′p)<sup>2</sup>H, the measured cross section as a function of $`p_m`$ for the three values of $`ϵ`$ is shown in Fig. 2(a). A strong dependence on $`ϵ`$ is observed. 
To gain insight into the $`ϵ`$-dependence of the cross section, the measured <sup>3</sup>He proton momentum distributions $`\rho _2(p_m)`$, obtained by integrating $`S^{\mathrm{exp}}(E_m,p_m)`$ over the two-body breakup peak, are plotted in Fig. 2 (b). The momentum distribution shows very little dependence on the virtual photon polarization, indicating that the longitudinal/transverse behavior of the two-body breakup cross section is explained well by $`\sigma _{ep}^{CC1}`$. The data also overlap well with published results obtained in non-parallel kinematics and at $`q=430`$ and 300 MeV/c, supporting the PWIA hypothesis that S<sup>exp</sup>($`E_m,p_m`$) can be factorized in the cross section. The remaining $`ϵ`$ dependence of the cross sections is evaluated by comparing the integrals $$N(ϵ)=4\pi \int _{10}^{100}\rho _2(p_m)\,p_m^2\,dp_m$$ (2) in Fig. 2 (b). The uncertainties quoted for $`N(ϵ)`$ are statistical only. The values of $`N(ϵ)`$ vary by about 10% for $`ϵ=0.214`$–0.648. We note that, to the extent this difference may be significant, the cross sections are slightly more longitudinal than those of $`\sigma _{ep}^{CC1}`$. Also shown in the figure are three calculations by Schulze and Sauer, Salmè et al., and Forest et al. The Schulze and Sauer momentum distribution was calculated using the Paris potential, and is very similar to that of Salmè. The calculations by Forest et al. use the Argonne v18 NN-potential and the Urbana IX three-nucleon-interaction potential together with variational Monte-Carlo techniques. Note that the theoretically-extracted momentum distributions do not take into account final-state interactions which do affect the measured distributions. Hence, the differences of about 20–25% between the measured and calculated integrals, N, in Fig. 2 (b) contain (but are not necessarily restricted to) the effects of FSI. A similar analysis can be performed at higher excitation energies. 
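Eq. (2) is a straightforward radial integral of the measured distribution. A minimal numerical sketch (the trapezoidal rule and the toy $`\rho _2`$ in the test are illustrative assumptions; the real $`\rho _2`$ comes from the data):

```python
import math

def n_of_eps(rho2, p_min=10.0, p_max=100.0, n=1000):
    """Evaluate N = 4*pi * integral_{p_min}^{p_max} rho2(p) p^2 dp (Eq. 2)
    with the composite trapezoidal rule; p in MeV/c, rho2 a callable."""
    h = (p_max - p_min) / n
    total = 0.0
    for i in range(n + 1):
        p = p_min + i * h
        weight = 0.5 if i in (0, n) else 1.0  # endpoint weights
        total += weight * rho2(p) * p * p
    return 4.0 * math.pi * h * total
```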
The experimentally-extracted momentum distributions for the continuum, $`\rho _3(p_m)`$, integrated over the range $`E_m`$ = 7–20 MeV, are presented in Fig. 3. They display about a 10% dependence on $`ϵ`$, similar to the two-body breakup channel. Also shown in the figure are older data from Saclay and a calculated momentum distribution from Schulze and Sauer . We note that at $`p_m<50`$ MeV/c, our data set is well below that of Jans et al. The theoretical values from Schulze and Sauer are again approximately 20% larger than the data, which are subject to the effects of FSI. We have performed precise measurements of the <sup>3</sup>He(e,e′p) reaction in quasielastic kinematics. The data span the missing momentum range up to 95–165 MeV/c, and the missing energy range up to 80 MeV. After radiative corrections, there is no observed strength at excitations higher than 20 MeV. The dependence on the virtual-photon polarization of the observed cross section (and hence the longitudinal/transverse behavior) over the entire excitation range is almost entirely due to the off-shell e-N cross section, and is described well by $`\sigma _{ep}^{CC1}`$. We conjecture that meson exchange and isobar currents, which are predominantly transverse, cannot be very important because the experimental results scale as $`\sigma _{ep}^{CC1}`$. We note that in a similar region of momentum transfer ($`q=500`$ MeV/c and $`q=1050`$ MeV/c), the inclusive <sup>3</sup>He(e,e′) cross section, which integrates mainly over non-parallel kinematics, exhibits similar behavior. Theoretical calculations of the spectral function and momentum distributions reproduce well the shape of the measured quantities, but are 10–20% larger in magnitude. This difference can be attributed, at least in part, to the effects of FSI. A theoretical study which includes an exact treatment of FSI is now in progress. We would like to thank the MAMI staff for providing excellent support for this experiment. 
This Work was supported by the Deutsche Forschungsgemeinschaft (SFB 201), by the Bundesministerium für Forschung und Technologie and by the U.S. Department of Energy and the National Science Foundation.
no-problem/9908/hep-ph9908388.html
ar5iv
text
# 1 Introduction ## 1 Introduction The production of vector bosons $`W^\pm `$, $`Z^0`$ and $`\gamma ^*`$ in high energy hadronic collisions is one of the most important processes that should be investigated in order to test the Standard Model of electroweak interactions and Quantum Chromodynamics. By measuring the rapidity or the mass distribution of the leptonic decay products one can also investigate the quark and antiquark distributions inside the colliding hadrons. For such processes, generated at the lowest order via a hard scattering like $`q\overline{q}^{\prime }\to V`$, where $`q`$ and $`q^{\prime }`$ have the same flavour for $`Z/\gamma ^*`$ production and different flavour for $`W`$ production, higher order corrections due to multiple gluon emission in the initial-state radiation will play a crucial role. Many analyses have been devoted to the phenomenology of vector boson production, particularly to the differential distribution with respect to the transverse momentum $`q_T`$ of the produced vector boson. The approach of resummation of large logarithms of the ratio $`m_V/q_T`$ has been followed in many cases. This was originally proposed by Dokshitzer, Dyakonov and Troyan (DDT) and then accomplished by Collins, Soper and Sterman (CSS), who performed the leading logarithmic resummation in the space of the impact parameter $`b`$, which is the Fourier space conjugate to $`q_T`$. Ladinsky and Yuan implemented the CSS results numerically. Resummations of the initial-state multiple emission have been performed in the $`b`$-space and more recently in both the $`b`$\- and the $`q_T`$-space. There the resummation is also matched with the exact perturbative first-order result, which is important at high $`q_T`$. 
In the $`b`$-space approach non-perturbative effects in the region of large values of $`b`$ are taken into account via Gaussian functions in $`b`$, corresponding to a smearing of the transverse momentum distribution, which can also be directly implemented in $`q_T`$-space. Another possible approach to studying the phenomenology of vector bosons is to use Monte Carlo simulations of the initial-state parton shower. Standard parton showers are performed in the leading-log approximation, therefore they are reliable only in the soft or collinear region of the phase space, corresponding to low $`q_T`$ values for the produced vector boson. If we wish to study the high $`q_T`$ region of the spectrum it is necessary to provide parton showers with matrix-element corrections. Refs. implement matrix-element corrections to simulations of vector boson production in the PYTHIA Monte Carlo event generator and compare them with the results obtained at the Tevatron collider by the DØ collaboration. In this paper we reconsider this problem and apply matrix-element corrections to the initial-state radiation of the HERWIG parton shower, following a previously proposed general prescription, as we already did for $`e^+e^{-}`$ annihilation, Deep Inelastic Scattering and top quark decays. It is worth recalling that at present no Monte Carlo program including the full next-to-leading order (NLO) results exists, as it is not known how to set up a full NLO calculation in a probabilistic way. When providing parton showers with matrix-element corrections we still only get the leading-order normalization, because in the initial-state cascade we only include leading logs and not the full one-loop virtual contributions. In Section 2 we review the basis of the HERWIG parton shower algorithm for the initial-state radiation in hadronic collisions. In Sections 3 and 4 we discuss the hard and soft matrix-element corrections to vector boson production. 
In Section 5 we plot some relevant phenomenological distributions at the centre-of-mass energy of the Tevatron and of the LHC using the new version of HERWIG. We compare our results with previous versions of HERWIG, resummed calculations and experimental data. Finally, in Section 6 we discuss our results and make some concluding comments. ## 2 The parton shower algorithm The production of a vector boson $`V`$ in hadronic collisions is given at lowest order by the elementary parton-level process $`q\overline{q}\to V`$. In the following, we shall assume that the vector boson decays into a lepton pair (Drell–Yan interactions). The first-order tree-level corrections to such a process are given by the processes $`q\overline{q}\to Vg`$ and $`qg\to Vq`$ ($`\overline{q}g\to V\overline{q}`$), where the initial-state partons can come from either incoming hadron. A possible method to implement the initial-state parton shower in a probabilistic way is the Altarelli–Parisi approach, in which the initial energy scale $`Q_0`$ is increased up to the probed value $`Q`$ and all the effect of the emitted partons is integrated out. On the contrary, standard Monte Carlo programs explicitly keep track of the accompanying radiation, by implementing the so-called ‘backward evolution’ in which the hard scale is reduced away from the hard vertex, tracing the hard scattering partons back into the original incoming hadrons and explicitly generating the distribution of emitted partons. 
In the leading infrared approximation, the probability of the emission of an additional parton from a parton $`i`$ is given by the general result for the radiation of a soft/collinear parton: $$dP=\frac{dq_i^2}{q_i^2}\frac{\alpha _S\left(\frac{1-z_i}{z_i}q_i\right)}{2\pi }P_{ab}(z_i)dz_i\frac{\mathrm{\Delta }_{S,a}(q_{i\mathrm{max}}^2,q_c^2)}{\mathrm{\Delta }_{S,a}(q_i^2,q_c^2)}\frac{x_i/z_i}{x_i}\frac{f_b(x_i/z_i,q_i^2)}{f_a(x_i,q_i^2)}.$$ (1) The HERWIG parton shower is ordered according to the variable $`q_i^2=E^2\xi _i`$, where $`E`$ is the energy of the parton that split and $`\xi _i=\frac{p_hp_i}{E_hE_i}`$, where $`p_i`$ is the four-momentum of the emitted parton; $`p_h`$ is a lightlike vector with momentum component parallel to the incoming hadron; $`E_h`$ and $`E_i`$ are the energy components of $`p_h`$ and $`p_i`$; and $`z_i`$ is the energy fraction of the outgoing space-like parton (which goes on to participate in the hard process) with respect to the incoming one (i.e. $`z_i=1-E_i/E`$). In the approximation of massless partons, we have $`\xi _i=1-\mathrm{cos}\theta `$, where $`\theta `$ is the emission angle from the incoming hadron direction. When all emission is soft, the energy of the emitted partons is negligible ($`E_i\ll E`$), therefore ordering according to $`q_i^2`$ corresponds to angular ordering; when the emission is hard, the energy of the radiated parton is similar to that of the splitting parton, so $`q_i^2`$ ordering is equivalent to transverse momentum ordering. In (1) $`f_a(x_i,q_i^2)`$ is the parton distribution function for the partons of type $`a`$ in the initial-state hadron, $`x_i`$ being the parton energy fraction. At each step of the backward evolution a parton of type $`a`$, a quark for example, can evolve back to any other type of parton $`b`$, in this case either a quark of the same flavour or a gluon, having a higher value of $`x_i`$. 
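The no-emission probability in Eq. (1) is what a backward-evolution step actually samples. A toy illustration for a deliberately simplified kernel $`dP=\overline{\alpha }\,dq^2/q^2`$ with constant $`\overline{\alpha }`$, ignoring the $`z`$ integral, the parton distribution functions and the running coupling (so this is a sketch of the sampling idea, not the HERWIG algorithm itself):

```python
import random

def next_scale(q2_start, q2_cutoff, alpha_bar, rng=random.random):
    """One toy backward-evolution step.  For the simplified kernel
    dP = alpha_bar * dq^2/q^2 the no-emission (Sudakov) factor is
    Delta(q2) = (q2 / q2_start) ** alpha_bar, so solving Delta = r
    for a uniform random number r gives the scale of the next emission.
    Returns None if the sampled scale falls below the cutoff, i.e.
    no resolvable emission occurs."""
    r = rng()
    q2 = q2_start * r ** (1.0 / alpha_bar)
    return q2 if q2 > q2_cutoff else None
```

In a full shower the overestimate-and-veto method corrects such a simple kernel to the complete splitting probability of Eq. (1), including the PDF ratio.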
The quantity $`\mathrm{\Delta }_{S,a}(q_i^2,q_c^2)`$ is the Sudakov form factor, resulting from the leading-logarithmic resummation and representing the probability that no resolvable radiation is emitted from a parton of type $`a`$ whose upper limit on emission is $`q_i^2`$, with $`q_c^2`$ being, in the case of HERWIG, a cutoff on transverse momentum. This cutoff implies a minimum value of the evolution scale $`q_i^2`$ that can be reached, $`q_i^2>4q_c^2`$, but in practice this is smaller than the smallest scale at which most standard parton distribution function sets are reliable, so an additional cutoff on $`q_i^2`$ has to be applied. The ratio of form factors appearing in Eq. (1) represents the probability that the emission considered is the first, i.e. the one with the highest value of $`q_i^2`$. In terms of Feynman diagrams, the Sudakov form factor sums up all-order virtual and unresolved contributions. $`P_{ab}(z_i)`$ is the Altarelli-Parisi splitting function for a parton of type $`b`$ to evolve to one of type $`a`$ with momentum fraction $`z_i`$. $`\alpha _S`$ is the strong coupling, evaluated at a scale of order the transverse momentum of the emitted parton, which sums large higher order corrections. This, together with the angular ordering condition, makes HERWIG accurate to next-to-leading order at large $`x`$ . The definition of the variables $`q_i^2`$ and $`z_i`$ is not Lorentz-invariant, but it is frame-dependent. Colour coherence implies that for any pair of colour-connected partons $`i`$ and $`j`$ the maximum values of the $`q`$ variables are related by $`q_{i\mathrm{max}}q_{j\mathrm{max}}=p_ip_j`$. Therefore one is free to choose the frame in which to define the initiating values $`q_{i\mathrm{max}}`$ and $`q_{j\mathrm{max}}`$, with the only prescription being that their product must equal $`p_ip_j`$. The subsequent emissions are then ordered in $`q_i^2`$. For vector boson production, as in most cases, symmetric limits are fixed by HERWIG, i.e. 
$`q_{i\mathrm{max}}^2=q_{j\mathrm{max}}^2=p_i\cdot p_j`$ and the energy of the parton which initiates the cascade is set to $`E=q_{\mathrm{max}}=\sqrt{p_i\cdot p_j}`$. Ordering according to $`q_i^2`$ therefore dictates $`\xi _i<z_i^2`$. After we generate the initial-state shower, the original partons are no longer on their mass shells, so energy and momentum are not automatically conserved. Energy-momentum conservation is then achieved by applying a separate boost to each jet along its own direction. As a result of this, the jet momenta are no longer equal to the parton ones, but energy and momentum are globally conserved and the vector boson acquires a transverse momentum from the recoil against the emitted partons. Since the mass shift becomes negligible in the soft and collinear limits, the precise details of this kinematic reshuffling are not fixed a priori, but are free choices of the model. Once the backward evolution has terminated, a model to reconstruct the original hadron is required. In HERWIG, if the backward evolution has not resulted in a valence quark, additional non-perturbative parton emission is generated to evolve back to a valence quark. Such a valence quark is assigned a non-perturbative intrinsic transverse momentum in the hadron, with a Gaussian distribution whose width is an adjustable parameter of the model. In the following, when discussing the phenomenological implications of our work, we shall consider both HERWIG’s default value of zero, and an increased value of 1 GeV, bracketing the reasonable range of non-perturbative effects. The algorithm so far discussed is reliable only in the soft or collinear limits and, since it only describes radiation for $`\xi _i<z_i^2`$, there are regions of the phase space that are completely empty (‘dead zones’). The radiation in such regions, according to the full matrix element, should be suppressed, but not completely absent as happens in HERWIG.
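The effect of the adjustable intrinsic transverse momentum can be sketched as follows. This is our own minimal illustration of Gaussian smearing, not HERWIG's actual implementation, and the function names and the convention that `width` is the r.m.s. of the two-dimensional kick are our assumptions.

```python
import math
import random

def intrinsic_kick(width, rng):
    """Sample a 2-d intrinsic transverse momentum (kx, ky).

    Each component is Gaussian with standard deviation width/sqrt(2), so
    that the r.m.s. of |k| equals `width` (width = 0 reproduces the default
    of no intrinsic smearing)."""
    sigma = width / math.sqrt(2.0)
    return rng.gauss(0.0, sigma), rng.gauss(0.0, sigma)

def smear_qt(qt, width, rng):
    """Add independent intrinsic kicks from the two incoming hadrons to a
    boson produced with perturbative transverse momentum qt (taken along
    the x axis; the azimuth is irrelevant for the |q_T| spectrum)."""
    kx1, ky1 = intrinsic_kick(width, rng)
    kx2, ky2 = intrinsic_kick(width, rng)
    return math.hypot(qt + kx1 + kx2, ky1 + ky2)
```

Since the kicks from the two hadrons add in quadrature, the smearing of the boson $`q_T`$ has mean squared value twice the single-hadron width squared, which is why only the lowest one or two bins of the spectrum are sensitive to it.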
We therefore need to improve the HERWIG model by implementing matrix-element corrections. As usual, this method works in two steps: we populate the missing phase space region by generating radiation according to a distribution obtained from the first-order matrix-element calculation (‘hard corrections’); we correct the algorithm in the already-populated region using the matrix-element probability distribution whenever an emission is capable of being the ‘hardest so far’ (‘soft corrections’). ## 3 Hard Corrections In order to implement the hard and soft matrix-element corrections to simulations of the initial-state radiation in Drell–Yan processes, we firstly have to relate the HERWIG variables $`\xi `$ and $`z`$ to the kinematic ones we use in the matrix-element calculation. For the process $`q(p_1)\overline{q}(p_2)\to g(p_3)V(q)`$ we parametrize the phase space according to the Mandelstam variables $`\widehat{s}=(p_1+p_2)^2`$, $`\widehat{t}=(p_1-p_3)^2`$ and $`\widehat{u}=(p_2-p_3)^2`$. Throughout this paper, we neglect the parton masses, so we have $`\widehat{s}+\widehat{t}+\widehat{u}=m_V^2`$. The phase space limits, in terms of the variables $`\widehat{s}`$ and $`\widehat{t}`$, are: $`m_V^2<`$ $`\widehat{s}`$ $`<s,`$ (2) $`m_V^2-\widehat{s}<`$ $`\widehat{t}`$ $`<0,`$ (3) where $`s`$ is the squared energy in the centre-of-mass frame. Note that the point $`\widehat{s}=m_V^2`$ corresponds to the soft singularity, and the lines $`\widehat{t}=0`$ and $`\widehat{t}=m_V^2-\widehat{s}`$ to collinear emission. In order to relate $`\widehat{s}`$ and $`\widehat{t}`$ to $`\xi `$ and $`z`$ we use the property that the mass $`m`$ and the transverse momentum $`p_t`$ of the $`q`$–$`g`$ ($`\overline{q}`$–$`g`$) jets are conserved in the showering frame.
In doing this, we observe that, in the approximation of massless partons, the energy of the annihilating $`q\overline{q}`$ pair which produces the vector boson $`V`$ is equal to $`E^{\prime }=\sqrt{p_q\cdot p_{\overline{q}}}=\sqrt{m_V^2/2}`$. In terms of the showering variables, we obtain: $`m^2`$ $`=`$ $`{\displaystyle \frac{1-z}{z^2}}\xi m_V^2,`$ (4) $`p_t^2`$ $`=`$ $`{\displaystyle \frac{(1-z)^2}{2z^2}}\xi (2-\xi )m_V^2,`$ (5) and in terms of the matrix-element variables: $`m^2`$ $`=`$ $`-\widehat{t},`$ (6) $`p_t^2`$ $`=`$ $`{\displaystyle \frac{\widehat{u}\widehat{t}}{\widehat{s}}}.`$ (7) Combining them we get the following equations: $`z`$ $`=`$ $`{\displaystyle \frac{m_V^2}{\widehat{t}}}+\sqrt{\left({\displaystyle \frac{m_V^2}{\widehat{t}}}\right)^2-{\displaystyle \frac{2m_V^2}{\widehat{s}\widehat{t}}}(m_V^2-\widehat{t})},`$ (8) $`\xi `$ $`=`$ $`2{\displaystyle \frac{\frac{m_V^2-\widehat{t}}{\widehat{s}}-\frac{m_V^2}{\widehat{t}}-m_V^2\sqrt{\frac{1}{\widehat{t}^2}-\frac{2}{\widehat{s}\widehat{t}}+\frac{2}{m_V^2\widehat{s}}}}{1-\frac{m_V^2}{\widehat{t}}-m_V^2\sqrt{\frac{1}{\widehat{t}^2}-\frac{2}{\widehat{s}\widehat{t}}+\frac{2}{m_V^2\widehat{s}}}}}.`$ (9) The region where HERWIG does not allow gluon radiation can be derived by solving the equation $`\xi >z^2`$: $`\widehat{s}_{\mathrm{min}}<`$ $`\widehat{s}`$ $`<s`$ (10) $`\widehat{t}_{\mathrm{min}}<`$ $`\widehat{t}`$ $`<\widehat{t}_{\mathrm{max}},`$ (11) where $`\widehat{t}_{\mathrm{max}}`$ can be obtained by solving the equation $$\widehat{t}^2+3m_V^2\widehat{t}+2m_V^4\left(1-\frac{m_V^2}{\widehat{s}}\right)=0,$$ (12) or: $$\widehat{t}_{\mathrm{max}}=\frac{m_V^2}{2}\left(\sqrt{1+8m_V^2/\widehat{s}}-3\right).$$ (13) It is straightforward to write $`\widehat{t}_{\mathrm{min}}`$ as $$\widehat{t}_{\mathrm{min}}=m_V^2-\widehat{s}-\widehat{t}_{\mathrm{max}},$$ (14) while $`\widehat{s}_{\mathrm{min}}`$ can be determined by the condition $`\widehat{t}_{\mathrm{min}}(\widehat{s})<\widehat{t}_{\mathrm{max}}(\widehat{s})`$:
$$\widehat{s}_{\mathrm{min}}=\frac{m_V^2}{2}\left(7-\sqrt{17}\right).$$ (15) In Fig. 1 we plot the total phase space and HERWIG’s limits for a vector boson mass of $`m_V=80`$ GeV and a centre-of-mass energy of $`\sqrt{s}=200`$ GeV, in terms of the normalized variables $`\widehat{s}/m_V^2`$ and $`\widehat{t}/m_V^2`$. We can see that, as in cases and and differently from , the soft and the collinear singularities are well inside the HERWIG phase space: we also have an overlapping region, corresponding to a kinematic configuration in $`\widehat{s}`$ and $`\widehat{t}`$ where radiation can come from either parton. Note that the only dependence on the external physical parameters is the position of the edge at large $`\widehat{s}`$, i.e. $`\widehat{s}_{\mathrm{max}}=s`$, while the values of $`\widehat{s}_{\mathrm{min}}/m_V^2`$ and the limits in $`\widehat{t}/m_V^2`$ are independent of the centre-of-mass energy and of other kinematic conditions like the vector boson rapidity. Once we have the total and HERWIG phase space limits, in order to implement matrix-element corrections, we have to apply the exact differential cross section in the dead zone. In a general prescription is given to allow first-order corrections to quark scattering and annihilation processes once a generator of the lowest order process is available.
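The mapping (8)–(9) and the limits (13)–(15) are straightforward to check numerically. The sketch below is our own illustration (with `s`, `t`, `m2` standing for $`\widehat{s}`$, $`\widehat{t}`$, $`m_V^2`$); for $`\xi `$ it uses the equivalent compact form $`\xi =2[(m_V^2-\widehat{t})/\widehat{s}-z]/(1-z)`$, which follows from combining (4) and (5).

```python
import math

def z_of(s, t, m2):
    """Eq. (8): HERWIG energy fraction z from (s_hat, t_hat), t_hat < 0."""
    d = (m2 / t) ** 2 - 2.0 * m2 * (m2 - t) / (s * t)
    return m2 / t + math.sqrt(d)

def xi_of(s, t, m2):
    """Eq. (9), written in the compact form xi = 2[(m2 - t)/s - z]/(1 - z)."""
    z = z_of(s, t, m2)
    return 2.0 * ((m2 - t) / s - z) / (1.0 - z)

def t_max(s, m2):
    """Eq. (13): upper edge (in t_hat) of the HERWIG dead zone."""
    return 0.5 * m2 * (math.sqrt(1.0 + 8.0 * m2 / s) - 3.0)

def s_min(m2):
    """Eq. (15): smallest s_hat for which the dead zone opens up."""
    return 0.5 * m2 * (7.0 - math.sqrt(17.0))
```

Inserting the resulting $`(z,\xi )`$ back into (4) and (5) reproduces $`m^2=-\widehat{t}`$ and $`p_t^2=\widehat{u}\widehat{t}/\widehat{s}`$, which is a useful consistency check on any implementation of the mapping.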
For the Drell–Yan case, assuming that the virtuality and the rapidity of the produced vector boson are fixed by the Born process $`q\overline{q}\to V`$, the first-order differential cross section $`d\sigma `$ is proportional to the parton-level lowest order $`\sigma _0`$ according to the relation: $$d^2\sigma =\sigma _0\frac{f_{q/1}(\chi _1)f_{\overline{q}/2}(\chi _2)}{f_{q/1}(\eta _1)f_{\overline{q}/2}(\eta _2)}\frac{C_F\alpha _S}{2\pi }\frac{d\widehat{s}d\widehat{t}}{\widehat{s}^2\widehat{t}\widehat{u}}\left[(m_V^2-\widehat{u})^2+(m_V^2-\widehat{t})^2\right].$$ (16) In the above equation $`f_{q/1}(\chi _1)`$ and $`f_{\overline{q}/2}(\chi _2)`$ are the parton distribution functions of the scattering partons inside the incoming hadrons 1 and 2 for energy fractions $`\chi _1`$ and $`\chi _2`$ in the process $`q\overline{q}\to Vg`$, while $`f_{q/1}(\eta _1)`$ and $`f_{\overline{q}/2}(\eta _2)`$ refer to the Born process and cancel off the factors that are already in $`\sigma _0`$. The assumption that the rapidity and mass of the vector boson are the same as in the process $`q\overline{q}\to V`$ allows us to recover the Born result in the limit of extremely soft gluon radiation. As stated here, Eq. (16) is a trivial rewriting of the first-order differential cross section, but the main point of is that if the azimuth of the emitted gluon is generated in the right way, Eq. (16) correctly describes the full process including the vector boson decay. Thus, to implement our matrix-element corrections, we do not need to know anything about the final state of the vector boson – its properties are correctly inherited from the Born process. In a similar way we deal with the Compton process $`q(p_1)g(p_3)\to q(p_2)V(q)`$. We define the Mandelstam variables $`\widehat{s}=(p_1+p_3)^2`$, $`\widehat{t}=(p_3-p_2)^2`$ and $`\widehat{u}=(p_1-p_2)^2`$, and we find the same expressions for the variables $`\xi `$ and $`z`$ and for the phase space limits.
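A hard correction of this kind can be generated by hit-or-miss sampling of the $`(\widehat{s},\widehat{t})`$ dead zone with the bracketed weight of Eq. (16) (overall constants and the PDF ratio omitted). The sketch below is a deliberately crude illustration, not a production generator: the envelope is estimated by a grid scan rather than analytically, and all names are ours.

```python
import math
import random

def me_weight(s_hat, t_hat, m2):
    """Integrand of Eq. (16) without constants and PDF ratio:
    [(m2 - u)^2 + (m2 - t)^2] / (s^2 t u), u = m2 - s - t.
    Positive in the dead zone, where t and u are both negative."""
    u_hat = m2 - s_hat - t_hat
    return ((m2 - u_hat) ** 2 + (m2 - t_hat) ** 2) / (s_hat ** 2 * t_hat * u_hat)

def t_range_dead_zone(s_hat, m2):
    """Dead-zone limits in t_hat, Eqs. (13)-(14)."""
    t_hi = 0.5 * m2 * (math.sqrt(1.0 + 8.0 * m2 / s_hat) - 3.0)
    return m2 - s_hat - t_hi, t_hi

def sample_dead_zone(s_coll, m2, rng, n_scan=200):
    """Hit-or-miss generation of one (s_hat, t_hat) point in the dead zone,
    distributed according to me_weight; s_coll is the collider s."""
    s_lo = 0.5 * m2 * (7.0 - math.sqrt(17.0))   # Eq. (15)
    # crude grid estimate of the maximum weight, padded by a safety factor
    w_max = 0.0
    for i in range(n_scan):
        s_hat = s_lo + (s_coll - s_lo) * (i + 0.5) / n_scan
        t_min, t_max = t_range_dead_zone(s_hat, m2)
        for j in range(n_scan):
            t_hat = t_min + (t_max - t_min) * (j + 0.5) / n_scan
            w_max = max(w_max, me_weight(s_hat, t_hat, m2))
    while True:
        s_hat = rng.uniform(s_lo, s_coll)
        t_min, t_max = t_range_dead_zone(s_hat, m2)
        t_hat = rng.uniform(t_min, t_max)
        if rng.random() * 1.2 * w_max < me_weight(s_hat, t_hat, m2):
            return s_hat, t_hat
```

Because the dead zone excludes the soft and collinear singularities, the weight is bounded there, which is what makes such a simple rejection strategy viable and is also why the Sudakov form factor can be neglected for these events.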
We obtain for the differential cross section : $$d^2\sigma =\sigma _0\frac{f_{q/1}(\chi _1)f_{g/2}(\chi _2)}{f_{q/1}(\eta _1)f_{\overline{q}/2}(\eta _2)}\frac{T_R\alpha _S}{2\pi }\frac{d\widehat{s}d\widehat{t}}{\widehat{s}^3(-\widehat{t})}\left[(m_V^2-\widehat{t})^2+(m_V^2-\widehat{s})^2\right].$$ (17) Extending this formula to processes where we have an incoming antiquark or where the gluon belongs to hadron 1 is straightforward. We then generate events according to the above distributions in the dead zone using standard techniques. When applying the hard corrections, in principle one should also implement the form factor, but, since we are quite far from the soft and collinear singularities, it is actually not important and we shall neglect it in the following. This is justified by the fact that the total fraction of events that receive an emission from the hard correction is small. For example for $`W`$ production it is 3.9% at the Tevatron and 9.2% at the LHC. Also, the fact that such fractions are quite small allows us to neglect multiple emissions in the dead zone and makes the use of the exact first-order result reliable. Among these events, the fraction of $`q\overline{q}^{\prime }\to Wg`$ processes is 53.5% at the Tevatron and 24.5% at the LHC. The reason for the differences between the two machines can be understood in terms of the parton distribution functions. The gluon density inside the protons is higher when the colliding energy is increased because $`x`$ is decreased; moreover at the LHC we have $`pp`$ interactions instead of $`p\overline{p}`$, therefore a $`q\overline{q}^{\prime }`$ annihilation requires an antiquark $`\overline{q}^{\prime }`$ to be taken from the ‘sea’. The equivalent numbers for $`Z`$ production are essentially identical. ## 4 Soft Corrections According to , we should also correct the emission in the region that is already populated by HERWIG using the exact first-order calculation for every emission that is the hardest so far.
This can be performed by multiplying the parton shower distribution by a factor that is equal to the ratio of HERWIG’s differential distribution to the matrix-element one. The only non-trivial part of this is in calculating the Jacobian factor $`J(\widehat{s},\widehat{t};z,\xi )`$ of the transformation $`(z,\xi )\to (\widehat{s},\widehat{t})`$. HERWIG’s cross section is then given by $$\frac{d^2\sigma }{d\widehat{s}d\widehat{t}}=\frac{d^2\sigma }{dzd\xi }J(\widehat{s},\widehat{t};z,\xi ),$$ (18) where $`d^2\sigma /dzd\xi `$ is given by the elementary emission probability given in Eq. (1). The Jacobian factor $`J`$ can be simply calculated from the relations given earlier: $$J(\widehat{s},\widehat{t};z,\xi )=\frac{(-\widehat{t})(m_V^2-\widehat{t})}{\widehat{s}^2}\frac{z^5}{m_V^4\xi (1-z)^2\left(z+\xi (1-z)\right)}.$$ (19) At this point we are able to make some comparisons with the approach that is followed in where matrix-element corrections are added to the PYTHIA simulation of vector boson production. The parton shower probability distribution is applied over the whole phase space (in its older versions PYTHIA had a cutoff on the virtuality $`k^2`$ of the hard scattering parton that was constrained to be $`k^2<m_W^2`$ in order to avoid double counting) and the exact matrix-element correction is applied only to the branching that is closest to the hard vertex. Unlike , we have complementary phase space regions where we apply either the parton shower distribution (1) or the exact matrix-element ones (16,17), while in the parton shower region ($`\xi <z^2`$) we use the exact amplitude to generate the hardest emission so far instead of just the first emission. Correcting only the first emission can lead to problems due to the implementation of the Sudakov form factor whenever a subsequent harder emission occurs, as we would get the unphysical result that the probability of the hard emission would depend on the infrared cutoff that appears in the expression of the form factor.
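Eq. (19) can be validated against a numerical Jacobian of the mapping (8)–(9). The sketch below is our own check (with $`\xi `$ written in the compact form implied by (4)–(5)); it compares the analytic expression with central finite differences.

```python
import math

def z_xi(s, t, m2):
    """Eqs. (8)-(9): shower variables (z, xi) from (s_hat, t_hat)."""
    z = m2 / t + math.sqrt((m2 / t) ** 2 - 2.0 * m2 * (m2 - t) / (s * t))
    xi = 2.0 * ((m2 - t) / s - z) / (1.0 - z)
    return z, xi

def jacobian(s, t, m2):
    """Eq. (19): J(s, t; z, xi) = |d(z, xi)/d(s_hat, t_hat)|."""
    z, xi = z_xi(s, t, m2)
    return ((-t) * (m2 - t) / s ** 2 * z ** 5
            / (m2 ** 2 * xi * (1.0 - z) ** 2 * (z + xi * (1.0 - z))))
```

The finite-difference determinant of the partial derivatives of $`(z,\xi )`$ with respect to $`(\widehat{s},\widehat{t})`$ agrees with the closed form to the accuracy of the differencing, which is a convenient regression test when porting the correction to another code.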
See for more details on this point. ## 5 Results Having implemented the hard and soft matrix-element corrections to the initial-state radiation in Drell–Yan interactions, we wish to investigate the impact they have on relevant phenomenological observables that can be measured at the Tevatron and in future at the LHC. In the following, we shall mostly concentrate on $`W`$ production although $`\gamma ^{*}`$ and $`Z`$ events are treated in exactly the same way. ### 5.1 Vector boson transverse momentum A particularly significant phenomenological quantity is the transverse momentum of the $`W`$ with respect to the beam axis, which has been the object of many theoretical and experimental analyses. In the soft/collinear limit, the transverse momentum of the $`W`$ is constrained to be $`q_T<m_W`$, since in the hard process $`q\overline{q}^{\prime }\to W`$ the $`W`$ is produced with no transverse momentum and it can acquire some $`q_T`$ only as a result of the initial-state parton showering. When the emission is generated according to the exact matrix element of processes like $`q\overline{q}^{\prime }\to Wg`$, $`qg\to q^{\prime }W`$ or $`\overline{q}g\to \overline{q}^{\prime }W`$, the $`W`$ produced in the hard process is allowed to have a non-zero $`q_T`$ and events with $`q_T>m_W`$ are expected. In Fig. 2 we compare the differential cross sections with respect to the $`W`$ $`q_T`$ for $`p\overline{p}`$ collisions at the Tevatron energy, $`\sqrt{s}=1.8`$ TeV (the next Tevatron run will be at the slightly higher energy of 2 TeV; for the sake of comparison with existing data we use 1.8 TeV, but the results would not be qualitatively different for 2 TeV), obtained using HERWIG 5.9, the latest public version, with 6.1, the new version in progress where we include for the first time matrix-element corrections to the initial-state parton shower in vector boson production. We set an intrinsic transverse momentum equal to zero and use the MRS (R1) parton distribution functions .
We can see from the plots that the impact of matrix-element corrections is negligible at low values of $`q_T`$ but it is quite relevant at high $`q_T`$, where we have many more events with respect to the 5.9 version. Above some value of $`q_T`$ HERWIG 5.9 no longer generates events, while the 6.1 version still gives a non-zero differential cross section thanks to the events generated via the exact hard matrix element. As in $`e^+e^{-}`$ annihilation and DIS, it is actually the hard corrections that have a marked impact on our distributions, while the effect of the soft ones is quite negligible. It is also interesting to plot the $`q_T`$ spectrum obtained running the ‘$`W`$ + jets’ process of HERWIG, forcing the produced $`W`$ to decay leptonically. This generates the hard process $`q\overline{q}^{\prime }\to Wg`$ (or the equivalent ones with an initial-state gluon) for all events. As this matrix element diverges when the transverse momentum of the $`W`$ approaches zero, HERWIG applies a user-defined cut on the $`q_T`$ generated in the hard process, which we set to 10 GeV. In our plot, this does not appear as a sharp cutoff since the $`W`$ gets some recoil momentum due to the initial-state parton shower which can increase or decrease its transverse momentum. If, on the contrary, we had plotted the $`q_T`$ of the $`W`$ generated in the hard $`2\to 2`$ process, we would have got a sharp peak at $`q_T=`$ 10 GeV. The agreement we find between the simulations of Drell–Yan processes provided with matrix-element corrections and the ‘$`W`$ + jet’ events for large $`q_T`$ reassures us that the implementation of the hard corrections is reliable. We also see that the HERWIG distributions for Drell–Yan processes show a sharp peak in the first bin, which includes the value $`q_T=0`$: it corresponds to a fraction of events with no initial-state radiation and so $`W`$ bosons produced with zero transverse momentum.
With any fixed infrared cutoff value, one expects a non-zero (though exponentially suppressed) fraction of events to give no resolvable radiation. These would normally be smeared out by non-perturbative effects like the intrinsic transverse momentum, as we shall see in later plots, but since we set the width of its distribution to zero by default, all such events appear in the lowest $`q_T`$ bin. This is actually a technical deficiency of the Monte Carlo simulation and not a detectable physical effect. The DØ collaboration recently published data on the transverse momentum distribution of $`W`$ bosons at the Tevatron. In Fig. 3 we compare the HERWIG predictions with it. In order to contribute to the investigation of possible effects of non-perturbative physics, we also run HERWIG setting an intrinsic partonic transverse momentum equal to 1 GeV, which we consider to be the maximum reasonable value. We see that the data have a significantly broader distribution at small $`q_T`$ than HERWIG without intrinsic transverse momentum and that increasing the r.m.s. $`p_t`$ to 1 GeV is nowhere near enough to account for this. Furthermore the description of the data in the intermediate $`q_T`$ range, 30–70 GeV, is also rather poor. The intrinsic $`p_t`$ does not affect the predictions significantly for $`q_T`$ values above about 10 GeV, so there is no obvious way to improve the fit for intermediate values. However, these discrepancies actually arise because the DØ data are not corrected for detector effects. We have run HERWIG through DØ’s fast simulation program, CMS, and show results in Fig. 4. We see that HERWIG now describes the data rather well. The detector smearing is so strong at low $`q_T`$ that the additional smearing produced by the intrinsic transverse momentum becomes irrelevant. At this point it is worthwhile commenting on the results shown in Refs. . Both compare generator-level results with the DØ data. Ref.
’s actually look rather similar to our generator-level results, so it is likely that after applying detector corrections they will describe the data as well as HERWIG. Ref. found good agreement with the DØ data, but only after increasing the intrinsic transverse momentum to 4 GeV. It seems likely that this accounts for the smearing at low $`q_T`$ and would not be necessary after including detector smearing. However, in the intermediate $`q_T`$ range the results of are significantly lower than HERWIG. Since we find a detector correction of around a factor of two in this region, it would be very interesting to see the results of at detector level to see whether they are still able to fit the data. In Fig. 5 we show the $`q_T`$ distributions for $`pp`$ collisions at the energy of the LHC, $`\sqrt{s}=14`$ TeV, and find that the impact of the corrections is even bigger once the energy is increased. Unlike in the Tevatron transverse momentum distributions, we do not have the previously-mentioned sharp peak at $`q_T=0`$: this is because at the LHC we have $`pp`$ interactions and the protons do not have valence antiquarks, while in order to produce a $`W`$ we do need a $`q\overline{q}^{}`$ hard scattering. As a result, the backward evolution has to produce at least one splitting, which always gives the $`W`$ itself some transverse momentum. From the window in the top-right corner, we also see that at very low $`q_T`$ the uncorrected version, 5.9, has a few percent more events than 6.1, particularly in the case of the LHC. As we said in the introduction, although we have matched to the tree-level NLO matrix elements, we still get the LO normalization, therefore the total cross sections obtained from versions 5.9 and 6.1 are the same. 
Since at the energy of the LHC we are generating a higher fraction of events at large $`q_T`$ via the exact matrix-element distribution, it is reasonable that this enhancement is partially compensated by a slight suppression in the low $`q_T`$ region. In the region of low $`q_T`$, it is worthwhile comparing the HERWIG distributions with some resummed calculations that are available in the literature. All these calculations are based on the approach suggested in where the differential cross section with respect to the vector boson $`q_T`$ is expressed as the resummation of logarithms $`l=\mathrm{log}(m_V^2/q_T^2)`$ to all orders in $`\alpha _S`$. Two conflicting nomenclatures are used in the literature to denote which logarithms are summed: in the differential cross section, at each order in $`\alpha _S`$ the largest term is $`1/q_T^2\alpha _S^nl^{2n-1}`$, which are sometimes known as the leading logarithms, $`1/q_T^2\alpha _S^nl^{2n-2}`$ being known as the next-to-leading logarithms, and so on. According to this classification, the results in and are NNLL and NNNLL respectively. However, these logarithms ‘exponentiate’, allowing the differential cross section to be written in terms of the exponential (the ‘form factor’) of a series in $`\alpha _S`$ whose largest term is $`\alpha _S^nl^{n+1}`$, which are also sometimes known as the leading logarithms, $`\alpha _S^nl^n`$ being known as the next-to-leading logarithms, and so on. In all NLL terms according to this nomenclature are summed in the form factor, which is evaluated either in the impact parameter $`b`$-space or in the $`q_T`$-space. The non-perturbative contribution is taken into account in the $`b`$-space formalism following the general ideas in and setting Gaussian functions in the impact parameter $`b`$ to quantify these effects. In Fig. 6 we compare the HERWIG 6.1 differential cross section with the ones obtained from these approaches, all normalized to the corresponding total cross section. (As the resummed calculations deal with a fixed value of $`m_W`$, Fig. 6 is obtained running HERWIG with a vanishingly small $`W`$ width, so $`m_W=80.4`$ GeV, its default value. $`W`$ width effects are nevertheless fully included in HERWIG and in the other plots we show. At low $`q_T`$ this assumption does not change the results dramatically; at high $`q_T`$ the effect of the $`W`$ width is important, as it allows values of $`q_T`$ larger than the default $`W`$ mass even in the parton shower approximation, which otherwise could come only via the exact matrix-element generated events.) HERWIG clearly lies well within the range of the resummed approaches, except at very small $`q_T`$ where they become unreliable. To be more precise, the agreement is better between HERWIG and the two resummations in the $`q_T`$-space, but even the $`b`$-space result is not too far from the Monte Carlo distribution. If we now wish to compare the HERWIG simulation with matrix-element corrections to the resummed calculations even for larger values of $`q_T`$ we need to match the latter with the exact $`𝒪(\alpha _S)`$ results to make them reliable there. This has already been done in the literature within the approach of , while in the analysis is limited to the low $`q_T`$ regime. Many prescriptions exist concerning how to perform such a matching. Ours is to simply add the exact matrix-element cross sections for the parton level processes $`q\overline{q}^{\prime }\to Wg`$ and $`q(\overline{q})g\to Wq^{\prime }(\overline{q}^{\prime })`$, already calculated in (16) and (17), to the resummed expressions and, in order to avoid double counting, subtract off the terms they have in common. It is straightforward to show that these are simply those terms in the exact $`𝒪(\alpha _S)`$ result that do not vanish as $`q_T\to 0`$.
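Schematically, this matching prescription amounts to the combinator below. It is our own sketch: the spectra used in the test are toy functions chosen only to mimic the qualitative behaviour (the resummed and asymptotic pieces coincide at one reference scale), not the actual resummed calculations.

```python
def matched_spectrum(resummed, exact, asymptotic):
    """Additive matching of a resummed q_T spectrum with the exact O(alpha_S)
    result.  The terms common to both -- the 'asymptotic' piece, i.e. the
    part of the exact result that does not vanish as q_T -> 0 -- are
    subtracted once to avoid double counting.  All three arguments are
    callables mapping q_T to dsigma/dq_T."""
    return lambda qt: resummed(qt) + exact(qt) - asymptotic(qt)
```

By construction the matching correction added to the resummed result is exactly the non-singular part of the exact first-order cross section, so a visible step or kink at $`q_T=m_W`$ signals a mismatch between the form factor and the subtracted terms rather than a failure of the combinator itself.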
This prescription works fine if the resulting distribution is continuous at the point $`q_T=m_W`$, which means that the resummation and the low $`q_T`$ $`𝒪(\alpha _S)`$ result exactly compensate each other and only the exact ‘hard’ matrix-element contribution survives. As discussed in , it is not trivial to implement such a matching: the authors in fact do not succeed in obtaining a continuous distribution at the crucial matching point $`q_T=m_W`$, but rather a step of size $`\alpha _S^2`$ was found. This comes about because the derivative of the Sudakov exponent is not required to go smoothly to zero at that point. We independently implement the matching for all the resummed calculations with which we wish to compare the HERWIG results and we find that the matching works well only for the $`q_T`$-space resummation performed in . For the $`b`$-space and the approaches in we do indeed find a step at $`q_T=m_W`$. In fact even for the $`q_T`$-space method of we find a ‘kink’ at $`q_T=m_W`$, i.e. although the curve is continuous, its derivative changes discontinuously there, albeit by an amount that is too small to notice on the figure. In Fig. 7 we compare the HERWIG 6.1 distributions with after the matching over the whole $`q_T`$ spectrum. We see that for the resummed distribution that is well matched and continuous for $`q_T=m_W`$ the agreement with HERWIG is pretty good everywhere. We do have slight discrepancies for medium values of $`q_T`$, but they are well within the range that could be expected from the differences between the approaches followed by the Monte Carlo program and the calculation which keeps all the next-to-leading logarithms in the form factor. While the plots shown so far refer to $`W`$ production, it is also worth comparing with some recent preliminary CDF data on $`Z`$ production . In Fig. 
8, we have the CDF distribution with respect to the transverse momentum of the $`\gamma ^{*}/Z`$ boson produced at the Tevatron and decaying into an $`e^+e^{-}`$ pair with invariant mass in the range 66 GeV $`<m_Z<`$ 116 GeV. We compare the data with HERWIG before and after matrix-element corrections; we also normalize the HERWIG distribution to the experimental value of the cross section, 245.3 pb. The result is that we obtain good agreement with the experimental data only thanks to the application of the hard and soft corrections, otherwise the predictions would have been badly wrong for values $`q_T>50`$ GeV. There is perhaps some evidence that HERWIG does not produce enough smearing at low $`q_T`$, even with an intrinsic $`p_t`$ of 1 GeV, with HERWIG peaking at about 2 GeV and the data peaking at about 3 GeV, but the overall fit is nevertheless acceptable. We have however found that better agreement can be obtained with an intrinsic $`p_t`$ of 2 GeV. ### 5.2 Jet distributions We now look at the impact the matrix-element corrections have on the jet activity at the Tevatron and at the LHC. An interesting object to analyse is the hardest jet in transverse energy (the so-called ‘first jet’). In Fig. 9 we plot the differential spectrum for the transverse energy of the first jet for $`\sqrt{s}=1.8`$ TeV and $`\sqrt{s}=14`$ TeV, using HERWIG 5.9 and 6.1 and running the inclusive version of the $`k_T`$ algorithm for a radius $`R=0.5`$ at the Tevatron and $`R=1`$ for the LHC (the correspondence between the radius $`R`$ in the $`k_T`$ algorithm and $`R_{\mathrm{cone}}`$ in an iterative cone algorithm is $`R_{\mathrm{cone}}\simeq 0.75\times R`$ ; the Tevatron experimentalists run an iterative cone algorithm with radius $`R_{\mathrm{cone}}=0.4`$, so we choose $`R=0.5`$ for the radius parameter when we consider jet events at $`\sqrt{s}=1.8`$ TeV, while for the LHC we stick to the recommended value of $`R=1`$).
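The inclusive $`k_T`$ algorithm used here can be summarized in a few lines. The sketch below implements the standard distance measures $`d_{ij}=\mathrm{min}(p_{t,i},p_{t,j})^2\mathrm{\Delta }R_{ij}^2/R^2`$ and $`d_{iB}=p_{t,i}^2`$, but with a deliberately simplified recombination (scalar $`p_t`$ sum and a naive average of rapidity and azimuth), so it is adequate for illustration only; the function name is ours.

```python
import math

def kt_cluster(particles, R=0.5):
    """Minimal inclusive kt clustering; particles are (pt, y, phi) triples.
    Repeatedly find the smallest of the pair distances d_ij and beam
    distances d_iB: merge the pair if a d_ij wins, otherwise promote the
    softest object to a jet.  Recombination is a crude (pt-sum, average)
    scheme, a stand-in for the proper four-momentum schemes."""
    jets, objs = [], list(particles)
    while objs:
        ib = min(range(len(objs)), key=lambda i: objs[i][0] ** 2)
        dib = objs[ib][0] ** 2                      # smallest beam distance
        dij, pair = float("inf"), None
        for i in range(len(objs)):
            for j in range(i + 1, len(objs)):
                dphi = abs(objs[i][2] - objs[j][2])
                dphi = min(dphi, 2.0 * math.pi - dphi)
                dr2 = (objs[i][1] - objs[j][1]) ** 2 + dphi ** 2
                d = min(objs[i][0], objs[j][0]) ** 2 * dr2 / R ** 2
                if d < dij:
                    dij, pair = d, (i, j)
        if pair is not None and dij < dib:
            i, j = pair
            pi, pj = objs[i], objs[j]
            merged = (pi[0] + pj[0], 0.5 * (pi[1] + pj[1]), 0.5 * (pi[2] + pj[2]))
            objs = [o for k, o in enumerate(objs) if k not in (i, j)] + [merged]
        else:
            jets.append(objs.pop(ib))
    return jets
```

Two nearby particles (small $`\mathrm{\Delta }R`$) are merged into a single jet, while well-separated ones are promoted to separate jets, which is the behaviour the radius parameter $`R`$ controls.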
The result is that the improvement introduced does have a significant effect, since the number of events in which the first jet has high $`E_T`$ is markedly increased. The 5.9 and 6.1 distributions are similar for small values of $`E_T`$, but for increasing $`E_T`$ the effect of the corrections introduced in HERWIG gets more and more relevant and at very high $`E_T`$ only events generated via the exact hard amplitude survive. The impact is really enormous in the case of the LHC as can be seen from Fig. 9b. In Fig. 10 we plot the inclusive number of jets $`n_{\mathrm{jets}}`$ that pass a transverse energy cut $`E_T>10`$ GeV. We see that implementing the matrix-element corrections significantly shifts the distribution towards larger $`n_{\mathrm{jets}}`$. If we look at events with three or four jets having $`E_T>10`$ GeV, we see that their number is increased considerably both at the Tevatron and at the LHC. We find roughly an enhancement of a factor of 2 for three-jet events at both energies; for events with four high transverse energy jets we still get an enhancement of 2 at the LHC, while at the Tevatron the difference is almost a factor of 4. ### 5.3 Rapidity distributions Fig. 11 shows the distribution of the rapidity $`y`$ of the dilepton pair at $`\sqrt{s}=1.8`$ TeV and $`\sqrt{s}=14`$ TeV. As we sum over $`W^+`$ and $`W^{-}`$ they are symmetric in $`\pm y`$. We see that the matrix-element corrections do not significantly affect the rapidity distributions. Fig. 12 shows the comparison between HERWIG and CDF for the rapidity of the produced $`e^+e^{-}`$ pair; the agreement is again good and the contribution of the matrix-element corrections is insignificant. ## 6 Conclusions We have analysed Drell–Yan processes in hadron collisions in the Monte Carlo parton shower approach. This is accurate in the soft/collinear approximation, but leaves empty regions in phase space.
We implemented matrix-element corrections by generating radiation according to the first-order amplitude in the dead zone and for every ‘hardest so far’ emission in the already-populated region of the HERWIG parton shower. We compared our results with the previous version HERWIG 5.9, with experimental Tevatron data from the DØ and CDF collaborations and with existing resummed calculations of the spectrum of the transverse momentum $`q_T`$ of the vector boson. We found that the implemented corrections have a marked impact on the phenomenological distributions for high values of $`q_T`$ and the new version of HERWIG fits the DØ data for $`W`$ production well over the whole $`q_T`$ spectrum after we correct the HERWIG results to detector level. At large $`q_T`$ it is crucial to provide the Monte Carlo algorithm with matrix-element corrections in order to succeed in obtaining such an agreement. We also compared the HERWIG results after matrix-element corrections to some existing calculations based on a resummation of the initial-state radiation in the $`q_T`$-space and in the $`b`$-space. We found that in the range of low $`q_T`$, where actually the effect of matrix-element corrections is not so relevant and the resummed calculations are quite reliable, the parton shower distribution is in reasonable agreement with all of them, with discrepancies due to the methods followed by these different approaches. We also matched the resummed results to the exact $`𝒪(\alpha _S)`$ result, so that they are trustworthy at all $`q_T`$ values, and found that the matching works well for a resummation performed in the $`q_T`$-space keeping all the next-to-leading logarithms in the Sudakov exponent. In this case, we also obtained good agreement with the HERWIG 6.1 $`q_T`$ distribution. The other approaches considered showed a discontinuity at the point $`q_T=m_W`$ once we match them to the exact first-order perturbative result.
We also studied $`W+`$ jet events at the Tevatron and at the LHC and found a significant effect of the new improvement of HERWIG, as a larger number of jets of large transverse energy passes the typical experimental cuts. We compared the new version of HERWIG with the experimental data of the CDF collaboration on the transverse momentum and rapidity of $`Z`$ bosons. We found good agreement after implementing the corrections. As a result, we feel confident that the simulation of vector boson production is now reliable. Using the new version of HERWIG 6.1 to fit the experimental data will therefore provide us with better tests of the Standard Model and of QCD for the following Run II at the Tevatron and, ultimately, at the LHC. For completeness, we note that our analysis has been performed forcing the vector boson to decay into a lepton pair, as most of the experimental studies do. For hadronic channels (i.e. $`W\to q\overline{q}^{\prime }`$), the decay products are also allowed to emit gluons and to give rise to a parton shower that is still described in the leading soft/collinear approximation by HERWIG. The implementation of matrix-element corrections to hadronic $`W`$ decays is straightforward, as they are very similar to the corrections to the process $`Z\to q\overline{q}`$ that are discussed in , and is in progress. It is also worth remarking that the method applied in this paper to implement matrix-element corrections to the initial-state radiation in $`W`$ and $`Z`$ production can be extended to a wide range of processes that are relevant for the phenomenology of hadron colliders. Among these, the inclusion of matrix-element corrections to simulations of heavy quark production and particularly of top production in $`p\overline{p}`$ or $`pp`$ interactions is expected to have a marked impact on the top mass reconstruction and many observables that are relevant for heavy quark phenomenology. This work is also in progress.
## Acknowledgements We acknowledge Stefano Frixione, Michelangelo Mangano, Stefano Moretti, Willis Sakumoto and Bryan Webber for discussions of these and related topics. We are indebted to Giovanni Ridolfi who provided us with the code to obtain the plots in Fig. 6. We are also grateful to the DØ Collaboration for making their detector simulation, CMS, available to us, and especially to Cecilia Gerber for the considerable effort it took to make it run outside the usual DØ environment. ## References 1. G. Altarelli, R.K. Ellis and G. Martinelli, Nucl. Phys. B143 (1978) 521; Nucl. Phys. B146 (1978) 544 (erratum); Nucl. Phys. B157 (1979) 461. 2. Yu.L. Dokshitzer, D.I. Dyakonov and S.I. Troyan, Phys. Rep. 58 (1980) 269. 3. J. Collins and D. Soper, Nucl. Phys. B193 (1981) 381; Erratum Nucl. Phys. B213 (1983) 454; Nucl. Phys. B197 (1982) 446; J. Collins, D. Soper and G. Sterman, Nucl. Phys. B250 (1985) 199. 4. G.A. Ladinsky and C.P. Yuan, Phys. Rev. D50 (1994) 4239. 5. C.T.H. Davies, W.J. Stirling and B.R. Webber, Nucl. Phys. B256 (1985) 413. 6. P.B. Arnold and R. Kauffman, Nucl. Phys. B349 (1991) 381. 7. R.K. Ellis, D.A. Ross and S. Veseli, Nucl. Phys. B503 (1997) 309. 8. R.K. Ellis and S. Veseli, Nucl. Phys. B511 (1998) 649. 9. S. Frixione, P. Nason and G. Ridolfi, Nucl. Phys. B542 (1999) 311. 10. A. Kulesza and W.J. Stirling, DTP-99-02, hep-ph/9902234. 11. T. Sjöstrand, Comp. Phys. Comm. 46 (1987) 367. 12. G. Marchesini et al., Comput. Phys. Commun. 67 (1992) 465. 13. G. Miu and T. Sjöstrand, Phys. Lett. B449 (1999) 313. 14. S. Mrenna, UCD-99-13, hep-ph/9902471. 15. M.H. Seymour, Comput. Phys. Commun. 90 (1995) 95. 16. M.H. Seymour, Z. Phys. C56 (1992) 161. 17. M.H. Seymour, Matrix Element Corrections to Parton Shower Simulation of Deep Inelastic Scattering, contributed to 27th International Conference on High Energy Physics (ICHEP), Glasgow, 1994, Lund preprint LU-TP-94-12, unpublished. 18. G. Corcella and M.H. Seymour, Phys. Lett. B442 (1998) 417. 19. T. 
Sjöstrand, Phys. Lett. B157 (1985) 231. 20. G. Marchesini and B.R. Webber, Nucl. Phys. B310 (1988) 461. 21. S. Catani, G. Marchesini and B.R. Webber, Nucl. Phys. B349 (1991) 635. 22. M.H. Seymour, Nucl. Phys. B436 (1995) 443. 23. A.D. Martin, R.G. Roberts and W.J. Stirling, Phys. Lett. B387 (1996) 419. 24. DØ Collaboration, B. Abbott et al., Phys. Rev. Lett. 80 (1998) 5498. 25. DØ Collaboration, B. Abbott et al., Phys. Rev. Lett. 80 (1998) 3000; DØ Collaboration, B. Abbott et al., Phys. Rev. D58 (1998) 092003; I. Adam, Ph.D. thesis, Columbia University, 1997, Nevis Report #294, http://www-d0.fnal.gov/results/publications\_talks/thesis/adam/ian\_thesis\_all.ps; E. Flattum, Ph.D. thesis, Michigan State University, 1996, http://www-d0.fnal.gov/results/publications\_talks/thesis/flattum/eric\_thesis.ps. 26. CDF Collaboration, T. Affolder et al., Fermilab-Pub-99/220-E; CDF Collaboration, A. Bodek et al., Fermilab-Conf-99/160-E. 27. S. Catani, Yu.L. Dokshitzer, M.H. Seymour and B.R. Webber, Nucl. Phys. B406 (1993) 187; M.H. Seymour, Nucl. Phys. B421 (1994) 545. 28. S.D. Ellis and D.E. Soper, Phys. Rev. D48 (1993) 3160.
no-problem/9908/hep-ph9908493.html
# The Case of a WW Dynamical Scalar Resonance within a Chiral Effective Description of the Strongly Interacting Higgs Sector

J. A. Oller<sup>1</sup><sup>1</sup>1Present address: Forschungzentrum Jülich, Institut für Kernphysik (Theorie), D-52425 Jülich, Germany

Departamento de Física Teórica and IFIC, Centro Mixto Universidad de Valencia-CSIC, 46100 Burjassot (Valencia), Spain

## Abstract We have studied the strongly interacting $`W_LW_L\to W_LW_L`$ I=L=0 partial wave amplitude making use of effective chiral Lagrangians. The Higgs boson is explicitly included and the N/D method is used to unitarize the amplitude. We recover the chiral perturbative expansion at next-to-leading order at low energies. The cases $`m_H<<4\pi v`$ and $`m_H>>4\pi v`$ are considered in detail. It is shown that in the latter situation a state appears with a mass $`\sim 1`$ TeV. This state is dynamically generated through the strong interactions between the $`W_L`$ and is not responsible for the spontaneous electroweak symmetry breaking. However, its shape can be very similar to that of the $`m_H\sim 1`$ TeV case, which corresponds to a conventional heavy Higgs boson.

Figure 1 (caption; the figure is not reproduced here): $`|T(s)/(16\pi )|^2=(\mathrm{sin}\delta (s))^2`$, where $`\delta (s)`$ is the phase of $`T(s)`$, eq. (S0.Ex9). The solid line corresponds to $`m_H=0.9`$ TeV, the dashed line to $`m_H=150`$ TeV and the dotted-dashed line to $`m_H=11`$ TeV. Note that the solid curve ($`m_H=0.9`$ TeV) is very similar to the dashed one ($`m_H=150`$ TeV).

Although the $`SU(2)_L\times U(1)_Y`$ theory of electroweak interactions is extremely successful, its spontaneous breaking to $`U(1)_{em}`$ is a controversial aspect. In the so-called standard model (SM) this is accomplished through a Higgs boson, which provides a renormalizable way to generate the needed $`W`$ and $`Z`$ masses. However, an experimental verification of this mechanism is still lacking. 
In fact, unveiling the nature of the electroweak symmetry breaking sector is the number one aim at the LHC, which could detect the Higgs boson up to a mass $`m_H\sim 1`$ TeV, with $`m_H`$ the mass of the Higgs boson. Making use of an effective theory formalism, we will study in this work the strongly interacting Higgs sector, $`m_H\gtrsim 1`$ TeV. Throughout the paper the SM will be considered first; the extension of our conclusions to more general scenarios will then also be discussed. The effective theory formalism is suited to study the strong electroweak Goldstone boson interactions since, for energies well above the mass of the weak bosons ($`m_W\sim 0.1`$ TeV), their scattering amplitudes coincide with those of the longitudinal components of the electroweak gauge bosons ($`W_L`$). This is the content of the so-called equivalence theorem . We will consider the class of models in which the symmetry breaking sector has a chiral $`SU(2)_L\times SU(2)_R`$ symmetry that breaks spontaneously to the diagonal $`SU(2)_{L+R}`$ subgroup. The latter "custodial" SU(2) is sufficient (but has not been proved necessary) to protect $`\rho =M_W^2/(M_Z^2\mathrm{cos}^2\theta _W)\simeq 1`$ . This symmetry, together with the equivalence theorem, also guarantees the universal scattering theorems for the strongly interacting $`W`$'s and $`Z`$'s . The former symmetry breaking pattern is the one found in the SM for $`g^{}=0`$, with $`g^{}`$ the coupling associated to the hypercharge. In the case of chiral perturbation theory ($`\chi PT`$) one has the same symmetry breaking situation as before, but in this case the custodial symmetry is called isospin (I). We will consider the $`𝒪(p^2)`$ and $`𝒪(p^4)`$ chiral Lagrangians derived in . The equivalence theorem can be reconciled with this low energy expansion as long as we keep just the lowest order in $`g`$, which is a rather good approximation. 
Nevertheless, as we are interested only in the scattering of the Goldstone bosons assuming the custodial $`SU(2)`$ symmetry, we only need the following terms: $$_2=\frac{v^2}{4}<\partial _\mu U\partial ^\mu U^{\dagger }>$$ (1) $$_4=L_1<\partial _\mu U\partial ^\mu U^{\dagger }>^2+L_2<\partial _\mu U\partial _\nu U^{\dagger }>^2$$ where $$U=\mathrm{exp}(\frac{i\pi ^i\tau ^i}{v})$$ (2) with $`\tau ^i`$ the Pauli matrices, $`\pi ^i`$ the Goldstone bosons, $`v=(\sqrt{2}G_F)^{-1/2}\simeq \frac{1}{4}`$ TeV and the symbol $`<>`$ representing the trace of the matrix inside it. The Lagrangian given in eq. (1) describes the scattering of massless pions in $`\chi PT`$ up to $`𝒪(p^4)`$ just by changing $`v`$ to $`f_\pi =92.4`$ MeV. Keeping in mind the equivalence theorem, we can write the $`W_LW_L`$ scattering amplitude up to $`𝒪(p^4)`$ from the $`\chi PT`$ calculation in the chiral limit (massless pions). The result is: $$A(s,t,u)=\frac{s}{v^2}+\frac{15s^2+7(t-u)^2}{576\pi ^2v^4}+\frac{2}{v^4}\left[(4L_1^r(\mu )+L_2^r(\mu ))s^2+L_2^r(\mu )(t-u)^2\right]-\frac{1}{96\pi ^2v^4}\left[3s^2\mathrm{log}\frac{-s}{\mu ^2}+t(t-u)\mathrm{log}\frac{-t}{\mu ^2}+u(u-t)\mathrm{log}\frac{-u}{\mu ^2}\right]$$ with $`L_i^r(\mu )`$ the finite renormalized value of $`L_i`$ at the scale $`\mu `$, such that $$L_i=L_i^r(\mu )+\gamma _i\lambda $$ (4) where $$\lambda =\frac{\mu ^{d-4}}{16\pi ^2}\left[\frac{1}{d-4}-\frac{1}{2}(\mathrm{log}(4\pi )+\mathrm{\Gamma }^{\prime }(1)+1)\right]$$ (5) and $$\gamma _1=\frac{1}{12},\gamma _2=\frac{1}{6}$$ (6) In deriving eq. (S0.Ex2) from that of we have multiplied by 4 the term with the counterterms $`L_i^r(\mu )`$ in order to connect with the work where the counterterms are calculated in the SM for a heavy Higgs. 
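As a small numerical aside (not part of the original derivation), one can check that the exponential parametrization of the Goldstone fields, eq. (2), indeed yields an SU(2) matrix; the field values below are arbitrary illustrative numbers:

```python
import numpy as np

# Pauli matrices tau^i of eq. (2)
tau = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def U_matrix(pi, v):
    """U = exp(i pi^i tau^i / v), eq. (2), evaluated through the closed form
    exp(i a.tau) = cos|a| 1 + i sin|a| (a.tau)/|a| valid for Pauli matrices."""
    a = np.asarray(pi, dtype=float) / v
    mod = np.linalg.norm(a)
    a_dot_tau = sum(ai * ti for ai, ti in zip(a, tau))
    return np.cos(mod) * np.eye(2) + 1j * np.sin(mod) * a_dot_tau / mod

# arbitrary Goldstone field values (same units as v = 0.25 TeV)
U = U_matrix([0.3, -0.1, 0.2], v=0.25)

# U is an SU(2) matrix: unitary with unit determinant
assert np.allclose(U @ U.conj().T, np.eye(2))
assert abs(np.linalg.det(U) - 1.0) < 1e-10
```

The closed form used in the helper is the standard identity for exponentials of Pauli-matrix combinations; it avoids needing a general matrix exponential.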
From this work one has: $$L_1^r(\mu )=\frac{v^2}{8m_H^2}-\frac{1}{16\pi ^2}\frac{1}{24}\left(\frac{76}{3}-\frac{27\pi }{2\sqrt{3}}-\mathrm{log}\frac{m_H^2}{\mu ^2}\right)$$ (7) $$L_2^r(\mu )=-\frac{1}{16\pi ^2}\frac{1}{12}\left(\frac{11}{6}-\mathrm{log}\frac{m_H^2}{\mu ^2}\right)$$ If the $`L_i^r(\mu )`$ in eq. (S0.Ex2) are substituted by those of eq. (7), one can see that the $`\mu `$ dependence of $`A(s,t,u)`$ disappears and the scale is then fixed by $`m_H`$. The resulting amplitude reproduces the one-loop $`𝒪(G_Fm_H^2)`$ amplitude in the standard model, first calculated in refs. and valid for $`4\pi v,m_H>>\sqrt{s}>>m_W`$. The term $`\frac{v^2}{8m_H^2}`$ in $`L_1^r(\mu )`$ in eq. (7) comes from the exchange at tree level of the Higgs boson at $`𝒪(p^4)`$. In fact, the contribution of the tree level Higgs to $`A(s,t,u)`$ to all orders can be easily calculated and is given by: $$\frac{s^2}{v^2(m_H^2-s)}$$ (8) The former result, expanded up to $`𝒪(p^4)`$, coincides with the contribution of the Higgs boson to $`A(s,t,u)`$ coming from the term $`v^2/(8m_H^2)`$ in $`L_1^r(\mu )`$ of eq. (7). We will then make a resummation of counterterms to all orders by replacing the $`𝒪(p^4)`$ Higgs exchange contribution by the result given in eq. (8) with the full propagator. Substituting also the rest of eq. (7) in eq. 
(S0.Ex2) one finally obtains: $$A(s,t,u)=\frac{s}{v^2}+\frac{s^2}{v^2(m_H^2-s)}-\frac{1}{96\pi ^2v^4}\left[s^2\left(50-\frac{27\pi }{\sqrt{3}}\right)+\frac{2}{3}(t-u)^2\right]-\frac{1}{96\pi ^2v^4}\left[3s^2\mathrm{log}\frac{-s}{m_H^2}+t(t-u)\mathrm{log}\frac{-t}{m_H^2}+u(u-t)\mathrm{log}\frac{-u}{m_H^2}\right]$$ We will concentrate on the I=0 S-wave partial amplitude, $`T(s)`$, the one which corresponds to the Higgs boson. In terms of $`A(s,t,u)`$ this partial wave is given by: $$T(s)=\frac{1}{4}\int _{-1}^1d\mathrm{cos}\theta \left(3A(s,t,u)+A(t,s,u)+A(u,t,s)\right)$$ (10) With this definition elastic unitarity reads for $`s>0`$: $$\mathrm{Im}T(s)=\frac{1}{16\pi }|T(s)|^2$$ (11) or equivalently $$\mathrm{Im}\frac{1}{T(s)}=-\frac{1}{16\pi }$$ (12) Taking into account eqs. (S0.Ex5) and (10) one has: $$T(s)=\frac{s}{v^2}+\frac{3s^2}{2v^2(m_H^2-s)}+\frac{m_H^4}{sv^2}\left[\mathrm{log}(1+\frac{s}{m_H^2})-\frac{s}{m_H^2}+\frac{s^2}{2m_H^4}\right]-\frac{s^2}{1728\pi ^2v^4}\left[1673-297\sqrt{3}\pi +108\mathrm{log}\frac{-s}{m_H^2}+42\mathrm{log}\frac{s}{m_H^2}\right]$$ where we have added and subtracted the $`𝒪(p^0)`$ and $`𝒪(p^2)`$ contributions of $`\frac{m_H^4}{sv^2}\mathrm{log}(1+\frac{s}{m_H^2})`$, due to the exchange of the Higgs in the crossed channels. In this way, the fact that any exchange of the Higgs boson begins at $`𝒪(p^4)`$, as is clear from eq. (8), is explicitly shown. In order to study the resonance spectrum of the strongly interacting Higgs sector we are going to make an N/D representation of the former chiral perturbative result. 
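The equivalence of the two forms of the elastic unitarity condition, eqs. (11) and (12), can be made explicit with a short numerical check (the phase shift below is an arbitrary illustrative value):

```python
import cmath
import math

def elastic_amplitude(delta):
    """A unitary elastic partial wave in the paper's normalization:
    T = 16*pi * sin(delta) * exp(i*delta), so that |T/(16*pi)|^2 = sin^2(delta)."""
    return 16 * math.pi * math.sin(delta) * cmath.exp(1j * delta)

delta = 0.7   # arbitrary phase shift (radians), for illustration
T = elastic_amplitude(delta)

# Eq. (11): Im T = |T|^2 / (16*pi)
assert abs(T.imag - abs(T)**2 / (16 * math.pi)) < 1e-12
# Eq. (12): Im(1/T) = -1/(16*pi)
assert abs((1 / T).imag + 1 / (16 * math.pi)) < 1e-12
```

The second relation follows from the first because Im(1/T) = -Im T/|T|², which is why the quantity plotted in Fig. 1 is bounded by one.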
In a former work , we used this method to study the strong meson-meson scattering including the resonance region, which in that case appears typically for $`\sqrt{s}\sim 1`$ GeV. In the N/D method a partial wave amplitude is expressed as the quotient of two functions, $$T(s)=\frac{N(s)}{D(s)}$$ (14) with the denominator function $`\text{D}(s)`$ bearing the right hand cut or unitarity cut, and the numerator function $`\text{N}(s)`$ the left hand cut. Taking into account eq. (12), $`N(s)`$ and $`D(s)`$ will obey the following equations: $$\begin{array}{cc}\mathrm{Im}D(s)=\mathrm{Im}T(s)^{-1}N=-\frac{1}{16\pi }N(s)\hfill & s>0\hfill \\ \mathrm{Im}D(s)=0\hfill & s<0\hfill \end{array}$$ (15) $$\begin{array}{cc}\mathrm{Im}N(s)=\mathrm{Im}T(s)D(s)=\mathrm{Im}T_{Left}D(s)\hfill & s<0\hfill \\ \mathrm{Im}N(s)=0\hfill & s>0\hfill \end{array}$$ (16) In ref. we did not include the left hand cut, although some estimations were done. In eq. (S0.Ex6) the left hand cut first appears through the term $`\mathrm{log}\frac{s}{m_H^2}`$, which acquires an imaginary part for $`s<0`$. However, in order to reproduce $`T(s)`$ up to the order calculated in eq. (S0.Ex6), we will consider here the left hand cut in a perturbative way, such that our $`N`$ function satisfies eq. (16) up to one loop, calculated at $`𝒪(p^4)`$. When no left hand cut is included one can always take : $$N(s)=1$$ (17) $$D(s)=\stackrel{~}{a}+\underset{i}{}\frac{R_i}{s-s_i}+g(s)\equiv K^{-1}(s)+g(s)$$ where each term of the sum is referred to as a CDD pole and $`g(s)`$ is given by $$g(s)=\frac{1}{16\pi ^2}\left(\mathrm{log}\frac{-s}{m_H^2}-a\right)$$ (18) In we prove that the form for $`D(s)`$ in eq. (17) has enough room to accommodate the exchange in the s-channel of S and P-wave resonances plus polynomial terms. <sup>2</sup><sup>2</sup>2In ref. the proof was restricted to the case when the polynomial terms come from the $`_2`$ chiral Lagrangian. 
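The mechanism by which the form of eq. (17) fulfils elastic unitarity to all orders can be seen with plain numbers: as long as Im g(s) = -1/(16π) for s > 0, eq. (12) holds whatever real values the CDD-pole part of D(s) takes. A minimal sketch (all numerical values below are arbitrary):

```python
import math

# Arbitrary real CDD-type contribution to D(s) at some fixed s > 0:
# a_tilde + R/(s - s_i), all real numbers chosen for illustration only
cdd_part = 0.35 + 2.1 / (1.0 - 0.6)

# g(s) with the imaginary part -1/(16*pi) required for s > 0 by eq. (15)
g = -0.08 - 1j / (16 * math.pi)

T = 1.0 / (cdd_part + g)   # N(s) = 1, T = N/D, eq. (17)

# Eq. (12): Im(1/T) = Im D = Im g = -1/(16*pi), independently of the real parts
assert abs((1.0 / T).imag + 1.0 / (16 * math.pi)) < 1e-12
# Equivalently eq. (11): Im T = |T|^2/(16*pi)
assert abs(T.imag - abs(T)**2 / (16 * math.pi)) < 1e-12
```

Only the imaginary part of g(s) is fixed by unitarity; the real parts (here picked at random) are what the CDD poles and the constant a parametrize.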
The extension of the proof to include also $`𝒪(p^4)`$ local terms can be done in a straightforward way. This is exactly the situation we have from eq. (S0.Ex6) after removing those terms, namely $`\mathrm{log}\frac{s}{m_H^2}`$ and $`\left[\mathrm{log}(1+\frac{s}{m_H^2})-\frac{s}{m_H^2}+\frac{s^2}{2m_H^4}\right]`$, which give rise to the left hand cut. In eq. (17) elastic unitarity is fulfilled to all orders in the chiral expansion. For $`s>16m_W^2`$ we have neglected the multi-$`W_L`$ channels, with four or more $`W_L`$. In fact, in eq. (S0.Ex6) only two $`W_L`$ appear in the loops, since the inclusion of four $`W_L`$ requires at least two loops, which is $`𝒪(p^6)`$. Expanding $`T(s)`$ from eq. (17) up to one loop at $`𝒪(p^4)`$ and comparing with eq. (S0.Ex6), without those terms responsible for the left hand cut given above, one has: $$K-K_2^2g(s)=\frac{s}{v^2}+\frac{3s^2}{2v^2(m_H^2-s)}-\frac{s^2}{1728\pi ^2v^4}\left(1673-297\sqrt{3}\pi \right)-\frac{s^2}{16\pi ^2v^4}\mathrm{log}\frac{-s}{m_H^2}$$ (19) where $`K_2`$ is the $`K`$ function at $`𝒪(p^2)`$. Hence: $$K(s)=\frac{s}{v^2}+\frac{3s^2}{2v^2(m_H^2-s)}-\frac{s^2}{1728\pi ^2v^4}\left(1673-297\sqrt{3}\pi +108a\right)$$ (20) In order to take into account the crossed channel contributions present in eq. (S0.Ex6) we write $$N(s)=1+\delta N(s)$$ (21) $$D(s)=K(s)^{-1}+g(s)\left(1+\delta N(s)\right)$$ In this way, the resulting $`T(s)=N(s)/D(s)`$ I=L=0 partial wave will satisfy unitarity to all orders. Expanding $`T(s)`$ up to one loop at $`𝒪(p^4)`$ and comparing the result with eq. (S0.Ex6), taking into account also eq. 
(19), one has: $$\delta N(s)=K^{-1}(s)\left[-\frac{7s^2}{288\pi ^2v^4}\mathrm{log}\frac{s}{m_H^2}+\frac{m_H^4}{v^2s}\left(\mathrm{log}(1+\frac{s}{m_H^2})-\frac{s}{m_H^2}+\frac{s^2}{2m_H^4}\right)\right]$$ (22) Hence, our final amplitude $`T(s)`$ will read $$T(s)=\frac{1+\delta N(s)}{K^{-1}(s)+g(s)\left(1+\delta N(s)\right)}=\frac{1}{\left[K(s)\left(1+\delta N(s)\right)\right]^{-1}+g(s)}$$ From eqs. (20) and (22) one has: $$K(s)\left(1+\delta N(s)\right)=\frac{s}{v^2}-\frac{s^2}{1728\pi ^2v^4}\left(1673-297\sqrt{3}\pi +108a+42\mathrm{log}\frac{s}{m_H^2}\right)+\frac{3s^2}{2v^2(m_H^2-s)}+\frac{m_H^4}{v^2s}\left(\mathrm{log}(1+\frac{s}{m_H^2})-\frac{s}{m_H^2}+\frac{s^2}{2m_H^4}\right)$$ In order to have the usual chiral power counting, beginning at $`𝒪(p^2)`$, for $`N`$ and $`D`$, we multiply both functions at the same time by $`K`$. This leaves unchanged their ratio, $`T(s)`$, and their cut structure. Thus, we will have: $$N(s)=K(s)\left(1+\delta N(s)\right)$$ (25) $$D(s)=1+g(s)N(s)$$ It is then easy to see that our $`N`$ function satisfies eq. (16) up to one loop calculated at $`𝒪(p^4)`$. In fact, at this level, $`\mathrm{Im}T_{Left}`$ is given by the imaginary part of eq. (S0.Ex10). The $`D`$ function satisfies eq. (15) identically. This expresses that our final amplitude $`T(s)`$, given in eq. (S0.Ex9), is unitary to all orders. It is interesting to compare eq. (S0.Ex9) with the Inverse Amplitude Method (IAM), which we have also used with great phenomenological success in meson-meson scattering . The IAM has been used as well in the strongly interacting Higgs sector in ref. . This method can be obtained easily from eq. (S0.Ex9) as a special case. 
To see this let us write: $$N(s)=K(s)(1+\delta N(s))=N_2+N_4+\mathrm{}$$ (26) with $`N_2`$ and $`N_4`$ the $`𝒪(p^2)`$ and $`𝒪(p^4)`$ contributions of the left hand side, respectively. Then, expanding $`\left(K(1+\delta N)\right)^{-1}`$ we have $$\frac{1}{K(1+\delta N)}=\frac{1}{N_2+N_4}=\frac{1}{N_2}-\frac{N_4}{N_2^2}+\mathrm{}$$ (27) Introducing this result in eq. (S0.Ex9), $`T(s)`$ reads $$T(s)=\frac{1}{\frac{1}{N_2}-\frac{N_4}{N_2^2}+g(s)}=\frac{N_2^2}{N_2-N_4+N_2^2g(s)}=\frac{T_2^2}{T_2-T_4}$$ (28) with $`T_2`$ the $`𝒪(p^2)`$ chiral amplitude and $`T_4`$ the $`𝒪(p^4)`$ contribution, which is given by $`N_4-N_2^2g(s)`$. The last expression in eq. (28) is the usual way in which the IAM approach for a partial wave amplitude is presented. Thus, the IAM results from our formalism as a special case, through the approximation given in eq. (27). Let us consider first the case $`m_H<<4\pi v\simeq 3`$ TeV. In order to see analytically what is going on, let us note that for $`s\lesssim m_H^2`$ the local terms in eq. (S0.Ex10) of $`𝒪(p^4)`$, divided by $`(4\pi v)^2`$, are much smaller than the ones with $`m_H^2`$ in the denominator. On the other hand, close to the bare pole, $`s\simeq m_H^2`$, the direct exchange in the s-channel of the resonance dominates over the crossed exchanges and hence we will have: $$K(1+\delta N)\simeq \frac{3m_H^4}{2v^2(m_H^2-s)}\equiv \frac{\alpha ^2}{m_H^2-s}$$ $$T(s)\simeq \frac{\alpha ^2/(2m_H)}{m_H-\sqrt{s}-\frac{i}{32\pi m_H}\alpha ^2}$$ (29) If one considers $`\alpha `$ as an arbitrary parameter, only for $`\alpha ^2=3m_H^4/(2v^2)`$ does one recover the SM. Eq. 
(29) corresponds to a Breit-Wigner resonance, coming from the tree level Higgs pole, with a mass $`m_H`$ and a width $$\mathrm{\Gamma }=\frac{\alpha ^2}{16\pi m_H}$$ (30) For the SM Higgs boson one obtains the lowest order prediction $$\mathrm{\Gamma }=\frac{3m_H^3}{32\pi v^2}$$ (31) which is much smaller than $`m_H`$ for $$m_H<<4\pi v\sqrt{\frac{2}{3\pi }}\simeq 1.5\text{TeV}$$ (32) In fact, eq. (29) makes sense only when $`\mathrm{\Gamma }<<m_H`$. In the former situation, if we applied the IAM we would obtain for $`s\simeq m_H^2`$ $$T(s)=\frac{6m_H^4/(11v^2)}{\frac{6m_H^2}{11}-s-i\frac{3m_H^4}{88\pi v^2}}$$ (33) which corresponds to a Breit-Wigner with a pole at $`\sqrt{s}=\sqrt{\frac{6}{11}}m_H`$ instead of $`m_H`$, as in eq. (29). Thus, the resummation done in eq. (27) for $`\left[K(1+\delta N)\right]^{-1}`$ has shifted the tree level pole from $`m_H`$ to $`\sqrt{\frac{6}{11}}m_H\simeq 0.74m_H`$. This possible source of inaccuracy of the IAM was already established in . In meson-meson scattering this does not occur because of Vector Meson Dominance in the vector channels and the leading role of unitarity together with coupled channels in the resonant scalar channels with I=0, 1 and 1/2 . A much more interesting case occurs for $`m_H>>4\pi v`$. In this case, for $`\sqrt{s}<<4\pi v`$, one can only retain in eq. (S0.Ex10) the first two terms, so that: $$K(s)(1+\delta N(s))\simeq \frac{s}{v^2}-\frac{s^2}{16\pi ^2v^4}\left(a+b+\frac{7}{18}\mathrm{log}\frac{s}{m_H^2}\right)$$ (34) In the former equation the $`𝒪(p^2)`$ term is independent of the underlying fundamental theory and is fixed by symmetry and the experimental value of $`v`$. The same happens for the coefficient $`7/18`$ in front of $`\mathrm{log}\frac{s}{m_H^2}`$, since it is given by loops at $`𝒪(p^4)`$ in the crossed channels with the $`𝒪(p^2)`$ amplitude at the vertices. 
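The algebraic reduction to the IAM, eqs. (26)-(28), can be verified symbolically; in this sketch N₂, N₄ and g are treated as free symbols:

```python
import sympy as sp

N2, N4, g = sp.symbols('N2 N4 g')

# Eq. (27): expand 1/(N2 + N4) to first order in the O(p^4) piece N4
expansion = sp.series(1 / (N2 + N4), N4, 0, 2).removeO()
assert sp.simplify(expansion - (1 / N2 - N4 / N2**2)) == 0

# Eq. (28): with T2 = N2 and T4 = N4 - N2^2 g, the truncated N/D amplitude
# is exactly the Inverse Amplitude Method form T = T2^2/(T2 - T4)
T_truncated = 1 / (1 / N2 - N4 / N2**2 + g)
T2 = N2
T4 = N4 - N2**2 * g
assert sp.simplify(T_truncated - T2**2 / (T2 - T4)) == 0
```

This also makes manifest the remark below eq. (36): in the combination T₂²/(T₂ - T₄) any constant entering g only through T₄ = N₄ - N₂²g appears in a single place, so the arbitrary subtraction constant a cancels at this order.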
However, the coefficients $`a`$ and $`b`$ depend on the underlying theory, although one expects them to be of $`𝒪(1)`$ since they are normalized to the relevant scale coming from loops, $`4\pi v`$ . In the case of the SM one has from eq. (S0.Ex10): $$b=\frac{1673-297\sqrt{3}\pi }{108}\simeq \frac{1}{2}$$ (35) However, the coefficient $`a`$ in eqs. (S0.Ex10) and (34) is not fixed by the $`𝒪(p^4)`$ chiral perturbation result, since at this order it cancels with the contributions proportional to $`a`$ from the $`g(s)`$ function, eq. (18). Substituting eq. (34) in eq. (S0.Ex9), $`T(s)`$ then reads $$T(s)\simeq \left[\left(\frac{s}{v^2}-\frac{s^2}{16\pi ^2v^4}\left(a+b+\frac{7}{18}\mathrm{log}\frac{s}{m_H^2}\right)\right)^{-1}+\frac{1}{16\pi ^2}\left(\mathrm{log}\frac{-s}{m_H^2}-a\right)\right]^{-1}$$ (36) On the other hand, for $`a`$ and $`b`$ of $`𝒪(1)`$ and with $`s\lesssim 1`$ TeV<sup>2</sup>, well below $`(4\pi v)^2`$, the approximation given in eq. (27) is numerically accurate and we recover the IAM result, eq. (28). As a consequence, our results from eqs. (S0.Ex9) and (S0.Ex10) coincide in this case with those already derived in making use of the IAM. The most interesting conclusion is that for $`m_H\to \mathrm{\infty }`$ there is a pole in the I=L=0 partial wave amplitude, with a mass and width that vanish as $`m_H\to \mathrm{\infty }`$. Note also that in eq. (28) the dependence on $`a`$ disappears. In order to understand properly the previous result one should keep in mind that, in the present case, $`m_H`$ is just a scale below which the only degrees of freedom are the Goldstone bosons. That is, there are no bare resonances (elementary heavy fields) with any quantum numbers and masses below $`m_H`$. Taking this into account, we can compare our result with the ones of ref. . In this reference, the IAM is applied to study the resonance spectrum of the strongly interacting Higgs sector as a function of $`L_1^r(\mu )`$ and $`L_2^r(\mu )`$ with $`\mu =1`$ TeV. 
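The dynamically generated state can be exhibited numerically from eq. (36). The sketch below uses a = 1 and b = 1/2 as in Fig. 1 and, as its own assumption, continues the unitarity logarithm in g(s) as log(s/m_H²) - iπ for s > 0, so that Im g = -1/(16π) as unitarity requires:

```python
import numpy as np

v = 0.25         # TeV
m_H = 150.0      # TeV: the very heavy case of the dashed curve in Fig. 1
a, b = 1.0, 0.5  # O(1) constants; b ~ 1/2 as in eq. (35), a = 1 as in Fig. 1

def T(s):
    """Sketch of eq. (36) for m_H >> 4*pi*v (s in TeV^2, s > 0)."""
    L = np.log(s / m_H**2)
    K_tilde = s / v**2 - s**2 / (16 * np.pi**2 * v**4) * (a + b + (7.0 / 18.0) * L)
    g = (L - 1j * np.pi - a) / (16 * np.pi**2)   # Im g = -1/(16*pi)
    return 1.0 / (1.0 / K_tilde + g)

sqrt_s = np.linspace(0.4, 1.5, 2000)             # TeV
sin2_delta = np.abs(T(sqrt_s**2) / (16 * np.pi))**2
peak = sqrt_s[np.argmax(sin2_delta)]
# a narrow peak shows up below ~1 TeV although no bare pole was put in
print(f"peak of |T/(16 pi)|^2 at sqrt(s) ~ {peak:.2f} TeV "
      f"(height {sin2_delta.max():.2f})")
```

The peak arises purely from the iteration of the lowest order s/v² term; removing the Higgs pole pieces altogether, as discussed below, does not make it disappear.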
The case we are considering, $`m_H>>4\pi v`$, corresponds to those values of the $`L_i^r`$ such that the underlying theory has no bare resonances below $`m_H`$. For instance, from eq. (7) we see that, for $`m_H>>4\pi v`$ and $`\mu =1`$ TeV, $`L_1^r`$ and $`L_2^r`$ are positive and large ($`L_2^r\simeq 2L_1^r`$)<sup>3</sup><sup>3</sup>3For $`m_H=150`$ TeV, as considered in Fig. 1, one obtains from eq. (7) $`L_1^r=0.0024`$ and $`L_2^r=0.0043`$, with $`\mu =1`$ TeV. As one can see in Fig. 4 of ref. , there is a rather narrow pole with a mass below 1 TeV in the I=L=0 channel for these values of the counterterms. This result is of course in agreement with the content of Fig. 1 of the present work.. In fact in ref. , for this region of values of the counterterms, one finds low mass and narrow poles with I=L=0 and no resonances in the other channels (I=L=1 and I=2, L=0). In Fig. 1 we show three curves corresponding to $`|T(s)/(16\pi )|^2=(\mathrm{sin}\delta (s))^2`$, with $`\delta (s)`$ the phase of $`T(s)`$, eq. (S0.Ex9). Three different values for the parameter $`m_H`$, with $`b`$ given in eq. (35) and $`a=1`$, are considered. These curves are rather stable under changes of $`a`$ and $`b`$ of $`𝒪(1)`$, for instance in the interval $`[-1,1]`$. The solid line is for $`m_H=0.9`$ TeV and one sees, for $`\sqrt{s}\lesssim 1.5`$ TeV, the presence of a clear resonance located around $`m_H`$, as already discussed above in the case $`m_H<<4\pi v`$. However, the dashed line, with $`m_H=150`$ TeV, also shows a clear signal for a narrow resonance at $`\simeq 1`$ TeV, with a shape around the pole position rather similar to that for $`m_H=0.9`$ TeV. This pole with a mass below 1.5 TeV begins to appear for $`m_H\simeq 10`$ TeV, as already stated in . The dotted-dashed line in fact corresponds to $`m_H=11`$ TeV and one sees clearly a bump corresponding to this pole. However, one has to realize that this state appears without the tree level pole of the Higgs boson, as is expected from eq. 
(36) and as we have explicitly checked from eqs. (S0.Ex9) and (S0.Ex10) by removing in the last equation the last line. As a consequence, this state does not originate from the tree level Higgs pole and is not a Higgs boson responsible for the spontaneous electroweak symmetry breaking. It is just a consequence of the strong interactions between the $`W_L`$ bosons, giving rise to a $`W_LW_L`$ resonance, much as the deuteron is a bound state of a proton and a neutron. For $`m_H\sim 4\pi v`$ the physics involved is much more dependent on the values of the parameters $`a`$ and $`b`$, and both the tree level pole of the Higgs and the rest of the counterterms are similarly important, with large interferences. In general, as suggested by eq. (31), the Higgs is very wide, with a width as large as or larger than its mass. This is in fact what happens, for instance, for $`b`$ given by eq. (35), $`m_H=3`$ TeV and $`a=0`$, where a pole coming from the Higgs tree level pole appears around $`(1.2+i\mathrm{\hspace{0.17em}0.6})`$ TeV in the unphysical sheet. However, for certain values of $`a`$ of $`𝒪(1)`$, for instance $`a=2`$, one sees two poles, at $`(4+i\mathrm{\hspace{0.17em}0.04})`$ and $`(1.2+i\mathrm{\hspace{0.17em}0.7})`$ TeV. As a consequence, the deviations from the light Higgs situation can be very large, and simple Breit-Wigner pictures from bare poles are not adequate in this case, since the naive formula given in eq. (31) gives a width that for $`m_H>1.5`$ TeV is larger than $`m_H`$. This is a clear signal that the situation cannot be reduced to such simple terms, which are valid only for narrow resonances ($`\mathrm{\Gamma }<<m_H`$). In ref. the N/D method is also used to study the strongly interacting Higgs sector. We reproduce their results with a suitable choice of the constant $`a`$. However, we would like to indicate that, while we maintain here the chiral power counting and recover the full chiral amplitude up to $`𝒪(p^4)`$, this is not the case in . 
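The arithmetic behind the narrow-width criterion just invoked, eqs. (31)-(32), is easy to reproduce; the sketch below uses v = 1/4 TeV as in the text:

```python
import math

v = 0.25  # TeV, v = (sqrt(2) G_F)^(-1/2) ~ 1/4 TeV

def gamma_higgs(m_H):
    """Lowest order SM width, eq. (31): Gamma = 3 m_H^3 / (32 pi v^2)."""
    return 3 * m_H**3 / (32 * math.pi * v**2)

for m_H in (0.9, 1.5, 3.0):
    print(f"m_H = {m_H:.1f} TeV -> Gamma = {gamma_higgs(m_H):.2f} TeV")

# Gamma equals m_H exactly at m_H = 4*pi*v*sqrt(2/(3*pi)), eq. (32);
# beyond this point the Breit-Wigner picture of eq. (29) breaks down
m_crit = 4 * math.pi * v * math.sqrt(2.0 / (3.0 * math.pi))
print(f"Gamma = m_H at m_H = {m_crit:.2f} TeV")
```

For m_H well below the critical value the width is a small fraction of the mass, while already at m_H = 3 TeV the lowest order formula gives a width several times larger than the mass, which is why the simple resonance picture fails there.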
Another difference is the way in which the left hand cut is treated. We have included it in a chiral loop expansion, while in that reference the left hand cut is fixed a priori to be given by the crossed Higgs exchange. ## Conclusions Making use of the N/D method and the electroweak effective chiral Lagrangians, we have studied the properties of strong $`WW`$ scattering in the scalar channel. This approach gives rise to physical, fully unitarized amplitudes respecting the low energy symmetry constraints. In particular, it is able to describe resonances as poles in the unphysical sheet. In this way, we have also paid special attention to the nature of the resonances relevant within the LHC energy range. There are two clearly distinct cases. For $`m_H<<4\pi v`$ the amplitudes are dominated by the tree level Higgs boson pole. For $`m_H>>4\pi v`$, although the previous tree level Higgs pole disappears for the energies considered, there is another physical pole with $`\sqrt{s}<1.5`$ TeV. The shape of this resonance can be very similar to that of the first case. However, the nature of this second pole is completely different, since it corresponds to a dynamical $`WW`$ resonance and hence is not responsible for the spontaneous breaking of the symmetry. We have also seen that this conclusion is stable under changes of the higher order chiral parameters. Thus, it should be a common feature of any underlying theory responsible for the electroweak symmetry breaking with unbroken custodial symmetry and in which heavier particles appear with a mass $`m_H>>4\pi v`$. ## Acknowledgments I would like to acknowledge a critical reading and fruitful discussions with J. R. Peláez, E. Oset and M. J. Vicente-Vacas. This work has been supported by an FPI scholarship of the Generalitat Valenciana and partially by the EEC-TMR Program, Contract No. ERBFMRX-CT98-0169.